Initial Validation of Cytokine Measurement by ELISA in Canine Feces
Measurement of fecal cytokines has been used as a marker of intestinal inflammation in people and correlates with endoscopic findings. The aim of this study was to evaluate the use of canine-specific enzyme-linked immunosorbent assays (ELISAs) for quantification of cytokines in canine fecal samples as a non-invasive biomarker. Interleukin (IL)-6, -8, -10, -23/12p40 and TNF-α were assessed using spiked fecal samples from 3 healthy dogs. Standard curve validation was performed, and the impact of time to freeze, duration of storage and number of freeze-thaw cycles on cytokine concentration was also examined. All the cytokines assayed could be detected, with varying accuracy. The mean coefficient of variation (CV) for all standard curves ranged from 2.95% to 9.80%. The mean intra-assay CV ranged from 3.10% to 11.14%, and the inter-assay CV from 4.36% to 18.83%. Recovery of IL-23 was poor (7.23%-17.12%), precluding further interpretation of its stability studies. Mean recovery did not appear to be affected by time to freeze or repeat freeze-thaw cycles for any of the cytokines investigated. After short-term storage for 30 days at −80˚C, recovery for all cytokines was <70% or >130%. In conclusion, although fecal IL-6, -8, -10, and TNF-α could be used as biomarkers of intestinal inflammation in the dog, variable laboratory performance and poor recovery at lower concentrations limit their application. Bench-top and freeze-thaw stability was acceptable, and samples should ideally be analyzed within a week. Investigation involving dogs with acute and chronic inflammatory intestinal disease is required to determine the role of this methodology in a clinical setting.
Introduction
Existing diagnostic procedures to identify intestinal inflammation in dogs are generally expensive or invasive. Histopathology is currently considered the gold standard for diagnosis of active intestinal inflammation, with biopsies obtained endoscopically or surgically. Biopsies, however, are subject to intra- and inter-observer variation and may be of variable sample quality [1][2][3]. Additionally, in clinical gastroenterology the aim of treatment is to induce remission, which is defined by the American Food and Drug Administration as "an absence of inflammatory symptoms in conjunction with evidence of mucosal healing" [4]. Repeated endoscopy and biopsies are not always permitted in veterinary practice, and current laboratory tests are unable to establish clearly when remission is reached. Disease activity indices aid in monitoring the progress of patients, but they have subjectivity in their scoring and may be influenced by the presence of co-morbidities [5][6][7].
Fecal biomarkers are a heterogeneous group of substances that leak from, or are generated by, inflamed intestinal mucosa [7]. An ideal fecal biomarker should be non-invasive, reproducible, sensitive, and specific, with clear reference intervals able to distinguish between normal and diseased dogs. A variety of potential markers have been assessed in dogs to date, including calprotectin, alpha1-proteinase inhibitor and S100A12 [8]. In people, fecal excretion of 111-Indium-labeled leukocytes currently serves as the gold standard fecal marker of inflammation [7,8]. However, its use, along with other techniques such as fecal excretion of 51-Chromium-labeled red cells or radio-labeled proteins, has not been widely adopted in the medical or veterinary field due to issues of radiation exposure and the need for fecal collection over 4 days [7,8]. Other fecal markers measure gastrointestinal protein loss, such as alpha1-proteinase inhibitor in dogs, which is protected from intestinal proteases [9,10]. However, these markers are not specific for inflammatory disease [7,11,12]. Furthermore, these fecal tests are not widely available [13].
Cytokines are effector proteins that regulate immunity and are potential biomarkers for diagnostic and therapeutic monitoring, being more reflective of general inflammation. Fecal cytokines such as Tumor Necrosis Factor-alpha (TNF-α) have been shown to be a useful marker of disease activity in people with inflammatory bowel disease (IBD) and to correlate well with endoscopic findings [14,15]. Fecal measurement of the anti-inflammatory cytokines interleukin (IL)-4 and -10 has been shown to increase with clinical resolution of IBD [16], whilst both IL-2 and interferon (IFN)-γ have been shown to significantly increase in people with Norovirus-associated diarrhea [17]. In addition, IL-8 and IL-1 were increased in some patients with enteroaggregative Escherichia coli infection [18].
It is thought that different factors, e.g., bacterial pathogen-associated molecular patterns or activation of toll-like receptors, may incite different combinations of cytokines, which are predominantly T helper (Th)-1 (e.g., IL-1, -8, TNF-α) or Th-2 (IL-6, IL-10) mediated [19]. Recent studies in humans have shown that a distinct subset of T helper cells (Th17) drives inflammation and pathology in the human gut. The mechanism by which this occurs is unknown, but it is thought to involve a milieu of cytokines including IL-17 and -23 [20,21]. The aim of this study was to investigate the use of enzyme-linked immunosorbent assays (ELISAs) for detection of IL-6, -8, -10, -23/12p40 and TNF-α in canine fecal samples. As this bioanalytical method has been validated for use with other matrices in the same species, only a partial validation was performed [22].
Samples
Assay validation was performed on fecal samples collected immediately after voiding from three healthy staff-owned dogs of the following breeds: Japanese spitz (female, neutered), Border collie (male, neutered), and Golden retriever (male, neutered). All three dogs were between 2 and 3 years of age and had no history of gastrointestinal signs or weight loss in the 2 months prior to sample collection. All samples were collected in the morning. No medications, including NSAIDs, antibiotics or corticosteroids, had been administered for at least 3 months prior to sample collection, apart from worming prophylaxis.
Collection and Processing of Fecal Samples
Fecal samples were collected and kept at 2˚C-4˚C until processed, within an hour of submission. Samples were divided into 1 g aliquots and placed in polypropylene tubes with 5 mL of protease inhibitor cocktail P83401 diluted 1:100. The mixture was vortexed for one minute or until the sample was thoroughly homogenized. Samples were then centrifuged at 1200-1500 RCF for 5 minutes at 4˚C, and 0.5 mL aliquots of the supernatant were separated into polypropylene tubes, stored on dry ice, then at −80˚C until assayed. Samples were kept on ice at all other times during processing. Undiluted supernatant samples were spiked with moderately low levels of each cytokine as the validation sample (VS) for validation studies. All three VSs were included with each validation run and were analyzed in duplicate. Assays were performed over several days, with no more than one run per day per ELISA being evaluated. All assays were performed by a single operator (NP).
IL-6, -8, -10 and TNF-α Assays
Canine immunoassays for IL-6, -8, -10 and TNF-α were used according to the manufacturer's instructions, with the exception of overnight incubation of fecal samples at 4˚C to increase sensitivity at the lower limit of detection (LLOD). All samples were run in duplicate.
IL-12/23p40 Assay
An IL-23 immunoassay was developed from a canine IL-12/23p40 assay development kit 3. The assay employs the quantitative sandwich enzyme immunoassay technique and is performed on a 96-well flat-bottom, high-binding plate 4. The plate was coated with 100 µL/well of goat anti-canine IL-12/23p40 as the capture monoclonal antibody and incubated overnight at 4˚C. Subsequent steps were carried out at room temperature.
The blocking agent and reagent diluent were 1% BSA in phosphate-buffered saline, the wash buffer was 0.05% Tween 20 in phosphate-buffered saline, and the substrate solution was a 1:1 mixture of hydrogen peroxide and tetramethylbenzidine. Serial two-fold dilutions were performed on a 4000 pg/mL recombinant canine IL-12/23p40 standard to generate an eight-point curve. A biotinylated goat anti-canine IL-12/23p40 antibody and streptavidin were used to produce a color change proportional to the amount of IL-12/23p40 bound in the initial step. The reaction was stopped with 2N sulphuric acid and the optical density of each well was read immediately at 450 nm and 540 nm.
Standard Curve Validation
All standard curves were prepared using the supplied cytokine standards, which were reconstituted with the reagent diluent to produce a two-fold dilution series on the day of the assay. This produced at least six non-zero standards (excluding blank and anchor points). Back-calculations were obtained from six standard curves performed for each cytokine. ELISA validation included description of the standard curves, with the CV of the back-calculated values and the square of Pearson's correlation coefficient (R²) calculated for each of the cytokines assayed. Acceptance criteria required that the CVs for at least 75% of the calibration standards be <20% [23].
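To illustrate the back-calculation step, the following minimal sketch fits a standard curve and computes the CV of the back-calculated concentrations. The four-parameter logistic (4PL) model and all numbers shown are assumptions for illustration only; the paper does not state which curve-fitting model was used.

```python
# Minimal sketch (not the authors' code): back-calculation of standard
# concentrations from a fitted four-parameter logistic (4PL) curve and the
# CV of those back-calculated values across replicate readings.
# The 4PL model and the example numbers are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL response at concentration x (a = min response, d = max, c = EC50, b = slope)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Back-calculate concentration from an observed optical density."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical two-fold dilution series (pg/mL) and duplicate optical densities
conc = np.array([1000, 500, 250, 125, 62.5, 31.25])
od = np.array([[2.11, 1.70, 1.19, 0.71, 0.39, 0.21],
               [2.07, 1.66, 1.22, 0.69, 0.41, 0.20]])

params, _ = curve_fit(four_pl, np.tile(conc, 2), od.ravel(),
                      p0=[0.05, 1.0, 200.0, 2.5])

back_calc = inverse_four_pl(od, *params)                       # shape (2, 6)
cv = 100 * back_calc.std(axis=0, ddof=1) / back_calc.mean(axis=0)
print("back-calculated CV (%) per standard:", np.round(cv, 2))
print("standards with CV < 20%:", np.mean(cv < 20) * 100, "% (criterion: >=75%)")
```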
The upper limit of detection (ULOD) was taken as the highest standard concentration measured for that cytokine. Based on previous studies of fecal cytokines, sample analyte concentrations in biological samples are not expected to exceed the ULOD [13,17,24].
Limits of Detection
The LLOD was calculated using the mean and standard deviation of the absorbance of the blank samples (assay diluent) to define the lowest concentration of fecal cytokine that can be reliably distinguished. Following the manufacturer's guidelines, the LLOD was determined by adding two standard deviations to the mean optical density of the zero standard replicates and calculating the corresponding concentration according to Equation (1) below:

LLOD (pg/mL) = [2 × S.D. of absorbance of the zero standard (0 pg/mL) / (absorbance of the lowest standard − absorbance of the zero standard)] × concentration of the lowest standard (pg/mL)    (1)

All three VSs were used to calculate the intra-assay and inter-assay precision. The nominal spiked concentrations can be found in Table 1. Four pairs of each sample were run within the same assay, as well as in duplicate on three separate days. Inter- and intra-assay CVs were used as criteria to validate the precision of the ligand-binding assays, with a CV of <25% deemed acceptable [23].
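As a worked example of Equation (1) and the precision criteria, the sketch below computes an LLOD and intra-/inter-assay CVs from hypothetical optical densities and VS results; none of the numbers are from the study.

```python
# Minimal sketch (illustrative numbers only): LLOD from Equation (1) and
# intra-/inter-assay CV of a validation sample, as described in the text.
import numpy as np

# Hypothetical zero-standard (blank) optical densities and lowest-standard data
blank_od = np.array([0.041, 0.045, 0.043, 0.040])
lowest_std_od = 0.180            # mean OD of the lowest non-zero standard
lowest_std_conc = 31.25          # pg/mL, lowest non-zero standard

# Equation (1): concentration corresponding to mean blank OD + 2 SD
llod = (2 * blank_od.std(ddof=1) / (lowest_std_od - blank_od.mean())) * lowest_std_conc
print(f"LLOD = {llod:.2f} pg/mL")

def cv(values):
    values = np.asarray(values, dtype=float)
    return 100 * values.std(ddof=1) / values.mean()

# Hypothetical VS results: four duplicate pairs within one run (intra-assay)
intra = [148, 152, 145, 150, 155, 149, 151, 147]
# Hypothetical VS means from runs on three separate days (inter-assay)
inter = [149.6, 142.3, 158.1]

print(f"intra-assay CV = {cv(intra):.1f}%  (acceptance: <25%)")
print(f"inter-assay CV = {cv(inter):.1f}%  (acceptance: <25%)")
```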
Recovery
Undiluted supernatant samples from all three dogs were used as baselines for the spike-and-recovery experiments. These samples were spiked with low, moderate and high concentrations of the respective recombinant cytokines. The spiking levels were determined from the assay detection limits, with the low spike being two-fold above the LLOD and the high spike two-fold below the ULOD.
Recovery was quantified as a comparison of an observed (assayed) result to its theoretical true value, expressed as a percentage of the nominal (theoretical) concentration (Equation (2)): recovery (%) = (observed concentration / nominal concentration) × 100. A recovery of 75%-125% was deemed acceptable [23,25].
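The following short sketch shows how the spike levels and Equation (2) could be applied; the detection limits and assayed values are hypothetical, not results from the study.

```python
# Minimal sketch of the spike-and-recovery calculation (Equation (2));
# the LLOD/ULOD values and assay results below are hypothetical.
llod, ulod = 15.6, 1000.0          # pg/mL, assay detection limits (assumed)
spikes = {"low": 2 * llod, "moderate": 250.0, "high": ulod / 2}

# Hypothetical assayed concentrations after spiking a baseline supernatant
observed = {"low": 21.8, "moderate": 231.0, "high": 472.5}
baseline = 0.0                     # native samples were below detection

for level, nominal in spikes.items():
    recovery = 100 * (observed[level] - baseline) / nominal
    ok = 75 <= recovery <= 125
    print(f"{level:8s} spike {nominal:7.1f} pg/mL -> recovery {recovery:6.1f}% "
          f"({'acceptable' if ok else 'outside 75-125%'})")
```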
Stability Studies
Short-term storage at room temperature and at −80˚C, as well as freeze-thaw stability, was assessed [23,25,26]. Bench-top stability was assessed at room temperature for up to 3 hours, and short-term storage stability at −80˚C for 1 month. The acceptance criterion was defined as a mean recovery of 70%-130% [23] compared to the respective reference baseline samples, i.e., recovery = (assayed concentration of the test sample / assayed concentration of the baseline sample) × 100. Freeze-thaw stability was assessed for up to three cycles. Freeze-thaw intolerance was defined as a recovery of <70% compared to original concentrations.
Results
An overview of the ELISA validation results for IL-6, -8, -10, -23 and TNF-α is shown in Table 1. All cytokines could be detected, with varying accuracy. The mean CV of the standard curves of IL-6, -8, -10, -23, and TNF-α ranged from 2.95% to 9.80%. The R² obtained for the standard curves derived from each assay ranged from 0.960 to 0.999 (mean 0.988). The mean intra-assay CV ranged from 3.10% to 11.14%, and the inter-assay CV from 4.36% to 18.83%. The recoveries of the low spike for IL-8 and TNF-α were 66.38% and 65.76%, respectively. There was also poor recovery of IL-23 from fecal samples spiked with IL-23, with a recovery of 7.23%-17.12%. This precluded any further interpretation of the test results from the bench-top, storage and stability studies for that particular cytokine. The recovery for all other concentrations in the spike-and-recovery study was more than 75%.
Results for all bench-top and freeze-thaw stability studies fell within the acceptance criterion of recovery between 70% and 130%. The mean recovery ranged from 78.92% to 95.95% for samples left for 3 hours on the bench-top at room temperature. The mean recovery ranged from 73.80% to 94.38% for samples that were subjected to three freeze-thaw cycles. Finally, the samples were found to be intolerant to storage at −80˚C for one month, with all cytokines assayed having a mean recovery of <70% or >130% (range 28.29%-133.17%).
Discussion
There have been several studies using semi-quantitative methods and real-time reverse transcriptase polymerase chain reaction in intestinal biopsy samples to investigate the role of cytokines in mediation of chronic intestinal inflammation in dogs [27][28][29]. Initial studies showed increased expression of transcripts encoding IL-2, -5, -12p40, TNF-α and Transforming Growth Factor-β in dogs with inflammatory bowel disease [30,31], but more recent investigations into intestinal cytokine expression have shown no difference between diseased and control samples [27,32]. The choice of cytokines assayed in this study was based on previous fecal cytokine studies in people with acute or chronic enteropathies, assessing both Th1 and Th2 subsets [14,16,18]. In addition, IL-23, produced by the distinct subset of T helper cells (Th17), was also included, due to its role in driving inflammation and pathology in the gut [33][34][35].
Studies have shown undetectable or low concentrations of cytokines in plasma and serum samples from healthy subjects [36,37]. Concentrations of IL-2, -4, -5, -10, TNF-α, IFN-α and IFN-γ documented in pathogen-induced diarrhea ranged from 0.0 to 51.4 pg/mL [17]. As preliminary results showed undetectable concentrations of the cytokines assayed in native samples, and fecal cytokine concentrations in diseased dogs have not been documented to date, it was decided that our VSs should be spiked with moderate concentrations of the respective cytokines to mimic concentrations found in human diseased states. The assays were then evaluated based on these VSs for reproducibility and accuracy.
From the results, the reproducibility of the standard curves in the assays was acceptable. An increase in standard deviation and CV at the lower concentrations was noted. Higher variation in the assay at lower cytokine concentrations is not unexpected [23,36]. However, the clinical implication of this observation is unknown, as the presence and degree of fecal cytokine aberrations in dogs with acute and chronic gastrointestinal disease has not yet been documented. Assay accuracy may be affected if concentrations occur at the lower end of the standard curve. Assay precision otherwise appears to be adequate and meets the recommended criteria of <20% for intra-assay CV and <25% for inter-assay CV for validation of ligand-binding assays [25,38].
In this study, there were significant inconsistencies in cytokine recovery for the low spikes of IL-8 and TNF-α. The recovery of IL-23 for all spikes was markedly poor and precluded further interpretation of its stability studies. The poor recoveries observed may be due to proteases that were not inactivated by the protease-inhibitor cocktail at initial sample handling, or to the presence of nonspecific inhibitors. Both of these effects would be more apparent at lower cytokine concentrations. Unfortunately, the effects of other protease inhibitors were not assessed as part of our investigations. Depending on the working concentrations of IL-8 and TNF-α in a clinical setting, these observations may limit the capacity of these ELISAs to be used as quantitative tests. However, they may still prove useful as qualitative or semi-quantitative measures of disease activity.
Current recommendations for cytokine measurement are to process and freeze plasma/serum samples within an hour of collection [36]. In this study, recovery was still within the acceptable limits of 70%-130% in samples left for three hours at room temperature. Sample stability was also deemed acceptable for up to three freeze-thaw cycles.
Finally, the authors investigated the stability of cytokines over short-term storage. In the clinical and experimental setting, stability over longer-term storage is desirable to guarantee confidence in the results obtained. Most cytokines in serum have been shown to be stable for up to 2 years, although IL-6 and IL-10 degraded by up to 50% of baseline values within 2-3 years at −80˚C [36]. Due to the unpredictable effect of proteases in fecal samples, it was decided to re-assay the samples at a 1-month time point, at which none of the cytokines assayed fulfilled the acceptance criterion of a recovery between 70% and 130%. As all other validation assays were performed within a week of collection and processing, the authors recommend that assays be performed within that time frame. Human studies in which fecal TNF-α was measured or undetected have unfortunately not specified their respective storage times before assay, precluding comparison of performance [17,25].
There are multiple limitations to the validation study performed, the small number of subjects being the major one. Parallelism was also not proven, given the negligible cytokine concentrations in the neat samples assayed. However, the authors decided to proceed with validation, as higher endogenous levels of the cytokines are expected in diseased samples. Also, although a spiked sample of the biological matrix was used as a VS to mimic clinical samples, a recombinant spike may behave differently from the endogenous protein with respect to assay performance. Finally, given the unknown endogenous working range, a second VS of high concentration should also have been assayed as part of the validation study [23].
Conclusion
In summary, detection of fecal cytokines (IL-6, -8, -10, and TNF-α) by ELISA may be of use as a non-invasive biomarker of inflammation in the dog; however, IL-12/23p40 could not be reliably measured. From the data in this study, the authors propose that clinical fecal samples be processed as soon as possible, ideally within an hour of sample collection, and analyzed within a week. This study provides preliminary information for research in dogs with inflammatory intestinal disease. Further investigation is needed to determine whether fecal cytokines correlate with clinical signs as a predictor of disease.
Multi-Depot Vehicle Routing Optimization Considering Energy Consumption for Hazardous Materials Transportation
Focusing on the multi-depot vehicle routing problem (MDVRP) for hazardous materials transportation, this paper presents a multi-objective optimization model to minimize total transportation energy consumption and transportation risk. A two-stage method (TSM) and a hybrid multi-objective genetic algorithm (HMOGA) are then developed to solve the model. In its first stage, the TSM finds the set of customer points served by each depot through a global search clustering method that considers transportation energy consumption, transportation risk, and depot capacity; in its second stage, it determines the service order of the customer points of each depot using a multi-objective genetic algorithm, with the banker method used to construct the nondominated set and the gather distance used to maintain the population distribution. With the HMOGA, the customer points serviced by each depot and their service orders are optimized simultaneously. Finally, through experiments on two cases with three depots and 20 customer points, the results show that both methods can obtain a Pareto solution set, and that the hybrid multi-objective genetic algorithm is able to find better vehicle routes over the whole transportation network. Compared with distance as the optimization objective, when energy consumption is the optimization objective, although distance increases slightly, the number of vehicles and the energy consumption are effectively reduced.
Introduction
Hazardous materials can be toxic, corrosive, flammable, and explosive, yet they are necessary for the development of industry and agriculture. How to choose one or more transportation routes for hazardous materials with low energy consumption and low risk has therefore become an important practical question for ensuring personal, property, and environmental safety as well as economic sustainability.
According to statistics, the daily amount of hazardous materials being transported in China exceeds 1 million tons, and the total amount per year exceeds 400 million tons [1]. These large quantities of hazardous materials form a dangerous flowing source on the roads. If an accident occurs, it could cause damage to nearby people and the surrounding environment. Alp et al. [2] studied the effects of meteorologic conditions and wind direction probabilities on the consequences of accidents, and proposed a three-integral mathematical evaluation model for calculating individual and social risks. Leonelli et al. [3] proposed a new personal and social risk assessment model, introducing risk factors such as transportation mode, hazardous materials category, meteorologic conditions, wind direction probability, and seasonal attributes, and considered the impact of population distribution, which greatly improved the accuracy of risk assessment for hazardous materials transportation. Based on a statistical analysis of accidents, Fabiano et al. [4] discussed risk factors from the aspects of road characteristics, weather conditions, and traffic conditions, and proposed a risk assessment model for the hazardous materials transportation process at accident sites. Based on the route segment-specific (location-specific) accident rate, Chakrabarti et al. [5] estimated route segment total risk to measure the average number of persons likely to be exposed to all possible consequence scenarios by computing and comparing the loss of containment and spillage probabilities for different route segments. Taking the length and quality of roads, the population density in different segments of the roads, types of hazardous materials, and types of vehicles into consideration, Faghih-Roohi et al. [6] proposed a dynamic value-at-risk model. Mohammadi et al. [7] designed a reliable hazardous materials transportation network based on four factors: distance from the incident source, number of people exposed to the risk, number of hazardous materials shipments passing that segment, and the characteristics of the road type. With the rapid development of Geographic Information System (GIS) technology, it has also been applied to risk assessment for hazardous materials transportation (Bubbico et al. [8], Sahnoon et al. [9], Chen et al. [10]). Visual map display and data analysis functions provide convenience and improve accuracy for risk assessment during the transportation of hazardous materials. It can be seen that scholars have made many contributions to the problem of transporting hazardous materials. The risk of transporting hazardous materials is related to many uncertain factors and is difficult to measure accurately, but it is still a factor that must be considered.
In addition to risk, energy consumption is also an important factor to consider in reducing the transportation costs of enterprises, reducing carbon emissions, and achieving sustainable development of hazardous materials transportation. There are many factors that affect transportation energy consumption, but they can be roughly classified into categories such as vehicle construction, transportation environment, and driver strategies. Factors influencing fuel consumption have been studied by Bigazzi et al. [11], Demir et al. [12], and Suzuki [13]. Trucks have to overcome various obstacles while transporting hazardous materials, such as speed, road gradient, and payload. These are key factors that affect energy consumption. Bektaş and Laporte [14] analyzed the relationship between vehicle load, speed, and total cost, and proposed that minimizing the cumulative load alone does not necessarily lead to energy minimization, particularly when there are time window restrictions. Demir et al. [15] derived an optimum driving speed and showed that reductions in emissions could be achieved by varying speed over a network. It should be noted that the optimum driving speed varies to a certain degree between geographic areas due to speed limits and traffic density. Given the fact that the shortest distance may not be the optimal solution for the purpose of lowering fuel consumption, Xiao et al. [16] developed route schedules with lower fuel costs by better managing the trade-off between the total distance and the priority of serving customers with larger demands. Suzuki [17] indicated that significant savings in fuel consumption may be realized by delivering heavy items in the early segments of a tour and lighter items in the later segments. In fact, transportation costs are more closely related to energy consumption than to distance. Therefore, for hazardous materials transportation, considering the factors that affect energy consumption, it is more practical to find one or more low-energy vehicle routes as the optimization objective.
The transportation route is a key factor in guaranteeing that hazardous materials are transported safely and economically, and it has been studied by many experts and scholars. Batta et al. classified transportation risk into road risk and node risk [18]. Karkazis et al. [19] formulated the routing problem of transporting hazardous materials as an optimization problem with the objective of minimizing transportation risk and costs. Meng et al. [20] used the dynamic programming method to solve the transportation route optimization problem. Liu et al. [21] applied a multilevel fuzzy comprehensive evaluation method to optimizing hazardous materials transportation routing. Das et al. [22] explored the routing problem in a transportation network with a capacity limit, and developed a multi-objective algorithm to find nondominated solutions. Pradhananga et al. [23] chose minimum transportation time and risk as the optimization objectives to search for Pareto optimal solutions. Given the unscientific nature of simply evaluating the total risk of a fleet as a whole without considering individual vehicle risk, Wang et al. [24] developed a two-stage exact algorithm based on the ε-constraint method to solve the proposed problem. Alrukaibi et al. [25] designed a risk/cost algorithm using available data from Kuwait; in their study, incident probability, incident consequence, and risk assessment were used as the algorithm's main criteria to identify available alternative routes. Mohammadi et al. [7] proposed a mathematical model for designing a reliable hazardous materials transportation network on the basis of hub location topology under uncertainties; to cope with the uncertainties, they provided a solution framework based on an integration of the chance-constrained programming approach. Clearly, studies on the routing problem of hazardous materials transportation have been relatively thorough, progressing from single-objective to multi-objective optimization and from certain to uncertain environments. However, energy consumption is not considered in these models, and to solve them, most studies turned the multi-objective model into a single-objective one, finally obtaining a single optimal solution instead of a Pareto optimal solution set.
All of the above studies on the vehicle routing problem (VRP) for hazardous materials transportation assumed that there was only one depot. However, there may, in fact, be more than one vehicle depot in a city, and it is of more practical significance to study the VRP with multiple depots for hazardous materials transportation. At present, only a few studies have focused on the multi-depot vehicle routing problem (MDVRP) for hazardous materials transportation. Zhao et al. [26] considered the return trips between collection centers and recycling centers, and developed a multi-objective model minimizing cost and risk. This optimization problem with two objectives was then transformed into a single-objective problem. Du et al. [27] developed a fuzzy bilevel programming model in which the upper-level formulation allocated customers to depots and the lower level determined the optimal routing for each depot. However, that study only considered transportation risk minimization as the optimization objective. In fact, for hazardous materials transportation, in addition to risk, energy consumption is also a major factor that must be considered. This is mainly because energy consumption is a key factor in reducing the transportation costs of enterprises, reducing carbon emissions, and achieving sustainable development. To this end, this study aims to find the optimal vehicle routings for the MDVRP considering both transportation risk and energy consumption.
Many heuristic methods have been developed in the context of the VRP, including the tabu search algorithm [28], the genetic algorithm [29], the ant colony algorithm [30], and so on. Compared to the VRP, the MDVRP is more complicated, because it needs to consider which customer point should be serviced by which depot, in addition to considering the vehicle routing for each depot. It is necessary to coordinate the delivery tasks among multiple depots. Gillett and Johnson presented a clustering procedure and a sweep heuristic for each depot [31]. Salhi and Sari proposed a multilevel heuristic method [32]. Giosa et al. [33] summarized and proposed a number of heuristics for the two-stage approach to solving the MDVRP. In order to solve the emergency vehicle routing problem with multiple depots, Qin et al. [34] divided it into two steps, using the nearest assignment and average distance method to transform the MDVRP into multiple VRPs with a single depot in the first step, and then adopting a genetic algorithm to solve the single-depot VRPs in the second step. Ho et al. [35] developed two hybrid genetic algorithms. The major difference between the two algorithms is that the initial solutions were generated randomly in the first algorithm, whereas the initialization procedure incorporated the Clarke and Wright savings method and the nearest neighbor heuristic in the other algorithm. In summary, the methodologies for the MDVRP can be categorized into one-stage and two-stage solving methods.
To sum up, a number of studies have explored the traditional VRP with only one depot. These studies move from single-objective optimization to multi-objective optimization and from a single solution to a nondominated solution set. However, only a few studies have focused on the MDVRP for hazardous materials transportation, and all of them consider a single objective or do not consider transportation energy consumption, due to the complexity of the problem. Furthermore, these studies rarely consider all depots and customer points together to find the optimal Pareto solution set that reduces both transportation risk and energy consumption. The existing research has mainly adopted the two-stage method, which cannot make multiple depots coordinate transportation and solve the problem dynamically. In addition, the existing research only considers one objective or converts multiple objectives into one to solve the problem, and rarely uses multi-objective algorithms to seek an optimal Pareto solution set. Different from the current studies on the MDVRP for hazardous materials transportation, this study aims to obtain the Pareto optimal solution set of the noncomparable transportation risk and energy consumption objectives. Finally, a two-stage method (TSM) and a one-stage hybrid multi-objective genetic algorithm (HMOGA) are designed, and the solution optimality of these two algorithms is compared and analyzed.
The rest of this paper is organized as follows: Section 2 describes the MDVRP for hazardous materials transportation, Section 3 establishes an optimization model to minimize transportation risk and energy consumption, Section 4 presents the TSM and HMOGA used to solve the proposed model, Section 5 gives numerical examples to analyze the effectiveness of the two algorithms, and Section 6 gives conclusions.
Problem Description
As the name implies, the VRP for hazardous materials transportation with one depot is a problem of vehicle scheduling for a single, specific depot: all customer points are served by that depot. In reality, however, it is very common that multiple depots coordinate the transportation and distribution of hazardous materials. The MDVRP for hazardous materials transportation can be defined as follows: (1) There exist several hazardous materials depots, and each depot has enough vehicles to transport the hazardous materials. (2) Multiple customer points exist and will be assigned to different depots. (3) Each vehicle services the corresponding customer points and can service several customer points, while each customer point can be serviced only once by one vehicle. (4) After the delivery task is finished, all transportation vehicles return to their hazardous materials depot.
Figure 1 is a sketch of the MDVRP for hazardous materials transportation. It shows three depots, named a, b, and c, and 15 customer points, where each customer point can be serviced by only one depot. In the MDVRP, multiple depots can cooperate, according to customer needs and the transportation environment, to achieve the globally optimal objective.
In the MDVRP, because of the strongly corrosive, highly toxic, explosive, and flammable characteristics of hazardous materials, transportation risk is an optimization objective that must be considered. For enterprises, minimizing transportation costs is paramount. However, low risk means high cost to some extent. Therefore, the MDVRP for hazardous materials transportation should be solved according to the idea of multi-objective optimization, and it is very important to find the Pareto solution set. In many reports in the literature, transportation cost is usually measured by distance and time. However, the shortest transportation distance or time in practice does not mean that the transportation cost is the lowest. Compared to distance or time, transportation energy consumption is of greater significance for reducing business operating costs, lowering carbon emissions, and achieving sustainable development. Therefore, this paper sets two optimization objectives: minimizing risk and minimizing energy consumption for hazardous materials transportation.
Energy Consumption Evaluation
Rolling resistance, air resistance, and ramp resistance are the main external forces that affect the energy consumption of truck transportation. According to the research in [36], the following formulas for calculating the power consumed to overcome these resistances are used. Equation (1) gives the power used to overcome rolling resistance during the transportation of hazardous materials, where P_r(m, v) represents the power consumed by transport equipment with a total weight of m to overcome rolling resistance at speed v (kW); c_r represents the rolling resistance coefficient; g represents gravitational acceleration (m/s²); m represents the total weight of the transport equipment, including the weight of the cargo m_c and of the vehicle M (t); and v represents the instantaneous speed of the transport equipment (m/s).
Equation (2) gives the power consumed to overcome air resistance during transportation, where P_a(v) is the power consumed to overcome air resistance at speed v (kW); a is the air resistance coefficient; ρ_a is the air density (kg/m³); and z is the windward area of the vehicle, also known as the frontal area (m²).
Equation (3) gives the power consumed by the truck traveling from node i to node j to overcome ramp resistance during transportation, where P_g(m, v, i) represents the power consumed to overcome ramp resistance at slope i and transport speed v, and i_ij represents the slope from node i to node j.
In addition, the acceleration of trucks while transporting hazardous materials requires a large amount of fuel. According to the literature [37], the energy consumption of a truck accelerating from 0 to v is given by Equation (4). The movement of the truck is a real-time change process; considering that the data are not easy to obtain, this paper uses the average speed v and the average slope i to calculate the average energy consumption during transportation. Therefore, the average energy consumption of hazardous materials transportation between two nodes is given by Equation (5), where d_ij represents the transportation distance from node i to node j (km); v represents the average speed during transportation (m/s); i represents the average slope; and n_num represents the average number of accelerations per kilometer (times/km).
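Equations (1)-(5) themselves did not survive in this text, so the sketch below implements one plausible resistance-based segment energy model matching the verbal description (rolling, air, and ramp resistance powers over the travel time, plus kinetic energy for accelerations). The functional forms and parameter values are assumptions, not the authors' exact formulas.

```python
# Illustrative sketch of a resistance-based segment energy model consistent
# with the verbal description above. The exact forms of Equations (1)-(5)
# are not reproduced in this text, so these formulas and the parameter
# values are assumptions, not the authors' implementation.

G = 9.81            # gravitational acceleration (m/s^2)

def rolling_power(m_total, v, c_r=0.01):
    """Power (W) to overcome rolling resistance at speed v (m/s) for mass m_total (kg)."""
    return c_r * m_total * G * v

def air_power(v, a=0.6, rho_a=1.225, z=7.5):
    """Power (W) to overcome air resistance: drag coeff a, air density rho_a, frontal area z (m^2)."""
    return 0.5 * a * rho_a * z * v ** 3

def ramp_power(m_total, v, slope):
    """Power (W) to overcome ramp resistance on a grade `slope` (rise/run)."""
    return m_total * G * slope * v

def segment_energy(d_km, m_cargo_t, slope, v=16.7, n_num=0.5, m_vehicle_t=10.0):
    """Average energy (MJ) from node i to node j over distance d_km.

    v: average speed (m/s); n_num: average accelerations per km;
    m_vehicle_t / m_cargo_t: vehicle and cargo weight in tonnes (assumed values).
    """
    m_total = (m_vehicle_t + m_cargo_t) * 1000.0          # kg
    travel_time = d_km * 1000.0 / v                       # s
    p_total = (rolling_power(m_total, v) + air_power(v)
               + ramp_power(m_total, v, slope))           # W
    e_resist = p_total * travel_time                      # J
    e_accel = n_num * d_km * 0.5 * m_total * v ** 2       # J, kinetic energy per acceleration
    return (e_resist + e_accel) / 1e6                     # MJ

print(f"10 km segment, 8 t cargo, 1% grade: {segment_energy(10, 8, 0.01):.1f} MJ")
```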
MDVRP Multi-Objective Model for Hazardous Materials Transportation
Hazardous materials transportation risk is usually defined as the product of the accident rate and the potential loss during transportation. The probability of an accident occurring is often affected by time, weather conditions, vehicle type, loading form, and road conditions, and the severity of the consequences is related to the number of people affected, property losses, and weather conditions. In this paper, we use the risk measurement model proposed by Erkut et al. [38] to calculate hazardous materials transportation risk, and regard the number of people affected on both sides of the road as the main indicator for measuring risk (Equation (6)), where r_ij is the risk of road segment (i, j) (persons), a_ij is the accident rate, p_ij is the population density (persons/km²), and r_d is the impact radius of the accident (m).
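Equation (6) is not reproduced in this text; as an illustration only, the sketch below uses one common form of the traditional affected-population measure (accident rate multiplied by the population inside the impact circle), which should be read as an assumption rather than the authors' exact expression.

```python
# Sketch of a traditional affected-population risk measure for one road
# segment, consistent with the description above. Equation (6) itself is
# not reproduced in this text, so the exact form used here (accident rate
# times population inside the impact circle) is an assumption.
import math

def segment_risk(a_ij, p_ij, r_d_m=1000.0):
    """Expected number of affected persons on segment (i, j).

    a_ij   : accident rate on the segment (dimensionless probability)
    p_ij   : population density along the segment (persons/km^2)
    r_d_m  : accident impact radius in metres
    """
    r_d_km = r_d_m / 1000.0
    affected_population = p_ij * math.pi * r_d_km ** 2   # persons in the impact circle
    return a_ij * affected_population

# Example: accident rate 2e-5, 300 persons/km^2, 1000 m impact radius
print(f"segment risk = {segment_risk(2e-5, 300):.5f} persons")
```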
The MDVRP multi-objective model for hazardous materials transportation comprises Equations (7)-(16). In this model, the objective functions, Equations (7) and (8), respectively minimize the total transportation risk and the total energy consumption; it is worth mentioning that the risk of a delivery vehicle on its return to the depot from the final customer point is not counted in the total risk, whereas its energy is counted in the total transportation energy. In the objective functions, S_0 is the depot set, S_0 = {i | i = 1, 2, ..., m}, where m is the total number of hazardous materials depots; S_1 is the customer point set, S_1 = {i | i = 1, 2, ..., n}, where n is the number of customer points; S is the set of transportation network nodes, S = S_0 ∪ S_1; V_d is the vehicle set of hazardous materials depot d; and x^d_ijk is the 0-1 integer decision variable. Equation (9) is the load constraint, which means that any vehicle of any depot must satisfy the corresponding load constraint, namely, it cannot be overloaded; in this equation, q_j is the quantity demanded at customer point j and L^d_k is the load capacity of vehicle k from depot d. Equation (10) means that the number of vehicles in a depot is limited, and the number of vehicles arranged to transport hazardous materials should not exceed the number owned by the depot. Equation (11) indicates that every vehicle departing from a depot must return to the original depot after finishing its transportation task. Equations (12) and (13) guarantee that each customer point is serviced exactly once and by one vehicle from one depot. Equation (14) means that vehicles cannot depart from one depot and return to another depot. Equation (15) constrains the maximum risk on a segment. Equation (16) denotes that x^d_ijk equals 1 if the route of vehicle k from depot d contains the road segment from node i to node j, and 0 otherwise.
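The following minimal sketch shows how the two objective values might be accumulated for one candidate routing plan, reflecting the convention stated above that the return leg to the depot contributes to total energy but not to total risk; the per-segment risk and energy functions are placeholders for Equations (6) and (5), and the example values are arbitrary.

```python
# Sketch of evaluating the two objectives for one candidate routing plan.
# It follows the convention stated above: the final return leg to the depot
# is excluded from total risk but included in total energy. The risk_of()
# and energy_of() callables are placeholders for Equations (6) and (5).
from typing import Callable, Dict, List, Tuple

Node = str  # depot letters ("a", "b", "c") or customer point ids ("1", "2", ...)

def evaluate_plan(routes: Dict[Node, List[List[Node]]],
                  risk_of: Callable[[Node, Node], float],
                  energy_of: Callable[[Node, Node], float]) -> Tuple[float, float]:
    """routes maps each depot to a list of customer sequences, one per vehicle."""
    total_risk, total_energy = 0.0, 0.0
    for depot, vehicle_routes in routes.items():
        for customers in vehicle_routes:
            path = [depot] + customers            # outbound and inter-customer legs
            for i, j in zip(path, path[1:]):
                total_risk += risk_of(i, j)
                total_energy += energy_of(i, j)
            # return leg: energy is counted, risk is not
            total_energy += energy_of(path[-1], depot)
    return total_risk, total_energy

# Toy example with constant per-leg values (illustration only)
plan = {"a": [["1", "4"], ["14", "17"]], "b": [["2", "3", "5"]]}
risk, energy = evaluate_plan(plan, lambda i, j: 0.01, lambda i, j: 50.0)
print(f"total risk = {risk:.2f}, total energy = {energy:.0f}")
```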
Solving Methods
Different from single-objective optimization, possible conflicts exist between the multiple objectives in multi-objective optimization, and improving one sub-objective might cause the performance of another to decrease. Multi-objective optimization generally yields a set of Pareto solutions; the elements of this set are called Pareto optimal solutions. In reality, decision-makers select one or more solutions from a Pareto optimal solution set as the adopted solution of a multi-objective optimization problem based on personal preference. For multi-objective optimization problems, typical algorithms include the niched Pareto genetic algorithm (NPGA) [39], the nondominated sorting genetic algorithm II (NSGA-II) [40], and the strength Pareto evolutionary algorithm 2 (SPEA2) [41]. These algorithms have good solution efficiency for particular problems. However, for a specific problem, these algorithms cannot be used directly, but often need to be modified according to the properties of the problem.
This paper mainly uses genetic algorithm theory to design methods for solving the MDVRP for hazardous materials transportation. Based on biological characteristics, the genetic algorithm realizes diversity and global search by operating on a population composed of potential solutions. It focuses on a set of individuals, which is consistent with the Pareto solution set of a multi-objective optimization problem. Moreover, the genetic algorithm does not require many mathematical prerequisites, can deal with all types of objective functions and constraints, and has good performance on combinatorial optimization problems.
Two-Stage Method
Solving the MDVRP for hazardous materials transportation has a certain complexity, and the algorithm needs to solve two problems: one is to choose appropriate customer points for each hazardous materials depot, and the other is to design a reasonable distribution service order for each depot to service the selected customer points. The method is designed in two stages. In stage one, we use global search clustering to convert the problem into several multi-objective optimization problems with single depots, which decreases the solving complexity of the problem. In stage two, we design a multi-objective genetic algorithm to solve the routing problems obtained in stage one, from which we obtain the corresponding routing for each depot to service its customer points.
Global Search Cluster
Deciding which depot services which customer points is related to the customer point itself, the current depot, and the other customer points that are already serviced by the current depot. In other words, whether a depot serves a customer point is inversely related to the risk and energy between that customer point, the depot, and the other customer points already serviced by the depot. Intimacy is defined in order to determine which customer points each depot will service; each customer point is serviced by the depot with which it has the highest intimacy.
Intimacy is defined as follows. On the hypothesis that customer point j is serviced by depot d, f(i, d) is defined as the affinity between depot d and customer point i; (rW)_ij refers to the dimensionless risk- and energy-weighted average between network nodes i and j, r_ij is the dimensionless nominal risk, and W_ij is the dimensionless nominal energy. In the above equations, a, b, α, and β are weighting coefficients used to adjust the factor magnitudes; L_d is the current capacity of the depot; S_q(d) refers to the customer points assigned to depot d, with the initial state of S_q(d) being the depot itself; and |S_q(d)| is the number of customer points assigned to depot d. In this way, all customer points are divided into |S_0| groups, where |S_0| is the number of depots.
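The intimacy equations did not survive in this text, so the sketch below encodes only one plausible reading of the description: a customer point's intimacy with a depot decreases with the weighted risk/energy to the depot and to the customers already assigned to it, and assignment respects depot capacity. The functional form, the helper names, and the toy data are all assumptions; the default weights follow the parameter values reported in the results section (a = 0.6, b = 0.4, α = 0.9, β = 0.1).

```python
# Sketch of an intimacy-style assignment of customer points to depots.
# The intimacy formula itself did not survive extraction, so this is one
# plausible reading, not the authors' definition. Intimacy decreases with
# (a) the weighted risk/energy between the point and the depot and (b) the
# average weighted risk/energy between the point and the customers already
# assigned to that depot, subject to depot capacity.
from typing import Dict, List

def weighted_cost(r, w, alpha=0.9, beta=0.1):
    """Dimensionless risk/energy weighted average (rW)_ij."""
    return alpha * r + beta * w

def intimacy(point, depot, assigned, r, w, a=0.6, b=0.4):
    """Higher intimacy -> depot is a better home for the customer point."""
    direct = weighted_cost(r[depot][point], w[depot][point])
    if assigned:
        neighbour = sum(weighted_cost(r[q][point], w[q][point]) for q in assigned) / len(assigned)
    else:
        neighbour = direct
    return 1.0 / (a * direct + b * neighbour + 1e-9)

def cluster(points: List[str], depots: List[str], demand, capacity, r, w) -> Dict[str, List[str]]:
    groups = {d: [] for d in depots}
    load = {d: 0.0 for d in depots}
    for p in points:
        # assumes at least one depot still has spare capacity for p
        candidates = [d for d in depots if load[d] + demand[p] <= capacity[d]]
        best = max(candidates, key=lambda d: intimacy(p, d, groups[d], r, w))
        groups[best].append(p)
        load[best] += demand[p]
    return groups

# Toy data: symmetric dimensionless risk/energy between all node pairs
nodes = ["a", "b", "1", "2", "3"]
r = {i: {j: 1.0 for j in nodes} for i in nodes}
w = {i: {j: 1.0 for j in nodes} for i in nodes}
r["a"]["1"], r["1"]["a"] = 0.2, 0.2      # customer point 1 is "close" to depot a
print(cluster(["1", "2", "3"], ["a", "b"], {"1": 2, "2": 3, "3": 1},
              {"a": 4, "b": 6}, r, w))
```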
Multi-Objective Genetic Algorithm
The designed multi-objective genetic algorithm constructs the Pareto optimal solution set with the banker method and uses the gathering density method to maintain the population distribution. The basic process of the algorithm is shown in Figure 2, in which Pop refers to the population, |Pop| is the population size, which is fixed at N, Paretos is the Pareto optimal solution set, |Paretos| is the number of Pareto optimal solutions in the set, with a maximum value of 2N, and the termination condition of the algorithm is the generation limit. Here, the evolutionary group is a population with a fixed number of chromosomes. The chromosomes of the offspring population are composed of all the Pareto individuals in the Pareto pool, all the individuals in the Pareto set of the current population, and the individuals of the current population selected by the selection operator.
(1) Encoding and Decoding Chromosomes
We adopted a natural number coding method. For example, suppose a network has one hazardous materials depot and eight customer points. If the depot is marked 0 and the customer points are marked 1-8, the randomly generated sequence 0 3 5 1 4 2 8 7 6 becomes one of the chromosomes. Because several vehicles are needed to service the customer points in the model, chromosomes obtained by this coding method also have to be decoded. The chromosomes are decoded based on a greedy strategy, by inserting the customer points into the route according to the order of the genes in the chromosome without violating the load capacity constraint. If adding another customer point would violate the load constraint, a new vehicle is assigned to service that customer point. Assuming that the eight customer points have demands of, successively, 3 tons, 2 tons, 4 tons, 1 ton, 2 tons, 2 tons, 2 tons, and 1 ton, and the vehicle load capacity is 8 tons, the chromosome 0 3 5 1 4 2 8 7 6 is decoded accordingly (a decoding sketch is given after the crossover description below).
(1) Crossover operator: partially mapped crossover method
After the mating areas of two parent chromosomes A and B are exchanged, the genes outside the mating area that also appear in the mating area are deleted from A and B respectively, yielding the offspring chromosomes A" = 0 3 4 5 6 7 1 8 2 and B" = 0 6 5 1 8 2 3 4 7.
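To make the greedy decoding concrete, here is a minimal sketch (not the authors' code) that reproduces the decoding rule described above for the example chromosome and demands given in the text.

```python
# Sketch of the greedy decoding described above (not the authors' code):
# customers are appended to the current vehicle in chromosome order, and a
# new vehicle is started whenever the next customer would exceed the load.
def decode(chromosome, demand, capacity):
    """chromosome: depot (0) followed by customer ids; returns one route per vehicle."""
    routes, current, load = [], [], 0.0
    for gene in chromosome[1:]:                 # skip the leading depot gene
        if load + demand[gene] > capacity:      # would overload -> start a new vehicle
            routes.append(current)
            current, load = [], 0.0
        current.append(gene)
        load += demand[gene]
    if current:
        routes.append(current)
    return routes

# Demands for customer points 1-8 (tons) and an 8 t vehicle, as in the text
demand = {1: 3, 2: 2, 3: 4, 4: 1, 5: 2, 6: 2, 7: 2, 8: 1}
print(decode([0, 3, 5, 1, 4, 2, 8, 7, 6], demand, 8))
# -> [[3, 5], [1, 4, 2, 8], [7, 6]], i.e. routes 0-3-5-0, 0-1-4-2-8-0, 0-7-6-0
```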
(2) Mutation operator: reverse transcription method
Two different genes, excluding the genes representing the hazardous materials depot, are first randomly selected in the chromosome, and then a reverse operation is performed on the selected genes.
(3) Constructing the Pareto Optimal Solution Set
The banker method is not a backtracking method, and constructing new Pareto optimal solutions does not require comparison with the previously constructed Pareto optimal solutions. Let Paretos be the Pareto optimal solution set and Pop the evolution group set; the banker method then constructs the Pareto optimal solution set for Pop as follows:
Step 1: Initialize the Pareto optimal solution set Paretos.
Step 2: Take out an individual (usually the first one) from Pop as the banker, compare the banker with the other individuals, and delete every individual dominated by the banker.
Step 3: After the banker has been compared with all individuals in Pop, if any individual dominates the banker, delete the banker; otherwise, add the banker to Paretos.
Repeat Steps 2 and 3 until Pop becomes empty.
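A minimal sketch of the banker-method construction in Steps 1-3, assuming both objectives are to be minimized; the population of objective pairs is illustrative.

```python
# Sketch of the banker-method construction of the Pareto set described in
# Steps 1-3 above (minimization of both objectives assumed).
from typing import List, Tuple

def dominates(u: Tuple[float, ...], v: Tuple[float, ...]) -> bool:
    """u dominates v if it is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def banker_pareto(pop: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    paretos: List[Tuple[float, ...]] = []
    pop = list(pop)
    while pop:
        banker = pop.pop(0)                                    # take out the first individual
        rest = [x for x in pop if not dominates(banker, x)]    # delete those the banker dominates
        if any(dominates(x, banker) for x in rest):
            pop = rest                                         # banker is dominated -> discard it
        else:
            pop = rest
            paretos.append(banker)                             # banker is nondominated -> keep it
    return paretos

# Toy population of (risk, energy) pairs
print(banker_pareto([(3, 9), (4, 4), (5, 5), (2, 12), (6, 3)]))
# -> [(3, 9), (4, 4), (2, 12), (6, 3)]   ((5, 5) is dominated by (4, 4))
```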
(4) Calculating Individual Gathering Density
Maintaining the population distribution is an important part of multi-objective optimization. Here, the gathering density method is used to maintain the evolving population distribution, specifically adopting the gather distance to represent an individual's gathering density; the greater the gather distance, the smaller the gathering density. Before calculating each individual's gather distance, the individuals need to be ordered according to their objective function values. Let I_i denote the gather distance of individual i, and I^j_i the value of objective j for individual i. If there are m objectives, the gather distance of individual i is calculated by Equation (20).
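Equation (20) is not reproduced in this text; the sketch below uses the NSGA-II-style crowding distance (for each objective, the gap between an individual's sorted neighbours, summed over objectives) as a stand-in that matches the verbal description, so the exact formula should be treated as an assumption.

```python
# Sketch of the gather-distance computation. Equation (20) is not reproduced
# in this text, so the NSGA-II-style crowding distance below (sum over
# objectives of the gap between an individual's sorted neighbours) is used
# as a stand-in that matches the verbal description.
from typing import List, Tuple

def gather_distance(objectives: List[Tuple[float, ...]]) -> List[float]:
    n, m = len(objectives), len(objectives[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][j])
        dist[order[0]] = dist[order[-1]] = float("inf")      # boundary individuals always kept
        span = objectives[order[-1]][j] - objectives[order[0]][j] or 1.0
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (objectives[order[k + 1]][j] - objectives[order[k - 1]][j]) / span
    return dist

# Larger gather distance -> sparser region -> individual preferred for diversity
print(gather_distance([(3, 9), (4, 4), (2, 12), (6, 3)]))
```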
Hybrid Multi-Objective Genetic Algorithm
When solving the MDVRP for hazardous materials transportation, we need to consider the customer points serviced by each depot as well as their service order. Therefore, we designed a hybrid multi-objective genetic algorithm (HMOGA) to solve the MDVRP for hazardous materials transportation. The algorithm adopts the same methods as in Section 4.1.2 for constructing the Pareto optimal solution set and maintaining the evolving population distribution. The other specific operations are as follows: (1) Chromosome Coding and Decoding. The hybrid multi-objective genetic algorithm adopts a hybrid coding mode combining customer point assignment and distribution service order. The coding mode divides the chromosome into two parts: part 1 selects the customer points for each depot to service, with a length equal to the number of customer points and gene values that are depot numbers; part 2 determines the service order of the customer points, with a length also equal to the number of customer points, while its genes are customer points. Combining part 1 with part 2, and respecting the loading capacity constraint of the hazardous materials transportation vehicles, we adopt a greedy selection strategy to decode the chromosome (a decoding sketch is given at the end of this subsection). For example, if there are three depots and 10 customer points, the hybrid coding chromosome is as follows: Chromosome part 1: a→c→b→c→a→b→a→b→c→b; Chromosome part 2: 3→9→2→10→1→8→6→7→5→4. (2) Genetic Operator Design. Similar to the coding, the genetic operators are also designed in two parts: chromosome part 1 adopts two-point crossover and simple mutation operators; chromosome part 2 uses partially mapped crossover and reverse transcription operators.
(1) Crossover operator. Two-point crossover operator: First, according to a certain probability, freely choose two individual chromosomes as the parent generation, and select two genes in part 1 of the selected chromosomes as the intersections; second, swap the genes between the intersections of the selected chromosomes, generating the offspring chromosomes. Partially mapped crossover operator: Use the partially mapped crossover operator described in Section 4.1.2 to complete the crossover operation.
(2) Mutation operator. Simple mutation operator: With a certain probability, randomly select a chromosome, freely choose a gene in part 1, and change the gene to one of its alleles; if the selected gene is b, change it to a or c. Reverse transcription operator: the same as the reverse transcription operation in Section 4.1.2.
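As a sketch of the hybrid decoding described in this subsection, the code below assigns customers to depots from part 1, visits them in the order of part 2, and splits vehicles with the same greedy load rule as in Section 4.1.2; the demand values and vehicle capacity are hypothetical.

```python
# Sketch of decoding the two-part HMOGA chromosome described above: part 1
# assigns each customer point to a depot, part 2 fixes the service order,
# and the same greedy load rule as in Section 4.1.2 splits vehicles.
from collections import defaultdict

def decode_hybrid(part1, part2, demand, capacity):
    """part1[i]: depot of customer i+1; part2: global visiting order of customers."""
    depot_of = {cust: part1[cust - 1] for cust in range(1, len(part1) + 1)}
    routes = defaultdict(list)                    # depot -> list of vehicle routes
    current, load = defaultdict(list), defaultdict(float)
    for cust in part2:                            # follow the global service order
        d = depot_of[cust]
        if load[d] + demand[cust] > capacity:     # start a new vehicle for this depot
            routes[d].append(current[d])
            current[d], load[d] = [], 0.0
        current[d].append(cust)
        load[d] += demand[cust]
    for d, route in current.items():              # flush the last vehicle of each depot
        if route:
            routes[d].append(route)
    return dict(routes)

# Example chromosome from the text: 3 depots (a, b, c), 10 customer points.
part1 = ["a", "c", "b", "c", "a", "b", "a", "b", "c", "b"]
part2 = [3, 9, 2, 10, 1, 8, 6, 7, 5, 4]
demand = {i: 2 for i in range(1, 11)}             # hypothetical demands, 2 t each
print(decode_hybrid(part1, part2, demand, capacity=8))
```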
Cases Design
An enterprise has three hazardous materials depots and 20 customer points with hazardous materials demands. The depots are designated a, b, and c, and the customer points are numbered 1-20. Each depot uses trucks to deliver hazardous materials, and the dead weight, payload, capacity, engine efficiency, and other vehicle parameters of the trucks are identical. Assume that each depot has enough hazardous materials and transportation vehicles. This paper uses the affected-population model in the traditional risk model to measure the risk of each road segment. In addition to the length of the road segment, the traditional affected-population model also needs some relevant parameters, such as the affected radius, accident rate, and population density. For energy consumption, a middle-level method is used. Trucks perform delivery tasks by overcoming rolling resistance, air resistance, and ramp resistance during the transportation of hazardous materials. To calculate rolling resistance and ramp resistance, in addition to the total weight of the vehicle, the rolling resistance coefficient, the ramp resistance coefficient, the road ramp, and other parameters are required. For air resistance, the air resistance coefficient, the vehicle windward area, and the air density are required parameters. In order to better verify the solution effect of the designed methods, we designed two cases, case 1 and case 2.
In case 1, the triangular fuzzy risk numbers of the first three hazardous materials depots and 20 customer points in the literature [27] are used as the source data for the road segment length, population density, accident rate, and road ramp from each depot to, and between, the customer points. The road segment length is obtained by subtracting the lower limit of the fuzzy number from its upper limit; the population density around the road segment is obtained by subtracting the lower limit from the value with maximum possibility of the fuzzy number and multiplying by 150; the accident rate is obtained by multiplying the value with maximum possibility of the fuzzy number by 10⁻⁵; and the road ramp is obtained by subtracting the value with maximum possibility from the upper limit of the fuzzy number.
In case 2, the road ramp is a random number between −5% and 5%, and the accident rate is a random number between 10⁻⁶ and 5 × 10⁻⁵. The road segment length, the population density on both sides of the road, and the number of customer points are the same as the corresponding data in case 1. The road ramps and accident rates are shown in Tables A1-A3 in Appendix A, where the letters indicate the depot and the figures indicate the number of the customer point.
The parameters related to the vehicles transporting hazardous materials in the two cases are shown in Table 1. According to the setting of the affected radius of hazardous materials in the literature [42], the accident-affected radius in both cases is 1000 m. The demand amount of each customer point in the two cases is measured by weight (t), as shown in Table 2.
Results Analysis
The parameters for the global search clustering method in the first stage are set as a = 0.6, b = 0.4, α = 0.9, β = 0.1. Table 3 shows the customer points serviced by each hazardous materials depot in case 1 and case 2, obtained by running the clustering algorithm proposed in the first stage. In the second stage, the parameters of the multi-objective genetic algorithm are set as follows: population size is 100, maximum evolution generation is 100, partially mapped crossover rate is 0.95, and reverse transcription mutation rate is 0.2. Based on the clustering results for depots a, b, and c in case 1 and case 2, we obtain the Pareto solution set of the customer points serviced by each depot, as shown in Tables 4 and 5.
The parameters of the HMOGA are set as follows: population size is 200, maximum evolution generation is 200, two-point crossover rate is 0.6, partially mapped crossover rate is 0.9, simple mutation rate is 0.3, and reverse transcription mutation rate is 0.1. Tables 6 and 7 show the Pareto optimal solution sets obtained by solving the HMOGA in case 1 and case 2, respectively. In the "Routes" column of Tables 4-7, each letter and the figures behind it indicate the transport route of a delivery vehicle. For example, "a-1-4-19-20-a-14-17" in Table 4 indicates that two delivery vehicles are needed; their transportation routes are a-1-4-19-20-a and a-14-17-a. From the above tables, it can be seen that the designed methods can all obtain a Pareto optimal solution set, and the solutions in a set are mutually non-dominated. Figures 3 and 4 show the Pareto optimal solution distributions obtained by solving the HMOGA in case 1 and case 2, respectively. It can be seen from Figures 3 and 4 that the HMOGA obtains Pareto solutions distributed over the whole solution space instead of concentrated in a certain area, where the first point and the last point in each figure correspond to the optimal solution of energy consumption and risk, respectively. Decision-makers can select appropriate routes for hazardous materials transport vehicles according to the actual situation. For example, if the transportation task is currently carried out according to the fourth solution in Table 4, the decision-maker can switch to the first solution in the table when an accident occurs between customer points 2 and 3 and traffic is prohibited.
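For readers following the "Routes" notation, a small illustrative parser (an assumption about how the strings are read, not part of the authors' algorithm) shows how one such string expands into the individual closed vehicle routes:

```python
def decode_routes(route_string):
    """Split a route string such as 'a-1-4-19-20-a-14-17' into closed vehicle routes.

    Every occurrence of the depot letter starts a new vehicle; each route is
    implicitly closed by returning to the depot.
    """
    tokens = route_string.split("-")
    depot = tokens[0]
    routes, current = [], [depot]
    for tok in tokens[1:]:
        if tok == depot:                      # a new vehicle leaves the depot
            routes.append(current + [depot])  # close the previous route
            current = [depot]
        else:
            current.append(tok)
    routes.append(current + [depot])
    return ["-".join(r) for r in routes]

print(decode_routes("a-1-4-19-20-a-14-17"))
# ['a-1-4-19-20-a', 'a-14-17-a']
```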
Figures 5 and 6 show the optimal risk value and energy consumption value, respectively, of each depot obtained by the two methods in case 1.

It can be seen from Figures 5 and 6 that, in case 1, the single-objective optimal solutions of the HMOGA are nearly six times better for risk and 1.1% better for energy consumption than those of the TSM. The large difference in Figure 5 is mainly due to the characteristics of the data used in case 1. In the optimal risk solution, depot a optimized by the HMOGA serves only three customer points (1, 4, and 6), and the accident rate and population density of the road segments between depot a and these three customer points are relatively small. By contrast, six customer points (1, 4, 14, 17, 19, 20) are serviced by depot a in the TSM solution, and the accident rate and population density between depot a and customer points 17 and 20 are relatively large. Therefore, the risk value of depot a obtained by the HMOGA is much smaller than the optimal risk value obtained by the TSM.

For optimal energy consumption, the optimization result of the HMOGA saves only 1.1% compared with the optimization result of the TSM. This is mainly due to the strong dependence of energy consumption on the length of the road segment. In case 1, the length and slope of the road segments vary little, and the slope is only uphill, so the optimization effect of the HMOGA is not obvious.

Similarly, Figures 7 and 8 show the optimal risk value and energy consumption value of each depot in case 2. In case 2, since the slope and accident rate of the road segments are randomly generated, the HMOGA achieves a clear improvement over the TSM for both the optimal risk solution and the optimal energy consumption solution. As can be seen from Figures 7 and 8, the optimal risk and energy consumption values obtained by the HMOGA are reduced by 12.4% and 54.8%, respectively, compared with the values obtained by the TSM in case 2.
For a single depot, the optimal solution obtained by the TSM may be better than that of the HMOGA, but for the entire transportation network, the HMOGA can find better transportation routes than the TSM. Accordingly, for solving the MDVRP for hazardous materials transportation, the HMOGA has certain advantages in practical application.
For the MDVRP for hazardous materials transportation, most studies aim to reduce transportation costs, with distance as the main optimization objective. In order to compare the optimization effects obtained when transportation distance and energy consumption are each used as the optimization objective (in addition to transportation risk), we tested case 2. Table 8 shows the optimal vehicle routes obtained by taking energy consumption and distance as the optimization objective in case 2. It can be seen from Table 8 that for the MDVRP for hazardous materials transportation, compared with distance as the optimization objective, when energy consumption is optimized, total distance increases by 2.3%, but total energy consumption is reduced by 47.9% and the number of vehicles decreases by one. Therefore, it is more practical to consider the energy consumption of truck transportation in the MDVRP.
Conclusions
Hazardous materials transportation routing is one of the basic elements of assuring safe transportation. In reality, it is very common for several depots to coordinate the transportation and distribution of hazardous materials.

Designing scientific and reasonable transportation routes that consider factors such as transportation risk and energy consumption can help deliver hazardous materials to each customer point safely, quickly, and economically. We took the MDVRP for hazardous materials transportation as the research object, considering transportation risk and energy consumption, and established a multi-objective optimization model for hazardous materials transportation routing. In the model, the risk of a road segment is measured by the number of people who could be affected if an accident occurred, and energy consumption is measured by the work the trucks perform in overcoming resistance during transportation.
To solve the model, we designed a TSM and an HMOGA. In the TSM, the first stage finds the set of customer points serviced by each depot through the global search clustering method, considering transportation energy consumption, transportation risk, and depot capacity; the second stage determines the service order of the customer points for each depot by using a multi-objective genetic algorithm that applies the banker method to seek dominant individuals and uses the gathering distance to maintain the distribution of the evolving population. The HMOGA combines the solution method of the two stages into one stage: in the design of the algorithm, the customer points serviced by each depot and their service order are optimized simultaneously. In the end, we compared the above two methods through two cases, and the results show that both methods can obtain Pareto solution sets. The TSM and HMOGA have their own advantages, but the HMOGA is able to find better transportation routes for the whole transportation network.
In addition, when the length and slope of the road segments do not change much, energy consumption does not change much either, and decision-makers can use transport risk as the main basis for decision-making. When the differences between the slopes and lengths of the road segments are large, it is important to include energy consumption when finding the Pareto optimal solution set for hazardous materials transportation. The decision-maker can select an appropriate solution from the Pareto set to perform the transportation task according to the actual situation. Compared with distance, when energy consumption is the optimization objective, the distance is slightly increased, but the number of vehicles and the amount of energy consumption are effectively reduced.
Due to the lack of actual data, such as accident probabilities and slope information for road segments, we designed the examples by simply adapting the data in the existing literature. Although the rationality of the model and algorithm is verified, a large-scale actual road network study is still necessary for further research. In addition, we found the Pareto optimal solution set, but for the solutions in the set it is difficult to measure which one is better. Hence, in combination with other conditions, choosing one or more Pareto solutions as hazardous materials transportation routes is the focus of the next phase of research.
P_f = r_c · m · g · v represents the power consumed by transport equipment with a total weight of m to overcome rolling resistance at speed v (kW); r_c represents the rolling resistance coefficient; g represents gravitational acceleration (m/s²); m represents the total weight of the transport equipment, including the weight of the cargo m_c and the vehicle M (t); and v represents the instantaneous speed of the transport equipment (m/s).
Figure 2. Basic flow of the multi-objective genetic algorithm.

Figure 3. Pareto optimal solution distribution solved by HMOGA in case 1.

Figure 4. Pareto optimal solution distribution solved by HMOGA in case 2.

Figure 5. Minimum risk obtained by the two algorithms for each depot in case 1.

Figure 6. Minimum energy obtained by the two algorithms for each depot in case 1.

Figure 7. Minimum risk obtained by the two algorithms for each depot in case 2.

Figure 8. Minimum energy consumption obtained by the two algorithms for each depot in case 2.
Table 2. Demand amount of each customer point in case 1 and case 2.

Table 3. Customer points for each depot in case 1 and case 2.

Table 4. Pareto solution sets for customer points serviced by each depot in case 1.

Table 5. Pareto solution sets for customer points serviced by each depot in case 2.

Table 7. Pareto optimal solution set solved by the HMOGA in case 2.

Table 8. Optimal vehicle routes obtained by using energy consumption and distance as the optimization objectives in case 2.
Stochastic modeling suggests that noise reduces differentiation efficiency by inducing a heterogeneous drug response in glioma differentiation therapy
Background Glioma differentiation therapy is a novel strategy that has been used to induce glioma cells to differentiate into glia-like cells. Although some advances in experimental methods for exploring the molecular mechanisms involved in differentiation therapy have been made, a model-based comprehensive analysis is still needed to understand these differentiation mechanisms and improve the effects of anti-cancer therapeutics. This type of analysis becomes necessary in stochastic cases for two main reasons: stochastic noise inherently exists in signal transduction and phenotypic regulation during targeted therapy and chemotherapy, and the relationship between this noise and drug efficacy in differentiation therapy is largely unknown. Results In this study, we developed both an additive noise model and a Chemical-Langevin-Equation model for the signaling pathways involved in glioma differentiation therapy to investigate the functional role of noise in the drug response. Our model analysis revealed an ultrasensitive mechanism of cyclin D1 degradation that controls the glioma differentiation induced by the cAMP inducer cholera toxin (CT). The role of cyclin D1 degradation in human glioblastoma cell differentiation was then experimentally verified. Our stochastic simulation demonstrated that noise not only renders some glioma cells insensitive to cyclin D1 degradation during drug treatment but also induces heterogeneous differentiation responses among individual glioma cells by modulating the ultrasensitive response of cyclin D1. As such, the noise can reduce the differentiation efficiency in drug-treated glioma cells, which was verified by the decreased evolution of the differentiation potential, which quantifies the impact of noise on the dynamics of the drug-treated glioma cell population. Conclusion Our results demonstrated that targeting the noise-induced dynamics of cyclin D1 during glioma differentiation therapy can increase anti-glioma effects, implying that noise is a considerable factor in assessing and optimizing anti-cancer drug interventions. Electronic supplementary material The online version of this article (doi:10.1186/s12918-016-0316-x) contains supplementary material, which is available to authorized users.
Background
Glioma differentiation therapy is a novel strategy for inducing glioma cells to differentiate into normal-like cells using specific drugs [1]. Although some advances in exploring the molecular mechanisms involved in drug-induced glioma differentiation have been made, a model-based comprehensive analysis is still needed to understand these differentiation mechanisms and improve the effects of anti-cancer therapeutics.
Experimental studies have revealed a variety of signaling pathways that are involved in the regulation of glioma differentiation. It has been shown that the elevation of cAMP levels by cholera toxin (CT) can induce glioma cell differentiation, which is mediated by CREB phosphorylation at Ser-133 in a PKA dependent manner [2]. cAMP/PKA signaling can also inhibit the PI3K/AKT pathway, leading to the activation of the downstream molecule GSK-3β and subsequent degradation of cyclin D1 [3]. Additionally, the IL-6/JAK2/STAT3 pathway, which is activated by increased cAMP levels, is also involved in glioma cell differentiation [4]. In such studies, Glial fibrillary acidic protein (GFAP) is applied as a reliable marker for evaluating the differentiation of glioma cells.
Mathematical models have shown great potential in contributing to the understanding of biological mechanisms and the generation of testable hypotheses or predictions. In a recent study [5], we constructed an ordinary differential equation (ODE) model for the signaling network involved in glioma differentiation which revealed a bi-stable mechanism for phenotype switching during glioma differentiation. On the other hand, extensive stochastic noise exists in signal transduction and phenotypic regulation [6] and biological regulatory systems are dynamic and stochastic. Several studies have demonstrated an intricate interplay between noise and the structure and spatiotemporal dynamics [7] of the signaling network [8,9] during cancer therapy. However, few studies have examined the relationship between inherent noise and drug efficacy in the induction of glioma differentiation.
In the present study, we adopted glioma differentiation therapy as a realistic case for investigating how the noise that inevitably exists in signaling networks influences drug efficacy and contributes to drug resistance, focusing on the functional role of this noise in the drug response of glioma cancer cells. We developed both an additive noise model (ANM) and a Chemical-Langevin-Equation (CLE) model to simulate the stochastic dynamics of the signaling network during glioma differentiation therapy. We showed that increased noise, acting on the ultrasensitive response of cyclin D1 to drug treatment, can induce bifurcation and heterogeneous responses in glioma differentiation. As such, this noise may reduce drug efficacy in the induction of glioma differentiation. Our model further demonstrated that a feedback loop of cyclin D1 activation can increase the variability in signal transduction and phenotypic transition. The results suggest that interventions inhibiting cyclin D1 feedback could help to enhance drug-induced differentiation efficiency in a noisy environment during glioma differentiation therapy.
Results
Ultrasensitive response of cyclin D1 controls drug-induced glioma differentiation

Based on a validated set of parameter values obtained by fitting experimental data [5], we performed a parameter sensitivity analysis (see Methods) to investigate which of the parameters in the developed signaling network model (Fig. 1) were most sensitive or critical for glioma differentiation. The value of each parameter was increased by 5 % from its estimated value, and the time-averaged percent change in the level of GFAP was then obtained. The computations were repeated 20 times, and the mean value and standard deviation were then calculated (Fig. 2a). It was observed that among all of the parameters, two cyclin D1-associated parameters, K 6a (the Michaelis constant for self-feedback of cyclin D1) and d 6 (the deactivation rate of cyclin D1 induced by active GSK3β), were the most sensitive to small variations. These sensitive parameters indicate the critical role of cyclin D1 in regulating glioma differentiation.
The quantified experimental data [2,3] showed the dose responses of cyclin D1 and GFAP to CT. Our simulation ( Fig. 2b) using the validated model further indicated a rapid decrease in the response of cyclin D1 as well as a steep rise in the response of GFAP to increasing CT stimulation within a narrow range (6 to 7 ng/ml). This is a characteristic indication of "ultrasensitivity" in the dose-response relationship [10,11]. Therefore, we employed an "apparent Hill coefficient" [12,13] to quantitatively evaluate whether the response of glioma differentiation is ultrasensitive to CT. This Hill coefficient is defined by the following equation [12,13]: where EC 90 and EC 10 represent the stimuli that generate 90 and 10 % of the maximal response, respectively. The apparent Hill coefficients of the simulated dose-response curves for cyclin D1 and GFAP with respect to CT were 40 and 43, respectively (Fig. 2b), indicating strong ultrasensitivity in the response of glioma differentiation to drug treatment. These results demonstrated that the dynamics of differentiation-associated protein activation (i.e. cyclin D1 and GFAP activation) might be regulated by an ultrasensitive mechanism through which low drug levels induce minimal cyclin D1 degradation and GFAP activation but degradation/activation is strongly induced once the drug dose increases above a threshold. Fig. 2c further shows the time-course of cyclin D1 and GFAP following drug treatment (CT = 10 ng/ml) in the deterministic model. We then experimentally tested the regulatory role of cyclin D1 in the differentiation of human malignant glioma cells (U87-MG cells) by silencing CCND1, which encodes cyclin D1 protein, and pharmacologically downregulating or inhibiting cyclin D1. We selected the most efficient siRNA fragment 003 to knockdown CCND1 (Additional file 1: Figure S1a). Knockdown of CCND1 induced GFAP expression, accompanied by downregulation of proliferating cell nuclear antigen (PCNA, a marker for cell proliferation) (Additional file 1: Figure S1b). Additionally, we used the cAMP analogue 8-CPT-cAMP to mimic the inhibitory effect of cAMP signal activators such as cholera toxin and forskolin on cyclin D1 protein [2,14]. As shown in Additional file 1: Figure S1c, 8-CPT-cAMP triggers downregulation of cyclin D1, leading to a significant increase in GFAP, but a decrease in PCNA. To further demonstrate the regulatory role of cyclin D1 in the glia-fate induction of glioma cells, we introduced a functional pharmacologic inhibitor of CDK4 and 6 which bind to cyclin D1 to form a complex required for G1-S cell cycle phase progression [15]. The CDK4/6 inhibitor induced the same changes in GFAP and PCNA as siCCND1 and 8-CPT-cAMP (Additional file 1: Figure S1c). Moreover, all of the applied strategies targeting cyclin D1 were able to transform the polygonal bodies of U87-MG cells into a glia-like morphology with dramatically extended processes (Additional file 1: Figure S1d). These data demonstrate the role of cyclin D1 in glioma differentiation, in accordance with the characteristics of our model.
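The apparent Hill coefficient referred to above is conventionally defined as n_H = log(81) / log(EC90 / EC10). As a purely illustrative sketch (the dose-response curve below is synthetic, not the output of the authors' model), it can be estimated from a simulated curve as follows:

```python
import numpy as np

def apparent_hill_coefficient(doses, responses):
    """Estimate n_H = log(81) / log(EC90 / EC10) from a monotonically increasing dose-response curve."""
    doses = np.asarray(doses, dtype=float)
    responses = np.asarray(responses, dtype=float)
    rel = (responses - responses.min()) / (responses.max() - responses.min())
    ec10 = np.interp(0.1, rel, doses)   # stimulus giving 10 % of the maximal response
    ec90 = np.interp(0.9, rel, doses)   # stimulus giving 90 % of the maximal response
    return np.log(81.0) / np.log(ec90 / ec10)

# Example with a synthetic ultrasensitive (Hill-type) curve centred near 6.5 ng/ml CT
ct = np.linspace(1, 12, 200)
gfap = ct**40 / (6.5**40 + ct**40)
print(apparent_hill_coefficient(ct, gfap))   # close to 40
```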
Noise-induced heterogeneous response of glioma differentiation
Here, we investigated the stochastic dynamics of cyclin D1 and GFAP concentrations in a noisy environment. The simulations using the ANM model (see Methods) (Fig. 3) showed the temporal evolution of cyclin D1 and GFAP activation at different noise intensities (σ in the ANM model is set to 0.1, 1, 5 or 10 %, as in Ref. [16]). Additional file 1: Figure S2 shows good agreement between the experimental data and simulated GFAP levels at a 5 % noise intensity. We found that increasing the noise intensity impacted the dynamics and distributions of both the cyclin D1 and GFAP responses (Fig. 3, Fig. 4a-d). Meanwhile, the simulations using the CLE model (Fig. 5e-f, i-j) further demonstrated that with an increase of the intrinsic/extrinsic noise intensity, not all trajectories of cyclin D1 are downregulated by CT, and not all trajectories of GFAP are upregulated. This implies that increasing noise strength in signal transduction can induce bifurcation of cyclin D1 degradation, which renders some glioma cells insensitive to drug treatment and induces heterogeneous activation of GFAP. These results indicate that noise can modulate the ultrasensitive response of cyclin D1 and induce heterogeneous drug responses of glioma cells during differentiation therapy.
Increasing noise leads to a reduction of the differentiation efficiency

We next examined the noise-induced qualitative changes in cyclin D1 and GFAP in glioma cells. As simulated using the ANM (Fig. 4a-d) and CLE (Fig. 5c, g, k) models, an increase in the noise intensity affected the probabilistic distribution of GFAP, indicating that the frequency of the higher levels of GFAP equilibrium decreases with the increase of noise intensity.
To understand how noise impacts the dynamics of the drug-treated glioma cell population, we define the differentiation potential (D) as the percent differentiation of glioma cells induced during drug treatment. That is, D(t) = ∫ u(x) p_GFAP(x, t) dx, where p_GFAP(x, t) is the probability distribution function (PDF) describing the concentration (x) of GFAP across a population, and u(x) is a microscopic indicator function describing the effect of the drug on the differentiation of glioma cells at a given GFAP level. Note that u(x) may be defined as a Heaviside function, such that glioma cells are able to differentiate only if GFAP levels exceed a critical value, x_c. That is, u(x) = 1 if x > x_c and u(x) = 0 otherwise. x_c is set to 0.8 in this work. The means and standard deviations are shown with blue error bars at different time points in each situation. As the noise intensity increases, the differentiation potential is significantly reduced, indicating that drug efficacy in inducing glioma differentiation is decreased. These results imply that intra- or extracellular noise or, more generally, complex signaling interference, could reduce the differentiation efficiency of drug-treated glioma cells during differentiation therapy.
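With a sample of single-cell GFAP levels, the differentiation potential reduces to the fraction of cells above the threshold. A minimal illustrative sketch (the two simulated populations below are synthetic stand-ins, not model output) is:

```python
import numpy as np

def differentiation_potential(gfap_levels, x_c=0.8):
    """Fraction of cells whose GFAP level exceeds x_c, i.e. D = ∫ u(x) p_GFAP(x) dx
    with u a Heaviside step at x_c, estimated from sampled single-cell GFAP levels."""
    gfap_levels = np.asarray(gfap_levels, dtype=float)
    return float(np.mean(gfap_levels > x_c))

# Example: two hypothetical populations after 48 h of CT treatment
low_noise = np.random.normal(loc=0.95, scale=0.05, size=10_000)
high_noise = np.random.normal(loc=0.95, scale=0.30, size=10_000)
print(differentiation_potential(low_noise))    # close to 1
print(differentiation_potential(high_noise))   # noticeably smaller
```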
We also used the CLE model to investigate the effects of intrinsic and extrinsic noise on the differentiation potential. Figure 5 shows the stochastic temporal responses of cyclin D1 and GFAP, the distribution of GFAP levels and the differentiation potential of glioma cells evaluated after 48 h of drug treatment (CT = 10 ng/ml). In the control group (Fig. 5a-d), the strength of intrinsic noise was low, whereas in the comparison group it was increased to 1/√V = 0.01 (Fig. 5e-h). When these two groups were compared, we found that elevation of the strength of intrinsic noise resulted in increased heterogeneity of the molecular and cellular responses and a decreased differentiation potential. A similar effect was observed for extrinsic noise, as shown in Fig. 5i-l, where the strength of extrinsic noise was increased from λ = 0.001 to λ = 0.01, which also resulted in a decrease in the differentiation potential. Furthermore, a comprehensive investigation of the effects of the combined strength of intrinsic and extrinsic noise over a wide range (Fig. 6a) clearly showed that increasing the intrinsic and/or extrinsic noise leads to a reduction of the differentiation efficiency. Positive feedback of cyclin D1 activation (e.g., through cyclin D1 auto-activation or the cyclin D1/CDK4-6/Rb/E2F/cyclin D1 feedback loop [17,18]) has been demonstrated to be involved in glioma differentiation [5]. We investigated whether inhibiting the cyclin D1 feedback loop could enhance the differentiation efficiency. The effect of interventions blocking cyclin D1 feedback is simulated by increasing the value of the Michaelis constant (K 6a ) in the feedback loop by 2-, 5- or 10-fold. We first used both the ANM model (Additional file 1: Figure S4) and the CLE model (Additional file 1: Figure S5) to simulate the stochastic responses of cyclin D1 and GFAP in CT-treated cells, with strong or weak cyclin D1 feedback. The noise intensity in the ANM model was set to 5 % to illustrate a typical simulation (Additional file 1: Figure S2). The strength of intrinsic noise was 1/√V = 0.01, and the strength of extrinsic noise was λ = 0.01. Both Additional file 1: Figure S4 and Additional file 1: Figure S5 show that, compared with the single CT treatment, the combination therapy using CT and inhibition of cyclin D1 feedback results in rapid degradation of cyclin D1 and a consistent increase in GFAP activity, with decreased heterogeneity in both the cyclin D1 and GFAP responses. We also used other methods to investigate the role of the cyclin D1 feedback loop: (1) decreasing the Hill coefficient of the feedback (n 2 ) by 0.2- or 0.5-fold and (2) decreasing the activation rate of the self-feedback of cyclin D1 by 0.2- and 0.5-fold in the model. The results obtained using both of these two methods were consistent with the previous findings.
We also ran the CLE model with a large range of intrinsic and extrinsic noise strengths (from 10⁻³ to 10⁻¹) to examine the effect of inhibition of cyclin D1 feedback on the differentiation potential (Fig. 6). The differentiation potential of CT-treated cells in the presence of strong (Fig. 6a) and weak (Fig. 6b) cyclin D1 feedback was examined. Comparison of these two situations demonstrated that inhibition of cyclin D1 feedback enhances the differentiation potential of CT-treated glioma cells. These results imply that inhibiting the cyclin D1 feedback loop might help to reduce noise-induced drug resistance and improve the anti-cancer effects of glioma differentiation therapy.
Discussion
In this study, we developed a stochastic model of the signaling pathways involved in glioma differentiation therapy to analyze the functional role of noise in the drug response of glioma cells to differentiation inducers. Our analysis indicated that noise can interfere with the ultrasensitive response of cyclin D1 and reduce the differentiation efficiency by inducing heterogeneous responses of glioma cells to drugs. The ultrasensitive response of cyclin D1 is brought about through positive feedback, as inhibiting the feedback loop of cyclin D1 results in rapid degradation of cyclin D1, even without CT treatment. As such, the ultrasensitive mechanism involved in the cyclin D1 response to CT would not exist if this positive feedback loop were blocked. In addition, our simulation suggested that the combination of differentiation therapies with cyclin D1 feedback inhibition might improve therapeutic efficacy.
Noise is an inherent feature of dynamic and stochastic biological systems (e.g., cancer). Whether noise is "beneficial" or "harmful" to a cellular function is an interesting topic, about which there has been some controversy in previous studies [19]. Functional noise is thought to be based on mechanisms intrinsic to network structure or biological systems themselves. In this study, we revealed that cyclin D1 ultrasensitivity might result in qualitative modification of the probability distribution of glioma cell differentiation due to the stochastic noise in molecular processes [20]. This noise may include statistical mechanical fluctuations in protein activation (intrinsic noise) [21] and extracellular micro-environmental perturbations (extrinsic noise) [20]. In this context, innate intra- or extracellular noise might be utilized by glioma tumor cells to resist drugs, which may reflect the inherent adaptation characteristics and acquired fitness of cancers. Drug resistance is often a major cause of the failure of chemotherapy [22]. The paradigms surrounding drug-resistance mechanisms have focused on understanding drug resistance at the molecular, cellular, and micro-environmental levels. A well-established paradigm for the mechanisms underlying drug resistance is that a variety of newly acquired genetic and epigenetic modifications can render tumor cells insensitive to therapeutic agents [23]. Another paradigm that is more often observed in various cancer studies involving targeted therapy is that subtle posttranslational activations of signaling pathways that bypass the stress of the therapeutic target can modulate the expression patterns of oncogenes [24][25][26]. It has been demonstrated that micro-environmental adaptations [27] play an important role in promoting the rapid emergence of acquired drug resistance due to the drug-induced secretion of various resistance factors from tumor cells [28,29]. These studies have provided us with abundant information allowing us to understand and potentially overcome drug resistance.
Our modeling experiments highlighted the possibility that the dynamic and stochastic features of posttranslational modifications [30] of protein activation might also reduce drug efficacy, thus facilitating drug resistance, independent of genetic mutations. The posttranslational mechanism underlying the activity of the cyclin D1 protein revealed in this study is consistent with experimental data (referring to Fig. 5 in Ref. [3], showing that cellular cyclin D1 protein levels are remarkably reduced, while the mRNA levels of cyclin D1 remain unaltered following treatment with CT).
The ANM model includes constant noise that is independent of protein concentrations. The noise term in the ANM model does not take into account the origin of the randomness in biochemical reactions and does not have the capacity to describe intrinsic fluctuations, which is not sufficient in many cases as discussed in Refs. [31,32]. Additionally, as a multiplicative form of noise, the noise term in the CLE model that approximates the chemical master equation depends on protein concentrations, and therefore appropriately describes the intrinsic noise coming from biochemical reactions. The difference between the ANM and CLE models might lead to discrepancies in their simulation results. For example, in the present study, when the inhibition of cyclin D1 feedback was simulated, the CLE model clearly showed a significantly higher steady-state GFAP level (Additional file 1: Figure S5b) compared with the wild-type (Additional file 1: Figure S5a), while in the ANM model, additive noise introduced fluctuations as only small oscillations of the steady-state of cyclin D1 and GFAP, which triggered transition of the steady-states in some trajectories of cyclin D1 from high to low and thus, those of GFAP from low to high (Additional file 1: Figure S4a), due to the irreversible "one-way switch" mechanism [5]. Therefore, in the ANM model, the averaged steady-state of GFAP in the wild-type (Additional file 1: Figure S4a) was almost as high as that observed when cyclin D1 feedback was inhibited (Additional file 1: Figure S4b).
To further verify the correlation between the level of noise in the cyclin D1 protein concentration and the differentiation rate of glioma cells, we will utilize timelapse microscopy and a customized cell tracking system to monitor the degradation of cyclin D1 and the levels of the differentiation marker GFAP in thousands of individual cells under exposure to cholera toxin at a series of effective doses (5 to 10 ng/ml) [2]. Specifically, the detailed design of the experimental procedure will be as follows: (1) Cell lines. C6 rat glioma cells will be obtained from the American Type Culture Collection (Manassas, VA, USA) and maintained in DMEM (Invitrogen, Grand Island, NY, USA) supplemented with 10 % FBS in a humidified atmosphere of 5 % CO 2 at 37°C [2].
(2) Constructs. CCND1 cDNA-red fluorescent protein (RFP) fusions and GFAP cDNA-green fluorescent protein (GFP) fusions will be constructed and transduced into C6 cells. After sorting via flow cytometry, pure populations expressing the desired fluorescent reporters will be obtained and used to establish stable cell strains expressing cyclin D1 and GFAP with fluorescent proteins. (3) Drug treatment. Cells from the stable cell strains will be exposed to cholera toxin (Sigma, St Louis, MO, USA) at effective concentrations of 5, 6, 7, 8, 9 and 10 ng/ml and then subjected to continuous image capture for 48 h [2]. (4) Time-lapse microscopy and image processing. Images will be obtained using an IXMicro microscope (Molecular Devices, Sunnyvale, CA, USA). Each image showing a red fluorescent signal from cyclin D1 and a green fluorescent signal from GFAP will be processed using a low-pass Gaussian filter and the Matlab function regionprops followed by procedures similar to those described in Ref. [33]. The algorithm will be performed in MATLAB (MathWorks).
Through the above procedures, the temporal changes in cyclin D1 and GFAP will be measured. We will then calculate the coefficient of variation (CV) for cyclin D1 in response to different doses of cholera toxin. Additionally, we will calculate the correlation coefficient between the CV of cyclin D1 and GFAP level after 24 h or 48 h under the corresponding conditions. If this correlation coefficient is close to -1, then the experimental data are consistent with the model prediction that increasing noise reduces glioma differentiation efficiency.
In our ongoing work, we will further investigate more detailed molecular regulatory networks [34,35] underlying the feedback loop of cyclin D1 activation. First, the identification of such molecules [36] will advance our understanding of the molecular mechanisms underlying resistance to differentiation therapy. Second, when combined with differentiation therapy, the identified proteins (we will specify them) may be candidate targets for reducing drug resistance [37].
Conclusions
We have investigated the functional role of stochastic noise in the drug response and differentiation efficiency during cancer differentiation therapy based on an experimentally validated model. Our stochastic modeling of glioma differentiation therapy as a realistic case study demonstrated that increased noise can modulate the ultrasensitivity of cyclin D1 activity and decrease the efficiency of drug-induced glioma differentiation. Moreover, the combination of differentiation-inducible drugs and inhibition of cyclin D1 feedback can enhance the differentiation efficiency of glioma cells. These results advance our understanding of the relationship between noise and drug efficacy in glioma differentiation. Additionally, our study indicates the potential benefit of targeting the dynamics of some critical molecules during cancer therapy to increase anti-cancer effects.
Gene silencing using CCND1 siRNA
The siRNA fragments 001, 002 and 003 targeting human CCND1 were purchased from Sigma-Aldrich (St Louis, MO), and the sequences of these fragments were described as follows: siRNA 001-CCACAGAUGUGAAGU UCAUdTdT and AUGAACUUCACAUCUGUGGdTdT; siRNA 002-GCAUGUUCGUGGCCUCUAAdTdT and UUAGAGGCCACGAACAUGCdTdT; siRNA 003-GU AAGAAUAGGCAUUAACAdTdT and UGUUAAUGC CUAUUCUUACdTdT. CCND1 siRNA was transfected into U87-MG cells using the Lipofectamine™ RNAi-MAX reagent (Invitrogen, Carlsbad, California). After 1, 2 and 3 days, proteins from the transfected cells were subjected to western blot analysis and the protein levels of cyclin D1, GFAP and PCNA were evaluated with specific antibodies.
Western blot analysis
U87-MG cells were treated with CCND1 siRNA, 8-CPT-cAMP or CDK4/6 inhibitor for different times. Total proteins were extracted with the Mammalian Protein Extraction Reagent (Pierce, Rockford, IL, USA) and then subjected to measurement of the protein concentration with the BCA Protein Assay Kit (Pierce, Rockford, IL, USA). Next, equal amounts of the protein samples were separated via sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and then electrotransferred to a PVDF membrane. Primary antibodies against cyclin D1, GFAP, PCNA (Cell Signaling Technology, Beverly, MA, USA) and Tubulin (Sigma-Aldrich, St Louis, MO) and a horseradish peroxidase-labeled secondary antibody (Cell Signaling Technology, Beverly, MA, USA) were used to recognize these specific proteins. Finally, the proteins were visualized with enhanced chemiluminescence detection reagents (Pierce, Rockford, IL, USA) in an immunoblotting imaging and analysis system (BioRad, CA, USA).
Additive noise model
As a simple and the easiest approach for incorporating molecular fluctuations in the model [16,38], an additive noise model (ANM) [39][40][41][42] that incorporates an additive noise term into the stochastic differential equation was adopted in this study to simulate stochastic signal transduction in the regulation of glioma differentiation. The ANM model is described by the following equations: where Y = {y k , k = 1, ⋯, 10} is a set of random variables describing the activation levels of the molecular components in the signaling pathway. The drift term, F(Y), in the above model is a matrix consisting of functions that describe chemical reaction rates between molecular components. The symbol σ represents the noise intensity determining the amplitude of noise in the system. The symbol W = {ω k (t), k = 1, ⋯, 10} represents a set of independent Wiener processes or standard Brownian motion, characterized by the following equation: where N(0,1) is the unit normal distribution. The drug-induced activation of the PKA-PI3K-AKT-GSK3β pathway (Fig. 1) is modeled by the following equations (5-9) using Michaelis-Menten kinetics [43] and Hill functions [44]: The cAMP-PKA mediated activation of the IL-6-JAK2-STAT3 pathway induced by CT is modeled as follows: The stochastic kinetics of cyclin D1, balanced by its activation and degradation, is modeled as follows: where the first term on the right-hand side describes the activation of cyclin D1 promoted by self-amplification or a positive feedback loop for cyclin D1 [17,18] that has been validated in our previous study [5]. V 6 is the maximal activation rate of cyclin D1, and K 6a is the Michaelis constant. n 2 is the Hill coefficient. The second term on the right-hand side of the above equation describes the deactivation of cyclin D1 induced by active GSK3β, which can trigger cyclin D1 translocation and degradation. d 6 is the dephosphorylation rate of cyclin D1, and K 6b is the Michaelis constant for GSK3β-induced cyclin D1 degradation. W 6 is standard Brownian motion, and σ 6 is a diffusion coefficient.
As a reliable marker of the differentiation of glioma cells, GFAP is regulated by CREB, STAT3 and active GSK3β [2,4]. Degradation of cyclin D1 is required for the differentiation of glioma cells [3]. Therefore, the stochastic dynamics of GFAP can be modeled using the following equation: with C being the maximal value of the steady-state of cyclin D1, and . As the upregulation of cyclin D1 is indispensable in the cell cycle and cell proliferation, it is only when cyclin D1 is downregulated that glioma cells can begin to differentiate and, thus, that GFAP can be unregulated. Therefore cyclin D1 is modeled as being dominant in GFAP regulation. The involvement of CREB, STAT3 and GSK3β in the regulation of GFAP is modeled by determining the best fit of the model structure to the experimental data under various conditions [5]. The last two terms in the above equation describe the degradation and fluctuations of GFAP.
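Since the full ten-variable drift F(Y) is specified by the reaction equations above, only a generic sketch is given here. The following fragment (an illustration, not the authors' MATLAB code; the one-variable drift with Hill-type self-activation and linear degradation is a toy stand-in for the real network) integrates an additive-noise SDE dy = f(y) dt + σ dW with the Euler-Maruyama scheme:

```python
import numpy as np

def simulate_anm(f, y0, sigma=0.05, dt=0.01, t_end=48.0, seed=0):
    """Euler-Maruyama integration of an additive-noise SDE dy = f(y) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    y = np.array(y0, dtype=float)
    traj = np.empty((n_steps + 1, y.size))
    traj[0] = y
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=y.size)   # independent Wiener increments
        y = y + f(y) * dt + sigma * dW
        traj[k + 1] = y
    return traj

# Toy drift: self-activating species (Hill feedback) minus linear degradation
def drift(y):
    c = y[0]
    activation = 1.0 * c**4 / (0.5**4 + c**4)
    degradation = 0.8 * c
    return np.array([activation - degradation])

traj = simulate_anm(drift, y0=[1.0], sigma=0.05)
```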
Chemical-Langevin-Equation (CLE) model
The variation in signal transduction arises from various sources, including intrinsic and extrinsic factors [45]. The intrinsic factors include statistical mechanical fluctuations in the diffusion and binding of the molecules involved in protein activation. The extrinsic factors include fluctuations in the extracellular environment [46], the stochasticity of gene expression [47], variations in the epigenetic state [48], and different levels of molecular machines [45,49], etc.
To examine the effects of intrinsic noise on glioma differentiation, we also employed the CLE model to simulate the stochastic molecular responses of glioma cells to drug treatment, and to extend the predictions of the ANM model. Based on the deterministic ODE model, the "white-noise form" Langevin equations [50] are formulated as follows: where V denotes the total number of molecules of each protein in the above signaling pathway and ζ i (i = 1, ⋯, 20) represents temporally uncorrelated, statistically independent Gaussian white noise, i.e., for each i, j = 1, ⋯, 20, Furthermore, when extrinsic noise was taken into account, each parameter, P j , in the model was varied as P j (1 + λε i ), where ε i (i = 1, ⋯, 39) represents statistically independent Gaussian white noise. λ is the strength of the extrinsic noise.
The uniqueness of the solution to the above stochastic differential equations (SDEs) can be guaranteed because their coefficients satisfy certain appropriate growth conditions and local Lipschitz continuity [51]. The biological meaning of the applied parameters and their values are listed in Additional file 1: Table S1. The initial values of the mathematical model are listed in Additional file 1: Table S2. We numerically solved the above SDEs using the Euler-Maruyama method [52]. The simulation was performed in MATLAB R2007b (Math Works, USA). The trajectories of all signaling components in relation to the noise simulated with the CLE model are presented in Additional file 1: Figure S6.
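To make the distinction between the two noise models concrete, the fragment below is an illustrative sketch only (the simplified production and degradation rates reuse the notation V 6, K 6a and d 6 from above, but do not reproduce the full network). It shows how a Chemical-Langevin step carries a separate sqrt(rate/V) noise term per reaction, so that fluctuations shrink as 1/√V, and how extrinsic noise can be mimicked by perturbing each parameter as P_j(1 + λ ε_j):

```python
import numpy as np

rng = np.random.default_rng(1)

def cle_step(x, production, degradation, V=1e4, dt=0.01):
    """One Chemical-Langevin step: each reaction contributes its own sqrt(rate/V) noise term."""
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    drift = (production - degradation) * dt
    noise = np.sqrt(production / V) * dW1 - np.sqrt(degradation / V) * dW2
    return max(x + drift + noise, 0.0)   # concentrations stay non-negative

def perturb_parameters(params, lam=0.01):
    """Extrinsic noise: each parameter P_j becomes P_j * (1 + lam * eps_j)."""
    eps = rng.normal(0.0, 1.0, size=len(params))
    return {k: v * (1.0 + lam * e) for (k, v), e in zip(params.items(), eps)}

params = perturb_parameters({"V6": 1.0, "K6a": 0.5, "d6": 0.8}, lam=0.01)
x = 1.0
for _ in range(1000):
    prod = params["V6"] * x**4 / (params["K6a"]**4 + x**4)
    deg = params["d6"] * x
    x = cle_step(x, prod, deg)
```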
Parameter sensitivity analysis
Parameter sensitivity analysis is often used to quantitatively explore which parameters are more sensitive in affecting signaling dynamics. The time-dependent sensitivity coefficient of GFAP (model output) at time point t with respect to parameter P j was calculated as follows: Time-averaged sensitivities [53] were calculated as shown below to evaluate parameter sensitivity during the entire time course where {t l , l = 1, ⋯ L} is an equal partition of [0, T], with L =100 and T = 48 h in the simulation. A small perturbation (ΔP j =5 %) is imposed for calculating the timeaveraged sensitivities of GFAP with respect to the examined parameters.
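As a sketch of this procedure (the function toy_simulate below is a hypothetical stand-in for the actual model solution, not the paper's ODE/SDE system), the time-averaged relative sensitivity with a 5 % perturbation could be computed as:

```python
import numpy as np

def time_averaged_sensitivity(simulate, params, name, delta=0.05, t_grid=None):
    """Relative, time-averaged sensitivity of the model output (e.g. GFAP) to one parameter.

    `simulate(params, t_grid)` must return the output time course; the parameter
    `name` is perturbed by +5 % and the normalised change is averaged over time."""
    if t_grid is None:
        t_grid = np.linspace(0.0, 48.0, 100)
    base = np.asarray(simulate(params, t_grid), dtype=float)
    perturbed = dict(params)
    perturbed[name] = params[name] * (1.0 + delta)
    pert = np.asarray(simulate(perturbed, t_grid), dtype=float)
    rel_change = (pert - base) / np.maximum(np.abs(base), 1e-12)  # relative change per time point
    return float(np.mean(np.abs(rel_change)) / delta)

# Toy model standing in for the solved network: output saturates with rate k
def toy_simulate(params, t_grid):
    return 1.0 - np.exp(-params["k"] * t_grid)

print(time_averaged_sensitivity(toy_simulate, {"k": 0.1}, "k"))
```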
Additional file
Additional file 1: Text S1. Cyclin D1 feedback increases variability in signal transduction and phenotypic transition. Figure S1 Both the downregulation of cyclin D1 expression and inhibition of the function of the cyclin D1/CDK4/6 complex induce the differentiation of human glioblastoma U87-MG cells into glia-like cells. Figure S2 Comparison between the simulated time-course of GFAP levels at a 5 % noise intensity and the experimental data. Figure S3 The coefficient of variation of cyclin D1 and GFAP levels affected by cyclin D1 feedback during glioma differentiation. Figure S4 The effect of inhibition of cyclin D1 feedback simulated with the ANM model. Figure S5 The effect of inhibition of cyclin D1 feedback simulated with the CLE model. Figure S6 Trajectories of all signaling components in relation to noise simulated with the CLE model. Table S1 Parameter values involved in the model. Abbreviations AKT, protein kinase B; cAMP, cyclic adenosine monophosphate; CREB, cAMPresponse element binding protein; CT, cholera toxin; GFAP, glial fibrillary acidic protein; GSK-3β, glycogen synthase kinase 3 beta; IL6, Interleukins 6; JAK2, Janus kinase 2; PI3K, phosphoinositide 3-kinase; PKA, protein kinase A; STAT3, signal transducer and activator of transcription 3
Mortality risk in post-operative head and neck cancer patients during the SARS-Cov2 pandemic: early experiences
Purpose The objective of this report is to outline our early experience with head and neck cancer patients in a tertiary referral center, during the SARS-Cov2 pandemic, and to describe the poor outcomes of patients who acquired the infection. Methods In this case series from a single-center, national tertiary referral center for head and neck cancer we describe three consecutive head and neck cancer patients who contracted SARS-Cov2 during their inpatient stay. Results Of the three patients described in our case series that contracted SARS-Cov2, two patients died from SARS-Cov2-related illness. Conclusion We have demonstrated the significant implications that SARS-Cov2 has for head and neck cancer patients, with 3 patients acquiring SARS-Cov2 in hospital, and 2 deaths in that cohort. We propose a complete separation in the location of where these patients are being managed, and also dedicated non-SARS-Cov2 staff for their peri-operative management. Level of evidence IV.
Introduction
The SARS-Cov2 pandemic has major implications for the delivery of elective Otolaryngology-Head and Neck Surgery (ORL-HNS) oncology treatment internationally, with services in many jurisdictions coming largely to a standstill. ORL-HNS surgeons were noted to be in a high-risk group for transmission in reports from Wuhan, China, the UK and Italy [1][2][3][4]. Very high mortality rates have been reported in a wide range of post-operative elective surgery patients who were operated on in the prodromal phase of SARS-Cov2 [5].
Guidelines have been published for pre-and intra-operative management and protection to help reduce the risk of spread of SARS-Cov2 to patients and staff [1,2,6]. The post-operative implications for these high-risk patients are not yet clear.
With this in mind, we describe our early experience with three post-operative head and neck cancer patients at our institution, as it relates to the SARS-Cov2 pandemic, and the repercussions for future surgical practices.
Methods
We describe our experience of three patients with advanced head and neck cancer who underwent treatment prior to the onset of the SARS-Cov2 pandemic in Ireland and subsequently contracted SARS-Cov2 during their post-operative hospital stay. Our institution is the largest acute hospital in Ireland and a national tertiary-referral center for head and neck cancer. This research was granted institutional review board exemption from the Tallaght University Hospital-St. James Hospital Joint Research Ethics Committee. Informed consent was obtained from the patient directly where possible, or their next of kin in cases where the patient was deceased.
Results
Patient 1 was admitted to our institution for definitive treatment of tracheal stomal seeding from a previously resected T4 squamous cell carcinoma (SCC) of the oral cavity. He underwent a total laryngectomy with partial tracheal resection, followed by reconstruction with a pectoralis major pedicled flap. Final histology revealed clear margins.
The patient's post-operative course was complicated by wound healing issues related to his prior radiation treatment, as well as an acute exacerbation of the chronic renal disease. This caused a prolonged inpatient stay. Otherwise, the patient remained clinically well, with no evidence of post-operative infections. On his 4th post-operative week, a patient in the same ward contracted SARS-Cov2, which prompted isolation and testing of our patient. He was also found to be SARS-Cov2 positive, with elevated SARS-Cov2 markers ( Table 1). While the patient initially remained stable, his d-dimer and ferritin continued to climb, and his oxygen requirements escalated 6 days after testing positive for SARS-Cov2. He was commenced on hydroxychloroquine and broad-spectrum antibiotics. He continued to deteriorate both from a respiratory and renal point of view. He was transferred to the intensive care unit (ICU) for ventilatory support and continuous renal replacement therapy. He subsequently succumbed to respiratory complications from SARS-Cov2 7 days after diagnosis ( Fig. 1).
Patient 2 presented with progressive hoarseness, dysphagia and weight loss secondary to a large obstructing supraglottic T4aN2bM0 SCC.
At the time of surgery, the tumor was found to be directly invading his right common carotid artery and was deemed unresectable, upstaging him to T4b disease. A tracheotomy was performed to protect his airway, and he subsequently commenced radiotherapy. He developed respiratory symptoms during week two of radiotherapy treatment, with increased tracheal secretions and shortness of breath, as well as tachycardia, reduced oxygen saturations and raised inflammatory markers. He was swabbed and tested positive for SARS-Cov2 the following day, with elevated SARS-Cov2 markers (Table 1). Unfortunately, he continued to deteriorate, his radiotherapy treatment was stopped, and palliative care was initiated. His chest x-ray demonstrated progressive SARS-Cov2 infection with diffuse airspace opacification throughout both lungs, and he died 15 days following SARS-Cov2 diagnosis (Fig. 1).
Patient 3 was receiving full-dose concurrent chemo-radiotherapy for a T4b hypopharyngeal SCC with curative intent when he was admitted to hospital with new-onset stridor. Flexible nasolaryngoscopy revealed significant radiotherapy-related supraglottic edema that failed to settle with medical management with steroid and inhaled epinephrine. He underwent tracheotomy to protect his compromised airway, which was completed without complication. The patient subsequently resumed radiotherapy treatment as an inpatient, as he was unsuitable for immediate discharge. One month after admission to hospital he was found to be SARS-Cov2 positive, having shared a ward with patient 2. His initial bloods were not significantly elevated ( Table 1). As he had less than 1 week of radiation treatment remaining, it was decided to halt his treatment. He remains well without oxygen requirements and is planned for discharge in the coming week.
Discussion
The SARS-Cov2 pandemic has brought multiple challenges to clinical practice for ORL-HNS [1]. Strategies are emerging to overcome these difficulties during the pandemic, but concern remains for the high risk of mortality in patients contracting the infection in the postoperative period. In a case series from Wuhan, 34 patients who underwent a range of elective surgeries during the incubation period of SARS-Cov2 had very poor outcomes, with all patients developing SARS-Cov2 pneumonia shortly after surgery. Of those, 44% required admission to the ICU, and the case-fatality rate was 20.5% [5].
Furthermore, head and neck cancer patients are likely to have an increased risk of adverse outcomes from contracting SARS-Cov2 [2, 7]. They frequently have multiple comorbidities and are typically elderly smokers with poor performance status. The current case-fatality rate from SARS-Cov2 for patients above age 70 is estimated to be between 8 and 22.5% [3,8]. The presence of a tracheotomy or laryngectomy stoma is also thought to facilitate transmission to and from the patient [9]. Each of our three patients was admitted to hospital before Ireland recorded its first SARS-Cov2 death on the 12th of March, and when the total number of cases in the country was less than 10 (Fig. 1). Furthermore, their procedures were carried out well in advance of the country-wide restrictions put into place on the 13th of March. However, head and neck cancer patients frequently require a prolonged inpatient stay post-operatively, in particular those with a tracheotomy, advanced-stage disease or prior radiation [10]. These patients are at increased risk of transmission during care, and this risk is exacerbated by a lengthy hospital stay.
In response to the SARS-Cov2 pandemic, our institution implemented aggressive measures to help discharge patients from the hospital, isolate and protect existing inpatients, and create alternative pathways for prospective surgical candidates at other institutions that are not receiving SARS-Cov2 patients. Five patients admitted or treated prior to the onset of wide-spread restrictions or SARS-Cov2 cases in Ireland could not be promptly discharged, for either medical or social reasons. Three of these patients contracted SARS-Cov2, despite routine institutional SARS-Cov2 precautions in keeping with existing best-practice international guidelines. Two of these patients subsequently died from complications of SARS-Cov2 infection.
Our experience suggests that patients undergoing major head and neck cancer surgery are at very high risk during the SARS-Cov2 pandemic. We recommend aggressive separation and precautions for this vulnerable patient cohort. We propose two potential strategies to achieve this. We suggest the nomination of a SARS-Cov2 free hospital as the center for head and neck surgery, or stringent separation at a hospital level into non-SARS-Cov2 and SARS-Cov2 "streams". In the latter scenario, patients would be cared for only by staff not exposed to SARS-Cov2 or high-risk patients. This is supported by evidence from the 2003 SARS outbreak, where it was shown that these patients should be cared for in entirely separate units, or even separate hospitals by designated health care workers [11][12][13][14]. Finally, stringent pre-operative testing and isolation of patients being considered for major head and neck surgery is also of utmost importance, as patients with prodromal SARS-Cov2 were also found to have very poor outcomes [5].
This is a small case series and thus has inherent limitations. However, other than the case series from Wuhan [5], there are as yet no further reports of postoperative surgical mortality in the SARS-Cov2 era. This is likely to be due to increased precautions in hospitals and decreased surgical activity across the globe. However, as countries 'emerge from the lockdown', it is likely that the risk to post-operative patients will become more apparent.
Conclusions
Head and neck cancer patients are not only part of a vulnerable patient cohort, but also present unique challenges for post-operative care. Their prolonged inpatient stay and the level of input required from numerous health care professionals put them at significantly increased risk of nosocomial SARS-Cov2 infection. We have demonstrated the significant implications that SARS-Cov2 has for these patients, with 3 cases of hospital-acquired SARS-Cov2 and 2 deaths in our inpatient cohort.
We propose a complete separation in the location where these patients are cared for, as well as dedicated non-SARS-Cov2 staff for their peri-operative management.
Increased cortical reactivity to repeated tones at 8 months in infants with later ASD
Dysregulation of cortical excitation/inhibition (E/I) has been proposed as a neuropathological mechanism underlying core symptoms of autism spectrum disorder (ASD). Determining whether dysregulated E/I could contribute to the emergence of behavioural symptoms of ASD requires evidence from human infants prior to diagnosis. In this prospective longitudinal study, we examine differences in neural responses to auditory repetition in infants later diagnosed with ASD. Eight-month-old infants with (high-risk: n = 116) and without (low-risk: n = 27) an older sibling with ASD were tested in a non-linguistic auditory oddball paradigm. Relative to high-risk infants with typical development (n = 44), infants with later ASD (n = 14) showed reduced repetition suppression of 40–60 Hz evoked gamma and significantly greater 10–20 Hz inter-trial coherence (ITC) for repeated tones. Reduced repetition suppression of cortical gamma and increased phase-locking to repeated tones are consistent with cortical hyper-reactivity, which could in turn reflect disturbed E/I balance. Across the whole high-risk sample, a combined index of cortical reactivity (cortical gamma amplitude and ITC) was dimensionally associated with reduced growth in language skills between 8 months and 3 years, as well as elevated levels of parent-rated social communication symptoms at 3 years. Our data show that cortical ‘hyper-reactivity’ may precede the onset of behavioural traits of ASD in development, potentially affecting experience-dependent specialisation of the developing brain.
Introduction
Autism spectrum disorder (ASD) is defined by difficulties in social communication, as well as the presence of restricted interests, repetitive behaviours, and sensory anomalies 1 . Symptoms emerge in the first years of life, and can be reliably identified through behavioural assessments from toddlerhood. Several recent theories have implicated dysregulated coordination of excitatory and inhibitory signals (E/I) in cortical processing and associated homoeostatic/autoregulatory feedback loops as one potential common mechanism through which multiple background genetic and environmental risk factors could converge to produce behavioural symptoms of autism [2][3][4][5][6][7] . The high cooccurrence of epilepsy in individuals with ASD (22%) forms one line of evidence for this hypothesis 8 , in addition to the high rates of ASD in genetic disorders that disturb GABA-ergic functioning (the primary source of inhibitory signalling in the brain), including Fragile X, 15q11-13 and Neurofibromatosis Type 1 [9][10][11] . Alterations in GABA and glutamate levels have been identified in adults and children with ASD using magnetic resonance spectroscopy (MRS) [12][13][14] , although findings are inconsistent across studies and are limited by differences in the measurement variables selected and the lack of spatial precision inherent to this measurement technique. In animal models of ASD, emerging evidence from stem cell studies have implicated an over-production of GABA-ergic neurons 15 , while several animal models provide support for both GABA and glutamatergic dysfunction in ASD-related phenotypes 7,[16][17][18] . Multiple genetic and environmental risk factors for ASD may converge to disrupt the coordination between excitatory and inhibitory neurotransmitters in the developing brain.
Although there is a reasonable body of evidence linking dysregulated E/I to ASD, establishing a causal relationship to symptoms requires mapping these disruptions before symptoms emerge. Genetic evidence indicates that risk factors for alterations in E/I balance are expressed prenatally, including mutations in genes involved in synaptic development and function 6,7,19 . Full implications for early brain development remain unclear, particularly since excitatory and/or inhibitory signalling is dynamically shaped over development 20,21 . For example, GABA-ergic signals are initially excitatory before shifting to their mature inhibitory function sometime in late prenatal/ early postnatal development 22 . Considerable homoeostatic pressure coordinates developmental changes in E/I coordination 23,24 , and activity-dependent GABA signalling is thought to be critical in optimising the balance between excitation and inhibition in the developing cortex 25 , with a critical role in shaping cortical sensitive periods. The importance of the interplay between E/I signalling is further supported by the finding that the Nrg1 and ErbB4 excitatory signalling pathways control development of inhibitory circuitry in the mammal cerebral cortex by regulating connectivity of GABA-ergic neurons 26 . Dysregulation within this pathway has been associated with altered plasticity and the emergence of neurodevelopmental disorders such as schizophrenia and ASD 27,28 . Early alterations in E/I coordination will be influenced by substantial homoeostatic, autoregulatory or adaptive processes that may further compound initial perturbations and alter phenotypic expression over developmental time 3,5,29 . Understanding the nature of these changes in ASD will require direct evidence of the presence and consequences of altered E/I balance in human infants prior to symptom onset. The first year of life may be a critical window of interest, since evidence from analysis of SCN2A mutations indicates that overexcitation specific to the first year can result in later autism [30][31][32] . Considering the availability of pharmacological manipulations that may act on E/I function 16 , studying this system before the age of 1 year also holds promise for effective delivery of pre-emptive interventions 16,33 . To do this, researchers need putative indices of the effects of dysregulated E/I signalling on cortical processing that can be measured in human infants.
One such candidate is electrical activity of the cortex measured during a stimulus repetition paradigm. Repetition of a stimulus typically produces a dampening in subsequent neural responses to that stimulus-a phenomenon termed 'repetition suppression' that has been linked to neurotransmitter systems important in inhibitory control [34][35][36] . For example, blocking GABA activity in the inferior colliculus causes more neurons to respond to repeated stimuli 37 ; and activation of GABAergic receptors in the medial geniculate body decreases responses to common stimuli 38 . Further, models of Fragile X have revealed network hyper-excitability in response to repeated auditory stimulation, including decreased glutamatergic drive on GABAergic inhibitory neurons in sensory cortex 10,39 . Thus, examining repetition suppression can provide us with insight into inhibition/excitation balance in the developing brain. In young infants, repetition suppression can be readily measured using EEG 40,41 a methodology optimal for use with nonverbal populations 33 .
Emerging evidence suggests that repetition suppression may be altered in the early development of ASD. Indeed, there is broad evidence of reduced repetition suppression or habituation in the visual, auditory, and tactile domains of individuals with ASD relative to typically developing infants and children 36,[42][43][44][45] . Alterations may be particularly pronounced in the auditory domain (indeed, GABA levels measured through MRS are most consistently atypical in the auditory cortex 12,46,47 ). For example, Orekhova et al. linked reduced auditory gating of early-stage event-related potential (ERP) (P50) responses 48 to higher ongoing gamma power in 8-12-year-olds with ASD 49 . Others reported atypical gamma activity during auditory processing in children with ASD and first-degree relatives 50,51 as well as atypical theta oscillations in response to speech in adolescents and adults with ASD 52 . Alterations in auditory repetition suppression have also been reported in adolescents and adults with Fragile X, a genetic condition where 50% of individuals also meet the criteria for ASD and in which there is robust evidence of increased excitatory activity 53 . Specifically, the researchers found elevated baseline (pre-stimulus) gamma power in response to a tone repetition paradigm, which was associated with reduced habituation of ERPs. Further, there was a concomitant elevation of phase-locking in alpha/ beta EEG activity to repeated tones, which was interpreted as a further indication of over-responsiveness in cortical networks. Preliminary evidence also indicates altered EEG responses to auditory or multimodal stimuli in infants at familial risk for autism 40,43,44,54 , including an initial report of decreased habituation of responses to auditory tones 55 . Taken together, examining neural indices of repetition suppression in the infant brain may provide an index of cortical hyper-reactivity that is sensitive to early alterations in E/I coordination.
The present study
In the present study, we examined neural responses to repeated auditory tones in an oddball paradigm conducted with 8-month-old infants at low (n = 27) and high (n = 116) familial risk for ASD, an age prior to diagnosis.
Additionally, stimulus-locked gamma-band activity may become easier to detect in typical development within the first year of life 56,57 . Infants at high familial risk for ASD have a 20% chance of developing ASD themselves 58 , and thus longitudinal studies of this population allow examination of causal paths to the disorder. Previously reported ERP data from studies of auditory processing (including a small subset of infants also included in the present investigation) showed no dampening of response following repetition in infants at high risk 55 , although no ASD outcome was available at the time. In the present study, we analysed the underlying spectrogram by breaking down the signal into different frequency bands 59 , which reveals more specific information about the functional organisation of brain activity within the same study design as classic ERP paradigms. Two features of the EEG signal that were expected to produce a robust change with stimulus repetition in the neurotypical brain were evoked high-frequency gamma and inter-trial phase-coherence in the alpha/beta band 60,61 . Both indices have been linked to E/I balance in previous work 53,54 . We focused on evoked/ time-locked, and not induced activity, because these signals are easier to separate from some of the non-timelocked artefacts that are common in the analysis of ongoing EEG (such as motion, myogenic artefact or electrical line-noise).
As we were concerned about multiple comparisons and the likelihood that effects may be subtle, we took two a priori analytic decisions. First, because the previous literature did not provide clear guidance for a priori selection, we selected time windows, AOIs and frequency bands based on data from the low-risk infants. Specifically, we identified bands/areas associated with habituation of gamma responses across repeated tones in the lowrisk group. We then restricted the analysis of the high-risk group to those bands/areas. This avoids multiple comparisons within the high-risk analysis, but does increase the possibility that there are group differences in other scalp regions that we did not assess. Post-hoc analyses conducted to mitigate this are included in the SM. For the ITC, we analysed activity over the same region of interest but selected the time-window and frequency band based on both previous literature and the aggregated grand average technique. Second, two approaches to group analysis have been traditionally used in the infant sibling literature. One is to compare indices across four 'outcome' groups: low risk, high-risk typical development, high-risk atypical development or high-risk ASD. We did not follow this path because we had already used the lowrisk group to identify windows of interest (and thus including them in comparisons was unfair). Also, the intermediate HR-Atyp group was excluded from main analysis as (1) it would reduce power and (2) no clear predictions could be made about this group (for analysis including the HR-Atyp group, see SM6). The second strategy is to compare responses between high-risk infants with typical development (HR-TD) and those with ASD (HR-ASD), which is the most faithful design to the case/ control approach used in the literature 58,62 . This approach was used in the present manuscript. It was predicted that there would be reduced or absent repetition suppression response in the gamma band as well as elevated phasecoherence to repeated tones in the ASD group relative to the high-risk typically developing group 53,55,63 .
To exploit the full dimensional nature of the high-risk design, we then examined whether a cortical reactivity index (CRI) (a composite of gamma habituation and ITC) was associated with dimensional variation in later language skills across the whole cohort (LR, HR-TD, HR-Atyp, HR-ASD 45,64 ). The term 'reactivity index' was selected to represent the conceptual nature of these EEG responses. Specifically, when a repeated train of stimuli are encountered, ongoing oscillatory activity in the brain can change in two ways. The power or amplitude of oscillations can increase in response to the stimulus; and/ or the phase of the oscillation can align with the timing of the stimulus in a consistent way across trials 65 . Both these reactions produce larger amplitude event-related brain responses and thus in a conceptual sense reflect reactivity. Thus, reduced gamma habituation and higher ITC would additively work to produce a higher CRI. Given the critical role of E/I coordination in experience-dependent specialisation 66,67 , it was predicted that alterations in neural responses to auditory tones should be associated with poorer language outcomes, as this is a key skill that develops through experience-dependent tuning to the sounds and patterns of the child's language environment 68,69 . Finally, it was expected that higher scores on the index would be positively associated with parentreport measures of ASD-related traits at 3 years 70 .
Participants
Participants were 116 high-risk (HR) (64 male; 52 female) and 27 low-risk (LR) (14 male; 13 female) children. 'High-risk' infants had an older full sibling with a community clinical diagnosis of ASD (recruited from the British Autism Study of Infant Siblings, BASIS-Phase 2; http://www.basisnetwork.org). Infants in the 'low-risk' group had no reported family history of ASD or other developmental or genetic disorders (recruited from a volunteer database at Birkbeck Centre for Brain and Cognitive Development), and had at least one older full sibling. Infants recruited for the study attended four visits, at 8 months and 14 months, with follow-up visits at 2 and 3 years. EEG data for the present study are taken from infants during their 8-month visit (Mean = 9.03m, Standard Deviation = 1.1m) and outcome data from the 3-year visit (Mean = 39.05m, Standard Deviation = 3.47m). The study was approved by the National Research Ethics Service London Central Ethical Committee (08/ H0718/76) and conducted in accordance with the Declaration of Helsinki (1964). Further details of inclusion/exclusion criteria and proband diagnostic phenotyping are provided in SM1: Clinical Assessment.
Within the high-risk group, developmental outcome at 3 years was used as a grouping variable (see Supplementary Materials SM1 for details of assessment). Sixty-four infants were considered typically developing and constituted the HR-TD group. Seventeen infant siblings met gold-standard criteria for ASD (determined by consensus clinical judgement of a group of expert clinical researchers based on information including the ADOS-2, ADI-R and interaction with the child; see SM1, ST2). Lastly, 32 infant siblings were classified as atypically developing, i.e. displaying some developmental concerns but not meeting criteria for an ASD diagnosis (HR-Atyp; Table ST1). Sensitivity analysis 71 of the total sample size with a power of 1−β = .80 revealed a population effect size of d = .21.
Stimuli
Sounds were presented in an oddball paradigm originally designed by Guiraud et al. 55,72 . Duration of the sound was 100 ms, with 5 ms rise and fall time. The inter-trial interval was 700 ms. A 'Standard' pure tone at 500 Hz was presented with a 77% probability. The paradigm also included two deviant or infrequent tones, which were presented with an 11.5% probability each. One infrequent sound was a white-noise deviant, while the other was a pure tone of 650 Hz (pitch deviant). The sound intensity was 70 dB SPL. The sounds were presented for 5-7 min or until the infant became too restless, which on average yielded 477 trials for low-risk and 462 trials for high-risk infants (see Table ST1 for trial breakdown). Following Guiraud et al. 55 and a priori hypotheses, responses to the first, second, and third presentation of a Standard tone were examined.
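The authors do not report their stimulus-delivery code; the following Python sketch only illustrates how a tone sequence with the stated probabilities, and the constraint (noted in the pre-processing section) that a deviant must follow at least two standards, could be generated. Function and label names are hypothetical, and the realised proportions will deviate slightly from the nominal values because of the constraint.

```python
import random

def make_oddball_sequence(n_trials=480, p_noise=0.115, p_pitch=0.115, seed=0):
    """Generate an illustrative oddball sequence.

    'std'   = 500 Hz standard tone (~77% of trials)
    'noise' = white-noise deviant   (~11.5%)
    'pitch' = 650 Hz pitch deviant  (~11.5%)
    A deviant is only allowed after at least two consecutive standards.
    """
    rng = random.Random(seed)
    seq, run_of_standards = [], 0
    for _ in range(n_trials):
        r = rng.random()
        if run_of_standards >= 2 and r < p_noise:
            seq.append("noise")
            run_of_standards = 0
        elif run_of_standards >= 2 and r < p_noise + p_pitch:
            seq.append("pitch")
            run_of_standards = 0
        else:
            seq.append("std")
            run_of_standards += 1
    return seq

seq = make_oddball_sequence()
print(seq[:10], round(seq.count("std") / len(seq), 2))
```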
Procedure
The auditory oddball task was administered at the end of a battery of visual EEG tasks. Infants were seated on the parent's lap facing the experimenter, who blew soap bubbles throughout the recording session to keep the infant calm and engaged. The experiment was conducted in a sound attenuated room, where the sounds were presented from two speakers, 1 m apart, and located 1 m in front of the infant. The Mullen Scales of Early Learning 73 were administered in the standardised format; with assessments completed in the same laboratory setting by a small team of experimenters. Parents were also asked to complete a set of questionnaires at home at each visit, including the Social Responsiveness Scale 70 at 3 years of age.
EEG recording and pre-processing

Electrophysiological activity was measured using an EGI 128-electrode Hydrocel Sensor Net with the vertex electrode as reference and sampled at 500 Hz. A 0.1-100 Hz band-pass filter was applied offline. The recording was segmented into 1000 ms sections (500 ms pre- and 500 ms post-stimulus presentation). Bad channels in each segment were marked by automatic artefact detection and visual inspection in NetStation (v. 4.5.6). Segments with pronounced artefacts, i.e. gross motor movement, eye blinks, or more than 25 bad channels, were rejected from analysis (104 clean data sets remained). All epochs exceeding 150 μV at any electrode were excluded. At least 30% of trials had to be retained in each category for a dataset to be included in the group analysis. For the remaining trials, channels with a noisy signal were interpolated from neighbouring channels with a clean signal using spline interpolation. Following Guiraud et al. 55 and Seery et al. 45 , all standards that followed either deviant were categorised by position, i.e. Standard 1, Standard 2, and Standard 3, to which the present analysis is confined. There was only one restriction during stimulus presentation: the deviant sound had to follow at least two standard tones. Due to this, there were notably fewer instances of S3 than S1 and S2. The final number of remaining, artefact-free trials did not significantly differ by group (all ps > .05, see ST1).
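As an illustration of the trial-level exclusion rules just described (more than 25 bad channels, the 150 μV amplitude criterion, and the requirement that at least 30% of trials survive in each category), a minimal NumPy sketch follows. It is not the NetStation pipeline actually used: peak-amplitude thresholding stands in for the combination of automatic and visual bad-channel marking, and spline interpolation of bad channels is omitted.

```python
import numpy as np
from collections import Counter

def epoch_keep_mask(epochs_uv, amp_crit_uv=150.0, max_bad_channels=25):
    """Keep an epoch unless more than `max_bad_channels` channels exceed the
    +/- amp_crit_uv peak-amplitude criterion.

    epochs_uv: array of shape (n_epochs, n_channels, n_samples), in microvolts.
    In the described pipeline, isolated bad channels would instead be
    spline-interpolated and epochs still exceeding 150 uV afterwards dropped.
    """
    peak = np.abs(epochs_uv).max(axis=2)          # (n_epochs, n_channels)
    n_bad = (peak > amp_crit_uv).sum(axis=1)      # bad channels per epoch
    return n_bad <= max_bad_channels

def dataset_qualifies(keep, labels, min_fraction=0.30):
    """A dataset qualifies only if >= min_fraction of trials survive in every category."""
    labels = np.asarray(labels)
    totals = Counter(labels)
    kept = Counter(labels[np.asarray(keep)])
    return all(kept[c] / totals[c] >= min_fraction for c in totals)

# Illustrative use with random data: 400 epochs, 128 channels, 500 samples
epochs = np.random.randn(400, 128, 500) * 40.0
labels = np.random.choice(["S1", "S2", "S3", "deviant"], size=400)
keep = epoch_keep_mask(epochs)
print(keep.sum(), dataset_qualifies(keep, labels))
```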
Wavelet transform
Four regions of interest (ROIs) in the left and right frontal and tempo-parietal cortex were chosen based on previous investigations into auditory gamma activity 24,25 (S1). One hundred and four pre-processed data sets were exported into MATLAB® using the free toolbox EEGLAB (v.13.6.5b, http://sccn.ucsd.edu/eeglab/) and re-referenced to the average reference. For analysis of evoked gamma, epochs of raw EEG data were averaged together. A custom-made collection of scripts, WTools (available upon request from Dr Eugenio Parise via email: eugenioparise@tiscali.it), was used to compute complex Morlet wavelets at 1 Hz intervals between 1 and 80 Hz. A continuous transformation was applied to all epochs through convolution with a wavelet at each frequency in the chosen range, taking the absolute value as a result (i.e. amplitude not power 75 ). To reduce distortion created by convolution, padding of 100 ms at the start and end of the segment was applied to the individual data sets. A baseline period was set between −200 and 0 ms and subtracted from the post-stimulus responses to remove any residual 50 Hz (electrical) noise in the data and to control for pre-stimulus preparatory activity. Amplitude was extracted for low (20-40 Hz) and high (40-60 Hz) gamma bands (30-150 ms post-stimulus onset). The low and high bands were derived from previous examinations of gamma-band activity in infants 57 . Such evoked (rather than induced or baseline) gamma reflects responses time-locked to the stimulus, reducing the likelihood of contamination by muscle artefact or electrical noise.
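The evoked-gamma analysis was run in MATLAB with the WTools scripts; the Python sketch below reproduces the described steps for a single trial-averaged channel (complex Morlet wavelets at 1 Hz intervals, amplitude rather than power, −200 to 0 ms baseline subtraction, and averaging of 40–60 Hz amplitude over 30–150 ms). The wavelet width (n_cycles) is an assumption, and the 100 ms padding step is omitted for brevity.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_amplitude(evoked, sfreq, freqs, n_cycles=7.0):
    """Time-frequency amplitude of a trial-averaged (evoked) signal using
    complex Morlet wavelets; returns an (n_freqs, n_samples) array."""
    out = np.empty((len(freqs), evoked.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)                      # temporal std (s)
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wav /= np.sqrt(np.sum(np.abs(wav) ** 2))                    # unit energy
        out[i] = np.abs(fftconvolve(evoked, wav, mode="same"))      # amplitude, not power
    return out

sfreq = 500.0
times = np.arange(-0.5, 0.5, 1.0 / sfreq)          # 1 s epochs: -500 to +500 ms
freqs = np.arange(1, 81)                           # 1-80 Hz in 1 Hz steps
evoked = np.random.randn(times.size)               # placeholder channel average
amp = morlet_amplitude(evoked, sfreq, freqs)
amp -= amp[:, (times >= -0.2) & (times < 0.0)].mean(axis=1, keepdims=True)  # baseline
high_gamma = amp[(freqs >= 40) & (freqs <= 60)][:, (times >= 0.03) & (times <= 0.15)].mean()
print(round(float(high_gamma), 3))
```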
Secondly, ITC measures were calculated (collapsed across all standards to improve signal-to-noise ratio). This measure represents whether the distribution of phase angles at a single time-frequency point is uniform, with 1 = perfect phase consistency across trials and 0 = completely random phase angles 60,76,77 . When extracting ITC, small trial numbers can artificially skew results 76 . To identify whether a phase-locking response occurred, we collapsed across all standards for this analysis, such that there were at least 100 good trials included per infant. The individual number of trials per data set was entered as a covariate in the model, which also reduced the possibility of introducing further biases from selecting a fixed subset of trials. The number of trials did not affect the stability of the oscillatory response; the common cut-off for the number of trials used in previous work ranges from one 78 to between 200 and 800 79,80 , with many studies not reporting these values 81,82 . Average ITC was extracted using EEGLAB functions in the 10-20 Hz band, with this frequency band and time-window selected based on the aggregated grand average, previous work with individuals with Fragile X 53 and the onset timing of the P150 infant ERP component, which has been shown to be sensitive to reduced habituation in a subset of the high-risk infants (100 to 180 ms; Figure S2) 55 . However, because ITC is also commonly measured in the theta range in previous studies of auditory processing in infancy, analysis of ITC in the 3-6 Hz band is also included in the SM (SM5.2. ITC analysis 3-6 Hz).
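Conceptually, ITC at a given frequency and latency is the length of the average of the unit phase vectors across trials (1 = perfectly aligned phase, 0 = uniformly random phase). The sketch below computes this quantity directly for one channel and one frequency; it is a stand-in for the EEGLAB functions used by the authors, and the wavelet parameters are again illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def itc(epochs, sfreq, freq, n_cycles=7.0):
    """Inter-trial coherence for one channel and frequency.
    epochs: (n_trials, n_samples) single-trial EEG. Returns (n_samples,)."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
    wav = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    analytic = np.array([fftconvolve(tr, wav, mode="same") for tr in epochs])
    phase = analytic / np.abs(analytic)      # unit-length phase vectors per trial
    return np.abs(phase.mean(axis=0))        # resultant vector length across trials

# ITC averaged over 10-20 Hz and 100-180 ms, collapsed across all standard tones
sfreq = 500.0
times = np.arange(-0.5, 0.5, 1.0 / sfreq)
epochs = np.random.randn(120, times.size)    # placeholder: >= 100 standard trials
band = np.stack([itc(epochs, sfreq, f) for f in range(10, 21)])
itc_10_20 = band[:, (times >= 0.10) & (times <= 0.18)].mean()
print(round(float(itc_10_20), 3))
```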
Analysis of the P150 component response has been conducted and visualised separately in the SM 40,45,55 (details in Figure S2, Supplementary Materials). ERPs are not discussed in the main text beyond this point given that they represent an unspecified combination of power and phase changes in oscillatory rhythms. On the other hand, time-frequency analysis directly relates to the hypothesis under investigation and provides greater sensitivity revealing differences in auditory perception in the developing brain 83 .
Analysis strategy
The analysis approach was designed to constrain the number of contrasts made in testing effects of developmental outcome, in order to minimise Type 1 error and maximise power in this relatively modest sample. Accordingly, gamma responses to tone repetition were first analysed within the low-risk group to select the topography and frequency band associated with a normative repetition suppression (RS) response. RS was defined as a reduction in gamma amplitude between the first and third standards, given that oscillatory response asymptotes after a second repetition 84 . We then focused on that region and frequency band, contrasting responses between the HR-TD and HR-ASD groups 85,86 . Greenhouse−Geisser corrections were applied where appropriate. Similarly, we investigated phase-locking by comparing ITC over the scalp region used in the analysis for HR-ASD and HR-TD groups. The HR-Atyp group were excluded from these analyses for greatest comparability with work with older children using case/control designs 85,86 ; and again to maximise power given the relatively modest sample. However, their data are presented in SM6 (in addition to broadly consistent analyses presented using the alternative approach of pooling the high-risk infants with typical and atypical development into 'HR-noASD' groups).
Repetition suppression of evoked gamma and low-frequency ITC were then used to create a 'Cortical Reactivity Index' by z-scoring each measure across the cohort (lower repetition suppression of gamma between the first and third Standard and greater ITC would give a higher index score), and then averaging the values. We then examined effects of group and continuous relations with dimensional outcomes across the whole cohort. These included development in language skills (difference scores of Expressive and Receptive Language subscales, calculated by subtracting 8-mo from 36-mo scores) 73 , ADOS severity scores 87 at 36 months, as well as the t-score total on the Social Responsiveness Scale™ (SRS™ 57 ) at 36 months.
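A minimal sketch of the composite index as described: z-score each measure across the cohort and average the two. Column names are hypothetical; the sign convention follows the text, in that a less negative Standard 3 − Standard 1 gamma difference (i.e. weaker repetition suppression) and a larger ITC both push the index upward.

```python
import numpy as np
import pandas as pd

def cortical_reactivity_index(df):
    """Composite CRI: mean of z-scored gamma repetition-suppression difference
    (Standard 3 - Standard 1; less suppression -> larger value) and z-scored
    10-20 Hz ITC. Higher scores indicate greater cortical reactivity."""
    z = lambda s: (s - s.mean()) / s.std(ddof=0)
    return (z(df["gamma_S3_minus_S1"]) + z(df["itc_10_20"])) / 2.0

# Illustrative use with placeholder values
df = pd.DataFrame({
    "gamma_S3_minus_S1": np.random.randn(100) * 0.1,
    "itc_10_20": np.random.uniform(0.05, 0.40, 100),
})
df["CRI"] = cortical_reactivity_index(df)
print(df["CRI"].describe().round(2))
```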
Repetition suppression analysis
Low risk

A paired-samples t test was carried out to examine reductions in amplitude of evoked gamma between the first and third repetition of each standard (1st vs. 3rd Standard) over left and right tempo-parietal regions in the two frequency bands respectively. This indicated a significant decrease in high (40-60 Hz) gamma amplitude (repetition suppression) over the right tempo-parietal region [t(13) = 2.58, p = .023, η2 = .35] but not the left [t(13) = −.65, p = .527, η2 = .034], with no significant differences in the 20-40 Hz band (ts < 1.6, ps > .54). Corresponding analyses over frontal ROIs revealed no significant effects across either gamma band (ts < 1, ps > .21).
High-risk (40-60 Hz)
Based on the pattern of findings in LR infants, analysis of ASD outcome was constrained to the 40−60 Hz frequency band over the right tempo-parietal region. A one-way ANOVA revealed an effect of group on Standard 3 − Standard 1 difference scores [F(1,55) = 6.67, p = .012, η2 = .105], which remained significant after co-varying trial numbers [F(1,55) = 6.53, p = .013, η2 = .105]. Specifically, there was a decrease in gamma activation between the first and third repetition in the HR-TD group, relative to an increase in the HR-ASD group. Figure 1 illustrates the responses for the three groups; SM3 indicates that responses were not due to ocular muscle activity 88 . Notably, the main effect of group persisted when the HR-Atyp group was included in the model (see SM5). Our additional post-hoc analyses of evoked theta (3-6 Hz over 50-400 ms) and late evoked gamma (40-60 Hz over 200-350 ms) did not reveal significant differences for the high-risk groups (SM5).
Cortical reactivity index
A composite CRI was created by computing z-scores for the 40-60 Hz evoked gamma and 10-20 Hz ITC responses for the whole high-risk group, and then averaged across the two indices. A higher score on the index would reflect diminished auditory repetition suppression. As expected, an ANOVA indicated significantly higher scores in the HR-ASD than the HR-TD group [F(1,55) = 15.16, p < .001, η 2 = .22]. A logistic regression indicated that this index correctly classified 93% of the HR-TD group and 50% of the ASD group [β = 2.74, S.E. = 0.87, w = 9.89, p = .002]. When the whole sample was included, the main effect of group remained significant, and HR-ASD infants maintained significantly higher CRI scores than the other groups (ps between .002 and .045; see SM4).
As shown in Fig. 2, the composite score was also significantly associated with change in Receptive Language scores between 8 and 36 months, as well as with SRS™ scores at 36 months.
Discussion
Alterations in excitatory and inhibitory signalling are a feature of several leading neurobiological theories of sensory perturbations observed in ASD, with some researchers proposing that behavioural symptomatology is the cumulative developmental consequence of altered excitatory and inhibitory coordination within cortical systems 2,3,6,7,89 . To our knowledge, we present the first human evidence that elevated cortical reactivity is present in infants with later ASD prior to the emergence of behavioural symptoms. This is consistent with the presence of alterations in excitatory and inhibitory function in infancy. In our work, reduced repetition suppression of evoked gamma and generally increased alpha/beta phase-locking to auditory repetition were present in 8-month-old infants with later ASD. A combined index of cortical reactivity was dimensionally associated with reduced receptive language growth and elevated social communication symptoms at 3 years across the entire cohort. Altered EEG responses to repetition may therefore be a candidate stratification biomarker for language functioning in ASD. As atypical cortical auditory reactivity has been previously linked to weakened GABA circuits in mouse infancy 67 , our results could reflect a lack of regulation within the E/I balance in infants with later ASD.
Enhanced reactivity within the developing cortex may relate to the alterations in connectivity that have been broadly observed in ASD, due to the experiencedependent nature of synapse development 43,90,91 . Indeed, 14-month-old infants with later ASD and high levels of restricted and repetitive behaviours show significant overconnectivity between frontal and temporal regions 90 . Increased spontaneous activity in early development could also be associated with structural overgrowth 92,93 , because internal activity-driven processes contribute to brain growth 94 . Future work combining both MRI and EEG techniques in young infants will allow us to investigate the bidirectional links between structure and function in the infant brain. Further, longitudinal studies with repeated EEG phenotyping are necessary to determine whether the observed alterations in cortical reactivity represent a developmentally pervasive phenomenon, or a transient developmental delay in the normal increases in inhibitory activity observed in early infancy 2,20 . Several influential theories propose significant mechanisms that would alter the expression of excitatory/ inhibitory processing over developmental time 3,7,20 . Cortical reactivity to repetition of sensory stimuli could reflect processes that disrupt the experience-dependent specialisation of the social brain, as the inhibitory signals critical for repetition suppression are also critical in shaping sensitive periods in early development 16,66 . Altered trajectories of specialisation are an emerging theme from prospective studies of high-risk infants. Across the first year of life infants with later ASD show a decline in interest in eyes and faces 62 , emergence of slowed attention-shifting 95 and early signs of language delays 33,86 . Such behavioural changes are related to alterations in specialised brain activity. For example, between 5 and 6 months, infants with later ASD show reduced sensitivity over temporal cortex to human sounds 96,97 and altered ERP responses to pictures of faces 98,99 . Intriguingly, our index of cortical reactivity related to dimensional variation in receptive language growth between 8 and 36 months, an age-range associated with specialisation of language regions 100 . Language acquisition is dependent on attention to novelty and change, and it is likely that poor tuning to repetition would have a detrimental effect on this process. Testing how our index of cortical reactivity relates to more finegrained indices of language and social communication through a longitudinal investigation will be an important next step.
Elevated cortical reactivity could result from a range of impairments at the molecular level, contributing to reduced inhibition or increased excitation. Select environmental risk factors may have converging effects through oxidative stress, which is thought to impact parvalbumin inhibitory interneurons. This is thought to be a key mechanism for the regulation of E/I balance and maintenance of neural microcircuitry 16,101 , and to form a unified risk pathway for schizophrenia and some forms of autism 102,103 . This is supported by Selten et al., who provide succinct evidence of cascading effects of inhibitory system dysregulation in the cortex and hippocampus on signal processing and later phenotypic changes associated with psychiatric and neurodevelopmental disorders 6 . Recent work with knock-out models of ASD (including mouse models of Fragile X) also implicates early increases in spontaneous synchronous activity and upregulation of synaptic turnover in sensory cortices as a common phenotype, consistent with excess cortical reactivity 11,104 . In human adults with Fragile X syndrome, reduced habituation of neural responses to tones was associated with elevated gamma at baseline 53 , which in animal models can be rescued with the GABA B -receptor agonist arbaclofen 17 . We did not examine baseline or resting state gamma, previously found to be atypical in high-risk populations 43,105 , due to the increased likelihood of contamination by muscle artefact, and so pharmacological manipulation studies of the repetition suppression of gamma responses are warranted. Nonetheless, the results provide an index suitable for measuring the effects of novel pharmaceutical treatments on core biochemical pathways implicated in autism.

Fig. 2 Associations between cortical reactivity index scores and behavioural data. a Cortical reactivity index z-scores for all outcome groups. Note that the HR-Atyp group is visualised here but was not included in the HR-TD vs. HR-ASD comparison. Composite score for difference in evoked gamma and ITC responses (10-20 Hz) over the right tempo-parietal ROI was associated with b smaller change in Receptive Language scores between 8 and 36 months and c higher SRS™ scores, a dimensional measure of ASD-related traits. LR low risk, HR-TD high-risk-typically developing, HR-Atyp high-risk-atypical development, HR-ASD high-risk-autism spectrum disorder. Note: the fit line is for an average of all infants; error bars depict standard error of the mean.
These results further complement previous findings of altered habituation profiles in ASD. Reduced habituation/ repetition suppression of ERP responses have been observed in a subsample of the present cohort of high-risk infants 55 and repetition of speech sounds in other samples 45 . In the visual domain, toddlers with ASD showed delayed adaptation to repetition, particularly to social stimuli 99 , while adults showed diminished repetition suppression with increasing levels of ASD traits 106 . Repetition suppression is a critical process by which the brain devotes resources to novel and unexpected stimuli. Atypical/reduced repetition suppression is therefore likely to affect the efficacy with which the brain encodes complex stimuli, such as language, which rely heavily on focusing attention on important dimensions of change in auditory signals 64,107 . Atypical repetition suppression could also contribute to exaggerated sensory sensitivities observed in children with ASD 49 , as well as delayed language ability, a strong predictor of developmental outcomes in individuals with ASD 108,109 .
In summary, findings from the current study indicate that cortical reactivity is present at 8 months of age in infants who later develop ASD. It remains to be established whether these responses are stable in early development, or change dynamically over time. This study provides the first evidence for the emergence of cortical consequences within the right tempo-parietal region from alterations in E/I coordination as early as 8 months of age in infants with later ASD, and offers candidate mechanistic pathways by linking alterations in gamma to individual level differences in later language and social functioning.
Relationship between Egg Consumption and Body Composition as Well as Serum Cholesterol Level: Korea National Health and Nutrition Examination Survey 2008–2011
We analyzed the relationship between egg consumption, body composition, and serum cholesterol levels. We obtained data on egg consumption by using a food frequency questionnaire (FFQ) (13,132 adults) and the 24-h dietary recall (24HR) (13,366 adults) from the fourth and fifth Korea National Health and Nutrition Examination Surveys (2008–2011). In men, consuming 2–3 eggs/week was associated with higher fat mass (FM), percentage body fat (PBF), and fat-to-muscle ratio (FtoM), compared to consuming <1 egg/week. In women, consuming 1–6 eggs/week was associated with higher low-density lipoprotein cholesterol, consuming 2–6 eggs/week was associated with higher total cholesterol, and consuming 4–6 eggs/week was associated with higher FM and high-density lipoprotein cholesterol, compared to consuming <1 egg/week. There was no relationship between egg consumption and the prevalence of dyslipidemia, and there was no relationship between egg consumption, body composition, and serum cholesterol levels according to the 24HR. However, consumption of certain amounts of eggs was associated with the prevalence of some other cardiovascular conditions. Egg consumption assessed by the FFQ was associated with body composition and serum cholesterol levels, whereas egg consumption assessed by the 24HR showed no association, beneficial or harmful, with body composition or cholesterol.
Introduction
Eggs are one of the most consumed food groups worldwide [1,2]. They contain various proteins, lipids, vitamins, and minerals. In particular, eggs contain high-quality protein rich in various amino acids that promote protein synthesis. One large egg contains up to 6.3 g of protein, which provides antibacterial and immunoprotective properties to the human body [2]. Therefore, it is plausible that egg consumption may influence body composition. However, studies on egg consumption and body composition have seldom been reported. Liu et al. showed that, among Chinese adults, excess body fat did not change with egg intake in men but decreased with increasing egg intake in women [3]. Previous studies examining the risk of metabolic syndrome according to egg consumption also examined changes in waist circumference (WC) and body mass index (BMI) according to egg consumption, but found no effects of egg consumption on body composition [4][5][6][7][8].
In addition to proteins, each egg contains 200-275 mg of cholesterol, making it one of the main sources of dietary cholesterol intake [9]. Studies on whether dietary cholesterol intake affects blood cholesterol and lipid levels have shown inconsistent results [10][11][12]. The effects of egg consumption on serum cholesterol and lipid levels have also been reported, but the results were also inconsistent, as follows: (1) there was no significant change in serum low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), and triglyceride (TG) levels according to egg intake [4]; (2) the higher the frequency of egg consumption, the higher the serum total cholesterol (TC) level [5]; (3) serum LDL-C and HDL-C concentrations tended to increase in proportion to egg intake [13]. Guidelines regarding daily cholesterol intake also present conflicting recommendations. The 2000 American Heart Association guidelines recommend less than 300 mg/day of TC consumption, which is equivalent to 1 or 1 and a half eggs per day [14]. In contrast, restrictions on dietary egg consumption have been removed from the 2015-2020 Dietary Guidelines for Americans [15,16]. The American Heart Association, British Heart Foundation, Australian Heart Foundation, and New Zealand Heart Foundation have recently relaxed restrictions on egg consumption [17,18].
Therefore, in this study, we analyzed the differences in serum cholesterol concentration and body composition distribution according to egg consumption in adults. Although most previous studies investigated egg intake using the food frequency questionnaire (FFQ), this study investigated egg consumption using both FFQ and the 24-h dietary recall (24HR).
Study Population
This study obtained data from the 4th and 5th Korea National Health and Nutrition Examination Surveys (KNHANES), which is a nationally representative survey conducted between 2008 and 2011 by the Korea Disease Control and Prevention Agency. To assess the relationship between serum cholesterol level, body composition, and egg consumption, data from 18,915 participants aged ≥19 years who underwent dual-energy X-ray absorptiometry (DXA) were examined. Participants who had severe declines in kidney function (estimated glomerular filtration rate <30 mL/min/1.73 m 2 ), a history of cancer diagnosis, inappropriate fasting duration before examination (>24 h or <8 h), inappropriate nutritional intake (<500 kcal/day or >5000 kcal/day), inappropriate water intake per body weight (≥90 g/kg), and missing data in questionnaire records or unavailable test results were excluded from the analyses. We obtained data on egg consumption by using FFQ from 13,132 participants (5407 men and 7725 women) and 24HR from 13,366 participants (5522 men and 7844 women).
All procedures were approved by the ethics committee of the Korea Disease Control and Prevention Agency (Approval Number: 2011-02CON-06-C, 2010-02CON-21-C, 2009-01CON-03-2C, and 2008-04EXP-01-C), and the study was carried out in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Signed informed consent was obtained from all KNHANES participants. The KNHANES data were publicly available.
Assessment of Egg Consumption
Diet was assessed using a self-administered FFQ and the 24HR. The FFQ consists of a list of 63 commonly consumed food items in Korea. Each food item has ten frequency response options (almost never, 6-11 times/year, 1 time/month, 2-3 times/month, 1 time/week, 2-3 times/week, 4-6 times/week, 1 time/day, 2 times/day, and 3 times/day). We re-categorized the frequency of egg consumption as <1 time/week, 1 time/week, 2-3 times/week, 4-6 times/week, and ≥1 time/day [5]. For the 24HR, participants self-reported the type and amount of food consumed in the past 24 h. We categorized egg consumption as "not consumed" and "consumed" by using data from the 24HR.
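A small pandas sketch of the re-categorisation described above. The response wordings and column names are illustrative, and the assignment of the monthly and yearly options to the '<1 time/week' bin is an assumption consistent with the stated category boundaries.

```python
import pandas as pd

# Map the ten FFQ responses onto the five analysis categories
ffq_to_category = {
    "almost never": "<1/week", "6-11 times/year": "<1/week",
    "1 time/month": "<1/week", "2-3 times/month": "<1/week",
    "1 time/week": "1/week", "2-3 times/week": "2-3/week",
    "4-6 times/week": "4-6/week",
    "1 time/day": ">=1/day", "2 times/day": ">=1/day", "3 times/day": ">=1/day",
}

df = pd.DataFrame({"egg_ffq": ["1 time/week", "2 times/day", "almost never"]})
df["egg_freq_cat"] = pd.Categorical(
    df["egg_ffq"].map(ffq_to_category),
    categories=["<1/week", "1/week", "2-3/week", "4-6/week", ">=1/day"],
    ordered=True,
)

# The 24HR variable is simply a consumed / not-consumed indicator
df["egg_24hr_g"] = [0.0, 60.0, 0.0]   # hypothetical grams consumed in the past 24 h
df["egg_24hr_cat"] = (df["egg_24hr_g"] > 0).map({True: "consumed", False: "not consumed"})
print(df)
```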
Assessment of Body Composition and Serum Cholesterol Level
Body composition was measured using DXA (Hologic Discovery, Hologic, Marlborough, MA, USA), including values for bone mineral content (BMC), fat mass (FM), fat-free mass (FFM), and percentage body fat (PBF) of the whole body and six regions (head, left arm, right arm, trunk, left leg, and right leg). Whole-body total FFM was calculated by subtracting the BMC from FFM. The fat-to-muscle ratio (FtoM) was calculated as the whole-body total FM divided by the whole-body total FFM. We defined obesity as BMI ≥25 kg/m 2 [19], abdominal obesity as WC ≥90 cm in men and WC ≥85 in women [19], and obesity according to PBF as PBF ≥25% in men and PBF ≥30% in women [20]. TC, TG, HDL-C, and LDL-C levels were measured using enzymatic methods (Hitachi Automatic Analyzer 7600, Hitachi, Tokyo, Japan). The abnormal levels of cholesterol were defined as TC ≥200 mg/dL [21], LDL-C ≥130 mg/dL [21], TG ≥150 mg/dL [19], and HDL-C <40 in men and <50 in women [19].
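The derived body-composition variables and cut-offs in this section reduce to a few lines of arithmetic; the sketch below uses hypothetical column names and assumes DXA outputs in kilograms.

```python
import numpy as np
import pandas as pd

def add_body_composition_variables(df, sex_col="sex"):
    """Derive whole-body total FFM (FFM minus BMC), the fat-to-muscle ratio,
    and the obesity flags defined in the text."""
    out = df.copy()
    out["total_ffm_kg"] = out["ffm_kg"] - out["bmc_kg"]           # lean soft tissue
    out["fat_to_muscle"] = out["fm_kg"] / out["total_ffm_kg"]     # FtoM
    men = out[sex_col].eq("M")
    out["obesity_bmi"] = out["bmi"] >= 25                                         # kg/m^2
    out["abdominal_obesity"] = np.where(men, out["wc_cm"] >= 90, out["wc_cm"] >= 85)
    out["obesity_pbf"] = np.where(men, out["pbf_pct"] >= 25, out["pbf_pct"] >= 30)
    return out

example = pd.DataFrame({
    "sex": ["M", "F"], "ffm_kg": [54.0, 38.5], "bmc_kg": [2.8, 2.2],
    "fm_kg": [15.5, 19.0], "bmi": [24.1, 26.3], "wc_cm": [86.0, 88.0],
    "pbf_pct": [21.9, 32.9],
})
print(add_body_composition_variables(example))
```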
Variables
The general characteristics of the participants by sex were analyzed using data on age, FM, PBF, FFM, FtoM, BMI, WC, TC, TG, HDL-C, LDL-C, systolic blood pressure, daily energy intake (total energy intake, protein intake, and water intake per body weight), smoking (never, past, or current), alcohol consumption (<once/month or ≥once/month), physical activity (PA), education (0-6 years, 7-12 years, or ≥13 years), individual income quartiles, previous diagnosis of dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris by doctors, treatment for hypertension, and survey year. PA was described as metabolic equivalent (MET), or classified as low, moderate, or high PA based on the data processing and analysis guidelines of the International Physical Activity Questionnaire (IPAQ) [22]. We also calculated predicted 10-year risk of a first hard atherosclerotic cardiovascular disease (ASCVD) event [23].
Statistical Analysis
Statistical analyses were conducted using STATA version 14.0 (StataCorp., College Station, TX, USA), and statistical significance was defined as p < 0.05. The sampling design for the KNHANES used two-stage stratified cluster sampling rather than simple random sampling. Thus, when analyzing the data, weights were applied by reflecting the contents of this complex sampling design. The Shapiro-Wilk test was used to evaluate normality of the data [24]. The analysis was conducted by transforming the data to a logarithmic scale to achieve a bell-shaped (approximately normal) distribution of the required variables. Linear regression analyses and chi-square tests were used for the comparative analysis of general characteristics by sex. Linear regression analyses were performed to analyze the relationship between body composition and serum cholesterol levels and egg consumption. Logistic regression analyses were performed to compare the prevalence of dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris according to egg consumption. Multivariable models were performed by adjusting for age, BMI, daily energy intake, smoking, alcohol consumption, PA, education, income, and survey year.
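The original analyses were run in Stata 14 with the complex-design (survey) procedures; the Python/statsmodels sketch below is only a weight-aware approximation with placeholder data. It applies the sampling weights but not the strata/cluster variance estimation that the KNHANES design requires, and simply illustrates the two model types used: weighted linear regression for (log-transformed) continuous outcomes and weighted logistic regression for disease prevalence.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_tc": np.log(rng.normal(185, 30, n)),        # log-transformed outcome
    "egg_2_3_wk": rng.binomial(1, 0.3, n),           # indicator vs. <1 egg/week
    "age": rng.uniform(19, 80, n),
    "dyslipidemia": rng.binomial(1, 0.2, n),
    "svy_weight": rng.uniform(0.5, 2.0, n),          # KNHANES sampling weight (placeholder)
})

X = sm.add_constant(df[["egg_2_3_wk", "age"]])

# Weighted linear regression for a continuous outcome
wls = sm.WLS(df["log_tc"], X, weights=df["svy_weight"]).fit()

# Weighted logistic regression for a binary outcome; freq_weights is used here
# only as a rough stand-in for probability weights
logit = sm.GLM(df["dyslipidemia"], X, family=sm.families.Binomial(),
               freq_weights=df["svy_weight"]).fit()

print(wls.params["egg_2_3_wk"], np.exp(logit.params["egg_2_3_wk"]))  # beta and odds ratio
```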
General Characteristics of the Study Population by Sex
The mean age of the 13,366 participants whose data for 24HR were available was 44.43 years old, and 58.69% were women (Tables S1 and S2). Tables 1 and 2 shows the general characteristics of the participants according to sex. Among the 13,132 participants whose data for FFQ were available, the mean age was 44.28 years old, and 58.86% were women. The mean FM (15.52 ± 0.12 vs. 19.01 ± 0.10), PBF (21.92 ± 0.13 vs. 32.85 ± 0.12), and FtoM (0.30 ± 0.002 vs. 0.53 ± 0.002) were higher in women than in men. FFM (51.21 ± 0.13 vs. 35.99 ± 0.08 1) was higher in men than in women. Mean TC (186.67 ± 0.61 vs. 185.80 ± 0.53), TG (124.58 ± 1.01 vs. 91.31 ± 1.01), and LDL-C (112.86 ± 0.96 vs. 109.96 ± 0.90) were higher and HDL-C (46.18 ± 0.20 vs. 51.30 ± 0.18) was lower in men than in women. The proportion of dyslipidemia diagnoses by doctors was higher in women than in men. The proportion of hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris diagnoses by doctors was higher in men than in women. In addition, ASCVD risk was higher in men than in women. WC, BMI, total energy intake, and water intake per body weight were higher in men than in women. The proportion of current smokers, alcohol consumers, participants with a high PA level, and highly educated participants was higher in men than in women. The proportion of participants who consumed eggs more than once a day was higher in women than in men (7.42% vs. 8.17%) by using the FFQ; otherwise, in the 24HR, the proportion of participants who consumed eggs was slightly higher in men than in women (47.21% vs. 46.38%). In both methods, those who consumed more eggs tended to be younger and have more daily protein intake in both men and women. BMI, the proportion of obesity, and the proportion of obesity according to PBF in men were significantly different between groups and were higher in the group that consumed eggs compared to the group that did not consume eggs during the previous day. WC did not differ between the groups according to egg consumption in men. In women, PBF, BMI, WC, the proportion of obesity, and the proportion of abdominal obesity tended to decrease as egg consumption increased by both FFQ and 24HR.
TG in men and TC, TG, HDL-C, LDL-C, and the proportion of abnormal cholesterol levels in women were significantly different between groups by using the FFQ. The proportion of hypertension, diabetes mellitus, myocardial infarction, and angina pectoris diagnoses by doctors and ASCVD risk in men, and the proportion of dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris diagnoses by doctors and ASCVD risk in women, were significantly different between groups by using the FFQ. By the 24HR, TG and the proportion of abnormal HDL-C levels in men and TC, TG, LDL-C, and the proportion of abnormal cholesterol levels in women were higher, and HDL-C in women was lower, in the group that did not consume eggs compared to the group that consumed eggs during the previous day. Likewise, the proportion of hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris diagnoses by doctors and ASCVD risk in men, and the proportion of dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris diagnoses by doctors and ASCVD risk in women, were higher in the group that did not consume eggs compared to the group that consumed eggs during the previous day.
Men who consumed more eggs tended to have more nutritional intake, to be current smokers, to be physically active, to be highly educated, and to earn higher income by FFQ. However, by using the 24HR, men who consumed one or more eggs per day tended to have more nutritional intake, to be current smokers, to be alcohol consumers, to be highly educated, and to earn higher income than men who did not consume eggs. Women who consumed more eggs tended to have more nutritional intake, to be highly educated, and to earn higher income by FFQ. Unlike in men, women who consumed less eggs tended to be physically active. In the 24HR, the proportion of participants with more nutritional intake, alcohol consumption, high education, and high-income levels was higher among women who consumed eggs compared to those who did not consume eggs during the previous day. In the case of PA, as in the FFQ, the proportion of high PA levels was higher among women who did not consume eggs compared to those who consumed eggs during the past 24 h.
Association between Serum Cholesterol Level, Prevalence of Dyslipidemia, and Egg Consumption
In men, egg intake was not significantly associated with TC, TG, HDL-C, or LDL-C levels after adjusting for potential confounding variables when using either the FFQ (Table 3) or the 24HR (Table S3). By the FFQ, those who consumed eggs 4-6 times per week had a higher prevalence of diabetes mellitus, and those who consumed eggs 2-3 times per week had a higher prevalence of stroke and myocardial infarction, compared to those who consumed less than 1 egg per week. The prevalence of dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris was not associated with egg consumption after adjusting for potential confounding variables by the 24HR. In women, those who consumed eggs 2-6 times per week had higher levels of TC than those who consumed less than 1 egg per week. Those who consumed eggs 4-6 times per week had higher levels of HDL-C, and those who consumed eggs 1-6 times per week had higher LDL-C levels, compared to those who consumed less than 1 egg per week by the FFQ (Table 4). When using the 24HR, egg consumption was not associated with TC, TG, HDL-C, or LDL-C levels after adjusting for potential confounding variables (Table S4). By the FFQ, those who consumed eggs 2-3 times per week had a higher prevalence of hypertension, those who consumed eggs 4-6 times per week had a higher prevalence of diabetes mellitus, and those who consumed eggs once per week had a higher prevalence of angina pectoris, compared to those who consumed less than 1 egg per week. The prevalence of hypertension was higher among women who consumed eggs compared to those who did not consume eggs during the past 24 h.
TC, total cholesterol; TG, triglyceride; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; NA, not applicable. Data are presented as beta coefficient ± standard error or odds ratio (95% confidence interval). a Multivariable linear (TC, TG, HDL-C, and LDL-C) or logistic (dyslipidemia, hypertension, diabetes mellitus, stroke, myocardial infarction, and angina pectoris) regression model adjusted for age, body mass index status, total energy intake, protein intake, water intake per body weight, smoking, alcohol drinking, physical activity, education, income, and survey year. b Log-transformed.
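To make the adjustment strategy in the footnote above concrete, the following is a minimal, hypothetical sketch of such a covariate-adjusted analysis in Python (statsmodels). The file name and column names are illustrative only, not the KNHANES variable names, and the sketch ignores the complex survey design (strata and sampling weights) that a full KNHANES analysis would normally account for.

```python
# Hypothetical sketch of the covariate adjustment described above; not the authors' code.
# File and column names are illustrative, and the KNHANES survey design is not modeled.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("knhanes_egg_subset.csv")  # hypothetical analysis-ready table

# Egg-intake frequency from the FFQ, with "<1/wk" as the reference category
df["egg_freq"] = pd.Categorical(
    df["egg_freq"], categories=["<1/wk", "1/wk", "2-3/wk", "4-6/wk", ">=1/day"])

covariates = ("C(egg_freq) + age + C(bmi_status) + energy_kcal + protein_g"
              " + water_per_kg + C(smoking) + C(alcohol) + C(physical_activity)"
              " + C(education) + C(income) + C(survey_year)")

# Linear model for a continuous outcome such as LDL-C
ldl_fit = smf.ols("ldl_c ~ " + covariates, data=df).fit()
print(ldl_fit.summary())

# Logistic model for a binary outcome such as physician-diagnosed dyslipidemia;
# exponentiated coefficients give odds ratios per egg-frequency category
dys_fit = smf.logit("dyslipidemia ~ " + covariates, data=df).fit()
print(np.exp(dys_fit.params))
```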
Association between Body Composition, Waist Circumference, and Egg Consumption
In men, FM, PBF, and FtoM were significantly greater in the group consuming eggs 2-3 times per week than in the group consuming less than one egg per week according to the FFQ. There was no significant difference in FFM or WC according to egg consumption (Table 5). Furthermore, there was no significant association between egg consumption and FM, PBF, FFM, FtoM, or WC in men by the 24HR (Table S5).
Table 5. Association between body composition, waist circumference, and egg consumption in men (food frequency questionnaire). Data are presented as beta coefficient ± standard error. a Multivariable linear regression model adjusted for age, body mass index status, total energy intake, protein intake, water intake per body weight, smoking, alcohol drinking, physical activity, education, income, and survey year.
In women, the group consuming eggs 4-6 times per week had higher FM than the group consuming less than one egg per week. There was no significant difference in PBF, FFM, FtoM, or WC according to egg consumption by the FFQ (Table 6). By the 24HR, there was no significant difference in FM, PBF, FFM, FtoM, or WC according to egg consumption (Table S6).
Table 6. Association between body composition, waist circumference, and egg consumption in women (food frequency questionnaire).
Discussion
The purpose of this study was to examine the relationship between egg consumption, serum cholesterol levels, and body composition distribution in Korean adult men and women using the 2008-2011 KNHANES. In men, consuming 2-3 eggs per week was associated with higher FM, PBF, FtoM, and prevalence of stroke and myocardial infarction, and consuming 4-6 eggs per week was associated with higher prevalence of diabetes mellitus than consuming less than one egg per week. In women, consuming 2-6 eggs per week was associated with higher TC, consuming 4-6 eggs per week was associated with higher HDL-C, FM, and prevalence of diabetes mellitus, consuming 1-6 eggs per week was associated with higher LDL-C, consuming 2-3 eggs per week was associated with higher prevalence of hypertension, and consuming 1 egg per week was associated with higher prevalence of angina pectoris, compared to consuming less than one egg per week. According to 24HR, there was no relationship between egg intake and health indicators, except for hypertension.
In two previous studies using the 2007-2008 KNHANES and the 2013 KNHANES data, similar trends were observed: higher egg intake was associated with younger age, higher education, and higher income levels [4,5]. This is consistent with the results of our study, and it can be inferred that young, highly educated participants with higher income levels are interested in a healthy diet and that their egg consumption naturally increased once eggs came to be recognized as a healthy food. According to Kim et al., subjects who frequently ate eggs tended to have higher intakes of protein and fat, as well as of other nutrients such as calcium, phosphorus, and riboflavin. In addition, the higher the egg intake, the greater the PA [5].
In a meta-analysis [25], LDL-C levels and the LDL-C/HDL-C ratio increased in proportion to egg consumption. Another meta-analysis found that egg consumption increased TC, LDL-C, and HDL-C levels but had no effect on the LDL-C/HDL-C ratio [26]. However, a recent study reported that consuming more than 3 eggs per week was associated with lower LDL-C levels and a lower LDL-C/HDL-C ratio compared to consuming up to one egg per week [27]. The effect of dietary cholesterol intake on blood cholesterol levels is limited [10][11][12]. In addition, the response to dietary cholesterol may vary depending on various conditions, individual characteristics, and the degree of compensatory mechanisms such as suppression of cholesterol synthesis when a large amount of dietary cholesterol is consumed [5,11,28,29]. In this study, egg intake showed a relationship with TC, HDL-C, and LDL-C levels in men and women. Although there was some association between egg consumption and serum cholesterol levels in our study, a dose-response relationship was not established. This is consistent with previous studies in which the effects of cholesterol intake on blood cholesterol levels varied from person to person [5,11,28,29]. In this study, no relationship was found between the prevalence of dyslipidemia and egg consumption, although there was some association between other cardiovascular diseases and consumption of certain amounts of eggs. Previous studies have reported that egg consumption has positive effects on metabolic syndrome [6][7][8], and follow-up studies on the relationship between egg consumption and the risk of dyslipidemia and metabolic syndrome are needed.
According to Liu et al., central obesity and excessive body fat improved in proportion to egg consumption in women, but there was no significant change in men [3]. In our study, BMI and WC decreased with egg consumption in women, but there was no significant change in men. However, FM, PBF, and FtoM in men and FM in women tended to increase in the groups with certain amounts of egg intake, whereas in the 24HR there was no change in body composition distribution according to egg consumption in either men or women. Although the mechanism by which egg intake affects body composition is not clear, eggs are rich in protein and essential amino acids and are involved in protein synthesis [2]; they are therefore considered capable of improving muscle mass, and it could be inferred that PBF would be reduced, or unaffected, by egg consumption. In our study, however, FM, PBF, and FtoM tended to increase with egg consumption, which may reflect the higher nutrient intake in the groups with certain amounts of egg intake compared with the reference group. In addition, excessive PBF was calculated using an equation and then categorized in the study by Liu et al. [3], whereas the FM values of participants in our study were measured by DXA, which might contribute to the difference in results. Studies on the effect of egg consumption on the distribution of body composition are still lacking. More detailed follow-up studies, including data on intake of other nutrients such as protein and fat and on the concentrations of the various hormones involved in muscle and fat accumulation in the body, are needed.
The strengths of this study are as follows. First, data from the KNHANES, in which egg consumption in Korean adult men and women was extensively surveyed over four years, were used. Since egg consumption differs among countries in terms of recipes and dietary patterns, country-specific analyses are necessary. Second, body composition data directly measured by DXA, the gold standard, were used. Finally, both the FFQ and the 24HR were used. Most previous studies used the FFQ to collect data on egg consumption [6,7,25]. FFQ data have the advantage of capturing the amount or frequency of egg intake in more detail, but the disadvantage of relying on individuals' imprecise long-term memories. Conversely, although information on the amount or frequency of egg intake collected through the 24HR is somewhat limited compared to the FFQ, recall bias may be smaller because individuals recall and record food consumed within the past 24 h. Nevertheless, the 24HR may not fully reflect an individual's usual eating habits and thus may lead to biased results. In this regard, focusing on the FFQ results, a dose-response relationship between egg intake and body composition distribution or serum cholesterol level has still not been established, but associations with specific amounts of egg intake were observed.
This study had several limitations. First, the data based on questionnaires may have recall bias. Second, since it was a cross-sectional study, it had a limited ability to demonstrate a causal relationship. Finally, although the analysis was performed by adjusting for clinically meaningful variables, potential confounding factors that were not considered could not be excluded.
The current results can be generalized to Koreans because of the large sample size, high response rate (about 80%), and the use of proportional systematic sampling through multistage stratification according to region, sex, and age group. Although various lifestyle factors were included in this study, multicollinearity was not detected in the regression analysis in which smoking, alcohol consumption, and PA were included as covariates. Previous studies have also reported that smoking, alcohol consumption, and PA can independently influence body composition [30,31] and cholesterol levels [32]. In 2018, egg consumption worldwide was 9.68 kg/person/year, while egg consumption in Korea was 12.93 kg/person/year [33]; in other words, egg consumption in Korea was slightly higher than the global average. Therefore, the unfavorable associations observed in this study may be most relevant in countries with egg consumption similar to or higher than that of Korea, whereas the favorable associations may be most relevant in countries with lower egg consumption than that of Korea.
Conclusions
In conclusion, this study found that egg consumption assessed by the FFQ was related to body composition as well as to serum cholesterol levels in Korean adult men and women. In men, body composition distribution rather than serum cholesterol levels was related to egg intake, whereas in women, serum cholesterol levels rather than body composition distribution were related to egg intake. Therefore, more attention should be paid to advice on egg intake in men at high risk of obesity and in women at high risk of elevated cholesterol. However, egg consumption assessed by the 24HR showed no association with benefit or harm in terms of cholesterol and body composition.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10245918/s1: Table S1. General characteristics according to egg consumption in men (24-h dietary recall). Table S2. General characteristics according to egg consumption in women (24-h dietary recall). Table S3. Association between serum cholesterol level, prevalence of dyslipidemia, and egg consumption in men (24-h dietary recall). Table S4. Association between serum cholesterol level, prevalence of dyslipidemia, and egg consumption in women (24-h dietary recall). Table S5. Association between body composition, waist circumference, and egg consumption in men (24-h dietary recall). Table S6. Association between body composition, waist circumference, and egg consumption in women (24-h dietary recall).
Informed Consent Statement: Written informed consent was obtained from all study participants.
Conflicts of Interest:
The authors declare no conflict of interest.
Defect symmetry influence on electronic transport of zigzag nanoribbons
The electronic transport of zigzag-edged graphene nanoribbon (ZGNR) with local Stone-Wales (SW) defects is systematically investigated by first principles calculations. While both symmetric and asymmetric SW defects give rise to complete electron backscattering region, the well-defined parity of the wave functions in symmetric SW defects configuration is preserved. Its signs are changed for the highest-occupied electronic states, leading to the absence of the first conducting plateau. The wave function of asymmetric SW configuration is very similar to that of the pristine GNR, except for the defective regions. Unexpectedly, calculations predict that the asymmetric SW defects are more favorable to electronic transport than the symmetric defects configuration. These distinct transport behaviors are caused by the different couplings between the conducting subbands influenced by wave function alterations around the charge neutrality point.
Introduction
As a truly two-dimensional nanostructure, graphene has attracted considerable interest, mainly because of its peculiar electronic and transport properties described by a massless Dirac equation [1,2]. It has been regarded as one of the most promising materials since its discovery [3][4][5] because charge carriers exhibit giant intrinsic mobility and a long mean-free path at room temperature [6,7], suggesting a broad range of applications in nanoelectronics [8][9][10][11]. Several experimental [4,8,12,13] and theoretical [2,14,15] studies are presently devoted to the electronic, transport, and optical properties [16] of graphene. By opening an energy gap between valence and conduction bands, narrow graphene nanoribbons (GNRs) are predicted to have a major impact on transport properties [17,18]. Most importantly, GNR-based nanodevices are expected to behave as molecular devices with electronic properties similar to those of carbon nanotubes (CNTs) [19,20]; for instance, Biel et al. [21] reported a route to overcome current limitations of graphene-based devices through the fabrication of chemically doped GNRs with boron impurities.
The transport properties of GNRs have been investigated by various experimental approaches, such as vacancy generation [22], topological defects [23], adsorption [24], doping [25], chemical functionalization [26][27][28], and molecular junctions [29]. Meanwhile, defective GNRs with chemically reconstructed edge profiles have also been experimentally evidenced [30] and have recently received much attention [31,32]. In particular, Stone-Wales (SW) defects, one type of topological defect, are created by a 90° rotation of any C-C bond in the hexagonal network [33], as shown by Hashimoto et al. [34]. More recently, Meyer et al. [35] investigated the formation and annealing of SW defects in graphene membranes and found that the formation of SW defects in graphene is energetically more favorable than in CNTs or fullerenes. Therefore, the influence of SW defects on the electronic transport of GNRs is crucial for understanding the physical properties of this novel material and for its potential applications in nanoelectronics.
In this brief communication, we investigate the influence of SW defects on the electronic transport of zigzag-edged graphene nanoribbons (ZGNRs). We find that the electronic structures and transport properties of ZGNRs with SW defects depend strongly on the symmetry of the defects. The transformation energies obtained for symmetric and asymmetric SW defects are 5.95 and 3.34 eV, respectively, and both kinds of defects give rise to quasi-bound impurity states. Our transport calculations predict different conductance behavior for symmetric and asymmetric SW defects: asymmetric SW defects are more favorable for electronic transport, while the conductance is substantially decreased in the symmetric defect configuration. These distinct transport behaviors result from the different coupling between the conducting subbands, influenced by the wave function symmetry around the charge neutrality point (CNP).
Model and methods
The geometry optimizations are performed using density functional theory as implemented in the SIESTA code [36,37]. We adopt standard norm-conserving Troullier-Martins pseudopotentials [38] to describe the ion-electron interaction. A numerical double-ζ polarized basis set is used, and the equivalent plane-wave cutoff for the real-space grid is chosen as 200 Ry. The generalized gradient approximation [39] proposed by Perdew, Burke, and Ernzerhof is employed for the exchange-correlation term. All nanostructure geometries were relaxed until the force acting on every atom was below 0.01 eV/Å.
The electronic transport properties of the nanoribbon device are calculated using the non-equilibrium Green's function (NEGF) methodology [40,41]. In order to self-consistently calculate the electrical properties of the nanodevices, we construct a two-probe device geometry in which the central region contains the SW defects and each lead consists of two supercells of pristine ZGNR, as shown in Figure 1. The equilibrium conductance G is obtained from the Landauer formula, G = G_0 T(E_F), where the transmission T as a function of the electron energy E is given by T(E) = Tr[Γ_l(E) G^R(E) Γ_r(E) G^A(E)], with Γ_{l(r)} = i(Σ_{l(r)} − Σ_{l(r)}^†), where Σ_l (Σ_r) represents the self-energy of the left (right) electrode and G^R (G^A) is the retarded (advanced) Green's function. The latter is calculated from the relation G^R(E) = [(E + iη) − H_S − Σ_l − Σ_r]^{-1}, where H_S is the Hamiltonian of the system. More details about the NEGF formalism can be found in Ref. [42].
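As a concrete illustration of the Landauer/NEGF expressions above, the following toy Python script evaluates T(E) for a short one-dimensional tight-binding chain with a single on-site "defect", using analytic surface self-energies for the semi-infinite leads. This is only a sketch of the formalism: the paper's transmission is computed from the self-consistent DFT (SIESTA) Hamiltonian of the two-probe ZGNR device, not from this toy model, and all parameters here are illustrative.

```python
# Toy NEGF/Landauer illustration (not the DFT calculation in the paper):
# transmission through a short 1-D tight-binding chain with one "defect" site.
import numpy as np

t = 1.0                       # nearest-neighbour hopping (energy unit)
N = 6                         # sites in the scattering region
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t
H[N // 2, N // 2] = 0.8 * t   # local "defect": shifted on-site energy

def lead_self_energy(E, t, eta=1e-9):
    """Retarded surface self-energy of a semi-infinite 1-D chain coupled by hopping t."""
    z = E + 1j * eta
    g = (z - np.sqrt(z**2 - 4 * t**2 + 0j)) / (2 * t**2)   # surface Green's function
    if g.imag > 0:            # pick the retarded (physical) branch
        g = (z + np.sqrt(z**2 - 4 * t**2 + 0j)) / (2 * t**2)
    return t**2 * g

def transmission(E):
    sigma_l = np.zeros((N, N), complex)
    sigma_r = np.zeros((N, N), complex)
    sigma_l[0, 0] = lead_self_energy(E, t)
    sigma_r[-1, -1] = lead_self_energy(E, t)
    gamma_l = 1j * (sigma_l - sigma_l.conj().T)             # broadening matrices
    gamma_r = 1j * (sigma_r - sigma_r.conj().T)
    G_R = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - sigma_l - sigma_r)
    G_A = G_R.conj().T
    return np.trace(gamma_l @ G_R @ gamma_r @ G_A).real     # T(E) = Tr[Gamma_l G^R Gamma_r G^A]

for E in np.linspace(-1.5, 1.5, 7):
    print(f"E = {E:+.2f} t   T(E) = {transmission(E):.3f}")
```

For the pristine chain (no on-site shift) the in-band transmission is 1, the single-channel Landauer limit; the defect reduces T(E) in an energy-dependent way, which is the same qualitative mechanism by which SW defects suppress conductance channels in the ZGNR device.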
In this study, we consider symmetric and asymmetric SW defects in 6-ZGNRs, where 6 denotes the number of zigzag chains (dimers) across the ribbon width [18]. Taking into account the screening between the electrodes and the central region, we use a scattering region 10 unit cells long and electrodes of 2 unit cells each to perform the transport calculations. The electronic temperature in the calculations is set to 300 K.
Results and discussions
Figure 1 shows the geometry of the defective ZGNR after relaxation. After introducing symmetric SW defects, the GNR shrinks along the width axis by 0.526 Å, and correspondingly, the nearest four H atoms move toward the central region by 0.21 Å. As a result, the bond angles at the edge near the SW defects are reduced from 120 to 116°, as shown in Figure 1c, d. In contrast to the shrinking along the width axis, the SW defects stretch from 4.88 to 5.38 Å along the length axis. No distinct change in the H-C bond length at the edge is observed. Thus, the effect of symmetric SW defects on the geometry is limited to the defective area, with mirror reflection around their axis. The effect of asymmetric SW defects on the geometry is far more complex: they twist the whole structure by shifting the left side upward while the right side is shifted downward. Hence, the mirror symmetry is broken by the asymmetric SW defects. The transformation energies for symmetric and asymmetric SW defects are 5.95 and 3.34 eV, respectively, implying that asymmetric SW defects are energetically more favorable than symmetric ones. Wave functions at the Gamma point of the highest-occupied electronic states (HOES) and the lowest-unoccupied electronic states (LUES) are depicted in Figure 2. As expected, the wave functions of the pristine even-index ZGNR at the Gamma point associated with the HOES and LUES exhibit well-defined parity with respect to the mirror plane, and in the case of symmetric SW defects the HOES and LUES keep the same parity because of the potential induced by the symmetric defects [25,43]. Note that, although the wave functions of both the pristine ribbon and the symmetric SW defect configuration have well-defined parity, their signs, especially for the electronic states below the CNP, are precisely opposite. For the asymmetric SW defects, the well-defined parity of the wave functions is not preserved. The broken wave function symmetry in this configuration leads to substantial electron backscattering below and above the CNP.
The central issue of this study is the influence of SW defects in ZGNRs on their electronic and transport behavior. ZGNRs are known to present a very peculiar electronic structure, that is, strong edge effects at low energies originating from wave functions localized along the GNR edges [44]. Spin-unpolarized calculations reveal that all ZGNRs are metallic, with sharply localized edge states at the CNP [25,43,44], while ab initio calculations that take spin into account find that a small band gap opens up [18]. The electronic band structures of the defective nanoribbons and the corresponding pristine GNR are shown for comparison. In the case of the pristine GNR, zone-folding effects give rise to nondegenerate bands for a- and b-spin states, and the corresponding spin bands shift upward and downward with respect to the CNP, respectively. This also leads to a gapless electronic structure as well as a 3G_0 conductance in the vicinity of the CNP (see Figure 3). Meanwhile, zone-folding effects create more subbands near the CNP, namely four a-spin subbands around 0.4 eV and four b-spin subbands around 0.4 eV. The presence of symmetric SW defects substantially splits the electronic bands, especially the b-spin bands above the CNP, as a result of band anticrossing at the Γ or π point. More importantly, the symmetric defects open a band gap of about 0.12 eV for the a-spin bands and 0.09 eV for the b-spin bands, which is attributed to the mismatched coupling between the LUES and HOES wave functions in the presence of the defects. It is interesting to note that a defect state deriving from the a-spin subband is located at about 1.15 eV above the CNP, producing a localized state where complete backscattering is obtained (see the red dashed line in Figure 3). Thus, the changes in the band structure arising from the symmetric SW defects are unfavorable to electronic transport. In contrast to the extensive splitting produced by the symmetric SW defects, the electronic structure modification due to the asymmetric SW defects is slight. Except for some band splitting that could be unfavorable to electron transport, the band structure away from the CNP does not change much. Similar to the defect states induced by the symmetric SW configuration, two defect states are observed in the asymmetric SW configuration: one arising from the a-spin subband located at about 0.62 eV above the CNP, and the other from the b-spin subband at about −1.20 eV, below the CNP. Both give rise to localized states that lead to conductance gaps (see the dotted line in Figure 3). Overall, the band structure results reveal that the SW defect states near the CNP lead to complete electron backscattering regions whose location depends on the spatial symmetry of the defects. The electronic transport results are displayed in Figure 3. The states induced by the H atoms at the edge produce a conductance peak in the vicinity of the CNP in the pristine ZGNR, in good agreement with previous studies [43][44][45]. The first conductance plateau, corresponding to the occupied and unoccupied states, is G_0. In the case of symmetric SW defects in the ZGNR, the conductance in the vicinity of the CNP is decreased as a result of the inward displacement of the four H atoms. The conductance with symmetric defects decreases remarkably below the CNP, showing a monotonous reduction with increasing electron energy.
We attribute this effect to the antisymmetry (opposite sign at every position) of the wave functions with respect to those of the pristine GNR (see Figure 4e), which blocks electronic transport. On the other hand, the orientation of about 50% of the wave functions corresponding to the LUES is reversed, which gives rise to a conducting plateau (about 0.5G_0) ranging from 0.04 to 0.8 eV above the CNP. More importantly, strong electron backscattering induced by the coupling between all states leads to full suppression of the conduction channel at particular resonance energies. Accordingly, a smooth conductance valley around 1.12 eV, corresponding to complete electron backscattering, is observed. Concerning the transport properties of the asymmetric SW configuration, we find that the absence of a conductance peak at the CNP is due to the breaking of the edge states. In addition, localized states in the vicinity of the CNP lead to reduced conductance. The main feature of the first conducting plateau below the CNP is preserved, except for the smooth conductance valley located at about −1.2 eV. This illustrates the clearly different transport behaviors of the symmetric and asymmetric SW defects. We find that these different transport behaviors result from differently coupled electronic states, as supported by the wave function results. The HOES and LUES wave functions of the asymmetric SW defect configuration are very similar to those of the pristine GNR except in the defective area; therefore, the first conducting plateau near the CNP is preserved for the asymmetric configuration. Naturally, the asymmetric SW defects are responsible for the two conductance valleys, namely a smooth valley at −1.2 eV and a sharp valley at 1.48 eV. The large reduction of conductance in these regions induced by the asymmetric SW defects corresponds to a complete electron backscattering region, which is different from the situation in CNTs, where SW defects suppress only half of the conductance channels [46]. However, the impact of the two conductance valleys on the ZGNRs is limited because they are far away from the CNP. The transport properties of the asymmetric SW configuration are predicted to be comparable with those of the pristine GNR, even though neither the geometry nor the wave function symmetry is preserved. We note that similar results have been obtained very recently in a spin-dependent calculation by Ren et al. [47]. Overall, the electronic transport calculations predict that asymmetric SW defects, which are more likely to be observed, are more favorable for electronic transport, in contrast to the substantial transport degradation in the symmetric defect configuration.
Conclusion
In summary, we have investigated the influence of local structural defects on the electronic transport of ZGNRs using first-principles calculations. The transformation energies reveal that asymmetric SW defects are energetically more favorable than symmetric SW defects. Both types of defects give rise to a complete electron backscattering region whose location depends on the spatial symmetry of the defects. Our transport calculations predict that asymmetric SW defects are more favorable for electronic transport, in contrast to the substantially decreased conductance in the symmetric defect configuration. We attribute these distinct transport behaviors to the different coupling between the conducting subbands, influenced by the wave function modification around the CNP.
Common protein biomarkers assessed by reverse phase protein arrays show considerable intratumoral heterogeneity in breast cancer tissues.
Proteins are used as prognostic and predictive biomarkers in breast cancer. However, the variability of protein expression within the same tumor is not well studied. The aim of this study was to assess intratumoral heterogeneity in protein expression levels by reverse-phase-protein-arrays (RPPA) (i) within primary breast cancers and (ii) between axillary lymph node metastases from the same patient. Protein was extracted from 106 paraffin-embedded samples from 15 large (≥3 cm) primary invasive breast cancers, including different zones within the primary tumor (peripheral, intermediate, central) as well as 2-5 axillary lymph node metastases in 8 cases. Expression of 35 proteins including 15 phosphorylated proteins representing the HER2, EGFR, and uPA/PAI-1 signaling pathways was assessed using reverse-phase-protein-arrays. All 35 proteins showed considerable intratumoral heterogeneity within primary breast cancers with a mean coefficient of variation (CV) of 31% (range 22-43%). There were no significant differences between phosphorylated (CV 32%) and non-phosphorylated proteins (CV 31%) and in the extent of intratumoral heterogeneity within a defined tumor zone (CV 28%, range 18-38%) or between different tumor zones (CV 24%, range 17-38%). Lymph node metastases from the same patient showed a similar heterogeneity in protein expression (CV 27%, range 18-34%). In comparison, the variation amongst different patients was higher in primary tumors (CV 51%, range 29-98%) and lymph node metastases (CV 65%, range 40-146%). Several proteins showed significant differential expression between different tumor stages, grades, histological subtypes and hormone receptor status. Commonly used protein biomarkers of breast cancer, including proteins from HER2, uPA/PAI-1 and EGFR signaling pathways showed higher than previously reported intratumoral heterogeneity of expression levels both within primary breast cancers and between lymph node metastases from the same patient. Assessment of proteins as diagnostic or prognostic markers may require tumor sampling in several distinct locations to avoid sampling bias.
Introduction
Various proteins are established as diagnostic and prognostic biomarkers in breast cancer, including estrogen and progesterone receptor status, human epidermal growth factor receptor 2 (HER2) and E-Cadherin [1]. Novel proteins continue to be assessed as potential therapeutic targets and predictive biomarkers. However, intratumoral heterogeneity of protein expression within a primary tumor can pose a challenge when using smaller tumor samples such as core needle biopsies and has not yet been comprehensively studied.
Our goal was to investigate the intratumoral heterogeneity of proteins with clinical relevance to breast cancer, either predictive markers for therapies targeting HER2 [2] or prognostic markers including HER2, estrogen and progesterone receptors [1], E-Cadherin [3], and uPA and PAI-1 [4,5]. To comprehensively assess protein heterogeneity we further included proteins connected to these candidates via signaling pathways. Thus 15 additional proteins were analyzed belonging either to the same protein family as our candidate proteins (EGFR, HER3, HER4, pPDGFR and VEGFR) or involved in downstream signaling of the candidate molecules (Akt, ERK, FAK, GSK3b, ILK, Integrin aV, PI3K, p38, PTEN and STAT3). In a recent study we demonstrated that several of these proteins are correlated with uPA and PAI-1 expression in primary breast cancers and might be important for uPA and PAI-1 mediated tumor growth and migration [6]. The expression of uPA was correlated with expression of ER and the Stat3/ERK pathway while PAI-1 was associated with Akt signaling and regulation of the HER family. As phosphorylated proteins are often activated proteins we also assessed pAkt, p1086EGFR, p1148EGFR, pER, pERK, pGSK3b, pHer2, pHer3, pPDGFR, pp38, pPR, pPTEN, p727STAT3 and p705STAT3.
The overall goal of this study was to assess the level of heterogeneity of protein expression in breast cancer specimens by analyzing 35 target proteins including 15 phosphorylated proteins representing the HER2, EGFR, and uPA/PAI-1 signaling pathways relevant to breast cancer.
For the analysis of large numbers of samples and target proteins as applied in this study, conventional immunoblot methodology is not suitable as one would need more than 3500 Western blot lanes to conduct a single analysis of all samples and antibodies in our study. The reverse-phase-protein-array (RPPA) is a new approach that allows the simultaneous analysis of multiple samples for the expression of several proteins under the same experimental conditions [7,8]. RPPA technology also allows analysis of proteins in triplicates and serial dilutions thus enabling reliable quantitative detection of protein expression in the samples. RPPA has widely demonstrated its feasibility for the analysis of cryo-preserved clinical samples [9][10][11]. More recently our group could show that RPPA technology also reliably allows the analysis of formalin-fixed and paraffin-embedded patient samples [12,13] and is an adequate tool to address protein heterogeneity within such samples.
The aim of this study was i) to determine the intratumoral heterogeneity of 35 proteins representing the HER2, EGFR, and uPA/PAI-1 signaling pathways in large (≥3 cm) primary breast carcinomas, and ii) to identify differences in protein expression levels between axillary lymph node metastases from the same patient. In addition, we assessed differential protein expression with regard to clinicopathologic parameters and the influence of sampling bias with respect to the number of samples taken from each primary tumor.
Tissue Samples
A total of 106 tissue samples from 15 patients with large (≥3 cm) primary invasive breast cancer with or without associated lymph node metastases were studied. Exclusion criteria were known distant metastases and prior chemo-, hormone-, or radiotherapy. Written informed consent was obtained from all patients and the study protocol was approved by the local institutional review board (Ethikkommission der Fakultät für Medizin der Technischen Universität München).
Ten (67%) of 15 patients had invasive ductal carcinomas, four (26%) invasive lobular, and one (7%) was a mixed ductulo-lobular subtype. For all 15 cases, 2-3 tissue samples were taken each from the peripheral tumor zone defined as the 5 mm peripheral margin, the central zone defined as the 10 mm spherical center, and the intermediate zone between periphery and center. All tissue samples had a size of 5-10 mm side length and 2-4 mm thickness, and the distance between individual samples was >5 mm. In addition, 2-5 axillary lymph node metastases were available in 8 cases. In 7 cases primary tumor tissue without the presence of lymph node metastases was available. Tissue samples were embedded in paraffin according to standard procedures following fixation for 18-24 hours in 10% neutral buffered formaldehyde. H&E stained sections of all paraffin-embedded samples were reviewed to characterize the histological subtype, percentage of viable invasive tumor cells, fibrosis or necrosis, and percentage of inflammatory cells. In addition, information on immunohistochemical expression of ER, PR, and HER2 was obtained for all cases.
All samples showed a tumor cellularity of >70% and <10% inflammatory cells or <10% residual lymphocytes in lymph node metastases. There were no significant differences in epithelial tumor cell content and tumor stroma or inflammatory cell infiltrates between samples from the same patient.
Protein Extraction
All tissue samples from the same patient (primary tumor and lymph nodes) were processed at the same time. Protein extraction was performed as previously described [6]. Briefly, FFPE tissue sections were deparaffinized, and proteins were extracted using EXB Plus (Qiagen, Hilden, Germany). Tissue areas of approximately 0.25 cm² from three 10 µm thick sections were processed in 100 µl of extraction buffer. The Bradford protein assay (BioRad, Hercules, California, US) was used according to the manufacturer's instructions to determine protein concentrations. A Western blot probing for β-actin was performed from randomly selected lysates (n = 12) to demonstrate successful protein extraction and suitability for RPPA analysis. All protein lysates produced a clear β-actin band on the Western blot.
Immunodetection was performed similar to Western blot analysis as previously described [14]. For estimation of total protein amounts, arrays were stained in parallel with Sypro Ruby Protein Blot Stain (Molecular Probes, Eugene, US) according to the manufacturer's instructions. Further details of the RPPA methodology and its validation has been previously described by Wolff et al. [13]. All antibodies used in this study were validated for specificity by Western blot analysis ( Figure S1).
Reproducibility of Protein Extraction and RPPA
10 randomly selected samples of primary breast cancer (not included in the study collective) were extracted in three independent preparations and applied onto two independent arrays as described above. On both arrays levels of HER2, pHER2, uPA and PAI-1 were determined. The Spearman's rho correlation coefficient and CV were calculated for consecutive extractions and RPPAs, respectively, to assess the technical reproducibility of both methods.
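The following is a minimal sketch of how such a replicate-reproducibility check could be computed in Python: Spearman's rho between replicate measurements and the CV across replicate extractions. The data matrix here is simulated as a stand-in for the actual RPPA readings; the shapes and numbers are illustrative only.

```python
# Hypothetical reproducibility check: CV across replicate extractions and
# Spearman correlation between replicates (simulated values, not study data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_signal = rng.lognormal(mean=0.0, sigma=0.5, size=10)           # 10 tumor lysates
replicates = true_signal[:, None] * rng.normal(1.0, 0.05, (10, 3))  # 3 extractions each

# CV per sample across replicate extractions (in %)
cv = replicates.std(axis=1, ddof=1) / replicates.mean(axis=1) * 100
print("max CV across replicate extractions: %.1f%%" % cv.max())

# Pairwise Spearman correlation between two replicate extractions
rho_12, _ = spearmanr(replicates[:, 0], replicates[:, 1])
print("Spearman rho (extraction 1 vs 2): %.3f" % rho_12)
```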
Statistical Analysis
Intratumoral heterogeneity as well as the range of protein expression amongst different patients (inter-tumor variation) were assessed using the coefficient of variation (CV). The CV, defined as the ratio of the standard deviation to the mean multiplied by 100, provides a relative measure for variation independent of the absolute values, and therefore allows comparing the variation of proteins with different absolute expression levels.
Intratumoral heterogeneity was assessed separately for each protein by calculating the CV of all primary tumor samples from the same patient. The variation within tumor zones was assessed by calculating the CV for each tumor zone separately. The variation between tumor zones was assessed by calculating the CV between the mean values for each tumor zone from each patient. The heterogeneity between different lymph node metastases from the same patient was assessed by calculating the CV of all lymph node samples from one patient. As a summary statistic, the root-mean-square (RMS) average of the CVs [15] was calculated including all 15 patients to assess the overall intratumoral heterogeneity for a given protein. The RMS-CV was also used to summarize the overall CV of all proteins for a given tumor zone.
The variation between tumors from different patients was assessed for each individual protein by calculating the CV of mean expression values between the different patients. Results are displayed graphically using box-plots showing the median expression value, 25th and 75th percentiles, whiskers (1.5 times the interquartile range), and outliers for each patient.
The Wilcoxon signed rank and Friedman tests were used to compare CVs for different proteins or to compare CVs between different tumor zones, and the Mann-Whitney test was used to compare protein expression between unrelated sample groups at a two-sided 5% level of significance. The inconsistency statistic I² was used to assess the significance of patient-specific differences in CVs across all 35 proteins. I² describes the percentage of variation in CVs between individual patients which is explained by true heterogeneity rather than chance. Cut-offs of 25%, 50% and 75% are commonly used to describe low, moderate, and high variation [16]. The Spearman rank correlation coefficient (rho) was used to assess bivariate relationships of quantitative parameters. All statistical analyses were performed using IBM Statistics (IBM Corporation, Version 19.0) and Origin software (OriginLab Corporation, Version 8).
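The per-patient CV, its root-mean-square summary, and the inter-tumor CV described above can be illustrated with the short Python sketch below. The expression values are simulated stand-ins for RPPA measurements, and the grouping structure (15 patients with several primary-tumor samples each) is only schematic.

```python
# Minimal sketch of the heterogeneity statistics described above, for one protein.
# Values are simulated; in the study they come from RPPA measurements.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
rows = []
for patient in range(1, 16):                      # 15 patients
    level = rng.lognormal(0, 0.5)                 # patient-specific mean expression
    for sample in range(8):                       # several primary-tumor samples each
        rows.append({"patient": patient,
                     "expression": level * rng.normal(1.0, 0.3)})
df = pd.DataFrame(rows)

# Intratumoral CV: per-patient ratio of SD to mean, expressed in %
cv_per_patient = df.groupby("patient")["expression"].agg(
    lambda x: x.std(ddof=1) / x.mean() * 100)

# Root-mean-square (RMS) average of the per-patient CVs as the overall summary
rms_cv = np.sqrt(np.mean(cv_per_patient**2))

# Inter-tumor variation: CV of the per-patient mean expression values
patient_means = df.groupby("patient")["expression"].mean()
inter_cv = patient_means.std(ddof=1) / patient_means.mean() * 100

print(f"intratumoral RMS-CV: {rms_cv:.1f}%   inter-tumor CV: {inter_cv:.1f}%")
```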
Technical Reproducibility of Protein Expression Analysis
Reproducibility of protein extraction. There was a high reproducibility of protein expression from independent extraction procedures (n = 10 samples, n = 30 replicates) with a CV ≤14% for the 4 exemplary proteins HER2, pHER2, uPA and PAI-1. All pairwise correlations showed a Spearman's rho ≥0.98 (Table S2).
Correlations are displayed graphically in Figure 1, and pictures of stained replicate arrays shown in Figure S2.
Intratumoral Heterogeneity of Protein Expression Assessed by RPPA
For all 35 proteins considerable intratumoral heterogeneity in expression was observed with a mean CV of 31.0% (range 21.5-43.4%) within samples of primary tumors. A similar extent of heterogeneity was found between axillary lymph node metastases from the same patient, with a CV of 27.2% (range 17.8-34.4%) (Table 1). The extent of intratumoral heterogeneity was different between the 35 individual proteins analyzed (p≤0.001). Figure 2 illustrates the total intratumoral heterogeneity of all cases, including primary tumor and lymph node metastases when available, for the 4 exemplary proteins E-Cadherin, EGFR, ER, and HER2.
There was no difference in the extent of heterogeneity between phosphorylated (mean CV 31.8%) and non-phosphorylated (mean CV 30.6%) proteins. Similarly, the heterogeneity observed within one tumor zone (mean CV 28.1%, range 18.1-38%) or between different zones of the same primary tumor (mean CV 23.5%, range 17.0-37.7%) showed no significant difference (Table 1). There was no overall significant correlation between the diameter of the primary tumor (mean 5.4 cm, range 3.0-8.0 cm) and the extent of intratumoral heterogeneity (Spearman's rho between −0.3 and 0.2 for 31 proteins; p>0.2). The heterogeneity of EGFR, ER, pHER2 and PAI-1 showed a moderate correlation with tumor size which did not reach statistical significance (rho between −0.5 and 0.4; p>0.1). There was no significant correlation between the percentage of tumor cell content (mean 80%, range 70-98%) and the extent of heterogeneity for the majority of proteins (rho between −0.1 and −0.3 for 26 proteins; p>0.2). The heterogeneity of ER, pPTEN and pp38 was significantly correlated with the percentage of tumor cell content (rho = −0.8, −0.6 and −0.5; p = 0.001, 0.01 and 0.05, respectively). The heterogeneity of EGFR, both pEGFRs, pErk, pHER3 and pSTAT3 showed a moderate correlation with tumor cell content which did not reach statistical significance (rho between −0.4 and −0.5; p>0.06).
There were moderate patient-specific differences in CVs across all 35 proteins with a total variation of 52% (I²) between patients.
Variation of Protein Expression between Different Patients
All 35 proteins showed a very high variation amongst different patients with a mean inter-tumor CV of 51.4% (range 29.3-98.3%) in primary tumor samples and 65.0% (range 39.7-145.5%) in lymph node metastases. There was no significant difference between phosphorylated (mean CV 56.7%) and non-phosphorylated proteins (mean CV 55.3%) ( Table 1). The total variation of protein expression amongst different patients is illustrated for 4 exemplary proteins (E-Cadherin, EGFR, ER, and HER2) in Figure 2.
Differential Protein Expression According to Tumor Subtype, Tumor Stage, Grade, and Hormone Receptor Status
Potential associations of protein expression with clinicopathologic parameters were assessed using the mean expression value assessed by RPPA for each case.
Lobular primary breast cancers (n = 4) showed significantly higher expression of pGSK3b, p727STAT3, and uPA compared to ductal carcinomas (p≤0.03). E-Cadherin showed significantly lower expression in lobular compared to ductal carcinomas.
Estrogen receptor positive tumors (as assessed by immunohistochemistry) showed significantly higher expression of pGSK3b, uPA, PAI-1, and HER4 compared to ER negative tumors (p≤0.03).
Higher stage (T3 and T4) breast cancers showed significantly higher expression of pERK compared to lower stage (T2) tumors (p = 0.02).
No significant differential protein expression was observed with regard to lymph node status, tumor size (</>5 cm), and immunohistochemical HER2 status.
Loss of Significance for Differential Protein Expression When Using Single Samples per Case
To illustrate the relevance of multiple sampling and a potential sampling bias, we assessed associations of protein expression with clinicopathologic parameters when taking single samples per case instead of the mean expression values. Single samples with extreme expression values (highest and lowest alternating) by RPPA were retrospectively chosen for each case.
Subsequently, no significant differential protein expression was observed between patient groups according to tumor subtype, immunohistochemical hormone receptor status, immunohistochemical HER2 status, stage, grade, tumor size (</>5 cm) and lymph node status.
Discussion
Considerable intratumoral heterogeneity was observed for both common and novel protein biomarkers of breast cancer signaling pathways, including HER2, uPA/PAI-1 and EGFR signaling. All 35 proteins studied by reverse phase protein microarrays (RPPA) showed similar heterogeneity with a mean coefficient of variation (CV) of 31.0% (range 21.5-43.4%) within primary breast cancers and 34.7% (range 9.5-79.3%) within different lymph node metastases from the same patient.
Within primary breast cancers we compared several samples from the central, intermediate, and peripheral tumor zones. Interestingly, the extent of heterogeneity was very similar within distinct tumor zones (mean CV 28.1%, range 18.1-38%) and between different zones (mean CV 23.5%, range 17.0-37.7%), suggesting that sampling bias cannot be avoided by taking a single sample from a defined tumor zone but rather requires sampling one tumor in several distinct locations. Our findings are in line with a previous study analyzing the intratumoral heterogeneity of microRNA expression [17].
The extent of intratumoral heterogeneity in protein expression observed in this study is higher compared to previous reports on morphological and molecular heterogeneity of primary breast carcinomas. A possible explanation may be the more extensive systematic sampling of each tumor in several different locations from distinct tumor zones. In contrast, previous studies assessing intratumoral heterogeneity of biomarkers have commonly only analyzed different areas of one tumor section or different core biopsies of the same tumor. A direct comparison of the extent of heterogeneity reported by different studies is hampered by the lack of uniform criteria. Previous studies have provided semiquantitative descriptions of intratumoral heterogeneity, i.e. for expression of hormone receptors and Her2 [18][19][20] or allelic loss and gene amplification [21][22][23][24] but rarely statistical measures of intratumoral variation.
A possible explanation for the intratumoral heterogeneity in protein expression is variation in the cellular composition of tumor samples. There were no significant differences in epithelial tumor cell content and tumor stroma or inflammatory cell infiltrates among samples from the same patient. Nevertheless, three proteins (ER, pPTEN and pp38) showed a correlation between lower tumor cell content and higher extent of heterogeneity. The presence of different tumor cell clones would be another potential explanation for heterogeneity in protein expression. Previous studies found intratumoral heterogeneity for allelic loss [21,22] and gene amplification [23,24]. A thorough analysis of cell clonality including DNA, RNA, and protein analysis would be required to elucidate this hypothesis. Differences in tumor growth and regional tumor cell proliferation may have contributed to the intratumoral heterogeneity. We previously observed considerable intratumoral heterogeneity in tumor cell proliferation [17] which was similar to the heterogeneity in protein expression detected here (CV 23.0% vs. 31.0%). However, it is unlikely that a single explanation will describe the considerable intratumoral heterogeneity of protein expression observed. Our findings suggest that regional differences in tumor cell proliferation contribute to intratumoral heterogeneity but cannot solely explain the variations found in protein expression in different tumor regions.
There was no overall correlation between the diameter of the primary tumors and the magnitude of intratumoral heterogeneity. Although larger tumors often show more morphological or architectural heterogeneity, such as variation in nuclear grade or tubule formation, we found a comparable heterogeneity of protein expression in tumors ranging from 3-8 cm in diameter. However, we observed a tendency for higher extent of heterogeneity in smaller tumors for few proteins (EGFR, ER, and pHER2).
In comparison, we assessed the variation of protein expression amongst tumors from different patients, which revealed a CV of 51.4% (range 29.3-98.3%) for primary tumor samples and 65.0% (range 39.7-145.5%) for lymph node metastases. Therefore, the intratumoral heterogeneity observed in this study could introduce a significant bias when using only a single sample from tumors. For example, the mean expression of EGFR in the primary tumor of case 4 was significantly lower compared to case 5. Nevertheless, one sample of case 4 showed a higher expression level of EGFR than the lowest of case 5.
To further illustrate the relevance of intratumoral heterogeneity, we assessed associations between protein expression and clinicopathologic parameters when using either all samples per case or just one sample with the lowest or highest expression value. Several proteins including ER, PR, HER4, uPA, PAI-1, and phosphorylated p727STAT3 showed significantly higher expression in moderately differentiated G2 compared to G3 tumors based on mean expression values of all samples for each primary tumor. Interestingly, the significance of this correlation was lost when only one sample was randomly chosen for each primary tumor. Similarly, a significant correlation between higher expression of phosphorylated pGSK3b, uPA, PAI-1, and HER4 in ER positive compared to negative tumors was only observed when using mean expression values of all samples for each primary tumor and lost significance when using only single samples. Significant correlations between protein expression and clinicopathologic parameters were also observed for tumor stage (pERK) and lobular versus ductal subtype (pGSK3b, p727STAT3, uPA, E-Cadherin), all of which lost significance when using only single samples for each primary tumor. We also assessed the influence of technical variations on quantification of protein expression using a total of 30 technical replicates for protein extraction and 60 replicates for RPPA analyses. We found a high reproducibility of protein measurements from independent extractions (CV ≤14%). Similarly, there was a high reproducibility of protein measurements using independent RPPA analyses (CV ≤12%). Nevertheless, technical variations may have contributed to some degree to the heterogeneity in protein expression detected in this study.
As mentioned above, an important finding is that the 35 candidate proteins all showed considerable intratumoral heterogeneity although the overall extent of heterogeneity was different between the 35 proteins. It is difficult to estimate if our findings can be extrapolated to other novel proteins relevant to breast cancer. Nevertheless, intratumoral heterogeneity may lead to significant sampling bias when comparing protein expression in tumors from different patients. Intratumoral heterogeneity needs to be taken into account when using protein biomarkers for characterization of different breast cancer subtypes, or prediction of prognosis or response to treatment. In future analyses, the best statistical approach for combining multiple samples from one tumor will depend on the specific study design. Alternatively a practical approach may also be to pool the samples of one case prior to analysis.
Gerlinger et al [25] recently studied 4 metastatic renal cell carcinomas analyzing several regions of the primary tumor and metastatic sites by multiregion sequencing. They observed considerable intratumoral heterogeneity of mutations, with 63 to 69% of all somatic mutations not detectable across every tumor region. In addition, gene-expression signatures of good and poor prognosis were detected in different regions of the same tumor. The authors concluded that a single tumor sample reveals only a minority of genetic aberrations that are present in an entire tumor, and prognostic gene-expression signatures may not correctly predict outcomes if they are assessed from a single tumor region [25].
Limitations of the current study include the number of cases (n = 15) and individual tissue samples (n = 106). An arbitrary cutoff was set at >70% tumor cell content to avoid substantial contamination from non-tumor tissue. We analyzed heterogeneity on a macroscopic level by reverse phase microarrays, and did not assess heterogeneity on a cellular level. The current study assessed only tumors ≥3 cm in diameter. Although our data show no indication that the extent of intratumoral heterogeneity is generally dependent on the tumor diameter, it is possible that the extent of intratumoral heterogeneity may be different for smaller tumors. An important strength is the systematic and predefined prospective sampling of the tumors in 8-10 different areas, whereas previous studies assessing intratumoral heterogeneity of biomarkers have commonly only analyzed different areas of one tumor section or different core biopsies of the same tumor.
It is important to note that our assessment of protein expression by RPPA provides continuous quantitative measurements which cannot be directly translated to the 2- or 3-tiered immunohistochemical grading system. A direct comparison between RPPA and immunohistochemistry (IHC) has only been performed for a few proteins on limited sample numbers [26][27][28]. In 95 breast cancers, Hennessy et al. found a positive correlation between ER and PR levels determined by RPPA and the percentage of positive cells by IHC [28]. Nevertheless, it is important to note that the linear dynamic range of RPPA for detecting differences in protein expression is much larger compared to IHC. Among 64 ER-positive breast cancers as assessed by IHC, RPPA detected an 866-fold difference in ER expression [28]. We previously reported high concordance of HER2 expression measured by RPPA and IHC in breast cancer specimens (94.2%-100%), whereas there was no significant correlation between RPPA and IHC-based determination of hormone receptors [26,27]. Although we detected considerable intratumoral heterogeneity in quantitative protein expression by RPPA, it is unclear how this may be translated to changes between immunohistochemical staining categories. A comprehensive measurement of protein heterogeneity by IHC was beyond the scope of this study and should be addressed in further investigations.
In conclusion, established and novel protein biomarkers of breast cancer, including hormone receptors, HER2, uPA/PAI-1, EGFR, pPDGFR, Akt, ERK, PTEN, STAT3 and others, showed considerable intratumoral heterogeneity when assessed by reverse-phase protein arrays, exceeding that previously reported for common breast cancer biomarkers [18][19][20]. To avoid sampling bias, assessment of novel breast cancer protein biomarkers for diagnosis or prognosis should be based on primary tumor samples from several different locations, or on sampling of several tumor-involved lymph nodes.
Revisiting multimodal activation and channel properties of Pannexin 1
Chiu and colleagues review the primary evidence for divergent activation mechanisms and unitary properties of Pannexin 1 channels.
Introduction
Pannexins are a relatively recently identified class of membrane channels that are garnering interest across diverse fields of biology. The gene family was discovered by Yuri Panchin and colleagues, who used bioinformatics and cloning approaches to search for homologues of invertebrate innexins in Chordata; they proposed to name them pannexins, in recognition of their widespread presence across many phyla (i.e., in all eumetazoans, except echinoderms) and sequence similarity with innexins (Panchin et al., 2000;Abascal and Zardoya, 2012). At this time, family members from chordates are specified as pannexins, and homologues from nonchordate animals are referred to as innexins. Members of the innexin/pannexin family contain four transmembrane (TM) domains with intracellular N and C termini and several conserved cysteines in the first extracellular loop. Despite this shared sequence homology and membrane topology, pannexins constitute plasma membrane ion channels under physiological conditions, as opposed to the gap junction channels formed by innexins (reviewed in Sosinsky et al., 2011). It is noteworthy that neither innexins nor pannexins share significant sequence homology with connexins, the vertebrate gap junction proteins that share the same general topological features (Panchin et al., 2000;Bruzzone et al., 2003;Baranova et al., 2004).
Additional members of this extended 4-TM channel family include the leucine-rich repeat-containing 8 proteins (LRRC8A-LRRC8E, also known as SWELL), which are volume-regulating anion channels that allow permeation of anions and electrolytes in response to changes in environmental osmolarity; and calcium homeostasis modulator 1 (CALHM1) channels, which generate mixed ionic currents with weak cation selectivity and allow ATP release when extracellular Ca 2+ concentrations are reduced (Abascal and Zardoya, 2012;Siebert et al., 2013). N-glycosylation has been reported in Pannexin channels (Panx1-Panx3; Boassa et al., 2007;Penuela et al., 2007, 2009), as well as in LRRC8A (Voss et al., 2014) and CALHM1 (Dreses-Werringloer et al., 2008). Although the effect of N-glycosylation on channel properties remains unknown, it has been suggested to regulate cell-surface expression of Panx1 and may preclude the formation of gap junctions between Panx1 channels from neighboring cells (Boassa et al., 2007, 2008;Penuela et al., 2007). The diverse and physiologically important functions of the members of this family of channels are drawing tremendous attention, with much yet to be learned about their respective channel properties and regulatory mechanisms, both shared and distinct.
In this review, we seek to provide a comprehensive analysis of experimental evidence for functions and regulation of pannexins, particularly Panx1. In so doing, we call attention to potential pitfalls in earlier studies and challenge long-held views regarding activation mechanisms and channel properties of Panx1. We propose alternative explanations for disparate results and suggest interpretive cautions and best practices for future research. Although we focus primarily on pannexin channels, it is important to remain mindful of the related channels and how their shared properties could confound interpretations with respect to pannexin channels, as well as how their distinct characteristics might be used to help resolve confounding results.
Overview of Pannexin biology
In vertebrates, three pannexin paralogues (Panx1, Panx2, and Panx3) have been identified. Among these, Panx1 is the most widely represented in diverse tissues and cell types, including lymphocytes, adipocytes, muscle, endothelium, and epithelial cells (Baranova et al., 2004;Penuela et al., 2007;Chekeni et al., 2010;Billaud et al., 2011;Seminario-Vidal et al., 2011;Lohman et al., 2012a;Adamson et al., 2015). In contrast, expression of the other pannexins is limited to select tissues, with prominent expression of Panx2 primarily in the central nervous system and Panx3 in skin and skeletal tissues (Bruzzone et al., 2003;Baranova et al., 2004;Penuela et al., 2007;Iwamoto et al., 2010). These pannexin paralogues share ∼50% similarity in amino acid sequence, with higher similarity found in their N termini and TM regions (Penuela et al., 2009). The C-terminal tails harbor greater variability in both length and amino acid identity; of note, Panx2 possesses the longest C-tail, with proline-rich and hydrophilic regions (Yen and Saier, 2007). Thus, C-tails may present unique regulation sites that could underlie functional divergence among the different pannexin paralogues.
Panx1 forms oligomeric channels in the plasma membrane, potentially hexamers, as suggested by biochemical approaches (e.g., protein cross-linking, size exclusion chromatography) and single-molecule techniques (e.g., fluorescence photobleaching, negative stain electron microscopy; Boassa et al., 2007;Wang et al., 2014;Chiu et al., 2017). On the other hand, Panx2 reportedly forms octameric channels (Ambrosi et al., 2010), and a subunit stoichiometry for Panx3 has not been reported. Unlike Panx1, which is thought to form channels only on the plasma membrane, Panx3 may function as both a plasma membrane channel and an intracellular ion channel, localized at ER (Penuela et al., 2008;Ishikawa et al., 2011); to date, however, Panx3-dependent ionic currents have not been reported (Bruzzone et al., 2003;Poon et al., 2014).
The subcellular localization of Panx2 channels awaits determination. In favor of a role for Panx2 as an intracellular ion channel, a fluorescent protein-conjugated Panx2 was localized intracellularly after heterologous expression, and Panx2 was not detected at the cell surface of mouse neurons by immunostaining with an anti-Panx2 antibody (Lai et al., 2009;Le Vasseur et al., 2014;Boassa et al., 2015). However, functional expression of Panx2 channels at the plasma membrane was clearly demonstrated using cell-surface biotinylation and electrophysiological assays in other heterologous expression experiments (Penuela et al., 2009;Poon et al., 2014), indicating that Panx2 is capable of forming plasma membrane channels. The discrepancies may arise from different techniques or biological contexts (e.g., types of cells and tissues) used to investigate the expression of Panx2. For example, the subcellular localization of Panx2 may be influenced by expression levels or dependent on differentiation state of cells (Swayne et al., 2010).
In general, there is much less known about the regulation and function of Panx2 and Panx3, in comparison with Panx1. The (patho)physiological functions identified for Panx2 or Panx3 channels are currently limited to neuronal development and ischemia-reperfusion injury (Panx2) or skin/skeleton development (Panx3; Celetti et al., 2010;Swayne et al., 2010;Bargiotas et al., 2011;Ishikawa et al., 2011;Caskenette et al., 2016). Moreover, although different pannexins are often considered to be complementary (Bargiotas et al., 2011;Lohman and Isakson, 2014;Penuela et al., 2014), it is not at all certain that they share similar properties or could compensate for each other. For example, Panx1 and Panx2 have different basal activity, activation mechanisms, pharmacological sensitivity, and subcellular distribution (Penuela et al., 2008;Poon et al., 2014;Boassa et al., 2015); in addition, although suggestive evidence exists (Penuela et al., 2009;Celetti et al., 2010;Bargiotas et al., 2011), permeability to nucleotides or fluorescent dyes has not been definitively established for either Panx2 or Panx3.
In keeping with their broad distribution, Panx1 channels have now been implicated in a wide variety of physiological and pathological contexts, such as blood pressure regulation, glucose uptake, apoptotic cell clearance, tumor metastasis, neuropathic pain induction, ischemia-reperfusion injury, and morphine withdrawal responses (Chekeni et al., 2010;Bargiotas et al., 2011;Adamson et al., 2015;Billaud et al., 2015;Furlow et al., 2015;Weilinger et al., 2016;Burma et al., 2017;Weaver et al., 2017). These (patho)physiological roles of Panx1 channels are largely attributed to their ability to release ATP or other nucleotides, which support purinergic signaling in a paracrine or autocrine manner. In keeping with these diverse roles, multiple mechanisms have been implicated in Panx1 channel activation, including increased concentration of intracellular calcium or extracellular potassium, receptor-mediated signaling pathways, and proteolytic cleavage at the C termini of Panx1 proteins (Table 1; see Panx1 channels can be activated by distinct mechanism for discussion).
Despite increasing recognition of the importance of Panx1 activity in these multiple contexts, there have been remarkable discrepancies in studies describing single-channel properties of Panx1 that could serve as diagnostic features for identification of native channels. In addition, the pharmacological tools that are available for attributing effects to Panx1 in native environments are generally nonspecific, and their effects on the channels often remain poorly characterized. This lack of both consensus and reagents hinders advances in the field. Here, we examine critically the experimental evidence supporting specific Panx1 channel properties and activation mechanisms, and we discuss new advances and resources for identifying Panx1 channels and their specific roles. We hope this appraisal will be useful in raising concerns and limitations regarding current methodologies and guiding future investigations of Panx1 channels so that the field can collectively forge the consensus necessary to provide a consistent and comprehensive understanding of Panx1 channel biology.
Current limitations in studying Panx1 channels
Pharmacological tools have been used extensively, although not always definitively, to understand functions and regulation of Panx1 channels in native contexts. A variety of reagents, including chemicals and peptides, have been identified as inhibitors of Panx1 channels (see D'hondt et al., 2009 for an extended review). However, none of the most commonly used inhibitors are specific for Panx1. For example, although carbenoxolone (CBX) is a widely accepted Panx1 inhibitor, CBX also inhibits other closely related channels, such as connexins and LRRC8/SWELL1 (Ripps et al., 2004;Bruzzone et al., 2005;Ye et al., 2009;Voss et al., 2014). In addition, a peptide inhibitor, 10 Panx1, has been widely used as a Panx1-specific inhibitor (Pelegrin and Surprenant, 2006;Lohman et al., 2015;Weilinger et al., 2016;Burma et al., 2017), even though its nonspecific inhibition of connexins has been reported (Wang et al., 2007). Several other compounds require high concentrations for effective inhibition, such as probenecid (IC 50 ∼350 µM; Ma et al., 2009), and issues associated with solubility as well as nonspecificity demand caution, particularly in applications involving systemic administration. Different reagents may preferentially inhibit Panx1 channels activated by different mechanisms or in certain locations. For example, CBX (100 µM) inhibits only ∼75% of Panx1 current induced by high K + (75 mM; Jackson et al., 2014), and small peptides may not be able to cross the blood-brain barrier.

[Table 1 footnotes: a, The cells endogenously express both wild-type and a truncated form of PANX1, which only expresses amino acids 1-89 of PANX1. b, High concentrations of CBX are required for inhibition of ATP release if albumin is present in the assay system (Chekeni et al., 2010). c, A C-terminally truncated PANX1 (amino acids 1-371) was used in the study. d, This study used a mutant human PANX1, with its caspase cleavage site replaced by a TEV protease cleavage site.]
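Returning to the potency figures above, a rough and purely illustrative calculation (it assumes simple single-site inhibition with a Hill coefficient of 1, which the cited studies do not necessarily establish) shows the fraction of Panx1 current expected to be blocked at different probenecid concentrations given an IC 50 of ∼350 µM:

# Rough illustration only: single-site inhibition with an assumed Hill coefficient of 1.
def fraction_blocked(conc_um, ic50_um=350.0, hill=1.0):
    """Expected fractional block for a simple dose-inhibition relationship."""
    return conc_um ** hill / (conc_um ** hill + ic50_um ** hill)

for conc in (100, 350, 1000, 2000):  # probenecid concentrations, in µM
    print(f"{conc:>5} uM probenecid -> ~{fraction_blocked(conc):.0%} block")

Even at 1 mM, such a compound would be expected to leave an appreciable fraction of the current unblocked, consistent with the caution urged above regarding systemic applications.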
To overcome these shortcomings in channel pharmacology, a panel of inhibitors with different chemical structures or modes of inhibition should be used to verify the results. Recent efforts have identified newer "specific" Panx1 inhibitors, including Brilliant Blue FCF (BB FCF) and trovafloxacin, which have little effect on various topologically or functionally related channels, such as connexins, Panx2, or P2X7 receptors (Poon et al., 2014); these can be included to implicate Panx1 activity. Moreover, in addition to common surrogate assays for pannexin activity, such as ATP release and dye uptake, direct electrophysiological recordings of channel activity should also be used. This is particularly important in light of reports that various channels, such as connexins or P2X7 receptors, are intrinsically capable of permeating fluorescent dye or ATP (Fiori et al., 2012;Karasawa et al., 2017). Thus, one can achieve greater confidence in attributing effects to Panx1, for example, by using trovafloxacin, a fluoroquinolone antibiotic that displays voltage-dependent inhibition of Panx1 currents (Poon et al., 2014), in parallel with other blockers (e.g., CBX, a steroid-like glycyrrhetinic acid that blocks across the voltage range). Finally, to further strengthen such conclusions, a combination of pharmacological and genetic approaches should also be considered (Burma et al., 2017;Weaver et al., 2017).
In addition to the paucity of selective inhibitors, the fidelity of currently available anti-Panx1 antibodies remains uncertain (Bargiotas et al., 2011;Cone et al., 2013); therefore, specificity of the antibody should be verified by using Panx1 knockout animals or other genetic approaches to prevent false-positive/negative results. Note that several independent Panx1 knockout mouse lines have been generated (Anselmi et al., 2008;Qu et al., 2011;Skarnes et al., 2011;Dvoriantchikova et al., 2012). Although studies using these mice agree on a common physiological function of Panx1 in releasing ATP (Qiu et al., 2011;Qu et al., 2011;Santiago et al., 2011;Seminario-Vidal et al., 2011), a hypomorphic phenotype has been reported in the KOMP knockout-first mouse line (Hanstein et al., 2013), which again emphasizes the importance of validating materials used for studying Panx1 channels. Given fast-developing implementations of CRISPR-Cas9 techniques (Yang et al., 2013), engineering an epitope-knock-in animal model would provide a useful resource to evaluate native expression patterns with well-characterized specific antibodies.
Meanwhile, ionic selectivity is generally determined by examining changes in reversal potential (E rev ) upon altering the ionic composition of recording solutions. Indeed, anionic permeability of Panx1 channels was claimed by using such a seemingly straightforward method (Ma et al., 2012;Romanov et al., 2012). However, an opposite interpretation was made in a separate paper, i.e., that Panx1 channels are permeable to sodium, even though both studies based their conclusions on the result that E rev was unchanged in solutions exchanging sodium for NMDG (N-methyl-d-glucamine; Pelegrin and Surprenant, 2006;Ma et al., 2012;Romanov et al., 2012). Arguing against either anion- or cation-selectivity, the activated Panx1 channel apparently allows permeation of large molecules with both positive and negative charge (i.e., TO-PRO-3 and ATP; Chekeni et al., 2010;Qu et al., 2011;Seminario-Vidal et al., 2011). Finally, ionic selectivity has not been rigorously examined, e.g., by determining changes in E rev of open channel currents (tail currents). Even then, interpretation of such experiments often relies on the use of counter-ions that are impermeable, and that cannot be assumed in the case of Panx1 channels, which reportedly pass large molecules with size >500 Da. Therefore, developing a reconstituted system, such as proteoliposomes, may provide a clean system to measure Panx1 permeability to molecules of different charge and size.
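To make the reversal-potential logic concrete, the sketch below applies the Goldman-Hodgkin-Katz voltage equation with entirely hypothetical concentrations and permeability ratios (none of these numbers are taken from the cited studies) to show how differently E rev should respond to a Na+-for-NMDG+ exchange depending on whether the channel is predominantly anion-permeable or appreciably Na+-permeable:

import math

def ghk_erev(p_na, p_k, p_cl, na_o, na_i, k_o, k_i, cl_o, cl_i, temp_c=22.0):
    """Reversal potential (mV) from the GHK voltage equation for Na+, K+ and Cl-."""
    rt_f = 8.314 * (273.15 + temp_c) / 96485.0 * 1000.0   # RT/F in mV
    num = p_na * na_o + p_k * k_o + p_cl * cl_i            # Cl- terms enter with sides swapped
    den = p_na * na_i + p_k * k_i + p_cl * cl_o
    return rt_f * math.log(num / den)

ions = dict(na_i=10, k_o=3, k_i=140, cl_o=146, cl_i=30)    # placeholder concentrations (mM)

# Mostly anion-permeable channel: removing bath Na+ barely moves E_rev.
shift_anion = (ghk_erev(0.05, 0.05, 1.0, na_o=0, **ions)
               - ghk_erev(0.05, 0.05, 1.0, na_o=140, **ions))

# Na+-permeable (nonselective cation) channel: the same exchange shifts E_rev strongly negative.
shift_cation = (ghk_erev(1.0, 1.0, 0.05, na_o=0, **ions)
                - ghk_erev(1.0, 1.0, 0.05, na_o=140, **ions))

print(f"anion-selective case:  E_rev shift ~ {shift_anion:.0f} mV")
print(f"cation-permeable case: E_rev shift ~ {shift_cation:.0f} mV")

In other words, an unchanged E rev after Na+/NMDG+ exchange is informative only if the remaining permeabilities and the impermeability of the substitute ion have been established independently.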
Panx1 channels can be activated by distinct mechanisms
Given the ubiquitous expression and multifaceted functions of Panx1 channels in diverse cell types and tissues, various mechanisms have been reported to activate Panx1 channels in either a reversible or irreversible manner. Here, we first describe experimental approaches and evidence that support the multimodal activation of Panx1 channels (Table 1). We then discuss distinct channel properties that have been attributed to the various activation mechanisms.
The first evidence suggesting that Panx1 might form a mechanosensitive channel came from a heterologous expression system. Using Xenopus laevis oocytes injected with cRNA of human PANX1, Bao et al. (2004) recorded increased channel activity in response to negative pressure (∼40 mbar), applied via the patch pipette. The increased open probability (P O ) of these stretch-activated channels was associated with longer dwell time in larger subconductance states, with no obvious voltage-dependence of either P O or unitary conductance ( Fig. 1 A). Subsequently, several papers reported increased ATP release and fluorescent dye uptake from airway epithelia or erythrocytes swollen by hypotonic solutions; this was attributed to mechanosensitive activation of Panx1, either by using cells derived from Panx1-deficient mice or a combination of pharmacological tools (Locovei et al., 2006a;Qiu et al., 2011;Seminario-Vidal et al., 2011). In addition, ATP release from metastatic breast cancer cells subjected to deformation as they transited the microvasculature or from distended rat bladders was also attributed to stretch-induced activation of Panx1, tested by using CBX or BB FCF (Beckel et al., 2015;Furlow et al., 2015). It is noteworthy, however, that electrophysiological recordings of the corresponding native ionic currents induced by pressure or hypotonicity have not yet been verified by knockout or pharmacological inhibition in any of these cells (i.e., erythrocytes or cancer cells). In fact, even in the original Xenopus oocyte experiments, the stretch-activated channels were not independently verified as Panx1 (e.g., by using channel blockers), so the possibility remains that this activity was from channels endogenous to the oocyte. Thus, the single-channel properties and cognate macroscopic activity of stretch-activated Panx1 channels awaits further investigation and validation.
The molecular mechanisms for mechanosensitivity of Panx1 remain to be established. A demonstrated interaction between F-actin and the intracellular C terminus of Panx1 (Bhalla-Gehi et al., 2010) provides a potential physical transduction mechanism for channel activation in response to membrane stretch. Consistent with such a cytoskeleton-tethering model, inhibition of RhoA or activation of myosin light chain kinase, which can disrupt the actin cytoskeleton, reduced ATP release from bronchial epithelial cells after hypotonic challenge (Seminario-Vidal et al., 2011). On the other hand, channel-intrinsic mechanosensitivity of Panx1 cannot be excluded because membrane deformation reportedly activated Panx1 channels in excised membrane patches where mechanical transduction via loosely anchored actin filaments is less likely (Bao et al., 2004). Further studies to examine the involvement of cytoskeleton proteins or to establish more reduced cell-free systems (e.g., Panx1-reconstituted proteoliposomes) could help clarify the physical basis for mechanosensitivity of Panx1 channels.
In sum, there is evidence for activation of Panx1 channels by membrane stretch, but the data are not definitive and the transduction mechanisms have not been elucidated. Importantly, electrophysiological experiments from verified recombinant channels, and in cells from wild-type and Panx1 knockout mice, would clarify the basic properties of stretch-activated channels and strengthen the conclusion that Panx1 indeed forms those native mechanosensitive channels. In this latter respect, it is critical to recognize that LRRC8/SWELL channels are also activated by hypotonicity and share multiple characteristics with Panx1, including carbenoxolone sensitivity and large molecule permeation (Voss et al., 2014;Gaitán-Peñas et al., 2016;Syeda et al., 2016). Thus, the ATP release from human erythrocytes, which was attributed to Panx1 based only on CBX-sensitivity (Locovei et al., 2006a), could instead reflect LRRC8-mediated ATP release. These considerations demand extra caution in differentiating contributions from Panx1 or LRRC8/SWELL when examining stretch/pressure-activated channel currents, ATP release and dye uptake.
Panx1 activation by elevated extracellular potassium.
High concentrations of extracellular K + (7∼80 mM) have been observed in several pathological conditions, such as epileptiform convulsions or ischemic injury (Fisher et al., 1976;Hansen, 1978). Several studies have demonstrated that Panx1 channels can be activated by elevated levels of extracellular potassium, with a minimum concentration of 10 mM, by examining Panx1-mediated permeation of ions or large molecules (Silverman et al., 2009;Qiu et al., 2011;Santiago et al., 2011;Suadicani et al., 2012;Wang et al., 2014; Fig. 1 B). High-K + -induced Panx1 currents were first described by the Dahl laboratory using Xenopus oocytes heterologously expressing mouse Panx1 (Silverman et al., 2009), and later reported by the Scemes laboratory in mouse astrocytes (Suadicani et al., 2012). These K + -induced currents were reduced by CBX or probenecid and attenuated in astrocytes derived from Panx1 knockout mice. In addition to ionic currents, high-K + -induced dye uptake and ATP release were attenuated in a human astrocytoma cell line (1321N1) by shRNA-mediated PANX1 knockdown, and in hippocampal neurons or astrocytes derived from Panx1-knockout mice (Silverman et al., 2009;Santiago et al., 2011;Suadicani et al., 2012). This evidence provides support for elevated extracellular K + as an activation mechanism for Panx1 channels and implicates potential roles of the channels in the etiology of seizure or ischemic stroke (Bargiotas et al., 2011;Santiago et al., 2011).
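For orientation only (this is a textbook Nernst calculation, not data from the cited studies; an intracellular K + of 140 mM and room temperature are assumed), the K + equilibrium potential shifts substantially over the concentration range quoted above, which is why the voltage-clamp controls discussed next matter for separating direct channel activation from simple depolarization:

import math

def nernst_k(k_out_mm, k_in_mm=140.0, temp_c=22.0):
    """K+ equilibrium potential (mV); assumes monovalent K+ and the stated [K+]i."""
    rt_zf = 8.314 * (273.15 + temp_c) / 96485.0 * 1000.0   # RT/zF in mV for z = +1
    return rt_zf * math.log(k_out_mm / k_in_mm)

for k_out in (3, 7, 10, 50, 80):   # mM, spanning the range quoted in the text
    print(f"[K+]o = {k_out:>2} mM -> E_K ~ {nernst_k(k_out):.0f} mV")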
The mechanism by which high extracellular K + activates Panx1 channels remains to be determined. Increased Panx1 currents were observed under voltage clamp conditions, indicating that channel activation was not simply a result of membrane depolarization caused by elevated extracellular K + (Silverman et al., 2009). It has been proposed that a direct association of K + with the first extracellular loop of Panx1 may be required for high-K + -induced activation. The rationale for this hypothesis follows from a series of studies: first, by using electrophysiological recordings in Xenopus oocytes heterologously expressing wild-type or mutant mouse Panx1, the Dahl laboratory found that Arg-75 in the first extracellular loop was critical for inhibition of Panx1 by high concentrations of ATP or ATP analogues, with IC 50 ≥500 µM for ATP; second, in a later study, the same group also reported that high extracellular K + attenuated the inhibition of Panx1 currents by high extracellular ATP (500 µM; Jackson et al., 2014). Based on these observations, these authors hypothesized an overlapping interaction site for K + and ATP, and proposed that high K + may activate Panx1 channels through direct binding. It will require further investigation to determine whether ATP and K + compete for the same binding site. In this respect, it would be interesting to test whether high K + is able to activate mutated Panx1 channels that are resistant to inhibition by ATP (e.g., R75A or W74A; Qiu and Dahl, 2009;Qiu et al., 2012). Intriguingly, high K + also interferes with inhibition of Panx1 currents by other compounds with distinct chemical structures, such as CBX or probenecid. It remains to be tested whether all these different chemical classes of Panx1 inhibitors modulate channel activity by competing for the proposed high-K + binding site.

Figure 1. (A) Pressure/stretch-activated single-channel activity (adapted from Bao et al., 2004). The unitary conductance of pressure/stretch-activated channels was reported elsewhere to be ∼475 pS (Locovei et al., 2006a). (B) High extracellular K + -activated single-channel activity obtained by using inside-out patch recording in Xenopus oocytes heterologously expressing human PANX1 (adapted from Bao et al., 2004). Membrane patch was exposed to symmetric 150 mM K + . The high-K + -activated channel visited multiple subconductance states and displayed a unitary conductance up to ∼475 pS. (C) Intracellular Ca 2+ -induced single-channel activity obtained by using inside-out patch recording in Xenopus oocytes heterologously expressing human PANX1 (adapted from Locovei et al., 2006b). The unitary conductance of Ca 2+ -activated channels was reported to be ∼550 pS (Locovei et al., 2006b). (D) Single-channel activity evoked by caffeine-induced Ca 2+ release, obtained from rat atrial myocytes infected with adenovirus expressing mouse Panx1 or empty vector (adapted from Kienitz et al., 2011). The caffeine-activated channels showed a unitary conductance of ∼300 pS. (E) O 2 /glucose deprivation (OGD)-induced single-channel activity obtained by using cell-attached recording in rat hippocampal neurons (left). Boxed figures are exemplar single-channel opening (top right) and all-point histogram acquired from recordings under control, OGD, and OGD+CBX conditions (bottom right). The OGD-activated channels demonstrated a unitary conductance of ∼530 pS (adapted from Thompson et al., 2006). All figures are reproduced with permission.
In collaboration with the Sosinsky laboratory, Dahl and colleagues also presented another line of evidence that supports a direct activation mechanism for high K + . In that work, by using electron microscopy, they found an enlarged "pore" diameter of purified, negatively stained mouse Panx1 channels in high-K + solutions. The authors previously reported that maleimidobutyryl-biocytin (MBB), a thiol-modifying reagent, inhibited whole-cell currents in Xenopus oocytes heterologously expressing mouse Panx1 under normal K + concentrations, but not under high-K + conditions. Because this was attributed to an interaction of MBB with Cys-426 at the distal end of the C-tails, the authors further suggested that the larger pore diameter might be explained by rearrangement of Panx1 C-tails in high-K + . In general, a higher-resolution, 3-D structure of Panx1 in different K + conditions would provide better understanding of high-K + -mediated activation of Panx1, and how high K + might interact with ATP (and/or other blockers) on the channel.
Despite the evidence cited above, it is important to acknowledge that we have been unable to verify high extracellular K + -induced Panx1 current or dye uptake (Fig. 2). Thus, in HEK293T cells heterologously expressing either wild-type or C-terminally truncated human Panx1, current density was unaffected by large changes in K + concentration in the bath solutions (3 vs. 83 mM; ∼300 mOsm). Moreover, by flow cytometry, negligible TO-PRO-3 uptake was found in mouse splenocytes (Fig. 2 D) and human Jurkat cells exposed to 50 mM K + . This was in sharp contrast to a pronounced TO-PRO-3 uptake by UV-irradiated, apoptotic splenocytes (Fig. 2 D). In addition, it appears that the ATP release observed from high-K + -treated erythrocytes (Locovei et al., 2006a;Qiu et al., 2011) is likely caused by hemolysis of erythrocytes upon exposure to the K-gluconate-based buffer (Keller et al., 2017). Together, these new results cast doubt on a long-held view that high-K + is a universal activation mechanism for Panx1, and recommend additional examination of this mechanism by other independent research groups. In any case, one should exercise caution when attributing high-K + -induced activities to Panx1 channels.
Panx1 activation by increased intracellular calcium.
Panx1 channels can be activated by multiple metabotropic receptors, specifically by those that couple via Gαq-containing heterotrimeric G proteins; in some cases (e.g., P2Y purinergic receptors; protease-activated receptors [PARs]), this receptor-mediated channel activation has been attributed to an associated increase in intracellular calcium.
For P2Y receptors, ATP-induced currents were obtained from Xenopus oocytes coexpressing P2Y1 or P2Y2 along with Panx1 channels, whereas ATP did not increase Panx1 currents in the absence of P2Y receptors (Locovei et al., 2006b). Increased whole-cell current was also observed by bath application of the calcium ionophore, A23187, in oocytes expressing Panx1, consistent with a role for elevated intracellular calcium in activating Panx1 channels. In the same study, Locovei et al. (2006b) found that increased activity of Panx1 channels can be observed in the inside-out patch configuration when the cytosolic side of membrane patches was exposed to elevated calcium in a dose-dependent manner (Fig. 1 C). This suggests that Panx1 might be activated directly by increased intracellular calcium expected from P2Y receptor activation, although this was not tested directly (e.g., by chelating intracellular Ca 2+ in P2Y-stimulated cells).
In addition to P2Y receptor-induced currents, PAR1/3-mediated ATP release and fluorescent dye uptake are also considered to be Panx1-dependent events (Seminario-Vidal et al., 2009;Gödecke et al., 2012). PAR1-mediated ATP release from human umbilical vein endothelial cells (HUVECs) was reduced by shRNA-mediated knockdown of Panx1, but not by Connexin 43 (Cx43) shRNA (Gödecke et al., 2012); in a human lung epithelial cell line, A549, PAR3-mediated ATP release and propidium iodide uptake were attenuated by application of a low concentration of CBX (10 µM; Seminario-Vidal et al., 2009). Bath application of A23187 increased ATP release from HUVECs, consistent with Ca 2+ -mediated activation of Panx1, although involvement of Panx1 in this A23187 effect was not examined concurrently using genetic or pharmacological tools (Gödecke et al., 2012). Also consistent with a role for Ca 2+ , the cell-permeable Ca 2+ chelator, BAPTA-AM, reduced thrombin-induced ATP release from A549 cells (Seminario-Vidal et al., 2009). Furthermore, application of thapsigargin attenuated thrombin-induced ATP release from A549 cells (Seminario-Vidal et al., 2009), suggesting that ER calcium stores might be a relevant calcium source for activating Panx1 channels. Although PAR-activated Panx1 channel currents have not yet been recorded directly, these results collectively support the idea that PAR receptors induce ATP release that is dependent on intracellular calcium and mediated via Panx1 channels.
In settings that do not also involve concurrent receptor stimulation, the effects of Ca 2+ on Panx1 channels are less clear. For example, ethidium bromide uptake by N2A cells heterologously expressing mouse or zebrafish Panx1 was increased by ionomycin, another cell-permeable calcium ionophore (Kurtenbach et al., 2013). However, ionomycin was unable to induce ATP release from A549 cells, even though calcium-mediated activation of Panx1 was suggested based on inhibition of thrombin-induced ATP release by BAPTA-AM and thapsigargin in those same cells (Seminario-Vidal et al., 2009). This discrepancy may reflect the specificity of calcium-mediated Panx1 activation by receptor-associated mechanisms. For example, concurrent receptor-mediated activation of the Rho GTPase pathway may also be required to increase ATP release from human A549 cells, in addition to elevated intracellular calcium (Seminario-Vidal et al., 2009). Also not linked to a receptor-mediated process, it was reported that Panx1 constitutes a caffeine-activated, large conductance cation channel in cardiomyocytes (Kienitz et al., 2011; Fig. 1 D), for which channel activation is dependent on elevated intracellular calcium (Loirand et al., 1991). However, those caffeine-activated channels display unitary properties distinct from P2Y receptor-activated channels (Locovei et al., 2006b), which is surprising if both depend on increased intracellular Ca 2+ . Thus, it remains uncertain whether Ca 2+ can activate Panx1 directly, or modulates the channel differently in the context of concurrent receptor signaling.

Figure 2. Raising extracellular K + does not activate recombinant or native Panx1 channels. (A and B) Whole-cell currents were obtained from HEK293T cells expressing either wild-type PANX1 (A) or C-terminally truncated PANX1 (B) under control conditions (3 mM K + ), high extracellular K + (83 mM K + ), and high extracellular K + plus CBX (50 µM); insets show time series of current obtained at 80 mV under the indicated conditions. As previously reported, whole-cell voltage-ramp I-Vs (−100 to 80 mV; 0.2 V/s at 0.14 Hz) were obtained at room temperature using borosilicate micropipettes (3∼5 MΩ) filled with internal solution containing (mM) 100 CsMeSO 4 , 30 TEACl, 4 NaCl, 1 MgCl 2 , 0.5 CaCl 2 , 10 HEPES, 10 EGTA, 3 ATP-Mg, and 0.3 GTP-Tris, pH 7.3. Control (3 mM K + ) bath solution was composed of (mM) 140 NaCl, 3 KCl, 2 MgCl 2 , 2 CaCl 2 , 10 HEPES, and 10 glucose, pH 7.3. High-K + solution included (mM) 60 NaCl, 83 KCl, 2 MgCl 2 , 2 CaCl 2 , and 10 HEPES at pH 7.3; glucose was added to maintain equal osmolarity with the control bath solution (∼300 mOsm). (C) Grouped data (mean ± SEM) shows that CBX-sensitive current from wild-type (n = 5) or CT-truncated PANX1 (n = 7) was unaffected by different extracellular K + concentrations. These results were originally reported in the peer review file from Chiu et al. (2017). (D) Dye uptake measured by flow cytometry shows that viable (7-AAD negative) mouse splenocytes display negligible TO-PRO-3 uptake under control K + conditions (5 mM K + ; 322 mOsm), hypotonic high-K + (50 mM K + , 237 mOsm; same ionic composition as Silverman et al., 2009), or osmolarity-adjusted high-K + (50 mM K + with 87 mM d-mannitol, 327 mOsm). In contrast, caspase-mediated Panx1 activation in UV-irradiated cells yields robust TO-PRO-3 uptake by viable cells. Splenocytes were freshly isolated from C57BL/6 mice, as previously described (Jin et al., 2008) and cultured in growth media (RPMI + 10% FBS); one group of cells was also exposed to UV irradiation (15 × 10^4 µJ). After 6 h culture at 37°C, cells were washed three times with RPMI, before a 30-min incubation in solutions containing different concentrations of K + . TO-PRO-3 (Panx1-permeable) and 7-AAD (Panx1-impermeable) were added ∼10 min before flow cytometry, as previously reported (Poon et al., 2014;Chiu et al., 2017). Note that necrotic cells (7-AAD + ) were excluded from the analysis to avoid Panx1-independent TO-PRO-3 uptake.
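As a quick arithmetic cross-check of the osmolarity matching described in the Fig. 2 legend (a back-of-the-envelope sketch that assumes ideal dissociation, i.e., osmotic coefficients of 1, so it slightly overestimates measured osmolarity):

# Ideal-dissociation osmolarity estimates for the Fig. 2 bath solutions (assumes NaCl/KCl
# contribute 2 osmolytes and MgCl2/CaCl2 contribute 3; real osmotic coefficients are <1,
# which is why the measured value is quoted as ~300 mOsm).
def ideal_mosm(composition_mm):
    particles = {"NaCl": 2, "KCl": 2, "MgCl2": 3, "CaCl2": 3, "HEPES": 1,
                 "glucose": 1, "mannitol": 1}
    return sum(conc * particles[solute] for solute, conc in composition_mm.items())

control = {"NaCl": 140, "KCl": 3, "MgCl2": 2, "CaCl2": 2, "HEPES": 10, "glucose": 10}
high_k  = {"NaCl": 60, "KCl": 83, "MgCl2": 2, "CaCl2": 2, "HEPES": 10}

print("control bath (ideal):        ", ideal_mosm(control), "mOsm")   # ~318
print("high-K+ bath before glucose: ", ideal_mosm(high_k), "mOsm")    # ~308; glucose tops this up
# Similarly, the hypotonic 50 mM K+ splenocyte solution (237 mOsm) plus 87 mM d-mannitol
# gives roughly 237 + 87 = 324 mOsm, close to the stated 327 mOsm.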
A mechanism for Panx1 activation by calcium has not yet been provided. Because Panx1 channels do not have a conventional calcium-binding domain (i.e., EF hand or Ca 2+ bowl), some undefined calcium binding motif or a potential calcium/calmodulin interaction may instead explain Ca 2+ -mediated activation of Panx1. Understanding the mechanisms underlying activation of Panx1 channels by intracellular Ca 2+ will require further studies to interrogate the effect of raised intracellular calcium itself on Panx1 channel properties and determine precisely how Ca 2+ interacts with Panx1.
Panx1 activation by Src family kinase-mediated phosphorylation. In addition to their activation by G protein-coupled metabotropic receptors, there is evidence that Panx1 channels mediate effects downstream of ionotropic receptors and chemokine receptors, such as NMDA, TNFα, and P2X7 receptors (Iglesias et al., 2008;Thompson et al., 2008;Lohman et al., 2015); in these cases, channel activation is attributed to signaling pathways linked to posttranslational modification of Panx1, particularly phosphorylation.
Thompson and colleagues reported that anoxia activates Panx1 channels in hippocampal pyramidal neurons via NMDA receptors (NMDARs; Thompson et al., 2008;Weilinger et al., 2012). In hippocampal CA1 neurons, NMDA- or anoxia-induced secondary inward currents were attenuated by bath application of CBX or 10 Panx1 to rat brain slices, or by conditional deletion of Panx1 in mouse brain, supporting the presence of Panx1-dependent currents (Thompson et al., 2008;Weilinger et al., 2012). In addition, NMDA- or anoxia-induced inward currents were reduced by NMDAR antagonists, D-APV or R-CPP, as well as by a Src family kinase (SFK) inhibitor, PP2. Interestingly, however, an NMDAR pore blocker, MK-801, did not affect Panx1-dependent inward currents, suggesting that ion permeation via NMDA receptor channels was not required. These results implicate a metabotropic role for NMDARs, separate from their ion channel function, in activating Panx1 channels via SFKs (Weilinger et al., 2012, 2016). A likely phosphorylation site on Panx1 was identified by using an antibody specifically recognizing phosphorylation at tyrosine-308 (pY308). Activation of NMDARs increased pY308-Panx1 immunoreactivity in N2A cells heterologously expressing wild-type Panx1 and Src, and this NMDAR-dependent increase in pY308 was not observed in cells treated with PP2 or expressing a Panx1 mutant resistant to phosphorylation at Tyr-308 (Panx1Y308F; Weilinger et al., 2016). Additionally, application of a cell-permeable peptide containing 14 amino acids of Panx1 surrounding Tyr-308 (305-318) interfered with NMDAR-dependent increases in pY308 on Panx1. This peptide did not affect pY416-Src autophosphorylation after NMDAR stimulation in N2A cells (Weilinger et al., 2016), indicating that its presumed competitor function was downstream of Src activation. Collectively, these results support a model whereby NMDARs activate Panx1 channels metabotropically, via SFKs (Weilinger et al., 2012, 2016). Of note, phosphorylation of Panx1 at Tyr-308 and a constitutive interaction between NMDAR, Src, and Panx1 were all present before NMDA stimulation (Weilinger et al., 2016). This supports the possibility of a preexisting signalosome for NMDAR-mediated activation of Panx1 and further implies that some critical number of phosphorylation events at Y308 of Panx1 subunits in the oligomeric channel may be required for NMDA- or anoxia-induced activation of Panx1 channels.
A role for Src family kinases was also proposed for TNFα-induced activation of Panx1 channels, which promotes emigration of leukocytes across the venous microcirculation during inflammation. In this case, SFK-dependent phosphorylation of Panx1 occurs at Tyr-198, a site distinct from that targeted downstream of NMDARs. TNFα-induced ATP release was reduced in human umbilical vein endothelial cells (HUVECs) after Panx1 siRNA-mediated knockdown or in mesenteric venules obtained from endothelium-specific Panx1 knockout mice. After application of TNFα, phosphorylation at Tyr-198 of Panx1 was increased, as assessed with an anti-pY198 antibody; this pY198 immunoreactivity was diminished by SFK inhibitors, suggesting that Tyr-198 is phosphorylated downstream of SFKs in response to TNFα stimulation. It remains to be determined whether SFKs phosphorylate the channel directly. Further studies to analyze the structural and single-channel properties of these phosphorylated Panx1 channels, at either Tyr-308 or Tyr-198, will provide important fundamental information on activation mechanisms by receptor-mediated tyrosine phosphorylation.
In a further example, SFKs are also implicated in Panx1 channel activation downstream of P2X7 receptors (P2X7Rs; Iglesias et al., 2008). First, it is important to note that this P2X7-Panx1 functional interaction is controversial. For example, ATP-induced YO-PRO-1 uptake is retained in bone marrow-derived macrophages (BMDMs) from Panx1 −/− mice, even though it is diminished in BMDMs from P2X7 −/− mice (Qu et al., 2011). In addition, it was recently demonstrated that purified P2X7 receptors are themselves sufficient to permeate fluorescent dye in reconstituted proteoliposomes (YO-PRO-1; Karasawa et al., 2017). These findings suggest that Panx1 is dispensable for P2X7R-induced large pore formation and dye uptake. Nevertheless, in J774 mouse macrophage cells, whole-cell inward currents and YO-PRO-1 uptake were induced by BzATP, a P2X7R agonist, and these effects were reduced by Panx1 inhibition (i.e., by using CBX and mefloquine, and also by siRNA-mediated knockdown). Implicating SFKs in P2X7R-mediated Panx1 activation, BzATP-induced inward currents and YO-PRO-1 uptake were both attenuated by PP2 (Iglesias et al., 2008). The phosphorylation site of Panx1 relevant for this proposed P2X7R-SFK signaling mechanism has not been identified; potentially it could involve either (or both) of the Tyr-198 or Tyr-308 sites implicated in the TNFα or NMDAR pathways.
A further relevant observation is that functional coupling of P2X7R and Panx1 is seemingly dependent on a single nucleotide polymorphism (SNP) of P2X7Rs: only P2X7Rs that contain Pro-451, but not Leu-451, can mediate Panx1 channel activation (Iglesias et al., 2008;Sorge et al., 2012). The P2X7R(Leu-451) variants retain calcium permeability and phospholipase D (PLD)-coupling ability, so calcium-mediated or PLD-induced activation of Panx1 channels is unlikely (Le Stunff et al., 2004;Sorge et al., 2012). It remains to be determined whether direct binding of P2X7Rs and Panx1 channels is required for channel activation, and whether functional coupling between SFKs and Panx1 is also dependent on the P2X7R SNP. Note that, although a preexisting interaction between P2X7R and Panx1 was suggested by coimmunoprecipitation (Pelegrin and Surprenant, 2006;Iglesias et al., 2008), it is not clear that this presumed physical interaction is required for P2X7R-mediated Panx1 activation.
Considering the possibility that SFKs act as direct activators of Panx1 in multiple pathways that evidently lead to phosphorylation at distinct channel residues, a stimulus-dependent coupling between SFKs and Panx1 channels may exist to confer specificity among the different signaling pathways. Thus, it will be critical to identify how these various SFK-dependent pathways and phosphorylation sites differentially modulate Panx1 channel activity and properties.
Regulation of Panx1 activity by other kinases. In addition to SFKs, other kinases, such as c-Jun NH 2 -terminal kinase (JNK; Xiao et al., 2012) and protein kinase G (PKG; Poornima et al., 2015) have been implicated in regulating the activity of Panx1 channels. Xiao et al. (2012) reported that palmitic acid-induced ATP release and YO-PRO-1 uptake were reduced by a JNK inhibitor, SP600125, or by knocking down Panx1 in HTC rat hepatoma cells using shRNA. Even though JNK has been implicated in palmitic acid-induced apoptosis in hepatocytes (Malhi et al., 2006;Wei et al., 2006), it is unlikely that palmitic acid-induced ATP release from HTC cells is mediated by caspase cleavage-activated Panx1 because the ATP release was not decreased by a pan-caspase inhibitor, zVAD-fmk (Xiao et al., 2012). It remains to be determined whether JNK can activate Panx1 channels through a direct phosphorylation or via other signaling messengers.
In contrast to the Panx1 channel activation reported for most signaling pathways, Poornima et al. (2015) showed that nitric oxide (NO) reduced constitutive whole-cell current in HEK293 cells heterologously expressing rat Panx1; this effect was abrogated by inhibiting soluble guanylyl cyclase or PKG, whereas a PKA inhibitory peptide had no effect. Mutational analysis of potential phosphorylation sites indicated that rat Panx1(S206A) was resistant to NO-induced inhibition (Poornima et al., 2015). These data suggest that NO may cause inhibition of Panx1 current by PKG-mediated phosphorylation at Ser-206 and further imply a potential role for phosphatases in maintaining basal activity of the channels. Note, also, that a previous study found that Panx1 channels were inhibited by NO-induced S-nitrosylation on cysteine residues (Lohman et al., 2012b). Thus, multiple different types of channel modification may contribute to inhibition of Panx1 by NO.
A unique activation mechanism by irreversible C-tail cleavage. In addition to the aforementioned reversible activation mechanisms, Panx1 channels can be activated by caspase-mediated cleavage, an irreversible process. During apoptosis of T lymphocytes, activated caspase 3 or 7 cleaves Panx1 channels at a C-terminal site (376DVVD379 of human Panx1); the cleaved channel then mediates release of nucleotides to serve as "find-me" signals that attract phagocytes for clearance of the dying cells (Chekeni et al., 2010). Subsequently, this effect of caspase attributed to Panx1 channels was independently verified in lymphocytes derived from Panx1 knockout mice (Qu et al., 2011). Panx1 channels recorded in inside-out patch configuration are activated by bath application of activated caspase 3, and dissociation of the cleaved C-terminal tail from the channel is required for channel activation, supporting an intrinsic channel mechanism whereby the C-tail functions as an autoinhibitory region (Sandilos et al., 2012).
Other unresolved activation mechanisms for Panx1 channels. Beyond those described above, molecular mechanisms for activation of Panx1 channels by other receptors, including α1 adrenergic receptors, insulin receptors, and CXCR4 chemokine receptors, await further elucidation (Billaud et al., 2011, 2015;Adamson et al., 2015;Velasquez et al., 2016). The ATP release or fluorescent dye uptake induced by these receptors was diminished by Panx1 inhibitors (probenecid or 10 Panx1; Adamson et al., 2015;Velasquez et al., 2016) or in Panx1-deleted cells (Billaud et al., 2015;Velasquez et al., 2016). For the α1D receptor or insulin receptor, complementary results from whole-cell recordings in a heterologous system also support activation of Panx1 channels by receptor-activated pathways (Adamson et al., 2015;Billaud et al., 2015). The aforementioned molecular mechanisms, such as phosphorylation of Panx1 or elevated intracellular Ca 2+ , may provide potential directions for future investigations.
Other recent work suggests that Panx1 channels may be regulated by pH (Kurtenbach et al., 2013). In N2A cells expressing zebrafish Panx1 (drPanx1a or drPanx1b), ethidium bromide uptake was increased by extracellular alkalization and reduced by extracellular acidification. It would be intriguing to test whether a similar regulation by pH can be observed in other Panx1 homologues and by using other measurements of Panx1 function (i.e., ATP release or ionic current); in addition, it would be important to determine whether pH regulation represents an intrinsic mechanism that involves direct channel titration by protons, or whether it reflects effects mediated indirectly by other channel modulators that are sensitive to the prevailing pH.
Activation mode-dependent unitary conductance and ATP permeation
Unique single-channel properties, such as unitary conductance or activation/inactivation kinetics, are often considered a signature for specific ion channels and provide fundamental information for identifying and characterizing ion channels in their native environment. Unlike other channels, however, Panx1 has been associated with strikingly divergent unitary properties. This includes major differences reported for channel conductance, open-closed kinetics, voltage dependence, and permeant selectivity in different modes of activation. Here we discuss different lines of evidence that attribute divergent unitary properties to Panx1 and compare those earlier findings with our recent discovery demonstrating a graded increase in conductance and gating of Panx1 upon sequential removal of C-tails.
For many years, Panx1 channels have been routinely described as large-conductance (i.e., 300-500 pS), nonselective, voltage-activated ion channels (Bao et al., 2004;Iglesias et al., 2008;Santiago et al., 2011;Kurtenbach et al., 2014). Large-conductance, nonrectifying channels attributed to recombinant Panx1 were initially described in inside-out recordings from Xenopus oocytes injected with human Panx1 cRNA (isoform 2, 422 amino acids) during symmetrical exposure to 150 mM potassium gluconate (Bao et al., 2004; Fig. 1 B). This channel activity was reportedly not observed in uninjected oocytes. In addition, potassium gluconate-induced ATP release is significantly higher in Panx1 cRNA-injected oocytes than in Cx43 cRNA-injected or uninjected oocytes (Bao et al., 2004). In the same study, Bao et al. (2004) also suggested that ATP can permeate through Panx1 channels, because they observed a reversal potential of the channels in asymmetrical concentrations of K2ATP that could not be explained by an exclusive permeation of K + . Similar large-conductance channels were observed in (a) rat hippocampal neurons during oxygen-glucose deprivation (∼530 pS; Thompson et al., 2006;Fig. 1 E), conditions that also activated a probenecid- and 10 Panx1-sensitive whole-cell current (Weilinger et al., 2012); (b) human erythrocytes subjected to high-K + or hypertonic challenge (∼450 pS; Locovei et al., 2006a); and (c) rat atrial cardiac myocytes upon caffeine stimulation, only under conditions when Panx1 was exogenously expressed (∼300 pS; Kienitz et al., 2011; Fig. 1 D). This evidence supported a very large unitary conductance as a common characteristic of Panx1 channels.
However, even though a large, nonrectifying conductance was a common feature shared by these recorded channels, it is noteworthy that they displayed other single-channel properties that were quite dissimilar. First, the high-K + -activated Panx1 channels visited multiple subconductance states (Bao et al., 2004; Fig. 1 B) whereas the ischemia-or caffeine-induced channels appeared to have only a single conductance state (Thompson et al., 2006;Kienitz et al., 2011;Fig. 1, D and E). Second, the caffeine-induced, Ca 2+ -activated high-conductance channels display short open times that contrast with the prolonged openings observed for either high-K + -activated or ischemia-activated channels ( Fig. 1 D vs. Fig. 1, B and E). To date, the reasons for the divergence of unitary properties among the high-conductance Panx1 channels remain unknown and await further exploration.
Panx1 as small-conductance channels that do not release ATP. Several recent studies described constitutively active Panx1 channels with a much smaller and outwardly rectifying unitary conductance (i.e., ∼15 pS at hyperpolarized potentials and <100 pS at depolarized potentials) after heterologous expression in mammalian cells, challenging the common characterization of Panx1 as a large-conductance channel. For example, multiple laboratories have reported inside-out or cell-attached recording of wild-type mouse Panx1 channels expressed heterologously in HEK293 cells with a unitary conductance of ∼70 pS at depolarized potentials and ∼15 pS at hyperpolarized potentials (Ma et al., 2012;Romanov et al., 2012; Fig. 3 A). These relatively small-conductance Panx1 channels were recorded in the absence of additional stimulation; they were reportedly anion-selective (Ma et al., 2012;Romanov et al., 2012) and unable to release ATP at detectable levels (Romanov et al., 2012).
The discrepancies observed in unitary conductance (300∼500 pS vs. a maximum of ∼70 pS) may stem from their relative states of activation. Indeed, it was suggested based on single-particle electron microscopy that mouse Panx1 channels exist basally as a small-pore, small-conductance channel that cannot release ATP and transition to a large-pore, large-conductance channel when activated by high extracellular K + to release ATP. The cation/anion selectivity was not directly examined under different K + concentrations. Nevertheless, a general conclusion was advanced that only large-conductance Panx1 channels are capable of permeating large molecules, such as ATP or fluorescent dyes. As discussed in the next section, this does not appear to be the case.

Figure 3 (adapted from Chiu et al., 2017). (C) Unitary current amplitudes closely overlay CBX-sensitive whole-cell currents using two-point normalization (left), suggesting that the outwardly rectifying whole-cell current is mainly attributed to the outwardly rectifying unitary conductance. Note that the same data points are not well aligned when normalized to the peak current amplitude at 80 mV (right). All figures are reproduced with permission.
Large conductance is not a prerequisite for ATP permeation. The suggestion that large molecule permeation (ATP, dye) is only a property of large-conductance Panx1 channels is not supported by recent work. Specifically, by examining native and engineered human Panx1 channels, our group demonstrated that smaller-conductance Panx1 channels are compatible with release of ATP or permeation of large dyes.
We found that cleavage-activated recombinant human PANX1 channels recorded under inside-out conditions displayed an outwardly rectifying unitary conductance, with a maximum of ∼96 pS at depolarized potentials (between 50 and 80 mV; Fig. 3 B) and ∼15 pS at hyperpolarized potentials (between −50 and −80 mV; Fig. 3 B); native hPANX1 channels activated by caspase cleavage in a lymphocyte cell line (Jurkat cells) revealed generally similar properties, but with a slightly lower peak conductance under cell-attached recording conditions (∼80 pS; Chiu et al., 2017). In addition, we found that caspase-mediated cleavage activates PANX1 channels in a stepwise manner; this was clearly demonstrated by sequentially removing C-tails from individual subunits in the oligomeric channel, which led to graded increases in both unitary conductance and open probability. A corresponding graded increase in permeation of both ATP and fluorescent dyes (i.e., TO-PRO-3) was observed from PANX1 channels as more C-tails were removed, even from channels that displayed maximum conductance ranging from ∼50 to ∼96 pS. Of note, because of the pronounced outward rectification in cleavage-activated channels, the unitary conductance is even smaller at the negative potentials expected under conditions in which ATP release and dye uptake were measured. Thus, a large unitary conductance is not required for ATP release or dye uptake via activated PANX1 channels. Finally, because both ATP (negatively charged) and TO-PRO-3 (positively charged) permeate through cleavage-activated Panx1 channels, it is unlikely that these lower-conductance channels could be strictly anion selective.
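To illustrate what these conductance values mean in terms of unitary current, a simple chord-conductance calculation can be used (the reversal potential near 0 mV assumed here is a simplification for a roughly nonselective channel, not a measured value from the study):

# Chord-conductance arithmetic using the conductances quoted in the text; E_rev ~ 0 mV is
# an assumed simplification.
def unitary_current_pA(conductance_pS, v_mV, e_rev_mV=0.0):
    return conductance_pS * (v_mV - e_rev_mV) * 1e-3   # pS * mV = fA; scale to pA

i_out = unitary_current_pA(96, +80)    # ~ +7.7 pA outward at +80 mV
i_in  = unitary_current_pA(15, -80)    # ~ -1.2 pA inward at -80 mV
print(f"+80 mV: {i_out:+.1f} pA; -80 mV: {i_in:+.1f} pA; "
      f"outward/inward amplitude ratio ~ {abs(i_out / i_in):.1f}")

The several-fold asymmetry in unitary current amplitude is the single-channel counterpart of the outward rectification discussed later for the whole-cell current.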
Release of ATP by smaller-conductance Panx1 channels is not unique to C-tail cleavage-mediated activation.
In human PANX1 channels activated by α1D adrenergic receptor signaling, a similar conductance (∼80 pS) also supports ATP release (Billaud et al., 2015). Intriguingly, in the absence of additional activation, mouse Panx1 channels show constitutive activity with a unitary conductance comparable to ATP-releasing human PANX1 channels partially activated by C-tail cleavage (Ma et al., 2012;Romanov et al., 2012;Wang et al., 2014). However, these basally active mouse Panx1 channels appear to be ATP-impermeable (Romanov et al., 2012;Wang et al., 2014;Billaud et al., 2015). Together, these data indicate that ATP release by Panx1 does not require formation of large-conductance channels but is dependent on the activation status of the channels. Thus, by itself, single-channel conductance is not a reliable indicator of large molecule permeation; the specific changes in channel properties that allow for ATP release after different activation mechanisms remain enigmatic.
In summary, Panx1 channels appear to display different properties under basal, unstimulated conditions and when activated by different mechanisms. Although this may well be the case, caveats abound. In many instances, especially in native systems but also in some heterologous contexts, the recorded channels have not been identified definitively as Panx1. In native systems, this is largely a function of imperfect pharmacology and/or failure to apply existing molecular tools to verify that recorded channels are indeed Panx1. Future work attempting to reconcile these discrepancies should use a combination of currently available tools, especially when studying native channels. For example, even though multiple pharmacological and molecular approaches were used to implicate Panx1 as the ischemia- or caffeine-activated channels in cardiomyocytes (e.g., Panx1 inhibitors, siRNA knockdown, viral-mediated expression of exogenous Panx1; Kienitz et al., 2011), a more compelling identification could be obtained by using the now widely available Panx1 knockout mice.
Panx1 channel gating is voltage independent
Another discrepancy regarding Panx1 channel properties within the extant scientific literature is the question of whether the channels are gated by changes in membrane potential, i.e., voltage-gated. Whereas it is widely recognized that Panx1 channels generate a voltage-dependent current, whereby outward currents at positive membrane potentials are relatively larger than inward currents at negative potentials, such voltage dependence can reflect effects of membrane potential on single-channel conductance, channel open probability, or both. However, only the latter effect on P O constitutes voltage gating. This is not a semantic issue because it has clear mechanistic implications: voltage gating implies the existence of a "voltage sensor" that can react to changes in membrane potential to initiate conformational changes that favor either open or closed states of the channel. Given that Panx1 channels typically mediate voltage-dependent, outwardly rectifying whole-cell currents (Bruzzone et al., 2003;Romanov et al., 2012;Sandilos et al., 2012;Jackson et al., 2014), and in light of early observations suggesting that channel unitary conductance of Panx1 was linear and unaffected by membrane potential (Bao et al., 2004;Thompson et al., 2006; Fig. 1, B and E), it was reasonable to suggest that voltage-dependent currents reflected an underlying voltage gating of the channel. This view garnered indirect support when normalized current-voltage (I-V) relationships of single-channel amplitudes from basally active mouse Panx1 (i.e., unitary conductance) were out of register with superimposed I-V relationships of Panx1 whole-cell currents. It was suggested that this misalignment of the single-channel and whole-cell I-V curves could be explained if channel P O is reduced at negative membrane potential, i.e., if Panx1 channels are voltage-gated (Romanov et al., 2012). A recent study also suggested that the P O of basally active mouse Panx1 increased with membrane depolarization and invoked an unconventional mechanism in which voltage-dependent anion flux modulates gating (Nomura et al., 2017).
A different conclusion was reached, however, when voltage gating was assessed by direct measurement of channel P O in human PANX1 channels activated by C-terminal cleavage. Specifically, those cleavage-activated channels display essentially identical P O over a wide range of membrane potentials (i.e., from −80 to 80 mV; Fig. 3 B), indicating that channel gating is independent of voltage. Although one could argue that caspase-mediated cleavage of the C-tail might eliminate the voltage-gating mechanism, this is not supported by the observation that whole-cell I-V relationships were unaltered in Panx1 channels that retain varying numbers of C-tails. In addition, the unitary conductance and whole cell I-V relationships of human PANX1 channels show essentially identical outward rectification when activated by caspase cleavage (C-tail removed) or α1-adrenoceptor signaling (C-tail intact; Chiu et al., 2017). These observations suggest that it is unlikely that the C-tail of PANX1 is responsible for voltage-dependent gating. In sum, direct measures of P O provide little evidence for voltage-dependent gating of human PANX1 channels.
In light of this, we reexamined the data and assumptions inherent in the previous indirect analysis that inferred voltage-gating of Panx1 based on comparisons between normalized single-channel and whole cell I-V relationships and identified a critical factor that likely yielded misleading information (Romanov et al., 2012). That analysis involved a normalization procedure that was based on a single data point (i.e., peak current at 80 mV); this single-point normalization does not recognize the outwardly rectifying nature of the whole-cell and single-channel currents, and disproportionally skews the normalized single channel I-V relationship, especially at hyperpolarized potentials. By using a more appropriate two-point normalization procedure and making comparisons with the CBX-sensitive whole cell current component, we found that unitary conductance and whole cell currents of C-terminally cleaved human PANX1 are closely aligned across a wide range of voltages (Fig. 3 C). This indicates that the single channel rectification can account essentially entirely for the whole-cell rectification and obviates any need to invoke voltage-gating of the channel. Moreover, given that Panx1 channels allow both cation and anion permeation (Pelegrin and Surprenant, 2006;Santiago et al., 2011;Seminario-Vidal et al., 2011), the unconventional voltage-gating mechanism recently proposed for the channels, which depends on anionic selectivity, should receive additional scrutiny (Nomura et al., 2017).
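The logic of this comparison can be made concrete with a small numerical sketch (all values below are illustrative assumptions, not recorded data): if the open probability is voltage independent, the whole-cell I-V relationship is simply a scaled copy of a rectifying single-channel I-V relationship, so the two curves superimpose after normalization and no voltage gating needs to be invoked.

```python
import numpy as np

# Illustrative (not measured) values: an outwardly rectifying unitary conductance
# rising from ~15 pS at -80 mV to ~96 pS at +80 mV, interpolated linearly.
V = np.linspace(-80, 80, 161)                              # membrane potential, mV
g_unitary = 15.0 + (96.0 - 15.0) * (V + 80.0) / 160.0      # pS
i_single = g_unitary * V * 1e-3                            # single-channel current, pA

# Assume a voltage-INDEPENDENT open probability and a fixed channel count.
P_open = 0.4
N_channels = 500
I_whole_cell = N_channels * P_open * i_single              # whole-cell current, pA

# With constant P_open, the whole-cell I-V curve is an exact scaled copy of the
# single-channel I-V curve: rectification of the unitary conductance alone
# reproduces whole-cell rectification, with no voltage gating required.
nonzero = V != 0
scaling = I_whole_cell[nonzero] / i_single[nonzero]
assert np.allclose(scaling, N_channels * P_open)
```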
A structure-function model for Panx1 activation by displacement of C-tails
Using the substituted cysteine accessibility method (SCAM) to analyze inhibitory effects of thiol-modifying reagents on mouse Panx1 current in Xenopus oocytes, Wang and Dahl (2010) suggested that a region near the first TM domain and the first extracellular loop forms the outer pore lining, and the C-tail is localized within the pore itself. In agreement with this work, our group later demonstrated that currents from C-terminally truncated PANX1 channels were inhibited by overexpression of C-tails, suggesting that they have an autoinhibitory role (Sandilos et al., 2012), and that physical dissociation of the autoinhibitory C-tail is necessary for cleavage-mediated PANX1 activation (Sandilos et al., 2012). By analyzing unitary properties of Panx1 channels, we further demonstrated that both unitary conductance and P O , along with ATP/dye permeation, are gradually increased when C-tails are sequentially removed from the channel complex. These observations led us to propose a structure-function model that envisions the process of Panx1 activation by either irreversible or reversible displacement of C-tails. In such a model (Fig. 4 A), increased conductance may reflect reduced steric hindrance associated with C-tail removal; such a steric effect is implied by the stepwise increase in single-channel currents as additional tails were removed, and by the observation that passage of small ions (i.e., current) occurred when only a single C-tail was removed but permeation to larger molecules required removal of at least two C-tails. Cleavage of the C-tails may also decrease resistance by shortening the length or increasing the width of the permeation pathway. Note, however, that C-tails appear to have a negligible effect on ion selectivity, as indicated by a similar reversal potential among channels containing varying numbers of intact C-tails.
In addition to the quantized increases in unitary conductance, removal of C-tails also increased channel open probability in a stepwise fashion. Even so, it is clear that the C-tails cannot form the actual channel gate because fully cleaved channels transition between open and closed states, with a maximum P O of ∼0.4. Interestingly, human PANX1 channels activated by α1 adrenoceptors achieve a unitary conductance similar to that of fully cleaved channels, despite their intact C termini. The major difference is that α1 receptor-activated channels show substantially shorter open time than C-tail-cleaved channels (Fig. 4 B). The more flickering behavior of α1 adrenoceptor-activated channels may reflect repeated association and dissociation of C-tails from the channel pore (e.g., in response to rapid addition or removal of the posttranslational modifications) or a reduced binding affinity caused by stable posttranslational modifications. On the other hand, the flickering opening may be caused by allosteric effects on Panx1 channel gating by receptor-induced posttranslational modifications (Fig. 4 C). Interestingly, ATP permeation has been reported in α1D adrenoceptor-activated mouse Panx1 despite similar unitary conductance of the unstimulated and receptor-activated channels (Billaud et al., 2015). High-resolution, 3-D structures of Panx1 channels activated by different mechanisms would help reveal the specific conformations that account for these different channel properties.
Remaining questions for activation of Panx1 channels
Several questions remain unanswered regarding the various gating mechanisms of Panx1 channels, especially for reversible modes of channel activation such as high extracellular K+ or NMDAR-Src signaling. For example, does the model of C-tail displacement apply in these circumstances, and/or does high K+ or tyrosine phosphorylation activate Panx1 channels in a progressive manner that reflects sequential subunit modification? For NMDAR-Src signaling, where there appears to be basal phosphorylation of the unstimulated channel, is there a specific number of phospho-tyrosines that supports channel activation? For these and similar questions, a comprehensive analysis of the unitary properties of Panx1 channels activated by these mechanisms is required.
Another set of intriguing questions relates to differences between mouse and human Panx1 channels. Unlike human homologues, mouse Panx1 is basally active without additional stimulation (Ma et al., 2012;Romanov et al., 2012;Sandilos et al., 2012); this may be partially explained by a nonconserved region following the C-terminal caspase cleavage site (Sandilos et al., 2012). Most puzzling, however, is the observation that unstimulated mouse Panx1 channels have conductance properties that appear to be quite similar to those of human PANX1 after cleavage or α1 adrenoceptor activation: the basally active channels are also outwardly rectifying, with a peak conductance of ∼70 pS. The flickering behavior and the mean open time of unstimulated mouse Panx1 channels resemble the gating kinetics of full-length human PANX1 channels activated by α1 adrenoceptors (Romanov et al., 2012;Chiu et al., 2017;Nomura et al., 2017). Nevertheless, despite unitary properties that are generally similar to receptor-activated human PANX1, the unstimulated mouse Panx1 channels do not appear to support ATP release or dye uptake (Romanov et al., 2012;Wang et al., 2014;Billaud et al., 2015). The reasons for this are currently not known. It is possible that the lower P O or shorter open time of unstimulated mouse Panx1 is incompatible with permeation of ATP or fluorescent dyes.
Nevertheless, given that similar gating mechanisms are likely shared by mouse and human Panx1, further exploration of the structural and biophysical differences between the mouse and human homologues may provide better understanding of Panx1 gating. In addition, it is known that mouse Panx1 can be further activated by caspase-mediated cleavage or by α1 adrenoceptor signaling (Qu et al., 2011;Billaud et al., 2015). Whether mouse Panx1 channels undergo a graded increase in both unitary conductance and P O in response to various activators requires further analysis. Aside from a relative hypotension during the active period, genetic deletion of Panx1 in mice is generally benign in the absence of an insult or physiological challenge (e.g., with ischemia-reperfusion or nerve injury; Bargiotas et al., 2011;Billaud et al., 2015;Weaver et al., 2017). Are native mouse Panx1 channels basally silent in vivo because of unidentified inhibitory mechanisms? Do native Panx1 channels contribute to cellular homeostasis and membrane potential? Also note that Panx1 orthologues from different species (e.g., rat, mouse, or human) have very different splice variants (Bruzzone et al., 2003;Ma et al., 2009;Li et al., 2011), and caution should be used when extrapolating between these different channel orthologues.
Outlook
Studies of Panx1 are drawing tremendous attention in various fields because of the diverse (patho)physiological functions of the channel. Although many channel activation mechanisms have been described, other regulatory mechanisms remain relatively unexplored. For example, little is known regarding transcriptional or translational regulation (Dufresne and Cyr, 2014;Zhang et al., 2015), channel protein trafficking and degradation (Boassa et al., 2007, 2008;Gehi et al., 2011), or context-specific interactomes. In addition, current views on Panx1 have largely focused on its role in purinergic signaling; however, its role as an ion channel, affecting cell membrane potential dynamics, or its potential ability to release other large molecules, such as glutamate, osmolytes, or other electro-neutral permeants, remain to be examined. To date, Panx1 activity has been investigated primarily by loss-of-function approaches, either pharmacological or genetic, and there may be great value in taking the converse approach if Panx1 channel activators could be discovered. Moreover, efforts to elucidate the biophysical mechanisms for various inhibitors and to search for specific activators would provide new information regarding channel function, and may present a potential avenue for designing better agents to treat various Panx1-related defects in a context-dependent manner.
Acknowledgments
This work was supported by grants from the National Institutes of Health: R01 GM107848 (to D.A. Bayliss) and P01 HL120840 (to D.A. Bayliss and B.N. Desai).
The authors declare no competing financial interests. Lesley C. Anson served as editor.
|
2018-04-03T04:34:21.793Z
|
2018-01-02T00:00:00.000
|
{
"year": 2018,
"sha1": "eb2fbab7620d09b71a3a205fd5546e691a5f88ba",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jgp/article-pdf/150/1/19/1234851/jgp_201711888.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb2fbab7620d09b71a3a205fd5546e691a5f88ba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
258397828
|
pes2o/s2orc
|
v3-fos-license
|
Estimating time-dependent contact: a multi-strain epidemiological model of SARS-CoV-2 on the island of Ireland
Mathematical modelling plays a key role in understanding and predicting the epidemiological dynamics of infectious diseases. We construct a flexible discrete-time model that incorporates multiple viral strains with different transmissibilities to estimate the changing patterns of human contact that generates new infections. Using a Bayesian approach, we fit the model to longitudinal data on hospitalisation with COVID-19 from the Republic of Ireland and Northern Ireland during the first year of the pandemic. We describe the estimated change in human contact in the context of government-mandated non-pharmaceutical interventions in the two jurisdictions on the island of Ireland. We take advantage of the fitted model to conduct counterfactual analyses exploring the impact of lockdown timing and introducing a novel, more transmissible variant. We found substantial differences in human contact between the two jurisdictions during periods of varied restriction easing and December holidays. Our counterfactual analyses reveal that implementing lockdowns earlier would have decreased subsequent hospitalisation substantially in most, but not all cases, and that an introduction of a more transmissible variant - without necessarily being more severe - can cause a large impact on the health care burden.
Introduction
During an epidemic, behavioural changes are encouraged, and sometimes mandated, to curtail infectious disease transmission. These changes aim to reduce the number of contacts between people broadly (e.g., closure of schools, workplaces, commercial establishments, roads, and public transit; restriction of movement; cancellation of public events; maintenance of physical distances in public) and reduce the chance of infection upon contact (e.g., use of personal protective equipment). Furthermore, tracing and isolating known infectious cases can limit the contact between infectious and susceptible individuals. These actions are collectively referred to as non-pharmaceutical interventions (NPIs) and are often mandated by governments. Slowing the surge of infection (or "flattening the curve") affords an opportunity to reduce infection-induced mortality and morbidity, alleviate health care burden and wait out an epidemic until pharmaceutical solutions (i.e., treatment and vaccines) become available. Implementation of mandated NPIs in historical outbreaks, including during the 1918 influenza pandemic, was crucial for preventing excess death in the United States [1]. NPIs have also been mandated globally during the COVID-19 pandemic.
Mathematical modelling and quantitative analyses of empirical data play a pivotal role in understanding and predicting epidemiological dynamics. Mechanistic epidemiological models have been widely applied to study the dynamics of SARS-CoV-2, and to make predictions of clinical outcomes under alternative scenarios (e.g., an assumed decrease in physical contact [2]). Despite their public health benefits, social distancing measures have been shown to incur high costs in several domains, including the economy [3], mental health [4], and civil liberty [5]. Thus, it is crucial to quantify infection contact, or its derivative quantities like the effective reproductive number, R, to monitor changes in infection burden, achieve desired public health outcomes and improve policy transparency and public engagement. While it is not possible to measure human contact (and its effect on disease transmission) directly, fitting a mathematical model to longitudinal data on observed processes such as reported cases and hospital admissions allows estimation of human contact and its derivatives [6,7].
Many epidemiological models follow a rich tradition of ordinary differential equation (ODE) models [8], which track the spread of infection and often immunity in a population. Specifically, ODE models assume that waiting time processes (such as infectious period and time to hospitalisation) are memoryless, that is to say, that the waiting time until an event (such as recovery and hospitalisation) does not depend on the elapsed time. Seen at the population level, this assumption implies that times spent by individuals in each compartment are distributed exponentially, implying large individual variability. While mathematically convenient, the lack of memory is unsupported for certain epidemiological processes [9], and empirical evidence indicates other probability distributions with smaller individual variability and nonmonotonic densities (e.g., gamma, Weibull and log-normal distributions) are better equipped to describe those processes. Previous studies have also demonstrated that quantitative predictions of epidemiological outcomes depend on the assumed probability distribution in a variety of systems [10-13], including SARS-CoV-2 [7]. As such, it is pertinent to incorporate realistic waiting time distributions, particularly when one aims to obtain quantitative and short-term, rather than qualitative and long-term, insights from epidemiological models.
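The contrast between memoryless and non-memoryless waiting times can be illustrated with a short sketch comparing the hazard (the daily risk of leaving a compartment given the time already spent there) of an exponential and a gamma distribution with the same mean; the parameter values are arbitrary illustrations, not estimates from this study.

```python
import numpy as np
from scipy import stats

mean_wait = 5.0  # days; an arbitrary illustrative mean waiting time

# Exponential (memoryless): the hazard of leaving the compartment is constant,
# regardless of how long the individual has already spent there.
expon = stats.expon(scale=mean_wait)
# Gamma with the same mean but smaller variance: the hazard rises with time spent.
gamma = stats.gamma(a=4, scale=mean_wait / 4)

t = np.array([1.0, 3.0, 5.0, 8.0])           # days already spent in the compartment
hazard_exp = expon.pdf(t) / expon.sf(t)      # constant at 1/mean_wait
hazard_gam = gamma.pdf(t) / gamma.sf(t)      # increases with elapsed time ("memory")
print(np.round(hazard_exp, 3), np.round(hazard_gam, 3))
```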
Here we develop a compartmental epidemiological model that accurately predicts inter-individual contact over the first year of the epidemic in the Republic of Ireland (ROI) and Northern Ireland (NI). These neighbouring jurisdictions on the island of Ireland present a compelling contrast due to independent policymaking over a small geographical area. We use a flexible discrete-time approach that incorporates waiting time distributions that reflect accurate assumptions about times spent in each compartment [7].
The COVID-19 pandemic has been characterised by subsequent waves of novel variants with varying disease transmissibility and severity. To separate the effect of human behaviour from the difference in transmissibility of multiple strains, we explicitly model multiple SARS-CoV-2 strains, with differing transmissibilities, seeded in the population at differing times. Our multiple strain model allows consistent estimates of relative human contact between periods even when the dominant variant has changed.
Previous studies of SARS-CoV-2 have estimated the time-dependent contact ratio and its derived quantities using continuous functions of time (e.g., basis splines [14]) or piece-wise, discrete functions of time (e.g., consisting of specified periods corresponding to NPI mandates [7]). However, both approaches can be fraught with challenges. On the one hand, it is not obvious how to choose the appropriate extent of smoothing of a continuous function, for example, by deciding the number of knots in a basis spline function. On the other hand, abrupt changes imposed by piece-wise functions are at odds with empirical data on human movement during the COVID-19 pandemic [15]. Furthermore, it is difficult to establish a precise definition of the level of an intervention, as definitions changed over time [16,17]. To address these concerns, we develop an intermediate approach, in which we introduce a prior that allows smoothness in human contact between neighbouring weeks in the absence of information otherwise from empirical data.
In the Irish context, compartmental models have been used by several other research groups to understand the dynamics of the virus, make forecasts of outcomes under various scenarios, and assess economic impacts of policy restrictions [18][19][20][21][22][23]. Our study complements these studies by providing a high-resolution description of the change in human contact over time, comparing the two jurisdictions on the island of Ireland. Leveraging the epidemiological model and estimated parameters, we also perform counterfactual analyses to explore the effects of alternative interventions on cumulative hospitalisation and assess the impact of a novel variant.
Epidemiological model and data
Our multi-strain discrete-time model consists of three types of host compartments (Fig. 1): a susceptible compartment (S) and two infectious compartments for viral strain s (J s,i and Y s,i , where i indicates the infection age, i.e., day since exposure). The compartments J and Y differ in their future clinical outcome: individuals in the compartments Y eventually get hospitalised while those in J remain out of hospitals. As our primary focus is the inference of NPI in the community, we did not consider within-hospital transmission, recurring hospital admissions of the same patients, or demographic turnover (including death). We ignored the dynamics of recovered hosts, who were assumed to have minimal influence on transmission during the period investigated.
Our model is parameterised by θ, the probability of hospitalisation, Δ s , the daily probability of infection with strain s, and a discrete random variable H that characterises a set of probabilities governing daily transitions to hospital. Δ s is informed by a discrete random variable, Z, that characterises a set of probabilities governing daily transitions into infected compartments, and is defined in Section 2.1.1. Z denotes the time in days from exposure of the infector to exposure of the infectee for a randomly chosen infectee-infector pair (i.e., generation interval), and can be viewed as the average relative contribution of each day to the individual reproduction number. The probability that infection occurs at infector age i is ζ i = Pr(Z = i). Transmission of infection does not affect the individual's stay in the compartment; however, for transition out of the Y compartments, the event of going to hospital at infection age i is conditional on still being in the compartment at infection age i − 1. Given H, the random variable representing time from infection to hospital admission in hospitalised patients, we denote by η i the probability of hospitalisation at infection age i given the individual was still not hospitalised at infection age i − 1, η i = Pr(H = i | H > i − 1). This is the discrete hazard of hospital admission at infection age i. We use published estimates for Z and H, as described in section 2.1.3 below. Our model assumes that the proportion of infected people hospitalised and the processes governing hospitalisation and recovery over time are constant across strains.
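As a minimal illustration of the discrete hazard defined above, the sketch below computes η i from a daily probability mass function for H; the specific numbers are placeholder assumptions for demonstration only, not the distributions used in the fitted model.

```python
import numpy as np

# Illustrative daily probability mass for H (days from exposure to admission in
# hospitalised patients); any discretised distribution summing to one would do.
h_pmf = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.18, 0.17, 0.13, 0.10, 0.06, 0.04])

# Discrete hazard of admission at infection age i (1-indexed):
# eta_i = Pr(H = i | H > i - 1) = pmf(i) / Pr(H > i - 1).
survival = 1.0 - np.concatenate(([0.0], np.cumsum(h_pmf[:-1])))  # Pr(H > i - 1)
eta = h_pmf / survival
print(np.round(eta, 3))
```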
Infection dynamics
We extend a discrete epidemiological modelling framework by Sofonea et al. [7] to accommodate multiple viral strains spreading simultaneously. First, we express the effective density of the infectious host population on a given day d that contributes to transmission of the strain s as I s (d) = c(d) Σ i ζ i [J s,i (d) + Y s,i (d)] (eq. 1), where J s,i (d) + Y s,i (d) is the number of individuals with strain s on day d with infection age i in the community. Multiplying by ζ i and summing over infection ages, this quantity can be regarded as the total 'potential for infection' in the community on day d. The effective infectious density, I s (d), is this sum scaled by the contact ratio, c(d), on day d. The contact ratio is the ratio of the contact rate on day d to the contact rate on day 0. The infectious density is thus a measure of the total amount of transmission in a completely susceptible population.

Fig. 1. Once infected, individuals progress to the next square each day (J and Y), capturing the memory effect of the infection age. After spending n j days, infectious hosts in J are no longer infectious. Alternatively, a fraction θ of infectious hosts (in Y) is admitted to the hospital with a delay specified by the probabilities η 1 , …, η n y , where η i is the probability that the individual is admitted to hospital on day i, conditional on their being infectious for i − 1 days. The grey arrows indicate the daily transition of individuals that occurs with probability 1.
We allow susceptible hosts to encounter the viral strain s with a probability Λ s (d), which as in [7] is assumed to follow a Michaelis-Menten function that saturates with the effective infectious density, I s (d), and the contact rate; under assumptions about initial conditions as in [7] we derive an expression for Λ s (d) (eq. 2), in which τ s is the relative transmission advantage of strain s, S 0 the population size, and R 0 1 the basic reproductive number of the original strain.
We define τ s as the ratio of the basic reproductive number of strain s to that of the original strain, i.e., R 0 s = τ s R 0 1 .
When a host encounters multiple strains, we model the interaction between strains assuming superinfection with priority determined by order of exposure: i.e., only the first strain that encounters a host establishes infection when the same host subsequently encounters multiple strains. Thus, in the case of two strains, the probability of getting infected with strain s, Δ s (d), is obtained from Λ s (d) and Λ s′ (d), where s′ denotes the non-focal strain. Similar expressions can be derived for more than two strains. It follows that the number of susceptibles on the next day is S(d + 1) = S(d)[1 − Σ s Δ s (d)]. Of those exposed to either viral strain, the proportion θ will develop severe symptoms and eventually be admitted to the hospital (Fig. 1).
For less severe cases that do not result in hospitalisation (J), the infection progresses towards recovery until the hosts are no longer infectious on day n j . Those that develop severe symptoms (Y) are admitted to hospital with probability η i on the i-th day following exposure.
It then follows that the number of hospital admissions on day d + 1 equals H(d + 1) = Σ s Σ i η i Y s,i (d), i.e., the admission hazard applied across the infection-age cohorts of the Y compartments.
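The daily bookkeeping described above can be sketched as follows. This is an illustrative Python implementation, not the authors' Stan code: the saturating form used for the infection probability and the equal split between strains when a host encounters both on the same day are assumptions standing in for the expressions not reproduced above.

```python
import numpy as np

def daily_update(S, J, Y, zeta, eta, c_d, tau, R0_1, S0, theta):
    """Advance the two-strain discrete-time model by one day (illustrative only).

    J and Y have shape (n_strains, n_ages); column i holds hosts at infection
    age i + 1. zeta holds the daily infectiousness weights (length n_j) and eta
    the discrete admission hazards (length n_y), with n_ages >= max(n_j, n_y).
    """
    n_j, n_y = len(zeta), len(eta)

    # Effective infectious density per strain (eq. 1): infection-age-weighted
    # infectious hosts, scaled by the day's contact ratio c_d.
    I = c_d * ((J[:, :n_j] + Y[:, :n_j]) @ zeta)

    # Daily per-strain infection probability; this saturating form in I is an
    # ASSUMED stand-in for the paper's eq. 2.
    lam = tau * R0_1 * I / (S0 + tau * R0_1 * I)

    # Superinfection with first-encounter priority; an equal split between the
    # two strains is ASSUMED when a host encounters both on the same day.
    both = lam[0] * lam[1]
    delta = np.array([lam[0] - 0.5 * both, lam[1] - 0.5 * both])
    new_inf = S * delta
    S_next = S - new_inf.sum()

    # Hospital admissions leave the Y compartments according to the hazard eta.
    admitted = Y[:, :n_y] * eta
    admissions_today = admitted.sum()
    Y_remaining = Y.copy()
    Y_remaining[:, :n_y] -= admitted

    # Age every cohort by one day and seed new infections at infection age 1.
    J_next = np.zeros_like(J); J_next[:, 1:] = J[:, :-1]
    Y_next = np.zeros_like(Y); Y_next[:, 1:] = Y_remaining[:, :-1]
    J_next[:, 0] = (1 - theta) * new_inf
    Y_next[:, 0] = theta * new_inf
    return S_next, J_next, Y_next, admissions_today
```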
Observed longitudinal data
Epidemiological models are often fitted to data on infected cases; however, case data depend on levels of testing, which varied over time during the COVID-19 pandemic. It can also be problematic to rely on data on deaths: for instance, many deaths in the ROI occurred following outbreaks in care homes, and thus data on deaths may not reflect disease spread in the general community. Biases and uncertainty in estimating the reproductive number arising from such issues are discussed elsewhere [6]. Thus, hospital admission data are likely a better reflection of the community spread of SARS-CoV-2. We used daily COVID-19 hospital admissions in the ROI and NI, reported respectively by the Central Statistics Office COVID data hub for the ROI [24], and the NI Department of Health [25]. One potential caveat of publicly available hospital admission data is their ambiguity on whether infections were acquired in the community or in health-care settings. Modelling studies of hospital-acquired SARS-CoV-2 in Germany [26] and England [27] estimate that roughly 10% and 20% of hospitalised cases, respectively, may have originated from transmissions within hospitals; such estimates are not available for Ireland to our knowledge. As our study focuses on community transmission alone, we conduct a sensitivity analysis fitting our model to 80% of reported hospital admission numbers.
Successive invasions of new variants have characterised the COVID-19 pandemic. Our study tracks two strains that circulated in the island of Ireland in the first 12 months of the pandemic: i.e., the original strain (initially detected in Wuhan, China) and the Alpha strain (also known as B.1.1.7., initially detected in Kent, UK). We used publicly accessible data on the frequency of the Alpha strain in the ROI [28], and NI [29], respectively.
Incorporating empirical estimates of waiting time distributions
Linking transitions within and between model compartments are two random variables, each describing a waiting time process. These are the infectious period (generation interval), Z, and the delay between infection exposure and hospitalisation, H. The probability distributions representing these random variables have been estimated elsewhere empirically for SARS-CoV-2 in a global and European context as described below.
Generation interval
The generation interval refers to the time between infection events in a pair of infector and infectee, reflecting the incubation duration and recovery timing. Here, we used the distribution of this interval to model the relationship between the age of infection (i.e., time since exposure) and the infectiousness of the infector. We employed an estimate by Ferretti et al. [30], who found the variation in SARS-CoV-2 generation interval was best described by the Weibull distribution with a mean interval of 5.5 days (shape = 3.29 and scale = 6.12). We truncated the Weibull distribution at the upper-integer-rounded 99% quantile; without this truncation, the discrete model would require infinitely many time-tracking sub-compartments due to the right-unbounded support [0, ∞). We then discretised the distribution because the dynamics unfold in discrete-time intervals of one day in our model; the upper limit of the discretised distribution corresponds to n j (eq. 6).
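A sketch of this truncation-and-discretisation step, using the Weibull parameters quoted above, might look as follows; renormalising the truncated mass to sum to one is an assumption about the implementation detail.

```python
import numpy as np
from scipy import stats

# Weibull generation interval (shape = 3.29, scale = 6.12; mean ~5.5 days).
gi = stats.weibull_min(c=3.29, scale=6.12)

# Truncate at the upper-integer-rounded 99% quantile, then discretise to days.
n_j = int(np.ceil(gi.ppf(0.99)))                 # upper limit of the daily support
days = np.arange(1, n_j + 1)
zeta = gi.cdf(days) - gi.cdf(days - 1)           # daily probability mass, zeta_i
zeta = zeta / zeta.sum()                         # renormalise after truncation (assumed)
print(n_j, np.round(zeta, 3))
```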
Exposure to hospital admission
The waiting time between exposure and hospital admission was estimated as the sum of the incubation period and the delay between symptom onset and hospitalisation. We assumed that the two waiting times were independent due to the absence of evidence otherwise. A meta-analysis of global, but predominantly Chinese, data found that the SARS-CoV-2 incubation period was log-normally distributed with parameters μ = 1.63 and σ = 0.50 [31], corresponding to a mean incubation time of 5.78 days (standard deviation of 3.97 days). The distribution of the waiting time between symptom onset and hospitalisation was estimated assuming a gamma distribution by Public Health England, with a mean of 5.14 days (standard deviation of 4.2 days) [32]. We fitted a gamma distribution to the simulated sum of the two distributions to represent the timing between exposure to infection and hospital admission (shape = 4.76 and rate = 0.435). As for the generation interval, we discretised the distribution, and the upper limit of the discretised distribution corresponds to n y (eq. 8).
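The construction described here can be sketched as follows; the shape and scale of the onset-to-admission gamma are moment-matched from the quoted mean and standard deviation (an assumption), and the fitted values will depend on these choices rather than reproducing the quoted shape and rate exactly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

# Incubation period: log-normal with mu = 1.63 and sigma = 0.50.
incubation = stats.lognorm(s=0.50, scale=np.exp(1.63)).rvs(n, random_state=rng)

# Symptom onset to admission: gamma with mean 5.14 and SD 4.2 days; shape and
# scale below are moment-matched from those summary statistics (an assumption).
shape = (5.14 / 4.2) ** 2
scale = 4.2 ** 2 / 5.14
onset_to_admission = stats.gamma(a=shape, scale=scale).rvs(n, random_state=rng)

# Fit a gamma distribution to the simulated exposure-to-admission delay.
a_hat, _, scale_hat = stats.gamma.fit(incubation + onset_to_admission, floc=0)
print(round(a_hat, 2), round(1 / scale_hat, 3))   # fitted shape and rate
```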
Weekly contact ratio
Here, we defined the contact ratio c as the human contact rate relative to the pre-pandemic, pre-intervention baseline (eq. 1 & 2) and estimated this quantity using a piece-wise function consisting of weekly intervals. Specifically, we estimated the ratio in each area a (NI and ROI), per week w (i.e., c a, w ) as a function of ϕ a, w , the log proportional change in the contact ratio from the previous week. We index w from the date of the first public health intervention in either jurisdiction, which took place in ROI on 2020-03-12 (Supporting Information S1: Table S1 & S2); hence the preceding, pre-intervention contact ratios are defined as 1.0.
With this formulation, hierarchical Bayesian inference with priors on the ϕ a, w allows us to estimate the time-varying weekly contact ratios with minimal prior information specific to the modelled system. Specifically, we used a prior ϕ a, w ∼ N(0, ε), where ε is a hyperparameter specifying the standard deviation of ϕ, such that c a, w would equal c a, (w− 1) in the absence of signals from epidemiological data (Table 1). A priori, this formulation avoids over-fitting random weekly variation at the potential risk of smoothing over valid signals of an abrupt change in the weekly contact ratio, for example, following an introduction of lockdown measures. To check for such bias, we examined the extent to which our smoothing approach affects the estimation of sudden changes in the contact ratio, c. We showed that our formulation is unlikely to introduce substantial bias (Supporting Information S2).
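The relationship between ϕ a, w and c a, w , and the smoothing effect of the N(0, ε) prior, can be sketched as follows; ε and the illustrative "abrupt drop" are arbitrary values, not estimates from the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def contact_ratios(phi):
    """c_w = c_{w-1} * exp(phi_w), with the pre-intervention ratio fixed at 1."""
    return np.exp(np.cumsum(phi))

# A prior draw: phi_w ~ N(0, eps). Without a signal from the data, phi_w stays
# near zero, so the contact ratio changes smoothly between neighbouring weeks.
eps = 0.1                                  # illustrative hyperparameter value
c_prior = contact_ratios(rng.normal(0.0, eps, size=52))

# A posterior-like sequence in which the data support an abrupt drop in week 4.
phi_post = np.zeros(52)
phi_post[3] = -0.5                         # roughly a 40% week-on-week drop
c_post = contact_ratios(phi_post)
print(np.round(c_post[:6], 2))
```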
Initial conditions
The first case of SARS-CoV-2 on the island of Ireland was identified in NI on 2020-02-27 from an individual travelling back from Northern Italy via Dublin Airport located in ROI (Table S2). Two days later, the first official case in the ROI was also confirmed from a traveller from Northern Italy (Table S1). Initially, most known cases are travel-related, and contact tracing may successfully contain infections. As our model solely tracks community transmission, we started our simulations on the first day that community transmission was detected on the island of Ireland: 2020-03-05 (Table S1). Coincidentally, the exponential growth of confirmed cases appears to have begun around 2020-03-05 in both ROI and NI [24,33]. We account for the uncertainty of the beginning of community transmission by estimating the initial infectious density independently in the two jurisdictions (Table 1). We assume implicitly that the contribution of the travel-related cases is negligible once the infection starts growing exponentially in the community.
The first cases of the Alpha strain were reported in November and December 2020, respectively, in ROI and NI (Tables S1 & S2). Due to high connectivity with the island of Britain, the Alpha strain likely entered the island of Ireland soon after it emerged in England, where the strain was detected in mid-September [34]. By February 2021, the Alpha strain comprised the majority of infections in both ROI and NI. To estimate the date of introduction, we fitted a three-parameter logistic function to the longitudinal data of the Alpha frequency and identified the date on which Alpha cases (frequency of Alpha × known new cases) intersects 1: the date of introduction was estimated as 2020-09-22. Again, we account for the sensitivity of the timing of introduction by estimating the founding infectious density of the Alpha strain, independently in the two areas (Table 1). In our model, viral strains differ only in their transmissibility, τ s .
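A sketch of this estimation step is shown below, with entirely synthetic frequency and case values standing in for the public ROI and NI series.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, K, r, t_mid):
    """Three-parameter logistic curve for the Alpha strain frequency."""
    return K / (1.0 + np.exp(-r * (t - t_mid)))

# Placeholder series (days since an arbitrary origin, Alpha frequency, and a flat
# daily case count); these are illustrative values, not the study's data.
t_obs = np.array([80.0, 90.0, 100.0, 110.0, 120.0, 130.0, 140.0])
freq_obs = np.array([0.01, 0.03, 0.10, 0.30, 0.55, 0.80, 0.90])
daily_cases = 400.0

params, _ = curve_fit(logistic3, t_obs, freq_obs, p0=[1.0, 0.1, 110.0],
                      bounds=([0.0, 0.0, 0.0], [1.0, 5.0, 300.0]))

# The introduction date is taken as the first day on which the expected number of
# Alpha cases (fitted frequency x known new cases) reaches one.
t_grid = np.arange(0, 150)
alpha_cases = logistic3(t_grid, *params) * daily_cases
introduction_day = int(t_grid[np.argmax(alpha_cases >= 1.0)])
print(np.round(params, 3), introduction_day)
```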
Fitting
We used a Bayesian approach to fit the above model to two types of longitudinal data from the ROI and NI: daily counts of hospital admissions and the Alpha strain frequency. Model parameters are detailed in Table 1. Hospital admissions per day were modelled as log-normally distributed with standard deviation parameters σ h . We set the probability of hospital admission given infection, θ, to the observed figure in ROI published by the Health Service Executive [42] ( Table 1). The frequency of the Alpha strain was fitted assuming the beta proportion distribution with a standard deviation parameter, σ f . We fitted our model to the data from the first year of the pandemic from the first confirmed case of community transmission on the island, which was detected on 2020-03-05, in ROI, to the end of February 2021. Our modelling period precedes the widespread administration of the full course of vaccination in either jurisdiction: the proportion of fully (twice) vaccinated individuals in ROI and NI was less than 3% and 2% at the end of February 2021, respectively [24,33].
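The two observation models can be written as plain log-likelihood functions, as sketched below; converting the beta-proportion standard deviation parameter into the usual shape parameters is an assumption about the parameterisation, not a description of the Stan implementation.

```python
import numpy as np
from scipy import stats

def lognormal_loglik(observed, predicted, sigma_h):
    """Log-normal observation model for daily hospital admissions."""
    return stats.lognorm(s=sigma_h, scale=predicted).logpdf(observed).sum()

def beta_proportion_loglik(observed_freq, predicted_freq, sigma_f):
    """Beta observation model for the Alpha frequency; converting the mean and the
    standard deviation parameter into alpha/beta shapes is an ASSUMPTION."""
    nu = predicted_freq * (1 - predicted_freq) / sigma_f ** 2 - 1   # concentration
    a = predicted_freq * nu
    b = (1 - predicted_freq) * nu
    return stats.beta(a, b).logpdf(observed_freq).sum()

# Illustrative values only.
print(lognormal_loglik(np.array([52.0, 60.0, 58.0]), np.array([50.0, 55.0, 62.0]), 0.2))
print(beta_proportion_loglik(np.array([0.10, 0.30]), np.array([0.12, 0.28]), 0.05))
```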
Our model was written in Stan 2.21.2 and fitted through the RStan interface [35]. We fitted the model in parallel in four independent chains, each with 5000 sampled iterations and 1000 warmup iterations. For diagnostics, we confirmed over 400 effective samples and ensured convergence of independent chains (R̂ < 1.1) for all parameters [36]. We assessed the goodness of fit to data using standardised residuals (Supporting Information S3). We also quantified the posterior z-score and posterior contraction to examine the accuracy and precision of posterior distributions and the relative strength of data to prior information [37] (Supporting Information S4).
Counterfactual analyses
Estimating human contacts with a multi-strain model separates the effect of human behaviour from the difference in transmissibility of multiple strains. This separation allows us to leverage the epidemiological model and estimated parameters to simulate an epidemic based on data-generating processes consistent with the observed data. In turn, we can modify one part of the fitted model, while everything else is held constant, to conduct counterfactual analyses, which allows us to explore the impact of different factors that affect disease transmission.

Table 1. Description of model parameters and their fixed values, or prior distributions used in Bayesian statistical inference. We assigned an informed prior for R 0 and τ 2 , and a generic, weakly informative prior (half-N(0, 1)) for I s,a (0), ε and measurement error parameters.
Here, we explored two counterfactual scenarios: to examine the effect of lockdown timing; and to isolate the impact of the more transmissible Alpha strain on the hospitalisation outcome.
Effect of lockdown timing
We explored the impact of the timing of lockdown introduction by simulating an epidemic with parameters estimated from the model fitted to the observed data on hospitalisations and strain proportions, but with the contact ratios counterfactually shifted earlier by seven and 14 days relative to the actual start of the three lockdowns imposed in the ROI and NI. We then compared counterfactual scenarios and reality by computing the percentage difference of the cumulative hospital admission numbers for the subsequent days under the counterfactual versus the observed scenarios.
Suppose the intervention is to shift the first lockdown date earlier by seven days. Denoting the actual lockdown date d l and the counterfactual contact ratio on day d as c*(d), the counterfactual infectious density on day d follows from eq. 1 and is denoted I s *(d), where J*, Y* denote the counterfactual numbers in these compartments on day d. From this follow the counterfactuals on day d, Λ s *(d), Δ s *(d), and H*(d + 1), from eqs. 2 to 9. For the second and third lockdowns, we assume that the epidemic had proceeded as observed up to the second and third lockdown, respectively.
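The construction of the counterfactual contact-ratio series can be sketched as follows; how the series is padded at its end after shifting is an assumption.

```python
import numpy as np

def shift_lockdown_earlier(c_daily, lockdown_day, shift=7):
    """Counterfactual contact ratios with the lockdown brought forward by `shift`
    days: from day (lockdown_day - shift) onward, each day takes the ratio observed
    `shift` days later; the tail is held at the last observed value (an assumed
    choice for padding the end of the series)."""
    c_star = c_daily.copy()
    start = lockdown_day - shift
    c_star[start:-shift] = c_daily[start + shift:]
    c_star[-shift:] = c_daily[-1]
    return c_star

# Illustrative series: pre-lockdown ratio 0.9 dropping to 0.4 on day 30.
c = np.concatenate([np.full(30, 0.9), np.full(40, 0.4)])
c_cf = shift_lockdown_earlier(c, lockdown_day=30, shift=7)
print(c[20:35])     # factual: drop at day 30
print(c_cf[20:35])  # counterfactual: drop at day 23
```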
Impact of a more transmissible variant
We investigated the extent to which the introduction of the more transmissible Alpha strain contributed to public health burden by simulating an alternative epidemic with parameters estimated from the model fitted to the observed data on hospitalisations and strain proportions, but without introducing the Alpha strain in September 2020. We then used the percentage difference to compare the cumulative hospital admission numbers between the counterfactual and real scenarios over time until the end of the modelled period at the end of February 2021.
Estimated contact ratios
We estimated a rapid decline in contact ratios during the first month of the pandemic before a strict lockdown was implemented (Fig. 3; Tables S1 & S2). The first lockdown started on 2020-03-28 in both jurisdictions (Fig. 3 NI-a & ROI-a). By this date, the estimated contact ratio was already down to about 60% of the pre-pandemic baseline in both jurisdictions. This finding is consistent with pre-lockdown movement alterations reported elsewhere, for example in China, Italy and New York [43]. The changes were likely driven by lighter restrictions that preceded strict lockdowns and spontaneous behavioural changes in response to increasing perceived infection risk (e.g., increased incidence within the social sphere and media coverage) [44]. During the first lockdown, human contact fluctuated only slightly.
In both jurisdictions, the easing of the first lockdown began on 2020-05-18 (Fig. 3 NI-b & ROI-b), and a long period of slow restriction easing took place during the summer months. In the ROI, the estimated contact ratio increased from June and fluctuated between approximately 70% and 80% of the pre-pandemic baseline in July, August and September. In NI, the contact ratio rose to a peak around the end of July. We detected higher human contact in NI than the ROI in mid-June (indicated by 95% predictive intervals of the difference excluding zero; Fig. 3; bottom panel). Of potential relevance, we note that all non-essential retail outlets were allowed to reopen earlier in NI than in the ROI during this period, on 2020-06-12 and 2020-06-29, respectively (Fig. 3 NI-c & ROI-c; Tables S1 and S2). Human contact in NI decreased through August but rose again to about 90% of baseline by the end of September, with no parallel increase in the ROI (Fig. 3 NI-d & ROI-d). This period corresponds to the first time primary and secondary teaching resumed in person in both jurisdictions. The increased contact in NI mirrors a trend detected in England, where the September school reopening led to increased cases, most notably among the teaching staff [45].
Ahead of the second lockdown (Fig. 3 NI-e & ROI-e), the estimated contact ratio declined to about 60% of baseline in both jurisdictions during October. Unlike during the other two lockdowns, the contact ratios tended to increase during the lockdown period in both jurisdictions throughout November (Fig. 3; top and middle panels).
At the beginning of December in the ROI, several mitigation measures were lifted, allowing non-essential businesses, restaurants, cafes and gastro-pubs to open, as well as relaxing household gathering restrictions (Fig. 3 ROI-f; Table S1). This period coincides with an increasing trend in the estimated contact ratio, which reached about 90% of the pre-pandemic baseline the week before Christmas. In NI, on the other hand, the lockdown remained in place almost two weeks longer (Fig. 3 NI-f; Table S2), and the estimated contact ratio reached a maximum of about 75% of the baseline value before Christmas. Our estimates indicate that human contact was substantially higher in the ROI than NI for two weeks over the Christmas period (indicated by 95% predictive intervals of the difference excluding zero; Fig. 3 NI-g & ROI-g): the ROI experienced the highest per capita infection rate in the world during this period [46]. In both NI and the ROI, the third lockdown introduced in late December 2020 coincided with the lowest contact ratio (Fig. 3; green), followed by the first lockdown in late March (Fig. 3; yellow) and the second lockdown in November (Fig. 3; blue).

Fig. 3. Estimated weekly contact ratios in Northern Ireland (top) and the Republic of Ireland (middle) and differences in the contact ratio between the two jurisdictions (bottom). The three lockdown periods, corresponding to the strictest restrictions in each jurisdiction, are marked in yellow, blue and green, respectively. The letters a-g in black dots correspond to the timing of events described in the main text. The black line and grey bands correspond to the median, the 50% (dark) and 95% (light) credible intervals. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
The above findings are based on the assumption that 100% of reported hospital admission cases originate in the community. However, recent estimates indicate up to 20% of hospital admission numbers may be attributable to transmissions that take place in health-care settings [27], though the estimates vary across countries and viral strains [26]. Our sensitivity analysis (assuming only 80% of reported admission numbers were community-acquired) demonstrates little impact of this assumption on the estimated contact ratio (Supporting Information S5: Fig. S5). The difference in the fitted proportion of hospital admission numbers was absorbed by a 25-30% change in the estimated initial effective infectious densities (Supporting Information S5: Fig. S6), which also affect incidence but have less informative priors than either the contact ratios or R 0 (Supporting Information S4: Fig. S4).
Counterfactual scenarios

Effect of lockdown timing
Lockdown measures have been shown effective in reducing the infection burden of SARS-CoV-2, and the timing of introduction is the most significant factor in determining their effectiveness [47,48]. We found that bringing forward the lockdown dates by either seven or 14 days would have substantially reduced the cumulative hospitalisation over the subsequent 50 days from the date of lockdown in most scenarios (indicated by the 95% predictive interval excluding zero; Fig. 4). Of note, we found that a counterfactual simulation to bring forward the second lockdown date by seven days showed an inconclusive impact on the cumulative hospitalisation in the subsequent 50-day period in either jurisdiction (judged by the 95% predictive interval containing zero; Fig. 4). The second lockdown was preceded by a declining trend in contact ratios, while contact during that lockdown remained higher than during the first or third lockdowns (Fig. 3).
Impact of a more transmissible variant
Our model estimated that the Alpha strain was approximately 19% more transmissible than the original strain (τ 2 , 95% prediction intervals [16.0,21.8]). It is worthwhile noting that our model does not consider continuous inputs of infection into the island of Ireland, despite the high connectivity among the British Isles. Thus, our estimate of the Alpha transmissibility may be confounded by repeated introductions, for example, from England, where the Alpha strain was first detected. Nonetheless, our estimate is consistent with those from England [34].
To assess the impact of the Alpha strain, which arrived later and is more transmissible than the original strain, we compared the fitted model (Fig. 5; orange) to a counterfactual simulation without the Alpha strain, in which we assumed the same estimated contact ratio (Fig. 5; blue). We detected a statistically distinguishable impact of the Alpha strain on the cumulative hospital admissions by early January in both jurisdictions - approximately 3.5 months after the initial introduction (indicated by the 95% predictive interval excluding zero; Fig. 5). By the end of February 2021, we show that the Alpha strain was responsible for a 38 and 55% increase in cumulative hospitalisation, in NI and the ROI, respectively (Fig. 5). Our findings demonstrate that an introduction of a more transmissible variant - without necessarily being more severe - can cause a large impact on the health care burden.

Fig. 4. Counterfactual analysis demonstrates the effect of lockdown timing on epidemiological outcomes. We examined counterfactual introductions of three lockdowns in Northern Ireland and the Republic of Ireland, assuming that they would have started seven days and 14 days earlier. The percentage difference in cumulative hospital admissions between the counterfactual and factual scenarios is shown. The black line and grey band indicate the median and 95% predictive interval, respectively. The predictive interval signifies 95% of simulated outcomes generated based on samples from the posterior distribution of model parameters and measurement errors.
Conclusion
We developed a multi-strain model of SARS-CoV-2 and estimated time-dependent human contact over the first 12 months of the pandemic on the island of Ireland. Unlike many earlier COVID-19 modelling studies that estimate the effective reproductive number of a single strain, our model explicitly incorporates multiple viral strains and focuses on estimating contact ratios. An important difference between the contact ratio and the effective reproductive number is that the former is unaffected by changes in virus transmissibility, which is modelled independently. As such, our approach separates the effect of human behaviour from that of the difference in transmissibilities between multiple, co-circulating strains.
Examining the longitudinal patterns and geographical differences in the estimated contact ratios allowed us to identify corresponding policies and events. In addition, we leveraged estimated parameters to conduct counterfactual analyses, in which we examined the role of lockdown timing and a novel variant on cumulative hospitalisation. In a companion paper, we extended the application of the estimated contact ratios to causal inference [49]. Specifically, we used mobility and mask-wearing data to independently predict the contact ratios estimated from our epidemiological model described in the current paper and subsequently compared observed hospitalisations with predicted hospitalisations under a counterfactual mask-wearing scenario.
We presented a generic epidemic model parameterised for SARS-CoV-2 to fit longitudinal hospitalisation data, one of the most reliable and available data types [6]. Of most relevance to COVID-19 at the time of publication, our model lacks human age structure and vaccination: these omissions give rise to certain limitations. For example, hospitalisation risks increase with age, while older individuals adjust their behaviour differently from younger counterparts [50]. Thus, ignoring the age structure may bias our estimates of human contact derived from hospitalisation data. In addition, the lack of vaccination and associated immunity in our model restricted our scope to the first 12 months of the COVID-19 pandemic. Technically, our model can be extended modularly to relax these assumptions about age structure and vaccination. However, these extensions were outside the scope of this study due to challenges in parameterising these processes reliably. For instance, the outputs of age-structured models are highly sensitive to assumptions of age-specific contact patterns [51], which likely changed during the epidemic, yet empirical data for time-dependent contact matrices are scarcely available. Behavioural adjustment in response to the pandemic is further complicated by the interaction between age- and sex-specific effects [52]. Furthermore, it is difficult to track and parameterise the state of immunity generated by natural infections from multiple viral strains and multiple vaccine doses using compartmental models.
Finally, our work contributes to the growing COVID-19 modelling literature by providing a transparent Bayesian workflow for fitting a multi-strain epidemic model to longitudinal epidemiological data, which may be readily adapted to modelling SARS-CoV-2 in other jurisdictions and other infectious diseases.
Fig. 5. Counterfactual analysis shows the extent to which the Alpha strain elevated the burden of hospitalisation. To compute the impact of the Alpha strain, the counterfactual simulation (blue) assumes that the Alpha strain never invaded either jurisdiction. The crosses indicate data, and the coloured bands correspond to 95% predictive intervals of the fitted model (orange) and counterfactual scenario (blue), respectively (left panels). The percentage difference in cumulative hospital admissions between the two scenarios is shown. The black line and grey band indicate the median and 95% predictive interval, respectively (right panels). The predictive interval signifies 95% of simulated outcomes generated based on samples from the posterior distribution of model parameters and measurement errors. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Declaration of Competing Interest
None.
|
2023-04-30T13:07:10.420Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "f517facaec177f306648f0bee58cb842f0daa544",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.gloepi.2023.100111",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "834b62a8af2c1cf7060bd9e98196ddb3b12efc3d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
201017975
|
pes2o/s2orc
|
v3-fos-license
|
Donkey’s Milk in the Management of Children with Cow’s Milk protein allergy: nutritional and hygienic aspects
Background The therapeutic strategy for children with cow's milk allergy (CMA) consists in the elimination of cow's milk (CM) from their diet. Donkey's milk (DM) has been reported to be an adequate alternative, mainly due to its nutritional similarities with human milk (HM) and its excellent palatability. The aim of the present prospective study was to evaluate the nutritional impact of DM on the diet of children with CMA in terms of children's growth. Methods Before the nutritional trial on children and during the study, the health and hygiene risks and the nutritional and nutraceutical parameters of DM were monitored. Children with CMA were identified by the execution of in vivo and in vitro tests for CM and subsequent assessment of tolerability of DM with an oral food challenge (OFC). Finally, we prescribed DM to a selected group of patients for a period of 6 months, during which we monitored the growth of the children. A total of 81 children, 70 with IgE-mediated cow's milk protein allergy (IgE-CMPA) and 11 with Food Protein Induced Enterocolitis Syndrome to CM (CM-FPIES), were enrolled. Results Seventy-eight out of 81 patients underwent the OFC with DM and only one patient with IgE-CMPA (1.5%) reacted. Twenty-two out of 81 patients took part in the nutritional trial. All 22 patients took and tolerated the DM; moreover, DM did not change the normal growth rate of infants. Conclusions In conclusion, DM proved safe in terms of health and hygiene risks and nutritionally adequate: no negative impact on the normal growth rate of children was assessed. Therefore, it may be a suitable alternative for the management of IgE-mediated CMA and FPIES, also in the first 6 months of life, if adequately supplemented.
Background
Human milk (HM) is the exclusive or primary food supply in the first months of a new-born's life [1], but in cases where it is not available it becomes essential to provide a suitable alternative. Cow's milk (CM) based formulas are widely used as a substitute for HM; however, 2-3% of children present with an IgE-mediated cow's milk protein allergy (IgE-CMPA) [2,3], and it is also known that in 0.34% of children CM can cause Food Protein Induced Enterocolitis Syndrome (CM-FPIES) [4].
The therapeutic strategy for children with IgE-CMPA or CM-FPIES consists of the total elimination of cow's milk protein (CMP) from their diet [4,5,7]. During the first years of life, milk represents an important source of nutrients, so it is difficult to eliminate from the everyday diet. Therefore, one of the major objectives of paediatric allergists is to find an appropriate alternative with a pleasant taste, good nutritional values, and hypoallergenic properties that will not induce cross-reactivity with CM [6]. The current guidelines [2,8-13] recommend extensively hydrolyzed formulas (eHFs) as the first choice for IgE-CMPA treatment, except for the more severe reactions, where free amino acid formulas (FAAFs) are preferable. Unfortunately, eHFs and FAAFs are hampered by their unpleasant taste, related not only to the hydrolysis itself but also to their particular composition (e.g., fatty acid profile), and by the possibility of residual allergenicity [14-17]. While soy infant formula can be considered a good additional alternative choice because it is readily available, has an acceptable taste, and ensures proper growth in children, it is not recommended as a first choice either for IgE-CMPA treatment, especially in infants younger than 6 months, because of the major risk of developing allergy to soy [18,19], or for CM-FPIES treatment, because a large percentage of these infants can also react to soy [20-22].
Donkey's milk (DM) has recently received growing interest, as it has been reported to be an adequate alternative for children with CMPA and CM-FPIES, mainly due to its nutritional similarities with human milk [23] and excellent palatability and tolerability [24-29], unlike the milk of other species, such as goat's and sheep's milk, which can lead to cross-reactivity between their proteins and CM proteins [17,30,31]. In fact, DM shows a protein fraction more similar to HM than to CM; in addition, the primary structure of DM's caseins presents significant differences compared to other species and is more closely related to its HM counterparts [18,32-34]. This may contribute towards explaining the less allergenic properties of DM and its greater digestibility [35]. Furthermore, the high lactose content of DM confers good palatability.
The stimulatory effect of lactose on intestinal calcium absorption -known for its important role in bone mineralization -has been observed in animal models [36], while there are contradictory reports in humans [37].
Among other positive properties, DM also has a high content of lysozyme which, together with immunoglobulins, lactoferrin, and lactoperoxidase, exerts both an immunoregulatory and an anti-tumour activity, and it may also act on the digestive tract by reducing the incidence of gastro-intestinal infections [18,25,31,32,38].
However, DM has a low fat content, which corresponds to a low energetic value [25]. While the low lipid content of DM can be considered an advantage in low calorie diets or when a low intake of animal fat is recommended, it may represent a limit in children who require an adequate intake of lipids. In fact, lipids should represent 50% of daily caloric needs up to 12 months of age and about 40% between 12 and 24 months of age; therefore, if donkey's milk is administered as the sole source of nutrition, it must be adequately supplemented with lipids.
The number of studies that focus on the hygiene and health characteristics of DM is increasing [39]. There are reports showing that the interaction of lysozyme and lactoferrin may affect the antimicrobial properties of DM [40] and that the consumption hazards of DM are lower than for CM, especially for microorganisms like enterotoxigenic E. coli and thermotolerant Campylobacter [41]. Moreover, a low prevalence of mastitis agents in DM has been demonstrated [39,42]. As pathogenic bacteria and DNA from protozoa have been found in DM [42,43], and given its use in sensitive consumers, heat treatment of raw milk is recommended to avoid the risk of food-borne diseases. Pasteurisation guarantees both the preservation of the milk's nutritional properties and the elimination of any pathogenic microorganisms that could be present in raw milk.
The main purpose of this study is to evaluate the nutritional impact of DM, appropriately supplemented, on the diet of patients with IgE-CMPA and CM-FPIES in terms of children's growth. For this purpose, a multidisciplinary prospective study tested the nutritional and nutraceutical characteristics and the hygienic safety of DM, as well as its palatability and tolerability.
Methods
DM was supplied from a farm located in central Italy, where about 160 Amiata donkeys are reared outdoors, in a semi-intensive system and routinely machine milked twice a day. The farm has been recognised according to European Union (EU) regulation 853/2004.
Before the nutritional trial on children and during the study, the health and hygiene risks and the nutritional and nutraceutical parameters were monitored by the Istituto Zooprofilattico Sperimentale del Lazio e della Toscana (Florence section, Florence, Italy) and the Department of Veterinary Sciences of the University of Pisa (Italy), respectively. The palatability and tolerability of the milk were assessed by the Department of Allergy of the Anna Meyer Children's Hospital (Florence, Italy): a specific allergological work-up that included skin tests, in vitro tests and oral provocation tests with DM was performed in a day-hospital setting in children with IgE-CMPA or CM-FPIES. The Department of Allergy and the Professional Dietetic Unit also drew up nutritional plans that included DM, adapted to the needs of patients with IgE-CMPA and CM-FPIES in relation to their age, sex and disease. The same departments monitored the palatability of DM and the growth and quality of life of the children enrolled in the study for a period of 6 months.
Evaluation of the health hazards of DM consumption and nutritional and nutraceutical analyses
The health and hygiene risk analyses were carried out on 36 bulk milk samples (18 of raw milk and 18 of the corresponding milk pasteurised at 65°C for 30 min) taken monthly, while the nutritional analysis concerned the pasteurised samples. All the samples were taken to the laboratories in tanks at 4°C; no preservatives were added. All the samples were analysed for dry matter and lactose content via infrared analysis (Milkoscan, Italian Foss Electric, Padova, Italy), and for proteins, caseins and ash [44]. Fat was gravimetrically determined after extraction as per the Rose-Gottlieb method [45]. The individual mineral content (Ca, P, Mg, K, Na, Zn) (mg/L) was determined by atomic absorption spectroscopy and ultraviolet-visible spectroscopy according to the AOAC (2000) [46], and Murthy and Rhea (1967) [47]. Methyl esters of fatty acids for gas chromatographic analysis were prepared using methanolic sodium methoxide according to Christie (1982) [48]. The gas chromatographic analysis of the milk was conducted as described by Ragona et al., 2016 [39].
For the Vitamin D quantification, lipids from 75 ml of DM were saponified by adding KOH pellets directly to the milk according to Perales et al. (2005) [49] at 40°C for 32 min. Ethanol and double distilled water were then added to the sample in order to remove the polar compounds and prevent foaming. Afterwards the solution was transferred into a 500-mL separatory funnel, and an initial extraction of the unsaponifiable fraction was performed using 75 ml hexane. The aqueous phase was thus drained and collected in order to repeat two extractions by adding 75 ml of hexane each time and the organic phase from both was collected in a rotavapor flask. Finally, the organic phase was evaporated to dryness on a rotary evaporator and the extract re-suspended in 500 μl of acetonitrile and filtered through a 0.45-μm diameter syringe filter. 100 μl of the extract were injected into an HPLC and isocratically eluted using acetonitrile-methanol 97: 3 as a mobile phase at a flow of 1 ml/min, as described by Hagar et al. (1994) [50]. A Kinetex core-shell column (Phenomenex, Inc. A) was used as the stationary phase and the UV detector was set at 254 nm. Cholecalciferol and ergocalciferol in the milk samples were quantified by comparison with a calibration curve obtained via the injection of the pure standards (Sigma Chemical Co., St. Louis). The activity of the lysozyme was assessed by the Fluorimetric method on a microplate (EnzChek Lysozyme Kit, Invitrogen, Carlsbad CA, USA), measured by means of a Spectrofluori Meter (Ascent, Thermo Labsystem FL, USA) and expressed in units/ml.
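To illustrate the final quantification step described above, the short Python sketch below fits a linear calibration curve from injections of pure cholecalciferol standards and converts a sample peak area into the vitamin D concentration of the original milk, accounting for the 75 ml milk aliquot being reconstituted in 500 μl of acetonitrile. The standard concentrations and peak areas are invented placeholders and the linear detector response is an assumption for illustration; they are not values from the study.

```python
import numpy as np

# Hypothetical calibration data: cholecalciferol standards (ug/ml in acetonitrile)
# and corresponding HPLC-UV peak areas at 254 nm (arbitrary units).
std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
std_area = np.array([1.2e4, 2.4e4, 4.9e4, 9.7e4, 19.6e4])

# Fit a straight line (assumes a linear detector response over this range).
slope, intercept = np.polyfit(std_conc, std_area, 1)

def vitamin_d_in_milk(sample_peak_area,
                      extract_volume_ml=0.5,   # extract re-suspended in 500 ul
                      milk_volume_ml=75.0):    # milk aliquot that was saponified
    """Back-calculate vitamin D in the original milk (ug/100 ml)."""
    conc_extract = (sample_peak_area - intercept) / slope   # ug/ml in the extract
    total_ug = conc_extract * extract_volume_ml             # ug in the whole extract
    return total_ug / milk_volume_ml * 100.0                # ug per 100 ml of milk

print(f"Estimated vitamin D: {vitamin_d_in_milk(7.1e4):.2f} ug/100 ml")
```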
The allergological work-up included: skin prick test (SPT), specific serum IgE (s-IgE) and oral food challenge (OFC) with DM. The OFC was performed according to the DRACMA guidelines in patients with IgE-CMPA and according to the protocol of Leonard et al. in patients with CM-FPIES [29,[51][52][53].
While performing the OFC we also evaluated the palatability of the DM: in children ≥3 years of age, DM palatability was assessed with a specific Wong-Baker modified pain scale, while in children <3 years of age it was assessed through the physician's judgment [53]. Before beginning the study, informed parental consent was obtained.
Development of nutritional plans and monitoring growth
The Department of Allergy and the Professional Dietetic Unit of the Anna Meyer Children's Hospital drew up nutritional plans which included DM and were appropriate for the needs of 16 out of the 70 patients with IgE-CMPA (12 M: 4 F) and 6 out of the 11 patients with CM-FPIES (4 M: 2 F) who were referred to the Allergy Unit.
On the basis of the DM nutritional analyses, a number of different nutritional plans were formulated with an appropriate daily calorie intake depending on the age of the patients (Table 1). For children older than 3 years, for whom milk accounts for less than 10% of calories, we estimated that its replacement with DM does not give rise to significant variations. The daily prescribed dose of DM varied depending on age (Table 1). DM was provided free of charge by the Meyer Children's Hospital to all children enrolled for the six-month study period. In addition, we prescribed vitamin D (cholecalciferol, vitamin D3) in specific doses for each age and included a fat supplement in the nutritional plans because of the low fat content of DM (Table 1). Lipid supplementation consisted of 3 g of lipids for every 100 ml of donkey's milk taken, in the form of extra virgin olive oil for children over 6 months of age, to be added either to the milk itself or to savoury meals; in infants younger than 6 months, half of this lipid supplementation was provided as extra virgin olive oil and half as a gluco-lipidic supplement to be mixed with the donkey's milk. The gluco-lipidic supplement contains 40% MCT lipids in its lipid fraction. The vitamin D supplement was also provided free of charge during the period of the study.
The patients enrolled in the study met with a dietician three times during the six-month study period: at the beginning (T0) and at the end of the study (T1) in order to monitor their nutritional condition, and at 3 months to monitor compliance with the nutritional plan and its supplements. During the first visit, the dietician explained to parents the nutritional plan specifically designed for their children, the nutritional, nutraceutical and hygienic characteristics of Amiatina DM, the importance of the fat and vitamin D supplements, the methods of DM supply and the procedures for storing and consuming DM at home. The dietician, together with the paediatric allergist, also explained how to read food labels correctly in order to avoid accidental intake of CM. During the follow-up period the parents kept a food diary, recording daily DM consumption and the related fat and vitamin D supplements, in order to implement changes to the nutritional plan, where necessary.
In addition, the nutritional status of the patients was assessed with blood biochemical parameters and auxological parameters. The biochemical parameters of nutritional interest measured included: blood count (in particular red blood cells and haemoglobin), serum albumin, 25-OH vitamin D level, azotaemia and thyroid function tests (TSH, fT4). Blood was drawn in the Allergy Unit at the beginning of the study (T0) and after 6 months of DM consumption (T1). The nutritional state was also evaluated with auxological parameters, which considered weight and body length for children and infants up to 2 years, and stature thereafter. Weight was measured with electronic integrating scales (SECA 757; Hamburg, Germany: precision ±1.0 g). Supine length was measured with a Harpenden infant meter and stature was measured with a Harpenden stadiometer. Weight was expressed in kilograms (kg) and the growth curves used were those of the World Health Organisation (WHO) for children and infants up to 2 years of age [54], and those of the Centers for Disease Control and Prevention (CDC) for children over 2 years of age [55]. Length/height was expressed in centimetres (cm) and the growth curves used were those of the WHO for children up to 2 years of age and those of the CDC for children over 2 years of age. The auxological parameters were collected at T0 and T1.
Moreover, weight and length/stature gain were evaluated in terms of Z-score. This method evaluates changes in anthropometric parameters associated with the introduction of DM. Z-scores for weight and length/stature gain were calculated at T0 and T1 in the 16 patients with IgE-CMPA and the 6 patients with CM-FPIES. We also focused on the weight and length/stature gain in terms of Z-score at T0 and T1 in the patients younger than 1 year, in whom milk consumption is relevant.
Statistical analysis
Health data and the chemical composition of DM are reported as mean and standard deviation (SD). Z-scores of weight and length/stature for age were calculated from the formula Z = (x − X̄)/SD, where x is the observed value and X̄ and SD are the reference mean and standard deviation, taking the Gardner and Pearson growth curves as reference for children up to 2 years and the Tanner curves after 2 years of age. Z-score values obtained between check-ups were analysed with the paired t-test, with p < 0.05 as the significance cut-off. The data were analysed using a commercially available statistical software package (SPSS, Chicago, IL, USA).
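As a hedged illustration of this analysis, the short Python sketch below computes weight-for-age Z-scores from reference means and standard deviations and compares the T0 and T1 values with a paired t-test via SciPy. The weights and reference values are invented placeholders for illustration only, not data from the study.

```python
import numpy as np
from scipy import stats

def z_score(value, ref_mean, ref_sd):
    """Anthropometric Z-score: (observed - reference mean) / reference SD."""
    return (value - ref_mean) / ref_sd

# Hypothetical weights (kg) of the same children at enrolment (T0) and after
# 6 months (T1), with age/sex-specific reference (mean, SD) pairs taken from
# growth-curve tables (placeholder numbers).
weight_t0 = np.array([8.2, 9.1, 7.8, 10.4])
ref_t0    = np.array([(8.6, 1.0), (9.5, 1.1), (8.3, 0.9), (10.6, 1.2)])
weight_t1 = np.array([10.1, 11.0, 9.6, 12.3])
ref_t1    = np.array([(10.2, 1.1), (11.0, 1.2), (9.7, 1.0), (12.2, 1.3)])

z_t0 = z_score(weight_t0, ref_t0[:, 0], ref_t0[:, 1])
z_t1 = z_score(weight_t1, ref_t1[:, 0], ref_t1[:, 1])

t_stat, p_value = stats.ttest_rel(z_t1, z_t0)   # paired t-test on the Z-scores
print(f"Delta Z-score = {np.mean(z_t1 - z_t0):.2f}, p = {p_value:.3f}")
```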
Results
Evaluation of the health hazards of DM consumption and nutritional and nutraceutical analyses
In raw and pasteurised milk samples, the total viable count (TVC) was on average 74,333.33 CFU/mL (±34,416.57) and 4332.22 CFU/mL (±3046.78), respectively. In the raw milk samples, the bacteria responsible for food-borne outbreaks (Salmonella spp., Listeria monocytogenes, Campylobacter spp.) were never detected. Moreover, coagulase-positive Staphylococci were found in the raw milk with a count ranging from <1 CFU/mL to 190 CFU/mL and an average value of 133.83 CFU/mL. The Enterobacteria count was always lower than 1 CFU/mL in the pasteurised milk samples, in compliance with Regulation (EC) No 1441/2007, and L. monocytogenes was never isolated. DM showed a dry matter percentage of 9.32, of which the major component was lactose with a percentage of 7.05, whereas casein accounted for 0.81%; fat and ash percentages were 0.31 and 0.37, respectively (Table 2). Among the minerals, calcium and potassium were present in the highest quantities (633.31 and 653.32 mg/L, respectively), while zinc was 3.16 mg/L. As regards the milk fatty acid profile, the most represented fatty acids were palmitic acid (22.10 g/100 g of fat), oleic acid (21.58 g/100 g of fat) and linoleic acid (11.18 g/100 g of fat) (Table 3). Saturated and unsaturated fatty acids were 52 and 48 g/100 g of fat, respectively. Among the unsaturated fatty acids, n-3 fatty acids were about 8 g/100 g of fat and the major n-3 component present in the milk was linolenic acid (7.52 g/100 g of fat), whereas linoleic acid was the main n-6 fatty acid; the n-3/n-6 ratio was 0.72. DM showed a mean lysozyme activity of 1402 ± 286.658 U/ml and a total vitamin D content of 1.97 μg/100 ml, principally due to vitamin D2 (Table 2).
Allergological work-up and palatability assessment
The DM was well tolerated and showed good palatability both in patients with IgE-CMPA and in those with CM-FPIES. In particular, 67 out of 70 patients with IgE-CMPA underwent the OFC; the parents of the other 3 patients refused to give their consent because of positivity to the SPT and/or s-IgE to DM. Only one out of 67 (1.5%) patients with IgE-CMPA reacted to the DM. Of the patients with CM-FPIES, 11 out of 11 (100%) underwent the OFC and all tolerated the DM. Overall, 77 out of 78 patients (98.7%) who underwent the OFC with DM tolerated it.
Nutritional plans, monitoring growth and quality of life
Sixteen out of 66 patients with IgE-CMPA and six out of 11 patients with CM-FPIES took and tolerated the DM for the prescribed length of time. All 22 patients also followed the nutritional plans formulated for each one, without significant variations in the quantity of DM consumed.
Tables 4 and 5 report the variations in the auxological values (expressed with Δ z-score) in the 22 patients enrolled at T0 and T1 grouped for pathology (Table 4) and in the children younger than 1 year (Table 5).
At T0, patients with IgE-CMPA had negative weight and length/stature Z-scores and showed an improvement at T1, which was statistically significant for the length/stature Z-score (p < 0.05). Similarly, we found a statistically significant improvement in the length/stature Z-score in patients younger than 1 year (Table 5). The growth in weight was similar to that of the reference population both in patients with IgE-CMPA and in infants younger than 1 year. The infants with CM-FPIES showed a normal nutritional status from the beginning of enrolment and maintained it during the 6 months of being fed DM, with a good increase in weight and length/stature similar to the reference population.
The blood biochemical parameters of nutritional interest were evaluated in 19 patients (16 with IgE-CMPA and 3 with FPIES) out of 22 (86.4%); the other 3 patients with CM-FPIES did not undergo the blood tests because their parents refused. No relevant variations were observed; in fact, all the blood values were always within the normal range (data not shown).
Discussion
The TVC of the raw milk was on average lower than the limit required by Regulation (EC) 853/2004 (≤1,500 × 10³ CFU/mL). In addition, the TVC was lower than that described in other studies on pasteurised donkey's milk [52]. Coagulase-positive Staphylococci showed low average values, well below the 10⁵ CFU/mL limit above which a food safety risk is considered.
Although L. monocytogenes is killed by pasteurisation, it may represent a high food safety hazard in milk that is not properly pasteurised or is contaminated after thermal treatment, especially for vulnerable subjects such as infants and pregnant women. Therefore, a careful risk assessment of L. monocytogenes can help ensure the food safety of pasteurised DM. The Enterobacteria count in the pasteurised milk samples was in compliance with Regulation (EC) No 1441/2007. Compared to milk from other dairy species, DM is the most similar to HM [23]. In particular, the nutritional similarities concern the low content of caseins and ash, which limits allergenicity and reduces the renal solute load, and the high content of lactose, which promotes good palatability. Donkey and human milk share a similar unsaturated:saturated ratio [56]. Furthermore, the fat content of donkey's milk is lower than that of other milks, which implies a low energetic value [25]. A multidisciplinary group approach, including a dietician, is fundamental in the planning of a "personalized nutritional plan" that fully satisfies the nutritional needs of patients based on age and symptoms, but also on the food preferences and nutritional behaviour of the patient [57]. However, our study shows that this limit can easily be overcome with appropriate supplementation. As already described in two of our previous papers, DM is well tolerated and appreciated by children with CMPA and CM-FPIES [29,53]. It has long been known that, despite similar energy intakes, children with CM allergy have a shorter stature than controls without CM allergy [58,59]. Our results are in line with previous studies that show catch-up growth after the introduction of DM in children with CM allergy [25][26][27]. No relevant variations in terms of blood and metabolic parameters were detected.
In particular, despite major concerns regarding the use of un-modified DM as sole nutritional source (if not adequately supplemented), our results indicate that it could be considered a valid alternative in weaned infants (older than 5-6 months), and also in infants aged between 0 and 6 months with appropriate nutritional supplements. In fact, our study found that DM allowed a regular increase in weight in children aged 0-12 months and an improvement in their length/stature growth.
A very positive aspect deriving from our study was the improvement in the quality of life of the patients and their families. The parents reported to the dietitian that their children/infants were less restless and more relaxed; they ate with more pleasure and showed greater curiosity towards the various foods that were offered. This improvement is probably due mainly to the exclusion of CM from the diet, but also to the good taste and nutritional characteristics of DM, as well as to the dietetic follow-up that we offered to patients and their families. As a result, there was more serenity in the family, less anxiety, and in general, a better quality of life.
A limit in the consumption of DM as a substitute for CM remains its high cost; however, the eHFs and FAAFs are also expensive and, unlike DM, they have low palatability. Our encouraging results should stimulate greater production of DM, which could lead to a reduction in its cost.
Moreover, despite the encouraging results, the limitations of our study were the small number of enrolled patients and the lack of a control group. More extensive studies are needed regarding the use of DM as a substitute for CM in patients with IgE-CMPA and CM-FPIES, and a head-to-head comparison between DM and other nutritional sources in terms of nutritional and allergenic aspects would be of interest.
Conclusions
In conclusion, DM proved safe in terms of health and hygiene risks and nutritionally adequate: no negative impact on the growth rate of infants and children was observed. Therefore, it may be a suitable alternative for the management of IgE-CMPA and CM-FPIES, also in the first 6 months of life, if adequately supplemented.
Subjects Agree to Participate in Environmental Health Studies without Fully Comprehending the Associated Risk
Recent advances in environmental health research have greatly improved our ability to measure and quantify how individuals are exposed. These advances, however, bring bioethical uncertainties and potential risks that individuals should be aware of before consenting to participate. This study assessed how well participants from two environmental health studies comprehended consent form material. After signing the consent form, participants were asked to complete a comprehension assessment tool. The tool measured whether participants could recognize or recall six elements of the consent form they had just reviewed. Additional data were collected to look for differences in comprehension by gender, age, race, and the time spent reading the original consent form. Seventy-three participants completed a comprehension assessment tool. Scores ranged from 1.91 to 6.00 (mean = 4.66); only three people had perfect comprehension scores. Among the least comprehended material were questions on study-related risks. Overall, 53% of participants were not aware of two or more study-related risks. As environmental public health studies pose uncertainties and potential risks, researchers need to do more to assess participants’ understanding before assuming that individuals have given their ‘informed’ consent.
Background
Environmental Public Health Research. While environmental public health research is not a new field, in recent years advances in technology have greatly improved our ability to measure and quantify how individuals are exposed. For example, biomonitoring and genetic research are two tools environmental health scientists are using more frequently as advances in these fields improve our ability to understand environmental influences on individuals and communities. Although these tools are revolutionary resources, they carry new bioethical uncertainties, interpretative challenges, and potential risks that individuals who agree to participate in these studies should be aware of [1][2][3][4].
Informed Consent. In conducting ethical research, scientists inform individuals about these risk factors via a consent process so that each individual can voluntarily decide for him or herself whether they want to participate. The process of obtaining informed consent implements safeguards designed to protect the welfare, privacy, and legal rights of study participants [5]. While obtaining informed consent is ethically necessary, a number of studies have found that participants have limited comprehension of the consent form materials they are given. Thus their decision to participate may not be based on the inherent risks and benefits of study participation. This issue has been widely documented among specific subpopulations such as the elderly, substance abusers, the mentally challenged, or participants in clinical trials [6][7][8][9][10][11][12][13]. To our knowledge, no studies have measured comprehension of consent material provided to the broader population involved in general environmental public health research. Therefore, this study measured the comprehension (using recognition and recall) of consent form material provided to individuals in one of two environmental health studies. The study also ascertained whether certain demographic factors (i.e., gender, age, race) or the amount of time spent reviewing the form were associated with the ability to recognize or recall specific information.
Methods
Study Population. Comprehension of consent form material was measured among study participants from two environmental health studies conducted by the U.S. Agency for Toxic Substances and Disease Registry (ATSDR). The first study was an asbestos screening program (NAHP) and the second was a study on the variation in urinary creatinine and dissolved solids (VUCS). These two studies were selected to measure comprehension of consent form material because they were conducted by the same team of scientists at ATSDR and because both studies were implemented during a similar time period. Specific details on the consistency of the informed consent process within each study are further described below.
The purpose of the NAHP was to assess the development of radiological and pulmonary changes associated with exposure to asbestos-contaminated vermiculite. The target population included current and former workers from U.S. vermiculite processing facilities and their family members. Participants were offered a chest x-ray and spirometry test. Letters were used to introduce subjects to the study. The letter was followed-up by a telephone call. Both the letter and telephone call provided the subject with basic information about the study (e.g., study's title, who was conducting the study).
The VUCS assessed variation in urinary creatinine and dissolved solid levels in children and young adults. The target population for the study included adults who currently worked for the public health agency conducting the study and their family members, ages 2 to 30 years old. Individuals were recruited using e-mails and flyers. These materials included basic information about the study (e.g., study title, who was conducting the study, how to obtain more information). Interested individuals contacted the study investigators to enroll in the study.
Consent Process. Both studies followed the Code of Federal Regulations that stipulates all federally funded research must convey the following information as part of the consent process: 1. why the research is being conducted, 2. what participants will be asked to do, 3. whether participation is voluntary, 4. who is conducting the research, 5. who the participant can contact for information, 6. whether there are health risks associated with participating, 7. what benefits may result from participation, and 8. to what extent participant confidentiality will be maintained [14]. This information was conveyed in a written consent form (parents of child VUCS participants were given a parental consent form). To further standardize these forms, each was reviewed and approved by the same Institutional Review Board (IRB). Potential participants were instructed to read the form, ask questions pertaining to the material, and to sign the form if they were willing to participate.
The NAHP consent form contained 1,301 words. The VUCS consent forms for adult participants and for parents of child participants contained 994 and 1,088 words, respectively ( Table 1). The Flesch-Kincaid Reading levels [15] (FKR) for the NAHP and VUCS consent forms were below an eighth grade reading level ( Table 1). The FKR for the comprehension assessment tools was below a sixth grade reading level (Table 1). Assessment of Consent Comprehension. Information needed to obtain informed consent requires a three-step process [16,17]. First, a potential study participant must receive information; second, they must comprehend the received information; and third, they must choose whether to use what they comprehended to aid in making a decision. Thus to determine whether a participant has made an informed consent researchers might measure the participant's comprehension and the use of the comprehended information to make their informed decision. However, comprehension is difficult to measure and therefore a standard proxy for comprehension is to measure an individual's ability to recognize and recall information they have received [6,7,9,16,[18][19][20]. Recognition addresses the participant's ability to recognize content provided in the consent form and is measured using multiple choice and yes/no/unsure formatted questions. Open-ended questions are used to assess the subject's ability to recall information described in the consent process. If a person is not able to recognize or recall conveyed information, they will subsequently not be able to comprehend or use that information to make an informed decision regarding their study participation.
Our comprehension assessment tool measured each participant's comprehension of six required consent form elements (Table 2). We used recognition to measure comprehension of information on voluntary participation (3 questions), study methods (7 questions), risk of participation (5 questions for the NAHP and 4 for the VUCS), and confidentiality (1 question). Each question was phrased as a statement requiring either a yes/no/unsure response. The following elements were measured using recall: benefits of participation (1 question), and study objectives (1 question).
Recall questions were open-ended questions. All questions addressed information described in the two primary environmental health studies' consent forms. In an effort to make the NAHP and VUCS comprehension assessment tools comparable, similar questions and wording were used. Recognition questions on voluntary participation and confidentiality were identical in both studies (e.g., "I may choose to stop participating at any time?" and "My identifying information will be used when presenting the study results to the public?"), as were recall questions concerning benefits of participation (e.g., "In 1 sentence, describe what if any is the immediate benefit of participating in this study.") and study objectives (e.g., "In 1-2 short sentences, describe why this study is being done."). The questions on study methodology (what the participant would be asked to do) and risks of participation addressed study-specific information and therefore the wording varied between the two studies. The risks associated with participating in the NAHP included exposure to radiation from the x-ray and other minimal risks (e.g., dizziness, light headed). Risks for VUCS participants included temporary urine discoloration and the identification of glucose or other compounds that are not normally found in urine.
Although the VUCS included both adult participants and parents of child participants, the comprehension assessment tools contained virtually the same questions; the difference was in the object of the sentence. For example, the voluntary participation question for adult participants read, "I choose freely to join in this study?" while the question for parents read, "I choose freely to let my child join in this study?" The wording and clarity of the questions were reviewed and edited by the investigator's IRB. The comprehension assessment tools for the NAHP and VUCS are available from the authors upon request.
To measure overall consent comprehension, each of the six required elements contributed one point to a total comprehension score of 6 points. Points accumulated for correct scores only. The responses "do not know" and "unsure" were classified as incorrect. Interviewers were instructed to probe each participant about questions left unanswered. This was done to assess whether participants purposefully refused to answer the question or if they left the question blank because they did not know the answer. As there were no individuals who stated they purposefully refused to answer a question, all unanswered questions were classified as incorrect responses. For the elements with more than one question, each sub-part contributed an equal proportion to the total score of 1 point. For example, there were three questions on voluntary participation. Thus, each of the three questions contributed 0.33 points to the total score of 1 point. For the open-ended recall questions, researchers developed a list of correct responses. Participant responses were then independently reviewed and scored by two researchers. Discrepancies in scores among the two researchers were then reviewed and discussed before a final correct or incorrect score was designated.
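A minimal Python sketch of this scoring scheme is given below: each of the six consent elements contributes one point, multi-question elements split that point equally among their sub-questions, and unanswered or "unsure" responses are counted as incorrect. The element names, question counts and example responses are illustrative placeholders rather than the actual instrument.

```python
# Questions grouped by consent element; each element is worth 1 point in total,
# split equally among its sub-questions (e.g., 3 voluntariness questions -> 0.33 each).
ELEMENTS = {
    "voluntary_participation": 3,
    "study_methods": 7,
    "risks": 5,             # the VUCS version of the tool used 4
    "confidentiality": 1,
    "benefits": 1,           # open-ended recall question
    "objectives": 1,         # open-ended recall question
}

def comprehension_score(responses):
    """responses maps element name -> list of booleans (True = correct).

    Unanswered or 'unsure' responses should already be encoded as False.
    Returns the total score out of 6.
    """
    total = 0.0
    for element, n_questions in ELEMENTS.items():
        answers = responses.get(element, [])
        correct = sum(bool(a) for a in answers[:n_questions])
        total += correct / n_questions       # each element contributes at most 1 point
    return round(total, 2)

example = {
    "voluntary_participation": [True, True, False],
    "study_methods": [True] * 6 + [False],
    "risks": [True, False, False, True, False],
    "confidentiality": [True],
    "benefits": [True],
    "objectives": [False],
}
print(comprehension_score(example))   # -> 3.92 out of 6
```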
Administering the Comprehension Assessment Tool. For each study, interviewers were trained to ensure that all participants received the same information during the consent process. Specifically, interviewers met with participants and reviewed standardized communication points (e.g., study title, who was conducting the study). The interviewer asked each participant to read the consent form, to ask questions if necessary, and to sign the consent form if they wanted to participate. Without the participant's knowledge, interviewers recorded the total length of time each participant took to review the consent form. To assess whether specific questions or parts of the consent form were unclear, the interviewer documented all questions asked by the participant.
An interviewer asked all NAHP participants who reviewed and signed the NAHP consent form, on one of five recruitment days, to participate in the consent study by answering a few questions about the NAHP consent form they had just signed. All VUCS adult participants and parents of minors who did not assist in the development of the VUCS study protocol or the consent material were asked to participate in the consent study by answering a few questions about the VUCS consent form. If the participant agreed, he or she was asked to answer the questions on the comprehension assessment tool to the best of their ability. After completing the comprehension assessment tool the interviewer discussed the correct answers with the participant and verbally reconfirmed their willingness to participate.
Demographic Data. Each of the primary environmental health studies collected demographic data on the primary study participant's gender, age, and race/ethnicity. These data were used to analyze potential differences in observed consent comprehension. Shortly after the VUCS study began, approval was received to collect the same demographic data from consenting VUCS parents of child participants. Since the VUCS target population included adults who worked for a public health agency, familiarity with conducting human health studies and developing consent forms could influence the level of comprehension assessed. To control for this potential bias we asked each VUCS consent study participant whether they developed or reviewed consent forms or study protocols as part of their work.
Data Analyses. Data were entered into Epi Info version 3.3.2 (Centers for Disease Control and Prevention, Atlanta, GA). All survey responses were double entered to ensure data quality. The data were then analyzed using SAS version 9.1.3 (SAS Institute, Inc., Cary, NC). We assessed whether total comprehension was associated with gender, race and ethnicity (Non-Hispanic White vs. Other) using the Wilcoxon rank-sum test. To determine whether age group was associated with comprehension, the Kruskal-Wallis test was used. Lastly, the Spearman rank correlation coefficient was used to assess the relationship between each participant's total comprehension score and the amount of time they spent reviewing the consent form.
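For reference, the non-parametric comparisons described above could be reproduced in Python with SciPy roughly as follows; the arrays shown are invented placeholders for the comprehension scores and grouping variables, not the study data, and SAS was the software actually used.

```python
import numpy as np
from scipy import stats

# Placeholder total comprehension scores (out of 6) and covariates.
scores     = np.array([4.7, 5.1, 3.9, 4.4, 5.6, 4.1, 4.9, 3.5])
is_female  = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
age_group  = np.array([0, 1, 2, 1, 0, 2, 1, 0])      # three age strata
review_min = np.array([2.1, 1.5, 0.8, 3.0, 4.2, 1.1, 2.6, 0.9])

# Wilcoxon rank-sum test for two independent groups (e.g., gender).
u_stat, p_gender = stats.ranksums(scores[is_female], scores[~is_female])

# Kruskal-Wallis test across the age groups.
h_stat, p_age = stats.kruskal(*(scores[age_group == g] for g in np.unique(age_group)))

# Spearman rank correlation between score and time spent reviewing the form.
rho, p_time = stats.spearmanr(scores, review_min)

print(p_gender, p_age, p_time)
```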
Results
All NAHP and VUCS participants asked to participate in the consent comprehension study agreed to participate. This included 10 NAHP participants and 63 VUCS participants for a total of 73 people. The VUCS participants consisted of 21 adult participants and 42 parents of child participants. With one exception, all comprehension assessment tools were self-administered; for one individual the VUCS interviewer verbally read the comprehension assessment questions to the participant.
Demographics. The NAHP participants differed from the VUCS participants on gender and age (Table 3). Eighty percent (n = 8) of the NAHP participants were male compared to 42% (n = 63) of the VUCS participants. Similarly, 80% (n = 8) of the NAHP participants were between the ages of 51 and 76 compared to only 10% (n = 5) of the VUCS participants. Age was not collected on 11 of the VUCS parents. Forty percent (n = 25) of VUCS participants stated that they developed or reviewed study protocols or consent forms. More parents of child participants developed or reviewed these materials compared to adult VUCS participants (Table 3). Overall mean comprehension scores differed between the NAHP and VUCS participants (Table 4). The VUCS participants scored significantly higher on comprehension of issues pertaining to voluntary participation, study methodology and confidentiality (Table 4). Comprehension of potential study-related risks was similar among both NAHP (Mean = 0.62; 95% CI: 0.38, 0.86) and VUCS (Mean = 0.59; 95% CI: 0.51, 0.67) participants (Table 4). Of the five questions pertaining to study-related risks, 60% (n = 6) of the NAHP participants were unaware of two or more risks. Similarly, 52% (n = 33) of the VUCS participants were unaware of two or more risks. Eight percent (n = 5) of VUCS participants answered all four risk-related questions incorrectly.
The differences in comprehension observed across the two studies remained the same after restricting the analyses to only those participants who did not develop or review study protocols or consent forms as part of their regular work. After reviewing the consent form, 27 people asked a question about the study before they signed the form. The majority of questions asked concerned appointment scheduling or whether normal daily routines could be followed while participating in the study (e.g., can a multi-vitamin be taken while participating). There was no statistical association between overall comprehension and having asked a question before signing the consent form.
The amount of time a participant spent reviewing the consent form was recorded for 65 participants. The mean reviewing time was 2.06 minutes (range: 0.00-11.00 minutes). On average, NAHP participants reviewed the form for a longer period of time, 4.49 minutes (range: 0.00-11.00 minutes), compared to 1.71 minutes (range: 0.03-4.73 minutes) for VUCS participants. On average, high school students read between 214 and 250 words per minute [21]. Using standard reading rates for comparison, the majority of our participants spent insufficient time reviewing the consent form. Standard reading rates suggest NAHP participants should have spent at least 5 minutes reading the consent form, and VUCS participants at least 4 minutes.
There was no relationship between total comprehension score and time spent reviewing the consent form. There were weak correlations between a participant's total comprehension score and the demographic factors gender and age; however, after stratifying by study (NAHP vs. VUCS) these correlations were no longer evident (data not shown).
Discussion
The study-related risks are likely the most important information a researcher must convey in the consent form. However, our participants scored low on study-related risks. It is possible that study participants would have been more inclined to consider the risks had they been of greater magnitude (more than minimal risk). However, previous research suggests otherwise; participants in placebo-controlled clinical trials and those scheduled for invasive medical procedures also have limited comprehension of consent form material [6,7]. Thus, it is reasonable to assume that people who consent to environmental health studies with more than minimal risk, such as some biomonitoring and genetic research studies, may also not fully comprehend the associated risks.
Given our participants were the least aware of the study-related risks, this poses the question, "how do we relay consent material in a way that study participants receive and comprehend what we wish to convey?" Some researchers suggest using bulleted information and plain language [22]. One of the most popular methods is reducing the reading level of the form to one that is appropriate for the target population [23][24][25]. In our study, a low reading level did not ensure comprehension. Others have advocated that the consent process be recorded so that researchers could identify problems and suggest corrective measures [26]. The use of multimedia has also been considered; multimedia may include asking potential participants to view a short video or to partake in an interactive computer program [27][28][29]. However, Flory and Emanuel's review found verbal communication on study-related benefits and risks was the best method to improve comprehension when compared to other multimedia approaches [30].
Researchers also need to consider why individuals choose to participate. Do they participate because of a sense of enlightened self-interest in reducing scientific uncertainty? One study suggests that altruism is a factor [31]. Some people will participate regardless of whether they understand the risks or benefits; people participate because they desire to help others. Others advocate that some subjects participate because of a therapeutic misconception in which participants think researchers want to promote the participant's individual health [32,33]. Given individuals participate for different reasons, understanding the predominant reasons will aid researchers in developing consent forms more suited for their target population.
Limitations. To our knowledge a validated tool for assessing comprehension among environmental health study participants does not exist. Therefore, we developed and used non-validated tools. Although it was not within the scope of this study to test the reliability and validity of the tools, the questions were reviewed and approved by scientists with expertise in human subjects research as well as the author's IRB. It is also important to mention our study included a small sample of individuals from two distinct target populations. As the majority of the participants worked for a large public health agency, our participants may have had a greater awareness and knowledge of public health studies and practices. Alternatively, participants may have been less inclined to consider the material they were given as part of the consent material. While it is difficult to know how generalizable our results might be of the greater general population, our participants are likely to represent a broader cross section of the population compared to those who have been studied previously (the elderly, substance abusers, the mentally challenged, and participants in clinical trials) [6][7][8][9][10][11][12][13] and our results are similar to larger studies that found comprehension of traditional consent form material to be low [6,7,[34][35][36]. In addition, NAHP and VUCS study participants differed on demographics and overall mean comprehension scores, thus we report study specific as well as the aggregate data. Another limitation is that we were restricted to demographic data collected by the primary environmental health studies. This prohibited us from collecting additional data such as each participant's educational attainment or reading level.
Conclusion. In environmental health, researchers have successfully improved community studies by seeking community involvement in the design stage. Specifically, community members have successfully aided researchers in determining how to measure exposure (e.g., which exposures to look for, where to site environmental monitors), and have aided in increasing an individual's willingness to support and participate in research studies [37][38][39][40]. We propose that preliminary discussions with members of the target community also include dialogue on reasons why people may participate and how to best convey consent material such as study-related risks. These preliminary discussions with community members may shed light on how to improve consent comprehension among individuals who are asked to participate in environmental public health studies.
Old T cells pollute with mito-litter
The mysteries behind immune aging and its related inflammation are being unmasked. The research of Jin et al. reveals that the defective turnover of damaged mitochondria in CD4+ T cells from aged individuals results in the exacerbated secretion of mitochondrial DNA, fuelling inflammaging and impairing immune responses.
CD4 + T cells with a defective autophagy system drive inflammaging by the secretion of non-degraded damage products to the extracellular medium.
Cells harbour a variety of mechanisms that detect, mark and eliminate intracellular molecules and organelles that are damaged or need to be recycled to ensure homeostasis. For instance, the proteasome pathway breaks down ubiquitin-labelled proteins into small peptides, and the autophagy system is responsible for the turnover of intracellular components, such as mitochondria, making use of the endolysosomal compartment to degrade them. Thereby, injured organelles are sealed into autophagosomes, which require processing and fusion with lysosomes, where luminal contents are degraded in an acidic environment.
This research by Jin et al. indicates that the ubiquitin-proteasome and autophagy pathways regulate each other and that both are altered in old T cells. They describe that the age-associated decline in TCF1, a transcription factor related to T-cell stemness 6, enhances the expression of the gene encoding the cytokine-inducible SH2-containing protein (CISH). The study illustrates that CISH binds directly to ATP6V1A, a catalytic subunit of the lysosomal proton pump ATPase complex, favouring its ubiquitination and subsequent proteasomal degradation. Consequently, lysosomal acidification is diminished and, thus, its recycling function (Figure 1). Therefore, the age-related upregulation of CISH in T cells leads to lysosome dysfunction. This paper reveals that the age-associated blockade of autophagy flux in human CD4+ T cells expands the entire endolysosomal compartment, including the accumulation of multivesicular bodies (MVBs) containing exosomes, autophagosomes, amphisomes and autolysosomes. However, their luminal contents are not degraded, fostering the intracellular accumulation of damaged components. Bektas et al. previously reported that CD4+ T cells from aged individuals exhibited an increased number of dysfunctional mitochondria in the autophagic compartment, reflecting a defect in mitochondrial recycling 7. Accordingly, Jin and colleagues observe that autophagy-impaired human CD4+ T cells accumulate amphisomes charged with damaged mitochondria during aging (Figure 1), depicting a new molecular mechanism by which lysosomal function is corrupted during T cell aging.
Then, how is this molecular garbage managed when the recycling machinery does not work properly? Jin et al. describe in their current paper in Nature Aging that aged CD4+ T cells secrete amphisome-derived exosomes together with damaged mitochondrial components, increasing the concentration of extracellular mtDNA (Figure 1). These findings fit with previously published data from these authors showing that lysosomal dysfunction in CD4+ T cells from old individuals prompted the secretion of granzyme B-enriched exosomes with highly cytotoxic properties towards neighbouring cells 8. In addition, circulating levels of mtDNA in humans increase with age in parallel with the concentration of proinflammatory cytokines 9 and, strikingly, Jin and colleagues correlate the levels of T cell-derived mtDNA with parameters of inflammaging. Adoptively transferring antigen-specific CD4+ T cells to young immunized mice increases serum levels of mtDNA along with the concentration of TNF and IL-6, which is prevented by silencing CISH in the donor T cells.
Mechanistically, it is known that mtDNA is sensed as a damage-associated molecular pattern (DAMP) through the endosomal TLR9, or via the cytosolic cGAS/STING and NLRP3/inflammasome pathways, all converging on the activation of a proinflammatory program 10. Whether mtDNA derived from old CD4+ T cells is secreted naked or associated with exosomes or other types of vesicles, such as mitochondria-derived vesicles 11, in this scenario requires further investigation. CD4+ T cells are able to transfer mtDNA via exosomes to nearby immune cells, activating their intracellular cGAS/STING pathway 12. Thereby, mtDNA from damaged mitochondria could be shuttled from old CD4+ T cells to the extracellular medium and reach other bystander cells, firing the aforementioned inflammatory signalling cascades. However, it remains an open question how mtDNA secreted by aged T cells is sensed by surrounding cells, provoking inflammaging. Importantly, the extracellular release of other mitochondria-derived DAMPs (i.e., cardiolipin, N-formyl peptides, ATP or Tfam) could also act as immunomodulatory cues in the development of inflammaging 10.
Inflammaging also underlies the defective immune responses of older people, who have a higher susceptibility to infectious and oncologic diseases as well as poor vaccination efficacy. Surprisingly, targeting the CISH-induced lysosomal dysfunction in CD4+ T cells not only attenuates premature inflammaging, but also improves antibody responses in young recipient mice subsequently given a viral and a non-infectious challenge. In particular, immunized mice receiving CISH-deficient CD4+ T cells display an increased number of T follicular cells, which tailor T-dependent antibody responses, and, accordingly, an increased production of antigen-specific antibodies. Recent findings have uncovered that knocking out CISH enhances T cell anti-tumor activity and susceptibility to PD-1 blockade, forming the foundation of a current human clinical trial testing adoptive T cell therapy to treat gastrointestinal cancer patients (ClinicalTrials.gov Identifier NCT04426669) 13. Interestingly, the results of Jin et al. suggest that CISH silencing could mitigate complications derived from adoptive T cell therapy, such as inflammatory cytokine release syndrome, an important challenge in cancer immunotherapy. Therefore, this novel study could have important clinical implications with the aim of boosting T-cell immunity while keeping inflammation at bay. This timely piece of work reinforces the idea that old T cells with defective mitochondria have an active role in inflammaging. The defective disposal of dysfunctional mitochondria through autophagy in CD4+ T cells from aged individuals results in the extracellular secretion of mtDNA, fuelling chronic inflammation. This research not only supports the growing body of evidence showing an age-dependent decline in the CD4+ T cell autophagy system 7,8,14, but also highlights its relevance in mitochondrial quality control, providing mechanistic insights into how old T cells accumulate dysfunctional mitochondria. Interestingly, mimicking the age-related mitochondrial decline in T cells results in lysosome dysfunction and alterations in the autophagic flux 4,5. Thereby, the intimate bidirectional crosstalk between the endolysosomal system and mitochondria could be exploited to rejuvenate the T cell compartment as well as to stave off inflammaging and foster healthier aging.
Figure 1. Lysosomal dysfunction in T cell aging fosters mtDNA secretion and inflammaging. The age-dependent decline in TCF1 in human CD4+ T cells upregulates CISH gene transcription, which encodes a scaffolding protein involved in protein ubiquitination. CISH binds and facilitates the ubiquitin-dependent degradation of ATP6V1A, a catalytic module of the lysosomal proton (H+) pump ATPase, leading to lysosomal dysfunction. Consequently, non-degraded cargos, including exosomes and dysfunctional mitochondria, accumulate in the endolysosomal system (i.e., amphisomes) and are ultimately released to the extracellular milieu, serving as a source of mitochondrial DNA (mtDNA) and correlating with inflammaging. mtDNA: mitochondrial DNA; Ub: ubiquitin.
Semi-Supervised Classification of PolSAR Images Based on Co-Training of CNN and SVM with Limited Labeled Samples
Recently, convolutional neural networks (CNNs) have shown significant advantages in the tasks of image classification; however, these usually require a large number of labeled samples for training. In practice, it is difficult and costly to obtain sufficient labeled samples of polarimetric synthetic aperture radar (PolSAR) images. To address this problem, we propose a novel semi-supervised classification method for PolSAR images in this paper, using the co-training of CNN and a support vector machine (SVM). In our co-training method, an eight-layer CNN with residual network (ResNet) architecture is designed as the primary classifier, and an SVM is used as the auxiliary classifier. In particular, the SVM is used to enhance the performance of our algorithm in the case of limited labeled samples. In our method, more and more pseudo-labeled samples are iteratively yielded for training through a two-stage co-training of CNN and SVM, which gradually improves the performance of the two classifiers. The trained CNN is employed as the final classifier due to its strong classification capability with enough samples. We carried out experiments on two C-band airborne PolSAR images acquired by the AIRSAR systems and an L-band spaceborne PolSAR image acquired by the GaoFen-3 system. The experimental results demonstrate that the proposed method can effectively integrate the complementary advantages of SVM and CNN, providing overall classification accuracy of more than 97%, 96% and 93% with limited labeled samples (10 samples per class) for the above three images, respectively, which is superior to the state-of-the-art semi-supervised methods for PolSAR image classification.
Introduction
Polarimetric synthetic aperture radar (PolSAR) is an advanced active microwave imaging system with the strong capability of information acquisition, which can perform under all-day and all-weather conditions [1]. As an important part of PolSAR image interpretation, PolSAR image classification is valuable in both civil and military applications, and has been a hot research topic for a long time.
Traditional methods of PolSAR image classification usually focus on two aspects, i.e., feature extraction and classifier design [2]. In the aspect of feature extraction of PolSAR images, a large number of polarimetric decomposition methods have been developed successively, such as Cloude decomposition [3], Freeman decomposition [4] and Yamaguchi decomposition [5]. Besides this, some other features have also been widely used, such as the polarimetric rotation domain features [2,6], texture features [7][8][9] and color features [10]. In the aspect of classifier design, many classifiers have been developed for PolSAR image classification, such as the Wishart classifier [11], decision tree [12,13], k-nearest neighbor (KNN) [14] and support vector machine (SVM) [15]. PolSAR image classification methods have been widely studied in the past decades, but the performance of these traditional features and classifiers remains limited, while CNN-based methods require a large number of labeled samples that are difficult and costly to obtain for PolSAR images. To address these problems, in this paper, a semi-supervised classification method based on the co-training of CNN and SVM is proposed for PolSAR images. The motivations of our work are as follows. In the past few decades, SVM has been deeply studied and widely used in classification tasks. Many studies show that SVM has superior classification capabilities to many other classical classifiers, particularly with respect to small training sample sizes [32][33][34][35]. Accordingly, SVM is able to outperform CNN given limited labeled samples. Considering the superiority of CNNs with sufficient training samples, a co-training method with SVM and CNN as base classifiers is proposed so as to integrate their complementary advantages. In our method, an eight-layer CNN with a residual network (ResNet) structure [36] is designed as the primary classifier due to its excellent classification potential. Besides this, to reduce the dependence of the method on the amount of given labeled samples, SVM is introduced as an auxiliary classifier. Moreover, a two-stage co-training strategy is designed to gradually increase the amount of pseudo-labeled samples. Experiments are carried out on both airborne and spaceborne PolSAR images of different bands, and the results show that the proposed method is much superior to the state-of-the-art semi-supervised methods with very limited labeled samples.
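Since the exact configuration of the eight-layer ResNet-style CNN is specified later in the paper, the following PyTorch sketch is only an illustration of what a small residual classifier for patch-wise PolSAR classification might look like. The channel widths, the six input feature channels, the 15 × 15 patch size and the number of classes are assumptions for the example, not the authors' settings; the layer count (one stem convolution, three residual blocks of two convolutions each, and a final fully connected layer) is just one plausible reading of "eight-layer".

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 conv layers with an identity skip connection (ResNet-style)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # residual (identity) shortcut

class PolSARNet(nn.Module):
    """Small CNN for patch-wise PolSAR classification (illustrative depth)."""
    def __init__(self, in_channels=6, n_classes=9):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(
            ResidualBlock(32), ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):                   # x: (batch, 6, patch, patch)
        return self.head(self.blocks(self.stem(x)))

logits = PolSARNet()(torch.randn(4, 6, 15, 15))   # 4 feature patches of size 15x15
print(logits.shape)                                # torch.Size([4, 9])
```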
PolSAR Image Data
Each pixel of a single-look PolSAR image is usually represented by a polarimetric scattering matrix, i.e., Equation (1) [1]

$$ S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix} \tag{1} $$

where "H" and "V" denote the horizontal and vertical polarization, respectively, and $S_{kl}$, $k, l = H, V$ is the scattering coefficient for transmitting in $l$ polarization and receiving in $k$ polarization. Under the reciprocity condition ($S_{HV} = S_{VH}$), the scattering matrix is equivalent to a Pauli-basis scattering vector, i.e.,

$$ \mathbf{k} = \frac{1}{\sqrt{2}}\,\big[\,S_{HH} + S_{VV},\;\; S_{HH} - S_{VV},\;\; 2S_{HV}\,\big]^{T} $$

where the superscript "T" indicates the transpose operation. In order to suppress the inherent speckle noise, multi-look processing is often carried out for PolSAR images. Each pixel of the multi-look PolSAR image can then be represented by a polarimetric coherence matrix [1]

$$ T_{3} = \langle \mathbf{k}\,\mathbf{k}^{H} \rangle = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix} $$

where $T_{ij}$ is the element of the coherence matrix $T_3$, $\langle\cdot\rangle$ denotes the ensemble averaging operation, and the superscript "H" represents the conjugate transpose operation.
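As an illustration of this data representation, the following minimal Python sketch (using NumPy; the function and variable names are ours, not from the paper) forms the Pauli scattering vector from a reciprocal scattering matrix and accumulates the multi-look coherence matrix over a local window.

```python
import numpy as np

def pauli_vector(S_hh, S_hv, S_vv):
    """Pauli-basis scattering vector k for a reciprocal scattering matrix."""
    return (1.0 / np.sqrt(2.0)) * np.array([S_hh + S_vv,
                                            S_hh - S_vv,
                                            2.0 * S_hv], dtype=complex)

def coherence_matrix(pauli_vectors):
    """Multi-look polarimetric coherence matrix T3 = <k k^H>,
    averaged over the single-look Pauli vectors in a local window."""
    T3 = np.zeros((3, 3), dtype=complex)
    for k in pauli_vectors:
        T3 += np.outer(k, np.conjugate(k))
    return T3 / len(pauli_vectors)

# Example: a 5x5 window of single-look pixels (random complex data for illustration).
rng = np.random.default_rng(0)
window = [pauli_vector(*(rng.normal(size=3) + 1j * rng.normal(size=3))) for _ in range(25)]
T3 = coherence_matrix(window)
print(np.round(T3, 2))
```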
PolSAR Image Features
In the past few decades, various polarimetric features have been developed for PolSAR images, which are in fact good representations of the original data. In this section, six polarimetric features widely used in PolSAR image classification are introduced, including the three features yielded by the Cloude decomposition, the total power SPAN and two polarimetric rotation null angle features. According to the Cloude decomposition [3], the conjugate-symmetric polarimetric coherence matrix $T_3$ can be expressed as

$$ T_3 = U\,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\,U^{H} $$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the three eigenvalues of the matrix $T_3$, and $U$ is the unitary matrix composed of the three unit orthogonal eigenvectors. Thus, three features, i.e., the scattering entropy H, anisotropy A and mean alpha angle $\alpha$, can be constructed as

$$ H = -\sum_{i=1}^{3} P_i \log_3 P_i, \qquad A = \frac{\lambda_2 - \lambda_3}{\lambda_2 + \lambda_3}, \qquad \alpha = \sum_{i=1}^{3} P_i\,\alpha_i $$

where $P_i = \lambda_i/(\lambda_1 + \lambda_2 + \lambda_3)$ and $\alpha_i$ are the i-th eigenvector parameters. The entropy H represents the randomness of the scattering data, the anisotropy A reflects the relative influence of the second and the third eigenvalues, and the angle $\alpha$ can be used to identify the potential average scattering mechanism of the data to a certain extent [1,3]. The total power SPAN of a PolSAR image is a widely used rotation-invariant feature, which equals the trace of the matrix $T_3$, namely [1]

$$ \mathrm{SPAN} = T_{11} + T_{22} + T_{33} = \lambda_1 + \lambda_2 + \lambda_3. $$

Besides this, in recent years, the characteristics of the polarimetric rotation domain have been deeply studied, and two null-angle features in this domain are defined in [2,6] from the phase Angle{·} of complex combinations of the coherence-matrix elements, where Re[·] and Im[·] denote the real and imaginary parts of a complex number, respectively. Many studies have shown that these two features are sensitive to different terrains, and are useful for PolSAR image classification [2,18,37].
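The Cloude-decomposition features and SPAN can be computed directly from $T_3$ by eigendecomposition. The sketch below uses our own naming and takes the alpha angles from the magnitude of the first element of each eigenvector, following the standard H/A/alpha construction; the rotation-domain null angles of [2,6] are omitted here.

```python
import numpy as np

def cloude_features(T3):
    """Scattering entropy H, anisotropy A, mean alpha angle (radians) and SPAN from a Hermitian T3."""
    eigvals, eigvecs = np.linalg.eigh(T3)          # real eigenvalues in ascending order
    lam = np.clip(eigvals[::-1], 1e-12, None)      # lambda1 >= lambda2 >= lambda3 > 0
    vecs = eigvecs[:, ::-1]                        # eigenvectors reordered to match
    P = lam / lam.sum()
    H = -np.sum(P * np.log(P) / np.log(3.0))       # entropy with logarithm base 3
    A = (lam[1] - lam[2]) / (lam[1] + lam[2])      # anisotropy
    alpha_i = np.arccos(np.clip(np.abs(vecs[0, :]), 0.0, 1.0))
    alpha = np.sum(P * alpha_i)                    # mean alpha angle
    span = np.real(np.trace(T3))                   # total power
    return H, A, alpha, span

# Example with a toy Hermitian T3 built from random Pauli vectors.
rng = np.random.default_rng(1)
ks = rng.normal(size=(50, 3)) + 1j * rng.normal(size=(50, 3))
T3 = sum(np.outer(k, k.conj()) for k in ks) / 50
print(cloude_features(T3))
```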
Methodology
The co-training method was originally proposed by Blum and Mitchell [38]. It usually contains two base classifiers or uses two different feature views. By taking advantage of the complementarity between different classifiers or features, the method can achieve a good performance, which cannot be achieved by using only a single classifier or a single feature view [20,30]. CNNs are able to complete the classification tasks, but they are data-driven models, and usually heavily rely on the amount of labeled training samples. When the labeled samples are limited, it is difficult to obtain satisfactory classification results using CNNs. To solve this problem, a co-training method for PolSAR image classification has been developed, combining the CNN with the SVM, which is less sensitive to the amount of labeled training samples. The flowchart of the proposed method is shown in Figure 1, and more details of the proposed method are as follows.
Convolutional Neural Network
In recent years, the CNN has been widely used in the field of computer vision. Especially since Krizhevsky et al. successfully used a CNN called AlexNet in the ImageNet large-scale visual recognition challenge in 2012 [39], CNNs have developed rapidly. Many excellent CNNs have been proposed, such as GoogLeNet [40], VGG-Net [41], ResNet [36] and so on, whose performances are significantly better than those of various traditional machine learning methods. A CNN usually consists of multiple cascaded layers, including the input layer, convolutional layers, activation functions, pooling layers, fully connected layers, etc. As the number of layers increases, the fitting ability of the CNN often becomes stronger, but serious problems of gradient disappearance and model degradation may occur during training. To address these problems, He et al. proposed a CNN named ResNet in 2015, which achieved remarkable results by introducing residual learning blocks (denoted as ResBlock) [36]. The illustration of a basic ResBlock is shown in Figure 2. In Figure 2, X denotes the input of the ResBlock, ReLU means the rectified linear unit function [42] used as a nonlinear activation function, F(X) is the mapping of multiple weight layers in the block with respect to X, and H(X) = F(X) + X is the underlying mapping of the ResBlock. Different from the conventional CNN structure, this design makes the weight layers learn the difference between H(X) and the block input X through the identity mapping formed by the cross-layer connection, which is called the "residual". Many studies show that the ResBlock can effectively avoid the model degradation and gradient disappearance problems caused by deepening the network, and a deep network with ResBlocks has significantly better learning performance than a network that simply stacks weight layers [43].
In view of the advantages of the ResBlock, a simple ResNet is designed in our co-training method for PolSAR image classification. The original ResNet with 152 layers has a strong learning ability and performs excellently on the ImageNet dataset [36]. However, since PolSAR image classification is a pixel-level task, neighborhood patches of small size are used to represent pixels and serve as the input of the CNN in our method. A CNN with many layers would therefore cause serious losses of image features due to the repeated convolutional and pooling operations. Accordingly, we design a shallow ResNet with eight weight layers for our task, making it more suitable for processing image patches of small size. The architecture of the employed CNN is shown in Figure 3. As seen from Figure 3, this network consists of a convolutional layer, a max-pooling layer, three ResBlocks (each composed of two convolutional layers), a global average pooling layer and a fully connected (FC) layer. The size of all convolutional kernels in this network is 3 × 3, and the kernel numbers in convolutional layers 1-7 are 32, 32, 32, 64, 64, 128 and 128, respectively. Besides this, the strides of convolutional layers 1-7 are 1, 1, 1, 2, 1, 2 and 1, respectively, and the stride of the max-pooling layer is 2. It should also be pointed out that a batch normalization operation is performed after each convolutional layer. Since there are eight weight layers (seven convolutional layers and a fully connected layer) in this CNN, we refer to it as ResNet-8 for simplicity in this paper. In the classification of PolSAR images using a CNN, a common approach is to represent each pixel by its neighborhood patch data, which are classified by the trained CNN, and the predicted label is then reassigned to the corresponding pixel; the main flowchart of this approach is also illustrated in Figure 3.
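A minimal sketch of this ResNet-8 is given below. It follows the description above (3 × 3 kernels; channel widths 32, 32, 32, 64, 64, 128, 128; strides 1, 1, 1, 2, 1, 2, 1; batch normalization after every convolution; a max-pooling layer with stride 2; global average pooling and one fully connected layer). The choice of PyTorch, the 1 × 1 projection used for the stride-2 shortcuts, and the 15-channel/15-class defaults are our assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 conv layers with batch norm and a (possibly projected) identity shortcut."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Assumed projection shortcut when the shape changes (not specified in the paper).
        self.shortcut = (nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                       nn.BatchNorm2d(out_ch))
                         if (stride != 1 or in_ch != out_ch) else nn.Identity())

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class ResNet8(nn.Module):
    """Eight weight layers: conv1 + three ResBlocks (two convs each) + a fully connected layer."""
    def __init__(self, in_ch=15, n_classes=15):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=1, padding=1, bias=False),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                                  nn.MaxPool2d(2, stride=2))
        self.block1 = ResBlock(32, 32, stride=1)    # conv layers 2-3
        self.block2 = ResBlock(32, 64, stride=2)    # conv layers 4-5
        self.block3 = ResBlock(64, 128, stride=2)   # conv layers 6-7
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes))

    def forward(self, x):                           # x: (batch, channels, 15, 15)
        return self.head(self.block3(self.block2(self.block1(self.stem(x)))))

# Usage: a batch of four 15-channel 15x15 patches.
x = torch.randn(4, 15, 15, 15)
print(ResNet8()(x).shape)                           # torch.Size([4, 15])
```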
Support Vector Machine
SVM has been deeply studied and applied in many classification tasks since the 1990s. It was one of the most widely used classifiers before deep learning was proposed, and its development is relatively mature. Many studies demonstrate that SVM has good capabilities with respect to small training sample sizes [32][33][34][35]. Considering the complementary advantages of SVM relative to CNN, we introduce SVM as the auxiliary base classifier in our co-training method.
The original SVM was proposed to solve the binary classification problem, and the data classification is completed by solving for the classification hyperplane that correctly partitions the dataset with the largest geometric margin. The main formulation of the soft-margin SVM can be expressed as [20]

$$ \min_{\mathbf{w},\,b,\,\rho}\ \frac{1}{2}\|\mathbf{w}\|^{2} + \gamma \sum_{i=1}^{N} \rho_i \quad \text{s.t.} \quad y_i\big(\mathbf{w}^{T}\mathbf{x}_i + b\big) \ge 1 - \rho_i,\ \ \rho_i \ge 0,\ \ i = 1, \ldots, N $$

where $\mathbf{w}$ and $b$ are the normal vector and displacement term of the classification hyperplane, respectively, $\mathbf{x}_i$ and $y_i$ are the i-th training sample and its label, $N$ is the number of training samples, $\gamma$ is the penalty factor and $\rho_i$ is the relaxation (slack) variable of the i-th sample, which is used to improve the fault tolerance of the model.
In SVM, the problem of maximizing the geometric margin is thus transformed into the constrained optimization problem above. Under the condition of limited samples, the SVM still has a relatively strong learning ability, and can avoid overfitting and the curse of dimensionality. For samples that are not linearly separable, SVM implicitly transforms them from the original data space into a high-dimensional data space via a kernel function, making the samples linearly separable in the high-dimensional space. In addition, the binary SVM can be easily extended to a multi-class SVM by using strategies such as one-versus-one and one-versus-all [44,45].
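In practice, the multi-class soft-margin SVM sketched above is rarely implemented from scratch. The snippet below shows one way to obtain a multi-class RBF SVM with class-probability estimates (needed later by the sample-selection conditions) using scikit-learn's SVC, which handles the multi-class extension internally via a one-versus-one strategy. The data and the parameter values shown are placeholders, not the settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_l = rng.normal(size=(60, 15))          # hypothetical point-view feature vectors (15-D)
y_l = rng.integers(0, 3, size=60)        # hypothetical labels for 3 classes

# Multi-class soft-margin SVM with an RBF kernel; C plays the role of the penalty factor
# in the formulation above, and probability=True enables the per-class probability
# estimates used later by the sample-selection conditions of the co-training procedure.
svm = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True).fit(X_l, y_l)

X_u = rng.normal(size=(5, 15))           # hypothetical unlabeled samples
print(svm.predict(X_u), svm.predict_proba(X_u).max(axis=1))
```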
Construction of Feature Views
As a semi-supervised method based on disagreement, co-training can make full use of the available information by using features from different views to improve the performance of the model. According to the characteristics of CNN and SVM, two feature views of each pixel are constructed in our method. Firstly, since the inputs of a CNN are usually image patches of a certain size, each pixel is represented by its neighborhood image patch, which serves as the input of the CNN and is called the neighborhood feature view for short. Neighborhood data contain not only the information of the pixel itself, but also the spatial context information. By comparison, the data of the pixel itself are used as the input of the SVM, which forms a different view and is called the point feature view here.
Each pixel of a PolSAR image can be described by a polarimetric coherence matrix, in which the off-diagonal elements are usually complex. In order to facilitate the processing of CNN and SVM, which are usually defined in the real number domain, this method separates the real and imaginary parts of the complex elements in the coherence matrix. Thus, each pixel of a PolSAR image can be represented by a nine-dimensional (9-D) real vector, namely,

$$ F_1 = \big[\,T_{11},\ T_{22},\ T_{33},\ \mathrm{Re}(T_{12}),\ \mathrm{Im}(T_{12}),\ \mathrm{Re}(T_{13}),\ \mathrm{Im}(T_{13}),\ \mathrm{Re}(T_{23}),\ \mathrm{Im}(T_{23})\,\big]. $$

Many studies have shown that the utilization or combination of appropriate artificially designed features can effectively alleviate the dependence of CNN on the number of training samples [2,18,37]. Therefore, the six polarimetric features described in Section 2.2 are employed as part of the input data of CNN and SVM, which can be expressed as a six-dimensional (6-D) vector

$$ F_2 = \big[\,H,\ A,\ \alpha,\ \mathrm{SPAN},\ \theta_{1},\ \theta_{2}\,\big] $$

where $\theta_1$ and $\theta_2$ denote the two rotation-domain null angles. Then, by combining the 9-D original data and the 6-D polarimetric features, each pixel of a PolSAR image can be represented by a 15-dimensional (15-D) feature vector, i.e., $F = [F_1; F_2]$. It should be pointed out that, to reduce the impact of inherent speckle, PolSAR images are filtered before constructing the two feature views in our method, using the global weighted least squares (GWLS) filtering method [46].
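A sketch of how the two feature views could be assembled is shown below; the function names and the patch-extraction details are ours. Each pixel's point view is the 15-D vector F = [F1; F2], and its neighborhood view is the patch of these vectors centered on the pixel.

```python
import numpy as np

def pixel_features(T3, polarimetric_feats):
    """15-D real feature vector for one pixel.
    T3: 3x3 complex coherence matrix; polarimetric_feats: the 6-D vector F2
    (H, A, alpha, SPAN and the two null angles)."""
    f1 = np.array([T3[0, 0].real, T3[1, 1].real, T3[2, 2].real,
                   T3[0, 1].real, T3[0, 1].imag,
                   T3[0, 2].real, T3[0, 2].imag,
                   T3[1, 2].real, T3[1, 2].imag])
    return np.concatenate([f1, polarimetric_feats])

def neighborhood_patch(feature_image, row, col, size=15):
    """Neighborhood view: a size x size patch of 15-D feature vectors centered on (row, col).
    The feature image is assumed to be padded beforehand so the patch never leaves the image."""
    half = size // 2
    return feature_image[row - half: row + half + 1, col - half: col + half + 1, :]

# Example: a padded feature image of shape (H, W, 15) filled with random values.
feat_img = np.random.default_rng(0).normal(size=(64, 64, 15))
print(neighborhood_patch(feat_img, row=30, col=30, size=15).shape)   # (15, 15, 15)
```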
Co-Training Method of CNN and SVM
The co-training method has two obvious characteristics. On the one hand, unlike fully supervised approaches that only use the labeled samples to train models, the co-training method is semi-supervised, and attempts to use the information of unlabeled samples too. On the other hand, different to traditional classification methods with a single classifier, each co-training method has two classifiers, and aims to integrate their complementary advantages. So, there are two key issues for a co-training method, i.e., how to make full use of the unlabeled samples and how to integrate the two base classifiers. Focusing on these two issues, we propose a co-training method using CNN and SVM, as shown in Algorithm 1.
There are two sample sets in our method, namely, a labeled one of small size (L : {L1, L2}) and an unlabeled one of large size (U : {U1, U2}). L1 and L2 are the labeled samples of the neighborhood view and point view, respectively. Similarly, U1 and U2 are the unlabeled samples of the neighborhood view and point view, respectively. As shown in Algorithm 1, the two base classifiers are trained iteratively through several learning rounds. In each round, there are four main steps, i.e., the training of the base classifiers, the classification of unlabeled samples in a buffer pool, the extension of the labeled sample set via sample selection, and the updating of the unlabeled sample set and the buffer pool.
In the semi-supervised method, unlabeled samples are classified, and the samples with high reliability are selected to extend the labeled sample set, where their predicted labels, called pseudo-labels, are used. Then, the selected samples are used for training in the next learning round. As the iterative learning rounds progress, more and more unlabeled samples become pseudo-labeled ones, which are useful for improving the performance of the classifiers. In practice, there may be a large number of unlabeled samples. If all of them were classified in each learning round of the co-training method, it would be very time-consuming and might incur serious storage overhead. Accordingly, at the beginning of our method, a buffer pool of unlabeled samples B : {B1, B2} is constructed, and only the samples in the buffer pool are classified in each learning round.

Algorithm 1. Co-training of CNN and SVM.
Input: labeled sample set L : {L1, L2}; unlabeled sample set U : {U1, U2}; h: initial size of the buffer pool; k: learning round, k = 1; K: maximum learning round; K1: maximum learning round in stage 1; M: maximum number of selected samples of each class in each learning round.
Output: the trained CNN and SVM.
Process:
Construct a buffer pool of unlabeled samples: select h samples randomly from U : {U1, U2} to form a buffer pool B : {B1, B2}, and remove the selected samples from U.
While k ≤ K:
(1) Training of the base classifiers: train the CNN and SVM using L1 and L2, respectively.
(2) Classification of the samples in the buffer pool B: classify every sample x(1)i and x(2)i in B : {B1, B2} using the trained CNN and SVM, respectively.
(3) Extension of the labeled sample set: select reliable samples according to the stage-1 or stage-2 condition described below and add them, with their pseudo-labels, to L1 and L2.
(4) Updating of the unlabeled sample set and the buffer pool.
If U = Φ, i.e., the unlabeled sample set U is empty, and all the samples of the initial unlabeled sample set have been labeled, stop; otherwise set k = k + 1.

In each learning round of our method, the CNN and SVM are first trained with the samples of the neighborhood view and the point view, respectively. Then, the samples in the buffer pool B : {B1, B2} are classified by the trained CNN and SVM. Next, the labeled sample set is extended by selecting some of the previously classified samples, which is the core step of our co-training method.
The classification results given by CNN are usually unreliable when only a few labeled training samples are provided, and may even be far inferior to many traditional classifiers. Consequently, in the design of this method, we divide the extension of the labeled sample set into two stages, i.e., the first K1 learning rounds of the algorithm are stage 1, and the subsequent rounds are stage 2. In stage 1, to avoid the error accumulation caused by the unreliable initial classification performed by the CNN, the classification results yielded by the SVM are taken more seriously. The condition for selecting training samples in this stage can be expressed as

$$ X_1, X_2 \leftarrow \Big\{\, x_i \;\Big|\; \hat{y}_i^{\,CNN} = \hat{y}_i^{\,SVM} \ \text{ and } \ p_i^{\,SVM} > Th_p \,\Big\} $$

where $X_1$ and $X_2$ are the selected sample sets of the neighborhood view and the point view, $\hat{y}_i^{\,CNN}$ and $\hat{y}_i^{\,SVM}$ are the labels of $x_i$ predicted by the CNN and the SVM, respectively, $p_i^{\,SVM}$ is the prediction probability yielded by the SVM, and $Th_p$ is the threshold of prediction probability, which is set as 50% in this paper. This means that stage 1 selects the samples whose predicted labels given by the two classifiers are the same, and whose prediction probability yielded by the SVM is greater than 50%. The prediction probability yielded by the CNN is not considered due to its relative unreliability. Besides this, the predicted labels of the selected samples are used as their "pseudo-labels".
After several learning rounds in stage 1, the classification results of both the CNN and the SVM become more credible, and are then combined to select samples. So, in stage 2, the predicted labels and probabilities of the two classifiers are all considered, and the condition is given as

$$ X_1, X_2 \leftarrow \Big\{\, x_i \;\Big|\; \hat{y}_i^{\,CNN} = \hat{y}_i^{\,SVM} \ \text{ and } \ \big( p_i^{\,CNN} > Th_p \ \text{ or } \ p_i^{\,SVM} > Th_p \big) \,\Big\} $$

where $p_i^{\,CNN}$ is the prediction probability of sample $x_i$ given by the CNN. This means that the method selects the samples whose labels predicted by the two classifiers are the same, and for which the prediction probability of either classifier is greater than $Th_p$, i.e., 50%. Compared to condition 1, this condition is more relaxed, and allows more samples to be selected.
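The two selection conditions can be written compactly as below (a sketch with our own variable names): stage 1 requires label agreement plus a confident SVM, while stage 2 accepts label agreement when either classifier is confident.

```python
def select_stage1(y_cnn, y_svm, p_svm, th_p=0.5):
    """Stage-1 condition: identical predicted labels and SVM probability above the threshold."""
    return (y_cnn == y_svm) and (p_svm > th_p)

def select_stage2(y_cnn, y_svm, p_cnn, p_svm, th_p=0.5):
    """Stage-2 condition: identical predicted labels and either classifier confident."""
    return (y_cnn == y_svm) and (p_cnn > th_p or p_svm > th_p)

print(select_stage1(2, 2, 0.8), select_stage2(2, 2, 0.7, 0.3))   # True True
```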
Then, in the last step of each learning round, the unlabeled sample set and the buffer pool are updated. The previously selected samples are deleted from the buffer pool, and some new samples are randomly selected, removed from set U and added to the buffer pool B. The size of U decreases while the size of B increases, since the number of new samples drawn from U is twice the total number of "pseudo-labeled" samples selected in the current learning round.
The algorithm will stop if all the samples of the initial unlabeled sample set are labeled, or the learning round reaches the given maximum number. Thus, two trained classifiers, i.e., CNN and SVM, are obtained via the co-training procedure. Since CNN often achieves superior classification performance with sufficient samples, this algorithm uses the trained CNN as the final classifier. This is also the reason we call the CNN a primary classifier in our co-training method.
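Putting the pieces together, the learning loop of Algorithm 1 can be sketched as follows. This is a simplified, runnable outline: an MLP stands in for the CNN so the snippet does not depend on a deep learning framework, the buffer pool and the per-class cap M are replaced by a single per-round cap, and the toy data at the end are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def co_train(X_view1, X_view2, y_init, labeled_mask, K=10, K1=3, th_p=0.5, per_round=60):
    """Simplified two-stage co-training sketch over two feature views.
    y_init is trusted only where labeled_mask is True; elsewhere it is overwritten by pseudo-labels."""
    labeled, y = labeled_mask.copy(), y_init.copy()
    clf1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=800, random_state=0)  # CNN stand-in
    clf2 = SVC(kernel="rbf", gamma="scale", probability=True, random_state=0)     # auxiliary SVM
    for k in range(1, K + 1):
        clf1.fit(X_view1[labeled], y[labeled])
        clf2.fit(X_view2[labeled], y[labeled])
        if labeled.all():
            break
        unl = np.flatnonzero(~labeled)
        p1, p2 = clf1.predict_proba(X_view1[unl]), clf2.predict_proba(X_view2[unl])
        y1, y2 = clf1.classes_[p1.argmax(1)], clf2.classes_[p2.argmax(1)]
        if k <= K1:  # stage 1: label agreement and a confident SVM
            ok = (y1 == y2) & (p2.max(1) > th_p)
        else:        # stage 2: label agreement and either classifier confident
            ok = (y1 == y2) & ((p1.max(1) > th_p) | (p2.max(1) > th_p))
        sel = np.flatnonzero(ok)[:per_round]
        if sel.size == 0:
            continue
        picked = unl[sel]
        y[picked] = y2[sel]          # pseudo-labels taken from the SVM predictions
        labeled[picked] = True
    return clf1, clf2                # the primary classifier performs the final prediction

# Toy example: two Gaussian classes seen through two noisy "views".
rng = np.random.default_rng(0)
truth = np.repeat([0, 1], 200)
base = rng.normal(size=(400, 5)) + truth[:, None] * 2.0
Xa = base + rng.normal(scale=0.3, size=base.shape)
Xb = base + rng.normal(scale=0.3, size=base.shape)
mask = np.zeros(400, dtype=bool)
mask[rng.choice(400, size=10, replace=False)] = True
primary, auxiliary = co_train(Xa, Xb, truth.copy(), mask)
print("accuracy of the primary classifier:", (primary.predict(Xa) == truth).mean())
```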
Dataset Descriptions and Parameter Settings
To evaluate the performance of the proposed method, three real PolSAR image datasets are used in the experiments in this paper. Dataset 1, acquired by the AIRSAR system in 1989, is an L-band PolSAR image of a region in Flevoland, Netherlands, with an image size of 750 × 1024 pixels. Its Pauli-RGB image, ground-truth map and legend are shown in Figure 4a-c, respectively. It has 15 labeled terrain categories, including stembeans, peas, forest, lucerne, wheat, beet, potatoes, bare soil, grass, rapeseed, barley, wheat 2, wheat 3, water, and buildings. The total number of labeled pixels is 157,296. Dataset 2, also acquired by the AIRSAR system, covers another agricultural region; its Pauli-RGB image, ground-truth map and legend are shown in Figure 5a-c, respectively. It has 14 labeled terrain categories, i.e., potato, fruit, oats, beet, barley, onions, wheat, beans, peas, maize, flax, rapeseed, grass, and lucerne. The total number of labeled pixels is 135,350. Dataset 3, acquired by the spaceborne GaoFen-3 system, covers an urban area; its Pauli-RGB image, ground-truth map and legend are shown in Figure 6a-c, respectively. It has five labeled terrain classes, including water, vegetation, high-density urban, low-density urban and inclined urban areas. The total number of labeled pixels is 3,136,780. In our experiments, the ResNet-8 given in Section 3.1.1 is employed as the CNN. The size of the image patches of the neighborhood view is 15 × 15 pixels. The CNN is trained by mini-batch stochastic gradient descent, with Adam (learning rate 0.01) used as the optimizer and the cross-entropy loss function. Besides this, the SVM with the radial basis function (RBF) kernel is used, which is implemented by the SVC function in the sklearn package [48], and its parameters are set by the grid-searching algorithm using the GridSearchCV function. The maximum learning round is K = 15, the maximum learning round of stage 1 is set as K1 = 4, the maximum number of selected samples per class in each learning round is set as M = 20, and the initial sizes of the buffer pools are set as h = 3000, 5000, and 6000 for datasets 1-3, respectively. In addition, to avoid too many unlabeled samples occupying memory, a certain proportion of the samples indicated by the ground-truth map are randomly selected as unlabeled samples; these proportions are set as 5%, 10% and 0.5% for datasets 1-3, respectively. To compare the two base classifiers under different sample sizes, the fully supervised CNN and SVM were first applied to the three datasets with different numbers of labeled samples per class (LSPC), and the resulting overall accuracy (OA) values are shown in Figure 7. As can be observed in Figure 7, as the sample size increases, the OA value obtained by the CNN or the SVM increases as well, which means that the classification performance of the two classifiers is positively related to the sample size. In addition, according to Figure 7a-c, when the number of LSPC does not exceed 10, the OA value obtained by the SVM is significantly higher than that obtained by the CNN, regardless of whether we use dataset 1, 2 or 3. It can be seen that in the case of limited labeled samples, the SVM achieves a better classification performance than the CNN, and the classification results yielded by the SVM are more reliable. However, as the sample size increases, the increase in the OA value given by the SVM is relatively small, while that given by the CNN is relatively large. When the number of samples exceeds 50 for each category, the OA values given by the CNN exceed those given by the SVM. This indicates that the CNN has a stronger classification ability than the SVM when a large number of samples is available. To sum up, SVM and CNN show different classification advantages with different numbers of labeled samples, which supports the utility of the proposed co-training method using these two classifiers.
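The training setup described at the start of this section can be expressed roughly as follows. This is a sketch: it reuses the ResNet8 class from the earlier sketch, and the batch size and the GridSearchCV search grid are our placeholders, since they are not given in the paper.

```python
import torch
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# CNN side: Adam optimizer with a learning rate of 0.01 and the cross-entropy loss,
# applied to the ResNet-8 sketched earlier (15-channel 15x15 input patches; 15 classes
# corresponds to the dataset-1 setting).
model = ResNet8(in_ch=15, n_classes=15)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

# SVM side: RBF kernel with hyperparameters chosen by grid search; the grid values
# below are placeholders, since the searched ranges are not given in the paper.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]}
svm = GridSearchCV(SVC(kernel="rbf", probability=True), param_grid, cv=3)
```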
By comparing the classification maps in Figures 8a1-a3, 9a1-a3 and 10a1-a3 with the corresponding ground-truth maps in Figures 4-6, it can be observed that poor classification results are obtained by the fully supervised CNN (FS-CNN) when only a few training samples are provided. A large number of pixels are misclassified, especially under the condition of 3 LSPCs, such as those of peas and wheat, wheat 2 and rapeseed for dataset 1, those of potato, beet and fruit for dataset 2, and those of vegetation and high-density urban for dataset 3. So, the corresponding values of the OA and Kappa coefficients of the classification results are small, as shown in Table 1. By comparison, the fully supervised SVM (FS-SVM) yields much better classification results than the FS-CNN, i.e., the classification maps shown in Figures 8b1-b3, 9b1-b3 and 10b1-b3 are more similar to the corresponding ground-truth maps, and the OA values and Kappa coefficients are much higher. These results further validate the superiority of SVM over CNN with very limited training samples.
Comparison of Co-Training and Self-Training Methods
By comparing the classification maps in Figures 8a1-a3, 9a1-a3, 10a1-a3, 8c1-c3, 9c1-c3 and 10c1-c3, we can see that the semi-supervised self-training CNN (ST-CNN) obtained significantly better classification results than the FS-CNN under the same conditions. For example, the obtained OA values increased by about 20% for dataset 1, i.e., from 66.63% to 85.07% with 3 LSPCs, from 68.13% to 88.51% with 5 LSPCs, and from 71.36% to 90.48% with 10 LSPCs. Similar results can be observed for the Kappa coefficients. This is mainly because many unlabeled samples were effectively utilized for training the classifiers. In contrast, though the semi-supervised self-training SVM (ST-SVM) obtained better results than the FS-SVM under the same conditions, the improvement is relatively small. The classification maps, OA values and Kappa coefficients given by the FS-SVM and ST-SVM are similar. For example, the OA values given by the ST-SVM are only about 1.5% higher than those given by the FS-SVM. These results indicate that the SVM is less sensitive to the number of samples than the CNN.
Compared with the four methods above, the proposed co-training method provided the best classification results under the same conditions, i.e., the classification maps shown in Figures 8e1-e3, 9e1-e3 and 10e1-e3 are the most similar to the ground truth, and the obtained values of the OA and Kappa coefficients are the highest. For example, as shown in Table 1, the OA value of dataset 1 given by the proposed method with 10 LSPCs is 97.84%, which is much greater than the 90.48% and 89.69% given by the ST-CNN and ST-SVM, respectively. Similar results can be observed for the other datasets with different numbers of LSPC. These results reflect the positive role of the SVM in our co-training method. Under the condition of 10 LSPCs, the OA values yielded by the SVM were more than 80% for these datasets, which ensures the reliability of most "pseudo-labeled" samples selected in the initial learning rounds of the method. It is worth noting that, even when very limited samples were provided, i.e., 3 LSPCs, acceptable classification results were still obtained by the proposed co-training method (about 90% for these datasets), which is much better than the results of the compared self-training methods. These results indicate that our co-training method can effectively integrate the advantages of CNN and SVM.
In order to further analyze the roles of different base classifiers in our method, the OA values obtained by SVM and CNN are also calculated in different learning rounds of the co-training procedure under the conditions of 3, 5 and 10 LSPCs. The obtained OA curves of datasets 1-3 are shown by the dashed lines in Figure 11a-c, respectively. Moreover, the obtained OA curves in different learning rounds of the ST-SVM and ST-CNN methods under the condition of three LSPCs are also presented, which are shown in Figure 11 using the orange dotted lines with triangle and circle markers, respectively. Figure 11. OA values obtained by SVM and CNN in different learning rounds of co-training and self-training methods applied to (a) dataset 1, (b) dataset 2 and (c) dataset 3, where "CT" denotes "Co-Training", "ST" denotes "Self-Training" and "LSPC" denotes "Labeled Samples per Class".
By comparing the six OA curves (dashed lines) of each dataset given by the base classifiers in the co-training procedures, some important observations can be made, which are summarized as follows.
(1) For any base classifier, SVM or CNN, the obtained OA curve of each dataset is higher when more initial labeled samples are provided. That is, the red curves (10 LSPC) are higher than the blue ones (5 LSPC), and the green ones (3 LSPC) are the lowest. This indicates that the initial labeled sample size has a significant impact on the performance of the classifiers.
(2) In the first learning round, since there are no pseudo-labeled samples yet, the two classifiers conduct fully supervised learning with the given labeled samples, and the obtained OA values of all datasets are small, while those obtained by the SVM are clearly greater than those given by the CNN. These results are consistent with those shown in Figures 8a1-b3, 9a1-b3 and 10a1-b3. As the number of learning rounds increases, the OA values obtained by each base classifier show an overall increasing trend. This indicates that the performance of the two classifiers, especially the CNN, can be effectively improved by using more pseudo-labeled samples for training.
(3) As the number of learning rounds increases, the OA values obtained by the CNN increase faster than those given by the SVM. After a certain number of learning rounds, the OA values obtained by the CNN are almost always greater than those obtained by the SVM. This is also the reason why the CNN is used as the primary classifier of our co-training method, and is used for the final classification. This result further validates that the CNN is superior to the SVM for classification when a large number of samples is provided.
In addition, by comparing the OA curves given by the same classifier (CNN or SVM) but in different training approaches, some important results are derived.
(1) As shown by the green and orange curves with circle markers in Figure 11, for each dataset, the CNN trained by co-training provides significantly greater OA values than the same CNN trained by self-training, even though they yield the same results in the first learning round. This indicates that the SVM is useful for improving the performance of the trained CNN in our co-training method.
(2) Similarly, as shown by the green and orange curves with triangle markers in Figure 11, the SVM trained by co-training also provides greater OA values than the same SVM trained by self-training, though the improvement is not as obvious as that given by CNN. It indicates that the CNN is also useful for improving the performance of the SVM trained by co-training.
These results demonstrate that the two base classifiers promote each other well following co-training. In summary, compared to the self-training methods, the co-training method can make better use of the unlabeled samples and yield superior classifiers.
In order to further evaluate the effectiveness of the SVM used in the proposed method, two other traditional classifiers, i.e., k-nearest neighbor (KNN) and the multi-layer perceptron (MLP), are used as auxiliary classifiers in place of the SVM in our co-training framework for comparison; the corresponding methods are denoted as CNN-KNN and CNN-MLP, respectively. In the experiment, the two classifiers are implemented using the functions in the sklearn package [48]. The MLP contains two hidden layers, whose neuron numbers are 20 and 30, respectively. The parameters of KNN and the remaining parameters of MLP are left at the package defaults. The two methods are applied to the previous three datasets under the conditions of 3, 5 and 10 LSPCs, and the obtained values of the OA and Kappa coefficient are listed in Table 1. Considering the length of the paper, the corresponding classification maps are not given.
As can be seen in Table 1, under the same limited-sample conditions, when KNN or MLP is used to replace SVM as the auxiliary classifier in the framework of our co-training method, the obtained values of the OA and Kappa coefficient are both higher than those obtained by the FS-CNN. For example, when the given sample number is 3 LSPCs, the OA values of CNN-KNN and CNN-MLP for dataset 1 are 81.04% and 86.09%, respectively, which are significantly higher than the 66.63% of the FS-CNN. Similar results can be obtained for the other two sample conditions and for the other two datasets. These results demonstrate that these traditional classifiers are also helpful in promoting the performance of the trained CNN. However, in comparison, the OA and Kappa coefficients obtained by our co-training method using SVM are the highest, which verifies that SVM is superior to KNN and MLP under the condition of limited samples, and is also more suitable for use in cooperation with CNN.
Comparison with Other Semi-Supervised Methods
In order to further evaluate the performance of the proposed method, many other state-of-the-art semi-supervised classification methods for PolSAR images with limited labeled samples are compared in this section. The methods used for comparison include tri-training with neighborhood minimum spanning tree (TT-NMST) [14], self-training with neighborhood minimum spanning tree (ST-NMST) [29], stacked sparse auto-encoder (SSAE) [49], two recurrent complex-valued CNNs (RCV-CNN1 and RCV-CNN2) [25], the superpixel restrained deep neural network with multiple decisions (SRDNN-MD) [22], the superpixel graph-based CNN (SPGraphCNN) [21], and two methods based on a spatial anchor graph (SSA1 and SSA2) [24].
Note that, in many studies of PolSAR image classification, datasets 1-2 are commonly used for method comparison, so they are used here. Moreover, in most existing papers on PolSAR image classification with limited samples, there are two typical approaches to setting the amount of labeled training samples. One is to select 10 LSPCs, as done in the previous section, and the other is to select a certain ratio, such as 1%, of the labeled samples for training. Therefore, in this section, we compare the different methods separately according to these two sampling approaches. It should be pointed out that not all of the existing methods mentioned above have been applied to both datasets 1 and 2, or evaluated with both sampling approaches, so only the results reported in the literature are used for comparison. The accuracy of each category, as well as the OA and Kappa coefficients of datasets 1-2, obtained by the different methods with different amounts of training samples, are presented in Tables 2-4. Besides this, in order to compare these results visually, the classification accuracy values listed in Tables 2-4 are also shown graphically in Figure 12a-c, respectively. As shown in Table 2 and Figure 12a, for the classification of dataset 1 under the condition of 10 LSPCs, the proposed method obtains the highest classification accuracy for almost all categories. Consequently, the OA value given by our method is as high as 97.84%, which is significantly greater than those given by the two methods used for comparison, i.e., 87.01% by TT-NMST and 89.92% by ST-NMST. Similarly, the Kappa coefficient given by our method is 0.9764, which is much greater than the 0.8542 and 0.8852 given by the other two methods. Besides this, as shown in Table 3 and Figure 12b, for the classification of dataset 1 under the condition of 1% labeled samples, the SPGraphCNN and our method provide better results than the other five methods used for comparison. The highest accuracy for most categories is obtained by these two methods, i.e., among the 15 categories, the SPGraphCNN yields the highest accuracy for 6 categories, while our method is the best for the other 8 categories. The OA given by SPGraphCNN is 98.82%, which is clearly larger than those given by the other five methods used for comparison. By comparison, our method provides the highest OA, 99.20%, and the largest Kappa coefficient, 0.9913, which are somewhat better than those given by SPGraphCNN.
In addition, as shown in Table 4 and Figure 12c, for the classification of dataset 2 under the condition of 1% labeled samples, the proposed method also obtains the highest classification accuracy for most categories. The OA value given by our method is 99.17%, which is clearly greater than those of the two methods used for comparison, i.e., 96.93% given by RCV-CNN1 and 96.97% given by RCV-CNN2. Similarly, our method gives the largest Kappa coefficient, 0.9903, among the compared methods.
From the above results, it can be concluded that the proposed co-training method can address the problem of PolSAR image classification with limited labeled samples very well, and has obvious advantages over the state-of-the-art semi-supervised classification approach used for PolSAR images.
Conclusions
In this paper, to improve PolSAR image classification with limited labeled samples, a novel semi-supervised classification method has been proposed that integrates the complementary advantages of CNN and SVM in a co-training framework. In our method, there are two base classifiers, i.e., an eight-layer CNN with a ResNet architecture and an SVM with a radial basis function (RBF) kernel. It has been shown that the two base classifiers can promote each other very well in the co-training framework, making the method much more powerful and able to address the problem of limited labeled samples. We performed many experiments on the L-band and C-band PolSAR image datasets acquired by the AIRSAR and GaoFen-3 systems. The experimental results demonstrate that the proposed method can effectively integrate the complementary advantages of SVM and CNN, providing overall classification accuracies of more than 97%, 96% and 93% with limited labeled samples (10 samples per class) for the above three images, respectively, which values are superior to those of the self-training SVM, the self-training CNN and the other state-of-the-art semi-supervised classification methods used for PolSAR images when few labeled samples are provided.
It should be noted that the framework of our co-training method is not limited to PolSAR image classification, but extends to the general problem of image classification under the condition of limited labeled samples. It can be expected that, for the classification of other types of images, our method is also applicable in principle when the labeled samples are limited, although the feature extraction step will differ. In the future, we will carry out more experiments and analyses on our method, applying it to other image classification tasks, such as hyperspectral and infrared image classification.
Editorial: The role of alcohol in modifying behavior
© 2023 Peters, Trabace and Di Giovanni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Editorial: The role of alcohol in modifying behavior
This special topic presents experimental work on the effects of alcohol (ethanol) on the brain, and how these effects impact behavior across multiple domains. The World Health Organization (WHO) estimates that 2.3 billion people regularly consume alcohol, making alcohol one of the most widely used drugs in human society (WHO, 2022). Alcohol consumption has both acute and long-term effects on behavior. Whereas most of the acute effects are rewarding, if higher doses are consumed, negative effects including motor and cognitive impairment can occur and can be lasting. Although recreational use of alcohol can enhance sociability, excessive repeated alcohol use can lead to alcohol use disorder (AUD) and physical dependence associated with a dangerous withdrawal syndrome, such as delirium tremens. Substance use disorders are characterized by frequent comorbidity with the use of other substances, and alcohol is commonly co-used with other substances, including psychostimulants like cocaine, as well as opioids (Bobashev et al., 2018;Cicero et al., 2020). Comorbid use of substances increases the risk of adverse outcomes and relapse (Wang et al., 2017), and this complexity of the human condition requires the use of preclinical animal models to tease apart the complex effects of alcohol on behavior (Crummy et al., 2020).
Animal models play a crucial role in understanding the effects of alcohol on the brain and behavior (Mineur et al., 2022; Valyear et al., 2023). Studies have shown that a range of brain structures are involved in alcohol use, including the amygdala, nucleus accumbens, and insula. Targeted stimulation and suppression of these areas of the brain is able to alter alcohol consumption. For instance, Haaranen et al. used a chemogenetic approach to alter neuronal activity in these individual brain regions, and in the specific insula outputs to the nucleus accumbens and the basolateral and central subregions of the amygdala, to determine the functional role of this network in alcohol consumption in alcohol-preferring Alko Alcohol (AA) rats. This type of sophisticated circuit-level analysis is necessary to understand how neural networks function to control alcohol consumption, in order to design targeted treatment strategies aimed at altering network function. The previous study found that activating the insula projections to the amygdala or nucleus accumbens increased alcohol consumption, consistent with prior work demonstrating the insula is a critical driver of alcohol relapse (Campbell et al., 2019). Emerging potential new medications for treating AUD, like Glucagon-Like Peptide 1 (GLP-1), may work in part by decreasing cue-associated, craving-related increases in insula activity, as systematically reviewed by Eren-Yazicioglu et al. in this special edition.
Alcohol use can be triggered by numerous factors, and stress is one of the most potent triggers for craving and relapse (Wemm et al., 2019). Interestingly, Deal et al. found that both social and non-social stressors enhance the release of catecholamines in the basolateral amygdala, and acute alcohol blunts this stress response, perhaps providing a brain-based rationale for the self-medication hypothesis (Ayer et al., 2010). This adds to a growing body of literature implicating the amygdala as an important brain site by which stress can alter alcohol seeking and use (Mineur et al., 2022). Furthermore, while the health benefits of daily exercise cannot be denied, the study by Buhr et al. suggests that exercise does not alter alcohol's effects on serotonin- and dopamine-related turnover in the striatum and brain stem. However, alcohol drinking altered neurochemical correlates of exercise in the hypothalamus, a key component of the brain networks responsible for maintaining physiological homeostasis. As demonstrated in the study by Starski et al., repeated and prolonged alcohol use can lead to allostasis and further exacerbate behavioral engagement with alcohol. Furthermore, the behavioral and brain response to stress is sexually dimorphic, and the brain response to stress and drug cues predicts subsequent relapse (Smith et al., 2023).
Genetic factors, as well as age and sex, can influence alcohol use and behavioral phenotypes associated with alcohol use. Alcohol drinking often begins in adolescence (Abela et al., 2023), and the study by Corongiu et al. demonstrates that adolescents typically drink more alcohol than adults, but that this precise phenotype interacts with genetic background. Moreover, sex can influence alcohol use and behavior, with AUD being diagnosed more often in men than women. In line with this, Bryant et al. found that male mice were more sensitive to the motivating effects of alcohol, and Landin and Chandler report that male rats exposed to alcohol during adolescence were more prone to have greater behavioral responses to threat in adulthood, although females were already predisposed to this phenotype, regardless of alcohol history. To make matters more complex, the neurobiological hallmarks of adolescent alcohol exposure may be sexually dimorphic. For example, Asarch et al. found that in male rats, mesolimbic dopamine peaks during adolescence then declines and stabilizes in adulthood, but adolescent alcohol exposure prolongs the elevated dopamine levels into adulthood, an "arrested development" phenotype not observed in female rats, whose dopamine levels are stable throughout adolescence and adulthood. On a more positive note, Rodd et al. report that negative allosteric modulators of the nicotinic α7 receptors may hold promise as prophylactics against the deleterious effects of binge alcohol use during adolescence.
Overall, the contributions to this special topic have broadened our understanding of how, where, and when alcohol acts in the brain to promote continued alcohol use, which in some individuals can lead to full blown AUD. The extensive comorbid use of alcohol with other drugs is also of growing concern and calls for novel preclinical models of polydrug use to determine the neurobiological consequences of comorbid alcohol use with other substances and to effectively screen emerging therapeutics. Continued research in this area is needed in order to develop novel treatment interventions, including prophylactics, medications, and natural remedies.
Author contributions
JP wrote the original editorial draft. All authors edited and revised the editorial.
Funding
This work was funded by DA045836, DA056365, and DA056660 to JP.
Conflict of interest
JP is a consultant for Delix Therapeutics, Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
A photographic protocol for image sampling on the evaluation of color similarity between animal and substrate
Camouflage has long intrigued scientists, who have been trying to quantify animal coloration. Recent advances in the field of computing allow the sampling and analysis of data from digital photographs. Thus, new quantitative studies on animal coloration have used these resources due to their practicality and accuracy. However, new methods and resources also bring new questions and challenges, in which improvement opportunities can be identified. In this study, we proposed a new sampling method and tested the influence of photographic parameters on a recent method for color similarity quantification between animal and substrate. Therefore, 135 digital photographs were obtained under controlled conditions and comparative tests were performed to test the proposed sampling method and differences caused by different photographic parameters. The results showed significant differences between the colors sampled with different methods and between photographs obtained under different photographic parameters.
Introduction
Animal coloration has always been of scientific interest, due to both its beauty and its biological function. What is now known as cryptic coloration has interested many scientists since the eighteenth century. Darwin's grandfather, Erasmus Darwin, wrote: "The colors of many animals seem adapted to their purposes of concealing themselves, either to avoid danger or to spring upon their prey" [1]. Since then, many studies have been carried out on animal coloration, mainly trying to explain the mechanisms of function and to quantify concealment coloration. Recent work on camouflage and the quantification of animal coloration has used digital photography and image analysis software, due to their practicality and analytical power [2][3][4]. However, some studies also involve the use of expensive proprietary software, which can make this approach unaffordable. The work of Samia (2015) developed an affordable way to measure color similarity between animal and substrate by extracting color frequencies from digital images through the ImageJ [5] and Color Inspector 3D [6] software and analyzing them with an R [7] coded mathematical function named the Color Overlapping Index (COI). Based on this method, we identified an opportunity to improve color frequency sampling. The published method states that any selection tool available in the ImageJ software can be used to select the areas of the animal's body and substrate to be compared. However, fixed-shape selection tools, like the rectangular or circular ones, do not allow the selection of irregular areas, such as animal bodies and their substrates, making it impossible to fully sample the colors present in animal and substrate. Using the polygonal selection tool, which can select areas of any size and shape, we developed a protocoled method which fully samples the colors of both the animal and substrate areas. In addition to the sampling issue, we also considered another important factor, the photographic variables, such as the use of artificial light and the angle from which one photographs. Therefore, the present work tests the consistency of the method in relation to the variables of light source, white balance and shooting angle, and proposes a new protocoled method for photographing and color sampling.
Materials and methods
In order to test the influence of different variables of the color sampling method and to study the photographic variables, we obtained images of anuran amphibians, of which 17 individuals were of the species Rhinella ornata Spix, 1824 and 10 individuals of the species Thoropa miliaris Spix, 1824. The specimens were preserved in alcohol and belonged to the collection of the Laboratório de Herpetologia of the Departamento de Biologia Animal of the Universidade Federal Rural do Rio de Janeiro. All the photographs were taken under controlled conditions of light exposure, angle and white balance. The photographs were taken with a Canon EOS 7D Mark II camera and an external Canon Speedlite 430EX II flash, using ISO 400 and an f/8 aperture. Shutter speed, white balance and flash power were kept in auto mode. Since white balance is a crucial step in the photographic pipeline, ensuring the proper rendition of photographs by eliminating color casts created by different illuminations [8], we chose to standardize the white balance in post-production, using as a parameter a neutral color common to all the photographs. A simulated substrate was crafted to be as close as possible to the natural environment of the species under study. The sampling areas were selected using the polygonal selection tool, which allows delineating areas of any size and shape. The selection was made to follow the contour of the animal's body precisely, excluding areas of shade or any element that was not part of the animal's body (Figure 1). The substrate area of each photograph was divided into four subareas, not necessarily equivalent in size. This method was chosen to facilitate the selection of irregular areas with obstacles or artifacts to be excluded. In addition, this approach makes it possible to detect the influence of different parts of the substrate, allowing us to identify the parts of the substrate that are most similar to the animal. It was also defined that the contour lines of the substrate area were to be a few millimeters away from the contour of the animal's body area, to avoid overlapping the samples (Figure 2). To test the consistency between sampling methods, the photographs were also sampled with the rectangular tool, as described in Samia (2015). Aiming to test each variable of the photograph, each individual was photographed in three different situations: at a 90° angle under natural light, at a 90° angle under artificial light and at a 45° angle under artificial light. The white balance of all photographs was standardized in post-production, using the Photoshop Lightroom software (Adobe Systems Inc., USA).
The photographs taken at a 90° angle under artificial light were designated as the control group. Artificial light was chosen as the control because it ensures that all photographs are taken with light of the same color. Four test groups were then formed according to their specific parameters: photographs taken under natural light; photographs whose white balance was not standardized; photographs taken at a 45° angle; and photographs sampled with a fixed-shape selection tool, totaling 135 photographs. The color information of all selected areas, from both the animal's body and the substrate, was then obtained through Color Inspector 3D and exported in the form of data tables. The data were then imported into the R platform and analyzed with the COI function, whose results vary between 0% (no color similarity) and 100% (full color similarity). Student's t-test at a significance level of 5% was used to compare the test groups to the control group.
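The COI itself is an R-coded function from Samia (2015), whose exact implementation is not reproduced here. As a rough illustration of the underlying idea, comparing the relative frequencies of colors shared by animal and substrate, the following Python sketch computes a histogram-intersection style overlap between two color-frequency tables such as those exported from Color Inspector 3D; the function name, the toy data and the intersection formula are our assumptions, not the published COI code.

```python
def color_overlap(animal_freq, substrate_freq):
    """Overlap (0-100%) between two color-frequency tables, given as dicts that map
    a color (e.g., an (R, G, B) tuple) to its pixel count in the sampled area."""
    total_a = sum(animal_freq.values())
    total_s = sum(substrate_freq.values())
    shared = set(animal_freq) & set(substrate_freq)
    overlap = sum(min(animal_freq[c] / total_a, substrate_freq[c] / total_s) for c in shared)
    return 100.0 * overlap

# Example with toy frequency tables (hypothetical colors and counts).
animal = {(120, 100, 60): 500, (90, 80, 50): 300, (60, 60, 40): 200}
substrate = {(120, 100, 60): 400, (60, 60, 40): 900, (30, 30, 20): 700}
print(round(color_overlap(animal, substrate), 1))   # 40.0
```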
Results
The COI function analysis generated five results, four for polygonal sampling and one for rectangular sampling. The mean of the four values generated by the polygonal sampling and the value generated by the rectangular sampling, were analyzed through student's t-test and plotted in dispersion graphs (figures 3 and 4).
Discussion
The results demonstrated that the color sampling method employed significantly altered the result of the Color Overlapping Index. This is probably due to the fact that the polygonal method sampled the entire photograph, regarding all animal and substrate colors, while fixed-shape methods, such as rectangular sampling, neglected part of the colors present in the animal and in the substrate. Since the COI function considers the frequency of the colors shared by animal and substrate, a full sampling of these areas should be considered.
The results showed that the colors shared by animal and substrate varied unevenly under natural and artificial light. This makes photographs taken under different illuminations incomparable with each other. Light variations also occurred in photographs taken under natural light at different times of the day, as the color of natural light changes according to the time and weather at the moment the photograph is taken. Therefore, the use of artificial light is the only way to ensure that all photographs are made under the same light condition, ensuring control over this variable, and thus its use must be considered in order to guarantee the comparability of the photographs when using the COI function.
The results demonstrated that the standardization of white balance in photographic post-production affects the color of the animal and the substrate unevenly, making photographs with different white balance settings incomparable. Thus, the standardization of white balance should be considered to minimize the influence of the equipment in-built white balance algorithms when using the COI function.
The results demonstrated that the angle from which the photograph is taken can influence the color similarity between the animal and the substrate, depending on the color homogeneity and the body plan of the animal. Since the COI function weights the relative frequencies of each color shared by animal and substrate, an animal with larger flanks may have its results altered by the angle of the photograph.
However, in an animal with a flat body, because the flank area is small in relation to the rest of the body, the influence of the angle from which the photograph is taken is reduced. Therefore, the angle from which the animals are photographed should be determined considering their body plan and the study objective. For example, studies about predation of amphibians whose main predators are birds and snakes, with high and low viewing angles respectively, should take these angles as the most suitable for photographing. Data from all tests showed that the color sampling method, the source of light, the photographic equipment settings and the photographed angle have the potential to alter the color relation between the animal and the substrate perceived through the photograph. These factors suggest that applications of this method, which aim to measure coloration similarity through photography, should follow a pre-established protocol, including the following recommendations:
4. Closer diaphragm apertures ensure the focus on both animal and substrate, avoiding unfocused parts of the photograph.
5. Shooting angles should be chosen based on the body plan of the object of study. This aims to sample the body portion which is seen most often in its natural habitat.
6. Photographs should be taken in raw format and then converted to a format capable of being analyzed. Taking photographs in raw format provides more color and texture information in the image files.
7. The areas of the photograph should be sampled using the polygonal method described in this work. The polygonal method as described samples all the colors of the animal and substrate areas, providing a more accurate result.
Conclusion
The COI function method is an affordable and accurate way to measure color similarity. However, the color sampling method, the photographic equipment configuration, the light source and the photographed angle can significantly alter the results of the Color Overlapping Index. Thus, we conclude that a pre-established protocol, like the one proposed above, should be followed, making photographs comparable with each other and ensuring the most accurate results possible.
Technical practices used by information literacy and media information literacy services to enable academic libraries to handle the COVID-19 pandemic
This study analyses the techniques and procedures that were developed and the changes that took place in the National Autonomous University of Mexico (UNAM) and the Autonomous University of Puebla (BUAP), both in Mexico, and the University of Minnesota Duluth (UMD), in the United States of America. To face the crisis of the COVID-19 pandemic, librarians in these institutions improved their Information Literacy (IL) and Media Information Literacy (MIL) programmes.
Design/methodology/approach: This study has a mixed methodology with a comparative analysis. For this purpose, data show the universities' contexts: the communities of students, teachers, researchers, and librarians, and the e-learning strategies of IL and MIL programmes.
Findings: As part of the results of the crowdsourcing collaboration between the UMD, UNAM and BUAP, the study shows the different online learning communities and their innovations.
Originality: Although there is theoretical knowledge about IL and MIL in Mexican universities and the University of Minnesota Duluth, the e-learning strategies used by their librarians in this document sought to provide technical solutions and other options for a virtual work scheme that responded to the specific problems presented by COVID-19. In this case, the framework for creating online library services was designed by their librarians for their communities in the context of the current crisis, even when online services had already been established for more than ten years.
Research limitations/implications: The technological infrastructure, the professionalisation of the library staff and a lack of knowledge of the new virtual teaching-learning needs.
Introduction
The COVID-19 pandemic crisis forced libraries to close spaces and provide only e-resources and other library services without an established plan for exclusively online operations. Even when the online services existed previously, the increased consumption of web content and the demand for innovations in communication processes have changed the way librarians will plan their traditional services in the near future (Coghill & Sewell, 2020).
In recent years, most universities in Mexico have applied Information Literacy (IL) and Media Information Literacy (MIL) to support online programmes. According to the Association of College and Research Libraries, IL 'is the set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning' (ACRL, 2015).
On the other hand, MIL has become essential because it offers 'a basis for enhancing access to information and knowledge, freedom of expression, and quality education. It describes knowledge, skills, and attitudes that are needed to value the functions of media and other information providers, including those on the Internet, in societies and to find, evaluate and produce information and media content' (Grizzle et al., 2013, p. 197).
In fact, the study 'Radiografía sobre la difusión de fake news en México' ('X-ray of the spread of fake news in Mexico') conducted by the Autonomous National University of Mexico (UNAM) found that Mexico is the second-largest producer of fake news after Turkey (Hurtado Razo, 2020). In this context, the role of librarians as Information Literacy (IL) and Media Information Literacy (MIL) instructors has been highlighted.
Although IL and MIL are necessary for academia, this topic has represented a challenge in relation to funding, training, hiring and retirements. In this context, the creation of partnerships between various institutions has begun to yield productive projects; an example of this is the MIL Mexico network.
This international organisation is integrated by UNESCO, DW Akademie, the National Electoral Institute (INE), the Mexican Institute of Radio (IMER), the Autonomous University of Nuevo León (UANL), the Veracruzana University (UV), Social TIC and Tomato Valley, Mentoralia and Technovation Girls (UNESCO, 2021).
This network is focused on MIL, but one of the aspects that stands out among its objectives, along with a focus on improving IL in Mexican universities, is the intention to address the lack of techniques and good practices in Mexican e-teaching programmes. In fact, this is one of the main factors that motivated the demand for IL and MIL in universities during the COVID-19 pandemic (De Paor & Heravi, 2020).
In 2020, UMD, BUAP and UNAM started a 'crowdsourcing' project to explore the strategies used by their librarians to implement and guide IL and MIL. At first this study explored only IL skills, but Mexican libraries had already established an important set of diverse activities taking place exclusively within social networks, so MIL became part of the plan to support e-learning and e-teaching with these communities. It is important to point out that this collaboration was possible due to an invitation from the Kathryn A. Martin Library (KAML) at the University of Minnesota Duluth.
IL instruction at the KAML is provided across all subjects and all levels of undergraduate and graduate education, with a high degree of attention given to 'College Writing', a required writing course for undergraduate students.
This teaching and learning process has a high level of specialisation because the course seeks to develop competencies in research and academic writing. The design of the course allows faculty members to organise their agenda and use library-provided modules that focus on evaluating an article, drafting a research plan, and reviewing academic drafts, which are offered as an instruction service through liaison librarian staff.
Academic crowdsourcing as part of the development of IL and MIL in Mexico
IL has been the focus of projects developed between higher education institutions in Mexico and other universities in the United States for more than thirty years. This partnership is more evident among the border states and as an example of this collaboration, the Autonomous University of Ciudad Juarez in Chihuahua was the first to publish the norms for IL in the country (Cortés & Lau, 2002).
Projects developed in IL prior to the pandemic raised questions about online teaching models, which became more evident when the COVID-19 crisis began. Some of those concerns relate to how IL and MIL services should be offered and what lessons should be planned to develop e-learning, e-teaching and MIL between Mexico and United States. In these circumstances, it was necessary to crowdsource ideas that would create strategies, processes, policies, and LibGuides related to the educational support of librarians in both synchronous and asynchronous modalities.
Crowdsourcing is defined as 'the process of leveraging public participation in or contributions to projects and activities' (Hedges & Dunn, 2018, p. 1). The authors realised that online courses and other educational and training programmes collaboratively developed by librarians and their communities of practice should be considered within an overarching framework with specific objectives for each institution.
In this context, the aim of this study was to analyse the strategies, processes, policies, and other activities in the authors' institutions (UMD, UNAM, BUAP) to identify strengths, similarities and changes experienced before, during, and in the months after quarantine.
From face-to-face IL to MIL
The first challenge that came up for discussion among Minnesota-México library users and staff was: how many changes have we had to make to deal with IL programmes since COVID-19? With the pandemic crisis, the associated impact on the economy and other factors affecting academic libraries, two important aspects have been highlighted: 1) Expert communities (libraries, faculty members, students, etc.) who are experienced in the use of an integral e-learning platform will be a vital resource for the development of online programmes within Mexican universities.
2) Although services need to be adapted to individual contexts, and the pandemic created adverse situations that required improvisation, it is clear that developing strategies for teaching IL and MIL in e-learning environments is a process that will need international collaboration between libraries and librarians, especially in the use of electronic resources and their copyrights. This international collaboration can expand opportunities to learn and develop in ways that build on the resources of each other's libraries, as well as the efforts and experiments of library associations or communities of practice in supporting student learning, extending access to resources, and adapting libraries during times of crisis. These international collaborations provide a chance to expand the development of best practices enriched by the diversity of experiences and conditions in each other's countries.
This collaboration depends significantly on internet access and the behaviour of internet users.
In the most recent survey of internet access in Mexico, the results show that 70.1% of the total population has access to the internet, but 91.5% of this population uses it for entertainment (INEGI, 2019); in contrast, 87% of the population in the United States has internet access at home, and all of those users also have access through mobile devices (World Bank, 2020).
In Mexico, the use of entertainment platforms (social networks, tv, radio, etc.) has become increasingly popular as a way to support e-learning content. For example, events recorded and published simultaneously on YouTube (BibliotecasUNAM, 2021) and Facebook (DGBSDI, 2021), have more than one thousand followers and their impact has been national and international.
The University of Minnesota Duluth (UMD) supports online e-learning programmes through integrated platform systems such as Canvas. Since 2014, UMD has been a member of the Unizin consortium of research institutions in the United States of America which represents more than 900,000 students in the country. The aim of this ecosystem is to create a shared framework between higher education institutions.
From this perspective, the future of international collaboration between Mexican and American academic libraries could be integrated across social media, entertainment platforms, and learning management systems. When considering this possibility with regard to online IL and MIL programmes, it is vital to take into account the five categories outlined by Zhu et al. (2020) in their study of 'social annotation' during e-learning. Before COVID-19, the role of librarians as teachers was controversial in terms of authority over academic programmes, but, as Campbello (2010, p. 91) outlines, a librarian, as an 'instructional consultant', should integrate the library program into the school curriculum and collaborate in the teaching-learning period and in the planning and implementation of curricular activities.
The COVID-19 crisis showed us how important it is for academic librarians, faculty members, and institutions to share know-how, good practices, impacts, and feedback from library services to develop the IL and MIL involved in the e-learning of academic programmes.
Contact and communication in the digital library as a comparative analysis between IL and MIL
The COVID-19 pandemic was the trigger that forced resources and services to adapt to a new reality with social distancing and no printed collections (Guo & Huang, 2021). The first obvious change in Mexican universities was the demand to create more online services, courses, and events to support the virtual agenda of the library systems and their institutions.
This study was conducted from October 2020 until March 2021 between the National Autonomous University of Mexico, University of Minnesota Duluth, and the Autonomous University of Puebla. Its methodology was qualitative, and its technique was descriptive. Data on the services, their techniques, tools, and experiences were provided by the authors, who were part of the group that designed, planned, and implemented the activities described. The two axes analysed were the IL and Media Literacy (ML) reports, as a list of actions, activities, resources, and services used to counteract isolation and efficiently supply existing resources to our communities.
As Table 1 shows, the websites of the libraries that formed the basis of this study were the main channel of communication between libraries and their communities before and after the pandemic. During the pandemic, UMD launched outreach efforts that included the 'Northeastern Minnesota COVID-19 Community Archive', a project that started as an example of emerging ways for creating, preserving, and sharing content about COVID-19 in the university. This idea was put into action by members of UMD campus: https://libguides.d.umn.edu/covid-19. They also hosted a webinar featuring a visiting professor talking about presidential election polling, and staff members organised an online 'unconference' that can be consulted on this website: https://lakesuperiorlibrariessymposium.com/.
In the cases of UNAM and BUAP, two axes were analysed as shown in Figure 1. IL and MIL were registered, comparing the actions, activities, resources, and services used to counteract isolation and ensure that existing resources reached our communities efficiently. The study had four main factors for analysis: learning, information resources, training, and library services. To obtain the data of the participant institutions, it was necessary to create a guide to analysis and explore the different elements and communication between them. Librarians in leadership roles conducted interviews over Zoom to obtain responses that identified the actions, activities and processes carried out during the pandemic for IL and MIL.
Results and explanations of how IL and MIL were used to face the COVID-19 pandemic
The UNAM, BUAP and UMD have different needs for information in relation to academic programmes with different educational levels, as Figure 2 shows. IL activities and services in the UNAM, UMD and BUAP libraries have a significant impact on the entire community, including the staff. Since COVID-19, digital resources have increased in number and have been used more heavily. Some of the service orientations that stood out in a positive way during the pandemic, and that we already see reflected in other institutions, are:
1. Creation of courses for teaching support
2. Researcher accompaniment
3. Co-participation in research development
4. Personalised assistance for focal groups
The IL service in its traditional form still supports meaningful learning within academic programmes in the university, and the design of any new model depends on the perspective of the library and the aims proposed in the institutional plans. One challenge was how to organise and preserve the material produced during the pandemic and store it on platforms or social networks. We must think about the new policies that will be generated due to the creation of this material that proved so important for these institutions.
Out of the meaningful learning resources that received the most positive feedback from the online communities, those that offer potential future growth opportunities are:
1. Creation of repositories that manage and preserve the learning resources created during the pandemic.
2. Didactic guides on the use of the digital library and other information resources.
3. Video tutorials.
4. Online workshops and MOOCs (Massive Open Online Courses) about different topics of interest to the community.
As Figure 3 shows, the IL analysis of UMD, UNAM and BUAP registers the main activities that were carried out following the massive library closures caused by the COVID-19 pandemic. The implementation of IL online services was not a new task for the librarians but coordinating the work behind these types of activities without being physically present and with limited access to libraries represented a new challenge. Figure 4 shows the priority actions during the total and partial closure of the facilities.
Figure 4: Priority actions within UNAM, UMD, and BUAP during library closures
The aim of IL as a library service is to meet the information needs of the community, but at the same time all activities were required to follow rules and guidelines authorised by the institutional plans. Figure 5 makes evident the level of specialisation of the Kathryn A. Martin Library (KAML) at the University of Minnesota compared with the library systems of the Mexican universities UNAM and BUAP. The pedagogical approach to teaching-learning in each university has different purposes, perspectives and tools, which are described in Figure 6. The pedagogical model based on competencies seeks to teach specific skills to students and, from this, to evaluate their competence to perform tasks related to their area of knowledge. The AUMI seeks to merge interactive applications with a playful approach to teaching IL, offering a creative perspective that helps to keep the students' attention and to assimilate knowledge with the help of games and challenges that can be posed individually or in groups under a cooperative learning approach.
Although it should be noted that IL activities were successfully carried out, the use of media also played an important role in the communication of content. MIL activities in UNAM and BUAP required a complex design, and the aim was to focus on the development of critical reading of digital content.
Since the COVID-19 pandemic stopped the in-person provision of all library services, the UNAM and BUAP used social networks and other popular channels to transmit their events and other specialised courses to the students, the staff, and the faculty members. As Figure 7 explains, large-scale social media broadcasting activities had more impact in the Mexican universities.
Conclusion
The COVID-19 crisis demonstrated that libraries had a significant impact on the teaching-learning process of distance students and teachers. It is important to emphasise that the use of digital resources and services does not replace the value of library work, as has been discussed in the past, even when this work was done from home. The response to the demands of the university community was optimal in all three universities, and the transition to virtual media demonstrated the ability to adapt to current circumstances.
Although the pandemic forced decision making, the libraries of the universities participating in the analysis did not experience a significant change in the work previously done in digital environments in Mexico, but new interactions were discovered within social networks and other unexpected communities such as families and people close to the students.
The use of international guidelines and government restrictions played an active role in the creation of contingency agendas within libraries and their universities. The total suspension of face-to-face classes created a new digital work environment for the staff of the Kathryn A. Martin Library.
In fact, the COVID-19 crisis and its restrictions opened the possibility to be more creative and flexible about the digital library, its services and collections. These innovations saw the creation of different guides and policies to support access to LibGuides, academic writing and library modules on Canvas, courses, online chat, and Zoom meetings with different groups with relevant expertise and liaison librarians.
The retirement of many librarians had a significant impact on the division of tasks, but the implementation of home-based work, the flexibility of work patterns and the schedules set by department leaders, helped the work team to avoid burnout. This collaboration helped UMD create new opportunities for inclusion and cultural awareness in the library by expanding and internationalising perspectives and partnerships in resource sharing, IL, and library outreach. Furthermore, this exchange of information highlights the ways in which the organisations involved can learn from each other's use of platforms, social media and/or learning management systems, to connect with students and to leverage the content created for those platforms to benefit the entire campus community.
In the case of UNAM, the online reference service played a leading role in the COVID-19 crisis, because there was a long history of experience in its implementation. The use of free software such as Google Meetings, Zoom and social networks was indispensable for the adaptation of traditional services to digital services, and the use of videoconferencing platforms was necessary to humanise contact with users and reduce the dangers of heat stroke and confinement. For the library system of the BUAP, the transmission of webinars and the use of social networks was necessary due to the limitations of its staff regarding the use of specialised software. It is interesting to mention that in the case of BUAP, professors and students who could not consult printed books were forced to redesign their curricular plans based on the contents of the digital library. The sessions were developed in collaboration with database providers and other publishers attached to its digital library. In addition to the measures above, the use of applications such as WhatsApp and Instagram, among other options, established new means of communication between their learning communities and their librarians.
The analysis of these three academic libraries showed that regardless of the budget available in each of them, the preparation and experience of the librarian is a key element in the design and innovation of their services. The person operating these systems and their experience in the use, management, and protection of data has considerable influence.
The exchange of ideas, techniques and experiences as a part of this international cooperation resulted in the prospective design and implementation of other online services such as academic writing modules in other languages and topics related to inclusion, anti-racism and international cooperation, as part of a strengthening of the values that libraries teach their community.
With the gradual return of the university community, libraries and their services are being planned in a mixed format (in person and distance/online). The new normality has demonstrated that the advantages of this work system make it a viable option that reduces risks and establishes new learning horizons, especially among librarian communities.
Baropodometric comparison of orthopedic footwear to assess the effectiveness of pairs of orthopedic shoes for reducing the forefoot pressure
Objectives: This study aimed to evaluate the distribution of foot pressure with the most frequently used orthopedic shoes and demonstrate the effect of offloading philosophy on the pressure distributions of rocker bottom or heel support shoes applied unilaterally or bilaterally.
Materials and methods: A total of seven shoe designs (three bilateral and four unilateral), with sensors included in the insole, were tested by the same subject in a standard acquisition protocol. Two of the unilateral and one of the bilateral shoes had heel support, while the others had a rocker bottom design. A descriptive analysis was performed for each shoe and compared to a reference value obtained from a standard shoe.
Results: Shoes with an offloading heel resulted in a greater reduction in pressure on the forefoot than other models. All other shoes increased pressure on the first metatarsophalangeal joint. Heel offloading models performed the best in forefoot offloading, and bilateral heel offloading shoes performed the best in first metatarsal offloading, with the highest scores of 83% and 82%, respectively.
Conclusion: This study showed that orthopedic shoes sold in pairs could reduce pressure on the forefoot at a level comparable to unilateral shoes. It supports their use to limit the disadvantages of single orthopedic shoes, such as limb length discrepancies.
Osteosynthesis techniques have resulted in earlier postoperative weight bearing and early discharge from the hospital. [4,5] Although there exist numerous studies that evaluate the distribution of plantar pressure with different orthopedic shoes [1,6-13] and its role in the diabetic foot, [14,15] to our knowledge, none of them compare pairs of orthopedic footwear to other single orthopedic shoes. Sold in pairs, these shoes have the capacity to limit limb length differences. Overcoming the leg length discrepancy might increase the comfort and speed of walking while decreasing postoperative falling rates. [16-18] However, prescription of bilateral shoes is not a part of daily practice, probably due to economic concerns and the refusal of reimbursement by insurers for the healthy side.
In 2019, Dearden et al. [19] performed a prospective study and did not find any difference in the clinical and radiological progress following surgery of the forefoot between heel offloading shoes and rocker bottom shoes. However, they reported greater comfort and patient satisfaction and better stability with full rocker bottom soled shoes. A reasonable explanation may be the theoretical advantage of bilateral shoes, which may help patients adapt to a relatively new postsurgical condition.
The main goal of this study was to evaluate the distribution of foot pressure with the most frequently used orthopedic shoes and demonstrate the effect of offloading philosophy on the pressure distributions of rocker bottom or heel support shoes applied unilaterally or bilaterally. With the use of unilateral postoperative shoes, there is presumed to be a difference between the two sides in terms of elasticity of footwear, comfort, and walking patterns, although the patients wear a suitable shoe on the healthy side. We hypothesized that the reduction in plantar pressure with orthopedic footwear sold in pairs would be superior to one-sided shoes that are commercially available.
MATERIALS AND METHODS
A protocol was designed for data collection. The left foot of a 25-year-old female with no past medical or surgical history was bandaged as if she had undergone forefoot surgery. The subject was 1.64 meters tall, weighed 52 kg, and wore a size 38 shoe. The reference shoe was a comfortable, standard sports shoe. When unilateral orthopedic footwear was being tested on the left foot, the reference shoe was worn on the right foot without a sock.
The subject walked 10 m in a straight line in each tested shoe; this was performed twice. The first 10 m allowed the subject to become familiar with walking in the shoe, and the second 10 m was used to register the data. Walking speed was natural and unguided. Recording of plantar pressure patterns was obtained with an insole (Moticon Science, Munich, Germany; Figure 1). This 3 mm insole was bilaterally placed. Walking was recorded, and an automatic video synchronization program associated the measurements of plantar pressure with the gait cycle. The amount of pressure exerted was quantitatively measured (N/cm²) and transcribed to a color code. The same insoles were used in all of the tested shoes.
Statistical analysis
All analyses were performed using the IBM SPSS version 23.0 software (IBM Corp., Armonk, NY, USA). A descriptive analysis of the measured plantar pressure values was performed. The number of steps, walking speed in terms of strides per minute, mean forefoot pressures (N/cm²), and mean first metatarsophalangeal (MTP1) pressures (N/cm²) were measured and compared. The insole contained seven forefoot and six hindfoot sensors, a total of 13 pressure sensors, together with a three-dimensional acceleration sensor with an embedded ANT radio source, a thermal sensor, a flash memory, and a power supply. The S2 and S3 sensors represented the contact area of the MTP1, and the mean values of these sensors were utilized to express the corresponding parameter.
The insole was capable of wireless transfer with the capacity of providing a visual data interface (Figures 1 and 2).
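As an illustrative sketch only (not the authors' actual processing pipeline), the offloading percentages reported in the Results could be derived from the sensor means along these lines; the sensor labels follow the text, but the numeric values are invented examples:

```python
# Hedged sketch: MTP1 pressure and offloading relative to a reference shoe.

def mtp1_pressure(sensor_means):
    """Mean pressure over the S2 and S3 sensors covering the MTP1 area (N/cm^2)."""
    return (sensor_means["S2"] + sensor_means["S3"]) / 2.0

def offloading_percent(test_pressure, reference_pressure):
    """Positive values indicate a pressure reduction versus the reference shoe."""
    return 100.0 * (reference_pressure - test_pressure) / reference_pressure

reference = {"forefoot": 12.0, "S2": 10.0, "S3": 9.0}        # example values only
heel_offloading_shoe = {"forefoot": 2.0, "S2": 1.9, "S3": 1.7}

print(offloading_percent(heel_offloading_shoe["forefoot"], reference["forefoot"]))
print(offloading_percent(mtp1_pressure(heel_offloading_shoe), mtp1_pressure(reference)))
```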
RESULTS
All measured values of the plantar pressures are presented in Table II and Figure 2. The paired orthopedic footwear produced a reduction in plantar pressure similar to that of the unilateral orthopedic shoes, which was contrary to our initial hypothesis. Walking speed was broadly similar across the tested shoes, although walking was slower in shoes with a forefoot offloading heel or when a single shoe was worn. Shoes with an offloading heel, namely the Barouk second-generation, Gemini Alpha, and Orthowedge, resulted in the greatest reduction in pressure on the forefoot compared to the reference shoe, with 83%, 81%, and 55%, respectively. Among the shoes with a full rocker bottom sole, the Gemini shoe reduced pressure on the forefoot by 22%, which surpassed the values produced by the Podalux, CHV, and Halten shoes. Figure 3 presents the reduction in forefoot pressure with each shoe.
When the mean pressure values of the S2 and S3 sensors, representing the pressures on the MTP1, were evaluated separately, the heel offloading property of the Gemini Alpha, Barouk second-generation, and Orthowedge shoes resulted in reductions of 82%, 80%, and 73%, respectively, compared to the reference shoe. In addition, all of the shoes with full rocker bottom soles increased pressure on the MTP1 in different proportions. The Gemini and Podalux shoes resulted in pressure increases of 16% and 14%, respectively. More remarkable increases in the MTP1 pressure were noted with the CHV and Halten shoes, by 31% and 43%, respectively. Figure 4 presents measurements of the pressure on the MTP1 with different shoes.
DISCUSSION
The knowledge on the characteristics of postoperative shoes continues to accumulate. As expected, orthopedic shoes sold in pairs may limit the problems of limb length discrepancies created by wearing a single orthopedic shoe. [16-18,20,21] There is also existing evidence for the use of heeled shoes, which provide significantly more forefoot offloading (between 2.5 and 4 times more depending on the design) compared to rocker bottom soles. [22] The findings of this study support the use of heel offloading shoes to acquire the highest degree of forefoot offloading, although rocker bottom shoes provided more speed in pairs.
Back and hip pain are the most frequently reported effects of the limb length discrepancies created by unequal orthopedic shoes. [18] Michalik et al. [16] compared a rocker bottom sole shoe to an offloading heel shoe and found a statistically significant difference in the oblique angle of the pelvis and the lateral deviation of the spine compared to a regular shoe. Furthermore, they found hardly any difference in gait between the two orthopedic sole designs that were evaluated. They concluded that, whatever the design of the shoe, a compensating shoe on the contralateral side should be used to transfer the pressure to the lateral side of the operated heel. [16] Pairs of orthopedic shoes may not only be beneficial for proper transfer of load but may also decrease the falling risk caused by unbalanced shoes in elderly subjects. [23] In this study, heeled shoes resulted in a remarkable reduction in the MTP1 pressure in a range of 73% to 82%. In contrast, shoes with rocker bottom soles increased the forefoot pressure. Comparable results were reported by Van Schie et al. [22] in 14 of the 17 patients evaluated using the Darco® VFE (DARCO (Europe) GmbH, Raisting, Germany) shoe. Schuh et al. [24] compared the characteristics of five shoes on the market and five prototypes to a reference shoe and found similar results. They attributed the increased torsional force on the hallux to the excessive stiffness of the soles and assumed that a combination of excessive rigidity and an exceedingly posterior rocking line could have created this paradoxical effect. They concluded that the quality of orthopedic footwear was based on a subtle balance of the rigidity of the sole, the rolling characteristics, the rocking line, and shoe comfort. Moreover, they reported a need for a proper correlation between the length and the width (length-width ratio) to ensure patient comfort. There were certain differences in the quantitative pressure values (N/cm²) between the study of Schuh et al. [24] and the present study. This difference was probably due to the system of measurement and calibration. [25] The measurement system utilized in our study was validated by numerous studies. [26-28] To our knowledge, this is the first study to evaluate the distribution of plantar pressures in orthopedic shoes sold in pairs. [29] The main limitation of this study is the evaluation of only a single healthy subject. The subject on whom the tests were performed was healthy and had no problem with her foot or limping. We did not report the measurements in the healthy foot, as this was not pertinent to our analysis. Second, the speed of walking was not guided, which has been reported to modify the plantar pressure. [30] As the assessment of walking speed is not easy over a short distance, we assumed that the speed was nearly constant for 10 m and that any variations and their influence on the pressure measurements were negligible. Finally, postoperative bandaging following foot surgery varies markedly from one surgeon to another and may significantly modify the distribution of the plantar pressure.
In conclusion, the reduction of pressure on the forefoot was significantly greater with a forefoot offloading heel support model compared to shoes with rocker bottom soles, which conversely increased the pressure on the first metatarsophalangeal joint. Paired heel supporting models may be considered as an alternative option during the early postoperative period to obtain significant forefoot offloading and comfort.
Data Sharing Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Author Contributions: Writer, reviewer, analysis and interpretation: H.B.S.; Idea, concept, design of the study and supervisor: A.H.; Idea, concept, data collection and writer: R.L.
Conflict of Interest:
The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.
Funding:
The authors received no financial support for the research and/or authorship of this article.
An audiovisual political speech analysis incorporating eye-tracking and perception data
We investigate the influence of audiovisual features on the perception of speaking style and performance of politicians, utilizing a large publicly available dataset of German parliament recordings. We conduct a human perception experiment involving eye-tracker data to evaluate human ratings as well as behavior in two separate conditions, i.e. audiovisual and video only. The ratings are evaluated on a five dimensional scale comprising measures of insecurity, monotony, expressiveness, persuasiveness, and overall performance. Further, they are statistically analyzed and put into context in a multimodal feature analysis, involving measures of prosody, voice quality and motion energy. The analysis reveals several statistically significant features, such as pause timing, voice quality measures and motion energy, that highly positively or negatively correlate with certain human ratings of speaking style. Additionally, we compare the gaze behavior of the human subjects to evaluate saliency regions in the multimodal and visual only conditions. The eye-tracking analysis reveals significant changes in the gaze behavior of the human subjects; participants reduce their focus of attention in the audiovisual condition mainly to the region of the face of the politician and scan the upper body, including hands and arms, in the video only condition.
Introduction
Humans use a variety of apparent communicative features in order to judge social aspects of behavior. Either in direct interactions or during observation in passive roles observers quickly rate other people's personality and trustworthiness utilizing verbal and non-verbal cues (Argyle, 1975;Grammer et al., 2002). For example, in intergroup communication speakers' attitudes are signaled verbally and nonverbally (Gallois and Callan, 1988), while the leadership and affective strength of an actor are judged on the basis of non-verbal cues in situations of listening to announcements and speech (Albright et al., 1988). It has been demonstrated that motion cues are a rich source of communicating nonverbal information which is perceived and reliably interpreted by observers in a context-dependent fashion (Koppenheimer and Grammer, 2010;Grammer et al., 2002). Such judgements can be accomplished even with impoverished visual motion signals, as in point-light displays, still allowing the perception of emotion cues in interpersonal dialogue situations (Clarke et al., 2005). More recently, it has been shown that observers selectively sample information of motion cues in point-light displays using eye-movements with gaze patterns depending on the particular task (Saunders et al., 2010). Taken together, this demonstrates that active communicators and listeners make use of various cues to encode and decode communicative signals and actively search for the presence of specific hints for socially relevant stimuli. While in the latter reported eye-movement task the analysis focuses on one modality only, it remains unclear whether and how specific eye-movement patterns also vary when multimodal information provides verbal and non-verbal signals. In this study, we present details of an analysis of audio and visual factors and features of political speaking styles that correlate with human perceptual evaluations. We produced several audiovisual features of political speeches from little-known speakers of the German parliament and performed a statistical analysis of eye-tracking data and perceptual ratings from seven naive subjects on this data. We compared audiovisual features related to the perception ratings and analyzed the gaze behavior in both video-only and multimodal conditions (see Section 2.).
The audio features comprise prosodic parameters such as articulation rate, pitch range, voice quality parameters, intensity measures, and speech timing that were subject of the analysis in related work (Rosenberg and Hirschberg, 2005;Strangert and Gustafson, 2008), except for the voice quality parameters. As a basic but nevertheless meaningful visual feature the relative motion energy contained within a sequence of a speaker was used (much in the spirit to consider movement quality analysis as suggested by (Grammer et al., 2002)). Unlike the approach of (Koppenheimer and Grammer, 2010), we refrained from using complex high level visual features, such as the characteristics of an estimated geometric body model or the trajectory of the hand, since most of them are difficult to obtain under unrestricted realistic conditions. Gaze behavior was analyzed using the relative time a subject was fixating a specific body part (in this case the face) of a speaker as an indicator on the influence of that body part on the perception of a speaker's qualities.
The remainder of this paper is organized as follows: in Section 2. the data and the experimental setup are introduced. Section 3. reports key findings in the various statistical tests and Section 4. summarizes the results and presents future work. The paper concludes with a discussion of potential real-world applications of this work.
Data and Perception Experiment
The stimuli for the perception study were taken from three individual plenary sessions of the German parliament (i.e. the earthquake in Japan (March 17th 2011), adjustments within the organization of the German armed forces (May 27th 2011), and the plagiarism scandal of the defense minister (February 23rd 2011)). We chose 40 sequences by eight different, rather little-known speakers (four female, four male) of 10-20 seconds in length (average: 16.68 s, variance: 4.35 s). Each speaker is represented in five different sequences (exemplary pictures are shown in Fig. 1). Two separate experimental runs are conducted in two sessions. For each subject one randomly chosen half of the stimuli is presented as video only and the other half audiovisual. In the second session the former video-only half is presented audiovisually and vice versa. Within the runs the stimuli's order of presentation is also randomized. The subject's position is fixed at a distance of 50 cm by a stationary eye-tracker (SMI iView X™ Hi-Speed) to precisely record their gaze direction. The stimuli are presented on a flat 20.1 inch LCD display (Dell 2001FP). Each stimulus stands still for three seconds in order to ground the subjects. After the stimulus is presented, a dialog appears in which the subject has to answer five questions on a five-point Likert scale (from absolutely disagree to absolutely agree) using the mouse. The questions posed to the subjects were shown to be reliable in previous studies (Rosenberg and Hirschberg, 2005; Strangert and Gustafson, 2008), namely "The speaker is ...":
• "... insecure."
• "... monotonous."
• "... expressive."
• "... persuasive."
• "... overall a great speaker that is capable of capturing the attention of an audience."
In the presented study we recorded the eye movements and acquired the subjective speaker ratings of seven subjects (two female, five male; average age of 24). Currently, effort is being undertaken to record a larger cohort of subjects.
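Purely as an illustration of the counterbalanced design described above (the actual presentation software used in the experiment is not specified in the paper), the assignment of the 40 stimuli to conditions and sessions could be sketched as follows:

```python
# Sketch of the counterbalanced stimulus assignment; illustrative only.
import random

stimuli = list(range(40))                      # 40 speech segments
random.shuffle(stimuli)
half_a, half_b = stimuli[:20], stimuli[20:]

def build_run(video_only, audiovisual):
    trials = ([(s, "video_only") for s in video_only]
              + [(s, "audiovisual") for s in audiovisual])
    random.shuffle(trials)                     # order randomised within a run
    return trials

session_1 = build_run(video_only=half_a, audiovisual=half_b)
session_2 = build_run(video_only=half_b, audiovisual=half_a)   # conditions swapped
```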
Evaluation
In order to evaluate the influence of the speakers' behavior on their rating we conducted multiple statistical tests with two basic foci, i.e. the disparity in the speakers' perception with and without audio using the eye-tracking data, and the influence of acoustic prosodic measures, as well as basic visual features on the perception of the speakers' qualities.
The results and evaluations of the investigated foci will be covered in subsections 3.1., 3.2. and 3.3.
Audio Measure Evaluation
For the audio evaluation, we extracted a battery of parameters representing different aspects of prosody, including statistics of the fundamental frequency (f0), intensity, voice quality parameters, and timing-related features. Among the extracted features are articulation rate (i.e. number of syllables per second), statistics of f0 (i.e. minimum, maximum, mean and span) extracted using the ESPS/waves+ software package, mean F1 (i.e. the first formant), average pause time, mean amplitude, the normalized amplitude quotient (NAQ) (Alku et al., 2002) and the so-called PeakSlope parameter identifying breathy regions of the speech (Kane and Gobl, 2011). In order to identify their influence on the perception of the speakers' style and quality (with respect to the set of questions posed to the subjects after each segment), we calculated Pearson correlation coefficients ρ ∈ [−1, 1] between the extracted parameters and the perceptual ratings of the naive subjects that rated the short speech segments. The ρ values represent the strength of positive or negative linear correlation between the extracted parameters and the perception of a speaker.
Table 1 (Part 1): Pearson's ρ values and significant positive or negative linear correlations between prosodic parameters and subjects' perceptual ratings. The correlations are calculated for three groups, i.e. all speakers (ALL), female speakers (F), and male speakers (M). The perceptual ratings are abbreviated as: overall (Ov.); insecurity (Ins.); monotony (Mon.); expressiveness (Exp.); and persuasiveness (Per.). Significant correlations are denoted with * (p < .05) and ** (p < .01). Leading zeros were omitted.
The strongest correlations were found for the pause time parameter, which correlates highly negatively with the overall rating (i.e. the shorter the time spent on pauses, the better the overall rating) and with the speakers' expressiveness and persuasiveness. Further, the parameter is positively correlated with insecurity and monotony.
Also the voice quality parameters PeakSlope and NAQ highly correlate with the perceptual ratings. In general, breathy voice qualities (i.e. high NAQ and PeakSlope values) correlate strongly and positively with insecurity and monotony. The other three categories correlate negatively with these parameters, as small NAQ and PeakSlope values indicate more tense voice qualities.
Relatively moderate correlations were found for the f0-related parameters. Mean f0 had only slight effects for male speakers, and span f0 as well as max f0 had small significant correlations for female speakers. This finding might be an effect of the strong speaker dependence of the f0 parameter. In (Rosenberg and Hirschberg, 2005), highly significant correlations (i.e. p < .001) for f0-related statistics were found for charismatic speech, which roughly corresponds to our overall rating scores. However, we could not verify those highly significant results, as f0 only rarely correlates significantly with the overall ratings. In that study, speaking rate (i.e. syllables per second) also correlated significantly with charisma (p = .085). As we did not assume significance at p < .1 (as in (Rosenberg and Hirschberg, 2005)) but at p < .05, the results are not necessarily comparable. Unfortunately, no correlation values such as Pearson's ρ were given in (Rosenberg and Hirschberg, 2005) with which we could compare our values.
Further, Table 1 shows that gender-specific differences are found in the data. The mean intensity of the speech, for example, only shows a significant correlation with ratings for male speakers and none for female speakers, which indicates that a raised intensity in speech only changes the perceptual quality of the speech for male speakers. PeakSlope and mean F1 also show similar effects. These findings show that it is important to separate the analysis for male and female speakers, as the same prosodic parameters might have opposing effects for the different genders. However, no significant change of directional correlation was found for a single parameter.
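The kind of feature-rating correlation reported in Table 1 could be reproduced along the following lines; this is an illustrative sketch (the paper does not state which statistics software was used), and the feature and rating values shown are invented placeholders:

```python
# Hedged sketch: Pearson correlation between a prosodic feature and mean ratings.
from scipy.stats import pearsonr

def correlate(feature_values, mean_ratings):
    """feature_values, mean_ratings: equal-length lists, one value per segment."""
    rho, p_value = pearsonr(feature_values, mean_ratings)
    return rho, p_value

pause_time = [1.8, 0.9, 2.4, 1.1, 0.7]   # mean pause time per segment (invented)
overall    = [2.1, 3.8, 1.9, 3.4, 4.2]   # mean 'overall' rating per segment (invented)

rho, p = correlate(pause_time, overall)
print(f"rho = {rho:.2f}, p = {p:.3f}")   # significance thresholds: p < .05, p < .01
```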
Visual Feature Evaluation
As a basic but meaningful visual feature we estimated the optical flow in each of the 40 sequences using the algorithm proposed by (Brox et al., 2004). The results are vector fields of regularized estimates of the spatiotemporal changes resulting in image shifts and deformations (or warpings) of structures in the intensity function over time.
At each location the estimated amount (speed) and direction of the velocity of a pixel is encoded. In Fig. 1 exemplary optical flow fields are shown for three different underlying body movements; the different colors represent the direction of the movement. Using such flow fields u(x, y, t) we calculate the motion energy of each image by a framewise summation of the lengths of the vectors, discarding their direction information. This identifies whether motion occurs in an image and also captures its average strength. Since the speaker is the only person visible in each sequence and the background is almost perfectly constant, the calculated motion energy can be treated as an indicator for the overall intensity of a speaker's motion. Any conceivable body movement causes a change in the motion energy, and thus a direct relationship between the motion energy and the gesticulation of a speaker exists. The motion energy is finally averaged over the whole sequence and used as a measure reflecting the overall motion amplitude a speaker exhibits within one sequence. In more formal terms, the per-frame operation can be written as $E(t) = \frac{1}{|\Omega|} \sum_{(x,y) \in \Omega} \lVert u(x,y,t) \rVert$, i.e. the pixel-wise averaging of motion vector lengths over the image domain $\Omega$ (compare (Bobick and Davis, 2001) for a slightly different approach based on temporal differencing alone). Fig. 2 shows the estimated motion energy over the first 400 frames of two sequences, one with a speaker rated as monotonous (Mean: 3.571, Std.: 0.787), and the other showing a speaker who is perceived as expressive (Mean: 3.714, Std.: 0.488). Almost all rating categories were found to be strongly correlated with the relative motion energy (see Table 2). Only insecurity showed no correlation in the audiovisual setup. In accordance with the semantic relationship between the rating categories, monotony showed a strong negative correlation with the relative motion energy, whereas overall, expressiveness and persuasiveness were positively correlated. There was a slight tendency towards stronger correlations in the visual only than in the audiovisual setup, potentially indicating a higher importance of the motion energy in the absence of audio. Further, we could show a significant difference in the relative motion energy within the tested rating categories. Here, ratings below 2.5 were treated as disagreement with one of the speaker characterizations, whereas ratings exceeding 3.5 were considered as agreement. Within most of the categories, there is a highly significant difference in the relative motion energy between the sequences that were rated in agreement with a category and their disagreement counterparts (see Fig. 3). As with the correlations, the differences are stronger for the visual only setup, again supporting the assumption of a higher importance of visual features in the absence of audio.
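For illustration only, a sketch of this motion energy computation is given below. The paper estimates flow with the variational method of Brox et al. (2004); the sketch substitutes OpenCV's Farneback flow purely for convenience, and the video file name is a placeholder.

```python
# Hedged sketch: average motion energy of a speaker sequence from dense optical flow.
import cv2
import numpy as np

def motion_energy(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()                      # assumes the video opens and has frames
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    energies = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # pixel-wise average of flow-vector lengths; direction is discarded
        energies.append(np.mean(np.linalg.norm(flow, axis=2)))
        prev_gray = gray
    cap.release()
    return float(np.mean(energies))            # averaged over the whole sequence

print(motion_energy("speaker_sequence.mp4"))   # placeholder file name
```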
Audiovisual vs. Video-only Evaluation
Beside the speaker ratings, we recorded the gaze direction of the subjects to address the question of whether the absence of the acoustic channel influences the kind of visual features used for the judgments of the speakers. During the experiments, the point of regard (POR) in pixel coordinates was measured at a rate of 240 Hz using an SMI iView X™ Hi-Speed eye-tracker. Analyzing the POR over time allows a rich set of different features to be inferred, such as the fixation time on a specific target or the characteristics of its trajectory. In the presented study, we compared the relative time a subject spends focusing on a speaker's face in the audiovisual and the visual only setup. The fixation time on the face reflects the ratio between facial and other body features (e.g. facial expression vs. upper body pose) used for the judgment of the speaker's performance. In Fig. 4, the relative fixation time of an exemplary subject is shown. The accordance of the POR and the position of the face in an image was verified using the OpenCV implementation of the Viola-Jones face detection algorithm (Viola and Jones, 2004). As shown in Table 3, the subjects focused significantly longer (p < .01) on the face in the audiovisual than in the visual only condition. We hypothesize that in the absence of acoustic input subjects are trying to gather a larger amount of visual features contributing to their judgment. An example of the influence of audio on the fixation on the face is shown in Fig. 4.
Figure 3: Differences in the relative motion energy within the tested rating categories. Mean ratings in the interval [1, 2.5) were treated as disagreement with a speaker's characterization, whereas ratings in (3.5, 5] were considered as agreement. Analyzing audiovisual and visual only sequences as a whole, we found strong significant differences in all categories except insecurity. Considered separately, the visual only setup showed a higher number of significant differences. This, in conjunction with the correlation results, indicates a higher importance of visual features in the absence of audio. Significant differences in the Mann-Whitney U test are denoted with * (p < .05), ** (p < .01) and *** (p < .001) (homoscedasticity was checked using Levene's test). The perceptual ratings are abbreviated as: overall (Ov.); insecurity (Ins.); monotony (Mon.); expressiveness (Exp.); and persuasiveness (Per.).
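A hedged sketch of the relative face-fixation measure follows. It assumes the gaze samples have already been mapped to video pixel coordinates and temporally aligned with the frames, and it uses OpenCV's standard Haar cascade as the Viola-Jones detector mentioned above; the helper name and data layout are illustrative only.

```python
# Sketch: fraction of gaze samples (POR, pixel coordinates) that fall inside
# the face bounding box returned by the OpenCV Viola-Jones detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def relative_face_fixation(frames_gray, gaze_points):
    """frames_gray and gaze_points are aligned lists: one (x, y) POR per frame."""
    on_face = 0
    for frame, (gx, gy) in zip(frames_gray, gaze_points):
        faces = face_cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            if x <= gx <= x + w and y <= gy <= y + h:
                on_face += 1
                break
    return on_face / len(gaze_points)
```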
Conclusions and Outlook
In the present study, we investigated the impact of combined verbal and non-verbal features in the communication and judgement of target persons in zero acquaintance situations. In particular, we could show various effects with respect to the combined visual and auditory perception of political speeches in comparison with visual-only movie presentations in the same speech scenario. The most prominent effects were found for the correlations between the five perceptual speaker ratings (i.e. insecurity, monotony, expressiveness, persuasiveness and overall quality) given by seven naive subjects and several prosodic parameters and motion energy. Our results thus add new knowledge to the discussion of the role of different verbal and non-verbal cues in communication and social interaction. It has been shown previously that different cues from speech and nonvocal channels are evaluated in order to judge speakers as friendly or showing solidarity (Gallois and Callan, 1988), while the rapid analysis and judgement of personality is mainly based on visual cues (Albright et al., 1988). The use of multiple channels in communication and the context-dependency of the meaning of received social signals has been emphasized in (Shanker and King, 2002; Grammer et al., 2002). The authors foster the emergence of a paradigm shift from treating communication as a sequential information processing mechanism towards a dynamical systems approach in which communication and interaction is a parallel, tunable, and dynamic process. With our results, we add to the latter by showing that a receiver actively seeks specific signatures of relevant social signals in the auditory and visual stream, depending on the currently available information channels.
Figure 4: Influence of the acoustic channel on the relative fixation time on the face and limbs of a speaker. The relative fixation time is color encoded (from yellow, indicating a very short fixation time, through green and blue to red, indicating a very long fixation time). In the audiovisual case (at the top), the subject mainly focuses on the face of the speaker and only rarely on the upper body or the background of the scene. In contrast, the subject spends virtually the same time fixating the face, the body and the hand of the speaker if no audio is present (at the bottom).
In our study the role of the perceiver is rather passive in terms of a communication scenario. The subjects had to watch (and listen to) political speeches of unknown actors, a situation that closely resembles the zero acquaintance scenario in which the perceiver cannot directly interact with the target. Still, as a result of our evaluation of comparison conditions, we demonstrate that the dynamics of body movements, as encoded in the perceived motion, is an important indicator people use for judging the personality of speakers. Similar to the approach suggested by (Grammer et al., 2002), we utilized the optical flow field estimated from the speakers' video sequences and computed the average motion energy pattern thereof. The input scene is actively sampled over time by the observer through actively gazing at specific locations which show high dynamics in the structure and its changes over space and time (Vig et al., 2012). We have demonstrated that the selection of specific locations is not only driven by sensory data, but by the active search for information based on the available input modalities. Thus, top-down expectations and task-dependent information sampling is observed here (Rothkopf et al., 2007; Saunders et al., 2010). Unlike the approach in (Koppenheimer and Grammer, 2010), we do not instantiate geometric movement models but make use of image-based motion energy directly. It should be noted here that further information could easily be derived from this optical flow pattern since we have access to a richer repertoire of, e.g., motion direction patterns over time, in turn allowing robust spatiotemporal parameters to be estimated from the input directly. At the moment we kept the analysis as simple as possible to highlight the key statistical dependencies. In our investigation, we identified that pause time and voice quality parameters (i.e. NAQ and PeakSlope), which have not been used in previous studies, showed the strongest correlations with the ratings. The results for the audio feature correlations can be found in Table 1. Values indicating breathy voice qualities, for example, correlate strongly with perceived insecurity and monotony of male speakers, which is in line with previous expectations. Further, we were able to identify strong positive correlations between the motion energy feature and the ratings of good overall performance as well as strong expressivity and persuasiveness. In addition, we found strong negative correlations of motion energy with monotony, which corresponds to our previous hypothesis. We found some differences in the strength of correlation between the motion energy feature and all the speaker ratings in the mute and audiovisual conditions. To be precise, the correlations of motion energy with all ratings are weaker in the audiovisual condition, indicating that the raters rely less on visual features if auditory cues are at their disposal. In the case of insecurity, the correlation between the motion energy feature and the subjects' ratings even switches its algebraic sign from the mute to the audiovisual condition; this in turn indicates that for insecurity auditory features complement perception. Standard parameters such as statistics of f0, however, show little to no correlations, which might be an effect of the coarse level of analysis. As already pointed out above, a more fine-grained analysis of both audio and video features might reveal more interesting effects and is subject to future analysis.
It should be noted that the entire analysis assumes a dedicated actor who is engaged in his political speech. A mere correlation analysis could be fooled by clownish presentations with large movements and otherwise useless speech content. The eye-tracking analysis revealed that subjects are more likely to look at the faces of the speakers in the audiovisual condition than in the video-only condition (see Table 3 for details). This indicates that the subjects have a less focused gaze in the video-only condition and scan the politician's full body for cues. It appears that the observers search the whole picture for indications that might help them judge the speaker without knowing the verbal content of the speech. The revealed effects and correlations have great potential for future applications such as the automatic classification of the quality of public speeches or the training of speakers. In order to train classifiers, the findings need to be confirmed in a larger-scale analysis with more speakers as well as more naive subjects. Work is currently under way to increase the number of investigated features as well as the cohort of human subjects. In addition, the study reveals that information fusion in humans using visual and auditory streams influences the patterns of active seeking and sampling of relevant information in the ambient environment. Further studies are needed to highlight the dynamic processing and evaluation of features from different sensor streams and their subtleties in different situations of social communication.
The Great Capacity on Promoting Melanogenesis of Three Compatible Components in Vernonia anthelmintica (L.) Willd.
To investigate a possible methodology of exploiting herbal medicine and design polytherapy for the treatment of skin depigmentation disorder, we have made use of Vernonia anthelmintica (L.) Willd., a traditional Chinese herbal medicine that has been proven to be effective in treating vitiligo. Here, we report that the extract of Vernonia anthelmintica (L.) Willd. effectively enhances melanogenesis responses in B16F10. In its compound library, we found three ingredients (butin, caffeic acid and luteolin) also have the activity of promoting melanogenesis in vivo and in vitro. They can reduce the accumulation of ROS induced by hydrogen peroxide and inflammatory response induced by sublethal concentrations of copper sulfate in wild type and green fluorescent protein (GFP)-labeled leukocytes zebrafish larvae. The overall objective of the present study aims to identify which compatibility proportions of the medicines may be more effective in promoting pigmentation. We utilized the D-optimal response surface methodology to optimize the ratio among three molecules. Combining three indicators of promoting melanogenesis, anti-inflammatory and antioxidant capacities, we get the best effect of butin, caffeic acid and luteolin at the ratio (butin:caffeic acid:luteolin = 7.38:28.30:64.32) on zebrafish. Moreover, the effect of melanin content recovery in the best combination is stronger than that of the monomer, which suggests that the three compounds have a synergistic effect on inducing melanogenesis. After simply verifying the result, we performed in situ hybridization on whole-mount zebrafish embryos to further explore the effects of multi-drugs combination on the proliferation and differentiation of melanocytes and the expression of genes (tyr, mitfa, dct, kit) related to melanin synthesis. In conclusion, the above three compatible compounds can significantly enhance melanogenesis and improve depigmentation in vivo.
Introduction
Skin pigmentation is an important human phenotypic trait whose regulation is related to many factors. The pigment melanin is produced by melanocytes in a complex process called melanogenesis. It is a physiological process leading to the production of melanin pigment and a crucial step for the regulation of melanocyte functions, including photoprotection [1][2][3]. The melanocyte interacts with endocrine, immune, inflammatory and central nervous systems, and its activity is also regulated by extrinsic factors such as ultraviolet radiation and drugs [4].
The skin is susceptible to oxidative damage, and reactive oxygen species (ROS) are an important by-product of melanin synthesis [5]. However, ROS production has been shown to suppress pigmentation in pigment cells and accumulated in the epidermis, which modulates the expression of the key pigment "melanin" synthesizing enzymes and cell damage [6,7]. Further, increased levels of ROS in melanocytes may cause defective apoptosis resulting in release of aberrated proteins, which can lead to inflammation [8]. The intracellular levels of H2O2 and other ROS also increase in response to cytokines such as TNF-α and TGF-β1, which are potent inhibitors of melanogenesis [9]. Moreover, inflammation-associated pigmentation changes are extremely common, for example, IL-17 and TNF synergistically modulate cytokine expression while suppressing melanogenesis [10]. However, in any case, maintaining the balance of pigment regulation is the ultimate goal.
Vitiligo is an acquired depigmenting disease manifested by chalk-or milk-white colored macules of several millimeters to centimeters in diameter [11]. It affects nearly 100 million people around the world and has regional and ethnic differences. The pathogenic factors mainly include genetics, mental stress, autoimmunity, neurochemical factors, oxidative stress, endocrine, melanocyte shedding, but not yet clear [12,13]. Up to now, the only drug approved by the FDA for the treatment of vitiligo is monobenzone cream, and this drug is the reverse treatment of vitiligo. Drugs are used for depigmentation treatment of patients with vitiligo whose skin area is more than 50% and hyperpigmentation diseases such as melanoma [14]. Therefore, there is no specific treatment for vitiligo at present. Due to the loss of functional melanocytes, restoring cell numbers and promoting melanin synthesis is a key strategy for the treatment of vitiligo.
Vernonia anthelmintica (L.) Willd. has a long history of traditional use for the management of several disorders related to skin, central nervous system, kidney, gynecology, gastrointestinal, metabolism and general health, especially in the treatment of vitiligo for thousands of years in China [15,16]. Although the anti-vitiligo mechanisms remain ambiguous, the herb has been universally used in Uyghur hospitals to treat vitiligo. To date, about 20 biologically active compounds of Vernonia anthelmintica (L.) Willd. were collected from the TCMID database, the Chinese Academy of Sciences Chemistry Database and the TCMSP database including flavonoids, caffeic acid-based quinines, sesquiterpenoids and steroids, etc [17]. Evidence suggests that ethanol seed extract induces melanogenesis by increasing the expression of TYR, TRP-1, TRP-2 and MITF in B16 cells, and moreover, it exhibits significant in vitro antioxidant and in vivo anti-inflammatory potential [18,19]. Butin, as one of the most important active compounds in Vernonia anthelmintica (L.) Willd., plays a therapeutic effect by increasing the expression of TYR and TRP-1 protein and reducing the activity of MDA and CHE in a mouse model of hydroquinone-induced vitiligo [20]. Equivalently, more active compounds are isolated and identified such as isorhamnetin, kaempferide, 1,5-dicaffeoylquinic acid and benzoyl-vernovan, etc. They can significantly increase the expression of melanin-biosynthetic genes (MC1R, MITF, TYR, TYRP1 and DCT) and the tyrosinase activity with multiplex signaling pathways [21][22][23]. However, the current anti-vitiligo treatment for Vernonia anthelmintica (L.) Willd. is still at the stage of discovery of the material basis and the mechanism, and no monomer or component has been found to be more effective than the extract.
Here, we verified the melanogenesis-promoting, anti-inflammatory and anti-oxidation effects of butin and found that two newly identified compounds, caffeic acid and luteolin, have similar effects in B16F10 cells and zebrafish. What is more, when they are combined with butin, they synergistically enhance melanogenesis. We performed in situ hybridization on whole-mount zebrafish embryos to further explore the effects of the multi-drug combination on the proliferation and differentiation of melanocytes and the expression of genes (tyr, mitfa, dct, kit) related to melanin synthesis.
Cytotoxicity toward B16F10 Cells
To determine whether butin, caffeic acid and luteolin have cytotoxic effects, we treated B16F10 cells with these compounds at various concentrations; cell viability was determined using the MTT assay. As shown in Figure 1, the compounds were non-toxic over the following concentration ranges: butin 0-10 µmol·L−1, luteolin 0-1 µmol·L−1 and caffeic acid 0-5 µmol·L−1.

Figure 1. (B,C) B16F10 cells incubated with various concentrations of luteolin and caffeic acid for 48 h; cell viability was determined using an MTT assay. Results shown are mean ± SEM and are representative of three independent experiments. Data were analyzed by one-way analysis of variance (ANOVA) followed by a post hoc Tukey test. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. control. The dotted line indicates 75% relative cell viability.
Melanin Content Assays in B16F10 Cells
As mentioned previously, the ethanol seed extract of Vernonia anthelmintica induces melanogenesis in multiple ways. We determined the effect of the extract (0.5 mg·mL−1) of Vernonia anthelmintica (L.) Willd. in B16F10 cells; the results suggest that the seed extract significantly promotes melanogenesis in vitro (Figure 2). We then examined the three compounds against B16 cell lysates to identify their influence on the stimulation of melanin pigmentation, and we also used Masson-Fontana ammoniacal silver staining to demonstrate their melanin-promoting effect. As shown in Figure 3, melanin content increased significantly with increasing concentration within a certain range. However, α-MSH, a natural reference agent, showed the strongest effect at an exceptionally low concentration. Among the test compounds, butin at 10 µmol·L−1 was identified as the most potent in vitro, with the effects of luteolin and caffeic acid slightly inferior.
Effect of Compounds on Melanogenesis In Vivo
We used zebrafish to test the effect of the compounds on melanogenesis. Prior to the investigation of melanogenesis, a toxicity assay was performed to determine the toxicity of the selected compounds to zebrafish. Significantly low toxicity was observed in embryos exposed for 48 h at the following concentrations: butin 0-150 µmol·L−1, caffeic acid 0-200 µmol·L−1 and luteolin 0-80 µmol·L−1 (limited by its maximum solubility), indicating the effective concentration ranges for zebrafish embryos. Pigmentation was analyzed by photography, as shown in Figures 4-6. As the lateral and dorsal views in Figure 4A show, the melanin content of zebrafish first increases and then decreases as the concentration of butin increases, with excellent results at 10, 40, 80 and 100 µmol·L−1. When the concentration increases further, melanin synthesis is inhibited, possibly because of the toxicity of the compound. The highest melanin content reaches up to ~65%. Moreover, the tyrosinase activity results are consistent with the melanin data, except that the maximum appears at 40 µmol·L−1 (Figure 4C). We then investigated the effect of caffeic acid on melanogenesis in vivo (Figure 5). Using the same procedure as for butin, we found that the melanin content of zebrafish first increases and then decreases as the concentration of the compound increases, with excellent results at 10, 40 and 80 µmol·L−1. This effect is highly significant, but it also indicates that caffeic acid has a narrower effective range. However, caffeic acid had little effect on tyrosinase activity except at 40 and 80 µmol·L−1.
Luteolin is another potential melanogenesis-inducing compound from Vernonia anthelmintica (L.) Willd. However, the highest concentration we could test was 80 µmol·L−1 because its solubility in water is limited. As shown in Figure 6A, within the range allowed by its solubility the effect increases with increasing concentration and is strongest at 80 µmol·L−1 (Figure 6B), while tyrosinase activity at 40 µmol·L−1 was markedly up-regulated compared with PTU-2 (Figure 6C). This suggests that the augmentation of melanogenesis by butin, caffeic acid and luteolin in zebrafish occurs via stimulation of tyrosinase activity. The increase in cellular tyrosinase activity by these compounds may be caused either by direct stimulation of enzyme activity or by an increase in the amount of tyrosinase protein in melanocytes.
Effects of Anti-Inflammatory and Inhibition of Reactive Oxygen Species
As explained previously, ethanol seed extract of Vernonia anthelmintica exhibits significant in vitro antioxidant and in vivo anti-inflammatory potential. Therefore, we investigated whether these three compounds of butin, caffeic acid and luteolin have such functions in vivo or not.
As the results in Figure 7 show, 50 µM CuSO4 induced leukocytes to localize preferentially in a few clusters along the horizontal midline of the trunk and tail (white arrows), indicating severe inflammation in Tg(mpx:GFP) zebrafish larvae. When treated with butin (100 µM), caffeic acid (100 µM) or luteolin (80 µM), the inflammation was effectively suppressed and the number of leukocyte clusters was significantly reduced, with luteolin showing the strongest effect, followed by butin, and caffeic acid the weakest.
Caffeic acid has good antioxidant properties in vitro, but the antioxidant properties of butin and luteolin were unknown [24]. We used hydrogen peroxide to build a ROS-enriched zebrafish model and a DCFH-DA fluorescent probe to detect the antioxidant properties of the compounds; the intensity of fluorescence reflects the level of ROS in zebrafish. From our results in Figure 8, caffeic acid showed the strongest antioxidant activity: at a concentration of 100 µM, the ROS level was even lower than normal. Butin and luteolin also have certain antioxidant properties. As expected, ROS levels decreased with increasing concentration of administration. Therefore, all three compounds exhibit anti-inflammatory activity and inhibition of reactive oxygen species in vivo.
The Inducing Melanogenesis Activity and Combinational Design of Components
A D-optimal design was employed to investigate the synergistic effect of the three active components (butin, luteolin and caffeic acid) at varying proportions. Each combination group had the same total concentration (100 µM), and the numbers in each row of Table 1 indicate the concentrations of the three monomers. The relative melanin content (% of control, mean ± s.d.) was used to evaluate the effect of each component mixture, as shown in Table 1. As with the monomers, we obtained the zebrafish melanin phenotype and quantitative data for each mixture group. Compared with the PTU-2 group, the melanin levels in most of the groups showed a significant increase (Figure 9A,B).
Moreover, the effect of melanin content (% of control) recovery in some groups is stronger than that of the monomer, which suggests that the three compounds have a synergistic effect on inducing melanogenesis.
Application of response surface methodology (RSM) to the optimization of analytical procedures is now widely established, principally because of its advantages over classical one-variable-at-a-time optimization, such as the generation of large amounts of information from a small number of experiments and the possibility of evaluating interaction effects between the variables on the response [25,26]. RSM aims to find the process settings that achieve peak performance. In this work we fitted the model Y = f(X1, X2, X3), where the proportions of the three compounds are the independent variables on which the melanin response depends. The least-squares method was used to estimate the parameters of the polynomial, and Design Expert 12.0.3.0 was used to carry out the computation. The results are shown in Figure 9C,D. From the mathematical model, we obtain a predicted best melanin content (% of control) recovery rate of 81% for the combination of butin, caffeic acid and luteolin at the ratio butin:caffeic acid:luteolin = 7.38:28.30:64.32, which is higher than for any mono-compound.
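The D-optimal design and RSM fit were carried out in Design Expert, but the underlying computation is an ordinary least-squares fit of a polynomial mixture model followed by a search over the composition simplex. The sketch below shows that idea with a Scheffé quadratic model; the design points and melanin responses are illustrative placeholders, not the data of Table 1.

```python
# Sketch: fitting a Scheffe quadratic mixture model Y = f(x1, x2, x3) by least squares
# and locating the proportions that maximize the predicted melanin recovery.
# Design points and responses are placeholders; the study used Design Expert 12.

import numpy as np

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# proportions of butin, caffeic acid and luteolin (rows sum to 1) -- placeholder design
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5], [1/3, 1/3, 1/3]])
y = np.array([45, 40, 55, 60, 62, 70, 75])     # melanin content, % of control (placeholder)

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predict(p):
    """Predicted response for one composition p = (x1, x2, x3)."""
    return (design_matrix(np.array([p])) @ coef).item()

# Grid search over the composition simplex for the predicted optimum.
best = max(((a, b, 1 - a - b)
            for a in np.linspace(0, 1, 101)
            for b in np.linspace(0, 1 - a, 101)),
           key=predict)
print("predicted optimal proportions (butin, caffeic acid, luteolin):", np.round(best, 3))
```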
Validation of Mathematical Model of Drug Combination
In order to verify the reliability of the model, we selected the predicted optimal ratio (butin:caffeic acid:luteolin = 1:4:10) and two non-optimal ratios (butin:caffeic acid:luteolin = 1:2.5:3 and 3:1:3) and compared their effects on melanin synthesis. As shown in Figure 10A,B, the predicted optimal ratio gave the highest relative melanin content (% of control); although the predicted value deviated somewhat from the measured one, this confirms that the model has reasonable credibility.
Combination Compounds Enhance the Expression Level of Melanogenic Genes
Melanogenesis is regulated by melanogenic enzymes including tyrosinase (TYR), tyrosinase-related protein 1 (TRP 1) and tyrosinase-related protein 2 (TRP 2) [27,28]. TYR directly mediates the production of melanin via the oxidation of the melanogenic substrate tyrosine [29]. Microphthalmia-associated transcription factor (MITF) is a master regulator of the transcription of genes involved in melanin synthesis. In the differentiation of neural crest cells into melanocytes, the gene mitfa, one of the earliest markers of this process, plays a core regulatory role in the development and differentiation of melanocytes, and it also plays an important role in the formation of zebrafish pigment [30][31][32]. During the early development of zebrafish body color, the genes dct and kit promote the differentiation and migration of melanocyte stem cells into melanocytes [33,34].
Therefore, we used whole-mount in situ hybridization of zebrafish embryos to investigate the effects of the best drug combination on the expression of melanogenic genes (tyr, mitfa, kit and dct) during zebrafish development (35 hpf). At 35 hpf, all the genes are expressed in the eyes and dorsum of zebrafish larvae, with small amounts in the tail and almost none in the abdomen (Figure 11). Compared with the control group, the combination increases the expression level of tyr, kit and dct in the eyes and dorsum of zebrafish larvae (see red, black and green arrows). However, the expression level of mitfa is only slightly increased in the dorsum (purple arrows), which may be related to the time course of melanocyte development in the eyes and dorsum.

Figure 11. Combination compounds enhance the expression level of melanogenic genes. Whole-mount in situ hybridization of zebrafish embryos was used to investigate the effects of the best drug combination on the expression of melanogenic genes (tyr, mitfa, kit and dct). Zebrafish larvae were incubated with the combination of butin, caffeic acid and luteolin (butin:caffeic acid:luteolin = 7.38:28.30:64.32). At 35 hpf, zebrafish were collected and hybridized with labeled nucleic acid probes. The blue staining indicates expression of the specific genes; the red, purple, black and green arrows indicate the expression sites of tyr, mitfa, kit and dct, respectively. The scale bars represent 300 µm.
Discussion
In this study, we showed that the extract of Vernonia anthelmintica (L.) Willd. can enhance melanin synthesis in B16F10 cells in vitro, supporting its use in treating vitiligo. We found that butin, caffeic acid and luteolin each promote melanogenesis, with butin being the most effective. However, the in vitro results are not particularly stable, because the state of the cells affects the measured melanin content. In addition, melanogenesis is closely related to the microenvironment of melanocytes, which in vitro studies cannot reproduce. Hence, we chose zebrafish as a whole-organism model to confirm the melanogenesis-promoting effects of the three compounds; previous work suggested that the extract of Vernonia anthelmintica (L.) Willd. is similarly efficient in this model [35]. Caffeic acid and luteolin are reported here for the first time to have biological activity in melanin synthesis, although their effects are not extraordinarily strong.
As is well known, vitiligo is a multifactorial disease, with both genetic and environmental factors implicated in its initiation [36]. The same causal mechanisms might not apply to all cases, and different pathogenetic mechanisms might work together, ultimately leading to the same clinical result. Progression and maintenance of vitiligo have been linked to numerous inflammatory signaling pathways [37]. Melanocytes are particularly susceptible to stress because they perform melanogenesis, and the accompanying mitochondrial energy metabolism generates ROS [38]. In addition, the process of melanogenesis itself liberates hydrogen peroxide, a ROS precursor. This kind of cellular stress in melanocytes activates the innate immune system through the generation and release of DAMPs, which provide the initiating danger signal [39]. The inflammation that ensues ultimately leads to activation of the adaptive immune system, thereby facilitating autoimmune destruction and vitiligo progression [36]. Therefore, relieving oxidative stress in melanocytes may become a treatment strategy for vitiligo. Consistent with this, our work confirmed that butin, caffeic acid and luteolin have notable anti-inflammatory and antioxidant properties.
As a traditional Chinese medicine, Vernonia anthelmintica (L.) Willd. extract has been proven effective in treating skin disease, but its active components and working mechanism remain incompletely understood [40]. In addition, traditional Chinese medicine (TCM) usually takes effect when multiple components act synergistically. We confirmed above that butin, caffeic acid and luteolin each have their own advantages in promoting melanogenesis and in anti-inflammatory and antioxidant capacity. Furthermore, Vernonia anthelmintica (L.) Willd. extract is more effective than any single monomer in promoting melanogenesis, which implies a synergistic effect between the compounds. To optimize the proportions of the three mono-compounds so as to exert the maximum pharmacological effect, we adopted the D-optimal design and RSM methods (Figure 9). The best melanin content recovery rate (81%) is achieved at a butin:caffeic acid:luteolin ratio of 7.38:28.30:64.32. We then validated the accuracy of this result, and we will continue to use the mathematical model to balance the various positive and negative effects against the overall goal of curing the disease.
Finally, we utilized the method of in situ hybridization on whole-mount zebrafish embryos to investigate the effects of the best drugs combination on the expression of melanogenic genes (tyr, mitfa, kit and dct). In Figure 11, the combination enhances tyr, kit and dct expression on eyes and dorsum of zebrafish larvae, and mitfa on dorsum. It indirectly proved that the compounds could promote the proliferation, differentiation and melanin synthesis of melanocytes.
Consequently, our work confirmed that Vernonia anthelmintica (L.) Willd. extract has melanogenesis-promoting capacity in vivo, and indicated that the mono-compounds may be responsible for this capacity of the seed extract. We also showed that butin, caffeic acid and luteolin have melanogenesis-promoting, anti-inflammatory and antioxidant effects. We used a combination of pharmacological and statistical techniques to form a suitable model for developing drug combinations, and identified the best combination of mono-compounds present in Vernonia anthelmintica (L.) Willd. extract responsible for its melanogenic activity. Hence, this study forms the basis for future research on these pharmacologically active compounds and confirms their effectiveness. The methodology used here may also be helpful in the future modernization of TCMs within modern drug development procedures.
Cell Culture
The melanoma cell line B16F10 was purchased from the Chinese Cell Bank (Shanghai, China) and maintained as a monolayer culture in DMEM supplemented with 10% (v/v) FBS, 100 U·mL−1 penicillin and 100 µg·mL−1 streptomycin (Beyotime Biotechnology, Shanghai, China) at 37 °C in a humidified 5% CO2 incubator.
Cell Viability Assay
Cell viability was determined by MTT assay. B16F10 cells were plated in 96-well plates at a density of 3 × 10³ cells per well. When the cells reached about 30% density, they were incubated with the compounds for 48 h; the culture medium was then removed and replaced with 20 µL of MTT solution (5 mg·mL−1) dissolved in fresh DMEM and incubated for 4 h. The medium was removed completely, and 150 µL of DMSO was added to each well and allowed to dissolve the product fully for 5 min. Optical absorbance was measured at 570 nm with a microplate spectrophotometer (New Jersey, USA). The absorbance of cells without treatment was taken as 100% cell survival. Each treatment was performed in six replicate wells.
Masson-Fontana Melanin Ammonia Silver Stain
B16F10 cells at about 40% growth density in 6-well plates were treated with different concentrations of the test drugs and cultured at 37 °C and 5% CO2 for 48 h. A Fontana-Masson stain kit was used for melanin staining, and the melanin content was observed with a NIKON upright fluorescence microscope.
Maintenance of Zebrafish
Zebrafish (Danio rerio) husbandry was carried out in accordance with the methods detailed in "The Zebrafish Book", with a 14 h light (8:00 a.m. to 10:00 p.m.) and 10 h dark (10:00 p.m. to 8:00 a.m.) cycle at a temperature of 28 °C; animals were fed twice a day (9:00 a.m. and 5:00 p.m.) [41]. Experiments were started at 8:00 p.m. by placing one female and one male in the spawning tank, divided by a separator plate. The following day, the lights were turned on at 8:00 a.m. and the separator plate was removed for 20 min. The adult zebrafish were then returned to their respective tanks, and the embryos in the spawning tank were collected and cleaned before being cultured in zebrafish embryo medium and placed in a light incubator at 28 °C.
Melanogenesis and TYR Activity in Zebrafish (or Cells)
To characterize the melanogenic effects of these compounds, total melanin content of whole zebrafish extracts was measured. The effect of these compounds on the pigmentation of zebrafish embryos was determined according to a previous report [42]. The embryos were pre-treated with 0.2 mmol·L −1 PTU from 6 hpf to 30 hpf (24 h). PTU was then removed, embryos were immediately washed and treated with different concentrations of the test drugs or not (PTU-2); untreated embryos were used as a control and kept undisturbed for the next 24 h, up to 54 hpf. Phenotype-based evaluation of body pigmentation was performed at 54 hpf. Embryos were anesthetized and photographed under SZX16 stereo microscope. The levels of intracellular and secreted melanin were measured as described previously.
The levels of intracellular and secreted melanin were measured by the NaOH hydrolysis method. Total melanin in the cell pellet was dissolved in 100 µL of 1 N NaOH/10% DMSO for 1 h at 80 °C. The absorbance at 405 nm was measured. Melanin content was calculated as a percent of the control. Specific melanin content was adjusted by the amount of protein in the same reaction.
TYR activity and melanin content assays were carried out according to a previous report with slight modifications [35]. TYR activity in zebrafish was examined by measuring the rate of oxidation of L-DOPA. Briefly, the zebrafish were treated with a test compound; at 54 hpf they were washed with ice-cold PBS and lysed by incubation in cell lysis buffer at 4 °C for 20 min. After sonication, the lysates were centrifuged at 14,000 rpm for 15 min. Tyrosinase activity was then determined as follows: 100 µL of supernatant containing a total of 20 µL of the centrifuged protein extract was added to each well of a 96-well plate and mixed with 100 µL of 0.1% L-DOPA in 0.1 M PBS (pH 6.8) (m/v). After incubation at 37 °C for 0.5 h, dopachrome formation was monitored by measuring absorbance at 475 nm. Specific tyrosinase activity was normalized to the protein content of the reaction.
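Both read-outs reduce to simple normalizations. The sketch below shows, with entirely hypothetical plate-reader values, how the relative melanin content (% of control) and the specific tyrosinase activity could be computed; the protein amounts and incubation time are placeholders.

```python
# Sketch: relative melanin content (% of control) and specific tyrosinase activity,
# both normalized to protein, from hypothetical plate-reader readings.

def melanin_percent_of_control(a405_sample, protein_sample, a405_control, protein_control):
    """Melanin content per mg protein, expressed as % of the untreated control."""
    sample = a405_sample / protein_sample
    control = a405_control / protein_control
    return 100.0 * sample / control

def specific_tyrosinase_activity(a475_start, a475_end, minutes, protein_mg):
    """Dopachrome formation rate (dA475 per minute) per mg protein."""
    return (a475_end - a475_start) / minutes / protein_mg

# Hypothetical values:
print(melanin_percent_of_control(0.62, 0.8, 0.40, 0.8))     # ~155 % of control
print(specific_tyrosinase_activity(0.10, 0.34, 30, 0.02))   # dA/min per mg protein
```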
Chemically Induced Inflammation Assay in Zebrafish
On the night before spawning, healthy adult Tg(mpx:GFP) zebrafish with fluorescently labeled leukocytes were placed in the breeding tank. The embryos were collected the next day and incubated in a constant-temperature light incubator.
At 56 hpf, the hatched larvae were divided into six-well plates with 10-20 larvae per well, and a control group, a model group and a drug group were set up. The control group was not treated; the drug group was first soaked in the drug solution. After 1 h, the drug solution was washed out, and 50 µM copper sulfate solution was added simultaneously to the model group and the drug group to induce inflammation. After 40 min, the larvae were taken out and immediately photographed to observe the aggregation of macrophages at the neuromasts of the zebrafish [43].
Antioxidative Activities in Zebrafish with DCFH-DA Fluorescent Probe
Zebrafish embryos were collected and 0.2 mmol·L−1 PTU solution was added at 6 hpf. The embryos were kept in this solution until 120 hpf, when the PTU solution was washed out; a certain number of zebrafish were then soaked in drug-containing egg water for 48 h, washed, and exposed to hydrogen peroxide (1 mmol·L−1) for 4 h to establish the oxidative-stress model. After the hydrogen peroxide solution was washed out, the zebrafish were soaked in diluted DCFH-DA fluorescent probe solution (10 µmol·L−1) in the dark for 0.5 h and then photographed with a stereo microscope.
Whole-Mount In Situ Hybridization

Briefly, zebrafish embryos were collected, synchronized and arrayed by pipette into a 6-well plate, with 40 embryos and 5000 µL of zebrafish embryo medium per well. When the embryos had developed to 9 hpf, one well was treated with the combination drug and one well served as the blank control; embryos were collected at 35 hpf. Each group of embryos was divided into 4 EP tubes with 10 embryos per tube. The solution was replaced with fresh 4% PFA fixative three or four times, ending with 100% fixative; the fixed embryos were generally kept at 4 °C overnight to allow fixation to complete. Embryos were then dehydrated with methanol and rehydrated with PBST step by step. The washing PBST was removed, fresh PBST was added to 1.5 mL, and the required proteinase K was added directly into the tube, gently rolling the tube to increase diffusion (vortex the proteinase K stock and make sure any precipitate is mixed evenly before use). After that, the embryos were post-fixed, hybridized, washed, blocked, immunostained and visualized. The embryos were observed with a NIKON upright fluorescence microscope.
Drug Combination
To further discover the synergistic effect of the mono-compounds, a D-optimal Design was employed to arrange the experiment. The details regarding D-optimal Design are described in many references. A Response Surface Methodology (RSM) method was utilized to optimize the combination of butin, caffeic acid and luteolin. We used the software Design-Expert ® 12 to analyze the whole process.
Statistical Analysis
All data were expressed as mean ± standard error. Statistical analysis was performed with a one-way ANOVA followed by Tukey's post hoc test for correction of multiple comparisons. Values with p < 0.05 were considered significant.
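For readers who wish to reproduce the statistical workflow, the sketch below shows a one-way ANOVA followed by Tukey's post hoc test in Python; the group values are hypothetical placeholders, not measurements from this study.

```python
# Sketch: one-way ANOVA followed by Tukey's post hoc test, as used for the group
# comparisons in this study. Requires scipy and statsmodels; data are placeholders.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100, 98, 103, 101])      # relative melanin content, % of control
butin   = np.array([132, 140, 128, 135])
combo   = np.array([170, 181, 176, 168])

f_stat, p_value = stats.f_oneway(control, butin, combo)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, butin, combo])
labels = ["control"] * 4 + ["butin"] * 4 + ["combination"] * 4
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # pairwise comparisons
```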
Atmospheric deuterium fractionation: HCHO and HCDO yields in the CH2DO+O2 reaction
The formation of formaldehyde via hydrogen atom transfer from the methoxy radical to molecular oxygen is a key step in the atmospheric photochemical oxidation of methane, and in the propagation of deuterium from methane to molecular hydrogen. We report the results of the first investigation of the branching ratio for HCHO and HCDO formation in the CH 2 DO + O 2 reaction. Labeled methoxy radicals (CH 2 DO) were generated in a photochemical reactor by photolysis of CH 2 DONO. HCHO and HCDO concentrations were measured using FTIR spectroscopy. Significant deuterium enrichment was seen in the formaldehyde product, from which we derive a branching ratio of 88.2 ± 1.1% for HCDO and 11.8 ± 1.1% for HCHO. The implications of this fractionation for the propagation of deuterium in the atmosphere are discussed.
Introduction
Changes in atmospheric chemistry during the Anthropocene are linked to perturbations of the carbon cycle (IPCC, 2001; Wang and Jacob, 1998), documented in part by isotopic analysis. The goal of the present work is to determine how deuterium propagates through the CH 3 O + O 2 reaction, which is a key step in the atmospheric oxidation of methane (Feilberg et al., 2005b, 2007; Keppler et al., 2006; Quay et al., 1999; Rockmann et al., 2002; Saueressig et al., 2001; Weston, 2001). An overview of the reaction system is shown in Fig. 1. Knowledge of the isotopic signatures of the specific steps can be used to reduce uncertainties regarding the sources and sinks of the trace gases involved. Formaldehyde is a key intermediate in the oxidation of methane and non-methane hydrocarbons (in particular isoprene) (Hak et al., 2005; Palmer et al., 2006). The photolysis of formaldehyde is the only important in-situ source of molecular hydrogen in the atmosphere; at present about half of atmospheric H 2 is produced in this way (Gerst and Quay, 2001). There is interest in using hydrogen as an energy carrier replacing conventional hydrocarbon fuels. Potential advantages include reductions in CO 2 , NOx and hydrocarbon emissions. Potential impacts associated with leaks from storage and distribution systems are modest; they include small increases in stratospheric water vapor and additional consumption of OH (that would otherwise react with CH 4 ), implying a small effect on greenhouse gas budgets (Prather, 2003; Schultz et al., 2003). There are significant uncertainties in our knowledge of the hydrogen budget (IPCC, 2001; Quay et al., 1999; Rhee et al., 2006b).
The present atmosphere is the foundation for predicting future trends in greenhouse gas emissions. Uncertainties concerning the sources and sinks of greenhouse gases have been identified by the IPCC as a significant obstacle to accurately predicting future climate change (IPCC, 2001). Isotopic analysis is an important tool for investigating sources and loss mechanisms for atmospheric trace gases (Brenninkmeijer et al., 2003; Johnson et al., 2002). Examples include understanding ice core records of injection of sulfur into the stratosphere (Baroni, 2007), quantifying terrestrial CO 2 sinks (Miller et al., 2003), refining the nitrous oxide budget (von Hessberg et al., 2004) and identifying the missing source of atmospheric methyl chloride (Gola et al., 2005; Keppler et al., 2005). Two key sources of atmospheric hydrogen, fossil fuel combustion and biomass burning, are depleted in D, having δD(H 2 ) values of −196±10‰ and −290±60‰ respectively (Gerst and Quay, 2001). The processes removing molecular hydrogen from the atmosphere, soil uptake and OH reaction, are slower for HD than for HH by factors of 0.943±0.024 (Gerst and Quay, 2001) and 0.595±0.043 (Ehhalt et al., 1989; Sander et al., 2006; Talukdar et al., 1996) respectively. These processes enrich deuterium in atmospheric hydrogen; however, they are not sufficient to explain the high deuterium content of tropospheric hydrogen, δD(H 2 ) = 120±4‰ (Gerst and Quay, 2001; Rahn et al., 2003). Therefore atmospheric photochemical processes must enrich D in hydrogen relative to the starting material in order to balance the isotope budget. The starting material is typically methane, with a δD(CH 4 ) value of −86±3‰ (Quay et al., 1999), or isoprene, which is likely to be at least as depleted in D as methane (Feilberg et al., 2007). Isotopic analysis has been used to study hydrogen in the stratosphere, where it is confined to a small number of photochemically coupled species, mainly CH 4 , H 2 and H 2 O (McCarthy et al., 2004; Rhee et al., 2007). It has recently been shown that HCDO is photolysed more slowly than HCHO, and that it produces less HD than HCHO produces H 2 (Feilberg et al., 2007), which would seem to contradict the observation that the process converting CH 4 to H 2 leads to deuterium enrichment (McCarthy et al., 2004; Rahn et al., 2003; Rockmann et al., 2003; Zahn et al., 2006). However, the contradiction could be resolved if the depletion of deuterium in hydrogen produced by formaldehyde photolysis is offset by an even stronger enrichment in D in the steps producing formaldehyde. Unfortunately the isotope effects in this portion of the mechanism are not well characterized. The relative rate of CH 3 D vs. CH 4 oxidation is known (Sander et al., 2006), but not the branching ratio for H vs. D abstraction from CH 3 D. Atmospheric methyl radicals will be converted to methyl peroxy, which may react with NO to produce methoxy radicals. The formaldehyde and CO isotope effects have been described (Feilberg et al., 2004, 2005a, c, 2007; Weston, 2001). This paper addresses a key undescribed reaction in the methane oxidation mechanism.
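For readers less familiar with the δ notation used above, the short sketch below converts between D/H ratios and δD values relative to the VSMOW standard; the VSMOW ratio is the standard literature value and the sample ratio is hypothetical.

```python
# Sketch: delta notation for deuterium, relative to the VSMOW standard.
# R_VSMOW is the accepted D/H ratio of the standard; the sample ratio is hypothetical.

R_VSMOW = 155.76e-6            # D/H of Vienna Standard Mean Ocean Water

def delta_D(r_sample):
    """deltaD in permil relative to VSMOW."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

def ratio_from_delta(delta_permil):
    """Inverse conversion: D/H ratio from a deltaD value in permil."""
    return R_VSMOW * (1.0 + delta_permil / 1000.0)

# Tropospheric H2 at deltaD ~ +120 permil carries ~12% more D than VSMOW:
print(ratio_from_delta(120.0))     # ~1.74e-4
print(delta_D(1.40e-4))            # a D-depleted sample, ~ -101 permil
```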
The title reaction (the numbering scheme of Table 1 will be used throughout) has been studied by several laboratories (Gutman et al., 1982; Lorenz et al., 1985; Wantuck et al., 1987) and the results have been analyzed and are available in kinetics compilations (Atkinson et al., 2006; NIST, 2007; Sander et al., 2006); k 1 = 1.9×10 −15 cm 3 molecule −1 s −1 at 298 K. The Arrhenius A-factor, 3.9×10 −14 cm 3 s −1 , is low for a hydrogen atom transfer reaction, indicating that the mechanism may be more complex than a simple abstraction (Sander et al., 2006).
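The quoted A-factor and room-temperature rate constant imply an effective activation energy of only a few kJ/mol; the sketch below backs this out and evaluates the Arrhenius expression at other temperatures. This is purely illustrative arithmetic based on the two numbers quoted above, not the recommended temperature dependence from the kinetics compilations.

```python
# Sketch: Arrhenius form k(T) = A*exp(-Ea/(R*T)) for CH3O + O2, with an effective Ea
# backed out from the quoted A-factor and k(298 K). Illustration only.

import math

R = 8.314                 # J mol^-1 K^-1
A = 3.9e-14               # cm^3 molecule^-1 s^-1 (quoted A-factor)
k_298 = 1.9e-15           # cm^3 molecule^-1 s^-1 at 298 K

Ea = R * 298.0 * math.log(A / k_298)          # effective activation energy
print(f"Ea ~ {Ea/1000:.1f} kJ/mol")           # ~7.5 kJ/mol

def k(T):
    return A * math.exp(-Ea / (R * T))

print(f"k(298 K) = {k(298):.2e}")             # recovers 1.9e-15 by construction
print(f"k(250 K) = {k(250):.2e}")             # slower at lower temperature
```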
To improve understanding of hydrogen in the atmosphere we conducted a smog chamber FTIR study of the products of the CH 2 DO + O 2 reaction.The results and a model of the photochemistry occurring in the reactor are presented herein and discussed with respect to the literature data and atmospheric implications.
Experimental and data analysis
2.1 FTIR-Smog chamber system at Ford

The experimental system used in this work has been described previously (Wallington and Japar, 1989) and is summarized here. The system is composed of a Pyrex tube with aluminum end flanges and has a volume of 140 L. The reactor was surrounded by 22 UV fluorescent lamps which were used to generate Cl via the photolysis of molecular chlorine. Experiments were performed at ambient temperature (295±2 K) and a pressure of 930±10 mbar of synthetic air. The concentrations of species in the photochemical reactor were determined using FTIR spectroscopy. The output beam of a Mattson Sirius 100 spectrometer was reflected through the reactor using White cell optics, giving an optical path length of 27 m. IR spectra at a resolution of 0.25 cm −1 were obtained by co-adding 32 interferograms. Unless stated otherwise, quoted uncertainties are two standard deviations derived from least-squares regressions.
Experimental procedure
CH 2 DONO was synthesized by the drop-wise addition of concentrated sulfuric acid to a saturated solution of NaNO 2 in CH 2 DOH (Cambridge Isotope Laboratories Inc., >98%) (Sokolov et al., 1999). CH 3 ONO was synthesized analogously. The isotopic purity of the CH 2 DONO was checked using IR spectroscopy, which indicated an upper limit for a possible CH 3 ONO impurity of 0.016%. Cyclohexane (ca. 100 ppm) was added to the reaction cell to limit unwanted reactions involving hydroxyl radicals. The concentration of cyclohexane in the initial reaction mixtures was about twice the concentration of methyl nitrite (ca. 50 ppm). During each experiment the reaction mixture was photolyzed in 5-7 steps, photolysis alternating with recording spectra, giving a total photolysis time of about 4 min. During a typical experiment 30% of the methyl nitrite was consumed and the final concentration of cyclohexane was >99% of the initial concentration. The FTIR spectra obtained from the experiments were analyzed using a nonlinear least-squares spectral fitting procedure developed by Griffith (Griffith, 1996). Reference spectra for H 2 O, N 2 O and NO 2 were taken from the HITRAN database (Rothman et al., 2005). Absolute IR absorption cross sections for HCHO, HCDO and DCDO were measured in the LISA photochemical reactor at a resolution of 0.125 cm −1 , a path length of 12 m, a temperature of 296 K and a total pressure of 1013 mbar, as detailed by Gratien et al. (2007a). The remaining reference spectra were recorded using conditions (temperature, total pressure in cell, path length, resolution) nominally identical to those used in the experiments.

Table 1. Reaction mechanism used to model methyl nitrite photolysis chemistry. The table does not show the analogous mono-deutero reactions and cross reactions; the full list of reactions and rates is available as supplementary information (http://www.atmos-chem-phys.net/7/5873/2007/acp-7-5873-2007-supplement.pdf). The mechanism was truncated by assuming in R28 that the peroxide has the same reactivity as c-C 6 H 12 .
Chemical model
A model of the chemistry occurring in the smog chamber using the CH 3 ONO precursor was constructed using Maple (2005). The chemical mechanism is shown in Table 1 and Fig. 2, which omit the isotopic variants of the reactions for brevity; the 36 reactions shown here expand into 126 isotopic sub-reactions. The entire model and output are available as Electronic Supplementary Information (http://www.atmos-chem-phys.net/7/5873/2007/acp-7-5873-2007-supplement.pdf). Reaction rates were obtained from standard compilations when available, see Table 1 (Atkinson et al., 2006; NIST, 2007; Sander et al., 2006). Initial conditions were set using the nominal conditions of the reactor, in addition to photolysis rates based on the known performance of the lamps in past experiments. The main goal of the model was to investigate the possible role of sources of formaldehyde other than the title reaction, and to examine the potential magnitude of various loss mechanisms for formaldehyde, including reactions with OH and HO 2 , and photolysis.
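The original box model was implemented in Maple; as a rough illustration of the same kind of calculation, the sketch below integrates a tiny subset of the mechanism (methyl nitrite photolysis, the title reaction, and HO2 + NO followed by OH loss to cyclohexane) with SciPy. The rate constants, photolysis rate and initial concentrations are illustrative round numbers, not the values of Table 1.

```python
# Sketch: a toy box model of the reactor photochemistry integrated with SciPy.
# Only a few reactions are represented, with illustrative rate coefficients; the
# full model expands to 126 isotopic sub-reactions.

from scipy.integrate import solve_ivp

j_MeONO  = 1.0e-3     # s^-1, CH3ONO + hv -> CH3O + NO (illustrative)
k_RO_O2  = 1.9e-15    # cm^3 s^-1, CH3O + O2 -> HCHO + HO2
k_HO2_NO = 8.0e-12    # cm^3 s^-1, HO2 + NO -> OH + NO2 (illustrative)
k_OH_cy  = 7.0e-12    # cm^3 s^-1, OH + cyclohexane (illustrative)
O2 = 4.8e18           # cm^-3, held constant
CY = 2.4e15           # cm^-3 cyclohexane, treated as constant (large excess)

def rhs(t, y):
    meono, ch3o, hcho, ho2, no, oh = y
    r1 = j_MeONO * meono
    r2 = k_RO_O2 * ch3o * O2
    r3 = k_HO2_NO * ho2 * no
    r4 = k_OH_cy * oh * CY
    return [-r1,            # CH3ONO consumed by photolysis
            r1 - r2,        # CH3O produced by photolysis, removed by O2
            r2,             # HCHO produced by the title reaction
            r2 - r3,        # HO2
            r1 - r3,        # NO (co-product of photolysis, removed by HO2 + NO)
            r3 - r4]        # OH (removed almost entirely by cyclohexane)

y0 = [1.2e15, 0, 0, 0, 0, 0]                       # ~50 ppm methyl nitrite
sol = solve_ivp(rhs, (0, 240), y0, method="LSODA", max_step=1.0)
print("HCHO after 4 min of photolysis: %.2e cm^-3" % sol.y[2, -1])
```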
Results and analysis
Figure 3 shows an example of the spectrum measured after 20 s of photolysis of the CH 2 DONO precursor, the fit to the spectrum, and the residual. The measured spectrum is accurately reproduced by the fitting procedure. Compounds included in the fit are H 2 O, N 2 O, NO 2 , HCHO, HCDO, CH 2 DONO and HO 2 NO 2 . Spectral fitting becomes progressively more challenging as the experiment proceeds. Figure 4 shows the same series (experiment, fit, residual) after 4 min of photolysis. In addition to the compounds mentioned above, DCDO, HCOOH, DCOOD, DCOOH and CH 2 DOH were included in the later fits. While the match between experiment and modeled spectrum is not as good as in Fig. 3, the overall shape and fine structure of the spectrum are fitted well, and from this we can conclude that all but trace components of the reaction mixture are accounted for. The region 1720-1750 cm −1 is the most important since this is where we see the ν(2) bands of HCHO and HCDO at 1746 and 1724 cm −1 respectively. It is likely that the larger residual in this region (compare Figs. 3 and 4) is due to the large number of species.
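The nonlinear least-squares fitting procedure of Griffith (1996) is more elaborate, but the core idea of decomposing the measured absorbance spectrum into scaled reference spectra can be sketched as a linear least-squares problem. In the sketch below the "reference spectra" are synthetic Gaussians standing in for the actual cross sections, and the "measured" spectrum is a known mixture plus noise.

```python
# Sketch: decomposing a measured spectrum into scaled reference spectra by linear
# least squares (a simplified stand-in for the nonlinear fitting actually used).

import numpy as np

wavenumber = np.linspace(1700, 1760, 600)          # cm^-1, around the HCHO/HCDO bands

def band(center, width=4.0):
    """Synthetic Gaussian band standing in for a reference spectrum."""
    return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

references = {"HCHO": band(1746), "HCDO": band(1724), "H2O": band(1710, 8.0)}
R = np.column_stack(list(references.values()))

true_amounts = np.array([0.30, 2.20, 0.50])        # known mixture used to build the test
measured = R @ true_amounts + np.random.normal(0, 0.01, wavenumber.size)

fit, *_ = np.linalg.lstsq(R, measured, rcond=None)
for name, amount in zip(references, fit):
    print(f"{name}: fitted scale factor {amount:.2f}")
```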
In all experiments, formic acid is present at concentrations comparable to those of HCHO. The relation between the concentrations of the formic acids appears to be [DCOOH] > [HCOOH] > [DCOOD], but this relation cannot be quantitatively verified due to the lack of calibrated reference spectra. DCDO is present at concentrations just slightly above the measurement threshold.
Figure 5 shows the results of the analysis, with the mole fraction of HCDO plotted versus that of HCHO. Results from three independent experiments are included in the figure. The slope of the line gives a branching ratio of 7.593±0.026. The photochemistry of the precursor introduced competing sources of, and loss processes for, formaldehyde. The model showed a particular sensitivity to the isotopic purity of the CH 2 DONO, since CH 3 ONO would produce only HCHO, the minor product of CH 2 DO + O 2 . A spectrum of the CH 2 DONO sample was recorded at relatively high pressure to look for traces of CH 3 ONO, which were not seen. The previously determined upper limit of the concentration of CH 3 ONO (0.016%) was used as a worst-case scenario in the model. The model showed that the title reaction is the dominant source of HCHO, accounting for >99.5% of the amount formed. The model shows that by the end of the experiment, 12% of the HCHO that is formed is lost via reaction with HO 2 , and 4% by reaction with OH.
The methyl nitrite was synthesized from methanol and purified by distillation.As shown in Fig. 2 residual methanol is an alternative source of the formaldehyde isotopologues.
The model showed that 2.3% of the HCDO formed did not come from the CH 2 DO precursor but instead reactions such as CH 2 DONO + OH → HCDO + NO + H 2 O, CHDOH + O 2 and CH 2 DONO 2 + OH.By the end of four minutes of photolysis 12% of the HCDO has been lost through reaction with HO 2 , and 3% through reaction with OH.We are not aware of any reactions that would lead to isotopic scrambling of the hydrogen isotopes connected to the carbon atom.
The model was used to determine the branching ratio for HCDO vs. HCHO formed by the title reaction in the absence of loss, as well as the actual concentration ratio of HCDO to HCHO that would be present in the cell. This indicated that the experimental branching ratio (7.593±0.026) should be decreased by ∼1.7%, which is insignificant compared to the other sources of error discussed below. Even though some HCHO and HCDO is lost to reactions through the course of the experiment, the model shows that this affects both formaldehyde isotopologues almost equally (i.e. in proportion to their concentration), thereby canceling when the relative rate is determined. Feilberg et al. (2004) have determined that the relative rate of reaction k OH+HCHO /k OH+HCDO is 1.28±0.01. The relative rate for the bimolecular combination reaction k HO2+HCHO /k HO2+HCDO is not known, but based on the mechanism only minor isotope effects are expected, as D is a spectator. Nonetheless we assign a large uncertainty to the isotope effect of this reaction. In some cases when the reactivity of a deuterated species was not known, a best estimate was made; for example, the rate of H abstraction will scale with the number of H atoms, and a C-D bond's reactivity is 1/8 that of a C-H bond, based on the reactivity of deuterated methanes. Since there is some unavoidable uncertainty in this process, it is reassuring that the model indicated only a minor correction; the final result is largely the direct result of the experiment and is rather insensitive to changes in the model.
The model takes advantage of earlier studies of the relative rate of reaction of the formaldehyde isotopologues with OH (Feilberg et al., 2004) and of their relative photolysis rates and quantum yields (Feilberg et al., 2007). Sources of error in the experiment include the standard deviation of the measurement of the IR absorption cross sections of HCHO and HCDO, estimated to be 3% (Gratien et al., 2007a, b), and the error in the spectral fit shown in Fig. 5, 0.3%. The model predicts that altogether 19% of HCHO and 17% of HCDO have been lost after 4 min of photolysis (and that this results in a 1.7% correction to the product ratio). An error of 50% in the amount of HCHO and HCDO lost was used to assess the overall error, including for example the uncertainty in the isotope effect of k HO2+HCHO /k HO2+HCDO . Altogether these considerations result in a relative rate of production of HCDO to HCHO of 7.46±0.76, or a branching ratio of 88.2±1.1% for HCDO and 11.8±1.1% for HCHO. The isotope effect is significant, given that on a purely statistical basis the corresponding numbers would be 67%:33%.
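The final percentages quoted above follow from the production ratio by simple algebra; the short sketch below reproduces the conversion and the first-order error propagation.

```python
# Sketch: converting the HCDO/HCHO production ratio into branching ratios with
# first-order error propagation, reproducing the numbers quoted in the text.

r, sigma_r = 7.46, 0.76                 # HCDO/HCHO production ratio and its uncertainty

br_hcdo = r / (1.0 + r)                 # branching fraction to HCDO
sigma_br = sigma_r / (1.0 + r) ** 2     # d/dr [r/(1+r)] = 1/(1+r)^2

print(f"HCDO: {100*br_hcdo:.1f} +/- {100*sigma_br:.1f} %")       # 88.2 +/- 1.1 %
print(f"HCHO: {100*(1-br_hcdo):.1f} +/- {100*sigma_br:.1f} %")   # 11.8 +/- 1.1 %
```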
A fraction of the formaldehydes is lost through reaction over the course of the experiment, mainly by reaction with OH and HO 2 . These radicals cannot be observed directly, but their concentrations can be approximated using the model, and the model can be checked for its ability to predict the concentrations of long-lived species. The reaction of OH with CH 2 DONO is the main competing source of HCDO, producing <2.5% of the total. The source of OH is directly linked to methyl nitrite photolysis (R1-R3), and cyclohexane was added to the cell to remove OH (R4). The mechanism R1-R3 is straightforward, producing OH quantitatively when methyl nitrite is photolysed. In addition, the lifetime of OH is controlled by R4 since cyclohexane is present in excess; according to the model over 95% of OH reacts with c-C 6 H 12 . According to the model the relative concentration of c-C 6 H 12 changes by ∼10 −4 through the course of the experiment, maintaining a constant sink. Since the sources and sinks of OH are understood, we feel that the model produces an accurate concentration, certainly within the 50% error assigned to the model correction.
According to the model over half of HO 2 is produced by R29, RO + O 2 , and over 40% by the title reaction.These production rates are largely constant throughout the experiment.Over half of HO 2 reacts with NO via R3, and 1/3 produces HO 2 NO 2 (PNA) through R10.These rates are also relatively constant.HO 2 cycles between R11 and R20, converting formaldehyde into formic acid.One check that the model handles this reaction mechanism is shown in Fig. 6, the comparison between model and experimental formaldehyde concentrations.
Figure 6 also shows the carbon balance in the experiment and in the model.The fitting procedure can account for over 80% of the carbon in the experiment.One source of uncertainty is the lack of calibrated reference spectra for DCOOH, HCOOD and DCOOD; another is the estimated 3% error in the reference spectra for HCHO and HCDO.The model provides insight into the reaction mechanism that would otherwise not be possible.The model supports the theory that a fraction of formaldehyde is lost by HOx reactions, and allows one to estimate what that fraction is.In addition the model demonstrates that alternative sources of formaldehyde are minor, and that the effect of side reactions on the branching ratio is minor.
Discussion
The greater activity of the H atom in methoxy towards abstraction by O 2 , relative to D, leads to an enrichment in deuterium in atmospheric hydrogen.As shown in Feilberg et al. (2007) this enrichment is only partly counteracted by depletion in the photolysis of formaldehyde to produce molecular hydrogen and thus photochemically produced hydrogen is expected to be enriched in deuterium relative to the starting material, methane.This finding is in agreement with the results of field and modeling studies (Gerst and Quay, 2001;Keppler et al., 2006;Quay et al., 1999;Rhee et al., 2006b;Rockmann et al., 2003;Zahn et al., 2006).In addition the mechanistic detail provided can help in extending and refining the results of such studies.
Wantuck and coworkers, who investigated the absolute rate of the reaction in the range 298-973 K using laser induced fluorescence, distinguish three processes; simple hydrogen abstraction, isomerization (CH 3 O + O 2 → CH 2 OH +O 2 ), and decomposition (CH 3 O + O 2 → HCHO + H +O 2 ) (Wantuck et al., 1987).The isomerization and decomposition processes explain the non-Arrhenius behavior at high temperature.
Setokuchi and Sato used variational transition state theory to derive the temperature dependence of the rate constant (Setokuchi and Sato, 2002), and the reaction mechanisms have been examined in two studies (Bofill et al., 1999; Jungkamp and Seinfeld, 1996). Setokuchi and Sato report a theoretical rate constant of 9.9×10 −16 cm 3 molecule −1 s −1 at 300 K, slightly below the experimental rate constant. Their work assumes that the reaction proceeds by direct hydrogen abstraction via a pre-reaction complex, as shown by Bofill et al. (1999). Three possible mechanisms are discussed by Bofill et al. (1999); the two more complicated mechanisms are disregarded since their activation energies are around 20 times higher than for the direct hydrogen abstraction reaction. The hydrogen abstraction reaction is predicted to have an activation energy of only 11.7 kJ/mol and an A-factor of 3.57×10 −14 cm 3 molecule −1 s −1 at 298 K. These results are obtained from relative energies calculated at the RCCSD(T)/cc-pVTZ and CASSCF/6-311G(d,p) levels, using harmonic vibrational frequencies. Bofill et al. (1999) explained the small A-factor obtained in both computational and experimental studies in terms of a cyclic, ring-like transition state for direct hydrogen atom transfer. When quantum tunneling was considered, a rate constant was obtained that is in good agreement with the recommended experimental value; at room temperature hydrogen tunneling was the main route of reaction. This could explain the large value of the branching ratio in our study: hydrogen atoms, due to their lower mass, tunnel more easily than deuterium, thus leaving deuterium enriched on the carbon. The higher zero-point energy of the C-H bond relative to C-D also plays a role.
The results of this study can be used in detailed models of tropospheric chemistry to investigate the carbon cycle.Several studies of deuterium make use of simplified chemical mechanisms (Rahn et al., 2003;Rhee et al., 2006a;Rockmann et al., 2003), however much remains to be learned, especially given the high variability of deuterium content in atmospheric formaldehyde.For example, a recent study by Rice and Quay found variation in δD(HCHO) between −296 and +210‰ for samples obtained in Seattle, Washington (Rice and Quay, 2006).
Fig. 1. Propagation of deuterium through gas-phase atmospheric reactions converting methane to hydrogen. Co-reagent species are omitted for clarity; for the full reaction mechanism see Table 1.

Fig. 2. Reactions of the carbon-containing species included in the model of reactor photochemistry. The figure does not show the analogous mono-deutero reactions and cross reactions.

Fig. 3. Typical measured spectrum, fit to spectrum, and residual of fit after 20 s of photolysis. Compounds included in the fit are H 2 O, N 2 O, NO 2 , HCHO, HCDO, CH 2 DONO and HO 2 NO 2 . For clarity the spectra are offset vertically by 0, −0.9 and −0.9 units.

Fig. 4. Typical measured spectrum, fit to spectrum, and residual after 4 min of photolysis. Compounds included in the fit are the same as those in Fig. 3, plus DCDO, HCOOH, DCOOD, DCOOH and CH 2 DOH. For clarity the spectra have been offset vertically by 0, −1.2 and −1.3 units, respectively.

Fig. 5. The results of the analysis, with the concentration of HCDO plotted versus the concentration of HCHO. Results from three experiments are included in the figure. The slope of the line gives a branching ratio of 7.593±0.026.

Fig. 6. Product yields predicted by the model (filled symbols) and those observed experimentally (open symbols).
Construction and Implementation of Marxist Learning Platform in New Media Environment
Popularizing contemporary Chinese Marxism is urgently needed both to support the ongoing development of socialism with Chinese characteristics and as an inherent requirement of Marxism itself. This essay takes the popularization of Marxism in the new media environment as its point of departure. It examines the necessity and reality of this popularization in the new media era, considers the new development needs of the popularization of Marxism in propaganda, and further draws out original ideas for constructing a network for popularizing Marxist propaganda. In parallel, a Marxist learning platform is built using data mining technology. Studies reveal that this algorithm has high clustering accuracy and a recall rate about 6% higher than DECluster's. Additionally, this algorithm takes less time to execute on transaction sets of the same scale. This demonstrates the superior performance of the algorithm. Using the algorithm presented in this study, a user's learning records and learning interests can be distilled into intuitive patterns that can be used to analyze the user's Marxism-related learning content; these patterns can then be used to help the user create a customized Marxist learning plan.
Introduction
Marxist theory is the guiding ideology of the Communist Party of China (CPC), and it is also an important theoretical support guiding China's revolution, reform, and development. In the process of the popularization of Marxism there is a relationship between subject and object; the object is the receiver, which occupies a subordinate position in the process and is influenced by the subject of the popularization of Marxism [1]. Promoting the spread of Marxism is conducive to providing China with an ideological foundation and spiritual strength, coping with challenges in the field of consciousness, understanding the essence through phenomena, and clarifying one's duties and responsibilities. The Internet is an important medium for spreading Marxism in the new era and has opened up a new channel for spreading Marxist theory [2]. Using network media to promote the popularization of Marxism in the new era is not only an important element of theoretical innovation but also a great task for practical development. In the new media era, all kinds of ideas and cultures contend with each other, and the resulting contradictions and conflicts have created certain obstacles to the general public's acceptance of Marxism; the popularization of Marxism therefore faces severe practical challenges [3]. In order to realize the enduring appeal and widespread cohesion of classical theories in the new information age and in a modern society with a diverse set of values, great efforts must be made in the carrier, support and form, as well as in producing sufficient written work, to achieve a new breakthrough in the popularization of Marxism in the new period. Promoting the popularization of Marxism and realizing its cohesion are of utmost importance [4]. To improve national and societal cohesion, uphold socialist principles, and implement socialism with Chinese characteristics in the new era, it is crucial to study this issue. The modernization of society has given Marxism a new purpose. This study discusses the development and application of a Marxist learning platform with the aim of better promoting the popularization of Marxism in the new media environment and strengthening the dominant position of Marxism in ideology. Because educational goals differ greatly from one another, personalized education is required. To meet each student's unique learning needs, the online learning platform must also offer a personalized teaching method [5]. The age of big data, in which "data drives schools and analysis changes education," is currently in effect. From vast amounts of data, rules can be extracted using big data technology. Data mining, sorting, and analysis of massive data have proven to be effective decision-making tools used by various industries [6,7]. The education sector shares the same reality. To enhance users' learning outcomes and teaching management capabilities, it is effective to integrate data mining technology into the Marxist learning environment. The following innovations are discussed in this study as it relates to the development and application of a Marxist learning platform in the context of new media: (1) In the new media environment, this study takes the popularization of Marxism as the breakthrough point and, based on consideration of the reality and necessity of the popularization of Marxism in the new media era, further excavates unique ideas for constructing a Marxist popularization propaganda network. It has certain characteristics and values of the times.
(2) Existing learning platforms cannot meet the needs of users with different backgrounds, forms, goals, and schedules. Based on this, this study designs a Marxist learning platform built on data mining technology. The mature B/S architecture is selected for server development, and the popular MVC5 development model is adopted; the mobile terminal uses the HTML5 and MUI frameworks. This realizes interactive online courses between the PC and multiple mobile terminals.
from the overall height at the age of learning knowledge and to establish correct values in practice [10]. Blackledge analyzes the opportunities and challenges that new media present for promoting the popularization of Marxism, and finally proposes a path strategy to enhance that popularization, covering content and dominance, subject and audience, scientificity, and credibility [11]. Lasslett systematically analyzes capitalism and socialism from the perspective of Marxism; from the perspective of Marxist empirical theory and social justice, it is argued that no capitalist society in history can be more democratic than the actual capitalist society, nor will a socialist society have a bureaucratic privilege system [12]. Goncalves notes that the global flow of information is the premise, process, and result of economic globalization; the collection, sharing, and dissemination of information have become the center of human activities. On the one hand, new media undertakes the function of mass media, and on the other hand, it also provides new space and channels for the dissemination of massive information content [13]. Westra discusses the spread of Marxism, analyzes its dissemination in China, and addresses the predicament of realizing the popularization of Marxism and the fate of Marxism and human civilization [14]. Hornborg, against the background of profound changes in globalization and a media environment shaped by the vigorous development of information technology, and guided by dialectical materialism and historical materialism, carried out a systematic and comprehensive analysis [15]. Marxism must be made more widely known through the use of new media, and this process must be developed and innovated in order to maximize this influence. This study further examines the distinctive design concepts of the popularization of Marxism propaganda network in the context of the new media, taking the popularization of Marxism as the breakthrough point and taking into account the reality and necessity of the popularization of Marxism in the new media era. In addition, this study develops a data mining-based Marxist learning platform. The server side adopts a mature B/S architecture and implements interactive online courses between the PC and numerous mobile terminals. This study exhibits certain distinctive traits and has contemporary value.
Marxist Communication Innovation in the New Media Technology Environment
Marxism is a valid worldview and scientific methodology that can better serve the general public. The development of new media has created previously unheard-of opportunities and obstacles for the spread of Marxism. The new media is omnidirectional, three-dimensional, multidimensional, and interactive, making it more timely, vivid, and penetrating than traditional media, which is typically one-dimensional, linear plane media. The vast majority of people are the primary force behind the popularization of Marxism. It is necessary to enhance the effectiveness and accomplishment of the Marxist theory of the masses in order to achieve popularization in the new era [16]. The degree of comprehension and application of Marxist theory is referred to as "theoretical quality." The way people live and communicate has changed significantly as a result of the rapid popularization of new media and network technology. The starting point for investigating the impact of mass communication is the audience. Audience groups vary as a result of differences in social development. The emergence of new media has changed the situation in which the audience of one-way communication, from paper media to traditional media, tended to consist of Marxist professionals. Marxism is becoming more and more popular among the general public as a result of learning, which is a dynamic process of development from shallow to deep and from easy to difficult. We should make full use of short videos and new media platforms, and guide and encourage people through popular forms of communication.
In communication science, the interaction between information and environment means constantly collecting, integrating and optimizing information through feedback so as to achieve a better communication effect. Combining Marxist theory with the effective dissemination of new media can subtly immerse and educate people, and make Marxist theory more convincing and attractive. To improve the quality of Marxist theory, we should grasp the key links and highlight the important position of the younger generation in the audience of Marxist theory, especially young college students. The question of ideology is the question of belief. Young students have relatively high comprehensive quality and more open ideas, which lays a practical foundation for ideological communication.
The most important feature of Marxist theory is that it is a changing and developing theory with practice as the only test standard. Specifically, Marxist theory is a theory based on practice. Its flexibility lies in the fact that it is not a static theory: it is constantly updated and optimized with the development of the times and the evolution of history, so it has its own special historical mission in each period. The popularization of Marxism must be realized with the help of effective carriers, and the rapid development of new media means we have to face up to its existence. The technical characteristics of the Internet and the spreading law of new media have allowed new media to pass through, in just 10 years, what took other media decades or hundreds of years, gradually becoming the mainstream media with wide influence. To promote the popularization of Marxism in the new era, we should actively make use of new media network platforms, short videos, and other new communication methods, so as to make them the spiritual pillar and action program of the Chinese nation, and to make a good start for the ideological construction of a socialist modernized country.
Data Mining
Data mining is the process of extracting previously unknown but potentially useful hidden information and knowledge from a large volume of imperfect, noisy, fuzzily distributed, and random practical application data. Data mining is a field of study that aims to discover knowledge, visualize data and knowledge, and make them simple to understand. Its concepts come from database systems, statistics, and machine learning.
Three strong technical pillars support data mining research: databases, AI, and mathematical statistics. Statistics, decision trees, neural networks [17,18,19], fuzzy logic, linear programming, and other mathematical techniques are used in data mining. Data mining technology is essentially a type of technology that processes database information using statistics; its main goal is to present that information as quantitative and visually accessible data through data regularization. The most common database types addressed by data mining are relational databases, object-oriented databases, transaction databases, deductive databases, multimedia databases, active databases, spatial databases, heterogeneous databases, text databases, Internet information databases, and emerging data warehouses. Data mining is also a business information processing technology that, from a business standpoint, extracts useful information from a large amount of business data and then, through conversion, analysis, and modeling operations, ultimately obtains effective data that can support business decision-making [20]. Many network teaching technologies, including automatic management, the virtual classroom, and collaborative learning, use data mining. There is a lot of data in current information systems, but not much of it is truly valuable. We can therefore obtain information that is helpful for business operations and the enhancement of competitiveness through a thorough analysis of a large amount of data. According to the established business objectives, this new information processing technology can explore and analyze a large amount of data, uncover hidden, undiscovered, or as-yet-unverified laws, and further model them. The steps in educational data mining are shown in Figure 1.
Data preprocessing, data mining implementation, and the evaluation and representation of mining results are the three general stages that make up data mining. In order to obtain statistical analysis data that can serve as a reference for decision-making, it uses the query, retrieval, and report functions of the existing database management system, combines them with multidimensional analysis and statistical analysis methods, and performs online analysis and processing [21]. On a deeper level, the database contains previously unheard-of implicit knowledge. The process of extracting different models, summaries, and derived values from existing data sets is known as data mining. The three main components of the knowledge discovery process are data collection, data mining, and interpretation and result evaluation. The general steps involved in data mining can be summed up as follows: selecting the mining object, preparing the data, creating the model, mining the data, producing the results, applying the rules, and so on. Association rules, feature rules, distinction rules, classification rules, summary rules, deviation rules, clustering rules, pattern analysis, and trend analysis are all part of the acquired knowledge. Certain regularities found in data sets are called association rules, and the mining of association rules from data is one of the most significant and frequently used models in the field of data mining.
Data mining technology has the following characteristics: ① the scale of the processed data is very large; ② in data mining, the discovery of rules is based on statistical regularities; ③ the rules discovered by data mining are dynamic. In this study, data mining is used in the field of education to analyze each learner's access patterns. By mining the corresponding access history records, the system then offers different users page information tailored to their access preferences and learning needs. In addition, it can assess how student demand is changing, which helps to make educational websites more competitive. The three types of data mining techniques are classification and prediction, cluster analysis, and association analysis. Classification and prediction are two fundamental types of data analysis, primarily used to extract and describe significant data sets and to forecast future data development trends. Classification predicts discrete categories for data objects, whereas prediction estimates continuous values and therefore requires a continuous-valued function model. Data mining is the main stage of the process, and its main tools are computer and network technology; these tools are used to process the data in the database and carry out statistical analysis operations. Statistical analysis, machine learning, pattern recognition, artificial neural networks, and other techniques are frequently used in data mining.
Construction of Marxist Learning Platform.
Data mining is a comprehensive process that extracts useful, previously untapped information from sizable databases and then uses it to inform decisions or advance knowledge. Following the creation of the pertinent data mining model for each individual application, a variety of algorithms will be available.
The goal of clustering is to divide all of the data into distinct groups, maximizing the distance between groups and minimizing the variation within groups. Clustering gathers data and divides them into different categories based on the characteristics of the actual data and how similar they are; that is, data are categorized according to their attributes. After clustering you will have some orderly groups, some of which directly reflect the internal relationships of objects, while others require additional processing with other tools. Users with similar access patterns can be grouped by applying the clustering algorithm to Web access data. Users can use these clustering results to enhance their learning, and network administrators can use them to enhance their network services. In association analysis, the values in the database are correlated. Association rules and sequential patterns are two methods that are frequently employed. The purpose of an association rule is to find the correlation between various items in the same event; sequential pattern mining, by contrast, looks for the temporal correlation between events, such as examining the relationship between students' test scores and their mastery of a particular knowledge point. Data mining technology's primary goal is to discover the internal relationships and laws that govern data, so the output of results and the application of laws can be thought of as the technology's result presentation stage. The results of the prior data mining are visually displayed through digital and graphical output, and the results with application significance are screened out for directed use. The model of the Marxist educational platform is shown in Figure 2.
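As a minimal illustration of the clustering step just described (this is not the paper's own algorithm, which is only compared against K-means later), the sketch below groups users by hypothetical Web-access feature vectors using scikit-learn's KMeans; the feature names and values are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user access features:
# [pages viewed per session, average dwell time (min), share of theory pages]
user_features = np.array([
    [12, 8.5, 0.70],
    [ 3, 2.0, 0.10],
    [10, 9.0, 0.65],
    [ 4, 1.5, 0.20],
    [11, 7.0, 0.80],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_features)
print(kmeans.labels_)           # cluster assignment for each user
print(kmeans.cluster_centers_)  # prototype access pattern of each cluster
```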
Data mining requires a significant amount of manual labor because it is not an automated process. The business object being mined serves as the basis for the entire process, the engine that propels data mining, the foundation for evaluating the results, and the roadmap that directs analysts through the data mining process. Smoothing, aggregation, data generalization, normalization, and attribute construction are all common types of data transformation. It is essential to standardize the data in order to reduce data values to a specific range for distance-based mining algorithms. Using Web mining to automatically find and extract information from Web pages and services is a crucial technical method for offering personalized services. Personalized information about users can be obtained by keeping track of the web pages they have previously visited; these records can be used to determine users' typical search patterns and to anticipate which pages they will want to visit in the future. Although the information imported after data collection is extremely rich, some of it is useless, so the caliber of the research data must be checked in advance of additional analysis.
The data produced in the earlier stage are reprocessed: their accuracy and consistency are verified, noisy data are handled, and missing data are filled in. The data format must be converted and cleaned up before the mining program can process any data. Package cleaning, user identification, session identification, path completion, and format conversion are all included in the preprocessing steps. It is necessary to share the original data with the teaching system's database during the data preprocessing process. The core of the Marxist learning platform is the data acquisition and preprocessing module. The system's user analysis model, which serves as the data source for the personality analysis engine, determines the kind of information it gathers, and the type and volume of data it gathers directly affect the caliber of the system analysis. Missing values in tuples can be filled in manually, with a global constant, or with the average value of the attribute.
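A minimal sketch of the imputation options just listed (global constant versus attribute mean), using pandas; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical learner records with missing quiz scores
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "quiz_score": [85.0, None, 78.0, None],
})

# Option 1: fill missing values with a global constant
filled_constant = df["quiz_score"].fillna(0.0)

# Option 2: fill missing values with the attribute's mean
filled_mean = df["quiz_score"].fillna(df["quiz_score"].mean())

print(filled_constant.tolist())  # [85.0, 0.0, 78.0, 0.0]
print(filled_mean.tolist())      # [85.0, 81.5, 78.0, 81.5]
```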
Let X be a set of items; if X ⊆ T, then the transaction T is said to contain X. An association rule is then expressed as

X ⇒ Y, where X and Y are itemsets with X ∩ Y = ∅.

Here the support s represents the frequency with which the transactions covered by the rule appear. The support of the association rule X ⇒ Y is defined as

s(X ⇒ Y) = |T(X ∪ Y)| / |T|,

where |T(X ∪ Y)| is the number of transactions containing X ∪ Y in the dataset and |T| is the total number of transactions in the dataset. The confidence c represents the strength of the association rule X ⇒ Y and can be defined as

c(X ⇒ Y) = |T(X ∪ Y)| / |T(X)|,

where |T(X ∪ Y)| is the number of transactions containing X ∪ Y and |T(X)| is the number of transactions containing X. The confidence of the rule X ⇒ Y thus represents the conditional probability of Y given X, namely c(X ⇒ Y) = P(Y | X).

Suppose there are m students and n courses, and the students are divided into t classes. Let the grade of student i in course j be X_ij. The average grade for course j is

X̄_j = (1/m) Σ_{i=1..m} X_ij,

and the sample range is

R_j = max_i X_ij − min_i X_ij.

The normalized score is then

X′_ij = (X_ij − X̄_j) / R_j.

The range will rise when a student's grade is excessively high or low, and the weight of the class grade will decline; the impact of one-off, unintentional factors is then too great. For this purpose, the sample standard deviation S_j is used instead of the range R_j, namely

S_j = sqrt( Σ_{i=1..m} (X_ij − X̄_j)² / (m − 1) ),

and the normalized score becomes

X′_ij = (X_ij − X̄_j) / S_j.

This eliminates the influence of extreme circumstances when judging students' achievements.

The data collection and preprocessing module, the personalized processing and analysis center, and the personalized scheduling center are the three main parts of the personalized engine. It is important to make sure that all user data are accurately and completely imported into the system during the user data import process. Even though some user data appear useless, after subsequent changes to the clustering rules and screening logic criteria such data can still serve as a crucial cluster analysis parameter; some information is removed in this case because the system is still at the research stage. A knowledge point describes knowledge in the teaching field and is a complete teaching unit. It has some fundamental characteristics such as learning content, difficulty, and importance, and encompasses compound knowledge points and meta-knowledge points: meta-knowledge points are knowledge points that cannot be separated structurally, while compound knowledge points are formed by combining a group of knowledge points. The central component of the entire data warehouse environment, where data is stored and retrieval is supported, is the data warehouse database; its support for large amounts of data and its quick retrieval technology set it apart from operational databases. The biggest drawback of data integration is how easily it produces large amounts of redundant data and data inconsistency. These redundant data will significantly slow down the algorithm's performance and may even result in inaccurate or useless output, so the main challenge in data integration is eliminating redundant data. To ensure the matching of functional dependencies and reference constraints during data integration, special attention should be paid to the data structure when comparing the attributes of one database with those of another. Compared to the conventional model, the Marxist learning platform model adds a pattern discovery engine and a more extensive user information base.
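A small, self-contained sketch of the support and confidence definitions above, computed over a toy transaction set (the course names are invented for the example).

```python
def support(transactions, itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    hits = sum(1 for t in transactions if itemset <= set(t))
    return hits / len(transactions)

def confidence(transactions, X, Y):
    """Confidence of the rule X => Y: support(X union Y) / support(X)."""
    return support(transactions, set(X) | set(Y)) / support(transactions, X)

# Toy transaction set: courses accessed together in one session
transactions = [
    {"intro_marxism", "political_economy"},
    {"intro_marxism", "dialectics"},
    {"intro_marxism", "political_economy", "dialectics"},
    {"political_economy"},
]

print(support(transactions, {"intro_marxism", "political_economy"}))       # 0.5
print(confidence(transactions, {"intro_marxism"}, {"political_economy"}))  # ~0.667
```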
A closed-loop feedback link is added to the system from the perspective of control so that it can automatically adapt in response to new information to meet user learning needs.
Result Analysis and Discussion
This chapter focuses on providing a timely, accurate, and scientific basis for decision-making in Marxist teaching, focusing on the analysis of knowledge point difficulty, knowledge point sequence, user browsing behavior, personalized service, teaching strategy support, teaching evaluation, user classification, and other topics. The technology is mainly based on data mining, supplemented by OLAP analysis. This system adopts a B/S structure. The B/S mode is based on WWW technology and inherits and extends the application of network hardware and software platforms and development from the traditional client/server mode. The system development environment of this study is shown in Table 1.
The B/S mode is especially suitable for online information publishing, so that information across the whole Internet/Intranet can be shared. The system adopts the B/S mode, and users can learn online simply by installing browser software on the client. Taking the data in network teaching as the main data source, supplemented by other information systems and external data sources, a unified analytical data view is established to form a professional data warehouse. The structure of the system mainly includes three parts, namely the student client, the teacher client, and the administrator client. The main functions of the student client include queries of basic information and course information, queries of passing and failing results, and make-up examination arrangements for failing results.
The data source mainly includes standardized data and non-standardized data, which enter the analytical data environment (the data warehouse) through the ECTL process. Based on the OLAP multidimensional data set and data mining model, personalized teaching service is provided. Two-way arrows show the two-way flow of information. The pattern analysis module will improve the Marxist teaching service according to user feedback. The data sources of data preprocessing in this study include Log files, web pages, web page structures, user files, login information, etc. The Log files include Serverlog, Proxyserverlog and Clientcookielog.
The log records the visiting and browsing behavior of website users and can be stored in two formats: one is the common log file format; the other is the extended log file format. First, in order to compare the convergence performance of this algorithm, the K-means algorithm, and the DECluster algorithm, this chapter conducts experiments. The convergence of the fitness function with the number of iterations on the Iris data set is shown in Figure 3. Its convergence with the number of iterations on the Wine dataset is shown in Figure 4.
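The paper's own algorithm and DECluster are not reproduced here, but the flavor of such a convergence experiment can be sketched for plain K-means on the Iris data with scikit-learn; this is an illustrative sketch only, not the study's code, and it simply records the clustering objective (inertia) as the iteration budget grows from a fixed random initialization.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X = load_iris().data  # 150 samples, 4 features

# Track the objective (inertia) as the iteration budget grows,
# starting every run from the same random initialization.
for max_iter in (1, 2, 3, 5, 10, 20):
    km = KMeans(n_clusters=3, init="random", n_init=1,
                max_iter=max_iter, random_state=0).fit(X)
    print(max_iter, round(km.inertia_, 2))
```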
Each learning topic has a topic table system in which the various data related to that topic are placed. In order to support decision-making, a summary table group covering thousands of data items is also set up, and to further support decision-making there are thousands of data mart groups in which decision-support information generated after data processing is placed. Learners' interest in a course can be gauged from two aspects: if learners customize the course in the Marxist learning platform, they are more interested in it; and by analyzing learners' access logs, we can obtain their historical browsing records and their evaluations of courses. The recall results of the different algorithms are shown in Figure 5. Evaluating teaching strategies is one of the important responsibilities of educators. The evaluation of teaching strategies not only provides information feedback and stimulates users' learning motivation, but also serves as a means of checking the curriculum plan, teaching procedures, and even teaching objectives. Evaluation should follow the principle of "comprehensive evaluation content, diversified evaluation methods, multiple evaluation occasions, and an organic combination of self-evaluation and mutual evaluation." In this chapter, tests are carried out with support degrees of 0.2, 0.3, 0.5, and 0.7, respectively. The test data are shown in Tables 2 and 3.
According to the above two data tables, this chapter charts the data when the number of transactions in the transaction set is 1000, as shown in Figure 6.
The data in Figure 6 show that, as the minimum support degree gradually increases, more items are filtered out and fewer frequent itemsets are generated, which tends to reduce the execution time of the various algorithms. Users' achievements need to be designed with particular care, since they are the most significant data source in the Marxist learning platform. Nine forms in particular need to be included: the college form, specialty form, student form, curriculum schedule, course selection form, achievement form, student client form, and teacher client form. Course recommendation takes two forms: hot course recommendations and personalized course recommendations. This module tallies the number of times each course is played and recommends the current hot courses on the homepage; a relevance algorithm is also used to rank similar users according to each user's search content and to suggest appropriate courses based on that content. Figure 7 displays the execution times of the various algorithms for transaction sets of different scales.
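A minimal sketch of the "hot course" recommendation described above: tally play counts and surface the top-N titles for the homepage (the course names and counts are invented).

```python
from collections import Counter

# Hypothetical play log: one entry per time a course video is played
play_log = [
    "intro_marxism", "political_economy", "intro_marxism",
    "dialectics", "intro_marxism", "political_economy",
]

def hot_courses(log, n=2):
    """Return the n most frequently played courses."""
    return [course for course, _ in Counter(log).most_common(n)]

print(hot_courses(play_log))  # ['intro_marxism', 'political_economy']
```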
It can be seen that the execution efficiency of this algorithm is better than that of the other two comparison algorithms. This is because the algorithm in this study reduces the connection computation. The experiments in this chapter show that the clustering accuracy of this algorithm is high and that its recall rate is about 6% higher than that of the DECluster algorithm. In addition, under transaction sets of the same scale, the execution time of this algorithm is lower. This shows that the algorithm has a certain performance advantage. Generally speaking, data mining technology has greatly helped us to obtain the required information from the vast network of teaching resources more quickly and accurately. Based on the data used, the power of mining has been brought into full play in the Marxist learning platform: the website can be adjusted in a timely, purposeful, and well-grounded way, steadily improving learner satisfaction.
Conclusions
New media can greatly facilitate all kinds of communication between teachers and students, whether in study, work, or daily life. At the same time, the publicity and dissemination of Marxism through new media are highly professional and technology-intensive, which requires a group of professional, technically skilled and well-rounded talents with excellent political quality and professional skills, who can keep up with the forefront of information technology development and have strong R&D capabilities. Through literature research and the combination of theory with practice, this article studies new media communication methods for promoting the popularization of Marxism. To promote the popularization of Marxism in the new era, it is necessary to improve the organizational level of the masses' study of Marxism, stimulate the endogenous vitality of the masses' study of Marxism, organize the masses to study Marxism institutionally, and encourage the masses to study Marxism independently. On this basis, this study constructs a Marxist learning platform. Experiments show that the clustering accuracy of the proposed algorithm is high and that its recall rate is about 6% higher than that of the DECluster algorithm; in addition, under transaction sets of the same scale, its execution time is lower, indicating a certain performance advantage. This research has value at both the theoretical and practical levels. In the new media environment, we should be proactive, change with the times, follow the trend, give full play to our subjective initiative and make use of the advantages of new media to promote the popularization of Marxism.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Multiple Primary Malignancies: Metastatic Renal with Early Breast and Endometrial Cancers: A Case Report
Double primary malignancies can be divided into two categories, depending on the interval between tumor diagnoses. A secondary malignancy can be defined as a new cancer that has occurred as a result of previous treatment with radiation or chemotherapy. A second primary malignancy can occur at any age but most commonly occurs in old age. A 46-year-old premenopausal female patient presented to our outpatient clinic complaining of a mass in her right breast; routine work-up for distant metastasis revealed multiple hepatic metastases, a right renal mass, and bone metastases. Palliative radiotherapy to tender and weight-bearing sites was given, followed by 4 cycles of systemic chemotherapy (FEC regimen). Tru-cut needle biopsy of the renal mass detected renal cell carcinoma of clear cell type. The patient was started on sunitinib and tamoxifen with a bisphosphonate (zoledronic acid); assessment of the response revealed a reduction in the size and number of hepatic focal lesions (HFLs) and in the size of the renal mass, so cytoreductive nephrectomy was performed and the patient then continued on tamoxifen and sunitinib. Collectively, due to the rising incidence of multiple primary malignancies, further studies should be done not only for better clinical evaluation and treatment but also for accurate determination of possible causes, pathogenesis, effective management and screening programs.
Introduction
In general, the incidence of double primary malignancy is not considered rare [1] [2]; the risk of a second cancer has been 10% at 20 years and 26% at 30 years following treatment for Hodgkin disease [3], and 3.8% at 10 years versus 7% at 15 years for patients treated with doxorubicin-based regimens for breast cancer [4].
Double primary malignancies can be divided into two categories, depending on the interval between tumor diagnoses [6]. Synchronous malignancies are second tumors occurring either simultaneously with, or within 6 months after, the first malignancy, while metachronous malignancies are secondary tumors developing more than 6 months after the first malignancy.
Warren and Gates [7] [8] [9] [10] defined the criteria for the diagnosis of double primary malignancies: both the primary and the secondary tumor must be confirmed histologically; if both tumors arise in the same location, they should be separated by at least 2 cm of normal mucosa or by at least 5 years in time; and the probability that the second tumor is a metastasis of the primary one must be excluded.
A secondary malignancy can be defined as a new cancer that has occurred as a result of previous treatment with radiation or chemotherapy. Depending on the schedule of treatment, the most common secondary cancers are skin cancer, breast cancer, acute leukemia, and colorectal, lung and stomach cancer. A second primary malignancy can occur at any age but most commonly occurs in old age [11].
The aim of this case report is to present a rare case of triple malignancy and to describe the extent to which our treatment, especially the combination of tamoxifen and sunitinib, was successful.
Case Report
A 46-year-old premenopausal female patient presented to our outpatient clinic complaining of a mass in her right breast. On physical examination there was a mass about 3 cm in diameter in the upper outer quadrant with palpable axillary lymph nodes, a palpable right renal mass, and tenderness at the right shoulder and lumbosacral spine. Tru-cut needle biopsy revealed infiltrating duct carcinoma, NOS, grade I (Figure 1). Routine metastatic work-up for distant metastasis, including a Tc-99m whole-body bone scan and contrast-enhanced MSCT of the chest, abdomen and pelvis, revealed multiple hepatic metastases with restricted diffusion in segments VIII, VII, VI, and IV, varying in size from 1 to 3 cm (Figure 2 and Figure 3), with a right renal mass about 6 cm in diameter (Figure 2).
Discussion
Not only environmental factors but also genetic factors, previous chemotherapy, previous radiotherapy, hormonal treatments and other factors are responsible for the etiology of double malignancy; these factors interact to produce these complicated lesions. Synchronous breast cancer and renal cell carcinoma is very rare, with only a few reported cases worldwide [12]. Eight cases of synchronous breast primaries with RCC were reported in a population-based study by Jiao F. et al. [12], and metachronous breast tumors with a renal primary have been reported in the literature both as metastases and as second primaries [13].
Only 3% of metastatic RCC cases are reported to have breast metastasis [14]. Although the breast IDC and the RCC were discovered synchronously, we considered our case to have metachronous tumors, with an early RCC that became metastatic and, lastly, the development of breast cancer. To confirm both diagnoses, immunohistochemistry for vimentin and cytokeratin was performed on the paraffin breast block, which was positive for cytokeratin and negative for vimentin; furthermore, immunohistochemistry for ER and PR on the paraffin RCC block was negative.
Based on the complete response of the breast cancer to chemotherapy and hormonal treatment, confirmed by PET/CT, and the expected short survival from metastatic RCC, we decided not to proceed to mastectomy. Although there is controversy about the role of cytoreductive nephrectomy (CN) in metastatic RCC patients receiving targeted therapy [15], as it does not change the clinical outcome, our patient had intermediate risk at presentation and received a 6-month period of pre-surgical targeted therapy that changed her risk to favorable, so CN was subsequently performed. Our patient has now achieved a good clinical response with only a solitary hepatic lesion and a few asymptomatic bone metastases, good organ function, and better performance status. Would our patient benefit from modified radical mastectomy or radiofrequency ablation of the hepatic focal lesion? We prefer to continue hormonal treatment and targeted therapy.
Conclusion
Due to the rising incidence of multiple primary malignancies, further studies should be done not only for better clinical evaluation and treatment but also for accurate determination of possible etiologies, pathogenesis, effective management and screening programs.
Figure 2. MRI of the breast showing the breast mass; pathology revealed infiltrating duct carcinoma.
Figure 3. Whole-body MRI showing a renal mass at the lower pole, with pathology of clear cell RCC, and hepatic and bony metastases.
Figure 4. Axial PET image of the liver with low SUV of the solitary hepatic focal lesion.
Figure 5. Coronal PET image of the body showing limited SUV activity of the right shoulder lesion.
Biomechanics in Orthopaedics
In clinical orthopaedics, an approach based on biomechanical knowledge is a prerequisite. Studies on load distribution, gait analysis and implants have been extensively published, aiming to aid clinicians in decision making and in the evaluation of treatments prior to using them in clinical practice. However, despite powerful scientific methods, the relevance of biomechanical studies to clinical orthopaedics, the adaptability and tolerance of living tissue, and the impact of these studies on clinical practice are debatable. Indeed, these studies may have limited clinical relevance unless they account for important parameters such as biological behaviour, tissue tolerance and adaptability. This article summarizes the history of biomechanics in orthopaedics, and discusses the clinical relevance of biomechanical studies in orthopaedic and trauma surgery.
History of Orthopaedic Biomechanics
Biomechanics aims to provide insight into the mechanics of tissue function, failure and injury, to provide information on the most effective and safe motion patterns and exercises to improve movement, and to show how professionals might improve movements, implants or osteosyntheses [1]. The word biomechanics originates from the Ancient Greek "βίος" (life) and "μηχανική" (mechanics), and refers to the study of the mechanical principles of living tissue, particularly their movement and structure and how forces create motion [2]. The roots of biomechanics date back to Greek antiquity. Hippocrates of Kos (460-370 BC), a Greek physician of the Age of Pericles, referred to as the Father of Western Medicine, wrote on many pragmatic treatments of common ailments such as bone fractures, joint dislocations and articular cartilage injuries, and promoted the application of mechanics (force and motion) to reduce dislocated knees and straighten spinal deformities [3,4]. Aristotle of Stagira, Chalkidice (384-322 BC), a Greek philosopher who studied at Plato's Academy, was the first to study physiology and animal motion; many of his ideas on animals, physics and other scientific topics laid the broad foundations of the biological and physical sciences and were not to be superseded for nearly 2,500 years [4][5][6].
The discipline of biomechanics arose in the 16th century with the investigations of Galileo Galilei and the studies of Giovanni Alfonso Borelli on the forces imposed on human and animal bodies by the activities and functions of life; they were the first to recognize the relationship between the mechanical environment and living tissue responsiveness (adaptation) [7,8]. The studies of Dr. YC Fung, referred to as the Father of Modern Biomechanics, contributed to a crescendo of biomechanics during the mid-1960s [9][10][11]. By the beginning of the 1970s, growth of the field accelerated; scientists from many different disciplines, such as kinesiology, engineering, physics, biology, zoology and medicine including orthopaedics, became interested in biomechanics [12]. Since then, several studies have allowed biomechanics to become a recognized specialization in science, and biomechanical principles to be systematically applied [13][14][15].
The word orthopaedics originates from the Ancient Greek "ὀρθός" (straight) and "παιδίον" (child); it was coined by the French doctor Nicholas Andry in 1741 [3,4]. Until the 20th century, orthopaedic doctors were mainly involved in straightening scoliotic spines, performing fracture fixation with braces and plaster casts, treating infections of the bone and joints, and other nonoperative procedures. With the development of modern orthopaedic surgical techniques and durable implants, orthopaedic surgery has greatly evolved. At that time, orthopaedics joined with biomechanics in a concerted effort to improve orthopaedic surgery [4]. Currently, orthopaedic biomechanics is a basic scientific and engineering discipline that is robust, vital, and dynamic [1,4].
Biomechanics in Clinical Orthopaedics
Clinical biomechanics is defined as the application of mechanical principles to the management of clinical problems. In orthopaedics, this implies that biomechanics should be applied in clinical orthopaedics. A large body of research and published studies has improved the understanding of the mechanical principles involved in musculoskeletal disorders. However, it is difficult to adapt all of the information obtained from mechanical studies of tissue, even living tissue, to clinical practice [18]. Clinical orthopaedic biomechanics should cover the biomechanical aspects, etiology, diagnosis, treatment and prevention of a musculoskeletal disorder, and should involve a scientific approach to developing novel medical applications, with emphasis on scientific integrity and clinical relevance [18].
In clinical orthopaedics, an approach based on biomechanical knowledge is a prerequisite.While most biomechanical knowledge is not perfect and can only be organized into some general principles, it is much better at informing professional practice than merely using information, opinions or data with no implied degree of accuracy [1].To gain new knowledge, one must start from a domain of generally accepted and accumulated prior knowledge.Most of the accumulated biomechanical knowledge is obtained by routine experiments and analyzed by well-accepted theories.However, with the accretion of knowledge, conflicts, inconsistencies and new hypotheses will arise.Each new hypothesis needs to be validated, and each new paradigm needs to overcome an existing theory or paradigm in a logical and rational way before it can be accepted and generally used by scientists and engineers [4].For biomechanical studies, validity requires a context of adaptation and tissue material clarification, and clinical relevance.
Tissue in Orthopaedic Biomechanics
Four types of tissue exhibit properties which are different and probably non-interpretable in biological terms: (1) viable tissue in situ with no necrosis, (2) viable tissue in vitro maintained in a suitable medium and at body temperatures, (3) nonviable (dead) tissue maintained in some sort of medium and at body temperatures, and (4) nonviable tissue maintained moist, but either dried or cooled at some time [19].Although cadaveric tissues are the gold standard simulators, they suffer from major drawbacks, including the risk of disease transmission, high cost, and prolonged preparation time [20].Furthermore, cadaveric bone tissue disproportionately represent the elderly population whose bone quality may not be representative of most of the orthopedic population [21].Accordingly, cadaveric tissue may not accurately represent the behavior of osteosynthesis constructs and orthopaedic implants in young, healthy patients with fractures.Furthermore, there is a high degree of variation in biomechanical properties between cadaveric tissues, reportedly up to 100% of the mean in some parameters [22].The use of traditional formalin-based embalming solutions may excessively stiffen soft tissues [23].Recently developed embalming solutions may preserve cadaveric tissue characteristics, but they are expensive and require even more specialized storage of specimens under vacuum refrigeration [23,24].Cost-effectiveness is a major concern in research.Any measure requiring new concepts will be easier to introduce in clinical practice if it has been previously validated with a biomechanical study.However, cost-effectiveness does not directly relate to medical efficacy, and can be the cause for clinical failure of biomechanical measures.Cost-effectiveness will become increasingly more important in the application of new measures.The measuring tools in biomechanical studies should reduce costs by eliminating unnecessary treatment or by identifying conditions early and avoiding expensive complications [25].
Cadavers, even though they are fixed in embalming chemicals, may still pose infection hazards.Infectious pathogens in cadavers at risk for disease transmission include Mycobacterium tuberculosis, hepatitis B and C viruses, HIV, and prions that cause transmissible spongiform encephalopathies [26][27][28].In general, the risk of Mycobacterium tuberculosis transmission is decreased by fixation.However, it has been shown that bacilli remain viable and infectious for at least 24 to 48 h after an infected cadaver has been embalmed [28].Specific serologic markers of hepatitis B and C viruses can be detected in cadaveric tissue banks (hepatitis B surface antigen, 18.1%; hepatitis C antibody, 14.3%) [27,29].Infectious HIV has been reported in pleural fluid, pericardial fluid, blood, bone fragments, spleen, brain, bone marrow, and lymph nodes of such deceased patients after storage at 2°C for up to 16.5 days after death [30].Prions, the infectious agents that cause Creutzfeldt-Jakob disease are highly resistant to conventional methods of sterilization and disinfection [31,32].Therefore, every cadaver should be regarded as an infectious material, and specific safety precautions should be obtained to avoid accidental disease transmission.These include a detailed file, indicating the reason of death and containing previous hospital records for the deceased, using embalming chemical, although there is inadequate information about their disinfectant properties, discard tissue remnants, debris and the sheet covering the table after the dissection is completed, and clean the environment with a phenolic disinfectant [28].
The challenges, risk of disease transmission and costs associated with the use of cadaveric tissues for biomechanical studies, in addition to inconsistencies between tissue specimens, has prompted the development of synthetic tissues that accurately reproduce the complex properties of natural human tissues.Synthetic tissues provide a number of advantages over cadaveric bone for biomechanical studies.First, the quality of cadaveric bone varies widely, requiring a large number of specimens to be tested for important results.Second, fixation implants are often used in relatively young patients whose bone quality can be poorly represented by the often osteoporotic bone characteristics of the elderly donors.Third, for a long-term in vitro study to be performed, deterioration of the properties of the cadaveric bone over time must be considered.Fourth, the bone density of cadaveric bone is highly variable and has a significant effect on the results of biomechanical testing; bone mineral density tests such as dual X-ray absorptiometry (DXA) are widely available, easy to perform and correlate highly and significantly with bone strength in many modes of failure [33].
Initially introduced in the late 1980s, sawbones (artificial or composite bones) were designed to simulate the bone architecture, as well as the physical properties of bones.Since then, sawbones have been extensively used in orthopaedic biomechanical research and for surgical training that traditionally relied on cadavers [21].Unlike cadaveric bones, sawbones are relatively inexpensive, widely available, have minimal variability between specimens, are not ethically controversial, and require no special storage or preservation techniques and no Institutional Review Board/Ethics Committee approval.Sawbones are available in various formulations to optimize desirable properties for specific applications, such as enhanced radiopacity or ease of cutting, reaming, or drilling [21,34,35].
The basic components of sawbones are plastics and epoxies.First-generation sawbones consisted of a rigid polyurethane foam core surrounded by an epoxy-reinforced, braided glass sleeve.However, mismatch between the glass fiber size and epoxy component resulted in delamination of the cortical material, and were subsequently poorly represented in the biomechanics literature [21,36].Second-generation sawbones were fiberglass-fabricreinforced (FFR) composites, constructed from layers of woven fiberglass matting that were solidified into the cortical matrix by the pressure injection of epoxy resin [37,38].However, they had no intramedullary canal and limitations of the FFR cortical material were noted; although the 45° orientation of glass fibers in the FFR matrix excelled at reproducing physiologic lateral bending rigidity, this geometry bolstered material strength in the rotational plane [21,22,36].Third generation sawbones were manufactured with an entirely pressure-injected technique by which short glass fiber reinforced (SGFR) epoxy was injection-molded around the polyurethane foam core to form the cortical wall [36].Additionally, direct castings of cadaver bones from an adult male donor was undertaken for the third generation sawbones to maintain a high level of anatomic fidelity with regard to topography of the cortical wall and gross specimen size.Including the glass fiber and epoxy resin components in the same material phase improved the consistency in bone shape and anatomic detail within and between specimens [37][38][39].The properties of the new SGFR material resulted in better approximation of organic bone when stressed in the rotational plane.However, third-generation sawbones were still stiffer (140%) than cadaveric specimens under torsion, and their physiologic bending properties were similar to second-generation sawbones [36].Fourth-generation sawbones are currently available.They use the same SGFR construction and injection molding manufacturing process as the third-generation models and therefore have similar reproduction of anatomic detail and consistency of geometry of the cortical wall.However, they benefit from an optimized epoxy component, resulting in incremental improvement in torsional and bending stiffness [21].Moreover, the fourth-generation sawbones have a high fatigue threshold, improved thermal and solvent stability, and better bicortical screw purchase relative to third-generation models, making them ideal for repeat loading applications and biomechanical testing under physiologic conditions, which is critical in orthopaedic implant testing [39][40][41].However, fourth-generation sawbones have demonstrated uncharacteristic interspecimen variability, and cannot undergo bone remodeling, features seen in bones having undergone previous fixation.Therefore, many investigators still prefer to perform small-scale cadaver validation studies when testing previously unscrutinized composites [21].
The mechanical environment of the tissue, tissue motion and load distribution are also important.In most cases, a clinical outcome relates to adaptation of tissues to their mechanical environment.Potting sawbones may be problematic as living bone is never strongly secured proximally and distally; however, secured potting is necessary for mechanical stability.Tissue load magnitudes and directions are an estimate of stresses and strains, and can be measured with a reasonable accuracy using validated approaches [42][43][44][45][46].
Relevance in Orthopaedic Biomechanics
Researchers rarely show whether differences in tissue material (living or dead) might have relevance to their study question.This is a necessity if adaptation is not considered within the framework of the question being asked [47].A recent article raised the issue of relevance for biomechanical studies in orthopaedics [47], arguing that biomechanical studies have exerted a relatively minor impact in clinical practice, and that most of biomechanical studies have had limited relevance to biology and clinical medicine because of failure to distinguish living from non-living systems by their biological responsiveness, tissue adaptation and tolerance [47].When these fundamental requirements are met, biomechanical studies can provide powerful tools to explain the function of the body and to predict the success or failure of treatments prior to using them on patients.If these are not met, any biomechanical study is suspect, and requires to be interpreted with great caution.Yet, no current approach to numerically predicting tissue adaptation has been correlated with clinically relevant situations [47].Furthermore, biomechanics should not be considered the study of the mechanical aspects of the structure and function of biological systems because biological systems do not have mechanical aspects [2].Living tissues properties differ from those of non-living tissues.The key distinctions are that living tissue is able to sense the environment, respond to their external environment in a seemingly infinite number of ways, and adapt over time.A living tissue is not static, but through internal processes alters certain of its characteristics in response to external stimuli [48]; some living tissues are able to repair themselves, and modify their behavior in both the short term and the long term [2,19,[47][48][49].Failure to recognize living and non-living tissue may be a major source of scatter in biomechanical studies [19].In this setting, biomechanical studies using non-living, non-adaptable systems would be questionable; consequently, the use of the term "necromechanics", from the ancient Greek "νεκρός" (dead), as previously suggested, would be logical [47].
A valid study should not contain flaws and should be internally consistent. For biomechanical studies, validity requires a context of adaptation and an explicit statement of the tissue material used. The researchers should recognize the limitations of their study design and inform the readers of them [50]. Biomechanical studies should also have clinical relevance, meaning that their findings should be meaningful for clinicians and their patients [47].
Several key parameters are required for a biomechanical study to be clinically relevant in orthopaedics [47].The mechanical parameter chosen should be a surrogate for relevant biological behavior; obviously, the choice of the mechanical parameter depends on the question being asked.The mechanical parameter should also be obtained with physiological force magnitudes and directions.Most studies apply a single loading regimen; instead, a set of loading regimens that represent the entire range of repetitive loadings experienced in vivo should be used.Load magnitudes should not be chosen for convenience.Tissue type (living or not) and its tolerance to the mechanical parameter should be clarified and discussed in relation to the experiment.Tissue adaptation to the mechanical parameters over time should be addressed for the biomechanical study to be clinically relevant [47].If the above parameters are not addressed when designing a study or addressing its limitations, the results of that investigation should be regarded with caution.In contrast, if the above requirements are met, the power of the biomechanical studies increases and their results are important and valid for clinical decision making and to predict success or failure of treatments prior to attempting them in patients [47].
Conclusion
Novel research directions should be emphasized in future clinical orthopaedic biomechanical studies for their direct clinical application, with emphasis on scientific integrity and clinical relevance. Readers have to interpret the results of biomechanical studies critically and properly. Authors should clarify the tissue type, tolerance and adaptation, should pose key questions that are clinically relevant, and should inform the readers that biomechanical models have inherent limitations. Limitations should not be suppressed but rather addressed in the discussion section of the article; if they are not, the study results should be regarded with caution.
Helicity and partial wave amplitude analysis of $D \to K^*\rho$ decay
We have carried out an analysis of helicity and partial-wave amplitudes for the process D ->K^* \rho in the factorization approximation using several models for the form factors. All the models, with the exception of one, generate partial-wave amplitudes with the hierarchy $\mid S\mid>\mid P\mid>\mid D\mid$. The one exception gives $\mid S \mid>\mid D \mid>\mid P \mid$. Even though in most models the D-wave amplitude is an order of magnitude smaller than the S-wave amplitude, its effect on the longitudinal polarization could be as large as 30%. Due to a misidentification of the partial-wave amplitudes in terms of the Lorentz structures in the relevant literature, we cast doubt on the veracity of the listed data, particularly the partial-wave branching ratios. (PACS numbers: 13.25.-k, 13.25.Ft)
I. INTRODUCTION
The weak hadronic decay of the charm D meson to two resonant vector particles is difficult to analyze experimentally, as well as to understand theoretically. At the theoretical level, much past effort was devoted mainly to understanding the rate Γ(D → V 1 V 2 ) (V stands for vector meson). Studies based on the factorization model were carried out by Bauer et al. [1] and Kamal et al. [2]; approaches based on flavor SU(3) symmetry and broken SU(3) symmetry were pursued also by Kamal et al. [2] and by Hinchliffe and Kaeding [3]; and Bedaque et al. [4] made a pole-dominance model calculation.
One peculiarity of a pseudoscalar meson, P , decaying into two vector mesons is that the final-state particles are produced in different correlated polarization states. The hadronic matrix element, A = V 1 V 2 | H weak | P , involves three invariant amplitudes which can be expressed in terms of three different, but equivalent, bases; the helicity basis | + + , | − − , |00 , the partial-wave basis (or the LS-basis) |S , |P , |D and the transversity basis |0 , | , | ⊥ . The interrelations between the amplitudes in these bases are presented in the next section. The data [5] for D → K * ρ decay are quoted either in terms of the helicity branching ratios or the partial-wave branching ratios. Hence our study of the process D → K * ρ is carried out in these two bases. We have undertaken a theoretical analysis for the particular decay, D → K * ρ, assuming factorization (more on it in section 2) and using a variety of models for the form factors. Such a study has not been undertaken in the past.
The experimental analysis of D → K * ρ ( measurement of the branching ratio, partial-wave branching ratios, polarization etc.) is done by considering the resonant substructure of the four-body decays D → Kπππ [6,7]. There are several two-body decay modes (examples: D → K * ρ and D → Ka 1 ) which contribute to the final states in D 0 →K 0 π − π + π 0 , D + → K − π + π + π 0 , D 0 → K − π + π + π − . Following the standard practice, Refs. [6,7] took the signal terms of the probability density to be a coherent sum of complex amplitudes for each decay chain leading to the four-body decays of D. Hence, the different contributing amplitudes can interfere among themselves. In general, the interference terms do not integrate to zero (see [8] for more details about the three-body decay D → Kππ). Consequently, the sum of the fractions f i does not add up to unity: f i = 1( see ref. [6,8,9]). The branching fractions into two-body channels are then determined by maximizing the likelihood function. The branching fraction into any particular two-body channel, such as D → K * ρ, can be analyzed in terms of the helicity amplitdes (H ++ , H −− , H 00 ), or the partial-wave amplitudes (S, P , D), or the transversity amplitudes (A 0 , A , A ⊥ ). As a results of the completeness of each one of these bases, the decay rate Γ(D → K * ρ) is expressed as an incoherent sum of the helicity, or the partial-wave, or the transversity amplitudes [10,11,12]. This imposes some constraints on the helicity and the partial-wave branching fractions B as they should add up to the total branching fraction for the mode D → K * ρ as follow: B ++ +B −− +B 00 = B 0 +B +B ⊥ = B S +B P +B D = B K * ρ . A similar situation occurs in np annihilation to 3 ( and 5) pions [13] where S and P waves are treated incoherently. An obvious problem with the D 0 → K * 0 ρ 0 data [5] is that this constraint is violated: the sum of the branching fractions into S and D states exceeds the total branching fraction. The fact that this sum also exceeds the transverse branching fraction is, by itself, not a problem due to the interference between the S and D waves. However, the problem with the data [5] is that the transverse branching fraction saturates the total branching fraction. There is, therefore, an internal inconsistency in the data: all the data listings can not be correct. The Particle Data Group listing of D → K * ρ data has remained unchanged since 1992 .
We believe that the source of the inconsistency in the data [5,6,7] has to do with the identification of the partial-wave amplitudes, S, P and D, with the Lorentz structures in the decay amplitude (see Table II and, especially, Eqns. (32) -(34) of [6]). The decay amplitude A for the process P → V 1 V 2 is expressed in terms of three independent Lorentz structures and their coefficients, represented in the notation of [14,15] by a, b, c, and in the notation of [16] by the form factors A 1 (q 2 ), A 2 (q 2 ) and V (q 2 ) . We discuss this point in detail in the next section, but suffice it to say here that in [6] the P -wave amplitude is identified with c of [14,15] (or V of [16]), which is correct; however, they identify the S-wave amplitude with a of [14,15] (or A 1 of [16]) and D-wave amplitude with b of [14,15] (or A 2 of [16]), which is incorrect. We discuss this point in some detail in sections 3 and 4.
Part of the problem could also be that the transverse amplitudes H ++ , H −− and the longitudinal amplitude H 00 were fitted independently in [6]. Their argument for doing so was the large measured polarization of K * in semileptonic decay of the D meson [17]. However, later measurements [18] of the polarization of K * being much smaller vitiate this procedure.
II. METHOD
The decay D → K * ρ is Cabibbo-favored and is induced by the effective weak Hamiltonian which can be reduced to the following color-favored (CF ) and color-suppressed (CS) forms [19]: where V qq ′ are the CKM matrix elements. The brackets (ūd) represent (V − A) color-singlet Dirac bilinears. O 8 andÕ 8 are products of color octet currents: . λ a are the Gell-mann matrices. a 1 and a 2 are the Wilson coefficients for which we use the values a 1 = 1.26 ± 0.04 and a 2 = −0.51 ± 0.05 [19] . In general a 1 and a 2 are related to the coefficients c 1 and c 2 [20] by where N is an effective number of colors, and c 1 = 1.26, c 2 = −0.51 [20]. Using a value of N different from 3 is a way to parametrize nonfactorization effects. Our parametrization amounts to setting N → ∞. This particular decay, D → K * ρ, has also been studied by Kamal et al. [19] and by Cheng [21] from the view point of explicit ( rather than implicit as here) nonfactorization.
In the factorization approximation one neglects the contribution from O 8 and O 8 , and the matrix element of the first term is written as a product of two current matrix elements. Since we are effectively working with N = 3, one could argue that the nonfactorization arising from O 8 andÕ 8 is being included. We consider the following three decay: (i) D 0 → K − * ρ + , a color-favored decay which gets contribution from external W -exchange, known as a class I process, (ii) D 0 →K 0 * ρ 0 , a colorsuppressed process which get contribution from internal W -exchange, known as a class II process, and (iii) D + →K 0 * ρ + , a class III process which gets contribution from external as well as internal W -exchange mechanisms. The decay amplitudes are given by: the extra √ 2 in Eq. (5) comes from the flavor part of the wave function of ρ 0 . Each of the current matrix elements can be expressed in terms of meson decay constants and invariant form factors. We use the following definitions: [16] . In terms of the helicity amplitudes the decay rate is given by where p is the center of mass momentum in the final state. H 00 , H ++ and H −− are the longitudinal and the two transverse helicity amplitudes, respectively, for the decay D 0 → K − * ρ + they are given by: where α, β and γ are function of r and t and given by: with r, t and k defined as follow: For D → K * ρ the values of the parameters α, β and γ are Equivalently one can work with the partial wave amplitudes which are related to the helicity amplitudes by [11,22] , The partial waves are in general complex and can be expressed in terms of their phases as follow For completeness, we introduce here the transversity basis, A 0 , A and A ⊥ , through The longitudinal polarization is defined by the ratio of the longitudinal decay rate to the total decay rate Using equations (10), (11) and (15) to solve for S, P and D in term of form factors, we obtain ( we drop a common factor of i These real amplitudes are assumed to be the magnitudes of the partial wave amplitudes. The phases are then fed in by hand. The decay rate given by an in- is independent of the partial-wave phases. However, the polarization does depend on the phase difference, δ SD = δ S − δ D , arising from the interference between S-and D-waves, The knowledge of the different forms factors is required to proceed further with the numerical estimate of the decay rate, Γ, and the longitudinal polarization P L . Since it is not yet possible to obtain the q 2 dependence of these form factors from experimental data, and a rigorous theoretical calculation is still lacking, we have relied on several theoretical models for the form factors in our analysis. 
They are: i) Bauer, Stech and Wirbel (BSWI) [16], where an infinite momentum frame is used to calculate the form factors at q 2 = 0, and a monopole form (pole masses are as in [16] ) for q 2 dependence is assumed to extrapolate all the form factors to the desired value of q 2 ; ii) BSWII [20] is a modification of BSWI, where while F 0 (q 2 ) and A 1 (q 2 ) are the same as in BSWI, a dipole q 2 dependence is assumed for A 2 (q 2 ) and V (q 2 ); iii) Altomari and Wolfenstein (AW) model [23], where the form factors are evaluated in the limit of zero recoil, and a monopole form is used to extrapolate to the desired value of q 2 ; iv) Casalbuoni, Deandrea, Di Bartolomeo, Feruglio, Gatto and Nardulli (CDDGFN) model [24], where the form factors are evaluated at q 2 = 0 in an effective Lagrangian satisfying heavy quark spin-flavor symmetry in which light vector particles are introduced as gauge particles in a broken chiral symmetry. A monopole form is used for the q 2 dependence. The experimental inputs for this model are from the semileptonic decay D → K * lν, and we have used the recent experimental values [25] of the form factors A DK * 1 (0) = 0.55 ± 0.03, A DK * 2 (0) = 0.40 ± 0.08 and V DK * (0) = 1.0 ± 0.2, and f D = 194 +14 −10 ± 10 MeV [26] in calculating the weak coupling constants of the model at q 2 = 0 [24] , which are subsequently used in evaluating the required form factors ; v) Isgur, Scora, Grinstein and Wise (ISGW) model [27], where a non-relativistic quark model is used to calculate the form factors at zero recoil and an exponential q 2 dependence, based on a potential-model calculation of the meson wave function, is used to extrapolate them to the desired q 2 ; vi) Bajc, Fajfer and Oakes (BFO) model [28], where the form factors A 1 (q 2 ) and A 2 (q 2 ) are assumed to be flat, and a monopole behavior is assumed for V (q 2 ); and finally (vii), a parametrization that uses experimental values (Exp.F) [25] of the form factors at q 2 = 0 and extrapolates them using monopole forms.
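The explicit helicity and partial-wave relations referred to above (Eqs. (10)-(15)) are not reproduced in this text, so the following sketch records one common convention; the overall signs and phase conventions of the original equations may differ, but this choice reproduces the limits used later, namely equal helicity amplitudes of magnitude |S|/√3 under S-wave dominance and a vanishing H 00 for |S|/|D| = √2.

```latex
% One common helicity <-> partial-wave convention for P -> V1 V2 (a sketch,
% not necessarily the sign convention of the original Eq. (15)):
\begin{align}
  H_{00}     &= \tfrac{1}{\sqrt{3}}\,S - \sqrt{\tfrac{2}{3}}\,D ,\\
  H_{\pm\pm} &= \tfrac{1}{\sqrt{3}}\,S \pm \tfrac{1}{\sqrt{2}}\,P + \tfrac{1}{\sqrt{6}}\,D ,
\end{align}
% so that |H_{00}|^2+|H_{++}|^2+|H_{--}|^2 = |S|^2+|P|^2+|D|^2, and with
% S = |S|e^{i\delta_S}, D = |D|e^{i\delta_D}, \delta_{SD}=\delta_S-\delta_D,
\begin{equation}
  P_L = \frac{|H_{00}|^2}{|H_{00}|^2+|H_{++}|^2+|H_{--}|^2}
      = \frac{|S|^2 - 2\sqrt{2}\,|S||D|\cos\delta_{SD} + 2|D|^2}
             {3\left(|S|^2+|P|^2+|D|^2\right)} .
\end{equation}
```

In this convention the decay rate is an incoherent sum over either basis, while the longitudinal polarization retains an explicit dependence on the S-D relative phase.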
A. Parameters
For the numerical calculations we use the following values for the CKM matrix elements and meson decay constants : In Table 1 we present the predicted values of the form factors in the different models as well as their experimental values [29]. One observes that while the model predictions for the form factors A 1 (q 2 ) and V (q 2 ) are in the range (0.6 − 1) and (0.8 − 1.6), respectively, the model-dependence of A 2 (q 2 ) shows a spread over a larger range: (0.4 − 3.7). A striking feature of the BFO model [28] is the large value of the form factor A 2 , which is incompatible with its experimental determination.
We calculate the experimental value of polarization from the listing of Ref. [5]: In Table 2 we have summarized the results for the decay rates Γ, logitudinal polarization P L , and partial-wave ratios |S| |P | and |S| |D| in different models. We note from Table 2 that the models CDDGFN, BFO, and the scheme that uses experimentally measured form factors, predict a decay rate within a standard deviation of the central measured value. All other models overestimate the rate by several standard deviations. As for the longitudinal polarization, given the freedom of the unknown cosδ SD , all models are able to fit the data. In particular, all models except BFO are able to predict the polarization correctly for δ SD = 0; in the BFO model for δ SD = 0, D 0 → K * − ρ + becomes totally transversely polarized. This circumstance arises from the fact that BFO model predicts a large D-wave contribution, |S| |D| ≈ √ 2. It then becomes evident from Eq. (15) that H 00 vanishes. All models except BFO also display the partial-wave-amplitude hierarchy: | S |>| P |>| D |; BFO model on the other hand predicts | S |>| D |>| P |, which we believe is less likely. The reasoning goes as folows: For decays close to threshold, one anticipates the L th partial-wave amplitude to behave like (p/Λ) L , where p is the center of mass momentum and Λ a mass scale. For p ∼ 0.4 GeV and Λ ∼ m D , one expects the hierarchy | S |>| P |>| D |.
C. D + →K 0 * ρ + In contrast to the decay mode D 0 →K − * ρ + , here the data listing [5] is at best confused. First, since the longitudinal and/or transverse branching ratios are not listed, it is not possible to calculate the longitudinal polarization. Second, though in Refs. [6], [9] the identification of the transversity amplitudes, (A T , A L and A l=1 in the notation of Ref. [6]) in terms of the partial-wave amplitudes is correct (see Eqns. (20) -(26) of Ref. [6]), their identification of the partial-wave amplitudes S and D in terms of the Lorentz structure of the decay amplitude is incorrect. In Table II, and more succinctly in Eqns. (32) and (34) of Ref. [6], S-wave amplitude is identified with the Lorentz structure that goes with the form factor A 1 , and D-wave amplitude with that of A 2 . In fact, the correct identification of the S-and D-wave amplitudes given in Eq. (19) shows that they are both linear superpositions of A 1 and A 2 .
With the caveat that the identification of the partial waves in Refs. [6,9] is incorrect (note also that the listing of Ref. [5] uses these references only), we take the S-, P -and D-wave branching ratios at their face value and calculate the 'experimental' ratios |S| |P | and |S| |D| . In Table 2, we have shown the calculated decay rate, the longitudinal polarization and the ratios of the partial-wave amplitudes in different models and compared them with the data. The BFO [28] model is the only one that reproduces the total rate correctly. This model also generates a large D-wave amplitude, with the partialwave hierarchy | S |>| D |>| P |. This feature of the BFO model is due to the exceptionally large value of the form factor A 2 , which is in contradiction with the experimental detarmination of the form factor as shown in Table 1.
Ref. [5] lists the branching ratio and the transverse branching ratio. This enables us to calculate the longitudinal polarization. Ref. [5] also lists the S- and D-wave branching ratios. However, our criticism of these numbers in the previous subsection applies also to the D 0 →K 0 * ρ 0 decay. With this caution, we have taken their [5] numbers at face value and calculated the experimental and theoretical ratios of the partial wave amplitudes and listed them in Table 2. We note from Table 2 that the rate in the BFO model is too low by three standard deviations; the rates predicted in the BSWI and BSWII models are 1.5 standard deviations too high, while all other models fit the rate within one standard deviation. As for the longitudinal polarization, all models predict a value consistent with the data. All models also satisfy the |S|/|P| bound, but only the BFO model fits the |S|/|D| ratio. This is because the BFO model generates a large D-wave amplitude.
A final comment: The inconsistency of the data is evident in the listing [5] of the total branching ratio and the individual partial-wave branching ratios. We know that the total branching ratio is an incoherent sum of the individual branching ratios in S-, P -, and D-waves. Yet, in the Particle Data Group listing [5], the sum of S-and D-wave branching ratios exceeds the total. This by itself should cast doubt on the veracity of data.
IV. S-wave and A 1 (q 2 )-dominance
Since S-wave and D-wave amplitudes are linear superpositions of the form factors A 1 and A 2 , see Eq. (19), the concept of S-wave dominance is different from that of A 1 -dominance. All the models we have discussed, with the exception of the BFO model [28], predict that the S-wave amplitude is the dominant partial wave amplitude. Further, since Ref. [6] identifies S ∼ A 1 and D ∼ A 2 , we need to look at what is meant by S-wave dominance and contrast it with A 1 -dominance.
Consider first the concept of S-wave dominance. We see from Eqs. (9) and (15) that in this approximation, Γ ∝ |S| 2 , and |H 00 | = |H ++ | = |H −− | = |S|/√3. In practice, most of the models predict the S-wave amplitude to be roughly an order of magnitude larger than the D-wave amplitude. Consequently, the D wave would contribute only 1% to the rate relative to the S wave. However, it could influence the longitudinal polarization considerably through its interference with the S wave. Depending on the value of δ SD , the interference term could amount to a 30% correction to P L (see also Ref. [30]). However, regardless of the exact size of the D-wave amplitude, S-wave dominance would lead to P L → 1/3 for δ SD = π/2. Consider now the concept of A 1 -dominance. From Eqns. (10) and (11), we see that H 00 ∝ αA 1 and H ++ = H −− ∝ A 1 . With α = 1.52, the longitudinal helicity amplitude is the largest, and the longitudinal polarization becomes P L = α 2 /(α 2 + 2) ≈ 0.54, in contrast to a value of 1/3 (with an error from S − D interference) for S-wave dominance. Further, from Eq. (19), we note that A 1 -dominance makes the S-wave amplitude only about five times larger than the D-wave amplitude - not quite what one would term "S-wave dominance."
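As a numerical illustration of the S-D interference effect just described, the short script below evaluates P L from the closed-form expression that follows from the helicity/partial-wave convention sketched above; the chosen magnitudes (|S|/|D| = 10 with the P wave neglected) are illustrative assumptions, not values taken from Table 2.

```python
import numpy as np

def longitudinal_polarization(S, P, D, delta_SD):
    """P_L from partial-wave magnitudes and the S-D relative phase,
    using one common helicity/partial-wave convention (a sketch)."""
    num = S**2 - 2.0 * np.sqrt(2.0) * S * D * np.cos(delta_SD) + 2.0 * D**2
    return num / (3.0 * (S**2 + P**2 + D**2))

# Illustrative magnitudes: |S|/|D| = 10, P wave neglected.
for delta in (0.0, np.pi / 2.0, np.pi):
    print(f"delta_SD = {delta:4.2f} rad  ->  P_L = "
          f"{longitudinal_polarization(1.0, 0.0, 0.1, delta):.3f}")
```

With these inputs P L varies between roughly 0.24 and 0.43 as δ SD runs from 0 to π, i.e. a shift of order 30% around the S-wave-dominance value of 1/3, in line with the estimate above.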
V. CONCLUSION
We have carried out an analysis of the process D → K * ρ in terms of the helicity, and partial-wave amplitudes. We used several models for the form factors, as well as their experimental values, when available, from semileptonic decay. A general feature of our calculation is that all the models, with the exception of the BFO model [28], are consistent with the expected threshold behavior |S| > |P | > |D|; BF O model, on the other hand, gives |S| > |D| > |P |. Even though in most models the D-wave amplitude is almost an order of magnitude smaller than the S-wave amplitude, it could effect the polarization prediction significantly through S − D interference. As we see from Table 2, models BSWI, BSWII, AW and ISGW grossly overestimate the rate for D 0 → K * − ρ + , while models CDD, BFO, and the model that uses experimental form factor input, more or less agree with the measured rate. For this decay mode, we trust the measurement of the longitudinal branching ratio as the identification of the transversity amplitudes in Ref. [6] is correct. Due to the large error in P L , and the uncertainty arising from the S − D interference, all models are consistent with the polarization measurement.
For the mode D + →K * 0 ρ + , all the models, with the exception of the BFO model [28], grossly overestimate the rate. Before one gets the impression that the BFO model does well, we would like to point out that its prediction for the form factor A 2 is in sharp disagreement with the measurements from the semileptonic decays. There are no direct measurements of the longitudinal (or transverse) polarization for this decay mode.
Table 1: Model and experimental predictions for the form factors.
For the mode D 0 →K * 0 ρ 0 , the BSWI and BSWII models predict a rate within 1.5 standard deviations. The remaining models, with the exception of the BFO model, predict a rate in agreement with data within one standard deviation. The BFO model underestimates the rate by three standard deviations. The transverse rate has been measured [5], from which we have calculated the longitudinal polarization. The measured value of P L has large errors, but it is consistent with the longitudinal polarization in D 0 → K * − ρ + . Given the freedom of the S − D interference, all models are consistent with the measured polarization. The predicted longitudinal polarization is almost decay-mode independent.
Final comment: because of the misidentification of the S- and D-waves with the Lorentz structures in [6,9], we do not trust the partial-wave branching ratios listed in [5]. For this reason the listings of the |S|/|P| and |S|/|D| ratios in the last column of Table 2 have to be read with this caveat.
Table 2: Decay rates for D +,0 →K 0 * ρ +,0 . The values of Γ must be multiplied by 10 11 s −1 . The parameter z = cos δ SD . The experimental values of P L are listed only if measurements of longitudinal or transverse branching ratios are available [5].
A Method for Implementing a Probabilistic Model as a Relational Database
This paper discusses a method for implementing a probabilistic inference system based on an extended relational data model. This model provides a unified approach for a variety of applications such as dynamic programming, solving sparse linear equations, and constraint propagation. In this framework, the probability model is represented as a generalized relational database. Subsequent probabilistic requests can be processed as standard relational queries. Conventional database management systems can be easily adopted for implementing such an approximate reasoning system.
Introduction
Probabilistic models [4,9,10] are used for making de cisions under uncertainty. The input to a probabilistic model is usually a Bayesian network [10]. It may also consist of a set of potentials which define a Markov network [4]. In this paper, we assume that the proba bilistic model is described by a Markov network. For this model, the propagation method [5,6,7,12,13] can be conveniently applied to convert the potentials into marginal distributions.
There is another important reason to characterize a probabilistic model by a Markov network, as it has been shown that such a network can be represented as a generalized relational database (14,15,16]. That is, the probabilistic model can be transformed into an equivalent (extended) relational data model. More specifically, the marginal corresponding to each po tential can be viewed as a relation in the relational database. Furthermore, the database scheme derived from a Markov network forms an acyclic join depen dency [15], which possesses many desirable properties [1,8] in database applications.
As the probabilistic model is now represented by a re lational data model, a probability request expressed as a conditional probability can be equivalently trans formed into a standard query to be executed by the database management system. Naturally, all query optimization techniques can be directly applied to pro cessing this query including data structure modifica tion. Thus, these transformations allow us to take full advantage of the query optimizer and other per formance enhancement capabilities available in tradi tional relational databases.
This paper, a sequel of the presentation in the IPMU conference [15], reports on the technical details in volved in the design of a probabilistic inference system by transforming a Markov network into a relational database.
Our paper is organized as follows. In Section 2, for completeness we review a unified relational data model for both probabilistic reasoning and database manage ment systems. In Section 3, we show that a factored probability distribution can be expressed as a general ized acyclic join dependency. The method for imple menting a probabilistic inference system is described in Section 4. First, we describe how a relational database is constructed for a given probabilistic model. We then show that processing a request for evidential reason ing is equivalent to processing a standard relational query. We conclude by pointing out that the extended relational database system can in fact model a number of apparently different but closely related applications [12]. Before introducing our data model, we need to define some basic notions pertinent to our discussion such as: hypergraphs, factored distributions, and marginal ization. Then we show how under certain conditions a factored joint probability distribution can be expressed as a generalized acyclic join dependency m the ex tended relational model.
Basic Notions
Hypergraphs and Hypertrees : Let C denote a lattice. We say that 1{ is a hyper graph, if 1{ is a finite subset of C. Consider, for ex ample, the power set 2x, where X= {xl, X2, ... ,xn} is a set of variables. The power set 2x is a lattice of all subsets of X. Any subset of 2x is a hypergraph on 2x. We say that an element t in a hypergraph 1{ is a twig if there exists another element b in 1{ , dis tinct from t, such that t n (U(1i-{t })) = t n b. We call any such b a branch for the twig t. Figure 1. This hypergraph is in fact a hy pertree; the ordering, for example, h1, h2, h3, h4, is a hypertree construction ordering and b(2) = 1, b(3) = 1, and b( 4) = 3 define its branching function. where each h; is a subset of X, i.e., h; E 2x, and ¢h, is a real-valued function on h;. Moreover, X= h1 U h2 U ... U hn = U7= l h;. By definition, 1i = {h1, h2, ... , hn} is a hypergraph on the lattice 2x. Thus, a factored probability distribution can be viewed as a product on a hypergraph 1{ , namely: Let V x denote the discrete frame (state space) of the variable x E X. We call an element of V x a configura tion of x. We define v h to be the Cartesian product of the frames of the variables in a hyperedge h E 2x: We call vh the frame of h, and we call its elements configurations of h.
Let h, k E 2x, and h � k. If c is a configuration of k, i.e., c E Vk, we write c.l. h for the configuration of h obtained by deleting the values of the variables in k and not in h. For example, let h = {x1, x2}, k = {x1, x2, x3, x4}, and c = (c1, c2, c3, c4), where c; E V x;· Then, c.l. h = ( c1, c2). If h and k are disjoint subsets of X, ch is a configura tion of h, and Ck is a configuration of k, then we write (Chock) for the configuration of h U k obtained by con catenating ch and Ck. In other words, ( ch o Ck) is the unique confi guration of hUk such that ( ch ock).l. h = ch and ( Ch o ck ).l. k = Ck. Using the above notation, a fac tored probability distribution ¢ on U1{ can be defined as follows: where c E v x is an arbitrary configuration and X = U?i.
where ch is a configuration of h, Ck-h is a configuration of k-h, and ch o Ck-h is a configuration of k. We call <Pt h the marginal of tPk on h.
A major task in probabilistic reasoning with belief networks is to compute marginals as new evidence becomes available. By definition, the product φh · φk of any two functions φh and φk is given by (φh · φk)(c) = φh(c↓h) · φk(c↓k), where c ∈ Vh∪k. We can therefore express the product φh · φk equivalently as a product join of the relations Φh and Φk, written Φh ⊗ Φk, which is defined as follows: (i) Compute the natural join, Φh ⋈ Φk, of the two relations Φh and Φk.
(ii) Add a new column with attribute f φh·φk to the relation Φh ⋈ Φk on h ∪ k. Each value of f φh·φk is given by φh(c↓h) · φk(c↓k), where c ∈ Vh∪k.
(iii) Obtain the resultant relation Φh ⊗ Φk by projecting the relation obtained in Step (ii) on the set of attributes h ∪ k ∪ {f φh·φk}.
Since the operator ⊗ is both commutative and associative, we can express a factored probability distribution as a join of relations, Φ = Φh1 ⊗ Φh2 ⊗ ... ⊗ Φhn. We can also define marginalization as a relational operation. Let Φk↓h denote the relation obtained by marginalizing the function φk on h ⊆ k. We can construct the relation Φk↓h in two steps: project the relation Φk onto the attributes h together with its function column, and then sum the function values over all tuples that agree on h.
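As an illustration of the two operations just defined, the following sketch implements the product join and the marginalization of function-valued relations using pandas; the column name "f" for the function value and the toy attribute names are assumptions made for the example, not notation from the paper.

```python
import pandas as pd

def product_join(phi_h: pd.DataFrame, phi_k: pd.DataFrame) -> pd.DataFrame:
    """Product join: natural join on the shared attributes, then multiply the
    two function values phi_h(c|h) * phi_k(c|k) into a single column 'f'."""
    shared = [a for a in phi_h.columns if a in phi_k.columns and a != "f"]
    joined = phi_h.merge(phi_k, on=shared, suffixes=("_h", "_k"))
    joined["f"] = joined["f_h"] * joined["f_k"]
    return joined.drop(columns=["f_h", "f_k"])

def marginalize(phi_k: pd.DataFrame, h: list) -> pd.DataFrame:
    """Marginalization (the downward operator) onto the attributes h:
    group on h and sum the function values."""
    return phi_k.groupby(h, as_index=False)["f"].sum()

# Toy example: phi on {x1, x2} and phi on {x2, x3} with binary variables.
phi_a = pd.DataFrame({"x1": [0, 0, 1, 1], "x2": [0, 1, 0, 1],
                      "f": [0.2, 0.8, 0.6, 0.4]})
phi_b = pd.DataFrame({"x2": [0, 0, 1, 1], "x3": [0, 1, 0, 1],
                      "f": [0.5, 0.5, 0.9, 0.1]})

joint = product_join(phi_a, phi_b)       # relation on {x1, x2, x3}
print(marginalize(joint, ["x1", "x3"]))  # marginal relation on {x1, x3}
```

The same pattern extends to the generalized join discussed later, by additionally multiplying with the inverse of the marginal on the shared attributes.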
Two important properties are satisfied by the operator t of marginalization.
Lemma 1 [12,15] (i) If <I>k is a relation on k, and h � g � k, then (ii) If <I>h and <I>k are relations on h and k, respec tively, then Before discussing the computation of marginals of a factored distribution, let us first state the notion of computational feasibility introduced by Shafer ( represent relations on these attributes, join them, and marginalize on them. We assume that any subset of feasible attributes is also feasible. Fu rthermore, we as sume that the factored distribution is represented on a hypertree and every element in 1l is feasible.
Lemma 2 [12,15] Let <I>= @{<I>hlh E 1l} be a fac tored probability distribution on a hypertree 1l. Let t be a twig in 1l and b be a branch for t. Then, where 1l-t denotes the set of hyperedges 1l -{t}, <I>i: t = <I> b ® <t> f tnb , and <l>h" t = <I>h for all other h in 1l-t .
Note that by property (ii) of Lemma 1, we obtain: On the other hand, we have:
The Method for Implementing a Probabilistic Inference System
In order to convert a probabilistic model into a re lational model, fi rst we need to be able to efficiently transform the input potentials into marginals. Since we assume that the hypergraph induced by the po tentials is a hypertree, we can apply the propagation method [6,12] to compute all their marginals. This process involves first moving backward along the hy pertree construction ordering to find the marginal of the root, then moving forward from the root to the leaves for determining marginals of the other poten tials.
The next task is to transform a probability request into a standard relational query addressed to the database which is equivalent to the original probability model. The relational query can be formulated by scanning the probability request to determine the marginals in volved along the hypertree construction ordering, as well as the specific variables (attributes) within each respective marginal. Once the query is expressed in terms of the query language provided, it is then sub mitted to and processed by the standard database management system in the usual manner.
Transformation of Potentials to Marginals (Relations)
We are given as input a set of potentials ¢ h 's which define a factored joint probability distribution <I> = @{<I>hlh E 1l}, where 1l = {h 1 , h 2 , ... , hn} is the cor responding hypergraph. The first step in this transfor mation is to check if the hypergraph 1l is a hypertree [1], but if so determine a branching function b( i) for it. If we do not have a hypertree, then some potentials can be combined so that the resultant hypergraph is a hypertree [12].
In the following discussion, we henceforth assume that 1l = {h 1, h 2, ... , h n } is a hypertree. Let the branch ing function b(i), i = 2, ... , n define a hypertree con struction ordering. The procedure for computing the marginal of the root h 1 by moving backward along the hypertree construction ordering has been described in Section 3.
Once we have determined the root marginal, q,. Hence, by continuing moving forward, we will arrive at the above general formula.
Consider, for example, a factored joint probability dis tribution defined by six potentials [4] as shown in col umn 2 of Tables 1 to 6. We have modified the column names to reflect the notation used in this paper. The Figure 7. This hypergraph is in fact a hy pertree. The sequence h 1 , h 2 , h 3, h 4 , h 5, h 6 , is a hyper tree construction ordering which defines the branching To compute the root marginal q,.J. h 1 , we may move backward from the leaf hyperedge towards the root h 1 along the hypertree construction ordering. Thus we first transform the hypergraph 1{ 6 ( = 1l ) to 1{ 5 . That co nfi gura ti o n 4>( x 1 x 2) c�>+ h 1 c�>+ h 1 -, ,x l """'1 X 2 <t>"TciT
4.2 Transformation of a Probability Request to a Query
Just as we can transform a potential Φh into a marginal relation Φ↓h, we can transform a probability request of the form p(xa, ..., xd | xe = f, ..., xg = 1) to a relational query. This query can then be processed by the database management system. There are, however, two ways to construct the query depending on whether the product join (⊗) and generalized join (⊗') operators have been incorporated into the database management system. We will show how to transform the probability request to a relational query in either situation.
(i) In the first case we assume that the database management system has been extended to include the product join and generalized join operators.
Then with respect to a particular hypertree con struction ordering, we first determine the join path h r , ... , hs such that the union h r U ... U hs of these relation schemes (hyperedges) con � tains all the variables in the probability request p(xa, ... , xd lxe = f, ... , X g = 1). Then the re lation <I>"' = <I>{x a , ... , xd }' depicted in Figure 7 At this point, we have the information needed to answer the probability request all in a single rela tion <I>"'. However, to compute the required condi tional probability, we need to marginalize <I>"' onto {xa, ... , xd} by the following query: SELECT Xa, ... , Xd, SU M(f<t>J INTO w"' FROM <I>"' GROUPBY Xa, . .. ,Xd Since the relation W"' is not normalized, we have to define the normalization relation q,"' which is a constant relation as shown in Figure 8, where Finally, the answer to the probability request is given by the relation 1J!, ® �;1. This demon strates that any probability request can be eas ily answered by submitting simple queries as de scribed to the relational database management system.
The above discussion indicates that we need not implement the marginalization operator ↓, as the standard relational query languages already provide the SUM and GROUP BY facilities. These two functions are indeed equivalent to the marginalization operation; a short illustration in code is given after case (ii) below.
(ii) In the second case we simulate the product join and generalized join operators as we are interfac ing with a standard database management sys tem. We will first discuss the simulation of the product join (®) and generalized join (®') oper ators , before we construct the relation to answer the probability request.
Suppose we want to compute the product join of two relations <l>h and <l>k, i.e. , <l>h ®<l>k. According to the definition of ® (see the example in Fig ure 3), we construct the relation <I> h tx1 <I> k by the query: Next we create a new column labelled by the at tribute f t/> h ·t/> k , representing the product ¢Jh · ¢Jk by the query: By definition, the entries in this column are: where c E Vh u k. The following query: accomplishes this task. The last step in simulat ing the product join ® is to project <l>huk onto the set of attributes h U k U { f t/> h ·t/> k } using the query: Thus we have derived the relation <l>h0k = <l>h ® <l>k .
Since <l>h ®' <l>k = <l>h ®<l>k0<1>hnk -1 , the simulation of the generalized join 0 ' is just a simple exten sion of the product join 0. That is we need only compute <l>hnk -1, the inverse relation of <l>hnk. We construct <l>hnk by the query: SELECT h n k, SU M( f¢h) Note that we can use SU M( f t/> k ) and <l>k in the above query, since <I>i hnk = <I>t hnk . It is straight forward to construct the inverse relation <I> hnk -1 from <l>hnk. Now the relation <l>h ® 'k = <l>h 01 <l>k is obtained by performing the product join <l>h ® k ® <l>hnk -1 .
We construct 1J!, and �,, depicted in Figure 8, as described in (i) of this subsection. The relation 1J! 0 �; 1 is the answer to the given probability request.
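To make the correspondence between marginalization and SUM/GROUP BY concrete (see case (i) above), the sketch below answers a conditional-probability request from a single joint relation with a function column "f"; the relation, attribute names and numbers are illustrative assumptions, and for brevity the example starts from one joint relation rather than joining marginals along the hypertree as described above. In a deployment against a standard database management system, the equivalent SELECT ... GROUP BY statement would be issued instead.

```python
import pandas as pd

def conditional_probability(joint: pd.DataFrame, targets: list, evidence: dict) -> pd.DataFrame:
    """Answer p(targets | evidence) from a joint relation with function column 'f',
    mirroring SELECT targets, SUM(f) ... WHERE evidence GROUP BY targets."""
    # Select the tuples consistent with the evidence (the WHERE clause).
    mask = pd.Series(True, index=joint.index)
    for attr, value in evidence.items():
        mask &= joint[attr] == value
    selected = joint[mask]
    # Marginalize onto the target attributes (SUM + GROUP BY).
    psi = selected.groupby(targets, as_index=False)["f"].sum()
    # Normalize so that the answers sum to one.
    psi["f"] = psi["f"] / psi["f"].sum()
    return psi

# Illustrative joint relation on {x1, x2, x3}.
joint = pd.DataFrame({"x1": [0, 0, 0, 0, 1, 1, 1, 1],
                      "x2": [0, 0, 1, 1, 0, 0, 1, 1],
                      "x3": [0, 1, 0, 1, 0, 1, 0, 1],
                      "f":  [.06, .14, .12, .08, .21, .09, .18, .12]})

print(conditional_probability(joint, targets=["x1"], evidence={"x3": 1}))
```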
Conclusion
Once it is acknowledged that a probabilistic model can be viewed as an extended relational data model, it immediately follows that a probabilistic model can be implemented as an everyday database application. Thus, we are spared the arduous task of designing and implementing our own probabilistic inference system and the associated costs. Even if such a system were successfully implemented, the resulting performance may not be comparable to that of existing relational databases. Our approach enables us to take advantage of the various performance enhancement techniques, including query processing, query optimization, and data structure storage and manipulation, available in traditional relational database management systems. Thus the time required for belief update and answering probability requests is shortened.
The proposed relational data model also provides a unified approach to designing both database and probabilistic reasoning systems.
In this paper, we have defined the product join operator ⊗ based on ordinary multiplication primarily because we are dealing with probabilities. By defining ⊗ differently (e.g. based on addition), our relational data model can be easily extended to solve a number of apparently different but closely related problems such as dynamic programming [2], solving sparse linear equations [11], and constraint propagation [3].
$\textit{Ab initio}$ study of phosphorus anodes for lithium and sodium-ion batteries
Phosphorus has received recent attention in the context of high-capacity and high-rate anodes for lithium and sodium-ion batteries. Here, we present a first principles structure prediction study combined with NMR calculations which gives us insights into its lithiation/sodiation process. We report a variety of new phases found by AIRSS and the atomic species swapping methods. Of particular interest are a stable Na$_5$P$_4$-C2/m structure and locally stable structures found less than 10 meV/f.u. from the convex hull, such as Li$_4$P$_3$-P2$_1$2$_1$2$_1$, NaP$_5$-Pnma and Na$_4$P$_3$-Cmcm. The mechanical stability of Na$_5$P$_4$-C2/m and Li$_4$P$_3$-P2$_1$2$_1$2$_1$ has been studied by first principles phonon calculations. We have calculated average voltages which suggest that black phosphorus (BP) can be considered as a safe anode in lithium-ion batteries due to its high lithium insertion voltage, 1.5 V; moreover, BP exhibits a relatively low theoretical volume expansion compared with other intercalation anodes, 216\% ($\Delta V/V$). We identify that specific ranges in the calculated shielding can be associated with specific ionic arrangements, results which play an important role in the interpretation of NMR spectroscopy experiments. Since the lithium phosphides are found to be insulating even at high lithium concentrations, we show that Li-P phases doped with aluminium have electronic states at the Fermi level, suggesting that using aluminium as a dopant can improve the electrochemical performance of P anodes.
Introduction
Owing to their relatively high specific energy and capacity, Li-ion batteries (LIBs) are the energy source of choice for portable electronic devices. 1 Despite the vast technological advances made since the first commercial LIB was released by Sony in 1991, the specific energy of commercial LIBs is limited to approximately 250 Whkg −1 , which is half the estimated need for a family car to travel 300 miles without recharge. 2 The demand of higher specific energies and capacities motivates the study of novel materials for the next generation of LIBs. In almost all conventional LIBs available for commercial use the cathode is typically a transition layered metal oxide, LiMO 2 , with M=Co, Ni, Mn, etc, and the anode material is graphite. [2][3][4] Intercalation electrodes experience slight changes during charge and discharge, e.g., less than 7% volume change in C negative electrodes, 3 leading to a high capability of retaining their capacity over charge/discharge cycles. However, these electrodes suffer from low specific capacity due to the limited intercalation sites available for Li ions in the host lattice, 5 e.g., 372 mAhg −1 for graphite. In order to overcome the capacity limitation of intercalation anodes, it has been suggested to use different alloys of lithium as LIB anodes. 3,[5][6][7][8][9] A wide range of materials have been studied for this purpose, such as group IV and V elements, magnesium, aluminium and gallium among others. 3,7 Alloy materials can achieve 2-10 times higher capacity compared to graphite anodes, where the highest capacity is achieved by silicon, 3579 mAhg −1 . 10 However, alloys tend to undergo relatively large structural changes under lithiation, 2,3,7,10 leading to a poor cycle life.
Due to the high abundance, low cost, and relatively uniform geographical distribution of Na, Na-ion batteries (NIBs) have received recent attention. Despite some disadvantages, such as its larger ionic radius (1.02 Å compared to 0.76 Å for Li) and the lower cell potential of most Na systems, 11 NIBs are considered to be one of the most promising alternatives to meet large-scale electronic storage needs. 12 In spite of similarities between elemental Li and Na, Na systems present significantly different kinetic and thermodynamic properties. 9 The widely used graphite negative anode in LIBs is not successful for NIBs 13 due to a poor specific capacity and bad cyclability. The Si alloy suggested for LIBs is not suitable for NIBs as the Na concentration in Na-Si systems is limited to 50 %, 12 therefore most recent studies have focused on Na-Sn and Na-Sb systems. 12 Reaction of phosphorus with three Li or Na atoms produces Li 3 P 14 and Na 3 P 15 respectively; which corresponds to a large theoretical capacity of 2596 mAhg −1 and theoretical volume expansion ∆V /V of 216 % for Li 3 P and 391 % for Na 3 P from black phosphorus (BP). Of the several known allotropes, black phosphorus, red phosphorus, and the recently synthesised phosphorene 16 have been studied as candidates for LIB and NIB anodes [17][18][19] . Recent experimental studies 9,[20][21][22][23][24][25] showed that the addition of carbon to the phosphorus anode leads to an improvement in the reversibility of charge/discharge processes, probably due to an enhancement in its electrical conductivity and mechanical stability. The study conducted by Qian et al., 9,24 showed that amorphous phosphorus/carbon nano composite anodes are capable of achieving relatively high storage capacities per total mass of composite, 2355 mAhg −1 for LIBs and 1765 mAhg −1 for NIBs, good capacity retention after 100 cycles and high power capabilities at high charge/discharge rates. All the experimental studies agree that Li 3 P and Na 3 P are formed during the discharge process; however, the formation of other phases during the lithiation/sodiation process remains unclear. In the case of LIBs, differential capacity plots suggest the formation of Li x P phases, however the assignment of the XRD spectra can be challenging. 9,19,20,24 In a study presented by Sun et al. 21 it has been suggested, based on ex situ XRD, that crystal phases of Li-P form at the end of the charge.
During Li/Na insertion and extraction, anodes are expected to form non-equilibrium structures. Evidence of a metastable structure formation in the Li-P system has been reported by Park et al.,20 where the authors suggested the formation of Li 2 P phase during the first discharge based on an electrochemical study. In the case of the well studied Li-Si system, the lithiation induces an electrochemical solid phase amorphisation, where the crystalline Si is consumed to form a Li x Si amorphous phase; 5 nevertheless, the equilibrium crystalline compounds are generally used as a first step in order to study the electromechanical process (See Ref. 12 and references therein).
Ab initio techniques have been shown to be successful in giving insights into a better understanding of different processes occurring in an electrode. 26 From total energies, important properties of an electrode like voltage profiles and volume change can be estimated. In addition, NMR parameters can be calculated for certain systems offering a powerful method to understand the local structure of the studied system as well as a way of complementing experimental studies.
In this work, we present an ab initio study of Li-P and Na-P compounds. We first perform a structure prediction study combining atomic species swapping along with ab initio random structure searching methods for the Li-P and Na-P systems. We report various new stable and metastable structures and suggest connections between lithium/sodium contents and expected ionic arrangements. Lithiation/sodiation processes are assessed by calculating average voltage profiles, electronic density of states and NMR chemical shifts of the ground state phases, allowing us to predict the local environment evolution of P under lithiation/sodiation. We conclude showing the effect of dopants on the electronic structure of Li-P compounds, where we propose doping the anode with aluminium in order to improve the anode performance.
Methods
Structure prediction was performed using the ab initio random structure searching method (AIRSS). 27 For a given system, AIRSS initially generates random structures which are then relaxed to local minima in the potential energy surface (PES) using DFT forces. By generating large numbers of relaxed structures it is possible to cover the PES of the system widely. Based on general physical principles and system-specific constraints, the search can be biased in a variety of sensible ways. 28 The phase space explored by the AIRSS method was extended by relaxing experimentally obtained crystal structures. All combinations of {Li,Na,K}-{N,P,As,Sb} crystal structures at different stoichiometries were obtained from the Inorganic Crystal Structure Database (ICSD). For each structure, the cations and anions were swapped to Li/Na and P, respectively, and then relaxed using DFT forces. The AIRSS + species swapping method has been successfully used for Li-Si, 29 Li-Ge 29,30 and Li-S 31 systems. Furthermore, a study on point defects in silicon has been presented 32,33 using the AIRSS method.
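The search loop itself can be summarized by the following minimal sketch, in which a stand-in one-dimensional "energy" plays the role of the DFT total energy and a crude gradient descent plays the role of the force-based geometry optimization; none of the names or calls below correspond to the actual AIRSS or CASTEP interfaces, the sketch only illustrates the generate-relax-rank idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Stand-in potential energy surface with several local minima."""
    return np.sin(3.0 * x) + 0.1 * x ** 2

def relax(x, step=0.01, n_steps=500):
    """Crude steepest descent (a proxy for a DFT geometry optimization)."""
    for _ in range(n_steps):
        grad = (energy(x + 1e-5) - energy(x - 1e-5)) / 2e-5
        x -= step * grad
    return x

# Generate many random "structures", relax each one, and rank by energy.
candidates = rng.uniform(-5.0, 5.0, size=50)
relaxed = np.array([relax(x) for x in candidates])
best = relaxed[np.argmin(energy(relaxed))]
print(f"lowest-energy minimum found at x = {best:.3f}, E = {energy(best):.3f}")
```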
AIRSS calculations were undertaken using the CASTEP DFT plane-wave code. 34 The gradient corrected Perdew Burke Ernzerhof (PBE) exchange-correlation functional 35 was used in all the calculations presented in this work. The core electrons were described using Vanderbilt "ultrasoft" pseudopotentials, and the Brillouin zone was sampled using a Monkhorst-Pack grid 36 with a k-point spacing finer than 2π × 0.05 Å −1 . The plane wave basis set was truncated at an energy cutoff value of 400 eV for Li-P and 500 eV for Na-P. The thermodynamical phase stability of a system was assessed by comparing the free energy of different phases. From the available DFT total energy of a given binary phase of elements A and B, E{A x B y }, it is possible to define a formation energy per atom, E f /atom = [E{A x B y } − x E{A} − y E{B}]/(x + y), where E{A} and E{B} are the energies per atom of the elemental reference phases. The formation energies of each structure were then plotted as a function of the B element concentration, C B = y/(x + y), starting at C B = 0 and ending at C B = 1. A convex hull was constructed between the chemical potentials at (C B , E f /atom) = (0, 0); (1, 0) by drawing a tie-line that joins the lowest energy structures, provided that it forms a convex function. This construction gives access to the 0 K stable structures, as the second law of thermodynamics demands that the (free) energy per atom is a convex function of the relative concentrations of the atoms (see Figure 1).
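A minimal sketch of this construction, together with the average-voltage expression used in the next paragraph, is given below; the energy values are placeholders, and the two-phase voltage formula V ≈ -[E(A x2 B) - E(A x1 B) - (x2 - x1) E(A)]/[(x2 - x1) e] is quoted here as the standard assumption, since the original equation is not reproduced in the text.

```python
def formation_energy_per_atom(E_AxBy, x, y, E_A, E_B):
    """E_f/atom = [E{AxBy} - x*E{A} - y*E{B}] / (x + y), with E_A, E_B per atom."""
    return (E_AxBy - x * E_A - y * E_B) / (x + y)

def lower_convex_hull(points):
    """Lowest-energy tie-line through (concentration, E_f/atom) points."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or above the new tie-line.
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def average_voltage(E1, x1, E2, x2, E_alkali):
    """Average voltage (V) between two hull phases A_x1 B and A_x2 B,
    assuming the standard two-phase expression with energies in eV."""
    return -(E2 - E1 - (x2 - x1) * E_alkali) / (x2 - x1)

# Placeholder (concentration, formation energy) points in eV/atom.
data = [(0.0, 0.0), (0.125, -0.21), (0.30, -0.52),
        (0.50, -0.61), (0.75, -0.78), (1.0, 0.0)]
print(lower_convex_hull(data))
```

Only the phases returned by the hull construction enter the voltage profile; intermediate compositions decompose into the two neighbouring hull phases.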
Average voltages for the structures lying on the hull were calculated from the available DFT total energies. For two given phases on the hull, A x 1 B and A x 2 B with x 2 > x 1 , the following two phase reaction is assumed, The voltage, V, is given by, 37 where it is assumed that the Gibbs energy can be approximated by the internal energy, as the pV and thermal energy contributions are small. 37 The low energy structures obtained by the AIRSS search were refined with higher accuracy using a k-points spacing finer than 2π × 0.03Å −1 and an energy cut-off of 650 eV for Li-P and 800 eV for Na-P and more accurate pseudopotentials 1 . The structures obtained from the ICSD were relaxed with the same level of theory and the formation energies and voltages were obtained. The same level of accuracy was used to calculate the 1 Pseudopotentials generated by the CASTEP on-the-fly generator: Li 1|1.2|10|15|20|10U:20(qc=6) Na 2|1.3|1.3|1.0|16|19|21|20U:30U:21(qc=8) P 3|1.8|2|4|6|30:31:32 nuclear magnetic shielding of the structures on the convex hull employing the gauge-includingprojector-augmented-wave (GIPAW) algorithm 38 implemented in CASTEP and the electronic density of states. The latter were calculated with the OptaDOS code 39 using the linear extrapolative scheme. 40,41 Phonon dispersion curves were calculated using Density Functional Perturbation Theory in CASTEP. 42 Norm-conserving pseudopotentials 2 were used, the Brillouin zone was sampled using a Monkhorst-Pack grid 36 with a k-point spacing finer than 2π × 0.03 Å −1 and the plane wave basis set was truncated at an energy cut-off of 1000 eV. The structures were fully relaxed at this level of accuracy. The NMR parameters and density of states of black P were calculated with the CASTEP semi empirical dispersion correction, 43 using the scheme of Grimme (G06). 44 Results Lithium phosphide Figure 1 shows the formation energy as a function of lithium concentration of the low-energy structures obtained by the search. The stable structures found on the convex hull, in increasing lithium concentration order, are black P-Cmca, LiP 7 -I4 1 /acd, 45 Li 3 P 7 -P2 1 2 1 2 1 , 46 LiP-P2 1 /c, 47 Li 3 P-P6 3 /mmc 48 and Li-Im3m. A novel DFT Li 4 P 3 -P2 1 2 1 2 1 phase is found 4 meV/f.u. above the convex hull, well within DFT accuracy. All the known Li-P phases are found on the convex hull, except for LiP 5 -Pna2 1 45 which is found 12 meV/f.u. from the convex tie-line in our 0 K DFT calculation. The average voltage profile was calculated between pairs of proximate stable structures relative to Li metal. A plot of the average voltages as a function of Li concentration is presented in Figure 2. Table 1 summarises the structures presented in new metastable structures, which are of importance when studying the lithiation process of the anode during cycling, as the anode is unlikely to reach thermodynamic equilibrium during charge and discharge. We identify that the structures can be categorised in four main regions according to their P ionic arrangement, as is highlighted in Figure 1. As the Li concentration is increased, the structures change as following: tubes, cages and 3-D networks → chains and broken chains → P dumbbells → isolated P ions. For 0 ≤ x ≤ 0.5 in Li x P structures are composed mainly by tubes, cages and 3-D networks where threefold P bonding is mainly favoured. The least lithiated phase found on the hull is LiP 7 which shows tubular helices of connected P 7 cages along the [001] axis. 
LiP 6 -R3m and LiP 5 -Pna2 1 exhibit relatively similar structures formed by a 3-D networks with the majority of the P ions threefold bonded. The next structure found on the convex hull is Li 3 P 7 , where the phosphorus tubes are broken, forming isolated P 7 cages dispersed in the 3D structure.
In the 0.5 < x < 1.33 region the structures are significantly different, tending to form chains and broken chains. The structure of Li x P x , x=5-9, has recently received attention in the context of inorganic double-helix structures, 51 where it was shown that AIRSS predicts the P2 1 /c symmetric Li 1 P 1 bulk phase; moreover, the stability of an isolated double-helix was demonstrated by phonon calculations. Two phases are found very close to The convex hull (tie-line) is constructed by joining the stable structures obtained by the searches. The convex hull has been divided in four main regions to guide the eye, highlighting the kind of ionic arrangement in each region. Selected structures are shown with green and purple spheres denoting Li and P atoms, respectively, with the purple lines indicating P-P bonds. For a full description of the phases, see Table 1. Table 1: Description of the experimental and predicted Li x P phases. We indicate with a star ( ) the stable phases which are found on the convex hull. We identify four main regions with different ionic arrangements; for 0 ≤ x ≤ 0.5 the structures show tubes, cages and 3-D networks composed of three and two P bonds, for 0.5 < x < 1.33 P chains and broken chains, for 1.33 < x ≤ 2 P dumbells and for concentrations larger than x = 2 the structures are mainly composed of isolated P ions. the hull in this region, Li 5 P 4 and Li 4 P 3 . Li 4 P 3 -P2 1 2 1 2 1 is an AIRSS structure with a formation enthalpy 4 meV/f.u. above the tie-line, a difference which is within DFT accuracy. Li 5 P 4 -C2/m was obtained by swapping ions from Na 5 As 4 50 and it is found 10 meV/f.u. above the convex hull. Both structures are formed by three (Li 4 P 3 ) and four-bonded (Li 5 P 4 ) in-plane chains, see Figure 1 for an illustration. We have explored the possible mechanical stability of Li 4 P 3 -P2 1 2 1 2 1 by performing a phonon calculation, the calculated phonon dispersion is presented in Figure 9. The stability of a structure in terms of lattice dynamics is confirmed by the absence of any imaginary frequency in the Brillouin zone. For 1.33 < x ≤ 2 two structures are found by AIRSS, Li 2 P-P2 1 /c and Li 3 P 4 -C2/m, which form P-P dumbbells. Dumbbell formation in Li-X (X=S,Si,Ge) systems, has played an important role in the interpretation of the electrochemical behaviour in terms of structure transformation. 29,30,52 For concentrations larger that x=2 the P structures are formed by isolated P ions. The most lithiated phase found on the convex hull is Li 3 P, 14 a phase which is generally observed at the end of discharge in electrochemical experiments. 9,20,24 Three more structures are found by the AIRSS searches for x > 3, Li 4 P, Li 5 P and Li 6 P, all of them composed of isolated P atoms.
The electronic densities of states (eDOS) of the structures found on the convex hull were calculated with the OptaDOS code [39] and are shown in Figure 4. All structures, except for Li, show a semiconducting-like eDOS, which is surprising, especially for the phases with high lithium concentration. The ability to measure NMR spectra experimentally during charge and discharge can be an extremely powerful tool to elucidate the structural evolution of the anode during lithiation [26]. We have calculated the phosphorus chemical shielding for the stable structures of the Li-P system. We have included the LiP₅-Pna2₁ chemical shielding calculation for comparison with the experimental data reported in Ref. 53. A plot of the correlation between the calculated and experimental NMR parameters of LiP₅ [53], Li₃P [23] and black P [25] is presented in Figure 5, where a good correlation is seen between experimental and calculated values.
Figure 4: Total electronic density of states (arbitrary units) of the Li-P phases found on the convex hull.
The resulting NMR parameters of all the structures are illustrated in Figure 6. A general trend is observed in the chemical shielding, which increases with the Li concentration in LiₓP. We identify three main regions in the chemical shielding described in Figure 6, which can be roughly related to the number of lithium and phosphorus nearest neighbours (see the caption of Figure 6 for a detailed description).

Figure 5: Correlation between the calculated ³¹P NMR chemical shieldings, σ_calc., and the experimental chemical shifts, δ_exp., referenced relative to an 85% H₃PO₄ aqueous solution, for LiP₅ [53], Li₃P [23] and black P [25]. The data were fitted to a linear function δ_exp. = α σ_calc. + σ_ref., with resultant fitting parameters α = −0.96 ± 0.1 and σ_ref. = 245.9 ± 34.1. The deviation of α from the ideal value of −1 is well known when correlating calculated shieldings with experimental shifts (see Ref. 54 for details); the obtained σ_ref. was used to reference the presented NMR results (see Figures 6 and 11).

Figure 7: Formation enthalpy per atom vs. the fractional sodium concentration in the Na-P compounds; labelled points include NaP₂, Na₂P₃, Na₃P₄, NaP, Na₅P₄ and Na₄P₃. The convex hull (tie-line) is constructed by joining the stable structures obtained by the searches.
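The referencing step described in the Figure 5 caption amounts to a linear fit between calculated shieldings and measured shifts, after which any new shielding can be converted to a predicted shift. A minimal sketch of that conversion is given below; the shielding/shift pairs are made-up placeholders rather than the values reported here.

```python
import numpy as np

# Hypothetical GIPAW shieldings (ppm) and matching experimental shifts (ppm).
sigma_calc = np.array([295.0, 520.0, 160.0])
delta_exp  = np.array([-35.0, -270.0, 18.0])

# Fit delta_exp = alpha * sigma_calc + sigma_ref (both parameters free).
alpha, sigma_ref = np.polyfit(sigma_calc, delta_exp, 1)
print(f"alpha = {alpha:.2f}, sigma_ref = {sigma_ref:.1f} ppm")

def shielding_to_shift(sigma, alpha=alpha, sigma_ref=sigma_ref):
    """Convert a calculated shielding (ppm) to a referenced chemical shift (ppm)."""
    return alpha * sigma + sigma_ref

print(f"predicted shift for sigma = 400 ppm: {shielding_to_shift(400.0):.1f} ppm")
```

With α fixed at its ideal value of −1 only σ_ref. would need to be fitted; the sketch keeps both parameters free, as in the caption above.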
Sodium phosphide
Na-P forms structures similar to those found for Li-P, as expected from their similar chemistry. However, the convex hull of the Na-P system, shown in Figure 7, exhibits two main differences: first, the Li₁P₁ phase has a lower formation energy than Na₁P₁ by approximately 0.125 eV; second, Li₃P has a lower formation energy than Li₁P₁ by 0.125 eV, whereas Na₃P has a higher formation energy than Na₁P₁ by 0.05 eV/f.u. These differences are manifested in the calculated average voltages (see Figures 8 and 2), where the Na-P voltage profile drops to lower values at high Na concentrations. The stable phases predicted by the DFT calculations are summarised in Table 2.
The average voltages calculated for Na-P are shown in Figure 8. The least sodiated structure found in the Na-P convex hull construction is a locally stable NaP₅-Pnma phase, which was obtained by swapping species from LiP₅ [45]. With increasing sodium content, two known phases are found on the convex hull, Na₃P₁₁-Pbcn [49] and Na₃P₇-P2₁2₁2₁ [46]. In the 0.45 < x < 1 region we find three structures with rather different ionic arrangements, exhibiting broken black P-like layers (NaP₂-C2/m [55]), P six-fold rings (Na₂P₃-Fddd [56]) and in-plane connected chains (Na₃P₄-R3c, predicted by AIRSS). For x > 1 the structures show arrangements similar to those in the Li-P system, although, unlike Li-P, Na-P does not seem to favour dumbbell formation. The Na₅P₄-C2/m structure, obtained by swapping atoms from Na₅As₄ [50], exhibits a layered structure consisting of Na sheets separated by four-fold bonded in-plane P chains. This new phase is predicted to be thermodynamically stable by our calculations. Furthermore, its calculated phonon dispersion curve confirms the stability of the phase in terms of lattice dynamics. As in the Li-P system, the Na-P phases exhibit semiconducting behaviour, except for the Na₅P₄ phase, which shows a finite value of the eDOS at the Fermi energy.

Table 2: Description of the experimental and predicted NaₓP phases. The stable phases found on the convex hull are indicated with a star (★). The Na-P structures show ion arrangements similar to those observed in Li-P (see Figure 1 for illustration), with differences in the 0.45 < x < 1 region and the absence of P dumbbells.
Figure 10: Total electronic density of states (arbitrary units) of the Na-P phases found on the convex hull (black P, Na₃P₁₁, Na₃P₇, NaP, Na₅P₄, Na₃P and Na), plotted against E − E_F (eV). The Na-P phases exhibit a semiconductor-like eDOS, except for Na₅P₄, which has a finite value of eDOS at the Fermi level.
Figure 11: Calculated ³¹P NMR chemical shifts for various Na-P compounds, showing the change in chemical shielding as the local environment of phosphorus changes. For visualisation purposes a Lorentzian broadening is assigned to the calculated ³¹P NMR parameters. For each crystallographic site a cluster with a radius of 3 Å is shown and labelled accordingly. The background has been coloured as in Figure 6 to emphasise regions of the chemical shift associated with specific atomic arrangements. Despite the similarities to Li-P, it may be more difficult to differentiate the mid- and high-sodiated regions experimentally, owing to more similar chemical shieldings.

Aluminium doping of phosphorus

In order to suggest a way of improving the electrical conductivity of phosphorus anodes, we have tested the effect of different extrinsic dopants on
the electronic DOS of Li-P compounds by performing interstitial-defect AIRSS searches. The initial structures were generated as the underlying perfect crystal plus the interstitial element positioned randomly; the ionic positions were then relaxed while keeping the lattice vectors fixed. The electronic DOS of the lowest-energy structures was then calculated using OptaDOS. Silicon and aluminium interstitial defect searches were carried out in a 2 × 2 × 1 LiP supercell composed of 32 LiP formula units; we denote these structures as 32LiP + 1Si and 32LiP + 1Al, respectively. The eDOS calculations revealed that the aluminium point defect introduces electronic states at the Fermi energy (E_F), whereas the silicon defect introduces states within the band gap, with the eDOS remaining zero at E_F. To further investigate the effect of Al doping, AIRSS searches were performed in larger Li-P cells with different Li concentrations. The cells were chosen to be large enough to allow a maximum stress of ca. 0.5 GPa. Figure 12 shows the resulting eDOS of 8LiP₇ + 1Al, 64LiP + 1Al and 36Li₃P + 1Al for the lowest-energy structure resulting from each search. From Figure 12 we learn that the resulting Li-P compounds, at different Li concentrations, exhibit a finite electronic DOS at E_F, suggesting that doping phosphorus with aluminium could increase the electronic conductivity of the anode and thus improve its performance.
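As an illustration of how such defect searches can be seeded, the sketch below places a single dopant atom at a random position inside a fixed host supercell, rejecting positions that fall too close to existing atoms. It is a generic sketch, not the workflow used here: the cell, the host coordinates, the 1.2 Å cut-off and the function name `random_interstitial` are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

cell = np.diag([11.2, 11.2, 5.6])     # hypothetical 2 x 2 x 1 supercell (Angstrom)
host_frac = rng.random((64, 3))       # placeholder fractional coords of 32 LiP f.u. (64 atoms)

def random_interstitial(host_frac, cell, min_dist=1.2, max_tries=1000):
    """Fractional position for a dopant at least min_dist (A) from every host atom."""
    host_cart = host_frac @ cell
    inv_cell = np.linalg.inv(cell)
    for _ in range(max_tries):
        frac = rng.random(3)
        d = host_cart - frac @ cell
        d -= np.round(d @ inv_cell) @ cell        # minimum-image convention
        if np.linalg.norm(d, axis=1).min() > min_dist:
            return frac
    raise RuntimeError("no interstitial site found")

seeds = [random_interstitial(host_frac, cell) for _ in range(10)]
print(f"generated {len(seeds)} starting points for constant-cell relaxation")
```

Each accepted position becomes one starting point for a relaxation with the lattice vectors held fixed, mirroring the 32LiP + 1Al and 32LiP + 1Si searches described above.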
Discussion
We have presented a study of the Li-P and Na-P systems using AIRSS and atomic species swapping of ICSD structures. We have shown that the combination of the two methods gives access not only to the ground-state structures, but also to metastable phases found close to the convex hull. These structures might form at room temperature and under non-equilibrium conditions, e.g., during lithiation/sodiation. The aim of this work is to elucidate the structural evolution of phosphorus anodes during lithiation/sodiation and to give insights into their electronic structure; some of these aspects are discussed below.
The method of AIRSS + atomic species swapping has been shown to predict a variety of locally stable phases in the Li-P system (see Table 1 for a full description). Combining the known phases with those predicted in this work, we are able to catalogue phosphorus ionic arrangements according to their lithium concentration. This has proved extremely valuable when attempting to understand electrochemical processes, as recently shown in Ref. 52 for Li-S batteries. Our findings suggest that the lithiation mechanism proposed in Ref. 20, black P → LiₓP → LiP → Li₂P → Li₃P, could be reinterpreted in terms of tubes, cages and 3-D networks → chains and broken chains → P dumbbells → isolated P ions. Moreover, phases found by our structure searching can clarify possible intermediate structures in a more robust way. Park et al. [20] predicted the existence of a metastable Li₂P structure based on the appearance of an XRD peak at 2θ ≈ 22.5°, which corresponded to a molar ratio of Li:P of 2 at 0.63 V. Our Li₂P-P2₁/c structure exhibits a predominant high-intensity peak at 2θ ≈ 25°, a discrepancy which can be attributed to the difference between the DFT and experimental lattice parameters.
The convex hull of the Na-P system includes a locally stable NaP₅-Pnma phase lying very close to the tie-line; this phase has been synthesised at high pressure [58]. A new phase, Na₅P₄, with C2/m symmetry, is predicted to be stable by the convex hull construction. The phonon dispersions of the stable Na₅P₄-C2/m phase and of the Li₄P₃-P2₁2₁2₁ metastable phase found very close to the convex hull suggest that these predicted structures are mechanically stable and might be observed in future experiments.
NMR chemical shielding calculations reveal a general trend in the change of the local environment in both the Li-P and Na-P systems as the lithium/sodium content is increased. For Li-P, the chemical shielding range was roughly divided into three regions, each of which was correlated with distinct local ionic arrangements. These calculations were motivated by the experimental ability to measure NMR shifts, for which the assignment of the local environments of the probed ion can be particularly challenging. Na-P shows a similar trend, but owing to overlap between the regions it may be more difficult to assign experimental data.
Li-P and Na-P exhibit relatively high average voltage profiles, which in principle lead to a lower voltage for the full cell and a reduced energy density. The Na-P voltage profile differs from the Li-P profile: the voltage drops to 0.28 V for the Na₃P phase, whereas for Li-P it drops to 0.8 V at the same lithium concentration. Despite this disadvantage, high voltages prevent the formation of Li dendrites, thus enhancing the safety of the battery. A second advantage of high voltages versus lithium metal is the prevention of electrochemical reduction of the electrolyte as the SEI forms, which can improve the cyclability of the battery [3]. Despite several advantages, pure phosphorus shows relatively poor cyclability [20,21]. Park et al. [20] attributed the low performance of phosphorus anodes to its low electronic conductivity. Sun et al. [21] showed that black P samples exhibit good conductivity and put the low performance of the anode down to the non-crystallinity of the samples. Our results show that even for low concentrations of Li, Li-P compounds can exhibit a relatively large band gap, e.g. 1.7 eV for LiP₇, compared with the experimental 0.33 eV of black P, hinting that the conducting properties of black P may worsen as the anode is lithiated. In order to address this, we have sought to reduce the band gap of Li-P compounds by doping them with aluminium. Furthermore, we have performed a preliminary study of the effect of Ge and Ga doping on the electronic structure of Li-P compounds, where the results show behaviour similar to that of Si and Al, respectively. 32LiP + 1Ga exhibits a larger eDOS at E_F than 32LiP + 1Al, 22.88 and 10.78 electrons/cell, respectively. However, the lighter weight and high abundance of aluminium make it a promising dopant.
Summary
We have presented above an ab initio study of phosphorus anodes for Li- and Na-ion batteries and proposed a theoretical lithiation/sodiation process using the AIRSS structure prediction method. Our searches reveal the existence of a variety of metastable structures which can appear in out-of-equilibrium processes such as charge and discharge. In particular, a Li₄P₃-P2₁2₁2₁ AIRSS structure is found to lie very close to the convex hull, and a new Na₅P₄-C2/m structure obtained by the species-swapping method is found to be stable at 0 K. The dynamical stability of these structures was probed by phonon calculations. Our calculations show a high theoretical voltage vs. Li metal for Li-P, which makes phosphorus a good candidate for safe anodes at high charge rates. We have calculated ³¹P NMR chemical shieldings and related them to local structural arrangements, which, combined with future ³¹P NMR experiments, can help elucidate lithiation and sodiation mechanisms. Finally, we have studied the effect of dopants on the electronic structure of Li-P compounds, and we conclude that doping the anode with aluminium could improve its electrochemical behaviour.
Value of radiomics in differential diagnosis of chromophobe renal cell carcinoma and renal oncocytoma
Purpose: To explore the value of CT-enhanced quantitative features combined with machine learning for the differential diagnosis of renal chromophobe cell carcinoma (chRCC) and renal oncocytoma (RO).

Methods: Sixty-one cases of renal tumors (chRCC = 44; RO = 17) that were pathologically confirmed at our hospital between 2008 and 2018 were retrospectively analyzed. All patients had undergone preoperative enhanced CT scans including the corticomedullary (CMP), nephrographic (NP), and excretory (EP) phases of contrast enhancement. Volumes of interest (VOIs), including the lesions on the images, were manually delineated using the RadCloud platform. A LASSO regression algorithm was used to screen the image features extracted from all VOIs. Five machine learning classifiers were trained to distinguish chRCC from RO using a fivefold cross-validation strategy. The performance of the classifiers was mainly evaluated by the area under the receiver operating characteristic (ROC) curve and accuracy.

Results: In total, 1029 features were extracted from CMP, NP, and EP. The LASSO regression algorithm was used to screen out the four, four, and six best features, respectively, and eight features were selected when CMP and NP were combined. All five classifiers had good diagnostic performance, with area under the curve (AUC) values greater than 0.850; the support vector machine (SVM) classifier showed the best performance, with a diagnostic accuracy of 0.945 (AUC 0.964 ± 0.054; sensitivity 0.999; specificity 0.800).

Conclusions: Accurate preoperative differential diagnosis of chRCC and RO can be facilitated by a combination of CT-enhanced quantitative features and machine learning.
Introduction
The incidence of renal cell carcinoma is increasing worldwide [1]. Chromophobe renal cell carcinoma (chRCC) is second in frequency only to clear cell and papillary renal cell carcinoma [1][2][3]. Renal oncocytoma (RO) is a benign renal tumor, accounting for about 3-7% of all renal tumors [4,5]. Medical imaging plays an important role in the clinical management of renal tumors, such as detection of renal tumors, prediction of benign versus malignant behavior, grading, and surgical treatment [6,7]. Studies have shown that chRCC and RO not only overlap in morphological and immunological manifestations, but also have similar imaging manifestations [8,9]. Although some researchers consider a central scar to be characteristic of RO, it is present in only about 33% of cases [4,6], and a central scar is also seen in a few cases of chRCC [8].
Therefore, it is clearly not possible to distinguish the two pathological types by the presence or absence of a central scar alone. Some reports suggest that there are differences in the degree of CT enhancement between the two [9]: the enhancement of chRCC is slightly higher than that of RO, but the difference in CT values is small and greatly influenced by subjective factors. Other studies have shown that many MR findings for chRCC and RO are quite similar, such as a central scar, segmental enhancement inversion, and the enhancement characteristics of each phase, none of which can accurately identify the two [10].
At present, the differential diagnosis of benign and malignant renal tumors still depends on pathology, and percutaneous renal biopsy is the main preoperative examination. However, solid tumors show spatial and temporal heterogeneity in their genetic and molecular pathways, microenvironment, tissues, and organs, limiting the accuracy and representativeness of biopsy results. With the innovation and development of medical imaging technologies, images can capture characteristics of both tissue anatomy and physiological function. The non-invasive, comprehensive, and quantitative nature of imaging overcomes the shortcomings of biopsy and can efficiently detect tumor heterogeneity [11,12]. In 2012, Lambin et al. [11] proposed the concept of radiomics for the first time, based on the heterogeneity of solid tumors. By extracting features from high-throughput image data, more reliable feature information can be obtained than from visual observation alone. Differential diagnosis between chRCC and RO is difficult with conventional diagnostic methods, and the use of radiomics for their differential diagnosis is rare. Accurate preoperative differentiation between chRCC and RO can aid better management of patients and help develop follow-up strategies. Additionally, it can mitigate the requirement for, and risk of, radical nephrectomy in patients with RO. Therefore, in this study, we used a radiomics-based approach to analyze chRCC and RO and investigated the possibility of a higher preoperative diagnostic accuracy.
Patients
A retrospective analysis of 44 cases of chRCC and 17 cases of RO was performed. All cases were confirmed pathologically at our hospital between 2008 and 2018, and all patients had undergone a preoperative enhanced CT scan. There were 31 men and 13 women aged 22-79 years (average age 50.8 years) in the chRCC group and 9 men and 8 women aged 35-79 years (average age 54.9 years) in the RO group. Clinical data were unavailable for 8 patients; of the remaining 53, most presented with non-specific signs (27/53), followed by flank pain on the affected side or hematuria (20/53).
CT examination
Most of the patients underwent multi-phase enhanced CT scanning, including a plain scan and scans in the corticomedullary (CMP), nephrographic (NP), and excretory (EP) phases. In five of the 61 cases, the patients did not undergo EP scanning. Images for the 61 patients were acquired on MDCT systems (LightSpeed VCT, GE Healthcare, Japan; SOMATOM Definition Flash, Siemens Healthcare, Germany). The scanning range extended from the top of the diaphragm to the level of the iliac wing. The scanning parameters were a tube voltage of 120 kV and a slice thickness of 5-8 mm. After the abdominal plain scan, a contrast agent was injected intravenously using a high-pressure injector at a flow rate of 3 ml/s, and three-phase enhanced scanning was performed at 30-200 s after injection.
Tumor segmentation
The original Digital Imaging and Communications in Medicine (DICOM) images were imported into a post-processing platform (Big Data Intelligent Analysis Cloud Platform, Huiying Medical Technology Co., Ltd., Beijing). A radiologist manually delineated the region of interest (ROI) along the edge of the lesion, layer by layer, on each phase of the contrast-enhanced CT images, and the volume of interest (VOI) of the lesion was automatically generated by the computer. Another senior radiologist reviewed the delineations. The criteria for delineation were as follows: axial CT images were evaluated, except for the two planes on which the lesion first appeared and was about to disappear; the ROI followed the boundary of the renal mass on all planes, including necrosis, cystic degeneration, and hemorrhage, but excluded normal renal tissue and perirenal fat. Before extracting the VOI from the ROIs, the window width and window level were adjusted to achieve the best contrast between the mass and the surrounding normal renal parenchyma; the window width and level were about 350 and 50 Hounsfield units (HU), respectively.
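The window settings mentioned above correspond to clipping the CT numbers to a fixed HU range before display; a minimal sketch of that mapping is shown below. The array values and the helper name `apply_window` are placeholders for illustration and are not part of the study's pipeline.

```python
import numpy as np

def apply_window(hu, width=350.0, level=50.0):
    """Clip a CT image (in HU) to the display window and rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0   # -125 HU to +225 HU here
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Hypothetical 2 x 3 patch of HU values purely for illustration.
patch = np.array([[-900.0, 30.0, 80.0],
                  [300.0, 45.0, -50.0]])
print(apply_window(patch))
```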
Feature extraction and selection
After the VOI of each lesion was delineated, high-throughput features based on feature classes and filter classes were automatically extracted on the aforementioned RadCloud platform. The features can be divided into three categories: (I) intensity statistics, such as peak value, mean value, and variance, which quantitatively describe the distribution of voxel intensities in the CT images; (II) shape features, such as volume, surface area, and sphericity, which reflect the three-dimensional shape and size of the delineated region; and (III) texture features, including the gray-level co-occurrence matrix, gray-level run length matrix, and gray-level size zone matrix, which quantify the heterogeneity of the selected region. Additionally, Laplacian-of-Gaussian, exponential, logarithmic, square, square root, and wavelet filters can be applied before calculating the image intensity and texture features. The wavelet filters used included wavelet-LHL, wavelet-LHH, wavelet-HLL, wavelet-LLH, wavelet-HLH, wavelet-HHH, wavelet-HHL, and wavelet-LLL.
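As a concrete illustration of the texture class, the sketch below computes a small gray-level co-occurrence matrix (GLCM) by hand for one in-plane offset and derives a single contrast value from it. The tiny quantized image and the function name are invented for the example and are not part of the RadCloud feature set.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                m[img[r, c], img[rr, cc]] += 1
                m[img[rr, cc], img[r, c]] += 1     # symmetric counting
    return m / m.sum()

# Hypothetical 4 x 4 image already quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1],
                [3, 2, 2, 1],
                [3, 3, 1, 0]])
p = glcm(img, levels=4)
i, j = np.indices(p.shape)
print("GLCM contrast:", np.sum(p * (i - j) ** 2))
```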
First, all radiomic features were scaled using the StandardScaler function with Min-Max scaling, mapping each set of feature values to the range [0, 1]. Then, a fivefold cross-validation was performed on the scaled features, and the optimal λ parameter was obtained as the minimum of the average mean squared error over 1000 iterations. Next, the least absolute shrinkage and selection operator (LASSO) feature selection algorithm was used to select the relevant features based on the optimal λ, and a coefficient was calculated for each feature; the radiomic features with non-zero coefficients were retained. The LASSO algorithm can effectively reduce the dimensionality of the feature space and select the most meaningful features [13,14]. Finally, a t test was applied to each selected feature to compare chRCC and RO patients, and the corresponding p values were calculated.
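A minimal scikit-learn version of this selection step is sketched below. The feature matrix and labels are random placeholders, and MinMaxScaler is used for the [0, 1] mapping described above (the text also names StandardScaler; only the Min-Max behaviour is reproduced here), so the printed λ and feature count are purely illustrative.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
X = rng.random((61, 1029))          # hypothetical radiomic feature matrix (patients x features)
y = rng.integers(0, 2, size=61)     # hypothetical labels: 0 = RO, 1 = chRCC

X_scaled = MinMaxScaler().fit_transform(X)        # map every feature to [0, 1]

# Fivefold cross-validation over a path of candidate lambdas (alphas);
# the alpha minimizing the mean CV error is kept.
lasso = LassoCV(cv=5, random_state=42).fit(X_scaled, y)
selected = np.flatnonzero(lasso.coef_)            # features with non-zero coefficients
print(f"optimal lambda: {lasso.alpha_:.4f}; {selected.size} features retained")
```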
Classifier training
Five classifiers, k-nearest neighbors (kNN), support vector machine (SVM), random forest (RF), logistic regression (LR), and multi-layer perceptron (MLP), were trained using fivefold cross-validation, which divides the data into five parts, holds out each part in turn for testing while training on the remaining four, and estimates the accuracy of the algorithm as the mean of the results of the five rounds. From these, the best model for distinguishing chRCC from RO was selected. Finally, the performance of each classifier was validated and evaluated; the evaluation indicators included the area under the receiver operating characteristic (ROC) curve (AUC), sensitivity, specificity, accuracy, precision, recall, and F1-score.
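The corresponding training loop can be written in a few lines with scikit-learn, as sketched below; it reuses the placeholder `X_scaled`, `y`, and `selected` from the previous sketch, so the printed AUC and accuracy values are meaningless except as a demonstration of the workflow.

```python
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X_sel = X_scaled[:, selected] if selected.size else X_scaled   # selected features only

models = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "RF":  RandomForestClassifier(random_state=42),
    "LR":  LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000, random_state=42),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in models.items():
    scores = cross_validate(model, X_sel, y, cv=cv, scoring=["roc_auc", "accuracy"])
    print(f"{name}: AUC = {scores['test_roc_auc'].mean():.3f}, "
          f"accuracy = {scores['test_accuracy'].mean():.3f}")
```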
Feature selection of radiomics
A total of 1029 image features were extracted from each enhancement phase for each patient. The optimal λ parameters for the CMP, NP, and EP image features, and for the combination of CMP and NP image features, were obtained (Fig. 2). The LASSO algorithm was used to reduce the dimensionality of these high-dimensional features based on the optimal λ parameters, and the best features were screened out; these were mainly texture and intensity statistics features, while only one morphological feature was screened out from the EP images. The combination of CMP and NP resulted in eight screened features out of 2058, including five texture features and three intensity statistics features. The selected features and the corresponding p values are shown in Table 1. A radiomics set was built using the optimum features, which correspond to the optimal alpha (λ) value and were extracted according to their coefficients.
Diagnostic performance of radiomics models
As shown in Table 2, the AUC values of the radiomics models, obtained with fivefold cross-validation, show that all models achieved good diagnostic results, with AUC values greater than 0.850, SVM being the best. The ROC curves of the SVM classifier at the different enhancement stages are shown in Fig. 3. Among the three enhancement phases (CMP, NP, and EP) and the CMP + NP combination, four classifiers (kNN, SVM, LR, and MLP) obtained their best discriminative results using the combined CMP and NP features; only the RF classifier obtained its best discriminative effect when analyzing the NP-phase features. Table 3 compares the results of the five classifiers; the evaluation indexes included AUC, sensitivity, specificity, accuracy, precision, recall, and F1-score.
Discussion
In 2004, the World Health Organization formally classified chRCC as a new pathological type of renal tumor. The incidence of chRCC is second only to that of renal clear cell and renal papillary cell carcinomas [3]; moreover, it has metastatic potential. RO is a benign tumor with a good prognosis [5]. Presently, surgical treatment, including partial and radical nephrectomy, is an effective method for treating localized renal tumors. Radical nephrectomy can lead to an increased risk of chronic kidney disease and is associated with increased cardiovascular morbidity and mortality. Compared with radical nephrectomy, partial nephrectomy can preserve part of the renal function, reduce overall mortality, and reduce the incidence of cardiovascular disease [2]. Therefore, radical nephrectomy should be avoided when nephron-sparing surgery is achievable. Percutaneous renal biopsy is the most commonly used preoperative examination method and has a 97% accuracy rate for distinguishing malignant renal masses [15]. However, the diagnosis of chRCC and RO on percutaneous renal biopsy presents difficulties [16], and chRCC and RO have many overlapping imaging features [17]. Controversies also exist regarding the imaging manifestations of the two kinds of tumors. For example, Rosenkrantz [10] stated that MRI features including fat, hemorrhage, margin of the mass, perirenal fat infiltration, renal vein tumor thrombus, enhancement uniformity, vascular proliferation, central scar, and segmental enhancement inversion cannot be used to distinguish between chRCC and RO. Wu et al. [9] reported that cases of RO present more instances of central scar, radial enhancement, and segmental enhancement inversion than chRCC on contrast-enhanced CT. Kim et al. [18] asserted that differential diagnosis between different tumor types can be achieved by CT enhancement: most cases of chRCC show homogeneous enhancement, whereas most renal clear cell and papillary cell carcinomas show heterogeneous enhancement. Additionally, RO is characterized by homogeneous enhancement of the solid mass. The enhancement of both RO and chRCC at each stage is lower than that of the normal renal cortex; however, the enhancement of RO is more prominent than that of chRCC [9]. Thus, in clinical practice, it is difficult to distinguish chRCC from RO by visual image interpretation alone, and this study therefore used radiomics to differentiate chRCC from RO.
Texture analysis refers to the process of extracting texture feature parameters through image processing techniques so as to obtain a quantitative or qualitative description of texture. This technique can detect subtle differences that cannot be perceived by the naked eye and is more objective for tumor discrimination. Because chRCC and RO are relatively rare compared with renal clear cell and renal papillary cell carcinoma, radiomic studies of renal tumors have focused on the relatively common tumor types. Studies on the most frequently occurring renal clear cell carcinoma have addressed different aspects such as preoperative diagnosis [19][20][21][22], tumor grade [23], prognostic evaluation [24], and molecular analysis of cancer genes [25][26][27]. Yu et al. [20] extracted the texture features of four types of renal tumors, including renal clear cell carcinoma, renal papillary cell carcinoma, chRCC, and RO. The tumors were classified by an SVM classifier, and the histogram feature median demonstrated an AUC of 0.882 for differentiating chRCC from RO. In the present study, SVM was used to classify the features screened from the combination of CMP and NP, and the AUC for the differential diagnosis between chRCC and RO was as high as 0.964, which is better than previously reported results. Zhang et al. [19] combined several texture features, including SD, entropy, mean positive pixels, and kurtosis, to differentiate renal clear cell carcinoma from non-clear cell carcinoma; the AUC was 0.94 ± 0.03 and the accuracy was 0.87 (sensitivity = 89%, specificity = 92%). Similar methods were used to differentiate between renal papillary cell carcinoma and chRCC, with a differential diagnostic accuracy of 78%. Most of the subjects in the above studies had malignant renal tumors, and no benign ROs were included in the comparisons. Thus, the present study, which compares chRCC with RO, has greater clinical significance.
Considering that tumor masses may be isodense on plain CT scans, errors can easily occur when delineating an ROI; therefore, this study analyzed the contrast-enhanced images and did not analyze the plain CT images. Some studies, such as that by Hodgdon et al. [21], analyzed only plain CT images of renal tumors, using plain-scan texture features and subjective visual features to differentiate fat-poor angiomyolipoma from other renal tumors; the accuracy of the classification methods based on texture features with an SVM classifier (about 83-91%) was higher than that of the radiologists' subjective judgment of the tumors. Schieda et al. [28] did not analyze the vascular characteristics of renal masses according to the nuclear grading system for chRCC; therefore, only plain CT images were used for the radiomic analysis of clinical tumor grading. Importantly, to ensure the accuracy of boundary delineation on plain CT scans, it is still necessary to use the contrast-enhanced images as a reference. Kocak et al. [23] analyzed the influence of different edge segmentation methods on feature selection and classification performance, including contour-focused segmentation and contour shrinkage by 2 mm. Their results show that the latter method extracts more texture features, whereas the former has better feature reproducibility and better classification performance for nuclear-grading-based classification of renal clear cell carcinoma. In the present study, manual ROI extraction was performed to segment the edge contour of the mass on each transverse section. Recently, a variety of mathematical techniques have been used in radiomics to quantify image texture, including statistical, Fourier, and wavelet analysis, and these have been applied to the study of a variety of tumors. Varghese et al. [22] used a multi-phase CT fast Fourier transform index to analyze contrast-enhanced solid, fat-poor renal masses and obtained good classification results when distinguishing benign from malignant renal masses, differentiating RO from chRCC, and RO from lipid-poor angiomyolipoma (AUC > 0.7).
Bektas et al. [29] used different machine learning classifiers, such as SVM, MLP, RF, kNN, and naive Bayes, for predicting Fuhrman nuclear grade of clear cell renal cell carcinomas (ccRCCs), and the best model was created using SVM (AUC = 0.851, accuracy = 0.913). Lee et al. [30] combined different feature selection methods and different feature classifiers, which included SVM, RF, kNN and LR, to distinguish benign fat-poor angiomyolipoma from malignant ccRCC. kNN and SVM classifiers with ReliefF feature selection achieved the best accuracy of 72.3 ± 4.6% and 72.1 ± 4.2%, respectively. The results of this study show that five classifiers have good diagnostic performance in feature classification methods (accuracy > 0.89, AUC > 0.90). One of the best models for the differential diagnosis between chRCC and RO was the use of SVM to classify the features screened by the combination of CMP and NP. The accuracy was found to be as high as 0.945, and the AUC was 0.964 ± 0.054. We suggest the use of radiomics of enhanced CT images for differentiating between chRCC and RO.
This study has some limitations. First, it was a single-center study with a small sample size; notably, there were relatively few RO cases. A multi-center study of these rare tumors should be undertaken when conditions allow. Second, the CT equipment was not uniform and the scanning parameters differed, which may have affected the repeatability of the results. Third, the failure to analyze plain CT images may have led to the omission of some information internal to the masses. Considering that tumor masses may be isodense on plain CT scans, errors could have occurred when describing the ROIs for some characteristic information; to address this issue, only the contrast-enhanced phase images were analyzed.
In summary, we established machine learning models that can distinguish chRCC from RO on enhanced CT images. These models are expected to help clinicians make better diagnoses and devise improved treatment strategies. Our results indicate that radiomics can accelerate the development of personalized therapy.
Multilayer observation and estimation of the snowpack cold content in a humid boreal coniferous forest in eastern Canada
Cold content (CC) is an internal energy state within a snowpack and is defined by the energy deficit required to attain the isothermal snowmelt temperature (0°C). For any snowpack, fulfilling the cold content deficit is a prerequisite for the onset of snowmelt. The cold content of a given snowpack thus plays a critical role because it affects both the timing and the rate of snowmelt. Estimating the cold content is a labour-intensive task, as it requires measuring in-situ snow temperature and density. Hence, few studies have focused on characterizing this snowpack variable. This study describes the multilayer cold content of a snowpack and its variability across four sites with contrasting canopy structures within a coniferous boreal forest in southern Québec, Canada, throughout winter 2017-18. The analysis was divided into two steps. In the first step, the observed CC data from weekly snowpits, covering 60% of the snow cover period, were examined. In the second step, a reconstructed time series of CC was produced and analyzed to highlight the high-resolution temporal variability of CC over the full snow cover period. To accomplish this, the Canadian Land Surface Scheme (CLASS; featuring a single-layer snow model) was first implemented to obtain simulations of the average snow density at each of the four sites. Next, an empirical procedure was used to produce realistic density profiles, which, when combined with continuous in situ snow temperature measurements from an automatic profiling station, provides a time series of CC estimates at half-hour intervals for the entire winter. At the four sites, snow persisted on the ground for 218 days, with melt events occurring on 42 of those days. Based on snowpit observations, the largest mean CC (−2.62 MJ m⁻²) was observed at the site with the thickest snow cover. The maximum difference in mean CC between the four study sites was −0.47 MJ m⁻², representing a site-to-site variability of 20%. Before analyzing the reconstructed CC time series, a comparison with snowpit data confirmed that CLASS yielded reasonable estimates of the snow water equivalent (SWE) (R² = 0.64 and percent bias (Pbias) = −17.1%), bulk snow density (R² = 0.71 and Pbias = 1.6%), and bulk cold content (R² = 0.90 and Pbias = −2.0%). A snow density profile derived using an empirical formulation also provided reasonable estimates of cold content (R² = 0.42 and Pbias = 5.17%). Thanks to these encouraging results, the reconstructed, continuous CC series could be analyzed at the four sites, revealing the impact of rain-on-snow and cold air pooling episodes on the variation of CC. The continuous multilayer cold content time series also provided information about the effect of stand structure, local topography, and meteorological conditions on cold content variability. Additionally, a weak relationship between canopy structure and CC was identified.
Introduction
The use of spatially distributed, process-based (physical) hydrological models has substantially improved decision-making in the area of water resources management (Wigmosta et al., 2002). The snow processes included in such models rely on the energy balance (EB) approach, since snow accumulation and melt depend on the exchanges of energy and mass between the snowpack and its surrounding environment (soil, atmosphere, and vegetation). The concept of the snowpack energy budget was first introduced by the U.S. Army Corps of Engineers (1956). Since then, the single bulk layer representation (e.g., Wigmosta et al., 1994) has evolved into multilayer schemes (Gouttevin et al., 2015; Koivusalo et al., 2001; Lehning et al., 2002; Vionnet et al., 2012). Recent studies have looked at the sources of uncertainty associated with snow models (Essery et al., 2013; Rutter et al., 2009) and revealed the importance of including some key state variables, particularly cold content, in their modelling schemes.
Cold content (CC) is the amount of energy required for a snow cover to reach 0°C over its entire depth; any additional energy input translates into melting. By definition, CC is a linear function of the snow water equivalent (SWE) and the snowpack temperature, and is defined by

CC = ci ρw SWE (Ts − Tm), with SWE = (ρs / ρw) HS,     (1)

where CC is the cold content (MJ m⁻²), ci is the specific heat of ice (2.1 × 10⁻³ MJ kg⁻¹ °C⁻¹), ρs is the snow density (kg m⁻³), ρw is the density of water (kg m⁻³), HS is the snow depth (m), Tm is the melting temperature (0°C), and Ts is the snowpack temperature (°C). Thus, CC ranges from −∞ to 0 MJ m⁻², meaning that the larger the absolute value of CC, the more energy is required for the snowpack to eventually reach a uniform temperature of 0°C over its entire depth.
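Applied layer by layer, Eq. (1) reduces to a sum over depth increments; the sketch below shows that bookkeeping for one hypothetical three-layer profile. The temperatures, densities, and layer thicknesses are placeholders, not observations from the study sites.

```python
import numpy as np

C_ICE = 2.1e-3      # specific heat of ice (MJ kg-1 degC-1)
T_MELT = 0.0        # melting temperature (degC)

def cold_content(depths_m, densities, temps_c):
    """Total and per-layer cold content (MJ m-2) of a layered snowpack."""
    depths = np.asarray(depths_m)        # layer thicknesses (m)
    rho = np.asarray(densities)          # layer densities (kg m-3)
    t = np.asarray(temps_c)              # layer temperatures (degC)
    cc_layers = C_ICE * rho * depths * (t - T_MELT)
    return cc_layers.sum(), cc_layers

# Hypothetical profile (bottom to top), three 10-cm layers.
total, per_layer = cold_content([0.1, 0.1, 0.1],
                                [250.0, 200.0, 120.0],
                                [-1.0, -4.0, -12.0])
print(per_layer)                         # MJ m-2 for each layer
print(f"total CC = {total:.2f} MJ m-2")
```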
CC plays a central role in delaying snowmelt (Molotch et al., 2009), as a deep, dense, and cold snowpack requires a substantial amount of energy for the snow to reach 0°C and initiate melt. As such, understanding CC is essential for the accurate forecasting of water availability in demanding sectors such as agricultural systems, urban water supply (Barnett et al., 2005), and hydropower generation (Schaefli et al., 2007).
The exact determination of CC requires direct observations of the snowpack temperature, density, and depth, usually collected from manual snow surveys. As manual collection is tedious and demanding, few datasets that describe snowpack CC are available. For lack of a better approach, CC is often estimated using one of the following three methods: an empirical formulation that relies solely on air temperature (DeWalle and Rango, 2008; Seligman et al., 2014), an empirical formulation based on air temperature and precipitation (Andreadis et al., 2009; Wigmosta et al., 1994), or as a residual from an energy balance model (Marks and Winstral, 2001). Jennings et al. (2018) resorted to snowpit data, collected at alpine and subalpine sites within the Rocky Mountains in Colorado, to study CC. They reported a weak relationship between CC and the cumulative mean of air temperature. They also found that newly fallen snow was responsible for 84.4% and 73.0% of the daily gains in CC for alpine and subalpine snowpacks, respectively. Of note, a slight contrast was observed by Seligman et al. (2014), who reported that the contribution of spring snow storms to CC had a smaller impact on delaying snowmelt than the porous space from dry fresh snow. However, Jennings et al. (2018) reported shifts in the onset of snowmelt of 5.7 h and 6.7 h at alpine and subalpine sites, respectively, when CC at 6:00 AM was less than 0 MJ m⁻². This suggests that even a small energy deficit has a substantial effect on the rate and timing of snowmelt. Overall, previous studies agree that the careful consideration of CC improves snowmelt simulations (Jost et al., 2012; Mosier et al., 2016; Valéry et al., 2014).
Little to no previous research has focused on CC behaviour in forested environments. Snowpack energy exchanges within a forest are obviously different from those in open or alpine areas, as the presence of a canopy affects snow accumulation and melt (Andreadis et al., 2009; Gouttevin et al., 2015; Mahat and Tarboton, 2012; Wigmosta et al., 2002). For instance, intercepted snow may sublimate, undergo densification, or fall beneath the canopy when the maximum canopy storage is reached or when heavy winds are present (DeWalle and Rango, 2008). Snow interception typically leads to shallower snow depths and less melt beneath the canopy (Musselman et al., 2008), even in the presence of rain-on-snow events (Marks et al., 1998).
Frequent density profiles of the snow cover allow for the tracking of unloading episodes and the identification of spatial differences of CC within a forest.

Despite all of the associated challenges, it is possible to simulate snow in a forested environment with some success. For instance, physically-based land surface models are regularly used to simulate snow at forested sites (e.g., Roy et al., 2013).
One such example is the Canadian LAnd Surface Scheme (CLASS), which relies on a single-layer snow model articulated around the energy balance. In a recent study, Alves et al. (2020) used CLASS driven by ERA5 reanalysis data to model snow depths at four dissimilar forested sites across the Canadian boreal biome. They reported average snow persistence lengths and average spring melting periods that were similar to our field observations. By definition, CLASS considers the whole snowpack as a single bulk unit and, as such, is unable to simulate the multilayer behaviours that one sees in nature. One option for addressing this is to resort to a multilayer snow model such as SNOWPACK (Lehning et al., 2001), which was recently equipped with a thorough canopy module (Gouttevin et al., 2015); however, even models such as this are not free of biases.
Alternatively, bulk snowpack values can be distributed between several layers. For instance, Roy et al. (2013) disaggregated CLASS-derived snow water equivalents into multilayer values at each time step for the purpose of estimating the specific surface area (SSA) of a snowpack. They attained an acceptable root mean square error (RMSE) of 8.0 m² kg⁻¹ in the CLASS-derived SSA for individual layers.
In view of the obvious lack of observational studies required to support model development in forested environments, detailed analyses of multilayer in situ snowpack CC are necessary. Building on Jennings et al. (2018), this study investigates 53 snowpit-derived CC observations at four distinct coniferous forested sites over the course of one winter. The temporal variability of CC is also analysed by reconstructing time series that include bulk and multilayer CC at a 30-min time step, combining automated snow temperature observations with bulk snow density estimates calculated using the CLASS model.
Study sites and data collection
Observations were collected in the Bassin Expérimental du Ruisseau des Eaux-Volées (BEREV), a small boreal forest catchment within the Montmorency Forest, Quebec, Canada (Fig. 1). This region experiences substantial precipitation (1583 mm), with 40% falling in solid form between November and May (Isabelle et al., 2018). The catchment lies in the Laurentian Mountains of the Canadian Shield and is characterized by a humid continental climate (Schilling et al., 2021). Patches of forest clearings are found within the basin due to past logging operations, which have led to variability in stand structure (Parajuli et al., 2020b). Over the years, several vegetation species such as black spruce (Picea mariana (Mill.)) and white spruce (Picea glauca (Moench.)) were planted; however, the environment favoured the regrowth of balsam fir (Abies balsamea) stands. Isabelle et al. (2020) provide detailed information on the vegetation cover at the study site. The current analysis focuses on the four contrasting sites presented in Table 1.

Inspired by Lundquist and Lott (2010), we deployed an automated snow-profiling station at each location, composed of 18 T-type thermocouples vertically spaced 10 cm apart and an ultrasonic depth sensor (Judd Communication, USA). An additional T-type thermocouple was enclosed in a radiation shield (Fig. 1c) 2 m above ground for simultaneous air temperature measurements. Maintaining a weekly sampling schedule was sometimes difficult due to uncontrollable circumstances such as freezing rain, rain-on-snow events, or winter storms. During melt, from 21 April 2018 onward, it was impossible to reach all study sites because reduced snow depths prevented the safe use of snowmobiles, except for site A2, which was more easily accessible from the main road. The snow-profiling stations malfunctioned occasionally (less than 1% of the time), mostly in spring. Missing values were filled with snowpit observations. An exponential moving average procedure was implemented to reduce noise in the snow depth observations.
Construction of CC time series
Constructing 30-minute time series of the snowpack CC represents a certain challenge. On the one hand, it requires time series of the vertical profile of snow temperature, which are obtained from the snow-profiling stations. On the other hand, time series of the snow density profile are needed as well, and this is where the main difficulty lies. A simple approach would be to interpolate the density values extracted from the snowpits, but this would be incomplete and error-prone given their limited number and their absence early in the season. Here, it was decided to produce multilayer time series of snow density from CLASS bulk simulations complemented with empirical formulations, as detailed below.
CLASS model
CLASS is a physically-based land surface model that simulates the exchanges of water and energy between the Earth's surface and the atmosphere (Bartlett et al., 2006; Verseghy, 1991). It considers four distinct surface subareas: bare soil, canopy cover over bare soil, canopy with snow cover, and snow cover over bare soil (Bartlett and Verseghy, 2015; Verseghy et al., 2017). In this analysis, CLASS version 3.6 was used in offline mode with a 30-min time step, ensuring the stability of the prognostic modelled variables (Roy et al., 2013). CLASS allows for the inclusion of multiple soil layers and accounts for snow interception, snow thermal conductivity, and snow albedo, as described in Bartlett et al. (2006). The following subsections describe the meteorological forcing data required to run CLASS and the methodology (CLASS + snow-profiling station) used to produce single- and multi-layer time series of snow density, following Andreadis et al. (2009).
CLASS setup and forcing
The meteorological inputs required to run CLASS include precipitation rate, wind speed, air specific humidity, incoming shortwave and longwave radiation, air temperature, and surface atmospheric pressure (Alves et al., 2019; Leonardini et al., 2020). As CLASS is designed to explicitly consider the energy exchanges between the soil surface, vegetation, snowpack, and atmosphere, above-canopy meteorological forcings are used. The model accounts for local effects associated with the presence of a canopy (e.g., attenuation of incident radiation) and incorporates user-defined parameters such as vegetation height, canopy density, and leaf area index (Table 2). Precipitation rates were determined using a GEONOR weighing gauge equipped with a single Alter shield approximately 4 km north of the study area and were considered to be uniformly distributed throughout the catchment. Given the known wind-induced bias associated with this type of gauge (Pierre et al., 2019), a simple adjustment was applied; this adjustment involved twice-daily manual precipitation observations from a Double Fence Intercomparison Reference (DFIR) setup close by, as in Parajuli et al. (2020a). Vegetation parameters were extracted at each site from a LiDAR dataset. Wind speed, air specific humidity, shortwave and longwave radiation, and surface atmospheric pressure measurements were taken from flux towers at sites A1 and A2; comparable data were unavailable at sites A3 and A4 (Table 2).
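The exact form of the gauge adjustment is not spelled out here; one simple, commonly used option, sketched below purely as an assumption, is to rescale the half-hourly gauge amounts so that they sum to the corresponding twice-daily DFIR totals.

```python
import numpy as np

def scale_to_reference(gauge_mm, ref_total_mm):
    """Rescale sub-period gauge amounts so that their sum matches a reference total."""
    gauge = np.asarray(gauge_mm, dtype=float)
    total = gauge.sum()
    if total == 0.0:
        return gauge                      # nothing recorded; nothing to scale
    return gauge * (ref_total_mm / total)

# Hypothetical half-hourly gauge catch (mm) over one 12-h manual observation period,
# and a DFIR total of 6.2 mm for the same period.
gauge = np.zeros(24)
gauge[5:11] = [0.4, 0.8, 1.0, 0.9, 0.6, 0.3]
adjusted = scale_to_reference(gauge, 6.2)
print(f"gauge total: {gauge.sum():.1f} mm -> adjusted total: {adjusted.sum():.1f} mm")
```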
This study was carried out in a small experimental watershed with an area of 3.49 km², where the sampling sites (A1 to A4) were close to one another (Fig. 1) but had distinct characteristics (Table 1). Given the broad similarity in canopy structure, we opted to use the inputs recorded at site A2 to run the CLASS simulations at the sites that lacked direct measurements of meteorological inputs (Table 2); that is, we assumed negligible differences in the above-canopy inputs between sites A2, A3, and A4. The following subsection highlights the steps adopted to generate the multilayer density estimates needed to calculate the CC time series for all snow layers.
Reconstruction of multilayer snow density time series (a hybrid approach)
The empirical formulation described in Andreadis et al. (2009), based on Anderson (1976), is used to reconstruct multilayer snow density estimates by combining them with the CLASS-derived snow water equivalent (SWE) estimates (hereafter referred to as the hybrid procedure). The fresh snow density follows the formulation of Brun et al. (1989), developed using data collected in the French Alps. We initialized the density of fresh snow by imposing a minimum snow density of 76 kg m⁻³, based on the available snowpit observations, and then using the equation

ρf = 109 + 6 (Ta − 273.16) + 26 um^(1/2),     (2)

where ρf is the density of fresh snow (kg m⁻³), um is the wind speed (m s⁻¹), and Ta is the air temperature (K). With the exception of Eq. (2), both Ta and Ts are given in degrees Celsius (°C). As the snow undergoes compaction due to metamorphism and the increasing weight of the overlying snow, the density is assumed to increase at the rate

dρs/dt = CRm + CRo,

where t is time (s), CRm is the snow compaction due to metamorphism (kg m⁻³ s⁻¹), and CRo is the compaction due to the weight of the overlying snow (kg m⁻³ s⁻¹). CRm and CRo are calculated following Andreadis et al. (2009), with CRo obtained from the load pressure and the snow viscosity, where n0 = 3.6 × 10⁻⁶ N s m⁻² is the snow viscosity, c5 = 0.08 K⁻¹, c6 = 0.021 m³ kg⁻¹, and Ps is the load pressure on each layer. The load pressure is defined as

Ps = g (f Wns + Ws),

where g is the acceleration due to gravity (9.8 m s⁻²), Wns and Ws are the amounts of new snow and of snow (derived from CLASS) within the snowpack layer (mm w.e.), respectively, and f is the empirical compaction coefficient, taken as 0.6 (Andreadis et al., 2009).
Local meteorological conditions
Figure 2 displays the daily air temperature and wind speed observations. The shaded zone and site-specific dots illustrate the temporal distribution of the manual snow surveys. Air temperature measurements were taken 2 m above the ground. Wind speed sensors were located 3 m and 2 m above the ground at sites A1 and A2, respectively. To compensate for this height difference and enable fair comparisons between sites A1 and A2, the wind speed measurements at site A1 were adjusted to a 2-m height assuming a logarithmic profile (a minimal sketch of this adjustment is given below). As expected, the sapling site (mean canopy height of 1.8 m) experienced higher wind speeds (mean 1.3 m s⁻¹) than the juvenile site (mean canopy height of 8.1 m and mean wind speed of 0.12 m s⁻¹). Air temperatures were homogeneous from site to site, with average values of −6.1°C, −6.3°C, −6.9°C, and −6.5°C at sites A1, A2, A3, and A4, respectively.

Figure 3 illustrates the 10-cm CC derived from the snowpit surveys; summing all the values of a single profile gives the total CC for a specific date. The variability in snow depth, mainly induced by the contrasting canopy structure, is also indicated in Figure 3. When comparing layer-wise (10 cm) differences between A1, A2, A3, and A4, both the smallest (−0.013 MJ m⁻²) and the largest (−0.67 MJ m⁻²) layer CC occurred at site A2. Unlike during spring melt, when CC is low and relatively uniform, the accumulation period shows substantial layer-wise variability structured around three distinct layers. For instance, sites A1, A2, A3, and A4 reported 9, 15, 12, and 5 observations of layer CC below −0.35 MJ m⁻². Note that the occurrence of large-amplitude CC values was not always confined to the topmost layer, as the layer just beneath it also exhibited such amplitudes (see Fig. 3, week 10, site A3 at a snow depth of 106 cm, for example). The layers close to the ground, however, experienced smaller CC amplitudes throughout the winter. Peak CC occurred in early February (Fig. 4 and Table 3), when the minimum daily air temperature fell to about −25°C (Fig. 2). At that time, the amplitude of CC was highest at site A1, which also had the deepest snowpack at that moment (128 cm). The total CC time series highlights the variability of CC across the four study sites (Fig. 4). Overall, the maximum snow depth occurred at site A2 (194 cm), which also experienced the largest amplitude of mean total CC (−2.62 MJ m⁻²), as a thicker snowpack can hold more CC (Table 3). For its part, A4 experienced the smallest maximum snow depth (142 cm) and the lowest amplitude of mean total CC (−2.15 MJ m⁻²). The difference across sites reached 0.47 MJ m⁻² in total cold content, representing a variability of 20%.
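The height adjustment mentioned above follows the usual logarithmic wind profile; the sketch below shows the scaling for a neutral surface layer. The roughness length is an assumed placeholder value, not a parameter reported in this study.

```python
import numpy as np

def adjust_wind_height(u_z1, z1, z2, z0=0.1):
    """Scale wind speed measured at height z1 (m) to height z2 (m) assuming
    a neutral logarithmic profile with roughness length z0 (m)."""
    return u_z1 * np.log(z2 / z0) / np.log(z1 / z0)

# Wind measured at 3 m above ground, adjusted to the 2-m reference height.
u3 = 1.5                                     # hypothetical wind speed (m/s)
print(f"u(2 m) ~ {adjust_wind_height(u3, 3.0, 2.0):.2f} m/s")
```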
Figure 5. Observed versus CLASS-simulated bulk values of SWE, snow density, and CC. R² denotes the coefficient of determination and Pbias (%) represents the percent bias.
After confirming that CLASS successfully models the bulk snow cover variables, we moved on to the next step of our methodology: a "hybrid procedure" in which a vertical structure is reproduced following Andreadis et al. (2009).
Reconstructed CC time series
Additionally, sites with less vegetation (site A1) experienced higher peak CC than sites with mature forest (A4) (Fig. 8).
Notably, the rain-on-snow episodes that occurred on 11 January, 20 February, and 30 March 2018 (thin vertical bands of low CC) are absent from the weekly series shown in Figure 3. Sites A1, A3, and A4 had shallower snowpacks and the rainfall penetrated deeper, resulting in a reduced CC throughout the snowpack. By contrast, at site A2, which had a deeper snowpack, similar rain penetration into the snowpack was only observed on 11 January 2018. All snowpacks became isothermal from 21 April 2018 onward.
Figure 8. Seasonal variability of 10-cm CC simulations stored at 30-min time intervals. The colour bar indicates CC values in MJ m −2 . Light green shading represents rain-on-snow events and light blue shading represents melt.
Due to differences in snow accumulation and melt patterns, mostly induced by differences in vegetation and topographic characteristics, there is noticeable site-to-site variability in CC (Fig. 8). The detailed variability of total CC across the four forested sites is presented in Figure 9, along with snow depth. The amplitude of total CC at site A2 was larger than at A1 approximately 60% of the time. At site A3, this fraction drops to 32%.
We examined the relationship between CC and the snow density (ρs), snow depth (HS), snowpack temperature (Ts), and air temperature (Ta; Fig. 10). Pearson's correlation coefficient (r) was used to quantify these relationships for the observed and estimated values, respectively. Snowpack temperature (r = 0.83 and 0.69) and air temperature (r = 0.56 and 0.66) exhibited a positive correlation. Conversely, snow depth (r = −0.5 and −0.45) exhibited a negative correlation, whereas snow density (r = 0.4 and 0.24) showed only a weak relationship.

Next, we examined the relationship between each of the above-mentioned variables and CC at the individual sites. This was done to identify any trends in the site-wise relationship between CC and ρs, HS, Ts, and Ta. A decreasing trend in the correlation coefficient (r) with increasing mean tree height was observed when we examined the snow temperature and the reconstructed cold content for each site (r = 0.75, 0.69, 0.67 and 0.60 for sites A1, A2, A3, and A4, respectively). Beyond that relationship, we did not identify any site-wise trends between CC and the other variables, thereby suggesting a weak dependency on forest structure in the relationship between CC and other pertinent variables.
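The correlation analysis summarized in Figure 10 amounts to computing Pearson's r between the CC series and each covariate. A minimal sketch is given below; the daily series are synthetic and only illustrate the expected signs of the correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                              # hypothetical 100 daily values for one site
ts = -10.0 + 6.0 * rng.standard_normal(n)            # snowpack temperature (deg C), synthetic
hs = 1.2 + 0.3 * rng.standard_normal(n)              # snow depth (m), synthetic
cc = 0.2 * ts - 0.5 * hs + 0.3 * rng.standard_normal(n)  # synthetic total CC (MJ m-2)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    return float(np.corrcoef(x, y)[0, 1])

print("r(CC, Ts) =", round(pearson_r(cc, ts), 2))    # positive, as in Fig. 10
print("r(CC, HS) =", round(pearson_r(cc, hs), 2))    # negative, as in Fig. 10
```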
CC observations
As illustrated in Figures 2 and 3, the four experimental sites exhibited distinct snow depths, wind speeds, and air temperatures that ultimately resulted in temporal and spatial differences in CC. Variability was such that the maximum CC was not always exhibited by the top layer; at times it occurred in the middle layer (Fig. 11). For instance, in week 15, the snowpack was denser in the top layer than in the middle layer. In week 13, the top layer of the snowpack was warmer than the layer beneath it. Such patterns in temperature and density are counterexamples to the general patterns depicted in Figure 11.

Furthermore, the importance of snow mass for total CC at the study sites is highlighted in Figure 4. As expected, a deeper snowpack is typically associated with higher CC. For instance, CC peaked at all sites in February, but more CC was observed in the deeper A1 and A2 snowpacks. The same finding holds when CC is averaged over the 15-week period: A2 experienced more snow and higher CC, followed by A1. In both instances (peak and average CC conditions), a deeper snowpack led to a larger amplitude of CC. In a similar study of alpine and subalpine snowpacks in the Rocky Mountains of Colorado, USA, Jennings et al. (2018) reported peak CC to be 2.6 times greater for the alpine snowpack than for the subalpine location, which they mostly attributed to the higher SWE accumulation at the alpine site. In early February (during peak CC conditions), the snow depth difference between sites A1 and A2 was very small (Table 3).
Nonetheless, A1 exhibited higher CC than A2 (Fig. 4). This is because, in addition to snow depth, CC values depend on the density and temperature of the snow (Fig. 11). The higher peak CC found at site A1 can be explained by the higher snow density that is typically associated with higher wind velocities (Vionnet et al., 2012) and wind speed-induced densification. As illustrated in Figure 2, site A1 was windier than A2. This is expected, as it is well known that wind speed is low within forest canopies (Davis et al., 1997; Harding and Pomeroy, 1996), such as those at site A2.
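The dependence of CC on density, temperature and layer thickness follows from the standard cold-content definition, CC = c_ice ρs Δz (Ts − Tmelt). The sketch below applies it to a hypothetical 10-cm snowpit profile; the densities and temperatures are invented for illustration.

```python
import numpy as np

C_ICE = 2102.0   # specific heat capacity of ice (J kg-1 K-1), standard value
T_MELT = 0.0     # melting point (deg C)

def layer_cold_content(rho_s, t_s, dz=0.10):
    """Cold content of a snow layer in MJ m-2 (negative by convention):
    CC = c_ice * rho_s * dz * (T_s - T_melt)."""
    return C_ICE * rho_s * dz * (t_s - T_MELT) / 1e6

# Hypothetical 10-cm snowpit profile (surface layer first)
rho = np.array([180.0, 230.0, 280.0, 320.0])   # layer density (kg m-3)
tsn = np.array([-12.0, -8.0, -4.0, -1.5])      # layer temperature (deg C)
cc_layers = layer_cold_content(rho, tsn)
print(cc_layers)            # per-layer CC, comparable to the values in Fig. 3
print(cc_layers.sum())      # total CC of the profile for that date
```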
Reconstructed CC time-series
As mentioned previously, gaps between weekly snowpit surveys failed to capture short-lived events such as warm and cold spells or rain-on-snow events. In an attempt to produce higher-frequency CC time series, we used the CLASS land surface model to simulate 30-min bulk snow density and SWE (Fig. 5). Given the limitations of bulk estimations, which are often too broad to properly describe all snowpack processes (Roy et al., 2013), several studies have opted for a multilayer snow model (Brun et al., 1997; Lehning et al., 2002; Vionnet et al., 2012). Our study explored the (simpler) hybrid procedure proposed by Andreadis et al. (2009). Using this method, we generated snow density values that support the derivation of CC time series better able to capture short-lived events (Fig. 8). As in the weekly snowpit surveys, the simulated CC time series suggest that the highest peak in CC occurred at site A1 and the highest peak in mean CC was at site A2. In both instances (snowpit observations and reconstructed CC), there was a decrease in peak CC with an increase in tree height (Fig. 8 and Table 3). Parajuli et al. (2020b) explored the spatiotemporal variability of SWE in the same forest and reported reduced snow accumulation beneath canopies composed of taller trees. Initially, site A1, with lower vegetation, experienced more snow than the other sites (Table 3). Favourable conditions (lower temperature and a deeper and denser snowpack) supported the occurrence of higher peak CC at this site (Fig. 2, Fig. 11 and Table 3). However, over the entire winter, site A2 experienced more snow and a higher amplitude of CC values than the snowpack at site A1 (Fig. 9). To understand this variability in CC across all four sites, we note that the amplitude of CC was higher at site A2 than at A1 in 60% of the time steps, contributing to the overall CC at site A2.
For site A3, the amplitude of CC values was higher than at A1 in 32% of the time steps, beginning in early February and continuing through the rest of the study period (Fig. 9). Most of the time, the measured snow depth at site A3 was also shallower than at site A1. We hypothesized that cold air pooling might explain this phenomenon. During stable atmospheric boundary layer conditions with weak synoptic forcing, there is reduced wind flow. This results in thermal decoupling in the valley depression, which favours the formation of a cold air pool (Fujita et al., 2010; Mott et al., 2016). This is substantiated by the rapid cooling of near-surface air within the valley depression, typically at night or early in the morning (Smith et al., 2010). As site A3 is situated in a valley depression (Fig. 1), cold air pooling most likely explains the higher peak CC at this location (Fig. 2).
Based on Figures 2 and 9, snow depth and air temperature appear to influence CC distribution across the study sites. In general, the observed and simulated snowpack CC values at all sites were strongly (positively) correlated with snowpack temperature and air temperature, and weakly correlated with the snow density and snow depth values (Figure 10). It should also be noted that the snowpack CC values at all sites only showed negative correlations with snow depth (Fig. 10). Based on CC observations and the hybrid procedure, we were able to identify a relationship between the mean CC and the tree height (Fig. 8). However, we were unable to report any trends in the site-wise relationship between CC and the above-mentioned variables (Fig. 10). Similarly, Jennings et al. (2018) attempted to establish a relationship between CC development and the cumulative mean of air temperature across the alpine and sub-alpine sites in the Rocky Mountains in Colorado, USA, but were unsuccessful.
Sources of uncertainty
One of the shortcomings of our multilayer snowpack scheme is the use of empirical fresh snow density estimates. Russell et al. (2020) explored a range of fresh snow density formulations and concluded that a constant value of 100 kg m−3 provided a better outcome than most empirical formulations. Nonetheless, they tested some empirical formulations that omitted the influence of wind speed on snow densification, as in the Brun et al. (1989) method. We compare observations to snow density estimates (top 10 cm) from three empirical methods: Diamond-Lowry (Russell et al., 2020), Hedstrom-Pomeroy (Hedstrom and Pomeroy, 1998), and Brun (Shrestha et al., 2010; Vionnet et al., 2012) (Fig. 11). These formulations are typically used to determine snowpack density; examples include a study by Gouttevin et al. (2015) where the Brun method was used, and one by Bartlett et al. (2006) where the Hedstrom-Pomeroy method was implemented (Fig. 7). Several multilayer snow models use the Brun formulation to estimate fresh snow density (e.g. Shrestha et al., 2010; Vionnet et al., 2012). When the same (Brun) empirical snow density model was adopted into our methodology, the snow density estimates (especially for the top layer) were very poor (Figure 7).
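For reference, the sketch below compares the constant fresh-snow density baseline of Russell et al. (2020) with a temperature-dependent formulation. The Hedstrom-Pomeroy coefficients used here are the commonly quoted ones and should be verified against the original source; the Diamond-Lowry and Brun formulations are omitted because their exact coefficients are not given in the text.

```python
import numpy as np

def rho_new_constant():
    """Constant fresh-snow density baseline (Russell et al., 2020)."""
    return 100.0

def rho_new_hedstrom_pomeroy(t_air):
    """Fresh-snow density (kg m-3) as a function of air temperature (deg C).
    Coefficients are the commonly quoted Hedstrom-Pomeroy (1998) values;
    check against the original source before use."""
    return 67.92 + 51.25 * np.exp(np.asarray(t_air, float) / 2.59)

t_air = np.array([-20.0, -10.0, -2.0])
print(rho_new_hedstrom_pomeroy(t_air))   # increases toward 0 deg C
print(rho_new_constant())
```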
In a slightly different context, Raleigh and Small (2017) concluded that snow density modelling was a major source of uncertainty when studying catchment SWE derived from satellite data. Additionally, several errors and biases could arise due to poor data quality and modelling deficiencies, thereby affecting snowmelt models (Parajuli et al., 2020a; Raleigh et al., 2015, 2016; Rutter et al., 2009). For instance, Jennings et al. (2018) applied the SNOWPACK multilayer model and reported an overestimation of fresh-snow temperature. As reported in the present study, CC depends heavily on snowpack temperature. The quality of model inputs also influences model performance. For example, sites A1 and A2 benefitted from local flux tower measurements, but such direct measurements were not available for sites A3 and A4, for which many assumptions were necessary in order to create or complete missing input time series. This problem has also been observed in several other studies (e.g. Pomeroy et al., 2007; Qi et al., 2017). Important snowpack properties beyond CC, such as thermal conductivity (Oldroyd et al., 2013) and snow interception (Hedstrom and Pomeroy, 1998), also need to be further addressed. Therefore, future research that uses physically-based snow models to describe internal snowpack processes should focus on improving snow density estimation.
Conclusion
The purpose of this study was to document the spatial variability of CC in a humid boreal forest, using detailed measurements supplemented by physically-based and empirical model outputs. The studied boreal forest is characterized by a non-uniform stand structure that led to site-to-site variations in the 10-cm weekly observations of CC. Areas with lower vegetation had the highest snow accumulation and thus resulted in the largest peaks in total CC, while the juvenile forest experienced the highest amplitude of average CC over the 15 weeks.
The Canadian Land Surface Scheme model was then coupled with complementary empirical formulations to construct bulk, and subsequently 10-cm, 30-min snow density time series. Both CLASS and the empirical formulations supplied reasonable snow density and CC estimates. When the latter 10-cm time series were split into three layers, the bottom and the middle layers also resulted in reasonable simulations; however, modelling of the top layer was not as successful. The constructed time series were used to illustrate the influence of phenomena that are not detectable when only snowpit data are used, such as rain-on-snow episodes or the formation of cold air pools at the bottom of the valley.

We used Pearson's correlation coefficient (r) to identify the role of pertinent variables (snow density, snowpack temperature, snow depth and air temperature) in the distribution of CC at our boreal forest sites. Snowpack and air temperature appeared to be highly influential on CC distribution compared to the depth and the density of the snowpack. Our study was supported by 30-min time series of 10-cm snow temperature profiles and bias-corrected precipitation inputs; the inclusion of such inputs helped us to reduce errors and biases. This study also highlighted the uncertainty associated with fresh snow density estimates when running physically-based snowmelt models.
Data availability:
The data that support the findings in this study will be available in the public repository.
Geometric phase gate for entangling two Bose-Einstein condensates
We propose a method of entangling two spinor Bose-Einstein condensates using a geometric phase gate. The scheme relies upon only the ac Stark shift and a common controllable optical mode coupled to the spins. Our scheme allows for the creation of an S^z S^z type interaction, where S^z is the total spin. The geometric phase gate can be executed in times of the order of 2πℏ/G, where G is the magnitude of the Stark shift. In contrast to related schemes, which relied on a fourth-order interaction to produce entanglement, this is a second-order interaction in the number of atomic transitions. Closed expressions for the entangling phase are derived, and the effects of decoherence due to cavity decay, spontaneous emission and incomplete de-entangling of the light from the BECs are analyzed.
For macroscopic objects such as Bose-Einstein condensates (BECs), proposed schemes for entanglement generation are less developed. The only proposed scheme for BECs, to our knowledge, involves a photon-mediated scheme for BECs placed in optical cavities [30,31]. Other possible methods include those originally formulated for single atoms, using state-dependent forces [29]. Experimentally, entanglement between a BEC and a single atom was achieved in Ref. [32]. For atomic ensembles, entanglement and teleportation have been performed using a continuous variable approach [33,34]. Here, the entanglement is in the form of two-mode squeezing in the total spin variables of the ensembles. Another approach involves using spin wave states, where teleportation was recently achieved [35]. Besides these, the geometric phase gate approach, first introduced for trapped ions [26,27], has been shown to be a robust and fast method of creating entanglement between two qubits [36]. The geometric phase gate is an attractive method for producing an entangling gate from the point of view of its robustness, as it may tolerate various imperfections such as variations in the initial conditions of the common bosonic mode [26]. Here we apply the geometric phase to control the state of light in phase space to achieve a fast and robust S^z S^z interaction between two different collective spins.

FIG. 1: Schematic experimental configuration for the geometric phase gate. Two macroscopic spins (shown as BECs here) are placed in a cavity such that an ac Stark shift occurs on the energy levels. The geometric phase gate is then performed by the following procedure. (a) A laser is applied to the cavity such that both BECs are illuminated, and controlled so as to follow the evolution |α(t)⟩ for a time t = [0, T]. The phase (11) is induced at this point. (b) The laser is then applied individually to each BEC, with the opposite detuning ∆′ = −∆, following the same displacement force F(t). This completes the geometric phase gate, and an S^z S^z interaction is induced between the BECs.
We show in this paper that by using the geometric phase gate, entanglement is possible using the ac Stark shift Hamiltonian, which is a second order process. This improves upon previous works using photon-mediated entanglement such as Ref. [31] which relied on a fourth order transition to produce the entangling gate.
Creating entanglement between collective spin states, apart from its fundamental interest in macroscopic entanglement [37], is a key element of QIP based on spin coherent states. Previously we have introduced a scheme for performing quantum information processing using macroscopic states encoded on two-component BECs [30]. The basic idea involves using spin coherent states in place of genuine qubits. It has been shown that many QIP schemes such as quantum algorithms [30], quantum teleportation [38], and quantum communication [39] can be performed using such "BEC qubits". In order to perform such QIP protocols, it is necessary to produce entanglement between different BEC qubits. Similarly to standard qubits, for universality the scheme requires at least one- and two-BEC-qubit control [30]. As coherent control of a single two-component BEC has already been achieved [40], a major remaining technological issue is the efficient creation of two-BEC entanglement. We note that the structure of the entanglement between two spinor BECs has been recently predicted to display a fractal entanglement (the "devil's crevasse") [41], and thus displays interesting physics in its own right.
A. The Protocol
In order to realize the geometric phase gate for the collective spins mediated by cavity photons, we consider a Hamiltonian of the form H = ℏω₀ a†a + G a†a S^z + F(t)(a + a†) (1) (see Appendix for a derivation), where a is an annihilation operator for the common bosonic mode and S^z = S₁ᶻ + S₂ᶻ is the total z-spin of the two BECs. The first term in (1) gives the energy of the bosonic mode ω₀, the second is an ac Stark shift of the spin states, and the last term is a displacement of the bosonic mode that is controllable through the time-dependent coefficient F(t).
The basic idea of the scheme is presented in Figure 1. The two BECs are placed in a cavity and illuminated with the same controllable laser field, in the form of a coherent state |α(t)⟩ [42]. The laser is far detuned from the transition to an excited state of the qubit, such that there is an ac Stark shift of the energy level, giving rise to the second term in (1). Initially the cavity starts in the vacuum state |α(t = 0)⟩ = |0⟩, and it is controlled by the displacement operation such that after a time t = T it returns to its original state. After this point the BECs become entangled due to a spin-dependent geometric phase. Let us illustrate the general procedure by taking the example of qubits instead of BECs. Assuming the particular case of initially x-polarized spins, the combined state of the system can be written in terms of the basis states |σ⟩ᵢ, with σ = ↑, ↓ and i = 1, 2, of the two qubits. The light is then controlled through phase space (Re(α), Im(α)) in a state-dependent way such that partway through the evolution the light and the qubits become entangled, with Φ a spin-dependent phase picked up due to the evolution of the light. By demanding that the state of light at the end of the evolution is the same as the initial state for all terms in the superposition, we obtain a state which is, for suitable Φ, entangled. The procedure for the BEC is similar, but now the spins are expanded in terms of S^z, taking eigenvalues [−N, −N + 2, . . . , N] [30].
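To make the qubit illustration concrete, the following self-contained numpy sketch applies a spin-dependent phase exp(iΦ s₁ᶻ s₂ᶻ), with Φ = π/4, to an initially x-polarized two-qubit state and checks, via the purity of the reduced state, that the result is maximally entangled. This only illustrates the entangling mechanism; it does not simulate the light field or the BEC case.

```python
import numpy as np

# Initially x-polarized product state of two qubits
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = np.kron(plus, plus)

# Spin-dependent phase exp(i * Phi * sz1 * sz2), with sz = +/-1 and Phi = pi/4
sz = np.array([1, -1])
phase = np.exp(1j * (np.pi / 4) * np.outer(sz, sz)).reshape(4)
psi = phase * psi0

# Entanglement check: purity of the reduced density matrix of qubit 1
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (i1, i2, j1, j2)
rho1 = np.trace(rho, axis1=1, axis2=3)                # partial trace over qubit 2
print(np.real(np.trace(rho1 @ rho1)))                 # 0.5 => maximally entangled
```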
In standard derivations of the geometric phase such as that given in Ref. [26], state-dependent forces are used to create the state dependence to the phase Φ S z 1 S z 2 (T ), thus the spin-boson interaction comes in the last term of (1), rather than the ac Stark shift as we have here. Nevertheless, we will see that this creates a state-dependent phase Φ S z 1 S z 2 (T ), thus creating entanglement. The method based on the ac Stark shift may also be used for standard qubits [28,43], although we shall concentrate upon the macroscopic spin case in this paper.
B. Entangling phase
We assume that the state of the BECs is initially unentangled, so that the initial state is a product state. The total state of the light and the atoms then follows an ansatz in which each S₁ᶻ, S₂ᶻ eigenstate is paired with its own coherent-state amplitude α and phase Φ. Substituting into the evolution equation iℏ d|ψ(t)⟩/dt = H|ψ(t)⟩ we obtain (see Appendix for details) the evolution equations for these amplitudes, where α_c = α e^{i[ω₀ + (G/ℏ)(S₁ᶻ + S₂ᶻ)]t} and we have dropped the S₁ᶻ, S₂ᶻ labels on α and Φ for brevity. Starting from an initial amplitude α_c(0) = 0, the above creates a time-dependent displacement, Eq. (8). We impose that after a time T the coherent state returns to its original state; this requires the condition (9). The phase picked up by the coherent state [26,27] is then expressed in terms of τ_r = τ₁ − τ₂. To show that this is in fact an entangling gate, we now expand the exponential in G/ℏ using a Taylor series up to second order. We later give a concrete example of how (9) may be satisfied, while justifying the expansion. The phase we obtain, Eq. (11), is proportional to S₁ᶻS₂ᶻ, which is the entangling operation as desired. There are, however, also terms of the form (Sᵢᶻ)², which would not be present for qubits since, in contrast to Pauli operators, (S^z/N)² ≠ I. Hamiltonians of this form correspond to squeezing operators [44] and can be of use in quantum metrology applications [45]. However, for our purposes this is an unwanted by-product of the geometric phase gate and requires elimination using a suitable procedure.
C. Elimination of undesired phases
To remove the undesired terms in (11) we subject our system to another set of laser pulses obeying the Hamiltonian (13), where aᵢ refers to the photons in cavity i = 1, 2. Such a Hamiltonian may be realized by separately illuminating the BECs (Figure 1b). Performing the same calculation, the phase picked up by each BEC i = 1, 2 is precisely the same as that in (11), except that the S₁ᶻS₂ᶻ term is missing. Therefore, by applying the reverse phase φ₂(∆′) = −φ₂(∆) we may eliminate the undesired terms in (11). This may be achieved by choosing ∆′ = −∆. The total phase after the two operations contains the desired S₁ᶻS₂ᶻ interaction, together with a single-qubit rotation term which may be compensated for using single-BEC-qubit control. We note that a similar method was presented in Ref. [46] to create (S^z)² interactions.
III. EXAMPLE SOLUTION
We now give an example solution for satisfying (9). Unlike the original formulation of geometric phase gates in Refs. [26,27], in our case Ω is state dependent, so that (9) must be satisfied for all possible |S₁ᶻ S₂ᶻ⟩ states. Choosing ∆ = 2Gn (16), where n is an integer, the set of frequencies that Ω takes is then Ω = 2G(n − N), 2G(n − N + 2), . . . , 2G(n + N), (17) where N is the maximal eigenvalue of S^z. Noting that all the frequencies are even multiples of G/ℏ, Eq. (9) may then be satisfied by choosing the drive as in (18), where m is an odd integer. The phases induced by the geometric phase gate may then be evaluated exactly, giving (19). We observe that for n ≫ m, the phases decrease in steps of n. Accounting for the fact that S^z ∼ O(N), the expansion Eq. (11) may be made to converge if n ∼ N. The remaining parameters F₀ and m may then be used to tune the phase to the desired value. The decrease of the coefficients φᵢ justifies the expansion made in (10), thus showing that to a good approximation S^z S^z interactions can be implemented while exactly satisfying (9).
In a realistic experimental situation, the control of the coherent fields |α(t)⟩ will not be perfect, which can arise due to a variety of reasons such as technical noise and cavity decay. The imperfect control will leave the coherent state with a remnant component α(T) ≠ 0. In general this will lead to imperfect disentangling, leading to decoherence. Figure 2(a) shows various trajectories of the coherent states for various spin states as evolved by (8) and using (18). Various S₁ᶻ + S₂ᶻ eigenstates follow different trajectories, leaving the final state after time T with a random remnant offset α_c(T) ≠ 0. This effect may be simply modeled by assuming that the |S₁ᶻ S₂ᶻ⟩ basis state of the BEC becomes entangled with the coherent state |δα e^{−i(S₁ᶻ+S₂ᶻ)δθ}⟩, where δα is the remnant offset at the end of the evolution, and the phase variation δθ arises due to the conversion from α_c to α. The effect of the imperfect disentangling on the entanglement may be calculated by finding the reduced density matrix ρ = Tr_a |ψ⟩⟨ψ|, where |ψ⟩ is the entangled state of the BECs and the photons after the evolution, and the trace is over the photon degrees of freedom. Figure 2(b) shows the logarithmic negativity [47] of the state following the geometric phase gate and shows a diminished entanglement, as expected. The effect of the imperfect disentangling is similar to an S^z dephasing, diminishing the off-diagonal components of the density matrix.
IV. EXPERIMENTAL PARAMETER ESTIMATION
We now give some experimental details of our scheme. The displacement terms in (1) and (13), which realize the geometric path of the laser in phase space, can be performed using standard quantum optical methods [48]. We thus describe the atomic configuration that would realize the Hamiltonians, specializing to the BEC case. The quantum information is stored in the hyperfine ground states of the BEC. For a two-component BEC implementation as given in [40], the states creating the spinor are |F = 1, F_z = −1⟩ and |F = 2, F_z = 1⟩. A circularly polarized laser pulse, detuned from the atomic resonance transition, is incident on the atoms [49,50]. The beam couples one of the ground states to an excited state, giving rise to an ac Stark shift as given in the second term of (1).
There are two primary sources of external decoherence in our scheme: spontaneous emission and cavity photon loss. Considering spontaneous emission first, the ac Stark shift involves a virtual excitation to an excited state, which is susceptible to spontaneous decay. For an effective coupling G = g₀²/∆, the effective decoherence rate is Γ_eff = ΓN g₀²/∆² [31]. Here g₀ is the single-atom cavity coupling, ∆ is the detuning, and Γ is the spontaneous emission rate. The factor of N arises due to stimulated emission of the excited state into the ground states. In order that spontaneous emission does not cause the state of the BEC to decohere, we require that the time required for the operation is within the effective decoherence time 1/Γ_eff. We thus require (20), which gives the first constraint (21) on the detuning that should be used.

Cavity photon loss will affect our scheme because during the evolution there are superpositions of coherent states which depend upon the spin state, as illustrated in Fig. 2(a). While a coherent state |α⟩ remains a pure state even in the presence of loss, superpositions of coherent states such as |α⟩ ± |−α⟩ decohere into a mixture with an effective rate κ_eff = κ|α|² [51-53]. In our case the magnitude of the coherent states is determined by α ∼ F₀/G, as F₀/ℏ is the rate of displacement and ℏ/G is the time of the evolution. We thus have a second constraint (22) such that the coherent state superposition does not decohere during the evolution. Combining this with (21) we obtain a constraint (23) on the brightness of the coherent light that can be used.

Let us now see how these constraints compare with experimental parameters. The cavity-BEC coupling may be estimated using parameters in Ref. [54] and is g₀/ℏ = 1350 MHz, with a cavity decay rate of κ = 330 MHz and a spontaneous emission rate of Γ = 19 MHz. Then, for example, the ac Stark shift assuming a detuning satisfying (21) and N = 10³ is G/ℏ ∼ Γ_eff ≈ 96 MHz, for a detuning of ∆/ℏ = 19 GHz. The second constraint (23) then evaluates to |α|² ≤ 0.3, which corresponds to rather weak coherent light pulses, but within experimental feasibility. For smaller atom numbers the estimates improve, as (21) means that smaller detunings can be used, increasing G. This in turn increases the photon number allowed in (23), which allows for larger phases to be generated in (19). Naturally, as the quality of the cavities improves, (23) also allows for brighter coherent states to be used.
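As a sanity check, the quoted numbers can be reproduced from the relations G = g₀²/∆ and Γ_eff = ΓN g₀²/∆². The last inequality below, |α|² ≤ G/(ℏκ), is an assumed reading of constraint (23) (with ℏ = 1 in the chosen units); it reproduces the |α|² ≤ 0.3 figure quoted above.

```python
# Rough numerical check of the cavity-QED estimates (units: MHz, hbar = 1).
g0, kappa, gamma = 1350.0, 330.0, 19.0     # cavity coupling, cavity decay, spont. emission
N, delta = 1e3, 19e3                       # atom number, detuning (19 GHz)

G = g0 ** 2 / delta                        # ac Stark shift coupling
gamma_eff = gamma * N * (g0 / delta) ** 2  # effective spontaneous-emission rate
alpha2_max = G / kappa                     # assumed form of constraint (23)

print(f"G ~ {G:.0f} MHz, Gamma_eff ~ {gamma_eff:.0f} MHz, |alpha|^2 <= {alpha2_max:.2f}")
# -> G ~ 96 MHz, Gamma_eff ~ 96 MHz, |alpha|^2 <= 0.29
```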
V. CONCLUSIONS
In summary, we investigated a scheme for entangling two macroscopic spin states via a geometric phase gate. Rather than the state-dependent force used in the original ion trap formulation of the gate, the scheme is based on the ac Stark shift, which is routinely realized in experiments. One difference between the original formulation and the present work is in the condition (9) that must be satisfied in order for the coherent state to be disentangled at the end of the displacement operation. Due to the spin-dependent frequency Ω, this gives a more complicated condition that needs to be satisfied. Despite this, by choosing a suitable form of the force F(t), it may be satisfied exactly. The timescale of the geometric phase gate is determined by the ac Stark shift coupling G, which in turn depends on the cavity-atom coupling and the detuning. The main decoherence mechanisms are cavity photon decay (the cavity mode provides the common mode through which the BECs entangle) and spontaneous emission of the atoms. It is likely possible to improve upon the ratio of gate time to decoherence time by using alternative solutions to the one presented here satisfying (9). For example, using a very high frequency F(t) would approximately satisfy (9) as long as frequencies above ∼ GN/ℏ are used. This would allow for much shorter gate times, at the price of a larger F₀. The existence of many possible solutions for the geometric phase gate is one of the reasons for the robustness of the approach, which gives it a degree of tolerance for experimental imperfections, as the phase Φ is determined by the trajectory in the phase space of the light. Because spontaneous emission scales with the number of atoms N in the BEC, the scheme is currently limited to relatively small BECs, around N = 10³. For BECs with small numbers of atoms, the proposed method is viable with current experimental parameters. As the quality of cavities improves, this would allow for the reliable production of entanglement between macroscopic objects, which could be employed for various quantum information tasks.

Appendix A: Derivation of the ac Stark shift Hamiltonian

We consider atoms with several types of ground and excited states, such as hyperfine states, which are labeled by the index j. We will consider the case of two-component BECs, hence for our case the sum runs over j = 1, 2. The bosonic field operators ψ̂†(r), acting on the vacuum state, create an atom at the position r. For the photons we work in the single-mode approximation, where we consider only the cavity photon mode, with a being the photon annihilation operator; d_j is the dipole moment of the atom. In the single-mode approximation the electric field operator is written in terms of ε, a complex unit vector describing the polarisation of the light, ω, the frequency of the incident light, and E^±(r), the complex amplitudes of the electric field. The atom-light interaction Hamiltonian consists of energy-conserving and energy-non-conserving terms. Making the rotating-wave approximation in order to remove the energy-non-conserving terms gives an interaction in which Ê^(+) = â† E^+(r) ε^*. In the regime of very strong detuning, the excited states of the atoms are hardly populated and can therefore be eliminated adiabatically. The effective atom-light interaction Hamiltonian, as seen by the atoms in the ground state, is given by (A6), where ∆_j = ω_{ej} − ω_{gj} − ω, and d_j · Ê^(−) = E^−(r) ⟨ej| d · ε |gj⟩ â are the dipole transition matrix elements.
Assuming all the atoms are in the ground state of the trapping potential, we may writê ψ † gj (r) = b † gj ψ * g (r) (A7) where ψ g (r) is the wave function of an atom in the ground state (we assume this is independent of j) and b † gj is an operator that acts on vacuum state to create an atom in the ground state. For a two component BEC, the effective interaction Hamiltonian can be written in terms of their relative population difference aŝ whereN = b † 1 b 1 +b † 2 b 2 is the total atom number operator, G = G 1 − G 2 and g = G 1 + G 2 with G j = dr|E − (r)| 2 ψ * (r) gj|d j · ε * |ej ej|d j · ε|gj ψ(r) ∆ j , (A9) being the strength of the atom-light interaction experienced by the j th ground state. Rewriting (A8) in terms gives the ac Stark Hamiltonian.
Appendix B: Derivation of the evolution equations for the coherent state

The time evolution of the state |ψ(t)⟩ is determined by Eq. (B1), where |γ⟩ = e^{iΦ(t)}|α(t)⟩. Now decompose the state a†|γ⟩ into |γ⟩ and the state orthogonal to it, |γ⊥⟩, by a Gram-Schmidt process. We may then write a†|γ⟩ = A|γ⟩ + B|γ⊥⟩. Substituting this into (B1), we obtain the equations of motion. Making the transformation into the rotating frame α_c = α e^{i[ω₀ + (G/ℏ)(S₁ᶻ + S₂ᶻ)]t} yields the evolution equations in the main text.
Optimal frame completions with prescribed norms for majorization
Given a finite sequence of vectors $\mathcal{F}_0$ in $\mathbb{C}^d$ we characterize in a complete and explicit way the optimal completions of $\mathcal{F}_0$ obtained by adding a finite sequence of vectors with prescribed norms, where optimality is measured with respect to majorization (of the eigenvalues of the frame operators of the completed sequence). Indeed, we construct (in terms of a fast algorithm) a vector, depending on the eigenvalues of the frame operator of the initial sequence $\mathcal{F}_0$ and the sequence of prescribed norms, that is a minimum for majorization among all eigenvalues of frame operators of completions with prescribed norms. Then, using the eigenspaces of the frame operator of the initial sequence $\mathcal{F}_0$, we describe the frame operators of all optimal completions for majorization. Hence, the concrete optimal completions with prescribed norms can be obtained using recent algorithmic constructions related to the Schur-Horn theorem. The well-known relation between majorization and tracial inequalities with respect to convex functions allows us to describe our results in the following equivalent way: given a finite sequence of vectors $\mathcal{F}_0$ in $\mathbb{C}^d$, we show that the completions with prescribed norms that minimize the convex potential induced by a strictly convex function are structural minimizers, in the sense that they do not depend on the particular choice of the convex potential.
Introduction
A finite sequence of vectors F = {f_i}_{i=1}^{n} in a d-dimensional complex Hilbert space H is a frame for H if the sequence spans H. It is well known that finite frames provide (stable) linear encoding-decoding schemes. As opposed to bases, frames are not subject to linear independence; indeed, it turns out that the redundancy allowed in finite frames can be turned into robustness of the transmission scheme that they induce, which makes frames a useful device for transmission of signals through noisy channels (see [4,5,6,13,25,29,28]).
The frame operator of a given sequence of vectors F = {f_i}_{i=1}^{n} in H is the positive operator on H defined as S_F = Σ_{i=1}^{n} f_i ⊗ f_i, where g ⊗ f is the linear operator in H given by g ⊗ f(h) = ⟨h, f⟩ g for every h ∈ H. Thus, a sequence F is a frame for H if and only if S_F is an invertible operator. When the frame operator is a multiple of the identity, the frame is called tight. Tight frames allow for redundant linear representations of vectors that are formally analogous to the linear representations given by orthonormal bases; this feature makes tight frames a distinguished class of frames that is of interest for applications. In several applications we would like to consider tight frames that have some other prescribed properties, leading to what is known in the literature as frame design problems [1,7,10,15,17,18,19,27]. It turns out that in some cases it is not possible to find a frame fulfilling the previous demands.
An alternative approach to deal with the construction of frames with prescribed parameters and nice associated reconstruction formulas was posed in [2] by Benedetto and Fickus; they defined a functional, called the frame potential, and showed that minimizers of the frame potential (within a convenient set of frames) are the natural substitutes of tight frames with prescribed parameters (see also [12,23,26,31] and [11,32,33] for related problems in the context of fusion frames). Moreover, in [31] it is shown that minimizers of the frame potential under suitable restrictions (considered in the literature) are structural minimizers in the sense that they coincide with minimizers of more general convex potentials.
Recently, there has been interest in the following optimal frame completion problem: given an initial sequence F_0 in H and a sequence of positive numbers a, compute the sequences G in H whose elements have norms given by the sequence a and such that the eigenvalues of the frame operator of the completed sequence F = (F_0, G) are as concentrated as possible: thus, ideally, we would search for completions G such that F = (F_0, G) is a tight frame. Unfortunately, it is well known that there might not exist such completions (see [19,20,21,30,34,35]). In this setting, the initial sequence of vectors can be considered as a checking device for the measurement, and therefore we search for a complementary set of measurements (given by vectors with prescribed norms) in such a way that the complete set of measurements is optimal in some sense. Following [2,12] we could measure optimality in terms of the frame potential, i.e., we could search for completions with prescribed norms G such that F = (F_0, G) minimizes the frame potential among such completions; alternatively, we could measure optimality in terms of the so-called mean squared error (MSE) of the completed sequence (see [21]). More generally, we can consider a natural extension of the previous problems: given a functional defined on the set of frames, compute the frame completions with prescribed norms that minimize this functional. Moreover, this last problem raises the question of whether the completions that minimize these functionals coincide, i.e., whether the minimizers are structural in this setting.
A first step towards the solution of the general version of the completion problem was made in [34]. There we showed that under certain hypothesis (feasible cases, see Section 2.3), optimal frame completions with prescribed norms are structural (do not depend on the particular choice of functional), as long as we consider convex potentials, that contain the MSE and the frame potential. On the other hand, it is easy to show examples in which the previous result does not apply (nonfeasible cases); in these cases the optimal frame completions with prescribed norms were not known even for the MSE nor the frame potential.
In [35] we considered the structure of completions that minimize a fixed convex potential (non feasible case). There, we showed that the eigenvalues of optimal completions with respect to a fixed convex potential are uniquely determined by the solution of an optimization problem in a compact convex subset of R d for a convex objective function that is associated to the convex potential in a natural way. Then, we showed an important geometrical feature of optimal completions F = (F 0 , G) for a fixed convex potential, namely that the vectors in the completion G are eigenvectors of the frame operator of the completed sequence F (see Section 2.2 for a detailed exposition of these results). Based on these facts, we developed an algorithm that allowed us to compute the solutions of the completion problem for small dimensions. In this setting we conjectured some properties of the optimal frame completions in the general case, based on common features of the solutions of several examples obtained by this algorithm (see Section 3 for a detailed description of these conjectures).
In this paper, building on our previous work [34] and [35], we give a complete and explicit description of the spectral and geometrical structure of optimal completions with prescribed norms with respect to a convex potential induced by a strictly convex function. Our approach is constructive and allows to develop a fast and effective algorithm that computes the spectral structure of optimal completions. As we shall see, given an initial sequence F 0 in H and a sequence of positive numbers a, both the spectral and geometrical structure of optimal completions depend only on the frame operator of F 0 and a, but they do not depend on the particular choice of the convex potential. Hence, we show that in the general case the minimizers of convex potentials (induced by strictly convex functions) are structural.
In order to obtain the previous results, we begin by proving the properties of general optimal completions conjectured in [35]. These properties (that are structural, in the sense that they do not depend on the convex potential) are then used to compute several other structural parameters -that involve the notion of feasibility developed in [34] -that completely describe the spectral structure of optimal completions. As a consequence of this description, we conclude that optimal solutions have the same eigenvalues and hence, the eigenvalues of optimal completions are minimum for the so-called majorization preorder. Moreover, all the parameters involved in the description of the spectral structure of optimal completions can be computed in terms of fast algorithms. With the spectral data and results from [34] we completely describe the set positive matrices that correspond to the frame operators of sequences G with norms prescribed by a and such that F = (F 0 , G) are optimal. Finally, every optimal completion G can be computed by using recent results from [7] (see also [17] and [22]).
The paper is organized as follows. In section 2 we describe the context of our main problem -namely, optimal completions with prescribed norms, where optimality is described in terms of majorization -and give a detailed account of several related results that were developed in our previous works [34] and [35] that we shall need in the sequel, in a way suitable for this note; in particular, we include a new construction of the spectra of optimal completions in the feasible cases. In Section 3 we introduce new structural parameters -that can be efficiently computed in terms of explicit algorithms -and show how to give a complete description of the spectra of optimal completions for strictly convex potentials, in terms of these parameters in the general case. This allows us to show that the spectra of such optimal completions do not depend on the choice of strictly convex potential, so that minimizers are then structural. The proofs of the technical results of this section are presented in Section 4. In particular, we settle in the affirmative some features of the structure of optimal completions for strictly convex potentials that were conjectured in [35]. As a byproduct we also settle in the affirmative a conjecture on local minimizers of strictly convex potentials with prescribed norms posed in [31].
Optimal completions with prescribed norms
In this section we give a detailed description of the optimal completion problem and recall some notions and results from our previous work [34,35]. We point out that the exposition of the results in Section 2.3 differs from that of [35], since this new presentation is better suited for our present purposes.
By now, finite frame theory is a well established area of intensive research. For a modern introduction to several aspects of this subject see [14]. In what follows we shall use the following notations and terminology (a small numerical sketch of these objects is given after the list below): let F = {f_i}_{i=1}^{n} be a finite sequence in a complex d-dimensional Hilbert space H. Then,

1. T_F ∈ L(C^n, H) denotes the synthesis operator of F, given by T_F(x) = Σ_{i=1}^{n} x_i f_i for x = (x_i)_{i=1}^{n} ∈ C^n.

2. T_F^* ∈ L(H, C^n) denotes the analysis operator of F, and it is given by T_F^*(h) = (⟨h, f_i⟩)_{i=1}^{n} for h ∈ H.

3. S_F ∈ L(H) denotes the frame operator of F, and it is given by S_F = T_F T_F^* = Σ_{i=1}^{n} f_i ⊗ f_i. Notice that S_F is positive by construction.

4. We say that F is a frame for H if F spans H; equivalently, F is a frame for H if S_F is a positive invertible operator acting on H.
5. In order to check whether F is a frame, we will inspect the spectrum of S_F. Thus, given a positive operator S ∈ L(H), λ(S) = (λ_i)_{i=1}^{d} ∈ R^d_{≥0} denotes the eigenvalues of S, counting multiplicities and arranged in non-increasing order, i.e. λ_1 ≥ . . . ≥ λ_d ≥ 0.
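As announced above, a small numerical sketch of these notions; the example sequence is arbitrary and only meant to illustrate the definitions.

```python
import numpy as np

def synthesis(F):
    """Synthesis operator T_F as a d x n matrix whose i-th column is f_i
    (F is given as an n x d array with one vector per row)."""
    return np.asarray(F, dtype=complex).T

def frame_operator(F):
    """Frame operator S_F = T_F T_F^* = sum_i f_i (x) f_i."""
    T = synthesis(F)
    return T @ T.conj().T

# Toy sequence of n = 4 vectors in C^2 (d = 2), one vector per row
F = np.array([[1, 0], [0, 1], [1, 1], [1, -1]], dtype=complex)
S = frame_operator(F)
lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lambda(S), non-increasing
print(lam)        # all eigenvalues > 0  =>  F spans C^2, i.e. F is a frame
```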
Presentation of the problem
In several applied situations it is desired to construct a sequence G in a complex d-dimensional Hilbert space H in such a way that the frame operator of G is given by some positive operator B and the squared norms of the frame elements are prescribed by a sequence of positive numbers a = (a i ) k i=1 . That is, given a fixed positive operator B on H and a ∈ R k >0 , we analyze the existence (and construction) of a sequence G = {g i } k i=1 such that S G = B and g i 2 = a i , for 1 ≤ i ≤ k . This is known as the classical frame design problem. It has been treated by several research groups (see for example [1,7,10,15,17,18,19,27]). In what follows we recall a solution of the classical frame design problem in the finite dimensional setting.
Recently, researchers have made a step forward in the classical frame design problem and have asked about the structure of optimal frames with prescribed parameters. In particular, there has been interest in the following problem: let H ≅ C^d and let F_0 = {f_i}_{i=1}^{n_o} be a fixed (finite) sequence of vectors in H. Consider a sequence a = (a_i)_{i=1}^{k} of positive numbers such that rk S_{F_0} ≥ d − k and denote by n = n_o + k. Then, with these fixed data, the problem is to construct a sequence G = {g_i}_{i=1}^{k} such that the resulting completed sequence F = (F_0, G), obtained by appending the sequence G to F_0, is a frame whose frame operator has eigenvalues that are as concentrated as possible: thus, ideally, we would search for completions G such that F = (F_0, G) is a tight frame. Unfortunately, it is well known that there might not exist such completions (see [19,20,21,30,34,35]). In this setting, the initial sequence of vectors can be considered as a checking device for the measurement, and therefore we search for a complementary set of measurements (given by vectors with prescribed norms) in such a way that the complete set of measurements is optimal in some sense. We could measure optimality in terms of the frame potential, i.e., we search for a frame F = (F_0, G), with ‖g_i‖² = a_i for 1 ≤ i ≤ k, such that its frame potential FP(F) = tr S_F² is minimal among all possible such completions (indeed, this problem has been considered before in the particular case F_0 = ∅ in [2,12,23,26,31]); alternatively, we could measure optimality in terms of the so-called mean squared error (MSE) of the completed sequence F, i.e. MSE(F) = tr(S_F^{-1}) (see [21]). More generally, we can measure robustness of the completed frame F = (F_0, G) in terms of general convex potentials: we write Conv(R_{≥0}) for the set of convex functions f : R_{≥0} → R, and Conv_s(R_{≥0}) = {f ∈ Conv(R_{≥0}) : f is strictly convex}. Following [31] we consider the (generalized) convex potential P_f associated to any f ∈ Conv(R_{≥0}), given by P_f(F) = tr f(S_F), where the matrix f(S_F) is defined by means of the usual functional calculus.
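A hedged sketch of how these potentials can be evaluated numerically through the eigenvalues of the frame operator; f(x) = x² recovers the frame potential and f(x) = 1/x the MSE (the latter assumes S_F is invertible). The matrix used is the frame operator of the toy frame from the previous sketch.

```python
import numpy as np

def convex_potential(S, f):
    """P_f(F) = tr f(S_F), computed through the spectrum of the frame operator."""
    lam = np.linalg.eigvalsh(S)
    return float(np.sum(f(lam)))

S = np.array([[3.0, 0.0], [0.0, 3.0]])            # frame operator of the toy frame above
print(convex_potential(S, lambda x: x ** 2))      # frame potential FP(F) = tr S_F^2
print(convex_potential(S, lambda x: 1.0 / x))     # mean squared error MSE(F) = tr S_F^-1
```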
In order to describe the main problems we first fix the notation that we shall use throughout the paper.
Let F_0 = {f_i}_{i=1}^{n_o} be a sequence of vectors in H and let a = (a_i)_{i=1}^{k} be a positive non-increasing sequence such that d − rk S_{F_0} ≤ k. Define n = n_o + k. Then:

1. In what follows we say that (F_0, a) are initial data for the completion problem (CP).

2. For these data we consider the set C_a(F_0) of completions with prescribed norms, i.e. the sequences F = (F_0, G) obtained by appending to F_0 a sequence G = {g_i}_{i=1}^{k} in H with ‖g_i‖² = a_i for 1 ≤ i ≤ k. When the initial data (F_0, a) are fixed, we shall use the notations S_0 = S_{F_0}, and λ = λ(S_0)↑ will denote the eigenvalues of S_0 arranged in non-decreasing order, i.e. λ_1 ≤ . . . ≤ λ_d, where x↑ ∈ R^d denotes the vector obtained from x ∈ R^d by re-arranging the coordinates of x in non-decreasing order.
Main problems: (Optimal completions with prescribed norms for majorization) Let (F 0 , a) be initial data for the CP and let f ∈ Conv s (R ≥0 ).
P1. Give an explicit description (both spectral and geometrical) of F ∈ C a (F 0 ) that are the minimizers of P f in C a (F 0 ).
P2. Construct a fast algorithm that efficiently computes all possible F ∈ C a (F 0 ) that are the minimizers of P f in C a (F 0 ).
P3. Verify that the set of F ∈ C a (F 0 ) that are the minimizers of P f in C a (F 0 ) is the same for every f ∈ Conv s (R ≥0 ).
In previous works we have obtained some results related with the problems above. Indeed, in [34] we obtained a partial affirmative answer to P3, while in [35] we obtained some partial results related with P1, together with a non-efficient algorithm as in P2 that worked in small examples (see Sections 2.2 and 2.3 below).
In this paper, building on our previous work, we completely solve the three problems above in terms of a constructive (algorithmic) approach.
2.2 On the structure of the minimizers of P_f on C_a(F_0)

In this section we collect results from [35] that we shall use in this paper. Throughout this section we fix the initial data (F_0, a) for the CP. Recall that λ = (λ_i)_{i=1}^{d} are the eigenvalues of S_0 = S_{F_0} arranged in non-decreasing order. Therefore we recast the results from [35] by reversing the ordering used in that work. Also notice that we are assuming that the sequence a is arranged in non-increasing order, that is, a_1 ≥ a_2 ≥ . . . ≥ a_k. In what follows, the notion of majorization will play a fundamental role: recall that given x, y ∈ R^d we say that x is submajorized by y, and write x ≺_w y, if Σ_{i=1}^{j} x↓_i ≤ Σ_{i=1}^{j} y↓_i for every 1 ≤ j ≤ d, where x↓ ∈ R^d denotes the vector obtained from x by rearrangement of its entries in non-increasing order. If x ≺_w y and Σ_{i=1}^{d} x_i = Σ_{i=1}^{d} y_i, then we say that x is majorized by y, and write x ≺ y. In particular, Prop. 2.1 states that the eigenvalues of B majorize the sequence of squared norms (to be precise, we must add zeros to one of the two vectors if they have different sizes).
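A short numerical sketch of the (sub)majorization test just defined; the zero-padding follows the convention mentioned above for vectors of different sizes, and the example data are arbitrary.

```python
import numpy as np

def submajorizes(y, x):
    """True if x is submajorized by y (x <w y): the partial sums of the decreasing
    rearrangement of x never exceed those of y. Vectors are zero-padded to equal length."""
    n = max(len(x), len(y))
    xd = np.sort(np.concatenate([x, np.zeros(n - len(x))]))[::-1]
    yd = np.sort(np.concatenate([y, np.zeros(n - len(y))]))[::-1]
    return bool(np.all(np.cumsum(xd) <= np.cumsum(yd) + 1e-12))

def majorizes(y, x):
    """True if x < y (majorization): submajorization plus equal totals."""
    return submajorizes(y, x) and np.isclose(np.sum(x), np.sum(y))

a = np.array([1.0, 1.0, 1.0, 1.0])        # prescribed squared norms
lam_B = np.array([2.5, 1.0, 0.5])         # spectrum of a candidate frame operator B
print(majorizes(lam_B, a))                # True: a < lambda(B), as required by Prop. 2.1
```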
Our analysis of the completed frames hinges on the majorization relations satisfied by their frame operators; hence, the following result plays a central role in our approach.
Proposition 2.4. Let (F 0 , a) be the initial data for the CP and let S be a positive operator on H. Then S is the frame operator for some completion (2) . By Proposition 2.4 we get the following partition: Building on Lidskii's inequality (see [3,III.4]) we obtained the following result: ) and let P f be the convex potential induced by f . By the well known interplay between majorization and minimization of convex functions (see [32]) and Theorem 2.
That is, if we consider the partition of C a (F 0 ) described in Eq. (3), then in each slice defined by µ the minimizers of the potential P f are characterized by the spectral condition (4). This shows that in order to search for global minimizers of P f on C a (F 0 ) we can restrict our attention to the set of completions F = (F 0 , G) ∈ C a (F 0 ) such that Indeed, Eqs. (3) and (4) show that if F = (F 0 , G) is a minimizer of P f in C a (F 0 ) then the eigenvalues of S F are such that λ(S F ) = (λ + λ(S G )) ↓ . Therefore, we analyze the existence and uniqueness of ≺-minimizers on the set {λ + µ : µ ∈ R d ≥0 , µ = µ ↓ and a ≺ µ}.
Theorem 2.7. Let (F 0 , a) be initial data for the CP. Denote by λ = λ(S F 0 ) ↑ . Then The following result is a reduction of the computation of the spectral structure of the optimal completions with respect to a fixed convex potential.
such that a ≺ µ and: and a ≺ γ} .
3. We have that S F g j = c i g j , for every j ∈ J i and every 1 ≤ i ≤ p .
The statement is still valid if we assume that F is just a local minimizer, but if we also assume as a hypothesis that F satisfies item 2 (for example if S F 0 = 0).
The feasible case of the CP
In this section we recall the results from [34] that we shall need in the sequel. Throughout this section we keep the notation used previously. That is, given the initial data (F_0, a) for the CP, we denote λ = λ(S_{F_0})↑ and t = tr a + tr λ. Let L(H)+ denote the convex cone of positive (semidefinite) operators acting on H. In [34] we introduced the set U_t(S_0, k) ⊆ L(H)+, and in [34, Theorem 3.12] it is shown that there exist ≺-minimizers in U_t(S_0, k). Indeed, [34] provides a distinguished vector, denoted here ν(λ, a), attached to these minimizers. Notice that by construction ν(λ, a) is not necessarily an ordered vector (neither decreasing nor increasing); yet, in terms of the terminology from [34], we have that ν_{λ, m}(t) = ν(λ, a)↓. Thus, we have reversed the order of the vector µ(λ, a), in accordance with reversing the order of λ = λ(S_{F_0})↑, and we have changed the description of the vector ν(λ, a), while preserving all of their majorization properties, with respect to [34]. Nevertheless, we point out that the ordering of the entries of the vector ν(λ, a) presented here plays a crucial role in simplifying the exposition of the results herein, as it guarantees that µ(λ, a) = ν(λ, a) − λ.
The following definition and remark show the relevance of the notions introduced above for the computation of the spectral structure of solutions for the optimal completion problem.
Remark 2.11. With the previous notations, assume that the pair (λ, a) is feasible and denote µ = µ(λ, a). In this case (see [34]) for any S which is a ≺-minimizer in U_t(S_0, k) it holds that λ(S − S_0) = µ and hence, by Proposition 2.4, we conclude that S is the frame operator for some completion in C_a(F_0). Moreover, Proposition 2.4 also shows that the frame operators of completions in C_a(F_0) are in U_t(S_0, k). Then S is also a ≺-minimizer in the set of frame operators of sequences in C_a(F_0). Therefore, since ≺-minimizers minimize every convex potential (see [32] for a detailed account of these facts), any completion F = (F_0, G) ∈ C_a(F_0) such that S_F = S is a minimizer of P_f for every f ∈ Conv(R_{≥0}).
On the other hand, as a consequence of the geometrical structure of S = S_F as above (see [34,35]), we conclude that there exists c > 0 such that S_F g_i = c g_i for every 1 ≤ i ≤ k. That is, in this case the structure of the completing sequence G given in Theorem 2.9 is trivial: the partition of {1, . . . , k} has only one member and there exists a unique constant c = c_1.
It is worth pointing out that it is easy to construct examples of initial data (F 0 , a) for the CP such that the pair (λ, a) is not feasible (see [34]), so that comments in Remark 2.11 do not apply in these cases.
Remark 2.12. Let (F 0 , a) be initial data for the CP with k ≥ d, recall the notations λ = λ(S F 0 ) ↑ and t = tr a + tr λ. As shown in [34] we can explicitly compute ν(λ , a) according to the following two cases: where 1 s ∈ R s is the vector with all its entries equal to one.
Moreover, λ_i ≤ ν(λ, a)_i for every 1 ≤ i ≤ d and ν(λ, a) = ν(λ, a)↑; in this case the index s also satisfies an additional condition (see [34]).

In what follows we obtain an explicit description of the vector ν(λ, a) in the case d ≤ k and (1/d)[tr a + tr λ] < λ_d; that is, we compute the parameters s and c of Eq. (8). The way in which these constants are found is the key for the developments of Section 3. Our present techniques differ substantially from those introduced in [34]. We begin by showing that the vector ν(λ, a) above is unique. Then, we show that the computation of ν(λ, a) for k < d can be reduced to the case k = d. First we need to introduce some notation: given j, r ∈ {0, 1, . . . , d} such that j < r, we denote by Q_{j,r} the corresponding final averages; we abbreviate Q_r = Q_{0,r}.
Lemma 2.14. Let k ≥ d and 1 ≤ r ≤ d . Then 1. If r < d and Q r < λ r+1 then Q r < Q j , for every j such that r < j ≤ d.
2. If r < d and Q r ≤ λ r+1 then Q r ≤ Q j , for every j such that r < j ≤ d.
3. If λ r ≤ Q r then Q r ≤ Q j , for every j such that 1 ≤ j < r.
Proof. Denote by c = Q r for a fixed r < d. Recall that λ = λ ↑ . If j > r then The proof of item 2 is identical. On the other side, if j < r then Proposition 2.15. Let (F 0 , a) be initial data for the CP with k ≥ d and suppose that 1 d [ tr a + tr λ ] < λ d . Then 1. There exists a unique index 1 ≤ s ≤ d such that λ s ≤ Q s < λ s+1 , and in this case Q j } and ν(λ , a) = (Q s 1 s , λ s+1 , . . . , λ d ) (10) 2. If another index r ≤ d − 1 satisfies that λ r ≤ Q r ≤ λ r+1 , then (a) Q r = min 1≤j≤d Q j = Q s and r ≤ s.
Proof. The existence of an index s such as in item 1 is guaranteed by the properties of ν(λ , a) stated in [34]. Indeed, it is easy to see that the index s described in Eq. (10) satisfies that λ s ≤ Q s < λ s+1 . The formula given in Eq. (10), which shows the uniqueness of ν(λ , a), is a direct consequence of Lemma 2.14. Assume that λ r ≤ Q r ≤ λ r+1 . Then Q r = min 1≤j≤d Q j = Q s and r ≤ s by Lemma 2.14. If r < s, then Q s = (1/s) ( r Q r + Σ_{i=r+1}^{s} λ_i ) = Q r . This clearly implies all the equalities of item (b). Finally, observe that item 2 =⇒ item 3.
and ν(λ , a) is constructed as in Proposition 2.15 (notice that in this case λ ∈ R d with d = k).
The proof is direct by observing that, extracting the entries λ k+1 , . . . , λ d of the vector ν(λ , a) as described in [34,Def. 4.13], the vector that one obtains (with the reverse order) satisfies the conditions of item 3 of Proposition 2.15 relative to the pair (λ , a).
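The computation of ν(λ , a) in the feasible case described by Remark 2.12 and Proposition 2.15 is completely explicit, so it may help to record it as a short numerical sketch. The formula used below for the final averages, Q_r = (tr a + λ_1 + · · · + λ_r)/r, is our reading of the (partially illegible) definition above; it is consistent with the identity s Q_s = r Q_r + λ_{r+1} + · · · + λ_s used in the proof of Proposition 2.15, but it should be checked against [34]. The function name and the toy data are ours.

```python
import numpy as np

def nu_feasible_case(lam, a):
    """Sketch of ν(λ, a) as in Proposition 2.15 (k >= d assumed).
    Final averages are taken as Q_r = (tr a + λ_1 + ... + λ_r) / r."""
    lam = np.sort(np.asarray(lam, dtype=float))        # λ = λ↑
    d, tr_a = lam.size, float(np.sum(a))
    Q = (tr_a + np.cumsum(lam)) / np.arange(1, d + 1)  # Q_1, ..., Q_d
    s = d - int(np.argmin(Q[::-1]))                    # largest minimizer of Q (ties ignored otherwise)
    c = Q[s - 1]                                       # Q_s = min_j Q_j, as in Eq. (10)
    return np.concatenate([np.full(s, c), lam[s:]]), s, c

# Toy run: λ = (1, 2, 6), a = (1, 1, 1), so (tr a + tr λ)/3 = 4 < 6 = λ_3.
nu, s, c = nu_feasible_case([1, 2, 6], [1, 1, 1])
print(nu, s, c)   # [3. 3. 6.] 2 3.0  (note tr ν = tr λ + tr a and λ_2 <= Q_2 < λ_3)
```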
The following result is in a sense a converse to Remark 2.11. It establishes that if there exists f ∈ Conv s (R ≥0 ) and a minimizer F = (F 0 , G) of P f in C a (F 0 ) such that the structure of the completing sequence G as described in Theorem 2.9 is trivial, then the underlying pair (λ , a) is feasible. Recall the notation ν f (λ , a) given in Theorem 2.8.
Lemma 2.17. Let (F 0 , a) be initial data for the CP , with k ≥ d and let f ∈ Conv s (R ≥0 ). Let F = (F 0 , G) be a minimum for P f on C a (F 0 ) such that λ(S F ) = (λ + λ(S G )) ↓ . Suppose that, for some c > 0, The same final conclusion trivially holds if s = dim W = d and S F = c I.
Uniqueness and characterization of the minimum
In this section we shall state the main results of the paper. For the sake of clarity of the exposition, we postpone the more technical proofs until Section 4.
3.1 (Fixed data, notations and terminology). Until Theorem 3.8, we fix f ∈ Conv s (R ≥0 ) and F = (F 0 , G) ∈ C a (F 0 ) a minimizer of P f on C a (F 0 ).
Since
because g i ∈ ker (S − c j I W ) for every i ∈ J j . Note that, by Theorem 2.9, each W j reduces both S F 0 and S G .
6. If p = 1 then J 1 = {1, . . . , k} and S = c 1 I W . Hence the minimum F satisfies the hypothesis of Lemma 2.17, so that the pair (λ , a) is feasible.
We denote by P_{j,r} = (1/(r − j + 1)) Σ_{i=j}^{r} h_i , for 1 ≤ j ≤ r ≤ d , the initial averages, where h_i = λ_i + a_i . We abbreviate P 1 , r = P r .

Remark 3.2 (A reduction procedure). Consider the data, notations and terminology fixed in 3.1. For any j ≤ p − 1 we consider an optimal completion for the reduced problem in H j . Indeed, recall that the minimality is computed in terms of the map S ↦ tr f (S). The importance of the previous remarks lies in the fact that they provide a powerful reduction method to compute the structure of the sets G i , K i and J i for 1 ≤ i ≤ p as well as the set of constants c 1 > . . . > c p > 0. Indeed, assume that we are able to describe the sets G 1 , K 1 , J 1 and the constant c 1 in some structural sense, using the fact that these sets are extremal (e.g. these sets are built on c 1 > c j for 2 ≤ j ≤ p).
Then, in principle, we could apply these structural arguments to find G 2 , K 2 , J 2 and the constant c 2 , using the fact that these are now extremal sets of F 1 , which is a P f minimizer of the reduced CP for (F (1) 0 , a L 1 ). On the other hand, the minimality of the final reduction F p−1 produces a pair (λ I p−1 , a L p−1 ) which is feasible by item 6 of 3.1, because it has a unique constant c p associated to the unique set K p . As we shall see, this strategy can be implemented to obtain (inductively) a precise description of the sets above. Remark 3.3. Let (F 0 , a) be initial data for the CP with d ≤ k, fix f ∈ Conv s (R ≥0 ) and let F = (F 0 , G) ∈ C a (F 0 ) be a global minimum for P f on C a (F 0 ). In section 4.1 we shall prove the following properties of the sets J j and K j defined in item 4. of 3.1 describing µ f (λ , a) and ν f (λ , a): 1. Each set J j and K j consists of consecutive indices, for 1 ≤ j ≤ p .
2. The sets K j and J j have the same number of elements, for 1 ≤ j ≤ p − 1 .
In particular, by items 1 and 2 above, K j = J j for 1 ≤ j ≤ p − 1 .
We state the properties of the sets J j and K j , 1 ≤ j ≤ p , described in Remark 3.3 in the following result.

Theorem 3.4. With the notations of Remark 3.3 (so that, in particular, d ≤ k) then 1. There exist 0 = s 0 < s 1 < s 2 < · · · < s p−1 < s p = s F such that c r = P_{s_{r−1}+1 , s_r} for 1 ≤ r ≤ p − 1 , or also c r = λ j + µ j for every j ∈ K r = J r , for 1 ≤ r ≤ p − 1 .
3. The constant c p (and the index s p ) is determined by the identity where, with the notations in Remark 3.2 (notice that I p−1 = {s p−1 + 1, . . . , d} and L p−1 = {s p−1 + 1, . . . , k} according to item 1 above) and ν(λ I p−1 , a L p−1 ) is computed as in Remark 2.12.
Let (F 0 , a) be initial data for the CP. In what follows we shall need the following notion, which allows us to show feasibility in the more general case in which, in the notations of Theorem 3.4, p > 1.

for the unique r > s such that

2. The following recursive method allows us to describe the vector ν f (λ , a) as in Theorem 3.4: (a) The index s 1 = max { j ≤ s p−1 : P 1 , j = max_{i ≤ s p−1} P 1 , i } , and c 1 = P 1 , s 1 .
(b) If the index s j is already computed and s j < s p−1 , then s j+1 = max { s j < r ≤ s p−1 : P_{s_j+1 , r} = max_{s_j < i ≤ s p−1} P_{s_j+1 , i} } and c j+1 = P_{s_j+1 , s_{j+1}} .
The following are the main results of the paper. In order to state them, we introduce the spectral picture of the completions with prescribed norms, given by Λ(C a (F 0 )) = { λ(S F ) : F ∈ C a (F 0 ) }.

Theorem 3.7. Let (F 0 , a) be initial data for the CP with d ≤ k. Then the vector ν = ν f (λ , a) is the same for every f ∈ Conv s (R ≥0 ). Therefore,

Proof. By Proposition 3.6, the minima ν = ν f (λ , a) are completely characterized by the data (λ , a) without interference of the map f . Therefore, given any γ ∈ Λ(C a (F 0 )),

The following result shows that the structure of optimal completions in C a (F 0 ) in case d > k can be obtained from the case in which k = d.
Theorem 3.8. Let (F 0 , a) be initial data for the CP with d > k. If we let where ν f (λ , a) is constructed as in Proposition 3.6 (since d = k, by construction of λ ∈ (R d ≥0 ) ↑ ). In this case the vector ν f (λ , a) is the same for every f ∈ Conv s (R ≥0 ) and also satisfies Eq. (15).
Remark 3.9. The construction of the minimum ν f (λ , a) given by Proposition 3.6 is algorithmic, and it can be easily implemented in MATLAB. It only depends on an already available routine for checking feasibility (see [34]), which is fast and efficient.
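A minimal sketch of this algorithm (in Python rather than MATLAB) is given below. It combines the recursive block construction of Proposition 3.6 / Proposition 4.12 on the indices up to the minimal feasible index with the explicit feasible-case computation `nu_feasible_case` sketched after Proposition 2.15 above. The feasibility routine of [34] is not reproduced here: it is passed in as an assumed callable `is_feasible`, and all function names are ours, not the paper's.

```python
import numpy as np

def nu_minimizer(lam, a, is_feasible):
    """Sketch of Proposition 3.6 / Remark 3.9, assuming k >= d.
    `is_feasible(lam_tail, a_tail)` stands for the feasibility routine of [34].
    Block averages follow Proposition 4.12, with h_i = λ_i + a_i and
    P_{j,r} the mean of h_j, ..., h_r."""
    lam = np.sort(np.asarray(lam, dtype=float))        # λ = λ↑
    a = -np.sort(-np.asarray(a, dtype=float))          # a = a↓
    d = lam.size
    h = lam + a[:d]                                    # h_i = λ_i + a_i, 1 <= i <= d

    if is_feasible(lam, a):                            # p = 1: no extra blocks
        return nu_feasible_case(lam, a)[0], []

    # s* = minimal feasible index (Propositions 4.14 / 4.15), playing the role of s_{p-1}
    s_star = next(s for s in range(1, d) if is_feasible(lam[s:], a[s:]))

    # recursive block construction of Proposition 4.12 on the indices 1, ..., s*
    blocks, start = [], 0                              # 0-based start of the current block
    while start < s_star:
        P = np.cumsum(h[start:s_star]) / np.arange(1, s_star - start + 1)
        j = int(np.max(np.flatnonzero(np.isclose(P, P.max()))))  # largest maximizer
        blocks.append((start + 1, start + j + 1, P[j]))          # (s_{r-1}+1, s_r, c_r), 1-based
        start += j + 1

    # feasible tail after s*, computed on the truncations λ^{s*}, a^{s*}
    nu_tail = nu_feasible_case(lam[s_star:], a[s_star:])[0]
    nu_head = np.concatenate([np.full(e - s + 1, c) for (s, e, c) in blocks])
    return np.concatenate([nu_head, nu_tail]), blocks   # entries listed by index, as ν* in Section 4
```

Passing the feasibility test as a callable keeps the sketch independent of the internals of [34]; any implementation of that routine (or a brute-force check of the majorization conditions) can be plugged in.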
Proofs of some technical results.
In this section we present detailed proofs of several statements in section 3. All these results assume that the initial data (F 0 , a) for the CP satisfies that k ≥ d. As seen in Theorem 3.8, the general case can be reduced to this situation.
4.1 Description of the sets K i and J i .
4.1. We begin by recalling the notations of 3.1: Let (F 0 , a) be initial data for the CP, with k ≥ d.
Fix a convex map f ∈ Conv s (R ≥0 ). We consider the following objects: We remark that, if F 0 = ∅, these facts are still valid for local minima by Theorem 2.9.
By Theorem 2.7 there exists an orthonormal basis of H {v
The next three Propositions give a complete proof of Theorem 3.4. The first of them justifies the convention that λ = λ(S F 0 ) ↑ .
Remark 4.2. In what follows we shall need the following elementary property of majorization (see [3]): if x 1 , y 1 ∈ R r and x 2 , y 2 ∈ R s are such that x 1 ≺ y 1 and x 2 ≺ y 2 , then (x 1 , x 2 ) ≺ (y 1 , y 2 ).

Proposition 4.3. Let (F 0 , a) be initial data for the CP with λ = λ( S F 0 ) ↑ , and consider the notations of 4.1. If p > 1, then

Inductively, by means of Remark 3.2, we deduce that all sets K j consist of consecutive indices, and that K i < K j (in terms of their elements) if i < j.
Proof. Suppose that there are i ∈ K 1 and j ∈ K r (for some r > 1) such that j < i. Then λ j ≤ λ i and µ i ≤ µ j . For t > 0 very small, let µ i (t) = µ i − t > 0 and µ j (t) = µ j + t. Consider the vector µ(t) obtained by changing in µ the entries µ i by µ i (t) and µ j by µ j (t). Observe that not necessarily µ(t) = µ(t) ↓ , but we are indeed sure that c 1 > c r .
Nevertheless, by Remark 4.2, (µ i , µ j ) ≺ (µ i (t) , µ j (t) ) =⇒ a ≺ µ ≺ µ(t). Therefore there exists F = (F 0 , G ) ∈ C a (F 0 ) such that, using the ONB of Eq. (12), Considering the restrictions to V as operators in L(V ) ∼ = M 2 (C) we get that for t small enough in such a way that c 1 − t > c r + t, so that (c 1 − t , c r + t) = (c 1 − t , c r + t) ↓ . Then a contradiction. The inductive argument follows from Remark 3.2.
4.4. In the following two statements we assume that, for some f ∈ Conv s (R ≥0 ), the sequence F = (F 0 , G) ∈ C op a (F 0 ) is a global minimum for P f , or it is a local minimum if S F 0 = 0 and λ = 0. In both cases 4.1 applies.
Proposition 4.5. Let (F 0 , a) be initial data for the CP, and let F = (F 0 , G) ∈ C op a (F 0 ) as in 4.1 and 4.4. Suppose that p > 1. Given h ∈ J i and l ∈ J r then In particular, the sets J i consist of consecutive indices, and J 1 < J 2 < . . . < J p (in terms of their elements).
Proof. Let us assume that i < r ≤ p , h ∈ J i and l ∈ J r , but l < h (in fact, we shall only use the weaker assumption that a l ≥ a h ). We also know that ⟨ g l , g h ⟩ = 0. Denote by w h = g h /∥g h ∥ = a h ^{−1/2} g h and w l = g l /∥g l ∥ = a l ^{−1/2} g l . Let g h (t) = cos(t) g h + sin(t) ∥g h ∥ w l and g l (t) = cos(γt) g l + sin(γt) ∥g l ∥ w h , for t ∈ R and some convenient γ > 0 that we shall find later. Let F γ (t) be the sequence obtained by changing in F the vectors g h by g h (t) and g l by g l (t), for every t ∈ R. Notice that ∥g h (t)∥ 2 = a h and ∥g l (t)∥ 2 = a l for every t ∈ R, so that all the sequences F γ (t) ∈ C a (F 0 ).
Let W = span{w h , w l }, a subspace which reduces S F and S F γ (t) . Note that g h (t), g l (t) ∈ W . In the matrix representation with respect to this basis of W (the semicolon separating rows) we get that g l ⊗ g l = [ 0 , 0 ; 0 , a l ] and g l (t) ⊗ g l (t) = a l [ sin 2 (γt) , cos(γt) sin(γt) ; cos(γt) sin(γt) , cos 2 (γt) ] . If we denote by S(t) = S F γ (t) , we get that S(t) = S F − g h ⊗ g h − g l ⊗ g l + g h (t) ⊗ g h (t) + g l (t) ⊗ g l (t). Therefore S(t)| W ⊥ = S F | W ⊥ . On the other hand, S F | W = [ c i , 0 ; 0 , c r ]. Then A γ (t) := S(t)| W = [ c i + a h (cos 2 (t) − 1) + a l sin 2 (γt) , a h cos(t) sin(t) + a l cos(γt) sin(γt) ; a h cos(t) sin(t) + a l cos(γt) sin(γt) , c r + a h sin 2 (t) + a l (cos 2 (γt) − 1) ] . Note that tr A γ (t) = c i + c r for every t ∈ R. Therefore λ(A γ (t) ) ≺ (c i , c r ) strictly ⇐⇒ ∥A γ (t)∥ 2 2 < c 2 i + c 2 r . (18) Hence we consider the map m γ : R → R given by m γ (t) = ∥A γ (t)∥ 2 2 . Note that S(0) = S F =⇒ m γ (0) = c 2 i + c 2 r . We shall see that, for a convenient choice of γ, it holds that m γ '(0) = 0 but m γ ''(0) < 0. This will contradict the (local) minimality of F, because m γ would have in this case a strict local maximum at t = 0, so that, by Eq. (18), λ(A γ (t) ) ≺ (c i , c r ) strictly =⇒ λ(S F γ (t) ) ≺ λ(S F ) strictly =⇒ P f (F γ (t) ) < P f (F) for every t near 0.
Since we are assuming that a l ≥ a h , we get that D > 0. Hence there exists γ ∈ R such that m γ ''(0) < 0. Observe that as long as D > 0 we arrive at the same contradiction.
The following result is inspired by some ideas from [12].
Proposition 4.6. Let (F 0 , a) be initial data for the CP, and let F = (F 0 , G) ∈ C op a (F 0 ) as in 4.1 and 4.4. For every j < p, the subsequence {g i } i∈J j of G is linearly independent.
Proof. Suppose that there exists 1 ≤ j ≤ p − 1 such that {g i } i∈J j is linearly dependent. Hence there exist coefficients z l ∈ C, l ∈ J j (not all zero), such that |z l | ≤ 1/2 and Σ_{l∈J j} z l a l g l = 0 .
Let I j ⊆ J j be given by I j = {l ∈ J j : z l ≠ 0} and let h ∈ H such that ∥h∥ = 1 and Fix l ∈ I j . Let Re(A) = (A + A * )/2 denote the real part of each A ∈ L(H). Then Let S(t) denote the frame operator of F(t) and notice that S(0) = S F . Note that Then R(t) is a smooth function such that and such that R (0) = 0. Therefore lim t→0 t −2 R(t) = 0. We now consider W = span ( {g l : l ∈ I j } ∪ {h} ) = span {g l : l ∈ I j } ⊕ C · h (an orthogonal direct sum).
Then dim W = s + 1, for s = dim span{g l : l ∈ I j } ≥ 1. By construction, the subspace W reduces S F and S(t) for t ∈ R, in such a way that S(t)| W ⊥ = S F | W ⊥ for t ∈ R. On the other hand where we use the fact that the ranges of the selfadjoint operators in the second and third term in the formula above clearly lie in W . Then λ( S F | W ) = (c j 1 s , c p ) ∈ (R s+1 >0 ) ↓ and where we have used the definition of s and the fact that |z l | > 0 for l ∈ I j . Hence, for sufficiently small t, the spectrum of the operator A(t) ∈ L(W ) defined in (20) is where we have used the fact that ⟨ g l , h ⟩ = 0 for every l ∈ I j . Let us now consider Recall that in this case lim t→0 t −2 δ j (t) = 0 for 1 ≤ j ≤ s + 1. Using Weyl's inequality on Eq. (20), we A direct test shows that, for small t, this ρ(t) ≺ λ( S F | W ) = (c j 1 s , c p ) strictly. Then, since f is strictly convex, for every sufficiently small t we have that This last fact contradicts the assumption that F is a local minimizer of P f in C op a (F 0 ).

Remark 4.7. Proposition 4.6 allows us to show that in case F 0 = ∅ then local and global minimizers of a convex potential P f , induced by f ∈ Conv s (R ≥0 ), on C a (F 0 ) - endowed with the product topology - coincide, as conjectured in [31].
Recall that a local minimizer F is a juxtaposition of tight frame sequences {F i } p i=1 which generate pairwise orthogonal subspaces of H. Notice that by [35, Lemma 4.9] F is a frame for H. Moreover, by Proposition 4.5, it is constructed using a partition of a with consecutive indices. Now by inspection of the proof of Proposition 4.6 we see that only one such frame sequence can be a linearly dependent set: the one with the smallest tight constant c p . This forces the (ordered) spectrum ν of a local minimizer to be either ν = c 1 d or ν = (a 1 , a 2 , . . . , a r , c , · · · , c) , where a r > c ≥ a r+1 , and c is the constant of the unique tight subframe constructed with a linearly dependent sequence of vectors with norms given by {a i } k i=r+1 (notice that this forces c ≥ a r+1 ). But it is not difficult to see that this vector can be constructed in a unique way, that is, there is only one r such that That is, the spectrum of local minimizers is unique and therefore local and global minimizers of P f coincide, for every potential P f as above.
Several proofs.
Let (F 0 , a) be initial data for the CP with λ = λ(S F 0 ) ↑ , a = a ↓ and d ≤ k. Recall that we denote by h i = λ i + a i for every 1 ≤ i ≤ d and, given j ≤ r ≤ d, we denote by P_{j,r} = (1/(r − j + 1)) Σ_{i=j}^{r} h_i the corresponding initial average. We shall abbreviate P 1 , r = P r .
3. The constant c p (and the index s p ) is determined by the identity where, with the notations in Remark 3.2 (notice that I p−1 = {s p−1 + 1, . . . , d} and L p−1 = {s p−1 + 1, . . . , k} according to item 1 above) and ν(λ I p−1 , a L p−1 ) is computed as in Remark 2.12.
Lemma 4.9. Let (F 0 , a) be initial data for the CP with k ≥ d.
Remark 4.10. Let (F 0 , a) be initial data for the CP with k ≥ d and recall the description of a minimum ν f (λ , a) given in Theorem 3.4. As in Lemma 4.9 (or by an inductive argument using Remark 3.2) we can ensure that for every r ≤ p − 1 the constant c r = P_{s_{r−1}+1 , s_r} ≥ P_{s_{r−1}+1 , j} for every j such that s r−1 + 1 ≤ j ≤ s r .
Lemma 4.11. Let (F 0 , a) be initial data for the CP. With the notations of Theorem 3.4, the global minimum ν f (λ , a), its constants c j and the indices s j (for 1 ≤ j ≤ p) satisfy the following properties: 1. Suppose that p > 1. For every 1 ≤ j ≤ p − 1 such that j > 1, the constant c j satisfies that 2. Fix 1 ≤ j ≤ p − 1 such that j > 1. Then 3. In particular the averages P 1 ,

Proof. The inequality of item 1 follows since Now we prove the inequality of Eq. (26): Given an index t such that s j−1 < t ≤ s j , where we used the fact that c j−1 ≤ P 1 , s j−1 for 1 ≤ j − 1 ≤ p − 1, which follows from item 1. In particular we have proved item 3, and this also proves that Eq. (26) holds for s j < t ≤ s p−1 .

Proposition 4.12. Let (F 0 , a) be initial data for the CP. With the notations of Theorem 3.4, the global minimum ν f (λ , a), its constants c j and the indices s j (for 1 ≤ j ≤ p) satisfy the following properties: suppose we know the index s p−1 , and that p > 1. Then we have a recursive method to reconstruct ν: 1. The index s 1 = max { j ≤ s p−1 : P 1 , j = max_{i ≤ s p−1} P 1 , i } , and c 1 = P 1 , s 1 .
2. If we have already computed the index s j and s j < s p−1 , then s j+1 = max { s j < r ≤ s p−1 : P_{s_j+1 , r} = max_{s_j < i ≤ s p−1} P_{s_j+1 , i} } and c j+1 = P_{s_j+1 , s_{j+1}} .
Proof. The formula P 1 , s 1 = max_{i ≤ s p−1} P 1 , i follows from Lemma 4.11, which also implies that s 1 must be the greatest index (before s p−1 ) satisfying this property.
The iterative program works by applying the last fact to the successive truncations of ν which are still minima in their neighborhood, by Remark 3.2.
Recall that h i = λ i + a i and that, for 0 ≤ j < r ≤ d, we denoted by Q_{j,r} = (1/(r − j)) ( Σ_{i=j+1}^{r} λ_i + Σ_{i=j+1}^{k} a_i ) the final averages, and we abbreviate Q_{0,r} = Q r .
Recall also the notion of feasible indices given in Definition 3.5: given 1 ≤ s ≤ d − 1 denote by λ s = (λ s+1 , . . . , λ d ) ∈ R d−s and a s = (a s+1 , . . . , a k ) the truncations of the original vectors λ and a. Recall that the index s is feasible if the pair (λ s , a s ) is feasible for the CP. In any case we denote by ν s = ν(λ s , a s ) = (c 1 r−s , λ r+1 , . . . , λ d ) where c = Q s , r for the unique r > s such that λ r ≤ c < λ r+1 . This means that λ s ≤ ν s ∈ (R d−s >0 ) ↑ and that tr ν s = tr λ s + tr a s . In other words, r is the unique index which satisfies: given j > s, Q s , r < Q s , j if j > r and Q s , r ≤ Q s , j if j < r .
Given an index l > s, if Q s , l < λ l+1 then l ≥ r , where r is the index associated to ν s of item 1.
Proof. Item 1 follows from Proposition 2.15 applied to λ s and a s .
Item 2: Assume that l < r, so that l + 1 ≤ r. Then Q s , l < λ l+1 ≤ λ r ≤ Q s , r ; on the other hand, item 1 gives Q s , r ≤ Q s , l since l < r, a contradiction. Hence l ≥ r.
Proposition 4.14. Let (F 0 , a) be initial data for the CP which is not feasible, with k ≥ d. Let s * = min {1 ≤ s ≤ d : s is feasible } .
Let ν * be constructed using the recursive method of Proposition 4.12, by using s * instead of s p−1 (which can always be done). If in this way we obtain the constants c 1 > . . . > c q−1 , and we define c q as the feasibility constant of λ s * and a s * , then c q−1 > c q .
Proof. For simplicity of the notations, by working with the pair (λ s q−2 , a s q−2 ), we can assume that q = 2. Denote by s 1 = s * < s 2 and by c 1 , c 2 the indices and constants given by c 1 = P 1 , s 1 and c 2 = Q s 1 , s 2 , and we must show that c 1 > c 2 . Recall that h i = λ i + a i . We can assume that: • By Proposition 4.12, c 1 ≥ (1/p) Σ_{i=1}^{p} h i = P 1 , p for every 1 ≤ p ≤ s 1 .
• λ s 2 ≤ c 2 < λ s 2 +1 , where the second item follows by the feasibility of s * and the last item states that c 2 is the feasible constant for the second block.
Suppose that c 1 ≤ c 2 ; we will arrive at a contradiction by showing that, in such case, the pair (λ , a) would be feasible (that is, s * = 0 or s q−2 ). In order to do that, let b be the unique constant such that λ t ≤ b < λ t+1 , which appears in ν(λ , a). Then Q s 2 = (1/s 2 ) ( s 1 c 1 + (s 2 − s 1 ) c 2 ) ≤ c 2 < λ s 2 +1 .
To show the feasibility, by Lemma 4.9 we must show that b ≥ P 1 , p for every p ∈ I t . First, if we are in the case t ≤ s 1 , this is clear since b ≥ c 1 ≥ P 1 , p for every p ≤ s 1 . Finally, suppose that t ≥ s 1 + 1. As before, b ≥ c 1 implies b ≥ P 1 , p for every p ≤ s 1 . On the other hand, if s 1 < p ≤ t then Lemma 4.13 applied to ν s 1 (whose "r" is s 2 ) assures that = t b − s 1 c 1 .
Since p ≤ t and b ≤ c 2 , this implies that (p − s 1 ) c 2 ≤ p b − s 1 c 1 . Therefore p P 1 , p = s 1 c 1 + (p − s 1 ) P s 1 +1 , p ≤ s 1 c 1 + (p − s 1 ) c 2 ≤ p b , that is, b ≥ P 1 , p . Hence the pair (λ , a) would be feasible, a contradiction.

Proof. Denote by s * the minimum of the statement. Since s p−1 is feasible (recall the remark after Definition 3.5), then s * ≤ s p−1 . On the other hand, let us construct the vector ν * of Proposition 4.14, using the iterative method of Proposition 4.12 with respect to the index s = s * , and the solution for the feasible pair (λ s * , a s * ) after s * . Write ν * = (ν * 1 , . . . , ν * s , c 1 r−s , λ r+1 , . . . , λ d ), where c is the constant of the feasible part of ν * . Observe that Proposition 4.14 assures that c < min{ν * i : 1 ≤ i ≤ s}. Using this fact and Proposition 4.12 it is easy to see that the vector µ = ν * − λ ↑ satisfies that µ = µ ↓ . On the other hand, Lemma 4.9 and Remark 4.10 assure that a ≺ µ (using the majorization in each block and the fact that a = a ↓ ). Then ν * ∈ {λ + µ ↓ : µ ∈ R d ≥0 and a ≺ µ}. Moreover, in each step of the construction of the minimum ν = ν f (λ , a) we have to get the same index s j = s j (ν * ) of ν * or there exists a step where the maximum which determines s j (for ν f (λ , a)) satisfies that s j > s * (in the eventual case in which s p−1 > s * ).
DEIXIS IN THE SONG LYRICS OF LEWIS CAPALDI'S "BREACH" ALBUM
Article History Received: August 2020 Revised: September 2020 Published: October 2020. This research aims to analyze three types of deixis using Yule's theory and to interpret the reference meaning of the deictic expressions found in the song lyrics of Lewis Capaldi's "Breach" album. The researchers selected this album as the subject of analysis because of its popularity and the deictic words it contains. The lyrics of the "Breach" album were therefore analyzed using a pragmatic approach, in particular the theory of Yule (1996) on deixis. This study was conducted using a descriptive qualitative method; the kind of research is content analysis. The data were the song lyrics of Lewis Capaldi's "Breach" album (2018). The deictic expressions in these songs were classified into three types of deixis according to their own criteria. The results show that the three types of deixis in Yule's theory, namely person deixis, spatial deixis and temporal deixis, are all used in Lewis Capaldi's "Breach" album. The most dominant type found was person deixis with 11 data (55%), followed by spatial deixis with 6 data (30%) and temporal deixis with 3 data (15%). The use of person deixis indicates the participants in the songs, while spatial deixis indicates the location and place of the event relative to the participants, and temporal deixis indicates the timing of the speech event.
INTRODUCTION
Language is a means of communication and interaction between individuals and groups by which people cooperate and live together in society. Hutajulu & Herman (2019) stated that language is a tool of communication that plays an essential part in making communication. It is important for someone to master a language so that she or he can share ideas with somebody else, whether the communication is done orally or in writing. Moreover, Marpaung (2019) added that language is a communication tool that allows people to communicate with each other and to describe their purposes while interacting and sharing new ideas. So, language is the basic means of interacting with other people wherever one lives.
To communicate, there are many kinds of languages in the world, such as Indonesian, South Korean, Chinese, Javanese, Scottish, Arabic, English and many others. Among the various languages used by people, one is English. Sinaga, Herman & Pasaribu (2020) mentioned that English is vital for our lives; besides building relationships with others, English is significant for education because it has become one of the subjects of the national examination, which demands that students comprehend English. In learning English, there are four basic skills that need to be learned in schools, namely listening, speaking, reading and writing. All of these language skills are very important to learn.
Talking about the English language relates directly to linguistics. Linguistics is the scientific study of human language. Thus, linguistics deals, among other things, with the meaning expressed by the modulation of a speaker's voice and with how listeners relate new information to the information they already have. There are many sub-fields of linguistics; one of them is pragmatics. Pragmatics is the study of "invisible" meaning, or how we recognize what is meant even when it is not actually said or written (Yule). According to Levinson (1997), as quoted in Pardede, Herman & Pratiwi (2019), pragmatics is the study of the ability of language users to pair sentences with the contexts in which they would be appropriate. Furthermore, Yule (1996), as cited in Herman (2015), defined pragmatics as the study of the relationship between linguistic forms and the users of those forms. Pragmatics is also the only area that allows humans into the analysis, because in pragmatics one can talk about people's intended meanings, their assumptions, their purposes, and the kinds of actions, such as requests and apologies, that they perform when they speak. From all the definitions above, it can be deduced that pragmatics is the study of meaning based on context, including expressions of relative distance and contextual meaning. Pragmatics has several parts, namely deixis, reference, presupposition, speech acts, politeness, discourse analysis, conversation analysis, the co-operative principle, and implicature. In this research, the researchers discuss deixis.
Deixis refers to words that point to certain things, such as people, objects, places, or times, like you, here, and now. A word can be called deictic if its referent changes depending on who the speaker is and when and where the word is spoken. Furthermore, as Yule (2010) claims, deictic expressions are words such as here and there, this or that, now and then, yesterday, today or tomorrow, as well as pronouns such as you, me, she, him, it, and them. He continues, "They are technically known as deictic expressions, from the Greek word deixis, which means 'pointing' via language." According to Nisa, Asi & Sari (2020), deixis is an important topic in pragmatics when the listener (especially the music lover) does not understand the context of a song lyric. A song lyric can be understood when the listener knows what the references are, or when and where the utterances are spoken. This also concerns listeners who do not understand what the speaker means, so that communication cannot run properly because of their misinterpretation (Sari, 2015). Moreover, Dallin (1994), in Firdaus (2013), mentioned that lyrics are written as a form of interaction between the writer and the listener. A song lyric is essentially language in a formulation that is not separated from the rules of music, such as the rhythm, melody and harmony of the song. Lyrics can be categorized as a part of discourse because they consist of words or sentences that have different grammatical functions. By writing song lyrics, people can easily show their feelings and emotions.
Furthermore, most people find it confusing and slightly difficult, when they are listening to songs and looking at the lyrics, to identify the referents to which or to whom the words or pronouns refer. As an example, the researchers cite a lyric from one of Lewis Capaldi's songs in the "Breach" album, "Someone You Loved", which contains deixis: (1) "I'm going under and this time I fear there's no one to save me". Here the person deixis includes I and me (singular first person), the spatial deixis includes under, this and there, and the temporal deixis includes (this) time; other deictic words found in the album include somebody, it and you. These are just a few examples, and the researchers assume there are still many deictic words left in the "Breach" album to be found.
Another researcher, Amaliyah (2017), addressed similar problems in a study entitled "A Pragmatics Study on Deixis Analysis in the Song Lyrics of Harris J's Salam Album", which analyzed the three types of deixis in Harris J's Salam album based on Yule's theory. That previous research shows that people often listen to song lyrics without fully apprehending the meaning of the lyrics themselves. It differs from the present research, and another difference is the object of the research: the object of the previous research was Harris J's album, whereas the object of this research is Lewis Capaldi's "Breach" album. For these reasons, the researchers were interested in conducting a study entitled "Deixis in the Song Lyrics of Lewis Capaldi's 'Breach' Album".
RESEARCH METHOD
Research Design
The research design is the researcher's plan for how to proceed to gain an understanding of some group in its context. The researchers used a qualitative research design. According to Creswell (2014), writing a methods section for a qualitative research proposal partly requires educating readers as to the intent of qualitative research, mentioning specific designs, carefully reflecting on the role the researcher plays in the study, drawing from an ever-expanding list of types of data sources, using specific protocols for recording data, analyzing the information through multiple steps of analysis, and mentioning approaches for documenting the accuracy or validity of the data collected. Moreover, according to Nassaji (2015), qualitative research is more holistic and often involves a rich collection of data from various sources to gain a deeper understanding of individual participants, including their opinions, perspectives, and attitudes.
There are several types of qualitative research, such as basic interpretative studies, case studies, document or content analysis, ethnography, grounded theory, historical studies, narrative inquiry, and phenomenological studies. Among these ways of doing qualitative research, this research is a kind of document or content analysis. Content analysis focuses on analyzing and interpreting recorded material to learn about human behavior. The material may be public records, textbooks, letters, films, tapes, diaries, themes, reports, or other documents. Content analysis usually begins with a question that the researcher believes can best be answered by studying documents. Content analysis is sometimes quantitative, such as when one investigates middle school science textbooks to determine the extent of coverage given to minority scientists' achievements. This research is considered document or content analysis because it describes and analyzes data from song lyrics.
Data Source of the Research
The subject of the research is the song lyrics of Lewis Capaldi's "Breach" album, and the object of the research is the deixis found in those song lyrics. The researchers identified and analyzed all the songs in the album and looked for the types of deixis found within them. This means the source of the data is a kind of document. According to Ary et al. (2010), the term 'documents' here refers to a wide range of written, physical, and visual materials, including what other authors may term artifacts. Documents may be personal, such as autobiographies, diaries, and letters; official, such as files, reports, memoranda, or minutes; or documents of popular culture, such as books, films, and videos. Document analysis can be of written or text-based artifacts (textbooks, novels, journals, meeting minutes, logs, announcements, policy statements, newspapers, transcripts, birth certificates, marriage records, budgets, letters, e-mail messages, etc.) or of non-written records (photographs, audiotapes, videotapes, computer images, websites, musical performances, televised political speeches, YouTube videos, virtual world settings, etc.).
The analysis may involve existing artifacts or records. In some cases, the researcher may ask subjects to produce artifacts or documents, for example asking participants to keep a journal about personal experiences, to write family stories, to draw pictures to express memories, or to explain their thinking aloud as it is audiotaped.
The reason for choosing this album is that the deixis in it has not been analyzed yet, and the album contains many deictic expressions in its song lyrics. That is why the researchers were interested in analyzing the deixis in this album. The album is the second extended play by Scottish singer-songwriter Lewis Capaldi. It was released as a digital download on 8 November 2018, and it includes the singles "Tough", "Grace" and "Someone You Loved" and a demo of "Something Borrowed".
Instruments of the Research
Instruments are tools or facilities used by a researcher to collect data. Instruments allow a researcher to carry out the research project more easily and make it clearer, more complete, and more systematic. This part is also very important in any form of research. The instrument of this research was a script of the song lyrics of Lewis Capaldi's "Breach" album downloaded from a suitable link on the internet.
Technique of Data Collection
This part is very important in any form of research. According to Ary et al. (2010), the most common data collection methods used in qualitative research are observation, interviewing, questionnaires, and document or artifact analysis. In this research, the researchers used documents to obtain the data. Accordingly, the technique of data collection is document or content analysis. As Ary et al. (2010) explain, content or document analysis is a research method applied to written or visual materials to identify specified characteristics of the material. The materials analyzed could be textbooks, newspapers, web pages, speeches, television programs, advertisements, musical compositions, or any of a host of other types of documents.
In collecting the data, the researchers used listening and note-taking methods. The steps were as follows: first, searching for the scripts of the lyrics of the "Breach" album; second, transcribing all of the lyrics in the "Breach" album; and last, coding all the lyrics to be analyzed based on deixis.
Techniques of Data Analysis
The data were then analyzed using the theory of Creswell (2009). There are essentially five steps in analyzing the data, as follows. First, data preparation: at the beginning, the researchers chose the song lyrics to be identified and examined them so that they could be related to the focus and the formulated problems. Second, data reading: after preparing the data, the researchers read the whole of the song lyrics carefully in order to find the types of deixis within the selected lyrics. Third, data classification: the researchers used an instrument called a qualitative codebook; according to Creswell (2009), a qualitative codebook is a table that contains predetermined codes. Fourth, data confirmation: after all types of deixis had been found, the data were confirmed using the theory of Yule (1996) in Fitria (2015), which was used to determine the types of deixis found so that they could be classified correctly and appropriately into their own types. Fifth, frequency and percentage calculation: after all the data were classified, the researchers calculated the data in order to determine the frequency of the most dominant type of deixis found within the whole set of song lyrics.
Triangulation
The validity of qualitative research can be assessed through triangulation. In qualitative research, triangulation makes the research believable, conventional, acceptable, and accountable. Triangulation is thus the technique of examining the data against something else as a check on, or comparison with, the data. Denzin (1978), in UNAIDS (2010), identified four basic types of triangulation: (1) data triangulation, the use of multiple data sources in a single study; (2) investigator triangulation, the use of multiple investigators/researchers to study a particular phenomenon; (3) theory triangulation, the use of multiple perspectives to interpret the results of a study; and (4) methodological triangulation, the use of multiple methods to conduct a study.
In this research, the researchers used data triangulation. Denzin (1978), in UNAIDS (2010), stated that data triangulation is the use of a variety of data sources, including time, space and persons, in a study. Findings and any weaknesses in the data can be corroborated and compensated for by the strengths of other data, thereby increasing the validity and reliability of the results.
In this research, the researchers used written documents, archives, historical documents, official records, and personal notes or writings, also simply known as journals, books or articles. Within data triangulation, the researchers used time triangulation. Data triangulation involves the use of various qualitative models; if the conclusions of each method are the same, the truth is established. Data triangulation was chosen because this research attempted to check the validity of the data (the types of deixis) within the subject of the research (the selected song lyrics) and to check the degree of trustworthiness of the data.
RESEARCH FINDINGS AND DISCUSSION
Research Findings
The data analysis shows that three types of deixis are used in the song lyrics of Lewis Capaldi's "Breach" album: 11 data of person deixis (55%), 6 data of spatial deixis (30%), and 3 data of temporal deixis (15%), for a total of 20 data. Based on this analysis, the dominant type of deixis in the song lyrics of Lewis Capaldi's "Breach" album is person deixis. The detailed findings can be seen in Table 1.
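The percentages reported above follow directly from the raw counts. A minimal illustration of this final calculation step is given below; the counts are those found in this study, but the code itself is only a worked example and not part of the original analysis.

```python
counts = {"person deixis": 11, "spatial deixis": 6, "temporal deixis": 3}
total = sum(counts.values())                      # 20 instances in the album
for deixis_type, n in counts.items():
    print(f"{deixis_type}: {n} data ({n / total:.0%})")
# person deixis: 11 data (55%); spatial deixis: 6 data (30%); temporal deixis: 3 data (15%)
```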
Discussion
After analyzing the data from Lewis Capaldi's "Breach" album using Yule's theory, three types of deixis were found, namely person deixis, spatial deixis and temporal deixis. Person deixis is the deictic reference to the participant role of a referent, covering first person, second person and third person deixis. Spatial deixis indicates the relative location of people and objects with respect to the speech event. Temporal deixis concerns itself with the various times involved in and referred to in an utterance.
The album contains four songs. Lewis Capaldi's "Breach" album is the second extended play by the Scottish singer-songwriter Lewis Capaldi. It was released as a digital download on 8 November 2018 and includes the singles "Tough", "Grace" and "Someone You Loved" and a demo of "Something Borrowed". The reason for choosing this album is that the deixis in it had not been analyzed yet, and the album contains many deictic expressions in its song lyrics. That is why the researchers were interested in analyzing the deixis in this album.
Based on the findings of this research, the researchers inferred that there is a similarity with the research findings of Amaliyah (2017): the theory used was the same, namely Yule (1996), and both studies concern deixis analysis in song lyrics. The difference from the research done by Amaliyah (2017) lies in the object of the research. The object of the previous research was Harris J's album, whereas the object of this research is Lewis Capaldi's "Breach" album. The English song lyrics of Harris J's Salam album carry moral values and the speaker's experiences and feelings in his religious life, while this research was conducted only by using the theory of Yule (1996). The researchers selected these songs as the subject of analysis because of the popularity of the songs and the deictic words they contain.
CONCLUSION
Based on the findings and discussion in the previous chapter, the researchers conclude that three types of deixis occur in the song lyrics of Lewis Capaldi's "Breach" album: person deixis, spatial deixis, and temporal deixis. The researchers found 20 data containing deixis, comprising 11 instances of person deixis, 6 of spatial deixis, and 3 of temporal deixis; thus all the types of deixis in Yule's theory were found, and the most dominant type of deixis used in the song lyrics is person deixis. Furthermore, it is hoped that this research will become a reference in the field of deixis for students and future researchers.
Insect Pests of Pulse Crops and their Management in Neolithic Europe
ABSTRACT Insect pests affecting standing and stored crops can cause severe damage and reduce yields considerably. Was this also the case in Neolithic Europe? Did early farming populations take a certain amount of harvest loss into account? Did they decide to change crops or rotate them when they became too infested? Did they obtain new crops from neighbouring communities as part of this process? Or did they actively fight against pests? This paper focuses on pulse crop pests, presenting the earliest evidence of fava beans displaying boreholes and of the presence of pea weevil in two different archaeological sites: Can Sadurní (in a phase dated to ca. 4800-4500 cal BC), located in the NE Iberian Peninsula and Zürich-Parkhaus Opéra (in a phase dated to ca. 3160 BC), located in Central Switzerland. Evidence suggests that early farmers were aware of the damages produced by pests and we propose different strategies for their management, including potential evidence for the use of repellent or trap plants in the plots.
Introduction
Animal and insect pests, diseases and weeds are notable constraints on crop production (Dark and Gent 2001). Currently, without crop protection one could lose up to 50% of a barley or wheat harvest due to these multiple agents. Infestations during storage can be one of the most catastrophic of all, depending on the crop, the type of storage and the climatic conditions. The FAO estimated yearly losses during storage due to insect attack of ca. 10% of the world's production in 1947, but some authors consider that this value is an underestimation when considering past societies (Buckland 1978; Smith and Kenward 2011). Ethnographic research shows how different farming populations, even those without access to chemicals or modern technology, actively fight against insect pests and other animals causing damage to their crops. They are aware of their existence and they aim to have pest-free crops (Narayanasamy 2006). One of the reasons for replacing a crop might be that it was too susceptible to insect pest infestation or other diseases (e.g. Dark and Gent 2001). Nowadays intensive industrial agriculture is widespread, and the production and storage of food has reached gigantic proportions. Pests in current monocultures, usually with plants that no longer resist local or incoming pests, can cause devastating damage, for which (often toxic) pesticides have been used at a large scale. Current knowledge about Neolithic farming practices establishes small-scale intensive mixed farming as the most widespread farming model in central and southern Europe, often based on considerable crop diversity (Antolín 2016; Bogaard 2004). To what extent should we then consider pests as a factor in crop choice and farming practices during the Neolithic period? Some authors such as Dark and Gent (2001) have theorised that at the beginning of agriculture insect pests would not have survived long after arriving in new climatic areas, and that crop exchange was not frequent enough to sustain for any time populations that could arrive at sparse intervals. On the other hand, authors like Panagiotakopulu and Buckland (1991) consider that it would be impossible for early farmers to completely remove infested crops until the development of modern cleaning techniques, so we should assume that infested seeds were eaten, as current ethnographic observations also confirm (R. Pelling, pers. com.). One option to remove infested seeds would be to pan them or to float them. Whatever the case may be, in recent syntheses it has been observed that cereal grain pests did not survive long once crops spread towards central Europe, and it is not until much more recent periods (mostly during the Roman period) that they seem to reappear and stay (Panagiotakopulu and Buckland 2017, 2018). The main difference between the effects of pests nowadays and in the Neolithic period is probably to be found in the scale: regional (at a large scale) in the former, and local in the latter. All farmers in a settlement would have been threatened by the presence of a pest (Halstead and O'Shea 1989), but the likelihood that this pest was transmitted at a much larger scale (through exchange, for instance) is more limited than in more recent chronologies. We should therefore assume that measures were taken against pests, since the danger of having infested crops existed and it could have affected a whole community, but at a relatively local scale.
The existence of pests in the Neolithic has long been known, associated with cereal crops. Obata et al. found impressions of Sitophilus zeamais/oryzae on potsherds dating to ca. 9000 BP (King et al. 2014). Most of the first identifications in SW Asia, such as in Hacilar layer VI or in Atlit-Yam, are of wheat weevil (Sitophilus granarius) (King et al. 2014). The wheat weevil cannot fly and completes its life cycle in the storage area. It has also been documented in Neolithic Europe but largely disappeared around 4500 BC and only returned during the Iron Age (Panagiotakopulu and Buckland 2017, 2018). This cannot be an artefact of research tradition or preservation issues, since intensive insect research has been conducted for some well-preserved lakeshore/bog sites dated to the 4th millennium cal. BC in central Europe (Büchner and Wolf 1997; Schäfer 2017; Schmidt 2006, 2011) and also in the UK. The eradication of this pest could relate to changing storage methods, since silo pits develop anoxic conditions during the storage of grain, thus preventing the survival of pests such as the wheat weevil, as demonstrated through experiments (Reynolds 1974). Previous research has already observed that economic changes starting in the Iron Age, related to more extensive farming practices and the regular trade of crops, had favoured the arrival and propagation of insect pests that eventually caused significant damage to medieval and modern crop yields (Panagiotakopulu and Buckland 1991; Smith and Kenward 2011).
For this paper we would like to concentrate on pests affecting pulses, a topic which has not received as much attention in the literature. Among these, we have to highlight the significance of the Bruchidae group, which affect standing crops and can only be detected in the store (Kislev 1991).
The study of insect pests in archaeology has several difficulties. First of all, identifying the crop to which these pests belong is not always straightforward. This is particularly the case for pulses (Medovic et al. 2011), which are usually underrepresented in most sites. In theory, archaeobotanists should be able to document attacked seeds, but pulses are scanty in prehistoric archaeobotanical assemblages of many areas of Europe, and insect boreholes, when present, are not always reported. The extraordinary preservation conditions of wetland sites are not helpful, since pulses are underrepresented in those sites as well. Likewise, for cereals, grains with boreholes have seldom been noted by archaeobotanists (e.g. Kislev 2015). Boreholes are likely to go unnoticed, since attacked grains may survive charring in a less recognisable form and the original hole may disappear due to the effects of charring (with the swelling of the endosperm), and they may be more fragile in terms of post-depositional taphonomic agents. In this case, only desiccated contexts seem to be ideal for their recognition (Borojevic et al. 2010; Morales et al. 2014). In fact, infested seeds cannot always be identified, because if larvae died within the seed they can barely be detected (Panagiotakopulu and Buckland 1991).
Secondly, identifying the insect species is also difficult, even when a reference collection is available. Insect remains appear fragmented in archaeological sites, and entomologists who are not trained with archaeological material are not necessarily able to identify them. There is another issue related to recovery biases. Insect remains are often preserved only under anoxic conditions. There are times when they are recovered in a charred state, but the remains become too fragile, and large-scale flotation techniques probably do not allow a proper recovery (Panagiotakopulu and Buckland 1991). In addition, not all archaeological contexts are equally suitable for the recovery of well-preserved insect remains. Ideally, primary contexts of accumulation of dumped material (pits or floors) should be sampled (Smith and Kenward 2011). One further problem involves contamination from recent layers or from recent material during sample processing and storage (King et al. 2014). Finally, not all taxa are equally resistant, so the most delicate taxa might always be underrepresented in the analyses (King et al. 2014).
If pests were present during the Neolithic period in Europe, we should expect that farmers acted against them. Recent research, partly in the framework of the SNF-funded AgriChange Project (2018-2021) (Antolín et al. 2018), has brought to light evidence of pests of pulse crops dated to the 5th and 4th millennia cal. BC. Beyond the mere attestation of these finds, our research questions ask if the available data suggest awareness of the existence of these pests by Neolithic farmers and if traditional methods could have been used to eradicate them.
Can Sadurní Cave
The archaeological site of Cova de Can Sadurní is located at c. 425 m asl, in the Garraf Massif, next to the small village of Begues and ca. 30 km from the city of Barcelona (Spain). The site includes both the deposits inside the cave and an external terrace of c. 200 m². Inside the cave, a surface of around 50 m² is being excavated (Figure 1). The stratigraphy recovered at the site to date is very impressive; over 4 m of deposits date from 10,500 cal. BC until Roman times (Edo et al. 2019; Edo and Antolín 2016; Edo, Blasco, and Villalba 2011). Recently, excavations have focused on the layers dated to the 5th millennium cal. BC (from layer 12 to layer 10, with several layers and sub-layers in between). There are a number of episodes that are clearly connected to the penning of domestic animals inside the cave. These are dated between 4800 and 4300 cal. BC (Table 1, Fig. 2). Sedimentologically, a high anthropogenic component is detected, basically consisting of very organic deposits, with in situ preservation of burnt herbivore dung, sometimes visible as stratigraphic units of deposits of white and black colour. These are known as fumier, frequently found in Neolithic cave deposits, mostly in the Mediterranean area (Bergadà et al. 2018) and the SW Alps (Martin 2014). Ovicaprines were dominant in the animal assemblage (Saña et al. 2015).
The archaeobotanical study of the site is currently ongoing. Previous publications have dealt with the Early Neolithic funerary deposits uncovered by a smaller sondage (Antolín and Buxó 2011) and material from older excavations (between 1993 and 2008) from the Middle Neolithic deposits (Antolín 2016; Antolín, Buxo, and Edo i Benaiges 2015a). The sampling strategy has varied over time (Antolín 2008, 2016), but since 2010, 100% of the sediment has been processed using a flotation machine; sieves of 2 and 0.5 mm have been used to recover the flot and a 2 mm mesh has been used to recover the heavy fraction. All fractions have been dried because most of the material is charred (only a small number of mineralised remains have been recovered so far). The heavy fraction has been sorted by the naked eye, while the other fractions have always been sorted under the binocular microscope. Currently the whole Middle Neolithic sequence has been studied for 3 pilot squares, while the contents of the fumier deposits have also been analysed. The results will only be partially presented here (presence/absence per layer/stratigraphic unit) because it is not the goal of this paper to discuss them. We thus focus on layers 11a4 and 11a5 and the structures or fumier layers found within them: XIII, XIV, XVII, XVIII and 12. We consider the remains found outside of the burnt dung deposits for comparative purposes. The volume of sediment investigated from the structures is around 92 litres, while more than 1150 litres of sediment have been investigated from the surrounding deposits. All in all, almost 3000 plant macroremains have been retrieved (ca. 475 from inside the fumier deposits) and recorded in ArboDat (Kreuz and Schäfer 2014). The total results (presence/absence) can be found in the ESM 1.
At least five different cereals could have been cultivated at the site (Table 2): naked barley (Hordeum vulgare var. nudum), naked wheat (Triticum aestivum/durum/turgidum or T. 'nudum'), emmer (T. dicoccon), einkorn (T. monococcum) and the so-called 'new' glume wheat (Triticum sp., 'new type'). The most remarkable diversity is found among pulses: chickpea (Cicer arietinum), pea (Pisum sativum), bitter vetch (Vicia ervilia), fava bean (Vicia faba) and common vetch (Vicia sativa). Finally, two oil plants have been identified: flax (Linum usitatissimum) and opium poppy (Papaver somniferum). Regarding the evidence for pulse crop pests, we want to highlight that the two seeds of fava bean recovered at the site have boreholes (Fig. 3). The holes are less than 1 mm in width. We tried to date one of the seeds but it disintegrated during the cleaning process. Fava beans have been frequently recovered in Neolithic sites of the Iberian Peninsula (Peña-Chocarro, Pérez-Jordà, and Morales 2018), but less often in Catalonia (Antolín, Jacomet, and Buxó 2015b). It is very likely, though, that they are underrepresented and that their role in the economy is not yet well known. Chickpea has never been identified in the Neolithic of the central and western Mediterranean areas.
Among the wild plants, there is a considerable diversity (at least 32 taxa have been identified to date) and some of them are particularly abundant, such as Trifolium sp., Hyoscyamus niger, Capsella bursa-pastoris type, Solanum nigrum, Quercus sp., Arbutus unedo, Pistacia lentiscus, Vitis vinifera subsp. sylvestris or Rubus sp.
The contents of the different fumiers are quite diverse in terms of species found. Among the most common plants we have Trifolium sp., Quercus sp., Arbutus unedo, Pistacia lentiscus, Vitis vinifera subsp. sylvestris or Rubus sp.
Zürich-Parkhaus Opéra
Zürich-Parkhaus Opéra is located on the northern shore of Lake Zürich (Switzerland) (Fig. 1). It was excavated during 2010 and 2011, over an area of 3000 m². The main archaeological results of the site have already been published (Bleicher and Harb
Remains of pea (Pisum sativum) were identified at Zürich-Parkhaus Opéra, and it seems to have been an important crop considering the average concentration (ca. 40 remains/litre), the maximum concentration (ca. 500 r/L) and ubiquity (ca. 80%) of uncharred pea pod fragments in layer 14. In comparison, charred seeds were found in less than 10% of the samples (25) and normally only 1 per sample (in total, 27 seeds). In layer 13, pod fragments were identified in only 28 samples (due to the difficulties mentioned above), with a similar average concentration of ca. 40 r/L and a maximum concentration of ca. 375 r/L, which suggests that this crop would have been equally well represented in layer 13.
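The two quantitative measures used here, concentration (remains per litre of processed sediment) and ubiquity (the proportion of samples in which a taxon occurs), can be illustrated with a short calculation; the sample volumes and counts below are purely hypothetical and do not reproduce the site data.

```python
# Illustrative calculation of concentration (remains per litre of sediment)
# and ubiquity (% of samples containing a taxon). Sample volumes and counts
# are invented for demonstration only.

samples = [
    # (sample_id, sediment_volume_litres, pea_pod_fragments_counted)
    ("S01", 2.0, 85),
    ("S02", 1.5, 0),
    ("S03", 3.0, 150),
    ("S04", 2.5, 60),
]

concentrations = [count / vol for _, vol, count in samples]               # remains per litre, per sample
mean_concentration = sum(concentrations) / len(concentrations)            # average density across samples
max_concentration = max(concentrations)                                   # highest density in any sample
ubiquity = 100 * sum(1 for c in concentrations if c > 0) / len(samples)   # % of samples with the taxon

print(f"mean concentration: {mean_concentration:.1f} remains/L")
print(f"max concentration:  {max_concentration:.1f} remains/L")
print(f"ubiquity:           {ubiquity:.0f}% of samples")
```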
Large-seeded wild fruits (such as hazelnuts, acorns, and wild apples/pears) have also been observed to have played a very significant role in the economy of the settlement (Antolín et al. 2016, in press). A large number of other wild plant taxa have been documented (ESM 2): around 200 taxa. It is worth highlighting, for the purposes of this paper, the presence of Mentha sp., Origanum vulgare, Solanum sp., Thymus serpyllum and Rubus sp.
Insect remains were also studied from Parkhaus Opéra. From layer 13, 8263 insect fragments were analysed, with 1448 fragments from layer 14. They were sorted together with plant remains and, since the fractions were always subsampled, it was necessary to estimate the total amount of finds in each sample. It is estimated that 49,980 insect elements were recovered from layer 13 and 7000 from layer 14 (Schäfer 2017). The assemblages of both layers are dominated by larvae of aquatic insects, which relate to the local environment of the settlement. Invertebrate remains from terrestrial environments have also been found in significant amounts, particularly puparia of several types of flies. Decomposing organic material must have been lying around both inside and outside of the houses, providing an optimal environment for flies to lay their eggs. Strongly sclerotised wing covers of different species of beetle were also found. Among these, the dominant ones are dung beetles, such as the earth-boring dung beetle (Hister funestus) and scarab beetles (Onthophagus taurus and Oxyomus silvestris). In addition to these, woodland beetles have also been found. These live preferentially under the bark of trees or in tree fungi, such as the hairy fungus beetle (Litargus connexus) or the cylindrical bark beetle (Bitoma crenata). Among all these insect remains, only one pest was identified: the pea weevil (Bruchus pisorum). In total, 8 elytra were identified in layer 13 and none in layer 14 (Fig. 5). Despite intensive and active research for the identification of other pests affecting cereal crops, none was found among the weevils (Curculionidae) or the darkling beetles (Tenebrionidae).
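Because the sorted fractions were subsampled, raw counts have to be scaled up to whole-sample estimates; the sketch below illustrates that extrapolation with invented fraction sizes and counts, not the actual Parkhaus Opéra sorting protocol.

```python
# Extrapolating insect (or seed) counts from a sorted subsample to the whole
# sample: counts in each sieve fraction are divided by the proportion of that
# fraction actually sorted. All numbers are hypothetical.

fractions = [
    # (mesh, proportion_of_fraction_sorted, items_counted_in_subsample)
    ("2 mm",    1.0,   12),   # coarse fraction fully sorted
    ("0.5 mm",  0.25,  30),   # only a quarter of this fraction sorted
    ("0.35 mm", 0.125,  8),   # one eighth sorted
]

estimated_total = sum(count / sorted_prop for _, sorted_prop, count in fractions)
print(f"estimated finds in the whole sample: {estimated_total:.0f}")
# -> 12/1.0 + 30/0.25 + 8/0.125 = 12 + 120 + 64 = 196
```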
The pea weevil is a thermophilic species that attacks mostly pea plants during their flowering period in the field. Females deposit the eggs on the young immature pods. After hatching, the ca. 1.5 mm white larvae bore their way into the pod and nest themselves in a seed. The larvae feed on the seed and, in warm conditions, they emerge while the crop is still in the field, but under cooler conditions they may be harvested with the crop, overwinter within a puparium and eclose inside the storage structure. Pea beetles cause large crop failures, as each female is able to lay up to 400-500 eggs. The remaining pea seeds are not suitable for human consumption, and they are also barely viable, thus reducing the seed available for sowing in the coming year (Koch 1992; Reichmuth 1997; Weidner and Sellenschlo 2010).
The wing covers (elytra) of pea weevil found in Parkhaus Opéra are spread across the settlement in layer 13 and no concentration has been observed, so we can interpret that they were present in the stores of pea seeds in several houses of the settlement.
Were pulse pests recognised by prehistoric farmers?
There are records of seeds with boreholes (Table 3) and identifications of insect remains of the Bruchidae group (Table 4) from previous investigations of prehistoric sites in Europe and SW Asia.
The earliest legume seeds with boreholes are pea seeds from the sites of Beida (Jordan) and Hacilar (Turkey), dated to the 7th and 6th millennia cal BC. The infested seeds of fava bean found in Can Sadurní Cave are actually the oldest record for this species that we could find in the literature; the chronologically closest finds are already from Chalcolithic contexts (Table 3). These finds increase in the Bronze Age, being reported at sites such as Kastanas (Greece) or Zug (Switzerland), which indicates that pests might have affected pulses more significantly during this period, both in central and southern Europe, probably continuing into the Iron Age, as shown by finds at Horbat Rosch Zayit (Israel) or Le Câtel de Rozel (United Kingdom). Unfortunately, it is not possible to judge how representative the available dataset is. The scarcity of records of pulse seeds with boreholes might be due to the fact that specialists do not always mention them in publications. There are dozens of sites with more or less isolated finds of cultivated legumes (sometimes over 50 remains), and also sites with concentrations (>500 seeds): pea seeds in Les Valladas (L. Martin, unpublished), in France; fava beans in several sites of the Iberian Peninsula, such as Buraco da Pala (Rego and Rodriguez 1993), Cueva del Toro (Buxó 2004) and Castillejos (Rovira 2007); and in northern Africa (Morales et al. 2016). No mention of the presence of boreholes was found in the publications. Some authors confirmed the absence of infested seeds at these sites (Martin, Buxó and Pérez-Jordà, pers. comm.), but it is not possible to know whether finds from other sites showed any boreholes, since their absence is not systematically recorded. Is it possible to suggest whether Neolithic farmers were aware of pests affecting pulses?
If pests were known, and infested seeds were deliberately avoided by farmers, seeds with boreholes would not necessarily appear in large concentrations of pulses, but as part of the discarded everyday waste, and thus, if present in archaeobotanical assemblages, the only evidence remaining would be in the form of some scattered finds as waste. In the case of Can Sadurní, the seeds were recovered in layers of burnt dung. This could indicate that they were detected by farmers, removed from the stored crop and given to animals as fodder. Regarding archaeoentomological remains, the number of finds of Bruchidae is quite low and, other than an indirect reference to the presence of other Bruchus species in Runnymede (UK), the oldest identification of pea weevil seems to be from Zürich-Parkhaus Opéra (Table 4). This might be due to the scarcity of archaeoentomological studies in the Mediterranean area, where sites with waterlogged deposits have rarely been investigated; hence, it becomes even more important that infested seeds found in archaeobotanical analyses are systematically reported in order to have a comprehensive overview of the importance of these pests in prehistory. As observed regarding the presence of infested seeds, finds of Bruchidae become slightly more common in the Bronze and Iron Ages, showing a similar trend to the one observed in the seed and fruit record.
In the case of Zürich-Parkhaus Opéra, we could not find any evidence of pea weevil in layer 14 (ca. 3090 BC), which was formed after a short settlement hiatus of ca. 50 years, while layer 13 (ca. 3160 BC) provided substantial evidence of the presence of this pest. It is unclear whether the inhabitants of the site relocated far away (they could have moved some hundred metres along the lakeshore), or whether they abandoned their fields during this time. In any case, it would be hard to prove that they did this specifically to get rid of this pest. It is possible that they had adopted measures against it during the first occupation (layer 13) or that the weather was no longer favourable for the warmth-loving pea weevil once they settled again.
Traditional methods for fighting against pests
If we consider the possibility that pests were recognised by early farmers, it should be expected that some decisions were taken in order to eliminate them from the fields or grain stores.This sort of discussion has rarely entered the archaeological discourse and archaeological evidence for such practices is not straightforward.There are several sources for the investigation of traditional methods to fight against crop pests: textual sources from the classical and medieval authors, and current ethnological and ethnobotanical records.Both have advantages and disadvantages.Most classical authors refer to extensive farming methods applied in the Mediterranean area from Roman times until the Middle Ages, so they are only partly relevant to our case studies.On the other hand, ethnographic data may also come from completely different environments, crops and scales of farming.
There are a variety of methods that have been reported as useful for the protection of standing or stored crops from pests (Table 5).The Ebers Papyrus XCVIII (an Egyptian medical document dated to ca. 3552 BP) contains instructions for deterring insects using burnt gazelle dung diluted in water (King et al. 2014).Zadoks (2013) compiled data from classical and medieval agronomists, including several Iberian ones such as Columella (4-70 AD), Gabriel Alonso de Herrera (1474-1540) and Miquel Agustí (1560-1630).He provides a long list of plants that are useful to fight different types of pests: dill (Anethum graveolens), henbane (Hyoscyamus niger), bay leaves (Laurus nobilis), oregano (Origanum vulgare), mastic oil (Pistacia lentiscus), acorns (Quercus sp.), Rubus sp.(often the shrub is used as fencing), Solanum nigrum (recommended as a strong vinegar against insects (aphids) on fruits and vegetables), thyme (Thymus serpyllum) (toxic to storage insects when its essential oil is fumigated), fenugreek (Trigonella foenum-graecum) (grown mixed with other pulses because its strong scent prevents the rest from being eaten), bitter vetch (Vicia ervilia) (the seeds contain tannins, useful repellent in storage; sown among vegetables to protect against fleas, lice and birds).The author also mentions ashes.
The use of ashes as pesticides is also commonly recorded in the ethnographic record (kitchen ash, wood ash, dung ash). They can be used either with the stored crop or on the standing crop (Chandola, Rathore, and Kumar 2011) and basically prevent insects from biting the vegetative tissues. Additionally, leaves of certain trees (e.g. walnut tree, or Vitex), water-diluted cow dung or urine, and salt (Chandola, Rathore, and Kumar 2011; Mehta et al. 2012; Narayanasamy 2006) are also mentioned. Another important processing step to avoid pests and diseases is drying the crop under the sun (Chandola, Rathore, and Kumar 2011). Usually it is women who are in charge of keeping the crop pest-free (Mehta et al. 2012).
Used specifically against pulse beetles, turmeric powder or powdered Vitex leaves have been recorded as a means of protecting stored seeds (Narayanasamy 2006), as have essential oils of Artemisia (Titouhi ...) ... wheat and opium poppy. Although speculative, chickpea might have arrived in the Iberian Peninsula in the same way. This is a rare crop in the Neolithic period, only well documented in Turkey and in Bulgaria, and authors do not necessarily interpret it as a crop in the latter case (Marinova and Popova 2008). Chickpea needs warm temperatures for germinating and is considered a summer crop (Halstead 2014). It is reported ethnographically as being used as a trap crop planted around cereal fields (Tesfaye and Gautam 2003) and in classical texts as a plant that would attract pests (snails and slugs, for instance), thus minimising the effect of pests on other plants around them (Zadoks 2013, 142). It is usually grown as a mixed crop nowadays in places like India, either with oil-seeds (rape-seed, mustard, linseed) or with lentil and barley (Saxena 1987), and it has been recorded as a typical plant sown in cultivated fallows (Halstead 2014). If we rule out the unlikely possibility that chickpea arrived at the site through long-distance trade, it seems plausible that it was a tolerated 'weed' in the fields that was perceived to be favourable for the main crop and possibly also consumed by humans, even though it is only rarely found in the archaeological record. Animals could also have consumed repellent or trap crops that were present in the fields after the harvest, and the seeds could potentially have been incorporated into the archaeobotanical record in layers of charred dung. In fact, the only other tentative identification of chickpea in Neolithic contexts from the Iberian Peninsula is from Mirador Cave, in Atapuerca, in similar chronologies and context (Rodríguez, Allué, and Buxó 2016), which would suggest a similar taphonomic origin for the finds from both sites. Unfortunately, under the current state of research we cannot offer a conclusive interpretation for these finds, and only raise awareness of the importance of certain plants for pest management in traditional societies and highlight that these plants may be archaeobotanically detected.
Conclusions
The discovery of the earliest finds of pea weevil (in Switzerland) and of infested fava beans (in Catalonia, Spain) led us to review the current record for pulse crop pests in Prehistoric Europe and to query the significance of these pests, whether they were perceived as such by Neolithic farmers and whether measures may have been taken to eradicate them. Although losses might have only been significant at a local scale, ethnography seems to indicate that farmers would have strived for pest-free crops. It had already been observed by other authors (Panagiotakopulu and Buckland 2018) that pests associated with cereals disappeared from Europe shortly after the arrival of farming. This might be due to the changing climatic conditions in central Europe, or to the isolation of villages (insects such as Sitophilus granarius cannot fly), or indeed a result of the existence of pest management strategies and adequate storage practices.
Nevertheless, the data presented here show that pulse crop pests appear in the 5th and 4th millennia BC. The presence of infested seeds in a layer of burnt dung in Can Sadurní is discussed as potential evidence that these seeds had been discarded and given to animals as fodder, which would suggest an awareness of pests by Neolithic farmers. In Parkhaus Opéra, two settlement phases dated to the 32nd and 31st centuries BC, with a hiatus of several decades between them, were investigated. The pea weevil was found in the older phase but not in the younger one, which could suggest the existence of a successful strategy to remove this pest, or that the abandonment of the site facilitated its eradication. We found several plants present in both of our case studies that could have potentially been used either as trap crops or as insect repellents. Based on ethnographic and textual resources, we discussed the possible uses of dill and chickpea (besides their edible uses) as pest repellents or trap crops.
We highlight the need for more systematic archaeoentomological analyses (particularly in continental Europe) and a standardised recording of infested seeds in archaeobotanical reports. Likewise, the management of pests seems to be a poorly explored subject in prehistoric farming, and further research might contribute interesting insights into traditional methods of protecting standing or stored crops against them.
Notes on contributors
Ferran Antolín is an Assistant Professor at the University of Basel.He is an archaeobotanist and his research is focused on early farmers in Europe, agricultural decision-making processes among self-sufficient farmers, the cultivation and domestication of plants and the management of wild plant food resources.He currently leads the project AgriChange (2018-2021), focusing on sites with waterlogged preservation conditions in the NW Mediterranean and Switzerland and combining multiple lines of evidence, including archaeobotany, archaeoentomology, archaeozoology, radiocarbon dates, stable isotopes and archaeological structures used for storage.
Marguerita Schäfer is a Postdoctoral researcher at the University of Basel. She is an archaeozoologist and archaeoentomologist specialised in the Neolithic period in Central Europe. She has vast experience both in Neolithic dry sites and in pile-dwelling sites and is interested in animal management strategies, prehistoric diet, environmental reconstructions and taphonomic analyses.
Figure 1 .
Figure 1. Map showing the area currently being studied by the AgriChange Project with the location of both sites presented in this paper and the site plans.
Table 1 .
Radiocarbon dates from layers of burnt dung, obtained on seed and fruit remains, from Can Sadurní.
Table 2 .
Presence of cultivated taxa in the different fumier deposits of Can Sadurní and in the layers where they were embedded. Mostly unpublished, except for layer 12 (Antolín 2008).
Figure 3. Remains of pulses found in Can Sadurní Cave: a and b: fava beans with boreholes; c: bitter vetch; d: chickpea (Photos: R. Soteras).
Table 3 .
Compilation of records of legume seeds with boreholes from Prehistoric sites in Europe and SW Asia.
Table 4 .
Compilation of entomological finds of Bruchidae in archaeological sites of the circum-Mediterranean areas and Europe.
Table 5 .
Methods reported in classical and medieval texts for protecting crops (from Zadoks 2013) and recorded in current indigenous populations.
Table 6 .
List of plants recorded as used as trap crops or repellents in or around plots.
AMPK mediates regulation of glomerular volume and podocyte survival
Herein, we report that Shroom3 knockdown, via Fyn inhibition, induced albuminuria with foot process effacement (FPE) without focal segmental glomerulosclerosis (FSGS) or podocytopenia. Interestingly, knockdown mice had reduced podocyte volumes. Human minimal change disease (MCD), where podocyte Fyn inactivation was reported, also showed lower glomerular volumes than FSGS. We hypothesized that lower glomerular volume prevented the progression to podocytopenia. To test this hypothesis, we utilized unilateral and 5/6th nephrectomy models in Shroom3-KD mice. Knockdown mice exhibited less glomerular and podocyte hypertrophy after nephrectomy. FYN-knockdown podocytes had similar reductions in podocyte volume, implying that Fyn was downstream of Shroom3. Using SHROOM3 or FYN knockdown, we confirmed reduced podocyte protein content, along with significantly increased phosphorylated AMPK, a negative regulator of anabolism. AMPK activation resulted from increased cytoplasmic redistribution of LKB1 in podocytes. Inhibition of AMPK abolished the reduction in glomerular volume and induced podocytopenia in mice with FPE, suggesting a protective role for AMPK activation. In agreement with this, treatment of glomerular injury models with AMPK activators restricted glomerular volume, podocytopenia, and progression to FSGS. Glomerular transcriptomes from MCD biopsies also showed significant enrichment of Fyn inactivation and Ampk activation versus FSGS glomeruli. In summary, we demonstrated the important role of AMPK in glomerular volume regulation and podocyte survival. Our data suggest that AMPK activation adaptively regulates glomerular volume to prevent podocytopenia in the context of podocyte injury.
Introduction
Podocytes are specialized epithelial cells on the urinary side of the glomerular filtration barrier, and podocyte actin cytoskeletal disorganization is near universal with nephrotic syndrome (NS) and visualized as foot process effacement (FPE) (1). Focal segmental glomerulosclerosis (FSGS) causes proteinuria and NS, in which podocytes show diffuse FPE associated with podocyte loss, glomerulosclerosis, and progressive renal failure (2); minimal change disease (MCD), in spite of diffuse FPE, shows no podocytopenia and low rates of disease progression. MCD can be morphologically indistinguishable from early FSGS, and some MCD cases reportedly transition to FSGS (2). Hence, comparing signaling events in MCD podocytes during FPE versus FSGS podocytes could define critical mechanisms that maintain podocyte survival in MCD. Using morphometry in humans and experimental models, FSGS has been associated with larger glomerular volumes (Vglom) and podocyte hypertrophy (3)(4)(5)(6)(7)(8). By contrast, the specific significance of restricting glomerulomegaly (or Vglom) in MCD is unknown. Signals that promoted an "MCD-like" pathology in the setting of podocyte injury and FPE by restricting glomerulomegaly, preventing podocytopenia and progression to FSGS, would be of considerable therapeutic interest in all NS.
Based on the surprising morphometric finding of reduced podocyte volumes in Shroom3-knockdown mice with diffuse FPE, we first examined a cohort of human NS cases from the Neptune consortium. We identified significantly reduced Vglom in MCD versus FSGS, suggesting maintained Vglom regulation in MCD cases and glomerulomegaly in FSGS. We then performed detailed morphometric studies in our Shroom3-KD model of diffuse FPE, followed by in vitro studies to understand the mechanism and phenotypic studies in the context of podocyte injury, to examine the impact of regulation of Vglom on podocyte survival. These studies revealed enhanced activation of AMPK with either Shroom3 or Fyn KD in podocytes, which accounted for reduced podocyte volume and Vglom. We confirmed that the mechanism of AMPK activation in podocytes in this model is via cytoplasmic redistribution of the AMPK-activating kinase, LKB1. Because AMPK is a major regulator of cell growth and survival by inhibiting cellular protein synthesis and enhancing autophagy, we postulated that AMPK is a key signaling molecule mediating Vglom regulation and podocyte survival in the context of FPE. Consistent with this, we found that inhibition of AMPK signaling in Shroom3-KD mice with podocyte FPE increased Vglom and induced podocytopenia. Furthermore, activation of AMPK in mice after nephron loss-induced hypertrophic injury restricted Vglom and protected from podocytopenia and progression to FSGS. Finally, evaluation of NS cases within the Nephrotic Syndrome Study Network Consortium (NEPTUNE) showed significant enrichment of signatures of FYN kinase downregulation (and Ampk activation) within MCD glomerular transcriptomes, confirming the translational relevance of our findings.
Results
Glomerular morphometry shows significantly lower Vglom in MCD versus FSGS cases. The Neptune consortium is the largest multicenter prospective cohort of NS collated in the United States with uniform sample/data collection (16). A subset cohort of biopsy-proven NS cases currently has available Aperio-scanned images enabling morphometric evaluation by the NEPTUNE pathology core (5). We investigated Vglom in this subset with annotated diagnoses of FSGS, MCD, or membranous nephropathy (n = 80). We excluded the diagnosis of "other NS" from our analyses given the heterogeneity of this entity. The clinical and demographic characteristics of this cohort at enrollment and the outcomes of these patients by diagnosis are shown in Table 1. We applied the Weibel-Gomez formula (adapted from ref. 5 and Methods; see Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.150004DS1) on area cross-sections of glomerular tufts identified in the 2 paraffin sections (3-60 glomeruli/patient) to perform Vglom estimation. Mean Vglom (μm³) values calculated from these 2 random sections within the same biopsy were highly correlated, validating the morphometric assessment (R² = 0.987; P < 0.001; Supplemental Figure 1B). We then utilized data from all glomerular profiles from 1 periodic acid-Schiff (PAS) section for mean Vglom estimation and analyses. We found that MCD cases (n = 27) had significantly lower mean Vglom than FSGS (n = 38) or membranous nephropathy (n = 15) (Figure 1A). To minimize confounding of mean Vglom by older age in FSGS cases, we restricted our analyses to pediatric cases alone (≤18 years). Consistently, pediatric MCD (n = 18) also had lower mean Vglom than pediatric FSGS (n = 13) (Figure 1B). There were no pediatric membranous nephropathy cases. Furthermore, in Cox proportional hazard models agnostic to diagnoses, increasing Vglom was associated with increased composite risk of end-stage renal disease and/or 40% or greater decline in estimated glomerular filtration rate (eGFR) during follow-up, independent of age (HR = 1.18 per 10⁶ μm³; Table 2). The covariates included in these models were those identified as significantly different between the 3 NS diagnoses in univariable analyses (Table 1). Hence, these data identified lower Vglom in MCD cases versus FSGS, representing maintained Vglom regulation in MCD (and dysregulation in FSGS), and the data showed an association of higher Vglom with adverse renal outcomes in NS.

Global or podocyte-specific Shroom3 KD reduced glomerular and podocyte volume. Our human data provided rationale to comprehensively evaluate glomerular morphometry in our inducible Shroom3-KD mice, where we had observed diffuse FPE without podocytopenia, along with reduced podocyte volume. In this mouse model (Shroom3-KD mice), universal shRNA-mediated Shroom3 KD and turbo GFP (tGFP) production were induced in all tissues by doxycycline feeding (DOX; see Supplemental Methods) (9,10,17). Nontransgenic or monotransgenic littermates were used as controls. As we previously reported, global Shroom3-KD mice develop diffuse podocyte FPE by 6 weeks of DOX (10). We first evaluated Vglom using the Cavalieri principle (Supplemental Figure 2, A and B) after inducing Shroom3 KD. Here, we identified significant Vglom reductions in Shroom3-KD versus controls (Figure 2A and Supplemental Table 1A; n = 8; mean difference in Vglom ~23%).
As described previously (10,18), we also estimated the volume of glomerular components, i.e., podocytes (PodoVglom), capillary lumens plus endothelium (Cap+EndoVglom), and mesangial components (see Methods and Supplemental Figure 2, C and D). Glomerular component volume analysis revealed reductions in PodoVglom (P < 0.05) and Cap+EndoVglom (P = 0.06) in Shroom3-KD glomeruli versus controls (Supplemental Table 1A). No podocytopenia was identifiable at 6 weeks DOX in Shroom3-KD mice (Figure 2B); indeed, podocyte numerical density (podocytes per unit Vglom expressed as n/μm³) was higher in Shroom3-KD glomeruli (Supplemental Figure 2E). As previously described, Shroom3 KD induced increased albuminuria (Figure 2C) without azotemia (Supplemental Figure 2F). We also observed significantly lower single kidney weights in Shroom3-KD mice, while body weights remained similar to controls (Figure 2D and Supplemental Figure 2G, respectively). The mean difference in kidney weights was 24%, similar to Vglom changes, and suggested the involvement of nonglomerular kidney cells in the renal phenotype observed with global Shroom3 KD. We evaluated nephron endowment in control and Shroom3-KD mice using nephron density (Vglom density and number density by morphometry), which was similar in the 2 groups and did not explain changes in Vglom (controls vs. Shroom3-KD = 189 ± 50.7 vs. 174 ± 44.8 per mm³ of cortex; P = 0.5; Supplemental Figure 2, H and I).
To test the hypothesis that podocyte Shroom3 regulated Vglom, we crossed Nphs1-rtTA (19) mice with our inducible Shroom3-shRNA mice (9, 10) for podocyte-specific Shroom3 KD (Podocyte-Shroom3-KD mice). Glomerular protein extracts showed Shroom3 KD and tGFP production in Podocyte-Shroom3-KD mice (Supplemental Figure 2J), and quantitative (qPCR) analysis confirmed Shroom3 KD in Podocyte-Shroom3-KD glomeruli (Supplemental Figure 2K). Podocyte-Shroom3-KD mice with 6 weeks of DOX similarly demonstrated significantly reduced mean Vglom (mean difference ~15%) as well as reduced PodoVglom and Cap+EndoVglom (Figure 2E). No podocytopenia was identifiable by the fractionator-disector method (Figure 2F). Podocyte-Shroom3-KD mice also showed increased albuminuria without azotemia (Figure 2G and Supplemental Figure 2L). Neither body weights (Supplemental Figure 2M) nor single kidney weights (Figure 2H) were significantly different between Podocyte-Shroom3-KD and control mice, suggesting minimal effect on nonglomerular cells due to podocyte-specific Shroom3 KD. Electron microscopy examination revealed podocyte FPE (Figure 2I), similar to global Shroom3-KD animals (10). Quantification of foot process width (FPW) consistently revealed higher FPW among representative Podocyte-Shroom3-KD mice versus controls (Supplemental Figure 2N). These data suggested that, in addition to inducing albuminuria with FPE, global or podocyte-specific Shroom3 KD reduced Vglom in adult mice without podocytopenia.

Shroom3 KD restricted glomerular hypertrophy after unilateral nephrectomy. Because morphometric data from podocyte-specific Shroom3-KD mice phenocopied global Shroom3-KD, we used global Shroom3-KD mice, which were backcrossed into a susceptible BALB/c background, for further experiments. To further examine Vglom regulation by Shroom3, we performed unilateral nephrectomy in Shroom3-KD and control mice as described (8) and evaluated glomerular hypertrophy using nephrectomized and remnant kidneys (Figure 3A) at 1 week after nephrectomy (n = 4 vs. 5, respectively). First, after unilateral nephrectomy, the mean weight of the remnant kidney was reduced in global Shroom3-KD versus control animals (Figure 3B). By morphometry, the percentage change in mean Vglom after nephrectomy was restricted in Shroom3-KD mice.

To examine whether Shroom3/Fyn KD in podocytes reduced cell size by reducing cellular protein content, we performed protein/DNA ratio estimation in Shroom3/Fyn-KD podocytes versus scramble controls. We used cycloheximide (20), a protein synthesis inhibitor, as a positive control. We identified markedly reduced protein/DNA ratios with Shroom3 or Fyn KD (Figure 4C). Next, we examined RNA biogenesis by quantifying ribosomal RNA copies, including 18S, 5S, and RPS26, in KD cells (normalized to actin). We observed significantly reduced 18S rRNA copies in vitro in Shroom3- or FYN-KD podocytes (Figure 4D). RPS26 transcripts were also reduced in SHROOM3-shRNA podocytes (Figure 4E). In vivo, in both glomerular and tubular fractions, 18S rRNA copies were significantly reduced in Shroom3-KD kidneys versus controls (Figure 4F), also suggesting inhibited protein synthesis in nonglomerular kidney cells in global Shroom3-KD animals. Interestingly, 5S rRNA transcripts were unchanged in in vitro Shroom3/Fyn-KD cells and in vivo Shroom3-KD animals versus controls (Supplemental Figure 4, A and B).
Shroom3 or Fyn KD increases cellular AMPK activation. Since cellular size and protein biosynthesis were reduced with Shroom3-Fyn KD and previous data from Fyn-knockout mice showed increased activation of AMPK (21), a negative regulator of cellular protein biosynthesis, we examined AMPK signaling after Shroom3 or Fyn KD in podocytes. We identified significantly increased AMPK phosphorylation at threonine-172 (or pAMPK) in both SHROOM3-and FYN-shRNA-transduced podocytes versus respective scramble controls ( Figure 5, A and B; n = 4 sets). Cellular AMPK activation is stereotypically induced by increased AMP/ATP ratio and is a negative regulator of protein synthesis (22,23). Consistent with this, phosphorylated EF2/total EF2 ratio downstream of AMPK was enhanced in KD podocytes versus controls, suggesting inhibited protein translation ( Figure 5, A and B). Phosphorylation of MTOR was, however, not significantly different in scramble versus KD lines. Increased levels of phosphorylated ULK1 and LC3II (downstream of pAMPK) were also identified in KD podocytes ( Figure 5, A and B). We identified increased LC3-positive vacuoles in SHROOM3-shRNA podocytes versus controls; bafilomycin (24) treatment further accentuated LC3-positive vacuoles in SHROOM3-shRNA cells, confirming significantly increased autophagic flux ( Figure 5, C and D). In agreement, increased pAMPK staining was seen in glomeruli of Shroom3-KD versus control mice (n = 4 vs. 5 mice; Figure 5, E and F; and Supplemental Figure 5A). Glomerular lysates from Shroom3-KD/control animals confirmed increased pAmpk and phospho-Ef2 in Shroom3-KD mice ( Figure 5, G and H). We also examined whole kidney lysates and tubular extracts of Shroom3-KD/control animals (n = 4 each) and confirmed increased Lc3-II in Shroom3-KD mice (Supplemental Figure 5, B-D), suggesting extension of Ampk-activation to nonglomerular cells with global Shroom3 KD. Together, these data demonstrated increased cellular AMPK activation after Shroom3 or FYN KD with reduced protein synthesis and increased autophagy, leading to reduced cellular protein content.
We previously demonstrated that Shroom3 KD led to Fyn inactivation due to loss of Shroom3-Fyn interaction between the respective SH3-binding and SH3 domains (10). Additionally, cells with FYN deletion or inactivation showed increased pAMPK via increased LKB1 cytoplasmic distribution (21). LKB1 phosphorylates Thr-172 of the kinase subunit of AMPK and is ubiquitous. We therefore examined LKB1 localization in SHROOM3-shRNA podocytes. Nuclear, cytoplasmic, and membrane protein extracts after subcellular fractionation showed a consistently reduced nuclear pool of LKB1 and increased LKB1 cytoplasmic/nuclear ratio, suggesting LKB1 redistribution to the cytoplasm in SHROOM3-shRNA podocytes versus scramble (Supplemental Figure 5, E and F). Consistent with AMPK activation, phosphorylated-EF2 was also increased in SHROOM3-shRNA podocytes in subcellular fractions (Supplemental Figure 5E). In summary, after Shroom3 KD in podocytes, Fyn inactivation led to LKB1 redistribution to the cytoplasm and consequent AMPK activation (Supplemental Figure 5G).
AMPK activation reduces Vglom and mitigates podocytopenia in aged Shroom3-KD mice with podocyte FPE. We have previously reported that aged mice (>1 year) with a similar duration of Shroom3 KD developed podocyte loss and early FSGS (10), distinct from young Shroom3-KD mice. To understand whether this loss of podocyte protection during aging was associated with reduced Ampk activation in response to Shroom3 KD (since age-related decline in AMPK activation is also reported elsewhere; refs. 25-27), we studied aged control and Shroom3-KD mice. Using aged versus young controls, we demonstrated reduced pAmpk in kidney lysates and in glomeruli of aged controls ( Figure 6A and Supplemental Figure 6, A and B), representing age-related decline of Ampk activation in renal tissues. Further, previously seen enhanced pAmpk in young Shroom3-KD mice was not observed in aged Shroom3-KD mice ( Figure 6A vs. Figure 5, E-H). Hence, Shroom3 KD alone was insufficient to enhance AMPK activation in aged kidneys. At 6 weeks of DOX, aged Shroom3-KD mice developed azotemia and podocytopenia ( Figure 6, B and C, and Supplemental Figure 6E, respectively; n = 5 each), in contrast to young Shroom3-KD mice. Mean Vglom was significantly higher in aged KD mice, suggesting an inability to regulate Vglom when Shroom3 KD was not associated with AMPK activation ( Figure 6D). PAS staining also showed mesangial expansion in aged KD mice ( Figure 6E).
In subsequent experiments, we used an AMPK activator (28), metformin in drinking water (MF; see Methods), to further study the contribution of Ampk to the podocytopenia observed in aged Shroom3-KD mice (n = 5 each group). First, MF treatment restored enhanced Ampk phosphorylation with Shroom3 KD in aged Shroom3-KD lysates (significantly greater than in aged controls) (Supplemental Figure 6, C and D). Albuminuria in MF-treated aged Shroom3-KD mice was lower than untreated aged Shroom3-KD mice (Figure 6F). Compared with aged controls, aged Shroom3-KD mice treated with MF did not show podocytopenia at 6 weeks of DOX ( Figure 6G). MF-fed aged controls and Shroom3-KD mice showed similar levels of blood urea nitrogen (BUN) and creatinine (Supplemental Figure 6, F and G). MF treatment was also associated with a reduction in mean Vglom in aged Shroom3-KD mice versus aged controls at 6 weeks ( Figure 6H), and thus was similar to young Shroom3-KD mice.
These data suggested that in aged Shroom3-KD mice, loss of Vglom regulation and podocytopenia occurred when podocyte FPE occurred in the absence of enhanced AMPK activation. MF use in aged Shroom3-KD mice enhanced AMPK activation and reduced Vglom (versus aged controls), improved proteinuria (versus aged KD mice), and was protective against podocytopenia.

AMPK inhibition reverses Vglom reduction and promotes podocytopenia in young Shroom3-KD mice. Next, we studied whether pharmacological AMPK inhibition altered Vglom regulation and reduced podocyte survival in young Shroom3-KD mice with FPE without podocytopenia. We employed Compound C (29), a selective small-molecule competitive AMPK inhibitor acting via its ATP binding site, reported to inhibit AMPK activation even in the presence of AMPK activators (30,31). We administered Compound C at week 5 of DOX feeding (20 mg/kg/dose × 4 doses i.p.) to 8-week-old Shroom3-KD mice and controls (n = 4 vs. 3). We aimed to inhibit AMPK activation after inducing Shroom3 KD and podocyte FPE. As shown, Shroom3-KD mice had significantly lower body weight after Compound C administration at 8 weeks versus controls (Figure 7A). Kidney lysate immunoblotting (Figure 7B) and glomerular immunofluorescence (Supplemental Figure 7, A-C) confirmed complete inhibition of Ampk activation by Compound C in both groups of mice. Azotemia was induced by Compound C only in KD mice and not in controls (Figure 7C and Supplemental Figure 7D). Glomeruli of Shroom3-KD mice, which had previously shown podocyte FPE without podocytopenia at 6 and 8 weeks of DOX (Figure 2), developed podocytopenia and lost the reduction in Vglom after Ampk inhibition.
AMPK activation reduces Vglom and preserves podocyte numbers in nephron loss-induced glomerular hypertrophy.
Finally, we asked whether pharmacological Ampk activation would promote favorable Vglom regulation and podocyte survival in WT mice. To test this hypothesis, we administered PF06409577 (PF), a highly specific AMPK agonist (32), to BALB/c mice subjected to 5/6th nephrectomy, a model for FSGS resulting from maladaptive hypertrophy of remnant glomeruli and podocytes. BALB/c mice, without or with PF (BALB/c+PF mice), were subjected to 2/3rd nephrectomy, followed by contralateral nephrectomy 7 days later, and euthanized after a further 6 weeks (n = 6 vs. 5, respectively; see Methods). PF06409577 gavage was initiated a day before the first surgery.
Baseline BUN was similar in both groups (not shown), and mice showed similar weight loss trends with surgery (Supplemental Figure 8A). Kidney lysates from the BALB/c+PF group obtained from sequential nephrectomy samples confirmed Ampk activation ( Figure 8A). BALB/c+PF mice showed significantly attenuated albuminuria ( Figure 8B) and improved azotemia ( Figure 8C and Supplemental Figure 8B) by euthanization.
Transcriptomes of MCD versus FSGS reveal signatures of Fyn kinase inactivation and Ampk activation in MCD glomeruli. Based on our murine data suggesting a key role for Ampk in the transition from an MCD-like pathology to one with podocytopenia and FSGS, we examined human MCD and FSGS cases in a published cohort within NEPTUNE. To test the hypothesis that Ampk may be specifically activated in human MCD cases, we examined expression microarray data from glomerular transcriptomes of MCD (n = 9) versus FSGS (n = 17) in this cohort (Affymetrix 2.1 ST located at NCBI's Gene Expression Omnibus, GSE68127; ref. 33). First, we identified significantly upregulated and downregulated differentially expressed genes (DEGs) in MCD-to-FSGS comparisons from this data set (by LIMMA test, P < 0.05, i.e., significant DEGs; 916 upregulated and 1044 downregulated DEGs; Supplemental Table 3). These DEGs were input into ENRICHR (34) and analyzed using multiple enrichment platforms. As shown in Supplemental Figure 9, A-D, upregulated DEGs revealed significant enrichment of signals of Fyn kinase inactivation and Ampk activation. Notably, among the top enriched kinases, CSNK1E (casein kinase epsilon isoform) is a canonical Ampk target that links metabolism with circadian rhythm (35,36), while NUAK2 is an Ampk-like kinase regulated by LKB1 (37) with an identical kinase domain to Ampk and an overlapping kinome (38). Consistently, downregulated DEGs analyzed using kinase enrichment and protein-protein interaction platforms identified LCK as the top enriched kinase (Supplemental Figure 9, E and F). LCK is an Src kinase whose binding activates Fyn and whose kinome overlaps with that of Fyn (39). However, LCK is not expressed in podocytes (40), whereas Fyn is abundant. Key metabolic signals downstream of AMPK activation (41), PPAR-alpha signaling, and β-oxidation of fatty acids were also significantly enriched in MCD glomerular transcriptomes versus FSGS (Supplemental Figure 9, C and D).
Premade concept nodes directly curated in Nephroseq also demonstrated significantly enriched β-oxidation of fatty acids (top 5% DEGs) and TCA cycle (top 1% DEGs) in MCD versus FSGS glomeruli from this data set using ENRICHR (Supplemental Figure 9, G and H). Hence, these data suggest downregulation of Fyn signaling and upregulation of AMPK signaling within human MCD glomerular transcriptomes compared with FSGS.
In summary, we revealed AMPK signaling as a key regulator of Vglom and podocyte survival in injured podocytes with FPE or when facing glomerular hypertrophic stress ( Figure 9) and translationally demonstrated enrichment of this pathway in human MCD versus FSGS.
Discussion
Podocyte loss correlates with renal survival in experimental models (8,42). The association of Vglom and podocyte morphometric parameters with outcomes is reported from human cohorts (3,5,43,44). Hence, identifying novel signals for Vglom regulation in injured glomeruli and podocyte survival mechanisms in the presence of FPE are both crucial. Using a multicenter NS cohort, we showed that MCD diagnosis, where diffuse podocyte FPE does not lead to podocytopenia, was associated with lower Vglom versus FSGS and membranous nephropathy. Furthermore, reduced Vglom by itself was associated with improved renal outcomes. Based on these observations and analogous findings in our young Shroom3-KD mice with diffuse FPE without podocytopenia, we investigated downstream signaling involved in Vglom regulation and podocyte survival. We identified enhanced AMPK phosphorylation with Shroom3 or Fyn KD via LKB1 release (21). Ampk activation was associated with increased autophagy and reduced cellular protein content downstream.
In this regard, we made the important observation here that podocyte-specific Shroom3 KD regulates not only PodoVglom but also Cap+EndoVglom and total Vglom. These data suggest that primary alterations in podocytes without evidence of podocyte loss can regulate adjacent capillary volume. Recent elegant data using mice with hyperactive Mtor in podocytes similarly showed increases in podocyte size, PodoVglom, and total Vglom before the onset of podocyte loss (4). Although we and others (4) have now demonstrated the regulation of underlying glomerular capillary size by primary changes within podocytes, the mechanisms underlying this finding need examination using coculture experiments and simultaneous interrogation of podocyte-endothelial transcriptomes to unravel crosstalk pathways that are initiated in the podocyte and involved in Vglom regulation. Ampk activation regulates several downstream signaling pathways, including anabolism, autophagy, energy conservation, and mitochondrial homeostasis (reviewed by Carling;ref. 45). Among these, identifying the central pro-survival mechanism(s) after podocyte AMPK is activated needs further examination. AMPK regulates cell growth by modulating mTORC1 signaling but also by directly inhibiting ribosomal biogenesis and protein translation (by phosphorylating EF2) and activating autophagy (22,23). We showed here that AMPK activation in podocytes regulated podocyte volume, albeit without significant changes in MTOR phosphorylation (4). This is consistent with recent work showing the in vivo role of Ampk in the regulation of podocyte autophagy (46,47). This is also relevant given that the AMPK/autophagy axis is reported to be protective in FSGS models (46,48). Reduced 18S RNA seen here with increased pAMPK is likely from inhibition of RNA-polymerase-I by PRKAG2 (γ-2 subunit of AMPK) also expressed by podocytes (23).
We and others (49)(50)(51)(52) have demonstrated that Ampk activation can mitigate FSGS and improve podocyte survival in animal models. Our experiments further revealed that in Shroom3-KD mice with podocyte FPE (10), coincidental AMPK activation regulated Vglom and prevented podocytopenia, promoting an MCD-like pathology. The critical role of Ampk activation in preventing glomerular enlargement, podocytopenia, and azotemia was demonstrated in aged mice (in which Ampk activation is reduced) and after pharmacological Ampk inhibition by Compound C. Most importantly, activating Ampk in podocytes with Shroom3 KD-induced FPE, or after nephron loss in WT mice, promoted adaptive Vglom regulation and prevented podocytopenia. While the beneficial role of AMPK activators in proteinuric disease in diabetes or obesity is established (50,(53)(54)(55)(56), our data describe a role for AMPK signaling in preventing podocytopenia and restricting glomerulomegaly in the context of podocyte FPE, in turn controlling the MCD-FSGS transition. Further experimental data are essential to study the modulation of the podocyte Ampk signaling axis by established circulating mediators of CKD and podocyte injury, such as SUPAR (57,58), TGF-β (9,59), and TNF-α (60,61), which could then link podocytopenia, glomerulomegaly, and progressive disease with Ampk signaling in each of these instances. More translational work is also needed to examine enriched Ampk signaling genes as biomarkers of an MCD-like clinical course in NS cases and to understand the specific role of Ampk activation in mitigating the transition to FSGS by sustaining podocyte survival during injury with FPE. These data are therefore an important platform to examine AMPK-based therapeutics in human NS using approved AMPK activators (62) or LKB1-based agents (63).
We have not addressed Fyn knockout in vivo (64). We acknowledge this as a limitation of our work. However, global Fyn-KO mice had FPE without podocyte loss until at least 1 year of age (similar to Shroom3-KD mice) and reduction in cell size, suggesting in vivo Ampk activation (13,21). We cannot completely rule out ancillary mechanisms of Ampk activation by cytoskeletal or metabolic stressors in Shroom3-KD cells.
Since we used global Shroom3-KD mice, our current data also cannot exclude beneficial effects of Ampk activation (by MF or PF) and deleterious effects of Ampk inhibition in nonpodocyte cells as contributing to observed renal phenotypes.
In summary, utilizing a Shroom3-KD murine model, we revealed the role of Ampk signaling in the regulation of podocyte and Vglom. We applied an aging model and pharmacological agents to demonstrate the key role of AMPK in regulating podocyte survival in injured glomeruli with podocyte FPE. These findings are of importance to podocyte biology and pathology in NS and have considerable application to therapeutics in glomerular diseases given the availability of AMPK activators.
Methods
The following is a summary of the methods used. See Supplemental Methods for more detailed methods.
Cell culture. A human podocyte cell line (gift from Moin Saleem, University of Bristol, Bristol, United Kingdom) was differentiated using RPMI-1640. The protein/DNA ratio assay was done using 1000 podocytes/well in 96-well plates. In vitro autophagy studies were performed using Bafilomycin A1 at day 7 of differentiation (100 nM for 24 hours) (24).
Reverse transcription qPCR. Transcript expression was assayed by real-time PCR using specific primers (see Supplemental Table 4). Amplification curves were analyzed via the 2^−ΔΔCt method.
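As a worked illustration of the 2^−ΔΔCt calculation (with invented Ct values and a hypothetical reference gene, not data from this study):

```python
# Relative expression by the 2^(-ΔΔCt) method with hypothetical Ct values.
# ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control);
# fold change = 2 ** (-ΔΔCt).

ct = {
    "control":   {"target": 24.0, "actin": 18.0},
    "knockdown": {"target": 26.5, "actin": 18.2},
}

d_ct_control = ct["control"]["target"] - ct["control"]["actin"]      # 6.0
d_ct_kd = ct["knockdown"]["target"] - ct["knockdown"]["actin"]       # 8.3
dd_ct = d_ct_kd - d_ct_control                                       # 2.3
fold_change = 2 ** (-dd_ct)                                          # ~0.20, i.e., ~5-fold reduction

print(f"ΔΔCt = {dd_ct:.2f}, fold change = {fold_change:.2f}")
```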
Immunofluorescence. For immunofluorescence, 5 μm paraffin sections of formalin-fixed kidney-tissues were deparaffinized and processed for unmasking and antigen retrieval followed by incubation with primary antibodies.
Quantitative image analysis. For pAMPK, glomeruli were outlined using Zen Pro 2.6 (blue edition) software (40× images), and the areas of pAMPK staining were measured for signal intensity and expressed as average signal intensity per total area per glomerulus.
Vglom - Weibel-Gomez method. The Weibel-Gomez method was used to measure Vglom in human NS biopsies. This method uses 1 PAS-stained and 1 trichrome-stained paraffin section from Aperio-scanned images of NS biopsies from the NEPTUNE study (5). The areas of all complete glomerular profiles present in the section were measured by planimetry (Supplemental Figure 1A). Vglom = A^(3/2) × 1.38 (in μm³), where A is the average glomerular tuft area and 1.38 is the shape correction factor assuming glomeruli are spheres (67). The mean Vgloms obtained from 2 sections within each patient were highly correlated (Supplemental Figure 1B). The PAS-stained sections were used for stereological analyses.
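A minimal sketch of this estimate, using hypothetical tuft areas and the shape correction factor quoted above:

```python
# Weibel-Gomez estimate of mean glomerular volume from area profiles.
# Vglom = (mean tuft area) ** 1.5 * 1.38, with areas in μm² and Vglom in μm³.
# The tuft areas below are invented for illustration only.

tuft_areas_um2 = [9500.0, 12200.0, 10800.0, 8700.0, 11500.0]  # planimetric areas of complete profiles

mean_area = sum(tuft_areas_um2) / len(tuft_areas_um2)
vglom_um3 = mean_area ** 1.5 * 1.38   # shape correction factor assuming spherical glomeruli

print(f"mean tuft area: {mean_area:.0f} μm²")
print(f"estimated Vglom: {vglom_um3:.3e} μm³")
```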
Vglom - Cavalieri method. One-millimeter cubes were cut from the cortex, fixed in glutaraldehyde, and embedded in Epon. Vglom was measured using 1 μm thick sections and the Cavalieri principle (Supplemental Figure 2A). As described previously (10,18), high-magnification images (~1700×) were used for estimation of Vglom components (Supplemental Methods; Supplemental Figure 2, B-D), i.e., PodoVglom, Cap+EndoVglom, and mesangium, which were calculated by measuring the volume fraction of each component and then multiplying each fraction by the Vglom. Podocytes were counted using the fractionator/disector method (Supplemental Methods; ref. 68). The morphometrist was blinded to disease group for the human biopsies and to experimental group in the murine data.
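The Cavalieri estimate and the component-volume partition reduce to simple arithmetic; the sketch below uses an invented section interval, profile areas, and volume fractions purely for illustration.

```python
# Cavalieri estimate of a single glomerulus' volume from serial sections:
# V = section_interval * sum(profile areas). Component volumes are then the
# volume fractions (e.g., from point counting) multiplied by Vglom.
# All numbers are hypothetical.

section_interval_um = 10.0                                   # spacing between sampled 1-μm sections
profile_areas_um2 = [0, 4200, 9800, 12500, 11000, 6300, 0]   # tuft area on each sampled section

vglom_um3 = section_interval_um * sum(profile_areas_um2)

volume_fractions = {"podocyte": 0.30, "capillary+endothelium": 0.55, "mesangium": 0.15}
component_volumes = {name: frac * vglom_um3 for name, frac in volume_fractions.items()}

print(f"Vglom (Cavalieri): {vglom_um3:.3e} μm³")
for name, vol in component_volumes.items():
    print(f"  {name}: {vol:.3e} μm³")
```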
FPW quantification. For quantitative ultrastructural analysis of the glomerulus by transmission electron microscopy, the number of podocyte foot processes present in each micrograph was divided by the total length of GBM to calculate the mean density of podocyte foot processes. Capillary loops in at least 3 separate glomeruli of each animal, adding up to a mean of 1088 ± 339.6 foot processes, were counted over an average of 467.9 ± 113.7 mm of basement membrane. The thickness in each image was measured using ImageJ (NIH).
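The foot process readout described above is a count divided by a length; a minimal example with hypothetical numbers is shown below (note that this simple reciprocal ignores any orientation or section-thickness correction).

```python
# Mean foot process density (and its reciprocal, an apparent mean foot
# process width) from TEM micrographs. Values are hypothetical.

foot_processes_counted = 1050          # across all micrographs for one animal
gbm_length_sampled_um = 470.0          # total GBM length measured in the same micrographs

density_per_um = foot_processes_counted / gbm_length_sampled_um   # processes per µm of GBM
apparent_fpw_um = gbm_length_sampled_um / foot_processes_counted  # µm of GBM per foot process

print(f"foot process density: {density_per_um:.2f} per µm of GBM")
print(f"apparent mean FPW:    {apparent_fpw_um * 1000:.0f} nm")
```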
Data collection. Publicly available human microarray data sets for all kidney diseases, including FSGS, membranous nephropathy, MCD, and others, were downloaded from NCBI's Gene Expression Omnibus (GEO GSE68127). We collected high-throughput transcriptome data for 99 disease and control samples. Within this data set we manually selected the samples with clinical information at GEO. For each study, we grouped the samples with the clinical and phenotypic information reported by the original study. Then, for the raw microarray data, we performed quality assessment, and all the microarray platform data were reannotated to the most recent NCBI Entrez Gene Identifiers by AILUN (http://ailun.ucsf.edu). All the expression values were base-2 log-transformed and normalized by quantile-quantile normalization.
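A hand-rolled sketch of the log2 transformation and quantile normalization step is shown below with a small invented matrix; the study's actual pipeline and the AILUN reannotation are not reproduced here.

```python
import numpy as np

# Quantile normalization of a genes x samples expression matrix after log2
# transform: each sample's values are replaced by the mean of the sorted
# values across samples at the same rank. Matrix values are hypothetical.

raw = np.array([
    [120.0,  95.0, 300.0],
    [ 40.0,  60.0,  35.0],
    [980.0, 720.0, 850.0],
    [ 15.0,  22.0,  18.0],
])  # rows = genes, columns = samples

log2_expr = np.log2(raw + 1.0)                             # base-2 log transform with a pseudocount

order = np.argsort(log2_expr, axis=0)                      # row order of each sample, ascending
ranked_means = np.sort(log2_expr, axis=0).mean(axis=1)     # mean expression at each rank
normalized = np.empty_like(log2_expr)
for j in range(log2_expr.shape[1]):
    normalized[order[:, j], j] = ranked_means              # assign rank means back, per sample

print(normalized)
```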
DEG identification. Principal component analysis was first performed to assess the sample correlations using the expression data of all the genes. The LIMMA test was applied for analysis of data. A specific gene was considered differentially expressed if the P value given by these methods was less than or equal to 0.05.
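As a conceptual stand-in for this step (the study used the LIMMA moderated test in R), a simple per-gene Welch t-test with the same P ≤ 0.05 cutoff might look as follows; the expression values are random, and only the group sizes (9 MCD vs. 17 FSGS) are taken from the text.

```python
import numpy as np
from scipy import stats

# Per-gene two-group comparison with a P <= 0.05 cutoff. This Welch t-test is
# only an illustration of the concept, not the LIMMA moderated statistics
# actually used; the expression matrix is random noise.

rng = np.random.default_rng(0)
expr = rng.normal(8.0, 1.0, size=(200, 26))     # 200 genes x 26 samples (log2 scale)
groups = np.array([0] * 9 + [1] * 17)           # 9 MCD vs. 17 FSGS samples

deg_flags = []
for g in range(expr.shape[0]):
    a = expr[g, groups == 0]
    b = expr[g, groups == 1]
    p = stats.ttest_ind(a, b, equal_var=False).pvalue
    deg_flags.append(p <= 0.05)

print(f"genes passing P <= 0.05: {sum(deg_flags)} of {expr.shape[0]}")
```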
Pathway network, generation, and analyses. The significant DEGs from tubular and glomerular microarray data were identified by comparing MCD and FSGS cases in this data set. We used two methods to perform gene enrichment analysis. INGENUITY IPA (www.ingenuity.com/products/ipa) and Enrichr (https://amp.pharm.mssm.edu/Enrichr/) were used for GO and pathway analyses of DEGs with a fold-change cutoff of 1.5 or higher.
Statistics. Deidentified clinical information was obtained for the NS morphometry cohort and linked to morphometry using unique IDs. For human data, univariate comparisons of clinical/demographic factors between NS categories were done using 1-way ANOVA (Kruskal-Wallis for corresponding nonparametric analysis with Dunn's post test) for continuous variables and χ 2 for proportions. Spearman's correlation coefficient was used to compare Vgloms within patients. Cox proportional hazard models were used for multivariable survival associations, including clinical/demographic factors identified as significantly different in univariable analyses. NEPTUNE-determined outcomes of end-stage renal failure, eGFR decline 40% or greater from baseline, or a composite of these events were evaluated as outcomes. Time from biopsy to event was utilized. For in vitro and in vivo MF experiments, an unpaired 2-tailed t test (Mann-Whitney test for corresponding nonparametric analysis) was used for data between 2 groups. The cutoff for statistical significance was a 2-tailed P value less than 0.05.
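For illustration only, a Cox proportional hazards model of the kind described (the composite outcome against Vglom with age as a covariate) could be fit with the lifelines package as sketched below; all patient values are invented and the column names are placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical illustration of the Cox proportional hazards analysis:
# time to the composite outcome (ESRD or >=40% eGFR decline) modeled against
# mean Vglom (in 10^6 µm³) with age as a covariate. Data are invented.

df = pd.DataFrame({
    "followup_months": [12, 30, 24,  8, 36, 18, 40, 15, 28, 22],
    "event":           [ 1,  0,  1,  1,  0,  1,  0,  0,  0,  1],   # 1 = composite outcome reached
    "vglom_1e6_um3":   [3.2, 1.1, 2.8, 4.0, 2.6, 1.9, 1.2, 3.0, 1.6, 2.2],
    "age_years":       [45, 12, 38, 60, 16, 25, 10, 52, 34, 41],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="event")
cph.print_summary()   # hazard ratios per unit of each covariate
```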
Author contributions
KB, QL, JCH, and MCM contributed to the research/study design. KB, QL, CW, MP, AC, FG, and AG performed experimentation. FG, MP, QL, KB, and MCM did mouse husbandry. JMB and KVL performed morphometry. FS and LL conducted histology. KB, QL, WZ, NC, and MCM analyzed data. AS, LK, BD, BM, and JCH provided reagents. KB, JCH, and MCM wrote the manuscript. All authors contributed to the editing of the manuscript.
Neuronal Cell Differentiation of iPSCs for the Clinical Treatment of Neurological Diseases
Current chemical treatments for cerebrovascular disease and neurological disorders have limited efficacy in tissue repair and functional restoration. Induced pluripotent stem cells (iPSCs) present a promising avenue in regenerative medicine for addressing neurological conditions. iPSCs, which are generated by reprogramming adult cells to regain pluripotency, offer the potential for patient-specific, personalized therapies. The modulation of molecular mechanisms through specific growth factor inhibition and signaling pathways can direct iPSCs' differentiation into neural stem cells (NSCs). These approaches include modulating bone morphogenetic protein-4 (BMP-4), transforming growth factor-beta (TGFβ), and Sma- and Mad-related protein (SMAD) signaling. iPSC-derived NSCs can subsequently differentiate into various neuron types, each performing distinct functions. Cell transplantation work underscores the potential of iPSC-derived NSCs to treat neurodegenerative diseases such as Parkinson's disease and points to future research directions for optimizing differentiation protocols and enhancing clinical applications.
Introduction
Neurological disorders, especially cerebrovascular diseases and strokes, are a significant global issue [1].These conditions lead to irreversible neural damage, and currently, there are limited effective treatments available for repairing damaged tissue or restoring function [2][3][4].To overcome this, regenerative medicine has begun to focus on the differentiation of neural cells from induced pluripotent stem cells (iPSCs) [5].
Stem cells inherently possess two key functions: the capacity for unlimited selfrenewal and the ability to differentiate into one or more specialized cell types [6].These characteristics play a fundamental role in exploring tissue repair and disease treatment methods through stem cells [7].
iPSCs are cells that have regained pluripotency through the reprogramming of already differentiated mature cells and are created by manipulating the expression of specific genes [8,9].The technology of iPSCs, which restores pluripotency from mature cells, offers innovative potential for generating patient-specific disease models and developing personalized treatments [10].Neural cells generated from iPSCs can be used to replace or repair damaged neural tissue [11].Moreover, using neural cells differentiated from patient-derived iPSCs allows for effective testing of new drugs' efficacy or toxicity [12].Transplanting these iPSC-derived neural cells could lead to functional recovery in neurodegenerative diseases such as Alzheimer's or Parkinson's disease.
Against this background, this review examines the process of neuronal differentiation of iPSCs and the mechanisms that underlie it, an important step in the development of regenerative medicine and disease therapies.
Inhibiting the SMAD Pathway in iPSCs for Neural Differentiation
The process of differentiating iPSCs into various cells includes several complex signaling pathways and molecular mechanisms.iPSCs have important advantages over embryonic stem cells (ESCs).iPSCs are derived from adult cells; they bypass the ethical issues of destroying embryos to derive ESCs [13,14].iPSCs can be self-derived from the patient, allowing for the creation of patient-specific cell lines [12,15].They can differentiate into multiple cell types, allowing drug testing to assess effectiveness and identify side effects safely and efficiently [16].Furthermore, iPSCs retain the same pluripotency as that of ESCs [17].Both iPSCs and ESCs exhibited equivalent neuronal differentiation potential, and both cells showed similar cholinergic motor neuron differentiation potential and the ability to induce the contraction of myotubes [18].In another study, while iPSC-derived neural stem cells (NSCs) had decreased ATP production compared to that of ESC-derived NSCs, iPSC-derived astrocytes had increased ATP production compared to that of ESC-derived astrocytes [19].
Specifically, the differentiation of neuronal cells is induced by the dual inhibition of the Sma- and Mad-related protein (SMAD) pathway (Figure 1). Before understanding the SMAD pathway, it is necessary to understand the transforming growth factor-beta (TGFβ) signaling pathway, which includes SMAD.
SMAD Pathway Inhibition
Inhibition of the SMAD pathway directs the fate of iPSCs towards the neuroectoderm and induces neural cell differentiation through the inhibition of TGFβ and BMP-4 signaling, as mentioned above [20].For the dual inhibition of the SMAD pathway, SB431542 is used to inhibit the TGFβ pathway and Noggin is used to inhibit the BMP pathway.
SB431542 inhibits the Lefty/Activin/TGFβ pathway by blocking the phosphorylation of ALK4, ALK5, and ALK7 receptors. SB431542 also inhibits differentiation to the mesoderm by inhibiting Activin/Nodal signaling. Noggin inhibits differentiation to the ectoderm by inhibiting the BMP pathway. A combined treatment of SB431542 and Noggin induced the neural differentiation of stem cells with high efficiency [20]. The mechanisms by which Noggin and SB431542 induced neural cell differentiation include Activin- and Nanog-mediated network destabilization [21], BMP-induced inhibition of differentiation [22], and the inhibition of mesodermal and endodermal differentiation through the inhibition of endogenous Activin and BMP signaling [23,24]. Treatment with SB431542 decreases Nanog expression and significantly increases CDX2 expression. The inhibition of CDX2 in the presence of Noggin or SB431542 demonstrates that one of the key roles of Noggin is the inhibition of endogenous BMP signaling, which induces trophoblast fate during differentiation.
TGFβ Signaling Pathway
The TGFβ signaling pathway regulates cell growth, differentiation, migration, death, and homeostasis [25]. The TGFβ superfamily includes bone morphogenetic protein (BMP), Activin, Nodal, and TGFβ. Signal transduction in this pathway begins with the binding of TGFβ superfamily ligands to TGFβ receptor type II and TGFβ receptor type I [26]. Activated TGFβ receptors recruit Smad2/3 for TGFβ and Activin signaling [27], and R-Smads form complexes with the Co-Smad Smad4 for BMP signaling [28]. Smad complexes accumulate in the nucleus and are directly involved in the transcriptional regulation of target genes [29].
BMP Signaling Pathway
BMPs are cytokines that belong to a group of growth factors [30].BMPs have a role in early skeletal formation during embryonic development and were originally known to act as bone growth factors [31].BMPs bind to a heteromeric receptor complex composed of type I and type II serine/threonine kinase receptors, which are received by different activin receptors and BMP receptors [32].The two receptors are highly homologous and can activate both Smad and non-Smad signaling.
BMP-4 is a member of the BMP superfamily, which induces the ventral mesoderm to establish dorsal-ventral morphogenesis.BMP4 signaling is found in the formation of early mesoderm and germ cells, and the development of the lungs and liver is attributed to BMP4 signaling [33].Inhibition of this BMP-4 signaling induces neurogenesis and the formation of the neural plate.Indeed, the knockout of BMP-4 in mice resulted in little mesodermal differentiation [34].
RA Pathway
Retinoic acid (RA) is a molecule that contributes to the development and homeostasis of the nervous system [35]. RA signaling depends on cells having the ability to metabolize retinol. Transcription is regulated by the binding of RA to its receptor, the RA receptor (RAR), which forms a complex with the retinoid X receptor (RXR) [36]. RA is involved in the differentiation of NSCs into neurons, astrocytes, or oligodendrocytes [37]. RA activates Hox genes, which are required for hindbrain development and regulate the head-trunk transition [38]. RA is required for the formation of primary neurons [39]. In an embryonal carcinoma cell line in vitro, RA promoted neurite outgrowth and stimulated the expression of neural differentiation markers [40]. Furthermore, RA is essential in embryonic development and for the development of many organs, including the hindbrain, spinal cord, skeleton, heart, and brain [41].
BDNF, GDNF, and NGF Pathway Regulation
Brain-derived neurotrophic factor (BDNF) is a neurotrophic factor found primarily in the brain and central nervous system that regulates nerve cell survival, growth, and neurotransmission [42].BDNF promotes neuronal survival and growth in dorsal root ganglion cells and in hippocampal and cortical neurons [43,44].In in vitro experiments in which neural differentiation was induced in a variety of stem cells, neural differentiation was confirmed after treatment with BDNF [45,46].
Glial-cell-line-derived neurotrophic factor (GDNF) is a protein that promotes the survival of many different neurons [47].GDNF can be secreted by neurons and peripheral cells during development, including astrocytes, and interacts with GDNF family receptor alpha 1 and 2 [48].In particular, it has a protective effect on dopamine-producing nerve cells, making it an important target in neurodegenerative diseases such as Parkinson's disease [49].
Nerve growth factor (NGF) is a neuropeptide involved in regulating the growth, proliferation, and survival of neurons [50].In in vivo and in vitro studies, NGF has been shown to have an important role in the differentiation and survival of neurons, as well as in the protection of degenerating neurons.
Differentiation of Various Neural Cells from iPSCs
Through various mechanisms, neural cell differentiation from iPSCs can yield a diverse array of neurons (Figure 2, Table 1). It is possible to consider prior studies that successfully differentiated various neurons from iPSCs, as well as the application to iPSCs of protocols originally used for the neural differentiation of human ESCs (hESCs).
Differentiation into Cortical Neurons
iPSCs can differentiate into cortical neurons. The study by Kaveena Autar [51] induced an initial neural lineage in iPSCs using two small-molecule inhibitors of the SMAD pathway, LDN193189 and SB431542, promoting neuroepithelial differentiation. Following the early neural induction, the neural epithelium was induced using DKK-1, a Wnt antagonist, and DMH-1, a BMP inhibitor, enhancing the development of rostral neuroepithelial cells. Finally, the application of cyclopamine, an SHH inhibitor, specified the cortical fate, while BDNF, GDNF, cAMP, ascorbic acid, and laminin improved the generation of cortical neurons.
In the research by Yichen Shi, cortical development was induced in both hESCs and iPSCs using dorsomorphin, an inhibitor of the SMAD pathway [52].
Cortical differentiation can be confirmed by the reduced expression of the pluripotency gene Oct4 and the increased expression of the genes Tbr1, CTIP2, Satb2, Brn2, and Cux1.
Differentiation into Dopaminergic Neurons
Human iPSCs are capable of differentiating into midbrain dopaminergic neurons.In a study by Lixiang Ma, dopaminergic neurons were generated from iPSCs [53].After inducing iPSCs into neural epithelial cells, applying FGF8 and SHH efficiently produced dopaminergic neurons from midbrain precursors without the need for co-culture.Dopaminergic neurons can be identified by detecting markers such as TH, TUJ-1, LMX1A, FOXA2, and NURR1.
It is also possible to induce the dopaminergic neuronal differentiation of iPSCs without the use of pharmacological compounds for the inhibition of SMAD mechanisms [54].Adeno-associated viral vectors were designed to upregulate Lmx1a through SHH and Wnt and then transfected into iPSCs.The iPSCs not only successfully generated dopaminergic neurons but also showed a consistent number of them.
Differentiation into Motor Neurons
iPSCs can differentiate into motor neurons [55]. After inducing iPSCs into embryoid bodies, treatment with RA and purmorphamine, an activator of the sonic hedgehog pathway, resulted in the expression of neural precursor markers. Cells forming neural rosettes were mechanically separated, plated in media containing RA and Shh, and cultured for a week. Following further culture with BDNF, CNTF, GDNF, and Shh, after 3-5 weeks the cells displayed motor neuron characteristics, and βIII-tubulin, ChAT, and Islet1 were detected.
Differentiation into Astrocytes
iPSCs can differentiate into astrocytes [56]. iPSCs induced into NSCs were cultured in NSC media containing B27, BMP, CNTF, and bFGF. The differentiated astrocytes were co-cultured with a neuron layer. Throughout the culture, neurons were distinguished by their distinct cell bodies, and their axons were measured using fluorescence imaging. Neurons, astrocytes, and oligodendrocytes were identified by the expression of markers such as βIII-tubulin, GFAP, and GalC.
Differentiation into Oligodendrocytes
iPSCs can differentiate into oligodendrocytes [57].Neural differentiation was induced through dual SMAD inhibition.After differentiation, adding SAG and RA promoted sphere aggregation, and using PDGF media encouraged OPC formation.The development of oligodendrocytes was confirmed through the detection of OLIG2, MAP2, and SOX10.
Differentiation into Hippocampal Neurons
NSCs derived from iPSCs can differentiate into hippocampal neurons [58]. Neural induction media composed of B27, N2, and NEAA were supplemented with LDN-193189, cyclopamine, SB431542, and XAV-939 to induce differentiation, and CHIR-99021 and BDNF were added to promote hippocampal neuron development. The generation of hippocampal neurons was confirmed through the detection of PROX1.
Differentiation into Serotonergic Neurons
NSCs derived from iPSCs can differentiate into serotonergic neurons [59].Human pluripotent stem cells (hPSCs) were cultured in an N2 medium combined with a knockout serum replacement medium and treated with SB431542, LDN193189, purmorphamine, and RA.After 11 days, the medium was switched to NB/B27 medium, and BDNF was added.Following differentiation, the presence of serotonergic neurons was confirmed through immunofluorescence staining for 5-HT, MAP2, TUJ1, FEV, and TPH2 expression.Subsequent 3D culture also successfully yielded organoids, and the release of 5-HT and its metabolites was observed.
Therapeutic Research Using Neural Cells Derived from iPSCs
Researchers are hopeful that the transplantation of neural cells derived from iPSCs can overcome neurodegenerative diseases. To treat Parkinson's disease, which has been identified as a disorder of dopaminergic neurons, the transplantation of iPSC-derived dopaminergic neurons is being considered. If these transplanted neurons function normally, they could potentially cure Parkinson's disease. This anticipation has led to cell transplantation studies in cell culture and animal models, and in some cases, applications have extended to clinical trials.
Dopaminergic Neuron Therapy in a Model of Parkinson's Disease
Dopaminergic neurons from PSCs may be a candidate for the treatment of Parkinson's disease.When dopaminergic neurons were transplanted into the nigrostriatal lesions of rats with Parkinson's disease, the neurons survived and interacted in the rats' brains for a long period of time [60].After cell transplantation, the rats' motor function was restored.
In Vivo Transplantation and Survival of Astrocytes
Astrocytes derived from PSCs were transplanted into the striatum of mice to investigate their survival and function [56].In the brains of mice obtained 2 weeks after astrocyte transplantation, GFAP-positive cells were still observed.
Furthermore, when iPSC-derived astrocyte progenitors were transplanted into the brain of an Alzheimer's disease model in mice and examined through immunostaining, they interacted and functionally integrated with other cells in vivo [61].
Survival of Oligodendrocytes after Transplantation in Mice
To investigate the function of iPSC-derived oligodendrocytes, cells were injected into the forebrain of immunocompromised mice.At 12 weeks after cell injection, the oligodendrocytes were detected through immunofluorescence staining of hNA+ and OLIG2 protein in the corpus callosum.
Clinical Trials with iPSC Transplantation
There are very few studies in which iPSCs have been transplanted into humans. This is because questions about the safety, stability, and efficacy of iPSCs are constantly being raised. The first concern is the ability to form tumors, a common issue in stem cell research [62]; because iPSCs carry a theoretical risk of forming tumors, their safety must be carefully established. In addition, treatments using iPSC technology may result in modifications to the human genome, which requires discussion of the long-term ethical implications, for example, concerns about human cloning or human-animal chimeras.
On the other side of the spectrum, there are also concerns related to the immune response.Even though iPSCs are self-derived cells, the immune system may recognize them as foreign and attack them [63,64].This can happen mainly due to mismatches in human leukocyte antigens (HLAs), which is why it is important to select cells based on HLA matching.If iPSCs are generated from a donor with a specific HLA type, it is possible to use iPSCs from other people [63].If an HLA is incompatible, one can also modulate HLA expression or use gene editing [64].
Finally, because iPSCs must undergo reverse differentiation from human-derived cells, it takes a significant amount of time just to generate the cells.This can make it difficult to use autologous cells to treat acute illnesses.
In 2020, a transplantation study of iPSC-derived dopamine progenitor cells for the treatment of Parkinson's disease patients was conducted [65]. After harvesting fibroblasts by skin biopsy, dopamine progenitor cells were characterized in vitro with dopamine-neuron-specific and other neuronal markers. Characterized dopamine progenitor cells were transplanted into patients with Parkinson's disease, and Parkinson's-disease-related measures were assessed at 1, 3, 6, 9, and 12 months and every 6 months thereafter. Transplanted cells survived for 2 years without side effects. F-DOPA PET-CT imaging from 0 to 24 months showed a modest increase in dopamine uptake in the posterior cingulate near the implantation site. Clinical assessments of motor signs in Parkinson's disease and quality of life also improved, although interpretation should be carried out with caution due to the lack of a control group comparison.
In 2021, there was a planned clinical study of the transplantation of iPSC-derived neural progenitor cells for the treatment of subacute complete spinal cord injury [66].However, this was postponed due to the sudden onset of the COVID-19 pandemic.A clinical-grade iPSC line (YZWJs513) prepared at the GMP facility of Osaka National Hospital was induced to differentiate into neural progenitor cells (NPCs), and preclinical studies using mouse models confirmed its promotion of motor function recovery after spinal cord injury.
Conclusions
iPSCs can differentiate into a variety of neural cell types, including dopaminergic neurons, astrocytes, and microglia, which could be a revolutionary way to treat a variety of neurodegenerative diseases. Inhibition of TGFβ and the SMAD pathway induces neural progenitor cell differentiation of cells with restored pluripotency. The differentiated cells survive and function in the body after transplantation.
The chemical drugs used to treat neurodegenerative diseases vary in efficacy between patients and have short half-lives, meaning that they are rapidly cleared by the body. Drugs for neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease can slow their progression by increasing the release of neurotransmitters, but they cannot reverse the course of the disease. In addition, unlike a peripheral body part such as an arm, it is very difficult to deliver chemical drugs accurately to the brain. Cell transplantation treatments using patient-derived iPSCs are entirely patient-derived, are well tolerated, and may be able to survive and function in the long term to reverse the progression of neurodegenerative diseases.
However, clinical experimental studies of iPSCs and neural progenitor cells differentiated from them are extremely rare and require careful handling.The response in experimental animals and humans may be different, and we do not yet fully understand the differentiation of iPSCs.
Future research should focus on optimizing protocols for iPSC-derived neural cell differentiation, ensuring long-term viability and the functional integration of transplanted cells in vivo and paving the way for clinical applications.
Figure 1 .
Figure 1. Adding reprogramming factors to PBMCs to induce their reverse differentiation into iPSCs. Reverse-differentiated iPSCs can be induced to undergo mesoderm or endoderm differentiation through the activation of the SMAD pathway. Inhibition of the SMAD pathway induces the neural stem cell differentiation of iPSCs. BMP: bone morphogenetic protein, TGFβ: transforming growth factor-beta, NSC: neural stem cell, iPSC: induced pluripotent stem cell, PBMC: peripheral blood mononuclear cell, OSKM: Oct4/Sox2/Klf4/c-Myc, SMAD: Sma- and Mad-related protein.
Figure 2 .
Figure 2. Different neural cells that can differentiate from neural stem cells. Cells induced to become neural stem cells through dual inhibition of the SMAD pathway with SB431542 and Noggin can then be treated with cytokines specific to each differentiation target. The cytokines described next to the black arrows indicate the fate of each neuron. Differentiated neurons are identified through the detection of the proteins listed under each neuron.
Table 1 .
Strategies for iPSCs differentiated into neural progenitor cells to become multifunctional neurons.
|
2024-06-20T15:04:45.842Z
|
2024-06-01T00:00:00.000
|
{
"year": 2024,
"sha1": "df21584674c9d17e2310afa9a7cf4ebc7b586c14",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/12/6/1350/pdf?version=1718705714",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f49c91544d40a96f8645ce56ec4b46168eef6b70",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
245652532
|
pes2o/s2orc
|
v3-fos-license
|
The Uptake and Impact of a Label for Peer-Reviewed Books
This article presents an analysis of the uptake of the GPRC label (Guaranteed Peer Reviewed Content label) since its introduction in 2010 until 2019. GPRC is a label for books that have been peer reviewed introduced by the Flemish publishers association. The GPRC label allows locally published scholarly books to be included in the regional database for the Social Sciences and Humanities which is used in the Flemish performance-based research funding system. Ten years after the start of the GPRC label, this is the first systematic analysis of the uptake of the label. We use a mix of qualitative and quantitative methods. Our two main data sources are the Flemish regional database for the Social Sciences and Humanities, which currently includes 2,580 GPRC-labeled publications, and three interviews with experts on the GPRC label. Firstly, we study the importance of the label in the Flemish performance-based research funding system. Secondly, we analyse the label in terms of its possible effect on multilingualism and the local or international orientation of publications. Thirdly, we analyse to what extent the label has been used by the different disciplines. Lastly, we discuss the potential implications of the label for the peer review process among book publishers. We find that the GPRC label is of limited importance to the Flemish performance-based research funding system. However, we also conclude that the label has a specific use for locally oriented book publications and in particular for the discipline Law. Furthermore, by requiring publishers to adhere to a formalized peer review procedure, the label affects the peer review practices of local publishers because not all book publishers were using a formal system of peer review before the introduction of the label and even at those publishers who already practiced peer review, the label may have required the publishers to make these procedures more uniform.
INTRODUCTION
Several countries have adopted performance-based research funding systems (PRFSs) based on bibliometric indicators. At the same time, researchers and policy makers have realized that indicators need to be adapted to the specific nature of scholarly communication in the Social Sciences and Humanities (SSH). The specificities of much of the research in the SSH disciplines have made an all-too straightforward approach toward the counting of publications for evaluative purposes unsuitable for the SSH (Nederhof, 2006). The research output of the SSH is diverse in terms of topics, audiences, publication channels and formats, creating, in the words of Diana Hicks, a "frankly messy set of literature" (Hicks 2005, p. 2). Whether bibliometric indicators are (or can be made) suitable measures for funding allocation for SSH research depends in large part on their ability to adapt to the specific nature of scholarly communication in the SSH. One of the important differences in publication patterns between SSH disciplines and the STEM fields (Science, Technology, Engineering and Mathematics) is the continued importance of book publications to the Social Sciences and in particular the Humanities (see Larivière et al., 2006). Moreover, SSH are less well-covered in large commercial indexes such as the Web of Science indexes (owned by Clarivate Analytics) or Scopus (owned by Elsevier). According to Giménez-Toledo et al. (2016), "there exists a clear need for comprehensive databases collecting 'quality'-indicators for books and book publishers" (Giménez-Toledo et al. 2016, p. 2).
In this paper, we draw our attention toward a book label that has been designed to deal with the problem of which books to include in a national database used for the allocation of research funding: the Flemish Guaranteed Peer-Reviewed Content label or GPRC (Verleysen and Engels, 2013). The GPRC label is a tool for identifying peer-reviewed book publications, so that they can be included in the regional Flemish Academic Bibliographic Database for the Social Sciences and Humanities (Vlaams Academisch Bibliografisch Bestand voor de Sociale en Humane Wetenschappen, henceforth VABB). Publications included in the VABB are taken into account for the PRFS in Flanders, which relies in part on bibliometric indicators. The GPRC label can be understood as an attempt to optimize the Flemish PRFS to be more inclusive for the SSH, especially for books published by publishers based in Flanders. The label was created by the Flemish publishers' association (VUV) to enable books by Flemish publishers to be included more easily in the VABB and thus also in the PRFS. The label offers a novel way to include books in bibliometric indicator-based funding allocation systems by focussing on a formalized peer review procedure and by enlisting the publishers of scholarly work to provide information on the peer review processes of their books. The GPRC label can be seen in this context of an increasing awareness of the importance of books to the SSH and the realization that a more comprehensive view of SSH output is necessary, especially for book publications.
We use the concept bibliodiversity (Giménez-Toledo, 2020), to discuss how the diversity of book publications in the SSH is reflected in the choice to implement the GPRC label but also has consequences for the uptake of the label. The term "balanced multilingualism" was coined by Sivertsen (2018) in an argument in favor of a functional balance between publications in English (as the main international language of science) and local languages. The concepts bibliodiversity and balanced multilingualism are explained in more detail further on. The terms "local" and "locally oriented" are used in relation to different aspects of books that give them a local or regional focus. The first aspect being the local language, which in the case of Flanders is Dutch. The second aspect is the geographical location of the publisher, which is Flanders for all GPRC-labeled books. The third is the topic or content of the publications, which can be focused on Flemish or Belgian case studies. The fourth is the intended audience, which can consist of people within the region or country. This aspect is related to both language (books in Dutch) and contents (e.g., books on Belgian Law).
We consider the GPRC label within the broader regional and international context. While it is difficult to make causal claims about the effects of indicators on research practice, we attempt to define the possible social and normative implications of the label as it functions within the Flemish PRFS. We also analyse how the label has been taken up by the different SSH disciplines in Flanders. In this regard, we will focus specifically on two disciplines that have made use of the label frequently: History and Law. We also discuss the possible effects of the label on peer review practices. Previous articles have discussed the creation and potential of the GPRC label (Verleysen and Engels, 2013), and its potential pitfalls (Borghart, 2013).
The rest of the paper is structured as follows. In section 'Background: The GPRC Label' , we provide an in-depth discussion of the context of the GPRC label and its place within the Flemish PRFS. The specific make-up and context of the label prompts us to reflect on a number of key implications of the GPRC-label. We analyse the uptake of the GPRC label quantitatively in the section ' Analysis of the Data'. In this section, we also include the insights gathered through in-depth interviews with three experts. Finally, in the last section, we provide a more in depth discussion of the relevance of the GPRC label, with specific attention to the concepts of bibliodiversity and multilingualism.
Local Context
The Flemish PRFS: the BOF Key
The Flemish PRFS is the funding allocation model that forms the backdrop of the GPRC label. Luwel (2021) has recently described the Flemish PRFS in detail. In Flanders, the allocation of funds between universities has shifted from a model based primarily on input indicators (numbers of students) to a model focusing on output indicators. Part of the university funding is specifically directed to the funding of fundamental research: the BOF-key. In 2003, this funding allocation model was extended to consider publication counts and citations as well as input parameters. This has resulted in a "metrics-heavy scheme" (Luwel 2021, p. 2). The allocation model used in Flanders shows some similarities with the models used in the Nordic countries, originally developed for Norway.
The system initially exclusively used the Web of Science (WoS) indexes for the bibliometric part of the indicator. Mounting pressure from academics (especially SSH researchers) coupled with a growing awareness of the limitations of the large citation indexes for assessing research in the SSH prompted the Flemish government to devise a new way for publications to be counted by creating a regional database: the VABB, which lists all peer-reviewed publications by researchers affiliated to an SSH unit at one of the Flemish universities. Inclusion in the VABB is based on the judgement of 18 senior scholars who form the Authoritative Panel ("Gezaghebbende Panel" in Dutch or GP). With the inclusion of the VABB, a comprehensive overview is kept of all peer-reviewed literature published by SSH scholars at Flemish universities. The requirements for inclusion in the database were established by the Flemish government and include, as a formal requirement, that publications have undergone a process of peer review prior to publication. In order to identify which publications can be considered peer-reviewed, different methods exist. Besides publications indexed in WoS (the Social Sciences Citation Index, the Science Citation Index Expanded, the Arts and Humanities Citation Index and the Conference Proceedings Citation Indexes), which are automatically considered to be peer reviewed, lists of journals that are not indexed in these citation databases are included after judgement by the GP. Similarly, the GP maintains a list of publishers that employ peer review for their books. However, identifying peer-reviewed books on the publisher level complicates the possibility of including "hybrid" publishers of scholarly and non-scholarly books, while the identification of peer review on the level of individual books is a cumbersome task. Individual book publications can also be included in the VABB through an appeals procedure. The GPRC label creates the possibility of including publications without having to go through the appeals procedure. Meanwhile, another way to include books in the VABB is to approve book series instead of individual publications or book publishers. Publishers that use the GPRC label can also have their book series approved for the VABB. However, the inclusion of book series is not limited to the Flemish publishers. In this study, we limited our attention to the discussion of the GPRC label as used for individual publications.
The intention of the Flemish government was to stimulate universities to conduct a strategic research policy and to provide incentives toward the enhancement of research quality and a focus on fundamental research (Bof besluit, 2019). The BOF-key is thus an instrument that allows the Flemish government to provide incentives for the universities when it comes to making strategic decisions. The GPRC label was created in the context of this allocation model as a tool for including peer-reviewed book publications by Flemish publishers more easily in the VABB. For the Flemish publishers' association (VUV), the motivation was to make publishing with Flemish publishers more attractive for researchers at Flemish universities.
The Creation of the GPRC Label
As mentioned above, the GPRC label was created to find a way for books published at Flemish book publishers to be included in the VABB. The label is trademarked by the VUV. There are different ways for book publications to be included in the VABB. Even though the GPRC label was announced before the first version of the VABB, the effect of the GPRC label can only be studied now because the VABB uses a 10-year window. The first way for book publications to be included was through a list of approved publishers, which included predominantly international scholarly publishers who exclusively published peer-reviewed books. For local publishers with a mixed or hybrid portfolio of both peer-reviewed and nonpeer-reviewed publications, this meant that they would never be able to appear on the list and thus never have their book publications be included in the VABB without going through the appeals procedure. The VUV feared that scholars at Flemish universities would target their publications toward (almost exclusively international) publishing houses that did appear on the list, thus creating a disadvantage for local publishers. Moreover, the situation created an inequality between the two Flemish publishers already on the list (Brepols and Peeters) and other Flemish publishers not on the list. The system could also mean a disadvantage for disciplines that publish more locally oriented books, such as Law or History. Furthermore, the inclusion of local publishers would contribute toward making the VABB database more comprehensive.
The GPRC label offers a way for books to be included in the VABB on an individual basis. The procedure for the GPRC label is as follows. The publisher holds on to a peer review dossier that includes at least the following parts: (1) The table of contents of the publication (2) The affiliation of the reviewers (3) A chronological overview of the peer review process (4) A minimum of two reviews (5) A formal confirmation that the reviewer authorizes the publication with the label.
The ultimate decision on inclusion in the VABB remains the responsibility of the GP. The publisher has to submit the peer review dossier to the GP when requested. The panel does not review all publications with the label, but a selection each year from a number of publishers. The panel does not read the actual reviews submitted by the reviewers, their judgement of the peer review dossier is based solely on the formal criteria. The procedure for the inclusion of book series is similar. Publishers still have to hold on to a peer review dossier, which can be requested by the GP.
Examples of Labels for Peer-Reviewed Books
In their discussion of the GPRC label, Verleysen and Engels (2013) already pointed to the international opportunities of a label for peer-reviewed books. An international label has not yet been created, but so far, a label modeled on the Flemish GPRC label has been adopted by two other countries. In 2014, the Federation of Finnish Learned Societies introduced a label for peer-reviewed books and articles. The Finnish label is meant to "inform peer review practices used in Finnish scholarly publishing with the best and ethically sound international standards," and to "make it easier for students, researchers, libraries, administrative actors or other users of research literature to recognize peer-reviewed publications" (Label for Peer-Reviewed Scholarly Publications, 2015). The Federation of Finnish Learned Societies requires publishers to describe their peer review procedure on their website and to store the documentation related to the peer review process. The requirements of the peer review process are similar to those of the GPRC, i.e., a minimum of two independent reviews is required. A quality label for scholarly book publications has also been established in Sweden: Kriterium, a label that was established in a bottom-up way (Hammarfelt et al., 2021). The Kriterium board manages the peer review process itself, appointing an academic coordinator, who in turn picks two reviewers and oversees the review process. The ultimate decision on acceptance or withdrawal lies with the Kriterium board. This is different from the GPRC label, where the peer review process is managed by the publisher and the GP only checks whether the formal aspects of peer review have been satisfied. Another interesting feature of the Kriterium label is that all publications have to be made accessible open access on Kriterium's website.
Meanwhile, Central and Eastern European countries know a tradition of publishing scholarly books with the names of the reviewers. Using data from Poland, Kulczycki et al. (2019) characterized this practice as an open-identity peer review label that could be used to delineate between peer-reviewed and nonpeer-reviewed publications.
Peer Review
The use of the GPRC label may have social as well as normative implications. Firstly, as mentioned before, the label requires books to have undergone a peer review procedure, thus further institutionalizing peer review as a delineating criterion for what is seen as scholarly and what should count toward the PRFS. Whereas peer review has been the gold standard of quality in (international) journal literature for many decades, peer review has historically not been the primary way to ensure the scholarly value of book publications. Scholarly books have often been published by university presses who upheld the quality through the editorial processes of choosing authors and topics carefully and guarding the scholarly nature of their publications. Pochoda (2013) mentions in this context the "stable, bounded, continuous, well-ordered and well-policed" analog scholarly publishing system that knew its heyday in the mid-20th century. Pochoda (2013) also mentions that the subjection of manuscripts to external review only dates back to the 1960s. Before that time, book reviews and informal barriers were used to ensure quality. Book reviews are still being written and read by scholars (Hartley, 2006), and are a way for peers to make a decision about whether or not to invest time in reading a particular scholarly book. Meanwhile, the peer review system is inextricably linked with the rise of the academic journal with double-blind peer review being introduced in the mid-20th century (Gould, 2012).
The pressure on academics to exclusively publish peer-reviewed works is increasing, and this can potentially affect publication practices in SSH. As already mentioned, researchers in the SSH publish a variety of works, including peer-reviewed scholarly articles, but also books directed at a wider audience. Hicks (2005) argues that scholars in the social sciences and humanities contribute more to the so-called "national literatures." Meanwhile, scholars in the humanities have often complained of the narrow focus of research evaluations on internationally oriented journals, which affects the day-to-day practice of scholars (see e.g., De Wever, 2007).
Today, external review (or peer review) is the norm for academic journals, as well as for many scholarly book publishers. The GPRC label is an attempt to include all peer-reviewed literature written by local scholars, not only the publications indexed in the major citation databases or published by publishing houses with an international reputation. In this way, the label can help scholars to continue to have a diverse portfolio of articles in international journals indexed in the major indexes as well as locally relevant publications in journals and books, often in Dutch, provided that they have been subjected to peer review. However, we have to remember that the GPRC label was introduced in a local (Flemish) context, and was also made available to publishers that do not have an exclusively academic/scholarly portfolio. Even though peer review has become the standard quality criterion of scholarly work, it is not necessarily performed systematically at local publishers. Some of the publishers who now publish GPRC books, previously may not have performed peer review of scholarly books in a systematic way. As the GPRC is an element within a PRFS that recognizes peer review as the main delineating factor between scholarly and not scholarly, it may result in some disciplines using peer review for a larger portion of their publications and it may tempt Flemish publishers toward using peer review and standardizing their peer review procedures.
Another important element to take into consideration is how peer review is being defined. A recent study by Giménez-Toledo et al. (2017) looks at how peer review is defined in the context of different PRFSs in European countries. Giménez-Toledo et al. (2017) state: "There is a diversity of approaches to defining peer review and applying it in the evaluation process: from the specific definitions and requirements in the case of the Finnish and Flemish labels to the use of existing information on scholarly publisher's peer review practices by evaluation agencies in the case of Spain." Moreover, Pölönen et al. (2020, p. 3) point out that identifying peer-reviewed publications is ambiguous, and that within PRFSs, peer review is typically defined technically, "focussing on the existence of a recognizable pre-publication procedure." In the case of GPRC, the focus on the existence of a formal peer review procedure relates to the requirements for inclusion in the evaluation system as put forward by the Flemish government: the BOF-decree regulations.
Bibliodiversity and Multilingualism
The concept bibliodiversity was introduced by Giménez-Toledo (2020) to stress the importance of taking into account a variety of publications from the full range of small and medium-sized regional publishers to the large international publishing conglomerates, but also to emphasize the diversity of topics, languages, perspectives and methodologies in scholarly publishing. In the Social Sciences and Humanities not only different publication types are used, but also a mix of books in local languages and English, topics of local relevance, books directed at different audiences which creates a diverse publication field.
Giménez-Toledo (2020, p. 3) points out that "lack of insight implies practically that there is going to be an adequate recognition for those books published by the large publishing companies with an international profile but not for the more national oriented and smallest ones." The GPRC label caters specifically to this point of having a balanced PRFS where the evaluation does not disfavor smaller and medium-sized national publishers.
Protecting this diversity is a complex problem as the scholarly book publishing market is influenced by internal developments of scientific fields, the financial resources of universities and university libraries and the incentives provided by the evaluation systems and PRFSs.
We can identify some changes in the scholarly book publishing market, influenced by these various elements. Where the scholarly book publishing market used to be dominated by university presses, there now exists an increasing concentration among big conglomerates (Pochoda, 2013). In a study about the Flemish publishing landscape, Guns (2018) found that there was a relatively high concentration of books among a few publishing houses. The concentration was larger for peer-reviewed books, suggesting that researchers publish their scholarly books at a few important publishers. The increasing concentration of publishers has played an even bigger role for journal publishers (Larivière et al., 2015).
A decline of university presses and an ascent of commercial publishing houses likely has consequences for the content of books as well. Moreover, large players, such as Oxford University Press and Routledge have started to dominate the market. Mañana-Rodríguez and Giménez-Toledo (2018) found that university presses tend to be more multidisciplinary, as their mission is to some extent different from commercial publishing houses. However, some university presses have become privatized, making the distinction less clear.
Not only the size of publishers, but also the location of publishers can be an element in bibliodiversity. In a study by Verleysen and Engels (2014), the internationalization of scholarly book publishing was studied for the Flemish SSH. Both the evolution of the internationalization and the different levels of internationalization between SSH disciplines were analyzed. The analysis showed that for the Social Sciences, publishers are located more toward the U.K. The publishers of publications in the Humanities, however, veer more toward Flanders and continental Europe.
The bibliodiversity also concerns diversity in terms of language. The general trend in scholarly publishing is that researchers tend to write more in English, as it is an international language of science, although there are important regional differences as well as differences between fields of science. Comparing the language of SSH articles in seven national databases, Kulczycki et al. (2020) found that in the Nordic and Western European countries, including Flanders, the dominant language for publications is English, while this is not the case for Central and East-European countries. Moreover, they report the highest shares of local language use in Law, History and Archaeology, and Arts. Multilingualism is therefore especially important to the SSH which contain fields of study where publishing in languages other than English is more common. It is also relevant in particular to non-English speaking areas.
The use of English as an international language of science has been widely debated (Tardy, 2004;Stockemer and Wigginton, 2019). While one of the foci of this debate is the disadvantage faced by non-native English speakers, especially in developing countries, when disseminating their research results in international elite journals (Salager-Meyer, 2014), another focus of the debate surrounding English as a language of science has been the importance of native languages to the communication of research in local contexts. For example, Sivertsen (2018) has argued that "to fulfill its responsibilities, science needs to be multilingual." In this context, Sivertsen (2018) has coined the term "balanced multilingualism, " where some publications are written in local languages and thus appreciated by the professionals or the general public while others are disseminated to international peers in English-language journals.
Multilingualism is important because the language in which a scholarly work is written has an effect on the audience it can have. A scholarly book written in Dutch will not be read by international peers who do not understand Dutch. On the other hand, a scholarly book written in English will be less likely to attract local readers, lay people or professionals to read the book. The GPRC label, as a local label, may play a role in the multilingualism of SSH publications in Flanders. While previous studies have warned about the increasing homogenization of publications as a possible adverse effect of current research evaluation systems (Dahler-Larsen, 2018), the GPRC label provides the opportunity for local researchers to have their Dutch-language publications included in the PRFS.
We thus use the concept bibliodiversity to denote the diversity of publication channels and types and we use the concept balanced multilingualism to argue that rather than a complete dominance of English or a return to local languages to communicate research, a functional balance needs to be found between the use of English as the international language of science and local languages to communicate research findings, specifically with local subjects. For the SSH, a research funding mechanism based on bibliometric indicators should take into account the specificities of SSH research, with particular attention toward bibliodiversity and balanced multilingualism.
The Trickling Down of Incentives
The GPRC label can simplify the inclusion of book publications in the PRFS, which can affect the allocation of funds. Consequently, it is possible that this advantage at the institutional level translates into incentives at the lower levels, where departments can show that their output contributes to the funds received by the institution. However, as the GPRC label is only relevant to a small part of SSH financing and other ways to include peer-reviewed book publications exist, the potential financial benefits of the use of the label are limited. Moreover, researchers face different kinds of pressures, including pressures to publish in influential international journals which may offset the relatively small incentives for publishing GPRC-labeled books. In order to understand how the GPRC label fits within the larger PRFS and the Flemish context, several aspects regarding the trickling down of incentives from the PRFS to actual researcher practice should be taken into consideration.
Firstly, even though PRFSs are meant to incentivize certain changes in institutions, e.g., increasing the publication output, the incentive structures are not always straightforward: "incentives in evaluation systems do not only reinforce each other, but may also work in opposite directions" (Hammarfelt and de Rijcke 2015, p. 74). While the GPRC label seems to offer an incentive toward publishing books at regional Flemish publishers, it is only a small part of the PRFS. The BOF-key as a whole incentivizes publishing in different kinds of peer-reviewed publication channels, although the inclusion of citation-related parameters may put stronger emphasis on WoS-indexed journals. The GPRC label is meant to reduce the possible disadvantage that could result from an over-emphasis on international publishers and too little attention to national literatures and scholarly books.
Secondly, the mere existence of metrics, measures and evaluation systems can influence managers' and researchers' decisions. Researchers may be impacted for example because "academics perceive the expectations built into (research evaluation systems) and interpret them as signals of what society values about their research" (Gläser and Laudel 2007a, p. 132). This effect of evaluation systems operates regardless of financial incentives.
Besides incentives within the PRFS, there are also external incentives. Publications in international journals, preferably with high impact factors, are generally considered to be more prestigious and are more important to researchers' careers. Researchers can engage in what Gläser and Laudel (2007b) have called "amateur bibliometrics." Therefore, the potential effects of the GPRC label should be seen within the whole context of incentives provided by the BOF-key, but also external incentives such as the prestige associated with publishing in internationally recognized publication channels, or the existence of international rankings and metrics. These external pressures could reduce the importance of the GPRC label.
Aagaard uses the concepts tight coupling, loose coupling and decoupling to explain the different ways in which incentives of a bibliometric indicator trickle down to the individual level (Aagaard, 2015). Aagaard points out an apparent paradox in the Norwegian PRFS. Based on the facts that the financial incentives are not strong, the system is not meant to be adapted on individual levels, the higher education institutions have a lot of autonomy, and the system itself is contested within the sector, a loose coupling or decoupling would be expected, whereby the PRFS would have only very small or no effects on local management practices (Aagaard 2015, p. 728). However, a number of indirect mechanisms and factors may create a mechanism whereby incentives trickle down and external pressures are internalized. Aagaard uses the concept allure to denote the temptation for managers to adopt measures they acknowledge to be poor at individual levels, because of the simplification the quantitative measure entails and the seemingly objective nature of the measure. Aagaard proposes the concept anxiety to address the uncertainty and anxiety faced by individual researchers about the future importance of the indicator even if the current use is limited or downplayed by managers and administrators. Researchers may anticipate that the indicator may become more important in the future (Aagaard, 2015).
These studies point toward a complexity of mechanisms which makes it difficult to immediately assess the potential effects of a PRFS, or in this case, one element within a PRFS. While the existence of a label such as the GPRC label may affect everyday research, it is difficult to predict to what extent and in what way.
Data and Methods
This study uses both qualitative and quantitative methods within a sequential explanatory strategy whereby the focus lies on the quantitative analysis of the data (Creswell, 2014). The qualitative part of the study in the form of in-depth interviews helps to contextualize some of the findings from the quantitative part of the study.
The data used for the quantitative part of the study are publication metadata from the VABB database from the years 2010 (the year of the first GPRC-labeled publications) until 2019. These data are part of the most recent version of the VABB, version 11 (Aspeslagh et al., 2021). The dataset available online only contains the approved publications, not the publications which were not approved for inclusion in the VABB. Thus far, 2,580 publications with the label have been included in the VABB database, compared to 82,403 publications in VABB 11 in total (3%). The metadata in the dataset used in this study includes the publication year, the type of publication, the judgement of the Authoritative Panel (GP), the titles and the language of publication.
The analysis of the VABB data consisted of descriptive statistics that show the evolution of the number of publications as well as the breakdown of these publications across disciplines. Based on the empirical results from these descriptive statistics, two disciplines (History and Law) were chosen for a further analysis of the content. We conducted a categorization of publication titles in terms of local (Belgian/Flemish) or international orientation for these GPRC publications.
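As an illustration of this kind of descriptive analysis, the following minimal Python sketch computes the yearly share of GPRC-labeled publications among book publications and the language distribution of GPRC-labeled books from VABB-style metadata. It is not the code used in this study; the column names and the toy records are hypothetical.

import pandas as pd

# Toy VABB-style metadata; real records would be loaded from the published dataset.
vabb = pd.DataFrame({
    "year":     [2014, 2014, 2015, 2015, 2019, 2019],
    "pub_type": ["monograph", "chapter", "chapter", "monograph", "chapter", "monograph"],
    "gprc":     [True, False, True, True, False, False],
    "language": ["dut", "eng", "dut", "eng", "eng", "dut"],
})

book_types = ["monograph", "chapter", "edited volume", "proceedings paper"]
books = vabb[vabb["pub_type"].isin(book_types)]

# Share of GPRC-labeled publications among book publications, per year
print(books.groupby("year")["gprc"].mean().mul(100).round(1))

# Language distribution of GPRC-labeled book publications
print(books.loc[books["gprc"], "language"].value_counts(normalize=True).mul(100).round(1))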
For the qualitative part of the study, three in-depth interviews with academics who had come into contact with the GPRC label were conducted. The interviewees were selected because of their experience with the GPRC label as author, member of the GP and/or through their experience in research management. Not all Flemish SSH researchers are familiar with the GPRC label or have a deep understanding of what the GPRC label entails. We decided to interview in particular someone from the discipline Law because the data pointed toward Law as the discipline that made the most use of the GPRC label, both in relative and in absolute terms. The in-depth interviews focused both on the respondent's general experience with the GPRC label (as author or policy maker) and some of the empirical results from the first part of the study. The interviews were recorded and transcribed. Ethical clearance was obtained from the University of Antwerp's Ethics Committee for the Social Sciences and Humanities. To safeguard the anonymity of the interviewees, we do not mention their specific professional positions.
We identify a few key points of interest for the analysis of the data. Firstly, we want to find out to what extent the label has been used in the past 10 years and whether it has contributed to a more comprehensive inclusion of book publications in the VABB. Secondly, we analyse the languages represented among the GPRC book publications and the extent to which the GPRC label contributes to a "balanced multilingualism." Dutch-language publications are also expected to discuss local topics. Thirdly, we expect the label to be taken up differently by the different SSH disciplines. While some SSH disciplines gear more toward the publication practices of STEM fields, publishing mostly English-language articles in international journals, others have a substantial share of local-language articles and book publications; this is especially true for Humanities disciplines.
The Uptake of the Label and Its Effect on the Inclusion of Books
In this first part, we analyse the evolution of the number of GPRC publications. VABB distinguishes between five publication types, four of which are relevant in the context of GPRC (chapters in edited volumes, edited volumes, monographs, and proceedings papers). We use the term "book publications" to refer to individual publications, which can be book chapters, edited volumes, monographs and proceedings papers.
As can be seen in Figure 1, the number of publications written or edited by scholars at SSH departments of Flemish universities that have been accorded the GPRC label increased until 2015 and dropped since then. Caution is necessary in interpreting this decline, since the numbers are reported on the level of individual publications as reported to the VABB. As a result, a few edited volumes with many chapters may have a substantial effect on the numbers. However, looking at individual books (ISBNs) a similar pattern of decline in recent years emerges with a drop since 2016.
The most recent information on how many GPRC books were reported by publishers for 2020 shows a more nuanced picture. The number of books submitted for the last year (books that are not yet included in the VABB and thus do not show on the graph) is 104, almost 30 more than the 77 books that were reported by publishers in 2019. It seems too early to speak of a permanent drop in the use of the GPRC label.
In 2009, the year before the introduction of the GPRC label, 22.3% of book publications submitted by universities (chapters, edited volumes, and monographs) were included in the VABB. In 2019, the last year for which we have final data, this figure was 43.1%. This means that the inclusion rate for book publications in the VABB has almost doubled in the past 10 years. We hypothesize that this is due to an increase in comprehensiveness of the database (publishers, series and books with peer review are more recognized as such) as well as a stronger inclination to publish peer-reviewed books at the researcher level. The alternative explanation-that universities no longer submit publications they assume will not end up in the database-is unlikely as sorting the publications would be much more time-consuming for them and uncertainty over outcomes makes the responsible people opt for the safer option of submitting everything that might potentially be included. It is, however, possible that publications that do not fit neatly in any of the five publication types are not always submitted.
As can be seen in Figure 2, the number of book publications in the VABB increased roughly linearly. This is in part due to an overall rise in production. As mentioned before, the inclusion of book publications in the VABB takes different forms, with GPRC accounting for a substantial proportion of book publications in the VABB. The highest share of GPRC-labeled publications among VABB book publications was reached in 2015, with 18.3% of book publications included in the VABB. The share dropped to 11.7% in 2019. The increase in the number of book publications can thus not be solely attributed to the GPRC label. However, as the goal is to include all peer-reviewed publications, the GPRC label offers a new way for peer-reviewed publications to be added to the VABB that otherwise would be more difficult to include. It has added to the total of book publications in the VABB, but it is not the sole driving force behind the growth in peer-reviewed book publications.
Local Orientation of GPRC Publications
In this section, we discuss two aspects concerning the local orientation of publications. The first is language and the second is the content of publications, with topics of local relevance.
As expected, the GPRC label shows a higher proportion of publications in Dutch (the local language) when compared to the language of other publications in the VABB. For the total database, 78.4% of included publications are written in English, with only 16.3% in Dutch. The third language is French (2.89%), the other major national language of Belgium. Among book publications, 73.5% are written in English, and only 16.1% in Dutch. A relatively large proportion of book publications are written in French, 5.8%. English is especially prevalent among WoS-indexed publications, where 96.5% of publications are written in English. For book publications with the GPRC label, only 28.8% of book publications are written in English while the majority, 68.1%, are published in Dutch. 2.5% of GPRC publications are written in French. In this way, the label caters mostly toward local language publications and adds an avenue for Dutch language publications to be included in the VABB. Thus, the GPRC label contributes to a balanced multilingualism, whereby English-language publications in international journals exist alongside a national literature of local language publications, including book publications. With the VABB system, both types of publications are counted toward the PRFS, provided they have been subjected to peer review prior to publication.
Another aspect to take into consideration is the content of publications. While it was not possible to study all GPRC publications in detail, an analysis of the list of GPRC titles showed us that several publications in the field of Law concern expositions of parts of the Belgian system of law. We conducted a categorization of the titles of 224 publications in Law that were either classified as edited book publications or monographs. We assigned publications to the category of Belgian or local issues if they contained explicit references to the locality (e.g., if they contained the word "Belgium") or if they could reasonably be assumed to deal with a local topic (e.g., "Handboek algemeen huurrecht" - "Handbook for general tenancy law" - applies to the local tenancy law). Publications were assigned to the category of EU or European Law if their titles referenced either the EU or Europe. Most publications that were categorized as dealing with local topics did not use a toponym in the title, probably because the fact that they discuss Belgian law was assumed to be self-evident. Of the 224 publication titles we analyzed, 110 publications were found to concern Belgian or local law or criminology subjects. Twenty-four publications were found to concern EU or European law. Our approach probably yields an underestimate, since it was based on a "common sense" categorization, where only publications that were very clearly local in nature were assigned to the local category. While this is only a rough estimate, it gives an indication of the importance of local topics to GPRC publications in the field of Law and a general impression of which kinds of publications the GPRC label is being used for by Law researchers. As these types of publications benefit in particular the local law practitioners and students, it makes sense for them to be written in Dutch. Eleven of these publications contain the word "handboek," which can be translated into English as "handbook." While the Dutch Law publications tend to focus on local law, the English-language Law publications often cover EU law.
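A conservative keyword match could serve as an automated first pass before manual inspection. The sketch below is only illustrative, since the categorization reported here was done by hand, and the keyword lists are hypothetical; as in the manual procedure, titles without an explicit reference remain unclassified.

LOCAL_KEYWORDS = ("belgi", "vlaams", "vlaanderen", "flemish", "flanders")
EU_KEYWORDS = ("europe", " eu ", "eu-")

def rough_category(title: str) -> str:
    # Pad with spaces so that the short token " eu " can be matched safely.
    t = f" {title.lower()} "
    if any(k in t for k in LOCAL_KEYWORDS):
        return "local"
    if any(k in t for k in EU_KEYWORDS):
        return "EU/European"
    return "unclassified"  # left for manual, common-sense categorization

titles = [
    "Handboek algemeen huurrecht",                # local topic but no toponym -> unclassified
    "De hervorming van het Belgisch strafrecht",  # explicit reference to Belgium -> local
    "European Union Law after Lisbon",            # -> EU/European
]
for title in titles:
    print(rough_category(title), "-", title)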
Closer scrutiny was also given to publications in the field of History, because book publications have been singled out as important parts of the research output of historians (Verleysen and Engels, 2012). From 113 publications from the field of History, it was immediately apparent that a large portion of them had a focus on Belgium, the Low Countries (current Belgium and the Netherlands) or specific places of the region. Using the same method as for the Law publications to categorize the historical publications, 39 publications were found to explicitly reference either Belgium, a region within Belgium, or the Low Countries. As we have shown, the GPRC publications are mostly locally oriented. The main reasons for this are that only local publishers can use the label and that book publications in particular are used by different disciplines for local audiences, not only academic peers. For a discipline such as Law, this can be professionals in the field as well as students. For a discipline such as History, this can be an interested lay community.
The Distribution Among Disciplines
In this part, we analyse how the label has been taken up by the different disciplines. Table 1 shows the number of publications with the GPRC label for the different disciplines. The classification used here is a cognitive classification, based on the Fields of Science classification. The table includes all disciplines with 15 or more GPRC-labeled publications. This results in the exclusion of non-SSH disciplines. Note that in the FoS classification, the discipline Law encompasses both the field of Law and the field of Criminology. It should also be noted that this classification does not take into account multidisciplinarity. Publications to which more than one discipline was assigned are counted fully for each of the disciplines.
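To make the counting rule explicit: a publication assigned to two disciplines contributes one full unit to each of them, which corresponds to "exploding" the list of assigned disciplines before counting. The snippet below is a hypothetical illustration of this full-counting rule, not the procedure used to produce Table 1.

import pandas as pd

pubs = pd.DataFrame({
    "id": [1, 2, 3],
    "disciplines": [["Law"], ["Law", "History"], ["History"]],
})

# Full counting: publication 2 is counted once for Law and once for History.
print(pubs.explode("disciplines")["disciplines"].value_counts())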
The discipline that has used the label the most is Law. The predominance of Law publications among total publications is not surprising. Both Law and Criminology are disciplines that publish a large share of their publications in Dutch (53% in the last 10 years). Moreover, many Law publications are book publications (31%, compared to 22.7% for the total of the VABB). However, the predominance of Law publications among GPRC publications is also related to the nature of scholarship in Law. In many countries, a debate has emerged in recent years about the nature and methodology of research in Law, and the position of so-called doctrinal research (Stolker, 2003; Van Gestel and Micklitz, 2014; Kaltenbrunner and De Rijcke, 2017). This debate also has consequences for PRFSs, as they embody decisions on which publications should be counted.
Because the discipline Law seems to use the label most often, Figure 3 shows how many Law publications were published each year with the GPRC label. Even though the total number of Law publications is still modest, which can produce big jumps in the numbers, a clear pattern emerges: the number of publications rises markedly until 2016 and declines quite markedly afterwards.
A second noticeable trend in the distribution between disciplines is that some disciplines use the GPRC label for a larger portion of their book publications. Disciplines such as Law, Political Science and Arts use the GPRC label for a relatively large portion of their book publications (Table 1). Meanwhile, Languages and Literature and Religion make use of the label for a relatively small portion of their book publications, even though they are book-intensive fields. A difference here could be that those disciplines target non-Flemish book publishers or publishers that are automatically included in the VABB. The last column shows the proportion of book publications among VABB publications for each of the disciplines. These levels vary significantly. The highest proportion of book publications can be found for the discipline Religion, where more than half of publications included in the VABB are book publications. Conversely, for a discipline such as Psychology, book publications account for only a little more than 5% of publications in the VABB.
We now zoom in on the differences between the social sciences and the humanities in terms of how the GPRC label has been taken up. Because the humanities are typically seen as more book-oriented disciplines, we expect the GPRC label to be more relevant to the humanities. We disregard the discipline Law, firstly because Law has an overwhelming presence in the data and secondly because Law is sometimes also classified as a humanities discipline (e.g., within the classification system used in the VABB, which is based on the affiliation of researchers). On the whole, the intuition that a larger proportion of publications from the Humanities would be book publications than for Social Sciences disciplines is confirmed for the VABB data. Thirty-seven percent of publications in the Humanities are book publications, compared to only 23% for social sciences publications. This is not unexpected and in line with previous studies on book publications in the SSH (Verleysen, 2016). More unexpected is that the percentage of GPRC publications among book publications is higher for the social sciences than for the humanities. One possible explanation could be that, since book publications are more important to the humanities, researchers were already publishing at international publishers or local publishers with a strong tradition of peer review to reach the international scholarly community. We point here again to the discipline Religion (or Theology), where publication channels that are automatically included in the VABB continue to be used. It is to be expected that scholars make decisions on where to publish, locally and internationally, based on the reputation of the publisher in the field. Strong book publishing traditions in the humanities may result in their most important publishers already being on the VABB list. Some caution should be used in the interpretation of these results. As the GPRC label concerns such a small proportion of total publications, a few publications could alter the picture. Moreover, the distinction between the social sciences and humanities can be a meaningful one, but there also exist large differences in the uptake of the label within each field. For example, within the social sciences, the discipline Economics & business does not seem to make much use of the label.
The Publishers
This last section of the analysis of the VABB data concerns the publishers. Part of the explanation for why some disciplines have made use of the label more frequently than others, lies with the publishers. Many publishers have a clear subject specialization. The Flemish publisher Peeters, for instance, focuses on religious studies, among other topics. Because Peeters is one of the two Flemish publishers whose publications are automatically included in the VABB, their books do not have a GPRC label. This explains why the discipline Religion uses the GPRC label comparatively little. Table 2 shows the top 5 publishers of GPRC publications. The publisher Intersentia (which belongs to the same parent company as Larcier since 2018) publishes mainly publications in Law. Because of the predominance of Law publications among GPRC-labeled book publications, it appears relatively high on the list. The largest publisher of GPRC-labeled book publications is Leuven University Press, the university press of the largest university of Flanders. University presses of other Flemish Universities-specifically Antwerp University Press and VUB Press, both imprints of ASP (Academic and Scientific Publishers)-also publish books with the GPRC label.
The Interviews
The three in-depth interviews with experts gave some additional insights into how the label was taken up. We have identified a few key points of interest. The case of Law was discussed more in depth in the interview with a researcher from Law. All three of the experts highlighted some advantages and disadvantages to the GPRC label and spoke from their personal experience in a management position and/or as author of GPRC-labeled books.
One of the implications of the GPRC label we identified earlier in the study is that peer review procedures have been introduced at publishers who were not used to following peer review procedures before. One respondent who was more directly involved with the evaluation of peer review dossiers at the GP commented that in the early days of the GPRC label, review dossiers often did not comply with the formal requirements, e.g., the author names or publication title would be missing. The publishers needed some time to get used to putting together a peer review dossier that complied with the formal criteria. However, the GP does not evaluate the reviews themselves, only whether the reviews are present in the dossiers. For an evaluation of the strength of the actual reviews in the peer review dossiers, and whether there has been an evolution in this, a review of the dossiers would have to be undertaken. This was done for 24 books with the Kriterium label by Hammarfelt et al. (2021).
A few potential flaws of the GPRC label were identified during the interviews. With regard to the peer review process, one of the respondents raised the issue that sometimes the peer review process would be started only after the manuscript was finished, and would not induce changes to the book. This undermines the idea of the label as a quality criterion.
A second problem that was identified with regard to the GPRC label is the reluctance by publishers to use it, which could be related to the difficulty of finding reviewers for GPRC books. The empirical results of the study seem to point toward a certain stagnation or even decline in the numbers of GPRC-labeled books. Among the reasons for this, a possible ceiling to the number of books fitting within the GPRC label was mentioned. However, one of the respondents argued that publishers are not eager to have their books GPRC-labeled because they find that it requires too much effort without getting much in return. Speaking from the position of reviewer, a respondent indicated that they also found it too onerous to review a potential GPRC book, on top of the many requests to review journal articles. Another respondent commented on the language barrier making it more difficult to find reviewers for GPRC books as Dutch language books have a smaller potential pool of reviewers. The difficulty of finding reviewers could also be a reason for why publishers may find it cumbersome to go through the procedure for a GPRC label.
With regard to the effect the GPRC label may have on the publication practices of researchers, our respondents indicated that the primary use for the GPRC label is for the researchers to have their books more easily included in the VABB, which is good for their career because VABB publications are recognized within the Flemish system. The interviewees all commented that international publications are important to researchers, and the GPRC label is interesting for books that would have been published locally anyway.
Another possible use for the GPRC label, as a mark of quality, does not seem to play a role. Researchers do not look specifically for the GPRC label when they are looking through scholarly works. The label is also not internationally recognized. One respondent argued that this is one of the reasons why the label is less interesting for publishers, because they will not get any additional readership for GPRC-labeled books.
For the case of Law specifically, it was mentioned that Law scholars publish more Dutch-language publications. The interviewee from the discipline Law commented that the label is also being used for more professional publications. Within the discipline of Law, there exists a debate about the value and nature of research in Law and the place of doctrinal research. This debate has existed for some time (Stolker, 2003; Van Gestel and Micklitz, 2014; Kaltenbrunner and De Rijcke, 2017). Our respondent from the discipline Law had a nuanced view of the functionality of the GPRC label. With regard to the debate within Law on the distinction between scientific contributions and practice-oriented contributions, they felt that the GPRC label could offer a distinction between these different kinds of publications. However, the respondent added that it is relatively easy to get a GPRC label for a publication, also for a more practice-oriented publication. This downplays the possibility that the GPRC label could be used to delineate between these different activities by Law researchers.
Overall, the three in-depth interviews provided a varied perspective on the GPRC label, with some points of difference. However, the respondents seemed to agree that the GPRC label is only a small element, and that its importance should not be overstated. None of the respondents saw the GPRC label as one that researchers use when selecting which scholarly books to consult; it was primarily seen as a way for authors to have their books included in the VABB. The existing prestige associated with publishing at internationally renowned publishers remains intact. With regard to the professionalization of publishers, the interviewees painted a nuanced picture. On the one hand, publishers complied more with the requirements of the GPRC label after a few years. On the other hand, the respondents voiced doubts about the quality of the reviews, which may indicate that GPRC sometimes remains a formalistic exercise. An additional study could look into the peer review dossiers and also incorporate the position of the publishers to address this point.
DISCUSSION
In this section, we attempt to provide explanations for the main trends found in the empirical part of the study and consequently discuss the implications of the GPRC label for bibliodiversity and balanced multilingualism. We also discuss the GPRC label's potential to make the PRFS more inclusive toward different publication practices in SSH disciplines.
The GPRC label only accounts for a small proportion of total publications in the VABB. Moreover, none of the interviewees identified the GPRC label as an incentive toward publishing more at Flemish book publishers. Nonetheless, the GPRC label plays a part in a broader effort of recognizing the research activities SSH scholars in Flanders already perform and valuing them within the PRFS. As such, the label can protect part of the bibliodiversity of SSH in Flanders. In addition to opening an extra route for books as a typically less favored publication type, the label is open to local, typically small, publishers as well as publishers with a mixed portfolio.
From the point of view of the publishers, the GPRC label was meant as a way to redress the balance between publishers that are on the list of VABB-approved publishers and publishers that are not. However, the GPRC label is not always attractive to the publishers because of the added workload, the difficulty of finding reviewers and the limited benefits to the commercial attraction of their publications. This downplays somewhat the possibility of the label strengthening the market position of Flemish publishers.
The empirical results indicate that the label indeed caters mainly to Dutch-language publications. Moreover, the titles of History and Law publications with the GPRC label have shown that a focus on topics of local relevance exists among GPRC-labeled books. The empirical results have also shown a diversity in terms of the uptake of the label between the different disciplines, which points to a greater or smaller demand for a new channel for recognition of locally published scholarly books. Especially in Law, many publications in Dutch were added to the VABB through the GPRC label. These publications often discuss Belgian law in particular and are usually directed toward peers as well as professional law practitioners and students. A functional balance means that these types of publications exist alongside the English language publications in international outlets.
However, there are also potential drawbacks to Dutch language peer-reviewed books. Restricting peer reviewers to only colleagues from the same language area restricts the number of potential reviewers. Finding reviewers for scholarly books is important to a label for peer-reviewed books, and as the interviews have indicated, remains a concern. With the GPRC label, the burden of finding adequate reviewers lies with the publisher. Moreover, the GP does not evaluate the quality of the reviews. This is different from e.g., the Kriterium label in Sweden, where the reviewers are appointed by an academic coordinator, who in turn is appointed by the board governing the label (Hammarfelt et al., 2021). This exempts the publisher from the work associated with peer review and places the quality assurance in the hands of the label.
Apart from making the PRFS more inclusive toward regional publishers, the GPRC label also potentially alters the publication practices at those publishers. The increasing demand for peer review, when evaluations focus more on the presence of peer review as a mark of quality, creates the necessity for these smaller publishers to adapt as well and to institute peer review procedures where there were none before. The GPRC label offers a framework for having peer review at these publishers recognized, as well as providing a formal set of rules peer-reviewed publications need to adhere to. This can help in the continuation of a bibliodiverse publishing landscape, but it also functions within a system where books need to be peer reviewed in order to "count."
CONCLUSION
Using a mixed approach consisting of data analysis, interviews and literature, we have discussed the GPRC label within its local context and analyzed its uptake and possible consequences. We show that the label provides an interesting solution to the problem of which books to include in local databases used for the allocation of research funding. The GPRC label is a flexible tool for recognizing individual book publications as peer-reviewed. In that way, it enables local publishers and researchers to have their publications included in the VABB more easily, without having to go through an appeals procedure. Furthermore, the GPRC label can contribute toward the goal of making the VABB database as comprehensive as possible. As such, a total of 2,580 GPRC-labeled book publications have been included in the VABB in the period 2010-2019. Going forward, it seems important to continue to monitor the uptake of the GPRC label, and also to agree with both researchers and publishers on possible improvements to the functioning of the label. The data used in this study do not allow us to analyse what effect the GPRC label has had on the publishers and whether the publishers experience problems with the implementation of the label. We therefore suggest that future studies of the GPRC label take their perspective into account. Apart from that, an evaluation of the reviews submitted in the peer review dossiers could provide insight into the quality of the peer review. This last part, however, can only be attempted by enlisting independent reviewers from the different disciplines.
We have shown that the GPRC label is connected to bibliodiversity, in the sense that it allows books by smaller and medium-sized publishers in Flanders to be added to the VABB database. Aligning the PRFS with the specific publication practices of the SSH can be an element toward protecting the scholarly book publishing traditions of those disciplines. The results indicate that GPRC publications are often locally oriented, both in terms of topic (focus on Belgium or the Low Countries) and language. The majority of publications were written in Dutch, which corroborates the idea that the GPRC label can contribute toward a "balanced multilingualism." While the GPRC label will likely not convince authors to write their publications in Dutch, it does allow their Dutch-language publications at local publishers to be counted in the PRFS.
We also found that the GPRC label was unevenly distributed among disciplines, with Law publishing by far the most GPRC-labeled book publications. The reason is that the different disciplines do not have the same use for the label. The label is most interesting for disciplines that publish locally oriented books at publishers that are not already included in the VABB. It certainly does not create a drive toward the publication of Dutch-language books in those disciplines that have a predominantly journal-oriented international profile. This is not very surprising, but it does point toward the fact that while the GPRC label is part of a PRFS, it does not seem to have causal effects on the choice of publication outlet or language.
A final finding of this study concerns the aspect of peer review. The GPRC label has prompted publishers that previously did not have a formalized peer review procedure to adopt one in order to comply with the GPRC regulations. It remains to be seen whether publishers will continue to publish GPRC-labeled books, or whether some publishers may decide to publish fewer GPRC books because of the extra work involved in organizing the peer review procedure. Whereas the initiative for obtaining a GPRC label currently comes almost entirely from the author, publishers could become more interested in using the GPRC label if the label were recognized within the scholarly community as a mark of quality. If the GPRC label were to be revised, it would be helpful to take inspiration from other labels for peer-reviewed books that have recently been introduced. The Finnish and Swedish examples show that there are different uses for a label for books; the GPRC label's sole focus on approval in the VABB may be a limiting factor to its further uptake.
Finally, a few limitations of this study as well as suggestions for further research can be given. Firstly, we have not taken book series into account in our analysis. Secondly, only limited attention was given to comparisons with the whole of the VABB. Thirdly, the total number of publications was quite small and the time frame limited to 10 years, which makes it difficult to identify long-term trends. Another limitation is that we could not make causal claims about the effects of the GPRC label. Finally, future research could focus on analyzing in detail the developments at the Flemish publishers. Further studies could also explore the content of GPRC publications. While we offered a first look at the publication titles of Law and History publications, a closer analysis of both GPRC and non-GPRC publication titles and abstracts would enhance our understanding of the local and international characteristics of GPRC publications as well as the intended audiences of GPRC publications. This study also did not include an international comparison; in particular, a comparison with the Finnish and Swedish labels for peer-reviewed publications could be interesting. Lastly, interviews with a larger number of authors from different disciplines as well as publishers of GPRC publications could give helpful suggestions for improving the label.
DATA AVAILABILITY STATEMENT
Part of the dataset is publicly available on: doi: 10.5281/zenodo.4472810. Part of the dataset is not available online. Requests to access these datasets should be directed to peter.aspeslagh@uantwerpen.be; raf.guns@uantwerpen.be; tim.engels@uantwerpen.be.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee for the Social Sciences and Humanities (EASHW) University of Antwerp. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
EV: conception and design, drafting the work, and performing the analysis. RG and TE: conception and design, substantial critical revisions, and providing approval for publication. All authors contributed to the article and approved the submitted version.
Role of prophylactic central compartment lymph node dissection in clinically N0 differentiated thyroid cancer patients: analysis of risk factors and review of modern trends
In the last years, especially thanks to the wide diffusion of ultrasound-guided FNBs, a surprising increase in the incidence of differentiated thyroid cancer (DTC), "small" tumors and microcarcinomas has been reported in the international series. This led endocrinologists and surgeons to search for "tailored" and "less aggressive" therapeutic protocols avoiding risky morbidity and useless "overtreatment". Considering the most recent guidelines of referral endocrine societies, we analyzed the role of routine or so-called prophylactic central compartment lymph node dissection (RCLD), also considering its benefits and risks. Literature data showed that the debate is still open and that surgeons are divided between proponents and opponents of its use. Even if lymph node metastases are commonly observed, and micrometastases are reported in up to 90 % of DTC cases, the impact of lymphatic involvement on long-term survival is subject to intensive research and the best indications for lymph node dissection are still controversial. Identification of prognostic factors for central compartment metastases could assist surgeons in determining whether to perform RCLD. Considering the available evidence, a general agreement to reserve RCLD for "high-risk" cases was observed. Further clinical research aimed at identifying risk factors of meaningful predictive power, as well as prospective long-term randomized trials, would be useful to validate this selective approach.
Background
Differentiated thyroid cancer (DTC) is a relatively uncommon malignancy representing 1-2 % of all human malignancies, with a worldwide mean annual incidence per 100,000 individuals ranging from 1.2 to 2.6 in men and from 2.0 to 3.8 in women, and with a surprising increase in the last decades [1][2][3][4][5]. Papillary thyroid cancer (PTC) is the most frequent variant, and recently in the USA its incidence has increased by more than 240 %, with 62,980 new cases expected in 2014. Thanks to US-guided FNBs, more and more tumors smaller than 2 cm and microcarcinomas are diagnosed [6][7][8][9][10][11][12]. According to the main international series, better oncological outcomes are expected in these cases, and consequently a less "aggressive" and "tailored" multimodal approach has been adopted to avoid useless "overtreatment" and risky morbidity. Finally, different scientific societies suggested an accurate evaluation of laboratory, instrumental, clinical and genetic risk factors before initiating a more "aggressive" therapeutic approach [13,14].
Considering the high rate of lymph node metastases, routine or so-called prophylactic central compartment lymph node dissection (RCLD) in clinically N0 patients is a matter of intensive research and is still debated [15,16]. Endocrine surgeons, head and neck surgeons and otolaryngologists are divided between supporters and detractors of its use. In the last decade, in order to reduce the locoregional relapse (LR) rate and thyroglobulin (Tg) serum levels, a trend toward routine dissection, avoiding radioactive iodine (RAI), has generally been reported. Nevertheless, considering evidence-based medicine (EBM) data, several authors have lately suggested avoiding it in clinical practice and reserving prophylactic dissection for high-risk patients [17][18][19][20]. In the absence of macroscopic lymph node metastases, node dissection might determine an overstaging of disease and a risky overuse of RAI, which is not associated with better oncological outcomes in terms of LR and long-term survival in every case. Moreover, a non-negligible morbidity should be taken into account.
So, in order to avoid risky "overtreatment" in low-risk patients without, at the same time, underestimating the real oncological status, the identification of pre- and perioperative risk factors of meaningful predictive power is of paramount interest and the subject of intensive research.
Performing a literature review and considering the available evidence, in an attempt to better clarify the suitable indication and extension of routine central compartment neck dissection, we reviewed the indications for lymphatic dissection in the current management of DTCs. The clinical experience regarding the management of thyroid cancer in the last two decades at the endocrine and endocrine surgery centers participating in the study was also taken into account.
Study design
LD, lateral or central LD, modified radical neck dissection, RCLD, selective, prophylactic or therapeutic LD, DTC and PTC were used as key words to perform a PubMed database research. The most recent guidelines regarding neck dissection for PTC according to the American Thyroid Association (ATA), European Thyroid Association (ETA), Unità operative di Endocrinochirurgia (UEC), American Head and Neck Society and American Academy of Otolaryngology-Head and Neck Surgery, Japanese Society of Thyroid Surgeons and Japanese Association of Endocrine Surgeons (JSTSJAES) and European Society of Endocrine Surgeons (ESES) were also considered. In particular, regarding terminology of cervical lymphatic anatomy (neck levels) and classification of neck dissection, the most recent ATA guidelines were considered [8]. LD benefits and risks, available evidence, complications and impact on locoregional recurrence rate and mortality were also evaluated.
Risk factors
Researchers try to identify predictive risk factors for thyroid cancer, pre-existing co-causes of cancer onset or cancer-associated conditions that should be considered especially in the preoperative evaluation of uncertain neoplastic lesions (Table 1). The most important environmental and exogenous factors are X-ray and 131 I exposure, iodine and endocrine disruptors, while the endogenous ones are gender and age, TSH, autoimmunity, obesity and insulin resistance, hereditary conditions and family history.
Exposure to head or neck radiation in childhood is a proven risk factor correlated to the intensity of radiation and the age of the child, increasing with larger doses and with younger age at treatment [21]. Moreover, X-ray and 131 I exposure may increase thyroid cancer risk also in adults, as may autoimmune thyroiditis or prolonged iodine deficiency associated with elevated TSH serum levels [22][23][24][25].
The gender disparity in incidence, aggressiveness and prognosis of thyroid cancer is well established. A possible role in the sex-related difference in biologic behavior might be played by the difference in the estrogen receptor subtypes expressed in tissue, but the argument is still controversial and the underlying causes remain to be clarified [26]. A family history of thyroid cancer is present in about 5 % of patients, and interesting research has been reported. Usually, familial non-medullary carcinoma is mostly of papillary histotype, more aggressive than the sporadic forms, with an incidence of 6.2-10.5 % and an autosomal, polygenic, dominant transmission with incomplete penetrance [1]. PTC may furthermore occur in patients with familial adenomatous polyposis and its subtype Gardner's syndrome, both sharing etiopathogenic defects in the APC gene [27,28]. A high risk of papillary or follicular thyroid carcinoma has also been described in patients with Cowden's disease and in people affected by Carney complex type I [1]. A higher BMI and incorrect eating habits, such as the excessive use of butter, cheese, starches and smoked fish, are moreover associated with an increased risk, while a diet rich in fruits and vegetables seems to play a protective role [29][30][31][32]. Moreover, recent studies suggest the possibility that insulin resistance and hyperinsulinemia, typical features of obesity and metabolic syndrome, may be a risk factor for thyroid cancer [5,33].
The role of some environmental pollutants as endocrine disruptors interfering with the secretions of the hypothalamic-pituitary-thyroid axis is subject to intensive research, and definitive conclusions have not been reported [5,34]. Finally, an interesting paper demonstrated an increased incidence of papillary thyroid microcarcinoma in Sicily, especially in the volcanic area, where it was more aggressive in young patients [35].
Prognostic factors: state of the art
There is no consensus regarding RCLD in PTC patients, and identification of prognostic factors for central lymph node metastases (CLNM) could assist surgeons in determining whether this procedure should be performed. Therefore, pre- and intraoperative risk factors for level VI metastases are of paramount interest and subject to intensive research.
Several papers yielded conflicting results due to variations in the study settings and in the observed population [36][37][38].
Factors increasing the risk of CLNM include the following: tumor size >1 cm, aggressive variants of PTC, extrathyroidal extension, tumor multifocality, age >45 or <15 years, male gender, white race, familiality, and BRAF V600 mutation [39][40][41]. Nevertheless, literature results are not conclusive and still a matter of debate. As is well recognized, age has undoubtedly been reported to be a risk factor. The cut-off of 45 years is widely used as a clinical marker for prognosis [42]. In fact, traditionally, patients older than 45 years are more often associated with poor prognosis and increased recurrence, as is also frequently reported for children <15 years [43].
A recent meta-analysis has observed that age younger than 45 years is a significant risk factor for CLNM in cN0 patients [44].
Although the incidence of thyroid cancer is higher in women, the rates of malignancy and mortality due to thyroid cancer are higher in men [45]. Male sex can therefore be considered a risk factor [44].
Tumor size is another important factor in TNM staging, and large tumors are more prone to be aggressive [46]. Tumor size has been repeatedly confirmed as an independent predictor of both pathologic and clinical outcomes. Lymph node metastasis is known to increase with tumor size, and moreover, Jeong et al. showed an association between large neoplasms and LN metastases [47]. In their study, mean tumor size was greater in N+ cases compared to N0 patients (1.59 ± 1.03 vs 0.93 ± 0.62 cm; p < 0.001). A literature meta-analysis confirmed that larger tumors (>1 cm) were associated with an increased risk of CLNM [44].
Lim et al., in a previous study, had reported that tumor size (>5 mm) was a significant predictive factor of CLNM in PTC microcarcinoma [48]. Machens et al. had also demonstrated that PTC microcarcinomas of >5 mm were more often associated with poor prognostic factors compared with those of <5 mm [49].
BRAF mutations have been found in various cancers including melanoma, colon cancer and thyroid cancer, and in PTCs the BRAFV600E mutation, a T1799A point mutation in the B-type Raf kinase gene, is thought to be the most common genetic alteration related to tumor aggressiveness and poor prognosis [50][51][52][53][54][55].
Moreover, the BRAFV600E mutation has been independently related to known unfavorable prognostic factors such as extrathyroidal invasion, lymph node metastases, advanced tumor stage (III/IV) and aggressive subtypes. In fact, it was associated with PTC recurrence, even in low-risk groups [53]. Finally, in a retrospective multicenter study, BRAFV600E mutation-positive patients experienced more deaths per 1000 person-years than their wild-type counterparts (11.80 vs 2.25, hazard ratio = 3.53) [54].
A recent meta-analysis, including 20 studies and 9084 patients who had undergone thyroidectomy plus prophylactic central lymph node dissection (PCLND), extensively focused on the risk factors for central lymph node metastasis (CLNM) in patients with clinically negative central compartment lymph nodes [44]. The following variables were found to be associated with an increased risk: age <45 years, male sex, multifocality, tumor size >2 cm for PTC and >0.5 cm for papillary microcarcinoma, location of the primary tumor in the central area and low lobe, lymphovascular invasion, capsular invasion and extrathyroidal extension. In contrast, bilateral tumors and lymphocytic thyroiditis did not show an association with an increased risk of CLNM in these patients. The authors concluded that these factors should guide the application of PCLND in patients with clinically negative central compartment lymph nodes.
In conclusion, identification of PTC clinicopathological risk factors is crucial to improve the accuracy of recurrence rate estimates and to facilitate the calculation of patient-specific disease mortality rates. Furthermore, it could allow a better selection of the therapeutic protocol and facilitate the choice of follow-up modality [56].
Lymph node dissection: definition and rationale of modern trends
CLNM are very frequent while lateral metastases are rare and might be associated with a worse prognosis. In most cases, contralateral central or lateral spreading follows ipsilateral metastases, but "skip lesions" may be observed in about 10 % of patients, especially in superior pole cancers. In clinically N0 patients, the most suitable dissection is still debated because of the uncertain prognostic value of micro- and macroscopic nodal metastases on oncological outcomes. The obscure significance of node involvement is the main cause of this unsolved issue. Thyroid cancer represents a unique and atypical neoplasm mostly associated with a favorable prognosis. Unlike all other cancers originating from different anatomic districts (chest, gastrointestinal tract, reproductive system, and so on), in which lymph node metastases are unavoidably associated with a worse prognosis, in DTC they are not synonymous with more aggressive biological behavior and are not associated with unfavorable outcomes. Nevertheless, in clinically N+ patients, a higher rate of persistent or recurrent disease is mostly reported, and in "high-risk" cases node metastases might affect long-term survival [16,57]. According to Smith et al., who reported an analysis of about 11,000 cases, in clinically N+ patients >45 years lateral nodes are associated with a worse prognosis compared with younger patients with central compartment metastases [58].
Moreover, the surprisingly high incidence of microscopically positive lymph nodes, their natural evolution and their infrequent progression to clinical recurrence represent a second obscure phenomenon that should be clarified by further research. All therapeutic efforts to eradicate microscopic disease do not favorably modify already fair oncological outcomes, while placing patients at risk of useless "overtreatment" with long-term unfavorable side effects.
In the clinical management of node metastases, several oncological principles are well-known dogmas while others are still debated. First of all, an accurate staging is recommended, but physical examination and cervical ultrasound are still critical in the preoperative work-up because in about one third of cases unexpected node metastases are subsequently discovered. Thus, an accurate intraoperative inspection from the hyoid bone to the sternal notch by an expert surgeon is mandatory to avoid missing residual disease, which is postoperatively associated with higher Tg serum levels and recurrence rate [59]. Explicit and clear communication between specialists about prior operations (extent of disease and sublevels of dissection) is very important to avoid risky interventions and to facilitate surgical management (scarred surgical bed). Until the anatomic node classification and definition of neck dissection by the American Society of Head and Neck Surgery and subsequently by the ATA [8], operative reports were unable to accurately describe lymphatic involvement and the extension of the dissection performed. Consequently, retrospective analysis of surgical results was unreliable and outcomes were incomparable. Thanks to specific anatomic landmarks, nodes were accurately divided into cervical and mediastinal levels (I-VII) and moreover grouped into the central (VI and VII) and lateral (II-V) neck compartments. Central and lateral neck dissections were described in a published consensus statement on terminology and classification [60,61].
Neck dissection is nowadays performed by a standardized and widely diffused surgical approach. Selective lymph node dissection, introduced by Ballantyne in 1980 (central LD is one of its variants) and consisting of en bloc removal of all lymphatic fibro-adipose tissue along specific fascial planes, and modified radical neck dissection (MRND), first described by Suarez-Bocca in 1967 and consisting of en bloc removal of all neck levels (I-VII) with preservation of the jugular vein, sternocleidomastoid muscle and spinal accessory nerve, became the operations of choice in DTC treatment. Radical neck dissection, associated with higher morbidity, and "berry picking", followed by a higher recurrence rate, are mostly contraindicated.
In case of clinical node involvement, a compartment-oriented resection of the entire lymphatic basin (systematic selective ipsilateral or bilateral, central or lateral dissection, or mono- or bilateral MRND) is recommended according to risk categories in order to obtain a lower recurrence rate and a higher survival [62,63]. In clinically N+ "high-risk" patients, in the presence of more than five metastatic lymph nodes or of one node greater than 3 cm in diameter, a selective lateral dissection (levels III-IV) may be added to the central compartment dissection [64]. In patients affected by lateral metastases, central and lateral neck dissection is required (levels II, III, IV, VI), reserving a bilateral dissection for cases of multiple metastases, considering the elevated incidence of contralateral central neck metastases demonstrated in surgical specimens.
Conversely, in the absence of involved nodes, the role of prophylactic LD is still debated [65,66]. According to its proponents, RCLD, defined as complete excision of levels VI and VII (considering the recognized anatomical continuity from the neck to the superior mediastinum), might be safely performed, avoiding missing virulent disease, allowing a better chance of cure with low morbidity, and reducing postoperative Tg serum levels and recurrence risk. Further suggested advantages include: avoiding the higher morbidity of reoperations, removing a potential source of recurrence, improving diagnostic accuracy, simplifying follow-up and, finally, modifying the indications for RAI [67][68][69]. Caliskan et al. suggested that central compartment dissection is technically feasible and safe, representing the best way to determine node status for a more accurate staging and risk stratification [69]. Nevertheless, it is generally associated with a higher rate of transitory complications and, according to Barczynski et al. and T.S. Wang et al., it is contraindicated in low-volume centers [70,71]. A higher morbidity rate, the uncertain clinical significance of node involvement, the absence of proven benefits on survival, a consequent upstaging and, finally, an overuse of RAI with undesirable side effects such as nausea, vomiting, ageusia, salivary gland swelling, sialoadenitis, xerostomia, pulmonary fibrosis, dental caries and second primary malignancies (0.5 %) are advocated against routine LD. Moreover, a similar risk of local recurrence (0-9 %) was reported in clinically N0 patients who underwent RCLD or TT alone [8], in contrast to N+ cases (relapse rate up to 40 %).
The most common questions are the following: Does RCLD reduce locoregional recurrence? Does it increase morbidity? Does it increase morbidity in patients who have to be re-operated on the central compartment? According to meta-analyses and evidence-based medicine (EBM) studies, RCLD might reduce locoregional recurrence (level IV-V, no recommendation), improve disease-free survival (grade C), increase the number of patients with undetectable Tg levels (level IV, no recommendation) and increase permanent hypoparathyroidism and recurrent laryngeal nerve lesions (grade C).
It must be performed by experienced hands (grade C) [67,72]. Finally, central compartment reoperations increase the risk of permanent hypoparathyroidism and recurrent laryngeal nerve lesions. Recently, Barczynski et al. stated that RCLD upstages PTC patients, leading to an overuse of RAI that is not associated with a better outcome [70]. Nevertheless, several authors demonstrated that most lymph node recurrences occur in the lateral compartment (levels III-IV), reducing the supposed benefits of RCLD [73].
Moreover, micrometastases do not affect the clinical course and outcome of PTC patients [20]. In conclusion, routine LD allows better staging and RAI selection, reducing in some cases the Tg serum level and the recurrence rate, but it is nevertheless associated with a higher risk of complications. Based on the literature, Table 2 reports the arguments of proponents and opponents of prophylactic node dissection, testifying that the debate is still open.
In addition, in the most recent series, a similar recurrence rate was reported in patients who underwent TT alone or TT associated with routine LD, reducing the presumed advantages of prophylactic operations (Table 3).
The adoption of risk factors for stratifying patient categories remains of paramount importance to avoid useless RCLD. As reported above, age <45 years, multifocality, familiality, male sex, aggressive pathological variants, BRAF V600 mutation and tumor size larger than 1 cm are the main independent preoperative variables. They should be considered together with extracapsular thyroid infiltration, positive margins and lymphovascular invasion, which may become apparent intra- or postoperatively. Therefore, different selection criteria have been suggested to identify the best indications for RCLD. Several surgeons favor its use in high-risk cases, in the presence of a positive frozen section or biopsy-proven disease, and in patients with clinically positive lateral nodes. In addition, ipsilateral routine dissection plus frozen section (and eventually contralateral dissection) was recently introduced, considering the high morbidity of the bilateral procedure and the fact that isolated contralateral lymph node metastases are exceptionally described. This tailored approach showed a staging ability similar to that observed after bilateral RCLD and a lower morbidity rate, similar to that reported after TT alone. Its main limitation is the risk of overlooking contralateral metastases, with a higher recurrence rate [74].
In the absence of adequate statistical power to demonstrate clear benefits on long-term outcomes, more prospective clinical trials are needed.
The most recent ATA and UEC guidelines state that prophylactic LD could be considered in high-risk patients with advanced primary tumors and should be performed by high-volume surgeons to avoid permanent complications [62]. A reduced local recurrence rate and a lower Tg serum level may be expected, and the procedure also allows better staging [68], but a prospective randomized study on the role of RCLD would be very expensive and not readily feasible [75]. The 2014 guidelines of the Japanese Society of Thyroid Surgeons and the Japanese Association of Endocrine Surgeons (JSTS-JAES) state that, in the absence of definitive data on prophylactic CND in large series of patients, its indication depends on institutional policy and on the surgeons' skill levels [76]. On the contrary, the consensus of the European Society of Endocrine Surgeons (ESES) confirmed that routine prophylactic level VI dissection should be risk-stratified, being reserved for T3-T4 cases, patients >45 or <15 years old, male patients, bilateral or multifocal tumors, and known involvement of lateral lymph nodes [77].
In our experience (a retrospective clinical study on 221 cases), TT followed by RAI administration and TSH suppression therapy guaranteed optimal long-term results, with a low incidence of locoregional recurrence, similar to that reported in patients who underwent TT alone [16]. Reoperations were usually not associated with higher morbidity, especially when a unilateral dissection was performed, although hypoparathyroidism and unintentional recurrent laryngeal nerve injury have been observed in up to 14 and 9 % of patients, respectively [78].
In the absence of enlarged lymph nodes, and when RAI administration is already advisable (tumor >2 cm in a male patient >50 years old), routine lymph node dissection might not be indicated, while in low-risk patients with tumors ≤1 cm, RCLD may reveal metastases requiring RAI ablation, thereby modifying the therapeutic protocol [68]. Nevertheless, in these patients the advantages of RAI remain to be proven. Gyorki et al., in favor of therapeutic central neck dissection, hypothesized in a recent assessment of the clinical evidence that node positivity following prophylactic dissection may encourage administration of higher doses of 131 I without obvious benefit [79].
Finally, considering the potential oncological benefits and the morbidity rate, routine dissection of level V (a/b) remains a controversial topic, and its indication is reserved for N+ cases [80].
Conclusions
In the treatment of DTC, considering the markedly increasing rate of early diagnosis of tumors smaller than 2 cm and of microcarcinomas, and the better oncological outcomes expected, a "tailored" and "less aggressive" multimodal therapeutic approach should be suggested, in order to avoid the unfavorable, even if minimal, morbidity that follows potential "overtreatment".
In the absence of involved lymph nodes, prophylactic dissection should be avoided, reserving RCLD for "high-risk" patients to reduce the local recurrence rate. More research is needed in order to identify pre- and perioperative risk factors with predictive power, useful in planning tailored therapeutic protocols.
Competing interests
The authors declare that they have no competing interests.
Design and application of a testing device for solar reflective materials
Using white polyvinyl chloride pipes, gray polyvinyl chloride pipes, natural rubber foam sheets, and nylon 6 as experimental materials, this article designs a sandwich structure solar reflective material testing device. By testing the thermal conductivity of the individual materials and of the composite materials in the device, and by conducting thermal insulation verification experiments on the device, it was found that the overall thermal conductivity of the device is low and that the internal temperature of the device is only slightly affected by the external environmental temperature. The results indicate that the solar reflective material testing device has good insulation performance.
Introduction
Solar energy is a green and renewable energy source with advantages such as universality, harmlessness, and longevity. However, solar energy is strongly dispersed, and in sunny summer weather in particular, strong solar radiation leads directly to hot conditions and inconvenience for residents. In response to this situation, solar reflective materials have appeared on the market one after another [1]. Such materials can limit the continuous temperature rise on the surface and inside the covered object caused by strong solar radiation, effectively shielding ultraviolet light and reducing indoor temperature [2]. This article designs a sandwich structure solar reflective material testing device using white polyvinyl chloride pipes, gray polyvinyl chloride pipes, natural rubber foam sheets, and nylon 6 as experimental materials. First, the thermal conductivity coefficients of the four raw materials in the device were tested, and the thermal conductivity coefficients of the composite materials in the device were calculated using theoretical formulas. Then, the thermal insulation effect of the solar reflective material testing device was verified through experiments [3,4].
Material selection
The experimental materials are white polyvinyl chloride (PVC) pipes, gray polyvinyl chloride (PVC) pipes, natural rubber (NR) foam sheets, and nylon 6 (PA6). These raw materials have good thermal insulation properties and a wide range of applications, and all are commercially available and easy to purchase. The glass used is 5 mm thick ordinary glass, and the thermometer is an ordinary thermometer with a range of 0-100 °C [5].
Schematic diagram of the structure of the solar reflective material testing device
From the device diagram in Figure 1, it can be seen that the outermost layer of the experimental device is a 2 mm thick white PVC pipe, the innermost layer is a 5 mm thick gray PVC pipe, and the middle layer is a natural rubber foam layer. In the sandwich structure at the bottom of the device, the top and bottom layers are nylon layers with thicknesses of 8 mm and 5 mm, respectively, and the middle layer is again a natural rubber foam layer. A hole is drilled at a certain distance from the top of the device to insert a thermometer.
Preparation and testing of thermal conductivity test samples
3.1.1. Pressing PVC and nylon sheets using a flat vulcanizing machine. First, we cut the white PVC pipe, gray PVC pipe, and nylon 6 purchased from the market into pieces and put them into an open plasticizer for melting and plasticizing at 180 °C. We then set the temperatures of the No. 1 and No. 2 hot plates of the flat vulcanizing machine to 180 °C and cut the melted and plasticized sheet into a suitable shape to fill the mold. Next, we adjusted the pressure of the flat vulcanizing machine to 1-2 MPa and hot-pressed the material for 10 minutes, then raised the pressure to 3-5 MPa for another 10 minutes, and finally to 5-8 MPa for 4-5 minutes. We then took out the mold and placed it on a 25-ton flat vulcanizing machine for cold pressing for 5 minutes. After removing the mold and the pressure plate, we cut the material into Ф60 mm × 10 mm thin sheets, yielding white PVC, gray PVC, and nylon 6 sheets [6,7].
Making NR flakes.
We cut commercially available natural rubber into Ф60 mm × 10 mm thin slices and then stick two pieces together.
Thermal conductivity test.
Following the above method, we prepared two pieces each of the white PVC, gray PVC, PA6, and foamed natural rubber samples. They were then tested for thermal conductivity using a Hot Disk thermal conductivity meter.
Preparation of experimental setup for solar reflective material testing device.
Experimental setup 1# is a thermometer and the sandwich structure solar reflective material testing device (as shown in Figure 1). Experimental setup 2# is a thermometer and a single-layer gray PVC tube. Experimental setup 3# is a thermometer and a single-layer white PVC tube. Experimental setup 4# is a thermometer and a multi-layer PVC tube consisting of a white PVC tube and a gray PVC tube. Experimental setup 5# consists of a thermometer and a multi-layer PVC tube with circular foam rubber added to the hollow tube of experimental setup 4#.
3.2.2. Thermal insulation effect testing of the solar reflective material testing device. First, we assemble the different experimental devices and calibrate the thermometers. Before conducting the experiment, we place each experimental device in a room at 20 °C at least 2 hours in advance, so that the thermometer reading can drop to room temperature. At the beginning of the experiment, we close the box tightly and quickly move it to an experimental location with strong sunlight. We then open the box and record the thermometer reading every 2 minutes within the first 10 minutes, and every 5 minutes after 10 minutes, until the reading stabilizes. When the internal temperatures of the experimental devices are roughly the same, we close the box tightly and quickly move it back to a room at 20 °C. After opening the box, we record the thermometer reading every 5 minutes until it drops to room temperature.
Experimental group 1 used experimental setups 1#, 2#, and 3#; experimental group 2 used setups 1# and 4#; and experimental group 3 used setups 1# and 5#. From the test results in Table 1, it can be seen that the thermal conductivities of the PVC pipes and of nylon are relatively high, so their insulation effect is poor when they are used alone as insulation devices. Although the thermal conductivity of the rubber is low, its rigidity is insufficient. Therefore, none of these materials is suitable for building the experimental device on its own.
Calculation of thermal conductivity of sandwich structure solar reflective material testing device.
The bottom of the sandwich structure solar reflective material testing device is a flat multi-layer composite consisting of nylon, natural rubber foam and nylon, as shown in Figure 2. Treating the layers as thermal resistances in series, its effective thermal conductivity is calculated as k = H / (h1/k1 + h2/k2 + h3/k3), where H is the total thickness, h1, h2 and h3 are the thicknesses of the individual layers, and k1, k2 and k3 are their thermal conductivities. The resulting value is k = 0.187 W/(m·K).
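As a quick cross-check of this series-resistance calculation, the short Python sketch below computes the effective conductivity of a three-layer flat wall. The layer thicknesses and conductivities used here are illustrative placeholders, not the measured values from Table 1.

```python
def effective_conductivity_flat(thicknesses_m, conductivities_w_mk):
    """Effective thermal conductivity of flat layers in series:
    k_eff = H / sum(h_i / k_i), where H is the total thickness."""
    total_thickness = sum(thicknesses_m)
    total_resistance = sum(h / k for h, k in zip(thicknesses_m, conductivities_w_mk))
    return total_thickness / total_resistance

# Illustrative values only (nylon / NR foam / nylon), not the measured data.
layers_h = [0.008, 0.010, 0.005]   # layer thicknesses in m
layers_k = [0.25, 0.04, 0.25]      # layer conductivities in W/(m*K)
print(effective_conductivity_flat(layers_h, layers_k))
```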
In Figure 3, the tube wall of the sandwich structure solar reflective material testing device is a cylindrical multi-layer composite composed of white PVC, natural rubber foam, and gray PVC. Its thermal conductivity is calculated from the thickness bi of each layer, the thermal conductivity λi of each layer, and the outer and inner radii ri+1 and ri of each layer. The value obtained from the derived formula is 0.109 W/(m·K).
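For the cylindrical wall, the sketch below shows one common way to define the equivalent conductivity of concentric layers: the total logarithmic thermal resistance of the layers is matched to that of a single equivalent layer spanning the same inner and outer radii. The radii and conductivities are again placeholders, and the exact expression used for the calculation above may differ.

```python
import math

def equivalent_conductivity_cylinder(radii_m, conductivities_w_mk):
    """Equivalent conductivity of concentric cylindrical layers, defined so that
    ln(r_out / r_in) / k_eq equals sum_i ln(r_{i+1} / r_i) / k_i.
    radii_m: boundary radii from innermost to outermost (length = n layers + 1)."""
    log_resistance = sum(
        math.log(radii_m[i + 1] / radii_m[i]) / conductivities_w_mk[i]
        for i in range(len(conductivities_w_mk))
    )
    return math.log(radii_m[-1] / radii_m[0]) / log_resistance

# Placeholder geometry (gray PVC / NR foam / white PVC), not the device's real radii.
radii = [0.050, 0.055, 0.073, 0.075]   # m
ks = [0.16, 0.04, 0.16]                # W/(m*K)
print(equivalent_conductivity_cylinder(radii, ks))
```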
The above calculations show that the composite experimental device performs better than single-material devices made of PVC, NR, or PA6: it not only has a lower thermal conductivity and a good insulation effect, but also a certain degree of rigidity. It is therefore suitable and feasible for building the experimental device.
Insulation effect testing of sandwich structure solar reflective material testing device
Figure 4 shows the heating curves of the different experimental devices. As time went on, the temperatures of the three experimental devices all increased to varying degrees and eventually converged. Throughout the process, the temperature of 3# was generally higher than that of 2#, and the temperature of 2# was higher than that of 1#. This is because 1# is the sandwich structure solar reflective material testing device, which has a low thermal conductivity, so the environmental temperature has a relatively small impact on its internal temperature. Although 2# and 3# are both PVC pipes, the gray PVC pipe wall is 3 mm thick, thicker than the white PVC pipe wall, so the temperature change of experimental device 2# is smaller than that of experimental device 3#. Similarly, in Figure 5, when the temperature drops back to room temperature, the temperature of 1# changes the slowest, the temperature of 3# changes the fastest, and the temperature of 2# changes at an intermediate rate. Figure 6 shows the heating curves and Figure 7 the cooling curves of devices 1# and 4#. From Figures 6 and 7, it can be seen that the temperature changes of 1# and 4# are roughly the same. This is because the thermal conductivity of natural rubber foam is very close to that of air, so the natural rubber foam layer is effectively equivalent to an air layer. Figure 8 shows the heating curves and Figure 9 the cooling curves of devices 1# and 5#. Setup 5# was used to exclude the influence of not adding circular foam rubber under the gray PVC pipe on the internal temperature of the experimental device. From Figures 8 and 9, the temperature changes of 1# and 5# are also roughly the same, again because the thermal conductivity of natural rubber foam is close to that of air, so the natural rubber foam layer is equivalent to an air layer. Adding a natural rubber foam layer can, however, further improve the stability and durability of the device. Overall, the insulation tests show that the sandwich structure solar reflective material testing device designed in this article has good insulation performance [10].
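To make the qualitative comparison above more concrete, the sketch below uses a simple first-order (lumped-capacitance) model in which the internal temperature approaches the outside temperature with a time constant proportional to the wall's thermal resistance, so a lower wall conductivity gives a slower rise. The time constants and temperatures are illustrative assumptions, not values fitted to the reported curves.

```python
import math

def internal_temperature(t_min, t_inside0, t_outside, time_constant_min):
    """First-order response: T(t) = T_out + (T_0 - T_out) * exp(-t / tau)."""
    return t_outside + (t_inside0 - t_outside) * math.exp(-t_min / time_constant_min)

# Illustrative time constants: a lower-conductivity wall gives a larger tau and slower heating.
for label, tau in [("sandwich device (1#)", 60.0),
                   ("gray PVC tube (2#)", 25.0),
                   ("white PVC tube (3#)", 15.0)]:
    temps = [round(internal_temperature(t, 20.0, 40.0, tau), 1) for t in (0, 10, 30, 60)]
    print(label, temps)
```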
Conclusion
To facilitate the study of the actual cooling effect of insulation materials, this article designed a sandwich structure solar reflective material testing device using white PVC pipes, gray PVC pipes, natural rubber foam sheets, and nylon 6 as experimental materials. The thermal conductivity tests showed that the thermal conductivity of each layer of material is relatively low and that the overall thermal conductivity of the composite is also low. Subsequent comparative thermal insulation experiments showed that the device has a good thermal insulation effect, with its internal temperature only slightly affected by the external environment. If applied in architectural design, I believe it can effectively help to alleviate the current problem of high indoor temperatures caused by strong solar radiation.
Figure 1. Schematic diagram of the structure of the solar reflective material testing device.
Figure 2. Bottom sectional view of the sandwich structure solar reflective material testing device.
Figure 3. Wall profile diagram of the sandwich structure solar reflective material testing device.
Figure 5. Cooling curves of different experimental devices (1#, 2# and 3#).
Figure 7. Cooling curves of different experimental devices (1# and 4#).
Table 1. Thermal conductivity test results of each sample.
Research on hydrogen energy storage capacity model based on Genetic Algorithm in new power system
To promote China's energy revolution and realize its energy transformation, green and low-carbon energy development and a comprehensive green transformation of economic and social development are key. Among them, the development of hydrogen energy has great influence and significance for energy saving, emission reduction, deep decarbonization and improvement of utilization efficiency. For the power industry, it is necessary to build a new power system with new energy as the main body. Under this system, this paper establishes a hydrogen energy storage planning model by studying the application scenarios of new energy sources, and uses a genetic algorithm to solve it. Finally, a case study demonstrates the suitability of this planning model for hydrogen energy storage planning and provides theoretical support and a decision-making basis for large-scale investment in, and operation of, similar systems in the future.
Introduction
China's goals of "peak carbon dioxide emissions" and "carbon neutrality" will accelerate the evolution of the power system toward a high-proportion new energy power system based on wind and photovoltaic power generation, and ensuring the stable operation of the power system is at the core of this transformation [1]. As a clean, zero-carbon, multifunctional secondary energy carrier, hydrogen can be exchanged with electric energy and stored efficiently for long periods, and it will play an important role in the flexible regulation of a power system with a high proportion of new energy [2][3]. The main path of the low-carbon transformation of the power system is to increase the share of new energy generation, such as wind power and photovoltaics, in primary power.
New energy consumption application scenarios
In the new energy consumption application scenario, the control system adjusts the power split between grid feed-in from wind and photovoltaic generation and hydrogen production, absorbing curtailed wind and photovoltaic power to the maximum extent and alleviating the "bottleneck" problem of large-scale wind and photovoltaic grid integration. The otherwise curtailed electricity is used to electrolyze water into hydrogen (and by-product oxygen), and the storage density of the hydrogen is increased through pressurized hydrogen storage. Hydrogen can then be used as a multi-purpose, high-density clean energy carrier: it can improve the power quality of the wind power grid by feeding electricity back through fuel cells (FC) or hydrogen internal combustion engines (H2ICE), and it can also be transported as an energy carrier by vehicle or pipeline. Hydrogen produced from clean electricity in this way is called green hydrogen, which is an important direction of hydrogen energy development in the future [5]. Electrolysis equipment can tolerate large fluctuations in input power, and large-scale hydrogen production is an effective way to smooth the fluctuating output of new energy. Under this path, the flexibility of the hydrogen production load can be fully exploited by deploying large-scale electrolysis facilities both off-grid and grid-connected, tracking the fluctuating output of new energy generation in real time on the source side and the grid side, which effectively addresses the flexible regulation problem of a power system with a high proportion of new energy [6].
From the point of view of hydrogen production, storage and transportation, and end use, alkaline and proton exchange membrane (PEM) water electrolysis can accept fluctuating power input and are therefore suitable as the main electrolytic hydrogen production technologies for absorbing new energy [7]. Storage and transportation are the key factors restricting the large-scale development of hydrogen energy: gaseous storage and transportation have low efficiency, liquid storage and transportation are costly, and safe and economical storage and transportation technologies still need to be developed. At present, the key materials for domestic hydrogen storage tanks are imported, there is a large gap between domestic low-temperature liquid hydrogen and hydrogen storage material technologies and the advanced level abroad, and industrialization is still far away [8]. In terms of end use, hydrogen energy has a certain application prospect in the terminal consumption market.
Objective function
The carbon emission cost is included in the objective function, the economic goal and the low-carbon goal are unified by summation, and a low-carbon economic planning optimization model of the electric coupling system based on the carbon trading mechanism is established. Economically optimal operation of the integrated energy system requires the minimum total system cost. One-way power supply from the power grid to the microgrid is considered. Wind turbines and photovoltaics use natural resources to generate electricity, so their generation cost can be taken as zero. Therefore, the total cost of operating the integrated energy system mainly consists of four parts: the grid electricity purchase cost, the equipment investment cost, the operation and maintenance cost, and the total carbon cost. The objective function is

min F = f_e + f_OM + f_inv + f_CO2

where F is the total cost of the system; f_e is the cost of purchasing electricity from the power grid; f_OM is the operation and maintenance cost of the system; f_inv is the equivalent annual cost of the initial equipment investment of the system; and f_CO2 is the sum of the purchase cost, capture cost and emission cost of carbon dioxide in the system.
Energy balance constraint
(1) Power balance constraint: electricity supply and demand must be matched at every dispatch interval (an illustrative formulation is given below).
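A minimal sketch of such an hourly power balance, written in generic notation since the model's exact symbols are not reproduced here: with assumed variables P_w(t) for wind output, P_pv(t) for photovoltaic output, P_grid(t) for grid purchase, P_fc(t) for fuel-cell output, P_el(t) for electrolyzer consumption and P_load(t) for the electrical load, the balance reads

P_w(t) + P_pv(t) + P_grid(t) + P_fc(t) = P_load(t) + P_el(t),

together with capacity bounds such as 0 ≤ E_H2(t) ≤ E_H2,max on the stored hydrogen energy.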
Genetic algorithm
Conventional genetic algorithms are prone to premature convergence, although they can effectively avoid searching some unnecessary points. The simulated annealing algorithm has strong computational robustness but a slow convergence speed. Therefore, the idea of simulated annealing (SA) is integrated into the conventional genetic algorithm, the fitness function and crossover operator are improved, and the improved genetic algorithm is adopted to solve the problem. This better addresses the shortcomings of premature convergence and poor local search ability of the classical genetic algorithm, and improves the running efficiency and solution quality.
The flow chart of the improved genetic algorithm is shown in the figure; a simplified code sketch of the procedure is given below.
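The following is a minimal, self-contained sketch of the kind of hybrid algorithm described above, applied to a toy one-variable problem of choosing a hydrogen storage capacity that minimizes an assumed cost function. The cost model, bounds and simulated-annealing-style acceptance rule are illustrative assumptions, not the paper's actual planning model.

```python
import math
import random

def total_cost(capacity_kw):
    """Toy stand-in for the objective F = f_e + f_OM + f_inv + f_CO2 (illustrative only)."""
    investment = 1.2 * capacity_kw
    mismatch_penalty = 0.0008 * (capacity_kw - 2800.0) ** 2  # quadratic penalty around an assumed reference size
    return investment + mismatch_penalty

def sa_ga(pop_size=30, generations=200, low=0.0, high=6000.0, t0=200.0, cooling=0.97):
    """Genetic algorithm with a simulated-annealing-style acceptance rule."""
    population = [random.uniform(low, high) for _ in range(pop_size)]
    best = min(population, key=total_cost)
    temperature = t0
    for _ in range(generations):
        new_population = [best]                                      # elitism: keep the best solution
        while len(new_population) < pop_size:
            a = min(random.sample(population, 3), key=total_cost)    # tournament selection
            b = min(random.sample(population, 3), key=total_cost)
            w = random.random()
            child = w * a + (1.0 - w) * b                            # arithmetic crossover
            child += random.gauss(0.0, 0.05 * (high - low))          # Gaussian mutation
            child = min(max(child, low), high)
            # Simulated-annealing-style acceptance: worse children survive with a
            # probability that shrinks as the temperature decreases.
            delta = total_cost(child) - total_cost(a)
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                new_population.append(child)
            else:
                new_population.append(a)
        population = new_population
        temperature *= cooling
        best = min(population, key=total_cost)
    return best, total_cost(best)

capacity_kw, cost = sa_ga()
print(f"selected storage capacity: {capacity_kw:.1f} kW, cost: {cost:.1f}")
```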
Example Analysis of New Energy Stations
Hydrogen is produced near the wind farm; the wind farm and the hydrogen production plant operate as a self-provided power plant and use the curtailed wind power to produce hydrogen by water electrolysis. A schematic diagram of the self-provided power plant and the local hydrogen utilization mode is shown in Figure 4. In this basic scheme, hydrogen is produced using electricity from a 50,000 kW wind farm. Taking wind power curtailment into account, 14.8% of the generated electricity can be used for hydrogen production, all of it concentrated in the heating season. The plant is equipped with three electrolytic hydrogen production units; the first electrolyzer has a service life of 15 years and can produce 3.6 million m3 of hydrogen annually. Under the constraints of the objective function, the installed capacity of hydrogen energy storage is 2853.9 kW. The electric and heat loads of the area are shown in the figure, and the cost composition under this scheme is given in the table. The internal rate of return is 11% and the benchmark rate of return of the project is 10%; when the internal rate of return is greater than the benchmark rate of return, the project is feasible. Taking 11,123.48 yuan as the uniform annual value, NPV1 = 11,123.48 × (P/A, 10%, 15) − 1314 = 83,291.19 yuan and IRR1 = 11% > 10%, so the project is feasible.
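As a check on the discounting arithmetic quoted above, the sketch below computes the series present-worth factor (P/A, i, n) and the resulting net present value. The annual amount and the deduction of 1314 yuan are taken directly from the figures quoted in the text.

```python
def series_present_worth_factor(i, n):
    """(P/A, i, n) = (1 - (1 + i) ** -n) / i, the present value of a 1-yuan annuity."""
    return (1.0 - (1.0 + i) ** -n) / i

annual_amount = 11123.48        # yuan per year, as quoted above
initial_deduction = 1314.0      # yuan, as quoted above
factor = series_present_worth_factor(0.10, 15)
npv = annual_amount * factor - initial_deduction
print(round(factor, 4), round(npv, 2))   # roughly reproduces the quoted 83,291.19 yuan
```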
Conclusion
With the rapid development of new energy generation technologies represented by wind power and photovoltaic power generation, the energy field is experiencing a profound technological and consumption revolution. While achieving high efficiency, cleanliness and low carbon emissions, the consumption problem caused by the intermittency of new energy must also be solved.
Hydrogen energy, as a secondary energy source, is a potential solution to this problem. However, analysis of its basic principles, technical economics and future development potential shows that it is still difficult for the hydrogen energy mode to play a key role in solving future energy and power development problems. At the same time, the establishment of an annual income model for the hydrogen refueling station allows an effective analysis of comprehensive hydrogen energy utilization projects and provides theoretical support and a decision-making basis for future large-scale investment in, and operation of, similar systems.
Acceptability of Telegenetics for Families with Genetic Eye Diseases
Healthcare providers around the world have implemented remote routine consultations to minimise disruption during the COVID-19 pandemic. Virtual clinics are particularly suitable for patients with genetic eye diseases as they rely on detailed histories with genetic counselling. During April–June 2019, the opinions of carers of children with inherited eye disorders attending the ocular genetics service at Moorfields Eye Hospital NHS Foundation Trust (MEH) were canvassed. Sixty-five percent of families (n = 35/54) preferred to have investigations carried out locally rather than travel to MEH, with 64% opting for a virtual consultation to interpret the results. The most popular mode of remote contact was via telephone (14/31), with video call being least preferred (8/31). Hence, 54 families who had received a telephone consultation mid-pandemic (November 2020–January 2021) were contacted to re-evaluate the acceptability of telegenetics using the Clinical Genetics Satisfaction Indicator and Telemedicine Satisfaction Questionnaire. Overall, 50 carers participated (response rate 93%); 58% of participants found teleconsultations acceptable and 54% agreed they increased their access to care, but 67.5% preferred to be seen in person. Patient satisfaction was high with 90% strongly agreeing/agreeing they shared and received all necessary information. Ocular genetics is well-suited for remote service delivery, ideally alternated with face-to-face consultations.
Introduction
In line with the UK response to the COVID-19 pandemic, the government advised that healthcare providers should roll out remote outpatient consultations using video, telephone, email and text message services [1]. The potential role of telemedicine has previously been described in the response to disasters and public health emergencies including COVID-19 [2,3]. Moorfields Eye Hospital NHS Foundation Trust (MEH) is leading a taskforce that is supporting acute providers to rapidly implement virtual consultations where possible, in order to continue the delivery of ophthalmology care during the crisis [1].
Ophthalmology accounts for the largest proportion of outpatient visits per year in the National Health Service (NHS). At MEH, 600,000 outpatient attendances were recorded in 2018/19 and 1% of these were genetic eye disease consultations [4]. As a tertiary referral specialist hospital, referrals come from across the country and many patients maintain dual care with local hospital eye services. This can result in several hospital appointments per year, time off from work or school, travel expenditure and duplication of routine serial monitoring investigations. The increasing prevalence of chronic conditions is necessitating the redesign of patient pathways to improve capacity [5]. Remote consultations, often described as telemedicine, have been found to be most effective in specialties that primarily rely on verbal interaction for assessment. This makes them highly suitable for genetic eye disease consultations where detailed history-taking and genetic counselling are key features [6]. In ophthalmology, virtual consultations are established in various specialties. At MEH, existing electronic resources including Patient Administration System, Medisoft and intranet based worklists, created using Microsoft SQL Server Reports Software, are used to facilitate virtual consultations in medical retina care [7]. Similarly, in glaucoma care, "virtual clinic modules" created on the existing electronic patient record system (OpenEyes system) are used. This has reduced patient journey times and is highly rated by service users (<3% of respondents (n = 620) rated the service as "poor") [8].
Following the UK 100,000 Genomes Project (an initiative to sequence the genomes of 85,000 NHS patients with rare diseases and cancer to advance diagnosis and develop personalised treatments, and to introduce genomic medicine into our healthcare system), access to genetic testing has changed significantly [9]. There is a national directory of approved genetic tests, three appointed laboratory providers of specialised ophthalmic genomic testing and emerging centralised funding for most rare disease patients. This is yielding higher diagnostic rates, and hence, capacity building is required to facilitate the greater demands on specialist services. The Royal College of Ophthalmologists have issued genomics services guidance that outlines NHS England's long-term plan to offer whole-genome sequencing (WGS) to 500,000 individuals by 2023 and are focused on integrating genomics into mainstream ophthalmic practice [10]. With this in mind, it has also been recognised that much training and support will be required for general ophthalmologists, and virtual consultations with specialists in this field will be required until the level of competency is reached.
Recent technological advances have resulted in the development and wide-scale implementation of various modalities enabling ophthalmologists to manage patients remotely. These are used mainly to screen retinal conditions such as retinopathy of prematurity, diabetic retinopathy and age-related macular degeneration, diagnose anterior segment conditions and manage patients with glaucoma [11]. The COVID-19 pandemic poses specific challenges to ophthalmology service delivery with social distancing, frequent changes in restrictions, isolation and quarantine periods and patient anxieties around contracting coronavirus affecting both acute and routine patient care. The effects of delayed acute presentations have already been widely reported with the impact on chronic conditions beginning to emerge [12][13][14][15].
Prior to the COVID-19 pandemic, active canvassing of service user opinions and planning for remote genetic eye disease clinics at MEH was underway. Post-implementation, service user satisfaction and acceptability were evaluated to ensure changes were successful in the long-term. The General Medical Council (GMC) standard criteria for appropriateness of remote consultations (Box 1) suggest that patients with genetic eye diseases are suitable for telegenetics [16]. Mainstay of management is genetic testing to determine the cause with pre-and post-test genetic counselling, and subsequent long term follow up to monitor disease progression with widely available imaging that can be done at their local hospital such as colour fundus photography, visual fields testing, spectral domain optical coherence tomography (SD-OCT) and fundus autofluorescence. Most patients with or at risk of developing ocular co-morbidities such as glaucoma or corneal keratopathy are under regular follow-up with the relevant clinical specialty. Paediatric patients can be under several specialists including paediatricians and general paediatric ophthalmologists to ensure their vision is developing and there are no added amblyogenic factors. This may all involve several visits to the hospital per year, which can cause difficulty with taking time off school or work for carers. There are very few specialist genetic eye disease services across the country, which means that remote populations may not have easy access to them [17]. Furthermore, virtual consultations may also reduce the number of sight-impaired patients travelling long distances. Any services implemented must be acceptable to their users; we present our model of setting up remote clinics for our ocular genetics services and our patient satisfaction and acceptability findings. Box 1. GMC ethical guidance for remote consultations. Is a remote consultation appropriate? [17].
Materials and Methods
Prospectively, between April and June 2019, 56 sequential adult carers of paediatric patients attending the ocular genetics service at MEH were given a short questionnaire, approved by the local Patient Experience Committee (Table 1). Participants were asked to indicate their preference of (1) having their tests carried out at a local hospital rather than at MEH and (2) a review of their results virtually rather than physically attending an appointment. If respondents answered positively to a virtual discussion, they were then asked to indicate whether they would prefer a discussion by telephone call, video call or by letter. Participants were asked what proportion of their appointments they would like to be virtual, including options for every or alternate appointments or until a treatment or trial becomes available (where a more complex discussion might be required). Carers indicated a preference for telephone (n = 14/31) rather than video (n = 8/31) consultations or written communication (n = 10/31) at this stage. So when remote consultations were mandated during the pandemic, this mode of contact was utilised more than video consultation for the paediatric cohort.
Subsequently, between November 2020 and January 2021, sequential adult carers of paediatric patients who had recently received a telephone consultation from the ocular genetics service were contacted, via telephone, and asked a short questionnaire about their experience. Patient satisfaction with both genetic counselling and telemedicine was measured using two previously validated questionnaires: the Clinical Genetics Satisfaction (CGS) Indicator and the Telemedicine Satisfaction Questionnaire (TSQ). The TSQ has previously been shown to have good internal consistency (α = 0.93) for diabetes patients and has since been used to assess patient satisfaction with telemedicine in a genetics setting [18,19]. The CGS has previously shown an excellent internal consistency (α = 0.913) in English when tested at 13 clinical genetic sites at 7 institutions [20]. These were modified to be relevant for our study; items 4, 5, 7 and 10 were removed from the TSQ. Patient satisfaction was indicated using a 1-5 Likert scale response mode, with higher scores indicating greater satisfaction. Participants were also asked to indicate their preference for (1) having their consultation conducted remotely rather than in person at MEH and (2) their preferred method of remote contact, either telephone or video call. STATA V.15 was used to analyse demographic and survey data. Fisher's exact test was used to compare independent categorical variables with two categories, and Pearson's chi-squared test was used to compare independent categorical variables with more than two categories. P values less than 0.05 were considered statistically significant.
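Although the analysis in this study was carried out in STATA, the same type of comparison can be illustrated in a few lines of code. The sketch below runs Fisher's exact test on a hypothetical 2 × 2 table of appointment type (new vs follow-up) against acceptability; the counts are made up for illustration only and are not the study data.

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical 2 x 2 table: rows = new vs follow-up visit, columns = found
# telephone consultations acceptable vs not acceptable (illustrative counts only).
table = [[10, 9],
         [19, 12]]

odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

# Pearson's chi-squared test would be used for variables with more than two categories.
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"Chi-squared test: chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
```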
Results
A total of 56 carers of children completed the pre-pandemic survey on virtual consultations; 57% were attending new visit appointments and 43% were follow ups (Table 1). Sixty-five percent preferred to have their tests carried out locally. Sixty-four percent of participants (n = 34/53) indicated that they would prefer to have a virtual consultation for the review of any results. The majority of families indicated a preference for a telephone call (n = 14/31, 45%), followed by written communication (10/31, 32%), with the fewest responses for video call (n = 8/31, 26%). In terms of frequency of contact, only 5 participants (9%) opted to be seen virtually for every appointment until a treatment or trial became available.
During the COVID-19 pandemic remote consultations were implemented for all triaged non-urgent patients. Hence, 50 carers of children (mean age ± SD, 5.5 ± 4.7 years) with a variety of genetic eye diseases (Figure 1) who had received a telephone consultation as part of their standard care completed patient satisfaction questionnaires, with 19 (38%) and 31 (62%) participants attending new and follow up appointments, respectively. Four carers declined to participate, giving a response rate of 93% (50/54).
Overall, 58% of participants (n = 29/50) found telephone consultations to be an acceptable way to receive health-care services, with 24% indicating neutrality and 18% finding it unacceptable (16% disagree, 2% strongly disagree, n = 9/50) (Figure 2). However, 67.5% of participants (n = 27/40) preferred to be seen face to face rather than remotely. Two thirds of participants (n = 33/50) agreed that telephone consultations provided for their healthcare needs with only 12% (n = 6/50) indicating that it did not. Ninety-six percent of participants (n = 48/50) felt comfortable communicating remotely. All participants agreed that they could easily talk to their health care provider on the phone, 92% (n = 46/50) agreed that they could hear them and 98% (n = 49/50) agreed that the health care provider could understand their condition. Over half (54%, n = 27/50) of participants felt that they obtained better access to care via telemedicine (28% neutral). When asked if telephone consultations saved them time travelling to hospital or a specialist clinic, 94% indicated it would save them time.
Three participants indicated that they would not use telephone consultations again. All three also found telephone consultations an unacceptable way to receive services and preferred face-to-face consultation; two did not feel that it improved their access to care and one was unsatisfied with the quality of the telephone service. However, none indicated that they felt uncomfortable communicating remotely, and when given the option of telephone or video consultation, all three indicated that they would prefer video consultation over telephone. Overall, 64.3% of participants would have preferred video over telephone consultations.
Patient satisfaction with genetic consultation was generally positive. All participants felt they were listened to carefully. Ninety percent of participants (n = 45/50) felt that they received the information they required and were able to share all the necessary information, and 92% (n = 46/50) felt that the person they spoke to answered all their questions. Ninety-six percent (n = 48/50) felt that the person spent enough time with them and that things were explained to them in a way they could understand. Two negative responses were indicated in total: one person felt that they did not receive all the information they required and one person did not feel like the person they spoke to on the telephone made them feel like a partner in care.
Discussion
This is the first patient survey to canvass the opinion of remote consultations prior to implementation and also to evaluate the acceptability of remote telephone consultations to carers of children with genetic eye disease. These conditions are a leading cause of certifiable blindness, accounting for over 10% of sight impaired and severe sight impaired registrations. They often have mobility issues, with 40% not being able to make all the journeys that they want or need to make [21]. The pre-pandemic responses from our survey suggest that the majority of participants are happy to have virtual consultations with their investigations performed locally rather than having to travel to a specialist centre and the preferred virtual mode of review were telephone calls.
Post-implementation, participants found telephone consultations acceptable and felt they obtained better access to care, but many would still prefer to have face-to-face contact with their health care provider. For most genetic eye diseases, establishing the genetic diagnosis can take over a year in a significant number of cases, there are no approved treatments, and patients are kept under long-term follow-up to monitor disease progression. It is important to emphasise that these patients do require a physical examination, especially all new patients, as 60% of genetic eye disease may be associated with systemic features (which can be overlooked), and this will guide clinical management strategies. However, where a clinical diagnosis is established, especially for isolated ocular disorders such as non-syndromic inherited retinal dystrophies, retinal imaging is now so advanced that ophthalmologists can use it to monitor disease progression and correlate it with the reported history. With colour fundus photography and OCT now available in high-street opticians, shared care may form the future direction for such patients who are stable in long-term follow-up, interspersed with virtual and face-to-face specialist consultations.
Three participants indicated that they would not use telephone consultations again (one new and two follow-up patients). Although only a small proportion, this is particularly concerning during the pandemic when access to face-to-face services is reduced. However, it is important to note that these were telephone consultations, and when asked, these same participants preferred video rather than telephone consultations. A survey conducted in the US during the pandemic of 219 adult patients receiving video consultations for routine and acute ophthalmology review (42% response rate) found that nearly half of patients would have delayed seeking care in the absence of a virtual option [22]. Video consultations were also highly rated with 78% stating they would consider participating in a video visit as an alternative to a face-to-face encounter in the future. Similarly, a retrospective analysis of telemedicine across 40 specialties in a single New York-based centre has also shown high patient satisfaction during the pandemic but found that younger patients, female patients and "new visit" patients had lower satisfaction scores [23]. However, we did not find significantly different questionnaire responses between carers attending new versus follow-up appointments. This highlights the importance of evaluating the acceptability of newly implemented services that are intended to increase access to care. Where possible, service users should be included in their development and provided with a choice of contact options. This will avoid missed appointments that could result in suboptimal patient care and waste of health-care resources. Pre-pandemic evaluation of non-genetic adult teleophthalmology services has shown high levels of patient satisfaction [24]. There are no studies relating to ophthalmic genetics; however, a study involving 225 participants completing an online questionnaire on acceptability and feasibility of information and communication technologies (ICTs) in the delivery of a cancer genetics service in Wales found them highly acceptable. They did not consider genetic counselling via telemedicine superior to a face-to-face consultation, but they could see how it may benefit those unable to travel [25]. A study conducted at the Mayo Clinic Biobank administered a questionnaire to 1200 participants asking how they would like to receive theoretical results using three vignettes (cystic fibrosis, hereditary breast cancer and a pharmacogenomics vignette). They found that although 60% of participants reported liking e-visits, the option of receiving results face-to-face scored more highly [26].
Evaluation studies of the acceptability of online genetic consultations have been previously conducted, where participants were asked to rate the remote service they received. A study of 54 pre-symptomatic patients in the cardiogenetic and oncogenetic services in the Netherlands receiving online genetic counselling found that patients were significantly more satisfied with their counsellor and counselling session than the control group who received face-to-face counselling in the hospital, but overall only one-third of patients consented to this form of virtual contact [19]. A systematic review of 12 studies in the United States, Canada, the UK and Australia using telemedicine in clinical genetics clinics showed high levels of patient satisfaction and suggested that it has the potential to evaluate paediatric patients with suspected genetic conditions [27].
There were some limitations in this study. This was a relatively small sample size (although it does involve patients with rare inherited eye diseases), drawn from a single centre based in central London. We only included patients who were already at the hospital attending a face-to-face appointment, which may have selected for a cohort with better access to tertiary care. We evaluated telephone appointments only as this was the most popular option indicated on our pre-pandemic survey. Our findings may be reflective of carer perceptions during a prolonged pandemic where anxiety about their child's condition may be heightened due to cancelled or postponed outpatient appointments. In addition, we found a shift in preference from telephone to video consultations over the course of the study. This finding is likely due to the monumental digital switchover that occurred during the pandemic, increasing user accessibility and familiarity with video communication platforms [28].
Conclusions
The expansion of ocular genomic medicine and existing pressure on ophthalmology services combined with the current global pandemic means that now more than ever, alternative models of patient care need to be adopted. Measures to enable the continuation of routine and urgent health care delivery during and after the pandemic must be acceptable to patients. Genetic eye disease clinics are suitable for remote delivery and we have demonstrated that they are acceptable to families of children with inherited eye disorders.
Institutional Review Board Statement:
The study adhered to the tenets set out in the Declaration of Helsinki and was approved by the London-Camden & Kings Cross Research Ethics Committee (12/LO/0141) and was approved by the local patient experience committee.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Looking Back at the Early Stages of Redox Biology
The beginnings of redox biology are recalled with special emphasis on formation, metabolism and function of reactive oxygen and nitrogen species in mammalian systems. The review covers the early history of heme peroxidases and the metabolism of hydrogen peroxide, the discovery of selenium as integral part of glutathione peroxidases, which expanded the scope of the field to other hydroperoxides including lipid hydroperoxides, the discovery of superoxide dismutases and superoxide radicals in biological systems and their role in host defense, tissue damage, metabolic regulation and signaling, the identification of the endothelial-derived relaxing factor as the nitrogen monoxide radical (more commonly named nitric oxide) and its physiological and pathological implications. The article highlights the perception of hydrogen peroxide and other hydroperoxides as signaling molecules, which marks the beginning of the flourishing fields of redox regulation and redox signaling. Final comments describe the development of the redox language. In the 18th and 19th century, it was highly individualized and hard to translate into modern terminology. In the 20th century, the redox language co-developed with the chemical terminology and became clearer. More recently, the introduction and inflationary use of poorly defined terms has unfortunately impaired the understanding of redox events in biological systems.
Introduction
Life science means redox science. Even seemingly unrelated issues such as the hydrolysis of peptide bonds in living systems depend on enzymes, which have to be synthesized with significant consumption of ATP derived from redox processes in mitochondria or elsewhere. Thus, the attempt to review the evolution of redox biology could easily result in a big historical volume looking back at all fields of biosciences. I am quite sure that this was not the idea behind the guest editor's kind invitation to write this article, and I therefore take the liberty to narrow down the scope of the review. It will only cover the biochemistry of compounds called "ROS" or "RNS" for reactive oxygen or nitrogen species, respectively. I will try to describe the history of discoveries related to their formation in biological systems, preferentially in mammalian tissue, their pathophysiological and biological relevance and their metabolic fate. The key players of the redox events, the appreciation of their biological role and the language changed dramatically over the past three centuries, and we may look forward to new surprises.
In the introduction of his article [2], Schönbein characterizes the decomposition of hydrogen peroxide by living materials as a catalytic process and refers to the father of catalysis, Jöns Jacob Berzelius (1779-1848). In the very same publication, he describes that these tissues, like platinum at high temperature, also liberate a gas upon H 2 O 2 exposure that he characterized as "normal" oxygen. He also mentions that already Thénard had made similar observations with blood components. This means we could equally celebrate Schönbein or even Thénard as the discoverers of catalase (EC 1.11.1.6). Both catalytic activities, the catalatic and the peroxidatic one, proved to be heat-sensitive. Schönbein also mentions that a peroxidatic reaction had previously been seen in yeast by Julius Eugen Schlossberger (1819-1860).
Catalase, a peroxidase preferentially using H 2 O 2 as a reductant for H 2 O 2 , in other terms a H 2 O 2 dismutase, was then clearly established as an enzymatic entity of its own by Oscar Loew (1844-1941) in 1900 [3]. The elucidation of the structure and the reaction mechanism of catalase was greatly favored by the research on the cytochromes, which had already started in the 1920s [4,5]. According to Zámocký and Koller [6], it was Otto Warburg (1883-1970) who proposed iron catalysis as the mechanistic principle of the catalase reaction in 1923. In the 1930s, Karl Zeile and colleagues noticed the similarity of the catalase spectrum and that of hemoglobin [7,8]. Finally, catalase was crystallized [9] and extensively investigated spectroscopically by Kurt Stern [10][11][12]. The spectroscopic investigations [10] clearly identified the prosthetic group of catalase as the same protoporphyrin IX present in the oxygen transporters hemoglobin and myoglobin and in most of the plant heme peroxidases. In his short Nature communication [11], he acknowledges the supply of reference compounds by Otto Warburg and Hans Fischer (1881-1950), which demonstrates the close cooperation of the groups working in the porphyrin field.
In the case of catalase, the iron is coordinated to the four nitrogen atoms of the porphyrin ring, the fifth coordination site is bound to a protein histidine, while the sixth one interacts with the oxygen of water (ground state), H 2 O 2 (first labile complex) or oxygen (compound I; see below). In ground-state catalase and peroxidases, the iron is usually in the ferric state. Extensive mechanistic, kinetic, electron spin resonance, sophisticated spectrophotometric and stop-flow techniques on catalase were later performed at the Johnson Foundation by Britton Chance (1913-2010) and others [13,14]. These investigations also led to the characterization of various catalytic intermediates. While still working in Stockholm with Hugo Theorell (1903-1982), Chance had already studied horseradish peroxidase (HRP) and catalase.
Selenium Conquering the Stage
Selenium was discovered in 1817 or 1818 by Jöns Jacob Berzelius (Figure 2; the volume of the Journal für Chemie und Physik is dated 1817, while the Berzelius letter therein is from 1818) [59]. The element first made its career in diverse industries, as a chemical catalyst and semiconductor, in photocopying and in coloring glass and ceramics [60]. In the biological context, it long remained an ugly-smelling toxic, teratogenic and carcinogenic poison.
In veterinary medicine, selenium became known as the causative agent of diseases such as blind staggers of cattle and the "change hoof disease" of horses grazing on soils with high selenium content [61]. In the 1950s, however, Jane Pinsent observed that Escherichia coli required selenium for expressing optimum formic acid dehydrogenase activity [62], and Klaus Schwarz (1914-1978) found that rats deficient in vitamin E and selenium died from a fulminant liver necrosis (Figure 3) [63,64]. The real discoverer of the essentiality of selenium is hidden in an acknowledgement. It was the former deputy director of the NIH, DeWitt Stetten, who smelled the selenium in the lab of Schwarz (Thressa Stadtman, personal communication), while Schwarz had evidently developed a tachyphylaxis against the ugly smell of his selenium-containing factor 3 during the tedious isolation from hog kidney.
In the same year (1957), Gordon C. Mills described a peroxidase that was not affected by typical inhibitors of heme peroxidases such as azide or cyanide, proved to be highly specific for glutathione (GSH) and was claimed not to be a heme peroxidase [68]. This discovery was received with serious skepticism, and the new peroxidase, now known as glutathione peroxidase 1 (GPx1; EC 1.11.1.9), was declared not to exist at all at a Federation Meeting in the US by the father of peroxidase research, Britton Chance (Gerald Cohen (1930-2001), personal communication). In consequence, less than one publication per year dealt with the enzyme during the years that followed.
Figure 2. The discovery of selenium. The picture to the left shows Gripsholm village, where Berzelius, in 1817, saw the red mud in the lead chambers of a sulfuric acid factory, which is hidden behind the church. This mud turned out to contain selenium. The little white hut below the trees (right lower corner) was the laboratory of Berzelius (picture taken by L. F. on occasion of the selenium meeting "Se 2017" in Stockholm, organized by Elias Arnér at the Karolinska Institute). The right panel shows an excerpt of the letter of Berzelius to the editor of the Journal für Chemie und Physik, J. S. C. Schweigger, in which he reports on the discovery of selenium for the first time. The letter is dated January 27, 1818, but published in a volume of 1817 [59].
Figure 3. The left panel shows the report on the selenium-containing "factor 3" of Klaus Schwarz [64]; please notice that the co-inventor, DeWitt Stetten, who smelled the selenium, is hidden in footnote 13. The right panel, colored in selenium red, claims, for the first time, that the GPx reaction depends on selenium [65]. It is the communication that prompted us to try an exact selenium determination in the last 0.69 mg of bovine GPx1 [66] left over from material-consuming kinetic studies with stopped-flow equipment [67].
Alerted by an abstract claiming that glutathione peroxidase activity depended on selenium (Figure 3; [65]), we determined the selenium content in our crystalline preparation of bovine GPx1, which had survived 13 steps of purification, by neutron activation and found exactly 4 g atoms of selenium per mol of the homotetrameric enzyme [66]. As later reported by the group of Al Tappel (1926-2017) and by others, the selenium in GPx is present as a selenocysteine residue integrated in the peptide chain [69,70]. After confirmation of the selenoprotein nature, GPx became a celebrity, and now entering the substance name "glutathione peroxidase" in EndNote yields >500 hits per year. Moreover, a large-scale preparation of GPx1 proved the absence of any known prosthetic group [71], and the bimolecular rate constant for the oxidation of GPx1 by H 2 O 2 (~5 × 10 7 M −1 s −1 , when calculated per subunit) was higher than the corresponding one of catalase [67], which till then had been considered unbeatable in catalytic efficiency. As expected, the in situ function of GPx1 could be verified by a decrease in surface fluorescence due to NADPH consumption and by the release of oxidized glutathione in hemoglobin-free perfused rat liver, when H 2 O 2 was not generated in the peroxisomes, but perfused [72][73][74].
We should mention here that not all members of the glutathione peroxidase family are selenoproteins. In four of the eight mammalian glutathione peroxidases the active-site selenocysteine residue may be replaced by cysteine, and this appears generally to be the case in terrestrial plants and bacteria (CysGPxs; Section 4.1). However, the second mammalian selenoprotein to be discovered also proved to be a glutathione peroxidase with an efficiency comparable to that of GPx1, the phospholipid hydroperoxide GSH peroxidase, now commonly named GPx4 (EC 1.11.1.12) [75]. These observations strengthened the belief that the magic catalytic power of selenium could substitute for the iron porphyrin prosthetic group of catalase and other heme peroxidases.
GPx4 proved to be distinct from GPx1 in sequence [76,77] and substrate specificity. While GPx1 accepts many soluble hydroperoxides apart from H 2 O 2 [78], GPx4 also reduces the hydroperoxyl groups of complex, membrane-bound phospholipids [79]. In fact, GPx4 was discovered as an enzyme that prevented lipid peroxidation and was initially named PIP (for peroxidation inhibiting protein) [80], but later on, it proved to be the chameleon of the GPx family [81]. The gpx4 gene is expressed in three different forms due to alternate use of start codons. The mitochondrial GPx4 is the one that forms the keratin-like matrix surrounding the mitochondrial helix in spermatozoa of mammals and, thus, is indispensable for male fertility [82]; the nuclear form has been implicated in chromatin compaction [83][84][85], while the cytosolic one primarily prevents lipid peroxidation by reducing lipid hydroperoxides and silencing of lipoxygenases [86].
Both GPx1 and GPx4 have been reported to also efficiently reduce peroxynitrite to nitrite [87]. The rate constant for the peroxynitrite reduction by GPx1 is as high as 8 × 10 6 M −1 s −1 [88]. Surprisingly, however, hepatocytes isolated from gpx1 −/− mice proved to be resistant to a peroxynitrite challenge, one of the many paradoxical findings with antioxidant enzymes compiled by Lei et al. [89].
Intriguingly, the cytosolic expression form of GPx4 is the only member of the GPx family that proved to be essential [90], which highlights the importance of lipid peroxidation as a pathogenic principle [81]. In this context, it appears to be revealing that GPx4 has become known as the major antagonist of a special type of regulated cell death, ferroptosis. Ferroptosis is a multi-etiological phenomenon that is characterized by a common endpoint: an iron-catalyzed oxidative destruction of unsaturated lipids in bio-membranes. Screening for ferroptosis inducers inter alia yielded the compound RSL3, which irreversibly inhibited GPx4 [91] in the presence of 14-3-3ε as an auxiliary protein [92]. Therefore, ferroptosis was defined in 2015 as an "iron-dependent form of regulated cell death under the control of glutathione peroxidase 4" [93]. In the meantime, many details of the ferroptotic process have been unraveled [94], and selenium, likely as an integral part of GPx4, has become an essential element to suppress ferroptosis [95]. It is therefore tempting to speculate that the fulminant liver necrosis that Schwarz saw in his selenium-deficient rats [64] was actually the first report on this peculiar form of cell death.
Studies on the mechanism of GPx started from the first x-ray analysis of bovine GPx1 by Rudi Ladenstein [96], who saw the selenium in close neighborhood of a tryptophan and a glutamine. The functional relevance of these residues was confirmed by Matilde Maiorino via site-directed mutagenesis [97]. The catalytic triad composed of selenocysteine, tryptophan and glutamine was later amended by an asparagine residue [98]. More recently, density functional theory (DFT) calculations [99] have demonstrated that the selenocysteine residue, after forming an adduct (or complex) with H 2 O 2 , is instantly oxidized without any activation energy to a selenenic acid derivative, which in two steps is reduced by thiols to regenerate the ground-state enzyme. The selenenic acid form could, however, not be verified by mass spectrometry, since the oxidized selenium reacts with a nitrogen of the peptide backbone in the absence of reducing substrate. In a homologous Cys-GPx (see below; chapter 4), however, the corresponding sulfenic acid derivative is clearly detectable. Splitting of the peroxide bond is achieved by a dual synchronized attack, a nucleophilic one by the dissociated selenol and an electrophilic one on the second oxygen by a proton bound in the active site, preferentially at the ring nitrogen of the tetrad tryptophan. These data perfectly match earlier kinetic studies on GPx1 displaying a ping-pong pattern (revealing an enzyme substitution mechanism) with infinite maximum velocity, infinite Michaelis constant and extraordinary efficiency [67,78].
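For readers less familiar with this kinetic pattern, the catalytic cycle and the corresponding rate law may be summarized as follows (a schematic sketch based on the steps described above; E-Se− denotes the dissociated active-site selenol, ROOH any accepted hydroperoxide):
\[ \mathrm{E{-}Se^- + ROOH + H^+ \rightarrow E{-}SeOH + ROH} \]
\[ \mathrm{E{-}SeOH + GSH \rightarrow E{-}Se{-}SG + H_2O} \]
\[ \mathrm{E{-}Se{-}SG + GSH \rightarrow E{-}Se^- + GSSG + H^+} \]
The ping-pong pattern mentioned above corresponds to a Dalziel-type rate equation without a saturating term,
\[ \frac{[E]_0}{v} = \Phi_0 + \frac{\Phi_1}{[\mathrm{ROOH}]} + \frac{\Phi_2}{[\mathrm{GSH}]}, \qquad \Phi_0 \approx 0, \]
in which a vanishing Φ0 is the formal equivalent of the "infinite" maximum velocity and Michaelis constants, and the reciprocal coefficients 1/Φ1 and 1/Φ2 reflect the net rate constants of the oxidative and the reductive part of the cycle, respectively.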
Of course, selenium also plays a role in other redox processes. The 25 selenoprotein genes of the human genome overwhelmingly encode proteins that, according to their sequence, can be classified as oxidoreductases [100], the best investigated families being the thioredoxin reductases and the glutathione peroxidases. These two families of proteins unambiguously reduce hydroperoxides, either directly or indirectly via peroxiredoxins (see Section 4.2). The precise function of many selenoproteins is however still unknown. This even holds true for some of the glutathione peroxidases (for review see [101]). As to the other selenoproteins, we may refer to the dedicated monographs edited by Dolph Hatfield and colleagues [102][103][104][105]. Here, it may suffice to state that the myth of the magic catalytic power of selenium faded away sooner than anticipated.
Cysteine-Containing Homologues of GPx
In order to demonstrate the catalytic importance of selenium, Rocher et al. exchanged the active-site selenocysteine residue in mouse GPx1 by its sulfur homologue cysteine and indeed saw a decline of three orders of magnitude in specific activity [106]. However, the specific activity, as measured under conventional test conditions (mM concentrations of both substrates), preferentially determines the rate of enzyme reduction by GSH [67] and, thus, provides limited information on the oxidation of the active site selenocysteine by H 2 O 2 . Similarly, the selenocysteine was replaced by cysteine in porcine GPx4 by Maiorino et al. [97]. In this investigation, both, the reductive and the oxidative part of the catalytic cycle were dramatically affected.
These findings were long considered to underscore the catalytic power of selenium versus sulfur. However, this view overlooks that the residual activities of the artificial Cys-GPxs are still orders of magnitude higher than any oxidation of a low molecular thiol by a hydroperoxide. Thiol oxidation by H 2 O 2 was critically reviewed by Christine Winterbourn. The data compiled reveal that the rate constants of the reaction of any low molecular weight thiol compound with H 2 O 2 never exceed 50 M −1 s −1 , even if their thiol group is fully dissociated [107]. This sharply contrasts with the rate constants around 5 × 10 4 M −1 s −1 , as are seen in the artificial Cys-GPx4 [97]. Moreover, for the naturally occurring GPx of Drosophila melanogaster, which is a cysteine-containing GPx, the corresponding oxidative rate constant was determined to reach 10 6 M −1 s −1 [108], thus coming close to those of its selenium-containing relatives. The seemingly poor GPx activity of many non-mammalian cysteine homologues of the GPx family results from a change of their substrate specificity; in functional terms, they are thioredoxin peroxidases [108,109]. Additionally, the DFT calculations mentioned above [99] showed that the mechanism of chalcogen oxidation by H 2 O 2 is essentially identical for Sec-GPxs and Cys-GPxs. In short, the difference in catalysis of cysteine and selenocysteine residues in GPxs is not a qualitative, but only a quantitative one.
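To put these numbers into perspective, the rate constants quoted above can be translated into pseudo-first-order half-lives of the respective thiol or selenol at an assumed, purely illustrative steady-state concentration of 1 µM H 2 O 2 (the concentration is a hypothetical choice for illustration only, not a value taken from the literature):
\[ t_{1/2} = \frac{\ln 2}{k\,[\mathrm{H_2O_2}]} \approx \begin{cases} 1.4 \times 10^{4}\ \mathrm{s}\ (\approx 4\ \mathrm{h}) & k = 50\ \mathrm{M^{-1}\,s^{-1}}\ \text{(free low-molecular-weight thiol)} \\ 14\ \mathrm{s} & k = 5 \times 10^{4}\ \mathrm{M^{-1}\,s^{-1}}\ \text{(artificial Cys-GPx4)} \\ 0.7\ \mathrm{s} & k = 10^{6}\ \mathrm{M^{-1}\,s^{-1}}\ \text{(Drosophila Cys-GPx)} \\ 14\ \mathrm{ms} & k = 5 \times 10^{7}\ \mathrm{M^{-1}\,s^{-1}}\ \text{(Sec-GPx1, per subunit)} \end{cases} \]
The gap between an enzymatic Cys-GPx and a free thiol is thus far larger than the residual gap between cysteine- and selenocysteine-containing enzymes, which is the quantitative point made above.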
Just for completeness, a non-selenium glutathione peroxidase, which showed preferential activity with organic hydroperoxides, was detected in the liver of selenium-deficient rats by Lawrence and Burk in 1976 [110]. This enzyme proved not to belong to the GPx family, but is a B-type GSH-S-transferase [111].
Peroxiredoxins
High-to-extreme efficiencies in hydroperoxide reduction via sulfur catalysis are also observed in another thiol-dependent peroxidase family, the peroxiredoxins (Prx; EC 1.11.1.24-29). A peroxiredoxin was first seen in 1968 by Robin Harris in electron microscopy as a ring-shaped protein attached to erythrocyte ghosts [112]. It was not further investigated and, because of its peculiar shape, was called "torin" [113]. In the following years a considerable number of proteins that later turned out to be peroxiredoxins were described (for review see [114]). Their strange names disclose the lack of any serious functional characterization.
More defined examples of this family, the alkylhydroperoxide reductases, were detected in the group of Bruce Ames [115]. However, these researchers associated the reductase activity with the flavoprotein component (AhpF) of the system and thus overlooked the homology of the cofactor-free component (AhpC) with torin and other known sequences. In the late 1980s a "thiol-specific antioxidant protein (TSA)" had been isolated from Saccharomyces cerevisiae [116] and was sequenced [117] in the early 1990s in the laboratory of Earl Stadtman. I remember that Earl asked me at a meeting in Tutzing (Bavaria, Germany) to compare the then still unpublished sequence of TSA (1991) with that of GPx, because the GPxs were then the only known peroxidases just consisting of amino acids. The sequence did not show any similarity with that of any GPx. Instead, Chae et al. found out that the sequence of TSA was homologous to the AhpC component of the bacterial alkylhydroperoxide reductases [118] and to a widely distributed family of proteins that later proved to be thioredoxin peroxidases [119,120]. When unsuccessfully chasing a GPx-homologous "trypanothione peroxidase" in the trypanosomatid Crithidia fasciculata, we stumbled across another peroxiredoxin. In the kinetoplastids, however, the specificity of the peroxiredoxin is a bit different; these organisms use the thioredoxin homologue tryparedoxin as the reducing substrate [121][122][123].
A common denominator of the peroxiredoxin family proved to be a highly reactive conserved cysteine near the N-terminal end of the protein. Mutation of this cysteine results in inactivation in all peroxiredoxins so far investigated [119,[124][125][126][127][128][129][130]; it was therefore named the "peroxidatic cysteine" (C P ). The reactivity of this cysteine with hydroperoxides is facilitated by at least two more essential residues, an arginine [125,130] and a threonine, the latter being sometimes replaced by a serine [127,130]. Recent DFT calculations have unraveled that the oxidation of C P is similar to that of the GPxs [131]. The proton dissociating from the C P -SH is kept in the reaction center, in case of the Prxs at the oxygen of threonine (or serine), while the arginine keeps the hydroperoxide in an optimum position by hydrogen bonding. Now the sulfur can start its nucleophilic attack on one oxygen of the peroxide bond, while the proton, unstably bound to threonine (serine) OH, combines with the other oxygen to generate water (or alcohol, depending on the substrate). As in the Cys-GPxs, the result of this initial step of the catalysis is an enzyme that has its C P oxidized to a sulfenic acid, an intermediate detected long ago by Leslie Poole's group [124].
The downstream reductive part of the catalytic cycle depends on the subfamily of Prxs. In those subfamilies with a second conserved cysteine ("2-Cys-Prx"), the sulfenic acid of C P forms an intermolecular disulfide bridge with the second, the resolving cysteine (C R ), which is afterwards reduced by a redoxin (=protein characterized by an CxxC motif, typically thioredoxin; Figure 4). In the "atypical 2-Cys-Prxs", the disulfide bridge is an intramolecular one. Their mechanism is, thus, analogous to that of Cys-GPxs with thioredoxin specificity. In the "typical 2-Cys Prxs", which are the most common ones, two inter-subunit disulfide bridges are formed between two head-to-tail-oriented subunits [130,132]. In the "1-Cys-Prxs", the reductive part of the catalytic cycle is mostly unclear.
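The catalytic cycle of a typical 2-Cys-Prx, as outlined above, may be sketched as follows (a simplified scheme; C P and C R denote the peroxidatic and the resolving cysteine located on two head-to-tail-oriented subunits, and Trx stands for thioredoxin or an analogous redoxin):
\[ \mathrm{Prx{-}C_P{-}S^- + ROOH + H^+ \rightarrow Prx{-}C_P{-}SOH + ROH} \]
\[ \mathrm{Prx{-}C_P{-}SOH + Prx'{-}C_R{-}SH \rightarrow Prx{-}C_P{-}S{-}S{-}C_R{-}Prx' + H_2O} \]
\[ \mathrm{Prx{-}C_P{-}S{-}S{-}C_R{-}Prx' + Trx(SH)_2 \rightarrow Prx{-}C_P{-}SH + Prx'{-}C_R{-}SH + Trx(S{-}S)} \]
In the atypical 2-Cys-Prxs, the disulfide formed in the second step is an intramolecular one.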
For human Prx6, GSH has been demonstrated to be the reducing substrate; but for GPx activity Prx6 requires GSH-S-transferase π as a supportive enzyme [133][134][135]. GSH dependence of a yeast 1-Cys-Prx has also been discussed [136], but reduction of its sulfenic acid form by ascorbate has also been reported [137,138].
Figure 4. Model of tryparedoxin peroxidase of Trypanosoma brucei brucei loaded with 10 molecules of mutated tryparedoxin. Tryparedoxin peroxidase is a typical 2-Cys-Prx that tends to form donut-shaped decamers consisting of five catalytic units (dimers). Here, the oxidized tryparedoxin peroxidase was reacted with an excess of a tryparedoxin mutant with the C-terminal cysteine of the CPPC motif changed to serine (TbTXNC43S). The surface-exposed C40 of the tryparedoxin mutant can still react with the resolving cysteine (C173′) of the peroxidase, but stays attached (protrusions at the donut), because it can no longer fully reduce the peroxidase [139]. The model is based on electron microscopic images as described in detail in [140] and was prepared by H. J. Hecht.
Like the GPxs, the Prxs have been implicated in the defense against a peroxide challenge. Both families reduce a broad spectrum of hydroperoxides. Some Prxs even reduce complex lipid hydroperoxides in vitro. Human Prx6, for instance, shares with GPx4 the ability to reduce phosphatidylcholine hydroperoxide [135]. Why Prx6 cannot substitute for the essential GPx4 in vivo is not known. In the meantime, Prxs have been detected in every domain of life and are increasingly discussed in the context of redox regulation (see Sections 4.3 and 6).
Other Proteins with C P -Like Reactivity
Thiol groups of proteins that readily react with hydroperoxides are not restricted to the two thiol peroxidase families mentioned above. A typical example is the glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH), whose active site cysteine reacts with H 2 O 2 with a bimolecular rate constant of 10 2 -10 3 M −1 s −1 [132,141]. This oxidation is associated with a loss of glycolytic activity. As in GPxs and Prxs the reaction involves the primary formation of a sulfenic acid, as already shown by Little and O'Brien in 1969 [142], and results in numerous alternate functions that depend on the downstream reactions of the sulfenic acid form [143].
A second well-investigated example is the transcription factor OxyR, which was discovered in the laboratory of Bruce Ames in 1985 as a regulon that responds to oxidative challenge in Salmonella typhimurium [144]. OxyRs are found in many bacteria, where they sense H 2 O 2 and, upon oxidation, induce a large set of enzymes that inter alia protect against peroxide challenge. Their rate constants for the reaction with H 2 O 2 range around 10 5 M −1 s −1 [145]. Based on structural and kinetic data, Joris Messens and colleagues [145] proposed a mechanism that is reminiscent of those described for GPxs and Prxs. Here, too, the thiol of a critical cysteine dissociates, its thiolate attacks the peroxide bond and becomes a sulfenic acid, while a proton kept in the reaction center combines with the second oxygen of the peroxide bond to create water as an ideal leaving group. This dual attack on the peroxide, which is enabled by proton shuttling, has recently been confirmed in principle by DFT calculations [131].
Certainly, the examples here listed will not remain the only ones, and if one believes in redox proteomics, "reactive cysteine residues" are abundant in proteins. In fact, oxidatively modified cysteine residues (formation of inter-or intra-molecular disulfides, sulfenamides, persulfides, S-thiolated, or nitrosylated species) are easily detected and prevail in proteins involved in redox regulation. Oxidation of cysteine residues in the context of redox regulation or signaling is often called a "redox switch", a term that, however, does not always consider the multitude of possible downstream-reactions and is used with different meanings. More importantly, the term "reactive cysteine" only makes sense, if the reaction partner is considered. Published rates for a direct cysteine oxidation by hydrogen peroxide in proteins are scarce and in most cases hardly comply with the assumption of a direct oxidation of such cysteines by H 2 O 2 or any other hydroperoxide. This discrepancy of proteomic findings and kinetic data has been addressed in many reviews [107,[146][147][148][149]. In recent years, new perspectives have helped to solve the enigma. The cysteine oxidation seen in ex vivo samples by protein chemistry indeed disclose a reactivity of these cysteines. They may be poorly reactive towards H 2 O 2 , but they react fast with oxidized thiol peroxidases; thereby, not only the kinetic barrier is overcome. Via specific protein/protein interaction, the unspecific oxidant H 2 O 2 adopts specificity and, thus, becomes an ideal messenger in regulatory processes.
The first examples of this principle showed up in studies on redox regulation in yeast. The oxidative activation of the transcription factor Yap1 is here achieved by a thioredoxin-specific 2-Cys-GPx in Saccharomyces cerevisiae [150]. In Schizosaccharomyces pombe, the analogous activation of the transcription factor Pap1 by H 2 O 2 is mediated by the 2-Cys-Prx Tpx1 [151,152]. In mammalian systems, Prx2 oxidizes the activator protein STAT3, and GPx7 (and likely GPx8) uses protein disulfide isomerases as preferred reducing substrates [153] and, thus, contributes to oxidative protein folding in the endoplasmic reticulum [154,155]. GPx7 also oxidizes the glucose-regulated protein GRP78 (also called BiP) and thereby improves its chaperone activity [156].
More recently, the group of Tobias Dick in Heidelberg (Germany) has demonstrated that 2-Cys-Prxs in mammalian cells facilitate the oxidation of regulatory target proteins. Knock-out of the cytosolic Prxs decreased the formation of disulfide bonds in cytosolic proteins [157]. This is exactly the opposite of what would have been predicted if the peroxiredoxins just competed for H 2 O 2 , and this clearly supports the idea that thiol peroxidases often act as sensors for hydroperoxides and, oxidized by the sensing process, hand over the redox equivalents to regulatory proteins [158].
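The principle of such a relay may be sketched as follows (a simplified, generic scheme; TP stands for a thiol peroxidase with its peroxidatic thiol, T for a regulatory target protein carrying two reactive cysteines):
\[ \mathrm{TP{-}SH + H_2O_2 \rightarrow TP{-}SOH + H_2O} \]
\[ \mathrm{TP{-}SOH + T{-}SH \rightarrow TP{-}S{-}S{-}T + H_2O} \]
\[ \mathrm{TP{-}S{-}S{-}T + T{-}SH' \rightarrow TP{-}SH + T(S{-}S)} \]
The peroxidase thus provides the kinetic competence for H 2 O 2 sensing, while the protein-protein interaction in the second step lends specificity to the oxidation of the target.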
The Biological Radicals
In the 18th and 19th centuries, the term radical indicated any group or substituent, such as ethyl or carboxyl, that was attached to a larger molecule [159]. The use of this term changed gradually after Moses Gomberg synthesized a free and persistent radical, the triphenylmethyl radical, for the first time [160]. Now, the term radical is restricted to compounds harboring one or more unpaired electrons, which are, in consequence, paramagnetic. Compounds meeting these criteria are by no means uncommon in nature. In particular, enzymes or other proteins containing transition metals are often paramagnetic but are usually not named radicals.
Sometimes an unpaired electron resides in amino acid residues of the protein and is involved in the catalytic mechanism. The prototypes of the latter enzymes are the ribonucleotide reductases, which had been discovered in 1960 and the following years by Peter Reichard (1925-2018) and colleagues [161,162]. In 1972, Ehrenberg and Reichard provided the first evidence that the enzyme of Escherichia coli contained a free radical [163]. In 1978, the radical was finally identified as a tyrosyl radical by electron spin resonance technology [164]. Depending on species and/or culture condition, the types of ribonucleotide reductases differ, but all make use of radical chemistry to eliminate the 2′-OH group of ribose in the ribonucleotide. In classes Ia and Ib, an Fe-O-Fe bridge-stabilized tyrosyl radical attacks the ribose via a cysteinyl radical, in class II the cysteinyl radical is formed with the aid of adenosylcobalamin, and class III works with a glycyl radical. The typical reductant of the ribonucleotide reductases is thioredoxin [165,166], glutaredoxin [167], other redoxins such as tryparedoxin [168] or formate (reviewed in [169][170][171]).
Another fairly stable free radical, ubisemiquinone, was detected in 1931 by Leonor Michaelis (1875-1949) [172]. In mitochondria, its oxidized and reduced forms are associated with complex I (NADH: ubiquinone oxidoreductase; EC 1.6.5.3) and complex II (succinate: coenzyme Q oxidoreductase; EC 1.3.5.1). They are, therefore, also called coenzyme Q, yet despite defined binding sites in the proteins of the mitochondrial complexes, ubiquinone and ubiquinol can almost freely move within the mitochondrial membrane. The reduction of ubiquinone in complexes I and II starts with a two-electron transition. In contrast, the cytochromes of complex III (coenzyme QH 2 : cytochrome c oxidoreductase; EC 1.10.2.2) and IV (cytochrome c oxidase; EC 1.9.3.1) transfer single electrons, which implies that somewhere in complex III or earlier a separation of electrons must take place, and ubisemiquinone would be a reasonable candidate to fulfill this job (but see below).
In the context of the present article, however, the focus should be on the really free radicals, i.e., those built by the organism on purpose, released from their site of generation and free to cause harm or benefit, wherever their life time allows them to diffuse. These are the superoxide radical anion (•O 2 -), its conjugate acid, the superoxide radical (•O 2 H), and the nitrogen monoxide radical (•NO; also called nitric oxide). The discovery of each of them came as an unanticipated surprise.
The Superoxide Radical
The superoxide radical was known to researchers interested in atmospheric chemistry or to physico-chemists working with simplified clean systems [173]. As in the case of H 2 O 2 , the superoxide radical found its role in biology after its metabolism appeared on the horizon with the discovery of superoxide dismutase (SOD). The history of this discovery has been masterfully reviewed by Irwin Fridovich (1929-2019). In the introductory chapter of the proceedings of the famous Banyuls symposium on "Superoxide and Superoxide Dismutases" (Banyuls, France; 1976), he amusingly describes the frustrated search for the explanation of a mysterious ferricytochrome c reduction that, strangely enough, depended on the presence of oxygen. The phenomenon had been observed in various biochemical reactions, the search for its chemical basis took decades, the methodologies became more and more complex, but no hypothesis could be experimentally verified. Finally, a youngster, Joe McCord, entered Fridovich's lab, postulated that the reductant of cytochrome c could be superoxide, and identified SOD, which abolished the strange phenomenon, as an impurity in a carbonic anhydrase preparation they had used [174]. McCord's hypothesis [175] indeed marks the beginning of superoxide research in biochemistry.
In 1969, superoxide dismutase was isolated from bovine erythrocytes [176]. It was the copper/zinc type that had been known for years under different names given to green proteins of unknown function, such as hemocuprein, hepatocuprein [177], erythrocuprein [178] or cerebrocuprein [179]. The bimolecular rate constant for the SOD-catalyzed dismutation of •O 2 − is about 2 × 10 9 M −1 s −1 [180] and, thus, is seven orders of magnitude higher than that of the non-catalyzed reaction (<100 M −1 s −1 [181]). The spontaneous dismutation at physiological pH is faster (~2 × 10 5 M −1 s −1 [181]), since •O 2 − is partially associated (pK a = 4.8) and the dismutation involving the protonated superoxide proceeds with a higher rate constant [181]. Still, SOD accelerates the dismutation by four orders of magnitude [181]. The rate constant of Cu/Zn-SOD is indeed the fastest ever reported for a bimolecular enzymatic reaction. The overall surface charge of the enzyme [182] and, in particular, an electrostatic gradient directed towards the reaction center guide the negatively charged superoxide radical anion towards the positive histidine-complexed copper ion [183,184], which explains the incredible efficiency of these enzymes.
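The numbers given above can be combined into a simple back-of-the-envelope estimate. With pK a = 4.8, the Henderson-Hasselbalch relation yields the fraction of superoxide present as the protonated radical at pH 7.4, and the comparison of the quoted rate constants reproduces the stated acceleration by SOD:
\[ \frac{[\mathrm{^\bullet O_2H}]}{[\mathrm{^\bullet O_2H}] + [\mathrm{^\bullet O_2^-}]} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}} = \frac{1}{1 + 10^{2.6}} \approx 2.5 \times 10^{-3} \]
\[ \frac{k_{\mathrm{SOD}}}{k_{\mathrm{spontaneous}}} \approx \frac{2 \times 10^{9}\ \mathrm{M^{-1}\,s^{-1}}}{2 \times 10^{5}\ \mathrm{M^{-1}\,s^{-1}}} = 10^{4} \]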
In the following years, different types of superoxide dismutases were discovered: manganese-containing SODs in bacteria [185] and mitochondria of higher organisms [186], iron-containing SODs in bacteria [187] and protozoa [188], and extracellular forms of the Cu/Zn-SOD in mammals [189]. Cu/Zn-SODs were also sporadically found in bacteria. The first one was the enzyme of Photobacterium leiognathi, which lives as a symbiont in the teleost pony fish. The unusual occurrence of a Cu/Zn-SOD in a symbiotic bacterium was suspected to be the result of a natural gene transfer [190]. However, sequencing of the Cu/Zn-SOD of P. leiognathi and comparison with known sequences falsified this assumption [191], and Cu/Zn-SODs were soon discovered also in non-symbiotic bacteria [192].
As mentioned, the superoxide radical was discovered as a reductant, but it made its way in biology as an oxidant, since it can initiate and sustain free radical chains. With the availability of SODs, it became quite easy to prove the participation of superoxide in biological systems. The first pathogenic effect attributed to superoxide formation was lipid peroxidation in biomembranes. As early as 1972, Fee and Teitelbaum described that oxidative hemolysis, as induced by dialuric acid, could be inhibited by SOD [193]. The basis of related experiments by Zimmermann and colleagues [194][195][196] was the rediscovery of catalase and glutathione peroxidase as contraction factors I and II by Albert Lehninger (1917-1986) and colleagues [197] and studies on high-amplitude swelling of mitochondria induced by GSH [198,199]. These phenomena were shown to be associated with, and possibly caused by, lipid peroxidation in mitochondrial membranes. SOD indeed inhibited GSH-induced oxidative destruction of isolated mitochondrial membranes [196]. How the superoxide radical contributes to lipid peroxidation in this and similar artificial experimental settings remains unclear. Certainly, GSH here does not act as an antioxidant; deprived of its enzymatic environment, it rather autoxidizes in the presence of traces of transition metals with formation of superoxide. Already in 1974, Misra had observed that autoxidizing thiols produce superoxide [200]. The superoxide radical (more likely than the superoxide radical anion) might abstract a hydrogen atom from a methylene group between two double bonds of a polyunsaturated fatty acid, which is the usual start of a free radical chain in membrane lipids. Accordingly, catalase and GPx1 inhibited the loss of volume control and contractility as well as lipid peroxidation [194][195][196][197].
These observations pointed to an essential contribution of H 2 O 2 or other hydroperoxides. A superoxide-driven formation of the hydroxyl radical (•OH) from H 2 O 2 in the presence of traces of iron, according to Haber and Weiss [173], might cause lipid peroxidation in simplified models such as washed mitochondria and isolated membranes. •OH is indeed a very aggressive oxidant. It reacts with a multitude of naturally occurring compounds with rate constants higher than 10 9 M −1 s −1 , i.e., at rates near or at the diffusion limit [201].
The strong oxidative power of H 2 O 2 in the presence of Fe 2+ had already been observed in the 19th century by the British chemist Henry J. Horstman Fenton (1854-1929) [202], but Fenton never mentioned the involvement of a radical, and the precise mechanism of the "Fenton chemistry" is still being debated. Most recently, even a participation of singlet oxygen ( 1 O 2 ; the least excited species, 1 ∆ g O 2 , is the one that occurs in biological systems) in such redox processes has been postulated [203]. This way, another oxidant would be added to the scenario of •O 2 − products.
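For clarity, the reactions at issue may be written out in their simplest, commonly cited form (the Fenton reaction, the superoxide-driven recycling of the catalytic iron, and the resulting net, iron-catalyzed Haber-Weiss reaction):
\[ \mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^- + {}^\bullet OH} \]
\[ \mathrm{Fe^{3+} + {}^\bullet O_2^- \rightarrow Fe^{2+} + O_2} \]
\[ \mathrm{{}^\bullet O_2^- + H_2O_2 \rightarrow O_2 + OH^- + {}^\bullet OH} \]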
In short, even in simplified model systems of biomembrane destruction, we have to consider various initiators, propagators and amplifiers of free radical chains. Homolysis of H 2 O 2 will yield two molecules of the hydroxyl radical, the most likely initiator of lipid peroxidation. By analogy, homolysis of a fatty acid hydroperoxide would yield one hydroxyl radical and an alkoxyl radical (LO•), which implies that the radical chain would be accelerated due to branching. More likely, however, •OH is generated from H 2 O 2 or LOOH and Fe 2+ according to Haber and Weiss [173] or a Haber/Weiss-like reaction, respectively. In the latter case also an alkoxyl radical (LO•) may be formed, which is almost as aggressive as •OH [204]. After hydrogen abstraction (initiation), the polyunsaturated fatty acids usually add molecular dioxygen, which yields the lipid hydroperoxyl radical (LOO•). The latter can in turn abstract a hydrogen atom from another unsaturated fatty acid residue (propagation) or react with a chain-breaking scavenger such as vitamin E (termination).
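The chain just described can be condensed into the classical scheme (LH and L′H denote lipids with a bis-allylic methylene group, TocOH stands for α-tocopherol as an example of a chain-breaking scavenger):
\[ \mathrm{LH + {}^\bullet OH \rightarrow L^\bullet + H_2O} \qquad \text{(initiation)} \]
\[ \mathrm{L^\bullet + O_2 \rightarrow LOO^\bullet} \qquad \text{(oxygen addition)} \]
\[ \mathrm{LOO^\bullet + L'H \rightarrow LOOH + L'^\bullet} \qquad \text{(propagation)} \]
\[ \mathrm{LOOH + Fe^{2+} \rightarrow LO^\bullet + OH^- + Fe^{3+}} \qquad \text{(branching)} \]
\[ \mathrm{LOO^\bullet + TocOH \rightarrow LOOH + TocO^\bullet} \qquad \text{(termination)} \]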
In vivo, lipid peroxidation is even more complicated. In mammals, up to eight lipid-peroxidizing enzymes, cyclooxygenases (COX) and lipoxygenases (LOX), differing in reaction and substrate specificity, contribute to lipid peroxidation (reviewed in [209][210][211]). The LOXs contain a non-heme iron, and all of these enzymes are usually dormant. Activation is achieved by oxidation of the catalytic iron, as has first been demonstrated for cyclooxygenase (COX1) in 1971 by William Lands and colleagues, and later extended to 5-LOX [212], 12-LOX [213] and 15-LOX [86]. Therefore, enzymatic lipid peroxidation is under the control of all enzyme families involved in hydroperoxide metabolism (reviewed in [211]), and some of the GPxs and Prxs also reduce the products of the LOXs, the hydroperoxides, and, thus, may act as terminators by preventing •OH formation from LOOH in a Haber/Weiss-like reaction. Of course, access to substrates has to be considered, which however remains unclear in many cases. Most of the thiol peroxidases require the support of a phospholipase, since, with the notable exception of GPx4, they can only reduce free fatty acid hydroperoxides efficiently, and the specificity for free fatty acids also holds true for most of the LOXs. Thus, biosynthesis and metabolism of lipid peroxides are under the control of lipases, in particular of phospholipase A 2 and its regulator Ca 2+ . The couple 15-LOX and GPx4 is an important exception, since 15-LOX appears unique in acting on complex phospholipids in membranes, thus producing the products that are specifically handled by GPx4 [81].
At the Tübingen (Germany) GSH meeting in early 1973, the role of •O 2 − as a possible source of mitochondrial H 2 O 2 was also discussed [214]. Gerriet Loschen and Angelo Azzi, who both attended and presented at the meeting, argued that the most likely source of the mitochondrial H 2 O 2 was an autoxidizing cytochrome b, which, because of a maximum in the spectrum upon reduction at λ = 566, was called cytochrome b 566 . Superoxide formation was indeed subsequently demonstrated in inside-out mitochondrial particles [217,218], and this finding was soon confirmed by others [219,220]. Later, using similar analytical tools, Richter could also show that some of the microsomal oxygenases primarily produce superoxide [221]. What followed was a fierce transatlantic debate about the precise mechanism of •O 2 − formation in mitochondria. It was quite clear that it happened somewhere at the substrate side of the antimycin A block. Antimycin A blocks the respiratory chain at the oxygen side of cytochrome b 566 , which implies that all components at the substrate side of this block become reduced and can theoretically produce •O 2 − by autoxidation. The problem is that there are so many components: the flavin of succinate dehydrogenase, non-heme iron proteins, ubiquinols and cytochrome b 566 . While Loschen and Azzi favored an autoxidation of cytochrome b 566 , the transatlantic party around Britton Chance, Alberto Boveris (1940-2020) and Enrique Cadenas insisted on autoxidation of the ubiquinols [222], and Boveris still appeared to defend this view in a recent review [223]. In 1986, however, Hans Nohl (1940-2010) and Werner Jordan reinvestigated the problem. They first showed that ubiquinol does not readily autoxidize and does not produce •O 2 − in aprotic media such as mitochondrial membranes. Then, they made use of a novel inhibitor, myxothiazol [224], which had been isolated by Reichenbach and colleagues from Myxococcus fulvus. Myxothiazol blocks the respiratory chain at the substrate side of cytochrome b 566 [225]. By means of this inhibitor, Nohl and Jordan could create a functional state of the respiratory chain with completely reduced ubiquinol and completely oxidized cytochrome b 566 . In contrast to antimycin A, myxothiazol did not induce any •O 2 − production, and antimycin A was no longer active in the presence of myxothiazol [226]. In particular the last quoted experiment unambiguously demonstrates that the mitochondrial •O 2 − / H 2 O 2 production, as detected by Loschen et al. [24,217], occurs in complex III, more precisely by autoxidation of cytochrome b 566 and not by ubiquinol or the ubisemiquinone radical [214]. Yet by now, almost half a dozen different sites of mitochondrial superoxide production are being discussed, and the mechanisms differ [227,228]. An involvement of ubiquinols or flavin radicals can therefore not generally be ruled out. An important beneficial role of •O 2 − was reported in 1973. Bernard Babior (1935-2004) et al. [229] demonstrated that granulocytes produced •O 2 − , and they already reasoned that this phenomenon was an essential part of the body's defense system against pathogenic bacteria. The discovery was soon confirmed and extended to other phagocytes [230][231][232][233]. It complemented three fields of already advanced research: the respiratory burst, known since 1933 [234], and inflammation and phagocytosis, known for more than a century from Elie Metchnikoff's (1845-1916) milestone paper [235].
Already Metchnikoff had observed that phagocytosis was not only directed against bacteria; the phagocytes attacked practically everything that is sick, dead or foreign, thus triggering an inflammatory response. Up to Babior's discovery, H 2 O 2 formed by the oxidative burst and halogen atoms (or hypohalous acids) arising from the myeloperoxidase reaction were widely considered the only bactericidal agents of phagocytosing leukocytes [236,237]. Initially, Babior appeared to believe that •O 2 − itself was the predominant killing agent [229]. In the meantime we have learned that •O 2 − is definitely the indispensable precursor of the H 2 O 2 that is associated with phagocytosis, but white blood cells also use it to make a highly toxic cocktail to cope with a bacterial invasion. It comprises •O 2 − , H 2 O 2 , •OH possibly derived from Haber/Weiss chemistry, •NO, peroxynitrite formed from •NO and •O 2 − (see below), hypohalous acids or halogen atoms from a myeloperoxidase reaction, 1 ∆ g O 2 and likely more, and the composition of the cocktail differs depending on the cell type (Figure 5). Moreover, oxidative burst and superoxide formation may occur independently from phagocytosis, if phagocytes are stimulated, e.g., by pro-inflammatory cytokines, immune complexes or the complement component C5a (compiled in [238]). It appears needless to state that the bactericidal cocktail does not work without collateral damage to the environment of a fighting leukocyte. It causes tissue damage and, in consequence, inflammation. Already before the superoxide dismutase became known, erythrocuprein was rediscovered as an anti-inflammatory protein under the name "orgotein", which is in line with the pro-inflammatory role of •O 2 − [241]. Orgotein was finally developed up to marketing approval in several countries for the treatment of osteoarthritis, interstitial cystitis and induratio penis plastica. Some years later, the drug had to be abandoned, because the promise of complete lack of antigenicity of the bovine protein turned out to be too optimistic. As a substitute, the recombinant human Cu/Zn-SOD was prepared in a hurry by Grünenthal GmbH (Aachen, Germany) and the Chiron Corporation in Emeryville (CA, USA) [242,243] (Figure 6). The human SOD showed exciting promise in animal models of septicemia [244] or reperfusion injury [245], yet the general aversion against recombinant products in these years and the costs involved let the project die. In short, the hope for an improved clinical use of SOD [246] remained a dream.
Babior's enzyme that produces superoxide radicals in phagocytes was first described by Sbarra and Karnovsky in 1959, yet as an enzyme producing H 2 O 2 [247]. It is now known as NADPH oxidase type 2 (NOX2) [248][249][250][251]. Its catalytic complex (gp91 phox and p22 phox ) is a transmembrane protein. It contains an FAD and cytochrome b 558 (discovered by Segal and Jones [252]). Its FAD moiety accepts the reduction equivalents of NADPH from the interior of the cell and releases •O 2 − preferentially into the phagocytic vacuole, but also into the extracellular space (see Figure 5). Like the lipoxygenases, NOX2 is a dormant enzyme that needs to be activated by cytosolic factors: p67 phox , polyphosphorylated p47 phox , p40 phox , the GTPases Rac1 and Rac2, and Rap1. Any functional disturbance of this complex system leads to a severe clinical condition, chronic granulomatous disease, which is characterized by recurrent infections. The disease was first described in 1954 [253] and underscores the importance of NOX2 in host defense [240,254]. Superoxide production by NOX-type enzymes was soon detected also in many non-phagocytic cells. The sources are other members of the NOX family. The common denominator of these enzymes is a homologue of the flavocytochrome gp91 phox . However, their mode of activation and the pathologies in case of malfunction differ (compiled in [251]). In addition, not all NOX-type enzymes produce •O 2 − . DUOX I and DUOX II can make H 2 O 2 directly, and NOX4 appears to obligatorily produce H 2 O 2 without the help of any SOD [251].
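The overall electron transfer catalyzed by NOX2 can be written as (a simple stoichiometric summary of the steps described above):
\[ \mathrm{NADPH + 2\,O_2 \rightarrow NADP^+ + H^+ + 2\,{}^\bullet O_2^-} \]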
Figure 5. The left panel shows the cover illustration of [239], which schematically represents a phagocytosing white blood cell, as shown in [240]. The picture was taken in 1980 by L. F. The right panel shows a more realistic version of phagocytosis: a white blood cell swallowing opsonized yeast (phase contrast microscopy). •O 2 − is stained with tetrazolium blue. Notice that most of the dye is precipitated in the phagocytic vacuole, but also faintly spread over the entire surface of the cell. The right picture was taken by G. L. in 1980.
The Nitrogen Monoxide Radical
The discovery of nitrogen monoxide (•NO; commonly called nitric oxide) did not only surprise because it proved to be a radical; it also is a gas. The history has been reviewed by Salvador Moncada [255], Ferid Murad [256], Louis Ignarro [257], Robert Furchgott [258,259] and Wilhelm Koppenol [260]. It started with the therapeutic use of nitro-vasodilators in the 19th century. A major push forward was the discovery of the endothelium-derived relaxing factor (EDRF) in the 1980s [261]. In many respects, EDRF mimicked the efficacy of compounds such as nitroglycerine or nitroprusside, but its chemical nature remained obscure. It was known that EDRF activated a guanylyl cyclase that had been extensively characterized by Murad [262], as did the nitro compounds, that it was inhibited by the superoxide anion radical and by hemoglobin and myoglobin, and that it could be mimicked by •NO. In 1987, finally, two groups independently came to the very same conclusion: EDRF is •NO [263,264]. In 1998, Furchgott, Ignarro and Murad received the Nobel Prize "for their discoveries concerning nitric oxide as a signaling molecule in the cardiovascular system" [265]. Moncada and his colleagues, who published this discovery in the very same year [264] as Ignarro et al. [263], were not awarded. Only "The Nobel Assembly of Karolinska Institutet" could tell the reasons for this decision, but it did not [265].
pro-inflammatory role of •O2 − [241]. Orgotein was finally developed up to marketing approval in several countries for treatment of osteoarthritis, interstitial cystitis and induration penis plastic. Some years later, the drug had to be abandoned, because the promise of complete lack of antigenicity of the bovine protein turned out to be too optimistic. As a substitute, the recombinant human Cu/Zn-SOD was prepared in a hurry by Grünenthal GmbH (Aachen, Germany) and the Chiron corporation in Emeryville (CA; USA) [242,243] (Figure 6). The human SOD showed exciting promise in animal models of septicemia [244] or reperfusion injury [245], yet the general aversion against recombinant products in these years and the costs involved let the project die. In short, the hope for an improved clinical use of SOD [246] remained a dream. Babior's enzyme that produces superoxide radicals in phagocytes was first described by Sbarra and Karnowski in 1959, yet as an enzyme producing H2O2 [247]. It is now known as NADPH oxidase type 2 (NOX2) [248][249][250][251]. Its catalytic complex (p91 phox and p22 phox ) is a transmembrane protein. It contains an FAD and cytochrome b558 (discovered by Segal and Jones [252]). Its FAD moiety accepts the reduction equivalents of NADPH from the interior of the cell and releases •O2 − preferentially into the phagocytic vacuole, but also into the extracellular space (see above Figure 5). Like the lipoxygenases, NOX2 is a dormant enzyme that needs to be activated by cytosolic factors: p67 phos , polyphosphorylated p47 phox , p40 phox , the GTPases Rac1 and Rac2, and Rap1. Any functional disturbance of this complex system leads to a severe clinical condition, chronic granulomatous disease, which is characterized by recurrent infections. The disease was first described in 1954 [253] and underscores the importance of NOX2 in host defense [240,254].
Superoxide production by NOX-type enzymes was soon detected also in many non-phagocytic cells. The sources are other members of the NOX family. The common denominator of these enzymes Many of the physiological functions of •NO were already known around the time of the mentioned Nobel Prize [255,266]. Its target is a guanylyl cyclase, where it binds to a heme moiety and produces cGMP as the second messenger that leads to smooth muscle relaxation in practically all animals. In mammals its biosynthesis is achieved by three distinct nitric oxide synthases (NOS; nNOS, eNOS and iNOS for neuronal, endothelial and inducible NOS, respectively), which use L-arginine, NADPH and O 2 as substrates and FAD, FMN, iron porphyrin IX, tetrahydrobiopterine and Zn 2+ as cofactors. Their functions differ. The essential function of eNos is the regulation of blood flow via production of EDRF; it also contributes to inhibition of platelet aggregation. The neuronal isozyme is involved in neurotransmission and synaptic plasticity. The inducible NOS is widely distributed, responds to exogenous stimuli such as bacterial lipopolysaccharides and phorbol esters and to endogenous pro-inflammatory cytokines. In macrophages, which typically lack myeloperoxidase, it complements the bactericidal cocktail with peroxynitrite, which is formed from •NO and •O 2 -(see below). Apart from these canonical ways of •NO biosynthesis, the radical can also be produced by reduction of nitrite or nitrate [267].
In the meantime, •NO has reached the status of an approved drug to manage serious hypertension. A compound that inhibits the breakdown of its second messenger cGMP, sildenafil (Viagra ® ), has made its career as a lifestyle drug; it is used to improve penile erection. More recently, •NO was also discussed in plant and bacterial physiology. By mid July 2020, entering "nitric oxide" in EndNote yielded 88,863 hits, of which 10,853 were reviews. To discuss these more recent amendments in detail is simply impossible. We here can only touch upon some critical aspects.
• •NO itself is a benign radical. Its biological effects are overwhelmingly beneficial. Its radical character, however, implies that it can react with a large variety of molecules, and these downstream processes may cause adverse effects. Fortunately, the history of nitrogen oxides can be traced back to Joseph Priestley (1733-1804), and a lot of the chemistry of •NO had been worked out before its presence in biological systems was detected [268]. The chemistry of the interaction of •NO with oxygen, thiols and other molecules is, however, very complex, and its relevance to biological systems still appears to be debated. Like •O2−, •NO can act as a reductant and as an oxidant.
• A prominent characteristic of •NO is its affinity for metal complexes. It is the basis of its physiological function as an activator of guanylyl cyclase, but also of adverse effects resulting from binding to cytochrome P450 in the endoplasmic reticulum or to the cytochromes of the respiratory chain. The interaction of •NO with b-type cytochromes in complex III appears to mimic antimycin A in triggering superoxide production (see above), which implies the formation of peroxynitrite (ONOO−) due to the simultaneous presence of •NO and •O2− and, in consequence, mitochondrial dysfunction [269].
• •NO can interact with the biradical molecular dioxygen to form a realm of nitrogen oxidation products comprising radical and non-radical species [268,270].
• In contrast to •NO, •NO2 is a strong oxidant and is likely responsible for the nitration of tyrosine in proteins [271]. The bimolecular rate constant for the reaction of •NO2 with tyrosine at pH 7.5 is 3.2 × 10^6 M^−1 s^−1; •NO2 will also nitrate unsaturated fatty acids [268,272].
• Nitrosothiols in proteins or in low molecular weight compounds such as GSH are commonly considered a hallmark of "nitrosative stress". Of course, these derivatives could be formed by a reaction of •NO with thiyl radicals, yet the steady-state concentration of thiyl radicals is too low to be of physiological relevance. Most likely, S-nitrosation is achieved by N2O3, the latter being built from •NO and •NO2 with a rate constant of 1.1 × 10^9 M^−1 s^−1 [268]. However, other mechanisms are also being discussed [270].
• In the context of lipid peroxidation, •NO can adopt controversial roles. Being a radical, it can terminate free radical chains, e.g., by interacting with an ROO• [273]. Its oxidation products, however, may also initiate a free radical chain by hydrogen abstraction from a poly-unsaturated fatty acid residue [272].
• The most important pathogenic reaction of •NO is probably its combination with •O2− to form peroxynitrite. This reaction of two radicals proceeds with a rate constant of 1.9 × 10^10 M^−1 s^−1, which implies that it is limited by diffusion [270,274]. Peroxynitrite, although it is not a radical, is a highly aggressive oxidant, which prompted Beckman and Koppenol to describe this reaction as one of the "good" (•NO) with the "bad" (•O2−) to make the "ugly" (peroxynitrite) [275].
• Peroxynitrite, apart from being detrimental by itself, had been proposed to decompose into NO− and 1ΔgO2, thus creating another aggressive oxidant [276]. This hypothesis was, however, refuted by two later publications [277,278].
• •O2−, by reacting with •NO to form peroxynitrite, inhibits the beneficial effects of •NO, e.g., on the circulation [279,280], and simultaneously causes oxidative damage. In retrospect, therefore, the surprising results seen with SOD infusion in models of reperfusion injury and septicemia [246] may be re-interpreted as resulting from •NO salvage and inhibition of peroxynitrite formation (a minimal kinetic sketch of this competition follows this list).
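The competition described in the last point can be made semi-quantitative with the rate constants quoted above. The following minimal Python sketch (the concentrations are assumed, illustrative values, not measurements) estimates which fraction of a superoxide flux ends up as peroxynitrite rather than being dismutated, for different SOD concentrations:

```python
# Partitioning of superoxide between SOD and nitric oxide (illustrative sketch).
# Rate constants as quoted in the text: k(SOD + O2-) ~ 2e9 M^-1 s^-1,
# k(NO + O2-) ~ 1.9e10 M^-1 s^-1. All concentrations below are assumptions.

K_SOD = 2.0e9    # M^-1 s^-1, Cu/Zn-SOD + superoxide
K_NO = 1.9e10    # M^-1 s^-1, nitric oxide + superoxide

def peroxynitrite_fraction(sod_conc_m: float, no_conc_m: float) -> float:
    """Fraction of superoxide trapped by NO (i.e., converted to peroxynitrite)
    when SOD and NO compete for the same superoxide pool."""
    rate_no = K_NO * no_conc_m
    rate_sod = K_SOD * sod_conc_m
    return rate_no / (rate_no + rate_sod)

no_conc = 0.1e-6  # 100 nM NO, an assumed signaling-range value
for sod_conc in (1e-6, 10e-6, 100e-6):  # 1, 10, 100 µM SOD (assumed)
    f = peroxynitrite_fraction(sod_conc, no_conc)
    print(f"[SOD] = {sod_conc * 1e6:5.1f} µM -> {f * 100:5.1f}% of superoxide forms peroxynitrite")
```

With these assumed numbers, raising the SOD concentration by an order of magnitude diverts most of the superoxide flux away from peroxynitrite, which is in line with the re-interpretation of the SOD infusion experiments given above.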
In short, •NO itself is beneficial in guaranteeing optimum blood flow and neuronal function, but when transformed to •NO 2 or peroxynitrite, it becomes Janus-faced: it creates an efficient bactericidal cocktail with the typical collateral oxidative tissue damage [272,[281][282][283]. For recent developments and ramifications in the field see [267,[284][285][286].
Changing Paradigms: From Tissue Damage to Redox Regulation
The interest in free radical biochemistry was certainly boosted by Denham Harman's (1916-2014) free radical theory of aging published in 1956 [287] and Rebeca Gerschman's (1903-1986) observation of the similarity of oxygen poisoning and x-ray irradiation damage [288]. However, our knowledge of the process of aging has since apparently not improved [289]. Nevertheless, the concern about oxidative damage of DNA, proteins and other macromolecules kept the redox biochemists busy for some decades. Indeed, all related symposia and monographs of the last century focused on oxidative tissue damage [239,290-295]. The recent revival of lipid peroxidation research in the context of ferroptosis reveals the importance of the past efforts [296,297]. With the beginning of the new millennium, the journals, meetings and monographs on redox biology are almost exclusively filled with contributions on redox regulation and redox signaling. Although flourishing quite late, the subject of redox-dependent regulatory phenomena is not new at all.
Early observations of a redox-dependent metabolic regulation were already reported in the 1960s. Jacob and Jandl [298] saw an activation of the pentose phosphate shunt upon oxidation of GSH in red blood cells. Pontremoli et al. [299] described that fructose 1,6-diphosphatase, a key enzyme in gluconeogenesis, was activated by incubation with cystamine and that this activation could be reversed by thiols such as GSH, cysteine or mercaptoethanol. The authors attributed this effect to an oxidative modification of a particular cysteine residue of the enzyme. In the 1970s, Czech et al. speculated on the involvement of SH oxidation in the action of insulin [300,301] and May and de Haen proposed that H 2 O 2 might be a second messenger of insulin [302]. The hypothesis was finally corroborated by over-expression of GPx1; the mice became fat and insulin resistant [303]. By now, the role of H 2 O 2 in insulin signaling is firmly established [304]. In 1974, Eggleston and Krebs further discussed a regulation of the pentose phosphate shunt by GSSG [305], and in 1985, Regina Brigelius could already compile a large list of proteins that were S-modified, mostly glutathionylated, under oxidant conditions, although the mode of their formation and the functional consequences were still unclear in many cases [306].
Despite these early hints suggesting a metabolic regulation by redox events, phosphorylation and de-phosphorylation dominated the field for quite a while (for a historical review see [307]). The delayed merging of this research field with redox biology must indeed surprise, since the oxidative inactivation of protein phosphatases had been known already in the 1970s [308]. The situation only changed when it became obvious that many phosphorylation cascades are under the control of oxidants, the insulin case mentioned above being just one of many examples. In particular, growth factor signaling proved to be redox-controlled (for reviews see [309-312]). Early examples are the oxidative activation of NFκB [313][314][315][316], the activation of a NOX by binding of transforming growth factor ß to its receptor [317], and NOX activation upon receptor occupancy by platelet-derived (PDGF) or epidermal growth factor (EGF) [318][319][320]. According to Sundaresan et al. [318], the synchronization between binding of EGF, PDGF, IL1, and NFκB to their receptors and the activation of NOX is mediated by the small GTP-binding protein RAC1, yet the mechanisms certainly differ between cellular compartments, cell types and phosphorylation cascades. More recently, even the participation of mitochondrial regulation is being discussed [227]. In most cases, the regulating compound remains hidden in the cloudy abbreviation ROS (see Chapter 7). Bacteria, however, have distinct sensors for H 2 O 2 , •O 2 − and lipid hydroperoxides (reviewed in [321]). H 2 O 2 specifically oxidizes the transcription factor OxyR, already introduced in Section 4, the SoxR regulon appears to specifically respond to •O 2 − [322,323], and Ohr, which is a peroxiredoxin, only reacts with lipid hydroperoxides [324]. These systems collectively induce an adaptive transcriptional response that makes the bacterium more resistant to oxidant challenges. In mammals, sensing ROS usually means sensing H 2 O 2 , although more specific sensors may also exist in higher organisms. In fact, NF-κB activation, which has long been known to be favored by oxidant conditions [313,315], can be inhibited by overexpression of GPx1 [314], but interleukin-induced NF-κB activation was more efficiently inhibited by overexpression of GPx4, which points to a critical role of lipid hydroperoxides in the activation cascade [325]. Additionally, the novel form of regulated cell death, ferroptosis, is selectively affected by GPx4. As a general rule, the less reactive species such as H 2 O 2 , lipid hydroperoxides or the superoxide radical anion are more likely involved in regulation, while the aggressive oxidants such as the •OH radical, singlet oxygen or peroxynitrite react too promiscuously to allow a specific regulation and are more likely just damaging.
The known molecular targets of regulatory redox events are almost exclusively thiols in proteins. The latter may be oxidized directly by H 2 O 2 or another hydroperoxide, as outlined already in Section 4. Alternatively, thiol-dependent peroxidases, which are particularly competent for the reaction with hydroperoxides, may serve as primary sensors and transfer their redox equivalents via heterodimer formation followed by disulfide reshuffling to the ultimate target proteins (for details see Section 4). The redox regulation of phosphorylation cascades can be achieved in different ways. Either a protein kinase can be activated by cysteine modification at the sequence motif MxxCW [326] or, more commonly, protein phosphatases may be inhibited by oxidation of their active-site cysteine. The net effect, inhibition or activation, depends on the role of respective kinases and phosphatases in the cascades. Since the majority of protein phosphatases are susceptible to oxidative inactivation, practically all phosphorylation cascades proved to be redox-regulated.
The adaptive response in mammals is predominantly regulated by the Nrf2/Keap1 system [146]. Nrf2 was detected by Itoh et al. in 1997 as a master regulator for a realm of cytoprotective enzymes [327]. It is kept in the cytosol by Keap1 and continuously degraded. Keap1 contains a reactive cysteine residue, the oxidation or alkylation of which allows Nrf2 to migrate to the nucleus, where it activates the transcription of ARE-regulated genes. ARE stands for "antioxidant response element", which actually is a misnomer, since its activation depends on the oxidation of Keap1, as outlined above. The name ARE is therefore often replaced by the more adequate abbreviation EpRE for electrophile response element [328,329]. The original name ARE was chosen because its activation was achieved by synthetic antioxidant phenols [330]. Such compounds, however, tend to react with the most abundant radical of biological systems, which is the normal triplet oxygen, and thereby produce oxidants instead of scavenging dangerous radicals. This behavior of the synthetic phenols is shared by many antioxidants, also by natural ones, which implies that, in vivo, they act like oxidants. This way, however, they may induce the expression of protective enzyme via the Nrf2/Keap1/EpRE system and, thus, improve the antioxidant defense system. This appears to be the mechanism, by which antioxidants, if active at all, could display beneficial health effects [331,332]. They might trigger hormesis, a concept attributed to Theophrastus Bombastus von Hohenheim ("Paracelsus"; 1493-1541).
Other important regulators that work with reversible thiol oxidation are the redoxins. Typically, the reduced form binds to a transducer or adapter protein and thereby blocks signal transduction. Upon oxidation, the redoxin is released, and signaling can proceed. A classical example is the redox-sensitive association of thioredoxin with ASK1, the apoptosis signaling kinase [333]. Similarly, reduced nucleoredoxin (a redoxin with a CPPC motif) binds to the adapter protein MyD88 and thereby blocks recruiting of Myd88 to the Toll-like receptor [334], and reduced nucleoredoxin also interrupts Wnt signaling by redox-sensitive binding to Dvl (disheveled) [335].
Although the quoted examples demonstrate the importance of protein thiol modification in regulatory processes (for more details see [158,321,336-339]), other mechanistic possibilities should not be ignored. A reversible oxidation of the sulfur in methionine residues and its functional consequences have more recently been demonstrated [340]. The already mentioned oxidation of iron in lipoxygenases is likely independent of any sulfur modification. In principle, every redox-sensitive residue or component of a protein is susceptible to redox regulation. Moreover, sometimes an activity modification affects seemingly remote metabolic pathways [341]. A typical example is the oxidative inactivation of GAPDH activity (see Chapter 4), which blocks glycolysis and directs the glucose metabolism to the pentose phosphate shunt. We therefore should remain open to still undiscovered regulatory phenomena.
An almost topical state-of-the-art review and a compilation of ongoing research have recently been published as "a summary of findings and prospects for the future" by the ~100 participants of the COST Action CM1203 ("EU-ROS") [342].
Now the Language Problem
The title of this Special Issue "Redox language of the cell" deserves a critical comment: Cells do not talk to us, we talk about cells. This means that the redox language is a human one and, as such, prone to errors and unfavorable developments.
Before the 20th century the language of (bio)chemists was very individualized and often hard to understand. Accordingly, it was not always easy to figure out, who invented what. For instance, Veitch in the short historical introduction of the horse radish peroxidase review [35] traces the discovery of plant peroxidases back to times before the discovery of H 2 O 2 . Louis Antoine Planche (1776-1840) reportedly described the development of blue color in "jalap" by fresh horse radish in 1810 [343]. Schönbein, who is most often quoted as the discoverer of peroxidases (see Section 2), essentially performed the same experiments, just with more tissue samples and (wrongly defined) H 2 O 2 . His language and his mechanistic speculations are strange, to say the least (see Figure 7). In his article [2], the blue color of guajacol (jalap) is also produced by ozone, "metal superoxydes" (or proxides), platinum and other noble metals in the presence of H 2 O 2 (called "Wasserstoffsuperoxyd", a name now reserved for the hydrogen superoxide radical and confusingly also presented with the composition of the latter). All these compounds like the catalysts of natural sources are believed to generate an activated form of oxygen that oxidizes the guajacol resin.
Figure 7. Passage from Schönbein [2] that tries to explain the catalytic mechanism of peroxidases as analogous to noble metal-induced decomposition of hydrogen peroxide. Language and contents are not easily understood: H2O2 is called "Wasserstoffsuperoxyd", a name now reserved for •O2H; the presumed composition of hydrogen peroxide (O2H) is simply wrong; the analogy of the peroxidase reaction and metal-induced peroxide decomposition may be doubted; the characterization of oxygen atoms of differing degrees of reactivity is "unique".
Sometimes Alexander von Humboldt (1769-1859) is named the founder of redox chemistry, because he is presumed to have described the production of barium peroxide, which Thénard used to prepare H2O2. I checked Humboldt's pertinent publication [344] carefully, but was unable to find an unambiguous proof of this assumption; the description of the starting materials ("Alaun-Erden" or "schwere Erden") was just too imprecise to understand what kind of chemical experiments he performed.
Over the 20th century, the language of redox biology co-developed with the chemical terminology. It was complemented by the IUPAC enzyme nomenclature and some abbreviations that were commonly understood. By the third quarter of the century, the language of redox biologists had reached a maturity that allowed easy communication between reasonably educated scientists of different disciplines.
Unfortunately, this favorable development was discontinued. Beginning in the last quarter of the 20th century, redox biologists enjoyed creating their own language, and created little else but confusion. Fragments like "free radicals collectively named ROS ...", "free radicals such as H2O2 ...", "GPx scavenges free radicals ..." or "transfers free radicals from intracellular ROS to glutathione" are not rare in the scientific literature (politeness forbids the referencing of such statements).
What follows below should not be interpreted as an undue outbreak of frustration; it is to express my concern about the growing lack of precision in more recent publications. This concern is shared by others. Also, the report on the COST Action CM1203 starts off with a harsh critique of the use of ill-defined terms [342], and Henry J. Forman, on behalf of the entire board of "Free Radical Biology & Medicine" published an article with the title: "Even free radicals should follow some rules: a guide to free radical research terminology and methodology" [345]. The latter publication discourages experimentation with methods, the specificity of which is defined by little else than the promotional material of the kit industry, and complains about the use of poorly defined "scientific" terms. Here, I will not reiterate all arguments of these publications. A few remarks should suffice to alert uncritical redoxologists.
In 1985, Helmut Sies promoted the term "oxidative stress" by using it as book title [291]. There, the term was introduced to describe pathological conditions in which the endogenous defense mechanism can no longer cope with the production of oxidants. Such situations do exist, e.g., in septicemia, ischemia/reperfusion and certain poisonings, and accordingly the term made a lot of sense. Over the years, the oxidative stress was repeatedly re-defined to consider new aspects of redox biology [346][347][348][349][350][351]. However, these modifications of the term were not always and not instantly accepted by the scientific community. In consequence, the precise meaning of oxidative stress varies with the date of publication and the authors. In the meantime, every tissue, cell or subcellular compartment had its own poorly defined form of stress. The oxidative stress has been divided into a "distress" and an "eustress" and complemented by a "reductive stress". The redox-related stresses, thus, fused to a continuum that leaves no room whatsoever for unstressed normal life. In fact, physiological redox regulation, which is commonly not perceived as stressing, is now found under a broad stress umbrella as a form of eustress [351]. In short, the term stress has lost its power to differentiate between a pathological event, an adaptation to changing challenges and physiological fine-tuning of metabolic fluxes by redox regulation or redox signaling. This scenario justifies the question of whether the original definition of oxidative stress [291], which described a disturbance with pathological consequences, was not clearer.
Other examples of poorly defined terms are "ROS" and "RNS" for reactive oxygen and nitrogen species, respectively. They are often used as synonym for radical species, although some of the aggressive derivatives of oxygen and nitrogen are not radicals, and some of the radicals are not very reactive. It also has become fashionable to hybridize ROS and RNS to RONS or to invent subtypes of ROS such as mROS, if a mitochondrial origin is suspected, or lipid ROS, again without a clear definition. So far, I could not find out if the highly unstable thromboxane A 2 , the peroxides prostaglandin G 2 and H 2 or the epoxide leukotriene A 4 belong to lipid ROS or not. Finally, the borderline between ROS and RNS is unclear. Most of the RNS contain more oxygen than nitrogen, and depending on the mesomeric form, a lone electron of an RNS may be centered at the oxygen. Admittedly, it is sometimes difficult to precisely state which species is responsible for a reaction. This is particularly true, when the bactericidal cocktails of phagocytes are involved. However, this difficulty should not be misused as an excuse, not to head for the clearest language possible. I feel heavily distressed, when seeing exploding stars spreading ROS all over a paper.
A related problem is hidden in the term "reactive". It has already been mentioned (see Section 4) that "reactive" is meaningless without naming the reaction partner. Normal triplet oxygen (3O2) is never considered a reactive species, but it binds fast to hemoglobin and myoglobin and reacts fast with cytochrome c oxidase and a realm of oxygenases. H2O2 does not display any exciting rates with low molecular weight thiols (k ≤ 50 M^−1 s^−1 [107]), but it may react with protein thiols in GPxs and Prxs with rate constants higher than 10^6 M^−1 s^−1 [352,353]. The superoxide radical anion hardly attacks amino acids. The highest rate constant was seen with tryptophan (24 M^−1 s^−1 [354]), but it reacts fast with hydrated copper (2.7 × 10^9 M^−1 s^−1 [355]; depending on conditions, up to 8.1 × 10^9 M^−1 s^−1 [182]), with the copper of Cu/Zn-SOD (2 × 10^9 M^−1 s^−1 [181]) and with •NO (1.9 × 10^10 M^−1 s^−1 [270]).
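How strongly the apparent "reactivity" of H2O2 depends on the partner can be illustrated by comparing pseudo-first-order rates for an abundant low molecular weight thiol and for a thiol peroxidase. The Python sketch below uses the rate constants quoted above together with assumed, purely illustrative cellular concentrations:

```python
# Pseudo-first-order consumption of H2O2 by different thiol partners (illustrative sketch).
# Rate constants as quoted in the text; the concentrations are assumptions for illustration.
import math

partners = {
    # name: (rate constant in M^-1 s^-1, assumed concentration in M)
    "GSH (low molecular weight thiol)": (50.0, 5e-3),        # k <= 50, ~5 mM assumed
    "peroxiredoxin/GPx active-site thiol": (1.0e7, 20e-6),   # k > 1e6, ~20 uM assumed
}

for name, (k, conc) in partners.items():
    k_obs = k * conc                  # pseudo-first-order rate constant, s^-1
    half_life = math.log(2) / k_obs   # half-life of H2O2 toward this partner
    print(f"{name:40s} k_obs = {k_obs:9.3g} s^-1, t1/2 = {half_life:9.3g} s")
```

Under these assumptions the scarce protein thiols out-compete the far more abundant GSH by roughly three orders of magnitude, which is exactly why a statement about the "reactivity" of H2O2 is empty without naming the partner.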
The most serious confusion came from the term antioxidants. When I was still a student, I learned from chemical textbooks [356,357] that antioxidants were used in polymer chemistry. The term described chain terminators in peroxide-initiated radical polymerization of olefins. According to Di Meo and Venditti [321], the term with exactly this meaning was introduced by Moureau and Dufraisse [358] already in 1926. Later, the antioxidants were hijacked by biochemists to describe inhibitors of analogous free radical chain reactions in living organisms. Meanwhile this term is degenerated to describe something that is presumed to be good for human health. The idea that this perception is not simply biased may be exemplified by glucose, which is thought not to be good for human health and is (perhaps as a consequence) never called an antioxidant. As a polyol, however, glucose is an efficient free radical trap (k for the reaction with •OH =1 × 10 10 M −1 s −1 [201]) and, when metabolized, is the main source of reduced pyridine nucleotides, the small currency of redox biology, which guarantees defense against peroxide challenge and repair of oxidant damage. A clear definition of antioxidants no longer exists. They comprise free radical trapping agents such as polyphenols, substrates of enzymes (e.g., GSH), which by themselves are poor or no antioxidants at all, and compounds that become constituents or cofactors of enzymes (e.g., selenium or zinc).
Intriguingly, despite the heavy promotion of antioxidants in the lay press, controlled clinical trials in various "oxidative stress diseases" yielded overwhelmingly negative results (no or adverse effects) [89,342]. The reasons for these seemingly paradoxical observations are likely trivial. First, the steadily growing list of oxidative stress diseases is not defined either. The expression rarely distinguishes between a pathogenically relevant oxidant challenge and a clinically irrelevant increase in •O2−/H2O2 production, as is seen in almost every disease. Second, the most aggressive and damaging radical, •OH, reacts fast with almost everything: with thiols, polyols, aromatic compounds including phenols, purines and pyrimidines, with rate constants beyond 10^9 M^−1 s^−1 [201]. The concentration of such endogenous targets of the •OH radical may be estimated to lie in the medium to high millimolar range; even a most efficient radical-trapping antioxidant would have to reach a similar concentration in vivo in order to significantly reduce •OH-induced tissue damage. This is unfortunately not possible without drastically increasing the volume of the patient. This consideration, incidentally, may also explain why nature has not developed any device to cope with an •OH challenge. The enzymatic systems that fight oxidative tissue damage aim at the prevention of •OH formation, and it may be justified to specifically support these systems if they are deficient. In short, in a biomedical context we do not need "antioxidants", neither as a scientific term nor as dietary supplements to treat oxidative stress diseases. At best, we can expect a hormetic response from real antioxidants, which often share the tendency to act as pro-oxidants in vivo (see Section 6).
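The quantitative side of this argument is simple competition arithmetic. The sketch below (all concentrations are assumed, illustrative values) estimates which fraction of •OH radicals an exogenous scavenger could intercept when endogenous targets are present in the tens-of-millimolar range and essentially all partners react near the diffusion limit:

```python
# Fraction of hydroxyl radicals intercepted by an exogenous scavenger (illustrative sketch).
# Both the scavenger and the endogenous targets are assumed to react with .OH near the
# diffusion limit (~1e10 M^-1 s^-1, cf. [201]); only the concentrations differ, so the
# rate constants cancel in the branching ratio.

ENDOGENOUS_TARGETS = 50e-3   # 50 mM total endogenous targets (assumed order of magnitude)

def scavenged_fraction(scavenger_conc_m: float) -> float:
    """Fraction of .OH reacting with the scavenger instead of endogenous targets."""
    return scavenger_conc_m / (scavenger_conc_m + ENDOGENOUS_TARGETS)

for conc in (1e-6, 100e-6, 1e-3, 50e-3):
    print(f"scavenger at {conc * 1e3:7.3f} mM intercepts {scavenged_fraction(conc) * 100:6.2f}% of .OH")
```

Only at concentrations approaching those of the endogenous targets, i.e., tens of millimolar, would an appreciable fraction of •OH be intercepted, which is the point made above: radical trapping cannot realistically protect against an •OH challenge in vivo.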
The above examples may suffice to redirect redox biologists to a clear, mostly chemical language. Otherwise, they face the risk of ending up in a Babylonian confusion of tongues and its consequences, as described in one of the oldest books in the world (Genesis 11.1-9).
Conclusions
Redox chemistry can be traced back to the 18th century, while redox biology began in the 19th century with the description of H 2 O 2 metabolism catalyzed by biological material. Related activities were later attributed to heme proteins such as the heme peroxidases and catalases. Their structural and functional characterization had to wait for the availability of technological progress in synthetic chemistry, spectroscopy, x-ray crystallography, electron spin resonance and stop-flow techniques, and therefore did not reach reasonable maturity before the middle of the 20th century. Although the biological function of catalases in H 2 O 2 metabolism has meanwhile been established, the role of many heme peroxidases in plants and fungi still awaits clarification.
The second half of the 20th century surprised with two new peroxidase families that proved to lack the catalytic heme moiety. These were the glutathione peroxidases, which work with selenium or sulfur catalysis, and the peroxiredoxins, which overwhelmingly depend on sulfur catalysis. The substrate specificity of these enzymes significantly broadened the scope of redox biology, since these peroxidases proved to also reduce hydroperoxides distinct from hydrogen peroxide. In particular, bio-membrane damage due to lipid peroxidation became a focus of research. Studies on the catalytic mechanism of these enzymes have greatly improved our understanding of the reactivity of protein thiols and selenols.
In the last three decades of the 20th century, redox biology was enriched by two key players that proved to be radicals: the superoxide anion and nitric oxide. The former is built by NADPH oxidases, one of which is a pivotal component of the host defense system, or by the respiratory chain of mitochondria. The superoxide anion is dismutated to H 2 O 2 and O 2 by different types of superoxide dismutases. Nitric oxide proved to be the endothelium-derived relaxing factor that is built by the endothelial NO synthase (eNOS). Outside the vascular system, nitric oxide is formed by other NOS-type enzymes and has diverse biological functions.
Up to the end of the last century, redox biology was dominated by the concern about oxidative damage. By now, the major focus of redox biology is metabolic regulation and signaling by diverse oxygen, nitrogen or sulfur species that proved to be messenger molecules, targets or sensors. The present challenge is to understand how the biological radicals and compounds derived therefrom interact with redox-regulated systems and how the synthesizing and degrading enzymes contribute. A re-interpretation of the biological roles of glutathione peroxidases and peroxiredoxins as possible hydroperoxide sensors and transducers marks the present progress of this research field.
The language of redox biology of the 18th and 19th century is hardly compatible with modern terminology. The language became clearer in the 20th century. More recently, the abundant use of poorly defined terms and abbreviations such as oxidative stress, antioxidants, ROS or RNS has hampered the in-depth understanding of biological redox events. A return to a clearer, chemistry-based language is strongly recommended.
Funding: This research received no external funding.
Hypoxic turtles keep their cool
Several species of freshwater turtles spend the winter submerged in ice-covered lakes in a state of severe metabolic depression. It has been proposed that the hibernating turtles are comatose and entirely unresponsive, which raises the question of how they detect the arrival of spring and whether they respond to sensory information during hibernation. Using evoked potential studies in cold hypoxic turtles exposed to light and vibration, we show that hibernating turtles maintain neural responsiveness to light stimuli during prolonged hypoxia, while responsiveness to vibration is lost. This reveals a state of differential neural shutdown, in different sensory systems in the cold hypoxic turtle brain. In behavioral studies we show that turtles held for 14 days in hibernation increase locomotor activity in response to light or elevated temperatures, but not to vibration or increased oxygen. We conclude that hibernating freshwater turtles are not comatose, but remain vigilant during overwintering in cold hypoxia.
Northern species of freshwater turtles, as shown in Figure 1, face a physiological problem each winter; they cannot tolerate freezing, and must therefore overwinter submerged at the bottom of lakes where they can be trapped underwater until the ice thaws. In a recent study we examined to what extent cold and hypoxic turtles remain responsive under these conditions. 1 The turtles survive the cold anoxic winter by suppressing metabolism to less than 1% of normal metabolism at 20 °C 2 and, with their large capacity for anaerobic metabolism, they endure an entire winter without breathing. In addition to the protective effects of hypometabolism, damage to the central nervous system is mitigated by channel arrest, where the density of ion channels in the central nervous system is dramatically decreased, allowing ion homeostasis to be maintained at minimal energy expenditure, as well as by spike arrest, where activity of the nerve cells is markedly reduced. 3 This central nervous system depression during hibernation is mediated by release of inhibitory neurotransmitters, such as γ-aminobutyric acid, 4 and has led to the view that the cold and anoxic turtles are completely comatose and unresponsive during hibernation. 5 The self-imposed inhibition of the central nervous system, however, poses a conundrum; if the turtles are in an unresponsive coma, how do they know when spring has arrived and it is possible to resurface?
To address this puzzle, we studied the responsiveness of red-eared slider turtles (Trachemys scripta elegans) to different sensory stimuli during cooling and hypoxia. Anesthetized turtles, warmed to a core body temperature of 25 °C, were cooled and subjected to light or vibration stimuli until they reached a body temperature of 3 °C. The evoked potentials elicited by the stimuli were measured via electrodes placed subcutaneously on the turtles' heads.
To test for the effects of hypoxia per se, the animals underwent the same stimulation regime of light and vibration at a body temperature of 20 °C while being ventilated with nitrogen gas to induce severe hypoxia. In this manner, we could separate the effects of cooling and hypoxia. We chose to study light and vibration stimuli based on the hypothesis that there might be a discrepancy between the reduction of light and vibration sensitivity, as increased light levels signal the arrival of spring and the possibility to resurface, whereas vibration detection might be less relevant to a hibernating turtle.
Responsiveness to both light and vibration was gradually eliminated with cooling. However, as the animals rewarmed, responsiveness to light recovered faster than responsiveness to vibration. The neural response to vibration declined and was undetectable after 1 h of nitrogen gas ventilation; in contrast, light responses were sustained throughout hypoxia.
We also performed a behavioral experiment, in which the turtles were subjected to hibernation conditions for 2 weeks in a dark, closed water tank at 3 °C and then exposed to one of several stimuli while being tracked on video, to investigate whether particular sensory stimuli would induce a behavioral response. The behavioral experiments confirmed the findings of the stimulation experiments; when exposed to light, an immediate increase in movement was recorded. Increased activity was also recorded when the water temperature was increased, as expected because of the positive effect on metabolism, whereas neither vibration nor increased water oxygenation stimulated activity.
We have therefore shown that cold anoxic freshwater turtles are not in a state of unresponsive coma, as previously hypothesized. Instead they retain slow vigilance, ready to respond in a coordinated fashion when adequate stimuli are received. This implies that the mechanisms entertained to explain how turtles save energy during hibernation, such as channel arrest and increased γ-aminobutyric acid signaling, are not universally applied in the central nervous system.
Our study provides an example of a vertebrate central nervous system that maintains critical functions in the cold (3 °C) without oxygen. Future experiments on how a vertebrate central nervous system continues to function under such conditions could provide valuable insights into the energy-conserving effects of low temperature, and may be relevant for understanding central nervous system function during clinical hypothermia, as commonly used in the immediate treatment after severe oxygen deprivation during stroke or cardiac arrest. Furthermore, we have revealed what appears to be a selective decrease in certain central nervous system activities, while others remain functional. The result is apparently a wake-up sequence in which a selective response to an appropriate stimulus seems to be followed by an increase in overall central nervous system activity. Understanding how such sequential shutdown and activation of higher brain functions occur in a vertebrate central nervous system could further our understanding of hibernation and coma in both animals and humans.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Figure 1. Freshwater turtle species living at northern latitudes, such as Trachemys scripta, avoid freezing by overwintering at the bottom of frozen lakes and ponds. As they cannot surface to breathe during winter, they enter a state of deep metabolic depression to conserve energy until the ice melts.
Under-Vaccination in Pediatric Liver Transplant Candidates with Acute and Chronic Liver Disease—A Retrospective Observational Study of the European Reference Network TransplantChild
Infection is a serious concern in the short and long term after pediatric liver transplantation. Vaccination represents an easy and cheap opportunity to reduce morbidity and mortality due to vaccine-preventable infection. This retrospective, observational, multi-center study examines the immunization status of pediatric liver transplant candidates at the time of transplantation and compares it to a control group of children with acute liver disease. In children with chronic liver disease, only 80% were vaccinated age-appropriately (defined as having received the recommended number of vaccination doses for their age prior to transplantation) for DTP-PV-Hib, less than 75% for Hepatitis B and two-thirds for pneumococcal conjugate vaccine. Vaccination coverage for live vaccines is better compared to the acute control group, with 81% versus 62% for measles, mumps and rubella (p = 0.003) and 65% versus 55% for varicella (p = 0.171). Nevertheless, a country-specific comparison with national reference data suggests a lower vaccination coverage in children with chronic liver disease. Our study reveals an under-vaccination in this high-risk group prior to transplantation and underlines the need to improve vaccination.
Introduction
Infection is still the most common cause of mortality in the long term after liver transplantation, despite the advances in immunosuppression and medical management [1,2]. The Society of Pediatric Liver Transplantation (SPLIT) in the United States registered culture-proven infections in almost 38% of patients within the first 90 days of transplantation [3]. Moreover, of the 3.8% of patients who died, almost 12% had an infection. At Bicêtre University Hospital, 47% of children suffered from bacterial infections in the early phase after pediatric liver transplantation, leading to death in 3% of patients [4]. A recent US multicenter study showed that 16% of all pediatric solid organ transplant recipients suffered at least one hospitalization for a vaccine-preventable infection (VPI) in the first 5 years after transplantation, resulting in increased morbidity and mortality [5]. Moreover, a prolonged hospitalization after transplant due to VPI increased costs by an average of about USD 120,498.
However, based on national recommendations, only 55% of US [6] and 70% of Swiss [7] patients were up to date with immunization, before orthotopic liver transplantation (OLT). The window of opportunity for vaccination is usually limited to prior to transplantation because pediatric liver transplant candidates often have unstable disease courses. Thus, it is paramount that eligible children are immunized at an early age.
This study aims to examine the immunization coverage of pediatric patients who underwent OLT at five liver transplant centers in Europe. A vaccination status is considered age-appropriate when the patient has received the recommended number of vaccination doses for their age prior to transplantation. In order to investigate whether children with chronic liver disease are vaccinated according to national vaccination guidelines, the vaccination rates of these children are compared to those of children with acute-onset liver disease; this cohort serves as the control group. To reduce country-specific variation, vaccination coverage is also compared with national reference data. We also analyzed antibody titers before transplantation.
Patients and Data Acquisition
This multi-center, retrospective analysis included 430 children who were born between 1995 and 2020 and who underwent liver transplantation at University Hospital of Padova (Italy), Hospital Papa Giovanni XXIII in Bergamo (Italy), Necker Enfants Malades Hospital in Paris (France), Vilnius University Clinic of Children's Diseases (Lithuania) and Hannover Medical School (Germany). Only children with a certified immunization record and aged below 18 years at the time of transplantation were included in this study. All parents/caregivers of patients analyzed in this study provided informed consent allowing their children's data to be used for scientific purposes at the time of hospital admission. Patient data were anonymized prior to analysis. Ethical approval was not necessary due to the retrospective design of the study, according to European legislation.
Vaccination dates were taken from vaccination records. To assess whether a patient had received an age-appropriate vaccination prior to transplantation, the national vaccination recommendations in force at the time of birth were used. If an age limit was changed later (e.g., lowering of the age for the MMR booster dose from 3 years to 16 months), the new limit applies to all children who had not yet been transplanted and who exceeded this age limit. For a newly EU-approved vaccine (e.g., against meningococcal B or rotavirus), the age limits set by the manufacturer apply as long as no national vaccination recommendation has been made; once the vaccine is implemented in the national vaccination calendar, those age limits are used to classify the vaccination status. Children were considered age-appropriately vaccinated if, by the time of liver transplantation, they had received the recommended number of doses of a vaccine for their age, as defined above.
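The classification rule described above can be condensed into a few lines of code. The sketch below is an illustrative reconstruction, not the software actually used in the study; the example schedule and the helper names are hypothetical:

```python
# Illustrative reconstruction of the "age-appropriate vaccination" classification.
# The schedule is a hypothetical example; in the study, the national recommendation
# valid for the child's birth year (or a later, lowered age limit) was applied.
from dataclasses import dataclass

@dataclass
class Dose:
    due_age_months: int  # age at which this dose is recommended

# Hypothetical example: 3 + 1 doses of DTP-PV-Hib at 2, 3, 4 and 11 months
SCHEDULE = [Dose(2), Dose(3), Dose(4), Dose(11)]

def doses_due(age_at_transplant_months: float, schedule=SCHEDULE) -> int:
    """Number of doses recommended by the age at transplantation."""
    return sum(1 for d in schedule if d.due_age_months <= age_at_transplant_months)

def age_appropriate(doses_received: int, age_at_transplant_months: float) -> bool:
    """True if the child received at least the number of doses due for its age."""
    return doses_received >= doses_due(age_at_transplant_months)

print(age_appropriate(doses_received=3, age_at_transplant_months=9))   # True: 3 doses due
print(age_appropriate(doses_received=2, age_at_transplant_months=12))  # False: 4 doses due
```

In the study, this check was applied separately for each vaccine and each country-specific schedule; the sketch only captures the dose-counting logic.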
Antibody titers against Hepatitis A and B, as well as measles and varicella, were determined pre-transplant. Those patients who received albumin, fresh frozen plasma or immunoglobulins before antibody measurement were excluded, as well as children under 6 months of age, due to potential maternal antibodies. Depending on levels and as specified by the manufacturer for each test, they were considered as non-immune or immune. Borderline IgG was considered as non-immune, due to the long observation period in several centers and the adjustments to the reference ranges after test changes.
Statistical Analysis
Qualitative data were expressed as number and percentage (%). Quantitative data were expressed as median (25th-75th percentile). The comparison of two groups with respect to categorical variables was performed using the chi-squared test or Fisher's exact test. The Mann-Whitney U test was used for continuous variables due to non-normality. p < 0.05 was considered statistically significant. Statistical analysis was performed using R version 4.0.5 [8]. For graphical data, the ggplot2 package version 3.3.3 was used [9].
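The authors report that all analyses were run in R 4.0.5 with ggplot2 3.3.3; purely as an illustration, the same tests can be sketched in Python with scipy. The counts and values below are invented placeholders, not study data:

```python
# Illustrative sketch of the reported tests (invented example numbers, not study data).
from scipy import stats

# 2x2 table: age-appropriately vaccinated (yes/no) in the chronic vs. acute (control) group
table = [[290, 73],   # chronic liver disease: yes, no (placeholder counts)
         [53, 14]]    # acute liver disease:   yes, no (placeholder counts)

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # preferred when expected counts are small
print(f"chi-squared: p = {p_chi2:.3f}; Fisher's exact: p = {p_fisher:.3f}")

# Continuous, non-normally distributed variable (e.g., age at transplantation in months)
age_chronic = [8, 10, 14, 22, 35, 60, 84]     # placeholder values
age_acute = [12, 18, 25, 40, 55, 90, 120]     # placeholder values
u_stat, p_mwu = stats.mannwhitneyu(age_chronic, age_acute, alternative="two-sided")
print(f"Mann-Whitney U: p = {p_mwu:.3f}")
```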
Immunization Recommendations
There are no Europe-wide general vaccination recommendations for children. As a result, recommendations are made on a country-specific basis with many changes up to 2021. In Germany, the Standing Committee on Vaccination at the Robert Koch Institute is responsible, with annual recommendations. The Italian National Immunization Prevention Plan is released by the Ministry of Health and adopted by each region to its Regional Immunization schedule [10]. From 2014, the country-specific vaccination recommendations can be found on a website of the European Centre for Disease Prevention and Control [11].
Below is a simplified summary of the recommended vaccination ages in each country. Vaccination against diphtheria (D), tetanus (T), pertussis (P), poliomyelitis (PV) and Haemophilus influenzae type B (Hib), abbreviated DTP-PV-Hib, is included in every national vaccination schedule, with adjustments in timing, antigen concentrations (e.g., diphtheria in child or adult dose) and the addition of polio vaccination (attenuated or inactivated). Until 2013, vaccinations were given in France at 2, 3, 4 and 16 months. Since then, the dose at 3 months has been discontinued and the booster age has been reduced from 16 to 11 months. In Italy, vaccinations were given at 3, 5 and 11 months. In Lithuania, the basic immunization takes place at 2, 4, 6 and 18 months. In Germany, infants are vaccinated at 2, 3, 4 and 11 months; since 2020, the dose at 3 months is only recommended for premature infants.
Hepatitis B
France, Lithuania and Italy each recommend three immunization doses. In France, the recommended ages were 2, 4 and 16 months; since 2013, they are 2, 4 and 11 months. Italian children are usually vaccinated at 3, 5 and 11 months. In Lithuania, vaccination against Hepatitis B starts after birth and is continued at 1 and 6 months of age. By contrast, in Germany the recommended ages are 2, 3, 4 and 11 months. As of June 2020, the second dose at 3 months is no longer required for full-term infants.
Pneumococcal Conjugate Vaccine
From 2005 to 2008, French infants usually received four doses of pneumococcal vaccine at the ages of 2, 3, 4 and 16 months. In 2009, this changed to 2, 3 and 12 months and was lowered from 12 to 11 months in 2013. Pneumococcal vaccination was included in the Italian immunization schedule in 2008, starting at the age of 3 months and followed by doses at 5 and 11 months. In Lithuania, infants are vaccinated at 2, 4 and 12 months. In Germany, since 2006, recommended ages are 2, 3, 4 and 11 months. However, since 2015, only preterm infants receive the dose at 3 months of age.
Pneumococcal Polysaccharide Vaccine
Immunization with the 23-valent pneumococcal polysaccharide vaccine can start at 2 years of age, after completion of the pneumococcal conjugate vaccine. However, it is not included in the standard vaccination schedule of any countries in our analysis.
Rotavirus Vaccine
Since February 2006, at least one rotavirus vaccine has been licensed in the European Union. In Germany, the rotavirus vaccine was included in the standard vaccination schedule from 2013, starting at 6 weeks and continuing at 2 months of age. Italy introduced vaccination against rotavirus in 2017 at 3 months of age. Since 2018, every child in Lithuania should be immunized at 2 months of age. For French children, rotavirus is not included in the vaccination schedule.
Meningococcal Vaccine Serogroup C (MenC)
Since 2006, the meningococcal serogroup C vaccine is recommended in Germany starting at 12 months of age. Italian children receive one dose at 13 months. France introduced the vaccination four years later; in 2017, the age for the first dose was lowered to 5 months, with a second dose at 12 months.
Meningococcal Vaccine Serogroup B (MenB)
Since January 2013, a meningococcal vaccine against serogroup B has been available in the European Union. However, only Italy and Lithuania have included it in their standard vaccination schedule. In Italy, the vaccination was given from 2014-2016 at ages 7, 9 and 15 months, and since 2017, it has been implemented for ages 3, 4, 6 and 13 months. Lithuania has offered vaccination to children aged 3, 5 and 12 months since 2018.
Meningococcal Vaccine Serogroup ACWY (MenACWY)
Various vaccines against meningococcal serogroups ACWY are available in the European Union. Starting with Nimenrix in 2010, vaccination has been offered at 2 years of age, and since 2012 at 12 months of age. Four years later, the minimum age was lowered to 6 weeks. The vaccine is not included in the standard vaccination schedule of any country in our analysis.
Measles, Mumps, and Rubella (MMR)
The vaccination against measles, mumps and rubella is included in the regular vaccination schedule of all four countries. Since 2001, immunization in Germany begins at the age of 11 months and is completed at the earliest at 15 months. In France, vaccinations were given at 12 months and 3 years of age; since 2005, the second dose has been administered at 16 months. Lithuanian children are vaccinated against MMR at 15 months and 6 years of age. Italy vaccinated at 13 months and 12 years of age until 2007, with the booster age reduced to 6 years from 2008.
Varicella/Chickenpox Vaccine (VZV)
Immunization against chickenpox was introduced in Germany in 2004. A second dose after at least 4 weeks was recommended from 2006, if the first dose was combined with MMR. Since 2009, a refresher should always take place at the earliest at 15 months. In Italy, vaccinations against chickenpox are at the age of 13 months and 12 years, and since 2008, the booster has been given at the age of 6. There is no general vaccination recommendation against chickenpox in France and Lithuania.
Human Papillomavirus (HPV) Vaccines
Vaccination against HPV was included in the general vaccination recommendations in France and Germany in 2007. French girls were first vaccinated when they were 14 years old, and since 2013 when they were 11 years old. In Germany, vaccination was initially recommended from 12 years of age, and from 2014 the minimum age was reduced to 9 years. Vaccination has also been recommended for boys since 2018. In Italy, the minimum age for vaccination is 11 years. As of 2017, both boys and girls aged 12 and over can be vaccinated against HPV. Lithuania introduced HPV vaccination only for girls in 2016 from the age of 11 years.
Study Population
Vaccination records were available for 430 pediatric liver transplant recipients transplanted between January 2003 and April 2021. Patients were divided into groups depending on whether they had chronic (n = 363, 84.4%) or acute (n = 67, 15.6%) liver disease. More than 60% of those with chronic liver disease were diagnosed with biliary atresia (BA), followed by metabolic conditions (9.6%) and progressive familial intrahepatic cholestasis (PFIC, 6.9%). Further diagnoses were cryptogenic cirrhosis (6.3%), Alagille syndrome (3.6%), cystic fibrosis (3.3%) and other liver diseases (9.3%). Children with acute-onset liver disease (n = 67) were diagnosed with hepatic malignancy (50.7%), acute liver failure (38.8%) or neonatal-onset disease (9%), and one patient had Amanita phalloides poisoning; these children served as the control group for further analysis. The distribution of gender did not differ significantly between the groups (49.3% male in the chronic group, 58.2% male in the control group; p = 0.181). Moreover, there were no significant differences in age at the time of transplantation. Baseline characteristics are presented in Table 1.
Analysis of Age-Appropriate Vaccination Coverage
Prior to transplantation, around 66.5% of children with chronic liver disease had received the recommended number of pneumococcal conjugate vaccine doses for their age, compared to 79.3% in the control group. The pattern was similar for DTP-PV-Hib and Hepatitis B, although the overall vaccination rates were slightly higher than for the pneumococcal conjugate vaccine. Regarding rotavirus, vaccination coverage was significantly better in the acute transplant recipients, at 30.6% versus 16.6% in the chronic liver disease patients (p = 0.02). By contrast, significantly more children suffering from chronic liver disease (20.1%) were immunized with pneumococcal polysaccharide vaccine than in the control group (2.9%; p = 0.016). The same applies to Hepatitis A vaccination (42.0% versus 7.7%; p < 0.00001).
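The between-group comparisons above are standard tests of proportions; the paper does not state which test was used, so the sketch below uses Fisher's exact test, and the counts in it are purely illustrative placeholders, not values taken from this study.

```python
# Hedged sketch: comparing vaccination coverage between two groups with
# Fisher's exact test. Counts below are illustrative placeholders only.
from scipy.stats import fisher_exact

def compare_coverage(vaccinated_a, total_a, vaccinated_b, total_b):
    """Two-sided p-value for coverage in group A versus group B."""
    table = [[vaccinated_a, total_a - vaccinated_a],
             [vaccinated_b, total_b - vaccinated_b]]
    _, p_value = fisher_exact(table, alternative="two-sided")
    return p_value

# e.g., 20 of 65 eligible acute patients vs. 55 of 330 eligible chronic patients
print(compare_coverage(20, 65, 55, 330))
```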
Around two thirds (65.7%) of eligible children with chronic liver disease were vaccinated against meningococcus C, compared to 59.2% in the control group (p = 0.393). For meningococcus B, 22% of patients transplanted after 2013 had been vaccinated prior to transplantation, compared to 16.1% in the control group (p = 0.461). There was no significant difference between the groups regarding the quadrivalent meningococcal vaccine (ACWY), which has been available since 2010.
As a live vaccine, MMR is not formally recommended post-transplant, hence pre-transplant vaccination is important: more than 81% of all children with chronic liver disease were up to date, compared to 62.3% of those with acute disease (p = 0.003). However, fewer children received age-appropriate vaccination against VZV: almost two thirds (65.2%) of children with chronic liver disease and 54.9% in the control group (p = 0.171).
The HPV vaccination was introduced quite late compared to the other vaccinations and is not carried out until at least the age of 9 years. Exactly 20% of all eligible adolescents with chronic liver disease were vaccinated against HPV; coverage data are summarized in Table 2.
Country-Specific Vaccination Coverage between Healthy Children and Those with Chronic Liver Disease
In order to minimize the country-specific influence on vaccination, and to check whether parents are complying with early vaccine recommendations, the vaccination of eligible children with chronic liver disease was compared to healthy children (Table 3). Vaccination data on healthy children were taken from national reference databases as well as published data. Due to limited data, Lithuania was excluded from this comparison. Most healthy children examined in 2018 in Germany [12], France [13] and Italy [14] had received their first dose of MMR by the age of 24 months. Similar rates are found for transplant candidates in Germany and France; only in Italy is the rate lower, at 60.4%. By contrast, rates of meningococcal serogroup C vaccination, which is usually administered at 12 months, are lower in transplant candidates in all three countries compared to healthy individuals at their second birthday [12,14,15].
While 83% received their second MMR dose by their second birthday in France, this is only true for half of eligible transplant candidates. Similarly in Italy, almost 60% of those with chronic liver disease and 90% of healthy controls were vaccinated twice with MMR by the age of seven. In Germany around 70% of healthy controls and transplant recipients had received MMR twice by their second birthday.
Age at Vaccination for 1st and 2nd Dose with MMR in Children with Chronic Liver Disease in Germany, France and Italy
We analyzed the vaccination age of the 1st and 2nd doses of MMR in children with chronic liver disease. As shown in Figure 1, the median administration age for both doses is higher than the national recommendation; only the median age of the 2nd MMR dose in Italy is lower than the recommended age in the vaccination schedule.
Prevalence of Protective Antibody Levels against Hepatitis A and B, Measles and VZV Prior to Transplantation
In addition, antibody levels were measured prior to transplantation. All infants under 6 months of age were excluded from the calculation in order to minimize the effect of maternally derived antibodies. In the acute-onset group, significantly more children had protective titers against Hepatitis B compared to children with chronic liver disease (80.0% versus 63.3%, p = 0.021). This was similar for VZV, where significantly fewer children and adolescents with chronic liver disease (53.1%) had sufficient pre-transplant titers compared to the acute-onset group (68.8%, p = 0.042). By contrast, significantly more patients with chronic liver disease had protective Hepatitis A antibodies compared to the acute-onset group (60.1% versus 39.5%, p = 0.011). Comparison of the measles vaccination titers revealed no difference between the groups (62.9% in chronic, 70.7% in acute, p = 0.334). These results are summarized in Table 4.
Table 4. Prevalence of seropositive rates of IgG antibodies before transplantation in infants aged 6 months and older. Data are expressed as the percentage of protective titers (number of age-appropriately vaccinated children/total number of investigated children).
Prevalence of Protective Antibody Levels in Infants with Age-Appropriate Vaccination
Interestingly, comparing the prevalence of protective vaccination titers only in those patients with age-appropriate vaccination revealed a difference in varicella zoster: significantly fewer children with chronic liver disease and age-appropriate vaccination with VZV had protective titers compared to the control group. Results are shown in Table 5.
Table 5. Prevalence of seropositive rates of IgG antibodies before transplantation in age-appropriately vaccinated infants. Data are expressed as the percentage of protective titers (number of age-appropriately vaccinated children with a protective titer/total number of investigated children with an available titer prior to transplantation).
Discussion
This study reviews the immunization status of 430 children and adolescents at the time of liver transplantation at five European liver transplant centers, revealing under-immunization in this high-risk population. Only 80% of children with chronic liver disease were vaccinated against DTP-PV-Hib according to national standards. However, this is in line with observations from Switzerland [7] as well as from the United States and Canada [6]. By contrast, 74.1% of children were age-appropriately vaccinated against Hepatitis B, lower than the 84% of patients reported by Feldman et al. to be age-appropriately vaccinated prior to transplantation. These results show that despite regular medical visits, vaccination recommendations are poorly implemented, even though national recommendations have been simplified and combination vaccines are available. This may also reflect pediatricians' concerns about vaccinating in the presence of liver disease. This is particularly worrying, as Leung et al. found insufficient antibody titers in 67% of fully vaccinated children with Hepatitis B after liver transplantation [16]. Moreover, despite complete HBV vaccination, infection may occur post-transplant, as case reports suggest [17].
A rotavirus infection is one of the leading infectious causes of illness after pediatric solid organ transplantation [6]. However, rotavirus vaccination data are scarce in pediatric liver transplant patients. Since February 2006, a rotavirus vaccine has been authorized in Europe; however, it is not included in the standard vaccination schedule of every country. The proportion of vaccinated patients in our control group (30.6%) is significantly higher than among patients with chronic liver disease (16.6%; p = 0.02). The lower number of vaccinated children may reflect the fact that the diagnosis of chronic liver disease is often made in the first few weeks of life due to jaundice [18] and, consequently, within the narrow timeframe of the rotavirus vaccination. Interestingly, only 59% of hepatologists from the SPLIT group recommended rotavirus vaccine for infants prior to transplantation [19]. The situation is similar with the pneumococcal conjugate vaccine, for which only two thirds of children with chronic liver disease were up to date pre-transplantation. From the second birthday, vaccination can be extended with the 23-valent pneumococcal polysaccharide vaccine. As an indication vaccination, this also explains why significantly more chronic liver disease patients (almost 20%) were vaccinated with it than controls (2.9%, p = 0.02). Data in adults with liver cirrhosis suggest that they have lower antibody levels to pneumococcal capsular polysaccharide after vaccination compared to healthy individuals, and that after transplantation these drop to pre-vaccination levels [20].
An infection with chickenpox can have a prolonged, severe course under immunosuppression [21,22] and is one of the leading causes of VPI following pediatric solid organ transplantation [5]. Live-attenuated vaccines are not generally recommended in immunosuppressed patients, and vaccination with MMR and varicella should not be given before the age of 6 months at the earliest [23]. However, the window of opportunity for vaccination is usually limited prior to transplantation because pediatric liver transplant candidates often have unstable disease courses. Thus, it is essential to immunize eligible children at an early age. Our data show that children suffering from chronic liver disease had a median MMR vaccination age close to the national recommendations (Figure 1). Moreover, vaccination rates were better than in the control group: 81% versus 62% (p = 0.003) for MMR and 65% versus 54.9% (p = 0.171) for varicella were up to date pre-transplant. However, this is less than the 90% of all patients vaccinated on time with both live vaccines prior to transplant observed by Feldman et al. [6].
On the one hand, the median age of the vaccinated children is close to the national recommendations; on the other, the country-specific comparison of vaccinations shows that this is far below the national reference data for Italy, but also for the second MMR dose in France. This may be a country-specific reflection of vaccination hesitancy, which recently led to mandatory vaccinations in France [24], Italy [25] and Germany [26]. In addition, there is a wide variation in immunization practices for pediatric liver transplant candidates. For example, 15% of pediatric transplant hepatologists always recommend live vaccines and only 84% sometimes [19]. In view of this data, the information that these live vaccinations should be carried out quickly does not seem to reach all families, even if the coverage with MenC is still lower in all countries (Table 3). To improve this, several factors must be addressed: vaccination education of parents [27] and co-treating pediatricians, reminder systems for upcoming or missed vaccination windows [28], digitally available vaccination records as well as standardization of immunization schedules.
Vaccination titers can be used for a better assessment of vaccination status. The group with chronic liver disease had significantly better titers for Hepatitis A, but also had significantly better coverage for indication vaccines. Interestingly, the prevalence of seroprotective titers for Hepatitis B and VZV is higher in children with acute liver disease than in children with chronic disease (Table 4), with no difference for measles. However, this may also represent a poorer response to vaccination in the context of liver disease, as children with biliary atresia show diminished humoral immunity to MMR and VZV vaccines compared to healthy controls [29]. If only those children who were vaccinated according to their age are examined, significantly fewer children with chronic liver disease had protective titers compared to the control group (Table 5). However, after transplantation, titers may fall due to immunosuppression. In a Swiss study, only children with a history of chickenpox had detectable VZV antibodies, while those previously vaccinated did not [7]. Yoeli et al. demonstrated that non-immune VZV patients after liver transplantation had received fewer doses prior to transplantation, were younger at transplantation and had less time between their last VZV dose and transplantation [30]. Thus, this represents a balancing act: vaccination should take place early enough, but not too early, so as not to compromise the success of vaccination. Therefore, vaccination titers can be measured regularly in order to document the response on the one hand and, on the other, the need for a vaccine booster before transplantation, as described by L'Huillier et al. [7]. However, cell-mediated aspects of immunity are disregarded when only antibody titers are measured [31], and even patients with non-protective antibody titers may mount an effective immune reaction.
The current study has some limitations. As this is a multicenter investigation, the national vaccination calendars differ, meaning that vaccination priorities and timings in each country have shifted several times over the study period of more than 18 years. For example, the rotavirus vaccine is not included in the schedules of all countries analyzed, and the cost of an indication vaccination is not necessarily covered by the health system, so it may be omitted by families for financial reasons. In addition, the willingness to vaccinate varies, on the one hand from country to country and, on the other, over time, resulting in recent mandatory vaccinations in France and Italy [24,25]. In Germany, the measles vaccination became mandatory for entry into kindergarten or school in March 2020 [26]. In addition, absolute levels of vaccination titers were not investigated in this study, as their comparability is limited due to the different laboratories and measurement methods used over time. The patients with acute liver disease were defined as a control group in this study, but their vaccination titers may also be subject to changes due to the underlying disease. Moreover, the level of the vaccination titer can differ significantly even if the number of patients with a protective level does not differ.
In conclusion, incomplete vaccination status and insufficient antibody levels are common prior to pediatric liver transplantation. As infection is relevant for morbidity and mortality in the short and long term after transplantation, new strategies should be found, in particular, to reduce VPI. Mandatory vaccination may be a start, but improvements in the vaccination education of parents and pediatricians to enhance acceptance, as well as reminder systems for vaccination windows, seem useful. Moreover, standardized vaccination recommendations across Europe, including new vaccines (e.g., rotavirus), with digitally accessible vaccination records as well as regular serological analyses of vaccination titers prior to transplantation may be helpful here, with re-vaccination if necessary.
Institutional Review Board Statement: Ethical review and approval were waived for this study, due to retrospective analysis.
Informed Consent Statement: All parents/caregivers of patients analyzed in this study provided informed consent allowing their children's data to be used for scientific purposes at the time of hospital admission. Patient data were anonymized prior to analysis.
Data Availability Statement: All data requests should be submitted to the corresponding author for consideration. Access to anonymized data may be granted, following review.
Application of Generalized Regression Neural Network and Gaussian Process Regression for Modelling Hybrid Micro-Electric Discharge Machining: A Comparative Study
Micro-Electric Discharge Machining (µ-EDM) is one of the widely applied micromanufacturing processes. However, it has several limitations, such as a low cutting rate, difficult debris removal, and poor surface integrity. Hybridization of the µ-EDM is proposed as an alternative to overcome the process limitations. Conversely, it complicates the process nature and poses a challenge for modelling and predicting critical process responses. Therefore, in this work, two distinct, nonparametric, previously unreported, workpiece-material-independent models using a Generalized Regression Neural Network (GRNN) and Gaussian Process Regression (GPR) were developed and compared to assess their performance with limited training data. Various smoothing factors and kernels were tested for GRNN and GPR, respectively. The predictions of the models were compared in terms of the mean absolute percentage error, root mean square error, and coefficient of determination. The results showed that GPR outperforms GRNN and accurately predicts the µ-EDM process responses. The GRNN's performance was better for the less stochastic output with a discernible pattern than for the other outputs. The Automatic Relevance Determination (ARD) squared exponential kernel was found to be the best performing kernel among those chosen. GPR models can be used with reasonable accuracy to predetermine critical process outputs as they have R² values above 0.90 for both training and validation data for all outputs. This work paves the way for future industrial implementation of GPR to model and predict the outputs of complex hybrid machining processes.
Introduction
Continuing miniaturization in numerous technical disciplines, such as the electronics industry or microsystems technology, demands components with microfeatures such as holes and channels. Micro-electric discharge machining (µ-EDM) is one of the leading processes widely used for microfeature fabrication. Different kinds of materials, regardless of their hardness and toughness, can be machined easily using µ-EDM. However, the process is slow, and fabrication of deep features or high aspect ratios is a challenge for µ-EDM due to inefficient debris removal, low discharge energy, poor machining stability, etc. To mitigate such challenges, hybridization of µ-EDM is proposed in the literature, either by using energy assistance or through the combined application of one or more machining processes in material removal, to enhance specific process outputs. In energy-assisted EDM, the source could be a laser, a vibrating device, a magnet, etc. Although hybridization certainly improves the process, it also severely complicates the nature of the process, and it becomes challenging to establish a mathematical model for such a process. In µ-EDM, material removal arises from melting and evaporation as a spark is generated between the tool and workpiece in a dielectric fluid. Adding other energy sources poses difficulties in estimating the time and tool required for a particular job and in estimating the final surface characteristics of the manufactured feature. Surfaces, particularly in the microdomain, play a critical role and thus have to be tailored according to need and application.
Past and contemporary research in the area of hybrid µ-EDM has chiefly aimed at demonstrating process capabilities, whether it be laser hybridization [1], magnetic field-assisted hybrid µ-EDM [2], ultrasonic or low-frequency vibration assistance [3][4][5], or a combination of micro-EDM with electrophoretic deposition [6]. Attempts toward modelling the process have also been made using methods including the finite element method (FEM) [7], fluid dynamics [8], graph theory [9], and empirical modelling [10,11]. However, comparisons of different methods for modelling and predicting machining response characteristics such as cutting rate and tool wear rate have rarely been reported [9,11]. Unune et al. [10] reported values predicted using empirical models that were found to be within 7% of experimental results. In recent work, a comparison was made between ANN-Particle Swarm Optimization (PSO) and FEM-based models for predicting process outputs of the micro-EDM drilling process without any hybridization [12]. They found that ANN-PSO could more accurately predict the material removal rate and dimensional deviation. In another work, the authors found an adaptive neuro-fuzzy inference system (ANFIS) model to perform better than ANN in predicting micro-EDM responses, viz., metal removal rate, surface roughness, and tool wear ratio [13]. However, there was no hybridization of the micro-EDM process in either of those works. At the macro scale, numerical and thermal modelling of hybrid electric discharge and arc machining was attempted considering latent heat and temperature-dependent workpiece properties [14]. Further literature on the implementation of recent machine learning techniques such as the Backpropagation Neural Network (BPNN), Generalized Regression Neural Network (GRNN), Gaussian Process Regression (GPR), etc., to model this complicated process could not be found.
The prime benefit of different machine learning techniques lies in modelling and predicting highly complicated nonlinear processes and phenomena. ANNs are very popular and have been extensively used for modelling complex manufacturing process outputs such as microchannel dimensions [15], intelligent manufacturing [16], shear strength [17], and material removal rate [18], etc. However, ANNs require a large number of parameters to be correctly tuned. This limits their usage in cases where the training dataset is limited. On the other hand, in GRNN, only one parameter needs to be optimized to fit the model. Moreover, GRNN has shown better efficacy for predictive modelling [19]. This makes GRNN a suitable candidate for modelling processes such as hybrid µ-EDM, which are highly energy-intensive and slow processes (thus restricting the feasibility of performing a large number of experiments) but in which the prediction of different process outputs is also critical.
GPR, like GRNN, offers a probabilistic and nonparametric approach to modelling, but certain hyperparameters are still required to be optimized for better model fitting. GPR has been used to model and predict many complex phenomena, such as oxygen consumption in the steel making process [20], deposit shape in cold metal transfer based wire arc additive manufacturing [21], the ultimate tensile strength in friction stir welding [22], and weld quality uncertainty in hybrid-tandem gas metal arc welding [23]. Although GPR has not been used to model hybrid µ-EDM, it has been used to model Wire-EDM [24,25] and EDM [26]. The authors of these articles find GPR more accurate than BPNN and find it suitable for modelling with small datasets. Thus, it is only natural to attempt to model hybrid µ-EDM using GPR.
In light of the discussed literature, it is clear that GRNN and GPR techniques could be applied for modelling complex hybrid machining. Such studies are rarely available in the literature. The application of GRNN and GPR for modelling vibration-assisted micro-EDM has not been reported previously. Thus, this article explores the efficacy of two different nonparametric modelling techniques, GRNN and GPR, for predicting the different significant responses in a hybrid µ-EDM process, which has not been previously attempted. Therefore, in this work, blind microholes of various depths were initially fabricated using µ-EDM on different materials by varying the energy input and vibration frequency. Post-fabrication, three key process outputs were measured, including hole drilling rate, rate of tool wear, and centerline average surface roughness of the bottom surface. These outputs are then modelled using GRNN and GPR, and a comparison is made in terms of mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination (R²).
Microfeature Fabrication Process
Figure 1 displays a schematic diagram of the machining setup and shows the microhole fabrication process using µ-EDM. In this process, the electrode, also referred to as the tool, is given a negative charge while the workpiece carries a positive charge. This electrode is attached to a spindle and rotated at a selected rotation per minute (RPM). The workpiece is placed upon a vibrating device (which is the additional source of energy). A suitable dielectric fills the gap between them. As they are brought closer to each other, at a certain distance the dielectric breaks down and a spark is initiated between the two. The high heat generated due to the spark melts and evaporates material from both the electrode and workpiece. As a result, a cavity of the shape of the rotating electrode is formed in the workpiece while some portion of the electrode is worn out.
Experimentation & Process Outputs
This study selected AISI 316 SS, Co29Cr6Mo, and BeCu as work materials. A WC electrode of ø0.5 mm was used as the tool. Experimental runs were carried out on a DT-110i multi-purpose micromachine, as shown in Figure 2. A constant tool speed of 1500 rpm and feed rate of 0.2 mm/min were used during machining. An electromagnetic actuator was used to vibrate the work material. The constant electrode RPM and feed rate were selected based on previous pilot experiments conducted for the different materials. The machine was found to function best at the selected constant parameters.
The discharge energy (DE) in RC-circuit-based micro-EDM typically depends on the voltage and capacitance values; the energy per discharge is proportional to the product of the capacitance and the square of the voltage (E = CV²/2). The vibrational frequency has been identified as playing an important role in improving micro-EDM performance [4]. However, work material and aspect ratio have not been researched well in the available literature. Therefore, to conduct the experimentation, the energy input, vibrational frequency, and microhole aspect ratio were selected as the control factors; in addition, the work material is considered a categorical factor. The details regarding the control factors are provided in Table 1. The aspect ratio is taken as the ratio of the intended hole depth to the electrode diameter. The Central Composite Design (CCD) approach was used to plan 31 experimental runs for the control factors, as shown in Table 2. In addition, three more experiments were conducted, one on each material, at new design points to help in model validation. Figure 3 shows the fabricated microholes on the AISI 316 SS workpiece.
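As an illustration of this energy relation, the short sketch below evaluates E = CV²/2 for an RC-type generator; the voltage and capacitance values in it are placeholders, not the levels from Table 1.

```python
# Minimal sketch of the RC-circuit discharge energy relation E = C*V^2/2.
# The settings below are hypothetical, not the levels used in this study.
def discharge_energy_uJ(voltage_V: float, capacitance_nF: float) -> float:
    """Energy per spark in microjoules for an RC-type micro-EDM generator."""
    return 0.5 * (capacitance_nF * 1e-9) * voltage_V ** 2 * 1e6

for V, C in [(80, 10), (100, 10), (120, 100)]:   # hypothetical settings
    print(f"V={V} V, C={C} nF -> E={discharge_energy_uJ(V, C):.2f} uJ")
```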
Drill rate (DR), tool wear rate (TWR), and arithmetical mean height of surface roughness (Ra) were the process outputs. DR and TWR were calculated using Equations (1) and (2), respectively. DR is basically the depth of hole drilled per unit time, and TWR is the length of tool worn out per unit depth of hole drilled:

$$\mathrm{DR} = \frac{\text{Actual depth of drilled hole}\ (\mu\text{m})}{\text{Machining time}\ (\text{min})} \quad (1)$$

$$\mathrm{TWR} = \frac{\text{Tool length worn}\ (\mu\text{m})}{\text{Actual depth of drilled hole}\ (\mu\text{m})} \quad (2)$$

Machining time was verified using a digital stopwatch, while tool wear was measured as the tool height loss pre- and post-machining. The arithmetical mean height of surface roughness (Ra) of the hole bottom and the depth of the drilled hole were measured using the Profilm 3D-FILMETRICS® white light interference profilometer. Three measurements were made at different hole locations and averaged to obtain the mean Ra value.
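A minimal numerical sketch of Equations (1) and (2) follows; the depth, time and wear values in it are hypothetical and only illustrate the unit handling.

```python
# Minimal sketch of Equations (1) and (2): drill rate and tool wear rate.
def drill_rate(hole_depth_um: float, machining_time_min: float) -> float:
    """Drill rate in um/min (Equation 1)."""
    return hole_depth_um / machining_time_min

def tool_wear_rate(tool_wear_um: float, hole_depth_um: float) -> float:
    """Tool length worn per unit depth of hole drilled (Equation 2)."""
    return tool_wear_um / hole_depth_um

depth_um, time_min, wear_um = 480.0, 12.5, 35.0   # hypothetical measurements
print(drill_rate(depth_um, time_min), tool_wear_rate(wear_um, depth_um))
```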
Modelling

GRNN Model

GRNN is a neural network that does not require any backpropagation to reach the optimal solution. Data is passed only once through the network, and thus it is also known as a feed-forward type network. The network consists of four layers, viz. input, pattern, summation, and output, as shown in Figure 4. The neurons at the input layer correspond to the independent variables that affect the output. Each input neuron is connected to all pattern neurons. The neurons in the pattern layer correspond to vectors in the training set. At this layer, the Euclidean distance (d_i) between the input and the training data is calculated, and an exponential activation function is used to determine the weights. The output from the pattern layer is then fed into two summation neurons, S_j and S_d, also known as the numerator and denominator, respectively. The formulas for calculating S_j, S_d and d_i are provided in Equations (3)-(5), respectively:

$$S_j = \sum_{i=1}^{n} Y_i \exp\!\left(-\frac{d_i^2}{2\sigma^2}\right) \quad (3)$$

$$S_d = \sum_{i=1}^{n} \exp\!\left(-\frac{d_i^2}{2\sigma^2}\right) \quad (4)$$

where

$$d_i^2 = (X - X_i)^{T}(X - X_i) \quad (5)$$

The data from these neurons are then divided to calculate the final output, Y(X), as shown in Equation (6):

$$Y(X) = \frac{S_j}{S_d} \quad (6)$$

X_i represents the input used during training, while X represents the sample input. Y_i represents the target set used for training, and Y(X) is the calculated output corresponding to input vector X, while n is the total number of training datasets. σ is the smoothing factor, also known as the spread, and is the only tunable parameter that determines the accuracy of the network. MATLAB R2018b® was used to perform the GRNN modelling in this work.
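A minimal NumPy sketch of this single-pass prediction (Equations (3)-(6)) is given below; the original work used MATLAB's GRNN implementation, so this is only an equivalent formulation, and the inputs are assumed to be scaled to comparable ranges.

```python
# Minimal NumPy sketch of a GRNN prediction pass (Equations (3)-(6)).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.6):
    """Predict outputs for X_query with a GRNN using smoothing factor sigma."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(np.asarray(X_query, dtype=float)):
        d2 = np.sum((X_train - x) ** 2, axis=1)        # squared Euclidean distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))           # pattern-layer activations
        preds.append(np.dot(w, y_train) / np.sum(w))   # numerator / denominator
    return np.array(preds)
```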
GPR Model
Gaussian Process Regression (GPR), a class of machine learning, obtains nonparametric and probabilistic models for the response variables of the process under consideration. Rasmussen and Williams [27] define a Gaussian process as "a collection of random variables, any finite number of which have a joint Gaussian distribution". A Gaussian process establishes the relationship between the process parameters and the response variables. More details regarding GPR modelling are available in our previous paper [5]. Several categories of kernel functions can be selected for establishing the model; the exponential, squared exponential, rational quadratic, and Matern kernel functions are widely used.
If y is the output for an input vector x, then for a Gaussian process

$$y = f(x) + \varepsilon,$$

such that both f(x) and ε are normally distributed. This is to say

$$f(x) \sim \mathcal{GP}\big(\mu(x), k(x, x')\big), \qquad \varepsilon \sim \mathcal{N}(0, \sigma_n^2),$$

where µ(x) is the mean function and k(x,x') is the covariance function. The process can be simplified by assuming the prior mean function to be zero. Hence, for a set of response variables y corresponding to a set of input process parameters X, the joint distribution with a set of predictions y_new corresponding to inputs X_new is

$$\begin{bmatrix} y \\ y_{new} \end{bmatrix} \sim \mathcal{N}\!\left(0,\; \begin{bmatrix} K(X,X) + \sigma_n^2 I & K(X, X_{new}) \\ K(X_{new}, X) & K(X_{new}, X_{new}) \end{bmatrix}\right),$$

where I is the identity matrix. The posterior predictions as a Gaussian distribution can then be given as

$$y_{new} \mid X, y, X_{new} \sim \mathcal{N}\big(K(X_{new},X)\,[K(X,X)+\sigma_n^2 I]^{-1} y,\; K(X_{new},X_{new}) - K(X_{new},X)\,[K(X,X)+\sigma_n^2 I]^{-1} K(X,X_{new})\big).$$

Although the method is a nonparametric modelling approach, specific hyperparameters dependent on the covariance function are required to be optimized for a better fitting of the model. Further details are available in the study by Rasmussen and Williams [27]. For GPR modelling, MATLAB R2018b® was used.
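The posterior above can be written out directly; the minimal NumPy sketch below does so for a single-length-scale squared exponential kernel with fixed (not optimized) hyperparameters, purely as an illustration of the equations.

```python
# Minimal sketch of the zero-mean GP posterior with a squared exponential kernel.
import numpy as np

def sq_exp_kernel(A, B, sigma_s=1.0, length=1.0):
    """K[i, j] = sigma_s^2 * exp(-0.5 * ||A_i - B_j||^2 / length^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_s ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, X_new, noise=1e-3, **kw):
    """Posterior mean and covariance at X_new given training data (X, y)."""
    K = sq_exp_kernel(X, X, **kw) + noise * np.eye(len(X))
    Ks = sq_exp_kernel(X_new, X, **kw)
    Kss = sq_exp_kernel(X_new, X_new, **kw)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```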
In this work, different ARD-based kernels were chosen for the modelling as they allow dimensional independence, which means a separate length scale is possible for each predictor. Exponential, squared exponential, Matern 3/2, Matern 5/2, and rational quadratic ARD kernels were chosen; with

$$r = \sqrt{\sum_{j=1}^{m} \frac{(x_j - x'_j)^2}{\sigma_{l,j}^2}},$$

they are defined as follows.

ARD Exponential:
$$k(x, x') = \sigma_s^2 \exp(-r)$$

ARD Squared Exponential:
$$k(x, x') = \sigma_s^2 \exp\!\left(-\frac{1}{2}\sum_{j=1}^{m} \frac{(x_j - x'_j)^2}{\sigma_{l,j}^2}\right)$$

ARD Matern 3/2:
$$k(x, x') = \sigma_s^2\left(1 + \sqrt{3}\,r\right)\exp\!\left(-\sqrt{3}\,r\right)$$

ARD Matern 5/2:
$$k(x, x') = \sigma_s^2\left(1 + \sqrt{5}\,r + \frac{5r^2}{3}\right)\exp\!\left(-\sqrt{5}\,r\right)$$

ARD Rational Quadratic:
$$k(x, x') = \sigma_s^2\left(1 + \frac{1}{2\alpha}\sum_{j=1}^{m} \frac{(x_j - x'_j)^2}{\sigma_{l,j}^2}\right)^{-\alpha}$$

where σ_s is the signal standard deviation, σ_{l,j} is the length scale of the j-th predictor, α is the scale-mixture factor and m is the number of predictors.
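For readers without MATLAB, an ARD squared exponential GPR can be set up equivalently in scikit-learn as sketched below; this toolchain is an assumption of this note, not that of the original study, and the data arrays are random placeholders standing in for the 31 design points.

```python
# Sketch of a GPR with an ARD (anisotropic) squared exponential kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

X = np.random.rand(31, 4)   # energy, frequency, aspect ratio, material (encoded); placeholders
y = np.random.rand(31)      # e.g., drill rate (placeholder values)

# One length scale per predictor -> automatic relevance determination.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1])) \
         + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gpr.predict(X[:3], return_std=True)   # predictive mean and std
```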
Drill Rate
The first attempt at modelling drill rate was made using GRNN. As the smoothing factor is the only parameter on which the prediction accuracy of a GRNN depends, its optimization is crucial. The smoothing factor varies between 0 and 1. A value of the smoothing factor near 0 may show better results during training but cannot predict accurately, as such models have poor generalization. On the other hand, a smoothing factor of 1 may generalize better but would also have a much higher value of error [5]. In addition, the same smoothing factor may not be adequate for all models, and thus, for every model, a search must be performed to determine the appropriate value of the smoothing factor. The model was trained using the 31 data points with different smoothing factors. Then, to test the model adequacy, the outputs for the three validation experiments (V1, V2 & V3) were compared with the experimental results. The results are shown in Figure 5, in which Sigma denotes the smoothing factor. It is observed that GRNN is not very adept at predicting these values. Gross under-prediction is observed for the validation experiment performed on BeCu irrespective of the smoothing factor, while severe over-prediction is observed for the experiment on Co29Cr6Mo. At lower smoothing factor values, GRNN fails to predict the trend, and only after the value of the smoothing factor rose above 0.6 did the prediction begin to resemble the experimental trend. It was found that a smoothing factor of 0.6 (shown in black) provided the best fit for the validation experiments, as it had the least mean absolute error (MAE) and mean square error (MSE), and thus it was chosen as the final value for the model. The resulting predicted values for all the experiments and their comparison with the actual experimental values are shown in Figure 6.
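The smoothing-factor search described here can be sketched as a simple grid search over the held-out validation error, re-using the grnn_predict sketch given earlier; the grid itself is an assumption, since the paper does not list the exact values tried.

```python
# Sketch: pick the smoothing factor that minimizes MAE on the validation runs.
# Assumes X_train/y_train (31 design points) and X_val/y_val (3 runs) exist,
# and that grnn_predict is the function sketched in the GRNN section above.
import numpy as np

def select_sigma(X_train, y_train, X_val, y_val, grid=np.arange(0.1, 1.01, 0.1)):
    best_sigma, best_mae = None, np.inf
    for sigma in grid:
        pred = grnn_predict(X_train, y_train, X_val, sigma=sigma)
        mae = np.mean(np.abs(pred - np.asarray(y_val, dtype=float)))
        if mae < best_mae:
            best_sigma, best_mae = sigma, mae
    return best_sigma, best_mae
```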
After that, GPR was used to model the drill rate. Different ARD-based kernels were chosen for the modelling as they allow dimensional independence, which means for each predictor a separate length scale is possible. Exponential, squared exponential, Matern 3/2, Matern 5/2, and rational quadratic ARD kernels were tested, and the results for the three validation experiments, similar to GRNN, were evaluated to test the models. The results are shown in Figure 7. It can be observed that all kernels show better efficacy at demonstrating the general trend than the GRNN models with different smoothing factors. From these different kernels, the ARD squared exponential (shown in black) was chosen for the final model as it has the lowest MAE and MSE. The predicted values of this model, the prediction intervals for the 95% confidence level, and the corresponding experimental findings are displayed in Figure 8.

Figure 9 presents the comparison of these two models in terms of MAPE, RMSE, and R². The extension "-V" represents the value of these indices for the validation experiments only. The histograms show that GPR performs better than GRNN at predicting DR. A substantial difference between these two models is distinct for all the characteristic indices. GPR shows a good correlation with the experimental results for both the training data and the validation data. However, it should also be noted that RMSE increases almost five-fold for the validation data, even for GPR; however, it follows the experimental trend as seen in Figure 8.
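The three indices used for this comparison are straightforward to compute; a minimal sketch using their standard definitions (not code from the original study) follows.

```python
# Minimal sketch of the three comparison indices: MAPE, RMSE and R^2.
import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```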
Tool Wear Rate
For modelling tool wear rate, a similar procedure was followed. First, different smoothing factors were tested for GRNN; the best value was chosen based on the least error for the validation data. It was found that the error was minimum when the smoothing factor was 1, as seen in Figure 10. For TWR, too, it is observed that GRNN fails to follow the experimental trend, as in the case of DR. Figure 11 shows the GRNN predicted values and their comparison with the experimental values. GPR again excels at modelling, and in fact the difference among the various kernels is minute, as can be observed from Figure 12. Still, the ARD squared exponential has the best performance in terms of MSE and MAE. The GPR predicted results and the corresponding 95% confidence level prediction interval, along with the experimental data, are shown in Figure 13. The histogram of model performance indices, as shown in Figure 14, also highlights this difference.
Surface Roughness
When it comes to the roughness of the machined surface, as with TWR, a smoothing factor of 1 was again observed to be the optimum value and was used to predict the outputs using GRNN for model comparison. The results are shown in Figures 15 and 16. Unsurprisingly, the ARD squared exponential came out as the best kernel function, as seen in Figure 17, and was used to predict the outputs, as shown in Figure 18.
Although GPR still outperforms GRNN, the difference between the two models is not as significant as in the case of the other two outputs. This is evident from the performance indices histogram, as shown in Figure 19. The GRNN model has a usable R² value for both the training data and the validation experiments, and its MAPE and RMSE do not fall too far behind the GPR model. GRNN, in this case, proves to be better at modelling surface roughness than tool wear or drill rate.

This improvement in the performance of GRNN is possible since, unlike the other two outputs, a discernible pattern is evident in the output data for surface roughness, as seen in Figure 20. The discharge energy has a far more significant influence on surface roughness than on the other outputs, and thus an almost linear relationship between the two is observed. This 'simplification' helps GRNN perform better for this output than for the others, as such a pattern is not observable in the data for drill rate and tool wear rate. However, despite this enhanced performance, it still lags behind GPR in every aspect.
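The near-linear Ra-energy trend noted here can be checked with a simple least-squares fit; the sketch below does so on hypothetical (energy, Ra) pairs, since the measured data of Figure 20 are not reproduced here.

```python
# Sketch: quantify an approximately linear Ra-energy trend with a linear fit.
import numpy as np

energy = np.array([10.0, 25.0, 50.0, 100.0, 200.0])   # hypothetical energy levels
ra = np.array([0.35, 0.55, 0.80, 1.30, 2.10])          # hypothetical Ra values (um)

slope, intercept = np.polyfit(energy, ra, 1)            # least-squares line
r = np.corrcoef(energy, ra)[0, 1]                       # Pearson correlation
print(f"Ra ~ {slope:.4f}*E + {intercept:.3f}, Pearson r = {r:.3f}")
```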
Conclusions
In this article, two distinct approaches, namely GRNN and GPR, were applied for the modelling and prediction of the process performance of a complex microscale hybrid machining process. The performance of GRNN and GPR in the prediction of process outputs with limited training data was analyzed in terms of MAPE, RMSE, and R². Initially, the experiments were planned using the CCD approach of RSM and then conducted to fabricate microholes using vibration-assisted hybrid µ-EDM on different materials to eliminate the workpiece dependency of the models. The energy input to the machine, the vibration frequency, and the aspect ratio of the microfeature were varied to conduct the experiments. Their effects on the rate of drilling, the rate of tool wear, and the roughness of the machined surface, which are crucial outputs of the process, were assessed. The following inferences may be drawn from the conducted work:
• Among the two approaches tested, the prediction accuracy of the GPR model outperformed GRNN in the case of all output parameters.
• GRNN-developed models for drill rate and tool wear rate cannot be used for predictions, as the correlation for validation data points is extremely low, and the MAPE and RMSE errors are extremely high.
• For surface roughness, GRNN performance improved significantly but still lags behind GPR. It was observed that this improvement is possible because surface roughness is less stochastic in nature and has a direct relationship with energy input.
• An ARD squared exponential kernel was observed to be the best performing kernel for all output models using GPR.
Based on the work done in this article, it may be concluded that GPR is a much better candidate than GRNN for modelling seemingly stochastic outputs of complex machining processes even with limited training data.
Future Work
Considering the proven efficacy of GPR in this research work, GPR models can be implemented in the future, particularly for vibration-assisted micro-EDM, to predetermine with reasonable accuracy the time and tool requirements for performing a particular drilling operation. Using more training data, the model may be extended in the future to other operations performed using hybrid micro-EDM, such as micromilling, to develop a single holistic model to predict the outputs of all operations performed with vibration-assisted hybrid µ-EDM. An extension of this work could also combine a Big Data approach with GPR modelling by extracting any possible real-time continuous data from different machining processes.
Figure 5. GRNN model testing for DR with different smoothing factors.
Figure 6. Experimental and GRNN predicted values for DR training data.
Figure 8. Experimental and GRNN predicted values for DR.
Figure 9. Comparison histogram of model characteristics.
Figure 10. GRNN model testing for TWR with different smoothing factors.
Figure 11. Experimental and GRNN predicted values for TWR training data.
Figure 12. GPR model testing for TWR with different kernels.
Figure 13. Experimental and GPR predicted values for TWR training data.
Figure 14. Comparison histogram of model characteristics for TWR.
Figure 15. GRNN model testing for Ra with different smoothing factors.
Figure 16. Experimental and GRNN predicted values for Ra training data.
Figure 17. GPR model testing for Ra with different kernels.
Figure 18. Experimental and GPR predicted values for Ra training data.
Figure 19. Comparison histogram of model characteristics for Ra.
Figure 20. Experimental Ra data points stacked against energy.
Table 1. Parameters and their levels.
Synthesis Routes on Electrochemical Behavior of Co-Free Layered LiNi0.5Mn0.5O2 Cathode for Li-Ion Batteries
Co-free layered LiNi0.5Mn0.5O2 has received considerable attention due to its high theoretical capacity (280 mAh g−1) and lower cost compared with LiCoO2. The ability of nickel to be oxidized (Ni2+/Ni3+/Ni4+) makes it the electrochemically active species with a low activation energy barrier, while the stability of Mn4+ provides a stable host structure. However, selection of an appropriate preparation method and conditions is critical to obtaining an ideal layered structure of LiNi0.5Mn0.5O2 with good electrochemical performance. In this study, layered LiNi0.5Mn0.5O2 has been synthesized by sol-gel and solid-state routes. According to the XRD, the sol-gel method provides a pure phase, while the solid-state process only minimizes the secondary phases to a certain limit. The Ni2+/Mn4+ content in the sol-gel process was higher than in the solid-state reaction, which may be due to the chemical composition homogeneity of the sol-gel samples. Regarding the electrochemical behavior, the sol-gel process is better than the solid-state reaction: the discharge capacities are 145 mAh/g and 91 mAh/g for the sol-gel and solid-state samples, respectively.
Introduction
High-capacity cathode materials typically contain a certain amount of cobalt to stabilize the structure and promote their electrochemical properties. As a transition metal, cobalt changes its oxidation state to maintain charge neutrality when lithium ions are extracted from the cathode. However, the price of cobalt has risen so sharply that Co-free cathode materials have recently been proposed and investigated [1][2][3]. Layered lithium nickel manganese oxide (LiNi 0.5 Mn 0.5 O 2 ) is a candidate Co-free cathode material that possesses a high theoretical capacity (280 mAh/g), good cycling stability, and small volume changes [4,5]. Ohzuku and Makimura successfully demonstrated the synthesis of a 1:1 solid solution of LiNiO 2 and LiMnO 2 , namely LiNi 0.5 Mn 0.5 O 2 , using a solid-state technique with heating at 1000 °C for 15 h, and it remains one of the most attractive materials [6]. The structure of LiNi 0.5 Mn 0.5 O 2 consists of layers of transition metals (Ni and Mn) separated from Li layers by oxygen. Both Li and the transition metals (TM) are octahedrally coordinated by oxygen, but Li diffuses from site to site by hopping through intermediate tetrahedral sites. Li migration during the charge-discharge process therefore proceeds at a rate governed by an activation energy barrier, and the energy required for a Li ion to cross the activated state is likely to depend on the size of the tetrahedral site, an effect also referred to as the strain effect. However, Ni/Li exchange (Ni/Li disordering) usually occurs between the Li layer and the TM layer in these materials during synthesis and electrochemical cycling [7]: a certain amount of Li + occupies the transition metal slab and vice versa, and a further indication of Li + /Ni 2+ mixing is the formation of a Li 2 MnO 3 -like phase. Li + /Ni 2+ disorder increases the likelihood of forming Li dumbbells, which affects the high-voltage process involving removal of the tetrahedral Li ions [8]. Furthermore, Ni is the electrochemically active transition metal and has a low activation energy barrier because its valence state is low, which promotes Li-ion diffusion. Manganese, on the other hand, is electrochemically passive, and its primary role is to maintain the stability of the host crystal. As synthesized, layered LiNi 0.5 Mn 0.5 O 2 typically contains approximately 8-10% cation anti-site defects, meaning that about 10% of the Li + occupies the transition metal slab and vice versa [7]. This cation mixing is promoted by two factors. The first is Ni 2+ substitution into the lithium layers, favored by the similar cationic radii of Li + (0.76 Å) and Ni 2+ (0.69 Å). The second is the formation of Li + /Mn 4+ ordering into a Li 2 MnO 3 -like phase, which has been suggested to provide a driving force for the Li + /Ni 2+ exchange [9,10]. Selecting an appropriate preparation method and conditions is therefore critical to obtaining an ideal layered structure of LiNi 0.5 Mn 0.5 O 2 with good electrochemical performance. Several methods, such as the solid-state method [11], hydrothermal synthesis [12] and co-precipitation [13], have been used by other groups to synthesize this material. However, precipitation agents require several purification steps to be removed, and their residues can negatively affect the electrochemical performance.
This also causes batch-to-batch variability during large-scale production. Moreover, hydrothermal synthesis promotes cation anti-site defects, which strongly determine the electrochemical properties of cathode materials, and impurities are often found in hydrothermally prepared samples because of the oxidizing conditions in aqueous solution. Challenges therefore remain in producing LiNi 0.5 Mn 0.5 O 2 with superior electrochemical performance. The sol-gel method is a common route for synthesizing multi-cation cathode materials because of its high purity, high homogeneity, and low synthesis temperatures. In this study, a sol-gel process is proposed to fabricate layered LiNi 0.5 Mn 0.5 O 2 with good electrochemical performance. For comparison, LiNi 0.5 Mn 0.5 O 2 was also synthesized by a conventional solid-state reaction. The samples were characterized by XRD, SEM, XPS and galvanostatic charge-discharge tests.
Materials and Preparation
The sol-gel preparation was carried out as follows: 0.105 mol lithium acetate (Li(CH 3 COO)·2H 2 O, MACKLIN, 99.9%), with a 5 mol % excess to compensate for Li loss during the high-temperature treatment, 0.05 mol nickel acetate (Ni(CH 3 COO) 2 ·4H 2 O, Sigma Aldrich, St. Louis, MO, USA, 98%), and 0.05 mol manganese acetate (Mn(CH 3 COO) 2 ·4H 2 O, Sigma Aldrich, St. Louis, MO, USA, ≥99%) were dissolved in distilled water, while citric acid was dissolved in distilled water in a separate beaker. The citric acid solution was added dropwise to the transition metal solution, after which ethylene glycol was added. The mixture was kept at 60-70 °C and stirred overnight for gelation, and the temperature was then raised to 150 °C until a dry gel was obtained. The resulting dry gel was pre-calcined in Al 2 O 3 crucibles at 600 °C for 12 h and calcined at 900 °C for 12 h.
A solid-state method was used for comparison with the sol-gel method. Stoichiometric amounts of nickel oxide (NiO, Sigma Aldrich, St. Louis, MO, USA, 99.8%), manganese oxide (MnO 2 , Sigma Aldrich, St. Louis, MO, USA, ≥99%) and lithium hydroxide (LiOH, Sigma Aldrich, St. Louis, MO, USA, 98%) were used as raw materials. The raw materials were placed in a 50 mL ball-mill jar and homogenized by roller milling for 24 h at 500 r.p.m. using zirconia milling media, with ethanol (Sigma Aldrich, St. Louis, MO, USA, 95%) added as a carrier fluid. The mass ratio of the starting materials to the zirconia balls was 1:90. After milling, the ethanol was evaporated under mixing at 85 °C. After drying, each powder was sieved through a 325 mesh to obtain uniform particle sizes. Finally, the powders were calcined at 900 °C for 12 h to obtain LiNi 0.5 Mn 0.5 O 2 powders.
Basic Characterization
The thermal behavior of the as-prepared powders was examined with a SETSYS Evolution TGA-DTA/DSC (SETARAM) up to 1000 °C at a scan rate of 20 °C/min. The phase purity and crystal structure of the LiNi 0.5 Mn 0.5 O 2 samples were investigated using X-ray diffraction (Rigaku Multi Flex) with Cu-Kα radiation over a 2θ range of 10-80° at a scan rate of 0.5°/min; the operating voltage and current were 30 kV and 20 mA, respectively. Phase identification was performed with MDI Jade 6 against the ICDD database, and the data were used to determine the lattice parameters, average crystallite size and amount of impurity phases. Scanning electron microscopy (Hitachi S3000) was used to examine the morphology and particle size of the samples, and the particle size distribution of the synthesized powder was determined with a Zetasizer 3000 HSA. X-ray photoelectron spectroscopy was used to confirm the valence states of the transition metals, in particular the initial nickel state, with the binding energies corrected using the C 1s peak (285 eV).
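The paper does not state how the average crystallite size was derived from the diffraction data; a common approach is the Scherrer equation, and the short Python sketch below illustrates that route under the assumption that it was used. The peak position and width in the example call are illustrative values, not measured ones.

```python
import math

def scherrer_crystallite_size(fwhm_deg: float, two_theta_deg: float,
                              wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size (nm) from the Scherrer equation D = K*lambda / (beta*cos(theta)).

    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    wavelength_nm : X-ray wavelength (Cu-Kalpha ~ 0.15406 nm)
    k             : shape factor, commonly taken as ~0.9
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative call on a hypothetical reflection near 18.7 degrees 2-theta:
print(f"{scherrer_crystallite_size(fwhm_deg=0.2, two_theta_deg=18.7):.0f} nm")
```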
Electrochemical Characterization
The electrodes were prepared by mixing the LiNi 0.5 Mn 0.5 O 2 powder with carbon black and polyvinylidene fluoride at a weight ratio of 80:10:10 in N-methyl pyrrolidone. The slurry was coated on an aluminum current collector with a doctor blade, keeping the thickness of the coated electrode around 25 µm, and dried at 100 °C in vacuum for 24 h. The foils were rolled into thin sheets and cut into disks with a diameter of 13 mm. The cathode loading was estimated to be ∼2 mg/cm 2 . Lithium foil was used as the anode and polypropylene microporous films were used as separators. The electrolyte consisted of 1 M LiPF 6 in a 1:1 volume mixture of ethylene carbonate (EC, Sigma Aldrich, St. Louis, MO, USA) and diethyl carbonate (DEC, Sigma Aldrich, St. Louis, MO, USA). CR2032 coin cells were assembled in an argon-filled glove box. Galvanostatic charge-discharge tests were performed on an Arbin Battery Tester 2043 in the potential range 2.5-4.3 V.

Weight Loss Decomposition

Figure 1 shows the thermogravimetry and differential scanning calorimetry of the two precursor powders for the solid-state and sol-gel methods. For the solid-state precursor, the weight loss curve shows no abrupt changes. The steep region around 200-550 °C, with a weight loss of 17.30%, is attributed to the thermal dehydration of LiOH·H 2 O and to CO 2 release from carbonate decomposition [14]; a further contribution comes from MnO 2 releasing oxygen [15]. The weight loss above 550 °C (about 2.10%) is probably associated with the formation of new phases, such as the spinel, rock-salt and layered phases. The theoretical and practical weight losses differ slightly, 19.73% and 24.75%, respectively. The difference of about 5.02% is attributed to water vapor from LiOH·H 2 O that does not appear in the recorded curve; kinetically, CO 2 can promote this vaporization to occur at around 60 °C [14].
For the sol-gel precursor, an initial weight loss of 32.40% occurs at low temperature, up to 200 °C, probably corresponding to physically adsorbed water and weakly bound ligand molecules. A second weight loss of 42.28% occurs between 200 and 400 °C, attributed to the pyrolysis of residual organic functional groups such as glycerol at the lower temperatures of this stage [16] and to the decomposition of the acetates into oxides, accompanied by the release of water through dehydration of vinyl alcohol (-CH-CHOH-) to leave (-CH=CH-) at the higher temperatures of this stage [17]. This is consistent with Nowak-Wick et al., who reported that citric acid decomposes into aconitic acid at 240 °C; the decomposition is a complex process leading through dehydration and decarboxylation reactions to different intermediate products [18]. The third stage shows a nearly flat curve with a small weight change of about 1.49%, probably associated with the transformation of the crystalline rock-salt phase into the layered phase structure [19]. The weight loss of 2.1% between 500 and 950 °C is attributed to the release of O 2 .
X-ray Diffraction Analysis
The crystal formation during calcination was confirmed using X-ray diffraction as a function of temperature and synthesis route. Figure 2 demonstrates the phase transformations for the sol-gel and solid-state samples, respectively. The differences between the sol-gel and solid-state samples are visible in the magnification of the (104) reflection shown in (b) and (e). The smooth peak of the sol-gel sample indicates a well-formed layered structure, while the solid-state sample shows a broad peak accompanied by a complex pattern with a few overlapping peaks. The complete splitting of the reflections of the layered phase is clearly shown in (c) and (d). The splitting of the (006)/(102) and (018)/(110) index pairs is considered an indicator of a well-organized layered structure, i.e. an ordered distribution of the lithium and transition metal ions over the lattice sites. As the temperature increases, the splitting becomes more pronounced.
Morphology
The samples prepared by the sol-gel and solid-state routes were observed by SEM; there is no significant difference in morphology, but the solid-state sample has a slightly larger particle size, as shown in Figure 3. The larger particles observed in the solid-state sample, together with the agglomeration, are probably due to particles melting together, as shown in Figure 3e. This agrees with the results of Kos et al., who found that particles synthesized by the solid-state route are larger and not as well separated as those from the sol-gel method [20]. It is also consistent with the particle size distributions shown in Figure 3c,f: the distribution of the sol-gel sample is narrower than that of the solid-state sample, indicating that the sol-gel particles are more homogeneous in size and better distributed.
XPS
The XPS spectra of Ni 2p3/2 for the sol-gel and solid-state samples are shown in Figure 4a,b. The Ni 2p3/2 spectrum consists of a main peak accompanied by a broad satellite peak, and the main peak can be assigned to nickel ions in the divalent state. As can be seen from the figure, the binding energies (BEs) of the main peak are located at 854.27 and 854.61 eV for the sol-gel and solid-state samples, respectively. The Ni 2p3/2 peak of the sol-gel sample is shifted to lower BEs than that of the solid-state sample, indicating that Ni 2+ is the dominant valence state rather than Ni 3+ . Conversely, the main Mn 2p3/2 peak of the solid-state sample is shifted toward lower BEs than that of the sol-gel sample, and the peak width of the sol-gel sample is slightly larger than that of the solid-state sample, meaning that Mn 4+ is dominant in the sol-gel sample. This result is also confirmed by the ratio of the split peaks: the total Mn 4+ plus LEP area divided by the Mn 3+ area is 61.64 and 56.21 for the sol-gel and solid-state samples, respectively.
Electrochemical Performance
Galvanostatic tests were performed on Li/LiNi 0.5 Mn 0.5 O 2 cells assembled with the sol-gel and solid-state samples calcined at 900 °C. The first charge-discharge curves were recorded between 2.7 and 4.3 V at a constant current of 0.05 C, as shown in Figure 5. The charge-discharge curve of the sol-gel sample is smooth and monotonous. Upon charging, the voltage rises steeply to 3.75 V and is followed by a gentler slope as the Li content x in Li x Ni 0.5 Mn 0.5 O 2 decreases, which is consistent with the sample consisting of a pure layered phase. The charge capacity indicates that 0.57 Li can be removed from the layered Li x Ni 0.5 Mn 0.5 O 2 phase below 4.3 V, while the discharge capacity of 145 mAh/g corresponds to the re-insertion of 0.51 Li into the host crystal. According to the XRD and XPS results, the sol-gel method provides a pure-phase material and a more completely formed hexagonally ordered layered LiNi 0.5 Mn 0.5 O 2 crystal.
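As a rough check of the lithium fractions quoted above, the amount of Li cycled per formula unit can be estimated from the measured capacity and the theoretical capacity of LiNi 0.5 Mn 0.5 O 2 (about 280 mAh/g for one Li per formula unit, as stated in the introduction). The sketch below shows this arithmetic; the Faraday-constant route and the example charge capacity are standard values and inferences, not the authors' raw data.

```python
# Rough estimate of the Li fraction (de)inserted per formula unit of
# LiNi0.5Mn0.5O2 from a measured gravimetric capacity.
# Assumption: 1 Li per formula unit corresponds to ~280 mAh/g (theoretical value).

F = 96485.0   # Faraday constant, C/mol
M = 95.76     # approximate molar mass of LiNi0.5Mn0.5O2, g/mol

theoretical_capacity = F / (3.6 * M)   # mAh/g for 1 electron (1 Li) per formula unit

def li_fraction(measured_capacity_mAh_g: float) -> float:
    """Fraction x of Li exchanged in Li(1-x)Ni0.5Mn0.5O2."""
    return measured_capacity_mAh_g / theoretical_capacity

print(f"theoretical capacity ~ {theoretical_capacity:.0f} mAh/g")
print(f"discharge, 145 mAh/g -> x ~ {li_fraction(145):.2f}")   # reported sol-gel value
print(f"charge,   160 mAh/g -> x ~ {li_fraction(160):.2f}")    # hypothetical charge capacity
```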
The voltage fade is better discussed using the differential capacity versus potential (dQ/dV) curves. An anodic peak at 3.75 V, associated with the oxidation of Ni 2+ to Ni 4+ , is shown in Figure 6. It should be noted that the anodic peak of the solid-state sample is slightly shifted towards lower potential and its intensity is lower than that of the sol-gel sample, indicating poorer electrochemical activity and resulting in a lower capacity of the discharge plateau for the solid-state sample, in agreement with the XPS observations. Figure 7 shows the rate capability of the layered LiNi 0.5 Mn 0.5 O 2 synthesized via the sol-gel and solid-state methods. The capacity of the layered LiNi 0.5 Mn 0.5 O 2 prepared by the sol-gel method is higher than that of the solid-state method, and the difference grows as the rate increases. This can be attributed to particle agglomeration in the solid-state sample, which decreases the specific surface area and lengthens the Li diffusion path, thereby decreasing the discharge capacity of the layered LiNi 0.5 Mn 0.5 O 2 synthesized via the solid-state method. The capacity fading of the solid-state sample is attributed to an average Mn valence equal to or less than 3.5; cation disordering leads to the cracking of particles and loss of electrical contact during cycling.
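The dQ/dV curves referred to above are obtained by numerically differentiating the galvanostatic charge (or discharge) curve. A minimal sketch is given below; the smoothing window and the commented file name are illustrative assumptions, not the authors' actual processing script.

```python
import numpy as np

def dq_dv(voltage: np.ndarray, capacity: np.ndarray, window: int = 5):
    """Differential capacity dQ/dV from a galvanostatic curve.

    voltage  : cell voltage in V, sampled along the curve
    capacity : cumulative capacity in mAh/g at the same points
    window   : moving-average length used to smooth the noisy derivative
    """
    dq = np.gradient(capacity)
    dv = np.gradient(voltage)
    dqdv = np.divide(dq, dv, out=np.zeros_like(dq), where=np.abs(dv) > 1e-6)
    kernel = np.ones(window) / window
    return voltage, np.convolve(dqdv, kernel, mode="same")

# Usage (illustrative): locate the anodic peak, expected near 3.75 V for Ni2+/Ni4+.
# v, q = np.loadtxt("charge_curve.csv", delimiter=",", unpack=True)  # hypothetical file
# vv, dd = dq_dv(v, q)
# print("anodic peak at %.2f V" % vv[np.argmax(dd)])
```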
Conclusions
Layered LiNi 0.5 Mn 0.5 O 2 was successfully synthesized via sol-gel and solid-state routes. Weight loss measurements and X-ray characterization recorded the decomposition products and phases obtained from room temperature up to 950 °C. A relatively pure phase can be obtained by the sol-gel method at low temperature, owing to the short distances between lithium and the transition metals formed in the precursor. Conversely, the transformation through the spinel, rock-salt and layered phases remains incomplete in the solid-state method at the same temperature, indicating insufficient energy; the larger particles of the raw materials may need a longer annealing time or a higher temperature to complete the required reaction. Layered LiNi 0.5 Mn 0.5 O 2 prepared by the sol-gel route delivered better electrochemical performance than the solid-state method in terms of capacity fade and cycle life.
Identifying genes with conserved splicing structure and orthologous isoforms in human, mouse and dog
Background: In eukaryote transcriptomes, a significant amount of transcript diversity comes from genes’ capacity to generate different transcripts through alternative splicing. Identifying orthologous alternative transcripts across multiple species is of particular interest for genome annotators. However, there is no formal definition of transcript orthology based on the splicing structure conservation. Likewise there is no public dataset benchmark providing groups of orthologous transcripts sharing a conserved splicing structure.
Results: We introduced a formal definition of splicing structure orthology and we predicted transcript orthologs in human, mouse and dog. Applying a selective strategy, we analyzed 2,167 genes and their 18,109 known transcripts and identified a set of 253 gene orthologs that shared a conserved splicing structure in all three species. We predicted 6,861 transcript CDSs (coding sequence), mainly for dog, an emergent model species. Each predicted transcript was an ortholog of a known transcript: both share the same CDS splicing structure. Evidence for the existence of the predicted CDSs was found in external data.
Conclusions: We generated a dataset of 253 gene triplets, structurally conserved and sharing all their CDSs in human, mouse and dog, which correspond to 879 triplets of spliced CDS orthologs. We have released the dataset both as an SQL database and as tabulated files. The data consists of the 879 CDS orthology groups with their detailed splicing structures, and the predicted CDSs, associated with their experimental evidence. The 6,861 predicted CDSs are provided in GTF files. Our data may contribute to compare highly conserved genes across three species, for comparative transcriptomics at the isoform level, or for benchmarking splice aligners and methods focusing on the identification of splicing orthologs. The data is available at https://data-access.cesgo.org/index.php/s/V97GXxOS66NqTkZ.
Supplementary Information: The online version contains supplementary material available at (10.1186/s12864-022-08429-4).
Alternative splicing (AS) is common in eukaryotic organisms [4]. It is estimated to concern 95% of human multi-exonic genes, with a still growing median of 5 alternatively spliced transcripts per gene [5][6][7][8]. Alternative isoforms can show specific interactions with proteins and ligands, specific subcellular locations, tissue-specific expression profiles and differential expression between developmental stages, age and sex [9][10][11][12][13][14]. Anomalous AS can be associated with both rare and common human diseases [15,16]. Thus, it is extremely interesting to inventory alternative transcripts at the gene level. We actually distinguish two mechanisms leading a gene to produce alternative transcripts. In addition to AS, which consists of splicing introns and yields the mature mRNA, alternative transcription (AT) generates alternative 5' initiations and/or 3' terminations during the transcription process.
Orthology is a fundamental concept in computational biology. Orthologous biological characters are considered to have existed in a common ancestor species and are currently shared and derived in its descendants. Orthologs share common inherited phenotypes. While numerous resources are available to identify orthologous genes [17] or exons, very few describe sets of orthologous alternative transcripts. The genome annotation resources and the splice aligners rely on sequence conservation to predict new transcripts sharing homology with already known transcripts. However, this does not correspond to a suitable definition of the transcript orthology, and formal definitions of orthology applying at the alternative transcript level are also scarce. As previously noted by [18], alternative orthologous transcripts are transcribed from orthologous genes and share the same exonic structure: all their exons are orthologous exons. Additionally, alternative orthologous transcripts sharing their coding sequence (CDS) are designated as spliced CDS orthologs.
Our study takes us a step further in knowledge concerning splicing orthology. Following on from our earlier work [19], we first provide a formal description of structural orthology, applied both at the level of a gene's splice sites and that of its alternatively spliced transcripts.
Based on this formalism and on highly curated transcripts from CCDS, we then identify a dataset of genes whose splicing structures are conserved across human, mouse and dog. Additionally, a number of spliced CDS orthologs are predicted for the genes through the comparative genomics approach, while known and predicted transcripts of the genes are classified into groups of spliced CDS orthologs that we called CDS orthology groups.
More specifically, we identified a set of 253 orthologous gene triplets in human, mouse and dog, sharing all their splice sites and start and stop codons, and thus identified as structural orthologs (i.e. orthologous genes sharing a conserved splicing structure). 879 groups of spliced CDS orthologs were identified for these genes. Orthologous spliced CDSs share the same splicing site structure in each orthologous gene. We gathered evidence for the predicted transcripts using various databases and sequencing datasets. Additionally, we identified a number of transcripts in the dataset as alternative transcripts with distinct UTR regions but having the same CDS, thereby potentially encoding the same protein. Our data are made available for further analysis.
Results
In this study, we developed a comparative genomics method based on a description of coding exon structures across multiple species. The method first identified splicing sites conserved among orthologous genes, thereby denoted as orthologous splicing sites. Next, the orthologous genes were compared according to the orthologous splicing sites, in order to estimate whether each splicing site involved in a known transcript has an ortholog in another species. If so, a transcript sharing a conserved splicing structure was identified in the other species, and it was denoted as an orthologous transcript. Finally, we identified orthologous genes sharing a conserved splicing structure: all their splicing sites are conserved over the considered species. These genes were denoted as structurally orthologous genes (see "Methods" and Fig. 1 as an example). More precisely, in addition to splicing sites, start and stop codons were also considered, collectively defined in the paper as functional sites. The coding sequences (CDS) specifically were compared, and thus spliced CDS orthologs were predicted.
The study focused on 2,167 genes shared in human, mouse and dog and their transcripts, which were stringently chosen. These genes were selected so as to exhibit several complete alternative transcripts, each having a manually curated annotation in human and mouse according to the CCDS database. Among them, we identified 253 triplets of structurally orthologous genes, which share all their functional sites and have all their CDSs conserved across the three species. 879 triplets of spliced CDS orthologs were identified among these genes: the 879 distinct CDSs expressed in a given species have orthologs in the two others, thus none of the CDS is specific to a species, nor missing in any of the three species.
Transcript prediction: 6,861 predicted CDSs
The 2,167 orthologous genes shared in human, mouse and dog express 18,109 known transcripts. Models of their spliced CDSs were built, making comparisons of alternative CDSs possible across species (see an example in Fig. 1a). The pairwise gene comparisons led, on the one hand, to predict orthology relationships between the functional sites involved in the known transcripts of both species, and on the other hand to predict new candidate functional sites and exons. In the example shown in Fig. 1b, several exons observed in mouse CDSs had no known orthologs in the human and dog CDSs, but the corresponding orthologous splicing sites and exons could be predicted in the human and dog genes. Both latter predictions relied on pairwise sequence alignments of each exon and splicing site in one gene with the complete sequence of another gene. Figure 2 illustrates such sequence alignments and the prediction of conserved splicing sites.
Figure 1. (a) Spliced CDS models. They concern the CDS part of the transcripts. They involve coding blocks, defining the nucleotidic sequences building a CDS, and functional sites delimiting exons on the gene. The coding blocks correspond to the intervals between the functional sites. A same block name occurring in several species indicates a conserved and orthologous region. For example, two transcripts known in mouse involve alternative exons denoted as C and BC, where both exons contain the block 'C', and block 'B' is an alternative 5' extension of exon C. A known human CDS involves an exon BC estimated to be an ortholog of the mouse exon BC, and the known dog CDS involves an exon C, orthologous to the mouse exon C. (b) Gene model alignment. Each block and site in a gene is aligned with the gene sequence of an orthologous gene, resulting in pairwise gene alignments. These alignments reveal i) the orthology between already known sites (or coding blocks), ii) the sequence homology of known sites (or blocks) with not yet annotated loci in another gene, resulting in predicted sites and blocks (dashed bubbles). Here, aligning the mouse and human genes revealed the presence in human of a homolog of the acceptor site of C. This site is thus declared as predicted. It indicates the human gene is able to express the C exon alone, without the 'B' part. Additional predictions: acceptor site of H in human and dog, and 'B' block plus its acceptor site in dog. The site graph summarizes pairwise orthology relations: a node is a functional site and an edge is an orthology relationship. (c) Predicted transcripts. Five transcripts are made possible from the site and block predictions.
Based on the known and predicted exons, and on the transcript structure comparisons, we identified orthology relationships between functional sites and between CDSs. In the example illustrated in Fig. 1, the four predicted exons in human and dog led to the prediction of orthologs of the four CDSs that are known in mouse. Orthologous CDSs have identical transcript models (see Fig. 1c and "Methods"). This way, we predicted 6,861 CDSs in human, mouse and dog (Table 1), each being the ortholog of a known transcript CDS. Thus, the predicted number of transcripts represented 38% of the known transcripts initially considered. In a later section, we provide additional evidence for some of the predictions.
Prediction distribution across model species and emergent model species
Predictions are not equally distributed across species, reflecting differences in the initial amount of knowledge considered. Because human is the most widely documented species (8,374 known transcripts considered), it garners the lowest number of CDS predictions (1,540 predicted CDSs, Table 1). Thus, although there is less room to complement a highly studied transcriptome, it would be possible to improve its current annotations by better accounting for alternative transcripts identified in less studied transcriptomes. As expected, dog is the least documented species (3,224 known transcripts) and it receives the largest number of predicted CDSs (3,209). This is congruent with the general task of comparative genomic approaches, consisting in transferring transcript annotations from well documented model species to less documented non-model or emergent model species.
Figure 2. Uppercases indicate nucleotides involved in known exons, and lowercases indicate intronic nucleotides never observed yet to belong to an exon. For example, in the human and mouse genes, the longer exon BC is known (uppercase). In the dog gene however, only the shorter exon C is known and the upstream nucleotides have been observed as intronic so far (lowercase). Note that the exon C is not known in human (it does not occur in any human transcript, see Fig. 1a). The splicing sites are indicated in bold, and predicted splicing sites are underlined. For example, a motif "AG" in human has been aligned with the known acceptor "AG" of exon C in mouse, thereby yielding a predicted orthologous splicing site in human (underlined bold "AG"). This motif is now identified as an acceptor site of exon C in human. Additionally, a motif "ag" in dog has been aligned with the known "ag" acceptor of exon BC in mouse, yielding a predicted orthologous splicing site in dog (underlined bold "ag"). The nucleotides of the predicted exons are shown in red. For example, a motif "cag" in dog has been aligned with the sequence "CAG" of the known block 'B' in mouse, yielding a predicted block (red "cag"). As a result, the shorter exons C and H can exist in human (only the longer exons BC and GH were known). In dog two new exons can exist, H (only GH was known) and BC (only C was known).
A functional site graph links the orthologous functional sites (splice sites and start and stop codons) of each gene triplet. These graphs allow us to compare gene structures across the three species (see "Methods" and Fig. 1b as an example). From the 2,167 orthologous gene triplets, 1,661 were retained for subsequent analysis. The genes considered comprised exclusively either functional sites specific to one species, or functional sites shared in two or three species (see "Methods").
Among the 1,661 gene triplets, 253 yielded functional site graphs displaying all the functional sites shared in all three species, and were defined as structurally orthologous genes (Fig. 1b). The other genes displayed at least one functional site specific to a species, or shared in two out of three species.
The following hypothesis can be made concerning each structurally orthologous gene identified: when all splice sites and coding exons are conserved, all three orthologous genes should be able to express the same CDSs, and then the same proteins. Conversely, no CDS should be specific to, or missing in any species.
Transcript orthology: 253 genes with all the orthologous CDSs conserved across human, mouse and dog

135 genes with all orthologous CDSs shared in a single copy for each species
Using the 1,661 gene triplets retained for gene structure analysis, a transcript graph per gene triplet (see "Methods") was built in order to draw orthology links between CDSs and to define groups of orthologous CDSs (denoted as CDS orthology groups, Fig. 3).
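Although the authors' implementation is not given here, the grouping step can be pictured as taking connected components of the transcript graph, where nodes are (known or predicted) CDSs and edges are pairwise CDS orthology relations. The sketch below, using networkx, only illustrates that idea; the node identifiers and edge list are invented.

```python
import networkx as nx

# Nodes are CDSs labelled by species; edges are pairwise spliced-CDS orthology
# relations inferred from identical transcript structure models.
edges = [
    ("human:ENST_A", "mouse:ENSMUST_A"),   # hypothetical identifiers
    ("mouse:ENSMUST_A", "dog:pred_A"),     # 'pred_' marks a predicted CDS
    ("human:ENST_B", "dog:ENSCAFT_B"),
    ("mouse:pred_B", "dog:ENSCAFT_B"),
]

g = nx.Graph(edges)

# Each connected component is a CDS orthology group; a group is a full triplet
# when all three species are represented.
for component in nx.connected_components(g):
    species = {node.split(":")[0] for node in component}
    complete = species == {"human", "mouse", "dog"}
    print(sorted(component), "-> triplet" if complete else "-> incomplete")
```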
We first considered gene triplets with a transcript graph exclusively containing CDSs shared in a single copy for each species (see "Methods"). 986 genes fulfilled this requirement. Among them, 135 genes had all their CDSs shared in all three species in a single copy per species. Following the classical "one to one" definition of orthology, each species displayed the same spliced CDS structure in a single copy, so the ancestor probably already possessed this spliced CDS structure. The 135 genes in question expressed a total of 462 CDS orthology groups (triplets of spliced CDS orthologs), involving 845 known and 541 predicted CDSs in human, mouse and dog genes ( Table 2, set S135). An example of such a gene triplet displaying exclusively orthologous CDSs is shown in Fig. 3.
As expected, all 135 gene triplets belong to the set of 253 structurally orthologous genes, with conservation across all three species of all the required functional sites and of all the coding exons, allowing each species to form the same orthologous spliced CDS structures (see "Methods").
118 genes with CDSs in multiple copies for at least one species: 213 CDSs with variable UTR
Figure 3. ... (see Fig. 1a), ii) to predict orthologous CDSs using predicted exons and sites (see Fig. 1c), and iii) to establish orthology relations between CDSs (left of Fig. 3). Thus, the resulting graph of transcripts connects transcripts sharing the same CDS splicing structure, thereby identifying CDS orthology groups. Left of Fig. 3, the transcript graph for the orthologous genes ENSG00000001167, ENSMUSG00000023994 and ENSCAFG00000001580 contains 4 subgraphs, each being a triplet of orthologous CDSs (nodes are CDSs, blue for human, grey for mouse and red for dog, with "?" indicating a predicted CDS). This implies that i) all seven known gene transcripts in human, mouse and dog actually represent four different CDS splicing structures, ii) five new CDSs are predicted to make the graph complete, and iii) for this gene, each of the four CDS splicing structures is feasible in human, mouse and dog, so the three genes share the same orthologous CDSs. Right of Fig. 3, the four braces indicate details of the four CDS orthology groups, showing the Ensembl identifiers of known transcripts, the predicted CDSs, and the spliced CDS structures t1, t2, t3 and t4 shared over the three species.
Among the 253 structurally orthologous genes however, 118 (253-135) did not conform to the previous properties. Each of the 118 transcript graphs was such that it contained at least one CDS being redundant in a species: two or more known transcripts in this species encoded the same CDS (see Additional file 5). An example of such a gene is shown in Fig. 4, where human and mouse each have two known transcripts with an identical CDS, but distinct UTR regions.
The 118 genes expressed a total of 1,051 known transcripts which led us to infer 488 predicted CDSs. Known transcripts displayed 288 redundant CDSs ( Table 2, set S118) and 417 distinct CDS orthology groups could be observed overall. Whenever CDSs were redundant in a species, a one-to-one relation of orthology between transcripts (CDS plus UTR) did not apply. For example, in Fig. 4, CDS redundancy exists at the transcript level in human and mouse, and we cannot determine which of the two human transcripts is orthologous to which of the two mouse transcripts based on the CDS alone. However, shared CDSs are unique and the orthology definition still applies at CDS level, i.e. to the genes' protein isoforms. Each of the spliced CDS structures encode an orthologous protein a priori shared in the three species, and the 118 genes' ancestors presumably expressed ancestors of these 417 protein isoforms.
The redundant CDS cases mainly correspond to alternative transcripts with a same CDS but different 3' or 5' UTR regions. Among the 1,051 known transcripts of the 118 gene sets, 101 were found to lack 5' or 3' UTR, or both ( Table 2). We do not take them into consideration in the following enumeration of CDS redundancy. We found 114 gene triplets from 118 displaying redundant CDSs in at least one species such that the underlying transcripts are all described together with their UTR regions. We assume that such transcripts are genuine cases of multiple alternative transcripts encoding a same CDS. This represents a total of 213 sets of known transcripts encoding redundant CDSs. While 109 and 103 sets are respectively identified in human and mouse, only 1 is identified in dog. This disparity most likely results from the lack of information in the emergent model species rather than from the absence of CDS redundancy ( Table 2, set S114).
Experimental evidence for predicted transcripts
In this section, our previous predictions are substantiated with additional sources. Please note that the precision and recall measures of the applied comparative genomics method can be found in [19]. We examined seven additional databases in order to validate our transcript predictions using knowledge not included in the present study (see "Methods"). We detail here the results concerning the 253 structurally orthologous genes in human, mouse and dog. The additional databases are the Ensembl 96, Ensembl 98, Ensembl 102, Ensembl 103, UCSC, XBSeq and FEELnc databases. A substantial number of our predictions was substantiated, representing up to 42.8% of validated predictions in dog and around 20% in human and mouse (Table 3). Overall, 350 predicted CDSs (34%) were validated (i.e. tagged as confirmed) from the additional databases.
For the 679 remaining predicted transcripts that were not found in additional databases, we sought evidence in RNA-seq sequencing datasets by looking for signatures of predicted transcripts in the sequence reads. We considered an exon junction specific to this transcript as a predicted CDS signature, in other words, a junction that is not observed in any known transcript from the input data (set ENS90data, see "Methods"). 394 of the 679 predicted transcripts (58%) contained at least one specific exon junction (see Table 4), while all their specific junctions were identified in the reads for 255 of them (64.7%). These transcripts were tagged as achievable.
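Checking a predicted transcript against RNA-seq reads amounts to looking for its specific exon junctions, i.e. short sequences spanning the junction that cannot come from any known transcript. The snippet below sketches this idea on plain read sequences; in practice the authors worked from sequencing datasets and alignments, so the function names and the flank length are assumptions for illustration only.

```python
def junction_signature(exon_upstream: str, exon_downstream: str, flank: int = 20) -> str:
    """Sequence spanning an exon-exon junction: the last `flank` bases of the
    upstream exon concatenated with the first `flank` bases of the downstream exon."""
    return exon_upstream[-flank:] + exon_downstream[:flank]

def junction_supported(signature: str, reads) -> bool:
    """True if at least one read contains the junction-spanning sequence."""
    return any(signature in read for read in reads)

# A predicted transcript is tagged 'achievable' when all of its specific junctions
# (those absent from the known transcripts) are found in the reads.
def transcript_achievable(specific_junctions, reads) -> bool:
    return all(junction_supported(sig, reads) for sig in specific_junctions)
```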
Finally, we managed to find hints of 89 (49.4%) predicted transcripts in human, 112 (45%) in mouse and 404 (67.3%) in dog (see Tables 3 and 4, see also Additional file 6). Thus, we found evidence for the existence of 58.8% of our transcript predictions derived from our comparative genomics method, suggesting that the method is suitable for CDS prediction. The type of confirmation obtained for each predicted transcript was kept in the database as an attribute (see Methods and Availability).
Description of the 253 structurally orthologous genes
The 253 triplets of genes we defined as structurally orthologous have, by definition, all their start/stop codons and splice sites conserved over human, mouse and dog. According to the Gene Ontology, most of these genes belonged to the categories "cellular process", "biological regulation" and "metabolic process" (see Additional file 8).
These genes express 1,896 known transcripts and we predicted 1,029 additional CDSs (Table 2, set S253; see also Additional files 1-3). An average of 2.5 (1,896/(3×253)) known transcripts per gene was expressed, ranging from 1 (in dog) to 13 (in human). We predicted an average of 1.3 (1,029/(3×253)) CDSs per gene, ranging from 1 (in each species) to 12 predictions (in dog). Among the 1,029 predicted CDSs, 350 were found in other databases and 255 had specific exon junctions that were aligned with sequencing reads. Each of the 253 orthologous genes encoded the same spliced CDS structures in the three orthologs. The gene proteome is shared across species and we identified 879 (462 in the S135 set, and 417 in the S118 set) CDS orthology groups. The gene transcriptomes may differ, however, due to multiple transcripts encoding the same CDS, with a potentially different number of such alternative transcripts across species. We identified 114 genes from the 253 where such CDS redundancy occurred (45% of genes), which involved 213 sets of redundant CDSs. According to our data, alternative transcripts encoding a same CDS represent a tangible situation, as 8% (213/(3×879)) of the sets of CDSs contain at least two different transcripts with distinct UTR regions. The phenomenon could be more frequent than 8% with regard to the genes in our dataset. In particular, UTR regions in dog are almost undocumented at present, leading to just 1 observation among known transcripts (Table 2, set S114).
Discussion
We applied a comparative genomics approach to a set of 2,167 genes in order to compare CDSs and gene structures between the three species: human, mouse and dog. We predicted CDSs in all three species and found that about 15% (253/1,661) have orthologous splicing structures that are wholly conserved in human, mouse and dog, and so could express the same set of isoforms over the three species. These structurally orthologous genes are defined as having conserved all start/stop codons and splice sites (denoted as the functional sites). For these genes, we found additional annotated and experimental data supporting 59% of the predicted CDSs, underpinning the robustness of our results. These data could be useful in several kinds of analyses.
Alternative transcripts encoding a same CDS
A recent study showed that alternatively spliced transcript diversity and expression levels across human tissues are mostly driven by AT start and stop sites [20]. Here we document such cases of AT where several alternative transcripts encode a same protein. Multiple transcripts encoding the same CDS occur in 45% of the structurally conserved genes, concerning 8% of the CDSs. This suggests a widespread phenomenon. It may be assumed that various alternative promoters and different 3'UTR regions yield as many different possibilities to regulate a given protein expression, depending on the required specificity of the tissues physiology. Interestingly, some studies have reported that a given protein isoform may or may not be expressed, depending on the transcribed UTR regions [21]. These observations and our results thus suggest that, even if the same functional sites are shared between orthologous genes, genes may express different transcriptomes with different UTR regions or different numbers of transcripts encoding a same CDS. These redundant alternative transcripts may be involved in responses to different regulatory processes.
Benchmark for a spliced aligner
Sequence homology lies at the heart of numerous protein and transcript predictions. However, there is still room for improvement in the underlying comparative genomics and spliced alignment methods [22]. The latter work shows recurrent challenges in accurately identifying intron-exon boundaries, and in handling non canonical GT and AG splice sites. The latest spliced aligner algorithms consider the spliced structure of a query transcript and the known splice sites of the target gene, thereby searching explicitly for spliced orthologs [23]. Our sets of spliced orthologs can be used to test such methods using real data.
Comparative genomics of regulatory elements
Our study formally defines orthology at the splice site level, providing a more in-depth examination of the conservation of gene sequences located within intronic and exonic regions and implicated in alternative splicing regulation. Indeed, if all splice sites are conserved and their orthology identified, it becomes possible to interpret sequence divergence in the surrounding regions that could be involved in the regulation of alternatively spliced transcript expression. Recent studies have shown that AS events follow conserved patterns of expression shared across species [24,25], indicating underlying conserved mechanisms and regulatory sequences related to the genes' splicing programs. Additional observations show that some splicing events encounter divergence in their inclusion rates [26] or divergence in their tissue-specific expression rates [27], which alternatively suggests regulatory sequence divergence.
Comparative transcriptomics at the alternatively spliced transcript level
Finally, our description of orthology at spliced CDS level may be useful in comparative transcriptomic studies, helping to identify the differential expression of orthologous alternative CDSs across human, mouse and dog species. However, most current studies in comparative transcriptomics focus either on the gene level, taking into consideration all the alternatively spliced transcripts expressed collectively, or at exons' junction level, ignoring both the complete AS combinations forming an alternatively spliced transcript and all the different alternatively spliced transcripts, possibly involving a given AS event [28]. We believe that formal identification of orthologous alternatively spliced transcripts is thus lacking in current comparative transcriptomic studies.
Conclusion
In this paper, we apply a comparative genomics method based on the identification of the coding exon and splicing site structure of genes and the identification of the spliced CDS structure of transcripts. We define orthology at the functional site level of genes, identifying orthologous start and stop codons, donor and acceptor splice sites, and then at CDS level, identifying CDS orthology groups. These formalisms help to both predict new CDSs, and to identify genes sharing a same structure and transcriptome across species.
Applying a selective approach with the objective of producing highly reliable data, we studied a set of 2,167 orthologous genes shared in human, mouse and dog from CCDS and Ensembl. Given these genes, we predicted 6,861 CDSs, almost doubling the knowledge available in an emergent model species, the dog. We identified a set of 253 orthologous genes sharing all their functional sites and all their CDSs across the three species. We called the latter genes structural orthologs. We predicted 1,029 CDSs for the 253 genes, confirming 59% of them using additional annotation and experimental data. From these genes, we identified 879 CDS orthology groups (see Additional file 4). Interestingly, among the 2,637 gene CDSs, 8% were encoded by two or more alternative transcripts with different UTR regions. This concerned 45% of the genes examined, suggesting an important role for alternative transcription in the data considered.
Our data consist of 879 groups of spliced CDS orthologs which are available in the form of an SQL database as well as tabulated files. They are useful for research focusing on the identification of splicing orthologs and on gene conservation and divergence across species. This covers comparative transcriptomics at the level of orthologous alternatively spliced transcripts, for instance, and comparative genomics at the level of splicing regulation sequences.
Data sources: gene triplets on human, mouse, dog
The selected genes were one-to-one orthologous genes shared by human, mouse and dog species as defined in Ensembl release 90 [29], based on GRCh38.p10, GRCm38.p5 and CanFam3.1 assembly versions. Additionally, to be selected, a gene triplet must have at least two alternatively spliced transcripts in CCDS [30] for human and mouse, and at least one Ensembl transcript for dog. Thus, all human and mouse transcripts are obtained from CCDS, and all dog transcripts are obtained from Ensembl. Such sequences are called known transcripts. The resulting set contained a total of 2,167 orthologous gene triplets, expressing 18,109 known transcripts (8,374 in human, 6,511 in mouse and 3,224 in dog). This dataset is referred to in the rest of the paper as "ENS90data".
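The selection criteria described above amount to a simple filter over the annotation data. The sketch below illustrates that filter; the data structures (dictionaries of transcript counts per gene) and function names are assumptions for illustration, not the authors' actual pipeline.

```python
def select_gene_triplets(orthologs, ccds_counts, ensembl_dog_counts):
    """Apply the selection criteria: >=2 CCDS transcripts in human and mouse,
    >=1 Ensembl transcript in dog.

    orthologs          : list of (human_gene, mouse_gene, dog_gene) one-to-one triplets
                         from Ensembl release 90
    ccds_counts        : dict gene_id -> number of CCDS transcripts (human, mouse)
    ensembl_dog_counts : dict gene_id -> number of Ensembl transcripts (dog)
    """
    selected = []
    for human, mouse, dog in orthologs:
        if (ccds_counts.get(human, 0) >= 2
                and ccds_counts.get(mouse, 0) >= 2
                and ensembl_dog_counts.get(dog, 0) >= 1):
            selected.append((human, mouse, dog))
    return selected
```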
A database of orthologous genes and transcripts structurally conserved in human, mouse and dog
An SQL database, Transcript_ortho, displays the gene and transcript structures of 253 orthologous gene triplets conserved in human, mouse and dog species (see Results). The genes proposed in Transcript_ortho are structurally orthologous in the three species. The transcript_ortho tables describe the intron/exon composition of the genes and transcripts, through their genomic positions, as well as the orthology relationships for both coding exons and CDSs. Transcript_ortho contains 2,925 transcripts, along with their known (1,896) or predicted (1,029) status. In the second case, an additional attribute indicates the degree of confidence in the prediction, through an experimental confirmation tag (see below in the "Assessing Evidence" paragraph). The database, its schematic diagram (see Additional file 7) and the complete set of predicted transcripts, in GTF format, can be downloaded at https:// data-access.cesgo.org/index.php/s/V97GXxOS66NqTkZ.
Representation of genes and transcript structures: structure model definitions
In Eukaryotes, a precursor transcript is composed of exons and introns. Each intron is delineated by two splice sites, the splice donor site (generally "GT") and the splice acceptor site (generally "AG"), which allows intron excision during the splicing process. The exon sequences, remaining after splicing, constitute the mature transcript, or messenger RNA (mRNA). The mRNA contains a coding DNA sequence (CDS) to be translated into a protein.
A CDS is composed of a succession of codons (trinucleotides), starting with a start codon (generally "ATG"), ending with a stop codon ("TAG", "TGA" or "TAA"), and including no in-frame stop codon in between.
We used the formalism described in [31] and [19] to represent this structure, allowing us to model the structure of a gene from the intron/exon structure of its known transcripts.
Following this formalism, the structure model of a gene i, denoted M G i , is an ordered series of tokens K i,m representing the gene's functional sites (start and stop codons, splice donor and acceptor sites) and coding blocks. The structure model M T i,u of a transcript T i,u is composed of a subset of tokens K i,m from M G i , where the first and last tokens are start and stop codons and the other tokens (the alphabetical letters) stand for the coding blocks representing the exonic segments that constitute the CDS, each exon being delineated by its splicing sites. Technically, each token is associated with its genome coordinates, and the gene structure model is obtained from the structure models of its known transcripts [19]. For instance, a gene model M G i could have been obtained from the two transcript models M T i,1 and M T i,2 (see Fig. 5).
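A minimal way to hold these structure models in code is to store, for each gene, the ordered list of tokens with their genomic coordinates, and for each transcript the subset of tokens it uses. The sketch below is an illustration of this representation, not the authors' implementation; the token symbols follow the paper's convention of letters for coding blocks plus start/stop and splice-site tokens.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Token:
    """One functional site or coding block of a gene structure model."""
    symbol: str   # e.g. 'start', 'stop', 'donor', 'acceptor', or a block letter 'A', 'B', ...
    start: int    # genomic start coordinate
    end: int      # genomic end coordinate

@dataclass
class GeneModel:
    gene_id: str
    tokens: list = field(default_factory=list)   # ordered tokens K_i,m

@dataclass
class TranscriptModel:
    transcript_id: str
    tokens: list = field(default_factory=list)   # subset of the gene's tokens

    def structure(self) -> tuple:
        """Splicing structure used for orthology comparison (symbols only)."""
        return tuple(t.symbol for t in self.tokens)
```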
Pairwise comparison between orthologous genes: structural orthology definitions

Comparing two gene structure models: functional site prediction
Given two orthologous genes i and j in two species, together with their known transcripts, each gene structure model M G i and M G j is firstly drawn from the structure of its known transcripts. The pairwise comparison between M G i and M G j thus consists of examining whether each token K i,m from M G i (and vice-versa for the tokens K j,n from M G j ) is conserved or not in the orthologous gene j, which is done by aligning coding blocks of gene i against the genomic sequence of gene j (see [19] for more details). If the sequence of a token K i,m is aligned in gene j, either it corresponds to an already known token K j,n in M G j , or it does not. In the latter case, a token is added to complete M G j , referred to as a predicted orthologous token (cf. the '>' , 'C' and ']' tokens in Fig. 5, predicted from gene i to gene j). In cases where tokens K i,m and K j,n represent orthologous coding blocks, they are unified by a same letter.
It is worth mentioning that, given the dataset considered, a site or an exon predicted in gene j belongs to none of the known transcripts of that gene. In fact, a site/exon predicted in gene j from gene i corresponds to a conserved sequence shared by the nucleotidic sequence of the two genes, associated with a known site/exon belonging to at least one of the known transcripts of gene i. This highlighting of new exons through a comparative genomics approach paves the way for transcript prediction (Fig. 5). The pairwise gene comparison leads to the following definitions of structural orthology. Two aligned tokens, K i,m from gene i and K j,n from gene j, define a pair of orthologous tokens, denoted as A(K i,m , K j,n ) (and reciprocally A(K j,n , K i,m ), the alignment relation of the two tokens being symmetrical). In the case of coding blocks, they are denoted by a same letter.
Two genes i and j whose structure models M G i and M G j contain only pairs of orthologous tokens A(K i,m, K j,n) define a pair of structurally orthologous genes. M G i and M G j are syntactically equal. Two transcripts T i,u from gene i and T j,v from gene j whose structure models M T i,u and M T j,v contain only pairs of orthologous tokens A(K i,m, K j,n) define a pair of structurally orthologous transcripts, named spliced CDS orthologs. M T i,u and M T j,v are syntactically equal.
Comparing two transcript structure models: transcript prediction
The comparative approach based on structure models allows us to compare the transcriptomes of two orthologous genes in order to determine the structural orthology relation between transcripts and to predict transcripts. A pairwise comparison between transcripts of two orthologous genes assessed whether each transcript T i,u from gene i (and reciprocally for transcripts T j,v from gene j) has a spliced CDS ortholog in the orthologous gene j, in other words, whether a transcript T j,v exists in j with the same CDS structure as T i,u (that is, M T i,u = M T j,v). If so, we infer a CDS orthology relationship between the known transcripts T i,u and T j,v. Otherwise, it is possible to examine whether each token involved in transcript model M T i,u has an orthologous token in the gene j model M G j. If so, we predict that a new transcript for gene j is possible, with the same spliced CDS as transcript T i,u. Formally, given two orthologous genes i and j and a transcript T i,u of gene i, if every token of M T i,u has an orthologous token in M G j, then gene j can express a transcript whose structure model is composed of these orthologous tokens. If this executable transcript, denoted as T j,v, does not already belong to the set of known gene j transcripts under consideration, then it is called a predicted transcript. For example, in Fig. 5, the transcript labelled "2" in gene j is a predicted transcript, involving the predicted exon C.
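As an illustration of this prediction step, the sketch below (a simplification under stated assumptions, not the authors' code) checks whether a transcript of gene i maps to a known spliced CDS ortholog in gene j or yields a predicted transcript; the token labels and the orthology map are hypothetical.

```python
# Sketch: given a token-orthology map between genes i and j, decide whether a
# transcript of gene i has a spliced CDS ortholog in gene j, or can be predicted.

def project_transcript(mt_i_u, ortho_map):
    """Return the orthologous token sequence in gene j, or None if a token has no ortholog."""
    projected = []
    for token in mt_i_u:
        if token not in ortho_map:
            return None                      # a token is specific to gene i
        projected.append(ortho_map[token])
    return tuple(projected)

known_mt_j = {("start", "A", "stop")}        # known transcript models of gene j
ortho_map = {"start": "start", "A": "A", "C": "C", "stop": "stop"}  # hypothetical

candidate = project_transcript(("start", "A", "C", "stop"), ortho_map)
if candidate is None:
    print("no ortholog possible")
elif candidate in known_mt_j:
    print("known spliced CDS ortholog")
else:
    print("predicted transcript:", candidate)  # e.g. transcript '2' of Fig. 5
```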
By the end of the pairwise comparison between two orthologous genes i and j, we thus dispose of a pairwise alignment of gene models M G i and M G j, orthology relationships between tokens, A(K i,m, K j,n), a set of predicted transcripts, and orthology relationships between transcripts (i.e., M T i,u = M T j,v), with transcripts T i,u and T j,v being known or predicted. In the paper, the term CDS, or spliced CDS, refers to the protein coding transcript region (excluding the UTR), underscoring the fact that our predictions are based only on the coding parts of the transcripts.
Comparison across multiple orthologous genes
Comparing gene structures across three species: graph of functional sites
As described above, a pairwise comparison between two orthologous genes i and j produces pairs of orthologous tokens, A(K i,m, K j,n), and tokens with no identified ortholog, denoted as A(K i,m, −), where "−" stands for a gap. For each token, this defines whether it is shared by both genes or is specific to one gene, respectively. Thus, given three orthologous genes i, j and k in three species, a token shared by the three species can be identified via three pairwise orthologies, A(K i,m, K j,n), A(K j,n, K k,o) and A(K i,m, K k,o); such a set (K i,m, K j,n, K k,o) is a triplet of orthologous tokens. In order to represent such a three-species comparison between structural elements at gene triplet level, we defined a graph of functional sites, G FS.
Each node of G FS is labelled with a token corresponding to a functional site of one of the three genes. All the functional sites involved in known and predicted transcripts are taken into consideration to build the graph. Each edge of G FS connects a token K i,m of a gene i to a token K j,n of another gene j iff A(K i,m , K j,n ) (Fig. 6a). We built a graph of functional sites per triplet of orthologous genes in three species.
Comparing transcript structures across three species: graph of transcripts, to reveal CDS orthology groups
A similar structure was designed to compare gene transcripts across three species, the graph of transcripts, G T . We built a graph of transcripts per triplet of orthologous genes in three species. Each node of G T corresponds to a transcript (either known or predicted) in one of the three species. Each edge of G T connects a transcript T i,u of a gene i to a transcript T j,v of a gene j iff they are spliced CDS orthologs, i.e., M T i,u = M T j,v (Fig. 6b).
Graph analysis: identifying structurally orthologous genes and conserved transcriptomes in three species
Three types of subgraph are considered in a functional site graph G FS: a singleton corresponds to a functional site present in only one species, a couple represents a site shared by two species, while a triplet represents a site shared by all three species (Fig. 6a). Only gene triplets whose functional site graphs are made up of singleton, couple and triplet subgraphs are considered for subsequent analysis. A transcript graph G T is thus obtained for these genes. A functional site graph containing only triplets of sites indicates that each functional site has an orthologous site in each of the other species. Such a gene has a structure shared by all three species, which suggests that the structure already existed in their common ancestor, defining a triplet of structurally orthologous genes. A transcript graph G T containing only triplets of transcripts implies that each CDS has an orthologous CDS in each of the other species. This indicates that each of the three orthologous genes can express the same CDS set.
[Fig. 6 caption: (a) Graph of functional sites, where nodes are functional sites, edges are orthology relations, and '?' indicates predicted sites. (b) Determination of CDS orthology groups. Transcript graphs were built using pairwise transcript orthology. The example illustrates two CDS orthology groups: the first group involves three known orthologous CDSs; the second group of two CDSs is made up of a known CDS in gene i and a predicted CDS in gene j ('?' symbol). Gene k cannot form the third orthologous CDS due to the lack of an element in the gene.]
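A hedged sketch of this classification step is given below: connected components of a toy functional-site graph are labelled as singletons, couples or triplets according to the number of species they span. The graph content is invented for illustration and networkx is assumed to be available.

```python
# Sketch: classify the connected components of a functional-site graph G_FS into
# singletons, couples and triplets, assuming each node is tagged with its species.
import networkx as nx

g_fs = nx.Graph()
# Hypothetical nodes: (species, token_id); edges are pairwise token orthologies.
g_fs.add_nodes_from([("human", "A"), ("mouse", "A"), ("dog", "A"),
                     ("human", "C"), ("dog", "C"), ("mouse", "X")])
g_fs.add_edges_from([(("human", "A"), ("mouse", "A")),
                     (("mouse", "A"), ("dog", "A")),
                     (("human", "A"), ("dog", "A")),
                     (("human", "C"), ("dog", "C"))])

counts = {"singleton": 0, "couple": 0, "triplet": 0}
for component in nx.connected_components(g_fs):
    species = {node[0] for node in component}
    counts[{1: "singleton", 2: "couple", 3: "triplet"}[len(species)]] += 1
print(counts)  # a gene structurally orthologous in the three species yields only triplets
```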
Assessing evidence for predicted transcripts from annotations and experimental data
From the annotated transcripts contained in our ENS90 dataset, a number of predicted transcripts are generated by our comparative genomics method. We assessed how these predictions are supported by complementary transcript annotations and experimental data. By the end of the process, each predicted transcript had been tagged with one of four labels: confirmed, possible, achievable or not achievable.
Each predicted transcript T was first compared against the complementary annotation databases. If one of the databases contained a spliced CDS, described in GTF format and corresponding to the coding exons of T, then T was tagged as confirmed.
Identifying exon junctions specific to predicted transcripts in read data
Unconfirmed predicted transcripts were examined against RNA-seq raw data. We considered comprehensive datasets spanning a large number of tissues in human [35], mouse [25] and dog [34], and searched for hints of a predicted transcript, defined as specific exon junctions, among the reads. A given exon junction was defined as specific to a given predicted transcript if no other occurrence of that junction belonged to the transcripts considered in our initial ENS90 dataset. Finding reads which contain the specific exon junctions of a transcript does not prove that the complete transcript was expressed
|
2022-03-20T06:22:44.343Z
|
2022-03-18T00:00:00.000
|
{
"year": 2022,
"sha1": "882f05acd5ff035bfa05381a021cc873f90f7f43",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12864-022-08429-4",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "8a51eac747bc5c93a2bc14401817dac3b46227ec",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
55971999
|
pes2o/s2orc
|
v3-fos-license
|
Modeling of Tea Production in Bangladesh Using Autoregressive Integrated Moving Average (ARIMA) Model
In the present paper, different Autoregressive Integrated Moving Average (ARIMA) models were developed to forecast tea production using time series data covering the twenty-four years from 1990 to 2013. The performance of the developed models was assessed with the help of different selection criteria, and the model with the minimum values of these criteria was considered the best forecasting model. Based on the findings, it was observed that, out of eleven ARIMA models, ARIMA (1,1,2) is the best fitted model for predicting the production of tea in Bangladesh, and the forecasted tea production for the years 2014, 2015 and 2016 obtained from ARIMA (1,1,2) was 65.568, 67.867 and 60.997 million kilograms, respectively.
Introduction
Tea serves as the most important and popular drink for two-thirds of the world's population, not only because of its attractive aroma and taste but also because of its many pharmacological effects, such as suppressing tumor cell growth, reducing cardiovascular disease, combating obesity and decreasing the risk of atherosclerosis, as discussed by Wang et al. [1] and Dhekale et al. [2]. The role of the Bangladesh tea industry in the global context is insignificant: it accounts for only 1.68% of global tea production and 0.58% of world tea exports, and its exports are gradually declining. If this trend continues, Bangladesh will turn into a tea-importing country by 2015, as discussed by Monjur [3] and Baten et al. [4]. As a result, international comparisons of the tea industry's efficiency have been of great interest to firms in the industry as well as policymakers. Large tea-producing countries such as India and Sri Lanka produce far more than Bangladesh, with production levels 16 and 12 times higher, respectively, as reported in BCS [5]. It was found that in 1998, on average, only 1,145 kg of tea was produced per hectare in Bangladesh, whereas in the same year the production level per hectare in India and Sri Lanka was 1,708 and 2,030 kg, respectively, as noted by Monjur [3] and Majumder [6].
In Bangladesh, the average area of a tea estate is around 337 hectares. At present there are a few newly established smallholding tea gardens operating in the north-western part of Bangladesh; although few in number, they make a significant contribution to the tea industry of Bangladesh. The first tea garden was established in 1857 at Malnicherra, two miles from Sylhet town in the north-eastern part of Bangladesh, as discussed by Monjur [7] and Khisa and Iqbal [8]. British companies were the pioneers of tea plantation in Bangladesh. By 1903, there were 15 European planters in Northern Sylhet, 102 in Southern Sylhet and 26 in the Habiganj district of Sylhet, as discussed by Sana [9]. At present Bangladesh has 162 tea gardens; Sterling companies operate 28 of them, while 128 are operated by Bangladeshi owners (National Tea Company, Bangladesh Tea Board, private limited companies and proprietary owners). In addition, six gardens are operated by smallholders in the north-western part of Bangladesh, as discussed by Huque [10].
Tea cultivation in Bangladesh is spread over the hilly zones of the eastern part of the country, mainly in four districts (Sylhet, Moulvibazar, Habiganj and Chittagong). About 96% of annual production (of which 63% comes from Moulvibazar district) is contributed by greater Sylhet, obtained from 93% of the plantation area (of which 62% is in Moulvibazar district), as discussed by Islam et al. It is to be noted that Sterling companies produce about 50% of the annual crop from about 42% of the plantation area, as discussed in BBS [12].
Materials
In the present study, secondary time series data on the production (million kilograms) of tea in Bangladesh were considered for the period 1990 to 2013, obtained from the Bangladesh Tea Board (BTB) [13], Ministry of Commerce, Government of the People's Republic of Bangladesh. The data were analyzed with the help of various ARIMA models.
Methods
ARIMA is one of the most traditional methods of non-stationary time series analysis. In contrast to regression models, the ARIMA model allows a time series to be explained by its own past or lagged values and stochastic error terms. Models developed by this approach are usually called ARIMA models because they use a combination of autoregressive (AR) terms, integration (I), referring to the reverse process of differencing used to produce the forecast, and moving average (MA) operations, as discussed by Box [14].
The ARIMA model is denoted by ARIMA (p,d,q), where 'p' stands for the order of the autoregressive process, 'd' is the order of differencing required to make the data stationary, and 'q' is the order of the moving average process. The general form of the ARIMA (p,d,q) model, as discussed by Judge et al. [15], can be written as
Δ^d y_t = δ + θ_1 Δ^d y_{t-1} + θ_2 Δ^d y_{t-2} + … + θ_p Δ^d y_{t-p} + e_t − α_1 e_{t-1} − α_2 e_{t-2} − … − α_q e_{t-q}   (1)
where Δ^d denotes differencing of order d, i.e., Δy_t = y_t − y_{t-1}, Δ²y_t = Δy_t − Δy_{t-1}, and so forth; y_{t-1}, …, y_{t-p} are past observations (lags); and δ, θ_1, …, θ_p are parameters (constant and coefficients) to be estimated, analogous to the regression coefficients of the autoregressive process of order p, AR(p), written as
y_t = δ + θ_1 y_{t-1} + θ_2 y_{t-2} + … + θ_p y_{t-p} + e_t   (2)
where e_t is the forecast error, assumed to be independently distributed across time with mean 0 and variance σ²_e; e_{t-1}, e_{t-2}, …, e_{t-q} are past forecast errors; and α_1, …, α_q are the moving average (MA) coefficients. The MA model of order q, MA(q), can be written as
y_t = e_t − α_1 e_{t-1} − α_2 e_{t-2} − … − α_q e_{t-q}   (3)
A seasonal ARIMA model is denoted by ARIMA (p,d,q)(P,D,Q), where P denotes the number of seasonal autoregressive components, Q the number of seasonal moving average terms, and D the number of seasonal differences required to induce stationarity, as discussed by Box et al. [16]. The steps followed in order to define an ARIMA model are as stated by Box [17] and Rahman et al. [18].
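For readers who wish to reproduce such a fit, the following sketch (not the software used in this paper) fits an ARIMA(1,1,2) model with statsmodels and produces a three-year forecast; the production figures are placeholders, not the BTB data.

```python
# Sketch: fit an ARIMA(1,1,2) model to an annual production series (million kg)
# and forecast three years ahead. The numbers are placeholders, not the BTB data.
from statsmodels.tsa.arima.model import ARIMA

production = [45.9, 44.2, 46.1, 48.3, 50.5, 47.7, 53.4, 51.6,
              55.8, 46.2, 52.6, 53.2, 53.6, 58.3, 56.0, 60.1,
              53.5, 58.2, 58.7, 60.0, 60.5, 59.1, 62.5, 66.3]   # 1990-2013

fit = ARIMA(production, order=(1, 1, 2)).fit()   # p=1, d=1, q=2
print(fit.summary())
print(fit.forecast(steps=3))   # point forecasts for 2014, 2015 and 2016
```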
Result and Discussion
In Figure 1, the autocorrelation function (ACF) and the partial autocorrelation function (PACF) show no significant spikes in the original series, which indicates that there are no significant autoregressive or moving average effects in the original series; that is, the tea production series is stationary without any differencing.
After making the series stationary, different parametric combinations of the ARIMA (p,1,q) model were tried on the twenty-four years of tea production data (1990 to 2013), and the best fitted model was accepted on the basis of the minimum values of all the selection criteria mentioned in the methodology. The performance of the developed ARIMA (p,1,q) models is presented in Table 1, which covers eleven ARIMA models, of which ARIMA (1,1,2) was the best. ARIMA (2,1,2) and ARIMA (2,1,1) rank second and third, respectively, while the remaining ARIMA models are not as good as these three.
Therefore, it was concluded that the appropriate model for forecasting the production of tea in Bangladesh, based on data up to 2013, was ARIMA (1,1,2), having the minimum values of all the selection criteria compared with the remaining ten models. Table 2 shows the forecasted values of tea production using the best fitted model, ARIMA (1,1,2): 65.568, 67.867 and 60.997 million kilograms for 2014, 2015 and 2016, respectively.
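The selection procedure itself can be sketched as a simple grid search over ARIMA(p,1,q) candidates ranked by an information criterion; this is an illustrative reconstruction, not the authors' code, and the choice of AIC/BIC as the criteria here is an assumption.

```python
# Sketch: rank candidate ARIMA(p,1,q) models by information criteria and keep the
# one with the smallest value, mirroring the selection procedure described above.
import itertools
from statsmodels.tsa.arima.model import ARIMA

def rank_arima(series, max_p=3, max_q=3):
    results = []
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        if p == q == 0:
            continue
        try:
            fit = ARIMA(series, order=(p, 1, q)).fit()
            results.append({"order": (p, 1, q), "aic": fit.aic, "bic": fit.bic})
        except Exception:
            continue  # skip combinations that fail to converge
    return sorted(results, key=lambda r: r["aic"])

# ranking = rank_arima(production)   # 'production' as in the previous sketch
# print(ranking[0])                  # best model by AIC, e.g. {'order': (1, 1, 2), ...}
```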
Conclusion
This paper aimed to model the production of tea in Bangladesh, based on data up to 2013, using the Autoregressive Integrated Moving Average (ARIMA) approach. On the basis of the results obtained, it is concluded that the ARIMA (1,1,2) model, having the minimum values of all the selection criteria, is the most appropriate model for predicting tea production in Bangladesh. The model performed well both in explaining the variability in the data series and in its predictive ability. Forecasts of tea production can help tea garden owners as well as policy makers in future planning.
|
2019-02-17T14:07:10.050Z
|
2017-04-28T00:00:00.000
|
{
"year": 2017,
"sha1": "2678f4c59c80181889fc4144e3b9171ce871cc79",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2168-9679.1000349",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1b56806ec4e823c7dc36e48cacd445b58676fed7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
236623236
|
pes2o/s2orc
|
v3-fos-license
|
PROFILE OF SCIENTIFIC LITERACY BASED ON DAILY LIFE PHENOMENON: A SAMPLE TEACHING FOR STATIC FLUID
The purpose of this study is to describe a daily-phenomenon-based physics literacy profile for static fluid material in order to determine the extent of students' scientific literacy. First, a daily-phenomenon-based scientific literacy test instrument was developed, adjusted to the scientific literacy competency indicators and the applicable curriculum. This study used a quantitative descriptive method with a Research and Development model and was conducted on 40 students of SMAN 4 Sidoarjo in class XI-MIPA. The developed assessment instrument was found to be theoretically and empirically valid. The theoretical validity is 83%, covering material, construct, and language criteria. The empirical validity includes a reliability test with a reliability coefficient of 0.782 (reliable). The item validity test found 15 valid questions in the low to very high categories, the difficulty-level analysis found 13 moderate questions, and the discriminating-power analysis found 13 questions in the sufficient to good category. Across the four criteria of practical feasibility, 12 of the 15 questions (80%) met the criteria and were eligible to be tested. For each indicator of scientific literacy competence, explaining phenomena scientifically falls in the moderate category (63.0%), evaluating and designing scientific investigations in the very low category (43.7%), and interpreting data and evidence in the very low category (52.7%). It can be concluded that the average percentage of students' physics science literacy ability is 53%, which falls in the very low category.
Introduction
Assessment is the process of collecting information about students and the class in order to make instructional decisions (Arends, 2012). Legislation No. 19 of 2005 on National Education Standards, Chapter 1, Article 1, paragraph 17, states that assessment is the collection and processing of information to measure student achievement. Assessment must therefore be implemented in the learning process to evaluate the results obtained during that process.
Assessment is carried out at both the national and the international level, the latter through the Program for International Student Assessment (PISA). The PISA assessment program is expected to gauge the quality of education of school-aged children in view of the human resource challenges of the 21st century (Pratiwi, 2019). Three aspects are assessed, namely scientific literacy, mathematics, and reading. Indonesia still ranks poorly: in 2018 it was in the bottom ten, ranking 74th out of 79 countries with an average scientific literacy score of 389 against the OECD average of 489 (Kemendikbud, 2019). Low awareness of the importance of education is one of the causes of Indonesia's low level of scientific literacy. Several factors contribute to this, including textbook selection, misconceptions, non-contextual learning, low reading skills, and a learning environment that is not conducive (Fuadi, Robbia, Jamaluddin, & Jufri, 2020). In addition, the learning process tends to use memorization as a vehicle for mastering knowledge rather than thinking skills (Mardhiyyah, Ruslowati, & Linuwih, 2016).
Scientific literacy is knowledge and scientific skill acquired through the identification of questions: obtaining new knowledge, explaining scientific phenomena, drawing conclusions based on facts, understanding the characteristics of science, being aware of how science and technology shape the natural, intellectual and cultural environment, and being willing to engage with and care about science-related issues (OECD, 2017). Based on the 2013 revised curriculum, scientific literacy is indispensable for learning how well students understand science. According to the explanation of the Vice Minister of Education and Culture regarding the concept and implementation of the 2013 Curriculum, future challenges must be met with future competencies (Vice Minister of Education and Culture, 2014). In addition, scientific literacy can serve as a personal and social problem-solving skill (Lederman, Lederman, & Antink, 2013).
The scientific literacy assessment conducted by PISA is intended only for students aged 15 years and under, while students above 15, equivalent to high school students, are not considered. This shows that a scientific literacy assessment instrument is needed for high school students, both to measure how well they have mastered scientific literacy and to advance the quality of education in Indonesia (Indrawati & Sunarti, 2018).
Research related to scientific literacy is extensive, covering both the development of test instruments and students' scientific literacy profiles. However, existing research is still limited in terms of material, item, and population coverage, so further development related to scientific literacy is needed. Indrawati (2018), in research on the development of scientific literacy instruments for the topic of waves, produced theoretically and empirically valid test instruments; however, the coverage of the material and the population was still limited. Several other researchers have addressed related topics: 1) Parno (2018) analyzed the profile of scientific literacy for the topic of dynamic fluids, 2) Tulaiya (2020) analyzed scientific literacy skills for heat material, and 3) Lestari (2020) examined the feasibility of formative-based instruments for the topic of global warming.
Physics is one of the branches of science and is closely related to human life (Harefa, 2019). Students are required not only to know concepts but also to apply them in their daily lives. Learning with scientific literacy aims not only at gaining knowledge of high cognitive value but also at applying that knowledge to life and to interactions with nature (Putri, Ramalis, & Purwanto, 2018). Learning with scientific literacy skills encourages the ability to analyze scientific information in order to obtain new knowledge (Bybee & McCrae, 2011). Scientific literacy skills are related to applying, synthesizing, and evaluating existing information effectively (Whittingham, Huffman, Rickman, & Wiedmaier, 2013).
Learning physics by applying concepts to the phenomena of everyday life is in line with scientific literacy and competency standards. One of the topics in physics with many applications in everyday life is static fluid material. Students can easily find applications of the concept of static fluids because its uses sit side by side with everyday life. Ships are one such application: the buoyant force acting on a ship keeps it afloat, which makes use of static fluid material, namely Archimedes' law.
Some other examples are submarines, hot air balloons, hydraulic pumps, and so on. This shows that physics is very useful for life when realized in technology (Harefa, 2019). Milanto (2021) stated that a test instrument on static fluids had been developed to profile students' scientific literacy. However, the material covered does not span the whole of the static fluid topic, and none of the questions show the application of physics in everyday life. A physics science literacy assessment instrument based on everyday life phenomena, especially for high school students, is therefore needed to familiarize students with such problems.
Research Methods
This research uses a quantitative descriptive method with a Research and Development model comprising a preliminary study stage, model development, and testing (Saputro, 2016). In the preliminary study stage, an analysis of potential and problems was carried out together with a literature review. In the model development stage, a test instrument of 15 essay questions on static fluid material was constructed. Finally, in the testing stage, the instrument, which had been validated by two experts, was tested on 40 students of SMAN 4 Sidoarjo in classes XI MIPA 1 - XI MIPA 4.
The data were collected using the scientific literacy test instrument sheet and the validation sheet. The validity of the instrument was tested in two ways, namely theoretical validity and empirical validity. Theoretical validity is based on the validation results of two experts with respect to material, construction, and language. Empirical validity is based on the criteria of item validity, reliability, difficulty level, and discriminating power. An analysis of the students' scientific literacy profiles was then carried out. The students' scientific literacy indicators are adjusted to the scientific literacy competencies, namely explaining phenomena scientifically, evaluating and designing scientific investigations, and interpreting data and evidence scientifically (OECD, 2019). The criteria for assessing students' scientific literacy are grouped into very high, high, medium, low, and very low (Purwanto, 2008), as shown in Table 1.
Result and Discussion
The development of a scientific literacy test instrument is based on the standard PISA scientific literacy competencies and is adjusted to the basic competencies in the 2013 Curriculum. Given the low literacy ratings of students, it is necessary to involve literacy questions in learning, especially those related to the phenomena of everyday life. The static fluid used in the research is adapted to everyday life, including Archimedes' law, capillary action, Pascal's law, hydrostatic pressure, surface tension, and viscosity. The instruments developed were validated theoretically and empirically.
Theoretical Validity
Two lecturers theoretically validated the instrument that was developed. The validated aspects include material, construction, and language, and Figure 1 shows the percentage of theoretical validity for each aspect. The theoretical validation by the two experts gives a percentage of 83%, which meets the valid criterion: a test instrument is declared valid if the validation percentage obtained exceeds 60% (Arikunto, 2014). Thus, the 15 questions developed were declared valid and could be used for trials after the suggested revisions.
Empirical Validity: Item Validity
The validity of the items was obtained from calculations using the Pearson product-moment correlation between the scores for each item and the total scores obtained by the students in the trial; the results of the item validity test can be seen in Figure 2. The empirical validity is based on the reliability test results, the validity of the items, the difficulty level, and the discriminating power. Reliable means trustworthy, so an instrument that is declared reliable is an instrument whose measurement results can be trusted (Asrul, Ananda, & Rosnita, 2014). Based on the reliability calculation with Cronbach's alpha formula (Widiyanto, 2018), the instrument developed was declared reliable with a reliability coefficient (r11) of 0.782. Compared with the critical product-moment value (r table) at N = 39 and a significance level of 5%, namely 0.316, r11 > r table, i.e., 0.782 > 0.316. A reliable instrument is an instrument that can be used repeatedly with consistent measurement results (Asrul, Ananda, & Rosnita, 2014).
Figure 2 shows the percentage of validity of each item, from the very low to the very high category. Of the 15 questions tested, none fell in the very low category, meaning that no questions were invalid. The low category contains 33%, or five questions (numbers 6, 9, 11, 13, and 14); the moderate category 40%, or six questions (numbers 1, 2, 3, 8, 10, and 12); the high category 20%, or three questions (numbers 4, 5, and 15); and the very high category 7%, or one question (number 7). From these categories, all 15 questions are stated to be valid. The validity of the questions is determined by calculating the correlation coefficient using the Pearson product-moment formula and comparing it with the r table; for the low to very high categories, r > r table = 0.32, so it can be concluded that the fifteen questions developed were declared valid through the item validity test.
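For illustration, the following sketch (with randomly generated placeholder scores, not the trial data) computes Cronbach's alpha and the item-total Pearson correlations described above.

```python
# Sketch: Cronbach's alpha and item-total Pearson correlations from a
# students x items score matrix (random placeholder data, not the trial scores).
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(40, 15)).astype(float)   # 40 students, 15 essay items

k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)          # Cronbach's alpha

totals = scores.sum(axis=1)
item_validity = [np.corrcoef(scores[:, i], totals)[0, 1] for i in range(k)]

print(f"alpha = {alpha:.3f}")
print("item-total correlations:", np.round(item_validity, 2))
# each correlation is compared with the critical r (about 0.316 for N = 39, 5% level)
```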
Level of Difficulty
The difficulty level is obtained by comparing the average score of each item with the maximum score of the question; the percentage of each difficulty category, from easy to difficult, is shown in Figure 3. The results show that 87% of the questions were categorized as medium, and 7% each were categorized as easy and difficult. Thus, 87%, or 13 questions, are reasonable and feasible to use; a good question is neither too easy nor too difficult to solve (Widiyanto, 2018). On the other hand, one easy question and one difficult question (7% each) are not suitable for use: the easy question is number 1 and the difficult question is number 11. The rest are in the medium category.
Discriminating power
The discriminating power of each item is determined from the difference between the average scores of the upper and lower student groups, compared with the maximum score. The results of the discriminating power test are presented in Figure 4. Of the 15 questions developed, 13% (two questions) were in the poor category, 60% (nine questions) in the moderate category, 27% (four questions) in the good category, and 0% (no questions) in the very good category. This shows that 13 questions, from the sufficient and good categories, are feasible to use. Meanwhile, the two questions in the poor category, numbers 11 and 13, were not suitable for use or needed reconsideration. The discrimination coefficient reflects how well an item differentiates between higher-performing (upper group) and lower-performing (lower group) students: the greater the coefficient, the better the question. A low coefficient means the item cannot distinguish between the upper and lower groups, that is, both groups can answer it or neither can. Good questions are questions that only the higher-performing students can answer (Widiyanto, 2018).
Questions that can be used for data collection must meet all four criteria of the empirical validity test; these criteria are related to one another. Of the 15 questions developed, 12 were feasible and met the four criteria, while three questions, numbers 1, 11, and 13, were not feasible.
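The difficulty level and discriminating power described above can be sketched as follows; the maximum item score and the 27% upper/lower grouping are assumptions made for the example, not values taken from the paper.

```python
# Sketch: item difficulty and discriminating power for essay items, following the
# definitions in the text (a maximum score of 5 per item is assumed for illustration).
import numpy as np

def difficulty(scores, max_score=5.0):
    """Mean score of each item divided by its maximum score."""
    return scores.mean(axis=0) / max_score

def discrimination(scores, max_score=5.0, fraction=0.27):
    """Difference between upper- and lower-group mean scores, per item."""
    n = scores.shape[0]
    order = np.argsort(scores.sum(axis=1))
    size = max(1, int(round(fraction * n)))
    lower, upper = scores[order[:size]], scores[order[-size:]]
    return (upper.mean(axis=0) - lower.mean(axis=0)) / max_score

# Using the 'scores' matrix from the previous sketch:
# print(np.round(difficulty(scores), 2))       # roughly 0.3-0.7 is usually read as "medium"
# print(np.round(discrimination(scores), 2))   # larger values discriminate better
```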
Student Science Literacy Profile
After testing the validity of the instrument, the students' physics science literacy profiles were analyzed. The categories of the students' physics science literacy profiles, based on the percentage of scores, can be seen in Table 2.
Knowledge of science within scientific literacy is limited to knowledge and to how the process of knowing is applied to the surrounding life (Mardhiyyah, Ruslowati, & Linuwih, 2016). According to the PISA definition, a scientifically literate person is capable of and willing to engage in reasoned discourse about science and technology (OECD, 2016). The average score obtained is 39.85, corresponding to a percentage of 53%, calculated from the mean score of the participating students against the maximum score of 75. From this value, the students' average science literacy ability is categorized as very low. Viewed per item, the students' physics science literacy skills can be categorized as in Table 3.
Based on Tables 2 and 3, the percentage of students' physics science literacy ability varies. Viewed from the indicators of each item, the results can be grouped into the indicators of scientific literacy as in Table 4.
From the test results, the profile of the students' physics science literacy can be analyzed. Based on Table 2, the students' physics science literacy profiles at SMAN 4 Sidoarjo vary from very low to high, and their average scientific literacy ability is in the very low category. This shows that the students' physics science literacy skills are still below standard, as seen from the indicators of each item and the indicators of scientific literacy in Tables 3 and 4.
Indicators: explain phenomena scientifically
Students are expected to recognize, offer, and evaluate explanations for various natural and technological phenomena (OECD, 2016). Based on Table 4, the students' scientific literacy abilities on this indicator fall into the medium category; that is, some students can recognize, offer, and evaluate explanations of natural and technological phenomena. For question number 1, the students' scientific literacy falls into the medium category, with the question shown in Figure 5. In this question, almost all students understood the meaning of the problem; some answers were not quite right because the students did not understand the application of physics to hot air balloons and thus could not explain how the lift force works in a hot air balloon. For question number 2, the students' scientific literacy was in the low category; the question applies the event of ink spreading on a uniform. Most students understood the meaning of the question, but some did not, and most students did not realize that the behaviour of the liquid described in the question is an instance of capillary action.
For question number 3, the students' scientific literacy was in the medium category; the question uses the phenomenon of flooding. Most of the students gave the correct hypothesis, but few gave the right reasons. The correct hypothesis was obtained by reasoning from the phenomenon, and the inaccurate answers arose because students did not understand the relationship between capillarity and its application in the question. For question number 4, the students' scientific literacy was in the medium category; the question uses hydraulic machine technology in a car wash. Most of the students were able to understand the meaning of the question and answer it correctly. Most of the students who answered incorrectly did not understand that the use of hydraulic machines in car washing is an application of fluids, namely Pascal's law.
For question number 5, the students' scientific literacy was low; the question uses ships at a tourist attraction with different passenger capacities. Most of the students answered incorrectly, with the wrong answers dominated by inaccurate predictions or problem analysis: they answered that the ship's mass, its size, and the sea waves determine whether the ship floats. However, some students answered correctly for the right reason, namely that the ship experiences a buoyant force matched to its capacity, volume, and hull.
Indicators: evaluate and design scientific investigations
Students are expected to describe and assess scientific investigations and to propose ways of solving a problem scientifically (OECD, 2016). From Table 4, it is known that on this indicator the students' scientific literacy skills are in the very low category; their ability to evaluate and design scientific investigations is below average. For question number 6, the students' scientific literacy was in the low category; the question presents ships with different hull shapes. Students with high thinking skills could analyze the efficiency of the hull from the pictures provided in the question.
Meanwhile, students with low analytical skills related the shape and size of the ship to the hull's efficiency, and thus did not understand the meaning of the question being asked. Many students could only name which type of hull is effective without giving reasons, or the reasons given were not quite right, because they merely guessed which hull is roughly effective without considering what makes it effective. For number 7, the result is in the very low category, with the question shown in Figure 6.
Reading passage for questions 7-8.
Two tourists on holiday in Bali went diving. The first tourist dived to a depth of 10 m below the sea surface, while the second tourist dived 12 m deeper than the first. On returning to the surface, the second tourist complained of chest tightness and difficulty breathing while diving. The atmospheric pressure at the water surface equals 1 atm and the density of the fluid is 1,030 kg/m3. What is the appropriate problem formulation for the reading above? The students' answers, on average, deviated from the question, and many answered with conclusions about the phenomenon in the reading, which shows that students do not know what is meant by a problem formulation. For question number 8, students were asked to identify the experimental variables (dependent, manipulated, and response variables). For question number 9, the students' scientific literacy fell into the very low category; the question uses the submarine as its application. Most students could not evaluate the motion of the submarine, which is covered by Archimedes' law; some students only answered the main points without explaining their evaluation, or the evaluation given was incorrect. For question number 10, the students' scientific literacy fell into the very low category; the question uses the experiment of a scientist, namely Archimedes' experiment for determining density. On average, students were not able to evaluate the method that Archimedes used in the experiment, which shows that the students' ability to evaluate is still very low.
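For reference, a small calculation of the physics behind this reading passage, using the hydrostatic pressure relation P = P0 + ρgh with the values given in the question; this is illustrative only and is not part of the instrument.

```python
# Illustrative calculation: absolute pressure experienced by each diver in the
# question above, using P = P0 + rho * g * h.
P0 = 101_325.0        # 1 atm in Pa
rho = 1030.0          # fluid density in kg/m^3, as given in the question
g = 9.8               # m/s^2

for name, depth in [("first tourist", 10.0), ("second tourist", 10.0 + 12.0)]:
    pressure = P0 + rho * g * depth
    print(f"{name}: {pressure / 101_325:.2f} atm at {depth:.0f} m")
# first tourist:  ~2.0 atm at 10 m
# second tourist: ~3.2 atm at 22 m -- the higher pressure explains the breathing complaint
```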
Indicators: interpret data and evidence in a scientific manner
In this indicator, students are expected to analyze and evaluate data scientifically, explain it in various representations, and draw correct conclusions (OECD, 2016). The students' scientific literacy profile on this indicator is in the very low category (Table 4), which means that their ability to interpret scientific data and evidence is below average. For question number 11, the students' scientific literacy was in the very low category (Figure 7). The question reads: "Observe the graph below! From the graph, represent the data in the form of an analysis of the relationship between the mass and volume of the three fluids!"
Figure 7. Example problem number 11
This question is presented in graphical form, and none of the forty students gave a correct answer. This shows that the students cannot analyze data presented graphically and interpret it in a written description. For question number 12, the students' scientific literacy was in the medium category; the question uses experimental data on the viscosity of three different fluids. On average, students could guess which fluid has the largest viscosity coefficient, but no scientific evidence or explanation was given to support the answer. This shows that students do not fully understand the concept of viscosity and its application in everyday life; they could only guess from the apparent thickness of the fluid. For question number 13, the students' scientific literacy was in the very low category; the question uses pictures of an experiment with eggs in a salt solution, in accordance with Archimedes' law. The average student answer was wrong, which shows that students were not yet able to identify the assumptions, evidence, and reasons behind the experimental results presented.
For question number 14, the students' scientific literacy fell into the medium category. In this question, two figures are presented, one of which represents the phenomenon of surface tension and one of which does not. Students were asked to distinguish which of the two images involves surface tension. Most of the students answered correctly, but only a few gave scientific reasons. This shows that students who answered correctly without scientific reasons only guessed from the pictures and explanations without recognizing the surface tension in the question. For question number 15, the students' scientific literacy was in the medium category. Most of the students' answers were correct, but some answers did not include an evaluation of the assumptions or evidence. This shows that students have not mastered how to interpret data and scientific evidence.
The trials were carried out using valid instruments with questions largely in the medium category. The instrument contained questions about the phenomena of everyday life so that students could more easily understand the application of physics in life. Nevertheless, the students' scientific literacy results fell within the very low criterion, meaning that their abilities are still below average. Based on previous studies, students' ability to evaluate and design scientific investigations is on average lower than in the other two categories (Milanto, Zainuddin, & Setyarsih, 2021). Consistent with those earlier results, in this study the students' highest average ability was in explaining phenomena scientifically.
The literacy skills of students at SMAN 4 Sidoarjo are not yet good, partly because learning has been conducted online during the Covid-19 pandemic. This hinders learning that should be delivered directly through explanations and student experiments. Online learning is considered less effective, especially in physics, because of the reduced time for active learning and the limited technology available to explain the material in detail. Scientific literacy has been given to students only implicitly, by informing them about several technologies applied in direct learning, both in scientific analysis and in evaluation. In this regard, increasing students' scientific literacy does not rely only on the role of the teacher. According to Treacy and Melissa (2011), scientific literacy can be improved through reading, writing, and reviewing journals. In addition, students must have the ability to study literature critically and scientifically (Jurecki & Wander, 2012).
Conclusion
The developed instrument for assessing physics science literacy based on everyday life phenomena is theoretically and empirically feasible. The theoretical validity is 83%, covering material, construct, and language criteria. For the empirical validity, 80%, or 12 of the 15 questions, were declared valid against the criteria of item validity, reliability, difficulty level, and discriminating power. The average physics science literacy ability based on everyday phenomena in static fluid material for the students of SMAN 4 Sidoarjo is in the very low category, with a percentage of 53%. For the indicators of scientific literacy, explaining phenomena scientifically is in the moderate category (63.0%), evaluating and designing scientific investigations is in the very low category (43.7%), and interpreting data and evidence scientifically is in the very low category (52.7%).
The physics science literacy assessment instrument based on everyday life phenomena that has been developed is feasible for testing and for use as an evaluation tool in learning. Further research on assessment instruments is needed, especially a more detailed treatment of the aspects of scientific literacy and of how to improve students' scientific literacy skills through learning, both online and offline.
|
2021-08-02T00:06:14.288Z
|
2021-05-05T00:00:00.000
|
{
"year": 2021,
"sha1": "01eb0ba989fe25e0ea7e3caeb24455444245ceb3",
"oa_license": "CCBY",
"oa_url": "https://journal.trunojoyo.ac.id/penasains/article/download/10272/Literacy%20Science",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c530bff91d1c9318b0d85ad336b649d00d77a05",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
265221092
|
pes2o/s2orc
|
v3-fos-license
|
Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages
A significant challenge in image-guided surgery is the accurate measurement task of relevant structures such as vessel segments, resection margins, or bowel lengths. While this task is an essential component of many surgeries, it involves substantial human effort and is prone to inaccuracies. In this paper, we develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision that has been guided by practicing surgeons. Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method, which comprises state-of-the-art machine learning architectures, such as RAFT-Stereo and YOLOv8. The developed method is assessed in various realistic experimental evaluation environments. Our results outline the potential of our method achieving high accuracies in distance measurements with errors below 1 mm. Furthermore, on-surface measurements demonstrate robustness when applied in challenging environments with textureless regions. Overall, by addressing the inherent challenges of image-guided surgery, we lay the foundation for a more robust and accurate solution for intra- and postoperative measurements, enabling more precise, safe, and efficient surgical procedures.
Introduction
In surgical procedures, accurate measurements are crucial. The determination of precise arterial diameters directs the choice of endovascular prostheses (Carvalho, dos Santos, and von Wangenheim 2006). When a tumour is removed, the optimal resection margin must be selected (Bilgeri et al. 2020). This ensures complete removal of the tumour while sparing the surrounding healthy tissue. Similarly, precise measurements are necessary during bowel surgeries to preserve functional bowel length and ensure optimal nutrient absorption after the operation (Gazer et al. 2017). In open surgery, direct visualization and tactile feedback facilitate accurate measurements of tissue and anatomical structures. However, laparoscopic surgery introduces two inherent challenges in intraoperative measurements. First, unlike open surgery, structures in laparoscopic procedures cannot be measured directly due to the limited working space and the use of specialized instruments, rendering traditional measurements exceedingly difficult or even impossible. Second, indirect perception of structures via cameras introduces limited depth perception and distorted perspective, making it challenging to accurately estimate size relationships and obtain reliable measurements. These two factors highlight the necessity of a measurement tool specifically tailored for laparoscopic procedures.
In this study, we introduce a versatile measurement method for laparoscopic surgery, aiming to provide a solution that can measure any structure, both intra- and postoperatively. This leads to the following benefits: it simplifies the measurement process by being applicable to a range of use cases. A readily available measurement method can support the decision-making process during surgery and enable more comprehensive, quantified quality assurance postoperatively. In addition, increased measurements contribute to a broader data foundation, allowing for the identification of best practices and integration into intelligent systems, such as automatic surgery reports. While the method is fundamentally designed for minimally invasive procedures, we focus on laparoscopy as a prime example.
Our contribution to the field of laparoscopic surgery is threefold. First, in collaboration with medical experts, we identify key requirements for a laparoscopic measurement method. Second, we develop a measurement method that meets these key requirements and implement this proposed method using state-of-the-art components from the realm of computer vision. Third, we evaluate its performance through various qualitative and quantitative experiments, demonstrating that the method is not only highly accurate but also robust in challenging conditions such as textureless regions, blood, reflections, and smoke. The code is made available¹.
Related Work
Our work resides at the intersection of two primary areas of research: surface reconstruction methods used in laparoscopy and methods of obtaining precise measurements in minimally invasive surgery (MIS). In the following, we briefly outline related work in both areas.
Surface Reconstruction in Surgery
In the realm of laparoscopy, surface reconstruction is an essential technique aimed at compensating for challenges like a restricted field of view, complex hand-eye coordination, and absence of tactile feedback (Maier-Hein et al. 2014). Many approaches have been developed to create 3D models of organic surfaces that can facilitate data registration and augmented reality in laparoscopic procedures (Röhl et al. 2012; Maier-Hein et al. 2014).
Surface reconstruction methods can be categorized into two types: active and passive methods. Active methods, such as structured light and Time of Flight (ToF), necessitate controlled light projection into the environment and have been explored in multiple studies (Ackerman, Keller, and Fuchs 2002; Maurice et al. 2012; Maier-Hein et al. 2014). Conversely, passive methods do not require projected light and rely solely on camera images. Various algorithms have been proposed for passive methods, such as stereo reconstruction and photometric stereo (Röhl et al. 2011; Collins and Bartoli 2012; Malti and Bartoli 2014). Among these methods, comparative studies have shown the passive stereo-based approaches to be more advantageous due to their higher accuracy and dense point clouds, in comparison to methods like ToF (Maier-Hein et al. 2014; Groch et al. 2011).
While our method also performs a surface reconstruction to capture accurate 3D distances, in distinction to existing work, our objective differs from registering 3D models. Instead, our goal is to precisely measure structures in laparoscopy. Based on the results of the method comparison, we opt for a stereo-based method. In addition to the advantages of more accurate results and dense point clouds, which allow for finer measurements, stereo vision offers another crucial benefit. As a passive system, it relies solely on camera images. This is a significant criterion given the limited working area in laparoscopy and enables postoperative measurement. Furthermore, by utilizing stereo vision, we rely on a component that addresses limited depth perception in laparoscopy (Way et al. 2003).
Certain methods, such as Intuitive Surgical's da Vinci System, have employed stereo vision to optimize 3D perception (Freschi et al. 2012). Stereo vision has already been used as a component in laparoscopy for various reasons that are beyond the scope of this work. Consequently, it is reasonable to capitalize on this existing technology and develop further functionalities using the available components.
Measurement Approaches
Despite the advancements in MIS, accurate intraoperative measurement remains a challenging task. Some current approaches demonstrate the potential of stereo-endoscopes in obtaining precise measurements. For instance, Field et al. (2009) demonstrated that accurate anatomical measurements could be achieved using stereo-endoscopes. However, their method relied on markers, limiting its applicability. Similarly, Bodenstedt et al. (2015) presented an image-based approach for bowel measurement in laparoscopy using stereo endoscopy. Despite showing promise in phantom and porcine datasets, this method was limited by the need for previous images for reconstruction and its specificity to certain structures. Our goal is to extend this concept and provide a general-purpose measurement solution that can be used for any structure without the need for manually placed markers or previous images. To accomplish this, we propose using state-of-the-art AI-based disparity estimation for surface reconstruction (Lipson, Teed, and Deng 2021).
Method Requirements
We formulate the requirements after a thorough review of the characteristics of laparoscopic surgery and in collaboration with medical experts at Charité Universitätsmedizin Berlin, which included the analysis of an actual surgery and analyses of potential case studies. The inclusion of these requirements in the current study aims to maximize the benefits of laparoscopic surgery and address its challenges.
R1 Utilize existing components: When performing a laparoscopic surgery, the measurement tool should not require additional instruments or incisions, as one of the primary benefits of laparoscopic surgery is its minimally invasive nature. The solution should rely on existing components and tools that are already used in the surgical environment.
R2 Integrate as passive system: When surgeons want to perform a measurement, the system must seamlessly integrate without altering the surgeon's workflow, and the surgical procedure should remain the primary focus.
R3 Increase ease of use and availability: When a surgeon needs to access the measurement method during surgery, then the system should be easily usable and available without preparation. This facilitates better documentation, data collection, and decision-making without interrupting the surgical process or requiring preparation before measurements are taken.
R4 Leverage existing camera input: When processing visual data for the surgery, then the system should utilize the input stream from the laparoscopic camera, ensuring it aligns with what the surgeon sees during the procedure.
R5 Implement integration of computer vision-based approaches: When integrating with existing approaches like phase recognition, then the system must employ the same data sources to ensure compatibility and seamless integration with these methods.
R6 Adhere to medical standards: When operating in the sensitive surgical domain where patient safety is crucial, then the system must ensure high accuracy and robustness in all measurements.
R7 Enhance versatility of measurement tasks: When a surgeon aims to measure a structure during surgery, the system should be capable of measuring any structure of interest, eliminating the need for multiple specialized tools tailored to specific tasks.
R8 Provide intra- and postoperative measurement capabilities: When taking measurements during a surgical procedure, then the system should support both intra- and postoperative evaluations, as they can provide valuable data for further research, quality control, and outcome analysis, contributing to the continuous improvement of surgical practices.
Method
Our method depicted in Figure 1 is structured into three parts: Input, Processing, and Output.
Input.To perform a measurement using our method, two pieces of information are required: Stereo images that contain the structure to be measured and the camera parameters of the stereo camera.The stereo images are used to select the desired distance, while the camera parameters are used to project the 2D image into 3D space.While we obtain the images directly from the camera's input stream, the camera parameters must be determined at the outset through camera calibration.
Processing.We define the left image I L as our source image to which we relate all measurements.We assume that the image shows all points of the real world discretized as pixels in the image.If we want to measure the distance between two points of the original scene in world coordinates p W a and p W b , we need to select two pixels in the image p I a and p I b containing these points.For the selection, we distinguish between two cases: offline and online selection.
The offline selection represents the option that a person uses an interface where the points of interest are selected manually by clicking on the specific pixels in the left image I_L, e.g. with a mouse or touchscreen. This case offers the possibility to measure postoperatively on the video or intraoperatively with the help of an assistant who performs the task. To enable a simple and non-intrusive measurement process, we introduce the online selection functionality. With this, surgeons can directly use the available surgical instruments, such as two graspers, as measuring tools during the surgery. The surgeon first positions the tools at both ends of the structure to be measured, ensuring that the tooltips point towards p_a^I and p_b^I. This allows the surgeon to define the structure to be measured within the scene. Then the tips of the instruments are automatically detected and set as p_a^I and p_b^I. To achieve this, we define a function σ : I_L → (M_a, M_b), where M_a and M_b are the segmentation masks for surgical tools a and b, respectively. We propose YOLOv8 (Jocher, Chaurasia, and Qiu 2023) as segmentation model σ. The output masks have binary values, with 1 representing the presence of a tool and 0 indicating no tool at the corresponding pixel location. Similar to Bodenstedt et al. (2015), we calculate the centers of mass m_a and m_b for the masks M_a and M_b, respectively, by finding the average coordinates of all non-zero elements in each mask. The points farthest from their respective centroids are then identified as p_a^I and p_b^I. Given the two points p_a^I and p_b^I in pixel coordinates, we must first reproject them back to world coordinates to measure the distance between them. To do this, we start with the AI-based estimation of the disparity map using RAFT-Stereo (Lipson, Teed, and Deng 2021). RAFT-Stereo is a deep learning architecture for rectified stereo, building upon the RAFT optical flow network (Teed and Deng 2020). It introduces multi-level convolutional gated recurrent units to more efficiently propagate information across images. Then we derive the reprojection matrix Q from the camera parameters and transform the pixels back to world coordinates. Given a reprojected point cloud P = {p_1^W, p_2^W, ..., p_n^W}, where p_i^W ∈ R^3, we aim to reconstruct a smooth, watertight mesh of the underlying surface consisting of well-defined triangles. To do this, we use the Poisson Surface Reconstruction (PSR) algorithm (Kazhdan, Bolitho, and Hoppe 2006).
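The online selection step described above can be illustrated with a small sketch: given a binary instrument mask (e.g. from a segmentation model), the mask pixel farthest from the mask centroid is taken as the tooltip. The toy mask below merely stands in for a real segmentation result.

```python
import numpy as np

def tooltip_from_mask(mask):
    """Return the (row, col) of the mask pixel farthest from the mask centroid."""
    ys, xs = np.nonzero(mask)                  # coordinates of all tool pixels
    centroid = np.array([ys.mean(), xs.mean()])
    coords = np.stack([ys, xs], axis=1)
    dists = np.linalg.norm(coords - centroid, axis=1)
    far = coords[np.argmax(dists)]             # farthest pixel = estimated tooltip
    return int(far[0]), int(far[1])

# Toy example: a diagonal "tool" in a 10x10 image.
mask_a = np.zeros((10, 10), dtype=np.uint8)
mask_a[np.arange(10), np.arange(10)] = 1
print(tooltip_from_mask(mask_a))               # one end of the diagonal
```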
Output. Our method computes two different types of distances: the direct and the on-surface distance between points p_a^W and p_b^W. The direct distance is the Euclidean distance between the selected points (Figure 1, blue line). Considering the complex structure of human anatomy, we include an on-surface distance measurement (Figure 1, red line). From the triangulated mesh representation of the anatomical structure, we create an undirected graph G = (V, E), where V is the set of vertices and E is the set of edges. Each vertex in V corresponds to a vertex in the mesh, and each edge in E connects two vertices that share a face in the mesh. The weight of each edge (i, j) is defined as the Euclidean distance between the corresponding vertices. We compute the optimal path distance between the vertices belonging to p_a^W and p_b^W using Dijkstra's shortest path algorithm (Bodenstedt et al. 2015). To address the potential overestimation of distance due to the vertex structure in the mesh, we use spline interpolation.
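A minimal sketch of this graph-based on-surface distance is given below; the vertices and faces are toy values standing in for the reconstructed mesh, and the spline interpolation step is omitted.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0], [0.0, 1.0, 0.2]])
faces = [(0, 1, 2), (0, 2, 3)]                 # two triangles sharing edge (0, 2)

# Collect undirected edges from the triangle faces, weighted by Euclidean length.
edges = {}
for a, b, c in faces:
    for i, j in [(a, b), (b, c), (c, a)]:
        key = tuple(sorted((i, j)))
        edges[key] = np.linalg.norm(vertices[i] - vertices[j])

rows, cols, weights = [], [], []
for (i, j), w in edges.items():
    rows += [i, j]
    cols += [j, i]
    weights += [w, w]

n = len(vertices)
graph = coo_matrix((weights, (rows, cols)), shape=(n, n)).tocsr()

# On-surface (shortest path) distance from vertex 1 to vertex 3.
distances = dijkstra(graph, indices=1)
print("on-surface distance:", distances[3])
```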
Experiments
In this section, we first introduce all system components of our prototype implementation of the method and then elaborate on the evaluation of the conducted experiments.
Experimental Setup
We first provide an overview of the system components that are common to all six experiments.
Stereo Camera. For our experimental setup, we utilize the Intel RealSense D435 camera as a stereo camera to simulate a laparoscope. The camera is equipped with two infrared sensors, which provide the necessary stereo input for the measurement system. We use a resolution of 848×480 with a frame rate of 15 fps. The camera captures monochrome images, which presents an additional challenge for the measurement system's robustness since they contain less information than RGB images.
Segmentation Model. For real-time measurements, we draw upon YOLOv8 (Jocher, Chaurasia, and Qiu 2023) as segmentation model. We train the model using the ex-vivo dVRK segmentation dataset related to the work of Colleoni, Edwards and Stoyanov (2020). Since the Intel RealSense camera provides monochrome stereo input streams, we convert the RGB images to monochrome. The data is split into 70% for training, 20% for validation, and 10% for testing.
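For reference, fine-tuning and querying a YOLOv8 segmentation model with the ultralytics package can look roughly as follows; the dataset YAML, weights file, image path, and hyperparameters are placeholders and do not reflect the exact training configuration used in this work.

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                       # pretrained segmentation weights
model.train(data="dvrk_tools.yaml", epochs=100, imgsz=640)  # hypothetical dataset config

results = model("left_frame.png")                    # inference on one (monochrome) frame
masks = results[0].masks                             # per-instance segmentation masks
if masks is not None:
    print("instances found:", len(masks.data))       # masks.data: (N, H, W) tensor
```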
Disparity Algorithm.We use two different algorithms.
The first one is RAFT-Stereo, which serves as a component for disparity estimation in our method. Due to the high generalization of RAFT-Stereo compared to related work in the field, we use a model pre-trained on a Middlebury Stereo dataset (Lipson, Teed, and Deng 2021). We believe that having an accurately trained model is crucial, rather than relying on poor training data within the domain. The second is the well-known SGBM (Hirschmuller 2005) as a baseline against which we benchmark the use of RAFT-Stereo.
Surface Reconstruction Method. In our experimental setup, we utilize the PSR method from the Open3D library to reconstruct 3D surfaces (Zhou, Park, and Koltun 2018).
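A minimal sketch of this reconstruction step with Open3D is shown below; the random point cloud and the octree depth are placeholders for the reprojected laparoscopic points and the tuned hyperparameters.

```python
import numpy as np
import open3d as o3d

pts = np.random.rand(2000, 3)                        # stand-in for reprojected points
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)

# PSR requires oriented normals on the input cloud.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
print(mesh)                                          # vertex and triangle counts
```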
Evaluation
To evaluate the proposed method, we deploy it in three environments, each designed to assess different functionalities.
Quantitative Evaluation. Evaluating the accuracy of the method requires an environment with well-known structures for measurement. In collaboration with medical experts, we designed a CAD model that abstracts and simplifies real-world structures. We refer to this environment as the "Playground" (Figure 2). The CAD model was 3D printed using PLA material with a layer thickness of 0.1 mm. Reference points are consistently marked with a small protrusion, enabling their manual recognition in the stereo images. The reference points are applied to different structures in such a way that certain distances occur multiple times in different orientations and positions. To evaluate the on-surface measurements, plane, convex, and concave shapes were incorporated into the Playground in addition to the direct distances. The measurements should not only be highly accurate but also deliver robust measurement results over several trials. The design of the Playground and the measurements taken from various perspectives serve to control this property. The Playground is printed in a single color with a smooth surface, resulting in a structureless surface prone to strong reflections. Combined, this represents an extreme case for stereo vision. We perform multiple measurements of known distances and analyze the error, mean absolute error (MAE), and standard deviation (STD) of the error. We conduct n = 48 offline and online measurements for the ground truth distances l_gt,direct = {40 mm, 80 mm, 120 mm}, focusing initially on direct distance. To evaluate the on-surface functionality, we perform n = 48 on-surface measurements of different shapes. We analyze three planes with distances of l_gt,plane = {40 mm, 80 mm, 120 mm}, complemented by convex and concave shapes. For the convex and concave distances, the "Wave" shape is represented by l_gt,wave = 78.573 mm, the curve shape by l_gt,curve = 68.625 mm, and the triangle shape by l_gt,triangle = 65 mm. We compute the metrics for both the basic on-surface variant and the spline-interpolated variant.
Qualitative Evaluation. A critical aspect of this method is the ability to perform online measurements with surgical tools within the camera scene. To evaluate this functionality, we conduct several measurements using real tools from the da Vinci Surgical System on a realistic phantom representing a hepatectomy (liver resection) (Figure 2). The surgical phantom used in the evaluation is provided by the Experimental Surgery Department of Charité Universitätsmedizin Berlin. Since the phantom does not have exact distances to verify, we limit our evaluation to a qualitative assessment of the tooltip estimation and point cloud.
The third environment is based on a video recording from an actual surgical procedure (Ye et al. 2017) (Figure 2). We utilize a publicly available dataset. It contains rectified stereo images captured during real robotic surgery by the Hamlyn Centre and was published in conjunction with Ye et al. (2017). This dataset does not include ground truth data; therefore, our evaluation is limited to a qualitative analysis of the effects of smoke, blood, and reflections on the reconstructed surface. We sample individual stereo images from the dataset and apply the proposed method to these images, subsequently visualizing the outcomes.
Results
Below, we present the comprehensive results of our quantitative and qualitative experiments.
Quantitative Evaluation in the Playground
Direct measurements. Table 1 presents the measurement results for the three different direct distances. With RAFT-Stereo as a component for disparity estimation, the MAE for all three measurements is below 0.3 mm in the offline setting, demonstrating high accuracy. The STD of the error remains under 0.2 mm, indicating a robust performance. With the maximum error being under 0.8 mm, the results prove to be highly accurate. In the online setting, the results are less accurate. This seems plausible, as it is difficult to handle the instruments with the same precision. In contrast, accurate distance measurements using the SGBM algorithm for disparity estimation are not feasible, as shown in Table 1. Although we observed isolated precise measurements, the SGBM algorithm is not able to produce accurate results over multiple measurements. In summary, both the textureless regions and the poor lighting conditions in the images present a unique challenge for disparity estimation, as clearly demonstrated by the SGBM baseline. From this point forward, we solely consider RAFT-Stereo for disparity estimation, as SGBM led to implausible values.
On-surface measurements. Table 2 shows the results for the on-surface measurements. First, we can observe that on-surface measurements are less accurate than direct measurements. The spline interpolation is able to reduce measurement errors (Figure 3), but they are still present. The errors may come from the higher complexity, as every deviation from the real world to the point cloud is summed up, but our analysis reveals an even more significant factor. While the point cloud itself is highly accurate, as evidenced by the direct measurements (Figure 3) and the point cloud in Figure 4, the bottleneck lies in the mesh creation. The PSR is not able to reproduce the correct surface over all measurement samples.
Qualitative Evaluation
Figure 5 shows an exemplary measurement of the surgical phantom using two graspers from the da Vinci Surgical System. In addition, we perform a qualitative assessment of the applicability of our method in a real environment by applying it to actual images obtained from the da Vinci Surgical System. Figure 6 shows exemplary surface reconstructions for these real-world images. The results are highly promising, with no noticeable noise. In addition, the method demonstrates robustness in handling reflections and the presence of blood or smoke, which do not pose any issues.
Discussion
As the current implementation of the method is only a prototype, there are some limitations to be aware of.
Limitations
Intraoperative online measurements require a real-time version of our method. Currently, the primary bottleneck in our system is mesh generation using PSR. Despite the lack of graphics processing unit acceleration, this step remains the most critical aspect of our prototype implementation. Even with minimal noise and high accuracy of the point clouds, on-surface measurements occasionally produce outliers due to imperfections in the resulting mesh. Our method offers several advantages for surgical procedures, primarily by meeting the key requirements (R1-R8). However, predicting the adoption of such a method by surgeons in general is challenging (Hemmer et al. 2022). This is exacerbated by the lack of a comparable system and data on usage frequency. An additional point of consideration is the optimal measurement process in practical settings. Currently, measurements require that both surgical instruments, including their tooltips, remain visible within the camera's field of view.
Path to Deployment
Addressing the limitations mentioned above, several research directions can enhance our method. First, it is worth considering the enhancement of the mesh creation component and optimizing hyperparameters. Second, exploring AI models for generating a mesh from point clouds is promising. Similar to RAFT-Stereo, the AI model could recognize structures and thus represent a more intelligent approach than PSR. Our method's modularity allows for the optimization or replacement of elements in future studies. Third, it would be of interest to refine the prototype by using a laparoscopic stereo camera and involving surgeons in real-world measurements (Dowrick et al. 2023). In addition, we consider methods that map multiple reconstructions of internal structures, allowing for persistent reference points even when they transition out of the camera's field of view. Collaborating with surgeons will pinpoint weaknesses and possible applications. Additionally, integrating workflow recognition could enable automatic measurement of actions like cuts. Lastly, as reliable measurement accuracy is crucial, integrating measurement uncertainties by estimating error margins is an important future research avenue.
Conclusion
This work aims to develop a versatile measurement tool for laparoscopy. We outline the fundamental characteristics of laparoscopy and establish the key requirements for a laparoscopic measurement tool. Based on these requirements, we devise a method that intelligently integrates the results of existing research.
We evaluate our method through a series of experiments designed to test various functionalities. To achieve this, we implement the method as a prototype, integrating several state-of-the-art components and establishing appropriate experimental environments. For distance measurements, we design and 3D print a CAD model, while a surgical phantom serves for more realistic application scenarios. We construct datasets for both environments, opting for a challenging setting to evaluate the robustness of our method. In addition, we supplement these two datasets with a third public dataset comprising real-world laparoscopic images.
The results show the potential of our proposed human-AI-based measurement method. In addition to exhibiting a remarkably high accuracy, it demonstrates substantial robustness when applied to real-world image data. The inclusion of AI-based components, such as RAFT-Stereo and YOLOv8, is instrumental in achieving these results. Moreover, we successfully show the essential functionality of online measurements by employing surgical instruments to establish reference points. Although the measurement outcomes are not as accurate as those in the offline selection, they still indicate potential. Considering that we do not optimize the individual components for laparoscopic application in the current setting, these results serve as a proof of concept.
Figure 1: Schematic illustration of the measurement method divided into input, processing, and output.
Figure 2: Environments used in the experiments.
Figure 3: Measurement error in mm for direct, basic, and spline measurements in the online and offline setting.
Figure 4: Exemplary online measurement in the Playground.
Figure 5: Exemplary reconstruction of the Phantom.
Figure 6: Exemplary point clouds of real surgery images.
Table 1: Comparison of SGBM and RAFT direct measurements with offline and online selection.
Table 2: Comparison of results from basic and spline-interpolated on-surface measurements with offline and online selection.
Propofol Regulates ER Stress to Inhibit Tumour Growth and Sensitize Osteosarcoma to Doxorubicin
Osteosarcoma is the most common malignant bone tumour affecting children and young adults. The antitumour role of propofol, a widely used intravenous sedative-hypnotic agent, has been recently reported in different cancer types. In this study, we aimed to assess the role of propofol on osteosarcoma and explore the possible mechanisms. Propofol of increasing concentrations (2.5, 5, 10, and 20 μg/ml) was used to treat the MG63 and 143B cells for 72 hours, and the CCK8 assay was applied to evaluate the tumour cell proliferation. Tumour cell migration and invasion were assessed with the transwell assay. The tumour cells were also treated with doxorubicin single agent or in combination with propofol to explore their synergic role. Differential expressed genes after propofol treatment were obtained and functionally assessed with bioinformatic tools. Expression of ER stress markers CHOP, p-eIF2α, and XBP1s was evaluated to validate the activation of ER stress response with western blot and qRT-PCR. The statistical analyses were performed with R v4.2.1. Propofol treatment led to significant growth inhibition in MG63 and 143B cells in a dose-dependent manner (p < 0.05). Osteosarcoma migration (MG63 91.4 (82–102) vs. 56.8 (49–65), p < 0.05; 143B 96.6 (77–104) vs. 45.4 (28–54), p < 0.05) and invasion (MG63 68.6 (61–80) vs. 32 (25–39), p < 0.05; 143B 90.6 (72–100) vs. 39.2 (26–55), p < 0.05) were reduced after propofol treatment. Doxorubicin sensitivity was increased after propofol treatment compared with the control group (p < 0.05). Bioinformatic analysis showed significant functional enrichment in ER stress response after propofol treatment. Upregulation of CHOP, p-eIF2α, and XBP1s was detected in MG63 and 143B secondary to propofol treatment. In conclusion, we found that propofol treatment suppressed osteosarcoma proliferation and invasion and had a synergic role with doxorubicin by inducing ER stress. Our findings provided a novel option in osteosarcoma therapy.
Introduction
Osteosarcoma is a highly malignant bone tumour affecting the extremities of children and young adults. Limb salvage surgery and anthracycline (such as doxorubicin)-based chemotherapy are the first-line treatment for osteosarcoma [1]. Despite recent developments in surgery [2,3] and systemic treatment methods [4,5], local recurrence and drug resistance still pose significant challenges and lead to decreased overall survival. Thus, novel methods to deal with osteosarcoma progression and drug resistance are urgently needed.

Propofol is a widely used intravenous sedative-hypnotic agent, and its antitumour role has recently been recognized. The use of propofol for general anaesthesia induction in gastric cancer showed significantly better survival compared with etomidate in specific TNM stages [6]. In vivo studies with a colorectal cancer xenograft model of mice also showed a significant decrease in tumour development with propofol in comparison with sevoflurane under nonsurgical conditions [7]. Besides tumour development, propofol has also been proposed to reverse drug resistance to multiple chemotherapy agents such as cisplatin [8][9][10], docetaxel [11], 5-fluorouracil [12], and others [13] in multiple cancer types. Molecular mechanisms regarding the role of propofol in cancer have been proposed. For example, propofol has been reported to suppress lung cancer tumorigenesis by modulating the circ-ERBB2/miR-7-5p/FOXM1 axis [14]. Other studies showed that propofol suppressed colorectal cancer development by the circ-PABPN1/miR-638/SRSF1 axis [15] and mediated pancreatic cancer cell activity through the repression of ADAM8 via SP1 [16,17]. The current understanding of possible molecular mechanisms has been extensively reviewed [18,19].

The endoplasmic reticulum (ER) is involved in the biosynthesis of lipids and proteins, and many factors or drugs can lead to "ER stress," a state induced by the accumulation of misfolded and/or unfolded proteins [20]. ER stress-induced osteosarcoma cell death has been extensively reported [21][22][23][24][25][26].

In this study, we aimed to assess the role of propofol on osteosarcoma and its sensitivity to doxorubicin and explore the possible mechanism. We performed experiments on the MG63 and 143B cell lines and found that propofol inhibited cell proliferation, migration, and invasion, and sensitized tumour cells to doxorubicin therapy. Our data also proved the presence of ER stress and activation of UPR pathways under propofol treatment. Based on our findings, we propose that propofol could inhibit osteosarcoma malignancy and promote sensitivity to doxorubicin via inducing ER stress.

The following primers were used for qRT-PCR: XBP1, forward: 5'-GGAGTTAAGACAGCGCTTGG-3', reverse: 5'-GCACCTGCTGCGGACTC-3'; GAPDH, forward: 5'-ACCACAGTCCATGCCATC-3', reverse: 5'-TCCACCCTGTTGCTG-3'. Gene expression was analysed using the 2^-ΔΔCt method.
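For illustration, the 2^-ΔΔCt calculation for XBP1 relative to GAPDH can be sketched as follows; the Ct values are invented and only demonstrate the arithmetic.

```python
ct = {
    "treated": {"XBP1": 24.1, "GAPDH": 18.0},   # hypothetical Ct values
    "control": {"XBP1": 26.0, "GAPDH": 18.2},
}

# Normalize the target gene to the reference gene within each condition...
d_ct_treated = ct["treated"]["XBP1"] - ct["treated"]["GAPDH"]
d_ct_control = ct["control"]["XBP1"] - ct["control"]["GAPDH"]

# ...then compare treated against control and convert to a fold change.
dd_ct = d_ct_treated - d_ct_control
fold_change = 2 ** (-dd_ct)
print(f"relative XBP1 expression (2^-ddCt): {fold_change:.2f}")
```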
CCK8 Assay.
Cells in their logarithmic growth phase were seeded into a 96-well plate at a density of 1 × 10^4 cells/well and incubated with the treatment. After 72 h, 10 μl of CCK8 solution (Shanghai, China) was added to each well and incubated for another 4 hours at 37°C. The absorbance of each well was measured at 450 nm with a microplate reader. Each experiment was conducted with 5 biological repeats.
Transwell Assay.
Briefly, cells were suspended in serum-free medium and seeded in the chambers of 24-well plates with or without Matrigel precoating. After 48 h of culture, the chambers were taken out and the penetrating cells were fixed with 5% paraformaldehyde for 20 min and stained with 0.1% crystal violet for 20 min. Penetrating cells in 5 randomly selected fields of each sample were captured for counting using a light microscope (magnification 20×).
Bioinformatic and Statistical Analysis.
All statistical analyses were performed in R v4.2.1 (https://www.R-project.org/). Data were presented as mean ± SD. The t-test was used for analysing measurement data conforming to the normal distribution and homogeneity of variance. p < 0.05 indicated a significant difference.
Propofol Treatment Led to Growth Inhibition in Osteosarcoma.
To assess whether propofol could affect osteosarcoma cell growth, the CCK8 assay was performed after propofol treatment. Increasing concentrations of propofol (2.5, 5, 10, and 20 μg/ml) together with DMSO control were used to treat the osteosarcoma cell lines MG63 and 143B for 72 hours. The remaining viable cells were assessed with the CCK8 assay. The OD450 readings showed a gradual decrease with increasing concentrations of propofol in both cell lines (Figures 1(a)). These data showed that growth inhibition occurred in both cell lines in a dose-dependent manner. Since propofol (2.5 μg/ml) showed a significant reduction in cell proliferation compared with the control group, it was used as the working concentration for the following experiments. These data showed that propofol treatment inhibited osteosarcoma cell migration and invasion.
Propofol Sensitized Doxorubicin-Induced Growth Inhibition in Osteosarcoma.
Next, we aimed to explore whether propofol could affect the antitumour efficiency of doxorubicin in osteosarcoma. Increasing doses of doxorubicin were used to treat the osteosarcoma cell lines MG63 and 143B with or without propofol (2.5 μg/ml). The growth curves were plotted based on the OD450 readings, and the IC50s were calculated (Figure 3). The IC50 (μM) of doxorubicin in combination with propofol was significantly lower than that of the doxorubicin single-agent treatment group (0.008 vs 0.052, p < 0.05) in MG63 cells. Similar changes were also observed in 143B cells. The IC50 (μM) was reduced from 0.021 in the doxorubicin single-agent treatment group to 0.014 in the doxorubicin + propofol treatment group (p < 0.05). Based on these findings, we concluded that propofol could sensitize doxorubicin-induced growth inhibition in osteosarcoma.
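The IC50 estimation procedure is not detailed in the text; one common approach, sketched below under that assumption, is to fit a four-parameter logistic dose-response curve to the viability readings. The concentrations and viability values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

dose = np.array([0.001, 0.005, 0.01, 0.05, 0.1, 0.5])       # doxorubicin [uM], hypothetical
viability = np.array([0.95, 0.80, 0.55, 0.30, 0.15, 0.05])   # fraction of control, hypothetical

popt, _ = curve_fit(four_pl, dose, viability, p0=[0.0, 1.0, 0.02, 1.0], maxfev=10000)
print(f"estimated IC50: {popt[2]:.3f} uM")
```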
Propofol-Induced ER Stress in Tumour Models.
In order to explore the molecular changes after propofol treatment, we analysed the transcriptional changes after propofol treatment in the dataset GSE101724. Raw data were downloaded from the GEO database, and the expression matrix was constructed. Differentially expressed genes between the propofol treatment group and the control group were analysed and plotted as a volcano plot (Figure 4(a)).
Significantly differentially expressed genes were defined as |logFC| > 1 and p < 0.05. Altogether, 195 genes met the criteria and were collected. Gene Ontology (GO) enrichment analysis of the differentially expressed genes was performed, and the top 8 activities are plotted in the bar graph (Figure 4(b)). The top 10 up- or downregulated genes based on logFC changes were plotted as a heatmap (Figure 4(c)). Interestingly, we found that most of the differentially expressed genes were enriched in endoplasmic reticulum (ER) stress-related activities. ER stress is known to be involved in multiple tumour activities such as proliferation, invasion, and drug resistance. These data showed a correlation between propofol treatment and ER stress, which might explain our findings on the role of propofol in osteosarcoma in this study.
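The filtering criterion can be expressed in a few lines; the table below uses gene names from the figure but invented logFC and p-values, so it only illustrates the |logFC| > 1 and p < 0.05 cut-off.

```python
import pandas as pd

deg = pd.DataFrame({
    "gene":   ["TRIB3", "DDIT3", "INSIG1", "ACTB"],   # placeholder rows
    "logFC":  [2.4, 1.8, -1.6, 0.1],
    "pvalue": [1e-5, 3e-4, 2e-3, 0.8],
})

hits = deg[(deg["logFC"].abs() > 1) & (deg["pvalue"] < 0.05)].copy()
hits["direction"] = hits["logFC"].gt(0).map({True: "up", False: "down"})
print(hits)
```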
ER Stress Response Was Activated after Propofol Treatment in Osteosarcoma.
Next, we set out to test whether ER stress was involved in propofol treatment in osteosarcoma. We examined the expression of ER stress-related markers such as CHOP, p-eIF2α, and XBP1s. Western blot results showed that CHOP and p-eIF2α were upregulated after propofol treatment in both MG63 and 143B cells (Figure 5(a)). The presence of XBP1s was also detected in both cell lines after propofol treatment (Figure 5(b)). All these data suggested the concerted activation of all three ER stress sensors and their combinatorial response, validating that ER stress was induced by propofol treatment.
Discussion
In this study, we examined the antitumour role of propofol in osteosarcoma and explored the possible mechanism. We first tested the growth inhibition induced by propofol. The CCK8 assay after propofol treatment showed significant growth inhibition in a dose-dependent manner in multiple osteosarcoma in vitro models. Tumour migration and invasion changes after propofol treatment were also assessed with the transwell assay. The results showed that propofol treatment induced decreased cell migration and invasion in both MG63 and 143B cell lines.
Next, we tried to test whether propofol could sensitize osteosarcoma to doxorubicin treatment. Proliferation assays of osteosarcoma cells cultured with different concentrations of doxorubicin with or without propofol were performed. Proliferation curves were plotted, and the IC50s under different conditions were calculated to reflect the drug sensitivity. In accordance with similar experiments in other tumour types, propofol significantly reduced the IC50 and sensitized osteosarcoma to doxorubicin treatment.
We explored the possible mechanisms by evaluating the transcriptional changes after propofol treatment with bioinformatic tools. Differentially expressed genes were collected, and functional enrichment highlighted ER stress response-related pathways. We were aware that the GSE101724 study differed from our study in tumour cell lines and propofol concentrations. The GSE101724 study was used as a pilot study to predict the possible roles of propofol on cells. The findings were validated in our study with osteosarcoma cell lines before drawing any conclusions.
To validate the presence of ER stress and activation of the unfolded protein response (UPR) after propofol treatment in osteosarcoma, all three pathways [20] were examined by testing the protein levels of CHOP and p-eIF2α together with XBP1s. Propofol has been reported to induce ER stress and autophagy by promoting calcium release and ROS production in the C2C12 myoblast cell line [27]. Similar results were also observed in HeLa cells [28].
Limitations of this paper should be noted. This study focused on the role of propofol on osteosarcoma and proposed the involvement of the ER stress response; further studies are needed to fully illustrate the detailed process of the propofol-induced ER stress response. In addition, the antitumour role of propofol was based on the in vitro results of tumour cells treated with 2.5 μg/ml of propofol for a duration of 48 and 72 hours. More studies should be conducted before the appropriate clinical usage of propofol as an antitumour agent can be established.

Figure 4: Bioinformatic analysis showed that propofol treatment led to endoplasmic reticulum stress. Transcriptional data of GSE101724 were downloaded from the GEO database and differentially expressed genes after propofol (300 μM) treatment were functionally analysed. Differentially expressed genes are shown in the volcano plot (a), with up- and downregulated genes labelled red or blue, respectively. GO functional enrichment of the differentially expressed genes showed enrichment in endoplasmic reticulum stress-related pathways (b). The top 10 up- or downregulated genes after propofol treatment are plotted as a heatmap (c).
Conclusions
In conclusion, we assessed the antitumour role of propofol on osteosarcoma and found that propofol could induce the ER stress response, inhibit osteosarcoma cell proliferation, migration, and invasion, and increase the sensitivity to doxorubicin treatment. Our findings provided a novel option in osteosarcoma therapy.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Antidiabetic activity of Manomani chooranam aqueous extract on female wistar albino rats
Diabetes mellitus is one of the metabolic syndromes, characterized by hyperglycaemia, hyperuricemia and hyperaminoacidemia, and it leads to hypoinsulinemia and reduced action of insulin. The global prevalence of diabetes is estimated to increase from 4% in 1995 to 5.4% by the year 2025. Most of the antidiabetic drugs, such as sulfonylureas, biguanides, α-glucosidase inhibitors, incretin mimetics, etc., have adverse effects like nausea, vomiting, diarrhoea, abdominal pain, headache, etc. Thus, the search for a new, safe and potent antidiabetic herbal formulation is essential to overcome these problems. As per world ethnobotanical reports, nearly 1000 plants could be used to treat diabetes mellitus. In Siddha medicine, many single and polyherbal formulations and higher medicines like chooranam, parpam, chendooram and chunam have been practised to cure or control diabetes mellitus from time immemorial.
Ethical clearance
The study was done after being approved by the Institutional Animal Ethics Committee (Ref. No. 06/IAEC/MG/2017-1). Adult female Wistar albino rats (weighing 150-200 g) were used for the acute toxicity and antidiabetic activity studies. The animals were purchased from TANUVAS, Chennai. They were kept individually in cages and fed with standard pellet and water; the animals were maintained at a temperature of 27±3°C.
Preparation of MMC aqueous extract
As per the Siddha Pharmacopeia, the MMC aqueous extract was prepared by boiling 100 g of chooranam with 400 ml of water over a slow flame and then filtering. The prepared aqueous extract was stored in an airtight container.
Acute toxicity study
As per OECD 423 guidelines, MMC aqueous extract was administered through oral gavage to female Wistar rats (n=3) at a dose of 2000 mg/kg body weight. The rats were observed for the first 24 hours for any signs and symptoms of toxicity or death, and later for 2 weeks. The procedure was repeated with a higher dose of MMC (5000 mg/kg body weight) using 3 female Wistar albino rats.
Dose selection
As the limit dose did not exhibit any signs of toxicity, a dose of 2000 mg/kg body weight p.o. was taken as the dose for the main study. Metformin at a dose of 100 mg/kg was taken as the standard control.
Induction of diabetes in Wistar albino rats
The overnight-fasted Wistar albino rats were injected intraperitoneally (i.p.) with streptozotocin in 0.1 M cold sodium citrate buffer at a dose of 35 mg/kg body weight. To counteract the drug-induced hypoglycemia, the rats were allowed to drink 5% glucose solution overnight. A week's time was given for the development of diabetes. Hyperglycemia was confirmed by monitoring blood glucose levels; rats with levels above 200 mg/dl were considered diabetic. The animals were then divided into six groups of six rats each (a total of 36 rats). All groups received the respective drugs for 3 weeks.

Group I (control) received normal saline P.O. (per orally). Group II (negative control) diabetic-induced rats received normal saline without treatment. Group III (positive control) diabetic-induced rats received metformin P.O. 100 mg/kg body weight once a day for 3 weeks. Groups IV, V and VI (treatment groups: low, moderate and high dose) received 500, 1000 and 1250 mg/kg of the chooranam extract, respectively, once a day P.O. (Table 1).
Statistical analysis
Collected data were entered in Microsoft Excel 2019 and analyzed using JASP version 0.8.4.0. Results were expressed as mean ± standard error of the mean in tables.
Statistical analysis was performed using one-way ANOVA followed by the post hoc Tukey test. p<0.05 was considered statistically significant.
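Although the analysis here was run in JASP, the same workflow (one-way ANOVA followed by Tukey's post hoc test) can be sketched in Python as below; the blood-glucose values are fabricated for illustration only.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "normal_control":   [92, 95, 88, 90, 94, 91],        # hypothetical glucose values (mg/dl)
    "diabetic_control": [310, 295, 330, 305, 320, 315],
    "metformin":        [140, 150, 135, 145, 138, 142],
    "mmcae_500":        [180, 175, 190, 170, 185, 178],
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```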
Acute toxicity study of Manomani chooranam aqueous extract (MMCAE) in rats
Acute toxicity studies confirmed that the MMCAE was non-toxic up to a dose of 5000 mg/kg body weight. No mortality or abnormal behavioral changes were found at any of the selected doses by the end of the study.
Effect of MMC on blood glucose level
The blood glucose levels were significantly increased after STZ injection when compared to the normal control group, and the animals were grouped according to their blood glucose levels. The groups which received the low dose (500 mg/kg), moderate dose (1000 mg/kg) and high dose (1250 mg/kg) showed a significant reduction in blood sugar levels. The blood glucose levels were significantly reduced in all treatment groups under study when compared to the positive control (Figure 1 and Table 2).
Histopathological study of pancreas
All the sections of normal control (Figure 2A) showed normal tissue architecture with lobules separated by connective tissue septae. The lobules consist of exocrine acinar cells.
The endocrine islets of Langerhans were embedded within the acinar cells. All the sections of disease control ( Figure 2B) showed highly reduced islets number. The number of cells in each islets and the size of islets were also reduced.
The tissue sections of the positive control metformin (Figure 2C), low dose of MMCAE (Figure 2D), moderate dose of MMCAE (Figure 2E) and high dose of MMCAE (Figure 2F) showed an increase in the size of the islets compared to the negative control group.
There was also an increase in the size of cells in each of these groups compared to the negative control group.
There was no loss of tissue architecture or necrosis in any of the groups. Protection was observed in all three treated groups, which was evident from the increase in the size of the islets of Langerhans, and there was no difference in the protection of cells among these groups.
DISCUSSION
Diabetes has become a major health problem in most of the countries. Combination of herbs has been extensively used from ancient times and had shown potent antidiabetic activities without toxicity. Therefore a polyherbal formulation was prepared. 28 Since STZ has selective pancreatic islet beta cell cytotoxicity it is used to induce type I diabetes in rat model. 29 Streptozotocin enters the ß cell causing alkylation of DNA resulting in necrosis.
In the present study, the antidiabetic activity of MMC aqueous extract was investigated in STZ-induced diabetes in rat models. The 3-week study was conducted in the Central Animal House, Mahatma Gandhi Medical College and Research Institute, Puducherry. In our study, the MMCAE at three doses, that is, 500 mg/kg, 1000 mg/kg, and 1250 mg/kg, produced a dose-dependent reduction in the sugar levels, especially in the first week of treatment, when compared to the standard drug metformin (100 mg/kg); this was followed by a significant reduction in sugar levels in the subsequent weeks of the 3-week study.
The polyherbal formulation significantly reduced the blood glucose level in streptozotocin-induced diabetic rats as compared to the diabetic control group. The possible mechanism by which the polyherbal formulation brings about its hypoglycaemic action in diabetic rats may be by potentiating the insulin effect in plasma, either by increasing the pancreatic secretion of insulin from the existing beta cells or by promoting its release from the bound form. 27 Lack of insulin leads to inactivation of the glycogen synthase systems. 30 The possible mechanism of lowering of the blood glucose level by the herbal formulation TAB may be by inhibiting the pancreatic enzymes, resulting in an increase in pancreatic secretion of insulin or its release from the bound form. 28
CONCLUSION
The results show that Manomani chooranam is safe and is able to control blood sugar levels. The ability to reduce blood sugar levels is likely due to the presence of active ingredients in this polyherbal formulation. Hence it may be a potent antidiabetic drug, although this needs further evaluation.
Phylogenomic Insights into Mouse Evolution Using a Pseudoreference Approach
Comparative genomic studies are now possible across a broad range of evolutionary timescales, but the generation and analysis of genomic data across many different species still present a number of challenges. The most sophisticated genotyping and down-stream analytical frameworks are still predominantly based on comparisons to high-quality reference genomes. However, established genomic resources are often limited within a given group of species, necessitating comparisons to divergent reference genomes that could restrict or bias comparisons across a phylogenetic sample. Here, we develop a scalable pseudoreference approach to iteratively incorporate sample-specific variation into a genome reference and reduce the effects of systematic mapping bias in downstream analyses. To characterize this framework, we used targeted capture to sequence whole exomes (∼54 Mbp) in 12 lineages (ten species) of mice spanning the Mus radiation. We generated whole exome pseudoreferences for all species and show that this iterative reference-based approach improved basic genomic analyses that depend on mapping accuracy while preserving the associated annotations of the mouse reference genome. We then use these pseudoreferences to resolve evolutionary relationships among these lineages while accounting for phylogenetic discordance across the genome, contributing an important resource for comparative studies in the mouse system. We also describe patterns of genomic introgression among lineages and compare our results to previous studies. Our general approach can be applied to whole or partitioned genomic data and is easily portable to any system with sufficient genomic resources, providing a useful framework for phylogenomic studies in mice and other taxa.
Introduction
The efficient generation and analysis of comparative genomewide data sets remains a key challenge in evolutionary biology. Massively parallel sequencing has made it relatively easy to generate whole-genome sequencing (WGS) data sets, enabling comparative genomic studies across a broad range of evolutionary timescales. However, the empirical and analytical resources required to generate comparative WGS data sets are still somewhat limiting in species groups with large, complex genomes. As a partial solution, various partitioning approaches are often used to generate comparative genome-wide data across broader sets of species (e.g., restriction site-associated DNA sequencing, targeted capture, transcriptomics, etc.; reviewed in Ekblom and Galindo 2011;Jones and Good 2016). These approaches overcome the extra costs associated with WGS, but analyzing such data across a diverse sample of species still presents a number of challenges. In particular, the most sophisticated analytical frameworks often rely upon approaches developed for WGS and the numerous benefits afforded by a high quality reference genome (e.g., efficient genotyping, physical location, and associated functional annotation; Li and Durbin 2009;Li et al. 2009;McKenna et al. 2010;Yandell and Ence 2012). Thus, as with WGS, the types of analyses that can be conducted using genome-wide partitioned data can be limited by the existence, quality, and completeness of a reference genome.
One common solution in species lacking genomic resources is to use an established reference from another species. Most genotyping approaches are reference-based at some level and thus depend on accurate sequence read mapping (DePristo et al. 2011), which decreases with increasing sequence divergence from the reference (Li et al. 2008). Although mapping algorithms allow for reference mismatches to account for some divergence, polymorphism, or sequencing error (Nielsen et al. 2011;Ruffalo et al. 2011;Liu et al. 2012), mapping to a divergent reference can generate a number of systematic biases that could compromise comparative evolutionary analyses. For example, sequences that show substantial divergence from a reference will map with lower quality and effectively hide corresponding sample-specific variation. Analyses relying on full sequence information, such as those often used in phylogenetics or molecular evolution, may be particularly sensitive to these issues because called genotypes may converge towards the reference, resulting in an overestimated similarity between subject and reference sequences in divergent regions. This phenomenon, generally referred to as reference (or mapping) bias, has been discussed most frequently with regard to its effect on detecting allele-specific expression in transcriptomic analyses (e.g., Satya et al. 2012;Stevenson et al. 2013;Panousis et al. 2014;Brandt et al. 2015), yet it impacts any comparative study where reads are mapped to a divergent reference. For example, reference bias could lead to the underestimation of rates of molecular evolution or the overestimation of phylogenetic discordance due to stochastic genealogical processes (i.e., incomplete lineage sorting) or hybridization. One approach that has shown some promise in alleviating these concerns is the generation of "pseudogenomes," or reference genomes that incorporate sample-specific variation (Holt et al. 2013;Huang et al. 2013Huang et al. , 2014. This allows annotation to be carried over from a reference while accounting for sequence divergence during the mapping stage. Here, we extend these previous works by developing a scalable pseudoreference approach to iteratively incorporate sample-specific variation into a reference and thereby reduce the effects of systematic mapping bias in downstream analyses.
The house mouse (Mus musculus) is an important model of mammalian biology and a compelling system in which to develop comparative genomic approaches and resources. In addition to extensive genetic and developmental resources, the mouse was the second mammal to be sequenced (Chinwalla et al. 2002), and the mouse reference (C57BL/6, a mosaic lab strain primarily of M. musculus domesticus origin; Yang et al. 2011) remains second in quality only to the human genome. House mice have also emerged as a powerful system to address fundamental questions in genome evolution, population genetics, and speciation (e.g., Good et al. 2010;Halligan et al. 2010;Kousathanas et al. 2014;Turner et al. 2014;Phifer-Rixey and Nachman 2015;Larson et al. 2016). Although most evolutionary genomic studies in this group have focused on a few closely related species and subspecies (e.g., Keane et al. 2011;Yang et al. 2011), house mice are embedded within a radiation of~38 species that shared a common ancestor~7.5 Ma (Schenk et al. 2013). Several of these species already have developed inbred laboratory strains, providing a unique combination of genetic and genomic resources that could be leveraged to address a wide array of evolutionary questions in mammals. However, aspects of the Mus phylogeny remain unresolved, including uncertainty in the evolutionary relationships among some key lineages that are relatively closely related to house mice (e.g., M. spretus/spicilegus/macedonicus and M. caroli/cookii/cervicolor; Hammer and Silver 1993;Lundrigan et al. 2002;Chevret et al. 2005;Tucker et al. 2005;Bryja et al. 2014). In addition to uncertainty in overall species relationships, it is also unclear how much phylogenetic discordance there is across the house mouse genome due to incomplete lineage sorting or gene flow between species (e.g., Keane et al. 2011. Resolving these outstanding issues is an important step in developing the mouse system for comparative evolutionary studies. In this study, we use targeted capture to generate whole exome data (54 Mb targeted, exons and flanking regions) across 10 species of mice (Mus). We use these data to evaluate the general performance of our pseudoreference approach in mitigating the effects of reference bias. We then use the pseudoreferences to resolve the phylogenetic relationships among these mouse species while assessing phylogenetic discordance at different genomic scales and the extent of introgression between some lineages. In addition to insights into the evolutionary history of these species, our study provides a foundation for future comparative studies in mice and a general framework for rapidly generating phylogenomic data sets in other groups of closely related species.
Exome Capture
Illumina sequencing libraries were generated using whole genomic DNA from ten species, enriched for whole exomes using an in-solution capture design (Fairfield et al. 2011), and 100 bp paired-end sequenced on an Illumina HiSeq 2000. This in-solution enrichment platform targets 54.3 Mbp of exonic regions within the mouse genome (NCBI37/mm9).
Quality Assessment and Iterative Mapping
Raw reads were cleaned using the expHTS pipeline (available from https://github.com/msettles/expHTS; last accessed February 28, 2017), which trims adapters and low-quality bases, merges overlapping reads, and removes identical reads (putative PCR duplicates). Initial capture performance statistics were calculated using CollectHsMetrics in Picard v2.5.0 (available from http://github.com/broadinstitute/ picard; last accessed February 28, 2017). To mitigate reference bias, we employed an iterative mapping strategy to generate species-specific exomes embedded within the mouse reference genome (GRCm38). Cleaned reads were mapped to the reference genome using the MEM algorithm of BWA v0.7.15 (Li and Durbin 2009;Li 2013). Duplicate reads were identified postmapping using Picard v2.5.0. For multiply mapped reads, only the location with the best mapping quality was included in the analysis. Regions with insertions or deletions (indels) were identified and realigned, and single nucleotide variants (SNVs) were called using HaplotypeCaller within the Genome Analysis Toolkit (GATK) v3. 6 (McKenna et al. 2010;DePristo et al. 2011). Resulting SNVs were filtered for a minimum quality of 30 and a minimum sequencing depth of at least five independent reads. These variants were injected back into the original reference using FastaAlternateReferenceMaker within the GATK. Additional processing of files, such as indexing, merging, and sorting, was accomplished using SAMtools v1.3.1 ) and Picard v2.5.0, as required. After each round, the modified reference was used as the starting point for additional iterations, starting with remapping of all reads and proceeding through variant calling. The early rounds of this iterative procedure should systematically introduce variants from the sample into the reference, increasing the number of sample reads that map and the number of variants that can be confidently called until the number of incorporated reads stabilizes across subsequent iterations. At this point, we inserted IUPAC ambiguity codes at putative heterozygous positions.
It was initially unclear how many iterations of mapping and reference generation ought to be performed to remove reference bias in our study. Preliminary evaluation (data not shown) suggested more than three iterations of mapping and genotyping would be required to incorporate most variation into a pseudoreference. We examined this empirically by identifying the number of iterations (5) at which read incorporation and per-site sequence divergence plateaued in the most divergent species in our sample, M. pahari. We then used this as the number of iterations necessary to produce a stable pseudoreference across all species in our sample. As a final step, each position with insufficient data to confidently call a sample genotype was excluded; an additional round of variant calling was performed with the EMIT_ALL_SITES argument set, producing a VCF with calls at each position. All remaining ambiguous positions (genotype quality <30, read depth <10 or >60) were hard masked (i.e., replaced with an "N") using GNU awk and bedtools v2.25 (Quinlan and Hall 2010). This produced a final consensus pseudoreference exome for each sample with the same coordinate system as the mouse reference. We also generated pseudoreferences without ambiguity codes for some downstream analyses. These are useful for bioinformatic analyses, including mapping and variant calling, which assume a haploid reference. All code necessary to replicate these procedures starting from cleaned reads is available as part of the pseudo-it project on GitHub (http://www.github.com/bricesarver/pseudo-it; last accessed February 28, 2017), and all pseudoreferences are available upon request.
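The iterative loop can be summarized schematically as below. The command lines are deliberately abbreviated (read groups, reference indexing, duplicate marking, and the quality/depth filters are omitted) and the exact arguments may differ between tool versions; the complete pipeline is the pseudo-it package referenced above.

```python
import subprocess

def run(cmd):
    """Run one shell command of the pipeline, failing loudly on errors."""
    print(f"[pseudo-iteration] {cmd}")
    subprocess.run(cmd, shell=True, check=True)

reference = "GRCm38.fa"
reads_1, reads_2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

for i in range(1, 6):                     # five iterations, as used in this study
    bam = f"iter{i}.sorted.bam"
    vcf = f"iter{i}.vcf"
    new_ref = f"pseudoref_iter{i}.fa"

    run(f"bwa mem {reference} {reads_1} {reads_2} | samtools sort -o {bam} -")
    run(f"samtools index {bam}")
    run(f"java -jar GenomeAnalysisTK.jar -T HaplotypeCaller -R {reference} -I {bam} -o {vcf}")
    run(f"java -jar GenomeAnalysisTK.jar -T FastaAlternateReferenceMaker "
        f"-R {reference} -V {vcf} -o {new_ref}")

    reference = new_ref                   # the next round maps against the updated reference
```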
Phylogenetic Inference
We used a two-tiered approach to resolve the phylogenetic relationships in our sample. First, we estimated the overall phylogeny from a concatenated alignment of gene sequences using the brown rat (Rattus norvegicus) as an outgroup. For each targeted protein-coding gene, we extracted the longest protein-coding transcript sequence based on the UCSC genes track (retrieved through the UCSC Genome Browser) from each iterated pseudoreference and from the whole genome reference sequence for M. m. domesticus strain C57BL/6. For each species, exons were extracted and assembled into transcripts using custom code and the Biostrings package (Pagè s et al. 2016) in R v3.1.3 (R Core Team 2015) and then combined into a multispecies alignment. We then used BioMart (Smedley et al. 2015) to identify one-to-one orthologous transcripts in R. norvegicus. Each set of transcripts was translation aligned using TranslatorX (Abascal et al. 2010) with the Muscle progressive alignment algorithm (Edgar 2004). Alignments without a length evenly divisible by three or possessing internal stop codons were discarded (5702 genes). With this filtered gene set, we then performed concatenated analyses by chromosome to simplify data processing and to verify internal consistency of analyses. All transcript sets from each chromosome were combined into a supermatrix using Phyutility v2.2.6 (Smith and Dunn 2008). A tree was estimated for each chromosome with the MPI version of RAxML v8.2.3 (Stamatakis et al. 2005;Stamatakis 2014) using a simultaneous maximum likelihood (ML) search and rapid bootstrapping run under the GTR + À model of sequence evolution (autoMRE option). Trees were visualized using FigTree v1.4.2 (http://tree.bio.ed.ac.uk/software/figtree; last accessed February 28, 2017). Among-chromosome topological discordance was assessed by rooting trees with rat and estimating pairwise Robinson-Foulds distances (Robinson and Foulds 1981) using the ape library (Paradis et al. 2004) in R.
Second, we focused on finer-scale patterns of phylogenetic discordance. A phylogenetic tree assumes a series of bifurcating speciation events. However, the speciation process is not necessarily instantaneous and we expect some regions of the genome to show conflicting phylogenetic histories due to incomplete lineage sorting, hybridization, or undetected gene duplication. In phylogenetics, a distinction is made between the history of a locus (a "gene tree") and the true relationship among lineages (a "species tree"; Maddison 1997). Several approaches have been developed to account for gene treespecies tree discordance under the multispecies coalescent (e.g., Edwards et al. 2007;Liu et al. 2009;Heled and Drummond 2010), yet many of these approaches are computationally intensive and thus less practical for genome-scale data sets. With these limitations in mind, we accounted for phylogenetic discordance in our data set using the computationally efficient species tree algorithm implemented in ASTRAL v4.10.11 (Mirarab et al. 2014;Sayyari and Mirarab 2016). Assuming sets of independent and accurately estimated gene trees, ASTRAL breaks each tree into its constituent quartets (i.e., four-taxa cases) and recovers a consistent estimate of the species tree.
Resolution of individual targets or transcript genealogies may be limited in our study, given the low overall levels of coding divergence between our focal species. To increase local phylogenetic signals, we expanded our working data set to include 5'-or 3' untranslated regions (UTR) and all other regions targeted for capture. Though exome probes are usually contained within annotated exons, both the capture process itself and the iterative pseudoreference process allow for the discovery of variation in flanking regions. To incorporate this variation, we extended each target by 200 bp on both ends and merged regions that were up to 1 kbp apart, increasing the total data set from 54.3 to 163.4 Mbp. As above, we first used RAxML (GTR + À, 200 bootstrap replicates) to estimate an ML tree per chromosome by extracting extended targets using bedtools v2.2.5 and combining regions with AMAS (Borowiec 2016). For these data, no alignment is required because indel variation is not incorporated into the pseudoreference. We then repeated this procedure across autosomal windows of five different sizes (extended targets, 100 kbp, 500 kbp, 1 Mbp, and 5 Mbp), estimating ML phylogenies from each window using the fast hill-climbing algorithm in RAxML. Strong linkage disequilibrium typically extends 100 kbp or less within wild house mouse (M. musculus) populations (Laurie et al. 2007), suggesting that larger window sizes may combine regions with independent phylogenetic histories. Any window containing only missing data for at least one individual was discarded. For each window size, all trees were combined for species tree inference in ASTRAL. We also calculated among-locus phylogenetic discordance using the normalized quartet score, which quantifies the amount of quartet discordance relative to the species tree.
Testing for Introgression
Motivated by recent studies that identified introgression between mouse lineages (e.g., Teeter et al. 2008; Keane et al. 2011; Yang et al. 2011; Staubach et al. 2012; Janoušek et al. 2015; Liu et al. 2015) we tested for signatures of introgression within and between taxa from the M. musculus group (here, M. m. musculus and M. m. domesticus), the M. spretus group (M. spretus, M. spicilegus, and M. macedonicus), and between M. cervicolor, M. cookii, and M. caroli. We used the D-statistic (i.e., Patterson's D or the ABBA-BABA test) to characterize patterns among species (Green et al. 2010; Durand et al. 2011). Briefly, the D-statistic is a normalized difference of counts of two site patterns within a rooted four-taxon case: ABBA and BABA. ABBA counts indicate a sharing of alleles between the first taxon and a specified outgroup (A) and the second and third taxa (B), whereas the opposite is true for the BABA case. Significance was assessed using a chi-square test (see Pease and Hahn 2015), and 95% confidence intervals estimated using a nonparametric bootstrap with 10,000 replicates. Additionally, when our sampling allowed, we estimated the minimum proportion of genomic admixture (f) following Durand et al. (2011). The D-statistic is relatively robust to genotyping error (Green et al. 2010; Durand et al. 2011), but could be sensitive to inherent differences in the source and quality of the exome data relative to the reference genome (dom C57BL/6). Therefore, we limited our comparisons to sequenced exomes except when directly testing for differential introgression between M. m. musculus and the two available M. m. domesticus genotypes (dom LEWES, dom C57BL/6).
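A bare-bones version of the D-statistic calculation is sketched below using numpy/scipy. It is an illustrative sketch, not the authors' pipeline: alleles are coded 0/1 and polarized against the outgroup, missing data are ignored, and the toy input is random, so no real ABBA/BABA excess is expected.

```python
# Schematic computation of Patterson's D from site-pattern counts for a rooted
# four-taxon arrangement (((P1, P2), P3), Outgroup). Genotypes are coded 0/1;
# real analyses would also handle missing data and compute block bootstraps.

import numpy as np
from scipy.stats import chi2

def abba_baba(p1, p2, p3, out):
    """Return (D, n_abba, n_baba) from arrays of 0/1 alleles per site."""
    derived = lambda x: (x != out)               # derived relative to outgroup
    d1, d2, d3 = derived(p1), derived(p2), derived(p3)
    abba = np.sum(~d1 & d2 & d3)                 # P2 and P3 share the derived allele
    baba = np.sum(d1 & ~d2 & d3)                 # P1 and P3 share the derived allele
    D = (abba - baba) / (abba + baba)
    return D, abba, baba

def d_significance(abba, baba):
    """Chi-square test of ABBA == BABA (1 degree of freedom)."""
    expected = (abba + baba) / 2.0
    stat = (abba - expected) ** 2 / expected + (baba - expected) ** 2 / expected
    return chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
sites = rng.integers(0, 2, size=(4, 10_000))     # toy data: no true introgression
D, abba, baba = abba_baba(*sites)
print(round(float(D), 3), d_significance(abba, baba))
```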
Efficient Targeted Recovery of Mus Whole Exomes
Multiplex exome capture was successful across all samples. Sequencing efforts produced an average of ~22 million reads per sample with an average of 1.1% of targets showing no coverage. Given a combined target size of ~2% of the genome, this represents targeted recovery of 53.8 Mbp of sequence data (table 1) including most annotated genic regions in the mouse genome. Approximately 75% of raw reads were unique, resulting in average target coverage of 30× across samples (range: 20.6-39.3×) with ~80% of targeted bases sequenced to at least 10× coverage (table 1).
Evaluation of Iterative Pseudoreference Generation
To assess the performance of the iterative approach, we compared the same set of cleaned reads mapped to the mouse reference and to five-iteration pseudoreferences for each species (table 1). In all cases, mapping to a five-iteration pseudoreference resulted in minor increases in the coverage of targeted bases (e.g., 23.0-23.4× in M. pahari) and the percentage of targeted bases recovered at a given depth (e.g., +1.3% for targets with at least 10× coverage in M. pahari; table 1). In addition, reads were more confidently placed with each pseudoreference, resulting in an increase in usable bases and fewer reads discarded due to low mapping quality, as evidenced across iterations for the M. pahari exome (supplementary material table S2, Supplementary Material online).
In addition to modest increases in overall coverage, pseudoreference construction should also help mitigate systematic biases in standard descriptive statistics when mapping to a distantly related reference genome. To test this, we calculated the per-site divergence for targeted bases on Chromosome 1 (i.e., the number of homozygous alternative calls relative to the C57BL/6 reference divided by the total number of confidently genotyped sites) at each iteration for three samples, M. m. domesticus (dom LEWES), M. spretus, and M. pahari, of increasing evolutionary distance from the reference. Divergence estimates were notably higher in all three species when using a five-iteration pseudoreference when compared with mapping straight to the mouse genome (fig. 1A). Increases in per-site divergence were lowest for M. m. domesticus (dom LEWES, 0.19% vs. 0.22%; fig. 1A), and highest for the most distantly related lineage in our study, M. pahari (3.34% vs. 4.24%). In all cases, the most dramatic change was observed after mapping to the first estimated pseudoreference (i.e., iteration 2) and appeared to reach an asymptote by the fourth iteration. However, the relative magnitude of change scaled with divergence (fig. 1B), assuming that incremental increases reflect divergence estimates asymptotically approaching their true value. These results indicate that the number of iterations required to mitigate biases will be contingent on the divergence levels between sample(s) and reference(s) in a given study.
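The per-site divergence statistic defined above is easy to compute from a genotype table; the sketch below shows the idea. The genotype string encoding ("0/0", "0/1", "1/1", "./.") and the quality cutoff are illustrative assumptions, not the paper's exact filters.

```python
# Sketch of the per-site divergence statistic: the fraction of confidently
# genotyped targeted sites called homozygous for the alternative allele
# relative to the reference. Encoding and GQ threshold are assumptions.

def per_site_divergence(calls, min_gq=30):
    """calls: iterable of (genotype, genotype_quality) tuples for targeted sites."""
    confident = [(gt, gq) for gt, gq in calls if gt != "./." and gq >= min_gq]
    if not confident:
        return float("nan")
    hom_alt = sum(1 for gt, _ in confident if gt == "1/1")
    return hom_alt / len(confident)

# Toy example: 2 hom-alt calls out of 4 confident sites -> divergence 0.5.
toy = [("1/1", 45), ("0/0", 50), ("1/1", 40), ("0/1", 35), ("./.", 0), ("1/1", 10)]
print(per_site_divergence(toy))
```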
The impact of pseudoreference construction on estimates of sequence divergence should also be apparent within a genome, across sites that vary in levels of functional constraint, for example. To test this, we classified all confidently called sites in M. pahari as belonging to protein-coding exon sequence, 5'- or 3'-UTRs, or flanking regions (introns or intergenic). We observe the same trends, with the most dramatic changes in per-site divergence detected in the less constrained flanking regions, followed by UTRs and protein coding domains (fig. 1C).
Finally, we looked at the number and quality of variants called for M. m. domesticus (dom LEWES), M. spretus, and M. pahari using the mouse reference and a five-iteration exome pseudoreference (Chromosome 1). We used HaplotypeCaller in the GATK (with -emitRefConfidence BP_RESOLUTION) to return genotype calls at each position and applied common quality filters to each set (as above). Although the number of confidently called sites decreased with divergence from the mouse reference genome, the total number of confidently called sites relative to the first iteration increased (table 2). Intuitively, we would also expect that genotype qualities should tend to increase in the context of pseudoreferences. Consistent with this, we observed a positive skew in genotype qualities for all three species at positions that were confidently called relative to the mouse reference genome and the final pseudoreference (fig. 2). However, we also observed many sites where the genotype quality decreased, frequently reflecting the loss of reads at a position due to being more confidently placed elsewhere after iteration. We also observe cases where sites called as homozygous reference or alternative relative to the mouse reference are called heterozygous (and vice versa) due to the placement of reads with alternate alleles at a given site.
[Displaced Table 1 caption: Shown are the total reads per library after cleaning (Total Reads), the number of bases in regions targeted by the capture (Bases On-Target), average coverage per target (Target Coverage), the percentage of bases in reads mapped with a MAPQ greater than zero (% Low Quality Bases), and the percentage of bases in targeted regions with at least 10× coverage.]
Resolving the Mus Phylogeny
We first estimated a phylogeny for each chromosome based on concatenation of protein-coding transcripts. After filtering, this data set consisted of 15,620 aligned transcripts (28.2 Mbp) with one-to-one orthologs in rat. RAxML produced the same fully resolved tree for all chromosomes with 100% bootstrap support for each bipartition (supplementary material fig. S1, Supplementary Material online). There was no topological discordance among chromosomes (Robinson-Foulds distances equal to zero). Additionally, there was no discordance among trees estimated using sets of transcripts without Rattus (26,624 transcripts with a total length of 43.7 Mbp, analysis not shown), and all trees were resolved with 100% bootstrap support. These analyses also confirmed that M. pahari is an outgroup relative to the other sequenced species based on the rooted phylogeny (supplementary material fig. S1, Supplementary Material online). We then repeated this procedure for an expanded data set including all targeted and flanking regions in mice (and excluding rats), and found the same general results of a fully resolved concatenated phylogeny with no discordance among chromosomes (fig. 3). Notably, these concatenated phylogenies resolve M. spretus/spicilegus/macedonicus and M. caroli/cookii/cervicolor as monophyletic groups with M. spretus and M. caroli placed as the basal lineages within each.
Using concatenation to resolve a phylogeny effectively averages over fine-scale discordance, which can inflate confidence in the overall tree and obscure important sources of incongruence (Hahn and Nakhleh 2016). Therefore, we also used a species-tree approach to quantify fine-scale topological discordance. To do this, we first estimated individual ML genealogies using all extended targets with data partitioned into the five window sizes, from 100,531 extended targets (66,743 bp) up to 511 5 Mbp intervals (291,249 bp). We then used these trees to estimate species trees while accounting for among-locus topological discordance. We detected no appreciable discordance in the point-estimate of the species tree (rooted on M. pahari; fig. 3) when compared with the per-chromosome concatenated trees at the 100 kbp, 500 kbp, 1 Mbp, and 5 Mbp scales (fig. 4). However, quartet support for some branches did vary by window size, and there was discordance at the target-level scale (fig. 4). The lowest support was found for branches defining the M. pahari-platythrix-minutoides group at the base of the tree, suggesting some uncertainty in the placement of these deep nodes. Indeed, M. platythrix and M. pahari share a common ancestor in the species tree estimated using the extended targets, contrary to all other analyses. Only 36% of quartets support this clade, and the branch is extremely short. We also observed some variation in support levels within other groups. For example, although the M. spretus-spicilegus-macedonicus clade itself was well supported across most analyses, only 43% of quartets support the species tree designation of this clade at the level of targets (fig. 4). Support steadily increased to 59% at the 100 kbp scale, 74% at the 500 kbp scale, 82% at the 1 Mbp scale, and 96% at the 5 Mbp scale. Thus, there is some fine-scale discordance in this group of interest, but the overall species tree generally shows more support than alternative phylogenies. Likewise, support for the M. caroli-cookii-cervicolor clade started at 60% at the target scale and reached 100% at the 5 Mbp scale. Normalized quartet scores suggest ~80% of all quartets support the species tree at the target scale, and this increased to ~99% at the 5 Mbp scale. Considering all analyses, the phylogeny for these taxa appears reasonably well resolved with relatively low levels of topological discordance, at least at the scales that can be reasonably evaluated with our exome data.
Introgression
We detected genotype asymmetries consistent with significant autosomal introgression between M. m. domesticus and M. m. musculus. We also detected some evidence for significant introgression between M. cookii and M. caroli. We did not detect autosomal introgression in other cases, including between lineages of the M. spretus group (M. macedonicus, M. spicilegus, and M. spretus) (fig. 5; supplementary material table S3, Supplementary Material online). Patterns of between-lineage allele sharing were variable among strains within M. m. domesticus and M. m. musculus, consistent with the notion of differential introgression due to recent gene flow (Yang et al. 2011). Our sampling allows us to estimate the minimum admixture proportion for a few of these instances. We estimated that ~7% of the genomes of mus PWK (f = 6.6%) and dom C57BL/6 (f = 6.8%) descend from introgression between M. m. musculus and M. m. domesticus.
Discussion
Genomic data sets are now commonplace in model and nonmodel systems. However, using a divergent reference genome to analyze genomic data sets can introduce reference biases that can affect biological inferences. To help address this outstanding issue, we developed a scalable pseudoreference approach to iteratively incorporate sample-specific variation into an established reference. Additionally, we describe the first targeted sequencing effort of complete exomes for approximately one-third of described Mus species diversity. Using these data, we resolve the phylogenetic relationships between these mouse species and describe patterns of introgression among lineages. Our analyses demonstrate that targeted exome sequencing is useful for both of these tasks and provides a proof-of-concept for similar analyses in other systems. More generally, our pseudoreference framework alleviates mapping biases that can lead to systematic underestimates in divergence and related statistics, providing a useful tool for comparative genomic analyses. Below, we discuss the general utility and limitations of our approach as well as the specific insights of our data to mouse evolution.
Exome Capture and Pseudoreference Construction
Ongoing work will continue to generate assembled and annotated reference genomes for many species of interest. However, high-quality reference genomes, which are critical to mapping reads generated from high throughput sequencing technologies, are still relatively scarce (Ellegren 2014). We were able to capture whole exomes across ten species spanning ~7.5 Myr of divergence (Schenk et al. 2013). Given the strong and comparable performance across all species, we anticipate that this capture approach would be effective over deeper evolutionary timescales. In addition to basic phylogenetic insights, our approach could also be used to generate comparative genomic data for in-depth analyses of molecular evolution over moderate timescales or as a supplement to lower-coverage whole genome data. Other studies have shown that targeted capture can be used to recover exome data over a broad range of evolutionary timescales (Vallender 2011; Bi et al. 2012; Jin et al. 2012; Hedtke et al. 2013), though the integration of such data into a well-annotated reference genome had not been explored.
[Displaced figure caption: ...pahari, estimated using all extended targets from Chromosome 1. ML bootstrap support values are listed above branches. There was no discordance between this tree and trees estimated from other chromosomes. The inset provides the normalized quartet scores calculated with ASTRAL from local genealogies estimated at five genomic scales.]
Our transspecific capture and iterative pseudoreference approach leveraged the benefits of the mouse reference, including position and annotation information, while mitigating the confounding effects of reference bias. Even among closely related species, we demonstrated that reference bias can have a strong impact on estimation of basic parameters, such as genetic divergence ( fig. 1) and genotype quality ( fig. 2). These simple comparisons illustrate that while reference-based genotyping is sensitive to divergence, the iterative pseudoreference approach reduces these biases over moderate levels of sequence divergence. Pseudoreferences, therefore, should generally increase the quality of and confidence in downstream analyses through incorporating additional reads and placing them with greater confidence. Implementation of our approach is straightforward (with the pseudo-it package), requires the same set of resources as standard mapping and variant calling, and preserves the coordinate system of the original reference. An alternative approach would be to de novo assemble targeted regions within each species (e.g., Bi et al. 2012). Whereas mapping to contigs assembled de novo is not expected to introduce reference bias, assembly requires substantially more computational power and results in a new coordinate system that needs to be linked between species. Any hard-earned empirical or computational annotation afforded by a reference would also need to be reestablished.
Several other approaches have been developed to combine sets of loci into workable references that can be used to call variants (e.g., PRGmatic; Hird et al. 2011), but are not iterative and cluster regions based on overall similarity. Recent studies using restriction-site associated DNA sequencing (RAD-seq) have shown that overall data quality is considerably higher when using a reference (Fountain et al. 2016;Shafer et al. 2016). Mapping of reads followed by de novo assembly would also be expected to reduce mapping bias and genotyping errors but consumes substantially more resources (Gan et al. 2011;Hunter et al. 2015). It is possible to obtain genomic coordinates of contig sets assembled de novo by aligning to a reference. However, such an approach is computationally demanding. For example, de Bruijn graph-based assembly would need to be performed under a range of k-mer values and clustered, and each assembly is both CPU and memory intensive. Furthermore, de novo assemblies from transcriptomic or capture data sets are often highly fragmented (e.g., Bi et al. 2012), leading to additional complications. We have shown that studies lacking species-specific references may benefit from an iterative approach, provided that a reference genome exists within a moderate evolutionary distance.
We also illustrated that the use of pseudoreferences can be combined with exome capture to resolve a species-level phylogeny (figs. 3 and 4) and inform about patterns of introgression (fig. 5). A resolved phylogeny is important for answering a variety of questions in evolutionary biology, including estimating speciation rates (e.g., Nee 2001), inferring rates of morphological evolution (e.g., Pennell and Harmon 2013), and characterizing patterns of molecular evolution (e.g., Zhang et al. 2005). In addition to exonic sequences, noncoding regions (e.g., introns and intergenic regions) may also be targeted for capture or recovered through anonymous partitioning approaches. Given that they tend to evolve more quickly than exons (fig. 1C), these noncoding regions would aid in resolving relationships among closely related taxa, inferring rates of evolution in concert with the phylogeny, or investigating finer-scale patterns of phylogenetic discordance. Because reference mapping biases scale with divergence (fig. 1), iterative mapping is likely to be particularly useful when analyzing more rapidly evolving nongenic regions.
Many of the questions listed above require that the tree be ultrametric (i.e., scaled relative to time), but it is still computationally intractable to estimate an ultrametric species tree with genome-scale data using Bayesian methods. To address this, others have recommended restricting analyses of whole genome data to the most informative regions or combining regions with similar underlying topologies (e.g., Jarvis et al. 2015). Given the need to subset WGS data, partitioned comparative data sets are obviously well suited for this general approach (though whole exome data would still likely need to be subsampled). Using a reduced data set, it should be possible to fix the topology to the estimated species tree and use Bayesian approaches to estimate a substitution rate scaled relative to time (and scaled relative to absolute time if fossil calibrations are used). Fixing the tree eliminates one of the most computationally intensive parts of likelihood-based phylogenetic estimation, the recalculation of the likelihood after topological rearrangement, and would facilitate an analysis using many loci for a more accurate calculation of the substitution rate per unit time.
Our iterative approach is not without important limitations. For example, we did not take indel variation into account when iterating our pseudoreferences in order to maintain a consistent coordinate system across many species. Due to the deleterious effect of frameshift mutations, indels tend to be rare in protein coding regions and we chose to ignore them within our study. Others have incorporated indel information within pseudogenomes (Holt et al. 2013; Huang et al. 2013, 2014), though these studies were focused on pairwise contrasts between very closely related genomes and did not use iteration. Extending the pseudoreference approach to efficiently incorporate small-scale indels across a phylogenetic sample remains an important goal for future studies; however, this reference-based framework will always be limiting with respect to larger-scale structural variation (e.g., chromosomal translocations and inversions). Thus, the approach outlined here will be most useful for generating comparative evolutionary genomic data sets of orthologous loci that can be used for phylogenetic and population genomic inferences. The relevance of such reference-based comparative studies should continue to grow as high quality reference genomes become increasingly common across the tree of life.
Mus Phylogenomics
Previous work on Mus systematics either lacked several of the lineages included in this study or was uncertain about the branching order within certain clades. In particular, the relationships between M. spretus/spicilegus/macedonicus and M. caroli/cookii/cervicolor have remained unclear (e.g., Lundrigan et al. 2002; Tucker et al. 2005; Tucker 2008). For example, there was conflicting evidence about the relationships among M. spretus, M. spicilegus, and M. macedonicus and the placement of each relative to the M. musculus species group. Our analyses resolved this group as monophyletic as well as the phylogenetic relationships among all ten species (fig. 3). Though discordance among M. spretus, M. spicilegus, and M. macedonicus is appreciable at the scale of extended targets, a majority of quartets still support the species relationships inferred from all other data sets (fig. 4). Overall, the species phylogeny is relatively well supported even when accounting for among-locus phylogenetic discordance (fig. 4). This information is crucial for effectively designing genomic or functional genetic experiments in house mice that require comparisons to closely related species. Moreover, we note that the phylogenetic relationships that we recovered were robust across individual genealogies estimated at different local scales (fig. 4) and when considering targets from smaller subsets of the whole exome capture (e.g., by chromosome). However, in one case, using extended targets alone for species tree analysis transposed the relationships at the base of the tree on a short branch with low quartet support, presumably reflecting a lack of informative sites in the alignments. These patterns suggest that the same general phylogenetic conclusions would have been apparent using a much smaller set of targeted loci as long as enough phylogenetically informative sites are present to confidently resolve relationships.
We also detected introgression between some mouse lineages. These results were not unexpected and are in strong agreement with other studies investigating whole-genome ancestry among mouse strains. Classic inbred strains derive from early breeding efforts of mouse fanciers (Beck et al. 2000), which included some crosses between species and subspecies (Ferris et al. 1982; Tucker et al. 1992; Ideraabdullah et al. 2004). The mosaic nature of classic inbred strains of mice is well known (Bonhomme et al. 1987; Yalcin et al. 2010; Didion and Pardo-Manuel De Villena 2013), but the extent of introgression has been the subject of some debate. Using a genome-wide SNP genotyping platform, Yang et al. (2011) estimated that M. m. domesticus strain C57BL/6 has a genome composed of ~93% M. m. domesticus and ~7% M. m. musculus. Our estimate of 6.8% is in close agreement with their inferences, indicating that levels of introgression within sequenced genic regions is similar to genome-wide patterns based on SNVs and that variable ascertainment schemes used to populate the Mouse Diversity Genotyping Array (Yang et al. 2009) do not appear to bias overall signatures of gene flow. Additionally, we detected a strong signature of M. m. domesticus introgression into the wild-derived M. m. musculus strain PWK/PhJ, consistent with Yang et al. (2011) (fig. 5, supplementary material table S3, Supplementary Material online). The context of introgression involving this and some other wild-derived strains remains unclear. For PWK/PhJ, this could reflect natural gene flow as this strain was derived from the Czech Republic near the natural hybrid zone between M. m. domesticus and M. m. musculus. However, it has also been suggested that the haplotype structures of introgressed regions in this and a few other wild-derived inbred strains are consistent with very recent gene flow, perhaps occurring in the laboratory subsequent to strain derivation (Yang et al. 2011). Liu et al. (2015) found 0.02-0.8% M. spretus ancestry within M. m. domesticus, and other studies have described natural introgression between these taxa (Orth et al. 2002; Keane et al. 2011; Song et al. 2011). We did not detect appreciable introgression between M. m. domesticus (dom LEWES) and M. spretus, suggesting that the extent of introgression between these species is variable among individuals.
As expected, we did not detect introgression between M. spicilegus or M. macedonicus and the currently allopatric M. spretus. However, we did detect introgression between M. cookii and M. caroli. These species are broadly distributed throughout Eastern and Southeastern Asia and can co-occur in the same localities along with M. cervicolor (Suzuki and Aplin 2012). Introgression between these lineages, therefore, is not unexpected. Interestingly, these findings further support the notion that association with humans may contribute to hybridization between Mus species. Evidence for natural introgression within Mus includes cases of secondary contact following human-associated range expansions of M. spretus and M. musculus and between various M. musculus lineages (Palomo et al. 2009; Jones et al. 2010; Gabriel et al. 2010; Bonhomme et al. 2011; Song et al. 2011; Suzuki et al. 2013). Additionally, while the historical relationships of M. cookii and M. caroli and humans are less clear, both species (along with M. cervicolor) primarily occur in rice fields and nearby areas (Suzuki and Aplin 2012). This suggests that contact between lineages, and subsequent hybridization, may have been facilitated by agricultural development. Collectively, our results suggest that exome capture approaches may provide a powerful tool to reliably investigate finer-scale patterns of introgression among Mus species.
Supplementary Material
Supplementary data are available at Genome Biology and Evolution online.
|
2017-12-12T12:09:13.791Z
|
2017-03-01T00:00:00.000
|
{
"year": 2017,
"sha1": "c8a18201d85a602d0d52c575361b10b43d4fd89d",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/gbe/article-pdf/9/3/726/19270574/evx034.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8a18201d85a602d0d52c575361b10b43d4fd89d",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
30308968
|
pes2o/s2orc
|
v3-fos-license
|
Advances in cardiac rehabilitation: cardiac rehabilitation after transcatheter aortic valve implantation
For more than a decade, transcatheter aortic valve implantation (TAVI) has become a promising treatment modality for patients with severe aortic stenosis and a high surgical risk. To improve exercise capacity and quality of life, cardiac rehabilitation (CR) including physical activity is a well-established treatment for patients after cardiac valve surgery. First studies have shown that CR could also be a helpful tool to maintain independency for activities of daily living and participation in socio-cultural life in patients after TAVI. Strength and balance training are important parts of physical activity in octogenarians and have been investigated in healthy older adults in several studies, but need to be widened and investigated for TAVI patients. Hence, for this older patient group, there are more prospective multicentre studies needed.
Introduction
Due to the demographic change and an aging population, the prevalence of the most frequent valve disease, aortic stenosis (AS), is still rising [1,2].
For patients with severe AS and a prohibitive surgical risk, transcatheter aortic valve implantation (TAVI) has become the gold standard [3][4][5]. First performed in a human in 2002 [6], the procedure is rapidly growing in utilization. A total of 48,353 TAVI procedures have been performed in Germany, with a 20-fold increase since 2008 [7]. Several clinical trials and registries have shown the advantages and the procedural success concerning mid- to long-term outcomes, with an improved survival rate compared to standard therapy [3,[8][9][10].
Cardiac rehabilitation after TAVI
To improve exercise capacity and quality of life and to reduce morbidity, cardiac rehabilitation (CR) including physical activity is a well-established treatment for patients after cardiac valve surgery [11][12][13].
As TAVI becomes more common and numbers are steadily increasing, this relatively new patient group is also becoming more present in CR. Little is known about the efficacy of CR in TAVI patients, as only few studies exist on CR in this high-risk, older patient group, which often has several comorbidities. Russo et al. [14] investigated the efficacy regarding functional capacity, and additionally the safety, of CR in TAVI patients and patients after surgical aortic valve replacement (sAVR). Although the TAVI patients had many comorbidities, no major complications occurred. In the overall patient group, a significant increase in the 6-minute walk test (6MWT) distance was achieved. The authors showed that a supervised, short-term, exercise-based CR program is feasible, safe and effective in these octogenarians. Völler et al. [15] also showed a benefit in functional status in patients after TAVI undergoing CR. Measured by the 6MWT and a bicycle stress test, the patients reached a significantly longer walking distance as well as a significantly higher exercise capacity at discharge from the three-week inpatient structured rehabilitation program, which consisted of individualised physical training, patient education and psychological support [16]. As the valve academic research consortium-2 consensus document (VARC-2) [17] underlines the relevance of frailty as a multicomponent factor, with the criteria slowness, weakness, exhaustion, wasting and malnutrition, poor endurance and inactivity, as well as loss of independence, evidence for these factors is needed. Zanettini et al. [18] showed that most TAVI patients obtained a significant benefit from an in-hospital CR program concerning functional status, quality of life and autonomy, which remained constant during mid-term follow-up.
The first results of CR in TAVI patients show that CR can be a helpful tool to maintain independence in activities of daily living and participation in socio-cultural life, but more prospective multicentre studies including geriatric assessments, such as frailty, are needed to provide sufficient evidence on the improvement and prognosis of this high-risk and multimorbid patient group. Preliminary results of a prospective multicentre registry with a geriatric pre-interventional assessment according to Schoenenberger et al. [19], which was repeated at admission and discharge of inpatient CR, revealed that frailty could be a transient status in the majority of patients. Compared to the pre-interventional measurement, where 47.3% had a positive Frailty-Index,
Final remarks
Other factors, e.g. the importance of fall prevention or strength training, have been investigated in healthy older adults in several studies [21][22][23], but these investigations need to be extended to TAVI patients. Cardiac rehabilitation researchers need to investigate and implement programs and protocols that include physical training combining balance training in dual- and multi-tasking situations with strength training, mainly focusing on the lower extremities.
|
2017-11-06T23:22:38.792Z
|
2016-10-14T00:00:00.000
|
{
"year": 2016,
"sha1": "e65f57de526f68bba7686ea3e64ecb379507a558",
"oa_license": "CCBYNC",
"oa_url": "https://www.monaldi-archives.org/index.php/macd/article/download/758/737",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e65f57de526f68bba7686ea3e64ecb379507a558",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14338855
|
pes2o/s2orc
|
v3-fos-license
|
Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method
Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral method in reentry trajectory optimization.
Introduction
Global strike and space transportation have spurred a great interest in hypersonic glide vehicle for both military and civilian applications [1,2]. The need for an effective and reliable access to space is promoting a rapid development of hypersonic glide vehicle. The progress is witnessed by the experimental success of NASA's scramjet-powered X-43A in 2005, US Air Force's X-51 in 2010, and the recent flight of DARPA's Falcon HTV-2 in 2012.
The reference trajectory is one of the key components of reentry guidance design for hypersonic glide vehicle; therefore, reentry trajectory optimization plays an important role in steering a safe and efficient flight of hypersonic glide vehicle in complex reentry environment, as well as meeting all of the mission requirements [3]. Generally, the hypersonic reentry vehicle enters the atmosphere of the Earth at an altitude of about 100∼120 km. The full flight trajectory ranges from the high orbital reentry interface to the terminal area at 20∼30 km in altitude. The reference trajectory is typically generated offline and preloaded on the hypersonic glide vehicle before launching. It is often required to correct the reentry trajectory for tracking errors during the reentry flight and even to replan a reference trajectory onboard for reaching a new target or aborting. It is a challenging task to optimize a reference trajectory in real-time for hypersonic glide vehicle, since the dynamics model is highly nonlinear along with limited control authority in the reentry flight [4].
The overall objective of this paper is to develop an onboard control strategy for reentry trajectory optimization. The multistage pseudospectral method is proposed to deal with the unexpected situations in reentry flight such as target transition and threat avoidance. In each processing stage, the trajectory estimation and trajectory refining are conducted to generate a specified range of flight trajectory. The full trajectory is determined in the form of optimal trajectory sequences. The main results on analysis of reentry trajectory optimization by using multistage pseudospectral method are presented to validate the feasibility.
Brief Review
Generally, two categories of numerical methods are used to solve the problem of reentry trajectory optimization: the indirect methods and the direct methods [5]. The indirect methods are based on Pontryagin's minimum principle, which results in a Hamiltonian boundary-value problem (HBVP). A high accuracy in the solution is the primary advantage of the indirect methods; however, the HBVP is quite complicated to solve [6]. The direct methods mainly convert the optimal control problem to a nonlinear programming problem (NLP). They are easier to use due to their larger radii of convergence and because the first-order necessary conditions need not be derived [7]. Of the many direct methods, the pseudospectral method has been demonstrated as an effective tool for solving the problem of reentry trajectory optimization. The pseudospectral method is one class of state and control parameterization methods. It was first used in optimal control problems by Reddien [8] in 1979. Recent studies have shown that pseudospectral methods can provide simple structures and faster convergence rates for optimal control problems with smooth and well-behaved solutions [9,10]. It is quite convenient to obtain the solutions of large-scale constrained optimal control problems in a computationally efficient manner.
The successful use of the LPM in reentry trajectory optimization promoted its further development, including a simple way to check the optimality of the direct methods. Gong and Ross [15] proposed convergence results for problems with mixed state and control constraints. The research has shown that, under a set of sufficient conditions, the discretized solution converges to the continuous solution. Further, Williams [16,17] introduced several variants of the standard LPM, providing general pseudospectral approaches to find the CP. Recent studies have focused on the solutions of reentry trajectory optimization with maximum downrange [18,19]. The feasible reentry trajectory obtained by the LPM can reach an accuracy of 10^-3∼10^-5.
(2) GPM: Benson et al. [20] first expatiated on the integral GPM and the differential GPM, explicitly formulating a mapping between the Karush-Kuhn-Tucker (KKT) conditions and the discretized first-order necessary conditions. Huntington [7] improved the GPM by a revised pseudospectral transcription for problems with path constraints and differential dynamic constraints. Huntington et al. [21] also presented a new method to compute the control at boundaries. Later, Jorris et al. [22] addressed the ability of the GPM to optimize the reentry trajectories for the hypersonic glide vehicle with highly accurate solutions. Jorris and Cobb [23] also proposed an up-and-coming numerical technique based on the GPM, capable of generating a three-dimensional reentry trajectory with geographic constraints. Zhang and Chen [24] introduced an easy GPM for optimizing the reentry trajectory of common aero vehicle (CAV) satisfying all of the path constraints and control authority. Tawfiqur et al. [25] and Xie et al. [26] obtained the flight profile using multiphase implementation of the GPM. Yang and Sun [27] improved the GPM to solve the problem of minimum total heat and demonstrated that the approach is not sensitive to the initial value.
(3) RPM and CPM: The RPM and CPM remain the less studied of the pseudospectral methods; however, these two newer pseudospectral methods have migrated quickly from theory to flight application in recent years. One of the key advantages is that the RPM provides an accurate way to construct a complete mapping between the KKT multipliers in the NLP and the costates in the optimal control problem [28,29]. Unlike the LPM, the costate approximation of the RPM converges exponentially. The RPM is comparable with the GPM in accuracy and computational efficiency, while only resulting in a less accurate final costate than the GPM [9,10]. The RPM for solving infinite-horizon nonlinear optimal control problems was developed by Fahroo and Ross [30], followed by research on direct trajectory optimization and costate estimation for finite-horizon problems [31]. Ross and Fahroo [32] and Huntington et al. [9] also demonstrated when the RPM probably fails and when it is appropriate to use it for optimal control problems. Recent studies have focused on the generation of optimal reentry trajectories for the RLV and suborbital launch vehicle (SLV) using the RPM and CPM [12,33,34].
Although pseudospectral methods have achieved numerous advances in direct trajectory optimization, a drawback of the techniques is that the process of global trajectory optimization is time-consuming, such that the reference trajectory has to be obtained before flight. It also cannot deal with the unexpected situations in the glide flight such as the target transition and threat avoidance. For the purpose of fully autonomous and adaptive reentry guidance, it is of great importance to enable the onboard trajectory control strategy, which generates the optimal or suboptimal trajectory with the transition of the flight state.
Fundamentals
where r is the radial distance from the center of the Earth to the reentry vehicle, θ and φ are the longitude and latitude, V is the Earth-relative velocity, ψ is the heading angle, and γ is the flight-path angle. The mass of the vehicle and the bank angle are denoted by m and σ, respectively. The terms D and L are the aerodynamic drag and lift forces; that is, D = (1/2)ρV²S_ref C_D and L = (1/2)ρV²S_ref C_L, where ρ is the atmospheric density, S_ref is the reference area, and C_D and C_L are the drag and lift coefficients.
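The displayed equations of motion themselves are not reproduced in this extract. As a point of reference, the sketch below implements a commonly used non-rotating, spherical-Earth form of the 3-DOF reentry dynamics in terms of the variables just defined (r, θ, φ, V, γ, ψ with lift L, drag D, bank angle σ, and mass m); whether the paper's exact model retains Earth-rotation terms is not shown here, so treat this as an illustrative assumption.

```python
# Hedged sketch of a common non-rotating, spherical-Earth form of the 3-DOF
# reentry equations of motion. State: [r, theta, phi, V, gamma, psi]; the
# controls enter through the lift/drag forces L, D and the bank angle sigma.

import numpy as np

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def reentry_dynamics(state, L, D, sigma, m):
    r, theta, phi, V, gamma, psi = state
    g = MU / r**2
    r_dot     = V * np.sin(gamma)
    theta_dot = V * np.cos(gamma) * np.sin(psi) / (r * np.cos(phi))
    phi_dot   = V * np.cos(gamma) * np.cos(psi) / r
    V_dot     = -D / m - g * np.sin(gamma)
    gamma_dot = L * np.cos(sigma) / (m * V) + (V / r - g / V) * np.cos(gamma)
    psi_dot   = (L * np.sin(sigma) / (m * V * np.cos(gamma))
                 + V / r * np.cos(gamma) * np.sin(psi) * np.tan(phi))
    return np.array([r_dot, theta_dot, phi_dot, V_dot, gamma_dot, psi_dot])
```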
Typical Reentry Constraints. Typical path constraints for reentry trajectory optimization include [36]
where (2) is a constraint on the heating rate at a specified point on the surface of the hypersonic vehicle, scaled by a normalization constant. Constraint (3) is on the dynamic pressure, which is determined by the atmospheric density and the Earth-relative velocity. The total aerodynamic load constraint is described as (4). Note that these are "hard" constraints, meaning that they must remain strictly within the maximum allowable values.
Generally, the terminal conditions depend on different flight missions. Let subscript f denote the terminal state; the terminal constraints for the reentry trajectory are defined as specified values of the terminal state, where h = r − R_e is the altitude above sea level and R_e is the Earth radius. The mark * denotes the specified value at the final time t_f.
In addition, the control u = [α σ] (angle of attack and bank angle) corresponding to the state history should not exceed the system authority in terms of the maximum magnitudes and rates.
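The displayed forms of constraints (2)-(4) are not reproduced in this extract. The sketch below evaluates commonly adopted forms from the reentry literature (stagnation-point heating rate proportional to sqrt(ρ)·V^3.15, dynamic pressure (1/2)ρV², total aerodynamic load sqrt(L²+D²)/(m g)); the constant K_Q and the exact heating-rate exponent are illustrative assumptions, while the numerical limits mirror the values used later in the numerical examples.

```python
# Hedged sketch of typical "hard" path-constraint checks and control-authority
# checks. Functional forms and K_Q are common assumptions, not the paper's
# exact definitions; the limits follow the numerical-results section.

import numpy as np

K_Q   = 1.0e-4      # heating-rate normalization constant (assumed)
G0    = 9.81        # sea-level gravitational acceleration, m/s^2
Q_MAX = 8.0e5       # max heating rate, W/m^2
P_MAX = 5.0e4       # max dynamic pressure, Pa
N_MAX = 2.5         # max total aerodynamic load, g

def path_constraints_ok(rho, V, L, D, m):
    heating  = K_Q * np.sqrt(rho) * V**3.15      # stagnation-point heating rate
    dyn_pres = 0.5 * rho * V**2                  # dynamic pressure
    load     = np.sqrt(L**2 + D**2) / (m * G0)   # total aerodynamic load
    return (heating <= Q_MAX) and (dyn_pres <= P_MAX) and (load <= N_MAX)

def control_within_authority(alpha, sigma, alpha_bounds=(5.0, 30.0),
                             sigma_bounds=(-89.0, 89.0)):
    """Magnitude limits on angle of attack and bank angle (degrees)."""
    return (alpha_bounds[0] <= alpha <= alpha_bounds[1]
            and sigma_bounds[0] <= sigma <= sigma_bounds[1])
```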
Problem Formulation.
Subject to the reentry dynamics, the purpose of reentry trajectory optimization for a hypersonic vehicle is to find the angle of attack and bank angle such that the objective function is a minimum (or a maximum), while satisfying all of the boundary conditions and path constraints. Without loss of generality, the problem of reentry trajectory optimization is considered as an optimal control problem in the continuous Bolza form. Determine the control u(t) and the state x(t) that minimize the objective function J = Φ(x(t_0), t_0, x(t_f), t_f) + ∫ g(x(t), u(t), t) dt, subject to the state dynamics ẋ(t) = f(x(t), u(t), t), the boundary conditions φ(x(t_0), t_0, x(t_f), t_f) = 0, and the path constraints C(x(t), u(t), t) ≤ 0. Note that different objective functions are generally selected according to different flight missions of the hypersonic vehicle, such as the minimum arrival time, minimum total heat load, maximum control margin, and maximum downrange or crossrange. The traditional pseudospectral method for solving the continuous Bolza problem uses a single mesh interval and increases the degree of the polynomial for convergence [3]. It has a simple structure for optimal control problems with smooth and well-behaved solutions; however, several limitations still exist for the problem of reentry trajectory optimization. On the one hand, a fairly large-degree global polynomial is often used to obtain an accurate approximation. Figure 1 shows an example of relative errors between the approximated and real trajectory of 25 min flight time using the GPM. It can be found that, with more than 60 discretization points, the GPM typically results in a small relative error (less than 5%) between the approximated trajectory and the real ODE trajectory. On the other hand, a large number of discretization points can lead to inefficient or even intractable computation due to the large-scale global pseudospectral differentiation matrix [37]. Table 2 shows an example of the computation time for optimizing a reentry trajectory of 25 min flight time using the GPM, which increases exponentially with the number of discretization points.
A simple decrease of the number of discretization points would save the computation time; however, an accurate approximated trajectory is also required. A tradeoff between the computation time and accuracy of solution may not notably improve the performance of reentry trajectory optimization. In order to enable the onboard trajectory control, we add the following three objectives to the method.
(1) The optimization of the specified range of flight trajectory should be completed before the hypersonic vehicle arrives at it.
(2) The discretization points in the nearest interval from the present position should be dense such that the approximated trajectory is accurate enough.
(3) The method is capable of dealing with unexpected situations in actual reentry flight, such as threat avoidance and target transition.
Outline.
In this section, we introduce the multistage pseudospectral method based on the GPM. The scheme is processed similarly to the other pseudospectral methods. The traditional GPM discretizes all the state, control, and constraint condition equations at Legendre-Gauss (LG) nodes; then, it approximates the values using Lagrange interpolating polynomials. The derivatives of each state are obtained by differentiating the global interpolating polynomials, such that the 3DOF equations of motion at the collocation nodes are transcribed into algebraic constraints. In addition, the terminal condition is defined in terms of the initial state and a Gauss quadrature. The integral parts of the objective function are also estimated by the Gauss quadrature. Thus, the continuous Bolza problem is transformed into the NLP. The optimal solution of the NLP can be obtained using the method of sequential quadratic programming (SQP).
For the purpose of onboard trajectory control strategy, the algorithm herein tactically divides the traditional GPM into multiple stages. Two subproblems are typically involved in each processing stage. One is the trajectory estimation using the low-order GPM to determine a rough global optimal trajectory. The other is the trajectory refining in the nearest interval to determine a segment of accurate trajectory. Note that the algorithm generates a specified range of trajectory at a time, ahead of the current position of vehicle. The full reentry trajectory consists of a series of optimal trajectory sequences.
In the following, the principle of GPM is briefly described. The details of the multistage trajectory control strategy and the preceding subproblems are presented thereafter. Finally, some typical geographic constraints in hypersonic glide flight are discussed.
GPM. The description herein is a compilation from the studies of Rao [5] and Betts and White [6]. For the continuous Bolza problem, the GPM first collocates the state and control in the dynamic equation at the LG nodes τ_k (k = 1, 2, ..., N), which are the roots of the Nth-degree Legendre polynomial. With the two boundary nodes, τ_0 = −1 and τ_f = 1, there are N + 2 discretized nodes in total. Thus, the state x(τ) is formed with a basis of N + 1 Lagrange interpolating polynomials as X(τ) = Σ_{i=0}^{N} L_i(τ) X(τ_i). Then, the derivatives of each state at the LG nodes are described in the form of a differential approximation matrix as ẋ(τ_k) ≈ Σ_{i=0}^{N} D_{ki} X(τ_i). The differentiation matrix, D ∈ R^{N×(N+1)}, is determined by D_{ki} = L̇_i(τ_k), which can be written in closed form in terms of the Nth-degree Legendre polynomial P_N(τ) and its derivatives [8]. Thus, the dynamic equations at the collocation nodes are transcribed into algebraic constraints as Σ_{i=0}^{N} D_{ki} X_i − ((t_f − t_0)/2) f(X_k, U_k, τ_k; t_0, t_f) = 0 (13). Since the state at the final time is ignored by the state approximation equation, an additional constraint with the final state is required as X_f = X_0 + ((t_f − t_0)/2) Σ_{k=1}^{N} w_k f(X_k, U_k, τ_k; t_0, t_f) (14), where w_k are the Gauss weights. Finally, the objective function is approximated by a Gauss quadrature as J ≈ Φ(X_0, t_0, X_f, t_f) + ((t_f − t_0)/2) Σ_{k=1}^{N} w_k g(X_k, U_k, τ_k; t_0, t_f) (15), with the boundary conditions φ(X_0, t_0, X_f, t_f) = 0 (16) and the path constraints in discretized form as C(X_k, U_k, τ_k; t_0, t_f) ≤ 0, (k = 1, 2, ..., N) (17). Thus, the solution to the continuous Bolza problem is determined by the solution to the NLP with dynamic constraints (13) and (14), objective function (15), boundary constraints (16), and path constraints (17).
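The basic GPM discretization objects (LG nodes, Gauss weights, and the N × (N + 1) differentiation matrix) can be generated numerically as sketched below. This is an illustrative construction, not the paper's code: the differentiation matrix is assembled from the generic barycentric formula for Lagrange-basis derivatives, which is mathematically equivalent to the closed-form Legendre expression cited above.

```python
# Sketch of the GPM discretization building blocks: Legendre-Gauss nodes and
# weights, plus the N x (N+1) differentiation matrix D with D[k, i] equal to
# the derivative of the i-th Lagrange basis polynomial at LG node tau_k.

import numpy as np

def gpm_discretization(N):
    tau_lg, w = np.polynomial.legendre.leggauss(N)    # LG nodes and Gauss weights
    nodes = np.concatenate(([-1.0], tau_lg))          # interpolation nodes tau_0..tau_N

    # Barycentric weights for the N+1 interpolation nodes.
    bary = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                     for j in range(N + 1)])

    # Full (N+1) x (N+1) differentiation matrix, then drop the tau_0 row so the
    # derivative is only enforced at the N collocation (LG) nodes.
    D_full = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        for j in range(N + 1):
            if k != j:
                D_full[k, j] = (bary[j] / bary[k]) / (nodes[k] - nodes[j])
        D_full[k, k] = -np.sum(D_full[k, :])
    return tau_lg, w, D_full[1:, :]

# Sanity check: differentiating x(tau) = tau^2 should give 2*tau at the LG nodes.
tau, w, D = gpm_discretization(5)
print(np.allclose(D @ np.concatenate(([-1.0], tau))**2, 2 * tau))
```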
Multistage Trajectory Control
Strategy. The multistage trajectory control strategy is based on the traditional GPM. The optimization problem is divided into several stages. In each stage, the algorithm solves two subproblems: the trajectory estimation and trajectory refining. The core idea of the scheme is that the collocation points should be dense enough in the interval nearest to the current position, and, in each processing stage, the method generates a specified range of reentry trajectory with the transition of the flight state. Figure 2 is an illustration that captures the main idea of the scheme. The following steps explain the process of trajectory optimization in detail using the multistage trajectory control strategy.
Step 1. In Stage 1, assuming that the hypersonic vehicle is flying steadily in the current interval, the present objective is to generate a segment of accurate trajectory in the next interval. We define the next interval as t_0 ∼ t_1 and the trajectory segment as Sequence 1. First, the trajectory from t_0 to the terminal condition of the overall trajectory is optimized by using a low-order GPM. The number of LG points in the rough optimization is N_rough1 and the computation time is T_rough1. Note that N_rough1 is typically a small number (less than thirty) such that T_rough1 is short enough. This process is called trajectory estimation. A rough optimal trajectory from t_0 to the terminal condition is obtained in this step.
Step 2. To obtain a trajectory segment with higher accuracy, trajectory refining is required in the nearest interval. As shown in Figure 2, the initial and terminal conditions of the trajectory segment over the interval t_0 ∼ t_1 are typically selected from the rough optimal trajectory in Step 1. Then, the trajectory segment is optimized again by using a low-order GPM. Since Sequence 1 is a small part of the full trajectory, a low-order GPM is competent to generate an accurate trajectory segment. We define the number of LG points in the refined trajectory as N_refine1 and the computation time as T_refine1. Thus, the total computation time of optimization in Stage 1 is T_cpu1 = T_rough1 + T_refine1. Sequence 1 is the optimized trajectory for the actual flight in the next interval t_0 ∼ t_1.
Step 3. When entering Stage 2, the vehicle flies along the trajectory Sequence 1 obtained in Stage 1. The next objective is to generate a segment of accurate trajectory in the interval t_1 ∼ t_2. The processes of trajectory estimation in Step 1 and trajectory refining in Step 2 are repeated such that the trajectory Sequence 2 can be obtained. Similarly, the numbers of LG points in the trajectory estimation and trajectory refining are N_rough2 and N_refine2, respectively. The total computation time of optimization in Stage 2 is T_cpu2 = T_rough2 + T_refine2.
Step 4. Repeat the aforementioned processes until the vehicle gets close to the terminal condition of the full trajectory. The generation of optimal trajectory is then divided into multiple stages, and the final trajectory consists of a series of optimal trajectory sequences as shown in Figure 2.
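Steps 1-4 can be summarized as a simple loop. The sketch below is purely schematic: the solver callables (rough_gpm, refine_gpm), the result object, and the stopping test are placeholders for illustration, not the authors' implementation; the point is the ordering of trajectory estimation and trajectory refining within each stage.

```python
# Schematic sketch of the multistage trajectory control loop (Steps 1-4).
# All function names and data structures here are placeholders.

def multistage_trajectory(x_now, terminal_cond, rough_gpm, refine_gpm, reached,
                          n_rough=20, n_refine=20):
    sequences = []
    while not reached(x_now, terminal_cond):
        # Trajectory estimation: low-order GPM from the current interval
        # boundary all the way to the terminal condition.
        rough = rough_gpm(x_now, terminal_cond, n_points=n_rough)

        # Trajectory refining: re-optimize only the nearest interval, taking its
        # initial/terminal states from the rough solution (placeholder accessor).
        x_start, x_end = rough.segment_boundaries(interval=0)
        refined = refine_gpm(x_start, x_end, n_points=n_refine)

        sequences.append(refined)          # fly this sequence in the next interval
        x_now = x_end                      # state transition to the next stage
    return sequences                       # full trajectory = optimal sequences
```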
As a summary of the proposed algorithm, Figure 3 is a flowchart that captures the main blocks of the algorithm. In addition, some supplementary remarks on the optimization process are given in the following.
(1) Since the optimization algorithm is a multistage method, it typically cannot obtain the globally optimal solution. For onboard trajectory generation purposes, an optimal reentry trajectory is generally obtained in the form of locally optimal trajectory sequences.
(2) The choice of N_rough and N_refine is determined according to different flight missions. For the trajectory of 25 min flight time, N_rough can be selected between twenty and thirty and is properly decreased as the range-to-go is reduced. For the trajectory segment with a 200 sec interval t_i ∼ t_{i+1}, N_refine is typically less than twenty.
(3) The optimization of the next trajectory interval can be completed before the vehicle arrives, since the flight time in the current stage is much longer than the total computation time. For example, assuming that the trajectory interval t_i ∼ t_{i+1} is about 200 seconds, the optimization only needs twenty LG points for trajectory estimation and trajectory refining, respectively. The total computation time is generally less than 40∼50 seconds. The flight time in each trajectory interval is more than 5 times the total computation time of the next trajectory interval.
Geographic Constraints.
Since hypersonic glide vehicles take on various flight missions, some complex constraints are involved in reentry flights. The waypoints and no-fly zones are two typical geographic constraints that should be included in the reentry trajectory optimization. To meet the requirements of flight calibration, payload delivery, reconnaissance tasks, and so on, the hypersonic glide vehicle often needs to fly directly over a series of waypoints in the actual flight. Without loss of generality, the waypoint constraints are described in the form of algebraic constraints as in [38], where the position of each waypoint is given by its longitude and latitude, with the total number of waypoints fixed by the mission. As shown in Figure 4, a simple method to deal with the waypoint constraints is to divide the reentry trajectory into multiple phases by the waypoints. Each waypoint becomes the last collocation node in the previous phase as well as the first collocation node in the next phase. Thus, the waypoint constraint is typically transformed into the terminal condition of the previous phase. During reentry flight missions, additional geopolitically sensitive regions and threat regions must also be considered. The hypersonic glide vehicle must not violate the boundary of these regions. Without loss of generality, the no-fly zone constraints herein are specified as cylinder zones with infinite altitude, since many no-fly zones with other shapes can be replaced by cylinder zones. As shown in Figure 4, the cross sections of no-fly zones with different shapes can conveniently be enclosed within a larger cylinder zone. Thus, we can define the no-fly zones in the form of algebraic constraints as in [38], where the cross section of each no-fly zone is described by its center (longitude and latitude) and its radius, with the total number of no-fly zones determined by the mission. In addition, target transition for the hypersonic vehicle is another mission requirement in actual flight. In a way, the target transition is a special kind of waypoint constraint, since the new target constraint can be treated as a waypoint constraint. As shown in Figure 4, the onboard trajectory control strategy potentially plays an important role in driving the hypersonic vehicle to a new target in the presence of new mission commands or unexpected situations.
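The displayed algebraic forms of the waypoint and no-fly-zone constraints from [38] are not reproduced in this extract. The sketch below checks both conditions in a simplified planar (longitude/latitude) form with an assumed tolerance; a real implementation would typically use great-circle distances on the spherical Earth instead.

```python
# Hedged sketch of the two geographic constraints: waypoint fly-over and
# cylindrical no-fly zones. Planar lon/lat distances and the tolerance are
# simplifying assumptions for illustration.

import math

def passes_waypoint(track, waypoint, tol_deg=0.1):
    """track: list of (lon, lat) along the trajectory; waypoint: (lon, lat)."""
    return any(math.hypot(lon - waypoint[0], lat - waypoint[1]) <= tol_deg
               for lon, lat in track)

def avoids_no_fly_zones(track, zones):
    """zones: list of (center_lon, center_lat, radius_deg) cylinder cross sections."""
    for lon, lat in track:
        for c_lon, c_lat, radius in zones:
            if math.hypot(lon - c_lon, lat - c_lat) < radius:
                return False            # trajectory point lies inside a zone
    return True

track = [(10.0, 40.0), (12.0, 41.0), (14.0, 42.0)]
print(passes_waypoint(track, (12.0, 41.0)),
      avoids_no_fly_zones(track, [(20.0, 45.0, 2.0)]))
```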
Numerical Results
In this section, we present the numerical results of reentry trajectory optimization using the multistage trajectory control strategy. The aerodynamic data and characteristic parameters are based on the CAV-H data [39,40]. The control boundaries and hard constraint limits remain fixed throughout all the simulations as α_max = 30 deg, α_min = 5 deg, σ_max = 89 deg, σ_min = −89 deg, Q̇_max = 8.0 × 10^5 W/m², q_max = 5.0 × 10^4 Pa, and n_max = 2.5. The objective function, chosen to find an optimal trajectory with minimum total heat load, is the time integral of the heating rate. In each stage, the numbers of LG points for trajectory estimation and trajectory refining remain fixed in the following three cases as N_rough = N_refine = 20. The time interval of each refined trajectory sequence is selected around 300 seconds according to the rough trajectory estimation. The optimization solutions are found by Matlab 7.14 on a desktop computer with a 2.1-GHz processor.
Case 1 (Free-Space Flight).
In the numerical example of free-space flight, the initial and terminal conditions of the reentry trajectory are described in Table 3, and the flight time in each stage is listed in Table 4. Table 5 presents the computation time of the trajectory estimation and trajectory refining in each stage. It can be found that the actual flight time in the current trajectory interval is many times more than the total computation time of the next trajectory interval. All of the solutions demonstrate that the multistage trajectory control strategy is feasible for solving the onboard trajectory optimization problem for free-space reentry flight.
Case 2 (Target Transition).
In the numerical example of target transition flight, two geographic constraints are added to the flight mission when the hypersonic glide vehicle enters the second trajectory interval. The parameters of the new target and the waypoint are listed in Table 6. The other conditions are the same as in Case 1, as shown in Table 3. Figure 6 shows the numerical results of the target transition trajectory. Figure 6(a) compares the replanned 3D trajectory with the original 3D trajectory of Case 1. It can be seen that the trajectory intervals are smoothly connected around the replan point. Figure 6(b) shows that the waypoint is passed through directly at the joint of the fourth and fifth optimal trajectory sequences. The onboard trajectory control strategy succeeds in driving the hypersonic glide vehicle to the new target. The flight time and the computation time in each stage are listed in Tables 7 and 8, respectively.
Case 3 (Threat Avoidance).
In the numerical example of the threat avoidance flight, one no-fly zone constraint is added to the flight mission when the hypersonic glide vehicle enters the second trajectory interval. The parameters of the no-fly zone constraint are listed in Table 9. The other conditions are the same as in Case 1. Figure 7 shows the numerical results of the threat avoidance flight. Figures 7(a)-7(b) provide a comparison between the original trajectory and the replanned trajectory. It can be seen that the original trajectory would penetrate the no-fly zone directly, without any avoidance. In contrast, the onboard trajectory control strategy with geographic constraints succeeds in steering the trajectory along the boundary of the no-fly zone. As shown in Figure 7(f), the bank angle changes substantially compared with the original trajectory in order to obtain large lateral maneuverability. Figures 7(i)-7(j) show that the heating rate and dynamic pressure increase earlier once the geographic constraint is added; however, they remain below the maximum limits. Finally, the target of the original mission is still reached. The total flight time is 1620.0 seconds. Tables 10 and 11 list the flight time and the computation time in each stage, respectively. The results demonstrate the feasibility of the multistage GPM for solving reentry flights with threat avoidance missions.
Conclusions
In this paper, a multistage trajectory control strategy based on the pseudospectral method is developed for reentry trajectory optimization. In each processing stage, the algorithm generates a specified range of optimal trajectory ahead of the current position of the hypersonic vehicle, so that the full trajectory consists of a series of optimal trajectory sequences. Moreover, the proposed scheme is capable of dealing with unexpected situations in reentry flight. The performance of the multistage pseudospectral method is demonstrated by numerical examples of free-space flight, target transition flight, and threat avoidance flight.
Analysing the Impact of Resistant Starch Formation in Basmati Rice Products: Exploring Associations with Blood Glucose and Lipid Profiles across Various Cooking and Storage Conditions In Vivo
Common cooking methods were used to prepare basmati rice products, including boiling 1 (boiling by absorption), boiling 2 (boiling in extra amount of water), frying, and pressure cooking. The cooked rice was held at various temperatures and times as follows: it was made fresh (T1), kept at room temperature (20–22 °C) for 24 h (T2), kept at 4 °C for 24 h (T3), and then reheated after being kept at 4 °C for 24 h (T4). The proximate composition, total dietary fibre, resistant starch (RS), and in vitro starch digestion rate of products were examined. The effect of RS on blood glucose and lipid profiles was measured in humans and rats, including a histopathological study of the liver and pancreas in rats. The basmati rice that was prepared via boiling 1 and stored with T3 was found to be low in glycaemic index and glycaemic load, and to be high in resistant starch. Similarly, in rats, the blood glucose level, cholesterol, triglycerides, and LDL were reduced by about 29.7%, 37.9%, 31.3%, and 30.5%, respectively, after the consumption of basmati rice that was prepared via boiling 1 and stored with T3. Awareness should be raised among people about the health benefits of resistant starch consumption and the right way of cooking.
Introduction
The development of new convenient food products and the use of new technologies have both advanced in recent years [1]. The recognition of the connection between a nutritious diet and good health is one of the important reasons for the popularity of functional foods, which provide health benefits beyond basic nutrition and further lessen the risk of chronic diseases. Resistant starch (RS), which has been identified as a type of dietary fibre, is one of the most widely used ingredients in functional foods. It has long been recognised as having a distinctive position among fibres owing to several nutritional features that provide health benefits. Its consumption lowers blood glucose levels after a meal, reduces cholesterol and triglyceride levels in the blood, improves the body's response to insulin, and even reduces fat storage [2]. Animal studies have shown that resistant starch has positive effects on gut health by helping to produce butyrate, which can change the composition and balance of microorganisms in the gut; this is a main reason that resistant starch is becoming more popular as a food ingredient [3].
Resistant starch was discovered in the 1980s as a component that resists digestion by enzymes [4], and it has been defined as 'the sum of starch and products of starch degradation not absorbed in the small intestine of healthy individuals' [5]. Foods contain five main types of resistant starch. Resistant starch 1 (RS1) is a type of starch that the body cannot digest because of the presence of whole, undamaged cell walls in grains, tubers, and seeds. It is used as an ingredient in a vast range of traditional foods due to its stability to heat during normal cooking processes [6]. Because of their crystallinity, RS2 starches are natural, uncooked starch granules that are not greatly affected by hydrolysis; RS2 is found in raw starch granules such as banana, potato, and high-amylose corn starches. RS3 is retrograded starch, which forms when cooked starch is cooled and stored at room temperature or below [7]. RS4 starches include cross-linked starches and starch ethers, which are chemically modified to develop resistance against enzymatic digestion [8]. RS5 is formed in amylose-lipid complexes, which occur when amylose and lipids come into contact; these complexes are created when lipids intercalate into the amylose chains found in numerous plant sources with high amylose concentrations [9].
The mechanism that causes starch chains to recrystallise after the gelatinised paste cools is known as retrogradation. The extent and rate of starch retrogradation are determined by its properties, such as its crystalline and molecular structure, and by storage conditions, such as duration, temperature, and water content [10].
The delayed digestion of resistant starch, which takes more than five to seven hours, reduces insulinaemia and postprandial glucose levels and may lengthen the duration of satiety [11]. Its physiological actions are primarily responsible for its remarkable nutritional activity when compared to other dietary fibres. When fermentable dietary fibre sources are regularly consumed, there is substantial synthesis of short-chain fatty acids (SCFAs), the gut microbiota is modified, and immunological regulation occurs [9]. Resistant starch escapes digestion in the small intestine and passes straight into the large intestine, where it is fermented by the probiotic bacteria of the colon into short-chain fatty acids such as acetate, propionate, and butyrate, as well as gases such as H2, CO2, and CH4 [12]. The SCFAs are absorbed through the intestinal wall and transported to the liver via the portal circulation, with butyrate in particular being utilised by colonocytes [13]. The SCFAs protect the human body from metabolic disorders such as obesity by providing energy to the brain, muscles, and heart; they also increase the production of bile acids, mineral absorption, and leptin production [14]. Consuming resistant starch promotes the growth of beneficial bacteria such as Lactobacillus, Bacteroides, and Bifidobacteria, while inhibiting the growth of harmful bacteria such as Firmicutes [3]. Several mechanisms by which RS lowers cholesterol levels have been described, including the increased excretion of bile in the faeces and the inhibitory influence of propionic acid on cholesterol production. In rats, SCFAs also lessen the synthesis of cholesterol in the gut and liver [15]. Hence, it is suggested that at least 20 g of resistant starch per day should be consumed to obtain the various health benefits. In developing countries, about 30 to 40 g/d of resistant starch is recommended [16], while in India and China, RS consumption of about 10 and 18 g/d, respectively, is recommended, indicating that about 10 to 20% of daily carbohydrate consumption in the form of resistant starch is necessary for the health benefits to be obtained [17].
The cooking and grinding of food products for longer periods of time reduces their resistant starch content. During grinding, the resistant starch content of rice and oats is reduced from 12 to 5% and from 16 to 3%, respectively. Resistant starch formation in food is also affected by water content, pH, heating processes (time, temperature, etc.), preservation methods (freezing, drying, storage, etc.), cooking methods, and the presence of components such as proteins, lipids, minerals, and inhibitors [6]. Resistant starch content has been found to be affected both by dry heating techniques, including baking, frying, microwaving, and autoclaving, and by wet cooking techniques, such as pressure cooking, steaming, and boiling. A high baking temperature and a long baking time have been reported to increase the resistant starch content [18]. The main challenge within the food industry is the manufacture of consumer-friendly foods that contain enough resistant starch to result in a meaningful enhancement of public health. Although the health benefits provided by resistant starch have been well documented, few studies are available regarding the resistant starch content of basmati rice products in India, and the in vivo efficacy of the resistant starch of basmati rice products in improving glucose and lipid profiles has not been studied. Considering the health benefits of resistant starch, the current study was designed to determine how cooking and storage temperatures affect the resistant starch in basmati rice products and how this, in turn, affects the lipid profile and blood glucose levels. Dietary interventions using resistant starch may improve glucose metabolism and insulin sensitivity. Products made from basmati rice were selected because they represent an essential part of the diet of people from North India. We additionally examined how effectively the resistant starch in basmati rice products improved the lipid profiles and blood glucose levels of rats and humans in vivo.
Our objectives are as follows:
1. To determine the effect of cooking and refrigeration on the resistant starch and dietary fibre content of cereal products;
2. To quantify the resistant starch and soluble fibre components in cereal products;
3. To assess the impact of resistant starch and soluble fibre components on postprandial glucose responses and appetite ratings;
4. To determine the effectiveness of resistant starch on lipid profiles and blood glucose levels in rat models.
Procurement and Cooking
The most common variety of basmati rice (PB1121) was obtained from the Department of Plant Breeding and Genetics, Punjab Agricultural University, Ludhiana. This variety is grown in large amounts in the fields of Punjab. It was harvested from the field in the month of October. The grains were cleaned and milled in order to remove the husk. The moisture content of the basmati rice was reduced by up to 12 percent after milling. Four common cooking methods used by North Indians, namely boiling 1 (boiling by absorption; rice boiled in an equal amount of water), boiling 2 (rice boiled in an extra amount of water), frying (fried rice), and pressure cooking, were chosen (Table 1). These four food products were examined under four distinct storage conditions, or four treatments, as follows: they were prepared fresh (T1), kept at room temperature (20-22 °C and 45-50% RH) for 24 h (T2), stored at 4 °C for 24 h (T3), or reheated after being kept at 4 °C for 24 h (T4) (Figure 1). Every rice product was prepared three times during the trial. The samples were dried, stored in zip-lock bags (airtight packages), and used for nutritional analysis after the treatments. The nutritional analyses of the cooked samples were performed for crude protein, crude fat, and ash using standardised techniques. Crude protein was measured using the macro-Kjeldahl technique, and the AOAC (2000) methods were used to measure the amounts of crude fat and ash [19].
Dietary Fibre
A Megazyme total dietary fibre (K-TDFR-200A) kit was used to determine the total dietary fibre. The standard methodology provided by [19] was also utilised to analyse the contents of soluble and insoluble dietary fibre. The following formula was used to determine the dietary fibre:

Dietary fibre (%) = [(R1 + R2)/2 − p − A − B] / [(m1 + m2)/2] × 100,

where R1 = residue weight 1 from m1, R2 = residue weight 2 from m2, m1 = sample weight 1, m2 = sample weight 2, A = ash weight from R1, p = protein weight from R2, and the blank correction B = BR − BP − BA, where BR = blank residue, BP = blank protein from BR1, and BA = blank ash from BR2.
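A minimal sketch of this calculation is given below, assuming the blank correction B = BR − BP − BA described above; the function name and the example weights are illustrative only.

```python
def total_dietary_fibre(R1, R2, m1, m2, A, p, BR, BP, BA):
    """Total dietary fibre (%) from duplicate residue weights, corrected for
    protein, ash, and the reagent blank (all weights in grams)."""
    blank = BR - BP - BA                   # blank residue corrected for its protein and ash
    mean_residue = (R1 + R2) / 2.0
    mean_sample = (m1 + m2) / 2.0
    return (mean_residue - p - A - blank) / mean_sample * 100.0

# Example with hypothetical weights in grams.
print(total_dietary_fibre(R1=0.040, R2=0.042, m1=1.000, m2=1.005,
                          A=0.004, p=0.005, BR=0.006, BP=0.002, BA=0.001))
```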
Total Starch and Resistant Starch
The total and resistant starch contents were determined using a Megazyme K-RSTAR assay kit according to [20]. Total starch was calculated as the sum of non-resistant (solubilised) starch and resistant starch.
In Vitro Starch Digestion Rate
The method described by [21] for determining the in vitro starch digestion rate was used. Briefly, 500 mg of the sample was incubated with 250 U of porcine amylase in 1 mL of artificial saliva (carbonate buffer; Sigma A-3176, Type VI-B; St. Louis, MO, USA) for 15 to 20 s. Then, 5 mL of pepsin (1 mL per mL of 0.02 M aq. HCl; from porcine gastric mucosa; Sigma P-6887) was added and the mixture was incubated in a water bath at 37 °C for 30 min. The digesta was then neutralised by adding 5 mL of 0.02 M aq. sodium hydroxide before the pH was adjusted to 6 with sodium acetate buffer (0.2 M, 25 mL). Two additional reagents were then added: 2 mg of pancreatin per mL of acetate buffer (Sigma P1750, from porcine pancreas) and 5 mL of amyloglucosidase (Sigma A-7420, from Aspergillus niger; 28 U per mL of acetate buffer). The solution was then incubated for 4 h, during which the glucose concentration of the digesta was periodically checked using an Accu-Chek (Roche Diabetes Care, India) glucometer.
Rapidly Digestible Starch and Slowly Digestible Starch
The following formula was used to convert the glucometer readings to the percentage of starch digested:

Starch digested (%) = [G_G × V × 180 × 0.9 / (10^6 × W × ((100 − M)/100) × (S/100))] × 100,

where G_G = reading of the glucometer (mM/L); V = digest volume (mL); 180 = molecular weight of glucose; W = sample weight (g); S = starch content of the sample (g per 100 g dry sample); M = moisture percentage of the sample (g per 100 g sample); and 0.9 = stoichiometric constant for converting glucose concentrations to starch. RDS% is the percentage of starch digested at 15 min, and SDS% is the percentage of starch digested at 120 min minus the percentage digested at 15 min.
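The sketch below implements this conversion under the unit assumptions stated in the definitions (glucose reading in mmol/L, digest volume in mL); the function names and the example reading are hypothetical.

```python
def starch_digested_percent(G_G, V, W, S, M):
    """Percentage of starch digested from a glucometer reading.
    G_G: glucose reading (mmol/L); V: digest volume (mL); W: sample weight (g);
    S: starch content (g per 100 g dry sample); M: moisture (g per 100 g sample)."""
    glucose_g = G_G * (V / 1000.0) * 180.0 / 1000.0   # mmol -> mg (x180) -> g
    starch_from_glucose = 0.9 * glucose_g             # glucose expressed as starch
    starch_in_sample = W * (100.0 - M) / 100.0 * S / 100.0
    return starch_from_glucose / starch_in_sample * 100.0

def rds_sds(percent_at_15, percent_at_120):
    """RDS% is the starch digested at 15 min; SDS% is the additional starch
    digested between 15 and 120 min, as defined above."""
    return percent_at_15, percent_at_120 - percent_at_15

# Illustrative values only (hypothetical reading of 4 mmol/L in a 50 mL digest).
p15 = starch_digested_percent(G_G=4.0, V=50.0, W=0.5, S=78.0, M=10.0)
print(round(p15, 1), "% digested at 15 min")
```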
Impact of Resistant Starch and Soluble Fibre Components on Postprandial Glucose Response via Measuring Glycaemic Index
The glycaemic index was computed by applying Goni's approach [22]. All subjects provided their informed consent for inclusion before they participated in the study. The protocol for the study was approved by the ethics committee of Punjab Agricultural University, Ludhiana. The blood glucose levels were measured in ten healthy participants. In accordance with the Declaration of Helsinki, the participants provided their explicit written consent. The participants were instructed to maintain a 12 h overnight fast. The test food was served in the morning, and participants had fifteen minutes to eat their meals. Blood samples were obtained by finger prick using a glucometer (Dr. Morphine). The blood glucose level was assessed at 0, 15, 30, 45, 60, 90, and 120 min following the ingestion of a portion of cooked cereal product providing 50 g of carbohydrate. In addition, 50 g of glucose was provided to the control group in order to compare how the cooked food affected blood sugar levels. Volunteers could consume 150-300 mL of water, depending on what they ate, throughout the study. The following formula was used to determine the glycaemic index:
GI = (area under the blood glucose curve for the test sample containing 50 g of carbohydrate / area under the curve for 50 g of carbohydrate from the control (glucose)) × 100
The glycaemic load was calculated as

Glycaemic load = (GI × available carbohydrate (g)) / 100
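A minimal sketch of these two calculations is shown below; it uses a simple trapezoidal area under the curve over the 0-120 min sampling points rather than any particular incremental-AUC convention, and the glucose readings are invented for illustration.

```python
import numpy as np

def auc_trapezoid(times_min, glucose_mmol):
    """Area under the blood glucose curve over the sampling times (trapezoidal rule)."""
    return np.trapz(glucose_mmol, times_min)

def glycaemic_index(auc_test, auc_glucose_control):
    """GI = AUC of the test food / AUC of the 50 g glucose control x 100."""
    return auc_test / auc_glucose_control * 100.0

def glycaemic_load(gi, available_carbohydrate_g):
    """GL = GI x available carbohydrate in the portion / 100."""
    return gi * available_carbohydrate_g / 100.0

# Illustrative readings at the sampling points used in the study (0-120 min).
t = np.array([0, 15, 30, 45, 60, 90, 120])
test_food = np.array([4.8, 5.6, 6.4, 6.1, 5.7, 5.2, 4.9])     # mmol/L, hypothetical
glucose_ctrl = np.array([4.8, 6.5, 8.0, 7.6, 6.9, 5.8, 5.0])  # mmol/L, hypothetical

gi = glycaemic_index(auc_trapezoid(t, test_food), auc_trapezoid(t, glucose_ctrl))
print("GI ~", round(gi, 1), "; GL ~", round(glycaemic_load(gi, 50.0), 1))
```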
Effectiveness of Resistant Starch on Blood Glucose Levels and Lipid Profiles in Rats
It was hypothesised that consuming foods high in RS might benefit the treatment of diabetes and hyperlipidaemia, either by increasing insulin secretion, improving insulin sensitivity, or lowering the synthesis of cholesterol. We performed a rat experiment to gather accurate and unbiased data on this question. Rats and humans share a great deal of similarity in terms of biology, genetics, and behavioural traits.
Animal Collection
Thirty-five Wistar albino rats, weighing between 180 and 220 g and aged 2 to 3 months, were acquired from the animal house and breeding centre (AHBC) of the Akal College of Pharmacy and Technical Education, Mastuana Sahib, Sangrur (a registered breeder of CCSEA). The Institutional Animal Ethics Committee granted authorisation for the experiment to be conducted (IAEC no.: GADVASU/2023/1AEC/68/12). The animals were kept in cages, were provided with commercial pellets to eat, and had unlimited access to water.
Induction of Diabetes
The Wistar albino rats received intraperitoneal injections of freshly prepared nicotinamide (NA; 230 mg/kg) in 0.9% NaCl buffered saline. Fifteen minutes later, the rats received an intraperitoneal injection of approximately 60 mg/kg of streptozotocin (STZ). After the injection, the rats were provided with 5% glucose water in order to avoid hypoglycaemia. Blood samples were obtained five days into the induction process, and the levels of lipids, insulin, and blood glucose were assessed. A blood glucose level of more than 200 mg/dL was taken as a sign of diabetes in the rats. The blood glucose levels were monitored during the first, third, and last weeks of the 28-day treatment period. The lipid profile and insulin were measured in the first and concluding weeks of the trial.

2.4.3. Treatment Protocol
1. Group-I (normal control): normal rats were fed a normal diet for a period of 28 days.
2. Group-II (diabetic control): diabetic rats were fed a regular diet for 28 days following the onset of diabetes.
3. Group-III: diabetic rats were fed an FBR (freshly prepared boiled basmati rice) diet for 28 days.
4. Group-IV (treatment group): diabetic rats were fed 4BR (boiled basmati rice stored at 4 °C for 24 h) for 28 days.
5. Group-V (treatment group): diabetic rats were fed RehBR (basmati rice reheated after being stored at 4 °C for 24 h) for 28 days.
Each group contained seven rats. Among the four different types of cooking, only rice prepared via the boiling 1 method was fed to the rats, as a limited number of rats was available to us. Moreover, the boiling 1 method is the most common way of cooking rice among the North Indian population, and rice prepared via this method was found to have a high resistant starch content and a lower glycaemic index than any other cooking method. The rice that was kept at room temperature for the entire day (T2) was also excluded from the treatment, since the product developed microbial growth as a result of the storage.
Histopathological Study of Rat Organs

2.5.1. Preparation of Samples
After the termination of the experiments, the pancreases and livers were collected for histopathological studies. These organs were cut into thin sections (1 mm × 1 mm × 1 mm). The tissue samples were gathered and preserved in 10% formalin. After that, the samples were placed in alcohol at successive concentrations (70%, 80%, 95%, and absolute alcohol) in order to extract the water from the tissue. Subsequently, the material was cleared using xylol and then embedded in paraffin blocks. A microtome was then used to cut the paraffin blocks into 5 µm thick sections. The tissue sections were then left on a hot plate set at 50 °C for 15 min [23].
Haematoxylin-Eosin (HE) Staining
Haematoxylin-eosin staining was carried out in a number of steps. The tissue sample was first deparaffinised by dipping it into xylol I, xylol II, and xylol III for three minutes each. Subsequently, the tissue samples were rehydrated in a descending series of ethanol concentrations of 100%, 95%, 80%, and 70% for two minutes each. After soaking the samples in Harris's haematoxylin for ten minutes, they were washed for ten minutes with tap water. The samples were then submerged in eosin for ten minutes before being serially dehydrated with ethanol concentrations ranging from 70% to 100%. Xylol I, II, and III were used to hold the samples during the clearing procedure. Following the staining procedure, Canada balsam was applied, the sections were covered with a cover glass, and the slides were allowed to dry. All organs were then carefully examined under a microscope (SZX16, Olympus, Ambala, India) at 600× magnification [23,24].
Statistical Analysis
The data were analysed using descriptive statistics, analysis of variance, and post hoc tests with SAS (version 9.4) software. A factorial completely randomised design was used to examine the effect of the cooking method and storage temperature on the proximate composition, total starch, and dietary fibre content of the cereal products. One-way analysis of variance was used to compare the average glycaemic index, glycaemic load, and rat blood glucose levels before, during, and after treatment. A t-test was performed to examine the average effect of the resistant starch-rich products on the insulin and lipid profiles of the rats.
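The original analysis was run in SAS; the sketch below shows an analogous two-factor (cooking method × storage treatment) analysis of variance in Python with statsmodels, using an invented data frame purely to illustrate the factorial layout.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical replicate measurements of resistant starch (%) for a
# cooking-method x storage-treatment factorial design.
data = pd.DataFrame({
    "method":  ["boiling1"] * 4 + ["boiling2"] * 4 + ["pressure"] * 4,
    "storage": ["T1", "T3"] * 6,
    "rs":      [8.8, 12.8, 8.9, 12.7, 6.1, 8.7, 6.0, 8.8, 5.2, 7.6, 5.3, 7.5],
})

# Two-way ANOVA with interaction, analogous to the factorial CRD described above.
model = smf.ols("rs ~ C(method) * C(storage)", data=data).fit()
print(anova_lm(model, typ=2))
```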
Proximate Composition
Basmati rice prepared using the boiling 1 method was found to have the highest crude protein level (9.54%), followed by frying (fried rice, 8.28%), boiling 2 (8.22%), and pressure cooking (7.49%) (Table 2). The protein content of the products held at different temperatures showed significant variation (p ≤ 0.001), with T3 having the greatest content (9.47%), followed by T2 (8.32%), T1 (8.03%), and T4 (7.39%). The crude fat content of fried rice (10.86%) and of rice prepared via the boiling 1 method (2.18%) was found to be higher than that of the other cooked products, the former owing to the additional oil used during the frying process. Among the storage temperatures, T3 had the greatest crude fat content (4.03%), followed by T2 (4%), T4 (3.92%), and T1 (3.78%). The ash content of basmati rice was found to be highest with the boiling 1 method (1.86%), followed by frying (1.85%), boiling 2 (1.75%), and pressure cooking (1.46%). There was no noticeable difference in the ash concentration between the rice products stored at various temperatures.
Dietary Fibre
The soluble dietary fibre content of basmati rice was found to be highest in pressure-cooked rice (0.92%), followed by boiling 2 (0.89%), boiling 1 (0.87%), and frying (0.78%). Among the storage conditions, it was found to be higher in freshly prepared food, i.e., T1 (1.08%), and lower in T3 (0.61%). The insoluble dietary fibre was highest with the boiling 1 method (2.80%), followed by boiling 2 (2.47%), pressure cooking, and frying (2.25%). Among the different storage temperatures, it was found to be higher in T3 (2.88%) and lower in T1 (2.14%). The amount of dietary fibre in the prepared food products was affected by temperature changes during storage. The amount of insoluble and total dietary fibre in basmati rice increased with the length of storage and when stored at low temperatures. The highest amount of total dietary fibre was found in products produced via the boiling 1 method (3.67%) and kept at 4 °C (T3) (3.49%) for 24 h. The highest amount of soluble fibre was found in the fresh food samples (T1) (Table 3).
Resistant Starch, Non-Resistant Starch, and Total Starch
Rice is considered a starchy food product. The total starch content of the rice products was found to range from 73.2 to 79.2%, with the boiling 1 method having the greatest level of total starch and boiling 2 having the lowest level (Figure 2). The RS content of raw basmati rice was found to either increase or decrease after the different cooking methods were applied. Relative to raw basmati rice (2.27% RS), the resistant starch content after cooking was highest with the boiling 1 method (12.81%), followed by frying (11.67%), boiling 2 (8.72%), and pressure cooking (7.57%). The RS content of the rice products varied with storage at different temperatures. Products held with T3 had the highest resistant starch content (11.76%), followed by T2 (10.53%) and T4 (9.64%). Conversely, freshly made products, as in T1 (8.84%), showed the lowest amount of RS after cooking for 15 min at 100 °C. However, the non-resistant starch content of basmati rice was found to be higher with T1 (67.42%) than with T3 (63.45%), and higher with pressure cooking (67.83%) and the boiling 1 method (66.39%), respectively. The results showed that the cooking technique and the storage temperature greatly influence the structure of starch, leading to changes in its physical and nutritional characteristics.
In Vitro Starch Digestion Rate
The rate at which starch is metabolised in vitro is a crucial factor in assessing a food product's ability to elevate a person's blood glucose levels. The different storage temperatures had an impact on the in vitro starch digestion rate of the basmati rice products (boiling 1, boiling 2, frying, and pressure cooking). This rate was followed over 120 min of digestion of the food sample (Figures 3-6). The basmati rice boiled via absorption (boiling 1) and stored with T3 and T2 had slower digestion rates of 39 and 41% at 120 min, compared to the T1 and T4 rice, which had completed digestion at rates of 43 and 41% after 90 min. In basmati rice boiled in an extra amount of water (boiling 2), the rate of starch digestion was lower in the T3 and T2 products, at 38 and 42.2% at 120 min, compared to T1 as well as all the other rice products. The starch digestion rate of fried and pressure-cooked basmati rice was also higher in the T1 and T4 products than in those stored at a low temperature. Therefore, the results indicated that the rice cooked using the boiling 1 method and stored with T3 had the slowest in vitro starch digestion rate at the end of the 120 min of digestion, indicating the presence of a higher amount of RS and dietary fibre in the product.
Impact of Resistant Starch and Soluble Fibre Components on Postprandial Glucose Response
Ten healthy human participants were fed rice products made via the boiling 1, boiling 2, and pressure cooking methods that had a high RS content, and the glycaemic response was tested. Rice prepared by frying was not fed to the subjects, as they rejected it on the grounds that a fried product has a higher amount of oil. Basmati rice products that received treatment 2 were also not offered to the subjects, as they had been stored at room temperature, and this temperature in the summer season led to microbial growth in the product. Moreover, in India and south-east Asia people prefer fresh rice, whereas in China and north-east Asia people prefer stale rice and use different methods to cook food. The lowest glycaemic index was observed after the consumption of basmati rice prepared via the boiling 1 method (45.8%) with T3, followed by boiling 2 (49.5%) and pressure-cooked rice (50.7%). Treatment 3 was determined to be the storage condition giving the lowest glycaemic index, which may help those suffering from diabetes (Figure 9). Similarly, the glycaemic load was found to be high in pressure-cooked rice (53.3%), followed by boiling 1 (47.2%) and boiling 2 (43.03%). However, after storage with T3, the glycaemic load decreased and was found to be lowest for boiling 1 (34.2%) when compared to the other cooking methods (Figure 10).
Effectiveness of Resistant Starch on Blood Glucose Level in Rats
Similar to the human experiments, treatment 2 basmati rice was not considered for the rat trials. Only basmati rice prepared with the boiling 1 method was fed to the rats, due to the limited number available to us and considering the fact that the boiling 1 preparation has a high RS content and is a common cooking method used by the North Indian population. The findings showed that the blood glucose concentrations in the resistant starch diet groups tended to decrease, but the mean values did not differ from those of the control or diabetic control groups. In diabetic rats, the blood glucose levels increased dramatically from (114.3 ± 6.9) mg/100 mL in normal rats to (286.5 ± 16.5) mg/100 mL. Nevertheless, following the administration of the treatment diet, the levels in the treatment groups (G3-G5) were substantially restored towards the normal range. After administering T1 basmati rice, the blood glucose level dropped from (286.3 ± 16.) to (244.2 ± 11.0) in G3. Basmati rice stored with T3 was administered to G4, and it was found that the blood glucose level decreased from (282.1 ± 18.4) to (198.3 ± 18.1), while the blood glucose level decreased from (276.6 ± 30.1) to (232.5 ± 25.8) in G5 (Table 4). The diets provided in G3, G4, and G5 resulted in 14.7%, 29.7%, and 15.9% decreases in the blood glucose levels of the rats, respectively. Therefore, rice cooked with the absorption technique (boiling 1) and stored with T3 resulted in the greatest decline in the blood glucose levels of the experimental rats.
Effect of Resistant Starch on Lipid Profile in Rats
The diabetic group had significantly (p < 0.001) higher levels of triglycerides, total cholesterol, and low-density lipoproteins (LDL) than the normal control and treatment groups. In contrast, the diabetic group had lower levels of high-density lipoprotein (HDL). The maximum reduction in cholesterol was found in G4, with 39.9%, followed by G5 (33.5%) and G3 (32.5%) (Table 5). Similarly, the triglyceride and LDL levels were reduced the most in G4 (31.3% and 30.5%), followed by G5 (21% and 17.7%) and G3 (20.4% and 11.34%), respectively. The HDL levels were increased in G4 (54.7%), followed by G5 (50.3%) and G3 (48.9%) (Table 5). The data show the glucose- and lipid-lowering potential of the T3 diet when fed to the G4 rats. Each value in Table 5 is the mean of six observations (mean ± SD); different small letters within a column indicate significant differences at the 5% level (NS, non-significant).

The blood glucose and lipid levels of the rats changed markedly following the consumption of RS3. The microscopic observations of the livers of the rats are shown in Figure 11. The tissue staining in the control showed the normal architecture of the liver, with normal hepatocytes and a sinusoidal layer. In the diabetic control, owing to the induction of diabetes, the liver showed hepatocyte degeneration and inflammatory cellular infiltration with fatty liver. However, in the treatment groups, the liver showed only mild degeneration of hepatocytes. The rat group (G4) that consumed T3 basmati rice showed less degeneration than the group that consumed basmati rice stored with T1. In the pancreas, on the other hand, the tissue staining in the control group showed the normal architecture of the pancreas, with a normal range of islets of Langerhans and beta cells (Figure 12). The pancreas sections of the diabetic rats depicted cytoplasmic degeneration of the islets of Langerhans. While groups 3 and 4 (treatment groups) showed a reduction in the number of beta cells and degranulated cytoplasm in most cells when compared to the control group, the results were far better than for the diabetic group. Group 4, which was provided with basmati rice stored at 4 °C for 24 h, showed improved architecture of the pancreas, with an increase in the formation of beta cells and regeneration of the islets of Langerhans when compared to group 3.
The results revealed that the consumption of high-RS rice products resulted in better insulin sensitivity and delayed glucose absorption, leading to improvements in blood glucose levels. Similarly, faecal bile excretion and reduced cholesterol synthesis via SCFAs were the main factors controlling the triglyceride and cholesterol levels in the experimental rats.
Discussion
The goal of the current study was to determine the ideal cooking method and storage conditions to raise the resistant starch content of the basmati rice products often consumed in India. It was found that the boiling 1 approach gave a higher protein content than the boiling 2 method. This might have happened because of the effect of soaking and cooking: in the boiling 2 method, cooking denatures protein and soaking solubilises protein, leading to its reduction when the water is drained after cooking [25]. In one study, boiling was found to be superior (4.99%) to other heat treatments, such as microwaving (2.49%) and autoclaving (3.5%), increasing the protein content the most; the kinetic energy involved is mainly responsible for the increase or decrease in the protein content of rice during cooking [26].
Fat is important for increasing the palatability of food. Fried rice had a higher fat content because of the addition of extra oil and greater oil absorption. When compared to frying, other cooking methods such as boiling 1, boiling 2, and pressure cooking were found to give a low fat content due to fat hydrolysis, which occurs in the presence of water and heat. The ash content was found to be higher with the boiling 1 and frying methods than with the boiling 2 and pressure cooking methods because of the higher amounts of protein, fat, and fibre [27]. Boiling 2 had a low ash content; this could be because macro- and microelements were leached out during soaking and draining, while the reduction in the fibre content could be a reason for the lower ash content observed in the pressure-cooked rice sample.
Human digestive enzymes cannot break down dietary fibre. In addition to supporting large intestine function, dietary fibre has a major physiological impact on glucose, lipid, and mineral metabolism [28]. The solubility of dietary fibre in water is what primarily determines whether it is classed as soluble or insoluble. Like a magnet, soluble dietary fibre attracts body water into the digestive tract [29]. Soluble fibre in the stomach creates a water-soluble, gel-like structure that facilitates easy food movement [30]. Water-insoluble fibres include lignin, cellulose, hemicellulose, and resistant starch; these fibres do not dissolve in water. Insoluble fibre helps move faeces out of the body by raising the intestinal pressure [31]. With the exception of frying, the amount of soluble dietary fibre produced by each cooking method was found to be similar in the current study. After cooking, the amount of soluble dietary fibre increased, but this is related to the temperature and cooking time. The primary cause of T1's higher soluble fibre content compared to T3 is the heat treatment of the cereals, which increases the viscosity of the water extract, thereby converting insoluble dietary fibre into a soluble form. The boiling methods were shown to give significant amounts of insoluble dietary fibre. The Maillard reaction is responsible for the rise in the amount of insoluble dietary fibre that occurs during cooking; it is possible that thermal processing caused the production of Maillard reaction products and thus increased the IDF value [32]. An increase in the total dietary fibre content might have occurred due to the apparent increase in cellulose content [33]. The boiling or microwave heating of instant mashed potatoes has been shown to increase the TDF and IDF contents [34]. Another study found that microwave cooking and frying reduced the in vitro digestible starch and increased the resistant starch (RS) and water-insoluble dietary fibre (IDF) [35]. The heat processing of rice increased the dietary fibre content [36,37]. Hence, an increase in the cellulose content can lead to an increased amount of TDF and IDF. Among the storage temperatures, the low temperature of T3 might be the cause of the increase in the amount of resistant starch, resulting in higher levels of insoluble and total dietary fibre.
Resistant starch (RS) is defined as starch that is not digested by hydrolytic enzymes in the small intestine for at least 120 min after digestion, instead passing to the colon where it is fermented by the microbiota [38]. It helps to control cholesterol and blood sugar levels, and essentially acts as an insoluble dietary fibre in the body [39]. Boiling as a cooking method will increase or decrease the resistant starch content, depending on the form of the food [40]. High levels of RS were found with the boiling 1 method because of the presence of heat and water. Starch granules absorb moisture, swell, and gelatinise when food is heated in the presence of water, whereas amylose leaches out of the granules into the solution, producing a larger degree of gelatinisation with longer heating times [41,42]. In the boiling 2 method, starch leached out of the rice and was discarded with the water; hence, a smaller amount of starch was available to convert into resistant starch, probably due to the low amylose content. Fried rice contained high amounts of RS because stir-frying reduces the starch hydrolysis rate and increases the RS content. According to one study, the RS concentration of stir-fried food increased from 7 to 12% [43]. As the cooking time increased, the moisture content and amylose appeared to decrease. The absence of water in the fried samples also inhibits the crystallisation of amylose chains, resulting in a decreased RS content [22,44].
The pressure-cooked basmati rice had a significantly lower level of resistant starch than any other cooking process because of the bursting of the starch cells that occurs at high temperatures and pressures. The mechanism of the high-pressure gelatinisation of starch is different from that of heat-induced gelatinisation. The amorphous region of the starch granules swells on coming into contact with water at high temperature, leading to helix-coil transitions in amylose and amylopectin and a loss of granular structure and crystalline order [45]. Pressure cooking significantly reduced the RS content of fresh jasmine rice when compared to a rice cooker and oven baking [46]. Among the storage temperatures, the RS content was found to be higher in T3 than in T2, T4, and T1. Because of retrogradation, recrystallisation of the starch granules occurred during storage at 4 °C. The rate and extent of retrogradation are determined by starch properties, such as the molecular or crystalline structure, and storage conditions, such as time, temperature, and water content [10].
Since retrogradation at low temperatures produces RS, insoluble dietary fibre, a high amylose concentration, and a longer amylopectin chain, it was discovered that the in vitro starch digestion rate of basmati rice products was lowest in T3 products, followed by T2, T4, and T1. A high amylose content reduced the starch digestibility because of the presence of large numbers of hydrogen bonds. The starch digestibility was also found to be low in fried rice. This might also be due to an interaction between amylose and fatty acids, resulting in complex formations on the surface of starch granules, and these components may act as physical barriers to digestion [47].
The starch digestion rate is calculated by taking the glucose measurements up to 180 min from the start of digestion, as rapidly digestible starch is digested at 20 min, slowly digestible starch is digested at 120 min, and complete starch digestion will have occurred after 180 min [48]. Rapidly digestible starch (RDS) is converted into glucose within 20 min of food intake, while slowly digestible starch (SDS) takes 20 to 120 min to be converted into glucose. Rapidly digestible starches are thought to cause an abrupt rise in blood glucose levels following consumption, whereas slowly digestible starches are completely digested in the small intestine, resulting in a slower rise in blood glucose. Slowly digestible starches are linked to satiety, a stable glucose metabolism, and diabetes management [49].
The RDS was found to be significantly high in pressure-cooked basmati rice, followed by boiling 2, boiling 1, and fried rice. RDS was found to be high with T1, followed by T4, T2, and T3. SDS was found to be high with the boiling 1 method, followed by frying, boiling 2, and pressure cooking. The low digestibility could be caused by variations in the morphological and physical characteristics of the starches in cereals. Cooking cereal starches causes the starch granules to gelatinise and undergo physical and chemical disruption. In turn, the amount of water present, the cooking time, and the temperature all affect the extent to which gelatinisation occurs [50]. Protein structures surrounding starch granules may limit starch gelatinisation and granule swelling, lessening the granules' vulnerability to enzymatic attack; this may be responsible for a certain amount of the low digestibility. The type of starch has a direct effect on the rate at which cereal starches can be digested. The digestibility of starch decreases with increasing amylose concentration. The greater surface area of amylopectin and the insoluble aggregates are considered to be the cause of the apparent variations in digestibility between amylose and amylopectin. These factors also probably reduce the susceptibility of cleavage sites to enzyme attack [51]. Therefore, the type and source of starch in cereals may affect how easily they are digested.
Similarly, the amylose content in basmati rice was found to be high using the boiling 1 method when stored with T3, which could be because of how the amylose aligns and associates with itself during cooking and cooling, a process known as retrogradation [52]. The cooking process decreased the amylose content, probably due to its leaching into the water, thus resulting in a low amylose content following the boiling 2 method [53].
The glycaemic index indicates the rate at which a particular food raises blood sugar. Selecting low-glycaemic index foods has been shown to improve a healthy person's post-prandial glucose and lipid metabolism. The glycaemic load (GL) of an average portion of food is determined by the amount of available carbohydrate in that serving and the GI of the food. The higher the GL, the greater the expected elevation in blood glucose and the insulinogenic effect of the food. The consumption of a high-GL diet over a longer period of time leads to an increased risk of type 2 diabetes and coronary heart disease [54].
The glycaemic index and glycaemic load of the human subjects fed basmati rice prepared with the boiling 1 method were found to be low with T3, followed by T2, which can be attributed to the higher amount of RS and dietary fibre found in these products. The result of the present study is supported by a randomised, single-blind crossover study in which 15 healthy participants were fed white rice cooked and stored at different temperatures; the findings showed that the white rice cooked and stored at 4 °C for 24 h had the highest RS content and also led to a reduced glycaemic response when compared to the control rice [55]. Following the administration of the treatment diet, the blood glucose levels of the treatment groups (G3-G5) in the rat study were substantially restored towards a near-normal range. G4 showed a significant fall in blood glucose levels (22.5%). The rice products that were kept with T3 were rich in resistant starch, which causes the sugar to be released into the bloodstream gradually and decreases the absorption of the sugar. The control of glucose output is significantly affected by the slow digestion of resistant starch (RS) [56]. Ordinary starch is digested almost immediately, but the metabolism of resistant starch can take anywhere from 5 to 7 h after a meal. This slow digestion raises the blood sugar level only gradually, reduces postprandial blood sugar and insulinaemia, and provides satiety for a longer time. Because insoluble dietary fibre absorbs sugar molecules, it inhibits the passage of glucose in the small intestine [57]. In the human digestive tract, fibre slows the rise in blood glucose levels and decreases the absorption of glucose. Also, when fibre is hydrated, it acts more effectively to reduce the blood glucose level [58].
Similar results were reported in a study where rats fed resistant starch from rice (12.9 ± 3.2 mmol/L) exhibited a decreasing tendency in blood glucose concentrations when compared to rats fed resistant starch from corn (15.6 ± 8.1 mmol/L) [59]. The relative body weight in the treatment groups and the diabetic group (G2) was significantly lower (p < 0.001) than that of the normal control (G1) group; this was due to protein breakdown during diabetes. In the diabetic control rats (G2), the plasma insulin levels were found to be lower than in the normal control group (G1). However, the level of plasma insulin was found to be elevated in the treatment groups (G3-G5) when compared to the diabetic control group (G2) after the treatment diet was provided. STZ (streptozotocin) induces diabetes in rats via the preferential accumulation of the chemical in β-cells following entry through the GLUT2 (glucose transporter) receptor; its structural similarity with glucose allows STZ to bind to this receptor. Under this condition, the destruction of β-cells and the induction of the hyperglycaemic state are associated with inflammatory infiltrates, including lymphocytes, in the pancreatic islets. The destruction of the β-cells of the islets of Langerhans results in a massive reduction of insulin release. However, in the treatment groups, the plasma insulin levels increased significantly. This could happen because of the release of insulin in its bound form from the beta cells of the islets of Langerhans [60].
After 28 days of treatment, the cholesterol level was reduced significantly in all treatment groups, but not in the diabetic control (G2). Insulin inhibits the lipolysis process because it is important for synthesising fatty acids and triglycerides in fat tissues. In the diabetic group, due to the reduction in insulin levels, the bodies of the rats started to use fat for the production of energy through the lipolysis mechanism [6], resulting in the increased production of acetyl-CoA, which further increased the levels of ketone bodies and cholesterol [61].
The diabetic group exhibited lower levels of high-density lipoproteins (HDLs) and higher levels of total cholesterol, triglycerides, and LDL in comparison to the normal control and treatment groups. The maximum reduction in triglyceride was found in G4 when compared to the other groups. These results may be related to soluble fibre consumption, which is considered an important dietary factor in the prevention of cardiovascular diseases in many epidemiological studies [62].
Interestingly, the results indicate that, at the end of the treatment, all groups of diabetic rats treated with basmati rice products had lower cholesterol, triglyceride, and LDL levels and higher HDL levels. The amount of resistant starch in the rice of each diet influenced this outcome: the diets with the highest concentrations of resistant starch produced the greatest reductions in LDL, triglyceride, and total cholesterol levels. Diabetes is abnormally associated with increased endogenous glucose and fat synthesis. Through the regulation of glycolysis and gluconeogenesis, the liver plays a critical role in preserving the balance between glucose utilisation and storage [63].
Microscopic observation of the liver cells showed that the rats that consumed T3 basmati rice exhibited less degeneration than the T1 group, which could be due to the higher amount of resistant starch in T3 basmati rice conferring better insulin sensitivity and an improved lipolysis mechanism. Retrograded starch has also been suggested to reduce blood LDL and cholesterol concentrations through a number of pathways, such as increased faecal excretion of bile acids. In addition, through the short-chain fatty acids produced in the gut and delivered to the liver of rats, soluble dietary fibres regulate cholesterol levels by inhibiting hepatic cholesterol production via large-intestine fermentation [64].
The histopathological study of the pancreases from group 4 showed improved pancreatic architecture, with increased formation of beta cells and regeneration of the islets of Langerhans compared to group 3, leading to improved blood glucose levels. High levels of RS, which behave as an insoluble fibre and are fermented by the intestinal microbiota, are responsible for this: fermentation releases carbon dioxide, methane, hydrogen, and metabolically active short-chain fatty acids, which influence insulin secretion and hepatic gluconeogenesis [65]. According to a recent study, RS added to a high-fat diet can regulate the expression of genes related to lipid and hepatic glucose metabolism pathways, as well as hyperglycaemia and hyperlipidaemia, in diabetic rats. Resistant starch and insoluble dietary fibre also help to reduce lipolysis and increase GLP-1, peptide YY, and insulin secretion [66]. GLP-1 (glucagon-like peptide 1) stimulates insulin secretion and reduces glucagon secretion, while peptide YY helps to reduce appetite and increase the feeling of fullness, which in turn helps to control blood glucose and lipid levels.
Conclusions
The quantity of resistant starch in basmati rice increased when cooking methods such as boiling 1, boiling 2, and frying were used, whereas it dropped with pressure cooking. Freshly prepared (T1) and reheated (T4) products contained less resistant starch, while products held at room temperature for 24 h (T2) and at 4 °C for 24 h (T3) contained more. Products kept at 4 °C (T3) exhibited high levels of amylose, slowly digestible starch, and insoluble dietary fibre. Eating this rice lowered the blood glucose and cholesterol levels of the trial rats, and the glycaemic index and glycaemic load in the human participants also fell. The higher RS of the food product produced a slower rise in blood glucose and cholesterol levels and stimulated the regeneration of beta cells in the pancreas and hepatocytes in the liver. Given the wide range of starchy preparations eaten in India, modifying the cooking method and storage temperature of starchy foods may have multiple positive health effects. To help manage blood glucose levels, awareness of the nutritional and health benefits of consuming resistant starch should be raised, and people should also be educated on food preparation and storage techniques that boost the amount of resistant starch at the domestic level.
Figure 2. Effect of different cooking methods and storage temperatures on the resistant starch content of basmati rice products (g/100 g). Data are means ± SD. Different letters over the error bars denote that the means differed significantly (p < 0.05).
Figure 3. Effect of different storage temperatures on the in vitro starch digestion rate of basmati rice boiled via absorption.
Figure 4. Effect of different storage temperatures on the in vitro starch digestion rate of basmati rice boiled in an extra amount of water.
Figure 5. Effect of different storage temperatures on the in vitro starch digestion rate of pressure cooked basmati rice.
Figure 6. Effect of different storage temperatures on the in vitro starch digestion rate of fried basmati rice.
Figure 7. Effect of different cooking methods and storage temperatures on rapidly digestible starch in basmati rice products. Data are means ± SD. Different letters over the error bars denote that the means differed significantly (p < 0.05).
Figure 8. Effect of different cooking methods and storage temperatures on slowly digestible starch in basmati rice products. Data are means ± SD. Different letters over the error bars denote that the means differed significantly (p < 0.05).
Figure 9. Effect of different cooking methods and storage temperatures on the glycaemic indexes of basmati rice products.
Figure 10. Effect of different cooking methods and storage temperatures on the glycaemic loads of basmati rice products.
Figure 11. Histopathology study of the livers of rats. (a) Control; (b) diabetic control; (c) freshly prepared basmati rice-fed diet group; (d) T3 basmati rice-fed diet group. Black arrow shows hepatocytes; yellow arrow shows the sinusoidal layer.
Figure 12. Histopathology study of the pancreas of rats. (a) Control; (b) diabetic control; (c) T1 basmati rice-fed diet group; (d) T3 basmati rice-fed diet group. Black arrows show beta cells; yellow arrows show islets of Langerhans.
Table 1. Preparation of commonly consumed basmati rice products in India. Pressure cooking (pressure cooked rice): basmati rice to water (2:4 w/v); the basmati rice was pressure cooked separately in a pressure cooker (15 psi) for 10 min.
Table 2. Effects of different cooking methods and storage temperatures on the crude protein, crude fat, and ash content of basmati rice products (g/100 g).
Table 3. Effect of different cooking methods and storage temperatures on the dietary fibre (soluble, insoluble, and total) content of basmati rice products (g/100 g). Values are mean ± SD; mean values with different superscripts are significantly (p ≤ 0.05) different. T1: freshly prepared within 1 h; T2: stored at room temperature (20-22 °C) for 24 h; T3: kept at 4 °C for 24 h; T4: reheated after storage at 4 °C for 24 h. Boiling 1: rice boiled by absorption; boiling 2: rice boiled in an extra amount of water; frying: fried rice; pressure cooking: rice cooked in a pressure cooker.
Table 4. Effectiveness of resistant starch on blood glucose and plasma insulin levels in rats.
Table 5. Effect of resistant starch on the lipid profile of rats.
Dosimetric Evaluation of Intensity Modulated Radiotherapy and Three-Dimensional Conformal Radiotherapy Treatment Plans for Prostate Cancer
METHODS Twenty treatment plans for ten patients were created using 3D-CRT with four fields at gantry angles of 0°, 90°, 180°, and 270°, and IMRT with five fields at gantry angles of 0°, 72°, 144°, 216°, and 288°, on an Eclipse Treatment Planning System (version 15.6). The volume of the reference isodose, target volume, maximum isodose in the target, reference isodose, dose at 95% of the planning target volume (PTV), dose at 2%, 5%, and 98% of the PTV, and prescribed dose were collected from the dose-volume histogram of each plan. The conformity index and homogeneity index (HI) were then calculated. The doses to the organs at risk were also collected and evaluated.
Introduction
Till date, cancer remains among the most feared diseases, with a high mortality rate. Consistent with this, an estimated 1,762,450 new cancers were diagnosed in 2019 in the United States, with a total of 606,880 deaths recorded.
[1] In Nigeria, according to the International Agency for Research on Cancer, as of 2018 the total number of new cases was 115,950, with 70,327 deaths recorded. [2] Prostate cancer is the second leading cause of cancer in men. In 2019, a total of 174,650 men were diagnosed with prostate cancer in the United States (cancer.net) and, in Nigeria, an estimated hospital prevalence of between 127 and 185.5 per 100,000 males admitted to hospitals has been reported. [3] Prostate cancer can be treated by surgery, radiation therapy, chemotherapy, cryotherapy, hormone therapy, immunotherapy, and newer technological developments. [4] Radiation therapy has a dynamic role in the treatment of prostate cancer. It involves the use of various treatment plans (TPs) such as the 2D technique, three-dimensional conformal radiation therapy (3D-CRT), and intensity modulated radiotherapy (IMRT). The 2D technique involves manual calculations and does not spare organs at risk (OAR). 3D-CRT conforms the radiation dose to the target and was historically the best TP for prostate cancer, but it results in little sparing of OAR. With IMRT, the reduction of radiation effects on normal tissues has improved, and research has shown that IMRT has more advantages than 3D-CRT in the treatment of prostate cancer. In this study, we investigated the use of the homogeneity index (HI) and conformity index (CI) in the evaluation of 3D-CRT and IMRT plans for optimal treatment delivery.
TPs
Two plans were generated for each patient using the Eclipse TP system version 15.6, with an energy of 6 MV photons. The prescribed dose was as follows: 76 Gy for three cases; 79 Gy for six cases; and 69 Gy for the patient planned in two phases, as shown in Table 1. The differing prescriptions were due to the absence of a uniform prescription model in our center; the oncologist's prescription depended on the cancer stage. Each 3D-CRT plan was produced using four beams (box technique) at gantry angles of 0°, 90°, 180°, and 270°. Multi-leaf collimators (MLC 120 model) were used at 0.5 cm away from the PTV to reduce the dose to OAR and for more conformity of the 3D-CRT plans. The IMRT plans were done using five beams at gantry angles of 0°, 72°, 144°, 216°, and 288°. The intensity optimization for each of the beam portals for all IMRT plans was achieved by setting dose constraints and priorities for the PTV and OAR until the constraints were met, following the International Commission on Radiation Units and Measurements (ICRU) protocol for dose prescription, with a minimum coverage dose of 95% and a maximum accepted dose of 107%. [6] The doses were calculated using the Anisotropic Analytical Algorithm in the Eclipse TP system, with the treatment table or couch not included in the calculation volume.
When creating the IMRT plan for a LINAC equipped with an MLC, there were two delivery options: stepand-shoot and sliding window. For this study, the sliding window was adopted for all the IMRT plans.
The Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) analysis and the Radiation Therapy Oncology Group (RTOG) Report 62 (a review of Report 50) guideline were adopted for the dose constraints to the OAR. The guideline stipulates that not more than 35% of the rectum should receive 60 Gy (V60 Gy <35%) and not more than 20% of the rectum should receive 70 Gy (V70 Gy <20%). Also, for the bladder, not more than 15% of the bladder should receive 80 Gy (V80 Gy <15%), not more than 25% should receive 75 Gy (V75 Gy <25%), not more than 35% should receive 70 Gy (V70 Gy <35%), and not more than 50% should receive 60 Gy (V60 Gy <50%). For the femoral heads, not more than 5% of the femoral heads should receive 50 Gy (V50 Gy <5%). [5,7-9]
Patients Selection
Ten patients with malignant neoplasm of prostate that received radiotherapy with IMRT on a clinical linear accelerator (LINAC), Vitalbeam model (Varian Medical System, Palo Alto, CA, USA) in our department from June 2019 to January 2020 were analyzed, retrospectively.
Simulation and Contouring
Each patient was positioned supine on a whole-body board (Radon Medical Equipment, Yenimahalle/Ankara) without immobilization and was simulated with a 16-slice computed tomography (CT) simulator (Optima 580; GE Healthcare, Waukesha, WI, USA). The plans were done sequentially in three phases. The clinical target volume (CTV) for one of the cases was contoured in two phases, and nine cases were contoured in three phases. Each planning target volume (PTV) was contoured with a 0.5 cm margin from each CTV. Phase 1 (PH 1) contains the prostate, seminal vesicle, and lymph nodes. Phase 2 (PH 2) contains the prostate and seminal vesicle only, while phase 3 (PH 3) contains the prostate only. However, the case with two phases had phase 1 (the prostate + seminal vesicle + lymph nodes) and phase 2 (the prostate only) (Table 1). The OARs, which are the rectum, bladder, and femoral heads (left and right), were also contoured according to the Radiation Therapy Oncology Group (RTOG) atlas for contouring of normal tissue [5] using the Eclipse TP system version 15.6. Table 1 shows the prescribed doses for the ten patients for both 3D-CRT and IMRT.

Dose Volume Analysis

The plan sums for the different plans were generated and data were collected from their dose-volume histograms (DVH). From the DVH, the dose in Gy reaching the following volumes of the PTV was recorded: V2%, V5%, V50%, V95%, and V98%. Also, the maximum isodose in the target (Imax) and the reference isodose reaching V95% of the PTV were recorded.
CI and HI
CI and HI were calculated and recorded for each TP using the following equations [10,11]:

CI = V_RI / TV (1)

where V_RI is the volume of the target receiving 95% of the prescribed dose and TV is the total volume of the target.

HI = I_max / RI (2)

where I_max is the maximum dose in the target and RI is the reference isodose.

HI = D5% / D95% (3)

where D95% is the dose at 95% of the planning target volume and D5% is the dose at 5% of the PTV. Using the conformity and homogeneity indices calculated according to the RTOG protocol, we evaluated which TP conforms more closely to the PTV and is more homogeneous. The RTOG protocol defines the ranges of conformity and homogeneity as follows:
• If the CI value is between 1 and 2, the treatment is in accordance with the protocol.
• If the CI value is between 2 and 2.5 or between 0.9 and 1, there is a minor deviation from the protocol.
• If the CI value is >2.5 or <0.9, it is considered a severe deviation from the protocol.
For homogeneity, the ideal value of HI is 1, and it increases as the plan becomes less homogeneous; values closer to 1 are more homogeneous than values further from 1. The mean doses reaching the rectum, bladder, and right and left femoral heads were also analyzed for each plan.
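To make the index definitions concrete, the sketch below computes the conformity index, the two homogeneity ratios, and the RTOG conformity classification from DVH-derived quantities. The formulas follow the reconstruction of equations (1)-(3) given above, and the numeric inputs are placeholders rather than data from this study, so treat it as an illustration and not the planning system's implementation.

```python
def conformity_index(v_ri_cc: float, target_volume_cc: float) -> float:
    # Equation (1): CI = V_RI / TV, where V_RI is the target volume covered
    # by the reference (95%) isodose and TV is the total target volume.
    return v_ri_cc / target_volume_cc

def homogeneity_index_rtog(i_max_gy: float, reference_isodose_gy: float) -> float:
    # Equation (2): HI = I_max / RI (maximum target dose over reference isodose).
    return i_max_gy / reference_isodose_gy

def homogeneity_index_d5_d95(d5_gy: float, d95_gy: float) -> float:
    # Equation (3): HI = D5% / D95% (near-maximum over near-minimum PTV dose).
    return d5_gy / d95_gy

def classify_ci(ci: float) -> str:
    # RTOG conformity ranges quoted in the text above.
    if 1.0 <= ci <= 2.0:
        return "per protocol"
    if 2.0 < ci <= 2.5 or 0.9 <= ci < 1.0:
        return "minor deviation"
    return "severe deviation"

# Placeholder DVH values for one hypothetical plan (not patient data).
ci = conformity_index(v_ri_cc=118.0, target_volume_cc=120.0)
h1 = homogeneity_index_rtog(i_max_gy=83.4, reference_isodose_gy=79.0)
h2 = homogeneity_index_d5_d95(d5_gy=81.5, d95_gy=77.8)
print(f"CI = {ci:.2f} ({classify_ci(ci)}), HI(2) = {h1:.3f}, HI(3) = {h2:.3f}")
```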
Statistical Analysis
A two-tailed pair t-test was used to compare the mean of the different TPs at critical significant value of 5%.
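For illustration, a paired two-tailed t-test on per-patient index values could be run as in the sketch below; the two arrays are made-up placeholders rather than the study's data, and the comparison is shown only to clarify the procedure described above.

```python
from scipy import stats

# Hypothetical per-patient homogeneity indices for the two plan types
# (same ten patients, so the samples are paired).
hi_3dcrt = [1.07, 1.09, 1.10, 1.08, 1.12, 1.17, 1.07, 1.09, 1.08, 1.10]
hi_imrt = [1.06, 1.07, 1.08, 1.06, 1.09, 1.10, 1.06, 1.07, 1.07, 1.08]

t_stat, p_value = stats.ttest_rel(hi_3dcrt, hi_imrt)  # two-tailed paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 5%: {p_value < 0.05}")
```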
Results
In this study, the dose distribution of the IMRT plan is more closely aligned to the PTV than that of the 3D-CRT plan (as shown in Figure 1), which, in turn, reduces the dose to OAR. The dose coverage for both the 3D-CRT and IMRT TPs met the required criterion of at least 95% of the prescribed dose to the PTV. The dose maximum was in the range of 105.5%-108% for the 3D-CRT plans; one of the plans reached 108%, which was due to the large size of the PTV. However, the dose maximum for IMRT was in the range of 104.5%-106.7%.
CI
The CI for each TP was calculated using equation 1. Figure 2 shows the dose coverage from the DVH. Table 2 shows the comparison between the CI of 3D-CRT and IMRT.
OAR
The dose to the OAR of each patient planned using 3D-CRT was compared to that of IMRT, as shown in the DVH in Figure 3. The DVH shows the dose to the rectum (brown), bladder (purple), left femoral head (blue), and right femoral head (sky-blue) for both TPs. Table 2 shows the mean doses to the OAR for 3D-CRT and IMRT.
Figure 2 shows the DVHs of patients planned with the 3D-CRT (left) and IMRT (right) treatment techniques, comparing their PTVs. The square box shows the PTV coverage of the TP done using the IMRT technique, while the triangular shape shows the PTV coverage of the TP done using the 3D-CRT technique.
HI
Results from the HI (H1) for the ten patients planned with 3D-CRT were in the range of 1.069-1.170, with an average of 1.088±0.03. For IMRT, HI values were in the range of 1.056-1.102, with an average of 1.072±0.002. Also, HI (H2) values for 3D-CRT were in the range of 1.029-1.128, with an average of 1.062±0.04, whereas for IMRT, HI (H2) values were in the range of 1.021-1.069, with an average of 1.044±0.02. Although several studies have evaluated 3D-CRT and IMRT plans for a single phase, this study paid more attention to plans of three phases, and the evaluation was done using their plan sum. Moreover, studies evaluating one and two phases were compared with our results. These studies adopted the HI defined by Wu et al. [12] In this study, the HI adopted was defined by the RTOG protocol (defined as H1) and by Yoon et al. (defined as H2), as stated in the materials and methods, and the plans were compared using a similar standard. From the results of this study (Table 3), HI (H1) for IMRT showed better homogeneity when compared to that of 3D-CRT (p-value=0.03). This result was close to that of H2 (Table 3); however, there was no statistically significant difference between the two techniques (p-value=0.16). By relating the results obtained from both protocols, it was found that the HI formula defined by Yoon et al. gave values closer to 1 than the RTOG protocol, an HI closer to 1 being the baseline for good homogeneity according to both protocols. Also, in this study, the CI results (Table 3) show that the conformity of IMRT (0.99) was better than that of the 3D-CRT plans (0.91), such that it had a conformity closer to 1 than that of 3D-CRT, although there was no statistically significant difference between the means of the two plans (p-value=0.23). Compared to the study of Crowe et al., [13] the CI of this study was closer to 1 when using the RTOG protocol. This was consistent with the studies by Cristofaro et al. [14] and Jamal et al. [15] The result of this study contradicts that of Kinhikar et al., [16] since their CIs were 0.97±0.02 and 0.98±0.02 for IMRT and 3D-CRT, respectively, thus indicating a better conformity in 3D-CRT than in IMRT; this may be due to the level of experience of the IMRT planner.
Discussion
In the treatment of cancer, sparing of OAR is one of the goals of radiotherapy, and this was considered in this study. Both techniques were evaluated for sparing of OAR using the plan sum of the three phases. This study was aimed at comparing 3D-CRT and IMRT TPs in the treatment of neoplasm of the prostate by comparing their HI, CI, and dose to OAR. The results of this study (Table 2) show that IMRT is much better than 3D-CRT in terms of sparing of OAR. For 3D-CRT, it was observed that it was difficult to meet the RTOG dose constraint protocol for the rectum, since the dose reaching 50% of the volume of the rectum was more than 50 Gy in most cases (Table 2); however, most of the plans met the QUANTEC protocol of 20% of the volume receiving 70 Gy (Table 2). For IMRT, the dose to OAR was within the tolerance set by RTOG and QUANTEC (Table 2). Table 2 shows the comparison between the OAR doses of 3D-CRT and those of IMRT. V20 (Gy) and V50 (Gy) represent the dose to 20% and 50% of the volume of the OAR, respectively (QUANTEC). There was a 21% reduction in dose to the 20% volume of the rectum and a 27% reduction in dose to the 50% volume of the rectum in IMRT relative to the 3D-CRT plans. A 20% reduction in dose to the 20% volume of the bladder and a 40% reduction in the 50% volume of the bladder in IMRT were also observed. Moreover, in the IMRT plans there was a 7.2% reduction in the 20% volume of the right femoral head and a 42% reduction in its 50% volume; the 20% volume of the left femoral head also showed a reduction. In this study, the mean dose to the left femoral head was reduced by 40.2% in IMRT. This was consistent with the study by Uysal et al., [17] who reported mean doses of 18.79±18.79 and 31.5±4.11 Gy for IMRT and 3D-CRT, respectively, corresponding to a 40.3% reduction. This was also consistent with the studies by Cristofaro et al. and Crowe et al. In Table 3, the volume of the bladder receiving 35 Gy (V35) had a 20.1% reduction in IMRT, and this result was close to that of Kinhikar et al., with a 23.7% reduction in IMRT for V35. The volume of the bladder receiving 40 Gy had a 49.9% reduction in IMRT relative to 3D-CRT. This was higher than the 41%, 37.61%, 24.7%, and 26.8% reported by Cristofaro et al., Ashman et al., [18] Uysal et al., and Kinhikar et al., respectively. For the rectum, the volume receiving 40 Gy had a 32.7% reduction in IMRT relative to 3D-CRT. Crowe et al. had a reduction of 49% in the volume receiving 40 Gy in IMRT, while a 50% reduction was reported by Kinhikar et al. However, Cristofaro et al. had a 34% reduction, which is closer to our result. Other studies by Wortel et al. [19] and Panayiotis et al. [20] also reported reductions with IMRT.
Generally, the results from this study were comparable to those of other studies; however, the homogeneity and conformity indices were better and the doses to OAR were lower.
Conclusion
Twenty TPs using 3D-CRT and IMRT were created for ten prostate cancer patients, and their CI and HI were evaluated, along with the dose to OAR. The IMRT TP technique for prostate cancer proved superior to 3D-CRT in sparing dose to OAR. Moreover, the control of normal tissue complication probability is better with plans done in more than one phase compared to those done in a single phase.
Phenotypic and Genotypic Comparison of Epidemic and Non-Epidemic Strains of Pseudomonas aeruginosa from Individuals with Cystic Fibrosis
Epidemic strains of Pseudomonas aeruginosa have been found worldwide among the cystic fibrosis (CF) patient population. Using pulse-field gel electrophoresis, the Prairie Epidemic Strain (PES) has recently been found in one-third of patients attending the Calgary Adult CF Clinic in Canada. Using multi-locus sequence typing, PES isolates from unrelated patients were found to consistently have ST192. Though most patients acquired PES prior to enrolling in the clinic, some patients were observed to experience strain replacement upon transitioning to the clinic whereby local non-epidemic P. aeruginosa isolates were displaced by PES. Here we genotypically and phenotypically compared PES to other P. aeruginosa epidemic strains (OES) found around the world as well as local non-epidemic CF P. aeruginosa isolates in order to characterize PES. Since some epidemic strains are associated with worse clinical outcomes, we assessed the pathogenic potential of PES to determine if these isolates are virulent, shared properties with OES, and if its phenotypic properties may offer a competitive advantage in displacing local non-epidemic isolates during strain replacement. As such, we conducted a comparative analysis using fourteen phenotypic traits, including virulence factor production, biofilm formation, planktonic growth, mucoidy, and antibiotic susceptibility to characterize PES, OES, and local non-epidemic isolates. We observed that PES and OES could be differentiated from local non-epidemic isolates based on biofilm growth with PES isolates being more mucoid. Pairwise comparisons indicated that PES produced significantly higher levels of proteases and formed better biofilms than OES but were more susceptible to antibiotic treatment. Amongst five patients experiencing strain replacement, we found that super-infecting PES produced lower levels of proteases and elastases but were more resistant to antibiotics compared to the displaced non-epidemic isolates. This comparative analysis is the first to be completed on a large scale between groups of epidemic and non-epidemic CF P. aeruginosa isolates.
Introduction
Pseudomonas aeruginosa is the principal pathogen in adult cystic fibrosis (CF) patients [1].Infection with P. aeruginosa has been associated with both acute and chronic infections, whereby chronic respiratory infections punctuated by acute pulmonary exacerbations ultimately lead to accelerated clinical decline [2].As airways infection is the leading cause of morbidity and mortality in the CF population [3], understanding the pathogenesis of P. aeruginosa in CF is of paramount importance.
Our group recently described the Prairie Epidemic Strain (PES), a novel transmissible strain in the Calgary Adult CF Clinic (CACFC) [17].Using pulse-field gel electrophoresis (PFGE), this strain has been detected since 1980, with a prevalence of 22-37% over three decades of patients transitioning to the adult program.PES infection has been associated with increased rates of lung function decline and progression to end-stage lung disease [18].PES has not been found in extensive environmental surveys or in other non-CF associated infections confirming its designation as an epidemic CF strain [19].Parkins et al. [17] demonstrated that PES had higher levels of antibiotic resistance against ceftazidime, ciprofloxacin, and tobramycin compared to local non-epidemic isolates.Moreover, PFGE and multi-locus sequence typing (MLST) found that five patients in our cohort had experienced strain replacement by PES.In each of these situations, patients chronically infected with a local non-epidemic isolate of P. aeruginosa were super-infected with PES, with only PES detected despite subsequent detailed sampling.
In this study, a comparative analysis of PES, other P. aeruginosa epidemic strains (OES) and local non-epidemic P. aeruginosa isolates was conducted in order to assess their relative pathogenic potential.We hypothesized that epidemic strains including PES and OES may share characteristics that would enable phenotypic distinction from local non-epidemic isolates.
Here classical virulence factors, biofilm formation, growth, and antibiotic susceptibility were measured to determine if these factors differ between the groups. The virulence factors tested included proteases, used to degrade fibrin and collagen to invade host tissues [20]; elastases, which specifically break down elastin [20]; lipases, which hydrolyze triglycerides to access fatty acids [21]; hemolysins, which lyse erythrocytes [20]; swarming, a means of coordinated motility mediated by the flagellum and pili [22]; and swimming, motility mediated only by the flagellum [22]. We also measured antibiotic susceptibility against 4 classes of antibacterial drugs: aminoglycosides (tobramycin), fluoroquinolones (ciprofloxacin), cephalosporins (ceftazidime), and carbapenems (meropenem). These assays were chosen based on their unambiguous measurements and high-throughput capacity.
We also performed a longitudinal assessment of phenotypic traits of isolates collected from a subset of patients who transitioned to the CACFC (aged 18) and isolates collected more recently (at the time of the study) to understand the natural evolution of infecting isolates over time in patients experiencing strain replacement.
Strain Collection
One hundred and eighteen CF clinical P. aeruginosa isolates and the reference strain PAO1 were used in the study.The sample collection was composed of 32 PES (derived from 10 patients), 35 OES (including 6 LES from the United Kingdom, 2 Md1, 1 MES, 3 AUST-01, 2 AUST-02, 1 AUST-03, 1 AUST-04, 1 P42 (AUST-06), 10 Strain A (LES from Canada), and 8 Strain B), and 51 local non-epidemic isolates (derived from 18 patients).The PES and local non-epidemic isolates were obtained from the prospectively maintained and inventoried CACFC Biobank-a comprehensive repository established since 1978 that includes every bacterial pathogen isolated from an individual with CF.For the longitudinal assessment of local isolates, samples were taken at early (at enrollment at age 18) and late (most recent at the time of the study, or prior to transplant/death/loss of follow up) time points.Collection and analysis of P. aeruginosa from the CACFC Biobank was granted by the Conjoint Health Regional Ethics Board (REB15-0854).
Multi-locus sequence typing (MLST)
The protocol was adapted from Curran et al. [23] and Beaudoin et al. [24].Briefly, seven housekeeping genes (acsA, aroE, guaA, mutL, nuoD, ppsA, trpE) were PCR amplified using the amplification primers listed in the S1 Table .PCR conditions were adapted from Korbie and Mattick [25].PCR products and sequencing primers were sent to Macrogen Inc. to sequence both the forward and reverse strands with the data being uploaded to pubmlst.org[26] for allele type (AT) and sequence type (ST) assignments.Suspected novel ATs were sequenced twice to validate findings.
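Conceptually, once the seven loci are sequenced, each gene is matched to an allele number and the seven-number allele profile is looked up to assign a sequence type, as PubMLST does. The sketch below mimics that lookup with a toy profile table; the allele numbers and ST labels are purely illustrative assumptions, and real profiles must be taken from pubmlst.org.

```python
# Seven-locus allele profile -> sequence type (toy lookup table; real profiles
# live in the PubMLST database and should be retrieved from pubmlst.org).
LOCI = ("acsA", "aroE", "guaA", "mutL", "nuoD", "ppsA", "trpE")

PROFILE_TO_ST = {
    (11, 5, 7, 3, 4, 4, 7): "ST-A (hypothetical)",
    (11, 5, 132, 3, 4, 4, 7): "ST-B (hypothetical single-locus variant at guaA)",
}

def assign_st(alleles: dict) -> str:
    """Build the ordered allele profile and look up its sequence type."""
    profile = tuple(alleles[locus] for locus in LOCI)
    return PROFILE_TO_ST.get(profile, "novel profile - submit for a new ST")

isolate = {"acsA": 11, "aroE": 5, "guaA": 132, "mutL": 3, "nuoD": 4, "ppsA": 4, "trpE": 7}
print(assign_st(isolate))
```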
Virulence Assays
Bacteria were grown in tryptic soy broth (TSB; Difco, Sparks, MD) for 16-18 hours in a 37°C shaking incubator.Unless otherwise stated, all assays were performed as 3 independent trials, with 3 replicates per trial, resulting in 9 measurements per trait, per isolate.For the protease, elastase, and lipase assays, overnight cultures were standardized to an OD 600 of 0.3 and 3μl were spotted onto the plates.The plates were incubated at 37°C for 24 hrs (protease), 48 hrs (lipase), or 24 hrs with an additional 48 hrs at 4°C (elastase).The resulting diameter of the zone of clearance was measured.For the swarm and swim assays, a sterile toothpick was used to pick up a single colony from a streak plate and inoculated onto the centre of the agar plate.The plates were incubated at 37°C for 48 hrs (swarm) or 25°C for 72 hrs with additional humidity (swim) and the radius from the point of inoculation was measured.
Swim.Plates consisted of 0.3% Luria-Bertani agar and the protocol was adapted from Murray and Kazmierczak [29].
Hemolysis.P. aeruginosa was qualitatively assessed for the ability to lyse erythrocytes collected from sheep blood.Bacterial colonies were passaged from a streak plate onto 5% sheep blood TSB agar plates (Dalynn Biologicals, Calgary, AB).The plates were incubated at 37°C for 24 hours and the pattern of hemolysis was measured as beta (complete lysis), alpha (incomplete lysis), or gamma (no lysis).In this instance, only 3 replicates of each isolate were completed.
Biofilm Growth.A second MBEC device was set up as stated above.In this case, the pegs were removed and placed in a 0.85% saline solution.The biofilms that formed on the pegs were disrupted using sonication for 5 mins (5510 Branson).Serial dilutions were performed to determine viable cell counts on TSB agar plates.
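The viable-count step described above reduces to simple dilution arithmetic: the colonies counted on a plate are scaled back by the dilution factor and the volume plated. The numbers below are made-up placeholders, shown only to illustrate the calculation.

```python
def cfu_per_ml(colony_count: int, dilution_factor: float, volume_plated_ml: float) -> float:
    # CFU/mL in the original (undiluted) biofilm suspension.
    return colony_count / (dilution_factor * volume_plated_ml)

# Hypothetical count: 42 colonies on the 10^-5 dilution plate, 0.1 mL plated.
print(f"{cfu_per_ml(42, 1e-5, 0.1):.2e} CFU/mL")
```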
Planktonic Growth
The bottom half of the MBEC device was measured for planktonic growth at an OD 600 in a Perkin Elmer Victor X4 plate reader.
Mucoidy
P. aeruginosa strains were qualitatively assessed for mucoidy.Bacterial colonies were streaked onto Pseudomonas isolation agar (PIA; Difco, Sparks, MD) and incubated at 37°C for 48 hrs.A slimy phenotype was rated as positive for the presence of mucoid-producing cells.In this instance, only 3 replicates of each isolate were assayed.
Hierarchical Clustering Analysis
Cluster 3.0 was used to group the phenotypic traits based on the unweighted pair group method with arithmetic mean.The gene cluster, correlation uncentered, and centroid features under the hierarchical tab were used.For each phenotypic assay, the data was mean-centered and scaled to unit-variance in order to account for the differences in dynamic range and variance of each trait to allow the variables to be considered together.Java Treeview was used to build the dendrogram.
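The preprocessing and clustering described above can be approximated in Python: each trait is mean-centred and scaled to unit variance, and an average-linkage (UPGMA) tree is built on correlation-based distances. This is only a sketch of an analogous workflow, not the exact Cluster 3.0/Java Treeview pipeline used, and the trait matrix is random placeholder data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows = isolates, columns = phenotypic traits (hypothetical values).
rng = np.random.default_rng(0)
traits = rng.normal(size=(12, 8))

# Mean-centre and scale each trait to unit variance so traits with different
# dynamic ranges and variances contribute comparably to the clustering.
scaled = (traits - traits.mean(axis=0)) / traits.std(axis=0)

# Average linkage (UPGMA) on correlation-based distances between isolates.
z = linkage(scaled, method="average", metric="correlation")
tree = dendrogram(z, no_plot=True)
print(tree["leaves"])  # isolate ordering along the dendrogram
```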
Statistical Analysis
Kolmogorov-Smirnov (KS) and Mann-Whitney U (MWU) tests as well as Benjamini-Hochberg false-discovery corrections were performed in 'R' (www.R-project.org).For all tests a pvalue <0.05 was considered significant and the multiple testing problem was accounted for by using the Benjamini-Hochberg step-up procedure [33] to control the false discovery rate (FDR).Briefly, FDR was calculated by ranking p-values and multiplying each p-value by the number of tests performed and dividing them by the rank.The largest p-value still under 0.05 becomes the new p-value cutoff for significance, which are reported for each group of tests.MWU tests with FDR control were used for the longitudinal study to compare isolates from different time points from the same patient for all phenotypic traits, but a paired test was not used as the number of strains isolated at each time (early or late) did not always match.
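The step-up procedure described above can be written out directly. The sketch below is a minimal implementation of the Benjamini-Hochberg adjustment, expressed as adjusted q-values (equivalent to the cutoff formulation described), rather than the exact R code used; the p-values are placeholders.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return BH-adjusted p-values (q-values) and a per-test significance flag."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, multiplying each p-value by m / rank
    # and enforcing monotonicity of the resulting q-values.
    for rank_from_end, idx in enumerate(reversed(order)):
        rank = m - rank_from_end  # 1-based rank of this p-value
        q = min(prev, p_values[idx] * m / rank)
        adjusted[idx] = q
        prev = q
    return adjusted, [q <= alpha for q in adjusted]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
adj, sig = benjamini_hochberg(pvals)
print([round(q, 3) for q in adj])
print(sig)
```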
KS tests were used to determine whether values from one group were larger or smaller than the other group.The KS test is based on the empirical cumulative distribution function (ECDF), which indicates the cumulative percentage of isolates from each group type with at least that much activity, as determined by the x-axis for each phenotypic trait.The KS test was used as it does not assume normality of data, nor homoscedasticity, and is insensitive to log transformation.Separate KS tests were performed that checked whether the ECDF of each group listed in the table was respectively larger or smaller than that of the second group.
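For illustration, the two-sided and one-sided two-sample KS comparisons described above could look like the following; the two vectors are random placeholders, not measurements from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=30)  # e.g. trait values, group A
group_b = rng.lognormal(mean=0.8, sigma=0.4, size=30)  # e.g. trait values, group B

# Two-sided test, plus one-sided tests asking whether A's ECDF lies below B's
# (i.e. A tends to larger values) and whether it lies above B's (B tends larger).
two_sided = stats.ks_2samp(group_a, group_b)
a_larger = stats.ks_2samp(group_a, group_b, alternative="less")
b_larger = stats.ks_2samp(group_a, group_b, alternative="greater")
print(two_sided.pvalue, a_larger.pvalue, b_larger.pvalue)
```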
Chi-square tests were employed to determine whether there were significant differences between the groups in the mucoidy and hemolysis assays.
MLST of Epidemic and Non-Epidemic P. aeruginosa
Using MLST, we were able to validate our findings from the PFGE analysis (refer to Parkins et al. [17]) and confirm that PES are indeed clonal.Full allele types (ATs) and sequence types (STs) are provided in the S2 Table .In 31/32 (97%) of the PES isolates an ST of 192 was found.In one isolate, 6/7 loci matched that of ST192 but a novel AT was found at guaA (AT132) due to single nucleotide difference from AT7.This resulted in a new ST1495 designation but a difference at only 1 locus is not enough to differentiate isolates [34] and it was therefore categorized as PES.
As for the OES, none of the isolates were found to type as ST192 that was displayed by PES.The most related OES to PES were two strain A's isolates (ST683) and one Strain B isolate (ST New) as each shared 3 allele types with the PES sequence type (S2 Table ).LES/Strain A (hereafter referred to as LES) isolates were found to have two sequence types-ST146 and ST683, which have previously been identified [16,35].These two sequence types differed at the ppsA locus but ST146 was the dominant ST as it was present in 13/16 (81%) of the isolates.Our collection also included 2 Md1 isolates and both were ST148 clones [35].The MES was found to have a novel ST but it matched 6/7 allele types of a previously identified Manchester strain [36].The other Canadian epidemic strain, Strain B, had two STs but ST439 [16] was found in 6/8 (75%) of the isolates.The Australian strains had five STs, but the strains in this subset are known to be genetically distinct and coincided with previously published work [37].
In typing the 51 local non-epidemic isolates (also referred to as local isolates) from 18 patients, we found 24 different STs with 12 being novel types unique to the CACFC (S2 Table ).We observed that patients who harbored these non-epidemic isolates tended to have the same or related strains at subsequent time points (S2 Table ).In particular, 6 patients were stably colonized with local isolates (patients A2, A9, A14, A40, A51, A52) for up to 17 years.Moreover, 6 other patients (A18, A34, A35, A64, A85, A129) harbored multiple isolates that were the same local clone or lineages of the clone since nucleotide sequence changes were found at 1-3 ATs (S2 Table ).In three of our patients (A18, A52, A129) we found stable isolates with a MLST sequence type of ST179 (S2 Table ).Moreover, two of our patients (A8, A11) were colonized with ST274 (S2 Table ).These sequence types were the only ones other than ST192 to be found in more than one patient.
Six additional patients were observed to undergo strain replacement (S2 Table ).Five of these patients (A11, A43, A78, A131, A134) were each initially colonized with local isolates that were subsequently replaced by PES and this replacement occurred in as little as 2 years.One patient (A8) experienced strain replacement of a local non-epidemic isolate by a different non-clonal isolate (S2 Table ).
Clustering of Phenotypic Traits
Classical virulence factors were measured for all isolates, including enzymatic activity (protease, elastase, lipase, and hemolysin production), motility (swarm and swim), biofilm formation (biomass and growth as measured by colony forming units (CFUs)), planktonic growth, and the mucoid phenotype. P. aeruginosa isolates were holistically compared based on all phenotypes tested. Hierarchical clustering analysis was used to cluster the isolates based on the similarity of their phenotypic traits. This analysis produced two main clades (Fig 1). Clade A was mainly comprised of isolates expressing higher levels of virulence factors, and 22/32 (69%) of PES fell into this group (Fig 1, clade A). Within clade A, PES largely clustered into two groups with four other side-by-side pairings. Conversely, clade B was comprised of isolates expressing lower levels of virulence factors, and only 10/32 (31%) of PES were found in this clade (Fig 1, clade B). Most of the OES clustered in clade B (22/35, 63%), while the local isolates were more evenly distributed, since 24/51 (47%) expressed higher activity (clade A) of the phenotypic traits whereas 27/51 (53%) expressed lower activity (clade B) of the factors tested. We also looked at the clustering of the PES and local isolates found in the five patients where strain displacement was observed. Of the displaced local isolates, all were found in clade A with the higher level of virulence determinants (Fig 1). Similarly, 7/9 (78%) of the replacement PES were also found to cluster in clade A.
Comparison of Phenotypic Traits Between Epidemic and Non-Epidemic Isolates
Based on our cluster analysis, we wanted to determine if there might be phenotypic traits that would distinguish epidemic isolates from non-epidemic isolates.Isolates were categorized into three groups (PES, OES, local isolates) with the OES comprised of various other epidemic strains grouped into a single entity due to their small sample size and to assess if there is a common phenotypic marker among the epidemic strains.
When we compared the epidemic strains (PES and OES) against the local isolates, we noticed that all three groups had a wide distribution of activity for each trait (Fig 2A-2F). In terms of individual traits, few showed significant differences between the epidemic strains and the local isolates (Fig 2A-2F, S1 Fig and S3 Table). The two exceptions were biofilm growth (Fig 2E) and mucoidy (S2A-S2C Fig). Significant differences were found between the epidemic strains (both OES and PES) and the local non-epidemic isolates for biofilm growth (Fig 2E and S3 Table). In terms of mucoidy, 66% of the PES isolates exhibited the mucoid phenotype whereas only 14% of the OES and 37% of the local non-epidemic isolates were mucoid (S2A-S2C Fig). These differences in mucoidy were significant between all groups (S4 Table). The antibiotic susceptibility tests demonstrated that only OES could be differentiated from the local non-epidemic isolates (Fig 3A-3D). The OES were more resistant to all four antibiotics compared to the local isolates, especially against tobramycin (83% of isolates) and ciprofloxacin (77% of isolates) (Fig 3A and 3C). Conversely, the local isolates were highly susceptible, since only 1 isolate was resistant to ceftazidime and none were resistant to meropenem (Fig 3B and 3D).
Pairwise group comparisons of the activities of all isolates found that the local isolates had the highest mean activity for elastase, biofilm biomass, and planktonic growth (Fig 2B, 2D and 2F). Conversely, OES had the lowest mean activity for all factors tested except for lipase production, where this group had the highest mean level of activity (Fig 2C). Statistical analysis indicated that PES could be differentiated from OES based on their higher protease activity and greater biofilm biomass but lower lipase activity (Fig 2A, 2C and 2D and S3 Table). This was also true in terms of antibiotic susceptibility, where the OES were more resistant than PES for three of the four antibiotics tested (Fig 3A-3D and S3 Table). The only phenotypic characteristic that could separate the two groups was PES having lower numbers of CFUs in the biofilm mode of growth (Fig 2E and S3 Table). Of interest, none of the groups could be differentiated from each other based on the motility (S3 Fig and S3 Table) or hemolysis (S4 Fig and S4 Table) assays.
A Comparison of Phenotypic Traits of Stable and Strain Replacement Isolates
A longitudinal (early vs. late) assessment of colonizing P. aeruginosa isolates from individual patients was performed to determine how infecting strain characteristics change over time.These isolates were separated into three groups based on the patient cohort they were derived from: those patients stably infected with PES for at least 6 years (n = 23, early = 10, late = 13), those patients stably infected with non-epidemic local isolates for at least 5 years (n = 38, early = 19, late = 19), and patients that experienced strain replacement in as little as 2 years (n = 18, early = 9, late = 9).
Comparisons of these super-infecting PES isolates demonstrated that relative to the replaced local non-epidemic isolates that they displaced, they tended to be less virulent in general (Fig 4A -4H).Statistical analysis indicated that significant differences between these two subgroups were only found in protease and elastase production assays (S5 Table ).In particular, PES produced lower levels of both of these enzymes than the displaced isolates (Fig 4A and 4B).The local isolates showed a similar trend as protease production was significantly different between the early and late groups (Fig 4A and S5 Table).However, for the patients stably colonized with PES, no significant differences were found between the early and late isolates for any of the traits tested (Fig 4A-4H and S5 Table ).
A comparison of antibiotic susceptibility between the displaced isolates and PES showed that the displaced isolates were generally more susceptible to antibiotics than the PES that replaced them (Fig 5A -5D).In particular, 67% of PES were resistant to tobramycin and 78% resistant to ciprofloxacin (Fig 5A and 5C).In contrast, none of the displaced isolates were resistant to the antibiotics (Fig 5A -5D).Statistical analysis confirmed that these differences were significant (S5 Table ).Likewise the late stable local isolates were significantly more resistant to ciprofloxacin (33%) compared to the early local isolates (5%) (Fig 5C and S5 Table).With the exception of susceptibility to meropenem, the PES stable group did not show any significant difference in activity between the early and late isolates (Fig 5D and S5 Table).
Discussion
Since the discovery of LES in 1996, the dogma of transmissible and epidemic P. aeruginosa strains being a rare occurrence has been dispelled [9].Although most CF patients are colonized with non-epidemic strains of P. aeruginosa, several epidemic strains have been found around the world amongst the CF population [4,5].With the exception of the European LES and Canadian Strain A [16], each epidemic strain is genetically distinct and they are commonly, but not universally, associated with worsened disease progression and/or multi-drug resistance [6].Currently, it is unknown if this resistance is an intrinsic property or an adaptive response to increased antibiotic pressure [24].PES is prevalent among CF patients attending the Calgary Adult CF Clinic (CACFC) and amongst patients transferring from other clinics in the Prairie provinces of Canada suggesting broader prevalence [17].Herein we used MLST to genotype P. aeruginosa isolates since this technique is unambiguous and reproducible across multiple labs [23].This work found that the PES (ST192) clones are genetically distinct from the other epidemic strains.Patients that were not infected with PES tended to carry lineages of their own local non-epidemic isolates (S2 Table ) that persisted for many years as found in other clinics [18,38,39].Among our patients, MLST identified clones (ST179) in a pair of CF siblings (A18 and A129) (Parkins et al. [17] and this study).This clone was also detected in one other unrelated patient (A52).Additionally, ST274 clones were found in two unrelated patients (A8 and A11).These ST179 and ST274 isolates are minor clones common in the local environment that have been detected in a small number of other patients in the clinic and elsewhere [34,37].
Adaptation to CF airways can be accomplished in a myriad of mechanisms, including loss of virulence factors over time [40][41][42], biofilm formation [43], and tolerating antibiotic treatment [44].We initially thought that PES and OES might be phenotypically similar since both have been able to spread throughout CF patient populations.Our results suggest that epidemic strains adapt to CF lungs to cause chronic infections in three key ways-biofilm growth, conversion to the mucoid phenotype, and antibiotic resistance.We found that biofilm growth was the only trait that separated both PES and OES from the local non-epidemic isolates with PES isolates being abundantly mucoid.It is now well established that mucoidy is a result of mutations in the mucA gene, which in turn allows for the over-expression of alginate [45].Though not completely essential for biofilm formation, alginate is commonly found in the extracellular matrix of biofilms [46,47].This conversion to the mucoid phenotype serves as an adaptive response to the harsh CF lung environment and patients chronically infected with mucoid isolates tend to have worsened disease progression [48].
The antibiotic susceptibility assays indicated that OES were more resistant than the local isolates. Furthermore, PES isolates that participated in strain replacement were more resistant than their displaced non-epidemic counterparts, especially to tobramycin and ciprofloxacin. However, PES were more likely to be susceptible to antibiotics compared to LES (Parkins et al. [17] and this study). The CACFC is conservative in its administration of antibiotics during routine treatment, meaning PES would have had less pressure to develop resistance compared to LES and other epidemic P. aeruginosa strains [49,50]. Together with the biofilm and mucoidy data, the antibiotic susceptibility analysis suggests that epidemic isolates have a competitive advantage over the local isolates in the CF lungs. This is important in terms of strain replacement. Super-infections refer to secondary infections superimposed on a primary infection, whereby the secondary infection dominates and the original strain is no longer detected, leading to strain replacement [51]. These super-infection events have been documented with LES, with the worry that strain replacement may lead to worsened patient prognosis [51,52]. Five patients (A11, A43, A78, A131, A134) in this study were initially colonized with a local non-epidemic isolate but were later super-infected with PES ST192 clones. Serial yearly assessments following this event demonstrated total dominance by the super-infecting PES [17]. The longitudinal study was necessary because the hierarchical analysis of all the strains showed that all of the displaced local isolates and 7/9 of the replacement PES grouped together. This indicated that these isolates had similar virulence profiles, and it is likely that strain replacement is a phenomenon that cannot be attributed to differences in virulence traits alone.
We were able to demonstrate that even amongst genetically distinct isolates from the same patient involved in strain replacement that reduction of virulence factors continued to occur.In particular, 4/9 (44%) of the replacement PES did not have elastase activity whereas all the displaced local non-epidemic isolates produced elastase.Elastase production is regulated by the lasIR quorum sensing (QS) system [53] implying that PES could have evolved to be QS mutants.Since the bacteria are decreasing their production of virulence factors, they are not eliciting host responses to the same degree and can better evade constant immune surveillance, resulting in the establishment of chronic infections.This phenotype is beneficial as it would aid the bacteria evade host immune responses mediated by polymorphonuclear leukocytes [54] and may even allow the bacteria to better compete.Moreover, the PES stable isolates did not exhibit a significant change of activity between the early and late time points, which was also observed in the hierarchical clustering analysis since PES isolates tended to group together suggesting that they have similar phenotypic profiles.However it is possible that since we are assessing these replacement PES isolates many years after the super-infection event, the original traits enabling super-infection may have been altered or completely lost.
We acknowledge that phenotypic heterogeneity in P. aeruginosa isolates have been extensively reported in CF literature [55][56][57].Even though we randomly selected isolates for this study, we are inferring phenotypes to the entire population.Sampling bias may also exist due to the relatively small number of P. aeruginosa isolates from the same patient and that other virulence factors, such as pyocyanin [58], siderophore [59], and DNase [60] production, could have been studied that could have served to better differentiate the epidemic and non-epidemic groups.Here we provide a comprehensive assessment of the phenotypic traits and genotypic background of P. aeruginosa causing chronic infections in CF.This study is the first to compare multiple P. aeruginosa epidemic and non-epidemic isolates in order to provide insight on strain replacement.The mechanisms that are involved in strain replacement remain largely unknown but our study has demonstrated that virulence factor production is likely not the sole component used.Elucidating these mechanisms will provide a better understanding regarding the rapid spread of P. aeruginosa clones in the CF population.
Fig 1. Hierarchical clustering analysis of phenotypic traits of P. aeruginosa isolates. The dendrogram is split into two main clades (A and B). The mean values for all tests are represented as black boxes, higher than average virulence factor production is indicated in yellow, and lower than average is shown in blue. PES isolates are listed in red text, OES are in blue text, and local isolates are in green text. An asterisk denotes those isolates involved in strain replacement. Values were mean-centered and scaled to unit-variance. The dendrogram was built using Cluster 3.0 with the gene cluster, correlation uncentered, and centroid linkage options under the hierarchical feature.
Fig 3. Antibiotic susceptibility testing of P. aeruginosa isolates. Prairie Epidemic Strain (PES, red), other epidemic strains (OES, blue), and local non-epidemic isolates (green) were assayed for A) tobramycin, B) ceftazidime, C) ciprofloxacin, and D) meropenem susceptibility. Mean susceptibility values of Kirby-Bauer zone sizes were measured for each individual isolate. The black bar indicates the mean (+/- standard deviation) of each group, whereas the dashed grey line depicts the resistance breakpoint value according to EUCAST standards [32]. Significant differences between the groups according to the KS test are indicated by the asterisk symbols.
Fig 4. Longitudinal comparison of virulence factors, biofilm formation, and growth of P. aeruginosa isolates. A) Protease. B) Elastase. C) Lipase. D) Swarm. E) Swim. F) Biofilm Biomass. G) Biofilm Growth (log scale). H) Planktonic Growth. Each data point represents the mean (+/- standard deviation) activity of a single isolate. Circles represent early isolates whereas squares represent late isolates. Three different situations were observed: local isolate displaced by PES via super-infection (white), PES stably colonizing a patient (grey), and local isolate stably colonizing a patient (black). Significant differences between the groups according to the MWU test are indicated by asterisk symbols.
Fig 5. Longitudinal comparison of antibiotic susceptibility of P. aeruginosa isolates. A) Tobramycin. B) Ceftazidime. C) Ciprofloxacin. D) Meropenem. Each data point represents the mean activity (+/- standard deviation) of a single isolate. Circles represent early isolates whereas squares represent late isolates. Three different situations were observed: local isolate displaced by PES via super-infection (white), PES stably colonizing a patient (grey), and local isolate stably colonizing a patient (black). The dashed red line depicts the EUCAST resistance breakpoints [32]. Significant differences between the groups according to the MWU test are indicated by the asterisk symbols.
Evaluation of the Recurrence Risk of Gestational Trophoblastic Neoplasia (GTN) after Serum βhCG Normalization
Background: Although many GTN patients can be treated with chemotherapy, a small proportion of them will relapse after complete recovery. To the best of our knowledge, there is no information on relapsed GTN cases in our region. In the current study we aimed to evaluate the recurrence risk of gestational trophoblastic neoplasia (GTN) after serum βhCG normalization. Methods: This descriptive-analytical study was carried out on registered hospital data of patients with a confirmed diagnosis of GTN following molar pregnancy who were admitted to the gynecology ward of Imam Khomeini Hospital between 2011 and 2017. Patients with a diagnosis of postmolar GTN, based on at least five βhCG measurements, were included. Patient information including the initial serum βhCG level, time to βhCG resolution, type of molar pregnancy, treatment protocols, need for re-curettage, relapse, and, finally, the time from βhCG resolution to relapse was evaluated. Results: In the present study, 239 patients with GTN (including 180 complete and 59 partial moles) were evaluated. The mean age of the patients was 28.8 years, ranging from 16 to 47 years. The mean βhCG concentration was 170,000 IU/ml (range 760 to 850,000). The mean time to βhCG resolution was 8.19 months, with a range of 4 to 12 months. Recurrence was observed in 9 patients (3.7%). The mean time from βhCG resolution to relapse was 20.94 months. The mean initial level of βhCG was significantly lower in patients with recurrence (p < 0.0001). The highest recurrence rate was seen in those receiving multiple-drug chemotherapy. There was also a significant relationship between disease stage and recurrence rate. Conclusion: The findings of this study indicate that, although the recurrence of GTN is relatively low, given the poor prognosis of these patients, continuous evaluation of βhCG levels for at least two years is essential to prevent disease progression.
Introduction
Gestational trophoblastic disease (GTD) refers to a group of diseases characterized by abnormal trophoblastic tissue proliferation. Gestational trophoblastic neoplasia (GTN), the malignant form of GTD, includes invasive mole, choriocarcinoma, placental site trophoblastic tumor, and epithelioid trophoblastic tumor. Although these malignancies can occur weeks to years after any type of pregnancy, they most commonly follow molar pregnancies (Heller, 2015; Barroilhet, 2018; Shaaban, Rezvani, Haroun, Kennedy, Elsayes, Olpin, Salama, Foster, & Menias, 2017; Reva Tripathi, 2017; Biscaro, Braga, & Berkowitz, 2015). According to the International Federation of Gynecology and Obstetrics (FIGO) recommendation, GTN is diagnosed when any of four criteria is observed: no decrease in β-hCG levels after four weeks, rising serum β-hCG for three consecutive weeks, detectable β-hCG nine months after mole removal, or a histological diagnosis of choriocarcinoma (Eysbouts, Ottevanger, Massuger, IntHout, Short, Harvey, Kaur, Sebire, Sarwar, Sweep, & Seckl, 2017). Some patients with molar pregnancy are not completely cured by mole removal and progress to malignancy. Therefore, finding an appropriate marker for the early prediction of neoplasia is critically important (Seckl, Sebire, & Berkowitz, 2010). Women with GTN usually present with vaginal bleeding in the first trimester or an abnormal ultrasound at 12 to 18 weeks of gestation. In some cases, invasion of the tumor into the uterus leads to significant vaginal bleeding, and in others it can cause intraperitoneal hemorrhage by perforating the myometrium. In addition, a necrotizing intrauterine tumor may act as a focus of infection (Mangili, Garavaglia, Cavoretto, Gentile, Scarfone, & Rabaiotti, 2008).
Numerous studies have been conducted in recent years to find appropriate markers for the early prediction of GTN. For example, a series of studies suggested the ratio of βhCG before and a week after mole removal, or the ratio of hCG-α to hCG-β, as appropriate early predictors of GTN (Kang, Choi, & Kim, 2012). GTD is considered treated if the hCG measurement is normal (less than 5 mIU/mL) for three consecutive weeks and then stays normal for up to 9 months. However, the main problem in developing countries is that follow-up stops after hCG normalization, and only half of patients attend scheduled medical visits after molar pregnancy (Schmitt, Doret, Massardier, Hajri, Schott, Raudrant, & Golfier, 2013).
Reports of relapsed patients indicate the possibility of GTN recurrence after hCG normalization. Bagshawe et al. showed that GTN can recur spontaneously after normalization of serum hCG levels. Therefore, a common follow-up protocol is used after hCG normalization. Recently published guidelines suggest monitoring hCG levels for at least nine months after normalization (Jankilevich, Uberti, Braga, Bianconi, Maesta, Viggiano, Sun, Cortes Charry, Salazar, Grillo, & Moreira de Andrade, n. d.). In the current study, we aimed to investigate the incidence of relapsed GTN after normalization of serum hCG levels.
Study Design
This descriptive-analytical study used registered hospital data of patients with a confirmed diagnosis of GTN following molar pregnancy who were admitted to the gynecology ward of Imam Khomeini Hospital between 2011 and 2017. Patients with a diagnosis of postmolar GTN, based on at least five βhCG measurements, were included, while those with incomplete records were excluded. Patient information, such as initial serum βhCG level, time to βhCG resolution, type of molar pregnancy, treatment protocol, need for re-curettage, relapse, and the time from βhCG resolution to relapse, was evaluated over the study years.
Definitions
Postmolar GTN was defined as having one of the following FIGO criteria: When the plateau of hCG lasts for four measurements over a period of 3 weeks or longer; that is, days 1, 7, 14, 21.
When there is a rise in hCG for three consecutive weekly measurements over a period of at least 2 weeks; that is, days 1, 7, 14.
If there is a histologic diagnosis of choriocarcinoma.
Statistical Analysis
The data were described using means, medians, standard deviations, frequencies, and percentages. Means were compared with the independent Student t-test or the Mann-Whitney test. Proportions were compared with the chi-square test. A Kaplan-Meier plot was used to describe time to relapse (Figure 1). All statistical analyses were performed with SPSS version 20.
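To make the analysis plan concrete, the sketch below reproduces the same type of calculations in Python (chi-square for proportions, Mann-Whitney for group comparison, Kaplan-Meier for time to relapse). The counts and values are made-up placeholders, not the study data, and the original analyses were performed in SPSS 20.

```python
# Illustrative only: hypothetical numbers, not the study data.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu
from lifelines import KaplanMeierFitter

# 2x2 table: recurrence yes/no by chemotherapy group (hypothetical counts)
table = np.array([[3, 170],    # single-drug chemotherapy
                  [6, 60]])    # multi-drug chemotherapy
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square p = {p:.3f}")

# initial betaHCG levels in the two outcome groups (hypothetical values, IU/ml)
no_recur = [150000, 220000, 90000, 310000]
recur = [12000, 8000, 25000]
print("Mann-Whitney p =", mannwhitneyu(recur, no_recur).pvalue)

# months from betaHCG resolution to relapse (1) or censoring (0)
months = [10, 14, 20, 24, 30, 36, 47, 24, 24]
relapsed = [1, 1, 1, 1, 1, 1, 1, 0, 0]
kmf = KaplanMeierFitter().fit(months, event_observed=relapsed)
kmf.plot_survival_function()   # relapse-free survival curve
```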
Results
Our patients were divided into two groups, with and without recurrence. The mean age of patients did not differ significantly between the groups. The mean initial βhCG level was significantly lower in patients with recurrence (p <0.0001). There was no significant difference in gestational age between the two groups. The mean time to βhCG normalization was also not significantly different between groups (p = 0.66). The highest recurrence rate was seen in those receiving multiple-drug chemotherapy. There was also a significant relationship between disease stage and recurrence rate (Table 2). In the relapsed group, two patients were in stage 1, two in stage 2, three in stage 3, and two in stage 4. The mean time from βhCG resolution to relapse was evaluated across treatment modalities; patients treated with multi-drug chemotherapy showed the shortest time to relapse (Table 3). There were three relapses in patients treated with single-drug chemotherapy, one with MTX and two with actinomycin. The recurrence rate did not differ significantly between patients treated with MTX or actinomycin. Of the 66 patients treated with multi-drug chemotherapy, 20 were high risk and initially treated with multi-drug therapy, while the other 46 received this protocol secondarily because of non-response to single-drug chemotherapy (Table 4).
Discussion
Although many GTN patients can be treated with chemotherapy, a small proportion relapse after complete recovery. Previous studies have been carried out on small sample sizes because of the low frequency of relapse. To the best of our knowledge, there is no information on relapsed GTN cases in our region. Therefore, this retrospective cohort study compared patients with and without relapsed GTN.
The recurrence rate in the present study was 3.7%, similar to previous studies. Yang et al., in a study of 1,130 patients, reported 314 patients with recurrent GTN (3.4%), which was similar to the results obtained in this study (Yang, Xiang, Wan, & Yang, 2006). However, another study by the same group found a 6.5% GTN recurrence rate, which was much higher than our findings (Kong, Zong, Cheng, Jiang, Wan, Feng, Ren, Zhao, Yang, & Xiang, 2020). Also, Braga et al., in a study of GTN following molar pregnancy, showed that 10 of 2,284 GTN patients relapsed (Braga, Maestá, Matos, Elias, Rizzo, & Viggiano, 2015).
Moreover, our results showed that the mean time from βhCG resolution to relapse was 20.1 months. The shortest time to recurrence was ten months, and some patients showed recurrence even after 47 months. Consistent with the present study, in the study by Braga et al. the mean time from diagnosis to recurrence was 18 months, and all diagnoses were made 9 months after βhCG resolution (Braga, Maestá, Matos, Elias, Rizzo, & Viggiano, 2015). However, in the study by Yang and colleagues this time was three months, lower than our result; in contrast to the present study, the Yang study reported the median rather than the mean duration to recurrence. In addition, the Yang study showed that more than 78% of patients relapsed within one year after treatment and 10% relapsed after two years (Kong, Zong, Cheng, Jiang, Wan, Feng, Ren, Zhao, Yang, & Xiang, 2020). These findings indicate the need for long-term monitoring of βhCG levels after complete recovery. Balachandran and colleagues, in a study of 4,000 GTN patients, suggested that βhCG levels should be monitored for at least one year after complete recovery (Balachandran et al., 2019).
Our findings also showed that initial βhCG levels were significantly lower in patients with recurrence, in line with the study by Powles et al. The cause is not well understood, but the tumors in these patients likely consist of mutated cells and poorly differentiated cells without the ability to produce βhCG; this phenomenon has been shown in other tumors (Fosså, Waehre, & Paus, 1992). In addition, the type of treatment was related to the incidence of recurrence: patients undergoing multi-drug chemotherapy showed a higher incidence of GTN recurrence.
Given that high-risk patients are treated with multi-drug chemotherapy regimens, the higher incidence of recurrence in these patients may have been due to high-risk GTN. A study by Yang and colleagues also showed that the recurrence rate in high-risk patients was 6.9%, approximately four times higher than in low-risk patients (Kong, Zong, Cheng, Jiang, Wan, Feng, Ren, Zhao, Yang, & Xiang, 2020). Couder and colleagues also found that patients who required more than four doses of MTX to normalize βhCG levels had a significantly higher recurrence risk (Couder, Massardier, You, Abbas, Hajri, Lotz, Schott, & Golfier, 2016).
Conclusion
The findings of this study indicate that although recurrence of GTN is relatively uncommon, given the poor prognosis of relapsed patients, continuous evaluation of βhCG levels for at least two years is essential to prevent disease progression. In the current study, we evaluated the GTN patients of Khuzestan for the first time, which is a strength of our study. We did not study patient survival, which is a limitation of the study.
Impact of previous treatment history and B-cell depletion treatment duration on infection risk in relapsing-remitting multiple sclerosis: a nationwide cohort study
Background B-cell depletion displays striking effectiveness in relapsing-remitting multiple sclerosis (RRMS), but is also associated with increased infection risk. To what degree previous treatment history, disease-modifying therapy (DMT) switching pattern and time on treatment modulate this risk is unknown. The objective here was to evaluate previous DMT use and treatment duration as predictors of infection risk with B-cell depletion. Methods We conducted a nationwide RRMS cohort study leveraging data from the Swedish MS registry and national demographic and health registries recording all outpatient-treated and inpatient-treated infections and antibiotics prescriptions from 1 January 2012 to 30 June 2021. The risk of infection during treatment was compared by DMT, treatment duration, number and type of prior treatment and adjusted for a number of covariates. Results Among 4694 patients with RRMS on B-cell depletion (rituximab), 6049 on other DMTs and 20 308 age-sex matched population controls, we found higher incidence rates of inpatient-treated infections with DMTs other than rituximab used in first line (10.4; 95% CI 8.1 to 12.9, per 1000 person-years), being further increased with rituximab (22.7; 95% CI 18.5 to 27.5), compared with population controls (6.6; 95% CI 6.0 to 7.2). Similar patterns were seen for outpatient infections and antibiotics prescriptions. Infection rates on rituximab did not vary between first versus later line treatment, type of DMT before switch or exposure time. Conclusion These findings underscore an important safety concern with B-cell depletion in RRMS, being evident also in individuals with shorter disease duration and no previous DMT exposure, in turn motivating the application of risk mitigation strategies.
INTRODUCTION
The treatment landscape of multiple sclerosis (MS) has evolved drastically over the last decade, with the introduction of increasingly effective disease-modifying therapies (DMT).1 2 While newer treatment options offer stronger suppression of MS inflammatory disease activity compared with older DMTs, they may also be associated with particular treatment-related risks.
Infections, in particular, constitute a primary concern in the era of modern MS treatments,3 4 with evidence for differences in qualitative and quantitative aspects of infection risk depending on the mode of action of DMTs.6 7 Depletion of B-cells represents an increasingly used treatment modality in both relapsing-remitting MS (RRMS) and progressive MS, and has been associated with primarily bacterial infections compared with platform therapies in observational settings.7 Interestingly, though B-cells play an important role in physiological immune functions, neither the rate of mild nor severe infections differed significantly compared with control treatment arms in the registration studies for ocrelizumab and ofatumumab.8 9 In line with this, laboratory parameters associated with increased
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ B-cell depleting therapy is highly effective in relapsing-remitting multiple sclerosis, but is also associated with an emerging safety signal for increased infection risk. How this risk is related to treatment-related factors, such as treatment history and duration of B-cell depletion, is unknown.
WHAT THIS STUDY ADDS
⇒ Following a nationwide relapsing-remitting multiple sclerosis cohort exposed to different therapies over 9 years, we found a doubled rate of serious infections on rituximab compared with non-B cell-depleting therapies combined.
The increased risk of infection with rituximab was not modulated by previous multiple sclerosis therapies or the duration of treatment.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Infection risk mitigation strategies are motivated with B-cell depletion in RRMS regardless of treatment history and duration of exposure to B-cell depletion.
infection risk, primarily hypogammaglobulinemia, may develop with longer treatment exposure, and lack of B-cells is associated with reduced humoral vaccination responses.13 14 Emerging data also indicate that infection risk may be greater in people with MS (PwMS) treated in routine clinics compared with clinical trial populations.10 This also includes an increased risk of severe COVID-19 with B-cell depletion compared with other treatment modalities.15 16 We previously reported an increased risk for hospital-treated infection with rituximab, a B-cell depleting drug used off-label for MS, compared with older platform self-injected MS DMTs in a nationwide cohort study.7 Leveraging a larger study cohort with longer follow-up time, we here sought to refine risk estimates and, in particular, to determine to what degree time on treatment and treatment history impact infection risk with B-cell depletion. The specific objectives of the study were to address whether prior exposure to MS therapies predicts the risk of infection during treatment with rituximab and, secondarily, whether infection risk is modified by treatment history when a DMT switch is performed from another DMT to rituximab, or by longer exposure to treatment.
METHODS
We performed a nationwide cohort study by linking data on MS treatment and patient characteristics in the Swedish MS Register (SMSreg) to national health registries. The SMSreg is used in all Swedish neurology departments and has an estimated national coverage of about 80% of prevalent MS cases, with high validity of registered data.17 18 The national health registries have virtually complete coverage and for this study comprised the Total Population Register with information on demographic variables, the Swedish Patient Register with information on inpatient (since 1964, with national coverage since 1987) and outpatient (since 2001) specialised care diagnoses coded according to Swedish revisions of the International Classification of Diseases, the Causes of Death Register (since 1952) and the Swedish Prescribed Drug Register with information on all collected prescribed drugs (since 2005).
Study cohort
The primary study cohort comprised all PwMS (age ≥18) with RRMS in SMSreg who started rituximab for the first time from 1 January 2012 to 30 June 2021 (N=5999). Incidence rates (IR) of infection in the main cohort were compared with rates among PwMS who, in the same period of time, initiated any of: injectables (N=2250; comprising interferons and glatiramer acetate), natalizumab (N=1990), fingolimod (N=1552) or dimethyl fumarate (N=2745). Since the aim was to estimate risks attributable to current therapy, follow-up was censored at treatment discontinuation (ie, 'on drug' analyses). To further benchmark infection rates, we created a population-based comparator group of MS-free individuals by matching rituximab initiators 1:5 to subjects randomly drawn from the Swedish population (N=22 806). To avoid immortal time bias due to possible retrospective recording of data in the SMSreg, patients with MS with the start of index treatment >90 days before recorded inclusion in SMSreg were excluded (N=1492). Individuals with any record of infection within 180 days before the index date were excluded to avoid ongoing infections (PwMS=2117, general population comparators=2498). Patients whose treatment history included mitoxantrone or stem cell transplantation were also excluded (N=56), as were those who started injectables, glatiramer acetate, natalizumab, fingolimod or dimethyl fumarate after treatment with rituximab (N=128).
Outcomes
The main outcome was time to first severe infection, defined as hospitalised infection or death due to infection. The two secondary outcomes were: (1) time to specialist care outpatient infection, and (2) time to any filled prescription of a systemic antibiotic. Details on the outcome definitions are available in online supplemental table S1.
Exposures and follow-up time
Time at risk was counted from the treatment start date (index date) until the outcome of interest, 90 days after recorded discontinuation of therapy (recorded stop of therapy or start of a new therapy, if earlier), emigration, death, exclusion from SMSreg, start of mitoxantrone or haematogenic stem cell therapy, or the data extraction date (30 June 2021), whichever occurred first. For the general population comparator group, the index date was defined by the matched patient's rituximab initiation date, and follow-up ended if a diagnosis of MS was made. An individual who started more than one DMT during the observation period could contribute data to multiple treatment periods.
To assess whether infection rates varied by first versus later treatment line, and to put infection rates on rituximab in the context of other DMTs and the general Swedish population, we first divided the full cohort into five exposure groups: (1) treatment-naïve PwMS starting rituximab (rituximab first line), (2) PwMS switching to rituximab from another therapy (rituximab later line), (3) treatment-naïve PwMS starting injectables, natalizumab, fingolimod or dimethyl fumarate (other DMT first line), (4) PwMS switching to injectables, natalizumab, fingolimod or dimethyl fumarate from another non-rituximab DMT (other DMT later line) and (5) population-based MS-free controls.
To further investigate the impact of previous DMT, we then restricted the sample to PwMS switching to rituximab from another therapy within 180 days, stratified by the type of DMT switched from: (1) natalizumab, (2) injectables, (3) dimethyl fumarate and (4) fingolimod.
Finally, to assess the impact of the duration of rituximab treatment, we included all subjects starting rituximab and split the follow-up time by year since treatment start.
Covariates
Other covariates and potential confounders were assessed at the index date. Demographic covariates included age, sex, region of residence, country of birth (categorised as Sweden or other) and highest educational level (categorised as ≤12 years or >12 years). Medical history was assessed from any filled prescription or diagnosis in the 5 years before treatment start and included any serious inpatient-treated infection, outpatient infection treated in specialist care, systemic antibiotics and a modified Charlson Comorbidity Index.19 MS-specific covariates included time since diagnosis, any relapse within 1 year prior to treatment start, Expanded Disability Status Scale (EDSS) score and Multiple Sclerosis Impact Scale (MSIS-29) physical and psychological scores. The number of previous treatments was included as a covariate in the analysis of the rituximab switch group. The
year of treatment start was included to account for any differences in the risk for infection or recorded treatment according to the calendar year, and the general population comparator group was assigned a start year based on the date of start of index treatment for the matched patient with MS.
Statistical analyses
Tabulations of baseline patient characteristics and numbers of events were made for each exposure group. The cumulative incidence of serious infections by time on rituximab was plotted as one minus the Kaplan-Meier survival estimate, and hazards were plotted by Epanechnikov kernel-smoothing of Nelson-Aalen estimated cumulative hazard increments as implemented in the muhaz package in R. IRs per 1000 person-years with 95% CIs were calculated for all outcomes, and Cox proportional hazards models were fitted to estimate HRs for the occurrence of the first infection after treatment start, with time since treatment start date as the time scale in analyses of DMT history, and age as the time scale in the analysis of duration of rituximab use. To assess the impact of different covariates, a series of models was run, incrementally adding the domains of covariates listed earlier. Age was modelled with second-degree polynomials. Robust SEs were used to account for observations repeatedly included for multiple treatment starts. There were no missing data in most variables derived from national registers, but EDSS and MSIS-29 were missing for 49.9% and 34.3% at the index date. The latter two were therefore categorised into quartiles, with an indicator for missing data included in the analyses. Cumulative incidence and hazard were plotted using R V.4.3.1, while SAS V.9.4 was used for all other statistical analyses.
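As a hedged illustration of one of the quantities reported throughout the Results, the following Python sketch computes an incidence rate per 1000 person-years with an exact Poisson 95% CI. The event count and person-time below are placeholders, not the study data, and the Cox and kernel-smoothing analyses (run in R and SAS) are not reproduced here.

```python
# Sketch only: placeholder counts, exact Poisson (Garwood) confidence limits.
from scipy.stats import chi2

def ir_per_1000_pyr(events: int, person_years: float, alpha: float = 0.05):
    """Incidence rate per 1000 person-years with an exact Poisson 95% CI."""
    rate = 1000 * events / person_years
    lo = 1000 * chi2.ppf(alpha / 2, 2 * events) / (2 * person_years) if events else 0.0
    hi = 1000 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return rate, lo, hi

# e.g. 120 serious infections over 5300 person-years on treatment (placeholder)
print("IR (95%% CI): %.1f (%.1f to %.1f)" % ir_per_1000_pyr(120, 5300))
```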
Sensitivity analyses
We performed two sensitivity analyses. First, to exclude potential bias introduced by the COVID-19 pandemic, all analyses were run with follow-up ending 29 February 2020. Second, the analysis of the impact of previous DMTs was restricted to participants who had been treated for at least 1 year with the previous DMT.
RESULTS
In total, we included 10 743 PwMS and 20 308 population-based controls in the full cohort, with 1458 participants in the first-line rituximab group and 3236 in the later-line group, and 2774 and 3275 in the corresponding other MS DMT groups. Baseline characteristics are detailed in table 1. In brief, the two first-line groups had a shorter disease duration, a higher proportion with a history of recent relapse and a lower mean age at the start of index therapy compared with the later-line groups. Comorbidity burden, disability status (EDSS) and self-reported impact of MS (MSIS-29) were similar both by line of treatment and DMT. Proportions with a history of inpatient-treated infection and use of systemic antibiotics prior to DMT start were greater in the later-line compared with first-line cohorts, while the history of outpatient-treated infections was similar. The year of start of index therapy (or matched year for the MS-free population) varied between groups, with an earlier start in the other DMT later-line group and the most recent start year in the first-line rituximab group. Exposure groups also differed regarding area of residence, with increased proportions of individuals exposed to rituximab in the regions of Northern Sweden and Stockholm, compared with the rest of Sweden.
Infection risk on rituximab versus other DMTs
Serious (inpatient-treated or fatal) infections were more frequent in the two rituximab cohorts compared with the corresponding other DMT groups, and did not differ between first- or later-line use (figure 1). Thus, IRs for serious infection were 22.7 versus 21.4 events per 1000 person-years (PYR) in the first- and later-line rituximab groups, respectively. This was not meaningfully impacted by adjustment for measured confounders (HR from the most adjusted model was 0.82, 95% CI 0.63 to 1.07; for intermediate models see online supplemental table 2). Corresponding rates with other DMTs were significantly lower, 10.4 and 11.4 in the first- and later-line cohorts, respectively, and 6.6 among MS-free controls. Adjusted HRs for serious infections from Cox regression estimated a roughly halved risk with other DMTs compared with rituximab, and a risk reduced by two-thirds in the MS-free population. A slightly reduced magnitude of relative risk across groups was recorded for outpatient-treated infections, but the pattern remained similar; the first-line rituximab group displayed the highest IR with 57.0 events per 1000 PYR, followed by 52.7 in the rituximab later-line group, 36.2 and 37.0 in the other DMT first- and later-line groups, respectively, and 22.5 in the control group. In contrast, the IR of prescription of systemic antibiotics was numerically highest in the rituximab later-line group, though not significantly higher than with rituximab first line, with a pattern otherwise remaining similar to the previous outcomes. Three cases of progressive multifocal leukoencephalopathy were recorded during follow-up: two in natalizumab-treated patients and one in the rituximab later-line group, diagnosed soon after the switch from natalizumab. Nine fatal infections were recorded, six in the control group and three among PwMS, all of whom were or previously had been treated with rituximab.
Infection risk on rituximab by previous DMT
Next, we explored the risk of infection with a switch to rituximab from specific DMTs (figure 2). This restricted cohort included 2644 PwMS switching to rituximab from another DMT, with 886 switching from natalizumab, 786 from injectables, 626 from dimethyl fumarate and 346 from fingolimod (descriptive statistics in online supplemental table 3). IRs for serious infections ranged from 15.3 events per 1000 PYR with the switch from injectables to 24.1 with the switch from natalizumab, though with overlapping 95% CIs (figure 2). A similar non-significant trend for lower risk with the switch from injectables was observed also for outpatient infections, but not for the prescription of antibiotics.
Infection risk by time on rituximab
To address whether the duration of exposure to rituximab impacted infection risk, we plotted the Kaplan-Meier estimate of the cumulative incidence and kernel-smoothed hazard curves for the first occurrence of serious infection (figure 3). As expected, the proportion of patients who experienced at least one serious infection increased over time, but there was no indication that the hazard of experiencing an infection increased over time. The population remaining on rituximab treatment may have changed in meaningful ways over time, as people discontinuing treatment may differ from those who remain. Drug survival has previously been shown to be higher on rituximab than on other DMTs; in this material, 90% remained on rituximab 3 years after treatment initiation, with about 60% remaining after 8-9 years (the latter with high uncertainty given that only 37 patients remained at 9 years) (online supplemental figure S1). Testing the difference in hazards over time in multivariable Cox regression, we found that accounting for differences in baseline
covariates between patients with long versus short treatment duration did not have a strong impact on the observed association with treatment duration. The hazard was significantly increased in years 2-5 compared with the first year of exposure, but with no significant risk increase thereafter (figure 4). Considering both the shape of the hazard curve and the pattern of adjusted HRs, infection risk appeared lower in the first exposure year, stable or marginally increasing in years 2-5, and then unchanged. The estimate for year 5 was somewhat higher, but any trend this may have indicated was not confirmed in later years, and the possible trend was further attenuated in the sensitivity analysis ending follow-up at the arrival of the COVID-19 pandemic (see next section). Lack of support for an increased risk with longer exposure to rituximab was more evident for outpatient infections and antibiotics use, the latter even showing a significantly lower hazard over time (figure 4).
Sensitivity analyses
Finally, we performed two sets of sensitivity analyses. In the first, we censored follow-up on 29 February 2020 to exclude a possible bias introduced by the COVID-19 pandemic. This somewhat reduced the IRs for serious infections, while rates of antibiotics prescriptions increased, but differences between rituximab and the other DMTs remained similar (online supplemental table 4). Findings were similar also in the analysis of the rituximab switch cohort, with a tendency for lower point estimates for serious infections and higher for antibiotics use (online supplemental table 5). No differences in the incidence rate of serious or outpatient-treated infections by duration of rituximab exposure were statistically significant after censoring follow-up during the COVID-19 pandemic (online supplemental figure 2).
In the second sensitivity analysis, we restricted the study population of the rituximab switch cohort to those with ≥1 year of exposure to the previous therapy, but with no major impact on the results (online supplemental table 6).
DISCUSSION
In this large cohort study leveraging high-quality nationwide population-based data, we corroborate and extend prior observations on infection risks across different MS DMTs, with special reference to B-cell depletion. The cohort partly overlaps with our previous study,7 but with a larger RRMS cohort (10 743 vs 8519) and longer observation time (until June 2021 vs December 2017), thereby generating more precise risk estimates. In the prior study, we observed HRs for serious infections with non-B-cell depleting DMTs in the range of 0.59 to 0.77 compared with rituximab, against 0.48 in this data set, thus indicating a greater relative risk increase with rituximab. Most importantly, however, we here also addressed two potentially important covariates, namely the impact of prior treatment history and rituximab exposure time. Contrary to our expectations, we did not detect any meaningful differences between treatment-naïve and previously treated PwMS, neither at the group level nor with specific prior DMTs, although there was a trend for lower risk with a switch to rituximab from injectables than from highly effective DMTs.
We also did not detect a clear trend for increased infection risk with longer exposure to B-cell depletion, except that the risk was lower in the first year of treatment compared with later on. This observation appears to contrast with a recent Norwegian study concluding that treatment duration with rituximab was a predictor of the risk of hospitalisation for infection.11 Similarly, observational studies with different B-cell depleting drugs (rituximab, ocrelizumab, ofatumumab) in mixed patient populations
have indicated an increased risk of development of hypogammaglobulinemia and infectious events over time.10 12 20 Apart from being based on smaller cohorts (n=184, 291, 447 and 565, respectively) and with more heterogeneous case mixes, two of the studies compared patients with versus without infections at any point during follow-up,10 12 and one compared the annualised infection rate in the first two treatment years with treatment years three and beyond while only including patients who had been treated with B-cell depletion for at least 2 years, thus requiring drug survival in the first but not in the second period of comparison.20 Besides the substantially larger study population, enabling more precise risk estimates, we here also controlled for a much wider range of confounders, although this only modestly impacted results and therefore remains an unlikely explanation for these discrepant results. Instead, we note that analysing infection as a binary end-point without accounting for the time scale corresponds to an analysis of the cumulative incidence, which would inherently be expected to increase with follow-up time (cf. our figure 3). In line with this interpretation, the Norwegian study analysed the presence of infection as a binary event (in a total of 57 subjects with infection) and found an adjusted OR of infection per treatment year of 1.52 (95% CI 1.11 to 2.09), but simultaneously noted that there was no evidence for an increase in IR over treatment years.11 The difference between studies may thus be an artefact of different handling of timescales. We cannot exclude, however, that differences in clinical management may also play a role. The Swedish MS society recommends a low-dose rituximab protocol with 500 mg of rituximab as a single infusion every 6 months, and that dose-interval extension should be considered with the occurrence of hypogammaglobulinemia or infections.
It also deserves to be noted that the risk of hypogammaglobulinemia may differ depending on the type of drug and the doses used. We previously conducted a real-world study comparing the impact on immunoglobulin G (IgG) concentrations of the Swedish low-dose rituximab protocol with standard dosing of ocrelizumab, indicating no significant drop over the first year with the former, but a mean 0.16 g/L drop with each infusion with the latter.21 Similarly, a systematic review of studies with ofatumumab and ocrelizumab in MS suggests that the impact on IgG concentrations is smaller with ofatumumab.22 Therefore, we cannot exclude that an increased infection risk over time could become evident with the use of standard dosing at regular intervals of ocrelizumab or higher doses of rituximab, owing to greater proportions of patients developing hypogammaglobulinemia.
Interestingly, recent observational data suggest that the risk of recurrence of disease activity is low even with the substantially longer treatment intervals introduced during the COVID-19 pandemic.23 24 Although it seems biologically plausible that treatment with anti-CD20 therapy over extended time periods should increase infection risks, our data instead suggest that in clinical practice the risk increase over time is limited.
Although not tested in this study, it may therefore be speculated if current risk-mitigation strategies have blunted an increased risk over time that otherwise would have been seen.
In this study, we also analysed milder outpatient-treated infections and prescriptions of systemic antibiotics, the latter including all prescriptions made in primary care. We found a largely similar pattern for these outcomes, but while the absolute risk difference was larger for most comparisons, owing to overall higher incidence rates, the relative risk was smaller than for serious infections.
This study has certain limitations. These include a lack of data on tobacco smoking status, but since Sweden has a very low rate of daily smokers (6% of the general population in 202128), the impact is likely to be small. We also chose not to analyse cumulative dose or dosing intervals because of a significant degree of missing data, though a large majority can be expected to have been exposed to bi-annual infusions of 500 mg of rituximab, at least until the COVID-19 pandemic outbreak. A further limitation is that, as we did not have access to laboratory data, we could not include immunoglobulin levels or lymphocyte counts in the models to potentially identify relevant subgroups among rituximab-exposed individuals varying in infection risk.
In sum, we find an approximately doubled risk of serious infections with rituximab compared with other DMTs, regardless of whether it is used first line or as an escalation or switch agent, and the magnitude of the risk increase appears to be stable over exposure time. This information is important for patients and treating neurologists in discussions of the benefit-risk balance of different treatment strategies. In parallel, additional efforts should be made to further develop risk mitigation strategies to diminish treatment-related risks with anti-CD20 therapies.
Acknowledgements We thank the people with multiple sclerosis and clinicians contributing data to the Swedish MS Register, as well as MSc Simon Englund and Dr Peter Alping for help setting up the linkage database. The content and views reported here are solely the responsibility of the authors and do not necessarily represent the views of PCORI, its Board of Governors or Methodology Committee. Further funding included the Swedish Research Council (grants no: 2020-02700 and 2021-01418), Stockholm County (grant no: 20200451), the Swedish Brain Foundation and Neuro Sweden. The funding sources had no role in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication.
Contributors
Competing interests FP has received research grants from Janssen, Merck KGaA and UCB, and fees for serving as Chair of DMC in clinical trials with Chugai, Lundbeck and Roche, and preparation of witness statement for Novartis.SV and TF declare no competing interests.
Patient consent for publication Not applicable.
Ethics approval Ethical approval for the study was obtained from the Swedish Ethical Review Authority (DNR: 2021-02384).
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Release of data requires ethical approval for a specified research purpose. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made.See: https://creativecommons.org/ licenses/by/4.0/.
Figure 1 Infection rate on rituximab as first- or later-line therapy compared with alternative DMTs and the general population. DMT, disease-modifying therapy; events, number of patients with infection; HR, HR with 95% CI from Cox proportional hazards model adjusting for age, sex, start year, comorbidities, demographics and (comparison between MS cohorts only) MS clinical characteristics; IR, incidence rate per 1000 PYR; N, number of patients; PYR, person-years of follow-up.

Figure 2 Infection rate on rituximab after switching from natalizumab compared with switches from injectables, dimethyl fumarate and fingolimod. DMF, dimethyl fumarate; events, number of patients with infection; FGL, fingolimod; HR, HR with 95% CI from Cox proportional hazards model adjusting for age, sex, start year, comorbidities, demographics, MS clinical characteristics and treatment history; INJ, injectables; IR, incidence rate per 1000 PYR; N, number of patients; NTZ, natalizumab; PYR, person-years of follow-up; RTX, rituximab.

Figure 3 Cumulative incidence (95% CI) and hazard of serious infection while on rituximab.

Figure 4 Infection rate on rituximab by time on treatment. Events, number of patients with infection; IR, incidence rate per 1000 PYR; N, number of patients; PYR, person-years of follow-up; HR, HR with 95% CI from Cox proportional hazards model adjusting for age, sex, start year, comorbidities, demographics and MS clinical characteristics.
Conception or design of the work: SV, FP and TF. Data collection: SV, FP and TF. Data analysis: SV and TF. Data interpretation: SV, FP and TF. Drafting the article: SV. Critical revision of the article: SV, FP and TF. TF was the guarantor of the work, accepting full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish.

Funding Research reported in this study was partially funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (MS-1511-33196).
Layered feature representation for differentiable architecture search
The differentiable architecture search (DARTS) approach has made great progress in reducing the computational costs of automatically designing neural architectures. DARTS tries to discover an optimal architecture module, called the cell, from a predefined super network containing all possible network architectures. A target network is then constructed by repeatedly stacking this cell multiple times and connecting the copies end to end. However, this repeated depth-wise design pattern fails to sufficiently extract the layered features distributed in images and other media data, leading to poor network performance and generalization. To address this problem, we propose an effective approach called Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS). Specifically, we iteratively search for multiple cell architectures from shallow to deep layers of the super network. For each iteration, we optimize the architecture of a cell by gradient descent and prune out weak connections from this cell. Meanwhile, the super network is deepened by increasing the number of this cell to create an adaptive network context to search for a depth-adaptive cell in the next iteration. Thus, our LFR-DARTS can obtain the cell architecture at a specific network depth, which embeds the ability of layered feature representation into each cell to sufficiently extract layered features of data. Extensive experiments show that our algorithm solves the existing problem and achieves more competitive performance on the datasets of CIFAR10 (2.45% error rate), fashionMNIST (3.70%) and ImageNet (25.5%) at low search cost.
Introduction
Over the last few years, deep neural networks (DNN) have demonstrated powerful capabilities of feature extraction (Ravi and Zimmermann 2001;Guyon et al. 2008;Cai and Zhu 2018;Nixon and Aguado 2019) and data mining (Bramer 2007;Zhu 2009;Cai et al. 2020;Verma and Singh 2021.) Thus, DNN is applied to a large variety of challenging tasks, such as image recognition (Hu et al. 2018;Kaiming et al. 2016), speech recognition (Geoffrey et al. 2012;Alex et al. 2013), machine translation (Sutskever et al. 2014;Wu et al. 2016), and other complex tasks (Sunil et al. 2019;Lotfollahi et al. 2020;Heydarpour et al. 2016;Bobadilla et al. 2021). But designing an advanced neural network typically requires substantial efforts of human experts. To eliminate such a handcraft process, neural architecture search (NAS) (Zoph and Le 2016;Zoph et al. 2018;Real et al. 2019) has been proposed to automatically search for a suitable neural network from a predefined search space. Its excellent performance has increasingly attracted researchers' attention.
Most NAS approaches apply reinforcement learning (RL) (Zoph and Le 2016; Zoph et al. 2018; Irwan et al. 2017) or evolutionary algorithms (EA) (Real et al. 2017; Liu et al. 2017; Real et al. 2019) to perform architecture search. Both search procedures require sampling and evaluating numerous architectures from a discrete search space to obtain the optimal one, which incurs prohibitive computational overhead. For example, NASNet (RL-based) trains and evaluates more than 20,000 neural networks across 500 GPUs over 4 days. AmoebaNet (Real et al. 2019) (EA-based) even takes 3150 GPU-days to discover an optimal neural architecture.
To eliminate this high computational overhead, Liu et al. (2018b) recently proposed differentiable architecture search (DARTS), which relaxes the discrete search space to be continuous and optimizes a common cell architecture in a super network (also called the search network in the following) by gradient descent. The identical cells are then stacked multiple times and connected end to end to construct a target network for a specific task. This kind of NAS approach indeed reduces computational costs through the differentiable search strategy. However, the resulting target network shows poor performance on test datasets, especially when transferred to a large-scale dataset, since this repeated, simple depth-wise structure can hardly extract the layered features distributed in media data sufficiently. For image data, the layered features express semantic information of different granularities. In general, this semantic information needs to be handled by convolutional kernels with different configurations, but the simple neural architecture from DARTS obviously cannot fully extract and utilize these useful features. Therefore, how to search for cell architectures with layered feature representation for a target network becomes our research question.
To address the above problem, we propose an effective approach called Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS). Specifically, we initialize a search network constituted by multiple cells with all candidate operations and then iteratively search for the architecture of each cell from shallow to deep layers of the search network. For each iteration, we first optimize the architecture of a specified cell by gradient descent and gradually prune out weak connections from this cell. To effectively learn the importance of candidate operations and highlight the optimal ones during this process, we design a new functional network layer called Normalization-Affine and introduce an entropy constraint for the operations being optimized. When the optimal architecture of a cell is obtained, we deepen the search network by increasing the number of this cell to N (a configurable hyperparameter) copies in its original location while keeping the other cells unchanged, so as to create an adaptive network context for searching a depth-adaptive cell in the next iteration. Therefore, our LFR-DARTS searches each cell at a specific and adaptive network depth, which is conducive to embedding the ability of layered feature representation into each cell to sufficiently extract data features (Fig. 1).
In terms of search efficiency, our approach takes less search time than DARTS since we constantly prune weak operations from the search network, progressively accelerating the forward and backward propagation of the network. Moreover, the optimization of the cell architecture is simpler yet more efficient than in DARTS, as demonstrated by the diagnostic experiments in Sect. 4.3. We validate LFR-DARTS on the image classification tasks of CIFAR10, fashionMNIST and ImageNet. We take only 0.45 GPU days (NVIDIA GTX1080Ti) to obtain an optimal neural architecture on the training set of CIFAR10. Our neural network achieves state-of-the-art performance on the validation set of CIFAR10 (i.e., a 2.65% test error rate with 2.7M parameters and a 2.45% test error rate with 4.4M parameters). We then transfer the neural architecture to the fashionMNIST and ImageNet datasets. Under the same circumstances, our network achieves a 3.70% test error rate on fashionMNIST (with 2.5M parameters) and 74.5% top-1 accuracy on ImageNet (with only 4.9M parameters).
In summary, we make the following contributions in this work:

1. We propose a layered feature representation approach for differentiable architecture search to solve the problem of insufficient layered feature extraction in DARTS. Firstly, we design a hierarchical search scheme that searches a depth-adaptive cell architecture in each search iteration. At the end of each iteration, we dynamically increase the number of the currently obtained cell to N copies in its original depth location so as to deepen the search network. Compared with other differentiable search approaches, our hierarchical and dynamic search scheme allows the discovered network to sufficiently extract feature information of different granularities and levels and to integrate it to make decisions.
2. A new functional network layer (called Normalization-Affine) and an entropy constraint are developed to highlight important operations among candidates while suppressing other weak operations. This provides higher reliability for optimal architecture selection.
3. Extensive experiments show the advantages of our method in neural architecture search. Compared to other DARTS approaches, our discovered cells are able to represent different levels of feature information hidden in data. Therefore, our algorithm achieves competitive or even better network performance and generalization on several datasets.
Related work
In recent years, NAS has become a research hotspot in artificial intelligence. Many search algorithms have been proposed to explore neural networks. According to the strategies used to explore the search space, existing NAS approaches can be roughly divided into three categories (Thomas et al. 2018), i.e., reinforcement learning (RL)-based approaches, evolutionary algorithm (EA)-based approaches, and gradient-based approaches.
Early approaches (Zoph and Le 2016; Zoph et al. 2018; Bowen et al. 2016; Cai et al. 2018; Liu et al. 2018a) use RL to optimize the search policy for discovering optimal architectures. NASNet trains a recurrent neural network as a controller to decide the types and parameters of neural networks sequentially. ENAS (Hieu et al. 2018) reduces the computational burden of NASNet by sharing the weights of common operations among child networks. The EA-based methods apply evolutionary algorithms to evolve and optimize a population of network structures (Real et al. 2017, 2019; Wang et al. 2020; Ma et al. 2020). AmoebaNet (Real et al. 2019) encodes each neural architecture as a variable-length string. The strings mutate and recombine to produce a new population of networks. High-performance networks are retained and generate the next, more promising generation.
But both RL-based and EA-based approaches require excessive computational overhead despite achieving strong performance. To address this issue, gradient-based approaches (Liu et al. 2018b, a, c; Xuanyi and Yang 2019) have been proposed to accelerate architecture search. Typically, DARTS relaxes the discrete search space to be continuous and utilizes gradient descent to jointly optimize the neural architecture and network weights. SNAS (Liu et al. 2018c) proposes to constrain the architecture parameters to be one-hot to tackle the inconsistency in optimization objectives between the search and evaluation scenarios. GDAS (Xuanyi and Yang 2019) develops a differentiable sampler over the search space to avoid simultaneously training all the neural architectures in the space. DARTS+ (Liang et al. 2019), RobustDARTS (Arber et al. 2020) and PDARTS (Liu et al. 2018c) employ early stopping to restrict the excessive number of "skip" operations. FairDARTS (Chu et al. 2020) proposes a collaborative competition strategy to address the unfair advantage in exclusive competition. NASSA (Hao and Zhu 2021) designs a new importance metric for candidate operations for more reliable architecture selection. Although the gradient-based approaches show high search efficiency, their network structures lack the ability of layered feature representation.
Method
In this section, we present our proposed algorithm Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS) in detail. We first introduce a classical differentiable NAS algorithm DARTS in Sect. 3.1, which is a basis of our LFR-DARTS. Then, we describe the concrete search procedure of our algorithm in Sect. 3.2. Finally, in Sect. 3.3, we introduce a minimum entropy constraint and formulate the gradient optimization for the search network.
Preliminary: DARTS
In DARTS, the goal of architecture search is to discover an optimal cell with the most important operations from a search network. The search network consists of L identical cells with the given candidate operations. These cells connect with each other in order, and each cell is considered as a directed acyclic graph of B nodes $\{x_0, x_1, \ldots, x_{B-1}\}$, where $x_0$ and $x_1$ are two input nodes of this cell, $x_{B-1}$ is the output node, and the others are intermediate nodes. The nodes are connected to predecessors by multiple kinds of operations (e.g., convolution, pooling). These operations share an operation space $O$ (Table 1), in which each operation is represented as $o(\cdot)$. The feature transformation $f(\cdot)$ from node $i$ to a subsequent node $j$ can be represented by the weighted sum of these operations:

$$f_{i,j}(x_i) = \sum_{o \in O} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in O} \exp(\alpha_{o'}^{(i,j)})}\, o(x_i), \tag{1}$$

where $x_i$ is the feature maps of node $i$, and $\alpha$ is the architecture parameter, which is used to weight its corresponding operation.
Each intermediate node $x_j$ is represented by all of its predecessors:

$$x_j = \sum_{i < j} f_{i,j}(x_i). \tag{2}$$

The output $x_{B-1}$ of one cell is calculated by concatenating the intermediate nodes in the channel dimension:

$$x_{B-1} = \mathrm{concat}(x_2, x_3, \ldots, x_{B-2}). \tag{3}$$

The output of this cell will be the input of the next cell. The cell is a special information processing or feature extraction block. Thus, the internal architecture (including operation types and connections between nodes) of the cell is critical to the performance of a neural network.
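To make the continuous relaxation of Eq. 1 concrete, the following PyTorch sketch implements a single mixed edge of a DARTS cell. The candidate operation set and channel size are illustrative assumptions, not the exact operation space of Table 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def candidate_ops(C):
    # illustrative candidate set; the actual operation space O is defined in Table 1
    return nn.ModuleList([
        nn.Sequential(nn.Conv2d(C, C, 3, padding=1, bias=False), nn.BatchNorm2d(C)),
        nn.Sequential(nn.Conv2d(C, C, 5, padding=2, bias=False), nn.BatchNorm2d(C)),
        nn.MaxPool2d(3, stride=1, padding=1),
        nn.Identity(),  # "skip" connection
    ])

class MixedOp(nn.Module):
    """One edge (i, j) of a cell: softmax-weighted sum of all candidate operations (Eq. 1)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = candidate_ops(channels)
        # architecture parameters alpha, one per candidate operation
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)          # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# usage: feature maps of node i flow to node j through the mixed edge
edge = MixedOp(channels=16)
x_i = torch.randn(2, 16, 32, 32)
x_j_contribution = edge(x_i)
```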
The procedure of layered architecture search
A convolutional neural network (CNN) has a hierarchical structure so as to extract the layered visual features of images. As Simonyan et al. (2013) and Zeiler and Fergus (2014) describe, discriminative information is hidden in the feature maps of different layers, and each layer has the characteristic of representing specific features. Many excellent network structures (Szegedy et al. 2015; Kaiming et al. 2016; Hu et al. 2018) consistently obey this rule. But differentiable NAS algorithms (Liu et al. 2018b) search only a single cell architecture (a normal cell and a reduction cell) in the pre-defined search space and then construct a target neural network from repeated copies of these cells. This contradicts the above observation, and there is no guarantee that a neural network with such a repetitive, oversimplified structure can sufficiently extract layered features. It causes poor performance, especially when transferring the cell architecture to a large-scale dataset.
Following these characteristics of neural networks, we propose a new differentiable NAS algorithm called Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS). First, we specify the number of target cells to be searched and initialize a search network with a few identical cells that contain the same structure and candidate operations inside. These cells are connected in order, which naturally places each cell at a different depth of the search network. Then, we iteratively search for multiple cells with different architectures from shallow to deep layers. For each iteration, we first optimize the architecture of a cell by gradient descent and gradually prune out weak connections from this cell. Once a cell has found its optimal architecture, we fix its architecture in the search network and then search for a deeper, depth-adaptive cell in the next iteration.
In order to embed the capability of layered feature representation into the cells, we dynamically increase the depth of the search network during the search process, rather than keeping it static as in DARTS. Concretely, when we discover the optimal architecture of a cell (if it is a normal cell), we replicate it into N copies at its original depth in the search network while keeping the other cells unchanged. In this way, our gradually growing search network creates an adaptive network context for searching optimal cells adapted to different network depths.
But we find that there exist some problems when applying the architecture optimization strategy of DARTS to our search process. First, this optimization strategy in DARTS is applied only to searching a single cell, not multiple cells with different hierarchical features: since the parameters α for architecture optimization in DARTS are shared between cells, only a common cell is optimized and produced. Second, the search procedure is complicated, as DARTS needs to alternately optimize the architecture parameters α and the network weights ω by gradient descent. α is trained on the validation dataset and ω is trained on the training dataset, respectively, which greatly increases the search time. To solve these problems of architecture optimization, we design a new functional layer called Normalization-Affine (NA), which follows immediately after each candidate operation and provides us a selection indicator of optimal operations.
For any candidate operation, our NA functional layer first normalizes the output of this operation and then reweights the normalized result by a trainable parameter to learn its importance. We formulate the NA layer for the k-th operation in a set of candidate operations as

NA_k(x_in) = ϕ_k · norm(x_in),    (4)
norm(x_in) = (x_in − μ) / √(σ² + ε),    (5)

where the trainable weight parameter ϕ_k is referred to as an affine parameter which is used to weight each operation, and ε is a very small value close to zero.

Here x_in = {x_in^1, x_in^2, ..., x_in^m} is the input tensor of the NA layer and the output tensor of the k-th operation, and it contains m feature maps. μ = {μ_1, μ_2, ..., μ_m} and σ = {σ_1, σ_2, ..., σ_m} are the mean vector and standard deviation vector of the mini-batch x_in. μ and σ also contain m elements, and each element corresponds to a feature map of x_in. The normalization function norm(.) partially comes from Batch Normalization (Ioffe and Szegedy 2015), which is one of the most common and useful normalization approaches in CNN models.
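A minimal PyTorch sketch of such an NA layer is shown below; the per-feature-map mini-batch statistics, the placement of ε inside the square root, and the initialization of ϕ are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class NormalizationAffine(nn.Module):
    """NA layer: normalize an operation's output per feature map over the mini-batch,
    then reweight it with a single trainable affine parameter phi (Eqs. 4-5)."""
    def __init__(self, eps=1e-5, init=1.0):
        super().__init__()
        self.eps = eps
        self.phi = nn.Parameter(torch.tensor(init))       # affine (importance) parameter

    def forward(self, x):                                  # x: (N, C, H, W) output of one operation
        mu = x.mean(dim=(0, 2, 3), keepdim=True)           # per-feature-map mini-batch mean
        var = x.var(dim=(0, 2, 3), keepdim=True)           # per-feature-map mini-batch variance
        return self.phi * (x - mu) / torch.sqrt(var + self.eps)
```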
We combine Eqs. 1, 4 and 5 to obtain the information conversion from node j to node i, shown as Eq. 6, where o_k(.) is the k-th operation in the set of candidate operations and x_j is the input of the operation. μ_k and σ_k denote the mini-batch mean and mini-batch standard deviation vectors of the output of the k-th operation. Each NA layer, corresponding to one operation, contains a learnable affine parameter ϕ, which is trained and updated together with the weight parameters ω by gradient descent. Since different cells are located at different depths of the network, the affine parameters of each cell will be trained to learn the layered neural architectures. In addition, we optimize the affine parameters and the weight parameters in the same gradient descent step rather than by alternate optimization, which saves half of the search time compared to DARTS.
Our dynamic search approach gradually prunes out the weak operations from the search network based on the affine parameters. The importance score S of an operation between any pair of nodes is derived from the affine parameters, where ϕ_k denotes the affine parameter corresponding to the k-th operation in the operation space. The larger S_k is, the more likely the corresponding candidate operation is to be retained during the search process. One might doubt whether the normalization is really necessary in the NA layer. We have found through experiments that directly using the affine parameter to weight an operation without beforehand normalization cannot achieve an ideal result. The reason is that the distribution of the outputs from different operations probably varies widely, which makes it very hard to identify the importance of operations from the affine parameters ϕ. For any operation, its weight parameters ω will be optimized and updated simultaneously with the corresponding ϕ. Optimizing the two kinds of parameters together could make them vary synchronously, producing the same result by increasing one and decreasing the other. Therefore, normalization before reweighting is quite necessary, since it makes the results from different operations uniform so that the affine parameters can genuinely represent the importance of operations.
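One way the importance scores and the pruning step could be realized is sketched below, assuming the score of an operation is the normalized absolute value of its affine parameter (the paper's exact formula is not reproduced above); na_layers is a hypothetical container of NA modules, each exposing its parameter as .phi.

```python
import torch
import torch.nn as nn

def importance_scores(phis):
    """Importance of candidate operations on one edge, assumed to be the
    normalized absolute value of the affine parameters."""
    a = torch.abs(torch.stack([p.detach() for p in phis]))
    return a / a.sum()

def prune_edge(ops, na_layers, drop=1):
    """Drop the `drop` weakest operations (and their NA layers) from one edge."""
    scores = importance_scores([na.phi for na in na_layers])
    keep = torch.topk(scores, len(ops) - drop).indices.sort().values.tolist()
    return (nn.ModuleList([ops[i] for i in keep]),
            nn.ModuleList([na_layers[i] for i in keep]),
            scores)
```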
Network optimization with entropy constraint
During the search process, we optimize the affine parameters ϕ together with the weights ω by gradient descent. Then we try to pick out the operation with the highest importance score from the candidates. But the importance scores of operations between a pair of nodes could be very close to each other, which makes it challenging to select an optimal operation among them. Thus we consider adding an entropy constraint over these candidate operations to concentrate the high scores on one or a few operations, so that the operations with high scores can be identified and selected more easily. To this end, we redesign the loss function as

L = L_CE + λ \sum_{p=1}^{B} H_p,    (9)

where L_CE is a general cross entropy loss function and \sum_{p=1}^{B} H_p denotes the summation of entropies w.r.t. all candidate operations in the cell currently being searched; H_p is the entropy of the p-th set of candidate operations. B is the number of nodes in a cell and |O| is the size of the operation space. λ is a scaling factor that controls the rate of convergence.
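A sketch of this objective is given below, assuming the importance-score distribution of each set of candidate operations is formed from the normalized absolute affine parameters; edge_phis is a hypothetical list holding one affine-parameter tensor per set of candidate operations.

```python
import torch
import torch.nn.functional as F

def entropy_constrained_loss(logits, targets, edge_phis, lam=5e-3, eps=1e-12):
    """Cross entropy plus lambda times the summed entropies of the per-edge
    importance-score distributions (Eq. 9)."""
    ce = F.cross_entropy(logits, targets)
    entropy = 0.0
    for phi in edge_phis:                       # phi: 1-D tensor of |O| affine parameters
        s = torch.abs(phi)
        s = s / (s.sum() + eps)                 # importance scores over candidate operations
        entropy = entropy - (s * torch.log(s + eps)).sum()
    return ce + lam * entropy
```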
We try to minimize the loss function in Eq. 9, and this optimization procedure forces the entropy to decrease so that the importance-score distribution S gradually tends towards a single peak. When the scaling factor λ is larger, the constraint is stronger; thus the differences between importance scores become more pronounced after training for a fixed number of steps. We verify the effectiveness of the entropy constraint, and details are presented in Sect. 4.3.2.
With the minimum entropy constraint, we formulate the optimization and gradient computation for the affine parameters. According to Eqs. 6 and 9, the gradient w.r.t. an affine parameter ϕ_k can be computed accordingly, where ∂S_j/∂ϕ_k depends on the sign of ϕ_k, and δ_jk = 1 if j = k and δ_jk = 0 otherwise. From Eqs. 11 and 12, we can observe that the entropy constraint also delivers interactive information between different affine parameters, which pushes the competition among the various operations. Moreover, there is no extra computational burden: training our search network is just like training a common convolutional neural network. The pseudocode of our proposed algorithm LFR-DARTS is presented in Algorithm 1.
Experiments
In this section, we compare the performance of our algorithm LFR-DARTS with other NAS approaches and human-designed networks on several popular image classification datasets, including CIFAR10, fashionMNIST and ImageNet (Jia et al. 2009). Following DARTS (Liu et al. 2018b), we conduct our experiment in two steps: (1) architecture search: searching for the optimal cells on the training dataset of CIFAR10; (2) architecture evaluation: constructing an evaluation network from the obtained cells and testing its performance on the testing datasets of CIFAR10, fashionMNIST and ImageNet.
Architecture search and result
The initial search network G consists of χ = 5 cells, where two reduction cells (with stride = 2) are inserted between three normal cells (with stride = 1). The number of nodes in a cell is B = 7. The initial operation space O is the same as in Zoph et al. (2018) and Liu et al. (2018b), and the size of the space is |O| = 8 at the beginning of each iteration. In each iteration, the search network is trained for T epochs before the final architecture of the corresponding cell is obtained. This search training assures the performance stability of the search network after each pruning step. In our experiment, the value of T is set to 60 because we find through experiments that the accuracy of the search network stays relatively steady after about 60 epochs of training. Fewer training epochs usually lead to performance collapse, as the network parameters have not converged, while more training epochs contribute little to the result. All of our experiments are run on one device with an Intel Core i7-8700K CPU and an NVIDIA GTX1080Ti GPU.
The architecture search is implemented in the deep learning framework PyTorch (Paszke et al. 2017) with 16 initial channels and a batch size of 96. The initial learning rate is 0.025 and is then annealed down to zero following a cosine schedule. A standard SGD optimizer with a momentum of 0.9 and a weight decay of 3 × 10⁻⁴ is adopted. The hyperparameter λ is fixed at 5 × 10⁻³. Other experiment settings remain the same as in DARTS (Liu et al. 2018b). We run the experiment five times with different random seeds and pick out the best cells based on the validation performance. The cells discovered by the LFR-DARTS algorithm are presented in Fig. 2.
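For illustration, the optimizer and schedule described above can be configured as follows; the small Sequential model is only a stand-in for the actual search network, and the inner training pass over CIFAR10 is omitted.

```python
import torch
import torch.nn as nn

# Illustrative optimizer/scheduler setup matching the reported search hyper-parameters.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.025,
                            momentum=0.9, weight_decay=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60, eta_min=0.0)

for epoch in range(60):      # T = 60 epochs per search iteration
    # ... one pass over the CIFAR10 training split (batch size 96) goes here ...
    scheduler.step()         # anneal the learning rate to zero following a cosine schedule
```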
Evaluation on CIFAR10
An evaluation network consisting of 16 cells discovered in Sect. 4.1 is trained from scratch for 600 epochs on the CIFAR10 with mini-batch=50, initial learning_rate=0.025, init_channels=36, drop_path_prob=0.15, momentum=0.9, weight_decay=3 × 10 −4 and auxiliary towers of weight=0.4. We use standard image preprocessing and data augmentations, i.e., randomly cropping, horizontally flipping and batch normalization. Other settings remain the same as Hieu et al. (2018); Liu et al. (2018b). Two reduction cells as Fig. 2(d) and Fig. 2(e) are located at 1/3 and 2/3 of the total depth of the evaluation network, respectively. The other positions of the network are filled with three other kinds of the normal cells, i.e., Fig. 2(a), (b) and (c).
To explore the performance limitation of our discovered neural architecture, we further increase the initial channels to 50 for the evaluation network which contains 15 cells and more parameters (denoted as large settings in Table 2). We compare our neural network with other networks designed by experts and other NAS methods under fair conditions where the parameters are less than 5M for all the NAS networks. Every evaluation network is trained 5 times using different random seeds and the results prove that our discovered network has excellent performance and strong stability.
The test results and the comparison with other approaches are summarized in Table 2. As shown in Table 2, our LFR-DARTS achieves a test error rate of 2.65% with only 2.7M parameters on the validation dataset of CIFAR10. With more parameters (4.4M), LFR-DARTS further reduces the error rate to 2.45%, which almost outperforms the existing state-of-the-art works with less computational cost than DARTS.
Evaluation on fashionMNIST
The discovered cell architectures are first transferred to another dataset called fashionMNIST, consisting of a training set of 60,000 images and a testing set of 10,000 images. Each image is a 28x28 grayscale image associated with a label from 10 classes. The evaluation network is constructed with 15 cells and 36 initial channels. We train this evaluation network and report the results in Table 3.
Evaluation on ImageNet
We transfer our architecture discovered on CIFAR10 to a large-scale dataset named ImageNet, and the result also demonstrates excellent generalization performance. Following the mobile setting in Zoph et al. (2018) and Hieu et al. (2018), we construct our evaluation network with 15 cells and train it for 250 epochs with a batch size of 160, 48 initial channels, a weight decay of 3 × 10⁻⁵, a momentum of 0.9 and an initial SGD learning rate of 0.1 (decayed linearly to 0.0). We keep the other hyper-parameters and settings the same as on CIFAR10. We compare our algorithm with other approaches, and the results are presented in Table 4. We also compare the complete architecture search and evaluation process of our method, DARTS and random search on the three datasets, respectively; please refer to Figs. 3 and 4. Fig. 3 illustrates the loss learning curves on CIFAR10 (a), fashionMNIST (b) and ImageNet (c). Fig. 4 shows the training/testing process at the evaluation stage on the same three datasets. The results show that our method achieves better performance and generalization than the baseline methods.
Efficiency of architecture search
Our LFR-DARTS algorithm shows high search efficiency and space utilization. In the experiment, the architecture search contains χ iterations. In each iteration, we record the time and memory cost of one training epoch, and the results are presented in Table 5. For each iteration, we gradually remove the weak operations from the search network in 3 steps. At the beginning, one training epoch takes 196 seconds with 10,094 MB of GPU memory usage, which reduces to 87 seconds and 8,676 MB of GPU memory usage at the last step. The training time and memory costs decrease steadily, which shows that our method speeds up the search process.
To further show the difference in search efficiency between our algorithm and DARTS, we investigate the forward-propagation and backward-propagation time of our method and DARTS during the search process. For the search networks of both methods, we set the same batch size of 32 and 300 training epochs, and then we monitor the change in the time of one propagation. The results are displayed in Fig. 5, which compares the efficiency of one forward and backward propagation during the search process. The propagation time of our method descends step by step in Fig. 5(a) as the search network constantly drops weak operations, whereas DARTS needs to conduct a bilevel optimization of architecture parameters and network weights simultaneously; its propagation time is illustrated in Fig. 5(b). As can be seen, our method keeps a faster search process than DARTS even in the initial phase, and the search process gets accelerated gradually in the later phases, which also makes it suitable for searching deeper networks.
Effectiveness of entropy constraint
In this section, we experimentally verify the effectiveness of the entropy constraint mentioned in Sect. 3.3. The entropy constraint is a part of the loss function in Eq. 9. Minimizing the entropy over candidate operations makes the distribution of the operations' importance scores tend to a single peak or a few peaks. This distribution makes it easier to filter out the optimal operations, since the constraint highlights the most important operation. In fact, the parameter λ is closely related to the distribution of the importance scores, so an appropriate scaling factor is key. We conduct experiments to compare the effects of different values of λ on the results. During a search stage, we randomly choose one set of operations (containing 8 operations) in a cell being searched and observe the differences between its importance scores under different λ settings. The result is displayed in Fig. 6, where λ = 0.0 means no entropy constraint. The four sub-figures show the distributions of the importance scores at four different training epochs. The distributions vary as training proceeds, and different values of λ have different impacts on the results. In the initial phase of training (Fig. 6(a)), the importance scores are randomly distributed. After a period of training (Fig. 6(b)), the high scores are concentrated on a few operations. From the extensive experiments, we find that the importance scores converge better under λ = 0.005 than under other settings. Too large or too small values of λ could lead to unsatisfactory convergence results; as we can see, the results get worse when λ ≥ 0.05. So an appropriate entropy constraint w.r.t. the candidate operations makes a positive contribution to the process of architecture search.
Discussion
From the visualized results of the discovered cells in Fig. 2, we obtain some interesting observations consistent with common sense: shallow network layers prefer to select small separable convolution kernels (as in Fig. 2(a)), and deeper layers prefer large dilated separable convolution kernels (as in Fig. 2(c)). Therefore, these discovered cells show a striking depth-adaptive characteristic. That is, small-size convolutional kernels in the shallow layers do well in extracting the fine-grained feature information of the data, while the large-size convolutional kernels in the deeper layers are conducive to processing the fused features. Sufficient layered information can provide a more reliable basis for making decisions. However, the network architectures stacked repeatedly in DARTS cannot meet this requirement of feature extraction. LFR-DARTS takes this problem into consideration and thus improves the performance of differentiable approaches. Our method also provides a valuable reference for developing more elaborate and useful architecture cells.
In addition, our search process is divided into multiple stages and performed iteratively. Each cell architecture is searched based on the previously obtained cells and the current network depth. Although these cells are not the global optima, their combination provides an approximately optimal solution for the architecture search. This greedy search scheme is currently adopted by most differentiable search methods, so there remain many promising improvements to be made to such greedy search schemes.
Our work has shown many advantages in designing network architectures for image tasks. It is also worth applying our method to other fields, such as object detection, natural language processing etc. We will further explore the differentiable approaches of architecture search to solve the problems of model automated design in other fields.
Conclusion
In this paper, we propose a novel differentiable NAS algorithm called Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS) to solve the existing problem of insufficient layered feature representation. In this way, LFR-DARTS improves the performance and generalization of the discovered network architectures compared to other differentiable NAS algorithms. Specifically, we develop a layered and dynamic architecture search scheme to discover multiple optimal cells from shallow to deep layers while gradually pruning out the weak operations from the search network. Besides, to effectively learn the importance of candidate operations and highlight the optimal ones during the search process, we design a new functional layer, Normalization-Affine, and introduce an entropy constraint over the operations. Extensive experiments on image classification tasks demonstrate that our algorithm can achieve better performance while requiring low computational costs.
Artificial Bee Colony Algorithm Optimization for Video Summarization on VSUMM Dataset
This paper attempts to prove that the Artificial Bee Colony algorithm can be used as an optimization algorithm in a sparse-land setup to solve Video Summarization. The critical challenge in quasi-real-time video summarization is that ANN-based methods remain time-consuming, as these methods require training time. By doing video summarization in quasi-real-time, we can solve other challenges like anomaly detection and Online Video Highlighting. A simple threshold function is tested to assess the reconstruction error of the current frame given the previous 50 frames from the dictionary. The frames with higher threshold errors form the video summarization. In this work, we have used Image histogram, HOG, HOOF, and Canny edge features as features to the ABC algorithm. We have used Matlab 2014a for the feature extraction and the ABC algorithm for VS. The results are compared to existing methods. The evaluation scores are calculated on the VSUMM dataset for all 50 videos against the two user summaries. This research answers how the ABC algorithm can be used in a sparse-land setup to solve video summarization. Further studies are required to understand the performance evaluation scores as we change the threshold function. Keywords—Artificial Bee Colony optimization; video summarization; online video highlighting; sparse-land; anomaly detection; image histogram; HOG; HOOF; canny edge
I. INTRODUCTION
Since campuses, roads, and public places are constantly monitored by video surveillance, the adoption of VS will be imperative. Skimming through a huge corpus of video data to derive a meaningful summarization requires efficient VS techniques. The need of the hour is to come up with techniques that can be easily deployed and require less training of the algorithms than ANN methods. Some of the existing frameworks work well only in an object tracking environment or another specific setting. In this framework, we have come up with a common approach to do VS, as seen in the results section, evaluated across multiple genres of videos (Tables I and II). The main motivation behind this work is three-fold. Firstly, to prove the use of the ABC optimization algorithm in a sparse-land setup. Secondly, to apply this approach to a (quasi) real-time framework similar to [1]. Thirdly, to adapt to online video content of any domain so that it can be used to solve other challenges in real-time, like anomaly detection [2].
The challenge for any video summarization is to adapt to any domain; some frameworks work well only on a certain domain because their methods are restricted to, or concentrated on, a particular purpose, like detecting humans and vehicles [3]. Methods like the sparse-land approach give the liberty to adapt to videos of any domain, which is also demonstrated in this work by the evaluation scores across multiple genres of videos in the VSUMM dataset in Tables I and II. There has been keen interest in sparse-land based approaches in the literature [4,1,5,6,7,8], hence taking this approach in this paper is well founded.
The rest of the paper is organized as follows: Section II briefs about the related works in VS. Section III describes the proposed ABC method for the VS framework, Section IV deals with the proposed methodology, Section V discusses the experimental results. Finally, Section VI concludes the paper.
II. RELATED WORKS
The optimization algorithms play a vital role in selecting the right frames for video summarization and updating the dictionary D. Various studies on optimization algorithms and their performance metrics, based on storage reduction and computation time, are discussed in [9]. In this paper, we have evaluated the ABC algorithm against a well-known dataset, VSUMM [10], and benchmarked the results on it. In this section, we will go through optimization algorithm selection and different strategies to do VS. In the literature, we find many methods and techniques to do VS: methods based on clustering [11], saliency-based methods calculating the frame importance score for egocentric VS (Lee et al., 2012), and traditional approaches with SVD [12]. In recent years there has been an enormous number of papers based on ANN [13,14,15,16,17,18,19]. ANN methods involve training in supervised or unsupervised approaches, which may not be suitable for near real-time VS. Graph-based methods [20,21,22,23] also require porting the data into a graph database before computation; these specialize in keyframe retrieval and browsing systems. Among all of these methods, the sparse-land based approach still stands out from other techniques due to its simplicity in casting VS as an optimization problem. Other features can easily be plugged in and used with any optimization algorithm, as demonstrated in [9], along with flexibility in selecting the right dictionary shapes and elements and support for quasi (real-time) VS [1], followed by anomaly detection [2]. We also see recent advancements in the sparse-land approach using CSC (Convolutional Sparse Coding Model) on par with current ANN methods [24].
Optimization algorithms from evolutionary methods like ABC [25], PSO [26] and GA [27], as well as ADMM [28] and RMSProp [29], are quite common choices; in this paper, we have used the ABC method for optimization. In the recent literature, we can see ABC usage for VS in [30], where the authors worked on another well-known dataset, SumMe [31], using segment-level data of the video for VS. The global effects of the entire video may not be captured well in such approaches [30]. [30] used the ABC algorithm to identify key video segments and used clustering techniques to arrive at keyframes; the keyframes come from the centers of the clusters. A region-of-interest approach is used to identify important frames, similar to the camshift algorithm proposed in our work to reduce the unwanted frames. The final reduction of keyframes is done via hue histogram comparison. Also, the ABC algorithm has shown better convergence than other algorithms like PSO.
In our previous work [9], we proposed four algorithms to test the optimization time and storage reduction of video summarization. Those tests were performed on random videos from YouTube, whereas this paper evaluates the performance of the ABC optimization algorithm on the known VSUMM dataset for VS; we have also calculated the performance evaluation scores as indicated in the experiments and results section.
[1] used ADMM optimization techniques in a sparse-land setup to solve the VS challenge; these ideas are some of the key foundations of our VS framework, along with dictionary initialization and sparse modeling. References for image restoration can be found in [32]. Image reconstruction is done with the current frame and frames from the dictionary. A high reconstruction error α denotes more change between frames; when the reconstruction error α is high, the frame is included in the summarization [33,5,34,35].
A. Summary of the Contribution
Our contribution in this work is the usage of the ABC optimization algorithm in a sparse-land setup to do VS. The evaluation metrics precision, recall and F1-score are obtained for each individual video to show that the ABC algorithm works on par with other methods, as compared in Table III with earlier reference works [10]. The other two tables, Tables I and II, give the precision, recall and F1-score for all 50 individual videos in the VSUMM dataset. This framework works as a near real-time (quasi-real-time) summarization and anomaly detection framework. The framework can also be easily extended to other advanced sparse-land setups such as CSC [24].
III. THE ABC OPTIMIZATION FOR VS FRAMEWORK
The artificial bee colony (ABC) algorithm comes from the swarm intelligence branch. The ABC algorithm is modeled around the intelligent behavior of honey bees in performing their tasks efficiently to identify the target food locations [25,36]. There are mainly three phases: the employed, onlooker, and scout bee phases. The employed bees are responsible for visiting the existing food sources, the onlooker bees wait for the dance ceremony to select the next food source depending upon the performance of the employed bees, and the scout bees do a random pickup of food sources. The main function of the employed phase is to update the position variable X_new and to find a suitable partner solution X_p; the update equation to calculate the new position is shown as equation 1 below:

X_new = X + φ (X − X_p),    (1)

where X is the current solution and X_p is the partner solution. φ is a random value in the range [-1,1].
The onlooker bees are responsible for selecting the food sources with the highest nectar value F(θ_i), where θ_i is the i-th food source. The population at cycle c is given as P(c) = {θ_i(c) | i = 1, 2, ..., S} (c: cycle, S: number of food sources), and the probability function p(X_i) for choosing the food sources is given below:

p(X_i) = F(θ_i) / Σ_{j=1}^{S} F(θ_j).    (2)
The scout bees perform a random discovery of food sources within the predefined search space limits [X_Min, X_Max]; the random food sources are determined by equation 3 below:

X_i = X_Min + rand(0, 1) · (X_Max − X_Min).    (3)
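A compact Python sketch of the three phases is given below; it uses the generic ABC formulation with a greedy replacement rule, assumes a non-negative fitness function, and its population size, trial limit and bounds are illustrative values rather than the configuration used in our experiments.

```python
import numpy as np

def abc_search(fitness, dim, n_sources=20, limit=30, cycles=100,
               x_min=-1.0, x_max=1.0, rng=None):
    """Minimal ABC sketch: employed, onlooker and scout bee phases (Eqs. 1-3)."""
    rng = rng or np.random.default_rng()
    foods = rng.uniform(x_min, x_max, size=(n_sources, dim))     # initial food sources
    trials = np.zeros(n_sources)

    def try_neighbour(i):
        p = rng.choice([j for j in range(n_sources) if j != i])  # partner solution X_p
        phi = rng.uniform(-1.0, 1.0, size=dim)                   # phi in [-1, 1]
        cand = np.clip(foods[i] + phi * (foods[i] - foods[p]), x_min, x_max)
        if fitness(cand) > fitness(foods[i]):                    # greedy selection
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_sources):                               # employed bee phase
            try_neighbour(i)
        fit = np.array([fitness(f) for f in foods])              # nectar values F(theta_i)
        prob = fit / fit.sum()                                   # onlooker probabilities (Eq. 2)
        for i in rng.choice(n_sources, size=n_sources, p=prob):  # onlooker bee phase
            try_neighbour(i)
        for i in np.where(trials > limit)[0]:                    # scout bee phase (Eq. 3)
            foods[i] = x_min + rng.random(dim) * (x_max - x_min)
            trials[i] = 0

    best = max(range(n_sources), key=lambda i: fitness(foods[i]))
    return foods[best]
```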
IV. PROPOSED METHODOLOGY
The architectural flow for VS is similar to our previous work [9]. The features used are HOG (histogram of oriented gradients) with nine bins covering 20 degrees per bin, HOF (histogram of optical flow), HOOF (histogram of oriented optical flow), and Canny edge detection; the sample feature output of a frame can be seen in Fig. 1 below.
A. Preprocessing of Video Using Camshift
The camshift algorithm is used to preprocess the frames; a wide variety of applications can be found for the camshift algorithm [37,38,39], including object tracking and frame rate and size reduction by capturing only the ROI areas. In our approach, we have used the camshift algorithm to reduce the number of frames, which is an important step in filtering keyframes. The camshift algorithm usage and depiction can be seen in Fig. 2; similar methods can be found in the literature [40].
B. Features Used
The features used in the Matlab implementation, with sample values as depicted in Fig. 1, are as follows: currF is the current frame read, the Canny-edge variable contains the Canny edges, HI is the image histogram, HOG is the histogram of oriented gradients [41], and HOOF is the histogram of oriented optical flow.
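Since the original extraction was done in Matlab, the following Python/OpenCV sketch only illustrates how comparable per-frame features could be assembled; the bin counts (other than the nine HOG orientation bins), cell sizes and Canny thresholds are assumptions.

```python
import cv2
import numpy as np
from skimage.feature import hog

def extract_frame_features(curr_frame, prev_frame):
    gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

    # Image histogram (256-bin grayscale intensity histogram)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).flatten()

    # Canny edge map, summarized here as the fraction of edge pixels
    edges = cv2.Canny(gray, 100, 200)
    edge_density = edges.mean() / 255.0

    # HOG with nine orientation bins (20 degrees per bin)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)

    # Dense optical flow, binned into a histogram of flow orientations (HOOF-like)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hoof, _ = np.histogram(ang, bins=9, range=(0, 2 * np.pi), weights=mag)

    return np.concatenate([hist, [edge_density], hog_vec, hoof])
```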
C. Dictionary of Key Frames
The atom selection for the dictionary is done using an approach similar to that followed in [2,1]. We have selected 50 frames for dictionary comparison; the window of 50 frames is chosen as a computational limit. The current frame's feature values are compared against the previous frames' values as indicated in equation 4, where pre is the previous frame's feature value, cu is the current frame's feature value, α is calculated by the ABC algorithm as indicated in the algorithm section, λ is initialized to a small value of 0.01, and k = 50 atoms are used. Dictionary selection is again a great way to start the summarization with a good representation of the video data; dictionary initialization is discussed in [42,43,44].
D. Threshold as the Reconstruction Error
The threshold α is calculated as the mean over the 50 frames compared against the dictionary in the current cycle, and we advance by 50 frames for the next comparison. The threshold α is compared with the value from equation 4; when there is a higher reconstruction error (a higher value than α), we include the frame in the summarization.
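The following sketch is a simplified proxy for this rule (it does not reproduce equation 4): the reconstruction error of a frame is approximated by its distance to the closest atom of the dictionary built from the previous 50 frames, α is the mean error over the current cycle, and frames whose error exceeds α are kept.

```python
import numpy as np

def summarize(features, window=50):
    """Illustrative threshold rule: process frames in blocks of `window`, use the
    previous block as the dictionary, and keep frames whose error exceeds the
    block-mean threshold alpha."""
    feats = np.asarray(features, dtype=float)
    keyframes = []
    for start in range(window, len(feats) - window + 1, window):
        dictionary = feats[start - window:start]                     # previous 50 frames
        block = feats[start:start + window]                          # current cycle
        errors = np.array([np.linalg.norm(dictionary - f, axis=1).min() for f in block])
        alpha = errors.mean()                                        # cycle threshold
        keyframes.extend(start + np.where(errors > alpha)[0])        # high-error frames
    return keyframes
```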
V. EXPERIMENTAL RESULT
In this section, we discuss the results obtained using the ABC optimization algorithm on a well-known dataset VSUMM [10]. The dataset consists of 50 videos from different genres and user summary keyframes for each video. In this experiment, we have compared the results for two user summaries and given the evaluation for each user summary against the automated summary generation as available in VSUMM dataset [10].
The average evaluation scores obtained in Table III indicate that the approach using the ABC algorithm in a sparse-land setup is close to the other results reported in [10].
Fig. 3 depicts the results for video #30 from the VSUMM dataset, giving a clear indication of the frame number matches and +/−1 frame matches; hence the results obtained demonstrate the approach for sparse-land based VS as a full framework for VS, anomaly detection, and online highlighting. This approach is open to including any other text/NLP-based feature inputs [45,46,47,48,49,50]. Frame importance rankings [45] and NLP caption generation methods [51,52] combined with other video features are recent advancements in video summarization features [53,50].
A. Evaluation of Video Summary
The evaluation is based on the approach discussed in [54,10], called Comparison of User Summaries (CUS). The ground truth is composed of multiple user summaries, and a common scoring approach is taken in the VSUMM dataset. The results of our approach, called the automatic summary, are compared with two user summaries as depicted in Fig. 3. Precision, recall, and F1-score are the common metrics to measure the performance of a VS framework, and the formulas follow [54,10]. The evaluation metrics (precision, recall and F1-score) are reported in Tables I and II against both user summaries in the VSUMM dataset. Equations 5, 6 and 7 are used for the evaluation metrics between the automated summary generated by our approach and the user summaries; the comparison scores are the mean accuracy rate CUSA (precision), the error rate CUSE (recall), and the F1-score. The F1-score obtained by our approach is close to that of other methods [10]. By balancing the threshold parameters in the ABC algorithm we can improve the F1-score, but we also need to take care of the other scores that get affected, like precision and recall. Finding the right balance among all the parameters of our ABC approach for video summarization, evaluated by F1-score, is another open challenge.
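As an illustration, the matching and scoring can be sketched as follows, assuming a keyframe matches a user keyframe when the two are within ±1 frame (as in Fig. 3) and each user keyframe can be matched at most once; the exact CUS matching rule of [10,54] may differ in its details.

```python
def cus_scores(auto_frames, user_frames, tol=1):
    """Precision/recall/F1 by matching automatic keyframes to user keyframes
    within +/- tol frames (each user keyframe may be matched at most once)."""
    unmatched_user = set(user_frames)
    matched = 0
    for a in auto_frames:
        hit = next((u for u in sorted(unmatched_user) if abs(u - a) <= tol), None)
        if hit is not None:
            unmatched_user.discard(hit)
            matched += 1
    precision = matched / len(auto_frames) if auto_frames else 0.0
    recall = matched / len(user_frames) if user_frames else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```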
VI. CONCLUSION

In this work, we propose the ABC optimization algorithm for video summarization to reduce a long video to a short one by removing redundant frames. We have compared the performance metrics for evaluation on the known dataset VSUMM. The comparison metrics give scores comparable to other methods with reasonable performance. This method can be easily used for (quasi) real-time VS and anomaly detection, and is also extendable to other advanced sparse-land approaches such as CSC (Convolutional Sparse Coding Model) [24] and K-SVD approaches [55,56]. Finding an optimal threshold function or value for summarization remains open, as the performance measures are affected as we decrease or increase the threshold function.
The relationship between volume increment and stand density in Norway spruce plantations
An understanding of the relationship between volume increment and stand density (basal area, stand density index, etc.) is of utmost importance for properly managing stand density to achieve specific management objectives. There are two main approaches to analyse growth–density relationships. The first relates volume increment to stand density through a basic relationship, which can vary with site productivity and age, and potentially incorporates treatment effects. The second is to relate the volume increment and density of thinned experimental plots relative to that of an unthinned experimental plot on the same site. Using a dataset of 229 thinned and unthinned experimental plots of Norway spruce, a growth model is developed describing the relationship between gross or net volume increment and basal area. The models indicate that gross volume increment increases with increasing basal area up to 50 m² and thereafter becomes constant out to the maximum basal area. Alternatively, net volume increment was maximized at a basal area of 43 m² and decreased with further increases in basal area. However, the models indicated a wide range where net volume increment was essentially constant, varying by less than 1 m³ ha⁻¹ year⁻¹. An analysis of different thinning scenarios indicated that the relative relationship between volume increment and stand density was dynamic and changed over the course of a rotation.
Introduction
In production forests, mid-rotational thinning is implemented for the purpose of reducing natural density-related mortality and releasing crop trees from competition, encouraging individual-tree growth and potentially producing higher valued timber products (Nyland, 2016). However, overly reducing the stand density can result in non-optimal total stand volume production. Physiologically, this loss in volume production can be explained as a result of underutilizing available resources, such as light interception in the canopy and water and mineral absorption in the soil (Long et al., 2004). At the opposite extreme, excessive densities may also lead to non-optimal volume production if available site resources (e.g. light, water) become a limiting factor. Thus, there is a direct relationship between stand density and volume production. A quantification of this relationship is important for understanding the growth and economic implications of thinning treatments.
While long-term studies of thinning have been on-going since 1860, a general relationship between volume increment and stand density has not yet been determined. One of the issues contributing to the lack of understanding is the wide range of definitions of volume 'growth' and stand 'density' (Zeide, 2001). In terms of total stand volume, volume growth is defined as either net-growth, which includes standing and harvested tree volume, or gross-growth, which is net-growth plus volume of trees lost to mortality (Avery and Burkhart, 2015). Because gross volume growth includes mortality, differences in the relationships between gross or net volume growth and stand density are expected to occur after density-related mortality is initiated (Nyland, 2016). Further differences in volume growth are expected to occur between total and merchantable volume (Zeide, 2001). However, merchantability limits change with different product classes and largely depend on local markets. In this work, only total gross or net volume growth is considered.
Historically, the measure of stand density used to examine growth-density relationships has either been basal area, standing volume, or stand density index. As an estimator of gross volume growth, all three measures have been shown to produce the same growth-density relationship (Allen II and Burkhart, 2018). However, it is important to define how they are calculated for the growth period. For example, Pretzsch (2005) used the stand density index at the beginning of the growth period to examine growth-density relationships in Norway spruce and European beech stands in Germany. The use of basal area at the beginning of the growth period has mostly been used for the development of growth models to describe growth-density relationships and when the growth periods are relatively short (e.g. Zeide, 2004; Gizachew and Brunner, 2011; Allen II and Burkhart, 2018). Alternatively, Mäkinen and Isomäki (2004a) compared volume growth in thinning experiments in Norway spruce stands in Finland based on the average basal area over a 28-year period. This approach is often used in the analysis of designed experiments and/or when the growth periods are relatively long (e.g. Isomäki, 2004a, 2004b; Nilsson et al., 2010).
Analyses of growth-density relationships have been conducted using two approaches. The first approach is used when comparative thinning experiments, containing an unthinned control plot and one or more thinning treatments, are installed in the same stand. Within this approach, the density and volume growth of thinned plots can be calculated relatively (as a percentage) of the measures observed on the control plots. Often termed 'relative growth-density relationships', examining the relationship between growth and density in this manner allows for the quantification of the percentage increase or decrease in volume increment in the thinned plot as compared with the unthinned plot. The second approach relates absolute measures of density directly to absolute measures of volume growth, termed 'absolute growth-density relationships', and is often used when developing growth models and/or when data from comparative plots are unavailable.
Three hypothesized patterns concerning relative growthdensity relationships have been identified (Zeide, 2001). The first is the constant-optimum pattern of Wiedemann (1932), which suggests that the relative volume growth (RVG) of thinned stands is constant and optimal across a range of absolute basal areas and any increase or decrease outside of this range results in a decrease in RVG (Figure 1). The second is the optimal pattern of Assmann (1950), which suggests that there is a relative basal area (RBA), where RVG is maximized. In young stands, beginning at the basal area of the unthinned or lightly thinned control, RVG increases with a reduction in RBA up to an optimum and then decreases with further reductions in RBA. However, with increasing age the optimum RBA moves towards the unthinned control. Finally, the third pattern is the constant pattern of Mar:Moller (1954), which suggests that RVG is optimal beginning at the basal area (or volume) of the unthinned stand and is constant down to a 50 per cent reduction in basal area (or volume).
Alternatively, there is only one hypothesized pattern identified in the literature for absolute growth-density relationships; the constant-optimum pattern of Langsaeter (1941). Langsaeter (1941) suggested that volume increment increases with increasing standing volume up to a point where volume increment is maximized and constant across a range of standing volumes. At excessive densities volume increment should decline. The limits of standing volume where volume increment is constant and optimal are said to be dependent on species, site and age. This constant-optimum growth-density pattern is similar to that proposed by Wiedemann (1932) with the difference that Langsaeter considered volume growth in absolute units and uses standing volume as the measure of density.
Langsaeter's alternative approach to analysing the relationship between volume increment and stand density is quite interesting when compared with the works of Wiedemann (1932), Assmann (1950) and Mar:Moller (1954). If Langsaeter's hypothesized absolute growth-density pattern is correct, and due to the others defining maximum density as the density of the unthinned (or lightly thinned) control, then constant, optimum and constant-optimum growth-density patterns could all be observed in both absolute and relative growth-density relationships over the course of a rotation. For example, Figure 2 shows that given the same hypothetical constant-optimum absolute growth-density relationship, all three relationships could be observed in both absolute and relative densities depending on the density of the control plot and the absolute or relative densities of the thinned plots. Thus, the relative growthdensity relationship may change over the rotation period as the unthinned control moves through the absolute growth-density relationship. However, this potential linkage between absolute and relative growth-density relationships over a rotation is yet untested in the literature.
These alternative growth-density hypotheses have been tested in a variety of stand types around the world with various conclusions. Perhaps some ambiguity has arisen because there was no indication as to whether gross or net volume increment was considered in the previously mentioned absolute and relative growth-density hypotheses. In a review of the history and evolution of concepts in growth-density relationships, Zeide (2001) indicates that those hypotheses are for gross volume growth. However, more recent examinations of absolute growthdensity relationships have shown that gross volume increment increases with increasing stand density (e.g. Curtis and Marshall, 2009;Gizachew and Brunner, 2011;Allen II and Burkhart, 2018). Given that mortality will reduce net growth below gross growth it is possible that optimum or constant growth-density patterns may emerge in net growth at higher densities. Thus, an examination of growth-density relationships in both types of volume increment is warranted.
Analyses of thinning trials of Norway spruce stands in Fennoscandia have had varying results. Using data from Norway spruce trials in Finland, Mäkinen and Isomäki (2004a) determined that only early heavy thinning slightly, but significantly, reduced gross volume increment over a 28-year period. Based on data from Norway spruce thinning trials in Sweden, Nilsson et al. (2010) determined that medium and heavy thinning also slightly, but significantly, reduced gross volume increment over a 30-year period. In contrast to the results from Finland, Nilsson et al. (2010) found that four light thinnings significantly increased net volume growth over the unthinned control and heavily thinned treatments in the same period. However, these results may not be directly comparable due to pre-thinning differences in the stands, different thinning treatments and different site productivities. Further, these two studies only examined relative growth-density relationships based on 28–30-year averages of volume increment and basal area, which prevents the detection of differences in the growth-density relationships over time. A detailed examination of absolute and relative growth-density relationships for Norway spruce in Fennoscandia has yet to be conducted on the basis of long-term thinning trials.
The purpose of this study was to test theoretical relationships between gross or net increment and stand density in even-aged Norway spruce stands. The primary research goal was to quantify the absolute relationship between volume growth and stand density and to determine how different density removals affect the relative growth-density relationship between thinned and unthinned stands. An additional goal was to determine the effects of thinning on cumulative total gross and net volume production over a general rotation period. To address the research goals the following hypotheses were tested: (i) absolute gross or net volume increment increases with increasing basal area; (ii) relative volume increment changes over the course of a rotation depending on the basal area of the unthinned stand and (iii) cumulative gross and net volume over a rotation is highest in the unthinned stand.
Methods
A dataset of long-term thinning experiments of Norway spruce was used to develop a modelling framework by which the alternative thinning hypotheses could be tested. This modelling framework includes individual functions for stand-level gross or net volume increment, dominant height and basal area. Additionally, a Monte-Carlo analysis was performed for nine different thinning scenarios to compare cumulative gross and net volume production at harvesting age from different thinning treatments.
Thinning trial data
The data used in this study come from a series of 11 thinning trials consisting of 135 long-term permanent plots established in even-aged Norway spruce plantations. Within those data are 442 observations (growth periods) in once-thinned stands and 246 observations in twice-thinned stands (Table 1). The intent of establishing the trials was to examine the effects of various levels of thinning intensity and timing of thinning, using dominant stand height as the thinning trigger, on volume production and stand structure. In all cases, thinning was performed from below. Initially, the trials were planned to include treatments of different density reductions at different stages of stand development. Over the course of the experiment, the target residual TPH were often missed (thinning too light or too heavy) and/or the thinning was performed too early or too late. The deviation from the original experimental plan was such that formal analyses of the trial data by treatment groups could not be performed. However, this deviation has resulted in a dataset which covers a wide range of thinning intensities and is highly useful for the development of models which explain growth response to thinning.
The thinning trials are distributed across the natural range of Norway spruce in Norway in the counties of Agder, Vestfold, Telemark, Viken, Innlandet and Trøndelag. A full description of these trials is provided by Braastad and Tveite (2001). Within this thinning trial series, there were an additional 65 plots which did not have information concerning the trees removed in the first thinning. Observations from those plots were not reported or used for analysis in this work. As the thinning trial data did not include unthinned control plots, an additional 94 unthinned control plots from other silvicultural trials, within the same geographic region, were added and consist of 311 additional growth observations (Table 1).

Abbreviations used in Table 1: A = stand age from planting (years); SI = site index at base age 40 years; TPH = number of trees (ha⁻¹); BPA = basal area (m² ha⁻¹); V = stand volume (m³ ha⁻¹); TQ_BA = basal area thinning quotient (BA after thinning divided by basal area before thinning, %); and H_T = dominant stand height (m) at thinning.
Tree measurements and stand calculations
Establishment of the trials occurred between 1969 and 1976 in 20–31-year-old stands (Table 1). Measurements were scheduled at 5-year intervals, with a few deviations from the schedule, and to date each plot has received between 5 and 10 measurements. At plot establishment, every tree greater than 5 cm in diameter at breast height (dbh) was numbered and the species, dbh, and any damages were recorded. Total tree heights and crown heights were subsampled by randomly selecting one tree out of the first four trees for a height measurement, and then systematically sampling the height of every fourth tree thereafter. Non-sampled tree heights were estimated with a linear mixed-effects variant of the logarithmic height-diameter model of Lenhart (1968) localized with random parameters fit to each trial, plot and measurement period. Total stem volumes were calculated using the volume equations of Vestjordet (1967). Ingrowth trees were included in plot measurements once reaching a dbh of 5 cm. Quadratic mean diameter (QMD, cm), standing volume (V, m³ ha⁻¹) and basal area (BA, m² ha⁻¹) were calculated for each plot. Net stand-level volume (NV, m³ ha⁻¹) was calculated as the sum of V and the cumulative volume of all trees removed in thinning. Gross stand-level volume (GV, m³ ha⁻¹) was calculated as NV plus the cumulative volume of any trees lost to mortality. Stand-level gross (GVPAI) and net (NVPAI) volume increment were calculated as the periodic annual increment over a measurement period.
Modelling and analyses
To address the research goals, three hypotheses were tested. The first was to test the relationship between absolute gross or net volume growth and absolute basal area. This was done by developing an equation explaining periodic annual volume increment as a function of stand density, dominant stand height and dominant height increment. The second hypothesis was to examine whether different relative growth-density relationships exist over the course of a rotation. As the equation for gross or net volume increment included both basal area and dominant height, additional equations for those variables were also developed. These equations were then used to simulate stand development over a rotation for nine different thinning scenarios (Table 2), including an unthinned control. The third and final hypothesis was to test differences in cumulative gross and net volume production over a rotation among the nine thinning scenarios. Based on the simulations, a Monte-Carlo analysis was performed to determine differences among the thinning scenarios.
Gross and net volume growth
Alternative growth-density hypotheses suggest that volume increment either increases with increasing stand density or reaches an upper limit at some density less than the maximum density which can be sustained on a given site. In order to test these hypotheses, the type II combination power and exponential function was chosen, as it has the flexibility to reflect increasing, optimum and asymptotic patterns (Allen II and Burkhart, 2018). The functional form relating stand density to volume increment is given in equation (1), where VPAI = gross or net volume periodic annual increment (m³ ha⁻¹ year⁻¹), BA = stand basal area (m² ha⁻¹), exp = the base of the natural logarithm and α0, α1 are the parameters to be estimated. From equation (1), the hypothesis that volume increment follows an increasing pattern with stand density can be tested on the basis of the significance of α1. If α1 is significantly different from zero and positive, then an optimum pattern occurs; otherwise an increasing pattern occurs. In the case that α1 is significantly positive, the basal area at which VPAI is maximized can be determined by calculating the first derivative of equation (1) and solving for the basal area where the first derivative is equal to zero. It follows that the basal area where VPAI is maximized is equal to α1. Equation (1) expresses volume growth purely as a function of stand density. However, in addition to stand density, volume growth is also a function of average tree size, stand age and site productivity (Oliver and Larson, 1996; Zeide, 2004; Allen II and Burkhart, 2018). Therefore, the development of a model which can account for both the effects of stand development stage and soil/climate productivity is necessary to prevent confounding those effects.
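As an illustration, assuming equation (1) takes the type II form VPAI = α0 · BA · exp(−BA/α1), which is consistent with the stated property that the first derivative vanishes at BA = α1, the optimum can be located numerically; the parameter values below are purely illustrative.

```python
import numpy as np

def vpai(ba, a0, a1):
    """Assumed type II form of equation (1): VPAI = a0 * BA * exp(-BA / a1);
    the first derivative is zero at BA = a1, the basal area maximising increment."""
    return a0 * ba * np.exp(-ba / a1)

ba = np.linspace(5, 70, 200)
a0, a1 = 0.5, 43.0        # illustrative values; 43 m2 echoes the reported net-increment optimum
print(ba[np.argmax(vpai(ba, a0, a1))])   # ~43, i.e. the optimum equals a1
```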
Predominant stand density measures, such as basal area, couple stem number per unit area with a measure of average tree size which is generally diameter-based. It has been shown that basal area can vary in equally dense stands and therefore the inclusion of a measure of average tree size assists in defining a unique growing condition (Zeide, 2005). From past studies of growth-density relationships, a positive parameter estimate on average tree size suggests that for a given basal area a larger average tree size results in greater stand volume growth (e.g. Zeide, 2004;Pretzsch, 2005;Allen II and Burkhart, 2018). However, this does not reflect that with increasing time, and thus increasing average tree size, stand volume growth declines (Avery and Burkhart, 2015) and an additional variable which reflects the gradual decline of volume growth with time is needed. Using the average tree height as a measure of average tree size in combination with its increment reflects this gradual decline in volume growth as height increment declines with time. Further, these two variables together describe both site productivity and local climate impacts over the growth period as well as age effects.
Two different definitions of tree height were examined, including the average height of all trees and the average height of the 100 largest-diameter trees, also known as dominant stand height. The dominant stand height definition provided slightly better fit statistics. Therefore, the dominant stand height (H, m), its increment (H_PAI, m year⁻¹) and their interaction were included in equation (1) as modifiers to the parameter α0, giving the final model in equation (2). In the model-development process, additional variables describing thinning were also included in equation (2). However, these variables were never significant in the model and were therefore excluded. Additionally, stand age and site index were included as modifiers to parameter α1, which defines the basal area at which VPAI is maximized. These parameters were also not significant in the model, and when removed no evidence of patterns in the residuals from the final equation was seen over stand age or site index.
Equation (2) can be used to determine the gross or net volume increment at a given basal area, dominant stand height and dominant stand height increment. It was additionally desired to understand how both types of volume increment develop over a rotation for both thinned and unthinned stands. This was accomplished by simulating gross and net volume increment over a traditional rotation period for different thinning scenarios. As equation (2) explains volume increment as a function of basal area and height, in order to perform this simulation additional equations for basal area and height development were needed.
Basal area development and response to thinning
In this analysis, it was desired to simulate basal area development of an unthinned stand based on a given initial stand density. However, no information on planting density was available for the unthinned control plots (Table 1). In order to simulate the basal area development in unthinned stands the model of Gizachew et al. (2012) was used. Their model is based on empirical results from Norway spruce spacing trials in Norway and was developed to explain basal area development of different initial planting spacings. To determine the validity of this equation, the first available density observation from each unthinned plot which had not yet entered into density-dependent mortality was used as a proxy for initial planting density. Based on residual analysis, the equation was found to be unbiased for the unthinned control plots and therefore acceptable to use for simulations.
To model the basal area of thinned stands, the basal area equation of Gizachew et al. (2012) was modified to account for the effects of thinning on radial growth. This included parameters which could explain differences in thinning removals and the timing of thinning. In this modified model, equation (3), β 2 = log(1 − BA AT /β 0 ) and β 1i are parameters to be estimated, where SI = site index (m) at base age 40, TQ BA = basal area after thinning/basal area before thinning, BA AT = basal area after thinning (m 2 ha −1 ) and yst = years since thinning.
The asymptotic limit of basal area, parameter β 0 , was developed by Gizachew et al. (2012) from Norwegian forest inventory data. Thinning response is incorporated into the rate parameter, β 1 , as a linear function of basal area removal and site index. Parameter β 2 conditions the function such that the basal area at the thinning age (yst = 0) is equal to the basal area immediately after thinning. An additional variable was initially included to account for the effect of dominant stand height at the time of thinning on basal area development. This parameter was not significant in the model and therefore removed. Further, no obvious bias was seen in the residuals when examined over dominant stand height at the time of thinning.
For simulating volume growth in equation (2), an estimate of dominant height and dominant height increment is needed. Therefore, a dominant height projection model, equation (4), was developed using a Chapman-Richards type difference function, where H 1 = current dominant stand height at age A 1 , H 2 = future dominant stand height at age A 2 and θ 1 , θ 2 are the parameters to be estimated. Previous results from thinning studies of Norway spruce in other regions have indicated that thinning has no effect on dominant height development (Mäkinen and Isomäki, 2004a). In an initial screening of dominant height-age equations for these data, preliminary residual analysis indicated no patterns among the residuals for thinned and unthinned stands. Additionally, no variables describing thinning effects were significant when included in equation (4).
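The exact parameterization of equation (4) is not shown in this excerpt. A commonly used two-parameter Chapman-Richards difference form that is consistent with the variables defined above is sketched below; it is offered as an assumed illustration rather than as the fitted equation:

$$
H_2 \;=\; H_1\left(\frac{1-e^{-\theta_1 A_2}}{1-e^{-\theta_1 A_1}}\right)^{\theta_2}.
$$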
Model fitting and evaluation
Equations (2)-(4) were simultaneously fitted by nonlinear seemingly unrelated regression (NSUR), using the SAS/ETS MODEL Procedure (SAS Institute Inc., 2011). NSUR takes into account the contemporaneous correlations among nonlinear regression equations (Borders, 1989;Robinson, 2004). Model assumptions of normality and variance homogeneity of the residuals for all fitted equations were evaluated visually by examining normal Q-Q plots and residual plots, respectively. Further, residual plots in relation to age, site index and stand treatment (thinned vs. unthinned) were examined to determine if these stand variables were properly represented by each model.
Simulation and Monte-Carlo analysis
For the comparison of different thinning scenarios, a simulation was performed using equations (2)-(4). Different site productivities were evaluated based on site index classes of 11, 14, 17 and 20 m at a base age of 40 years, where age is defined as years since planting. A total of nine different thinning scenarios, including an unthinned control, were simulated for each site index class (Table 2). Simulated thinnings were performed based on the representative thinnings in the data used for model fitting (Table 1) and can be categorized as either medium or heavy thinning, based on percentage basal area removal, and early or late thinning based on dominant stand height. The lengths of the simulations varied by site index class and were based on general rotation ages of even-aged Norway spruce stands. These ages were 120, 96, 81 and 77 years for site indices 11, 14, 17 and 20 m, respectively, and come from analysis of the culmination of mean annual increment in Norway spruce stands (Søgaard et al., 2019).
The steps for simulating the nine different thinning scenarios presented in Table 2 are as follows (a minimal code sketch of this loop is given after the list):

(1) For each site index class, simulate dominant height development using equation (4) for every age over the rotation, using the respective site index value for H 1 and 40 years for A 1 .
(2) Simulate basal area development for the unthinned scenario using the equation of Gizachew et al. (2012) for an initial planting density of 4000 trees ha −1 .
(3) Simulate basal area development for the thinned scenarios for one or two thinnings: (a) First thinning: (i) using the simulated dominant heights and basal areas from the unthinned scenario, reduce the basal area in each thinning scenario by its respective percentage at its respective dominant height as detailed in Table 2.
(4) Calculate gross or net volume increment based on simulated dominant heights, dominant height increments and basal areas for all nine scenarios.
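A minimal, self-contained sketch of this simulation loop is given below. All functional forms and parameter values in it are illustrative assumptions standing in for equations (2)-(4) and the Gizachew et al. (2012) basal-area model; only the control flow mirrors the steps above.

```python
import math

# Toy stand-ins for equations (2)-(4); the fitted forms and parameters are not
# reproduced here, so every function and constant below is an illustrative assumption.
def dominant_height(si, age, theta1=0.03, theta2=1.3):
    # Chapman-Richards type projection from (site index, base age 40) to `age`
    return si * ((1 - math.exp(-theta1 * age)) / (1 - math.exp(-theta1 * 40))) ** theta2

def grow_basal_area(ba, asymptote=70.0, rate=0.06):
    # simple growth toward an asymptotic basal area (m2/ha)
    return ba + rate * (asymptote - ba)

def net_vol_increment(ba, h_pai, a1=43.0):
    # unimodal in basal area, maximized at ba = a1; scaled by height increment
    return 10.0 * h_pai * (ba / a1) * math.exp(1 - ba / a1)

def simulate(si, rotation_age, thinnings, ba0=5.0):
    """thinnings: list of (trigger dominant height in m, fraction of basal area removed)."""
    pending = list(thinnings)
    ba, cum_net = ba0, 0.0
    for age in range(2, rotation_age + 1):
        h_prev, h = dominant_height(si, age - 1), dominant_height(si, age)   # step (1)
        ba = grow_basal_area(ba)                                             # steps (2)-(3)
        if pending and h >= pending[0][0]:
            ba *= 1 - pending.pop(0)[1]                                      # apply a thinning
        cum_net += net_vol_increment(ba, h - h_prev)                         # step (4)
    return cum_net

# e.g. SI-14, 96-year rotation, two medium thinnings triggered at 12 m and 16 m dominant height
print(round(simulate(si=14, rotation_age=96, thinnings=[(12.0, 0.25), (16.0, 0.25)]), 1))
```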
Initially, it was desired to include a comparison of different initial stand densities. However, the results for the hypotheses examined and the conclusions were the same and did not provide additional insight into the growth-density hypotheses. Thus, one initial planting density of 4000 trees per hectare was chosen.
A Monte-Carlo analysis was performed to test significant differences in cumulative gross and net volume production among thinning treatments over the rotation. In this analysis, 1000 datasets were generated by randomly selecting 999 observations from the fitting dataset with replacement. For each dataset, equations (2)-(4) were fitted using NSUR, creating 1000 unique sets of parameter estimates. Those parameter estimates were then used in the simulation of the nine thinning scenarios to create a distribution of cumulative gross and net volume at harvesting age for the four site indices. Differences among the thinning treatments were then determined by calculating 95 per cent confidence bands from the 2.5 and 97.5 percentiles of the distribution.
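A minimal, self-contained sketch of this bootstrap scheme is shown below. The NSUR refit and the scenario simulator are replaced by trivial stubs (fit_system, simulate_cumulative_volume) because the actual fitted system is not reproduced here; only the resampling and percentile-band logic follows the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_system(sample):
    # stub "fit": summarize the resampled data with its column means
    return sample.mean(axis=0)

def simulate_cumulative_volume(params, scenario_effect):
    # stub "simulation": scale a summary of the parameters by a scenario effect
    return params.sum() * scenario_effect

def monte_carlo(data, scenarios, n_rep=1000, n_obs=999):
    draws = {name: [] for name in scenarios}
    for _ in range(n_rep):
        sample = data[rng.integers(0, len(data), size=n_obs)]   # resample with replacement
        params = fit_system(sample)                             # refit of the system (stubbed)
        for name, effect in scenarios.items():
            draws[name].append(simulate_cumulative_volume(params, effect))
    # 95 per cent bands from the 2.5 and 97.5 percentiles of each scenario's distribution
    return {name: np.percentile(v, [2.5, 97.5]) for name, v in draws.items()}

data = rng.normal(loc=1.0, size=(2000, 2))                      # placeholder "fitting dataset"
print(monte_carlo(data, {"unthinned": 1.00, "two_heavy_thinnings": 0.93}))
```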
Visual assessment of relative growth-density relationships following simulation
To compare the implied relative growth-density relationships from the simulations, within each site index class the gross or net volume increment and basal area of the simulated thinned stands are related to those of the unthinned stand. The relative gross (RGVI) or relative net (RNVI) volume increments and the relative basal area (RBA) can be calculated as the ratio of the thinned to the unthinned value, i.e. RGVI = GV PAI,T /GV PAI,U , RNVI = NV PAI,T /NV PAI,U and RBA = BA T /BA U , where the subscripts U and T denote the unthinned and thinned gross or net volume increment or basal area, respectively. Figures were then generated relating relative volume increment to RBA for each site index class at different stand ages.
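A small helper expressing these ratios age by age might look as follows (an illustrative sketch, not code from the study):

```python
def relative_series(thinned, unthinned):
    """Each argument maps stand age -> (volume_increment, basal_area) for one scenario."""
    return {age: (thinned[age][0] / unthinned[age][0],   # RGVI or RNVI
                  thinned[age][1] / unthinned[age][1])   # RBA
            for age in unthinned if age in thinned}
```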
Results
Parameter estimates and fit statistics for equations (2)-(4) are presented in Table 3. Analysis of residuals and normal Q-Q plots indicated that the model assumptions were met. Further, there was no evidence of bias among the residuals when plotted against age, site index, treatment types and climatic variables.
Absolute growth-density relationships
Hypothesis tests concerning the growth-density patterns for gross and net volume increment were made based on parameter α 1 in equation (2). Those parameter estimates were significantly greater than zero, indicating optimum patterns between gross or net volume increment and basal area. Further, the values of those parameters indicate that gross or net volume increment for Norway spruce stands is maximized at basal areas of about 62 and 43 m 2 ha −1 , respectively. To visually assess the behaviour of equation (2) when all other factors are held constant, predicted gross and net increments were plotted across the range of expected basal areas for Norway spruce and with a range of dominant stand heights and height increments (Figure 3). While equation (2) indicates that gross volume increment is maximized at 62 m 2 ha −1 basal area, Figure 3 shows that when height and height increment are held constant, GV PAI is almost constant at basal areas greater than 50 m 2 ha −1 . Similarly, NV PAI has only a slight variation over the basal area range of 30-50 m 2 ha −1 but declines at higher densities.
To highlight the importance of accounting for the effects of age and site, the results of the mean stand development from each thinning scenario in Table 2 were used to plot the relationship between net or gross volume increment and basal area (Figure 4 and Supplementary Figure 2). These figures differ from Figure 3 as both types of volume increment are determined from the simulated height and basal area development as predicted by equations (3) and (4). Thus, Figure 4 and Supplementary Figure 2 are the 2D representations of stand development in NV PAI or GV PAI and basal area, respectively.
A clear unimodal pattern can be seen in both NV PAI and GV PAI . If age were not accounted for, this could be mistaken for a density effect instead of the natural slowing of growth at higher ages, which happens to coincide with stands growing into higher densities. Additionally, Figure 4 shows clear differences in the growth response patterns of thinned treatments among the different site index classes, with the curves at lower SI deviating from the curves at higher SI. Because thinning was implemented at a fixed height in the simulations, the age at which the treatments were applied increased with decreasing SI. Thus, the response patterns seen here also include an age effect when comparing among SI classes. Therefore, it is highly important to separate the age effect from the density effect in order to determine the direct relationship between volume growth and stand density without confounding.
Both age and site effects were accounted for in equation (2) by including dominant height and height increment in the model. Additional variables which describe thinning were initially included but were not significant and no obvious bias was seen in the residuals among the thinned and unthinned plots. This suggests that for a given basal area, height, and height increment, any differences in volume increment between thinned and unthinned stands were not large enough to be detected by the modelling approach used in this work.
Relative growth-density relationships
By considering the unthinned stand as the reference stand, relative net (RNVI) and relative gross (RGVI) volume increment were calculated for all nine thinning scenarios at each simulated time step. Plotting those values over stand age shows when the RNVI or RGVI in the thinning treatments surpass that of the unthinned treatment, which has a relative value of one ( Figure 5 and Supplementary Figure 3).
The RNVI in all thinning treatments and for all site indices increased above the unthinned treatment at some point in the rotation ( Figure 5). In general, the age at which the surpassing occurred decreased with increasing site index resulting in the thinning treatments in the higher site index classes having a longer period of increased net growth over the rotation period. Alternatively, there was only one instance in SI-17 and two instances in SI-20 where RGVI was increased above one (Supplementary Figure 3). In all three instances, this was due to a slight increase in basal area of the thinned treatments above that of the unthinned treatment (Supplementary Figure 4).
The change in RNVI over the rotation indicates differences between the unthinned and thinned treatments at different time periods which could indicate different response patterns. For clarity, four different time steps were chosen from the simulation data in SI-11 to compare and contrast the change in those relationships over time. Figure 6 shows that initially both NV PAI and RNVI increase with increasing stand density. With time the relationships develop into the constant pattern of Mar:Moller (1947), the optimum pattern of Assmann (1950) and the constant-optimum pattern of Wiedemann (1932). Thus, all three growth-density patterns were observed in the simulation data and, as hypothesized in Figure 2, those patterns depended on the density of the unthinned control.
Simulation and Monte-Carlo analysis
The previous results indicate that thinning initially decreases both gross and net volume increment due to the immediate decrease in basal area. However, there are cases where both types of volume increment are increased in the thinning scenarios above that of the control. The question becomes how the differences in volume increment among the thinning treatments affect the total cumulative volume over the course of a rotation.
Results from the Monte-Carlo analysis show several cases where cumulative gross volume (CGV) is significantly different between the unthinned and thinned treatments (Figure 7). In site index classes 11, 14 and 17, all once and twice thinned treatments which received the heaviest thinning (50 per cent basal area removal) produced significantly less CGV than the unthinned treatment, regardless of whether the thinning was applied early or late. In site index class 20, only the treatments which received two heavy thinnings had significantly less CGV as compared with the unthinned treatment.
Alternatively, there were no significant differences among the treatment scenarios within a given site index for cumulative net volume production. These results indicate that all eight thinning treatments examined in this work can produce as much net volume over the course of a rotation as the unthinned treatment. Therefore, the recommendation that up to 50 per cent of basal area can be removed without having a noticeable effect on total harvested volume is valid for these data under the rotation lengths examined here.
Discussion

Absolute growth-density relationships
The results from this modelling study, which examines gross and net volume increment from thinned and unthinned stands of Norway spruce in Norway, suggest different absolute growth-density patterns between the two types of volume increment. The models presented here indicate that when all other variables are held constant, gross volume increment increases with increasing basal area. However, Figure 3 shows that the pattern is curved and becomes essentially constant at higher densities (basal area > 50 m 2 ha −1 ). Alternatively, for net volume increment the model results suggest that volume increment increases with increasing basal area up to a point where volume increment is maximized and then decreases with increasing basal area up to the maximum. The decline at higher densities is the result of greater competition and larger mortality as the basal area approaches the maximum. While the pattern is curved, Figure 3 indicates that there is a range in basal area where net volume increment is essentially constant and that this range increases as height increment, and therefore age, decreases. This result indicates that Langsaeter's hypothesized absolute growth-density pattern is applicable for these Norway spruce data when net volume increment is considered.

Figure 4 Two-dimensional representation of net volume increment development for different thinning treatments based on equations (2)-(4) at different site indices (SI, m). The curves represent the integrated effects of both stand age and stand density. Thinning treatments are described with intensity as control (no thinning), medium (25 per cent basal area reduction), or heavy (50 per cent basal area reduction) and with timing as early (first thin at 12 m height) or late (first thin at 16 m height).
In contrast to the work presented here, models developed from Norwegian National Forest Inventory (NNFI) data by Gizachew and Brunner (2011) indicated GV PAI strictly increasing up to a basal area of 60 m 2 ha −1 . Figure 3 shows that, based on these data from thinning trials, GV PAI reaches a maximum and begins to flatten out after 50 m 2 ha −1 basal area. One major difference between these two studies is the underlying data source used in the development of the volume growth models. The thinning trials used in this work were established in even-aged Norway spruce plantations, whereas the NNFI data used in Gizachew and Brunner (2011) contain a range of forest types including both planted and naturally regenerated stands, the latter of which can be either even- or uneven-aged. The summary statistics for the two data sources show that the thinning trial data are representative of better stocked stands on higher site productivities as compared with the NNFI data.
Different response patterns in GV PAI from stands of different management types have been seen in other species. Based on thinning studies of loblolly pine in the south-eastern US, Allen II and Burkhart (2018) compared growth-density relationships between non-intensively and intensively managed plantations. In non-intensively managed plantations, growth models produced strictly increasing patterns between GV PAI and basal area and Allen II and Burkhart (2018) noted similarities between their relationship and the relationship of Gizachew and Brunner (2011). Alternatively, growth-density patterns from intensively managed plantations of loblolly pine showed more curvature, flattening out at higher densities, similar to the curves presented in Figure 3. Thus, differences in the response curves can be attributed to differences in site types and management.
The relationship between volume increment and stand density is quite complex, and other factors beyond stand density which indicate a point in stand development and productivity are needed. Publications as early as Langsaeter (1941) indicated that the growth-density relationship is a function of stand age and site index. However, these variables are merely indicators of a point in stand development based on an average productivity. Further, there may be cases where stand age and site index are either unknown or roughly estimated, which could lead to misleading results. The models presented in this work use height and height increment, which in addition to explaining the effects of stand age can also explain the effects of current site productivity. Such models may be useful for forecasting volume increment from an existing forest inventory or in simulation studies when coupled with a height-age equation which incorporates climate variables. Further, in stands with non-homogeneous site productivities, such as are often found in Norway, even greater precision for forecasting volume increment could be achieved when connected to height-growth information obtained from remote sensing.

Figure 5 Relative net volume increment (RNVI) development with stand age of thinned stands as compared with an unthinned stand (RNVI = 1). Thinning treatments are described with intensity as medium (25 per cent basal area reduction) or heavy (50 per cent basal area reduction) and with timing as early (first thin at 12 m height) or late (first thin at 16 m height).
Relative growth-density relationships
As an alternative to absolute growth-density relationships, researchers have often employed relative growth-density relationships to examine differences among thinned and unthinned experimental plots. From the plethora of publications in the literature concerning relative growth-density relationships, it appears that much effort has been given to establishing one unifying relationship or pattern. While there are conflicting claims and counter-claims concerning the basic relationships hypothesized by Langsaeter (1941), Mar:Moller (1954) and Assmann (1970), those patterns are often misspecified. Langsaeter's hypothesis is concerned with the absolute growth-density relationship, whereas the patterns of Mar:Moller and Assmann are concerned with relative growth-density relationships.
As hypothesized in Figure 2, Langsaeter's absolute growth-density relationship could occur simultaneously with the relative growth-density relationships of Wiedemann (1932), Mar:Moller (1954) and Assmann (1970) due to the specification of the unthinned stand as the reference stand. The results presented in this study suggest that this hypothesis is plausible for net volume increment (e.g. Figure 6). While relative gross volume increment increases with increasing RBA, net volume increment of thinned stands is maximized at a basal area of 43 m 2 ha −1 and decreases at higher densities due to mortality. As thinning is generally applied relatively early in the rotation, the unthinned stand is often not at the maximum sustainable density on a given site, especially in plantations where extreme planting densities are not likely to occur. The modelling results presented here show that as the unthinned and thinned plots move towards the maximum density the relative growth-density relationship shifts from increasing to constant to optimal (Figure 6). Thus, different patterns can be seen depending on the density of the reference stand, and conflicting claims about the relationship may be an artefact of comparing different data sources.

Figure 6 Absolute and relative relationships between net volume increment and basal area based on simulated thinning scenarios for SI-11 at different stand ages.

Different relative growth-density patterns can also occur depending on the methodology used to examine those relationships. For example, in thinning studies of Norway spruce and Scots pine in Finland and Sweden, Mäkinen and Isomäki (2004a, 2004b) and Nilsson et al. (2010) calculate relative volume increment and RBA as averages over a 30-year period. For comparison, the periodic annual volume increment and average basal area over the 30-year period following thinning were calculated from the simulations in this study and the relative relationships between thinned and unthinned scenarios were plotted. Similar to the results from Mäkinen and Isomäki (2004a), Figure 8 shows purely increasing relationships when these data are averaged over this 30-year period. This is not surprising given the results shown in Figure 5 and Supplementary Figure 3, as the unthinned control was still well below the maximum density during this period. While this methodology is valid for examining differences in thinned and unthinned experimental plots over a long growth period, it lacks the ability to detect different relative growth-density patterns which occur at different points in stand development. These results show that it is very important for studies to properly define both the volume increment analysed and the stand density so that appropriate comparisons among studies can be made.
Total volume production
While showing that the growth-density patterns are more dynamic than originally thought, the question becomes how much this matters and what effect thinning has on total production at the harvest age. For example, in terms of net increment, do the periods of increased growth counter the initial reduction in increment immediately following thinning? The results of the Monte-Carlo analysis indicated that there were no differences, according to overlapping confidence intervals, among cumulative net volume production at the end of a rotation within a given site index between the nine thinning scenarios examined (Figure 7). Alternatively, one or two heavy thinnings reduced CGV production below that of the unthinned stand, depending on the site productivity. As the difference between gross and net volume is mortality, these results indicate that those treatments can produce as much total net volume as the unthinned stand while additionally reducing volume lost to mortality. Thus, the recommendation that up to 50 per cent of basal area can be removed without having a noticeable effect on total net volume production over a rotation is plausible for the thinning scenarios and rotation lengths examined here.
In a previous analysis of these thinning trial data, with measurements covering 25-30 years after thinning, Braastad and Tveite (2001) concluded that cumulative net volume increased with increasing density, with the largest volumes occurring in the unthinned plots. Figure 8 shows that this is due to a decrease in the average relative net volume increment with increasing thinning strength over the same period. The advantage of thinning for total net volume production comes from reducing the time spent at higher densities where net volume growth is reduced. Modelling results suggest that this can result in larger relative net volume growth later in the rotation (e.g. Figure 8). Therefore, the similarities found between cumulative net volumes among the thinning scenarios at the end of the rotation are highly dependent on the length of the rotation period. In this study, the rotation length was defined as the age where mean annual increment culminates. However, if the rotation length is shortened then these results may not hold.
Different effects of thinning on total gross and net volume production have been found in other Fennoscandian countries. Nilsson et al. (2010) found that only heavy thinning significantly reduced CGV production, but showed that light thinning increases cumulative net volume production. Alternatively, Mäkinen and Isomäki (2004a) found no significant differences in CGV between thinned and unthinned stands. However, the results of these studies should be compared with caution as those analyses did not or could not account for differences in site productivity and were based on data with different thinning treatments and timing of thinning.
Conclusion
The relationship between volume growth and stand density is dynamic, with differences occurring between different measures of volume growth. From the models produced in this work, when all other variables were held constant, gross volume increment increased with increasing basal area up to the maximum basal area on a given site. Alternatively, net volume increment followed an optimal pattern and was maximized around a basal area of 43 m 2 ha −1 . These differences indicate the importance of properly defining the measure of volume increment used in growth-density studies and the need to ensure comparisons are made among the same measures.
While the relationship between net volume increment and basal area is curved, the model results indicated that there was a range of basal areas where net volume increment was essentially the same, varying less than 1 m 3 ha −1 year −1 and this range varied with current height and height increment. This result would confirm Langsaeter's hypothesis which says that volume growth is constant and optimal across a wide range of densities and that the range in density is dependent on age and site.
As hypothesized in this work, the confirmation of Langsaeter's hypothesis does not negate different response patterns in relative growth-density relationships but instead provides reasoning as to why different patterns may have been observed in past work. Due to the specification of maximum density for a given site being the density of an unthinned stand, all three hypothesized relative growth-density response patterns (i.e. increasing, constant, optimal) can occur depending on the absolute density of the reference stand.

Figure 8 Relationship between gross or net volume periodic annual increment over the 30-year period following thinning and the midpoint (average) basal area over the same period. Volume increment and basal area of the thinning treatments have been made relative to the unthinned treatment.
Results of simulated thinning scenarios indicated no significant differences in total net volume production at harvesting age between unthinned stands and various thinning strengths up to 50 per cent basal area removal. Alternatively, only two heavy thinnings (50 per cent basal area removal) significantly reduced total gross volume production, as compared with an unthinned stand, at the end of a rotation. This result suggests that for Norway spruce stands two heavy thinnings can produce as much total net volume at the end of a rotation, providing some earlier return on investment, while at the same time avoiding density-related mortality. However, these results should be taken with caution as the simulated results have been extrapolated outside of the range of stand ages used in model fitting. It is highly important that these thinning trials continue to be monitored throughout the rotation.
In conclusion, the results of this work reconcile alternative historic hypotheses concerning the relationship between volume increment and stand density. The system of equations developed here indicates that different growth-density patterns can occur over a rotation, and thus past controversy could have arisen from the desire to develop one unified pattern of response and from the comparison of response patterns across different data sources. By use of the modelling framework presented in this work, differences in species' reactions to thinning could be evaluated. Further, this modelling framework allows thinning trials to be analysed more correctly by incorporating the effects of stand age, site productivity, thinning onset and thinning frequency to properly understand the growth-density relationship.
Supplementary data
Supplementary data are available at Forestry online.
In vitro antioxidant and antitumor study of zein/SHA nanoparticles loaded with resveratrol
Abstract Resveratrol (RES)-loaded Zein-SHA (low-molecular-weight sodium hyaluronate) nanoparticles with an average diameter of about 152.13 nm and a polydispersity index (PDI) of 0.122 were prepared, and can be used to encapsulate, protect and deliver resveratrol. By measuring ABTS free radical scavenging ability and iron (III) reducing power, it was determined that encapsulated resveratrol has higher in vitro antioxidant activity than free resveratrol. When tested with murine breast cancer cells 4T1, the encapsulated resveratrol also showed higher antiproliferative activity than free resveratrol, with IC50 values of 14.73 and 17.84 μg/ml, respectively. The colloidal form of resveratrol developed in this research may be particularly suitable for functional foods and beverages, as well as dietary supplements and pharmaceutical products.
chemical properties, which limit its application in food, nutrition, and medicine. To overcome these limitations, delivery systems have been designed and applied. Among these delivery systems, nanocomplexes have been widely employed in the food field owing to their advantages, such as effective targeting and sustained release (Guo et al., 2020).
A variety of nanoparticle-based systems have been investigated for their potential to encapsulate, protect, and deliver resveratrol.
Among these, zein has been widely studied. Zein-encapsulated tea polyphenol nanocarriers show good performance against external environmental stress and protect tea polyphenols from damage (Ba et al., 2020). Zein nanocarriers can also significantly improve the photostability of lutein (Frankjen et al., 2020).
Zein has a special amino acid composition: it is rich in glutamic acid (21%-26%), leucine (20%), proline (10%) and alanine (10%), which are hydrophobic amino acids; therefore, it cannot be dissolved in water and has unique self-assembly characteristics. Zein-based nanoparticles protect resveratrol from isomerization by UV light (Cheng et al., 2020). In addition to working as a carrier, zein can also boost antioxidant activity above that of the free ingredients (Shi et al., 2020).
In the current research, we used the antisolvent coprecipitation method to prepare Zein-SHA nanoparticles. Sodium hyaluronate can be complexed with zein, which can increase the stability of the nanoparticles in aqueous solution, and the resulting nanoparticles have pH-responsive characteristics. We used this approach to fabricate resveratrol-loaded nanoparticles, and then compared their in vitro antioxidant and anticancer activities with those of free resveratrol.
The ultimate aim was to show that these nanoparticles could be used to encapsulate resveratrol in a form that could be successfully utilized in functional food, nutrition, and pharmaceutical products.
| Purification of zein
Zein was dissolved in DMSO (0.2 g/ml). The solution was then precipitated in dichloroethane at a ratio of 1:10 (v/v), and this was repeated three times. The precipitate was then washed with petroleum ether and filtered with suction three times, and dried in a vacuum oven to obtain a white powder.
| Zein solution
Zein was dissolved in 90% (v/v) aqueous ethanol to prepare a 2.5% (w/v) zein solution, which was then stored in a refrigerator at 4°C until use.
| Resveratrol-zein solution
5 mg of RES was dissolved in the 2.5 mg/ml zein solution, magnetically stirred for 10 min, and stored in a refrigerator at 4°C until use.
| SHA solution
Low-molecular-weight sodium hyaluronate powder was dissolved in double-distilled water, magnetically stirred for 10 min and ultrasonicated for 30 s to fully dissolve it.
| Preparation of resveratrol-loaded Zein-SHA nanoparticles
The prepared RES-zein solution was added dropwise to the SHA solution at a ratio of 1:4 with continuous stirring at 700 rpm, and the ethanol was then removed through dialysis (MWCO, 3,500 Da) (Wang et al., 2017).
| Nanoparticle characterization
2.4.1 | Particle size and zeta potential measurements

The average particle size, polydispersity index, and zeta potential of the freshly prepared nanoparticle solution (pH = 5.0) were measured at 25°C using DLS (Brookhaven 90 Plus).
| X-ray diffraction
An X-ray CCD single-crystal diffractometer was used to record the X-ray diffraction (XRD) patterns of the RES-ZEIN-SHA NPs, the proportional physical mixture and the three individual components. The XRD patterns were recorded over a scanning range of 10°-40° (2θ) at a scanning rate of 5°/min.
| Transmission electron microscope
A nanoparticle solution of a certain concentration was dropped onto a carbon-coated copper mesh and air-dried, and the morphology of the nanoparticles was observed and analyzed by transmission electron microscopy (JEM-1011, JEOL).
| Resveratrol determination
An ultraviolet spectrophotometer was used to determine the content of resveratrol in the nanoparticles. An aliquot of freeze-dried nanoparticles (20 mg) was dissolved in 10 ml DMSO, and stirred for 2 hr in the dark. Then the solution was centrifuged at 13,700 g for 30 min. The supernatant was diluted with DMSO and analyzed by UV-visible spectroscopy at 327 nm, and the content of resveratrol was calculated by the standard curve, which was established using a standard solution in the range of 0-10 μg/ml (R 2 = .9977).
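As an illustration of the standard-curve calculation described above, the sketch below fits a linear calibration over the 0-10 μg/ml range and converts a measured absorbance back to a resveratrol concentration; all numerical values are made-up placeholders, not the study's data.

```python
import numpy as np

std_conc = np.array([0, 2, 4, 6, 8, 10])                   # ug/ml standards (placeholder values)
std_abs = np.array([0.00, 0.21, 0.40, 0.61, 0.79, 1.01])   # absorbance at 327 nm (placeholder)
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def res_concentration(sample_abs, dilution_factor=1.0):
    # concentration in the measured solution, scaled back by any DMSO dilution
    return (sample_abs - intercept) / slope * dilution_factor

print(round(res_concentration(0.52, dilution_factor=5), 2))   # ug/ml in the original extract
```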
| Particle yield and resveratrol-loading efficiency
The freshly prepared colloidal dispersion was dialyzed to remove free resveratrol and organic reagents, centrifuged at 1,200 g for 5 min to remove large particles, and then freeze-dried. Particle yield and resveratrol-loading efficiency were determined using equations (1) and (2); equation (2) defines the loading efficiency as the mass of RES in the nanoparticles divided by the total RES input, expressed as a percentage.
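A hedged sketch of the two calculations is given below. The loading efficiency follows the equation (2) definition stated above; the particle-yield definition used here (dried particle mass over total solids input) is an assumption, since equation (1) is not reproduced in this excerpt.

```python
def particle_yield(dried_particle_mass_mg, total_solids_input_mg):
    # assumed definition of equation (1): recovered particle mass / total input mass x 100%
    return dried_particle_mass_mg / total_solids_input_mg * 100.0

def loading_efficiency(res_in_particles_mg, total_res_input_mg):
    # equation (2): RES in nanoparticles / total RES input x 100%
    return res_in_particles_mg / total_res_input_mg * 100.0

print(particle_yield(42.0, 60.0), loading_efficiency(3.8, 5.0))   # hypothetical masses
```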
| In vitro resveratrol release
The in vitro release of resveratrol was measured using a previously reported dialysis method (Wang et al., 2018). Two millilitres of RES-loaded zein-SHA NPs at a certain concentration were placed in a dialysis bag. The dialysis bag was completely immersed in the dialysate (PBS solution, pH = 5.0 or pH = 7.4), and the measurement was carried out at 37°C and 100 rpm. At certain intervals, 1 ml of dialysate was withdrawn and replaced with 1 ml of the corresponding fresh buffer, and the content of resveratrol was measured by UV. The sampling points were 1, 2, 4, 6, 10, 12 and 24 hr.
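The sampling scheme above (withdrawing 1 ml and replacing it with 1 ml of fresh buffer) requires a small correction when cumulative release is computed, because drug removed in earlier samples must be added back. A sketch of that bookkeeping, with hypothetical volumes and concentrations, is:

```python
def cumulative_release(sample_concs_ug_per_ml, v_total_ml=50.0, v_sample_ml=1.0,
                       total_drug_ug=1000.0):
    # returns the cumulative percentage released at each sampling point
    released, withdrawn_ug = [], 0.0
    for c in sample_concs_ug_per_ml:
        in_vessel_plus_removed = c * v_total_ml + withdrawn_ug
        released.append(100.0 * in_vessel_plus_removed / total_drug_ug)
        withdrawn_ug += c * v_sample_ml          # drug lost to this 1 ml sample
    return released

print(cumulative_release([1.2, 2.5, 4.1, 5.8, 7.0, 7.6, 8.1]))   # placeholder readings
```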
| Stability studies
In order to determine the storage stability of the nanoparticles, the nanoparticles were stored in the dark at 25°C for 14 days, and the stability was evaluated by measuring the average size, PDI and zeta potential of the nanoparticles. The stability experiment was repeated three times.
| Determination of ABTS free radical scavenging ability
The ABTS radical scavenging ability was detected with reference to previous methods (May et al., 2015; Zhang et al., 2020). To obtain the ABTS radical solution, 7 mM ABTS solution and 4.9 mM potassium persulfate solution were mixed in equal volumes and allowed to react for 12 hr under dark conditions. The ABTS radical solution was then diluted with PBS buffer to an absorbance of 0.80 (±0.02) at 734 nm. 0.5 ml of the different samples was mixed with 3.5 ml of ABTS free radical solution and stabilized for 5 min in the dark, and the absorbance at 734 nm was measured.
The PBS buffer used to disperse the nanoparticles and the absolute ethanol used to dissolve resveratrol were used as blank controls, and the clearance rate was calculated by the following formula (Equation 3):

Scavenging rate (%) = (A 0 − A 1 )/A 0 × 100%

where A 0 is the absorbance of the blank control and A 1 is the absorbance of the sample.
| Determination of reducing power
A previously reported method was used to determine the reducing power of the samples (Gulcin, 2006; Huang et al., 2017). Different concentrations of resveratrol nanoparticles in distilled water, or pure resveratrol dissolved in ethanol, were added to a mixture of PBS (5 ml) and potassium ferricyanide (5 ml, 1%, w/v). The solution was incubated in a 50°C water bath for 20 min and then cooled to room temperature.

After cooling, 2.5 ml of the resulting solution was mixed with distilled water (2.5 ml), reacted with FeCl 3 solution (0.5 ml, 0.1%, w/v) for 10 min, and the absorbance was then measured at 700 nm in a spectrophotometer. All samples were carefully protected from light throughout the experimental procedure.
| Cell culture and anticancer assay
The MTT method was used to detect the toxicity of the samples towards mouse breast cancer cells (4T1). 1 × 10 5 cells were seeded in a 96-well plate containing DMEM medium and cultured in a 37°C, 5% carbon dioxide incubator. After 12 hr, medium containing different concentrations of the samples was added. Cell viability was then calculated from the absorbances A t and A c , which represent the sample-treated and blank-treated cells, respectively.
| Statistical analysis
The data were expressed as mean ± standard deviation (SD). Data were processed and analyzed using the SPSS version 22.0 (Windows, SPSS Inc.). p < .05 was considered indicative of statistically significant differences.
| Effect of SHA concentration on the stability of nanoparticles
The isoelectric point of zein is 6.2, which is due to its unique amino acid ratio. These physicochemical properties are used to prepare core-shell nanoparticles. Zein nanoparticles have a larger particle size when the SHA concentration is low, and are unstable and prone to aggregation. When the ratio of zein to SHA was 6:1, the particle size was 209.5 nm. As the relative content of SHA increased, the particles formed were smaller and more stable. When the ratio of zein to SHA was 1:1, the particle size was 112.3 nm. The PDI of nanoparticles with different SHA levels was 0.135-0.200, indicating that their particle size distribution was relatively narrow (Figure 1a).
This showed that adding SHA can significantly reduce the size of the nanoparticles and stabilize the nanocolloid solution. The most likely reason is that, as the amount of SHA increases, the particle size gradually becomes smaller, possibly because the degree of particle aggregation increases and the structure becomes tighter.
The results showed that with the increase of the cationic SHA concentration, the zeta potential changed from −29.83 to −20.93 mV (Figure 1b), which is related to the adsorption of cationic SHA molecules on the surface of anionic nanoparticles. This result is consistent with the formation of small, uniformly dispersed nanoparticles by electrostatic interaction. Different ratios of zein and SHA (zein [5 mg]: 1:1, 1:2; zein [2.5 mg]: 1:1, 1:2, w/w) were used to make Zein-SHA with RES NPs in order to optimize the nanoparticle formulation.
As shown in Figure 1c, when the concentration of zein was 5 mg/ml and the ratio of zein to SHA was 1:1 or 1:2, TEM (Figure 1d) showed that the nanoparticles were unstable and aggregated, whereas at a zein concentration of 2.5 mg/ml the particles were relatively well dispersed. Although the particle size distributions measured by DLS were not much different, based on the TEM observations a zein concentration of 2.5 mg/ml was selected to prepare the nanoparticles.
| X-ray diffraction analysis
The XRD patterns of the nanoparticles and their constituents were tested (Figure 2). It can be seen from Figure 2a that the 2θ diffraction peaks of resveratrol alone were at 16.30°, 19.15°, 22.30°, 23.55°, 25.20° and 28.25°, confirming that resveratrol exists in crystalline form. RES in the nanoparticles did not show any characteristic intense sharp peaks, indicating that resveratrol was in an amorphous form (Figure 2b).
| Particle yield and RES-loading efficiency
Encapsulation efficiency and particle yield affect the commercial application of nanocarriers. In order to pursue higher economic benefits and reduce waste, high encapsulation efficiency and particle yield are of great significance in delivery systems. Therefore, we explored the effect of adding different concentrations of resveratrol on the particle yield and the encapsulation efficiency of resveratrol.
The results showed that both the particle yield and the loading efficiency significantly decreased with increasing initial resveratrol concentration (Table 1).
| Resveratrol release
The release of resveratrol from the nanoparticles was affected by pH. The results showed that the resveratrol-encapsulating nanoparticles have good acid-responsive release characteristics, with resveratrol being released more readily at the pH of the tumor microenvironment (pH = 5.0) than under normal physiological conditions (pH = 7.4).
| Stability studies
The stability of the empty Zein-SHA nanocarriers and the RES-loaded Zein-SHA NPs was compared. The newly prepared nanoparticles had a relatively narrow particle size distribution. After 14 days at 25°C, the particle size of the Zein-SHA with RES NPs remained unchanged, indicating better stability (Figure 4d), whereas the particle size of the empty carrier became larger with a wider particle size distribution (Figure 4a).

The PDI of the two kinds of nanoparticles did not change significantly (Figure 4b,e), but the zeta potential of the empty carrier changed greatly (Figure 4c), while that of the Zein-SHA with RES NPs changed less (Figure 4f), showing that the Zein-SHA with RES NPs had good storage stability.
| Antioxidant activities of encapsulated resveratrol
Resveratrol has good antioxidant activity. Based on this, the ABTS free radical scavenging rate and Fe 3+ reducing ability of free and nanoencapsulated resveratrol were investigated. The results showed that Zein-SHA with RES NPs had a significantly different ABTS free radical scavenging rate compared with RES and ascorbic acid (Ac; Figure 5a). The SC 50 of Zein-SHA with RES NPs, RES, and
| In vitro cytotoxicity
Breast cancer is one of the killers threatening women's health (Freddie et al., 2018).
| CONCLUSIONS
The poor solubility of resveratrol limits its application in the food industry. The resveratrol-loaded nanoparticles we developed can solve the problems of poor solubility and low bioavailability. The nanoparticles formed are relatively small, have a narrow particle size distribution, and have good antiaggregation stability in aqueous solutions.
Moreover, they are able to efficiently incorporate resveratrol into their hydrophobic cores. Encapsulation of the resveratrol improved its water dispersibility, antioxidant activity and anticancer activity. Overall, our results suggest that the Zein-SHA with RES NPs developed in this study may be highly effective nanocarriers for hydrophobic nutraceuticals.
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (31801477, 32072169, 51773198, 51903119) and China Agriculture Research System (CARS0230).
CONFLICT OF INTEREST
None of the authors has a conflict of interest related to this research.
Removal of an anionic azo dye direct black 19 from water using white mustard seed ( Semen sinapis ) protein as a natural coagulant
In this study, standard jar tests were conducted using white mustard seed protein (WMSP) as a natural coagulant to remove direct black 19 (DB-19) from its aqueous solution. Comparative coagulation tests were performed using commercial polyaluminum chloride (PAC). The results showed that DB-19 removal by WMSP increased with increasing settling time and reached the maximum removal at 180 min. The DB-19 removal decreased from 98.4 to 46.2% as pH increased from 4 to 10. The most effective temperature for DB-19 removal was 25 °C. The removal of DB-19 was weakened by the presence of Na 2 S 2 O 4 . Overall, WMSP was more efficient than PAC for DB-19 removal in all experiments except at pH 4 and 5. The mechanism of the removal of DB-19 by WMSP could be attributed to adsorption and charge neutralization processes.
INTRODUCTION
Globally, the textile industry is one of the largest fresh water consumers and consequently generates large amounts of wastewater, as reflected in the data of the China Statistical Yearbook on the Environment. Among the available treatment techniques, coagulation/flocculation has many advantages, such as cost effectiveness, convenience in operation, low energy consumption, and no generation of harmful and toxic intermediates (Shi et al.; Verma et al. a).
Currently, the commercial coagulants used for water treatment consist of two major groups, namely inorganic coagulants and organic polymers. Though both play a very important role in water treatment, they have many drawbacks, such as inefficiency at low temperature, changes of pH, corrosion of instruments, and generation of a huge amount of non-biodegradable sludge. Furthermore, their residuals in water can have adverse impacts on living beings. For example, studies have revealed that aluminum is neurotoxic and may contribute to the pathogenesis of Alzheimer's disease (Flaten; Polizzi et al.). Also, the residual monomers of polyacrylamide (PAM) present in treated water have been proven to be neurotoxic and carcinogenic (Šciban et al.). Hence, the development of eco-friendly coagulants has been a major topic of research in water treatment.
Plant-based coagulants, including mustard seed, Jatropha curcas seed, guar gum, copra, Cactus latifaria, Prosopis juliflora seed, common bean, chitosan, orange waste and so on, have been used to remove a large variety of pollutants, including turbidity, algae, dyes, humic acid, bacteria and metals, from water due to their advantages such as abundant sources, high efficiency, low sludge production, biodegradability and nontoxicity (Pritchard et al.). For instance, Ndabigengesere et al. () reported that the active agents in aqueous Moringa extracts were cationic proteins with a molecular weight of 13 kDa and an isoelectric point between 10 and 11. The optimal dosage of shelled Moringa oleifera seed was almost the same as that of alum, whereas purified proteins were more effective than alum for turbid water treatment. Furthermore, the innocuous coagulant did not affect the pH and conductivity of treated water, with four or five times less chemical sludge created compared to alum.
Recently, Boulaadjoul et al. () reported that Moringa oleifera seed powder was used to enhance the primary treatment of paper mill effluent. The results indicated that the turbidity and COD abatements were 96.02 and 97.28% using Moringa oleifera seed powder as coagulant, whereas the respective removals of turbidity and COD by alum were 97.1 and 92.67%, indicating that Moringa oleifera is a very efficient natural coagulant for the treatment of paper mill effluent. Betatache et al. () reported that the optimum dosage of prickly pear cactus was 0.4 g/kg for sewage sludge conditioning, which was much more efficient than three polyelectrolytes, namely, 0.8 g/kg for Chimfloc C4346, 80 g/kg for FeCl 3 and 60 g/kg for Al 2 (SO 4 ) 3 .
In recent research, mustard seed proteins with molecular weights of approximately 6.5 and 9 kDa have been proven to be more efficient than Moringa oleifera seed proteins at removing turbidity from pond water (Bodlund et al.).
Although mustard seed has advantages over Moringa oleifera seed in its wider distribution, abundance, and availability at low price, the only application of mustard protein for water treatment reported in the published literature has been turbidity removal, which limits its versatility in water treatment. Hence, our study is aimed at investigating the potential of mustard seed protein as a natural coagulant for the removal of dye from its aqueous solution.
In this study, direct black 19 (DB-19, Figure 1) was selected as a target pollutant to test the coagulation ability of white mustard seed protein (WMSP) as it is a widely used anionic dye, especially in some Asian countries (Shi et al. ). Meanwhile, both this dye and its reduction product have been proven to be mutagenic (Joachim et al. ).
A commercial coagulant, polyaluminum chloride (PAC), which is the most widely used coagulant for water treatment in China, was selected for the same experiments as a comparison to evaluate the coagulating performance of WMSP. The removal performance with respect to pH, settling time, temperature, coagulant dosage and the presence of inorganic salt was investigated.
MATERIALS AND METHODS
Preparation of coagulants and dye-containing wastewater

White mustard seed (Figure 2), a traditional Chinese medicine used widely to resolve phlegm and dispel colds (vocabulary of traditional Chinese medical science), was purchased from a local pharmacy in Zhengzhou, China.
The WMSP was extracted in the same way as described in our previous study (Tie et al.). The only difference from the previous method was that a dialysis tube with a molecular weight cut-off of 3,500 Da was used in this study. The WMSP content was measured at 596 nm (UVmini-1240, Shimadzu, Japan) using bovine serum albumin as the standard (Zhou & Chen). The PAC solution at the same concentration was prepared by adding PAC to deionized water.
The DB-19 solution used in this study was prepared using the same method described in our previous study (Tie et al. ). The pH was adjusted to different values in the range of 4 to 9 using 0.1 M HCl and NaOH solution.
Jar tests
The jar tests were carried out using standard jar test equipment, with the residual DB-19 concentration determined at 655 nm. The removal rate of DB-19 was calculated by the following equation:

Removal rate (%) = (C 0 − C e )/C 0 × 100%

where C 0 and C e are the initial and final concentrations of DB-19 in solution (mg L −1 ), respectively.
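For completeness, the removal-rate calculation can be expressed as a one-line helper (an illustrative sketch; the example values are arbitrary):

```python
def removal_rate(c0_mg_per_l, ce_mg_per_l):
    # (C0 - Ce) / C0 x 100%
    return (c0_mg_per_l - ce_mg_per_l) / c0_mg_per_l * 100.0

print(removal_rate(100.0, 1.6))   # -> 98.4 per cent removal
```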
Characterization
Fourier transform infrared spectroscopy (FTIR, Thermo Scientific Nicolet iS10, USA) and X-ray photoelectron spectroscopy (XPS, Thermo Scientific ESCALAB 250Xi, USA) were used to characterize WMSP, DB-19, and the reaction products between WMSP and DB-19. The DB-19 removals at different pH values using the two coagulants are shown in Figure 4. In the same tests using PAC as the coagulant, the removal efficiency was lower at each dosage. Therefore, WMSP was shown to be more efficient for DB-19 removal than PAC in this experiment.
RESULTS AND DISCUSSION
Effect of temperature on DB-19 removal

Figure 6 shows the effect of temperature on DB-19 removal in the range of 10-45 °C. It can be seen that the removal rate varied from 31.7 to 43.7% for PAC and from 70.0 to 83.2% for WMSP. At each temperature set for the experiment, WMSP performed better than PAC for DB-19 removal.
The optimal temperature for DB-19 removal by both coagulants was 25 °C, and the corresponding removal rates were 43.7 and 83.2% for PAC and WMSP, respectively. The test results indicated that WMSP was more efficient than PAC at all selected temperatures.
Effect of the presence of inorganic salt on DB-19 removal

Sodium sulfate is widely adopted to improve dyeing performance and fixation by enhancing the transfer of dye molecules from the solution to cotton fibers when direct dyes are used (Teixeira et al.). Hence, the effect of Na 2 SO 4 on DB-19 removal was investigated. Figure 7 shows that, overall, Na 2 SO 4 weakened the DB-19 removal for both coagulants, whereas WMSP still outperformed PAC in the presence of Na 2 SO 4 .
Mechanism of DB-19 removal by WMSP
The effect of pH on flocculation is critical for exploring the coagulation mechanism (Teixeira et al.). Figure 8 shows the effect of pH on WMSP content. The lowest WMSP content occurred at pH 9.0, indicating that the isoelectric point of WMSP was around pH 9.0, since a protein, as an amphoteric compound, has its lowest solubility at the isoelectric point (Zhao et al.). Hence, WMSP was positively charged by adsorbing H + on its amino groups over the pH range of 4-9 set for the experiment. In the proposed adsorption and charge neutralization reaction between the two, D and W denote DB-19 and WMSP, respectively.
The decrease of residual DB-19 content with descending pH shown in Figure 3 can be explained by the mechanism mentioned above. The adsorption and charge neutralization between WMSP and DB-19 was weakened, resulting in a higher residual DB-19 content, since the WMSP molecule was negatively charged at pH 10, which is above its isoelectric point of pH 9. However, the reaction was reinforced by the increase in positive charge of WMSP with decreasing pH below its isoelectric point, resulting in a lower DB-19 residual.
CONCLUSIONS
This study describes an attempt to remove the anionic dye DB-19 from water using proteins extracted from white mustard seed as a natural coagulant. The results indicate that the coagulation efficiency of WMSP was better than that of PAC at the same dosage in all experiments except at pH 4 and 5. The main mechanism of the removal of DB-19 by WMSP could be adsorption and charge neutralization.
To our knowledge, this is the first report on the removal of an anionic dye using WMSP as a natural coagulant, and the results showed that WMSP has excellent coagulation ability for DB-19 removal.
Long-Term Health Outcomes of Infantile Spasms Following Prednisolone vs. Adrenocorticotropic Hormone Treatment Characterized Using Phenome-Wide Association Study
Objective To determine differences in long-term health and neurological outcomes following infantile spasms (IS) in patients treated with adrenocorticotropic hormone (ACTH) vs. prednisolone/prednisone (PRED). Methods A retrospective, case-control study of patients with an International Classification of Diseases, Ninth Revision, Clinical Modifications (ICD-9) diagnosis of IS, identified over a 10-year period from a national administrative database, was conducted. IS patients treated with ACTH or PRED were determined and cohorts established by propensity score matching. Outcomes, defined by hospital discharge ICD codes, were followed for each patient for 5 years. Related ICD codes were analyzed jointly as phenotype codes (phecodes). Analysis of phecodes between cohorts was performed including phenome-wide association analysis. Results A total of 5,955 IS patients were identified, and analyses were subsequently performed for 493 propensity score matched patients, each in the ACTH and PRED cohorts. Following Bonferroni correction, no phecode was more common in either cohort (p < 0.001). However, assuming an a priori difference, one phecode, abnormal findings on study of brain or nervous system (a category of abnormal neurodiagnostic tests), was more common in the PRED cohort (p <0.05), and was robust to sensitivity analysis. Variability in outcomes was noted between hospitals. Significance We found that long-term outcomes for IS patients following ACTH or PRED treatment were very similar, including for both neurological and non-neurological outcomes. In the PRED-treated cohort there was a higher incidence of abnormal neurodiagnostic tests, assuming an a priori statistical model. Future studies can evaluate whether variability in outcomes between hospitals may be affected by post-treatment differences in care models.
INTRODUCTION
Infantile spasms (IS) is a severe pediatric epilepsy disorder typically presenting in the first year of life (1). Hallmarks of IS include spasm-like seizures that occur in clusters with progressive worsening, and a distinctive electroencephalogram (EEG) pattern termed hypsarrhythmia (1,2). Patients are at risk for adverse long-term outcomes, including increased mortality, risk for intractable epilepsy, and neurodevelopmental impairment (3). Determination of treatment guidelines for IS has evolved over the past several decades (2,4). First-line treatment for IS consists of steroid or steroid-inducing treatment, but the choice of prednisolone/prednisone (PRED) or adrenocorticotropic hormone (ACTH) has conflicting or equivocal data on efficacy, including time to remission, resolution of hypsarrhythmia, and outcomes (2,5). Further, ACTH is significantly more expensive without evidence supporting its cost-effectiveness (6). Our goal was to characterize the long-term neurological and other health outcomes for IS, comparing patients who received either ACTH or PRED, using information from a nationwide pediatric clinical administrative database.
Study Design and Participants
We performed a retrospective analysis of data from the Pediatric Hospital Information System (PHIS). PHIS is a nationwide database containing pediatric patient data from 52 children's hospitals (7), including inpatient visit data, as well as some observation, emergency department, ambulatory surgery, and clinic visits data. From PHIS, we identified all patients with an International Classification of Diseases, Ninth Revision, Clinical Modifications (ICD-9) code indicative of infantile spasms (ICD-9-CM: 345.6, 345.60, 345.61), from January 1, 2004 to September 30, 2015. Prior work has indicated the effectiveness of using ICD codes for identifying patients with IS (8). Outcomes for each patient were measured for up to five years after initial IS diagnosis.
Standard Protocol Approvals, Registrations, and Patient Consents
This project used de-identified data and was not considered human subjects research, and was exempted by the Institutional Review Boards at the University of Utah and Intermountain Healthcare.
Data Preparation
IS patients were identified by an ICD-9-CM code of IS, either 345.60 (Infantile Spasms, without mention of intractable epilepsy) or 345.61 (Infantile spasms, with intractable epilepsy). For inclusion, patients had to receive ACTH or PRED within the 21 days following IS diagnosis. The diagnosis of IS and the administration of ACTH or PRED were determined from PHIS. Patients on both ACTH and PRED (96 patients) or on neither (3,497 patients) were removed. For the remaining 2,362 patients, their ICD-9 codes were converted to phecodes. Phenome-wide association study (PheWAS) is a methodology for evaluating patients by grouped diagnostic codes (9). PheWAS categorizes each ICD-9 code into one of 1,866 "phecodes", which are groups of similar diseases or traits. A phecode was assigned to a patient if they had a matching phecode-associated ICD-9 diagnosis. Phecodes were rounded down to the nearest whole integer to reduce granularity in the data for analysis. For instance, the phecode 008.xx indicates an intestinal infection, with the numbers after the decimal point indicating more granular details concerning the phecode (for example, 008.52 indicates an intestinal infection due to C. difficile).
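The ICD-9-to-phecode grouping and the integer rounding can be illustrated with a short Python sketch. This is only an illustration: the lookup values below are hypothetical placeholders, not the actual PheWAS map or the authors' pipeline.

```python
import math

# Hypothetical excerpt of an ICD-9 -> phecode lookup; real PheWAS maps
# contain thousands of entries, and these values are illustrative only.
ICD9_TO_PHECODE = {
    "008.45": 8.52,    # intestinal infection due to C. difficile -> phecode 008.52
    "345.60": 345.60,  # infantile spasms, without intractable epilepsy (placeholder value)
    "345.61": 345.60,  # infantile spasms, with intractable epilepsy (placeholder value)
}

def assign_rounded_phecodes(icd9_codes):
    """Map a patient's ICD-9 codes to phecodes and round down to whole integers,
    mirroring the granularity reduction described in the text (e.g., 008.52 -> 8)."""
    phecodes = set()
    for code in icd9_codes:
        phecode = ICD9_TO_PHECODE.get(code)
        if phecode is not None:
            phecodes.add(math.floor(phecode))
    return phecodes

print(assign_rounded_phecodes(["008.45", "345.61"]))  # {8, 345}
```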
The data was first filtered based on the medication given: patients on ACTH or PRED were retained, and patients on neither or both were discarded. For patients with multiple IS diagnoses, the earliest visit was used as the index date. To more accurately ensure that ACTH or PRED was given for an IS diagnosis and not a different diagnosis, additional requirements were as follows (a minimal sketch of these rules follows the list):
1. The patient's age at the time of IS diagnosis had to be < 1 year (< 365 days).
2. The patient's age (in days) at the time of IS diagnosis had to be less than or the same as the age when the patient received the first dose of either ACTH or PRED.
3. The administration of medication needed to occur no more than 21 days after the initial IS diagnosis.
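A minimal pandas sketch of these inclusion rules is given below. The column names (drug, age_days_at_is, age_days_at_first_drug, days_from_is_to_drug) are hypothetical and do not reflect the actual PHIS schema.

```python
import pandas as pd

def apply_inclusion_rules(patients: pd.DataFrame) -> pd.DataFrame:
    """Filter a per-patient table according to the rules listed above.
    Hypothetical columns: drug ('ACTH', 'PRED', 'BOTH', or 'NONE'),
    age_days_at_is, age_days_at_first_drug, days_from_is_to_drug."""
    keep = (
        patients["drug"].isin(["ACTH", "PRED"])                               # exactly one of the two drugs
        & (patients["age_days_at_is"] < 365)                                  # IS diagnosed before 1 year of age
        & (patients["age_days_at_first_drug"] >= patients["age_days_at_is"])  # drug given at or after IS diagnosis
        & (patients["days_from_is_to_drug"] <= 21)                            # drug started within 21 days of diagnosis
    )
    return patients[keep]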
All diagnoses/ICD codes/phecodes between 6 months and 5 years after initial IS presentation were included for analysis, resulting in 1,916 patients and 66,257 phecodes prior to propensity score matching. Duplicates for phecodes and medical record number were removed. Prior to this removal, data was analyzed for Wilcoxon-Mann-Whitney tests to determine changes in phecode frequencies on the individual level. Chronic condition complex (CCC) data was recorded using two categories: (10) non-neurological CCC codes (termed "Non-Neuro CCC") and neurological CCC (termed "Neuro CCC") codes.
CCC determination was used to establish similar patients for matching. For Neuro CCC and Non-Neuro CCC codes, a binary flag (1 yes, 0 no) was used to indicate whether a patient was diagnosed with a chronic condition prior to the initial IS diagnosis. To prevent all patients from receiving a Neuro CCC flag because of their IS diagnosis, and instead only identify those with additional neurological CCCs, the IS CCC flag was removed unless the ICD-9-CM code also indicated a concurrent
FIGURE 1 | Cohort identification. A total of 5,955 patients with infantile spasms (IS) were identified in the Pediatric Hospital Information System (PHIS) using ICD-9 codes. We then filtered for medications and timeframe, performed propensity score matching, and finally, 986 patients were selected for analyses (493 in each drug cohort). IS, Infantile Spasms; ph, phecode; n, number of patients. *The dataset indicated for the Wilcoxon-Mann-Whitney test is queried using match results.
Discharge year was determined based on the first visit for IS to account for changes in IS treatment over time. Sex, race/ethnicity, urban flag, and payer were all based on a patient's first visit for any diagnosis. The Non-Neuro CCC flag and Neuro CCC flag were determined by filtering for phecodes prior to a patient's initial IS diagnosis.
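A minimal sketch of deriving such pre-index binary flags with pandas is shown below; the diagnosis-level column names (patient_id, diagnosis_date, is_index_date, is_neuro_ccc, is_non_neuro_ccc) are hypothetical assumptions, not the study's actual fields.

```python
import pandas as pd

def pre_index_ccc_flags(diagnoses: pd.DataFrame) -> pd.DataFrame:
    """Derive binary Neuro CCC / Non-Neuro CCC flags from diagnoses recorded before
    each patient's initial IS diagnosis (the index date). Hypothetical boolean columns
    is_neuro_ccc and is_non_neuro_ccc mark the CCC group of each diagnosis row."""
    before_index = diagnoses[diagnoses["diagnosis_date"] < diagnoses["is_index_date"]]
    flags = (
        before_index.groupby("patient_id")[["is_neuro_ccc", "is_non_neuro_ccc"]]
        .any()          # True if the patient had any such CCC before the index date
        .astype(int)    # 1 = yes, 0 = no, matching the binary flags described above
    )
    return flags
```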
Propensity Score Matching
Propensity score matching was performed on sex, payer (government, private, unknown, or other), urban flag (based on Rural-Urban Commuting Area (RUCA) codes), race/ethnicity, year of discharge, Neuro CCC, and Non-Neuro CCC. Matching was carried out in R (version 3.6.1) using the MatchIt package, with a 1:1 matching ratio of ACTH to PRED patients. Among 1,916 total ACTH or PRED patients, 986 patients matched (493 in each drug cohort). We used k-nearest neighbors for matching and a caliper of 0.20 standard deviations.
To optimize the matching process, the variables discharge year and race were further categorized. Year of discharge was simplified into 3 subgroups: group 1, discharge years 2008-2011; group 2, 2012-2015; and group 3, 2016-2019. Due to the small proportion of persons with Native American, Black Hispanic, and Pacific Island race/ethnicity in our study, these individuals were placed in a single group.
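The matching itself was done in R with MatchIt; the sketch below shows the same idea (a logistic-regression propensity score followed by 1:1 greedy nearest-neighbor matching with a 0.2-standard-deviation caliper) in Python for illustration only. The column names, the greedy strategy, and the caliper handling are simplifying assumptions, not the authors' implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df: pd.DataFrame, covariates, treatment_col="is_acth", caliper_sd=0.20):
    """Greedy 1:1 nearest-neighbor matching on a logistic-regression propensity score.
    Simplified Python analogue of the R MatchIt call described in the text."""
    X = pd.get_dummies(df[covariates], drop_first=True)
    propensity = LogisticRegression(max_iter=1000).fit(X, df[treatment_col]).predict_proba(X)[:, 1]
    df = df.assign(ps=propensity)
    caliper = caliper_sd * df["ps"].std()

    treated = df[df[treatment_col] == 1]
    controls = df[df[treatment_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        distances = (controls["ps"] - row["ps"]).abs()
        if distances.empty or distances.min() > caliper:
            continue                    # no control within the caliper for this treated patient
        best = distances.idxmin()
        pairs.append((idx, best))
        controls = controls.drop(best)  # matching without replacement
    return pairs
```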
Statistical Analysis
Frequency calculations of the cohort (ACTH or PRED) phecodes were performed using Python. Differences between the groups were evaluated either by a two-sided Fisher's exact test or distributionally, with the number of distinct occurrences for a given phecode between the two cohorts assessed in rank distribution by the Wilcoxon-Mann-Whitney (WMW) test (SciPy v1.6.1). Although each patient in the ACTH cohort was matched with a patient in the PRED cohort, this similarity was not considered causal enough to justify using paired method tests (11). Phecodes were then sorted by lowest (most significant) p-value for each cohort. Percentage differences between the phecodes of the two cohorts were also calculated. Adjusted p-values were calculated using a Bonferroni correction. To determine whether there were differences in phecode frequencies at the individual level, as well as between the two drug groups, a Wilcoxon-Mann-Whitney test was performed on the data prior to duplicate phecode removal. For sensitivity analysis, to determine whether the hospitals with the most IS patients had a disproportionate effect on the results vs. an overall trend in the data, the holdout (leave one out) method was performed on the five hospitals with the most patients. The top hospital in terms of patient contribution to data was removed prior to matching, and the data analysis was rerun. This was done sequentially for the top five contributing hospitals.
FIGURE 2 | Propensity score matching. Absolute standardized differences in means between the adrenocorticotropic hormone (ACTH) and prednisolone/prednisone (PRED) drug cohorts are shown for each covariate before matching (white dots) and after matching (black dots); the decrease after matching indicates a higher degree of similarity between the cohorts. "Distance" indicates the absolute difference between the propensity scores of matched patients.
FIGURE 3 | Phecode frequency comparison. Neither drug cohort had phecode frequencies above the Bonferroni-adjusted p-value. However, the PRED cohort had two neurological phecode frequencies above the p < 0.05 line, abnormal findings on study of brain/nervous system and infantile cerebral palsy (percentage differences of 7% for both phecodes); the ACTH cohort had one, hemiplegia (percentage difference of 4%).
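A minimal SciPy sketch of the per-phecode comparison described in the Statistical Analysis paragraph above, together with the Bonferroni threshold; the counts and the number of phecodes tested below are illustrative values, not the study's data.

```python
from scipy.stats import fisher_exact, mannwhitneyu

def phecode_fisher_p(n_acth_with, n_acth_total, n_pred_with, n_pred_total):
    """Two-sided Fisher's exact test on a 2x2 table of patients with/without a phecode."""
    table = [
        [n_acth_with, n_acth_total - n_acth_with],
        [n_pred_with, n_pred_total - n_pred_with],
    ]
    _, p = fisher_exact(table, alternative="two-sided")
    return p

n_phecodes_tested = 500                      # illustrative; Bonferroni divides alpha by this count
bonferroni_alpha = 0.05 / n_phecodes_tested

p = phecode_fisher_p(40, 493, 75, 493)       # illustrative counts per cohort
print(p, p < bonferroni_alpha)

# Rank-based comparison of per-patient phecode occurrence counts between cohorts,
# analogous to the Wilcoxon-Mann-Whitney test run before duplicate removal (illustrative data).
acth_counts = [0, 1, 2, 1, 0, 3, 1, 2]
pred_counts = [1, 2, 2, 3, 1, 4, 2, 3]
print(mannwhitneyu(acth_counts, pred_counts, alternative="two-sided"))
```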
Data Availability Statement
All data are available on request to the authors. Data preparation was performed in Python (version 3.7.7). Code and software used by the authors are freely available, and if not otherwise indicated, are available at GitHub (https://github.com/Monika-Baker) or upon request.
RESULTS
From an initial 5,955 IS patients identified, following sorting and exclusions, we identified 1,916 patients who had taken either ACTH or PRED (Figure 1). Selected demographic data are provided in Table 1. After propensity score matching (Figure 2), final cohorts of 493 patients each for ACTH and PRED were established. All ICD-9 codes related to each patient were then collected for 5 years after initial date of IS diagnosis and grouped into phecodes.
Phecode frequencies and percentage differences were compared for the ACTH and PRED cohorts (Figure 3). Neither of the treatment groups had phecode frequencies with significant p-values following Bonferroni correction. However, the PRED-treated group had two neurological conditions, abnormal findings on study of brain/nervous system and infantile cerebral palsy, with significant p < 0.05 values assuming an a priori hypothesis of a difference in neurological outcomes (Table 2; percentage rate difference for both was 7%). The ACTH-treated group also had one neurological condition that reached p < 0.05, hemiplegia (percentage rate difference 4%).
Following sensitivity analysis (leave one out/holdout), abnormal findings on study of brain/nervous system retained significance and did not dramatically fluctuate, but hemiplegia and infantile cerebral palsy did fluctuate, with p-values rising above 0.05 (Figure 4). Analysis with the Wilcoxon-Mann-Whitney test indicated that these neurological phecodes were also more prevalent at the individual level (abnormal findings on the study of brain/nervous system and infantile cerebral palsy for PRED patients, hemiplegia for ACTH patients) as well as at the cohort level. However, only abnormal findings on the study of brain/nervous system was robust to the sensitivity analysis.
FIGURE 4 | Sensitivity analysis. Graph of the leave-one-out analysis; p-value on the y-axis and rank order of contributing hospitals on the x-axis. The top patient-contributing hospital was left out prior to propensity score matching and the data analysis rerun, including Fisher's exact test. This was performed sequentially for each of the top hospitals. The p-value for the phecode abnormal findings on study of brain/nervous system remained similar. However, the p-values for infantile cerebral palsy and hemiplegia fluctuated substantially when the first and third hospitals were left out, respectively, indicating that these hospitals were skewing the frequencies for these two phecodes and that it is not an overall trend in the data.
DISCUSSION
In this large, national-level, long-term analysis of outcomes for IS patients treated with ACTH or PRED, we found no differences in neurological or non-neurological conditions. However, assuming an a priori hypothesis of differences in neurological outcomes and thus not performing Bonferroni multiple comparisons correction, PRED-treated IS patients were more likely, and at higher rates, to have two neurological phecodes, abnormal findings on the study of brain/nervous system and infantile cerebral palsy. ACTH-treated IS patients were more likely, and at higher rates, to have the neurological phecode of hemiplegia, assuming the same a priori hypothesis.
Only one neurological finding, abnormal findings on the study of brain/nervous system, was robust to the sensitivity (holdout) analysis. However, the significance for hemiplegia and infantile cerebral palsy did fluctuate, indicating that the differences in frequencies for these phecodes were driven by data from a single hospital (or a few hospitals). Interestingly, two of the phecodes were more common in the PRED-treated group following analysis with the Wilcoxon-Mann-Whitney test ("Abnormal findings on study of brain/nervous system" and "Hemiplegia, infantile cerebral palsy"). This indicates that the increases in the frequencies of these two phecodes were driven at the individual level, i.e., multiple instances of the same diagnosis (phecode) in the same patient, from different hospital admissions.
Limitations of the study are its retrospective nature, inherent limitations of matching, and that most of the data from PHIS are from in-patient hospitalization. As such, the in-patient source of the majority of data limits quantification of certain disorders, such as developmental delay. However, although we were unable to quantify the absolute number of IS patients with a diagnosis such as developmental delay, our analysis is still able to evaluate the ratios or proportions, and thus relative differences, between the PRED and ACTH cohorts. Due to the additional complexities in analysis, for this study, we did not evaluate patients who had taken both PRED and ACTH or other medications (e.g., vigabatrin) (12). An additional limitation of the dataset was that a large number of IS patients were listed with neither ACTH nor PRED, suggesting that IS patients treated solely with outpatient prescription management, at least within our inclusion time frame, were not included in our analysis. Importantly, our data showed no differences in epilepsy outcomes between the PRED and ACTH groups, which has been a concern regarding IS treatment (2,4). It is unclear why there is an observed increase in the PRED group of abnormal findings on the study of brain/nervous system. This phecode encompasses multiple nonspecific neurological findings in various neurodiagnostic tests, including cerebrospinal fluid, radiological tests, and EEGs. Further studies should be performed to identify which of these non-specific findings should be focused on, and whether they are due to differences in side effects or disease management.
In conclusion, we have found that PRED and ACTH treatment for IS have similar long-term outcomes for most health conditions, including most neurological conditions. Further, our study is one of the few for IS that considers long-term outcomes, including non-neurological conditions, whereas most studies evaluate immediate treatment response (for example, Grinspan et al.) (13) or long-term outcomes only for cognitive or epilepsy-related aspects (14,15). As some outcomes appeared to be correlated with specific hospitals, future studies can evaluate whether variability in outcomes may be affected by post-treatment differences in care models.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University of Utah IRB. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Design principles for internet skills education: results from a design-based research study in higher education
The current generation of students lives in a globally connected world where Internet technologies are ubiquitous. As a result, they learn how to use various digital tools from a very young age, although, for the most part, the skills they develop are not adequate to use internet technologies in academic settings effectively. Additionally, while digital skills are essential in Higher Education (HE), developing programs for the effective use of internet technologies is still an issue in question. Research lacks empirical investigations regarding the design of such programs. To address this gap, the authors applied a design-based research (DBR) methodology to empirically explore an instructional intervention that aimed to enhance undergraduate students’ digital skills for the effective use of the Internet during their studies. Specifically, the authors drew from multiple sources and utilized a triangulation approach to interpreting the findings emphasizing the aspects of digital skills development and assessment and learning design. The results clearly show that the project-based learning intervention and the proposed design principles can positively impact digital skills development and support learning in academic settings. The authors conclude with implications for further research in the field focusing on digital skills frameworks, assessment instruments, instructional approaches, and learning content. Supplementary Information The online version contains supplementary material available at 10.1007/s43545-022-00428-2.
Introduction
Several organizations worldwide have acknowledged that the preparation of future generations for the digital world is necessary as digital skills are becoming increasingly important in a wide range of professions (Adams Becker et al. 2018;Jørgensen, 2019;Sursock, 2015). In this context, universities ensure that graduates possess the necessary skills to utilize digital technologies and be prepared for the labor market (Jørgensen, 2019). Digital skills are also crucial during university studies (Brooks, 2016;Dahlstrom and Bichsel, 2014). Students who possess such skills can access services, support, and information provided by HEIs (Beaunoyer et al. 2020). Such skills in academic settings were particularly prominent during the COVID-19 pandemic, which raised the demand for HEIs to use technologies to deliver courses due to the immediate call for online teaching and virtual education (Beaunoyer et al. 2020;Daniel, 2020). Specifically, studies have reported that the transition from F2F teaching to online instruction was challenging due to the lack of students' digital skills and suggested developing relevant programs in HE (Aristovnik et al. 2020;Turnbull et al. 2021). However, the need to address students' lack of digital skills is not new. Scholars have previously debated whether HE students have the capabilities to take full advantage of technology for learning purposes (Bullen et al. 2011;Helsper and Eynon, 2010;Kennedy et al. 2008).
Previous research has mainly focused on understanding the purposes and practices of technology use among young students (Corrin et al. 2010;Gallardo-Echenique et al. 2015;Kennedy and Fox, 2013). Researchers reported several other factors influencing the effective use of technology for learning in HE. Such factors are closely related to the instructional practice and the learning experience (Bond et al. 2018;Kirkwood and Price, 2014;Littlejohn et al. 2012), including the perceived affordances of digital technologies to support learning (Margaryan et al. 2011;Ng, 2012), and the application of instructional practices that foster technology integration (Kivunja, 2014). Previous research reported a lack of Design-Based Research (DBR) approaches that could inform the formulation of policies regarding the methodology of teaching and learning for digital skills education (Spante et al. 2018). Accordingly, to address this gap, the present study applied a DBR approach aiming to answer the following questions:
• What are the characteristics of an effective program aimed at developing students' digital skills in the context of HE?
• Which design principles could support the development of digital skills programs in HE?
In the following sections, the authors present the literature review regarding digital skills in the context of HE, the instructional practices that can support the development of such skills, and the existing frameworks for digital literacy. Subsequently, the authors describe the DBR approach, focusing on the processes and the results, followed by a discussion on the why, what, and how of learning with internet technology in HE and implications for further research.
Students' digital skills in the context of HE
Today's university students belong to Generation Z and have experienced a globally connected world where the Internet is readily available (Seemiller and Grace 2017). They have also been called "digital natives" due to their constant immersion in technology (Evans and Robertson, 2020). As a result, they have already experienced various online tools and services before entering HE (Guzmán-Simón et al. 2017;Gurung and Rutledge, 2014;Brooks, 2016;Bond et al. 2018). However, research has shown that such experiences are limited, not homogeneous, and might not adequately cover all the skills individuals should possess to take full advantage of digital technologies, especially for academic purposes (Corrin et al. 2010;Kennedy and Fox 2013;Ng, 2012;Šorgo et al. 2017). More specifically, students seem familiar with online communication tools (Bullen et al. 2011;Kennedy and Fox, 2013;Margaryan et al. 2011) and social networking services (Gabriel et al. 2012;Gosper et al. 2013;Shopova, 2014). Also, they are familiar with Learning Management Systems (Bond et al. 2018;Ng, 2012), search engines, and Wikipedia (Biddix et al. 2011;List et al. 2016;Margaryan et al. 2011). However, they experience difficulties applying more advanced information search strategies and selecting credible resources to support their academic work, such as libraries and institutional repositories (Head, 2013;Lee et al. 2012). They also find it challenging to critically evaluate and use online information sources to complete a task (List et al. 2016). Furthermore, they do not feel competent in applying copyright-related regulations when using online content (Gudmundsdottir et al. 2020;Shopova, 2014). Regarding the learning experience, research shows that students use the Internet more for "consuming" rather than creating content (Kennedy and Fox, 2013;López-Meneses et al. 2020;Ng, 2012, 2015). More recently, studies regarding the use of digital tools for online instruction during the Covid-19 pandemic reported that students felt confident about their skills in using communication platforms, browsing online for information, and sharing digital content (Aristovnik et al. 2020). However, they reported a lack of skills in critically evaluating online information (Sales et al. 2020), adjusting advanced settings of software and programs, and using online platforms, such as Learning Management Systems (Aristovnik et al. 2020). Students also faced challenges with the delivery of online teaching, such as increased workloads, reduced interactions, poorer communication, and confusion (Lemay et al. 2021).
Instructional practices for utilizing digital technologies in HE
Research on digital technologies for teaching purposes in HE has reported a technology-led rather than a technology-enhanced learning approach (Kirkwood and Price, 2014). Specifically, studies have shown that students are not aware of the affordances of digital technologies to support learning (Brooks 2016;Ng, 2012), even though understanding the characteristics and benefits of each tool is essential for academic practice (Kennedy and Fox, 2013;Ng, 2012). Additionally, while universities play a vital role in the digital transformation of today's societies, they rarely make the necessary distinction between the individual needs of students (Jørgensen, 2019). Specifically, there is a lack of customized support highlighting the limitations and benefits of using such digital tools to meet students' individual learning needs (Beetham et al. 2009). The above fact contradicts studies showing that students' digital skills and familiarity with digital devices and tools vary (Corrin et al.;Hargittai, 2010;Helsper and Eynon, 2010).
Regarding students' preferences about learning with digital technologies, research reported approaches that promote learning through discovery and experimentation, using audiovisual sources, working on different tasks simultaneously (multitasking), and getting immediate satisfaction (Teo, 2016). However, researchers have pointed out that the continuous exposure of this generation to digital content seems to be associated with the possibility of distraction from the subject matter, lack of concentration, and the difficulty in communicating work correctly (e.g., grammar, spelling, writing style) (Brooks, 2016;Issa and Isaias, 2016). Consequently, instructional practices in academic settings should incorporate alternative teaching approaches to enhance students' participation in the learning process and address the challenges above, considering students' different characteristics and learning expectations (Kennedy and Fox, 2013;Ng, 2012). Previous studies reported that teaching with digital technologies in the context of HE should capitalize on constructionist approaches that promote hands-on learning experiences and active participation, such as inquiry-based learning, problem-based learning, and project-based learning (Kivunja, 2014;Wekerle et al. 2022;Guo et al. 2020;Ng, 2015).
Frameworks for the development of skills for using digital technologies
During the past decades, research in HE regarding the use of digital technologies has been associated with the terms digital literacy and digital competence (Spante et al. 2018), as the literature converged on the use of the term "digital" (Goodfellow, 2011). According to Spante et al. (2018), a general categorization relates to the context when using the above terms. At a policy-making level, the term digital competence appears more often. For example, in the European Commission's "DIG-COMP: A Framework for Developing and Understanding Digital Competence in Europe," digital competence refers to the self-confident, critical, and creative use of digital technology for accomplishing goals in various contexts, such as work, academic, or leisure (Ferrari, 2013). At a research level, digital literacy denotes an approach based on acquiring skills and know-how (Spante et al. 2018). Digital literacy refers to "the awareness, attitude and ability of individuals to use digital tools and facilities to identify, access, manage, integrate, evaluate, analyze and synthesize digital resources, construct new knowledge, create media expressions, and communicate with others, in the context of specific life situations, to enable constructive social action; and to reflect upon this process" (Martin and Grudziecki, 2006, p. 255). Other researchers have used the term "digital literacies" to refer to the use of digital technologies in situated knowledge practices in the academic environment (Goodfellow 2011;Lankshear and Knoebel, 2008;Littlejohn et al. 2012). Van Dijk and Van Deursen (2014) have adopted the term "digital skills" to emphasize the interactive performance of digital media and refer to "interactions with programs and other people, transactions in goods and services and continually making decisions" (p. 140).
Since the Internet is the medium that university students use to access services and resources for study purposes (Ng, 2012), the present study adopts a skills-oriented approach focusing on the digital skills required to use the Internet. Such an approach was necessary to gather measurable results and draw conclusions (Iordache et al. 2017). Several researchers attempted to identify the skills for using the Internet effectively. Hargittai (2005) proposed "web skills," focusing on effective information retrieval. Potosky (2007) referred to "internet knowledge," which relates to the familiarity with terms connected to the Internet, such as the browser, and the knowledge of the processes for carrying out tasks using the Internet, such as practical information retrieval. In the same context, Livingstone and Helsper (2010) defined "internet literacy" as a multidimensional concept that includes accessing, analyzing, evaluating, and creating online content. Bunz (2004) introduced "computer-email web (CEW) fluency," which refers to the use of the Internet for information and communication purposes. Finally, Van Dijk and Van Deursen (2014) proposed a theoretical framework that classified Internet skills into six areas: (a) Operational, (b) Formal, (c) Information, (d) Communication, (e) Content Creation, and (f) Strategic. They also suggested a skills distinction between the technical aspects of using the Internet as a medium (medium-related skills) and the fundamental aspects related to online content (content-related skills).
Research purpose
To remedy the lack of research investigating the teaching of digital skills in the context of HE, the authors of this paper adopted a DBR approach to study the design and development of programs for the effective use of the Internet in higher education learning environments.
Method
DBR aims to develop new theories, artifacts, and practices that potentially impact learning and teaching in natural environments (Anderson and Shattuck, 2012;Barad and Squire, 2004). The present study's character was developmental and had two objectives: (a) the development of an intervention proposed as a solution to a problem and (b) the construction of design principles (Plomp, 2007). The research design included three phases, which ran for three consecutive semesters during a university course.
On a conceptual level, the development of the intervention was based on Van Dijk and Van Deursen's conceptualization of digital skills. This approach is appropriate because (a) it covers the skills identified as essential during the literature review, and (b) it includes several skills that refer to processes rather than the use of specific Internet tools. Specifically, Operational skills refer to the most basic technical skills required to use the Internet, such as browsers to access web applications. Formal skills refer to navigating through various websites with different layouts. Information skills indicate the skills for searching, selecting, and evaluating online information. Social skills refer to using online communication services, interacting with others, and exchanging meaning. Creative skills are the skills someone needs to create different types of acceptable quality content (e.g., text, audio, video) and publish it or share it with others on the Internet. Lastly, Strategic skills refer to the fulfillment of personal goals using the Internet (e.g., making the right decisions toward achieving a goal and securing the benefits of using the Internet).
Participants
The intervention took place during three consecutive semesters, comprising three research cycles. The total number of participants was 58 university students who attended an elective course about the use of internet technologies. Approximately 26% of the students were males, and 74% were females.
Research instruments
For the present study, the authors utilized four different instruments for data collection:
1. Students' self-assessment questionnaire for internet skills
2. Design materials for the learning environments
3. Open-ended questionnaire to gather students' views about the learning experience
4. Observation data from each session
The students' self-assessment questionnaire was structured on a five-point Likert scale ranging from "Not at all true of me" to "Very true of me" and consisted of skills items from the Internet Skills Scale (ISS) proposed by Eynon (2014, 2016). The ISS followed the theoretical framework proposed by Van Dijk and Van Deursen (2014) and the methodology used by Helsper and Eynon (2010) to distinguish the different types of Internet use. The researchers validated the questionnaire for the context of HE (Miliou and Angeli, 2021). The final version consisted of 27 items referring to five skills areas: Operational, Information-Navigation, Social, Creative, and Critical (Online Appendix 1). Due to small sample sizes per research cycle, the authors performed a within-subjects comparison of pre- and post-intervention measures with the Wilcoxon Signed Rank Test (Cohen et al. 2018). The results were used to draw conclusions about students' skills levels before and after the intervention and to refine the learning materials in the upcoming research cycles based on their needs. The students' views about the learning experience emerged from an open-ended questionnaire issued during the first research cycle (Online Appendix 2). The answers were analyzed using content analysis, suitable for analyzing qualitative data (Popping, 2015). The design data referred to the tools and activities designed during the second and third research cycles. They were classified chronologically and thematically. Finally, the researchers recorded field notes during all research cycles for all groups taught and observed by the researchers. Thematic coding analysis followed, which corresponded to the ISS skills areas. This type of analysis was necessary due to the comparative nature of the study (Cohen et al. 2018).
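A minimal sketch of the pre/post within-subjects comparison with the Wilcoxon signed-rank test in SciPy; the paired Likert responses below are illustrative values for a single ISS item, not the study's data.

```python
from scipy.stats import wilcoxon

# Illustrative paired five-point Likert responses for one ISS item,
# recorded before and after the intervention for the same students.
pre = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
post = [4, 4, 3, 5, 4, 3, 4, 4, 3, 5]

stat, p = wilcoxon(pre, post)  # non-parametric paired comparison, suitable for small samples
print(f"W = {stat:.1f}, p = {p:.4f}")
```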
Research procedures
The authors followed an iterative design process by implementing three research cycles. In Cycle 1, the intervention followed the established course content and teaching methods (Online Appendix 3). The research purpose was exploratory, aiming to examine students' familiarity with internet technologies, their skills level, and their learning preferences. At first, researchers distributed an open-ended questionnaire that referred to students' preferences regarding their learning experience. Secondly, the self-assessment questionnaire was administered before and after the intervention to determine the effectiveness of the teaching process in enhancing students' skills. Finally, the field notes focused on usability and affordances and informed the future design and development work. The authors reflected on the outcomes and adjusted the learning materials to improve the intervention during Cycle 2. Self-assessment questionnaires (before and after the intervention) and field notes were used to collect data for comparison purposes. Finally, Cycle 3 followed more consistently structured activities and tools. Specifically, the researchers reviewed the intervention procedures to address the challenges faced in the research Cycle 2 and applied the same data collection processes (e.g., self-assessment questionnaires, field notes).
Research cycle 1
During Cycle 1, the authors studied and analyzed the pre-existing course design, the tools, and the internet-based activities to clarify the teaching context of this research. The course followed a linear path, starting with presenting the different types of internet tools. Then, time was devoted to teaching each tool separately: some tools took up to three meetings (Weblogs), others took up to two sessions (Wikis), and the rest took one meeting each. For this purpose, the researchers emphasized Weblogs and Wikis during the lessons, and students' assignments for the course semester involved using both tools. Also, part of the course material was related to the internet skills indicators identified during the preliminary research phase (e.g., references to social networking tools and content creation tools about items from the Social and Creative skills sets). However, there was a lack of content related to Operational, Information-Navigation, and Critical skills.
Findings of research cycle 1
Concerning Operational skills, statistically significant results were reported for the skills item "I know how to complete online forms" (z = − 3.162, p = 0.002). Specifically, students answered confidently after the intervention that they possessed the skill above. Although the instructional material did not refer to web browsers, students were familiar with their use. Completing an online form to access several online tools seemed to be an easy and typical process for them. A few students used bookmarks to save the course's material, while some reported that they had not used them in the past. Additionally, students showed a lack of readiness in using shortcuts for operations more advanced than copy-paste (e.g., screen splitting) and limited knowledge in adjusting privacy settings (e.g., how to manage browsing history data). As none of the above skills appeared in the instructional material, students did not use them during the lessons. As a result, the analysis of students' responses before and after the intervention did not show statistically significant results for the corresponding items, namely "I know how to use shortcut keys (e.g., CTRL-C for copy, CTRL-S for save)" (z = − 0.763, p = 0.445), "I know how to bookmark a website" (z = − 1.311, p = 0.190), and "I know how to adjust privacy settings" (z = − 0.302, p = 0.763). Additional issues were found regarding Internet safety, specifically the creation of passwords. In general, an incomplete assessment of the potential risk for personal data theft was observed, which highlighted the need to raise students' awareness of issues related to securing their online identity and presence. While the course included relevant teaching material on Internet safety, such as references to general risks when browsing the Internet (virus downloads and attacks by hackers), students could not transfer the acquired knowledge to a new context because there were no consolidation activities. To conclude, it was considered essential for the intervention to include a dedicated meeting on operational skills, presenting more advanced uses of internet browsers, such as adjusting privacy settings and browsing safely, using keyboard shortcut functions, and bookmarking.
Regarding Information-Navigation skills, students seemed to be relatively familiar with the web's different interfaces. However, relevant educational material was not included in the course. There were no statistically significant differences in student self-assessment before and after the intervention for the relevant items: "All the different website layouts make working with the Internet difficult for me." (z = − 1.706, p = 0.088), "I find the way in which many websites are designed confusing." (z = − 1.931, p = 0.053), "Sometimes I end up on websites without knowing how I got there." (z = − 1.734, p = 0.083), and "I find it hard to find a website I visited before." (z = − 1.633, p = 0.102). Similarly, no statistically significant differences were reported for the online information search processes, specifically for the items "I find it hard to decide what the best keywords are to use for online searches." (z = − 1.228, p = 0.219) and "I get tired when looking for information online." (z = − 0.333, p = 0.739). Students' work revealed a lack of knowledge in using advanced search strategies, such as keyword search or searching in specific databases to conduct their assignments. Most students used a particular search engine, while very few used the online library catalog or other databases. Some students reported that managing a large amount of information made it difficult to look for quality.
In contrast, others said they did not know where to look for information and were not satisfied with their results. The self-assessment results were statistically significant for the skill item "I should take a course on finding information online." (z = − 2.028, p = 0.043). Specifically, before taking the course, students were confident that they did not need it. After the intervention, most of them answered neutrally. To sum up, there was a need to enrich the curriculum with content about information search strategies, search engines, meta-search engines, and databases with diverse digital material, such as documents, images, or videos.
Regarding Social skills, students' assignments and, in particular, their blogs showed that they already possessed relevant skills. Most students could integrate and use the "Follow" function, and they had no difficulty deciding which blogs to follow. For this reason, the analysis showed no statistically significant difference for the item "I feel comfortable deciding whom to follow online (e.g., on services like Twitter or Tumblr)" (z = − 1.134, p = 0.257). As students mentioned, they preferred the Follow function over RSS because it did not require registration in another service and corresponded better to how they used other internet services. Therefore, the researchers considered it more appropriate to replace RSS with the "Follow" function.
Additionally, no statistically significant differences were found for the items that refer to communicating and sharing information: "I know when I should and shouldn't share information online" (z = − 1.342, p = 0.180), "I am careful to make my comments and behaviors appropriate to the situation I find myself in online" (z = − 1.732, p = 0.083), "I am confident about writing a comment on a blog, website or forum" (z = − 1.508, p = 0.132), "I know which information I should and shouldn't share online" (z = -0.586, p = 0.558), and "I would feel confident writing and commenting online" (z = − 0.749, p = 0.454). Such results may be attributed to the lack of instructional content associated with netiquette rules, personal information sharing, and digital footprint. For example, in some cases, the incorrect use of language was observed. Concerning the management of their profiles, statistically significant results were found for the item "I know how to change whom I share content with (e.g., friends, friends of friends or public)" (z = − 2.121, p = 0.034). Specifically, students answered with higher confidence after the intervention that they possessed the above skill, probably because they had to change their blog's settings from private to public to complete their assignments. However, there was a lack of readiness to change the default settings of the tools used. The above results indicated the need to enhance the instructional materials with netiquette rules, online information-sharing practices, and their implications.
Regarding Creative skills, blogs and wikis enabled students to modify, adapt, and reorganize their content. For example, two students came up with creative suggestions about editing content and were prompted by their fellow students to share their ideas with the whole group. Such experience resulted in positive attitudes toward learning and self-improvement. Students also enjoyed the fact that they could choose their blog's theme, mainly based on their topics of interest. Not surprisingly, students' self-assessment showed positive, statistically significant results for almost all items: "I know how to create something new from existing online images, music or video" (z = − 3.477, p = 0.001), "I know how to make basic changes to the content that others have produced" (z = − 3.583, p < 0.001), and "I know how to design a website" (z = − 4.177, p < 0.001). Specifically, students answered with higher confidence after the intervention that they possessed the above skills. Also, the instructional content included information about the Creative Commons license, which students had to integrate into their blogs. As a result, statistically significant results were reported for the item "I know which different types of licenses apply to online content" (z = − 4.030, p < 0.001). Mainly, students answered confidently after the intervention that they possessed the above skill. One of the factors that made it challenging for students to create more advanced multimedia formats, such as video, was the time frame of the meetings, which was limited for editing content and practicing with the tools. Therefore, there was no statistically significant difference in students' self-assessment before and after the intervention for the item "I would feel confident putting video content I have created online" (z = − 1.732, p = 0.083).
To conclude, there was a need to emphasize authentic activities related to students' interests and multimedia content creation. Also, the authors thought it necessary to utilize cloud services for students to store and access their work at any time. Although the authors addressed this topic in the middle of the semester, it was evident that it would be helpful for students to utilize these services at the beginning of the course. Another key finding was that students were looking for examples of how they could design and develop their blogs. For instance, when asked to create their welcome message for their blog readers, some students looked for exemplary texts to compose their message.
Similarly, they searched for patterns in the blog's structure and in publishing posts. As it turned out, wikis did not work as effectively as blogs. Other challenges were the lack of shared interests among the group members and the difficulty of clarifying each student's individual work and active participation.
Concerning Critical skills, there were misunderstandings regarding copyright and a lack of fundamental understanding of the ethical issues of using online content. Specifically, in some cases, students used images from websites where the sharing of material was prohibited. In these cases, they downloaded images from search engines without using search filters or services that provide free multimedia content. Additionally, there were cases where a partial understanding of the rights to fair use of images had emerged. To conclude, participants found it challenging to evaluate publicly available online content. All the above, together with the lack of relevant instructional material, resulted in no significant differences for the relevant items: "I am confident in selecting search results" (z = − 1.897, p = 0.058), "Sometimes I find it hard to verify information I have retrieved" (z = − 1.069, p = 0.285), and "I carefully consider the information I find online" (z = − 1.734, p = 0.083). However, a positive, statistically significant difference was reported for the item "I know which apps/software are safe to download" (z = − 3.090, p = 0.002). Specifically, students answered with higher confidence after the intervention that they possessed the above skill. Although students could not download software and practice the skill in the University's laboratory due to regulations prohibiting downloading any software from the Internet, discussing the above topic by exchanging personal experiences enhanced their confidence that they possessed the above skill after the course.
In terms of Critical skills, it was necessary to enrich the instructional content with criteria for evaluating online resources and the fair use of multimedia resources available on the Internet. It was also essential to refer to procedures, strategies, and tools to support students in the information retrieval process to obtain quality results from reliable sources. Additionally, the authors discussed issues related to personal responsibility for disseminating information and knowledge.
The data collected from the open-ended questionnaire revealed that students' prior experience with digital technologies, knowledge level, and readiness to meet the learning objectives were heterogeneous. Most of them seemed to use information search engines and social media services quite often. According to some of their answers: "I use the internet at least 12 h a day, so this course is useful to me," and "From the tools we learned during the course, I have used before about twenty percent of them." Regarding the instructional strategies that can support learning, students preferred the link of theory to practice and the use of tutorials about the basic functionalities of the tools. Participants reported: "What I liked most was the practical part mainly because we could better understand what we learned in theory;" "The fact that we had to integrate some tools into our blog was an incentive to work from home;" and "It was very easy for me that the tutorials included everything, so I could later find what I forgot." They also suggested using gamification techniques, such as quizzes. For example, a student said: "The process was very organized and much easier than I thought. It could also include activities such as quiz games." Regarding using digital tools that supported instruction, students stated that they preferred the ones that allowed them to create multimedia material for academic and professional purposes. In addition, blogs seemed to be preferable to the use of wikis. One student said: "What I liked the most is that we created our blog for free. Creating multimedia seemed quite valuable, offering an opportunity to create professional presentations. On the other hand, […] did not find wikis exciting and preferred blogs in which [..] posted our work or text and videos about our work." Regarding the main factors that enhanced students' motivation to participate in the learning process, their answers revealed the usefulness of the skills to their studies and their subsequent professional career. According to some of their responses: "In many jobs, you use internet tools. If the employer sees that I know and am aware of various tools, it will benefit me." Furthermore, students found Creative Commons licenses to protect their work particularly interesting. For instance, a student stated: "I think it is necessary to know [the Creative Commons license] as it concerns protecting our rights." Lastly, students stressed the need to make meaningful connections between different tools. According to their answers: "I find it very useful that we learn about the use of each tool separately, but I think it would have been more useful to learn how all these tools could be connected. This strategy can make the course more meaningful," and "It would be particularly effective to have constant and frequent interaction with the tools to become more familiar with them." The above findings informed the design decisions for implementing the next research cycle.
Research cycle 2
The results from the previous Cycle suggested the need to redesign the instructional practice to gradually familiarize students with various internet tools and their capabilities through a series of tasks and interconnected activities that they could implement at their own pace. For this purpose, the authors applied a project-based learning approach to emphasize autonomous learning and participation in authentic learning experiences through progressive skills development. Project-based learning is a common practice in HE for producing artifacts using computer technologies, and its benefits for skills development have been documented in several previous studies (Gülbahar and Tinmaz, 2006;Guo et al. 2020;Lee et al. 2014). To align the tools to the skills indicators of the ISS and their affordances for supporting academic studies based on students' perceptions, the authors applied a technology mapping process (Angeli and Valanides, 2013). Based on the student's preference for the use of blogs, the authors were able to identify several activities and tools that could be linked to their use and support the progressive development of students' skills. Notably, creating a blog requires developing technical skills, such as using a browser to access the service, filling out an online registration form, or changing the privacy settings (Operational skills). Also, it requires the organization of the blog's content, which is essential for the reader's successful navigation. In this regard, the blog administrator learns all possible ways to navigate a web page (Navigation skills). Also, blog posts can include all types of information that resulted from searching on various websites (Information skills). It is also an excellent communication tool that allows the administrator to share information, connect to other blogs, and interact with other Internet users (Social skills).
Furthermore, as a personal expression tool, it allows posting multimedia content, such as text, images, and video (Creative skills). Finally, the sharing of information as a means of self-expression and creativity also implies its responsible use by the administrator in terms of accuracy and reliability (Critical skills). Online Appendix 4 presents the course structure designed and implemented during the second research cycle.
Findings of research cycle 2
In general, the development of Operational skills was relatively easy for students. Several references linked concepts and practices to students' experiences during the activities and motivated them to engage in learning actively. For example, managing browsing history and security settings were identified as skills applied to various environments, either personal or academic. During the creation of their blogs, students had to complete several online forms to access tools and services. Additionally, they were prompted to bookmark tools, websites, and services to easily retrieve them afterward and complete their projects. Before using online registration forms, students had to complete a gamified activity about creating secure passwords. The purpose of the activity was to attract students' attention and ensure that they would apply this knowledge during their registration to online services. The results about the development of students' skills showed statistically significant differences for the items: "I know how to bookmark a website" (z = − 2.965, p = 0.003), "I know how to adjust privacy settings" (z = − 2.958, p = 0.003), and "I know how to complete online forms" (z = − 2.333, p = 0.020). Specifically, students answered with higher confidence after the intervention that they possessed the above skills, even though no statistically significant difference was found for the item "I know how to use shortcut keys (e.g., CTRL-C for copy, CTRL-S for save)" (z = − 1.730, p = 0.084).
Concerning the Information-Navigation skills, during the development of their blogs, students chose different templates in terms of structure and appearance (navigation, main menu, posts, sidebar). The above fact implied that they were familiar with various interfaces and, therefore, navigation. Furthermore, managing the blogs' structure using widgets (e.g., adding the search and recent posts functions) helped them get an idea of website design and navigation. Consequently, there were statistically significant differences for the relevant items: "All the different website layouts make working with the Internet difficult for me" (z = − 2.121, p = 0.034), "I find the way in which many websites are designed confusing" (z = − 2.070, p = 0.038), "Sometimes I end up on websites without knowing how I got there" (z = − 1.998, p = 0.046), and "I find it hard to find a website I visited before" (z = 2.041, p = 0.041). Specifically, after the intervention, students answered confidently that the above skill statements were not true of them, meaning they did not experience navigation difficulties.
Additionally, during this Cycle, students were asked to include informational material in their blogs. Thus, the instructional content was enriched with activities related to using information search strategies (e.g., using filtering options, broadening and narrowing the search results) and the utilization of search and meta-search engines. Such activities allowed students to compare search tools, understand their differences, implement strategic actions to obtain high-quality results quickly, and justify using various search tools to determine their added value. For example, students had to complete an online quiz regarding how well they could operate the search engine they were using daily. Such reference to their daily activities enhanced their positive attitudes toward the subject. In general, the use of quizzes enabled immediate feedback and encouraged discussions about the practical implications of this knowledge in both personal and academic settings. Therefore, statistically significant differences were reported for the items: "I should take a course on finding information online" (z = − 2.280, p = 0.023) and "I find it hard to decide what the best keywords are to use for online searches" (z = − 2.989, p = 0.003). Specifically, students answered with higher confidence after the intervention that these statements did not apply to them, meaning that they were confident that they had developed the relevant information skills. However, the analysis did not show a statistically significant difference for the item "I get tired when looking for information online" (z = − 2.209, p = 0.46). The activity used for applying information retrieval strategies included a lot of information and questions about the strategy and tools, requiring much time and mental effort from students to experiment and assimilate the newly acquired knowledge. As a result, many students felt overloaded.
Accordingly, there was a need to distinguish general from academic search tools, provide explicit step-by-step instructions for the use of the strategy, and give students the time to explore and become familiar with the functions of each tool.
Concerning Social skills, the creation of students' blogs was based on examples drawn from activities structured around authentic case study scenarios. The researchers prompted students to manage their digital reputation and formulate rules for sharing information in their blogs during the activities. Additionally, students were asked to apply their knowledge by providing feedback to their peers regarding the netiquette rules applied to their blogs. Consequently, statistically significant differences were found for the relevant items: "I know when I should and should not share information online" (z = − 2.251, p = 0.024), "I know how to change whom I share content with (e.g., friends, friends of friends or public)" (z = − 2.070, p = 0.038), "I am confident about writing a comment on a blog, website or forum" (z = − 2.070, p = 0.038), "I know which information I should and should not share online" (z = − 2.460, p = 0.014), and "I would feel confident writing and commenting online" (z = − 2.121, p = 0.034). Specifically, students answered with higher confidence after the intervention that they possessed the above skills. However, there were no statistically significant differences for the items: "I am careful to make my comments and behaviors appropriate to the situation I find myself in online" (z = − 1.414, p = 0.157), and "I feel comfortable deciding whom to follow online (e.g., on services like Twitter or Tumblr)" (z = − 1.633, p = 0.102). Such findings indicated that the training materials did not cover all the rules of netiquette. Discussions during the course revealed the need to focus on how to communicate formally through email (email netiquette), as this was one of the main tools used by students during their studies. Additionally, at a technical level, students were familiar with the "Follow" function; they all followed their fellow students' blogs. However, in some cases, the decisions to follow other blogs were made merely on the basis of the content, without considering evaluation criteria (e.g., whether the blog's content is reliable). Therefore, it was essential to extend the evaluation process and apply it to blogs to promote a more general perception of evaluation and the choices that media users make when deciding to follow someone online. The good practices identified included the references to students' online social practices (e.g., memes with humorous characters) and how to apply netiquette rules. Such practices enhanced students' positive attitudes and strengthened their motivation toward the learning subject. An additional theme that emerged during the intervention was the need to include professional networking tools with the possibility of linking the blog to a professional profile.
The development of Creative skills was based on creating multimedia materials used to enrich students' blog content. Students' engagement in content creation, apart from learning about each tool's capabilities for the course purposes, encouraged them to explore new possibilities, such as creatively expressing their views and suggesting alternative uses of the tools themselves. Additionally, prompting students to transfer their skills to other academic activities (e.g., creating multimedia presentations required by other courses) was essential to helping them understand the clear connection of tools to their academic work. Additionally, highlighting the importance of using Creative Commons licenses helped students understand their role and responsibilities in communicating information ethically and critically. The results showed statistically significant differences for all related skills items: "I would feel confident putting video content I have created online" (z = − 1.732, p = 0.043), "I know how to create something new from existing online images, music, or video" (z = − 2.428, p = 0.015), "I know how to make basic changes to the content that others have produced" (z = − 2.701, p = 0.007), "I know how to design a website" (z = − 2.994, p = 0.003), and "I know which different types of licenses apply to online content" (z = − 3.115, p = 0.002). Namely, students answered with higher confidence after the intervention that they possessed the above skills. Good practices identified in this Cycle included presenting examples for each tool and the corresponding product during the content creation process. As part of the teaching process, the presentation of examples was helpful in terms of communicating the expectations regarding the final artifact. Also, evaluation criteria were distributed to students to ensure the quality of the produced projects. For example, students had to incorporate Creative Commons licenses into the presentation and use images free from copyright restrictions. One critical issue which emerged during content creation was the lack of consistent blog posting. Thus, the researchers created a checklist of items to be published regularly on students' blogs to keep them updated.
Regarding the development of Critical skills, students were prompted to post reliable online resources on their blogs. They had to search for relevant material and apply evaluation criteria to assess their results. For this purpose, they were asked to evaluate original online reliable and non-reliable sources, including their blogs and troll news media websites familiar to younger audiences. However, students experienced difficulties evaluating online information included in websites for which it was not apparent from their URL addresses whether they had reliable/non-reliable information. For such cases, the application of evaluation criteria based on the website content was necessary. It is worth noting that students could relate their own experiences to the course content. In particular, they referred to phishing emails circulated to their email accounts and suggested relevant content as part of the course material. Students were also prompted to participate in gamified activities regarding which online applications were safe to download. The activity included reward elements and produced immediate feedback. The results from the activity prompted students to link to their own experiences regarding malware protection, and some of them who were already familiar with such practices advised their peers on how to keep their computers and applications updated safely.
Students positively perceived such practices. For example, the results from students' skills assessment showed statistically significant differences for the following items: "I know which Apps/software are safe to download" (z = − 3.125, p = 0.002), "I am confident in selecting search results" (z = − 2.511, p = 0.012), and "I carefully consider the information I find online" (z = − 2.850, p = 0.004). Specifically, students answered with higher confidence that they developed the above skills. Additionally, a statistically significant difference was found for the item: "Sometimes I find it hard to verify the information I have retrieved" (z = − 2.491, p = 0.013). Specifically, students answered with higher confidence after the intervention that they did not find it hard to verify the information they had retrieved.
To conclude, the results of the second research cycle were encouraging. The presented course material created favorable conditions for preparing students to participate actively throughout the learning experience. The curriculum, organized holistically, supported the systematic transfer of knowledge and the progressive development of students' skills. Additionally, students were prompted to activate their prior knowledge and experiences. The variety of activities and the creative approach of the blog production allowed them to develop their autonomy and strengthen their decision-making skills. The use of case study scenarios emphasized creative thinking and provided stimuli for reflection and action. The variety of tools connected to tasks for the blog creation enhanced students' motivation and engagement in the learning process throughout the course.
Research cycle 3
Based on the research results of Cycle 2, the authors put effort into further developing students' understanding of internet tools' affordances for academic study. Online Appendix 4 highlights the additions made to the course structure used in Cycle 3.
Results of research cycle 3
Regarding Operational skills, the main design addition included a keyboard-shortcut scaffold in the form of a notepad, which was available on every computer to allow immediate access to keyboard functionalities and allow students to retain new knowledge. This scaffold facilitated the repetition of the new knowledge and its transfer to the use of all the tools. Additionally, the affordances of each tool to support learning in academic environments were communicated to students, who were also prompted to identify several potential uses of the tools in their academic lives. Such affordances included (a) managing academic work using bookmarks, (b) minimizing the time it takes to complete an assignment using keyboard shortcuts, (c) using cloud technology services to organize and store assignments and other study-related documents efficiently, and (d) customizing browser settings to adjust to individual practices and protect user accounts. All the above resulted in statistically significant positive differences for all items: "I know how to use shortcut keys (e.g., CTRL-C for copy, CTRL-S for save)" (z = − 3.673, p = 0.001), "I know how to bookmark a website" (z = − 3.516, p = 0.001), "I know how to adjust privacy settings" (z = − 3.689, p = 0.001), and "I know how to complete online forms" (z = − 3.276, p = 0.001). After the intervention, students answered that they possessed the skills above.
Regarding the Information-Navigation skills, the information retrieval activity was redesigned to include fewer questions and a step-by-step guide on implementing each information search strategy. Such practices helped students better discern the different types of information search. The tools' affordances that were identified and communicated as useful for academic work included the following: (a) navigating academic websites and databases using indexes/directories, (b) using search and meta-search engines to search for multimedia, (c) using different search tools that can meet specific information needs when completing an assignment, (d) defining keywords to search for specific content in search engines, and (e) storing and classifying files and documents in folders. Consequently, there were statistically significant differences for all the skills items: "I should take a course on finding information online" (z = − 3.555, p = 0.001), "I find it hard to decide what the best keywords are to use for online searches" (z = − 3.869, p = 0.001), "All the different website layouts make working with the Internet difficult for me" (z = − 2.797, p = 0.005), "I find the way in which many websites are designed confusing" (z = − 3.448, p = 0.001), "Sometimes I end up on websites without knowing how I got there" (z = − 3.008, p = 0.003), "I find it hard to find a website I visited before" (z = 2.687, p = 0.007), and "I get tired when looking for information online" (z = − 2.958, p = 0.003). Specifically, students answered with higher confidence after the intervention that all the above statements were not true, meaning that they were confident that they had developed the relevant information search and navigation skills.
Concerning Social skills, the activities' design and content included applying netiquette rules in the academic environment and using different information-sharing options. The identified tools' affordances for communication purposes were as follows: (a) applying netiquette rules when using the institution's email or forums in a course's Learning Management System, (b) communicating information in academic contexts through different types of media (text, image, video), and (c) using social networking services for professional purposes, such as showcasing their online CVs. The results showed statistically significant differences for all the skills: "I know when I should and shouldn't share information online" (z = − 3.448, p = 0.001), "I am careful to make my comments and behaviours appropriate to the situation I find myself in online" (z = − 2.952, p = 0.003), "I know how to change who I share content with (e.g., friends, friends of friends or public)" (z = − 2.179, p = 0.029), "I am confident about writing a comment on a blog, website or forum" (z = − 3.339, p = 0.001), "I feel comfortable deciding who to follow online (e.g., on services like Twitter or Tumblr)" (z = − 2.667, p = 0.008), "I know which information I should and shouldn't share online" (z = − 3.626, p = 0.001), and "I would feel confident writing and commenting online" (z = − 2.972, p = 0.003). Specifically, students answered with higher confidence after the intervention that they possessed the above skills.
Concerning the Creative skills, the design included a checklist with different types of posts that students needed to create and follow throughout their project's development. This practice reinforced students' motivation to work with the tools learned more frequently. In addition, content creation activities strengthened students' motivation and interest in creating their final project. In cases where students were more experienced using similar tools, they were prompted to activate their prior knowledge (e.g., students from the IT department used more sophisticated functions of the tool used to customize their blogs). The tools' affordances that were identified to support learning in academic environments included (a) creating and publishing products with the use of multimedia (e.g., presentations, videos), (b) protecting original work from copyright infringement, (c) conducting research with the use of online software for developing questionnaires and analyzing data, (d) producing artifacts/publications and communicating the results, and (e) allowing students to create and showcase a professional work-related portfolio. The findings showed statistically significant differences for all skills items: "I would feel confident putting video content I have created online" (z = − 3.654, p < 0.001), "I know how to create something new from existing online images, music, or video" (z = − 3.542, p < 0.001), "I know how to make basic changes to the content that others have produced" (z = − 3.548, p < 0.001), "I know how to design a website" (z = − 3.805, p < 0.001), and "I know which different types of licenses apply to online content" (z = − 4.274, p < 0.001). Specifically, students answered with higher confidence after the intervention that they developed these skills.
Concerning Critical skills, the content was enriched with step-by-step explanations for evaluating web-based information, including Scams, Hoaxes, and Fake News, which students suggested during Cycle 2. The material used during the activities was up to date and directly related to students' experiences (e.g., a real incident of attempted email fraud, real blog owners spreading fake news), including original videos popular with younger people. During the activities, students linked such practices to personal experiences derived from fake news or scams circulated on social media, such as "like" farming, and questioned the reliability of online content. The practices identified as supportive to academic studies were (a) assessing the suitability of online resources and multimedia content for academic purposes and (b) protecting academic identity from online fraud. All the above resulted in statistically significant differences for all items: "I know which apps/software are safe to download" (z = − 3.292, p = 0.001), "I am confident in selecting search results" (z = − 3.808, p < 0.001), and "I carefully consider the information I find online" (z = − 3.567, p < 0.001). In particular, students answered with higher confidence after the intervention that they possessed the above skills. Additionally, a statistically significant difference was found for the item "Sometimes I find it hard to verify the information I have retrieved" (z = − 3.999, p < 0.001). Specifically, students answered with higher confidence after the intervention that they did not find it hard to verify the information they had retrieved.
To conclude, Cycle 3 was positive regarding the quality of the learning experience. As it turned out, communicating the affordances of tools to support future academic/professional needs enhanced students' interest and motivation, resulting in positive outcomes regarding developing their skills.
Discussion
Our research results confirmed the literature findings that digital skills are essential for university studies (Brooks 2016;Dahlstrom and Bichsel 2014). Specifically, during the research, students perceived positively the communication of the affordances of the digital tools to support academic work. In some cases, they also highlighted several potential uses of the tools, despite their different academic profiles. This finding was in line with previous research on the affordances of tools to support learning in HE (Margaryan et al. 2011;Ng, 2012). In addition, an exciting result that emerged during the study was that students found value in developing digital skills for their future professional careers.
Regarding their prior knowledge and skills, students were very familiar with the use of digital technologies, and they could very efficiently operate and adjust the functionalities of the tools. We could see evidence of their digital world immersion (Seemiller and Grace 2017). Before the interventions, the most competent use of digital technologies was reported for the Operational and Social skills sets. This finding agrees with the previous research, which emphasized students' familiarity with Learning Management Systems, search engines, and online communication tools (Bond et al. 2018;Bullen et al. 2011;Kennedy and Fox, 2013;List et al. 2016;Ng 2012). Additionally, although students did not report competent use of digital tools for content creation processes and digital content licensing, a finding reported in previous studies (Kennedy and Fox, 2013;López-Meneses et al. 2020), they adapted very easily to the content creation process. Several challenges emerged in developing Information-Navigation and Critical skills sets, especially regarding searching and selecting credible information sources and considering copyright restrictions. Previous studies also reported similar results (Gudmundsdottir et al. 2020;Hargittai et al. 2010;Head, 2013;Lee et al. 2012;List et al. 2016). It is worth noting that although a pattern can be drawn from the above findings, the results from each Cycle regarding students' skills levels were distinctive, supporting previous studies that argued that young learners do not constitute a homogeneous generation in terms of digital skills (Corrin et al. 2010;Helsper and Eynon 2010;Ng 2012). The exploration of both students' digital skills levels and their learning preferences before the intervention allowed us to identify potential needs and gaps and adjust our designs accordingly; such an adjustment was critical for the successful implementation of the project-based learning approach.
Regarding the assessment of digital skills in the context of HE, the present study demonstrated the need to add to the Internet Skills Scale items related to the academic environment. Specifically, regarding Operational Skills, future versions of the scale could include statements that refer to cloud technologies for organizing and storing information. Also, it is suggested to include items related to the safe use of browsers to protect security and privacy in the digital environment. Such statements could refer, for example, to creating secure passwords, managing browser history, and understanding how cookies work. Concerning Information-Navigation skills, it is essential to enrich the Scale with items related to strategic information retrieval, such as finding multimedia content. Regarding Social Skills, it is advised to include items related to netiquette with specific references to frequent communication activities in academic settings, such as the use of email or the use of a discussion forum. Lastly, regarding Critical skills, it is suggested to introduce items regarding the fair use of online content.
Additionally, the triangulation of data from all research cycles made it possible to draw conclusions about the content and the design of the learning experience. Specifically, based on the study's findings, we can conclude that the project-based learning intervention and the underpinning learning design contributed to the development of students' digital skills. Such results align with previous research, which suggested that constructivist learning approaches and project-based learning are beneficial for HE students (Guo et al. 2020;Kivunja 2014;Ng 2015;Wekerle et al. 2022).
Based on our study results, general programs for digital skills education in the context of HE should include:
• Advanced use of web browsers (adjusting privacy settings, safe browsing);
• Fundamental understanding of copyright and ethical issues about the use of online content (e.g., use of text, images, videos);
• Cloud services (e.g., organizing and classifying folders and files);
• Information search strategies (search and meta-search engines, academic search, multimedia search);
• Digital footprint (privacy settings, personal and professional identity);
• Netiquette rules (email, forum);
• Content evaluation (Scam, Hoaxes, Fake News); and
• Online fraud (Phishing).
Secondly, the design principles that can support the development of digital skills education programs are summarized as follows.
Digital technologies need to be selected based on their affordances to support learning
The selection of tools should be based on the possibilities they offer to accomplish the learning objectives and should facilitate the transfer and the practical use of knowledge. Before implementing a program, mapping tools' capabilities can support their regular review to ensure they are up to date. In addition, the tool's affordances should be communicated to students.
Content should focus on the development of a range of skills that are interconnected and linked to students' academic and personal environment and prior knowledge
Meaningful connections can be achieved by developing small units that progressively address all skills in different areas. Emphasizing declarative and procedural knowledge in each unit can contribute to skills development within realistic timelines. When necessary, the processes of using digital technologies can be repeated, and their application to different subject areas can be highlighted. Due to students' familiarization with a range of digital technologies, the use of advance organizers that link instructional content to their prior knowledge and experience can attract their attention and better prepare them for the subject matter. Advance organizers may include information on course organization, examples of good practice, original texts from the Internet, and multimedia material related to students' daily life, such as references to important events, news, and announcements related to the university community. The content should be up to date, and instructors should point out its future usefulness.
Content should stimulate interest in the program/course
Students' interest in the program/course can be stimulated through various tools and activities that arouse curiosity, include surprising content, are fun (gamified), are contextual, contradict what students already know, create opportunities for reflection, and pose challenges. Also, it is essential that the content is relevant to students' daily life and experiences and is presented in various forms, especially multimedia.
Active participation should be promoted throughout the learning experience
Active participation can be promoted through a series of activities that strengthen digital skills in the long run through an explicit schedule for implementing activities with opportunities for immediate feedback at regular intervals. Also, active participation can be enhanced by personalizing the learning experience based on students' individual needs and preferences.
The provision of timely support is essential
The progressive skills development in the use of digital tools requires timely support for acquiring new knowledge due to skills' interdependence. Such support can occur by sharing tutorials, such as how to subscribe to various services and use the essential functions of a tool.
Opportunities for personalizing the learning experience should be provided
Students should be enabled to explore and choose how to use digital technologies based on their motivations and expectations. Personalization can also occur by communicating learning opportunities that digital tools offer in the academic environment and by providing opportunities for autonomous learning through clear guidelines and schedules.
Collaboration through the exchange of good practices should be promoted
Providing collaboration opportunities among students can help create a productive atmosphere by sharing good practices. More specifically, the interaction between students can improve the quality of their produced works, as it provides stimuli for further exploration of the learning material. The above practice can work effectively in cases where more advanced students can guide their less advanced fellow students in using digital technologies.
The provision of guidance and examples should be used to support task completion
Guidance can occur through detailed tutorials with visual elements, which provide information on the digital tools' functionality (e.g., subscription, features). Also, activity checklists with completion criteria or "how-to" instructions that are particularly popular for the younger generations can help students keep track of and accomplish their tasks. Presenting examples, such as well-designed websites, exciting and engaging multimedia, and multimodal texts can provide tangible evidence that helps students recognize the value of the acquired skills. Examples can also stimulate students' interest in the program/course and serve as a reference point for their project.
Assessment criteria should be communicated
Assessment processes and standards must be communicated to students in advance to help them understand the expected outcomes and to realize possible improvements. In this regard, the activities should provide evidence of content acquisition.
Implications for practice
The study findings showed that today's students are not fully prepared to embrace technologies in their academic pathways. They need training and support to develop their digital skills in the context of HE. Future research could benefit from the empirical results of our study and develop updated frameworks and instruments to address the need mentioned above, including aspects that are most relevant to HE. Assessment instruments could be used in HE programs to support early students' preparation for using digital technologies during their studies and prevent potential skills gaps. Additionally, the proposed project-based learning approach and the accompanying design principles that emerged during the design process could positively impact digital skills development and support learning in academic settings. Specifically, the proposed method could inform the development of interdisciplinary programs, e.g., in academic libraries, emphasizing the implementation of research work. The design principles presented herein can be used to create or review programs in formal and informal learning environments for the younger generations. Based on our findings, we suggest that HEIs need to reconsider the integration of digital technologies in their programs and courses in terms of a) the affordances of technology to support learning, b) the adoption of instructional practices that favor the development of digital skills, and c) the professional development of teaching staff.
Limitations of the study and future directions
From a research perspective, several limitations need to be addressed by future researchers to draw more concrete conclusions. Firstly, the research participants were students who chose to attend the course. Thus, they were probably more interested in participating and enhancing their skills. Future research could occur in mandatory programs exploring the effectiveness of transferring digital skills from general programs to other courses in the academic environment. Secondly, the assessment of internet skills was carried out by the students themselves, and as a result, there may be an element of subjectivity in their answers. Although the evaluation of students' final blogs could have indicated skills development, it could not support evaluating all the skills mentioned in the ISS (e.g., I get tired when looking for information, I would feel confident writing and commenting online). Furthermore, the self-assessment was considered more appropriate for the present research in terms of time and resources. Finally, it is worth noting that the nature of DBR presupposes the study of a natural environment, which is subject to research limitations. For example, the University's laboratory settings posed restrictions on online material (e.g., download options). Future research could adopt alternative teaching approaches, such as the BYOD approach or the flipped classroom approach, supporting more flexible research designs.
Conclusion
The present study highlighted the importance of designing and developing digital skills programs to address undergraduate students' needs in HE. This paper makes the case that such programs should be based on empirical investigations to inform relevant policies and practices. Specifically, our findings indicate that the design and development of digital skills programs should be based on relevant skills frameworks, assessment instruments, instructional approaches, and content. Lastly, we hope that the project-based learning approach and the design principles we propose could inform theoretically and methodologically the research community about the design and development of digital skills programs in the context of HE.
Data availability The first author is in the process of publishing more papers from this data set, so at this time, it cannot be made available.
Declarations
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2022-08-24T15:27:01.574Z
|
2022-08-22T00:00:00.000
|
{
"year": 2022,
"sha1": "92e4b3298c9a3258948e4a150ed4d4dd6777a1a6",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s43545-022-00428-2.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c6f7a3688b251811be430283119cbddea7976bd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17710266
|
pes2o/s2orc
|
v3-fos-license
|
Connection between Dispersive Transport and Statistics of Extreme Events
A length dependence of the effective mobility in the form of a power law, B ~ L^(1−1/α), is observed in dispersive transport in amorphous substances, with 0 < α < 1. We deduce this behavior as a simple consequence of the statistical theory of extreme events. We derive various quantities related to the largest value in samples of n trials, for the exponential and power-law probability densities of the individual events.
I. INTRODUCTION
Dispersive transport in amorphous materials has been studied for almost three decades now, and yet it continues to attract considerable attention. In these studies, charge carriers are created at one side of a slab of the material and transported across to the other side. It has been realised that the transport phenomena in amorphous media cannot be described by standard concepts like uniform drift and diffusional spreading, see [1][2][3][4][5] for exhaustive reviews on this subject and [6] for a popular account. An important and striking observation is the (apparent) dependence of the mobility on the thickness of the material. The general features of the mechanism of dispersive transport in amorphous materials are well understood; the process is subdiffusive and the delay in the transport occurs mainly due to trapping events in localized centers of various energetic depths. A fairly large body of theory has been developed to describe several aspects of the transport phenomena; already the early theoretical work of Shlesinger [7] and of Scher and Montroll [8] could explain many of the experimentally observed phenomena, in particular the length-dependent mobility. In this note we shall show that this length dependence of the mobility can be easily understood as a simple consequence of the statistics of extreme events.
The statistical theory of extreme events was formulated for discrete random variables more than half a century ago by Fisher and Tippett [9] and Gnedenko [10]; it was extended and popularized by Gumbel [11,12]. Gumbel described various applications that include statistics of extreme floods, droughts and fracture of materials. In the field of condensed matter, however, the statistical theory of extreme events has found fewer applications. The principal aim of the theory of extreme events is to obtain the statistics of the largest value in a sample of n independent realizations of a random variable. In particular the aim is to determine the asymptotic (n → ∞) dependence of the extreme values on the sample size n. The random variable in the case of dispersive transport is the residence time in the trapping centers. The question then can be posed as follows. Given the distribution of trap depths, how does the largest trapping time increase with the number of trapping centers? The latter quantity is determined by the thickness of the material. (We shall use 'length' synonymously with 'thickness'.) Together with the assumption that the extreme events determine the behavior of the sum of the residence times, the statistical theory of extreme events then makes a prediction concerning the dependence of the largest residence time on sample size or the thickness of the material. Since the mobility is deduced from the transit time, and this quantity is given by the sum of the residence times, the length dependence of the mobility follows in a natural fashion, as we shall show in this paper.
The paper is organized as follows. In section II the elements of the statistical theory of extreme events that are necessary for the later derivations are briefly described. The application to dispersive transport in disordered materials is discussed in section III. Detailed calculations of various quantities that are relevant for the characterization of the statistics of extreme events are made in section IV. The paper closes with concluding remarks in section V.
II. ELEMENTS OF STATISTICS OF EXTREME EVENTS
This section reviews standard material of the statistics of extreme events [11,12] which is needed later on. Let f(x) be the probability density function (PDF) of a random variable X and F(x) = ∫_{−∞}^{x} f(x′) dx′ its cumulative distribution function (CDF). F(x) is the probability that a particular realization of X has a value ≤ x. Let Ω_n = {x_1, x_2, ..., x_n} be a set of n independent realizations of X sampled from the density f(x). Let x_nl denote the largest value of X in the set Ω_n. The CDF of x_nl is the same as the probability that all the values of X in the set Ω_n are less than x, and hence is given by

Φ_n(x) = [F(x)]^n .   (1)

The probability density of the largest value of X in the set Ω_n is found by differentiation,

φ_n(x) = dΦ_n(x)/dx = n f(x) [F(x)]^{n−1} .   (2)

Given a value of u, let g_u(ν) denote the probability that the first ν − 1 values of X are less than u and the ν-th value is greater than u. An expression for g_u(ν) can be readily written down as

g_u(ν) = [F(u)]^{ν−1} [1 − F(u)] .   (3)

The mean value of ν is denoted by n̄ and is given by

n̄ = Σ_{ν=1}^{∞} ν g_u(ν) = 1 / [1 − F(u)] .   (4)
The quantity n̄ is the mean number of steps required to exceed a given value of u. The above relation can be 'inverted' as follows. For a given n, considered now as a parameter, let u_nl denote the value of u that obeys Eq. (4) with n̄ = n. Thus we get an implicit equation for u_nl,

1 − F(u_nl) = 1/n .   (5)

Gumbel [11] calls u_nl the 'expected' largest value of X in a sample of n independent realizations. Notice that u_nl is not the mean of the extreme value in a sample of size n.
Hence u_nl should strictly be viewed as a quantity defined by Eq. (5). This simple expression for the 'expected' largest value helps gain insight into the asymptotic behaviour of the extreme values; we shall use this definition of the 'expected' largest value to derive the power-law dependence of the mobility in the next section. A simple example is provided by the exponential PDF f(x) = exp(−x), x ≥ 0. Solution of Eq. (5) with respect to u_nl yields

u_nl = ln n ,   (6)

i.e., a logarithmic increase of the expected largest value with sample size. There are essentially three classes of behavior of the largest value u_nl for large n, leading to different asymptotic forms of Φ_n(x) for large n [11,12]. The first class (I) is formed by probability densities f(x) which decay at least exponentially for large x. The second class (II) results from probability densities whose moments diverge beyond a certain order. The third class (III) comprises probability densities where the values of x are bounded. We will encounter class I and class II behavior later, depending on the physical quantity that is considered.
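As a quick illustration of Eq. (6), the growth of the largest value with sample size can be checked numerically. The following sketch is ours and not part of the original derivation; it uses NumPy with arbitrarily chosen repetition counts. The sample mean of the maxima exceeds ln n by roughly Euler's constant, a point taken up again in section IV.

```python
# Numerical check (illustrative) of the logarithmic growth of the largest value
# for unit-exponential samples: the expected largest value is u_nl = ln(n).
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    maxima = rng.exponential(size=(1000, n)).max(axis=1)
    print(f"n={n:6d}  ln(n)={np.log(n):5.2f}  mean of maxima={maxima.mean():5.2f}")
```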
III. MOBILITY IN DISPERSIVE TRANSPORT
In a typical experiment on dispersive transport, a slab of finite thickness L (usually a thin film of an amorphous substance) is coated with semitransparent metal electrodes. A constant voltage is maintained across the slab. Charge carriers are created at one surface at time t = 0 by a laser pulse; the charge carriers are drawn through the slab by the electric field. The current I(t) exhibits different behavior at short and at long times, which is most clearly seen when plotted on a log-log graph. Shlesinger [7] and Scher and Montroll [8] predicted

I(t) ∝ t^{−(1−α)} for t < t_tr ,   I(t) ∝ t^{−(1+α)} for t > t_tr ,   (7)

where the parameter 0 < α < 1. Deviations from the ideal behavior Eq. (7) are still a topic of current research (see, e.g., [5,13] and the references therein). The physical explanation of the behavior of I(t) is the trapping of the charge carriers in trapping centers with widely differing depths. In the short-time regime, most of the charges are within the slab, while at longer times the charges are extracted from the slab.
A transit time t_tr can be deduced from the crossover between the short-time and long-time behavior. An effective mobility is then defined by the ratio of effective velocity and applied field F,

B = L / (t_tr F) .   (8)

In the multiple-trapping model, which is employed here, the transit time t_tr is the free transit time t_free plus the sum of all dwell times τ_i in the trapping centers. Similar arguments could be applied to trap-controlled hopping models. The free transit time is given by the mobility B_0, if there are no trapping events: t_free = L/(B_0 F); it is usually short compared to the sum of all dwell times. Hence, to a good approximation,

t_tr ≈ Σ_{i=1}^{n} τ_i ,   (9)

where n is the number of trapping events. For a constant trapping rate the number of trapping events is proportional to the thickness L of the slab. For broad distributions of dwell times τ_i, the sum in Eq. (9) should be dominated by the largest dwell time. With this assumption

t_tr ≈ τ_nl .   (10)

The Arrhenius law is assumed for thermally activated processes,

τ_i = τ_0 exp(E_i / k_B T) ,   (11)

where E_i is the energy required for release from the trapping center i. The largest dwell time is then determined by the largest trapping energy. The probability density of the depths of the trapping centers, i.e., of the energies necessary for release, is assumed to be exponential,

f(E) = (1/E_c) exp(−E/E_c) .   (12)

Note that f(E) is normalized to unity in the interval (0, ∞). It is easy to convert the probability density of the trapping energies into the probability density of the dwell times ρ(τ), using the Arrhenius law Eq. (11). We have

ρ(τ) = α τ_0^α / τ^{1+α} ,   τ ≥ τ_0 ,   (13)

with the parameter α = k_B T / E_c. The probability density (13) is normalized, but already its first moment does not exist for 0 < α < 1, which is the parameter range of interest for dispersive transport. The cumulative distribution function of τ is given by

F(τ) = 1 − (τ_0/τ)^α .   (14)

Equation (5) for the expected largest value in a sample of n trials yields

τ_nl = τ_0 n^{1/α} .   (15)

As already stated, the average number of trapping events n is proportional to the thickness L of the experimental sample. Hence we expect, using the assumption that the transit time is dominated by the largest dwell time,

B ∝ L / τ_nl ∝ L / L^{1/α} = L^{1−1/α} .   (16)

This is precisely the behavior of the mobility that has been observed in experiments on dispersive transport, see for instance [8]. Here we have derived this behavior from the statistical theory of extreme events.
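The scaling expressed by Eqs. (15) and (16) can be illustrated by a small Monte Carlo sketch. The sketch below is ours and purely illustrative: it assumes τ_0 = 1 and E_c = 1, uses NumPy, and lets the number of trapping events n stand in for the thickness L. The median of the summed dwell times is reported because, for 0 < α < 1, their mean does not exist.

```python
# Illustrative Monte Carlo sketch of dispersive-transport scaling: exponential trap
# depths, Arrhenius dwell times, and a transit time approximated by the sum of the
# dwell times. Expectation: t_tr grows like n**(1/alpha), hence B ~ L/t_tr ~ L**(1 - 1/alpha).
import numpy as np

alpha = 0.5                        # alpha = k_B T / E_c
rng = np.random.default_rng(1)

for n in (100, 400, 1600, 6400):   # number of trapping events, proportional to L
    E = rng.exponential(size=(2000, n))      # trap depths in units of E_c
    t_tr = np.exp(E / alpha).sum(axis=1)     # summed dwell times, tau_0 = 1
    print(f"n={n:5d}  median t_tr={np.median(t_tr):11.3e}  n**(1/alpha)={n**(1/alpha):11.3e}")
```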
A. Motivation
Various questions arise with regard to the validity and the significance of the above result for the length dependence of the mobility. For instance, the meaning of the "expected largest value in a sample of n trials" is partially intuitive; i.e. the precise meaning of the quantity that follows from Eq. (5) is different from what is suggested by this notion. Precisely defined quantities are the moments of the probability density function of the largest value in a sample of n trials (if they exist) and the most probable value. Of course, complete information is contained in the PDF itself. Fortunately, all quantities of interest can be derived exactly, if the underlying PDF for one event is exponential, or of power-law form. This section will describe the results of these calculations, including a numerical determination of the PDF of the dwell time.
B. Exponential probability density for single events
The basic quantity which determines the dwell times of particles in the multiple-trapping model for dispersive transport is the trapping energy E. It is a random quantity, and the simplest, experimentally relevant assumption is the exponential PDF, see Eq. (12). The dimensionless form of this PDF is f(x) = exp(−x) with the variable x = E/E_c and the restriction 0 ≤ x < ∞. The cumulative distribution function is then F(x) = 1 − exp(−x). The expected largest value in samples of n trials has already been given in Eq. (6). The PDF for the largest value x_nl in a sample of n trials follows from Eq. (1) as

φ_n^{(1)}(x) = n exp(−x) [1 − exp(−x)]^{n−1} .   (17)

The index (1) shall indicate that this PDF is of type I in the classification of Gumbel [11,12]. The moments of φ_n^{(1)}(x) can be calculated exactly. The first moment, or mean value of x_nl, is defined as ⟨x_nl⟩ = ∫_0^∞ x φ_n^{(1)}(x) dx. The evaluation of the integral is made in the Appendix. The result is ⟨x_nl⟩ = ψ(n+1) − ψ(1) = ψ(n+1) + γ_E. The digamma function ψ(z) is defined as [14] ψ(z) = (d/dz) ln Γ(z), and ψ(1) = −γ_E, where γ_E is the Euler constant. For integer n one has ψ(n+1) = −γ_E + Σ_{k=1}^{n} 1/k, so that ⟨x_nl⟩ = Σ_{k=1}^{n} 1/k. Euler's constant is asymptotically given by γ_E = lim_{n→∞} (Σ_{k=1}^{n} 1/k − ln n). Hence the first moment is given for large n by ⟨x_nl⟩ ≃ ln n + γ_E. Note that the first moment differs from the expected largest value of x_nl by Euler's constant, cf. Eq. (6). Figure 1 contains the results on the first moment and on the expected largest value Eq. (6). Results of a numerical determination of the mean value x_nl have been included, to demonstrate that this quantity can also be accurately determined by numerical simulation. The second moment ⟨x_nl²⟩ can be calculated by the same technique as employed for the first moment, cf. the Appendix; the result involves the trigamma function ψ^{(1)}(z), defined by [14] ψ^{(1)}(z) = (d/dz) ψ(z). The variance of the extreme value is then given by var(x_nl) = ⟨x_nl²⟩ − ⟨x_nl⟩². Therefore, the variance of the extreme value is var(x_nl) = ψ^{(1)}(1) − ψ^{(1)}(n+1) = Σ_{k=1}^{n} 1/k². Note that ζ(2) = Σ_{k=1}^{∞} 1/k² = π²/6. Hence the variance of the extreme value approaches asymptotically this value, i.e., it is asymptotically independent of n. The significance of this fact has been stressed by Gumbel [11]. It is easy to derive bounds for the variance, for example π²/6 − 1/n < var(x_nl) < π²/6 − 1/(n+1). The variance of the dimensionless extreme energy as a function of n has been included in Fig. 1. We return to the PDF for the largest value x_nl in a sample of n trials, which is generally given by Eq. (2) and for an exponential PDF of single events by Eq. (17). We define the most probable value of the random variable x_nl, i.e., the most probable extreme value, as the value of x where the PDF φ_n(x) has its maximum. Differentiation of Eq. (2) yields the extreme value condition f′(x) [F(x)]^{n−1} + (n − 1) [f(x)]² [F(x)]^{n−2} = 0. This equation can be cast into the simple form

(n − 1) [f(x)]² = −f′(x) F(x) .   (30)

For an exponential PDF f(x) = exp(−x) the solution of this equation yields the most probable extreme value x_m.p. as a function of the sample size, x_m.p. = ln n. Thus the most probable extreme value agrees with the expected extreme value in the case of the exponential PDF for the single events, cf. Eq. (6).
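The exact mean and the asymptotically constant variance quoted above can be checked numerically. The sketch below is ours and illustrative only; it uses NumPy/SciPy with an arbitrarily chosen sample size and compares simulated maxima of n unit-exponential variables with ψ(n+1) + γ_E and with ψ^{(1)}(1) − ψ^{(1)}(n+1).

```python
# Numerical check (illustrative) of the mean and variance of the largest of n
# unit-exponential variables: mean = psi(n+1) + gamma_E, variance -> pi^2/6.
import numpy as np
from scipy.special import digamma, polygamma

n = 1000
rng = np.random.default_rng(2)
maxima = rng.exponential(size=(10000, n)).max(axis=1)

gamma_E = -digamma(1.0)
print("mean:", maxima.mean(), " exact:", digamma(n + 1) + gamma_E)
print("var :", maxima.var(),  " exact:", polygamma(1, 1) - polygamma(1, n + 1),
      " limit pi^2/6 =", np.pi**2 / 6)
```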
C. Power-law probability density for single events
The quantity that determines the effective mobility is the sum of all dwell times, which is assumed to be dominated by the largest dwell time. In this subsection we direct attention to the statistics of the largest dwell time. The dwell time in a trapping center follows from the trapping energy via the Arrhenius relation Eq. (11). The PDF and the CDF for the single events were already given in Eq. (13) and Eq. (14), respectively. We will use the dimensionless variable y = τ/τ_0 henceforth. The PDF for the extreme value y in a sample of n events is then given by

φ_n^{(2)}(y) = n α y^{−(1+α)} [1 − y^{−α}]^{n−1} ,   y ≥ 1 ,   (33)

where the parameter α = k_B T/E_c and the relevant range is 0 < α < 1. The index (2) shall also indicate that the distribution of y is of type II in the sense of Gumbel [11,12]. The moments of φ_n^{(2)}(y) can be calculated. Let ⟨y_nl^k⟩ denote the kth moment of y_nl. It is easily shown that ⟨y_nl^k⟩ = n β(1 − k/α, n), where β(a, b) is the usual beta function. From the properties of the beta function it is clear that the kth moment of the extreme value exists only if k is less than α. Since in our problem α is between zero and one, no integer moment k ≥ 1 of the extreme value exists. Nevertheless, the expected extreme value (as defined by Gumbel) exists, from which we have derived the length dependent mobility.
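The beta-function expression for the fractional moments, as reconstructed here, can be verified by direct quadrature. The sketch below is ours and illustrative; the parameter values are arbitrary and the integral is written after the substitution z = y^{−α}, which is also used for the typical value below.

```python
# Illustrative check of <y_nl^k> = n * B(1 - k/alpha, n) for the power-law case,
# evaluated after the substitution z = y**(-alpha); the moment exists only for k < alpha.
from scipy.integrate import quad
from scipy.special import beta

alpha, k, n = 0.5, 0.3, 50
integrand = lambda z: n * z**(-k / alpha) * (1.0 - z)**(n - 1)
value, _ = quad(integrand, 0.0, 1.0)
print("quadrature:", value, "  beta formula:", n * beta(1.0 - k / alpha, n))
```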
It is instructive to determine the most probable extreme value from the PDF Eq. (33) for the largest value y_nl in samples of n trials. The condition for an extremum has been given in Eq. (30); evaluation with the PDF for single events Eq. (13) and the CDF Eq. (14) gives

y_m.p. = [(1 + αn)/(1 + α)]^{1/α} .   (35)

For large n, we can replace 1 + αn by αn, and we get y_m.p. ≃ [αn/(1 + α)]^{1/α}. Another quantity, which is often used to characterize broad probability densities, is the typical value. It is defined by ln y_typ = ⟨ln y⟩, where the brackets indicate a mean value taken with a general PDF. The typical value of the PDF Eq. (33) for the extreme value of τ is given by ln y_typ = ∫_1^∞ ln y φ_n^{(2)}(y) dy. By the substitution z = y^{−α} this integral is transformed into ln y_typ = −(n/α) ∫_0^1 ln z (1 − z)^{n−1} dz. The above integral can be evaluated exactly, employing essentially the technique described in the Appendix (for the exponential PDF), and the result is

ln y_typ = [ψ(n + 1) + γ_E] / α .   (40)

Asymptotically, the typical extreme value behaves as y_typ ≃ [n exp(γ_E)]^{1/α}. That is, y_typ is related to ⟨x_nl⟩ by y_typ = exp(⟨x_nl⟩/α). We point out that the connection of the typical extreme value of y to the mean extreme value of x is generally valid if these two variables x and y are related by the exponential transformation y = exp(x). Notice that the energy E and the dwell time τ are related to each other by the Arrhenius law (11), which is an exponential transformation.
In Fig. 2 we have plotted the expected largest value τ_nl/τ_0 of the dimensionless dwell time, the most probable value according to Eq. (35), and the typical value following from Eq. (40), for α = 0.5. In the case of the dwell time, the expected, the most probable, and the typical largest value are all proportional to n^{1/α}, with different n-independent factors for large n. It seems to be a general fact that the quantities which characterize the largest value in samples of n trials show similar behavior with respect to n, if they exist.
Although the PDF for the extreme value of y in samples of n trials is exactly given by Eq. (33), the evaluation of this function is not practical for large n. Therefore, we have determined this PDF for fixed values of n by numerical simulations. We have generated random energies according to the distribution Eq. (12) with E_c = 1 and calculated dwell times using the Arrhenius law (11) with τ_0 = 1 and α = 0.5. The normalized dwell times were sorted into bins each containing an equal number of values. The inverse of the lengths of the bins gives the PDF, when properly normalized. The result for n = 1024 is given in Fig. 3. The maximum of the distribution agrees with the prediction Eq. (35). The typical value is found to the right of the maximum, close to but not identical with the median value. Similar observations have been made for broad distributions previously, for instance in Ref. [15].
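A sketch of such a simulation is given below; it is ours and purely illustrative, with the parameter values quoted above (E_c = 1, τ_0 = 1, α = 0.5, n = 1024) and an arbitrary number of repetitions. Fixed-width bins are used here instead of the equal-count bins described above, which is precisely the kind of representation choice discussed in the next paragraph; with this choice the histogram peak should lie near the most probable value of Eq. (35).

```python
# Illustrative reproduction of the numerical experiment: maxima of n = 1024 Arrhenius
# dwell times with exponential trap depths, binned with fixed-width bins; the peak
# should lie near the most probable value ((1 + alpha*n)/(1 + alpha))**(1/alpha).
import numpy as np

alpha, n, samples = 0.5, 1024, 20000
rng = np.random.default_rng(4)
y_max = np.exp(rng.exponential(size=(samples, n)) / alpha).max(axis=1)

y_mp = ((1 + alpha * n) / (1 + alpha)) ** (1 / alpha)
hist, edges = np.histogram(y_max, bins=np.linspace(0.0, 10 * y_mp, 400), density=True)
imax = int(np.argmax(hist))
mode = 0.5 * (edges[imax] + edges[imax + 1])
print("histogram mode:", mode, "  most probable value:", y_mp)
```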
Note that the PDF of the extreme value can have a different form if it is determined by different techniques. If the dwell times are binned into intervals of logarithmically increasing length, the maximum of the resulting PDF is found at a different location. This is due to the fact that by this procedure another PDF is estimated, which is related to the one we used by the usual transformation with a Jacobian. The maximum of the latter distribution is at n^{1/α}, which is identical to the expected largest value as introduced by Gumbel. The message is that the precise characterization of an extreme value depends on the PDF used for this purpose.
A final problem is the justification of the replacement of the sum of the dwell times, cf. Eq. (9), by the largest dwell time, as was done in Eq. (10). In the subsequent derivation the largest dwell time was identified with the expected largest time; however, any quantity that characterizes an extreme value can as well be used in the argument. The replacement can be justified by extending the derivations given above to the most probable K-th extreme value. Let φ_{n,K}(x) denote the probability density function of the K-th extreme value. A formal expression for φ_{n,K}(x) can be easily derived and is given below:

φ_{n,K}(x) = [n!/((K − 1)! (n − K)!)] f(x) [F(x)]^{n−K} [1 − F(x)]^{K−1} .

Note that if we set K = 1, we recover the probability density of the first extreme, which we have considered so far. The condition for an extremal value is

f′(x) F(x) [1 − F(x)] + (n − K) [f(x)]² [1 − F(x)] − (K − 1) [f(x)]² F(x) = 0 .   (43)

Solution of this equation with respect to x gives the most probable K-th extreme value. The extremum condition Eq. (43) can be solved for a power-law PDF as given in Eq. (13). The result for the dimensionless variable y ≡ τ/τ_0 is

y_m.p.(K) = [(1 + αn)/(1 + αK)]^{1/α} .

Let r_K denote the ratio of the most probable K-th extreme to that of the most probable (first) extreme. It is given by r_K = [(1 + α)/(1 + αK)]^{1/α}. For α = 0.5 and K = 2, we have r_2 = 9/16. The ratio becomes rapidly smaller for larger K; it is already small for K = 2 when α is small. The consequence is that the inclusion of the second, third, etc. extreme value in the estimate of the dwell time would not change the asymptotic dependence on the sample size n; only numerical factors would be modified.
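A quick numerical evaluation of the ratio reconstructed above (ours, illustrative) reproduces the quoted value 9/16 and shows how rapidly the higher extremes become negligible.

```python
# Illustrative evaluation of r_K = ((1 + alpha) / (1 + alpha * K)) ** (1 / alpha),
# the ratio of the most probable K-th extreme to the most probable first extreme.
alpha = 0.5
r = lambda K: ((1 + alpha) / (1 + alpha * K)) ** (1 / alpha)
print(r(2))                                   # 0.5625 = 9/16, as quoted in the text
print([round(r(K), 4) for K in range(2, 7)])  # decreases rapidly with K
```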
V. CONCLUDING REMARKS
The apparent anomalies in the transport properties of charges in amorphous substances, which are summarized as 'dispersive transport', are due to broad distributions of trapping times of the charge carriers. It is satisfying that features like the length dependence of the effective mobility can already be derived from the statistics of extreme events, the extreme events being the occurrence of particularly long trapping times.
Although the length dependence of the effective mobility could already be deduced from the notion of the "expected largest value" of the trapping time, we investigated in detail various quantities in the context of the statistics of extreme events. Explicit expressions could be derived for the mean, the most probable, and the typical extreme values in the case of exponential or power-law probability densities for single events. We could also justify the use of the largest dwell time to estimate the total transit time by considering the second, third, etc. extreme values. Their effect is to modify numerical factors, but they do not alter the asymptotic dependence on the sample size.
All derivations were made here for the ideal case of an exponential density of energy levels. In practice, deviations from the exponential density of states are of interest. It turned out that many features of dispersive transport are also present for a Gaussian density of states [5], if the temperature is sufficiently low. Hence it is of interest to extend the present derivations also to the case of other densities of states, for instance to the Gaussian one. In the case of more complicated probability densities of the single events, analytical derivations are either difficult or impossible. Hence one has to resort to numerical simulation to study the statistics of extreme events in those cases.
Finally we point out that the observation of different behaviors of distributions, depending on which of two exponentially related variables are used, is rather general. For instance, de Arcangelis, Redner, and Coniglio [16] found broad distributions of voltage drops in random resistor networks, whose moments are characterized by an infinite set of exponents. To the contrary, the distribution of the logarithm of the voltage drops behaves normally, in that the moments exhibit constant-gap scaling. Similar behavior of the probability distributions of the cluster numbers in the percolation problem was discussed by Stauffer and Coniglio [17].
Discussions with G. Schütz are gratefully acknowledged. KPNM is thankful to M.C. Valsakumar for numerous discussions on the statistics of extreme events.
The first moment u_n ≡ ⟨x_nl⟩ = n ∫_0^∞ x exp(−x) [1 − exp(−x)]^{n−1} dx is evaluated as follows. The following trick [18] is applied: we write x exp(−x) = −lim_{s→0} (d/ds) exp[−(s+1)x]. We have ∫_0^∞ exp[−(s+1)x] [1 − exp(−x)]^{n−1} dx = β(s+1, n). The integral can now be performed with the result u_n = −n lim_{s→0} (d/ds) β(s+1, n) = ψ(n+1) − ψ(1).
|
2014-10-01T00:00:00.000Z
|
1998-05-01T00:00:00.000
|
{
"year": 1999,
"sha1": "6265b58f739a1fd1ac98dfc6acfe27cf9de474aa",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9901047",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6265b58f739a1fd1ac98dfc6acfe27cf9de474aa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
208527645
|
pes2o/s2orc
|
v3-fos-license
|
Multi-integrals of finite variation
The aim of this paper is to investigate different types of multi-integrals of finite variation and to obtain decomposition results.
Introduction
In [23] it was proved that a Banach space valued function is McShane integrable if and only if it is Pettis and Henstock integrable. That result was then generalized to compact valued multifunctions Γ (see [20]), to weakly compact valued multifunctions (see [6]) and to bounded valued multifunctions (see [8]). Di Piazza and Marraffa [16] presented an example of a Pettis and variationally Henstock integrable function that is not variationally McShane integrable (= Bochner integrable by virtue of [18, Lemma 2]). It turns out that Fremlin's theorem can be formulated for variational integrals if and only if the variation of the integral is finite in a suitable sense. Finally, in the last section, using the DL or Db conditions we are able to prove that the scalar integrability of a multifunction can be obtained as a translation of the Pettis integrability (Theorem 4.1), while its Henstock integrability under the DL condition is obtained using Birkhoff integrability (Theorem 4.3), both results with integrals of finite variation. This article is the last one in which Domenico Candeloro was able to cooperate and to give his personal contribution, always precious, and we want to dedicate it to him, in his memory.
Preliminaria
Throughout, X is a Banach space with norm ‖·‖ and dual X*. The closed unit ball of X is denoted by B_X. The symbol c(X) denotes the collection of all nonempty closed convex subsets of X, and cb(X), cwk(X) and ck(X) denote respectively the families of all bounded, weakly compact and compact members of c(X). For every C ∈ c(X) the support function of C is denoted by s(·, C) and defined on X* by s(x*, C) = sup{⟨x*, x⟩ : x ∈ C}, for each x* ∈ X*. We set |C|_h := d_H(C, {0}) = sup{‖x‖ : x ∈ C}, where d_H is the Hausdorff metric on the hyperspace cb(X). The map i : cb(X) → ℓ∞(B_{X*}) given by i(A) := s(·, A) is the Rådström embedding (see, for example, [1, Theorem 3.2.9 and Theorem 3.2.4(1)], [14], or [28]). I is the collection of all closed subintervals of the unit interval [0, 1]. All functions investigated are defined on the unit interval [0, 1] endowed with Lebesgue measure λ and the σ-algebra L of Lebesgue measurable sets. A map Γ : [0, 1] → c(X) is called a multifunction. In the sequel, given a multifunction Γ : [0, 1] → c(X), we set D_Γ(t) := diam(Γ(t)), for all t ∈ [0, 1]. We say that Γ satisfies the condition DL if D_Γ is Lebesgue integrable, and the condition Db if D_Γ is bounded. We recall that a multifunction Γ is called integrably bounded if there is a Lebesgue integrable function h such that |Γ(t)| := sup{‖x‖ : x ∈ Γ(t)} ≤ h(t) for almost every t ∈ [0, 1]. We always have D_Γ(t) ≤ 2|Γ(t)|. Hence, if Γ is integrably bounded, then Γ satisfies DL. If Γ(t) ∋ 0 for almost every t ∈ [0, 1], then |Γ(t)| ≤ D_Γ(t) a.e. Each function g : [0, 1] → X, considered as a ck(X)-valued multifunction, trivially satisfies the Db property.
We recall that if Φ : L → Y is an additive vector measure with values in a normed space Y, then the variation of Φ is the extended non-negative function |Φ| whose value on a set E ∈ L is given by |Φ|(E) = sup_π Σ_{A∈π} ‖Φ(A)‖, where the supremum is taken over all partitions π of E into a finite number of pairwise disjoint members of L. If |Φ|([0, 1]) < ∞, then Φ is called a measure of finite variation. If the measure Φ is defined only on I, the finite partitions considered in the definition of variation are composed of intervals. In this case we speak of finite interval variation and we use the symbol Φ̃, namely Φ̃(I₀) := sup_π Σ_{I∈π} ‖Φ(I)‖, where the supremum is taken over all finite partitions π of I₀ ∈ I into non-overlapping intervals. If Y is a metric space, for example (cb(X), d_H), which is a near vector space in the sense of [28], and Φ : I → cb(X) is additive, we use in its interval variation the distance d_H(Φ(A), {0}) instead of ‖Φ(A)‖. We recall here briefly the definitions of the integrals involved in this article. A scalarly integrable multifunction Γ : [0, 1] → c(X) is Dunford integrable in a non-empty family C ⊂ c(X**) if for every set A ∈ L there exists a set M^D_Γ(A) ∈ C such that s(x*, M^D_Γ(A)) = ∫_A s(x*, Γ) dλ for every x* ∈ X*. If M^D_Γ(A) ⊂ X for every A ∈ L, then Γ is called Pettis integrable. We write it as (P)∫_A Γ dλ or M_Γ(A). We say that a Pettis integrable Γ : [0, 1] → c(X) is strongly Pettis integrable if M_Γ is an h-multimeasure (i.e. it is countably additive in the Hausdorff metric). A multifunction Γ : [0, 1] → cb(X) is said to be Henstock (resp. McShane) integrable on [0, 1] if there exists Φ_Γ([0, 1]) ∈ cb(X) with the property that for every ε > 0 there exists a gauge δ : [0, 1] → R⁺ such that for each δ-fine Perron partition (resp. partition) {(I₁, t₁), . . . , (I_p, t_p)} of [0, 1] one has d_H( Σ_{i≤p} Γ(t_i) λ(I_i), Φ_Γ([0, 1]) ) < ε. (1) If the gauges above are taken to be measurable, then we speak of H- (resp. Birkhoff-) integrability on [0, 1]. If I ∈ I, then Φ_Γ(I) := Φ_{Γχ_I}([0, 1]). Finally, if instead of formula (1) we have Σ_{i≤p} d_H( Γ(t_i) λ(I_i), Φ_Γ(I_i) ) < ε, (2) then we speak about variational Henstock (resp. McShane) integrability on [0, 1]. In all the cases Φ_Γ : I → cb(X) is an additive interval multimeasure. Thanks to the Rådström embedding, a multifunction Γ is "gauge" integrable (in one of the previous senses) if and only if its image i • Γ in ℓ∞(B_{X*}) is integrable in the same way. A multifunction Γ : [0, 1] → cb(X) is said to be Henstock–Kurzweil–Pettis (or HKP) integrable in cb(X) if it is scalarly Henstock–Kurzweil (or HK) integrable and for each I ∈ I there exists a set N_Γ(I) ∈ cb(X) such that s(x*, N_Γ(I)) = (HK)∫_I s(x*, Γ) for every x* ∈ X*. If an HKP-integrable Γ is scalarly integrable, then it is called weakly McShane integrable (or wMS).
We recall that a function f defined on [0, 1] is Denjoy–Khintchine integrable if there exists an ACG function (cf. [26]) F such that its approximate derivative is almost everywhere equal to f.
A multifunction Γ : [0, 1] → c(X) is said to be Denjoy–Khintchine–Pettis integrable in a family C if each support function s(x*, Γ) is Denjoy–Khintchine integrable and for every I ∈ I there exists C_I ∈ C with (DK)∫_I s(x*, Γ) = s(x*, C_I), for every x* ∈ X*. As regards other definitions of measurability and integrability that are used here but not spelled out, and the known relations among them, we refer to [3-7, 9, 10, 13, 17, 21, 31], in order not to burden the presentation.
Multimeasures of finite variation
We begin with a known fact (cf. [26, Theorem 6.12]). Since F is also BV, an application of [26, Theorem 6.15] gives that F is also AC on [a, b]; so f is Lebesgue integrable.
and so, due to the completeness of cb(X) under the Hausdorff distance, the series Σ_i Φ(I_i) is convergent in cb(X). But for each x* ∈ X* the function s(x*, Ψ) is a measure, and so Σ_i s(x*, Φ(I_i)) = s(x*, Ψ(∪_i I_i)). It follows that Ψ is σ-additive (in the Hausdorff metric) on the algebra J generated by I. By [15, Proposition I.15], i • Ψ restricted to J is strongly additive. It is a consequence of [27] or [15, Theorem I.5.2] that i • Ψ is a measure on the σ-algebra of Borel subsets of [0, 1]. But i • Ψ(E) = 0 whenever the Lebesgue measure of E vanishes, and consequently i • Ψ is a measure on L. Since i(cb(X)) is a closed cone, Ψ is also a measure in the Hausdorff metric of cb(X), and therefore Γ is strongly Pettis integrable on L. Under stronger assumptions one obtains stronger results; we proved the following in [8]. Proof. We need to prove only that each vH-integrable multifunction Γ : [0, 1] → cb(X) with integral of finite interval variation is variationally McShane integrable. We know already from Theorem 3.2 that Γ is Pettis integrable. Since i • Γ is vH-integrable, it is strongly measurable. If M_Γ is the Pettis integral of Γ, then i • M_Γ is a measure of finite variation and i • M_Γ(I) = (vH)∫_I i • Γ. It follows that i • Γ is Bochner integrable. Now we may apply [5, Proposition 3.6] to obtain the variational McShane integrability of Γ.
In case of vector valued functions f : [0, 1] → X, by the properties of the Pettis and the Bochner integrals, it follows at once that if f is strongly measurable, Pettis integrable and its Pettis integral has finite variation, then f is Bochner integrable. The next result is the multivalued version of this result. Theorem 3.6. Let Γ : [0, 1] → cb(X) be Bochner measurable, Pettis integrable, and its Pettis integral has finite variation. Then Γ is integrably bounded.
Proof. It is an immediate consequence of Theorem 3.6 if we proceed analogously to [5,Proposition 3.6].
Decompositions
In the study of the integrability of multifunctions it is important to decompose a multifunction as a sum of a selection that is integrable in the same sense and a multifunction that is integrable in a stronger sense than the original one (see for example [5-8, 18, 19, 24]). Using the Db or DL conditions we are able to extend decomposition results and to write integrable multifunctions as a translation of a multifunction whose integral is of finite variation.
(1) Γ satisfies the DL-condition (or Db condition);
(2) Γ = G + f, where f is a properly integrable selection of Γ, G is Pettis integrable in cb(X) (cwk(X) or ck(X)) and ∫₀¹ D_G(t) dt < ∞ (and G is bounded). In particular the indefinite integral of G is of finite variation.
Proof. Assume that Γ is DP-integrable. Due to [8, Theorem 3.5] Γ = G + f , where G is Pettis integrable, f is Denjoy integrable and G satisfies the condition DL. It is obvious that the Pettis integral of G is of finite variation.
Remark 4.2. Unfortunately, even if G : [0, 1] → ck(X) is a positive multifunction that is Pettis integrable and whose integral is of finite variation, G may fail to satisfy the DL condition. To see this, let X = ℓ²[0, 1] and let {e_t : t ∈ (0, 1]} be its orthonormal system. If G(t) := conv{0, e_t/t}, then s(x, G) = 0 a.e. for each single x ∈ ℓ²[0, 1], and so the integral and its variation are equal to zero. However, diam G(t) = 1/t, and so the DL-condition fails. Moreover, G is not Henstock integrable. Indeed, let δ be any gauge and {(I₁, t₁), . . . , (I_n, t_n)} be a δ-fine Perron partition of [0, 1]. Assume that 0 ∈ I₁; then t₁ ≤ |I₁|. Hence λ(I₁)/t₁ ≥ 1 for t₁ > 0, and so ‖Σ_{i≤n} (e_{t_i}/t_i) λ(I_i)‖ ≥ 1. Consider now the multifunction given by H(t) := conv{0, e_t}, where X is as above. We are going to prove that H is Birkhoff integrable. Given ε > 0, let n ∈ N be such that 1/√n < ε, and let δ be any gauge, pointwise less than 1/n. If {(I₁, t₁), . . . , (I_p, t_p)} is a δ-fine partition of [0, 1], the corresponding Riemann sums can be estimated using the inequality Σ_i a_i² ≤ (Σ_i a_i)², where for each fixed k ≤ n we take as a_i the numbers λ(I_i ∩ J_k). If δ is measurable, then we get the Birkhoff integrability of H.
Some additional results will be given now, in order to get decompositions with gauge integrable multifunctions.
Theorem 4.3. Let Γ : [0, 1] → ck(X) satisfy the DL-condition and be Henstock integrable. Then Γ = G + f, where the integrable selection f of Γ is arbitrary and G is an abs-Birkhoff integrable multifunction. In particular the integral of G has finite variation. If Γ is Bochner measurable, then G is also variationally Henstock integrable.
Proof. Assume that Γ is H-integrable. It is known (see [20, Theorem 3.1]) that Γ has an H-integrable selection f; set G := Γ − f. Thanks to [31, Theorem 4], i • G is Riemann-measurable; it is also integrably bounded, which means that i • G (and so G) is absolutely Birkhoff integrable, thanks to [11, Theorem 2]. Assume now that Γ is H-integrable. Then, according to [8, Theorem 3.5], Γ = G + f, where G is Birkhoff integrable. By the assumption, G satisfies the DL-condition; hence again the function t → i • G(t) is integrably bounded. Consequently, i • G is absolutely Birkhoff integrable, and hence so is G. The vH-integrability of G follows from [2, Corollary 4.1], since G is Pettis integrable.
A similar result can be given also for Birkhoff integrable functions Γ : [0, 1] → cwk(X): the proof is essentially the same but instead of [20] we invoke [5,Theorem 3.4].
Proposition 4.4. Let Γ : [0, 1] → cwk(X) satisfy DL-condition, and assume that Γ is Birkhoff integrable. Then we have Γ = G+f, where f is any Birkhoff integrable selection of Γ, and G is an abs-Birkhoff integrable multifunction. In particular the integral of G has finite variation. It is also possible to obtain decompositions where the multifunction G turns out to be variationally McShane integrable, as follows.
Proposition 4.6. Let Γ : [0, 1] → cwk(X) satisfy DL-condition, and assume that Γ is Bochner measurable. Then we have Γ = G+f, where f is any strongly measurable selection of Γ, and G is a variationally McShane integrable multifunction.
Proof. Let f be any strongly measurable selection of Γ, and set G = Γ − f . Then clearly G is Bochner measurable. Moreover, since Γ satisfies the DL condition and f is a selection from Γ, i•G is integrably bounded. Then i•G is strongly measurable and integrably bounded, and therefore variationally McShane integrable. Of course this implies that also G is integrable. Proof. Take f as in [16]; then f is vH and Pettis integrable, but not Bochner integrable. Let Γ = conv{0, f (t)}, then Γ is vH-integrable, Pettis but not Bochner integrable, as shown in [5,Example 4.7]. By [5, Proof. Suppose that such a decomposition exists. Then, since G is McShane integrable, the function i • G is also McShane integrable and consequently it has relatively norm compact range.That is however equivalent to the norm relative compactness of M G (L) in d H . But then G can be approximated by simple functions (see [30,Theorem 2.3]). Since the integral of f is norm relatively compact (because Lebesgue measure is perfect) also f can be approximated by simple functions in the Pettis norm (see [29,Theorem 9.1]). As a result the multifunction Γ can be approximated by simple multifunctions, which is impossible, since its range
The dynamics of reproductive rate, offspring survivorship and growth in the lined seahorse, Hippocampus erectus Perry, 1810
Summary Seahorses are the vertebrate group with the embryonic development occurring within a special pouch in males. To understand the reproductive efficiency of the lined seahorse, Hippocampus erectus Perry, 1810 under controlled breeding experiments, we investigated the dynamics of reproductive rate, offspring survivorship and growth over births by the same male seahorses. The mean brood size of the 1-year old pairs in the 1st birth was 85.4±56.9 per brood, which was significantly smaller than that in the 6th birth (465.9±136.4 per brood) (P<0.001). The offspring survivorship and growth rate increased with the births. The fecundity was positively correlated with the length of brood pouches of males and trunk of females. The fecundity of 1-year old male and 2-year old female pairs was significantly higher than that from 1-year old couples (P<0.001). The brood size (552.7±150.4) of the males who mated with females that were isolated for the gamete-preparation, was larger than those (467.8±141.2) from the long-term pairs (P<0.05). Moreover, the offspring from the isolated females had higher survival and growth rates. Our results showed that the potential reproductive rate of seahorses H. erectus increased with the brood pouch development.
Introduction
In most animals, the potential reproductive rate is the population's mean offspring production when not constrained by the availability of mates (Clutton-Brock and Vincent, 1991;Clutton-Brock and Parker, 1992;Parker and Simmons, 1996;Wilson et al., 2003), and the reproductive rate often vary because of mating competition and parental investment (Trivers, 1972). In some syngnathid species, the reproduction is influenced by the complex brooding structure, brood pouch development, sexual selection, mating patterns and social promiscuity (Parker and Simmons, 1996;Carcupino et al., 2002;Wilson et al., 2003;Vincent et al., 2004;Lin et al., 2006;Naud et al., 2009).
The family Syngnathidae (seahorses and pipefishes) is the sole vertebrate group where the embryonic development occurs within a special pouch in males (Herald, 1959) that provides aeration, protection, osmoregulation and nutrition to the embryo or offspring after the females depositing eggs during mating (Linton and Soloff, 1964;Berglund et al., 1986;Partridge et al., 2007;Ripley, 2009). This pouch is similar to the placental function in mammals and acts like the mammalian uterus, and the embryos become embedded within depressions of the interior lining of the brood pouch (Carcupino et al., 1997;Foster and Vincent, 2004).
For syngnathid species, the survivorship of offspring is often used to evaluate the reproductive ability of the parents (Wootton, 1990;Cole and Sadovy, 1995;Vincent and Giles, 2003). The offspring survivorship of pipefish within a pregnancy is affected by the size of the female, the number of eggs transferred and the male's sexual responsiveness (Paczolt and Jones, 2010). Dzyuba et al. (Dzyuba et al., 2006) reported that in the seahorse Hippocampus kuda the parental size and age could affect the number, survivorship and even the growth of the offspring. Not all the eggs deposited in the male's pouch successfully complete the development and hatch as juveniles (offspring). For example, 1.23% of the eggs failed to develop in the pouch of wild H. abdominalis (Foster and Vincent, 2004); 2-33% of the eggs were found to be sterile in H. erectus pouches (Teixeira and Musick, 2001); and 45% of the eggs were lost during the pregnancy of H. fuscus (Vincent, 1994a). Moreover, the gonad development, clutch size and even the mate competition and physical interference during courtship and egg transfer should also be considered when estimating the offspring number from parents (Vincent and Giles, 2003;Foster and Vincent, 2004;Lin et al., 2006;Lin et al., 2007;Paczolt and Jones, 2010).
Many studies on understanding the potential reproductive rate in seahorses have been conducted through estimating the operational sex ratios, courtship roles, parental investment patterns, mate competitions and so on (A. C. J. Vincent, Reproductive Ecology of Seahorses, PhD thesis, Cambridge University, UK, 1990) (Vincent, 1994a;Vincent, 1994b;Masonjones, 1997;Masonjones and Lewis, 2000). For the investigation of mortality and growth of offspring, most reports focus on the controlled culture experiments, such as effects of environmental factors, diets and culture protocols on the survivorship in a limited duration (Scarratt, 1996;Job et al., 2002;Woods, 2000;Woods, 2003;Hilomen-Garcia et al., 2003;Lin et al., 2008;Koldewey and Martin-Smith, 2010).
The seahorse has a special reproductive strategy, but no report has been made on the dynamics of reproductive rate, especially in the first few births. The lined seahorse, H. erectus Perry, 1810 is mainly found from Nova Scotia along the western Atlantic coast, through the Gulf of Mexico and Caribbean to Venezuela, in shallow inshore areas to depths of over 70 m (Scarratt, 1996;Lourie et al., 1999;Lin et al., 2008). The purpose of the present study was to investigate the dynamics of the reproductive rate over the first few births, and then evaluate the effects of parent ages and mating limitation on the reproductive rate, offspring survivorship and growth of H. erectus.
Experimental seahorses
The lined seahorses H. erectus used in this study were cultured at the Leizhou Seahorse Center of the South China Sea Institute of Oceanology, Chinese Academy of Sciences (SCSIO-CAS) (20.54˚N, 110.04˚E), with Animal Ethics approval for experimentation granted by the Chinese Academy of Sciences.
The F2 generation of this species was used for all the controlled breeding experiments. After being released from the broodstock males (F1), the offspring (F2) were cultured in separate concrete outdoor ponds (5 × 4 × 1.4 m), with recirculating sea water treated with double sand filtration. Seahorses were fed daily with rotifers, copepods, Artemia, Mysis spp. and Acetes spp. A black nylon mesh was used to shade the outdoor ponds to keep the light intensity below 3400 Lux. The temperature was 22-28˚C and the salinity was 31-34‰ during the study, which ran from March 2009 to March 2011.
Reproductive rate over successive birth
Sex was determined by the presence or absence of a brood pouch at approximately 70 days after birth. Seventy-five pairs of male (standard body length: 10.3 ± 0.6 cm) and female (standard body length: 8.7 ± 0.8 cm) F2 seahorses were haphazardly selected from the outdoor ponds and cultured in 5 indoor round tanks (diameter 1.6 m, depth 0.9 m) at a stocking density of 15 pairs per tank (1 seahorse/60 L). In order to estimate the reproductive rate over successive births, the broods (1st, 2nd, 3rd, 4th, 5th and 6th successive births) from the same pair were recorded. Seahorse pairs that mated and then successfully hatched offspring were marked with numbered nylon rings around their necks. Plastic plants and corallites were used as the substrate and holdfasts for the fish. The indoor husbandry protocol was the same as that in the outdoor ponds, and the light intensity was adjusted through the glass ceiling and black nylon mesh.
Fourteen, 13, 12, 11, 9 and 9 batches of juveniles from the 1st, 2nd, 3rd, 4th, 5th and 6th successive births, respectively, were used to estimate the survivorship and growth of offspring. Sixty juveniles haphazardly selected from each brood (batch) were cultured in 3 recirculating tanks (L × W × H, 50 × 30 × 40 cm) (each with 20 juveniles) for 5 weeks. The juveniles were not fed during the first 10 hours after birth, and then they were fed with copepods and newly hatched Artemia nauplii in excess. The temperature, salinity, dissolved oxygen (DO), light intensity and photoperiod in the tanks were 26 ± 1.0˚C, 32 ± 1.0‰, 6.5 ± 0.5 mg/L, 2000 Lux, and 16 L (0700-2300 h):8 D (2300-0700 h), respectively. The tanks were aerated gently so as not to form excessive air bubbles and cause turbulence.
In order to assess if the brood size was related to the body condition factor of parent seahorses, the brood pouch length (parallel length from the anus to the tip of the brood pouch on the tail) of the male seahorse and trunk length of the female of each brooding pair were measured after the copulation. Then the linear regressions for the relationship among offspring number, brood pouch size of males and trunk length of females were analyzed.
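A minimal sketch of such a regression analysis is shown below; the measurements are invented for illustration (the study itself analysed its data with SPSS and SigmaPlot), so this is only meant to make the procedure concrete.

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) measurements: brood pouch length (cm) of the male
# and the brood size of the corresponding birth.
pouch_length = np.array([2.1, 2.4, 2.6, 2.9, 3.1, 3.4, 3.6, 3.9, 4.2, 4.5])
brood_size   = np.array([ 90, 140, 180, 220, 270, 320, 350, 410, 450, 500])

res = stats.linregress(pouch_length, brood_size)
print(f"brood size = {res.slope:.1f} * pouch length + {res.intercept:.1f}")
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4g}")
```

The same call, applied to the female trunk lengths, gives the second regression reported in the study.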
Effect of parent ages
To compare the reproductive rates of young (one-year-old) (M1, F1, never mated) and old (two-year-old) seahorse pairs (M2, F2, mated many times), 4 combination treatments (M1:F1, M2:F1, M1:F2 and M2:F2, respectively) were set up, each with 4 replicates, and each replicate had 15 pairs. The body lengths of the male and female seahorses were 17.4 ± 1.6 and 15.8 ± 2.1 cm, respectively. Among the 60 pairs of parent seahorses in each treatment, 7 broods from 7 of the males were haphazardly selected and 100 juveniles in each brood were cultured for 5 weeks following the same culture protocol as in the previous experiment.
Effect of mating limitation (female isolation)
In order to investigate the effect of mating limitation (female isolation) on the reproductive rate of H. erectus, control and experimental groups of 2-year-old seahorse pairs (males: 16.6 ± 2.1 cm, females: 15.8 ± 1.7 cm) were used. In the control (TR-1), 60 pairs of seahorses haphazardly selected from the concrete outdoor ponds were cultured in 4 indoor tanks (diameter 1.6 m, depth 0.9 m) as 4 replicates, and each tank had 15 pairs with a stocking density of 1 seahorse/60 L. During the experiment, four successive births (TR-1-1, TR-1-2, TR-1-3 and TR-1-4, in which 10, 10, 9 and 8 males released their offspring, respectively) from the same pairs of seahorses were utilized.
In the experimental treatment (TR-2), 60 pairs of 2-year old seahorses derived from the same brood stock as the seahorses in control group were also cultured in 4 round tanks (diameter 1.6 m, depth 0.9 m) as 4 replicates, and each had 15 pairs of seahorses. In a tank, the male and female seahorses were cultured separately (male groups and female groups) through a glass wall in the seawater. When the gonads of female seahorse matured (the abdomen was bosomy and the cloaca was protuberant), the female was transferred over to the male side. After the courtship and mating, the female was put back to the female side. The pregnant male seahorse released his babies at approximately 20 days later, depending on the water temperature and nutritional conditions. The female seahorse (gonad already matured) was returned to the male side of the tank where she paired and mated again (not necessarily with the original partners) six days after the male released his offspring (a modified method from Masonjones' and Lewis' investigation (Masonjones and Lewis, 2000)). Brood sizes from the four consecutive births (TR-2-1, TR-2-2, TR-2-3 and TR-2-4, and 11, 9, 9 and 8 males released their offspring in each birth, respectively) were counted. The juveniles from both the control and the treatment groups were cultured for 5 weeks and the mean survival rate and the distributions for standard body length of juveniles per brood were recorded.
Data analysis
Statistical analyses were conducted using the software SPSS 17 (Statistical Program for Social Sciences 17) and SigmaPlot 10.0. One-way analysis of variance (ANOVA), regression analysis and the Kolmogorov-Smirnov test were used to assess the relationships among the brood size, survival rate and body size of the juveniles among the treatments. All the variables were tested for normality and homogeneity of variance. If ANOVA effects were significant, comparisons between the different means were made using post hoc least significant difference (LSD) tests.
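The analysis pipeline described above can be illustrated with the short Python sketch below. The group sizes match those reported for the 1st, 3rd and 6th births, but the brood sizes are simulated placeholders, and the unadjusted pairwise t-tests are only an LSD-style approximation (classic LSD pools the error variance over all groups); the study itself used SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated brood sizes for three of the successive births (illustration only)
birth1 = rng.normal( 90, 40, size=14)
birth3 = rng.normal(250, 80, size=12)
birth6 = rng.normal(460, 130, size=9)

F, p = stats.f_oneway(birth1, birth3, birth6)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4g}")

# Follow up with pairwise comparisons only if the overall ANOVA is significant
if p < 0.05:
    pairs = [(("birth1", birth1), ("birth3", birth3)),
             (("birth1", birth1), ("birth6", birth6)),
             (("birth3", birth3), ("birth6", birth6))]
    for (name_a, a), (name_b, b) in pairs:
        t, pt = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {pt:.4g}")
```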
Fig. 2. The relationships between brood size and brood pouch length.
Scatter-plots and linear regression lines for the relationships between brood size and brood pouch length of male seahorses Hippocampus erectus (95% confidence bands); between brood size and the trunk length of females (95% confidence bands) for the first 6 successive births.
Discussion
Our results show that young pairs of seahorses H. erectus had smaller brood sizes and poorer offspring survivorship and growth compared with those from old parent pairs. However, they could improve their reproductive efficiency over a few successive births. It was reported that in the gulf pipefish Syngnathus scovelli, prior pregnancy of males could influence later reproduction through post-copulatory sexual selection and sexual conflict between the sexes in a controlled breeding experiment (Paczolt and Jones, 2010). In this study, the mean brood size of the young pairs of H. erectus increased from 85.4 ± 56.9 in the 1st birth to 465.9 ± 136.4 in the 6th birth. This is similar to the finding for H. kuda that large parents can produce more offspring (Dzyuba et al., 2006). In the wild, the brood size of H. erectus was large, with a maximum birth number of 1552 juveniles (Teixeira and Musick, 2001), which was significantly larger than that from the cultured pairs (Lin et al., 2008; present study). This is partly because the development and metabolic rate of seahorses change under the low social interaction and low mating competition in the wild (Vincent, 1994a,b; Naud et al., 2009). To ascertain the relationship among the different births by the same male, we used females of the same body size to decrease the probability of size-biased sexual selection and mate switching (Naud et al., 2009; Hunt et al., 2009; Paczolt and Jones, 2010). During the study, fidelity was relatively low in most pairs during the reproductive process, so only 10 males among the 75 pairs were found to mate with the same females during the 6 successive births. During the culture, pairs of H. erectus had short greeting and re-mating durations before or after each copulation, which significantly improved the reproductive rate. This result is similar to the finding that the mate choices of males could be reinforced by females with more pronounced secondary sexual characters during post-copulatory sexual selection in S. scovelli (Paczolt and Jones, 2010).
During the study, the young couples were still growing and should therefore have had higher reproductive efficiency than before, which may explain why the number and survival rate of offspring significantly increased over successive births. The larger brood pouches of the growing males were able to accommodate more eggs from the females. This is consistent with the report that old seahorses H. kuda with large body sizes had larger brood sizes and higher offspring growth and survival rates (Dzyuba et al., 2006).
We found strong positive linear correlations between the male pouch size and number of offspring produced, as well as between the female standard body length and the number of offspring produced. As it provides for some nutrition, aeration and osmoregulation for developing embryos (Linton and Soloff, 1964;Berglund et al., 1986;Partridge et al., 2007), the surface area of the male's brood pouch is a limiting factor in the number of embryos that can be successfully incubated (Azzarello, 1991;Dzyuba et al., 2006). Therefore, the correlation between brood size and brood pouch length of male seahorses was positive. Paczolt and Jones (Paczolt and Jones, 2010) found a strong positive correlation between the number of eggs transferred and female size in S. scovelli. Similarly, female size in pipefish can affect the egg size and concentration of eggs within the male pouch (Berglund et al., 1986;Ahnesjö, 1992;Watanabe and Watanabe, 2002). However, the larger size males do not necessarily release larger sized offspring than those from smaller size males, and we have found that the body size of offspring was correlated with the brood size, gestation time and nutrient supply in the same males and females' investment (Lin et al., 2008;present study).
The mean number of offspring from the 2-year-old female and 1-year-old male pairs (302.4 ± 113.8 per brood) was much higher than that from the 1-year-old pairs (89.6 ± 36.9 inds/brood, n = 30, F2,27 = 29.33, P < 0.001). This result is similar to the report that female fecundity in pipefish increases over 2.5 times from small (1-year-old) to large (2-year-old) females (Berglund and Rosenqvist, 1990). Compared with the 1-year-old males, 2-year-old males released more offspring after mating with 1-year-old or 2-year-old females (increases of 36.4% and 74.2%, respectively). In addition, the survivorship of offspring released from the 2-year-old males was higher than that from the 1-year-old males (Fig. 3B). This may be due to the increased size of the brood pouches of the old males, or to better functioning of the pouches in old males (Berglund et al., 1986; Azzarello, 1991; Carcupino et al., 2002; Dzyuba et al., 2006). This is similar to the report that the difference in the reproductive rate of male and female Syngnathus typhle increases as the fish age (Svensson, 1988). However, this does not show that older seahorses have better quality offspring; offspring quality is generally correlated with brood pouch development, gestation time and parental investment (Lin et al., 2008; Naud et al., 2009). Fig. 4. Brood size of four successive births by male seahorses Hippocampus erectus in the two groups (TR-1, TR-2). In the TR-1 groups, male and female seahorses were cultured together (broods per birth: n = 10, 10, 9 and 8); in the TR-2 groups, females were isolated (n = 11, 9, 9 and 8) (ANOVA-Tukey HSD analysis: n = 74, F7,66 = 0.89, P = 0.513; 95% confidence bands).
Masonjones and Lewis (Masonjones and Lewis, 2000) limited sexual receptivity in female seahorses H. zosterae by isolating the females and thereby induced sexual competition among the males, and their results showed that the females need a relatively long time investment to prepare to mate with the males. In this study, the mating competition among the male seahorses was strong, and males sometimes courted females that were not ready. This is similar to the report that male seahorses H. fuscus competed more intensively than females (Vincent, 1994a). Therefore, we suggest that the mating competition among the males could influence reproductive efficiency by affecting the gamete preparation of the females (or decreasing the investment from the females). The data showed that the mean brood size in the couples of males and isolated females (552.7 ± 150.4 per brood) was larger than that in the controls (467.8 ± 141.2 per brood, n = 74, F1,72 = 6.56, P = 0.013), although the survival rates of the offspring were not significantly different between the two groups (n = 74, F1,72 = 0.27, P = 0.610). Furthermore, juveniles derived from the group of isolated females had a high growth rate and a wide frequency distribution of body sizes. This is partly due to the quality of the newborn offspring. Dzyuba et al. (Dzyuba et al., 2006) have shown that the condition of the parent seahorses can significantly influence offspring growth and survival rates. The frequency distributions of body sizes might therefore serve as criteria for assessing the quality of the broodstock. This is the first study of the reproductive process of young parent seahorses. Further research should evaluate the evolutionary relationship of this reproductive pattern (older broodstock with more mating experience) among fish in the family Syngnathidae, and then investigate the mechanism of potential reproduction and link the relationships among gonad development, mating competition and parental investment during the reproductive process.
Data article on the effectiveness of entrepreneurship curriculum contents on entrepreneurial interest and knowledge of Nigerian university students
The article presents data on the effectiveness of entrepreneurship curriculum contents on university students' entrepreneurial interest and knowledge. The study focused on the perceptions of Nigerian university students. Emphasis was laid on the first four universities in Nigeria to offer a degree programme in entrepreneurship. The study adopted a quantitative approach with a descriptive research design to establish trends related to the objective of the study. A survey was used as the quantitative research method. The population of this study included all students in the selected universities. Data were analyzed with the use of the Statistical Package for Social Sciences (SPSS). The mean score was used as the statistical tool of analysis. The field data set is made widely accessible to enable critical or more comprehensive investigation.
The sample consisted of university students in Nigeria. A researcher-made questionnaire, which collected data on entrepreneurship curriculum contents and students' entrepreneurial interest and knowledge, was completed by the respondents.
Experimental features: Entrepreneurship curriculum contents are one of the factors influencing the entrepreneurial development of university students.
Data source location: South-west Nigeria
Data accessibility: Data are included in this article
Value of data
• These data present descriptive data on entrepreneurship curriculum contents in universities as it relates to development of entrepreneurial interest and knowledge by undergraduates. This is geared towards development of salient entrepreneurial competencies by university students. • The results showed that the use of practical oriented curriculum contents can be very helpful for universities in designing curriculum contents for entrepreneurship education. • The results of this study can be used to improve teaching and learning practices in university entrepreneurship education.
Data
As shown in Table 1, a total of six hundred copies of the questionnaire were administered to students of the four universities selected as the pioneer institutions offering a degree in entrepreneurship in Nigeria. Five hundred and sixty-four copies of the questionnaire were retrieved, which amounted to a 94% response rate, and all of them were found usable. A total of thirty-six copies were not retrievable, which amounted to 6%.
Based on the copies of questionnaire retrieved, below is the demographic information showing the distribution based on age gender and educational qualification on Table 2. Table 2 above shows the frequency distribution of respondents' demographic data. The distribution of gender reveals that male respondents were 284(50.4%) and female respondents were 280 (49.6%). Despite the 0.8% difference between the two genders, data obtained represents a rich and balanced opinion of both genders. University 4 had the highest number of male respondents (107) representing 37.7% of the total number of male respondents and University 1 had the lowest number of male respondents (29) representing 10.2% of the total number of male respondents. On the other hand, University 2 had the highest number of female respondents (98) representing 35% of the total number of female respondents and University 1 had the lowest number of female respondents (34) representing 12.1% of the total number of female respondents. This validates the even distribution of respondents based on gender.
Age distribution
The age distribution revealed that 261 respondents (46.5%) were between 15 and 19 years of age, 270 (47.9%) were between 20 and 24 years, and 33 (5.6%) were above 25 years. The result indicates that most of the respondents were between 20 and 24 years of age (270), representing 47.9% of the total number of respondents. Universities 2 and 3 shared the highest count in this bracket, with 84 respondents each, representing 31.1% each. Respondents above 25 years were the minority, with University 4 having the lowest number of respondents in this age bracket (4), representing 0.7% of the total number of respondents. This implies that most respondents offering entrepreneurship education within the university context are between 20 and 24 years of age. It also shows that most of the respondents are young adults who can independently give informed responses.
Degree programme
Information provided by respondents in Table 2 on the degree programme of respondents shows that 397 (70.4%) were B.Sc/B.A students, 129 (22.9%) were B.Tech/Eng students, and 38 (6.7%) were B.Ed/Other students. The degree programme results revealed that most of the respondents were B.Sc/B.A students (397), followed by B.Tech/Eng students (129), with B.Ed/Other students (38) the fewest. However, the distribution of degree programmes cuts across different disciplines, which implies that the opinions of respondents from different disciplines were considered. Table 3 reveals that when respondents were asked whether a better understanding of business was achieved as a result of taking this course, most of the respondents answered positively. The analysis in the table shows that the mean scores of the respondents from Universities 1, 2, 3 and 4 are 4.3532, 4.3636, 4.0733 and 3.8443, respectively. Respondents from Universities 1 and 2 agreed more favourably with the statement, with mean scores of 4.3532 and 4.3636, respectively. This suggests that more respondents from Universities 1 and 2 consider that their understanding of business and entrepreneurship has been broadened as a result of participation in the entrepreneurship course. The results also suggest that more respondents from Universities 1 and 2, with mean scores of 4.3117 and 4.3287, respectively, are of the opinion that students have developed an interest in engaging in entrepreneurial activities based on the information and knowledge acquired from the entrepreneurship course. Findings from the descriptive statistics showed that most respondents agreed that the entrepreneurship course enhanced better understanding about business and developed entrepreneurial knowledge and skill [2]. The descriptive statistics also revealed that most respondents agreed that the course developed entrepreneurial knowledge and skills [13]. Furthermore, the findings from the descriptive statistics revealed that most respondents agreed that the course raised interest towards entrepreneurship. These results are in consonance with the findings of [5], who suggested that the design of an entrepreneurship curriculum may stimulate the development of entrepreneurial knowledge and the practice of entrepreneurship. This is in line with the work of [6] and [14], who asserted that the provision of university curricular content on idea generation has implications for the development of entrepreneurial interest and skills of learners. It also confirms the findings of [11], which showed that the design of an entrepreneurship curriculum affects entrepreneurial learning outcomes. Similar to many other studies in the field of management, the findings of this research are limited in terms of generalization, since a few universities with unique characteristics were sampled for the study.
Study area description
Nigeria is the world's eighth leading exporter of oil, and Africa's second largest economy, following South Africa [1]. Nigeria also represents 15 per cent of Africa's population, and contributes 11 per cent of Africa's total output as well as 16 per cent of its foreign reserves, accounting for half of the population and more than two-thirds of the total output of West Africa sub-region (World Population Prospects [15]). Nigeria is the most heavily populated nation in Africa, which is naturally blessed with millions of acres of arable land, thirty eight billion barrels of state oil reserves, large gas reserves, an assortment of unused and untapped mineral resources, and a wealth of manpower and human capital by reason of its estimated population of above 160 million people [4].
Entrepreneurship education programmes in Nigerian universities were the focus of this study. Specifically, the study examined the effects of entrepreneurship curriculum contents on students' development of entrepreneurial interest and knowledge. However, emphasis was laid on the first four universities in Nigeria to offer a Bachelor's degree programme in entrepreneurship. The entrepreneurship programmes in these universities were considered relevant to the context of this study because there are indications that best practices in entrepreneurship education are obtainable in these universities, and also because the main aim of the entrepreneurship programmes in these institutions is to motivate students to initiate entrepreneurial actions during the course of the programmes. Attention was given to the perceptions of students in the selected universities. This provided a basis to understand how students interpret the teaching and learning processes in entrepreneurship education and how these affect their behavioural responses and actions.
Experimental design, materials and methods
Four universities were selected from South-west Nigeria. Six hundred students in the selected universities participated in this study. Data were gathered from students across the various colleges in the selected universities with the aid of a researcher-made questionnaire based on the works of [3,7-10] and [12]. The demographic section collected information on gender, age and degree programme, and the remaining items addressed entrepreneurship curriculum contents and the development of entrepreneurial interest and knowledge. There was a meaningful relationship between entrepreneurship curriculum contents and the development of entrepreneurial interest and knowledge in the selected universities in South-west Nigeria. The collected data were coded and entered into SPSS version 22, and the analysis applied descriptive statistical tests based on mean scores.
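As an illustration of the descriptive analysis described above (the study itself used SPSS 22; the records, item names and university labels below are placeholders, not the actual data), computing per-university mean scores could look like this in Python:

```python
import pandas as pd

# Placeholder records; the actual dataset has 564 responses on a 5-point
# Likert scale across four universities.
df = pd.DataFrame({
    "university": ["Uni1", "Uni1", "Uni2", "Uni2", "Uni3", "Uni3", "Uni4", "Uni4"],
    "better_understanding": [5, 4, 4, 5, 4, 4, 4, 3],
    "entrepreneurial_interest": [4, 5, 4, 4, 3, 4, 4, 4],
})

# Mean score per university for each questionnaire item, as reported in Table 3
mean_scores = df.groupby("university").mean(numeric_only=True).round(4)
print(mean_scores)
```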
The Statistical Package for Social Sciences may be inadequate for some advanced modeling and the development of new statistical approaches; however, for a simple analysis such as descriptive analysis, its key selling point is its expansive set of data analysis options. What makes it even better is that these functions are automated to the point that one simply needs to select the applicable variables and matching procedures for output and analysis, and the package does the rest.
Conclusion and implications of the study
The propensity for university graduates to become job creators may largely depend on the extent to which the design of an entrepreneurship curriculum stimulates students' entrepreneurial interest and knowledge. This has implications for university managements and other stakeholders as regards the design of entrepreneurship curriculums. Therefore, the data described in this article is made widely accessible to facilitate critical or extended analysis.
The brain decade in debate: VI. Sensory and motor maps: dynamics and plasticity.
This article is an edited transcription of a virtual symposium promoted by the Brazilian Society of Neuroscience and Behavior (SBNeC). Although the dynamics of sensory and motor representations have been one of the most studied features of the central nervous system, the actual mechanisms of brain plasticity that underlie the dynamic nature of sensory and motor maps are not entirely unraveled. Our discussion began with the notion that the processing of sensory information depends on many different cortical areas. Some of them are arranged topographically and others have non-topographic (analytical) properties. Besides a sensory component, every cortical area has an efferent output that can be mapped and can influence motor behavior. Although new behaviors might be related to modifications of the sensory or motor representations in a given cortical area, they can also be the result of the acquired ability to make new associations between specific sensory cues and certain movements, a type of learning known as conditioning motor learning. Many types of learning are directly related to the emotional or cognitive context in which a new behavior is acquired. This has been demonstrated by paradigms in which the receptive field properties of cortical neurons are modified when an animal is engaged in a given discrimination task or when a triggering feature is paired with an aversive stimulus. The role of the cholinergic input from the nucleus basalis to the neocortex was also highlighted as one important component of the circuits responsible for the context-dependent changes that can be induced in cortical maps.
Introduction
Since the early electrophysiological studies describing sensory "maps" on primary cortical areas (vision (1), hearing (2), touch (3)) the search for brain maps has occupied center stage in system neurophysiology. Areas can be distinguished because their unique functional roles require structural specializations, unique patterns of input and output and neurons with specific ways of responding to stimulus events (4). In primates, although investigators still disagree about the precise number of sensory and motor areas, since more areas are being discovered and some proposed fields are in question, there is good agreement that the visual cortex of monkeys includes more than 30 visual areas, the somatosensory cortex has at least 10, the auditory cortex has 12 or more, and the frontal cortex contains approximately 8 to 10 motor areas (see Ref. 5). There is now overwhelming evidence that maps are not fixed or immutable in the adult brain. Studies of adult brain plasticity are revealing how the brain operates in functional compensations, in improving the performance of sensory motor skills and perceptual abilities, in allowing recalibrations of sensory systems after receptor surface damage, and in mediating recovery after central nervous system damage. As plastic changes in sensory and motor maps seem to be a critical component of learning new skills and providing the ability to make novel sensory discriminations, it has been proposed that plastic processes form a routine part of cortical computation (6).
In a recent paper, Ghazanfar and Nicolelis (7) called attention to the fact that under natural conditions animals must process spatiotemporally complex signals in order to guide adaptive behavior. The authors reflect that map plasticity not only engages "local" circuits but often a more "systemic" activation of broadly related neural structures. As an example, the work of Weinberger (8) showed at the primary auditory cortex level an enlarged representation of a tone that was aversively conditioned. Such plasticity is thought to involve the medial geniculate of the thalamus and the cholinergic system of the basal forebrain (9).
This "system" view is being more and more accepted by different research groups since cumulative evidence points to brain work in an integrated fashion. Cross-modal activity is being revealed even in primary sensory and motor areas, and emotion, attention, motivation and arousal seem to be crucial for the normal processing of both primary and non-primary regions.
In order to discuss the state of the art in the dynamic nature of sensory and motor brain maps, the Brazilian Society of Neuroscience and Behavior (SBNeC) promoted this virtual debate as part of the series in which the main achievements of the so-called "Brain Decade" are revisited. Some active researchers in the field were invited to the present symposium which was held on December 8, 2000 at a chat site provided by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The present article is a transcription of this symposium.
Eliane Volchan: We plan to discuss the following topics: 1) What is a map in the brain? What are they for? 2) How plastic can a map be? 3) Is synesthesia related to brain maps? 4) Are emotions involved in sensory and motor map dynamics? Ricardo might start talking about topography in visual maps.
Ricardo Gattass:
We propose a framework for understanding visual perception based on a topographically organized, functionally distributed network. In this proposal the extraction of shape boundaries starts in retinal ganglion cells with concentric receptive fields. This information, relayed through the lateral geniculate nucleus, creates a neural representation of negative and positive boundaries in a set of topographically con-nected and organized visual areas. After boundary extraction, several processes involving contrast, brightness, texture and motion extraction take place in subsequent topographically organized visual areas in different cortical modules. Following these processing steps, filling-in processes at different levels within each area and in separate channels propagate locally to transform boundary representations to surface representations. These partial representations of the image propagate back and forth in the network, yielding a neural representation of the original image. It has been proposed that cortical topography minimizes the total volume of axons and dendrites (10). The local computations supported by topographic maps might then favor the integration of information that preserves spatial structure. In such cases, filling-in would be accompanied by what seems like spreading interactions and spatially organized representations. Moreover, effective use of the cortical circuitry would be made. In summary, in the task of organizing the scene into context-dependent meaningful representations, the visual system relies both on local estimates of image attributes (such as contrast) and on integration mechanisms that operate in topographically organized areas (which are in some cases associated with perceptual filling-in). Integration need not be confined to a single cortical area. Instead, given the highly interconnected array of visual areas, it is likely that they cooperate in the task of integration, forming dynamic and context-dependent coalitions of areas. Thus, the multiple organizations of the stimulus that are possible are organized into a single coherent one that is used to effectively guide behavior.
Eliane Volchan: Ricardo, how do you compare topographic and non-topographic representations in a single sensory system, say vision? Ricardo Gattass: It has been proposed that different visual attributes are segregated and processed in different extra-striate visual areas (11,12), while the notion of extrastriate areas with analytical properties and inferotemporal areas with synthetic properties was proposed by Gross et al. (13). Charles Gross's model separates topographically organized extra-striate areas (analytic) from non-topographic ones (synthetic). We enhanced Charlie's model by introducing the concept of a topographic distributed network in a number of extra-striate areas (14). Such areas, containing organized maps of the visual field with different functional modules for visual processing (e.g., blobs in V1 and cytochrome oxidase-rich stripes in V2), should be contrasted with other types of organization that involve non-topographically organized structures. Examples of such non-topographic analytical areas are those of the intraparietal sulcus (LIPv and LIPd), where remapping of the visual information related to the movement of the eyes takes place (15), or those of the parieto-occipital cleft (areas PO and POd), where the visual maps are organized along the isopolar dimension and not along the isocentric dimension (16).
Eliane Volchan: Miguel, would you like to say something about motor maps?
Miguel Nicolelis: Eliane, I think I would start by saying that in the motor cortex the concept of a map is not as straightforward as in primary sensory cortices.
Eliane Volchan: What are the main differences?
Miguel Nicolelis: Well, although one can identify topographic relationships in both motor and sensory maps, in motor cortex there is a hot debate regarding what is represented there.
João Franca: Miguel, are you talking about the problem of what is represented in motor cortices (i.e., movement versus muscle)?
Miguel Nicolelis: Yes, João, that is one of the issues. The other issue is that you can activate the same part of the body by stimulating different locations of a motor cortical area. Actually, Dr. Kaas has studied both sensory and motor maps and could talk a lot about such comparisons. Jon Kaas: Maps reflect an orderly arrangement of inputs, and the inputs can represent anything -receptor surfaces, afferent inputs, or central computations. Maps reflect a global organization, but within a map there can be modular organization as in the motor maps (17).
João Franca: Do you consider the paradigm of mapping sensory versus motor areas really comparable? It seems to me that the variables of a sensory stimulus are better controlled than those of an electrical stimulation in the cortical tissue.
Jon Kaas: Motor maps are output maps, but the sensory maps are input maps. So at that level they are not strictly comparable. But the motor maps have a sensory component that can be mapped, and sensory maps may also contain motor maps.
Eliane Volchan: Jon, do we only consider as "map" the sensory responses in motor cortex (is this what you meant)?
Jon Kaas: Eliane, the organization of inputs to layer 4 determines the organization of maps in any area. Both output and input maps reflect the organization of connections. The sensory map largely overlaps the motor maps, and the sensory areas also contain motor maps. So both input and output maps are 'maps'.
Ricardo Gattass: Dr. Kaas, the visual maps of PO may reflect another type of organization other than organization of connections. This principle may apply to early areas.
Eliane Volchan: Can you explain it, Ricardo?
Ricardo Gattass: For this discussion, I will use the data of Neuenschwander et al. (16) on the visuotopic organization of visual area PO to trace a parallel between map and function. The visual field representation in areas PO and POd is complex. These areas have an almost complete representation of the contralateral visual hemifield, which is more regularly organized in the isopolar dimension. At present, we have no direct evidence regarding the functional significance of the observed order in the isopolar domain versus disorder in the isoeccentric lines present in PO and POd. One may speculate, however, that centrifugal and centripetal organizations of directionality, such as those observed in area PG (18,19), demand interactions between neurons that analyze regions of space sharing a similar polar angle but with different eccentricities. Also, during forward egocentric motion, the angle of gaze is the center of the expanding flow field that shows some constancy in the isopolar domain while presenting large changes in the isoeccentric domain. It may be that the intermixing of eccentricities of receptive fields in adjacent columns in PO and POd allows these interactions to occur by local circuits. Thus, the visual maps of PO and POd may be the consequence of different computational strategies used by these areas. The homogeneous representation of the visual field periphery in PO and POd may provide the essential inputs mediating flow-field perception during animal locomotion. In addition, stimulation of the visual field periphery was shown to dominate the perception of self-movement, even when antagonistic moving images are presented at the center of gaze (20). Therefore, one could suggest that these areas with little central representation might be involved in visuomotor integration and perception of global spatial relationships. Interestingly, PO, POd and PG were shown to project to the pontine nuclei, which relay information to the part of the cerebellum involved in motor planning (21).
Miguel Nicolelis: Ditto. I would just add that maps reflect one of the many potential levels of organization of a particular cortical area. Thus, within this global structure one can identify other principles of organization. Wow! That was quite a statement Ricardo. Brazilian style!! Cesar Timo-Iaria: The last comment by Dr. Gattass is very interesting and I would like to add a few words to it. I have a theory to explain not only the incredibly complex network of pathways that go to or leave the cerebellum but also the involvement of this organ in all kinds of neural functions, from modulating motoneurons (a subject I studied in the sixties) to influencing learning, late components of event-related evoked potentials in neocortical areas and conscious functions, and to intervening in sleep physiology (as we are finding currently). The cerebellar cortex is well known to receive information from all sensory channels, including the carotid sinus baroreceptors. It receives a huge projection from all cortical sensory and motor fields, as well as from the basal ganglia, and sends information to all these areas. We might reason that when a given system programs a behavior it sends the program to the cerebellum, which in turn receives from the effector organs information about how the program is being performed; the program and its performance are dynamically compared on-line and the discrepancies are corrected by the cerebellar efferent pathways to the effector neurons, including the vegetative components of any behavior. In 1938, Moruzzi (22) demonstrated that electrical stimulation of the anterior lobe of the cerebellar cortex inhibits the baroreceptor reflex, which is known to be involved in the blood pressure increase as a component of most behaviors. The classic signs that unveil a cerebellar lesion in humans, namely poor performance on the finger-nose test, the characteristic "cerebellar" speech, the awkward and unstable walking, as well as astasia and abasia, are fully accounted for by the theory. The finger-nose test, like speech, walking and fundamental posture, can still be performed when the cerebellum is lesioned, but quite imperfectly; the core of the behavior is there but not the corrections.
Eliane Volchan: How far does the motor overlap go in sensory systems? Up to primary areas? Jon Kaas: Eliane, all areas have subcortical outputs, but you cannot always elicit movement easily. For example, all visual areas project to the superior colliculus to influence eye movements, but that influence may not be apparent during motor mapping experiments.
Claudia Vargas: Miguel, about motor maps: Could the temporal dynamics of plasticity in motor maps be much more related to what movements are actually being executed? Or can a neuron in the motor cortex have a multiple representation of movement?
Miguel Nicolelis: Claudia, there is nice evidence suggesting that single neurons in the motor cortex are definitely involved in the control of multiple muscles (23). I believe that these neurons may participate in the computation of many motor parameters simultaneously. Motor cortical plasticity also seems to be driven by changes in the animal's experience, so in that sense changes in motor experience are an important driving force for plastic reorganization in the motor cortex. In essence I was trying to say that I agree with you. The dynamics one observes in motor maps seems to reflect the type of movements made by the animal, and this inherent dynamics provides the substrate, the driving force for plastic reorganization.
João Franca: It is already well established that brain maps, even in primary sensory areas, change as a result of lesions or learning (for a review, see Ref. 6). But what are the actual mechanisms of brain plasticity? Are they the same for all cortical areas? Back to the case of motor maps: If new movements imply new representations in motor areas, are they instantaneously generated by the cortical circuitry? How timedependent are the changes involved? What are the mechanisms involved in these changes?
Miguel Nicolelis: João, we know that motor plasticity is not necessarily limited to the production of new movements. It can also be reflected in the acquired ability of arbitrarily associating sensory cues with different movements. In this scenario, the movements are the same, but the combination of sensory cues that triggers them has changed. In this type of learning, also known as conditional motor learning, the output does not change, only the arbitrary sensory-motor mapping does (for a review, see Ref. 24).
Claudia Vargas: Jon, but what about the maps "of internal use"? Those that are used to build the re-representations? For instance, those in non-topographic analytical areas cited by Ricardo? Jon Kaas: Claudia, can you expand on your question please?
Claudia Vargas: In line with Damasio's work (25), what is the relationship between the sensory maps and the subjective body representation? How does it relate to the online corporal awareness?
Eliane Volchan: A recent study by Damasio's group (26) has shown that lesions to the somatosensory cortex impair the visual recognition of a facial emotional expression. The authors discuss the idea that this recognition may not necessarily need the production of facial mimicry by the observer, in line with Damasio's idea of "as-if" loops (27).
Cesar Timo-Iaria: When we pay attention to anything, either in the outside world or in the inside mental world, we always produce a motor output, at least the ocular muscles do. We always at least immobilize the eyes when we are paying attention, including when we are calculating or building up a phrase or a thought. I think this motor component is enough to characterize paying attention or thinking as specific behaviors.
Claudia Vargas: Jon, in some of the above mentioned dorsal stream areas, for instance, the map of the surface representation seems to be space-transformed so as to allow appropriate effector movements. In those areas, space is coded in body-centered frames of reference, anchored now to the motor output (for reviews, see 28,29). Jon Kaas: Claudia, after reorganization the sensory map may often differ from the perceptual map (30). Probably feedback is used to align sensory and perceptual maps.
Eliane Volchan: Jon, what do you mean by perceptual map? Jon Kaas: Eliane, all I mean is that within a map if you stimulate, you get perception of the receptor surface that normally corresponds to the sensory map. So you could ask a person what she feels and get a perceptual map, or you could generate a sensory map by stimulating the peripheral receptors. Perception involves whole systems and many maps (17). When you electrically stimulate one map you are provided output to other maps.
Claudia Vargas: Jon, do you mean that subjective corporal awareness would be more related to perceptual maps? Is there a second level of representation there, which includes the primary maps? On which grounds? A map of maps? Ricardo Gattass: Jon, how do you reconcile perceptual maps with the absence of topography in synthetic areas such as the inferotemporal cortex? Where are the "internal models" located?
Miguel Nicolelis: Ricardo, my only point in response to your question is, why does one need to equate topography with perception? The olfactory cortex is not topographically organized and yet rodents and primates can smell bad food very fast!! Aniruddha Das: Claudia, I suppose also that we should think of these perceptual maps really as a model for actions that the animal is planning, i.e., not as "passive" analysis of input per se (many people take it as an article of faith that this is the most important aim of the sensory system, and pure sensory perception is only a partial answer). So, in that sense, I don't know what you might be thinking of as a "map of maps", but one should consider that the "model" might be at this level of mapping onto the desired action.
Claudia Vargas: Yes, Aniruddha. So in this sense, maybe we could think about a similarity in the plastic modulation of sensory and motor systems, based on the planning of the "desired action" and then we go to the last point Eliane has mentioned, that is, to what extent is there an emotional modulation over the sensory and motor systems.
Ricardo Gattass: Aniruddha, before we finish please comment: Our proposal for cortical computation emphasizes the network of connections that engage topographically organized areas. Feed-forward connections topographically link areas, while a much more disperse feedback arrangement is also present at all levels, including the striate cortex (V1). According to the present proposal, distinct forms of perceptual completion engage different portions of the widely distributed processing network. For example, line completion of the type triggered by the large bar stimuli of the Fiorani Junior et al. (31) study most likely receives a central contribution from striate cortex circuits, as revealed by the cell responses obtained in this area. At the same time, boundary completion associated with illusory contour stimuli probably involves V2 circuits, as revealed by the studies of von der Heydt and Peterhans (32). These two forms of boundary completion should be contrasted with feature completion, such as brightness and texture filling-in. The results from the work of De Weerd et al. (33) strongly suggest that V2 and V3 (but not V1) mediate texture filling-in. At the same time, brightness filling-in seems, at least in part, to involve the striate cortex (34). However, the view we advance here is not "one effect-one area", but rather that a distributed network of topographically organized areas is involved. So, for example, V2 and V3 circuits are important for texture filling-in. At the same time, while V1 was implicated in brightness coding by Rossi et al. (34), several other areas are likely to be involved too, such as V4, whose cells display sophisticated color-coding properties. Finally, although illusory contours trigger responses in V2, recent evidence suggests that other areas may be involved also (such as V4).
Aniruddha Das: Hi, Ricardo -I agree with you that one must consider this combined feed-forward and feedback process, and not "one area-one effect". Many of these higher order perceptions might be "triggered" in the extra-striate cortex, in V2 for illusory contours, etc., but presumably with a feedback to V1 and you could possibly see consequences of that in increased firing or temporal correlation. For example, even with such higher order effects as attentional modulation, you do get distinct effects in V1 at the single cell level (35) (that interacts with contour integration in V1) as well as in the general increase in "background/spontaneous" activity that shows up in functional magnetic resonance imaging studies of V1 activity with difficult tasks (36,37).
Eliane Volchan: We have been dealing with the first and second questions. Maybe we could go on to synesthesia. I add the following question: Ramachandran and Rogers-Ramachandran (38) have shown that amputees can have a "vivid" somatic perception of a phantom arm by visualizing it through a mirror. Are proprioception and vision synesthetic senses in humans? Do other animals have other combinations of senses, like olfaction and vision or hearing and vision?
Miguel Nicolelis: Eliane, I think that there is no doubt that the network of cortical areas and thalamic nuclei that form the somatosensory system contains an internal model of the body. This "internal model" seems to be crucial to define the limits of the body and relationships between our bodies and extrapersonal space. I think the phantom-limb phenomenon illustrates what may happen when this "internal model" is confronted with the lack of input from a part of the body that should be there. This may create a mismatch that creates the illusion that the missing limb is still there! Jon Kaas: Eliane, the perception of the phantom in cases with congenital missing limb can perhaps answer this question (39,40), but the claim that phantoms exist in such cases is still very controversial.
Miguel Nicolelis: Eliane, it is really difficult to judge Ramachandran's experiments. These are pretty subjective descriptions and I am not sure I would build a whole theory based on them. Nonetheless, as you said, this would involve a second sensory modality, vision, reinforcing a somatosensory percept. In this case, I am not sure what the subject is actually feeling, but this description would suggest that the sense of vision could contribute to the definition of an internal body image.
Eliane Volchan: Miguel, but when the person sees the limb in a mirror the phantom becomes much more real including its movements and there is no mismatch but instead a match that can be the same as in normal people (see Ref. 41). That is why I talked about synesthesia. The work on the interaction of visual and kinesthetic perception of normal limbs also highlights the idea of synesthesia (42). It is also in line with recent data obtained with normal subjects indicating the lack of tactile discrimination when the visual cortex is inhibited (43) and activation of the visual cortex during blind-folded tactile discrimination (44).
Miguel Nicolelis: Eliane, it would be really interesting to find out what happens after amputees have interacted with the mirrors for several hours. I would predict that their phantoms go away! What do you think? João Franca: Miguel, this is actually what happens. After a few weeks of training, some subjects report a "telescoping effect". The phantom limb diminishes, but the more represented parts (i.e., the fingers in the case of an amputated arm) remain attached to the stump (45).
Eliane Volchan: We could address the last question now on the role of emotions in sensory and motor map dynamics. Weinberger (8) showed that the tonotopic map in A1 changes by pairing a tone with an aversive stimulus, i.e., the cortical representation of that tone expands. Is this modulation present in other primary sensory cortices? Is it similar for all mammals?
Eliane Volchan: Aniruddha, do you think the representation of one coordinate in V1 could be magnified if this region is paired with an aversive stimulus? Aniruddha Das: Eliane, Yves Fregnac and colleagues (46,47) have shown a whole range of similar effects in V1 -where you get a change in the orientation tuning by pairing a particular orientation with stimulation (they used extracellular electrical stimulation, for example), again, you get a shift in the peak orientation tuning but not a bodily shift in orientation tuning, similar to Norman Weinberger's results.
João Franca: Aniruddha, keep in mind the paradigm of enlarging the S1 representation of a skin patch after training a monkey in a tactile frequency discrimination task (48). Do you think that the receptive field properties of V1 cells (such as orientation tuning) could be similarly changed by training? Aniruddha Das: João, it's conceivable, and one point that has come out, for example in plasticity in V1, is that the effects seen on individual neurons depend strongly on the precise task used and you only see them manifested when the animal is engaged in the same task. So, for example, Roy Crist and colleagues (49,50) showed that you could train animals (including humans) to get better at a 3-line bisection task but not a hyperacuity task, even though the two tasks use the same vertical lines in the same region of the cortex. Later, the same subject can be independently trained to get better in the hyperacuity task. Further, by working with animals, Roy showed that a) after training with a 3-line bisection task, the cells in the trained hemisphere showed a change in their modulation, i.e., now showed facilitation rather than suppression with parallel flanking lines. This was true only in the trained region and not in an untrained region; b) very interestingly, these cells showed this effect only when the animal was engaged in the same 3-line bisection discrimination task. The same stimulus pattern, presented during passive viewing, showed no such change in the modulation. So, the changes are both task-specific, and somehow controlled so as to come into effect only within a particular task context.
Eliane Volchan: Dr. Weinberger, we were discussing your results on the magnification of the tone paired with an aversive stimulus in the primary auditory cortex and wondering if the same could happen with other systems.
Norman Weinberger: We study the primary auditory cortex (A1) not the visual cortex (V1). So far, researchers have found similar effects in the somatosensory cortex. I have discussed the visual cortex with some visual physiologists, who are split. Some seem to think V1 is not plastic but I think this reflects unwarranted assumptions. Certainly, studies of visual learning implicate V1 (51). The learning experiments are simple to do in animals and should be done.
Eliane Volchan: Norman, Aniruddha mentioned experiments by Roy Crist and Charles Gilbert in the primate visual system but none was done with an "emotional" stimulus.
Norman Weinberger: I cannot access previous messages for some reason, but I will add a comment on the issue of what parts of the cortex are plastic in learning. There is good reason to believe that the nucleus basalis cholinergic projections to the cortex enable retuning and other plasticities under circumstances of learning (8). There is every reason to believe that all areas of the cortex can participate. We selected the auditory cortex because of tactical advantages not because it was thought to be uniquely plastic. João Franca: This is interesting. The nucleus basalis does not project uniformly to all cortical areas. Actually, occipital (visual) areas receive less cholinergic projections than the parietal and frontal areas (52). Does it mean that plasticity shall be different in these different sensory areas (i.e., visual versus somatosensory or auditory areas)? Norman Weinberger: It is known that V1 is subject to modulation by acetylcholine. I think we need to be careful in associating the amount of anatomical projections with the amount of physiological influence. McGaugh and Sarter (53) have shown that saporin in visual cortex produces specific degeneration in the nucleus basalis and specific deficits in what he refers to as visual attention.
João Franca: Norman, the differential distribution of cholinergic projections from the nucleus basalis is followed by a heterogeneous distribution of nitric oxide (NO)-producing neurons (54,55). In addition, we showed that there are fewer NO neurons in visual areas compared to somatosensory areas (56). Since NO is crucial for some forms of synaptic plasticity (57)(58)(59), I wonder if such anatomical/histochemical heterogeneities could be reflected in the physiological dynamics of sensory maps. Norman Weinberger: I definitely think that could be the case... I hope you can do some of the sensory map or preferably receptive field studies within the context of NO.
Norman Weinberger: There must certainly be limitations and constraints on plasticity. My guess is that plasticity is much greater than thought but we do not yet have an overall theory that would allow precise predictions about which stimulus parameters would be affected in which tasks. The issue of selective engagement of previously developed changes is critical, as pointed out in this discussion. Behavioral studies have long shown that many aspects of learning and memory are context dependent. From the brain's point of view, there is probably no such thing as a stimulus, but simply the totality of afferent sensory influx. Thus, training to a given stimulus may have contextual bounds. A simple example is that our animals do not undergo extinction during determination of receptive fields, although they can extinguish if the same conditioned stimulus tone is presented within the same training context. The context of receptive field determination is completely different. A paper by Diamond and Weinberger (60) addresses this issue.
Eliane Volchan: Norman, that is really interesting since I have been wondering how to test the receptive fields without inducing extinction. Was the animal awake during the recordings?
Norman Weinberger: Receptive field plasticity can be read-out under Nembutal or ketamine if the conditioning first took place in the waking state. All of this is well described in our papers (60)(61)(62)(63)(64)(65).
Eliane Volchan: Norman, do you think the direct amygdalocortical projections play a role in the conditioning paradigm? Norman Weinberger: With respect to the amygdala, it is actually interesting that, e.g., fear conditioning does not require the basolateral amygdala. There is the widespread but not substantiated belief that fear memories are stored in the basolateral amygdala, a position strongly taken by Joe LeDoux (66). In any event, there is not much doubt that the amygdala plays a very important role in the modulation of the strengths of memories, as well attested to by the work of Jim McGaugh and others (53,67). However, such influences require a cholinergic link, still under investigation.
Eliane Volchan: Norman, back to fear conditioning. The problem in the visual system of non-primates is that we cannot repeat the experiments you did (previous conditioning) because the animal cannot be trained to fixate. I was wondering if a loud sound paired with a visual stimulus would work in an anesthetized and paralyzed preparation, given that changes in heartbeats could be tracked concurrently.
Norman Weinberger: Eliane, I think the main problem is anesthesia. You might be able to induce plasticity in anesthesia by stimulating the nucleus basalis. You might check the work of Dykes (68) in the somatosensory cortex on this. Then, you could control the visual stimuli. It is difficult to establish learning/plasticity in anesthetized subjects and of course there is no behavioral read out. But we did induce fear conditioning in rats with Paul Gold (69,70), when they were under deep barbiturate anesthesia, by giving a small dose of peripheral epinephrine. Days after recovery, subjects exhibited fear.
Norman Weinberger: I am sorry I could join only late. However, I would be most happy to correspond with everyone individually, at any time.
João Franca: I would like to congratulate the SBNeC for this innovative and low-cost initiative of promoting these virtual discussions.
Bayesian learning scheme for sparse DOA estimation based on maximum-a-posteriori of hyperparameters
Received Jul 22, 2020; Revised Oct 23, 2020; Accepted Dec 18, 2020.
In this paper, the problem of direction of arrival estimation is addressed by employing a Bayesian learning technique in the sparse domain. The paper deals with the inference of sparse Bayesian learning (SBL) for both the single measurement vector (SMV) and multiple measurement vector (MMV) cases and its applicability to estimating the arriving signal's direction at the receiving antenna array, here taken to be a uniform linear array. We also derive the hyperparameter updating equations by maximizing the posterior of the hyperparameters and exhibit the results for nonzero hyperprior scalars. The results presented in this paper show that the resolution and speed of the proposed algorithm are comparatively improved, with an almost zero failure rate and minimum mean square error of the signal's direction estimate.
INTRODUCTION
Direction of arrival (DOA) estimation is a well-known problem in the field of array signal processing, where the angle of arrival (direction) of the signal at the receiver needs to be estimated from the knowledge of the received signal itself. This problem has attracted many modern researchers because of its wide range of applications in fields such as RADAR, SONAR, seismology, wireless mobile communication and others. To solve this problem, an array of antennas with a linear or non-linear structure having uniform or non-uniform antenna spacing can be used at the receiver. The signals generated by the far-field sources arrive from a particular direction and impinge on the antenna array. The received signals from the antenna array of 'M' sensors form the under-sampled observed signal samples. These observed signal samples contain the direction information and hence are processed to estimate the signal source direction [1]. Over the past two decades, many algorithms have been derived to solve the problem of DOA estimation. These algorithms can be broadly classified into: i) conventional methods, ii) subspace methods, iii) sparse methods.
The standard MUSIC algorithm proposed in [2,3] decomposes the signal and noise subspaces and uses the multiple signal classification methodology to estimate the number of signal sources as well as the spatial spectrum of the received signal. The drawback of this algorithm is that it fails for coherent signal sources. In [4], an improved and modified MUSIC algorithm is proposed, employing matrix decomposition to address the case of coherent signal sources, but the performance of this algorithm deteriorates in the low SNR region. In [5,6], the performance of these standard subspace-based DOA estimation algorithms is analyzed, and it is found that these techniques offer good speed and low complexity but suffer from low resolution, high sensitivity towards correlated signal sources and high MSE.
In recent years, after the emergence of sparse signal representation, several algorithms were derived as solutions to the DOA estimation problem by representing it as a sparse signal recovery problem, exploiting the sparse nature of the signal to be estimated [7]. The sparse algorithm proposed in [8,9] is based on the orthogonal matching pursuit (OMP) technique, which suffers from low performance and requires knowledge of the number of sources. In [10], a convex relaxation with an l1-norm penalty is applied to mitigate perturbation effects. In [11,12], an l1-SVD based algorithm along with re-weighted l1 minimization is proposed to reduce the complexity of DOA estimation, but it performs poorly for coherent and closely spaced signal sources. The compressive sensing based DOA estimation algorithms proposed in [13][14][15][16] rely on simple least squares minimization, which offers good MSE and resolution but suffers from high complexity. The performance of BP and OMP depends on the array-steering matrix [9] and degrades for a highly correlated array-steering matrix in the case of the DOA problem. The scaling/shrinkage operations in convex relaxation may reduce variance for an increase in sparsity, or vice versa.
More recently, Bayesian methods such as maximum a posteriori (MAP) [17,18] and maximum likelihood (ML) estimation [18], as well as iterative re-weighted l1 and l2 algorithms, have been applied to solve the DOA estimation problem. These Bayesian algorithms suffer from high MSE even though true priors are used. In [17], MAP only guarantees maximization of the product of the likelihood and the prior of the unknown sparse signal. The ML estimate in [18] maximizes only the likelihood function by assuming the prior of the unknown to be equally likely to occur.
Sparse Bayesian learning (SBL) with the relevance vector machine, proposed by Tipping in [19] and reformulated by Wipf in [20] for the linear regression/sparse signal recovery problem, opened a broad avenue with higher-performance results in the research on sparse signal recovery. In this paper, we present the detailed inference of sparse Bayesian learning and its applicability to the DOA problem using an on-grid approach. We also derive the updating equations for the hyperparameters by maximizing the posterior of the hyperparameters for nonzero hyperprior scalars.
The remainder of the paper is organized as follows: Section 2 describes the signal model used for DOA estimation with a uniform linear array. Section 3 describes the basics and inference of the sparse Bayesian learning technique. Section 4 describes the updating of the hyperparameters of the SBL estimate by proposing a maximum-a-posteriori method for the hyperparameters. Section 5 summarizes the proposed algorithm. In Section 6, the results and performance analysis of the proposed algorithm are presented. Finally, Section 7 concludes the paper.
SIGNAL MODEL FOR SPARSE DOA ESTIMATION
Consider D arriving source signals s(n) = [s1(n), s2(n), ..., sD(n)]^T impinging on a uniform linear array of M sensors with a uniform spacing d ≤ λ/2, where λ is the wavelength of the arriving signals. Let y(n) = [y1(n), y2(n), ..., yM(n)]^T be the M x 1 vector of observed signal samples received by the M antenna array sensors. For simplicity, assuming a single snapshot (single measurement vector), i.e., n = 1, the problem of direction of arrival estimation can be modeled as in (1):

y(n) = A x(n) + w(n),    (1)
where A is the M x N array steering matrix given in (2) and a(θi) represents the atom for a particular direction angle θi. To search the entire angular space for the DOA, a grid of N candidate angles is considered. Each atom a(θi) is an M x 1 antenna array steering vector, given in (3).
Here β = 2π/λ, x(n) = [x1(n), x2(n), ..., xN(n)]^T is the N x 1 signal vector that needs to be estimated to find the source signal directions, and w(n) = [w1(n), w2(n), ..., wM(n)]^T is the M x 1 antenna array noise vector. The estimated x(n) values are estimates of the signal powers s(n), to which they are related by (4). The model in (1) is therefore a sparse signal recovery problem from the under-sampled measurements y [13].
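As a concrete illustration of this on-grid model, the NumPy sketch below builds a steering matrix for a half-wavelength ULA over a 0.5-degree grid and simulates one noisy snapshot. The grid, the source angles, the noise level and the phase-sign convention of the steering vector are illustrative assumptions, not values prescribed by the paper.

import numpy as np

def steering_matrix(M, grid_deg, d_over_lambda=0.5):
    # Column i is the assumed atom a(theta_i): exp(-j*2*pi*(d/lambda)*m*sin(theta_i)), m = 0..M-1.
    theta = np.deg2rad(np.asarray(grid_deg))
    m = np.arange(M).reshape(-1, 1)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(theta))

M = 10                                          # number of array sensors
grid_deg = np.arange(-90.0, 90.5, 0.5)          # N = 361 on-grid search angles
A = steering_matrix(M, grid_deg)                # M x N steering matrix

true_doas = np.array([-10.0, 10.0, 64.0])       # degrees (values used in the experiments)
x = np.zeros(len(grid_deg), dtype=complex)
x[np.searchsorted(grid_deg, true_doas)] = 1.0   # unit-power sources placed on the grid
sigma2 = 0.1                                    # assumed noise variance
w = np.sqrt(sigma2 / 2) * (np.random.randn(M) + 1j * np.random.randn(M))
y = A @ x + w                                   # single-snapshot model y = A x + w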
SPARSE BAYESIAN LEARNING INFERENCE
Consider the single-snapshot DOA estimation problem as in (1). The entries of w(n) are independent noise samples, assumed to come from a zero-mean Gaussian random process with noise variance σ². By Bayes' theorem [19], the posterior of the unknown x given the observed antenna array signal y can be expressed as in (5):

P(x|y) = P(y|x) P(x) / P(y).    (5)
Here P(y|x) is the likelihood of the observed data given the unknown parameter x, which is assumed to be P(y|x) = N(y | Ax, σ²), where the notation N(·) denotes a Gaussian distribution over y with mean Ax and variance σ². This follows from the assumed independence of the samples of y. Thus the likelihood function of y is given in (6).
The prior of the unknown x is also assumed to be a zero-mean Gaussian distribution over x with variance γ [19,20]. The Gaussian prior of a single sample of x (i.e., xi) is given in (7).
The overall Gaussian prior for all i = 1 to N is given in (8).
To define the prior of the unknown x we thus require another parameter, γ, the variance of the unknown x; γ can therefore be called the vector of hyperparameters of the unknown x [21,22]. Hence, to completely define all the distributions, the hyperparameters γ and the noise variance σ² need to be estimated, which can be done by defining the hyperpriors of γ and σ² as in (9) and (10).
We have chosen the gamma distribution because the hyperparameters γ and σ² are scale parameters [23,24]; here Γ(·) denotes the gamma function and a, b, c, d are the hyperprior parameters. After defining the likelihood and the priors, (5) can be re-written as in [19].
In (14) we also obtain the prior of the observed array received signal vector,
with Σy = (σ²I + AΓ⁻¹Aᵀ) the prior covariance of the observed array received signal vector. Solving (16) and (17) for known values of the hyperparameters γ and σ² gives the mean and covariance of the posterior of the unknown x, respectively. The posterior mean of the unknown x is itself the estimate of the unknown x, i.e., x̂ = μ; plotting this estimate over the on-grid search angles gives the DOA peaks, and hence the arriving signal source's direction can be estimated [25]. In practical situations the hyperparameters γ and σ² are unknown, and no closed-form expressions can be obtained for them [26]. Hence, an iterative estimation of the hyperparameters γ and σ² has to be performed.
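A minimal NumPy sketch of this posterior computation is given below. It uses the standard SBL expressions for the posterior covariance and mean, with gamma holding the prior variances; these expressions are intended to play the role of equations (16)-(17), which are not reproduced verbatim above, so the exact form should be treated as an assumption.

import numpy as np

def sbl_posterior(A, y, gamma, sigma2):
    # Standard SBL Gaussian posterior (sketch):
    #   Sigma_x = (A^H A / sigma2 + diag(1/gamma))^{-1}
    #   mu      = Sigma_x A^H y / sigma2
    Sigma_x = np.linalg.inv(A.conj().T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma_x @ (A.conj().T @ y) / sigma2
    return mu, Sigma_x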
MAXIMUM A POSTERIOR OF HYPERPARAMETERS
To iteratively estimate the hyperparameters, namely the prior variance of the unknown and the noise variance σ², we maximize the probability function P(γ, σ² | y) given by (19):

P(γ, σ² | y) ∝ P(y | γ, σ²) P(γ, σ²).    (19)

The hyperparameters γ and σ² are mutually independent, and the probability of the known measured array received signal vector is a constant. Thus, maximizing (19) is equivalent to maximizing (20) with respect to γ and σ²:

P(γ, σ² | y) ∝ P(y | γ, σ²) P(γ) P(σ²).    (20)

Since in practice we assume uniform hyperpriors over a logarithmic scale, with the derivatives of the hyperprior terms going to zero, we choose to maximize the logarithm of (20) with respect to log γ and log σ². The logarithm of (20) is given by (21).
The variance of unknown 'x'
Differentiating (23) with respect to log γi gives (24). Setting (24) to zero and applying the MacKay fixed-point argument [27] leads to the update in (25).
The noise variance
Differentiating (23) with respect to log σ² gives (26), in which the quantities (1 − γi⁻¹ Σx,ii) appear. Setting the derivative in (26) to zero and re-arranging the terms gives the σ² update in (27).
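The sketch below implements MacKay-style fixed-point updates for the flat-hyperprior case (a = b = c = d = 0). The paper's updates (25) and (27) for nonzero hyperprior scalars contain additional correction terms that are not reproduced here, so this block is a simplified stand-in rather than the exact rules derived above.

import numpy as np

def update_hyperparameters(A, y, mu, Sigma_x, gamma):
    # MacKay "well-determinedness" factors: lam_i = 1 - Sigma_x[i,i] / gamma_i.
    lam = 1.0 - np.real(np.diag(Sigma_x)) / gamma
    gamma_new = np.abs(mu) ** 2 / np.maximum(lam, 1e-12)
    r = y - A @ mu                                     # residual of the current fit
    M = A.shape[0]
    sigma2_new = np.real(np.vdot(r, r)) / max(M - lam.sum(), 1e-12)
    return gamma_new, sigma2_new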
THE PROPOSED ALGORITHM
For the single-snapshot case, initialize the noise variance σ² and the prior variance γ of the unknown x to some value (usually 1). Using (17) gives the covariance estimate, and then (16) gives the first iterative estimate of μ. Before performing the second iterative estimation of Σx and μ, update the hyperparameters γ and σ² using (25) and (27), where the parameters/variables in those equations take their first-iteration values. Now, using these updated values of γ and σ², estimate the second-iteration values of Σx and μ. Repeat these steps until a stopping criterion is met. In this iterative process, some elements of the μ vector tend toward very small values (i.e., below a preset threshold); setting these elements to zero yields the sparsity of the solution.
For the case of L snapshots (the multiple measurement vector, MMV, case) the same procedure can be used, except that the posterior mean of the unknown x (i.e., μ) is a matrix in which each row corresponds to a particular on-grid search point of the angle of arrival. For the MMV case, each row of μ is summarized by the mean of the squared magnitudes of all the elements of that row. The μ estimate obtained at the final iteration of the proposed algorithm is plotted versus the search grid of angles of arrival; the peaks of this plot indicate the estimated directions of arrival. The proposed DOA estimation algorithm based on sparse Bayesian learning with maximum a posteriori estimation of the hyperparameters (SBL-MAP-H) for the MMV case is summarized in Table 1.
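A sketch of the resulting single-snapshot loop is shown below; it reuses the sbl_posterior and update_hyperparameters helpers from the previous sketches, and the iteration count and pruning threshold are assumed values rather than ones specified in the paper.

import numpy as np

def sbl_doa(A, y, n_iter=200, prune_tol=1e-6):
    M, N = A.shape
    gamma = np.ones(N)                               # initialize prior variances to 1
    sigma2 = 1.0                                     # initialize noise variance to 1
    for _ in range(n_iter):
        mu, Sigma_x = sbl_posterior(A, y, gamma, sigma2)
        gamma, sigma2 = update_hyperparameters(A, y, mu, Sigma_x, gamma)
        gamma = np.maximum(gamma, prune_tol)         # keep the variances strictly positive
    spectrum = np.abs(mu) ** 2
    spectrum[spectrum < prune_tol] = 0.0             # set tiny components to zero (sparsity)
    return spectrum                                  # peaks over the angle grid mark the DOA estimates

For the MMV case, y would be replaced by the M x L snapshot matrix and the returned spectrum by the row-wise mean of the squared magnitudes of the posterior mean matrix, as described above.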
RESULTS AND DISCUSSION
In this section, the experimental results of the proposed algorithm are presented for different settings of the various algorithmic parameters. The algorithm was simulated on the MATLAB R2013a platform. The simulation results of the proposed algorithm are compared with standard DOA estimation algorithms such as MUSIC [2,3,28], MVDR [29,30] and the recent standard algorithm l1-SVD [11,12]. Consider a uniform linear array (ULA) of M=100 array elements with an inter-element spacing of λ/2, where λ is the wavelength of the received signal, assumed to be 1 m. Assume a single signal source transmitted from the far field with a direction of 0° with respect to the vertical normal axis and an angular frequency of 20π rad/s impinging on the ULA. The proposed algorithm uses a set of on-grid points for searching the direction of the arriving signal with a 0.5° step size. The proposed algorithm is simulated for L=500 snapshots in a noisy environment with an SNR of 30 dB. The hyperparameter updating depends on the hyperprior parameters (a,b,c,d). These parameters strongly influence the DOA estimation results, as shown in Figure 1. For a,b,c,d equal to zero, the DOA estimation peak is less steep than the peak obtained for a,b,c,d equal to 0.4. The algorithm was also tested with various other values of a,b,c,d, and it was found that any choice 0<a,b,c,d<0.5 gives the steepest estimation peaks, with a maximum only at the actual angle of arrival of the received signal and a completely flat response at all other grid points. Very high values of a,b,c,d increase the sparsity of the estimated results, and in some cases with weak signal strength the true DOAs may no longer show estimation peaks. Hence the range 0<a,b,c,d<0.5 is the optimal choice for the DOA estimation application. All subsequent analyses take the a,b,c,d parameters as 0.4. Consider M=10, N=361 search grid points, L=100, and D=3 signal sources with actual true DOAs of -10°, 10°, 64° and corresponding angular frequencies of 20π, 40π, 60π rad/s, respectively, in a noisy environment with an SNR of 0 dB. Figure 2 shows the DOA estimation peaks for the proposed algorithm as well as the various standard DOA estimation algorithms. It can be observed that even though the SNR is very low (i.e., a heavily noisy environment), the proposed algorithm shows sharp DOA estimation peaks at the actual true DOAs.
For the same parametric conditions, a single-snapshot case with L=1 also gives sharp DOA estimation peaks at the actual true DOAs, with a 100% success rate, as shown in Figure 3. Figure 4 shows the case of M=100 array elements, for which the performance of the proposed algorithm is almost the same as for M=10 in terms of mean square error. In the case of very closely spaced source signals with actual true DOAs of 10° and 11°, the proposed algorithm still produces steeper and clearly distinguished DOA peaks compared with the other standard algorithms, as shown in Figure 5. Figure 6 shows the case of two very closely spaced coherent signal sources with an angular frequency of 20π rad/s located at 0° and 1°. The result in Figure 6 exhibits the high-resolution performance of the proposed algorithm compared with the other algorithms. Because the proposed algorithm employs the probability of the measured antenna array signal given the prior of the unknowns, good resolution is obtained even for coherent signal sources. The effect of array sensor noise added to the received signal for hyperprior parameters a,b,c,d=0 is shown in Figure 7. For L=50, M=100 and a single source signal with actual true DOA of 0°, the DOA peak becomes steeper and the mean square error decreases as the SNR improves. Figure 8 shows the effect of the number of snapshots L on the DOA estimation peaks for SNR=10 dB, M=100 and an actual true DOA of 0°. As the number of snapshots increases, the DOA peaks become steeper and the performance improves. As the number of array elements in the ULA increases, the DOA estimation performance improves along with the estimation success rate, as shown in Figure 9. For a single source arriving at an actual true DOA of 0° with L=50 and M=100, the performance of the various standard algorithms is compared with the proposed algorithm in terms of mean square error versus signal-to-noise ratio. As seen in Figure 10, the proposed algorithm shows lower MSE over the whole SNR range when compared with the other standard DOA estimation algorithms. The proposed algorithm also exhibits a very low failure rate with respect to SNR, as shown in Figure 11.
The execution time consumed by the proposed algorithm is considerably higher than that of MUSIC and other subspace-based algorithms. However, the proposed algorithm's execution time is comparatively lower than that of the l1-SVD algorithm and becomes almost equal to it as the number of snapshots increases, as shown in Figure 12. The probability of success, used to measure the resolution performance of the proposed algorithm, is shown in Figure 13. Keeping the number of antenna array elements constant in the case of closely angularly spaced coherent signal sources, the resolution/probability of success of the proposed algorithm increases with the SNR of the received antenna array signal, reaching almost 100% resolution from an SNR of 0 dB onwards.
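An MSE-versus-SNR comparison of the kind reported in Figure 10 can be reproduced in outline with a Monte Carlo harness such as the one below, which reuses steering_matrix and sbl_doa from the earlier sketches. The trial count and the unit-power single-source assumption are illustrative choices, and this is not the authors' MATLAB code.

import numpy as np

def mse_vs_snr(snr_db_list, n_trials=50, M=100, true_doa=0.0):
    grid_deg = np.arange(-90.0, 90.5, 0.5)
    A = steering_matrix(M, grid_deg)
    k = int(np.argmin(np.abs(grid_deg - true_doa)))  # grid index of the true direction
    mse = []
    for snr_db in snr_db_list:
        sigma2 = 10.0 ** (-snr_db / 10.0)            # unit-power source: noise variance from SNR
        sq_err = []
        for _ in range(n_trials):
            w = np.sqrt(sigma2 / 2) * (np.random.randn(M) + 1j * np.random.randn(M))
            y = A[:, k] + w
            est_deg = grid_deg[np.argmax(sbl_doa(A, y))]
            sq_err.append((est_deg - true_doa) ** 2)
        mse.append(np.mean(sq_err))
    return np.array(mse)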
CONCLUSION
In this paper, a sparse Bayesian learning approach based on maximum a posteriori estimation of the hyperparameters for DOA estimation is designed and tested under various DOA estimation conditions and parameters. The proposed algorithm exhibits good mean square error and success rate for all the parametric conditions considered, such as a low SNR range, a small array size and a small number of snapshots. It also achieves good resolution for very closely spaced signal sources. From the results section, it is observed that the proposed algorithm achieves better MSE-versus-SNR performance than the other standard DOA estimation algorithms in the low to medium SNR range. The only drawback of the proposed algorithm is its longer computation time and increased complexity for a larger number of snapshots. As the proposed algorithm performs well for single or very few snapshots with manageable computation time, it can be a comparatively more valuable solution to the DOA estimation problem in the array signal processing field.
The qc Yamabe problem on non-spherical quaternionic contact manifolds
It is shown that the qc Yamabe problem has a solution on any compact qc manifold which is not locally qc equivalent to the standard 3-Sasakian sphere. Namely, it is proved that on a compact qc manifold which is not locally spherical there exists a qc conformal qc structure with constant qc scalar curvature.
Introduction
It is well known that the sphere at infinity of a non-compact symmetric space M of rank one carries a natural Carnot-Carathéodory structure, see [26,27]. In the real hyperbolic case one obtains the conformal class of the round metric on the sphere. In the remaining cases, each of the complex, quaternionic and octonionic hyperbolic metrics on the unit ball induces a Carnot-Carathéodory structure on the unit sphere. This defines a conformal structure on a sub-bundle of the tangent bundle of co-dimension dim_R K − 1, where K = C, H, O. In the complex case the obtained geometry is the well studied standard CR structure on the unit sphere in complex space. In the quaternionic case one arrives at the notion of a quaternionic contact structure. The quaternionic contact (qc) structures were introduced by O. Biquard, see [4], and are modeled on the conformal boundary at infinity of the quaternionic hyperbolic space. Biquard showed that the infinite dimensional family [24] of complete quaternionic-Kähler deformations of the quaternionic hyperbolic metric have conformal infinities which provide an infinite dimensional family of examples of qc structures. Conversely, according to [4] every real analytic qc structure on a manifold M of dimension at least eleven is the conformal infinity of a unique quaternionic-Kähler metric defined in a neighborhood of M. Furthermore, [4] considered CR and qc structures as boundaries at infinity of Einstein metrics rather than only as boundaries at infinity of Kähler-Einstein and quaternionic-Kähler metrics, respectively. In fact, in [4] it was shown that in each of the three cases (complex, quaternionic, octonionic) any small perturbation of the standard Carnot-Carathéodory structure on the boundary is the conformal infinity of an essentially unique Einstein metric on the unit ball, which is asymptotically symmetric. In the Riemannian case the corresponding question was posed in [8] and the perturbation result was proven in [14].
There is a deep analogy between the geometry of qc manifolds and the geometry of strictly pseudoconvex CR manifolds, as well as the geometry of conformal Riemannian manifolds. The qc structures, appearing as the boundaries at infinity of asymptotically hyperbolic quaternionic manifolds, generalize to the quaternion algebra the sequence of families of geometric structures that are the boundaries at infinity of real and complex asymptotically hyperbolic spaces. In the real case, these manifolds are simply conformal manifolds and in the complex case, the boundary structure is that of a CR manifold.
A natural extension of the Riemannian and the CR Yamabe problems is the quaternionic contact (qc) Yamabe problem, a particular case of which [13,30,15,16] amounts to finding the best constant in the L 2 Folland-Stein Sobolev-type embedding and the functions for which the equality is achieved, [9] and [10], with a complete solution on the quaternionic Heisenberg groups given in [16,17,18].
Following Biquard, a quaternionic contact structure (qc structure) on a real (4n+3)-dimensional manifold M is a codimension three distribution H (the horizontal distribution) locally given as the kernel of an R³-valued one-form η = (η1, η2, η3), such that the three two-forms dηi|H are the fundamental forms of a quaternionic Hermitian structure on H. The 1-form η is determined up to a conformal factor and the action of SO(3) on R³, and therefore H is equipped with a conformal class [g] of quaternionic Hermitian metrics.
For a qc manifold of crucial importance is the existence of a distinguished linear connection ∇ preserving the qc structure, defined by O. Biquard in [4], and its scalar curvature S, called qc-scalar curvature. The Biquard connection plays a role similar to the Tanaka-Webster connection [31] and [29] in the CR case. A natural question coming from the conformal freedom of the qc structures is the quaternionic contact Yamabe problem [30,15,20]: The qc Yamabe problem on a compact qc manifold M is the problem of finding a metric g ∈ [g] on H for which the qc-scalar curvature is constant.
The question reduces to the solvability of the quaternionic contact (qc) Yamabe equation, where △ is the horizontal sub-Laplacian, △f = tr_g(∇²f), and S and S̄ are the qc-scalar curvatures of (M, η) and (M, η̄ = f^{4/(Q−2)} η), respectively; here 2* = 2Q/(Q−2), with Q = 4n+6 the homogeneous dimension. Complete solutions to the qc Yamabe equation on the 3-Sasakian sphere, and more generally on any compact 3-Sasakian manifold, were found in [18]. The case of the sphere is rather important for the general solution of the qc Yamabe problem since it provides a family of "test functions" used in attacking the general case.
The qc Yamabe problem is of a variational nature, as we recall next (see e.g. [30,15,20,21]). Given a quaternionic contact (qc) manifold (M, [η]) with a fixed conformal class defined by a quaternionic contact form η, solutions of the quaternionic contact Yamabe problem are critical points of the qc Yamabe functional. When η is a fixed qc contact form one also considers a corresponding constrained functional (likewise called the qc Yamabe functional); its infimum over the conformal class is the qc Yamabe constant λ(M). The main result of W. Wang [30] states that the qc Yamabe constant of a compact quaternionic contact manifold is always less than or equal to that of the standard 3-Sasakian sphere, λ(M) ≤ λ(S^{4n+3}), and, if the constant is strictly less than that of the sphere, the qc Yamabe problem has a solution, i.e., there exists a global qc conformal transformation sending the given qc structure to a qc structure with constant qc scalar curvature.
The purpose of this paper is to confirm the above conjecture. Our main result, Theorem 1.1, is that the qc Yamabe problem has a solution on any compact qc manifold which is not locally qc equivalent to the standard 3-Sasakian sphere. This is analogous to the result of T. Aubin [1] for the Riemannian version of the Yamabe problem: every compact Riemannian manifold of dimension bigger than 5 which is not locally conformally flat possesses a conformal metric of constant scalar curvature; and to the result of D. Jerison & J. M. Lee [22] for the CR version of the Yamabe problem: every compact CR manifold of dimension bigger than 3 which is not locally CR equivalent to the sphere possesses a conformal pseudohermitian metric of constant pseudohermitian scalar curvature. Aubin's result is limited to dimensions bigger than 5; in the remaining cases the problem was solved by R. Schoen in [28] (see also [25] as well as [2,3] for a different approach to the Riemannian Yamabe problem). Similarly, Jerison and Lee's result is limited to dimensions bigger than 3, and in the remaining cases the problem was solved by N. Gamara in [11] and N. Gamara and R. Yacoub in [12].
Surprisingly, in our theorem for the qc case there is no dimensional restriction, which differs somewhat from the Riemannian and CR cases.
To achieve the result we follow and adapt to the qc case the steps of Jerison & Lee's theorem [22], which solves the CR Yamabe problem on non-spherical CR manifolds of dimension bigger than 3. The main idea (as in [22]) is to find a precise asymptotic expression for the qc Yamabe functional. Our efforts throughout the present paper are concentrated on establishing a local asymptotic expression of the qc Yamabe functional (1.1) in terms of the Yamabe invariant of the sphere and the norm of the qc conformal curvature W^{qc} introduced by Ivanov-Vassilev in [19]. The next step is to show that the term in front of ||W^{qc}||² in this expression is a negative constant and then to apply the qc conformal flatness theorem, which states that W^{qc} = 0 if and only if the qc manifold is locally qc equivalent to the standard 3-Sasakian sphere [19, Theorem 1.2]. The key idea is that the 3-Sasakian sphere possesses a one-parameter family of extremal qc contact forms that concentrate near a point. Instead of the sphere one uses as a model the quaternionic Heisenberg group G(H) = H^n × Im(H). The Cayley transform [15] gives a qc equivalence between G(H) and the sphere minus a point, which allows us to think of the standard spherical 3-Sasakian form as a qc form on G(H).
The Heisenberg group carries a natural family of parabolic dilations: for s > 0, the map δ_s(x^α, t^i) = (s x^α, s² t^i) is a qc automorphism of G(H). These dilations give rise to a family of extremal qc contact forms Θ_ε = δ*_{1/ε} Θ on G(H) which become more and more concentrated near the origin as ε → 0. We show that the qc Yamabe functional Υ_M is closely approximated by Υ_{G(H)} for contact forms supported very near the base point.
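For concreteness, the short computation below checks that each parabolic dilation is indeed a group automorphism. The group law written here is the standard one for the quaternionic Heisenberg group and is an assumption insofar as it is not displayed in the text; the verification only uses the bilinearity of the correction term and the fact that the real scalar s commutes with quaternions.

% Assumed group law on G(H) = H^n x Im(H):
%   (q, \omega) \cdot (q', \omega') = (q + q', \; \omega + \omega' + 2\,\mathrm{Im}(q \cdot \bar q')).
% For the parabolic dilation \delta_s(q, \omega) = (s q, s^2 \omega), s > 0:
\begin{align*}
\delta_s(q,\omega)\cdot\delta_s(q',\omega')
  &= \bigl(sq + sq',\; s^{2}\omega + s^{2}\omega' + 2\,\mathrm{Im}(sq \cdot \overline{sq'})\bigr) \\
  &= \bigl(s(q+q'),\; s^{2}\bigl(\omega + \omega' + 2\,\mathrm{Im}(q \cdot \bar q')\bigr)\bigr)
   = \delta_s\bigl((q,\omega)\cdot(q',\omega')\bigr).
\end{align*}
% Hence each \delta_s is a group automorphism, and pulling back the spherical contact form by
% \delta_{1/\varepsilon} produces the concentrating family \Theta_\varepsilon described above.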
We present a precise asymptotic expression for Υ_M(η_ε) as ε → 0 for suitably chosen qc contact forms η_ε. To this end we use the intrinsic qc normal coordinates introduced by Ch. Kunkel in [23]. The main ingredient in Kunkel's construction of these coordinates is the fact that the tangent space of a quaternionic contact manifold has a natural parabolic dilation, instead of the more common linear dilation seen in Riemannian geometry. Kunkel used a distinguished family of curves to define an exponential map from the tangent space at a point to the base manifold and showed that this exponential map incorporates the parabolic structure. Using this map and a special frame at the center point, Kunkel was able to construct a set of parabolic normal coordinates. Using these parabolic normal coordinates and the effect of a conformal change of the qc contact structure on the curvature and torsion tensors of the Biquard connection, Kunkel defined a function that, when used as the conformal factor, causes the symmetrized covariant derivatives of a certain tensor constructed from the curvature and torsion to vanish. More precisely, using invariance theory, Kunkel showed in [23] that the only remaining term of weight less than or equal to 4 that does not necessarily vanish at the center point is the squared norm of the qc conformal curvature, ||W^{qc}||².
Using the above coordinates around any fixed point q ∈ M of an arbitrary qc manifold (M, [η]), we define the "test forms" η_ε = (f_ε)^{2*−2} η, where η is a qc contact form normalized at q and f_ε is a suitable "test function" inspired by the solution of the qc Yamabe equation on the 3-Sasakian sphere found in [17,18], and we compute an asymptotic formula for Υ_M(η_ε) as ε → 0.
We show in Section 5, Theorem 5.6, that an asymptotic formula of this type holds, where c(n) is a positive dimensional constant. Finally, we compute the exact value of the constant c(n) and show that it is strictly positive. Since W^{qc} is identically zero precisely when M is locally qc equivalent to the sphere [19], under the hypotheses of Theorem 1.1 there is a point q ∈ M where W^{qc}(q) ≠ 0. This implies that for ε small enough we can achieve Υ_M(η_ε) < Λ, and applying the main result of W. Wang [30] we prove Theorem 1.1.
Quaternionic contact manifolds
In this section we briefly review the basic notions of quaternionic contact geometry and recall some results from [4], [15] and [19] which we will use in this paper. Since we will work with Kunkel's qc parabolic normal coordinates, we follow the notation of [23].
2.1. Quaternionic contact structures and the Biquard connection. A quaternionic contact (qc) manifold (M, η, g, Q) is a (4n+3)-dimensional manifold M with a codimension three distribution H locally given as the kernel of a 1-form η = (η1, η2, η3) with values in R³. In addition H has an Sp(n)Sp(1) structure, that is, it is equipped with a Riemannian metric g and a rank-three bundle Q consisting of endomorphisms of H locally generated by three almost complex structures I1, I2, I3 on H satisfying the identities of the imaginary unit quaternions, I1I2 = −I2I1 = I3, I1I2I3 = −id|H, which are Hermitian compatible with the metric, g(Ii·, Ii·) = g(·,·), together with the compatibility condition 2g(IiX, Y) = dηi(X, Y) for X, Y ∈ H. The transformations preserving a given qc structure η, i.e., η̄ = fΨη for a positive smooth function f and an SO(3) matrix Ψ with smooth functions as entries, are called quaternionic contact conformal (qc-conformal) transformations. If the function f is constant, η̄ is called qc-homothetic to η. The qc conformal curvature tensor W^{qc}, introduced in [19], is the obstruction for a qc structure to be locally qc conformal to the standard 3-Sasakian structure on the (4n+3)-dimensional sphere [15,19].
A special phenomenon, noted in [4], is that the contact form η determines both the quaternionic structure and the metric on the horizontal distribution in a unique way.
On a qc manifold with a fixed metric g on H there exists a canonical connection, defined first by Biquard in [4] when the dimension (4n + 3) > 7, and in [7] for the 7-dimensional case. Biquard showed that there is a unique connection ∇ with torsion T and a unique supplementary subspace V to H in T M , such that: (i) ∇ preserves the decomposition H ⊕ V and the Sp(n)Sp(1) structure on H, i.e. ∇g = 0, ∇σ ∈ Γ(Q) for a section σ ∈ Γ(Q), and its torsion on H is given by T (X, Y ) = −[X, Y ] |V ; (ii) for R ∈ V , the endomorphism T (R, .) |H of H lies in (sp(n) ⊕ sp(1)) ⊥ ⊂ gl(4n); (iii) the connection on V is induced by the natural identification ϕ of V with the subspace sp(1) of the endomorphisms of H, i.e. ∇ϕ = 0.
This canonical connection is also known as the Biquard connection. When the dimension of M is at least eleven, Biquard [4] also described the supplementary distribution V , which is (locally) generated by the so-called Reeb vector fields {R 1 , R 2 , R 3 } determined by the conditions (2.1): η i (R j ) = δ ij , (R i ⌟ dη i ) |H = 0, (R i ⌟ dη j ) |H = −(R j ⌟ dη i ) |H , where ⌟ denotes the interior multiplication. If the dimension of M is seven, Duchemin shows in [7] that if we assume, in addition, the existence of Reeb vector fields as in (2.1), then the Biquard result holds. Henceforth, by a qc structure in dimension 7 we shall mean a qc structure satisfying (2.1). The sp(1)-connection 1-forms of ∇, i.e. the restrictions to H of the connection 1-forms of ∇ on V , are defined accordingly; the isomorphism ϕ is then simply ϕ(R i ) = I i , and the same forms are the connection 1-forms on Q.
Notice that equations (2.1) are invariant under the natural SO(3) action. Using the triple of Reeb vector fields we extend the metric g on H to a Riemannian metric on T M by requiring span{R 1 , R 2 , R 3 } = V ⊥ H. The extended Riemannian metric g ⊕ (η i ) 2 as well as the Biquard connection do not depend on the action of SO(3) on V , but both change if η is multiplied by a conformal factor [15]. Clearly, the Biquard connection preserves the Riemannian metric on T M .
Consider a frame {ξ α , R i } α=1,...,4n;i=1,2,3 , where {ξ α } is an Sp(n)Sp(1) frame for H, and R i are the three Reeb vector fields described above. It is occasionally convenient to have a notation for the entire frame; therefore as necessary we may refer to R i as ξ 4n+i . In order to have a consistent index notation we will use different letters for different ranges of indices as in Convention 1.2. For the dual basis we use θ α and η i . We note that both H and V are orientable. The horizontal bundle is orientable since it admits an Sp(n)Sp(1) ⊂ SO(4n) structure and Q has an SO(3) structure, hence so does V . The natural volume form on V is given by ǫ = η 1 ∧ η 2 ∧ η 3 , and this tensor provides a handy isomorphism between V and Λ 2 V (or their duals). We also denote the volume form on H by Ω and the volume form on T M as dv = Ω ∧ ǫ. Using the volume form ǫ ijk and the metrics on H and V , there is a convenient way to express composition of the almost complex structures and contractions of the volume form with itself,
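namely, presumably the standard quaternionic identities (a reconstruction of the omitted display, assuming the conventions of [15,19]):

I_i I_j = ε_{ijk} I_k − δ_{ij} id_{|H}, ε_{ijk} ε_{lmk} = δ_{il} δ_{jm} − δ_{im} δ_{jl}, ε_{ijk} ε_{ljk} = 2δ_{il},

which are used repeatedly in the index computations below.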
Torsion and curvature.
The properties of the Biquard connection are encoded in the properties of the torsion endomorphism T R = T (R, ·) : H → H, R ∈ V . Decomposing the endomorphism T R ∈ (sp(n) + sp(1)) ⊥ into its symmetric part T 0 R and skew-symmetric part b R , T R = T 0 R + b R , O. Biquard shows in [4] that the torsion T R is completely trace-free, tr T R = tr (T R • I i ) = 0, and that its symmetric part satisfies certain commutation properties, where the superscript + + + means commuting with all three I i , + − − indicates commuting with I 1 and anti-commuting with the other two, etc. The skew-symmetric part can be represented as b Ri = I i µ, where µ is a traceless symmetric (1,1)-tensor on H which commutes with I 1 , I 2 , I 3 . Therefore we have T Ri = T 0 Ri + I i µ. The symmetric, trace-free endomorphism on H defined by τ = (T 0 R1 I 1 + T 0 R2 I 2 + T 0 R3 I 3 ) [15] determines completely the symmetric part [19, Proposition 2.3], 4T 0 Ri = I i τ − τ I i . Thus τ together with µ are the two Sp(n)Sp(1)-invariant components of the torsion endomorphism. If n = 1 then the tensor µ vanishes identically, µ = 0, and the torsion is a symmetric tensor, T R = T 0 R . Considering the Casimir operator C = I 1 ⊗ I 1 + I 2 ⊗ I 2 + I 3 ⊗ I 3 , one has C 2 = 2C + 3, and τ belongs to the eigenspace of C corresponding to the eigenvalue −1 while µ belongs to the eigenspace of C corresponding to the eigenvalue 3, Cτ = −τ, Cµ = 3µ. Further in the paper we use the standard index convention; for example, C αγ βδ = I i α β I iγ δ (see [4] and (2.2)). We denote R abc d = θ d (R(ξ a , ξ b )ξ c ) for the curvature tensor of the Biquard connection. The horizontal Ricci tensor, called the qc Ricci tensor, is Ric = R αβ = R γαβ γ and the qc scalar curvature is S = R α α . There are nine Ricci type tensors obtained by certain contractions of the curvature tensor against the almost complex structures, introduced in [15, Definition 3.7, Definition 3.9]. The curvature tensor R abαβ decomposes into its sp(n)-component, which commutes with the almost complex structures in the second pair of indices, plus a term ρ iab I i αβ [15, Lemma 3.8].
In fact, according to [19,Theorem 3.11] the whole curvature R abcd is completely determined by its horizontal part R αβγδ , the symmetric horizontal tensors τ αβ , µ αβ , the qc scalar curvature S and their covariant derivatives up to the second order.
We collect below the necessary facts from [15] and [19].
2.3. Conformal change of the qc structure. In this section we recall the conformal change of a qc structure and list the necessary facts from [15]. Let u be a smooth function on a (4n + 3)-dimensional qc manifold (M, η, g, Q). Let η̃ i = e 2u η i , g̃ = e 2u g be the conformal transformation of the qc structure (η, g).
The new Reeb vector fields, given by R̃ i = e −2u R i − I i α β u β ξ α , determine the new globally defined supplementary space Ṽ . Note that, even though the 1-forms η i are only locally defined, the qc conformal deformation has a global nature since the distributions H, V , the bundle Q and the horizontal metric g are globally defined. We set ξ̃ α = ξ α and θ̃ α = θ α + I i α β u β η i . Then θ̃ α (R̃ i ) = 0 and η̃ i (ξ̃ α ) = 0. Denote by P −1 and P 3 the projections onto the (−1)- and the 3-eigenspaces of the operator C. Then the torsion and the qc scalar curvature change, [15, (5.5),(5.6),(5.8)], with h = (1/2) e −2u , as follows: τ̃ αβ = τ αβ + P −1 (4u α u β − 2u αβ ), μ̃ αβ = µ αβ + P 3 (−2u α u β − u αβ ) (2.5), and S̃ g̃ αβ = Sg αβ − 16(n + 1)(n + 2)u γ u γ g αβ − 8(n + 2)u γ γ g αβ (2.6). 2.3.1. The qc conformal curvature tensor. In conformal geometry, the obstruction to conformal flatness is the well-studied Weyl tensor, the portion of the curvature tensor that is invariant under a conformal change of the metric; its vanishing determines whether a conformal manifold is locally conformally equivalent to the standard sphere. Likewise, in the CR case, the tensor which determines local CR equivalence to the CR sphere is the Chern-Moser tensor, also determined by the curvature of the Tanaka-Webster connection; the Chern-Moser theorem [6] states that its vanishing is equivalent to local CR equivalence to the CR sphere. And just as in the conformal case, it is the key to finding the appropriate bound for the CR Yamabe invariant on a CR manifold. Something similar appears in the qc case, dubbed the quaternionic contact conformal curvature by Ivanov and Vassilev in [19]. In that paper they define a tensor W qc and prove that it is the conformally invariant portion of the Biquard curvature tensor. Moreover, if the tensor W qc vanishes, they prove that the qc manifold is locally qc equivalent to the quaternionic Heisenberg group, and since the quaternionic Heisenberg group and the standard 3-Sasakian sphere are locally qc equivalent [19, Theorem 1.1, Theorem 1.2, Corollary 1.3], this tensor clearly plays the role of the Weyl or Chern-Moser tensors.
The qc conformal curvature is determined by the horizontal curvature and the torsion of the Biquard connection by the formula [19, (4.8)], where the tensor L αβ is given by [19, (4.6)]. Clearly the qc conformal curvature equals the horizontal curvature precisely when the tensor L vanishes.
2.4. The flat model-the quaternionic Heisenberg group. The basic example of a qc manifold is provided by the quaternionic Heisenberg group G (H), on which we introduce coordinates by regarding it as H n × Im H. The left-invariant horizontal 1-forms and their dual vector fields are defined in the standard way; the horizontal and vertical subbundles of T G (H) are given by h = Span{X α }, v = Span{T 1 , T 2 , T 3 } and T G (H) = h ⊕ v. On G (H), the left-invariant flat connection is the Biquard connection, hence G (H) is a flat qc structure. It should be noted that the latter property characterizes (locally) the qc structure Θ by [15, Proposition 4.11], but in fact vanishing of the curvature on the horizontal space is enough because of [19, Proposition 3.2].
QC parabolic normal coordinates
In this section we present the necessary facts from Kunkel's work [23] concerning the construction of qc parabolic normal coordinates and their consequences.
The quaternionic Heisenberg group G (H) has a family of parabolic dilations (x, t) → (sx, s 2 t) which are automorphisms for the Lie group G (H) and also for its Lie algebra. Its tangent space T G (H) = h ⊕ v thus comes equipped with a natural parabolic dilation which sends a vector (v, a) to (sv, s 2 a), for any scalar s. Consider these vectors to be based at the origin o ∈ G (H); then by moving from o in the direction of the vector (v, a) for time s we arrive at the parametrization of a parabola, s → (sv, s 2 a). The parabola has a simple expression in terms of differential equations.
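On the flat model these differential equations are easy to reconstruct (a minimal sketch of the flat case; on a general qc manifold the ordinary derivatives are replaced by the Biquard connection): writing γ(s) = (x(s), t(s)) = (sv, s^2 a) one has

x''(s) = 0, t''(s) = 2a, x(0) = 0, t(0) = 0, x'(0) = v, t'(0) = 0,

so the curve is determined by its initial horizontal velocity v and the constant vertical "acceleration" 2a, which is the parabolic analogue of the straight lines used for the ordinary exponential map.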
Extending this notion to a qc manifold with the Biquard connection produces curves that can rightly be called parabolic geodesics, i.e. curves that satisfy a natural parabolic scaling γ (sv,s 2 a) (t) = γ (v,a) (st). By appropriately restricting the initial conditions, Kunkel showed in [23] that there is a parabolic version of the geodesic exponential map, called the parabolic exponential map. Here we state Kunkel's result for a qc manifold. Define γ (X,Y ) to be the curve beginning at q satisfying the corresponding parabolic geodesic equations. Theorem 3.1 supplies a special frame and co-frame as follows. Let {R i } be an oriented orthonormal frame for V q , and let {I i } be the associated almost complex structures. Choose an orthonormal basis {ξ α } for H q so that ξ 4k+i+1 = I i ξ 4k+1 for k = 0, ..., n − 1. Extending these vectors to be parallel along parabolic geodesics beginning at q, one obtains a smooth local frame for T M = H ⊕ V . Define the dual 1-forms {θ α , η i } by θ α (ξ β ) = δ α β , θ α (R i ) = 0, η i (ξ α ) = 0, η i (R j ) = δ i j ; extending the almost complex structures by defining I i ξ 4k+1 = ξ 4k+i+1 for k = 0, . . . , n − 1, one gets Kunkel's special frame and co-frame.
Given any special frame, one defines a coordinate map on a neighborhood of q by composing the inverse of Ψ with the map λ : T q M → R 4n+3 : X → (x α , t i ) = (θ α (X), η i (X)). These coordinates are Kunkel's parabolic normal coordinates (called qc pseudohermitian normal coordinates in [23]).
The generator of the parabolic dilations on the quaternionic Heisenberg group. For an arbitrary tensor field φ the symbol φ (m) denotes the part of φ that is homogeneous of order m.
Given a qc manifold and parabolic normal coordinates centered at a point q ∈ M , Kunkel defined the infinitesimal generator of the parabolic dilations, the vector P , in these coordinates and showed how it is expressed in them [[23], Lemma 3.4]. Using this result Kunkel calculated the low order homogeneous terms of the special co-frame and the connection 1-forms; namely, he proved a description (Proposition 3.2) of the low order homogeneous terms of the special co-frame and the connection 1-forms in parabolic normal coordinates. Following [23], we denote by O (m) those tensor fields whose Taylor expansions at q contain only terms of order greater than or equal to m. For example, from Proposition 3.2, η i ∈ O (2) and θ α ∈ O (1) .
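For the reader's convenience we note that, in the coordinates (x^α, t^i), the generator of the dilations δ_s(x, t) = (sx, s^2 t) is simply the vector field

P = x^α ∂/∂x^α + 2 t^i ∂/∂t^i;

Kunkel's Lemma 3.4 presumably expresses P in this form, possibly rewritten through the special frame {ξ a } up to higher-order terms.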
The vector fields {X a } 4n+3 a=1 form the standard left-invariant frame on G(H) defined in Subsection 2.4, and the given frame on M is expressed as a perturbation of them.
It is easy to check that if φ ∈ O (m) and ψ ∈ O (m ′ ) then φ ⊗ ψ ∈ O (m+m ′ ) and we will use the following: for any index a, let o(a) = 1 if a ≤ 4n and o(a) = 2 if a > 4n. Given a multiindex A = (a 1 , . . . , a r ) we let #A = r and o(A) = r s=1 o(a s ). If we have a collection of indexed vector fields, X a , we let X A = X ar ...X a1 , and similarly for similar expressions.
The next facts we need from [23] are the following. 3.1. QC parabolic normal coordinates. Using the qc conformal properties of a qc structure, Kunkel was able to determine a conformal factor which normalizes the parabolic normal coordinates (qc parabolic normal coordinates); in these coordinates many tensor invariants of the qc structure vanish at the origin [[23], Section 4]. We briefly explain those of his results which we need for finding the asymptotic expansion of the qc Yamabe functional.
Let η̃ = e 2u η for a smooth function u. In this subsection we denote by a tilde the objects associated to η̃, as opposed to the corresponding objects associated to η. Suppose that u is of order m ≥ 2 with respect to the vector P . Kunkel showed that the connection 1-forms ω̃ and ω of the Biquard connections corresponding to η̃ and η, respectively, satisfy a certain relation (see [23]). For u ∈ O (m) , applying (2.8), (2.5) and (2.6) one gets the corresponding expansions, where we used the common notation u (αβ) = 1/2 (u αβ + u βα ) for the symmetric part of a tensor. Consider the operator A and the 2-tensor B defined in [23]. Using the first Bianchi identity from [15,19], Kunkel [23] shows that A is invertible. Defining the symmetric tensor Q accordingly, it follows from (3.2) and (3.3) that for u ∈ O (m) the symmetric tensor Q changes as described in [23]. In his main theorem, Kunkel [[23], Theorem 3.16] proves that for any q ∈ M and any m ≥ 2 there is a u which is a homogeneous polynomial of order m in parabolic normal coordinates (x, t) such that all the symmetrized covariant derivatives of Q̃ with total order less than or equal to m vanish at the point q ∈ M , i.e. Q̃ (ab,C) (q) = 0 if o(abC) ≤ m. Such parabolic normal coordinates are called qc parabolic normal coordinates.
Using this normalization for Q, (2.4) and the identities from [15] and [19], Kunkel shows that at the center q of the qc parabolic normal coordinates the Ricci tensor, the scalar curvature, the quaternionic contact torsion, the Ricci type tensors and many of their covariant derivatives vanish. Theorem 3.6 ([23], Theorem 3.17). Let (M, η, g) be a qc manifold for which the symmetrized covariant derivatives of the tensor Q vanish to total order 4 at a point q ∈ M . Then the following curvature and torsion terms vanish at q.
We note that Kunkel proved Theorem 3.6 for a qc manifold of dimension bigger than 7 (n > 1), but a careful examination of his proof shows that Theorem 3.6 also holds in dimension 7.
An important consequence of Theorem 3.6 is that at the center q of the qc parabolic normal coordinates the horizontal curvature is equal to its sp(n)-component, i.e. the corresponding curvature identities hold at q. The first identity is clear, and the second one holds since the Biquard connection preserves the metric. The third, fourth and fifth equalities are a consequence of the first Bianchi identity (see e.g. [19, (3.2)]), [19, Theorem 3.1], [15, Lemma 3.8] and the fact that S, τ, µ all vanish at q by Theorem 3.6.
Scalar polynomial invariants.
We recall here the definition of the notion "weight of a tensor", which plays a central role in our further considerations. First we recall the notion of weight of a polynomial in the coordinates {x α , t i }. The notion "weight of a tensor" is an extension of this definition. Namely, one says that a tensor P has weight m, and writes w(P ) = m, if its components with respect to the bases {X a } 4n+3 a=1 and {Ξ a } 4n+3 a=1 have weight m. Note that an arbitrary tensor P can be decomposed into homogeneous parts, and its components with respect to these bases are certain homogeneous polynomials in {x α , t i }.
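The definition of weight for polynomials, alluded to above, is presumably the standard one:

w( x^{α_1} · · · x^{α_p} t^{i_1} · · · t^{i_q} ) = p + 2q,

so that horizontal coordinates count once and vertical coordinates twice, matching the parabolic dilations δ_s(x, t) = (sx, s^2 t); a polynomial has weight m when all of its monomials do, and a tensor is assigned the weight of its components.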
Using the curvature and torsion identities from [15,19], Kunkel gives in [[23], Table 1] a list of curvature and torsion terms of weight less than or equal to four, and shows in [[23], Theorem 4.3] that at the origin of qc parabolic normal coordinates the only tensors of weight at most four are the dimensional constants and the squared norm of the qc conformal curvature tensor (2.7); namely, we have the following. Theorem 3.8 ([23], Theorem 4.3). Let (M, η, g, Q) be a qc manifold. Then, at the center q ∈ M of the qc parabolic normal coordinates, the only invariant scalar quantities of weight no more than 4 constructed as polynomials from the invariants listed above are constants independent of the qc structure and ||W qc || 2 ; in particular, all other invariant scalar terms vanish at q ∈ M .
In what follows we also use the next Convention 3.9. a) We denote by (η = (η 1 , η 2 , η 3 ), g) the qc structure normalized according to Theorem 3.6. The corresponding qc parabolic normal coordinates will be signified by (x, t) = (x α , t i ). b) We use {ξ a } 4n+3 a=1 to denote the special frame corresponding to the contact form η. The (dual) co-frame will be designated by {θ a } 4n+3 a=1 . c) The index notations of the tensors will be used only with respect to the special frame {ξ a } 4n+3 a=1 and the special co-frame {θ a } 4n+3 a=1 . For example, A αβ = A(ξ α , ξ β ), Bγ αβ = θγ(B(ξ α , ξ β )) and so on.
The asymptotic expansion of the qc Yamabe functional
In order to find an asymptotic expansion of the qc Yamabe functional we prove a number of lemmas. Proof. To check a), take the Lie derivative of X α with respect to the vector field P ; for a smooth function f the required identity is verified by a direct calculation. To check c) and d), we use the Cartan formula L X ω = X ⌟ dω + d(X ⌟ ω) for a vector field X ∈ Γ(T M ) and a differential form ω ∈ Ω(M ). Standard calculations lead to L P Ξ α = Ξ α and L P Θ i = 2Θ i . Note that the last facts are implicitly mentioned in Proposition 3.2.
Taking the homogeneous parts of order −1 and −2 in the above equality, we obtain that s β α(0) = δ β α , sα α(1) = 0 and sα α(0) = 0, respectively. Similarly, taking the homogeneous parts of order −2 and −1 in the equality ξα = s α α X α + sβ α Xβ we get sβ α(0) = δβ α and s α α(0) = sβ α(1) = 0, respectively, which proves (4.1). In order to prove (4.2) we take the homogeneous parts of order m + o(b) − o(a) in the equality δ b a = s c a θ b (X c ). We separate the proof into two cases. The first case appears when m + o(b) − o(a) > 0; in the resulting identity, the left-hand side is just the term in the sum that corresponds to i = 0, while the term that corresponds to i = 1 is equal to 0, by Proposition 3.2.
A crucial auxiliary result that helps us to find an asymptotic expansion of the qc Yamabe functional is the following lemma. Proof. We shall prove by induction on k that all the objects in (4.3) have the stated weights. B) Inductive step. Suppose that all the objects in (4.3) have weight k for k ≤ m. We are going to prove that this holds for k = m + 1. The first step is to check the assertion for the torsion and the curvature when A = ∅.
First we have to show that T abc(m+1−o(bc)+o(a)) has weight m + 1. Applying Proposition 3.4, we get where the sum is taken over all multi-indices A : o(A) = m + 1 − o(bc) + o(a). We show that X A T abc | q has weight m + 1 for any multi-index A. (Note that we use here the same letter A to denote the corresponding multi-index; we are doing this now and later in order to avoid the excessive accumulation of letters). For that purpose, we will prove that ξ A T abc | q has weight m + 1 for any multi-index A with the mentioned order.
We recall that if {T a1...ar } are the components of a tensor T of type (0, r) with respect to the frame {ξ a } 4n+3 a=1 then the components of the covariant derivative of T along the vector field ξ b are given by ..ar , A = (a 1 . . . a r ).
We introduce the notation: Taking into account (4.4) and the above notation, it is not difficult to see that We have i.e. we obtain the conditions To investigate the weight of the term [ω d a (ξ ai )] (k1) , we decompose it in the sum where l 1 and l 2 satisfy l 1 + l 2 = k 1 , l 1 ≥ 2, l 2 ≥ −o(a i ). These conditions and the first inequality in (4.8) give . . , 4n}, which implies k 1 , k 2 ≤ m and we can apply the inductive hypothesis to the terms that appears in the right-hand side of (4.7) to obtain that the term in the left-hand side of (4.7) has weight k. So, we can apply the inductive hypothesis for ω d a to conclude that (4.10) w(ω d a(l1) ) = l 1 .
Moreover, the first inequality in (4.8) and the inequality l 1 ≥ 2 give l 2 = k 1 − l 1 ≤ m − 2, which allows us to apply the inductive hypothesis for ξ ai , i.e.
We are interested in the weight of the summands on the right-hand side of (4.17). We may take k 0 > 0 since the corresponding term is zero when k 0 = 0. Hence k 1 + o(a 1 ) + · · · + k r + o(a r ) satisfies the required estimate, which completes the proof that T abc(m+1−o(bc)+o(a)) has weight m + 1.
The proof of the fact that R abcd(m+1−o(ab)) has weight m + 1 is similar and we omit it. It follows from Proposition 3.2 and the two results just proved for the torsion and the curvature that η i (k+2) , dη i (k+2) , θ α (k+1) , ω b a(k) have weight k for k = m + 1. Now we check that ξ a(k−o(a)) and s b a(k+o(b)−o(a)) have weight k for k = m + 1; this follows from a computation in which we use Lemma 4.1 and (4.2) for the second and the third equality, respectively. Finally, S (m) has weight m + 2 since S = g αβ g γδ R αγδβ , which ends the proof of the Lemma.
The next result establishes a relation between the volume forms on (M, η, g, Q) and on G(H).
Lemma 4.4. Let V ol η and V ol Θ be the natural volume forms on the qc manifold (M, η, g, Q) and the quaternionic Heisenberg group G(H), respectively. Then the expansion below holds, where v s is a homogeneous polynomial of degree s and weight s, s = 1, 2, 3, 4, and O(ρ 5 ) is a function in O (5) .
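The omitted display is presumably of the form

V ol η = (1 + v 1 + v 2 + v 3 + v 4 + O(ρ 5 )) V ol Θ ,

which matches the bound |1 + v 2 + v 3 + v 4 + O(ρ 5 )| ≤ N used later in the asymptotic computation; the precise right-hand side is a reconstruction from the proof below rather than a quotation.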
Proof. According to Proposition 3.2, Lemma 4.1 and Lemma 4.3, we have a decomposition of η i , i = 1, 2, 3, in which P i stu is a homogeneous polynomial in {x α , t i } of degree s and weight t. Similarly, using again Proposition 3.2, Lemma 4.1 and Lemma 4.3, we get an analogous representation of dη i , i = 1, 2, 3, with respect to the corresponding base, in which P i stuv is a homogeneous polynomial in {x α , t i } of degree s and weight t. It is not difficult to see, using (4.24) and (4.25), that the claimed expansion of V ol η holds, where v s is a homogeneous polynomial of degree s and weight s, s = 1, 2, 3, 4, which proves the lemma.
The next essential step is to find an asymptotic expression of the qc Yamabe functional (1.1) over a special set of "test functions".
Proof. We shall examine separately each of the integrals that appear in the expression If ϕ is an integrable function on G(H) and |ϕ| ≤ CΦ(ρ) for a constant C and an integrable function Φ on G(H) that depends only on ρ then the next formula holds Indeed, the polar change (4.26) of the coordinates in the volume form on G(H) yields V ol Θ = ρ 4n+5 dρ ∧ dσ, where dσ is a 4n + 2-form that depends only on ξ 1 , . . . , ξ 4n , τ 1 , τ 2 , τ 3 and (4.29) follows. We also have for some positive constantC which implies We get consecutively: where we used the definition of f ε and Lemma 4.4 for the first equality and the parabolic dilation change of the variables for the second one. The third identity follows from the definition of F ε . To get the fourth one, we used the next chain of relations: where C 0 and C 1 are some suitable positive constants and k is chosen sufficient small. Hence, The fifth identity in (4.31) is clear while the sixth one is obtained from (4.29) and (4.30). For the first term at the right-hand side of (4.31) we have where c i := G(H) F 2 * v i V ol Θ , i = 0, 2, 3, 4, is a quaternionic contact invariant scalar quantity of weight i. It follows by Theorem 3.8 that c 2 = c 3 = 0 and therefore (4.32) where a 0 (n) and a 4 (n) are some dimensional constants, independent on the qc structure. Finally, we are interesting in the last three expressions at the right-hand side of (4.31). We have for i = 0, . . . , 4 that ρ 4n+5+i (1 + ρ) −8n−12 ≤ ρ −6 and consequently, In a similar way, we obtain A substitution of (4.32), (4.33) and (4.34) into (4.31) gives It follows directly by the decomposition of ξ a determined in Lemma 4.2, the definition of the function f ε and Lemma 4.4 that We begin with I 1 . At first, note that the following representation holds where v ab m , m = 0, 1, . . . , ∞, is a homogeneous polynomial of degree m + o(ab) − 2 and weight m. We have where the homogeneous polynomial w ab i , i = 0, . . . , 4, is formed from the polynomials v ab m , v j and is of degree i + o(ab) − 2 and of weight i. Moreover, it follows directly by the definition of F ε that Another fact we shall need is the inequality where C is some positive constant. 1 In order to check (4.39), we suppose firstly that a ∈ {1, . . . , 4n}. Using the definitions of the function F and the vector field X a , we obtain The polar change (4.26) in the above identity gives |X a F | ≤C(1 + ρ) −4(n+2) (ρ + ρ 3 ) for some positive constantC, yielding where C is a positive constant. Thus (4.39) is proved for a ∈ {1, . . . , 4n}. The case a ∈ {4n + 1, 4n + 2, 4n + 3} is considered in a similar way. We return to (4.37), make the parabolic dilations change δ ε (x α , t i ) = (εx α , ε 2 t i ) and use (4.38) to get (4.40) Now we shall examine the three integrals in the right-hand side of (4.40). We have , is a quaternionic contact invariant scalar quantity of weight i. In the same manner as in (4.32) we obtain by Theorem 3.8 that (4.41) whereb 0 (n) andb 4 (n) are some dimensional constants, independent of the qc structure. We get using (4.29) and (4.39) that To handle the last integral in the right-hand side of (4.40), we use (4.39) to get which together with ρ 3+o(ab) + ερ 4+o(ab) + · · · ≤Cρ 3+o(ab) for ρ < k ε , k sufficiently small, and (4.29) give Now, it is easy to see after some standard analysis that We substitute (4.41), (4.42) and (4.43) into (4.40) to obtain (4.44) I 1 =b 0 (n) +b 4 (n)||W qc || 2 ε 4 + O(ε 5 ).
To manage the integral I 2 in (4.36), we note that for k < ρ < 2k the following inequalities hold |s a α ||s b α | ≤ M ab α and |1 + v 2 + v 3 + v 4 + O(ρ 5 )| ≤ N for suitable constants M ab α , N . Then we have for some positive constants C ab α . The latter yields We examine the three integrals in the right-hand side of (4.45). We begin with where we applied the parabolic dilation change of the variables to obtain the first equality, (4.29) and (4.30) to get the second one. The latter together with the inequality (1 + ρ) −8(n+1) ρ 4n+5 ≤ ρ −4 lead to Similarly, we obtain using (4.29), (4.30), (4.38) and (4.39) the next two relations A substitution of (4.46), (4.47) into (4.45) gives (4.48) which combined with (4.44) imply C) Finally, it remains to investigate the integral M S(f ε ) 2 V ol η that appears in (4.28). We have where we used Lemma 4.4 to get the first equality and the definition of f ε to obtain the second one.
Explicit evaluation of constants
The aim of our investigations in this section is to calculate explicitly the constants a 0 (n), a 4 (n), b 0 (n) and b 4 (n) that appear on the right-hand side of (4.27). We begin with an algebraic lemma.
Convention 5.2. From now on we shall use, similarly to the CR case [22], the notation A ≡ B to designate the equivalence of the expressions A and B modulo terms that contain the torsion or the curvature or their covariant derivatives, except R αβγδ (q), as well as modulo terms of weight bigger than 4. We shall also use this notation when we omit expressions containing powers of ε that lead through the computations to powers different from ε 0 and ε 4 . The reason is that we know by (4.27) what kind of terms appear in the asymptotic expression of the qc Yamabe functional over the test functions.
We continue with a lemma which is crucial for the explicit evaluation of constants.
Lemma 5.3.
For the test function f ε defined in Section 4 and the parabolic dilation δ ε in qc parabolic normal coordinates defined in Section 3 the following formulas hold: where χ(x) is a homogeneous function of order 4 defined below in (5.20).
Proof. We begin with certain relations, valid for m ≥ 2, which are consequences of Proposition 3.2.
We examine the homogeneous parts of certain orders. For the lowest possible order we have the formula We see by a simple induction over m in (5.3) that only the even-degree homogeneous parts of η 1 , η 2 and η 3 are non-zero which implies The situation with the terms [ is more complicated. We have for the first one the decomposition We get by (5.3) and some simple calculations that x α x γ x δ dx ζ , i = 1, 2, 3, which leads to the relations (2) ∧ (dΘ 1 ) 2n ≡ 0. Furthermore, we obtain by (5.7) which gives by straightforward calculations the formula for the trace of dη 1 (4) : where the tensors ζ 1δβ and σ 1γβ are defined in (2.3). The latter combined with Theorem 3.6 yields tr(dη 1 (4) ) ≡ 0 which together with Lemma 5.1, a) lead to 2nη 1 (2) ∧ η 2 (2) ∧ η 3 (2) ∧ (dη 1 ) (4) ∧ (dΘ 1 ) 2n−1 ≡ 0. Substitute the latter and (5.8) into (5.6) to get (5.10) [ We continue with the investigation of the term [η 1 ∧ η 2 ∧ η 3 ∧ (dη 1 ) 2n ] (4n+10) . We have the decomposition and examine each of the terms in the right-hand side of (5.11). The first one is decomposed as follows: To handle the first term in the right-hand side of (5.12), we use that following easily from (5.3). The formula (5.13) yields the equivalence which together with Theorem 3.6 and some standard computations give The last formula and Lemma 5.1, a) imply In order to manipulate the second object in the right-hand side of (5.12), we use Lemma 5.1, b), the equivalence (5.9) and Theorem 3.6. After some long but standard calculations we establish the equivalence We continue with the second term in the right-hand side of (5.11), which can be decomposed as follows According to (5.7) and (5.9) the forms η i (4) and (dη i ) (4) have no dt j term which together with (5.16) imply We decompose the last term in the right-hand side of (5.11) in a similar manner, namely Since the forms η i (4) and η i (6) do not contain dt j term by (5.7) and (5.13), the last identity gives the relation Now we get by (5.11), (5.12), (5.14), (5.15), (5.17) and (5.18) the equivalence where the homogeneous function χ is defined by the equality The relations (5.4), (5.5), (5.10) and (5.19) lead to the equivalence 3 which in turn gives It follows by the very definition of the test function f ε that δ * ε (f ε ) 2 * = ε −4n−6 δ * ε (ψ 2 * )F 2 * , which together with (5.21) give the first formula in (5.2). We omit the multiplier δ * ε (ψ 2 * ) since we know from (4.32) and (4.35) that it appears only in the O(ε 5 )-part of the denominator of the asymptotic expansion (4.27).
In order to prove the second formula in (5.2), we need to find the effect of the parabolic dilation change of the variables on the squared norm We continue with the computations we commenced in Lemma 4.2. We have where the first identity follows from (4.2) and the second one-from (4.1). For the term s β α(2) we obtain in which we utilized (4.2) to get the first relation, whereas the second one is obtained by (4.1) and the last equivalence is a result of a repeated application of the relations in (5.3). In the same spirit we get (5.25) sα α(2) = 0, s β α(3) ≡ 0. Note that the first one is obtained with the help of (4.2) and (4.1), while we used (4.2), (4.1), (5.23), the first relation in (5.25) and (5.3) to establish the second one. Regarding sα α(3) , we obtain similarly that where we set Iα βγ := Iα −4n βγ and used (4.2), (4.1) and (5.3). We get for the term s β α(4) the following chain of relations where we took into account (4.2) to obtain the first equality, while the second one is a result of (4.1), (5.23), To find the effect of the parabolic dilation change of the variables on the squared norm (5.22) we describe the result of this change on the functions X α F ε and XαF ε . We obtain with some standard calculations that δ * ε (XαF ε ) = −4(n + 1)ε −2n−4 [(1 + |p| 2 ) 2 + |w| 2 ] −n−2 tα, where in the right-hand side of the last formula we set tα := tα −4n . Now we have In order to get the first equivalence in (5.31) we use f ε = ψF ε ≡ F ε since (4.36) shows that the function ψ contributes only in the integral I 2 which is O(ε 5 ) according to (4.48 where we omitted the terms that contain powers of ε different from 0 and 4. Moreover, t i must appear only in even powers due to the integration reasons. The last formula together with (5.21) proves the second formula in (5.2) which completes the proof of the lemma.
We give a detailed proof of the fourth formula in (5.38) since the proofs of the others are very similar. Denote the integral appearing on the left-hand side of the formula by I; then we have the decomposition (5.40), where each of the integrals I 1 , I 2 and I 3 corresponds to one of the cases considered below. Case 1: γ = δ, ξ = η. We have (5.41) I 1 = I 1ακ I 1λθ I 1 γβ I 1 ξι R β γθα (q)R ι ξκλ (q) π 2n 2(2n + 1)! = 16n 2 I 1ακ I 1λθ ζ 1θα (q)ζ 1κλ (q) π 2n 2(2n + 1)! = 0, where we used (5.32) to get the first identity and Theorem 3.6 to obtain the third one.
Proof. We begin with the remark that the natural volume form V ol Θ on the Heisenberg group G(H) can be expressed in terms of the qc parabolic normal coordinates as follows: (5.44) V ol Θ = (2n)!/8 dt 1 ∧ dt 2 ∧ dt 3 ∧ dx 1 ∧ . . . ∧ dx 4n .
Expert System Of Syzygium Aqueum Disease Diagnose Using Bayes Method
This research examines how to design an expert-system application to diagnose diseases of the Syzygium aqueum (burm.f.alston) plant, so that diseases in these plants are easier to detect. The study uses the Bayes method, which is well suited here because it is based on the probability values of the disease symptoms that arise in Syzygium aqueum (burm.f.alston) plants. The authors implemented the method in a web-based application. The output of this application is the probability value expressing the certainty of each disease, and the hypothesis with the largest value is then chosen.
Introduction
In farming, several things must be considered, including soil fertility, rainfall, recognition of symptoms that can damage plants, and how to provide proper care, for example when planting Syzygium aqueum (burm.f.alston). Sometimes Syzygium aqueum (burm.f.alston) plants also cannot bear fruit. All of this can be caused by plant diseases in the form of parasites or pests, which lead to unsatisfactory results so that farmers suffer losses; parasites and plant pests can strike at any time. The diseases that attack Syzygium aqueum (burm.f.alston) are generally caused by a group of fungi, such as root rot disease caused by the fungi Armillaria mellea and Phytophthora sp [2].
In general, the expert system is one of the fields of computer science that utilizes computers so that they can behave intelligently like humans [16] [18]. The Bayes method is one expert system method; it is useful for determining the probability values of expert hypotheses given the evidence obtained from facts observed on the object being diagnosed.
The expert system is a fairly old branch of artificial intelligence (AI), as such systems were first developed in the mid-1960s [3] [16]. An expert system seeks to transfer human knowledge into computers so that computers can solve problems in the way usually done by experts [5].
Methods
The framework that the researchers followed to obtain the data and information needed for this research is as follows:
Data Collection Method
This research uses a quantitative method based on direct studies or surveys. The method uses several closed questions or statements with predefined answer choices.
Bayes Method
Bayes' theorem was put forward in 1763 by Thomas Bayes, a British Presbyterian minister, and was later refined by Laplace. Bayes' theorem is used to calculate the probability of an event occurring based on the evidence obtained from observations [19].
The Bayes method formula is:
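Presumably the standard form of Bayes' theorem is intended here, written for an expert system with hypotheses (diseases) H_i and evidence (observed symptoms) E:

P(H_i | E) = P(E | H_i) P(H_i) / Σ_k P(E | H_k) P(H_k),

where P(H_i) is the prior probability of disease H_i, P(E | H_i) is the probability of observing the symptoms given the disease, and the denominator sums over all candidate diseases.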
Data Analysis Stage
Disease data are analyzed by linking them to symptom data, so that the results obtained are more accurate and describe the condition of the population as a whole. This process consists of identifying the problem and the relevant symptoms (for example, "there is a wound on the root") together with their probability weights, followed by the calculation itself. From the calculation process using the Bayes method, it can be seen that the Syzygium aqueum (burm.f.alston) plant has root rot disease with a confidence value of 0.699, or 69.9%.
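A minimal Python sketch of this kind of calculation, using purely hypothetical weights (the actual expert-assigned probabilities behind the value 0.699 are not reproduced here), could look as follows:

# Hypothetical priors P(H) and likelihoods P(E|H), for illustration only.
priors = {"root rot": 0.5, "leaf spot": 0.5}
likelihoods = {"root rot": 0.7, "leaf spot": 0.3}   # probability of the observed symptoms

evidence = sum(priors[d] * likelihoods[d] for d in priors)              # P(E)
posterior = {d: priors[d] * likelihoods[d] / evidence for d in priors}  # P(H|E)

best = max(posterior, key=posterior.get)   # hypothesis with the largest value
print(best, round(posterior[best], 3))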
The system implementation begins with selecting the disease symptoms observed in the plant, and the prediction results of the Naive Bayes algorithm are then displayed as in Figure 1, Figure 2 and Figure 3.
Conclusion
Based on the results of designing and building an expert system for diagnosing diseases of Syzygium aqueum (burm.f.alston) using the Bayes method, the authors conclude, among others: a. Problems concerning the symptoms that arise can be solved by applying the Bayes method. b. The application of the Bayes method to diagnose diseases of Syzygium aqueum (burm.f.alston) can be adjusted according to the observed symptoms and the weights used by the algorithm. c. A system built with a Web programming language can diagnose diseases of Syzygium aqueum (burm.f.alston) quickly and accurately.
WaveTF: a fast 2D wavelet transform for machine learning in Keras *
The wavelet transform is a powerful tool for performing multiscale analysis and it is a key subroutine in countless applications, from image processing to astronomy. Recently, it has extended its range of users to include the ever growing machine learning community. For a wavelet library to be efficiently adopted in this context, it needs to provide transformations which can be integrated seamlessly in already existing machine learning workflows and neural networks, being able to leverage the same libraries and run on the same hardware (e.g., CPU vs GPU) as the rest of the machine learning pipeline, without impacting training and evaluation performance. In this paper we present WaveTF, a wavelet library available as a Keras layer, which leverages TensorFlow to exploit GPU parallelism and can be used to enrich already existing machine learning workflows. To demonstrate its efficiency we compare its raw performance against other alternative libraries and finally measure the overhead it causes to the learning process when it is integrated in an already existing Convolutional Neural Network.
Introduction
The wavelet transform [18] is a powerful tool for multiscale analysis. It produces a mix of time/spatial and frequency data and has countless applications in many areas of science, including image compression, medical imaging, finance, geophysics, and astronomy [2]. Recently, the wavelet transform has also been applied to machine learning, for instance to extract the feature set to be used by a standard learning workflow [3,16] and to enhance Convolutional Neural Networks (CNNs) [4,12,21,15]. For many of these applications, and machine learning in particular, parallel execution on GPGPU accelerators is of critical importance to ensure the tractability of real-world problems. Therefore, a library that provides wavelet transform functionality for this context must efficiently integrate into existing computational pipelines, mitigating the loss of performance due to the cost of exchanging data between memories in different phases of the computation -e.g., if our pipeline runs on a GPU we would like to execute the wavelet on the same device, without the need to repeatedly move data between the GPU and the main memory.
In this work we present WaveTF, a library providing a fast 1D and 2D wavelet implementation that provides scalable parallel execution on CPU and GPU devices. WaveTF enables full GPU execution of computational pipelines including wavelet transforms. The library is built on top of the popular TensorFlow framework and is exposed as a Keras layer, making it easy to integrate into existing Python workflows based on these widely adopted frameworks. Our evaluation shows that WaveTF improves upon the state of the art by providing faster routines and by adding only a negligible overhead to machine learning applications.
The rest of this manuscript is structured as follows. In Sec. 2 we provide a description of wavelet transforms, followed by a discussion of the related work in Sec. 3. Section 4 describes the implementation of the WaveTF library, while an evaluation of its performance is presented in Sec. 5. Finally, Sec. 6 points the reader to the software and Sec. 7 concludes the manuscript.
Wavelet transform
Wavelet transforms are a family of invertible signal transformations that, given an input signal evolving in time, produce an output which mixes time and frequency information [8]. This paper will only focus on discrete transformations.
Haar transform
The simplest wavelet transform is the Haar transform, which, given as input a signal x = (x 0 , . . . , x n−1 ) (with n even), produces as output H(x) := (l 0 , . . . , l n/2−1 , h 0 , . . . , h n/2−1 ) = (l(x), h(x)), with l i and h i containing the low and high frequency components localized at times 2i and 2i + 1 of the original signal. Note that when the input size is not even, the signal must be extended using some form of padding. The wavelet transform is often iterated on the low components to carry out a multiscale analysis.
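A minimal NumPy sketch of one Haar level and of its multiscale iteration (assuming the orthonormal normalization, i.e. l_i = (x_2i + x_2i+1)/sqrt(2) and h_i = (x_2i − x_2i+1)/sqrt(2), which is one common reading of the omitted formulas):

import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar transform; len(x) must be even."""
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)    # l(x): local averages
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # h(x): local differences
    return low, high

def haar_multilevel(x, levels):
    """Iterate the transform on the low components for a multiscale analysis."""
    details = []
    for _ in range(levels):
        x, h = haar_1d(x)
        details.append(h)
    return x, details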
Daubechies wavelet
The Haar transform can be extended so that l i and h i are linear functions of more than two terms, as done by the following Daubechies-N=2 (DB2) transform (see [7] for details), where the vectors λ := (λ 0 , λ 1 , λ 2 , λ 3 ) and µ := (µ 0 , µ 1 , µ 2 , µ 3 ) are orthonormal. When working with larger kernels (4×2 in this case, where Haar was 2×2) the border of the signal must always be extended with padding to be able to invert the transformation.
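For reference, a standard choice of these filter vectors (an assumption; the paper's λ and µ may differ by ordering or sign conventions) is

λ = ( (1+√3)/(4√2), (3+√3)/(4√2), (3−√3)/(4√2), (1−√3)/(4√2) ), with µ_k = (−1)^k λ_{3−k},

so that each l_i (respectively h_i) is the inner product of λ (respectively µ) with four consecutive, suitably padded, input samples.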
Multidimensional transform
The wavelet transform is extended to multidimensional signals by executing it orderly in all the dimensions. For instance, in the two-dimensional case the input is a matrix and the output is obtained by first transforming the rows and then the columns; it is thus formed by 4 matrices (conventionally called LL, LH, HL, and HH), containing the low and high components for the horizontal and vertical directions (an example can be seen in Fig. 1). As with the 1D case, the multidimensional transformations can also be iterated to perform a multilevel analysis (see Fig. 2). When the input is a multichannel image (e.g., RGB or HSV), transformations are performed independently for each channel.
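A compact NumPy sketch of this row-then-column construction for a single channel (the LL/LH/HL/HH naming below follows one common convention and is a choice made here, not necessarily the library's):

import numpy as np

def haar_1d_axis(a, axis):
    """One orthonormal Haar level along the given axis (even size along that axis)."""
    a = np.moveaxis(a, axis, -1)
    pairs = a.reshape(a.shape[:-1] + (-1, 2))
    low = (pairs[..., 0] + pairs[..., 1]) / np.sqrt(2.0)
    high = (pairs[..., 0] - pairs[..., 1]) / np.sqrt(2.0)
    return np.moveaxis(low, -1, axis), np.moveaxis(high, -1, axis)

def haar_2d(img):
    """Single-level 2D Haar: transform rows first, then columns."""
    lo, hi = haar_1d_axis(img, axis=1)     # along rows
    ll, lh = haar_1d_axis(lo, axis=0)      # then along columns
    hl, hh = haar_1d_axis(hi, axis=0)
    return ll, lh, hl, hh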
TensorFlow and Keras
TensorFlow [1] is a powerful framework for efficiently manipulating multidimensional arrays (i.e., tensors) in parallel, and it provides APIs for Python, C ++ , Java and JavaScript. It has been developed as a fast and scalable framework for machine learning, and for this purpose it is complemented by the higher level Keras library [6]. However, TensorFlow offers many powerful algebraic routines which can be used independently of the application and its Python API can be seen as a parallel, GPU-enabled version of NumPy [23], with which it shares many similarities in the syntax and names of its methods. Note that TensorFlow supports a wide variety of computing hardware: it can run on multiple CPUs, GPUs and also on specialized ASICs known as TPUs [13], which are now available for end-users as part of the Google Cloud infrastructure.
We have chosen to implement WaveTF leveraging TensorFlow's rich API and scalability, so that it can easily exploit available parallelism, be easily and efficiently integrated with other programs that use TensorFlow and Keras and provide its functions to the growing machine learning community.
Related work
In this section we briefly describe three alternative wavelet libraries available for Python and published as open source software: PyWavelets, pypwt and TF-Wavelets. In Sec. 5.1 we will compare their raw performance to our library.
PyWavelets
PyWavelets [14] is probably the most widely used Python library for wavelet transforms. Its core routines are written in C and made available to Python through Cython. It supports 1D and 2D transformations and provides over 100 built-in wavelet kernels and 9 signal extension modes. Unlike WaveTF, it is a sequential library and runs exclusively on CPUs.
pypwt
pypwt [20] is a Python wrapper of PDWT, which in turn is a C ++ wavelet transform library written using the parallel CUDA platform and running on NVIDIA GPUs. It implements 1D and 2D transforms (though it does not support batched 2D transforms), supports 72 wavelet kernels and adopts periodic padding for signal extension.
TF-Wavelets
TF-Wavelets [10,17] is a Python wavelet implementation which, like WaveTF, leverages the TensorFlow framework. It features two wavelet kernels (Haar and DB2) and implements periodic padding for signal extension. It is the library more conceptually similar to ours, allowing, for instance, both input and output to reside in GPU memory, and it is thus the best match for a raw performance comparison against WaveTF. However, it lacks support of batched, multichannel, 2D transforms, which are typically required for machine learning applications in Keras. As a consequence, it does not provide a network layer for that framework.
Implementation
WaveTF is written in Python using the TensorFlow API. It exposes its functions via a Keras layer which can either be called directly or can be plugged easily into already existing neural networks. The library currently implements the Haar (Eq. (1)) and DB2 (Eq. (3)) wavelet kernels -which are the two most commonly used ones. To handle border effects, anti-symmetric-reflect padding (known as asym in MATLAB) has been implemented, which extends the signal by preserving its first-order finite difference at the border. WaveTF supports both 32-and 64-bit floats transparently at runtime.
Direct transform
In order to efficiently implement the wavelet transform in TensorFlow we first reshape it as a matrix operation. Let us consider, as an example, the 1D DB2 transform with input size n, where n is a multiple of 4. The original formulation of the transform presented in Sec. 2.1.2 can be rewritten as a matrix multiplication. In order to generate the required data matrix we need to group the data vector by 4 and interleave it with a copy of itself, shifted left by two (plus some constant operations for the padding at the border). This operation can be implemented with the reshape, concat and stack methods provided by TensorFlow. Alternatively, the specialized conv1d method can be employed instead of the standard matrix multiplication, somewhat simplifying the data rearrangement. We have implemented both variants and we have seen that the convolution one is faster in all considered cases, except for the 1D-Haar transform (for which we have thus adopted the matrix multiplication algorithm). Note that when n is not a multiple of 4, the border values are arranged slightly differently, but the procedural steps remain the same.
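As an illustration of the reshape-based strategy (a sketch only, not WaveTF's actual code, which additionally handles padding, batching and multiple channels), a single 1D Haar step can be written with plain TensorFlow ops as follows:

import math
import tensorflow as tf

def haar_1d_tf(x):
    """One Haar level on a batch of 1D signals; x has shape [batch, n] with n even."""
    even, odd = x[:, 0::2], x[:, 1::2]          # group the samples two by two
    s = 1.0 / math.sqrt(2.0)
    low = (even + odd) * s                      # low-frequency half
    high = (even - odd) * s                     # high-frequency half
    return tf.concat([low, high], axis=-1)      # output shape [batch, n]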
Inverse transform
In this section we show how to properly invert the DB2 wavelet transform, taking into account the border effects while keeping the padding as small as possible. This is done both to justify the exact algorithmic steps we adopted and to offer a future reference for alternative implementations by other authors. To the best of our knowledge the following derivation, at this level of detail, is original, though it is likely that it might be already present, at least implicitly, in the vast literature on Wavelet transform.
To better understand how to properly handle the border effect when computing the inverse, let us reshape the transformation above in a slightly different way, i.e., as w = W x = KP x, with K being the n × (n + 2) kernel matrix and P the (n + 2) × n (anti-symmetric-reflect) padding matrix. We can then decompose K, P and W in (non-square) blocks (with each block shape shown between parentheses). To invert W we first note that K 11 has orthonormal rows and thus admits its transpose as a right inverse: K 11 K t 11 = I n−4 . Furthermore, W 00 := K 00 P 00 and W 22 := K 22 P 22 have linearly independent columns and thus admit a (Moore-Penrose) left inverse: W + 00 W 00 = W + 22 W 22 = I 2 . Finally, because of the choice of coefficients in Eq. (3), the remaining blocks satisfy a further compatibility relation. We can then verify the explicit form of the inverse of W: its non-border elements can be computed similarly to the direct transform case, and its border values are obtained from the block inverses above.
Correctness
In addition to the formal derivation given above, we have tested our implementation for consistency against PyWavelets, and we have composed direct and inverse transforms to verify that they result in an identity map (up to numerical precision errors). The randomized test code is included with the source code and is runnable with the pytest framework [19]. Note that, contrary to PyWavelets, WaveTF always uses a minimal padding when transforming: e.g., WaveTF's output for an input vector of size 10 is a 2×5 matrix, whereas PyWavelets produces a 2×6 matrix when using the DB2 kernel and a 2×5 one when using the Haar kernel.
Performance results
The performance of WaveTF has been tested in two ways: • by executing raw signal transforms, leaving the output data available for the user either in RAM or in the GPU memory; • by integrating it in a machine learning workflow and measuring the overhead it adds to training and evaluation. In the first test, we also computed the same transformations with the PyWavelets, pypwt and TF-Wavelets libraries to compare their performance to WaveTF's. In order to better exploit the computation power provided by the GPU [5], the tests have been run with single-precision floating-point types: np.float32 for PyWavelets, tf.float32 for WaveTF and TF-Wavelets, and pypwt compiled to use 32-bit floats.
The hardware and software used in the tests are detailed in Tables 1 and 2.
Raw transformation
PyWavelets operates in RAM and pypwt uses RAM for input and output but runs its computation in the GPU. On the other hand, WaveTF and TF-Wavelets operate on TensorFlow tensors which, when GPUs are available and used, reside in the GPU memory. We expect to see this difference reflected in the runtimes, because of the overhead of moving data between GPU and RAM. We have recorded the wall clock time of one- and two-dimensional Haar and DB2 wavelet transforms using WaveTF, PyWavelets, pypwt and TF-Wavelets. For WaveTF, we have measured both the time required when leaving the data in the GPU memory and when input and output are required to be in main memory. For TF-Wavelets we have instead focused on the fastest case of working only on GPU memory, to offer a fair comparison for WaveTF. The test procedure for the one-dimensional case is as follows:
• A random array of n elements is created, with n ranging from 5 · 10 6 to 10 8 .
• For the non-batched case the array is used as is (i.e., shape = [n]); for the batched case it is reshaped to [b, n/b], with b = 100.
• The transform, on the same input array, is executed from a minimum of 500 up to a maximum of 10000 times (the larger counts being used for the smaller data sizes); the total time is measured and the time per iteration is recorded.
For the two-dimensional case, the input matrix is chosen to be as square as possible given the target total size of n elements, i.e., with shape approximately [√n, √n].
Note that we have not measured the time to execute a single transformation, but instead the time to execute many of them grouped together (up to 10000), because the single execution time when working in GPU memory would have been completely overshadowed by the setup time required for the library calls. The standard deviations for these grouped measures are all well below 1%, so they are not shown in the plots.
Discussion
As can be seen from the data in Fig. 3 and Table 3, there is a huge gap in performance between PyWavelets and pypwt and the TensorFlow programs. The performance of PyWavelets is explained by the fact that it is a serial program and that it does not exploit the parallelism available in the GPU. pypwt, on the other hand, does use the GPU but incurs a big overhead caused by the data movement between GPU and main memory -as demonstrated by the similar performance achieved by WaveTF when it is forced to have both input and output in RAM. When working directly in GPU memory WaveTF and TF-Wavelets have a big performance advantage over the other evaluated libraries, with WaveTF being about 70x faster than PyWavelets and pypwt on 1D Haar and 30-40x on 1D DB2. For the 2D cases WaveTF has a speedup greater than 40x over PyWavelets and a 12-14x one over pypwt. This test scenario mirrors the common situation in TensorFlow-based machine learning workflows using wavelet transforms.
The speedup of WaveTF against TF-Wavelets is still quite impressive, considered that both libraries adopt the same general strategy, and it ranges from 1.6x up to 3.2x. This improvement is mainly due to a careful algorithmic implementation as to avoid redundant computations.
Machine learning
In this section we quantify the overhead of integrating WaveTF in machine learning workflows. For this purpose we consider a classification problem on a standard image dataset solved by a simple CNN. In our experiment we measure the training and evaluation times before and after enriching the CNN with wavelet layers.
For this test we have adopted the Imagenette2-320 dataset [11] -a subset of 10 classes from ImageNet [22] -consisting of 9469 training and 3925 validation RGB images. For the classification task we used a basic CNN network featuring 5 levels of convolution, followed by downscaling which halves the spatial feature dimensions at each level (i.e., 320x320 → 160x160 → 80x80 → 40x40 → 20x20). To enrich this network with the wavelet transform, the features of each newly downscaled layer are concatenated with the wavelet components (LL l , LH l , HL l , HH l ) at the corresponding level of scale, before the following convolution is performed (Fig. 4); the wavelet transform is applied iteratively as shown in Eq. (2). This approach has been used, e.g., for improving texture classification [9]. Since the objective of our experiment is only to quantify the computational overhead of adding wavelet features via WaveTF to the network, we disabled all forms of data augmentation for the training -these procedures would add their own considerable overhead which would confound our results. To compute the training overhead, we measured the wall clock time required to train the model for 20 epochs, with and without enriching the network with the wavelet features. We repeated this training process 20 times (after a first, unmeasured run, used to set the memory buffering to a stationary state). On the other hand, to measure the overhead incurred in evaluation we used the trained network to evaluate all the images in the dataset and repeated the process 20 times.
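A schematic Keras fragment for one such enriched level (the layer and variable names are hypothetical; wavelet_components stands for whatever produces the same-scale LL/LH/HL/HH maps, e.g. a WaveTF layer, and the filter count is arbitrary):

import tensorflow as tf
from tensorflow.keras import layers

def enriched_block(downscaled_features, wavelet_components, n_filters=64):
    """Concatenate downscaled CNN features with the same-scale wavelet maps
    (LL_l, LH_l, HL_l, HH_l) before the next convolution."""
    x = layers.Concatenate(axis=-1)([downscaled_features] + list(wavelet_components))
    return layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)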
Discussion
As can be seen from the results shown in Table 4, the overhead of adding wavelet features to the existing 5-level CNN is below 1%, both in training and evaluation, thus allowing its use at an almost negligible cost.
The WaveTF source code is publicly available, together with accompanying documentation and some usage examples, which also include the CNN used in this paper. The link to the GitHub repository is shown in Table 2.
Conclusion and future work
In this work we have presented an efficient wavelet library which leverages TensorFlow and Keras to exploit GPU parallelism and allows for easy integration into already existing machine learning workflows. Since the wavelet transform is characterized by high parallelism and low computational complexity (its time complexity being O(n) for an input of size n), minimizing communication is pivotal to achieving good performance, and in this work we have shown how to do so by limiting the transfers between GPU and main memory whenever possible.
In the future we plan to extend the library to include other popular wavelet kernels and padding extensions, as well as to extend it to 3D signals.
The human adenovirus E1B-55K oncoprotein coordinates cell transformation through regulation of DNA-bound host transcription factors
Significance Adenovirus E1B-55K proteins can promote cell transformation, likely by operating as functional inhibitors of cellular p53. Through our comprehensive analysis of genomic localizations of chromatin-bound E1B-55K in transformed cells, we confirmed that this oncoprotein represses gene expression by indirectly binding to p53-dependent promoters via the tumor suppressor. Notably, our research has exposed undescribed interactions between E1B-55K and multiple p53-independent promoters and enhancers, resulting in transcriptional repression. These interactions involve host transcription factors that are well-known contributors to cancer and stress response signaling, including members of the TEAD (TEA domain) family, which play crucial roles as regulators of the Hippo pathway. Our findings revealed the remarkable versatility of E1B-55K oncoproteins as transcriptional deregulators of a wide variety of integral cellular pathways.
Transduction of BRK cells
BRK cells were seeded in 6-well plates (Sarstedt) at 80% confluence 16-24 h before lentiviral transduction. Adherent cells were transduced twice, at a multiplicity of infection of 1 and with a 24 h interval between each transduction. The transductions were performed in DMEM supplemented with 10% FCS, 20 mM HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, Sigma-Aldrich) and 8 µg/ml polybrene (Millipore). After 48 h cells were harvested and sorted by fluorescence-activated cell sorting. Double-positive cells (BFP- and Venus-positive cells) were seeded onto an appropriate plate depending on the cell number. The medium was changed every 48 h for 14 days and the cells were subsequently expanded into polyclonal cell lines. In the case of the defective HA-tagged E1B-55K mutant K104R, the standard transduction protocol was unsuccessful and reverse transduction was applied, followed by antibiotic selection. Briefly, 1 ml of cell suspension (approximately 350,000 cells) was seeded onto 6-well plates containing 100 µl of lentiviral concentrate. Transduction was performed in DMEM supplemented with 10% heat-inactivated FCS as well as the abovementioned supplements. Cells were incubated with the lentiviruses for 48 h. For selection, 50 µg/ml blasticidin-S (Invitrogen) and 200 µg/ml G418 (Calbiochem) were applied for 4 weeks. The medium was changed every 3 to 4 days. Emerging foci were pooled and expanded into polyclonal cell lines.
Indirect immunofluorescence
Cells were seeded onto 18×18 mm glass coverslips placed in 6-well plates and fixed 24 h post plating with 4% paraformaldehyde (PFA) at room temperature (RT) for 15 min. PFA was aspirated and coverslips were blocked for 45 min with Tris-buffered saline-BG (TBS-BG; 20 mM Tris/HCl (pH 7.6), 137 mM NaCl, 3 mM KCl, 1.5 mM MgCl2, 0.05% [v/v] Tween 20, 0.05% [w/v] NaN3, 5% [w/v] glycine, 5% [w/v] BSA). The blocking solution was discarded and cells were incubated for 1 h with the respective primary antibodies diluted in PBS. The samples were washed three times with PBS-T and subsequently incubated with the corresponding Alexa Fluor 488- or 555-conjugated secondary antibodies for 1 h at 4°C in a dark chamber. After three final washing steps, coverslips were mounted in Rotimount (Dako) and digital images were acquired on a Nikon A1 confocal microscope (with a Nikon Ti2 frame). In order to visualize A12 E1B-55K, a Leica DMI 6000B was used. Images were cropped and processed with ImageJ 1.51j and assembled in Inkscape 1.1.
RNA-seq
The polyadenylated (poly(A)) mRNA fractions were purified from the total RNA with the NEBNext poly(A) mRNA Magnetic Isolation Module (New England Biolabs, USA), and RNA-seq libraries were generated using the NEXTflex Rapid Directional qRNA-Seq Kit (Perkin Elmer, Bioo Scientific) according to the manufacturer's recommendations. The concentrations and sizes of the final cDNA libraries were measured via an RNA High Sensitivity Chip on an Agilent 2100 Bioanalyzer (Agilent). All samples were normalized to 2 nM and pooled at equimolar concentrations. The library pool was sequenced on a NextSeq500 (Illumina, USA) using single-read (1 × 75 bp) flow cells via the NextSeq 500/550 High Output Kit v2.5 (Illumina, USA). The samples were sequenced at the high-throughput sequencing technology platform of the LIV.

ChIP-seq

All ChIP and input libraries were generated using the NEXTFLEX ChIP-Seq Library Prep Kit (Perkin Elmer, Bioo Scientific) according to the manufacturer's instructions. The library pool was sequenced on a NextSeq500 (Illumina, USA) using single-read (1 × 75 bp) flow cells via the NextSeq 500/550 High Output Kit v2.5 (Illumina, USA). The samples were sequenced at the high-throughput sequencing technology platform of the LIV.
Sequencing data analyses
RNA-seq

RNA-seq expression data were generated by quantifying single-end mRNA reads against a decoy-aware mRatBN7.2 (Ensembl 105) transcriptome with the salmon quantifier software (10) via the selective-alignment algorithm. This transcriptome was generated by concatenating the respective non-coding and coding RNA in front of the Rattus norvegicus genome, which yields a combined reference file that is subsequently used by salmon to generate an index. This workflow can be obtained from the Zenodo digital library (11) with the DOI 10.5281/zenodo.8047240. The differential gene expression of the quantification data was analyzed with the DESeq2 package according to the developer's vignette (12, 13). The script utilized here can be obtained from the Zenodo digital library (11) with the DOI 10.5281/zenodo.8048294. The output of this DEG analysis, including the normalized count table and all investigated conditions, can be found in Supplementary Data File 3A. These annotated gene lists were frequently sub-selected into upregulated, downregulated or non-significant gene sets, based on their respective log2 fold expression values, and used within Figures 2, 3, 4 and 5. The Metascape (14) algorithm was used extensively to provide biological context to the generated gene lists using the Reactome (15), KEGG (16) and WikiPathways (17) databases (Supplementary Data File 3B). Heatmaps and other figures associated with selected p53 targets (Figs. 2B and 3A) are based upon a curated (18) list of 116 genes that were strongly associated with several different p53 ChIP-seq data sets. A slightly reduced list corresponding to their genomic localization in the Rattus norvegicus system can be found in Supplementary Data File 4. An overview of these workflows can be found in Fig. S3.
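A minimal sketch of the downstream sub-selection step described above is given below, assuming the DESeq2 results have been exported to a CSV table with the standard "log2FoldChange" and "padj" columns; the file name is hypothetical, and the FDR threshold of 0.1 follows the cut-off used elsewhere in this study.

    import pandas as pd

    # Hypothetical export of the DESeq2 results table (gene IDs as the index).
    res = pd.read_csv("deseq2_results.csv", index_col=0)

    sig = res["padj"] < 0.1                         # FDR <= 0.1; NaN padj evaluates to False
    up = res[sig & (res["log2FoldChange"] > 0)]     # significantly upregulated genes
    down = res[sig & (res["log2FoldChange"] < 0)]   # significantly downregulated genes
    non_sig = res[~sig]                             # non-significant (or untested) genes

    print(len(up), len(down), len(non_sig))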
ChIP-seq Peak identification, visualization and annotation
The single-end reads obtained by sequencing of DNA enriched via HA-E1B-55K-targeted dual-X-ChIP-seq, or their inputs and serotype controls, in BRK cells were sorted and quality-filtered with samtools (19) using standard settings and subsequently aligned to the Rattus norvegicus reference genome mRatBN7.2 (Ensembl 105) using bowtie2 (20) with standard settings. Sequencing-depth-normalized genomic coverage files were generated with the deeptools software suite (21) by invoking the bamCoverage tool with -RPGC, -e 150 and -bs 5 settings, assuming an effective genome size of 2.2e9 bp. All heatmaps and profile plots were generated using the deeptools computeMatrix and plotHeatmap tools with varying settings. Significant peaks of individual replicates were identified using the MACS2 algorithm (22) with q-value cut-offs of 0.05, assuming the aforementioned genome size. The peak data from replicates were combined via the MSPC software (23) with standard settings to generate a single validated peak data set for each E1B-55K protein. In order to reduce background noise and the number of putative false-positive peaks, we discarded any peak that was also identified in the respective serotype control set using the bedtools suite (24), as well as removing any peak that had more than 50% overlap with repetitive genomic regions when compared to a repeat-masked filter file obtained from the UCSC repository for the mRatBN7.2 genome. Any putative peaks located on non-canonical chromosome tracks were similarly removed. The remaining peaks were subsequently annotated based on their nearest gene using the ChIPseeker package (25), followed by biological contextualization via Metascape (Supplementary Data File 2A and B), using the Reactome, KEGG and WikiPathways databases. These workflows can be obtained from the Zenodo digital library (11) with the DOIs 10.5281/zenodo.8048302 and 10.5281/zenodo.8048509, respectively.
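The repeat-overlap filter described above was carried out with bedtools; the fragment below sketches the same 50% criterion in plain Python for a single chromosome, using made-up interval lists so that it is self-contained. It does not reproduce the serotype-control filtering or the BED bookkeeping of the actual workflow.

    def overlap_bp(a_start, a_end, b_start, b_end):
        """Number of bases shared by two half-open intervals."""
        return max(0, min(a_end, b_end) - max(a_start, b_start))

    def keep_peak(peak, repeats, max_fraction=0.5):
        """Keep a peak unless more than max_fraction of it lies in repeat regions."""
        start, end = peak
        covered = sum(overlap_bp(start, end, r0, r1) for r0, r1 in repeats)
        return covered / (end - start) <= max_fraction

    peaks = [(100, 300), (1000, 1200), (5000, 5100)]   # (start, end) on one chromosome
    repeats = [(150, 280), (5020, 5090)]               # repeat-masked intervals
    filtered = [p for p in peaks if keep_peak(p, repeats)]
    print(filtered)   # only (1000, 1200) survives; the other two peaks are mostly repetitive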
Genome track visualization and region quantification
The EaSeq (18) software suite was used to generate the coverage tracks depicted in Fig. 4B. The region quantification program was used to calculate the ChIP-seq signal of all A12 E1B-55K replicates around specific gene sets, ranging from the transcriptional start to end sites. The normalization program was subsequently used to perform quantile normalization on these region quantifications prior to statistical analysis and visualization via GraphPad Prism v9.4.0 (673).
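The quantile normalization of the region quantifications was performed inside EaSeq; the NumPy version below is only an illustrative sketch of the same idea (per-column ranking against a shared reference distribution) and ignores tie handling.

    import numpy as np

    def quantile_normalize(mat):
        """Quantile-normalize the columns of a (regions x replicates) matrix."""
        ranks = np.argsort(np.argsort(mat, axis=0), axis=0)   # per-column ranks
        reference = np.sort(mat, axis=0).mean(axis=1)         # mean of each rank across columns
        return reference[ranks]

    signal = np.random.lognormal(size=(1000, 3))              # e.g., three ChIP-seq replicates
    normalized = quantile_normalize(signal)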
Motif identification
We used the HOMER (26) suite to identify significantly enriched nucleotide motifs within the final ChIP-seq peak data sets by invoking the findMotifsGenome.pl program on a 200 bp region around the peak summits, allowing for two mismatches. The motifs identified in this way were obtained from the de novo motif discovery, visualized in Figs. 1E, 4D and 5D-F, and summarized in Supplementary Data File 1B. Specific peaks that contained either only one or a specific combination of the most prevalent motifs were isolated with the annotatePeaks.pl program contained in the HOMER suite and annotated with their respective nearest genes and their exact genomic location into either promoter- or enhancer-located sets. To increase the precision of peak association to functional enhancers, a specific set of putative enhancers identified in eleven different Rattus norvegicus cell lines (27) was manually combined to create a consensus enhancer localization file (see DOI 10.5281/zenodo.8048320). Here, the localization of any peak described as "distal intergenic" was compared to the consensus file and only intersecting peaks were assigned the "enhancer" tag. This approach removed around 15% of distal-intergenic peaks, which were not clearly associated with a previously identified enhancer region. The enhancer consensus list can be found in Supplementary Data File 8. Through this localization and the expression status of their nearest associated genes, we calculated overlaps based on the number of genes found in the respective groups. These data were visualized using GraphPad Prism in Fig. 5D-F and Figures S5A and B as well as S6A-C. The individual motif sets, their annotated genes and expression levels can be found in Supplementary Data File 6B. The data associated with the top three motifs identified by A12 E1B-55K ChIP-seq can be found in Supplementary Data File 2B.
Statistical analyses
The Pearson correlation coefficient calculations depicted via GraphPad Prism in Fig. 5A-C are presented in Supplementary Data File 6A, where the strength of interaction, based on the peak score, was correlated with the log2 fold change of corresponding significantly changed genes (adjusted p-value < 0.1). The statistical relationship of the EaSeq-calculated and normalized ChIP-seq signal of three A12 E1B-55K ChIP-seq replicates from three different cell lines between the significantly downregulated, upregulated or non-significant clusters was calculated using a two-way ANOVA test in GraphPad Prism and is presented in Supplementary Data File 5.

Here, (A) contains a set of specific p53 target genes, while (B) represents genes that contain peaks directly in their gene body. The upper plot shows the ChIP-seq signal profile track of scores calculated over 50 bp bins, with the respective track annotated in the top right corner of the first profile plot. All gene bodies are visualized from the 5' to 3' direction and normalized to a size of 50 kb. TSS depicts the transcriptional start site, while TES depicts the transcriptional end site. The signal around the *TSS (in red) is attributed to interaction with an alternative transcriptional start site of the CDKN1A and RPS27L genes. The heatmaps visualize the profile below their respective profile plot, with the intensity of signal represented by a greyscale ranging from white (no signal) to black (strong signal). The absolute number of genes within each set is depicted below the respective heatmap plots.
Supplementary Figure 9 :
A12 E1B-55K ChIP-seq signal around gene bodies of deregulated gene sets with associated binding events or non-pre-selected gene sets. Visualization of the A12 E1B-55K ChIP-seq signal around the gene bodies of genes that are either significantly upregulated (left window; log2 fold change ≥ 0 and FDR ≤ 0.1), non-significant (middle window; FDR ≥ 0.1) or significantly downregulated (right window; log2 fold change ≤ 0 and FDR ≤ 0.1) according to their relative mRNA abundance compared to an E1B-negative cell line. Here, (A) contains a set of genes that are generally associated with peaks, while (B) represents all genes that have an mRNA transcript base mean above 50. The upper plot shows the ChIP-seq signal profile track of scores calculated over 50 bp bins, with the respective track annotated in the top right corner of the first profile plot. All gene bodies are visualized from the 5' to 3' direction and normalized to a size of 50 kb. TSS depicts the transcriptional start site, while TES depicts the transcriptional end site. The heatmaps visualize the profile below their respective profile plot, with the intensity of signal represented by a greyscale ranging from white (no signal) to black (strong signal). The absolute number of genes within each set is depicted below the respective heatmap plots. Y-axis: ChIP-seq signal score (mean, 50 bp bins).
AEROSOL LIDAR FOR THE RELATIVE BACKSCATTER AMPLIFICATION MEASUREMENTS
Backscatter amplification is present only in a turbulent atmosphere, when the laser beam propagates twice through the same inhomogeneities. We propose a technical solution to detect backscatter amplification. An aerosol micro-pulse lidar with beam expansion via the receiving telescope was built to study this effect. Our system allows simultaneous detection of two returns from the same scattering volume: exactly on the axis of the laser beam and off the axis.
INTRODUCTION
This amplification effect arises because of the correlation of forward and backward light waves propagating through the same inhomogeneities of a random medium [1,2]. The intensity of the light scattered on the laser beam axis should be larger than that around the beam. Moreover, the amplification effect is possible only within a very small angle (the Fresnel zone) around the optical axis of the beam. One approach to measuring backscatter amplification is known [3].
We designed a lidar transceiver to measure the relative backscatter amplification (RBSA). It measures the average intensity of the scattered radiation exactly on the laser beam axis and the average intensity off axis, where there is no amplification at all. Our design is based on a laser-radar transceiver with beam expansion via the receiving telescope [4]. The same telescope works to transmit the laser pulse to the atmosphere and to receive radiation scattered from molecules and aerosol particles. Such systems have good thermomechanical stability, which is important for long-term continuous observations. It is also important to obtain the on-axis and off-axis returns at the same time. That is why our system has one transmitter channel and two receiver channels. The schematic of the lidar transceiver is shown in Fig. 1. The main element of the system is an afocal mirror telescope. An antenna switch sends a narrow laser beam to the edge of the telescope, which is pre-focused at a distance L = 1 km. A screen with two round apertures is installed in front of the telescope. The laser pulse is emitted into the atmosphere through the top aperture. Scattered radiation from the atmosphere passes through both apertures to the antenna switch (transmit-receive switch). The first, on-axis beam returns to the system along exactly the same path as the beam transmitted to the atmosphere. The second, off-axis beam from the scattering volume returns through a different path. In a turbulent atmosphere, in the presence of RBSA, the on-axis return exceeds the off-axis return. In the case of weak turbulence or no turbulence, the on-axis return will be close or equal to the off-axis return.
Equations for the on-axis and off-axis lidar returns are

P1(z) = C1 E0 G1(z) N(z) β(z) T²(z) / z²,  (1)
P2(z) = C2 E0 G2(z) β(z) T²(z) / z²,  (2)

where z is the distance; E0 is the pulse energy; C1 and C2 are hardware constants that include channel transmittance, receiving aperture, and detector efficiency; G1(z) and G2(z) are the geometric factors of the receiving channels; β(z) is the backscatter coefficient; N(z) is the backscatter amplification coefficient; T²(z) is the double atmospheric transmittance.
The ratio of the lidar returns P1(z) and P2(z) is proportional to the RBSA:

R(z) = P1(z) / P2(z) = [C1 G1(z) / (C2 G2(z))] N(z).  (3)

According to the theory [1], the amplification coefficient must satisfy 1 ≤ N ≤ 2.
EXPERIMENTAL SETUP
Figure 2 shows an optical diagram of the lidar. A short light pulse from laser 1 is directed to beam splitter 3 (50/50) by turning mirror 2. We use a fiber laser (λ = 532 nm) with an outgoing beam about 5 mm in diameter. Box 5 is the energy monitor.
Turning mirrors 6 and 7 direct the beam with a small offset from the axis of the Mersenne telescope (mirrors 8 and 9). After the telescope, the laser beam, with a diameter of 45 mm, is transmitted into the atmosphere through the top aperture of screen 10. Radiation scattered by the atmosphere can be detected by the lidar only when it passes through the top and bottom holes of screen 10. Note that the aperture size is about equal to the Fresnel zone size √(λL), and the separation between the apertures is 5 times larger than the aperture size.
Incoming beams inside the system are parallel to each other. They pass through the beam splitter 3 and the screen 11, which blocks the parasitic internally scattered radiation within the system at the moment when the laser transmits the pulse. Lens 12 focuses the beams on the field stop 13. The size of aperture 13, the focal length of lens 12 and the telescope magnification (10×) define the field of view of the lidar. An interference filter 14 is installed after the field stop 13 to block the background light. Then photodetectors 15 and 16 detect the on-axis and off-axis beams, respectively. Photomultiplier tubes (Hamamatsu H10721P, 8-mm photocathode, efficiency ∼10%) are used. Figure 3 shows a picture of our instrument. The system parameters are listed in Table 1.
CALIBRATION
Equation (3) for the ratio R(z) contains the hardware constants C1, C2 and the geometric functions G1(z), G2(z). Let us introduce the ratio

R0(z) = C1 G1(z) / (C2 G2(z)),  (4)

which is equal to R(z) for N(z) ≡ 1. R0(z) is the calibration ratio, because it determines the relative sensitivity of the on-axis and off-axis channels. The ratio R0(z) can be determined in a clean atmosphere with a weak wind, when the turbulence intensity is very low. Often these conditions occur at dawn and at sunset. Dividing the ratio (3) by the calibration ratio (4), the normalized ratio R′(z) is obtained:

R′(z) = R(z) / R0(z) = N(z).  (5)

The ratio R′(z) is equal to the relative backscatter amplification coefficient N(z).
In reality, the recorded returns P̃1(z) and P̃2(z) differ from the signals in expressions (1) and (2):

P̃1(z) = P1(z) + A1(z) + B1,  (6)
P̃2(z) = P2(z) + A2(z) + B2,  (7)

where B1, B2 are the background noise and A1(z), A2(z) are the aftereffects (afterpulses) of the photodetectors, which are caused by "blinding" of the detectors at the time of sending the laser pulse into the atmosphere and the subsequent relaxation.
To obtain "clean" signals and is necessary to determine the response of the photodetectors and to the instant light at the moment of transmitting a laser pulse to the atmosphere.It is better do this at night before observation cycle.It is necessary to record the signals ̃ and ̃ with closed exit port to the atmosphere.Thus, the echo and background terms can be excluded from expressions (6) and (7).Let's call this procedure a "calibration-1".We usually perform calibration-1 at the beginning of the measurement cycle and at the end of it.6) and (7).Signals from on-axis channel (#1) is shown on the left (a), and from off-axis channel (#2) is shown on the right (b).The background (BG) has the constant value.Afterpulses curves (AP) are significantly different for the two channels.This is due to much larger pulse exposure of the detector of the on-axis channel at the moment of the shot than that for the off-axis channel.First receiving channel (on-axis) is exactly opposite to the beam splitter 3 (Fig. 2), where the laser beam is directed towards the telescope.In accordance with equations (8) and ( 9) "net" echoes in Fig. 4 (curves P) were obtained by subtraction the afterpulse (curves AP) and background (BG) from the recorded signals (curves P+AP+BG).Data were received in October 6, 2014 at 03:40 (hereinafter, the time is local).Note the values of returns on all plots in photons per shot.Total number of laser shots for every profile is 3×10 7 .Signals and are obtained from ( 8) and ( 9) by subtracting the background (BG) and PMT's afterpulse (AP).In accordance with the formula (5), in order to determine the mutual relative sensitivity of the receiving channels a calibration ratio in the absence of turbulence has to be determined (calibration-2).An example of this ratio is shown in Fig. 5, which uses the signals from Fig. 4. circles show obtained by using the averaged signals.A smooth bold black curve in this plot is a 4th degree polynomial fit.This interpolation is used as calibration-2 to obtain the amplification coefficient .
DATA EXAMPLES
The ratio R0(z) (calibration-2) is the relative sensitivity of the receiving channels. Therefore R0(z) can also be used for comparison of the returns from the on-axis and off-axis channels in the analysis of experimental data. In the absence of amplification, when N(z) = 1, the far-field returns P1(z) and R0(z) × P2(z) should be equal. Figure 6 shows two pairs of signals received at different times. In the data example shown in Fig. 6(a), the backscatter amplification was absent and the returns are nearly equal. In the case shown in Fig. 6(b), the amplification occurs: it is clear that the on-axis return P1(z) is larger than the off-axis return R0(z) × P2(z). The lidar is designed for long-term, unattended operation. The system currently does not have a remote control, but it will be implemented in a newer version of the instrument. It was possible to build an instrument with laser beam expansion via the receiving telescope to detect relative backscatter amplification in the atmosphere. We hope this approach will be useful for future studies of atmospheric turbulence.
Fig. 3. Picture of the RBSA lidar. A white cover with two in/out ports is behind the lidar.
Fig. 4. Profiles of signals for (a) the on-axis channel (#1), left, and (b) the off-axis channel (#2), right. Background (BG), afterpulse (AP), return from the atmosphere (P+AP+BG), and "clean" return (P). The spike at 1950 m is from a solid obstacle that blocks the pulse. Data recorded on 10 October 2014 at 03:40.
Fig. 5. The calibration ratio R0(z) (calibration-2) from data recorded on 10 October 2014 at 03:40 (see Fig. 3). Our system has a spatial resolution of 2 m. A light-gray curve in Figure 5 is the ratio plotted with a resolution of 2 m. To improve the statistical error, the signals were averaged over 25 points, and the spatial resolution went down to 50 m.
Figure 7 presents the ratio R′(z) for the pairs of signals described above (see Fig. 6). The calibration ratio retrieved on 6 October 2014 at 03:40 (see Fig. 4 and Fig. 5) is shown as a solid gray line. The normalized ratio R′(z) is shown by circles. Dashed lines show the area of ±5% around N = 1, when turbulence is very low. The resulting ratio R′(z) has an error of less than 10%.
Fig. 7. Normalized ratios R′(z) obtained (a) in the absence of backscatter amplification and (b) in its presence. Ratios (a) and (b) here correspond to the pairs of signals (a) and (b) in Fig. 6. The calibration ratio was determined from the data recorded on 10 October 2014 at 03:40 (see Fig. 4 and Fig. 5).
Long‐Term Variability and Tendencies in Mesosphere and Lower Thermosphere Winds From Meteor Radar Observations Over Esrange (67.9°N, 21.1°E)
Long-term variabilities of monthly zonal (U) and meridional (V) winds in the northern polar mesosphere and lower thermosphere (MLT, ∼80-100 km) are investigated using meteor radar observations during 1999-2022 over Esrange (67.9°N, 21.1°E). The summer (June-August) mean zonal winds are characterized by westward flow up to ∼88-90 km and eastward flow above this height. The summer mean meridional winds are equatorward, with a strong jet at ∼85-90 km that weakens above this height. The U and V exhibit strong interannual variability that varies with altitude and month or season. The responses of U and V anomalies (computed with respect to 1999-2003) to the solar cycle (SC), the Quasi-Biennial Oscillation at 10 and 30 hPa, the El Niño-Southern Oscillation, the North Atlantic Oscillation, ozone (O3) and carbon dioxide (CO2) are analyzed using multiple linear regression. From the analysis, the regions of significant correlation between the MLT winds and the above potential drivers vary with altitude and month. The positive responses of U and V to the SC (up to 15 m/s/100 sfu) indicate a strengthening of the eastward winds in mid-to-late winter, and of the poleward winds in late autumn and early winter. The O3 likely intensifies the eastward and poleward winds (∼100 m/s/ppmv) in winter and early spring. The CO2 significantly influences the eastward flow in late winter and summer (above ∼90-95 km) and strengthens the meridional circulation. The significant positive trend in U peaks in summer, late autumn and early winter (∼0.6 m/s/year); the negative trend in V is more prominent in summer above ∼90-95 km.
Introduction
The mesosphere and lower thermosphere (MLT) is a complex transitional region of the terrestrial middle and upper atmosphere, with interactive dynamical processes dominated by gravity waves (GWs), planetary waves (PWs) and tides. The MLT winds are significantly driven by these waves as they break and deposit the energy and momentum carried from the lower atmosphere. The mean meridional circulation driven by GW drag has a great impact on the temperature and altitude of the mesopause, particularly over polar latitudes, establishing a colder, lower mesopause in summer and a warmer, higher one in winter (e.g., Andrews et al., 1987; Fritts & Alexander, 2003; Garcia & Solomon, 1985; Haurwitz, 1961; Holton, 1982, 1983; Lindzen, 1981; Lübken & von Zahn, 1991; Smith, 2012; von Zahn et al., 1996; Xu et al., 2007). The MLT winds/circulation play a vital role in the global distribution of important chemical species like NO, CO, and CO2 (e.g., Marsh & Roble, 2002; Smith et al., 2011). The MLT comprises coupling of atmospheric layers with various physical properties. The energy balance between forcings from below and above varies according to the magnitudes of the perturbations (Smith et al., 2017). The neutral winds of this region can have a primary impact on space-weather effects in the ionosphere (e.g., Jackson et al., 2019; Sassi et al., 2019).
The 11-year solar cycle (SC) signatures can be seen in the MLT region (e.g., Ramesh et al., 2015).Apart from the temperature, the horizontal winds in the MLT/mesopause region depends on the solar activity (Bremer et al., 1997;Greisiger et al., 1987).The zonal and meridional winds can be positively correlated (based on season) with the SC in response to changes in thermal and dynamical structures of the stratosphere that impact the upward propagating GWs and thereby the MLT winds (e.g., Cai et al., 2021).Cullens et al. (2016) simulated the responses of GWs, and wave-driven circulations to the SC.They showed that the significant changes in GW drag and associated residual circulation/MLT winds could be due to 11-year SC variability over the southern polar latitudes.Wilhelm et al. (2019) used meteor radar observations to characterize the MLT winds over Andenes (69.3°N,16°E).They found that the mean winds exhibit a characteristic season dependent 11-year SC effect in the Arctic MLT.The observations of the MLT wind variability at SC periods are still tentative due to limited data sets, and difficulty to identify small changes.The modeling studies like Qian et al. (2019) emphasizes the significant impact of the solar activity on the MLT winds, variable in spatial and temporal domains.
Any peculiar changes in the troposphere and stratosphere can largely affect the thermal and dynamical structures of the MLT.The major disturbances include wintertime sudden stratospheric warmings (SSW) to temporarily disrupt the MLT circulation and temperatures (Sathishkumar et al., 2009;Vincent, 2015).The large variability associated with the dynamical properties of the SSW events is evidenced in the MLT region from various experimental observations and model simulations.The weakening/reversal of westward winds accompanied by cooling at high latitude MLT are mainly associated with the PW and GW forcings (e.g., Chau et al., 2012;Hoffmann et al., 2007;Yamazaki et al., 2020).The Quasi-Biennial Oscillation (QBO) is a predominant dynamical variability in the equatorial stratospheric (∼16-50 km) zonal winds, characterized by downward propagating eastward and westward wind regimes with a variable period averaging ∼27-28 months (e.g., Baldwin et al., 2001;Reed et al., 1961).The effect of the QBO can be observed globally through dynamical coupling (by interaction of QBO with GWs and PWs) between the tropical lower stratosphere and the global middle atmosphere (e.g., Baldwin & Dunkerton, 1998;Dunkerton & Baldwin, 1991) including the MLT (e.g., Ford et al., 2009;Jacobi et al., 1996;Hibbins et al., 2007).The dependence of SSW occurrence on QBO phase can be found in Salminen et al. (2020).They confirmed that the SSWs occur more often in easterly QBO phase than in westerly phase.
The El Niño-Southern Oscillation (ENSO) is one of the key variabilities to influence the MLT winds, temperature and tidal structure (e.g., Gurubaran et al., 2005;Lieberman et al., 2007;H. Liu et al., 2017;Warner & Oberheide, 2014).It is a coupled atmosphere and ocean phenomenon with irregular periodic (2-7 years) warming in sea surface temperatures over the equatorial eastern and central eastern Pacific Ocean.More details on ENSO can be found in Scaife et al. (2019).The response due to ENSO includes perturbations in the tropospheric convection that influences the tidal forcing and yield variability in tidal amplitudes and subsequently the winds and temperature in the MLT region (e.g., Ramesh et al., 2020aRamesh et al., , 2020b)).The North Atlantic oscillation (NAO) which is closely related Northern Annular Mode, is another dominant driver of the atmospheric variability (e.g., Kolstad et al., 2020) that connects the polar stratosphere and mesosphere via dynamical coupling (e.g., Jacobi & Beckmann, 1999).Although it is the regional scale tropospheric circulation over Atlantic and Europe, the connection between NAO and stratospheric polar vortex could be useful for the detection of long-term variability in the MLT winds over this region.
The anthropogenic increase in greenhouse gases, particularly CO 2 not only influencing the lower atmospheric climate but also the middle and upper atmospheric processes.The rising levels of CO 2 propagate upward due to advective transport and eddy vertical mixing by GWs (e.g., Emmert et al., 2012;Garcia & Solomon, 1985;Garcia et al., 2014;Qian et al., 2017;Rezac et al., 2015;Shia et al., 2006;Yue et al., 2015) and significantly influence the thermal structure through radiative cooling and thereby the dynamical processes, composition and chemical reactions in the MLT.The zonal winds are linked to the temperature gradients in near agreement with the thermal wind balance (Andrews et al., 1987), the exact balance can differ as it is also significantly influenced by the strong dynamical forcings in the MLT.Thus, the changes in temperature and minor species accompanied by increasing CO 2 have direct impact on variabilities in the MLT winds (e.g., Laštoviĉka et al., 2008), however the knowledge of its long-term effects is inadequate.Ozone also plays a significant role for the changes in the MLT temperature and winds.It strongly affects the stratospheric temperature and in turn the winds that alter the spectrum of GWs reaching the MLT (e.g., Venkateswara Rao et al., 2015) and hence the winds there.Furthermore, as it is crucial for generating atmospheric tides through solar heating, any changes in stratospheric ozone greatly impact the propagation characteristics of the tides that effectively modulate the winds in the MLT region.The trends and changes in mesosphere/mesopause ozone and their consequences are least explored (Laštoviĉka et al., 2008).
The MLT winds can be investigated mostly based on remote sensing techniques.Although space-borne observations provide near global picture, radars are efficient ground-based tools for measuring the MLT winds with good height and time resolutions.The meteor radars, that use the reflection of radio waves by meteor trails (e.g., Mitchell et al., 2002), are widely used for continuous measurements of the MLT winds (e.g., K. K. Kumar et al., 2007;Rao et al., 2014;Lukianova et al., 2018 to state a few).The medium frequency (MF) radars also measure the MLT winds but based on the reflections by changes in the refractive index (e.g., Gurubaran et al., 2007;Vincent et al., 1998).Both meteor and MF radars provide continuous horizontal wind measurements in the height region of ∼70-100 km, however the MF radars are believed to underestimate the winds above ∼90 km in the MLT region (Hall et al., 2005;Manson et al., 2004;Wilhelm et al., 2017).The polar MLT is very complex because of the distinctive features like auroral and geomagnetic activities, energetic particle precipitation, coldest part of the terrestrial atmosphere (summer mesopause), pole-to-pole mean residual circulation, noctilucent clouds (NLCs) and associated summer/winter echoes, effects of polar vortex and SSWs, stratospheric ozone hole etc. Hence understanding the dynamics of the MLT, its long-term variability and response to external perturbations is critical to explore the coupling of the middle and upper boundary of the terrestrial atmosphere.The main purpose of this study is to investigate, for the first time, the long-term tendencies and interannual variabilities in Arctic MLT zonal and meridional winds over Esrange (67.9°N, 21.1°E) under the influence of the most significant drivers viz., solar activity, QBO (at 10 and 30 hPa), ENSO, NAO, O 3 , and CO 2 .A multiple linear regression (MLR) model is used with the above forcings to investigate the potential drivers of the variability and tendencies in the MLT winds using the meteor radar observations for 1999-2022 (two SCs) over Esrange.Section 2 provides the details of data and analysis method, Section 3 presents the results, summary and discussion are elaborated in Section 4 and finally the important conclusions of the study are listed in Section 5.
Meteor Radar Observations Over Esrange
The meteor radar used in this study is an all-sky interferometric SKiYMET VHF system that was installed in August 1999 at Esrange (67.9°N, 21.1°E), nearly 30 km east of Kiruna in Sweden. This is believed to have been the first meteor radar equipped with height-finding ability to be permanently deployed in the Arctic. The radar is a pulsed system that uses a transmitter with a peak power of 6 kW. It operates at a radio frequency of 32.5 MHz, a pulse repetition frequency of 2,144 Hz, and a duty cycle of 15%. The system uses crossed-element Yagi antennas for both transmit and receive to allow detection of meteor echoes at all azimuths. There is a single transmitter antenna and an array of five receiver antennas which form an interferometer to enable determination of meteor-echo zenith and azimuth angles. The height and time resolutions for routine wind measurements are ∼2 km and 1 hr, respectively. This radar configuration has remained essentially unchanged since the system was installed, which makes it well suited to long-term studies because it greatly reduces the possibility of measurement biases that might otherwise arise from any major changes in the system hardware.
The performance of Esrange meteor radar has been carefully investigated to determine if there is any significant change over the duration of the data set, as it is an essential step in the studies of long-term variability.A particular interest was to see if there was any evidence of damage to the interferometer that might result in greater errors or biases in meteor height estimations and thus in the derived winds.The average altitude of the meteor echoes in each month (Figure S1 in Supporting Information S1) is consistent between ∼88 and ∼91 km throughout the observational period.Further it is in phase with the 11-year SC as revealed in a recent study of the long-term variability in the heights of meteor echoes (Dawkins et al., 2023), that investigated data from 12 meteor radars (including the Esrange radar used in this study) and showed both linear and 11-year variations in peak meteor height.The peak heights decreased at all sites and a positive correlation with solar activity was observed at most sites; however, at high latitudes an anticorrelation was observed.Note that the magnitude of these variations is relatively small compared to the ∼20 km depth of the meteor region and so will have little impact on the ability of the radar measurements to determine the MLT winds at ∼80-100 km.In any study of the inter-annual variability of the MLT winds, it is essential to know that any apparent variability detected is a property of the atmosphere and not an artifact caused by possible changes or drift in the performance/biases of the radar.In the case of the Esrange meteor radar, as noted above, there have been no significant changes to the hardware of the system.Therefore, valid comparisons can be made between the winds deduced from meteor radar observations over the duration of the data set.A complete description of the design of this type of radar and the meteor detection algorithm can be found in Hocking et al. (2001).The data collection over Esrange commenced on 5 August 1999 and since then the radar has been in operational mode till date (in 2023), with several data gaps arising due to technical issues and when the radar was inoperable.The data availability and gaps can be found in Figure S2 in Supporting Information S1.However, the present study uses the data recorded up to the end of year 2022.Note that here the radar refers to "Esrange meteor radar," but it has been sometimes referred to as the "Kiruna meteor radar" in existing literature.
Gaussian-Weighted Method for Meteor Radar Winds
The present study adopts the improved time-height localization of meteor radar winds using the Gaussian-weighting method developed by Hindley et al. (2022). This method significantly improves the accuracy of the derived horizontal winds for irregular meteor distributions in time and height, when compared to a conventional height-gates approach (e.g., Mitchell et al., 2002). For a particular time-height location, instead of binning meteor echoes into time-height bins, the method uses a Gaussian-weighted least-squares fit (see Hindley et al., 2022, Equations 1-3) based on all meteors within two standard deviations to estimate the zonal (U) and meridional (V) wind components at the weighted time-height location of the meteor echoes considered.
The advantages of Gaussian weighting approach over a fixed height gates method are (a) full altitude coverage of reliable winds can be achieved during higher meteor counts for some cases like local mornings, (b) the derived winds are correctly attributed to the weighted average time-height location of the available meteors, and (c) derived winds are highly resistant to anomalous values when meteor counts are low because they are constrained with meteors from adjacent heights and times, which are not used in a fixed binning approach.Further details on the Gaussian weighting method for deriving the meteor radar winds used in this study can be obtained from Hindley et al. (2022).
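A minimal sketch of such a Gaussian-weighted least-squares fit is given below. It neglects the vertical wind, uses illustrative widths for the time and height weighting, and omits the two-standard-deviation meteor selection and other refinements of Hindley et al. (2022); the synthetic meteors at the end only demonstrate that the fit recovers a prescribed wind.

    import numpy as np

    def weighted_wind(vr, az, zen, t, h, t0, h0, sigma_t=1.0, sigma_h=2.0):
        """Estimate (u, v) at time t0 (hours) and height h0 (km) from meteor echoes.

        vr  : radial velocities (m/s, positive away from the radar)
        az  : azimuth angles (rad, clockwise from north)
        zen : zenith angles (rad)
        t,h : detection times (hours) and heights (km) of the meteors
        """
        # Horizontal wind projected onto the line of sight (vertical wind neglected):
        #   vr = sin(zen) * (u*sin(az) + v*cos(az))
        A = np.column_stack([np.sin(zen) * np.sin(az), np.sin(zen) * np.cos(az)])
        w = np.exp(-0.5 * (((t - t0) / sigma_t) ** 2 + ((h - h0) / sigma_h) ** 2))
        sw = np.sqrt(w)
        (u, v), *_ = np.linalg.lstsq(sw[:, None] * A, sw * vr, rcond=None)
        return u, v

    rng = np.random.default_rng(0)
    n = 300
    az = rng.uniform(0, 2 * np.pi, n)
    zen = rng.uniform(np.radians(15), np.radians(65), n)
    t, h = rng.uniform(0, 2, n), rng.normal(90, 3, n)
    vr = np.sin(zen) * (30 * np.sin(az) - 10 * np.cos(az)) + rng.normal(0, 3, n)
    print(weighted_wind(vr, az, zen, t, h, t0=1.0, h0=90.0))   # ~ (30, -10) m/s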
Multiple Linear Regression
The long-term variability and tendencies in Arctic monthly MLT winds in response to the potential climate forcings are determined through MLR analysis for 1999-2022. The mean changes in the monthly winds and predictors are calculated with respect to the first 5 years of observations, as the difference of the value for each month from the average over the period 1999-2003. The MLR analysis on the changes in zonal (ΔU) and meridional wind (ΔV) determines the response associated with the SC, QBO, ENSO, NAO, O3, and CO2. The typical expression for the regression model can be given as follows:

α(t) = ∅ + Σ_{k=1}^{n} β_k M_k(t) + ε(t),

where α is the predictand, that is, the dependent variable ΔU or ΔV, t is the time in months/years, ∅ is a constant, β_k are the regression coefficients corresponding to the n predictors (here n = 7), M is a matrix containing the n predictors, and ε is the residual. The regression analysis uses seven predictors that have a potential impact on the variabilities in Arctic MLT winds. Here the matrix M consists of the following seven predictors, M = [F10.7, QBO10, QBO30, NINO 3.4, NAO, O3, CO2], where:
(i) F10.7 is the 10.7 cm solar radio flux. It is a proxy for solar activity in solar flux units (sfu), with 1 sfu = 10^-22 W m^-2 Hz^-1. The monthly F10.7 values are obtained from https://lasp.colorado.edu/lisird/data/cls_radio_flux_f107/.
(ii) QBO10 and QBO30 are the zonal mean zonal winds (m/s) averaged over 5°N-5°S at 10 and 30 hPa, respectively, for the QBO. The European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) data of zonal winds are used for the QBO. The ERA5 reanalysis zonal winds on the 10 and 30 hPa pressure levels are downloaded from the ECMWF archive: https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.bd0915c6?tab=overview
(iii) The NINO 3.4 index, the proxy for ENSO, is the equatorial Pacific sea surface temperature (K) averaged over 5°N-5°S and 170°W-120°W. The monthly NINO 3.4 index data are obtained from https://psl.noaa.gov/gcos_wgsp/Timeseries/Data/nino34.long.data
(iv) NAO, the North Atlantic Oscillation, is a key driver of the climate variability in parts of Eurasia, Greenland, North America, and North Africa. The NAO index is the difference in normalized sea level pressures between Ponta Delgada (Azores) and Akureyri (Iceland).
The MLR analysis uses the regressors of monthly mean changes with respect to 1999-2003 to derive the regression coefficients/response of each predictor. It can be noted that the data gaps are deliberately incorporated in the regressors for the months and years corresponding to those in the radar wind observations (as given in Figure 1f) to avoid misinterpretation of the results obtained from the regression analysis.
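A hedged sketch of this regression at a single altitude is shown below using statsmodels. The predictor matrix and wind anomalies are random placeholders rather than the actual series, and the masking of months with radar gaps mirrors the procedure described above; the 90% significance level corresponds to p < 0.1 in the t-test on each coefficient.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_months = 24 * 12                                  # 1999-2022, monthly
    names = ["F10.7", "QBO10", "QBO30", "NINO3.4", "NAO", "O3", "CO2"]
    M = rng.standard_normal((n_months, len(names)))     # placeholder predictor anomalies
    dU = M @ np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.2, 0.3]) + rng.standard_normal(n_months)
    dU[rng.random(n_months) < 0.1] = np.nan             # mimic radar data gaps

    ok = ~np.isnan(dU)                                  # drop months without observations
    X = sm.add_constant(M[ok])
    fit = sm.OLS(dU[ok], X).fit()
    for name, b, p in zip(names, fit.params[1:], fit.pvalues[1:]):
        print(f"{name:8s} beta = {b:+.3f}  p = {p:.3f}")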
Results
In this section, the performance evaluation, and data availability of the meteor radar observations for 1999-2022 over Esrange along with the predictors and their responses to ΔU and ΔV are presented.
Meteors Detection and Distribution
Figure 1 shows examples that illustrate various properties of the distribution of meteors recorded by the radar system.Figure 1a presents the location projected onto a horizontal plane of normalized individual meteors recorded around the radar on 21 June 2008.Each meteor is represented by a dot and color-coded to indicate the local time at which it was recorded using the color scale of Figure 1e.As seen, meteors are detected at all azimuths, although there are some biases of azimuths toward certain local times, which is a consequence of the visibility above the horizon of sporadic meteor radiants and the right-angle geometry of line-of-sight and meteor trail required for a meteor to be detected (e.g., J. P. Younger et al., 2009;P. T. Younger et al., 2009).Figure 1b exhibits the average height distribution of the meteors per day (total meteor counts/total number of operational days) for all the operational days during August, 1999-December, 2022.Note the strongly peak distribution with a maximum at ∼90 km in association with the neutral density variability that changes by few kilometers based on the season.Very few meteors are detected below 80 km or above 100 km, and so these heights represent a practical limit for routine measurement of winds.Figure 1c shows the measured distribution of zenith angles, which maximizes near ∼60°.The meteors in the zenith angles between 15°and 65°(vertical dashed lines in Figure 1c) are used in the present study to avoid errors in the wind measurement near zenith and at large zenith angles.
The distribution of the meteors corresponding to ground range (i.e., projected onto the surface, rather than slant range) has been depicted in Figure 1d.It is apparent that the meteors can be detected up to ∼240-300 km with a peak range between 60 and 140 km. Figure 1e illustrates the diurnal variation in hourly meteor counts averaged for the observational period.The observed ratio of maximum to minimum count rates is about 2:1 with a maximum occurring at about 06:00 LT and minimum at about 18:00 LT.This ratio is typical for high latitude sites, and it signifies that there are still enough meteors to estimate the winds even during the low count rate part of the day.The number of meteors counts per day during the observational period are shown in Figure 1f to illustrate the seasonal distribution of meteor detections.Note the data gaps when no meteor radar observations were recorded (as given in Figure S2 in Supporting Information S1).The Figure shows that the daily total meteor counts vary from about 2,500 during winter (December-February) and then start increasing from spring (March-May) to peak at more than 5,000 in summer (June-August), an astronomical variation related to the visibility of sporadic meteor radiants.Experience (e.g., Hindley et al., 2022;Mitchell et al., 2002) shows that meteor count totals in this range are sufficient to determine zonal and meridional winds with useful height and time resolutions at the heights of ∼80-100 km in the MLT region.
Climatology of Zonal and Meridional Winds
The variations in the monthly mean zonal winds for ∼80-100 km during 1999-2022 over Esrange are presented in Figures 2a and 2b. The vertical white areas represent the data gaps due to the non-availability of meteor radar observations. As seen in the figure, there is strong interannual variability in the zonal winds. For example, the summer westward winds range from 25 m/s at ∼80-82 km in 2007 to 40 m/s at the same height in 2008.
Similarly, the variation in eastward winds ranges between ∼15 m/s in 2002 and ∼30 m/s in 2012, especially below ∼90 km in autumn and early winter.The zonal winds are mainly characterized by strong westward winds below ∼90 km and reverses to strong eastward above this height in summer (JJA).In addition, the westward winds are also apparent in the spring (MAM) reaching the higher altitudes in some years (e.g., 2003, 2005, 2008, 2011, 2018, 2022).The zonal winds are eastward at all heights in winter (DJF) and during autumn (SON) except in September (above ∼90 km) with relatively stronger winds in winter.Note the stronger winter eastward winds are not consistent across all the years and vary with altitude.The composite mean (average of all years) of monthly zonal winds are shown in Figure 2c for the climatological features of the MLT zonal winds over Esrange.
From the figure, it is clear that the summer westward flow is stronger below ∼90 km, a region that represents the top of the westward jet in summer polar mesosphere.Above this height, the summer zonal flow reverses to eastward with maximum speed at around 98 km during July-August.The zonal winds are eastward at all heights in winter with equinoxes being the extended transitions between winter and summer flow through the zero-wind line that descends in summer., 2008-2009, 2012-2013, 2020-2021, and 2021-2022.In general, the winter flow is equatorward between poleward wind patterns of the autumnal and spring equinoxes, however this is not consistent for all the years.The composite (mean of all the years) structure of the monthly mean meridional winds is shown in Figure 3c.The meridional flow is characterized by strong equatorward jet centred at ∼85-90 km summer, however it weakens above this height.The equinoctial poleward winds are merged into part of the winter months, and they extend with height up to ∼86 km.
Interannual Variabilities of Zonal and Meridional Winds
The interannual variabilities of the zonal (U) and meridional (V) winds for each month are examined at two different (lower and upper) heights, 82.3 and 98.5 km. In the case of the meridional winds, the summer equatorward flow exhibits prominent year-to-year variability at 82.3 km. For example, in June-July, the equatorward wind speed varies from a maximum of 15 m/s in 2004-2005 to around 10 m/s in 2017 and 2021, and so on. The reversal between equatorward and poleward winds can be noticed in winter and the equinoxes, respectively, in certain years, for example, 2000, 2009, 2016, 2022 in December-January (poleward) and 2003, 2012, 2017 in March and September (equatorward). At 98.5 km, the interannual variability of the meridional winds is more pronounced in all months. The equatorward winds are evidenced in almost all years except during 2009-2013 in June-July. In addition, the winter reversal of the equatorward wind to poleward can be observed in December of 2014, and in 2008, 2009, 2013, 2016 and 2021 during January-February.
During equinoxes, the year-to-year variability is more prominent for the equatorward winds including reversal to poleward in March of 2005, in September of 2001, 2010-2011, 2013, 2020, and during October in 2010.
Figure 5 shows the composite mean over all years of the monthly zonal and meridional winds to characterize the interannual variabilities at the two heights of 82.3 and 98.5 km (blue lines). The red vertical bars in each panel represent the corresponding standard deviations for each month, signifying the variability in the winds. At 82.3 km, the interannual variability of the zonal winds is most pronounced in the winter eastward winds, especially in February (∼12 m/s), and least in the autumnal equinox (∼3.5 m/s). In summer, the maximum deviation in the westward winds occurs in July (∼6.8 m/s). At 98.5 km, the variability in the eastward winds is maximum in August (∼11 m/s) and minimum in May and October (∼4 m/s). For the meridional winds at 82.3 km, large variability can be found during winter, with a maximum for the equatorward winds in January (∼8.4 m/s) and for the poleward winds in February (∼7.6 m/s); it is minimum for the equatorward winds in May (∼1.3 m/s). At 98.5 km, the interannual variability of the meridional winds is considerably larger in almost all months; it is maximum primarily in summer (∼6-9 m/s) and then in winter (∼5-7 m/s).
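Composite monthly means and their interannual standard deviations, as plotted in Figure 5, can be assembled with a simple groupby operation. The table of monthly winds below is a synthetic placeholder at one height, not the Esrange data.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    years = np.repeat(np.arange(1999, 2023), 12)
    months = np.tile(np.arange(1, 13), 2023 - 1999)
    df = pd.DataFrame({
        "year": years,
        "month": months,
        "u": 20 * np.cos(2 * np.pi * (months - 1) / 12) + rng.normal(0, 8, len(years)),
        "v": -5 * np.sin(2 * np.pi * (months - 1) / 12) + rng.normal(0, 5, len(years)),
    })

    composite = df.groupby("month")[["u", "v"]].agg(["mean", "std"])
    print(composite.round(1))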
Time Series of Predictors
The temporal variations of the climate forcings (predictors) used in the regression analysis (defined in Section 2) are given in Figure 6. The correlation coefficients (r) among the MLR predictors are given in Table 1. The r values less than 0.05 are almost equivalent to zero. Further, the r values signify that the predictors are nearly independent, as there is no strong correlation (above ∼0.5) observed between any two predictors. However, for further evaluation of the minor to moderate correlations between F10.7 and O3 (∼0.25), and between F10.7 and CO2 (∼0.47), the variance inflation factor (VIF) has been calculated among the predictors to diagnose the degree of multicollinearity in the MLR analysis (e.g., Miles, 2014; O'Brien, 2007; Ramesh et al., 2020a). It is calculated for the nth predictor as VIF_n = 1/(1 − R_n²), with R_n² as the coefficient of determination. For VIF_n ≃ 1, the predictors are not correlated, and multicollinearity does not exist in the regression model. They are moderately correlated for 1 < VIF_n < 5. As a rule of thumb, multicollinearity is a cause for concern when VIF_n > 10 (e.g., Kutner et al., 2004). Here the VIF values are found to be 1.37, 1.02, 1.02, 1.03, 1.06, 1.11 and 1.33 for F10.7, QBO10, QBO30, NINO 3.4, NAO, O3 and CO2, respectively. As the obtained VIF value for each regressor is much less than 10, the results of the MLR analysis are believed to be reliable.
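For reference, the VIF computation described here follows directly from its definition; the sketch below is a plain NumPy version applied to a placeholder predictor matrix, not a reproduction of the values quoted above.

    import numpy as np

    def vif(M):
        """Variance inflation factors for the columns of predictor matrix M."""
        out = []
        for k in range(M.shape[1]):
            y = M[:, k]
            X = np.column_stack([np.ones(M.shape[0]), np.delete(M, k, axis=1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            r2 = 1.0 - resid.var() / y.var()          # coefficient of determination R_k^2
            out.append(1.0 / (1.0 - r2))
        return np.array(out)

    rng = np.random.default_rng(2)
    M = rng.standard_normal((288, 7))                 # placeholder for the 7 monthly predictors
    print(np.round(vif(M), 2))                        # values near 1 indicate little collinearity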
Zonal Wind Response to the Climate Forcings
The MLR analysis defined in Section 2 has been applied to the monthly mean zonal wind anomalies (ΔU) at each altitude.The wind and predictor anomalies are calculated with respect to the first 5 years of the observations.Figure 7 illustrates the monthly mean zonal wind anomalies along with their responses to the seven predictors viz., solar (F 10.7 ), QBO10, QBO30, NINO 3.4 (ENSO), NAO, O 3 , and CO 2 .The statistical significance is calculated from t-test (e.g., Wilks, 2006) and the regions where the responses are significant at 90% confidence level are indicated by stippling in the figure.As seen in the figure, the solar response is positive and significant at ∼93-99 km during January-February and at around 83-88 km during August-September.However, it is significantly negative at ∼88-98 km during September-October and at around ∼80 km through July-August.The positive responses denote the enhanced eastward winds are likely due to the solar maximum in winter/spring and the negative responses correspond to weakening/reversal of eastward winds leading to strengthening of the westward winds in summer and autumnal equinox.It can be noted that the zonal wind response to solar activity is not uniform but varies with height and month/season.For example, the enhanced incoming solar radiation during solar maximum tend to enhance the eastward winds above summer westward winds over ∼97 km.The QBO at 10 and 30 hPa are responding to the zonal wind in opposite manner especially in winter and spring equinox.The response due to QBO10 is negative and significant above ∼90 km in January-February and below this height during April-May.The QBO30 response is positive in winter but significant only in January at ∼80-85 km with relatively strong eastward flow and above ∼98 km in March.However, both QBO10 and QBO30 show the significant negative response above ∼98 km in December.In addition, the response due to both the indices is significantly positive above ∼88 km in October-November strengthening the climatological eastward flow during autumnal equinox.
The NINO 3.4 index, a proxy for ENSO, comprises El Niño and La Niña events, which are defined when the NINO 3.4 SST anomalies respectively exceed +0.4 or −0.4°C for a period of 6 months or more (https://climatedataguide.ucar.edu/climate-data/nino-sst-indices-nino-12-3-34-4-oni-and-tni). As seen in Figure 7, the response of the zonal winds to ENSO is not consistent at all heights/months. It is positive and significantly stronger at around ∼82-95 km during late autumn/early winter (October-December), when El Niño reaches its peak intensity. This signifies that the eastward winds are intensified by El Niño during autumn and winter. The ENSO response is negative and significantly stronger at ∼80-83 km in early spring (March-April), when El Niño/La Niña tend to develop, reversing the eastward winds to westward; however, the response is positive above 95 km, strengthening the eastward winds. Further, the response due to ENSO is positive, reversing the summer westward winds below ∼83-85 km, and negative, reversing the westerlies above ∼98 km, which is most pronounced in July. The NAO is regarded as capable of affecting the northern hemispheric (NH) mean circulation, especially over the Atlantic and European regions. As illustrated in Figure 7, the NAO response is positive from late autumn to winter, but stronger at all heights in January and significant only up to ∼88 km in part of the month. Further, it persists into early spring below ∼85-87 km, with a significant response up to ∼83 km in March. A significant positive response can also be noticed in late spring at ∼86-90 and ∼95-100 km. In late summer and early autumn, the response due to the NAO reverses to negative, which is mostly insignificant except in August at ∼82-84 km.
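The NINO 3.4 threshold definition quoted at the start of this paragraph lends itself to a toy illustration: the sketch below labels El Niño and La Niña months from a monthly anomaly series using the ±0.4°C, 6-month persistence rule; the anomaly values and the helper function are invented for illustration.

```python
# Toy classification of ENSO phases from monthly NINO 3.4 SST anomalies, using
# the threshold quoted above (anomaly beyond +/-0.4 degC sustained for at least
# 6 consecutive months). The anomaly series is invented for illustration.
import numpy as np

def enso_phase(anom, months=6, thresh=0.4):
    anom = np.asarray(anom, dtype=float)
    phase = np.zeros(anom.size, dtype=int)     # +1 El Nino, -1 La Nina, 0 neutral
    for i in range(anom.size - months + 1):
        window = anom[i:i + months]
        if np.all(window >= thresh):
            phase[i:i + months] = 1
        elif np.all(window <= -thresh):
            phase[i:i + months] = -1
    return phase

anom = [0.1, 0.5, 0.6, 0.7, 0.5, 0.6, 0.8, 0.9, 0.2, -0.5, -0.6, -0.2]
print(enso_phase(anom))    # months 2-8 flagged as El Nino in this toy series
```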
The O 3 , which plays a crucial role in polar middle atmosphere dynamics, has a greater impact on the eastward winds in winter and early spring (Figure 7). Its strong positive response is likely to amplify the eastward winds during December-March; however, it is not consistent at all observational heights. The significant responses are limited to ∼83-94 km in December, whereas they extend up to ∼85 km during January-March. In summer, the impact of O 3 on the MLT zonal winds varies with height. Its response is significantly positive in late summer, reversing the westward winds to eastward below ∼90 km, while it is negative above this height, reversing the eastward winds to westward. Another significant region of positive response can be found during the late autumnal equinox, strengthening the eastward winds above ∼95 km. As another potential constituent influencing middle atmospheric processes, CO 2 produces a significant positive response that denotes strengthening of the eastward winds above ∼92 km in late winter and at ∼88-94 km in late autumn and early winter. Significant positive responses are also observed in summer, but in limited height regions of ∼90-95 km in June and above this height in July-August.
Meridional Wind Response to the Climate Forcings
The monthly mean meridional wind anomalies (ΔV) along with the responses due to the seven predictors are given in Figure 8.The stippled regions in the figure panels represent the significant responses at 90% confidence level.As shown in the figure, the solar response to the meridional winds is negative and significant in the late summer strengthening the equatorward flow during solar maximum; however, the significant regions are limited up to ∼88 km and at ∼92-97 km, persisting in the early autumn at lower heights.In late autumn and early winter, the solar influence on meridional winds is significantly positive, likely to reverse the equatorward winds to poleward strengthening at ∼82-92 km.The QBO10 response is significantly positive, amplifying the poleward winds up to ∼87 km in the late winter/early spring, and above ∼96 km in summer and early autumn.The response due to QBO30 is significantly positive above ∼86 km in late summer and late autumnal equinox, and below ∼85 km in early winter, likely to reverse the equatorward winds to poleward.However, it is significantly negative above ∼90 km to intensify the equatorward flow in January.The regions of significantly negative responses can also be observed up to ∼90 km in July-August and at ∼86-93 km during March-April.
The response of ENSO is negative in summer and early autumn, but significantly larger in July and in September up to ∼87 and ∼90 km respectively, indicating to strengthen the equatorward flow.The regions of significantly negative response can also be seen in the month of October at ∼96-98 km.The significant positive responses of ΔV to ENSO are evidenced below ∼83 km in November-December likely to amplify the poleward flow, and above ∼97 km in December to reverse the equatorward flow.The strong positive response above ∼84 km during May-July, peaking at ∼95-98 km, denotes the reversal of equatorward flow and strengthening of the poleward flow possibly associated with El Niño; however, it is statistically insignificant.As an important driver of the regional climate variability, the NAO exhibits significant impact on meridional winds in winter with strong negative response in January-February below ∼84 km and at ∼92-97 km likely to strengthen the equatorward winds.In the early spring (March), the response is significantly positive below ∼87 km to amplify the poleward winds.In the late autumn, the effect of NAO is likely to intensify the poleward winds below ∼85 km and equatorward winds above ∼90 km during October-November by the corresponding positive and negative responses of ΔV.The response of meridional winds to O 3 is significantly positive strengthening the poleward flow in the late winter and early spring.Further the regions of significant positive responses can be found in the early winter (December) below ∼88 km likely to reverse the equatorward winds to poleward.In summer and autumn, the responses are insignificant except during October-November above ∼97 km.The CO 2 influence is more pronounced in summer with significantly large negative response above ∼90 km to strengthen the equatorward winds.However, below this height, the response is positive in June likely to weaken and reverse the equatorward jet to poleward.Further the significant positive response can be found in the late winter/early spring at ∼82-86 km, also in December-January at the higher altitudes of the observations.The significant negative response above ∼95 km in the late autumn denotes the strengthening of the equatorward winds possibly due to CO 2 .
Aggregate Responses of the Zonal and Meridional Winds to the Climate Forcings
The ΔU and ΔV responses to the seven climate forcings, viz., solar activity, QBO10, QBO30, ENSO, NAO, O 3 , and CO 2 , for all the months and years during 1999-2022 in the MLT region between ∼80 and ∼100 km are computed from the MLR analysis. Figures 9a-9g illustrate the altitude variation of the cumulative (long-term) responses of the arctic MLT winds to the seven potential drivers. The zonal wind response to SC is positive up to ∼88 km; above this height it reverses to negative, with the magnitude increasing with height. The meridional wind response is positive and slightly increases with height above ∼80 km, decreases from ∼88 km, and reverses to negative above ∼95 km.
The response of zonal wind to QBO10 is negative below ∼90 km (peaking at ∼83 km) and it reverses to positive above this height.The response of zonal winds to QBO30 is opposite to that of QBO10.It is positive below ∼90 km and negative above this height.The response of meridional wind to QBO10 is insignificant (i.e., falls within the uncertainty limits indicated by the shaded area in the figure) up to ∼97 km and positive above this height, while it is consistently positive for the QBO30.
The zonal wind response to ENSO (NINO 3.4 index) is negative and maximum at lowermost heights; however, it decreases with height and reverses to positive above ∼93 km.The meridional wind response is negative and decreases with height, turns to positive at higher altitudes (above ∼98 km).The zonal wind response to NAO is positive and maximum at lowermost heights and decreases with height, reverses to negative above ∼90 km.The meridional wind response to NAO is positive and slightly increases up to ∼87 km, and it decreases above this height and becomes negative beyond ∼98 km which is not significant.The zonal wind response to O 3 is like that of NAO.It is positive and maximum at lowermost heights, decreases with height and reverses to negative above ∼87 km.The response of meridional flow to O 3 is positive and slightly increases with height, peaking at around ∼88 km and decreases above this height.The zonal wind response to CO 2 is insignificant at all heights; however, the meridional wind response is positive below ∼90 km and negative above this height.It increases with height peaking at ∼98 km and decreases above this height.
Furthermore, it can be seen from Figure 9 that the large variabilities in long-term responses are more prominent in zonal winds than meridional winds.Besides, the magnitudes of significant variabilities of the wind responses are larger for O 3 when compared to CO 2 .
Trends in Zonal and Meridional Winds
The cumulative long-term trends ("trend" signifies changes on a time scale longer than a SC; Ramesh et al., 2020a) in the zonal and meridional winds are shown in Figure 9h. Here the trends represent the rate of change of ΔU and ΔV (per year) over all months and years during the observational period. They are calculated irrespective of the climate forcings, with no trend (time) term included in the MLR analysis (Equation 1), as there exists a strong correlation between CO 2 and the trend (time). The trend in the zonal wind is positive and increases above ∼82 km, with a maximum at ∼95 km (∼0.15 ± 0.01 m/s/year), representing an increasing trend in the eastward winds; however, it decreases above this height. The trend in the meridional wind is consistently positive at ∼81-85 km, decreases above this height, and turns negative above ∼88 km. The negative trend in the meridional wind increases with height and peaks at the highest altitudes (−0.1 ± 0.02 m/s/year at ∼99.5 km), indicating decreasing and increasing trends in the poleward and equatorward winds, respectively.
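A sketch of how such a slope and its uncertainty can be estimated from an anomaly series is given below; the series `dU`, the synthetic trend, and the variable names are illustrative assumptions rather than the observed data.

```python
# Illustrative trend estimate: the rate of change of the wind anomaly per year
# from a linear least-squares fit, computed independently of the climate
# forcings. The anomaly series `dU` is synthetic, not the observed data.
import numpy as np

years = np.arange(1999, 2023)
rng = np.random.default_rng(3)
dU = 0.15 * (years - 1999) + rng.normal(scale=3.0, size=years.size)   # m/s

slope, intercept = np.polyfit(years, dU, deg=1)
resid = dU - (slope * years + intercept)
se = np.sqrt((resid ** 2).sum() / (years.size - 2) / ((years - years.mean()) ** 2).sum())
print(f"trend = {slope:+.3f} ± {se:.3f} m/s/year")
```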
The monthly trends in the zonal and meridional wind anomalies (with respect to 1999-2003) are calculated for the period 1999-2022 and shown in Figures 10a and 10b. The stippled regions in the figure panels represent the trends that are significant at the 90% confidence level. Here the monthly trends represent the rate of change of ΔU and ΔV (per year) for each month during the observational period. They are calculated regardless of the climate forcings, with no trend (time) term included in the MLR analysis (Equation 1), owing to the large correlation between CO 2 and the trend (time). The trends in the zonal winds (Figure 10a) are positive and significantly larger in late autumn and early winter (November-December) up to ∼95 km, peaking (∼0.6 m/s/year) at ∼85-88 km, implying an increasing trend of the eastward winds. Further, the significant positive trend at ∼90-95 km in late spring and early summer (May-June), and the strong positive trend (∼0.6 m/s/year) above this height in late summer (July-August), indicate an increasing trend of the eastward flow. In early spring (March), the trends in the zonal winds are positive up to ∼88-90 km and negative above this height, denoting increasing and decreasing trends in the eastward winds, respectively. The trends in the meridional winds (Figure 10b) are negative and significantly larger above ∼95 km in late summer (July-August), peaking at ∼98-100 km (up to −0.6 m/s/year), indicating an increasing trend of the equatorward winds. Significant negative trends can also be observed during February-March above ∼88 km. However, below this height, the significant positive trends imply an increasing trend of the poleward winds. The positive trends observed at around ∼85 km in June, and above this height in January, represent a decreasing trend of the equatorward winds.
Summary and Discussion
The present study investigates the long-term tendencies and interannual variabilities of the polar MLT zonal and meridional winds using meteor radar observations at ∼80-100 km during 1999-2022 over Esrange (67.9°N, 21.1°E). In addition, for the first time, the influence of significant drivers, viz., solar activity, QBO at 10 and 30 hPa, ENSO, NAO, O 3 , and CO 2 , on the monthly zonal and meridional wind anomalies (relative to the 1999-2003 mean) has been investigated using MLR analysis. The findings are of great importance for a comprehensive understanding of the impact of solar and potential climate forcings on the wind anomalies and mean circulations in the polar MLT region. There are comparatively few studies of polar MLT wind climatologies of one SC or more in observational length. The present results demonstrate that the seasonal variation of winds reported in shorter-duration studies, from Mitchell et al. (2002) onwards, appears representative of the long-term behavior. Note that there may be significant differences (biases) between the wind climatologies from the meteor radar observations in this study and those derived from MF radar measurements (e.g., Manson et al., 2004; Iimura et al., 2011). These biases are recognized by the experimental community and usually lead to an under-estimation of wind speeds by the MF-radar technique; they have been attributed to factors that include (a) scatter from off the beam axis, (b) side-lobe contamination, and (c) the need to correct for refractive effects on the beam pointing (e.g., Wilhelm et al., 2017).
The polar MLT zonal winds are characterized by the summer westward flow up to ∼90 km and eastward flow above this height.The MLT dynamics is largely governed by the interaction from GWs, PWs and tides.In summer, as the zonal flow is westward in the stratosphere and mesosphere, the GWs with westward phase speed are filtered out and only those waves with eastward phase speed propagate to the higher altitudes in the MLT, and deposit energy and momentum when they break and decelerates the mean flow that causes the wind reversal from westward to eastward above ∼90 km.In the winter, the zonal flow is eastward at all heights with the wind speeds of ∼10 m/s; however, the eastward flow can reach up to ∼25 m/s below ∼85 km in late winter during the SSW episodes (based on major or moderate), for example, 2002, 2004, 2006, 2009, 2013, 2019, and 2020 (Figure 2).The effect of SSWs can extend up to the MLT region through vertical coupling and the wind anomalies associated with the stratospheric polar vortex can reach as high as above ∼90 km (e.g., Lukianova et al., 2018).The meridional winds exhibit strong summer equatorward jet centered at the mesopause region of ∼85-90 km in June-July at the same time when the strong zonal wind shear exists, and the maximum of meridional flow occurs near to the zero-wind line in the zonal wind as reported by the previous investigations (e.g., Manson et al., 2004;Mitchell et al., 2002;Lukianova et al., 2018).Since the zonal winds act as a filter to upward propagating GWs, the momentum drag associated with GW breaking in the mesosphere is eastward in summer and westward in winter.The zonal drag along with the Coriolis force induces a summer-to-winter meridional circulation (e.g., Andrews & McIntyre, 1976;Becker, 2012;Garcia & Solomon, 1985;Holton, 1983;Lindzen, 1981).The most significant feature is that the poleward wind strengthens up to ∼15 m/s in the late winter of the SSW years (based on the intensity of the SSW) and extends throughout the altitude range up to ∼100 km when compared to minor/non-SSW episodes.The increased poleward winds can be accompanied by the enhanced eastward winds during SSW periods.
The interannual variability in the zonal winds varies with height between the eastward and westward winds (Figure 4). For example, the variability is most prominent in winter and least for the eastward winds in autumn, whereas it is maximum for the westward winds in July at ∼82.3 km (Figure 5). However, at ∼98.5 km, the variability is maximum for the eastward winds in August and minimum in May and October. Similarly, for the meridional winds, the maximum variability occurs in winter and the minimum in early summer at the lower altitude, while it is large in almost all months, but relatively largest in the summer and winter solstices, at the higher altitude. The interannual variability in the polar MLT winds over Esrange can be attributed to the wave activity and other dynamical processes associated with the significant drivers of SC, QBO, ENSO, NAO, O 3 , and CO 2 , and through thermal wind balance.
Solar activity plays an important role for the thermal and dynamical processes of the polar MLT.Although the zonal winds are associated with temperature gradients, strong dynamical forcings have greater impact on the variability of the MLT winds.In this study, the significant positive response of the zonal winds to 11-year SC variability in later winter (January-February) specifies the intensification of eastward winds associated with enhanced solar flux in solar maximum conditions.Further, the positive response of the meridional flow implies the enhancement of the poleward winds during solar maximum in late autumn and early winter (November-December).Lukianova et al. (2018) reported the enhancement of eastward zonal flow from solar minimum to solar maximum at all heights of the meteor radar observations in the winter MLT over Sodankylä (67.4°N,26.6°E).
They suggested that the stronger eastward winds are accompanied by the strong poleward winds to close the residual circulation, in agreement with the strengthened pole-to-pole mesospheric circulation during solar maximum.The dependence of stratosphere and mesosphere temperature gradients on the solar activity potentially alter the polar MLT winds.The sharpening of the equator to pole temperature gradient in response to enhanced stratospheric ozone chemistry during solar maximum intensifies the mesospheric eastward GW drag which in turn drives a stronger equatorward meridional flow in the summer.Due to solar response, the intensification of summer westward flow at lower heights was also reported by Lukianova et al. (2018).This can be associated with the enhanced westward GW drag at lower heights, and it reverses due to the wave filtering effect and intensifies the eastward flow at higher level during solar maximum conditions.Wilhelm et al. (2019) reported the influence of 11year SC on the MLT winds from meteor radar observations during 2002-2018 over a high-latitude location, Andenes (69.3°N, 16°E) and the mid-latitude locations of Juliusruh (54.6°N, 13.4°E) and Tavistock (43.3°N, 80.8°W ).They found the impact of 11-year oscillation on seasonal basis with high response of zonal winds during summer (around ∼80 km) and winter; however, the meridional winds exhibit almost no changes with SC over three locations.In the present study, the wind responses to SC are evidenced in both the components varying with the altitude and month/season.The QBO10 and QBO30 responses to the zonal flow are positive strengthening the eastward winds in autumn (significant above ∼85 km).However, they exhibit opposite responses in mid-winter, negative due to QBO10 (significant above ∼85-90 km) and positive due to QBO30 (significant below ∼85 km).The manifestation of the stratospheric QBO extends in to the MLT region in both latitude and altitude (e.g., de Wit et al., 2016).The PWs strongly modulate the stratospheric polar vortex in winter and thereby the MLT winds/circulation.Further the PW propagation to the mid-and high-latitudes is essentially influenced by the QBO phase (e.g., Baldwin & Dunkerton, 1998;Holton & Tan, 1980).The regression analysis in the present study shows the significant positive response of zonal winds due to QBO at both pressure levels (10 and 30 hPa) in the late autumn (Figure 7).The PWs can reach the summer polar mesosphere from the opposite winter hemisphere based on the phase of the equatorial mesospheric QBO (e.g., Ford et al., 2009;Hibbins et al., 2009), which is opposite in phase of the stratospheric QBO.When the equatorial QBO is in eastward phase, the mesospheric QBO is westward and PWs are prevented from propagating across the equator, hence the mesospheric vortex is more stable with stronger eastward winds (e.g., Ford et al., 2009).This could be the possible reason for the significant positive responses in the late autumn to strengthen the eastward winds in the MLT in response of the QBO at both pressure levels (Figure 7).Also, in late spring when the westward winds are intensified due to negative response of the QBO10 below ∼90 km.However, this is in opposite situation for QBO10 in winter when the PWs interrupt the MLT zonal winds reversing to westwards.The response due to QBO30 in later winter is similar to that in late autumn but more intense below ∼90 km.Through Holton-Tan effect (Holton & Tan, 1980), the PWs interrupt the NH polar vortex in winter when the QBO (at 50 hPa) phase 
is westward: the stratospheric polar vortex is then more perturbed by PWs, resulting in a weaker vortex, whereas in the eastward phase the vortex is less disturbed and therefore stronger. Recently, Pedatella (2023) simulated the influence of stratospheric polar vortex variability on the MLT winds/circulation and reported that the MLT circulation is strongly influenced by both strong and weak polar vortices. When the QBO10 phase is eastward, the polar vortex is stronger and more symmetric, and hence the meridional flow is more poleward in late winter and early spring (Figure 8). However, the converse is observed for QBO30, which strengthens the equatorward winds in January.
The ENSO plays significant role for modulating the stratospheric polar vortex and thereby the MLT winds.The warm phase of ENSO (El Niño) weakens, and the cold phase (La Niña) strengthens the stratospheric polar vortex through North Pacific teleconnection (e.g., Butler & Polvani, 2011;Camp & Tung, 2007;Domeisen et al., 2019;Garfinkel et al., 2012;Oehrlein et al., 2019;Sassi et al., 2004).As illustrated above, the strong and weak polar vortices in turn could impact the polar MLT wind/circulation patterns.Further the interaction between QBO and ENSO mainly due to strong wave activity during El Niño phase could impact the intensity of the polar vortex (e.g., Hansen et al., 2016 and references therein;V. Kumar et al., 2022 and references therein).The strong positive response of the zonal winds to the ENSO (Figure 7) coincides with that of the QBO10 during autumnal equinox; however, the overlap regions of significant responses can be found between ∼85 and ∼95 km, also for QBO30 during October-November.Similarly, significant correlations are evidenced for the zonal wind response to ENSO and QBO10 in spring (below ∼90 km) and in summer (at lower and higher altitudes).Below ∼85-90 km, the meridional winds response to ENSO (Figure 8) illustrates the significantly enhanced equatorward (summer pole to winter pole) residual circulation due to increased eastward GW drag during warm ENSO events.Above this height, due to anomalous GW filtering, the ENSO response of summer meridional flow reverses to poleward, but this is statistically insignificant.It is equatorward at all heights in the autumnal equinox except in November.The tropospheric circulation in the regional scale can be characterized by the NAO over the Atlantic and Europe (Walker & Bliss, 1932).The tropospheric NAO is closely related to the winter stratospheric polar vortex through wave-mean flow interaction (Baldwin et al., 1994;Perlwitz & Graf, 1995), and the MLT winds are significantly modulated by strength of the stratospheric polar vortex.Thus, the variability in polar MLT winds is directly linked to the tropospheric NAO through dynamical coupling.Most of the previous studies neglected the importance of NAO for influencing the polar MLT winds, however the present study incorporated its prominence to investigate the variabilities in polar MLT winds.The zonal wind response to NAO is stronger and positive strengthening the eastward winds in late winter, especially in January (Figure 7).Jacobi and Beckmann (1999) found the strong positive correlation between winter (prominent in January) mesopause eastward winds and the NAO index over Collm observatory (52°N, 15°E), Germany possibly due to dynamical coupling between different layers through PWs.However, they also suggested that the strong correlation cannot be expected in summer as the dynamical coupling between the middle atmosphere and the troposphere is relatively weaker in this season than in winter.In late winter, the significant negative response of the meridional flow (equatorward) to the NAO (Figure 8) is accompanied by the strong positive response (eastward) of the zonal winds (Figure 7).
The response of the zonal and meridional winds to ozone in winter and early spring could be the result of the interaction of the polar vortex with stratospheric ozone. The ozone response significantly intensifies the MLT eastward and poleward winds in winter and early spring (Figures 7 and 8) by modulating the temperature gradients and winds, and thereby the GW filtering effect, in the stratosphere. An increase (decrease) in arctic stratospheric ozone corresponds to a weakened (strong) stratospheric polar vortex, which in turn resembles a strong (weak) Brewer-Dobson circulation (e.g., Hu et al., 2023), based on the PW activity. As ozone recovery commenced with the decline of global ozone-depleting substances after 1995 (e.g., Ramesh et al., 2020a), an increasing trend in global ozone (at 1 hPa) appears at least in the early years of the observations (Figure 6). After 2004, the tendency in ozone remains almost consistent, with seasonal/interannual variabilities, as seen in Figure 6. Here, the enhanced ozone could weaken/reverse the stratospheric eastward flow (due to the change in temperature gradients), so that the increased eastward GW drag strengthens the eastward winds in the polar MLT during winter, extending into early spring (Figure 7). This subsequently intensifies the poleward circulation, as illustrated in Figure 8. However, the response due to ozone significantly reverses the westward and eastward winds below and above ∼90 km, respectively, possibly due to the reversal of the GW forcing in late summer. The increasing CO 2 enhances tropospheric heating and middle atmosphere radiative cooling. This favors stronger GW and PW generation in the lower atmosphere, and the vertical destabilization affects the wave propagation into the MLT (e.g., Akmaev & Fomichev, 1998; Rind et al., 1990). The combined stronger momentum deposition by the waves subsequently leads to a stronger residual circulation (summer pole to winter pole) in the MLT region (Figure 8). The strong meridional circulation is accompanied by enhanced eastward winds in summer (Figure 7). However, the strong positive response due to CO 2 significantly intensifies the eastward winds in late winter (above ∼85-90 km), possibly due to increased eastward GW drag.
The trends in the zonal and meridional wind anomalies (change in ΔU and ΔV per year) are calculated from linear least-squares slopes at all observational heights, both cumulatively (Figure 9h) and for each month (Figure 10), to quantify the long-term changes in the arctic MLT winds over Esrange. As CO 2 evolves with time, the strong correlation between the trend (time) and CO 2 could be a significant concern for interpreting the results of the regression analysis. Thus, the trend (time) term has been excluded from the MLR (Equation 1), and the trends in the wind anomalies are estimated regardless of the climate forcings. The increasing positive trend in the zonal winds signifies an increasing trend of the eastward winds above ∼82 km (Figure 9h); however, it decreases above ∼95 km. The trend in the meridional wind is positive up to ∼88 km and negative above this height. The increasing negative trend in the meridional wind represents decreasing and increasing trends of the poleward and equatorward winds, respectively. The zonal winds show significant positive trends up to ∼95 km in late autumn/early winter and above ∼90 km in late spring/summer (Figure 10), representing an increasing trend of the eastward winds, which can be attributed mainly to the consequences of increasing CO 2 that significantly affect the wave (GW, PW, and tidal) generation sources, vertical propagation, and dissipative properties in the polar MLT region. However, the significant negative trend above ∼90 km in late winter/early spring denotes a decreasing trend of the eastward winds. The significant negative trend of the meridional winds above ∼90-95 km indicates an increasing trend of the equatorward winds, possibly due to increasing CO 2 that influences the meridional residual circulation from the summer pole to the winter pole in the MLT. The significant positive trend in mid-winter denotes an increasing trend of the poleward (northward) winds, possibly due to modulation of the wave activity.
Conclusions
The long-term variability and tendencies in polar monthly zonal and meridional winds are derived from the meteor radar observations at ∼80-100 km during 1999-2022 (two SCs) over Esrange (67.9°N, 21.1°E).In addition, for the first time, the broad impact of the potential climate forcings viz., SC, QBO (at 10 and 30 hPa), ENSO, NAO, O 3 , and CO 2 on the polar MLT winds has been investigated using MLR analysis for the observational period.
The main conclusions of this study can be listed as follows.
i. The zonal winds are characterized by strong easterlies (westward winds) below ∼90 km and strong westerlies (eastward winds) above this height in summer (JJA). The meridional flow is characterized by a strong equatorward jet centered at ∼85-90 km in summer. Although there is a well-defined recurrent seasonal cycle in the zonal and meridional winds, there is nevertheless significant interannual variability evident in all seasons and at all heights.
ii. The interannual variability in the zonal winds is more prominent in the winter and summer solstices at lower altitudes (∼82.3 km), whereas it is larger in late summer at higher altitudes (∼98.5 km). For the meridional winds, the large interannual variability occurs in winter and early summer at lower altitudes, while it is considerably larger in all months, but most pronounced in summer, at higher altitudes.
iii. The significant positive responses of the zonal and meridional winds to solar activity (up to 15 m/s/100 sfu) in late winter/early spring and in late autumn/early winter, respectively, indicate the strengthening of the eastward and poleward winds during solar maximum conditions.
iv. The responses of the zonal and meridional winds to QBO10 and QBO30 are positive in the autumnal equinox, strengthening the eastward and poleward winds. However, the negative response of the zonal wind to QBO10 during late winter/spring denotes weakening/reversal of the eastward winds.
v. The response of the zonal wind to ENSO is significantly positive and negative, strengthening and weakening the eastward winds in late autumn and in spring (below ∼90 km), respectively. The response of the meridional wind to ENSO amplifies the equatorward flow in summer (below ∼90 km) and early autumn.
vi. The response of the zonal wind to NAO is significantly positive in January, intensifying the eastward flow; however, it is negative for the meridional winds in January-February, weakening/reversing the poleward winds.
vii. The impact of O 3 is significantly strong, strengthening the eastward winds in winter/early spring (below ∼90 km) and the poleward flow in late winter/early spring.
viii. The CO 2 intensifies the eastward flow in late winter and in summer above ∼90 km. It amplifies the meridional circulation (summer pole to winter pole), with a significant negative response above ∼90 km in summer.
ix. For the cumulative trends, the positive trend in the zonal wind increases at ∼82-95 km and the negative trend in the meridional wind increases above ∼88 km.
x. The positive trend in the zonal wind is significantly larger in summer (above ∼90 km) and in late autumn/early winter (∼0.6 m/s/year). The negative trend in the meridional wind is most pronounced in summer above ∼95 km (−0.6 m/s/year).
Figure 1. (a) Horizontal distribution of normalized individual meteors recorded around the Esrange meteor radar on 21 June 2008. Each meteor echo is denoted by a dot colored according to the time of detection as per the color scale in panel (e); (b) average height distribution of the meteors per day, (c) zenith angle, (d) horizontal (ground) range, (e) local time of all meteor echoes detected per day for all the operational days, (f) daily variation of the number of meteor counts per day during the observational period (1999-2022) over Esrange.

Figure 3. Same as Figure 2 but for meridional winds.

Figures 3a and 3b represent the variabilities in the monthly mean meridional winds between ∼80 and ∼100 km for 1999-2022 over Esrange. The meridional flow is equatorward in summer during all the years, with variable speeds at different heights and strong winds (∼15 m/s) mostly at ∼85-90 km. It exhibits significant interannual variability in the poleward flow, which ranges from ∼5 m/s in 2002 to ∼12 m/s in 2016 below ∼90 km during the autumnal equinox and early winter. Further, the meridional winds are equatorward in January of certain years, for example 2006, 2007, and 2012; however, they are poleward in other years such as 2008, 2009, 2016, and 2022 (at least below ∼90 km). The poleward flow occurs for at least 2 months in the autumnal equinox up to ∼90 km, extending into winter in some years.

Figure 4. Year-to-year variability of (top) zonal and (bottom) meridional winds for each month at two different heights of (left) 82.3 km and (right) 98.5 km during 1999-2022 over Esrange.

The year-to-year variabilities of the zonal and meridional winds for each month at the two heights of 82.3 and 98.5 km for 1999-2022 are shown in Figure 4. The white pixel regions in the figure represent data gaps. There is substantial interannual variability in the zonal and meridional winds in each month at the two height regions. It is obvious in the zonal winds during June-August at 82.3 km, where the summer westward flow is maximum. In June, the westward wind speed ranges from ∼15 m/s in 2002 to ∼50 m/s in 2009 and is consistent at ∼25 m/s for consecutive years during 2004-2006 and 2021-2022. Similarly, in July and August there is significant year-to-year variability of the westward winds. Besides, large variability of the eastward winds can be observed in the winter months. In January, the eastward wind speed ranges from ∼5 m/s in 2001 to ∼30 m/s in 2000, 2005, and 2019, and is consistent at ∼10 m/s in 2002-2004, 2006, 2008-2009, and 2018. In February, the eastward wind speeds vary from a maximum of ∼50 m/s in 2006, 2009, and 2013 to a minimum of ∼5 m/s in 2003, 2005, 2007, 2017-2019, and 2021-2022. Likewise, the zonal winds exhibit prominent year-to-year variability during the transition periods of the equinoxes. At 98.5 km, where the summer eastward flow peaks, maximum eastward wind speeds above ∼50 m/s can be found in August of 1999, 2001-2002, 2005, 2012, 2016, and 2021-2022, and a minimum of ∼10-20 m/s in 2006 and 2014. However, in recent years the peak eastward winds reach ∼40 m/s in July. Further, the year-to-year variability is more prominent in the winter solstice and the equinoxes.

Figure 5. Monthly variation of composite means of (top) zonal and (bottom) meridional winds (in blue) at two different heights, (left) 82.3 km and (right) 98.5 km. The vertical bars (in red) represent the standard deviations of the winds for each month.

The temporal variations of the monthly mean proxies, viz., F 10.7 , QBO10, QBO30, NINO 3.4 index, NAO index, O 3 , and CO 2 for 1999-2022 are shown in Figure 6. The F 10.7 index covers the 23rd, 24th, and part of the 25th 11-year SCs, with declining solar maxima. The QBO10 and QBO30 zonal winds vary up to 20 m/s in the eastward and 40 m/s in the westward phases.

Figure 8. Same as Figure 7 but for meridional winds.

Figure 9. Vertical profiles of cumulative zonal (in blue) and meridional (in red) wind (a-g) responses to the seven predictors, and (h) trends. The blue and red patches in each panel denote the corresponding standard errors.
Proteomics for Studying the Effects of Ketogenic Diet Against Lithium Chloride/Pilocarpine Induced Epilepsy in Rats
The ketogenic diet (KD) demonstrates antiepileptogenic and neuroprotective efficacy, but the precise mechanisms are unclear. Here we explored the mechanism through systematic proteomics analysis of the lithium chloride-pilocarpine rat model. Sprague-Dawley rats (postnatal day 21, P21) were randomly divided into control (Ctr), seizure (SE), and KD treatment after seizure (SE + KD) groups. Tandem mass tag (TMT) labeling and liquid chromatography-tandem mass spectroscopy (LC-MS/MS) were utilized to assess changes in protein abundance in the hippocampus. A total of 5,564 proteins were identified, of which 110 showed a significant change in abundance between the SE and Ctr groups (18 upregulated and 92 downregulated), 278 between SE + KD and SE groups (218 upregulated and 60 downregulated), and 180 between Ctr and SE + KD groups (121 upregulated and 59 downregulated) (all p < 0.05). Seventy-nine proteins showing a significant change in abundance between SE and Ctr groups were reciprocally regulated in the SE + KD group compared to the SE group (i.e., the seizure-induced change was reversed by KD). Of these, five (dystrobrevin, centromere protein V, oxysterol-binding protein, tetraspanin-2, and progesterone receptor membrane component 2) were verified by parallel reaction monitoring. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis indicated that proteins of the synaptic vesicle cycle pathway were enriched both among proteins differing in abundance between SE and Ctr groups as well as between SE + KD and SE groups. This comprehensive quantitative proteomics analysis of KD-treated epilepsy revealed novel molecular mechanisms of KD antiepileptogenic efficacy and potential treatment targets.
INTRODUCTION
Epilepsy is a chronic disease characterized clinically by recurrent and unpredictable seizures (Fisher et al., 2005) due to uncontrolled neuronal hyperactivity. A recent large-scale epidemiological survey of 196 countries and regions around the world found that there were 45.9 million people with epilepsy in 2016, with highest incidence in children aged 5 to 9 years (Beghi et al., 2019). Severe status epilepticus or recurrent seizures can cause cognitive decline, impair quality of life, and increase the risks of injury and sudden death (Nashef et al., 1995). The most common treatments for epilepsy are oral antiepileptic drugs (AEDs). However, about 30% of children are resistant to currently available AEDs (Pluta and Jablonski, 2011).
The ketogenic diet (KD) is a high fat, low carbohydrate regime widely considered an effective non-drug treatment for epilepsy with documented anticonvulsant, antiepileptogenic, and neuroprotective effects on clinically refractory epilepsy and animal models of epilepsy (Lusardi et al., 2015; Simeone et al., 2018; Karimzadeh et al., 2019). Multiple therapeutic mechanisms have been proposed for KD-induced antiepileptogenesis, including increased adenosine and decreased DNA methylation, reduced mTORC1 activity, and blockade of histone deacetylases (Koene et al., 2019; Boison and Rho, 2020). Thus, it is critical to comprehensively assess the molecular changes associated with the KD in epilepsy. Moreover, the KD is often unpalatable, especially to children, and must be sustained for years, resulting in poor compliance. In addition, constipation and weight loss are common adverse effects (Cai et al., 2017). In several clinical studies, the KD was also found to influence mood. Although most of these studies reported positive effects (Halyburton et al., 2007; McClernon et al., 2007; Dm et al., 2016), some reported no effects or even negative effects on mood (Lambrechts et al., 2013; Iacovides et al., 2019). These inconsistencies may be related to the type of disease before KD treatment, the number of subjects, and the duration of KD compliance, necessitating larger-scale, multiple-center studies to assess the influence of the KD on mood in specific diseases. Death during KD treatment has also been reported secondary to severe infection and malnutrition (Kang et al., 2004; Suo et al., 2013). Therefore, a better understanding of the therapeutic mechanisms may improve clinical application and reveal new targets for clinical anti-epileptic treatment.
Previous studies on the antiepileptogenic efficacy of the KD focused mainly on changes in the expression of specific preselected proteins or genes, while few have used gene chips to objectively explore larger-scale gene expression changes associated with KD treatment of epilepsy (Bough et al., 2006; Jeong et al., 2010). Modern proteomics techniques can reveal similarities and differences in protein expression at the individual, pathway, and network levels under various physiological and pathological states, thus providing a more comprehensive understanding of disease pathology and progression (Atamna et al., 2002). Such proteomics studies have examined the pathogenesis of epilepsy (Walker et al., 2016; Sadeghi et al., 2017), but not the mechanisms underlying the antiepileptogenic action of KD. At present, the main technologies used in proteomics research are two-dimensional gel electrophoresis and mass spectrometry (MS). The former is technically demanding, is not amenable to automation, and has limited separation capacity, especially for low abundance and hydrophobic proteins. Alternatively, mass spectrometry is suitable for high-throughput analysis by automation and can discriminate proteins of similar size and isoelectric point. Therefore, we conducted the first proteomics analysis of the antiepileptogenic response to KD in the rat lithium chloride-pilocarpine-induced epileptic model using MS-based tandem mass tag (TMT) quantitative proteomics.
Animal Preparation
Postnatal day 21 (P21) Sprague-Dawley rats (n = 45) were obtained from JOINN Laboratories, Co. Ltd. (Suzhou, China) [License no. SCXK(SU) 2018-0006]. Animals were treated in accordance with the guidelines set by the National Institutes of Health (Bethesda, MD, United States) for the humane treatment of animals. Animal experiments were approved by the Animal Experimental Ethics Committee of Suzhou University. All rats were raised under a 12 h:12 h light: dark cycle with free access to drinking water and the indicated diet (normal or KD). Animals were protected from bright lights and excessive noise during housing. Rats were first randomly divided into a control group (Ctr, n = 10) and seizure model group (n = 35). Rats exhibiting status epilepticus following lithium chloride-pilocarpine treatment (detailed below) were then randomly assigned to the normal diet group (SE) or KD diet group (SE + KD).
Induction of Status Epilepticus
Status epilepticus was induced by lithium chloride-pilocarpine in accordance with our previous study (Chen et al., 2019). Briefly, 35 rats were injected intraperitoneally with 127 mg/kg lithium chloride (Sigma-Aldrich, United States) at P21 and 24 h later (P22) with 1 mg/kg scopolamine hydrobromide (TargetMol, United States) to reduce the peripheral cholinergic response to pilocarpine. Thirty minutes later, 320 mg/kg pilocarpine (Sigma-Aldrich, United States) was injected and response scored according to the Racine scale (Racine, 1972) as follows: (0) no abnormality; (1) mouth and facial movements; (2) head nodding; (3) unilateral forelimb clonus; (4) rearing with bilateral forelimb clonus; and (5) rearing and falling. Animals were selected for further study only if the seizure degree reached level IV or above (n = 28). The onset of status epilepticus was characterized by initial immobility and chewing followed by repetitive clonic activity of the trunk and limbs, repeated rearing with forelimb clonus and falling interspersed with immobility, chewing, and myoclonic jerks singularly or in series. Acute status epilepticus was stopped after 60 min by intraperitoneal administration of 300 mg/kg chloral hydrate (Sigma-Aldrich, United States). Five rats died due to generalized tonic seizures. Animals surviving status epilepticus were randomly divided into the normal diet SE group (n = 12) and SE + KD (n = 11) group. There were no differences in seizure duration and severity between groups. In each group, 10 rats were randomly labeled for weight and blood ketone measurements. After weight and blood ketone were measured, six rats in each group were randomly labeled for proteomics testing and parallel reaction monitoring (PRM) verification. Control group rats received the same treatments
Dietary Intervention
During the modeling period (P21-P22), all groups were fed a normal diet. After vehicle treatment or status epilepticus induction, Ctr and SE groups continued to receive a normal diet for 28 days (4.5% fat, 20% protein and 50% carbohydrate), while the SE + KD group was fed the KD for 28 days (70% fat, 20% protein, and no carbohydrate). The KD formula was reported in detail previously . Further detail contents of the diets are shown in Table 1. Both diets were obtained from the Chinese Academy of Sciences, Shanghai Experimental Animal Center (Shanghai, China).
Weight and Blood Ketone Monitoring
Body weight and blood ketones were recorded at P49. After the rats were anesthetized, blood samples were collected from the tail vein and blood ketone levels measured using a Keto-detector (Beijing Yicheng Bioelectronics Technology, Co., Ltd., China).
Protein Extraction and Digestion
Samples of hippocampus were extracted, flash frozen at −80 °C, ground into powder over liquid nitrogen, and transferred to 5-mL centrifuge tubes. Four volumes of pyrolysis buffer containing 8 M urea and 1% protease inhibitor mixture (Calbiochem, San Diego, CA, United States) were added and the mixture sonicated three times on ice at high intensity using a Scientz ultrasonic system (Scientz, Ningbo, China). After centrifugation at 12,000 × g for 10 min at 4 °C, the supernatant was transferred into another centrifuge tube, and the sediment at the bottom was discarded. The supernatant protein concentration was measured by the BCA kit (Beyotime, China). Supernatant proteins were then digested in trypsin (Promega, Madison, WI, United States) as described.
Tandem Mass Tag (TMT) Labeling
After protein digestion, peptides were desalinated on a chromatographic X C18 SPE column (Phenomenex, Torrance, CA, United States), vacuum-dried, dissolved in 0.5M TEAB (Sigma-Aldrich), and labeled according to the operation instructions of the 9-plex TMT kit (Thermo Fisher Scientific).
High-Performance Liquid Chromatography (HPLC) Fractionation
Labeled peptides were fractionated into 60 samples over 60 min by high pH reverse-phase HPLC using an Agilent 300Extend C18 column (5 µm particles, 4.6 mm ID, 250 mm length) and 8-32% acetonitrile (pH 9.0) gradient. Peptides were combined into 14 fractions and dried by vacuum centrifugation for mass spectroscopy.
Liquid Chromatography-Tandem Mass Spectroscopy (LC-MS/MS)
Peptides were dissolved in 0.1% formic acid (solvent A) and loaded directly onto a homemade reversed-phase analytical column (15-cm length, 75 µm inner diameter). Samples were then eluted at 350 nL/min using a mobile phase consisting of 0.1% formic acid in 98% acetonitrile (solvent B) under the control of an EASY-nLC 1000 UPLC system (Thermo Fisher Scientific). The elution protocol was as follows: 9-26% solvent B for 40 min, 26-35% solvent B for 14 min, 35-80% solvent B for 3 min, and holding at 80% for the last 3 min. Eluted peptides were then subjected to nanoelectrospray ionization (NSI) followed by tandem mass spectrometry (MS/MS) using the Q ExactiveTM Plus system (Thermo Fisher Scientific) coupled to the UPLC. The electrostatic voltage applied was 2.1 kV and the m/z scan range was 400 to 1500. Both intact peptides and fragments were detected in the Orbitrap at resolutions of 70,000 and 35,000 FWHM, respectively. Peptides were then selected for MS/MS using a normalized collision energy (NCE) setting of 28. A data-dependent procedure that alternated between one MS scan followed by 20 MS/MS scans was applied for the top 20 precursor ions above a threshold ion count of 1 × 10⁴ in the MS survey scan with 30.0 s dynamic exclusion. Automatic gain control (AGC) was set at 5E4. Fixed first mass was set as 100 m/z.
Database Searches
The MS/MS data were processed using Maxquant (v.1.5.2.8) and searched against the Rat_Protemoe_1905 database (29,947 sequences). A reverse decoy database was used to calculate the false positive rate caused by random matching. Trypsin/P was specified as the cleavage enzyme allowing for up to two missing cleavages. The minimum peptide length was set at seven and the maximum number of peptide modifications at five. The mass tolerance for precursor ions was set to 20 ppm for the first search and to 5 ppm for the main search, and the mass tolerance for fragment ions was set as 0.02 Da. Carbamidomethyl on Cys was specified as the fixed modification, and acetylation and oxidation on Met were specified as variable modifications. False discovery rate (FDR) was adjusted to < 1%.
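The reverse-decoy strategy described above supports a simple illustration of how a score cutoff consistent with an FDR below 1% can be chosen. The sketch below is a generic target-decoy calculation on synthetic scores, not the procedure implemented in MaxQuant, and all values are placeholders.

```python
# Generic target-decoy FDR sketch (an illustration only, not the MaxQuant
# procedure): PSMs are ranked by score and the estimated FDR at each cutoff is
# the number of decoy hits divided by the number of target hits. Scores are
# synthetic; a full implementation would also enforce monotone q-values.
import numpy as np

rng = np.random.default_rng(4)
target_scores = rng.normal(loc=30, scale=8, size=5000)   # synthetic target PSM scores
decoy_scores = rng.normal(loc=15, scale=8, size=5000)    # synthetic reversed-decoy scores

scores = np.concatenate([target_scores, decoy_scores])
is_decoy = np.concatenate([np.zeros(5000, bool), np.ones(5000, bool)])

order = np.argsort(scores)[::-1]                          # best scores first
decoy_hits = np.cumsum(is_decoy[order])
target_hits = np.cumsum(~is_decoy[order])
fdr = decoy_hits / np.maximum(target_hits, 1)

passing = np.where(fdr <= 0.01)[0]
if passing.size:
    cut = passing[-1]
    print(f"score cutoff ≈ {scores[order][cut]:.1f}, targets accepted = {target_hits[cut]}")
```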
Parallel Reaction Monitoring (PRM)
We used LC-PRMMS analysis to verify protein expression levels derived from TMT analysis. Peptides remaining from proteomics analyses (above) were dissolved in 0.1% formic acid (solvent A), loaded directly onto a homemade reversed-phase analytical column, and eluted at a constant flow rate of 500 nL/min using the following mobile phase protocol control by an EASY-nLC 1000 UPLC system (Thermo Fisher Scientific): 6 to 25% solvent B (0.1% formic acid in 98% acetonitrile) over 40 min, 25 to 35% solvent B over 12 min, 35 to 80% over 4 min, then holding at 80% for the last 4 min. The peptides were subjected to NSI followed by tandem mass spectrometry (MS/MS) using the Q ExactiveTM Plus system (Thermo Fisher Scientific) coupled to the UPLC. The electrospray voltage applied was 2.0 kV, m/z scan range was 360 to 1080 for full scan, and intact peptides were detected in the Orbitrap at a resolution of 70,000. Peptides were then selected for 20 MS/MS scans on the Orbitrap at a resolution of 17,500 using a data-independent procedure. AGC was set at 3E6 for full MS and 1E5 for MS/MS. The maximum injection time was set at 50 ms for full MS and 110 ms for MS/MS. The isolation window for MS/MS was set at 1.6 m/z. The NCE was 27% with high energy collision dissociation (HCD). The resulting MS data were processed using Skyline (v.3.6). Peptide settings were as follows: enzyme was set as trypsin [KR/P], max missed cleavage as 0, peptide length as 7-25, and fixed modification as alkylation on Cys. The transition settings were as follows: precursor charges were set as 2, 3, ion charges as 1, and ion as b, y. The product ions were set from ion 3 to last ion, and the ion match tolerance was set as 0.02 Da.
Gene Ontology (GO) Annotation
Gene Ontology is a major bioinformatics initiative to unify gene and gene product attributes across all species. The GO annotations for this study were derived from the UniProt-GOA database 1 . First, identified protein IDs were converted to UniProt IDs and then mapped to GO IDs. For identified proteins not annotated by the UniProt-GOA database, InterProScan was used to annotate GO function based on protein sequence alignment. Proteins were classified by GO annotation based on three categories: biological process, cellular component, and molecular function.
Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway Annotation
Proteins were then annotated to KEGG pathways using the online service tools KEGG automatic annotation server (KAAS) and KEGG Mapper.
GO and KEGG Pathway Functional Enrichment
A two-tailed Fisher's exact test was used to test the enrichment of identified proteins against all proteins in GO and KEGG databases, with a corrected p < 0.05 considered significant.
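As a concrete illustration of the enrichment test just described, the sketch below evaluates one hypothetical GO term with a two-tailed Fisher's exact test on a 2×2 table; the counts and the term are invented, not results from this study.

```python
# Sketch of the enrichment test: a 2x2 table counting how often one GO term (or
# KEGG pathway) occurs among the differentially abundant proteins versus the
# background, evaluated with a two-tailed Fisher's exact test. Counts invented.
from scipy.stats import fisher_exact

diff_in_term, diff_not_in_term = 12, 67        # differential proteins in / not in the term
bg_in_term, bg_not_in_term = 150, 4511         # remaining background proteins in / not in

odds_ratio, p_value = fisher_exact(
    [[diff_in_term, diff_not_in_term],
     [bg_in_term, bg_not_in_term]],
    alternative="two-sided",
)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```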
Statistical Analysis
Body weights and blood ketones were compared among groups by one-way analysis of variance (ANOVA) with the indicated post hoc tests for pair-wise comparisons. A p < 0.05 was considered significant for all tests. GraphPad Prism version 5.0 was used for all data processing.
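A minimal sketch of the group comparison described above is given below, using one-way ANOVA on invented values for three groups of ten; the numbers are placeholders, and a significant result would be followed by the post hoc pair-wise tests mentioned in the text.

```python
# Minimal sketch of the group comparison: one-way ANOVA across Ctr, SE and
# SE + KD applied to invented blood-ketone values (n = 10 per group).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
ctr = rng.normal(0.5, 0.1, 10)
se = rng.normal(0.55, 0.1, 10)
se_kd = rng.normal(1.8, 0.3, 10)

f_stat, p_value = f_oneway(ctr, se, se_kd)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")   # p < 0.05 would trigger post hoc pair-wise tests
```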
Effects of the Ketogenic Diet on Appearance
Rats receiving the KD diet following status epilepticus induction (SE + KD group) gained substantially less weight after the 28 days observation period than both seizure-induced rats fed a regular diet (SE group, p < 0.01) and control rats (Ctr group, p < 0.01) ( Figure 1A). Most SE + KD rats developed constipation and oily fur but otherwise were active and showed no evidence of infectious or respiratory complications, and none of them died.
Blood Ketones
As shown in Figure 1B, blood ketone levels were significantly higher in the SE + KD group than Ctr and SE groups (p < 0.01), but did not differ between Ctr and SE groups (p > 0.05).
LC-MS/MS
The abundances of hippocampal proteins were compared among Ctr, SE, and SE + KD groups using LC-MS/MS to identify those showing differential abundance caused by KD (Figure 2). A total of 238,264 secondary spectrograms were obtained by mass spectrometry, and 82,100 spectrograms were available for analysis. A total of 41,645 peptide segments were identified, among which 38,097 were specific segments. In total, 5,564 proteins were identified, of which 4,740 were quantifiable. The screening criteria for differential abundance of proteins were fold-change > 1.2 (upregulated) or < 0.83 (downregulated) and p < 0.05. According to these criteria, 110 proteins exhibited a significant change in abundance between the SE and Ctr groups (18 upregulated and 92 downregulated), 180 between SE and SE + KD groups (121 upregulated and 59 downregulated), and 278 between SE + KD and Ctr groups (218 upregulated and 60 downregulated). Detailed data are provided in Supplementary Table S1. Optimized screening criteria were then applied for those proteins showing reciprocal abundance changes between SE vs. Ctr and SE + KD vs. SE groups. In total, 79 proteins met this condition (Supplementary Table S2), of which 72 were downregulated in the SE group compared to the Ctr group but upregulated in the SE + KD group compared to the SE group (i.e., downregulation induced by seizure was reversed by KD). The five showing the largest fold-changes were Hmgb3 protein, cyclic nucleotide-gated channel beta 3, aldose reductase-related protein 1-like, complexin 3, and solute carrier family 17 (sodium-dependent inorganic phosphate cotransporter) member 6. The other seven proteins showing reciprocal regulation were upregulated in the SE group compared to the Ctr group but downregulated in the SE + KD group compared to the SE group. The five proteins showing the largest fold changes among these seven were round spermatid basic protein 1, uncharacterized protein M0R9L6, cyclin dependent kinase inhibitor, reproductive homeobox on X chromosome 12, and IQ motif containing GTPase activating protein 1 (Predicted) isoform CRA.b.
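The fold-change and p-value screening, together with the reciprocal-regulation criterion, can be expressed in a few lines; the sketch below uses an invented three-protein table with illustrative column names rather than the study's data.

```python
# Sketch of the screening step, assuming a table of fold changes and p-values
# for the two pairwise comparisons; column names and values are illustrative.
import pandas as pd

df = pd.DataFrame({
    "protein":      ["A", "B", "C"],
    "fc_SE_vs_Ctr": [0.70, 1.35, 0.95],
    "p_SE_vs_Ctr":  [0.01, 0.03, 0.40],
    "fc_KD_vs_SE":  [1.40, 0.75, 1.10],
    "p_KD_vs_SE":   [0.02, 0.04, 0.60],
})

def is_diff(fc, p):
    # fold-change > 1.2 or < 0.83 with p < 0.05, as used in the text
    return ((fc > 1.2) | (fc < 0.83)) & (p < 0.05)

diff_se = is_diff(df["fc_SE_vs_Ctr"], df["p_SE_vs_Ctr"])
diff_kd = is_diff(df["fc_KD_vs_SE"], df["p_KD_vs_SE"])

# Reciprocal regulation: the seizure-induced change is reversed by the KD.
reciprocal = diff_se & diff_kd & (
    ((df["fc_SE_vs_Ctr"] < 0.83) & (df["fc_KD_vs_SE"] > 1.2)) |
    ((df["fc_SE_vs_Ctr"] > 1.2) & (df["fc_KD_vs_SE"] < 0.83))
)
print(df.loc[reciprocal, "protein"].tolist())   # -> ['A', 'B'] in this toy table
```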
FIGURE 1 | Comparison of body weight (A) and blood ketones (B) among control (Ctr), seizure (SE), and seizure with ketogenic diet (SE + KD) groups at P49 (n = 10 rats/group). Body weights were significantly reduced in SE and SE + KD groups compared to the Ctr group, and significantly lower in the SE + KD group compared to the SE group. Blood ketone level was significantly higher in the SE + KD group compared to Ctr and SE groups, but did not differ between Ctr and SE groups. **p < 0.01 compared to Ctr group, #p < 0.01 compared with SE group.
PRM Verification
To further verify the results of MS, five of these 79 reciprocally regulated proteins (dystrobrevin, centromere protein V, oxysterol-binding protein, tetraspanin-2, and progesterone receptor membrane component 2) were selected for PRM analysis. The screening criteria for PRM were based on the following principles: (1) proteins with potential biological function and significance; (2) proteins with a peptide fragment of no less than 1; (3) proteins associated with epilepsy but not reported or reported in only a few previous proteomic studies. Seven target peptide fragments of these five proteins were analyzed by Skyline, and the distributions of fragment ion peak areas are presented in Supplementary Figures S3-S9. The mean relative abundances of the target peptide fragments in each sample group are shown in Table 2. Differences in abundance of relative target proteins among sample groups were further calculated based on abundance of the corresponding peptide fragment (detailed data are provided in Table 3). Quantitative information on target peptide fragments was obtained from all nine samples. Compared to the Ctr group, the abundances of dystrobrevin, centromere protein V, oxysterol-binding protein, tetraspanin-2, and progesterone receptor membrane component 2 were downregulated in the SE group but upregulated in the SE + KD group, consistent with TMT results.
Bioinformatics Analysis
We used bioinformatics tools to analyze the proteins showing differential abundance among groups as detected by MS.
GO Functional Annotation Analysis
The GO database is an international standardized functional classification system that comprehensively describes the characteristics of genes and their products. We performed GO functional annotation searches for all proteins identified in this study and then subjected those demonstrating differential abundance among groups to GO enrichment analysis using Fisher's exact test. According to secondary GO annotations, most of the 79 reciprocally regulated proteins can be classified into three major categories: "molecular interactions," "cell components," and "biological processes." The most common "molecular interaction" was "protein binding" (54 proteins, 65%), followed by "catalytic activity" (11 proteins) and "enzyme regulator" (seven proteins). The top three "cell components" classifications were "cell" (58 proteins), "organelle" (46 proteins), and "membrane" (29 proteins), while the top three "biological processes" classifications were "cellular process" (44 proteins), "single-organism process" (36 proteins), and "biological regulation" (32 proteins) (Figure 3). Additional classifications included "positive regulation of transferase activity," "posttranscriptional regulation of gene expression," "establishment of protein localization to organelle," and other important biological processes. There were also significant group differences in expression of proteins with the annotations "protein phosphatase binding," "phosphatase binding," "Ras GTPase binding," "small GTPase binding," "GTPase binding," and other molecular functions, as well as "cytosol," "macromolecular complex," "nucleus," "protein complex," "vesicle," and other localization terms (Supplementary Figure S1).
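To make the enrichment step concrete, here is a minimal sketch of a one-term Fisher's exact test of the kind used for GO enrichment; the background counts in the usage line are made-up numbers, not values from this study.

```python
from scipy.stats import fisher_exact

def go_term_enrichment(hits_in_term, n_diff, bg_in_term, n_background):
    """2x2 Fisher's exact test for over-representation of one GO term among
    the differentially abundant proteins, against all identified proteins."""
    table = [
        [hits_in_term, n_diff - hits_in_term],
        [bg_in_term - hits_in_term,
         (n_background - n_diff) - (bg_in_term - hits_in_term)],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Example: 54 of 79 reciprocally regulated proteins annotated "protein binding",
# against a hypothetical background of 2,000 such annotations among 4,740 proteins.
print(go_term_enrichment(54, 79, 2000, 4740))
```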
KEGG Pathway Analysis
Proteins interact within pathways and networks to perform specific biological functions and regulate pathophysiological processes. We used KEGG pathway analysis to reveal the biological pathways and relevant regulatory processes involving hippocampal proteins differing in abundance among Ctr, SE, and SE + KD groups, especially those associated with epileptogenesis and the therapeutic mechanisms of KD. The proteins differing in abundance between SE and Ctr groups showed greatest enrichment in the "PI3K-Akt signaling pathway," proteins differing in abundance between SE + KD and SE groups showed greatest enrichment in the "vitamin digestion and absorption pathway," and proteins differing in abundance between SE + KD and Ctr groups showed greatest enrichment in the "glycosaminoglycan degradation pathway" (Supplementary Figure S2). Proteins related to the synaptic vesicle cycle pathway were enriched not only among those differing in abundance between SE and Ctr groups but also among those differing in abundance between SE + KD and SE groups. Moreover, the abundances of complexin 3 and solute carrier family 17 (sodium-dependent inorganic phosphate cotransporter) member 6, two proteins of this pathway, changed reciprocally between these comparisons.

FIGURE 2 | The entire proteomics experimental workflow. Hippocampus samples were reacted with different isotope-labeling TMT reagents after immunoaffinity depletion of high-abundance plasma proteins, SDS-PAGE separation, and FASP digestion. Samples were mixed and peptides fractionated by high-pH reverse-phase chromatography. Finally, LC-MS/MS was used for high-throughput screening of samples. Peptides were then analyzed for function using multiple bioinformatics tools.
DISCUSSION
In recent years, our team has conducted a series of studies on the neuroprotective and antiepileptogenic efficacies of KD in rats. Tian et al. (2015, 2016) found that chronic KD treatment reversed the adverse neurobehavioral, cognitive, and neurochemical changes in Sprague-Dawley rats subjected to recurrent neonatal seizures. Moreover, these studies utilized a novel "twist" seizure model to assess both spontaneous and induced seizures by coupling early-life flurothyl-induced neonatal seizures with later penicillin exposure, and demonstrated that KD could also increase the seizure threshold to penicillin. Lusardi et al. (2015) used two epileptic models to examine the effect of KD on epileptogenesis, and found that 100% of normal-fed rats demonstrated stage-3 seizures or higher after 15 pentylenetetrazol injections, whereas only 37% of KD-fed rats reached comparable seizure stages. They also found that normal-fed animals exhibited spontaneous seizures of progressively greater severity and frequency following pilocarpine induction, whereas KD-fed animals showed a prolonged reduction in seizure severity and frequency. Collectively, these studies demonstrated that KD can suppress epileptogenesis in rats. These findings and those of our previous study provide theoretical and technical support for the antiepileptogenic and neuroprotective effects of KD.
In the current study, we identified 79 proteins that were reciprocally regulated by KD (i.e., exhibiting upregulation in the SE group compared to the control group but downregulation in the SE + KD group compared to the SE group, or vice versa). These reciprocal changes may be attributed to the antiepileptogenic effect of the KD. Furthermore, the same reciprocal changes in five proteins (dystrobrevin, tetraspanin-2, oxysterol-binding protein, progesterone receptor membrane component 2, and centromere protein V) were verified by PRM. Proteins differing in abundance between both Ctr and SE groups as well as SE + KD and SE groups were enriched in synaptic vesicle recycling pathway proteins according to KEGG pathway analysis, and two of these proteins, solute carrier family 17 (sodium-dependent inorganic phosphate cotransporter) member 6 and complexin 3, were reciprocally regulated. We suggest the following pathogenic processes to explain epileptogenesis and its mitigation by the KD. The blood-brain barrier (BBB) was initially damaged by lithium chloride-pilocarpine-induced SE, as indicated by abnormal abundance of α-dystrobrevin (Rigau et al., 2007). In turn, BBB disruption induced neuroinflammation, as evidenced by tetraspanin-2 downregulation, which led to dysfunctional lipid metabolism, as evidenced by oxysterol-binding protein downregulation. Dysfunction of lipid metabolism induced mitochondrial dysfunction and deficient autophagy, as indicated by the changes in abundance of progesterone receptor membrane component 2 and centromere protein V, respectively. Finally, defective autophagy resulted in accumulation of damaged mitochondria, triggering epilepsy and neuronal death. Conversely, each of these pathogenic processes was reversed by KD. In addition, KD upregulated the abundance of solute carrier family 17 (sodium-dependent inorganic phosphate cotransporter) member 6 and complexin 3, both of which are neuroprotective (Ono et al., 1998; Van Liefferinge et al., 2015).
The dystrobrevins (DBs) α-DB and β-DB are cytosolic proteins encoded by the DTNA and DTNB genes, respectively. Alpha-DB in astrocyte end-feet is an important regulator of BBB permeability. It was reported that the aquaporin-4 water channel and Kir4.1 potassium channel were downregulated in the brain of DTNA knockout mice, resulting in enhanced cerebral capillary permeability, gradual cerebral edema, and ultimate damage to neurovascular units (Lien et al., 2012). Damage to the BBB can induce astrocyte dysfunction, neuroinflammation, and epilepsy (Rempe et al., 2018;Swissa et al., 2019). Our results suggest that KD mitigates epilepsy development in part by restoring BBB function through increased α-DB abundance.
Tetraspan-2 (Tspan2) is a small transmembrane protein widely distributed in the central nervous system. Knockout of Tspan2 activates white matter astrocytes and microglia (de Monasterio-Schrader et al., 2013), suggesting that Tspan2 inhibits neuroinflammation, a central pathogenic process in epilepsy (Ngugi et al., 2013). During the development of epilepsy, astrocytes and microglia proliferate, activate, and release inflammatory factors, leading to abnormal neural network connections and aggravating neurotoxicity (Rana and Musto, 2018). In contrast, KD promotes neuroprotection and suppresses epileptogenesis by inhibiting this inflammatory response (Stafstrom and Rho, 2012; Simeone et al., 2018). In the current study, the abundance of Tspan2 was downregulated in the SE group compared to the Ctr group but upregulated after KD. Therefore, we speculate that KD also suppresses epileptogenesis by increasing Tspan2 and suppressing epilepsy-associated neuroinflammation.
FIGURE 3 | Gene ontology (GO) annotation. Differentially abundant proteins were annotated according to molecular function, cell composition, and biological process, and were mainly annotated as "protein binding," "cell," and "cellular process," respectively, in these three categories.

There is a strong mutual interaction between cellular inflammation and lipid metabolism, as imbalanced lipid metabolism can result in inflammation (Sun et al., 2009), while inflammation can promote cellular lipid uptake and accumulation and inhibit cholesterol efflux (Khovidhunkit et al.). Oxysterol-binding protein (accession number Q5BK47), also known as oxysterol-binding protein-like 2 (OSBPL2), is a highly conserved transporter protein that controls cholesterol and PI(4,5)P2 levels in the plasma membrane (Wang et al., 2019b). In addition, OSBPL2 is involved in the synthesis of cholesterol and cholesterol ester. Knockout or silencing of the OSBPL2 gene inhibited AMPK activity and increased intracellular cholesterol and cholesterol ester synthesis (Wang et al., 2019a; Zhang et al., 2019). Imbalanced cholesterol homeostasis is implicated in the pathogenesis of multiple disorders, including cardiovascular, cerebrovascular, and central nervous system diseases (Chistiakov et al., 2016; Xue-Shan et al., 2016; Puglisi and Yagci, 2019). Altered levels of cholesterol and certain oxysterols have been reported in the hippocampus of rats following kainic acid-induced epilepsy (Ong et al., 2003; Heverin et al., 2012). We found that levels of the lipid metabolism-related molecules ApoE, clusterin, and ACAT-1 were upregulated after flurothyl-induced recurrent seizures in neonatal rats, while KD reversed these changes as well as the cognitive and neurobehavioral abnormalities associated with seizures (Tian et al., 2015). Thus, KD may also protect against epilepsy and associated sequelae by normalizing lipid homeostasis. Indeed, the downregulation of OSBPL2 observed in the SE group compared to the Ctr group was reversed by KD, which may in turn reduce cellular cholesterol accumulation, thereby mitigating oxidative stress and mitochondrial damage (Wang et al., 2019a). Accumulation of cholesterol is a major cause of mitochondrial dysfunction in different models and cells. In Alzheimer's disease and Niemann-Pick type C disease, mitochondrial cholesterol accumulation disrupts membrane physical properties and restricts the transport of glutathione into the mitochondrial matrix, thus impairing mitochondrial function (Torres et al., 2019). In a mouse non-alcoholic fatty liver disease model, cholesterol overload contributed to a reduction in mitochondrial membrane potential and ATP content, and to significant alterations in mitochondrial dynamics (Dominguez-Perez et al., 2019). Kim et al. (2006) found that 7-ketocholesterol enhanced 1-methyl-4-phenylpyridinium-induced mitochondrial dysfunction and cell death in PC12 cells. Progesterone receptor membrane component 2 (PGRMC2) is a member of the membrane-associated progesterone receptor (MAPR) family. It contains a heme-binding domain similar to cytochrome b5, and a recent study (Galmozzi et al., 2019) found that deletion of PGRMC2 reduced intracellular heme synthesis. Heme promotes neurogenesis as well as neuronal survival and growth. However, dysregulation of intracellular heme concentration can result in neurodegeneration and impaired neurological function (Gozzelino, 2016).
Reduction of heme synthesis in primary rat hippocampal neurons using N-methylprotoporphyrin reduced mitochondrial complex IV, activated carbon monoxide synthetase, and altered amyloid precursor protein (APP)α and APPβ protein levels, suggesting that decreased heme contributes to the neuronal dysfunction of Alzheimer's disease (Atamna et al., 2002). In accord with these findings, blockade of heme biosynthesis by siRNA-mediated knockdown and N-methylprotoporphyrin IX treatment in differentiated SH-SY5Y neuroblastoma cells resulted in mitochondrial membrane depolarization, lower intracellular ATP production, APP aggregation, suppressed soluble (s)APPα secretion, and increased sAPPβ secretion (Gatta et al., 2009). Reduced intracellular heme was thus shown to disrupt mitochondrial function. Moreover, mitochondrial dysfunction is a common pathway for neurodegeneration (Rusek et al., 2019), so we speculate that the decreased abundance of PGRMC2 in the SE group compared to the Ctr group is indicative of mitochondrial dysfunction, consistent with our previous study showing that flurothyl-induced seizures significantly depolarized mitochondrial membrane potential, reduced mitochondrial fusion protein 2 expression, and upregulated dynamin-related protein 1 (Drp1) in the hippocampus (Liu et al., 2018). Conversely, KD upregulated PGRMC2, suggesting that KD also protects against neuronal death and epilepsy by sustaining mitochondrial function (Simeone et al., 2018; Rusek et al., 2019).
Intracellular cholesterol accumulation not only damages mitochondria, but also impairs autophagy by interfering with the fusion of autophagosomes with endosomal-lysosomal vesicles (Barbero-Camps et al., 2018). Autophagy defects reduce the capacity of cells to remove damaged organelles, protein aggregates, macromolecules, and other toxic substances, leading to dysfunction and death. Further, numerous studies have implicated autophagy defects in epilepsy. Knockout of ATG-7, a key molecule in the autophagy cascade, leads to spontaneous seizures in mice, implying that inhibition of autophagy is sufficient to induce epilepsy (Boya et al., 2013). We also reported that the ratio of LC3 II/I was downregulated in the hippocampus of newborn rats subjected to repeated seizure induction using flurothyl, indicating reduced numbers of autophagosomes, while p62 was upregulated, indicating impaired autophagic flux. Centromere protein V (CENPV) contributes to the maintenance of cell dynamics by stabilizing microtubules (Honda et al., 2009), and this process is critical for autophagy. The microtubule organizing center (MTOC) containing CENPV is critical for centripetal transport of autophagosomes from the cell periphery as well as for the fusion of autophagosomes and lysosomes (Kochl et al., 2006; Xu et al., 2014). In the present study, the abundance of CENPV was reduced in the SE group, suggesting impaired microtubule stability leading to disrupted autophagy. We suggest that the ability of KD to activate autophagic pathways and reduce brain injury in response to both pentylenetetrazol-induced seizures (Wang et al., 2018) and lithium chloride-pilocarpine-induced seizures is mediated by CENPV upregulation.
The synaptic vesicle cycle plays an important role in maintaining the structural and functional integrity of the presynaptic terminal. Disruption of synaptic vesicle recycling leading to defects in synaptic transmission may contribute to neurological disorders such as Alzheimer's disease and autism (Waites and Garner, 2011), and changes in synaptic vesicle recycling have also been observed in pilocarpine-induced status epilepticus model rats (Upreti et al., 2012). Further, KD can support synaptic vesicle recycling (Hrynevich et al., 2016), so we speculate that KD also prevents epileptogenesis by normalizing this pathway. In fact, synaptic vesicle recycling pathway proteins were enriched in both populations of proteins demonstrating differential abundance between groups (SE vs. Ctr and SE + KD vs. SE), and two proteins involved in the synaptic vesicle cycle, solute carrier family 17 member 6 and complexin 3, were reciprocally regulated (downregulated in the SE group and upregulated after KD). Thus, these proteins may be the targets of KD for preventing epileptogenesis.
Solute carrier family 17 (Sodium-dependent inorganic phosphate cotransporter), member 6, also known as vesicular glutamate transporter 2 (VGLUT2, encoded by Slc17a6) is a low affinity transporter of glutamate from the cytoplasm into synaptic vesicles (Bellocchio et al., 2000). Expression is lower in the hippocampus of patients with intractable epilepsy and hippocampal sclerosis (Van Liefferinge et al., 2015), consistent with findings of reduced abundance in the SE group. Lobo et al. (2011) found that high glutamic acid exposure reduced VGLUT2 expression by hippocampal neurons, resulting in substantial excitotoxicity. As KD reversed this decline, improved glutamate transport may also contribute to reduced epileptogenesis.
The complexins (Cplxs) are four small SNARE-related proteins (Cplx1-4) that regulate rapid calcium-triggered exocytosis of synaptic vesicles, and thus are important for maintaining synaptic neurotransmission (Hazell and Wang, 2005; Yi et al., 2006). Knockout of all Cplx genes in mice significantly reduced the calcium-triggered release of glutamate and γ-aminobutyric acid from hippocampal and striatal neurons (Xue et al., 2008). Conversely, injecting recombinant Cplx2 into Aplysia buccal ganglion neurons inhibited neurotransmitter release, while injecting Cplx2 antibody increased release (Ono et al., 1998). Mice harboring a mutant Cplx1 gene exhibited ataxia and sporadic convulsions (Reim et al., 2001). In the current study, the abundance of Cplx3 was decreased in the SE group and was restored by KD, suggesting that KD may mitigate epileptogenesis by reducing uncontrolled glutamate release, thereby restoring appropriate excitatory-inhibitory balance.
CONCLUSION
To our knowledge, this is the first study to comprehensively analyze the changes in protein abundance induced by KD in epileptic model rats through quantitative proteomics. We identified several hundred proteins demonstrating differential abundance among control, epilepsy, and epilepsy plus KD groups, of which 79 were reciprocally regulated by SE and KD. Five of these proteins were further verified by PRM. Subsets of these proteins are implicated in lipid metabolism, blood-brain barrier integrity, mitochondrial function, neuroinflammation, and autophagy. Other proteins regulated by both seizures and KD are involved in synaptic vesicle recycling. Collectively, these findings provide clues to the molecular mechanisms underlying the antiepileptogenic effects of KD and define multiple potential therapeutic targets. However, the precise molecular mechanisms of action require further verification. In future studies, we will focus on selected KD-sensitive target proteins and examine the phenotypic changes conferred by knockout and overexpression, identify proteins interacting with the target proteins, observe the effects of changes in target protein expression levels on epilepsy-related pathophysiological processes, and examine whether KD can preserve neural circuit integrity, normal behavior, and cognition in epileptic rats via changes in target protein expression.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Experimental Ethics Committee of Suzhou University.
AUTHOR CONTRIBUTIONS
HN designed the study. YZ and MJ performed the experiments. GS, YW, and YS analyzed the data and are responsible for the statistical analysis. YZ wrote the manuscript. All authors have reviewed and approved this version of the manuscript.
The relationship between statistics of warm-season cloud episodes and synoptic weather regimes over the East Asian continent
In this study, the relationship between statistical properties (including zonal span, duration, and propagation speed) of warm-season cloud episodes in Hovmöller space and synoptic conditions over the East Asian continent is investigated for the period of May–July 1997–2002. Synoptic conditions are classified into four regimes: those with baroclinity only at lower level (L), only at upper level (U), at both lower and upper levels (B), or at neither level (N), and cloud streaks (i.e., episodes as identified through an automated procedure) in each regime are stratified based on their zonal span (length in the East–West direction). It is found that there exists a tendency for episodes in regime B to be larger than episodes in regime N. For larger and less frequent episodes with a zonal span more than about 1,400 km, low-level conditions appear to have slightly higher importance than upper-level conditions, as streaks in regime L tend to be larger than those in regime U. Overall, the results point to the possibility that both upper-level steering and low-level features are important for major episodes that propagate at the leeside of the Tibetan Plateau for long distances across the East Asian continent. A better understanding of the episode behavior in the area is important for future application to improve the quantitative precipitation forecasts in warm season.
Introduction
Based on high-resolution (2 km in space and 15 min in time) radar data, Carbone et al. (2002) found that warm-season precipitation episodes, defined as clusters of rain-producing systems in Hovmöller (longitude-time) space, in the continental United States (US) exhibit characteristics of propagation at the speed range of 10-25 m s⁻¹. The longevity of some episodes in space (in the zonal direction, specifically) and time, up to 3,000 km and 60 h, suggests the existence of an intrinsic predictability and thus the potential for improvements in quantitative precipitation forecast (QPF) at lead times over the range of 6-48 h (Fritsch and Carbone 2004). For East Asia, Wang et al. (2004) used the Geostationary Meteorological Satellite (GMS) infrared (IR) blackbody brightness temperature (T_BB) data and found similar properties in cloud episodes to the lee of the Tibetan Plateau (TP), particularly in early summer (May-June), with event scales up to 2,500 km and 40 h. Several recent studies of rainfall episodes in other parts of the world also showed coherent behavior of propagation, eastward in mid-latitudes/subtropics and westward in the tropics, including Laing et al. (2008) for Africa, Keenan and Carbone (2008) for Australia and the maritime continent, Levizzani et al. (2010) for Europe and the Mediterranean, Pereira et al. (2010) for South America, and Liu et al. (2008) for the Bay of Bengal.
An important property of these precipitation/cloud episodes lies in their close ties to the diurnal cycle of major elevated terrain (Carbone et al. 2002; Wang et al. 2004, 2005). As illustrated in Fig. 1, East Asian episodes tend to develop over the eastern TP near local dusk (1100 UTC or 1800 LST at 105°E) and then propagate eastward overnight, sometimes into the next day (cf. Fig. 2). The high terrain and its eastern slope act as the heat source to trigger deep convection (e.g., Holton 1968; Wallace 1975; Ahijevych et al. 2004; Huang et al. 2010), often on a daily basis, and subsequent downstream propagation tends to occur when strong upper-level steering winds are present in a sheared environment, sometimes for long distances (e.g., Rotunno et al. 1988). In the US, the nocturnal propagation of organized convection across the Great Plains is well known (e.g., Augustine and Howard 1991; Laing and Fritsch 1997; Davis et al. 2003), and such events may account for a sizeable fraction of all episodes (Carbone and Tuttle 2008). In East Asia, nocturnal convection to the lee of the plateau near the Sichuan Basin (SB, cf. Fig. 2) has been previously noted (e.g., Asai et al. 1998), and the eastward shift in the peaking time of summer rainfall along the Yangtze River Valley (YRV), from midnight at the upper reaches (i.e., over the SB area) to morning at the middle reaches and further to afternoon at the lower reaches, has recently been confirmed using rain-gauge data (Yu et al. 2007; Chen et al. 2010). Several nocturnal events that produced severe floods downstream from the plateau have also been studied (e.g., Wang and Orlanski 1987; Wang et al. 1993). The rainfall events that tie to the diurnal cycle of the terrain, due to their coherency and regularity, represent an area where the improvement in warm-season QPF appears quite promising at the present time.
Although warm-season precipitation episodes occur under a variety of synoptic conditions, many of which lack significant dynamical forcing and can be described as "weakly forced" (as opposed to cold-season conditions), they are nevertheless affected and modulated by their environment, as demonstrated by many earlier studies (e.g., Wang and Orlanski 1987; Tuttle and Carbone 2004). The role played by the environment can be essential, and this is perhaps especially true in East Asia (as compared to the US), where monsoon circulations are prominent (e.g., Ding 1992). Over China, particularly, the summer monsoon typically exhibits two distinct rainy periods in its life cycle (e.g., Lau et al. 1988; Chen et al. 2004). The first occurs roughly from late May to mid-July from southern to central China, and is associated with frequent development of Mei-yu fronts (also Tao and Chen 1987). Following the northward migration of the subtropical high and a monsoon break, the second rainy period generally covers late July through August and takes place over both southern and northeastern China (and northeastern Asia). The rainfall in southern China is related to the intertropical convergence zone (ITCZ) and tropical cyclones (e.g., Lau and Li 1984; Chen et al. 2004), while that to the north is mainly attributed to midlatitude frontal activities (e.g., Lau et al. 1988; Chen et al. 2004). To the lee of the TP (over southern-central China), Mei-yu fronts therefore often provide forcing of varying degree throughout May to July, i.e., during the first summer monsoon rainy period (e.g., Chen 1993; Chen et al. 2003). Thus, it is expected that some synoptic conditions in the environment favor the occurrence of more organized, longer-lived episodes, while others tend to suppress the larger events. Such an investigation of synoptic conditions is not only helpful for understanding the behavior of episodes, but also essential for a more successful application of their statistical properties in QPF as an aid to model prediction (e.g., Fritsch and Carbone 2004). As a first effort, the present study therefore examines this relationship through a comparison of statistical properties of episodes (or streaks) under different synoptic weather regimes, using a cloud-streak database extended from Wang et al. (2004).
The remaining part of this paper is arranged in the following manner. The data and methodology are described in Sect. 2, and the results of weather regime classification are discussed in Sect. 3. The statistical relationship between episodes and synoptic weather regimes are presented in Sect. 4, while Sect. 5 is devoted to the discussion and conclusion.
Data and methodology
The set of cloud streaks (referred to here as the episodes identified through the automated procedure to be described) used in the present study is the same as that in Wang et al. (2004), except that the data period is extended to 1997-2002 (from 1998-2001). As reviewed in Sect. 1, since the rainfall over southern-central China is mainly caused by tropical systems and the propagation of rainfall episodes to the lee of the TP nearly ceases in mid-summer when the westerly flow shifts north, all events in August are excluded from our analysis. The methodology to identify the streaks and obtain their statistics is detailed by Wang et al. (2004) and described briefly below. The computational domain of 20-40°N and 95-145°E (Fig. 2) was first divided into narrow N-S strips 0.1° wide in longitude, and T_BB values (at 5-km and 1-h resolutions) were averaged within each strip. Using time as the second dimension, data were thus aggregated into longitude-time (Hovmöller) space. To remedy the problem caused by rising background temperature from May to July and also to emphasize deeper, colder clouds, T_BB values > 0°C were replaced by 0°C before averaging. Then, cloud streaks were identified through a two-dimensional (2D) rectangular autocorrelation function (4° longitude × 8 h in size). The function is weighted evenly in one direction and by a negative cosine curve in the other, and by rotating it on each point in the Hovmöller space until the coefficient is maximized, the data orientation can be identified. A "fit" was recorded if the coefficient reached 0.4 at a given point, while streaks were recognized by fits at contiguous points. Using this procedure, endpoints of cloud streaks were identified and their statistics (zonal span, duration, and mean propagation speed) were obtained.
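The following sketch illustrates the aggregation step just described (meridional averaging into 0.1°-wide longitude strips after resetting T_BB > 0°C to 0°C); the array shapes and variable names are assumptions for illustration, and the streak-identification step itself is not reproduced here.

```python
import numpy as np

def hovmoller_tbb(tbb, lats, lons, lat_range=(20.0, 40.0), lon_bin=0.1):
    """Aggregate hourly T_BB fields of shape (time, lat, lon) into Hovmoller space.

    Warm pixels (T_BB > 0 degC) are reset to 0 degC before averaging so that
    deep, cold cloud tops dominate the longitude-time diagram.
    """
    tbb = np.minimum(tbb, 0.0)                              # clip warm pixels
    in_lat = (lats >= lat_range[0]) & (lats <= lat_range[1])
    merid_mean = tbb[:, in_lat, :].mean(axis=1)             # (time, lon)

    edges = np.arange(lons.min(), lons.max() + lon_bin, lon_bin)
    strip = np.digitize(lons, edges) - 1                    # strip index per column
    n_strips = edges.size - 1
    hov = np.full((tbb.shape[0], n_strips), np.nan)
    for b in range(n_strips):                               # average within each strip
        cols = strip == b
        if cols.any():
            hov[:, b] = merid_mean[:, cols].mean(axis=1)
    return hov                                              # rows: hours, cols: strips
```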
For the classification of synoptic regimes, the Japan Meteorological Agency (JMA) daily weather maps at 0000 UTC at the mean sea level (MSL), 850, 500, and 300 hPa were used. Since our main interests were over the continent, where episode propagation is more frequent and pronounced, weather systems linked to baroclinity inside 24-36°N, 100-125°E (cf. Fig. 2) were identified through visual inspection and used to classify the data period into four mutually exclusive regimes with the criteria listed in Table 1: (1) the low-level regime (L regime), where a front or a low pressure system with central pressure < 1,000 hPa existed at the surface (i.e., MSL), or strong horizontal wind shear or a southwesterly low-level jet (LLJ, ≥ 12.5 m s⁻¹) appeared at 850 hPa; (2) the upper-level regime (U regime), where a closed low or a trough with clear v-components, or strong southwesterly flow of ≥ 15 m s⁻¹ ahead of a trough existed at 500 hPa, or a trough appeared at both 500 and 300 hPa; (3) the "both" regime (B regime), where the criteria for both lower- and upper-level systems were met; and (4) the "no" regime (N regime), where neither lower- nor upper-level systems existed (Table 1). Note that in the B regime, favorable conditions existed at both lower and upper levels, but they are not necessarily dynamically coupled since the identification is independent. Likewise, although manual checks for selected periods reveal that the weather systems identified to the lee of the TP were often associated with widespread convection, there is no guarantee that a close link always exists between systems that occur nearby. In fact, during this procedure, the identification of synoptic weather regimes was kept strictly independent from that of cloud streaks, so that no bias is introduced. After the daily weather regimes were classified, all eastward-moving cloud streaks identified were then categorized according to the regime at the date of their midpoint in the Hovmöller space. All streaks that originated east of 125°E or centered east of 130°E were excluded, as they were outside the area of interest. Cases that moved westward were similarly omitted so that tropical influences during May-July were also eliminated.
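A minimal sketch of how the four mutually exclusive regimes could be assigned once the presence of low-level and upper-level systems has been determined for a given day (the two boolean inputs stand for the visual-inspection criteria of Table 1):

```python
def classify_regime(low_level_system: bool, upper_level_system: bool) -> str:
    """Return 'L', 'U', 'B', or 'N' for one day, following the criteria in Table 1.

    low_level_system  : surface front or low (< 1,000 hPa), strong 850-hPa shear,
                        or a southwesterly LLJ (>= 12.5 m/s)
    upper_level_system: 500-hPa closed low/trough, strong (>= 15 m/s) southwesterly
                        flow ahead of a trough, or a trough at both 500 and 300 hPa
    """
    if low_level_system and upper_level_system:
        return "B"
    if low_level_system:
        return "L"
    if upper_level_system:
        return "U"
    return "N"
```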
Weather regime classification
Using the described method, the synoptic regimes in the order of N, L, U, and B occurred roughly 26, 36, 11, and 26% of the time during our data period, while the remaining 1% was not classified since T_BB data were missing (Fig. 3a). Days in regime B were almost twice as frequent in May-June as in July (114/366 = 31.1% vs. 29/186 = 15.6%). On the contrary, there were fewer days in regime L in May (51/186 = 27.4%) than in June-July (150/366 = 41.0%), when slow-moving Mei-yu fronts often appeared near 30°N (e.g., Tao and Chen 1987; Ding 1992). Consequently, days with no system (regime N) were the fewest in June (12.8%) but more common in May and July (25.3 and 39.2%). Regime U was the least frequent (10.5%) of all regimes, with the majority occurring in May (33 days, Fig. 3a). The above results of the intra-seasonal variation in the frequencies of regimes B, L, U, and N are in general agreement with the gradual northward migration of the subtropical high and the baroclinic zone (e.g., Lau et al. 1988; Chen et al. 2004).
Among a total of 2,505 eastward-moving cloud streaks, without considering their span and duration, the largest fraction occurred in regime L (38.7%), followed by regimes N (25.9%), B (25.7%), and U (9.7%, Fig. 3b), roughly in proportion to the number of days in each regime, except for July, where streaks in regime N were relatively few. When only the 1,794 streaks at least 3 h in duration were considered (Fig. 3c), the percentages rose slightly in regimes L, U, and B (by about 1%) but dropped in regime N (by 3%). The difference between Fig. 3b and c indicated only a slightly higher percentage of short-lived streaks (i.e., those shorter than 3 h) in regime N compared to other regimes, and the relationship between synoptic regimes and the number of cloud streaks is apparently weak when the streaks are not stratified according to their size.
Statistical relationship between episodes and synoptic regimes
The relationship between cloud streaks and synoptic regimes is further examined in Table 2, which lists the threshold zonal spans reached by streaks at several recurrence frequencies in each regime. Streaks in regime B generally had the largest thresholds. On the contrary, streaks in regime N had the smallest thresholds at all frequencies, since no favorable system existed to help organize the convection into episodes of longer duration. Thus, Table 2 suggests that synoptic forcing at both low levels and aloft (over deep layers for most cases) was helpful for the propagation of cloud streaks. Regime L contained the most days and streaks (201 and 711), and the cut-off values at 1 per day and 1 per 2 days were quite comparable to those in regime B (Table 2). For large episodes occurring less than once a week, thresholds in regime L were also the second largest among all categories. Regime U, though the least frequent (58 days and 188 cases), was also quite favorable for events of about several hundred to 1,400 km in span. For episodes at low recurrence frequency (i.e., those with larger span and longer duration), however, the values in regime U became increasingly smaller than those in regime B (Table 2). This suggests that upper-level troughs with their forcing alone were helpful for streaks to grow into moderate size, but low-level conditions tended to play an increasingly more important role if very large episodes were to develop. The scatter plots of zonal span versus duration for eastward-moving streaks ≥ 3 h (Fig. 4) also indicated that there were greater numbers of large events in regimes B and then L (say, for episodes with span > 1,500 km or duration > 30 h), and these streaks tended to travel at a speed of about 17-20 m s⁻¹, slightly faster than both the mean (13-14 m s⁻¹) and median speed (about 12.5 m s⁻¹) of all events sampled in each regime (Fig. 4b, d). Regimes N and U (Fig. 4a, c), on the other hand, contained relatively fewer large streaks. However, a small fraction of the cloud streaks in the latter two categories could still reach about 2,500 km and 36 h, i.e., develop under weakly forced conditions, similar to the result of Carbone et al. (2002).
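The per-regime statistics discussed above (counts, spans, and propagation speeds of eastward-moving streaks at least 3 h long) can be summarized with a simple helper such as the sketch below; the streak records are assumed to carry a zonal span in km, a duration in hours, and a regime label.

```python
import numpy as np

def regime_summary(streaks, regime, min_duration_h=3.0):
    """Summarize streaks assigned to one regime (speed follows from span/duration)."""
    sel = [s for s in streaks
           if s["regime"] == regime and s["duration_h"] >= min_duration_h]
    if not sel:
        return {"n": 0}
    span = np.array([s["span_km"] for s in sel])
    dur = np.array([s["duration_h"] for s in sel])
    speed = span * 1000.0 / (dur * 3600.0)                   # m/s
    return {
        "n": len(sel),
        "median_speed_ms": float(np.median(speed)),
        "mean_speed_ms": float(speed.mean()),
        "span_km_90th_percentile": float(np.percentile(span, 90)),
    }
```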
Discussion and conclusion
Results in Sects. 3 and 4 investigate only the statistical properties of cloud episodes in relation to synoptic-scale weather regimes over the East Asian continent. To further make physical links, the European Centre for Medium-Range Weather Forecasts (ECMWF) gridded analyses at 1.125° latitude/longitude resolution at 0000 UTC during the data period were used to produce composites for each regime. The 500-hPa pattern for regime B (Fig. 5a) shows a confluent trough approaching our domain of interest (24-36°N, 100-125°E), close to the averaged midpoint position of all eastward-moving cloud streaks ≥ 3 h. The trough has an east-northeast to west-southwest orientation with west-northwesterly flow behind, and the wind speeds ahead of it, over eastern China, are about 5-14 m s⁻¹. A similar trough did not exist, and both the height gradient and wind speed were much weaker, in the composite for regime N (Fig. 5b). Thus, the (mean) cyclonic vorticity advection and the associated divergence at upper levels in regime B (e.g., Bluestein 1993, Sec. 5.7; Doswell and Bosart 2001) produce synoptic-scale upward motion beneficial for convection development and maintenance. The baroclinity and vertical westerly wind shear, likewise, are both significantly stronger in regime B than in regime N, and such a sheared environment is more prone to the maintenance and propagation of rain-producing systems (e.g., Rotunno et al. 1988). At 850 hPa, a closed low and a clear wind shear line also appear near the Yangtze River Valley along 30°N in the composite (thick dashed) for regime B, with strong southwesterly winds ≥ 15 m s⁻¹ and convergence exceeding 0.3 × 10⁻⁴ s⁻¹ (Fig. 5c). These weather features are reminiscent of those associated with Mei-yu fronts during the first active period of the summer monsoon in the region (e.g., Lau et al. 1988; Chen et al. 2004), and can provide a means to trigger and organize convection (e.g., Chen et al. 2003, 2008) and to further transport and supply moisture from the South China Sea (e.g., Chen and Chen 1995). In the composite at 850 hPa for regime N, a similar shear line does not exist and the flow is mainly from the south and much weaker (Fig. 5d). In the study by Trier et al. (2006), the authors also found that both the low-level frontal zone and the LLJ were important for the maintenance of several long-lived nocturnal propagating episodes over the central US. For regime U, the composites at 500 and 850 hPa were similar to Fig. 5a, d, while they were similar to Fig. 5b, c for regime L. Thus, these composites are not shown here.
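The regime composites described above amount to averaging the 0000 UTC analyses over all days assigned to a regime; a minimal sketch (with an assumed mapping from dates to 2-D fields) is given below.

```python
import numpy as np

def regime_composite(field_by_date, regime_by_date, regime):
    """Average a gridded field (e.g., 500-hPa height) over all days in one regime."""
    days = [d for d, r in regime_by_date.items() if r == regime and d in field_by_date]
    if not days:
        raise ValueError(f"no days classified as regime {regime!r}")
    stack = np.stack([field_by_date[d] for d in days], axis=0)
    return stack.mean(axis=0), len(days)
```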
In the present work, the warm-season cloud episodes over East Asia, identified by Wang et al. (2004) using GMS IR T_BB data and extended for the period of May-July 1997-2002, were studied. Through the use of JMA weather maps and the criteria listed in Table 1, daily synoptic conditions for the domain of 24-36°N, 100-125°E during the data period were classified into four mutually exclusive types based on the presence of baroclinity at different levels: regime L, where weather systems favorable for convection existed only at low levels; regime U, where systems were present only at upper levels; regime B, where the criteria for both regimes L and U were met; and regime N, where no system was present in either the lower or upper troposphere. The dependency of episodes (streaks) on weather regimes was examined from a statistical standpoint, and discussed through the use of composited ECMWF analyses.
When cloud streaks were stratified based on their size, a tendency existed for episodes to be larger under regime B and smaller under regime N, as expected. Thus, conditions were most favorable when significant synoptic systems existed at both upper and lower levels through the troposphere in regime B, which occurred about 26% of the time and more often in May and June. In contrast, conditions were least favorable with no significant system in regime N (also about 26%, and less frequent in June). When regimes L and U were compared, favorable conditions at low levels appeared to be slightly more important relative to those at upper levels for larger events exceeding about 1,400 km. Overall, the results point to the possibility that both upper-level forcing and steering, as well as low-level features, are important for major episodes that propagate for long distances across the East Asian continent.
Through a straightforward comparison of statistical properties of episodes (or streaks) associated with different synoptic weather regimes, the present study is able to show that the environment does affect cloud/precipitation episodes in an averaged sense. While a better understanding of the behavior of the episodes is important for future application to improve warm-season QPFs, a different approach must be taken in order to explore further the impact of environmental conditions (collectively or in individual events) since the statistics cannot provide information on physical processes underlying the behavior of episodes. Such an effort is already underway where we examine the commonly observed synoptic flow patterns in association with episodes that propagated for long distances, as well as those accompanying short-lived events, and the results will be reported later in a separate paper.
Image Captioning with Word Gate and Adaptive Self-Critical Learning
Although policy-gradient methods for reinforcement learning have shown significant improvement in image captioning, achieving high performance during the reinforcement optimization process is still not a simple task. There are at least two difficulties: (1) the large vocabulary leads to a large action space, which makes it difficult for the model to accurately predict the current word; (2) the large variance of gradient estimation in reinforcement learning usually causes severe instability in the training process. In this paper, we propose two innovations to boost the performance of self-critical sequence training (SCST). First, we modify the standard long short-term memory (LSTM)-based decoder by introducing a gate function to reduce the search scope of the vocabulary for any given image, which is termed the word gate decoder. Second, instead of only considering current maximum actions greedily, we propose a stabilized gradient estimation method whose gradient variance is controlled by the difference between the sampling reward from the current model and the expectation of the historical reward. We conducted extensive experiments, and the results showed that our method could accelerate the training process and increase prediction accuracy. Our method was validated on the MS COCO dataset and yielded state-of-the-art performance.

[Figure captions: the baseline computed at inference can lower the variance of the gradients and improve training; FC is the fully connected layer that projects the image vector to word-dictionary probabilities; CIDEr is an image captioning evaluation metric proposed in [5]; with β set to 0.8 and the history factor h set to more than 2,500 iterations, longer history reward information yielded better performance.]
Introduction
Image captioning is the task of automatically describing an image with natural language. As shown in Figure 1, image captioning methods usually follow the encoder-decoder paradigm. The process often includes two parts: a convolutional neural network (CNN) to encode an image into semantic features, and a recurrent neural network (RNN) to decode the input features into a text sequence word-by-word. At the training stage, the RNN is typically given the previous ground-truth word and trained to predict the next word with the cross-entropy loss as the target function, while at test time the model is expected to generate the entire sequence from scratch. This discrepancy between training and testing, which is regarded as exposure bias, causes error accumulation during generation at test time and leads to suboptimality of maximum likelihood training [1,2]. Recently, some reinforcement learning (RL)-based methods [1,3,4] have been proposed to tackle this problem. In these methods, text generation is viewed as a stochastic procedure in which word generation is modeled as action selection, and the task-specific score (e.g., the CIDEr [5] score) can be formulated directly as the reward. By using RL, the exposure bias problem can be addressed, and the non-differentiable task-specific metric can be directly optimized. However, it suffers from two significant issues. The first issue is that image captioning has a high-dimensional action space (e.g., 10^4+ token/word actions). It is challenging to learn an exact policy in such an action space. Besides, the high dimensionality of the sample space leads to high variance of the Monte Carlo estimates [6], which aggravates the instability of the RL training. Another issue is the high variance of the gradient estimation, which may cause the training to be unstable. Existing methods have attempted to reduce the variance via a learned baseline or a critic network [7] by training another network, which increases the difficulty of optimization. In order to avoid training a new network, self-critical sequence training (SCST) [4] was proposed. This model is based on the RL reward obtained at test-time inference. However, this baseline is not a tight approximation of the expected reward signal, which also leads to high variance of the estimated gradient. In addition, we have found that the greedy method in SCST usually results in a higher reward than the multinomial sampling, and this is problematic since the baseline is then too high for the agent to get positive feedback, and the training may get blocked.
To tackle the above two issues, we propose a boosted learning framework based on SCST with the following two innovations. First, we design a word-gated long short-term memory (LSTM) decoder to generate the output caption after encoding an input image with a deep CNN, in which the word gate function is used to predict the distribution of possible words for the input image. Specifically, the word gate is trained directly under the supervision of the words that appear in the ground-truth sentence. Our method draws inspiration from the observation that although image captioning has a large vocabulary, the actual number of valid words for a given image is relatively small. For example, given an image about summer, the word "snow" is unlikely to be present. From this viewpoint, the word gate function can significantly reduce the valid action space of the RL method, and further guide the output of the text generation model.
Secondly, for more stable gradient estimation, we give an improved version of SCST, called adaptive self-critical sequence training (ASCST), which provides a closer approximation to the expected reward and thus leads to easier optimization and better performance. The history reward information of the sample and greedy methods can tell approximately how wide the gap between the expected reward and the computed baseline is. By adaptively adjusting the greedily computed baseline with the history reward information of the sample and greedy methods, we can shrink the gap between the baseline and the expected reward, and thus significantly reduce the variance. We also introduce a novel control parameter to rescale the baseline adaptively with history reward information. Since natural learning progress moves from easy to hard, we introduce another parameter that gradually increases the baseline as the model converges. By applying these two control parameters to the baseline, the performance of the agent can gradually ramp up and the training is stabilized.
The contributions of this paper are presented as follows:
• We introduce the word gate to dramatically reduce the valid action space of the text generation, which brings reduced variance and easier learning.
• We present the adaptive SCST (ASCST), which incorporates two control parameters into the estimated baseline. We show that this simple yet novel approach significantly lowers the variance of the expected reward and gains improved stability and performance over the original SCST method.
• Experiments on the MSCOCO dataset [8] show the outstanding performance of our method.
Image Captioning
Inspired by the success of deep neural networks in neural machine translation, the encoder-decoder framework has been proposed for image captioning [9]. Vinyals et al. [9] first proposed an encoder-decoder framework which contained a CNN-based image encoder and an RNN-based sentence decoder, and was trained to maximize the log-likelihood. Various other approaches have been developed. Xu et al. [10] proposed a spatial attention mechanism to attend to different parts of the image dynamically. The authors in [11,12] integrated high-level attributes into the encoder-decoder framework by feeding the attribute features into RNNs. However, the attributes only concerned top-frequency words, which account for only about 10% of all the words in the vocabulary.
Image Captioning with Reinforcement Learning
Recently, a few studies [1,3,4] have used techniques from RL to address the exposure bias [1] and non-differentiable task metric problems. However, compared to standard RL applications, image captioning has a much larger action space. The large action space causes high variance in gradient estimates in RL [6], which has been shown to be a cause of unstable training. Variance reduction can often be achieved with a learned baseline or critic. Ranzato et al. [1] were the first to train a sequence model with policy gradients, and they used the REINFORCE algorithm [13] with a baseline estimated by a linear regressor. Zhang et al. [14] used the actor-critic algorithm, in which a value network (critic) was trained to predict the value of each state. While the above works need to train an additional baseline network or critic, the SCST approach [4] avoids this by using the reward obtained by the current model under test-time inference as the baseline.
However, the baseline estimated by the greedy decoding method has a large gap with the sample decoding method. This will lead to high variance, and the baseline will not be able to provide positive feedback.
Methodology
In this section, we first introduce our word gate decoder. Then, we give a brief explanation of the high variance of gradient estimation in basic RL-based methods. Finally, we introduce our adaptive self-critical training scheme.
Overview of the Proposed Model
As shown in Figure 2, our model contains two innovations: the word gate and the adaptive SCST. The word gate is used to reduce the search scope of the RNN. Adaptive SCST is a boosted RL method to help the model achieve better performance. The training method of the word gate model is shown in Section 3.2. The word gate model combines with the output of the LSTM to obtain better dictionary probabilities. The adaptive SCST needs two inferences: the sample inference and the argmax inference. The sample inference is at the top of the procedure of the overall model, as shown in Figure 2. The argmax inference is at the bottom of the procedure. The argmax inference is the baseline for the RL method to make the training become more stable. The proposed hyper-parameters help the training procedure become more stable. At the same time, the proposed adaptive SCST can achieve better performance.
Word Gate
In the neural image caption (NIC) model [9], the caption sentence is generated by an RNN, word-by-word. Given the target ground-truth sequence {y_0, y_1, ..., y_T}, the RNN can be trained by minimizing the cross-entropy loss, as described in the NIC model [9],

L_XE(θ) = − Σ_{t=1}^{T} log p(y_t | I, y_0, ..., y_{t−1}; θ),

where θ are the parameters of the RNN model and I is an image vector. The probability p(y_t | I, y_0, ..., y_{t−1}; θ) is obtained from the output of the RNN with a softmax function,

p(y_t | I, y_0, ..., y_{t−1}; θ) = softmax(W_h h_t),

where W_h projects the hidden state h_t to the prediction probability space and L is the size of the vocabulary. Usually, L is quite large, and learning a distribution over such a large vocabulary space is a difficult task. However, for a given image the candidate word set is usually small. For example, if the image is taken inside a room, then words like sofa, chair, wall, and floor are highly probable candidate words, while words such as train and river should not appear. Therefore, in order to reduce the prediction space, we introduce a gate mechanism for the words in the vocabulary, termed the word gate (WG). In WG, the predicted distribution is gated by its gating score,

p(y_t | I, y_0, ..., y_{t−1}; θ) = softmax(W_h h_t) ⊙ o,

where o = {o_1, o_2, ..., o_L; 0 < o_i < 1} is the word gate vector, which decides whether a word in the vocabulary should be a predicted candidate word, and ⊙ denotes the element-wise product. h_t ∈ R^r is the output of the LSTM at the last time step, and W_h ∈ R^{L×r} is the weight matrix. The softmax function is used to normalize the whole prediction of the LSTM. The input of the LSTM is an embedding vector, i.e., a continuous vector of much lower dimension converted from a space with one dimension per word or per image concept. Learning the gating score vector o is based directly on images, without the need to consider syntax. As shown in Figure 3, a CNN is used to get the image feature I. Then, a fully connected layer is added to map the K-dimensional image vector to an L-dimensional vector, where K is the same size as the LSTM input and L is the size of the vocabulary. Then, we add a sigmoid layer to get the probability of all the words. We use the binary cross-entropy loss to train the WG,

L_WG = − Σ_{i=1}^{L} [ t_i log o_i + (1 − t_i) log(1 − o_i) ],

where o ∈ R^L is the word probability learned from the image and t ∈ R^L is an indicator vector: if a word occurs in the caption, the corresponding position in t is set to 1. The image feature used for the word gate is the same as the input of the LSTM, and this could improve the learning of the image embedding vector. This image feature, which represents the global information of the image, is different from the spatial image matrix used in attention, and its dimension is the same as that of the word embedding feature. The image embedding layer is thus effectively trained twice, and this layer can capture better semantic information about the image. Finally, we use the weighted sum of the two losses as our final loss, where µ is the hyperparameter that decides the weight of the two losses; in this paper, we empirically set µ = 0.9. During testing, the WG first predicts a gating vector for all the words in the vocabulary. Then, the RNN generates the sentence word-by-word, predicting a word distribution at each time step. The probability distribution is gated by the gating vector, and the final probabilities are output for each word.
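A minimal PyTorch sketch of the word gate as described above is given below; the way the two losses are combined and the placement of the softmax relative to the gating follow the reading in the text and should be treated as assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordGate(nn.Module):
    """Predicts a per-word gating score in (0, 1) from the global image feature."""
    def __init__(self, image_dim: int, vocab_size: int):
        super().__init__()
        self.fc = nn.Linear(image_dim, vocab_size)

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(image_feat))             # o, shape (B, L)

def gated_distribution(h_t: torch.Tensor, W_h: nn.Linear, gate: torch.Tensor):
    """Gate the softmax prediction of the LSTM with the word-gate vector."""
    return F.softmax(W_h(h_t), dim=-1) * gate                  # element-wise product

def word_gate_loss(gate: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy against the 0/1 indicator of words in the caption."""
    return F.binary_cross_entropy(gate, present.float())

def combined_loss(xe_loss, wg_loss, mu=0.9):
    # assumed convex combination with mu = 0.9; the paper only states that the
    # final loss is a weighted sum of the two terms
    return mu * xe_loss + (1.0 - mu) * wg_loss
```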
The word gate is different from the image attributes used in [12]. Image attribute models usually use the top 1,000 words to train the CNN model, and the image attribute model replaces the normal CNN to produce the image vector that is the input to the LSTM. The word gate model does not need to train an attribute model independently: the CNN of the word gate is the same as the one used in [9], and it can be trained jointly. Furthermore, we do not need to select the top 1,000 words; we use the whole vocabulary to learn the probabilities of all the words.
Adaptive Self-Critical Sequence Training
Similar to [1], we can cast image captioning as an RL problem, in which the "agent" (the LSTM) interacts with the external "environment" (i.e., words and image features). At each episode of the training process, the agent takes a sequence of "actions" (the predictions of the next word, w_t) according to a policy p_θ (θ being the parameters of the network), and observes a "reward" R (e.g., the CIDEr score) at the end of the sequence. The goal of training is to find a policy (the parameters) of the agent that maximizes the expected reward

J(θ) = E_{w^s ∼ p_θ}[R(w^s)],

where w^s = (w^s_1, ..., w^s_T) is a sentence sampled from p_θ and w^s_t is the word sampled at time step t. Using the REINFORCE algorithm [13], the gradient can be computed as

∇_θ J(θ) = E_{w^s ∼ p_θ}[R(w^s) ∇_θ log p_θ(w^s)].

In practice, the expectation can be approximated with a single Monte Carlo sample from the distribution of actions,

∇_θ J(θ) ≈ R(w^s) ∇_θ log p_θ(w^s), with w^s ∼ p_θ.

However, the Monte Carlo estimation method is not stable, especially when the policy changes at runtime, which usually causes high variance in the estimated gradient. A common solution is to reduce this variance by shifting the reward with a "baseline" B,

∇_θ J(θ) = E_{w^s ∼ p_θ}[(R(w^s) − B) ∇_θ log p_θ(w^s)],

where B can be any function that is independent of the action w^s. The optimal baseline that yields the lowest variance estimator of ∇_θ J(θ) is the expected reward, B = E_{w^s ∼ p_θ}[R(w^s)]. Finally, using the REINFORCE algorithm with a baseline B, the gradient with respect to o_t (the input to the softmax) is given as [1]

∂J(θ)/∂o_t ≈ (R(w^s) − B)(1_{w^s_t} − p_θ(w_t | h_t)).

For the full derivation of the gradients, please refer to [13,15] and Chapter 13 in [16]. Self-critical sequence training [4] gives a typical implementation of the RL-based method. Its core idea is to estimate the baseline with the reward obtained from the current policy model under the test-time inference algorithm,

∇_θ J(θ) ≈ (R(w^s) − R(ŵ)) ∇_θ log p_θ(w^s),

where R(w^s) is the reward obtained by Monte Carlo sampling and R(ŵ) is the reward obtained by the current model at the inference stage. If R(w^s) is higher than R(ŵ), the probability of these samples will be increased; if R(w^s) is lower than R(ŵ), this probability will be suppressed. Since the SCST baseline is based on the test-time estimate under the current model, it improves the performance of the model under the inference algorithm used at test time. This ensures training/test-time consistency and makes it possible to optimize with the evaluation metrics directly. Self-critical sequence training minimizes the impact of the baseline on training time, since the test-time inference requires only one additional forward pass; the model can thus be optimized quickly, converges easily, and has a lower variance. Self-critical sequence training can also be trained effectively on mini-batches of samples with stochastic gradient descent (SGD). The beam search method is a heuristic search algorithm that explores a graph by expanding the most promising nodes in a limited set, and it is widely used in natural language generation models. SCST uses greedy decoding to select the current action w_t for the baseline estimation at time step t:

ŵ_t = arg max_{w_t} p(w_t | h_t). (13)

Greedy decoding is the foundation of the beam search method, and the original training method with cross-entropy loss optimizes the probability of the max.
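For reference, the SCST update described above can be written as a loss whose gradient matches the self-critical policy gradient; the sketch below assumes the per-caption log-probabilities and CIDEr rewards have already been computed.

```python
import torch

def scst_loss(sample_logprob: torch.Tensor,
              sample_reward: torch.Tensor,
              greedy_reward: torch.Tensor) -> torch.Tensor:
    """Self-critical policy-gradient loss for one mini-batch.

    sample_logprob: (B,) summed log-probabilities of the sampled captions
    sample_reward : (B,) CIDEr of the sampled captions
    greedy_reward : (B,) CIDEr of the greedily decoded captions (the baseline)
    """
    advantage = (sample_reward - greedy_reward).detach()
    # minimizing this loss ascends the expected reward (REINFORCE with baseline)
    return -(advantage * sample_logprob).mean()
```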
However, this method only considers the single-word probability, while Monte Carlo sampling considers the probabilities of all words in the vocabulary. This inequivalence is harmful, since the aim of R(ŵ) is to give a baseline for R(w^s), but its value is computed in a very different way. Furthermore, we found that this greedy strategy makes SCST unstable during training. As shown in Figure 4b, in SCST the greedy reward R(ŵ) has higher variance than the Monte Carlo sampling reward R(w^s), and the gap between them is also very unstable. As a result, the CIDEr score of SCST in Figure 4a is lower than that of adaptive SCST. In order to stabilize the estimation of the baseline, we introduce a factor α that rescales R(ŵ), so that the baseline becomes

B = \alpha\,R(\hat{w}).

We argue that there are at least two criteria for the factor α: on one hand, it should be able to normalize R(ŵ) so that R(ŵ) does not shift severely; on the other hand, it must stabilize the gap between R(ŵ) and R(w^s) to ensure that the gradient does not change rapidly. To this end, we formulate the factor α so that it not only takes account of the history of R(ŵ), but also considers the average level of R(w^s):

\alpha = \frac{E(R(w^s))}{E(R(\hat{w}))},

where E(R(·)) denotes the expectation of the reward. In this formulation, E(R(ŵ)) is used to normalize the current greedy baseline R(ŵ) with its expectation, and E(R(w^s)) ensures the baseline has a similar level to the expected value of R(w^s). It is not practical to compute E(R(w^s)) and E(R(ŵ)) exactly. Instead, we use the mean value of R(·) over the previous h iterations (the optimization of one mini-batch is defined as an iteration) as an estimate of the expectations.
Although the history factor α can normalize the greedy baseline and stabilize the gap between R(ŵ) and R(w^s), it may drive the absolute value of the gap to an incorrect level (e.g., the loss may stay negative). Thus, we further introduce another factor β to adjust the trade-off between R(ŵ) and R(w^s). The final form of the baseline, given in Equation (16), combines the two factors; in it, R_i(·) denotes R(·) in the i-th iteration, t is the current iteration, and h, which is called the "history factor", determines how many previous rewards are used to estimate the expectations. Different from the history factor, β is a hyper-parameter, which was empirically set to 0.9. Our experiments show that the adaptive SCST method is more stable and reliable than the SCST method. It reduces the variance of the gradient estimate, and we obtained better performance with the adaptive SCST model.
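Since Equation (16) is not reproduced here, the following sketch is only one plausible reading of the adaptive baseline described above: the greedy reward is rescaled by the ratio of the running mean sample reward to the running mean greedy reward over the last h iterations, and β weights the rescaled baseline. The class name and the exact way β enters are assumptions made for illustration.

```python
from collections import deque

class AdaptiveBaseline:
    """Rescales the greedy reward R(w_hat) with running statistics over the
    last `history` iterations: alpha = mean(R(w^s)) / mean(R(w_hat)),
    baseline = beta * alpha * R(w_hat)."""

    def __init__(self, history=2500, beta=0.8):
        self.sample_hist = deque(maxlen=history)
        self.greedy_hist = deque(maxlen=history)
        self.beta = beta

    def __call__(self, sample_reward, greedy_reward):
        # Track the per-iteration mean rewards of the sampled and greedy captions.
        self.sample_hist.append(float(sample_reward.mean()))
        self.greedy_hist.append(float(greedy_reward.mean()))
        alpha = (sum(self.sample_hist) / len(self.sample_hist)) / \
                max(sum(self.greedy_hist) / len(self.greedy_hist), 1e-8)
        return self.beta * alpha * greedy_reward

# Usage with the scst_loss sketch above: replace the raw greedy reward by
#   baseline = adaptive(sample_reward, greedy_reward)
#   loss = -((sample_reward - baseline).detach() * sample_logprobs).mean()
```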
Dataset
We used the MSCOCO dataset [8], which is currently the largest dataset for the image captioning task, to evaluate the performance of the proposed models. The official MSCOCO dataset includes 82,783 training images, 40,504 validation images, and 40,775 testing images. The image captioning model was evaluated both on the offline testing dataset and on the online server. For offline evaluation, we used the same dataset splits as in [17], which have usually been used for offline evaluation in recent papers. The training set of the offline split contains 113,287 images, and every image has five captions. We used 5000 images for validation and report results on a testing set of 5000 images.
We kept words which appeared more than five times across all captions, which gave a vocabulary of 9487 words. Words which occurred five times or fewer were replaced with the unknown token <UNK>. We counted the lengths of all captions and found that 97.7% of them were shorter than 16 words, so we truncated longer captions to a maximum length of 16 words.
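A small sketch of this preprocessing, assuming the captions are already tokenized into whitespace-separated words; the function and variable names are illustrative only.

```python
from collections import Counter

def build_vocab_and_truncate(captions, min_count=6, max_len=16):
    """Keep words appearing more than five times (count >= 6), map the rest
    to <UNK>, and truncate every caption to at most max_len words."""
    counts = Counter(word for caption in captions for word in caption.split())
    vocab = {w for w, c in counts.items() if c >= min_count}
    processed = [
        [w if w in vocab else "<UNK>" for w in caption.split()][:max_len]
        for caption in captions
    ]
    return vocab, processed
```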
Implementation Details
We used the feature map of the final convolution layer of the Resnext_101_64x4d [18] model as our image feature. This model was pre-trained on the ImageNet dataset [19]. For the LSTM network, we set the hidden unit dimension to 512 and the mini-batch size to 16. In order to avoid the gradient explosion problem, we used the gradient clipping strategy proposed in [20]: if a gradient component was higher than 0.1, we set it to 0.1. In order to prevent the LSTM network from over-fitting, we added a dropout layer at the output of the LSTM. We used the Adam method [21] to update the CNN and LSTM parameters. For the language model part, the initial learning rate was 4 × 10⁻⁴; for the convolutional neural network, the initial learning rate was 5 × 10⁻⁵. The two Adam exponential decay rates were 0.8 and 0.999, respectively. We used the PyTorch deep learning framework to implement our algorithm. In testing, we used the beam search algorithm for better description generation; in practice, we used a beam size of 2.
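The optimizer and clipping setup described above can be sketched as follows; the `lstm`, `cnn`, and `model` objects are placeholders for the decoder, the image encoder, and the full captioning model, so the snippet shows the configuration rather than a complete training script.

```python
import torch
from torch.nn.utils import clip_grad_value_

def make_optimizer(lstm, cnn):
    # Separate learning rates for the language model and the CNN; the Adam
    # exponential decay rates (0.8, 0.999) follow the settings given above.
    return torch.optim.Adam(
        [{"params": lstm.parameters(), "lr": 4e-4},
         {"params": cnn.parameters(), "lr": 5e-5}],
        betas=(0.8, 0.999),
    )

def train_step(model, lstm, cnn, optimizer, images, captions):
    loss = model(images, captions)          # cross-entropy or RL loss
    optimizer.zero_grad()
    loss.backward()
    # Clip every gradient component to the range [-0.1, 0.1].
    clip_grad_value_(lstm.parameters(), 0.1)
    clip_grad_value_(cnn.parameters(), 0.1)
    optimizer.step()
    return loss.item()
```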
In order to further verify the effect of the algorithm, we conducted a comparative experiment against the soft attention model. We selected our best model according to the CIDEr score as the initialization for adaptive SCST training, and ran the adaptive SCST training with Adam at a learning rate of 5 × 10⁻⁵.
We added the word gate method to the soft attention model, and the soft attention model was our baseline to evaluate the performance of our word gate method.
In practice, we found that the reward obtained from sampling and the reward obtained from the greedy method were very different, which made the training loss large, as shown in Figure 2. For comparison, we used both SCST and ASCST to fine-tune the same WG model. In Figure 2, the scores were evaluated every 50 iterations on the MSCOCO testing dataset. Both runs were trained from the same initial model, whose CIDEr was 1.092. From Figure 2, we can see that the ASCST model performed better than the original SCST method. At the beginning of the RL training, the SCST method obtained a lower result than the initial model, but ASCST was not affected by the difference between the two rewards and could improve the CIDEr score continuously.
In Table 1, we report the performance of the soft attention (Resnext_101_64x4d) baseline model [10]; we then add the proposed word gate model (WG) to the baseline model without RL. Finally, we add SCST and ASCST to the soft attention + WG model to compare the SCST method and the ASCST method. All of the above models were validated on the test portion of the Karpathy splits, and they were all single models without ensembling. From Table 1, we can see that the soft attention model with WG yielded better performance than the soft attention model alone. Furthermore, to compare SCST and ASCST, we fine-tuned the same soft attention + WG model with the SCST method and the ASCST method, obtaining the soft attention + WG + SCST model and the soft attention + WG + ASCST model. From the results, we can see that the soft attention + WG + ASCST model performed better than the soft attention + WG + SCST model (following [4]) on the BLEU, ROUGE-L, and CIDEr metrics. This result shows that the proposed WG method helps the model obtain better performance. Moreover, ASCST not only makes the RL training more stable, but also improves the model's performance. Compared with other state-of-the-art methods, our model also achieved competitive results.
We then used our best model to get the test results with the official test split, and submitted our results to the official MSCOCO evaluation server. In Table 2, the state-of-the-art results on the leaderboard are also depicted. We outperformed the baseline method on all evaluation metrics.
To further evaluate the ASCST method and find appropriate hyper-parameter settings, we ran several comparison experiments in which only h or β in Equation (16) differed. In Figures 5 and 6, we present the model with different h and β settings. In Figure 5, we can see that the alpha_5000_beta_08 model had the best CIDEr score among these models (alpha_5000 means the WG model trained with the ASCST method with h set to 5000 iterations). From Figure 5, we can also see that the performance of our model was significantly influenced by β. Figure 6 shows the influence of the hyper-parameter h: alpha_h_beta_08 refers to the WG model trained with the ASCST method with β set to 0.8, and h represents different values of the history factor. We found that using a longer history of reward information led to better performance. In practice, setting β to 0.8 and h to more than 2500 iterations helped our model yield better performance.
Qualitative Analysis
As shown in Figure 7, we selected some samples from the local test set for reference, and we can see that the model could generate readable text content and maintain rich semantic information about the image. For example, in the first image of Figure 7, we can see the generated text "a group of people playing tennis on a tennis court". The generated caption successfully describes people and tennis in the image. In the second image of Figure 7, we can see our model could recognize people and skis in the image, and furthermore it could determine that people were standing in the snow.
Conclusions
We present two innovations under the RL mechanism to boost the image captioning performance. First, a word gate function is introduced into the LSTM-based decoder model to reduce the search scope of the vocabulary for the sequence generation. Second, during the gradient updating along with the self-critic learning framework, two additional control parameters are defined to rescale the baseline with history reward information, in order to lower the variance of the expected reward. Finally, extensive experimental results show that the two innovations jointly obtained a boosted captioning performance and increased the stability of model training. Furthermore, we obtained impressive performance on the MSCOCO benchmark compared with some state-of-the-art approaches. We intend to study the application of the proposed method in the field of digital virtual asset security in future research.
Author Contributions: X.Z., L.L., J.L. and H.P. conceived and designed the experiments; X.Z. performed the experiments; X.Z., L.L., J.L. and H.P. analyzed the data; X.Z., L.L., L.G., Z.F. and J.L. wrote the paper. All authors interpreted the results and revised the paper.
|
2019-04-16T13:28:20.965Z
|
2018-06-01T00:00:00.000
|
{
"year": 2018,
"sha1": "2075df33102576b0da89607556aad168e8fabeea",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/8/6/909/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e0109cea178632ae6cff0b7398799e6218d0b14a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
67838119
|
pes2o/s2orc
|
v3-fos-license
|
Dimensional consistency achieved in high-performance synchronizing hubs
The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering ("2P2S") production route has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and on the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and the process.
INTRODUCTION
The properties and performance of low-alloyed sintered steels depend on their composition and processing conditions. The main application of this family of steels is the automotive industry, which requires production sets with a high performance-to-price relationship. The production of highly reliable sintered parts, based on materials that can ensure high performance and dimensional stability together with reasonable cost, is one of the main aims of the PM industry [1][2][3]. Improvements in this field can be achieved in two different ways: by designing higher or new alloying systems, or by improving the density through the processing route. It is important to identify the step of the processing route that can produce improved density (the pressing, the sintering, or even post-sintering operations). For structural parts, pressing is responsible for the densification from an apparent density near 3.0 g/cm3 to 6.9-7.0 g/cm3 or even higher, but sintering can be considered the most important step in the final densification process, since if it is not done properly the mechanical properties of the final product will not be the expected ones, whatever the green density is. Today, the combination of a good pressing process with a well-performed sintering cycle makes it possible to produce a new generation of high-performance sintered steels with well-controlled porosity, a good microstructural balance and, as a consequence, high performance [4 and 5].
Nevertheless, an improved density does not ensure that the product will reach the design objectives for which it was intended. One of the most important aims of PM low-alloyed steel components developed for the automotive industry is the achievement of close dimensional tolerances after the sintering step. The alloying content and its scatter, sintering time, density variation after pressing, and sintering temperature are all directly related to the conformance of a part to its tolerance [6]. The quenching conditions are important not only because they determine the material properties, but also because they define the dimensional behavior of sinter-hardened parts.
In the case of synchronizer hubs, there are several kinds of dimensional tolerance. Diameters that are not involved directly in the function of the part may have tolerances of approximately 0.1 mm, but this value can be reduced to 20 µm in the case of functional diameters. At the same time, one of the most important dimensional parameters is the quality of the splines, both internal and external. The normal value for external teeth is Q9, but Q8 can be required. This requirement means not only that the diameters must fulfill the tolerance, but also that errors in the teeth must be controlled. According to ISO 5480-1, the tooth errors for the parts considered in this paper are: f p = 17 µm, F p = 40 µm, F a = 21 µm and F b = 14 µm.
Additionally, this kind of part usually presents density gradients, which means that dimensional variations are not uniform. Therefore, materials that have minimal swelling or shrinkage during sintering and low deformation during quenching (in the case of sinter-hardening processes) are necessary to achieve the described requirements. It is also necessary to define the process parameters that allow the development of a robust process with excellent capability. In this case, a double pressing and sintering process ("2P2S") is an attractive possibility.
In the 1980s, complex Fe-Cu-Ni-Mo-C systems were developed based on the mixing of powders [7][8][9][10], and these systems were later produced with the diffusion bonding alloying method [11][12][13][14][15][16]. These alloying systems were partially replaced in the 1990s by an alloying system that introduced chromium, and much later, manganese was introduced. The systems based on Ni-Cu-Mo are the most widely used in the industry today because of their high reliability with respect to dimensional stability and mechanical properties, and they have been used in 2P2S production processes for years. Most scientific investigations use these materials as standard specimens, but little information and few results are available from real parts produced under real industrial conditions with these materials [17].
The main objective of this paper is to obtain information regarding the influence of the pre-sintering conditions on the dimensional and material properties of components (the synchronizing hub shown in figure 1) that are assembled in gearboxes for manual transmissions.
From a dimensional perspective, the pre-sintering time and temperature have been tested for the comparison of product dimensions. Regarding the material properties, the hardness, density and the results of an 'in-house' breaking load test have been analyzed. All other process parameters, including tooling, have been kept constant to evaluate only the influence of the pre-sintering conditions.
In this work, two of the most-used steel compositions for structural parts (Distaloy AE and HP1 grades, from Höganäs AB, Sweden) have been selected for the study, and a double pressing double sintering process (2P2S) was used to ensure high performance. All components were manufactured on a large-scale production route, following the production line. All process step parameters were controlled to avoid damage to the production equipment, including both tooling and machines.
EXPERIMENTAL PROCEDURE
The materials used in this work are those described in table I.
These two compositions are reliable from the point of view of dimensional control [18 and 19]. Ni4Cu2Mo1.5 can be considered a "high strength material" and Ni4Cu1.5Mo0.5 a "medium strength material". Both materials follow a double press and sintering process. First, the materials are pressed and pre-sintered (state 1P1S) under the conditions of tables II and III. After the materials are pre-sintered, they are pressed again (Fig. 4) to obtain the densities described in table IV (final state 2P2S).
The dimensions in both the first and second press steps were fixed to obtain a proper value of the dimension H (Fig. 1). From a functional perspective, this characteristic is one of the most important and, depending on the tolerances, it can even define the production process (1P1S or 2P2S, machining before or after sintering, sizing, etc.). The density and the H dimension must be balanced over the full part, but in those areas with functional applications this relation is even more critical to the accurate behavior of the part in the gearbox.
During the experimental procedure, the H parameter, and consequently the density, were kept constant for each material in the pressing steps. As a result, it is possible to measure the influence of the pre-sintering conditions on the other process parameters and on the part dimensions.
After the first pressing step, the density is controlled in different areas to ensure that the filling and pressing have been done correctly and that the values are homogeneous over the full part (figure 2 shows the three different areas considered for measuring the density).
Table II shows the values of the density in the different areas for both materials, obtained by the Archimedes method. The compacting pressure for each material is also shown. Experimental trials were carried out as shown in table III.
It was found that the compacting pressure of the second pressing varies depending on the pre-sintering conditions, time and temperature. As in the case of the first pressing step, the value of the H parameter was taken as the basis to adjust the compacting pressure. This value was kept constant for each material in all three trials, which means that the density of this area was kept constant as well. The density after the second pressing step is shown in table IV. The compacting pressure of the second pressing is reported as one of the results of this trial.
After the first cycle of pressing and sintering, and once the full processing cycle (2P2S) was completed, the dimensional variation of the hubs was evaluated by measuring the external and internal diameters. The apparent Brinell hardness (HB) was also measured at both process stages.
To evaluate the strength of the hubs, an 'in-house' testing procedure was developed. The basis of the method is described in figure 3. The component is loaded by an expansive clamp, which is located in an internal spline, until the part breaks. The load is applied through a cone, in all cases at the same speed, allowing a direct comparison of the breaking load and the maximum displacement before breaking between all the parts. Finally, some metallographic analyses were done to determine the evolution of the material through the processing route.
Second pressing compacting pressure and dimensional results
Figure 4 shows the compacting pressure in the second pressing, for each pre-sintering condition and both materials, taking the second pressing after 825 °C-t as a reference. Figures 5 and 6 show the total dimensional results for both materials in all trials, after the first pressing-sintering cycle and after the second one. Because the behavior of the internal and external diameters is similar, only the external diameters are shown in the figures.
The dimensions after pre-sintering at 825 °C-t are taken as a reference, and all other measurements are specified relative to this value for each material. It is important to note that, because the tolerances are so tight, a percentage variation of just 0.02% is highly significant and may determine whether the produced parts are within or out of tolerance.
Figure 5 shows the dimensional results for Ni4Cu2Mo1.5. When the pre-sintering temperature is 880 °C, significant swelling is produced at this stage. This fact must be taken into account in the tooling design to avoid further damage. The part then shrinks after sintering. When the pre-sintering time is doubled, the size of the resulting parts is smaller. Nevertheless, the parts pre-sintered at 825 °C have similar dimensional behavior after sintering. When the dimensions after pre-sintering at 825 °C-t are taken as a reference, in both cases the parts present swelling after sintering. This result is opposite to that obtained when the pre-sintering temperature is 880 °C: swelling after the first sintering is followed by shrinkage in the final sintering. This observation supports the claim that, for this material, the pre-sintering temperature defines the dimensions of the final part.
Figure 6 shows the results for Ni4Cu1.5Mo0.5. In this case, the dimensional behavior of the parts is similar in all trials. Minor shrinkage is observed when the first-sinter temperature is increased to 880 °C, and also when the time is doubled. In all cases, swelling exists after the second sintering. Significant dimensional differences cannot be found for this material, regardless of the pre-sintering conditions. For this material, and within the parameters of this study, the pre-sintering time and temperature have no decisive influence on the dimensions of the final part.
Nevertheless, both materials show good process stability in terms of variation; the shrinkage observed for Ni4Cu2Mo1.5 should be taken into account in the tooling design, allowing the final part to fulfill the drawing specifications. These materials show good dimensional behavior because of an ideal combination of copper and nickel that balances the growth effect of the copper with the shrinkage produced by the presence of nickel.
Hardness results
Figure 7 displays the hardness values obtained after the full 2P2S cycles. As could be expected, the alloying elements of Ni4Cu2Mo1.5 result in increased hardness.
The hardness is also affected by the higher hardenability of Ni4Cu2Mo1.5, which has more martensite in its microstructure and, as a consequence, possesses increased hardness. The variability of the hardness values is much higher for materials that are pre-sintered at the higher temperature. The values obtained for the dimensional changes and hardness are in full agreement with those obtained using the standard test samples [20].
Significantly, the hardness values in the final parts are not very different from the ones obtained when the pre-sintering conditions were 825 °C-t. For Ni4Cu2Mo1.5, the hardness values in the final part (after the 2P2S process) for a pre-sintering temperature of 880 °C are slightly higher, but not sufficiently so to claim that the material property (hardness) has been increased. It can therefore be confirmed that, while no significant difference in the hardness values after the second sintering can be claimed when the pre-sintering temperature is 880 °C, the dimensions of the parts produced with Ni4Cu2Mo1.5 vary significantly.
Figures 8 and 9 show the values of the applied load and the elongation of the parts before breaking, as determined by the test described in figure 3.
The best results for the applied load are obtained, as expected, for the most-alloyed material (Ni4Cu2Mo1.5). The pre-sintering conditions do not have a significant influence on the performance of this material. Ni4Cu1.5Mo0.5 shows a slight improvement of the measured values with both pre-sintering parameters: time and temperature.
The larger values of the elongation are those obtained for Ni4Cu1.5Mo0.5, which is in accordance with the hardness values.
Nevertheless, when the temperature is kept constant and the time is increased, the elongation of both materials presents a slight improvement. The increase in the elongation with time and temperature is strongly related to the density. Therefore, it can be concluded that the combined results related to the mechanical properties are better for Ni4Cu2Mo1.5 than for Ni4Cu1.5Mo0.5.
Microstructure
Figure 10 shows the microstructure of the materials after the first press and sintering operation. In both materials, there is no diffusion of carbon at 825 °C, and the number of sintering necks increases when the sintering time is increased. For Ni4Cu2Mo1.5 at 880 °C, there is good diffusion of carbon inside the particles, and pearlite and bainite formation takes place. These phenomena occur because molybdenum is fully pre-alloyed in this material, while in Ni4Cu1.5Mo0.5 it is diffusion bonded; additionally, the percentage of this element is larger, accounting for the difference in microstructure between the materials. It could be said that, for Ni4Cu2Mo1.5, when the pre-sintering temperature is 880 °C, the sintering process proper has already begun. These ideas are supported by the dimensional behavior of parts made of Ni4Cu2Mo1.5 after pre-sintering at 880 °C: swelling exists, as it does after the proper sintering process.
In the case of Ni4Cu1.5Mo0.5, only a few pearlite grains are formed near the border of the former iron particles. Nickel-rich areas can also be identified in both materials, as they retain austenite.
In figure 11, the final microstructure of both materials can be seen.
The final microstructure for each material is quite similar for all of the processing routes, and in accordance with the expected result for these materials. Ni4Cu2Mo1.5 consists mostly of lower bainite/martensite, and Ni4Cu1.5Mo0.5 consists of fine pearlite and lower bainite/martensite. This result is in agreement with those obtained when hardness, breaking load and elongation were measured.
Because nickel is diffusion bonded in both materials, the nickel-rich areas present a similar appearance. Differences between these areas depending on the pre-sintering conditions have not been observed.
Engstrom [21] compared these two materials using a standard tensile strength test, and the results for the hardness, ultimate tensile strength and elongation are in full agreement with the results obtained here. These materials can exhibit robust performance improvements after heat treatments and high-temperature sintering.
CONCLUSIONS
The results of this work can be summarized as follows:
- The pre-sintering temperature defines the dimensions after this stage (pre-sintering), but also after sintering, for both materials. This effect is more pronounced in Ni4Cu2Mo1.5. For both materials, the dimensional stability can be considered acceptable for industrial applications. The effect of the pre-sintering temperature must be taken into account in tooling design.
- The hardness values after pre-sintering for Ni4Cu2Mo1.5 are increased with a high pre-sintering temperature. After sintering, this process route results in a slightly increased hardness, but no significant change. Ni4Cu1.5Mo0.5 does not present any significant hardness variation, either in the pre-sintering or the sintering stage.
- The mechanical properties (breaking load) of the material are slightly improved with time and temperature. The improvement in Ni4Cu1.5Mo0.5 is more significant.
- The elongation of both materials is increased when the pre-sintering time is doubled.
- The pre-sintering temperature has great influence on the pressing parameters, so any change in the pre-sintering conditions must be controlled to avoid damage to the production equipment.
- After pre-sintering at high temperature, the material has already started the sintering process (particularly Ni4Cu2Mo1.5); therefore, at this point the second pressing operation for Ni4Cu2Mo1.5 is more similar to sizing than to a traditional repressing operation. Due to this fact, an additional investigation route is open: machining operations which under normal conditions must be done after sintering could be done at this stage. Machining before final sintering is always an important cost-saving measure for the production process.
- The main conclusion of this paper is the following: depending on the pre-sintering temperature, the dimensions of the parts after the full process are completely different, especially for Ni4Cu2Mo1.5, while all other relevant characteristics, such as hardness and elongation, are kept constant or slightly improved. This result is relevant when considering the tolerance requirements for synchronizing hubs.
Figure 1. Synchronizing hub used in this work.
Figure 2. Sketch of component cross-section. Measuring areas for density.
Figure 3. Sketch of the mechanical testing procedure.
Figure 10. Microstructures of the materials after the first press and sintering cycle.
Figure 11. Microstructures of the materials in their final state, after the second press and sintering cycle.
|
2018-12-26T23:14:52.280Z
|
2013-01-01T00:00:00.000
|
{
"year": 2013,
"sha1": "3a986ade7e673fc4cfd5f281e7fab53f3d58afcb",
"oa_license": "CCBY",
"oa_url": "https://revistademetalurgia.revistas.csic.es/index.php/revistademetalurgia/article/download/1268/1279/1284",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3a986ade7e673fc4cfd5f281e7fab53f3d58afcb",
"s2fieldsofstudy": [
"Business",
"Materials Science"
],
"extfieldsofstudy": []
}
|
54957246
|
pes2o/s2orc
|
v3-fos-license
|
Performance Evaluation of Downdraft Gasifier for Generation of Engine Quality Gas
The use of renewable energy resources for power generation is increasing broadly, so the engine-quality producer gas obtained from a gasifier must be evaluated for engine applications. The internal combustion engine (ICE) generates electricity for lighting and other end uses. The system consists of a gasifier coupled with a venturi scrubber, a coarse filter filled with wood chips, two fine filters filled with sawdust and one security fabric filter. The gases produced are cooled in the water scrubber, tar is then removed in the subsequent filters, and the gases are supplied to a spark ignition engine driving an AC generator set. The unit is tested with resistive loading, which is increased gradually from 20%, 40%, 60%, 80% to 100%. The cold and hot gasification efficiencies are 75.41% and 80.85% respectively. The biomass consumption rate is 27 kg/h. The air and gas flow rates are measured as 20.79 m3/h and 79 m3/h respectively. The temperature above the grate is 603 °C. The tar level after the gas cooling and cleaning unit is 9 mg/Nm3.
Introduction
Though producer gas as a fuel has been known since 1785, the use of gasifiers with engines for power generation came into existence only around 1920. The Second World War created a very large demand for power, and most gasification development activities were carried out during this period due to the shortage of fossil fuels. The possibility of using this gas for heating and power generation was first realized in Europe, where producer gas systems using charcoal and peat as feed materials emerged.
Sims [1] suggested wood biomass as the fuel with the greatest potential for electricity generation in New Zealand. Narvaez [2] reported on biomass gasification with air and the effect of several variables on gasifier performance. Delgado [3] discussed the cleaning of raw producer gas using cheap calcined materials and rock downstream from the gasifier. Hasler [4] showed that the tar reduction efficiency of the cleaning system must be 90% for satisfactory operation of an IC engine; a dry sand bed was used to remove additional tar and dust from the gas. Mukunda [5] suggested the use of dry sand filters after a wet scrubber as an effective way of removing particulates and tar from producer gas. The Ministry of Non-Conventional Energy Sources [6] has specified the upper limit for the tar and dust (particulate) content of producer gas as:

Total tar + dust content < 150 mg/Nm3

Moreover, Hellwig [7] recommended a reduction in the moisture content before combustion by pre-drying the biomass in order to increase the overall energetic efficiency. Centeno [8] predicted the effect of the molar concentrations of different species in the syngas and the temperature profile in the gasifier along its height; this information on syngas characteristics can be used for sizing the reactor and for material selection. McKendry [9] described biomass characteristics broadly and focused on suitable biomass species providing high energy outputs to replace conventional fossil fuel energy sources; the type of biomass required was largely determined by the energy conversion process and the form in which the energy was required. Gielena [10] and Johansson [11] studied biomass as a renewable energy source with almost zero net carbon dioxide emission, since the carbon released corresponds to the amount fixed during the growth of the biomass. Therefore, biomass has been the focus of many countries for sustainable energy production and also for the reduction of greenhouse gas emissions. Murphey [12] described the biomass components, which include cellulose, hemicelluloses, lignin, lipids, proteins, simple sugars, starches, water, hydrocarbons, ash, etc., and the effects these components have on the fuel properties.
Chen [13] reported that not only the biomass characteristics but also the available technology is responsible for fuel gas production from biomass, and a characteristics modeling approach was proposed to address the challenges of present gasification technology. Beenackers [14] presented a review of moving bed biomass gasifiers, which offer high performance and are commercially available in the European Union.
Most biomass materials are found in developing regions such as Africa and Asia, so biomass would be most viable for energy in the developing countries. However, in recent years, developed countries in Europe have shown interest in and promoted the use of biomass as fuel [15][16][17][18][19]. Hence, biomass has aroused the interest not only of developing countries but also of developed countries. Stahl [20] described how Sweden has continued investing in its biomass energy program since the Second World War. The difficulties faced during the Second World War have always been a constant reminder to the Swedes, as they must be for most countries in the world that depend upon imported fossil fuel.
System Description
A downdraft gasifier is used in the system, in which the air and biomass flows take place in the same direction. The biomass is fed through the feed door and is stored in the hopper. The limited and controlled amount of air for partial combustion enters above the oxidation zone through air nozzles. The reactor holds charcoal for the reduction of the partial combustion products while allowing the ash to drop off into the ash pond. The gas passes through the annulus area of the reactor from the upper portion of the perforated sheet and exits at the bottom of the gasifier. The main advantage of this type of gasifier is that it produces tar-free producer gas, while the major disadvantage is the need to process the biomass to avoid an excessive pressure drop. In addition, the efficiency is low, and it is not practical for capacities above 350 kW.
The producer gas outlet of the gasifier is connected to various downstream systems, viz. a venturi scrubber, drain box, coarse filter, flare with valve, engine gas control valve, fine filters, safety filter and engine shut-off valve. Gas produced in the gasifier is scrubbed and cooled in the venturi scrubber with water re-circulated from the cooling pond by the AC scrubber pump. The gas is separated from the water in the drain box and introduced into the coarse filter, the fine filters (active and passive) and a safety filter. Cool, clean gas and air are then sucked into the engine through a mixer butterfly consisting of a piping and valve arrangement. The producer gas then starts the engine in gas mode. A governor-linked control butterfly is provided to vary the gas quantity as per the electrical load on the generator, keeping the frequency within limits. An electrically driven biomass cutter and a reactor-heat-recovery-based wood piece drying arrangement are also provided to make the system self-sufficient.
Fuel
The biomass used for testing the gasifier consisted of firewood chips of approximate size 60 × 45 × 25 mm3 with a moisture content of 12%.
Loading Device
A loading device was developed, consisting of 47 bulbs of 500 W capacity each. These bulbs are distributed evenly over the phases and switched ON or OFF to load the engine gradually.
Measurement of Tar
The tar samples are collected in a copper tube condenser dipped in an ice bath. The length of the condenser is 5 m. The temperature of the water bath in which the condenser is placed was maintained at 5 ± 1 °C to cool the gases passing through the condenser. The tar is collected by the displacement method, in which it is displaced with 100 liters of water and then cleaned from the condenser with acetone.
Biomass Consumption Rate
The biomass consumption per hour of gasifier operation was estimated by dividing the total weight of biomass fed into the gasifier by the total time of operation.
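As a worked example of this calculation (the numbers below are illustrative only, chosen to reproduce the 27 kg/h reported later, and are not measurements from this study):

```python
def biomass_consumption_rate(total_biomass_kg, operation_hours):
    """Biomass consumption rate (kg/h) = total biomass fed / total run time."""
    return total_biomass_kg / operation_hours

# e.g. 216 kg of wood chips fed over an 8 h test run -> 27.0 kg/h
print(biomass_consumption_rate(216, 8))
```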
Gas Analysis
The producer gas leaving the gasifier was collected in gas samplers and analyzed using a gas chromatograph (CHEMITO, Model No. 8510, produced in collaboration with a US company).
Measurement of Temperature
The temperature profile of the gasifier was measured using chromel-alumel thermocouples at 8 different locations above the grate. The temperatures at the inlet and outlet of the venturi scrubber and filters were measured by J-type thermocouples (make: EMSON Pvt. Ltd., Ajmer). These thermocouples were connected to a data logger (DATA TAKER, Model No. DT600, Series 3) for temperature recording.
Measurement of Pressure
The pressures in the gasifier's reactor and hopper and at the different filters were measured using U-tube manometers, and the manometer heights for the air and producer gas were also read from U-tube manometers (make: PEEKAY Scientific, Govindpura, Bhopal).
Results and Discussion
For evaluating the downdraft gasifier, the system was operated continuously with mixed wood for more than eight hours a day during the test. The temperatures inside the reactor, measured 145 mm, 165 mm, 185 mm, 205 mm and 225 mm above the grate using thermocouples, were 603 °C, 585 °C, 544 °C, 519 °C and 408 °C respectively, as shown in Fig. 2. This profile indicates a steady state of equilibrium in the gasifier at each height, reflecting stable operation. The temperature of the grate was recorded as approximately 1200 °C.
The tar level in the producer gas, as shown in Fig. 3, was measured as 230 mg/Nm3 at the outlet of the gasifier, and at the outlets of the coarse filter, active filter, passive filter and security (fabric) filter of the cleaning unit it was 78 mg/Nm3, 46 mg/Nm3, 18 mg/Nm3 and 9 mg/Nm3 respectively. Fig. 3 also indicates the cleaning intervals of the filters. In terms of efficiency, the fabric filter is the most efficient. Fig. 4 shows the variation in pressure with respect to time for the different filters; it remained approximately constant after stabilization of the plant up to the maximum output. The increasing pressure drop along the filter train indicates the cleaning of the producer gas. Fig. 6 shows the variation in cold and hot gasification efficiency with respect to load. As the load varies from 4.5 kW to 18 kW, the cold and hot gasification efficiencies increase from 67.5% to 75.41% and from 69.55% to 80.85% respectively. This reflects the improvement in the quality of the producer gas, which can be used efficiently for electric power production and in internal combustion spark ignition engines for various applications. Fig. 7 depicts the biomass consumption rate increasing with load. This is due to the increase in biomass gasification: when the load increases, the demand for producer gas by the internal combustion engine also increases, so more biomass is burnt in the gasifier and the biomass consumption rate rises with load. The biomass consumption rate is measured as 27 kg/h at maximum load.
Conclusion
The performance evaluation of the downdraft gasifier through the above discussion clearly indicates the following conclusions: (1). Wood is found to be an effective fuel for the generation of producer gas. (2). The pressure drop across the filters increases with the duration of operation, which indicates that moisture, tar and particulates are being filtered out of the producer gas. (3). The fabric filter is the most efficient filter; the tar value after this filter is 9 mg/Nm3. (4). The grate temperature was nearly 1200 °C, which is adequate for proper gasification. The quality of the producer gas improves when the system is operated for a longer duration. (7). The gasification system can be used as a small power unit to meet thermal and electrical needs, with a huge scope to save fossil fuel. The performance of the gasifier for power generation with a spark ignition engine has been demonstrated to be technically feasible and to meet the requirement of engine-quality producer gas.
|
2019-04-13T13:12:30.091Z
|
2013-01-01T00:00:00.000
|
{
"year": 2013,
"sha1": "fd99fb5421e3bd63f1c2ba21373072b0b51dc327",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/201309/nrc.2013.010204.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e6c424d3f45264eb449158abbfab3382d00b9c88",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
214671320
|
pes2o/s2orc
|
v3-fos-license
|
The Gibberellin Producer Fusarium fujikuroi: Methods and Technologies in the Current Toolkit
In recent years, there has been a noticeable increase in research interest in the Fusarium species, which include prevalent plant pathogens and human pathogens, common microbial food contaminants and industrial microbes. Taking advantage of gibberellin synthesis, Fusarium fujikuroi has succeeded in becoming a prevalent plant pathogen. At the same time, F. fujikuroi is utilized for the industrial production of gibberellins, a group of extensively applied phytohormones. F. fujikuroi has been known for its outstanding performance in gibberellin production for almost 100 years, and research activities related to this species have lasted for a very long period. The slow development of biological investigation of F. fujikuroi is largely due to the lack of efficient research technologies and molecular tools. During the past decade, technologies to analyze the molecular basis of host-pathogen interactions and metabolic regulation have developed rapidly, especially in the area of genetic manipulation. At the same time, industrial fermentation technologies have kept developing steadily. In this article, we review the currently available research tools and methods for F. fujikuroi research, focusing on the topics of genetic engineering and gibberellin production.
INTRODUCTION TO F. fujikuroi
F. fujikuroi is a prevalent plant pathogen that causes the bakanae disease of the rice plant. Diseased plants grow inordinately long, and eventually fall over and die. This phytopathogen was later found to cause devastating disease in many other economically important plants, including maize, sugarcane, wheat, asparagus etc. In the early 20th century, scientists from Japan, the United Kingdom, and the United States isolated the active compounds, gibberellic acids (GAs), which were later also isolated from higher plants (Mitchell et al., 1951). Since then, differently structured GAs have been isolated, and the GAs have become a large family of structurally related diterpenoids with 136 known isoforms, of which some are active plant hormones, including GA1, GA3, GA4, and GA7 (Blake et al., 2000; Bomke and Tudzynski, 2009; Rodrigues et al., 2012). GAs are now classified as one of the five major types of phytohormones, namely the auxins, cytokinins, gibberellins, abscisic acid, and ethylene. The use of ppm (parts per million, mg/l) levels of GAs may result in physiological effects such as elimination of dormancy in seeds, acceleration of seed germination, improvement in crop yield, promotion of fruit setting, overcoming of dwarfism etc. GAs have been widely applied to improve the quality and quantity of fruit, crop and ornamental plants. Although GAs are present extensively in plants, fungi and bacteria, F. fujikuroi is the only organism applied for the industrial production of GAs, as it shows excellent productivity. Taking GA3, a representative GA product of F. fujikuroi, as an example, the yield in industrial submerged fermentation (SMF) has reached more than 2 g/L after 7 days of fermentation, while in solid state fermentation (SSF) its yield has reached 7 g/kg of support or even higher after 9 days of fermentation. These values are much higher than the reported yields of other microbes. Besides, many other valuable secondary metabolites have also been found to be produced by F. fujikuroi, indicating the potential of F. fujikuroi to be applied for the production of other chemicals (Janevska and Tudzynski, 2018).
GA-induced signal transduction is very complicated in plants, and an overdose of GAs may result in plant death (Eckardt, 2002). As phytopathogens, fusaria can be loosely classified as hemibiotrophs. Upon infection with F. fujikuroi, the plant becomes sick and weak, and subsequently becomes easier to invade, which could be largely due to the contribution of GA secretion. Research from Wiemann et al. showed that infected rice plants experienced dramatically increased invasive fungal growth with a GA-secreting wild type F. fujikuroi when compared to its GA-deficient mutant. Meanwhile, the plant body enlarged by the abnormal elongation might also provide the pathogen with additional space and nutrients. Eventually, the infection turns to the stage of killing and consuming the host body, and the fusaria become necrotrophic in this stage (Ma et al., 2013).
It would be interesting to reveal the underlying mechanisms of the virulence factors of F. fujikuroi, which may help to discover potential antifungal targets or to develop strains that are nonpathogenic and safe for the agricultural environment. Currently, we still lack the systematic knowledge needed to control the pathogenesis of F. fujikuroi. The virulence/pathogenicity genes of some Fusarium species have been characterized and summarized in several review articles (Michielse and Rep, 2009; Walter et al., 2010; Kazan et al., 2012). The virulence-linked host-pathogen interaction is a very complicated process with a massive number of genes and regulators involved. Based on the infection strategies, Ma et al. classified the virulence genes into two types. The genes of the first class were named the basic pathogenicity genes, which are universal in the Fusarium genus and shared with many other pathogens. Genes of the mitogen-activated protein kinase (MAPK) signaling pathways, Ras proteins (small GTPases), G-protein signaling components and cAMP pathways etc. belong to this class. These genes usually also correlate globally with cell fitness. The genes in the second class were named the specialized pathogenicity genes, which are usually specific to a Fusarium species on specific hosts (Ma et al., 2013). GA production is apparently a key virulence factor of F. fujikuroi and requires a set of specialized pathogenicity genes. However, GA production is not essential for virulence: deletion of the entire GA gene cluster could neither impair host-cell colonization nor abolish invasive growth completely in a rice-root infection experiment. Besides GAs, F. fujikuroi synthesizes a large number of other metabolites, of which many are toxic compounds. In addition, many secreted enzymes may also help the fusaria to penetrate the cell wall and ultimately invade the plant. Bashyal et al. analyzed the F. fujikuroi genome and predicted 1194 secretory proteins, of which 38% might be related to virulence. Moreover, among the secretory proteins, 5% were polysaccharide lyases, 7% were glycosyl transferases, 20% were carbohydrate esterases, and 41% were glycosyl hydrolases (Bashyal et al., 2017). It would be interesting to further explore experimentally the specialized pathogenicity genes, especially some secreted cell wall degrading enzymes and mycotoxins, in F. fujikuroi (Desmond et al., 2008).
Besides the gibberellin-producing fusaria, the helminthosporol-producing Helminthosporium sativum has also received attention. Helminthosporol is a natural sesquiterpenoid that is able to induce GA-like bioactivity (Miyazaki et al., 2017) and causes seedling blight and root rot in some plants (Pringle, 1976). However, far less is known about the biosynthesis of helminthosporol and the biology of H. sativum than about GAs and F. fujikuroi. Compared with helminthosporol, GAs are prevalent in higher plants in nature and thus have a broader application, whereas helminthosporol is a plant growth regulator synthesized by the microorganism. Besides, although helminthosporol and its analogs helminthosporal and helminthosporic acid have GA-like activity in some plants, they act less efficiently or differently in many experiments. For instance, helminthosporol and helminthosporic acid work less efficiently than GAs in reversing 2-chloroethyltrimethylammonium chloride induced dwarfing of the hypocotyls of lettuce seedlings and in stimulating sugar release from de-embryonated barley (Briggs, 1966). The GA biosynthetic inhibitor prohexadione did not inhibit the shoot elongation caused by helminthosporic acid. H. sativum-infected wheat was not elongated, because helminthosporol has no GA activity in wheat. H. sativum did not infect the rice plant as a host, although helminthosporol may promote the growth of rice seedlings (Miyazaki et al., 2017).
A prerequisite for characterizing the molecular biology of an organism is having efficient analysis tools. Now that a number of F. fujikuroi genome sequences are available (Jeong et al., 2013; Wiemann et al., 2013; Bashyal et al., 2017; Niehaus et al., 2017), the need to develop molecular tools is more pressing than ever. However, unlike many other fungi, such as the baker's yeast and some phylogenetically close Aspergillus spp., F. fujikuroi is critically short of molecular tools. The lack of handy molecular tools has become the major obstacle to the development of F. fujikuroi research, especially with respect to genetic modification of this fungus. Genetic manipulation is difficult in this microorganism, due to poor protoplast formation, inefficient transformation, low homologous recombination (HR) rates etc. In this review, the currently applied methods and tools, including methods to identify Fusarium species, plant infection assays, the sexual cross method, promoters for gene expression, the plasmid toolbox, protocols for protoplast preparation, transformation technologies, genome editing strategies, RNA-mediated gene silencing assays, protein fluorescent tags, methods of biomass quantification, gibberellin fermentation technologies and strategies for strain improvement, have been reviewed. We summarize the currently used materials and techniques for F. fujikuroi research, providing a perspective on the development of molecular tools for this industrially and agriculturally important fungus.
IDENTIFICATION OF FUSARIUM SPECIES
The Fusarium species are ubiquitous in nature, and are extensively distributed in soil, plants and various organic substrates. Identification of Fusarium species is therefore crucial for agricultural applications, healthcare purposes and scientific investigation. To date, hundreds of Fusarium genome sequences have been deposited in databases. These genome sequences can be used as efficient and essential tools for the identification of Fusarium species, gene/enzyme mining, and evolutionary and phylogenetic analysis etc. (Ward et al., 2002; O'Donnell et al., 2013; Bashyal et al., 2017). There are currently 68 Fusarium species with genome sequences available in the NCBI (National Center for Biotechnology Information, United States) database. Among them, Fusarium oxysporum, F. fujikuroi, Fusarium proliferatum and Fusarium graminearum are the four best studied species; the numbers of their genome assemblies in the database are 222, 18, 13, and 11 respectively. These Fusarium species are all prevalent phytopathogens and economically very important, and are thus better studied, with more molecular tools available, than the other species in the Fusarium genus. The Fusarium fujikuroi species complex (FFSC), previously known as the Gibberella fujikuroi species complex, contains about 50-100 phylogenetically close Fusarium species, of which F. fujikuroi, Fusarium proliferatum, and Fusarium verticillioides are the best studied. The taxonomy of the FFSC was based on evolutionary, biological and morphological species concepts (Kvas et al., 2009; Summerell et al., 2010), whereas modern biology also employs sequencing data.
Conventionally, the identification of microorganisms is mainly based on morphology. Morphological identification is generally based on macroscopic and microscopic characteristics. The macroscopic characteristics include colony appearance, pigmentation and growth rates. FFSC cells usually present as white to dark purple cottony aerial mycelium. The microscopic characteristics include observation of the macroconidia, microconidia and chlamydospores, the mode of microconidial formation etc. At the moment, the most recent and systematically documented guide for the morphological characterization of Fusarium species was contributed by Leslie and Summerell (Leslie and Summerell, 2006). However, morphological identification is time consuming and can easily result in misidentification, especially for phylogenetically close species (Raja et al., 2017). Although it might be problematic to use morphology alone, this method is still helpful in practice and is now frequently used in combination with other molecular means.
The MALDI-TOF MS (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) assay is an advanced tool for the rapid and accurate identification of microorganisms. This technique has been widely applied to identify bacteria, yeasts and other fungi (Pinto et al., 2011; Mesureur et al., 2018; Pauker et al., 2018; Quero et al., 2019; Kim et al., 2019), especially for the identification of human pathogens, although it is relatively less popular in plant pathogen research at this moment. However, a broader application can be foreseen in the near future, based on its excellent accuracy and efficiency and the fast development of analyzing equipment. Briefly, this assay is based on the mass spectral readout of the molecular masses of an ionized protein mixture. Thus, each cell culture may result in a very specific mass spectral pattern, which can be taken as a unique fingerprint to distinguish a microorganism from very closely related species (Huschek and Witzel, 2019). To be used as an identification tool, a database of such mass spectral patterns has to be established beforehand. A MALDI-TOF MS database has been established with 24 reference strains for the identification of mainly clinical isolates belonging to the FFSC. It was reported that 93.6% of the isolates could be correctly identified to the species level (Al-Hatmi et al., 2015). Recently, Wigmann et al. expanded the database (Wigmann et al., 2019). In their work, MALDI-TOF MS was carried out for 49 species from the species complex, taking the sequencing data of the translation elongation factor 1 α (TEF1α) gene as the reference. The MALDI-TOF MS fingerprints were then taken as a database to screen over 80 isolates from the FFSC, resulting in a high correct identification rate of 94.61%.
PCR-based identification is another rapid, accurate, and cost-effective way to identify microorganisms. Unlike MALDI-TOF MS, PCR-based methods require only the routine facilities of a molecular laboratory. Such methods have been developed for Fusarium species identification for many years, although without a standardized protocol: different genomic loci are targeted, and the results take diverse forms. The galactose oxidase gene gaoA has been used as a PCR target to identify Fusarium species, as this gene region has very low homology among fungi (Niessen and Vogel, 1997; de Biazio et al., 2008). The internal transcribed spacer (ITS) regions of the conserved rDNA have been successfully used to identify some closely related fungi, including several Fusarium species (Abd-Elsalam et al., 2003; Lacmanova et al., 2009). The TEF1α gene is usually a single-copy gene in the Fusarium genus and is frequently employed for species identification, as it also shows a high level of sequence polymorphism between species. Other loci, such as β-tubulin, RNA polymerase II (RPB2), nitrate reductase, phosphate permease, and the mitochondrial small subunit, have also been targeted for PCR identification. For better resolution, however, a multi-locus sequence typing (MLST) approach targeting multiple genes should be used; usually, at least three gene loci are combined for such identifications (Baayen et al., 2000; O'Donnell et al., 2000; Skovgaard et al., 2001). As an example, Ke et al. (2016) identified Fusarium species by PCR of ITS, RPB2, and TEF1α. Faria et al. (2012) developed a multiplex PCR method after testing 6 pairs of primers targeting different genes/genomic regions of different Fusarium species; the success or failure of amplification with the different primer pairs was scored to assign an isolate to a specific Fusarium species (Faria et al., 2012). Recently, a TEF1α LAMP (loop-mediated isothermal amplification) based method has been developed for detection of the seedborne F. fujikuroi and Magnaporthe oryzae in rice seeds. Four independent F. fujikuroi isolates were tested using their serially diluted DNA samples as amplification templates. Based on the time-to-positive of the LAMP assay, the authors reported a detection limit of 100-999 fg of F. fujikuroi DNA (varying among isolates) (Ortega et al., 2018).
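As a rough illustration of the MLST idea, the sketch below concatenates a few loci (here TEF1α, RPB2, and ITS), computes a naive percent identity against reference isolates, and reports the similarity to each. The sequences are short placeholders and are assumed to be pre-aligned and equal in length; a real analysis would use full-length, aligned sequences and dedicated alignment tools against curated databases.

```python
def percent_identity(a, b):
    """Naive percent identity for two pre-aligned, equal-length sequences."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / max(len(a), 1)

# Hypothetical toy sequences; real loci are hundreds of bp and must be aligned first.
loci = ["TEF1a", "RPB2", "ITS"]
references = {
    "F. fujikuroi":    {"TEF1a": "ATGGCTACGT", "RPB2": "TTGACCGGAA", "ITS": "CCGTAAGTTC"},
    "F. proliferatum": {"TEF1a": "ATGGCAACGT", "RPB2": "TTGACCGGTA", "ITS": "CCGTAAGATC"},
}
query = {"TEF1a": "ATGGCTACGT", "RPB2": "TTGACCGGAA", "ITS": "CCGTAAGTTC"}

def concat(record):
    """Concatenate the loci in a fixed order to form one MLST sequence."""
    return "".join(record[locus] for locus in loci)

for species, rec in references.items():
    pid = percent_identity(concat(query), concat(rec))
    print(f"{species}: {pid:.1f}% identity over {len(concat(rec))} bp")
```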
PLANT INFECTION ASSAYS
Although F. fujikuroi can invade many plants, rice is a preferred host, and the ability to cause rice bakanae disease has become the hallmark of this microorganism. The rice plant is therefore frequently chosen as the host to investigate the virulence of F. fujikuroi. Wiemann et al. investigated F. fujikuroi virulence linked to a velvet-like protein complex using a rice plant infection assay. In their experiment, dehusked rice seeds were incubated for 3 days on agar gel for germination and then co-incubated with 5-mm-diameter F. fujikuroi mycelial plugs in vermiculite-filled test tubes. The infected plants were grown for another 7 days, supplied with water and nutrients. Both the germination and growth periods were carried out at 28 °C under a 12 h light/12 h dark cycle. Finally, bakanae symptoms such as chlorotic stems and leaves were observed and documented (Wiemann et al., 2010). Adam et al. infected rice plants using conidia of different F. fujikuroi strains. In their experiment, rice seeds were germinated for 2 days until the seedlings had developed shoots/roots of 1-2 mm. A fixed amount of conidia was then co-inoculated with the prepared seedlings for infection. The infected plants were grown for another 10 days with programmed lighting and nutrient supply. Finally, plant length and internodal distances were recorded, while the paler pigmentation typical of bakanae disease was characterized and verified by measuring the ratio of chlorophyll to carotenoid content. Similar to the previously described experiment, the whole assay took around 2 weeks to evaluate systemic F. fujikuroi infection of rice plants in vivo (see Figure 1). A rice/maize root infection assay was carried out by Wiemann et al. to evaluate the contribution of GA production to pathogenicity. The pathogens were inoculated by co-cultivation with germinated rice and maize seeds. Fixed temperature and humidity, and programmed lighting cycles (different for rice and maize), were supplied to the infected seedlings in agar gel support. After 10 days of growth, root samples were collected to visualize invasive growth of the corresponding pathogen by fluorescence microscopy, and penetration events were quantified. Meanwhile, pathogen spores (10^4/ml) were collected for measurement of relevant mRNAs by RT-PCR. In the described infection assays, rice seedlings were mostly chosen as the host plant.
Different fungal samples, such as mycelia and conidia, can be used to infect the seedlings, while different methods are available to characterize the pathogenicity/virulence (see Figure 1).
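A minimal sketch of how the virulence readouts mentioned above (plant length, internodal distance, and the chlorophyll/carotenoid ratio) might be summarized per treatment group is given below. The measurements and group labels are entirely made up; the point is only to show how bakanae-like symptoms (elongation and paler pigmentation) could be expressed as simple group statistics.

```python
from statistics import mean

# Hypothetical per-plant measurements from a rice seedling infection assay.
# Each record: (group, plant_length_cm, internode_cm, chlorophyll, carotenoid)
records = [
    ("mock",     21.5, 1.8, 2.4, 0.61),
    ("mock",     22.1, 1.9, 2.5, 0.60),
    ("infected", 29.8, 3.1, 1.3, 0.58),
    ("infected", 31.2, 3.4, 1.2, 0.57),
]

def summarize(group):
    """Group means for the three readouts used to score bakanae-like symptoms."""
    rows = [r for r in records if r[0] == group]
    return {
        "mean_length_cm": mean(r[1] for r in rows),
        "mean_internode_cm": mean(r[2] for r in rows),
        "chl_car_ratio": mean(r[3] / r[4] for r in rows),
    }

for g in ("mock", "infected"):
    print(g, summarize(g))
# Elongated, paler seedlings (longer internodes, lower chlorophyll/carotenoid
# ratio) relative to mock-inoculated controls indicate bakanae-like symptoms.
```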
SEXUAL CROSS
Crossing is a powerful way to combine genotypes, exchange genetic material, and obtain large-scale genetic variation. Studt et al. implemented a sexual cross between a pair of mating partners, C1995 and IMI 58289, two well-studied laboratory strains with the opposite mating types Mat-2 and Mat-1, respectively. With this experiment, they showed that the newly identified gene FSR1 is involved in perithecial pigmentation in F. fujikuroi. The crossing experiment was carried out using a carrot agar medium, as reported previously by Klittich and Leslie (Klittich and Leslie, 1988), and the crossing protocol has been well documented. Briefly, the female parent and the male parent were inoculated on carrot medium and complete medium, respectively. After 7 days of growth at 25 °C, mycelium from the male parent was harvested and suspended in Tween 60 to obtain a spore suspension, which was subsequently spread onto the mycelium of the female parent on a carrot agar plate. The carrot agar plate was then incubated at 27 °C for a few weeks until perithecia were produced. The F. fujikuroi species complex has been divided into several biological species, designated mating populations A to J; F. fujikuroi corresponds to mating population C. Generally, F. fujikuroi is heterothallic and should be readily crossed in the laboratory. However, sexual fertility varies from strain to strain, so sexual crosses are not always successful.
PROMOTERS
Selection of a proper promoter is crucial in genetic engineering. Depending on the research purpose and the gene to be expressed, a native promoter, a constitutive promoter, or an inducible promoter can be used. In F. fujikuroi, the most frequently used promoters, such as the gpdA (Michielse et al., 2014), oliC, and trpC (Rosler et al., 2016) promoters, originate from Aspergillus spp. and provide very strong expression. The strong native glnA promoter is induced under nitrogen starvation and repressed by the addition of NH4NO3 or glutamine (Teichert et al., 2004). The transcriptional regulation of glnA depends on the transcription factor AreA, which is extensively involved in the regulation of a wide range of metabolic pathways. Nitrogen starvation/induction is therefore closely linked to the synthesis of many important secondary metabolites, so a potential conflict between glnA expression and the research purpose has to be considered before the glnA promoter is chosen. The glnA promoter has been used for conditional expression of a gene in F. fujikuroi (Teichert et al., 2006). The alcA promoter is another strong inducible promoter that has been successfully applied in F. fujikuroi research (Teichert et al., 2006). Gene expression driven by the alcA promoter can be efficiently induced by 1% (v/v) ethanol and repressed by 2% (w/v) glucose in F. fujikuroi. The alcA promoter originates from Aspergillus nidulans; alcA and two other genes, alcR and aldA, belong to the ethanol regulon and are all transcriptionally regulated by the CREA and ALCR proteins (Mathieu and Felenbok, 1994). The glnA and alcA promoters are both strong promoters. In addition, a Tet-on system has been developed for F. fujikuroi, established by adapting a Tet-on system of Aspergillus niger (Meyer et al., 2011). The expression construct is composed of an oliC promoter, a tetracycline-dependent transactivator rtTA2S-M2, an A. fumigatus terminator TcgrA, and an rtTA2S-M2-dependent promoter tetO7:Pmin (see Figure 2). The constructed Tet-on promoter has been shown to successfully activate a silent gene cluster in F. fujikuroi upon addition of 50 µg/ml doxycycline.
PLASMID TOOLBOX
Plasmids form an essential part of molecular biology and genetic manipulation. However, compared with model organisms such as Saccharomyces cerevisiae, which has the most diverse set of plasmids, F. fujikuroi has almost no dedicated plasmids. The currently working plasmids in F. fujikuroi are generally integrative plasmids that originate from the plasmid toolbox of the Aspergillus species. An A. nidulans DNA fragment, AMA1, which enables autonomous replication (AR) of plasmids in some Aspergillus species, such as A. nidulans, A. niger, and Aspergillus oryzae (Gems et al., 1991), has been tested in F. fujikuroi. Transformation efficiency more than doubled with the AMA1-containing AR plasmid compared with the backbone plasmid. However, this plasmid proved to be very unstable inside the cells: after 10-19 days of incubation in non-selective medium, only 8-44% of the cells still contained the selection marker (Bruckner et al., 1992). In addition, Southern blot analysis of cells transformed with this plasmid gave very weak bands compared with cells transformed with the backbone plasmid, although equal amounts of total DNA were used in both conditions, suggesting that the AMA1 plasmid was maintained at a low copy number in the cells.
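Assuming simple exponential loss, the retention figures quoted above can be turned into a rough per-day plasmid loss rate, as in the short calculation below. This is only an illustrative back-of-the-envelope estimate; the original study did not report loss kinetics in this form.

```python
def per_day_retention(final_fraction, days):
    """Per-day retention rate assuming exponential loss: final_fraction = r ** days."""
    return final_fraction ** (1.0 / days)

# Bounds quoted above: 8-44% of cells still carried the marker after 10-19 days.
for frac, days in [(0.08, 10), (0.44, 19)]:
    r = per_day_retention(frac, days)
    print(f"{frac:.0%} retained after {days} d  ->  ~{r:.1%} retained per day "
          f"(loss ~{1 - r:.1%}/day)")
```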
It would be interesting to test a centromeric plasmid based on this AR construct. In yeast, a centromeric plasmid usually behaves as a small chromosome, maintained at one or two copies per cell, which provides a stable expression profile. Some years ago, the yeast centromere CEN11 was tested on a plasmid in A. nidulans (Boylan et al., 1986). However, this yeast centromere appears to be non-functional in A. nidulans, since it had no effect on plasmid stability and did not prevent chromosomal integration of the vector. Thus, to construct a centromeric plasmid, it might be necessary to test a native centromere of F. fujikuroi. In fact, several Fusarium species are known to contain dispensable mini-chromosomes. These mini-chromosomes are maintained independently of the other chromosomes, can apparently be transferred between neighboring cells, and contribute to pathogenicity (Nagy et al., 1995; Ma et al., 2010; Ma and Xu, 2019; Peng et al., 2019). However, little is known about the nature of these mini-chromosomes. It would be very interesting to explore how these chromosomes are utilized, how they replicate, and why they are selectively maintained in the host fungi.
Selection markers are essential for screening transformants. In F. fujikuroi, the choices of selection markers are very limited. Drug resistance markers, represented by the nourseothricin resistance marker nat1 (Teichert et al., 2006; Bomke et al., 2008; Janevska et al., 2017), the hygromycin resistance marker hph (Studt et al., 2013a; Wagner et al., 2013), and the geneticin (G418) resistance marker nptII (Castrillo et al., 2013), are the most frequently used selection markers in F. fujikuroi. Nutritional selection markers, for instance auxotrophy-complementing marker genes, have hardly been used in this microorganism, which could be due to the lack of constructed auxotrophic strains. Sanchez-Fernandez et al. (1991) mutated the nitrate reductase gene niaD in F. fujikuroi and developed a selection system employing a complementary niaD gene of A. niger; with this system, transformants were screened for the ability to utilize nitrate as the sole nitrogen source. The A. niger niaD gene was subsequently replaced by a native niaD gene of F. fujikuroi for later applications (Tudzynski et al., 1996; Prado et al., 2004). Table 1 lists the transformant-screening markers currently used for F. fujikuroi. Wiemann et al. (2012) knocked out the Sfp-type 4'-phosphopantetheinyl transferase Ppt1; the resulting mutant strain became lysine auxotrophic and dramatically increased in GAs yield. It might be interesting to apply this strain for future lysine auxotrophy-based screening.
FIGURE 2 | The construct of the Tet-on promoter for conditional expression in F. fujikuroi. The promoter region is composed of a tetracycline-dependent transactivator rtTA2S-M2 (on the left of the construct, encoding the rtTA protein) and an rtTA-driven operator tetO7. The tetracycline-activated rtTA protein binds the tetO7 operator and induces expression of the targeted gene.
Twaruschek et al. developed a plasmid able to recycle markers for continuous genetic engineering in F. graminearum, so as to overcome the shortage of selection markers in this species. In their strategy, the Cre-loxP recombinase system was activated by induced expression to remove the marker genes after genetic engineering, while URA3/pyrG was included in the system so that isolates that had lost the marker could be selected by counterselection with 5-fluoroorotic acid (Twaruschek et al., 2018). Similarly, the yeast FLP recombinase has been applied in Candida albicans to recycle the nourseothricin resistance marker (Reuss et al., 2004), and a Cre-loxP marker recycling system has also been tested in A. oryzae (Mizutani et al., 2012). Induced expression of recombinases has been widely employed to recycle selection markers and, at the same time, to remove redundant DNA fragments after genetic engineering. This is a feasible strategy for F. fujikuroi as well, especially when multiple genes need to be disrupted.
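The sequence-level outcome of such marker recycling can be sketched as follows: Cre recombines two directly repeated loxP sites and excises everything between them, leaving a single loxP scar. The locus and marker strings in the sketch below are toy placeholders; only the 34-bp loxP sequence itself is the canonical site.

```python
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # canonical 34-bp loxP site

def cre_excise(sequence, site=LOXP):
    """Simulate Cre-mediated excision between the first two directly repeated
    loxP sites: the intervening marker is removed and one loxP scar remains."""
    first = sequence.find(site)
    second = sequence.find(site, first + len(site))
    if first == -1 or second == -1:
        return sequence  # fewer than two sites: nothing to excise
    return sequence[:first] + site + sequence[second + len(site):]

# Toy locus: genomic flank + loxP + selection marker + loxP + genomic flank.
locus = "GGGGG" + LOXP + "HYGROMYCIN_MARKER" + LOXP + "CCCCC"
print(len(locus), "bp before ->", len(cre_excise(locus)), "bp after excision")
print(cre_excise(locus))
```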
PROTOPLAST PREPARATION AND TRANSFORMATION TECHNOLOGIES
In many filamentous fungi, successful protoplast generation is a prerequisite for single-cell isolation, efficient transformation, and successful genetic engineering. F. fujikuroi is a polynuclear mycelial fungus, and some important industrial production strains do not produce conidia, making protoplast preparation obligatory for single-cell isolation. Efficient cell wall-degrading enzymes are of substantial importance for generating protoplasts. Some degrading enzymes, such as snailase (Mink et al., 1990) and chitinase (Patil et al., 2013; Halder et al., 2014), have frequently been used to deconstruct the hyphae of some fungi. Shi et al. (2019) performed a series of optimizations of protoplast preparation, providing the latest optimized protocol for protoplast production in F. fujikuroi. They tested five enzymes, including lysozyme, snailase, cellulase, lysing enzyme, and driselase; only cells treated with the lysing enzyme (Sigma-Aldrich, United States) or driselase (Sigma-Aldrich, United States) gave a reasonable amount of living protoplasts. The lysing enzyme and driselase were then tested in combination, and the optimum ratio was 3:2 at a total concentration of 15 mg/L. Finally, the hydrolysis time was optimized based on the amount of protoplasts produced and the cell regeneration efficiency, and an optimal hydrolysis time of 3.5 h was ultimately chosen. Many transformation methods are available for F. fujikuroi. Electroporation, PEG (polyethylene glycol)-mediated transformation, and Agrobacterium-mediated transformation are the three most popular transformation methods in filamentous fungi. PEG-mediated transformation is an easy-to-operate method that usually incorporates a heat-shock step, and it has frequently been used to transform F. fujikuroi (Bruckner et al., 1992; Linnemannstons et al., 1999; Fernandez-Martin et al., 2000). Besides F. fujikuroi, PEG-mediated transformation has also been frequently used in many other filamentous fungi, such as Aspergillus fumigatus (Fuller et al., 2015), Stagonospora nodorum (Liu and Friesen, 2012), and Pseudogymnoascus verrucosus (Diaz et al., 2019). Electroporation is another frequently used method for DNA transformation in fungi and is usually known to achieve high transformation efficiency. However, the applied voltage needs to be carefully adjusted, especially when the transformation is applied to protoplasts, a very weak cell form. Garcia-Martinez et al. (2015) successfully implemented electroporation in F. fujikuroi at a field strength of 600 V/mm, with a single pulse of 200 µs, in a cuvette with a 1 mm electrode distance. The Agrobacterium-mediated transformation method has been developed for filamentous fungi for many years and was claimed to be much more efficient than the conventional techniques (de Groot et al., 1998). In the Agrobacterium transformation protocol, both conidia and protoplasts can be used as recipients. This method has succeeded in transforming DNA into many different fungi, including A. niger, Aspergillus awamori, Trichoderma reesei, Colletotrichum gloeosporioides, and Neurospora crassa. The Agrobacterium transformation method has also been successfully applied to some Fusarium species, such as F. oxysporum (Islam et al., 2012).
Homologous Recombination (HR)
Unlike in baker's yeast, genome editing is usually inefficient in most fungi, mainly due to poor transformation efficiency and a low HR rate. In S. cerevisiae, efficient gene targeting can be achieved with 30-40 bp of homologous flanking sequence on each side of the donor DNA, whereas in F. fujikuroi and many other fungi, homologous flanking sequences of 500 bp or longer are usually used, and even then the correct integration rate is still very low. When deleting a gene in F. fujikuroi, we obtained only 1 correct mutant after screening over 100 transformants, although a donor DNA construct harboring 700 bp of homologous flanking sequence on each side was used (data not shown). In many other organisms, the correct-transformant rate can be significantly improved by blocking the non-homologous end joining (NHEJ) system. Usually, either the ku70/80 genes or the lig4 gene is knocked out to eliminate NHEJ; this has been done in different fungi, including N. crassa (Ninomiya et al., 2004), Kluyveromyces lactis (Kooistra et al., 2004), Cryptococcus neoformans (Goins et al., 2006), Aspergillus spp. (da Silva et al., 2006; Takahashi et al., 2006; Meyer et al., 2007), Pichia ciferrii (Schorsch et al., 2009), and Candida glabrata (Ueno et al., 2007; Cen et al., 2015). In comparison with KU70/80, deletion of the LIG4 gene has been shown to have fewer side effects beyond the loss of NHEJ function (Daley et al., 2005; Schorsch et al., 2009; Cen et al., 2015). It would therefore be interesting to delete the lig4 gene in F. fujikuroi so as to enhance gene targeting efficiency in this species. However, according to our previous work, NHEJ deficiency does not always noticeably increase HR efficiency, but it dramatically reduces the ectopic integration rate (Cen et al., 2015). The CRISPR-Cas system has frequently been reported to promote HR efficiency compared with classical approaches (Lin et al., 2014; Zhang Y. et al., 2016; Cen et al., 2017; Chung et al., 2017). Thus, applying CRISPR-Cas technology in an NHEJ-deficient F. fujikuroi strain can be expected to improve genetic engineering efficiency; we suggest applying the CRISPR-Cas system in combination with lig4 disruption (Cen et al., 2017).
Although CRISPR-Cas technology has been used for a few years in filamentous fungi (Nodvig et al., 2015), this system was only recently tested successfully in F. fujikuroi. Shi et al. (2019) developed a CRISPR-Cas system for genome editing in F. fujikuroi for the first time at the beginning of 2019. In their work, three nuclear localization signal (NLS) peptides, the classical SV40 NLS, an endogenous histone NLS, and an endogenous Velvet NLS, were tested for importing the Cas9 protein into the nucleus. The NLS of the native histone H2B was finally chosen for fusion with the Cas9 protein, as it gave the best mutagenesis rate. The choice of promoter for gRNA transcription was the second challenge. Shi et al. (2019) evaluated three promoters, a polymerase II promoter, the endogenous polymerase III U6 promoter, and the endogenous 5S rRNA promoter, of which the 5S rRNA promoter gave the best editing efficiency. The resulting CRISPR-Cas system showed a genome editing efficiency of approximately 60-80%.
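For readers unfamiliar with spacer selection, the following is a minimal, generic sketch of how candidate SpCas9 protospacers can be enumerated by scanning a target sequence for NGG PAM sites; it is not the design procedure used by Shi et al. Only the forward strand is scanned, the target sequence is made up, and a real design would additionally check GC content, secondary structure, and genome-wide off-targets.

```python
import re

def find_spacers(seq, spacer_len=20):
    """Return (protospacer, PAM, cut_position) for every NGG PAM on the + strand."""
    seq = seq.upper()
    hits = []
    # Lookahead so that overlapping PAM sites are also reported.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= spacer_len:
            spacer = seq[pam_start - spacer_len:pam_start]
            cut_site = pam_start - 3  # Cas9 cuts ~3 bp upstream of the PAM
            hits.append((spacer, seq[pam_start:pam_start + 3], cut_site))
    return hits

# Toy target region (made up); a real design would use the genomic locus of interest.
target = "ATGGCTCAGTTCGGAACCTTGACGGTTACCTGGAGGCTTAAACGTGGCATCAGG"
for spacer, pam, cut in find_spacers(target):
    print(f"spacer {spacer}  PAM {pam}  predicted cut at position {cut}")
```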
FIGURE 3 | Diagram of the Cas9 complex and DNA repair in genome engineering. The CRISPR-Cas system employs a short guide RNA to direct the Cas9 protein, an endonuclease, to a specific cutting site in the genome, where it generates a DNA double-strand break (DSB). The lethal DSB can be repaired by either NHEJ or HR. Proteins such as Ku70, Ku80, and Lig4 are involved in NHEJ, which results in error-prone repair. In HR, a donor DNA is usually employed for precise genetic engineering.
The advantage of this F. fujikuroi CRISPR-Cas system is that expression of the Cas9 protein did not show any effect on cell growth, whereas non-specific toxicity resulting from Cas9 expression or nuclease activity has been widely noticed in many other organisms (Morgens et al., 2016, 2017; Munoz et al., 2016; Cen et al., 2017). In some other microorganisms, such as E. coli and baker's yeast, the CRISPR-Cas elements can usually be eliminated after genetic manipulation, which not only avoids the possible toxicity of the Cas9 nuclease but also enables continuous genome editing. In E. coli, a dual-plasmid-based system was employed: the gRNA sequence was carried on one plasmid (named pTarget), while the cas9 gene, a temperature-sensitive replicon repA101(Ts), and an arabinose-induced gRNA expression cassette targeting the pTarget plasmid were carried on another plasmid (named pCas). The pTarget plasmid can be eliminated by adding arabinose, while pCas can be removed by raising the temperature (Jiang et al., 2015). In S. cerevisiae, another dual-plasmid-based system was employed, using a centromeric plasmid and a 2µ plasmid. From the centromeric plasmid, Cas9 can be expressed upon galactose induction, while the gRNA-containing 2µ plasmid can be eliminated by culturing the cells in non-selective medium (DiCarlo et al., 2013), allowing continuous genome editing. However, the current F. fujikuroi CRISPR-Cas system is a one-off system: for multi-gene editing, multiple gRNA expression cassettes have to be cloned into one plasmid so that all gene targets are edited in a single round. The multi-gene disruption efficiency was also tested by Shi et al.; the disruption efficiencies were 79.2, 10.8, and 4.2% for single, double, and triple gene disruption, respectively, indicating that genetic manipulation of three or more genes would be very difficult in F. fujikuroi with this system. Another type of CRISPR-Cas system has been developed for several non-fujikuroi fusaria, using in vitro prepared Cas9 protein/gRNA ribonucleoproteins (RNPs). Transformation of in vitro prepared Cas9/gRNA RNPs has been claimed to reduce ectopic integration of the donor DNA, while the CRISPR elements are degraded naturally after genetic manipulation. However, compared with plasmid transformation, the RNP transformation method is more complicated to handle, as the Cas9 protein needs to be purified and concentrated before transformation. Ferrara et al. (2019) developed a CRISPR-Cas9 system for F. proliferatum by transforming in vitro-assembled dual Cas9 RNPs. Using this method, the genomic DNA was cut twice at a specific locus, and the donor DNA could target the DNA double-strand break (DSB) efficiently using short homologous flanking sequences of 50 bp on each side (Ferrara et al., 2019). Such efficient HR (efficient DNA integration using 35-60 bp flanking arms) has also been achieved in some other filamentous fungi, including Penicillium chrysogenum (Pohl et al., 2016) and A. fumigatus (Zhang C. et al., 2016). Efficient homology-directed repair may simplify the construction of donor DNAs and reduce the off-target rate; it would be very interesting to achieve this in F. fujikuroi as well.
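In the spirit of the short-homology-arm donors described above, the sketch below assembles a donor fragment consisting of a 50-bp left arm, an insert, and a 50-bp right arm around a given cut position. The locus, cut position, and insert name are placeholders; real donors would be built from the actual genomic sequence flanking the Cas9 cut site.

```python
def build_donor(locus, cut_pos, insert, arm_len=50):
    """Assemble donor DNA: [left arm][insert][right arm] flanking the Cas9 cut
    position, as used for short-homology-directed repair."""
    if cut_pos < arm_len or cut_pos + arm_len > len(locus):
        raise ValueError("cut site too close to the end of the provided locus")
    left_arm = locus[cut_pos - arm_len:cut_pos]
    right_arm = locus[cut_pos:cut_pos + arm_len]
    return left_arm + insert + right_arm

# Placeholder 200-bp locus and a cut position near its middle.
locus = "ATGC" * 50
donor = build_donor(locus, cut_pos=100, insert="GFP_CASSETTE", arm_len=50)
print(len(donor), "bp donor:", donor[:60], "...")
```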
RNA-MEDIATED GENE SILENCING
An alternative to gene deletion is to silence gene expression at the post-transcriptional level, commonly known as RNAi (RNA interference). RNAi technology has become an excellent tool for exploring gene function in microorganisms, plants, and animals. Briefly, RNAi employs a specific double-stranded RNA and a homology-based mechanism to attack and degrade the targeted mRNA, thereby knocking down gene expression. This technology has been established in different filamentous fungi for many years (Liu et al., 2002; Goldoni et al., 2004; Mouyna et al., 2004; Ullan et al., 2008). McDonald et al. applied RNAi technology to three filamentous fungal phytopathogens, two aspergilli and F. graminearum, over a decade ago (McDonald et al., 2005). More recently, Nino-Sanchez et al. developed an RNAi system for F. oxysporum on the basis of the dsRNA expression cassette used for P. chrysogenum and Acremonium chrysogenum (Nino-Sanchez et al., 2016). Compared with gene deletion, RNAi is clearly a better choice for targeting essential genes. Unfortunately, to date there is no report of RNAi application in F. fujikuroi. RNAi is highly conserved among eukaryotes and thus has great potential to be used in F. fujikuroi as well.
Bimolecular Fluorescence Complementation
Investigation of protein-protein interactions is essential for understanding signal transduction and the regulation of metabolism, and helps to reveal many intrinsic biological mechanisms. The bimolecular fluorescence complementation (BiFC) assay is a useful tool for in vivo observation of protein-protein interactions. Michielse et al. (2014) applied the BiFC assay in F. fujikuroi several years ago. They tagged two transcription factors with two split gene fragments of an enhanced yellow fluorescent protein (EYFP), encoding the N-terminal amino acids 1-154 and the C-terminal amino acids 155-238, and co-transformed the constructs into F. fujikuroi. The resulting transformant showed a co-localized fluorescence signal in the nucleus. This method had previously been tested by Hoff et al. in another filamentous fungus (Hoff and Kuck, 2005) and was applied directly in F. fujikuroi. Confidence in the assay would be greater if it were tested systematically with additional controls to rule out artifacts such as EYFP self-assembly before it is applied extensively in F. fujikuroi.
Fluorescent Proteins
Fluorescent-protein tags are common tools for monitoring protein localization and have been extensively used in F. fujikuroi research (Studt et al., 2013b; Pfannmuller et al., 2015). Michielse et al. (2014) tagged the GATA transcription factors AreA and AreB with GFP and RFP, respectively, to track their cytosolic or nuclear localization. Wiemann et al. (2010) tagged two proteins of a velvet complex, FfVel1 and FfLae1, with enhanced GFP and YFP and visualized a nuclear co-localization signal. Garcia-Martinez et al. tagged the product of the light-responsive gene carO with enhanced YFP and visualized a membrane localization signal; to successfully express a functional fusion protein, they used an 18 bp DNA fragment to bridge the gene and the tag (Garcia-Martinez et al., 2015).
QUANTIFICATION OF BIOMASS
Biomass quantification techniques are basic but important for monitoring cell proliferation. Usually, microorganisms can be quantified simply by counting cells with a hemocytometer, measuring the optical density, or measuring the cell dry weight. Due to the filamentous nature of many fungi, dry weight measurement often becomes the only feasible way to determine biomass efficiently. However, F. fujikuroi is an industrial production microbe, and these traditional measurements are not suitable for the complex fermentation media, which usually contain insoluble components such as corn/rice flour, soybean pulp, and arachis flour; these components may form a sticky, paste-like medium or remain as insoluble particles. The tetrazolium salt (XTT) method is another efficient way to quantify biomass. XTT rapidly penetrates living cells and is converted by active dehydrogenases, so the XTT method can also discriminate active biomass from cell debris and biologically inactive particles. The XTT reaction uses a color change as the readout and has been applied in many filamentous fungi (Meletiadis et al., 2001; Antachopoulos et al., 2007; Moss et al., 2008). We developed an XTT assay to measure the active biomass of F. fujikuroi (Cen et al., 2018).
The established method was then tested and validated under industrial fermentation conditions; using this method, cell growth can be monitored well during fermentation.
Other methods are available for biomass quantification, for instance ELISA and quantitative PCR (qPCR). These methods have very high resolution and are able to detect trace differences in biomass; they can also distinguish the target organism within a complex mixture. As an example, both ELISA and qPCR have been used to quantify the biomass of filamentous fungi in infected plants (Brunner et al., 2012; Song et al., 2014; Feckler et al., 2017). However, when the cell sample is present in large quantity, as in fermentation, the large dilution factor required can introduce significant error into these assays.
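A minimal sketch of how an XTT readout might be converted to active biomass is shown below: fit a linear standard curve from calibration samples of known dry weight, then invert it for unknown fermentation samples. The calibration numbers are invented, and the assumption of linearity over the whole range is a simplification; this is not the calibration reported in Cen et al. (2018).

```python
import numpy as np

# Hypothetical calibration: dry weight (g/L) of reference samples vs. XTT absorbance.
dry_weight = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.05, 0.21, 0.40, 0.78, 1.55])

# Fit a linear standard curve: A = slope * biomass + intercept.
slope, intercept = np.polyfit(dry_weight, absorbance, deg=1)

def biomass_from_xtt(a):
    """Estimate active biomass (g/L) from an XTT absorbance reading."""
    return (a - intercept) / slope

for reading in (0.30, 0.95, 1.40):
    print(f"A = {reading:.2f}  ->  ~{biomass_from_xtt(reading):.2f} g/L active biomass")
```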
Medium Composition
Gibberellin fermentation has a long history dating back to the 1950s (Darken et al., 1959; Rodrigues et al., 2012), and fermentation technologies have been refined over decades to support the gibberellin industry. Optimization of fermentation conditions started in the early 1950s (Borrow et al., 1955). Darken et al. (1959) tested different carbon sources for the first time and concluded that addition of slowly utilized carbon sources may increase GA production, and that slow feeding of glucose during fermentation also positively affected GA production. Carbon catabolite repression has been known for a long time: addition of a large quantity of glucose inhibits GA production, although little is known about the molecular basis of this phenomenon. For reasons of cost efficiency, the currently used industrial fermentation media are mostly composed of a large quantity of starch. Plant oils have also been used successfully for gibberellin fermentations (Gancheva et al., 1984; Gokdere and Ates, 2014). The addition of plant oils was interpreted as a way to circumvent carbon catabolite repression (Tudzynski, 1999), although this has not been experimentally verified. The addition of plant oils might also function to balance the nutritional needs of the fungus and relieve the metabolic burden of alternative biosynthesis pathways. Moreover, as a phytopathogen, F. fujikuroi plausibly secretes GAs to infect plants when nutrients are scarce in nature, so more complex carbon sources may resemble the natural system better. Nitrogen inhibition of gibberellin production has also been known for many years (Borrow et al., 1964). Kuhr et al. (1961) investigated the influence of different nitrogen sources on GA production; complex organic nitrogen sources, such as peanut meal, soybean meal, and yeast extract, are favorable for GA production. The molecular basis of nitrogen inhibition has been well studied during the past 10 years (Wagner et al., 2010, 2013; Michielse et al., 2014; Tudzynski, 2014). Other fermentation parameters, including growth temperature, the kinetics of nutrient metabolism, and the impact of some other nutrients, were also studied many years ago (Borrow et al., 1964; Bruckner et al., 1991). Plant extracts appear to be favorable, as they have frequently been reported to promote GA synthesis. Sucrose, corn steep liquor, glycerol, soybean pulp, arachis flour, etc. are frequently chosen as key components of fermentation media to promote GA production, especially in the modern gibberellin industry (Cen et al., 2018). These fermentation factors, such as plant extracts and temperature (25-28 °C), essentially simulate the natural environment in which F. fujikuroi secretes GAs to invade the plant in order to survive and propagate. To date, modifications of fermentation conditions are still ongoing, with the purpose of increasing productivity, reducing cost, and ensuring compatibility with downstream processing.
Fermentation Types
Different types of fermentation have been established for GA production. Currently, the most widely used industrial fermentation is submerged fermentation (SMF). However, SMF usually requires high energy consumption, suffers from poor aeration, is frequently contaminated, and produces a large amount of waste water. Solid-state fermentation (SSF) is the most frequently tested fermentation other than SMF. The GA yield is much higher in SSF than in the other types of fermentation; reported yields have reached more than 5 mg/g of support after 7 days of fermentation (Corona et al., 2005; Rodrigues et al., 2009; Satpute et al., 2010). The reason could be that SSF best mimics the growth conditions of this microorganism in nature and is able to overcome all the previously mentioned shortcomings of SMF. In addition, the increased GA yield might be largely due to the increased aeration: the supplied oxygen might be consumed during GA anabolism, as monooxygenases play very important roles in GA synthesis (Tudzynski, 1999, 2005; Albermann et al., 2013), or the enhanced mitochondrial respiration may benefit cell growth in general and indirectly improve GA synthesis. In an SSF system, the selection of support/substrate materials is crucial, as they may provide nutrients, serve as a support, and induce product synthesis. Different substrate materials have been tested for GA production, including wheat bran (Agosin et al., 1997; Bandelier et al., 1997), coffee husk (Machado et al., 2002, 2004), citric pulp (Rodrigues et al., 2009; de Oliveira et al., 2017), pigeon pea pods, corncobs, and sorghum straw (Satpute et al., 2010). The use of these plant-derived materials/wastes reduced production costs while still resulting in high GA productivity. A variety of the SSF experiments tested for GA3 production are listed in Table 2.
Water activity reflects the active part of the moisture content and is usually considered to correlate with microbial growth. Water activity has been found to have a significant impact on GA fermentation in SSF; usually, it needs to reach 0.99 or higher for optimal cell growth and efficient GA production (Gelmi et al., 2002; Corona et al., 2005). However, unlike SMF, these SSF technologies are all non-conventional setups and have so far only been tested at laboratory scale. Scale-up tests and further processing steps, such as extraction and purification technologies, remain to be investigated before SSF can be applied in industry. Oliveira et al. developed a semi-solid-state fermentation (SSSF) system using submerged citric pulp particles. They compared the SMF, SSF, and SSSF systems using similar citric-pulp-based media in both bubble column reactors (BCRs) and Erlenmeyer flask reactors. In general, the GA productivity in BCRs was lower than in the Erlenmeyer flasks, while the SSSF system outperformed the SMF system in yield, indicating that an SSSF system could possibly replace the SMF system while keeping the current industrial facilities and production processes.
STRAIN IMPROVEMENT FOR GA PRODUCTION
F. fujikuroi is able to produce a variety of valuable secondary metabolites; at least 10 different types of metabolites are known to be produced by this microorganism. Forty-seven putative gene clusters for different secondary metabolites have since been revealed and well documented (Janevska and Tudzynski, 2018). Among all these secondary metabolites, the GAs are the iconic products: they were discovered first, are widely produced in industry, are extensively applied in farming, and have the best-studied genetic basis of metabolism. The excellent GA yield is the trademark of F. fujikuroi. Besides optimization of the fermentation processes, strain improvement is also crucial for industrial application. In industry, researchers have traditionally engaged in random mutagenesis and screening for improved production strains. These methods usually employ extreme physical conditions or toxic chemical reagents and require an efficient screening protocol and a large amount of labor. An advantage over knockout mutations is that these methods may generate unpredictable mutations in the genes in question rather than simply turning them off, which is probably the main reason why they are favored and widely practiced.
The GA synthesis pathway (see Figure 4) has been completely uncovered for about two decades (MacMillan, 1997; Tudzynski and Holter, 1998; Tudzynski, 1999; Hedden and Thomas, 2012). Many efforts have been made to improve GA productivity by altering the metabolic pathway. Wiemann et al. deleted the ppt1 gene, which is involved in the post-translational modification of some key enzymes of non-gibberellin metabolic pathways. The mutant in the C1995 strain background showed an over twofold increase in GA yield, indicating that eliminating the synthesis of other metabolites might reduce the metabolic burden and enhance metabolic flux toward GAs. However, this phenotype could not be reproduced in another ppt1 mutant in the IMI 58289 strain background. Albermann et al. (2013) overexpressed ggs2, the first gene of the gibberellin-specific pathway (see Figure 4), which resulted in a 50% increase in productivity. Interestingly, the increased productivity was due to increased production of GA4/GA7, while GA3 remained at its original yield. In addition, they overexpressed a truncated HmgR protein with the N-terminus deleted; the N-terminus of HmgR corresponds to the regulatory domain involved in feedback inhibition of the protein. The constructed strain showed a 2.5-fold increase in GA yield (Albermann et al., 2013). Compared with GA3, GA4 and GA7 may perform better when applied to certain plants (Kim and Miller, 2009; Curry, 2012; Qian et al., 2018). Tudzynski et al. deleted the P450-3 gene, and the resulting mutant could no longer produce GA1 and GA3, while production of GA4 and GA7 was significantly increased. Shi et al. (2019) also deleted the P450-3 gene and obtained a strain with a 4.6-fold increase in GA4 and GA7 production. Based on this strain, they additionally overexpressed the cps/ks gene and the truncated hmgR gene, and the resulting strain showed an approximately eightfold increase in the production of GA4/GA7 mixtures (Shi et al., 2019).
To date, genetic engineering efforts have mostly been based on the GA metabolic pathway. For engineering a gibberellin-specific producer, genes related to pathogenicity and to other secondary metabolites may be redundant and only add metabolic burden. Besides this, there is great potential in revealing the upstream regulatory network of gibberellin metabolism. Some unknown global regulators, such as nutrient sensing and signaling pathways, might also be critical for GA synthesis, and some of these regulators can certainly become future targets for genetic engineering toward a superior GA producer.
CONCLUSION
F. fujikuroi has been recognized as a prevalent plant pathogen for around a century, and as an industrial GA producer for over 50 years. Research on its virulence, drug resistance, host-pathogen interaction, metabolic pathways, signal transduction, fermentation, and strain improvement has been carried out for many years. However, there are still many unresolved puzzles in the biology of F. fujikuroi. The slow progress of biological research is largely due to the sluggish development of efficient molecular tools and technologies. In recent years, with increased research interest in this species, the toolkit for F. fujikuroi has been expanding rapidly.
The optimized protoplast preparation and DNA transformation protocols, the established CRISPR-Cas system, and the increased choice of promoters and selection markers can now satisfy the basic needs of genetic manipulation. Challenges are also emerging. Firstly, it might be worthwhile to knock out the NHEJ machinery to increase the efficiency of transformant screening. HR efficiency is relatively low and varies among different isolates of this species; in classic genome engineering, homologous flanking arms of 500-1500 bp are usually required for homology-directed DNA integration, and even so the editing efficiency is not ideal. Microhomology-mediated DNA integration using 35-60 bp homologous flanking arms has been achieved in many filamentous fungi with a CRISPR-Cas system, so optimization of the current CRISPR-Cas system is necessary to enhance HR efficiency for more efficient genome editing. It will also be interesting to construct a CRISPR-Cas system that can eliminate the CRISPR-Cas elements to allow continuous genetic manipulation. Secondly, investigation of large DNA fragment deletion is missing. A large number of gene clusters for different secondary metabolites, secreted proteins, and virulence genes are harbored in the F. fujikuroi genome, and it would be interesting to develop a genome engineering tool to efficiently alter large DNA fragments. Thirdly, RNAi should be tested in this microorganism to complement the current genetic engineering methods, while development of episomal plasmids could be another valuable direction for the field.
Many methods are available for identification of Fusarium species, such as morphological identification, PCR methods, and MALDI-TOF MS. Among them, morphological identification is the least precise, while MALDI-TOF MS is the most accurate and efficient now that a fingerprint database of the FFSC has been established; however, MALDI-TOF MS requires dedicated instrumentation and is therefore not yet very popular. PCR methods are frequently used and are also precise, taking several conserved genes as PCR targets, although the amplified DNA fragments should always be sequenced for confirmation. In vivo plant infection assays have been tested extensively with F. fujikuroi, mostly taking rice seedlings as the hosts; these assays can be completed efficiently within 2 weeks and yield different virulence parameters. Different fluorescent proteins (GFP, RFP, YFP) have been used to tag F. fujikuroi proteins.
The BiFC assay has been tested once to analyze protein-protein interactions; further assessment of this technology is necessary, as some essential controls were missing. FRET (fluorescence resonance energy transfer) is another important tool for analyzing protein-protein interactions, but it is presently missing from F. fujikuroi research.
Due to the economic importance of gibberellins, the fermentation technologies and strain improvement studies are relatively well developed. However, the literature on medium optimization is mostly from many years ago, and the current industrial SMF production conditions need to be updated. Other technologies, such as the SSF systems, have mostly been tested at the laboratory level; further studies on scale-up testing, system optimization, and adaptation to post-fermentation processing remain to be carried out. The reported strain improvement work has mostly been based on engineering the GA metabolic pathway, and we lack knowledge about the upstream regulation/signal-transduction network. It will also be interesting to engineer a GA production strain in which the redundant pathogenesis genes and the gene clusters of other secondary metabolites are deleted.
AUTHOR CONTRIBUTIONS
Y-KC created the idea and wrote the first draft. J-GL participated in writing the section on CRISPR-Cas. Y-LW helped to correct Table 2. Y-LW and J-YW participated in writing the section on fermentation technologies. Z-QL and Y-GZ finalized the draft.
Deep Bayesian Estimation for Dynamic Treatment Regimes with a Long Follow-up Time
Causal effect estimation for dynamic treatment regimes (DTRs) contributes to sequential decision making. However, censoring and time-dependent confounding under DTRs are challenging, as the amount of observational data declines over time due to a shrinking sample size while the feature dimension increases over time. Long-term follow-up compounds these challenges. Another challenge is the highly complex relationships between confounders, treatments, and outcomes, which cause the traditional and commonly used linear methods to fail. We combine outcome regression models with treatment models for high-dimensional features, using the uncensored subjects that are small in sample size, and we fit deep Bayesian models as the outcome regression models to reveal the complex relationships between confounders, treatments, and outcomes. In addition, the developed deep Bayesian models can model uncertainty and output the prediction variance, which is essential for safety-aware applications such as self-driving cars and medical treatment design. Experimental results on medical simulations of HIV treatment show the ability of the proposed method to obtain stable and accurate dynamic causal effect estimation from observational data, especially with long-term follow-up. Our technique provides practical guidance for sequential decision making and policy making.
I. INTRODUCTION
Many real-world situations require a series of decisions, so a rule (also referred to as a dynamic treatment regime (DTR), plan, or policy) is needed to assist decision making at each step. For example, a doctor usually needs to design a series of treatments in order to cure patients. The treatment often varies over time, and the treatment assignment at each time point depends on the patient's history, which includes the patient's past treatments, the observed static or dynamic covariates that describe the patient's characteristics, and the past measured outcomes of the patient's disease. Estimating the causal effect of specific dynamic treatment regimes over time (also known as dynamic causal effect estimation) benefits decision makers facing sequential decision-making problems. For example, estimating the effect of a series of treatments given by a doctor can clearly guide the actual selection of correct treatments for patients. Although the gold standard for dynamic causal effect estimation is randomized controlled experiments, it is often either unethical, technically impossible, or too costly to implement [1]. For example, for patients diagnosed with the human immunodeficiency virus
(HIV), it is unethical to pause their treatment for the sake of a controlled experiment. Hence, causal effect estimation is often conducted using uncontrolled observational data, which typically includes observed confounders, treatments, and outcomes.
The challenges associated with estimating the causal effect of DTRs are censoring and time-dependent confounding. For example, patients may drop out of treatment over time, for instance due to death, and the outcome values of these patients are then unknown (censoring). Furthermore, the features and outcomes of earlier time steps also affect the current treatments and outcomes, which increases the feature dimension at the current time point (time-dependent confounding). A longer follow-up makes these challenges even more difficult. Another challenge is the highly complex relationships between confounders, treatments, and outcomes, which cause the traditional and commonly used linear methods to fail.
A motivating example is an HIV treatment research study focused on children's growth outcomes, measured by height-for-age z-scores (HAZ). The subjects of the study are HIV-positive children who were diagnosed with AIDS before the commencement of the study. The treatment is antiretroviral therapy (ART), which is a dynamic binary treatment, and the study evaluates the effect of different treatment regimes on HAZ. The research took a couple of months to observe the outcomes at a time point of interest. A child may be censored during the study, that is, they may cease to follow the study during the treatment process. Censoring leads to a reduction in the number of subjects at each time point, since not all the children may remain in the study after several months. ART may also bring drug-related toxicities, so a judgement needs to be made as to whether to use ART based on the measurements of a child's cluster of differentiation 4 count (CD4 count), CD4%, and weight-for-age z-score (WAZ). These measurements also serve as disease progression measures for the child. The CD4 count, CD4%, and WAZ are affected by prior treatments and also influence later treatments (ART initiation) and outcomes (HAZ). Thus, CD4 count, CD4%, and WAZ are time-dependent confounders.
Existing methods can be roughly categorised into three groups. The first group fits treatment models for both the treatment and censoring mechanisms (these mechanisms model the treatment assignment process and the missing data mechanism); an example is the inverse probability of treatment weighting (IPTW) [2]. However, the cumulative probabilities used to form the inverse weights may be very small due to the censoring problem, which leads to a near positivity violation (this violation makes it difficult to trust the estimates provided by these methods). The second group fits outcome regression models (models that estimate the relationships between the outcome and the other variables); an example is the sequential g-formula (seq-gformula) [3]. However, misspecified outcome regression models may introduce bias into the seq-gformula. The last group is a doubly robust method (LTMLE) [4], [5], which combines the iterated outcome regressions and the inverse probability weights estimated by treatment models. However, it suffers from the censoring problem because the amount of observational data shrinks over time, which in turn decreases the modeling performance, especially with long-term follow-up.
In this paper, we propose a two-step model that combines the iterated outcome regressions and the inverse probability weights, in which all subjects that remain uncensored up to the time point of interest are used. Hence our model improves the target parameter using potentially more subjects than LTMLE, which improves the target parameter using only the uncensored subjects following the treatment regime of interest. For this reason, it is problematic for common models to be fit on the limited number of subjects following the treatment regime of interest in a long-term follow-up study. Because our model uses a larger number of subjects in the estimation improvement, it may achieve better performance and stability. To capture the complex relationships between confounders, treatments, and outcomes in DTRs, we apply a deep Bayesian method to the outcome regression models. The deep Bayesian method has a powerful capability to reveal complex hidden relationships from a small sample of subjects. The experiments show that our method achieves better performance and stability than other popular methods.
The three contributions of our study are summarised as follows:
• Our method is able to use all the uncensored data samples, which cannot be fully used by other methods such as LTMLE, so our method is able to achieve a more stable performance.
• A deep Bayesian method is applied to the iterated outcome regression models. A deep Bayesian method has a strong ability to capture the complex hidden relationships between the confounders, the treatments, and the outcomes in a small-scale dataset with high dimensions.
• The deep Bayesian method is able to capture the uncertainty of the causal prediction for every subject. Such uncertainty modeling is essential for safety-aware applications, like self-driving cars and medical treatment design.
The remainder of this paper is organized as follows. Section II discusses the related work. Section III describes the problem setting and a deep Bayesian method. We introduce our deep Bayesian estimation for dynamic causal effect in Section IV. Section V describes the experiments with a detailed analysis of our method and its performance compared with other methods. Finally, Section VI concludes our study and discusses future work.
II. RELATED WORK
The majority of previous work addresses learning the causal structure [6]-[11] or conducting causal inference from observational data in the static setting (a single time point) [12]-[17]. These methods cannot be directly applied to dynamic causal effect estimation [18], [19]. A few methods focus on the counterfactual prediction of outcomes for future treatments [18], [20]-[22]. Counterfactual prediction methods aim to estimate the causal effect of the next future treatment, which is different from our problem setting.
Another body of work focuses on selecting optimal DTRs [21], [23]. G-estimation [24], [25] has been proposed for optimal DTRs in the statistical and biomedical literature; it builds a parametric or semi-parametric model to estimate the expected outcome. Two common machine learning approaches, Q-learning [26] and A-learning [27], are applied to estimate DTRs. Q-learning is based on regression models for the outcomes given the patient information and is implemented via a recursive fitting procedure. A-learning builds regret functions that measure the loss incurred by not following the optimal DTR. The Q-functions of Q-learning and the regret functions of A-learning can easily fit poorly on high-dimensional data.
Finally, we discuss the methods proposed to estimate the dynamic causal effect. The methods proposed to evaluate DTRs from observational data include the inverse probability of treatment weighting (IPTW) [2], the marginal structural model (MSM) [19], [28], the sequential g-formula (seq-gformula) [3], and the doubly robust method LTMLE [4], [5]. IPTW estimates the dynamic causal effect using data from a pseudo-population created by inverse probability weighting; misspecified parametric treatment models lead to inaccurate estimation. MSM fits a model that combines information from many treatment regimes to estimate the dynamic causal effect, but it may suffer from bias introduced by the regression model for the potential outcome as well as bias introduced by the treatment models. The sequential g-formula uses iterated conditional expectations to estimate the average causal effect, so its performance relies on the accuracy of the outcome regression models. LTMLE calculates the cumulative inverse probabilities with the treatment models on the uncensored subjects who follow the treatment regime of interest; it then fits the outcome regression models and uses the calculated cumulative inverse probabilities to improve the target variable. Unstable weights from the treatment models may affect the accuracy of LTMLE.
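To make the iterated conditional expectation idea concrete, the following is a schematic sketch of a sequential g-formula-style computation on simulated toy data: starting from the final outcome, regress backwards through time, each time predicting under the treatment regime of interest ("always treat" here) and passing the prediction to the previous step. The data-generating process, the use of plain linear regression, and the absence of censoring are all simplifications for illustration; the method proposed in this paper replaces these regressions with deep Bayesian models and additionally handles censoring.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 500, 3  # subjects, time points

# Simulated toy data: confounder L_k, binary treatment T_k, final outcome Y.
L = rng.normal(size=(n, K))
T = (rng.random((n, K)) < 0.5).astype(float)
Y = 1.0 + (L * 0.5 + T * 0.8).sum(axis=1) + rng.normal(scale=0.2, size=n)

def fit_linear(X, y):
    """Ordinary least squares; returns a prediction function."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

regime = np.ones(K)  # regime of interest: "always treat"
Q = Y.copy()
for k in reversed(range(K)):
    hist = np.column_stack([L[:, :k + 1], T[:, :k + 1]])  # history up to time k
    model = fit_linear(hist, Q)
    hist_d = hist.copy()
    hist_d[:, -1] = regime[k]   # set T_k (last column) to the regime value
    Q = model(hist_d)           # iterated conditional expectation
print("estimated mean outcome under 'always treat':", round(float(Q.mean()), 3))
```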
III. PRELIMINARY KNOWLEDGE
This section briefly introduces the problem setting using HIV treatment as an example and the deep kernel learning method.
A. Dynamic Causal Effects
Observational data consist of information about the subjects; in the HIV treatment study, the patients are the subjects. For each subject, time-dependent confounders X_k, treatment T_k, and outcome Y_k are observed at each time point k. In the HIV treatment example, the treatment indicator is whether ART is taken; the CD4 variables are time-dependent confounders, since they are affected by previous treatments and influence later treatment assignments and outcomes; and HAZ is the outcome. The static confounders V represent each subject's specific static features, such as the patient's age. We use L_k to denote the union of the static confounders V and the time-dependent confounders X_k at time k.
The outcome values are unknown (censored) for some subjects who pass away before follow-up. The censoring variable is represented as C k . If a subject does not follow the study, the subject will not attend the following study (if C i = 1, then C j = 1, for any time j > i). If a subject follows the study at a time point, then the subject has followed the study in all previous time points (if C i = 0, then C j = 0, for any time j < i). Usually, a few subjects are censored at each time point [4], and fewer subjects would continue to follow the study.
An example of censoring is given in Table I. At the beginning, all subjects take part in the study. After the first treatment assignment, some subjects stop following the study, and their features are "missing" from then on. As the study continues, an increasing number of subjects fail to follow the study. Thus, fewer subjects are available at later time points, which becomes a problem in a long follow-up study.
To summarise, the observational data consist of the confounders, treatments, censoring indicators, and outcomes recorded at each time point for every subject. All subjects are uncensored at baseline, i.e. $C_0 = 0$, and all subjects are untreated before the study, which is represented as $T_{-1} = 0$. The vector $\bar{T}_k = (T_0, T_1, \ldots, T_{k-1}, T_k)$ represents the treatments up to time k. Other symbols with an overbar, such as $\bar{Y}_k$ and $\bar{L}_k$, have similar meanings. The history of covariates at time k is $H_k = (\bar{T}_{k-1}, \bar{L}_k, \bar{C}_k, \bar{Y}_k)$. For simplicity, the subscripts of variables are omitted unless explicitly needed. Let $Y_{\bar{d}}$ be the potential outcome under the treatment rule of interest $\bar{d}$, and let $L_{\bar{d}}$ be the potential covariates under $\bar{d}$. The potential outcome $Y_{\bar{d}}$ is either the factual outcome or the counterfactual value of the factual outcome.
A treatment regime, also referred to as a strategy, plan, policy, or protocol, is a rule that assigns treatment values at each time k of the follow-up. A treatment regime is static if it does not depend on the past history $h_k$. For example, the treatment regime "always treat" is represented by $\bar{d}_k = (1, 1, \ldots, 1)$, and the treatment regime "never treat" is represented by $\bar{d}_k = (0, 0, \ldots, 0)$. Both regimes assign a constant treatment value, so the assignments do not depend on the past history. A dynamic treatment regime depends on the past history $h_k$: the treatment assignment at time point k is $d_k = f(h_k)$, and a dynamic treatment regime is a treatment rule $\bar{d}_k = (d_0, \ldots, d_k)$. The outcome at time point k+1 is affected by the history $h_k$, the treatment $t_k$, and the observed confounders $l_{k+1}$, and can accordingly be written as a function of these quantities. The causal graph [29] for dynamic treatments is illustrated in Fig. 1.
(Fig. 1: The causal graphical model of time-dependent confounding for the time points k and k+1. H is the history of past treatments, confounders, and outcomes, T is the treatment, L is the observed confounders, and Y is the outcome.)
The nodes represent the observed variables. Arrows connect the observed quantities, pointing from the variables that are causes to the variables that are affected by those causes. The treatment at time k is influenced by the observed history $H_k$. The confounders $L_{k+1}$ and the outcome $Y_{k+1}$ at time k+1 are influenced by the history $H_k$. The confounders $L_{k+1}$ also influence the outcome $Y_{k+1}$.
If all subjects are uncensored, the dynamic causal effect of interest is the counterfactual mean outcome $E[Y_{\bar{d}_K}]$. To adjust for the bias introduced by censoring in the data, the treatment assignment and remaining uncensored are considered as a joint treatment [19]. The goal is then to estimate the counterfactual mean outcome $E[Y_{\bar{d}_K, \bar{c}_{K+1}=0}]$, i.e. the mean outcome at time K+1 that would have been observed if all subjects had received the intended treatment regime $\bar{d}$ and all subjects had been followed up. The identifiability conditions for the dynamic causal effect need to hold with the joint treatment $(T_m, C_{m+1})$ at every time m, where m = 0, 1, ..., K. Note that dynamic causal effect estimation from observational data is possible only under certain causal assumptions [19]. The common causal assumptions are consistency, positivity, and sequential ignorability. The consistency assumption means that the potential variable equals the observed variable if the actual treatment provided matches the regime.
The positivity assumption means that the subjects have a positive probability of continuing to receive treatments according to the treatment regime.
The symbol ⊥⊥ represents statistical independence. The sequential ignorability assumption means that all confounders are measured in the data, so that the potential outcomes are independent of the treatment assignment given the observed history.
The causal assumptions consistency and sequential ignorability cannot be statistically verified [19]. A well-defined treatment regime is a pre-requisite for the consistency assumption. The sequential ignorability assumption requires that the researchers' expert knowledge is correct.
(Table I: an example of censoring; "-" represents a missing value.)
Based on these assumptions, the counterfactual mean outcome can be estimated from the observational data by integrating the conditional outcome mean over the distributions of the time-dependent confounders, with all subjects remaining uncensored at time K+1. When k = 0, $f(l_0)$ refers to the marginal distribution of $l_0$. After integration with respect to L, the iterated conditional expectation estimator [3] is obtained by applying the iterated conditional expectation rule (Eq. (2)).
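As an illustration only (not the authors' implementation), the iterated conditional expectation estimator can be sketched as a backward loop of outcome regressions in which the treatments of the regime of interest are plugged in before each prediction; the wide-format column names and the regime helper below are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def sequential_g_formula(df, K, regime):
    """Iterated conditional expectation (sequential g-formula) sketch.

    df     : wide DataFrame with assumed columns 'L0'..f'L{K}', 'T0'..f'T{K}',
             'C0'..f'C{K+1}' (1 = censored; C0 is 0 for everyone) and 'Y'.
    regime : function(k, rows) -> 0/1 treatments d_k for the given rows.
    """
    Q = df["Y"].copy()                       # Q_{K+1} starts as the observed outcome
    for k in range(K, -1, -1):
        covars = [f"L{j}" for j in range(k + 1)] + [f"T{j}" for j in range(k + 1)]
        fit_rows = df[f"C{k + 1}"] == 0      # uncensored through time k+1
        model = LinearRegression().fit(df.loc[fit_rows, covars], Q[fit_rows])

        pred_rows = df[f"C{k}"] == 0         # predict for subjects uncensored at k
        X = df.loc[pred_rows, covars].copy()
        for j in range(k + 1):               # plug in the regime's treatments d_0..d_k
            X[f"T{j}"] = regime(j, df.loc[pred_rows])
        Q = pd.Series(np.nan, index=df.index)
        Q[pred_rows] = model.predict(X)
    return Q.mean()                          # everyone is uncensored at baseline
```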
B. Deep Kernel Learning
Deep kernel learning (DKL) [30] models are Gaussian process (GP) [31] models that use kernels parameterized by deep neural networks. The definition and the properties of GPs are introduced first. Then, the kernels parameterized by deep neural networks follow.
A Gaussian process can be used to describe a distribution over functions with a continuous domain. The formal definition [31] of GPs is as follows, Definition 1 (Gaussian Process): A Gaussian process is a set of random variables, and any finite number of these random variables have a joint Gaussian distribution.
A GP is fully specified by its mean function and covariance function, $m(x) = E[f(x)]$ and $k(x, x') = E[(f(x) - m(x))(f(x') - m(x'))]$, where $x$ and $x'$ are two input vectors and $x$ denotes the features of the input. Consider observations with additive independent and identically distributed Gaussian noise, that is $y = f(x) + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2)$. The observations can be represented as $\{(x_i, y_i) \mid i = 1, \ldots, n\}$. The GP model for these observations is $f(x) \sim \mathcal{GP}(m(x), k(x, x'))$. Let the features to evaluate be denoted by $X_*$ and the corresponding outputs by $f_*$. The predictive distribution of the GP, i.e. the conditional distribution $f_* \mid X_*, X, y$, is Gaussian,
$f_* \mid X_*, X, y \sim N(\bar{f}_*, \mathrm{cov}(f_*))$, where
$\bar{f}_* = K(X_*, X)\,[K(X, X) + \sigma^2 I]^{-1} y$ and
$\mathrm{cov}(f_*) = K(X_*, X_*) - K(X_*, X)\,[K(X, X) + \sigma^2 I]^{-1} K(X, X_*)$.
The log-likelihood function is used to derive the maximum likelihood estimator. The log likelihood of the outcome $y$ is
$\log p(y \mid \gamma) = -\tfrac{1}{2} y^{\top} (K_\gamma + \sigma^2 I)^{-1} y - \tfrac{1}{2} \log |K_\gamma + \sigma^2 I| - \tfrac{n}{2} \log 2\pi$,   (7)
where $K_\gamma$ denotes the kernel matrix $K(X, X)$ given the parameters $\gamma$.
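For concreteness, the standard GP regression quantities above (predictive mean, predictive covariance, and log marginal likelihood) can be computed as in the following minimal numpy sketch; the RBF base kernel, its hyperparameters, and the noise level are illustrative choices rather than values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential base kernel k(x, x')."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X, y, X_star, noise=0.1):
    """Posterior mean/covariance and log marginal likelihood of a zero-mean GP."""
    n = len(X)
    K = rbf_kernel(X, X) + noise * np.eye(n)            # K_gamma + sigma^2 I
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y)) # (K + sigma^2 I)^{-1} y
    K_s = rbf_kernel(X_star, X)
    mean = K_s @ alpha                                  # predictive mean
    v = np.linalg.solve(L, K_s.T)
    cov = rbf_kernel(X_star, X_star) - v.T @ v          # predictive covariance
    log_lik = (-0.5 * y @ alpha
               - np.log(np.diag(L)).sum()
               - 0.5 * n * np.log(2 * np.pi))           # log p(y | gamma)
    return mean, cov, log_lik

# toy usage on synthetic one-dimensional data
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(20)
mu, Sigma, ll = gp_predict(X, y, np.linspace(0, 1, 50)[:, None])
```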
DKL transforms the inputs of a base kernel with a deep neural network. That is, given a base kernel $k(x_i, x_j \mid \theta)$ with hyperparameters $\theta$, the inputs $x$ are transformed through a nonlinear mapping $g(x, w)$. The mapping $g(x, w)$ is given by a deep neural network parameterized by weights $w$.
In order to obtain scalability, DKL uses the KISS-GP covariance matrix as the kernel matrix, $K_\gamma \approx M K^{deep}_{U,U} M^{\top}$, where $K^{deep}_{U,U}$ is the deep covariance matrix learned by Eq. (8), evaluated over m latent inducing points $U = [u_i]_{i=1,\ldots,m}$, and $M$ is a sparse matrix of interpolation weights containing 4 non-zero entries per row for local cubic interpolation.
The technique used to train DKL is briefly described as follows. DKL jointly trains the deep kernel hyperparameters $\{w, \theta\}$, that is, the weights of the neural network and the parameters of the base kernel. Training maximises the log marginal likelihood of the GP, and the chain rule is used to compute the derivatives of the log marginal likelihood with respect to $\{w, \theta\}$. Inference then follows the same process as for a standard GP.
Most deep learning models cannot represent their own uncertainty and usually perform poorly on small samples. Compared with most deep learning models, DKL may achieve good performance with small datasets and can capture the uncertainty of its predictions.
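The following sketch shows how such a deep kernel model could be assembled with the gpytorch package (which the experiments in Section V-B use); the feature extractor architecture, grid size, and training loop are placeholders rather than necessarily the authors' exact configuration, and train_x, train_y are assumed to be existing torch tensors.

```python
import torch
import gpytorch

class FeatureExtractor(torch.nn.Sequential):
    """Fully connected extractor; the layer sizes are placeholders."""
    def __init__(self, in_dim):
        super().__init__(
            torch.nn.Linear(in_dim, 1000), torch.nn.ReLU(),
            torch.nn.Linear(1000, 500), torch.nn.ReLU(),
            torch.nn.Linear(500, 50), torch.nn.ReLU(),
            torch.nn.Linear(50, 2),        # 2-d output so a small grid kernel suffices
        )

class DKLRegression(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = FeatureExtractor(train_x.size(-1))
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.GridInterpolationKernel(
            gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()),
            grid_size=100, num_dims=2)

    def forward(self, x):
        z = self.feature_extractor(x)
        # scale the extracted features to [0, 1] before the grid kernel
        z = (z - z.min(0).values) / (z.max(0).values - z.min(0).values + 1e-6)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLRegression(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()
likelihood.train()
for _ in range(5):                          # joint training of {w, theta}
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```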
IV. OUR PROPOSED MODEL
This section begins with an outline of our proposed method, followed by a description of the methods applied in the treatment models and the outcome regression models. The key notations used throughout this paper are summarised in Table II (for example, $Y_{\bar{d}}$ denotes the potential outcome under the treatment regime $\bar{d}$, $Y_{\bar{d},\bar{c}}$ the potential outcome under the treatment regime $\bar{d}$ and censoring history $\bar{c}$, and Pr() the probability function). Firstly, we aim to estimate the causal effect by using information from both the outcome regression models and the treatment models with as many subjects as possible. We fit the treatment models on the uncensored subjects, whose number is usually larger than that of the uncensored subjects who follow the treatment regime of interest (the subjects used by LTMLE); models tend to achieve better performance when fitted on more observations. Secondly, we develop a deep Bayesian method for the outcome regression models, considering the small-size but high-dimensional data. The deep Bayesian method is believed to be superior for modeling this kind of data.
As in most approaches that deal with censoring [3], [4], [32], we consider the treatment and remaining uncensored as a joint treatment. Suppose we are interested in the dynamic treatment regime $\bar{d}$; our target is to estimate the counterfactual mean outcome $E[Y_{\bar{d}_K, \bar{c}_{K+1}=0}]$. In what follows, we describe how our method can be implemented for a general situation with a binary treatment and a continuous outcome.
Since the treatment and remaining uncensored are handled as a joint treatment, treatment models relating to both the treatment and the censoring mechanism need to be estimated. The treatment model relating to the treatment mechanism at time point m is $\Pr(T_m = 1 \mid \bar{T}_{m-1}, \bar{L}_m, C_m = 0)$, the probability that an uncensored subject with an (m-1)-time-point treatment history is treated at time m. The treatment model relating to the censoring mechanism at time point m is $\Pr(C_{m+1} = 0 \mid \bar{T}_m, \bar{L}_m, C_m = 0)$, the probability that a subject will still follow the study at time point m+1. For a dynamic treatment and time-dependent covariates, the inverse probability weights need to be calculated at each time point; the inverse treatment probabilities and the inverse censoring probabilities are calculated separately. The cumulative product of inverse treatment probabilities is obtained by multiplying the inverses of the estimated treatment probabilities over the first m time points, and the cumulative product of inverse modified treatment probabilities is obtained analogously from the modified treatment probabilities of the subjects who have followed the causal study for the first m time points. Similarly, the cumulative product of inverse censoring probabilities and the cumulative product of inverse modified censoring probabilities are formed from the estimated censoring probabilities. Multiplying the treatment and censoring parts gives the cumulative product of inverse treatment and censoring probabilities, $W_m$, and the cumulative product of inverse modified treatment and modified censoring probabilities, $W^{\bar{d}_m}_m$. The weights $W_m$ and $W^{\bar{d}_m}_m$ are applied in the outcome regression models of our proposed method.
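As a rough sketch of how the cumulative inverse probability weights $W_m$ on the uncensored subjects could be computed with logistic regression treatment and censoring models (the column layout 'L0', 'T0', 'C0', ... is an assumption, both outcome classes are assumed to occur at every time point, and the modified weights under the regime of interest are omitted for brevity):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def cumulative_ip_weights(df, m_max):
    """Cumulative inverse treatment-and-censoring probability weights W_m.

    Assumes the wide layout 'L0'.., 'T0'.., 'C0'.. with C0 = 0 for everyone;
    the weights are only meaningful for subjects still uncensored at m_max.
    """
    w = pd.Series(1.0, index=df.index)
    for m in range(m_max + 1):
        hist = [f"L{j}" for j in range(m + 1)] + [f"T{j}" for j in range(m)]
        at_risk = df[f"C{m}"] == 0

        # treatment mechanism: Pr(T_m = 1 | history, uncensored)
        t_model = LogisticRegression(max_iter=1000).fit(
            df.loc[at_risk, hist], df.loc[at_risk, f"T{m}"])
        p_t1 = pd.Series(np.nan, index=df.index)
        p_t1[at_risk] = t_model.predict_proba(df.loc[at_risk, hist])[:, 1]
        p_obs_t = pd.Series(np.where(df[f"T{m}"] == 1, p_t1, 1 - p_t1), index=df.index)

        # censoring mechanism: Pr(C_{m+1} = 0 | history, T_m, uncensored)
        c_model = LogisticRegression(max_iter=1000).fit(
            df.loc[at_risk, hist + [f"T{m}"]], df.loc[at_risk, f"C{m + 1}"])
        p_unc = pd.Series(np.nan, index=df.index)
        p_unc[at_risk] = c_model.predict_proba(df.loc[at_risk, hist + [f"T{m}"]])[:, 0]

        w = w / (p_obs_t * p_unc)            # accumulate the inverse probabilities
    return w
```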
Another component of our proposed model is the outcome regression models. We fit a DKL model for the outcome regression models (Fig. 2), and we fit models for the treatment models relating to both the treatment and censoring mechanism (Eq. (10) and (11)). The whole procedure of our algorithm is summarised in Algorithm 1.
(Table III: The baseline methods. IPTW and MSM only apply treatment models relating to both the treatment and censoring mechanism; the sequential g-formula only applies the outcome regression models using the iterated conditional expectation estimator; both LTMLE and our proposed method combine treatment models and outcome regression models to obtain a dynamic causal effect.)
We use a logistic regression model on the assumption of a linear additive form of covariates to estimate time-varying inverse probability weights. Then, we apply DKL [30] for the outcome regression models. We name our two-step (TS) model with DKL for the outcome regression models, TS-DKL.
V. EXPERIMENTS
We design a series of experiments to evaluate the effectiveness of the proposed model. First, we introduce data simulations for dynamic treatment regimes. Then, we present the experimental setting and baseline methods, followed by an analysis of the experimental results. The experiments include the property analyses of the proposed model and the evaluation of the performance of the proposed model on simulated data.
A. Data simulation
Since ground truth is rarely known in real non-experimental observational data, we use the simulation data from LTMLE [4] with a slight revision to fit our problem setting. We run ten simulations of the data generation process. We simulate a binary treatment indicator, a censoring indicator, three static covariates, three time-dependent covariates, and an outcome variable over twelve time points using the causal structural equations in Eq. (18), implemented with the R package simcausal [33].
We simulate the static covariates $(V_1, V_2, V_3)$, which refer to region, sex, and age, respectively. The time-dependent covariates $(L^1_k, L^2_k, L^3_k)$ refer to CD4 count, CD4%, and WAZ at time k, respectively. We simulate a binary treatment indicator $T_k$, referring to whether ART was taken at time k or not; a censoring indicator $C_k$, describing whether the patient was censored (failed to follow the study) at time k; and a continuous outcome $Y_k$, which refers to HAZ at time k. Binary variables are simulated from a Bernoulli (B) distribution. We use the uniform (U) distribution, the normal (N) distribution, and the truncated normal distribution, denoted $N_{[a,b]}$ where a and b are the truncation levels, to simulate continuous variables. When values are simulated from truncated normal distributions, values smaller than a are replaced by a random draw from a $U(a_1, a_2)$ distribution, and values greater than b are replaced by a draw from a $U(b_1, b_2)$ distribution. The values of $(a_1, a_2, b_1, b_2)$ are (0, 50, 5000, 10000) for $L^1$, (0.03, 0.09, 0.7, 0.8) for $L^2$, and (−10, 3, 3, 10) for both $L^3$ and Y. The notation $\bar{D}$ denotes the data observed before the current time point. All subjects are untreated before the study, represented by $T_{-1} = 0$, and all subjects are uncensored at the baseline k = 0.
The goal is to estimate the mean HAZ at time K+1 for the subjects. We consider four treatment regimes (Eq. (19)): two are static and two are dynamic. The first and fourth treatment regimes are static. The first treatment regime, "always treat", means all uncensored subjects are treated at each time point during the study. The last treatment regime, "never treat", means no uncensored subject is treated at any time point during the study. The second and third treatment regimes, "750s" and "350s", mean that uncensored subjects receive treatment until their CD4 reaches the corresponding threshold. Our proposed method is designed for the dynamic treatment regimes "750s" and "350s".
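For illustration, the four regimes of Eq. (19) can be expressed as simple functions of the observed history; the CD4 column name and the reading of "treat while CD4 is below the threshold" are assumptions on our part.

```python
def always_treat(k, rows):
    """Static regime: treat every uncensored subject at every time point."""
    return 1

def never_treat(k, rows):
    """Static regime: treat no one."""
    return 0

def cd4_threshold_regime(threshold):
    """Dynamic regime: treat while the current CD4 count (assumed column f'CD4_{k}')
    is below the threshold."""
    def regime(k, rows):
        return (rows[f"CD4_{k}"] < threshold).astype(int)
    return regime

regime_750s = cd4_threshold_regime(750)   # "750s"
regime_350s = cd4_threshold_regime(350)   # "350s"
```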
B. Experimental setting
The experimental setting is described briefly in this section. The target is to evaluate the average dynamic causal effect, under a treatment regime of interest, from the observational data. The evaluation metric for estimation is the mean absolute error (MAE) between the estimated average causal effect and the ground truth.
We use the Python package gpytorch [34] to implement the DKL model [30], [35]. We use a fully connected network that has the architecture of three hidden layers with 1000, 500, and 50 units. The activation function is ReLU. The optimization method is implemented in Adam [36] with a learning rate of 0.01. We train the deep kernel learning model for five iterations, and the DKL model uses a GridInterpolationKernel (SKI) with an RBF base kernel. A neural network feature extractor is used to pre-process data, and the output features are scaled between 0 and 1.
C. Baseline Methods
We present the baseline methods which are the inverse probability of treatment weighting (IPTW) [2], marginal structural model (MSM) [19], [28], sequential g-formula (Seq) [3], and longitudinal targeted maximum likelihood estimation (LTMLE) [4], [5]. The comparative methods and publication references are listed in Table III. IPTW and MSM only apply treatment models, sequential g-formula only applies outcome regression models, and LTMLE and our proposed method apply both the treatment models and outcome regression models. We describe the general implementation procedure for these comparative methods as follows.
Given a fixed regime $\bar{d}$, $\bar{g}_k(\bar{L}_{k+1}, \bar{Y}_k)$ is defined in Eq. (20) as the cumulative probability, up to time k, of receiving the treatments specified by $\bar{d}$ and remaining uncensored, given the observed history. Now we describe the general procedure of LTMLE [4], [5]. Before the outcome regression models with respect to the target variable are fitted to the observational data, we calculate the estimated probabilities $\bar{g}_{n,k}(\bar{L}_{k+1}, \bar{Y}_k)$ for each subject that follows the given treatment regime $\bar{d}$. The following steps are then carried out. Set $\bar{Q}_{n,K+1} = Y$ (the continuous outcome should be rescaled to [0, 1] using the true bounds and truncated to (a, 1-a), e.g. a = 0.0005). Then, for k = K, ..., 1:
1) Fit a model for $E[\bar{Q}_{n,k+1} \mid C_{k+1} = 0, \bar{T}_k, \bar{L}_{k+1}]$. The model is fitted on all subjects that are uncensored until time k+1.
2) Plug in $\bar{T}_k = \bar{d}_k$ based on the treatment regime $\bar{d}_k$ and predict the conditional outcome $\bar{Q}_{n,k}$ with the regression model from step 1 for all subjects with $C_k = 0$.
3) • Plug in $\bar{T}_k = \bar{d}_k$ based on the treatment regime $\bar{d}_k$ and obtain $\bar{Q}_k$ with the regression model from step 1 for all subjects with $C_{k+1} = 0$.
• Construct the "clever covariate" $H_k = I(\bar{T}_k = \bar{d}_k, C_{k+1} = 0)/\bar{g}_{n,k}(\bar{L}_{k+1}, \bar{Y}_k)$, where I(·) is an indicator function for a logical statement.
• Run a no-intercept logistic regression with $\bar{Q}_{n,k+1}$ as the outcome, $\mathrm{logit}(\bar{Q}_k)$ as the offset, and the clever covariate as the only covariate. The model is fitted on all subjects that are uncensored until time k+1 and that followed the treatment regime $\bar{T}_k = \bar{d}_k$. Let $\hat{\epsilon}_k$ be the estimated coefficient of $H_k$.
• Obtain the new predicted value of $\bar{Q}_{n,k}$ for all subjects with $C_k = 0$ by
$\bar{Q}_{n,k} = \mathrm{expit}\big(\mathrm{logit}(\bar{Q}_{n,k}) + \hat{\epsilon}_k / \bar{g}_{n,k}(\bar{L}_{k+1}, \bar{Y}_k)\big)$.   (22)
4) An estimate of the dynamic causal effect is obtained by taking the mean of $\bar{Q}_{n,1}$ over all subjects; the mean is transformed back to the original scale for the continuous outcome.
The sequential g-formula (the sequential g-computation estimator or the iterated conditional expectation estimator) corresponds to steps 1, 2, and 4 of LTMLE.
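The targeting step of LTMLE (the no-intercept logistic regression with an offset, leading to Eq. (22)) could be sketched as follows with statsmodels; the array names and the restriction mask are assumptions, and the outcome is assumed to have already been rescaled to (0, 1).

```python
import numpy as np
import statsmodels.api as sm

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

def targeting_step(Q_next, Q_k, g_bar, followed):
    """One LTMLE fluctuation: update Q_k through the clever covariate H = 1/g_bar.

    Q_next  : Q_{n,k+1} from the previous iteration, already rescaled to (0, 1)
    Q_k     : initial predictions Q_{n,k} from the outcome regression, in (0, 1)
    g_bar   : cumulative treatment-and-censoring probabilities g_{n,k}
    followed: boolean mask of subjects uncensored at k+1 who followed the regime
    """
    H = (1.0 / g_bar[followed]).reshape(-1, 1)      # clever covariate, only column
    glm = sm.GLM(Q_next[followed], H,               # no intercept column is added
                 family=sm.families.Binomial(),
                 offset=logit(Q_k[followed]))
    eps = glm.fit().params[0]                       # estimated fluctuation coefficient
    return expit(logit(Q_k) + eps / g_bar)          # Eq. (22)
```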
IPTW uses the time-varying inverse probability weights to create pseudo-populations in order to estimate the dynamic causal effect. The IPTW estimator for the dynamic causal effect at time k can be obtained from $E[Y \cdot I(C_{k+1} = 0, \bar{T}_k = \bar{d}_k)/\bar{g}_{n,k}(\bar{L}_{k+1}, \bar{Y}_k)]$, where I(·) is the indicator function.
To estimate the dynamic causal effect with the marginal structural model (MSM), we fit the ordinary linear regression model $E[Y_{\bar{d}_k, \bar{c}_{k+1}=0}] = \theta_0 + \theta_1 \, \mathrm{cum}(\bar{d}_k)$ to estimate the parameters of the MSM, where $\mathrm{cum}(\bar{d}_k)$ represents the cumulative treatment over k time points. The MSM is fitted in the pseudo-population created by the inverse probability weights, that is, we use weighted least squares with inverse probability weights.
To improve the performance of the outcome regression models, Super Learner [37], [38], an ensemble machine learning method, is often used. We test three different sets of learners for the outcome regression models applied in the sequential g-formula, LTMLE, and our model. Learner 1 consists of ordinary linear regression models; learner 2 contains ordinary linear regression models and random regression forests; and learner 3 adds a multi-layer perceptron (MLP) with one hidden layer of 128 units to the algorithms used in learner 2. We denote the sequential g-formula with learners 1, 2, and 3 as Seq-L1, Seq-L2, and Seq-L3, respectively, and label LTMLE and our proposed method with the three learners analogously. In order to show the effectiveness of DKL, we also implement our proposed method with a fully connected neural network (TS-NN) as the outcome regression model. The network has three hidden layers with 128, 64, and 32 units, ReLU activation, and dropout regularization. We train this outcome regression model for 5 epochs with a dropout rate of 0.9; the optimization method is Adam [36] with a learning rate of 0.01.
D. Propensity score truncation for positivity problems
To calculate the dynamic causal effect, all baseline methods except the sequential g-formula require treatment models relating to both the treatment and the censoring mechanism. The number of treatment models for the K+1 time points is 2K+2: K+1 models for the treatment assignment mechanism and K+1 models for the censoring mechanism. In long-term follow-up causal studies, these models must regress on high-dimensional features using small samples at each time point. The estimated treatment and uncensored probabilities may therefore be small, and the cumulative probabilities can be close to zero, which may lead to near-positivity violations.
We deal with small treatment and uncensored probabilities by truncating them at fixed bounds: both the estimated treatment and uncensored probabilities are truncated at a lower bound of 0.1 and an upper bound of 0.9. The truncation improves the stability of the methods. In addition, we normalize the weights by their sum, i.e. the new weight is $W_i' = W_i / \sum_j W_j$. For all methods that apply treatment models, we use standard logistic regression for the treatment models relating to both the treatment and the censoring mechanism.
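In code, the truncation and normalisation amount to a clip and a rescaling, for example:

```python
import numpy as np

def truncate_probabilities(prob, lower=0.1, upper=0.9):
    """Truncate estimated treatment/censoring probabilities to [0.1, 0.9]."""
    return np.clip(prob, lower, upper)

def normalise_weights(weights):
    """Normalise inverse probability weights so that they sum to one."""
    weights = np.asarray(weights, dtype=float)
    return weights / weights.sum()
```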
E. Evaluation of a long follow-up time and declining sample size
Fewer subjects follow the study as the causal study progresses. If a subject is uncensored at time k, then the subject has followed all the studies in previous time points. If a subject is censored at time k, then the variables related to the subject are unobserved from time k + 1. The estimation of dynamic causal effects is necessarily restricted to uncensored subjects. Another challenge is that time-dependent confounders are affected by prior treatments and influence future treatments and outcomes, hence it is necessary for the time-dependent confounders to be adjusted. In long-term follow-up causal studies, high-dimensional adjustment confounders exist. All baseline methods are required to fit the data of uncensored subjects at each time point. As a result, the estimators need to handle a reduced sample size and high-dimensional confounders at each time point.
Table IV shows that fewer subjects are available over time while more features must be regressed on. There are 485 uncensored subjects and 66 adjustment confounders at the twelfth time point, that is K = 11, so the number of uncensored subjects at the time point of interest is quite small. The number of uncensored subjects declines as the study progresses (as the time point increases), while the number of features to regress on at each time point increases; regression estimators therefore need to fit data with more features but fewer available observations. Table IV also shows that the number of uncensored subjects following the dynamic treatment regime of interest ("750s" or "350s") is even smaller than the total number of uncensored subjects.
The limited number of uncensored subjects who follow the treatment regime of interest makes it challenging to estimate the dynamic causal effect. Some estimators (such as LTMLE and IPTW) must use the information from this limited set of subjects, and may therefore give unstable estimates in a long follow-up study. We investigate the trend for the two dynamic treatment regimes, "750s" and "350s", to show the declining, limited sample size of subjects who followed these regimes in Fig. 3 (in the figure, the label "censored" represents the number of subjects who are uncensored, "750s" the number of uncensored subjects who receive the dynamic treatment regime "750s", and "350s" the uncensored subjects who follow the treatment regime "350s"). The number of uncensored subjects decreases as the study progresses, and the same holds for the uncensored subjects following the treatment regime of interest; Fig. 3 also shows that the number of uncensored subjects is much larger than the number of uncensored subjects who followed a specific treatment regime. It is usually easier for a model to achieve stable performance when fitted on more observations than on fewer. The treatment models of our proposed method fit the data of all uncensored subjects, whereas the treatment models of LTMLE fit only the data of uncensored subjects who followed the dynamic treatment regime. This makes it possible for our proposed model to achieve a more stable estimation.
F. Experimental analysis of our proposed method
We analyse the influence of the number of subjects applied in the treatment models. The treatment models of LTMLE and IPTW need to fit the limited number of uncensored subjects who followed the treatment regime of interest at each time point k. Unlike LTMLE and IPTW, the treatment models of our model fit on all subjects who are uncensored at each time k. As more subjects are available in the treatment models, the estimate of our proposed method for a dynamic causal effect may be more stable. DKL is applied in the outcome regression model at each time point. DKL is suitable for the data which has high features but a small number of observations. We show the MAE estimates for the two dynamic treatment regimes, "750s" and "350s", in Fig. 4 and Fig. 5. The MAE for the treatment regime, "750s", varies between 0 and 2 for different time points. The MAE for the treatment regime, "350s", varies between 0 and about 2 for different time points. It is interesting that bias reduces as the time point increases. Next, we focus on the experimental analysis of the long follow-up, that is the time point 11 and 12. As illustrated in Fig. 4 and Fig. 5, the estimates of the dynamic causal effect with the given dynamic treatment regimes ("750s" and "350s") at different time points (the time point 11 and 12 ) do not vary significantly. The figures show our model provides stable estimates for the longterm follow-up causal study. It is also noted that the bias of the dynamic causal effect at the time points 11 and 12 is small. These results indicate that our proposed method is able to achieve a stable and effective estimate in a long-term followup study.
Next, we analyse DKL's ability to capture the variance of the causal prediction for a subject. For every predicted value, DKL also produces its variance. Ten observations were selected to show the predicted values $\hat{y}_{\bar{d}_K, \bar{c}_{K+1}=0}$ and the variances of these predicted values. Fig. 6 shows the predicted values and a band of five standard deviations; the variances are presented as the shaded area in the figure. It can be seen that the variance is small for each observation. DKL describes the uncertainty of predictions through these small variances.
Table IV: The number of features to regress on increases, but the sample size of the uncensored subjects declines, over time. Because of time-dependent confounding, more features need to be regressed on but fewer subjects are available in a longer study. The label "uncensored" refers to the uncensored subjects; the labels "750s" and "350s" refer to the uncensored subjects who have followed the corresponding dynamic treatment regime.
Time Point:  1    2    3    4    5    6    7    8    9   10   11   12
Dimension:  11   16   21   26   31   36   41   46   51   56   61   66
Uncensored: 883  821  777  736  699  664  632  598  571  541  514  485
750s:       480  429  397  374  353  333  315  297  283  266  250  235
350s:       564  434  386  355  330  308  287  266  250  232  215  199
G. Evaluation of dynamic causal effect with synthetic data
The results of the experiment using synthetic data are analysed in this section. We aim to estimate the dynamic causal effect E[Yd K ,c K+1 =0 ]. The real dynamic causal effects are directly obtainable within the simulated data. The performance is evaluated using the MAE between the estimated causal effect and the real causal effect, for all ten simulations. Our proposed method (TS-DKL) is designed for the dynamic treatment regimes, "750s" and "350s". The comparative methods and publication references are listed in Table III. The outcome regression models of sequential g-formula, LTMLE and our proposed method are equipped with three different sets of learners, detailed in Section V-C. A neural network is only implemented in the outcome regression model of our proposed method (TS-NN). TS-NN is used to evaluate the effectiveness of the DKL. We summarise the experimental results of the estimates for the dynamic treatment regimes, "750s" and "350s", in time points 11 and 12 in Table V. Table V shows that no models can obtain the best results for all estimates for the two dynamic treatment regimes at the two different time points. We can observe that TS-DKL obtains the best results (three treatments out of the four), followed by IPTW (one treatment out of the four), in the two time points. TS-DKL achieves the second best experimental performance for the dynamic treatment regime "750s" at the time point 12, where the experimental performance of IPTW is the best. Compared with TS-NN, TS-DKL achieves better experimental performance in all dynamic treatment regimes, hence we conclude that applying DKL in the outcome regression models is effective. The reason for this is the outcome regression model needs to fit the data, which has high dimensions but is small in terms of sample size. A deep neural network usually works poorly with a limited number of observations. This is the main reason that DKL is applied in the outcome regression models. Compared with the three TS-L* models, TS-L1, TS-L2, and TS-L3, we believe DKL has a strong ability to capture the complex treatment, time-varying confounders, and outcome relationships. The reason for this is that the only difference between TS-DKL and the three TS-L* models is that the different methods are applied in the outcome regression models. As the performance of TS-DKL is the best one in both two time points, TS-DKL has a strong ability to capture the causal effects of dynamic treatment regimes in a long-term follow-up causal study.
We now analyse the experimental results of our proposed method for the static treatment regimes. TS-DKL is designed to estimate the dynamic causal effects of dynamic treatment regimes, so it is unclear whether it can achieve comparable performance for static treatment regimes. Table VI shows the experimental performance of the baseline methods for the static treatment regimes "always treat" and "never treat". For time point 11, TS-DKL obtains the best performance under "always treat" and Seq-L3 under "never treat". For time point 12, IPTW obtains the best performance under "always treat" and TS-L2 under "never treat". It is noted that LTMLE-L3 and TS-DKL perform relatively better than the other baseline methods, excluding IPTW, under "always treat" at time point 12. Thus, the performance of TS-DKL under the treatment regime "always treat" is relatively better than that of the other methods. All the baseline methods obtain poor estimates under the treatment regime "never treat". One potential reason is the limited number of uncensored subjects who followed "never treat", which reduces the models' ability to capture the data-generating process; the limited number of subjects particularly affects IPTW, MSM, and LTMLE.
We analyse the bias introduced by the outcome regression models, which are applied in the sequential g-formula, LTMLE, and our proposed method. The empirical standard deviations (ESD) of the estimates of the dynamic causal effect at time points 11 and 12 are shown in Table VII. TS-DKL performs stably under both dynamic treatment regimes at the two time points. Seq-L2 performs stably under the dynamic treatment regime "350s" but has a relatively larger ESD under "750s". TS-L1 and TS-L2 perform stably under the dynamic treatment regime "350s" at time point 12. We conclude that DKL is an effective method to use in the outcome regression models. By comparing TS-NN and TS-DKL, it can be seen that Bayesian deep learning introduces robustness for the small-sized data, whereas the neural network applied in TS-NN performs unstably on small samples.
VI. CONCLUSION AND FUTURE WORK
We proposed a deep Bayesian estimation for dynamic treatment regimes with long-term follow-up. Our two-step method combines the outcome regression models with treatment models and improves the target quantity using the information of inverse probability weights on uncensored subjects. Deep kernel learning is applied in the outcome regression models to capture the complex relationships between confounders, treatments, and outcomes. The experiments have verified that our method generally achieves both good performance and stability.
Currently, our approach lacks an analysis of asymptotic properties. The work on the analytic estimation of standard errors and confidence intervals is yet to be undertaken. In future work, we will provide theoretical guarantees for standard errors and confidence intervals.
|
2021-09-27T01:15:55.766Z
|
2021-09-20T00:00:00.000
|
{
"year": 2021,
"sha1": "95832395cfc53e9f865f11533ce8e2c3a88b9ca3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "95832395cfc53e9f865f11533ce8e2c3a88b9ca3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
139896971
|
pes2o/s2orc
|
v3-fos-license
|
Experimental and numerical investigation of cooling performance of a cold storage in a pharmaceutical industry
This paper describes a study of the cooling performance of a cold storage room in a pharmaceutical plant. The aim was to investigate the temperature distribution inside the storage, an important performance factor for pharmaceutical cold storage. The cold storage used is a ceiling type loaded with liquid-filled bottles. The temperature distribution and the cooling performance of the storage were studied using experimental measurements and numerical simulation. Several bottle and rack arrangements were examined to show their impact on the temperature distribution and cooling performance of the cold storage. Surface temperatures of the bottles were measured for the different bottle and rack arrangements, with the cold storage temperature set to 5 °C. In the numerical simulation, a transient three-dimensional Computational Fluid Dynamics (CFD) model was developed to investigate the cooling performance and the temperature distribution inside a bottle. At this stage, the results showed that a rack arrangement parallel to the cold-room fan with a V-shaped bottle layout gave good cooling performance (1480 minutes to reach a stable temperature at the set point) and an optimum temperature distribution (temperature difference of 0.58 °C). For the temperature distribution inside the bottle, the mean deviations between simulation and experiment at two measurement points (X = 0.1 m, Y = 0.3 m, Z = 0 m and X = -0.1 m, Y = 0.3 m, Z = 0 m) were 5.5% and 7.6%.
Introduction
In the pharmaceutical industry, and especially in biotechnology, product quality must be assured from raw materials to final products. Every process step, including storage, should therefore be monitored to guarantee that the product remains of good quality.
A cold storage room is used to store products that require low temperatures to guarantee their quality. To achieve the desired temperature, a refrigeration system is employed, which can lead to high energy consumption. The cold storage should therefore be designed as efficiently as possible while taking its performance into account. Most cold storage rooms are designed from experience, which may increase operating costs. Moreover, a uniform airflow field can also improve the refrigeration quality of the products in the cold storage.
Several studies aimed at improving the performance of cold storage have been carried out, both experimentally and by numerical simulation. Studies modeling the temperature distribution in cold storages have demonstrated the effectiveness of various methods (Nahor et al. 2005). Rajan et al. (2015) experimentally investigated the performance of a cold storage for different stacking arrangements; temperature was the only parameter on which the performance of the cold storage was assessed.
The objective of this study is to investigate the effect of loading arrangements of vaccine bottles to the cooling performance of cold storage by experiment. It also investigates temperature distribution and cooling rate inside the bottle both experimentally and by numerical simulation.
Research Methodology
This research was conducted using two methods: experiments and numerical simulation with Ansys Fluent.
Experimental Setup
Thermocouple sensors were used to measure the air temperature distribution and the cooling rate of the cold storage. The cold storage was designed for a storage temperature of 2-8 °C with a set point of 5 °C. It stores 24 liquid vaccine bottles of 20 litres each. These bottles were arranged in 2 racks and cooled to 5 °C. The temperature distribution was measured with thermocouple sensors placed on the outer surfaces of the bottles.
Bottle & Rack Arrangement
In this research, several bottle and rack arrangements were examined to determine the optimum condition that the cold storage can achieve. The arrangements were designed with consideration of the cold air flow through the bottles and of personnel accessibility.
Numerical Simulation Method
In the numerical method, the cooling process inside the bottle was simulated. The work starts by drawing the geometry of the bottle, shown in figure 6, according to its original dimensions (280 mm in diameter and 520 mm in height). The bottle material is high-density polypropylene (HDPP) and the liquid is water. Inside the bottle, heat conduction is assumed to be the only mode of heat transfer.
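Because the model assumes pure conduction inside the bottle, the cooling of the liquid can be illustrated with a simple one-dimensional radial finite-difference model of a long cylinder whose wall is held at the cold-room temperature; the thermal properties, initial temperature, and boundary condition below are rough assumptions, not the values used in the Fluent model.

```python
import numpy as np

# assumed properties (water) and geometry; not the values used in the Fluent model
alpha = 1.4e-7             # thermal diffusivity, m^2/s
R = 0.14                   # bottle radius, m (280 mm diameter)
T_init, T_air = 25.0, 5.0  # assumed initial liquid and cold-room temperatures, deg C

nr = 50
dr = R / nr
dt = 0.2 * dr ** 2 / alpha               # well below the explicit stability limit
r = np.linspace(0.0, R, nr + 1)
T = np.full(nr + 1, T_init)

def step(T):
    """One explicit step of dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr)."""
    Tn = T.copy()
    d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr ** 2
    dT = (T[2:] - T[:-2]) / (2 * dr)
    Tn[1:-1] = T[1:-1] + alpha * dt * (d2T + dT / r[1:-1])
    Tn[0] = Tn[1]                        # symmetry at the centre line
    Tn[-1] = T_air                       # wall held at the cold-room set point
    return Tn

t = 0.0
while T[0] > T_air + 0.5:                # cool until the centre is near the set point
    T = step(T)
    t += dt
print(f"centre reaches {T[0]:.2f} C after about {t / 3600:.1f} h of pure conduction")
```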
Result and Discussion
The aim was to see the effect of the bottle and rack arrangements on the performance of the cold storage. This section presents the average product temperature distribution and cooling rate for the various bottle and rack arrangements. The graphs of the average bottle surface temperature distribution inside the cold storage for the four different arrangements are plotted as follows.
a. Arrangement I
The average, maximum, and minimum temperatures of the cold storage are plotted for each arrangement. Based on the experimental results, arrangement II has the best cooling performance: it has a more uniform temperature distribution and a shorter cooling time than the other arrangements, because cold air can easily flow over the outer surface of each bottle in this arrangement.
The graphs of distribution temperature around coordinate X = 0 which resulted by numerical simulation method are shown in the following figures.
The liquid close to the bottle wall is colder than the liquid in the center of the bottle. Figure 14 shows that the temperature around the bottle is close to the ambient temperature (5 °C).
A comparison between the simulation and the experimental results is shown in figure 15.
Conclusion
Based on the experimental and numerical simulation results, it can be concluded that:
• The loading arrangement of the bottles influences the cooling performance of the cold storage.
• Numerical simulation can be used to predict heat transfer and to assess the cooling performance of the cold storage.
• The model has an error of about 5.5%-7.6%.
|
2019-04-30T13:08:19.653Z
|
2018-09-01T00:00:00.000
|
{
"year": 2018,
"sha1": "9d3548b8bf2f331212b4e3ecdc3375a0c0825bb7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1090/1/012012",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a3df30b1eb188655d048d0abf895543077d6e8c3",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
157411423
|
pes2o/s2orc
|
v3-fos-license
|
The Effect of Education on the Demographic Dividend
RESEARCH ON the effect of the population age structure on economic growth has beenmainlymotivated by the demographic transition from high to low rates of mortality and fertility that most countries are experiencing as they develop. Previous research was focused on the link between population size and economic growth, but the influential work of Bloom and Williamson (1998) introduced age structure into the analysis, finding that this was an important mechanism by which demographic variables affect economic growth. The concept of the demographic gift, later re-named as the demographic dividend, first appeared in Bloom andWilliamson’s work to refer to the positive effect that the demographic transition can have on economic growth. During this process, the working-age population temporarily grows faster than the rest. Consequently, per capita income can increase as there are fewer economic dependents in the population. Nevertheless, this effect will vanish some years later, when large youth cohorts reach retirement age, leading to an increase in old-age dependency ratios, i.e., population aging. The demographic transition has coincided with a significant educational expansion that occurred in virtually every country during the twentieth century, especially after the 1960s. Certainly, important differences remain between areas, but all world regions show general improvements in education (UNESCO 2011). This means that the empirically observed effects of population age structure on economic growth are probably influenced by improvements in the education level of the population. Two strands of literature analyzing the determinants of economic growth have evolved separately in recent decades. On the one hand, research on the demographic dividend seeks to elucidate the effects of the population age structure on economic growth, but without paying specific attention to changes in educational level, only in the investment in human capital of children (Lee and Mason 2010; Mason, Lee, and Jiang 2016). On the other hand, a longstanding branch of economic research seeks to study the returns to education and the effect of educational attainment on
RESEARCH ON the effect of the population age structure on economic growth has been mainly motivated by the demographic transition from high to low rates of mortality and fertility that most countries are experiencing as they develop. Previous research was focused on the link between population size and economic growth, but the influential work of Bloom and Williamson (1998) introduced age structure into the analysis, finding that this was an important mechanism by which demographic variables affect economic growth. The concept of the demographic gift, later re-named as the demographic dividend, first appeared in Bloom and Williamson's work to refer to the positive effect that the demographic transition can have on economic growth. During this process, the working-age population temporarily grows faster than the rest. Consequently, per capita income can increase as there are fewer economic dependents in the population. Nevertheless, this effect will vanish some years later, when large youth cohorts reach retirement age, leading to an increase in old-age dependency ratios, i.e., population aging.
The demographic transition has coincided with a significant educational expansion that occurred in virtually every country during the twentieth century, especially after the 1960s. Certainly, important differences remain between areas, but all world regions show general improvements in education (UNESCO 2011). This means that the empirically observed effects of population age structure on economic growth are probably influenced by improvements in the education level of the population.
Two strands of literature analyzing the determinants of economic growth have evolved separately in recent decades. On the one hand, research on the demographic dividend seeks to elucidate the effects of the population age structure on economic growth, but without paying specific attention to changes in educational level, only in the investment in human capital of children (Lee and Mason 2010;Mason, Lee, and Jiang 2016). On the other hand, a longstanding branch of economic research seeks to study the returns to education and the effect of educational attainment on economic growth but without special regard to the population age composition (Johnes and Johnes 2004). Papers by Lutz, Crespo-Cuaresma, and Sanderson (2008) and Crespo-Cuaresma, Lutz, and Sanderson (2014) act as a bridge between the two research lines, as the authors try to disentangle the roles of age structure and education in economic growth using panel data. These authors estimate a macroeconomic growth model using a newly available dataset on human capital, containing information about educational attainment distribution by age and sex for more than 100 countries for the period 1980-2005. 1 They conclude that, when correcting for educational expansion, the effect of population age structure on GDP per capita is reduced significantly-that is, the demographic dividend is mainly an education effect. Education and age composition of the population are treated as two separate factors in the regressions, as if the education level of the population was unrelated to the age structure of the same population.
In our study, we link education attainment to the evolution of the population age structure using a different method. We propose an extension of the methodology developed by Mason (2005) and Mason and Lee (2006), in order to decompose the growth in the support ratio-the demographic dividend-into two components: age and education. These authors combined demographic data with the age profiles of consumption and labor income, estimated following the National Transfer Accounts (NTA) method. We follow the same strategy, taking it one step further. We first estimate the NTA age profiles by education level and then adapt the method to incorporate education variability. Second, to illustrate the potentialities of this methodological extension, we perform a simulation exercise for Mexico and Spain, for which we were able to construct the NTA profiles by level of education. 2 The countries represent two distinct contexts in terms of demographic transition and educational achievements of their populations. Our simulation starts in 1970 and projects it into the future. We are thus able to evaluate the impact of population age structure on the support ratio while taking into account that changes in education also influence the level of production and consumption.
Decomposing the demographic dividend by age and education level
Following Mason (2005), the concept of the demographic dividend can be formally derived starting from the following decomposition of per capita income at year t:
Y_t / N_t = (W_t / N_t) · (Y_t / W_t),   (1)
with Y being income, N total population, and W working-age population (hereafter workers). The first term on the right-hand side, the ratio of workers to total population, represents the support ratio (SR). The second term on the right-hand side is output per worker (labor productivity, Pr). Hence, income per capita depends on the support ratio and productivity. Expressing Equation (1) as growth rates (using the operator g), one can state that changes in the support ratio and in productivity growth determine per capita income growth:
g(Y/N) = g(SR) + g(Pr).   (2)
The demographic dividend is captured by the evolution of the support ratio. Using regression analysis, Bloom and Williamson (1998) concluded that the demographic transition contributed to the so-called economic miracle observed in East Asia over the period 1965-1990. Kelley and Schmidt (1996) and Bloom and Canning (2003) also conducted empirical studies using cross-country aggregate data. Mason (2005) and Mason and Lee (2006) derived an alternative estimation process for the evolution of the support ratio, combining demographic and economic information. By using the per capita age profiles of labor income and consumption, they obtained effective consumption (C) and effective production (L) instead of N and W in Equation (1). With c_i and ly_i being the per capita age profiles of consumption and labor income, respectively, C and L can be obtained as follows:
C = Σ_i c_i N_i,   (3)
L = Σ_i ly_i N_i,   (4)
where the summation is over ages i and N_i is the population at age i. In this way, the pure demographic support ratio in Equation (2) is redefined as an economic support ratio (ESR), as it considers not only demographic effects of population age structure, but also economic variables such as labor and consumption patterns. Estimates of the demographic dividend based on the ESR are available for many countries (Mason 2005; Mason and Lee 2006; Oosthuizen 2015; Patxot et al. 2011a; Prskawetz and Sambt 2014; Mejía-Guevara and Partida Bush 2014; Rosero-Bixby 2011). Results show that, for most developed countries, the demographic dividend started around the 1970s and lasted for about three decades, but some differences can be observed depending on the specific demographic and economic characteristics of each country.
ESR, as the ratio of effective production to effective consumption, can also be expressed in terms of growth rates:
g(ESR) = g(L) − g(C).   (5)
To consider the effect of education in the estimation of the demographic dividend, we further break down Equations (3) and (4) by educational group, represented by j:
C = Σ_i Σ_j c_ij N_ij,   (6)
L = Σ_i Σ_j ly_ij N_ij.   (7)
Once the economic profiles have been differentiated by both age and education, we can measure the contribution of each of these two factors to the demographic dividend (estimated as the change in the ESR).
We separately decompose the annual growth rate of production (L) and consumption (C) into an age effect (A) and an education effect (E), applying the method of Das Gupta (1993), adding a rate effect (R), as follows (see Appendix for details 3):
g(L) = A_L + E_L + R_L,   (8)
g(C) = A_C + E_C + R_C.   (9)
Each of the effects is estimated by observing how L (or C) changes annually when the other two factors remain constant. For example, the age effect on labor income (A_L) is obtained by keeping the labor income profile and the population distribution by level of education constant, while only the population age structure varies. By substituting Equations (8)-(9) in Equation (5) and rearranging terms, we obtain the three effects on ESR growth (rate, age, and education), corresponding to production and consumption.
To carry out the decomposition, we derive age profiles of consumption and labor income by education level and apply them to the population over several years. Population data, therefore, also need to be disaggregated by age and education level. We show the decomposition for Mexico and Spain from 1970 to 2100, taking one base year for the economic profiles. This implies that we will capture both education and age effects, but not the rate effect shown in Equations (8) and (9).
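As an illustration of the decomposition (not the authors' code), the economic support ratio can be computed from profiles and populations cross-classified by age and education, and the age and education effects can be isolated by holding one factor at its average level while the other changes; the array shapes below are assumptions, and the formula is a simplified two-factor version of the Das Gupta decomposition with the rate effect held fixed.

```python
import numpy as np

def esr(pop, ly, c):
    """Economic support ratio: effective producers over effective consumers.

    pop, ly, c : arrays of shape (n_ages, n_edu) holding N_ij, ly_ij and c_ij.
    """
    return (pop * ly).sum() / (pop * c).sum()

def age_education_decomposition(pop0, pop1, ly, c):
    """Split ESR growth between two years into an age and an education effect.

    A simplified two-factor version of the Das Gupta decomposition: the
    profiles (the rate effect) are held fixed, and each effect changes one
    factor while the other is held at its average level.
    """
    share0 = pop0 / pop0.sum(axis=1, keepdims=True)   # education shares by age
    share1 = pop1 / pop1.sum(axis=1, keepdims=True)
    age0 = pop0.sum(axis=1, keepdims=True)            # age-group totals
    age1 = pop1.sum(axis=1, keepdims=True)

    total = np.log(esr(pop1, ly, c)) - np.log(esr(pop0, ly, c))
    mean_share = 0.5 * (share0 + share1)
    mean_age = 0.5 * (age0 + age1)
    age_effect = (np.log(esr(age1 * mean_share, ly, c))
                  - np.log(esr(age0 * mean_share, ly, c)))
    edu_effect = (np.log(esr(mean_age * share1, ly, c))
                  - np.log(esr(mean_age * share0, ly, c)))
    return total, age_effect, edu_effect
```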
Population data by level of education
We used population projections by level of education, available from the Wittgenstein Centre for Demography and Global Human Capital (WICD). 4 WICD has produced, for the first time, projections of population by educational level, age, and sex for 195 countries for the period 1970-2100, using exhaustive information and analyses of recent trends in fertility, mortality, migration, and educational level for all areas of the world (Lutz, Butz, and KC 2014;Speringer et al. 2015). They also consider other scenarios in their projections. We use two of them for the sensitivity analysis, the Constant Enrollment Rate (CER) and the Fast Track (FT) scenarios. The CER scenario assumes that enrollment rates remain constant over time in both Mexico and Spain from 2015 onward; therefore, no significant improvements in education level are expected beyond the coming decades. On the other hand, the FT scenario assumes that enrollment rates improve faster than in the central projection.
To observe the evolution of the age structure in both countries, Figure 1 shows the dependency ratios (using data by age and year) obtained from WICD for 1970-2100. First, child dependency has experienced a clear decline in both countries but with different patterns. The demographic transition started later in Mexico than in Spain, but it has been much more pronounced in the former as the initial level of fertility was higher. At the beginning of the century child dependency still exceeded 50 percent, but it will continue to decrease until 2050, after which it will stabilize at around 25 percent. In Spain, child dependency reached its minimum, at slightly above 20 percent, in the early 2000s and is expected to remain at around that level until 2040. After that year, it will increase to 25 percent and will remain there for the rest of the century.
Second, demographic patterns also differ regarding elderly dependency. In Mexico, it will increase especially after 2030 and will continue to grow over the rest of the century. In Spain, it starts to grow earlier and will peak at around 69 percent by 2050, a level that Mexico will never reach during the period. The process is strongly driven by the evolution of the fertility rate, which was 2.2 in Mexico in 2015 (CONAPO 2015), compared with only 1.3 in Spain in 2013 (INE 2015). Projections for Mexico suggest that the fertility rate will remain higher than in Spain, and consequently the increase in its elderly dependency ratio will be slower (UN 2015). Finally, during the first part of the period analyzed-until 2010 in Spain and 2030 in Mexico-the total dependency ratio is mainly driven by the evolution of child dependency. Conversely, elderly dependency will become the main driver of total dependency in the future. Note also that the minimum level of the total dependency ratio expected in Mexico (48 percent in 2030) is slightly higher than in Spain (44 percent in 2005-9).
Figure 2 displays population projections by level of education (percent of adult population in each education level) for the baseline scenario (the medium case), as well as the alternative scenarios (CER and FT) for the period 1970-2090. Mexico and Spain have seen great improvements in their level of education in recent decades, reducing the share of adults with less than primary education and increasing the proportion with higher education. Nevertheless, differences between the two countries remain. According to the OECD (2013), in 2011 Mexico was clearly behind the OECD average in terms of people aged 25-34 who had completed at least upper secondary education (55 percent in Mexico vs. 82 percent in OECD) and who had attained tertiary education (23 percent vs. 39 percent). Those figures were higher in Spain (65 percent aged 25-34 with upper secondary education, 39 percent with tertiary education) (OECD 2014). According to projections by Lutz, Butz, and KC (2014), the differences between the two countries will persist in the future. For example, adults in Spain with less than primary school will make up less than 3 percent of total population in 2035, while this level will occur 20 years later in Mexico. By 2100, 53 percent of Spaniards and 41 percent of Mexicans will have post-secondary education.
When observing the two alternative scenarios, important differences between educational attainment in Mexico and Spain remain. According to the CER scenario, however, educational attainment of the population stops improving after 2050 in both countries. In the FT scenario, proportions with post-secondary education increase faster, and proportions with primary or less than primary education are reduced to very low levels. Nevertheless, by 2090, Spain continues to have a higher proportion of the population with post-secondary education than Mexico.
Age profiles of consumption and labor income by level of education
We briefly describe the procedure to construct economic profiles by age and educational level for Mexico and Spain. We are interested in two profiles: labor income and consumption. The consumption profile will be used to obtain the number of effective consumers (Equation 3), and the labor income profile to estimate the number of effective producers (Equation 4). The difference between labor income and consumption age profiles defines the lifecycle deficit (LCD) in the NTA methodology. The LCD shows how production and consumption vary over the lifecycle. Typically, individuals consume more than they produce at the beginning and the end of their lives, and the opposite occurs for working-age individuals. The length of these three periods, together with the amount of the corresponding deficit (consumption higher than labor income) or surplus (consumption lower than labor income), varies among countries (Lee and Mason 2011).
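To make the accounting concrete, the sketch below shows how the number of effective consumers and effective producers, and hence the support ratio, can be computed once the per capita profiles and the population counts by age and education are available. It is a minimal illustration of the weighted sums referred to above (Equations 3 and 4); the arrays and numbers are synthetic placeholders, not the Mexican or Spanish data.

```python
import numpy as np

def support_ratio(population, labor_income, consumption):
    """Effective support ratio from age-by-education arrays.

    population   : (n_ages, n_edu) persons in year t
    labor_income : (n_ages, n_edu) per capita labor income profile y(a, e)
    consumption  : (n_ages, n_edu) per capita consumption profile c(a, e)
    """
    effective_producers = np.sum(labor_income * population)  # Equation 4
    effective_consumers = np.sum(consumption * population)   # Equation 3
    return effective_producers / effective_consumers

# Synthetic illustration: 101 single-year ages, 4 education levels.
ages, edu = 101, 4
rng = np.random.default_rng(0)
pop = rng.uniform(0.5, 1.0, size=(ages, edu)) * 1e5
c = np.full((ages, edu), 0.7)             # flat adult-like consumption profile
y = np.zeros((ages, edu))
y[20:65, :] = np.linspace(0.5, 1.5, edu)  # stylized working-age income, rising with education
print(f"support ratio: {support_ratio(pop, y, c):.3f}")
```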
We followed the NTA methodology (UN 2013) to construct economic profiles estimated from surveys and official data and then adjusted to aggregate data from National Accounts. The labor income profile represents the sum of earnings and self-employment income profiles in the total population by age and education level, and the consumption profile includes both public and private consumption. We go beyond the standard NTA methodology by differentiating age profiles by education level. We consider four levels of education: less than primary, primary completed, secondary completed, and higher education. 5 Figures 3 and 4 show the per capita age profiles of labor income and consumption by level of education for Mexico and Spain, respectively. To make the profiles comparable, they have been divided by the average (total) labor income for ages 30-49 in each country. Despite some differences, both countries' average economic profiles show the typical age pattern: while consumption remains stable over the adult lifecycle, labor income is clearly concentrated in the middle years of working age (Lee and Ogawa 2011; Tung 2011). Nevertheless, the differentiation of those profiles by educational attainment shows significant new features. First, in both countries labor income profiles display greater differences than consumption profiles; that is, labor income is more unequal by level of education than consumption. In Mexico, the labor income profiles peak at very different age groups depending on education level: at 65-69 for the highest education level and at 35-39 for individuals with less than primary education. Inequality by educational attainment is clearly visible in Mexico, where per capita labor income of those with post-secondary education at ages 30-54 is over 3 times the average labor income at ages 30-49. In Spain, the labor income profile peaks at around ages 50-54 for higher education levels (post-secondary and secondary education), while it peaks at younger ages for lower levels of education. Although inequality in Spain is also clear, it is smaller than in Mexico. On average, labor income of individuals at ages 30-59 with post-secondary education in Mexico is 8.6 times that of individuals with less than primary education, compared to 5.1 times in Spain.
Second, the differences in consumption profiles by level of education are again clearly higher in Mexico. Consumption of highly educated individuals is more than double the average consumption, while consumption of the less educated is half that of average consumption. For Spain, consumption profiles by level of education are much more uniform, and consumption of highly educated individuals is 60 percent higher than consumption of less educated individuals at ages 30-59. The average consumption profile observed in Spain among persons aged 30-49 is 66 percent of the average labor income for ages 30-49, significantly lower than in Mexico, where it is around 90 percent.
The per capita lifecycle deficit (LCD) profiles for both countries are shown in Figure 5. In general, Mexico has higher deficits than Spain because, as seen in previous figures, its consumption profiles are clearly higher than Spain's at every level of education, while labor income profiles are not. In Mexico, only individuals with at least secondary education can generate a surplus (labor income higher than consumption) during their lifecycle. This surplus is much more significant for individuals with postsecondary education and very modest for those with secondary schooling. On the other hand, people with less than secondary education consume more than they produce over the whole lifecycle. In Spain, the picture is slightly more favorable: individuals with primary education already experience a surplus during part of their working-age years.
The role of education in the demographic dividend
We now present the results of our simulation exercise to evaluate the impact of education on the support ratio in the period 1970-2100. We apply the profiles of consumption and labor income to the population projections by age and level of education and observe how the support ratio evolves over time. Figure 6 shows trends in the demographic dividend (defined as the rate of growth of the support ratio) in Mexico and Spain, disentangling the education and age effects. As explained above, the education effect captures the impact on the demographic dividend of changes in population composition by education level, while the age effect estimates the impact of changes in population age structure. In Mexico, the growth of the support ratio peaked in 1985; it then declines steadily, turning negative in 2040 and remaining negative for the rest of the century. In Spain, the growth of the support ratio peaks a decade later, in 1995-99, but declines more quickly, becoming negative by 2030-34. The negative values are clearly larger in Spain than in Mexico, but they last only until 2055, when the growth of the support ratio turns positive again for a period of 20 years.
The estimated positive age effect in both Mexico and Spain will last until 2020 after which it will remain negative for the rest of the century. In Spain, the negative age effect will peak in 2040 (coinciding with the full retirement of the large youth cohorts) and will rise from then on. In Mexico, the negative effect of age is never as important as in Spain because of the different time path of the two countries' demographic transition. While the age effect closely follows the trend of the total dependency ratio, the education effect is positive throughout the period for both Mexico and Spain as the education level of both populations continues to increase. Hence, the growth of the support ratio will remain positive if the positive education effect is higher than the negative age effect.
Although Spain's population is expected to reach higher education levels than Mexico's, it also has a much more negative age effect, which retards the positive effect of education. Thus, while education expansion can partly overcome the negative impact of an increasing dependency ratio on the demographic dividend, the population age structure remains crucial in the evolution of the ESR.
As described earlier, ESR is the ratio of effective production (labor income) to effective consumption in a given economy. That is, total ESR growth is the difference between the growth of effective labor income and effective consumption (Equation 7). Figure 7 shows the growth rate of the ESR (both the age and the education effects) decomposed by the changes in labor income and consumption separately. We represent consumption growth in negative terms (meaning that its impact on the total ESR is negative). Interestingly, education and age effects on consumption growth are very similar, while the education effect on labor income is clearly higher within the entire period. That is, the education effect on labor income seems to be the main factor behind the positive impact of education on the demographic dividend.
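The identity behind this decomposition can be checked numerically: the growth rate of the ESR equals the growth of effective labor income minus the growth of effective consumption. The short snippet below illustrates this with made-up aggregates; it is not an estimate for either country.

```python
import numpy as np

# Hypothetical aggregates of effective labor income (YL) and effective
# consumption (C) in two consecutive periods; the numbers are made up.
YL0, YL1 = 100.0, 103.0
C0, C1 = 100.0, 101.0

g_labor = np.log(YL1 / YL0)              # growth of effective labor income
g_consumption = np.log(C1 / C0)          # growth of effective consumption
g_esr = np.log((YL1 / C1) / (YL0 / C0))  # growth of the effective support ratio

# ESR growth equals labor income growth minus consumption growth (Equation 7).
assert np.isclose(g_esr, g_labor - g_consumption)
print(f"ESR growth {g_esr:.4f} = {g_labor:.4f} - {g_consumption:.4f}")
```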
As mentioned above, the demographic dividend measures the effects of changes in age structure and educational attainment on economic growth. To explore this relationship, Table 1 shows past trends in the demographic dividend decomposed for age and education, together with the annual GDP growth observed, both in per capita and per effective consumer terms. 6 Mexico registered average annual growth in the support ratio of 1.76 percent over the period 1970-2015, as a result of a positive age structure and especially a very favorable education effect. This is observed when comparing the impact of education on labor income growth (2.75 percent), which is clearly higher than the impact on consumption growth (1.59 percent). However, although annual GDP per capita growth was 3.27 percent, GDP per effective consumer grew by only 1.92 percent, owing to the unfavorable relation between labor income and consumption. Most importantly, during the most favorable period for ESR growth (1980-95), support ratio growth was well above GDP growth. This result indicates that Mexico was unable to take full advantage of its favorable demographic and educational circumstances.
In Spain, the demographic dividend (ESR) was also positive (1.53 percent) throughout the period 1970-2015, although the age effect is zero in the last years. This accounted for 73 percent of GDP growth per effective consumer. As in Mexico, the impact of education on the support ratio is mainly explained by the higher effect of education on labor income growth (1.87 percent), compared to consumption growth (0.73 percent). However, during some periods (1980-85, 1990-95, and 2005-15), growth in Spain's economy was clearly less than growth in the demographic dividend, meaning that the opportunities offered by population structure in terms of age and education were missed. Hence, it seems that in the past Spain, and especially Mexico, were unable to fully benefit from their significant demographic dividend. This is particularly worrying given that the demographic dividend will be much lower, and even negative, in the future.
To evaluate the robustness of our results, we perform two sensitivity exercises. First, we evaluate the impact of the education projections by using two alternative scenarios, as described earlier. Second, we evaluate the impact of the economic profiles of consumption and labor income by transposing the profiles estimated for the two countries.
Alternative education projection scenarios
We re-estimate our results for the demographic dividend decomposed with two alternative scenarios of population distribution by level of education, also available in WICD (2015). The CER scenario assumes very little improvement in the educational attainment of both countries, while the FT scenario assumes a faster education expansion than in the central projection. Both alternative scenarios use the same assumptions as the baseline scenario except for the education enrollment rates. However, because demographic components depend on the education level of the population, the two alternative scenarios produce different population age structures. Results obtained with the two alternative education scenarios, together with our baseline estimation, are shown in Figure 8 for Mexico and Figure 9 for Spain. Solid lines refer to the alternative scenarios (CER and FT), dashed lines to the baseline scenario. As expected, in the CER scenario the education effect is clearly lower for both countries, becoming zero around 2040 in both cases and remaining close to that level thereafter. The decline of the education effect means that the demographic dividend (positive ESR) becomes dependent mostly on the age effect and turns negative earlier and much deeper than in the baseline scenario. In the FT projections, on the other hand, the education effect is much more positive during the first half of the projection. After 2060 in Mexico and 2070 in Spain, the education effect is lower than in the baseline scenario, probably because once a majority of the population is already enrolled in school until tertiary education, improvements are necessarily smaller. Therefore, the consequences are positive in the medium term, but turn negative thereafter. In Mexico under the faster education expansion, the demographic dividend remains positive until around 2040, but then the age effect decreases sharply, driving the ESR to very negative values beginning around 2060. In Spain, the negative ESR from 2020 to 2050 almost disappears.
Overall, the results of both scenarios demonstrate that improvements in the educational attainment of the population are crucial in the evolution of the demographic dividend, both through the direct impact of education and through its effect on the demographic components influencing the age effect.
The effect of the economic profile
As we noted earlier, economic profiles of labor income and consumption (and hence of LCD) differ significantly by level of education within each country, but there are also disparities between the two countries. Spain has more favorable profiles in terms of LCD, as its relative consumption profiles are clearly lower than Mexico's for all education levels, while its relative labor income profiles are slightly higher. To evaluate the impact of the economic profiles on the demographic dividend, Figure 10 shows the estimated trend in the demographic dividend in Mexico under the assumption that it has the economic profile of Spain, and vice versa. The solid lines refer to the changed profiles simulation and the dashed lines to the baseline.
The results show that both age and education effects are influenced by economic profiles. In Mexico, more favorable economic profiles imply a considerably higher demographic dividend, which remains positive through 2100. This means that, ceteris paribus, a more favorable per capita lifecycle deficit profile in Mexico would overcome the negative economic effects of population aging. The opposite is observed for Spain. Less favorable LCD profiles-with higher deficits and lower surpluses-than those observed in Mexico would cause the demographic dividend to become negative earlier and remain below zero longer. Hence, the same conclusion is confirmed in both cases: less (more) favorable lifecycle deficit profiles would have negative (positive) consequences for the trend in the demographic dividend, as a combination of effects in both age and education components.
Conclusions
The potential positive effects of a favorable population age structure on economic growth have been investigated in recent decades through the estimation of the demographic dividend. This research was mainly motivated by the demographic transition that most countries are facing as they develop. Initial estimates of the demographic dividend examined the relation between the working-age population and economically dependent individuals, namely the support ratio. In the first stage of the demographic transition the working-age population grows faster than the rest of the population, producing a positive effect on economic growth. The opposite occurs in the second stage, when population aging begins and the support ratio growth becomes negative. However, the first stage of demographic transition coincided with a significant education expansion in most countries. This means that economic growth is influenced not only by changes in age structure but also by improvements in the educational attainment of the population. In this article we disentangled the age and education effects through a decomposition of the demographic dividend.
We focused our analysis on Mexico and Spain, estimating their economic profiles by age and level of education. Our results reveal three key insights. First, the positive age effect in Mexico starts before 1970, peaks around 2000, and turns negative shortly after 2020. In Spain the age effect starts later, in 1980, but ends by 2020 as well. Second, the education effect is clearly higher than the age effect in the past in both countries and remains positive throughout the period observed. Adding the education component to the demographic dividend partly offsets the future negative effect of aging on the support ratio. This implies that education is an important mechanism in reducing the adverse effects of aging, as education expansion delays the start of the negative growth of the support ratio. Nevertheless, it is important to realize that higher educational attainment of the population also implies faster aging in the future, 7 turning the age effect more negative. Third, economic profiles by age and level of education could also have important effects on the demographic dividend. For example, our sensitivity scenarios show that if Mexico had consumption and labor income age profiles similar to those for Spain, the country could completely avoid negative growth of the support ratio. The reason is that, with more favorable economic profiles, educational expansion would be sufficient to offset a less rapid aging of Mexico's population compared to the situation in Spain.
These findings also offer insight into how to approach the demographic transition from a policy point of view. The demographic dividend could be expanded through policies that focus not only on population aging but also on expanding education attainment and increasing the lifecycle deficit surplus. This gives governments more options to overcome the potential negative impact of aging. In developing countries in the first stage of the demographic transition, education policy seems to be the best way to take advantage of, or even extend, the demographic dividend.
1 The dataset was constructed by the International Institute for Applied Systems Analysis at the Vienna Institute of Demography (IIASA-VID).
2 The NTA age profiles for Mexico are built upon Mejía-Guevara (2015), though estimated at the individual rather than the household level of education. Recently, NTA profiles by level of education have been obtained also for Austria (Hammer 2015).
3 Appendix is available at the supporting information tab at wileyonlinelibrary.com/journal/pdr.
4 We used the newest version of the WICD data, including both past data and future projections of population distribution by educational level from 1970 to 2100 (Lutz, Butz, and KC 2014).
5 This methodology is similar to that of Mejía-Guevara (2015), except that we use individual education instead of the education level of the household head. This approach allows us to apply the economic profiles to population projections by age and education. We assign the average level of household consumption to individuals under age 25, given that a considerable proportion of them have not finished their education. Therefore, any educational effect coming from the consumption side of the population under 25 is suppressed.
6 The GDP per effective consumer weights population by the estimated consumption profile in the corresponding country. We use the consumption profile estimated for Mexico and Spain, updated to the corresponding year.
Effect of Microstructure on the High-Cycle Fatigue Behavior of Ti(43-44)Al4Nb1Mo (TNM) Alloys
To investigate the high-cycle fatigue (HCF) behavior of TNM alloys, three different microstructures were designed and obtained by different heat treatments. Staircase tests and fatigue tests in the finite-life region were performed to evaluate the fatigue properties. The fracture surfaces were then analyzed to study the fracture behavior of TNM alloys with different microstructures. Results showed that the TNM alloy with a duplex microstructure possesses the highest fatigue strength and fatigue life, followed by the near lamellar TiAl alloys. HCF failure exhibited cleavage fracture morphologies, and multiple facets were generated in the crack initiation region of the different TNM alloys. Two different crack initiation modes, subsurface crack nucleation and surface origin, were observed. Both crack initiation modes appeared in the near lamellar alloys, while only subsurface crack initiation was observed in the duplex (DP) alloy. This contributes to the high scatter of the S-N data. The HCF failure of TNM alloys was dominated by crack nucleation rather than crack propagation. These findings could provide guidance for optimizing the microstructure and improving the HCF properties of TiAl alloys.
Introduction
Gamma TiAl-based alloys have been successfully applied in the aerospace and automotive industries after decades of development [1][2][3]. In recent years, TiAl alloys containing a large amount of β-stabilizing elements, such as Nb and Mo, have attracted increasing attention due to their excellent elevated-temperature properties [4][5][6][7]. Among them, the TNM alloy, which contains a balanced concentration of Nb and Mo, is recognized as a representative third-generation TiAl alloy [8,9]. It has the potential to be used for last-stage low-pressure turbine blades in advanced geared turbofan engines.
As the most anticipated and important applications of TiAl alloys, turbine blades and exhaust valves in engines generally serve under cyclic loading conditions [10,11]. Hence, it is important to investigate their fracture behavior under high-cycle fatigue (HCF) in order to improve the service performance of TiAl alloys. Most previous research has focused on fatigue crack growth [12][13][14][15]. It has been demonstrated that the low ductility and toughness of TiAl alloys result in a low tolerance to fatigue crack growth and a relatively steep slope of the Paris region [16,17]. Owing to the rapid fatigue crack growth rate, the threshold stress intensity has been used to direct component design and predict the fatigue life of TiAl alloys [18].
Recent studies on fatigue behavior have revealed that crack initiation plays an important role in the fatigue fracture process of high-strength steels [19], titanium alloys [20][21][22] and TiAl alloys [23,24].
Materials and Methods
The TNM alloy investigated in this study was prepared by two vacuum arc remelting steps followed by induction skull melting. The actual composition of the ingot was Ti42.92Al4.01Nb0.99Mo0.18B (at.%), measured by inductively coupled plasma atomic emission spectrometry at the Northwest Institute for Non-Ferrous Metal Research, Xi'an, China. After the composition measurement, ingot breakdown was accomplished through canned forging. The pancake was then hot isostatically pressed (HIPed) at 1280 °C and 140 MPa for 4 h. Two-step heat treatments were applied to the HIPed alloy, see Table 1. The different solution treatments led to different microstructures, including duplex (DP) and near lamellar (NL), as shown in Figure 1. The volume fractions of the constituents were determined through quantitative analysis of SEM micrographs and are listed in Table 1. The DP alloy contains equiaxed γ (TiAl) grains, B2 phase, and lamellar colonies composed of α₂ (Ti₃Al) and γ lamellae. In addition, numerous γ grains and ω particles precipitated from the retained B2 phase during cooling of the TNM alloys. The NL#1 alloy was composed of lamellar colonies, as well as γ grains and B2 phase distributed discontinuously at colony boundaries. The NL#2 alloy was solution-treated at a higher temperature, which resulted in larger lamellar colonies with a massive B2 phase and no γ grains at colony boundaries. Uniaxial tensile fatigue tests were conducted at room temperature to measure the HCF properties of the TiAl alloys. The geometry of the diabolo specimen used for the fatigue tests and the mounted specimen during the HCF test are shown in Figure 2. The curvature in the middle of the specimen guarantees strain concentration and failure in the center. Tensile fatigue tests were performed on a QBG-50 HCF machine (Qianbang, Changchun, China). The stress varied sinusoidally between the maximum and minimum tensile stress with a stress ratio of 0.1 and a frequency of 150-160 Hz. All specimens were mechanically ground to reduce the surface roughness. Additionally, uniaxial tensile tests were carried out at room temperature to assess the tensile properties of the TiAl alloys before the fatigue tests. The tensile tests were performed on an INSTRON 1195 machine (Instron, Canton, OH, USA) with a strain rate of 0.5 mm/min. Cylindrical specimens with a gauge length of 25 mm and a diameter of 5 mm were used in the tensile tests.
The microstructures of the raw material and of the fractured specimens were analyzed with a HITACHI SU3500 scanning electron microscope (SEM, HITACHI, Tokyo, Japan) in back-scattered electron (BSE) mode. The morphology of the fatigue fracture surfaces was characterized using the secondary electron (SE) mode. The samples used for microstructure characterization were prepared by a standard metallographic procedure: the surfaces were mechanically ground and electrochemically polished.
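For clarity, the loading parameters quoted above fully define the stress cycle: with the stress ratio R = σ_min/σ_max = 0.1, the minimum stress, stress amplitude, and mean stress follow directly from the maximum stress. A small illustrative helper (not part of the original test procedure) is shown below, evaluated at 620 MPa, the initial staircase level used later in this work.

```python
def cycle_from_max(sigma_max, R=0.1):
    """Return (sigma_min, stress amplitude, mean stress) for a constant-amplitude cycle."""
    sigma_min = R * sigma_max
    amplitude = (sigma_max - sigma_min) / 2.0
    mean = (sigma_max + sigma_min) / 2.0
    return sigma_min, amplitude, mean

# For a maximum stress of 620 MPa at R = 0.1:
print(cycle_from_max(620.0))  # (62.0, 279.0, 341.0)
```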
Fatigue Strength
The fatigue strengths of the TiAl alloys with different microstructures were estimated by the staircase method [32]. As shown in Figure 3a, the ultimate tensile strengths (UTS) of the DP, NL#1 and NL#2 alloys obtained from RT tensile tests were 820 MPa, 890 MPa and 860 MPa, respectively. During the staircase tests, the maximum stress, σ_m, for the first specimen was determined according to the tensile strength. It has been reported that the ratio between the fatigue strength and UTS of TiAl alloys is about 0.7-0.8 at RT [33]. Thus, the initial stress level in the staircase tests of all TiAl alloys was set to 620 MPa. During the staircase tests, the stress level of each subsequent test depended on the result of the prior test. The expected fatigue life of the TiAl alloys for this study was defined to be 10⁷ cycles. If a specimen failed before reaching 10⁷ cycles, the test was classified as a failure and the subsequent test was conducted at a lower stress; otherwise, it was classified as a pass and the next specimen was tested at a higher stress level. The stress decrement and increment between consecutive stress levels, Δσ, were 20 MPa. The results obtained from the staircase tests are shown in Figure 3b-d. As shown in Figure 3b, the maximum stresses are plotted on the ordinate and the specimen number on the abscissa. The DP specimen at the stress level of 620 MPa passed 10⁷ cycles without failure and was labeled as the first effective specimen. The specimens of both the NL#1 and NL#2 TiAl alloys tested at 620 MPa failed before reaching 10⁷ cycles. Therefore, the staircase tests continued, and the maximum stress was decreased until a specimen passed 10⁷ cycles; the prior failed specimen was labeled as the first effective specimen and plotted in the up-and-down diagram. As shown in the diagrams of Figure 3, the last specimens were tested at the same stress level as the first effective specimens, which successfully satisfied the self-closed condition [34]. The fatigue strength, σ_f, and standard deviation, σ_d, could be calculated by the Dixon-Mood approach [32]:
σ_f = σ_0 + Δσ(A/C ± 1/2), (1)

σ_d = 1.62Δσ(D + 0.029), (2)

where σ_0 is the lowest of the stress levels σ_j, and σ_j denotes the stress levels corresponding to the less frequent event (pass or failure), arranged in ascending order. The pass specimens were selected for the calculation of σ_f, and the plus sign was used in Equation (1) for the DP and NL#1 alloys.
In contrast, the failure specimens were taken into account, and the minus sign was used for the NL#2 alloy. The parameter D was calculated by D = (BC − A²)/C², and the parameters A, B and C were determined by A = Σ_j j·n_j, B = Σ_j j²·n_j and C = Σ_j n_j, where n_j is the number of specimens tested at stress level σ_j in the up-and-down diagram, and j = 0, 1, 2, . . . . The results showed that the fatigue strengths of the DP, NL#1 and NL#2 alloys were 622 MPa, 594 MPa, and 542 MPa, respectively. Additionally, the standard deviations were all 10.6 MPa for the different TiAl alloys. The ratios between the fatigue strength and UTS were 0.76, 0.67 and 0.63, respectively, which is consistent with previous investigations.
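As an illustration of this calculation, a short Python sketch is given below. The staircase counts in the example are invented for demonstration only and are not the counts obtained in this study; only the 20 MPa stress step and the order of magnitude of the stress levels follow the description above.

```python
def dixon_mood(sigma_0, delta_sigma, counts, less_frequent_is_pass=True):
    """Fatigue strength and standard deviation from staircase (up-and-down) data.

    sigma_0              : lowest stress level (MPa) at which the less frequent event occurred
    delta_sigma          : stress step between consecutive levels (MPa)
    counts               : n_j, number of less-frequent-event specimens at level
                           sigma_0 + j * delta_sigma, for j = 0, 1, 2, ...
    less_frequent_is_pass: use the plus sign when the less frequent event is a pass
    """
    C = sum(counts)
    A = sum(j * n for j, n in enumerate(counts))
    B = sum(j * j * n for j, n in enumerate(counts))
    D = (B * C - A * A) / C ** 2
    sign = 0.5 if less_frequent_is_pass else -0.5
    sigma_f = sigma_0 + delta_sigma * (A / C + sign)
    sigma_d = 1.62 * delta_sigma * (D + 0.029)
    return sigma_f, sigma_d

# Invented example: one, three and one run-outs at 600, 620 and 640 MPa.
sf, sd = dixon_mood(sigma_0=600.0, delta_sigma=20.0, counts=[1, 3, 1])
print(f"fatigue strength = {sf:.0f} MPa, standard deviation = {sd:.1f} MPa")
```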
Fatigue Life
A few more fatigue tests were conducted at high stress levels as supplements. The S-N data of the different TiAl alloys were obtained and are shown in Figure 4. The fatigue life, N_f, is plotted on the abscissa on a logarithmic scale. Although the standard deviations of the staircase tests were relatively small, the S-N relations were quite scattered, especially for the NL#1 alloy. The large scatter of the fatigue life data may be correlated with the damage process of TiAl alloys. The specimens of the DP alloy, which failed between 10⁶ and 10⁷ cycles, had similar maximum stresses and defined a fatigue limit of about 620 MPa. In contrast, the S-N data of the NL#1 and NL#2 alloys did not seem to approach such a stress asymptote, which suggests that these alloys would eventually fail once enough cycles were applied. It can be deduced from Figure 4 that the DP alloy has the highest fatigue life, followed by the NL#1 and NL#2 alloys at the same stress level. This confirms the ranking of fatigue strengths presented in Section 3.1. It has to be noted, however, that although the NL#1 alloy possesses a higher fatigue strength than the NL#2 alloy, its fatigue life at the stress level of 600 MPa is lower. This phenomenon may be related to the large scatter of the S-N data. To clarify this point, the fracture surfaces were analyzed in the following sections to study the crack initiation, as well as the effect of microstructure on the fatigue properties.
Crack Initiation
The typical fracture surfaces of the different TNM alloys tested under various loading conditions are shown in Figure 5. Figure 5a-c presents full views of the fracture surfaces of the DP, NL#1 and NL#2 alloys, respectively, while Figure 5d-f shows the corresponding crack initiation sites at higher magnifications. Although the morphology of the crack source is almost the same as that of the crack propagation path for TNM alloys, the crack nucleation sites can still be determined from the low-magnification photographs. It has been illustrated previously that two crack initiation modes, surface crack nucleation and subsurface nucleation, exist during fatigue failure of TiAl alloys [24]. As indicated by black arrows in Figure 5a-c, subsurface crack origins were observed in the DP and NL#1 alloys, while the crack originated at the surface of the NL#2 alloy.
The fatigue fracture surfaces of all three TiAl alloys exhibit typical characteristics of cleavage fracture. As shown in Figure 5d, several facets and numerous river patterns can be observed in the crack initiation region of the DP alloy. The facets are generated by cleavage fracture on specific planes of the γ phase, and their size corresponds to the γ grain size. Since crack propagation in TiAl alloys may also lead to cleavage facets, and the fatigue crack can initiate in a large area composed of multiple facets [35], only the crack initiation region (rather than the exact crack nucleation position) was identified in the SEM micrographs. Facets were also observed in the crack origin region of the NL#1 alloy, see Figure 5e. The sizes of these facets were approximately equal to the colony sizes, which reveals brittle fracture along α₂/γ interfaces. Apart from these facets, broken lamellar edges and river patterns also appeared on the fracture surface of the NL#1 alloy. The morphology of the fatigue fracture surface of the NL#2 alloy in the crack initiation region is shown in Figure 5f. The cleavage facets and broken lamellae were similar to those of the NL#1 alloy, except that some facets were distributed along the sample surface, as indicated by black arrows. Therefore, this sample was classified as showing surface crack initiation.
The surfaces of the fatigue fractured specimens were analyzed, and the results are shown in Figure 6. Hollow and solid circles represent surface and subsurface crack initiation, respectively. As shown in Figure 6, only subsurface crack initiation was observed in the fatigue tests of the DP alloy, while both surface and subsurface initiation were obtained for the NL#1 and NL#2 alloys. It has been reported that the existence of two different crack initiation modes may lead to highly scattered fatigue results and even duality of the S-N curves [26,36], that is, grouping of the fatigue data into two distinct curves. The scatter of the fatigue data of the near lamellar TNM alloys is higher than that of the DP alloy.
It can be attributed to the appearance of surface crack initiation. For NL#1 and NL#2 alloys, it seems that the probability of surface crack nucleation increases with the increasing maximum stress. Nevertheless, no clear and definite correlations between crack initiation mode and fatigue life are observed in this paper.
Effect of Microstructure
The microstructure characteristics and mechanical properties of the TNM alloys with different microstructures are listed in Table 2. It can be seen from the table that the DP alloy has the highest fatigue strength, σ_f, and tensile fracture strain, ε_t, with the NL#1 and NL#2 alloys following in decreasing order. The fatigue strength and fracture strain increase with decreasing colony/grain size. The fracture toughness of TiAl alloys increases with colony size [37]. As confirmed in our former research [38], the duplex alloy possesses a much lower fracture toughness than the near lamellar TiAl alloys. The DP alloy thus has weaker resistance to crack growth than the near lamellar alloys. It has been evidenced by substantial experiments that the fatigue crack growth rate in duplex TiAl alloys is much faster than in lamellar alloys [16,39]. In this study, the DP alloy has the highest fatigue strength, and the fatigue strength is negatively correlated with colony/grain size. Therefore, it is reasonable to conclude that the fatigue fracture of TNM alloys is dominated by microcrack nucleation rather than crack propagation. As illustrated in Section 3.2, there are two different crack nucleation modes for TNM alloys: surface crack initiation and subsurface initiation. This leads to the large scatter of the fatigue life. Although the correlation between crack nucleation mode and fatigue life cannot be determined based on the present data, the results provide valuable implications for optimizing the HCF properties of TiAl alloys. Since HCF fracture is dominated by crack nucleation, the improvement in HCF properties can be accomplished by decreasing the colony/grain sizes and eliminating the B2 phase, which can reduce stress concentration and modify the deformation heterogeneity according to our previous research [40].
Conclusions
The HCF properties of different TNM alloys were obtained by fatigue tests at ambient temperature. The fatigue properties and fracture surfaces were analyzed to investigate the effect of microstructure on the fatigue fracture behavior of TiAl alloys. The results are summarized as follows:
(1) The TNM alloy with a duplex microstructure possesses the highest fatigue strength and fatigue life, followed by the near lamellar alloys; the fatigue strengths of the DP, NL#1 and NL#2 alloys were 622 MPa, 594 MPa, and 542 MPa, respectively.
(2) HCF failure exhibited cleavage fracture morphologies, with multiple facets in the crack initiation region. Two crack initiation modes, subsurface nucleation and surface origin, were observed; both appeared in the near lamellar alloys, while only subsurface initiation was observed in the DP alloy, which contributes to the high scatter of the S-N data.
(3) The HCF failure of TNM alloys is dominated by crack nucleation rather than crack propagation, so decreasing the colony/grain size is expected to improve the HCF properties of TiAl alloys.
High-Frequency Intraoral Ultrasound for Preoperative Assessment of Depth of Invasion for Early Tongue Squamous Cell Carcinoma: Radiological–Pathological Correlations
The eighth edition of the TNM classification officially introduced “depth of invasion” (DOI) as a criterion for determining the T stage in tongue squamous cell carcinoma. The DOI is a well-known independent risk factor for nodal metastases. In fact, several experts strongly suggest elective neck dissection for tongue cancer with a DOI > 4 mm due to the high risk of early and occult nodal metastases. Imaging plays a pivotal role in preoperative assessments of the DOI and, hence, in planning the surgical approach. Intraoral ultrasound (IOUS) has been proposed for early-stage SCC of the oral tongue as an alternative to magnetic resonance imaging (MRI) for local staging. The aim of this work is to investigate the accuracy of IOUS in the assessment of the DOI in early oral SCC (CIS, pT1, and pT2). A total of 41 patients with tongue SCCs (CIS-T2) underwent a preoperative high-frequency IOUS. An IOUS was performed using a small-size, high-frequency hockey-stick linear probe. The ultrasonographic DOI (usDOI) was retrospectively compared to the pathological DOI (pDOI) as the standard reference. In patients who underwent a preoperative MRI, their usDOI, magnetic resonance DOI (mriDOI), and pDOI were compared. Specificity and sensitivity for the IOUS to predict a pDOI > 4 mm and to differentiate invasive and noninvasive tumors were also evaluated. A high correlation was found between the pDOI and usDOI, pDOI and mriDOI, and usDOI and mriDOI (Spearman’s ρ = 0.84, p < 0.0001, Spearman’s ρ = 0.79, p < 0.0001, and Spearman’s ρ = 0.91, p < 0.0001, respectively). A Bland–Altman plot showed a high agreement between the usDOI and pDOI, even though a mean systematic error was found between the usDOI and pDOI (0.7 mm), mriDOI and pDOI (1.6 mm), and usDOI and mriDOI (−0.7 mm). The IOUS was accurate at determining the T stage (p < 0.0001). The sensitivity and specificity for the IOUS to predict a pDOI ≥4 mm were 92.31% and 82.14%, respectively, with an AUC of 0.87 (p < 0.0001). The specificity, sensitivity, negative predictive value (NPV), and positive predictive value (PPV) for the IOUS to predict an invasive cancer were 100%, 94.7%, 60%, and 100%, respectively. The AUC was 0.8 (95% CI 0.646–0.908, p < 0.0001). The IOUS was accurate in a preoperative assessment of a pDOI and T stage, and can be proposed as an alternative to MRI in the preoperative staging of tongue SCC.
Introduction
Oral squamous cell carcinoma (SCC) is the most frequent head and neck neoplasm, and the oral tongue is the most common site of presentation [1]. Despite innovations in treatment, the prognosis of tongue SCC is still difficult to predict: even though overall survival and disease-specific survival are satisfactory at early stages, the risk of lymphatic dissemination is high and represents the most critical prognostic factor [2]. Moreover, the prevalence of occult nodal metastasis in a clinically negative neck ranges from 8.2% to 46.3%, and mortality is increased five-fold if occult nodal metastases occur [2][3][4][5]. In 2017, the eighth edition of the tumor-node-metastasis (TNM) staging system of the American Joint Cancer Committee (AJCC)/Union for International Cancer Control (UICC) officially introduced the depth of invasion (DOI) as a staging criterion for the T stage along with the surface dimension in oral SCC [6]. The DOI is defined as the depth of the tumor invasion measured from the level of the basement membrane of the closest normal mucosa following an ideal "plumb line" [7] (Figure 1). The DOI is a well-known prognostic factor. In addition, several authors have demonstrated a direct correlation between the DOI and the incidence of nodal metastasis [2,3,8,9]. For these reasons, elective neck dissection has been proposed for early-stage oral tongue SCC. Furthermore, the DOI can be used to dictate prophylactic neck dissection in clinically N0 patients. Notwithstanding, there is no global consensus for the threshold, with a range varying between 3 mm and 10 mm [3,10,11]. However, a DOI cut-off of 4 mm was recently proposed by several experts [12][13][14].
Imaging plays a pivotal role in the local staging of tongue cancer by estimating the DOI and guiding the surgeon to plan the surgical intervention properly and define the need for an elective neck dissection. In fact, cT1 and selected cT2 can be safely removed transorally. In contrast, more advanced tumors require a pull-through approach to obtain better control of deep margins [15]. Computed tomography, magnetic resonance imaging (MRI), and an intraoral ultrasound (IOUS) can be used to assess the local and regional extent of oral cancer (Table 1). MRI is now considered the first choice for the preoperative staging of oral cancer [16], but the interest in IOUS has been progressively growing over time. Several studies have reported the utility of an IOUS in the preoperative staging of oral tongue SCC [17]. In an early study, Shintani and coworkers compared IOUS with CT and MRI using histology as the gold standard. They reported that ultrasonography was superior to CT and MRI in an assessment of the primary lesion of oral carcinoma, mostly because CT and MRI could not detect the primary tumor if the thickness was less than 5 mm [18]. IOUS is also used for intraoperative tumor thickness assessments and to improve locoregional control during surgery [19,20]. However, most studies in the literature evaluated IOUS in the assessment of tumor thickness [21][22][23][24][25][26][27][28][29][30].

Table 1. Advantages and disadvantages of computed tomography (CT), magnetic resonance imaging (MRI), and intraoral ultrasound (IOUS) in oral cancer local and regional staging.

Although conceptually different, tumor thickness (TT) and the DOI demonstrated a good prognostic performance due to the high correlation with the risk of lymph node metastases. However, the use of TT instead of the DOI can cause the risk of upstaging in a small percentage of patients; moreover, the DOI is now considered to be the most reliable parameter to predict the risk of lymph node metastases and prognosis [31,32]. Only four studies have investigated the role of an IOUS in the assessment of the DOI to date. Iida and colleagues demonstrated a good correlation between an ultrasonographic DOI (usDOI) and pathological DOI (pDOI), even in early tongue SCCs (<5 mm) [33]. Filauro and coworkers compared the usDOI and MRI-measured DOI (mriDOI) and demonstrated a better correlation between the mriDOI and pDOI than between the usDOI and pDOI [34]. Rocchetti and colleagues found a strong correlation between the usDOI and pDOI, but a moderate correlation between the usDOI and US-measured diameter; moreover, they found a high sensitivity, specificity, and PPV in the assessment of the infiltration of the tumor beyond the lamina propria into the submucosa (93.1%, 100%, and 100%, respectively) [35]. Takamura and coworkers found a high radiological-pathological agreement and showed that an IOUS was more accurate than CT and MRI at detecting T1 and T2 in squamous cell carcinomas [36]. To date, few studies have analyzed the role of IOUSs in the preoperative staging of early tongue SCC [22][23][24],[26,28,35,37], but only the study by Takamura et al. evaluated the radiological-pathological agreement between the pDOI and usDOI [36]. Herein, we assessed if the IOUS could be an alternative staging tool, especially for early SCC. In particular, we investigated the ability of the IOUS to predict the pDOI and T stage in oral tongue SCC, as well as to predict a pDOI > 4 mm, which is the threshold value to perform elective neck dissection in tongue SCC.
Patients
A total of 72 patients underwent an IOUS for oral tongue mucosal lesions from 2017 to 2021 at our institution. The inclusion criteria for the study were: (1) pathologically demonstrated tongue or tongue pelvis SCC; (2) a usDOI assessment; (3) complete surgical excision and histopathological measurement of the DOI. The exclusion criteria were: (1) benign lesion, dysplasia, nonsquamous cell carcinomas; (2) diagnosis of distant metastases and/or synchronous head and neck SCC; (3) treatment with neoadjuvant therapy; (4) T3 to T4 SCC. Among those who met the inclusion criteria for the study, 29 also underwent preoperative MRI.
All patients had been submitted to surgery after a multidisciplinary team (MDT) discussion and preoperative counseling between head and neck surgeons, radiologists, radiation, and medical oncologists. All patients were preoperatively evaluated by a dedicated head and neck surgeon by rigid endoscopy under white light (WL) and narrow-band imaging for the assessment of the superficial boundaries of the lesion.
Measurement of Radiological and Pathological DOI
An IOUS was performed using a 22-8 MHz, 8 mm footprint hockey-stick probe; for patients who were scanned before 2018, a hockey-stick 15-7 MHz probe was used. The probe was shielded with a latex cover on which a small amount of ultrasound gel was introduced. The examination was performed with the patient extending the tongue, which was gently held with gauze on the contralateral side by the operator. The ultrasound examination was performed using light pressure to avoid compression distortion. The entire lesion was examined to determine the deepest point of infiltration. The usDOI was measured perpendicularly to the mucosal surface using the closest normal mucosa as the reference line; the exophytic parts of the lesion were excluded from the measurement, while the ulcerated part was included (Figure 2).
MRI was performed with a 1.5 T scanner and a 3.0 T scanner, with the manufacturer's phased-array head and neck coils. The exams were conducted on axial and coronal planes using turbo spin echo (TSE) T1 and T2 weighted sequences, and diffusion-weighted imaging sequences (b-values: 50, 800) with an apparent diffusion coefficient map and fatsaturated gadolinium-enhanced gradient echo T1 weighted sequences. If needed, T1 and T2 weighted sequences for motion artifact reduction were used (using, for instance, radial sampling of the k-space sequences).
The pDOI was assessed using a micrometer in formalin-fixed paraffin-embedded specimens. The DOI was measured following a reference line perpendicular to the plane of the basement membrane of the closest normal mucosa.
Statistical Analyses
Statistical analyses were performed using MedCalc software. A Shapiro-Wilk test was used to study the distribution of the variables. Correlation was assessed with Spearman's rank correlation. The agreement between the pDOI, usDOI, and mriDOI was assessed with Bland-Altman plots. A χ² test was used to evaluate the ability of the IOUS to correctly assign the tumor to the corresponding pathological T stage (pT stage), testing the null hypothesis that there is no correlation between the ultrasonographically assessed T stage (usT stage) and the pT stage. The sensitivity, specificity, and area under the ROC curve (AUC) were calculated to determine the ability of the IOUS to predict a pDOI ≥ 4 mm. Sensitivity, specificity, and AUC were also assessed for the ability of the IOUS to predict invasion beyond the epithelial layer (pT1 and pT2 vs. CIS).
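For readers who want to reproduce this kind of analysis outside MedCalc, the following Python sketch illustrates the same steps (Spearman correlation, Bland-Altman bias and limits of agreement, and ROC analysis for predicting a pDOI ≥ 4 mm). It is not the authors' workflow: the usDOI/pDOI arrays are placeholder values, and the scipy/scikit-learn calls are one possible implementation.

```python
# Illustrative sketch only: per-patient usDOI and pDOI values below are placeholders.
import numpy as np
from scipy.stats import shapiro, spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

us_doi = np.array([1.2, 3.5, 4.8, 2.0, 6.1, 3.9, 5.2, 0.8])   # ultrasonographic DOI (mm)
p_doi  = np.array([0.9, 3.1, 4.5, 1.6, 5.4, 3.2, 4.9, 0.5])   # pathological DOI (mm)

w_stat, p_norm = shapiro(us_doi)                               # distribution check
rho, p_val = spearmanr(us_doi, p_doi)                          # rank correlation
print(f"Shapiro-Wilk p = {p_norm:.3g}; Spearman rho = {rho:.2f} (p = {p_val:.3g})")

diff = us_doi - p_doi                                          # Bland-Altman: radiological minus pathological
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                                  # half-width of 95% limits of agreement
print(f"bias = {bias:.2f} mm, 95% limits = [{bias - loa:.2f}, {bias + loa:.2f}] mm")

deep = (p_doi >= 4.0).astype(int)                              # ground truth: pDOI >= 4 mm
auc = roc_auc_score(deep, us_doi)
fpr, tpr, thr = roc_curve(deep, us_doi)
j = np.argmax(tpr - fpr)                                       # Youden index for an operating point
print(f"AUC = {auc:.2f}; cut-off = {thr[j]:.1f} mm "
      f"(sensitivity {tpr[j]:.2f}, specificity {1 - fpr[j]:.2f})")
```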
Results
Altogether, 41 patients met the criteria for the study and were included. Clinicodemographic data are shown in Table 2. The primary site was the lateral surface in 33 cases, ventral surface in 3 cases, dorsal surface in 2 cases, and tongue pelvis in 3 cases. On the US, the tongue SCC appears as a slightly hypoechoic lesion that replaces the normal epithelial layer, which is very hypoechoic. If invasive, it infiltrates the deeper hyperechoic layer, which may represent the subepithelial connective tissue and possibly the intrinsic muscle of the tongue layer. The mean usDOI, mriDOI, and pDOI were 3.79 mm (95% CI 2.93-4.65), 5.30 mm (95% CI 4.23-6.37), and 3.07 mm (95% CI 2.24-3.91 mm), respectively. The mean difference between the radiological and pathological DOI was 1.06 mm and 1.64 mm for the usDOI and mriDOI, respectively. However, the difference was not statistically significant (p = 0.26).
The null hypothesis that there is no correlation between the usT stage and pT stage was rejected, and the alternative hypothesis that there is a relation between the two classifications was accepted (p < 0.0001) ( Figure 5). The sensitivity and specificity for the IOUS to predict a pDOI ≥4 mm were 92.31% and 82.14%, respectively. The area under the ROC curve (AUC) was 0.87 (95% CI 0.73-0.95, p < 0.0001) (Figure 6).
Using the IOUS, we found that 39 tumors went beyond the epithelial layer and infiltrated the subepithelial connective tissue, while 3 did not. Upon histological examination, we found that 36 tumors were invasive while 5 were in situ. The specificity, sensitivity, negative predictive value (NPV), and positive predictive value (PPV) for the IOUS to predict an invasive cancer were 100%, 94.7%, 60%, and 100%, respectively. The AUC was 0.8 (95% CI 0.646-0.908, p < 0.0001) (Figure 6).
Discussion
The tongue is the most common subsite for oral SCC [1]. The eighth edition of the TNM classification officially introduced the DOI as a criterion for determining the T stage in oral cancer. The DOI is a well-known prognostic factor. Indeed, several authors have demonstrated a direct correlation between the DOI and the incidence of nodal metastasis [2,3,8,9]. A precise preoperative measurement of the DOI is crucial for planning the surgical approach: cT1 and selected cT2 tumors with a DOI < 10 mm that do not infiltrate extrinsic tongue muscles can be removed transorally [15]. Moreover, if a reliable radiological tool for estimating the DOI was available, it would be possible to plan primary tumor removal and elective neck dissection simultaneously.
Currently, there is no standard radiological technique to estimate the DOI. MRI is considered the first-choice imaging modality for preoperative staging for tongue SCC. However, ultrasonography may be a suitable alternative, thanks to its high spatial resolution, especially in early tumors.
In our study, an excellent radiological-pathological agreement in estimating the DOI was found. Compared with histopathology, both ultrasonography and MRI showed a good correlation, with slightly better performance for ultrasonography than MRI (0.84 and 0.79, respectively), even though the difference was not statistically significant.
The radiological-pathological agreement was studied using the Bland-Altman plot (setting histology as the reference standard), which enabled us to reveal the systematic mean bias. Both ultrasonography and MRI performed well, since only three cases (ultrasonography) and one case (MRI) fell outside the limits of agreement. Not surprisingly, a systematic error was found for both imaging modalities. However, ultrasonography showed a smaller bias and narrower 95% limits of agreement than MRI: the IOUS systematically overestimated the DOI by 0.7 mm, while the mean bias for MRI was 1.6 mm. Therefore, ultrasonography was more precise than MRI with regard to CIS, T1, and T2 tumors. Only the study by Takamura et al. studied radiological-pathological agreement and mean bias between the usDOI and pDOI, showing a higher reliability of IOUS than MRI in the preoperative prediction of the DOI for T1 and T2 SCCs and a 0.2 mm mean bias between the usDOI and pDOI [36]. The radiological overestimation of the DOI may be related to two phenomena. Firstly, tissue shrinkage has been demonstrated in formalin-fixed surgical specimens of oral SCC [38,39]; secondly, tumors are usually surrounded by peritumoral inflammation and edema that may influence radiological measurements of the DOI, as several authors have demonstrated that MRI significantly overestimates the DOI [40]. Based on our results, we can hypothesize that an IOUS may be less influenced by peritumoral edema. Moreover, for early tongue SCC, an mriDOI mean bias >1 mm should be considered clinically meaningful.
However, despite the mean bias, our study showed the good performance of the IOUS in estimating the T stage correctly: 70.7% of patients were assigned to the correct pathological T stage, 17.1% of patients were assigned to a higher T stage, and 12.2% of patients were assigned to a lower T stage.
The DOI was found to be a good predictor for occult nodal metastases. Controversies remain about the proper pDOI threshold for a clinically relevant risk of occult nodal metastases; however, data from the literature suggest that an elective neck dissection should be performed with a pDOI ≥ 4 mm [12][13][14]. An accurate instrument to preoperatively measure the DOI may allow proposing concomitant tumor resection and elective neck dissection, reducing the rate of two-step procedures. In our series of early tongue SCCs, the IOUS was very accurate in determining whether a tumor had a DOI ≥ 4 mm or not, which was the threshold chosen for elective neck dissection at our institution, regardless of the T stage, with a sensitivity of 92.31%, specificity of 82.14%, and AUC of 0.87 (95% CI 0.73-0.95, p < 0.0001). However, further studies with a larger sample are required to understand if the usDOI alone can be considered as a predictor for neck nodal metastasis and local recurrence in the same way as the pDOI.
One of the main strengths of our work is the use of a very high frequency linear probe, in contrast to most studies in the literature [19][20][21][22][23][24][25][26][27][28]30,33,[35][36][37]. Moreover, a small probe allowed a relatively easy intraoral approach, even though it is more difficult to scan posteriorly located tumors. Furthermore, in this study, the use of a very high frequency probe enabled us to recognize the superficial layers of the mucosa of the tongue and, in such a way, to measure the usDOI from the subepithelial connective tissue to the deepest point of infiltration of the tumor. However, caution has to be used for the usDOI assessment of more advanced SCCs: ultrasonography is less reliable if the pDOI is >5 mm [33,41]. For small high-frequency probes, the risk may be even higher, both for the weak penetration of the ultrasound and the small field of view.
The present work highlights the role of ultrasonography in early tongue SCCs. Precise estimation of the DOI at an early stage is clinically important because oral tongue SCCs show early lymphatic spread and a high risk of occult metastases [2][3][4], especially if they have a DOI > 4 mm. To our knowledge, this is the second study to investigate radiological-pathological agreement in early tongue SCCs as a target and to include CISs, confirming the results in the literature [35,36]. Therefore, we believe that a high-frequency IOUS may be a radiological tool to noninvasively estimate the invasiveness of a clinically evident lesion. Because it offers the highest spatial resolution among radiological modalities, ultrasonography appears particularly suited to the study of small, superficial lesions. Recent work by Rocchetti et al. showed that the sensitivity, specificity, PPV, and NPV of an IOUS in the assessment of the extension of the tumor beyond the lamina propria into the submucosa were 93.1%, 100%, 100%, and 60%, respectively [35]. We report a sensitivity, specificity, PPV, and NPV for the IOUS to predict an invasive cancer of 94.7%, 100%, 100%, and 60%, respectively, in accordance with the literature. However, similar to the study by Rocchetti et al., we only evaluated five carcinomas in situ. Further studies are required to establish the role of an IOUS in differentiating invasive and noninvasive tumors.
The main limitations of this study are the small sample size and retrospective methodology. Moreover, a high-frequency IOUS is a new and operator-dependent technique, and radiologists need time to learn how to measure the DOI correctly.
The sample was too small and the follow-up was insufficient to identify the correlation between the DOI and nodal metastases. However, since we found a high correlation between the usDOI and pDOI, we can indirectly assume that the usDOI may be a preoperative prognostic factor for nodal disease: future studies must be conducted to support this hypothesis.
Conclusions
In conclusion, our study showed a good agreement between the usDOI and pDOI. We believe that an IOUS might be indicated for the local staging of early clinical N0 oral lesions: this is the scenario in which the precision in estimating the DOI would have a greater clinical impact, at least to estimate the risk of occult lymph node metastases. In contrast, MRI might be proposed for more advanced lesions or if suspicious or enlarged lymph nodes are present, given its panoramic view and better sensitivity in detecting bone invasions and floor-of-the-mouth infiltration. However, larger and prospective studies must be conducted to confirm this hypothesis.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and was approved by the Regional Ethics Committee of Liguria (CER Liguria 133/2021).
|
2022-11-16T16:18:16.349Z
|
2022-11-01T00:00:00.000
|
{
"year": 2022,
"sha1": "a1d164c7536ddf5a19464b6016c6ae828dbe29f6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/22/14900/pdf?version=1668244364",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9610106133b93fd8a7adf066e160a9302b3ec32",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235765491
|
pes2o/s2orc
|
v3-fos-license
|
Active Stokesian dynamics
Abstract Since its development, Stokesian dynamics has been a leading approach for the dynamic simulation of suspensions of particles at arbitrary concentrations with full hydrodynamic interactions. Although developed originally for the simulation of passive particle suspensions, the Stokesian dynamics framework is equally well suited to the analysis and dynamic simulation of suspensions of active particles, as we elucidate here. We show how the reciprocal theorem can be used to formulate the exact dynamics for a suspension of arbitrary active particles, and then show how the Stokesian dynamics method provides a rigorous way to approximate and compute the dynamics of dense active suspensions where many-body hydrodynamic interactions are important.
Introduction
Active matter is a term used to describe matter that is composed of a large number of self-propelled active 'particles' that individually convert stored or ambient energy into systematic motion (Schweitzer & Farmer 2007;Morozov 2017).The interaction of many of these individual active particles can lead to complex collective dynamics (Ramaswamy 2010).Natural examples include a flock of birds, a school of fish, or a suspension of bacteria (Toner, Tu & Ramaswamy 2005), but active matter may also be composed of synthetic active particles (Bechinger et al. 2016).These out-of-equilibrium systems are most often in fluids, so understanding their dynamics and rheology involves a connection between fluid-body interactions and non-equilibrium statistical physics (Marchetti et al. 2013;Saintillan 2018).
The study of active matter at small scales is complicated by the fact that the Stokes equations, which govern momentum conservation of Newtonian fluids when inertia is negligible, feature a long-range decay of fluid disturbances (Happel & Brenner 1965).
Because of this, active particles interact through the fluid over distances that are long relative to their individual size, and to properly capture the effect of the fluid in these systems, one may need to sum hydrodynamic interactions between all bodies, particularly at higher particle concentrations.
The difficulty of capturing accurately many-body hydrodynamic interactions is well known from the study of suspensions of passive particles, where early efforts to sum hydrodynamic interactions in infinite suspensions were plagued by problems of divergent sums (see, for example, the long literature on sedimenting particles; Davis & Acrivos 1985), eventually overcome by the pioneering work of Batchelor (1972), Jeffrey (1974), Hinch (1977), O' Brien (1979) and others.The Stokesian dynamics method that was developed soon after facilitated the efficient dynamic simulation of passive particle suspensions at arbitrary concentrations (Brady & Bossis 1988).The essential basis of the Stokesian dynamics method is a mixed asymptotic approach wherein hydrodynamic forces on particles due to interactions are computed distinctly when the particles are in close proximity versus widely separated.When the particles are widely separated, the method sums many-body hydrodynamic reflections between particles through inversion of a truncated grand mobility tensor, whereas when the particles are in close proximity, pairwise additive lubrication forces are used (Durlofsky, Brady & Bossis 1987).When the suspension is infinite or periodic, a modification of the method introduced by O' Brien (1979) is used to obtain absolutely convergent expressions for the hydrodynamic interactions among all particles, suitable for the numerical simulation of a wide range of problems from sedimentation to rheology (Brady et al. 1988).Since its inception, the Stokesian dynamics method has served as a foundational tool for the development of our understanding of suspension mechanics in the last several decades.
Unlike passive suspensions, in active suspensions each active particle in the fluid is endowed with non-trivial boundary conditions due to activity, and constantly injects energy into the fluid.Many advances have been made in understanding the dynamics of individual swimming microorganisms (biological and synthetic), from the pioneering work of Taylor (1951) through to several detailed reviews of microscale locomotion research (Lighthill 1976;Brennen & Winet 1977;Lauga & Powers 2009).However, in a fashion similar to the early development of the passive suspension literature, the majority of research on collective locomotion of many bodies and active suspensions has emphasized dilute suspensions where swimmer-swimmer interactions are greatly simplified, and far-field approximations are still valid (Saintillan 2018).Interesting phenomena, such as particle clustering (motility-induced phase separation), have been observed for dense suspensions of active particles (Bechinger et al. 2016), but very often numerical simulation of these suspensions is done with active Brownian particle models that neglect hydrodynamic interactions entirely (Cates & Tailleur 2015).Others have used approaches for active suspensions that only approximate the Stokes equations, such as multiparticle collision dynamics (Zöttl & Stark 2014) or lattice Boltzmann methods (Stenhammar et al. 2017) that still may not be accurate for very dense concentrations.Results for simplified swimmers in concentrated suspensions display qualitative differences (Ishikawa, Locsei & Pedley 2008;Evans et al. 2011;Alarcón & Pagonabarraga 2013;Matas-Navarro et al. 2014;Zöttl & Stark 2014;Thutupalli et al. 2018).Some argue that hydrodynamic interactions act to suppress phase separation in active matter (Matas-Navarro et al. 2014), while others have shown that hydrodynamic interactions with boundaries can control phase separation (Thutupalli et al. 2018).A complete understanding of the connection between individual particle activity, the hydrodynamic interactions between many particles that arise as a consequence of this activity, and the role this plays in the macroscopic dynamics of concentrated active suspensions has not been developed.
As we discuss in the following, the Stokesian dynamics methodology is easily adapted for the dynamic simulation of suspensions of active particles at any concentration, and as with passive suspensions, is particularly well suited for dense concentrations and periodic boundary conditions.The mathematical structure of the dynamical equations remains essentially unchanged between passive and active particles, so any implementation of the Stokesian dynamics method for passive particles may be modified simply and easily for use with active particles.By making the connection to passive suspensions and Stokesian dynamics, we obtain directly a long-developed framework for summing hydrodynamic interactions, adding non-hydrodynamic forces, including Brownian motion and constructing the equivalent Smoluchowski equations, for active suspensions with full hydrodynamic interactions (Burkholder & Brady 2018).We believe that this mathematical structure provides an ideal formalism for theoretical analysis of active suspensions (Burkholder & Brady 2019), much as it has for passive suspensions (Brady 1993a,b).
The Stokesian dynamics method was first adapted for use with self-propelled active particles by Mehandia & Nott (2008).In their work, they introduced spheres each with a prescribed virtual propulsive force, that interact through a prescribed stresslet whose magnitude sets the size of the virtual propulsive force, and an induced stresslet caused by particle rigidity in a bulk flow.The dynamics of these active spheres was then solved numerically using the Stokesian dynamics framework.The authors found that near-field interactions appeared important even at low concentrations, as particles tended to cluster, and they found qualitative differences in the dynamics between low-and high-volume fractions.Despite the novelty, the authors did not specify how the propulsive force arises from the surface boundary conditions, or how to generalize this approach.Shortly afterwards, Ishikawa et al. (2008) adapted the Stokesian dynamics framework for use with spherical particles with a prescribed tangential slip velocity, so-called squirmer particles (Ishikawa et al. 2008).Using their own previous results for two-body hydrodynamic interaction between squirmer particles (Ishikawa, Simmonds & Pedley 2006), Ishikawa et al. (2008) were able to incorporate both near-field interactions and many-body far-field interactions for the study of dense suspensions of (2-mode) squirmer particles.This framework was then used to study the rheology (Ishikawa & Pedley 2007b), diffusion (Ishikawa & Pedley 2007a) and coherent structures (Ishikawa & Pedley 2008) of these active suspensions.The Stokesian dynamics framework was then extended for use with passive and active spherical particles that had a fairly general surface velocity field (but were individually immotile), which could be linked together to form complex swimming assemblies (Swan et al. 2011).That machinery was then used to simulate a number of model swimming microorganisms, from pusher and puller swimmers to helical flagella, by using assemblies of spherical particles (Swan et al. 2011).Recently, a higher-order Stokesian-dynamics-like approach (without lubrication), namely constructing mobility tensors by a higher-order moment expansion of the boundary integral equations (Ichiki 2002), was developed for suspensions of squirmer particles by using tensorial spherical harmonics (Singh, Ghose & Adhikari 2015).Here, we show that this previous literature may all be synthesized into a fairly general theory for the dynamics of suspensions of arbitrary active particles.This work illustrates the similarity of hydrodynamic interactions in 'passive' colloidal suspensions versus active suspensions and how particle activity can be incorporated directly into the Stokesian dynamics approach.
Alternative approaches to the Stokesian dynamics methodology for the numerical simulation of passive and active suspensions have also been developed recently.An approach known as the force-coupling method (FCM) (Maxey & Patel 2001) instead uses regularized (Gaussian) force distributions in the fluid and integrals over the entire fluid domain (rather than just particle boundaries) to construct (far-field) mobilities.The FCM was extended to incorporate fluctuating hydrodynamics for passive suspensions (Keaveny 2014;Delmotte & Keaveny 2015) and then to squirmer active particles, up to the stresslet level (Delmotte et al. 2015).Similarly, a fluctuating immersed boundary method (FIBM) was also developed to simulate suspensions of Brownian particles (Delong et al. 2014).Like the FCM, the FIBM uses an explicit (fluctuating) solvent but uses kernels developed by Peskin (2002) for the immersed boundary method to mediate the fluid-particle interactions.This immersed boundary approach has also been extended to simulate rigid assemblies of particles and active particles (at the monopole level) (Balboa Usabiaga et al. 2016).A key benefit of these approaches is that exact Green's functions need not be found, which for complex boundaries may not be feasible at all.The method used to construct mobilities can be chosen depending on the boundary conditions, and this is the approach taken in the rigid multiblob method with exact Green's functions (at the Rotne-Prager level) for unbounded or half-space simulations, while using the FIBM for confined geometries (Balboa Usabiaga et al. 2016;Sprinkle et al. 2017).These methods are both fast and have well-supported code bases, and although they generally include only far-field hydrodynamic interactions, in principle, lubrication could also be added.Finally, rather than forming the mobilities by expanding in tensorial spherical harmonics as done by Singh et al. (2015) and also shown here, a recent approach (for non-Brownian suspensions) uses instead vector spherical harmonics (Corona & Veerapaneni 2018;Yan et al. 2020), which seems to lead to simpler formulas for higher-order terms, allowing simulation of the near field without resorting to a lubrication approximation.
We begin by developing a general kinematic description of an arbitrary active particle in § 2. Next, we show how the reciprocal theorem can be used to yield the exact dynamics for a suspension of N arbitrary active particles in § 3. We then show how the Stokesian dynamics technique is used for the approximation and dynamic simulation of these exact equations in § 4.
Kinematics of an active particle
Consider an active particle identified with the region B as shown in figure 1 (a schematic of a deforming active particle). Changes in the spatial configuration of the active particle can be described by a map χ from a reference configuration B_0 such that x = χ(X, t) for x ∈ B and X ∈ B_0. The motion of the body can be decomposed into shape change χ_s, which represents the swimming gait of the active particle, and rigid-body motion χ_r, which arises as a consequence of interaction with the fluid, so that x = χ_r ∘ χ_s = x_c(t) + Θ(t) · (χ_s(X, t) − χ_s(X_0, t)), where x_c is the translation, and Θ is the rotation (about χ_s(X_0, t)) of the body under the action of χ_r. Upon differentiation, we obtain the velocity of the body,
u(x ∈ ∂B) = U + Ω × (x − x_c) + u^s, (2.2) where U and Ω are the rigid-body translational and rotational velocities, and the velocity u^s due to shape change is u^s = Θ · ũ^s, where the last term is the deformation velocity in the unoriented configuration, ũ^s = ∂χ_s/∂t. In a purely kinematic description of the activity of the particle, we would consider the shape change χ_s to be prescribed and then solve for the rigid-body translation and rotation of the active particle such that momentum (of the particle and of the fluid) is conserved.
Often, in the process of modelling an active particle, the details of the deformation are coarse-grained away and one simply prescribes a surface velocity on a suitable reference configuration, ũ(X ∈ ∂B 0 ), such as ciliary beating on the surface of a microorganism that is represented as a slip velocity over a fixed geometry (Blake 1971).
Dynamics of active particles
Consider a suspension of N particles, each labelled B i where i ∈ [1, N], immersed in an arbitrary background flow denoted u ∞ .The disturbance velocity field generated by the particles is Neglecting the inertia of the active particles and of the Newtonian fluid in which they are immersed, the rigid-body dynamics of active particles is governed by an instantaneous force balance where F and F ext are respectively 6N-dimensional vectors of hydrodynamic and external (or interparticle) forces/torques on all N particles.In general, the hydrodynamic forces may be easily shown, by the reciprocal theorem of low Reynolds number hydrodynamics, to be weighted integrals of the boundary traction during rigid-body motion: where n is the normal to the surface ∂B i pointing into the fluid.The tensor field derivation).Substitution of the boundary conditions of each active particle, (2.2), into (3.3)yields a decomposition of the hydrodynamic forces into three separate forces due to each aspect of the boundary motion: the hydrodynamic 'swim' force (or thrust), generated by each active particle as if held fixed in an otherwise quiescent fluid; the hydrodynamic drag force on each particle, as if inactive and held fixed in a background flow; and the hydrodynamic drag due to the rigid-body motion of each particle, as if inactive (passive) in an otherwise quiescent fluid.The latter is written in terms of a (6N × 6N) resistance tensor, which is the linear operator that gives hydrodynamic forces due to rigid-body translational/rotational velocities U (another 6N-dimensional vector).Substitution of these forces into (3.2) and inversion of the resistance tensor gives This relationship simply states that the rigid-body motion of active particles is linearly related to the forces exerted by or on those particles.The deterministic formula in (3.7) is exact and completely general; it governs the dynamics of a suspension of active (and passive) particles of arbitrary shape and activity in a general background flow.A stochastic Brownian force may also be included in the above force balance, with the associated thermal drift term that arises upon elimination of inertial degrees of freedom: The vector U now represents discrete changes in position and orientation over an interval t, and the Brownian force is where k B is the Boltzmann constant, T is the fluid temperature and Ψ is a vector of standard Gaussian random variables.
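As a concrete illustration of the force balance and of the Brownian force above, the following sketch solves for the rigid-body velocities given an already-assembled resistance tensor and the separate force contributions. It is a minimal outline, not the paper's implementation: the thermal drift term of (3.8) is omitted, and R_FU, F_swim, F_inf and F_ext are assumed to be supplied by the user.

```python
# Minimal sketch of the instantaneous force balance: given the 6N x 6N rigid-body
# resistance R_FU, the "swim" force, the ambient-flow drag and any external forces,
# the rigid-body velocities follow from a linear solve. The Brownian force uses the
# fluctuation-dissipation form F_B = sqrt(2 kT / dt) R_FU^(1/2) . Psi; the thermal
# drift term kT d/dx . R_FU^(-1) is omitted here for brevity.
import numpy as np

def rigid_body_velocities(R_FU, F_swim, F_inf, F_ext, kT=0.0, dt=1e-3, rng=None):
    n = R_FU.shape[0]
    F = F_swim + F_inf + F_ext
    if kT > 0.0:
        if rng is None:
            rng = np.random.default_rng()
        L = np.linalg.cholesky(R_FU)                 # a square root of R_FU (L L^T = R_FU)
        F = F + np.sqrt(2.0 * kT / dt) * (L @ rng.standard_normal(n))
    return np.linalg.solve(R_FU, F)                  # U = R_FU^(-1) . (sum of force moments)
```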
Although (3.7) and (3.8) are exact, to compute the dynamics, the tensor field T U would need to be found at each instant, and this is prohibitively expensive for suspensions of large numbers of particles (and more so if they are changing shape).Instead, an approximate approach used in Stokesian dynamics is to evaluate a truncated set of moments of the traction operator n • T U on the surfaces of the particles ∂B i .We outline this approach for spherical active bodies below, where the approach is particularly elegant and simplified, but the methodology can certainly be extended to anisotropic bodies (Claeys & Brady 1993a,b,c;Nasouri & Elfring 2018).
Spherical moments
In order to facilitate computation, we use the fact that one may write an arbitrary function on a sphere in terms of an expansion in irreducible tensors of the unit normal n (dimensionless tensorial spherical harmonics) (Hess 2015). In this way, the deformation velocity of each active particle may be written as where in this shorthand notation the superscript indicates the tensor order of the coefficients, and the overbracket means the irreducible (or fully symmetric and traceless) part of the tensor (see Appendix B for further details).
The coefficient tensors C (n) may be obtained easily by appealing to the orthogonality of the tensorial spherical harmonics where the weight is w n = (2n + 1)!!/n!.Hence we see that the coefficient tensors are (weighted) irreducible moments of u s .We can recast these coefficients in more familiar terms by separating symmetric and antisymmetric parts of C (2) i , (3.11) (3.12) to rewrite the surface velocity in familiar form (Swan et al. 2011) We can likewise express the background flow in terms of a moment expansion, and in this way write in a consistent fashion for the disturbance field where constants everywhere in the flow (but still may be arbitrary functions of time).
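The moments themselves are straightforward to evaluate numerically for a prescribed slip velocity. The sketch below computes the zeroth moment (and hence the swimming and rotation speeds of an isolated sphere, using the Stone & Samuel (1996) normalizations, which is an assumed convention) and the unnormalized irreducible first moment by direct quadrature on the sphere; the normalization of E^s itself is convention dependent and is not asserted here.

```python
import numpy as np

def surface_moments(u_s, a=1.0, n_theta=400, n_phi=400):
    """Quadrature of slip-velocity moments on a sphere of radius a.
    u_s(n) returns the slip velocity at surface points with outward normals n, shape (..., 3)."""
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    n = np.stack([np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH), np.cos(TH)], axis=-1)
    w = (a**2 * np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0]))[..., None]   # area elements
    us = u_s(n)
    area = w.sum()
    U_s = (w * us).sum(axis=(0, 1)) / area                                   # zeroth moment <u^s>
    # rotation-rate moment with the Stone & Samuel (1996) prefactor (assumed convention)
    Om_s = 3.0 / (8.0 * np.pi * a**3) * (w * np.cross(n, us)).sum(axis=(0, 1))
    M1 = (w[..., None] * us[..., :, None] * n[..., None, :]).sum(axis=(0, 1)) / area
    E_raw = 0.5 * (M1 + M1.T) - (np.trace(M1) / 3.0) * np.eye(3)             # irreducible first moment
    return U_s, Om_s, E_raw

# Example: a B1-only squirmer with axis p has u^s = B1 [(p.n) n - p]; an isolated
# sphere then swims with U = -U^s = (2/3) B1 p, as quoted in the text below.
p = np.array([0.0, 0.0, 1.0])
B1 = 1.0
U_s, Om_s, E_raw = surface_moments(lambda n: B1 * ((n @ p)[..., None] * n - p))
print("swim velocity:", -U_s)    # ~ [0, 0, 0.667]
```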
Using the expansion (3.15) in (3.3), one may write the hydrodynamic forces in terms of a set of moments of the traction operator n • T U on the surfaces of the particles ∂B i (forming resistance tensors): (3.16) Now using the above expression for the hydrodynamic forces, together with Newton's second law (3.2),we obtain the translational and rotational velocities of the spherical active particles: background flow to surface velocity, e.g.E ∞ → E ∞ − E s .If the particles are passive, u s = 0, then we recover equations of motion for passive particles in a background flow (Brady & Bossis 1988); however, computing the far-field hydrodynamic interactions of active spherical particles is no more difficult than for passive spherical particles, assuming that the velocities on the boundaries of the active particle, u s i , are prescribed.If hydrodynamic interactions are completely neglected, then the particles all move with their respective single-particle velocities U = −U s + U ∞ (higher-order moments do not contribute to self-propulsion for isolated spherical particles by symmetry), and we recover the classic result for single active spheres (Anderson & Prieve 1991;Stone & Samuel 1996;Elfring 2015).It is technically possible to devise a perfect stealth swimmer that does not disturb the surrounding fluid by setting u s = U s , for example by the jetting mechanism proposed by Spagnolie & Lauga (2010), but this is an extreme case, and in general, higher-order moments lead to hydrodynamic interactions.We emphasize that hydrodynamic interactions due to moments of the particle activity enter in exactly equivalent form to interactions due to moments of the background flow.For example, the leading-order change in the dynamics of the particles due to hydrodynamic interactions is given by , where the resistance tensors act to couple the particles in precisely the same fashion for active particles as passive particles.In Appendix D, we give the leading-order hydrodynamic interactions (a dilute approximation) in the mobility formulation more commonly employed in the literature.
We see that the leading-order hydrodynamic interactions due to activity are given by the symmetric first moment of activity, E s i , of each active particle.This is not a surprise as the term E s sets the active component of the stresslet S (Ishikawa et al. 2006;Lauga & Michelin 2016;Nasouri & Elfring 2018) of individual spherical active particles, where The 'active strain rate' E s can be zero, but then the leading-order term will generally arise at the second-moment level for self-motile active particles (that is, ones with non-zero surface averaged velocity).This is the case for so-called neutral squirmers (see § 3.2 below) or symmetric phoretic particles (Michelin & Lauga 2014).This illustrates why it is particularly important to incorporate higher-order moments to capture accurately hydrodynamic interactions between active particles (Singh et al. 2015).Active particles may also be immotile (not self propelling), meaning that U s = 0 but higher-order moments are non-zero.A canonical example of immotile active particles is extensile microtubule bundles that are driven by kinesin motors (Sanchez et al. 2012).Immotile active particles, sometimes called 'shakers' in contrast to 'movers' that are motile (Hatwalne et al. 2004), still interact hydrodynamically through higher-order active moments in much the same way as motile active particles.These equations, above all, simply reflect the linear relationship between velocity and force moments.Using still more compact notation for all disturbance velocity moments U . .]T and hydrodynamic force moments F = [F , S, . ..]T , we may write, more generally, the linear relationship where R is the grand resistance tensor, an (unbounded) linear operator that maps velocity moments to force moments.In this notation, the hydrodynamic force is written compactly In order to capture the dynamics of active particles, we seek an effective and efficient way to form R F U .The grand resistance tensor is a purely geometric operator, depending only on the position (and orientation if they are anisotropic) of each active particle (Happel & Brenner 1965).Perhaps less obvious is that the grand resistance tensor does not depend on the prescribed surface activity of the particles, and is thus identical to the case when they are passive.This also applies to particles in a bounded geometry -R is a function of geometry only (Swan & Brady 2007, 2010).We have assumed here that the surface activity of the particles is prescribed; however, the surface activity may depend on the traction on the boundary, as it would, for example, for biological particles that have power-limited surface actuation, but the linear relationship between force moments and velocity moments makes it straightforward to prescribe force moments (Swan et al. 2011).
We focus here on spherical particles, as is common in the literature for colloidal suspensions; however, the method described above can be generalized to other geometries.We formed moments of forces and velocities by projection onto tensorial spherical harmonics, but for other geometries, a more suitable basis for the vector fields on the particle surfaces ∂B i would be used.Alternatively, and more generally, one may perform Taylor series expansion of the boundary integral equations about the centre of each particle, which naturally projects tractions onto force moments for particles of arbitrary geometry (see recent work by Swan et al. (2011) and Nasouri & Elfring (2018) for details of this method applied to active particles).A problem with this approach is that the particle activity u s might be defined only on the particle surfaces (in the form of surface slip as in § 3.2), but this difficulty can be ameliorated by lifting u s to a suitably continuous function defined in R 3 .Despite this complication, fundamentally, the linear relationship between velocity and force moments remains, regardless of geometry.
Squirmers
A squirmer is a spherical particle whose surface slip velocity is tangential to the surface (Pedley 2016).Most often, the slip velocity is taken to be axisymmetric; here, the direction of the axis of symmetry of the particle is denoted by p (the particle director).A purely tangential slip velocity is of course an idealization, but one that arises quite naturally, for example in the limit of small-amplitude deformations that are projected onto a time-averaged spherical manifold (Lighthill 1952;Blake 1971), or as the outer solution of phoretic flow due to chemical concentrations confined to a thin layer near the sphere surface (Anderson 1989;Golestanian, Liverpool & Ajdari 2005).The slip velocity is typically written as an expansion in Legendre polynomials: where P n is the Legendre polynomial of degree n, and P n (x) = (d/dx)P n (x).The polar slip coefficients B n are often called 'squirming' modes, while the azimuthal slip (or Recasting the slip velocity in terms of irreducible tensors of the surface normal such that we obtain where Δ n is an isotropic 2n-order tensor that when applied on a tensor of rank n, projects onto the symmetric traceless part of that tensor (see Appendix B for further details).By symmetry, each coefficient is necessarily composed only of products of the particle director p.We see that the swimming speed is given by the first squirming mode B 1 as U = −U s = (2/3)B 1 p for an isolated squirmer, but note that the first mode also contributes a higher-order term to hydrodynamic interactions between particles embedded in B s .The stresslet due to surface activity of a particle is given by the second squirming mode, S = 4πηa 2 B 2 pp.This determines if a squirmer is a pusher or a puller, but can easily be zero -a so-called neutral squirmer -and in that case, the leading-order term contributing to hydrodynamic interactions is necessarily given by B 1 (and B 3 if non-zero).
Azimuthal slip leads naturally to rotation given by the C 1 mode, Ω = −Ω s = −(C 1 /a 3 )p for an isolated squirmer, while the C 2 mode leads to a rotlet dipole contribution in the far field (Pak & Lauga 2014).
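Collecting the isolated-squirmer results quoted above into a small helper (the expressions are taken directly from the text; the optional removal of the isotropic part of pp is noted as a convention choice rather than asserted):

```python
import numpy as np

def isolated_squirmer(B1, B2, C1, p, a=1.0, eta=1.0):
    """Swim velocity, rotation rate, and active stresslet of an isolated squirmer."""
    p = np.asarray(p, dtype=float)
    p = p / np.linalg.norm(p)
    U = (2.0 / 3.0) * B1 * p                              # U = (2/3) B1 p
    Omega = -(C1 / a**3) * p                              # Omega = -(C1 / a^3) p
    S = 4.0 * np.pi * eta * a**2 * B2 * np.outer(p, p)    # S = 4 pi eta a^2 B2 pp
    # (some conventions subtract the isotropic part, i.e. use pp - I/3)
    return U, Omega, S

# beta = B2/B1 < 0 corresponds to a pusher in the usual squirmer convention
U, Omega, S = isolated_squirmer(B1=1.0, B2=-1.0, C1=0.0, p=[0, 0, 1])
```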
As an example of the framework developed here, consider an active squirmer particle, labelled B 1 , in the presence of a freely suspended passive sphere, labelled B 2 , as shown in figure 2. Using (3.17), we obtain the velocities of the two particles in terms of moments of the surface activity of the active particle: where the superscripts, for example R αβ FE , indicate the linear relationship between particle α and particle β, while M UF = R −1 FU .The first term on the right-hand side of (3.28) represents the self-propulsion of the active particle, while the second term represents the change in the velocity due to hydrodynamic interactions induced by the surface strain rate of the active particle E s 1 (and higher-order moments).Hydrodynamic interactions also induce the motion of the passive particle.In essence, the moments of u s on the active particle result in a 'swim' force on both particles, which must then be balanced by drag due to rigid-body motion.The trajectories of both active and passive particles are illustrated in figure 2. As another example, consider a suspension of immotile 'shaker' particles.Using (3.17), the velocities of the particles are Here, there is no self-propulsion, only the effects of hydrodynamic interactions induced by the surface strain rate of the active particles E s k (and higher-order moments), as illustrated in figure 3.
Assemblies
As detailed by Swan et al. (2011) A set of particles may be constrained to move as a rigid body, that is, particle α in a rigid assembly A will move as where x A is a convenient point on the assembly.Following the notation in Swan et al. (2011), this may be written compactly in terms of six-dimensional vectors as , where Σ T αA projects the translational and rotational velocity of the assembly onto particle α.The rigid-body translational and rotational velocities of all N particles in an assembly may then be written in terms of 6N-dimensional vectors and tensors as The forces and torques that enforce the rigid constraints on the assembly, F c , must be included in the sum of forces on the particles: (3.34) These constraint forces are internal forces, and as such exert no net force or torque on the assembly.This may be written as where the operator Σ A , the transpose of the projection above, sums forces and torques (about x A ) on the assembly.In this way, the force balance on the assembly is Substitution of the relevant hydrodynamic forces and the kinematic constraint in (3.33) into this force balance leads to the rigid-body motion of the assembly given by where is the hydrodynamic resistance of the assembly.Equation (3.37) is an exact description of the dynamics of an assembly of active or passive particles; no approximation has yet been made.In particular, we note that while (3.37) yields the instantaneous rigid-body motion of the assembly, it does not mean that the assembly cannot deform.Indeed, through the prescription of the activity of each particle, by way of u s , we may construct an assembly of virtually any shape and kinematics.This approach is also extended straightforwardly to multiple assemblies through an extended operator Σ that sums forces on each assembly as shown by Swan et al. (2011).As discussed above, a natural method of solution is to use Stokesian dynamics to resolve hydrodynamic forces as a truncated set of moments.
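A minimal sketch of the kinematic projector Σ^T (and, by transposition, the force/torque summation operator Σ) for a single rigid assembly is given below; particle positions and the reference point x_A are the inputs, and the assembly resistance then follows as Σ · R_FU · Σ^T, the standard congruence form implied by the force balance above.

```python
# Sigma^T maps the assembly's six rigid-body velocities (U_A, Omega_A) to the 6N particle
# velocities via U_alpha = U_A + Omega_A x (x_alpha - x_A), Omega_alpha = Omega_A; its
# transpose sums particle forces/torques into a net force and torque about x_A.
import numpy as np

def cross_matrix(r):
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def assembly_projector(x, x_A):
    """x: (N, 3) particle centres, x_A: reference point. Returns Sigma^T of shape (6N, 6)."""
    x = np.asarray(x, dtype=float)
    x_A = np.asarray(x_A, dtype=float)
    N = len(x)
    Sig_T = np.zeros((6 * N, 6))
    for a in range(N):
        rx = cross_matrix(x[a] - x_A)
        Sig_T[6*a:6*a+3, 0:3] = np.eye(3)        # U_alpha inherits U_A
        Sig_T[6*a:6*a+3, 3:6] = -rx              # ... plus Omega_A x (x_alpha - x_A)
        Sig_T[6*a+3:6*a+6, 3:6] = np.eye(3)      # Omega_alpha = Omega_A
    return Sig_T

# Usage: Sig_T = assembly_projector(x, x_A); R_A = Sig_T.T @ R_FU @ Sig_T
```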
As an illustrative example of a deforming assembly, consider a simple reciprocal two-sphere (or dumbbell) swimmer (see figure 4a).In this model swimmer, two spheres labelled B 1 and B 2 (where B A = B 1 ∪ B 2 ), of radii a and λa, respectively, have a prescribed distance between their centres, L(t), that changes periodically in time.We describe the shape change of this swimmer as the motion of sphere B 1 relative to sphere B 2 ; in this way, u s is non-zero only on B 1 .Written in terms of an expansion in moments as in (3.22), u s (x ∈ B 1 ) = U s 1 = Lp, with all other terms exactly zero, while of the assembly u(x ∈ B 2 ) = U A , while B 1 has an additional component due to shape change, u(x ∈ B 1 ) = U A + U s 1 .By symmetry, this swimmer does not rotate, Ω A = 0.The choice of reference is not unique and affects what is delineated as rigid-body motion versus shape change at any particular instant; however, typically we are concerned with the time-averaged motion of the body, which is invariant to the choice of reference for periodic gaits, and the flexibility allows one to take advantage of simplifications implied by a particular choice.
Due to the lack or rotation or torque, only the force-velocity resistance tensor of the assembly, R A FU , and the linear operator that gives stress due to translation, T U , are required.Substitution of the swim force (3.4) into (3.37) and simplification leads to (3.38) where the resistance tensors FU are functions of the length L(t), and hence depend on time.We may further simplify by noting that the propulsive force and velocity will be collinear with the axis of symmetry, so only a scalar coefficient for each resistance is required.A symmetric swimmer with λ = 1 has Lp, so the dumbbell moves opposite to the deformation with half the speed, as expected.This reciprocal motion clearly leads to zero net displacement over a period when L(t) is periodic.Less obvious, but also true, is that this holds for any λ, by the scallop theorem (Purcell 1977).
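The scallop-theorem statement can be checked with a few lines of code. The sketch below deliberately neglects hydrodynamic interactions between the two spheres, so the resistances reduce to the single-sphere Stokes drags rather than the L-dependent resistance functions used above; even in this crude limit the assembly velocity is −L̇/(1 + λ), and the net displacement over a period vanishes for any λ.

```python
import numpy as np

def net_displacement(lam=2.0, dL=0.5, omega=2.0 * np.pi, n=4001):
    t = np.linspace(0.0, 2.0 * np.pi / omega, n)            # one period of L(t) = L0 + dL sin(wt)
    Ldot = dL * omega * np.cos(omega * t)
    U_A = -Ldot / (1.0 + lam)                                # drag balance: a (U_A + Ldot) + lam a U_A = 0
    return np.sum(0.5 * (U_A[1:] + U_A[:-1]) * np.diff(t))   # trapezoidal integral of U_A over a period

print(net_displacement(lam=1.0), net_displacement(lam=3.0))  # both ~ 0: no net swimming
```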
The previous example was particularly straightforward because u s was uniform, hence only the zeroth moment, U s , was non-zero.In contrast, consider a dumbbell swimmer with a fixed length L = const., but where sphere B 1 is a squirmer particle (see figure 4b), namely , with the moments of the surface velocity given by the squirming modes.In this case, the velocity of the assembly is given by and when the spheres are equal in size, λ = 1, we have simply Note that this swimmer can self-propel even when U s 1 = 0 due to hydrodynamic interactions with the second sphere.
Stokesian dynamics
The configuration-dependent N-body resistance tensors may be formed indirectly by first constructing the grand mobility tensor M = R −1 .In the Stokesian dynamics approach, one takes irreducible moments of the velocity field, as given by the boundary integral equation, over the surfaces of all the particles, yielding Faxén's laws for the velocity moments of the active particles (Batchelor 1972).If the boundary integral equations are also expanded in irreducible moments (Durlofsky et al. 1987), then we obtain a linear relationship between force and velocity moments: For active particles, the force moments contain contributions from the double-layer kernel due to the surface activity (see Appendix C for details).The grand mobility tensor is then inverted to obtain the grand resistance tensor R = M −1 , thereby summing many-body hydrodynamic interactions among the particles (Durlofsky et al. 1987).In principle, to capture near-field lubrication effects, the entire unbounded set of moments would need to be computed, and in practice this is unfeasible.The coupling between the mth moment of velocity and nth moment of force scales as r −(1+m+n) , so higher-order moments decay quite quickly with separation distance r between two particles, and a reasonable and common far-field approximation of the mobility is to truncate at the first moment level -we label this truncated mobility (Swan et al. 2011).This level of approximation is inappropriate for particles that are nearly touching, and the compromise used within Stokesian dynamics is to use a mixed asymptotic approach wherein close interactions are computed separately using pairwise exact solutions (Durlofsky et al. 1987).In this manner, the hydrodynamic forces on the particles are decomposed, into far-field interactions between many bodies F ff , and two-body interactions computed exactly for nearby bodies F 2B,exact .Note that the last term arises because the far-field interactions must be removed between any two bodies where the interactions are computed exactly, to avoid double counting.
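As an illustration of the far-field part of this construction, the sketch below assembles a force-velocity (zeroth-moment) pair mobility, the Rotne-Prager-Yamakawa tensor for equal non-overlapping spheres, and inverts it to sum many-body far-field reflections. This is only a stand-in for the truncated grand mobility M^ff discussed above, which also retains torque and stresslet couplings; it is not the full Stokesian dynamics operator.

```python
import numpy as np

def rpy_mobility(x, a=1.0, eta=1.0):
    """x: (N, 3) sphere centres (assumed r >= 2a for all pairs). Returns the 3N x 3N F-U mobility."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    M = np.zeros((3 * N, 3 * N))
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6.0 * np.pi * eta * a)      # single-sphere mobility
        for j in range(i + 1, N):
            rvec = x[i] - x[j]
            r = np.linalg.norm(rvec)
            rr = np.outer(rvec, rvec) / r**2                               # dyad r_hat r_hat
            Mij = (1.0 / (8.0 * np.pi * eta * r)) * (
                (1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                + (1.0 - 2.0 * a**2 / r**2) * rr)
            M[3*i:3*i+3, 3*j:3*j+3] = Mij
            M[3*j:3*j+3, 3*i:3*i+3] = Mij
    return M

x = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
R_ff = np.linalg.inv(rpy_mobility(x))      # inversion sums many-body far-field reflections
```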
To render exact solutions for active two-body hydrodynamic interactions, one needs to obtain the tensor field T U from the two-particle rigid-body motion problem.The general passive two-sphere problem for arbitrary separations may be constructed from a basis of four simplified two-sphere problems that have all been solved in the literature and are nicely summarized by Sharifi-Mood, Mozaffari & Córdova-Figueroa (2016) and Papavassiliou & Alexander (2017) in the context of two-body interactions between diffusiophoretic Janus particles and spherical squirmers, respectively.Asymptotic solutions for lubrication interactions, which are valid strictly only when the particles are very close, may be used alternatively and are given by Ishikawa et al. (2006) for spherical squirmers.
It is important to note that unlike passive particles in a linear background flow, active particles can have higher-order velocity moments due to surface activity. In § 3.2, we showed that even two-mode squirmer particles contribute third- and fourth-order velocity moments. For far-field interactions, these higher-order moments may not be significant due to the decay of the associated flow disturbances. However, for near-field interactions there is no rationale, other than the convergence of the series of tensorial spherical harmonics, to discard the contributions of higher-order velocity moments in the swim force. If higher-order moments are nonetheless discarded, then the approach for active particles is virtually identical to that of passive particles in Stokesian dynamics; the dynamics are given by (3.17), and the resistance tensor used is modified to include both exact two-body interactions R_2B,exact and a truncation of the moment expansion valid for far-field interactions, (M^ff)^(-1), such that R = (M^ff)^(-1) + R_2B,exact − R_2B,ff, (4.3) where the two-body interactions that are captured by the near-field approach must be subtracted in the far-field solution to avoid double counting (Swan et al. 2011). More accurately, the exact two-body swim force contributions may be computed entirely separately, as done by Ishikawa et al. (2006), by integrating (3.4) directly. In this way, (3.7) is written as a sum in which the terms on the first line represent the dynamics of passive spheres exactly as in conventional Stokesian dynamics, while the second line accounts for the contribution of activity, separated into two-body and far-field contributions. The primed resistance tensors contribute only far-field interactions. If Brownian motion is included, then it is R_FU from the total resistance (4.3) that sets the magnitude of the Brownian force.
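The bookkeeping of (4.3) can be written compactly as below; the exact and far-field two-body resistance blocks are assumed to be supplied (for example, from tabulated two-sphere solutions), and only the assembly of the total resistance is shown.

```python
# Combine the inverted far-field grand mobility with pairwise near-field corrections:
# add the exact two-body resistance for each close pair and subtract its far-field
# counterpart so that nothing is counted twice.
import numpy as np

def total_resistance(M_ff, close_pairs, R2B_exact, R2B_ff, dof=6):
    R = np.linalg.inv(M_ff)                                   # (M^ff)^(-1): many-body far field
    for pair in close_pairs:
        a, b = pair
        idx = np.r_[dof * a:dof * (a + 1), dof * b:dof * (b + 1)]
        R[np.ix_(idx, idx)] += R2B_exact[pair] - R2B_ff[pair]
    return R
```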
Infinite suspensions
The method described above was for a finite system of N active particles for which the fluid can be assumed to decay in the far field. For an infinite or periodic suspension of particles (active or passive), no such assumption can be made, and indeed naively extending N → ∞ leads to divergent integrals, a problem that plagued the earlier suspension literature (Batchelor 1972). Brady et al. (1988) adapted the method of O'Brien (1979) wherein the fluid domain for a set of particles is bounded by a large macroscopic surface over which suspension averages can be performed. Specifically, U ∞, Ω ∞ and E ∞ become the average values of the suspension -- particle plus fluid. Individual particle motion is then relative to the volume-averaged quantities. Suspension-averaged terms serve to regularize the formulas, leading to absolutely convergent expressions for fluid and particle velocities. Periodic boundary conditions may then be employed easily, and as Brady et al. (1988) showed, the far-field mobility matrix M ff may be simply replaced by the appropriate Ewald-summed mobility matrix M ff *. As discussed above, the mobility matrix is unchanged if particles are active or passive; only the force and velocity moments are altered by activity, and mobility is unchanged whether or not the suspension-averaged quantities are non-zero. Therefore, the Ewald-summed mobility matrix used for periodic passive suspensions is unchanged for active suspensions (Ishikawa et al. 2008). It is important to note that for self-propulsion there is no net volume displacement of material as the body moves: as the body advances, an equal volume of fluid moves in the opposite direction. In contrast, a body moving in response to an external force drags fluid along with it, and to have no net flux of mass, an external pressure gradient must be imposed. In principle, because the mathematical structure shown in (3.17) remains essentially unchanged between passive and active particles, any of these approaches may be used to simulate active suspensions with minor modification, and we do not suggest any particular numerical implementation here. Indeed, the recent fast Stokesian dynamics method utilizes an imposed Brownian 'slip' velocity in order to obtain the stochastic rigid-body motion of passive Brownian particles as shown in previous work (Delmotte & Keaveny 2015; Sprinkle et al. 2017), and we have adapted the fast Stokesian dynamics approach to include particle activity (see figure 5) but will discuss algorithmic details in a future work.
Conclusions
In this work, we have given a detailed exact theoretical description of the dynamics of suspensions of active particles in fluids in the absence of inertia, including full hydrodynamic interactions among particles. We argue that, as is done for passive particles, hydrodynamic interactions are ideally separated into near-field and far-field forces, with the latter expanded in a truncated set of moments. The resulting mathematical structure of the dynamical equations remains virtually unchanged between passive and active particles save for the addition of velocity moments due to particle activity. Because of this, any implementation of the Stokesian dynamics method for passive particles may be modified simply and easily for use with active particles. Moreover, we believe that this mathematical structure provides an ideal formalism for theoretical analysis of hydrodynamic interactions in active matter, much as it has for passive suspensions.
The tensorial spherical harmonics are orthogonal, with the relationship where the weight is The isotropic tensor Δ n is a 2n-order tensor that projects an n-order tensor into its symmetric irreducible form (Hess 2015); i.e. for the n-order tensor A, Δ n A = A, where is a complete tensor contraction.The first several symmetrizing tensors are where the primed indices are distinct from the unprimed ones.
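For reference, the n = 2 symmetrizing tensor can be built explicitly as follows; this is the standard symmetric-traceless projector and is easy to verify numerically.

```python
# Delta2 applied to a second-order tensor A returns its irreducible part:
# Delta2_ijkl A_kl = (A_ij + A_ji)/2 - (A_kk / 3) delta_ij.
import numpy as np

delta = np.eye(3)
Delta2 = 0.5 * (np.einsum("ik,jl->ijkl", delta, delta)
                + np.einsum("il,jk->ijkl", delta, delta)) \
         - np.einsum("ij,kl->ijkl", delta, delta) / 3.0

A = np.random.rand(3, 3)
A_irr = np.einsum("ijkl,kl->ij", Delta2, A)
assert np.isclose(np.trace(A_irr), 0.0)      # traceless
assert np.allclose(A_irr, A_irr.T)           # symmetric
```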
Appendix C. Expansion of the boundary integral equation
We derive here the grand mobility relationship between velocity moments and force moments by means of a Galerkin projection onto tensorial spherical harmonics (Singh et al. 2015). Consider the boundary integral equation for a suspension of active particles. The velocities and tractions on each particle are now expressed in terms of expansions in tensorial spherical harmonics. Now taking moments of the flow over the surface of particle α, u(x ∈ ∂B α), we systematically obtain mobility relationships for the α particle (C9), (C10), where the stresslet for active particles includes a contribution from the double-layer kernel. The mobility tensors are identical to those for passive particles, where I is the fourth-order identity tensor. We give the mobilities here in integral form (Ichiki 2002; Fiore & Swan 2018), but it is much more common to see them in the equivalent differential form, which may be found by Taylor expansion about the particle centres (Wajnryb et al. 2013; Mizerski et al. 2014; Fiore et al. 2017). For all N particles, we write the mobility relationships between velocity moments and force moments in compact form. To leading order in a dilute approximation, only the single-particle stresslet terms remain, and ES is diagonal.
Figure 2. Trajectory of a (pusher) active particle (labelled B_1) in the presence of a passive particle (labelled B_2).
Figure 3. An illustration of dynamics mediated by hydrodynamic interactions between immotile active particles (shakers). (a) 512 shaker particles are initially randomly oriented on a cubic lattice. Hydrodynamic interactions alone drive particle dynamics and mixing (b,c). Images are snapshots in time from left to right.
952Figure 5 .
Figure 5. Simulation of a suspension of 4000 identical active particles using the fast Stokesian dynamics method (warmer colours indicate faster speeds).
Complexity of magnetic-field turbulence at reconnection exhausts in the solar wind at 1 AU
Magnetic reconnection is a complex mechanism that converts magnetic energy into particle kinetic energy and plasma thermal energy in space and astrophysical plasmas. In addition, magnetic reconnection and turbulence appear to be intimately related in plasmas. We analyze the magnetic-field turbulence at the exhaust of four reconnection events detected in the solar wind using the Jensen-Shannon complexity-entropy index. The interplanetary magnetic field is decomposed into the LMN coordinates using the hybrid minimum variance technique. The first event is characterized by an extended exhaust period that allows us to obtain the scaling exponents of higher-order structure functions of magnetic-field fluctuations. By computing the complexity-entropy index we demonstrate that a higher degree of intermittency is related to lower entropy and higher complexity in the inertial subrange. We also compute the complexity-entropy index of three other reconnection exhaust events. For all four events, the $B_L$ component of the magnetic field displays a lower degree of entropy and higher degree of complexity than the $B_M$ and $B_N$ components. Our results show that coherent structures can be responsible for decreasing entropy and increasing complexity within reconnection exhausts in magnetic-field turbulence.
INTRODUCTION
Magnetic reconnection in plasmas refers to the process in which magnetic energy is converted to particle kinetic and thermal energy, resulting in a change of topology of the magnetic-field lines (Yamada et al. 2010;Treumann & Baumjohann 2013;Lazarian et al. 2015). The study of magnetic reconnection is key to understand the dynamics of solar flares, coronal mass ejections, rope-rope magnetic reconnection in the solar wind, and the interaction between solar wind and planetary magnetospheres. In addition, magnetic reconnection, turbulence, and intermittency seem to be intrinsically related in plasmas in a complex manner, hence, they need to be studied in relation to each other.
The solar wind is a natural laboratory for the study of magnetic reconnection. The conversion of magnetic energy into particle kinetic energy during the reconnection process leads to the formation of magnetic exhausts. The properties of magnetic exhausts have been studied recently using observational data. For example, Enžl et al. (2014) performed a statistical survey of 418 reconnection exhausts detected by the Wind spacecraft. They showed that the magnetic flux available for reconnection and the reconnection efficiency increase with the magnetic shear angle. Mistry et al. (2015) analyzed data from different spacecraft sampling oppositely directed reconnection exhausts of three different reconnection events. They showed that bifurcated current sheets are clearly observed when the spacecraft is located at a distance greater than ∼ 1000d i from the X-line, where d i is the ion skin depth. Chian et al. (2016) demonstrated that magnetic reconnection at the interface of two magnetic flux ropes provides an origin of intermittent magnetic-field turbulence in the solar wind. The statistics of 188 reconnection exhausts was studied by Mistry et al. (2017). They showed that the guide magnetic field within the exhaust is enhanced, and the plasma density and ion temperature at the exhaust increase as a function of the inflow plasma beta and the guide field. Numerical simulations have also been used to understand the properties of the turbulent plasma in reconnection exhausts. Pucci et al. (2017) performed three-dimensional (3D) particle-in-cell (PIC) simulations of magnetic reconnection, and showed that the turbulence at the outflows is anisotropic, and that the energy exchange and dissipation is concentrated at the interface between the ejected plasma and the ambient plasma. The outflow region has also been identified by Lapenta et al. (2018) as a source of instabilities that feeds a turbulent cascade and secondary reconnection sites in 3D PIC simulations of reconnection with a weak guide field. Adhikari et al. (2020) computed the scaling laws of energy spectra and second-order structure functions of 2.5D PIC simulations. They demonstrated that the inflow region displays a lower degree of turbulence compared to the diffusion, exhaust, separatrix, and island regions resulting from the reconnection process. Hence, there is a need to quantitatively characterize the turbulent behavior in these regions; for example, using a complex system approach; which can complement theoretical and simulation efforts.
In this respect, the Jensen-Shannon (J-S) complexity-entropy index is a statistical tool that allows one to distinguish noise from chaos (Rosso et al. 2007). It has been successfully applied to data from experiments with electronic oscillators (Soriano et al. 2011), stock market data (Zunino et al. 2009), the Southern Oscillation index (Bandt 2005), and heart rate variability (Bian et al. 2012), among others. For a detailed list of applications see Riedl et al. (2013). Weck et al. (2015) computed the J-S index of the interplanetary magnetic-field data detected by the Wind spacecraft, magnetic-field data of the Swarthmore Spheromak Experiment (SSX), and the ion saturation current data at the edge of the Large Plasma Device (LAPD). They showed that the Wind data displays high entropy and low complexity, similar to stochastic signals, whereas the SSX and the LAPD data display intermediate entropy and high complexity due to the lower number of degrees of freedom of the experimental devices, and the confined nature of the experiments compared to the interplanetary magnetic-field data. The J-S index was also applied to solar wind data collected by the Helios, Wind, and Ulysses spacecraft by Weygand & Kivelson (2019). Several intervals were selected, including slow and fast solar wind, interplanetary coronal mass ejections, and corotating interaction regions. They also obtained J-S index values characteristic of stochastic fluctuations, and showed that the complexity decreases and the entropy increases with the distance from the Sun.
In this paper we characterize the complexity-entropy of magnetic-field data of four reconnection exhausts detected in the solar wind. For the first event, we show that intermittency and multifractality are related to the degree of entropy and complexity. By projecting the magnetic field into the LMN coordinates (Sonnerup & Cahill Jr 1967;Gosling & Phan 2013), we show that the L component displays lower entropy and higher complexity than the M and N components. Our paper is organized as follows. Section 2 describes briefly the four magnetic reconnection events. Section 3 presents the methods employed for the data analysis. The results are presented in Section 4, and a discussion and conclusions are given in Section 5.
MAGNETIC RECONNECTION EVENTS
We analyze four reconnection exhausts detected at 1 AU. The timing of each interval is indicated in Table 1. Events 1 and 3 are magnetic reconnection exhausts detected by Wind at the interior of a magnetic cloud associated with an ICME, with a main shock arrival observed at 1:13 UT on 30 December 1997 and at 8:55 UT on 22 November 1997, respectively. Event 2 is a magnetic reconnection exhaust detected by Wind after the passage of an ICME with a main shock arrival observed at 7:18 UT on 12 November 1998. Event 4 is a reconnection exhaust detected by Cluster on 2 February 2002 (Phan et al. 2006). This exhaust is the result of the magnetic reconnection between a small-scale interplanetary magnetic flux rope (IMFR) and an intermediate-scale IMFR (Chian et al. 2016). We use the magnetic-field data from Wind at a resolution of 11 Hz for events 1, 2 and 3, whereas for event 4 we employ the data from Cluster at a resolution of 22 Hz.
The timing of events 1, 2, and 3 is based on the supplement table of 188 magnetic reconnection exhausts studied by Mistry et al. (2017). These events are the longest exhaust intervals during which the IMF experiment onboard Wind was operating at 11 Hz, resulting in a sufficient number of data points for analysis. For event 4, we use the data from four Cluster spacecraft and apply the curlometer technique to compute the modulus of the current density J. Since the exhaust is bounded by a bifurcated current sheet in the Petschek model of magnetic reconnection (Gosling & Szabo 2008), the exhaust interval of event 4 is defined using the timing of the main peaks of |J|. Table 1 also indicates the number of data points available from each interval.
METHODS
The vector magnetic field is projected onto the LMN coordinate system by applying the hybrid minimum variance analysis (MVA) (Gosling & Phan 2013; Mistry et al. 2015, 2017; Hietala et al. 2018). The L component is given by the direction of maximum variance and is related to the exhaust outflow direction, the M direction is related to the reconnection guide field direction, and the N component is the direction of minimum variance related to the normal of the current sheet. The N direction can be obtained as

ê_N = (B_1 × B_2)/|B_1 × B_2|,   (1)

where B_1 and B_2 are the magnetic-field vectors immediately adjacent to the exhaust boundaries. The M direction is given by

ê_M = (ê_N × ê_L')/|ê_N × ê_L'|,

where ê_L' is the maximum variance direction obtained from the classical MVA (Sonnerup & Cahill Jr 1967); and the L direction is

ê_L = ê_M × ê_N.   (2)

Note that, from Eq. (1), ê_N and ê_L' are not necessarily orthogonal, whereas ê_N and ê_M are orthogonal. From Eq. (2), ê_L is made orthogonal to ê_N and ê_M (Mistry et al. 2017). We compute the power spectral density (PSD) of the B_L, B_M, and B_N components using the Welch method (Welch 1967), which allows us to reduce the error of the spectrum estimate. The compensated PSD is obtained by multiplying the original PSD by f^{+5/3}. The inertial subrange can be identified as a frequency range in which the compensated PSD is nearly horizontal.
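For readers who wish to reproduce the coordinate construction, the following short sketch (our own illustration, not code from the original analysis) implements the hybrid MVA basis defined above; the field vectors B1, B2 and the maximum-variance direction eL_prime are made-up inputs.

```python
# Illustrative sketch of the hybrid MVA LMN basis (Eqs. 1-2); inputs are made up.
import numpy as np

def hybrid_mva_basis(B1, B2, eL_prime):
    eN = np.cross(B1, B2); eN /= np.linalg.norm(eN)        # current-sheet normal
    eM = np.cross(eN, eL_prime); eM /= np.linalg.norm(eM)  # guide-field direction
    eL = np.cross(eM, eN)                                  # outflow direction
    return eL, eM, eN

B1 = np.array([4.0, -2.0, 1.0])       # nT, field just outside one exhaust boundary
B2 = np.array([-3.5, 1.0, 2.0])       # nT, field just outside the other boundary
eL_prime = np.array([0.9, -0.3, 0.3])
eL_prime /= np.linalg.norm(eL_prime)  # maximum-variance direction from classical MVA

eL, eM, eN = hybrid_mva_basis(B1, B2, eL_prime)
B_LMN = np.vstack([eL, eM, eN]) @ B1  # (B_L, B_M, B_N) of a sample field vector
print(B_LMN)
```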
The large number of data points within the exhaust interval of event 1 allows us to compute higher pth-order structure functions of the magnetic-field fluctuations, S_p(τ) = ⟨|B(t + τ) − B(t)|^p⟩, where ⟨·⟩ denotes a time average (Miranda et al. 2013). The scaling exponents of structure functions can be computed from S_p(τ) ∼ τ^α(p), to quantify the departure from self-similarity (i.e., multifractality). The scaling exponents α(p) can be obtained by plotting S_p(τ) as a function of τ in log-log scale, and applying a linear fit within the inertial subrange. The inertial subrange is identified as the range of scales in which α(p = 3) = 1. However, the number of scales within the inertial subrange can be small and difficult to determine, especially for short time series. Therefore, the numerical values of α(p) will be affected by a large statistical error. For this reason we apply the Extended Self-Similarity (ESS) technique (Benzi et al. 1993), in which S_p(τ) is expressed as a function of the third-order structure function, S_p(τ) ∼ [S_3(τ)]^ζ(p). The value of ζ(p) can be estimated by plotting S_p(τ) as a function of S_3(τ) in log-log scale. The ESS technique results in a larger range of scales in which ζ(p = 3) = 1, thus reducing the statistical error and producing a more robust value for the computed scaling exponents.
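A minimal sketch of the structure-function and ESS estimates described above is given below; it uses a synthetic random-walk signal in place of the measured B_L, so the recovered exponents should simply follow ζ(p) ≈ p/3.

```python
# Sketch of structure-function and ESS scaling-exponent estimation for a
# 1D magnetic-field component (synthetic data stands in for B_L).
import numpy as np

rng = np.random.default_rng(1)
b = np.cumsum(rng.normal(size=20000))        # stand-in signal (random walk)

def structure_function(x, lags, p):
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag])**p) for lag in lags])

lags = np.unique(np.logspace(0, 3, 30).astype(int))
S3 = structure_function(b, lags, 3)

ess_exponents = {}
for p in range(1, 7):
    Sp = structure_function(b, lags, p)
    # ESS: slope of log Sp versus log S3 over the chosen range of scales
    ess_exponents[p] = np.polyfit(np.log(S3), np.log(Sp), 1)[0]

print(ess_exponents)   # for a self-similar signal, zeta(p) ~ p/3
```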
We compute the Jensen-Shannon complexity index, in which a probability distribution function (PDF) of ordinal patterns is obtained from the magnetic-field data within the exhaust interval. This PDF represents the frequencies of occurrence of all possible ordinal patterns of length d (Bandt & Pompe 2002; Weck et al. 2015). For example, suppose that the time series of a component of the magnetic field starts with {-2.67, 10.80, 1.72, -2.40, 11.21, ...}. The first d-tuple of length d = 3 is (-2.67, 10.80, 1.72) and the corresponding ordinal pattern, in ascending order, is (1, 3, 2) because −2.67 < 1.72 < 10.80. The second 3-tuple is (10.80, 1.72, -2.40) and the ordinal pattern is (3, 2, 1). For a time series of length K, there are K − d + 1 d-tuples and d! possible permutations of a d-tuple. The PDF of ordinal patterns is obtained by counting the number of occurrences of each possible permutation of ordinal patterns within the time series,

p_i = #(d-tuples with ordinal pattern i) / (K − d + 1),

where "#" stands for "number", and K − d + 1 > d!. The Shannon entropy is given by

S(P) = − Σ_{i=1}^{d!} p_i ln(p_i),   (3)

where P represents the PDF of ordinal patterns. The Shannon entropy is equal to zero if, for some i, p_i = 1 and p_j = 0 for all j ≠ i. This case represents a completely ordered system. Conversely, the Shannon entropy will be maximum if all possible ordinal patterns have the same probability. In this case S(P_e) = ln(d!), where P_e represents the uniform distribution. The normalized Shannon entropy can be written as

H(P) = S(P)/ln(d!).   (4)

Similarly, the Jensen divergence measures the "disequilibrium", or the "distance", between a distribution P and the uniform distribution P_e (Martin et al. 2006; Rosso et al. 2007),

Q_J[P, P_e] = Q_0 { S[(P + P_e)/2] − S(P)/2 − S(P_e)/2 },   (5)

where Q_0 is a normalization constant given by (Martin et al. 2006)

Q_0 = −2 { [(d! + 1)/d!] ln(d! + 1) − 2 ln(2 d!) + ln(d!) }^{-1}.

The Jensen-Shannon complexity is then given by

C_JS(P) = Q_J[P, P_e] H(P).   (6)

The pair (H, C_JS) can be represented in a plane called the complexity-entropy (C-H) plane. This plane can be separated into three regions, namely, a low-entropy and low-complexity region corresponding to highly predictable systems, an intermediate-entropy and high-complexity region corresponding to unpredictable systems with a large degree of structure, and a high-entropy and low-complexity region corresponding to stochastic-like processes (Rosso et al. 2007).
The number of data points K needed to compute Eqs. (4) and (6) reliably is (Amigó et al. 2008; Riedl et al. 2013)

K > 5 d!.   (7)

A large value of the d parameter can result in unreliable statistics, whereas a small value of d can result in an overestimated value of Eq. (6) (Gekelman et al. 2014). We set d to the maximum value for which Eq. (7) is satisfied for all intervals, as recommended by Amigó et al. (2008) and Riedl et al. (2013). For d = 5 we need K > 600, whereas for d = 6, K > 3600. From Table 1, all intervals have enough data points to satisfy Eq. (7) with d = 5.
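The ordinal-pattern statistics defined by Eqs. (4) and (6) can be computed in a few lines of code; the sketch below is an illustration (not the original analysis code), uses the argsort convention for ordinal patterns (equivalent to the ranking convention above up to a relabelling of patterns), and applies it to synthetic Gaussian noise.

```python
# Sketch of the permutation-entropy / Jensen-Shannon complexity calculation
# described above (d-tuples with an embedding delay T; synthetic data).
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_pdf(x, d=5, T=1):
    patterns = {perm: 0 for perm in permutations(range(d))}
    n = len(x) - (d - 1) * T
    for k in range(n):
        window = x[k:k + d * T:T]
        patterns[tuple(np.argsort(window))] += 1   # argsort convention
    return np.array(list(patterns.values()), dtype=float) / n

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def jensen_shannon_complexity(x, d=5, T=1):
    P = ordinal_pdf(x, d, T)
    N = factorial(d)
    Pe = np.full(N, 1.0 / N)
    H = shannon(P) / log(N)                          # normalized entropy, Eq. (4)
    Q0 = -2.0 / (((N + 1) / N) * log(N + 1) - 2 * log(2 * N) + log(N))
    QJ = Q0 * (shannon((P + Pe) / 2) - shannon(P) / 2 - shannon(Pe) / 2)
    return H, QJ * H                                 # (H, C_JS), Eq. (6)

rng = np.random.default_rng(2)
print(jensen_shannon_complexity(rng.normal(size=20000), d=5, T=1))
```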
INTERMITTENCY AND COMPLEXITY IN RECONNECTION EXHAUSTS
We start our analysis by describing the solar wind conditions around event 1. Figure 1 shows an overview of the plasma parameters observed by the MFI and SWE instruments onboard Wind, namely, the modulus of the magnetic field |B| (nT); the three components of the magnetic field B_x, B_y, and B_z (nT) in the GSE coordinates; the modulus of the proton velocity |V_p| (km/s); the proton density n_p (cm^−3); the proton temperature T_p (eV); and the proton beta β_p = 8π n_p K_B T_p/|B|^2, where K_B is the Boltzmann constant. This event is characterized by a main shock arrival detected at 1:13 UT on 30 December 1997, and a magnetic cloud from 9:35 UT on 30 December to 8:51 UT on 31 December (Nieves-Chinchilla et al. 2018). This magnetic cloud is characterized by an increase in |B|, a rotation of the magnetic-field direction, and a decrease in T_p and β_p. The horizontal lines indicate the boundaries of the ICME (black) and the magnetic cloud (violet), and the vertical dashed lines indicate the reconnection exhaust interval. Note that the value of β_p is low in Fig. 1 outside the reconnection exhaust because this event occurs at the interior of a magnetic cloud. Figure 2 shows a detailed view of the plasma parameters around event 1. The magnetic reconnection event is characterized by a decrease of |B|, a change of the B_z magnetic-field component in the GSE coordinates, and an increase of the proton beta β_p. The magnetic-field components in the LMN coordinates are also shown. The reconnection event is also characterized by the corresponding increases of the V_z velocity component (in GSE), n_p, and T_p. The reconnection exhaust interval, bounded by the two vertical dashed lines, has a duration of 1343 seconds, which gives 14783 data points within the exhaust (as indicated in Table 1).
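For reference, the proton beta defined above, β_p = 8π n_p K_B T_p/|B|^2, can be evaluated directly from the measured quantities; the helper below is an illustrative sketch with made-up input values in the instruments' native units.

```python
# Small helper for the proton beta used above (Gaussian units); values are made up.
import numpy as np

def proton_beta(n_p_cm3, T_p_eV, B_nT):
    p_thermal = n_p_cm3 * T_p_eV * 1.602e-12   # erg cm^-3 (k_B*T folded into the eV->erg factor)
    B_gauss = B_nT * 1.0e-5
    p_mag = B_gauss**2 / (8.0 * np.pi)          # erg cm^-3
    return p_thermal / p_mag

print(proton_beta(n_p_cm3=5.0, T_p_eV=10.0, B_nT=10.0))  # ~0.2 for these typical values
```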
The time series of the B L component shown in Fig. 2 displays discontinuous "jumps" near the boundaries of the reconnection exhaust interval, associated with the magnetic-field reversal that is required for the magnetic reconnection to occur. Since the magnetic shear occurs at a scale larger than the selected interval, we apply a trend removal technique to the time series of B L . This is achieved by computing a third-order polynomial fit to the time series of B L , and removing the resulting fit from the original time series. Figure 3(a) shows the time series of B L after detrending, represented by B * L . We have also applied the same procedure to obtain the time series of the B * M and B * N components, shown in Figs. 3(b) and 3(c), respectively. Hereafter, we will analyze the time series of B * L , B * M , and B * N , and refer them simply as B L , B M , and B N , respectively. We compute the scaling exponents ζ of the structure functions with the ESS technique (i.e., S p (τ ) ∼ [S 3 (τ )] ζ(p) ), within the inertial subrange identified by the compensated PSD. Figure 5 shows the structure functions before and after applying the ESS technique to the time series of the B L component. The inertial subrange in Fig. 5(a), shown as a grey background, is obtained from the compensated PSD of Fig. 4(b), and coincides with the interval in which α(p = 3) = 1. The grey background of Fig. 5(b) indicates the extended interval in which the scaling exponents ζ are obtained. The horizontal black line represents the original inertial subrange. Note that the extended interval cannot include kinetic scales, which starts near the ion cyclotron frequency. From Fig. 4, a spectral break marking the end of the inertial subrange occurs near f = 0.5 Hz, which corresponds to a scale of τ ∼ 2 s. This scale is outside the shaded region of Fig. 5(a), and corresponds to S 3 (τ )/S 3 (T ) = 238 in the horizontal axis of Fig. 5(b), which is also outside the interval used to compute the scaling exponents. We have also applied the ESS technique to the structure functions of the B M and B N components using the same procedure. Figure 6 shows ζ as a function of the p-th order structure function for B L , B M , and B N . The vertical bars indicate the error in the fit. Intermittency and multifractality within the inertial subrange are responsible for deviations of ζ from the linear scaling of Kolmogorov's 1941 (hereafter K41) self-similar model. This figure shows clearly that, for higher-order statistics, the B L component displays a stronger departure from the K41 scaling than B M , which in turn displays a greater departure than B N . Therefore, the B L component displays a higher degree of multifractality and intermittency than the B M and the B N components.
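The trend-removal step described above (subtracting a third-order polynomial fit from B_L to obtain B*_L) can be sketched as follows; the signal here is synthetic and merely stands in for the measured component.

```python
# Sketch of the polynomial detrending applied to each field component.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1343.0, 14783)                  # seconds (event 1 interval)
trend = 1e-8 * (t - 700.0)**3 + 2.0                  # made-up large-scale field reversal
B_L = trend + rng.normal(scale=0.5, size=t.size)     # stand-in for the measured B_L

coeffs = np.polyfit(t, B_L, deg=3)                   # third-order polynomial fit
B_L_detrended = B_L - np.polyval(coeffs, t)          # B*_L used in the analysis
```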
Next, we show how the intermittency of the magnetic-field fluctuations is related to entropy and complexity for event 1. We compute the J-S index (i.e., Eqs. (4) and (6)) for B_L, B_M, and B_N. Figure 7 shows the d = 5 C-H plane. The crescent-shaped curves indicate the minimum and maximum values of C_JS for a given value of H. Symbols indicate the (H, C_JS) values of three chaotic maps, namely, the logistic map x_{n+1} = r x_n (1 − x_n) with r = 4; the skew tent map with w = 0.1847; and the Hénon map. We also define an embedding delay T, which means that d-tuples are sampled on a larger time scale instead of consecutive points. This allows us to relate the computed (H, C_JS) values to a given time scale. We set the embedding delay T = 110 data points. For event 1, this value of T corresponds to a time scale of 10 s, which is within the inertial subrange (see Fig. 4). Similar results were obtained for embedding delay values corresponding to time scales ∈ [5, 15] s. Figure 7 shows that the three magnetic-field components display (H, C_JS) values close to the bottom-right region of the C-H plane, which corresponds to stochastic-like processes. However, the B_L component displays a lower degree of entropy and a higher degree of complexity than the B_M component, which in turn displays lower entropy and higher complexity than the B_N component. This pattern is also observed when we choose different values of d. Setting d = 4, the (H, C_JS) values of the three magnetic-field components are slightly shifted closer to the bottom-right region of the entropy-complexity plane, while their relative positions in the C-H plane with respect to each other are maintained. For d = 6, the three values are slightly shifted away from this region, also keeping their relative positions in the C-H plane, demonstrating the robustness of this result. Note that the number of data points of event 1 still satisfies Eq. (7) with d = 6. The numerical values of (H, C_JS) for d = 4, 5, and 6 are given in Table 2. We apply the same analysis to events 2, 3, and 4, except for the computation of scaling exponents, because the number of data points for each of these events is insufficient to guarantee the convergence of high-order statistics. However, as stated in Section 3, the number of data points of these events satisfies Eq. (7) for d = 5, allowing us to compute H and C_JS. Figure 7 shows the entropy-complexity plane of the four reconnection exhausts. For all the analyzed events, the B_L component displays lower values of H and higher values of C_JS, as compared with the B_M and B_N components.
DISCUSSION AND CONCLUSIONS
The results shown in Figs. 6 and 7 suggest that a higher degree of intermittency is related to a decrease of entropy and an increase of complexity. Intermittency is related to the presence of coherent structures in turbulent fluids and plasmas, resulting in non-Gaussian PDFs (Sorriso-Valvo et al. 2001; Chian & Miranda 2009), a finite degree of amplitude-phase synchronization (Koga et al. 2007; Chian & Miranda 2009), and multifractal scaling exponents (Bershadskii & Sreenivasan 2004; Bruno et al. 2007; Miranda et al. 2013). Coherent structures are also responsible for lower values of the Fourier power spectral entropy in 3D compressible MHD simulations of an intermittent dynamo (Rempel et al. 2009) and lower values of the spectral power and phase entropies in 3D incompressible MHD simulations of a Keplerian shear flow (Miranda et al. 2015). We have shown that the B_L, B_M, and B_N components have H and C_JS values similar to stochastic fluctuations, in agreement with previous analyses of interplanetary magnetic-field data (Weck et al. 2015; Weygand & Kivelson 2019). Our results indicate that, within magnetic reconnection exhausts, coherent structures are responsible for decreasing entropy and increasing complexity. We note that the H and C_JS values of the B_L, B_M, and B_N components are consistent with those of fractional Brownian motion (fBm) with Hurst exponents 0.525, 0.425, and 0.335, respectively. These values can provide additional quantitative constraints to simulation and modeling of magnetic reconnection in plasmas.
By comparing the J-S index of the B L , B M , and the B N magnetic-field components we have shown that, for the four events selected, the B L component has lower entropy and higher complexity than the B M component; and the B M component has lower entropy and higher complexity than the B N component. From the analysis of event 1 it follows that the B L component is more intermittent than B M and B N due to coherent structures within the inertial subrange. Coherent structures and intermittency are also related to energy dissipation in anisotropic MHD turbulence (Müller et al. 2003;Miranda et al. 2013). Our results indicate that the energy dissipation in these four magnetic reconnection exhaust events is strongest in the B L component. For events 2, 3, and 4, high-order statistics and scaling exponents cannot be computed reliably due to the small number of data points. High-resolution data from instruments operating on higher cadence modes would be needed for the convergence of higher-order statistics. However, we have shown that similar results can be obtained by computing the J-S index. Since magnetic exhausts in the solar wind at 1 AU usually have a short duration (Enžl et al. 2014), the J-S index can be a useful tool for the analysis of the magnetic-field turbulence within exhausts.
The exhaust intervals in Table 1 were carefully defined to avoid including the strong discontinuities due to the magnetic-field reversal near the boundaries of the reconnection exhaust (see the time series of B L in Fig. 2). We have also applied a detrending technique to further remove large-scale variations of the time series (see Fig. 3). Despite this, the higher degree of intermittency and complexity, and the lower degree of entropy, observed in the B L as compared with the other components, can still be due to the field reversal that occurs at a larger scale. Previous studies have pointed out evidence of a direct coupling between large-scale fluctuations and small-scale intermittency in the solar wind (Vörös et al. 2006;Miranda et al. 2018). Therefore, the inertial-range coherent structures in the B L component within the reconnection exhaust can have their origin on the magnetic reconnection process that occurs at a larger scale. Note that this is in agreement with the main conclusion of Chian et al. (2016).
Our analysis of the three components of the magnetic field by the hybrid MVA suggests that intermittency and complexity-entropy vary with the field direction. Other characterizations of anisotropy in magnetic-field fluctuations in the solar wind, such as spectral anisotropy, have been demonstrated by several studies (e.g., Matthaeus et al. 1990;Dasso et al. 2005;Šafránková et al. 2021). In that context, spectral anisotropy refers to the unequal distribution of energy between the wave-vectors directed parallel and perpendicular to the mean magnetic field. Energy spectra computed using single-spacecraft data displays unequal distribution of energy among magnetic-field components, which is termed variance anisotropy. However, this anisotropy observed using single-spacecraft data is not sufficient to demonstrate spectral anisotropy, and must be interpreted with caution. For example, calculated variances from single-spacecraft data can be misleadingly anisotropic even in the presence of an isotropic distribution of energy (Oughton et al. 2015). A careful analysis of the variance anisotropy displayed by the energy spectra, and the different behavior of intermittency and complexity-entropy of magnetic-field components will be the focus of a future work.
In summary, in this paper, we analyzed the LMN components of the magnetic field at the exhaust of four reconnection events detected in the solar wind at 1 AU. The link between intermittency and complexity within the inertial subrange was demonstrated for the first event by computing the scaling exponents and the J-S index. For the four events, all components have H and C_JS values within the stochastic region of the C-H plane. The B_L component displays a higher degree of intermittency, lower entropy, and higher complexity than the B_M and the B_N components. Our results confirm that magnetic-field turbulence within reconnection exhausts is intermittent, with various levels of multifractality in different directions, suggesting that nontrivial coherent structures are responsible for varying degrees of entropy and complexity. These results can contribute additional constraints to the ongoing efforts on magnetic reconnection modelling and numerical simulation.
Observationally quantified reconnection providing a viable mechanism for active region coronal heating
The heating of the Sun’s corona has been explained by several different mechanisms including wave dissipation and magnetic reconnection. While both have been shown capable of supplying the requisite power, neither has been used in a quantitative model of observations fed by measured inputs. Here we show that impulsive reconnection is capable of producing an active region corona agreeing both qualitatively and quantitatively with extreme-ultraviolet observations. We calculate the heating power proportional to the velocity difference between magnetic footpoints and the photospheric plasma, called the non-ideal velocity. The length scale of flux elements reconnected in the corona is found to be around 160 km. The differential emission measure of the model corona agrees with that derived using multi-wavelength images. Synthesized extreme-ultraviolet images resemble observations both in their loop-dominated appearance and their intensity histograms. This work provides compelling evidence that impulsive reconnection events are a viable mechanism for heating the corona.
General: Unfortunately I do not find the work contained in the manuscript terribly novel. Also, I do not think that the outcomes are unique, largely as a result of the model-observation metric applied. As a result it is very difficult to assess if the rather obvious claim being made in the title and abstract is indeed the case. I will provide some examples below, but cannot see how anything other than a radically different approach could be used to back the claim made above.
Are they novel and will they be of interest to others in the community and the wider field? I [opinion] think it is widely believed that magnetic reconnection at small-scales (nanoflares; as discussed originally by Parker) is important for the transport of mass and energy throughout the outer solar atmosphere, so the questions that I ask myself are: does the present work unambiguously highlight reconnection being the majority component of the possibly competing heating events? and, do the diagnostics presented even permit such a discernment being made? In both situations I cannot answer in the affirmative.
There is a statement in the opening paragraph that wave energy [Alfvenic, Torsional, etc] is not sufficient to heat the active corona. I believe that this statement may be true, but McIntosh et al follow with a statement that their assessment regarding wave energy is impeded by the spatiotemporal resolution of the observations and there could be considerable amounts of "hidden energy". That's ok. There is a different issue, however: at the smallest spatial scales there is NO difference between the processes of wave dissipation and nanoflare heating. The braiding, twisting, jostling of the field lines [which are invisible] induced by magneto-convection is really a wave process, and the dissipation of induced small-angle differences in the magnetic field are "nanoflares." I refer the authors to two recent and popular reviews: http://adsabs.harvard.edu/abs/2012RSPTA.370.3193D and http://adsabs.harvard.edu/abs/2012RSPTA.370.3217P

My issues with the bulk of the manuscript can be broken into those two categories, 1) the modeling approach and 2) the diagnostics applied to it. I will address these separately, as follows:

Modeling Methodology: The model is static, there is no flux emergence - can the authors comment on the validity of excluding flux emergence as driving heating; surely the impact could/would be "nanoflares" too. I am to believe that the driving velocities are easily discernible from observations. That is likely not the case. The driving velocities must be subject to noise characteristics in the methodology applied. Can the "field lines" be resolved at BBSO/NST resolution, and can they demonstrate this? I'm highly skeptical, as the resolution of HMI is extremely out of range. Are chromospheric diagnostics [imaging or otherwise] used to help clarify? This approach is a theoretical toy. Also, what is the statistical range of alpha parameters in the NLFF extrapolations and how do they vary locally and globally over the modeled field of view as well as over time. I am VERY concerned that the noise induced by forcing a NLFF solution is actually what you're seeing the impact of and not any physical transport. What is the velocity spectrum applied? Is it an observational one? There's simply no completeness here to establish the applicability of the method.

Diagnostic Methodology - DEM is a blunt knife. It is a measure of density-squared-weighted emission of plasmas at a given temperature. It is a function of density and temperature as the authors identify. But as used it can characterize the apparent distribution of temperatures of the emitting plasma in a pixel of code/observation from relatively broad (in temperature) transmission functions. It is not a unique method. The broadband transmission profiles are nowhere near as exact as the state-of-the-art approach of reproducing line intensities, widths and Doppler velocities that are required to back up claims made that go beyond the "well nanoflares must be important" presumption in the community.
- While qualitatively giving the appearance of reproducing the observations, depending on the color table and range chosen, critical elements are missing, indicating that the final temperature distribution in the active region modeled is not correct and the thermodynamics are incorrect - there are extensive regions of "moss" in the field of view. Moss lies in the transition region of hot, high-pressure loops. What does the lack of moss being reproduced tell us about the modeling and/or diagnostic methodology applied? Do you feel that the paper will influence thinking in the field? > No. Too many simplistic approximations made in analysis and methodology.
We would also be grateful if you could comment on the appropriateness and validity of any statistical analysis, as well as the ability of a researcher to reproduce the work, given the level of detail provided. > See above.
REFEREE REPORT
I consider the work of extreme interest not only for physics of the solar corona, but for stellar coronae in general and possibly also for other applications in magnetized plasmas.
The slipping reconnection is an important energy release mechanism known to be present in solar flares, both predicted theoretically and later confirmed observationally. The idea to apply the slipping reconnection mechanism to coronal heating and modeling of coronal emission is novel; as is the finding that the slipping velocity is structured on small spatial scales. The observational paper of Aulanier et al. (2007, Sci 318, 1588) and a short comment in Testa et al. (2013, Astrophys. J., 770, 1) represent the only clues I am aware of that the slipping reconnection could be important in coronal physics as well.
The authors succeed in generating a coronal emission that both contains coronal loops and requires no fudge (filling) factor to scale the modeled emission to the observed one. Both these points are important, given that some of the previous attempts failed in either or both accounts.
However, the coronal emission synthesis employed ought to be improved upon, as there are a number of difficulties that must be addressed.
COMMENTS AND SUGGESTIONS
[1] The synthesis of temperature and density (from which the DEM(T) and AIA emission is calculated) is simplistic, and in turn may not be entirely realistic, since it i] disregards the loop geometry as calculated from the NLFFF extrapolation of the HMI vector magnetograms; both assuming semi-circular geometry (Eq. M2) and no expansion of the magnetic flux tubes. The latter is not even commented upon.
ii] assumes equilibrium solutions, i.e., no time derivative in Eq. (M1). This perhaps could be justified by public unavailability of the 1D hydrodynamic codes. However, the manuscript finds that the value of R, representing a ratio of the heating scale-length to loop half-length L/2, is optimally chosen at R=0.3. This finding is problematic, since short heating scale-lengths could prevent equilibrium solutions, a well-known fact already noted by Serio et al. (1981, Astrophys. J., 243, 288) and subsequently by Aschwanden et al. (2001, Astrophys. J., 550, 1036. observations and the model. [5] The authors cite articles from The Astrophysical Journal (13) and Solar Physics (6), but only two articles from Astronomy & Astrophysics. The authors should consider whether there might be an unconscious bias.
[6] The reference #3 (Zou et al. 2017) does not seem to be appropriate on page 1, line 19. This paper concerns observations of an active region filament rather than a coronal structure.
[7] The reference #23 (Viall & Klimchuk 2012) does not seem to be appropriate on page 4, line 103. These authors do not build a model of active region corona.
[8] The CHIANTI 8.1 atomic database and software is not properly referenced, although it is essential to building the model of coronal emission. The proper references (see www.chiantidatabase.org/referencing.html) are Dere et al. 1997, Astron. Astrophys. Suppl., 125, 149 Del Zanna et al. 2015, Astron. Astrophys., 582, A56 [9] Please note that the "latest" (page 8, line 204) 'hybrid' abundances Schmelz et al. (2012) are not necessarily the 'correct' coronal abundance values, as the FIP bias in corona may depend e.g. on age or individual structure. Using a different set of abundances (photospheric or coronal) would change the total radiative losses Lambda(T), by a factor of about 2, but will not influence the visualization process of AIA images itself, since in coronal conditions the AIA images are dominated only by Fe ions.
Jaroslav Dudik referee
Reviewer #3 (Remarks to the Author): Here is my review for the manuscript entitled "Observationally quantified reconnection providing a viable mechanism for active region coronal heating" by Yang et al.
The manuscript addresses the relevant issue of seeking for observational support for the nanoflare scenario for coronal heating in active regions. The paper is very well written and the authors make a compelling case using a schematic theoretical model and the observational data at their disposal. However, I think a few words of caution should be added to the text, before I can recommend publication of this manuscript.
I agree with their back-of-the-envelope calculation to estimate the heating rate, and I understand that the authors do their best to come up with constraints for the free parameters in this calculation. However, for the benefit of the readers, I think the authors should stress that these are crude estimates at best. I am referring for instance to the diameter L for the flux elements being reconnected. Based on Big Bear data, 160 km would definitely be an upper limit rather than a typical value. Chances are that there is a whole range of values of L for the very many reconnection events consistent with the nanoflare scenario. There is no reason to believe that the events that Big Bear managed to resolve just barely happen to be typical dissipation events. Also, the authors assume that the heating time τ_r is much longer than typical cooling times, but that is based on the same assumption that 160 km is the typical size and not an upper limit. I urge the authors to take these considerations into account and warn the reader in this direction.
We thank the reviewers very much for a careful reading and constructive comments on the paper.
Based on these comments, we have made a substantial revision of the paper. We hope this revision can meet the requirements and answer the reviewers' questions. Our revisions in the paper are in cyan color. The replies to the reviewers' questions are listed as follows. Following a recent discussion of terminology, we have changed the term 'slipping velocity' to 'non-ideal velocity'.
Reviewer 1, Reviewer's general comments: Q: Unfortunately I do not find the work contained in the manuscript terribly novel. Also, I do not think that the outcomes are unique, largely as a result of the model-observation metric applied. As a result it is very difficult to assess if the rather obvious claim being made in the title and abstract is indeed the case. I will provide some examples below, but cannot see how anything other than a radically different approach could be used to back the claim made above.
A: Thanks for the comments. We would like to express our point of view on the novelty of the paper. We think that our results are novel in the following aspects. We consider the effect of 3D reconnection of magnetic fields in a novel way, in which the conjugate footpoints show a departure from the ideal plasma flow. And we do NOT use the velocity spectrum as the input to the model. We do not drive the system with imposed motions with specified spectral properties, as the referee implies. Instead we use a novel heating expression which incorporates measured quantities and only a single free parameter, R.
Most importantly, the parameter L can be constrained by the high-resolution observations instead of an artificial high resistivity, which cannot be measured directly. Although the coronal model (we use the equilibrium state as the approximation of the plasma response to the heating) is simple, its predictions are remarkably similar to the observations. As shown in the main text, we calculate the non-ideal velocity by combining the techniques of DAVE4VM and the optimization method of the NLFFF. These techniques offer a reasonable way to infer the physical parameters from the observations, V and B, which are used to test our heating expression. The diagnostic methodology uses not only the intensity histogram but also the DEM inversion technique. Although the DEM method is an ill-posed problem, if we vary the input data and try different codes as shown in Figure R5, we can still trust the region where the solution is stable. Considering that there are no other more accurate and practical methods at present, we think we have done our best in performing such an investigation. We have also estimated the errors by 50 Monte-Carlo realizations (Figure 5 in the main text). The result shows that our calculation is reliable, with acceptable errors.
Q: Are they novel and will they be of interest to others in the community and the wider field? Q: I am to believe that the driving velocities are easily discernible from observations. That is likely not the case. The driving velocities must be subject to noise characteristics in the methodology applied.
A: Thanks for the comments. We agree that it is difficult to calculate the driving velocities, but we are here making the best attempt with state-of-the-art observations, namely from HMI. We agree that noise is a limitation and thus we have now attempted to estimate the level of errors in our computation of the heating (shown in Figure R2).
Among all the features, we note that those locations with the largest values of the plasma velocity, non-ideal velocity and heating flux have the smallest errors (Figure R2 a, b, and c). In those locations with magnetic field strength greater than 100 G, the relative errors of the plasma velocity and non-ideal velocity are less than 0.6 (Figure R2 d and e). Moreover, when the non-ideal velocity and heating flux increase, the relative errors decrease (Figure R2 g, h, and i). In addition, the non-ideal velocity is larger than the plasma velocity, but it does not matter, since it is only an apparent motion, not a physical motion. A figure of these errors has been added in the revised manuscript as Figure 5, and a discussion on the errors has been added in the first paragraph of the Discussion section (Line 155-160 Page 6). Figure R3 shows that during 2 hours it changes very little. A Monte Carlo test has been conducted by adding random, normally distributed errors to the photospheric magnetogram, and repeating our analysis 50 times. The reduced plasma velocity, non-ideal velocity, heating, and the corresponding errors are shown in Figure R2. We are not imposing a velocity spectrum, but are deducing the footpoint velocities from observations, and then combining the magnetic field to measure the non-ideal velocity. However, as the reviewer requested, we show the plasma velocity spectrum and the non-ideal velocity spectrum in Figure R4.

ii] assumes equilibrium solutions, i.e., no time derivative in Eq. (M1). This perhaps could be justified by public unavailability of the 1D hydrodynamic codes. However, the manuscript finds that the value of R, representing a ratio of the heating scale-length to loop half-length L/2, is optimally chosen at R=0.3. This finding is problematic, since short heating scale-lengths could prevent equilibrium solutions, a well-known fact already noted by Serio et al. (1981, Astrophys. J., 243, 288) and subsequently by Aschwanden et al. (2001, Astrophys. J., 550, 1036). The importance of geometric effects and their connection to departures from thermal equilibrium in coronal loops are discussed in works of Dudik et al. (2011, Astron. Astrophys., 531, A115) and Mikic et al. (2013, Astrophys. J., 773, 94). In particular, combinations of expanding cross-sections and short heating scale-lengths could produce thermal non-equilibrium in coronal loops; evidence of which has been detected in the solar corona (e.g., Froment et al. 2015, Astrophys. J., 807, 158; Froment et al. 2017, Astrophys. J., 835, 272; see also Mok et al. 2016, Astrophys. J., 817, 15).
A: i] Thanks for the comments. We agree with the referee that our approach may not be 'entirely realistic'. We have two reasons for not using the real geometry from the NLFFF.
• Since we populate every coronal point independently rather than superposing distinct loops, in our data we have 341 × 501 × 501 points. If we use the real geometry of the field lines from the NLFFF, we would need to solve the equilibrium equations for 8 × 10 7 separate loops. This seems impractical at present, and would take an undue amount of time and computer resources.
• The way the assumed axis geometry enters our model is actually a rather subtle issue.
Every volume element is populated independently using a single point taken from an equilibrium loop. We just use the physical parameters at this single point but not the whole loop; thus semi-circular loops are not evident in any of our model results. We use the true distance along the real (non-semi-circular) field line to determine which point along the equilibrium loop to draw the physical parameters from.
The majority of every loop is at high enough temperature that thermal conduction is extremely effective and the pressure scale height is comparable or larger than the loop's full length. This means that our assumed geometry has very little effect on the precise values of density and temperature, which we find and use for the vast majority of the coronal volume. The only place where this reasoning breaks down is near the feet where temperature is low.
The loop's feet will not be affected by the detailed shape we adopt for the entire axis, but they will be affected by the angle from vertical the axis makes at the footpoint: the inclination angle of the legs. We had assumed that the legs were perfectly vertical, which is admittedly a singular case. Under that assumption, density and pressure fell off with the small length scale along the axis. In point of fact, the loops found in the NLFFF have a distribution of inclination angles, which will lead to a distribution of density scale distances: all systematically larger than ours. We have therefore chosen to quantify how the distribution of inclination angles will affect our results. We measure the inclination of the loop footpoints at the bottom and show them in Figure R7. We can see that the loops with a lower height usually have a large inclination angle, while the higher ones are nearly perpendicular to the bottom (close to the semi-circular structure). We also make a comparison between the oblique loop and the semi-circular geometry with a typical heating value (Figure R8)
where Γ is the expansion factor and L is the loop length. The value of parameter R in our fitting results can be treated as the one that has been modified with the expansion factor. If we only consider the case s H < L, then the parameter R without expansion effect can be found in Figure R10. Thus the expansion effect here is only changing the optimal value of parameter R to a large one, e.g., R(R Γ = 0.3, Γ = 30) ≈ 1. Some discussions have been added in the 3rd and 4th paragraphs in the Discussion section (Line 178-186 Page 6-7). ' (page 4, line 93). This statement appears to be at odds with the equilibrium assumption.
A: We thank the referee for this comment. It seems that we had written this incorrectly. (2004, Astrophys. J., 615, 512), while an entirety of newer literature on the subject is ignored. This is a large omission. An incomplete list some of the papers dealing with synthesis of coronal emission is provided below, and I request that authors cite at least several of these papers: Mok et al. 2005, Astrophys. J., 621, 1098Mok et al. 2008, Astrophys. J., 679, 161 Mok et al. 2016, Astrophys. J., 817, 15 Warren & Winebarger 2006, Astrophys. J., 645, 711 Warren & Winebarger 2007, Astrophys. J., 666, 1245Winebarger et al. 2008, Astrophys. J., 676, 672 Lundquist et al. 2008, Astrophys. J. Suppl. Ser., 179, 509 Lundquist et al. 2008, Astrophys. J., 689, 138 Martinez-Sykora et al. 2011, Astrophys. J., 743, 23 Dudik et al. 2011, Astron. Astrophys., 531, A115 Peter & Bingert 2012, Astron. Astrophys., 548, A1 Lionello et al. 2013, Astrophys. J., 773, 134 Bourdin et al. 2013 Figure 2c. Omitting the intensity histograms precludes further quantitative comparison among the observations and the model. (13) and Solar Physics (6) There is no reason to believe that the events that Big Bear managed to resolve just barely, happen to be typical dissipation events. Also, the authors assume that the heating time τ r is much longer than typical cooling times, but that is based on the same assumption that 160 km is the typical size and not an upper limit. I urge the authors to take these considerations into account and warn the reader in this direction.
From our analysis, the nano-flare frequency is about 5 km s
A: We thank the reviewer for this comment. We agree that 160 km is an upper limit rather than a typical value. We have emphasized that 160 km is an upper limit.
Ultrasound Elastography: Basic Principles and Examples of Clinical Applications with Artificial Intelligence—A Review
: Ultrasound elastography (USE) or elastosonography is an ultrasound-based, non-invasive imaging method for assessing tissue elasticity. The different types of elastosonography are distinguished according to the mechanisms used for estimating tissue elasticity and the type of information they provide. In strain imaging, mechanical stress is applied to the tissue, and the resulting differential strain between different tissues is used to provide a qualitative assessment of elasticity. In shear wave imaging, tissue elasticity is inferred through quantitative parameters, such as shear wave velocity or longitudinal elastic modulus. Shear waves can be produced using a vibrating mechanical device, as in transient elastography (TE), or an acoustic impulse, which can be highly focused, as in point-shear wave elastography (p-SWE), or directed to multiple zones in a two-dimensional area, as in 2D-SWE. A general understanding of the basic principles behind each technique is important for clinicians to improve data acquisition and interpretation. Major clinical applications include chronic liver disease, breast lesions, thyroid nodules, lymph node malignancies, and inflammatory bowel disease. The integration of artificial intelligence tools could potentially overcome some of the main limitations of elastosonography, such as operator dependence and low specificity, allowing for its effective integration into clinical workflow.
Introduction
Elasticity is a mechanical property of tissues that is vitally linked to their microstructure, function, and pathology [1]. In biological tissues, elasticity mainly depends on the composition of the extracellular matrix, which is affected by aging and the development of diseases [2]. Therefore, a pathological process that alters the structure and composition of tissue also affects its elasticity. Intuitively, this is why clinical palpation is still an essential tool for the detection of disease during physical examination [3].
Elasticity cannot be measured directly using conventional imaging methods; however, several techniques have been developed to obtain an indirect quantitative estimate of this important parameter [1].Ultrasound elastography (USE) is a promising non-invasive imaging technique that exploits the physical principles and properties of ultrasound to assess the elasticity (or, conversely, the stiffness) of different biological tissues for diagnostic purposes [4,5].The physical principles at the base of ultrasound imaging can be found here [6].
In the first section of this article, we propose a step-by-step introduction to the basic principles of USE, beginning with the definition of elasticity and its main equations, with the help of some simplified models.Since this article targets clinicians and residents, we avoided differential notation in the equations.We are aware of several limitations to our description, from a formal standpoint.Still, we have tried to achieve a good balance between correctness and ensuring the accessibility of this content for people without advanced mathematical knowledge.
In the second part, we provide a concise overview of the main clinical applications, focusing on the liver, breast, thyroid, lymph nodes, and bowel.For each application, we report some evidence to illustrate how USE could successfully integrate B-mode ultrasound to enhance diagnostic performance [7,8].
USE has many of the advantages of standard ultrasound examination that have contributed to its popularity in recent decades: it is safe, inexpensive, widely available, and quick to perform.However, elastography also inherits important limitations, first of all, its low specificity and operator dependence, which have hindered its large-scale implementation in clinical practice [7,8].
In this landscape, therefore, for each application we finally present some recent evidence of how artificial intelligence, when applied to elastosonography measurements, could partially overcome these limitations, thereby improving its diagnostic performance and effective integration into the clinical practice.
Principles of Ultrasound Elastography
The physical principles of the USE are complex: for general, introductory articles, see [5,7]; for intermediate-level reviews, see [9,10]; in-depth accounts can also be found elsewhere [11,12].
Definition of Elasticity
Since elastosonography is essentially elasticity imaging, it is worth commencing with a definition of elasticity.
In material mechanics, the elasticity of a material is the property that describes its tendency to resume its original size and shape after being subjected to a deforming force, or stress [13,14]. A more detailed account of the elastic properties of tissues may be found in an earlier work [14]. Rubber is an example of a highly elastic material.
The opposite of elasticity is plasticity, which is the propensity of a solid to undergo largely irreversible changes in shape, in response to applied forces (for example, clay).In the real world, most materials present a mixture of elastic and plastic properties.
Conversely, stiffness is also defined as the opposite of elasticity and is the ability of a body to oppose the elastic deformation caused by an applied force [13].Thus, elasticity and stiffness do not describe different properties, but instead convey the same underlying property of "being elastic" from opposite perspectives: a tissue that provides high elasticity has low stiffness (rubber), while a tissue that has high stiffness has low elasticity (steel).
Longitudinal Elasticity and Young's Modulus (YM)
In elasticity imaging, researchers are interested in knowing a particular quantity, called Young's modulus (YM), which expresses the tissue property of their topic of interest, namely, its elasticity.Elastosonography techniques are essentially differentiated on the basis of how they estimate this quantity [7,8].
If an elastic body is not free to move when it is subjected to an external force, it deforms and develops a force that opposes the deformation. Hooke's law describes the behavior of an elastic spring subjected to a force, F, longitudinally, either in traction or compression (Figure 1) [13]. Hooke's law states that an elastic body undergoes a deformation that is directly proportional to the force applied to it:

F = −k_E ∆l_x    (1)

where F is the force of retraction of the spring that is equal and opposite to the applied force, ∆l_x represents the elongation undergone by the spring along the x-axis (we suppose that the deformation occurs only in one dimension), and k_E represents the longitudinal elasticity coefficient, which depends upon the nature of the material itself and is dimensionally expressed as [N·m⁻¹]. This law is valid, of course, within certain limits, beyond which the body loses the capacity to return to its original shape (elastic behavior) and becomes permanently deformed (plastic behavior). The minus sign in the second member of the equation means that we are considering the restoring force exerted by the spring on whatever is pulling its free end, since the direction of the restoring force is opposite to that of the displacement.

The formulation of Hooke's law in Equation (1) is useful to describe the deformation of a spring that is stressed longitudinally, in traction or compression, along the x-axis, but it does not fit well with more complex models.
Hereafter, we present a more general formulation of Hooke's law (Equation (2)) that uses two quantities, the stress and the strain [8,13]:

σ = E ε_l    (2)

where σ represents the applied stress, ε_l represents the strain, and E is a constant, known as Young's modulus (YM) [8-14], which represents the resistance to deformation along the axis of stress. Similarly to k_E in Equation (1), YM depends on the intrinsic characteristics of the body. Stress is defined as the ratio between the applied force, F, and the surface, A, on which the force is applied [9,14]:

σ = F/A    (3)

Its unit of measure is the Pascal, dimensionally defined as N/m².
The longitudinal strain ε_l is defined as the ratio between the change in length and the initial length [9,14]:

ε_l = ∆l/L = (l_f − l_i)/l_i    (4)

where l_f is the final length and l_i is the initial length (also denoted L). The strain, unlike the displacement ∆l_x in Equation (1), is a dimensionless quantity.
Although Equations (1) and (2) look similar, they are not equivalent (see Figure 2). In Equation (2), Young's modulus (E) appears in place of the longitudinal coefficient of elasticity (k_E). YM expresses the relationship between the longitudinal stress and the longitudinal deformation (strain), in the case of uniaxial load conditions and fully elastic behavior of the material:

E = σ/ε_l    (5)

As previously stated, YM describes the property of the tissue that is of most interest to us, that is, the way in which it reacts to external mechanical stress. Unlike the longitudinal elasticity coefficient, the accepted measurement unit of YM in the International System is the Pascal (N/m²).
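As a concrete illustration of Equations (3)-(5), the following minimal Python sketch computes the stress, the strain, and Young's modulus for a uniaxially loaded sample; the numerical inputs are arbitrary example values, not measured tissue data.

```python
# Minimal numerical illustration of Equations (3)-(5): stress, strain and
# Young's modulus for a uniaxially loaded sample. Values are arbitrary
# example inputs, not measured tissue data.

force_N = 0.5          # applied force F [N]
area_m2 = 1.0e-4       # loaded surface A [m^2]
l_initial_m = 0.020    # initial length l_i [m]
l_final_m = 0.021      # final length l_f [m]

stress_Pa = force_N / area_m2                      # sigma = F / A
strain = (l_final_m - l_initial_m) / l_initial_m   # eps_l = (l_f - l_i) / l_i
youngs_modulus_Pa = stress_Pa / strain             # E = sigma / eps_l

print(f"stress = {stress_Pa:.0f} Pa")
print(f"strain = {strain:.3f} (dimensionless)")
print(f"E      = {youngs_modulus_Pa / 1e3:.1f} kPa")
```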
For a homogeneous isotropic solid, the relation between the stress and the strain can be considered linear for small changes (i.e., for strains of less than a few percent); thus, the ratio E is a constant [10]. As the stress increases from zero, the strain initially increases rapidly, and the effective elastic modulus becomes progressively greater as the strain increases [10].
A further complication is that tissue may be viscoelastic, and this issue will be discussed in the next sections.
Shear Wave and Shear Modulus (G)
Ultrasound waves propagate via compression and rarefaction (longitudinal) waves and via shear waves, in which the particles move orthogonally to the direction of the ultrasound beam [6,10,13] (Figure 3). The other principal wave modes are surface and plate waves; however, these are hardly relevant to ultrasound propagation in biological soft tissues and, therefore, are not given further consideration here [10]. Shear waves are at the basis of shear wave elastography (SWE) methods, in which the physical quantity of interest is the shear wave speed, measured in [m/s]. Young's modulus of elasticity (E) may then be derived from the shear wave velocity, knowing the relationship between the shear modulus (G) and the elastic modulus (E), under the assumptions of constant density, homogeneity, isotropy, and incompressibility of the material [7-9].
To understand the mechanisms underlying shear wave elastography, we consider a cylindrical model, which represents a portion of tissue. Until now, we have only considered the deformation that occurs along the force axis (longitudinal strain); however, in the absence of volume variations, a cylindrical object becomes shorter and wider when compressed (Figure 4).

Figure 4. The relationship between the longitudinal and the transverse strains in solids, using a cylindrical model. Before compression, the cylinder has a length (L) and radius (R). After the application of force (red arrow), the cylinder becomes shorter and wider. The longitudinal strain (ε_l) is defined as the ratio between the change in length (∆l) and the initial length (L). The transverse strain (ε_t) is defined as the ratio between the variation of the radius (∆r) and the initial radius (R).
For this cylinder, it is possible to calculate not only the longitudinal strain, ε_l, but also the transverse strain, ε_t. The percentage change in the radial direction, ε_t, is called the transverse strain and, analogously to the longitudinal strain in Equation (4), is defined as [5,7,10,14]:

ε_t = ∆r/R    (6)

where ∆r represents the change in the radius and R represents the initial radius. Since the two components act together, it is possible to calculate the ratio between the transverse and the longitudinal deformations. This ratio is named Poisson's ratio and is indicated by the Greek letter ν ("nu") [5,7,10,14]:

ν = ε_t/ε_l    (7)

The Poisson ratio is important as it represents the degree to which the material shrinks or expands transversely in the presence of longitudinal stress (see Figure 5) [5,7,9,10]. In the case of a virtually incompressible material, it has a value equal to 0.5 [5,7]. This is important because biological tissues react to compression more similarly to our cylinder than to the spring in Figure 1. In the cylinder example, we considered the transverse strain as the result of longitudinal compression, and that model is important to understand how the two types of deformation are related. However, it is possible to consider a simplified model in which the transverse strain is produced by a force that acts tangentially to the direction of displacement; in this case, we may speak more properly of shear stress (Figure 6). Similarly to the calculations for YM, it is possible to define a quantity, G, called the shear modulus [5,7,9,10]. The equation is:

σ = G ε_t    (8)

where σ is the tangential (or shear) stress, G is the shear modulus, and ε_t represents the shear strain (i.e., the percentage of transverse displacement).

Figure 6. The model helps to visualize the concept of shear stress. We consider a prism of elastic material and a shear force F (red arrow), applied parallel to the x-axis. Similarly to the calculations for Young's modulus (E), it is possible to describe a quantity, which we define as G, to indicate the shear modulus. G is the constant of proportionality between the shear stress and the shear strain. The transverse strain can also be expressed as a function of the angle, θ.
The two elastic moduli, E and G, are interrelated. The shear modulus, G, can be derived from the longitudinal modulus, E, and the Poisson's ratio:

G = E / [2(1 + ν)]    (9)

which, by making E explicit, becomes:

E = 2G(1 + ν)    (10)

Due to the high water content of biological tissues, the Poisson's ratio is near 0.5, from which [5,7]:

E ≈ 3G    (11)

Finally, since biological tissues are not exactly incompressible, we must introduce a third modulus of elasticity, called the bulk modulus (Figure 7), which is a measure of how the volume changes under pressure [5,7,10]:

K = σ / ε_v    (12)

where ε_v = ∆V/V represents the volume strain. These three elastic moduli are important, not only because they define the deformation of a material subjected to a stress but also because they influence the speed of propagation of mechanical waves in the medium [5,7,9,13,14]. For soft materials, such as rubber or biological tissue, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is nearly 0.5 [5,7,14].
For a sound wave propagating through biological tissues, which have mechanical properties halfway between those of solids and liquids, the parameter K allows us (better than the parameter E) to estimate the longitudinal propagation speed, c_l, which is given by:

c_l = √(K/ρ)    (13)

where ρ represents the density of the medium [5,7]. For soft tissues, K ranges from about 1800 MPa in fat to about 2800 MPa in muscle [10].
If we consider the speed of the waves propagating in the transverse direction (shear waves), the corresponding relation becomes:

c_s = √(G/ρ)    (14)

where ρ represents the density of the medium [5,7].
According to Equations (13) and (14), the greater the value of K or G (i.e., the stiffer the medium), the faster the waves will propagate [5].
The shear wave velocity in soft tissues is significantly lower (1-10 m/s) than the longitudinal wave velocity and covers a wider range of values.However, in soft tissues, the longitudinal wave velocity is usually approximated to 1540 m/s, comparable to sound velocity in water (1500 m/s), with minimal changes between the different tissues (ranging from 1412 m/s in fat to 1629 m/s in muscle) [10].
This fact has important implications in terms of USE because the differences in the shear wave velocity between soft tissues can be used to obtain information on the elasticity between different tissues, with good contrast resolution [5,7].
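To give a sense of the magnitudes involved, the short Python sketch below evaluates Equations (13) and (14) for order-of-magnitude soft-tissue values; the bulk modulus is taken from the 1800-2800 MPa range quoted above, while the shear modulus and density are assumed illustrative values.

```python
import math

# Sketch of Equations (13) and (14): longitudinal and shear wave speeds
# from the bulk and shear moduli. The moduli are order-of-magnitude
# values for soft tissue (K within the 1800-2800 MPa range quoted in the
# text; G is an assumed few-kPa value), not measurements.

rho = 1000.0   # tissue density [kg/m^3], close to water
K = 2.2e9      # bulk modulus [Pa]
G = 4.0e3      # shear modulus [Pa]

c_longitudinal = math.sqrt(K / rho)   # Equation (13)
c_shear = math.sqrt(G / rho)          # Equation (14)

print(f"longitudinal speed ~ {c_longitudinal:.0f} m/s")   # ~1480 m/s
print(f"shear wave speed   ~ {c_shear:.2f} m/s")          # ~2 m/s
```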
Viscoelastic Models
Until now, we have considered biological tissues as soft solids that are characterized by perfectly elastic behavior, without considering the viscous components.However, since biological tissues are highly hydrated, internal friction forces cannot be neglected [10].Ignoring the viscosity introduces errors and biases in the context of elasticity estimates [15].Since biological tissues contain both elastic and viscous components, they are called viscoelastic media.
When shear stress is applied to a fluid, instead of a deformation (as with the solid in Figure 6), it produces sliding of the various layers of fluid [13]. This sliding is hindered by internal frictional forces, a property known as viscosity. The dynamic viscosity of a fluid is a measure of its resistance to flow when tangential stress is applied, owing to adjacent layers of fluid moving at different speeds. Shear elasticity in solids and viscosity in fluids are interconnected, but there are differences: viscosity is a property that manifests itself in a liquid and depends on the rate at which the stress is applied.
Figure 8 illustrates the concept of viscosity and highlights the similarities with shear strain in a solid.
Shear wave velocity is determined by both elasticity and viscosity [15]. The model presented in Figure 9 (the Kelvin-Voigt model) is used to describe the behavior of solid viscoelastic media. When the applied force varies very slowly, the effect of viscosity can be ignored. Conversely, if a high-frequency vibration is applied, the viscous component will have a greater effect, the magnitude of which will depend on the frequency.

Figure 8. A plate resting on a layer of liquid is pulled with a force of intensity, F. If there were no friction present, the plate would move with a uniformly accelerated motion; however, in reality, its speed tends to be constant over time, since the force F is balanced by a friction force due to the sliding of the various layers of liquid (under the conditions of laminar motion). The viscous friction force acting on the plate has an intensity that is given by relation 1 and is directly proportional to the sliding speed (v). From relation 1, it is easy to derive relation 2, as highlighted by the red arrow. It is worth noting that, unlike the shear stress in solids (relation 3), the shear deformation is not expressed as a longitudinal strain, but instead as a gradient of velocity, that is, the change in velocity with respect to the depth of the liquid. Red arrows highlight the similarities.

When the Kelvin-Voigt model is used, the following equation is derived for the speed of a transverse wave, in place of Equation (14) [5,15,16]:

c_s(ω) = √[ 2(G² + ω²µ²) / ( ρ (G + √(G² + ω²µ²)) ) ]    (15)

where G is the shear modulus, µ is the dynamic viscosity coefficient, and ω is the angular frequency of vibration [15].
Therefore, (1) the shear wave velocity (SWV) is determined by both elasticity and viscosity [15,16], and (2) the SWV increases roughly with the square root of the frequency; the higher the frequency, the faster the speed [5,15,16].
In biological tissue characterization applications, G is often on the order of a few thousand pascals and µ is often less than 10 Pa·s. Thus, for a purely elastic material, or when the wave frequency is very low (less than a few Hz), the viscous component can be omitted; the SWV is constant over all frequencies and can be obtained via Equation (14) [15,16].
However, for viscoelastic media, especially when the frequency is very high (above a few tens of kHz), the shear wave speed increases monotonically with frequency [15-17]. Not only does the velocity increase with the frequency, but viscosity also causes velocity dispersion during wave propagation in soft tissues when the frequency is high [5,15-17].
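The following Python sketch evaluates the Kelvin-Voigt dispersion relation (Equation (15)) over a range of vibration frequencies; the shear modulus and viscosity are assumed illustrative values for a soft viscoelastic tissue, not data from the cited references.

```python
import math

# Sketch of the Kelvin-Voigt dispersion relation (Equation (15)):
# shear wave speed as a function of angular frequency. G and mu are
# assumed illustrative values for a soft viscoelastic tissue.

rho = 1000.0   # density [kg/m^3]
G = 4.0e3      # shear elastic modulus [Pa]
mu = 2.0       # dynamic viscosity [Pa*s]

def kelvin_voigt_speed(freq_hz: float) -> float:
    """Shear wave phase velocity in a Kelvin-Voigt medium."""
    omega = 2.0 * math.pi * freq_hz
    m = math.sqrt(G**2 + (omega * mu)**2)
    return math.sqrt(2.0 * m**2 / (rho * (G + m)))

for f in (50, 100, 200, 400, 800):
    print(f"{f:4d} Hz -> c_s = {kelvin_voigt_speed(f):.2f} m/s")

# The speed increases with frequency (dispersion); with mu = 0 the
# expression reduces to the constant sqrt(G/rho) of Equation (14).
```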
From Elasticity to Strain Imaging
Since elasticity concerns the material's resistance to deformation, the application of an external force is required to measure it: this, in very simple terms, is the underlying principle of strain imaging, which extracts information on tissue elasticity by exploiting the deformation induced by external stress [5,7]. Strain imaging was the first type of ultrasound-based elastography, introduced in the early 1990s by Ophir and colleagues [18].
It is divided into two different techniques, according to the modality of the compression [5].
In strain elastography (SE), compression is applied (usually to the skin surface) directly through manual compression or, indirectly, through the heart pulse or respiratory movements [5,7,10,19]. In freehand compression, the ultrasound probe takes on the dual functions of a transducer and a mechanical actuator [19]. The operator uses the transducer to produce a quasi-static load, compressing the tissue by up to about 3-5% [5]. Force is applied for a length of time that is sufficiently long for the induced tissue strains to effectively become stabilized (quasi-static) [10].
In theory, YM could be derived from Equation (2), knowing the stress and strain. However, since the stress that is applied manually or physiologically is not quantifiable, SE cannot provide quantitative elasticity measurements of YM expressed in kPa [23]. However, assuming that the stress exerted is uniform, strain imaging provides qualitative information concerning tissue elasticity by comparing the different strains of the insonified adjacent tissues [7,23]. The differential displacement between adjacent tissues can be evaluated via different techniques, depending on the manufacturer, including spatial-correlation methods, Doppler processing, or a combination of these two methods [5,7,9].
In the spatial correlation method, the strain is inferred from an analysis of the images before and after compression, and by mapping the differences in the speckle pattern within a region of interest (ROI) along the beam axis [5,7,9,24].Even if it has currently been replaced by other mechanisms, this method is useful for understanding the underlying general principles [5,7,9,24].
Figure 10A-D presents a simplified model of its inner workings. Assuming that the tissue has a purely elastic behavior and that the displacement occurs only in the longitudinal direction, this deformation can be approximated with a one-dimensional spring model (Figure 10A). Before the compression (a), 4 different points (named P1-P4), located at different depths (z), are identified on the spring. After the compression (b), each point is displaced downward (δ), with the more superficial points showing a greater displacement than the deeper ones (δ_1 > δ_2 > δ_3 > ...). Intuitively, the displacement is maximum for a point located at the free end of the spring (on the skin) and is virtually zero for a point located at the anchored end. The graph in Figure 10A(c) represents the relationship between the depth (z) and the displacement (δ). The slope of the line expresses the coefficient of elasticity, an intrinsic characteristic of the medium.
Figure 10B shows three spring models, representing biological tissues with different elasticity.Before compression occurs, they have the same initial length (Z), represented with the dotted gray silhouette superimposed.Assuming the stress exerted (red arrows) to be identical, the tissue on the left (blue spring) demonstrates the greatest stiffness, the tissue in the middle (green spring) has intermediate stiffness, and the one on the right (red spring) is the most elastic.In each spring, we depicted four representative points (yellow dots) which were virtually aligned before the compression.The three graphs in the second row represent the relationship between the initial depth of the points and the displacement obtained.Again, the slope of the corresponding graph (same color as the spring) expresses the coefficient of elasticity; it is higher for the blue spring and lower for the red spring.
In Figure 10C, in place of the spring, a speckle pattern is represented along the main axes.The ultrasound beam passes through tissues with different elasticity: points A and B are located in a more elastic part of the tissue (in red), while points C and D are located in a stiffer area (in blue).The system correlates the position of the speckles before and after the compression.For each part of the tissue, the strain (ε) is inferred by considering the differential displacement (displacement in relation to the depth).As is evident, points A and B are displaced proportionally more than points C and D. The strain of the soft tissue (ε soft ) is higher than the strain of the hard tissue (ε hard ).The graph in the bottom right shows the relationship between depth and displacement for different areas of the tissue: this is the result of the combination of two graphs with different slopes, with a stiffer central part and two more elastic parts at the ends.
In Figure 10D, the stiffer area is represented by a nodular lesion (in blue).The two graphs on the right represent the relationship between the strain (ε) (upper graph) and the elastic modulus (E) (lower graph) and the depth (z).Note that there are no units of measurement in the graphs, as the estimate is qualitative (stress is not quantifiable).
The mechanism explained in Figure 10 works as long as the strain is extremely slight, and the window moves while maintaining its speckle pattern.We only consider the displacement in the direction of propagation of the ultrasound beam.
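The toy Python sketch below illustrates the spatial-correlation idea of Figure 10 in one dimension: local displacement is estimated by cross-correlating windows of a synthetic pre- and post-compression A-line, and the strain is recovered as the slope of the displacement-versus-depth line. It is a didactic simplification using synthetic speckle, not a vendor algorithm.

```python
import numpy as np

# Toy 1-D sketch of the spatial-correlation idea in Figure 10: the local
# displacement is estimated by cross-correlating windows of the pre- and
# post-compression A-lines, and the strain is the slope of displacement
# versus depth. Synthetic speckle and a simple integer-lag search; real
# scanners use far more refined (phase-based) estimators.

rng = np.random.default_rng(0)
n = 4000
pre = rng.standard_normal(n)                 # synthetic speckle A-line
depth = np.arange(n, dtype=float)

true_strain = 0.004                          # uniform 0.4% axial strain
post = np.interp(depth - true_strain * depth, depth, pre)

win, step, search = 128, 64, 24
centers, lags = [], []
for s in range(0, n - win - search, step):
    ref = pre[s:s + win]
    corr = [np.dot(ref, post[s + lag:s + lag + win]) for lag in range(search)]
    lags.append(int(np.argmax(corr)))        # displacement estimate [samples]
    centers.append(s + win // 2)

est_strain = np.polyfit(centers, lags, 1)[0]  # slope of displacement vs depth
print(f"estimated strain ~ {est_strain:.4f} (true value {true_strain})")
```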
The autocorrelation-based method is the preferred estimator used without considering the lateral displacement [24].However, each region of interest (ROI) also moves in the transverse direction due to the lateral deformation of the tissues; therefore, an adjustment is needed to estimate the displacement accurately in both the longitudinal and transverse directions.
In the Doppler-based method [5,7], the phase difference between the echo signals obtained by transmitting repeated pulses before and after compression is detected, and an autocorrelation method is used to calculate the displacement [5].
The methods described so far are not suitable for real-time processing because of the amount of computation time required.Concerning their practical application in the clinical setting, since the fluctuations in compression speed are large when manual compression is used, a high degree of accuracy is required to accommodate small displacements.More recently, some authors have developed a new tool, called the combined autocorrelation method (CAM), which combines the merits of the spatial correlation method and phase differences detection [5,24].
Visualizing SI Information
In strain imaging, elasticity information can be displayed in many ways [8].The main modality is a semi-transparent color map, called an elastogram, which is represented as if superimposed on a B-mode image.Typically, high stiffness is displayed in blue, and low stiffness is displayed in red, although the color scale may vary, depending on the manufacturer or personal preferences [8,23,[25][26][27].
In strain imaging, the results may also be expressed through semi-quantitative parameters.The strain ratio is a parameter that is normally used to compare the stiffness of a discrete mass lesion with the adjacent tissue.In the strain ratio, two regions of interest (ROI) are drawn on the target region and an adjacent (usually normal) reference region that is experiencing similar stress.Then, the strain ratio is automatically calculated by the machine as the mean strain in the reference (B), divided by the mean strain in the "lesion" (A) [8,23].Both ROIs should be placed at the same depth.
Strain Ratio (B/A) = mean strain of fat area (B) / mean strain in lesion of interest (A)
A strain ratio > 1 usually indicates that the target lesion compresses less than the normal reference tissue, thus indicating lower strain and greater stiffness (or lower elasticity) [8,23].The usefulness of this index emerges, for example, for the evaluation of nodular lesions in which the probability of malignancy increases as the deformation ratio increases [24].
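As a minimal numerical illustration of the strain ratio, the snippet below uses invented mean strain values for a reference fat region and a lesion; a ratio greater than 1 indicates that the lesion deforms less than the reference tissue.

```python
# Minimal illustration of the strain ratio: the mean strain in a reference
# region (B, e.g., adjacent fat) divided by the mean strain in the lesion
# (A). The numbers are invented for the example.

mean_strain_reference_B = 0.012   # reference tissue deforms more
mean_strain_lesion_A = 0.003      # stiff lesion deforms less

strain_ratio = mean_strain_reference_B / mean_strain_lesion_A
print(f"strain ratio B/A = {strain_ratio:.1f}")   # 4.0 -> lesion stiffer than reference
```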
Many elasticity scores or grading systems have been proposed to qualitatively classify the elastography color patterns in a wide spectrum of disease processes, including breast imaging [27], thyroid nodules [28,29], and bowel inflammatory disease [30].
The fat-to-lesion strain ratio is the strain ratio between fat and a lesion [31].
Finally, the elastography-based maximum size of the lesion can be compared with the corresponding B-mode image and can be expressed as a ratio as well [32].
A good practical guide written by experts in the field on how to perform deformation elastography is available in a previous work [23].
From Shear Waves to Shear Wave Imaging
Shear wave imaging focuses on the shear waves created by mechanical excitation in solids, in which the particles move perpendicularly to the direction of propagation [1,9,15]. As previously mentioned, the propagation speed of shear waves in soft tissue is several orders of magnitude slower than the speed of sound waves in soft tissue, ranging from 1-10 m/s, compared to 1540 m/s. For this reason, measurement of the shear wave speed is suitable for producing a good contrast resolution for soft tissues. In shear wave imaging, Young's modulus, E, is calculated from the shear wave speed. Starting from Equation (14) and making G explicit, we obtain:

G = ρ c_s²    (16)

Since E ≈ 3G (Equation (11)), we derive:

E ≈ 3ρ c_s²    (17)

so that the measurement of c_s allows the estimation of E and G. Density ρ has units of kg/m³ and c_s has units of m/s; therefore, ρc_s² is dimensionally defined as [kg/(m·s²)], which is equivalent to [N/m²], i.e., [Pa], the unit of measurement of E and G. A recent consensus advocates reporting the results as shear wave velocity (SWV) in terms of m/s, as part of a standardized approach [5,33].
From a technical point of view, the calculation of the shear wave speed makes use of so-called time-of-flight (TOF) methods, which perform a linear regression of the wave time arrival with respect to different positions [5,33].The TOF indicates the measurement of the time taken by an object, a particle, or a wave to travel a certain distance in a given medium; knowing the distance between the two points and the time taken to travel that distance, it is possible to derive the speed.
In shear wave elastography (SWE), the shear wave speed within a location of interest is derived by cross-correlating the time profiles of the shear wave-induced displacement at two neighboring points.Starting from the comparison of these profiles, a mathematical function yields the time taken for the shear wave to travel between the two points, then the shear wave speed is obtained by dividing the distance between the two points by the transit time [5,33].TOF-based methods employ assumptions about tissue behavior to generate an estimate of the shear wave velocity, including local homogeneity and a known direction of propagation [5,33].
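The short Python sketch below illustrates a basic TOF estimate under the stated assumptions: synthetic shear wave arrival times at several lateral positions are regressed against position, the inverse slope gives the shear wave speed, and Young's modulus follows from E ≈ 3ρc_s² (Equation (17)). The arrival times and density are invented example values.

```python
import numpy as np

# Sketch of a time-of-flight (TOF) estimate of shear wave speed: the
# arrival time of the shear wave front is regressed against the lateral
# position of the tracking locations, and the speed is the inverse of the
# slope. Young's modulus then follows from E ~ 3*rho*c_s^2 (Equation (17)).
# Arrival times are synthetic values with a little jitter.

rho = 1000.0                                   # assumed density [kg/m^3]
positions_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # lateral offsets
true_speed = 2.5                               # m/s (illustrative)
arrival_ms = positions_mm / true_speed + np.array([0.02, -0.01, 0.0, 0.01, -0.02, 0.01])

# linear regression of arrival time [s] versus position [m]
slope, _ = np.polyfit(positions_mm * 1e-3, arrival_ms * 1e-3, 1)
c_s = 1.0 / slope                              # shear wave speed [m/s]
E_kpa = 3.0 * rho * c_s**2 / 1e3               # Young's modulus [kPa]

print(f"shear wave speed ~ {c_s:.2f} m/s, E ~ {E_kpa:.1f} kPa")
```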
As with strain imaging, shear waves can be generated by different sources, including external vibration and the acoustic radiation force [5,7].There are currently two technical approaches for SWE: one-dimensional transient elastography (1D-TE) [5,7,34,35] and acoustic radiation force impulse (ARFI) shear wave elastography [5,7,21,[36][37][38].The first commercially available ultrasonic shear wave measurement system was the FibroScan TM (Echosens, Paris, France), which uses the probe as a mechanical actuator [5,34].The probe produces a controlled mechanical excitation through a piston that punches the surface of the body at a known frequency and amplitude; it is integrated with an ultrasonic transducer to monitor the impulse of the shear waves generated by the piston.This method was designed specifically to measure liver stiffness and does not provide a 2D guide for the operator; it is similar to A-mode imaging.Therefore, the sampling relies on the operator's knowledge of the gross anatomy of the liver.The analysis of the echo pattern along the A-line allows adjustment of the acoustic window, avoiding the suboptimal ones due to the interference of vascular structures or other causes.
In ARFI-based SWE, an acoustic impulse is used to generate shear waves.Unlike ARFI strain imaging, the tissue displacement itself is not measured; instead, the velocity of the shear waves, which propagate perpendicularly to the direction of the ultrasound beam, is measured.YM can then be derived.
Visualizing SWE Information
In transient elastography (TE), the ultrasound transducer has a fixed focal configuration; the same probe is used to generate the vibrating stress and to measure the shear wave velocity along the main axes.The FibroScan TM device displays the elastogram, which represents the strains induced in the liver by shear wave propagation as a function of time and depth.An A-mode and M-mode are provided as well, for quality checks of the measurement.The equivalent stiffness in [kPa] is derived from the SWV and can also be visually assessed according to the slope of the elastogram [35].
The lack of a grayscale B-mode makes it difficult to understand where the measurement is performed [35,39].Other limitations concern the need to recalibrate the spring in the device, at intervals of 6 to 12 months (depending on the type of probe), the reduced feasibility in cases of obesity, and the impossibility of employing the device in patients with ascites [7,39].
ARFI-based SWE is divided into point SWE (pSWE) and two-dimensional SWE (2D-SWE).In pSWE, an ARFI is used to induce tissue displacement in the normal direction in a single focal location (therefore, it is named "point" SWE) [5,36].SWV may be reported either in [m/s] or converted in [kPa], to provide a quantitative estimation of tissue elasticity.Unlike TE, p-SWE may be performed on a conventional ultrasound device; B-mode image guidance is possible during the measurement since the same probe is used to both generate the shear waves and detect their propagation [5,7,33,36].
In pSWE (Figure 11), the tissue region interrogated by a single, highly focused ultrasound beam is narrow because the shear waves are rapidly attenuated by internal frictional forces as they propagate from the excitation region. To derive tissue stiffness over a larger ROI, data from multiple pushes must be combined; this is the basis of two-dimensional shear wave elastography (2D-SWE). The technique of 2D-SWE is the latest technological innovation that uses acoustic radiation force to assess tissue elasticity (Figure 12). 2D-SWE alternates multiple perturbation and reading phases, enabling an image of the shear wave speed over a larger tissue sample. Instead of a single focal location, multiple focal zones are interrogated in rapid succession, faster than the shear wave speed. The need to transmit multiple pulses in sequence to synthesize a single elastographic image results in an increase in the acquisition time. To achieve a larger ROI without increasing the acquisition time, some devices transmit multiple thrust beams at the same time, each constituting an independent source of shear waves [5,7,33]. As they propagate, the wavefronts generated by each thrust eventually meet and pass through each other. The combined wavefronts can assess a much larger region of tissue in a single transmission event. This creates a near-cylindrical shear wave cone, allowing the real-time monitoring of shear waves in the transverse direction for the measurement of shear wave speed and the generation of quantitative elastograms [5,10,33].
Figure 12.Two-dimensional shear wave elastography.In 2D-SWE, instead of a single focal location, as in ARFI strain imaging and pSWE, the acoustic radiation force is used to interrogate multiple focal zones in rapid succession, faster than the shear wave speed.In step 1 (on the left), shear waves are generated using acoustic radiation force impulses; they propagate perpendicularly to the primary US wave at a lower velocity.In step 2 (on the right), since the speed of sound in tissue is approximately 1000 times faster than the shear wave speed, fast longitudinal wave excitation (blue and red arrows) can be used to track the displacement (orange small arrows) as the shear waves propagate through the tissue.
In 2D-SWE, a TOF algorithm is used to estimate the local shear wave speed at every location in the shear wave elastography ROI [33].The speed within a location of interest is derived by correlating the displacement induced by the shear waves at two neighboring points.The cross-correlation function also provides the correlation coefficient, which is used to assess the quality of the measurement.
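The following toy sketch reproduces this idea on synthetic data: the displacement time profiles of two neighboring tracking locations are cross-correlated, the lag of the correlation peak yields the transit time and hence the local shear wave speed, and the normalized peak value serves as a simple quality metric. The pulse shape, tracking rate, and spacing are assumed illustrative values.

```python
import numpy as np

# Sketch of the local speed estimate used in 2D-SWE: the displacement time
# profiles of two neighboring tracking locations are cross-correlated, the
# lag of the correlation peak gives the transit time, and the peak
# correlation coefficient serves as a quality metric. Profiles are synthetic.

fs = 10_000.0            # tracking rate [Hz] (assumed)
dx = 1.2e-3              # lateral distance between the two points [m]
true_speed = 3.0         # m/s (illustrative)

t = np.arange(0, 0.01, 1.0 / fs)

def pulse(t0):
    """Idealized shear wave displacement 'bump' arriving at time t0."""
    return np.exp(-((t - t0) / 0.0004) ** 2)

d1 = pulse(0.002)                       # displacement at point 1
d2 = pulse(0.002 + dx / true_speed)     # arrives dx / c_s later at point 2

lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(d2 - d2.mean(), d1 - d1.mean(), mode="full")
transit_time = lags[np.argmax(xcorr)] / fs
quality = np.max(xcorr) / (np.linalg.norm(d1 - d1.mean()) * np.linalg.norm(d2 - d2.mean()))

print(f"shear wave speed ~ {dx / transit_time:.2f} m/s, correlation = {quality:.2f}")
```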
The quality of the measurement is important in clinical practice [40]. Some vendors provide a color-coded "confidence map" that helps the operator to visually assess the quality of the acquired signals in real time [40] (Figure 13). In this viewing modality, the operator can easily work with a split-screen display. On one side, the confidence map displays an ROI that is identical to that of the elastogram and chromatically represents the quality of the measurements. On the other side, the elastogram provides a chromatic map of elasticity values; according to the confidence map, the areas below the filtering threshold are filtered out and, thus, appear transparent. The operator can easily compare the confidence map and the elastogram to select the most suitable tissue areas for sampling, improving the quality of the examination (Figure 13).

Figure 13. The confidence map is displayed on the left, setting a threshold of 60%. A standard deviation of 60% or less of the mean value is indicative of a good-quality acquisition. The areas of low quality (red) are filtered out by the system and display as transparent (note that the area in red corresponds to a vascular structure on the B-mode); the yellow color is a warning indicating that the area is not of good quality; the green indicates a good quality of the measurement. On the right, the elastogram depicts a colorimetric map of the stiffness, in which the blue color corresponds to greater elasticity while red represents greater stiffness. In the elastogram on the right, a circular sampling area was inserted to obtain a quantitative measure of the stiffness value in the selected location (mean 4.83 kPa). The presence of the confidence map helps the operator to check the quality of the sampling in real time.
Compared to pSWE and TE, this technique includes the real-time visualization of a color elastogram, superimposed on a B-mode image, enabling the operator to be guided by both anatomical and tissue stiffness information [7,39,40]. The optimal depth for sampling is considered to be 4-5 cm from the skin; although most vendors allow measurements up to 8 cm from the transducer, it has been shown that accuracy decreases beyond 6 cm because of the attenuation of the ARFI pulse [7].
Figure 14 summarizes the different types of USE and some of the commercial devices that are currently available.
Liver
Ultrasound elastography has been applied most extensively in the staging of liver fibrosis, compared with other clinical settings. Chronic liver disease results in collagen deposition and hepatic fibrosis, leading to increased parenchymal stiffness. Although biopsy is the gold standard for staging liver fibrosis, it is an invasive method that presents some limitations, including variability between observers [39]. Many clinical guidelines recommend USE as a non-invasive imaging technique for the detection and staging of liver fibrosis [39-42].
TE was the first modality to be applied systematically and was quickly established as the point-of-care technology for the non-invasive quantitative assessment of liver fibrosis, although it presents some limitations [41][42][43].The feasibility and results of the p-SWE and 2D-SWE techniques have been extensively investigated and the recommended protocols are reported in the reference guidelines [41].In addition, the SWE techniques showed excellent reproducibility, provided that the recommendations of the manufacturer and expert recommendations are followed [39].
For example, in patients with chronic viral hepatitis, SWE is the preferred method for first-line assessment for the severity of liver fibrosis in untreated patients and to rule out advanced disease; however, it cannot be used to stage liver fibrosis or rule out cirrhosis in patients with a sustained viral response, due to the loss of accuracy of the cutoffs defined in viremic patients [39].
Measurements are usually taken with the patient holding their breath for a few seconds on a medium apnea, while supine, or slightly rotated in the lateral decubitus, with the right arm raised above the head.Sampling is conducted in the right lobe of the liver.When using SWE techniques, ARFI should be applied perpendicular to the hepatic capsule, to a depth of 4-5 cm deep, ensuring that the ROI and the immediately adjacent areas are free of vascular and biliary structures, along with the rib shadows [5,39].A detailed account of the main limiting factors can be found in the main reference guidelines [39,41].
With the availability of antiviral therapies, the disease burden of chronic viral hepatitis is slowly but progressively decreasing, while non-alcoholic fatty liver disease (NAFLD) is becoming the leading cause of chronic hepatitis worldwide and an emerging issue in public health [44]. NAFLD includes a wide range of conditions, from simple steatosis to non-alcoholic steatohepatitis (NASH).
USE plays a promising role in the non-invasive assessment of these patients [39].Among the different techniques, SWE showed superior performance in the assessment of liver fibrosis in NAFLD patients and can be used to rule out advanced fibrosis and select patients for further assessment [39,45].Much effort is directed toward the potential application of USE in patient screening for the quantitative assessment of patients with simple steatosis, in order to perform risk stratification and ensure follow-up.
The controlled attenuation parameter (CAP), expressed in decibels per meter (dB/m), describes the decrease in the amplitude of the ultrasonic signal in the liver and correlates with the degree of hepatic steatosis [46,47].Its accuracy is not influenced by fibrosis or cirrhosis [39,48].It is available as an add-on tool in the latest version of the FibroScan 502 Touch System and is considered a standardized and reproducible point-of-care technique that is suitable for detecting fatty liver disease [39].However, since there is a large overlap between the adjacent grades, there are no reference consensual cut-offs, and quality criteria have not yet been established [39].Combinational elastography is a new imaging technique available on the Fujifilm Arietta 850 that combines strain and shear wave elastography for the quantification of liver fibrosis and steatosis, which could partially overcome some of the traditional limitations of the studies in this field [39,49].
Artificial intelligence could potentially assist quantitative ultrasound imaging data analysis and integration for the assessment of liver fibrosis and steatosis, aiming to develop individualized classifications and predictive models.However, to date, only a few studies have applied AI in this context [50].
A large recent prospective multicenter study assessed the accuracy of a deep learning (DL) model in patients with chronic hepatitis B, using 2D-SWE elastograms, and compared it with the traditional 2D-SWE approach, the aspartate transaminase-to-platelet ratio index, and the fibrosis index, using liver biopsy as the reference standard [51]. The DL-based model showed the best overall performance in predicting liver fibrosis stages, compared with 2D-SWE and the biomarkers; its diagnostic accuracy improved as more images (especially ≥3 images) were acquired from each individual [51].
A recent study by Destrempes et al. aimed to develop a quantitative B-mode ultrasound and elastography-based model to improve the classification of steatosis and fibrosis in patients with chronic liver disease, in comparison with SWE alone, adopting histopathology as the reference standard. A random forest model for classification and a bootstrapping technique were used to identify the combinations of parameters that provided the highest diagnostic accuracy. The ML-based model incorporating quantitative US and SWE data showed better accuracy in the classification of liver steatosis and fibrosis when compared to SWE alone [52].
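To illustrate the general idea behind such models (without reproducing the cited study), the hedged sketch below trains a random forest on randomly generated placeholder features (a hypothetical SWE stiffness value plus two hypothetical quantitative-US features) and compares the cross-validated AUC for SWE alone versus the combined feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hedged sketch of the general approach described above: a random forest
# classifier trained on combined quantitative-US and SWE features, compared
# with SWE alone. The data are randomly generated placeholders; the feature
# names are hypothetical and do not reproduce the cited study.

rng = np.random.default_rng(42)
n = 300
fibrosis = rng.integers(0, 2, n)    # 0 = low stage, 1 = advanced (synthetic label)

swe_kpa = 6 + 4 * fibrosis + rng.normal(0, 2, n)             # stiffness rises with fibrosis
attenuation = 0.6 + 0.1 * fibrosis + rng.normal(0, 0.1, n)   # hypothetical QUS feature
backscatter = 0.5 * fibrosis + rng.normal(0, 1, n)           # hypothetical QUS feature

X_swe_only = swe_kpa.reshape(-1, 1)
X_combined = np.column_stack([swe_kpa, attenuation, backscatter])

model = RandomForestClassifier(n_estimators=200, random_state=0)
for name, X in [("SWE only", X_swe_only), ("QUS + SWE", X_combined)]:
    auc = cross_val_score(model, X, fibrosis, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC ~ {auc:.2f}")
```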
Breast
USE is a complementary tool for improving the detection and non-invasive characterization of breast lesions, which can help radiologists to enhance patient management [7].In particular, adding SWE to B-mode US has been shown not only to provide additional diagnostic information but also to reduce the likelihood of unnecessary biopsies [53].
Several tools have been developed to frame benign and malignant breast masses using SE [7].The most common parameters are the Tsukuba score (elasticity score) [54], the elastography-to-B-mode ratio (width or length ratio, LR) [32], and the strain ratio (SR, or fat-lesion ratio) [55].The Tsukuba score (a five-point color scale) is based on a tissue stiffness map in and around the lesion, where lower scores (1-3) indicate a lesion that is likely to be benign, while higher scores (4-5) indicate a higher probability of malignancy that requires a biopsy.Several studies have evaluated the performance of deformation elastography, many of which used the Tsukuba score, and showed an aggregate sensitivity and specificity of 83% and 84%, respectively, ranging from approximately 80 to 90% [54].
In the LR, the lesion size measured on the elastogram is divided by the lesion size measured on the B-mode.Since the stromal response to breast cancer also increases the stiffness of the surrounding tissues, the transverse diameter of a malignant lesion appears larger on an elastogram than in the B-mode.One study showed excellent sensitivity and specificity (100% and 95%, respectively) in terms of this score when differentiating benign and malignant breast lesions [32].
The strain ratio of the nodular lesion to the strain in the subcutaneous fat is another important parameter when using strain imaging.Since fat has a constant modulus of elasticity over various compressions, the ratio is a semi-quantitative measure that reflects the relative stiffness of the injury [55].
A meta-analysis including 12 studies (2087 breast lesions) compared the performances of strain ratio (9 studies, 1875 patients) and the length ratio (3 studies, 450 breast lesions), showing that the sensitivity and specificity were good for both parameters (88% and 83%, respectively, for SR and 98% and 72%, respectively, for LR) [56].
A study by Ricci et al. compared the sensitivity and specificity of a B-mode ultrasound, color map, SR, and LR, and found that the combination of these three elastography parameters improved the overall diagnostic performance, compared to these parameters alone [57].
Concerning SWE, a recent meta-analysis of 25 studies including 5147 breast lesions showed pooled sensitivities of 0.94 and 0.97 (p = 0.087), pooled specificities of 0.85 and 0.61 (p = 0.009), and area under the receiver operating characteristic curve (AUC) values of 0.96 and 0.96 (p = 0.095) for the SWE and B-mode combined, compared to the traditional B-mode alone. When SWE was combined with the B-mode US, the BI-RADS assignment changed from 4 to 3 in 71.3% of cases, reducing the frequency of unnecessary biopsies by 41.1% [58].
Artificial intelligence tools could further improve the real-time integration and performance of USE in B-mode US.
A recent study by Li et al. evaluated the performance of an AI system that integrates complementary information from the US mode and SWE mode and, thus, enhances the feature representations of each mode image.The diagnostic performance and concordance between expert and inexperienced radiologists in the classification of breast nodules in two operative settings were compared, with an independent diagnosis on the ultrasound; after an interval of 7 days, they performed a secondary diagnosis with the aid of AI (secondary diagnosis mode).A dataset containing 599 images of 91 patients was used, including 64 benign and 27 malignant breast tumors.AI assistance provides a more pronounced improvement in diagnostic performance for inexperienced radiologists; meanwhile, experienced radiologists benefited more from AI in terms of reducing interobserver variability [59].
Another study by Kim et al. investigated the added value of a DL-based CAD tool (S-Detect) on the SWE and B-mode US for the evaluation of 156 breast masses, detected at US screening in 146 women.S-Detect was applied to the most representative images selected on the B-mode US, followed by the application of S-Detect software.Color-coded SWE maps were created for the mass and the normal fat and the lesion-to-fat elasticity ratio was calculated.The BI-RADS score was applied for the final assessment category by three radiologists for the B-mode US alone, B-mode US plus S-Detect, and B-mode US plus SWE.Compared to the B-mode US alone, either the addition of S-Detect or SWE increased the specificity without a significant loss in sensitivity when using either S-Detect or SWE.In two assessments, the AUC of the B-mode plus SWE was higher than in the B-mode plus S-Detect [60].
Thyroid
Thyroid nodules are a common finding in US B-mode samples and affect up to 68% of the adult population.With the widespread use of imaging in clinical practice, incidental thyroid nodules are being discovered with increasing frequency [61,62].TI-RADS classification, based on B-mode ultrasound features, is adopted to select nodules for sampling by fine needle aspiration biopsy (FNAB), which is generally used for the confirmation of malignancy.Although FNAB is considered the gold standard for diagnosis, up to 15-30% of samples are considered nondiagnostic or indeterminate due to technical factors, such as insufficient sampling or histological dilemmas between similar histotypes [63].In addition, the TI-RADS system is characterized by low diagnostic specificity, resulting in the execution of an excess number of invasive examinations [64].
Thyroid elastosonography provides complementary information to B-mode US and FNAB for the evaluation of thyroid nodules.Analogously to the breast scan study, semiquantitative parameters, such as the thyroid stiffness index (strain in the background normal thyroid/strain in the thyroid nodule) [65], and qualitative parameters, such as the Rago and Asteria criteria, were developed to stratify the risk of malignancy of thyroid nodules, based on the SE results [28,29].Their application has shown controversial results [7,66].
It has been hypothesized that the diagnostic performance of the combination of US grayscale and SE measurements is greater than that of the individual modalities for malignancy assessment.This hypothesis was supported by a study conducted by Trimboli et al., in which the combination of the two modalities produced a sensitivity of 97% and a negative predictive value of 97%, which are higher than when using the two modalities individually [67].
In contrast, in the study by Moon et al., neither elastography nor the combination of elastography and grayscale US showed better performance for the diagnosis of thyroid tumors than grayscale US [68].It has been observed that this heterogeneity of results could be due to the different exclusion criteria and the variability in the percentage of malignant nodules [7].More recently, a multicenter study by Hairu et al., conducted on a total of 1445 thyroid nodules (834 malignant and 611 benign), evaluated the performance of SE in the diagnosis of highly suspect thyroid nodules, based on the 2015 ATA guidelines in the Chinese population, and demonstrated that the combination of the TI-RADS classification and SE led to a significant increase in the sensitivity and NPV (97.1 and 91.9%, respectively), compared with the TI-RADS, in particular in nodules of ≥1 cm [69].
Several studies have suggested that SWE is a promising tool for differentiating malignant and benign thyroid nodules, and some early meta-analyses appeared to support this view of a high-precision diagnostic tool. For example, in the meta-analysis by Zhan et al. (16 studies and 2436 nodules), the overall mean sensitivity and specificity of ARFI for the differentiation of thyroid nodules were 0.80 and 0.85, respectively [70]. In a review by Lin et al. (15 studies and 1867 nodules), the pooled sensitivity, specificity, and area under the summary ROC curve of SWE for detecting malignant thyroid nodules were 84.3%, 88.4%, and 93%, respectively [71]. When adopted as a screening tool, the PPV and NPV were 27.7-44.7% and 98.1-99.1%, respectively, calculated with a malignancy prevalence of 5-10% in the thyroid nodules [71].
However, the comforting results at the population level are less convincing at the individual level, due to the reproducibility problems of the method and the enormous overlap of the elasticity indices, which means that the proposed cut-off levels do not have optimal diagnostic accuracy [7,[72][73][74].
A meta-analysis by Hu et al. in 2017 compared the results of real-time elastography (RTE) and SWE and concluded that the overall sensitivities of RTE and SWE are approximately comparable in terms of the differentiation between malignant and benign thyroid nodules, while the difference in specificity between these two methods was statistically significant, with the specificity of RTE being statistically superior to that of SWE.
Several authors have suggested using SWE as an adjunct to the US B-mode to select patients for FNAB or surgery, rather than for use as a separate diagnostic tool.The diagnostic accuracy of SWE as an adjuvant test has been addressed in several studies, but the results have been mixed, and some studies have shown an increase in sensitivity at the expense of specificity [75,76].However, the most recent evidence seems to suggest that multimodal integration is the way forward.A recent study by Petersen et al. compared the diagnostic performance of the TIRADS system (Kwak-TIRADS, EU-TIRADS), in combination with SWE imaging, for the assessment of 61 thyroid nodules detected in 43 patients (10 malignant and 51 benign).The addition of SWE resulted in an increased accuracy from 65.6% to 82.0% when using Kwak-TIRADS and from 49.2% to 72.1% when using EU-TIRADS, suggesting that the combination of TIRADS and SWE seems to be superior for the risk stratification of thyroid nodules, compared to each method when used in isolation [77].
A 2020 meta-analysis by Filho et al. compared the performance of the SWE of different manufacturers as an independent predictor of malignancy in the diagnostic differentiation of thyroid nodules (TN), obtaining ROC curves between 0.84 and 0.88 [78].
A recent review concludes that the present SWE technology seems not to be robust enough for clinical implementation on a wider scale [74]. However, AI can effectively integrate information from grayscale and elastosonography to improve the classification of thyroid nodules, overcoming some of the natural limitations of the method. For example, in a study of 2050 thyroid nodules, it was shown that a random forest model performed better than a radiologist in the differential diagnosis of thyroid nodules (malignant vs. benign), based on conventional US only (AUC = 0.924 [confidence interval (CI) 0.895-0.953] vs. 0.834 [CI 0.815-0.853]) and based on both conventional US and real-time elastography (AUC = 0.938 [CI 0.914-0.961] vs. 0.843) [79].
A study by Qin et al. proposed a new model based on a convolutional neural network that combines the characteristics of conventional ultrasound and ultrasound elasticity images to form a hybrid functionality space.Experimental results show that the accuracy of the proposed method is 0.947, which is better than that of other single data-source methods under the same conditions [80].
A recent retrospective study by Zhao et al. evaluated the performance of an ML model using a US and SWE image dataset of 743 nodules in 720 patients with biopsy-confirmed thyroid nodules (≥1 cm) and an independent test dataset. The radiomic features extracted from the US and SWE images were used for the development of ML-assisted radiomics approaches and were then compared with the results obtained via the TI-RADS classification. The ML-assisted visual ultrasound approach performed better than the US approach alone. Furthermore, the ML-assisted US + SWE visual approach also resulted in a reduction in the unnecessary FNAB rate, which decreased from 30.0% to 4.5% in the validation dataset and from 37.7% to 4.7% in the test dataset, compared to TI-RADS [81].
The integration of grayscale and elastosonographic modalities could improve the accuracy of the estimation of LN metastases for patients with papillary thyroid cancer.A study by Liu et al. trained and validated an AI model of the support vector machine type, using three feature sets extracted from B-US, SE-US, and a multi-modality containing B-US and SE-US.The results obtained with the multimode set produced a better area under the ROC curve than when using those functions extracted from B-US or SE-US separately [82].
Lymph Nodes
Both SE and SWE have been successfully applied to improve the diagnostic characterization of lymph nodes and have demonstrated that USE and conventional US can play complementary roles in the differentiation of malignant and benign LNs [7,83].In addition to the traditional method, endoscopic US (EUS) and endobronchial US (EBUS) have also gained considerable popularity in recent years [84].
In normal LNs, the cortex is usually stiffer than the hilum, and this architecture is generally also preserved in inflammatory LNs.Conversely, malignant carcinoma cells proliferate rapidly and infiltrate the lymph node, distorting the normal architecture and increasing its stiffness [85].However, focal infiltration may be challenging to detect, and lymphoma may produce soft lymph nodes that can predominantly have similar elasticity to the surrounding tissue, thus representing a specific challenge for this application [7].
Several studies have been published, evaluating the performance of elastosonography in differentiating between benign and malignant LNs, showing sensitivity values ranging from 64.5% to 100% and a specificity ranging from 41.7 to 91.3%, with an overall accuracy ranging from 60% to 96.7% [82].
Again, artificial intelligence (AI) can improve lymph node classification, compared to the radiologists' evaluation alone.
A study by Tahmasebi et al. assessed the accuracy of image classification software (Google Cloud AutoML Vision, Mountain View, CA) compared to three expert radiologists, regarding a dataset containing ultrasound images of 317 axillary lymph nodes, using the histopathology as a reference standard.They showed that AI has comparable performance to expert radiologists and could be used to predict the presence of metastases in ultrasound images of the axillary lymph nodes [85].
Huang et al. compared different machine learning models, including ultrasound elastography data, in the prediction of LN metastasis risk for patients with papillary thyroid microcarcinoma. The random forest classifier showed the best performance, with the strongest prediction efficiency, reaching an AUC of 0.889 (95% CI: 0.838-0.940) and 0.878 (95% CI: 0.821-0.935) in the training set and testing set, respectively [86].
Bowel
The incidence of inflammatory bowel disease (IBD) is growing worldwide [87]. The main types are ulcerative colitis (UC), which affects only the colon and rectum, and Crohn's disease (CD), which can potentially affect every part of the digestive tract. IBD has a complex pathogenesis and a wide variety of clinical presentations, with clinical symptoms that often correlate poorly with bowel disease activity [88]. Therefore, both the initial diagnosis and further monitoring may be challenging for clinicians, requiring the integration of clinical data, laboratory indices, and endoscopic and imaging data, together with a histopathological assessment. However, the assessment of disease activity and intestinal complications is crucial for therapeutic decisions. Several potential noninvasive biomarkers have been proposed for IBD activity assessment, including genetic, serological, fecal, microbial, histological, and immunological biomarkers [89]. In this context, ultrasound elastography could play an important role as a new non-invasive tool to improve patient monitoring [90-92].
Several studies have been performed to evaluate the feasibility and diagnostic contribution of elastography data in the assessment of IBD, which are summarized in two recent literature reviews, one by Ślósarz et al., including 12 records, and another by Grazynska, including 15 studies [90,91].
In Crohn's disease in particular, chronic inflammation leads to the remodeling of the extracellular matrix and fibrosis, which, as is known, is one of the main determinants of tissue stiffness [93,94].For this reason, much research has focused on the role of elastography in differentiating between inflammations and fibrosis [90].The development of fibrosis is, in fact, associated with the onset of stenosis, a dreaded complication that can lead to surgery.Some authors focused on distinguishing fibrotic bowel segments from inflamed ones, based on a qualitative assessment of color patterns in SE, using MRI as a reference standard.Sconfienza et al., for example, proposed a new classification score in which each patient could receive from 8 to 24 points: the terminal ileum was divided into 8 sectors and subsequently graded according to the prevalent color: red = 1 (minimal fibrosis), green = 2 (intermediate fibrosis), and blue = 3 (maximal fibrosis) [30].Another study by Lo Re et al. evaluated a total of 41 affected bowel segments and 35 unaffected bowel segments in 35 patients, using a SE color-scale and enterography magnetic resonance imaging (E-MRI), showing a significant correlation between the findings of the two methods in distinguishing fibrotic and edematous matter.In particular, the color coding showed a blue color pattern in the fibrotic mesentery and bowel wall and a green color pattern in the edematous tissue [95].
Fraquelli et al. evaluated the role of the semiquantitative parameter strain ratio when discriminating patients with stenosis as candidates for surgery from patients with active non-narrowing/non-penetrating disease.Using histopathological data as a reference standard, they found a significant correlation between strain ratio and fibrosis severity, suggesting that SE shows excellent discriminatory ability in diagnosing severe intestinal fibrosis [96].However, further studies are needed to consolidate these data, since other authors have not found a significant correlation between the strain ratio and the histopathologic scores of inflammation, fibrosis, or the clinical or biochemical biomarkers [97].
Although the application of artificial intelligence in the field of inflammatory bowel disease has grown significantly in the past decade [98], demonstrating its great potential in extracting valuable information from multiple data streams, to the best of our knowledge there are as yet no significant studies applying AI to elastosonography data in IBD. This could therefore be an interesting field of application for AI, helping to overcome some traditional limitations and to complete the integration of the method into patient management.
Conclusions
Elastosonography is a powerful non-invasive method for the assessment of tissue stiffness, comprising a set of different techniques that depend on the principles used for elasticity estimation. In this review, we provided a brief introduction to the basic physical principles that underpin ultrasound elastography imaging. Although USE is a promising method, both investigator-dependent and investigator-independent factors may affect elasticity measurements. A general understanding of the underlying principles can benefit the entire process of data acquisition and interpretation, enhancing USE reproducibility. Furthermore, the application of AI can be a precious ally, allowing more information to be extracted from elasticity data to improve the diagnostic process and the integration of USE into the clinical workflow.
Figure 1 .
Figure 1. Spring model of Hooke's law. A simple helical spring model can be used to represent Hooke's law. Hooke's law states that the force (F) required to extend or compress a spring by a certain distance (∆l_x) scales linearly with that distance. On the left, the upper part of the image (a) represents the spring at rest, characterized by a longitudinal elasticity constant, k_E, that is typical of the spring. The lower part of the image (b) represents the spring in traction due to a force, F, parallel to the x-axis. ∆l_x represents the longitudinal displacement of the free end of the spring. The minus sign in the second member of the equation means that we are considering the restoring force exerted by the spring on whatever is pulling its free end, since the direction of the restoring force is opposite to that of the displacement.
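For reference, the relation described in this caption can be written compactly as

\[ F = -k_E \, \Delta l_x \]

i.e., the restoring force is proportional to the displacement of the free end (through the spring constant k_E) and directed opposite to it.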
Figure 3 .
Figure 3. Longitudinal waves (a) and shear waves (b).The green arrows and the red line in figure (b) show the movement of the molecules with respect to the axis of propagation.
Figure 4 .
Figure 4.The relationship between the longitudinal and the transverse strains in solids, using a cylindrical model.Before compression, the cylinder has a length (L) and radius (R).After the application of force (red arrow), the cylinder becomes thinner and wider.The longitudinal strain (ε l ) is defined as the ratio between the change in length (∆l) and the initial length (L).The transverse strain (ε t ) is defined as the ratio between the variation of the radius (∆r) and the initial radius (R).
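In symbols, the two strains defined in this caption are

\[ \varepsilon_l = \frac{\Delta l}{L}, \qquad \varepsilon_t = \frac{\Delta r}{R} \]

both of which are dimensionless ratios.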
Figure 5 .
Figure 5. Poisson's ratio.The relationship between the longitudinal and transverse deformation is called Poisson's ratio and depends on the characteristics of the medium.Red arrows in the cylinder highlight the strain direction.
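In its standard form, the ratio described in this caption is

\[ \nu = -\frac{\varepsilon_t}{\varepsilon_l} \]

where the minus sign makes \(\nu\) positive for ordinary materials, since the longitudinal and transverse strains have opposite signs.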
Figure 6 .
Figure 6.Shear strain.The model helps to visualize the concept of shear stress.We consider a prism of elastic material and a shear force F (red arrow), applied parallel to the x-axis.Similarly to the calculations for Young's modulus (E), it is possible to describe a quantity, which we define as G, to indicate the shear modulus.G is the constant of proportionality between the shear stress and the shear strain.The transverse strain can also be expressed as a function of the angle, θ.
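Using the quantities named in this caption, the shear modulus can be written in its standard form as

\[ G = \frac{\tau}{\gamma} = \frac{F/A}{\tan\theta} \approx \frac{F/A}{\theta} \quad \text{(small } \theta\text{)} \]

where \(\tau = F/A\) is the shear stress (A being the area of the face on which F acts, not labeled in the figure) and \(\gamma\) is the shear strain.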
Figure 7 .
Figure 7.The bulk modulus, or modulus of compressibility.Red arrows represent the pressure acting on the surface.In the bulk modulus equation the volumetric stress is usually expressed as a pressure P (force to surface area ratio) or, more precisely, the change in pressure.
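In the notation of this caption, the bulk modulus takes the standard form

\[ K = -\frac{\Delta P}{\Delta V / V_0} \]

where \(V_0\) is the initial volume; the minus sign makes K positive, since an increase in pressure produces a decrease in volume.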
Figure 8 .
Figure 8.The concept of viscosity.A green plate flows from left to right on the surface of a fluid (light blue), pulled with a force of intensity, F. If there were no friction present, the plate would move with a uniformly accelerated motion; however, in reality, its speed tends to be constant over time since the force F is balanced by a friction force due to the sliding of the various layers of liquid (under the conditions of laminar motion).The viscous friction force acting on the plate has an intensity that is given by relation 1 and it is directly proportional to the sliding speed (v).From relation 1, it is easy to derive relation 2, as highlighted by the red arrow.It is worth noting that, unlike the shear stress in solids (relation 3), the shear deformation is not expressed as a longitudinal strain, but instead as a gradient of velocity, that is, the change in velocity with respect to the depth of the liquid.Red arrows highlight the similarities.
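A plausible reconstruction of the relations referred to (but not shown) in this caption, in their standard form, is

\[ \text{(1)}\;\; F = \eta \, A \, \frac{dv}{dz}, \qquad \text{(2)}\;\; \tau = \frac{F}{A} = \eta \, \frac{dv}{dz}, \qquad \text{(3)}\;\; \tau = G \, \gamma \]

where \(\eta\) is the dynamic viscosity, A the area of the plate, and dv/dz the velocity gradient across the depth of the liquid; relation 3 is the analogous shear-stress relation for solids.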
Figure 9 .
Figure 9.The Kelvin-Voigt model for viscoelastic materials.The Kelvin-Voigt model is used to describe viscoelastic materials.This model is represented by a purely viscous damper (bottom), connected in parallel to a purely elastic spring (top).The constitutive equation associated with the model is represented on the right.For more explanations, see the main text.
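In its standard form, the constitutive equation of the Kelvin-Voigt model mentioned in this caption is

\[ \sigma(t) = E \, \varepsilon(t) + \eta \, \frac{d\varepsilon(t)}{dt} \]

i.e., the total stress is the sum of an elastic term (spring, modulus E) and a viscous term (damper, viscosity \(\eta\)) acting on the same strain.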
Figure 11 .
Figure 11.Point shear wave elastography.The image is obtained with point shear-wave elastography (ElastQ, Philips), using a convex probe in a healthy volunteer.A small ROI (white square) is placed in the middle third of the renal parenchyma.The mean stiffness value was 9.95 kPa.
Figure 13 .
Figure 13.Confidence map and elastogram in 2D-SWE.The image is obtained via two-dimensional shear-wave elastography (2D-SWE) (ElastQ, Philips), using a convex probe in a healthy volunteer.The confidence map is displayed on the left, setting a threshold of 60%.A standard deviation of 60% or less of the mean value is indicative of a good-quality acquisition.The areas of low quality (red) are filtered out by the system and display as transparent (note that the area in red corresponds to a vascular structure on the B-mode); the yellow color is a warning indicating that the area is not of good quality; the green indicates a good quality of the measurement.On the right, the elastogram depicts a colorimetric map of the stiffness, in which the blue color corresponds to greater elasticity while red represents greater stiffness.In the elastogram on the right, a circular sampling area was inserted to obtain a quantitative measure of the stiffness value in the selected location (mean kPa 4.83).The presence of the confidence map helps the operator to check the quality of the sampling in real time.
Figure 14 .
Figure 14.Different techniques of ultrasound-based elastography and some of the commercial devices that are currently available.
Reverse Logistics and Performance of Footwear Manufacturing Firms in Kenya.
The use of reverse logistics improves corporate image and environmental performance, which in turn leads to a firm's competitive advantage. However, in the wake of increasing environmental pollution and climate change, the use of reverse logistics in footwear manufacturing firms remains low. The performance of footwear manufacturing firms in Kenya has been declining over the years. The objective was to assess the effect of reverse logistics on the performance of footwear manufacturing firms in Kenya. This study used a cross-sectional study design. The unit of analysis was all 16 footwear manufacturing firms in Kenya. The unit of observation comprised the managers of four departments (marketing, procurement/supply chain, operations and stores) in footwear manufacturing firms in Kenya. The target population was 64 marketing, procurement/supply chain, operations and store managers in the 16 footwear manufacturing firms in Kenya. The study used a census approach and hence involved the entire target population of 16 footwear manufacturing firms, with 4 respondents from each firm. The study made use of primary data, collected with semi-structured questionnaires comprising closed-ended and open-ended questions. Qualitative data were analyzed using thematic analysis. The questionnaires generated quantitative data, which were analyzed with inferential and descriptive statistics using SPSS version 25. Descriptive statistics included frequency distributions, percentages, standard deviations and means. Inferential statistics included regression and correlation analysis. The findings were displayed in figures (bar charts and pie charts) and tables. The study found that reverse logistics has a positive and significant effect on the performance of footwear manufacturing firms in Kenya. This study recommends that management should adopt reusable packaging, recycling, repackaging and product returns to help lower material costs, thereby improving profit and firm performance. Moreover, reverse logistics adds value, reduces risk and ensures a continuous movement of goods.
1.0 INTRODUCTION 1.1 Background of the Study
Organizations currently exist and operate in a turbulent and competitive business environment characterized by increasing globalization as well as the ever-changing needs and demands of customers (Younis, Sundarakani & Vel, 2016). To achieve a competitive advantage, organizations are required to integrate environmental thinking into their supply chain management, including reverse logistics. Green supply chain management aims to include environmental criteria in decision-making at every stage of the supply chain, from material management through consumer disposal, and to close the loop through reverse logistics (Mosbei, 2021). In both developing and developed countries, firms have been focusing on combining and improving their green practices so as to improve their corporate image and environmental performance. Corporate image assists in generating, reinforcing and maintaining a competitive advantage as well as improving organizational acceptance and stakeholder approval (Machogu, 2019). Reverse logistics refers to all operations related to the reuse of products and materials. It can include recycling, reclamation of raw materials, refurbishment, and reselling of items that have been restocked (Sarhaye & Marendi, 2017). The importance of reverse logistics is to ensure a continuous movement of goods. The strategy reduces costs, adds value, reduces risk, and brings the product life cycle to a close. It also involves repairing, changing, or reusing items that have reached the end of their useful lives. It helps manufacturers recoup assets by enabling them to extract as much value as possible from their products, resulting in a second return on their investment (Omwenga, 2019). The components of reverse logistics include reusable packaging, recycling, repackaging and product returns. Reusable packaging is both environmentally and economically beneficial: companies can save money on raw materials, energy, and labor by reducing the need to make single-use packaging. Recycling is the act of gathering and processing items that would otherwise be thrown away as rubbish and turning them into new products (Omusebe, 2018). A product return is when a customer returns previously purchased products to a merchant and receives a refund in the original mode of payment, an exchange for a different or similar item, or a store credit. Despite its importance in improving the competitive advantage of firms, the execution of reverse logistics, as a component of green supply chain management practices, in manufacturing firms around the world remains low. Consequently, the majority of Kenya's manufacturing firms are characterized by inefficiency in the production process (Wyawahare & Udawatta, 2021). Chelangat (2017) observed that only 21% of all manufacturing firms have fully implemented reverse logistics. Successful execution of reverse logistics improves economic and environmental performance as well as organizational performance. By minimizing environmental expenses while assuring environmentally friendly operations, reverse logistics enables businesses to meet financial and market-share goals (Omusebe, 2018). In the manufacturing sector in Pakistan, Akhtar (2019) found that reverse logistics initiatives significantly influence firm competitiveness as well as economic performance. In Sri Lanka, Priyashani and Gunarathne (2021) found a significant positive association between GSCM strategies such as reverse logistics and organizational performance. In the United Kingdom, Cousins, Lawson, Petersen and Fugate (2019) revealed that reverse logistics has
been linked to better environmental and operational cost performance. In addition, Ojo, Mbohwa and Akinlabi (2019) found that reverse logistics positively influences organizational performance in South Africa. In Ghana, Anane (2020) found that reverse logistics led to enhanced organizational performance. According to Ochieng (2019), reverse logistics has a significant positive impact on the performance of large chemical manufacturing companies in Kenya.
Statement of the Problem
Reverse logistics, as a component of green supply chain management, plays a key role in minimizing or eliminating waste, reducing carbon emissions and reducing energy use (Wyawahare & Udawatta, 2021). In addition, Younis, Vel and Sundarakani (2016) observed that the use of diverse reverse logistics practices leads to an improvement in corporate image and environmental performance, which in turn leads to a firm's competitive advantage. However, in the wake of increasing environmental pollution and climate change, the implementation of reverse logistics in footwear manufacturing firms remains low (Otieno, 2016). The performance of footwear manufacturing firms in Kenya has been declining over the years. Makori, Datche and Nondi (2019) observed that waste from 75% of state-owned manufacturing firms was wrongly disposed of or dumped, posing a threat to humans and the environment. In addition, only 39% of state-owned manufacturing firms had conducted training on green supply chain management practices such as reverse logistics among their staff. It is therefore essential to assess whether reverse logistics influences the performance of state-owned manufacturing firms. In Kenya, some state-owned manufacturing firms have been declared bankrupt and others are in receivership (Wanyeri & Moronge, 2018). The footwear manufacturing firms in Kenya have been performing poorly over the years. Specifically, footwear manufacturing firms are losing up to 4 per cent of sales annually due to inefficient execution of critical day-to-day processes (KAM, 2021). Additionally, between 2018 and 2019 the profitability of footwear manufacturing firms decreased by 12.15%, and it decreased again by 3.5% in 2020 and by 6.6% in 2021. According to the Kenya Association of Manufacturers (2021) report, footwear manufacturing firms were under-stocked and there was a 14% decline in stock levels between 2019 and 2020. In addition, footwear manufacturing firms risk losing their market share, leaving Kenya with no option but imports and heavy job losses (Nyanumba & Ndeto, 2021). As a result, footwear distributors in Kenya have been importing shoes, mainly from China, Turkey and India, among other countries. Further, footwear manufacturing firms have been experiencing stock-outs of raw materials, including raw leather. Therefore, the performance of footwear manufacturing firms remains poor even after the adoption of reverse logistics.
Studies conducted on implementation of reverse logistics in Kenya have been limited to specific manufacturing firms in Kenya.For instance, Sarhaye and Marendi (2017) assessed the impact of reverse logistics on the organizational performance of Coca-Cola; Omwenga (2019) assessed the effect of reverse logistics techniques on the effectiveness of selected plastic packaging companies within Nairobi County; and Kabergey and Richu (2019) investigated the impact of reverse logistics on the operational efficiency of sisal processing companies in Nakuru County.Firms in different sectors use different production processes and target different markets and hence the use of reverse logistics may vary from one sector to another.Therefore, the researcher sought to examine the effect of reverse logistics on the performance of footwear manufacturing firms in Kenya.
Objective of the Study
The general objective of the study was to examine the effect of reverse logistics on the performance of footwear manufacturing firms in Kenya.
2.0 Literature Review 2.1 Theoretical Review Theoretical framework includes explanations of diverse theories and ways in which they relate to variables being investigated (Babbie, 2017).The theoretical framework introduces and explains theories that explain research problem being investigated.This study was anchored on institutional theory.Institutional Theory was developed in 1977 by Brian Rowan and John Wilfred Meyer.According to the theory, organizations are able to enhance their performance by better coordination and control of tasks.The process of institutionalization is characterized by the use of rules in the social processes, actualities, obligations and also in thought and actions.According to Scott (2018), the processes by which institutions, procedures, standards, and norms become established as parameters for acceptable behavior are the subject of institutional theory.This theory's assumptions are based on the fact that the center of an organization world both internal and external is based on things that are well understood and visible to the organization's members.This means that the management although they are affected by given social norms also look at the world in a given dimension and behave according to this perception.This leads them to creating an organizational environment that is based on this perception (Cai & Mehari, 2015).Organizations are not lone rangers that aim to get the best of the economic opportunities available, they are based on social norms and expectations that are part of the things management have to consider before making decisions that relate to the firm.The social norms give a basis through which the interpretations of social happenings are given and provide a means through which people behave and have their purposes (Sahin & Mert, 2022).The social rules become part of the organization's through other mediums such as consultants, states, media, analysts, professional bodies and agencies among other carriers of beliefs and ideas that point at the most appropriate behavior for firm management.When firms act according to the social norms, they get approval, public endorsement and support which increases their popularity and legitimacy.This study adopted the institution theory to assess effect of reverse logistics on organizational performance.Foot wear manufacturing firms have institutionalized reverse logistics practices like reusable packaging, recycling, repackaging and product return because of internal and external pressures (Sahin & Mert, 2022).Companies also institutionalize reverse logistics strategies to avoid losing market share to other competitors and to avoid negative implications of noncompliance with environmental mandates (Cai & Mehari, 2015).This is in addition to rising customer and environmental organization demand for ecologically friendly products.As a result of these demands and barriers, businesses are forced to consider environmental influence of their operations.Three institutional strategies may affect managerial decisions to implement environmental management initiatives: normative, coercive, and mimetic.Organizations are obliged to conform due to normative pressures, such as consumer expectations, in order to be viewed as more legitimate.Depending on their influence, a variety of external stakeholders can exert coercive pressure on businesses.Environmental regulations imposed by government entities, for example, may influence the adoption of environmental policies by businesses.
Conceptual Framework
Figure 1 is a diagrammatic representation of the hypothesized relationships between the study variables.
The independent variable was reverse logistics. The dependent variable was the performance of footwear manufacturing firms in Kenya.
Figure 1: Conceptual Framework
Empirical Review
All operations involving reuse of materials and products are referred to as reverse logistics.This entails recycling, raw material reclamation, refurbishment, and remarketing of restocked items (Nyarega, 2017).The importance of reverse logistics is that it ensures a continuous movement of goods.The strategy reduces costs, adds value, reduces risk, and completes the product life cycle.Mutangili (2019) looked at the impact of reverse logistics in SC leadership and management on parastatal performance in Kenya, using Kenya Pipeline and Kenya Airways as examples.Desk study review methodology was used in the research.To identify the paper's major topic concepts, a comprehensive examination of empirical literature was done.Sarhaye and Marendi (2017) assessed the impact of reverse logistics on Coca-Cola Kenya's organizational performance and the impact of supplier assessment on Coca-Cola Kenya's organizational performance.The research was conducted using descriptive research approach.A total of 642 people were surveyed, representing all levels of Coca-Cola employees.The researchers utilized stratified random sampling in their research.The 64 personnel were used in the study.Questions utilized in the study were open-ended as well as closed-ended.Statistical analysis was employed to examine qualitative data using SPSS version 23.According to the study, there exists a link between RL and Coca-Cola Company organizational performance.In Kenya, Nyarega (2017) studied the impact of RL on government owned manufacturing firms' performance.Additionally, the researcher used descriptive research approach, with each of 14 governmentowned industrial companies constituting the sample frame.The main research instrument was a questionnaire, which was dropped three times and later retrieved from the businesses.The research revealed that Kenyan government-owned industrial enterprises had used significant reverse logistics strategies.It was discovered that higher use of reuse, remanufacture, and recycling reverse logistics practices was linked to increased manufacturing company organizational performance.In Nairobi County, Samson (2018) investigated whether reverse SC logistics influences the imported furniture distribution enterprises' performance.This study used a descriptive research approach with 130 managers from twenty six Imported Furniture Distributing companies as participants.To generate sample size of 83 respondents for this investigation, a simple random sampling method was used.Secondary and primary data were employed in this research.Primary data was obtained from employees via questionnaires, whereas secondary data was gathered from companies' inventory records.The study established that reverse transportation, reverse storage constraints and reverse inventories management all influenced the performance of companies that distribute imported furniture significantly.In Kenya, Omwenga (2019) assessed the effect of RL techniques on the effectiveness of selected plastic packaging companies within Nairobi County.Descriptive research approach was employed during the study.The study population composed 180 managers as well as assistant managers from the major departments of all 22 companies involved in sale and manufacture of plastic packaging.The study deployed primary data gathered using self-structured questionnaire.Reverse logistics procedures influences the overall performance of plastic packaging manufacturing companies significantly, according to the study.According to the study, standardized and 
effective reverse logistics can provide a company with a competitive edge and perhaps
capture more market share due to their superior method and ability to meet their customers' constantlychanging demands.Mbovu and Mburu (2018) conducted research to determine the impact of RL methods on improving competitiveness in Kenyan industrial companies.The study included 240 employees from East Africa Breweries Limited's logistics, procurement, and finance departments.The raw data was collected from respondents through a questionnaire.Simple random was used, and primary data was gathered by employing questionnaires.To reinforce the primary data, secondary data from journals, reports, periodicals and magazines was gathered.The researcher found that manufacturing firms' competitiveness is influenced by reverse logistics practices measured in terms of remanufacturing practices, repackaging practices and recycling practices.Kabergey and Richu (2019) cited by Dacha, Omwenga , & Namusonge , (2023) investigated the impact of reverse logistics on the operational efficiency of sisal processing companies in Nakuru County using a correlational research design and cross sectional survey.Employees from all sisal processing companies in Nakuru County were the study's population.Employees in production, accounting and finance, procurement and marketing departments were chosen using purposeful sampling while the sample for the study was determined using stratified random sampling.Structured questionnaire with a 5-point likert scale was used to collect data, which was then analyzed using SPSS version 21.According to the findings, both product recovery and product reuse have a positive impact on the operational performance of sisal processing companies.
Research Methodology
This study adopted a cross-sectional study design. The unit of analysis was all footwear manufacturing firms in Kenya. According to the Kenya Association of Manufacturers (2021), there are 16 footwear manufacturing firms in Kenya. The unit of observation comprised the managers of four departments (marketing, procurement/supply chain, operations and stores) in footwear manufacturing firms in Kenya. These departments were used in this study because they are involved in the implementation of GSCM practices, the procurement process and the supply chain of their organizations. The study used a census approach and hence the entire target population of 16 footwear manufacturing firms, with 4 respondents from each firm, was involved. The respondents in each firm were the managers of the marketing, procurement/supply chain, operations and stores departments. The study used primary data, which were collected with semi-structured questionnaires. Closed-ended questions were in the form of a Likert scale as well as a nominal scale; the study variables were measured using a five-point Likert scale, while demographic information about the respondents was collected using a nominal scale. The study also used open-ended questions to collect qualitative data, which provided information used to explain the quantitative results. A pilot test was conducted in two footwear manufacturing firms with 8 respondents. The pilot test group was chosen at random and comprised 10 percent of the total population; Babbie (2017) suggests that a pilot sample should be 10% of the sample size required for the full study. The content validity of the questionnaire was enhanced by structuring questions according to the study's indicators and objectives. Face validity was enhanced by employing reviews from experts in the procurement and supply chain management field, including the supervisor. Construct validity was found to be above 0.7 and hence the research instrument was valid; in addition, the research instrument was found to be reliable. The questionnaires generated quantitative and qualitative data. Qualitative data were analyzed using thematic analysis and the findings were presented in narrative form. Inferential and descriptive statistics were employed for quantitative data analysis with the help of SPSS version 25 statistical software. Descriptive statistics included percentages, frequency distributions, standard deviations and means; inferential statistics included regression and correlation analysis. The findings were displayed in figures (bar charts and pie charts) and tables. The researcher used a 95% confidence level and hence a significance threshold of 0.05: associations and relationships with a p-value of 0.05 or below were considered significant, while those with a p-value above 0.05 were considered insignificant. The regression model in this study was Y = β0 + β1X1 + ε, whereby Y = performance of footwear manufacturing firms; β0 = constant; β1 = regression coefficient; X1 = reverse logistics; and ε = error term.
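As an illustration only (the study itself used SPSS version 25), the following minimal sketch shows how the simple linear regression above could be estimated in Python with statsmodels; the file name and column names are hypothetical placeholders for the questionnaire composite scores.

```python
# Illustrative sketch of the simple linear regression Y = β0 + β1·X1 + ε.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("footwear_survey.csv")          # hypothetical: one row per respondent
X = sm.add_constant(df["reverse_logistics"])     # adds the intercept term β0
y = df["firm_performance"]

model = sm.OLS(y, X).fit()
print(model.summary())                           # reports R², F-statistic, β and p-values
```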
Research Findings and Discussions
The population of this study comprised 64 marketing, procurement/supply chain, operations and store managers in the 16 footwear manufacturing firms in Kenya. The researcher distributed 64 questionnaires to the respondents. Of these, 58 questionnaires were duly filled in and returned, giving a response rate of 90.63%. Babbie (2017) suggests that a 75 percent response rate is adequate for data analysis, drawing conclusions and making recommendations, so a 90.63% response rate was adequate for data analysis.
General Information of the Respondents
The general information about the respondents comprised their highest level of education and the duration of work in their present position. The findings are shown in Table 2. Regarding the highest level of education, 53.4% (31) of the respondents had an undergraduate degree, 22.4% (13) had a diploma, 19% (11) had a master's degree and 5.2% (3) had a PhD. This means that most respondents held at least an undergraduate degree and hence were sufficiently educated to provide relevant information on reverse logistics and the performance of footwear manufacturing firms.
Regarding the duration for which the respondents had been working in their present position, 55.2% (32) had been in the position for between 11 and 20 years, 24.1% (14) for between 5 and 10 years, 12.1% (7) for more than 20 years and 8.6% (5) for less than 5 years. This implies that most of the respondents had been working in their present position for more than 11 years and therefore had adequate information on reverse logistics and the performance of footwear manufacturing firms.
Descriptive Statistics
This section presents descriptive statistics on reverse logistics and the performance of footwear manufacturing firms in Kenya. The closed-ended questions were measured on a 5-point Likert scale, with SD representing strongly disagree, D disagree, N neutral, A agree and SA strongly agree.
Reverse Logistics
The respondents were asked to specify their level of agreement with various statements regarding reverse logistics in footwear manufacturing firms in Kenya. The findings are shown in Table 3. The respondents agreed, with a mean of 4.30 (SD = 0.883), that reusable packaging plays a key role in minimizing carbon dioxide emissions. These findings are in line with the argument of Sarhaye and Marendi (2017), cited by Dacha, Omwenga and Namusonge (2023), that reusable packaging can help to reduce carbon dioxide emissions by allowing for more efficient transportation. Moreover, the respondents agreed, with a mean of 4.069 (SD = 0.989), that they were satisfied with the level of reusable packaging practiced in the organization. In addition, the respondents agreed that the organization procures products made from recycled materials, as shown by a mean of 3.759 (SD = 0.823). The respondents also agreed, with a mean of 3.655 (SD = 0.762), that reusable packaging saves the cost of raw materials, energy and labor.
With a mean of 4.310 (SD = 0.467), the respondents agreed that repackaging reduces environmental pollution. Moreover, the respondents agreed that recycling prevents pollution and keeps the environment clean, as shown by a mean of 3.931 (SD = 0.953). These findings are in line with Nyarega's (2017) argument that recycling prevents pollution, helps keep the environment clean, reduces greenhouse gas emissions and reduces the waste that ends up in landfills. The respondents also agreed that the organization has established a recycling system for defective and used products, as shown by a mean of 3.828 (SD = 0.881). In addition, the respondents agreed, with a mean of 3.586 (SD = 0.899), that the organization uses environmentally friendly repackaging materials.
The respondents agreed, with a mean of 4.035 (SD = 0.858), that the organization accepts returns provided the customer has a receipt as proof of purchase. These findings conform to the argument of Kabergey and Richu (2019) that numerous stores allow returns if the consumer has a receipt as proof of purchase and no more than a certain period of time has passed since the purchase. In addition, the respondents agreed, with a mean of 3.862 (SD = 0.868), that they were satisfied with the level of repackaging in the organization. Moreover, the respondents agreed, with a mean of 3.862 (SD = 0.437), that the organization encourages and accepts product returns. Furthermore, the respondents agreed that the organization does not accept the return of goods after a certain period has passed since the purchase, as shown by a mean of 3.724 (SD = 1.089).
The respondents were also asked to specify other ways in which reverse logistics is utilized in the organization. The respondents noted that the organization uses reverse logistics to ensure a continuous movement of goods; the strategy reduces costs, adds value, reduces risk, and brings the product life cycle to a close. In addition, the organization uses reverse logistics to repair, change, or reuse items that have reached the end of their useful lives. The respondents further revealed that the organization uses reverse logistics to recoup assets by extracting as much value as possible from its products, resulting in a second return on investment. The respondents noted that the organization uses reverse logistics to make sure that abandoned products do not end up in landfills. The respondents also revealed that the organization uses reverse logistics to repackage goods; by repackaging, the organization tends to attract consumers to its product among all the other competing products in the market.
Performance of Footwear Manufacturing Firms
The dependent variable in this study was the performance of footwear manufacturing firms. The respondents were requested to indicate their level of agreement with different statements regarding the performance of footwear manufacturing firms in Kenya. With a mean of 4.035 (SD = 1.042), the respondents agreed that revenue in the organization has been increasing over the years. Moreover, they agreed that the profitability of the organization has been increasing over the years, as shown by a mean of 3.931 (SD = 0.876). In addition, the respondents agreed, with a mean of 3.931 (SD = 1.024), that the market share of the firm has been increasing. Moreover, they agreed that the firm has been experiencing increased customer loyalty, as shown by a mean of 3.759 (SD = 1.048). In addition, the respondents agreed, with a mean of 3.655 (SD = 0.890), that the firm has managed to capture other firms' share of the market. Further, the respondents were neutral on whether the cost of production has been decreasing, leading to an increase in profitability, as shown by a mean of 3.621 (SD = 1.006). With a mean of 3.966 (SD = 0.898), the respondents agreed that there have been repeated repurchases by customers. Moreover, they agreed that the number of customers has been increasing, as shown by a mean of 3.862 (SD = 0.907); the response distribution for this item was 3.4% strongly disagree, 6.9% disagree, 6.9% neutral, 65.5% agree and 17.2% strongly agree. The respondents also agreed that customer satisfaction in the organization has been increasing, as shown by a mean of 3.793 (SD = 0.894).
Inferential Statistics
In this section, inferential statistics such as regression and correlation analysis were used to examine the effect of reverse logistics on the performance of footwear manufacturing firms in Kenya.
Correlation Analysis
The Pearson product-moment correlation coefficient was used to assess the strength of the association between the independent variable (reverse logistics) and the dependent variable (performance of footwear manufacturing firms). The findings are presented in Table 5. The study found a very strong, positive relationship between reverse logistics and the performance of footwear manufacturing firms in Kenya (r = 0.926, p-value = 0.000). Because the p-value of 0.000 was less than 0.05, the relationship was considered significant (Charles & Benson Ochieng, 2023). These findings are in line with Mbovu and Mburu's (2018) argument that the competitiveness of manufacturing firms in Kenyan industry is influenced by reverse logistics practices.
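As an illustration only (the study itself used SPSS), a minimal sketch of the Pearson product-moment correlation reported above is shown below; the file and column names are hypothetical placeholders.

```python
# Illustrative sketch of the Pearson product-moment correlation between
# reverse logistics and firm performance scores; names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("footwear_survey.csv")
r, p_value = pearsonr(df["reverse_logistics"], df["firm_performance"])
print(f"r = {r:.3f}, p = {p_value:.3f}")   # the study reports r = 0.926, p < 0.05
```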
Regression Analysis
Regression analysis was carried out to examine the relationship between the independent variable (reverse logistics) and the dependent variable (performance of footwear manufacturing firms). As depicted in Table 4.11, the R-squared for the relationship between green supply chain management practices and the performance of footwear manufacturing firms was 0.323, which means that 32.3% of the variation in the dependent variable (performance of footwear manufacturing firms) could be explained by the independent variables (green purchasing, green distribution, eco-design and reverse logistics). An ANOVA was performed to determine whether the model was a good fit for the data. As shown in Table 7, the calculated F-value was 211.241 and the critical F-value from the F-distribution table was 2.55. Because the calculated F-value was greater than the critical value and the p-value (0.000) was not more than the significance level (0.05), the model was considered a good fit for the data. The study revealed that reverse logistics has a positive and significant effect on the performance of footwear manufacturing firms in Kenya (β2 = 0.727, p = 0.000). Because the p-value of 0.000 was less than the significance level (0.05), the relationship was considered significant. This means that a one-unit improvement in reverse logistics is associated with a 0.727 improvement in the performance of footwear manufacturing firms. The findings are consistent with those of Nyarega (2017), who found that reverse logistics was linked to increased organizational performance in Kenyan manufacturing companies.
Conclusions
The study concludes that reverse logistics has a positive and significant effect on the performance of footwear manufacturing firms in Kenya. Moreover, the study established that reusable packaging, recycling, repackaging and product returns influence the performance of footwear manufacturing firms. This means that improving reverse logistics (reusable packaging, recycling, repackaging and product returns) enhances the performance of footwear manufacturing firms.
Recommendations
The study found that reverse logistics has a positive and significant effect on the performance of footwear manufacturing firms in Kenya.Therefore, this study recommends that the management should adopt reusable packaging, recycling, repackaging and product return to help in lowering material costs hence improving profit and the performance of the firm.Moreover, reverse logistics adds value, reduces risk and ensures a continuous movement of goods.
Areas for Further Research
The general objective of the study was to assess the effect of reverse logistics on the performance of footwear manufacturing firms in Kenya. However, the study focused on footwear manufacturing firms and hence the findings cannot be generalized to other manufacturing firms in Kenya. As a result, this study recommends that more studies be conducted to determine how reverse logistics influences the performance of other manufacturing firms in Kenya. Furthermore, the study found that reverse logistics explains 32.3% of the performance of footwear manufacturing firms, so more studies should be conducted to examine other factors that influence the performance of footwear manufacturing firms.
IDENTIFYING STUDENTS’ DIFFICULTIES IN PRONOUNCING ENGLISH VOCABULARY AT THE TENTH GRADE OF SMTK MO’ALE
Pronunciation is a fundamental aspect of building communication, enabling everyone to convey messages or ideas to other people clearly, correctly, and orally. Pronunciation becomes a problem when words are not pronounced correctly, which affects the listener's understanding and reaction. This research aimed to identify students' difficulties, and the causes of those difficulties, in pronouncing English vocabulary at the tenth grade of SMTK Mo'ale. The research was designed using a qualitative method. The instruments of data collection were interviews and documentation by video recording. The techniques of data analysis were data reduction, data display, and conclusion drawing/verification. After analyzing the data, the researcher found that students have difficulty with consonant sounds [ʤ, θ, t∫, ∫, v, c] and vowel sounds [ə-ɑ, ɔ, e-I, ʌ], difficulty in pronouncing close consonant sounds, difficulty in pronouncing the suffix -tion ("∫"), and a tendency to change letters into other pronunciations. The causes of the students' difficulties were (1) lack of confidence, (2) lack of practice, (3) the influence of the Indonesian language, (4) differences in spelling, pronunciation, and meaning, and (5) lack of vocabulary. Therefore, the researcher concluded that identifying students' difficulties in pronouncing English vocabulary is necessary, because once the difficulties and their causes are known they can receive primary attention in the teaching and learning process. It is suggested that English teachers, students, and future researchers use this research as a consideration and reference to help students pronounce words correctly.
A. Introduction
Language is a tool of communication used to interact and to convey messages and ideas to other people. In daily life, language cannot be separated from human life because it is used to communicate with each other; it is therefore essential for human beings. According to Brown (2002:60), "language is used for communication." The language in focus here is English. For students in an English department and in the field of education, speaking well and fluently is one of the main goals, and in virtually every school English is one of the subjects to be learned.
In English, there are several skills that students should master, such as speaking, reading, listening, and writing.
In this research, the researcher focuses on describing students' difficulties in pronunciation as an aspect of speaking skill. Goh and Burns (2012:15) state, "speaking is accepted by everyone as an essential language communication skill, but its importance to language learners goes beyond just day-to-day communication."
For beginners especially, speaking English is something new, and how to pronounce it clearly and correctly can seem unfamiliar; however, the more often they practice, the more their pronunciation improves.
Speaking is an important skill that students have to master in the English language. According to Bailey (2005:2), "Speaking is an interactive process of constructing meaning that involves producing and receiving and processing information." This means that speaking refers to the process of communication by producing verbal utterances, where an utterance deals simply with the things people say to convey meaning. Speaking is an activity of delivering a message; it occurs orally between a speaker and a listener. Speaking is one of the general aspects of English skill that should be mastered by students: by speaking, one can convey anything to other people.
According to Brown (2004:406-407), there are four aspects of speaking that students should consider, as follows: 1. Pronunciation. The Oxford Dictionary (2008) defines pronunciation as the way in which a language, or a particular word or sound, is spoken. Pronunciation refers in particular to the human sounds we use to make meaning.
Grammar
Grammar is the way words are organized into correct sentences.
Fluency
Fluency is the area of language ability related to the speed and ease of a language learner's performance in one of the four core language skills: speaking, listening, writing, and reading.
Vocabulary
Vocabulary concerns how someone chooses which words to use based on the topic they are talking about. Pronunciation is the act or manner of pronouncing words. As human beings, we produce words by sound: our voice is produced by the vibration of the vocal cords as the airstream from the lungs passes through the vocal tract to the mouth, creating sounds; these sounds, once pronounced, become words and sentences that carry meaning. Kelly (2000:1) stated that pronunciation is one of the important things in learning English in order to build good communication. This means that good communication requires pronouncing words correctly, because when a speaker produces the wrong sound, the listener may misunderstand, which affects their response or the action they take. When someone starts to pronounce a word, it is marked by sound; pronunciation is how we use the organs of speech to produce sounds in a particular way. The sounds of speech can be studied from various points of view, and there is a specific course on this that the researcher had studied previously. In this research, the researcher wants to identify what makes students find it difficult to pronounce English vocabulary.
Pronunciation is a basic element that cannot be separated from human life in communicating with each other, and it plays an important role in delivering speech. As the act or manner of pronouncing words, it is how people arrange significant sounds and pronounce them so that they form words; as a result, the listener can understand and answer correctly. It can therefore be said that pronunciation is the way words are pronounced. Hancock (2003:70) stated that pronunciation refers to the use of a sound system that is important for both speaking and listening; for example, when speaking fast, many native speakers join words together in certain ways. Speech refers to the faculty or power of speaking, oral communication, or the ability to express something, and pronunciation is the means of doing so.
Regarding pronunciation, Kelly (2000:1) stated that there are two elements that should be studied by students: suprasegmental features (including stress and intonation) and phonemes (including consonants and vowels). These features are explained as follows:
Suprasegmental feature
Suprasegmentals are features related to properties of sound such as tone, stress and intonation. According to Kelly (2000:3), suprasegmental features, as the name implies, are features of speech that generally apply to groups of segmental phonemes. From this statement, suprasegmentals indicate how a sound can be produced correctly or incorrectly. Brief information follows: 1. Stress. Stress refers to how loudly or weakly a word is said. Kelly (2000:3) notes that every word has identifiable syllables, and one of the syllables in each word will sound louder than the others. From this, it is clear that stress occurs when a sound is spoken: the syllables are combined to produce a sound with meaning, so that other people can hear and understand it.
Intonation
Intonation refers to the rise and fall of the voice in spoken sound.
Phonemes
Phonemes are the distinct sounds within a language; for example, when we pronounce a word, it is made up of syllables that can be divided and then combined into one word with meaning. This is generally studied in a phonology course.
According to Carr (2008:157), "Segmental phonology is the study of segmental phenomena such as vowel and consonant allophones." It is clear that the main concerns in segmental phonology are vowels and consonants.
English consonants
Consonants are characterized mainly by some obstruction above the larynx, especially in the mouth cavity. Crystal (2008:103) describes consonants in terms of both phonetics and phonology. Phonetically, a consonant is a sound produced by a closure or narrowing in the vocal tract, so that the airflow is either completely blocked or restricted enough that audible friction is produced; humans employ the speech organs in producing consonants, hence the term "articulation". Phonologically, consonants are those units which function at the margins of syllables, either singly or in clusters. English has 24 consonant phonemes, ending with [j].
English Vowels
Vowels are sounds made without any kind of closure obstructing the escape of air. A vowel can be defined as a continuous voiced sound produced without obstruction in the mouth; vowels are what may be called pure musical sounds, unaccompanied by any friction noise. The quality of a vowel depends upon the position of the tongue and the lips, because these articulators have a great role in producing vowels. As a result, the production of most vowels is managed by the tongue, which rises toward the palatal ridge. Vowel classification is based on which part of the tongue is used to produce the vowel.
Vocabulary plays an important role in English. According to Linse (2005:121), "vocabulary is the collection of words that an individual knows." Vocabulary is one of the linguistic components of learning English. The first thing students learn is vocabulary itself, because it is the basic component underlying the four language skills. In addition, vocabulary is a core component of language proficiency and provides much of the basis for how well learners listen, speak, write, and read. These four skills all depend on words: the more vocabulary people master, the better they can speak, write and read. Vocabulary is thus a main component with an important role in language teaching, and the path to good, fluent speech begins with mastering words and rules, starting from vocabulary.
Furthermore, it is vocabulary that gives students the capacity to communicate meaningful information to others and supports them in comprehending the language. Vocabulary is a group of words, each with its own meaning; usually there is the original word and, alongside it, its meaning in another language, for example from English into Indonesian.
According to the Oxford Dictionary (1995:322), "Difficulty is the state or quality of being difficult, the trouble or effort that it involves." Whoever it is and in whatever aspect, someone will inevitably meet a difficulty or obstacle in coming to know about something or in being able to do it properly and correctly. According to Webster's Dictionary, a difficulty is a factor causing trouble in achieving a positive result or tending to produce a negative result; a difficulty is also something that is hard to accomplish, deal with, or understand, and hard to change into a behavior quickly. Based on preliminary research by the researcher at SMTK Mo'ale, consisting of observation and interviews with five students, it was found that students have difficulties in pronouncing English vocabulary. For example, when students receive a gift or help from someone, the phrase to reply with is "Thank you", pronounced /'θæŋkju:/, or when asked to count a number such as "one" /wʌn/; instead, they pronounce these the same as the written form, such as /thank you/ and /on/. In fact, they cannot differentiate how to pronounce words that are almost similar, such as here/hear, eye/I, had/hat, four/for, send/sent. For them, English words are something new, because in their daily life they seldom use English as a tool of communication and still rely heavily on the local language. Sometimes they feel ashamed to pronounce English words, for example in the teaching and learning process, especially if their classmates gloat when they are wrong; this undermines their confidence and their desire to speak English. Regarding learning English vocabulary, English vocabulary differs from Indonesian in its form, including pronunciation and spelling. In addition, the way a word is pronounced is quite different from the way it is written; therefore, people, especially students who learn English, often find difficulties in learning. This is the basic problem for the students.
Based on the background above, the researcher formulated a research question. The data in this research are the errors, mistakes, and difficulties in pronouncing English vocabulary, and the source of the data is the students' utterances when they pronounce English vocabulary.
As research instruments, the researcher used the following data collection techniques: 1. Interview. An interview is a verbal activity of asking and answering questions to obtain information. The information obtained is recorded in writing, or as audio, visual, or audio-visual recordings.
Documentation
Documentation is the evidence that provides information, including ideas borrowed from others. In addition, the researcher recorded the students' utterances to make the data gained from the field trustworthy; the purpose of data recording was to set the data down in writing and assure the preservation of the data collected in the course of field or laboratory studies. After collecting the data, the next step was analyzing it. The researcher used a qualitative descriptive analysis, namely the data analysis model proposed by Miles and Huberman (1994:11-12).
C. Research Findings and Discussion
a. Students' Difficulties in Pronouncing English Vocabulary
Based on the results of the researcher's observation and of the documentation supported by the video recordings of the students mentioned previously, the results of the study indicate that the students of SMTK Mo'ale face several difficulties in pronouncing English vocabulary. In obtaining the data, the researcher made observations and documentation at the same time, recording the students' utterances as they read and pronounced some vocabulary items. Based on the video recordings of their utterances, the researcher found many deviations from the rules or standards of good pronunciation. In this study, the researcher found that the students have weaknesses in pronouncing certain letters. The students' difficulties are as follows: 1. Consonants a. Difficulty in pronouncing words that contain the sound "ʤ".
Based on the data analysis, it was found that students had difficulty pronouncing English words containing the sound "ʤ". This difficulty was indicated by the errors students made when pronouncing the words. For example, when asked to pronounce the words "introduce" and "teenagers", they pronounced them "introduk, introdus, introduce, tenagre, teananggers", whereas the correct pronunciations are (Intrə'dƷu:s, Ti:neidƷərs). In this case, students pronounced the sound like "d" or "g"; in addition, most of them pronounced the word as in its written form. Detailed information on these errors can be seen in the appendix. b. Difficulty in pronouncing words that contain the sound "θ". Based on the data analysis, it was found that students had difficulty pronouncing words containing the sound "θ". This difficulty was indicated by the errors students made when pronouncing the words. In this case, students could not use the organs of speech correctly, failing to bring the tongue against or close to the upper teeth and alveolar ridge. For example, when asked to pronounce the words "thinking, birthday, mother tongue", they pronounced them "tiking or tingki, bridei, moter tuk or moter tong", whereas the correct pronunciations are (Ɵiŋkiŋ, B3:Ɵdei, mɅðə tɅŋ). c. Difficulty in pronouncing words that contain "t∫". Based on the data analysis, it was found that students had difficulty pronouncing English words containing the sound "t∫". This difficulty was indicated by the errors students made when pronouncing the words. In this case, students could not produce the sound by placing the front of the tongue against the palate near the alveolar ridge. For example, when asked to pronounce the words "future, feature, structure", they pronounced them "future, fature, fatur, struktur, fitur or feature", whereas the correct pronunciations are "'fju:tʃə, fi:tʃə, strɅktʃə".
d. Difficulty in pronouncing close consonant sounds
Based on the data analysis, most of the students omitted certain letters when they were combined with other letters. This difficulty was indicated by errors made by the students in pronouncing the words, where they did not pronounce all of the sounds correctly. For example, when they pronounced the words "excuse me, content, reward, sport, thinking", they said "eks mi, ekus me or ikus mi, centet or conte, rewed, rewar, tiking". However, the correct pronunciation is "Ik'skju:s mi, kən'tεnt, ri'wƆ:d, Ɵiŋkiŋ". In this case, the sounds omitted by the students are "x", "t", and "d". e. Difficulty in pronouncing the suffix -tion ("∫"). Based on the data analysis, it was found that the students had difficulty pronouncing English vocabulary containing the sound "∫". This difficulty was indicated by errors made by the students in pronouncing the words. In this case, the students could not produce the sound by placing the front of the tongue near the alveolar ridge. Examples include: attention, appreciation, invitation, information, celebration, congratulation, application, prediction, direction, reduction, communication, graduation, implementation. f. Difficulty in identifying the sound of the letter "c" in certain English words.
Based on the data analysis, it was found that the students had difficulty identifying the sound of the letter "c" in certain English words. This difficulty was indicated by errors made by the students in pronouncing the words. In this case, the students did not pronounce the words according to the standard of pronunciation; that is, they did not know when the letter "c" should be realized as "k" or "s" in a word. For example, when asked to pronounce the words "content, courage, communication, confuse, careful", they began the words with the sound of the letter "c" rather than "k" or "s". However, the correct pronunciation is "kən'tεnt, 'keəful, kəmju:nə'keiʃən, kən'fju:z, Sεli'breiʃən". g. Difficulty in pronouncing words that contain the sound "v". Based on the data analysis, it was found that the students had difficulty pronouncing such words. This difficulty was indicated by errors in which the students replaced the letter "v" with "f". For example, when asked to pronounce the word "invitation", they pronounced it "infitation"; however, the correct pronunciation is "invə'teiɭən". 2. Vowels a. Difficulty in pronouncing words that contain the sound "ə". Based on the data analysis, it was found that the students had difficulty pronouncing English vocabulary containing the sound "ə". This difficulty was indicated by errors made by the students in pronouncing the words. In this case, the students pronounced the words with the letter "a" rather than "ə". For example, when asked to pronounce the words "achievement, awareness, aprovement, amazing, accompany, attention", they pronounced them "Aktivemen, aprovemen, amazing, akompani or akompeni". However, the correct pronunciation is (ə'tʃi:vmənt, ə'weərnəs, ə'pru:vmənt, ə'meiziŋ, ə'kɅmpəni).
Detailed information on these errors can be found in the appendix. b. Difficulty in pronouncing words that contain the sound "ɔ". Based on the data analysis, it was found that the students had difficulty pronouncing English vocabulary containing the sound "ɔ". This difficulty was indicated by errors made by the students in pronouncing the words. The sound is a long vowel, produced almost fully back with quite strong lip rounding, and can be described as a half "o". For example, when asked to pronounce the words "sport, morning", they pronounced them "spor, morning". However, the correct pronunciation is (SpƆ:t, mƆ:niŋ). c. Difficulty in distinguishing words that contain the sounds "e" and "i". Based on the data analysis, it was found that the students had difficulty distinguishing English vocabulary containing the sounds "e" and "i". This difficulty was indicated by errors made by the students in pronouncing the words. In this case, the students could not distinguish when to use "e" or "i" in a word. For example, when asked to pronounce the words "excuse me, reward", they pronounced them "Eks mi or Es kusi mi, Rewar". However, the correct pronunciation is (Ik'skju:s mi, ri'wƆ:d). d. Difficulty in pronouncing words that contain the sound "ʌ". Based on the data analysis, it was found that the students had difficulty pronouncing words containing the sound "ʌ". This difficulty was indicated by errors made by the students in pronouncing the words. For example, when asked to pronounce the word "lunch", they pronounced it "lunch or luc"; however, the correct pronunciation is (lɅntʃ). Regarding the causes, the results show that one source of the students' difficulty in pronunciation was the influence of the Indonesian language. This happened because the students live in a village far from the city, with little exposure to English in their everyday activities, especially in communication, so when they try to use English to communicate it seems difficult: they always carry their regional accent, which affects their pronunciation, and they pronounce the words as in Indonesian, that is, the same as the written form. R: "Tidak pak, tidak ada yang ajarin... karena kata orang tua bukan bahasa nenek moyang saya." ("No, sir, nobody taught me... because, according to my parents, it is not the language of my ancestors.") d. Difference in spelling, pronunciation, and meaning. Based on the results, another cause of the students' difficulty in pronunciation was the difference between how English words are written, how they are read, and what they mean. This was the answer given by most of the respondents; it indirectly lowers their motivation to learn, and there is an assumption that English is not the language of their ancestors.
D. Closing
After conducting the research and analyzing the data from the school, it can be concluded that, first, students in the tenth grade of SMTK Mo'ale have difficulties in pronouncing English vocabulary, covering both English consonant and English vowel phonemes. Pronunciation is the way in which a language or a sound is spoken. The students cannot pronounce every word correctly according to the standard of good pronunciation, so they should study hard and pay attention
Assessment of Tidal Range Changes in the North Sea From 1958 to 2014
Abstract We document an exceptional large‐spatial scale case of changes in tidal range in the North Sea, featuring pronounced trends between −2.3 mm/yr at tide gauges in the United Kingdom and up to 7 mm/yr in the German Bight between 1958 and 2014. These changes are spatially heterogeneous and driven by a superposition of local and large‐scale processes within the basin. We use principal component analysis to separate large‐scale signals appearing coherently over multiple stations from rather localized changes. We identify two leading principal components (PCs) that explain about 69% of tidal range changes in the entire North Sea including the divergent trend pattern along United Kingdom and German coastlines that reflects movement of the region’s semidiurnal amphidromic areas. By applying numerical and statistical analyses, we can assign a baroclinic (PC1) and a barotropic large‐scale signal (PC2), explaining a large part of the overall variance. A comparison between PC2 and tide gauge records along the European Atlantic coast, Iceland, and Canada shows significant correlations on time scales of less than 2 years, which points to an external and basin‐wide forcing mechanism. By contrast, PC1 dominates in the southern North Sea and originates, at least in part, from stratification changes in nearby shallow waters. In particular, from an analysis of observed density profiles, we suggest that an increased strength and duration of the summer pycnocline has stabilized the water column against turbulent dissipation and allowed for higher tidal elevations at the coast.
particular ports, weirs, and estuaries. More recently, the topic of changes in ocean tides has been revived and extended to the scales of shelves, basins and the global ocean-a development fueled by the digitization and publication of global data sets of tide gauge records, see P. Woodworth et al. (2017). In fact, statistically significant trends of tidal parameters of the order of a few percent (in relative terms) are now well documented around the world (e.g., Flick et al., 2003;Jay, 2009;Mawdsley et al., 2015;Ray, 2009;Talke & Jay, 2017; P. L. Woodworth et al., 1991). Fluctuations of similar magnitude and regional extent have been observed on interannual time scales (e.g., Devlin et al., 2014;Feng et al., 2015;Müller, 2011;Ray & Talke, 2019).
Despite this ample evidence of changes in tides in water level series, the forcing factors and spatial extent of secular and short-term variability in tides remain uncertain. P. Woodworth (2010) succeeded in detecting coherent patterns of amplitude and phase trends in primary constituents along the North American coasts, but found less regional consistency in data from Asia, the Australian Seas or Europe. However, some spatially coherent changes could still be observed in smaller and well-instrumented areas. A major problem identified by P. Woodworth (2010) is that small-scale (often site-specific) and large-scale changes may occur simultaneously, thereby impeding research of the underlying physical processes. Over wider coastal sections, and at sites open to the sea, the effects of a rise in mean sea level (MSL) on tidal wave propagation explain only a fraction of the observed trends (Schindelegger et al., 2018). Accordingly, the assumption persists that other mechanisms-such as changes in stratification, turbulent dissipation, and variations in shoreline position or bed roughness-play major roles; see Haigh et al. (2020) for a review. The present consensus is that in many areas of the world a combination of different oceanographic processes may be at work. For instance, Ray and Talke (2019) suggest that the large secular changes of the lunar M2 tide in the Gulf of Maine could be caused by both sea level rise and persistent stratification changes. Yet, as implied above, any contributing mechanism will act on its own characteristic spatial and temporal scales, overlaying and possibly reinforcing other processes. This particularly applies to anthropogenic construction measures (e.g., building of dykes and tidal barriers) that can cause transient perturbations to the local tidal regime and affect adjacent stretches of coastline (Talke & Jay, 2020). Therefore, a major challenge is the separation of local effects and large-scale changes and their subsequent attribution to certain forcing factors.
Exceptional changes of tidal range in the German Bight have been documented as early as in Führböter & Jensen (1985) and are illustrated in Figure 1; see also Jensen (2020). Between 1958 and 2014, changes in tidal range amount to approximately 3% (e.g. Helgoland Binnenhafen, #55 in Figure 2/Table 1) at some of the investigated tide gauges to more than 11% at others (e.g. Wyk auf Föhr, #65). The latter is equivalent to a trend of 5.7 mm/yr at Dagebüll (#66) and outpaces the simultaneous local (∼2 mm/yr, Dangendorf et al., 2015) and global MSL rise, which is approximately 3 mm/yr today (Dangendorf et al., 2019) and was 1.5 mm/yr between 1900 and 2012 (Oppenheimer et al., 2019). To our knowledge, this magnitude of tidal range change is one of the highest in the world, only exceeded by developments in the Gulf of Maine (Ray & Talke, 2019). It further seems that the overlap between local and large-scale effects in the North Sea is particularly pronounced, possibly nurtured by the region's character as a shelf sea with a tide generated in the Atlantic. Previous research (summarized in Jensen et al., 2014) has ruled out astronomical, large-scale morphological, or tectonic causes (at least in the German Bight), but pointed to the generally non-linear and non-uniform behavior of water levels in the North Sea. To improve our understanding of these puzzling tidal range changes, we aim to address the following questions through systematic data analysis: (1) Are these changes on different time scales detected within the German Bight a localized phenomenon, or are they part of a larger-scale development spreading over adjacent areas within or even outside the North Sea region? (2) Is it possible to separate and quantify large-scale and small-scale effects from observed records?
(3) If (2) is the case; can we attribute physical causes to the observed changes?
Below, we first discuss geographic and oceanographic characteristics that are fundamental to the understanding of the tidal regime in the North Sea, the available database, its limitations, and major processing steps (Section 2). Section 3 introduces the analytical methods of Ordinary Kriging, which is here mainly used for gap-filling as the subsequent PCA requires complete time series. The results of our analyses are described extensively in Section 4. To answer the abovementioned research questions, we start our analyses with the detection of observed changes in the tidal range at individual sites. In a second step, we apply a PCA to identify modes of variability common to all (or the majority of) sites and to distinguish them from local anomalies. In a last step we analyze potential causes and drivers of the observed changes. The paper concludes with a summary and additional remarks in Section 5.
Study Area
The North Sea is one of the largest shelf seas on Earth with a size of about 575,300 km 2 (Huthnance, 1991).
Counted counter-clockwise, its margins comprise coastal sections of the United Kingdom, France, Belgium, the Netherlands, Germany, Denmark, and the south of Norway (Figure 2; Becker et al., 2009; Schrottke & Heyer, 2013). Figure 2 also shows the locations of the tide gauges (black dots) used in this study, including their respective numbering (see also Table 1), the three semidiurnal amphidromic areas (including the amphidromic points for the M2 and S2 constituents, marked by black propellers), and contours of equal mean tidal range (black dotted lines; Sündermann & Pohlmann, 2011). The basin connects to the North Atlantic via a large inlet between Scotland and Norway in the north and a narrow opening through the English Channel in the southwest, and it opens to the Baltic Sea in the east. Water depths in the North Sea are on average 90 m but vary greatly, generally increasing from south to north. While the southern parts are often shallower than 40 m with lowest depths in the German Bight, they increase to about 300 m at the continental shelf toward the Norwegian Trench and toward the entry into the Norwegian Sea in the northwest. There are also extensive shallow water regions off the south-eastern coast of the United Kingdom known as the Dogger Bank complex, with their western part extending to the coasts of Norfolk and Suffolk (Quante & Colijn, 2016; Figure 2). The tidal regime in most parts of the North Sea is strongly influenced by the astronomical, mainly semidiurnal, tides entering the basin from the Atlantic. The greater part of these oscillations enters between the Shetlands and Scottish mainland and a smaller part through the English Channel. They travel counter-clockwise through the entire North Sea basin as Kelvin waves. The entry times of the tidal high and low waters are therefore shifted relative to each other according to the celerity of the tidal wave. This physical setting results in three amphidromic points, one close to the English Channel, one off the coast of Norway, and one central in the North Sea basin (Proudman & Doodson, 1924). Since the North Sea's basin shape is close to the resonance frequency in the semidiurnal spectral band, the superposition of the principal lunar and solar tides M2 and S2 leads to a significant spring neap cycle. These two constituents cause a potential tidal range between 1 and 5 m (Quante & Colijn, 2016). Accordingly, the tidal regime of the North Sea can be classified as macrotidal (>4 m), mesotidal (2-4 m) and microtidal (<2 m) (Haigh, 2017), with the actual tidal range being strongly influenced by local factors. For example, the mean spring tidal range at the east coast of the United Kingdom varies between 3.60 m (Aberdeen) and 6.20 m (Immingham) (Horsburgh & Wilson, 2007). The mean tidal range in the data set used below is about 3.40 m in the UK and the English Channel, 1.98 m at the Dutch west coast, 2.33 m at the Dutch north coast, and 2.82 m in the German Bight.
Data
Time series of water level from 70 available tide gauges around the North Sea basin were collected from various sources. Data from Global Extreme Sea Level Analysis, GESLA Version 2 (GESLA; P. Woodworth et al. 2017), Open Earth (Deltares) and the responsible German authorities (Wasser- und Schifffahrtsverwaltung des Bundes via the portals of the associated Central Data Management, ZDM) were used. The available time series vary considerably in length and completeness. The earliest measurements in the form of tidal high and low water readings are from 1843 (Cuxhaven Steubenhöft, Germany, #60), while on the Dutch coast, data from some stations have only been digitally available since the 1980s. High-resolution data sets with an equidistant sampling between 1 and 60 min were used as well as time series of tidal high and low water. We excluded equidistant time series with a resolution coarser than 60 min, as supplemental analyses have shown that they insufficiently describe the height and timing of individual tidal high and low waters. The tidal range was calculated as the difference between each tidal high water and the mean of the two surrounding tidal low waters, according to the German standard (DIN 1994-3). From those, we calculated monthly averages and removed the mean seasonal cycle, as we are mainly interested in longer-term changes. Considering the 18.6-year nodal cycle and the end of numerous water level series in December 2014, we adopt an analysis period from January 1958 to December 2014, approximately 3 nodal cycles. Tide gauges known to be located near to weir installations or in rivers were excluded, as these are at least partially separated from the oscillation system of the North Sea. Seventy time series of tidal range remained in the data set, forming the basis for our investigations (Table 1, Figures 2 and 3). Acknowledging the counter-clockwise propagation direction of the tidal wave, the tide gauges used in this study are counted by starting at Lerwick (Shetland Islands) and ending at Tregde (Norway). The average completeness of the stations is 64% in the United Kingdom, 65% in the Netherlands, and around 88% in Germany.
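The pre-processing described above can be condensed into a short sketch. The snippet below is illustrative only and is not the authors' code; the function and variable names are hypothetical, and it assumes the high and low water readings are available as pandas Series with datetime indices.

```python
# Minimal sketch of the tidal-range pre-processing described in the text:
# each tidal range is the difference between a tidal high water and the mean
# of the two surrounding tidal low waters, after which monthly means are
# formed and the mean seasonal cycle is removed.
import pandas as pd

def tidal_range_series(high, low):
    """high, low: pandas Series of tidal high/low water heights with a DatetimeIndex."""
    ranges = {}
    for t, hw in high.items():
        before = low[low.index < t]
        after = low[low.index > t]
        if len(before) == 0 or len(after) == 0:
            continue  # skip high waters without two surrounding low waters
        ranges[t] = hw - 0.5 * (before.iloc[-1] + after.iloc[0])
    return pd.Series(ranges)

def monthly_deseasoned(tr):
    """Monthly means with the mean seasonal cycle (12 calendar-month means) removed."""
    monthly = tr.resample("MS").mean()
    climatology = monthly.groupby(monthly.index.month).transform("mean")
    return monthly - climatology
```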
The statistical analyses and procedures (Ordinary Kriging, trend analysis, PCA) carried out here are based exclusively on the tide gauge records named in Table 1.
Methodology
In addition to the procedures explained in the following sections, linear trend analysis, harmonic analysis of tidal constituents, and wavelet coherence analysis were carried out to characterize multiple features of the tide gauge records in the North Sea. Any significance statements made throughout the manuscript are based on a 95% confidence level. We calculated linear trends using ordinary least squares regression and assessed their significance by considering normally distributed but serially correlated residuals following an autoregressive process of the order 1 (e.g., Mawdsley & Haigh, 2016). Annual amplitudes for the leading constituents were determined by a harmonic analysis using the MATLAB toolbox U-Tide (Codiga, 2011) and the wavelet analyses were conducted with the MATLAB package of Grinsted et al. (2004). None of these methods are explained here in detail due to their general recognition and widespread use.
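As an illustration of the trend assessment, the following hedged sketch fits an ordinary least-squares slope and adjusts its significance for lag-1 autocorrelated residuals via an effective sample size; the authors' exact implementation may differ in detail, and all names here are hypothetical.

```python
# Sketch: OLS trend with an AR(1)/effective-sample-size correction of the
# slope's standard error (one common way to handle serially correlated residuals).
import numpy as np
from scipy import stats

def ar1_trend(t_years, y):
    mask = np.isfinite(y)
    t, y = np.asarray(t_years)[mask], np.asarray(y)[mask]
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    n_eff = len(y) * (1 - r1) / (1 + r1)               # effective sample size
    se = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum((t - t.mean())**2))
    p = 2 * (1 - stats.t.cdf(abs(slope) / se, df=max(n_eff - 2, 1)))
    return slope, p                                    # slope per year, two-sided p-value
```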
Kriging
Kriging (also Gaussian process regression) is a geostatistical method to interpolate missing values based on information stemming from neighboring stations (i.e. their covariance matrix). It is here mainly used for gap-filling as the following Principal Component Analysis (PCA) requires complete time series. Originally developed in the 1950s for mining purposes (Krige, 1951), this method has been used increasingly in other areas including the analysis and interpretation of incomplete surface air temperature fields (Rigor et al., 2000; Rohde et al., 2013). In general, Kriging is a linear interpolation procedure. Missing values are determined according to a given covariance matrix, which is calculated from the existing observations (Cressie, 1990). Kriging provides some important advantages over other interpolation procedures. The interpolated values change smoothly and always pass through the observed values at the sample points. Problems related to the accretion of measurement points are avoided by considering the statistical distances between the neighbors used in the interpolation of a certain value, which means that the spatial variance is taken into account. If clustering occurs in a region, the weights of the affected sample points are reduced by including the density. In sparse regions, only the distance is considered. The procedure can be summarized with the formula \(\hat{Z}(x_0) = \sum_{i=1}^{n} w_i\, z_i\), where \(\hat{Z}\) is the query value at the unobserved location x_0 and i = 1 … n represents a running index over the n observations. \(\hat{Z}\) is thus computed from a linear combination of all observed values z_i = Z(x_i), which are weighted by the parameters w_i according to distance and density. A special property of the Kriging procedure is the convergence of interpolated values to the mean value of their region with increasing distance to the available samples. That is why Kriging estimates at query points tend to be conservative (Cowtan & Way, 2014).
In keeping with this characteristic, the general tidal range behavior worked out later in Section 4.1 is also valid when the Kriging step is omitted.
We use Kriging for two different purposes. First, the temporal gaps in the tidal range data (Section 2.2) were closed for each monthly time step in the investigation period. Figure 3a illustrates that this is a relevant issue in the Netherlands, in particular before 1970, while in the UK data gaps occur before 1990. Second, additional data points along the coastline of the North Sea were interpolated, allowing us not only to analyze the temporal evolution of each station series in terms of a linear trend but also the spatial structure of these trends ( Figure 4). For both applications, we use the Ordinary Kriging algorithm of Schwanghart (2020). Note also that in transitioning from Figure 3a to Figure 3b, the nodal cycle (with peaks for semidiurnal M 2 in the years 1977, 1996, and 2015) was removed.
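For readers wanting to picture the gap-filling step, the following is a minimal ordinary-kriging sketch under an assumed exponential covariance model; the study itself relies on the Ordinary Kriging implementation of Schwanghart (2020), so the covariance choice and parameters here are illustrative assumptions only.

```python
# Simplified ordinary kriging: weights solve the standard kriging system with a
# Lagrange multiplier enforcing sum(w) = 1, so estimates pass through the
# observations and revert to the regional mean far from the data.
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_query, sill=1.0, rng=200.0):
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / rng)                  # assumed exponential covariance

    n = len(z_obs)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs)
    K[n, n] = 0.0                                       # Lagrange row/column
    k = np.ones((n + 1, len(xy_query)))
    k[:n, :] = cov(xy_obs, xy_query)
    w = np.linalg.solve(K, k)                           # kriging weights + multiplier
    return w[:n, :].T @ z_obs                           # estimates at the query points
```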
Principal Component Analysis
Principal Component Analysis (PCA), a method of multivariate statistics, is used to structure and simplify extensive data sets by approximating a large number of statistical variables with a smaller number of significant, non-correlated (orthogonal) linear combinations. If x is a vector with n random variables, first a linear function f_1(x), dependent on constant coefficients c_1i, is determined by calculating the eigenvector from the spatially weighted covariance matrix of x. Then f_1(x) represents the largest possible overall variance of all variables in x, \(f_1(x) = c_{11} x_1 + c_{12} x_2 + \ldots + c_{1n} x_n\). This decomposition process is repeated for a function f_2(x), which is uncorrelated with f_1(x) and describes the largest possible amount of the remaining variance. It is possible to find n such functions, but the purpose is usually to explain as much variance as possible with significantly fewer functions f_i(x), known as Principal Components (PCs) (Jolliffe, 2002). Therefore, the PC of a temporally and/or spatially varying physical process represents orthogonal spatial patterns, in which the data variance is concentrated. Using the leading PC, an approximate reconstruction of the observed variable can be generated.
In this study, we apply PCA to the entire monthly de-seasoned tidal range data set from the 70 sites (Figure 2), whose gaps were previously filled through Ordinary Kriging. If there are indeed large-scale signals affecting the tidal range in the North Sea, they should appear as a coherent pattern at multiple sites, and therefore be visible in the leading PCs. By contrast, spatially confined ("small-scale") anomalies in tidal range will be shifted into the higher PCs, as these can only be responsible for a small part of the overall variance. Such shifting includes not only the response of the local tidal system to, for instance, anthropogenic construction measures but also to changes in bathymetry or morphology. Local effects can explain more variance than large-scale effects at individual sites or small subsets, but never for the entire data set. It is therefore important to consider the explained variance of the PCs at each tide gauge individually to ensure that large-scale effects with a very small influence on the overall variance are retained. With this approach, the PCA enables us not only to attribute tidal range changes to small-scale and large-scale effects, but also to calculate the spatial extent and the temporal development of patterns that might reflect important environmental factors.
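A bare-bones version of this decomposition could look as follows. It is a sketch rather than the authors' code, it omits the spatial weighting of the covariance matrix mentioned above, and the variable names are hypothetical.

```python
# Sketch of the PCA step: the gap-filled, de-seasoned monthly tidal-range matrix
# (time x stations) is decomposed into principal components (temporal patterns)
# and coefficients (spatial loadings).
import numpy as np

def pca(data):
    """data: 2-D array, rows = monthly time steps, columns = tide gauges."""
    anom = data - data.mean(axis=0)                 # remove station means
    cov = np.cov(anom, rowvar=False)                # station-by-station covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                # sort by explained variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    pcs = anom @ eigvec                             # principal component time series
    explained = eigval / eigval.sum()
    return pcs, eigvec, explained
```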
Trends of Tidal Range and Tidal Constituents
To address the three research questions defined in the introduction, we first map the spatial extent of the long-term changes in tidal range in the study area. We start our analysis by calculating linear trends for each individual record over a common period between 1958 and 2014 and map them in Figure 4. In this step of the analysis, the time series of Lerwick (Shetland Islands) and Tregde (Norway) were omitted, since both are the only available tide gauges within large areas and, therefore, there is insufficient data density for use by the Kriging algorithm. We identify a variety of trends with a particularly pronounced spread in the southern parts of the basin. While there are no significant trends at the north-eastern coast of the United Kingdom, negative trends occur further south between Immingham and Dover. Here, six of eight stations show significant negative trends while the remaining two do not differ significantly from zero. In this area, Immingham shows the largest negative and statistically significant trend (Table 2). Local changes affect some tide gauges like Den Oeverbuiten (Netherlands) or Büsum (Germany), which at first sight seem to contradict this spatial pattern. We suggest that these local exceptions are mainly caused by anthropogenic interventions such as the building of the Afsluitdijk at Den Oeverbuiten or dredging and dike constructions near to Büsum, which coincide with anomalies in the local tidal range series. From the aforementioned findings, we conclude that widespread and statistically significant secular changes in tidal range occurred around large parts of the southern North Sea between 1958 and 2014, although locally interrupted by opposing signals at individual sites. Furthermore, we note contrasting and dipole-like trends along south-western (significant negative values) and south-eastern margins of the North Sea (significant positive values). It remains to be critically noted that the changes in the tidal range at some individual tide gauges could also be instrumental. However, due to the large scale and the spatial homogeneity of the patterns, this cannot be causal for the overall picture.
The identified dipole-like trend pattern has its node approximately at the longitude of the English Channel ( Figure 4) and suggests a westward displacement of the main low amplitude areas (including amphidromic points of M 2 and S 2 ) located in the central North Sea and near the English Channel ( Figure 2). To obtain further indications of such a shift, we have performed a harmonic analysis to determine the main semi-diurnal M 2 and S 2 tidal constituents, which make the largest contributions to the tides in the North Sea. Since high-resolution hourly time series with a coverage of at least 75% between 1958 and 2014 are required for a tidal analysis, only a subset of 28 tide gauge records is appropriate for our assessment. The available database is thus reduced and fewer stations show significant trends (20 for M 2 , 14 for S 2 ). Nevertheless, the overall findings ( Figure 5) are similar to the assessment focusing on tidal ranges highlighted in Figure 4; that is for both constituents (though with larger magnitude for M 2 ), negative trends occur in the southeast of the UK and the highest positive trends are found in the German Bight area. A displacement of the M 2 and S 2 amphidromic point is, therefore, also implicated.
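By way of illustration, annual amplitudes of the two constituents can be estimated with a simple least-squares harmonic fit, as sketched below; the study itself uses the MATLAB U-Tide toolbox, which additionally applies nodal corrections and resolves many more constituents, so this snippet is only a conceptual stand-in with hypothetical names.

```python
# Least-squares fit of the M2 and S2 harmonics to a segment of hourly data.
import numpy as np

M2_PERIOD_H = 12.4206012   # hours
S2_PERIOD_H = 12.0         # hours

def fit_m2_s2(hours, eta):
    """hours: time in hours; eta: hourly water levels. Returns M2 and S2 amplitudes."""
    cols = [np.ones_like(hours)]
    for period in (M2_PERIOD_H, S2_PERIOD_H):
        w = 2 * np.pi / period
        cols += [np.cos(w * hours), np.sin(w * hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, eta, rcond=None)
    amp_m2 = np.hypot(coef[1], coef[2])
    amp_s2 = np.hypot(coef[3], coef[4])
    return amp_m2, amp_s2
```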
The observed changes in the tidal range can be considered in the context of the elaborations of Taylor (1922) on amphidromic systems. Based on simple analytical solutions, Taylor demonstrated an altered propagation speed due to increased water depth, leading to a shift of the amphidromic point toward the open boundary in a semi-enclosed basin. As a result, the tidal range at the opposite (dissipative) end of the basin increases. In our case, this statement implies a shift of the amphidromic points toward the north, seemingly contradicting the changes (i.e., an east-west shift) observed here. However, as pointed out in Haigh et al. (2020), increasing the tidal range and thus the tidal currents at the dissipative end could lead to a higher frictional energy loss. This would cause a leftward deflection of the tidal wave and the amphidromic point. Modeling studies (Idier et al., 2017; Pelling and Green, 2014) point to sensitivities of the tidal response to the magnitude of sea level rise and whether or not low-lying land is inundated in the numerical simulation (flooding or no-flooding). In addition to these two extreme cases of shoreline treatment, Pelling and Green (2014) investigated the M2 response to partial flooding, roughly based on the actually existing protective structures. This last option provides the greatest agreement with our results, but again does not reflect the negative trends in the South East UK. In fact, the tide around Suffolk/Essex exhibits little sensitivity to the shoreline scenario (Idier et al., 2017). More to the point, the assumption of no-flooding seems to be plausible in the areas of the greatest changes (German Bight, northern parts of the Netherlands) and here the results agree with all existing modeling studies. No final assessment can thus be made here as to whether and which models are most consistent with the observations. In this context, effects may be at work that are not included in numerical models so far. As Arns et al. (2015a) point out, various non-linear relationships between the individual parameters in marginal seas are of particular importance, especially the dynamic response of the sea surface to meteorological forcing (see also Arns et al., 2020). In addition, time-varying bed roughness and bottom friction coefficients (Rasquin et al., 2020) and changes in turbulent dissipation with stratification (Müller 2012) may play a role.
Principal Components and Large-Scale Effects
Our results of the linear trend analysis point toward a distinct spatial pattern that is occasionally interrupted by diverging trends at individual locations. To further distinguish between the large- and small-scale effects of tidal range changes, comprising both trends and short-term variability, we apply PCA (Figure 6). The first two PCs, which are presented in Figure 6, explain about 69% of the total variance in the entire data set (PC1: 55%, PC2: 14%), while each of the remaining 68 PCs contributes between 0.01% and 4%. Additionally, no other PC represents significant parts of the variance at a larger number of tide gauges; the remaining PCs are therefore rather local in character. This indeed suggests that the two leading PCs reflect coherent large-scale effects, while local effects through anthropogenic interventions are retained in the remainder of the lower PCs. The amount of these percentages depends to some extent on the spatial distribution of the tide gauges, making it necessary to consider the PCA results at each tide gauge (Figures 6c, d, 7d). PC1 describes an increase in tidal range over time, as evident from its positive slope and the consistently positive values of the associated coefficients at all sites (Figure 6a). The magnitudes of the coefficients reveal that the signal represented by PC1 increases as one travels counterclockwise throughout the basin, reaching its strongest expression in the German Bight. PC2 exhibits a negative trend and is most pronounced in the area of the southeastern coast of the UK. The coefficients of PC2 change sign from positive values along the UK coast to negative values in the area of the German Bight (Figure 6b). Similar to the trends of measured tidal range (Figure 4), a dipole-like temporal evolution with a node in the area of the English Channel is detected.
Table 2. Measured and Reconstructed Trends in Tidal Range and Explained Variance of the Different Regions (PC = principal component).
In general, PC1 accounts for the increase in tidal range in the German Bight and PC2 represents the decrease in tidal range at the south-eastern coast of the UK. This contrast is also reflected in the correlation coefficients of the first two PCs with the measured tidal range changes (a metric that is mostly influenced by inter- and intra-annual variability). Figure 6c shows moderate but significant correlations of 0.3-0.5 for PC1 at the south-western boundary of the North Sea and displays the highest values (∼0.9) in the area of the German Bight. A contrasting picture emerges for PC2. In the area of the German Bight, correlations with tidal range changes are non-significant and close to zero but almost consistently above 0.7 and significant in the United Kingdom (Figure 6d).
These patterns are also confirmed when considering the explained variance for particular clusters of tide gauges. Along southeastern United Kingdom coastlines, where negative trends are found, the explained variance of PC1 amounts to only 3%, while PC2 explains about 58% (Table 2). In the Netherlands, the mean explained variance for PC1 is 45% and only 10% for PC2. The contribution of the second mode drops to 3% in the German Bight, whereas PC1 explains 77% of the variance on average. This spatially reversing pattern is also detectable in the coefficients for PC1 and PC2 (Figure 6b), just as in the linear trends of the tidal range observations. Apparently, PC1 with its positive slope is more pronounced in the area of the German Bight, whereas PC2 (negative slope) dominates in the southeast of the United Kingdom. This indicates different underlying physical mechanisms for these large-scale signals.
Impacts on Local Tidal Range
After identifying two large-scale patterns relevant at the majority of tide gauge records in the North Sea, we next ask whether we also can identify small-scale effects using the residual signal after removing the linearly regressed PC1 and PC2 at individual sites. Figure 7d shows that alongside the described contrast between PC1 and PC2, local influences play a major role in some cases. Especially noticeable are again tide gauges Den Overbuiten (Netherlands, #33) and Büsum (Germany, #60) due to their high percentage of local effects. For example, PC3 (explained overall variance: 4%) captures more than 50% of the variance at Büsum and around 30% at Cuxhaven (Germany, #59). This anomaly is reflected in the comparison of the measured trends with those from re-synthesizing PC1 and PC2 (Figure 7a). The confidence bounds show clear overlaps for most cases, but not at tide gauges Den Overbuiten, Büsum, and Cuxhaven. The local characteristics are sufficiently pronounced to overshadow the large-scale signals, which is also evident from the difference between measured and reconstructed trends in Figure 7b. In this plot, the 1.0 mm/yr residual at Delfzijl (Netherlands, #52) stands out, too. This difference can also be traced back to significant local effects, most likely caused by the deepening of the outer areas of the Ems (Hollebrandse, 2005). Hence, local effects have a very large influence on the explained variance at individual sites. However, the general trends at most gauges can be qualitatively and quantitatively reproduced by PC1 and PC2. Figure 7c underlines this statement by a spatial map of the reconstructed trends, again highlighting the dipole-like pattern between UK and German Bight sites. Comparing with the estimates in Section 4.1, the mean trend of tidal range synthesized from PC1 and PC2 at the southwest coast of the United Kingdom is −1.0 mm/yr, just like the measured trend (Table 2). Similar findings apply to the European west coast, where an average reconstructed trend of 1.0 mm/yr is achieved compared to 0.8 mm/yr from the in situ data. Local effects increase the tidal range by 0.2 mm/yr on average. In the German Bight, the trend from our reconstruction is 3.5 mm/yr, overshooting the measured trend by 0.2 mm/yr. Hence, we conclude that the opposing trends between the United Kingdom and the German Bight are largely controlled by the physical processes driving PC1 and PC2.
Identifying Physical Causes
The PCA suggests two modes of variability (Figure 6a) that appear coherently at the investigated sites in the North Sea. Now the question naturally arises whether these signals are produced within or outside the basin. If the former is the case, then the corresponding PCs should show no correlations to tide gauge records from the adjacent North Atlantic, while an external forcing would possibly provide some sort of coherence with those records. Therefore, PC1 and PC2 generated from tide gauges inside the North Sea basin were compared with selected tide gauges from outside the North Sea basin in the North Atlantic, which were not included in the PCA. To that end, the additional 24 North Atlantic tide gauges from the GESLA data set described at the end of Section 2.2 were used. No coherence is found for PC1 and we therefore conclude that it is produced within the basin, which will be addressed later. The opposite applies to PC2. A comparison between PC2 and available tide gauge records along the European Atlantic coast, Iceland and Canada is shown in Figure 8. Figure 8c indeed documents high and significant correlations of about 0.7 on average between PC2 (calculated exclusively on the basis of the North Sea data set) and Atlantic tide gauge records spanning the region from the English Channel southward to Spain. Moreover, there are significant correlations of 0.64 in the north (Reykjavik, Iceland), and even in the Northwest Atlantic (still reaching 0.46 in Port-aux-Basques, Newfoundland) (Figure 8a, c). Further south toward the Gulf of Maine, these correlations disappear (not shown). A supplemental wavelet analysis (not shown) reveals that the common oscillations between PC2 and the measured tidal range changes mainly occur on time scales from 6 to 24 months with particularly high coherence at around 12 months. We interpret this finding as an indication for a common high-frequency signal in the North Atlantic of unknown origin, causing widespread changes in tidal range.
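The coherence check with external stations can be pictured with the following sketch, which high-pass filters both series before correlating them; the 24-month running-mean filter is an assumption made for illustration and is not necessarily the filtering used by the authors.

```python
# Sketch: correlation between PC2 and an Atlantic tidal-range series on
# sub-2-year time scales (both series as monthly pandas Series).
import numpy as np
import pandas as pd

def highpass_corr(pc2, station, window_months=24):
    df = pd.concat({"pc2": pc2, "station": station}, axis=1).dropna()
    hp = (df - df.rolling(window_months, center=True, min_periods=12).mean()).dropna()
    return np.corrcoef(hp["pc2"], hp["station"])[0, 1]
```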
In order to narrow down the possible causes for the PC2 signal, outputs from the barotropic shallow-water model run by Arns et al. (2015a, 2015b) over the period 1958 to 2014 were used. To facilitate a rigorous comparison with our in situ data, simulated time series at the locations of the 70 tide gauge stations were extracted. A PCA revealed that the PC2 pattern is represented well in the simulated data. We find similarly high correlations between the model-based PC and the observations of the Atlantic tide gauges. While the mean correlation of the European tide gauge records (Figure 8b) with North Sea PC2 from observations is 0.70 (p < 0.05), it is only marginally lower with the barotropic model outputs (r = 0.66). If the simulated signal is removed from the model, the correlation becomes insignificant and even disappears at most sites. In consequence, PC2 must be driven by a process initially included into the boundary conditions from the numerical model. Since we have used a barotropic formulation without buoyancy forcing and thermodynamic calculations, we can further infer a purely barotropic relationship.
The effects of bottom friction are more involved, but some simple geometric considerations are instructive. As the tidal wave enters the extensive shallow water areas of the southern North Sea, energy losses due to friction become dominant, yet the influence of PC2 is increasingly attenuated in the direction of propagation ( Figure 6d). This discrepancy suggests that frictional effects do not represent the physical cause of PC2, although they might play a role in suppressing the magnitude of PC2 in the highly dissipative eastern North Sea region. As our simulations were performed with an invariant bathymetry and no changes to friction parameters, sea level rise and meteorological forcing remain as possible causes. We therefore analyzed correlations between PC2 and these factors (MSL rise, atmospheric pressure loading, wind velocities, and directions) but could not detect a clear and significant linear relationship. In this context, Arns et al. (2015a) already referred to the numerous non-linear relationships between the individual parameters in marginal seas. Specifically, the nonlinear interaction between tide and sea level rise as well as the dynamic response of the sea surface to meteorological forcing are important (see also Arns et al., 2020). Further analyses, in particular sensitivity studies taking into account altered tidal boundary conditions and time variable friction coefficients, will perhaps allow for a final identification of the ultimate driving factors (e.g., Rasquin et al., 2020).
While the signal of PC2 is reproducible, PC1 cannot be detected in the simulated data, which means PC1 is absent in the barotropic model. At the beginning of this section we stated that there is no coherence to the Atlantic tide gauges for PC1, which suggests an origin of the signal within the basin. We thus conjecture that a baroclinic, density-related effect inside the North Sea is responsible for PC1 and attempt an explanation in terms of known relationships between tidal currents and turbulent energy losses in varying stratification conditions. This attribution primarily arises from considerations at seasonal time scales. Using hydrographic casts and baroclinic model simulations, Müller et al. (2014) linked M 2 elevation changes of 1-5 cm in the southern North Sea to the see-sawing of continental shelf stratification between statically stable summer and well-mixed winter conditions. Strong buoyancy gradients in mid-depths (20-30 m) of shallow waters arise during summer months (see e.g., van Haren et al., 1999) and stabilize the water column against energy losses to vertical mixing. The associated increase in barotropic tidal transport and surface elevations was found to be most pronounced in very shallow areas and for cyclonic rotation of strong tidal currents (Müller, 2012) -conditions that are all present in the North Sea.
To relate at least parts of the PC1 content to this process, we have analyzed the temporal evolution of the North Sea's density structure based on gridded temperature and salinity profiles from the KLIWAS data set (Bersch et al., 2016). These data are provided as annual values through to 2013 at comparatively high spatial resolution (0.25 × 0.5° latitude-longitude boxes, 2-5 m depth intervals). For consistency, the monthly PC1 series was binned to annual values (with respect to the length of the KLIWAS data set) and cleaned from low-frequency variability with periods longer than 30 years. Because it is unknown how well KLIWAS represents the smaller, more subtle changes of density across the water column over several decades, we limit our comparison between stratification and PC1 to variability on interannual time scales. To suppress noise in the climatology, vertical density profiles from a particular set of grid points around the German Bight were averaged to a mean water column structure per year (Figure 9). These query points, indicated by black dots in Figure 9b, lie within 2° of 54.5°N/6.0°E and have an exact depth of 35 m in the KLIWAS data set. The sampled area is shallow, hosts strong tidal currents, and is not permanently mixed, thus favoring a potential effect of stratification on tides. The corresponding time-averaged density profile (Figure 9a) indicates a pycnocline at 20-25 m, conforming in principle to modeling results (e.g., Guihou et al., 2018; van Leeuwen et al., 2015). While this agreement is reassuring, we also note that our crude spatial averaging ingests profiles in various states of stratification (i.e. homogeneous, seasonally or intermittently stratified conditions, see Van Leeuwen et al., 2015). Given the tendency for in situ measurements being taken in summer, the KLIWAS data set may, however, mainly represent the seasonally stratified case.
Some interannual variability in density gradients is already evident from Figure 9a, where we plot individual profiles for the years 1995 and 1998, which differ markedly, by almost 1 kg/m³, near the surface. An extension to the full depth-time sequence (1958-2013, upper 35 m, see Figure 10a) suggests that fluctuations of this magnitude are common but the density perturbations are often mixed throughout the water column, making it difficult to align stratification changes in particular years to highs or lows in the PC1 series. We therefore define an approximate stability index as top-to-bottom stratification (cf. Eq. 2.9 of Knauss & Garfield, 2017). The adopted metric is akin to the potential energy anomaly advocated by Simpson (1981) and expresses the shape of the density profile through its first derivative. From Figure 10b, we see that the stability index exhibits some noticeable similarity with interannual tidal range changes in PC1. It closely follows the PC1 curve until 1979, echoes the broad peaks around the years 1987 and 1995, and features multiple reversals in sign from 2007 onward. Alongside this qualitative agreement, the observed changes in density gradients amount to about 0.3 kg/m³ per 10 m of depth and thus correspond to the order of magnitude that maintains the seasonal cycle of M2 in this region (Müller et al., 2014). Therefore, all indications are that changes to the intensity of summer stratification and/or the time spent in a stratified (or mixed) regime over the course of a year cause the variance in tidal range represented by PC1. When PC1 is multiplied by the corresponding EOF coefficients, we find that for a 1σ variation in the stability index the tidal range at tide gauges in the Southern German Bight changes by 2.4-2.7 cm, depending on location.
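Since the defining equation of the stability index is not reproduced in the text above, the sketch below assumes a simple top-to-bottom density gradient normalized by the mean density as a stand-in (loosely following Eq. 2.9 of Knauss & Garfield, 2017); the authors' actual metric and detrending choices may differ, and all names are hypothetical.

```python
# Sketch: a top-to-bottom stability index from an annual-mean density profile,
# and its interannual correlation with annual-mean PC1 after linear detrending.
import numpy as np

def stability_index(density, depth):
    """density: annual-mean profile (kg/m^3); depth: matching depths (m, positive down)."""
    d_rho = density[-1] - density[0]            # bottom minus surface density
    d_z = depth[-1] - depth[0]
    return (d_rho / d_z) / np.mean(density)     # ~ (1/rho) d(rho)/dz, sign convention aside

def interannual_correlation(index_series, pc1_annual):
    """Correlate linearly detrended annual stability index with annual-mean PC1."""
    def detrend(y):
        x = np.arange(len(y))
        return y - np.polyval(np.polyfit(x, y, 1), x)
    return np.corrcoef(detrend(np.asarray(index_series)),
                       detrend(np.asarray(pc1_annual)))[0, 1]
```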
A breakdown of our results into different modes of stratification variability is tempting but beyond the scope of our study as it would call for consideration of several factors, including freshwater buoyancy input, variable local wind stirring, and the inflow of Atlantic water masses through the northern and southern boundaries (Mathis et al., 2015). Nevertheless, we have analyzed long-term hydrographic data of the North Atlantic and detected high negative correlations (−0.8) between PC1 and temperature of the upper ocean off the Scottish (down to about 300 m) and Norwegian coasts (150 m). The anti-correlation is most pronounced in individual years prior to the 1990s and still persists on decadal time scales. This preliminary finding suggests that a wider North Atlantic scope must be adopted to unravel the origin of the North Sea tidal range changes, including the observed trends.
Summary and Conclusion
We have shown that the tidal range in the southwest and the southeast of the North Sea is characterized by a dipole-like pattern between 1958 and 2014, indicating that different forcing mechanisms of shelf-wide or larger spatial character may have been present. To separate these processes, and treat both trends and short-term variability in a unified framework, a PCA-based method was applied to 70 monthly time series of tidal range throughout the North Sea between 1958 and 2014. Data gaps were filled by the statistical method of Ordinary Kriging. A special property of the Kriging procedure is the conservative nature of its estimates at query points, resulting in under- rather than over-estimation of the general system behavior with regard to trends and PCs. We were able to detect two large-scale signals and explain about 69% of the overall variability in the study area. We attribute the remaining variability of 31% to local effects, which vary widely; they may be absent or could well cause over 50% of variability at an individual tide gauge. In the overall variance, the maximum contribution of a single local effect is at 4%, the average is below 0.4%.
The second PC represents a large-scale barotropic signal and accounts for the negative trends in the United Kingdom area (up to −2.3 mm/yr). This mode of variability has a North Atlantic extent, as shown by supplementary analysis of tide gauges in Canada, Reykjavik, and the European Atlantic coast. Correlations across the basin are high (0.5-0.7) and are caused by common oscillations on time scales between 6 and 24 months. By detecting the same barotropic signal in the shallow-water model of Arns et al. (2015a, 2015b), and eliminating suspects that are not part of the model input or physics, we conclude that only sea level rise and meteorological forcing remain as possible causes. However, no linear correlations with these parameters were found, implying that non-linear interactions must be present. A further indication for the presence of shallow water effects is the severe weakening of the signal as the tidal wave advances from the relatively deep water at the United Kingdom into the shallow water areas at the southern and the eastern boundaries of the North Sea. The absence of PC1 in the barotropic model and its confinement to the southern North Sea coast has prompted us to hypothesize that local stratification changes exert a strong influence on the tidal range in shallow water at various time scales. By analogy to the known seasonal tidal cycle in the area (Müller et al., 2014), we argue that a stronger pycnocline, possibly lasting over longer periods, stabilizes the water column against turbulent dissipation and allows for higher tidal elevations at the coast. The qualitative and quantitative agreement between inter-annual PC1 changes and an empirically derived stability index is certainly tentative, yet it provides an attractive first-order target for more systematic data analysis and numerical modeling. Further insight into the nature of large German Bight tidal range changes, particularly the underlying trends, could be furnished by a regional general circulation model with realistic background flow and open boundaries to the North Atlantic.
Acknowledgments
The analyses performed here were part of the DFG-funded project TIDEDYN (Analyzing long-term changes in the tidal dynamics of the North Sea, project number 290112166) and the BMBF-funded project ALADYN (Analyzing long-term changes of tidal dynamics in the German Bight, Subproject A, BMBF 03F0756A). Michael Schindelegger acknowledges financial support made available by the Austrian Science Fund (FWF, project P30097-N29).
Students’ multimodal knowledge practices in a makerspace learning environment
In this study, we aim to widen the understanding of how students’ collaborative knowledge practices are mediated multimodally in a school’s makerspace learning environment. Taking a sociocultural stance, we analyzed students’ knowledge practices while carrying out STEAM learning challenges in small groups in the FUSE Studio, an elementary school’s makerspace. Our findings show how discourse, digital and other “hands on” materials, embodied actions, such as gestures and postures, and the physical space with its arrangements mediated the students’ knowledge practices. Our analysis of these mediational means led us to identifying four types of multimodal knowledge practice, namely orienting, interpreting, concretizing, and expanding knowledge, which guided and facilitated the students’ creation of shared epistemic objects, artifacts, and their collective learning. However, due to the multimodal nature of knowledge practices, carrying out learning challenges in a makerspace can be challenging for students. To enhance the educational potential of makerspaces in supporting students’ knowledge creation and learning, further attention needs to be directed to the development of new pedagogical solutions, to better facilitate multimodal knowledge practices and their collective management.
knowledge practices within a school-based makerspace, and ask the following research questions: How do different types of talk contribute to students' interaction and creation of knowledge when carrying out STEAM learning challenges in a makerspace?
How do language and other mediational means mediate students' knowledge practices in a makerspace?
Which multimodal knowledge practices can be identified and how are they collaboratively enacted by the students in their design and making processes?
Our empirical study was undertaken in a makerspace called the FUSE Studio within a public comprehensive school in Finland, which had undergone a curriculum reform from 2016 onwards. The new national core curriculum has a strong emphasis on students' interest-driven learning and student engagement, with a special focus on the development of the students' digital and learning-to-learn skills. Traditionally, in all Finnish schools, design and creation of artifacts have already been included in craft education for a century, which is an obligatory school subject, now enriched with digital fabrication technologies. The FUSE Studio makerspace was built within the school and introduced as one of the school's elective courses in 2016, providing pre-defined STEAM learning challenges and tools for students participating in scientific, engineering and design practices, and creative ways of working with knowledge. It can thus be viewed as a potential tool for the teachers to integrate the next generation standards for science education (National Academy of Sciences 2012). As a course included in the local curriculum, it can also be regarded as a long-term project and an effort aimed at creating educational change within the school. The FUSE Studio concept was originally created at Northwestern University in the US (see: www.fusestudio.net) and in addition to this school, it is currently being adopted also in five other schools in Finland as part of a curriculum reform effort.
The core idea of the FUSE Studio is that, with the help of digital and non-digital tools, it can promote students' STEAM learning and to cultivate STEAM ideas and practices among those who are not already familiar with them, and by so doing broadening access to participation in STEAM learning (Stevens et al. 2016;Stevens and Jona 2017). Even though the 30 different STEAM challenges are provided to the students in this context, they are still "open ended", allowing for diverse solution paths, creativity and innovation. In other words, the students have a substantial say in how they engage in the design and making activities and with whom. In the school in focus in this investigation, students are able to use the school's computer lab, one regular classroom and the corridor, for carrying out the challenges, such as to build a dream home with 3D modelling software or to make windmills, solar-powered cars, laser mazes, or roller coasters. This type of maker work requires continuous sketching and prototyping from the students. From the teachers' perspective, FUSE offers the opportunity for low threshold activity in terms of providing a natural context for peer tutoring and not requiring advanced digital competence from the teachers themselves.
Within this sociocultural context, we view knowledge and human activity as cultural and deeply contextual and oriented by historically specific social organization, in our case, the school where the FUSE Studio makerspace is located. We locate language and tool-mediated social interactions at the center of the analysis of knowledge creation and human learning (Vygotsky 1978;Säljö 1999;Ludvigsen et al. 2011). Further, we consider talk to be a pivotal mediator in student participation within peer interaction and collaborative processes, and in their learning (e.g. Rowell 2002;Mercer et al. 2019;Mercer 2005). Together with language, tools and artifacts are at the center of our research attention, aiding the externalization of the participants' internal mental work (Vygotsky 1978). The externalization is pivotal because it enables the ideas and tools to be appropriated by others, enhancing their further use and refinement, as well as collaborative learning (Baker et al. 1999). Further, we view social action and tools (both conceptual and tangible) as intertwined resources (see also Ingold 2010;Mäkitalo 2011) that will make possible particular kinds of actions that come into being via social, embodied actions in a certain cultural setting (Goodwin 2003).
From a sociocultural view, we perceive knowledge practices as a collective phenomenon emerging from interaction and joint learning efforts, not as the independent actions carried out by individual participants. We thus define students' knowledge creation as a social practice (Knorr-Cetina 2001), with knowledge practices guiding the students' learning and shared practical understanding of their learning activity (Hakkarainen et al. 2004; Hakkarainen 2009; Seitamaa-Hakkarainen et al. 2010). By social practices we refer to recurring patterns of activities, which are embodied, mediated by language, tools and artifacts, grounded in epistemic objects and artifacts, and shared by the participants of a certain community (Schatzki 1996; Schatzki 2001; Schmidt and Volbers 2011; Miettinen 2006). Moreover, we view the processes of collective knowledge creation as taking place through interactive practices that contribute to ideas and learning being materialized into shared epistemic (i.e., knowledge) objects and artifacts (Mehto et al. 2020; Paavola et al. 2011).
On this basis, we carried out sociocultural discourse analysis (Mercer et al. 1999; Mercer 2005; also Mercer 2019) complemented with a multimodal interaction analysis (Goodwin 2003; Kress 2010; Streeck et al. 2011; Taylor 2014), focused on digital and hands-on materials, embodied actions and spatial arrangements, to analyze students' interaction, socio-material mediation and collaborative knowledge practices when carrying out the FUSE learning challenges. As our original contribution, this led us to identify four intertwined knowledge practices, namely orienting to, interpreting, concretizing, and expanding knowledge. These practices guided and facilitated the students' creation of shared epistemic objects, artifacts, and their collective learning in the FUSE Studio makerspace.
Students' knowledge creation in technology- and materially-rich learning environments
Knowledge creation has been a central theme among scholars interested in students' learning (see e.g., Bereiter and Scardamalia 2014; Brown and Duguid 2017; Kump et al. 2013; Mercer 2005; Mercer et al. 2019; Cress and Kimmerle 2008; Scardamalia 2002). During the knowledge creation process, the students share their different ideas, engage in individual and collective interpretation and meaning-making processes, and thus influence the thinking and the productive activities of one another (Arvaja et al. 2007). This process may also involve collaborative practices of problem definition and problem solving (Hennessy and Murphy 1999), and exploration of new knowledge, ideally leading the actors to transcend the boundaries between old and new knowledge. When students are viewed as active creators of knowledge, taking collective responsibility for their learning (Scardamalia 2002; Zhang et al. 2009), technology-rich learning environments are able to provide opportunities for the emergence of their epistemic agency, in other words, the ways the students engage in improving their ideas collectively (Scardamalia 2002; Scardamalia et al. 2012; Damşa et al. 2010).
As shown by previous classroom-based studies, the production of new knowledge and the advancement of individual knowledge usually require interaction and collective effort during collaborative tasks (Damşa et al. 2010; Fernández et al. 2009; Ludvigsen et al. 2016; Mercer 1994; Wegerif 1996). Language functions as a particularly important medium for collaborative knowledge creation and for developing the students' thinking, ideas and learning (Mercer 2005; Mercer and Littleton 2007). Language is an important mediational means for the students' dialogue, which may also be referred to as "productive discourse engagement" (Scardamalia 2002), and for their successful collaborative creation of new knowledge (e.g., Bereiter and Scardamalia 2014; Cress and Kimmerle 2008). Further, the social experience of language use significantly shapes individual cognition, with dialogic interaction raising students' awareness of their collaborative talk (Mercer et al. 1999; Wertsch 1991). Moreover, "in dialogues, the students gain the psychological benefit of the historical and contemporary experience of their culture" (Mercer et al. 1999, 96). In sum, the students' adoption of a "social mode of thinking" (Mercer 1995) and their induction into ways of using language for seeking, sharing and constructing knowledge are crucial for their collaborative work and learning (Mercer 1995, 1996; Rojas-Drummond et al. 1998; Mercer et al. 1999).
During collaborative learning, students work together on a common problem and their interaction enhances knowledge sharing, joint knowledge creation and the development of shared understanding (Hmelo-Silver 2003). However, the interacting students possess different knowledge, with some being less and some being more knowledgeable than others (e.g. Brown and Campione 1996), which creates challenges and tensions in collective knowledge creation (Ludvigsen 2012). For instance, it can be challenging for the students to reflect on their knowledge and to explain it (Mercer 2008), to ask each other questions, and to explain and to clarify their own ideas and opinions, and to elaborate the reasoning behind these (Kollar et al. 2006;Martin and Hand 2009), or to take alternative views into consideration (Sampson et al. 2011). Therefore, much basic knowledge is often left implicit, creating misunderstandings in classrooms and in other settings (Mercer 2008).
Learning arrangements with technological infrastructures have been introduced to provide better opportunities for student interaction, collaboration, knowledge creation and conceptual advancement (Scardamalia and Bereiter 1994), allowing the students to improve ideas collaboratively (Scardamalia 2002;Scardamalia et al. 2012;Damşa et al. 2010). Such environments can also entail the transformation of the spatial and temporal relations of student learning and pedagogical activities, for instance by breaking traditional spatial and temporal boundaries of learning, making remote information sources accessible and collaborative learning location-free (Ritella and Hakkarainen 2012;Suthers 2006).
By emphasizing students' own choice and interest and connecting the educational learning activities to their everyday lives and communities outside the school, technology-and materially-rich learning environments can also create local or online "hybrid spaces" for teaching and learning in which the formal and everyday creatively intersect (see Gutiérrez et al. 1999;Ludvigsen 2012;Kajamaa et al. 2018). The novel technological infrastructures and learning arrangements also allow students to relate to materiality and to transform it in new ways, as the students are typically invited to act creatively to modify and develop material objects as part of the learning process (Kumpulainen et al. 2019a, b). The different knowledge resources available can accumulate collective knowledge and experience, thus having an instrumental value (Miettinen and Paavola 2016) for the students' learning (see also Engeström 2007).
In the context of schools, to enhance computer-supported learning, increasing research attention has been paid to the reciprocal interaction between students' knowledge creation and practice, introducing the notion of knowledge practices, which is useful in taking us beyond researching and analyzing "mere" knowledge and "mere" practice. By definition, knowledge practices (i.e., epistemic practices) can refer to discursive practices in relation to knowledge (Sandoval and Reiser 2004). Knowledge-creation learning can be understood "to be dependent on materially embodied practices rather than mere conceptual experiences" (Hakkarainen 2009, p. 224). It can include the utilization and creation of a variety of digital and non-digital tools and artifacts, with language mediating the students' activity (Kumpulainen et al. 2019; Mehto et al. 2020; Riikonen et al. 2020), and channeling the participants' collective learning processes (Hakkarainen 2009; Hakkarainen et al. 2004; Stahl and Hakkarainen 2020). In materially-rich makerspaces, solving and managing complex knowledge problems can involve all of the students' senses, such as looking, touching, feeling and listening (Koskinen et al. 2015), contributing to the creation and usage of different types of epistemic (i.e., knowledge) objects and artifacts. Furthermore, when working interactively with learning challenges and materials, students utilize tools for making their tacit knowledge explicit (Illum and Johansson 2012), adding to their accumulated cultural knowledge (Koskinen et al. 2015). In these contexts, the materialization of the knowledge objects is critically dependent on embodied practices connected to making (Kangas et al. 2013; Blikstein 2013; Kafai et al. 2014). The objects are usually negotiated and defined by the students and left more open-ended than traditional 'objects' (Hakkarainen 2009).
Furthermore, along with language and materiality, embodied resources are pivotal for grasping the unfolding of working processes, and in advancing the creative process (Härkki et al. 2017) in makerspaces. Also, the available structures, resources and arrangements of the context are important for enhancing student engagement (Kangas et al. 2013;Kumpulainen et al. 2018), which can be seen as a means of supporting students' transformative agency, in other words, transformation and reframing of the collective activity resulting from their learning via creative utilization of various resources available in the makerspace (Kajamaa and Kumpulainen 2019). In this study, we analyzed how students' collaborative knowledge practices are mediated multimodally in a FUSE Studio makerspace environment in which they carry out STEAM learning challenges.
Research setting
The context of this study is a public comprehensive school with 535 students and 28 teachers at the primary level. The school strives for student-centeredness and stresses design and digital learning, which aims to enhance students' creative problem-solving skills across the curriculum. In 2016, as a response to the new national core curriculum requirements, the school introduced the FUSE Studio as one of its elective courses.
The FUSE Studio was situated in the school's computer lab, with a neighboring classroom space and the nearby corridor for the students to use as needed. There were 22 desktop computers and separate laptops available for the students, and a rich variety of hands-on materials. While taking part in a FUSE Studio session, the students are able to access the learning challenges and their associated instructions through a website (www.fusestudio.net).
The instructions provided by the FUSE Studio program offer students "a stimulus" for their maker work, and a vision or image of the object, which will then, via the process of design and development, transform and materialize into a shared epistemic object or (tangible) artifact. In this process, it is our view that the notion of "epistemic" refers not only to knowledge, but also to material artifacts and the ideas out of which they are constructed. Fig. 1 shows the students' view of the digital FUSE Studio environment, in which the students find trailer videos of each challenge and choose the challenge most appealing to them.
More specifically, the FUSE Studio consists of a computer program and hands-on material packages including 30 (pre-given) activities, called challenges, from which the students are free to select the challenge most appealing to them, with whom to pursue it (or alone), and when to move on, progressing at their own pace. The technological and pedagogical infrastructure of the FUSE Studio consists of digital tools (computers, 3D printers) and other materials (e.g., foam rubber, a marble, tape and scissors). The learning challenges students engage in range from designing jewelry to building a dream home with 3D modelling software, to making windmills, solar-powered cars, laser mazes, and roller coasters. Some of the challenges are fully digital and in others, students use physical materials that are provided to them in separate kits. The design challenges level up in difficulty following the basic logic of video game design principles (see e.g., Holbert and Wilensky 2014). During the FUSE Studio sessions, the teacher(s) make rounds throughout the makerspace to follow how the students' work progresses (Greiffenhagen 2012; Koskinen et al. 2015). The students can also call upon teachers and their peers (in other groups) when needed. The assessment of a student's participation and learning does not include grading, but is carried out by using photos, video or other digital artifacts and the student's own documentation (Stevens and Jona 2017). Failures are viewed as just another try, and as significant experiences during the processes of making (Hilppö and Stevens 2020).
Data collection
The video recordings were collected intermittently during one academic year of participation within the makerspace. We collected data three times a week from August to December 2016. The data comprised 111 h of transcribed video recordings, and our field notes about groups of students (N = 94, age 9-12 years) and their facilitator-teachers (N = 7) in the FUSE Studio. The students' guardians were informed about the research and its data collection methods and were asked to give their written consent for their children's participation. Participation in the research was voluntary and could be ended at any time. The research respects the teachers' and children's anonymity and privacy, and all names mentioned in the research are pseudonyms. The data were taken from three groups of students who had chosen the FUSE Studio as an elective course for the 2016-2017 academic year. In particular, Group 1 consisted of 32 4th graders (22 boys and 10 girls), Group 2 of 30 5th graders (19 boys and 11 girls), and Group 3 of 32 6th graders (19 boys and 13 girls). Each group had one 60-min FUSE session per week. Each group had two appointed teachers to support student work in the FUSE Studio.
In the FUSE Studio, we used four video cameras to capture the moment-by-moment activities of the students and teachers. Usually, two of the cameras followed the teachers and two were set to record selected students' work. The main principle that guided the decisions regarding the focus of the cameras for each session was the need to form a comprehensive picture of the nature of interaction and activities. In each of the videos, one camera was set to film the group. The angle was adjusted according to the students' movement in the space. The computer screens and other "hands-on" materials were captured as parts of the students' interaction as visible within the scene. We did not specifically zoom in for close-up shots of the computer screens. The following photographs (captured from the videos) provide an example of our videoing of a group working in the computer lab area of the FUSE Studio.
For this study, we selected 350 min of the transcribed video data focusing on two groups working together on several FUSE challenges. One group consisted of four boys pseudonymized as Leo, Alex, John and Mark (photos of their work are shown in the findings section). The other was a group of four girls pseudonymized as Nellie, Emmi, Sara and Nora (photos of their work are shown in Fig. 2). The total duration of the data from the girls' group was 185 min, and from the boys, 165 min (350 min in total). Each group was supported by two to four teachers and teaching assistants. We selected these two groups because we wanted to follow students who had previously worked as a group in the FUSE Studio with a broad range of STEAM learning challenges, and who used various spaces and materials within the FUSE Studio makerspace (i.e., the computer lab, the neighboring classroom space, and the corridor). It was also a goal to include a group of girls and a group of boys. The two groups worked on STEAM challenges in which they needed to collaborate to design a shared epistemic object and also construct a (tangible) material artifact into which the created ideas materialize. Further, one of our selection criteria for these two groups was that these student groups worked purposefully and completed the FUSE challenges during a single session.
This was not always the case in other groups, as students often continued with the same challenge in subsequent session(s). Due to space limitations, and the detailed nature of the analysis, we present only selected parts of the examples, often focusing only on two students in the group.
One advantage from an analytic perspective in selecting these two groups in particular is that, while the group work practices we observed in these groups appeared similar to the work in other groups, these students worked in the same groups over many sessions and multiple design challenges. The other students changed the composition of their groups more often than these two groups. All students participating in our study had taken part in craft education classes (included as part of the national core curriculum) as part of their schoolwork, hence working on design tasks in the FUSE Studio cannot be regarded as a totally novel activity for the students. Further, we analyzed these particular examples with reference to the information from the whole corpus of our video data (all 111 h). For the analyses of the other four teams, the interested reader may refer to separate articles (Leskinen et al. 2020). We acknowledge that if the groups had just started to work in the FUSE Studio, the results of the analyses might have looked different.
Data analysis
We first approached the data more holistically and then focused on selected events in greater depth as explained below (Derry et al. 2010). First, we uploaded the 111 h of videotaped and transcribed FUSE sessions into the Atlas.ti analysis software, to view and to read as a whole, to identify student groups in which the students stayed over several FUSE Studio sessions and who worked on those design challenges that involved active and sustained use of digital technologies in the FUSE Studio makerspace. Based on consensus (via researcher negotiations), this resulted in selecting 350 min of video data of two student groups working with three different STEAM learning challenges in different spaces, for presentation in this study. Our in-depth analysis then included the following three sequential phases.
To respond to our first research question on how different types of talk contribute to students' interaction and creation of knowledge, we analyzed the 350 min of video data to find out how language contributed to students' interaction and joint creation of knowledge when carrying out the STEAM learning challenges. This first phase of the analysis was abductive, involving repeated iterations between theory and the data (Van Maanen et al. 2007). We coded the videos by applying sociocultural discourse analysis, which provided us with the typology for the analysis of three "archetypical forms" of students' talk: Cumulative, Disputational and Exploratory (see Mercer et al. 1999; Mercer 2005; also Mercer 2019). We coded speaking turns during the students' interaction in which they constructed common knowledge via accumulation, repetition, confirmation and elaboration of statements and suggestions, in other words cumulative talk. We searched for short utterances and instances of assertion and counter-assertion, in which the students competed with each other, ignored the other, or challenged or criticized the other, namely disputational talk. We also coded speaking turns in which the students actively contested each other, critically but constructively engaged with each other's opinions, statements and suggestions, made joint decisions, and made their reasoning and knowledge publicly accountable, in other words exploratory talk.
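To make this coding scheme concrete, the short sketch below shows one possible way of representing coded speaking turns and tallying the three talk types per group. It is only an illustrative sketch under our own assumptions: the record fields, the example code assignments, and the tallying step are ours, not the study's actual Atlas.ti coding output.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical representation of one coded speaking turn; the field names and
# the example code assignments below are illustrative, not taken from the study.
@dataclass
class CodedTurn:
    group: str        # e.g. "boys" or "girls"
    speaker: str      # pseudonym
    talk_type: str    # "cumulative", "disputational", or "exploratory"
    utterance: str

turns = [
    CodedTurn("boys", "Alex", "disputational",
              "Do not do that, do not start, we are supposed to start!"),
    CodedTurn("boys", "Leo", "cumulative",
              "It (the Laser Defender) needs to be one meter long."),
    CodedTurn("girls", "Emmi", "exploratory",
              "Let's not think about if it's difficult or not, we can just try it."),
]

# Tally how often each talk type was assigned within each group.
tallies: dict[str, Counter] = {}
for turn in turns:
    tallies.setdefault(turn.group, Counter())[turn.talk_type] += 1

for group, counts in tallies.items():
    print(group, dict(counts))
# boys {'disputational': 1, 'cumulative': 1}
# girls {'exploratory': 1}
```

Such a tabular view is only a bookkeeping aid; in the study itself the codes were assigned and interpreted qualitatively within their interactional context.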
To respond to our second research question on how language and other mediational means serve to mediate students' knowledge practices, we viewed the videoed data again and carried out a multimodal interaction analysis (Goodwin 2003;Kress 2010;Streeck et al. 2011;Taylor 2014) across the 350 min of data. This second phase included our identification of the material, embodied, and spatial resources involved in the students' interaction during their maker activities. More specifically, we analyzed the students' knowledge creation as embodied and materially and spatially mediated, and as evolving through and within the students' interaction. We coded parts of the data in which these resources mediated the students' joint attention, in other words, their capacity to coordinate actions and attention with others on an object (see Tomasello 2000). We coded the topics, concepts and notions mentioned by the students while making inquiries and solving problems, allowing us to identify the emergence of joint attention and the core epistemic (knowledge) objects during their maker work. We also coded parts of the interaction in which (tangible) epistemic material artifacts were discussed, created, appropriated (also Baker et al. 1999) and used, paying special attention to the students' use of the physical space and its arrangements. We also coded verbal and non-verbal embodied actions and signals, such as postures, gestures, gazes and physical movement (such as moving closer and withdrawing from the interaction) (see Mondada 2018), related to the creation and use of epistemic objects and artifacts, and mediating their handling. We also engaged in relating our analysis to the existing literature on the role of socio-material mediation (Vygotsky 1986;Säljö 1999;Ludvigsen et al. 2011) and embodiment (Härkki et al. 2017) in students' collective activity. Note that any one instance of multimodal interaction could be coded with more than one code.
As our third analytical phase, in response to our third research question on the identification of multimodal knowledge practices and their collaborative enactment by the students, we re-viewed the parts of the video data that we had coded in the discursive and multimodal interaction analysis, to analyze the data further and to identify the students' multimodal knowledge practices guiding and facilitating their learning during the design and making activities. In the videos, we focused our attention on depicting the student groups' recurring patterns of activities mediated by the types of talk and other mediational means, such as embodied actions used as a complementary channel to express and demonstrate ideas to others (Taylor 2014), that we had identified in our two earlier analytical phases. In this phase, our analytical interest was in exploring how the knowledge practices, involving epistemic (knowledge) objects and their materialization into (tangible) artifacts, are enacted and become shared by the students and their peers in guiding and facilitating the students' joint attention, knowledge creation and learning in the FUSE Studio makerspace. In this, we related our data-driven analysis to the existing literature on joint attention (see Tomasello 2000) and knowledge practices (Hakkarainen et al. 2004; Hakkarainen 2009; Seitamaa-Hakkarainen et al. 2010; Knorr-Cetina 2001), and to the instructions for carrying out the STEAM learning challenges provided by the FUSE Studio program on the website (www.fusestudio.net). This led us to identify four intertwined knowledge practices, namely orienting to, interpreting, concretizing, and expanding knowledge, which we viewed as constitutive of the students' activity in the FUSE Studio.
To ensure inter-coder reliability, the primary coder first analyzed videos using the Atlas.ti software, applying our emergent analysis framework. To establish the reliability of the analysis, the second coder scored a representative sample of the data by applying the same analysis framework. We discussed any disagreements in coding (e.g. some of the coding rules were further clarified) until there was agreement.
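The agreement between coders is reported here qualitatively rather than as a statistic. If one wished to quantify inter-coder reliability for categorical codes of this kind, Cohen's kappa is a common choice; the sketch below is a minimal, self-contained implementation under that assumption, with invented label sequences used purely for illustration.

```python
from collections import Counter

def cohen_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders labelling the same items with categorical codes."""
    assert len(coder_a) == len(coder_b) and coder_a, "both coders must label the same items"
    n = len(coder_a)
    # Observed agreement: proportion of items given identical labels.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, estimated from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                     for label in set(freq_a) | set(freq_b))
    return (p_observed - p_expected) / (1 - p_expected)

# Invented example: two coders assigning talk types to ten speaking turns.
primary   = ["cumulative", "exploratory", "disputational", "cumulative", "cumulative",
             "exploratory", "disputational", "cumulative", "exploratory", "cumulative"]
secondary = ["cumulative", "exploratory", "cumulative", "cumulative", "cumulative",
             "exploratory", "disputational", "cumulative", "exploratory", "exploratory"]
print(round(cohen_kappa(primary, secondary), 2))
```

In practice, the negotiated-agreement procedure described above (discussing and resolving disagreements) serves a different purpose than a chance-corrected coefficient, and the two can be used together.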
Findings: Students' multimodal knowledge practices in the FUSE Studio
Here we discuss our findings in relation to the interactional situations that characterized the students' collaborative knowledge practices in our video data. The four, intertwined, multimodal knowledge practices that we identified, namely orienting to, interpreting, concretizing and expanding knowledge, are first defined. Thereafter, we will present three episodes, each including the four multimodal knowledge practices via discursive, material, embodied and spatial dimensions of analysis.
(1) Orienting to knowledge emerged in our data through discourse and embodied actions when the student groups began their design and making processes. The students prepared themselves to conduct a STEAM learning challenge by first selecting who to work with, and as a group positioned themselves (by bodily acts, such as taking an independent or collaborative stance) in the physical space of the FUSE Studio and its organization, such as the furniture and the computers. By utilizing the "My Challenges" interface in the FUSE Studio program (see Fig. 1), they then decided which STEAM learning challenge they were interested in working with, and began to familiarize themselves with its instructions, presented on the FUSE website. As an indication of their interest, the students usually expressed their intention (verbally or through gesture) to pursue a certain challenge. In this, talk, embodied actions, and material and spatial mediators mediated and coordinated the students' joint attention and engagement. The ways in which the student groups used language mediated the nature of their engagement and the atmosphere of their social interactions. Their process of orienting to knowledge typically involved cumulative talk, in other words, the students' construction of common knowledge via confirmation and repetition of the instructions in the computer program. It also included the students' (verbal and embodied) selection and appropriation of the materials and spaces, such as selecting the space for their work on the challenge. In some cases, we were also able to witness disputational talk, such as assertion and counter-assertion, competing, or at times even ignoring each other.
(2) Interpreting knowledge refers to the students verbally reflecting on the given instructions and ideating further about how to proceed with the challenge and with their problem solving. Moreover, the interpretation efforts between the students were evidence of their attempts to achieve a shared understanding. The interpretation involved the students' construction of common knowledge via cumulative talk. They began offering information to each other by sharing and exchanging their existing knowledge and experiences in relation to the various materials, tools and the problem at hand. This also included preliminary visioning, planning and ideating of the next steps of the design and making process. It included the students' usage of the computer program for designing epistemic objects and artifacts, and their search for and familiarization with other materials, such as foam rubber, a marble, tape, scissors, and 3D printers. It also included disputational talk. For instance, the students questioned and reframed the given circumstances by deciding not to follow the instructions of the FUSE Studio. It also included the students' (verbal and embodied) judging of the existing arrangements, materials and spaces, such as moving to work in the corridor. They also re-positioned (by bodily acts) themselves and the materials within the makerspace when something was not functioning in an orderly fashion, to better coordinate their collaborative work.
(3) Concretizing knowledge involved students' externalizing of their knowledge creation process into different knowledge (i.e. epistemic) objects and material / tangible artifacts. The students made creative acts of explaining and working with the available conceptual, material and spatial resources and made initiatives and plans to use the existing objects and artifacts, and to create new ones. The students viewed and engaged in the actions by exploring ways of working, demonstrating, seeking help, coordinating and moving back and forth in their making process (by verbal and embodied actions), to solve the problems involved in the learning challenge. In their joint search for solutions, they usually negotiated, adjusted, agreed and further elaborated their ideas. Also, when concretizing their visions, ideas and plans into epistemic objects and material artifacts, the student talk often exemplified disputational talk, as the students disagreed and contested one another. They also exemplified exploratory talk, when they actively contested each other, constructively engaged with each other's opinions, statements and suggestions, and made joint decisions. The decision making often also involved acts of disregarding many of the proposed, optional ways to proceed with the challenge, and collective appropriation of certain suggestions, leading to collaborative distillation (i.e. a participant first shifts away from what he believes to be unsuitable and then the other follows) of the solutions and the concretizing of the knowledge into shared (epistemic, digital) objects and (tangible) artifacts meaningful to the participating students.
(4) Expanding knowledge related to situations in which the students critically but constructively engaged in further planning, coordinating and carrying out the learning challenge. This took place when they encountered difficulties in the design and making process, disagreed, or were not satisfied with the process, after which they had to expansively either reject, modify or entirely revise their epistemic object in making, often exemplifying disputational talk. Further, their knowledge was made publicly accountable through discourse and can be defined as joint reasoning, featuring exploratory talk. They then often changed their original plan, the direction of their making, redefined the usage of certain tools, iterated, further demonstrated (by embodied actions) and repeated parts of the process. The goal of this process was to solve the problem, and to create a more satisfying shared epistemic object or artifact. They also assessed their design and making processes and expressed a collective will to ignore, alter or expand the pre-given instructions. This appeared to result in the emergence of the students' expanded understanding of the situation and the process as a whole, as well as expansion of the specific knowledge object or artifact in progress, to improve its functionality and meaningfulness as a co-created outcome of the activity.
Episode 1: Proceeding with the challenge in the corridor to create a security system

In this episode, it is possible to see how discourse, digital and other "hands-on" materials, embodied actions, such as gestures and postures, and the physical space with its arrangements mediated the students' collaborative knowledge practices (i.e. orienting to, interpreting, concretizing and expanding knowledge). In this episode, the boys used the FUSE Studio materials and spaces in creative ways and relocated their maker work from the computer lab to the nearby corridor. The learning challenge was new to the boys in the group and they shifted from first relying on the pre-given instructions to actively ideating and exploring alternative paths for proceeding with the complex challenge. In many parts of the episode, we were able to witness the dominance of non-verbal (embodied) actions, and material and spatial mediation, mediating joint attention, coordination and knowledge creation. The students often engaged with each other's ideas critically, but mainly constructively, and took alternative and expansive directions to proceed with the challenge. Despite their disputes, and the fact that their attention was drawn away from the challenge by their peers multiple times, they finally completed their Laser Defender successfully.
The four boys, (pseudonymized as) Leo, Alex, John and Mark, began orienting to the challenge by deciding to work as a group, and by positioning themselves in pairs at the desks in the computer lab. They then chose to work on the "Laser Defender" learning challenge, in which the students create a laser beam security system to protect a valuable "treasure" and challenge their friends to break in (for more information, see: www.fusestudio.net/challenges). The Laser Defender challenge was new to all four boys, making it impossible to exchange any prior knowledge and experiences in relation to it. While sitting in two separate rows in the computer lab, they decide to continue with the challenge in pairs, to create their own security systems, which the other pair could then try to break into.
At the beginning of the making, Leo is sitting by his computer, trying to open the "My Challenges" interface to get access to the instructions for the challenge on his computer screen. Alex is quietly standing behind him, positioned by the toolbox, handling and selecting some of the tools as needed, and orienting to the challenge. John comes to stand by him and states that he has already started the challenge with Mark. Alex gazes towards him and, featuring disputational talk, states: "Do not do that, do not start, we are supposed to start!", as he wanted to be the first to begin the challenge with Leo. John does not accept this and states his differing opinion: "Aren't we all supposed to start this on our own computers?". John then withdraws from the mutual gaze and, as shown in Fig. 3, begins to delve into the toolbox to prepare for the challenge. The toolbox mediated and coordinated the students' joint attention and engagement, John leaning over the box, pushing Alex toward the wall, and leaving Alex with little room to continue his search. They begin arguing about the tools required, and it sounds and looks just as if John were entering Alex's territory, even though the boys belong to the same team. Leo maintains his distance and does not get involved in Alex and John's debate in orienting to the challenge; he is still sitting behind them in front of his computer. Mark also takes a distant stance and stays at the desk that he and John are using, and is thus out of the reach of our video camera. In Fig. 3, Alex is wearing a hat and John is wearing a hoodie. Leo sits behind Alex and Mark is not shown.
Then, Leo encounters a technical problem in trying to open the trailer video for the challenge and asks Alex to help him. Meanwhile, John continues searching for the materials for the challenge in the box. Alex drags his chair, moves physically closer to Leo and leans toward his computer, to see better what he is doing and to help him open the trailer. At this point, John takes a laser pen out of the box, switches it on, opens the classroom window (the one shown in Fig. 3 behind Leo's head), and starts to point outside the window. Alex finally gets the trailer functioning, and Leo begins to familiarize himself with it. Meanwhile, Alex begins to experiment with a laser pen by moving it in his hands. He seems excited, switches it on, and points toward the teachers' desk, saying to Leo: "Look, this light reaches so far!". He then sits down and points to the roof of the computer lab. John still leans on the windowsill using the pointer, redefining its use from carrying out the challenge to pointing at people walking past. By doing this, he leaves Mark to familiarize himself with the instructions, and the two boys do not communicate.
Then, the collaborative interaction of Leo and Alex is temporarily interrupted as a girl, Emilia, from another group, approaches Alex to ask for help finding the right equipment for carrying out a challenge she has selected with her group, who are sitting at the opposite corner of the computer lab. As Emilia arrives at the desk, Alex starts to point at her face with the laser light on, joking that he is going to shoot her. As shown in Fig. 4, Emilia uses her body and, without words, tries to "defend" herself by covering her face with her hands and smiles. The teacher notices this and says that it is not safe to point at another's eyes with the bright laser light. Then Alex stops and walks with Emilia toward her group. Here, instead of mediating and coordinating joint attention and engagement in carrying out the learning challenge, the students' uses of the laser pens caused a temporary fragmentation of the joint activity.
After helping the girls, Alex returns to Leo, who has meanwhile collected the requisite tools and is eager to begin building the Laser Defender. Leo then wonders where they should carry out the learning challenge and suggests the computer lab part of the FUSE Studio where they already are. Alex makes an alternative suggestion to move into the corridor, judging the computer room to be an unsuitable space for carrying out the laser challenge. Leo appropriates this suggestion and, taking the toolbox and a laptop with them, the boys reposition themselves in the nearby corridor. They sit down in the corridor very close to one another and engage in interpreting the instructions of the learning challenge, using cumulative talk by repeating the instructions. They also begin to envision, plan and ideate the next steps of the process. Leo offers information to Alex by stating: "It (the Laser Defender) needs to be one meter long". As we can see from Fig. 5, Leo, who is wearing a striped jumper, places a measuring tape on the floor, and Alex (wearing the hat) holds one of the mirrors to begin with the challenge.

Transcript excerpt:
John (to Alex): We already began to do the laser challenge with Mark. I pushed this and then we… (turns and points to Leo's computer screen).
Alex: Do not do that, do not start, we are supposed to start (with Leo). Leo will start by doing this so that we will…
John: Aren't we all supposed to start this on our own computers! We started already (picks up a laser pen). (Alex nods as a positive response).
John: Hey, shouldn't these be two of these lasers, look. Here is only one now! No, here. Look, I found it!
Alex: No, it's not the right one, we need a different one, this is the same kind (takes another laser pen in his hand).
Alex: Look, this light reaches so far!

Fig. 3 John leans over the toolbox to search for the laser pen and pushes Alex
Fig. 4 Alex points at Emilia's face with the laser pen, and she tries to defend herself and smiles
A video camera is placed in the corridor to film them. Alex suddenly points to the camera's lens with the laser pen, temporarily turning the boys' attention away from making the Laser Defender. While playing with the laser pen and the camera, both boys smile and laugh as Alex's light is brightly reflected from the camera's lens, as shown in Fig. 6. Leo also tries to point to the camera with his laser pen but realizes that it does not work, and guesses its battery has run out. He thus returns to the computer lab to get new batteries. The new battery first flips from his hand and lands on the floor, but he then gets the laser pen functioning, and the boys continue playing with the laser pens and laughing. Then, Alex repositions himself toward the measuring tape, gazes down, and draws Leo's attention by calling his name in a loud voice: "Leo! Come and look!", and Leo turns toward Alex, and their joint attention returns to the challenge.
Fig. 6 Boys having fun pointing to the video camera's lens with their laser pens

Transcript excerpt:
Leo: Alex, where should we go to do this? Shall we do it here? (in the lab)
Alex: We are getting close to having all the equipment we need. Let's not do it here, let's do it in the corridor!
Leo: Okay, let's go there. (Then, both boys stand up and collect the rest of the tools and move to the corridor to build the Laser Defender. In the corridor, both boys sit on the floor very close together).
Leo: It (the Laser Defender) needs to be one meter long (Leo places a measuring tape). How should the mirror be placed then?
Leo: I think we must use two of these mirrors to get the light to hit the target. There needs to be two (mirrors).
Alex: Do not do that! Put that away, we don't need that! (the mirror Leo has).

When externalizing their knowledge creation of the Laser Defender (i.e. the knowledge object), Leo and Alex do not often engage in gazing at one another. Instead, their joint attention (talk and embodied actions) is strictly focused on the hands-on materials, for example Alex asking Leo's opinion: "How should the mirror be then?" And Leo, having familiarized himself with the instructions, constructs knowledge via cumulative talk gained from this, and tries to negotiate the concretizing of the knowledge object with Alex by saying: "I think we must use two of these mirrors to get the light to hit the target". Then, Alex starts pointing with his laser pen toward the wall, as we can see in Fig. 7. Constructing knowledge via disputational talk, Alex then disagrees and says to Leo, who has another mirror in his hand: "Do not do that! Put that away, we don't need that!". By stating where they should place a mirror, and by disregarding the option offered by Leo, Alex tries, however, to further concretize the knowledge object. The atmosphere seems quite tense and Leo disagrees, with a critical tone in his voice, using disputational talk: "There should be two mirrors, not just one!", demonstrating by holding the second mirror in his right hand. As illustrated in Fig. 7, Alex continues pointing to the wall with the laser pen. Suddenly, Leo looks away from Alex and temporarily, physically withdraws from the challenge by walking away from the corridor, back to the computer lab, without saying anything more (see Fig. 7).
As we can see from Fig. 7, Alex gazes towards Leo but does not say anything to him as he walks away. He keeps on placing the mirrors in different positions, trying to concretize the epistemic object (the security system using lasers and mirrors). It looks at that point as if their collaborative interaction will not continue; however, after a while Leo returns. At this point, no words are exchanged, and the boys silently continue placing the mirror in a range of positions and directing the laser pen multiple times, via trial and error. After a while, they constructively engage with each other's opinions, contributing to knowledge creation via exploratory talk: Alex first demonstrates with his hand his novel idea of how the light would move, and suggests: "We should try this out, then we would have a laser which goes there". In practice, he then explores this new option of placing the two mirrors and says with a confident voice: "Let's place this here, and this here, I really like this!", appropriating and expanding Leo's original idea of including the two mirrors. By doing this, Alex also distills away his earlier idea, which was unsuitable for concretizing the knowledge object. Leo first agrees, and they stand in silence for a while viewing what Alex has invented.
Some students then walk past and disrupt their work. However, they stay focused on the task. Yet, Leo is still not satisfied, again disagrees, and uses disputational talk to further contest Alex. He suggests a modified approach, of his own invention, relating to positioning the mirrors, stating that: "No, I do not think this is the way, I have an idea!". He silently demonstrates this to Alex by placing the mirrors in different positions to further concretize the shared epistemic object. Alex then appropriates this by simply replying: "Yes!", accompanied by his use of his body to express shared understanding by nodding to Leo. Alex joins Leo in further testing with the laser pens, and soon the Laser Defender starts functioning, and the boys successfully complete the challenge.

Transcript excerpt:
Leo: There should be two mirrors, not just one! (Leo walks away and after a while comes back).
Leo: Let's place this here, and that there and then. I'm trying really hard to do this! Place it there, we need to reflect it from the mirror. We cannot place it here!
Alex: I know now, like this. We should try this out, then we would have a laser which goes there (demonstrates). Let's place this here, and this here, I really like this! (the boys view Alex's work).
Leo: No, I do not think this is the way, I have an idea! (demonstrates). Let's put one here, the other there, there is one and it reflects here and the other there, does this suit you?
Alex: Yes. (Alex nods and moves close to Leo to adjust the mirrors).
Thereafter, their knowledge is made publicly accountable as they introduce their Laser Defender to their peers John and Mark. John and Mark had also carried out the same challenge, and completed it successfully, but through a different design and making process in the FUSE Studio space. Alex and Leo invite them to break in, to try to steal their valuable "treasure". While they are doing this, Emilia, whom Alex had helped earlier, and her group member Linnea enter the corridor to view the Laser Defender created by Alex and Leo. All the boys then turn their attention to the girls, and away from stealing the treasure, and start playing with the video camera, posing and making funny faces. Alex throws his cap, rolls on the floor, and (without words) points the laser pen at Linnea, who, as shown in Fig. 8, tries to defend herself by holding up her left hand as a shield. Both laugh. Then, John and Mark leave, and Linnea, who is still sitting on the floor, picks up the laser pen and leans over the Laser Defender, expanding the usage of the knowledge object beyond the established group, trying to steal the treasure while Leo and Alex look on. Lastly, Leo takes a photograph of the boys' accomplishment.
Fig. 8 Alex points at Linnea with the laser pen, who holds her hand as a shield

In this episode, Alex behaves in a commanding manner toward Leo. Nevertheless, he and Leo collaboratively interacted and contributed equally to solving the learning challenge of creating a laser beam security system to protect a valuable "treasure". Cumulative, disputational, and exploratory types of talk were used as mediational means, together with multiple material artifacts, such as the toolbox, measuring tape, mirrors, laser pens and the video camera. All these served as important mediators, enhancing the externalization and the internalization of knowledge as well as the enactment of knowledge practices during their interaction. In this episode, the corridor as a physical space played an especially pivotal role in mediating the students' interactions and knowledge practices, and it can be viewed as an improvised spatial expansion of the makerspace, providing opportunities for the students' embodied actions and the extension of the given instructions. Further, the corridor importantly allowed for the creation and testing of innovative solutions, as well as the concretization of the shared epistemic object. Despite the boys' frequent use of disputational talk, combined with strong embodied actions, such as pushing one another and withdrawing from the situation, the boys successfully solved the laser challenge. Further, the disruptions, although temporarily fragmenting their maker work, served important social functions, such as gaining the girls' attention and having fun with peers, and the boys always managed to return their joint attention to the challenge.
Episode 2: Pursuing the keychain challenge in the classroom part of the FUSE Studio

Also in this episode, discourse, digital and other hands-on materials, embodied actions, such as gestures and postures, and the physical arrangement mediated the students' collaborative knowledge practices (i.e. orienting to, interpreting, concretizing and expanding knowledge), generating joint attention and leading to an extension of the given instructions and the successful and creative completion of the learning challenge. Here, we focus on an episode in which a group of girls had a member among them who was familiar with the challenge and who, when asked by her peers, took responsibility for coordinating the other three girls' making process. Furthermore, in this episode the girls did not make use of the surrounding space but remained in the classroom part of the FUSE Studio the whole time, sitting around a desk. In this space, their interaction involved a rich constellation of embodied actions, which were first carried out by the member holding knowledge of the challenge, but later by all the other members of the group, to demonstrate, coordinate and support the other group members' work and to work creatively with their hands, leading to the creation of unique keychains.
Four girls (pseudonymized as) Nellie, Emmi, Sara and Nora begin collectively orienting themselves to carry out a learning challenge as a group. For this, they position themselves in the part of the FUSE Studio that is arranged as a more regular classroom, having a single large table around which they all sit next to each other. They jointly decided to work on the FUSE challenge called the "Keychain Challenge", in which the students design and 3D print a keychain with their name or custom message. (For more information see: www.fusestudio.net/challenges). Each of the girls has her own laptop on the table, and they sit closely beside one another as they share two computer screens to design keychains by using the FUSE computer program. Then, Emmi, who is familiar with the challenge, walks into the next room, the computer lab, to get the hands-on materials she wants the girls to use in their work on their keychains.
The episode in our focus here begins as Sara tries to connect the parts of the keychain without succeeding and soon asks for help from Emmi. This situation is illustrated in Fig. 9, in which Emmi is the third girl from the left and Sara is sitting second from the left wearing a black cardigan. Emmi says to Sara: "I'm good at this! I have viewed this challenge and I understand this", sounding quite self-satisfied, but we can see from the way in which Emmi engages her body in Sara's making activity by grasping Sara's hand with a smile and friendly gaze (in Fig. 9) that she eagerly and genuinely wants to help her. We do not know whether Emmi has, as a matter of fact, carried out this particular learning challenge before, as she says she has viewed it, which may refer to merely viewing the instructions or the work of other students. However, here, Sara depends on Emmi, and the other two peers also grant her the responsibility for instructing them as well. Nellie is sitting on the far left and Nora on the far right (in Fig. 9). They each take a rather individual stance, trying to familiarize themselves further with the instructions by looking at the "My Challenges" interface on their own respective computer screens. This ends very soon as the girls turn their attention to jointly following what Emmi is doing with Sara.
Then, Emmi begins orienting all the others to the task by offering information and demonstrating with her hands how to make a special keychain by sewing beads of different colors onto it. Her existing knowledge, embodied actions, as well as the thick thread and the beads, began to mediate and coordinate the four girls' joint attention and engagement. To interpret Emmi's demonstration, Sara then starts constructing knowledge via cumulative talk by repeating from memory and reiterating through embodied actions what Emmi had demonstrated to her. For this, she quickly takes the keychain from Emmi's hands, visioning, planning and ideating the next steps in her making process: "Could I do something like that?". Emmi confirms by nodding to her that she has understood correctly. Thereafter, Sara, with a happy look on her face, continues exploring, stitching and connecting the parts by herself, beginning to concretize the shared epistemic object (i.e. designing and producing a customized keychain).
Their making process then stops for a while as the girls begin to discuss their pets, especially a rabbit Sara had that had recently died. The girls reminisce about the rabbit and also make jokes about it and about how difficult it is to select a name for pets. Then, their making activity continues as Nellie draws their attention back to the task by asking a question: "Are we doing this in the right way?", combined with a long gaze toward Emmi, as a way of expressing a desire for her to assess their progress and also to support her. After this, Sara tries to connect the parts of the keychain and is moving back and forth (by embodied actions), but then again begins to struggle and asks for further help from Emmi. Then, Emmi begins the joint interpretation of knowledge; she looks down at Sara's hands, and then glances at the laptop in front of her. She frowns, and confidently begins elaborating to all the girls how the parts of the chain need to be connected, in fact raising the difficulty of the learning challenge and extending the instructions of the FUSE program.
The making process then proceeds so that Sara states to Emmi: "Oh dear, this is so difficult! This hurts my hands a bit". As shown in Fig. 10, the two girls withdraw from the group activity by taking the embodied action of leaning forward and directing their joint attention (as a pair) to supporting Sara's sewing and the concretization of the shared knowledge object. When verbally externalizing her knowledge to Sara, Emmi simultaneously demonstrates the procedure to her by sewing and connecting some parts of her own keychain, which she has in her hands. Again, Sara is trying to interpret this by replicating Emmi's actions with her hands, sewing black beads to decorate her keychain. Yet, Sara finds it difficult to proceed with attaching the beads. At this point, Nellie is also carefully paying attention to Emmi's demonstration, and Nora has stood up to see it more closely. After the girls view Emmi's actions for some time, we can see (in Fig. 10) how Nellie gazes at Nora (standing on the right-hand side of Fig. 10) and raises her thumb up, as if a sign of appropriation and understanding of Emmi's instructions, to which Nora nods, as a sign of appropriation. It can also be seen as an embodied act of collaborative distillation. Nellie sets aside some of the options and solutions she and Nora had so far tried out, reinforcing Nellie and Nora's joint decision to proceed with the making activity according to Emmi's instructions.

Transcript excerpt:
Sara (to Emmi): I am totally lost now. How can we proceed from here? Tell us.
Emmi: I'm good at this! I have viewed this challenge and I understand this! (begins to unpack the toolbox). Let me help you!
Nellie: Are we doing this in the right way? What are these purple parts? How does this go?
Sara: How can I make this smaller and at the same time wider? Please, help me. I would really like to do this. Could I do something like that?

Fig. 9 Sara struggles and Emmi smiles and offers help
After a while, Sara is again struggling and seeking Emmi's help; Emmi tries to strengthen her confidence and simultaneously begins to give her more elaborate advice about creating the keychain. First praising Sara by saying: "Yes, good!", and then, with a confident gaze, taking the role of an expert, Emmi further externalizes her tacit knowledge and experience to Sara, guiding her: "Then, take that black bead next because you need it to hold this together". Emmi states that it needs to be a different black item, and hands the necessary one to Sara, who takes it from Emmi's hand. She then supports her with encouraging words, and exemplifies exploratory talk by suggesting: "Let's not think about if it's difficult or not, we can just try it and see how it goes. It's fun". As demonstrated in Fig. 11, all the girls including Sara then smile, appropriating Emmi's view. This is an important turning point in Sara's progress with the challenge, as thereafter, holding the bead in front of Emmi, she smiles and constructively engages with her opinion and reasoning. She says: "Here is the black one, I think this should work", expressing that she had understood the instructions and gained some confidence via the other girls' (verbal and embodied) support.
Sara: I wonder how this needs to be done (tries with her hands). Oh dear, this is so difficult! This hurts my hands a bit. This comes off really easily (tries out).
Sara: How can I do this?
Emmi: Sure, I can help, I know this (leans towards Sara). Please take the black one. Yes, good! You first place these things together. Then, take that black bead next because you need it to hold this together (Sara takes a bead from Emmi's hand).
Fig. 10 Emmi and Sara direct their joint attention to their hands to support Sara, and Nellie raises her thumb to Nora

Sara (to Emmi): How can I do this? Look how this looks?
Emmi: Let's not think about if it's difficult or not, we can just try it and see how it goes. It's fun! (all of the girls smile).
Sara: OK. Here is the black one, I think this should work. (Then, she accidentally drops her computer mouse on the floor).
Sara: I think this mouse was already a little broken.

The tools and embodied actions mediated and coordinated the students' joint attention and engagement, and the four girls had collected information, concretized knowledge, and developed a shared understanding of the artifact they were creating. Their making activity is then temporarily disrupted as Sara drops her computer mouse on the floor. She picks it up and checks whether it is broken; the other girls silently wait for her, and the mouse still functions. Then the girls direct their attention back to the task, and Emmi is still looking over Sara's work and providing her with support and guidance when needed. Soon after, Emmi finalizes her own keychain, raises her head, and starts to giggle. These are all signs of a positive atmosphere. Sara smiles back at her, then puts down the bead from her right hand, holds the new, squared black object in her left hand, and starts to insert a cord to run through it. At this point, Nellie and Nora, also supported by Emmi's instructions and by one another, successfully complete their keychains. Viewing Sara's work, leaning closer to her, and pointing at her hand with the black bead, Emmi then explains: "You need to bind it then in a hook-like manner". Sara adds her own reasoning by responding with a question, "Like that?", demonstrating that she had understood, and simultaneously changing her making process; Emmi replies: "No, not like that, like this". This is not viewed as disputational talk, but rather as an effort to negotiate the concretizing of the knowledge object, as Emmi responds with a smile on her face. Appropriating Emmi's idea, Sara then adjusts the position of her hand, leading to the successful completion of the keychain, and joyfully shouts: "Oh, my god, look, I managed to do it!" As illustrated in Fig. 12, Nora turns her attention to Sara, Nellie raises her hands to celebrate her success, and all the girls smile.
Emmi: No, no, not like that! Like this, from there (demonstrates). Then, you need to do like that. You need to bind it then in a hook-like manner.
Sara: Oh, my god, look, I managed to do it! Wow!!
Nellie: Go and get the camera, so that we can take a photo of this!

Fig. 12 Nellie raises her hands to celebrate Sara's success

In this episode, Emmi, who had more accumulated knowledge of the challenge and of stitching, played a pivotal role in sharing her knowledge via talk and bodily movements, such as touch and demonstration, and via use of the thick thread and the beads. Through this contribution, the challenge became understandable to the others and was concretized into a shared epistemic artifact. In particular, Sara's questions to Emmi promoted the externalization of Emmi's knowledge, cumulative talk, and the enactment of the knowledge practices. Emmi's creative expansion of the instructions generated exploratory talk and embodied actions leading to a special design (the inclusion of beads) of the keychains, simultaneously increasing the difficulty of the learning challenge, while Emmi also took full responsibility for guiding the others. However, even though Sara often struggled, in this episode we did not witness disputational talk. Instead, the atmosphere was positive and supportive, and all the girls successfully completed the challenge. Even though Emmi mostly instructed the others, they worked collaboratively, as she was also discovering new things. The other three girls also, mainly by embodied actions, introduced new ideas to the joint activity and contributed to the concretization of the artifacts. Nora and Nellie mostly worked independently and remained mostly silent. However, with their embodied actions they actively took part in the inquiry, the collaborative interaction process, and the knowledge practices. In this episode, the girls did not comment on the presence of the video camera. They stayed in their selected space, and did not express an intention to reposition themselves within the room or to move elsewhere during the session, except for Emmi at the beginning, when collecting materials from the space next door.
Episode 3: Building a roller coaster in the computer lab

As in the two previous episodes, in this case discourse, digital and other "hands on" materials, embodied actions, and the physical arrangements mediated the students' collaborative knowledge practices (i.e. orienting to, interpreting, concretizing and expanding knowledge), leading to the successful completion of the learning challenge. Also in this example, the students (the same boys as in the first example) made use of the FUSE Studio space and its materials in creative ways, for instance by standing on the cupboard in order to coordinate their work in the computer lab. Also in this episode, the learning challenge was new to the boys. They critically but mostly constructively engaged with each other's ideas, and made expansive uses of the tools they had been given. Their work included a rich constellation of embodied actions and acts of trial and error, pivotal for the boys' joint attention as they progressed with the challenge. Also in this episode, the boys' critical commenting and joint exploration led to the creative expansion of the instructions provided by the FUSE Studio computer program.
Four boys, Leo, Alex, John and Mark, decided to work as a group again, this time in the computer lab part of the FUSE Studio on a "Coaster Boss" learning challenge. In this challenge, a roller coaster is built, and a marble ball must pass through the track at a certain speed and under certain conditions. (For more information, see: www.fusestudio.net/challenges). To orient themselves to the challenge, the boys select materials for the challenge from the cupboard without looking at the instructions on the computer screen and place the materials into a cardboard box. As the boys decide to build two roller coasters, John and Mark, belonging to this group, work on the same challenge, building their roller coaster in the opposite corner of the computer lab, and do not communicate with Leo and Alex until the end of the FUSE Studio session. In this episode, we focus on presenting the work of Leo and Alex.
When positioning themselves in the space to carry out the challenge, Alex and Leo place a computer on the floor, and they focus their joint attention on viewing the instructions from the "My Challenges" interface. Alex reflects on the instructions with Leo, and they soon realize that for the ball to roll fast enough, the roller coaster needs to begin from as high as possible in the room.
Taking a collaborative stance, they decide that Alex should climb on the low cupboards behind the teacher's desk. Leo hands him tape and foam strips. Leo begins envisioning the roller coaster and interpreting the instructions by suggesting they slice the tape into pieces of a certain length, long enough to keep the coaster steady, starting from the wall. For this, Alex asks for tape in a commanding voice. As shown in Fig. 13, Leo, wearing a striped jumper, gazes towards Alex (whose face is not visible due to the positioning of the camera) and asks his opinion: "What size do you need?". Alex bodily offers him information by showing the size with his fingers. Then Leo, via disputational talk, begins to question the way Alex is attaching the foam strip to the wall: "OK, here is the tape. But hey, that part does not go like that. I'm sure, it will not stay put if you place it like that". Alex firmly replies that he thinks it will stay put, and continues working on the challenge alone, iterating and repeating some earlier phases of the process.
As Leo disagrees, Alex jumps from the cupboard to the floor to jointly construct a more solid tube with Leo. The boys work together using tape to attach the foam strips together. Alex then climbs back on the cupboard to hold the roller coaster in a certain position so that Leo can add parts to it. However, Alex does not say anything, but still seems not completely satisfied with how the coaster is positioned. He jumps from the cupboard to the nearby desk to reposition it. He keeps sitting on the table for quite a while trying to adjust the coaster's position. Then, both boys enthusiastically move to see how the marble ball moves along the tube and try to roll it for the first time. For this, Alex jumps from sitting on the desk to stand again on the teacher's cupboard. Suddenly, the ball gets stuck in the tube, making Leo question Alex's way of adjusting the position of the tube. Leo then takes over adjusting the tube, which Alex agrees to, and, as suggested by Leo, cuts some foam from the high end of the tube. Then, Alex suggests to Leo: "Now, let's test it with the ball, to make this coaster really great!", exemplifying exploratory talk and trying to create a satisfying shared epistemic artifact. Leo nods and appropriates his suggestion. The boys test the coaster to see whether it works. When carrying out the test ride, however, the marble ball rolls too fast and bounces away from the boys, under a desk; Leo searches for it while Alex holds the roller coaster they had created so far, as shown in Fig. 14. The boys look disappointed (judging from their facial expressions) and decide to revise their plan and the epistemic object in progress, in order to jointly appropriate it and to make it function properly.
Alex: Hand me more of that tape and tubes (foam strips) (in a commanding voice).
Leo: What size do you need? (Alex replies by showing with his fingers).
Leo: OK, here is the tape. But hey, that part does not go like that. You need to put it there! I'm sure, it will not stay put if you place it like that. If we do it like this (walks to the toolbox and gets two more foam strips from the box).
Leo: I'm starting to create a track from this. But the ball will not go like that! (walks again to the toolbox and gets two more foam strips from the box).

As Leo finds the marble ball, the boys re-direct their attention to making adjustments to the coaster and decide to reposition it, to make the ball roll slower and not bounce off the track. Leo continues reasoning and adjusting the foam strips by adding tape, to which Alex loudly exclaims, in disputational talk: "No, no, no, do not do that!", and disregards Leo's further suggestion about adding a whole new foam strip to the coaster. Alex then further commands Leo to provide him with some materials from the floor and scissors. Then, Alex very carefully adds a small piece to the end of the tunnel, and with this embodied action distils Leo's idea of using the longer piece of foam strip. Alex then shares his existing knowledge and suggests they create a loop to make the ball roll slower. As we can witness in Fig. 15, he uses his body to make a circular movement with his left hand, to demonstrate a loop. Leo carefully gazes toward him and follows his demonstration (see Fig. 15) of how to proceed to make the coaster function correctly. Having observed this for a while, Leo negotiates with Alex on the positioning of the loop. They adjust it a little, and then he appropriates Alex's idea by confirming: "Yes, you are right, it goes really handy there, just like that", featuring shared understanding of the knowledge object in the making. It also exemplifies collaborative distillation, as Leo and Alex are setting aside options which did not aid their progress. To proceed, the boys then decide to switch roles: Leo adjusts the tube by adding tape to attach it more firmly to the table, and Alex holds the foam strip from which they are creating the loop. During this collaborative interaction, joint attention, mutual engagement and decision making, the knowledge object (the roller coaster through which a marble ball rolls at a certain speed) further concretizes. Both boys smile and seem satisfied with the way their work is proceeding.
Alex: Now, let's test it with the ball, to make this coaster really great! (Leo hands Alex the ball and he puts it into the tube and it rolls).
Leo: Here it comes (the ball) (the boys silently gaze at the tube).
Leo: Now, this was a real fast speed! There it went! This was far too fast! The ball drops from there (points with his finger) as it bounces off. We need one more long tube part and more tape to make this work (Leo starts making adjustments).
Alex: No, no, no, do not do that! Let me add some additional strengthening to the tube. Hand me that small piece and hand me the scissors (Leo does this but disagrees).
Leo: The tube is not long enough; we need to make it a bit longer. What if we tried to do it like this? (demonstrates with his hands).
Alex: It goes like this, we need to attach these together here, and place this part in between, like that. Now, it will definitely stay put! We should create a loop here to slow the speed down a bit. Is that ok? (demonstrates this by making circles with his left arm and holding the tube in his other hand). (…)
Leo: Ok, after you tape, it will stay like that. Yes, you are right, it goes really handy there, just like that!
Alex: What if we place it here? (points to the desk he sits on).
Alex: Take the tube, not that, the longer one. Can you hold this? Not like that, like this (Leo holds the additional strip foam and Alex uses tape to attach it to the existing tube).
Fig. 15 Alex smiles and demonstrates, by making circles with his hand, his idea of creating a loop

Then, the teacher comes along, telling the boys that the session will soon end. The boys begin to hurry to finish their roller coaster, as Alex had just begun adjusting the tube. Both boys wanted to test whether this would make the marble ball roll at the right speed. As shown in Fig. 16, Leo inserts the ball into the roller coaster, and Alex gazes upwards in order to observe. Before doing this, Leo uses his knowledge, accumulated from the first trial of the coaster (including multiple instances of trial and error), and places a plastic box at the end of the tube, to ensure that the ball will not bounce onto the floor and break. After some further reasoning, exploring and adjusting by Alex and Leo to improve the coaster, and Alex jumping back and forth from the desk to stand on the cupboard, the roller coaster begins to work as they wanted.
Leo and Alex then make their epistemic object (the roller coaster in which the ball rolls at a certain speed) publicly accountable by inviting their peers, John and Mark, to roll the ball with them on the coaster. They do this multiple times and have fun. In doing so, they disregard the teacher's instruction to stop their maker work. Soon, the other teacher present in the makerspace tells them, in a strict tone of voice, to stop and to leave the classroom, as they had already stayed longer than allowed. The boys physically turn their backs to the teacher, avoid looking at him, and do not verbally respond to him. As the teacher repeatedly commands them, they finally begin tearing down the coaster, and while doing so, they suddenly transform their epistemic object into two "weapons", playfighting with one another by using the long foam pieces as swords. As illustrated in Fig. 17, Alex first raises his sword and Leo, holding his own in his hand, responds by hitting Alex's sword, while Mark and John watch. Disregarding the teacher's command, all four boys then start playing with the video camera, filming themselves, and leave the computer lab after the other students.
Teacher: Our time will soon end and then you need to clean up. It's time to assemble things (the boys continue to work).
Leo: Hurry, hurry, we are running out of time! Now, now! (after some adjustments they test the tube once more).
Alex: Wow, now it works!!
Leo: Yippee! We made it!! (John and Mark come closer to view).
Teacher: You coaster bosses, you must wrap up now!

Fig. 17 Alex raising his "sword" to playfight with Leo

In this episode, as in our first example, Leo and Alex collaboratively interact and challenge one another by expressing differing opinions. As in the first example, this is, however, mostly constructive, and in this episode we witnessed less disputational talk than in the first example. In concretizing their knowledge object, the roller coaster, the boys moved back and forth in their process. They searched for solutions to the problems included in the learning challenge, leading to its successful completion. The challenge was new to the boys and, as in the first example, both actively contributed to the making and problem-solving activity. Also, though Alex had a tendency to command Leo, we were able to witness from his non-verbal communication that he respected Leo as a collaborator. They used a rich constellation of embodied actions, as well as the materials and the space itself, by standing and sitting on the cupboard and the desk, to enhance the enactment of their knowledge practices and creativity. In contrast to the first example, the boys did not express an intention to reposition themselves in the space, or to move elsewhere during the session. In this episode, when they were close to completing the challenge, the teacher came along and asked them to tear down their roller coaster as the session was ending. Consequently, they stayed in their selected space and, when rejecting the teacher's commands, expanded their usage of the shared epistemic object to have fun with one another and their peers.
Discussion and conclusions
School-based makerspaces have not yet received much research attention when it comes to understanding students' collaborative knowledge practices within these novel learning environments. In this study, with the aim of furthering the educational potential of technology-rich learning environments, to support students' knowledge creation and learning, we investigated the students' knowledge practices through a multimodal lens, directing our attention to discourse and material, spatial and embodied modalities mediating students' design and making activities. This led us to identifying four types of student knowledge practices, namely orienting to, interpreting, concretizing and expanding knowledge, guiding and facilitating the students' learning activity in the FUSE Studio makerspace.
Our findings add to the existing research on makerspaces, as well as collaboration within them, by making connections between different types of students' talk (see Mercer et al. 1999;Mercer 2005, also Mercer 2019) and their collaborative knowledge practices. When the students were carrying out the STEAM design challenges and enacting the knowledge practice of orienting to knowledge, they typically applied cumulative talk, and often relied on the FUSE Studio computer program to follow the instructions it provided. Yet, as an extension to orienting they used embodied actions such as moving physically towards one another and began searching for mediating means/tools to support their inquiry, such as foam rubber, a marble, tape and scissors, and 3D printers. They also sought alternative spaces and physical arrangements to better serve their making activities, depending on the challenge.
The interacting students held different knowledge sources, and some were less and some more knowledgeable than others, creating challenges and tensions in collective knowledge creation (see Ludvigsen 2012). However, the more knowledgeable students played a crucial role in guiding their peers, and the groups we studied did not typically call their teachers to help them. The interpreting of the knowledge in the pre-given instructions, and of one another's existing knowledge, often involved the students' construction of common knowledge via cumulative talk, in other words, by accumulation and repetition of the instructions provided by the computer program. In their interpreting we can also witness an increase in the students' usage of short utterances and instances of assertion and counter-assertion, competing arguments and constructive criticism, in other words disputational talk. It also included some critical but constructive engagement in hearing each other's opinions, statements and suggestions, accompanied by embodied acts and gestures, such as gaze, demonstration, and physical withdrawal from proceeding with the challenge (see also Mondada 2018). Along with language, such gestures in the students' interaction provide an informative source of evidence regarding the students' knowledge and the nature of maker work (see also Koskinen et al. 2015). Our analysis thus contributes to the understanding of human gestures as tied to the physical, social and cultural properties of the learning environment (also Goodwin 2003;Streeck et al. 2011).
When carrying out the STEAM learning challenges and enacting the knowledge practice of concretizing their visions, ideas and plans into epistemic objects and material artifacts, the students' talk often exemplified features of disputational talk, as the students disagreed and contested one another when working with the available conceptual, material and spatial resources, and in creating knowledge objects and artifacts. We were also able to witness exploratory talk, in which the students made use of a rich variety of available verbal and embodied resources, and, for example, collaboratively expanded the instructions and redefined the use of certain tools (also Baker et al. 1999), to produce artifacts meaningful to them. The students' enactment of the knowledge practice named expanding featured disputational talk, as they disagreed, were not satisfied with the work process, and rejected, modified or entirely revised their epistemic object in progress. It also exemplified exploratory talk, as the students explored, negotiated, jointly reasoned, and gave motivations and explanations for the others' arguments and ideas. By doing so, they reached decisions and agreements, and produced epistemic objects and artifacts, successfully solving the learning challenges. In this, the students also made an effort to make their knowledge publicly accountable, first to their co-maker(s) and then to their peers and the teacher(s). In the students' enactment of the knowledge practices, we have sought to underline the crucial importance of learning from peers. As shown in our examples, the students continuously introduced and demonstrated new ways of working to their peers, and their guidance was more important than the pre-given instructions or the support provided by the teachers in this context.
Along with language, characteristic of the students' interaction in the FUSE Studio makerspace we studied were both material and spatial mediation, as well as the pivotal role of non-verbal communication, when enacting the knowledge practices during STEAM learning. Our study generates new knowledge about the ways the materials of makerspaces become objects of joint attention in ongoing interaction, with opportunities and tensions for engagement and learning. The materials and spaces played an important role for the students to communicate and to establish joint attention, in other words, to coordinate their actions and attention with others on an object (see also Tomasello 2000). Further, the student groups familiarized themselves with negotiated and collectively appropriated tools (also Baker et al. 1999), and the tools then changed and molded their interactions (see also Riikonen et al. 2020). Echoing previous studies on technologically and materially rich learning environments, the students' creation of epistemic objects and artifacts was critically dependent on embodied practices connected to making (Kangas et al. 2013;Blikstein 2013;Kafai et al. 2014). In our study, the students repositioned (by bodily acts) themselves and the materials within the given makerspace room, for example, by relocating from the makerspace to an outside corridor. Also, gestures such as smiles and signs of excitement played a significant role in the enactment of the students' knowledge practices when working with the available conceptual, material and spatial resources.
Our findings point to the need for further investigation of the spatial mediators, namely the physical room and its spatial arrangements in the students' maker work. Adding to the existing research knowledge on makerspaces, our findings highlight the physical space and its organization as an important mediator for the students' knowledge practices as well as their enactment and management of them. The students in our examples chose to carry out maker work in different spaces. Thus, instead of a singular entity, the physical makerspace context and its materials need to be seen as a constellation, providing a variety of opportunities and, in some cases, also constraints for the students' maker work. In light of our findings, the FUSE Studio could be seen as offering a new multi-level instrumentality of learning (see also Engeström 2007), in other words, multi-layered and complex constellations of conceptual, material, embodied and spatial mediators, mediating students' knowledge practices and guiding their activity.
Further, our findings are in line with previous research showing that the presence of heterogeneous learners (also Riikonen et al. 2020;Stahl and Hakkarainen 2020), and the multiple digital and non-digital mediators, often new to the students in a makerspace, can be challenging for the students and may create tensions (also Bråten and Braasch 2017;Ludvigsen 2009). In the FUSE Studio, the students' maker work was not without tensions, as they struggled with the technology and with handling the various materials. The pedagogical principles of the FUSE Studio, emphasizing students' own interest and choice, supported their multimodal knowledge practices, but at the same time demanded active participation and a high level of responsibility taking (also Zhang et al. 2009) from the students. Yet, from our perspective, the complexity and tensions involved were not always harmful, but also triggered and drove students' collaborative knowledge creation and learning (also Kumpulainen et al. 2019a, b). To support student participation, peer tutoring, and learning, the FUSE Studio as a technology-rich makerspace context can be viewed as "a third space" (also Gutiérrez 2008) in which to establish dialogic interaction between the students (Mercer 2005), who hold different knowledge resources (Brown and Campione 1996) and skills and aim for collectively solving challenges. It then provides the students with a learning environment that may support connecting "in deep ways to the life experiences of all students" (see Nasir et al. 2006), to productively apply, use and reflect on their own knowledge. Supporting the multimodal knowledge practices identified in our study with novel pedagogical solutions is important to enhance the students' collaboration and the development of their (and also their teachers') digital competencies. This kind of learning environment can also offer an important place for enhancing students' management of the design and making activities, knowledge creation and learning.
We understand that our study is small-scale and descriptive, and hence further research is needed to explore students' multimodal knowledge practices in makerspaces. We suggest that the typology for the analysis of three "archetypical forms" of students' talk (Mercer et al. 1999;Mercer 2005, also Mercer 2019) is a worthwhile starting point for future research to investigate students' collaborative knowledge creation in makerspace contexts. As we demonstrated, it can be extended by applying the knowledge practices approach (Hakkarainen et al. 2004;Hakkarainen 2009;Seitamaa-Hakkarainen et al. 2010) and multimodal interaction analysis (Goodwin 2003;Kress 2010;Streeck et al. 2011;Taylor 2014), to take adequate account of the reciprocal interaction between knowledge creation and students' collaborative practice on the one hand, and the materiality, spatiality and embodied actions on the other, in mediating students' knowledge practices.
The design and implementation of technology-and materially-rich makerspace learning environments, situated in schools, and new pedagogies suited to them, is not a one-time effort, but a continuous process that includes tensions, modification, implementation and adjusting new and old artifacts, tools, technologies and procedures (see also Engeström 2007). In reporting our findings, we have presented situational examples of the students' interaction and knowledge practices during their design and making activities. In the case of the elementary school that was the focus of our study, the FUSE Studio was embedded in the school's locally-adopted official curriculum (the students frequently attending sessions). We argue that it can thus be regarded as a long-term project, offering a tool to teachers for enhancing educational changes in this particular school, and also in other schools.
In our view, widening the understanding of students' multimodal knowledge practices in makerspaces can provide valuable lessons and guide knowledge advancement and transformation of learning environments in schools. We hope that with our findings, viewing knowledge creation as achieved through a multimodal process, we will be able to further inform the design and implementation of novel pedagogical solutions, to consider and facilitate the management of multiple mediators and peer learning at the intersection between tacit and explicit knowledge in technology-and materially-rich learning environments. At their best, such learning environments can serve as "amalgams of arrangements and mechanisms" (Knorr-Cetina 1999) for collaborative knowledge practices, mediated multimodally. Multimodality is particularly important, often by the support of mediational means other than language, in making visible how the students know and what they know, or do not know, to enhance their creation of meaningful, shared epistemic objects and artifacts, and learning of something that is not yet present and known.
|
2022-12-28T14:55:35.892Z
|
2020-12-01T00:00:00.000
|
{
"year": 2020,
"sha1": "0361f9ff191132de4c3a2f3db2c1e5c920891ba7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11412-020-09337-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "0361f9ff191132de4c3a2f3db2c1e5c920891ba7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
257147853
|
pes2o/s2orc
|
v3-fos-license
|
Thoracoabdominal Aortic Replacement Together with Curative Oncological Surgery in Retroperitoneal and Spinal Tumours
Malignancies with an extended encasement or infiltration of the aorta were previously considered inoperable. This series demonstrates replacement and subsequent resection of the thoracoabdominal aorta and its large branches as an adjunct to curative radical retroperitoneal and spinal tumor resection. Five consecutive patients were enrolled between 2016 and 2020, suffering from cancer of unknown primary, pleomorphic carcinoma, chordoma, rhabdoid sarcoma, and endometrial cancer metastasis. Wide surgical resection was the only curative option for these patients. For vascular replacement, extracorporeal membrane oxygenation (ECMO) was used as a partial left-heart bypass. The early technical success rate was 100% for vascular procedures and all patients underwent complete radical tumour resection with negative margins. All patients required surgical revision (liquor leak, n = 2; hematoma, n = 3; bypass revision, n = 1; bleeding, n = 1; biliary leak, n = 1). During follow-up (average 47 months, range 22–70) primary patency rates of aortic reconstructions and arterial bypasses were 100%; no patient suffered from recurrent malignant disease. Thoracoabdominal aortic replacement with rerouting of visceral and renal vessels is feasible in oncologic patients. In highly selected young patients, major vascular surgery can push the limits of oncologic surgery further, allowing a curative approach even in extensive retroperitoneal and spinal malignancies.
Introduction
Due to the late onset of symptoms, retroperitoneal and primary spinal bone tumours are usually advanced at the time of diagnosis, and the majority of them are malignant [1]. In the past, excessive involvement of vascular structures defined the limits of curative surgical resection in malignancies [2]. Especially when the maximum cumulative radiation dose has been reached, or when there is an inadequate response to chemo- or radiotherapy, surgical irresectability inevitably results in a palliative situation, even for solitary lesions. Therefore, young and fit patients may be offered an individualised curative approach for tumour entities with excellent oncological results after surgical resection, such as retroperitoneal sarcoma. Reaching good local control by uncompromising wide resections with negative surgical margins is the only real curative option for sarcoma [3]. More aggressive approaches seem to offer lower local recurrence rates and longer survival [4]. Another example is chordoma, a locally aggressive malignant bone tumour that is not responsive to chemotherapy and is usually discovered at a late stage [5]. Survival strongly depends on local tumour control [6].
Consequently, advanced oncological resection concepts also include vascular resection and replacement of encased or infiltrated peripheral [7] or abdominal arteries [8,9] up to the infrarenal aorta [10,11]. However, the involvement of the aortic thoracoabdominal transition zone, including the visceral and renal arteries, limits radical oncological resections. Replacement of the thoracoabdominal aorta and its branches is already a burdensome surgical procedure for patients. Nevertheless, analogous to open thoracoabdominal aneurysm repair, with the support of extracorporeal membrane oxygenation (ECMO) perfusion, a replacement can be performed in young patients with acceptably low morbidity and mortality [12-14] and should also be offered to oncological patients. Therefore, as an interdisciplinary approach, we indicated and performed aortic replacement and debranching of the visceral and renal arteries to enable wide R0 resection (free histologic margin) in solitary malignant disease of the retroperitoneum or spine. Excessive involvement of the aorta and its large branches could thus be treated with curative intention in highly selected cases. This article presents the largest published series of such extensive surgical procedures. The indications, technical feasibility, complications, and results are summarised.
Data Acquisition
The medical records of all patients were reviewed retrospectively, and follow-up data, including computer tomography (CT) and magnet resonance imaging (MRI), were collected and inspected. Only descriptive analyses were used due to the small number of patients. The study was approved by the local ethics committee of Technische Universitaet Dresden, Germany (file reference: BO-EK-225042021).
Patients
Consecutive patients with the need for wide resection for solitary malignant lesions either encasing or infiltrating the thoracoabdominal aorta were included. For curative surgical treatment, these patients required aortic replacement including visceral and renal branches. Tumour staging was performed by systematic staging with MRI, CT, and fluorodeoxyglucose positron emission tomography (FDG-PET) in all patients. The absence of distant metastasis was confirmed in all patients, using these imaging modalities. The tumour mass of patient #1 can be seen in Figure 1a on the CT scan, and the FDG-PET (Figure 1b) shows that it is a solitary lesion. CT- or sonography-guided biopsy with histologic workup was discussed at a multidisciplinary tumour conference. All potential neo-/adjuvant therapy strategies were evaluated and critically discussed, considering the underlying biological behaviour of the malignant tumour, especially responsiveness to radio-, chemo-, or immunotherapy. A curative surgical approach was recommended by the local multidisciplinary tumour board. As an individualised therapy approach, it was seen as the only option for local control with potential long-term survival and for control of immediately threatening tumour-growth-induced complications. Sole palliative care was dismissed due to the young age and good medical condition of all patients, with ASA (American Society of Anaesthesiologists physical status) scores of I or II and ECOG (Eastern Cooperative Oncology Group) scores between 0 and 1 (Table 1). In addition, due to intolerable pain or compression of neighbouring structures, surgery was at least indicated for symptom relief in all cases. Prior to surgery, all patients received detailed interdisciplinary information about the extensive surgical intervention and potential functional limitations, morbidity, and mortality, as well as alternative non-surgical (but less promising) treatment options. Exclusion criteria were the lack of physical fitness, especially with regard to multiple anaesthesias, and the extent of organ sacrifice such as one-sided kidney resection. Further contraindications were multiple metastases or multiple primary lesions.
CT-or sonography-guided biopsy with histologic workup has been discussed at a multidisciplinary tumour conference. All potential neo-/adjuvant therapy strategies were evaluated and critically discussed, considering the underlying biological behaviour of the malignant tumour, especially responsiveness to radio-, chemo-, or immunotherapy. A curative surgical approach was recommended by the local multidisciplinary tumour board. As an individualised therapy approach, it was seen as the only option for local control with potential long-term survival and for control of immediately threatening tumour-growthinduced complications. Sole palliative care was dismissed due to the young age and good medical condition of all patients with ASA (American Society of Anaesthesiologists physical status) score of I or II and ECOG (Eastern Cooperative Oncology Group) score between 0 and 1 ( Table 1). In addition, due to intolerable pain or compression of neighbouring structures, surgery was at least indicated for symptom relief in all cases. Prior to surgery, all patients received detailed interdisciplinary information about the extensive surgical intervention and potential functional limitations, morbidity, and mortality, as well as alternative non-surgical-but less promising-treatment options. Exclusion criteria were the lack of physical fitness, especially with regard to multiple anaesthesias, and the extent of organ sacrifice such as one-sided kidney resection. Further contraindications were multiple metastases or multiple primary lesions. about the extensive surgical intervention and potential functional limitations, morbidity, and mortality, as well as alternative non-surgical-but less promising-treatment options. Exclusion criteria were the lack of physical fitness, especially with regard to multiple anaesthesias, and the extent of organ sacrifice such as one-sided kidney resection. Further contraindications were multiple metastases or multiple primary lesions.
Standardised Surgical Approach
To reduce surgical trauma and re-establish hemodynamic and clotting homeostasis as well as for recovery of the patient, we used a staged approach and divided major surgical procedures into several sequential steps. Vascular rerouting as a prerequisite for wide retroperitoneal or spinal en bloc resection was always performed first. A standardised approach with a reproducible sequence was used, beginning with the preparation of the left inguinal vessels as access for partial left heart bypass using a peripheral veno-arterial ECMO setup. This was followed by either a transperitoneal or retroperitoneal approach with abdominal or retroperitoneal exploration and definition of oncological resection lines. Subsequently, the splitting of the aortic hiatus exposed the thoracic aorta above the coeliac trunk. Tumour-free zones for visceral and renal artery anastomoses and aortic bifurcation were exposed without opening the tumour compartment. Systemic heparin was administered until reaching an activated-clotting-time of 200 s or more. Retrograde ECMO perfusion was then started for abdominal and retroperitoneal organ and lower limb perfusion in terms of a partial left heart bypass as described by Palombo et al. [12]. The aorta, including the visceral artery segment, was bypassed with a 20 mm tube graft from 8 to 10 cm above the coeliac trunk (end-to-end anastomosis) to the right common iliac artery (end-to-side anastomosis). Visceral and renal bypasses were inserted end-to-side into the main body and end-to-end into the target vessels. Figure 2 shows case #3 with encasement of the aorta, superior mesenteric artery (SMA), and neighbouring structures by chordoma of the first lumbar vertebra. Aortic rerouting can be seen in Figure 2c including the liberation of the anterior aspect of the spine with dissection of the paraspinal muscle insertions and excision of the intervertebral discs (T11/12 and L2/3) as sufficient lateral, cranial, and caudal resection planes.
To reduce the risk of graft infection, silver-coated polyester prostheses (B. Braun Melsungen AG, Melsungen, Germany) were used for arterial bypasses. The inferior vena cava was replaced with ring-enhanced 20 mm PTFE grafts (W. L. Gore & Associates, Inc., Newark, DE, USA). Non-accessible vessels (inferior vena cava or renal vessels) were addressed in the second step after definitive resection.
After the rerouting procedure, all patients were monitored and stabilised in the intensive care unit (ICU). The patients received the thromboprophylaxis dose of unfractionated heparin or low molecular weight heparin starting on day 1 after surgery. In cases with spinal tumour involvement, posterior liberation of the affected vertebrae and internal fixation were performed as the second step. Posterior instrumentation, as depicted in Figure 2d was performed using screw-rod stabilization (DePuy Spine, Raynham, MA, USA). The vertebral column was then replaced with a carbon-composite cage (ostaPek ® vertebral body replacement, Coligne AG, Zurich, Switzerland or ObeliscProTM, Ulrich GmbH & Co. KG, Ulm, Germany). Figure 3 shows the arterial and venous rerouting of patient #1 and the 3-level vertebral body replacement with a titanium-expandable cage system.
Results
Between 2016 and 2020, five consecutive tumour patients (four males, mean age 44 (range 28-56) years) with encasement of the thoracoabdominal aorta including its large branches were treated in our institution. The treated tumour entities were cancer of unknown primary, pleomorphic carcinoma, chordoma, rhabdoid sarcoma, and metastasis of endometrial cancer with no, insufficient, or unsuccessful conservative treatment options. Patient characteristics can be seen in Table 1.
Vascular Procedures
All five patients were treated with the originally intended vascular replacement. The mean operation time was 606 min (range 491-718) for rerouting. Vascular rerouting included replacement of the distal thoracic, complete abdominal aorta in all five patients and further bypasses to the coeliac trunk, superior mesenteric artery, and kidney arteries as can be seen in Table 2. The use of arterio-venous ECMO in all cases reduced the time of organ ischemia to an average of 10 min, the time for the vascular anastomosis between bypass and target vessel. The average total ECMO operating time was 118 min (range 87-144).
Tumour Resection and Adjunctive Procedures
Definitive tumour resection was scheduled on average 5.4 days (range 4-8) after the vascular rerouting, with a mean duration of 626 min (range 407-969) for the wide tumour resection. In three patients (#2, 3, and 5), a two-stage approach was used, and in two patients (#1 and 4), a three-stage approach was used. The entire tumour mass could be resected en bloc in all patients, with microscopically negative margins in the histopathological workup (n = 5). The resected organs can be seen in Table 2. In one patient (#1), resection and later in situ autotransplantation of the right kidney were necessary to enable wide resection. In another patient (#4) with complete pancreas resection, portal islet cell transplantation was used to preserve metabolic function, as described elsewhere [15].
Outcomes
The mean ICU stay was 44 days (range 20-112). No patient experienced neurologic complications in the sense of spinal ischemia after resection of the thoracoabdominal aorta. Mild paraesthesia of the leg was witnessed in one patient who underwent partial resection of the femoral nerve. All patients showed a transient reduction in kidney function (glomerular filtration rate), on average 24% (0-57%) after the rerouting and 54% (0-87%) in the first three days after the tumour resection. Three patients (#1, 3, and 5) recovered well after fluid therapy and returned to preoperative kidney function. However, two patients (#2 and 5) developed acute kidney failure requiring haemodialysis. The same two patients had respiratory failure with the need for long-term ventilation, while the other three patients were extubated on the day of the operation or the following day. All patients needed additional surgical procedures due to complications, which can be seen in Table 3. The liquor leak was addressed with a dura patch, and the cage was realigned under fluoroscopic control. Fluid collection and hematoma were evacuated surgically, and the biliary leak was treated by sutures of the common bile duct and percutaneous transhepatic cholangiography with external drainage. The vessel kinking occurred after tumour resection because the graft appeared too long once the tumour was removed, and could be corrected by shortening the graft. Organ swelling made primary closure of the abdomen unfeasible, so patient #5 needed a temporary vacuum dressing and later midline closure. In Table 3, bold numbers show the total amount in the cohort.
One hepatic bypass was temporarily occluded resulting in a primary patency of 96% and primary-assisted bypass patency of 100% (Table 4). Septic arrosion was observed in one vascular anastomosis at a hepatic artery (#2) and could be controlled with a short graft interposition. In two patients (#2 and 4) with extensive bowel resection, swelling of the intestines led to postponed reconstruction. These two patients died in the hospital due to septic complications, resulting in a 30-day mortality of one and in-hospital mortality of two.
Follow-Up
All three discharged patients survived without recurrent tumour disease (mean follow-up: 47 months, range 22-70) and with patent bypasses (Table 4). Two patients could walk without any constraints, and one patient needed walking aids for longer distances. One patient (#1) required revision of the posterior instrumentation of the spine after two and three years due to implant failure, and one patient (#3) required talc pleurodesis three months after resection due to chylothorax.
Discussion
This study demonstrates the surgical feasibility, excellent technical success rate, and oncological-prognostic usefulness of wide resection of retroperitoneal or spinal tumours with thoracoabdominal aortic replacement in a heterogeneous series of five oncological patients. Even as independent procedures, open thoracoabdominal aortic repair, and multivisceral wide resection are extensive and technically demanding surgical interventions. The combination of both is even more complex and challenging and is thus prone to complications for the patient. Therefore, patients are most often either rejected and considered inoperable, or treated by intralesional procedures with both options leaving them with palliative treatment and no chance of long-term survival. However, cancer patients with extensive but solitary tumour lesions are candidates for such combined interventions as the only curative option. With optimal local and systemic tumour control by complete resection of the malignant retroperitoneal tumour, these patients have a favourable survival prognosis, because local control is the key to curing primary retroperitoneal and spinal malignancies such as sarcoma [16] and chordoma [6]. The completeness of the resection determines progression-free and overall survival in sarcoma [3,4], isolated colorectal metastasis [17,18], and retroperitoneal recurrences of gynaecologic tumours [19]. Consequently, for patients in good general condition together with clinically relevant tumour symptoms (Table 1), this combination therapy should be considered in specialised centres. Availability of shared experience among orthopaedic, vascular, and visceral surgeons and proceeding as one multidisciplinary team holds the key to technical success and is the mandatory prerequisite to offering a curative surgical approach to such selected patients. This study cohort included individuals with extensive but solitary malignancies in the retroperitoneum, i.e., absence of distant metastasis, as assessed by whole-body PET-CT scan. Biopsy with a histologic classification of the tumour was a prerequisite to distinguish tumour entities with a favourable chance for cure by resection. All cases were discussed by the multidisciplinary tumour board and non-surgical or radio-oncological treatment options alone did not offer curative approaches in any of the included patients. Resection was considered the only potentially curative option in an individualised setting.
In recent decades, the sole presence of vascular involvement has no longer been seen as a contraindication for wide resection [7,9,11]. Oncological outcomes after resection of retroperitoneal malignancies were comparable to cases without vessel involvement [20]. Moreover, with the assistance of a partial left heart bypass provided by extracorporeal membrane oxygenation (ECMO), the adverse effects of long organ ischemia during thoracoabdominal rerouting and arterial debranching can be eliminated with limited functional constraints for the patient. Therefore, aortic resection and replacement can be considered even in patients with extensive disease requiring complex vascular rerouting and revascularization. In our institution, ECMO is now regularly used when thoracoabdominal aortic replacement is performed. We used ECMO as a partial left-heart bypass instead of the cardiopulmonary bypass (CPB) conventionally used in cardiac surgery because no circulatory arrest was needed and aortic cross-clamping was located below the supra-aortic vessels. The ECMO does not have a reservoir, hence no blood stasis is present and less anticoagulation with heparin is needed. Usually, CPB requires an activated clotting time (ACT) of around 480 s, whereas for ECMO use only mild heparinization is needed. We used an ACT of around 200 s while the ECMO was running and vessels were clamped or blocked. With this moderate use of heparin, we expected a good balance between anticoagulation during vessel replacement and haemostasis, so that the subsequent tumour resection did not cause too much bleeding.
Wide resection for retroperitoneal tumours involving both vascular and spinal structures, as performed here by vast aortic resection and multilevel en bloc spondylectomy followed by vertebral body replacement, is remarkable. Compared with the present literature, the extent of vascular rerouting, including the thoracoabdominal aorta and the visceral and renal arteries, and the use of temporary ECMO in this series are unique (Table 5). To date, only four series of various vascular replacements mention replacement of the thoracoabdominal section of the aorta (Poultsides et al. [20], one thoracoabdominal replacement including two visceral bypasses; Schwarzbach et al. [11], replacement of the thoracoabdominal section twice but without visceral and renal branches; Song et al. [21] and Homsy et al. [24], each reporting one thoracoabdominal replacement with bypasses to the hepatic artery, SMA and left renal artery), and one case report (Gösling et al. [22] and Graulich et al. [23]; the same case with a thoracoabdominal aortic replacement but without visceral or renal rerouting). To our knowledge, this is the largest published series of combined thoracoabdominal aortic and visceral resection and reconstruction in patients with vascular and spinal involvement due to retroperitoneal or spinal tumour growth (Table 5). In particular, the extent of aortic resection and the sum of renal and visceral artery reconstructions have not yet been reported. In comparison to the remaining literature, with primary patency of 58-89% [11,20-23], this series, with primary patency of 96% and primary-assisted patency of 100% of all vascular reconstructions, had excellent results (Table 5).
Owing to the extent of the combined surgery, all patients required surgical revision of some kind, mostly linked to the radical tumour resection. Only one patient had vascular complications, with occlusion of a visceral bypass and later bleeding due to septic arrosion. None of the patients had paraplegia, even though large parts of the aorta, including essential thoracic and lumbar segmental arteries, or multiple spinal levels with epidural tumour compression were resected. Resection of these vital segmental arteries has already been proven to be safe in orthopaedic resections [25].
The prior use of chemo- or radiotherapy might have had both benefits and disadvantages. As a benefit of neoadjuvant treatment, resection margins were probably thicker or would involve more different tissues. The presumably more radical resections might support the goal of cure due to uncompromising local control. At the same time, neoadjuvant treatment with consecutive scarring might lead to injury of essential structures during preparation. Especially if scar tissue makes it difficult to optically discriminate neighbouring structures, or if it is technically demanding to dissect adherent structures, intestinal or nerve damage is possible. Further, prior therapy might lead to prolonged healing time, leading again to further issues such as a leaking intestinal anastomosis or open wound healing with possible bacterial contamination of bypass or internal fixation material. The relevant hospital mortality of two out of five in this study needs to be discussed. Certainly, it is obvious that the combination of two maximum surgical interventions within a short time interval is risky. In particular, the combination of visceral organ resection with prosthetic vascular reconstruction, as well as vertebrectomy and cage reconstruction, bears a high risk of septic and bleeding complications. Especially, postponed gastro-intestinal reconstructions because of intestinal swelling might further exacerbate subsequent septic complications. Mortality cannot be compared to the published series mentioned above because of the heterogeneous locations and smaller extent of replaced vessels in the available literature. In series with replacement of just the infrarenal aorta due to malignant processes, mortality was lower and ranged between 0-17% [26,27]. If related to the mortality of thoracoabdominal replacement for aortic aneurysms of 8-15% [13,28,29], the 30-day mortality of one and in-hospital mortality of two out of five patients seems high. However, the patients in this series bear the inherent burden of additional major tumour operations with relevant intraoperative and postoperative morbidity and mortality [30,31]. Both deceased patients requested surgical therapy, with patient #2 suffering from constant pain and bowel obstruction due to the large tumour mass. Patient #4, a medical expert himself, requested surgical therapy with a curative intention, given the favourable outcomes in sarcoma compared to palliation.
All patients who survived the intermediate postoperative phase remained tumour-free until the end of follow-up, matching or even exceeding the clinical results published by other groups (Gösling et al. [22] 100%; Schwarzbach et al. [11] 90% after two years, 67% after five years). The good long-term outcomes allowed patients to return to their daily routines with good mobility and normal bowel and bladder function. The freedom from recurrent disease in all surviving patients seems promising but needs to be re-evaluated over a longer follow-up period with a larger number of patients.
Conclusions
Incomplete resection has long been seen as a predictor of poor outcomes in the surgical treatment of retroperitoneal sarcoma [32]. As a consequence, radical surgery is indicated even if it requires vascular replacement. Extensive combined major oncovascular surgery must be well prepared with optimal imaging such as CT, MRI, and PET-CT. Biopsy and histopathological examination are obligatory to clarify the type of malignancy. Distant metastases need to be excluded preoperatively by all means in order to justify the complexity and complication profile of the extensive surgical procedure. A closely cooperating and experienced multidisciplinary team should then devise an individualised treatment plan, including the indication for surgery. A multimodal approach with neoadjuvant radiotherapy and/or chemotherapy should be considered. Both morbidity and mortality need to be discussed in detail with the patient, considering that surgery is the only curative treatment option in highly selected cases. Using a curative approach, resection and reconstruction of the large vessels, especially in the thoracoabdominal segment, has proven feasible and shows good patency. This further expands the limits of surgical resectability in oncological patients towards curative treatment concepts.
Limitations
This single-centre study comprises a small number of patients as a result of individualised treatment recommendations for every case; it is therefore descriptive only. It is a retrospective analysis of patient records, and no preregistration exists for the study reported in this article. Institutional Review Board Statement: All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the local ethics committee of Technische Universitaet Dresden, Germany (file reference: BO-EK-225042021).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Influence of Sintering Temperature of Kaolin, Slag, and Fly Ash Geopolymers on the Microstructure, Phase Analysis, and Electrical Conductivity
This paper clarifies the microstructural element distribution and the changes in electrical conductivity of kaolin, fly ash, and slag geopolymers sintered at 900 °C. Surface microstructure analysis showed the development of surface densification within the geopolymer upon sintering. The electrical conductivity was found to be influenced mainly by the presence of crystalline phases within the geopolymer samples. The highest electrical conductivity (8.3 × 10−4 Ωm−1) was delivered by the slag geopolymer, owing to the crystalline mineral gehlenite (3Ca2Al2SiO7). Using synchrotron radiation X-ray fluorescence, regions of high Ca concentration revealed the appearance of gehlenite crystallisation, which is believed to contribute to the development of a denser microstructure and higher electrical conductivity.
Introduction
Generally, Ordinary Portland Cement (OPC) provides sufficient thermal stability for most typical applications. However, at elevated temperatures, the properties of OPC deteriorate due to physical and chemical changes [1]. Several studies have therefore been conducted to identify alternative materials with outstanding thermal stability and fire resistance at elevated temperatures. Geopolymers are amorphous three-dimensional (3D) aluminosilicate systems synthesised at ambient or slightly higher temperature by alkaline activation of suitable precursor raw materials from industrial waste, such as fly ash [2-6], slag [7,8], and kaolin [9,10]. These inorganic materials have been shown to deliver exceptional performance at elevated temperatures.
According to Davidovits [11], geopolymer materials exhibit ceramic-like behaviour, with crystalline phases existing up to 1000 °C. When heated or sintered at elevated temperatures, transformation of the crystalline phases begins to occur. Abdulkareem et al. [12] studied a fly ash geopolymer heated from ambient temperature to 800 °C without focussing on the changes in phase composition and element distribution that occur at elevated temperatures. Wang et al. [13] found that the structure of kaolin was mainly influenced by the calcination temperature; after heating at 900 °C, the change in aluminium species, rather than the silicon atoms, governed the structural changes of the geopolymer. Meanwhile, the crystallinity behaviour and microstructural change of a slag-based geopolymer under high-temperature conditions were investigated by Rovananik et al. [14]: after heating up to 1200 °C, the calcium aluminosilicate framework filled the pores between akermanite crystals. The dense heated geopolymer displayed a glassy phase, which is the basis of ceramics. Traditionally, ceramic vitrification is initiated from about 900 °C onwards, marked by the melting of several solid phases that bind the solid particles present and enhance the bonding strength [15,16].
Meanwhile, according to Cui et al. [17], the electrical conductivity of geopolymer materials is strongly influenced by the appearance of the microstructure. An acceptable level of electrical conductivity is believed to play a major role in fast ionic conduction for electrochemical sensors or solid-state batteries. Vladimirov et al. [18] reported that the overall electrical conductivity depends critically on the density and nature of the grain boundaries. Understanding the microstructural and phase evolution at high sintering temperatures, and its effect on electrical conductivity, is therefore highly valuable for geopolymer and ceramic materials.
Thus, the aim of this work was to characterise the microstructural change, crystallographic evolution, and electrical conductivity of kaolin, slag, and fly-ash-based geopolymers sintered at 900 °C. Based on these results, the correlation between crystallography and electrical conductivity was clarified. The observations were correlated with the element distribution obtained using synchrotron-based micro-X-ray fluorescence.
Materials and Sample Preparation
The samples used in this work were formed from the precursor sources kaolin (Associated Kaolin Industries Sdn Bhd, Perak, Malaysia), fly ash (Manjung Power Station, Perak, Malaysia), and slag (Ann Joo Integrated Steel Sdn Bhd, Penang, Malaysia), with the chemical compositions determined by a benchtop X-ray fluorescence spectrometer (PW4030) given in Table 1. The sodium silicate (Na2SiO3) solution used was provided by South Pacific Chemical Industries Sdn. Bhd., Malaysia, consisting of SiO2 (30.1%), Na2O (9.4%), and H2O (60.5%) with SiO2/Na2O = 3.2. A clear NaOH solution was mixed with the sodium silicate solution and cooled to ambient temperature one day before mixing. The samples were prepared according to three different geopolymer mix designs. For each raw aluminosilicate powder (kaolin, fly ash, and slag), the NaOH molarity, alkaline activator, solid-to-liquid ratio, and curing temperature were based on previous works (Table 2). The selected mix designs were based on the optimum mechanical performance [19-21]. The precursor material was mixed with the alkaline activator solution for 5 min, and the homogenised mixture was poured into a mould. After curing for 28 days, the geopolymer was crushed and sieved at 38 µm to produce a fine geopolymer powder. Two grams of each geopolymer powder were weighed and compacted by powder metallurgy. The compacts were then sintered at 900 °C (heating rate of 10 °C/min, soaking time of 2 h) in a standard electrically heated furnace.
Characterisation and Analysis
The microstructure surface analysis of the unsintered and sintered mixtures was performed using the JSM-6460LA model Scanning Electron Microscope (JEOL, Peabody, MA, USA) connected with secondary electron detectors. The samples were placed onto a double-sided carbon tape. The acceleration voltage and working distance were fixed at 10 kV and 10 mm, respectively.
The samples were examined from 400-4000 cm−1 (resolution of 4 cm−1) using a Perkin Elmer FTIR Spectrum RX1 spectrometer (Llantrisant, UK). The samples were prepared in powder form and then placed onto the sample slot (ATR crystal). Samples were evaluated by applying the potassium bromide (KBr) pellet methodology. The shifting of the functional groups of the unsintered and sintered geopolymers was recorded.
The Brunauer-Emmett-Teller (BET) surface area and pore volume were measured from the nitrogen adsorption-desorption isotherm at 77 K (TriStar 3000, Micromeritics Instrument Corporation, Norcross, GA, USA). The quantity adsorbed was correlated to the total surface area and pore volume of the particles in the samples.
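To make the surface-area calculation concrete, the following is a minimal, hypothetical sketch (in Python, not the instrument software actually used) of how a BET linearisation converts a few low-pressure isotherm points into a monolayer capacity and a specific surface area; the isotherm values below are invented for illustration only.

```python
import numpy as np

# Hypothetical N2 adsorption data at 77 K: relative pressure p/p0 and volume adsorbed (cm^3 STP per g)
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v_ads = np.array([4.1, 4.9, 5.5, 6.0, 6.5, 7.0])

# BET linearisation: 1/[v((p0/p)-1)] = (c-1)/(v_m*c) * (p/p0) + 1/(v_m*c)
y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)

v_m = 1.0 / (slope + intercept)   # monolayer capacity, cm^3 STP/g
c_bet = 1.0 + slope / intercept   # BET constant

# Convert monolayer volume to a specific surface area:
# moles of N2 per gram = v_m / 22414; each adsorbed N2 molecule covers ~0.162 nm^2
N_A = 6.022e23
sigma_n2 = 0.162e-18                       # m^2 per N2 molecule
s_bet = (v_m / 22414.0) * N_A * sigma_n2   # m^2/g

print(f"v_m = {v_m:.2f} cm3(STP)/g, C = {c_bet:.0f}, S_BET = {s_bet:.1f} m2/g")
```

With these invented points the estimate lands in the tens of m²/g, the same order of magnitude as the fly ash geopolymer value reported in the results.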
The samples were prepared in powder form for the phase analysis. The XRD analysis was performed using an XRD-6000 Shimadzu X-ray diffractometer (Columbia City, IN, USA) with Cu Kα radiation (λ = 1.5418 Å). The analysis was operated at 40 kV and 35 mA over a 2θ range of 10° to 80° (scan rate of 1°/min). The XRD patterns were analysed using X'Pert HighScore Plus 2.0 software (Malvern Panalytical Ltd., Malvern, UK).
The electrical measurements were carried out at ambient temperature using a Keithley four-point probe setup, selected for its ability to measure without interference from contact resistance. A probe spacing (s) of 1 mm was used, and the measurement was performed at 0.15 s/cm with a current of 1 A applied to the samples. The resistivity and electrical conductivity of the samples were calculated from the standard collinear four-point probe relations

ρ = 2πs (V/I) and σ = 1/ρ,

where ρ = resistivity, s = probe spacing, V = voltage, I = current, and σ = conductivity.
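As an illustrative aid only, the short Python sketch below applies these relations to a hypothetical probe reading; it assumes the bulk-sample form of the collinear four-point probe equation given above, without thin-sample geometric correction factors.

```python
import math

def four_point_probe(voltage_v: float, current_a: float, spacing_m: float):
    """Return (resistivity in ohm*m, conductivity in (ohm*m)^-1) from a collinear
    four-point probe reading, using the bulk-sample relation rho = 2*pi*s*(V/I)."""
    resistivity = 2.0 * math.pi * spacing_m * (voltage_v / current_a)
    return resistivity, 1.0 / resistivity

# Hypothetical reading: probe spacing 1 mm, 1 A drive current, measured voltage drop of 0.28 V
rho, sigma = four_point_probe(voltage_v=0.28, current_a=1.0, spacing_m=1e-3)
print(f"resistivity = {rho:.2e} ohm*m, conductivity = {sigma:.2e} (ohm*m)^-1")
```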
The element distributions in the sintered geopolymers were measured using synchrotron µ-XRF at the BL6b beamline of the Synchrotron Light Research Institute (SLRI) in Bangkok, Thailand. Detection limits at the sub-parts-per-million concentration level can be obtained for features larger than 100 nm, and the sensitivities approach the attogram (10−18 g) level [22]. A polycapillary lens was used to focus the continuous synchrotron radiation into a micro-X-ray beam (30 µm × 30 µm) on the samples. The energy range of the micro-X-ray beam was set to 2 to 12 keV without the monochromator feature. The experiments were conducted in a helium gas atmosphere with 30 s of exposure time at each point. The resulting maps were created by bilinear interpolation and analysed using the PyMca software [23]. Figure 1 depicts the sample specification and the localised scan points on the surface of the samples.
Microstructure Analysis
The microstructures of the fly ash, kaolin, and slag geopolymers subjected to the sintering temperature, as investigated by SEM, are shown in Figure 2. The unsintered fly ash geopolymer (Figure 2a) showed incomplete dissolution of the fly ash spheres, while well-defined clay platelets were present in the kaolin geopolymer (Figure 2b). A small amount of remnant slag particles within a rough surface can be seen in Figure 2c; these remnant particles were enclosed within the geopolymer matrix [19]. The micrographs of the unsintered geopolymers reveal that the higher curing temperature is adequate for the development of a structurally firm geopolymer.

After sintering at 900 °C, a smoother and denser geopolymer surface could be clearly seen in the matrix, as shown in Figure 2d-f. Referring to Figure 2d, micro cracks with fewer remnant fly ash particles were observed, as indicated by the spherical-shaped particles. The kaolin geopolymer surface became increasingly glassy and smooth after sintering at 900 °C (Figure 2e). The change in the microstructure was presumably due to the dehydration of moisture and the phase transformation discussed in the next section. At the same sintering temperature of 900 °C, the slag geopolymer surface showed fewer cracks and pore formations. The crack-healing ability of the slag geopolymer reduced the crack lines in conjunction with partial melting, a result supported by the findings of Dudek et al. [24].
Pore Structure Analysis
The influence of the sintering temperature (900 °C) on the pore structure of the geopolymer samples was analysed by the Brunauer-Emmett-Teller (BET) technique. The specific surface areas and pore volumes of the unsintered and sintered geopolymers are depicted in Figure 3. The kaolin geopolymer (KG) sample showed the lowest surface area (0.86 m²/g) and pore volume (0.03 cm³/g), while the fly ash geopolymer (FG) sample delivered the highest surface area (26.6 m²/g) and pore volume (0.07 cm³/g). Fly ash is a by-product of coal combustion in thermal power plants [25] and typically contains unburned carbon, which leads to a porous material with a higher surface area, as shown in Figure 3.

It can be seen in Figure 3 that, after sintering at 900 °C, the slag geopolymer (SG) showed the lowest surface area (0.86 m²/g) and pore volume (0.01 cm³/g) compared with KG900 and FG900. The small surface area and pore volume indicate that SG900 is compacted and denser, which contributes to lower permeability through the minimisation of porosity. This is also supported by the microstructure analysis shown in Figure 2c. The slag particles are believed to contain only a minor quantity of mesopores, and therefore exhibited a lower surface area after exposure to the high sintering temperature.
Structural Spectra Analysis
The Fourier-transform infrared (FTIR) spectra of the unsintered and sintered (900 °C) geopolymers are shown in Figure 4. The unsintered geopolymers presented broad absorption bands at ~1000 cm−1 and ~500 cm−1, corresponding to the main fingerprints of the geopolymer structure, identified as stretching vibrations of Si-O-Si/Al aluminosilicates, as depicted in Figure 4a. Weak OH− stretching and bending modes were found at ~3600 cm−1 and ~1650 cm−1, respectively; these OH vibrations arise mainly from the typical absorption of bound water in the geopolymer product. Meanwhile, the narrow peaks at 1430-1500 cm−1 correspond to the presence of carbonate (CO3)2−. The slag geopolymer delivered the highest wavenumber at ~1500 cm−1, as it contained a higher level of CaO, indicating the vibration of calcite [26]. Figure 4b clearly shows a remarkable change in the absorption bands after sintering at 900 °C, indicating full dehydration of the geopolymer. This was evident in the considerable broadening of the spectral region of 1000-1037 cm−1 in the samples sintered at 900 °C. The band shifted to lower wavenumbers, namely kaolin geopolymer (1037 cm−1), slag geopolymer (1009 cm−1), and fly ash geopolymer (1000 cm−1), owing to the asymmetric stretching vibrations of Si-O-Al and the symmetric stretching of tetrahedral Si-O in the geopolymer framework. The appearance of new bands at 700-780 cm−1 is attributable to the symmetric stretching vibration of Si-O-Si(Al) bridges corresponding to crystalline minerals such as quartz, gehlenite, and akermanite [9,27]; the appearance of these crystalline phases is further confirmed in the phase analysis section below. A typical observation for geopolymers sintered at elevated temperatures was the decrease in intensity of the weak vibration modes at 3400-3500 cm−1 and ~1640 cm−1, attributed to OH stretching and bending vibrations, again suggesting full dehydration of the geopolymer.

Phase Transformation

The formation of gehlenite, akermanite, and diopside could be seen in the samples sintered at 900 °C. Commonly, the evolution of crystalline phases may act as a filler that reinforces the matrix and enhances thermal stability. The crystallisation also promoted a denser microstructure, as exhibited in the SEM images shown in Figure 2. According to Heah et al. [28], crystallisation at elevated temperature improves the mechanical properties of the geopolymer, which is believed to be contributed by the denser surface. De-carbonation at 900 °C contributed to the decomposition of calcium hydrates and the reduced intensity of calcite. Silica, calcium oxide, and aluminium oxide were liberated from the geopolymer matrix to produce the crystalline phase of gehlenite, as shown in Equation (3) [29].

Meanwhile, akermanite was produced as a major crystalline mineral by the reaction between calcite and free magnesium and silicon oxides, according to Equation (4) [30].
A contrasting phase analysis was obtained for KG900. Upon sintering at 900 °C, a portion of the kaolinite phase transformed into semi-crystalline nepheline, as indicated by its diffraction peaks. Based on previous studies, nepheline crystals form in heat-treated sodium-based geopolymers [31]. According to Sabbatini et al. [32], the presence of nepheline assists the improvement of mechanical performance as a result of a favourable amount of silicon-rich and polymerised species. This is also corroborated by the FTIR analysis, as the sintered kaolin geopolymer showed stronger Si-O-Si/Al aluminosilicate vibrations at ~1000 cm−1 (Figure 4).
Electrical Conductivity
The electrical conductivity was measured in view of the potential use of the geopolymer matrix as a reinforcement material in electronic applications. Figure 6 shows the electrical conductivity values of the unsintered and sintered geopolymers. Among the unsintered samples, the highest conductivity was obtained for SG (5.62 × 10−4 Ωm−1), while the lowest was obtained for FG (5.19 × 10−4 Ωm−1). The electrical conductivity of SG can be explained by ion mobility (Ca2+, OH−, Na+) and electron transport across the percolated network of the calcium-based geopolymer. These ions are readily absorbed by a thin layer of hydration product such as calcium carbonate (CaCO3), as shown in Figure 4. Meanwhile, iron-rich particles in fly ash were identified as the main cause of the lower electrical conductivity, as metallic iron particles are believed to be excluded from the fly ash geopolymerisation; the unreacted spherical particles can be clearly seen in Figure 2. This is in accordance with the results published by Alida et al. [21].

Upon sintering at 900 °C, the electrical conductivity of the geopolymers increased. As can be observed in Figure 6, SG900 exhibited a higher electrical conductivity than FG900 and KG900: the electrical conductivity of SG900 increased by 47.9%, compared with 24.5% for FG900 and 29.8% for KG900. The significant increase in electrical conductivity is attributed to the denser microstructure and fewer pores, which connect the paths of electron transport and enhance the electrical conductivity.

As can be observed in Figure 7, the microstructure displayed a crystalline grain structure after the geopolymer was sintered at 900 °C. The development of this ordered structure indicates the occurrence of grain-neck growth, resulting in material densification [33,34]. The necking grains provide additional conductive pathways in the crystalline microstructure, and the rearrangement of the structure into the crystalline phase enhances the alkali metal-ion transfer rate and the electrical conductivity [35].
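For clarity, the relative increases quoted above can be turned back into absolute conductivities. The short Python sketch below does this back-calculation using the unsintered values given in the text; it is illustrative rather than part of the original analysis, and the KG case is omitted because its unsintered value is not quoted.

```python
# Relative increase in electrical conductivity after sintering at 900 C.
# Unsintered values and percentage increases are those quoted in the text;
# the sintered values are back-calculated here for illustration.
unsintered = {"SG": 5.62e-4, "FG": 5.19e-4}            # (ohm*m)^-1
percent_increase = {"SG": 47.9, "FG": 24.5}            # % increase after sintering

for sample, sigma_0 in unsintered.items():
    sigma_900 = sigma_0 * (1.0 + percent_increase[sample] / 100.0)
    print(f"{sample}900: {sigma_900:.2e} (ohm*m)^-1")

# SG900 comes out at ~8.3e-4 (ohm*m)^-1, consistent with the highest value quoted in the abstract.
```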
Elemental Distribution Analysis
The sintered geopolymers were analysed using synchrotron micro-XRF in order to determine the element distribution and the possible presence of crystalline phases at 900 °C. Figure 8 depicts the selected areas and the micro-XRF maps of Si, Al, and Ca in the sintered geopolymers, showing that these elements were well distributed within the samples. With this advanced technique, the distribution of the major element (calcium) as well as of light elements (silicon and aluminium) could be easily traced.
Referring to Figure 8, the distributions of Si and Al were classified for the identification of the geopolymer main chain (Si-O-Al/Si). In general, the colours blue, green, and red represent low, medium, and high intensities of each element distribution in the integrated area. For KG900, the highly concentrated Si regions (red) reflect quartz grains, while the medium concentration of Si (green) indicates good homogeneity of the sample. Meanwhile, the Ca distribution maps the regions of hydrated minerals in the FG900 and SG900 sintered geopolymers. The highly concentrated Ca territory corresponds to the presence of gehlenite (3Ca2Al2SiO7) and akermanite (Ca2MgSi2O7). Regions with a lower concentration of Ca reflect the boundaries of diopside (MgCaSi2O6), which formed as a minor crystal within the crystalline geopolymer, as corroborated in Figure 5. These Ca-rich crystalline minerals are confirmed to contribute to the denser microstructure appearance portrayed in Figure 2. The elevated temperature resulted in significant changes in the microstructure evolution, edging it towards the formation of a stable crystalline phase.
Conclusions
The microstructure evolution, phase transformation, and electrical conductivity of kaolin, fly ash, and slag geopolymers at the sintering temperature were analysed experimentally in this paper. The influence of the sintering temperature on crystallisation, chemical bonding, and element distribution of the geopolymers was investigated. The microstructure analysis revealed the development of surface densification and fewer pores within the geopolymer matrix. X-ray diffraction revealed the crystallisation of gehlenite, akermanite, and nepheline at 900 °C. The electrical conductivity measurements showed that the crystalline geopolymer could be proposed as a potential reinforcement material in electronic applications, especially in solder composites.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
The monoterpene 1,8-cineole prevents cerebral edema in a murine model of severe malaria
1,8-Cineole is a naturally occurring compound found in essential oils of different plants and has well-known anti-inflammatory and antimicrobial activities. In the present work, we aimed to investigate its potential antimalarial effect, using the following experimental models: (1) the erythrocytic cycle of Plasmodium falciparum; (2) an adhesion assay using brain microvascular endothelial cells; and (3) an experimental cerebral malaria animal model induced by Plasmodium berghei ANKA infection in susceptible mice. Using the erythrocytic cycle of Plasmodium falciparum, we characterized the schizonticidal effect of 1,8-cineole. This compound decreased parasitemia in a dose-dependent manner with a half maximal inhibitory concentration of 1045.53 ± 63.30 μM. The inhibitory effect of 972 μM 1,8-cineole was irreversible and independent of parasitemia. Moreover, 1,8-cineole reduced the progression of intracellular development of the parasite over 2 cycles, inducing important morphological changes. Ultrastructure analysis revealed a massive loss of integrity of endomembranes and hemozoin crystals in infected erythrocytes treated with 1,8-cineole. The monoterpene reduced the adhesion index of infected erythrocytes to brain microvascular endothelial cells by 60%. Using the experimental cerebral malaria model, treatment of infected mice for 6 consecutive days with 100 mg/kg/day 1,8-cineole reduced cerebral edema with a 50% reduction in parasitemia. Our data suggest a potential antimalarial effect of 1,8-cineole with an impact on the parasite erythrocytic cycle and severe disease.
Introduction
Malaria is a life-threatening parasitic disease and a major public health problem. Annually, about 229 million cases and 409,000 deaths are registered worldwide [1]. The most severe manifestation of the disease is cerebral malaria.
Ethics statement
Parasite cultures were supplemented with A+-type blood samples collected from healthy volunteers, randomly selected, who provided written informed consent. All procedures were designed and approved by the Research Ethics Committee of the Hospital Universitário Clementino Fraga Filho from the Federal University of Rio de Janeiro (permit number 074/10).
The ECM model was used to determine the potential antimalarial effect of 1,8-cineole on peripheral blood parasitemia and cerebral edema. This study was performed according to the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All experimental protocols were assessed and approved by the Institutional Ethics Committee of Federal University of Rio de Janeiro (UFRJ) (number 041/18). All surgery was performed under anesthesia using a ketamine/xylazine mixture, and all efforts were made to minimize suffering.
Plasmodium falciparum erythrocytic cycle
The cultures of P. falciparum were performed as described [18-21]. P. falciparum (W2 strain) was kept in vitro in RPMI 1640 medium supplemented with 50 μg/mL gentamicin and 10% A+-type human plasma at 5% hematocrit in 25 cm² flasks. Parasite cultures were incubated in a gas-controlled atmosphere (5% CO2, 5% O2, and 90% N2) for at least 24 h. Parasitemia was evaluated daily in hematologically stained thin blood smears by optical microscopy and expressed as the percentage of infected erythrocytes.
Parasite culture synchronization
Parasite culture synchronization was carried out as published previously [18][19][20][21]. Briefly, the synchronization process involved treating infected erythrocytes with 5% D-sorbitol for 10 min to eliminate the cells infected with mature forms. After treatment, cells were washed and recultured to allow the formation of new schizonts. The effects of 1,8-cineole were tested in cultures containing 1% parasitemia. DMSO (at final concentrations <1%) was used as a vehicle for 1,8-cineole.
Treatment with 1,8-cineole and determination of the half maximal inhibitory concentration
The infected erythrocytes were treated with different concentrations of 1,8-cineole (65-6483 μM) for 24 h. After incubation, the percentage of parasite ring forms was determined in hematologic stained thin blood smears using optical microscopy. Then, the half maximal inhibitory concentration (IC 50 ) for the effect of 1,8-cineole was calculated by nonlinear regression analysis with the best fit of the experimental values using GraphPad Prism software. It was assumed that the dose-response curve has a standard slope, equal to a Hill slope of 1. Three independent experiments were performed in triplicate.
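As an illustrative sketch of this fitting step (the original analysis was performed in GraphPad Prism), the Python snippet below fits a dose-response curve with a fixed Hill slope of 1 to hypothetical parasitemia data and converts the resulting IC50 from μM to μg/mL using the molar mass of 1,8-cineole (C10H18O, ≈154.25 g/mol); the response values are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_curve(conc, top, bottom, ic50):
    """Dose-response curve with a fixed Hill slope of 1, as assumed in the text."""
    return bottom + (top - bottom) / (1.0 + conc / ic50)

# Hypothetical % ring values (relative to untreated control) at the tested concentrations (uM)
conc = np.array([65.0, 130.0, 324.0, 648.0, 1296.0, 3241.0, 6483.0])
response = np.array([98.0, 95.0, 80.0, 62.0, 45.0, 20.0, 2.0])

(top, bottom, ic50_um), _ = curve_fit(inhibition_curve, conc, response, p0=[100.0, 0.0, 1000.0])

# Conversion between uM and ug/mL via the molar mass of 1,8-cineole (~154.25 g/mol)
ic50_ug_ml = ic50_um * 154.25 / 1000.0
print(f"IC50 = {ic50_um:.0f} uM (~{ic50_ug_ml:.0f} ug/mL)")
```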
Hemolysis assay
The hemolysis assay was performed as described previously [18]. Non-infected erythrocytes were incubated or not with 1,8-cineole (65-3241 μM) for 24 h. DMSO at a final concentration of 0.5% was used as a vehicle for 1,8-cineole. Then, the cell supernatants were collected and clarified by centrifugation at 600 × g for 8 min to measure free hemoglobin spectrophotometrically at 530 nm. The results are expressed as a percentage of a positive control.
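A minimal sketch of how the haemolysis readout can be expressed as a percentage of the positive control is shown below; the A530 values are hypothetical and serve only to illustrate the calculation.

```python
def percent_of_positive_control(sample_abs: float, blank_abs: float, positive_abs: float) -> float:
    """Express free-haemoglobin absorbance (530 nm) as a percentage of a fully lysed positive control."""
    return 100.0 * (sample_abs - blank_abs) / (positive_abs - blank_abs)

# Hypothetical A530 readings: vehicle blank, 1,8-cineole-treated supernatant, 100% lysis control
print(f"haemolysis = {percent_of_positive_control(0.08, 0.05, 1.20):.1f} %")
```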
Analysis of the ultrastructure by transmission electron microscopy
Transmission electron microscopy was performed as published previously [18]. Briefly, infected erythrocytes (1% parasitemia, 5% hematocrit, enriched in mature forms) were treated or not with 972 μM 1,8-cineole for 48 h under the same culture conditions described for the P. falciparum erythrocytic cycle. Then, cultures were washed twice in 0.1 M PHEM buffer (30 mM PIPES, 10 mM HEPES, 5 mM EGTA, 2.5 mM MgCl 2 , 35 mM KCl [pH 7.2]) and fixed for 24 h in a mixture containing 2.5% glutaraldehyde, 4% sucrose, and 4% freshly prepared formaldehyde in 0.1 M PHEM buffer (pH 7.2). After washing in 0.1 M PHEM buffer, samples were post-fixed in 1% osmium tetroxide plus 0.8% potassium ferrocyanide in 0.1 M cacodylate buffer for 40 min, dehydrated in ethanol, and embedded in epoxide resin. Ultrathin sections (70 nm) were cut and stained for 20 min in 5% aqueous uranyl acetate and 5 min in lead citrate. Samples were observed in a Tecnai-Spirit transmission electron microscope (Thermo Scientific) operating at 120 kV. Images were obtained with a 2 k Veleta camera (Olympus).
Culture of brain microvascular endothelial cells
Brain microvascular endothelial cells (BMECs), an immortalized brain microvascular endothelial cell line of human origin, originally used as a model of the blood-brain barrier [21,22], were maintained in medium 199 (M199, Sigma-Aldrich) supplemented with 10% heat-inactivated fetal calf serum (Invitrogen, Carlsbad, CA, USA) and 1% penicillin/streptomycin (Sigma Chem Co, St Louis, MO, USA) at 37˚C in 5% CO 2 .
Adhesion assay
We used an adhesion assay to evaluate the ability of infected erythrocytes to adhere to endothelial cells. BMECs were plated in 24-well culture chambers (5 × 10⁴ cells/well) and cultured for 24 h in M199 medium supplemented with 10% fetal calf serum (pH 7.4). Non-infected RBCs or P. falciparum-infected erythrocytes (iRBC, 4 × 10⁵ cells/well, 5% parasitemia) were incubated or not with 972 μM 1,8-cineole for 2 h before coculture with BMEC for an additional 2 h under the same culture conditions as for BMECs. Non-adherent erythrocytes were washed out with phosphate-buffered saline, and the adhered cells were fixed and stained with hematologic staining. The number of adhered erythrocytes per BMEC was determined by direct counting using optical microscopy (10 fields/well). The data obtained were used to calculate the adhesion index: adhesion index = {[(BMECs with bound erythrocytes)/total number of BMECs] × [(erythrocytes bound to BMECs)/total number of BMECs]} × 100.
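The adhesion index defined above is straightforward to compute; the sketch below uses hypothetical counts, chosen only so that the treated condition illustrates a roughly 60% reduction similar in size to the effect reported in the results.

```python
def adhesion_index(bmec_with_bound: int, total_bmec: int, bound_erythrocytes: int) -> float:
    """Adhesion index as defined in the text:
    (fraction of BMECs with at least one bound erythrocyte) x (bound erythrocytes per BMEC) x 100."""
    return (bmec_with_bound / total_bmec) * (bound_erythrocytes / total_bmec) * 100.0

# Hypothetical counts pooled over 10 microscopic fields in one well
untreated = adhesion_index(bmec_with_bound=120, total_bmec=300, bound_erythrocytes=450)
treated = adhesion_index(bmec_with_bound=75, total_bmec=300, bound_erythrocytes=288)
print(f"untreated iRBC: {untreated:.1f}  +1,8-cineole: {treated:.1f} "
      f"({100 * (1 - treated / untreated):.0f}% reduction)")
```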
Measurement of lactate dehydrogenase
Lactate dehydrogenase (LDH) activity in the cell supernatant of BMECs was detected as described previously [18]. BMECs were incubated or not with different concentrations of 1,8-cineole (ranging from 324 to 3241 μM). Then, the cell supernatant was collected and LDH activity was measured by a kinetic-UV (pyruvate-lactate) method using a commercial kit purchased from Labtest. The results are expressed as the percentage of a positive control (LDH activity in the lysate of BMECs obtained by incubation of cells with 1% Triton X-100).
Animals and experimental cerebral malaria
Male C57BL/6 mice (8-12 weeks) were provided by the Animal Care Facility of the Health Science Center of the UFRJ. The animals were kept in cages in a temperature-controlled room (22˚C-24˚C) with a 12 h light/dark cycle, with access to food and water ad libitum. ECM was induced as published previously [18,23-27]. Briefly, malaria infection was induced by intraperitoneal injection of 1 × 10⁶ P. berghei ANKA (PbA)-infected erythrocytes in normal C57BL/6 mice. Peripheral blood parasitemia was determined daily in hematologically stained thin blood smears using optical microscopy. The results are expressed as the percentage of infected cells. Mice were divided into 5 groups: (1) control, non-infected mice; (2) PbA-infected mice; (3) non-infected mice treated with 1,8-cineole; (4) PbA-infected mice treated with 1,8-cineole; and (5) PbA-infected mice treated with artesunate. The treatment with 1,8-cineole (100 mg/kg/day) or artesunate (10 mg/kg/day) started on the day of infection, with daily doses administered via intraperitoneal injection for 6 consecutive days. Peripheral blood parasitemia was assessed on days 3, 4, 5, and 6 post-infection, and cerebral edema was assessed at the end of the experiment.
Cerebral edema
The evaluation of cerebral edema was performed using the Evans blue extravasation assay as described previously [18,25]. Briefly, mice received an intravenous injection of 1% Evans blue dye solution. After 1 h, the mice were euthanized and their brain was removed, weighed, and incubated in 2 mL of formamide (37˚C, 48 h) to extract the dye. Absorbance of the supernatant was measured at 620 nm. The concentration of dye was determined using a standard curve. The data are expressed as milligrams of dye normalized per gram of tissue.
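As a worked illustration of this quantification (hypothetical standard curve and readings, not the original data), the sketch below converts an A620 reading into mg of Evans blue per g of brain tissue using a linear standard curve and the 2 mL formamide extraction volume described above.

```python
import numpy as np

# Hypothetical Evans blue standard curve in formamide: concentration (ug/mL) vs A620
std_conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0, 20.0])
std_abs = np.array([0.00, 0.06, 0.12, 0.24, 0.49, 0.98])
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def evans_blue_mg_per_g(a620: float, extract_volume_ml: float, brain_weight_g: float) -> float:
    """Dye content normalised to tissue weight (mg Evans blue per g brain)."""
    conc_ug_ml = (a620 - intercept) / slope      # concentration from the linear standard curve
    total_ug = conc_ug_ml * extract_volume_ml    # dye extracted into the formamide volume
    return total_ug / 1000.0 / brain_weight_g    # ug -> mg, normalised per g tissue

# Hypothetical reading for one brain extracted in 2 mL formamide
print(f"{evans_blue_mg_per_g(a620=0.35, extract_volume_ml=2.0, brain_weight_g=0.42):.3f} mg/g")
```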
Statistical analysis
All results are expressed as the mean ± standard deviation (SD) of at least 3 independent experiments. GraphPad Prism 8 was used for the statistical analysis. Comparison of the different experimental groups was determined by one-way analysis of variance (ANOVA), followed by the Tukey post-test. Significance was determined as a P value <0.05.
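For readers who prefer an open-source route, an equivalent analysis can be sketched in Python as below; GraphPad Prism was used for the actual analysis, and the group values here are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical peripheral parasitemia (%) on day 6 post-infection for three groups (n = 5 each)
pba = np.array([11.5, 12.3, 11.9, 12.8, 11.2])
pba_cineole = np.array([6.8, 7.2, 6.1, 7.5, 6.4])
pba_artesunate = np.array([0.1, 0.2, 0.1, 0.3, 0.1])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(pba, pba_cineole, pba_artesunate)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

# Tukey post-test for pairwise comparisons
values = np.concatenate([pba, pba_cineole, pba_artesunate])
groups = ["PbA"] * 5 + ["PbA+cineole"] * 5 + ["PbA+artesunate"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```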
1,8-cineole reduces Plasmodium falciparum parasitemia in vitro
In the first experimental group, we evaluated the effect of the monoterpene 1,8-cineole on the erythrocytic cycle of P. falciparum. For this, infected erythrocytes enriched in the schizont form (1% parasitemia, 3%-5% hematocrit) were incubated with increasing concentrations of 1,8-cineole from 65 to 6483 μM (corresponding to 10-1000 μg/mL), and parasitemia, expressed as the percentage of rings (% ring), was assessed after 24 h. This compound decreased the % ring in a dose-dependent manner with an IC50 of 1045.53 ± 63.30 μM (or 150 μg/mL), reaching 100% inhibition at 6483 μM (Fig 1A and 1B). The decrease in parasitemia was not due to any hemolytic effect, because the level of hemoglobin detected in the supernatant of healthy erythrocytes at all concentrations of 1,8-cineole tested was less than 10% and similar to that of the controls (RPMI and the vehicle DMSO) (S1 Fig).
To determine whether the inhibitory effect of the monoterpene was observed using different levels of parasitemia, we tested the effect of 1,8-cineole at the IC 50 concentration (972 μM) in cultures with parasitemia ranging from 0.5% to 3%. We detected 40% inhibition regardless of the level of parasitemia ( Fig 1C).
1,8-cineole treatment irreversibly affects intracellular development of P. falciparum
We observed a decrease in the percentage of parasitemia in the presence of 1,8-cineole, therefore we decided to characterize whether the monoterpene affects the intracellular development of the parasite. First, we aimed to identify which parasite form was more susceptible to the treatment. For this, synchronized cultures of P. falciparum enriched in schizonts, rings, or trophozoites (1% parasitemia) were treated or not with 972 μM 1,8-cineole, and parasitemia was determined right after transition to the next parasite form (Fig 2A). Under the experimental conditions used, parasitemia levels were not significantly changed when rings or trophozoites were treated with the compound. However, the treatment of schizonts produced 40% inhibition, characterizing a possible schizonticide effect of 1,8-cineole ( Fig 2B).
In the next step, 1,8-cineole was added to non-synchronized cultures of P. falciparum (1% parasitemia) daily for 4 consecutive days to evaluate intracellular development of the parasite during 2 parasite cycles ( Fig 3A). Parasite growth proceeded normally under the control conditions, achieving 7% parasitemia after 96 h, but in the presence of 972 μM 1,8-cineole, the number of rings remained at low levels, around 1.5%, throughout the experiment (Fig 3B). Optical microscopy revealed important morphological changes, especially in mature trophozoites and schizonts (Fig 3C). Under the control situation, parasites progressed to mature forms at 48 h in culture (Fig 3C), whereas with 1,8-cineole treatment, at the same time point, schizonts were strikingly reduced in size, suggesting a delay in the intraerythrocytic progression of the parasite; this became more evident in a second development cycle. After 96 h, parasites appear smaller in size, with less formation of hemozoin ( Fig 3C). Moreover, these morphological changes are associated with reduction in the percentage of parasitemia. Because ring formation is reduced, this has an impact on the progression of the parasite cycle ( Fig 3D).
Electron microscopy analysis of cells incubated with 1,8-cineol showed several structural changes compared with control cells (Fig 4). Control trophozoites (Fig 4A-4D) exhibited a normal aspect, showing a well-preserved and elaborated endomembrane system, including the endoplasmic reticulum network spread throughout the cell cytoplasm ( Fig 4A, white rectangle; Fig 4B) and large digestive vacuoles filled with hemozoin crystals (Fig 4A, 4C, and 4D, white arrows). In contrast, parasites incubated with 972 μM 1,8-cineole (Fig 4E-4H) did not show the characteristic endoplasmic reticulum network (Fig 4E, white rectangle; Fig 4F) or hemozoin crystals (Fig 4E, 4G, and 4H), suggesting a massive loss of membrane integrity that may lead to cell death. Empty vacuoles, most likely residual food vacuoles (Fig 4E, 4G, and 4H, asterisks) were also observed in treated cells, in some cases occupying a large area of the cytoplasm.
To evaluate whether 1,8-cineole-induced inhibition of ring formation was a reversible process, schizont cultures (1% parasitemia) were treated with 972 or 3241 μM 1,8-cineole for 24 h. After incubation, cells were washed to remove the compound and recultured under normal culture conditions, with parasitemia adjusted to 0.5% in both groups, for an additional 72 h (Fig 5A). In the first 24 h of treatment, before washing the cells, treatment with 972 μM 1,8-cineole reduced ring formation by about 40% as expected. A more pronounced inhibition was observed when cultures were treated with 3241 μM 1,8-cineole, achieving 85% inhibition (Fig 5B). After washing out the compound, and under the control conditions, parasite differentiation and growth were observed. In the presence of 1,8-cineole, the inhibitory profile was sustained at least for the additional 72 h incubation for both concentrations used, suggesting irreversibility of the effect of 1,8-cineole (Fig 5C).
1,8-Cineole reduces adhesion of infected erythrocytes to brain microvascular endothelial cells
The phenomenon of sequestration observed in falciparum malaria is characterized by the adhesion of Plasmodium-infected erythrocytes to endothelial cells lining the microvasculature of different tissues [28]. This phenomenon is a consequence of intracellular development of the parasite, and it is implicated in the pathogenesis of severe disease [28]. In the next experimental group, we verified whether the modifications induced by 1,8-cineole in infected erythrocytes could have an impact on their adhesion to endothelial cells. To test this hypothesis, P. falciparum-infected erythrocytes were pre-treated with 972 μM 1,8-cineole for 2 h, and then cocultured with a monolayer of BMECs for an additional 2 h. The cells were then washed to remove unbound cells before determining the adhesion index as described in the Materials and methods section ( Fig 6A). As expected, infected erythrocytes had a great ability to bind to BMECs compared with non-infected cells. The treatment with the monoterpene reduced the adhesion index by 60% (Fig 6B and 6C). The decrease in the adhesion index was not due to the loss of viability of the endothelial cells, because LDH activity in the supernatant of cultures treated with 1,8-cineole was around 10% (S2 Fig). These results reinforce the suggestion that 1,8-cineole is a prominent antimalarial candidate.
1,8-cineole attenuates severe disease in a model of experimental cerebral malaria
In the next experimental group, we used ECM to evaluate whether the inhibitory effect of 1,8-cineole on the erythrocytic cycle of the parasite is reproduced in vivo, consequently promoting a protective effect. C57BL/6 mice infected with PbA were treated or not with 100 mg/kg 1,8-cineole or 10 mg/kg artesunate (intraperitoneally) daily for 6 consecutive days. Under these experimental conditions, without treatment, peripheral blood parasitemia reached almost 12% at day 6 post-infection and the mice developed cerebral edema, as revealed by the Evans blue extravasation assay (Fig 7A and 7B). Treatment with 1,8-cineole partially reduced parasitemia over time, achieving 45% inhibition, while treatment with artesunate completely abolished it (Fig 7A). Moreover, 1,8-cineole treatment partially prevented the development of cerebral edema, while artesunate treatment completely prevented it (Fig 7B). It is worth mentioning that treatment of non-infected mice with 1,8-cineole did not induce cerebral edema.
Discussion
In this study, we characterized the schizonticide effect of the monoterpene 1,8-cineole using the erythrocytic cycle of P. falciparum as well as an ECM model by infecting susceptible C57BL6 mice with PbA. Our data suggest that 1,8-cineole has a potential antimalarial effect based on the following: (1) it impairs the erythrocytic cycle and the intracellular development of the parasite; (2) it inhibits adhesion of infected erythrocytes to BMECs; and (3) it protects against the development of cerebral edema. These results demonstrate the activity of 1,8-cineole against Plasmodium sp., suggesting its use in therapeutic strategies to treat severe disease.
In a dose-dependent experiment, we demonstrated that 1,8-cineole decreased parasitemia according to increasing concentrations of the monoterpene, with an IC 50 of 1045.53 ± 63.30 μM (or 150 μg/mL). Su et al. [15] reported a similar IC 50 value against the same strain of P. falciparum as used in our work. Using headspace gas chromatography to assess the real concentration of cineole found in the culture medium, the group demonstrated that the IC 50 was more than 80% less than the calculated value, achieving a concentration that is unlikely to be toxic to the host, which makes it suitable for drug development [15]. We did not detect any hemolytic activity even in the presence of high concentrations of 1,8-cineole or toxic effects on other nucleated cells, because low levels of LDH were found in the supernatant of BMECs incubated with increasing concentrations of 1,8-cineole. In agreement, the low toxicity of 1,8-cineole has also been attested by other groups [29].
As an isolated compound, 1,8-cineole has been proposed to have antibacterial, antiviral, and antifungal activities [30][31][32], but studies on its effect against parasites including Plasmodium sp. are rare in the literature. Arrest of the growth of chloroquine-sensitive and chloroquine-resistant P. falciparum has been observed [15]. Here, we demonstrated that the monoterpene has an inhibitory effect not only on parasitemia but also on intracellular development of the parasite. The inhibitory effect of 1,8-cineole in reducing parasitemia is sustained even after washing out the monoterpene. This result could be explained in terms of the lipophilicity of the compound, perpetuating its repressive effect over time. In agreement with our findings, essential oils containing 1,8-cineole have been shown to inhibit [ 3 H]hypoxanthine uptake by P. falciparum, which reflects the inhibition of parasite growth [33].
A more prominent inhibitory effect of 1,8-cineole was observed when mature forms were treated with the monoterpene, characterizing a possible schizonticide effect and consequently inhibition of early ring formation, similar to other well-known antimalarials [34,35]. At this time point, the morphology modification revealed by optical microscopy was also relevant; the parasite was reduced in size with apparent loss of the malaria pigment, hemozoin. Ultrastructural analysis of trophozoites treated with 1,8-cineole confirmed derangement of internal membranes and the absence of crystals of hemozoin. During development, Plasmodium sp. degrade internalized hemoglobin, and the toxic free heme is immobilized into hemozoin as a mechanism to avoid cellular damage [36,37]. We could not observe any crystal formation in the presence of 1,8-cineole, therefore it is possible to imagine that this mechanism to detoxify free heme is impaired by treatment with the monoterpene. However more experiments are necessary to confirm this hypothesis.
In humans, falciparum-associated pathologies seem to be dependent on parasite sequestration, due to the adhesion of infected erythrocytes to the endothelium [28]. Using an in vitro model of co-incubation, we observed that 1,8-cineole strikingly reduced the adhesion index of infected erythrocytes to BMEC monolayers. Classically, infected erythrocyte sequestration depends on the recognition of adhesion molecules in the endothelium, such as ICAM-1, and ligands in the infected erythrocytes, such as PfEMP-1. Endothelial adhesion molecules are upregulated during malaria infection and participate not only in parasite sequestration but also in accumulation and recruitment of leukocytes [38], representing a key element in disease pathogenesis. Thus, the decrease in the expression or blockage of such molecules could attenuate susceptibility or severity of the disease. Accordingly, in a murine model of H1N1 infection, treatment with 1,8-cineole inhibited the upregulation of ICAM-1 and VCAM-1 induced by infection, corroborating the antiinflammatory effect of the compound [39]. Another attractive explanation for the reduction in the adhesion index is impairment of PfEMP1 expression in the surface of infected erythrocytes. This phenomenon could be a result of the direct effect of 1,8-cineol on the erythrocytic cycle of the parasite. However, more experiments are necessary to confirm these hypotheses.
The molecular mechanisms involved in the development of human disease remain to be fully determined, but it is well known that the overall process depends on parasite factors as well as host responses [28,40]. Interaction among infected erythrocytes, leukocytes, and endothelial cells induces upregulation of proinflammatory cytokines, which leads to activation of brain endothelium [28,40]. The host immune response and mechanical occlusion culminate in disruption of cerebral blood flow and ultimately lead to dysfunction of the blood-brain barrier, and consequently to hemorrhagic lesions and brain edema [28,40]. Although the pathology of brain disease in mice seems to be different from that in humans, the use of the murine model of cerebral malaria is well established, and it is considered a valuable tool to study the human disease [41][42][43][44]. Using the ECM model, we observed that 1,8-cineole, administered daily right after infection, had anti-plasmodial activity in vivo by decreasing parasitemia consistently until day 6 post-infection. Moreover, at this time point, we observed that the treatment also attenuated the formation of edema. Ramazani et al. [45] demonstrated that extracts of 2 species of Artemisia had antiplasmodial activity in BALB/c mice infected with P. berghei. In their work, the authors could not detect artemisinin in all plants, which suggests the effect of any essential oil constituent present in high amounts in their extracts.
How effectively can 1,8-cineole control the development of cerebral malaria compared with established therapy? Here, we showed that 1,8-cineole attenuated both parasitemia and brain edema. In contrast, artesunate controlled parasitemia completely, culminating in the absence of brain edema. These observations agree with studies showing that treatment of infected mice with artesunate (started either at infection or after parasitemia had increased), via different administration routes, promotes rapid depletion of parasites and effectively attenuates brain inflammation by decreasing leukocyte recruitment [46-48]. Whether a combination of 1,8-cineole and artesunate could cooperate to rescue brain tissue and cognitive function remains an open question; further experiments will clarify this issue.
The bioavailability of 1,8-cineole is an important factor. McLean et al. [49] studied the pharmacokinetics of 1,8-cineole in Trichosurus vulpecula. The authors showed that intravenously administered 1,8-cineole was widely distributed, suggesting that the terpene is extensively taken up by tissues; moreover, about 40% of the cineole was eliminated during the distribution phase. In contrast, when the terpene was administered orally, bioavailability was low (at low doses, including 100 mg/kg) owing to extensive first-pass metabolism. Intravenous infusion of cineole also induced depression of the central nervous system, whereas this undesired effect was not observed at the lower blood concentrations produced by oral dosing. In the present study, we administered 1,8-cineole intraperitoneally, following the same route used by Murata et al. [50], and observed a protective effect against the cerebral edema caused by malaria infection. In our experiments, 1,8-cineole alone did not change the extravasation of Evans blue dye compared with control mice, suggesting that the terpene administered by this route does not interfere with vascular permeability. However, further experiments are necessary to evaluate the relationship between the administration route and the final effects; the optimal route will be the one that offers the best balance between therapeutic and adverse effects.
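The bioavailability argument above rests on the standard pharmacokinetic definition of absolute oral bioavailability, F = (AUC_oral/Dose_oral) / (AUC_iv/Dose_iv). The sketch below computes F from concentration-time data using the linear trapezoidal rule; the concentration values and doses are hypothetical and are included only to illustrate how a low F, consistent with extensive first-pass metabolism, would emerge from such data.

```python
def auc_trapezoid(times_h, conc):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(zip(times_h, conc),
                                             zip(times_h[1:], conc[1:])))


def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = (AUC_oral / Dose_oral) / (AUC_iv / Dose_iv) -- standard definition."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)


# Hypothetical plasma concentrations (mg/L) at the sampling times (h).
t = [0.25, 0.5, 1, 2, 4, 8]
iv_conc = [12.0, 9.5, 6.8, 3.9, 1.4, 0.3]    # after a 30 mg/kg i.v. dose
oral_conc = [0.8, 1.6, 2.1, 1.5, 0.6, 0.1]   # after a 100 mg/kg oral dose

F = absolute_bioavailability(auc_trapezoid(t, oral_conc), 100,
                             auc_trapezoid(t, iv_conc), 30)
print(f"F = {F:.2f}")  # a small value, indicating low oral bioavailability
```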
The treatment currently recommended by the World Health Organization is artemisinin-based combination therapy (ACT), in which artemisinin analogs are combined with other drugs in an attempt to avoid resistance [5]. However, resistance of Plasmodium sp. to antimalarial drugs and of malaria vectors to insecticides makes disease control and elimination extremely difficult [5,6]. For this reason, a reappraisal of current therapies is indicated, and the discovery of new antimalarial agents is urgently needed. Our results bring new perspectives to the development of innovative therapies to halt malaria.
Supporting information

S1 Fig. 1,8-Cineole did not induce hemolytic activity. Non-infected erythrocytes (50% hematocrit) were incubated with different concentrations of 1,8-cineole (ranging from 65 to 6483 μM) or 0.5% DMSO (used as vehicle) for 24 h. Hemolytic activity was assessed by measuring free hemoglobin in the cell supernatant as described in the Materials and methods section (n = 7). The results are presented as the mean ± SD. n.s., not significant.
(TIF)

S2 Fig. Treatment with 1,8-cineole did not change the viability of brain microvascular endothelial cells. BMEC monolayers were incubated with different concentrations of 1,8-cineole (ranging from 324 μM to 3241 μM) for 24 h at 37 ˚C in 5% CO2. The cell supernatant was assayed for LDH activity to verify cell viability. Activity was determined as the percentage of a control prepared by adding 1% Triton X-100 to the monolayer (n = 4). The results are presented as the mean ± SD. n.s., not significant.
(TIF)
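For clarity on how the two supporting assays are typically normalized, the sketch below expresses hemolysis relative to a full-lysis control and LDH release relative to the 1% Triton X-100 control mentioned in S2 Fig. The S1 Fig legend states only that free hemoglobin in the supernatant was measured, so the hemolysis normalization shown here (and all variable names) are illustrative assumptions rather than the paper's exact formulas.

```python
def percent_hemolysis(abs_sample: float, abs_vehicle: float, abs_full_lysis: float) -> float:
    """Free-hemoglobin absorbance expressed relative to complete lysis.

    abs_vehicle    -- supernatant absorbance of the 0.5% DMSO control
    abs_full_lysis -- absorbance after complete lysis (e.g. water or detergent)
    This normalization scheme is an assumption for illustration only.
    """
    return 100.0 * (abs_sample - abs_vehicle) / (abs_full_lysis - abs_vehicle)


def percent_cytotoxicity(ldh_sample: float, ldh_triton: float) -> float:
    """LDH release as a percentage of the 1% Triton X-100 full-lysis control."""
    return 100.0 * ldh_sample / ldh_triton


# Hypothetical absorbance / activity readings for illustration only.
print(percent_hemolysis(abs_sample=0.07, abs_vehicle=0.05, abs_full_lysis=1.20))  # ~1.7%
print(percent_cytotoxicity(ldh_sample=0.12, ldh_triton=1.00))                      # 12%
```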