Lociq provides a loci-seeking approach for enhanced plasmid subtyping and structural characterization
Antimicrobial resistance (AMR) monitoring for public health is relying more on whole genome sequencing to characterize and compare resistant strains. This requires new approaches to describe and track AMR that take full advantage of the detailed data provided by genomic technologies. The plasmid-mediated transfer of AMR genes is a primary concern for AMR monitoring because plasmid rearrangement events can integrate new AMR genes into the plasmid backbone or promote hybridization of multiple plasmids. To better monitor plasmid evolution and dissemination, we developed the Lociq subtyping method to classify plasmids by variations in the sequence and arrangement of core plasmid genetic elements. Subtyping with Lociq provides an alpha-numeric nomenclature that can be used to denominate plasmid population diversity and characterize the relevant features of individual plasmids. Here we demonstrate how Lociq generates typing schema to track and characterize the origin, evolution and epidemiology of multidrug resistant plasmids.
Plasmid-mediated antimicrobial resistance (AMR) allows bacteria to resist exposure to every major class of antibiotics. Transferrable plasmids can disseminate AMR genes between bacterial genera and facilitate the spread of antimicrobial-resistant pathogens among host species 1 . The National Antimicrobial Resistance Monitoring System (NARMS) recognizes plasmid-mediated AMR as a key threat to human, companion animal and food animal health. However, plasmids that encode for AMR genes are prone to genetic recombination events 2 . This capacity for genetic remodeling contributes to the great sequence diversity seen among plasmids and confounds current efforts to track and characterize these clinically relevant molecules [3][4][5][6][7][8] .
The most common plasmid typing method categorizes plasmids by a single conserved region on the plasmid replicon 9 . This method, plasmid incompatibility group typing (Inc typing), uses a PCR based replicon typing approach and emerged from research on the effects of plasmid replicon pairs and plasmid replication efficiency 10 . Plasmid combinations that result in decreased replication efficiency when concurrently occupying the same cell are classified within the same incompatibility group. This criterion was well-suited for analysis with contemporaneous molecular or in silico methods because it only required identification of a single target 11 . However, this reliance on a single genetic target does not address the great sequence diversity present among plasmids within a single Inc group and often does not detect hybrid plasmids 12 . This shortfall of using a single target for plasmid typing is apparent when the plasmid contains multiple replicon sequences 13,14 .
A complementary method to plasmid Inc typing known as MOB typing categorizes plasmids by the sequence of their relaxase protein 15,16 . The relaxase protein is an essential component in mobilizable plasmids that binds to the plasmid origin of transfer, introduces a single stranded nick and facilitates the transfer of the single plasmid strand to the bacterial plasmid secretion system 17 . Relaxase proteins have been phylogenetically grouped into six MOB families and plasmids are assigned to a MOB group based on the relaxase protein sequence 16 . Unfortunately, MOB typing methods are limited in their ability to categorize non-mobilizable plasmids and like Inc typing are based on only a single target.
One promising typing approach classifies plasmids by their average nucleotide identity 18 . This approach has a notable advantage over other typing schema because it uses the entire plasmid sequence to identify plasmid taxonomic units (PTUs) instead of using a single target. The PTU method identifies conserved taxonomic units using a sequence-length dependent comparison between plasmids. One of the main advantages of this method is that it classifies plasmids independent from any predicted phenotypic trait or function. This sequence-based approach has shown strong associations between PTU group and bacterial host specificity 19 . However, this approach does have limitations. First, because this method makes length-based comparisons between plasmids it is possible to miss regions of sequence similarity in smaller plasmids when they fall below the method's cutoff threshold. Second, the naming schema of the PTU system is independent of other typing systems, complicating comparisons to historical plasmid data. Finally, similar to other average nucleotide identity clustering methods, this method does not take into account variations in the plasmid structure resulting from recombination events. These limitations hinder the ability to make detailed comparisons between plasmids using the PTU designation alone.
Plasmid multilocus sequence typing (PMLST) addresses some of the challenges of Inc, MOB and average nucleotide identity typing methods. Schema that contain more than one target for plasmid typing are able to account for a greater degree of sequence diversity within a plasmid type 3 . Unlike the MOB methods, PMLST is able to categorize non-mobilizable plasmids as well. PMLST methods are compatible with existing plasmid typing nomenclature and the typing loci are defined sequences that can be used in downstream analysis. The IncA/C 3 , IncF 4 , IncHI 6 , IncH2 5 , IncI1 7 and IncN 8 PMLST schema contain 2-6 typing loci each and have contributed greatly to the understanding of plasmid sequence diversity. The IncA/C PMLST schema is used to differentiate the plasmids of the IncC plasmid type. The IncC plasmids are commonly associated with the carriage of clinically-relevant antimicrobial resistance genes and contribute to the spread of the multi-drug resistant phenotype 20 . Core genome plasmid multilocus sequence typing (cgPMLST) expands on PMLST methods further by identifying the genes essential for plasmid maintenance and using them as sequence typing targets 3 . This method has been applied to IncA/C plasmids to increase the number of typing loci to 28. However, while more targets are used for PMLST-based plasmid classification, they only represent a small percentage of the entire plasmid sequence and provide little information on structural differences between plasmids.
One factor that has hindered the progress of sequence-based plasmid typing systems is the difficulty of assembling plasmids from short read sequencing data 21 . However, as long-read sequencing technologies become more accessible, more closed plasmid assemblies are available to researchers. Closed plasmid assemblies offer two main advantages over gapped, or draft, plasmid assemblies. First, closed plasmid assemblies account for every nucleotide on the plasmid molecule. This provides a full accounting of all the coding and intergenic regions on the plasmid. The second advantage closed assemblies provide is the ability to determine which sequences are missing from a plasmid. For comparison, draft assemblies do not contain the entire plasmid sequence and cannot be used to determine if a given sequence is missing. Finally, closed assemblies can be used to identify the relative position of any genetic element on the plasmid. This attribute is useful in epidemiological operations such as antimicrobial resistance monitoring where the proximity of an AMR gene to an insertion sequence or transposon can help assess the risk of gene transfer.
These three attributes of closed plasmid assemblies are ideal factors for plasmid typing. First, the ability to account for every nucleotide on the plasmid increases the likelihood of identifying common sequences shared among different plasmids. Second, the ability to equate absence of sequence in an assembly to absence of sequence in the cognate plasmid allows for plasmid classification methods based on the presence or absence of genetic elements. Finally, analyzing the relative position of each genetic element on the plasmid can account for differences in the plasmid structure resulting from plasmid recombination and insertion events.
Here we present a plasmid subtyping method that uses closed plasmid assemblies to identify the conserved sequences and patterns of loci found among plasmids of a given plasmid type. In this paper, we propose to subtype plasmids of the IncC plasmid type as a demonstration of the Lociq method. We chose the IncC plasmids, not only because of their role in the transmission of AMR genes, but also because we can compare results of the Lociq method to the PMLST and cgPMLST profiles of this well-characterized plasmid type. By identifying these conserved genetic elements and patterns, we aim to develop a scalable approach to plasmid classification that allows the user to first identify large families of plasmids and then apply additional typing criteria to differentiate between individual plasmids. The purpose of this paper is to introduce the plasmid subtyping method, demonstrate its ability to subtype IncC plasmids, compare it to existing plasmid typing methods, and show how the results of the subtyping method can be used to facilitate research in plasmid biology, which has the potential to enhance pathogen surveillance for public health.
Results
We demonstrated the utility of the Lociq plasmid typing method by performing an analysis of closed plasmid assemblies and generating subtyping definitions for the IncC plasmids. Identification of the typing loci was performed by using the Roary and piggy programs to define the pangenome of 459 closed plasmid sequences 22,23 . Prevalence thresholds were used to determine which pangenomic loci were indicative of and exclusive to a given plasmid type. The candidate typing loci were then validated against an external database (Fig. 1). We compared the Lociq typing method results to Inc, MOB and PTU typing methods, as well as PMLST and cgPMLST subtyping methods. Finally, we demonstrated how the Lociq method organizes the results to facilitate downstream analyses.
Plasmid subtyping method. The full dataset of Salmonella and E. coli isolates contained 459 closed plasmid assemblies and 46 plasmid Inc types. These 46 plasmid types were represented by 398 plasmids and the remaining 61 plasmids did not belong to any plasmid Inc group. The combined pangenome for all 459 plasmids contained 6726 unique coding and intergenic regions, as generated by the Roary and piggy programs. These 6726 genetic elements were the library of plasmid loci found among our plasmids. The pangenome was analyzed as a binary presence/absence matrix in R where plasmids were grouped by the similarity of their loci profiles, accounting for both the coding and intergenic regions. This grouping was performed first by computing a distance matrix of the binary matrix data, then clustering with the hclust function using complete linkage. The resulting presence/absence matrix was used for downstream subtyping of the Inc group plasmid typing schema (Fig. 2).
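The clustering step described above can be sketched in a few lines of base R. This is a minimal, hedged illustration: the object names, input file and the choice of a binary distance are assumptions for demonstration, not the exact Lociq implementation.

```r
## Minimal sketch of the loci-profile clustering, assuming `pa` is a plasmid-by-locus
## 0/1 presence/absence matrix (plasmids as rows, the 6726 loci as columns).
## File name, distance metric and cluster count are illustrative assumptions.

pa <- as.matrix(read.csv("pangenome_presence_absence.csv", row.names = 1))

d  <- dist(pa, method = "binary")      # pairwise dissimilarity between loci profiles
hc <- hclust(d, method = "complete")   # complete-linkage hierarchical clustering, as in the text

plot(hc, labels = FALSE, main = "Plasmids clustered by loci profile")
groups <- cutree(hc, k = 46)           # e.g., cut the tree into one group per Inc type of interest
```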
Next, we identified the IncC cluster on the presence/absence matrix and selected the loci indicative of and exclusive to IncC plasmids. Identification of the IncC plasmids revealed that the loci composition of IncC plasmids is not uniform and only a subset of loci is shared among the IncC plasmids (Fig. 2). We then identified the loci indicative of and selective for IncC plasmids by comparing the prevalence of each of the 6726 loci among IncC plasmids to their prevalence in non-IncC plasmids. Seventy-five loci were present in >90% of the IncC plasmids and fewer than 10% of the non-IncC plasmids (Supplementary Fig. 1). This initial set of IncC typing loci contained 59 coding and 16 intergenic regions.
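The prevalence filter described here reduces to a per-column comparison between the IncC and non-IncC rows of the presence/absence matrix. The sketch below assumes the `pa` matrix from the previous example and a logical vector `is_incc`; both names are illustrative.

```r
## Hedged sketch of the prevalence-threshold filter used to nominate typing loci.
## `is_incc` marks which rows of `pa` correspond to IncC plasmids (assumed input).

prev_incc <- colMeans(pa[is_incc, , drop = FALSE])    # locus prevalence among IncC plasmids
prev_non  <- colMeans(pa[!is_incc, , drop = FALSE])   # locus prevalence among non-IncC plasmids

candidate_loci <- colnames(pa)[prev_incc > 0.90 & prev_non < 0.10]
length(candidate_loci)   # 75 loci in the IncC demonstration described above
```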
Following the initial identification of typing loci, we evaluated the prevalence of the loci against plasmids in an external database. The purpose of this analysis was to reduce the bias in loci selection that may be introduced if the initial dataset was not representative of the broader plasmid population. For example, in this demonstration, all plasmids were harvested from Salmonella and E. coli strains that were isolated from retail meats or food animal cecal samples. The plasmid data set did not contain plasmids harvested from other genera of bacteria and none of the bacteria were isolated from clinical or environmental sources. To address this, we evaluated the prevalence of the 75 loci among the 34,513 plasmids of the PLSDB v.2021_06_23_v2 database 24 . We compared the prevalence of typing loci between IncC and non-IncC plasmids in the database. Seventy-two of the seventy-five IncC typing loci met the two criteria of being present in > 90% of the plasmids that contained at least one typing locus and being present in < 1% of plasmids without a typing locus in the PLSDB database. The resulting complement of 72 IncC typing loci accounted for 40,091 bp and contained 58 coding regions and 14 intergenic regions. Further, a 90% prevalence of loci threshold was sufficient to identify all 534 IncC plasmids in the PLSDB database.
In the next stage of this plasmid subtyping demonstration, we identified the patterns of contiguous plasmid loci that were conserved among the IncC plasmids in the PLSDB database. These conserved contiguous regions were identified as fragments of the plasmid backbone.

In the final stage of our demonstration of IncC plasmid analysis with the Lociq method, we used the sequence and position of the typing loci to characterize all plasmids from the external database that contained at least 1 IncC typing locus (Supplementary Fig. 2). Plasmid characterization was performed by assigning a numeric identifier to each unique pattern of sequence type, fragment type and loci type (Fig. 4). The plasmid sequence type was defined by the complement of plasmid alleles in the plasmid, regardless of their position. The plasmid fragment type was determined by how the plasmid fragments were ordered along the plasmid, relative to a semi-conservative starting locus. The plasmid loci type was determined by rearranging the plasmid fragments in ascending order of their numeric identifier and recovering the arrangement of loci from the re-ordered plasmid fragments. This temporary rearrangement of plasmid fragments for loci typing allows the loci type to be independent of the fragment type.
In addition to the 534 IncC plasmids in the database of 34,513 plasmids, the analysis identified 31 IncC hybrid plasmids that contained at least 1 of the IncC typing loci. The 534 IncC plasmids were then subdivided into unique patterns of 52 fragment types, 260 loci types and 388 sequence types. There were 397 unique combinations of fragment type, loci type and sequence type represented among the 534 IncC plasmids (Supplementary Data 4). Further, the addition of the interfragment distance values to the subtyping criteria increased the number of unique combinations to 515. As a result, the 534 IncC plasmids could be divided into 515 unique combinations of fragment type, loci type, sequence type and interfragment distances.
The Lociq plasmid subtyping method includes features for analysis of the results. First, the results can be evaluated in a web browser using an R-shiny application 25 . This application allows the user to compare plasmids by generating a graphical map (Fig. 5) of each plasmid in the database (Supplementary Fig. 3), a report of plasmid features (Supplementary Fig. 4) and a searchable table of AMR genes that are present in the plasmid (Supplementary Figs. 5 and 6). Second, this subtyping method generates a tabular typing summary of all the plasmids that were evaluated (Table 1). This summary includes the plasmid ID, plasmid typing category, fragment type, loci type, plasmid sequence type, fragment sequence types and the interfragment distances (Supplementary Data 4). Third, the method produces sequence (Supplementary Data 5) and pattern (Supplementary Data 6) definitions for downstream analysis. Finally, this subtyping method includes a script that allows the user to characterize their own plasmid sequences using the database of results generated by the Lociq method. This script will also update the plasmid typing reference database if the user's plasmid sequences contain variants in sequence or structure that were not previously accounted for.
Comparison to existing methods. Our subtyping method classifies plasmids by variations in loci sequence and relative position on the plasmid. We compared the total number of subtyping groups, the size of each group and the Simpson diversity index across four plasmid typing methods and the Lociq typing method to evaluate their discriminatory power (Fig. 6). The first two methods we evaluated were the MOB type and the PTU typing methods. While neither of these classification methods were designed for IncC plasmid subtyping, both are valuable alternatives to the Inc typing system. PTU classification of the plasmids was able to assign 479 of the 534 IncC plasmids to a PTU group. The largest group contained 475 plasmids and the results that were generated had a Simpson's index of diversity of 0.199. MOB typing of the 534 IncC plasmids revealed 15 MOB types, the largest of which contained 425 plasmids. MOB typing of this dataset generated a Simpson's index of diversity of 0.359. The next two methods we evaluated were specifically designed to subtype IncC plasmids and showed greater ability to differentiate between plasmids. The first of these methods was the 5 loci IncA/C PMLST schema which produced 28 groups. The largest group classified by this method contained 363 plasmids and the diversity index for this method was 0.492. The final comparator typing method was the 28 loci IncA/C cgPMLST schema. There were 180 unique combinations of IncA/C cgPMLST alleles represented in the dataset and the most common combination was identified in 87 plasmids. Typing with the IncA/C cgPMLST loci showed the greatest discriminatory power of all the comparator methods with a Simpson's diversity index of 0.954.

Fig. 4 Metrics for plasmid typing. Endpoints evaluated in the Lociq plasmid typing method (a). Boxes represent plasmid loci while the numbered clusters of loci correspond to plasmid fragments. Examples of how the endpoints can be used to differentiate between two example plasmids A and B (b) using the sequence of plasmid loci to determine plasmid sequence type, order of the plasmid loci to determine loci type, order of the plasmid fragments to determine fragment type or the distances between the plasmid fragments as a metric for interfragment distances.
Next, we evaluated the typing schema generated in our plasmid subtyping method (Fig. 6). Structural characterization of the plasmids by the order of their fragments grouped the 534 IncC plasmids into 53 groups, the largest of which contained 386 plasmids. Fragment typing had slightly greater discriminatory power than MOB typing, as indicated by a Simpson's diversity index of 0.475. Structural characterization of plasmids by the order of their loci classified the plasmids into 260 groups. The largest group contained 171 plasmids and the Simpson's diversity index for this schema was 0.896. This value was slightly less than the diversity index of the IncA/C cgPMLST method. The schema that grouped plasmids by plasmid sequence type separated the plasmids into 388 groups. The largest group that was produced with this schema contained 15 plasmids, and this schema had the second highest Simpson's diversity index of 0.996. The final schema that we evaluated combined all the structural and sequence features that were generated in the analysis. For this aggregate schema, plasmids were evaluated by their fragment type, loci type, sequence type and the distances between their fragments. This separated the plasmids into 515 groups, and the largest group contained 4 plasmids. This schema had the greatest discriminatory power with a Simpson's diversity index >0.999.

The Lociq method can also be used to identify custom features such as IS elements to indicate potential sites of plasmid recombination. A comparison of plasmids NC_012690.1 and AP024125.1 illustrates the proximity of IS elements to AMR and heavy metal resistance genes (Fig. 7). In addition, two inverted sequence regions are flanked by IS6 elements in AP024125.1 relative to NC_012690.1.
Downstream analysis of AMR positions in a dataset. As a second downstream analysis, we can leverage the AMR gene location data to identify trends in gene position among the plasmid dataset. We analyzed the location of blaCMY-2 among our IncC plasmids. Of the 117 plasmids that encoded for blaCMY-2, 93 plasmids bore the gene downstream of IncC fragment 8 and upstream of IncC fragment 6. All but 2 of the blaCMY-2 genes in this subset were located in a range that peaked at 28 kb upstream of IncC fragment 6 (Fig. 8).
The blaCMY-2 genes were located in two ranges downstream of IncC fragment 8. One range peaked at 30 kb downstream of fragment 8 and the other at 80 kb. Two blaCMY-2 genes were found outside of these ranges: 1 was identified 242,666 bp downstream of fragment 8 in plasmid NZ_CP028804.1 and the other 190,156 bp downstream of fragment 8 in plasmid NZ_CP019001.1. The shift in the location of blaCMY-2 in both cases was associated with a potential insertion event upstream of the gene. Upstream of blaCMY-2 in NZ_CP028804.1 is a region that contains genes associated with resistance to silver, copper and arsenic, as well as heat shock tolerance, together with the genetic markers for the plasmid replicons IncFIA_1(AP001918), IncFIC(FII)_1(AP001918) and IncFII_1(AY458016). Similarly, upstream of blaCMY-2 in NZ_CP019001.1 is a region encoding for the iucA, iucB, iucC, iucD, iutA virulence genes and the genetic markers for plasmid replicons IncFIB(K)_1(JN233704) and IncFII(K)_1(CP000648). The presence of multiple plasmid replicons combined with the relative position of blaCMY-2 from the IncC plasmid fragments indicates these two plasmids are the result of a recombination event between an IncC plasmid and a plasmid of the IncF family of plasmid groups. The Lociq typing method records the gene position data for all AMR and user-defined accessory genes and, as a result, this gene location analysis can be performed for any gene represented in the plasmid dataset.
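The density analysis of gene position described above can be reproduced with base R once the distances have been extracted from the Lociq summary tables. In the sketch below, `dist_to_frag8` is an assumed numeric vector of the distance (bp) from each blaCMY-2 gene to IncC fragment 8; the name is illustrative.

```r
## Minimal sketch of the gene-position density analysis (base R only).
## `dist_to_frag8` is an assumed vector of blaCMY-2 to fragment 8 distances in bp.

plot(density(dist_to_frag8),
     main = "blaCMY-2 position relative to IncC fragment 8",
     xlab = "Distance downstream of fragment 8 (bp)")
rug(dist_to_frag8)   # mark each individual plasmid along the x-axis
```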
Lociq typing of draft assemblies. Draft plasmid assemblies can be analyzed by using the allele definitions of the Lociq results. The Lociq program cannot analyze draft assemblies for structural variations of loci or fragment order, but it can perform plasmid MLST to identify which plasmids in the Lociq results most closely match the draft assembly. To demonstrate this plasmid typing function, we queried the NCBI database for draft assemblies containing IncC plasmid sequence and filtered the results for assemblies generated from short reads. From this, we selected the whole genome shotgun sequencing record for Klebsiella pneumoniae K184 (JAANYS000000000) that contained 1,477 contigs. A BLAST query of the IncC typing loci against the 6.7 Mb draft assembly identified 62 IncC typing loci in the sequence. Thirty-five of the 62 loci were partial matches that either occurred at the end of a contig (Supplementary Data 9) or aligned to complementary ends of two separate contigs (Supplementary Data 10). Of the remaining 27 loci, 22 matched known alleles in the Lociq results (Supplementary Data 11). Our analysis revealed that this grouping of 22 alleles was conserved among 86 plasmids in our dataset of 534 IncC plasmids. This subset of 86 IncC plasmids represented the closest matches to the plasmid identified in the whole genome shotgun sequencing assembly based on our typing method.
Analysis of plasmids in a clinical setting. The final demonstration shows how subtyping with the Lociq method can aid in tracking the evolution of a plasmid in a clinical setting. To do this, we used the Lociq method to visualize the results of a study in a major hospital in Taiwan that tracked the transmission of blaOXA-48 from a plasmid to a K. pneumoniae chromosome over a three-year period 26 . During this time, an accessory IncC plasmid that was retained among the K. pneumoniae strains had lost ~20 kb of sequence containing 9 AMR genes. The study closed the sequences of 4 IncC plasmids that were recovered from isolates in the blood of a patient suffering from bacteremia, urine of two patients suffering from urinary tract infections and pus from a patient suffering from pneumonia. Analysis of the 4 IncC plasmids revealed that all belonged to the IncC Lociq sequence type 74 (IncC Lociq ST74) and the loci and fragment patterns were identical among all four plasmids (Fig. 9). However, the interfragment distances and arrangement of AMR genes among the plasmids differed, indicating that each of the plasmids that was recovered represented a different stage in the evolution of the plasmid at the hospital. The primary study indicated that the first stage of plasmid evolution was observed between the plasmids NZ_CP040034.1 and NZ_CP040029.1 that were isolated in the first year of the sample period. These plasmids showed an inversion of a ~20 kb resistance cassette containing erm(42)-blaTEM-31-rmtb1-tet(G)-floR2-sul1-qacEdelta1-aadA2-dfrA12 that was located between IncC fragments 4 and 2. The next step in plasmid evolution indicated in the primary study was observed in plasmids recovered later in the sampling period. These plasmids showed a reduction in size due to the loss of an overlapping resistance cassette containing aac(3)-IId-dfrA12-aadA2-qacEdelta1-sul1-floR2-tet(G)-rmtb1-blaTEM-31 but leaving erm(42) in the plasmid. The proposed final step was the loss of blaCTX-M-14 that was embedded between two sections of IncC fragment 3. This analysis revealed that even though the plasmids were identical in sequence type, loci pattern and fragment pattern, the difference in interfragment distance showed that the plasmids were not identical. Further, the fragments of the Lociq typing method provided a common reference point among the plasmids to identify where each plasmid restructuring event had taken place.
Next, we compared the four IncC Lociq ST74 plasmids recovered from K. pneumoniae isolates in a Taiwanese hospital to the only five IncC Lociq ST75 plasmids in our results. These two plasmid sequence types differ by a single allele that encodes for an uncharacterized protein. Even though the four ST74 plasmids were all recovered from a single location and single species, the five ST75 plasmids were recovered from multiple species and multiple sites. The smaller two IncC Lociq ST74 plasmids shared the same loci and fragment pattern with the IncC Lociq ST75 plasmids NZ_LT985224.1 and NZ_MF150121.1, however the ST75 plasmids were recovered from E. coli in France and K. pneumoniae in Brazil, respectively (Supplementary Fig. 8). Alignment of the plasmids revealed 98% coverage and >99% identity between NZ_CP040024.1 and NZ_LT985224.1 and 97% coverage and >99% identity between NZ_MF150121.1 and NZ_CP040039.1. The third IncC Lociq ST74 plasmid NZ_CP040029.1 shared the same plasmid structure and AMR composition of the IncC Lociq ST75 plasmids NZ_MF150118.1 and NZ_CP028996.1, but the IncC Lociq ST75 plasmids were recovered from P. mirabilis in Brazil and K. pneumoniae in USA (Supplementary Fig. 9). Both NZ_MF150118.1 and NZ_CP028996.1 aligned to the ST74 NZ_CP040029.1 with 100% coverage and >99% identity. Finally, the fourth IncC Lociq ST74 plasmid NZ_CP040034.1 shared the same inverted sequence upstream of IncC plasmid fragment 2 that was observed in the IncC Lociq ST75 plasmids NZ_CP023724.1 and NZ_AP018672.1. These last two ST75 plasmids were recovered from a hypervirulent K. pneumoniae clinical isolate in Taiwan and a K. pneumoniae environmental isolate in Japan (Supplementary Fig. 10) 27 . Both ST75 plasmids shared 96% coverage and 99% identity with the ST74 plasmid, and the decreased coverage was affected by the partial loss of a resistance cassette between IncC plasmid fragments 4 and 2.

Fig. 9 Alignment of IncC sequence type 74 plasmids. Visual comparison of all IncC Lociq ST74 plasmids in our results. All plasmids were recovered from a single hospital in Taiwan between 2013 and 2015 and illustrate how a plasmid that is established in a single location can change over time. Shaded regions indicate differences between plasmids. The numbered black bars represent plasmid fragments, red bars represent AMR genes and yellow bars represent stress-tolerance genes. Strand orientation is in relation to the plasmid indexing locus and forward orientation is represented by gene presence above the sequence line.
This final application of the Lociq method demonstrates how one plasmid type that was recovered solely from K. pneumoniae isolates from a major hospital in Taiwan was genetically similar to plasmids isolated from K. pneumoniae, E. coli and P. mirabilis in 4 different continents. This demonstration indicates that with the appropriate supporting epidemiological data, the results of the Lociq method can be used to support efforts to track the spread of clinically relevant plasmids.
Discussion
We have demonstrated how the Lociq method uses closed plasmid assemblies to identify core genetic elements and structural patterns conserved among IncC plasmids. This method can be applied to a single dataset to identify typing metrics for any other plasmid group that shares a core set of loci. Further, because this method can characterize plasmids that do not contain the full complement of typing loci, it is ideal for characterizing plasmids that contain elements from multiple plasmid types. This feature also allows for increased characterization of plasmids from draft assemblies where not all of the plasmid typing loci are represented in the assembled sequence. The Lociq method provides a common language to describe plasmid diversity using the endpoints of fragment pattern, loci pattern, plasmid sequence type and interfragment distances. These properties make the Lociq method a powerful tool to track and study the evolution and routes of transmission of any plasmid of interest.
The Lociq method generates multiple typing schema, each with a different discriminatory power. Typing schema with a low discriminatory power, such as the Lociq fragment type, are suited to identify larger groups of similar plasmids. The schema that accounts for all metrics of the Lociq method had the greatest discriminatory power of all the evaluated methods and is best suited to differentiate between similar plasmids. The comparator method whose metrics generated the greatest discriminatory power was the IncA/C cgPMLST method. However, the IncA/C cgPMLST schema was only designed to differentiate between IncA/C plasmids, while the Lociq method can theoretically be applied to characterize any plasmid type that shares a common set of core loci. Further, the cgPMLST was developed through resource-intensive transposon disruption assays, while the bioinformatic Lociq subtyping method can be run on a desktop computer 3 .
The Lociq method adds two features that are not common in other typing methods. First, this method identifies conserved intergenic regions and codifies them as typing loci. This has the dual benefit of not only increasing the pool of plasmid loci, but also facilitating the construction of larger contiguous regions of neighboring typing loci. The second feature the Lociq method adds is an analysis of variations in the plasmid structure. Structural analysis of the arrangement of elements is relevant to plasmid typing because it can identify common recombination events in plasmids such as deletions, insertions, duplications or rearrangement events. The structural analysis also accounts for differences in the length of sequence between the plasmid fragments. Variations in interfragment distances can notify researchers not only that a recombination event occurred, but also the region of the plasmid where the recombination event took place.
While the Lociq method increases the discriminatory power of plasmid subtyping through the addition of structural comparisons, the method does have limitations. First, the user needs to have access to a library of high-quality closed plasmid assemblies to construct their initial dataset. Second, the plasmid dataset should contain sufficient genetic diversity to represent the plasmid type of interest. The Lociq method also requires the user to input threshold values for loci selection and interfragment distance limits. The method provides graphics to help inform the user of plasmid loci distribution, but no equation to determine the optimum cutoff value for prevalence within a plasmid type is supplied with the method. Finally, even though the Lociq results demonstrated greater discriminatory power than other typing schema, increased discriminatory power is not always ideal when the objective is to identify similar members of a group. Fortunately, the Lociq method generates multiple outputs that allow the user to select the testing metric that is appropriate for their purposes. Due to these limitations, the IncC-specific typing definitions that we obtained from our sample set of foodborne pathogens are not intended to classify the full diversity of the extant IncC population. Rather, developing the plasmid typing definitions that accurately reflect the diversity of plasmid sequence and structure will require collaboration with a number of partners that represent a diverse set of isolation locations, biological compartments, host organisms and isolation dates.
In addition to the applications demonstrated earlier, this typing method has promising implications for plasmid research. First, the Lociq method can be used to characterize plasmids that are currently untyped. The initial stage of the Lociq program organizes plasmids independent of plasmid type through hierarchical clustering of loci presence/absence data. Plasmids belonging to clusters without a known plasmid type can be characterized by the Lociq method using the typing loci unique to that cluster. Second, the Lociq method can facilitate analyses between plasmid sequence and plasmid metadata. These comparisons may be made either by evaluating the sequence composition of the plasmid typing alleles, or by evaluating alleles present in subclusters of a plasmid type as was seen in the clustering of the IncC plasmids (Fig. 1). Finally, the library of typing loci may help to reconcile draft plasmid assemblies by providing a template for contig extension and gap closure when partial matches of plasmid typing loci map to the end of draft assembly contigs.
The Lociq method combines structural and sequence variants to increase the discriminatory power of existing plasmid typing methods. By reducing plasmids to their component parts, the Lociq method standardizes comparison metrics among plasmid types and allows for enhanced investigations between plasmid loci and plasmid metadata such as AMR gene composition, isolation source or plasmid lineage. The results of the Lociq method will not only benefit basic plasmid biology research, but will also aid public health monitoring programs such as NARMS to track the spread of plasmid lineages and better identify the origin of multidrug resistant plasmids.
Methods
Sequences and core annotations. The initial dataset of long read sequences from 175 Salmonella and E. coli retail meat and cecal sample NARMS isolates was generated using the PacBio Sequel platform with sequencing kit v3.0 (Pacific Biosciences, Menlo Park, CA). Sequencing libraries were prepared with the PacBio SMRTbell template prep kit v1.0 and the resulting reads were assembled into closed contigs using the PacBio Hierarchical Genome Assembly Process 4.0 and Circlator v1.5.5 28,29 . Plasmid Inc type was determined using PlasmidFinder definitions (accessed 4-27-2022) and closed plasmid assemblies were annotated with PROKKA v1.14.5 30,31 . The reference database of plasmid sequences evaluated was the PLSDB database v. 2021_06_23_v2 24 .
Lociq method. The scripts for operation of the Lociq method are available for download at http://www.github.com/LBHarrison/Lociq/. Required input for the method includes annotation files of closed plasmid assemblies, access to a reference database and plasmid type metadata. Additionally, the program requires user-defined thresholds for prevalence and distance to account for variability in diversity among different plasmid groups and comparator schema.
Identification of plasmid typing loci. The pangenome and intergenic regions of the closed plasmid dataset were obtained using Roary and piggy, respectively 23,22 . Data for the coding and intergenic pangenomes were merged and passed to R for clustering as binary data with complete linkage 32 . Plasmid typing loci among the Inc groups were determined in a two-stage process (Fig. 1). First, putative plasmid typing loci were identified by selecting the loci with a user-defined threshold of high prevalence in the plasmid group of interest and a user-defined threshold of low prevalence in the other plasmid groups. Second, loci were queried against an external plasmid database as a validation step using an 80% identity threshold. Loci that met or exceeded user-defined prevalence thresholds for membership within a plasmid group were identified as the plasmid typing loci for the current plasmid group.
Identification of conserved plasmid fragments. Sequence coordinates of typing loci were obtained through a BLAST query of the loci against an external plasmid database. Clusters of loci separated by less than a user-defined threshold value defined the contiguous sequence regions of a plasmid. These data were used to generate a contingency table displaying an all vs all tally of loci occurring in the same contiguous sequence region. The contingency table was analyzed as a correlation matrix evaluating the Pearson's correlation coefficient (R) for all loci interactions using the R Hmisc v4.7-0 package 33 . Loci clusters with a mean correlation coefficient ≥ 0.9 represent conserved contiguous sequence regions in the plasmid dataset and are referred to as plasmid fragments. Loci clusters with a mean R-value < 0.9 were subjected to increasingly stringent clustering parameters until the resulting plasmid fragments had a mean R-value ≥ 0.9.
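A hedged sketch of this correlation step is shown below. It assumes `cooc` is the all-vs-all contingency table of loci co-occurring in the same contiguous region; the object names, the clustering call and the cut height are illustrative assumptions rather than the exact Lociq implementation.

```r
## Sketch of fragment identification from loci co-occurrence (assumed inputs).
library(Hmisc)

rc <- rcorr(as.matrix(cooc), type = "pearson")   # pairwise Pearson R (and p-values) between loci
r  <- rc$r

hc       <- hclust(as.dist(1 - r), method = "complete")   # cluster loci on correlation distance
clusters <- cutree(hc, h = 0.1)                            # cut height is an assumption (1 - 0.9)

## mean pairwise R within each candidate fragment; clusters with mean R >= 0.9 are kept
mean_r <- sapply(split(colnames(r), clusters), function(loci) {
  if (length(loci) < 2) return(1)      # singleton clusters trivially pass
  sub <- r[loci, loci]
  mean(sub[upper.tri(sub)])
})
```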
Plasmid subtyping. Plasmid sequences were indexed to begin at the typing locus present in the greatest number of plasmids and the sequences were analyzed with AMRFinder plus to identify AMR genes and stress tolerance genes 34 . Plasmids were then subtyped using the metrics of: sequence type, organization of loci, organization of plasmid fragments and the distances between the plasmid fragments (Fig. 4). Unique numeric identifiers of the typing metrics are generated as part of the summary file output from the Lociq program (Supplementary Data 4). Plasmid sequence type was determined by the allelic composition of plasmid typing loci. Loci position data were extracted from the BLAST results and a unique numeric identifier was assigned to each unique organization of loci among the plasmid typing fragments. A similar process was applied to the order of plasmid fragments to determine the plasmid fragment type. Finally, the distances between each plasmid fragment were recorded to identify each plasmid's set of interfragment distance values.
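Assigning the numeric identifiers described here amounts to indexing each distinct pattern. The sketch below assumes character vectors that encode each plasmid's fragment order, loci order and allele profile; the names and encodings are illustrative.

```r
## Sketch of subtype identifier assignment (illustrative object names).
## Each element encodes one plasmid's pattern, e.g. fragment_order[1] == "1-8-4-2-6".

fragment_type <- match(fragment_order, unique(fragment_order))   # 1, 2, 3, ... per distinct order
loci_type     <- match(loci_order,     unique(loci_order))
sequence_type <- match(allele_profile, unique(allele_profile))

subtype <- paste(fragment_type, loci_type, sequence_type, sep = "/")  # combined designation
```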
Comparator typing methods. Comparator typing schema were used to evaluate the discriminatory power of the Lociq method. PTU designation and MOB type were determined using COPLA (updated 6-30-2021 using the RS84 definitions) 19 . IncC plasmids were further characterized by the IncA/C PMLST and IncA/C cgPMLST allelic profiles as recorded in PubMLST (Accessed 8-18-2022) 3,35 . Discriminatory power of the typing schema was determined by Simpson's diversity index.
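The Simpson's diversity index used to compare the schema can be computed from the group sizes alone. The sketch below follows the common 1 - sum(n_i(n_i - 1)) / (N(N - 1)) form of the index of diversity; the paper does not print its exact formula, so treat the implementation as an assumption.

```r
## Sketch of the Simpson's index of diversity for a typing scheme.
## `types` is a vector giving each plasmid's assigned group under that scheme.

simpson_diversity <- function(types) {
  n <- table(types)                       # group sizes
  N <- sum(n)
  1 - sum(n * (n - 1)) / (N * (N - 1))
}

## e.g. sapply(list(PTU = ptu, MOB = mob, Lociq = lociq), simpson_diversity)
```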
Downstream analyses. Downstream analyses were performed to demonstrate four additional applications of the Lociq method. In the first demonstration of custom annotations, insertion sequence (IS) elements were identified in the dataset using ISEscan v1.7.23 and the results merged with the Lociq annotation file 36 . Sequence alignments were generated with NCBI BLAST and visualizations were generated using the R genoPlotR package 37 . In the second demonstration, which identified trends in the position of AMR genes in the dataset, the distances from an AMR gene of interest to its nearest plasmid fragments were visualized on a density plot using base R.
Third, the Lociq method was used to improve characterization of plasmid draft assemblies. This was done by performing a BLAST query of typing loci against the draft plasmid assembly to identify loci present in the sequence. The results were filtered by requiring an identity >70% and coverage >90%. The draft assembly loci sequences were compared to the reference plasmid loci sequences to determine which specific plasmid typing alleles were present in the draft assembly. Alleles present in the draft plasmid assembly were used to construct an m x n presence/absence matrix, where m was equal to the number of plasmids that were analyzed with the Lociq method plus the plasmid draft assembly, and n was equal to the number of unique alleles in the plasmid draft assembly. The presence/absence matrix was used to create a distance matrix of plasmids using the dist function in R with the method parameter set to binary. The row corresponding to the plasmid draft assembly was extracted and the distance values were evaluated to identify the least dissimilar plasmids from the Lociq results.
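A minimal sketch of this matching step is shown below, assuming `allele_pa` is the m x n presence/absence matrix described above with the draft assembly stored as the row named "draft"; the names are illustrative.

```r
## Sketch of ranking reference plasmids by similarity to a draft assembly.
## `allele_pa` is the assumed 0/1 allele presence/absence matrix (rows = plasmids + draft).

d <- as.matrix(dist(allele_pa, method = "binary"))   # binary dissimilarity, as in the text

draft_dist <- d["draft", ]
draft_dist <- draft_dist[names(draft_dist) != "draft"]

head(sort(draft_dist), 10)   # the least dissimilar Lociq-typed plasmids
```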
Statistics and reproducibility. Statistical tests were performed in R with the Hmisc package 32,33 . Specifically, the Pearson's correlation coefficient and corresponding probabilities were calculated to evaluate the pairwise likelihood of any two plasmid typing loci occurring on the same region of the plasmid. Source data and numeric results are available in Supplementary Data 3. The sample size for these tests accounted for the 72 plasmid typing loci that were identified in the IncC plasmid demonstration dataset.
Reporting summary. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
"year": 2023,
"sha1": "5db64d0094fe67be9955016852d4cfc7aa2e0f0f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "40e2d548ab4d6dd085c9efefb7ac98bdf6d16a8a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Geographical perspective in managing workers' recruitment and rotation
Competition in retail business continues to intensify as the sector develops. Many factors have to be considered in building a competitive business, and one of them is human resource management. Employee satisfaction must be considered so that employees contribute back to the company, and for employees, one key element of job satisfaction is ease of access to the workplace. The residence location of mid- to low-level employees is an important consideration, especially in a retail business with outlets spread across many locations and employee salaries that tend to be low. This study took place in an international hardware retail company and aims to review the process of employee management, in particular the processes of recruitment and employee rotation, from a geographical perspective. The public transportation options that support access to work sites are assessed using a geographic information system approach.
Background
Current development affects the development of business in Indonesia, one example of which is the retail business. Retailing is the activity of selling goods and services to consumers to meet their personal needs [1]. The retail business in Indonesia has great potential, given that Indonesia is one of the countries with the highest population density. A large number of citizens means a large number of consumers, as a retail business distributes goods or services directly to the consumer.
No business is independent of human resources. Human resources reflect the quality of the effort provided by a person within a specific time to produce goods and services. A retail business, for example, runs on the services produced by its human resources. Adam Smith emphasized that a sufficient allocation of human resources is a requirement for economic growth [2]. It takes quality human resources to build a successful business. Therefore, the recruitment of workers to be placed in the business is highly significant, and it takes the right strategy and criteria to select a workforce suitable for the line of business undertaken.
One of the effective methods to allocate human resources is work rotation. Work rotation is done to meet the needs of job positions in the company. According to Indrayati [3], work rotation has a positive and significant effect on job satisfaction, and work motivation also has a positive and significant effect on job satisfaction. Therefore, the work rotation process is important to manage with the aim of increasing the productivity of the company.
Accessibility is a measure of the convenience or ease of reaching a location. The more transport networks exist in an area, the higher the accessibility of that area [4]. According to Azis & Asrul [5], accessibility is affected by several factors such as distance, time, and travel costs, and 90% of travel is residential-based, meaning that every trip to work, education, entertainment or other destinations starts from the residence (home) and ends with a return trip home.
Transportation has a decisive role in improving work accessibility for socially disadvantaged groups. Some critical constraints still affect the fulfillment of the basic needs of these groups regarding security, reliability, affordability, and availability [6]. The modes of transportation used are varied and have different characteristics. According to Azis & Asrul [5], the modes commonly used to travel to a work location are car, train, city transportation, bus, motorcycle, and bicycle. The use of public transportation in Indonesia tends to decrease every year, except for the increasing use of trains in the Greater Jakarta Area and of BRT in Jakarta [7]. The employees most reluctant to use public transportation are those with incomes above 5 million rupiahs [8]. The private vehicles most used by residents in Indonesia are motorbikes, which are also the most significant contributors to accidents in Indonesia [9]. The low price and ease of use of motorbikes are the reasons Indonesians use them [10].
This study aims to review the process of employee management, especially the processes of recruitment and rotation of employees, from a geographical perspective. Furthermore, this study is also expected to provide information to employees regarding what public transportation is available near their place of residence to get to the workplace. Through this information on public transportation, it is hoped that employee awareness of the environment in Indonesia can be increased, particularly in order to reduce the use of motorcycles, which have a significant impact on the environment through vehicle emissions; exposure to high emission levels poses a high risk to human health [11]. The residence location of middle- to lower-level employees is essential to consider, especially in a retail business with outlets spread across many locations. Based on the Ministry of Finance in Indonesia [12], workers classified in the middle to lower level are employees who have monthly salaries below 2.6 million rupiahs. The various public facilities that support access to work sites are assessed using a Geographical Information System (GIS) approach. The public vehicles considered in this study are trains, public buses and BRT (Bus Rapid Transit), which are the lowest contributors to accidents in Indonesia [9]. The study examines an international hardware retail company with 120 outlets spread across Indonesia and 56 outlets spread across the Greater Jakarta area, with approximately 70 employees per outlet. The employee salaries in this company start at 2.5 million rupiahs per month, so its employees can be classified as middle- to lower-level employees.
Research Methods
This study uses a GIS approach to manage, analyze, and display all the desired geographic information [13]. The multi-ring buffer technique is used to create buffers with more than one ring for each selected coordinate point. Information on workers' commuting distance is obtained from a survey conducted by Kompas [14], i.e., 0-15 km, as can be seen in Figure 1. Based on this information, buffers are created at distances of 5 km, 10 km, and 15 km. The software used is QGIS Desktop 2.18.14. Spatial data collected include the locations of the 56 outlets in the Greater Jakarta area, administrative boundaries, and data on transportation facilities (Bus Rapid Transit terminal locations, public bus stop locations, and train station locations). Demographic data are obtained from the Indonesian Central Bureau of Statistics and include population data, family size data, employee data, and the unemployment rate for the Greater Jakarta area. Figure 2 shows the outlet locations in the Greater Jakarta Area and the buffer areas within 5 km, 10 km, and 15 km. Supported by Bus Rapid Transit (BRT), transportation facilities in Jakarta are more favorable than in the other cities. BRT routes are accessible to citizens in North Jakarta, East Jakarta, South Jakarta, West Jakarta, and Central Jakarta. The analysis shows that only one outlet has fewer transportation facilities, i.e., only one mode of transportation exists in its 15 km buffer area, in the form of a shuttle car. Figure 3 shows the transportation facilities in the Greater Jakarta Area.
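The study performed the multi-ring buffering in QGIS Desktop; the same operation can be sketched in R with the sf package, shown below as an illustration. The file name, layer structure and choice of projected CRS (UTM zone 48S for the Jakarta area) are assumptions for the example.

```r
## Illustrative multi-ring buffer around outlet points (sf package).
library(sf)

outlets <- st_read("outlets.gpkg")            # hypothetical point layer of the 56 outlets
outlets <- st_transform(outlets, 32748)       # project to metres (UTM 48S; an assumption)

rings <- lapply(c(5000, 10000, 15000), function(r) {
  b <- st_buffer(outlets, dist = r)           # buffer each outlet at radius r metres
  b$radius_km <- r / 1000
  b
})
multi_ring <- do.call(rbind, rings)           # one polygon per outlet per ring (5, 10, 15 km)
```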
Findings and Discussion
The buffer area shown in Figure 2 shows the recruitment area for each outlet. Based on the average distances covered by Jakarta and non-Jakarta respondents, the average distance traveled from residence to workplace is 0-15 km. Therefore, the buffer technique used is a multi-ring buffer with 5 km, 10 km, and 15 km buffer areas, with the outlet's coordinates as the center point of the buffer area. Access to train, bus, and BRT can also be seen in the buffer area as a consideration for employees who use public transportation to go to work. Figure 4 shows an example of a multi-ring buffer for an outlet inside a mall. In this study, the transportation facilities considered are public transportation such as bus, train, and BRT. These modes are chosen because the cost of using them is affordable and their routes are long. Recruitment is conducted by matching the applicant's residence to the smallest buffer that contains an available store location. Job rotation is also conducted using the buffers, with priority given to the smallest (5 km) buffer area. This aims to minimize commuting distance and to reduce the possibility of transferring employees to stores far from where they live. However, if there is no outlet within the 5 km buffer, the 10 km or 15 km buffer is used instead, depending on the availability of a destination store.
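A hedged sketch of this placement rule is shown below, assuming a single applicant point `applicant`, the projected `outlets` layer from the previous example, and a hypothetical `has_vacancy` flag indicating which outlets have an open position; all names are illustrative.

```r
## Illustrative smallest-buffer-first placement rule (sf package).
library(sf)

dist_km <- as.numeric(st_distance(applicant, outlets)) / 1000   # residence-to-outlet distances
vacant  <- outlets$has_vacancy                                  # hypothetical vacancy flag

pick_within <- function(max_km) {
  idx <- which(dist_km <= max_km & vacant)
  if (length(idx)) idx[which.min(dist_km[idx])] else NA_integer_
}

idx <- pick_within(5)                       # prefer an outlet inside the 5 km ring
if (is.na(idx)) idx <- pick_within(10)      # otherwise fall back to 10 km
if (is.na(idx)) idx <- pick_within(15)      # then 15 km
if (is.na(idx)) idx <- which.min(dist_km)   # last resort: nearest outlet overall
chosen_outlet <- outlets[idx, ]
```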
Conclusion
This study demonstrates that a Geographic Information System is highly applicable for displaying accessibility information that is easy to understand, interesting and easy to use, given the availability of free, open source and licensed software. The multi-ring buffer technique allows the company to assess the location of a prospective employee relative to the location of the outlets.
In recruiting employees, management should also consider the accessibility and residence location of applicants to support the productivity of the company. The same considerations apply in the employee rotation process. The placement process should prioritize the smallest buffer area before the larger ones. If there is no outlet located in any of the predetermined buffer areas, this can be overcome by using a distance matrix to find the shortest distance to the nearest existing store. Furthermore, this study is also expected to provide information to employees regarding what public transportation is available near their place of residence to get to their workplace, and to increase employees' sense of concern for the environment in Indonesia, especially in big cities like the Greater Jakarta area, which is heavily polluted.
"year": 2018,
"sha1": "73ed6fac09c1253eb9cfcdf4df69e346df6be98c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/195/1/012034",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8765361f34f38859a4e59bf3b7a0e0da75324e92",
"s2fieldsofstudy": [
"Business",
"Geography"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
} |
Psychometric Validation and Reference Norms for the European Spanish Developmental Coordination Disorder Questionnaire: DCDQ-ES.
The Developmental Coordination Disorder Questionnaire (DCDQ) is a widely used and well-validated tool that contributes to the diagnosis of Developmental Coordination Disorder (DCD). The aim of this study was to further analyze the psychometric properties of the European Spanish cross-culturally adapted version of the Developmental Coordination Disorder Questionnaire (DCDQ-ES) in a sample of Spanish children aged 6-11 years and to establish reference norms with respect to age groups. Parents of 540 typically developing children completed the DCDQ-ES. A second sample of 30 children with probable DCD (pDCD) was used to test its discriminant validity. Confirmatory factor analysis supported the original three-factor structure and the internal consistency was excellent (Cronbach's α = 0.907). Significant differences between age groups were found. The pDCD group scored significantly lower than the reference sample in the three subscales and DCDQ-ES total score (p < 0.001; AUC = 0.872). The DCDQ-ES is a reliable and valid tool for screening motor coordination difficulties in Spanish children and for identifying children with probable DCD. The findings of this research suggest that context-specific cut-off scores should be systematically utilized when using cross-cultural adaptations of the DCDQ. Age-specific cut-off scores for Spanish children are provided.
Introduction
It is estimated that Developmental Coordination Disorder (DCD) affects approximately 5%-10% of school-aged children, making it the most prevalent neurodevelopmental disorder in childhood [1][2][3]. Children with DCD present motor coordination difficulties that significantly and persistently limit their daily functioning. As established by the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), children with DCD must show significantly poorer motor coordination performance than expected from the child's chronological age and opportunity for skill learning and use (criterion A) that significantly and persistently interferes with typical activities of daily living (criterion B), where onset occurs in the early developmental period (criterion C) and that cannot be better explained by a neurological condition affecting movement (criterion D) [2].

Children in the pDCD group were identified as having probable DCD using the 95th percentile cut-off score on the Spanish version of the DCDDaily-Q (mean score = 46.9, SD = 7.8) [33].
The DCDDaily-Q is a parent questionnaire aimed to operationalize criterion B of the diagnosis of DCD [32]. This measure has demonstrated excellent psychometric properties and capacity to identify children with DCD (Cronbach alpha = 0.85; sensitivity = 88%; specificity = 92%) [32]. All children in the pDCD group had been referred to two rehabilitation centers in Spain for motor performance issues, and some of them had a previous medical diagnosis of a co-occurring neurodevelopmental condition (ADHD = 33.3%, ASD = 13.3%, no co-occurring disorder = 53.3%). None of the children in the pDCD group were receiving specific treatment for DCD.
Participants in the normative group were randomly selected from a previously recruited larger sample that came from fourteen randomly selected mainstream elementary schools located in five locations in northwest, north and center of Spain (northwest = 78.1%, north = 20.2%, center = 1.7%) [21,33]. Most of the children (60.6%) came from a family with high/university education level (i.e., at least one parent held a college degree). Children with a parent-reported diagnosis of a developmental disorder were excluded from this group.
A third group that included children in the normative sample was created to serve as a control group for discriminant validity analysis in order to control for age and sex distribution. Children in the control group were randomly selected from the normative group using age-and sex-stratified sampling to match for exact age and sex with the pDCD group. As the pDCD sample size was small (n = 30), a 1:2 ratio was used for the control group (n = 60) to increase the statistical power of the analyses [36,37].
This study was approved by the Autonomic Research Ethics of Galicia Committee (code 2017-167). The DCDQ-ES was sent to the parents of the participants between June 2017 and December 2019 via school or rehabilitation center intermediation, so the parents could complete the DCDQ-ES at home.
Parents also received an informative letter about the study, where it was stated that completion of the DCDQ-ES was anonymous and voluntary. The e-mail address and telephone number of the first author were included in the letter so parents could contact the research team for clarification of the items or the questionnaire. Only parents who consented to participate returned the DCDQ-ES to the schools after completion within one week. Researchers retrieved the completed questionnaires from the schools.
European-Spanish Version of the DCDQ (DCDQ-ES)
The DCDQ-ES is a 15-item parent questionnaire designed to screen motor coordination disorders in 5-15-year-old children [23]. Using a five-point Likert scale, parents are asked to evaluate how well their child performs certain motor daily activities compared with their peers (1 = not at all like your child; 2 = a bit like your child; 3 = moderately like your child; 4 = quite a bit like your child; 5 = extremely like your child). Items are divided into three subscales or factors: control during movement, fine motor/handwriting and general coordination.
Total and subscale scores are calculated, where higher scores indicate better performance and the total score indicates whether a child has probable DCD with respect to three age groups (5-7 years 11 months; 8-9 years 11 months; and 10-15 years) [24]. The DCDQ usually takes about 10-15 min to complete [23], and it is a well-validated and recommended tool for assessing criterion B of the DSM-5 for a diagnosis of DCD [2,3].
The DCDQ was originally developed in English, and its original validation study using a large sample of Canadian children demonstrated good psychometric properties (Cronbach's alpha = 0.94; sensitivity = 85%; specificity = 71%) [23].
Translation into European Spanish, cross-cultural adaptation and preliminary psychometric validation of the DCDQ-ES have been described in a previous study, demonstrating that it is conceptually and semantically equivalent to its English version and is a reliable measure for assessing motor coordination in Spanish children [34]. Additionally, the DCDQ-ES has a moderate and significant correlation with the Spanish version of the DCDDaily-Q (r = 0.406; ICC = 0.381; p < 0.001), which contributes to demonstrating its concurrent validity [33].
The DCDQ-ES is available in the Supplementary Materials (Table S1).
Statistical Analysis
Analyses were performed using SPSS version 24 (SPSS Inc., Chicago, IL, USA) and EQS 6.1 for Windows. To assess the goodness of fit, confirmatory factor analysis (CFA) was conducted using an unweighted least-squares estimation method (n = 540) [38][39][40]. A root-mean-square error of approximation (RMSEA) of < 0.08, a comparative fit index (CFI) of > 0.95 and a non-normed fit index (NNFI) of > 0.95 were indicators that the model fitted the data adequately [41,42].
Reliability of the DCDQ-ES was calculated using Cronbach's alpha, with a value higher than 0.70 considered to be an indication of good internal consistency. Student's t-test, analysis of variance (ANOVA) and Bonferroni post-hoc tests were used to determine the discriminant validity of the DCDQ-ES by calculating differences between the control group and the pDCD, pDCD only, pDCD/ADHD and pDCD/ASD groups for mean item scores and mean total and subscale scores. Discriminant validity of the DCDQ-ES across age groups was also tested using Student's t-test.
Mean differences according to sex and age group were assessed with Student's t-test and ANOVA analysis. Then, the 5th, 10th, 15th and 20th percentiles of the normative group were calculated for the DCDQ-ES total and subscale scores in the overall sample and within each of the three age groups. ROC computations were conducted and DCDQ-ES total score sensitivity, specificity and predictive values were calculated.
Finally, we explored the potential research consequences of adjusting DCDQ-ES scores for the Spanish population by examining the percentage of children identified as having probable DCD using the original Canadian cut-offs (≤ 46 for ages 6-7; ≤ 55 for ages 8-9; or ≤ 57 for ages [10][11] or the Spanish-adjusted 5th percentile cut-offs for each age group.
Discriminant Validity
As displayed in Table 1, the total score of the DCDQ-ES showed a good discriminant capacity between typically developing children and children with probable DCD across age groups. The pDCD group scored significantly lower than the matched control group, both for the DCDQ-ES total and subscale scores and all items. Children with pDCD only (without ADHD or ASD) also showed significantly poorer scores on the DCDQ-ES total scale and all subscales ( Table 2). Table 2. DCDQ-ES total, subscale and item scores for pDCD and matched control group (n = 90). <0.001 c ; <0.001 d SD = standard deviation; pDCD = probable Developmental Coordination Disorder; = ADHD = Attention Deficit and Hyperactivity Disorder; ASD = Autism Spectrum Disorder; a = between controls and pDCD; b = between controls and pDCD only; c = between controls and pDCD/ADHD; d = between controls and pDCD/ASD.
Age and Sex Differences and Age-Specific Cut-Offs
Significant differences between age groups were found in the DCDQ-ES total scale and all subscales (p < 0.001). Younger children scored significantly lower than their older peers in the DCDQ-ES total scale and subscales.
Differences between sex groups were found only in one subscale. In the overall normative sample, girls scored significantly higher than boys in fine motor/handwriting (p < 0.001), but not in control during movement (p = 0.424), general coordination (p = 0.084) or total score (p = 0.228).
Therefore, percentiles for all subscales and total score were calculated separately for each age group. In total, four cut-off points for each age group were calculated according to the 5th, 10th, 15th and 20th percentiles on the normative group for DCDQ-ES total and subscales ( Table 3). The 15th percentile cut-off point of the DCDQ-ES for the total sample was 57 or below, with a sensitivity of 76.7% and a specificity of 83.3% (AUC = 0.872, 95% CI = 0.798 − 0.948, n = 90) (Table 4; Figure 2). Table 3. Overall and age-specific cut-off points according to the 5th, 10th, 15th and 20th percentiles for the DCDQ-ES total and subscores in the normative group (n = 540). In bold = recommended cut-offs for DCD indication (criterion B) in clinical practice (p15) and research (p5). In bold = recommended cut-offs for DCD indication (criterion B) in clinical practice (p15) and research (p5). Table 5 displays the research consequences of using the original Canadian cut-off points for identifying Spanish children with probable DCD, which were developed using logistic regression modelling [23]. As observed, 3.5% of children in the reference sample were diagnosed differently, depending on the cut-off point used. For the youngest children there is a 100% rate of agreement between both cut-off proposals, but in older groups this mismatch would result in a high rate of false-positive diagnoses. This mismatch is especially relevant in children aged 10 to 11 years, as 6.7% of Spanish children would get a false positive of probable DCD in research practice. Table 5. Prevalence of children diagnosed with probable DCD using Canadian or Spanish cut-off points (n = 540).
Canadian Cut-Offs Spanish Cut-Offs
Probable not DCD Probable DCD
Discussion
The aim of this research was to further validate the Spanish version of the DCDQ and to develop cut-off points for Spanish children using a randomly selected, sex and age-balanced sample of 540 Spanish typically developing children.
As motor coordination performance is a complex construct, different theories have been suggested and tested for its categorization when using and interpreting the DCDQ [23,43,44]. In this study, CFA analysis confirmed the original proposed three-factor structure, which is in line with the findings from Rivard et al. [44] and the validation study of the Italian version of the DCDQ [35]. Overall, these findings add to the evidence that motor coordination is a complex and multifactorial construct and that fine motor skills, coordination during movement and general coordination are interrelated factors but with unique differential aspects. For instance, girls and boys tend to show different motor coordination patterns in fine and gross motor skills, even when children come from different countries and cultural environments [21,45], and children with DCD struggle with different areas of motor coordination [3,13]. Therefore, it is necessary to assess each factor when exploring for DCD or coordination difficulties in daily living. Based upon the presented results, the authors recommend taking into account the specific problems in each of the three subscales in addition to interpreting the total score when using the DCDQ in a clinical context.
The DCDQ-ES has been previously cross-culturally adapted to the Spanish population, demonstrating that it is culturally and conceptually equivalent to the original DCDQ, and the preliminary validation study showed that the DCDQ-ES is a reliable tool for assessing motor performance in typically developing Spanish children [34]. In line with previous studies, findings from this further validation work report higher internal consistency values for the DCDQ-ES total scale and for the three subscales [24][25][26][27][28][29][30][31]. Cronbach's alpha values in other validation studies in European, Asian and Latin American populations range from 0.89 to 0.96 [24][25][26][27][28][29][30][31], demonstrating that the DCDQ is a reliable tool for assessing motor coordination and probable DCD.
The DCDQ-ES showed a high capacity to discriminate between children with and without probable DCD. The pDCD group scored significantly lower on all of DCDQ-ES items, the total scale and each of the three subscales (p < 0.05). The total score of the DCDQ-ES significantly discriminated children in the pDCD group across the three age groups as well. The co-occurrence rate of other neurodevelopmental conditions within the pDCD group is in line with the high prevalence rates reported by previous research, particularly regarding ADHD and ASD [3,14,[46][47][48]. Children with ADHD frequently present with motor coordination difficulties and DCD [3,49], and it has been questioned whether ADHD and DCD may pose as a unique disorder, but research demonstrates that they show differential motor, executive functioning and sensory processing characteristics and disparities in brain underpinnings, adding to the evidence of both disorders being commonly overlapping but different conditions [7,50,51].
Co-occurrence between DCD and ASD has been less explored, partially because assessment of motor coordination difficulties in children with ASD is reasonably more complex. However, the DSM-5 states that co-occurrence between both disorders is possible and research suggests that it may be quite frequent [2,3,[52][53][54]. A recent study using a large sample of children with ASD (n > 11,000) estimates that prevalence of risk of DCD in this population is as high as 86.9% [55]. Even if ASD commonly overlaps with DCD, research supports that both are different disorders with unique physiological and functional characteristics and intervention requirements [56]. For instance, Caeyenberghs et al. [57] found that children with DCD only and ASD only showed disorder-specific neural alterations, while children with both DCD and ASD exhibited distinct topological patterns, concluding that co-occurring children have a unique neural signature.
In this study, most of the items significantly discriminated children with pDCD only, pDCD/ADHD and pDCD/ASD, although some items (i.e., item 4, 5 or 15) did not discriminate typically developing children from pDCD only children, which can be partially explained by the small sample size in this subgroup. However, the total and subscale scores of the DCDQ-ES significantly discriminated children with pDCD only, pDCD/ADHD and pDCD/ASD, thus supporting the discriminant validity of the DCDQ-ES.
As expected, significant differences between age groups were found in both the DCDQ-ES total scale and all subscales. Older children scored significantly higher than younger children, which adds to the evidence that children improve their motor skills as they grow, as has been theorized previously by several authors, thus supporting the use of age-specific cut-off points [21,23,35,44,46].
Findings regarding sex differences in motor performance vary highly across cultural contexts and measures of assessment [20,33,45]. In this study, boys and girls showed a similar score on the DCDQ-ES total scale but had significant differences in the fine motor/handwriting subscale.
Outcomes regarding differences in motor coordination between boys and girls are inconclusive and vary according to country and measure of assessment [58][59][60]. For instance, Rivard et al. [44] reported that Canadian typically developing and DCD girls scored better on the DCDQ total scale than typically developing and DCD boys, respectively, while Caravale et al. [35] found that Italian boys and girls showed similar scores on the Italian DCDQ. Using the DCDDaily-Q, Delgado-Lobete et al. [45] found that both Spanish and Dutch girls showed better performance in fine motor activities than Spanish and Dutch boys, but differences in total performance varied according to sex and country.
These outcomes are in line with the findings from this study, and suggest that motor performance is probably influenced by cultural factors and daily activity participation. On the other side, typically developing boys are usually more proficient in gross motor skills than typically developing girls, while girls usually outperform boys in fine motor skills, but there is generally a higher proportion of males than females reported with DCD [21,33,45,59,60]. Thereby, it is possible that impairments in gross motor skills may be more evident than difficulties in fine motor performance, which could lead to girls with coordination motor struggles to go unnoticed.
As age was significantly associated with DCDQ-ES subscales and total scores, different cut-off points were calculated following the original age categorization of the DCDQ [23]. The resulting Spanish cut-offs reflected the lower mean scores found in typically developing Spanish children in comparison with the Canadian children, except for younger children. Identifying DCD in young children may be more complicated than in older children because motor performance is more variant and coordination difficulties can be overturned [3,33].
Country-adjusted cut-off points have also been developed for other Southern American and European versions of the DCDQ, and these are usually lower than the original ones [35,61]. The established cut-off points for Brazilian children are significantly lower than both the Canadian and the Spanish norms, indicating lower overall scores in the DCDQ for Brazilian children [61]. While Italian adjusted cut-off points are almost similar to the Spanish norms in younger children, they differ significantly in the 8-9 and 10-12-years-old groups [35], which in the Spanish situation may reflect an increasing improvement in motor performance with age [21]. This situation may be due to different motor coordination standards between North America and South America or Southern Europe, which are consistent with the different prevalence rates of probable DCD among these populations [21,60]. Interestingly, differences between Italian and Spanish cut-off points further support that variances in motor coordination performance exist even between regions that may be perceived as similar.
The 5th percentile is often taken as the cut-off point in tools designed to identify the risk of DCD in research [32,35,[62][63][64], and so it is the cut-off point recommended by the authors when using the DCDQ in Spain to operationalize criterion B of the diagnostic criteria for DCD diagnosis in research practice. Conversely, the use of the 15th percentile is recommended in clinical practice. However, as the aim of the DCDQ-ES is to identify as many children with probable DCD as possible, different percentile scores are given so that researchers and healthcare practitioners can compare a child's performance in each of the three factors and the total scale in relation to the normative sample, thereby detecting those children with mild motor coordination difficulties in order to prompt strategies to prevent further consequences. An additional recommendation for clinicians would be to not only be alert to the total DCDQ-ES score but to notice whether the child scores lower than their peers in a particular area (i.e., control during movement, fine motor/handwriting or general coordination), as children with DCD present with a variety of motor coordination issues.
As expected, the Spanish recommended cut-off score for clinical practice in the overall sample, regardless of age group, is higher than the Canadian value (57 vs. 53). It is interesting to note that although this overall cut-off resulted in quite similar sensitivity values (Spanish = 77%; Canadian = 81%), the specificity in the Spanish version is significantly higher (Spanish = 83%; Canadian = 65%).
However, sensitivity and specificity values for the clinical proposed Spanish cut-off were similar with that of the original DCDQ and other cross-cultural adaptations [23,[25][26][27]30,35]. For instance, sensitivity and specificity of the German version of the DCDQ for a clinic sample was 72.7% and 95%, while these values decreased to 30% and 86.7% in a community sample [25]. The Italian-adjusted cut-off scores resulted in a sensitivity of 59% and a specificity of 65% for a community-based sample [35], but these values increased to 88% and 96% if using a clinical DCD sample [27]. The sensitivity and specificity for Brazilian children is 73% and 86.6%, respectively [26], and the European French values are similar as well (sensitivity = 85%, specificity = 81.6%) [30].
One important finding in this study was that using the non-country-adjusted cut-off points for Spanish children resulted in a significant mismatch and a high rate of false-positive diagnoses of probable DCD, especially in children older than 7 years. As previously discussed, the discrepancy between Canadian and Spanish norms could be explained by differences in motor coordination standards between regions, which have been reported in previous studies across European and American populations [35,45,60,62]. It may be possible that parents from different cultural and geographical backgrounds have distinct standards on rating their child's motor performance in comparison to other children.
These findings show that it is crucial to develop and promote the use of country-adjusted norms in order to prevent misleading outcomes in clinical and research practice. Possible clinical consequences of mistakenly identifying children with probable DCD include not only economic and resource costs but also the cost of putting families and children through unnecessary stress and potentially delaying a definite diagnosis. As the DCDQ-ES aims to operationalize criterion B of the DSM-5 diagnostic criteria for DCD, a diagnosis of definite DCD only should be made after a comprehensive multidisciplinary evaluation [3,33]. An occupational therapy evaluation of the impact of motor deficits on a child's activities in daily living has specific relevance in the diagnosis of DCD (criterion B). Therefore, it is recommended to include pediatric occupational therapists in the multidisciplinary team.
Some limitations of this study should be addressed. One important limitation was that a definite diagnosis of DCD could not be established in the pDCD group. However, only children who scored at the most restrictive cut-off in the DCDDaily-Q were included in the pDCD group. Another limitation regarding the pDCD group is that most severe cases (i.e., children who had been referred for motor coordination difficulties in addition to another potential neurodevelopmental condition) were more likely to be recruited in this study, which may constitute a bias. A second limitation is that the sample size of the 10-11-years-old group in the pDCD group was very small. Additionally, our sample did not include children aged 12-15, therefore the norms for the older age group should be considered when assessing Spanish children older than 11 years. Finally, intra-rater reliability, test-retest reliability and concurrent validity with objective motor test batteries were not tested. Future research directions might include gathering data from children with a definite diagnosis of DCD in order to further test the sensibility and specificity of the proposed cut-off scores.
Conclusions
The present study has both research and clinical implications as it reports further information about the psychometric properties of the European Spanish version of the DCDQ and provides the reference norms for Spanish children. Findings show that the DCDQ-ES is a reliable and valid instrument for assessing motor coordination issues and for identifying children with probable DCD in Spanish context. Age-specific cut-off points adjusted to the Spanish population are provided for research and clinical purposes. The DCDQ-ES is a cost-effective, accessible and reliable measure for easy and quick assessment of motor coordination that may prompt further and comprehensive evaluation of potential DCD if needed. Health practitioners working in pediatric primary care or with children, such as occupational and physical therapists, can benefit from these findings and use the DCDQ-ES to operationalize criterion B of the diagnostic criteria for DCD.
Acknowledgments:
The authors want to acknowledge the collaboration and participation of the schools and families involved in the data collection. The authors thank the original author of the DCDQ, Brenda Wilson, for giving us the permission to cross-cultural adapt and validate the DCDQ into European Spanish.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2020-04-08T19:07:49.365Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "f1c047b95d0b401d9d508c617896d125a5a6d8a6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijerph17072425",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be2fdb962bdb2d28c7fe77ebea128bfcb79f857d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
26560164 | pes2o/s2orc | v3-fos-license | Brain Gene Expression Analysis: a MATLAB toolbox for the analysis of brain-wide gene-expression data
The Allen Brain Atlas project (ABA) generated a genome-scale collection of gene-expression profiles using in-situ hybridization. These profiles were co-registered to the three-dimensional Allen Reference Atlas (ARA) of the adult mouse brain. A set of more than 4,000 such volumetric data are available for the full brain, at a resolution of 200 microns. These data are presented in a voxel-by-gene matrix. The ARA comes with several systems of annotation, hierarchical (40 cortical regions, 209 sub-cortical regions in the whole brain), or non-hierarchical (12 regions in the left hemisphere, with refinement into 94 regions, and cortical layers). The high-dimensional nature of this dataset and the possible connection between anatomy and gene expression pose challenges to data analysis. We developed the Brain Gene Expression Analysis Toolbox, whose functionalities include: determination of marker genes for brain regions, statistical analysis of brain-wide co-expression patterns, and the computation of brain-wide correlation maps with cell-type specific microarray data.
Chapter 2
The voxel-based Allen Atlas of the adult mouse brain 2.1 Presentation of the dataset 2. 1
.1 Co-registered in situ hybridization data
The adult mouse brain is partitioned into V = 49, 742 cubic voxels of side 200 microns, to which in situ hybridization data are registered [2,3] for thousands of genes. For computational purposes, these gene-expression data can be arranged into a voxel-by-gene matrix.
For each voxel v, the expression energy of the gene g is a weighted sum of the greyscale-value intensities I evaluated at the pixels p intersecting the voxel: where M (p) is a Boolean mask that equals 1 if the gene is expressed at pixel p and 0 if it is not.
Data matrices and gene filters Coronal atlas
Some genes in the Allen atlas of the adult mouse brain gave rise to an ISH experiment and coronal sectioning of an entire brain. The resulting data constitute the coronal atlas.
The coronal atlas contains brain-wide data for G all = 4, 104 genes (this is a subset of the genes for which ISH sagittal sectioning took place for a hemisphere after ISH, see next section for documentation on the sagittal atlas). The corresponding voxel-by-gene data matrix has size V = 49, 742 by G all , and is contained in the file ExpEnergy.mat. The list of genes arranged in the same order as the columns of the data matrix are obtained by using the function get genes.m. 1 load ( ' ExpEnergy . mat ' ); 2% the g−th column o f voxel−by−gene matrix E corresponds 3% to the gene geneNamesAll ( g ) 4 geneNamesAll = get_genes ( Ref , ' allNoDup ' , ' allen ' ); 5% Entrez Ids are arranged i n the same order 6% as gene names ( unresolved Entrez i d s are treated as zero ) 7 geneEntrezIdsAll = get_genes ( Ref , ' allNoDup ' , ' entrez ' ); The matrix in ExpEnergyTop75Percent.mat consists of the 3,041 columns of the matrix defined in Equation 2.1 that are best correlated with the corresponding genes in the sagittal atlas. The names and Entrez ids of the genes 1 are obtained as follows: 1 load ( ' E x p E n e r g y t o p 7 5 P e r c e n t . mat ' ); 2% the g−th column o f voxel−by−gene matrix E 3% corresponds to the gene genesAllen ( g ) 4 genesAllen = get_genes ( Ref . Coronal , ' top75corrNoDup ' , ' allen ' ); 5% Entrez Ids are arranged i n the same order It should be noted that the Entrez ids that are not resolved are represented by zeroes in geneEntrezIds, so Entrez ids should be resolved in fine rather than used during computations.
The start-up file mouse_start_up.m loads the Allen Reference Atlas stored in the file refAtlas.mat and the data matrix ExpEnergytop75percent.mat, which can be used through the structure Ref. • Example 1. Systems of brain annotation. The variable cor=Ref.Coronal contains the coronal atlas, and it can be used to check some of the contents of Table 2. 1. a detailed description of the fields in the structure Ref.Coronal.Annotations can be found in the next section. 1 The differences between the two code snippets are the data matrix loaded, and the filter used as teh second argument of the function get_genes. One can check that after executing any of the snippets, the number of gene names equal the number of columns of the data matrix.
From matrices to volumes 2.2.1 The Allen Reference Atlas (ARA)
The ARA comes in six different versions, described in Table 2. 1. Each of these versions corresponds to an annotation of the three-dimensional grid by digital ids. Each of these digital ids corresponds to a brain region. The correspondence between ids and names of brain regions can be resolved using the fields Ref.Coronal.ids and Ref.Coronal.labels.
• Example 2. From brain regions to digital ids in the ARA. Consider the fine annotation (identifier index 4 in Table 1) and let us work out a threedimensional grid with ones at the voxels corresponding to the caudoputamen in this annotation, and zeroes everywhere else. We can use this grid to compute the number of voxels in the caudoputamen. 1% the three−dimensional g r i d containing the f i n e annotation o f the brain 2 annotFine = get_annotation ( cor , ' fine ' ); 3% the names o f brain r e g i o n s 4 labels = ann . labels { 4 }; 5% the numerical i d s o f the r e g i o n s 6 ids = ann . ids { 4 }; 7% where i s the caudoputamen i n the l i s t ? There may be spaces 8% i n the l a b e l s , s k i p them 9 labelsNoSpace = regexprep ( labels , '\ W ' , ' ' ); 10 ca ud ou pu ta me nI nd ex = find ( strcmp ( labelsNoSpace , ' Caudoputamen ' ) == 1 ); 11 caudouputamenId = ids ( c au dou pu ta me nI nd ex ); 12% put z e r o s at a l l the voxels that do not belong to caudoputamen 13 volCaudoputamen = annotFine ; 14 volCaudoputamen ( volCaudoputamen~= caudouputamenId ) = 0; 15 volCaudoputamen ( volCaudoputamen == caudouputamenId ) = 1; 16% count the voxels i n caudoputamen 17 nu m V o x I n C a u d o p u t a m e n = sum ( sum ( sum ( volCaudoputamen ) ) ); The value of the variable numVoxInCaudoputamen computed in the above code snippet should be 1248.
From gene-expression vectors to volumes
At a resolution of 200 microns, the Allen Reference Atlas (ARA) is embedded in a three-dimensional grid of size 67 × 41 × 58. Out of the V tot = 67 × 41 × 58 in the grid, V = 49, 742 are in in the brain according to the ARA. The brain voxels can be mapped to a three-dimensional grid using the function make_volume_from_labels.m and a specified voxel filter, corresponding to one of the versions of the ARA (as per Table 2.1).
• Example 3. Whole-brain filter. Let us take the whole first column of the data matrix and map it to a three-dimensional grid, using the whole-brain filter, corresponding to the 'standard' annotation: 1 wholeBrainFilter = Ref . Coronal . Annotations . Filter { 1 }; 2 display ( wholeBrainFilter ); 3 brainFilter = get_voxel_filter ( cor , wholeBrainFilter ); 4% a column vector with 49 ,742 elements 5 col1 = E ( : , 1 ); 6 display ( size ( col1 ) ) 7% map t h i s column vector to a volume 8 vol1 = m a k e _ v o l u m e _ f r o m _ l a b e l s ( col1 , brainFilter ); 9 display ( size ( vol1 ) ); Some of the annotations do not extend to the whole brain, as can be seen from Table 2. 1. when comparing columns of the matrix of gene-expression energies to regions in the ARA, it is important to restrict the matrix to the rows that correspond to annotated voxels. For each system of annotation, the field Ref.Coronal.Annotations.Filter is the list of voxels in a 67 × 41 × 58 grid (not a list of row indices in the matrix of expression energies), that are annotated. The example below shows how to recover the list of row indices in the gene-expression matrix corresponding to a given filter.
• Example 4. More filters. For instance, one can check that the following two snippets produce the same matrix EFiltered, consisting of the rows of the full geneexpression matrix corresponding to voxels in the fine annotation: 1 % Compute the s e t o f rows using a volume o f 2 %i n d i c e s and the f i l t e r 3 cor = Ref . Coronal ; 4 identifierIndex = 4; 5 brainFilter = get_voxel_filter ( cor , ' brainVox ' ); 6 numVox = numel ( brainFilter ); 7 % l a b e l the voxels by i n t e g e r s 8 indsBrainVoxels = 1 : numVox ; 9 % arrange the i n t e g e r s i n a volume 10 indsVol = m a k e _ v o l u m e _ f r o m _ l a b e l s ( indsBrainVoxels , brainFilter ); 11 filter = get_voxel_filter ( cor , ann . filter { identifierIndex } ); 12 % r e s t r i c t the volume to the voxels that are i n the f i l t e r 13 indsFiltered = indsVol ( filter ); 14 EFiltered = E ( indsFiltered , : );}} 1 %apply the f i l t e r to each column o f the data matrix 2 cor = Ref . Coronal ; 3 identifierIndex = 4; 4 brainFilter = get_voxel_filter ( cor , ' brainVox ' ); 5 filter = get_voxel_filter ( cor , ann . filter { identifierIndex } ); 6 numGenes = size ( E , 2 ); 7 % r e s t r i c t each colum o f the data matrix to voxels 8 % that are i n the f i l t e r 9 for gg = 1 : numGenes Note also that the voxel filter can be recomputed from the three-dimensional annotation and the brain-wide filter: 1% the three−dimensional g r i d containing the f i n e annotation o f the brain 2 annotFine = get_annotation ( cor , ' fine ' ); 3 labels = ann . labels { 4 }; 4 ids = ann . ids { 4 }; 5 labelsNoSpace = regexprep ( labels , '\ W ' , ' ' ); 6 ca ud ou pu ta me nI nd ex = find ( strcmp ( labelsNoSpace , ' Caudoputamen ' ) == 1 ); 7 caudouputamenId = ids ( c au dou pu ta me nI nd ex ); 8 volCaudoputamen = annotFine ; 9 volCaudoputamen ( volCaudoputamen~= caudouputamenId ) = 0; 10 volCaudoputamen ( volCaudoputamen == caudouputamenId ) = 1; 11 p l o t _ i n t e n s i t y _ p r o j e c t i o n s ( volCaudoputamen ); The fineAnatomy filter. It is manifest from the projection of the characteristic function of the caudoputamen in the fine annotation ( Figure 2.2) that the fine annotation contains only the left hemisphere of the brain. We can confirm this by applying the voxel filter of the fine annotation (called fineAnatomy, which is the value of Ref.Coronal.Annotations.filter{4}) to the gene-expression vector of Gabra6 . The following snippet should reproduce Figure 2.3, which is equivalent to Figure 5.1, with all the voxels that are not in the fineAnatomy filter filled with zeros. 1% take the brain−wide expression o f Gabra6 2 indexGabra6 = find ( strcmp ( genesAllen , ' Gabra6 ' ) == 1 ); 3 columnGabra6 = E ( : , indexGabra6 ); 4 volGabra6 = m a k e _ v o l u m e _ f r o m _ l a b e l s ( columnGabra6 , brainFilter ); 5% c o n s i d e r the voxel f i l t e r o f the ' f i n e ' annotation 6 fineFilter = get_voxel_filter ( cor , ' fineAnatomy ' ); 7% take the expression values at the voxels i n the f i l t e r , and arrange them 8% i n a column vector ( one can check that 9% dataGabra6Filtered has the same number o f 10% elements as f i n e F i l t e r 11 colG abra6F iltere d = volGabra6 ( fineFilter ); 12% map these values to a three−dimensional grid , using f i n e F i l t e r • Example 8. Brain-wide versus left hemisphere. As can be seen on Table 2.1, the standard annotation contains all the voxels in the brain (49,742 of them). 
We can compute the characteristic function of the caudoputamen in this annotation, and check 7% where i s the caudoputamen i n the l i s t ? 8 labelsNoSpace = regexprep ( labels , '\ W ' , ' ' ) ; 9 ca ud ou pu ta me nI nd ex = find ( strcmp ( labelsNoSpace , ' Caudoputamen ' ) == 1 ); 10 caudouputamenId = ids ( c au dou pu ta me nI nd ex ); 11% put z e r o s at a l l the voxels that do not belong to caudoputamen 12 volCaudoputamen = annotStandard ; 13 volCaudoputamen ( volCaudoputamen~= caudouputamenId ) = 0; 14 volCaudoputamen ( volCaudoputamen == caudouputamenId ) = 1; 15% count the voxels i n caudoputamen 16 n u m V o x I n C a u d o p u t a m e n S t a n d a r d = sum ( sum ( sum ( volCaudoputamen ) ) ); 17 display ( n u m V o x I n C a u d o p u t a m e n S t a n d a r d ); 18% reproduce Figure
Sections
The function flip_through_sections.m allows to go through the sections of a (67×41×58) volume, of a kind specified in the options. It pauses between sections. The duration of the pause is one second by default, it can be adjusted using the field secondsOfPause of the options. If the value of secondsOfPause is negative, the user will have to press a key to display the next section. • Example 9. Sections of the average across all genes in the dataset. The following code allows to visualize the coronal, sagittal and axial sections of the average expression E across all genes in the data matrix, defined in Equation 2.2:
Relation between the various annotations in the ARA
The field Ref.Coronal.Annotations.parentIdx in the reference data structure gives the indices (not the numerical id) of the parents of the regions in a given annotation, arranged in the same order as these regions. If a region does not have a parent in the considered system of annotation, the corresponding entry in parentIdx is zero. NB: the list of regions in the 'cortex' annotation is not closed under the operation of taking the parent of a region in a hierarchy. The field parentSymbols has to be used instead of parentIdx to work out the hierarchy.
• Example 10. Hierarchical and non-hierarchical annotations. Let us check that the big12 and fine annotations are non-hierarchical, and investigate parents and descendants of the caudoputamen and of the cerebellum in the standard annotation.
Let us work out which regions (if any) correspond to an empty set of voxels in the standard annotation. Only two of the 7 regions corresponding to an empty set of voxels are therefore leaves of the hierarchical tree. We can conclude that they are too small to be represented by voxels at a spatial resolution of 200 microns. The other 5 regions have descendants in the hierarchy, but they can only be resolved by taking the reunion of the voxels belonging to their descendants (and descendants thereof). One can note a curiousity about Striatum dorsal region, which only has one descendant, caudoputamen (which corresponds to the index uu=1 in the above loop). The inclusion of caudoputamen in the dorsal region of the striatum is therefore trivial, and the two labels Striatum dorsal region and Caudoputamen can be treated as synonyms of each other.
The fine annotation compared to the big12 annotation. Let us show that the 'fine annotation is a refinement of the big12 annotation. It is manifest from Table 2.1 that the fine annotation covers fewer voxels than, but we can check that 1) all these voxels are also in the big12 annotation, and 2) each region in the fine annotation intersects only one region in the big12 annotation. The toolbox contains these code snippets as two functions that work out the refinement of the big12 annotation, and the organisation of the fine annotation into larger regions of the brain. One can check that the following reproduces the above results: For instance, the 16-th region in the fine is Nucleus Accumbens. Check it: ( 16 ) ); 2% check that t h i s region i s a the r i g h t index i n annotationFineToBig12 3 display ( a n n o t a t i o n F i n e T o B i g 1 2 . labelsFine ( nAccIndex ) ); 4% In which region o f Big12 i s i t included ? 5 display ( a n n o t a t i o n F i n e T o B i g 1 2 . l a b e l s P a r e n t B i g A t l a s ( nAccIndex ) ); 6% Apart from nucleus accumbens which are the other subregions o f the Genes versus neuroanatomy 3.1 Localization scores 3. 1
.1 Localization scores of a single gene in the ARA
Let us define [4] the localization score λ ω (g) of a gene g in a region ω as the ratio of the squared L 2 -norm of the expression energy of gene g in region ω to the squared L 2 -norm of the expression energy of gene g in the set Ω of voxels that are annotated in the version of the ARA that contains ω.
It can be computed from a voxel-by-gene matrix (with one score per column for a fixed region ω), using the function localization_from_id.m, given the numerical id of the region ω and the numerical identifier corresponding to an annotation containing ω.
• Example 13. One gene, one region. Let us compute the localization score of Pak7 in the cerebral cortex, as defined by the big12 annotation of the left hemisphere.
Localization scores of sets of genes in the ARA
where G is the number of genes in our dataset (G = 3, 041 by default when using the start-up file mouse_start_up.m).
Let us define the localization score in the brain region ω of a weighted set of genes encoded by Equation 3.2 as • Example 16. Let us compute the best generalized localization scores in the big12 annotation. The fitting score φ ω (g) of a gene g in a region ω is defined where E norm g is the L 2 -normalized g-th column E g of the matrix of gene-expression energies: the symbol Ω denotes the set of voxels in a given system of annotation that contains region ω. The definition is the only decreasing affine function of the squared L 2 norm of the difference between the normalized gene-expression vector of gene g and the characteristic function χ ω of the region ω: where the denominator in Equation 3.6 ensure the L 2 -normalization V v=1 χ ω (v) 2 = 1.
Given a gene, a system of annotation chosen among the ones in Table 2.1, and the numerical id of a region in this system of annotation, the function fitting_from_id.m computes the fitting score of the gene to the corresponding region.
• Example 17. One gene, one region in a given annotation. Let us compute the fitting score of Pak7 in ω = Cerebral cortex, in the big12 annotation (hence identifierIndex = 5, see Table 2 The function fitting_from_id.m can also used to compute the fitting score of several genes. If the second argument of the function is a voxel-by-gene matrix with p columns, the function returns an array of p fitting scores arranged in the same order as the columns. Given a version of the ARA (specified by the index identifierIndex), a region-bygene matrix of all the fitting scores of all genes corresponding to the columns of the data matrix. Note that the columns of this region-by-gene score matrix do not sum to a constant (the squares of the entries of each column sum to the square of the fraction of the gene-expression that projects onto the set of voxels in the annotation, which is at most 1).
• Example 18. Fitting scores of all genes in all the regions in the big12 annotation.
Fitting scores of sets of genes in the ARA
Like the localization score, the fitting score can be generalized to linear combinations of sets of genes: where E norm α is the L 2 -normalized gene-expression vector corresponding to the coefficients (α 1 , . . . , α G ) which can be minimized wrt the weights of the genes using Matlab code implementing an interior-point method by Koh [5]: Genes versus genes: co-expression networks 4
.1 Co-expression networks of genes in the Allen Atlas
The co-expression of two genes in the Allen Atlas is defined as the cosine similarity between their gene-expression vectors in voxel space. Given a voxel-by-gene matrix containing the brain-wide expression energies (as in Equation 2.1), the corresponding gene-by-gene matrix of co-expressions of the full set of genes, or coExpr full , is the symmetric matrix with entries equal to the co-expression of pairs of genes, as in Equation 4. 1. (4.1) The matrix coExpr full can be computed as follows in Matlab: The function co_expression_matrix.m takes a voxel-by-gene matrix as an argument and returns the gene-by-gene co-expression matrix defined by Equation 4.1, which equals the matrix coExpressionFull defined in the above code snippet if the full voxel-by-gene matrix of gene expression energies is used as an argument. Other versions of the Allen Atlas than the brain-wide standard annotations can be specified in the options to restrict the voxels to one of the annotations described in Table 2. 1.
• Example 20. Distribution of co-expression coefficients. The diagonal elements of the co-expression matrix equal 1 by construction, and the co-expression matrix is symmetric. Hence, the distibution of co-expression coefficients in the atlas is given by the upper diagonal coeefficients of the co-expression matrix, which can be extracted using the function upper_diagonal_coeffs.m, as in the code snipped below, which plots the distribution of brain-wide co-expression coefficients (Figure 4.1)) and compares it to the one of co-expression coefficients in the left hemisphere (as defined in the big12 annotation, Figure 4 4.2 Monte Carlo analysis of brain-wide co-expression networks 4
.2.1 Special sets of genes versus full atlas
The gene-by-gene matrix coExpr full defines a universe in which we would like to study co-expression networks of special sets of genes, in a probabilistic way.
Given a set of G special genes of interest, corresponding to the column indices (g 1 , . . . , g special ) in the data matrix, their co-expression matrix coExpr special is obtained by extracting the submatrix of coExpr full corresponding to these indices (see Equation 4.2.4).
Cumulative distribution function of co-expression coefficients in sets of genes drawn from the Allen Brain Atlas
Having observed (Figure 4.1) that the distribution of pairwise co-expression coefficients of genes in the whole coronal atlas is roughly linear in a large domain of co-expression, we can study the cumulative distribution function of the co-expression coefficients in the special set, and compare it to the one resulting from random sets of genes (with the same number of genes as the special set, in order to eliminate the sample-size bias).
These cumulative distribution functions are evaluated in the following way. Again let G special denote the size of the matrix coExpr special , i.e. the number of genes from which coExpr special was computed. Consider the set of entries above the diagonal of coExpr special above the diagonal (which are the meaningful quantities in coExpr special ): The elements of this set are numbers between 0 and 1. For every number between 0 and 1, the cumulative distribution function (c.d.f.) of C special , denoted by cdf C is defined as the fraction of the elements of C special that are smaller than this number: where C special = G special (G special − 1)/2.
For any set of genes, cdf special is a growing function cdf special (0) = 0 and cdf special (1) = 1. For highly co-expressed genes, the growth of cdf special is concentrated at high values of the argument (in the limit where all the genes in the special set have the same brain-wide expression vector, all the entries of the co-expression matrix go to 1 and the cumulative distribution function converges to a Dirac measure supported at 1). To compare the function cdf special to what could be expected by chance, let us draw R random sets of G special genes from the Atlas, compute their co-expression network by extracting the corresponding entries from the full co-expression matrix of the atlas (coExpr full ). This induces a family of R growing functions cdf i , 1 ≤ ı ≤ R on the interval [0, 1] From this family of functions, we can estimate a mean cumulative distribution function cdf of the co-expression of sets of G special genes drawn from the Allen Atlas, by taking the mean of the values of cdf i across the random draws: The functions cdf special , cdf and cdf dev are fields of the output of the function cumul_co_expr.m whose usage is illustrated in the example below.
Example 21. Cumulative distribution function of co-expression of a special set of genes. Consider the set of 288 genes from the NicSNP database, whose position in the data matrix is encoded in the Matlab file nicotineGenes.mat. NB: the code snippet below can be applied to any special set of genes upon changing the variable indsSmall. It should reproduce
Thresholding the co-expression matrix
The co-expression matrix coExpr special corresponding to a special subset of the genes in the Allen Atlas (Equation 4. 2.4) is symmetric, like coExpr full , and its entries are in the interval [0, 1]. It can be mapped to a weighted graph in the following way (see [6] for details). The vertices of the graph are the genes, and the edges are as follows: -genes g and g are linked by an edge if their co-expression is strictly positive.
-If an edge exists, it has weight coExpr special gg .
Let us define the following thresholding procedure on co-expression graphs: given a threshold ρ between 0 and 1, put to zero all the entries of coExpr special that are lower than this coefficient. The underlying graph is obtained by taking the graph corresponding to coExpr special , and cutting all the links with weight below ρ.
The more-co-expressed a set of genes is, the larger the connected components of the thresholded graphs uderlying coExpr special ρ will be, for any value of the threshold ρ. For instance we can study the average size of connected components of thresholded co-expression matrices and the size of the largest connected component as a function of the threshold ρ : 12) where N ρ (k) is the number of connected components with size k. The connected components are worked out using the implementation of Tarjan's algorithm [17] in the Matlab function graphconncomp.m.
Statistics of sizes of connected components
For any quantity worked out from the special co-expression matrix co-expression matrix coExpr special defined in Equation , we can simulate its probability distribution by repeatedly drawing random random sets of genes from the atlas, and recomputing the same quantity on for this set.
Let us use the above-defined thresholding procedure to study a set of G special = 288 genes obtained by intersecting the NicSNP database [16] with the set of G = 3, 041 genes given by get_genes( Ref.Coronal, 'top75CorrNoDup', 'allen'). We would like to acertain whether this set of genes is more co-expressed than expected by chance for a set of this size taken from genesAllen. At each value of a regular grid the threshold ρ between zero and 1, the function co_expression_island_bootstrap.m computes the maximal size and average size of connected components of the thresholded co-expression graph, and draws R random sets of genes of size G special from the atlas. This induces a distribution of R partitions of sets of G special genes into connected components, obtained by applying Tarjan's algorithm to each of the R sets of genes.
Example 22. Consider the set of 288 genes from the NicSNP database, whose position in the data matrix is encoded in the Matlab file nicotineGenes.mat.
Neuroanatomical properties of connected components Relation between fitting scores and co-expression
If the co-expression of pairs of genes is defined as the cosine of the angle between their expression vectors in voxel space, as in Equation 4.1, the fitting score of a gene g to a region ω of the brain equals the co-expression of gene g and a (hypothetical) gene whose expression profile would be proportional to the characteristic function of region ω: where the second equality comes from the normalization of E mathrmnorm g and χ ω in voxel space.
Fitting scores of sums of gene-expression vectors
At a given level of the threshold on co-expression, a set of genes is partitioned into connected components induced by the graph underlying the thresholed matrix defined in Equation .
For a set of genes of fixed size numGenes, the function fitting_distribution_in_atlas.m estimates the distribution of fitting scores of the sum of sets of genes of a given size, extracted from the Allen Atlas.
In these functions, the genes are not weighted by coefficients to be optimized, they are simply summed over, so that the set of gene-expression vectors is different from the one explored in marker genes. The fitting score φ sum ω of the sum of genes depends only on a list of K distinct K genes {g 1 , . . . , g K }, and a region ω in the Allen Reference Atlas: where E norm ({g 1 , . . . , g K }) is the sum of gene-expression vectors of the genes {g 1 , . . . , g K }, normalized in the L 2 sense: . One can search the results by P -value of fitting scores, and/or size of connected components, and/or brain region: Example 25. Neuroanatomy of nicotine-related genes in the big12 annotation (continued). 1% show a l l the connected components , at any value o f 2% the threshold , whose f i t t i n g s c o r e to any region i s estimated to be i n Gene-based and cell-based expression data The computational techniques exposed in this chapter allow to reproduce the results of [7,11,12,8,9,10] (see also [13,14,15] for analyses of the Allen Atlas data in terms of cell types).
Cell-type-specific microarray data
The file G_t_means.txt contains a matrix of microarray data. The rows correspond to the cell-type-specific samples coming from several studies and analyzed in [18]. The columns correspond to genes (the matrix is a type-by-gene matrix). The following code snippet (included in the file cell_type_start_up.m) defines two matrices with the same numbers of columns, corresponding to the intersection of the sets of genes in the coronal Allen Brain Atlas and in the microarray data:
Brain-wide correlation profiles
The function cell_types_correls.m computes the voxel-by-type correlation matrix between the coronal Allen Brain Atlas and a set of cell types, as in equation 5. 1. 2) Example 26. Brain-wide correlation profiles between cell-type-specific data and the Allen Atlas. To compute the correlations for all the available cell types, the type indices specified in the variables span the whole range of row indices of the matrix C: The results are saved in the file cellTypesCorrelations.mat. It has V = 49, 742 rows and T = 64 columns (the matrix cellTypesCorrelations is a voxel-by-type matrix, consistently with the definition of Corr(v, t) in Equation 5.1), so that each column can be mapped to a volume and visualized in exactly the same way as a column of the data matrix E: Example 27. Visualization of brain-wide correlation profiles. Let us plot maximal-intensity projections of the correlations between the 20-th row of C and all the rows of the Allen Atlas.
Brain-wide density estimates of cell types
To decompose the gene-expression at every voxel in the Allen Atlas into its cell-typespecific components, let us introduce the positive quantity ρ t (v) denoting the contribution of cell-type t at voxel v, and propose the following linear model: Both sides are estimators of the number of mRNAs for gene g at voxel v. The residual term in Equation 5.4 reflects the fact that T = 64 cell types are not enough to sample the whole diversity of cell types in the mouse brain, as well as noise in the measurements, reproducibility issues, the non-linearity of the relations between numbers of mRNAs, expression energies and microarray data.
To find the best fit of the model, we have to minimize the residual term by solving the following problem : (ρ t (v)) 1≤t≤T,1≤v≤V = argmin φ∈R + (T,V ) E E,C (φ), (5.5) where (as defined in Equation 5.1), one can rank the regions in a given annotation of the Allen Reference Atlas by computing the average correlation across in each of the regions: For each cell type t, each region r in the ARA is ranked according to its average correlation. Call its rank χ(r). In particular, the correlation profile yields a top region ρ χ (t) in the ARA, for which χ(ρ χ (t)) = 1. This is the region whose voxels are most highly correlated to cell type t on average.
Ranking of regions by estimated density of a cell type. Consider a system of annotation of the ARA, taken from Table 2.1 Big12 is the default option in the Matlab implementation). For each cell-type index t and each region V r (where r is an integer index taking values in [1 ..R], where R is the number of regions in the chosen system of annotation), we can compute the contribution of the voxels of the region to the brainwide density profile of cell type t: Given a cell type t, the regions in the ARA is ranked according to its average correlation. Call its rank φ(r). In particular, the correlation profile yields a top region ρ φ (t) in the ARA, for which φ(ρ φ (t)) = 1. This is the region whose voxels bring the largest total contribution to the vector ρ t .
These two rankings are implemented in the function classify_pattern.m (see example below). The values of the fraction of total density and average correlation for a given cell type can be plotted (in the big12 version of the ARA), using 1 the functions plot_correlations_for_big12.m and plot_densities_for_big12.m. Given a region in the ARA (our best guess according to one of the above criteria), we need to pick a section that intersects this region. The function cell_type_vol_prepare.m implements the choice of the section. It chooses the section (which can be sagittal, coronal or axial, specified by the field options.sectionStyle of the options) that intersect the desired region (specified by the fields options.identifierIndex and options.regionIndexForSection) along the largest number of voxels, unless the field options.customIndex equals 1. Then it works out the section (of the required style) at a position specified by option.desiredIndex.
Chapter 6
Clustering
6.1 Kullback-Leibler distance from gene-expression to a given probability distribution
One can use the Kullback-Leibler (KL) divergence to compare the brain-wide expression of a gene to a given probability distribution over the brain.
This probability distribution can be, for instance, the normalized average E^avg_full across all genes in the coronal atlas, which gives
KL_avg(g) = ∑_v Ẽ_g(v) log( Ẽ_g(v) / Ẽ^avg_full(v) ), (6.1)
where Ẽ^avg_full and Ẽ_g are probability densities over the brain obtained by normalization:
Ẽ_g(v) = E_g(v) / ∑_{v'} E_g(v'),  Ẽ^avg_full(v) = E^avg_full(v) / ∑_{v'} E^avg_full(v'). (6.2)
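A minimal Python sketch of this computation is shown below. It assumes the divergence is taken from the normalized gene pattern to the reference distribution (the direction suggested by the section title) and skips voxels where the gene pattern is zero.

```python
import numpy as np

def kl_to_reference(E_g, E_ref, eps=1e-12):
    """KL divergence between the normalized expression of one gene and a
    reference probability distribution over voxels."""
    p = E_g / E_g.sum()
    q = E_ref / E_ref.sum()
    mask = p > 0                    # 0 * log 0 contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

# Usage with a voxel-by-gene matrix E: compare each gene to the average pattern.
# kl = [kl_to_reference(E[:, g], E.mean(axis=1)) for g in range(E.shape[1])]
```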
Construction of a bipartite graph from a voxel-by-gene matrix
The Allen Atlas can be mapped to a weighted bipartite graph in the following way:
• the first set of vertices consists of voxels, numbered from 1 to V,
• the second set of vertices consists of genes, numbered from 1 to G,
• each of the edges connects one voxel, say v, to one gene, say g, and has a weight given by the expression energy E(v, g) of the gene at the voxel.
We looked for partitions of this weighted bipartite graph into subgraphs such that the weights of the internal edges of the subgraphs are strong compared to the weights of the edges between the subgraphs. This is the isoperimetric problem addressed by the algorithm of [19] (the graph need not be bipartite to apply this algorithm, but since we started with a bipartite graph, each of the subgraphs, or biclusters, returned by the algorithm, is bipartite, and therefore corresponds to a set of voxels and a set of genes).
Given a weighted graph, the algorithm cuts some of the links, thus partitioning the graph into a subset S and its complement S̄, such that the sum of weights in the set of cut edges is minimized relative to the total weight of internal edges in S. The sum of weights in the set of cut edges is analogous to a boundary term, while the total weight of internal edges is analogous to a volume term. In that sense the problem is an isoperimetric optimization problem, and the optimal set S minimizes the isoperimetric ratio ρ over all the possible subgraphs:
ρ(S) = ( ∑_{i∈S, j∈S̄} W_ij ) / ( ∑_{i∈S, j∈S} W_ij ),
where the quantity W_ij is the weight of the link between vertex i and vertex j. Once S has been worked out, the algorithm can be applied separately to S and its complement S̄. This recursive application goes on until the isoperimetric ratio reaches a stopping ratio, representing the highest allowed isoperimetric ratio. This value is a parameter of the algorithm. Raising it results in a higher number of clusters, as it increases the number of acceptable cuts. The implementation of the recursive partition of the bipartite graph in the present toolbox is due to Grady and Schwartz [19].
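For concreteness, the sketch below evaluates the isoperimetric ratio described above for a candidate subset S of a weighted graph in Python. It only scores a given partition; the Grady-Schwartz algorithm that actually searches for good partitions involves solving a linear system and is not reproduced here.

```python
import numpy as np

def isoperimetric_ratio(W, in_S):
    """Cut weight of S divided by the total weight of edges internal to S,
    following the definition given in the text."""
    in_S = np.asarray(in_S, dtype=bool)
    cut = W[np.ix_(in_S, ~in_S)].sum()       # edges leaving S
    internal = W[np.ix_(in_S, in_S)].sum()   # edges inside S
    return cut / internal if internal > 0 else np.inf

# For the voxel-by-gene matrix E, the bipartite adjacency is
#   W = [[0, E], [E.T, 0]], so a subset S mixes voxel and gene vertices.
```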
Distribution of localized states from fine analysis of electron spin resonance spectra of organic semiconductors: Physical meaning and methodology
We develop an analytical method for the processing of electron spin resonance (ESR) spectra. The goal is to obtain the distributions of trapped carriers over both their degree of localization and their binding energy in semiconductor crystals or films composed of regularly aligned organic molecules [Phys. Rev. Lett. v. 104, 056602 (2010)]. Our method has two steps. We first carry out a fine analysis of the shape of the ESR spectra due to the trapped carriers; this reveals the distribution of the trap density of the states over the degree of localization. This analysis is based on the reasonable assumption that the linewidth of the trapped carriers is predetermined by their degree of localization because of the hyperfine mechanism. We then transform the distribution over the degree of localization into a distribution over the binding energies. The transformation uses the relationships between the binding energies and the localization parameters of the trapped carriers. The particular relation for the system under study is obtained by the Holstein model for trapped polarons using a diagrammatic Monte Carlo analysis. We illustrate the application of the method to pentacene organic thin-film transistors.
I. INTRODUCTION
The electron spin resonance (ESR) technique offers a unique microscopic probe of carriers in semiconductors with unpaired spins. It measures transitions between the quantum levels m s = ±1/2 in the presence of a magnetic field [1][2][3][4] . The spectrum of the transition between two quantum levels constitutes a δ-function provided there is no external interference with the quantum levels of the spin system. However, the quantum states of carrier spins undergo a variety of interactions with the environment. This interaction destroys the δ-functional form of the spectrum and broadens it. The broadening of the ESR signal is a result of two fundamentally different contributions from the medium. The first is a decay of the quantum levels caused by interaction with the excitations of the environment. This mechanism leads to the Lorentzian shape of the spectral line. The second contribution is a result of the inhomogeneities of the medium. One example is the interaction with nuclear spins that is known as hyperfine interaction; here the inhomogeneities are caused by the probability distribution of the nuclear spin moments. In this case the spectroscopic signal is the sum of the contributions from spin systems located in different surroundings. The energy of signal in every surrounding is shifted by the local magnetic field depending on the environment so that the summed signal has an inhomogeneous shape.
As an example, electronic spin of cationic pentacene molecule isolated in a solution exhibits inhomogeneous broadening of the ESR signal that arises because of hyperfine coupling with 14 proton nuclear spins. The signal is constituted of a series of individual lines due to the hyperfine splitting. The envelope function of the signal is roughly reproduced by a Gaussian [5][6][7] . This feature is consequence of the central limit theorem (CLT). The local magnetic field for each electronic spin is caused by interaction with 14 proton nuclear spins and the random energy shifts of respective ESR signals are inevitably spread in accordance with Gaussian distributions, as a result of the independent nature of respective nuclear spin orientations.
We note that the anisotropic values of the g-factor are averaged out (motionally narrowed) by the rotational motion of the molecules in solution. In contrast, solid-state organic molecular crystals exhibit ESR spectra composed of individual lines that are broadened by the faster decay rate. In this case the individual lines may become unresolved, and the resulting ESR spectrum can be very close to a Gaussian envelope.
Recent major developments in the field of organic thin-film transistors (TFTs) allow high-precision field-induced ESR measurements (referred to in the following simply as ESR) for the carriers in semiconductor crystals or films composed of regularly aligned organic molecules. In these measurements, carriers are doped without introducing any randomness by using the field-induced technique. The ESR signal in organic TFTs was first measured and analyzed in the groundbreaking study by Marumoto and his coauthors 8,9 . The ESR signal observed in pentacene TFTs at room temperature appeared to be narrower than that observed in solution. The authors claimed that the narrower linewidth is evidence of a spatial extension of the wavefunction. According to the CLT, the linewidth of a signal coming from a charge distribution covering N molecules is narrower by the factor 1/√N. Then, assuming that the signal is Gaussian, it was concluded that the wavefunction is spread over N ≈ 10 molecules. However, we note that these analyses were performed on an ESR spectrum with a non-Gaussian lineshape. Subsequent studies have shown that the field-induced ESR signal and linewidth are temperature dependent [10][11][12] . Typical pentacene TFTs and rubrene single-crystal transistors exhibit a sharp ESR signal whose single-Lorentzian linewidth shows motional narrowing 13 effects as the temperature increases. In the case of pentacene TFTs, this feature is consistent with thermally activated multiple trap-and-release (MTR) transport with an activation energy of about 10 meV in the high-temperature range. In contrast, the narrowing effect with increasing temperature is not observed below around 50 K. Indeed, continuous-wave saturation experiments demonstrated that all carriers in the pentacene TFTs are localized at T < 50 K and all relaxation channels are frozen at such low temperatures 14 . However, the ESR spectrum still deviates from a simple Gaussian at sufficiently low temperatures. Therefore, the deviation of the ESR signal from the Gaussian shape is not a result of relaxation or motional narrowing but should be associated with the nature of weakly localized carrier states in the organic TFTs. Understanding the carrier transport of organic electronic devices is an important current theoretical and experimental challenge in materials physics, as these devices permit the production of large-area and flexible electronic products 15,16 .
In the present paper we analyze the situation when the ESR signal of a semiconducting organic molecular system is a smooth curve that deviates from the Gaussian linewidth even at very low temperatures where the carriers are localized. Here we assume that peculiar lineshape is caused by a further specific inhomogeneity of the pentacene TFTs, as associated with the distribution of weakly-localized carrier states which are responsible for the device operation. Given this assumption, we have developed a unique technique for obtaining the trap density of states from a few to tens of meVs in pentacene TFTs 14 the algorithm of which is described in detail in this paper. Note that the obtained energy resolution for the trap density of states is much higher than that by other methods based on transport or optical measurement [17][18][19][20][21][22][23] . The mathematical method suggested here is rather general and can be applied to analyses of broad spectrum in a variety of problems outside of those considered here.
In Section II we study the deviation of the inhomogeneous low-temperature ESR signal from the Gaussian shape. We show that the signal from the one pentacene molecule is very similar to a Gaussian showing almost no individual lines from hyperfine splittings. It appears that the ESR lineshapes of a signal from many independent traps of the same kind must be a Gaussian (see II A-II B) whose width is uniquely determined by a single localization parameter N ef f , namely an effective number of sites where carrier is localized. Therefore, it is concluded that the non-Gaussian lineshape can be ascribed to the super-position of signals from different kinds of traps, where each kind of trap is described by its own localization parameters N ef f . In Section II C we derive an explicit relationship between the shape of the experimental ESR signal and the distribution of the traps over the localization parameters N ef f . This relationship is a Fredholm integral equation of the first kind where unknown function is a distribution of traps. Section III presents an algorithm to solve it based on the stochastic optimization method (SOM) [24][25][26][27] . We describe the SOM and present an analysis of its sensitivity to the experimental noise in Sections III A and III B, respectively. Sections III C-III D present experimental details, handling realistic noisy experimental data by SOM, and results for traps distributions in pentacene TFTs. Section III E presents methods that find the limits of the reliability of the distributions obtained.
Section IV shows how the distribution of the traps over the localization parameter N ef f can be mapped as a distribution over the binding energies E B . This transformation can be formulated generally although a particular implementation of the mapping requires the explicit E B − N ef f relationship between the binding energy E B and the localization parameter N ef f . This relationship for a given model can be obtained using the exact numeric diagrammatic Monte Carlo method 28 , the analytic momentum average method 29,30 , the coherent basis states method 31,32 , or numerous other methods (see [33] for a review). We consider in Section IV A the model of two dimensional Holstein polaron in the field of an onsite attractive center. The distribution of the trapped states over the binding energies E B in pentacene TFTs are shown in Section IV B. Sections V presents a discussion of our results and Section VI provides conclusive remarks.
II. ESR SPECTRA OF TRAPPED CARRIERS IN ORGANIC SEMICONDUCTORS: FUNDAMENTAL KNOWLEDGE AND FURTHER GENERALIZATIONS
In this section, we introduce the well-known characteristic features of the ESR spectra of a single molecule and a cluster containing several molecules (II A). We then present an analysis of a noticeably different case where the carrier is localized on a single impurity in a crystal (II B). Finally, we consider the case of traps of different origin where we can introduce a relationship between the lineshape of an experimental ESR signal and the distribution of the impurities over different localization parameters (II C).
A. ESR spectra for a single molecule and a cluster containing several molecules
In this section, we consider a molecular crystal in which a single molecule contains so many nuclear spins that its ESR spectrum has an inhomogeneous Gaussian shape. A typical situation for a carrier trapped in a molecular crystal is that it is localized in a trap and its distribution over the molecular crystal sites i is characterized by a probability distribution {p_i}. The temperature is assumed to be sufficiently low that we can neglect the "homogeneous" relaxation leading to the Lorentzian shape of the ESR signal. It is also low enough to avoid self-averaging of the inhomogeneities by the "motional narrowing" mechanism.
In this case the lineshape of the ESR signal is determined by the "inhomogeneous" broadening caused by the site dependent distribution of the hyperfine interactions. When the typical width of the individual spectral lines of the split with hyperfine interaction quantum levels is larger than the typical energy distance between these levels 3,6 , the lineshape of the ESR signal is Gaussian. This shape occurs when the carrier is localized in either a single molecule or a cluster containing several molecules.
The case of a carrier trapped in a crystal is noticeably different from the case of a cluster with several molecules. The probability distribution over N molecules i {p i , i = 1, N } is uniform p i = 1/N in a cluster. In contrast, the probability distribution in a crystal trap p i , which is density of the carrier in given site i, is not uniform, and the only restriction is the normalization condition i p i = 1. However, as shown below, the ESR signal of a carrier in a trap always retains the Gaussian shape and the width is uniquely determined by the carrier probability distribution {p i }.
The simplest ESR signal considered in our study is that for a single molecule. The fine structure of the ESR absorption by a single molecule in a condensed environment is frequently blurred by the broadening of the hyperfine levels. The ESR signal from a single molecule in this case is Gaussian. The standard expression describing the hyperfine structure of one molecule is 6
I(B) ∝ ∑_{m_1=−n_1I_1}^{n_1I_1} ⋯ ∑_{m_k=−n_kI_k}^{n_kI_k} P({m_i}) f(B − B_0 − ∑_{i=1}^{k} A_i m_i, Γ). (1)
Here k is the number of groups of equivalent nuclei, n_i is the number of equivalent nuclei in the i-th group, I_i is the nuclear spin in the i-th group, A_i is the hyperfine coupling constant of the i-th group, f(·, Γ) is the profile of a single peak, Γ is the linewidth of each peak, P is the intensity of each peak and B is the magnetic field. If protons (I = 1/2) are the only paramagnetic nuclei, as is the case for pentacene molecules, P is given as
P({m_i}) = ∏_{i=1}^{k} C^{m_i+n_iI_i}_{2n_iI_i}, (2)
where C^{m_i+n_iI_i}_{2n_iI_i} are binomial coefficients. For the particular case of the pentacene molecule we set Γ = 0.02 mT and use the coupling constants {A_i; i = 1, . . . , 4} and numbers of equivalent nuclei {n_i; i = 1, . . . , 4} reported in [5]. The ESR signal obtained from Eqs. (1) and (2) can be represented (see Fig. 1a) as a curve fluctuating around a Gaussian envelope with standard deviation σ_0 = 0.554 mT. The standard situation, known from the physics of gases and solutions, is the case where the carrier is localized in a cluster containing N molecules and its density is spread over the N molecules. In this case the signal retains its Gaussian shape with the width of the distribution reduced by the factor N^{1/2}. The hyperfine structure of the N molecules can be simulated by Eqs. (1) and (2) by replacing n_i → N n_i and A_i → A_i/N. Figure 1b shows an example of the spectrum for N = 2 with standard deviation σ = σ_0/√2. It is clear that the oscillations around the Gaussian envelope are quickly suppressed as N increases.
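The construction of Eqs. (1) and (2) can be illustrated with a short simulation. The sketch below is a Python illustration only: it assumes a Gaussian single-line profile, and the coupling constants and group sizes are placeholders, not the pentacene values of Ref. [5].

```python
import numpy as np
from itertools import product
from math import comb

def hyperfine_spectrum(B, A, n, B0=0.0, gamma=0.02):
    """Superpose hyperfine lines for k groups of equivalent protons (I = 1/2),
    following the structure of Eqs. (1) and (2)."""
    signal = np.zeros_like(B)
    # total spin projection of group i runs from -n_i/2 to +n_i/2 in unit steps
    ranges = [np.arange(-ni / 2, ni / 2 + 1) for ni in n]
    for ms in product(*ranges):
        weight = np.prod([comb(ni, int(mi + ni / 2)) for mi, ni in zip(ms, n)])
        shift = sum(Ai * mi for Ai, mi in zip(A, ms))
        signal += weight * np.exp(-(B - B0 - shift) ** 2 / (2 * gamma ** 2))
    return signal / signal.max()

B = np.linspace(-2.0, 2.0, 2001)          # mT
A = [0.30, 0.15, 0.10, 0.05]              # placeholder coupling constants (mT)
n = [2, 4, 4, 4]                          # 14 protons in 4 groups, as in pentacene
spectrum = hyperfine_spectrum(B, A, n)
```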
The shape and the 1/√N narrowing factor of the ESR signal for a carrier distributed over a cluster with N molecules follow from the CLT. The shift of the signal, y_i = (B_i − B_0) for the i-th molecule, is an independent random variable with Gaussian distribution R(y) = G_0(y) with standard deviation σ_0. By the CLT, the distribution of the average shift over the N molecules is again Gaussian, with standard deviation σ_0/√N.
B. ESR spectra for trap in crystal
The situation for a carrier localized in a trap in a crystal is different from the above situation with N molecules. The latter case assumes a uniform charge distribution, and thus the CLT applies. In contrast, the distribution {p_i} over molecules i in a trap, which is the density of the carrier at a given site i, is nonuniform, and the only restriction is the normalization condition ∑_i p_i = 1. Hence, we cannot assume a Gaussian lineshape for the ESR signal; the lineshape must be studied separately.
Regardless of the lineshape, the probability distribution {p_i} unambiguously determines the linewidth of the ESR signal. The linewidth is characterized by the standard deviation σ, which is the square root of the second moment of the lineshape. If we consider the standard deviation σ of a signal in a trap and compare it with that from a single molecule σ_0, we can introduce the effective number of molecules N_eff({p_i}) to describe the linewidth of the ESR signal from a carrier in a trap. The distribution of the ESR shift B is the same for each molecule i, with mean ⟨B⟩ = B_0 and variance σ_0² = ⟨(B − B_0)²⟩. Since the hyperfine configurations of the molecules are independent of each other, the variables y_i = (B_i − B_0) are independent for different molecules i. Hence, the standard deviation σ({p_i}) of the weighted sum of random variables ȳ = ∑_i p_i y_i is
σ({p_i}) = σ_0 ( ∑_i p_i² )^{1/2}. (5)
It is then natural to define the effective number of molecules through σ = σ_0/√N_eff, so that
N_eff({p_i}) = 1 / ∑_i p_i². (6)
To study the shape of the ESR signal for a trap with charge density {p_i} we generated, by a standard method 24 , random variables {y_i} following a Gaussian distribution with dispersion σ_0 = 3 (Fig. 2a). Then, we studied the distribution of the random variable ȳ = ∑_i p_i y_i. By the CLT the uniform distribution p_i = 1/N leads to a Gaussian shape of the signal with the dispersion narrowed by the factor √N (Fig. 2b). After performing simulations with a large set of different distributions p_i we conclude that the distribution of the random variable ȳ = ∑_i p_i y_i is always Gaussian (see Fig. 2c). Hence, we conclude that the shape of the ESR signal for carriers localized in a set of identical independent traps is uniquely determined by the distribution density {p_i}. It is always Gaussian with the standard deviation σ defined by Eqs. (5) and (6).
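The statement that ȳ = ∑_i p_i y_i stays Gaussian with the width of Eq. (5) is easy to check numerically; the toy distribution below is an arbitrary example, not one of the trap densities computed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma0 = 3.0                                          # dispersion used in the text's test

p = np.array([0.5, 0.25, 0.15, 0.10])                 # toy non-uniform trap density
y = rng.normal(0.0, sigma0, size=(100_000, p.size))   # independent hyperfine shifts
ybar = y @ p                                          # weighted shift seen by the carrier

print(ybar.std(), sigma0 * np.sqrt(np.sum(p ** 2)))   # agree, as in Eq. (5)
print(1.0 / np.sum(p ** 2))                           # N_eff of Eq. (6), ~2.9 here
```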
C. ESR spectra for several kinds of traps
Since the ESR signal for independent identical traps is always Gaussian, we assume that a non Gaussian shape originates from the superposition of the signals from different kinds of traps. Indeed, each different kind of trap is characterized by a different probability distribution of the trapped carriers.
To describe the experimental spectrum E_exp(B) by the superposition of the ESR spectra G(B, ξ) for each trap type ξ we must choose a parameter ξ that unambiguously characterizes the spectrum G(B, ξ). It follows from the analysis in Section II A that the ESR spectrum from identical traps is Gaussian and can be characterized by a single parameter N_eff. This parameter reflects the spatial distribution {p_i} of a charge in a trap (6) and determines the narrowing of the Gaussian with respect to the ESR width σ_0 of a carrier localized on a single molecule. Therefore, the ESR signal for the same trap type, characterized by the same spatial extension ξ = N_eff, can be expressed as
G(B, N_eff) ∝ exp[ −N_eff (B − B_0)² / (2σ_0²) ]. (7)
Introducing the distribution function D(N_eff) of the traps in pentacene over N_eff we can express the experimental signal E_exp(B) in terms of the superposition
E_exp(B) = ∫ dN_eff D(N_eff) G(B, N_eff), (8)
which is a convolution of the distribution function D(N_eff) of the traps and the Gaussian signal for a trap type characterized by the localization parameter N_eff. Most techniques for ESR measurement detect the derivative of E_exp(B) with respect to the magnetic field B, and hence the experimental signal X_exp(B) = dE_exp(B)/dB is related to the distribution function of the traps D(N_eff) via
X_exp(B) = ∫ dN_eff D(N_eff) dG(B, N_eff)/dB. (9)
Hence, to obtain the distribution D(N_eff) we must solve one of the integral equations (8) and (9). The experimental signals X_exp(B) (E_exp(B)) and the kernel dG(B, N_eff)/dB (G(B, N_eff)) are known functions and the distribution D(N_eff) is to be determined.
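For a numerical treatment, Eqs. (8) and (9) are discretized on grids of B and N_eff. The sketch below shows a minimal forward model in Python; the normalization of G, the grid ranges and the choice of σ_0 are assumptions for illustration only.

```python
import numpy as np

sigma0 = 0.554                              # mT, single-molecule ESR width

def kernel_G(B, Neff, B0=0.0):
    """Normalized Gaussian line of a trap with localization parameter N_eff."""
    s = sigma0 / np.sqrt(Neff)
    return np.exp(-(B - B0) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)

B = np.linspace(-3.0, 3.0, 400)             # magnetic-field grid (mT)
N = np.linspace(1.0, 40.0, 200)             # grid of N_eff values
dN = N[1] - N[0]
K = kernel_G(B[:, None], N[None, :])        # kernel matrix, shape (len(B), len(N))

def forward(D):
    """Discretized Eq. (8): synthetic E_exp(B) for a given trap distribution D."""
    return K @ (D * dN)
```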
Here B_min (B_max) is the lower (upper) bound of the magnetic-field range where the signal is larger than the experimental noise. X_exp(B) are the experimental data and X(B) is obtained from the distribution D(N_eff), which is considered to be a solution of the integral equation. However, such a naive approach leads to huge unrealistic oscillations of the solution D(N_eff). Instead, we need to apply one of the advanced techniques developed for such equations. Section III A gives a general description of the stochastic optimization method (SOM) [24][25][26][27] for the solution of Eqs. (8) and (9). Section III B applies the method to the analysis of the ESR data and demonstrates the influence of experimental noise on the reliability of the results. Section III C presents an experimental technique to obtain ESR spectra suitable for the fine analysis of the data. Section III D introduces an algorithm that implements the SOM and presents results for the trap distribution in pentacene TFTs. Finally, Section III E demonstrates the limits of the reliability of the distribution obtained by solving Eqs. (8) and (9).
A. Method to solve inverse problem
It is notoriously difficult to solve Eqs. (8) and (9) because these equations belong to the class of "ill-posed" problems. Naively, the true solution D(N_eff) of Eq. (9), convolved with the kernel, produces a function X(B) which coincides with the given function X_exp(B). However, a general feature of practical implementations of Eq. (9) is that the knowledge of the function X_exp(B) is noisy and incomplete. Specifically, X_exp(B) is known for a particular set of points {B_i, i = 1, M} with some error bars resulting from the experimental noise. In this case, to find a "solution", we can introduce the residual function ∆(i) between the reconstruction and the data and optimize a measure of the deviation of "the solution" X(B) from the given data X_exp(B). For example, we could maximize the inverse deviation Q defined in (10). Naturally, ∆(i) is never equal to zero at all points i = 1, M when realistic noisy data {X_exp(B_i), i = 1, M} are considered. Hence, even the best measure Q_max is not equal to infinity. Therefore, the only feasible strategy for Eq. (9) is to find a solution that is "the best" in some sense.
The above features are the characteristics of the class of "ill posed" problems for which we can not get an exact solution and can only find the "best" choice of D(N ef f ) for the given data set {X exp (B i ), i = 1, M }. The naive approach, where we simply maximize measure (13), leads to unreasonable "solutions". Typically, they have huge fluctuations which exceed the true values of D(N ef f ) by several orders of magnitude. To get a reasonable description of D(N ef f ) we must suppress this "saw tooth" instability.
There are two different strategies. The first is the regularization method, e.g. the popular maximal entropy method 34 , as one of many such methods 35,36 . This approach maximizes a measure that is similar to (13) but modified in such a way that the solution is smooth enough to suppress the "saw tooth noise". The main drawback is that the solution is corrupted by the smoothing regularization procedure. The second strategy uses modern stochastic approaches to obtain many statistically independent solutions (see 24,27 and the references therein) whose linear combination smooths the "saw tooth noise" without corrupting individual solutions. Since this approach has been shown 27 to be better for the solution of Eq. (9), we adopt it here. The SOM has been successfully applied to many integral equations. The kernels of those equations are different from the ones in Eqs. (8) and (9). The exponential kernel K(y, x) = exp[−yx] was examined in [24,[37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55], and various kernels ranging from the Fermi distribution to the Matsubara frequency representation are considered in [56][57][58][59]. We test the applicability of the SOM to the kernels of Eqs. (8) and (9). We also show how the statistical noise of the experimental data can obscure the information that can otherwise be obtained by solving these equations.
To verify that the SOM is applicable to Eqs. (8) and (9) with the kernel defined by (7), we introduced a function D(N_eff) normalized to unity (see the dotted line in Figs. 3 and 4) and generated a set of "experimental" data {X_exp(B_i), i = 1, 200} using relations (8) and (9). Then, we attempted to find D(N_eff) by solving Eqs. (8) and (9). For ideal data without noise in the set {X_exp(B_i), i = 1, 200} we were able to restore D(N_eff) successfully (see Figs. 3(a) and 4(a)). We did not find a significant difference between the results obtained by solving Eq. (8) and those obtained by solving Eq. (9). The results presented in Figs. 3 and 4 illustrate the general trends. An increase in the signal-to-noise ratio s_n corrupts the solution for large values of N_eff first: the shape of the high-energy peak is not reproduced but its position is still correct. At higher values of s_n the shape of the low-energy peak is not reproduced. A comparison of Figs. 3 and 4 shows that the spectrum with a sharp feature at small N_eff (Fig. 3) is more robust to experimental noise than that with a broad feature at small N_eff (Fig. 4). Note that although the shapes of the high- and low-energy peaks are not reproduced, their positions are still approximately correct even for large values of the signal-to-noise ratio s_n.
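The noise test described here can be mimicked with a short sketch. The snippet below builds the Gaussian kernel of Eq. (7) on assumed grids, generates a two-peak test distribution, and produces a noisy synthetic derivative signal; the peak positions, grid ranges and noise model are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

sigma0 = 0.554                                   # mT, single-molecule width
B = np.linspace(-3.0, 3.0, 200)                  # 200 field points, as in the text
N = np.linspace(1.0, 40.0, 200)                  # grid of N_eff values
dN = N[1] - N[0]
width = sigma0 / np.sqrt(N)                      # per-trap Gaussian widths
K = np.exp(-B[:, None] ** 2 / (2 * width[None, :] ** 2)) / (np.sqrt(2 * np.pi) * width)

def synthetic_signal(D_true, noise_level, rng):
    """Noisy 'experimental' dE/dB generated from a known trap distribution."""
    E = K @ (D_true * dN)                        # Eq. (8)
    X = np.gradient(E, B)                        # Eq. (9), numerical derivative
    return X + noise_level * np.abs(X).max() * rng.standard_normal(X.size)

rng = np.random.default_rng(3)
D_true = np.exp(-(N - 2.0) ** 2 / 0.5) + 0.4 * np.exp(-(N - 15.0) ** 2 / 20.0)
D_true /= D_true.sum() * dN                      # normalize to unity
X_exp = synthetic_signal(D_true, noise_level=0.02, rng=rng)
```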
C. Experimental data for analysis
To conduct reliable spectral analysis as discussed above, we need high-precision ESR spectra for the carriers in organic TFTs. We acquired the spectra by the following procedures: We used a commercially-available X-band (9 GHz) ESR apparatus (JES-FA200, JEOL) equipped with a high Q cylindrical cavity (Q factor 4000-6000 for the TE 011 mode). We fabricated bottom-gate, top-contact pentacene TFTs with high mobility that are suitable for high-precision measurements. The device is composed of a 100-µm-thick poly(ethylene naphthalate) (PEN) film as a nonmagnetic substrate, a 1-µm-thick Parylene C film as a gate dielectric layer (4.5 nF/cm 2 ), and a 50-nm-thick pentacene film as the semiconducting layer. The gate, source, and drain electrodes are composed of vacuum-deposited gold films with a thickness of 30 nm; this is much thinner than the skin depth of gold (about 790 nm).
Since the field-induced carrier is accumulated only at the semiconductor/insulator interface, the ESR signal is proportional to the total channel area of the TFTs. We used a device with a width of 2.5 mm and a length of 20 mm, the dimension of which is limited by the inner diameter of the ESR tube and the cavity size. We used a stack of ten sheets of TFTs for the high-precision ESR measurement, to obtain field-induced carriers ten times as large as those in a single sheet. The total carrier number at V G = -200V is estimated as 2.8x10 13 .
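The quoted carrier number can be checked with a one-line estimate: the induced sheet charge is the gate capacitance per unit area times the gate voltage, multiplied by the total channel area of the ten stacked sheets. This is only a back-of-the-envelope sketch; device-to-device variations are ignored.

```python
C_area = 4.5e-9          # F/cm^2, Parylene C gate dielectric (from the text)
V_G = 200.0              # V, magnitude of the applied gate voltage
area = 0.25 * 2.0 * 10   # cm^2: 2.5 mm x 20 mm channel, ten stacked sheets
e = 1.602e-19            # C, elementary charge

n_carriers = C_area * V_G * area / e
print(f"{n_carriers:.2e}")   # ~2.8e13, matching the estimate in the text
```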
The semiconducting pentacene layer is composed of a uniaxially-oriented polycrystalline film where all the component pentacene molecules aligned with the molecular long axes are roughly perpendicular to the film plane. In the measurement, a static magnetic field was set perpendicular to the film plane to eliminate the anisotropic effect of the g tensor. A continuous-flow cryostat was used for the low-temperature measurements. In the FESR measurements at low temperature, we first applied the gate voltages at room temperature (with the source and drain shorted) and then cooled the device to the set temperature, to avoid a delay in the charge accumulation. The temperature was stabilized carefully so that the fluctuation at 20 K was about 0.01 K, which minimized the effect of temperature-dependent spin susceptibility.
D. Practical implementation of method: Distribution of traps in pentacene TFT
Equation (9) is preferable for the practical implementation of the SOM. The problem with realistic noisy data from ESR experiments is that there is some uncertainty in the normalization and background of the experimental data. In the ideal case, the normalization of the distribution D(N_eff) to unity, Eq. (14), implies the corresponding normalization conditions, Eqs. (15) and (16), on the experimental signals, respectively. However, the normalization of the experimental data is not exact because experimental noise can lead to an uncertainty of a few percentage points. On the other hand, the solution of Eq. (9) is sensitive to possible normalization errors. There is also a problem with the background. If the experimental data are obtained in the form E_exp(B) (see e.g. Fig. 2), there is no unique procedure to determine the constant background level that must be subtracted from the data to get a pure signal for the ESR transition. This uncertainty also increases the uncertainty relating to the normalization of the data. However, the problem of the unknown background disappears when Eq. (9) is used as the integral equation because lim_{B→±∞} X_exp(B) = 0 for any constant background. Therefore, to analyze the ESR data for pentacene TFTs we used Eq. (9), where the only uncertainty is that of the normalization.
To handle the normalization uncertainty we can change the normalization in either Eq. (14) or Eq. (16). The two choices are equivalent because of the linearity of Eq. (9). In practice we normalized the experimental signal as shown in (16) and varied the normalization I of the distribution density D(N_eff) (14). We handled the normalization uncertainty for the result shown in Fig. 6 as follows. The spectrum at gate voltage -200 V was considered (the result for this gate voltage is shown in Fig. 6 by the solid line). The integral equation (9) was solved for different normalizations I and the solution with the normalization having the best deviation measure Q_max was chosen. Figure 5 shows the best inverse deviation Q_max (10,13) and the position of the sharp peak in D(N_eff) versus the normalization I. It can be seen that the position of the sharp peak is sensitive to the value of the normalization I and this may be a source of volatility in the fine analysis of the ESR data. However, it can be shown that the suggested approach to the normalization I, which leads to the best deviation measure Q_max, is robust and produces stable results. We have demonstrated this via analysis of the ESR data at different gate voltages (Fig. 6). We found that the best normalizations are different at different gate voltages: I(V = −200) = 1.01, I(V = −120) = 1.038, and I(V = −40) = 1.03, respectively. However, the position of the sharp peak at low values of N_eff does not depend on the voltage V if at each voltage V we use the normalization I(V) that corresponds to the best deviation measure Q_max. Physically, the sharp peak at low values of N_eff corresponds to deep impurity levels that depend only slightly on the gate voltage. Therefore, its independence of the gate voltage in the fine analysis of the ESR spectra indicates the high stability of the procedure based on the suggested approach.
E. Reliability of trap distribution result
Since solving Eq. (9) is an "ill-posed" problem it is useful to understand how much information we can get from the analysis and to check how many details of the resulting distribution of impurities are reliable. The reliability can be analyzed by plotting the residual function (12).
The spectrum D(N ef f ) (Fig. 6) obtained by solving Eq. (9) has three peaks. Figure. 7a shows the fit of the ESR signal using the distribution D(N ef f ) in Fig. 6. It also shows the separate contributions of the A, B, and C components of the distribution. To clarify which features of the distribution D(N ef f ) are reliable for the given level of noise in the experimental data we studied the residuals (12) X exp (B) − X (B) (Figs. 7b-e). We can see that the quality of the fit by the SOM (Fig. 7b) is much better than that obtained by e.g., the Lorentzian (Fig. 7c) and two δ-functions (Fig. 7d). The fit from three δ-functions gives a residual function (Fig. 7e) as good as that obtained from D(N ef f ) in Fig. 6. Therefore, we conclude that, within the limits of the noise of the experimental data, the existence of at least three kinds of traps is a reliable result. We note that the distribution over the parameter N ef f in Fig. 6 is free from any assumption about the shape of the distribution. Indeed, because of the noise of the current experimental data, the only reliable conclusion is the statement about the existence of at least three types of traps. However, a data analysis with less noise could, in principle, reveal more fine structure in the distribution function D(N ef f ).
IV. TRANSFORMATION FROM SPATIAL DISTRIBUTION TO ENERGY DISTRIBUTION
In this section we discuss finding the distribution of the traps Z(E_B) over the binding energies E_B given the distribution D(N_eff) over the localization parameter N_eff. This transformation is trivial when there is a priori knowledge of the functional dependence N_eff = N_eff(E_B). Indeed, there is a balance relation between D and Z for two nearby points E_B and E_B + dE_B:
Z(E_B) |dE_B| = D(N_eff) |dN_eff|. (17)
The above generic relation (17) requires the explicit dependence N_eff(E_B) for the system under study. It is already well known that the behavior of a carrier in pentacene TFTs can be described by a particle in a system with attractive impurities 10 . It is also known that the carrier is subject to the electron-phonon interaction 60 . To model the behavior of a carrier in pentacene TFTs we chose the simplest Hamiltonian describing a 2D Holstein polaron in the field of an on-site attractive center,
H = −t ∑_{⟨ij⟩} c†_i c_j − U c†_0 c_0 + ω_ph ∑_i b†_i b_i + γ ∑_i c†_i c_i (b†_i + b_i).
Here, c†_i (b†_i) is the creation operator for the carrier (phonon) in the i-th molecule. U is the attractive impurity potential for the carrier c†_0 at site 0 and ω_ph is the frequency of the dispersionless phonon. The amplitude t describes the electron transfer between nearest-neighbor sites ⟨ij⟩ and the local Holstein coupling to the phonons is ∝ γ. The dimensionless electron-phonon coupling constant λ is defined to be λ = γ²/(4tω_ph). It is clear that for the chosen model the parameter set determining relation (17) is P = {U, λ}, including the potential of the attracting trap U and the strength of the electron-phonon coupling λ.
To calculate the values of E_B = E_B(U, λ) and N_eff = N_eff(U, λ) we used the direct space diagrammatic Monte Carlo (DSDMC) technique 28 . Similar data can be obtained by the inhomogeneous momentum average approximation method 29,30 and the coherent basis states method 31,32 . The data for E_B = E_B(U, λ) and N_eff = N_eff(U, λ) are presented in Figs. 8a and 8b. The values of N_eff were determined by relation (6) from the charge distribution in the trap, which was calculated by the DSDMC technique (see Fig. 9).
To determine unambiguously the functional dependences E_B = E_B(U, λ) and N_eff = N_eff(U, λ) (Figs. 8a and 8b) we must decide which of the two parameters, λ and U, is fixed and which is responsible for the variation of the physical parameters of the traps: the binding energy E_B and the localization parameter N_eff. A proper choice of the parameter responsible for the variation in the physical properties of the traps determines relation (17) fully and unambiguously. Since the thin film in pentacene TFTs consists solely of pentacene molecules, it is natural to assume that the value of λ is one and the same for the entire film and that the spread of the physical parameters of the traps is caused by trapping potentials of different origins. The trapping potentials of different origins, in turn, can be characterized by different strengths of the attractive potential U.
B. Energy distribution of traps in pentacene TFTs
To analyze the ESR spectrum of pentacene TFT we used the electron-phonon coupling constant λ = 0.15, estimated from optical absorption experiments 60 , and the hopping amplitude t = 0.1eV, obtained from band structure calculations 61,62 . Figure 10 shows the distribution Z(E B ) of the trapped carriers over the binding energies in TFT at the gate voltage -200 V. We fixed λ = 0.15, considered the dependence of E B (bold line in Fig. 8a) and N ef f (bold line in Fig. 8b) on the attractive potential U , and obtained N ef f (E B ) (bold line in Fig. 8c). Then, we used transformation (18).
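The mapping of Eqs. (17) and (18) can be written as a short numerical routine. The sketch below assumes the binding energy E_B(U) and localization parameter N_eff(U) have been tabulated on a common grid of impurity potentials U at fixed λ (as in Fig. 8) and that the relation is monotonic on that range; the array names are hypothetical.

```python
import numpy as np

def energy_distribution(EB_of_U, Neff_of_U, N_grid, D_of_N):
    """Map D(N_eff) to Z(E_B) via Z(E_B) = D(N_eff(E_B)) |dN_eff/dE_B|."""
    order = np.argsort(EB_of_U)                  # sort the tabulated relation by E_B
    EB, Neff = EB_of_U[order], Neff_of_U[order]
    dNeff_dEB = np.gradient(Neff, EB)            # numerical derivative dN_eff/dE_B
    # N_grid must be increasing for np.interp
    D_at_Neff = np.interp(Neff, N_grid, D_of_N)  # D evaluated at N_eff(E_B)
    return EB, D_at_Neff * np.abs(dNeff_dEB)
```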
We found that the two discrete trap levels (A and B) peak at 140 ± 50 meV and 22 ± 3 meV, respectively. The broad feature (C) at the gate voltage -200 V is distributed between 5 and 15 meV, as presented in Fig. 10. The low-energy profile prompts the anticipation of tail states extending from just below the band edge, as has been discussed for amorphous semiconductors, while the states are partially occupied up to the Fermi level at around E B = 5 meV. These results are roughly consistent with the small activation energy of about 15 meV for the motional narrowing observed in [14], and also with the potential fluctuations by atomicforce-microscope potentiometry 63 . We note that the distribution Z(E B ) gives relatively correct position of the trap levels, although the absolute value of the binding energy is rather model-dependent.
V. DISCUSSION
Weakly-localized in-gap states are expected to play crucial roles in the intrinsic charge transport along semiconductor/gate dielectric interfaces in organic transistors. In practice, temperature-independent mobility is often observed in devices with high mobility and highlyordered molecular interfaces, which indicates that the Fermi energy is just below the band edge 64 .
To date, various experimental techniques have been used to investigate the interfacial trap density, such as deep-level transient spectroscopy (DLTS) 65 , photocurrent yield 66 , gate-bias stress 67 , and thermally-stimulated current 68 experiments. However, these measurements are based on charge transport, which is strongly affected by the "extrinsic" potential barriers at grain boundaries and/or channel/electrode interfaces. In striking contrast, our method has a crucial advantage in its ability to disclose the spatial and energy distribution of shallow traps down to a few meV, based on a unique microscopic probe using electronic spins. In addition, the g tensor can be used to identify the molecular species around the trap sites. For the three types (A, B, and C) of trap states obtained, the g tensor should be common, considering the highly symmetric nature of the ESR spectra. This indicates that the trap states are extended over inherent pentacene molecules of regular orientation 8,10 . Of these, the deep discrete trap level (A) might be attributed to structural defects such as grain boundaries 69 .
The shallow discrete level (B) and the broad feature (C) might be ascribed to small defects such as molecular sliding along the long axes of the molecules 70 , disorder induced by random dipoles in the amorphous gate dielectrics 71,72 , thermal off-diagonal electronic disorder 73,74 , and the fluctuations of the band edge 75 . Notice that the regular orientations of the molecules are retained in the trap states as stated above. Although these assignments are rather speculative, we believe that further microscopic investigations based on this study will provide a comprehensive view of the weakly-localized in-gap states in organic transistors.
The lack of sensitivity of the strongly localized states with small N ef f to the gate voltage, which is obvious from the physical point of view, is an indication of the stability and reliability of the fine analysis of the ESR spectra. For shallow states with large N ef f and small E B , an increase in the gate voltage adds low-energy states that participate in creation of ESR signal (Fig. 11). This shift of the border where states are visible to the ESR probe indicates that these states are filled as the gate voltage increases. Hence, this behavior can be interpreted as a movement of the Fermi level which tends to zero energy as the bias voltage increases.
It is important to mention that the sharp peak of D(N ef f ) at N ef f = 1.54 does not contradict the assumptions used to derive the integral equations (8) and (9). The very essence of these equations implies that the contribution from each state with a given N ef f is a Gaussian ESR signal. On the other hand, the signal at small N ef f is a more complicated function with fine features (see e.g. Fig. 1a for N ef f = 1). The results for distribution D(N ef f ) at small N ef f can be unreliable. However, as can be seen in Fig. 1c, the ESR signal for reasonable parameters is close to Gaussian even at N ef f = 1.54. Therefore, the results for the distribution of the traps D(N ef f ) are valid even for small values of the localization parameter N ef f .
VI. CONCLUSIONS
We have presented an unbiased analytical method for the processing of high-precision electron spin resonance (ESR) spectra, which allows us to obtain the distribution of trapped carriers over the degree of localization and the binding energy. The first step is a fine analysis of the shape of the ESR spectra by the SOM, which allows us to split the spectrum into multiple Gaussian components each of which corresponds to a different spatial extension of the trapped carriers. The second step is the transformation of the distribution over the degree of localization into a distribution over the binding energies via a system-dependent relation between the binding energies and the localization parameters of the trapped carriers. We have presented and discussed the fundamental bases of the spectral analysis, detailed algorithms for practical applications, and discussed the robustness of the analysis to experimental noise. Although the method can be applied to many systems, we consider that it is most appropriate for ESR spectra of organic TFTs for the following reasons. First, the channel materials are composed of regularly aligned organic molecules that involve multiple degrees of freedom for nuclear spin moments. This feature clearly justifies our basic assumption that a single type of trap gives the Gaussian lineshape of the ESR spectrum. Secondly, it is possible to measure the high-precision ESR spectrum because of the fairly small spin-orbit interactions of organic materials. The fieldeffect device structure also enables the control of the carrier density without introducing any randomness in the channel semiconductors.
Such a direct probe is quite unique in investigating the microscopic carrier dynamics in the organic TFTs that have attracted considerable recent attention in the field of organic electronics. We have shown that the trap states in pentacene TFTs can be classified into three major groups: deep trap states with a spatial extension of about 1.5 molecules (A), relatively shallow trap states that extend over about 5 molecules (B), and shallower trap states that extend over 6 to 20 molecules (C). These states respectively correspond to deep and shallow trap states with binding energies of 140 meV (A) and 22 meV (B), with the broad feature ranging from 5 to 15 meV (C). These shallow in-gap states are crucial for understanding and improving the device performance of organic TFTs.
CONSTRUCTION AND TESTING OF SMALL-SCALE THERMOACOUSTIC ELECTRICITY GENERATOR WITH DIFFERENT HEATING POWER
Renewable energy is indispensable. Nowadays, researchers focus on converting waste or solar heat into useful energy, such as electric energy, using thermoacoustic principles. A thermoacoustic machine converts heat energy into sound energy and conversely; the sound energy is then converted into electric energy using a linear alternator. In this study, we focus on the construction and testing of a small-scale thermoacoustic electricity generator with different heating powers. The heat is converted into acoustic energy by the thermoacoustic engine, and the acoustic energy is converted into electrical energy by the linear alternator. In this investigation, the heating power varies from 226 W to 389 W. The results show that 32.2 mW of electric power is obtained at a heating power of 389 W. Moreover, the onset heating temperature difference is 316 °C.
Introduction
Energy consumption has rapidly increased due to technological development, a rising population, and economic growth [1]. These issues lead to an energy crisis in which the supply from the limited natural mineral deposits used to power industry and households decreases while demand increases [2]. On the other hand, waste heat has become another environmental issue. Waste heat is often dissipated into the atmosphere or into water bodies such as lakes, rivers, and the ocean. This increases greenhouse gas emissions and contributes further to global warming [3]. The aforementioned issues can be addressed using thermoacoustic technology. Waste heat can be transformed into sound energy, which can be used to drive a linear alternator and generate electrical energy. Therefore, thermoacoustic technology benefits both the environment and renewable energy production, especially of electrical energy. Compared with other technologies, the thermoacoustic engine is pistonless, which makes it simpler and more widely applicable. Moreover, improvements in and the accessibility of thermoacoustic refrigerators can make them more effective and efficient than vapour compression refrigeration systems [4].
Thermoacoustics studies the interaction between sound and heat in a fluid adjacent to solid plates [5], that is, the transformation of heat into acoustic energy and conversely. Thermoacoustic devices can be divided into two types: the thermoacoustic engine and the thermoacoustic cooler. The thermoacoustic engine is a machine for converting thermal energy into acoustic energy [6]. The acoustic energy can then generate electric power using a linear alternator or a turbine.
The thermoacoustic engine has been investigated by some researchers. Ueda and Farikhah, in 2016, investigated the thermoacoustic energy conversion in the stack screen [7]. They proposed a thermoacoustic numerical calculation method incorporating the empirical formula for the laminated wire mesh regenerator proposed by Obayashi [8]. Using this method, the efficiency of energy conversion was numerically calculated in the laminated wire mesh regenerator of the thermoacoustic engine. The numerical results agree with the experimental results with an error of about 10%.
Furthermore, by comparing the energy conversion efficiency of the laminated wire mesh regenerator with that of a regenerator with a uniform flow path, the effect of the complexity of the flow path on the efficiency can be clarified. Specifically, when the acoustic impedance is relatively low and the temperature at the hot end is low, the flow path's complexity reduces the efficiency by nearly 40% [7]. This stack screen is usually used for the thermoacoustic engine. Other researchers have focused on the effect of geometry on the efficiency of thermoacoustic devices [9][10][11][12]. Utami et al. performed a numerical simulation and found that the lowest heating temperature that generates spontaneous oscillation in the thermoacoustic engine is 124 °C when the narrow channel radius of the stack is 0.12 mm. This temperature corresponds to low-grade heat that can be utilized for waste heat recovery.
Moreover, the highest efficiency is achieved when the radius of the stack flow channel equals 0.07 mm; the efficiency is then 57% of the upper limit value [9]. In 2020, Rokhmawati et al. obtained the lowest initial heating temperature (153 °C) and the optimal efficiency (38 %) when the narrow radius of the engine stack is 5 cm and the mean pressure is 4 MPa [10]. The effect of the regenerator length on the performance of the prime mover system was also studied numerically; the engine efficiency is 27 % of the Carnot efficiency when the regenerator length is 6 cm [11]. Farikhah et al. designed a 4-stage engine for waste heat recovery to find the best radii of the 4-stage thermoacoustic engine and to obtain a low onset temperature for the engine. It was found that the onset heating temperature is as low as 43 °C and the total system performance is 8% of the upper limit value when the ratio of the narrow engine radii to the thermal penetration depth is 1.2 for all stages [12]. Farikhah et al. constructed a thermoacoustic cooler driven by a loudspeaker in 2013. Stems of goose down, an organic stack material with high thermal insulation, were used as the plate media in the thermoacoustic engine. The goose down stack material has high thermal insulation, meaning the plate media have low thermal conductivity. There is no hazardous substance in the process, and the device is pistonless. Air was used as the working gas, and stainless steel was used for the resonator. The achieved cooling temperature drop is only 5 °C [13]. This cooler can also be driven by a thermoacoustic engine. Some researchers optimized the performance by changing some geometries, and it was found that the performance was improved by the optimized parameters [14][15][16][17][18]. The influence of the stack's porosity on the performance of a thermoacoustic refrigerator driven by a thermoacoustic engine was investigated; the best porosity of the engine and cooler is 1.1, and the entire performance is 24 % of the Carnot efficiency [14]. Liu et al. performed a numerical calculation on a thermoacoustic refrigerator and found that the larger the number of columns of staggered parallel plates, the better the refrigeration effect [15]. Hakim et al. studied the potential of electric power generation using mechanical vibration; however, the electric power is still low [19].
A current challenge in thermoacoustics is the advancement of thermoacoustic electric generators. The thermoacoustic electric generator is a combination of a thermoacoustic engine and an alternator, such as a loudspeaker operated in reverse mode. It transforms heat energy into acoustic work and then into electric energy. Nowadays, investigations of the traveling-wave type of thermoacoustic electricity generator have attracted researchers because of its better efficiency, although it requires a more complicated configuration. Therefore, in this research, we focus on the standing-wave type of thermoacoustic electricity generator because of its simplicity. In 2023, Ding and his group investigated thermoacoustic refrigeration systems driven by the waste heat of industrial buildings. The cooling temperature reached −12 °C and the cooling capacity is 0.95 kW. However, the configuration is rather sophisticated [20].
In 2016 and 2019, Setiawan et al. investigated a thermoacoustic prime mover, but it was not applied to thermoacoustic electric power generation [21][22][23]. Kitadani et al. [24] constructed traveling-wave and standing-wave thermoacoustic electricity generators employing a linear alternator to transform sound power into electricity, reaching an electric power output of 1.1 W with a heat-to-electricity conversion efficiency of 0.3 %. Piccolo [25] performed a numerical simulation of thermoacoustic electric power generation employing a conventional linear alternator in a straight tube. The results show that the acoustic-to-electric performance is about 70 % when the heating temperature is 527 °C, whereas the thermal-to-electric conversion efficiency of the prime mover is 5.7%. In 2022, Tutur et al. investigated the influence of pressure variation on the onset temperature span and electric power output of a thermoacoustic electricity generator in a straight tube. It was found that the smallest onset heating temperature difference is 347 °C when the mean pressure is 0.35 MPa.
Moreover, 691 W of electrical power is achieved when the pressure is 0.40 MPa with an onset temperature difference of 347 °C [26], and the heating power used in that experiment is 370 W. In 2012, a traveling-wave thermoacoustic electricity generator using an ultra-compliant alternator was built and produced 11.6 W of electrical power. However, the device needed more space, making the design more complex [27]. In 2014, Wang et al. improved a 500 W thermoacoustic electric generator. They found a maximum electric power of 473.6 W at 2.48 MPa. The efficiency reaches 14.5 %, but the onset heating temperature is quite high, at 650 °C [28]. A two-stage traveling-wave thermoacoustic electric generator was also investigated; 204 W of electric power is obtained, but the onset heating temperatures are high, at 597 °C and 511 °C [29]. The design and performance of a two-stage standing-wave thermoacoustic electricity generator was investigated in 2016. Even though the electric power is high, the heating temperature of the engine is also high [30]. Hamood and his group designed and constructed a two-stage thermoacoustic electricity generator; however, the mean pressure is too high [31]. Few researchers have investigated the influence of heating power on the onset heating temperature and the electric power of a simple standing-wave thermoacoustic generator. Therefore, we focus on this in the present experimental study.
Experimental Method
The investigation was conducted using an experimental research approach. First, a standing-wave thermoacoustic electricity generator with a side branch was constructed. Then, measurements were conducted, and the results for the onset heating temperature difference, electric power, pressure, and frequency are presented.
Construction
The schematic diagram of the thermoacoustic electric power generator with the side branch used in this investigation is illustrated in Fig. 1. The system comprises two main components: the thermoacoustic engine core and the linear alternator. The thermoacoustic engine core is inserted into a pipe called a resonator. The resonator is made from stainless steel and is 29 cm in length. It is connected to the resonator branch, where the loudspeaker is located. The length of the connecting pipe is 16 cm, while the branch pipe is 24 cm in length. The other part of the resonator, made of polyvinyl chloride (PVC), is 36 cm in length.
In the engine core part, the stack and the hot and ambient heat exchangers are installed in the first part of the resonator. The stack is sandwiched between the hot and ambient heat exchangers. The length of each heat exchanger is 4 cm, whereas the length of the stack is 3.5 cm. The stack is made of a stainless-steel wire mesh screen with mesh number 12 [see Fig. 2(c)]. The porosity of the stack is φ = 0.814, while the wire diameter is d_w = 0.050 cm. Using the equation r_h = φ d_w / [4(1 − φ)], the stack's hydraulic radius is 0.0547 cm. This parameter characterizes the heat exchange between the pore walls of the regenerator and the working fluid.
Here δ_k is the thermal penetration depth and τ is the thermal relaxation time in the cross-section of the regenerator's channel; for this stack the corresponding ratio is about 2.0. The thermal penetration depth describes the thickness of the gas layer through which heat can diffuse over an interval of time. It can be expressed as
δ_k = √( k / (π ρ c_p f) ), (1)
where k is the thermal conductivity, ρ is the density, c_p is the specific heat at constant pressure, and f is the frequency. Those gas property values are shown in Table 1. The ambient and hot heat exchangers are 4 cm long, while their diameter is 6.8 cm. The heat exchangers have small holes with a diameter of 0.3 cm, parallel to the axis of the resonator, that allow the working fluid to oscillate through the pores. The heat exchangers are made of copper, which is a good thermal conductor. The loudspeaker was employed as a linear alternator, which converts acoustic energy into electric energy. It is located in the side branch, which is 24 cm in length. The loudspeaker diameter is 15 cm, its impedance is 8 Ω, and the loudspeaker terminals are connected to a load resistor. The effective surface area of the loudspeaker cone is denoted as S_d, and it is 136.8 cm². The electrical power generated by the loudspeaker can be calculated using the following equation:
W_e = V_rms² / R_L, (2)
where W_e is the electric power, V_rms is the root-mean-square voltage, and R_L is the load resistance. The electric power input to the heater was kept on until the temperature at the hot side of the stack no longer rose considerably, so that the temperature span between the two sides of the stack remained constant and the system can be assumed to be in a steady-state condition. The recording of the temperature, pressure amplitude and electric voltage output is stopped after the electric power input is turned off. This step was repeated for heat inputs from 226 W to 389 W.
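A few of the quantities quoted above can be reproduced with a short calculation. The sketch below uses assumed air properties at 300 K (the exact Table 1 values are not reproduced here) and a placeholder terminal voltage chosen only to match the order of magnitude of the reported electric power.

```python
import math

# Stack geometry and gas properties (assumed air values at 300 K, 0.1 MPa)
phi, d_wire = 0.814, 0.050e-2            # porosity, wire diameter (m)
k, rho, cp = 0.0263, 1.18, 1005.0        # W/(m K), kg/m^3, J/(kg K)
f = 212.0                                # Hz, first-mode frequency

r_h = phi * d_wire / (4.0 * (1.0 - phi))            # hydraulic radius, ~0.0547 cm
delta_k = math.sqrt(k / (math.pi * rho * cp * f))   # thermal penetration depth, Eq. (1)

# Electric power at the loudspeaker terminals, Eq. (2)
V_rms = 0.51          # V, placeholder value (not a measured number from the paper)
R_L = 8.0             # ohm, loudspeaker/load impedance
W_e = V_rms ** 2 / R_L

print(f"r_h = {r_h * 100:.4f} cm, delta_k = {delta_k * 1000:.3f} mm, W_e = {W_e * 1e3:.1f} mW")
```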
The resonator used in the thermoacoustic engine part is 29 cm long, while the connecting pipe is 16 cm. This pipe is connected to the other resonator, which is 36 cm in length, and to the side branch where the loudspeaker is placed. The total length of the resonator is 81 cm. The resonance frequency of the closed resonator tube can be calculated as f_n = n a / (2L) (3), where a is the speed of sound of the working fluid and L is the resonator length. The n-th harmonic frequency is denoted as f_n (n = 1, 2, 3, …). By using the sound speed in Table 2 with a resonator length of 81 cm, the calculated resonant frequency within the heat input range of 226 W to 389 W is around 212 Hz for the first harmonic frequency (n = 1) and 424 Hz for the second harmonic frequency (n = 2). Table 2 shows the gas properties determined at 0.1 MPa of mean pressure; air was used as the working fluid (gas) at 300 K [32]. γ is the specific heat ratio, which is a dimensionless parameter (c_p/c_v); c_p and c_v are the isobaric and isochoric specific heats, respectively. Pr is the Prandtl number of the working gas, which is also a dimensionless number (ν/α); ν and α are the kinematic viscosity and thermal diffusivity, respectively. p_m, ρ, a, and K are the mean pressure, density, sound speed, and thermal conductivity of the working fluid. The porosity shown in Table 3 is a dimensionless parameter, defined as the ratio of the open area in the cross-section to the total cross-section area of the stack.
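As a quick check of Eq. (3), using a textbook speed of sound for air near room temperature (an assumed value; the paper takes it from the gas-property table):

a = 343.0      # assumed speed of sound of air, m/s
L = 0.81       # total resonator length, m

for n in (1, 2):
    f_n = n * a / (2.0 * L)
    print(f"n = {n}: f_n ≈ {f_n:.1f} Hz")   # close to the 212 Hz and 424 Hz quoted above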
Measurement
To measure the temperatures at the two sides of the regenerator, thermocouples were used. As the electric heater is turned on, the thermocouples begin to record the temperature through the data logger. After heat is imposed on the hot end of the regenerator, a spontaneous oscillation appears because of the temperature gradient in the thermoacoustic engine, and a sound wave propagates. Then, the pressure versus time can be measured using the pressure sensors and displayed by the WE7000 data logger software. Using the Fast Fourier Transform (FFT), the time-domain data were transformed into the frequency domain. Four pressure sensors were used. To measure the oscillating pressure, the transducers were installed along the resonator tube. The pressure was measured using Kyowa PGMC-A-200 KP sensors with nonlinearity and hysteresis within 1.34 % RO and 0.12 % RO, respectively. The temperature was measured using a type-K thermocouple of special-limit-of-error grade. The measurement error combines a random error and a systematic error, where n is the number of repeated measurements about their arithmetic mean [33] (Eqs. 4 and 5). Using Eqs. 4 and 5, the pressure and temperature measurement errors can be estimated; the pressure error is about 4.00 Pa.
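The frequency-domain step described above (finding the dominant frequency of a recorded pressure trace with an FFT) can be sketched as follows; the sampling rate and the synthetic signal stand in for the real sensor data and are assumptions only.

import numpy as np

fs = 10_000                              # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)          # 1 s record
# Synthetic stand-in for a measured pressure trace: a 219 Hz tone plus noise
p = 2.0e3 * np.sin(2 * np.pi * 219 * t) + 50.0 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(p))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant frequency ≈ {dominant:.1f} Hz") # ≈ 219 Hz for this synthetic trace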
To supply the heating power, an electric heater was introduced; it is applied at the hot heat exchanger. At the ambient heat exchanger, cooling water was circulated to maintain the ambient temperature. By using a digital voltmeter (measuring voltage V) and a digital ammeter (measuring current I), the input heating power can be calculated as Q = VI (rms values).
In this investigation, the heating power was varied by the heater, and acoustic power was generated inside the engine stack. Using the pressure sensors, the pressure can be measured and the acoustic power can be obtained. The temperature, in turn, is measured using thermocouples and recorded in the data logger. The acoustic wave travels along the resonators and the side branch where the loudspeaker is located. In the loudspeaker, the acoustic energy is converted into electric energy.
A prototype of a small-scale thermoacoustic electricity generator with a side branch configuration was built. Figure 4 shows the experimental setup. The system comprises a thermoacoustic engine, a linear alternator, and a resonator with a branch. The engine regenerator is installed inside the resonator and is employed as the thermoacoustic engine. A loudspeaker was used as the linear alternator to transform acoustic into electric power. In this investigation, atmospheric pressure was used and the working fluid is air; heat power is input into the system. The ambient heat exchanger runs on tap water supplied from a water tank, and the rejected water is collected. The temperatures on the hot and ambient sides of the engine regenerator were measured by the type E thermocouples, and the pressure transducer signals were obtained using a data acquisition system connected to a data logger. A power analyzer was employed to measure the current and voltage.
Working Principle
The thermoacoustic electricity generator with side branch configuration comprises a resonator, a regenerator (engine), a hot heat exchanger, an ambient heat exchanger, and a loudspeaker as the linear alternator (see Fig. 3). The regenerator was assembled from the stacked wire meshes in which the thermoacoustic effect occurs. The regenerator and the heat exchangers were installed in the resonator. In the thermoacoustic engine, there are no moving parts to perform the thermodynamic cycle; the presence of the regenerator plates and the working fluid is essential for the engine. When the heat source heats the engine regenerator, the gas particles on one side become hot. As shown in Fig. 5, the gas parcels experience a four-step cycle: two constant-pressure heat-transfer steps (2 and 4) and two adiabatic steps (1 and 3).
In step 1, the gas parcel experiences adiabatic compression while it moves from the ambient side to the hot side, and the fluid parcel is warmed. Then, in step 2, it experiences constant-pressure heat transfer from the plate to the gas parcel. After that, in step 3, the gas parcel experiences adiabatic expansion as it moves from the hot side back to the ambient side. In step 4, the gas parcel experiences constant-pressure heat transfer from the parcel to the plate. In step 2 the gas parcel undergoes thermal expansion at high pressure, while in step 4 it undergoes thermal contraction. As a result, acoustic work is generated; in this case, the acoustic energy is produced in the engine regenerator [6]. The acoustic power generated in the thermoacoustic engine is delivered to the loudspeaker, where the conversion of acoustic power to electric power occurs.
Result and Discussion
Figure 6.a presents the result of measuring temperature as a function of time. This investigation used air at 0.1 MPa as the working gas. The red line shows the stack's hot-side temperature, while the blue line presents the stack's ambient-end temperature. Figure 6 shows that, as the heater was turned on, the stack hot-side temperature rose considerably, while the stack ambient temperature remained constant. The spontaneous oscillation was excited when the onset heating temperature was reached, and the stack then generated acoustic power. Figure 6.a shows the thermocouple readings throughout the experiment, which took 60 minutes. The spontaneous oscillation, at 219 Hz, occurred at an onset heating temperature of 344 °C with an ambient temperature of 28 °C. As a result, the onset heating temperature difference is 316 °C. Thermal energy was converted to acoustic energy. Meanwhile, the linear alternator produced about 33 mW of electric power.
The pressure transducers are installed along the resonator tube, and the pressure oscillation was measured with a transducer. Then, employing the FFT, the frequency content of the sound waves can be obtained. Figure 6.b presents the frequency spectrum of the sound waves at 1 atm, taken at the resonant frequency. As can be seen in Fig. 6.b, the FFT analysis shows a dominant frequency of 219 Hz.
Figure 7 shows that the onset temperature difference is influenced by the heating power (besides being influenced by many other factors). The onset temperature difference increases by approximately 2 °C, from 314 °C to 316 °C, when the heating power increases by 163 W, from 226 W to 389 W. This happens because the temperature on the hot side of the stack rises faster when the heating power is greater. The onset heating temperature difference is 316 °C. Compared with that found by Tutur et al., this temperature is lower: they found 347 °C, which is 31 °C higher than that found in this investigation [26]. One important point is that the lowest heating power of 226 W, shown in Figure 7, is the minimum heating power needed for onset (the thermoacoustic effect, i.e. sound generation) to occur; heating power below 226 W cannot generate sound. Figure 8.a shows the experimental result for the frequency produced by the engine at different heating powers; the frequency slightly increases when the heat input increases. Figure 8.b shows the effect of heating power on the pressure amplitude, which increases from 2 to 5 kPa. The electric power generated by the loudspeaker in the steady state is shown in Fig. 9. When the heat input is 226 W the electric power is 8 mW, but when the heat input is 389 W the electric power is about 33 mW.
Air is the working fluid at 100 kPa. The ratio r_h/δ_k has been calculated to be around 2.0, which means that thermal interaction between the working gas and the stack occurs effectively when the hydraulic radius of the pores of the stack is about 2.0 times the thermal penetration depth. The increase in the temperature difference across the stack is the reason for the increase in frequency (f), pressure amplitude, and output electrical power.
Figure 7. Heating power vs onset temperature difference
Figure 8.a shows that the frequency of the sound generated increases as the heating power increases. It increases by around 2 Hz, from 217.7 Hz to 219.6 Hz, when the heating power increases from 226 W to 389 W. The increase in frequency occurs due to the increase in the maximum temperature difference, which comes from the increase in the temperature of the hot side of the stack. When the temperature gets higher, the speed of sound in the gas increases; as a result, the sound frequency increases, because the frequency f is proportional to the sound speed.
Figure 8.b shows that the amplitude of the generated sound-wave pressure increases significantly as the heating power rises: by around 2.34 times, from 2.10 kPa to 4.93 kPa, along with the increase in heating power from 226 W to 389 W. Therefore, the sound generated becomes stronger and the pressure amplitude becomes high. Figure 9 shows the dependence of the output electrical power produced by the thermoacoustic device on the heating power provided to it. The output electrical power increases almost linearly with the heating power, with a very significant increase from 8.3 mW to 32.7 mW (about 4 times) when the heating power increases from 226 W to 389 W. The increase in output electrical power is caused by the increase in the onset temperature difference and in the pressure amplitude, which both increase when the heating power goes up. The electric power obtained in this investigation can be used for electronic applications, such as a prescaler; this electronic device consumes 30 mW of electric power [35].
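Purely as a worked restatement of the figures quoted above, the two reported operating points give a rough linear sensitivity of the electrical output to the heat input (only two points are used, so this is indicative rather than a fitted model):

# Operating points quoted in the text: (heating power in W, electric power in mW)
q1, w1 = 226.0, 8.3
q2, w2 = 389.0, 32.7

sensitivity = (w2 - w1) / (q2 - q1)        # mW of electric power per W of heating power
conversion_at_max = (w2 * 1e-3) / q2       # overall heat-to-electricity conversion ratio

print(f"sensitivity ≈ {sensitivity:.2f} mW per W of heating power")
print(f"overall conversion ≈ {conversion_at_max * 100:.4f} % at 389 W")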
Conclusions
In this study, a standing-wave-type electricity generator was constructed and tested at different heating powers. From the study results, the following conclusions can be drawn: 1) When the heating power is 389 W, the highest electric power can be achieved (33 mW). It can be seen clearly that the heating power has an impact on it: the higher the heating power, the higher the electric power obtained. The electric power of 33 mW can be applied to electronic counting circuits, such as a prescaler. 2) As the heating power increases, the onset heating temperature difference also increases, meaning that low heating power is preferable for low-grade waste heat recovery. 3) There is a trade-off between the dependence of the electric power on the heating power and that of the onset heating temperature difference. 4) For low-grade waste heat recovery applications, the onset heating temperature difference should be decreased while the electric power should be increased. Therefore, some parameters and the working operation should be optimized in future work. Moreover, it is also essential that the geometry or the configuration of the thermoacoustic-electric generator be investigated in depth to find high electric power, the best performance, and a low heating temperature simultaneously.
Figure 1.
Figure 1. General design outline of a simple standing wave thermoacoustic electricity generator with branch
Figure 2.
Figure 2. (a) The heater; (b) the hot heat exchanger; (c) the screen-mesh stack; (d) the ambient heat exchanger. Fig. 2(a) shows the heater used for heating the hot side of the stack. The hot and ambient heat exchangers are shown in Fig. 2(b) and (d); their length is 4 cm and their diameter is 6.8 cm. The heat exchangers have holes 0.3 cm in diameter, parallel to the axis of the resonator, allowing the working fluid to oscillate through the tiny holes. The heat exchangers are made of copper, which is a good thermal conductor.
Table 1. The module and T/S parameters of the loudspeaker
Figure 3. Figure 4.
Figure 3. Experimental setup of the thermoacoustic electricity generator with side branch configuration
Figure 6 .
Figure 6.a.Temperature vs time over the testing period (60 minutes)
Figure 8 .
Figure 8.a. Heating power vs frequency. Figure 8.b. Heating power vs pressure amplitude | 2024-03-09T16:10:13.585Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "9fe963315c5d286aed527d4f3e560853b2264ea5",
"oa_license": null,
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-98362400050S",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e91aac3b729a6a689fcb7e980f5c864e83df3e22",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Physics"
],
"extfieldsofstudy": []
} |
54855569 | pes2o/s2orc | v3-fos-license | Long-time behavior of stable-like processes
In this paper, we consider a long-time behavior of stable-like processes. A stable-like process is a Feller process given by the symbol $p(x,\xi)=-i\beta(x)\xi+\gamma(x)|\xi|^{\alpha(x)},$ where $\alpha(x)\in(0,2)$, $\beta(x)\in\R$ and $\gamma(x)\in(0,\infty)$. More precisely, we give sufficient conditions for recurrence, transience and ergodicity of stable-like processes in terms of the stability function $\alpha(x)$, the drift function $\beta(x)$ and the scaling function $\gamma(x)$. Further, as a special case of these results we give a new proof for the recurrence and transience property of one-dimensional symmetric stable L\'{e}vy processes with the index of stability $\alpha\neq1.$
Clearly, the symbol of this process is given by p(x, ξ) = −iβ(x)ξ + γ(x)|ξ|^{α(x)}. The aim of this paper is to find sufficient conditions for recurrence, transience and ergodicity of stable-like processes. The main tool used in proving these conditions is the Foster-Lyapunov criteria for general Markov processes, which were developed in [MT93b].
Long-time behavior of stable-like processes has already been considered in the literature. Clearly, in the case when α(x), β(x) and γ(x) are constant functions, the stable-like process {X_t^α}_{t≥0} becomes a one-dimensional symmetric stable Lévy process with drift. By the Chung-Fuchs criterion (see [Sat99, Corollary 37.17]), its recurrence and transience property depends only on the index of stability α ∈ (0, 2] and the drift β ∈ R. More precisely, a one-dimensional symmetric stable Lévy process with drift is recurrent if and only if either 1 < α ≤ 2 and β = 0 or α = 1. In this paper, in the case when α ≠ 1 and β = 0, we prove the same fact by using a different technique. In the general case, R.L. Schilling and J. Wang [SW12, Theorem 1.1 (ii)] have developed a Chung-Fuchs type condition for transience for "nice" Feller processes. By applying this condition to the stable-like process {X_t^α}_{t≥0}, they have shown that lim sup_{|x|→∞} α(x) < 1 is a sufficient condition for transience. In this paper, we relax the assumption lim sup_{|x|→∞} α(x) < 1, that is, we give a sufficient condition for transience without any further assumptions on the function α(x).
Next, J. Wang [Wan08] has given sufficient conditions for recurrence and ergodicity of general one-dimensional Lévy type Feller processes, that is, the Feller processes given by an infinitesimal generator consisting of a diffusion part with coefficient a(x) ≥ 0, a drift part with coefficient b(x) ∈ R and a jump part driven by a kernel ν(x, ·), where a(x) and b(x) are Borel measurable functions and ν(x, ·) is a σ-finite Borel kernel on R × B(R) such that ν(x, {0}) = 0 and ∫_R (1 ∧ y²) ν(x, dy) < ∞ holds for all x ∈ R. By applying these conditions to the stable-like case, he has shown that if α(x) > 1 and an additional condition involving the drift holds for all |x| large enough, then the corresponding stable-like process is recurrent (see [Wan08, Theorem 1.4 (i)]). Note that the above recurrence condition does not cover the zero drift case and the case when α(x) ≤ 1. In particular, it does not cover the symmetric stable Lévy process case. Further, the conditions for ergodicity presented in that paper do not cover the stable-like case (see [Wan08, Theorem 1.4 (ii)]). In this paper, we give a sufficient condition for recurrence without any further assumptions on the function α(x) and a sufficient condition for ergodicity in the case when 1 < inf{α(x) : x ∈ R}. Furthermore, in the case when α(x) and γ(x) are periodic functions with the same period and when β(x) = 0, B. Franke [Fra06, Fra07] has shown that the recurrence and transience property of the corresponding stable-like process depends only on the minimum of the function α(x), that is, if the set {x ∈ R : α(x) = α_0 := inf_{x∈R} α(x)} has positive Lebesgue measure, then the corresponding stable-like process is recurrent if and only if α_0 ≥ 1.
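For the constant-coefficient case with zero drift recalled above, the recurrence/transience dichotomy at α = 1 can also be illustrated numerically. The following sketch is only a heuristic Monte Carlo illustration over a finite horizon with an arbitrary window, not a substitute for the Chung-Fuchs criterion:

import numpy as np
from scipy.stats import levy_stable

def time_fraction_near_origin(alpha, n_steps=50_000, window=1.0, seed=0):
    """Discrete skeleton of a symmetric alpha-stable Levy process: cumulative sums
    of i.i.d. symmetric alpha-stable increments; report the fraction of time the
    path spends in a neighbourhood of the origin."""
    rng = np.random.default_rng(seed)
    increments = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=rng)
    path = np.cumsum(increments)
    return np.mean(np.abs(path) < window)

for alpha in (0.7, 1.5):
    print(alpha, time_fraction_near_origin(alpha))
# Over the same horizon, the transient regime (alpha < 1) typically spends far less
# time near the origin than the recurrent regime (alpha > 1).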
Finally, if the functions α(x), β(x) and γ(x) are of the special form considered in [Böt11], with parameters α, β ∈ (0, 2), γ, δ ∈ (0, ∞) and k > 0, then by using an overshoot approach B. Böttcher [Böt11] has shown that the corresponding stable-like process is recurrent if and only if α + β ≥ 2. For the Dirichlet forms approach to the problem of recurrence and transience of stable-like processes without the drift term we refer the reader to [Uem02, IU04], and for the discrete-time analogues of stable-like processes and their recurrence and transience property we refer the reader to [San12a, San12b]. Now, let us state the main results of this paper.
then the corresponding stable-like process {X_t^α}_{t≥0} is recurrent.
then the corresponding stable-like process {X_t^α}_{t≥0} is transient. The following constant will appear in the statement of the next theorem. For α ∈ (0, 2) and θ ∈ (0, α), let E(α, θ) be the constant defined via the binomial coefficients and the Gauss hypergeometric function 2F1(a, b, c; z) (see Section 3 for the definition of this function).
then the corresponding stable-like process {X_t^α}_{t≥0} is ergodic. Let us give several remarks about Theorems 1.1 and 1.2. Firstly, note that condition (1.4) implies condition (1.2); to see this, note that the corresponding inequality holds for all 0 < θ < α and all |x| large enough. Thus, as commented in Section 2, ergodicity implies recurrence. Secondly, let 0 < θ < lim inf_{|x|→∞} α(x) be arbitrary; then the lim sup condition (1.5) implies recurrence of the corresponding stable-like process (see the proof of Theorem 1.2). Further, it can be proved that the function θ −→ E(α, θ) is strictly increasing, thus we choose θ close to zero. Next, from (3.2), (3.3) and (3.4) it is easy to see that, in this limit, (1.5) becomes (1.2). Thirdly, note that if lim sup_{|x|→∞} α(x) < 1, then the corresponding stable-like process cannot be recurrent. Fourthly, the assumption lim inf_{|x|→∞} α(x) ≥ 1 in Theorem 1.1 (i) is not restrictive, that is, we can state Theorem 1.1 (i) without any assumptions about the function α(x) (see the proof of Theorem 1.1 (i)). But, if we allow that lim inf_{|x|→∞} α(x) < 1, then, clearly, (1.2) does not hold. Finally, as a simple consequence of Theorem 1.1 we get a new proof for the well-known recurrence and transience property of Lévy processes. Corollary 1.3. A one-dimensional stable Lévy process given by the symbol (characteristic exponent) p(ξ) = γ|ξ|^α, where α ≠ 1 and γ ∈ (0, ∞), is recurrent if and only if α > 1.
Note that Theorem 1.1 (i) does not imply the recurrence property of the one-dimensional symmetric 1-stable Lévy process since, in this case, the left-hand side in (1.2) equals zero. Now, we explain our strategy for proving the main results. The proofs of Theorems 1.1 and 1.2 are based on the Foster-Lyapunov criteria (see Theorem 2.3). These criteria are based on finding an appropriate test function V(x) (positive and unbounded in the recurrent case, positive and bounded in the transient case and positive and finite in the ergodic case), such that A^α V(x) is well defined, and a compact set C ⊆ R, such that A^α V(x) ≤ 0 in the recurrent case, A^α V(x) ≥ 0 in the transient case and A^α V(x) ≤ −1 in the ergodic case, for all x ∈ C^c. The idea is to find a test function V(x) such that the associated level sets C_V(r) := {y : V(y) ≤ r} are compact sets and such that C_V(r) ↑ R, when r ↑ ∞, in the cases of recurrence and ergodicity, and C_V(r) ↑ R, when r ↑ 1, in the case of transience. In the recurrent case, for the test function we take V(x) = ln(1 + |x|). In the transient case we take V(x) = 1 − (1 + |x|)^{−θ}, where 0 < θ < 1 is arbitrary, and in the ergodic case we take V(x) = |x|^θ, where 1 < θ < inf_{x∈R} α(x) is arbitrary. Now, by verifying these drift inequalities outside a compact set (with A^α V(x) ≤ −1 in the ergodic case), the proofs of Theorems 1.1 and 1.2 are accomplished. Let us remark that a similar approach, using similar test functions, can be found in [Lam60], [MAY95] and [San12a] in the discrete-time case and in [ST97] and [Wan08] in the continuous-time case.
The paper is organized as follows. In Section 2 we recall some preliminary and auxiliary results regarding long-time behavior of general Markov processes and we discuss several structural properties of stable-like processes which will be crucial in finding sufficient conditions for recurrence, transience and ergodic properties. Further, we also give and discuss some consequences of the main results. Finally, in Section 3, using the Foster-Lyapunov criteria, we give proofs of Theorems 1.1 and 1.2.
Throughout the paper we use the following notation. We write Z_+ and Z_− for nonnegative and nonpositive integers, respectively. By λ(·) we denote the Lebesgue measure on the Borel σ-algebra. In the sequel, {X_t}_{t≥0} will denote an arbitrary Markov process on R^d with transition kernel p_t(x, ·) := P^x(X_t ∈ ·), and {X_t^α}_{t≥0} will denote the stable-like process given by the infinitesimal generator (1.1).
Preliminary and auxiliary results
In this section we recall some preliminary and auxiliary results regarding long-time behavior of general Markov processes and we discuss several structural properties of stable-like processes.
Definition 2.1. Let {X_t}_{t≥0} be a càdlàg strong Markov process on R^d. The process {X_t}_{t≥0} is called λ-irreducible if ∫_0^∞ p_t(x, B) dt > 0 for all x ∈ R^d whenever λ(B) > 0. Note that the Lebesgue irreducibility of stable Lévy processes is trivially satisfied, and the Lebesgue irreducibility of general stable-like processes has been shown in the literature. If a λ-irreducible process is recurrent, then there exists a unique (up to constant multiples) invariant measure π(·). If the invariant measure is finite, then it may be normalized to a probability measure. If {X_t}_{t≥0} is (Harris) recurrent with finite invariant measure π(·), then {X_t}_{t≥0} is called positive (Harris) recurrent; otherwise it is called null (Harris) recurrent. The Markov process {X_t}_{t≥0} is called ergodic if an invariant probability measure π(·) exists and if lim_{t→∞} ||p_t(x, ·) − π(·)|| = 0 holds for all x ∈ R^d, where || · || denotes the total variation norm on the space of signed measures. One would expect that every positive (Harris) recurrent process is ergodic, but in general this is not true (see [MT93a]). In the case of the stable-like process {X_t^α}_{t≥0}, these two properties coincide. Indeed, according to [MT93a, Theorem 6.1] and [SW12, Theorem 3.3] it suffices to show that if {X_t^α}_{t≥0} possesses an invariant probability measure π(·), then it is recurrent.
Let t > 0 be arbitrary. Then for each j ∈ N we have By letting t −→ ∞ we get that π(B j ) = 0 for all j ∈ N, which is impossible. Let us remark that a stable Lévy process is never ergodic since the Lebesgue measure satisfies the invariance property. Due to the above discussion and from Theorems 1.1 (i), 1.2 and [MT93a, Theorem 3.2], we get the following two additional long-time properties of stable-like processes.
In other words, for each initial position x ∈ R the event {X α t ∈ C c for any compact set C ⊆ R and all t ∈ R + sufficiently large} has probability 0.
(ii) Under assumptions of Theorem 1.2, for each initial position x ∈ R and each ε > 0, there exists a compact set C ⊆ R such that As already mentioned, the proofs of Theorems 1.1 and 1.2 are based on the Foster-Lyapunov criteria. Let us recall several notions regarding Markov processes we are going to need in the sequel. The extended domain of a càdlàg Markov process {X t } t≥0 on R d is defined by Let us remark that in general, the function g(x) does not have to be unique (see [EK86, Page 24])).
We call Ã the extended generator of {X_t}_{t≥0}. A function g ∈ Ãf is usually abbreviated by Ãf (2.1), and for the function Ãf(x) we can take exactly the function A^α f(x), where A^α is the infinitesimal generator of the stable-like process {X_t^α}_{t≥0} given by (1.1). Next, let {X_t}_{t≥0} be a càdlàg Markov process on R^d. The stable-like process {X_t^α}_{t≥0} is always conservative and, since it has càdlàg paths, it is also non-explosive.
A function V : R d −→ R + is called a norm-like function (for a càdlàg Markov process {X t } t≥0 ) if V ∈ D(Ã) and the level sets {x : V (x) ≤ r} are precompact sets for each level r ≥ 0.
Finally, a set C ∈ B(R d ) is called ν a -petite set (for a càdlàg Markov process {X t } t≥0 ) if there exist a probability measure a(·) on B(R + ) and a nontrivial measure holds for all x ∈ C and all B ∈ B(R d then the process {X t } t≥0 is Harris recurrent.
then the process {X t } t≥0 is positive Harris recurrent and Let us remark that in the case of the stable-like process {X α t } t≥0 , according to [Twe94, Theorems 5.1 and 7.1], the first requirements of Theorem 2.3 (i) and (iii) always hold, that is, every compact set is a petite set.
We end this section with the following observation. Assume that {X t } t≥0 is an ergodic Markov process with invariant measure π(·). Then, clearly, holds for all x ∈ R d and all bounded Borel measurable functions f (x). In what follows, we extend this convergence to a wider class of functions. For any Borel measurable function f (x) ≥ 1 and any signed measure µ(·) on B(R d ) we write where the supremum is taken over all Borel measurable functions g : Note that || · || 1 = || · ||. Hence, f -ergodicity implies ergodicity. Now, by Theorems 1.2, 2.3 (iii) and [MT93b, Theorem 5.3 (ii)], we get the following sufficient condition for f -ergodicity.
Proof of the main results
In this section we give proofs of Theorems 1.1 and 1.2. Before the proofs, we recall several special functions we need. The Gamma function is defined by the formula Γ(z) := ∫_0^∞ t^{z−1} e^{−t} dt, for Re(z) > 0. It can be analytically continued on C \ Z_− and it satisfies the following two well-known properties: Γ(z + 1) = zΓ(z) and Γ(1 − z)Γ(z) = π/sin(πz). (3.1) The Digamma function is defined by Ψ(z) := Γ′(z)/Γ(z), for z ∈ C \ Z_−, and it satisfies the properties (3.2)-(3.4) below, where γ denotes Euler's constant. The Gauss hypergeometric function is defined by the formula 2F1(a, b, c; z) := Σ_{n=0}^∞ (a)_n (b)_n / (c)_n · z^n/n! (3.5), for a, b, c, z ∈ C, c ∉ Z_−, where for w ∈ C and n ∈ Z_+, (w)_n is defined by (w)_0 = 1 and (w)_n = w(w + 1) · · · (w + n − 1).
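These classical special functions are implemented in standard scientific libraries, which makes identities such as the reflection formula in (3.1) easy to sanity-check numerically; the test points in the sketch below are arbitrary.

import numpy as np
from scipy.special import gamma, digamma, hyp2f1

z = 0.3
# Reflection formula (3.1): Gamma(1 - z) * Gamma(z) = pi / sin(pi * z)
print(np.isclose(gamma(1 - z) * gamma(z), np.pi / np.sin(np.pi * z)))    # True

# Digamma as the logarithmic derivative of Gamma (central-difference check)
h = 1e-6
num_derivative = (np.log(gamma(2.5 + h)) - np.log(gamma(2.5 - h))) / (2 * h)
print(np.isclose(digamma(2.5), num_derivative))                          # True

# Gauss hypergeometric function 2F1(a, b, c; z) evaluated inside the unit disc
print(hyp2f1(0.5, 1.0, 1.5, 0.25))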
The series (3.5) absolutely converges on |z| < 1, absolutely converges on |z| ≤ 1 when Re(c−a −b) > 0, conditionally converges on |z| ≤ 1, except for z = 1, when −1 < Re(c − b − a) ≤ 0 and diverges when Re(c − b − a) ≤ −1. In the case when Re(c) > Re(b) > 0, it can be analytically continued on C \ (1, ∞) by the formula and for a, b ∈ C, c ∈ C \ Z − and z ∈ C \ (0, ∞) it satisfies the following relation For further properties of the Gamma function, the Digamma function and hypergeometric functions see [AS84, Chapters 6 and 15]. We also need the following two lemmas.
Proof. First, note that for x ≥ 0, by the binomial expansion of (1 + x)^n, we have (1 + x)^n ≥ n x^{n−1} for all n ∈ N. Now, the desired result follows by the dominated convergence theorem.
Proof of Theorem 1.1 (i). The proof is divided in four steps.
Clearly, V ∈ C 2 (R) and the level set C V (r) := {x : V (x) ≤ r} is a compact set for all levels r ≥ 0. Furthermore, it is easy to see that holds for all x ∈ R. Hence, by the relation (2.1), V ∈ D(Ã) and for the functionÃV (x) we can take the function A α V (x), where A α is the infinitesimal generator of the stable-like process {X α t } t≥0 given by (1.1). In the sequel we show that there exists r 0 > 0, large enough, such thatÃV (x) ≤ 0 for all x ∈ (C V (r 0 )) c . Clearly, sup x∈C V (r 0 ) |ÃV (x)| < ∞. Thus, the desired result follows from Theorem 2.3 (i). In order to see this, since C V (r) ↑ R, when r ր ∞, it suffices to show that lim sup Step 2. In the second step, we find more appropriate expression for the functionÃV (x). We haveÃ
Let us define
Hence, in order to prove (3.8) it suffices to prove lim sup Furthermore, for x > 0 large enough we have Step 3. In the third step, we compute lim sup x−→∞ (3.10) Step 4. In the fourth step, we compute lim sup x−→∞ ) . In the case when α(x) = 1, we have by elementary computation we get Further, in the case when α(x) = 1, using integration by parts formula and (3.6), we get Let us put Thus, Now, by applying Lemma 3.2, it is easy to see that lim x−→∞ and lim x−→∞ C 2 (x) = 0. (3.14) Further, since 0 ≤ ϕ(y) ≤ 1, for |y| ≤ 1, we have Again by Lemma 3.2, it follows that lim x−→∞ C 3 (x) = 0.
(3.15) Further, by applying (3.1) and (3.7) we get Next, Again, since 0 < inf{α(x) : x ∈ R} ≤ sup{α(x) : x ∈ R} < 2, by applying (3.5) and Lemma 3.1, we get and from (3.5) we get Clearly, by the dominated convergence theorem we have and by using the Taylor series expansion of the function ctg(y) we get (3.21) Now, by combining (3.9) -(3.21) we get lim sup Finally, by combining (3.8), (3.9), (3.22) and assumption (1.2) we get lim sup The case when x < 0 is treated in the same way. Therefore, we have proved the desired result.
Proof of Theorem 1.1 (ii). The proof is divided in three steps.
Step 1. In the first step we explain our strategy of the proof. Let ϕ ∈ C 2 (R) be an arbitrary nonnegative function such that ϕ(x) = |x|, for |x| > 1, and ϕ(x) ≤ |x|, for |x| ≤ 1. Let θ ∈ (0, 1) be arbitrary and let us define the function V : R −→ R + by the formula Clearly, V ∈ C 2 (R) and the level set C V (r) = {x : V (x) ≤ r} is a compact set for all levels 0 ≤ r < 1. Furthermore, since the function V (x) is bounded holds for all x ∈ R. Hence, by the relation (2.1), V ∈ D(Ã) and for the functionÃV (x) we can take the function A α V (x), where A α is again the infinitesimal generator of the stable-like process {X α t } t≥0 given by (1.1). In the sequel we show that there exists 0 < r 0 < 1, such thatÃV (x) ≥ 0 for all x ∈ (C V (r 0 )) c . Clearly, sup x∈C V (r 0 ) |ÃV (x)| < ∞. Thus, the desired result follows from Theorem 2.3 (ii). Note that for the sets C, D ⊆ R, defined in Theorem 2.3 (ii), we can take C := C V (r 0 ) and D is an arbitrary closed set satisfying D ⊆ C c and λ(D) > 0. Now, from the continuity of the function V (x), we have sup In order to prove the existence of such r 0 , since C V (r) ↑ R, when r ր 1, it suffices to show that lim inf Step 2. In the second step we find more appropriate expression for the functionÃV (x). We haveà c(x) |y| α(x)+1 dy.
Let us define
Hence, in order to prove (3.23) it suffices to prove lim inf Furthermore, for x > 0 large enough we have By restricting the function 1 − (1 + t) −θ to intervals (−1, 1) and [1, ∞), and using its Taylor expansion, that is, we get Let us put Further, by (3.6), we have Next, by elementary computation, we have in the case when α(x) = 1, in the case when α(x) = 1, and Step 3. In the third step we prove lim inf First, by the mean value theorem, we have Further, since 0 < inf{α(x) : x ∈ R} ≤ sup{α(x) : x ∈ R} < 2, from (3.5) and the dominated convergence theorem, it follows (3.30) Thus, by combining (3.24) -(3.30), we have lim inf Next, it can be proved that the function is strictly decreasing, hence we choose θ close to zero. From, (3.2), (3.3) and (3.4), we get Now, the claim follows from condition (1.3). The case when x < 0 is treated in the same way. Therefore, we have proved the desired result.
Proof of Theorem 1.1 (ii). Let {X n } n≥0 be a Markov chain on the real line given by the transition kernel p(x, dy) := f x (y − x)dy, where f x (y) is the density function of the stable distribution with characteristic exponent p(x; ξ) = −iβ(x)ξ + γ(x)|ξ| α(x) . Hence, the chain {X n } n≥0 jumps from the state x by the stable distribution with the density function f x (y). By [ Proof of Theorem 1.2. We use a similar strategy as in Theorem 1.1 (ii). The proof is divided in three steps.
Clearly, V ∈ C 2 (R) and the level set C V (r) = {x : V (x) ≤ r} is a compact set for all levels r ≥ 0. Furthermore, since θ < inf{α(x) : x ∈ R}, we have for all x ∈ R. Hence, by the relation (2.1), V ∈ D(Ã) and for the functionÃV (x) we can take the function A α V (x), where A α is the infinitesimal generator of the stable-like process {X α t } t≥0 given by (1.1).
In the sequel we show that there exists r 0 > 0, large enough, such thatÃV (x) ≤ −1 for all x ∈ (C V (r 0 )) c . Clearly, sup x∈C V (r 0 ) |ÃV (x)| < ∞. Thus, the desired result follows from Theorem 2.3 (iii). In order to see this, since C V (r) ↑ R, when r ր ∞, it suffices to show that lim sup We havẽ c(x) |y| α(x)+1 dy + 1. Furthermore, for x > 0 large enough we have
Let us define
Step 3. In the third step we prove Further, since 0 < inf{α(x) : x ∈ R} ≤ sup{α(x) : x ∈ R} < 2, from (3.5) and the dominated convergence theorem, it follows Thus, by combining (3.32) -(3.38), we have lim sup = lim sup The case when x < 0 is treated in the same way. Therefore, we have proved the desired result.
Proof of Corollary 1.3. In the case when α ≠ 2, the claim easily follows from Theorem 1.1. Further, in the case when α = 2, that is, in the Brownian motion case, the corresponding infinitesimal generator is given by A^2 f(x) = γ f″(x) (recall that the symbol (characteristic exponent) is given by p(ξ) = γ|ξ|²) and clearly C²(R) ⊆ D(Ã). Thus, for any f ∈ C²(R), for the function Ãf(x) we can take the function A^2 f(x). Now, by taking again V(x) = log(1 + ϕ(x)) for the test function, where ϕ ∈ C²(R) is an arbitrary nonnegative function such that ϕ(x) = |x| for all |x| > 1, we get ÃV(x) = A^2 V(x) = −γ/(1 + |x|)² ≤ 0 for all |x| > 1, that is, the Brownian motion is recurrent.
"year": 2013,
"sha1": "e8b792d22fc5f118ac6e2ff060881946da75f74a",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.spa.2012.12.004",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "e8b792d22fc5f118ac6e2ff060881946da75f74a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
9590540 | pes2o/s2orc | v3-fos-license | Perspectives of health care professionals on cancer cachexia: results from three global surveys
Cachexia has a high prevalence in cancer patients, leading to reduced tolerance/response to treatment and decreased quality of life. Results from three global surveys presented herein demonstrate a definite need for increased awareness and educational initiatives to improve the knowledge and understanding of cancer cachexia among physicians in order to optimize patient outcomes.
introduction
Cachexia is a debilitating condition with high occurrence in cancer patients, particularly in those with advanced disease [1,2]; up to 20% of patients die as a result of cancer cachexia (CC) [3]. CC has been defined as a multifactorial syndrome characterized by muscle depletion, with or without loss of adipose tissue, which cannot be completely reversed by available treatments, leading to progressive functional derangements [4]. The pathophysiology of CC is characterized by reduced food intake and abnormal metabolism, which lead to a negative protein and energy balance [4]. The agreed-on diagnostic criteria for CC are a weight loss >5%, or a weight loss >2% in patients already showing depletion according to body mass index (BMI < 20 kg/m²) or skeletal muscle mass [4].
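A plain restatement of these consensus criteria as a decision rule might look like the following sketch (the function and field names are illustrative only, not taken from any clinical tool):

def meets_cachexia_criteria(weight_loss_pct: float, bmi: float,
                            muscle_depletion: bool = False) -> bool:
    """Criteria as cited above: weight loss > 5%, or weight loss > 2% in patients
    already showing depletion by BMI (< 20 kg/m^2) or skeletal muscle mass."""
    if weight_loss_pct > 5.0:
        return True
    if weight_loss_pct > 2.0 and (bmi < 20.0 or muscle_depletion):
        return True
    return False

print(meets_cachexia_criteria(weight_loss_pct=6.0, bmi=24.0))   # True
print(meets_cachexia_criteria(weight_loss_pct=3.0, bmi=19.0))   # True
print(meets_cachexia_criteria(weight_loss_pct=3.0, bmi=23.0))   # False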
It is well recognized that the influence of CC extends beyond weight loss, negatively impacting patients' psychological wellbeing, exercise capacity, quality of life (QOL), and tolerance and effectiveness of anticancer therapies, and is associated with systemic inflammation [5]. Nevertheless, CC is still frequently under-recognized, untreated, and considered inevitable for cancer patients [1,6].
Despite this, there is a growing understanding of CC as a continuum that can progress through various stages: pre-cachexia, cachexia, and refractory cachexia [4]. A recent study examining classification models for CC highlighted the need to recognize the complete cachexia trajectory [7]. The authors have also proposed that cachexia should be considered a comorbidity of cancer [8]. These perspectives on CC have practical implications since they might favor early recognition, diagnosis, and therapeutic interventions. A clear distinction of pre-cachexia would allow treatments that can prevent/delay CC to be initiated as early as possible [9]. Since existing treatments are limited in their ability to treat CC, it is crucial to shift attention to improving early detection of nutritional and metabolic impairments that could lead to CC [5,[8][9][10]. In support of this, several studies have shown advantages of early nutritional counseling and intervention in cancer patients, in terms of treatment tolerance and clinical outcomes [11][12][13][14][15].
The aim of this study, composed of three global surveys, is to gain insights into the awareness, understanding, and treatment practices among health care professionals (HCPs) involved in CC management.
methods
surveys design and inclusion criteria
Three surveys were conducted among HCPs by Synovate Healthcare (now Ipsos Healthcare; Surveys 1 and 3) and Adelphi (Survey 2), between 2011 and 2012. The design of each study was developed according to the best market research practice. All questions were tested for understandability through an internal process among experts of the companies involved in the surveys' development. Since this was the first initiative in the field, the questions chosen for these surveys were not externally validated. Partial completion was not allowed or used for final analysis. Questions asked or discussed in the surveys and reported herein are described in supplementary Table S1, available at Annals of Oncology online.
survey 1
survey design. A 20-min questionnaire developed by a research team from Synovate Healthcare (now Ipsos Healthcare) with a background in social sciences consisted of multiple-choice questions. Participants from 13 different countries (Table 1) were selected, the majority via online panels that screened HCPs for inclusion (see inclusion criteria below); these participants completed the surveys online. The exception was Indonesia, where recruitment was carried out face to face by a local team of recruiters, and paper surveys were completed. The collected data were subject to verification.
inclusion criteria. All participants had to have oncology as their primary specialty with more than 3 years of practicing experience, treat a minimum of 30 cancer patients each month, and be personally involved in the management and treatment of CC.
survey 2
survey design (Table 1). Participants were screened for inclusion based on their responses to questions on standard research criteria (i.e. to ensure that there were no affiliations with pharmaceutical companies, health care, or governmental agencies) and treatment practices. Quantitative data were collected via an Internet survey, and invitations to participants were sent via an email that contained a secure, respondent-specific link to the survey. Responses were reviewed daily for quality. Qualitative data were collected via a telephone interview conducted by an Adelphi team member. Analysis of the responses was conducted by reviewing all responses and developing a framework specific to the survey, in which to categorize the responses. This was carried out by experienced market research professionals.
inclusion criteria. Participants had to have a primary specialty in either medical or hematologic oncology. They should have been in full-time practice for 3-25 years post-residency and see/treat at least 100 solid tumor cancer patients and at least 20 non-small-cell lung cancer patients each month. In the month before taking the survey, participants should have treated a minimum of 30 patients for cancer-associated weight loss, specifically with prescription medication.
survey 3
survey design. In-depth, 45-min interviews were developed by Ipsos Healthcare. Interviews were conducted in eight different countries (Table 1), the majority by telephone. Exceptions were Turkey and Russia, where the interviews were face to face. Participants were recruited via local offices and freelance recruiters for Ipsos Healthcare. Interviews were conducted by local office and freelance qualified moderators, and audio recorded. A code frame was developed and applied to the verbatim responses for each open-ended question. Each verbatim response was then analyzed by trained coders and assigned to the appropriate code.
inclusion criteria. Participants had to be qualified oncologists or nutritionists for at least 3 years. They also had to be personally involved in the management and treatment of CC patients.
statistical analyses
Data were analyzed using SPSS software, version 19. Frequencies were described, and the results were reported using descriptive statistics.
definitions and synonyms of CC
When asked to provide a definition spontaneously, CC was defined most frequently as weight loss (86%) and loss of appetite (46%); over a quarter (27%) of HCPs provided the definition of muscle wasting/loss of body mass. When asked to spontaneously provide words that they considered to be synonymous with CC, participants most often used the terms loss of weight and decreased appetite (51% and 34%, respectively); over a quarter (28%) provided the term wasting/cancer wasting (supplementary Table S2, available at Annals of Oncology online).
diagnosis and treatment practices
Symptoms most commonly considered to be part of the CC criteria spectrum were weight loss (97%), loss of appetite (93%), failure to thrive (92%), and muscle wasting (91%) ( Figure 1A). The primary factor leading to the prescription of drug treatment of CC was weight loss >5% (69% of participants) ( Figure 1B). Additionally, half of the HCPs would consider drug treatment of CC if the patient had cancer disease both procatabolic and not responsive to anticancer treatment (50% of participants), or a BMI of <20 kg/m 2 plus a weight loss of >2% (46% of participants). When HCPs were asked what percentage of weight loss from baseline they considered to be indicative of CC and would prompt them to initiate treatment ( Figure 1C), almost half (46%) indicated a weight loss of 10%. However, 35% of participants responded that they would wait until weight loss was 15%-20%, and over 10% of participants would wait until weight loss was >25%.
Regarding the disease stage at which patients are first treated for CC, responses revealed that 61%-77% of patients are only initiated on CC treatment with prescription medication at Stage IV disease, regardless of the tumor type (Figure 2). Prostate and breast cancers had the highest percentages of patients receiving initial CC treatment at Stage IV (77% and 74%, respectively).
goals and desired improvements of CC treatment
The ability to promote total weight gain was rated by participants as the most important factor in selecting a therapy for CC treatment (mean importance rating 5.7 on a 7-point scale where 1 = not at all important and 7 = extremely important). This was closely followed by the ability to maintain current total weight/prevent further weight loss, lack of side-effects, and improvement of fatigue (mean importance ratings: 5.6, 5.6, and 5.5, respectively) (Figure 3).
The primary aims of HCPs when prescribing first-line treatment of CC were patient focused: enabling patients to improve or stabilize their weight, and ensuring that they can cope with cancer treatment and experience QOL improvements (Table 2).
In response to what developments HCPs would like to see in the treatment of CC, participants desired more specific CC treatments; therapies that enhance multiple aspects of a patient's QOL (ease of administration, few side-effects, improve weight, appetite and energy levels, and mood lifting), and therapies that are able to be used early on, and/or preventively, providing rapid improvements.
discussion
CC is under-recognized and often inadequately managed by HCPs in oncology, with patients not receiving treatment that could improve clinical response, QOL, and ultimately survival [11]. Treatment is dependent on a variety of factors such as awareness of the condition, clinical practice within the specific therapeutic area, and resources available to dedicate time to assess symptoms and prescribe treatment [16,17]. Our study provides insight into HCPs' attitudes toward nutritional and metabolic derangements, particularly oncologists who care for patients with the highest prevalence of malnutrition. This understanding is important to identify gaps in HCP knowledge and CC management, and also to develop strategies to assist HCPs in recognizing and effectively managing the condition.
Findings from the three global surveys reported herein demonstrate that the perception and clinical practices concerning CC vary among HCPs worldwide. There is still no clear and univocal concept of CC, although responses highlight that it is mainly perceived to be associated with weight loss and loss of appetite. Weight loss was also most frequently regarded as a symptom of CC, with the majority of participants in Survey 1 considering a weight loss of >5% to be the primary factor leading to the prescription of drug treatment of CC. Survey 1 covered 13 different countries across Europe, North America, and Australasia. Conversely, 48% of HCPs in Survey 2, who were all USA based, would wait for a weight loss of ≥15% before initiating treatment. Additionally, around two-thirds of cancer patients do not receive any CC prescription medication before the disease reaches Stage IV. These results suggest that while HCPs may be aware that weight loss and loss of appetite are consequences of cancer, there is a failure to recognize CC as a negative prognostic factor. Patients remain undiagnosed until late in the course of their disease, when the impact of CC on both QOL and treatment outcomes may have already been substantial.
While the understanding of the multifactorial pathogenesis of CC and its detrimental impact is improving, this knowledge still needs to be shared more widely and applied in clinical practice. The lack of nutrition studies during training translates into a limited understanding of the impact of nutritional status on treatment outcomes. With the focus shifting toward the importance of early intervention, increasing HCPs' understanding of the role of nutrition in cancer prognosis is important. A recent position paper of the European School of Oncology Task Force [5] highlights the need for a multimodal approach, including nutritional support and novel therapeutic agents, when managing malnutrition and CC. We have also recently proposed the 'parallel pathway', which encompasses a multiprofessional and multimodal approach to ensure that cancer patients receive appropriate and continuous nutritional and metabolic supports [9]. Another possible factor leading to suboptimal CC management is the lack of awareness of simple tools to identify patients who have symptoms of CC (e.g. standardized tools for body weight loss and appetite). A recent survey [18] demonstrated an urgent need for standardized symptom assessment to identify patients who are at risk earlier in the course of the condition. Although this was a small study, there is considerable interest in adopting a brief symptom assessment tool.
Furthermore, a study that scrutinized over 140 000 Web pages of various international oncology societies for guidelines on CC reported that global CC awareness appears to be extremely low [19]. Only a few (10/275) of the identified oncology societies provided guidelines, and of these, only 6 were for physicians, including the European Palliative Care Research Collaborative [20]. There is, therefore, a need for improved availability and effective dissemination of the most updated international clinical practice guidelines.
The strengths of this study include the large number of survey participants, comprising a good representation of HCPs treating CC patients, and its multinational nature. Outcomes can therefore be taken as a 'real-world' representation and can potentially inform the development of educational initiatives for HCPs and updates to current treatment guidelines. Conversely, the most relevant limitation of the study is that these are self-reported data, which could contain a bias in the responses. An additional limitation is the low response rate, often seen in market research, and may be a result of several factors including lack of enthusiasm for online surveys, current workload, and general interest in a topic. Other study drawbacks relate to the questions being presented differently in the three surveys and responses not always being grouped into country-specific responses. As such, we were unable to directly compare similarities and differences in results between the surveys or to make comparisons between countries. Nevertheless, the aim of the surveys was to provide an overall representation of treatment practices, and considering the large number of HCPs involved, data from each survey remain reliable in the authors' opinion. Future studies of actual HCPs' practices are warranted, to provide greater insights into unmet needs of CC management in the clinical setting. This study underscores the need for increased awareness of CC and its management. Effective dissemination of current guidelines may help establish the criteria for CC diagnosis and treatment, and future guidelines should emphasize the importance of recognizing and treating CC at an earlier stage. Efforts should focus on identifying barriers and knowledge gaps, and tailoring educational initiatives to meet HCPs' needs. Additionally, providing effective, concise, and clinically relevant nutritional and metabolic guidelines to oncology trainees is vital.
acknowledgements
The authors would like to thank the HCPs for their participation in these surveys. The authors also wish to thank Eva Polk, PhD, CMPP, Siddharth Mukherjee, PhD, and Delyth Eickermann, PhD, (TRM Oncology, The Hague, The Netherlands), for their medical writing assistance in the preparation of this manuscript. The authors are fully responsible for all content and editorial decisions for this manuscript. The authors participated in data analysis and independently interpreted the data and directed manuscript content and development. This manuscript reports and discusses results from a survey sponsored by Helsinn. The authors have no further disclosures and no conflict of interest with the companies involved in performing and supporting the surveys.
references
Table 2. Goals of the participants for treatment of CC patients
Objectives of HCPs / Further details
Improve or stabilize weight • Although improving weight is the ideal outcome, stabilizing or maintaining weight is a more realistic goal for many HCPs.
Improve QOL • Improve general well-being, minimize pain, improve physical and mental strength, lift patient mood and energy levels, and stimulate appetite.
Minimize side-effects • Reduce the additional burden of side-effects. • Alleviate patient distress from tolerability issues.
Improve nutritional status • The main focus of the nutritionist. • Typically one of the first goals that is focused on.
Manage individual symptoms • Address nausea and vomiting, motility issues, mood, constipation, and pain.
Primary treatment and tumor response • CC will improve if the cancer treatment is effective. • An effective CC treatment helps to avoid the termination of cancer treatment and thereby helps improve cancer therapy outcomes.
Do nothing (no active interventions) • Especially if the patient is not disturbed by lack of desire for food (oncologists).
CC, cancer cachexia; HCP, health care professional; QOL, quality of life. | 2018-04-03T04:45:58.457Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "0409e8131cf8051cbffd78290e3da8e3e4c31660",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/annonc/mdw420",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0409e8131cf8051cbffd78290e3da8e3e4c31660",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52098899 | pes2o/s2orc | v3-fos-license | Remaining Useful Life Estimation for Informed End of Life Management of Industrial Assets: A Conceptual Model
The management of the End of Life phase of the lifecycle of industrial assets is an increasingly relevant issue for companies dealing with aging assets. This paper provides a conceptual model that includes the different aspects that should be considered by a comprehensive methodology to support an informed decision-making process in this regard. In particular, the work stems from the need to estimate the Remaining Useful Life (RUL) of an asset, in order to define an Ageing Asset Strategy, through a multi-disciplinary approach instead of a purely technical one. To this end, the proposed model highlights the different End of Life types of an asset that should be considered by a comprehensive RUL estimation methodology.
Introduction
Nowadays, ageing assets represent a particular challenge for asset managers [1]. Society's inventory of capital goods is increasing as well as ageing in western societies. This is very much the case for infrastructures like roads, railways, electric power generation, transport, and aircraft [2], but also involves production facilities. Many manufacturing plants, at least in Europe, have been built in the years after the Second World War, and assets therein are currently approaching the end of their expected functional lives [3]. Ageing of asset systems is one of the reasons why physical Asset Management (AM) has become a more essential part of organizations' activities during the last decades [4], [5]. Therefore, it is more and more important to define an effective Ageing Assets Strategy for companies managing capital assets. The implementation of such a strategy should provide asset managers with the tools they need to determine the most cost-effective strategy for the ageing assets under their stewardship [1]. As stated by [6], life extension is an alternative to conventional End of Life management strategies, such as decommissioning and replacement of capital equipment, and has been gaining popularity in recent years. The decision to extend the life of an asset can potentially bring great economic advantages and value to asset owners, managers and stakeholders if all the necessary information is effectively collected to support it, and proper decision-making is applied. In fact, it is crucial for decision makers to have a holistic view of the current processes and issues involved in undertaking a life extension program.
Asset Life Extension processes generally include: definition of premises for the life extension program, assessment of asset condition, estimation of remaining useful life (RUL) and evaluation of different strategies for life extension [6]- [8].
The Asset Integrity Management (AIM) approach, developed in the Oil&Gas industry, can be considered the predecessor of the ageing and life extension programs that are nowadays under discussion. According to [7], AIM and Asset Life Extension are overlapping facets of the same requirement to ensure offshore safety, with the former principally concerned with contemporaneous integrity management, and the latter requiring a forward-looking approach anticipating future changes, challenges and threats, and forecasting the consequences on an installation's risk profile. In the document by [7], a process to preserve asset integrity and extend its life is proposed; its main phases are: understanding the asset condition, recognizing ageing and obsolescence, and managing life extension. Even if the model does not analyze in depth how condition assessment and RUL estimation should be performed, it makes an important remark: physical degradation is not the only force at work in an ageing asset; obsolescence should be taken into account too in industrial installations.
Based on these premises, this research aims to propose a conceptual model to estimate the Remaining Useful Life (RUL) for informed decision-making in End of Life (EOL) management of industrial assets, identifying and considering the different potential causes of ageing that are relevant to define an Asset Strategy through a multi-disciplinary approach instead of a purely technical one. Section 2 shows findings from the literature analysis about EOL causes and RUL estimation approaches. Section 3 describes the proposed conceptual model. Section 4 is dedicated to the conclusions.
Literature analysis
Estimating the exact moment at which an asset will reach its EOL, and therefore calculating its RUL, is an essential step in the life extension process, but it is not a trivial task [6]-[8]. RUL estimation is a key research topic of Condition Based Maintenance (CBM) and Prognostics and Health Management (PHM) [9], [10], but some gaps still exist, mainly related to the need to address the problem from a multi-disciplinary perspective and not only from a technical one [8]. In fact, one open issue in the literature concerns which ageing causes should be considered for RUL estimation in order to obtain an indicator that can be used for informed AM decision-making, overcoming the purely physical ageing perspective of an asset. For this reason, the first objective of this paper is to analyze and systematize the discussion about EOL causes. Secondly, the analysis of current approaches in the literature for RUL estimation is presented.
End of Life Causes
In studies on life extension of assets in the Oil&Gas industry [6], [7], the presence of two distinct aspects of ageing is hinted at. Indeed, they suggest separating causes of ageing related to the physical condition of the asset from causes related to obsolescence, which can be linked to various factors other than physical degradation.
Other studies support this idea by clearly distinguishing obsolescence from any type of physical degradation. For example, the work [11] demonstrated the presence of two forces of ageing with a case study distinguishing between what they call the traditional forces of mortality and technology obsolescence. The work [12] recently confirmed this insight, asserting that the main reasons for the end of life of a system can be divided between physical ageing and wear on the one hand, and obsolescence on the other, which they define as the inability to satisfy increasing requirements of the users. Moreover, the authors identify three types of obsolescence, i.e., economic obsolescence, functional obsolescence and spare parts obsolescence, asserting that what other scholars have defined as technology obsolescence is one of the causes, possibly the most frequent, of the three obsolescence types. Table 1 shows and classifies the different types of EOL causes, adapting the work by [12] and including other relevant literature. In detail, two main EOL causes can be defined: physical ageing and obsolescence. Physical ageing is considered the main form of ageing by most of the literature, and its only cause is physical degradation increasing the failure risk. Most methodologies related to RUL estimation are specifically dedicated to this type of EOL cause, as shown in the next section.
Obsolescence is the other EOL cause type and it can be classified into: Economic obsolescence, caused mainly by technology advance and emerging when the current asset is no longer profitable compared with new equipment; Functional obsolescence, emerging when the asset is unable to satisfy new requirements after the introduction of new regulations or a change in the market; and Diminishing Manufacturing Sources and Material Shortage (DMSMS) obsolescence, mainly related to the obsolescence of spare parts, which leads to difficulty in repairing the asset and is caused, mostly but not only, by technology advance.
Remaining Useful Life estimation approaches
Currently, the estimation of the RUL is a topic of extensive research efforts [13]. If the RUL were known exactly, an asset could be exploited to generate optimal value for its owner, without increased failures or costs. Furthermore, knowing the processes or incidents that cause the end of the asset's useful life would allow the owner to take preventive measures to extend the asset's life [3], [13]. However, there are some weaknesses related to current approaches to estimate asset RUL. In particular, [3] highlighted two of those weaknesses.
First, many approaches are limited to the technical aspects of the asset or to a mainly statistical approach to deterioration mechanisms; second, many quantitative attempts fail because of the quality and availability of data.
Hence, on the one hand, there is a need for methods that adopt a multi-disciplinary approach, considering other ageing causes, connected with obsolescence, together with physical degradation [8]. On the other hand, there is a need for methods suitable for situations in which limited or no quantitative data are available, for example methods based on the knowledge of experts [10].
Looking at the RUL estimation techniques in the literature, it is clear that most scholars have focused their studies mainly on the development of quantitative methods to support condition-based maintenance programs [14]-[19]. These works have been grouped and defined generically as quantitative RUL estimation methods and techniques because their objective is to link one or more degradation mechanisms to the life of the asset element, mainly at low levels of the asset decomposition (in a complex asset structure), to predict when it will fail. They are often complex methodologies and models concentrating on a specific part or assembly to develop a technique that works in certain conditions. Besides, they can be very precise when data are available, but they are difficult to apply when reliable data are scarce. Moreover, as already stated, they are monodisciplinary since they are limited to the technical aspects of the asset. The work [8] recognized the importance of a multi-disciplinary approach to RUL prediction for Life Extension (LE) decision-making. Nevertheless, their model still focuses on the technical input to decision-making, by establishing a process for technical health assessment.
A group of scholars, starting from the difficulties that often emerge when using quantitative methodologies, tried to overcome them by developing a model that also considers qualitative information along with quantitative data and that is multi-disciplinary [3]. In an attempt to make RUL estimation a multi-disciplinary practice, [3] propose a new methodology called Lifetime Impacts Identification Analysis that aims to identify the external impacts that could affect an asset's life in the future without trying to calculate the exact date of end of life. According to the authors, it is important to consider four dimensions of ageing: i) the technological perspective, which is related to the question of how long the asset (and/or its output) will comply with the existing technical specifications; ii) the economic perspective, concerning the costs of operating and maintaining a piece of equipment; iii) the compliancy perspective, which deals with the 'license to operate' of the company; and iv) the commercial perspective, which considers whether the asset (and its production) is still able to fulfil the demands of the market. This methodology is an interesting starting point for research looking for comprehensive methodologies to support the EOL management of industrial assets.
Proposed Conceptual Model
Based on the analysis of the literature, the proposed conceptual model intends to provide the basis to develop a guide for asset managers to determine in a simple but systematic manner the RUL of an asset, enlarging the concept of RUL by considering different EOL causes together with physical ageing. The proposed methodology is intended to facilitate a rigorous approach to address the problem, bearing in mind that each step of the process will require the application of systematic judgment and experience, to achieve informed decisions in the EOL phase and to eventually develop proper asset strategies. The methodology is built on this simple postulate: while physical degradation certainly causes the end of physical life, it is not the only way an asset can reach its EOL; starting from the definitions provided by the literature, different EOL types can be defined, associated with different EOL causes. In particular, we define four types of EOL besides the end of physical life: i) end of service level life, ii) end of capacity life, iii) end of financial life, iv) end of maintainable life. In the remainder, each of them is described in detail and related to one type of EOL cause as defined in Table 1.
Regarding the physical ageing EOL cause, the EOL type can be identified as follows: End of physical life: it occurs when an asset is physically non-functioning (e.g., failed, collapsed, stopped working). Physical mortality failure occurs when the consumption of an asset caused by usage over time reduces performance to such an extent that the asset is unable to sustain performance at or above minimum requirements. A physical mortality failure could occur due to such things as age, wear and tear, environmental factors, accidental damage or operator error.
Regarding the Functional Obsolescence EOL cause, two types of EOL can be identified, related to external factors, and they are the following: End of service level life: it occurs when the expected levels of service have changed since the acquisition of the asset such that the performance requirements now imposed on the asset exceed the functional design capabilities of the asset. This could be due to changes in regulations (such as effluent, air, water quality or safety requirements) or due to changes in customer needs. End of capacity life: it occurs when the volume of demand placed on an asset exceeds its design capability.
Regarding the Economic Obsolescence EOL cause, one type of EOL can be identified, and it is: End of financial life: it occurs when an asset ceases to be the lowest cost alternative to satisfy a specified level of performance or service level, i.e. when the cost to sustain required performance from an asset under current O&M practices exceeds that of feasible alternatives (where the amortized cost to acquire plus the costs to operate and maintain a new or renewed asset is less than the operation and maintenance of the existing asset). This type of EOL is often driven by outdated technology or design.
Regarding the DMSMS Obsolescence EOL cause, one type of EOL can be identified, and it is: End of maintainable life: it occurs when it becomes inconvenient to maintain the asset, either because the costs of spare parts are increasing or because spare parts are not available in the market. This type of EOL is strictly connected with the unavailability of spare parts, which can be caused by the development of a new technology, by a supplier's bankruptcy or by the supplier's decision, e.g., to stop producing the parts. This type of EOL is also connected with the end of financial life since it can lead, as a consequence, to an increase in the costs of operating the asset.
The different EOL types have to be considered to estimate the RUL through a complete evaluation, and asset managers need to identify which type of EOL will likely arise first. In fact, while all five causes are at work on an asset at all times, only one type of EOL is expected to be the most imminent in time. From this perspective, the RUL is defined as the lowest expected life for a selected asset given its operating environment, where that life is derived from a determination of the most imminent EOL type, i.e., the minimum value among the different types of Time to EOL (TT_x_EOL):
RUL = Min(TT_physical_EOL, TT_capacity_EOL, TT_service_level_EOL, TT_financial_EOL, TT_maintainable_EOL) (1)
Once the RUL is estimated, it enables forecasting of the point in time at which the asset will likely end its life. This is relevant for the asset strategy at its EOL: it can be used to evaluate the lead time to be considered as the horizon to define the EOL strategy to adopt (e.g., major repair, refurbishment or replacement). Figure 1 summarizes the steps to estimate the RUL of an asset following the proposed model. The most challenging aspect is how to estimate each TT_x_EOL since, for each type of EOL, many factors may be involved (see Table 1) and should be considered to estimate it in a quantitative way. Moreover, uncertainty must be managed. Therefore, adequate sources of information and knowledge should be available, to finally obtain the most likely EOL of the asset.
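To make the aggregation in equation (1) concrete, the following minimal Python sketch computes the RUL as the minimum over the estimated times to each EOL type and reports which type drives it. It is illustrative only: all names and numbers are hypothetical, and each time-to-EOL estimate is assumed to be supplied by a separate (quantitative or expert-based) method.

```python
# Minimal sketch of the RUL aggregation in equation (1): RUL is the minimum of the
# estimated times to each End of Life (EOL) type. All values here are placeholders.
from dataclasses import dataclass

@dataclass
class TimeToEOL:
    physical: float       # years until end of physical life
    service_level: float  # years until end of service level life
    capacity: float       # years until end of capacity life
    financial: float      # years until end of financial life
    maintainable: float   # years until end of maintainable life

def remaining_useful_life(tt: TimeToEOL) -> tuple[float, str]:
    """Return the RUL and the most imminent EOL type, per equation (1)."""
    estimates = {
        "physical": tt.physical,
        "service_level": tt.service_level,
        "capacity": tt.capacity,
        "financial": tt.financial,
        "maintainable": tt.maintainable,
    }
    eol_type = min(estimates, key=estimates.get)
    return estimates[eol_type], eol_type

# Example with illustrative numbers only:
rul, driver = remaining_useful_life(TimeToEOL(12.0, 7.5, 9.0, 6.0, 10.0))
print(f"RUL = {rul} years, driven by end of {driver} life")
```

In a real application each field would carry an uncertainty distribution rather than a point estimate, in line with the remark above that uncertainty must be managed.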
Conclusions
This paper discusses the relevance of a comprehensive methodology to estimate the RUL of an industrial asset in order to support an informed decision-making process to manage its EOL phase. In fact, RUL estimation is a critical process for the definition of asset strategies at their EOL. To date, most RUL models that can be found in the literature focus on the physical ageing EOL cause. Very few attempts have been made to estimate the impact of obsolescence on the asset's EOL from a multi-disciplinary perspective. In this paper, different EOL causes are identified and classified and, based on that, a conceptual model is proposed including the EOL types that should be considered in a comprehensive methodology to be used for Asset Management decision making about the EOL of industrial assets. The proposed model can be used as a reference for future research on methods for decision and information support in EOL management of industrial assets. The model is intended to be conceptual and opens the path for future research on the methods that can be used for translating the various aspects of End of Life into Time to EOL and hence for estimating the RUL through the proposed perspective. In fact, different methods can fit this model, either semi-quantitative, like multi-criteria decision-making support methods, or quantitative, like optimization based on consideration of optimal usage of the asset. The proposed model can also be useful in the context of Product-Service Systems, both for the asset owner/operator and the OEMs [20], and future research on this aspect is envisioned by the authors. | 2018-08-28T01:19:49.998Z | 2018-08-26T00:00:00.000 | {
"year": 2018,
"sha1": "28c276acf1c5b31664541867d10f7230555d2959",
"oa_license": "CCBY",
"oa_url": "https://hal.inria.fr/hal-02177879/file/472851_1_En_42_Chapter.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "ef658b2ec6fd120edbd1a2b638958e86808d5056",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
220387337 | pes2o/s2orc | v3-fos-license | Exposure to cholinesterase inhibiting insecticides and blood glucose level in a population of Ugandan smallholder farmers
Objectives The risk of diabetes mellitus may be elevated among persons exposed to some pesticides, including cholinesterase-inhibiting insecticides (organophosphates and carbamates). The objective of this study was to investigate how acetylcholinesterase activity was associated with mean blood glucose levels among smallholder farmers in Uganda. Methods We conducted a short-term follow-up study among 364 smallholder farmers in Uganda. Participants were examined three times from September 2018 to February 2019. At each visit, we measured glycosylated haemoglobin A (HbA1c) as a measure of long-term average blood glucose levels. Exposure to organophosphate and carbamate insecticides was quantified using erythrocyte acetylcholinesterase normalised by haemoglobin (AChE/Hb). For a subgroup of participants, fasting plasma glucose (FPG) was also available. We analysed HbA1c and FPG versus AChE/Hb in linear mixed and fixed effect models adjusting for age, sex, physical activity level, and consumption of fruits and vegetables, alcohol and tobacco. Results Contrary to our hypothesis, our mixed effect models showed significant correlation between low AChE/Hb and low HbA1c. Adjusted mean HbA1c was 0.74 (95% CI 0.17 to 1.31) mmol/mol lower for subjects with AChE/Hb=24.3 U/g (35th percentile) compared with subjects with AChE/Hb=25.8 U/g (50th percentile). Similar results were demonstrated for FPG. Fixed effect models showed less clear correlations for between-phase changes in AChE/Hb and HbA1c. Conclusions Our results do not clearly support a causal link between exposure to cholinesterase-inhibiting insecticides and elevated blood glucose levels (expressed as HbA1c and FPG), but results should be interpreted with caution due to the risk of reverse causality.
to 1.90) of DM for highest versus lowest tertile of exposure to any pesticide, but most studies considered only exposure to organochlorine insecticides such as dichloro-diphenyl-trichloroethane. 3 Less is known for other insecticides that are more widely used in modern agriculture, and most earlier studies are cross-sectional and use crude exposure metrics, making them sensitive to bias. Given their diverse toxicodynamic modes of action, 4 results from one class of pesticides cannot be extrapolated to other classes. Organophosphate insecticides have been suggested to increase the risk of DM by perturbing gluconeogenesis and glycogenolysis, and by leading to insulin resistance through oxidative stress and pro-inflammatory effects. 5
Key messages
What is already known about this subject? ► A number of studies have suggested that exposure to cholinesterase inhibiting insecticides is associated with increased risk of diabetes mellitus, but the evidence is limited by cross-sectional designs and poor confounder control.
What are the new findings? ► Contrary to our hypothesis, our follow-up study with good confounder control showed a significant correlation between lower erythrocyte cholinesterase and lower glycated haemoglobin A. ► Similar patterns were seen for fasting plasma glucose.
How might this impact on policy or clinical practice in the foreseeable future? ► Our findings do not seem to support a causal link between cholinesterase inhibiting insecticides and diabetes mellitus, but both exposure and outcome of interest were indirectly assessed, so caution is warranted when interpreting results. ► Future studies on the relationship between cholinesterase inhibitors and diabetes mellitus should combine acetylcholinesterase with other objective exposure metrics and validated subjective exposure information.
[Figure 1 caption: Flow chart of participant recruitment.]
Due to the burden of morbidity and mortality from DM 2 and the widespread use of pesticides, 6 even modest risk increases could be relevant at population level. The absolute amounts of pesticide used in sub-Saharan Africa are relatively low compared with other regions, 7 but farmers may be highly exposed due to unsafe pesticide handling practices. A 2014 study in the Wakiso District in Uganda showed that 94% of smallholder farmers used pesticides, that organophosphate insecticides were among the most common, and that 70% of farmers applied pesticides while wearing their regular clothes. 8 The overall prevalence of DM in Uganda is 2.7%. 9 To the best of our knowledge, no previous study has focused on hyperglycaemia among Ugandan farmers. The purpose of this study was to investigate how objectively quantified exposure to cholinesterase inhibiting insecticides was related to average blood glucose levels in a cohort of smallholder farmers in the Wakiso District.
METHODS
Study design
In a short-term follow-up study among smallholder farmers from the Wakiso District in central Uganda, we collected information on both exposure and outcome at baseline in September-October 2018, and at two follow-up examinations in November-December 2018 and January-February 2019. Participants were recruited from an organisation of conventional farmers and an organisation of farmers working towards organic certification of some crops. The timing and method of recruitment was intended to maximise exposure variation within and between persons, as we had been informed that the main insecticide application season in the area was October-November (personal communication, Aggrey Atuhaire, Uganda National Association of Community and Occupational Health).
Participant recruitment
We attended the weekly meetings of farmers' groups from the two organisations and invited all members, excluding pregnant women and farmers under 18 years. The list of eligible individuals was randomised using a pseudo-random number generator. Potential participants were then invited to the examination centre in sequence. If potential participants could not be reached by phone, or were unable to come, the next person on the list was approached. Figure 1 provides an overview of participant recruitment and exclusion. Out of 532 persons recruited at the meetings, 380 came to the examination centre, and 364 participated at baseline. There was only negligible loss to follow-up: 356 and 354 persons participated in phase II and III, respectively.
Outcome assessment
Our main outcome was glycosylated haemoglobin A (HbA 1c ), a measure of the average blood glucose levels for the last 8-12 weeks. 10 Potassium EDTA venous blood was analysed for HbA 1c at the examination centre using the HemoCue HbA1c 501 system (HemoCue, Ängelholm, Sweden) according to the manufacturer's instructions. 11 More than 90% of samples were analysed for HbA 1c within 2 hours.
As a secondary outcome, participants who came to the examination in the morning after a 12-hour fast were also tested for capillary blood fasting plasma glucose (FPG) using the HemoCue Glucose 201 RT (HemoCue). FPG was analysed immediately after sampling. Because of logistic constraints, participants were not randomised for FPG testing; we tested participants who were able and willing to come fasting in the morning.
Exposure assessment
Exposure to organophosphate and carbamate insecticides was quantified by analysis of capillary blood erythrocyte acetylcholinesterase (AChE). The primary toxicodynamic target of organophosphate and carbamate insecticides is nervous system acetylcholinesterase. 12 Measurements of the erythrocyte isoenzyme can be used to express exposure. 13 Analysis was performed immediately after sampling, using the Test-Mate ChE Cholinesterase Test System Model 400 (EQM Research, Cincinnati, Ohio, USA) according to the manufacturer's instructions. 14 The device automatically normalised the AChE by the Hb, resulting in our primary exposure metric, AChE/Hb.
Biochemical results were manually recorded, and later double entered into the Open Data Kit (ODK) Collect app. 15 Extensive quality control of the biochemical analyses was performed; results have been reported elsewhere. 16
Confounder selection and assessment
Potential confounders were selected a priori based on Directed Acyclic Graphs. 17 The basic set of confounders comprised sex, age, physical activity level (metabolic equivalent task minutes in the last week) and current consumption of alcohol (g/week), tobacco (g/day) and fruits and vegetables (servings per day in the last week). An extended set of confounders also included years of full-time education (proxy for socioeconomic status) and Body Mass Index (BMI). Subjective information on confounders was collected using the WHO STEPS 18 and Global Physical Activity Questionnaire (GPAQ). 19 Subjective pesticide exposure information was provided by a modified version of a questionnaire designed to capture exposure among smallholder farmers in low-income and middle-income countries. 20 21 Subjects were interviewed in Luganda or English, depending on their own language preferences. Answers were digitised immediately using ODK Collect. 15 Weight was measured in a standardised manner 22 with the participant wearing only light clothes, using a medical scale (seca robusta 813, seca, Hamburg, Germany). Height was measured in a standardised manner 22 using a stadiometer (SM-SZ-300, Sumbow Medical Instruments, Ningbo, China). Anthropometric data were digitised immediately using ODK Collect. 15
Statistical analyses
Since DM can be considered the extreme end of a spectrum of hyperglycaemia, 1 we analysed HbA1c and FPG as continuous variables. To account for family relationships and repeated measurements, data were analysed in a linear mixed effect model with fixed effects for the exposure and confounder variables and random effects for family and participant. The regression coefficient for the exposure variable was allowed to vary randomly between participants. The model can be written as:
y = β0 + βb·b + Σi βc,i·ci + α + τ + ε
where y is the outcome, and β0 is the intercept; βb is the regression coefficient for the effect of the exposure b on y; βb is normally distributed, and each person has their own value of βb; βc,i is the regression coefficient for the effect of the ith confounder ci on y. All participants have the same value of βc,i. The random effects for family and person are α and τ, and ε is an error term. All three measurements for each participant are included in the same regression model.
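As a purely illustrative sketch of how such a model could be fitted (the study itself used Stata 15; this is not the authors' code, the data file and variable names are hypothetical, the exposure is entered linearly rather than as a spline, and the family-level random intercept is omitted for brevity), a Python version using statsmodels might look like:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per study phase.
df = pd.read_csv("pexadu_long.csv")  # assumed columns: hba1c, ache_hb, age, sex,
                                     # alcohol, tobacco, met_minutes, fruit_veg, person

# Random intercept and random AChE/Hb slope per person; the additional family-level
# random intercept described in the paper is left out to keep the sketch short.
model = smf.mixedlm(
    "hba1c ~ ache_hb + age + C(sex) + alcohol + tobacco + met_minutes + fruit_veg",
    data=df,
    groups=df["person"],
    re_formula="~ ache_hb",
)
result = model.fit()
print(result.summary())
```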
In secondary analyses, we analysed our results in a fixed effect model focusing on changes within persons from one phase to another, in order to remove the effect of unmeasured time-invariant confounders. The model can be written as:
Δy = Σi βx,i·Δxi + α + ε
where Δy is the change in the outcome between two phases, and Δxi is the change in the ith independent variable xi between the phases. All participants have the same regression coefficient βx,i. The random effect for family is α, and ε is an error term.
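A similarly hedged sketch of the within-person change analysis, again with hypothetical variable names and with clustered standard errors by person standing in for the family random effect used in the paper, could be:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pexadu_long.csv")       # hypothetical long-format data
df = df.sort_values(["person", "phase"])

# Within-person first differences between consecutive phases remove
# time-invariant confounders such as sex or stable lifestyle factors.
diffs = (
    df.groupby("person")[["hba1c", "ache_hb", "age", "alcohol",
                          "tobacco", "met_minutes", "fruit_veg"]]
      .diff()
      .dropna()
      .add_prefix("d_")
)

change_model = smf.ols(
    "d_hba1c ~ d_ache_hb + d_age + d_alcohol + d_tobacco + d_met_minutes + d_fruit_veg",
    data=diffs,
).fit(cov_type="cluster", cov_kwds={"groups": df.loc[diffs.index, "person"]})
print(change_model.summary())
```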
Python 3 (Python Software Foundation, https://www. python. org/) and Stata 15 (StataCorp, College Station, Texas, USA) were used for data management, while data were analysed in Stata 15. Sex was entered as a categorical variable; all other independent variables were continuous and were generally modelled using restricted cubic splines with four knots to allow non-linear exposure-response relationships. However, effects of alcohol and tobacco consumption were assumed linear, as the low numbers of persons smoking or drinking alcohol did not allow the use of splines (table 1). Spline analysis results were plotted using xblc. 23 We included participants who had at least one study visit with no missing values for any of the included variables. As the loss to follow-up was very low, we did not adjust for it statistically. Sensitivity analyses were conducted to test the robustness of our findings; online supplementary appendix 2 describes all analyses and results. Analyses were prespecified in a published analysis protocol, 17 and deviations from protocol are also listed in online supplementary appendix 2.
To aid in the interpretation of the findings, we conducted analyses of variance for all biochemical metrics. For each metric (eg, HbA 1c ), we fitted a linear mixed effect model that had no fixed effect terms, but included random effect terms for family and participant. Results are shown in online supplementary appendix 5.
RESULTS
Demographic data at baseline are presented in table 1, both overall and stratified by AChE/Hb below/above the median (26.3 U/g). The purpose of the stratification is to check for imbalances in demographic variables that might introduce confounding. Reported physical activity levels were high, 19 reflecting the degree of manual labour in Ugandan smallholder agriculture. Somewhat surprisingly, the use of pesticides, including cholinesterase inhibitor insecticides, was similar in the two AChE/Hb strata. Self-reported pesticide exposure information is available in online supplementary appendix 3, while online supplementary appendix 4 lists the use of personal protective equipment (PPE) during handling of pesticides. Almost all the cholinesterase inhibitor insecticides used by study participants were organophosphates; carbamates were seldom used. The use of PPE was very low, and gumboots were the only type of PPE used by >50% of directly exposed persons. A significant negative trend in AChE/Hb across project phases was seen, with a mean change per phase of −0.74 (95% CI −0.85 to −0.63) U/g (table 2). A positive trend in HbA1c was evident, with a mean change per phase of 0.41 (95% CI −0.03 to 0.85). For FPG, no evident trend was seen.
Exposure and outcome variables showed considerable within-person variance (online supplementary appendix 5): for log-transformed HbA1c, the ratio between within-person variance and the sum of between-family and between-person variance was 0.74, while the ratio was 1.28 and 0.22 for log-transformed FPG and for AChE/Hb, respectively.
[Table 2 legend: Data from each phase presented as mean±SD. Δ/phase denotes the mean change (95% CI) when project phase increases by one, based on a mixed effect model with fixed effect for phase and random effects for family and person. AChE, erythrocyte acetylcholinesterase; FPG, fasting plasma glucose; HbA1c, glycated haemoglobin; LOQ, limit of quantitation; NGSP, National Glycohemoglobin Standardization Program.]
In both unadjusted and adjusted analyses, a significant association was demonstrated between low AChE/Hb and low HbA1c (figure 2). The reference is the median (50th percentile) of AChE/Hb values in each model. In the basic adjusted model, mean HbA1c was 0.74 (95% CI 0.17 to 1.31) mmol/mol lower for subjects with AChE/Hb=24.3 U/g (35th percentile) than for reference subjects with AChE/Hb=25.8 U/g (50th percentile). Furthermore, subjects with AChE/Hb=27.1 U/g (65th percentile) had mean HbA1c 0.63 (95% CI 0.12 to 1.14) mmol/mol higher than the reference. FPG was also lower for subjects with low AChE/Hb in both unadjusted and adjusted analysis (figure 2). In the basic adjusted model, mean FPG was 0.06 (95% CI 0.01 to 0.12) mmol/L lower for subjects with AChE/Hb=24.3 U/g (35th percentile) than for reference subjects with AChE/Hb=25.4 U/g (50th percentile), and 0.11 (95% CI 0.01 to 0.20) mmol/L higher for subjects with AChE/Hb=27.1 U/g (65th percentile). To put these numbers in context, the same models showed that mean HbA1c was 5.98 mmol/mol higher for subjects aged 68.7 years (95th percentile) compared with subjects aged 23.2 years (5th percentile), and mean FPG was 0.47 mmol/L higher. Results from sensitivity analyses are provided in online supplementary appendix 2, and they showed that our findings were robust to different modelling strategies. The most important sensitivity analyses are described in the following paragraphs.
Due to incomplete data for family relationships, in one sensitivity analysis we derived CIs using a bootstrap procedure, in addition to the random effect for family. Results from this analysis were similar to the main analysis, and remained statistically significant.
Further adjusting for Hb concentration did not change our results, nor did adjusting for project phase. We also dichotomised HbA1c into normal (≤38 mmol/mol) or elevated (≥39 mmol/mol), 1 and analysed the dichotomous variable in a mixed effect logistic regression model. Lower AChE/Hb was significantly associated with lower odds of having HbA1c ≥39 mmol/mol, supporting the main findings.
Finally, in the fixed effect model of change in AChE/Hb versus change in HbA 1c within individuals from phase I to III, decreased AChE/Hb seemed associated with increased HbA 1c in both the unadjusted and adjusted analysis (figure 2). The association was less clear in sensitivity analyses comparing phase I/II or II/III (online supplementary appendix 2).
DISCUSSION
We expected low activity (as a marker of exposure to organophosphate and carbamate insecticides) to be associated with increased blood glucose levels. However, our main analyses showed significant associations between low AChE/Hb and low HbA 1c , as well as low FPG. Sensitivity analyses gave similar results, strengthening confidence in the findings. On the other hand, results from fixed effect models focusing on changes within individuals were inconsistent and showed no clear correlations between changes in AChE/Hb and changes in HbA 1c .
[Figure 2 caption: Results from analyses of glycaemic regulation versus AChE/Hb, modelled using restricted cubic splines with four knots. *Basic set of confounders = age, sex, alcohol consumption in the last week, tobacco consumption in the last week, MET-minutes of physical activity in the last week, servings of fruits and vegetables consumed per day in the last week; extended set = basic set + Body Mass Index and years of full-time education. **Basic set of confounders = Δ(age), Δ(alcohol consumption), Δ(physical activity), Δ(consumption of fruits and vegetables) and Δ(tobacco consumption); extended set = basic set + Δ(Body Mass Index). Y-axis = difference in outcome relative to the predicted value at the median of the independent variable; solid black line = estimate; dashed black lines = 95% CI; black dots show the spline knots; histogram shows the distribution of the independent variable.]
Previous studies on the association between cholinesterase inhibiting insecticides and DM are inconsistent. Shapiro et al conducted a follow-up study among pregnant Canadian women environmentally exposed to organophosphates, finding that high urinary levels of organophosphate metabolites were correlated with decreased odds of being diagnosed with either gestational diabetes or gestational impaired glucose tolerance. 24 On the other hand, a cross-sectional study in the general US population showed no associations between FPG, HOMA-IR (Homeostatic Model Assessment for Insulin Resistance) or HbA1c versus urinary organophosphate metabolites. 25 Among Indian farmers and villagers, there was no significant difference in butyryl cholinesterase (BChE) between subjects with and without diabetes, but odds of DM were significantly positively correlated with plasma levels of some organophosphates. 26 Finally, in a follow-up study among greenhouse workers in Spain, workers had significantly higher FPG in a high-exposure than in a low-exposure period, and both AChE/Hb and BChE decreased in the high-exposure period. While our results are in line with those of Shapiro et al, 24 both of the last two studies seem to contradict them. Shapiro et al suggested that the negative correlation in their study might be due to confounding from intake of fruit and vegetables with pesticide residues. 27 In our population this is an unlikely explanation, as the consumption of fruit and vegetables was generally very low (table 1), and was included as an independent variable in our models. Differences in exposure metrics and study populations might explain the conflicting results, but it is difficult to draw any clear conclusions regarding the relationship between exposure to organophosphate and carbamate insecticides and the risk of DM.
The main strength of this study is the repeated measurement design with negligible loss to follow-up and three phases of objective measurements of both exposure and outcome for all participants. Confounders were selected a priori based on Directed Acyclic Graphs, and information on confounders was collected using the standardised WHO STEPS 18 and GPAQ 19 instruments. A previous version of GPAQ had moderate validity in Ethiopia, with Spearman's rho=0.31 for self-reported versus pedometer-measured physical activity time. 28 The mean consumption of fruits and vegetables in our study is in line with a previous study in Uganda, which showed that only 12.2% of subjects consumed five or more servings of fruit and vegetables per day in a 'typical' week. 29 Demographic variables were similar between individuals with low and high AChE/Hb, making it less plausible that other variables co-vary enough with AChE/Hb to be responsible for the demonstrated associations. Theoretically, the relationship between AChE/Hb and HbA1c could be biassed if both HbA1c and AChE/Hb were affected by Hb levels. The direction of such bias is difficult to predict, as different types of anaemia affect erythrocyte lifespan differently. 30 However, the relationship between AChE/Hb and HbA1c was unchanged in a sensitivity analysis including Hb level as a covariate, so confounding from anaemia does not explain our findings. We considered whether the associations between AChE/Hb, HbA1c and FPG could be due to changed toxicokinetics in overweight individuals. Between-person differences and within-person changes in body fat might influence the excretion of organophosphates, since most are lipophilic. 31 However, that does not explain our results, as associations persisted after adjustment for BMI.
Our study also has some important limitations. Our sampling strategy was convenience-based rather than random, which could lead to selection bias. However, to explain our findings, the selection should have made participants most likely to participate if they were highly exposed and had low HbA 1c , or if they had low exposure and high HbA 1c . We find such selection unlikely. For logistic reasons, HbA 1c was assessed using a point-of-care device, which might not have the same level of precision and accuracy as what could be provided by a clinical biochemical lab. According to the manufacturer, the HemoCue HbA1c 501 system is 'interference-free, which means it is unaffected by Hb variants', 32 but a recent study found that results were somewhat negatively biassed for blood samples with sickle cell trait (ie, heterozygous for haemoglobin S (HbS)). 33 We did not measure variant Hb, but the proportion of participants with HbS is likely considerable, as a recent study among children of HIV-positive mothers in the same region found that 12.8% of infants had HbS. 34 Interference from variant Hb could bias our dose-response relationships away from the null if it also affected AChE/Hb activity. However, while patients with sickle cell anaemia have considerably higher AChE/Hb than healthy controls, healthy persons with sickle cell trait only have normal AChE/Hb. 35 Hence, interference from variant Hb is unlikely to explain our results, and it cannot explain why we see the same association between AChE/Hb and FPG. In a study in South Africa, the HemoCue HbA1c 501 had an area under the curve of 0.81 for diagnosis of DM, 36 indicating that results are imprecise. However, imprecision leads to bias towards the null and cannot explain the associations either.
AChE/Hb is a well-established biomarker of exposure to organophosphate and carbamate insecticides, 13 and all analyses presented in this paper used AChE/Hb as an objective exposure metric. Due to poor correlation between self-reported spraying activities and AChE/Hb (table 1), we did not perform analyses based on subjective exposure information. Hence, effect estimates for AChE/Hb might be biased by the use of other classes of pesticides. The lack of a correlation between subjective spraying information and AChE/Hb might be due to substantial exposure through other routes than spraying (eg, re-entry work in sprayed fields and pest control operations in subjects' homes), recall bias or recovery of AChE activity in the time between exposure and interview. Alternatively, it could be an indication that AChE/Hb is a suboptimal exposure metric in the study population. AChE can be influenced by many physiological and pathological conditions, 37 including blood sugar levels. For example, patients with dysregulated type 1 diabetes have been shown to have lower AChE activity than both healthy controls and well-controlled type 1 diabetics. 38 It is therefore possible that the exposure-response relationships are affected by unknown factors influencing both glucose levels and AChE, or by reverse causality.
Due to interindividual variability in AChE/Hb, it is recommended that when using AChE/Hb to monitor insecticide exposure, each person's results should be compared with their own pre-exposure levels, instead of using population reference values. 39 However, we could not clearly define a 'pre-exposure' period for the study population, as subjectively reported pesticide use was similar in the three project phases (online supplementary appendix 3). We do not think that this is a problem for our analyses, as our models explicitly account for both withinindividual and between-individual variability in AChE/Hb.
The possibility of bidirectional links between AChE/Hb and glycaemic regulation challenges the use of AChE/Hb as an exposure metric for this particular outcome. Alternative objective exposure metrics such as hair or urine levels of organophosphate metabolites, or measurements of subjects' exposures using personal samplers, might be better suited due to a smaller risk of bias. During the data collection phase of the PEXADU project, participants wore passive pesticide samplers in the form of silicone wristbands, and a random subsample gave urine samples. The future analysis of these samples should be able to shed more light on the causal relationships between blood glucose levels and exposure to cholinesterase-inhibiting pesticides, which has potentially important implications for both workers and the general population.
CONCLUSION
Contrary to our hypothesis, we found a correlation between low Hb-adjusted acetylcholinesterase activity (AChE/Hb), low HbA 1c and low FPG. The relationship between change in AChE/Hb and change in HbA 1c between project phases was less clear. Our study does not clearly support a causal link between exposure to organophosphate and carbamate insecticides and elevated blood glucose levels (expressed as HbA 1c and FPG).
science and Technology (registration number HS234ES) and the Higher Degrees Research and Ethics Committee at Makerere University School of Public Health (MakSPH-HDREC, registration number 577).
Provenance and peer review: Not commissioned; externally peer reviewed. Data availability statement: Data are available on reasonable request. For access to deidentified data from the subset of participants who have consented to data sharing, please contact the corresponding author (ph@au.dk). Access requires permission from the MakSPH-HDREC and the Danish Data Protection Agency.
Open access: This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2020-07-08T13:02:50.474Z | 2020-07-05T00:00:00.000 | {
"year": 2020,
"sha1": "5f8cb7b4323edff2f071a366f0f830dbd99024f6",
"oa_license": "CCBYNC",
"oa_url": "https://oem.bmj.com/content/oemed/77/10/713.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "BMJ",
"pdf_hash": "743699f24034eee01033384233ff8c551996a359",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4003326 | pes2o/s2orc | v3-fos-license | Models and Framework for Adversarial Attacks on Complex Adaptive Systems
We introduce the paradigm of adversarial attacks that target the dynamics of Complex Adaptive Systems (CAS). To facilitate the analysis of such attacks, we present multiple approaches to the modeling of CAS as dynamical, data-driven, and game-theoretic systems, and develop quantitative definitions of attack, vulnerability, and resilience in the context of CAS security. Furthermore, we propose a comprehensive set of schemes for classification of attacks and attack surfaces in CAS, complemented with examples of practical attacks. Building on this foundation, we propose a framework based on reinforcement learning for simulation and analysis of attacks on CAS, and demonstrate its performance through three real-world case studies of targeting power grids, destabilization of terrorist organizations, and manipulation of machine learning agents. We also discuss potential mitigation techniques, and remark on future research directions in analysis and design of secure complex adaptive systems.
I. INTRODUCTION
From brains and immune systems, to societies and ecosystems, many natural phenomena are categorized as Complex Adaptive Systems (CAS). Such systems are characterized by the complex behaviors that are the emergent results of nonlinear interactions between a large number of components at different levels of system's organization [1]. CAS are generally decentralized and governed by adaptive dynamics that enable their intrinsic adaptation and evolution in changing environments [2]. Over the past decades, the multidisciplinary framework of CAS has been extensively applied to study natural mechanisms of emergent behavior in various domains, ranging from anatomical systems and biological behavior [3] to social and economical systems [4].
Furthermore, the decentralized and adaptive operation of CAS has inspired numerous engineering solutions for distributed system architectures, such as smart power grids [5], autonomous navigation [6], and the Internet of Things (IoT) [7]. Equipping such distributed systems with CAS-inspired mechanisms is a promising approach to the challenging task of control and management of increasingly complex and heterogeneous systems [8]. In particular, the self-organization aspect of CAS enables the emergence of order and pattern from uncoordinated actions of autonomous agents in multi-agent distributed settings [9]. In self-organizing systems, individual agents are capable of adapting to changes in the environment via autonomic tuning of their configurable parameters to enhance individual as well as global operations of dynamic distributed systems.
The growing interest in adoption of CAS architectures in mission-critical applications intensifies the need for investigating the security aspects of such systems. While the distribution of responsibilities and capabilities among multiple agents in CAS seemingly relieves the threats posed by single points of failure, the complexity of dynamics in such systems gives rise to unique challenges in quantifying and ensuring their resilience and robustness in hostile environments and adversarial conditions. While the body of work on CAS presents many contributions towards analysis of resilience against random and natural perturbations, the current state of the art leaves major gaps in understanding and enhancement of resilience against targeted attacks and adversarial actions.
This paper aims to develop a comprehensive foundation for analysis and enhancement of resilience in natural and engineered CAS against adversarial actions. To this end, we study and formalize the threats posed by attacks targeting the adaptive dynamics of such systems. Accordingly, the main contributions of this paper are as follows:
1) We introduce three approaches to the modeling of CAS, namely: the dynamical systems model, the Dynamic Data-Driven Application Systems (DDDAS) model, and the game-theoretic model of strategic network formation.
2) We propose quantitative definitions of attack, vulnerability, and resilience in the context of CAS security.
3) We develop a comprehensive set of schemes for classification of attack surfaces in CAS, and discuss generic instances of active and passive adversarial actions targeting these surfaces.
4) We propose a framework based on reinforcement learning for simulation and analysis of attacks on CAS.
5) We demonstrate the practical application of our proposed framework in three practical case studies: induction of cascade failures in power grids, destabilization of terrorist organizations, and manipulation of deep reinforcement learning agents.
6) We present a discussion on potential defensive and mitigation techniques.
The remainder of this paper is organized as follows: Section II provides an overview of CAS and the relevant background. Section III presents models for analysis of CAS. Section IV details our proposed definitions of attack, vulnerability, and resilience. Section V presents classifications of vulnerabilities and attack surfaces in CAS, followed by the proposal of a framework for simulation of adversarial actions and analysis of their impact on CAS in Section VI. Section VII demonstrates the application of this framework in three practical case studies. Section VIII discusses potential approaches towards mitigation of such attacks, and Section IX concludes the paper with remarks on future research directions.
II. BACKGROUND
In this section, we briefly introduce the paradigm of complex systems and their adaptivity to provide the reader with an overview of fundamental concepts and notions required for the remainder of this paper. It must be noted that this background is by no means comprehensive, and the interested reader may refer to sources such as [10] and [11] for in-depth introductions to CAS.
A. Complex Adaptive Systems
Complexity, as a quantifiable measure, is yet to obtain a unified and consistent definition. From the multitude of definitions that have emerged from the field of complexity science [12], we abide by the definition presented by Mitchell [1]: "A complex adaptive system is a system in which large networks of components with simple rules of operation and no central control give rise to complex collective behavior, sophisticated information processing, and adaptation. Such systems exhibit nontrivial emergent and self-organizing behaviors." Accordingly, the most general characteristics of CAS are identified as [2]:
• Large numbers of constituent elements and interactions
• Non-decomposability, i.e., components cannot be separately studied due to interactions
• Nonlinearity of dynamics and behavior
• Various forms of hierarchical structure
• Emergent behavior
• Self-organization
• Co-evolution with other complex entities or the environment.
The concepts of emergence and self-organization are of particular significance in the scope of our work. Emergence in CAS refers to the occurrence of properties and behavior in a system that are not present in the constituent components, i.e., global properties resulting from local interactions are emergent [13]. Similarly, Self-Organization is the emergence of global coherence out of local interactions [14]. Natural instances of self-organization include the swarming formation of birds in flight, and the emergence of cognitive abilities from interactions of neurons in the brain.
B. Vulnerability and Resilience of CAS
The resilience of complex systems has been the subject of active research in diverse disciplines, ranging from ecology [15] and epidemiology [16] to power distribution systems [17] and counter-terrorism [18]. Yet, the bulk of available literature on this topic emphasizes the resilience of CAS to naturally occurring and random perturbations. Amid the spectrum of definitions considered in such works [19], one of the most general definitions of resilience is: "The ability of a system to endure failure and recover from mishaps by restoring its capacities" [20]. While this definition captures the objectives of system-level studies, it fails to satisfy the requirements of security analyses. While recovery from failure may demonstrate the long-term sustainability of a system's operations, the security consequences of short-term failures may be catastrophic. For instance, exposure of confidential information in a cloud computing platform, however technically recoverable, may incur severe damages to the users and operators of the platform. Therefore, there is a need for security-oriented alternatives to this definition.
Similarly, the concept of vulnerability in CAS is defined either too loosely or in too context-dependent a manner. For instance, [21] defines vulnerability as the system's inability to resist stresses, which may be exploited by threats and hazards. On the other hand, [22] provides a network-oriented definition as links or nodes whose removal adversely impacts the functions of a complex network. In the context of disaster mitigation, [23] defines vulnerability as "the human product of any physical exposure to a disaster that results in some degree of loss." It is evident that a generic and quantitative definition of vulnerability is needed to form the basis of a quantitative framework for security analysis of CAS.
In Section IV, we utilize the dynamical model of CAS to develop such definitions of resilience and vulnerability for analysis of security in such systems.
III. MODELS OF CAS
In this section, we present three approaches to modeling the behavior of CAS. First, we introduce the dynamical system model and the relevant terminology, which will form the basis of defining resilience, vulnerability, and attack in Section IV. This approach is complemented by the Dynamic Data-Driven Application System (DDDAS) abstraction, as well as a game-theoretic model of network formation. Having multiple approaches enables various levels of abstraction for high-dimensional CAS, thereby providing multiple perspectives for capturing the structure and dynamics of such systems. These approaches are detailed below.
A. Dynamical Model
CAS are dynamical systems, meaning that their states change as a function of time. In this perspective, the dynamics of CAS can be modeled as:
ẋ(t) = f(x(t), β(t)) (1)
where ẋ(t) is the first-order derivative of x with respect to t, x = (x1, x2, ..., xn) is the n-dimensional state of CAS, β is the state of the environment (or alternatively, control input), and f is the dynamics of the system. The set of all possible configurations of x is termed the phase space of the system, henceforth denoted by X. A solution x(t) to equation (1) constitutes a trajectory in phase space. Any trajectory is uniquely defined by the initial conditions x(0) = x0.
In dynamical systems, an attractor is a bounded region in phase space to which trajectories with certain initial conditions come arbitrarily close. Formally, an attractor is an invariant set Λ ⊂ X, where trajectories of perturbations that lead to states outside of Λ eventually return to Λ. Attractors may be isolated points, limiting cycles, or more complex objects in the phase space.
A basin of attraction Ω(Λ) is the set of all states which fall on trajectories that lead to attractor Λ. Formally,
Ω(Λ) = {x0 ∈ X : x(t) → Λ as t → ∞, with x(0) = x0}
Accordingly, the basin boundary ∂Ω of a CAS is defined as the set of states that are not in any basin. Formally:
∂Ω = X \ ⋃Λ Ω(Λ)
Even though the dynamical model provides a fundamental mathematical perspective on the behavior of CAS, the abstraction and computational aspects of this model become severely restricted in high-dimensional systems. Therefore, alternative models are often used to simplify the dynamical representation and abstraction of CAS.
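As an aside not taken from the original text, the basin-of-attraction notion can be illustrated numerically. The sketch below uses a toy one-dimensional system ẋ = x − x³, whose attractors are the fixed points x = ±1, and samples initial conditions on a grid to label which basin they fall into; all parameter choices are arbitrary and purely for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # toy dynamics xdot = x - x^3 with attractors at x = -1 and x = +1
    return x - x**3

def attractor_of(x0, t_end=50.0):
    """Integrate from initial condition x0 and report which attractor is reached."""
    sol = solve_ivp(f, (0.0, t_end), [x0])
    x_final = sol.y[0, -1]
    if np.isclose(x_final, 1.0, atol=1e-2):
        return +1
    if np.isclose(x_final, -1.0, atol=1e-2):
        return -1
    return 0  # on (or numerically near) the basin boundary at x = 0

# Sample initial conditions on a grid to sketch the basins Omega(+1) and Omega(-1)
grid = np.linspace(-2.0, 2.0, 41)
for x0 in grid:
    print(f"x0 = {x0:+.2f} -> attractor {attractor_of(x0):+d}")
```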
B. DDDAS Model
The decentralized adaptive behavior of CAS implies the existence of a feedback control loop in the constituent components. Accordingly, each component of CAS monitors the changes in its environment, analyzes the observations and its internal state with respect to local rules and objectives, and adjusts its operating parameters accordingly. This process can be accurately captured within the framework of Dynamic Data-Driven Application System (DDDAS). A DDDAS is a symbiotic feedback control system, which can dynamically analyze the state of the system and its environment to control and determine when, where, and how it is best to gather additional data, and in reverse, can dynamically steer the applications based on the obtained measurements [24]. The operational cycle of an agent in a generic distributed DDDAS comprises four components:
• Sensing: Observing the state of the agent's environment and retrieving relevant information that may be disseminated by other agents
• Information Sharing: Communicating the agent's current state and observations with other agents
• Data Fusion and Analytics: Integration and processing of observed and retrieved information
• Self-Configuration: Configuration of the agent's functional parameters according to processed information
Figure 1 illustrates the anatomy of a DDDAS cycle. Since the inception of DDDAS, this framework has spawned numerous applications such as environment analysis (e.g., weather [25]); robotic systems (e.g., coordination and swarming of unmanned aerial vehicles (UAVs) [26] and unmanned ground vehicles (UGVs) [27]); image processing (e.g., target tracking [28]); and embedded computing (e.g., hardware/software designs [29]). Furthermore, recent literature illustrates the application of this framework to the analysis of generic complex systems [30].
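To illustrate the four-stage cycle in code, here is a minimal, hypothetical Python sketch of a single DDDAS-style agent loop; the class and method names are illustrative only and do not correspond to any specific DDDAS implementation.

```python
import random

class DDDASAgent:
    """Toy agent that repeatedly senses, shares, fuses, and self-configures."""

    def __init__(self, name, gain=1.0):
        self.name = name
        self.gain = gain          # configurable operating parameter
        self.inbox = []           # messages received from other agents

    def sense(self):
        # placeholder for an environment observation
        return random.gauss(0.0, 1.0)

    def share(self, observation, peers):
        for peer in peers:
            peer.inbox.append((self.name, observation))

    def fuse(self, observation):
        # simple fusion: average own observation with peers' reports
        reports = [obs for _, obs in self.inbox] + [observation]
        self.inbox.clear()
        return sum(reports) / len(reports)

    def self_configure(self, fused):
        # adapt the operating parameter toward the fused estimate
        self.gain += 0.1 * (fused - self.gain)

    def step(self, peers):
        obs = self.sense()
        self.share(obs, peers)
        fused = self.fuse(obs)
        self.self_configure(fused)

# Two agents running a few DDDAS cycles against each other
a, b = DDDASAgent("a"), DDDASAgent("b")
for _ in range(5):
    a.step([b])
    b.step([a])
print(a.gain, b.gain)
```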
C. Network Formation Game Model
CAS are networks composed of a large number of diverse agents, each with unique requirements and capabilities, leading to heterogeneity in various aspects of the system. Each individual agent in this network aims to optimize its local objectives, such as energy consumption, computational performance, reliability, and resilience, through interactions with other agents in the network. The actions and interactions of these agents give rise to emergent patterns at the macro-scale, which drive the system-level behavior of self-organizing networks.
To enable the analysis of emergent behaviors, stability, and resilience of CAS, one approach is to model the dynamics of interactions as strategic network formation games [31], which provide a framework for analyzing self-organizing dynamics for generic designs and applications. In such games, every agent desires to establish the optimal set of links to other agents which maximizes the agent's reward or utility. Depending on system specifications, a link in this setting may represent inter-node communications, a routing hop, information sharing, computation and communication resource sharing, synchronized actuation, proximity, trust, or any other quantifiable relationship. Accordingly, the dynamics of interactions can be captured by a network formation game Γ(N, U, A, (G, F)) with complete or incomplete information, where N is the set of all agents, A is the set of all actions available to players, U = {U_1, U_2, ..., U_N} is the set of the agents' payoff functions, G(N, E) is the graph with vertex set N and directed or undirected weighted edge set E representing the network topology, and F = {F_1, F_2, ..., F_n} is the set of attribute vectors representing the exogenous features and characteristics of each individual. The tuple (G, F) is the information available to all players on the game settings. Each agent i ∈ N also bears an idiosyncratic profile ε_i, capturing the traits and characteristics of individuals that affect their decisions in link establishment but are not known to other agents. Such characteristics may include experience and learning profile, priority of objectives, and level of trust. The actions of players in this game are the establishment or removal of heterogeneous links to other players. Let G_ij be the ij-th component of the adjacency matrix of the network. The action of player i is denoted by a_i = {G_ij^k ∈ W_i : j ∈ N, k = 1, ..., m_i}, where m_i is the number of link types available to player i and W_i is the set of possible link weights for i.
Various types of equilibria and stability can be defined for such games, including Nash Stability (NS) and Pairwise Stability (PS) [32]. As illustrated by the example in Figure 2, these different criteria for stability do not necessarily overlap and need to be chosen according to the problem at hand. By choosing the relevant criteria for stability and defining suitable payoff functions U_{i∈N} to account for the relevant costs and incentives of game states and trajectories, this model allows for the analysis of generic parametric bounds and relations in the establishment of emerging topologies, behaviors, and dynamical stability within the game abstraction. Furthermore, this game-theoretic modeling of self-organizing behavior enables the formal analysis of behaviors and interactions by considering the adversary as another player in the game. Also, network formation games can draw on many strong analytical toolsets such as graph theory, category theory, network science, and cooperative optimal control.
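To make the stability notions concrete, the sketch below checks pairwise stability of a small undirected, unweighted network under a user-supplied payoff function. The specific fixed-benefit/fixed-cost payoff is purely illustrative (an assumption for the example) and does not correspond to the payoff of any particular CAS.

import itertools

def utility(i, edges, link_benefit=1.0, link_cost=0.6):
    # Illustrative payoff: each maintained link yields a fixed benefit minus a fixed cost.
    degree = sum(1 for e in edges if i in e)
    return degree * (link_benefit - link_cost)

def is_pairwise_stable(edges, n):
    """Pairwise stability: no agent gains by deleting one of its links, and no pair of agents
    both (weakly, with one strictly) gain by adding the missing link between them."""
    edges = set(frozenset(e) for e in edges)
    for i, j in itertools.combinations(range(n), 2):
        e = frozenset((i, j))
        if e in edges:
            without = edges - {e}
            # Deviation by deletion: either endpoint prefers to drop the link.
            if utility(i, without) > utility(i, edges) or utility(j, without) > utility(j, edges):
                return False
        else:
            with_e = edges | {e}
            gain_i = utility(i, with_e) - utility(i, edges)
            gain_j = utility(j, with_e) - utility(j, edges)
            # Deviation by addition: both weakly gain and at least one strictly gains.
            if gain_i >= 0 and gain_j >= 0 and (gain_i > 0 or gain_j > 0):
                return False
    return True

print(is_pairwise_stable([(0, 1), (1, 2)], n=3))          # False: nodes 0 and 2 would add a link
print(is_pairwise_stable([(0, 1), (1, 2), (0, 2)], n=3))  # True: the complete graph is stable here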
IV. THREAT MODEL
The adaptive dynamics of CAS give rise to a variety of vulnerabilities and attack surfaces. By definition, the macro-scale behavior of such systems is the emergent result of micro-scale actions of local or individual elements. Therefore, adversarial perturbations of micro-scale structure and dynamics may result in amplification of perturbations and manipulation of the macro-scale behavior.
To ensure a consistent and comprehensive study of such attacks, we first develop suitable definitions of attack, vulnerability, and resilience in CAS. We differentiate between two types of attacks, namely passive and active attacks. Passive attacks aim at exposing the structural and dynamical properties of the targeted CAS and do not require exertion of additional input to the system. Instances of such attacks are traffic analysis [33] and inference of dynamics [18]. Active attacks, on the other hand, involve the implementation of adversarial actions to achieve an adversarial objective. Building on the dynamical model of Section III-A, we define an adversarial action as the intentional manipulation of either the state or the dynamics of CAS, such that the resulting state-space trajectory passes through undesired states, which may include states outside of desired basins of attraction, states within undesired submanifolds of the phase space (e.g., undesired basins of attraction), or ill-defined states within a modified phase space. Accordingly, the modes of adversarial actions can be categorized as those perturbing the state configuration of CAS and those manipulating the dynamics of CAS, formalized as follows:
1) State Manipulation: Let γ(x_t) be the perturbation to state x_t ∈ X, i.e., the perturbed state is obtained via x_t^p = x_t + γ(x_t). The problem of adversarial state manipulation is to devise the function γ(x_t) such that at an arbitrary time T the perturbed trajectory satisfies x^p(T) ∈ X*, where t_0 is the initial time of the trajectory and X* is the set of states within the space of undesired states which conform to adversarial objectives. It is noteworthy that a sustainable impact is imposed when the adversary aims at driving the target into the basins of attraction of X*. Alternatively, if the objective is to reach specific trajectories μ(t) in the space of undesired trajectories M rather than particular states, the problem can be rearranged as devising γ(x_t) such that some measure of distance between the original and desired trajectories becomes smaller than an arbitrary error threshold ε, i.e., d(x^p(t), μ(t)) < ε.
2) Dynamics Manipulation: Let λ(x_t, β_t) be the perturbation to the environment (alternatively, it can be viewed as a perturbation of the control input). The problem of adversarial dynamics manipulation is to devise a suitable control perturbation λ(x_t, β_t) such that at an arbitrary time T the trajectory generated by the perturbed dynamics ẋ(t) = f(x(t), β(t) + λ(x_t, β_t)) satisfies x(T) ∈ X*. It must be noted that X* is not necessarily a subset of X, as the phase space may shift due to perturbations. Alternatively, the problem of reaching specific trajectories can be formulated similarly to the case of state manipulation, with the optimization objective d(x(t), μ(t)) < ε.
With the concept of attack formalized, we can construct suitable measures of vulnerability and resilience on the same grounds. We adopt the well-established fact from the realm of cyber-security that no system can be completely secure against all possible attacks. Hence, the objective of securing a system becomes deterrence of attacks in an economic sense, namely making successful attacks as costly as possible [34]. Accordingly, we define the vulnerability of an element (state, trajectory, or dynamics) in a CAS to a specific adversarial action as the inverse of the minimum cost C_adv incurred by the adversary to impose the maximum achievable cost on the targeted CAS via implementing the adversarial action on the designated element, i.e., V = 1 / min C_adv.
This definition assumes that the adversarial cost C_adv ≥ 1, and hence the value of vulnerability is in the range [0, 1], whose unit is determined by the dimensions of the adversarial cost C_adv. In a similar manner, we define the resilience of a CAS against a certain attack as the minimum cost imposed on the adversary to successfully implement that adversarial action and force the CAS into an undesired state or trajectory. The selection of adversarial and CAS cost metrics is highly dependent on the context of analysis. One simple choice of adversarial cost is the minimum number of perturbations required for a successful attack. A similar choice for the CAS cost is the loss of connectivity in the network model of its interactions.
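As a numerical illustration of this definition, the sketch below computes the vulnerability of a small network to node-removal attacks by brute-force search over attack sets, using the number of removed nodes as the adversarial cost and loss of connectivity as the CAS cost, as suggested above. The toy graph, the success criterion, and the helper names are assumptions made only for this example.

import itertools

EDGES = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]  # two clusters joined via nodes 2 and 3
NODES = set(range(6))

def is_connected(nodes, edges):
    # Breadth-first search connectivity check on the surviving subgraph.
    nodes = set(nodes)
    if not nodes:
        return False
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        v = frontier.pop()
        if v not in seen:
            seen.add(v)
            frontier.extend(adj[v] - seen)
    return seen == nodes

def min_attack_cost(success=lambda nodes: not is_connected(nodes, EDGES)):
    # Adversarial cost = number of removed nodes; search for the cheapest successful attack.
    for k in range(1, len(NODES) + 1):
        for removed in itertools.combinations(NODES, k):
            if success(NODES - set(removed)):
                return k, removed
    return None, None

cost, removed = min_attack_cost()
print(f"minimum adversarial cost = {cost} (remove nodes {removed})")
print(f"vulnerability = {1.0 / cost:.2f}")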
V. CLASSIFICATION OF ATTACK SURFACES
Attack surfaces are structural and dynamical components of CAS that may be targeted in active and passive attacks. In this section, we present three schemes for categorizing such components and provide attack instances for each identified component.
1) CIA-based: The fundamental dimensions of security are Confidentiality, Integrity, and Availability, forming the CIA triad of security [35]. Confidentiality refers to the restriction of unauthorized access to protected information. Examples of attacks on confidentiality in CAS include the inference of states, dynamics, and interaction protocols in a self-organizing swarm of UAVs. Integrity is maintaining and assuring the accurate functioning of the system in the intended manner. An instance of corresponding attacks is the manipulation of a distributed autonomous navigation system to induce collisions. Availability is assuring the uninterrupted operation of the system. Induction of cascading failures in power distribution systems is a well-established instance of such attacks on CAS.
2) DDDAS-based: Another approach to the classification of attack surfaces is based on the distributed DDDAS model presented in Section III-B. As illustrated in Figure 3, each component of the DDDAS cycle constitutes an attack surface that can be the subject of adversarial actions targeting one or a combination of the CIA dimensions. However, as shown in Table I, under this scheme some attacks may have overlapping roots across different components.
3) Functionality-based:
We also propose a more general functionality-based approach to classification. The building blocks of CAS are its structure and topology, the dynamics of interactions, and the internal dynamics of each constituent agent. Accordingly, we further categorize the attack surfaces of CAS into those stemming from the Network Structure, Cooperation Protocols, or Actuation Functions, detailed below.
A. Attacking the Network Structure
As discussed in Section III, CAS can be modeled as networks of interacting agents. Depending on the model's context and objective, this network may represent the communication links between agents, their interactions, dependencies, or other types of relationships. As is the case with distributed networked systems, such as communications (e.g., [36]) and social networks (e.g., [37]), the intrinsic network structure of CAS gives rise to a number of potential vulnerabilities that can be exploited to mount passive and active attacks against the system. By means of traffic analysis [33] and inference attacks [18], adversaries can target the confidentiality of CAS to identify the topology and dynamics of their networks. Knowledge of the network topology allows adversaries to optimize denial-of-service attacks by analyzing the structure of their target and determining the most critical regions [33]. To further expand on this surface, consider the case of a self-organizing swarm of UAVs, as illustrated in Figure 4. The inter-UAV network depicted in this figure is a graph with two hubs (i.e., nodes 3 and 4), through which a large portion of network flows pass. If the adversary aims a jamming attack at only these two hubs, the network becomes completely disconnected, thereby disrupting the entire operation of the system at minimal cost to the adversary. Under certain circumstances, this type of attack may cause cascading effects that result in total system failure over time. A well-studied example is cascading failures in power grids [38].
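The hub-removal scenario can be reproduced with a few lines of code. The sketch below is illustrative only; the example topology is an assumption loosely mirroring the figure's description of two hubs. It removes the two highest-degree nodes of a small graph and reports the resulting number of connected components.

import networkx as nx

# Star-of-stars topology with two hubs (nodes 3 and 4), loosely mirroring Figure 4.
G = nx.Graph()
G.add_edges_from([(0, 3), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (4, 7)])

print("components before attack:", nx.number_connected_components(G))

# Jamming attack on hubs: remove the two nodes with the highest degree.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:2]
G.remove_nodes_from([node for node, _ in hubs])

print("removed hubs:", [node for node, _ in hubs])
print("components after attack:", nx.number_connected_components(G))  # the network falls apart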
B. Attacking Cooperation Protocols
Considering the independent and self-interested nature of agents in CAS, the stabilization and efficiency of many real-world applications of such systems necessitate the implementation of rules and protocols to induce and maintain cooperative interactions between agents. For instance, formation control and navigation of UAV swarms require the sharing of positional information among UAVs, as well as the coordination of their navigational parameters. The implementation of cooperation protocols creates another source of attack surfaces. Adversaries may target the confidentiality of CAS via passive sniffing of shared information through either insider or outsider attacks. This type of passive eavesdropping enables further active attacks through inference and identification of objectives and system dynamics.
The integrity of such systems can be targeted in various ways. By spoofing legitimate agents, adversaries can inject false data into the information sharing pipeline of CAS. Also, spoofed, compromised, or malicious insider agents may falsify their resource requirements, or even pose as several agents to gain unfair access to shared resources. In the domain of distributed wireless networks, this type of exploitation is known as a Sybil attack [39]. Furthermore, in systems with constrained information sharing capacities, adversarial perturbation of the environment may lead to the sharing of incorrect or incomplete information. For instance, consider the case of a UAV swarm which relies on individual reporting of observed obstacles for collision avoidance. If the reporting protocol limits the number of reported obstacles to the n nearest objects observed by a UAV, an adversary may spoof or generate m ≫ n minor obstacles in the vicinity of the UAV to prevent it from informing the rest of the swarm about major nearby obstacles.
Attacks on the availability aspect may also come in different forms. Spoofed, compromised, or malicious insider agents may act as information blackholes [40] by tactically refusing to share their information at particular times. In CAS that rely on multi-hop communications, this attack can be more damaging if the agent stops forwarding information received from other neighbors as well. Another type of attack is based on spoofed, compromised, or malicious insider agents disseminating certain information that causes the termination of cooperation. In our example of a UAV swarm, transmission of messages such as "mission accomplished", "mission failed", or a radio silence signal in tactical scenarios may cause the cooperative process to end. Furthermore, if the cooperation protocol is not well designed, broadcast of certain resource constraints or environmental conditions may result in the prevalence of agents' selfishness over cooperation. For instance, if the UAV swarm encounters an inevitable collision state [41], the cooperation protocol may allow agents to choose independent action over cooperation. This condition may be induced through either dissemination of fake information or adversarial manipulation of the environment.
C. Attacks on Actuation Functions
The main objectives of CAS are realized by each agent via actuation functions. In the example of UAVs, actuation functions are the cyber-physical controllers of motion and communications. In general, the ultimate goal of all attacks introduced so far is the indirect manipulation or disruption of actuation functions. Adversaries may also directly target the actuation of CAS through attack surfaces in actuation mechanisms and functions. Attacks on the confidentiality of actuation may take the form of parameter inference. Obtaining knowledge of operating parameters through side-channel attacks enables the adversary to derive a more accurate estimation of the system's state and dynamics, thereby allowing the optimization of active attacks against the system. Also, in competitive CAS, complete knowledge of an agent's operating parameters may provide other agents with an unfair advantage. For instance, consider a CAS set up to automate the sharing of information on cyber attacks among corporations [42]. In this scenario, agents aim to share the minimal amount of data required to preserve the long-term benefits of information sharing. If an adversarial agent is able to estimate the parameters used by another agent in filtering and disseminating information, this may allow the adversary to infer the undisclosed portion of that agent's information. A sophisticated attack in such incomplete-information systems can be the adversarial disclosure of parameters to competitors, thereby causing the system dynamics to diverge from a beneficial equilibrium. Economic and political parallels of this phenomenon are instances of insider trading and whistleblowing (e.g., [43]).
The integrity of actuation functions may be targeted via manipulation of the environment or sensory observations. In an autonomous fleet of self-organizing vehicles, calculated manipulation of the visual input to a vehicle may result in an adversarial example [44] for the machine learning component of the system. Adversarial examples are minimally perturbed inputs that cause misclassifications in machine learning algorithms. For instance, minor changes to a speed sign on the side of a street can result in its misclassification as a stop sign by an autonomous vehicle, causing it to stop in an unsafe location [41]. In some cases, even spoofed perturbations of the environment are sufficient for manipulation of actuation functions. A real-world example of such cases is the Automatic Collision Avoidance System (ACAS) utilized by many of today's commercial aircraft [45]. This system generates motion advisories according to the position and heading of other aircraft in the environment, obtained from an unencrypted, open protocol known as ADS-B [46]. An adversary may simply fake the presence and trajectory of nonexistent aircraft by spoofing ADS-B signals, which can lead to ACAS advisories that change the trajectory of the targeted aircraft [41].
Similar attacks can also target the availability of actuation functions. Adversaries may manipulate the environment such that the actuation functions of CAS agents fall within undefined or terminal states. Figure 5 illustrates an instance of such attacks: an autonomous vehicle that is trained to avoid crossing solid lines will inevitably remain stationary if it finds itself encircled by such lines. In our UAV example, the induction of emergency conditions through environmental or sensory manipulation can drive targeted agents into safe modes, which in many cases trigger automatic Return-to-Base (RTB) or emergency landing procedures [41]. Table I presents the classifications of the sample attacks discussed in this section.
VI. SIMULATION FRAMEWORK
As an approach towards the analysis of impact when attacking CAS dynamics, we propose a framework for simulating adversarial actions against generic CAS. With the aim of analyzing the maximum impact of attacks, this framework is designed to automatically derive the optimal sequence of adversarial actions against CAS models. Our framework supports the analysis of both whitebox and blackbox attacks, meaning that the adversary can be considered to have complete, partial, or no a priori knowledge of the system dynamics. Furthermore, this framework allows for arbitrary designation of adversarial goals (e.g., network disruption, actuation manipulation, etc.), and can be configured for arbitrary types of adversarial actions.
The initial step of each simulation in this framework is to obtain an estimate of the dynamics of the targeted CAS from time-series observations of the system. For the simulation of blackbox attacks, this can be achieved through a variety of methods developed for the identification of nonlinear dynamics, such as deep neural networks (e.g., [47]). When partial knowledge of the system is assumed, the estimation technique can be based on a generic model of the dynamics with unknown model parameters, which may be estimated via statistical techniques. As for the simulation of whitebox attacks, this estimate can be fixed to a complete dynamical model of the system. Examples of each case are presented in Section VII.
With the initial estimate of the dynamics at hand, the next step of this framework is to create a secondary simulation of the targeted system in order to obtain the optimal attack policy π*(S), which maps any observed state S of the estimated system to an optimal action A_S. This action corresponds to one of the adversarial actions defined in the initial configuration of the simulation; instances of such actions are node removals for attacks on network structure, sensory overload for attacks on cooperation protocols, and the crafting of adversarial examples for manipulation of actuation functions.
Accordingly, we propose reinforcement learning as a promising approach to the problem of policy optimization. Reinforcement learning techniques are described by the Markov Decision Process (MDP) tuple (S, A, P, R), where S is the set of reachable states in the process, A is the set of available actions, R is the mapping of transitions to the immediate reward, and P represents the transition probabilities (i.e., the system dynamics). At any given time-step t, the MDP is at a state s_t ∈ S, which can represent the current configuration of the simulated CAS. The reinforcement learning agent's choice of action at time t, a_t ∈ A, causes a transition from s_t to a state s_{t+1} according to the transition probability P^{a_t}_{s_t, s_{t+1}}. The agent receives a reward r_t = R(s_t, a_t) ∈ ℝ for choosing the action a_t at state s_t.
Interactions of the agent with the MDP are captured in a policy π. When such interactions are deterministic, the policy π : S → A is a mapping between the states and their corresponding actions. A stochastic policy π(s, a) represents the probability of optimality for action a at state s.
The objective of reinforcement learning is to find the optimal policy π* that maximizes the cumulative reward at any time t, denoted by the return function R̂_t = Σ_{t'=t}^{T} ψ^{t'−t} r_{t'}, where ψ < 1 is the discount factor that accounts for the diminishing worth of rewards obtained further in time, hence ensuring that R̂_t is bounded.
An approach to this problem is the Action-Value Function optimization algorithm, or Q-Learning. In every iteration of this technique, the optimal value of each action is calculated as the expected sum of future rewards, assuming that every action taken after the current choice follows the optimal policy. Under a given policy π, the value of an action a in a state s is given by the value function Q defined as Q^π(s_t, a_t) = E[R̂_t | s_t, a_t, π]. The optimal Q value is hence defined as Q*(s_t, a_t) = max_π Q^π(s_t, a_t), and the optimal policy is given by π*(s_t) = argmax_{a_t} Q*(s_t, a_t). The Q-learning method estimates the optimal action policies by using the Bellman equation Q_{i+1}(s_t, a_t) = E[r_t + ψ max_{a_{t+1}} Q_i(s_{t+1}, a_{t+1})] as the iterative update of a value iteration technique. Practical implementation of Q-learning is generally based on function approximation of the parametrized Q-function Q(s_t, a_t; θ) ≈ Q*(s_t, a_t). A common technique for approximating the parametrized non-linear Q-function is to train a neural network whose weights correspond to θ. This neural network is trained such that at every iteration i it minimizes the loss function L_i(θ_i) = E_{(s_t, a_t)∼ρ}[(y_i − Q(s_t, a_t; θ_i))^2], where y_i = E[r_t + ψ max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; θ_{i−1})] is the target value and ρ(s_t, a_t) is a probability distribution over states s_t and actions a_t.
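A minimal tabular version of the Q-learning update described above is sketched below. The toy chain environment, learning rate, and episode count are assumptions introduced only for illustration; the framework described in this paper uses a neural-network function approximator rather than a table.

import random

N_STATES, ACTIONS, GOAL = 6, (1, -1), 5    # 1-D chain of states; actions move right (+1) or left (-1)
PSI, ALPHA, EPS = 0.9, 0.5, 0.1            # discount factor psi, learning rate, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    # Environment dynamics: move along the chain; reward 1 only when the goal state is reached.
    s_next = min(max(s + a, 0), N_STATES - 1)
    return s_next, (1.0 if s_next == GOAL else 0.0), s_next == GOAL

for _ in range(500):                        # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next, r, done = step(s, a)
        # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + psi * max_a' Q(s',a') - Q(s,a))
        target = r + PSI * max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print("greedy policy (should move right toward the goal):", policy)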
This optimization problem is typically solved using computationally efficient techniques such as Stochastic Gradient Descent (SGD). This approach allows the problem of estimating the Q-function to be performed by neural network function approximators optimized via stochastic gradient descent, updating the current value Q(s_t, a_t; θ_t) towards a target value Y^Q_t. Once the optimal policy is obtained from the secondary simulation, it is implemented on the primary simulation to observe the impact for a user-defined number of timesteps. At this point, the new observations are fed back to the estimation algorithm to improve the adversary's model of the target dynamics and to derive the optimal attack policy for the updated model. This iterative process is executed until the user-defined criteria for attack success or termination are reached. At every iteration of Q-Learning, the process selects its estimate of the best possible action, which is one of the adversarial actions designated in the configuration of the attack simulation.
This process is formalized in Algorithm 1. Before execution, this algorithm must be integrated with a dynamical simulation or physical prototype of the target system. The user must also specify a technique for estimation of dynamics, and designate an attack objective, the set of permissible adversarial actions, the cost function of the attack, and the criteria for termination of Q-learning. Upon execution, the algorithm iteratively observes the state of the target system and updates its estimate of the target's dynamics according to the pre-defined technique (line 5). This estimate is then used to create a simulation of the target from the adversary's perspective, which is then explored via Q-learning to obtain an optimal attack policy based on the current estimate (line 6). This policy is then applied to the original simulation or prototype of the target (line 7), and the simulated adversary's observation of the target's state is updated according to the resulting state of the target (line 8). This process is repeated until the adversarial reward reaches the designated attack objective (line 4).
Algorithm 1: Attack Simulation Framework
Input: dynamical simulation, attack cost function C, objective O, set of actions A, termination criteria X
Data: initial target configuration G_0, reward/cost of attack R, current configuration G, policy π
Output: optimal reward/cost of attack R, final configuration G*, optimal policy π*(.)
1: G ← G_0
...
4: while reward R has not reached objective O do
5:     update the estimate U of the target's dynamics from observations of G
6:     R, π ← QLearning(SimulateDynamics(G, U, π), G, U, X, C)
7:     implement a ← π(G)
8:     update G
9: end
It is noteworthy that this framework can only succeed if the attack objective is reachable from the initial state of the target and with the defined set of actions. Otherwise, this algorithm will provide a best-effort performance in coming as close as possible to the objective. Also, the accuracy and convergence of this algorithm are heavily dependent on the dynamics estimation mechanism. The choice of estimation technique and its updating criteria must be such that the estimation errors do not consistently accumulate and remain bounded over a large number of iterations.
Furthermore, Algorithm 1 does not intrinsically account for constraints on execution time; therefore, such limitations must be implemented within the attack cost function. Similar to the reachability requirement for optimality, if the time constraints of the problem fall below the time required for reaching the optimal answer, this algorithm still performs a best-effort search for optimal attacks and potential impact. Such best-effort results are indeed representative of practical worst-case impact levels under the conditions modeled by user-defined parameters.
VII. CASE STUDIES
To study the performance and feasibility of our proposed framework, we investigated its application to three real-world CAS scenarios, namely: inducing cascading failures in power distribution networks, destabilization of terrorist organizations, and policy manipulation in Deep Q-Learning. For each case study, we describe the objective and classify the type of attack according to the schemes introduced in Section V. We then report the approach and experimental setup, and present the results in terms of quantitative impact and vulnerability.
A. Cascade Failures in Power Grids
Power distribution networks constitute a well-known instance of CAS [17] that is susceptible to cascading failures triggered by malfunctions in one or more local components, such as relays and transmission lines. In such cases, the load of a failed component is balanced onto neighboring nodes, causing them to overload and fail as well [48]. In this case study, the attack objective is to analyze the maximum possible disconnection of a power network by the induction of cascading failures through sequential removal of transmission lines in a simulated power grid. The case of sequential attacks on power grids was recently studied by Yan et al. [48], who also use an approach based on reinforcement learning to analyze the impact of such attacks. One major difference between the methodology of [48] and this case study is the assumption of a blackbox attack in our approach, which circumvents the issues caused by modeling challenges in the study of cascading power grid failures [38]. Moreover, this case study demonstrates an instance of applying a dynamical system model to the analysis of vulnerabilities in CAS.
1) Objective and Classification: The objective of this attack is to disconnect the minimum number of transmission lines one at a time, such that the system collapses. This attack targets the network structure to compromise the Availability dimension of CIA by implementing an adversarial action to manipulate the state of this CAS.
2) Experiment Setup: The benchmark network used in this experiment is the mid-size IEEE RTS-79 architecture [49]. This system is comprised of 24 buses, 38 transmission lines, 17 load buses, and 10 generating units, with a total generation capacity of 3,405 MW and a peak load of 2,850 MW. A line is considered to be alive if it operates with a load that is smaller than its capacity. Once this threshold is reached, the line fails and all of its load is distributed equally among the nodes that are directly connected to it.
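A load-redistribution cascade of this kind can be captured in a compact simulation. The sketch below is a deliberately simplified, illustrative model: the uniform initial loads, the capacity value, and the rule of splitting a failed line's load equally among surviving lines that share one of its endpoint buses are all assumptions made for this example; it is not the PyPSA-based simulation used in the experiment.

from collections import defaultdict

# Toy grid: each line is (bus_a, bus_b) with a uniform initial load (assumed values).
LINES = {("A", "B"): 1.0, ("B", "C"): 1.0, ("B", "D"): 1.0, ("C", "D"): 1.0, ("D", "E"): 1.0}
CAPACITY = 1.3

def neighbors(line, alive):
    # Surviving lines sharing an endpoint bus with the given line.
    return [l for l in alive if l != line and set(l) & set(line)]

def cascade(initial_failures):
    load = dict(LINES)
    alive = set(LINES) - set(initial_failures)
    queue = list(initial_failures)
    failed = set(initial_failures)
    while queue:
        line = queue.pop()
        nbrs = neighbors(line, alive)
        if not nbrs:
            continue
        share = load[line] / len(nbrs)            # equal-split redistribution of the failed line's load
        for nbr in nbrs:
            load[nbr] += share
            if load[nbr] > CAPACITY and nbr in alive:   # overload -> secondary failure
                alive.discard(nbr)
                failed.add(nbr)
                queue.append(nbr)
    return failed

failed = cascade(initial_failures=[("B", "C")])
print(f"attacker removed 1 line; total failed lines after cascade: {len(failed)}")

With these assumed parameters, a single attacker-induced line failure triggers secondary overloads that propagate through the rest of the toy grid, illustrating the amplification effect the attack objective exploits.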
The dynamical simulation was implemented in Python using the PyPSA toolbox [51]. Following the setup of [48], the attack objective was set to cause at least 8 line failures, while minimizing the direct disconnection of lines by the attacker and maximizing the disconnections resulting from cascading failures. We constrained the maximum number of iterations of each simulation to 500 and repeated each full simulation 100 times. As the estimation method, we adopted the architecture proposed in [52] for a convex-based Long Short-Term Memory (LSTM) neural network to approximate the nonlinear dynamics of the power grid.
3) Results: Figure 6 depicts the obtained results, averaged over 100 repetitions. It can be seen that our framework achieves an outage of 8.6 lines with only 3 direct node removals, thereby demonstrating the applicability of our framework in simulating emergent attacks in real-world CAS. Accordingly, the vulnerability measure of this network structure to node removal attacks is 1/3 ≈ 0.33.
B. Destabilization of Terrorist Networks
This case study reports our previous work in [18]. In this work, we investigated the performance of our framework in deriving optimal destabilization policies against terrorist organizations. In the context of this study, destabilization is defined as minimizing the desire of terrorist agents to remain affiliated with the organization. Similar to the previous experiment, the choice of adversarial action in this scenario is also sequential removal of nodes (i.e., human actors). This attack aims to eliminate the spiritual and operational incentives of remaining in the organization through removal of those nodes who are vital in preserving these two aspects.
1) Classification: Although this is another network node-removal action, the attack surface in this scenario is the cooperation protocols of the targeted CAS. This attack aims to exploit the self-interested nature of agents by diminishing the incentive of cooperation, such that breaking off from this cooperation becomes inevitable. Consequently, this attack targets the Integrity and Availability dimensions of CIA through active attacks on both the state and the dynamics of this CAS.
2) Experimental Setup: We modeled the dynamics of this CAS as a network formation game, in which the payoff function for each agent i is defined as u_i(G, X) = Σ_{j≠i} G_ij (V_ij(X_ij, G_−i; θ_0) + ε_ij), where G_−i is the adjacency matrix G with the i-th row deleted, and payoffs are known up to the parameter vector θ_0. X = (X_ij; i, j ∈ N) is the set of homophily vectors between all pairs i ≠ j obtained from the profile vectors F_i and F_j, V_ij is the deterministic component of the payoff, and the parameter ε_ij ∈ ε_i is the idiosyncratic shock, representing the effect of i's unknown parameters on its desire to establish a link with j. Instances of such parameters are personal taste and psychology, and may extend to include the effects of homophily and topological parameters that are not directly observable. Consequently, the problem of estimation is simplified into the estimation of the payoff parameters for each agent. The set of available data for this estimation includes automatically extracted profiles and incomplete snapshots of the network mined from open-source structured and unstructured sources. We applied a recently proposed two-step estimation technique [53] that exploits the hierarchical symmetries in the CAS to eliminate the need for detailed time-series observations of the target.
With this estimation technique at hand, we applied the simulation framework to our extracted dataset of Al Qaeda's leadership network, with the objective of maximizing the network fragmentation F, defined via the proportion of pairs of nodes that remain mutually reachable as nodes are removed or disconnected from the network. Formally, F = 1 − [Σ_k s_k(s_k − 1)] / [n(n − 1)], where s_k is the size of component k (i.e., a group of nodes remaining connected after removal of a node) and n is the total number of nodes in the network. Values close to 1 indicate high fragmentation and values close to 0 indicate low fragmentation. As such, fragmentation is an inverse measure of the amount of connectedness or connection redundancy in a network.
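The fragmentation measure can be computed directly from the component sizes of the residual network, as in the short sketch below. The toy graph and removal order are assumptions introduced purely for illustration; they do not represent the Al Qaeda dataset.

import networkx as nx

def fragmentation(G):
    # F = 1 - sum_k s_k (s_k - 1) / (n (n - 1)), where s_k are connected-component sizes.
    n = G.number_of_nodes()
    if n < 2:
        return 1.0
    reachable_pairs = sum(s * (s - 1) for s in (len(c) for c in nx.connected_components(G)))
    return 1.0 - reachable_pairs / (n * (n - 1))

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)])   # toy network in which node 2 acts as a broker
print("initial fragmentation:", round(fragmentation(G), 2))
for node in (2, 3):                                       # hypothetical removal sequence
    G.remove_node(node)
    print(f"after removing node {node}: F = {fragmentation(G):.2f}")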
3) Results: Figure 7 illustrates the results of implementing the action policy generated by our framework for 5 iterations, in comparison with two well-established network targeting techniques: elimination of the node with the highest betweenness centrality, and elimination of the node with the highest brokerage value at each iteration. It is shown that the proposed technique achieves a much higher fragmentation in all steps of the process, culminating in a 71% fragmentation after 4 node removals. The targeting sequences of the baseline techniques in this experiment are as follows: in the attack based on betweenness, the targeting sequence is Bin Laden first, followed by Zawahiri, Mohammad Atta (operational leader of the September 11 attacks), and Abu Qatada; the targeting sequence of the brokerage-based attack is Zawahiri, Abu Qatada, Bin Laden, and Ibrahim Maidin (military leader of Jemaah Islamiah in Singapore).
Due to the unavailability of ground truth in the public domain, this experiment is restricted to observations and interpretive evaluation. One interesting observation is that this policy does not recommend the targeting of Bin Laden as the first action, which, as has been the case in reality, would only lead to his replacement by Zawahiri without any major impact on the individual utilities of lower-level members. This policy begins by removing those nodes whose replacement leads to significant drops in the network's performance, which in turn reduces the benefits of remaining in or joining the network for other members. Consequently, subsequently targeting the top leader leads to less effective replacements and network configurations, which may either dissolve on their own or can be targeted with greater ease than the original network. This weakening of ties can be observed in the sparsity and diminishing clustering, quantified via changes in the global clustering coefficient, as depicted in Figure 8.
Accordingly, the vulnerability of Al Qaeda's network structure to node removal attacks is 1/4 = 0.25.
C. Policy Induction in Deep Reinforcement Learning
The emerging paradigm of deep Reinforcement Learning (RL) [54] demonstrates the defining characteristics of CAS: the training of deep RL is governed by the nonlinear dynamics of neural networks and interactions with the environment, the behavior of deep RL is an emergent result of local interactions between the hierarchical layers of deep networks, the policy and actions of deep RL adapt in response to changes in the environment, and the deep neural networks of this system self-organize through adjustment of inter-layer weights. Consequently, deep RL can also be subject to dynamical attacks.
To demonstrate the vulnerability of deep RL to such attacks, in [55] we present a DDDAS-based model of vulnerabilities in such systems and report the performance of our framework against Deep Q-Networks (DQNs) through manipulation and induction of adversarial policies in these systems at training time. In this attack, we utilize adversarial examples [44] to manipulate the environmental feedback of the DQN, leading it towards learning our adversarial policy instead of one that satisfies the original objectives of the DQN.
The procedure of this attack can be divided into two phases: initialization and exploitation. The initialization phase implements the processes that must be performed before the target begins interacting with the environment, which are: 1) train a DQN based on the attacker's reward function r to obtain the adversarial policy π*_adv; and 2) create a replica of the target's DQN and initialize it with random parameters. The exploitation phase implements the attack processes, such as crafting adversarial inputs. This phase constitutes an attack cycle, depicted in Figure 9. The cycle initiates with the attacker's first observation of the environment and runs in tandem with the target's operation.
2) Experimental Setup: We examine the targeting of Mnih et al.'s DQN designed to learn Atari 2600 games [56]. In our setup, we train the network on the game of Pong. The game is played against an opponent with a modest level of heuristic artificial intelligence, and is customized to handle the delays in the DQN's reaction due to the training process. The game's backend provides the DQN agent with the game screen sampled at 8 Hz, as well as the game score (+1 for a win, -1 for a loss, 0 for an ongoing game) throughout each episode of the game. The set of available actions A = {UP, DOWN, Stand} enables the DQN agent to control the movements of its paddle.
Similar to the original architecture of Mnih et al. [56], this input is first passed through two convolutional layers to extract a compressed feature space for the following two feed-forward layers for Q function estimation. The discount factor γ is set to 0.99, and the initial probability of taking a random action is set to 1, which is annealed after every 500000 actions. The agent is also set to train its DQN after every 50000 observations.
In this experiment, we consider an adversary whose reward value is the exact opposite of the game score, meaning that it aims to devise a policy that maximizes the number of lost games. To obtain this policy, we trained an adversarial DQN on the game, whose reward value was the negative of the value obtained from the target DQN's reward function. With the adversarial policy at hand, a target DQN was set up to train on the game environment to maximize the original reward function. The game environment was modified to allow perturbation of pixel values in game frames by the adversary. A second DQN was also set up to train on the target's observations to provide an estimate of the target DQN and enable blackbox crafting of adversarial examples. At every observation, the adversarial policy obtained in the initialization phase was consulted to calculate the action that would satisfy the adversary's goal. Then, the JSMA algorithm was utilized to generate the adversarial example that would cause the output of the replica DQN to be the action selected by the adversarial policy. This example was then passed to the target DQN as its observation. Figure 10 compares the performance of the unperturbed and attacked DQNs in terms of their average reward values per episode. It can be seen that the reward value for the targeted DQN agent rapidly falls below the unperturbed case and maintains the trend of losing the game throughout the experiment.
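The exploitation cycle of this attack can be summarized by the schematic loop below. This is a high-level sketch only: the function names (adversarial_policy, craft_adversarial_example) and the environment/target interfaces are placeholders standing in for the adversarial DQN, the JSMA crafting routine, and the game backend used in the experiment, not their actual APIs.

def adversarial_policy(frame):
    # Placeholder for the adversarially trained DQN: returns the action the attacker wants taken.
    return "DOWN"

def craft_adversarial_example(frame, replica_dqn, desired_action):
    # Placeholder for JSMA-style crafting: perturb a few pixels so the replica outputs desired_action.
    return frame  # the actual perturbation is omitted in this sketch

def policy_induction_cycle(environment, target_dqn, replica_dqn, n_steps=3):
    frame = environment["observe"]()                                      # 1. attacker observes the game frame
    for _ in range(n_steps):
        desired = adversarial_policy(frame)                               # 2. best action under the adversarial policy
        perturbed = craft_adversarial_example(frame, replica_dqn, desired)  # 3. craft the perturbed frame
        frame = environment["step"](target_dqn(perturbed))                # 4. target acts on the perturbed input
        # In the full attack, the replica DQN is also retrained here on the target's observed behavior.

# Minimal stand-ins so the sketch runs end to end.
env = {"observe": lambda: [0.0] * 4, "step": lambda action: [0.1] * 4}
policy_induction_cycle(env, target_dqn=lambda frame: "UP", replica_dqn=None)
print("one exploitation cycle executed")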
The vulnerability of the DQN to policy induction attacks can be expressed as the inverse of the number of epochs before the divergence of the average reward from the unperturbed trajectory, which is 1/12 ≈ 0.083.
VIII. DISCUSSION ON MITIGATION TECHNIQUES
The complexity and scope of CAS give rise to an everlasting potential for the discovery of novel and unprecedented vulnerabilities. As a result, comprehensive analysis of their resilience to adversarial actions cannot rely solely on the evaluation of predefined lists of attack types and vectors. Consequently, such analyses must determine the underlying parametric relations and bounds which lead to CAS designs that are guaranteed to satisfy the desired criteria for reliability and security. Also, this level of resilience needs to be balanced against cost and operational specifications. Therefore, the problem of choosing optimal resilience criteria and parametric bounds translates into an iterative optimization problem. A further challenge in this analysis is to determine the temporal depth of tracing the impact of parametric changes, i.e., how far into the future one must analyze in order to verify the safety of tested changes. A prominent instance of this challenge is the domain of AI safety, which is concerned with the effects of long-term learning and cumulative autonomy on the safe and secure operation of intelligent agents.
With regard to attack detection, the distributed nature of CAS gives rise to a major challenge in monitoring the state of the system and detecting attacks. Feasible detection mechanisms must provide the means for dissemination of local observations across a network that may be jammed, compromised, or not homogeneously trustworthy. Therefore, information sharing and the incorporation of received data into attack detection mechanisms have to follow strategic and selective procedures. Also, dissemination of state information must follow protocols that minimize communication overhead while providing reliable transmissions in networks under attack. Adoption of similar developments in fields such as cognitive radios [57], wireless sensor networks [58], and the Internet of Things [59] may prove useful in explorations of this area.
Building on this step, a further avenue of pursuit is the formal and numerical investigation of the impact of constituent-element and system parameters on the resilience of CAS. Through parametric analysis of homeostasis conditions in generic models of CAS, this direction of work enables the establishment of absolute and relative parametric bounds within which a CAS remains resilient. One of the potential pursuits of this avenue is to establish bounds on the initial conditions required for the emergence of resilient CAS, such as the number of constituent elements, required redundancies, and other parametric rules for schemata that give rise to the emergence of resilience. This analysis will enable a further formal investigation into the balance of resistance and adaptivity of systems with their feasibility in terms of efficacy, performance, real-time responsiveness, and energy efficiency. Achieving these objectives in large-scale CAS will require extending the models of dynamics established in Section III into tractable models that are better suited for the analysis of high-dimensional nonlinear dynamics. Promising avenues of investigation include modern variants of MDP modeling and reinforcement learning, path integrals, genetic algorithms, mean-field game theory, and operad theory, to analyze reachability, controllability, convergence, and phase transitions in high-dimensional state trajectories of CAS.
Inspired by the biological phenomena of threat detection and the alerting of cohorts in biological systems and societies, a further avenue is to investigate equipping elements of CAS with intrinsic mechanisms for self-regulation and identification of ongoing attacks, thereby greatly enhancing the resilience and elasticity of such systems against adversarial manipulations. This thrust may investigate the mechanisms of anomaly detection and coalition formation that enable cooperative detection of attacks through distributed information sharing and processing. A highly useful result of this work can be the development of self-organizing mechanisms and schemata that produce such functionalities as emergent behaviors of the system. A major inspiration for this study is the human immune system, which is itself a CAS whose emergent behavior is to detect, announce, and defend against attacks. Cells in the immune system perform distributed anomaly detection based on simple learning and memory retention mechanisms, and modulate their individual and coordinated responses according to the continual updates of the memory by the learning mechanism. This calls for a comprehensive study into the feasibility of such mechanisms for the development of emergent defense mechanisms in CAS. This study may also investigate the employment and enhancement of multi-agent reinforcement learning and transfer learning techniques as mechanisms for adaptive learning of optimal actions in the presence of persistent and dynamic anomalies. A further direction of this task is to analyze the feasibility of embedding dedicated attack detection and mitigation nodes, and to establish design rules for the balanced and optimal size, distribution, and signaling of such nodes in resilient CAS.
IX. CONCLUSION
We introduced the paradigm of adversarial attacks targeting the nature of dynamics in Complex Adaptive Systems (CAS). Aiming to develop a comprehensive foundation for the analysis of such attacks, we proposed three approaches to the modeling of CAS as dynamical, data-driven, and game-theoretic systems. We developed suitable definitions of attack, vulnerability, and resilience in the context of CAS security, and introduced three schemes for classifying threats based on security dimensions, data-driven abstraction, and fundamental functionalities of CAS. Building on this foundation, we proposed a framework for the simulation and analysis of attacks on CAS, and demonstrated its performance in vulnerability analysis of power grids, terrorist networks, and deep reinforcement learners. These case studies also demonstrate the need for novel techniques and methodologies for threat detection and mitigation in both natural and engineered CAS. To facilitate the search for such techniques, we also presented a discussion of promising avenues for future research in the analysis and design of resilient complex adaptive systems. | 2017-09-13T05:14:48.000Z | 2017-09-13T00:00:00.000 | {
"year": 2017,
"sha1": "7938d8bf13559ed99b1eb5330c531d5efe088f47",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d06cc8497f5a1b13c7399fe61a345edf9178d227",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
270961806 | pes2o/s2orc | v3-fos-license | An updated meta-analysis of optimal medical therapy with or without invasive therapy in patients with stable coronary artery disease
Background The efficacy of optimal medical therapy (OMT) with or without revascularization therapy in patients with stable coronary artery disease (SCAD) remains controversial. We performed a meta-analysis of randomized controlled trials (RCTs) that compared OMT with or without revascularization therapy for SCAD patients. Methods Studies were searched in PubMed, EMBASE, and the Cochrane Central Register of Clinical Trials from January 1, 2005, to December 30, 2023. The main efficacy outcome was a composite of all-cause death, myocardial infarction, revascularization, and cerebrovascular accident. Results were pooled using random-effects and fixed-effects models and are presented as odds ratios (ORs) with 95% confidence intervals (CIs). Results Ten studies involving 12,790 participants were included. The arm of OMT with revascularization, compared with OMT alone, was associated with decreased risks for MACCE (OR 0.55 [95% CI 0.38–0.80], I²=93%, P = 0.002), CV death (OR 0.84 [95% CI 0.73–0.97], I²=36%, P = 0.02), revascularization (OR 0.32 [95% CI 0.20–0.50], I²=92%, P < 0.001), and MI (OR 0.85 [95% CI 0.76–0.96], I²=45%, P = 0.007), while there was no significant difference between OMT with revascularization and OMT alone in the odds of all-cause death (OR 0.94 [95% CI 0.84–1.05], I²=0%, P = 0.30). Conclusions This updated meta-analysis of 10 RCTs shows that in patients with SCAD, OMT with revascularization reduces the risk for MACCE, cardiovascular death, and MI. However, the invasive strategy does not decrease the risk for all-cause mortality compared with OMT alone. Supplementary Information The online version contains supplementary material available at 10.1186/s12872-024-03997-7.
Introduction
Coronary artery disease (CAD) is the leading cause of death worldwide and can cause angina pectoris, acute myocardial infarction (AMI), and ischemic heart failure (HF) [1]. CAD is characterized by the development of atherosclerotic plaques in the epicardial coronary arteries. When atherosclerotic obstruction becomes significantly flow-limiting, or plaque rupture causes thrombotic vessel occlusion, angina or AMI occurs [2]. Chronic myocardial ischemia caused by stenotic coronary arteries or myocardial infarction may further lead to HF and/or death [2], making the alleviation of angina symptoms and the prevention of AMI or death the main goals of CAD treatment [1].
Revascularization, consisting of percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG), has been proved to improve event-free survival in patients with AMI [3,4]. However, the optimal treatment for patients with stable coronary artery disease (SCAD) remains controversial. Many randomized trials have compared the ability of optimal medical therapy (OMT) and revascularization to achieve the aforementioned treatment goals in SCAD patients [5][6][7][8]. Most of them found that revascularization provides better symptom relief and improved quality of life compared with OMT [9,10], but the findings on whether it can also improve survival or reduce new myocardial infarction remain inconsistent. Therefore, revascularization is generally recommended as an adjunct to medical therapy for SCAD patients in guidelines [11].
Accordingly, with the new evidence from the long-term outcomes of several trials, we sought to conduct this updated meta-analysis to provide a comprehensive assessment of the role of coronary revascularization coupled with OMT compared with OMT alone in patients with SCAD.
Search strategy and data extraction
We carried out the systematic review in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [12] (Table S1 in Additional file 1). The study protocol was registered on the International Platform of Registered Systematic Review and Meta-analysis Protocols database (INPLASY protocol: INPLASY202410067). We searched PubMed, EMBASE, and the Cochrane Central Register of Clinical Trials for RCTs of optimal medical therapy with or without revascularization in patients with SCAD published from January 1, 2005, to December 30, 2023. The search strategy is shown in Table S2 in Additional file 1. The search was complemented by a manual search of the reference lists of relevant articles and of published guideline statements by professional societies.
Inclusion and exclusion criteria
Studies were included if they met the following criteria: (1) patients with SCAD; (2) patients treated with optimal medical therapy with or without revascularization therapy; (3) outcome indicators including all-cause death, cardiovascular (CV) death, myocardial infarction, and revascularization. We excluded studies that enrolled patients < 18 years old; studies without sufficient data to extract, such as conference summaries, reviews, and pharmacological introductions; non-randomized trials, including observational studies, reviews, and meta-analyses; non-English language manuscripts; non-human trials; and studies before 2000 in which percutaneous transluminal coronary angioplasty or balloon angioplasty was the primary means of intervention, as these do not reflect the current standard of care.
The protocol was drafted by three authors (Lei Bi, Yu Geng, and Yintang Wang) and reviewed by all coauthors. EndNote (version X9) software was used for document management, and two investigators (Lei Bi and Yu Geng) independently evaluated the eligibility of the identified items. Potential discrepancies were discussed with the senior author (Ping Zhang).
Outcomes
The primary efficacy outcome was a composite of major adverse cardiac and cerebrovascular events (MACCE), including all-cause death, myocardial infarction (MI), revascularization, and cerebrovascular accident. Other efficacy outcomes were all-cause death, CV death, MI, revascularization, hospitalization, and cerebrovascular accident. Definitions in individual trials were reviewed, and a harmonizing definition was used across the trials to the extent possible (Table S3 in Additional file 1). We used the Cochrane Collaboration criteria to determine the risk of bias for each included study.
Statistical analysis
RevMan 5.3 was used for the meta-analysis. Data that showed homogeneity on the heterogeneity test (P > 0.10 and I² ≤ 50%) were pooled using a fixed-effects model. If homogeneity was not met (P ≤ 0.10 or I² > 50%) or heterogeneity could not be ruled out, a random-effects model was used to combine effects [13]. For dichotomous outcomes, odds ratio (OR) estimates with the corresponding 95% confidence intervals (CIs) were used. A P value < 0.05 was considered statistically significant. The Mantel–Haenszel (MH) method was used to pool the study estimates.
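For illustration, fixed-effect Mantel–Haenszel pooling of odds ratios can be computed as in the short Python sketch below. The 2×2 counts are made-up example data, and the sketch omits the continuity corrections and heterogeneity statistics that RevMan reports; it is not the analysis code used in this study.

import math

# Hypothetical 2x2 counts per study: (events_intervention, total_intervention, events_control, total_control)
studies = [(30, 200, 45, 210), (12, 150, 20, 155), (55, 400, 70, 390)]

def mantel_haenszel_or(studies):
    # Fixed-effect Mantel-Haenszel pooled odds ratio: OR_MH = sum(a*d/n) / sum(b*c/n).
    num = den = 0.0
    for a, n1, c, n2 in studies:
        b, d, n = n1 - a, n2 - c, n1 + n2
        num += a * d / n
        den += b * c / n
    return num / den

def study_or_ci(a, n1, c, n2):
    # Per-study OR with a Woolf (log-scale) 95% confidence interval.
    b, d = n1 - a, n2 - c
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - 1.96 * se), math.exp(math.log(or_) + 1.96 * se)

for i, s in enumerate(studies, 1):
    or_, lo, hi = study_or_ci(*s)
    print(f"study {i}: OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"Mantel-Haenszel pooled OR: {mantel_haenszel_or(studies):.2f}")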
Subgroup and sensitivity analyses
The treatment efficacy of OMT with or without revascularization was explored in patients with SCAD. Because the revascularization strategy varied across studies, additional subgroup analyses were performed to compare the efficacy results of PCI or CABG versus OMT in SCAD patients. R software (version 4.2.2) was used to investigate the influence of any single study on the overall pooled estimate of each predefined outcome.
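The leave-one-out sensitivity analysis can likewise be sketched in a few lines: each study is removed in turn and the pooled estimate is recomputed. The example below uses Python with made-up counts purely for illustration; the actual analysis in this study was performed in R.

# Hypothetical per-study 2x2 counts: (events_intervention, total_intervention, events_control, total_control)
studies = [(30, 200, 45, 210), (12, 150, 20, 155), (55, 400, 70, 390), (8, 90, 15, 95)]

def pooled_or(data):
    # Fixed-effect Mantel-Haenszel pooled odds ratio.
    num = sum(a * (n2 - c) / (n1 + n2) for a, n1, c, n2 in data)
    den = sum((n1 - a) * c / (n1 + n2) for a, n1, c, n2 in data)
    return num / den

overall = pooled_or(studies)
print(f"overall pooled OR: {overall:.2f}")
for i in range(len(studies)):
    loo = pooled_or(studies[:i] + studies[i + 1:])        # leave study i out and re-pool
    print(f"without study {i + 1}: OR {loo:.2f} (shift {loo - overall:+.2f})")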
Results
The flow chart (Fig. 1) summarizes the search and study selection process. A total of 3,728 studies were identified through the search in PubMed, the Cochrane Central Register of Controlled Trials, and EMBASE, of which 1,525 were excluded due to duplication. Then, 2,203 irrelevant studies were also excluded after reading the titles and abstracts. The remaining 19 studies were assessed by reading the full texts. Among them, data from 10 RCTs evaluating the efficacy of optimal medical therapy with or without revascularization in patients with SCAD were included.
In the end, 10 RCTs involving 12,790 patients, comprising 6,497 patients treated with revascularization and 6,293 patients treated with OMT alone, were included in the analysis (Table 1). These RCTs included ISCHEMIA [14], ISCHEMIA-CKD [15], ISCHEMIA-EXTEND [16], an ISCHEMIA post hoc analysis [17], BARI 2D [18], MASS II [7], and TIME [5] (trials of OMT with or without PCI and/or CABG), and COURAGE [6], FAME 2 [19], and JASP [20] (trials of OMT with or without PCI). No differences were observed in the proportion of patients lost to follow-up between the groups across trials. As the ISCHEMIA, ISCHEMIA-EXTEND, and ISCHEMIA post hoc analyses included the same patients, the most appropriate one was used in the analysis of each corresponding outcome.
Sensitivity analysis
We used R software to investigate the influence of any single study on the overall pooled estimate of each predefined outcome (the primary efficacy outcome, all-cause death, CV death, MI, and revascularization). We found that removal of any one study did not materially affect the overall results, whereas the estimates for MACCE and cerebrovascular accidents may have been influenced by a single trial, which suggests reduced robustness of these results.
Risk of bias and quality assessment of outcomes
The results of the risk-of-bias assessment of the randomized controlled trials with the RoB 2 tool are summarized in Figure S4 in Additional file 1. Four studies were considered at low overall risk of bias.
Discussion
In this updated meta-analysis of 10 RCTs comprising 12,790 patients, we found that in patients with SCAD, revascularization with OMT, compared with OMT alone, reduces the risk for MACCE, cardiovascular death, and MI. Invasive therapy is also associated with lower rates of repeat revascularization and recurrent hospitalization. However, invasive therapy does not decrease the risk for all-cause mortality. The aforementioned benefit is mainly driven by the strategy of PCI with OMT, and it should be mentioned that the trials included in our meta-analysis had PCI as the predominant means of revascularization, except for BARI 2D, MASS II, and ISCHEMIA, in which a significant proportion of patients underwent CABG. Our finding of a lower risk for the primary efficacy outcome of MACCE in the revascularization with OMT arm is predominantly driven by the FAME 2 [19], MASS II [7], and TIME [5] data. The 5-year follow-up data from FAME 2 were published in 2018 [19] and showed that coronary fractional flow reserve (FFR)-guided PCI led to a significantly lower rate of the prespecified primary composite end point of death, MI, or urgent revascularization than medical therapy alone. In addition, intravascular imaging with intravascular ultrasound (IVUS) or optical coherence tomography (OCT) to guide PCI has consistently been shown to reduce major adverse cardiovascular events (CV death, target lesion-related MI, or ischemia-driven target lesion revascularization) [21]. These findings are also consistent with the results of a meta-analysis of 31 studies with 17,882 patients [22]. The current guidelines recommend that revascularization be considered in patients with SCAD when signs of reversible myocardial ischemia are present [11,23,24]. The findings of the aforementioned studies indicate that revascularization guided by intravascular assessment of the target lesions might provide greater benefit for these patients.
Numerous studies have addressed the potential of revascularization to improve survival and reduce MI in patients with stable CAD, and from the 2000s onward several RCTs were conducted to provide more robust evidence in this field [5-8, 19, 20]. However, almost none of them found that revascularization added to OMT was associated with a lower risk of death or MI, beyond relief of anginal symptoms. Trials conducted before The International Study of Comparative Health Effectiveness with Medical and Invasive Approaches (ISCHEMIA) enrolled patients with milder levels of ischemia, which may be one explanation for the negative results. ISCHEMIA was designed to determine the effect of revascularization added to medical therapy in patients with stable CAD and moderate or severe ischemia, for whom an invasive strategy might have been most beneficial [14]. Although the ISCHEMIA trial also failed to show a survival benefit of revascularization, it found a lower incidence of spontaneous MI during long-term follow-up in the invasive-strategy arm than in the conservative-strategy group. This finding indicates that there might be a long-term benefit of revascularization for patients with stable CAD. Recently, the results of extended follow-up for mortality in the ISCHEMIA trial have been published [16]. With a median follow-up of 5.7 years, the study showed a lower 7-year rate of cardiovascular mortality with an initial invasive strategy but a higher 7-year rate of non-cardiovascular mortality compared with the conservative strategy, resulting in no net treatment difference in all-cause mortality.
Our meta-analysis incorporates the updated extended follow-up results of the ISCHEMIA trial [16]; all the included trials predominantly reflect contemporary medical practice in both the medical and the invasive arms, and the analysis is based on the longest follow-up data available for each trial [5-8, 14-17, 19, 20]. Several meta-analyses exploring the more beneficial treatment strategy for patients with stable CAD have been published. In the meta-analysis by Bangalore et al., which included 14 RCTs with 14,877 patients and a weighted mean follow-up of 4.5 years, no difference in mortality was found between medical therapy and revascularization, but non-procedural MI was reduced in the invasive-therapy arm [25]. However, the trials included in that meta-analysis were much older, and balloon angioplasty was the predominant means of intervention. A recently published meta-analysis by Aviral et al. was similar to the current analysis [26]. It contained 7 RCTs with 12,013 patients and reported no statistically significant difference in the primary outcome of all-cause mortality between the arms, but statistically significant lower rates of MACCE (death, MI or stroke), cardiovascular death, and MI in the revascularization arm compared with the conservative arm. Our results are consistent with the analysis by Aviral et al., owing to similar trial inclusion criteria; in addition, we incorporate the extended follow-up results for mortality from ISCHEMIA, which showed a significant long-term improvement in cardiovascular mortality.
Our finding of a lower incidence of revascularization in the invasive-plus-OMT arm is consistent with prior randomized trials of revascularization versus medical therapy alone [5-8, 19, 20]. The finding of a lower incidence of MI in the invasive-plus-OMT arm, which is predominantly driven by the ISCHEMIA [14], MASS II [7] and FAME 2 [19] data, is also consistent with the meta-analysis by Aviral et al. [26], reflecting the choice to use the primary definition of MI from the ISCHEMIA trial [14].
Nevertheless, some scenarios encountered in practice need further discussion. When facing equal percentages of stenosis in multiple vessels, intravascular assessment (FFR, IVUS, and OCT) of the target lesions is crucial. When facing a stenosis with an FFR of 0.81, a borderline scenario, a comprehensive assessment based on symptoms (e.g. frequency of angina and quality of life), risk factors, or additional tools (echocardiography, instantaneous wave-free ratio (iwFR), myocardial contrast echocardiography, late gadolinium enhancement cardiac magnetic resonance, etc.) to estimate myocardial viability or functionally significant stenosis may be beneficial [1,11]. Meanwhile, the individual risk-benefit ratio should always be evaluated, and revascularization considered only if the expected benefit outweighs its potential risk. Based on a thorough assessment of the extent and severity of CAD as well as the presence of associated comorbidities, shared decision-making is crucial. Full information must be given to the patient about the anticipated advantages and disadvantages of the two strategies, including dual antiplatelet therapy-related bleeding risks, contrast-induced nephropathy, and procedural complications, and multidisciplinary decision-making may be required in some scenarios.
Limitations
Some limitations should be taken into account. First, we did not have access to individual patient data, and the definitions of the primary endpoint of MACCE and the diagnostic methods used to detect ischemia varied across the trials included in this meta-analysis. Second, the findings do not apply to patients with clinically significant left main CAD, low ejection fraction, acute coronary syndrome, or class III or IV heart failure. Third, although the aim of our meta-analysis was to assess the benefit of revascularization for stable CAD, the included trials used PCI as the predominant means of revascularization, as fewer patients underwent CABG. However, the subgroup analysis also shows a lower risk of MACCE and revascularization in patients who received CABG with OMT.
Conclusions
The current updated meta-analysis of 10 RCTs shows that, in patients with SCAD, OMT with revascularization reduces the risk of MACCE, cardiovascular death, and MI. However, the invasive strategy does not decrease the risk of all-cause mortality compared with OMT alone.
approved the final manuscript. All authors have participated sufficiently in the work and agreed to be accountable for all aspects of the work.
Funding
The work was supported by the Beijing Tsinghua Changgung Hospital Fund (Grant No. 12023C1002 and No. 12023Z1005).
Fig. 1
Fig. 1 Flow chart of search and study selection process
Table 1
Design and outcomes of the studies included in the current meta-analysis
Fig. 2
Fig. 2 Forest plot of pooled odds ratio (OR) comparing OMT with revascularization versus OMT alone for the efficacy outcomes. A: MACCE; B: all-cause death; C: CV death; D: revascularization; E: MI; F: hospitalizations; G: cerebrovascular accident
"year": 2024,
"sha1": "b984c43d8478b6e4e1c4b5d21a612dd64ad6f321",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9b56bb4dbf569f7c7bc97456d35d9922e9a585bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Regulating interface interaction in alumina/graphene composites with nano alumina coating transition layers
The structure and properties of graphene/alumina composites are affected by the interface interaction. To demonstrate the influence of interface interaction on the structure of composite materials, a composite without a graphene/matrix-alumina interface was designed and prepared. We introduced a nano transition layer into the composite by pre-fabricating a nano alumina coating on the surface of graphene, thus regulating the influence of interface interaction on the structure of the composite. According to laser micro-Raman spectroscopy, the structure of graphene was not seriously damaged during the modification process, and graphene was subjected to tensile or compressive stress along the 2D plane. The fracture behavior of the modified graphene/alumina composites is similar to that of pure alumina, but significantly different from that of pure graphene/alumina composites. The elastic modulus and hardness of composite G/A/A are higher, and its microstructure has better density and uniformity. In situ HRSEM observation showed that there is a transition layer of alumina in the modified graphene/alumina composite. The transition layer blocks or buffers the interfacial stress interaction; therefore, the composite exhibits a fracture behavior similar to that of pure alumina. This work demonstrates that interface interactions have a significant impact on the structure and fracture behavior of graphene/alumina composites.
Introduction
As a traditional inorganic ceramic material, alumina has advantages such as heat resistance, insulation, and high hardness, and is widely used in industries such as aerospace, consumer electronics, high-speed railways, and metal smelting. However, alumina also has some common drawbacks of ceramic materials, such as insufficient toughness and susceptibility to brittle fracture and fragmentation, which limits its use in some applications. Meanwhile, graphene, as a new two-dimensional carbon nanomaterial, has outstanding properties 1 such as mechanical strength, 2 chemical stability, corrosion resistance and thermal conductivity. 3 It is therefore well suited to improving the mechanical properties, [4][5][6][7] thermal conductivity, 8 electrical conductivity, 9,10 and wear resistance 11,12 of alumina ceramics. Therefore, many researchers use graphene as a reinforcement to improve the comprehensive properties of alumina. [14][15][16][17][18] The uniform distribution of graphene in the alumina matrix can improve the fracture toughness of alumina ceramics, reduce their brittleness, and prevent brittle fracture. Previous work has demonstrated that graphene can significantly improve the mechanical properties of alumina ceramics and has identified its strengthening mechanisms, including graphene pull-out, crack deflection and blockage, and graphene bridging. In addition, grain refinement of graphene/alumina composites has also been observed.
[20][21][22] Iftikhar Ahmad et al. 23 studied the interface structure of graphene/alumina using high-resolution transmission electron microscopy (HR-TEM) and Fourier transform infrared (FTIR) spectroscopy, and found that an Al2OC phase was formed in the interface region. Jonathan M. Polfus et al. 24 studied the crystal structure, electronic structure and oxygen stoichiometry of the graphene oxide/alumina nanocomposite interface through density functional theory (DFT) calculations. Priyamvada Jadaun et al. 25 used electronic structure methods based on DFT and the local density approximation (LDA) to study the effect of crystalline alumina on the band structure of single-layer and double-layer graphene. M. S. Gusmão et al. 26 used DFT to study the electronic structure and transport properties of monolayer graphene on the surface of α-Al2O3. In our previous work, 27 first-principles theoretical calculations and experimental studies of the graphene/alumina interface structure were carried out. However, the study of the interface interaction between graphene and alumina and its effect on the structure and properties of composites is still insufficient. In particular, there is no dedicated experimental work on the effect of interface interaction on the microstructure of graphene/alumina composite materials.
In this work, to demonstrate the influence of interface interaction on the structure of composite materials, a new type of graphene/alumina composite without a graphene/matrix-alumina interface was designed and prepared. We prepared a nano alumina coating on the surface of graphene by a hydrothermal method and prepared the final composite by hot-pressing sintering, thus introducing an interface transition layer into the graphene/alumina composite. In this case, graphene does not directly interact with the alumina matrix at the interface. Previous studies attributed the grain refinement of composite materials to the mass-transfer hindrance effect of the two-dimensional sheet structure of graphene. This design retains the influence of the sheet structure on the microstructure of the composite but deliberately excludes the influence of interface interaction. For comparison, we also synthesized conventional graphene/alumina composites. To better understand the influence of interface interaction, the interface structure of the two composites was examined in situ using a high-resolution spherical aberration-corrected electron microscope, and the structural characteristics and fracture behavior were compared. Through this comparative experiment, the influence of the graphene/alumina interface interaction on the microstructure and fracture behavior of composite materials was preliminarily demonstrated. The results of this work have conceptual significance for the development of graphene/alumina composites and also have reference value for research on other two-dimensional material/ceramic composites.
Graphene modification
The multilayer graphene platelets (henceforth referred to as graphene in the text) were purchased from Tokyo Chemical Industry Co., Ltd. The thickness and width of the graphene are about 6-8 nm and 15 μm, respectively (approximately 20 to 30 layers). A measured amount of multilayer graphene platelets (marked as G below) was added to 300 ml of ultrapure water and ultrasonicated for 15 min; 1.5 g of sodium dodecylbenzenesulfonate (analytically pure, Sinopharm) was then added and ultrasonication continued for 4 hours to obtain a stable slurry (final concentration 3.0 mg l−1). Next, 2.7 g of Al(NO3)3 (aluminium nitrate, analytically pure, Sinopharm) and 2 g of oxalic acid (analytically pure, Sinopharm) were added to the slurry and stirred for 15 min to obtain a well-mixed slurry. Finally, the slurry was transferred into a polytetrafluoroethylene liner and subjected to hydrothermal reaction in a 500 ml stainless steel reactor at 105 °C for 1 hour. After the reaction, the slurry was naturally cooled to room temperature in air and transferred to a beaker. The slurry was stirred and heated on an electric heating plate (about 150 °C) until the water evaporated to dryness, forming a cracked block. The block was then ground into powder with a mortar and passed through a 100 mesh sieve. This step realizes the preparation of the primary Al(OH)x·(H2O)y coating on the surface of graphene. The modified powder was placed in an alumina crucible and sintered in a tubular vacuum furnace. Before sintering, the pipeline was purged with ultra-high-purity argon (99.9999%) for 1 hour, and sintering was then carried out in ultra-high-purity argon. The sintering temperature was 1250 °C and the holding time was 1 hour. During this process, the primary Al(OH)x·(H2O)y coating is dehydrated to form a dense nanoscale alumina ceramic coating. After sintering, the powder was ground and sieved through a 100 mesh sieve for use. Through the above steps, the modified graphene powder with a pre-coated alumina nano coating (hereinafter identified as GA) was obtained (Fig. 1).
Composite
Commercial nano α-Al2O3 powder (Shanghai Macklin Biochemical Co., Ltd, China) with a high purity of 99.99% and an average particle size of 30 nm was selected as the raw material. Nano alumina particles were mixed with modified graphene (GA) or pure graphene (G) powder by a wet mixing method. 15 For GA powder, the modified graphene was sonicated for 30 min in deionized water to obtain a GA suspension (concentration of 3 mg ml−1). For G powder, the pure graphene was dispersed in sodium dodecylbenzene sulfonate solution (10 mg ml−1) and sonicated for 4 hours to obtain a G suspension of the same concentration. The α-Al2O3 powder was added to the suspension and stirred (the mass concentration of GA/G in the final mixed powder was 0.5 wt%, 1.0 wt%, 1.5 wt%, or 2.0 wt%), and the mixture was then stirred continuously and dried at 150 °C in air. Finally, the dried powder was ground and sieved through a 100 mesh sieve. When preparing the composite sample, 4 g of mixed powder was added to a graphite sintering die with an inner diameter of 27 mm, compacted, and finally fixed with an abrasive indenter. During sintering, a hot-pressing furnace (zt-40-21y, Chen Hua, made in China) was used at 1400 °C under a vacuum atmosphere and 50 MPa for 1 hour.
Material characterization
The cross-section structure and element distribution were analyzed by field emission scanning electron microscopy (Zeiss Supra 55, Germany). Slice samples of the composite interface were prepared in situ by focused ion beam technology (Thermo Fisher Scios 2, USA), and atomic-resolution images were obtained with a high-resolution spherical aberration-corrected electron microscope (Thermo Fisher Themis Z, USA). The orientation of alumina grains at the interface was analyzed by fast Fourier transform (FFT). The chemical state of graphene in the composites was analyzed by laser micro-Raman spectroscopy (MLRM, Renishaw inVia, UK). The phase composition of the composites was analyzed using X-ray diffraction (XRD, Bruker D8 Advance, Germany). The FT-IR spectra of modified graphene and pure graphene were recorded using a Frontier FT-IR spectrometer (PerkinElmer, Inc., USA). The samples were prepared using the potassium bromide pellet pressing method (modified graphene or pure graphene at a mass ratio of 1 : 1000 to potassium bromide). All absorbance spectra were obtained by subtracting corresponding background spectra at room temperature. In transmission mode, with air as the background, a DTGS detector was used to scan the spectra of modified graphene and pure graphene mixed with potassium bromide (KBr) in compressed pellets over the range of 400 cm−1 to 4000 cm−1.
Micro mechanical properties
Nanoindentation tests were carried out on the polished surfaces of the two composites with a G200 tester (Agilent Technologies, Inc., Santa Clara, CA) equipped with a Berkovich tip (nominal radius of 200 nm). All nanoindentation tests were performed at room temperature (22 ± 0.3 °C) and room humidity (40 ± 2% RH), and each experiment under the given conditions was repeated at least 12 times to ensure reproducibility. To eliminate the effect of thermal drift on nanoindentation, thermal drift was reduced to ≤0.05 nm s−1 before each test. The indents were arranged along the full thickness direction of the samples in order to detect possible gradients in densification. Hardness (H) and elastic modulus (E) were calculated from the load-displacement curves by the procedure of Oliver and Pharr. 28
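As an illustration of the Oliver-Pharr procedure mentioned above, the sketch below extracts hardness and reduced modulus from a single unloading curve. The unloading data are synthetic placeholders, an ideal Berkovich area function (A = 24.56 hc^2, no tip-rounding calibration) is assumed, and the constants eps = 0.75 and beta = 1.034 are the usual values for a Berkovich tip; none of these values come from the study itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def unloading(h, B, hf, m):
    """Power-law form of the unloading branch, P = B (h - hf)^m."""
    return B * np.clip(h - hf, 0, None) ** m

def oliver_pharr(h_unload, p_unload, eps=0.75, beta=1.034):
    """Estimate hardness and reduced modulus (GPa) from one unloading curve (h in nm, P in mN)."""
    h_max, p_max = h_unload[0], p_unload[0]
    (B, hf, m), _ = curve_fit(unloading, h_unload, p_unload,
                              p0=(1e-2, 0.4 * h_max, 1.5))
    S = B * m * (h_max - hf) ** (m - 1)          # contact stiffness dP/dh at h_max
    hc = h_max - eps * p_max / S                 # contact depth
    A = 24.56 * hc ** 2                          # ideal Berkovich projected area (nm^2)
    H = p_max / A * 1e6                          # mN/nm^2 -> GPa
    Er = (np.sqrt(np.pi) / (2 * beta)) * S / np.sqrt(A) * 1e6
    return H, Er

# Placeholder unloading segment (descending depth in nm, load in mN).
h = np.linspace(300.0, 240.0, 30)
p = unloading(h, 6.1e-3, 120.0, 1.5)
print(oliver_pharr(h, p))
```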
Nano alumina modification of graphene surface
First, we observed the surface morphology and structural characteristics of GA using scanning electron microscopy. It can be seen from Fig. 2a that GA maintains a relatively complete graphene structure, and alumina of nanometer thickness is evenly distributed on the surface of the graphene. For comparison, electron microscope images of pristine graphene are also shown (Fig. 2c). Compared with pristine graphene, the modified graphene is thicker and has fewer wrinkles.
The surface structure of the modified graphene is shown in Fig. 2d. A relatively flat nano alumina coating is formed on the surface of the graphene, with some flocculent deposits on the coating surface. EDS analysis was conducted at different positions on the alumina coating. The surface element distribution maps (Fig. 2e and f) show that alumina is evenly distributed on the surface of the graphene. Clear alumina signals were observed in both the flocculent deposits and the darker areas. It can be seen from Fig. 2b that the cross-sectional thickness of the modified graphene is about 50 nm. At the same time, we analyzed the phase structure of the modified graphene using XRD and found many weak α-alumina characteristic peaks next to the strongest graphene characteristic peak (Fig. 3a). This indicates that there is a very thin alumina coating on the surface of the graphene, resulting in very low diffraction intensity from the alumina crystal layer. It is worth noting that these weaker signals match the peaks of the α-alumina standard card, indicating that even at nanoscale thickness the coating retains good crystallinity. Interestingly, after removing the graphene peak (Fig. 3b), a strong peak appeared at 54.2° in the alumina signal, with a significantly stronger intensity than the other diffraction peaks, indicating a clear preferred orientation of the nano alumina coating on the surface of graphene. Analysis showed that the preferred orientation plane is the (10 17) plane. [30][31][32] The structure of GA was further analyzed by in situ laser micro-Raman spectroscopy, and the results are shown in Fig. 4. For comparison, the Raman spectra of the initial graphene are also given. It can be seen from Fig. 4a that after the surface modification of graphene, the intensity of the D peak did not increase significantly and the G peak retained a sharp peak shape, indicating that graphene maintained a relatively complete structure during the modification process and that the defect concentration did not increase significantly. 33 After the modification of graphene, the position of its G peak lies between 1559.580 cm−1 and 1567.702 cm−1. Compared with pristine graphene, the G peak position of GA shows both blue shifts and red shifts, that is, the peak position shifts in different directions at different positions on the graphene. The blue shift of the G peak corresponds to compressive stress in the 2D plane of graphene, while the red shift corresponds to tensile stress. Therefore, we conclude that when graphene is modified, the nano alumina coating formed on its surface produces two different interfacial stresses. XRD analysis shows that an α-phase aluminum oxide coating is formed on the surface of graphene. From the signals in the spectrum, it can be judged that the aluminum oxide on the surface of graphene is multi-oriented. When alumina grains with different orientations form interfaces with graphene, tensile or compressive stress is generated in the graphene owing to differing lattice mismatch, which causes the Raman peak position of graphene to shift in different directions.
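The peak-shift reasoning above can be illustrated with a small fitting routine: a Lorentzian is fitted to the G band and the fitted position is compared with a pristine-graphene reference to classify the in-plane stress as compressive (blue shift) or tensile (red shift). The spectrum below is synthetic, the reference position of 1564 cm−1 is taken from the text, and converting a shift into a stress magnitude would require a calibration coefficient that is not given here.

```python
import numpy as np
from scipy.optimize import curve_fit

G_REF = 1564.0  # G-band position of the pristine graphene reference (cm^-1)

def lorentzian(x, x0, gamma, amp, base):
    """Single Lorentzian peak on a flat background."""
    return base + amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

def g_band_shift(shift_cm, intensity):
    """Fit the G band and report its position relative to the reference."""
    p0 = (shift_cm[np.argmax(intensity)], 10.0, intensity.max(), intensity.min())
    (x0, gamma, amp, base), _ = curve_fit(lorentzian, shift_cm, intensity, p0=p0)
    delta = x0 - G_REF
    sense = "blue shift (compressive)" if delta > 0 else "red shift (tensile)"
    return x0, delta, sense

# Synthetic G-band region of a modified-graphene spectrum (placeholder data).
x = np.linspace(1500, 1650, 400)
y = lorentzian(x, 1581.0, 12.0, 1000.0, 50.0) + np.random.normal(0, 5, x.size)
print(g_band_shift(x, y))
```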
Table 1 shows the ID/IG ratio before and after graphene modification. It can be seen that the ID/IG value of GA is slightly lower than that of G. This indicates that during graphene modification the defect concentration did not significantly increase and the graphene structure was not destroyed.
Fig. 5 shows the results of Fourier transform infrared spectroscopy before and after graphene modification. From Fig. 5a, it can be seen that pure graphene has a strong absorption peak at 1110 cm−1, corresponding to the vibration of the C-O-C bond (epoxy). 34 These epoxy bonds are introduced during the preparation process of graphene. The peaks at 2920 cm−1 and 2850 cm−1 represent the symmetric and asymmetric vibrations of the C-H bond, respectively. From Fig. 5b, it can be seen that after graphene modification the signals of the C-O-C and C-H bonds disappear. This is because graphene is thermally reduced during the modification process, and the ether-bond oxygen in the graphene structure is desorbed. Moreover, hydrogen atoms on graphene are also thermally desorbed. It is worth noting that a broad peak appeared in the range of 500 to 900 cm−1, which corresponds to the stretching vibration of the Al-O-Al bond, 35 indicating that graphene has been successfully modified and aluminum oxide covers the surface of graphene.
Structure and mechanical properties of composite
Fig. 6 shows the cross-section structure of G/A/A composites obtained by sintering modified graphene and nano alumina at different contents. We also show the cross-section structure of the G/A composite composed of graphene and alumina.
Comparing Fig. 6a and c, it can be seen that in G/A/A, graphene has better flatness in the alumina matrix. Additionally, the large number of graphene folds seen in G/A did not appear in G/A/A (Fig. 6b and d). These folds may be caused by the wrinkling of graphene itself or by the extrusion of nano alumina powder during hot-pressing sintering. From the previous section, it can be seen that after the preparation of the nano alumina coating on the surface of graphene, the rigidity of the graphene platelets is enhanced and wrinkles are significantly reduced. Therefore, during the later sintering process, the modified graphene retained good flatness. This flatter graphene distribution may be of great significance for constructing anisotropic properties. Fig. 7 shows the cross-sectional element distributions of the two types of composite materials. Fig. 7a and d are consistent with the scanning electron microscopy observations mentioned earlier.
Fig. 8 shows the XRD spectra of the two composite materials at different doping concentrations. It can be seen that the modification of graphene has no significant effect on the phase composition of the two composite materials. Both composite materials exhibit a very pure α-Al2O3 phase. The peak corresponding to graphene (002) is marked with black squares in the graph. From Fig. 8a, it can be seen that as the doping concentration of graphene or modified graphene increases, the signal of the (002) peak becomes stronger. This indicates that graphene and modified graphene were not destroyed during the sintering of the composite material, and that doping did not alter the phase of the alumina matrix.
Fracture cross-section photos of G/A/A, G/A, and pure alumina are shown in Fig. 9 and 10. From Fig. 9a, c and e, it can be seen that the fracture behavior of the G/A/A composites is similar to that of pure alumina, with two fracture modes: transgranular fracture and intergranular fracture. The areas of transgranular fracture are marked by red circles in the figure. From Fig. 9b, d and f, the G/A composites are characterized mainly by intergranular fracture. These results indicate that the grain boundary stress distributions of the G/A/A and G/A composites may differ. G/A is dominated by intergranular fracture, and its fracture behavior is significantly different from that of pure alumina, indicating that the grains around graphene may be subjected to tensile stress along the 2D plane of the graphene; the grain boundary energy becomes higher, making grain boundary dissociation more likely and favoring the formation of new surfaces that lower the energy of the system. However, G/A/A and pure alumina did not exhibit such effects.
To study this effect, the structural characteristics of graphene in G/A/A and G/A were examined using in situ Raman analysis. The results are shown in Fig. 11. Graphene in both composites retained a relatively complete structure, and the intensity ratio of the D peak to the G peak did not increase significantly, indicating that the graphene structure was not seriously damaged during the sintering of the composites. In all composite materials, the 2D peaks of graphene exhibit the characteristic shape of multilayer graphene. 36 It is worth noting that the G peak position of graphene shows a significant blue shift in both composites.
The G peak position of G/A moves from 1564 cm−1 in pure graphene to 1582-1584 cm−1, and the G peak position of G/A/A moves to 1580-1582 cm−1. The blue shift of G/A is slightly stronger than that of G/A/A. The ranges of G peak movement are very close, indicating that graphene is subjected to compressive stress in the 2D plane in both composites and that the interfacial stress of graphene is similar in the two composites. In the composite, graphene generates tensile stress along the graphene/alumina interface in the surrounding alumina. Therefore, the fracture behaviour of the G/A composite is significantly different from that of pure alumina, with intergranular fracture being the main mode. Why, then, do the G/A/A composites still maintain a fracture mode similar to pure alumina? We speculate that the nano alumina coating on the surface of GA transforms into an interface transition layer in the composite, which buffers the influence of the interface stress on the surrounding alumina matrix. Table 2 shows the ID/IG ratios of the composite materials. The ID/IG ratio of the modified composite G/A/A is significantly lower than that of composite G/A at lower doping ratios (concentration ≤ 1.5 wt%). Although the ID/IG ratio of G/A/A is higher than that of G/A when the content reaches 2.0 wt%, it is still significantly lower than the ID/IG values of G/A at the other concentrations. The magnitude of the ID/IG value reflects the variation in graphene defect concentration. The aluminum oxide coating on the surface of the modified graphene effectively protects the graphene during the sintering of the composites, reducing the defects generated during sintering and better maintaining the two-dimensional honeycomb structure.
Fig. 12 shows the elastic modulus test results of the two composite materials in different regions and at different depths. The average elastic modulus of composite G/A, made with pure graphene, is 249.3 GPa, while the average elastic modulus of the modified composite G/A/A is 355.6 GPa; the elastic modulus of G/A/A is significantly higher than that of G/A. Fig. 13 shows the hardness test results. Similar to the elastic modulus, the micro-hardness of G/A/A is significantly higher than that of G/A: the average micro-hardness of G/A and G/A/A is 11.7 GPa and 19.3 GPa, respectively. Fig. 14 shows the load-displacement curves of the two materials. The penetration depth obtained for composite G/A/A (Fig. 14b) is smaller than for G/A (Fig. 14a), and the maximum load of G/A/A is also significantly greater than that of G/A, pointing towards better compactness and homogeneity of the microstructure. 37 The two-dimensional size of the graphene flakes exceeds 10 μm, and their random dispersion in the surface and near-surface regions of the alumina resulted in significant differences in the nanoindentation data at different positions on the surfaces of the two composites. However, from a statistical perspective, the various parameters of composite G/A/A are superior to those of G/A. Comparing the elastic modulus and hardness of G/A and G/A/A, it was found that graphene modification significantly improved some of the mechanical properties of the composite, which may be related to the adjustment of the interface interaction between graphene and the alumina matrix by the alumina coating. The elastic modulus of ceramic composites is related to their density. According to the literature, 5 graphene doping can increase the Young's modulus of alumina, but this enhancement gradually weakens as the graphene doping continues to increase. This is because, as the graphene content increases, more pores and voids may appear in the composite due to interface interactions or aggregation of graphene, which lowers the elastic modulus of the composite. In the modified composite G/A/A, owing to the aluminum oxide transition layer on the surface of graphene, the bonding between the aluminum oxide matrix and graphene is tighter, suppressing some interfacial interactions. Defects in graphene may lead to localized stress at the interface, exacerbating pores and voids in the composite. According to the ID/IG ratio from Raman spectroscopy, the modified graphene is better protected during composite sintering, and its defect density is significantly lower than that of pure graphene; compared with composite G/A, there are fewer defects such as pores and voids. It can also be seen from Fig. 6d that there is significant graphene aggregation in the G/A composite, which leads to more defects, whereas the modified graphene exhibits less aggregation or folding in the composite. We speculate that this is why the elastic modulus of the G/A/A composite is significantly better than that of G/A. The hardness of composite materials is related to grain size, and both types of composites exhibit significant grain-size suppression. However, the aggregation of graphene can reduce the density of composites and also affect their hardness. 5 Therefore, the hardness of the modified composite G/A/A is superior to that of G/A.
Interface structure of composite
To confirm the effect of the alumina nanocoating, interface samples of the G/A/A and G/A composites were prepared in situ using FIB technology, and the interface structure was observed using high-resolution spherical aberration-corrected electron microscopy.
The results are shown in Fig. 15. From the high-resolution atomic image in Fig. 15a, it can be seen that in the traditional graphene/alumina composite, graphene and alumina bind very tightly and no obvious transition layer is found. In this type of material, the matrix alumina grains directly form an interface with graphene, and the growth of the alumina grains is significantly affected by the interface effect. According to the analysis above, graphene is subjected to compressive stress along the two-dimensional plane in the alumina matrix, and the surrounding alumina grains are simultaneously subjected to tensile stress. Obviously, in the traditional graphene/alumina composite (G/A), this tensile stress acts directly on the alumina matrix, causing grain boundary relaxation in the G/A material and leading to intergranular cracking under external forces. Fig. 15b shows the selected-area electron diffraction results of the alumina layer near the interface in the G/A composite, indicating that the grains at this location form an interface with graphene via the (11 20) crystal plane (Fig. 15 identifies the orientation of the crystal plane towards graphene).
Fig. 15c shows a high-resolution atomic image of the interface of the modified graphene/alumina composite (G/A/A). A clear transition layer can be observed in G/A/A, consistent with the speculation above. The thickness of the transition layer is about 15 nm. A photo at higher magnification is shown in Fig. 15d. The atomic arrangement in the alumina matrix region is very similar to that in the transition layer, and the lower side of the transition layer is tightly bound to the graphene layer. The atomic image of another interface in the G/A/A composite is shown in Fig. 15g, where a clear transition layer is also observed. These observations indicate that the nano alumina coating pre-prepared on the surface of graphene is retained, not completely destroyed, during the subsequent sintering of the composite; alternatively, the nano alumina coating can be considered to transform into a transition layer during the sintering process. This transition layer can block or buffer the stress interaction between graphene and the matrix alumina. In G/A/A, the matrix alumina is only subjected to stress from the transition layer and is not directly subjected to interfacial tensile stress from graphene, thus retaining a fracture behavior similar to that of pure alumina. Fig. 15e, f and h, i show electron diffraction patterns at different positions of the interfaces. From the electron diffraction patterns at zones 3 and 5, it can be seen that the transition layer in G/A/A forms interfaces with graphene via the (2 110) and (1103) crystal planes, respectively. Interestingly, the orientation of the nearby alumina matrix layer is consistent with that of the transition layer (compare Fig. 15e and f, as well as Fig. 15h and i), indicating that the atomic structure of the transition layer influences the structure of the alumina matrix during composite sintering. These experimental observations clearly indicate the influence of the graphene/alumina interface interaction on the structure of the composite. When the interface transition layer isolates the direct interaction between graphene and the alumina matrix, the stress effect at the interface no longer has a significant impact on the fracture behavior of the composite. This method of introducing an interface transition layer provides a new approach for tailoring the structure of graphene/alumina composites and even other two-dimensional-material-reinforced composites.
Conclusions
(1) A layer of crystalline alumina coating with alpha phase and thickness of tens of nanometers was prepared on the surface of graphene.
(2) The structure of graphene was not seriously damaged during the modification process, and graphene was subjected to tensile or compressive stress along the 2D plane.
(3) The fracture behavior of the modified graphene/alumina composites is similar to that of pure alumina, but significantly different from that of the traditional graphene/alumina composites.
(4) According to the Raman spectroscopy results, in graphene/alumina composites the alumina is subjected to tensile stress along the 2D plane of graphene, so the fracture process is mainly intergranular fracture.
(5) The elastic modulus and hardness of composite material G/A/A are higher, while its microstructure has better density and uniformity.
(6) In situ HRSEM observation showed that there is a transition layer of alumina in the modified graphene/alumina composite. Although the stress effect at the interface in the modified graphene/alumina composite is the same as in the traditional graphene/alumina composite, the blocking or buffering effect of the transition layer prevents this stress from acting on the surrounding alumina matrix, so the fracture mode of the modified graphene/alumina composite is similar to that of pure alumina. (7) The above experimental observations clearly indicate the influence of the graphene/alumina interface interaction on the structure of composite materials. In the graphene/alumina composite system, we should consider not only the two-dimensional sheet structure of graphene and the property changes brought about by its high strength, but also the influence of the interface interaction on the structure and properties of the material.
Fig. 1
Fig. 1 Schematic diagram of the experiment for growing alumina nanocoatings on the surface of graphene.
Fig. 2
Fig. 2 Surface morphology and elemental distribution of GA ((a) the morphology of GA, (b) side structure of GA, (c) the morphology of G, (d) surface of GA and (e) and (f) are the elemental distributions at each point in (d), respectively).
Fig. 3
Fig. 3 The XRD results of modified graphene (GA). ((a) Initial spectrum; (b) because the strong graphene signal masks the alumina signal, a spectrum with the graphene signal removed is also provided).
Fig. 4
Fig. 4 Raman spectroscopic results of GA and G ((a) GA, (b) G, different curves come from different detection positions of the same sample).
Fig. 6
Fig. 6 Cross-section structures of the G/A/A and G/A composites sintered with different graphene contents.
Fig. 7
Fig. 7 EDS results of the two composite materials (cross-section, G/GA content 1.5 wt%. (a)-(c) represent the distribution of Al, O, C in composite G/A, respectively; (d)-(f) represent the distribution of Al, O, C in composite G/A/A, respectively).
Fig. 9
Fig. 9 Fracture cross-sections of the two composite materials with different graphene contents (oblique side view; areas of transgranular fracture are marked by red circles. (a), (c) and (e) represent modified composites with GA contents of 0.5 wt%, 1.5 wt%, and 2.0 wt%, respectively; (b), (d) and (f) represent composites with G contents of 0.5 wt%, 1.5 wt%, and 2.0 wt%, respectively).
Fig. 13
Fig. 13 Hardness of the two composite materials ((a) G/A, (b) G/A/A, 1.5 wt%; test results from different locations are displayed).
Fig. 14
Fig. 14 The load-displacement curves of the two composite materials ((a) G/A, (b) G/A/A, 1.5 wt%; test results from different locations are displayed).
Fig. 15
Fig. 15 High resolution spherical aberration electron microscopy photos and selected area FFT transformation of the interface structure ((a) interface structure of composite G/A; (b) the electron diffraction pattern at zone 1 in (a); (c), (d) and (g) interface structure of composite G/A/A; (e), (f) and (h), (i) are the electron diffraction patterns at zone 2, 3 and zone 4, 5, respectively).
Table 1
The ID/IG ratio for G and GA
Table 2
The ID/IG ratio for different composite materials
"year": 2024,
"sha1": "39a4578681fb25cd3ebbaf00c9657381f2a04047",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "39a4578681fb25cd3ebbaf00c9657381f2a04047",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparative Metagenomic Analysis of Soil Microbial Communities across Three Hexachlorocyclohexane Contamination Levels
This paper presents the characterization of the microbial community responsible for the in-situ bioremediation of hexachlorocyclohexane (HCH). Microbial community structure and function were analyzed using 16S rRNA amplicon and shotgun metagenomic sequencing for three sets of soil samples. The three samples were collected from an HCH dumpsite (450 mg HCH/g soil) and comprised HCH/soil ratios of 0.45, 0.0007, and 0.00003, respectively. Certain bacterial (Chromohalobacter, Marinimicrobium, Idiomarina, Salinosphaera, Halomonas, Sphingopyxis, Novosphingobium, Sphingomonas and Pseudomonas), archaeal (Halobacterium, Haloarcula and Halorhabdus) and fungal (Fusarium) genera were found to be more abundant in the soil sample from the HCH dumpsite. Consistent with the phylogenetic shift, the dumpsite also exhibited a relatively higher abundance of genes coding for chemotaxis/motility, chloroaromatic degradation and HCH degradation (lin genes). Reassembly of a draft pangenome of Chromohalobacter salexigens sp. (∼8X coverage) and 3 plasmids (pISP3, pISP4 and pLB1; 13X coverage) containing lin genes/clusters also provides evidence for the horizontal transfer of HCH catabolism genes.
Introduction
From the early 1950s to the late 1980s, hexachlorocyclohexane (HCH) was one of the most widely used pesticides for agricultural crops worldwide. HCH is chemically synthesized by the photochlorination of benzene. The synthesized product is called technical-HCH (t-HCH) and consists of five isomers, namely α- (60-70%), γ- (12-16%), β- (10-12%), δ- (6-10%) and ε- (3-4%) [1]. The insecticidal property of HCH is contributed mainly by γ-HCH (also known as lindane) [2]. The process of extracting the γ-HCH isomer from t-HCH generates an HCH waste (consisting of α-, β- and δ-HCH) that is 8 times the amount of lindane produced [1]. In the last 60 years, 600,000 tons of lindane have been produced, thereby generating an HCH waste (referred to as HCH-muck) of around 4-7 million tons [3][4]. Inappropriate waste-disposal techniques and the indiscriminate use of this pesticide have created a global environmental contamination issue [1]. This environmental contamination is mainly associated with the physicochemical properties of the HCH isomers, which are quite different from those of other pollutants [5]. The axial and equatorial positions of the chlorine atoms around the cyclohexane ring govern the persistence of these HCH isomers in the environment. Over the years, the build-up of huge stockpiles of HCH waste and their leaching into the environment through air and water have marked HCH as a problematic pollutant [3]. A primary concern is the human health risk associated with the carcinogenic [6], endocrine-disrupting and neurotoxic [6] properties of the HCH isomers. In May 2008 signatories of the Stockholm Convention listed α-, β- and γ-HCH among the recognized persistent organic pollutants (UNEP 2009).
Sites heavily contaminated with HCH have been reported from Germany, Japan, Spain, The Netherlands, Portugal, Greece, Canada, the United States, Eastern Europe, South Africa and India [5]. By the 1970s and 1980s the usage and production of t-HCH and lindane were banned in most industrialized countries. In India the use of t-HCH was introduced in the 1950s and continued until 1997. However, even after 1997, restricted production and use of lindane continued [7]. In the last 15 years 7000-8000 tons of lindane have been manufactured and the corresponding HCH-muck improperly disposed of at several locations [2] (called HCH dumpsites). These HCH dumpsites form ideal experimental sites to understand how microbial communities respond to HCH pollution.
Owing to the global presence of these open HCH sinks, several early efforts focused on developing efficient bioremediation technologies [8][9][10][11]. As a first step, the genetics, biochemistry and physiology of microbial degradation of HCH isomers, especially of γ-HCH, have been studied in detail in sphingomonads. For example, the genetic pathways responsible for the degradation of γ-HCH, the lin genes (lin pathway), have been characterized from Sphingobium japonicum UT26 [12] and Sphingobium indicum B90A [5,13]. In general, the γ-HCH degradation pathway is divided into upper and lower pathways. The upper pathway of γ-HCH proceeds through dehydrochlorination (linA), haloalkane dehalogenation (linB) and dehydrogenation (linC/linX) in a sequential manner, leading to the formation of 2,5-dichlorohydroquinone. 2,5-Dichlorohydroquinone (lower pathway) is further converted to succinyl-CoA and acetyl-CoA by the action of a reductive dechlorinase (linD), a ring-cleavage oxygenase (linE), a maleylacetate reductase (linF), an acyl-CoA transferase (linG, H) and a thiolase (linJ). By and large, the expression of lin genes in these strains is heterogeneous in nature, as the genes of the upper pathway are expressed constitutively (linA, linB and linC) [14] while others (linE and linD) are induced via transcription factors (linR) [14][15]. In addition to their primary role in the degradation of γ-HCH, linA and linB play an important role in the degradation of α-, β-, δ- and ε-HCH, and they also degrade the intermediates that are constitutively generated by this pathway. Sequence differences in the primary LinA and LinB enzymes of the pathway play a key role in determining their ability to degrade the different isomers. These studies formed the basis of field trials in which Sphingobium indicum B90A was used as the primary bioremediation agent; however, these efforts have had limited success [9]. While one organism may play a dominant role in the degradation process, the associated microbial species in the consortium may also augment its capability. Therefore, characterizing the microbial community structure at HCH dumpsites should be a priority.
Here we present the results of the first detailed investigation of the unexplored bacterial, archaeal and fungal diversity that exists in the soil of an HCH dumpsite. In addition to the taxonomic characterization, changes in functional dynamics are also studied. The comparative gene-centric analysis performed in this study clearly indicates that the marked differences in the microbial community are associated with changes in functional diversity, especially related to membrane transport, chemotaxis/motility and catabolic genes (lin genes), driven by the presence of HCH isomers at the dumpsite.
Ethics Statement
No specific permits were required for the described field studies.
Selection of HCH Contamination, Soil Sampling and Total DNA Extraction
To study the shift in microbial community structure across increasing HCH contamination, we collected bulk soil samples from an HCH dumpsite situated at Ummari village, Lucknow [7] (27°00′24.7″N, 81°08′57.8″E), along with two further locations situated 1 km (27°00′31.1″N, 81°08′54.7″E) and 5 km (27°00′59.5″N, 81°08′36.0″E) away from the dumpsite. The latter two soils were used as references to assess the changes in the microbial community under HCH stress at the dumpsite. Sampling was performed in September 2010, taking seasonal crop rotation into account (the land was not being processed for farming). Since the sampling sites represent physicochemically different soils, from uncultivable (HCH dumpsite, 450 mg/g) to agriculturally managed (a small segment at the 5 km site), subsamples (10 subsamples per composite mix; 500 g soil per subsample) were collected at a depth of 10-20 cm, and coordinates with any type of vegetation (natural or agricultural) were strictly avoided. Subsamples were transported on ice (4 °C) and stored at −80 °C until processed for HCH residue estimation and physicochemical analysis using methods described earlier [7]. DNA from each subsample was isolated using the PowerMax Soil DNA Isolation Kit (MO BIO, USA). Equal amounts (200 mg) of environmental DNA from each subsample (10 subsamples per composite pool) were mixed to form a composite genetic pool representing the total DNA composition for each site. DNA purity and concentration were analyzed using a NanoDrop spectrophotometer (NanoDrop Technologies Inc., Wilmington, DE, USA). Isolated total DNA was stored at −20 °C until processed for microbial diversity and sequence analyses.
Sequence Data Generation
We performed targeted amplicon and shotgun pyrosequencing of the environmental DNA using Titanium protocols (Roche, Indianapolis, IN, USA). Roche 454 analysis software version 2.0 was used to analyze the sequences. Tag-Encoded FLX Amplicon Pyrosequencing (TEFAP) was performed as described earlier [16] using a one-step PCR with a mixture of Hot Start and HotStar high-fidelity Taq polymerases. For shotgun sequencing of the environmental DNA samples, a full picotitre plate was run for each shotgun pyrosequencing library representing an individual soil gradient. A total of 1.2 gigabases of nucleotide sequence was generated (Table 1). Raw reads were processed for various quality measures using the SeqTrim pipeline [17]. Reads were preprocessed with the following parameters: minimum length = 250 bp, minimum quality score = Phred Q20 average; reads with ambiguous bases (including N) were not used for further analysis.
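The read filter described above (minimum length 250 bp, mean Phred quality of at least Q20, no ambiguous bases) can be expressed as a short FASTQ filter. The snippet below is a simplified stand-in for the SeqTrim pipeline actually used, and the file name "reads.fastq" is hypothetical.

```python
from Bio import SeqIO  # Biopython

MIN_LEN, MIN_MEAN_Q = 250, 20

def passes(record):
    """Keep reads >= 250 bp, mean Phred >= Q20, and with no ambiguous bases."""
    quals = record.letter_annotations["phred_quality"]
    return (len(record) >= MIN_LEN
            and sum(quals) / len(quals) >= MIN_MEAN_Q
            and "N" not in record.seq.upper())

kept = [r for r in SeqIO.parse("reads.fastq", "fastq") if passes(r)]
SeqIO.write(kept, "reads.filtered.fastq", "fastq")
print(f"retained {len(kept)} reads")
```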
Microbial Diversity Analysis
We estimated microbial diversity across increasing HCH contamination using three different methods: TEFAP, metagenomic SSU rRNA typing and direct comparison of EGTs (Environmental Gene Tags) to reference genomes. For the bacterial, archaeal and fungal diversity analysis by the TEFAP [16] method, a total of 6 individual primer sets were utilized (Table S1). Following sequencing, all failed sequence reads, low-quality sequence ends (below a Phred Q20 average) and tags/primers were removed, and reads <250 bp were discarded. The resulting sequences were then depleted of any non-bacterial/archaeal/fungal ribosomal sequences and chimeras using custom software [18] set at default parameters. For the archaeal analysis, in addition to the above steps, sequences with greater identity to bacterial 16S rRNA gene sequences were also deleted. Unique reads were compared by BLASTN [19] (E-value cutoff of 1×10-5, minimum coverage 90% and 88% identity) against the GreenGenes [20] (16S rRNA) and SILVA [21] (SSU and LSU) databases. The resulting outputs were compiled and data reduction analysis was performed using a .NET and C# analysis pipeline [22]. In the second approach, SSU rRNAs from the shotgun metagenomic sequences were binned from each metagenome using BLASTN [19] (E-value cutoff of 1×10-10, minimum coverage 90% and 88% identity) against the rRNA databases mentioned above. OTU (Operational Taxonomic Unit) status was assigned to sequences above 300 bp that were similar to reference sequences (>95%). OTUs were clustered at a 97% similarity criterion using UCLUST [23]. Candidate OTUs were used to assign phylogeny using the RDP [24] scheme at an 80% confidence value [25]. The relative abundance matrix (genus level) of the metagenomes was used for statistical analysis. In the third approach, taxonomic profiles were constructed by mapping metagenomic reads against the NCBI genome database using NBC [26] (Naive Bayesian Classifier) at an N-mer length of 12.
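A genus-level relative abundance matrix of the kind used for the downstream statistics can be built from per-read genus assignments as sketched below; the genus call lists are invented placeholders standing in for the classifier output, and the sample names merely echo the three sampling distances.

```python
from collections import Counter

def relative_abundance(assignments):
    """Convert a list of per-read genus calls into relative abundances."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return {genus: n / total for genus, n in counts.items()}

# Placeholder genus calls for the three metagenomes (dumpsite, 1 km, 5 km).
samples = {
    "dumpsite": ["Chromohalobacter", "Marinobacter", "Sphingobium", "Chromohalobacter"],
    "1km":      ["Bacillus", "Sphingobium", "Pseudomonas"],
    "5km":      ["Bacillus", "Streptomyces", "Pseudomonas", "Bacillus"],
}

matrix = {name: relative_abundance(calls) for name, calls in samples.items()}
genera = sorted({g for profile in matrix.values() for g in profile})
for genus in genera:
    row = [f"{matrix[s].get(genus, 0.0):.2f}" for s in samples]
    print(genus, *row)
```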
Qualitative and Quantitative Measurements of Phylogenetic Diversity
For each metagenome, a subset of 1000 randomly selected candidate OTUs was used to construct a relaxed neighbor-joining tree using Clearcut [27] with Kimura correction. To understand the phylogenetic correlation between the sampled soil cohorts, distance matrices were constructed from each phylogeny and a Mantel test (10,000 permutations, two-tailed p-value) was performed using PASSaGE 2 [28]. Additionally, unweighted UniFrac [29] was run on the phylogenetic tree (1000 permutations) constructed after combining the candidate OTUs from each metagenome. Rarefaction plots and non-parametric diversity indices were calculated using EstimateS [30]. The statistics utilized are not based upon biological replication but instead upon the technical replication provided by utilizing multiple diversity assays. Thus, we present an observational evaluation of the 3 samples analyzed using a variety of diversity assays and metagenome sequencing data from samples with three different HCH contamination levels.
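The Mantel test used here amounts to a permutation test on the correlation between the entries of two distance matrices. A minimal version is sketched below, assuming the matrices are already in square symmetric form; the toy matrices are placeholders, and the published analysis used PASSaGE 2 rather than this code.

```python
import numpy as np

def mantel(d1, d2, n_perm=10000, seed=0):
    """Permutation Mantel test on two square distance matrices (two-tailed)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)            # upper-triangle entries only
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits, n = 0, d1.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)
        d2p = d2[np.ix_(perm, perm)]              # permute rows and columns together
        if abs(np.corrcoef(d1[iu], d2p[iu])[0, 1]) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy 5 x 5 distance matrices (placeholders for the OTU-derived matrices).
rng = np.random.default_rng(1)
a = rng.random((5, 5)); a = (a + a.T) / 2; np.fill_diagonal(a, 0)
b = a + rng.normal(0, 0.05, a.shape); b = (b + b.T) / 2; np.fill_diagonal(b, 0)
print(mantel(a, b, n_perm=999))
```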
Characterization of Metagenomic Gene Content
Metagenomic sequences were annotated using an evidence-based annotation approach [31]. Sequences were compared by BLASTX [19] against several protein databases (COGs, Pfam, SWISS-PROT/TrEMBL and KEGG) at an E-value cutoff of 1×10-5. Predicted genes were tabulated and classified into functional categories from lower orders (individual genes) to higher orders (cellular processes).
The relative abundance of each gene was calculated by dividing the similarity hits for the individual gene by the total hits against the given database. Higher functional orders enriched in any of the metagenomes were later analyzed at finer scales. To understand gradient-specific functional traits, endemic metagenomic reads were binned using MegaBLAST [32] (reads of one metagenome compared against the combination of the remaining metagenomes).
Community Potential and Participation for HCH Degradation
Sequences for well-characterized HCH-degrading genes (Table S8) were downloaded from NCBI (dated 11 March 2011) and utilized as templates for a DNA-Seq-based analysis performed using ArrayStar (DNASTAR) at default settings. Relative expression was calculated for each metagenome according to the manufacturer's guidelines, followed by statistical analysis (two-sided Fisher's exact test and Storey's FDR method). Additionally, metagenomic reads representing any of the lin genes were binned (BLASTN at an E-value of 1×10-10 and 85% query coverage) and reference-assembled on the ORF of the respective lin gene. As mentioned above, protein-guided DNA assembly for each lin gene was performed using Transpipe [33]. The relative abundance of the lindane degradation pathway was quantified for each HCH gradient by comparing the extracted lin gene sequences against KEGG [34].
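The gene-level enrichment comparison (a two-sided Fisher's exact test per gene, followed by multiple-testing correction) can be sketched as below. The read counts are invented, and a Benjamini-Hochberg adjustment is used here only as a simple stand-in for Storey's FDR method.

```python
from scipy.stats import fisher_exact

def enrichment(counts_a, total_a, counts_b, total_b):
    """Two-sided Fisher's exact test per gene, then Benjamini-Hochberg FDR."""
    genes = sorted(counts_a)
    pvals = []
    for g in genes:
        table = [[counts_a[g], total_a - counts_a[g]],
                 [counts_b.get(g, 0), total_b - counts_b.get(g, 0)]]
        pvals.append(fisher_exact(table, alternative="two-sided")[1])
    # Benjamini-Hochberg adjustment (stand-in for Storey's q-values).
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # ascending p-values
    qvals = [0.0] * m
    prev = 1.0
    for k in range(m - 1, -1, -1):                     # from largest p to smallest
        i = order[k]
        prev = min(prev, pvals[i] * m / (k + 1))
        qvals[i] = prev
    return dict(zip(genes, zip(pvals, qvals)))

# Hypothetical read counts mapped to lin genes in two metagenomes.
dump = {"linA": 42, "linB": 35, "linC": 18, "linD": 7}
ref  = {"linA": 3,  "linB": 2,  "linC": 4,  "linD": 1}
print(enrichment(dump, 100000, ref, 120000))
```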
Microdiversity Analysis of the Environmental Genomes
Phylogenetic reports created by 16S rRNA pyro-tag analysis, metagenomic SSUs and EGT comparison with known genomes revealed the enrichment of genera such as Marinobacter, Chromohalobacter, Sphingomonas, Sphingopyxis and Novosphingobium (Fig. 1A and Fig. S1) along with increasing HCH contamination. Since most of these genera are genetically and functionally selected to degrade or tolerate HCH [5], we further focused our assembly efforts on assessing their genomic and plasmid microdiversity. All metagenomic reads were aligned against the reference genomes and plasmids (Table S9), and recruitment plots were generated using MUMMER [35] as explained earlier [36]. Metagenomic reads were assembled into contigs using velvet_0.5.01 [37] (k-mer length = 31). Contigs were compared by BLASTX [19] (E-value = 10^-5) against the NCBI nr (non-redundant) database. Phylogenetic identity was assigned to the contigs using MEGAN [38] at default parameters. The largest clusters were grown by recruiting singlets using the Scarf algorithm [39] with the following parameters: -g x -x Tc T -l 6-M T -n 2. Coverage was calculated for each contig by aligning the metagenomic reads back to the contigs using the Mosaik aligner (www.bioinformatics.bc.edu) at default parameters. Reference genome sequences (Table S9) were shredded into 3 kb long pseudo-contigs and concatenated with the metagenomic contigs. The pooled contigs (reference genomes and metagenome) were then clustered based upon their tetranucleotide frequency correlations as explained previously [40]. After examining the length distribution of the contig pool, the following parameters were optimized for tetra-ESOM analysis: minimum contig length = 1800 bp and maximum window size = 3500 bp. To maximize the use of the data, contigs were further binned using %G+C, as %G+C varies between species but remains highly constant within a species [41]. Contigs were submitted to the RAST [42] server for gene calling and annotation.
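The contig binning above relies on tetranucleotide-frequency signatures (clustered with tetra-ESOM) plus %G+C. As an illustration of the underlying signal only, and not of the ESOM clustering itself, the sketch below computes the 256-dimensional tetranucleotide frequency vector of a contig and a Pearson correlation between two contigs; contigs from the same genome typically correlate more strongly than unrelated ones.

```python
# Tetranucleotide frequency vectors and their correlation (illustrative sketch).
from itertools import product
import numpy as np

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {t: i for i, t in enumerate(TETRAMERS)}

def tetra_freq(seq):
    """Normalised 256-long tetranucleotide frequency vector of a sequence."""
    counts = np.zeros(len(TETRAMERS))
    seq = seq.upper()
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in INDEX:                 # skips windows containing N, etc.
            counts[INDEX[kmer]] += 1
    return counts / counts.sum() if counts.sum() else counts

def tetra_correlation(seq_a, seq_b):
    """Pearson correlation between the tetranucleotide profiles of two contigs."""
    return float(np.corrcoef(tetra_freq(seq_a), tetra_freq(seq_b))[0, 1])

# %G+C can be used as an additional, simpler filter alongside this signature.
```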
Statistical Analysis
Identification of genes or subsystems enriched between any two metagenomes was done using a two-sided Fisher's exact test with Storey's FDR method for multiple-test correction using STAMP [43]. Genes or subsystems were considered enriched if the p-value was significant in the pairwise comparison of metagenomes. A principal component analysis on the correlation matrix with 1,000 bootstrap replicates was performed to compare the taxonomic profiles generated by 454 pyro-tagging of the 16S rRNA gene, metagenomic SSU rRNA typing and direct comparison of EGTs with reference genomes. Two-way clustering was also performed on the normalized genus versus metagenome sample matrix (relative abundance from each taxonomy prediction method), with some changes in parameters, following methods explained elsewhere [44].
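Purely as an illustration of the enrichment test described above (the study used STAMP), the sketch below runs a two-sided Fisher's exact test per feature between two metagenomes and applies a multiple-test correction. Storey's FDR is not available in SciPy, so Benjamini-Hochberg is used here as a stand-in, and the counts are hypothetical.

```python
# Per-feature Fisher's exact test between two metagenomes + FDR correction.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def enrichment(counts_a, total_a, counts_b, total_b):
    """counts_*: {feature: hits in metagenome}; total_*: total hits in metagenome."""
    features, pvals = [], []
    for feat in set(counts_a) | set(counts_b):
        a, b = counts_a.get(feat, 0), counts_b.get(feat, 0)
        table = [[a, total_a - a], [b, total_b - b]]
        _, p = fisher_exact(table, alternative="two-sided")
        features.append(feat)
        pvals.append(p)
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return {f: (p, q, r) for f, p, q, r in zip(features, pvals, qvals, reject)}

# Hypothetical hit counts for two metagenomes:
dumpsite = {"transposase": 310, "MFS transporter": 450, "photosystem II": 3}
five_km = {"transposase": 40, "MFS transporter": 200, "photosystem II": 35}
print(enrichment(dumpsite, 50000, five_km, 48000))
```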
Data Availability
The TEFAP data were submitted to the NCBI SRA under accessions SRA045821.1, SRP008135.1 and 260594.1, with runs under SRR342413.1, whereas the shotgun sequencing data runs are under SRX0964712. Data were also uploaded to MG-RAST [45].
Physicochemical Analysis of Soils
The physicochemical analysis of the composite soil samples from three locations (Table 1) showed significant differences (P < 0.00001 in all corresponding comparisons; Fisher's exact test and Storey's FDR method) in electrical conductivity (maximum at the dumpsite; 8.5 dS/m). The dumpsite soil sample was highly saline (electrical conductivity and cation concentration) and available potassium was >10 times higher (918 kg/ha) than in the other composite samples (1 km = 40 kg/ha, 5 km = 84.3 kg/ha). This difference in electrical conductivity (EC) could be due to a higher abundance of ions (especially cations) as a result of pesticide contamination [46], and a high potassium concentration is a characteristic feature of soil ecosystems with inherent bioremediation potential [47]. HCH contamination was mainly composed of α- and β-HCH and reached up to 450 mg/g, 0.7 mg/g and 0.03 mg/g soil in the dumpsite, 1 km and 5 km soil samples, respectively (Table 1). The HCH levels reported from the dumpsite are the highest reported from any of the dumpsites studied so far [48,49].
(Figure 1 caption: (A) Two-way clustering of bacterial genera across three metagenomes obtained after TEFAP analysis using four bacterial primer sets; genera and sample categories were clustered using the Manhattan distance metric, with the top 50 genera having a standard deviation >0.4 and at least 0.8% of the total abundance selected; the colour scale represents the relative abundance of sequence reads, normalized by sample mean. (B) Phylogenetic correlation of microbial communities across increasing HCH contamination: a subset of 1,000 randomly selected OTUs from each metagenome was used to construct Euclidean distance matrices, which were compared pairwise using the Mantel test (1,000 permutations, 0.05 as the standard P-value), and Pearson correlation values were calculated; asterisks indicate statistical significance at P < 0.001 (mean ± SEM). (C) Relative percentage of reads assigned to different archaeal (I) and fungal (II) genera in the TEFAP analysis. doi:10.1371/journal.pone.0046219.g001)
Microbial Diversity Estimation
In our first taxonomic approach we performed 16S rRNA amplicon pyrosequencing (TEFAP, Tag-Encoded FLX Amplicon Pyrosequencing) for each composite genetic pool using kingdom-specific primers (Table S1) targeted at the conserved domains of the rRNA genes [16]. Fig. 1 provides an overview of bacterial, archaeal and fungal diversity based on the TEFAP analysis. In this analysis a total of 114,771 sequences with an average length of 338 nucleotides were generated, of which 13,437 and 17,293 were derived from the archaeal and fungal assays, respectively. After quality control (average quality score of Phred Q20; tags, primers and reads <250 bp in length were removed), a total of 72,178, 4,535 and 14,294 sequences were utilized for bacterial, archaeal and fungal diversity analysis, respectively.
Bacterial Diversity Analysis
Bacterial diversity was analyzed among the 3 sites using 4 bacterial primer pairs (Fig. 1A). The dual dendrogram was clustered based upon weighted pair averages and Manhattan distances. Dumpsite assays clustered together regardless of which primer was utilized. Two of the primer pairs (530F-1100R and 515F-860R) (Table S1) always demonstrated high similarity to each other independently of the environment analyzed (Fig. 1A), which is to be expected, as they cover a similar region of the 16S rRNA gene, but this also suggests that they retrieve a similar community profile despite potential primer bias.
Several genera demonstrated notable differences (average and standard deviation across the individual assays) between the sites (Fig. 1A). Pseudomonas and Alcanivorax (4.2% ± 6.1) were more abundant in the dumpsite dataset; some of these genera have already been reported to degrade HCH isomers in pure cultures [5]. Interestingly, the dumpsite soil dataset was also found to be enriched for the anaerobes Clostridium and Dehalobacter (Table S2), which are also reported to degrade HCH isomers [50,51]. In contrast, the 1 km and 5 km datasets were predominated by Escherichia/Shigella (37.8/7.6% ± 3.1), Acidobacterium (17.3% ± 2.6), Salmonella (7.6% ± 2.3), Levilinea (3.5% ± 0.7) and Rubrobacter (3.3% ± 1.3), respectively. This finding is not unexpected, as these bacteria, especially Escherichia/Shigella, commonly colonize soils impacted by human or animal waste, and a small segment of these sites was using such waste as fertilizer for growing rice, wheat and vegetables.
We also observed bacterial genera that were unique to the dumpsite dataset. The criterion for selecting these genera required that each of the bacterial diversity assays agreed (i.e., for a given genus, all four assays were positive at the dumpsite and negative at the other sites). These genera and their average percentages (averaged across the four bacterial diversity assays) are presented in Table S3. Marinimicrobium (1.1% ± 0.45), Idiomarina (0.67% ± 0.16) and Salinisphaera (0.46% ± 0.20) were abundant as well as unique to the dumpsite dataset (Table S3). However, there is no clear evidence of their association with the degradation of HCH isomers, nor any documented presence at HCH dumpsites in the literature, although they have been reported from hypersaline environments [52], which suggests that the salinity of the dumpsite could be promoting a unique microbial composition. Some of the major genera that were predominantly present at the lowest-HCH site (5 km) include Caldilinea, Streptomyces and Gemmatimonas (Table S4).
The bacterial phylum distribution based upon SSU rRNA analysis using RDP [24] (Table S5) was by and large in agreement with that of the TEFAP analysis. The most abundant phyla present in the dumpsite and 1 km datasets were Proteobacteria (50-50.8%), followed by Firmicutes (33.8-43%) and Actinobacteria (4-14.5%). In contrast, Firmicutes (70%) were most abundant in the 5 km (lowest HCH) dataset (Table S5); they are known to be dominant in dry/arid soils [53]. Fusobacteria, Cyanobacteria and Chlorobi were completely absent from the dumpsite and 1 km datasets. Therefore, while HCH contamination did impact the diversity and abundance of various bacterial genera, it did not markedly affect phylum-level diversity or abundance. A Mantel test of beta-diversity between sites (between distance matrices generated from the phylogenetic trees of candidate OTUs; Fig. 1B) indicates a significant linear correlation (P < 0.001) between increasing stress conditions (HCH contamination and salinity) and microbial community structure. These beta-diversity patterns are driven by changes in the diversity and abundance of genera as described above rather than of higher taxonomic ranks.
Further insight into the bacterial diversity within the three metagenomic datasets was obtained by computationally identifying reads matching bacterial 16S rRNA gene sequences among the metagenomic reads (EGTs) and assigning them to different taxonomic levels (SSU rRNA). We also mapped EGTs to >1,100 bacterial genomes (EGT genome typing) in the NCBI reference genome database [54]. A total of 2,926, 4,164 and 2,301 SSU rRNA reads were obtained from the dumpsite, 1 km and 5 km datasets, respectively. The phylogenetic composition obtained by TEFAP, SSU rRNA and EGT typing analysis was compared at the genus (Fig. S1) and phylum level (Fig. S2). Despite the general accordance, there are some noteworthy differences between TEFAP, SSU rRNA typing and EGT typing. For example, Streptococcus was more abundant (9.6%) at the dumpsite according to EGT typing in comparison to the TEFAP prediction (1% ± 1.2), while Acidobacterium was predominant in the TEFAP analysis at the dumpsite (13.3% ± 2.3) in comparison to SSU rRNA typing (1%). The relative enrichment of Pseudomonas (P < 0.001 in all corresponding comparisons), Sphingomonas (P < 0.001 in all corresponding comparisons) and Chromohalobacter (P < 0.001 in all corresponding comparisons) was validated by all three approaches used. Some of the differences among these three techniques could possibly be attributed to the inherent biases of each technique, such as the low coverage of 16S rRNA in metagenomic data (SSU rRNA), PCR primer amplification (TEFAP), and the lack of relevant genomes for this environment (EGT genome typing), as reported previously [55,56]. Two strong points emerge from the data. First, the data reflect that in the surface soil (up to 20 cm) there is a relative enrichment of bacterial, archaeal and fungal taxa genetically evolved to tolerate high salinity and degrade HCH isomers. Thus natural attenuation, a process in which the microbial community contributes to pollutant degradation, is already in operation but needs to be monitored in detail alongside several other parameters (salinity, organic wastes and time). Second, for rapid degradation of HCH isomers at the dumpsite, the metagenomic data suggest that it may indeed be possible to effectively biostimulate the indigenous bacterial community by applying specific nutrients that would target the productivity of specific taxa [10,11] (taxon-specific minimal salt medium and electron donors).
Archaeal and Fungal Diversity
So far the available literature on microbial diversity at HCH dumpsites only reflects the presence of bacteria [10,48,49], with archaeal and fungal diversity never having been analyzed at an HCH dumpsite. Based upon relative abundance (reads assigned to a particular archaeal genus/total reads assigned to the archaeal domain), Nitrososphaera (>90%) and related genera were enriched in the 1 km and 5 km datasets, whereas in the dumpsite dataset there was a relative increase in the abundance of genera such as Halobacterium (>30%), Haloarcula (>10%), Halorhabdus (>10%) and Halopelagius (>5%) (Fig. 1C-I). Archaeal genera like Halorhabdus [57] and Halobacterium [58] have already been reported as naturally selected inhabitants of highly saline (EC and cation concentration) environments. In general, halophilic bacteria and archaea have a broad catabolic potential [52], and hence these halophiles may have a role in HCH degradation at the dumpsite. Evaluation of fungal diversity based upon the TEFAP analysis revealed a high proportion of Fusarium species (>50%) at the dumpsite that were absent from the pooled samples representing the two more distant sites (Fig. 1C-II). Fusarium species were tentatively identified as either F. equiseti or F. oxysporum (LSU with >97% sequence similarity to the reference sequence; Fig. S3). While the role of the other dominant fungal species is not yet known, the ability of Fusarium sp. to degrade HCH isomers in pure cultures has been described previously [59,60]. The 1 km site, a certain segment of which is potentially impacted by human or animal waste fertilizer, showed comparatively high proportions of Sarcosphaera (48.13%) and Peziza (14.67%), while the most distant site (5 km) was relatively high in Trichocladium (28.94%) and Oidium (10.13%). Unlike the bacterial analysis, there were too few archaeal or fungal sequences identified by rRNA classification or genomic mapping from the metagenomic data to provide meaningful results. Nevertheless, the dumpsite and 1 km microbial communities were more closely related to each other than the 1 km-5 km or dumpsite-5 km pairs (Fig. 1A, S1 and S2), supporting the HCH contamination and salinity hypothesis. A further increase in sequencing depth and replicates could help to improve the resolution of these findings.
Metagenome Functional Overview
Protein functions generated from evidence-based annotation (Pfam, COGs, Swiss-Prot/TrEMBL and KEGG databases) were classified at various hierarchies [31] (individual genes, protein families and cellular processes). Increasing HCH contamination resulted in an increase in the relative abundance of cellular processes such as membrane transport (P < 0.001 for all pairwise comparisons), motility and chemotaxis (P < 0.001 for 5 km versus 1 km and P < 0.01 for dumpsite versus 1 km), and transposases and plasmid maintenance (P < 0.001 for all pairwise comparisons) (Fig. 2A). Additionally, phage and prophage elements were also elevated at the HCH dumpsite, suggesting an increase in genetic mobility due to pollution or salinity stress. Enriched subsystems and protein families involved in each of the above-mentioned processes were identified and characterized (Fig. 2B and Table S6). Categories involved in aromatic compound metabolism, including chlorobenzoate, benzoate and toluene degradation (Table S6), which have been reported as end products of anaerobic degradation of HCH [61], were found to be positively correlated with HCH contamination. Rarefaction estimates (Fig. 2C) and the two-sided Fisher's exact test with Storey's FDR method were applied to the Pfam [62] database results (protein families) using STAMP [43]. Protein families that were significantly higher at the dumpsite include transposons (P < 1e-11 for each pairwise comparison), phages (P < 1e-15 for each pairwise comparison), IS elements (P < 1e-10 for each pairwise comparison), alpha-beta hydrolase folds (P < 1e-15 for each pairwise comparison), the major facilitator superfamily (P < 1e-15 for each pairwise comparison) and short-chain dehydrogenases (P < 1e-15 for each pairwise comparison). It is not surprising that an increase in salinity levels and HCH contamination resulted in an enrichment of microbial genes coding for enzymes and proteins involved in aromatic compound metabolism, stress tolerance, multidrug resistance and motility/chemotaxis. Similarly, genes involved in motility, chemotaxis and sensing are required for sensing HCH isomers [63].
Based on SOM (self-organizing map) analysis we observed that genes coding for phage DNA synthesis, capsid proteins and packaging, and transposase families such as Tn3, IS-6100 and the integrase core domain were predominantly present in the dumpsite and 1 km datasets (Table S6). At the dumpsite there was also a notable enrichment of error-prone DNA repair genes and genes facilitating enhanced mutation rates. Finally, the dumpsite and 1 km datasets showed a high relative abundance and diversity of proteins involved in transposition and conjugation mechanisms. The overall functional diversity based on KEGG [34] enzyme profiling clearly revealed the impact of HCH and salinity on microbial responses. For instance, the dumpsite and 5 km datasets had the least correlation (R² = 0.92), the dumpsite and 1 km datasets were more correlated (R² = 0.943), and the 1 km and 5 km datasets were the most correlated (R² = 0.98) (Fig. 2D). When the metagenomic data were analyzed at a higher functional category, the contribution of functional genes from eukaryotes was significantly higher at the 5 km site, while bacteria contributed more significantly to the metabolic potential of the dumpsite (data not shown).
Community Potential and Participation in HCH Degradation
To determine the relative enrichment of genes already assigned to the HCH degradation pathway, functional binning was performed on each of the datasets using BLASTN [19] and Transpipe [33] analysis. We were able to bin reads against 12 unique genes that have already been reported to be involved in HCH degradation pathways. Notable among these are: linA, linB, linC, dehydrochlorinase, chlorocatechol 1,2-dioxygenase, 2,4,6-trichlorophenol monooxygenase, 2,6-dichloro-p-hydroquinone 1,2-dioxygenase, 2,5-dichloro-2,5-cyclohexadiene-1,4-diol, (chloro)muconate cycloisomerase, the LysR family transcriptional regulator (LinR), and the TRAP-type mannitol/chloroaromatic compound transport system periplasmic component (ttg2 gene) (Fig. 3 and Table S7). We compared the three datasets for the presence and relative abundance of HCH degradation genes (lin genes). The dumpsite and 1 km site had a higher metabolic potential to degrade HCH isomers compared to the 5 km site, in which these genes were nearly absent (Fig. 3). Additionally, ABC transporter genes like ttg2 [64] and TonB receptors [65] were found at higher relative abundance at the dumpsite in comparison to the other datasets. These transporter genes have been reported from sphingomonads, where they help in the transport of complex hydrophobic compounds like HCH across the membrane, thus facilitating the degradation process [64].
Sequences (Table S8) related to the lin operon, gene clusters and plasmids were downloaded from NCBI, and each of the metagenomes was reference-assembled against the existing linA, B, C, D, E, R and X genes and plasmids. We found 34,953 matches in the dumpsite metagenomic data, 35,256 at the 1 km site, and only 24,442 sequences from the 5 km site. Results from the DNA-Seq based analysis (Fig. 4) were in agreement with the functional binning, HCH contamination levels and taxonomic enrichment observed in each of the metagenomes. We observed a very high relative abundance of genes encoding LinA and LinB, as these two primary enzymes are responsible for the degradation of all HCH isomers and also some of the intermediates (Figs. 3 and 4). We observed that linA, linB and linC genes were abundant in the dumpsite and 1 km datasets (Figs. 3 and 4), indicating either that a large majority of bacteria contain these genes or that these genes are present in multiple copies, as two copies of the linA gene have already been reported from sphingomonads that harbor these genes [13,64]. Our previous studies have revealed certain end products of the degradation of α-, β- and δ-HCH under aerobic conditions by Sphingobium indicum B90A, and also under anaerobic conditions [5]. However, the enrichment of benzoate, toluene, naphthalene and aromatic ring-opening genes at the HCH dumpsite (Table S6) is an indicator that even the end products are degraded further.
Recruiting the Chromohalobacter salexigens Pangenome and Tracing the Horizontal Gene Transfer Potential of lin Genes in situ
Metagenomic studies enable the recovery of partial genetic information from a broad distribution of the community membership. However, for the dominant organism (or pan-organism) in a given community it is often possible to reassemble a complete genome, albeit a pan-genome comprised of sequences from a number of closely related species or strains [65,66]. Based on the phylogenetic profiles generated by TEFAP, metagenomic SSUs and direct comparison of EGTs to reference genomes, we generated metagenomic recruitment plots for various reference genomes (Table S9) using MUMMER [35]. De novo assembly (see Materials and Methods) of all three datasets resulted in 2,388,526 contigs (N50 = 745 bp, maximum contig size = 3,458 bp, average contig coverage ~5X). Because the primary focus of our further assembly efforts was to reconstruct the enriched, salinity-tolerant and HCH-degrading draft or complete pangenomes (genomic fragments from similar species), the de novo assembled contigs were clustered based upon their nucleotide compositional characteristics (tetranucleotide frequencies and %G+C) as explained earlier [40,67].
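For readers unfamiliar with the N50 statistic quoted for the assembly above: it is the contig length at which contigs of that length or longer contain half of the total assembled bases. The sketch below computes it for a list of contig lengths; the lengths shown are made up for illustration.

```python
# N50: length L such that contigs of length >= L hold >= 50% of assembled bases.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

print(n50([3458, 2100, 1500, 900, 745, 745, 600, 310, 200]))  # illustrative lengths
```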
Owing to the relatively high abundance of Chromohalobacter salexigens DSM 3043 in our taxonomic analysis, a draft pangenome of Chromohalobacter sp. was constructed from the metagenome data (Fig. 5A, S1, S4). The Chromohalobacter sp. assembly consists of 5,189 contigs (average contig size = 513 bp, average coverage ~8X), totalling 1,580 kbp of draft pan-genome (Fig. 5A and S4). The RAST annotation server [42] was used to annotate 778 protein-coding sequences (CDS) and 189 hypothetical proteins on the contigs, which were confirmed with an average BLASTp identity of 98.5% to the reference coding sequences.
(Figure caption: DNA-seq analysis of the community potential for HCH degradation. DNA-seq analysis of metagenome sequences against reference lin genes using ArrayStar; the x axis represents the relative abundance of lin genes from different genera (Table S8).)
These observations clearly indicate the enrichment of Chromohalobacter with increasing HCH contamination levels, as observed in the TEFAP analysis. We were able to assemble the complete 16S rRNA gene sequence of Chromohalobacter sp. (99.9% identical to the 16S rRNA gene sequence of Chromohalobacter salexigens DSM 3043; contig no. 646, size = 1,652 bp, coverage > 35X).
Since there was no other 16S rRNA gene sequence (phylogenetic marker) of Chromohalobacter salexigens in our assembly, this indicates low inter-strain microdiversity of Chromohalobacter salexigens (average BLASTp identity to the reference coding sequences = 98.5%). It is essential to note that potassium cations released by the pesticide in contaminated soils can lead to an increase in the total salinity of the soil matrix [46]. Chromohalobacter salexigens DSM 3043 is a halophilic gammaproteobacterium with a versatile metabolism allowing fast growth on a large variety of simple carbon compounds as its sole carbon and energy source. This bacterium is also resistant to saturated aromatic hydrocarbons and heavy metals and is a host to several versatile plasmids [68,69]. As with other studies that highlight the in silico potential of reassembled genomes to support specific phenotypes, the role of these organisms in HCH degradation needs to be confirmed through biochemical tests. However, this information could help to refine the culture conditions necessary for axenic isolation of the organism(s), for example by generating a flux-balance metabolic model of the organism (e.g., ModelSEED) [70].
The lin genes are already known for their mobile nature and association with IS elements [64]; however, there is no evidence of their relative mobility or evolution. Previous reports on the localization of lin genes, especially linA, linB, linC and linDER, across different species indicate that many of these genes are present on genomes as well as plasmids [12,71-73]. Recently, the presence of lin genes has been reported on the genome (3.51 Mbp) of Sphingobium japonicum UT26 [12] and on the plasmids pISP3 (43 kbp) and pISP4 (21 kbp) in Sphingomonas sp. MM1 [74]. An exogenous plasmid, pLB1 (21 kbp), carrying an IS-6100 composite transposon containing two copies of linB [75], was isolated directly from HCH-contaminated soil. Thus we targeted our assembly efforts (clustering using tetra-ESOM and %GC) at understanding the microdiversity and organization of lin genes as metagenomic islands, using reference sequences from the genome of Sphingobium japonicum UT26 (the solitary sequenced genome of an HCH-degrading bacterium available so far) and the three plasmids pISP3, pISP4 and pLB1. For this purpose, we generated metagenomic recruitment plots and binned the contigs for the first chromosome of UT26 and the three plasmids. Metagenomic recruitment plots of the genome and plasmids (Fig. 5A-E) clearly showed an abundance of metagenomic reads matching the reference sequences in the range of 97% to 100%. When metagenomic islands were identified on the recruitment plots it became evident that, except for the IS element of the linB gene, there were hardly any reads mapped over the IS elements related to the other lin genes (Fig. 5B-E). This suggests a relative genomic plasticity and faster rate of evolution for linA, linC, linDER and linF compared with linB. The data also reflect that the bacterial community at the dumpsite is enriched for HCH degradation potential (lin genes), insertion elements, integrases, prophages and/or plasmids, which contribute to the continuous genetic adaptation of these bacteria.
Conclusions
This is the first metagenomic analysis of samples collected from soils with differential concentrations of HCH contamination. Though the presence of halophilic bacteria can be attributed to the strong salinity differences between the dumpsite and the other two sites, the enrichment and diversity of lin genes suggest that HCH contamination did play a significant role in structuring the functional potential of the community. This study has shown the enrichment of ubiquitous but as yet unknown archaeal, bacterial and fungal taxa under HCH contamination (and highly saline conditions). A higher diversity and abundance of lin genes, transposons, plasmids, prophages, ABC transporters and genes associated with chemotaxis/motility and membrane transport were observed in the HCH dumpsite dataset. The data thus provide strong evidence not only for the enrichment of a specific microbial population and genes, but also for massive lateral transfer of catabolic (lin) genes through conjugation and transposition among the members of the established microbial community. We recovered one partial enriched microbial genome and three nearly complete plasmids containing lin genes, indicating that these bacteria harbor catabolic plasmids and dominate this HCH-stressed environment. While the results presented here can prove to be an invaluable supplement to the ongoing efforts to develop in situ bioremediation technologies for HCH, this study also suggests good prospects for developing an economically viable HCH bioremediation technology. The latter may involve the use of specific tailor-made nutrient(s) and chemicals such as taxon-specific minimal salt medium [10] and various electron donors [11]. In addition, this study also points out that bioaugmentation using a consortium (cultivable representatives of the enriched genera) of both HCH degraders and non-degraders could improve the efficiency of remediation efforts relative to the use of a single taxon.
Supporting Information
Figure S1 Two-way clustering of bacterial genera (predicted by EGT mapping to NCBI genomes, SSU rRNA analysis against the GreenGenes database and taxon-specific 16S rRNA pyro-tagging) versus the sample matrix. Genera and sample categories were clustered using the Manhattan distance metric; the top 50 genera with a standard deviation >0.4 and at least 0.8% of the total abundance were selected. The colour scale represents the relative abundance of sequence reads after normalizing the data by the mean of each column (one sample). (TIF)
Figure S2 PCA (principal component analysis) performed on the total diversity patterns (phylum) obtained after EGT mapping, metagenomic SSU rRNA analysis and taxon-specific pyro-tagging. The correlation matrix was selected for the ordination with 1,000 bootstrap values. (TIF)
Figure S3 Phylogenetic analysis of fungal 18S rRNA gene sequences. Phylogenetic analysis was performed on the partial (300 bp) 18S rRNA gene sequences obtained from bTEFAP analysis of the dumpsite metagenome (n = 42) and reference sequences (n = 49) using the neighbour-joining method with the Kimura two-parameter model. The bootstrapped consensus tree, inferred from 1,000 replicates, is presented as a radial tree. Bootstrap values (percentages of replicate trees in which the associated taxa clustered together) are shown for selected nodes in the tree. The tree is drawn to scale, with branch lengths corresponding to the evolutionary distances used to infer the phylogenetic tree.
(TIF)
Figure S4 Schematic representation of the draft pangenome (contigs) of Chromohalobacter salexigens sp. assembled using tetra-ESOM and %GC based clustering of the de novo assembled metagenome contigs. (A) Circular representation of the draft genome (contig bin). From the outside towards the centre: outermost circle, metagenomic contigs arranged using the reference sequence; circle 2, metagenomic read coverage (coordinates with <8X coverage are not represented); circle 3 (innermost circle), GC content of the contigs. (B) Contigs are ordered using the reference genome sequence (represented by the black base ring). Red-coloured positions represent the non-coding tRNA and rRNA genes. (TIF)
Table S1 List of specific primers used in the present study for TEFAP (Tag-Encoded FLX Amplicon Pyrosequencing) analysis: First four primer sets in the first column were used for bacterial selective assay.
(DOCX)
Table S2 Relative abundance (percentage) of anaerobic bacteria (HCH degradation related) at all three metagenomes obtained after bTEFAP analysis using four bacterial assays.
(DOCX) Table S3 The bacterial genera which were unique to the dumpsite dataset. The average relative percentage across each of the 4 bacterial diversity assays is presented. For the dumpsite the standard deviation is also provided. For both the one km and 5 km sites each of the assays was negative for these genera.
(DOCX)
Table S4 Genera enriched in the pristine 5 km compared to the one km dumpsite soil sample. Those which were significantly higher based upon ANOVA and Tukey-Kramer among the diversity assays are in bold. (DOCX) The relative expression based upon an RNAseq based analysis. The NCBI sequences for the noted accessions were utilized as the reference transcriptome and the raw reads from each of the 3 metagenomic sites were compared. The genera of the NCBI genes and the gene designations are also indicated. (DOCX) | 2018-04-03T04:34:11.087Z | 2012-09-28T00:00:00.000 | {
"year": 2012,
"sha1": "005f7a80cb4325b22e5b732958f6b838e0b8794c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0046219&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "005f7a80cb4325b22e5b732958f6b838e0b8794c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
80358471 | pes2o/s2orc | v3-fos-license | Clinical analysis of medication related osteonecrosis of the jaws: A growing severe complication in China
Background/purpose Medication-related osteonecrosis of the jaws (MRONJ) is an unusual but quite serious complication. However, its mechanism remains unclear, and its treatment protocol is still controversial. Materials and methods Our study involved 201 osteonecrosis of the jaw (ONJ) patients from September 2006 to March 2017. We analyzed risk factors, clinical characteristics, treatment, etc., by comparing MRONJ with other ONJs. Results Among the 201 patients, MRONJ accounted for 14.71% and showed a consistently increasing trend. In comparison with other ONJs, we identified advanced age, maxillary lesions, diabetes mellitus and tooth extraction, especially multi-tooth extraction, as risk factors (P < 0.0125). Our study demonstrated that maxillary lesions were associated with an advanced stage and were inclined toward worse prognoses. We also found that MRONJ had little correlation with Actinomyces infection. Surgical treatment could improve patients' condition successfully (P > 0.05): 81.3% of patients with advanced-stage disease showed completely or partially healed lesions after surgery. Conclusion Advanced age, maxillary lesions, diabetes mellitus and tooth extraction seem to be important triggering factors for MRONJ. Clinicians and surgeons should pay attention to maxillary lesions, as they are related to severe symptoms and an unfavorable prognosis. Once MRONJ is diagnosed, surgery is an effective treatment for patients with advanced-stage disease.
Introduction
Osteonecrosis of the jaw (ONJ) is a common oral and maxillofacial surgical disease. Since 2003, a new sort of ONJ, bisphosphonate-related osteonecrosis of the jaws (BRONJ), has been observed through ever-increasing case reports. 1 Briefly, BRONJ is an unusual but quite serious complication of bisphosphonate (BP) therapy in patients suffering from osteoporosis or malignancies, such as multiple myeloma, breast cancer, prostate cancer, etc. In the 2014 AAOMS position paper, 2 BRONJ was replaced by MRONJ (medication-related osteonecrosis of the jaws), as new drugs, for example denosumab 3,4 and bevacizumab, can also lead to osteonecrosis of the jaws. Though the incidence of MRONJ remains relatively low (0.7-6.7% for IV BPs, 0.1-0.21% for oral BPs), 2 there is a lack of an effective treatment protocol. Neither surgery nor conservative therapy can thoroughly eliminate patients' symptoms and achieve completely healed oral mucosa. In addition, the pathogenesis of MRONJ is unclear even though a great number of researchers have been working on it. Possible hypotheses include inhibition of bone remodeling, an antiangiogenic effect, bacterial infection, immune dysfunction and direct cytotoxicity. 5 In China, a growing number of patients with osteoporosis, malignancies or bone cancer metastasis are exposed to BPs. Consequently, the incidence of MRONJ keeps rising. Though it can be easily diagnosed following the guidance of the AAOMS, 2,6 difficulty lies in the treatment of MRONJ. The aim of our study was to compare MRONJ with other ONJs in West China Hospital in terms of the growing tendency, risk factors, clinical characteristics, treatment and outcomes, and so on, in order to propose guidelines customized for the Chinese population.
Materials and methods
The database of the West China Hospital of Stomatology was searched from September 2006 to March 2017. Patients who visited the West China Hospital of Stomatology and were diagnosed with either osteonecrosis of the jaw or osteomyelitis of the jaws were included. The inclusion criteria for MRONJ followed the AAOMS definition, 2 and the main exclusion criteria for MRONJ were osteonecrosis after radiotherapy in the head and neck area and obvious metastatic infiltration of the jaw. Patients who developed osteonecrosis after bone grafting were also excluded. This study was approved by the Regional Ethics Committee Investigation of West China Hospital of Stomatology (WCHSIRB-D-2017-060). 201 patients were eventually selected into the study. Considering the classification of a previous study 7 as well as the cause of osteonecrosis, patients were classified into five groups in our study: 1. Medication-related osteonecrosis of the jaws (MRONJ) group: patients with a previous or ongoing bisphosphonate treatment history but no head and neck radiation history. Our study did not find any patients using denosumab or other drugs. 2. Osteoradionecrosis (ORN): patients who went through head and neck radiotherapy before osteonecrosis occurred in the mandible or maxilla. 3. Odontogenic osteonecrosis: patients with dental infections, such as pericoronitis, that developed into osteonecrosis or osteomyelitis with sequestra. 4. Trauma- and surgery-induced osteonecrosis: patients previously exposed to oral or maxillofacial trauma leading to osteonecrosis or osteomyelitis with sequestra, or patients who went through oral and maxillofacial surgery and subsequently suffered from postoperative infection. 5. Cause unknown: patients with no clear cause.
In the current study, in most cases, we compared the MRONJ group with other groups to figure out specific MRONJ risk factors and other useful statistics.
Statistical analysis
SPSS (version 21.0, SPSS Inc., Chicago, IL, USA) was used to analyze the collected data. We analyzed descriptive statistics, and results were expressed in mean, standard deviation, frequency, percentage for different variables.
ANOVA was applied to study associations involving quantitative variables. To detect any differences between qualitative variables, the Pearson chi-square test or Fisher's exact test, Bonferroni correction and the Mann-Whitney U test were used as appropriate. P ≤ 0.05 was considered statistically significant.
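The study's analyses were run in SPSS; purely as an illustration of the kind of group comparison described above (and not the authors' workflow), the sketch below compares the frequency of a risk factor between the MRONJ group and one other ONJ group using Fisher's exact test, applying a Bonferroni-corrected threshold of the kind reported in the Results (P < 0.0125 for four pairwise comparisons). The counts for the comparison group are hypothetical.

```python
# Fisher's exact test with a Bonferroni-corrected significance threshold.
from scipy.stats import fisher_exact

def compare_groups(cases_a, n_a, cases_b, n_b, comparisons=4, alpha=0.05):
    table = [[cases_a, n_a - cases_a], [cases_b, n_b - cases_b]]
    _, p = fisher_exact(table, alternative="two-sided")
    threshold = alpha / comparisons          # Bonferroni correction (0.05/4 = 0.0125)
    return p, p < threshold

# e.g. diabetes mellitus: 6/30 in MRONJ vs 5/94 in another group (hypothetical counts)
p_value, significant = compare_groups(6, 30, 5, 94)
print(f"P = {p_value:.4f}, significant at the Bonferroni threshold: {significant}")
```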
Results
201 patients were selected into our study. According to the statistics, odontogenic osteonecrosis was the most frequent osteonecrosis (Figure 1: the fraction of each type of osteonecrosis), accounting for 46.77%. The second biggest group was osteoradionecrosis. MRONJ as well as cause-unknown osteonecrosis followed (Fig. 1), and MRONJ presented a consistently increasing trend (Fig. 2). The bisphosphonate most commonly used was zoledronate (n = 12, 50%). The most common related systemic diseases were prostate cancer and osteoporosis (23.3%). Other systemic diseases included breast cancer, multiple myeloma and others (Table 1).
Age, gender and location
The median age of MRONJ patients was 64.43 years (range, 32-87 years), older than all other groups. There were statistically significant differences between the MRONJ group and the odontogenic, cause-unknown, and trauma & surgical groups (ANOVA, P < 0.001). With regard to gender, the sex ratio (male:female) of the MRONJ group was 1.14, and it did not show any statistically significant differences. As for the location of the osteonecrosis, 11 patients had MRONJ lesions located in the maxilla, in contrast to 19 in the mandible, and none occurred in both jaws. The fraction of maxillary lesions in the MRONJ group was higher than in all other groups (Bonferroni correction, P < 0.0125) (Table 2).
Risk factors
Among the patients with MRONJ, five had hypertension (16.7%) and six had diabetes mellitus (20%). Corticosteroid and immunosuppressant usage accounted for 13.3%. The MRONJ group had more diabetes mellitus cases than all other groups, and the difference was statistically significant between the MRONJ group and the odontogenic cases (Bonferroni correction, P < 0.0125). Regarding drinking and smoking, 36.7% of MRONJ patients were smokers and 30% had drinking habits, but we did not find any statistically significant differences between MRONJ and the other groups (Table 3).
Our study also analyzed local triggering factors in the MRONJ group, involving tooth extraction (n = 24, 80%), periodontal disease (n = 5, 16.7%), poor oral hygiene (n = 8, 26.7%) and inappropriate prostheses (n = 3, 10%). 28 patients had local risk factors on record, while only two patients developed MRONJ spontaneously. Nevertheless, only tooth extraction presented statistically significant differences from the other groups. The MRONJ group had more patients whose teeth were extracted, considerably more than the ORN, cause-unknown, and trauma and surgical groups (P < 0.001) (Table 3).
Clinical presentations and imaging features
Of the 30 MRONJ patients, nine presented stage III lesions, 17 presented stage II lesions and only one patient showed stage I lesions. In addition, three patients presented stage 0 lesions. There was a statistically significant association between MRONJ stage and location (Mann-Whitney U test, P < 0.05): maxillary lesions were inclined toward an advanced stage based on our data (staging of medication-related osteonecrosis of the jaw in Table 4).
According to our statistics, common symptoms of MRONJ included exposed bone (Fig. 3), pain or discomfort, swelling, pus (Fig. 4) and fistula (Fig. 5). Other symptoms are recorded in Table 5. Except for exposed bone, no statistically significant differences between MRONJ and the other groups were found; MRONJ presented the highest rate of exposed bone (P < 0.001). All MRONJ patients were examined with dental panoramic radiographs or cone-beam CT. Common imaging features included sequestra (Fig. 6), osteolysis and osteosclerosis (Fig. 7). Other imaging features are recorded in Table 6. Osteolysis, irregularity of the cortical margins and maxillary sinusitis showed statistically significant differences (Bonferroni correction, P < 0.0125). The MRONJ group had more cases with maxillary sinusitis than the other groups.
Staging, symptoms and signs:
Stage 0: patients without specific exposed necrotic bone, but with non-specific symptoms or clinical and imaging findings, such as pain or radiographically non-healing bone in extraction sockets.
Stage 1: patients with exposed and necrotic bone or fistulas but no evidence of infection.
Stage 2: patients with exposed and necrotic bone or fistulas as well as clinical infection symptoms.
Histopathologic and microbiological findings
20 patients in the MRONJ group underwent biopsy. Inflammatory infiltrates and sequestra were detected in most patients. Necrotic foci, colonization with pathogens, fibrous hyperplasia and bone hyperplasia were also detected among these patients. However, there were no statistically significant differences between MRONJ and all other groups. A total of 39 patients (8 in the MRONJ group) underwent microbiological examination. Bacteria were isolated from pus from exposed bone or fistulas. In the MRONJ group, alpha-hemolytic streptococci, Neisseria, Prevotella intermedia, coagulase-negative staphylococci and Corynebacterium were found. Notably, we did not find any patients infected with Actinomyces. There were also no statistically significant differences between MRONJ and all other groups (P > 0.05). After surgical treatment, 81.3% of MRONJ patients showed completely or partially healed lesions, while the condition of three other patients deteriorated. Conversely, only five patients (45.5%) who underwent conservative treatment showed completely or partially healed lesions; six patients did not show any improvement. The evolution after surgical treatment was better than after conservative treatment in the present study, though this difference did not reach statistical significance (P > 0.05) (Table 7). It is worth noting that maxillary lesions were more inclined toward worse prognoses (P < 0.05): six maxillary lesions ended up with no improvement or extended lesions, while only four maxillary lesions
Discussion
On the basis of the AAOMS position paper, 2 MRONJ patients should meet 3 criteria: 1) current or previous treatment with antiresorptive or antiangiogenic agents, 2) exposed, necrotic bone or a fistula in the maxillofacial region that has persisted for at least 8 weeks, and 3) no history of radiotherapy to the jaws. However, our study only involved patients with bisphosphonate usage, without any new drugs such as denosumab, and ONJ was divided into five groups. A previous study shared a similar classification, and BRONJ accounted for 45%. 7 Though MRONJ cases in our study accounted for only 14.71%, they showed a dramatic and consistent increasing trend, as previously described. 7,8 MRONJ made up 6.74% from 2006 to 2010, whereas it climbed to 20.87% in the last five years. This is probably because BPs only started to be widely used in China in recent years. MRONJ is still a rare drug-related side effect in China but, predictably, it is likely to increase rapidly over the coming 10 years owing to numerous BP prescriptions. It is important to emphasize that 36.67% of MRONJ lesions occurred in the maxilla, considerably higher than in the other groups. This ratio is also higher than in most previous articles. 8-10 Oral and maxillofacial surgeons should pay attention to maxillary MRONJ lesions, since maxillary lesions can lead to severe symptoms, such as maxillary sinusitis and perforation of the maxillary sinus. According to the results of Nisi et al., 11 maxillary lesions were associated with a worse MRONJ stage, and the present study reached the same conclusion. Our study also found that maxillary lesions were more inclined toward worse prognoses. According to our data, the median age of the MRONJ group was 64.43 years, older than all other groups (P < 0.001), which coincides with previously collected data. 7 A possible explanation might be that older people are more likely to suffer from osteoporosis or malignancies, and BPs are often prescribed to them. With regard to gender, our data showed that men and women had a nearly equal chance of developing MRONJ, which is at odds with previous articles: several authors observed that more women are affected by MRONJ. 8,12,13 This might be because prostate cancer was one of the most frequent reasons for BP therapy in the present study, and prostate cancer only affects male patients.
When it comes to local risk factors, 93.3% of patients went through dental procedures or had dental problems; only two patients developed MRONJ spontaneously. This ratio is close to published results. 7,13 As the most frequent invasive dental procedure, tooth extraction was recorded in 80% of MRONJ patients, and statistically significant differences were found (P < 0.001). Our study also analyzed the effect of multi-tooth extraction, in parallel with the bony exposure size reported previously. 14 Considering direct cytotoxicity as a hypothesis, we speculated that exposure size might be associated with the occurrence of MRONJ, and our results confirmed this assumption (P < 0.015). A previous animal study concluded that SD rats all developed MRONJ after repeated surgical extraction (the second molar was extracted one week after the first molar). 15 Based on this animal experiment, we also analyzed the effect of repeated extraction. However, it turned out that repeated extraction did not bring about a higher prevalence of MRONJ. Other risk factors, such as periodontal disease, inappropriate prostheses and poor oral hygiene, have been discussed as risk factors in many published studies. 16-18 Nevertheless, these differences did not reach statistically significant levels in the current study. In regard to systemic conditions, diabetes mellitus was a risk factor for MRONJ based on the current data. However, our study did not find any statistically significant differences regarding hypertension, corticosteroid use, smoking and alcohol, while several publications showed opposite results. 9,11,19
In the current study, histopathologic and microbiological findings did not show any statistically differences between MRONJ and other groups. According to the previous literature, Actinomyces played an important role in the course of MRONJ, 21 it estimated 73.2% (407 of 556) of the patients reported previously infected with Actinomyces. Anavi et al. 19 even isolated Actinomyces colonies in all 52 patients. Analogously, 66.7% patients were detected with Actinomyces colonization in a study from Spain. 14 However, our data did not find any patients infected with Actinomyces by microbiological examination. Three MRONJ patients' histopathologic examinations showed colonization with pathogens, but unfortunately, we couldn't find out whether it is Actinomyces or not. According to our results, MRONJ has little correlation to Actinomyces infection. 53.3% MRONJ patients underwent surgical procedure consisting of all stage III and seven stage II patients who had obvious sign of mobile bony sequestra. Conservative surgery was performed to other patients. Even though there were no statistically differences, surgical treatment's evolution was better than conservative treatment. 81.3% patients showed complete or partial healing lesions after surgery compared with 45.5% patients who accepted conservative treatment. Other authors also reported successful outcomes after surgery. Pichardo et al. 10 found all 74 MRONJ patients cured through surgical protocol. Holzinger et al. 22 drew the conclusion that effective surgery was able to improve the stage of MRONJ. Janovska et al. 23 found surgical treatment could lead to complete healing but it bore the risk of progression of the osteonecrosis and should be carefully planned under the control of patient's general health status.
Medication-related osteonecrosis of the jaws is growing rapidly in China due to the wide use of bisphosphonates. However, it is impossible to terminate the prescription of BPs, because most MRONJ patients receive BP therapy to fight against malignancies or osteoporosis. Therefore, prevention strategies are essential for these patients. 24,25 In conclusion, our retrospective study found several MRONJ risk factors, including advanced age, maxillary lesions, diabetes mellitus, chemotherapy and multi-tooth extraction. In addition, MRONJ has scarcely any specific clinical, imaging, histopathologic or microbiological features. We also found that surgical treatment could successfully improve the condition of advanced-stage patients. However, because of the limited number of cases, and because some patients' information was incomplete, the results could be specific to our study. To thoroughly elucidate the pathogenesis of MRONJ and a suitable treatment protocol, further studies with large series should keep focusing on MRONJ.
Conflicts of interest
The authors have no conflicts of interest relevant to this article. | 2019-03-17T13:11:31.732Z | 2018-02-06T00:00:00.000 | {
"year": 2018,
"sha1": "8da3f84b55f73df3325eaf726d0d814c07fa7168",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jds.2017.12.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8da3f84b55f73df3325eaf726d0d814c07fa7168",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
102857436 | pes2o/s2orc | v3-fos-license | The haloacetic acids formation potential in treated waters exposed to ozone and chlorine
Shangchao Yue, Lejun Zhao, Xiuduo Wang, Qishan Wang and Fenghua He
Tianjin Municipal Engineering Design and Research Institute, Tianjin 300392, PR China; College of Environmental Science and Engineering, Nankai University, Tianjin 300071, PR China; Tianjin TEDA Water Supply Company, Tianjin 300457, PR China
ysc010@163.com, lejun-zhao@vip.sina.com, wang_xiuduo@eyou.com, wangqsh@nankai.edu.cn, hefenghua121@sina.com
Introduction
Natural organic matter (NOM) in source waters may react with chlorine to form a series of disinfection by-products (DBPs) during disinfection. Trihalomethanes (THMs) and haloacetic acids (HAAs), the major DBPs, are established mutagens, carcinogens and toxicants [1-3]. Out of concern for public health risks, water treatment plants have to optimize treatment processes or adopt new technologies to balance disinfection against DBP formation.
Most of the former studies were carried out at lab or pilot scale, and few studies are available regarding the removal of DBPFP in full-scale plants. Furthermore, very little research has been conducted comparing DBP formation after prechlorination versus preozonation.
The full-scale research presented here goes beyond previous studies and was carried out in a water supply plant. The primary aim of this study was to compare the treatment efficacy of preozonation and prechlorination in terms of the concentration changes of DOC, UV254 and SUVA and the subsequent minimization of DBPFP from a source water in North China.
Materials and methods
Source water characterization. In this full-scale study, raw water was from the Luan River. In winter, the water is characterized by low temperature and turbidity.
Treatment processes. This study was carried out in a water supply plant in Tianjin. The processes were preoxidation (preozonation or prechlorination), coagulation, clarification, filtration and final disinfection, as presented in Fig. 1. About 1.0 mg/L of ozone or chlorine was added during preoxidation. The contact time was controlled at three minutes [11].
Results and discussion
Raw water characteristics. In the raw water, turbidity was 7.02-17.9 NTU, temperature was 4-14 °C, pH was 8.03, UV254 was 0.059 cm-1, DOC was 2.89 mg/L and SUVA was 2.44 L/(mg·m) [11]. The distribution of haloacetic acid species formation potential (HAASFP) in the raw water is shown in Table 1. DCAA formation potential was the predominant HAA compound, accounting for 60.75% of HAASFP. Since no MCAA or MBAA was detected in the raw water, DBAA and TCAA accounted for the other 20.70% and 18.55%, respectively.
Reduction of HAAFP in different treatment processes. HAAs are a major group of DBPs in chlorinated water [12]. HAAFP along the treatment processes is presented in Fig. 2. Comparison of the HAAFP decreases indicated that preozonation gave a better and more stable reduction of HAAFP: 33.25% of HAAFP was removed from the raw water during preozonation, versus 30.77% during prechlorination. In the final effluents, greater decomposition of HAAFP was detected in Train 2, with a removal rate of 67.79%, compared to a decrease of 58.38% in Train 1. The HAAFP concentrations were between 38.95 and 106.07 μg/L in the six effluent samples. As shown in Fig. 3, after prechlorination 16.80% of DCAA, 49.04% of TCAA and 60.69% of DBAA were removed from the raw water, while 21.52%, 66.20% and 42.59% were removed during preozonation, respectively, suggesting that ozone was more effective at reducing DCAA and TCAA than chlorine.
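The removal rates quoted here and below are simple percentage reductions relative to the raw-water formation potential. A minimal sketch of that arithmetic is shown below; the concentrations used are illustrative placeholders, not the study's measurements.

```python
# Removal rate = percentage reduction of a formation potential vs. raw water.
def removal_rate(raw, treated):
    return 100.0 * (raw - treated) / raw

raw_haafp = 120.0            # ug/L, hypothetical raw-water HAAFP
after_preoxidation = 80.1    # hypothetical value after preozonation
final_effluent = 38.7        # hypothetical value in the final effluent
print(f"Preoxidation removal: {removal_rate(raw_haafp, after_preoxidation):.1f}%")
print(f"Overall removal:      {removal_rate(raw_haafp, final_effluent):.1f}%")
```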
In the final effluents, the total removal rates for DCAA, TCAA and DBAA in Train 1 were 49.75%, 81.95% and 61.11%, respectively. A better reduction of HAASFP was achieved in Train 2: 53.45% of DCAA, 98.70% of TCAA and 86.56% of DBAA were removed, respectively. Moreover, DCAA was the major HAA species in both treatment trains. The results suggested that ozone changed the structures of NOM during preozonation and that HAA precursors were removed by the treatment processes.
Fig. 3 The distribution of HAASFP along the treatment processes: a) Train 1 with prechlorination, b) Train 2 with preozonation.
Comparing the reduction of THMFP [11] and HAAFP in the two treatment trains, the results indicated that HAAFP was removed better from the raw water. Furthermore, preozonation was more effective at reducing HAAFP than THMFP during the period of this study, which indicates a better removal of HAA precursor materials during preozonation. The total removal rates also suggested a greater decomposition of HAAFP than THMFP, which may relate to the reduction of DBPFP during preoxidation. Since HAAFP values were higher than THMFP in the raw water, and the health risk of HAAs is also higher than that of THMs, the removal of HAAFP was of great significance and the water quality was greatly improved.
Conclusions
From the examination of HAAs in the Luan River water, we found that DCAA was the major HAA species formation potential, while no MCAA or MBAA was detected in the raw water.
Under a similar dosage of chlorine and ozone, preozonation performed better than prechlorination in decreasing HAAFP, leading to a significant reduction of HAAFP. Approximately 68% of the HAAFP in the raw water was removed after the Train 2 treatment.
Overall assessment of the results indicated that the Train 2 treatment with preozonation was more effective in the removal of HAAs. The application of preozonation is an effective method for reducing DBP precursors in drinking water treatment. The introduction of preozonation in source water treatment in China is feasible, and it could serve as a substitute for prechlorination. | 2019-04-09T13:10:55.862Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "1618736fc8f62f0d9fa2e2704570231c0978f07b",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/25894705.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57a72d757fca5441e9b0ed5583fcdba6fd015f60",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
6002475 | pes2o/s2orc | v3-fos-license | A Plasmodium falciparum Transcriptional Cyclin-Dependent Kinase-Related Kinase with a Crucial Role in Parasite Proliferation Associates with Histone Deacetylase Activity
ABSTRACT Cyclin-dependent protein kinases (CDKs) are key regulators of the eukaryotic cell cycle and of the eukaryotic transcription machinery. Here we report the characterization of Pfcrk-3 (Plasmodium falciparum CDK-related kinase 3; PlasmoDB identifier PFD0740w), an unusually large CDK-related protein whose kinase domain displays maximal homology to those CDKs which, in other eukaryotes, are involved in the control of transcription. The closest enzyme in Saccharomyces cerevisiae is BUR1 (bypass upstream activating sequence requirement 1), known to control gene expression through interaction with chromatin modification enzymes. Consistent with this, immunofluorescence data show that Pfcrk-3 colocalizes with histones. We show that recombinant Pfcrk-3 associates with histone H1 kinase activity in parasite extracts and that this association is detectable even if the catalytic domain of Pfcrk-3 is rendered inactive by site-directed mutagenesis, indicating that Pfcrk-3 is part of a complex that includes other protein kinases. Immunoprecipitates obtained from extracts of transgenic parasites expressing hemagglutinin (HA)-tagged Pfcrk-3 by using an anti-HA antibody displayed both protein kinase and histone deacetylase activities. Reverse genetics data show that the pfcrk-3 locus can be targeted only if the genetic modification does not cause a loss of function. Taken together, our data strongly suggest that Pfcrk-3 fulfils a crucial role in the intraerythrocytic development of P. falciparum, presumably through chromatin modification-dependent regulation of gene expression.
Plasmodium falciparum, the protozoan parasite responsible for the most virulent form of human malaria, causes 1 to 3 million deaths annually, mostly among children in sub-Saharan Africa. This mortality is expected to rise with the global emergence and spread of drug-resistant parasites, making the discovery of alternative control agents an urgent task (43). The identification of potential targets is now greatly facilitated by the availability of genomic databases for several species of the Plasmodium genus (www.plasmoDB.org) (49). Plasmodium cell cycle regulators represent attractive candidate targets for intervention, because (i) their activities are most probably essential to parasite survival, and (ii) the overall organization of the cell cycle in malaria parasites differs considerably from that in mammalian cells; this is reflected by atypical properties of the enzymatic machinery controlling cell cycle progression, suggesting that specific inhibition is achievable (12,15).
The progression of the eukaryotic cell cycle is tightly controlled by a family of protein kinases, the cyclin-dependent kinases (CDKs), whose active forms are composed of a catalytic subunit (CDK) and a regulatory subunit (cyclin) (39). While several mammalian CDKs (CDK1, -2, -3, -4, -6, and -7) function in cell cycle control, others (CDK8, -9, -10, and -11) are part of the transcription machinery. CDK7 is a regulator both of cell cycle progression (through its activity as a CDKactivating kinase [CAK]) and of transcription (through its activity as a component of the general transcription factor TFIIH) (26). CDK8 and -9 regulate transcription by phosphorylating the C-terminal domain of the large subunit of RNA polymerase II (2,18); BUR1, the Saccharomyces cerevisiae CDK9 homologue previously known as SGV1, has been shown to regulate transcription through selective control of histone modifications (7,17,30). CDK10 regulates transcription and cell cycle progression by modulating the activity of the Ets2 transcription factor, a regulator of CDK1 expression (27). CDK11 interacts with the general precursor mRNA splicing factors and with RNA polymerase II, thereby playing a role in transcript production and the regulation of RNA processing (37). Finally, CDK5 has neuron-specific functions (11).
Among the 85 (or 99, depending on the criteria used for inclusion) eukaryotic protein kinase (ePK) sequences that were identified in the P. falciparum kinome (3,52), 18 clustered within the CMGC group (CDKs, MAPKs [mitogen-activated protein kinases], GSK3 [glycogen synthase kinase 3], and CDK-like), with 6 sequences more closely related to CDKs than to other CMGC subfamilies. By analogy with their functions in other eukaryotes, and despite the unique characteristics of the Plasmodium cell cycle (13,31,41) and transcription machineries (1, 6, 8-10), it is likely that the Plasmodium CDK-related kinases play key roles in cell cycle progression and transcription in the parasite. Among those gene products, PfPK5 (22,32,47), Pfcrk-1 (14), Pfmrk (34,35,53), and PfPK6 (5) have been the subjects of biochemical or structural investigations. However, the only reverse genetics-based information published so far regarding the function of CDKs in the parasite life cycle is that for Pbcrk-1, the orthologue of Pfcrk-1 in Plasmodium berghei, which is essential for erythrocytic schizogony (46). Here we report the functional characterization of pfcrk-3 (PlasmoDB identifier PFD0740w), a gene encoding an unusually large CDK-related protein (1,339 amino acids) whose kinase domain displays maximal homology to those CDKs which, in other eukaryotes, are involved in the control of transcription. The enzyme associates with a kinase activity present in parasite extracts, and this association is detectable even if the catalytic domain of Pfcrk-3 is rendered inactive by site-directed mutagenesis, suggesting that Pfcrk-3 is part of a complex containing other protein kinases. We demonstrate that Pfcrk-3 interacts with a histone deacetylase (HDAC) in parasite extracts, and we provide reverse genetics evidence strongly suggesting that the pfcrk-3 gene plays a crucial role in parasite proliferation during the asexual erythrocytic cycle.
MATERIALS AND METHODS
GST-Pfcrk-3 expression plasmid and site-directed mutagenesis. The Pfcrk-3 catalytic domain was amplified from the 3D7 cDNA clone by using oligonucleotides carrying a BamHI (forward primer, CGGGGATCCGATAAAAGAATGTAAGTTACACA) or a SalI (reverse primer, GGGGTCGACTTATCCTTTTTGATTACTCTGT) site. The PCR product was inserted into the pGEX4T3 plasmid (Amersham Biosciences) at the BamHI and SalI sites. GST-Pfcrk-3-K445M, a plasmid encoding a mutant glutathione S-transferase (GST)-Pfcrk-3 fusion protein with an alteration from lysine to methionine at residue 445, was obtained by site-specific mutagenesis using the overlap extension PCR technique (21). The plasmids were electroporated into Escherichia coli strain BL21, and the inserts were verified by DNA sequencing prior to protein expression.
Expression and purification of recombinant proteins. GST, GST-Pfcrk-3, and GST-Pfcrk-3-K445M were induced in Escherichia coli (strain BL21 codon+) with 0.5 mM isopropyl-β-D-thiogalactopyranoside at 30°C for 4 h. Cells were harvested and resuspended in ice-cold sonication buffer (phosphate-buffered saline [PBS] [pH 7.5], 0.1% Triton, 1 mM EDTA, 1 mM dithiothreitol [DTT]) containing protease inhibitors (1 mM phenylmethylsulfonyl fluoride and Complete mixture inhibitor tablet from Roche) and 100 µg/ml lysozyme. After 10 min on ice, the suspension was sonicated and clarified by centrifugation at 11,000 × g for 30 min at 4°C. The resulting supernatant was incubated with glutathione Sepharose resin (Sigma) for 1 h. The resin was washed four times with sonication buffer and once with a buffer containing 50 mM Tris-HCl (pH 8.7)-75 mM NaCl. The protein concentration was determined using the Bio-Rad dye reagent according to the manufacturer's recommendations with bovine serum albumin as a standard. Aliquots of purified proteins were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Coomassie blue staining.
Parasite culture and preparation of parasite extracts. The P. falciparum clone 3D7 was cultured in vitro by standard methods (25). The parasites were grown in human erythrocytes at 5% hematocrit in complete RPMI 1640 medium in 25-cm 2 ventilated flasks. The flasks were kept in a 37°C incubator under 5% CO 2 . To remove serum and leukocytes, the human blood was washed three times in RPMI 1640 and the buffy coat removed. The medium was changed daily. Parasitemia was measured daily by examining Giemsa-stained blood smears and was kept between 0.5% and 10%.
Pulldown experiments. Glutathione-agarose beads coated with GST, GST-Pfcrk-3, or GST-Pfcrk-3-K445M were incubated in parasite extracts or in RIPA buffer alone at 4°C under mild agitation for 1 h (100 µg of total parasite proteins for 10 µg of recombinant proteins on beads). The beads were then washed three times in RIPA buffer, once in RIPA buffer with 0.1% SDS, and once in a standard kinase buffer containing 10 mM sodium fluoride, 10 mM β-glycerophosphate, 1 mM phenylmethylsulfonyl fluoride, and Complete mixture protease inhibitors. Beads were resuspended in a volume of kinase buffer equal to the volume of beads. A standard kinase assay was then performed in a final volume of 30 µl, and the samples were analyzed by SDS-PAGE and autoradiography.
Kinase assays. Kinase assays were performed as described previously (32). Briefly, reactions (30 µl) were performed in a standard kinase buffer containing 20 mM Tris-HCl (pH 7.5), 20 mM MgCl2, 2 mM MnCl2, 10 mM ATP, and 5 µCi [γ-32P]ATP, using 0.5 µg of recombinant kinase (or immunoprecipitated material [see below]) and 5 µg of substrate (myelin basic protein [MBP] or histone H1). After 30 min at 30°C, the reaction was stopped by the addition of Laemmli buffer, and the reaction product was loaded onto a 12% SDS-polyacrylamide gel. Following Coomassie blue staining, the gels were dried and exposed for autoradiography.
Antibody production, immunoprecipitation, and immunofluorescence analyses of wild-type Pfcrk-3. Chicken immunoglobulin (IgY) antibodies against peptides H2N-CKNRRTLNEDMLSVVD-CONH2 (named VVD) and H2N-PNERDIKTLRNLPCTN-CONH2 (named PNG), derived from protein sequences (residues 539 to 553 and 840 to 855, respectively, encoded by the PFD0740w gene), were synthesized by Auspep, coupled to rabbit albumin carrier, and inoculated into chickens (the PNG peptide was derived from an early version of PlasmoDB, and the sequence was subsequently changed to PNERDIKYLRNLPCWN; the two substitutions appear not to have prevented recognition of Pfcrk-3 by the antibodies [see Results]). The antibodies were isolated and affinity purified on a peptide-affinity matrix as described previously (19). The IgYs were used in immunoprecipitation as described previously (38). Briefly, an anti-Pfcrk-3 antibody bound to protein A Sepharose beads was incubated with an extract from a parasite at a specific stage (late trophozoite or schizont). Because of the low affinity of chicken antibodies for protein A Sepharose beads, the antibody was first incubated with a rabbit anti-chicken antibody before being coupled to protein A Sepharose beads. After incubation, protein A Sepharose bead-bound complexes were washed and assayed for kinase activity.
Immunoprecipitation of HA-tagged Pfcrk-3. Parasites expressing hemagglutinin (HA)-tagged Pfcrk-3 or wild-type 3D7 parasites were obtained by saponin lysis and were then solubilized in M-PER protein extraction reagent (Pierce) containing 25 U/500 µl Benzonase and a protease inhibitor cocktail (Roche) for 20 min at 4°C. The lysates were cleared at 9,000 × g and 4°C for 5 min prior to immunoprecipitation (IP). For each IP, 500 µg protein lysate (in a 100-µl total volume) was incubated with 10 µl anti-HA agarose slurry (Pierce) overnight at 4°C and was then washed three times, for 5 min each time, with 500 µl of cold TBS (25 mM Tris, 0.15 M NaCl [pH 7.2]) buffer.
Histone deacetylase assay. To detect HDAC enzyme activity in the Pfcrk-3 complex, duplicate samples were immunoprecipitated as described above, followed by a final wash in HDAC assay buffer (Millipore Corporation). The beads were further incubated for 16 h at 37°C in 60 µl HDAC assay buffer containing 100 µM acetylated fluorogenic peptide and 500 µM NAD cofactor with or without HDAC inhibitors (10 mM nicotinamide and/or 2 mM trichostatin A). To test for the presence of NAD-independent HDAC activities in the Pfcrk-3 immunoprecipitates, duplicate experiments were also performed in the absence of the cofactor. The beads were pelleted at 9,000 × g for 30 s, and 40 µl of the supernatant was transferred to each well of a half-volume plate. Twenty microliters of activator solution was added for 15 min, and the resulting fluorescence was recorded using a plate reader (excitation filter, 360/40 nm; emission filter, 460/40 nm).
(ii) pCAM-BSD-crk-3-HA. The pCAM-BSD-HA vector was generated by introducing a sequence encoding a single HA tag and the 3′ untranslated region (3′ UTR) of the P. berghei dhfr-ts gene into the multiple cloning site of pCAM-BSD (see Fig. 5A). The 3′ end of the Pfcrk-3 coding region (652 bp, omitting the stop codon) was amplified by PCR from genomic DNA, using primers with PstI or BamHI restriction sites, which allowed insertion of the amplified product into the pCAM-BSD-HA vector.
Parasite transfection and genotype characterization. Ring-stage parasites were electroporated with 60 µg of plasmid DNA (pCAM-BSD-crk-3 or pCAM-BSD-crk-3-HA) as described previously (16). Blasticidin was added to a final concentration of 2.5 µg/ml 48 h after transfection. Resistant parasites appeared 4 to 5 weeks posttransfection and were cloned by limiting dilution.
For Southern blot analysis, total DNA was obtained as follows. Parasite pellets obtained by saponin lysis were resuspended in PBS and were treated with 150 µg/ml proteinase K and 2% SDS at 55°C for 2 h. The DNA was precipitated with ethanol and 0.3 M sodium acetate after phenol-chloroform-isoamyl alcohol (25:24:1) extraction. The DNA was digested with PstI and SwaI, transferred to a Hybond N membrane, and hybridized to the pCAM-BSD or pfcrk-3 probe.
RESULTS
Bioinformatic analysis of Pfcrk-3. Phylogenetic analysis of the P. falciparum kinome (3, 52) identified 18 protein kinases belonging to the CMGC group, including Pfcrk-3. Among the different families that constitute the CMGC group, Pfcrk-3 clearly clusters within the CDK family, and more precisely with the CDK8 to -11 group, which comprises CDKs involved in transcriptional control (Fig. 1). BLASTP analysis confirmed the relatedness of Pfcrk-3 to transcriptional CDKs, with the BUR1 enzyme from yeast giving the highest score (see Discussion). The 11 "invariant" residues that are conserved in most protein kinases (20,29) are present in Pfcrk-3, as are the two signatures that define membership in the protein kinase family (4) (see Fig. S1 in the supplemental material). The polypeptide conforms with a high score to the Pfam protein kinase domain (Pfam entry PF00069; E value, 3.6e−77).
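Such homology searches can be reproduced with standard tools; a minimal Biopython sketch is given below for illustration only. It is not the authors' actual pipeline, and the query string, database choice and hit filtering are placeholder assumptions.

from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical placeholder: the Pfcrk-3 (PFD0740w) kinase-domain sequence would go here.
query_seq = "MDKRM..."  # truncated placeholder, not the real sequence

# Submit a remote BLASTP search against the non-redundant protein database.
result_handle = NCBIWWW.qblast("blastp", "nr", query_seq)
record = NCBIXML.read(result_handle)

# Report the top hits with their raw scores and E-values.
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    print(f"{alignment.title[:60]}  score={hsp.score}  E={hsp.expect:.1e}")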
The predicted Pfcrk-3 polypeptide (1,339 amino acids) is unusually large for a CDK, because the putative catalytic domain contains two insertions (of 198 and 20 amino acids), and there are large N-terminal (378 residues) and C-terminal (335 residues) extensions (see Fig. S1 in the supplemental material). The extensions and insertions are rich in low-complexity regions and do not display homology to any characterized motif. The cyclin-binding motif (PSTAIRE in CDK1, PITALRE in CDK9, and PITQLRE in yeast BUR1) is replaced by AKTYIRE in Pfcrk-3. The Thr14 and Tyr15 residues (human CDK2 numbering) are the targets of negative regulation by phosphorylation in several mammalian CDKs; only the Tyr15 residue is present in Pfcrk-3, since Thr14 is replaced by an Ala residue. Thr160, which is the target of activating phosphorylation by CDK-activating kinases (CAKs), is conserved in Pfcrk-3.
Expression of pfcrk-3 mRNA and protein in blood stages. Reverse transcription-PCR (RT-PCR) using primers flanking the catalytic domain showed that (i) Pfcrk-3 mRNA is present in asexual and sexual blood stages, (ii) the gene structure (2 exons separated by a 100-bp intron) proposed in PlasmoDB is correct, and (iii) transcribed mRNA includes the predicted N- and C-terminal extensions (see Fig. S2 in the supplemental material).
Microarray data available on PlasmoDB (33) indicate that pfcrk-3 mRNA is detectable throughout the asexual cycle (as well as in sporozoites and gametocytes). Northern blot analysis allowed the detection of a single 4-kb pfcrk-3 mRNA species in trophozoites, although the signal was very weak (Fig. 2A). Probing of the same membrane with pfrhoph2 (a gene ex- (36); equal loading was ascertained by ethidium bromide staining, which yielded rRNA bands of similar intensities in all lanes (data not shown). IgY antibodies against two peptides derived from Pfcrk-3 (one in the large insertion and the other in the catalytic domain [see Fig. S1 in the supplemental material]) recognized recombinant Pfcrk-3 in Western blots (data not shown) (see below for a description of the recombinant protein). Western blotting performed with extracts from synchronous parasites using the anti-VVD antibody (directed against the catalytic domain) showed that although Pfcrk-3 mRNA was detectable predominantly during early stages, the protein was detectable throughout the asexual cycle (Fig. 2B). Pfcrk-3 appears to be proteolytically processed from a large precursor in rings, whose size approximates the expected molecular size of the full-length protein (160 kDa), to a protein of around 120 kDa at later stages, with lower-intensity bands (including one at 70 kDa) also detectable (Fig. 2B, left). Determination of the exact processing events would require additional studies using antibodies directed against the various parts of the protein. The antibody directed against the largest insertion in the catalytic domain (anti-PNG antibody) (Fig. 2B, right) yielded a similar profile, suggesting that processing consists of removal of the extensions.
In view of the clean Western blot pattern obtained with the anti-VVD antibody, we next performed immunofluorescence analysis. Consistent with the Western blot data, the Pfcrk-3 signal was detectable in all intraerythrocytic stages of the parasite and largely colocalized with the parasite histone proteins (Fig. 3). In some ring-stage parasites (e.g., the top row in Fig. 3), Pfcrk-3 appears to localize at the periphery of the nucleus, a pattern similar to the "horseshoe" pattern observed by Issar et al. with antibodies against specific histone modifications (24).
FIG. 3. Pfcrk-3 colocalizes with histones. Images captured by laser scanning confocal microscopy show substantial colocalization of Pfcrk-3 (green) with nuclearly localized histones (red). P. falciparum 3D7 parasites were fixed with 4% paraformaldehyde-0.0075% glutaraldehyde and were probed with chicken anti-Pfcrk-3 IgY and mouse anti-histone IgG antibodies. The data in each row represent the fluorescence profile in ring-stage parasites, early trophozoites, late trophozoites, or schizonts, as indicated. DIC, differential interference contrast. Bars, 5 µm in the top row and 2 µm in all other rows.
FIG. 4 (legend, partial). [...] coding sequence. This fragment excludes two kinase subdomains essential for activity, labeled GXGXXG (a glycine-rich region required for correct orientation of ATP) and PE (a proline-glutamate motif in which the latter residue is required for the structural stability of the enzyme). Single-crossover homologous recombination results in a pseudodiploid configuration with two truncated copies, each of which lacks one of these essential motifs. Oligonucleotides are indicated by horizontal arrows, restriction sites by vertical lines, and DNA probes by horizontal bars. BSD, blasticidine deaminase cassette. (B) PCR analysis of the disrupted locus. Total DNA isolated from cloned, blasticidin-resistant parasites transfected with pCAM-BSD-crk-3 (transfectant) or from wild-type 3D7 parasites (3D7) was subjected to PCR using the primers indicated (see panel A for their locations). (Left) Primers OL-3 and -4 (diagnostic for the pCAM-BSD-crk-3 episome or concatemeric inserts); (center) primers OL-1 and -4 (diagnostic for 5′ integration); (right) primers OL-3 and -2 (diagnostic for 3′ integration). (C) Southern blot analysis. Total DNA was extracted from cloned blasticidin-resistant parasites transformed with pCAM-BSD-crk-3 (transfectant) and from wild-type 3D7 parasites (3D7) and was digested with PstI and SwaI. (Left) After transfer to a Hybond membrane, the digested DNA was probed with the blasticidin resistance cassette. (Right) The membrane was stripped, and the digested DNA was probed with a pfcrk-3 amplicon located upstream of the fragment used as an insert in the pCAM-BSD-crk-3 plasmid. The sizes of comigrating markers are given to the left. The band corresponding to the linearized episome was detected in the transfected parasites (left panel, left lane). The band corresponding to the intact wild-type locus was detected in both the transfected and the untransfected parasites (right panel, both lanes).
Pfcrk-3 is crucial for asexual proliferation. In an attempt to determine whether or not Pfcrk-3 is essential for erythrocytic development, P. falciparum 3D7 parasites were transfected
with a pCAM-BSD-based construct containing the central portion of the pfcrk-3 gene in order to disrupt the locus (Fig. 4A). The resulting blasticidin-resistant parasites were then monitored for integration-specific PCR products. No integration-specific products were observed even after prolonged maintenance of the culture (up to 5 months); the only PCR products obtained corresponded to the unintegrated episome (Fig. 4B). Southern blot analysis confirmed the integrity of the wild-type locus in the transfected cells (Fig. 4C). Failure to disrupt the pfcrk-3 gene may signify either that the gene is essential for parasite asexual proliferation or that the vector was unable to recombine with the locus. To verify that the pfcrk-3 locus was indeed accessible to recombination, we attempted to modify the locus without causing loss of function of the gene product. For this purpose, we transfected wild-type parasites with the pCAM-BSD-Pfcrk-3-HA plasmid containing the 3′ end of the Pfcrk-3 coding region fused to a hemagglutinin (HA) epitope followed by the 3′ untranslated region (3′ UTR) from the P. berghei dhfr-ts gene (Fig. 5A). Following single-crossover recombination, we expect an HA-tagged, functional Pfcrk-3 protein to be expressed, but we expect no expression from the wild-type enzyme. PCR analysis of the uncloned blasticidin-resistant population performed 14 weeks after transfection readily detected integration of the construct into the pfcrk-3 locus (Fig. 5B, primers OL-5 and -4), in addition to the episome (Fig. 5B, primers OL-3 and -4). Cloned lines were obtained by limiting dilution, and PCR examination of the genotypes of individual clones at the pfcrk-3 locus demonstrated that several clones had lost the wild-type locus (data not shown). The disappearance of the wild-type locus in these clones and the presence of a modified locus of the expected size were verified by Southern blot analysis. Figure 5C shows the data for one such clone, which displays a modified locus and has either retained the episome or integrated concatemers of the plasmid.
Association of Pfcrk-3 with protein kinase and histone deacetylase activities in parasite extracts. In view of (i) the demonstrated role of BUR1 (the enzyme of the yeast kinome that is the closest relative to Pfcrk-3) in the regulation of chromatin modification (including through histone acetylation) (7) and (ii) the emerging importance of histone deacetylases in the regulation of gene expression in Plasmodium (see reference 50 for a recent contribution to the field), we set out to investigate whether histone deacetylase activity could be copurified with Pfcrk-3 from P. falciparum extracts. Unfortunately, the anti-Pfcrk-3 IgYs did not perform satisfactorily in immunoprecipitation assays (data not shown). We therefore resorted to the parasite line expressing an HA-tagged version of Pfcrk-3 (described in the preceding section). We first verified that an immunoprecipitate obtained with anti-HA antibodies from transgenic, but not from wild-type parasites, contained a protein kinase activity, using mammalian histone H1 (a classical substrate for assaying CDK activity) as a phosphate receiver. Indeed, kinase activity was much higher in the HA immunoprecipitate obtained from parasites expressing HA-tagged Pfcrk-3 than from untransfected 3D7 parasites (Fig. 6A), indicating that Pfcrk-3 is associated, directly or indirectly, with kinase activity (see below).
The autoradiogram displays many high-molecular-weight bands in addition to the histone H1 added as a substrate; these may represent copurifying P. falciparum proteins (including Pfcrk-3 itself) acting as substrates. We then subjected the HA-immunoprecipitated material to a histone deacetylase activity assay in the presence or absence of NAD as a cofactor. As shown in Fig. 6B, no activity was obtained after immunoprecipitation from extracts of wild-type parasites (bar 1); only samples from parasites expressing the HA-tagged protein (bars 2 to 6) exhibited histone deacetylase activity. Addition of the cofactor resulted in an increase in enzyme activity (compare bars 2 and 3), suggesting the presence of both NAD-dependent and NAD-independent HDAC activities in the Pfcrk-3-associated complexes. This finding is consistent with the activity of recombinant PfSir2 as reported previously (45). Interestingly, the Pfcrk-3-associated HDAC activities were sensitive to the sirtuin inhibitor nicotinamide and the class I/II HDAC inhibitor trichostatin A. Together these data indicate that Pfcrk-3 is part of one or more complexes whose components are capable of protein phosphorylation and histone deacetylation and that it may play a role in chromatin modifications by associating with various HDAC enzymes in P. falciparum.
FIG. 5 (legend, partial). [...] Total DNA isolated from blasticidin-resistant parasites transfected with pCAM-BSD-crk-3-HA and from wild-type 3D7 parasites was subjected to PCR using the primers indicated. (Left) Primers OL-3 and -4 (diagnostic for pCAM-BSD-crk-3 episome or concatemeric inserts); (right) primers OL-5 and -4 (diagnostic for 3′ integration). (C) Southern blot analysis. Total DNA was extracted from blasticidin-resistant parasites transformed with pCAM-BSD-crk-3 and from wild-type 3D7 parasites, and 3 µg was digested with PstI and SwaI, run on a 0.8% agarose gel, transferred to a Hybond membrane, and probed with the blasticidin resistance cassette (see panel A). The membrane was stripped and probed with a pfcrk-3 fragment that is not present in the pCAM-BSD-crk-3-HA plasmid (see panel A for the location of the probes, which are indicated by horizontal bars underneath the loci). The band corresponding to the linearized episome or concatemeric insert is detected in the transfected parasites (left panel, lower band). The band corresponding to the wild-type locus (right panel, right lane) is replaced by a larger band of the size expected from the recombination of the plasmid into the locus (both panels, left lanes).
Kinase activity and pulldown assays with recombinant GST-Pfcrk-3. In an attempt to demonstrate that Pfcrk-3 itself possesses kinase activity, a polypeptide containing the catalytic domain plus the C-terminal extension was expressed in E. coli as a 100-kDa GST fusion (GST-Pfcrk-3). No kinase activity of the purified recombinant enzyme was observed under our experimental conditions, whereas other recombinant P. falciparum kinases were active (data not shown). We reasoned that activity of GST-Pfcrk-3 might require interaction with a cyclin-like activator. Four plasmodial cyclin-like proteins have been cloned in our laboratory (Pfcyc-1, Pfcyc-2, Pfcyc-3, and Pfcyc-4), two of which (Pfcyc-1 and Pfcyc-3) activate recombinant PfPK5 (another CDK-related enzyme) in vitro (38). Incubation of Pfcrk-3 with these four different recombinant cyclins did not cause activation of the recombinant enzyme, nor did addition of RINGO, a potent activator of some CDKs, including PfPK5 (32, 42) (data not shown).
The absence of activity of the recombinant Pfcrk-3 catalytic domain even in the presence of cyclins may result from the fact that the fusion protein lacks the large N-terminal extension, or from the absence of additional activator mechanisms, such as phosphorylation by other kinases. To address the latter issue, we repeated the kinase assay following incubation of recombinant Pfcrk-3 in parasite extracts. Pulldown experiments, in which glutathione-agarose beads loaded with GST-Pfcrk-3 were incubated in extracts from asynchronous parasites, washed, and subjected to in vitro kinase activity assays, allowed us to detect histone H1 kinase activity associated with the recombinant enzyme (Fig. 7A, lane 2). Furthermore, under these conditions, we also observed a weak signal at approximately 100 kDa, which is likely to be GST-Pfcrk-3 itself. A much weaker signal was detected when the pulldown from the parasite extract was performed with the GST moiety alone (Fig. 7A, lane 1), and no signal was observed when GST-Pfcrk-3 was incubated in parasite lysis buffer that did not contain parasite proteins (lane 4). The activity observed in Fig. 7A, lane 2 (phosphorylation of histone H1 and of GST-Pfcrk-3), might result either from activation of Pfcrk-3 itself by a component of the pulled complex (e.g., through binding of a cyclinlike regulator or through a phosphorylation event) or from another protein kinase that had been pulled down by GST-Pfcrk-3. To distinguish between these possibilities, we repeated the pulldown experiment using a kinase-dead mutant of GST-Pfcrk-3 (GST-Pfcrk-3-K445M) in which a conserved lysine residue involved in ATP orientation was replaced by a methionine residue (Fig. 7B). This yielded a signal of an intensity similar to that obtained when the pulldown was performed with the wildtype enzyme (Fig. 7B, lanes 2 and 3), suggesting that another plasmodial protein kinase that interacts with Pfcrk-3 was responsible for at least a fraction of the detected kinase activity identified in Fig. 7A. Pulldown experiments thus demonstrate that Pfcrk-3 is associated with kinase activity in parasite extracts and that at least part of this activity is due to another kinase that interacts with Pfcrk-3; we favor the hypothesis that Pfcrk-3 itself also possesses activity in vivo (see Discussion).
DISCUSSION
Bioinformatics considerations. BLASTP analysis showed that the Pfcrk-3 kinase domain displays maximal homology to BUR1 (BLASTP score, 164; E-value, 1e−40), a yeast transcriptional CDK-related kinase previously described as SGV1 and required for recovery from mating pheromone-induced cell cycle arrest (23). When complexed to its cyclin partner, BUR2, BUR1/SGV1 phosphorylates the carboxy-terminal end of RNA polymerase II (40,54,55) and other transcription-related substrates (28) and regulates trimethylation (30). Thus, the BLASTP results are fully consistent with the phylogenetic analysis in which Pfcrk-3 clustered with the transcriptional CDKs, which include SGV1 and human CDK9 (Fig. 1). Another feature shared by Pfcrk-3 and BUR1 is a long C-terminal extension that is usually not found in other CDKs (including mammalian CDK9); remarkably, this extension has exactly the same size (335 amino acids) in both enzymes. Even though the Pfcrk-3 protein appears to be processed during parasite development (Fig. 2B), the C-terminal extension is presumably maintained, since the C-terminally HA-tagged enzyme can be recovered via the tag (Fig. 6).
Many CDKs are negatively regulated by phosphorylation of Thr14 and Tyr15 (human CDK2 numbering), of which only Tyr15 is conserved in Pfcrk-3. Interestingly, the inverse configuration is observed in BUR1 and CDK9, where only Thr14 is conserved; this may indicate different modes of regulation between Pfcrk-3 and transcriptional CDKs in Opisthokonts (the group including yeast and metazoans). In contrast, the conservation in Pfcrk-3, yeast BUR1, and human CDK9 of Thr160, the target of activating phosphorylation by CDK-activating kinases (CAKs), is consistent with the observation that BUR1 is activated by CAKs (55) and suggests that a similar mechanism may regulate Pfcrk-3 activity.
Pfcrk-3 function. Successful in situ 3′ HA tagging (Fig. 5) establishes that the pfcrk-3 locus is recombinogenic, strongly suggesting that the lack of success in obtaining parasites with a disrupted locus is due to the fact that the gene is crucial for asexual proliferation. The possibility remains that Pfcrk-3 inactivation is not strictly lethal but causes a growth rate defect that renders parasites unable to compete with parasites that retain a wild-type locus in the transfected population. In this context, it is noteworthy that in addition to the inability of BUR1 mutants to recover from mating pheromone-dependent cell cycle arrest (23), normal growth under various conditions is affected in BUR1 and BUR2 mutants (54).
Thus, bioinformatics analyses, subcellular localization, and association with histone deacetylase activity all concur to assign Pfcrk-3 as a chromatin-associated CDK involved in the regulation of transcription. It would be of great interest, in order to gain further insight into the precise function of Pfcrk-3, to identify which of the several putative HDACs (one class I HDAC, two class II HDACs, and two class III sirtuins [44]) encoded by the P. falciparum genome is responsible for the activity we observed to be associated with HA-Pfcrk-3. Sensitivity to nicotinamide suggests that a class III HDAC, such as Pfsir2, is involved (45), but caution must be exercised until the enzyme is experimentally identified. The parasites expressing HA-tagged Pfcrk-3 represent a tool that can now be used for affinity chromatography/mass spectrometry-based identification of this and other components of the protein complex that includes Pfcrk-3. Further information on the role of Pfcrk-3 in gene expression could be gained by performing high-resolution colocalization studies to determine whether the spatial distribution of Pfcrk-3 correlates with that of specific histone modifications, as hinted by the "horseshoe" appearance of the protein distribution pattern in ring-stage parasites (Fig. 3) (24).
A recombinant protein containing the catalytic domain and the C-terminal extension, but lacking the N-terminal extension, displayed no enzymatic activity. This is expected for a CDK homologue; for example, no activity was observed with the PfPK5 CDK in the absence of a cyclin (or equivalent) activator (32). However, in contrast to what we observed with PfPK5, addition of recombinant cyclins to the reaction mixture did not result in Pfcrk-3 activation. We showed that a histone H1 kinase activity can be pulled down from parasite extracts using recombinant Pfcrk-3; kinase activity was also recovered when a kinase-dead mutant was used in the pulldown assay, suggesting that another protein kinase is present in a complex that includes Pfcrk-3. Taken together, our data are consistent with the proposition that Pfcrk-3 functions in a large complex regulating transcription. In other systems, it is well established that transcriptional complexes contain several PKs, including CDKs. If, as we suspect in view of the conservation of all residues that are important for activity, Pfcrk-3 is indeed an active CDK, it will of course be crucial for our understanding of its function to identify substrates of the enzyme. The present study generated the tools necessary to proceed with investigations in these areas and constitutes a solid basis for further work aimed at understanding the control of gene expression in malaria parasites.
"year": 2010,
"sha1": "6fca67cc157d576266da9777ad516e337a4d90f1",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/ec.00005-10",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "e65c24bd2b80bedc5cb1c6d2587c681ee9399f12",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17191639 | pes2o/s2orc | v3-fos-license | Light Steel-Timber Frame with Composite and Plaster Bracing Panels
The proposed light-frame structure comprises steel columns for vertical loads and an innovative bracing system to efficiently resist seismic actions. This seismic force resisting system consists of a light timber frame braced with an Oriented Strand Board (OSB) sheet and an external technoprene plaster-infilled slab. Steel brackets are used as foundation and floor connections. Experimental cyclic-loading tests were conducted to study the seismic response of two shear-wall specimens. A numerical model was calibrated on the experimental results, and the dynamic non-linear behavior of a case-study building was assessed. Numerical results were then used to estimate the proper behavior factor value, according to European seismic codes. The obtained results demonstrate that this innovative system is suitable for use in seismic-prone areas thanks to the high ductility and dissipative capacity achieved by the bracing system. This favorable behavior is mainly due to the fasteners and materials used and to the correct application of the capacity design approach.
Introduction
Light timber-frame structures are widespread in the USA, but this structural system is gaining acceptance worldwide because of the rapidity of realization, affordability, flexibility in design and construction, and good energy and structural performance. In seismic-prone regions such as Northern America, Italy, Japan and New Zealand, the application of this wall system as a Seismic Force Resisting System (SFRS) proved to be very efficient, thanks to its lightness and intrinsic dissipative capacity, if connections are correctly designed (e.g., [1]). In timber-frame buildings, energy dissipation is mainly entrusted to the connections between bracing panels and timber frame, such as nails, screws or staples. Several works were conducted with the aim of analyzing the role of connections on the load-bearing capacity and deformation of timber-frame systems subjected to lateral loads. Recent works propose analytical models to predict their behavior [2][3][4]. Germano et al. [5] presented experimental results regarding the contribution of connections to the hysteretic behavior and energy dissipation capacity of the tested walls. Shake-table tests on full-scale light-frame buildings were also carried out in Japan with the aim of evaluating dynamic properties of these timber constructions [6]. In [7], post-and-beam timber buildings braced with nailed shear walls are analyzed and different behavior factor values are proposed, depending on a building's configuration and on the effect of different nail distributions at each storey.
Innovative structural systems are commonly proposed in response to the changing needs of users and the construction industry, with the aim of optimizing the performance of traditional buildings [8]. For example, the coupling of timber with steel elements allows taking advantage of their intrinsic properties and to reduce their limitations, with the effect of improving the overall behavior of the building. Steel and wood can be integrated at component and/or building system level (e.g., steel connections with timber frames or walls, hybrid frames, steel frames and wood diaphragms) [9,10]. Examples of hybrid building systems were already realized and tested. Steel beams or frames combined with cross-laminated timber panels [11,12] or with timber-frame shear walls [13,14] have been studied through experimental tests and numerical modeling. These systems showed a relatively high ductility and demonstrated to be a reliable SFRS.
In light timber-frame systems, non-wood materials such as gypsum and cement plaster are also used as bracing components. The influence of these brittle materials on the performance of wood-frame shear walls is reported in [15].
Specially designed structural skins are commonly used as strengthening technique, e.g., reinforced-cement coating (jacket) for damaged masonry walls. In this case, a reinforced-cement layer is applied on the outer side or on both sides of the wall and it is connected to the masonry with steel anchors [16]. These exterior reinforcement techniques were also applied to light timber-frame systems. Zisi [17] and Zisi and Bennett [18] studied a system where the strengthening element is an anchored brick veneer tied to the exterior wall face of the wood-frame wall. A gypsum wallboard sheathing is added to the interior wall face. Analytical models demonstrated that both brick veneer and wallboard sheathings stiffen significantly the timber-frame shear wall. Results from shake-table tests [19] showed the influence of wall finish materials and gypsum interior wallboard on the behavior of light-frame wood constructions.
The use of novel systems in seismic areas requires the assessment of mechanical properties through experimental tests in order to evaluate their seismic performance [20]. Van de Lindt [21] presented a summary of testing and modeling studies on timber shear walls over the last two decades of the 20th century. More recently, in the United States, the seismic behavior of typical light-frame wood structural systems has been studied [22] to analyze the design and retrofit of existing wooden frame dwellings [23].
In this work, an innovative timber shear-wall system is presented. The light timber-frame system is coupled with an Oriented Strand Board (OSB) panel and an innovative technoprene slab infilled with plaster that improves not only the static and seismic behavior but also the insulation properties of the wall. The vertical load resistance is demanded to multi-storey thin-box steel columns that also allow to reduce the on-site construction time. They are fastened to the vertical panels, the foundations and the floors with steel brackets. Two walls were analyzed with quasi-static cyclic-loading tests according to EN 12512 protocol [24]. Then, numerical simulations allowed analyzing the behavior of a case-study building.
Description of the System
The proposed construction system combines a modular light timber frame with thin-box steel columns and an innovative external bracing system. The system represents an evolution of that described by Pozza et al. [25] where the outer reinforced concrete shelter is substituted by suitably shaped plastic panels infilled with plaster and the timber columns are replaced by steel ones.
In this system, the structural elements have different functions: steel columns support live and dead vertical loads, whereas the OSB panel and the external plastic slab provide the timber frame with resistance against wind and earthquake actions. The frame (see Figure 1) has modular dimensions: the width is equal to 1080 mm and the height is three times the width. A fifteen-millimeter-thick OSB/3 panel conforming to EN 300 [26] is stapled to the timber frame, which is realized with 200 mm × 80 mm horizontal crosspiece beams and 100 mm × 80 mm vertical studs. Both beams and studs are made of C24 timber (EN 338) [27]. The innovative technoprene slab (polypropylene homopolymer reinforced with 18.5% chemically coupled glass fiber), hereafter called skin (see Figure 2), is infilled with plaster and acts as an additional bracing system, collaborating with the OSB panel to provide strength and dissipative capacity to the timber frame. The skin is a square slab of about 108.6 mm width, with a thickness equal to 35 mm, including the plaster layer. It is connected to the frame with three 10 mm × 120 mm screws on each side (steel class 8.8, according to ISO 898 [28]). The main advantages are:
1. The lightweight panel facilitates the realization of the buildings.
2. The special 3D shape improves the adherence with the plaster and allows the creation of a ventilation chamber between the OSB and the external layer (i.e., a continuous natural airflow from the ground level to the roof), improving the durability of the wooden parts and providing good insulation properties to the building.
The steel columns (Figure 3) allow speeding up the construction process and assure resistance to vertical loads. In detail, the columns are placed before the shear walls and can be continuous from the foundation to the roof for low- and medium-rise buildings. In this way, the assembly of the building is optimized in terms of rapidity and on-site management costs. Columns are connected to the frame with 30 mm × 20 mm × 2 mm press-belted L-profiles (continuous along the pillar), which are jointed to the wooden side with 4 mm × 60 mm ring shank nails and to the steel side with 6.3 mm × 19 mm screws (self-tapping screws according to EN 15480 [29], steel class 9.8 according to ISO 898 [28]). The same column-to-frame L-profiles have the function of connecting two adjacent modular shear walls. Therefore, two adjacent modules are indirectly jointed through a steel column. Connection elements, made with steel brackets, are used for supporting floor and roof beams and connecting columns at the foundation, in order to resist uplift of the shear wall. These brackets are made from the same tubular element as the column, 2 or 3 mm thick, and are connected with 6.3 mm × 19 mm self-tapping screws to the column and with 20 mm-diameter anchors to the foundation (see Figure 3). The resistance to base shear forces is provided mainly by three vertical 12 mm × 180 mm wood screws (class 4.8 according to DIN 571 [30]) fixed between the timber frame and the concrete foundation curb. Moreover, three horizontal 12 mm × 100 mm wood screws connect the bottom edge of the skin and the foundation curb. A vertical section of the system is shown in Figure 4.
Test Setup and Procedure
Quasi-static cyclic-loading tests were conducted on two walls realized with the studied construction system, in order to assess the resistance of the system against lateral loads and to evaluate its seismic behavior.
The first wall tested (Wall A) was realized with all the components of the system. In the second wall (Wall B), the OSB panel was removed and only the skin acted as bracing system. In this way, the contribution of the skin to the global seismic response has been evaluated.
Two adjacent panels were assembled to realize the walls, which are 3.24 m high and 2.16 m wide. Tests were conducted subsequently with the same setup and instrumentation. A reinforced concrete foundation was realized to reproduce the base connection of the system. Figure 5 shows the test set-up used for both walls, which was chosen to be consistent with the previous experimental campaign, whose results are presented in [25]. A vertical load of 8.8 kN (reproducing gravitational loads at the first storey of a low-rise building with lightweight floors) was applied to each steel column by three hydraulic actuators. Lateral guides were positioned at the top of the specimen to avoid out-of-plane movement. Displacements of panels and connections were measured with transducers, placed as shown in Figure 5: CH1, CH4 and CH5 measured the base uplift; CH3 the base slip; CH6 the panel-to-panel slip; CH2 the top displacement; the top horizontal force and displacement were applied and measured by the loading actuator and its Linear Variable Displacement Transducer (LVDT). Figure 6 shows the configuration before the test. Quasi-static cyclic-loading tests were performed in displacement control, according to EN 12512 [24], following the protocol shown in Figure 7. This testing protocol requires the definition of the yielding displacement V y of the specimen, assumed equal to 10 mm. Displacement was applied at a 0.2 mm/s rate.
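The EN 12512 amplitude schedule is expressed as multiples of the assumed yield displacement; a minimal helper that generates such a displacement history is sketched below. The sequence of amplitude levels and cycle counts reflects the common reading of EN 12512 and is an assumption here, to be checked against the protocol of Figure 7:

V_Y = 10.0  # mm, yield displacement assumed for these tests

# (amplitude as multiple of V_y, number of cycles) -- assumed schedule, not taken from Figure 7.
SCHEDULE = [(0.25, 1), (0.50, 1), (0.75, 3), (1.0, 3), (2.0, 3), (4.0, 3), (6.0, 3)]

def target_displacements(vy=V_Y, schedule=SCHEDULE):
    """Yield the sequence of peak displacements (+/-) for a quasi-static cyclic test."""
    for multiple, n_cycles in schedule:
        amp = multiple * vy
        for _ in range(n_cycles):
            yield +amp
            yield -amp

if __name__ == "__main__":
    peaks = list(target_displacements())
    print(f"{len(peaks) // 2} cycles, maximum amplitude {max(peaks):.0f} mm")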
Test Results
At the end of the tests, no failure localization was evident, but there was diffuse yielding of fasteners between bracing systems and frame and between frame and steel L-profiles. Thin cracks at the perimeters of the skin panels were also observed. Figure 8 demonstrates the formation of a plastic hinge in the 10 mm × 120 mm wood screws connecting skin to frame. Tests were stopped before the ultimate displacement of the walls was reached, due to limited allowable jacket elongation. Figure 9 shows the hysteresis curves of the two specimens, i.e., the imposed top displacement vs. the corresponding applied force. Figure 10 shows the walls at the end of the tests, at the maximum applied displacement. Wall A reached the maximum displacement without relevant failures or strength degradation phenomena (Figure 9a). This specimen exhibited the hysteretic behavior typical of timber structures, characterized by the pinching phenomenon of steel-wood and wood-wood connections. Moreover, the skin and the OSB panel contributed to the hardening behavior shown.
The shear resistance of the system is limited by the weakest mechanism among the following:
1. The in-plane shear resistance of the skin and OSB panel and the shear resistance of the corresponding connectors.
2. The axial and shear resistance of the connections at the foundation.
3. The shear resistance of the frame-to-column joints.
The main contributions to the ductility of the system are given by the shear deformation of the bracing system and the panel-to-panel relative slip (see Figure 10a). Conversely, base connections should be over-designed due to their brittle behavior and, therefore, small and almost elastic deformations are expected for them, according to the capacity design approach. Figure 9b allows assessing the contribution of the skin to the shear resistance of the whole system. In the cyclic tests for Wall A and Wall B, the same displacements were reached with lower resistance for Wall B. The hysteretic behavior of this wall also confirms the contribution of the skin panel to the energy dissipation capability of the system: the pinching behavior was reduced and the ductility was maintained. The comparison between Figure 9a,b also allows quantifying this contribution in terms of strength: almost 60% of the in-plane shear resistance is provided by the skin.
Analysis of Experimental Results
Results obtained from the cyclic tests were analyzed to define the main mechanical properties of this innovative system. This section discusses the evaluation of the following parameters: yielding point (V y , F y ), maximum displacement and force reached (V max , F max ), stiffness for the elastic and post-elastic branches (k e , k p ), ductility µ, strength degradation at different cycle amplitudes, and viscous damping ratio ν eq . Finally, the evaluation of the ductility class, according to design provisions [31] is reported.
Ductility Estimation
Tables 1 and 2 show the evaluation of yielding points and the main outcomes in terms of strength, stiffness and ductility according to different approaches. These parameters are defined using suitable bi-linearization methods of the envelope force-displacement curve, which is generally not regular and does not exhibit well-defined yielding condition.
Various methods have been proposed to compute this point [24,32]. The first method proposed by EN 12512 [24] is adequate for curves with two well-defined linear branches: the yielding point is defined by the intersection of these two lines. The second method proposed by EN 12512 [24] assumes a hardening stiffness equal to 1/6 of the elastic stiffness, without taking into account the actual hardening slope. Alternative methods to determine the yielding point are based on an energy approach. The equivalent Elastic-Plastic Energy method (EEEP) [33] is based on balancing the strain energy between the actual curve and the bi-linear one, and is characterized by a perfectly-plastic post-elastic branch. Another one is an equivalent-energy method with a post-elastic hardening branch, indicated as EEEH [20].
Tables 1 and 2 (caption residue): parameters and notations according to EN 12512 [24], the EEEP method [33] and the EEEH method [20].
In this work, the envelope of the hysteresis curve was fitted using the analytical formulation proposed by Foschi and Bonac [34]. Then, the mechanical parameters were obtained by applying the bi-linearization methods that better fit the envelope curve. In detail, results for Wall A were fitted with the first EN 12512 method (EN-a), EEEP and EEEH. To obtain a comparison with Wall B in terms of stiffness and strength, only the EEEP method was applied, due to its perfectly-plastic behavior. In fact, the EN and EEEH methods cannot adequately fit elastic-perfectly plastic envelope curves and may therefore provide inconsistent results when treating curves without a well-defined hardening behavior.
The use of different approaches causes a variation of both yielding displacement and force, due to the variation of the elastic and post-elastic stiffness. This means that the ductility value could be strongly influenced by the method of bi-linearization used. In particular, the EEEP method normally over-estimates the yielding force [32], whereas EN methods are generally less conservative in terms of ductility than energy-based ones.
Ductility ratios were evaluated assuming the maximum applied displacement as the ultimate top displacement, i.e., 92 mm for Wall A and 90 mm for Wall B.
Obtained ductility is always higher than 6, which is the minimum value to be assured for the High Ductility Class (HDC), according to Eurocode 8 [31]. Comparing the obtained ductility values with other studies [35,36], it can be seen that the ductility of this novel system is greater than that obtained for massive timber systems (e.g., Cross-laminated Timber-CLT). This is because the shear deformation of the bracing panels allows the achievement of higher ductility than shear walls, in which the displacement capacity is mainly concentrated in the base connections.
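To make the energy-based bi-linearization explicit, a minimal sketch of an EEEP fit and of the resulting ductility ratio is given below. It follows the usual equal-area construction, in which the yield force is chosen so that the elastic-perfectly-plastic curve encloses the same energy as the envelope up to the ultimate displacement; the envelope values and elastic stiffness used here are hypothetical placeholders, not the results of Tables 1 and 2:

import numpy as np

def eeep_bilinearization(disp, force, k_e):
    """Equal-energy elastic-perfectly-plastic (EEEP) fit of a positive envelope curve.

    disp, force : arrays describing the envelope up to the ultimate displacement
    k_e         : elastic stiffness of the bilinear model (e.g., a secant stiffness)
    Returns (F_y, V_y, ductility).
    """
    v_u = disp[-1]
    area = np.trapz(force, disp)                 # energy under the envelope
    discriminant = v_u**2 - 2.0 * area / k_e
    if discriminant < 0:
        raise ValueError("Elastic stiffness too low for an EEEP fit")
    f_y = k_e * (v_u - np.sqrt(discriminant))    # equal-area yield force
    v_y = f_y / k_e
    return f_y, v_y, v_u / v_y

# Hypothetical envelope (placeholder values, not the Wall A measurements):
disp = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 92.0])    # mm
force = np.array([0.0, 15.0, 25.0, 35.0, 42.0, 45.0])  # kN
f_y, v_y, mu = eeep_bilinearization(disp, force, k_e=3.0)  # elastic stiffness in kN/mm, assumed
print(f"F_y = {f_y:.1f} kN, V_y = {v_y:.1f} mm, ductility = {mu:.1f}")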
Strength Degradation and Viscous Damping Ratio
Timber structures assembled using metal fasteners are sensitive to stiffness and strength degradation of connection elements when undergoing cyclic action. The consequent strength reduction is an important parameter to identify the ability of a structure to resist cyclic action and therefore seismic shocks. According to Eurocode 8 [31], this parameter and the ductility ratio are used to define the Ductility Class of a timber structure. Table 3 lists the strength degradation recorded between the first and third cycles of each displacement level of the tested walls and the equivalent viscous damping ν eq . These values were defined according to EN 12512 [24]. Values listed in Table 3 demonstrate that the loss in strength increases with the cycle amplitude. This value is always less than 20%; therefore, given the ductility higher than 6, the system can be classified as High Ductility Class (HDC). Table 3 also lists the equivalent viscous damping ν eq values, which summarize the hysteretic dissipative capacity of a structural system. These values are constantly greater than 18%, confirming the good dissipative capability of this system. Moreover, it can be seen that the values of the equivalent viscous damping for Wall B are higher than those for the entire system (Wall A). These values confirm the contribution of the skin to the dissipative capability of the system and therefore its suitability for use in seismic areas.
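For reference, the equivalent viscous damping quoted above follows the EN 12512 definition, which relates the energy dissipated in a half-cycle to the corresponding available potential energy (reproduced here in its commonly written form, as an aid to the reader):

\nu_{eq} = \frac{E_d}{2\pi \, E_p}, \qquad E_p = \frac{1}{2}\, F_{max}\, v_{max}

where E_d is the energy dissipated in the half-cycle and F_max and v_max are the peak force and displacement reached in that half-cycle.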
Compared with the results obtained for the previous system [25], a slight improvement in terms of both strength reduction and equivalent viscous damping has been achieved.
Numerical Simulations
In this section, the description of the numerical model adopted to simulate the seismic behavior of a case-study building and main results are reported and discussed.
Reliable models of timber structures should take into account the mechanical properties of the connection elements, which affect the global behavior of the building [37]. In detail, values of strength and stiffness of the connections, derived from codes or tests, are sufficient for linear modeling, whereas the post-elastic and hysteretic behavior of the connections also has to be correctly represented to perform nonlinear dynamic analyses (NLDA). This type of analysis allows the intrinsic dissipative capacity and the intrinsic over-strength of the studied system to be evaluated and provides an estimation of the behavior factor, henceforth called "q-factor", which summarizes such capabilities [38].
The main features of the model (geometry, type of elements and hysteretic behavior) are reported, and the calibration based on the experimental test data of Wall A is described. Finally, the analysis of a case-study building was conducted in order to obtain an estimation of the q-factor and to verify that the system belongs to the HDC [31].
Nonlinear Model
The modeling approach for novel timber systems consists of the following steps [39]:
1. Execution of quasi-static cyclic loading tests, according to EN 12512 [24], on an entire wall specimen representative of the studied system, recording the applied lateral force vs. the displacement of the wall and connections, as shown above.
2. Calibration, based on the test results, of each nonlinear hysteretic spring representing the connection elements and the bracing system, in terms of equivalence of hysteresis cycles and dissipated energy for each element.
3. Assembly of linear and nonlinear elements to reproduce the tested wall specimen and to simulate the cyclic-loading test.
4. Comparison of numerical and experimental curves of the wall in terms of hysteresis cycles and energy dissipation, in order to validate the model.
5. Modeling of the case-study building and execution of the NLDA.
In the studied shear walls, both the connection elements and the bracing system are characterized by a hysteretic behavior and show pinched load-displacement responses and strength degradation under cyclic loading, which are typical of timber systems [37]. In order to reproduce their actual response faithfully, the research-oriented numerical code "Open System for Earthquake Engineering Simulation (OpenSees)" [40] was used. The hysteretic material model Pinching4 proposed by Lowes and Altoontash [41] was adopted for each nonlinear element. This model allows replicating the monotonic nonlinear curve with four slopes, the pinching behavior, and strength and stiffness degradation phenomena. It requires the calibration of sixteen parameters for stress and strain on the positive and negative response envelopes, six parameters for pinching cycles and four parameters for strength degradation. Stiffness degradation was not considered. The main modeling hypotheses are that the nonlinear behavior is concentrated in the connection and bracing systems, whereas the wood frame remains elastic. Figure 11 shows the Finite Element (FE) model of the tested shear wall. Each finite-element module consists of a perimeter frame made with elastic trusses braced by diagonal nonlinear springs (Figure 11, element a), which faithfully reproduce the cumulative in-plane response of the stapled OSB panel, the plastic skin and the related connectors (staples and screws). Inelastic springs are also used for the hold-downs (element b), the base shear bolts (element c) and the in-plane vertical joints between adjacent wall modules (element d). Table 4 lists the main parameters for each nonlinear element. Linear compression-only elements are coupled in parallel with the hold-down springs in order to simulate the asymmetric behavior of this component, as shown in Figure 12. Vertical loads and seismic masses are applied at the upper nodes. For further details on this modeling strategy, see [25].
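As an indication of how such a spring can be defined in practice, the following openseespy sketch sets up a single zero-length spring with a Pinching4 material between two coincident nodes. All numerical values are illustrative placeholders and are not the calibrated parameters of Table 4; the unloading- and reloading-stiffness degradation terms are switched off, consistent with the statement above that stiffness degradation was not considered.

```python
# Minimal openseespy sketch of one nonlinear spring with Pinching4.
# All numbers are placeholders, NOT the calibrated values of Table 4.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

# Two coincident nodes connected by a zero-length spring (units: kN, mm).
ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, 0.0)
ops.fix(1, 1, 1, 1)
ops.fix(2, 0, 1, 1)          # only the horizontal DOF of node 2 is free

ops.uniaxialMaterial(
    'Pinching4', 1,
    # positive envelope points (F1, d1 ... F4, d4), placeholder values
    20.0, 2.0, 45.0, 10.0, 60.0, 30.0, 50.0, 60.0,
    # negative envelope points (mirrored placeholders)
    -20.0, -2.0, -45.0, -10.0, -60.0, -30.0, -50.0, -60.0,
    # pinching parameters rDispP, rForceP, uForceP and negative counterparts
    0.4, 0.3, 0.05, 0.4, 0.3, 0.05,
    # unloading-stiffness degradation gK1..gK4, gKLim (disabled here)
    0.0, 0.0, 0.0, 0.0, 0.0,
    # reloading-stiffness degradation gD1..gD4, gDLim (disabled here)
    0.0, 0.0, 0.0, 0.0, 0.0,
    # strength degradation gF1..gF4, gFLim (placeholders)
    0.1, 0.0, 0.1, 0.1, 0.5,
    # energy factor gE and damage type
    10.0, 'energy')

ops.element('zeroLength', 1, 1, 2, '-mat', 1, '-dir', 1)
```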
Simulation of Tests
After the calibration of the elementary nonlinear connections, the experimental cyclic test of Wall A described above was reproduced with the numerical model in Figure 11, by imposing the same horizontal top displacements (loading protocol according to EN 12512 [24]) and vertical load, and recording the displacements at the same positions as the test transducers (Figure 5). The good accuracy of the model was ascertained by comparing the numerical data with the test results. Figure 12 shows the main numerical results superimposed on the experimental cycles, i.e., the lateral force vs. the displacement at the top (Figure 12a), vs. the displacement at the hold-downs (Figure 12b) and vs. the relative displacement at the vertical joint (Figure 12c). The dissipated energy graphs (Figure 13) clearly show that the numerical model never over-estimates the experimental values and that the difference between the numerical model and the test data in the near-collapse condition is less than 10% (Figure 13a). Figure 13b also shows the dissipated energy computed separately for each pulling and pushing phase (i.e., half-cycle). These comparisons allow validation of the model in terms of strength, stiffness and hysteretic behavior of the shear wall (Figure 12a) and of the hold-downs and vertical joints (Figure 12b,c). Moreover, the energy comparison in Figure 13 supports the conclusion that the estimated values of the q-factor are reliable and conservative.
Case-Study Building and Design Criteria
The three-storey CLT building tested on the shaking table during the SOFIE project [37,42] was assumed as the case study. A 2D model of the façade placed in direction X was analyzed (highlighted by the dashed box in Figure 14). To allow a simplified 2D model of the structure, the configuration with symmetric openings was chosen and a rigid-diaphragm assumption was made.
The same precast modular panels subjected to the cyclic test and numerical calibration described above were used in the model to assemble the building, with the resistance of the base connections adapted to the seismic loads.
In order to evaluate the peak ground acceleration (PGA) compatible with an elastic design of the case-study building (PGA d ) and to compare it with the PGA that leads the non-linear model to the near-collapse condition (PGA u ), the elastic response spectrum for building foundations resting on type A soil (rock soil, corresponding to S = 1.0, T B = 0.15 s, T C = 0.4 s, T D = 2.0 s), behavior factor q = 1, and building importance factor γ I = 1 was assumed according to Eurocode 8 [31]. The maximum spectral amplification factor F 0 was assumed equal to 2.5. Then, the unit lateral load-bearing capacity of the shear wall was deduced from the experimental load-displacement curve, i.e., the force corresponding to the yielding of the shear wall (according to the EEEP bilinear model) was assumed to be the conventional design strength of the wall. Therefore, given the overall seismic mass equal to 25.2 t, the PGA d compatible with an elastic design of the structure, without safety factors applied, was equal to 0.21 g, assuming the fundamental period of the shear wall within the plateau range. The hypothesis that the first mode period was in the plateau range was confirmed by the frequency analysis, which provided the fundamental period of the building T 1 = 0.36 s.
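The design PGA quoted above follows directly from the plateau value of the elastic spectrum; the short sketch below back-calculates the corresponding conventional design strength from the stated quantities. The resulting yield force is an inference for illustration only and is not a value reported in the text.

```python
# Back-calculation on the spectral plateau (Eurocode 8, type A soil,
# q = 1, gamma_I = 1).  On the plateau, Se = ag * S * F0, so an elastic
# design requires m * Se <= F_y, i.e. PGA_d = F_y / (m * S * F0).
g = 9.81            # m/s^2
m = 25.2e3          # seismic mass [kg]
S, F0 = 1.0, 2.5    # soil factor and spectral amplification
PGA_d = 0.21 * g    # design PGA [m/s^2]

F_y = m * S * F0 * PGA_d          # implied conventional design strength [N]
print(f"implied yield force ~ {F_y / 1e3:.0f} kN")
print(f"PGA_d check: {F_y / (m * S * F0) / g:.2f} g")
```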
Evaluation of q-Factor
The method proposed by Ceccotti and Sandhaas [39] was used to estimate the q-factor, obtained as the ratio between PGA u and PGA d . The applicability of such a method requires some additional clarifications about the definition of the design and of the near-collapse conditions. Differences between the design and the modeling phases introduce uncertainties in the final values of the q-factor, which can be influenced mainly by the design over-strength. A correct evaluation of the intrinsic q-factor should take into account all the intrinsic capacities of the system, i.e., dissipative capacity, ductility, redundancy, post-elastic hardening behavior and strength reserve. However, any over-strength of the walls induced by the design criteria (e.g., the safety level assumed by designers or simplified analytical design methods) should not influence this value and can be accounted for separately.
In this work, the yielding condition was assumed to coincide with the design condition (i.e., PGA d = PGA y ), in order to evaluate the intrinsic value of the q-factor. Such a design condition clearly depends only on the bi-linearization method adopted to evaluate the yielding limit from the experimental load-displacement curve, whereas it is independent of the design of the structure. The actual over-design subfactor, i.e., the ratio between PGA y and PGA d , which can be obtained via codes and analytical methods, should be multiplied by the intrinsic q-factor to obtain the actual overall q-factor of the building.
The near-collapse condition must be defined to compute PGA u with the NLDA, assuming a criterion based on the maximum displacement capacity that the structure can reach without collapsing. The near-collapse limits were fixed as: (a) a vertical uplift of 18 mm and (b) an inter-storey drift of the bracing system of 2.0%.
A capacity design approach was followed in order to avoid brittle failures and to obtain the maximum ductility of the building at the near-collapse condition. Consequently, the weakest components of the structure were the bracing system and the vertical joint, i.e., the OSB panel-to-frame connections, the skin-to-frame connections and the frame-to-column connections, which in each test and analysis yielded before the other seismic-resisting components, i.e., before the yielding of other connections and the failure of wooden and plastic components. Therefore, the obtained values of the q-factor are valid only if a correct capacity design of the building is applied. Otherwise, the building could fail before reaching the maximum ductility and PGA u could be lower.
The NLDA were carried out considering eight seismic shocks, artificially generated with SIMQKE_GR [43] in order to meet the spectrum-compatibility requirement with the design elastic spectrum. The dynamic equilibrium equations were integrated with a non-dissipative Newton-Raphson scheme and time steps of 0.001 s, introducing an equivalent Rayleigh viscous damping of 2%, according to [37]. By progressively increasing the magnitude of the applied seismic signals, the PGA u values leading to the near-collapse condition were evaluated for all signals. Lastly, the q-factor for each signal was evaluated as the ratio between the PGA u value and the PGA d value. Results are reported in Table 5 and Figure 15, with the average and 5% characteristic values (q 0.05 ) computed according to EN 1990 [44]. The obtained average q-factor was 5.42, confirming the good dissipative capability of the tested system. The 5% characteristic value, equal to 4.62, could be used as a conservative estimate of the intrinsic q-factor.
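The statistics quoted above can be reproduced along the following lines. The per-signal PGA u values below are placeholders (Table 5 is not reproduced here), and the k_n coefficient of EN 1990 Annex D for eight samples is treated as an input to be checked against the code rather than asserted here.

```python
import numpy as np

# q-factor for each artificial accelerogram: q_i = PGA_u,i / PGA_d.
PGA_d = 0.21                                    # design PGA [g]
PGA_u = np.array([1.10, 1.05, 1.20, 1.15,       # hypothetical near-collapse
                  1.00, 1.25, 1.08, 1.18])      # PGAs for the 8 signals [g]

q = PGA_u / PGA_d
q_mean = q.mean()

# 5% characteristic value per EN 1990 Annex D: q_k = m_q * (1 - k_n * V_q),
# with k_n from Table D.1 for n = 8 samples (assumed value, to be checked).
k_n = 2.00
V_q = q.std(ddof=1) / q_mean                    # coefficient of variation
q_005 = q_mean * (1.0 - k_n * V_q)
print(f"mean q = {q_mean:.2f}, 5% characteristic q = {q_005:.2f}")
```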
In recent works concerning light timber-frame buildings, q-factor values equal to 2.5 [5] or in the range between 2.5 and 4.5 [7] were obtained. The innovative system investigated here ensures higher q-factor values due to the presence of staples and additional fasteners (skin-to-frame screws and frame-to-column nails), which yielded diffusely, providing high ductility and dissipation capacity to the system.
Conclusions
Results from experimental tests and numerical simulations demonstrated that the proposed innovative construction system represents a viable technique for high-ductility construction in seismic-prone areas.
Experimental results show that this steel-timber shear-wall system is characterized by a pronounced dissipative behavior when subjected to horizontal cyclic loads, thanks to the response of the bracing system, which is able to deform plastically for at least three fully reversed cycles, with high values of static ductility and a limited reduction of resistance (less than 20%). These properties make this system classifiable as HDC.
Numerical results confirmed the test evidence and the hypothesized ductility class. To design this system with linear analyses, a behavior factor up to 4.5 can be adopted if a rigorous capacity design approach is applied. In detail, all base connections and all brittle components (timber and plastic) must be over-resistant compared to the bracing system and the nails at the vertical joints, which are the most ductile and dissipative components of the shear wall. A damage-limitation-state verification should also be conducted in order to limit the deformation of the system and to avoid unacceptable damage to the building.
Such results are based on the analysis of a single three-storey building. In order to generalize them (e.g., the variability of the q-factor with the building characteristics), variations of the case-study building will be considered in future works. In addition, the obtained results depend on the applied design method, based on test data and on the capacity design approach. A comparison with results obtained by varying the adopted design method could lead to variations in the q-factor. | 2016-03-14T22:51:50.573Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "e300bdadffa76292f4495b8991e028e232b9e47d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/8/11/5386/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e300bdadffa76292f4495b8991e028e232b9e47d",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering",
"Medicine"
]
} |
1946321 | pes2o/s2orc | v3-fos-license | Dimethyl 5,5′-methylenebis(2-hydroxybenzoate)
In the title compound, C17H16O6, the two methyl salicylate moieties are related by crystallographic twofold rotational symmetry with the two benzene rings close to being perpendicular [inter-ring dihedral angle = 86.6 (8)°]. Intramolecular phenolic O—H⋯O hydrogen bonds with carboxyl O-atom acceptors are present, with these groups also involved in centrosymmetric cyclic intermolecular O—H⋯O hydrogen-bonding associations [graph set R 2 2(4)], giving infinite chains extending across (101).
Experimental
The title compound was prepared in two steps starting with salicylic acid. 5,5′-Methylenebis(salicylic acid) was prepared according to a known procedure (Cushman et al., 1990), and was then esterified with methanol and a catalytic amount of sulfuric acid (Méric et al., 1993). Slow evaporation of a saturated solution in dichloromethane gave single crystals suitable for X-ray diffraction.
Refinement
The phenolic H-atom (H1) was located in a difference Fourier map and both positional and isotropic displacement parameters were refined. All other H-atoms were placed in geometrically idealized positions and refined using a riding model with C-H = 0.95 Å (aromatic), 0.98 Å (methylene) or 0.97 Å (methyl) and U iso (H) = 1.2U eq (C) (aromatic or methylene) or U iso (H) = 1.5U eq (C) (methyl).
Figure 1
The molecular structure of the title compound showing atom numbering and displacement ellipsoids drawn at the 30% probability level. The intramolecular hydrogen bonds are shown as dashed lines. Symmetry code: (i) -x + 1, y, -z + 1/2.
Figure 2
The one-dimensional hydrogen-bonded chains in the title compound, with hydrogen bonds shown as dashed lines.
Displacement ellipsoids are drawn at the 30% probability level.
Special details
Geometry. All s.u.'s (except the s.u. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell s.u.'s are taken into account individually in the estimation of s.u.'s in distances, angles and torsion angles; correlations between s.u.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell s.u.'s is used for estimating s.u.'s involving l.s. planes. Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > 2σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 2016-05-12T22:15:10.714Z | 2012-04-18T00:00:00.000 | {
"year": 2012,
"sha1": "2d917f3243716b7e56f4c35e66f0fbdb77139405",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2012/05/00/zs2198/zs2198.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d917f3243716b7e56f4c35e66f0fbdb77139405",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Computer Science",
"Medicine"
]
} |
232775186 | pes2o/s2orc | v3-fos-license | Neuropathic Pain in the Elderly
Neuropathic pain due to a lesion or a disease of the somatosensory system often affects older people, who frequently present several comorbidities. Moreover, elderly patients are often poly-medicated, hospitalized or treated in a nursing home, with a growing risk of drug interactions and recurrent hospitalization. Neuropathic pain in the elderly has to be managed through a multidimensional approach that involves several medical, social and psychological professionals in order to improve the quality of life of the patients and, where present, their relatives.
Introduction
Neuropathic pain in the elderly is a common but often unrecognized clinical issue. In the general population, recent surveys reported prevalence rates of between 6.9% and 10% for neuropathic pain [1], while data on the prevalence among older people are scarce. Due to cognitive impairment and concurrent illnesses, older people often underreport pain, especially to primary care physicians [2]. Moreover, aging involves anatomical and biological changes, such as the loss of neurons in the central nervous system, an increased number of abnormal or degenerating fibers, slower conduction velocity, altered endogenous inhibition and decreased function of neurotransmitters [3][4][5]. These anatomical changes are involved in the altered perception of neuropathic pain among older people. Finally, difficulties in administering questionnaires to patients with dementia or visual and hearing disorders can delay the diagnosis of neuropathic pain.
Despite these age-related organic changes, both younger and older people may be affected by the same chronic diseases that produce the common manifestations of neuropathic pain. This explains why the classification of the different types of neuropathic pain and the first clinical approach do not differ across age groups.
If reported, pain mostly results from the stimulation of pain receptors. This kind of pain is called nociceptive pain, and its treatment is based on common analgesic medications [3,6]. Neuropathic pain is often persistent and more difficult to treat than nociceptive pain. Sometimes more than one medication is needed to achieve pain relief [7][8][9]. Although persistent pain is reported more often by seniors living in nursing homes than by persons living independently [10,11], recent studies demonstrate that there is no association between chronic pain and cognitive or functional status. Perhaps pain is not a feature of aging, but it may contribute to functional deterioration [12].
Sometimes, there are mixed pain syndromes that include nociceptive and neuropathic pain, such as cancer-related pain. Chronic diseases related to neuropathic pain, such
Clinical Evaluation and Diagnosis
In order to choose the most appropriate treatment, it is important to know and identify the underlying mechanisms involved in pain perception (Table 1). Pain problems that arise from the stimulation of pain receptors give rise to nociceptive pain; generally, these receptors are stimulated as a result of trauma, inflammation and/or mechanical deformation. Examples may include ischemia, arthritis, infection, trauma and tissue distortion. Neuropathic pain results from pathophysiologic processes occurring in the central or peripheral nervous system. Some examples are diabetic neuralgia, posttraumatic neuralgia and postherpetic neuralgia. Central sensitization, a phenomenon resulting from synaptic plasticity, is important for the maintenance of chronic pain, both neuropathic and nociceptive. Increasing evidence suggests that this phenomenon is in part due to neuroinflammatory processes, involving both the peripheral and central nervous systems [21]. Moreover, there are other mechanisms of pain, including mixed nociceptive and neuropathic syndromes and pain syndromes of unknown mechanisms. Finally, it is important to consider the presence of psychological factors that may influence pain perception.
Clinical History and Common Symptoms
The most common conditions associated with neuropathic pain in the elderly are painful diabetic neuropathy, post-herpetic neuralgia, radiculopathies, post-traumatic neuralgia and central post-stroke pain. In older adults over 70 years of age, 3 out of 10 patients experience neuropathic pain [17]. However, while not a rare nosographic entity, it is often undiagnosed. The non-diagnosis, and therefore the failure to treat neuropathic pain, has important consequences for the health of the patient, especially the elderly. Because of the pain, the elderly patient can often experience depression, sleep disorders, falls, medication misuse, adverse drug reactions and slow rehabilitation. These complications, often persistent and coexisting, greatly complicate the initial diagnostic picture [22,23]. Moreover, older people may report less pain, often because they attribute pain to aging or do not report it for fear of losing independence or taking additional medication [24].
Neuropathic pain is generally a "persistent" (previously defined as "chronic") pain, which therefore lasts for at least three months. In the diagnostic process it is particularly important to establish a pain history: characteristics, localization, triggering and relieving factors, onset, associated conditions or events that have occurred together with the pain and the treatments that have already been performed. Two symptoms are fundamental in neuropathic pain: allodynia (the perception of a harmless stimulus as painful) and hyperalgesia (the increase in painful perception of a painful stimulus) [25]. It is also important to identify a possible triggering factor, such as trauma (e.g., a fall, fracture or surgery), a recent acute disease (e.g., shingles), recent treatment (e.g., chemotherapy or radiotherapy) or any predisposing conditions such as a pre-existing disease (e.g., diabetes, neoplastic or rheumatic diseases). It is also important to take a family and social history, for example, to assess any psychological trauma (PTSD), the stability of ties with any partners and the use of tobacco, alcohol or other recreational substances. Elderly people can often have difficulty reporting or communicating pain, even in the absence of cognitive impairment [26]. Moreover, and especially in the elderly population, and in emergency period [27], it is important to assess any changes in behavior and to involve family members in the medical history collection in order to have a complete medical history. Finally, it is also important to keep in mind that nociceptive pain, if present, can mask neuropathic pain [28].
Physical Examination
During physical examination, an accurate and systematic neurological evaluation is important, which assesses any focal motor or sensory deficits; osteoarthritis; sarcopenia; gait alterations; reflex alterations that may indicate an alteration of the peripheral nervous system; postural alterations, such as the assumption of an analgesic posture or a defensive attitude towards a part of the body; and groans or paroxysmal cries that may indicate the presence of neuropathic pain [29][30][31][32]. In addition, it is important to look for skin alterations that may be a key sign (for example, the typical diabetic foot ulcer or the dermatomal distribution in shingles). Dysautonomic manifestations such as orthostatic hypotension, delayed gastric emptying or incontinence may imply that the pain is sustained by the sympathetic autonomic system or is a complex regional pain syndrome. Furthermore, especially in the elderly, an assessment of quality of life, functional status and the psychological and social sphere to identify and treat anxiety, depression, social isolation and "disengagement" is a priority [33,34].
Pain Assessment
One way to categorize and quantify pain is by using scales, especially in patients with cognitive impairment. Currently, there are more than thirty scales to assess pain in the elderly with cognitive impairment alone [28,35]. Although there are several scales that can be used to assess neuropathic pain, none has been universally approved for use in patients with advanced cognitive impairment. There are two main types of scales, one-dimensional and multidimensional; the latter generally give more stable values and explore more "domains of pain". Some scales are specific for neuropathic pain, such as the Neuropathic Pain Scale (NPS) [36]. Other scales, although not specific for neuropathic pain, are particularly useful in the geriatric field when a patient suffers from advanced dementia, such as the Hurley Discomfort Scale (DS-DAT), which relies on the clinical presentation of the patient, or self-report scales, such as the Verbal Rating Scale, the Horizontal Visual Analogue Scale and the Faces Pain Scale. All these scales have demonstrated reliability and stability over time. There are also questionnaires administrable to patients, such as the DN4 [37] and painDETECT [38], which are designed to screen for the subjective characteristics of neuropathic pain (such as burning, tingling, sensitivity to touch, pain caused by light pressure, electric shock-like pain, pain to cold or heat and numbness).
Recently, a system has been developed to classify the probability that a pain is neuropathic in poorly responsive patients: based on the presence of specific characteristics of neuropathic pain in the history, objective examination and confirmatory diagnostic tests, respectively, the pain is defined as "possible", "probable" or "confirmed". When a pain is classified as "probably" neuropathic, commencement of treatment is indicated [39]. Some confirmatory tests can be performed at the patient's bedside, which include the evaluation of the presence of neuropathic pain characteristics, assessing the different components of sensitivity (touch, pressure, vibration, pain, temperature) and any alterations (eventually if there is a loss or an increase in somatosensory function). Neuropathic pain may begin with or without a harmful stimulating agent. In addition, more precise instrumental confirmatory tests such as quantitative sensory testing, blink reflex testing and the nerve conduction study can also be used. Furthermore, the diagnostic path of the elderly patient changes significantly depending on whether the patient is capable of self-assessment or not. Pickering et al. propose an algorithm that integrates the anamnestic research of possible causes of pain, comparing it with the physical objectivity and the use of specific questionnaires [40]. Finally, it is important for diagnostic and therapeutic purposes to identify the fragility of the elderly patient, both from a health and socio-economic point of view.
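For illustration only, the grading logic just described ("possible", "probable" or "confirmed" neuropathic pain, based on history, examination and confirmatory tests) can be sketched as a simple decision function. The criteria are deliberately simplified here, and the sketch is not a substitute for the published grading system or for clinical judgement.

```python
def grade_neuropathic_pain(history_suggestive: bool,
                           neuroanatomically_plausible: bool,
                           exam_sensory_signs: bool,
                           confirmatory_test_positive: bool) -> str:
    """Simplified sketch of the probability grading described above."""
    if not (history_suggestive and neuroanatomically_plausible):
        return "unlikely"
    if not exam_sensory_signs:
        return "possible"
    if not confirmatory_test_positive:
        return "probable"        # commencement of treatment indicated
    return "confirmed"
```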
Main Etiological Scenarios
The main etiological scenarios can be classified depending on their origin as peripheral or central (Table 2).
Postherpetic Neuralgia (PHN)
PHN is defined as "pain continuing 90 days past the diagnosis of herpes zoster (HZ) or rash onset" [41]. The annual incidence of acute herpes zoster infection among healthy people under the age of 20 years is approximately 1 per 1000; the incidence is 5 to 10 times greater for those older than 80 years [42]. This also reflects the incidence of PHN in the elderly population [43,44]. Advanced age is therefore not only one of the main risk factors for the onset of PHN but is also associated with a longer duration and greater severity of symptoms [45]. Usually the diagnosis is clinical, but in doubtful cases confirmatory laboratory tests are available (skin sampling, tissue biopsy, serology), although VZV DNA PCR has the highest sensitivity and specificity and has become the gold standard for diagnosis. In general, the symptoms and signs of postherpetic neuralgia do not differ between the adult and elderly populations and are mainly characterized by the triad of constant or intermittent burning, sharp or stabbing pain; allodynia; and anesthesia of the affected area. However, the early recognition of VZV reactivation is important in order to promptly start the specific treatment. This can sometimes be difficult, because in the elderly an atypical presentation compared to younger adults is possible; for example, it may appear as a patch within the dermatome or have a maculopapular appearance without vesicular evolution. After the typical skin rash of the acute phase fades, a purplish-brownish discoloration initially appears in the affected area, followed by pale scarring. This scenario often evolves into an important painful condition, especially in older people, and in the long term can also result in a loss of appetite, sleep and libido, as this clinical picture can last several months, risking a significant reduction in quality of life.
Diabetic Neuropathy (DNP)
Often, within ten years of the onset of diabetes, sensorimotor polyneuropathy appears [46]. When present, DNP causes several effects that significantly affect the individual's life, as the pain markedly reduces the functional state of the patient, with poor mobility and a reduced ability to walk [47]. The diagnosis is usually clinical, even if the presentation can be insidious, and is characterized by pain that worsens at rest, typically during the night or in the early hours of sleep. This pain often also reduces the number of hours of sleep and the quality of sleep itself, which may lead to several complications in the elderly patient. The pain is characterized by shooting, stabbing or poking sensations with typical paroxysmal patterns; moreover, it can be associated with positive sensory alterations (the presence of burning, paresthesias or allodynia) or negative ones (loss of sensitivity). The typical distribution follows a "stocking and glove" pattern. There are also other diabetic sensorimotor neuropathic forms beyond distal polyneuropathy (mononeuropathies, multiple mononeuropathies, plexopathies, autonomic neuropathies, etc.). Important differential diagnoses are vascular claudication, Morton's neuroma, radiculopathy, arthrosis, plantar fasciitis and tarsal tunnel syndrome; however, these have different clinical features and can be discriminated by imaging.
Chemotherapy Induced Peripheral Neuropathy (CIPN)
Chemotherapy-induced peripheral neuropathy (CIPN) is defined as somatic or autonomic signs or symptoms resulting from damage to the peripheral nervous system (PNS) or autonomic nervous system (ANS) caused by chemotherapeutic agents [48]. Common symptoms in CIPN are alterations in tactile, vibratory, thermal and nociceptive sensitivity, and the typical presentation is with painful paresthesias in the extremities and signs consistent with an axonopathy. CIPN is rather important in older adults because it has been shown that those affected may display slower gait velocity, shorter step length and an increased risk of falls [49,50]. Older adults tend to develop chronic CIPN more frequently than younger adults [51]; this may lead to functional decline and a higher risk of falls.
Post-Operative Neuropathic Pain (PONP)
Neuropathic pain resulting from operative procedures is the third leading cause of neuropathic pain in the elderly [17]. This type of pain has a complex nature, and it can be challenging to diagnose because of the variable ways in which older people may present it. Some studies show that pain could persist for over a week in about one-third of post-operative patients [52]. This type of pain is more complex in older adults because of the interaction of potentially multiple factors, such as pre-existing chronic pain, cancer and the surgery itself, which in turn may alter the clinical presentation. Finally, recognition can often be complicated by post-operative delirium.
Complex Regional Pain Syndrome (CRPS)
The currently accepted definition of CRPS is "an array of painful conditions that are characterized by a continuing (spontaneous and/or evoked) regional pain that is seemingly disproportionate in time or degree to the usual course of any known trauma or other lesion [53]. The pain is regional (not in a specific nerve territory or dermatome) and usually has a distal predominance of abnormal sensory, motor, sudomotor, vasomotor, and/or trophic findings" [54]. Pain is the main symptom, and it is described as burning, stinging or tearing. Although CRPS is a rare condition in the elderly, who have less inflammation after injury, it can be very debilitating, limiting the individual's functionality. The diagnosis is based upon clinical features, a significant past medical history and physical examination. The main inciting events are traumas, surgery and fractures. Other important differential diagnoses are the Raynaud phenomenon, rheumatoid arthritis, diabetic neuropathy, deep vein thrombosis, compartment syndrome, peripheral vasculopathy and localized infection.
Compressive Neuropathic Pain (CNP)
Compressive radiculopathy is relatively common in the elderly because degenerative changes in the intervertebral discs typically occur with aging. Moreover, with aging, the discs accumulate repeated mechanical stress over time, predisposing to disc rupture and herniation. The diagnosis is both clinical and radiological [55]. Pain is the main symptom, but its characteristics vary depending on which nerve root is affected. The differential diagnosis of radiculopathy includes the peripheral nerve entrapment syndromes, such as carpal tunnel syndrome, ulnar neuropathy at the elbow, etc. [56,57]. In these cases, a proper clinical evaluation associated with electrophysiological assessment and imaging (ultrasonography) provides useful data for diagnosis and treatment [58,59]. The onset of pain may be insidious or acute; the symptoms vary from a dull ache to a severe burning pain and differ in localization and distribution, but in general they follow the sensory nerve distribution. For example, in some cervical radiculopathies the patient may describe pain located on the medial border of the scapula and radiating to the proximal upper limb; similarly, some lumbar radiculopathy pain is referred to the buttock and radiates down the lower limb. However, as already described, older patients may have some difficulty in describing pain localization, distribution and characteristics because of other co-existing medical conditions, cognitive decline and language barriers, among others.
Post-Amputation Neuropathic Pain (PANP)
Although amputation is no longer a frequent procedure, when it is performed it most often causes chronic pain, which is reported in 95% of cases. Among all the causes of lower-limb amputation, vascular diseases are particularly important, with an increased incidence rate after the age of 65 years [60]. Among the causes of persistent pain after amputation are neuroma formation, phantom limb syndrome and flexion contracture. A neuroma can arise at any site of the peripheral nerve distribution. In phantom limb syndrome, burning, aching or shock-like pain in the amputated limb is the main symptom [61]. Despite the significant pain of phantom limb syndrome, the underlying mechanisms are still poorly understood [62]. There is also a long-term increase in the risk of developing depressive or anxious symptoms or significant post-traumatic psychological stress symptoms [63], which can adversely affect quality of life and facilitate cognitive decline. A flexion contracture can be a consequence of limb amputation. Such a contracture is more common in older patients, especially if they also have cognitive impairment or have had a previous stroke. If the contracture becomes particularly marked (over 25°), it can cause lumbar lordosis and low back pain.
Central Syndromes Central Post-Stroke Pain Syndrome (CPSP)
Central post-stroke pain syndrome is a condition characterized by the triad of allodynia, hyperpathia and sharp, stabbing or burning pain. This syndrome commonly results from an ischemic lesion localized at any point along the spinothalamic or trigemino-thalamic pathways [64]. The site of the lesion affects the subjective localization of pain. The pain can be either evoked or spontaneous and is often associated with dysesthesia [65], generally affecting large areas of the body, including the trunk and face; sometimes, it can affect narrower areas (e.g., a hand). Occasionally, it manifests itself as unilateral facial and/or head pain. It is often an undiagnosed and therefore untreated condition. Currently, the diagnosis is made by exclusion, as no pathognomonic characteristics are evident [66]. Moreover, the diagnosis is often complicated by the fact that, especially in the elderly, pre-existing painful conditions are present and can confound the clinical picture. Criteria, divided into major and minor, have been proposed to facilitate and standardize the diagnosis of CPSP [66]. At this moment, there are no studies that help in the differential diagnosis. Advanced age does not seem to be a predictive factor for CPSP development after a cerebrovascular accident [67][68][69].
Multiple Sclerosis (MS)
Pain is a common symptom in MS, although it is not always neuropathic in nature. Neuropathic pain can have a paroxysmal or persistent course, both of which are often associated with dysesthesia. Among the paroxysmal syndromes are trigeminal neuralgia, the anaconda sign and Lhermitte's sign. Trigeminal neuralgia in MS (TN-MS) has a presentation similar to that in the non-MS population, but TN-MS is about 20 times more frequent than in the non-MS population; it also tends to occur at an earlier age and often has a bilateral distribution. The anaconda sign, or the MS hug, is characterized by dysesthesias and an enveloping, tightening and oppressive sensation around the chest and abdomen, which occasionally can limit respiratory movements. Lhermitte's sign is often elicited by flexion of the neck and is described as an electric shock radiating down the spine or into the limbs. Persistent pain can develop as a result of an increased frequency of paroxysmal pain attacks or directly as persistent pain syndromes, such as chronic migraine, pelvic pain syndromes, chronic tension-type headaches and atypical facial pain [70].
Spinal Cord Injury (SCI)
As a result of spinal injury, in about 60-70% of patients [71,72] a major pain syndrome, refractory to treatment and debilitating, may occur [73]. Nonetheless, the lack of a univocal definition of neuropathic pain subsequent to SCI has already been highlighted as a major challenge. This syndrome is characterized by musculoskeletal, visceral and neuropathic pain, which significantly reduces the quality of life and sleep. Among these, neuropathic pain is the one most often reported as the most severe and excruciating [74]. Spinal cord stimulation devices should be considered in patients with chronic pain, including older ones [75,76]. Studies have shown that, in elderly people, neuropathic pain from long-term spinal damage is associated with a higher incidence of depressive symptoms [77] and an increased use of health services, which tends to exacerbate the economic pressure on the health care system. The presentation of neuropathic pain in SCI is still poorly defined. Advanced age seems to be a risk factor for the development of neuropathic pain derived from spinal cord injury [78].
Trigeminal Neuralgia (TN)
Although primary trigeminal neuralgia is a rare condition, its incidence increases with age, so the TN is one of the more frequently seen neuralgias in the older adult population. It usually presents after the age of 50 but can occur at any age. In general, the manifestations do not differ in the elderly compared to the adult. The diagnosis is generally clinical and is mainly characterized by the presence of three main features of pain: unilateral localization (usually in one or more of the trigeminal branches), the sensation of a short and paroxysmal electric shock (<1-2 min of duration) and the ability to be triggered in response to innocuous stimuli; in particular, the latter feature seems to be the most specific of the syndrome [79,80]. Other possible diagnoses are to be ruled out, because they may mimic primary TN or may be secondary forms, such as trigeminal neuralgia related to herpetic or postherpetic manifestations, post-traumatic TN or other causes such as dental or craniofacial pain.
Fibromyalgia
The definition and diagnosis of fibromyalgia has changed progressively over the past 40 years. At present, according to the IASP definition of neuropathic pain, it would not be included in this group of syndromes. However, recent studies have shown that there may be neuropathic small-fiber injuries [81], which would include, at the very least, fibromyalgia in neuropathic pain syndromes, influencing diagnosis and treatment.
Pharmacological Management
The management of neuropathic pain is based on a multidisciplinary team assessment [82], especially in the elderly, who are often affected by multiple diseases that require a more complex assessment than in younger people. Pain relief and the improvement of the consequences of persistent pain should be the main goals for those who approach this issue. Guidelines for neuropathic pain management suggest a largely pharmacological approach, without distinctions based on age. In these circumstances, pharmacologic therapies represent the first step in pain treatment (Table 3).
Older people are often affected by more than one disease. Physicians should pay attention to drugs for chronic illnesses that have pharmacological interactions with pain medications. Metabolism in the elderly is compromised by the physiological decrease in liver, kidney and heart function. Frailty and multimorbidity require an individualized strategy for pharmacological interventions. Starting with low doses and titrating very slowly are recommended. Some analgesic medications should be administered with caution in acute and long-term pain therapy. For example, NSAIDs (non-steroidal anti-inflammatory drugs) have several adverse reactions, including GI bleeding, renal impairment and platelet dysfunction; therefore, their use should be limited among older people.
Chronic pain is less manageable than acute pain. Patients affected by persistent pain should be aware that complete relief from neuropathic pain is difficult to achieve. For these reasons, pharmacological treatment often needs adjustments. It is necessary to review frequently the classes of medications, dosages, patterns and side effects to obtain an effective recovery. If a specific class of drugs is not efficacious, the use of an alternative one may be more favorable. Nonpharmacological strategies, such as cognitive behavioral interventions, may be useful in combination with drugs or as an alternative to them in long-term pain management. To avoid adverse effects of or addiction to the drugs, a nondrug intervention should also be considered as a bridge to other kinds of treatment.
Patients with persistent pain often experience sleep deprivation and mood disorders. Pain may lead to difficulties in initiating and maintaining sleep. In turn, sleep deprivation decreases the pain threshold and exacerbates anxiety and depressed mood [82]. Nondrug assessment, including sleep restriction therapy, may help to improve the patient's quality of life. Social isolation and loss of autonomy might complicate the pain assessment. Thus, a mixed approach based on pharmacological interventions, cognitive behavioral therapies and rehabilitation is required. In the elderly, more so than in younger people, physiotherapists and occupational therapists assume an important role in the process of care.
Current Pain Medications
After excluding other kinds of pain or anatomical causes of persistent neuropathic pain that require a nonpharmacological approach, such as surgical treatment, physicians should focus on the right therapeutic strategy to obtain pain relief. Based on current guidelines, a primarily pharmacological approach represents the first choice, especially among older people.
The main route of drug administration for neuropathic pain is the oral route. Considering that neuropathic pain is mostly a persistent pain, the oral route of drug administration is the most manageable. It gives an efficacious drug effect, which is prolonged over a specific period of time. Short-acting oral drugs are the most advantageous medications to take for a faster pain relief due to the rapid blood level onset for those forms of episodic pain. The intravenous route is preferable in the exacerbation of neuropathic pain when oral pills are not efficacious. The other routes of administration, such as transcutaneous, sublingual or subcutaneous are indicated when oral drugs are not efficacious or among the elderly with difficulty in swallowing [6,83].
Anticonvulsants
Anticonvulsants, particularly gabapentinoids, represent the most effective class of drugs used for the treatment of neuropathic pain. Owing to their few drug interactions, gabapentinoids (pregabalin and gabapentin) are very manageable. Randomized trials on pregabalin and gabapentin have shown their effectiveness in reducing peripheral neuropathic pain. Gabapentinoids are also indicated in the treatment of postherpetic neuralgia, central neuropathic pain and the neuropathic component of cancer pain. Moreover, pregabalin improves the consequences of persistent pain, such as sleep disorders, depressed mood and impaired social functioning [84]. As a "Level A" indication in the pharmacological treatment of neuropathic pain, pregabalin should be started at a dose of 25 or 50 mg twice a day. At these doses, pregabalin should be effective on neuropathic pain in elderly patients, although the typical dosage effect starts at 150 mg/d [85]. Doses higher than 300 mg/d are typically associated with side effects. Although a quick titration of pregabalin is better tolerated than that of gabapentin, older people should start at a lower dose and increase the analgesic dosage with caution. Gabapentin should be titrated over up to two months, with increases every seven days, to achieve the maximum tolerated dose. The starting dosage is 100 mg three times a day. At the beginning of titration, a single increased bedtime dose should be considered to avoid daytime sedation. The target gabapentin dose is between 1800 mg and 3600 mg/d [86], but it is necessary to evaluate renal function before increasing the daily dosage. Gabapentinoids may cause dizziness, diplopia, concentration disorders and peripheral edema, including ankle swelling. Sodium valproate is also effective on neuropathic pain, but its efficacy is probably less remarkable than that of pregabalin. It is associated with more side effects, such as poor glycemic control and haematological and gastrointestinal disorders. Carbamazepine is indicated as oral treatment only for persistent pain derived from trigeminal neuralgia. It is necessary to increase its dosage very slowly, considering its poor tolerability, with a starting dose of 200 mg a day. An alternative pain medication is oxcarbazepine, which has fewer pharmacological interactions and side effects than carbamazepine. Significant adverse reactions to carbamazepine are sleepiness, dizziness, ataxia, hyponatremia, SIADH, liver damage, aplastic anemia, leukopenia and thrombocytopenia. If carbamazepine and oxcarbazepine are both not tolerated, patients might take lamotrigine for a temporary period of time before a conclusive surgical intervention.
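Purely as an arithmetic illustration of the slow titration described above (gabapentin started at 100 mg three times a day and increased every seven days towards the 1800-3600 mg/d target), the following sketch generates a tentative schedule. The per-step increment is an assumption made for illustration only; any real titration must be individualized and adjusted for renal function and tolerability.

```python
def gabapentin_titration(start_tid_mg=100, step_mg=100, step_days=7,
                         target_daily_mg=1800, max_daily_mg=3600):
    """Illustrative titration arithmetic: returns (day, total daily dose)
    pairs for a three-times-a-day regimen increased every step_days.
    The step size is assumed, not prescribed by the text."""
    day, tid = 0, start_tid_mg
    schedule = []
    while 3 * tid <= max_daily_mg:
        schedule.append((day, 3 * tid))
        if 3 * tid >= target_daily_mg:
            break
        tid += step_mg
        day += step_days
    return schedule

print(gabapentin_titration())   # e.g. [(0, 300), (7, 600), ..., (35, 1800)]
```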
Serotonin Norepinephrine Reuptake Inhibitors (SNRIs)
Serotonin norepinephrine reuptake inhibitors (SNRIs), both duloxetine and venlafaxine, are approved for the treatment of painful diabetic neuropathy as an alternative therapy when gabapentinoids are not efficacious. Duloxetine should be taken at a starting oral dosage of 60 mg per day given in the morning. Among older people, a daily dosage of 30 mg/d could also be effective. It can cause nausea, vomiting, dizziness, somnolence, constipation and increased blood pressure. Duloxetine is also given in combination with gabapentinoids to control the neuropathic component of cancer pain. Venlafaxine has been found to be effective not only in painful diabetic neuropathy but also in all the other forms of painful polyneuropathy [82]. In the short-acting formulation, a dosage of 25 mg is given two or three times a day. For the long-acting formulation of venlafaxine, the starting dose is 37.5 mg or 75 mg once a day. The total daily dosage may be increased to 225 mg a day. Side effects such as sweating, cardiac conduction abnormalities and high blood pressure are seen during titration to the target dose. If a single-drug treatment is not effective for pain relief, adding venlafaxine to gabapentin may be more effective in the treatment of painful diabetic neuropathy [81]. This class of drugs shows its effectiveness not only on neuropathic pain but also on depressed mood, which is a common consequence of persistent pain.
Tricyclic Antidepressants (TCA)
Tricyclic antidepressants (TCAs), including amitriptyline, might be effective in painful diabetic neuropathy and postherpetic neuralgia. Nortriptyline is no less efficacious than gabapentin in postherpetic neuralgia. If a single-drug approach does not give pain relief, considering that TCAs and gabapentinoids have different mechanisms, a combination of these medications may be useful in the management of neuropathic pain [86]. Considering the numerous adverse effects, such as cardiac conduction abnormalities, anticholinergic effects and postural hypotension, which can contribute to falls and fractures, TCAs do not represent the first pharmacological choice among older people.
Opioid Analgesics
Opioid analgesics are considered a second-line therapy in the treatment of acute or persistent nociceptive pain when common analgesics, such as paracetamol and NSAIDs, are not efficacious. An association of opioid analgesics and the other classes of medications mentioned above represents the main pharmacological strategy to treat both the neuropathic and the nociceptive components of cancer pain. Opioid analgesics may also be useful in the management of exacerbations of persistent neuropathic pain. Considering that these drugs are associated with a prolonged half-life and longer duration of action in the elderly, it is possible that a smaller-than-usual dose of opioids could be efficacious. Nevertheless, addiction with chronic use of opioids may represent an impediment to the treatment of persistent pain. Physicians should choose opioids only when alternative drugs are not efficacious. For example, dextromethorphan, tramadol, oxycodone and morphine sulfate are recommended, but with a "Level B" indication, in the treatment of painful diabetic neuropathy [84]. As a weaker opioid, tramadol is indicated in the treatment of painful diabetic neuropathy, trigeminal neuralgia and post-herpetic pain syndrome. The starting dose of tramadol is 50 mg once or twice a day. It is necessary to titrate tramadol very slowly, adding a daily oral dose of 50 mg every 7 days, until a total maximum dose of 300 mg a day is reached. Nausea, vomiting, seizures and orthostatic hypotension are common side effects. Dose-dependent adverse effects are somnolence, constipation and respiratory depression, which persist until the development of tolerance. During this time, patients should pay attention to falls or other mobility accidents. Patients with renal or hepatic impairment should receive a reduced opioid dosage.
Other Medications
An intravenous infusion of lidocaine, associated with rehydration, might be useful in exacerbations of trigeminal neuralgia when the intensity of pain is very high and common oral drugs are not efficacious. An alternative to lidocaine is intravenous treatment with fosphenytoin [87]. Lidocaine might also be useful as a local anaesthetic when pain is well localized. Lidocaine patches should be applied directly on the painful site and be replaced every 12 h. If patches are not tolerated, lidocaine gel (6%) is an excellent alternative, especially for postherpetic neuralgia. Lidocaine patches might be helpful during the titration of oral analgesic drugs. A combination of oral and topical treatment represents an alternative regimen to achieve pain relief. For other kinds of neuropathic pain, the effect of lidocaine is moderate.
High-concentration capsaicin (8%) patches show their effectiveness as a topical analgesic in the treatment of painful diabetic neuropathy and postherpetic neuralgia. The main side effect of these patches is the strong burning sensation on contact with warm fluid, which often requires an anaesthetic pre-medication.
Alternative medications for the treatment of neuropathic pain are cannabinoids, antiarrhythmics, antioxidants (α-lipoic acid), aldose reductase inhibitors, protein kinase C beta inhibitors and transketolase activators (thiamines and allithiamines), but their use is not common.
Surgical Therapies
Radiculopathy surgery, especially lumbar surgery, is currently considered a safe and effective intervention even in the elderly population [88]. Indeed, age is not an independent exclusion factor, but it is important to consider the preoperative risk, which should account not only for age and medical comorbidities but also include a multidimensional assessment of the elderly patient. Efficacy, in terms of satisfaction and pain control after therapy, is similar in the elderly and in younger people. Given the increased prevalence of degenerative spinal disease associated with the aging of the general population, it is important to consider this type of treatment in the elderly as well [89].
Microvascular decompression is an effective procedure in the treatment of trigeminal neuralgia; this treatment has been shown to be as effective in the elderly as in the young. It is unclear whether this approach is riskier in the elderly than in the young; however, an increase in cases of death, stroke and thromboembolism has been noted in the former group [90].
Spinal Cord Stimulation (SCS) is an invasive neuromodulatory technique that should be considered in chronic pain that does not respond to conservative approaches and, in particular, when it is localized to one extremity. Its mechanism of action is still unclear. It has been hypothesized that it may regulate cytokine imbalance [75], but further studies are needed to ascertain that. At the current state of the art, a multifactorial mechanism of action seems most likely. SCS has been shown to be effective in some types of chronic neuropathic pain, primarily in failed back surgery syndrome, but also in multiple sclerosis pain and complex regional pain syndrome. However, it may be less effective in postherpetic neuralgia and phantom limb syndrome [91]. Recent studies have shown how important it is to consider the complexity of pain and the possible overlap of multiple pain syndromes in the elderly patient when choosing a treatment [76].
More recently, dorsal root ganglion stimulation (DRGS) has been increasingly used as a first-line neuromodulation technique or in cases of SCS failure. Some data seem to show better outcomes than SCS, with risks and complications comparable between the two techniques [92]. However, further studies are needed, especially in the elderly population, to evaluate any differences in terms of efficacy and safety.
Nerve decompression surgery is a technique that aims to restore the function of compressed nerves. It has been hypothesized to be effective in some cases of diabetic peripheral neuropathy and superimposed focal nerve entrapment [92]; however, due to the presence of conflicting data, its use is not yet recommended for DNP [93].
Sympathetic nerve block (SNB) is a surgical technique that requires experienced staff and can achieve partial benefit in terms of reducing neuropathic pain. At present, SNB is used effectively to reduce pain in CRPS. Its role in PHN is not yet clear [94]. In some cases, it has also been proposed in DNP [95] and in PANP treatment [96].
Dorsal root entry zone (DREZ) lesioning has been described in the literature as a possible treatment for patients who have failed to respond to more conservative modes of therapy; however, at present, there are no studies demonstrating its safety and efficacy in elderly patients.
Other Therapies
At present, pharmacologic therapies are the mainstay of neuropathic pain management; however, some non-pharmacologic therapies may be effective as adjuvants to pharmacologic treatment [83]. The main non-pharmacological strategies include lifestyle modifications, physical therapies, surgery and microsurgery, cognitive-behavioral therapy and vaccines.
Lifestyle Modifications
Lifestyle modifications can be numerous and vary depending on the specific etiologic scenario; however, they are applicable wherever it is possible to correct an inappropriate behavior or an exposure to a modifiable risk factor, with the aim of significantly reducing the degree of pain, slowing disease progression or improving quality of life.
Physical Therapies
Physical therapies that have demonstrated efficacy in the management of neuropathic pain include the application of superficial and deep level heat and cold, fluid therapy, whirlpool therapy, physical massage, TENS, transcranial magnetic stimulation and transcranial electrical stimulation.
Transcutaneous electrical nerve stimulation (TENS) has been shown to be one of the most effective physical therapies in the treatment of neuropathic pain; for instance, it is used to treat diabetic neuropathy in patients who do not tolerate first-line therapies. Although studies are not numerous, TENS has been shown to reduce pain in diabetic neuropathy [97]. In addition, TENS therapy has demonstrated effectiveness in the treatment of pain following spinal cord injury, of acute, subacute and chronic postoperative pain, and of radiculopathy. Currently, the efficacy of TENS is believed to depend on intensity, frequency, duration and number of sessions. In elderly patients, when applied during exercise, TENS is well tolerated and can generate short-term hypoalgesia, which may have beneficial short-term effects [98].
Transcranial direct current stimulation has been shown in some cases to reduce pain intensity in the elderly [99,100]; however, its use may be limited by practical and regulatory issues.
Transcranial Magnetic Stimulation (TMS) has been shown to be a safe procedure and may be effective in some conditions [101]; studies have demonstrated pain reduction in chronic unilateral neuropathic pain from thalamic stroke, brainstem stroke, spinal cord lesion, brachial plexus lesion or trigeminal nerve lesion [102].
Rehabilitation
Rehabilitation is widely employed in the management of neuropathic pain [6]; the goal of this therapy is to complement and potentiate pharmacological treatment, reducing the dose necessary to achieve an effective analgesic effect and improving the functionality and quality of life of the subject [103]. Although physical exercise is a fundamental element of rehabilitation, there are still no conclusive data on its effectiveness in neuropathic pain; the most studied area is diabetic and pre-diabetic neuropathy. Further research is needed to understand the role of exercise in sensory nerve disorders. The effectiveness of exercise depends on the type of underlying neuralgia [104,105].
Acupuncture
Acupuncture has been demonstrated to be an effective therapeutic option in reducing the pain of diabetic neuropathy and chronic low back pain; however, although safe and generally well tolerated, its effectiveness in other neuropathic pain syndromes, such as post-stroke pain and post-herpetic neuralgia, remains unclear [96]. Acupuncture has been shown to be safe in many studies, although its efficacy compared with drug therapy has not yet been unequivocally demonstrated [106], and in some cases it has proven ineffective. In elderly patients, it appears to be a good adjuvant therapy during the rehabilitation phase following acute disease, improving pain, quality of life, sleep and overall well-being [107].
Cognitive Behavioral Therapy
Cognitive behavioral therapy is used in the treatment of several conditions, in both young and older adults; the strongest evidence concerns anxiety disorders, somatoform disorders and bulimia. In addition, generalized anxiety disorder is not uncommon in the elderly [108]. Over the years, increasing attention has therefore been paid to the role and importance of psychological and social factors in chronic pain, which has contributed to the development of approaches such as cognitive behavioral therapy. Although the efficacy of this approach in the treatment of neuropathic pain is not universally validated, its low cost and safety make it a viable option in the management of neuropathic pain.
Varicella-Zoster Virus Vaccine
Age is the major risk factor for the development of herpes zoster and postherpetic neuralgia. The vaccine reduces the incidence of herpes zoster in the elderly population [109]. In addition, it generates a cell-mediated immune response comparable to that induced by exposure to the Varicella-Zoster virus itself, which in turn is associated with a lesser severity of the disease course and a lower incidence of postherpetic neuralgia [110].
Conclusions
A multidisciplinary team assessment represents the best strategy to manage neuropathic pain, especially among the elderly, in whom biological changes may alter not only the perception of pain but also the response to medications. A correct diagnostic process requires a detailed anamnesis that investigates comorbidities, the consequences of chronic pain and previous pain medications. Questionnaires may help when older people are affected by cognitive impairment. Medications are more effective than non-pharmacological therapies, but it is often necessary to combine them, especially when neuropathic pain is not responsive to drugs. Rehabilitation and cognitive behavioral therapy represent complementary options in the management of pain-related symptoms. Neuropathic pain is mostly a persistent pain in which therapies are less efficacious than in nociceptive pain. Thus, particularly among older people, who suffer from many diseases and often feel abandoned, physicians play a fundamental role in supporting patients throughout the whole process of care.
Conflicts of Interest:
The authors declare no conflict of interest.
226242561 | pes2o/s2orc | v3-fos-license | Stochastic effects on the dynamics of an epidemic due to population subdivision
Using a stochastic susceptible–infected–removed meta-population model of disease transmission, we present analytical calculations and numerical simulations dissecting the interplay between stochasticity and the division of a population into mutually independent sub-populations. We show that subdivision activates two stochastic effects—extinction and desynchronization—diminishing the overall impact of the outbreak even when the total population has already left the stochastic regime and the basic reproduction number is not altered by the subdivision. Both effects are quantitatively captured by our theoretical estimates, allowing us to determine their individual contributions to the observed reduction of the peak of the epidemic.
Simple models for the spread of infectious diseases are useful for the quantitative characterization of an epidemic as well as for forecasting future infection numbers and guiding decisionmaking for containment. Different extensions and refined versions of these models have been created to extract various factors that may be critical for the dynamics and prevention of epidemics. Although it is well known that stochastic fluctuations can alter the dynamics as well, they are often neglected at higher infection number levels such that the contact rates and basic reproduction number become the central quantities of interest. In contrast, we investigate a situation in which stochastic effects can quantitatively change the course of an epidemic when infection numbers are large and contact rates remain unaltered. We consider an extended Susceptible-Infected-Removed (SIR) model in which a large population is subdivided into a certain number of sub-populations, each containing only a few infected individuals. For the limiting case of perfect isolation, i.e., when the epidemic evolves independently in each sub-population with no cross-infections, we derive analytical estimates for these stochastic effects that together recapitulate the results of extensive numerical simulations. Our central quantity of interest is the peak total number of simultaneously infected individuals, which we compare between the subdivided population and a single large population with an identical reproduction number.
Our analysis suggests that regional isolation can resurrect certain stochastic effects and thereby contribute to effective containment, regardless of the initial distribution of infected individuals.
I. INTRODUCTION
Generic models such as the Susceptible-Infected-Removed (SIR) model conceived by Kermack and McKendrick 1 are indispensable for characterizing the bulk properties of epidemics and determining the influence of crucial parameters on the dynamics. The contact rate between individuals, which is proportional to the reproduction number R 0 , usually plays a crucial role, as its reduction through containment measures directly slows the spreading of the disease. On a large scale (states or countries), numbers of infections during the height of an epidemic are usually large such that deterministic mean-field descriptions are appropriate. These have been widely used to track the course of epidemics and the effect of interventions, for example, for the current spreading of COVID-19. 2 While many details about the biology and modes of infection of a specific disease are important for its dynamics in detailed models, 3 even basic SIR models have been extended in various conceptual directions. Besides various general topologies of the underlying contact and mobility networks, 4-6 so-called meta-population models have been used to separate the disease dynamics within local environments from its spread between them. 7 It has been shown that it is possible to calculate effective quantities for the whole population, such as reproduction numbers (i.e., a threshold theorem), 8 the final attack ratio, 9 and criteria for persistence 10 in deterministic models of such subdivided populations. Another important deviation from simple bulk behavior arises through stochasticity (see Ref. 11 and references therein). Stochastic versions of extended SIR and related models have been used to calculate corrections to the outbreak threshold, 12 consequences of stochasticity for contact tracing, 13 and other control schemes, 14 to only name a few. Stochastic effects are also observed in agent-based 15 and meta-population models. 16,17 Here, we seek to study the joint effect of subdivision and stochasticity on the overall magnitude of an epidemic for a fixed initial number of infected individuals in the total population. In general, subdivision can be expected to artificially boost fluctuations, as the infection numbers in each sub-population can be small even when the total number of infections in the entire population is large. We would like to quantify the ability of such increased stochasticity to reduce the impact of the epidemic. We deliberately refrain from applying any form of traditional containment in our model, such as further reductions in the contact rate or contact tracing. 18 In particular, we design the subdivision such that the deterministic dynamics of the epidemic in the subdivided population remains unchanged compared to a single large population, as outlined in Sec. II. This allows us to compare the peak number of infected individuals in the entire population for each scenario both analytically and numerically in order to extract the specific effects of stochasticity triggered by subdivision.
A. Reaction system
We consider a population of N individuals with SIR dynamics, 1 with S, I, and R referring to susceptible, infected, and removed individuals, respectively, where removal with per capita rate k happens due to recovery, quarantine, or death. The rate b corresponds to the number of contacts per unit time an individual has with a random other individual in the population, multiplied by the probability that a contact between a susceptible and an infected individual leads to transmission. The total transition rate from S to I per unit time is, therefore, (b/N) S I. The two rates b and k are related to the basic reproduction number R_0 = b/k, which is independent of population size. The deterministic epidemic threshold above which an outbreak occurs is R_0 = 1, and we assume R_0 > 1 throughout this study. The population is subject to the total constraint N = S(t) + I(t) + R(t), where we denote the number of individuals in each state by the same letters. For simplicity, all initial conditions assume R(0) = 0 such that they are uniquely defined by N and the number of initially infected I_0 = I(0).
When a population of total size N is split up into N_s sub-populations, we simulate N_s separate copies of the system (1), with N, S, I and R replaced by N_i, S_i, I_i, and R_i, respectively, where the index i refers to the different sub-populations and N_i = N/N_s. The sums N = Σ_i N_i, S = Σ_i S_i, I = Σ_i I_i, and R = Σ_i R_i refer to the population totals. The initial number of infected individuals is distributed either uniformly or randomly across the N_s sub-populations. All numerical results in this study are obtained from stochastic simulations of Eq. (1) using the Gillespie algorithm. 19 To account for the inherent stochasticity of the system, several realizations, i.e., identical simulations with different random number generator seeds, are simulated for each parameter set. We report the number of realizations as well as distributions, averages, and standard deviations across the results as appropriate. Our main figure of interest is the peak number of infected individuals I_max or, equivalently, the peak infected fraction of the population γ = I_max/N. These could be considered a measure for the impact of the epidemic and the strain on the health care system and public health resources such as the agencies that perform contact tracing and testing.
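For concreteness, the simulation setup just described can be sketched in a few lines of Python. The following is a minimal illustration (not the authors' code) of an exact Gillespie simulation of fully isolated SIR sub-populations with the initial infections placed uniformly at random; the rates b = 0.2 and k = 0.14 anticipate the numerical example of Sec. III B, while the value of I_0 and the time grid are arbitrary choices made only for this sketch.

```python
import numpy as np

def gillespie_sir(N, I0, b, k, t_grid, rng):
    """Exact stochastic simulation (Gillespie algorithm) of one SIR
    (sub-)population with the reaction scheme described above:
    S -> I at total rate b*S*I/N and I -> R at total rate k*I.
    Returns the number of infected individuals sampled on t_grid."""
    S, I, t = N - I0, I0, 0.0
    out = np.zeros(len(t_grid), dtype=int)
    j = 0
    while I > 0 and j < len(t_grid):
        a_inf = b * S * I / N
        a_rem = k * I
        a_tot = a_inf + a_rem
        t_next = t + rng.exponential(1.0 / a_tot)
        while j < len(t_grid) and t_grid[j] < t_next:
            out[j] = I          # record the state that holds until the next event
            j += 1
        t = t_next
        if rng.random() < a_inf / a_tot:
            S -= 1
            I += 1
        else:
            I -= 1
    return out                   # after extinction the remaining grid points stay 0

def total_peak_subdivided(N, I0, b, k, Ns, t_grid, rng):
    """Split N into Ns equal, fully isolated sub-populations, place the I0
    initial infections uniformly at random, and return the peak of the
    summed number of infected individuals."""
    counts = rng.multinomial(I0, [1.0 / Ns] * Ns)
    I_total = np.zeros(len(t_grid), dtype=int)
    for n in counts:
        I_total += gillespie_sir(N // Ns, int(n), b, k, t_grid, rng)
    return I_total.max()

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 600.0, 1201)
peak = total_peak_subdivided(N=1_000_000, I0=100, b=0.2, k=0.14, Ns=10,
                             t_grid=t_grid, rng=rng)
print("peak infected fraction:", peak / 1_000_000)
```

Averaging such runs over many seeds corresponds to the ensembles reported in the figures.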
B. Deterministic behavior
The reaction scheme (1) results in the deterministic mean-field equations dS/dt = −(b/N) S I, dI/dt = (b/N) S I − k I, dR/dt = k I (2), which give rise to two regimes in the dynamics. During the initial regime, I starts off from an initial value I(0) = I_0, rises exponentially ∼ I_0 exp[(b − k)t], and saturates to a peak value I_max = γ N, where the approximation γ ≈ 1 − (1 + ln R_0)/R_0 (3) for the maximum fraction of infected individuals 0 < γ < 1 is valid as long as the entire population is initially susceptible; i.e., S(0) ≈ N. 20 In the secondary regime, when the recovery dynamics dominates, I decays to zero exponentially, as the number of susceptibles decreases below the value necessary to sustain spreading. In this deterministic system, a subdivision into N_s smaller sub-populations of size N/N_s will have no effect since Eq. (2) remains invariant when S, I, R, and N are scaled by the common factor 1/N_s. Relative to their individual sub-population sizes N_i, the same dynamics are observed in all sub-populations, and the dynamics of the population totals S = Σ_i S_i, I = Σ_i I_i, and R = Σ_i R_i are identical to those of a single large population. Therefore, the subdivision is not analogous to cutting links in a contact network but rather a redistribution of them, since we assume that the contact rate b remains unchanged. This conservative assumption means that individuals in each sub-population still have the same number of contacts per unit time as they had in the large population despite the smaller number of individuals to choose from. While, in reality, the contact rate b might decrease in such a situation and deterministically reduce R_0 and, therefore, I_max, we intentionally keep it constant here to extract the effects of stochasticity.
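As a quick illustrative check of the deterministic regimes (again a sketch rather than the authors' code), one can integrate the mean-field equations numerically and compare the observed peak fraction with the closed-form approximation of Eq. (3); the parameter values below are assumptions chosen to match the examples used later in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Deterministic mean-field SIR: dS/dt = -(b/N)*S*I, dI/dt = (b/N)*S*I - k*I.
# Integrate the ODEs and compare the observed peak fraction with the
# closed-form approximation gamma = 1 - (1 + ln R0)/R0.
b, k, N, I0 = 0.2, 0.14, 1_000_000, 100

def rhs(t, y):
    S, I = y
    flow = b * S * I / N
    return [-flow, flow - k * I]

sol = solve_ivp(rhs, (0.0, 1000.0), [N - I0, I0], max_step=0.5)
gamma_numeric = sol.y[1].max() / N
gamma_formula = 1.0 - (1.0 + np.log(b / k)) / (b / k)
print(gamma_numeric, gamma_formula)   # both close to 0.05
```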
C. Stochastic behavior
Deterministic behavior only applies if S and I are both large, particularly only after the number of infected people I has risen to appreciable levels. If I is still low, stochastic fluctuations determine whether I will "take off" and develop exponential behavior even if b > k. This effect was already considered shortly after Kermack and McKendrick introduced the original SIR model 21 and is now wellknown. However, in a subdivided population, it can significantly alter the course of the outbreak in the total population if the initial number of infected individuals in a single sub-population is low enough (even if the number is large in the total population). An example for populations of N = 1 000 000 individuals split into N s = 10 sub-populations is shown in Figs. 1(a) and 1(b), along with the expected dynamics of a single large population (red curve). In one example set of ten sub-populations, only three sub-populations (blue, yellow, and green curves) experience a significant outbreak, and they are desynchronized with a broad distribution of individual peak times [ Fig. 1(c)]. Spontaneous extinction and desynchronization lead to an average behavior across 100 simulations with a significantly reduced peak (turquoise curve). Note that, on average, both the undivided large population and the sum of the smaller sub-populations initially exhibit comparable exponential growth in the number of infected individuals [ Fig. 1(b)]. This means that, while extinction in some sub-populations and fluctuations in timing happen early on, their effect is only seen later during the saturation phase.
During the initial phase, we can assume that S ≈ N and that I follows a simple birth-death process with rates b for "birth" and k for "death." We shall use this analogy for derivations throughout this study and in Appendixes A-D. We briefly recapitulate one important result from the theory of branching processes here, namely, that an exponentially growing population that starts from an initial condition of I(0) = 1 has a finite extinction probability, which asymptotically approaches k/b at long times; see the derivation in Appendix A. This means that with probability p_ext(1) = k/b, the dynamics never enters the exponentially growing deterministic regime but decays back to zero due to number fluctuations. 22 Therefore, for two independent lineages in the same population, the extinction probability is p_ext(2) = (k/b)^2, and, similarly, p_ext(n) = (k/b)^n, as long as the total population is sufficiently large such that the lineages do not interfere with each other. We will use these extinction probabilities and other statistics of the birth-death process to derive analytical approximations for the effects of extinction and desynchronization on the stochastic dynamics.
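The quoted branching-process result can be illustrated numerically with a short simulation of the embedded jump chain of the birth-death process; this is only a sketch, and the cutoff of 200 individuals used as a proxy for a lineage having "taken off" is an arbitrary choice for this illustration.

```python
import numpy as np

def goes_extinct(b, k, n0, cap, rng):
    """Embedded jump chain of a linear birth-death process (per-capita birth
    rate b, death rate k): each event is a birth with probability b/(b+k) and
    a death otherwise.  Returns True if the lineage dies out before ever
    reaching `cap` individuals (a proxy for 'taking off')."""
    n = n0
    while 0 < n < cap:
        n += 1 if rng.random() < b / (b + k) else -1
    return n == 0

b, k = 0.2, 0.14
rng = np.random.default_rng(1)
trials = 5_000
for n0 in (1, 2, 5):
    p_sim = np.mean([goes_extinct(b, k, n0, cap=200, rng=rng) for _ in range(trials)])
    print(n0, round(p_sim, 3), round((k / b) ** n0, 3))   # simulation vs (k/b)**n0
```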
III. RESULTS
A. Theoretical estimates for isolated sub-populations
Extinction
To obtain an estimate for the effect of extinction and the distribution of infected individuals, we add up the maximum numbers of infected individuals in the sub-populations. Each of these peaks is approximately γN/N_s, but only if the infection does not stochastically become extinct during the initial stages. For large population sizes and values of b/k that result in a significant peak, extinction usually happens well before the peak is reached in other sub-populations (see Appendix B), such that these populations do not contribute. Therefore, on average, the contribution of each sub-population will be I_s,max(n) = γ[1 − p_ext(n)]N/N_s, where n indicates the number of initially infected individuals in the sub-population and p_ext(n) = (k/b)^n is the probability that they go extinct without entering deterministic growth, as discussed above. Therefore, the total peak number of infected individuals in all the sub-populations due to extinction is given by I_max^ext = Σ_n g_n I_s,max(n), where g_n is the number of sub-populations with n initially infected individuals. Note that N_s = Σ_n g_n. Combining the above equations, we obtain I_max^ext = (γN/N_s) Σ_n g_n [1 − (k/b)^n], which manifestly shows that I_max^ext ≤ γN holds. Note that this reduction is exclusively due to extinction, and the simple summation of the individual maxima neglects the possible desynchronization between sub-populations, which we will consider further below. For example, for the ideal case where each sub-population contains at most one infected individual, we have I_max^ext = γ(1 − k/b)(g_1/N_s)N, where g_1 = I_0 is the total number of initially infected individuals in the large population (for this to make sense, N_s ≥ I_0 is required).
Since γ corresponds to the case where the population was not split up, the peak number of infected can, therefore, be reduced by increasing the number of sub-populations N s or by bringing b closer to k. Note that this is in addition to a potential decrease in the deterministic peak fraction γ of infected [cf. Eq. (3)] that would result if the subdivision also led to fewer contacts (i.e., a reduced rate b), which we have conservatively assumed not to be the case here.
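A hypothetical helper implementing the extinction-only estimate derived above might look as follows; the function names and the example initial distributions are illustrative assumptions, not part of the original analysis.

```python
import numpy as np

def gamma_det(R0):
    """Deterministic peak infected fraction, gamma = 1 - (1 + ln R0)/R0
    (valid when essentially the whole population is initially susceptible)."""
    return 1.0 - (1.0 + np.log(R0)) / R0

def gamma_ext(b, k, g):
    """Extinction-only estimate of the peak fraction for the total population:
    a sub-population that starts with n infections contributes its full peak
    gamma*N/Ns with probability 1 - (k/b)**n and (approximately) nothing
    otherwise.  `g` maps n -> number of sub-populations starting with n."""
    Ns = sum(g.values())
    gamma = gamma_det(b / k)
    return gamma * sum(gn * (1.0 - (k / b) ** n) for n, gn in g.items()) / Ns

b, k = 0.2, 0.14
print(gamma_det(b / k))                      # undivided reference peak, ~0.050
print(gamma_ext(b, k, {1: 100}))             # ideal case: one infection per sub-population
print(gamma_ext(b, k, {0: 80, 1: 15, 2: 5})) # an arbitrary uneven initial distribution
```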
Desynchronization
The independent summation of maxima in different sub-populations is a conservative estimate since fluctuations can lead to stochastic desynchronization and thus to a further reduction of the peak value. The distribution of peak times in the sub-populations from the previous example is shown in Fig. 1(c). The temporal shift between the different sub-populations can be attributed entirely to stochastic fluctuations in the initial phase of the dynamics. Assuming that this time shift accumulates while the dynamics can still be modeled as a pure birth-death process without saturation effects, we can derive the probability distribution for the deviation from the mean peak time, Δt_peak ≡ t_peak − ⟨t_peak⟩ [Eq. (8), a Gumbel-type distribution; see Appendix C for details], where n is the initial number of infected individuals in the sub-population and τ̄ = ln(γ̄ k/b)/(b − k), with γ̄ being the exponential of the Euler constant. Note that n here was only used to incorporate the extinction probability, while the shape of the distribution is based on a single initially infected individual. Nevertheless, this result is in excellent agreement with the measured distribution for randomly distributed infected individuals [see the dashed line in Fig. 1(c)].
We can then use this distribution to obtain a quantitative estimate for the additional peak reduction due to desynchronization. For this purpose, we approximate the deterministic time evolution of I in the vicinity of the peak as I(t) ≈ Nγ exp[−(1/2) bkγ (t − t_peak)^2], which is valid as long as S(t) remains of order ∼N (see Appendix D); i.e., b/k is not too large. In the limit of many superimposed peaks of this shape, with the variability of t_peak given by Eq. (8), the peak is reduced by an additional factor α^(−1), with α = [1 + π^2 bkγ/(6(b − k)^2)]^(1/2). The peak number of infected individuals, with both stochastic effects of the confinement taken into account, similarly becomes I_max^con = Nγ_con = I_max^ext/α. It is interesting to note that this reduction factor is bounded from below by lim_{R_0→1} α^(−1) = [12/(12 + π^2)]^(1/2) ≈ 0.7407. The desynchronization effect is, therefore, much more limited than the extinction effect.
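The desynchronization factor and its limiting behavior can be evaluated directly; in this sketch, the square-root form of α follows the reconstruction used above (consistent with the quoted bound of about 0.7407 and with Appendix D), and the parameter values are again only examples.

```python
import numpy as np

def alpha(b, k):
    """Desynchronization factor alpha = sqrt(1 + pi^2*b*k*gamma/(6*(b-k)^2));
    the summed peak is reduced by the additional factor 1/alpha."""
    R0 = b / k
    gamma = 1.0 - (1.0 + np.log(R0)) / R0
    return np.sqrt(1.0 + np.pi**2 * b * k * gamma / (6.0 * (b - k) ** 2))

print(1.0 / alpha(0.2, 0.14))             # ~0.78 for the parameters used in Sec. III B
print(np.sqrt(12.0 / (12.0 + np.pi**2)))  # lower bound ~0.7407, approached as R0 -> 1
print(1.0 / alpha(0.15, 0.14), 1.0 / alpha(1.4, 0.14))  # closer to / further from the bound
```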
B. Numerical results
We consider as an example a region with a population of 8 000 000 and 500 infected individuals (I 0 /N ∼ 6 · 10 −5 ) and assume a removal rate of k = 0.14, corresponding to a realistic mean removal time of 1/k ≈ 7 days for the recent epidemic 23 (particularly if symptomatic individuals are quickly removed from the infectious pool through quarantining). Let us further assume that the infectious contact rate is b = 0.2 (> k). This corresponds to a substantial reduction of R 0 from its initial value of 2-2.5 24 through mild measures such as social distancing, although the epidemic would still spread exponentially, with infection numbers doubling about every 12 days.
If this population is allowed to mix homogeneously, the dynamics will evolve according to the deterministic prediction with a peak around 5% infected individuals (blue data in Fig. 2). If, instead, the population is split up and the 500 infected people are distributed randomly across the sub-populations, the peak percentage of infected individuals decreases to around 3% (for 100 sub-populations of 80 000 people) or 1% (for 500 sub-populations of 16 000 people) on average (red and yellow, respectively). In all cases, the analytical estimate that only considers the extinction effect, Eq. (6), is only an upper bound for the peak percentage of infected individuals in the total population, while also considering desynchronization according to Eq. (9) yields a good estimate of the typical peak values. The peak time distributions for the three different ways of splitting up the population shown in Fig. 2(c) also agree with the analytical estimate of Eq. (8). Note that these distributions are not normalized since a significant fraction of sub-populations experience extinction of the epidemic and, therefore, do not exhibit a peak. There is also a subtle, non-monotonic effect on the termination time of the epidemic [Fig. 2(d)], whose distribution is broader when the population is split up but does not change position appreciably. Note that the reduction for N_s = 500 sub-populations in Fig. 2 is comparable to (or even slightly lower than) the case where the 500 infected individuals are not distributed randomly across the sub-populations, but each sub-population contains exactly one infected individual. In this case (see Fig. 3), there are no sub-populations with initially zero infected individuals, implying that the reduction in the peak value compared to the large homogeneous population is strictly due to extinction and desynchronization, which are again well predicted by the analytical estimates.
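The quoted peak percentages can be recovered, at least approximately, by combining the extinction and desynchronization estimates; the short calculation below is an illustrative back-of-the-envelope reconstruction (not the stochastic simulations underlying Fig. 2), and it approximates the random placement of the 500 initial infections by a Poisson occupancy per sub-population.

```python
from math import exp, log, pi, sqrt

# Peak percentages for N = 8,000,000, I0 = 500, k = 0.14, b = 0.2, combining
# the extinction estimate with the desynchronization factor.  For a uniformly
# random placement of the I0 infections, the per-sub-population occupancy is
# approximately Poisson with mean I0/Ns, for which
# E[1 - (k/b)^n] = 1 - exp(-(I0/Ns)*(1 - k/b)).
N, I0, b, k = 8_000_000, 500, 0.2, 0.14
R0 = b / k
gamma = 1.0 - (1.0 + log(R0)) / R0
alpha = sqrt(1.0 + pi**2 * b * k * gamma / (6.0 * (b - k) ** 2))

def peak_percent(Ns):
    ext_factor = 1.0 - exp(-(I0 / Ns) * (1.0 - k / b))
    return 100.0 * gamma * ext_factor / alpha

print(f"undivided : {100.0 * gamma:.1f}%")      # ~5%
print(f"Ns = 100  : {peak_percent(100):.1f}%")  # ~3%
print(f"Ns = 500  : {peak_percent(500):.1f}%")  # ~1%
```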
To examine the validity of our approximations across different parameters, we varied the contact rate b and carried out numerical simulations for values of R 0 ranging between 1.14 and 2. We analyzed the resulting peak magnitudes to extract the individual contributions of extinction and desynchronization, which are in excellent agreement with our predictions of Eqs. (6) and (9), as shown in Fig. 4. The contribution of extinction alone was estimated numerically by summing maxima in different sub-populations, regardless of their timing. Overall, the simulations confirm the relative importance of the extinction effect, whereas the additional reduction by desynchronization plays a smaller role. Figures 4(a) and 4(b) show the case where N s = I 0 = 100, i.e., number of sub-populations and initially infected individuals is the same, and exactly one infected individual is placed in each sub-population. This serves to demonstrate the maximum effect of extinction, whereas in Fig. 4(c), a large share of the peak reduction is due to sub-populations containing no infections, as I 0 = 100 < N s = 500. However, the random distribution of infected individuals for N s = I 0 = 500 [ Fig. 4(d)] leads to a very similar result as in Fig. 4(b), although some of the reduction is due to the initial distribution (i.e., sub-population without any infections). For a high number of sub-populations N s as in Figs. 4(c) and 4(d) (and consequently a smaller sub-population size), deviations from the theory begin to appear toward low values of b very close to k, as the timescale of the extinction process becomes comparable to that of the deterministic SIR dynamics. In this regime, the distinction between an initial stochastic phase approximated by a birth-death process and the onset of saturation effects becomes increasingly blurred, as we show analytically in Appendix B. In particular, this affects the estimation of the extinction contribution (marked by black dots).
IV. DISCUSSION
Reducing the infectious contact rate b or increasing the removal rate k directly leads to a decrease of the deterministic peak fraction of infected, γ . The above analysis shows that, even without changing R 0 = b/k, the isolation of small sub-populations can reduce the overall peak number of infected people in the ideal case of at most one infected individual per sub-population by an additional factor of up to I 0 /N s · (1 − k/b)/α when I 0 /N s < 1. One contribution comes from the communities that have no infections and are now protected (I 0 /N s ), while another contribution comes from the possibility that an infection chain in a local community stochastically ends due to fluctuations (k/b). Stochastic desynchronization (1/α) further reduces the peak by up to about 25% according to Eq. (9). However, as shown by our estimates and confirmed by the numerical simulations, even outside this ideal scenario, a reduction can be achieved, regardless of the distribution of infected individuals across the sub-populations, and the reduction will be larger if b/k is already close to 1. It is also worth noting that, in contrast to reductions in R 0 = b/k, the timescale of the outbreak is not increased. The benefits of subdivision are obvious even from a deterministic standpoint in the case where many regions initially contain no infected individuals-in this case, subdivision prevents spreading of the epidemic to disease-free communities. However, our analysis shows that this advantage persists due to stochastic extinction events and desynchronization even if the sub-populations are so large that many or all of them initially contain infections, as long as I 0 /N s ∼ 1. Of course, increasing N s further is always beneficial due to the above-mentioned deterministic effect, with the trivial limiting case of one group per household (an extremely strict lockdown). In contrast, aiming at I 0 /N s ∼ 1 could still allow for the functioning of local socioeconomic life in fairly large sub-populations if I 0 is not too large when the subdivision happens.
While extinction has been widely considered for SIR-type models 11,21 and has been related to a minimum number of infections necessary to cause a "major" outbreak, 14 we have shown here that, even if the dynamics in the large population is outside the stochastic regime, it is possible to resurrect these effects by artificially subdividing the population. Because of the strong exponential dependence of the extinction probability on n [see Eq. (5)], it is important to note that I 0 denotes the true number of infections, including undetected and/or asymptomatic cases. Another aspect we have neglected here is that of cross-infections: In reality, sub-populations cannot be perfectly isolated; therefore, local extinction might only be temporary, as has been seen in studies of persistence. 10,16 The calculated peak reduction would be observed in the limit of small cross-infection rates. In contrast to extinction, desynchronization does not reveal itself on the level of a single population (except as a difference in timing) and is, therefore, an emergent property of the subdivision scenario, which is likely to persist in the presence of cross-infections. In the framework presented in Sec. II A, these could be included (without changing R 0 ) by allowing a certain fraction ξ of contacts across the entire population and only restricting the remaining fraction 1 − ξ to within each sub-population. We set up such a model in a separate study 25 to investigate a potential realistic containment strategy.
In reality, individuals will not compensate for all avoided contacts outside the local sub-population with contacts within it, as we have conservatively assumed by keeping b constant upon subdivision. Instead, isolation will naturally lead to a reduction in b, akin to cutting links in the spreading network, 5 so that the effect of subdivision will be a combination of deterministic reductions in R_0 and the stochastic effects presented here. Subdivision of a population can be complementary to containment measures, such as social distancing and electronic contact tracing, 13,23 which still allow for the functioning of local public life. However, it also does not preclude the activation of more drastic measures in regions beginning to show deterministic exponential behavior. 25
ACKNOWLEDGMENTS
[…] Wilczek, and R. Yahyapour. This research was supported by the Max-Planck-Gesellschaft.
APPENDIX A: THE EXACT SOLUTION OF THE BIRTH-DEATH PROCESS
Consider a population of infected individuals I that can undergo the two processes I → 2I (with rate b per individual) and I → ∅ (with rate k per individual); i.e., each I can give birth to another I with rate b or it can die with rate k, at any time. Ignoring the stochasticity, the average behavior of the system is described by exponential birth and death, and the mean population size follows n̄(t) = exp[(b − k)t], where we have assumed that the initial size of the population is one.
As this is a one-step process, the probability P_n(t) of finding n copies of I in the sample at time t satisfies the master equation dP_n/dt = b(n − 1)P_{n−1} + k(n + 1)P_{n+1} − (b + k)nP_n (A3); the factors of n are needed because the birth or death could happen to any individual. Equation (A3) can be solved by an ansatz of the form P_n ∼ f^n for n ≥ 1, which together with the initial condition P_n(0) = δ_{n,1} gives us the solution, Eq. (A4). The distribution can be used to calculate the first two moments, which reveal more interesting features about the system. First, it is reassuring that the average population size behaves according to the mean-field description above, which predicted exponential growth or decay. A further quantity of interest is the ratio of the variance to the mean, which probes whether number fluctuations follow a characteristic Poisson behavior. In the long time limit, a decaying population corresponding to b < k shows Poisson behavior, whereas a growing population corresponding to b > k shows giant number fluctuations. In other words, the fluctuations scale with the average population size when b > k and with the square root of the average population size when b < k.
The above solution allows us to calculate the extinction probability of the population, P_0(t), which corresponds to the absorbing state. We find P_0(t) = k[exp((b − k)t) − 1]/[b exp((b − k)t) − k], which is a very interesting result. When k > b, n̄ → 0 at long times, and we obtain P_0 → 1. It is no surprise that extinction at long times is a certainty when the death rate is larger than the birth rate. However, when k < b, n̄ → ∞ at long times, and we obtain P_0 → k/b, a result that is in contradiction with the prediction of the average behavior of the system, which is exponential growth. Therefore, number fluctuations can completely annihilate an exponentially growing population.
APPENDIX B: TIMESCALE OF THE EXTINCTION PROCESS AND ACCURACY OF MAXIMA DETECTION IN SUB-POPULATIONS
Here, we derive quantitative estimates that allow us to compare the timescale of the extinction process to that of the deterministic peak in the SIR model. This is conceptually interesting in its own right, but it also allows us to meaningfully differentiate between "real" maxima and random transient peaks in the number of infected individuals in sub-populations that experience extinction.
In the pure birth-death process, the fraction of extinction events, 0 ≤ φ_x ≤ 1, that have already happened by time t can easily be calculated from Eq. (A11) as the ratio of P_0(t) to the asymptotic extinction probability k/b. This relation can be inverted to yield the time t_x by which a fraction φ_x of extinction events have happened. On the other hand, we can also estimate the fraction of non-extinct populations, 0 ≤ φ_c ≤ 1, that will still be below a cutoff size n_c at time t. Evaluating φ_c(t_x(φ_x), n_c), therefore, yields the fraction of populations still below n_c when a fraction φ_x of extinction events have already happened. This expression can be inverted to yield a simple relationship giving the number of infected individuals below which a fraction φ_c of non-extinct populations will still be at the time when a fraction φ_x of populations destined for extinction have already reached the extinct state. In order to estimate the effect of extinction in our numerical simulations (cf. Fig. 4), we detect the maximum number of infected individuals in each sub-population (independent of their timing) and compare the sum of these numbers to our estimate I_max^ext from the main text. In the sub-populations that experience random extinction of the epidemic, the detected numerical maxima will in reality be transient fluctuations before extinction. These contribute more and more as R_0 = b/k → 1, when the deterministic peak value Nγ = N[1 − (1 + log R_0)/R_0] 20 decreases and the extinction probability 1/R_0 increases. Using the estimates above, we can exclude these false maxima based on their timing by only considering those maxima that occur after the time t_x(φ_x), while simultaneously ensuring that the separation condition Eq. (B6) is fulfilled. φ_x and φ_c play the role of accuracy parameters. The first condition ensures that false maxima are excluded with probability φ_x, while the second one ensures that a pure birth-death process would not have reached the deterministic SIR peak by the same time with probability φ_c. Note that the latter is a conservative estimate, as growth in the SIR model is significantly slowed before reaching its peak compared to a pure birth-death process. In Fig. 4, we use a value of φ_x = φ_c = 0.99 to exclude 99% of false maxima and still detect more than 99% of deterministic SIR maxima, except for the data points marked as unreliable, for which Eq. (B6) is not fulfilled and, therefore, the extinction process and the deterministic SIR peak are not clearly separated in time. Conversely, this also means that for all other parameters (i.e., larger R_0 = b/k), extinction usually happens well before the deterministic SIR dynamics reaches its peak. It is worth emphasizing that, in the limit b → k and small populations, the distinction between an initial stochastic phase and a deterministic time course becomes meaningless, since γ eventually becomes of order ∼1/N and the mean extinction time diverges. At this point, the dynamics throughout will be dominated by random growth of the number of infected individuals, and stochastic fluctuations will continue to contribute even as the number of susceptibles decreases, eventually ending the epidemic (i.e., during and beyond the maximum). In addition, the assumption that there is no depletion of susceptibles in the early phase (and thus the equivalence to a pure birth-death process) breaks down.
However, in this study, we are interested in the regime where even sub-populations are still large and while b is sufficiently close to k to yield a significant extinction probability k/b, it is large enough to lead to a significant deterministic outbreak peak. Therefore, we do not investigate this regime.
APPENDIX C: ANALYTICAL APPROXIMATION OF THE RELATIVE PEAK TIME DISTRIBUTION
The fact that the early phase of the dynamics in the SIR model (when S ≈ N and I is small) corresponds to a simple birth-death process also allows us to obtain an analytical estimate for the peak time distributions of the sub-populations. This can be readily adapted from a similar calculation performed on an equivalent problem in evolution, where the dynamics of a small mutant subpopulation with a given selective advantage can likewise be understood as a birth-death branching process, 26 for which the transition from the initial stochastic regime where extinction is still possible to the deterministic regime of exponential growth corresponds to the establishment of the mutation in the population (which precedes fixation).
We obtain an approximation for the establishment time distribution of the disease in a sub-population, Eq. (C1), where we have corrected for an additional minus sign missing from Ref. 26. The variation in the timing of the later deterministic dynamics is due entirely to fluctuations in this initial stochastic phase. To compare this analytical approximation with our simulation results for the peak time in the main text, we plot the non-normalized, unconditional distribution, Eq. (C2), which is diminished by a factor [1 − (k/b)^n] [from Eq. (C1)], accounting for the probability of extinction in a population with initially n infected individuals, and has its mean shifted to the measured mean peak time ⟨t_peak⟩. Here, τ̄ is defined as in Eq. (8), and γ̄ = 1.7810724… is the exponential of Euler's constant. We note that simply shifting the mean of the distribution is justified because the dynamics is predominantly identical in different sub-populations once they are in the deterministic regime, while only lagging by a random time span τ. This simple argument depends on the assumption that stochastic fluctuations can be ignored before deviations from exponential behavior (i.e., saturation effects) have to be considered for the deterministic dynamics. This is true for the scenarios we consider in the SIR model, since our sub-populations still consist of thousands of individuals and we are explicitly focusing on cases where b is not arbitrarily close to k.
APPENDIX D: ESTIMATING THE EFFECT OF SUB-POPULATION DESYNCHRONIZATION
For estimating the peak reduction effect due to desynchronization of the sub-populations, it is convenient to work with the normalized equations for s = S/N and i = I/N, which read ds/dt = −b s i (D1a) and di/dt = b s i − k i (D1b). When i reaches its peak γ = i(t_peak), new infections and recovery balance according to Eq. (D1b) and s(t_peak) = k/b. Based on this known value, we use the ansatz s(t) = (k/b)[1 + ε(t)], with ε(t_peak) = 0. Since we are interested in the regime where there is a substantial extinction probability k/b, s(t_peak) is also still of order 1. Together with the fact that di/dt = 0 at t_peak by definition, we expect from Eq. (D1a) that the lowest (linear) order of ε will suffice to describe the dynamics around the peak; i.e., ε(t) ≈ ε_1 · (t − t_peak) (conversely, we expect this approximation to break down when b ≫ k). Substituting the ansatz into Eq. (D1a) yields ε_1 = −bγ, or s(t) ≈ (k/b)[1 − bγ(t − t_peak)]. With this, we can obtain an approximation for i around the peak. From Eq. (D1b), we know that d(ln i)/dt = b s − k = −bkγ(t − t_peak), which can easily be solved. Together with the condition i(t_peak) = γ, we obtain i(t) ≈ γ exp[−(1/2)bkγ(t − t_peak)^2] (D5). Now that we have an approximation for i(t) near the peak, we can calculate how these time courses add up across individual sub-populations by assuming that they all have the shape (D5), with the peak time t_peak stochastically distributed according to Eq. (C2). Defining the average time course ī(t) = (1/N_s) Σ_j i(t_peak^(j); t), where each i(t_peak^(j); t) represents a time course as in Eq. (D5) with t_peak^(j) drawn from the distribution (C2) for each j, we obtain an average superposition of many sub-populations in the limit N_s → ∞, ī(t) = ∫ dt_peak P_SIR^est(t_peak + τ̄) i(t_peak; t) (D7).
Note that [as compared to Eq. (C2)] we use here the normalized distribution, without the diminishing factor due to extinction, in order to extract the reduction strictly due to desynchronization. We have also set the mean peak time ⟨t_peak⟩ to 0 without loss of generality, as a different value would simply shift ī(t) by the corresponding time.
The integral in Eq. (D7) cannot be evaluated in closed form. We, therefore, replace P_SIR^est by a normal distribution N(0, σ^2) with the same variance σ^2 = π^2/[6(b − k)^2]. It is useful to note that, as for the normal distribution, the variance completely determines the shape of the Gumbel distribution in Eq. (C1), which means that the systematic error introduced by this replacement is parameter independent. Finally, we can calculate ī(t) = (γ/α) exp[−bkγ t^2/(2α^2)], with α = [1 + π^2 bkγ/(6(b − k)^2)]^(1/2).
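The reconstructed expression for ī(t) and the factor α can be checked with a small Monte-Carlo superposition of Gaussian peak shapes; this is an illustrative sketch under the same normal-distribution replacement used in the text, not part of the original derivation.

```python
import numpy as np

# Monte-Carlo check of the desynchronization factor: superimpose many
# Gaussian-shaped time courses i(t) = gamma*exp(-b*k*gamma*(t - t_peak)^2/2)
# whose peak times are drawn from a normal distribution with the variance
# sigma^2 = pi^2/[6*(b - k)^2] used in the text, and compare the peak of the
# averaged time course with gamma/alpha.
b, k = 0.2, 0.14
R0 = b / k
gamma = 1.0 - (1.0 + np.log(R0)) / R0
sigma = np.pi / (np.sqrt(6.0) * (b - k))

rng = np.random.default_rng(2)
t = np.linspace(-300.0, 300.0, 4001)
i_bar = np.zeros_like(t)
n_curves = 20_000
for t_peak in rng.normal(0.0, sigma, size=n_curves):
    i_bar += gamma * np.exp(-0.5 * b * k * gamma * (t - t_peak) ** 2)
i_bar /= n_curves

alpha = np.sqrt(1.0 + np.pi**2 * b * k * gamma / (6.0 * (b - k) ** 2))
print(i_bar.max(), gamma / alpha)   # the two values should agree closely
```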
The maximum of the resulting time course occurs at t = 0 (due to our arbitrary choice of the mean for t_peak) and is ī(0) = γ/α. Since the expected peak value without desynchronization is γ, desynchronization reduces this peak value by a factor of α^(−1). According to the definition of α above, α itself depends on γ, which in turn is a function of R_0 = b/k. Using the well-known approximation γ = 1 − [1 + log(R_0)]/R_0, 20 which is valid as long as S ≈ N initially, we rewrite α as a function of R_0 alone. While we expect the quantitative estimate to be less accurate toward higher R_0 (see above), we note that the important limits lim_{R_0→∞} α^(−1) = 1 and lim_{R_0→1} α^(−1) = [12/(12 + π^2)]^(1/2) ≈ 0.7407 (D11) exist. The first one signifies that there is no peak reduction due to desynchronization for R_0 → ∞, consistent with the disappearance of the stochastic phase at the beginning of the dynamics. The second limit indicates a finite reduction by a factor ≈ 0.7407 toward R_0 = b/k = 1. Since the timescales of both the stochastic fluctuations and the deterministic peak behavior diverge for R_0 → 1 (and are ill-defined for R_0 = 1), this means that they must exhibit identical scaling behavior in order for neither of them to dominate. In between the two extremes, 1/α increases monotonically with R_0, which implies that the maximum reduction that can be achieved by desynchronization is about 26% and is reached close to R_0 = 1. It is important to note, however, that several assumptions even about the deterministic time course (for example, the value of γ) break down when R_0 is so close to 1 that γ becomes of order 1/N; therefore, a fully stochastic treatment would be needed to fully capture this regime. This does not limit the validity of the results in the regime we are interested in, i.e., where sub-populations still exhibit clear deterministic outbreaks (or extinction).
DATA AVAILABILITY
The data that support the findings of this study are available within the article.
270059265 | pes2o/s2orc | v3-fos-license | Plants against cancer: towards green Taxol production through pathway discovery and metabolic engineering
The diversity of plant natural products presents a rich resource for accelerating drug discovery and addressing pressing human health issues. However, the challenges in accessing and cultivating source species, as well as metabolite structural complexity, and general low abundance present considerable hurdles in developing plant-derived therapeutics. Advances in high-throughput sequencing, genome assembly, gene synthesis, analytical technologies, and synthetic biology approaches, now enable us to efficiently identify and engineer enzymes and metabolic pathways for producing natural and new-to-nature therapeutics and drug candidates. This review highlights challenges and progress in plant natural product discovery and engineering by example of recent breakthroughs in identifying the missing enzymes involved in the biosynthesis of the anti-cancer agent Taxol®. These enzyme resources offer new avenues for the bio-manufacture and semi-synthesis of an old blockbuster drug.
INTRODUCTION
Plants are remarkable biochemists; they produce a wealth of bioactive small-molecule chemicals to mediate developmental processes, communicate with other organisms, and adapt to dynamic ecological environments. From the use of yarrow and chamomile by Neanderthal cultures, nearly 50,000 years ago (Hardy et al. 2012), to traditional medicines developed by ancient civilizations across the globe, humans have benefitted from the diversity of herbal remedies to treat ailments and diseases. Today, almost a third of World Health Organization essential medicines find their origin in plant natural products (De Luca et al. 2012; Raskin et al. 2002). Faced with increasing health challenges of a burgeoning world population, the large-scale production of clinically used therapeutics and the discovery of new drug leads are a matter of utmost urgency.
TRADITIONAL PRODUCTION APPROACHES
The diversity of plant natural products can provide a critical resource in this endeavor if challenges toward their production can be overcome. Traditional methods of isolating bioactive products from the source species have laid the foundation for many modern pharmaceuticals (Hartmann 2007). However, this strategy is often impractical, due to the lack of scalable cultivation, low product accumulation often in only specific tissues and in response to environmental stimuli, and the need to protect rare and endangered species. Over the past century, chemical synthesis has successfully provided high-purity therapeutics and other bioproducts (Guerra-Bubb et al. 2012; Hetzler et al. 2022), but is often constrained by high costs, the toxicity of waste products, and the structural complexity of plant natural products.
THE TRANSITION TO MODERN OMICS APPROACHES: TAXOL AS A CASE STUDY
The availability of large omics data, paired with inexpensive DNA synthesis, has revolutionized the discovery of enzymes and pathways underlying the biosynthesis of desired products (Li et al. 2023; Owen et al. 2017; Tiedge et al. 2020). Applying this resource to metabolic engineering, in heterologous microbial or plant platforms, now offers unprecedented possibilities for manufacturing natural and new-to-nature bioproducts with superior stereo-control and at an industrial scale using enzymatic and semi-synthetic approaches (Chen et al. 2024; Owen et al. 2017; Wurtzel and Kutchan 2016).
The diterpenoid anti-cancer drug Paclitaxel (Taxol®) exemplifies the various bottlenecks one can encounter in producing plant-derived therapeutics. Following its discovery in a drug screen of more than 100,000 plant natural products in the late 1960s, Taxol quickly became a leading chemotherapeutic, due to its unique mode of action in arresting mitosis and ultimately cell division by preventing microtubule disassembly and its broad-spectrum activity against several cancer types (Arnst 2020).
Taxol was first isolated from its natural source, Pacific yew (Taxus brevifolia) (Wani et al. 1971). However, coniferous yew trees do not present a sustainable resource, as they grow slowly and in only narrow climatic niches and produce only low amounts of Taxol. Moreover, the isolation of Taxol from bark tissue is destructive, which resulted in overharvesting to the extent that some yew species, historically used for commercial extraction, have been placed on the endangered species list by the International Union for Conservation of Nature (Mayor 2011). This limited natural supply chain has inspired many efforts, over the past six decades, to devise Taxol production strategies that can meet ever-increasing clinical demand (Fig. 1).
Numerous total synthesis routes for Taxol and key precursors have been established, but these are often long and expensive due to the structural complexity of Taxol (Guerra-Bubb et al. 2012; Watanabe et al. 2023; Zhang et al. 2023a). Early work on the Taxol biosynthetic pathway facilitated the development of cell suspension cultures of Taxus needles to produce key precursors, such as 10-deacetyl-baccatin III with high stereochemical precision (Hezari et al. 1997; Ketchum et al. 2003, 2007). Semi-synthetic approaches, utilizing these precursors as starting material, have proven a more renewable and scalable strategy and are currently the major platform for commercial Taxol production (Arya et al. 2020; Roberts 2007).
Rapid advances in synthetic biology have the potential to offer less costly and more sustainable avenues for Taxol manufacture but necessitate knowledge of the underlying enzymes and pathways. Such resources would not only enable improved Taxol production in existing and new metabolic engineering and semi-synthetic platforms but also facilitate combinatorial metabolic engineering of Taxol-biosynthetic enzymes, to thereby gain access to a broader range of the more than 600 known taxane and taxoid structures with potentially desirable therapeutic efficacies (Lange and Conner 2021).
These applications have steered long-standing research efforts in elucidating the multi-enzyme Taxol-biosynthetic pathway. In particular, the pioneering work by Croteau and colleagues resulted in the discovery of many of the core reactions and associated enzymes of Taxol formation (Guerra-Bubb et al. 2012; Jennewein and Croteau 2001; Walker and Croteau 2001) (Fig. 2). This includes the diterpene synthase, taxadiene synthase (TXS), catalyzing the conversion of the universal diterpenoid precursor, geranylgeranyl diphosphate (GGPP), into taxadiene as the committed reaction in building the core taxane scaffold (Hezari et al. 1995; Lin et al. 1996; Wildung and Croteau 1996). In later years, several cytochrome P450 monooxygenases (P450) and acyl- and benzoyl-transferases, which functionally decorate taxadiene, were characterized (Guerra-Bubb et al. 2012; Srividya et al. 2020).
Equipped with this pathway knowledge, the metabolic engineering of Taxol precursors, especially taxadiene and taxadiene-5α-ol, using microbial and plant platforms could be established, including yeast (Saccharomyces cerevisiae) (Nowrouzi et al. 2020, 2024; Walls et al. 2021), Escherichia coli (Ajikumar et al. 2010; Biggs et al. 2016; Chang et al. 2007) and Nicotiana benthamiana (De La Peña and Sattely 2021; Hasan et al. 2014; Li et al. 2019). Notably, product yields varied across studies and host systems, as exemplified by taxadiene titers reaching 103 mg L−1 in yeast, 1 g L−1 in E. coli, and 100 µg g−1 fresh weight in N. benthamiana. Despite these many advances, several enzymes essential for forming the core baccatin III intermediate, alongside additional modifications and formation of the complete aromatic side chain, have remained elusive until recently.
What has been the challenge? Low abundance, diversity, and structural complexity of naturally occurring taxanes and related structures in species of Taxus have hindered access to pathway intermediates required for the biochemical testing of enzyme functions. Even if substrates are available, the functional diversity and catalytic promiscuity of the large P450 and transferase enzyme families predicted to be involved in Taxol formation, along with often low protein expression and activity in heterologous systems, have slowed progress in identifying pathway enzymes and understanding the pathway organization of Taxol biosynthesis.
TAXUS GENOMES: PLATFORM FOR MISSING ENZYME DISCOVERY
Sequencing and assembly of the ~10 Gb genomes of two Taxus species provided a critical milestone in the quest for Taxol biosynthesis (Cheng et al. 2021; Xiong et al. 2021). Recent efforts have leveraged this resource to identify the missing enzymes and reactions to complete the Taxol pathway, marking a significant breakthrough in our understanding and metabolic engineering of Taxol biosynthesis. Integrating genomics, synthetic biology, and enzyme biochemical approaches, Jiang and coworkers elegantly elucidated two P450 enzymes that catalyze two previously unresolved functional modifications essential for Taxol bioactivity (Jiang et al. 2024) (Fig. 2).
Mining of the Taxus genomes revealed new members of the Taxus-specific CYP725 P450 family with known functions in Taxol formation (Kaspera and Croteau 2006). To enable P450 functional testing, these authors combined co-infiltration of substrate isolated from plant tissue with co-expression of P450 candidates and known pathway enzymes in N. benthamiana and insect cell cultures. This strategy resulted in the identification of Taxane Oxetanase 1 (TOT), a bifunctional CYP725 P450 that facilitates the addition of the characteristic oxetane ring to the taxane scaffold (Jiang et al. 2024) (Fig. 2). Functional analysis of insect microsomal fractions containing TOT and knock-down of TOT in Taxus cell cultures verified TOT functionality.
Next, Jiang et al. (2024) employed pathway engineering to generate the alternate intermediate, taxusin, and used this platform to functionally screen the remaining CYP725 candidates, leading to the discovery of taxane-9α-hydroxylase (T9αH), which catalyzes the missing oxygenation at the C-9 position. With these enzymes in hand, the authors reconstituted the conversion of the universal diterpenoid precursor, geranylgeranyl diphosphate (GGPP), into baccatin III in N. benthamiana, using a subset of nine enzymes (Fig. 2), thus paving the way for the scalable production of key Taxol precursors through metabolic engineering. Strikingly, some enzymes, such as T10βH and DBAT, which were previously shown to catalyze reactions in Taxol biosynthesis, were not required to form baccatin III.
Furthermore, Yang et al. also reported the characterization of T9αH (here designated CYP725A37) as well as the CYP725A55-catalyzed oxetane ester formation to form 1β-dehydroxy-baccatin VI (Yang et al. 2024). Motivated by a close review of previously proposed pathway reactions, another recent effort by Zhao and colleagues demonstrated that taxane 5α-hydroxylase (T5αH, CYP725A4), a P450 identified to decorate the taxadiene scaffold at C-5 nearly three decades ago (Hefner et al. 1996), can act as a bifunctional enzyme, facilitating C-5 hydroxylation of two primary TXS products, taxa-4(5)-11(12)-diene and its isomer taxa-4(20)-11(12)-diene, and subsequent oxetane ring formation, as shown by engineering taxadiene production and T5αH co-expression in yeast and N. benthamiana (Zhao et al. 2024) (Fig. 3). In addition, Liu and coworkers combined promoter engineering with co-expression analysis to identify several previously unresolved products of T5αH, underscoring the functional promiscuity of this core P450 in the production of Taxol and other taxoids (Liu et al. 2024) (Fig. 3).
CATALYTIC PLASTICITY: DYNAMIC METABOLIC NETWORKS
These research advances not only discovered long sought-after pathway reactions in Taxol biosynthesis but, more broadly, highlight the potential of integrating systems biology, synthetic biology, and modern metabolomics and biochemical technologies to realize the discovery and engineering of multi-enzyme pathway networks en route to highly complex specialized metabolites that were previously unattainable. Notably, several findings suggest that, similar to other diterpenoid pathways, the biosynthesis of Taxol and related taxoids is realized through a dynamic metabolic network, where individual enzyme modules can interact in different combinations to yield a broader product range (Bathe and Tissier 2019; Lanier et al. 2023; Peters 2006; Zerbe and Bohlmann 2015).
Firstly, the above-mentioned studies revealed that different enzymes are capable of generating the signature oxetane ring critical for the therapeutic efficacy of Taxol (Wang et al. 2000). Secondly, differences in the tissue-specific expression of several identified genes support the presence of tissue-specific pathways (Jiang et al. 2024). Thirdly, TXS, T5ɑH, and other Taxol-forming enzymes show expansive substrate- and product-promiscuity (Guerra-Bubb et al. 2012; Liu et al. 2024; Zhao et al. 2024) (Fig. 3), thus supplying substrates for alternate pathway branches toward the diverse array of taxanes and taxoids produced in species of yew (Lange and Conner 2021). Notably, the use of different minimal enzyme sets to produce baccatin III in N. benthamiana (Fig. 2) resulted in different product yields, between 50 ng g−1 (Jiang et al. 2024) and 155 ng g−1 (Zhang et al. 2023b) plant material, suggesting that differences in enzyme combinations and pathway reconstitution affect pathway productivity.
The catalytic plasticity of Taxol biosynthesis presents both a challenge and an opportunity for metabolic engineering. Combinatorial pathway engineering of different enzyme modules can provide access to a range of structures, whereas lack of control over undesired branch pathways can substantially diminish product yield in heterologous systems (Andersen-Ranberg et al. 2016; De La Peña and Sattely 2021; Frey et al. 2024; Guo et al. 2016; Liu et al. 2024; Mafu et al. 2016). Although production yields of baccatin III and Taxol in yeast and N. benthamiana are still relatively low, advances in multi-enzyme pathway engineering, subcellular co-localization of enzyme modules, and engineering of microbial and plant host systems now offer the tools needed for developing large-scale Taxol production platforms (Jiang et al. 2024; Zhang et al. 2023b).
By integrating genomics-enabled gene discovery, enzyme co-expression approaches, and substrate feeding, a broader range of precursors can be accessed to fast-track the functional testing and annotation of enzyme superfamilies involved in the biosynthesis of all classes of plant-specialized metabolites (De La Peña and Sattely 2021; Frey et al. 2024; Kitaoka et al. 2015; Tiedge et al. 2020) (Fig. 3). To optimize pathway engineering toward Taxol and other desired products, fundamental knowledge of the order of enzyme reactions and the spatial/temporal organization of pathways is required to enable the redirection of precursor flux and control of enzyme expression levels in heterologous systems that lack the native regulatory components (Ajikumar et al. 2010; Liu et al. 2024; Nowrouzi et al. 2024; Zhao et al. 2024). Complementary to the pathway discovery and optimization discussed here, advances in metabolic engineering, fermentation, and plant biomass production, as well as semi-synthetic approaches, are certain to continue boosting natural product titers in microbial and plant platforms (Wang et al. 2021; Biggs et al. 2021; Belcher et al. 2021). At the same time, rapid advances in metabolomics technologies enable the screening of a broad range of species across the plant kingdom and are certain to reveal new bioactive natural products as leads for drug discovery.
CONCLUSIONS
Continued efforts to decipher the structure-activity relationships of Taxol-biosynthetic enzymes will enable protein engineering to improve catalytic activity and specificity (Biggs et al. 2016; Edgar et al. 2016; Köksal et al. 2011; Liu et al. 2024; Schrepfer et al. 2016; You et al. 2018). Ultimately, combining the expansive tool kit at the interface of modern biology and chemistry can accelerate the discovery and sustainable manufacture of life-saving chemicals powered by plants. | 2024-05-28T15:02:00.850Z | 2024-05-26T00:00:00.000 | {
"year": 2024,
"sha1": "829a934d621ddbab40fd0153513fbc9bb2ec55c5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42994-024-00170-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c40d5b4c400267434673048972aaf33c8e757e05",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220043085 | pes2o/s2orc | v3-fos-license | How mobility habits influenced the spread of the COVID-19 pandemic: Results from the Italian case study
Starting from December 2019, the world has faced an unprecedented health crisis caused by the new Coronavirus disease (COVID-19) due to the SARS-CoV-2 pathogen. Within this topic, the aim of the paper was to quantify the effect of mobility habits on the spread of the Coronavirus in Italy through a multiple linear regression model. Estimation results showed that mobility habits represent one of the variables that explain the number of COVID-19 infections, jointly with the number of tests/day and some environmental variables (i.e. PM pollution and temperature). Moreover, a proximity variable to the first outbreak was also significant, meaning that the areas close to the outbreak had a higher risk of contagion, especially in the initial stage of infection (time-decay phenomenon). Furthermore, the number of daily new cases was related to the trips performed three weeks before. This threshold of 21 days could be considered a sort of positivity detection time, meaning that the quarantine period underlying mobility restrictions, commonly set at 14 days and defined only according to incubation-based epidemiological considerations, is underestimated (owing to possible delays between contagion and detection) as a containment policy and may not always contribute to effectively slowing down the spread of the virus worldwide. This result is original and, if confirmed in other studies, will lay the groundwork for more effective containment of COVID-19 in countries that are still in a health emergency, as well as for possible future returns of the virus.
Introduction
In December 2019, the city of Wuhan (Hubei, China) experienced a cluster of pneumonia cases that were monitored by the Chinese health authorities. This was caused by the new Coronavirus SARS-CoV-2 pathogen, also known as COVID-19 (e.g. Chinazzi et al., 2020). The global spread was so rapid that the World Health Organization (WHO) on 30 January 2020 officially declared that the COVID-19 epidemic was a public health emergency of international concern and later, on 12 March 2020, a global pandemic. In June 2020, WHO (2020) counted a total of more than 9 million confirmed cases globally and about 480 thousand deaths, reaching all countries worldwide. Countries that were initially heavily impacted by this pandemic (e.g. China and South Korea) were successful in limiting the number of new locally transmitted cases through a massive testing regime as well as strict mobility and travel restrictions (e.g. Kucharski et al., 2020).
In these few months, scientists have made advances in characterizing the novel coronavirus and have worked extensively on therapies and vaccines to combat it. Furthermore, the medico-scientific community has investigated incubation times for the virus. For example, Lauer et al. (2020) estimate that the average incubation time is 5.1 days, while in most cases (97.5%) the symptoms occur within 11.5 days of infection. The above results were used to derive the commonly applied quarantine period of 14 days (e.g. Backer et al., 2020) applied in many countries (e.g. China, the USA and major European countries like France, Germany, Italy, Spain and the UK).
With respect to the detection of positive cases, previous studies have revealed the presence of a significant fraction of asymptomatic patients infected with the virus (Surveillances, 2020), in addition to an unknown percentage of false positive and/or negative results from each diagnostic tool among patients with COVID-19, which may have lengthened the detection time of new infections.
While the scientific community has focused in recent months mainly on health issues to defeat this virus, other key topics addressed in the literature seek to correlate the cases (deaths) of COVID-19 to both meteorological (e.g. temperature and relative humidity - Qi et al., 2020; Shi et al., 2020; Tosepu et al., 2020; Y. Wu et al., 2020; Zhu and Xie, 2020) and air quality (e.g. PM pollution - Conticini et al., 2020; Pluchino et al., 2020; X. Wu et al., 2020) variables. By contrast, the incidence of human mobility on the spread of COVID-19 had still not been deeply investigated. Indeed, the lockdown of cities and regions together with specific mobility restrictions (e.g. restricted hours and/or areas, specific restrictions for citizen categories) have been common practices performed worldwide to contain and delay the spread of the COVID-19 epidemic. For example, according to Fang et al. (2020), without the Wuhan lockdown the number of COVID-19 cases would have been 64.81% higher in the 347 Chinese cities outside Hubei province, and 52.64% higher in the 16 other cities within Hubei. Many countries are using expedients such as strict mobility and travel restrictions, minimum distancing, and quarantine (e.g. Muller et al., 2020; Wells et al., 2020) to slow down the spread of the virus. Italy, which experienced the earliest large-scale outbreak (in Codogno, Lombardy) of COVID-19 in Europe, on 8 March enacted similar restrictions on citizens' mobility and, starting from 21 March, began to show a drop in the number of new infections.
The main European countries (such as Germany, Spain, France and lastly the UK) as well as China and the US, have implemented a 14-day quarantine period based exclusively on medical considerations related to incubation time, that is the time that elapses from initial infection to manifestation of the symptoms (or no symptoms for the asymptomatic patients). This common practice is based on the consideration that, as the incubation of infected suspects takes place within 14 days, the national health system/World Health Organization will be able to detect a new case in this time interval. By contrast, the hypothesis discussed in this research is that the time period (days) in which a new positive case of coronavirus is identified and certified, which could be called a sort of a positivity detection time, is longer than the incubation time because of possible delays between contagion and detection caused, for example, by the significant percentage of tests that prove false negative to COVID-19, or by the fraction of people who, although infected, are asymptomatic and/or initially show only mild symptoms, and therefore do not resort to health care. Furthermore, this positivity detection time, as well as the spread of COVID-19, is correlated with mobility habits, in the sense that the number of certified cases of coronavirus in one day is directly related to the number of people who made trips several days before.
To the authors' knowledge, this issue has not been investigated elsewhere, and the appropriate definition and estimation of positivity detection time and its correlation with mobility habits (e.g. daily origin-destination trips) could avoid a slowdown in detecting the infection and hence a slowdown in taking restrictive/mitigative measures.
Starting from these considerations, the aim of the paper was twofold: i) to discuss the spread of coronavirus in Italy; ii) to investigate, for the first time in the literature, the incidence of citizen mobility within the spread of the coronavirus (COVID-19) pandemic, also quantifying the positivity detection time for the Italian case study. To do this, we referred to the mobility habits of the 14-80 year-old population defined in Italy as the "active population" (source: ISTAT, 2020), that is, the fraction of citizens who are individually able, barring temporary impediments, to carry out activities (e.g. work, leisure, shopping) and who therefore have autonomous mobility habits.
The proposed case study is very suitable for the purposes of this research because Italy was the first European country to experience mass contagion starting from the first outbreak. Furthermore, by May 2020 the spread had almost stopped. It is therefore possible to analyse the huge quantities of detailed contagion data (on a daily basis) and citizen mobility observed at a national scale and for a long time (before, during and after the lockdown), in addition to the effects of specific injunctions adopted by the Italian Government. To do this, quantitative estimation was also performed from the transportation perspective: the hypothesis according to which the number of certified cases of coronavirus in one day is directly related to the mobility habits made several days before, in addition to other context factors, was investigated. After all, citizen mobility could increase the probability of contagion both directly, for example via trips made by public transport where social distancing cannot be guaranteed, or indirectly (e.g. car trips) because such trips are a measure of the number of activities that a population undertakes in a certain area (trips are made for a purpose), activities that are generally based on human interactions (e.g. work, leisure, shop, sports, events, cinema), which favor the spread of the virus.
Estimates were made through a multiple linear regression model linking the number of certified daily cases (day-to-day) to socio-economic indices (e.g. number of residents; population density), environmental variables (e.g. temperature, PM pollution), health care indicators (e.g. number of swabs taken daily) and mobility habits (e.g. number people who performed trips several days before).
The paper is organized as follows. Section 2 reports methods and materials discussing data collection and model formulation; Section 3 describes and discusses the main results. Finally, conclusions are reported in Section 4.
Methods and materials
As stated above, one of the aims of the paper was to investigate the incidence of citizen mobility within the spread of the coronavirus (COVID-19) pandemic. Estimates were made at a regional (zonal) aggregation level following the classification of territorial units for statistics (NUTS) of EC (2003), although some regional-scale variables were estimated starting from a sub-zonal (provincial) analysis as described below (see the traffic zones considered in Fig. 1 on the left side). Overall, the data considered for the estimations were:
− the daily reports on COVID-19 positive cases from February 21 to May 5, 2020, from the Italian Ministry of Health (2020);
− the Italian national census data relative to the year 2019, from ISTAT (2020);
− the COVID-19 mobility observatory of the Italian Transport Ministry (2020), collecting data from about 1200 automatic car traffic count sensors from January 2020 (pre-COVID-19) to May 2020, available at a national scale (see the locations in Fig. 1 on the right side) and evenly distributed throughout the Italian regions and provinces (sub-regional discretization);
− the particulate matter (PM) pollutant measures performed in 2019 by the Italian Agenzia Regionale per la Protezione Ambientale (ARPA, 2020);
− the average daily temperature measured by ilMeteo (2020) from January 2020 to May 2020;
− the Italian mobility rates estimated by Isfort (2020).
Precisely, the mobility rates considered are those estimated by the Official National Monitoring Observatory "Audimob" of Isfort (2020), which periodically carries out continuous sample surveys on the mobility of Italians through telephone and computer interviews. Through this observatory it was possible to analyse the mobility habits before and during the national lockdown in Italy. In all, 2175 interviews were conducted by Isfort (2020) between January and February (before the COVID-19 epidemic) and 1398 interviews immediately after the lockdown (8 March 2020) on a representative population sample between 14 and 80 years old. About 70% of interviews were conducted by the Computer Assisted Telephone Interview (CATI) system and about 30% by the Computer Assisted Web Interviewing (CAWI) system.
The model estimation was performed through a multiple linear regression model linking the number of certified daily cases (day-to-day) to socio-economic, environmental, health care and mobility habits variables. This tool is among those most commonly used in econometric analysis (e.g. Greene, 2012). For example, within the transport and/or economic sectors, such methods are commonly used to explain the economic development of an area driven by a transportation infrastructure considered as an input, in addition to other social, economic and territorial variables. With this type of model it is possible to separate the effects of one group of variables with respect to the others (which is the aim of the research), or, in other words, to estimate the effects of a specific variable (e.g. mobility habits), other things being equal. The daily regional new positive cases of COVID-19 (delta new certified infections per day) provided by the Italian Ministry of Health (2020) were considered as dependent variables, while different independent variables were tested at regional scale:
− socio-economic variables (e.g. population, population density, percentage of elderly residents over 65 years, number of employees, number of companies - relative to the year 2019);
− territorial variables (e.g. kilometers of coastline, square kilometers of mountain areas);
− environmental variables (e.g. average number of exceedances of air quality thresholds; pollutant emissions; PM average concentrations; temperature; relative humidity - relative to the years 2019 and 2020);
− health care variables (e.g. number of COVID-19 tests per day - relative to the year 2020);
− mobility habits variables (e.g. number of citizens who make at least one trip per day; transport accessibility; distance from the main Italian clusters - relative to the year 2020).
Although several model specifications and independent variables were significant, the best model formulation obtained with respect to the validation tests (adj. R-squared and t-value) was:
y_t,i = β0 + β1·POPdensity_t + β2·PM_t + β3·NTESTS_t,i + β4·TTD_t,i + β5·MOB_t,i−x + β6·TEMP_t,i−x + ε_t,i (1)
where:
− y_t,i is the dependent variable, that is, the number of daily new positive cases of COVID-19 detected in the t-th region on the i-th day (source: Italian Ministry of Health, 2020);
− POPdensity_t is the population density [10 * inhabitants/km2] referring to the provincial capital of the t-th region (source: ISTAT, 2020);
− PM_t is the particulate matter (PM) pollutant variable [number of days], measuring the number of days in 2019 in which the national PM10 daily limit set at 50 μg/m3 was exceeded (source: ARPA, 2020); this variable on a regional scale was obtained as a weighted (on the population) average of the corresponding variables referring to the provinces within each region;
− NTESTS_t,i is the health care variable estimated by measuring the number of COVID-19 tests performed on the i-th day upon the population of the t-th region [1000 * tests/day] (source: Italian Ministry of Health, 2020);
− TTD_t,i is the weighted average travel time [hours] from the t-th region toward the initial COVID-19 cluster (outbreak) in Codogno (Lombardy) on the i-th day;
− MOB_t,i−x is the average number of 14-80 year-old people who made at least one trip (here defined as "mobility habits") "x" days before the i-th day with respect to the t-th region [100,000 * people/day]; this variable measures the circumstance investigated in this research that the number of certified cases of coronavirus in one day is directly related to the mobility habits "x" days before;
− TEMP_t,i−x is the average daily temperature observed "x" days before the i-th day with respect to the t-th region [°C] (source: ilMeteo, 2020).
The constant β0 was also estimated, accounting for all the attributes not otherwise included (explained) in the model.
As mentioned in the previous section, the first real COVID-19 outbreak in Italy was in Codogno (Lombardy). Starting from this, the epidemic spread first to neighbouring regions and then to increasingly greater distances across the whole of Italy. To take this proximity effect into account, a specific variable, TTD_t,i, measuring the proximity to the outbreak of Codogno was considered in the model, and we assumed that this proximity variable could follow a time-decay principle. The most common non-linear functions proposed for quantifying the time-decay effect (well known in the literature as the distance-decay principle) are the inverse power and the negative exponential functions (e.g. Cheng and Bertolini, 2013; Martínez and Viegas, 2013; Hooper, 2015; Kwan, 1998). Starting from such considerations, different time-decay specifications were tested for the proposed TTD_t,i variable, and an inverse power function of the following quantities proved to be the best formulation for the case study in question (Eq. (2)), where:
− TT_t,i is the average weighted (on the population) travel time from the t-th region toward the Codogno cluster on the i-th day;
− CTT_t,i,p is the average minimum car travel time from the p-th province toward Codogno on the i-th day, estimated on the current national transport network (car travel time was the transport attribute that produced the best estimates; other transport mode travel times, e.g. bus and train, were also tested but are not reported for brevity);
− Pop_p is the population of the p-th province (source: ISTAT, 2020);
− day_i is the i-th day.
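As an illustration of how such a proximity regressor could be assembled, the following Python sketch computes the population-weighted car travel time toward Codogno and damps it with an inverse power time-decay. Because the exact algebra of Eq. (2) is not reproduced in this text, the decay form (division by day_i raised to an exponent alpha) and all column names are assumptions made only to illustrate the inverse power principle described above, not the authors' implementation.

```python
import pandas as pd

def regional_ttd(prov: pd.DataFrame, alpha: float = 1.0) -> pd.DataFrame:
    """Population-weighted travel time toward the Codogno outbreak with an
    assumed inverse power time-decay (illustrative, not the paper's exact Eq. (2)).

    Expected columns in `prov` (hypothetical names): region, province,
    day (1 = first day of the analysis period), ctt_hours (minimum car
    travel time to Codogno), pop (provincial population).
    """
    prov = prov.copy()
    prov["w_tt"] = prov["ctt_hours"] * prov["pop"]
    grouped = prov.groupby(["region", "day"], as_index=False).agg(
        w_tt=("w_tt", "sum"), pop=("pop", "sum")
    )
    grouped["TT"] = grouped["w_tt"] / grouped["pop"]           # TT_t,i
    # Assumed decay: the weight of distance from the outbreak fades as the
    # epidemic becomes driven by local mobility.
    grouped["TTD"] = grouped["TT"] / grouped["day"] ** alpha   # hypothetical form
    return grouped[["region", "day", "TT", "TTD"]]
```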
To the authors' knowledge, an issue which has not yet been investigated in the literature is the proper definition and estimation of the positivity detection time and its correlation with mobility habits (e.g. trips/day). This time period is the average number of days in which the national health system identifies and certifies a new positive case of coronavirus, and it does not coincide (this is the hypothesis supported in this research) with the incubation time that epidemiological studies quantify as being within 14 days (with an average value of 5 days). For this purpose, through the model formulation (1), the number of new COVID-19 cases in one day was correlated with the number of people making at least one trip "x" days before, which is to say that the number of today's certified new cases is a function of the mobility habits (and hence of people's interactions) performed some days before. To do this, reference was made to the average number of 14-80 year-old people making at least one trip a day (mobility habits) estimated through Eq. (3), where:
− MOB_t,i is the i-th day average number of 14-80 year-old people who made at least one trip in the t-th region;
− Pop_t is the 14-80 year-old population (that is, as said, the socio-economic category considered in this study) living in the t-th regional area (source: ISTAT, 2020);
− ADMR_t is the average daily mobility rate relative to the t-th regional area before the diffusion of COVID-19 (source: Isfort, 2020); the daily mobility rate was defined as the percentage of residents in the t-th region who make at least one trip during a day, for whatever purpose, with the exception of pedestrian trips shorter than 5 min;
− %Var_t,i is the i-th day average daily percentage variation of ADMR_t with respect to the pre-COVID-19 condition and relative to the t-th regional area.
On the basis of the Italian Transport Ministry (2020) data, regional weighted (on the population) values of %Var_t,i were estimated, starting from those of the 110 Italian provinces, with the following formula (Eq. (4)), where:
− %Var_t,i,p is the i-th day average percentage variation of the p-th provincial daily mobility rate within the t-th region, with respect to the pre-COVID-19 condition;
− Pop_p is the population living in the p-th province (source: ISTAT, 2020);
− %Var_t,i,p,j is the i-th day average percentage variation of the j-th car traffic trips within the p-th province and the t-th region, with respect to the pre-COVID-19 condition (source: Italian Transport Ministry, 2020);
− f_p,j,i is the i-th day average car trips relative to the j-th traffic count section within the p-th province and the t-th region (source: Italian Transport Ministry, 2020).
The decision to estimate the trend (day by day) of the daily average percentage variation (%Var_t,i) starting from the trend in the observed car trips (car traffic counts), instead of considering, for example, the trend in public transport passengers, was made for two reasons: i) public transport (transit) trips decreased over time faster than those observed for private cars, due both to the reluctance of users to use such transport services during the pandemic (which do not guarantee adequate social distancing) and to the reduction in the supply of transport services (e.g. reduction in departures/day); whereas public transport trips decreased more rapidly over time, the overall mobility rate, i.e. the average number of people making at least one trip/day (e.g.
trips to buy food, pharmaceutical products or other basic necessities), followed a more gradual trend comparable with that observed for car mobility (a consideration also confirmed in terms of model estimation results as described in Section 3); ii) moreover, for this transport mode, there were much more widespread data available at a national scale, which was therefore better suited to the purposes of the research.
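The sketch below illustrates, under stated assumptions, how the mobility regressor of Eqs. (3)-(4) could be built from the traffic-count data and how the lagged regression of Eq. (1) could then be fitted. Because the closed-form expressions are paraphrased from the variable definitions rather than copied from the original, the way the percentage variation enters MOB, all column names, and the statsmodels-based ordinary least squares fit are illustrative choices only, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def mobility_habits(regions: pd.DataFrame, traffic: pd.DataFrame) -> pd.DataFrame:
    """Estimate MOB_t,i following the logic of Eqs. (3)-(4): the pre-COVID daily
    mobility rate scaled by the traffic-count-based daily variation.
    Column names and the exact expressions are assumptions.

    regions columns: region, pop_14_80, admr (pre-COVID daily mobility rate, 0-1)
    traffic columns: region, province, day, pct_var (daily variation vs. pre-COVID,
                     e.g. -0.42 for -42%), flow (daily car trips at the count
                     section), pop_prov (provincial population)
    """
    t = traffic.copy()
    t["w_var"] = t["pct_var"] * t["flow"]
    # %Var_t,i,p: flow-weighted variation per province (inner average of Eq. (4))
    prov = t.groupby(["region", "province", "day"], as_index=False).agg(
        w_var=("w_var", "sum"), flow=("flow", "sum"), pop_prov=("pop_prov", "first")
    )
    prov["var_p"] = prov["w_var"] / prov["flow"]
    # %Var_t,i: population-weighted average over provinces (outer average of Eq. (4))
    prov["w_reg"] = prov["var_p"] * prov["pop_prov"]
    reg = prov.groupby(["region", "day"], as_index=False).agg(
        w_reg=("w_reg", "sum"), pop_prov=("pop_prov", "sum")
    )
    reg["pct_var"] = reg["w_reg"] / reg["pop_prov"]
    out = reg.merge(regions, on="region")
    # MOB_t,i: people making at least one trip that day (assumed form of Eq. (3))
    out["MOB"] = out["pop_14_80"] * out["admr"] * (1.0 + out["pct_var"])
    return out[["region", "day", "MOB"]]

def fit_lagged_model(panel: pd.DataFrame, lag: int = 21):
    """Fit the linear regression of Eq. (1) with MOB and TEMP lagged by `lag`
    days; `panel` must contain one row per region/day with the columns below."""
    df = panel.sort_values(["region", "day"]).copy()
    df["MOB_lag"] = df.groupby("region")["MOB"].shift(lag)
    df["TEMP_lag"] = df.groupby("region")["TEMP"].shift(lag)
    df = df.dropna(subset=["MOB_lag", "TEMP_lag"])
    X = sm.add_constant(df[["POPdensity", "PM", "NTESTS", "TTD", "MOB_lag", "TEMP_lag"]])
    return sm.OLS(df["new_cases"], X).fit()
```

Testing several values of `lag` and comparing the validation statistics is one way to reproduce the search for the most representative number of "days before" described in the results.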
Results and discussion
As said, one of the aims of the research was to discuss the spread of coronavirus in Italy. From the data of the Italian Ministry of Health (2020) it emerges that the first case in Italy was recorded on 18 February 2020, which in a few days led to an outbreak in Codogno near Milan in Lombardy. On 23 February 2020 the Italian Prime Minister announced the decree DL 23 February 2020, no. 6, "Control and management of the COVID-19 epidemic", providing for the implementation of measures to contain the coronavirus infection in Lombardy and Veneto, identifying red zones where schools were closed; all public events in the regions were to be suspended. Nevertheless, the epidemic spread so rapidly that only five days after the outbreak (25 February), a total of 322 infected cases with nine deaths (3% of the total) were detected overall, with 314 (97.5%) in the North, six (1.9%) in central Italy and two (0.6%) infected cases in southern Italy (Italian Ministry of Health, 2020), most of them in the Lombardy, Veneto and Emilia Romagna regions (Fig. 2).
Before the country entered lockdown on 9 March with the decree "DL no. 11" (DL no. 11, 8 March 2020, "Emergency measures to contain the COVID-19 spread"), mobility habits had remained almost unchanged (Fig. 4) and many of those living or working in northern Italy had returned to their central and southern regions of origin. Therefore, the contagion had already spread almost homogeneously in all the regions before any mobility restriction. Fourteen days after the outbreak (10 March 2020), a total of 10,149 infected cases with 605 deaths (6% of the total) were registered nationwide, 8997 infected (88.6%) in the North alone, 811 (8.0%) in central Italy and 341 (3.4%) in the South (Fig. 2). Despite the contagion spreading to all regions, it seems that the numbers in terms of total cases and deaths have been amplified in some regions more than in others.
The virus continued to spread each day (Fig. 3), peaking on 21 March 2020 with a total of 53,099 cases and 4679 deaths (9% of the total). The spread of COVID-19 then gradually decreased to a safe value that allowed the start of a "Phase 2" (introduced by the decree DPCM of 26 April 2020), with fewer mobility limitations, on 3 May 2020, by which time 210,717 cases with 27,368 deaths (13% of the total cases) had been recorded nationwide: 168,648 infected (80%) in the North, 24,085 (11%) in central Italy and 17,984 (9%) in the South (Fig. 2).
To investigate the incidence of citizen mobility within the spread of the coronavirus (COVID-19) pandemic, the mobility habits trend was preliminarily estimated applying Eq. (3). As said, the mobility rates of the Italian "active population" (14-80 year-old population) before and during the national lockdown (as imposed by the DPCM of 8 March 2020) were estimated through the Audimob observatory of Isfort (2020). Before COVID-19, within the 14-80 year-old Italian population, 38 million residents (80% of the total) made at least one trip per day, while after the lockdown ordinance (DPCM, 8 March 2020) a significant reduction in mobility habits was observed and within a few days the figure fell to about 18 million people making trips per day (−42%). The estimates for the period before COVID-19 were used to evaluate the ADMR_t rate, while those made in the first lockdown period (8 March 2020) were used to validate the %Var_t,i estimates. Results in terms of daily mobility rates estimated by Isfort (2020) are shown in Table 1. As a result of the lockdown, the mobility rate more than halved, from 80% to 38%. In other words, the share of the population making daily trips by motorized vehicles, by bicycle or on foot (the latter only if exceeding 5 min) fell by 42 percentage points. Despite the restriction regime, almost 40% of citizens on average left home every day to make at least a short trip. The decrease in the mobility rate was particularly significant in the regions of central Italy (−51%), and less marked in the southern regions and the islands (−36%), while the northern regions fell just below the national average (−42%).
As regards the age of the interviewees, starting from the restrictions the collapse in mobility was clear, especially among the over-65s, where it fell by three-quarters: fewer than 15% of citizens made at least one trip by private or public transport during the lockdown. The reduction in the mobility rate was also striking among the young and very young, where the majority are schoolchildren or university students. Looking at the employment status of the interviewees, the mobility rate during the lockdown was still around 50% among workers (a little higher among employees than the self-employed), who recorded a 35% decrease in trips, just 5-7 points below the general average. On the other hand, the trips of retirees were almost eliminated: their mobility rate fell from 66% pre-COVID-19 to 16% under lockdown. There was also a very marked reduction in student mobility (from 73% to 26%), which was of course massively affected by school closures (DPCM of 9 March 2020).
For each of the 20 Italian regions, the 14-80 year-old population mobility habits and its day by day evolution before, during and after (beginning of business recovery "Phase 2") the COVID-19 pandemic in Italy was therefore estimated through Eq. (3) (results in Fig. 4). The estimates in their aggregate form (at the national scale) were also compared with those available in some official open-source databases specific to the Italian case study, including the COVID-19 mobility trends of Apple Inc. (driving data, 2020), the COVID-19 Community Mobility Reports of Google LLC (transit stations percent change from baseline data, 2020), the Mobility DataLab of Octotelematics and Infoblu S.p.A. (car trip, 2020) specific to road traffic in Italy and the survey results performed by Isfort (2020) discussed above and reported in Table 1. The results of the comparison (Fig. 4) show that the estimated mobility habits trend is consistent with those of the available databases, in addition to those of the Isfort (2020) investigation.
As stated above, the main aim of the paper was to investigate the incidence of mobility habits within the spread of the Coronavirus (COVID-19) pandemic, also quantifying the average positivity detection time for the Italian case study. Estimates were made through the multiple linear regression model in Eq. (1), linking the number of certified daily cases (day-to-day) to socio-economic, environmental, health care and mobility zone-specific variables at regional scale. The length of time considered spans the period from the first new cases observed on 21 February 2020, resulting from the outbreak in Codogno near Milan in Lombardy, to 20 April 2020 (60 consecutive days), when the daily infection curve reached its lowest point (Fig. 3). Other periods were also tested but are not reported for brevity, as they did not produce significant differences in estimation results.
Although several model specifications and independent variables are significant, in Table 2 only the results of the best model formulation with respect to the validation tests (adj. R-squared and t-value) are reported. All the parameters are statistically significant (>95% significance) and have the expected sign. The R-squared (adj. R-squared) is equal to 0.427 (0.424). R-squared values below 0.5 are not an unusual result, as observed in similar case study applications (e.g. Herranz-Loncán, 2007; González and Nogués, 2019).
With respect to socio-economic variables, Italy is characterized by an uneven population density within the country and, for this reason, rather than the average regional population density, the model that gave the best results includes a population density variable (POPdensity_t) referring to the provincial capital of the region, that is, the area where most of the population lives. In the Italian regions, the average population density is about 183 inhabitants/km2, while the corresponding provincial capital population density is about 540 inhabitants/km2 (+66%). The territorial area where this difference is most evident is the Campania region, where the province of Naples (the highest population density area of the country), with 2617 inhabitants/km2, is 84% denser than its regional average value (424 inhabitants/km2). This circumstance means that areas with higher population densities have a higher probability of contagion, being (on average) less able to guarantee social distancing (increase in social activities with overcrowding). Moreover, more than 19 million inhabitants live in these provincial capitals of the regions. The number of regional tests per day (NTESTS_t,i) is the health care variable estimated by measuring the number of COVID-19 tests performed every day upon the population of the region. This variable, which represents the second variable in "weight" with respect to the standardized coefficients estimated (Table 2), explains the circumstance that, all else being equal, the more tests are conducted, the greater is the probability of finding positive cases (especially with respect to the asymptomatic population).
As mentioned, the first real COVID-19 outbreak in Italy was in Codogno (Lombardy). Starting from this, the epidemic spread first to neighbouring regions and then to increasingly greater distances across the whole of Italy. To take into account that the areas close to Codogno have greater daily exchange trips with the outbreak area, and therefore a greater probability of contagion, a specific variable (TTD_t,i) measuring the proximity to the outbreak of Codogno was considered in the model. Moreover, with the passing of the days this proximity effect from the initial outbreak (trips from Codogno and Lombardy) decreased, resulting in new contagion produced by the local mobility of residents in the region. To take this effect into account, we considered that this proximity variable could follow the time-decay principle described by Eq. (2). Through this variable formulation the proximity effect was greatest within the first days and then "decayed" in its incidence with the passing of time.
Mobility habits were the variable (MOB_t,i−x) that best explained the number of COVID-19 infections (in terms of "weight" with respect to the standardized coefficients estimated and reported in Table 2). This variable measures the circumstance investigated in this research that the number of certified cases of coronavirus in one day is directly related to the mobility habits "x" days before. To estimate the most representative number of "days before" that influences the new cases in a day, many thresholds were tested in terms of model validation tests, finding that trips made 21 days before best reproduced the observed data. This result is also qualitatively observable from the trends in the observed data. Furthermore, among the environmental measures, a particulate matter pollutant variable (PM_t) was significant, measuring the number of days in 2019 in which the national PM10 daily limit set at 50 μg/m3 was exceeded. This measure on a regional scale was obtained as a weighted (on the population) average of the corresponding variables referring to the provinces within each region, meaning that areas with a higher population and lower air quality have a higher probability of contagion. Overall, the data analysis shows how the areas of the country with the highest PM pollution are those of northern Italy (e.g. the province of Milan in Lombardy and Turin in Piemonte), where the industrial areas are mainly located and/or most of the population lives, consistent with the circumstance that PM is mainly generated by industry, heating (e.g. home, office) and the transport sector (e.g. mobility habits). The opposite occurs for areas in the south, characterized by an economy mainly based on tourism and agriculture. This variable explains, as observed in other case studies, the (positive) correlation between the number of cases per day and the average pollution in the area. With respect to air quality impacts upon the spread of COVID-19, some recent research has shown that people living with long-term exposure to air pollution are more likely to become infected by the Coronavirus. For example, Conticini et al. (2020) conclude that prolonged exposure to air pollution may partly explain a higher presence of viral agents such as SARS-CoV-2. At the same time, Pluchino et al. (2020) identified the PM10 concentration as a factor in the vulnerability component of the risk in COVID-19 analysis. Indeed, in a study conducted by X. Wu et al. (2020) it was shown that an increase of 1 μg/m3 in PM2.5 involves an 8% increase in the COVID-19 death rate. From another research perspective, a few studies have linked air pollutants with one of the causes that make COVID-19 spread so rapidly (e.g. Coccia, 2020; Piazzalunga-Expert, 2020; Setti et al., 2020).
Finally, temperature (TEMP_t,i−x) was also significant and represents the third variable in "weight" with respect to the standardized coefficients estimated (Table 2). This variable is negatively correlated with the new COVID-19 cases, meaning that the warmer areas of the country (i.e. the southern regions and the islands) probably contributed to containing the virus contagion. This result is consistent with those observed in other case studies, where several studies have observed that temperature and relative humidity influence the spread of COVID-19 (e.g. Qi et al., 2020; Shi et al., 2020; Tosepu et al., 2020; Y. Wu et al., 2020; X. Wu et al., 2020; Zhu and Xie, 2020). For example, in Hubei (China) Qi et al. (2020) observed that every 1 °C increase in the average temperature, with relative humidity in the range from 67% to 85%, led to a 36% to 57% reduction in confirmed COVID-19 cases. In addition, the authors also concluded that every 1% increase in relative humidity led to an 11% to 22% reduction in daily confirmed cases with average temperatures in the range from 5.0 °C to 8.2 °C.
Conclusions
The research discussed in this paper concerns the topics of both the "atmosphere" (air quality and temperature impacts) and the "anthroposphere", in the sense of the Earth-science research area dealing with the part of the environment that is made or modified to satisfy human activities and habits, in which the transportation system and the corresponding mobility of people play a central role. Specifically, the aim of the paper was to investigate the incidence of citizens' mobility habits within the spread of the Coronavirus (COVID-19) pandemic for the Italian case study. The conjecture that the number of new certified cases of coronavirus in one day is directly related to the number of trips made several days before, in addition to environmental and other context factors, was investigated. Another issue discussed in this paper and unexplored in the literature is the appropriate definition and estimation of the positivity detection time and its correlation with mobility habits. The thesis was that this time period generally exceeds the incubation time due to many external factors, such as false negative test results for COVID-19 or people who, albeit infected, are asymptomatic and/or initially show only mild symptoms and therefore do not resort to health care.
To pursue the research aims, quantitative estimation was made through a multiple linear regression model. Estimation results showed that mobility habits represent the variable that mainly explains (from a statistical perspective) the number of COVID-19 infections. Nevertheless, the environmental variables (temperature and PM pollution) were also significant in explaining the spread of COVID-19, underlining how environmental issues play a central role in multidisciplinary research (e.g. the healthcare and transport sectors). Furthermore, other variables were significant in reproducing the spread of the coronavirus in Italy, among them the number of tests per day and the proximity to the first Italian outbreak, especially in the initial stage of infection (following a time-decay phenomenon). Furthermore, the research results showed that the number of new COVID-19 cases in one day is directly related to the trips performed three weeks before for the Italian case study. This threshold of 21 days could be considered a sort of positivity detection time measure, meaning that the quarantine period underlying mobility restrictions (e.g. lockdown; restrictive/mitigative actions; social distancing), commonly set at 14 days and based only on incubation-based epidemiological considerations, is underestimated as a containment policy and may have produced a possible (dangerous) slowdown in the certification of infections and therefore a slowdown in implementing restrictive/mitigative action, resulting in more Coronavirus contagion and deaths worldwide.
This result is original and, if confirmed in other case studies, would lay the groundwork for more effective containment of COVID-19 in countries that are still experiencing a health emergency, as well as for possible future returns of the virus, or for other pandemics.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-06-25T05:04:19.542Z | 2020-06-24T00:00:00.000 | {
"year": 2020,
"sha1": "1a8262f120250cf4b42c6d52ac907068c68f0616",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.scitotenv.2020.140489",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a8262f120250cf4b42c6d52ac907068c68f0616",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
29833757 | pes2o/s2orc | v3-fos-license | The impact of dental disease on mortality in patients with asymptomatic carotid atherosclerosis
BACKGROUND: Dental status and oral hygiene are associated with progression of atherosclerosis in patients with carotid stenosis. It remains unclear whether dental disease is a risk factor for mortality in these patients. We evaluated the bearing of dental disease on mortality among patients with asymptomatic carotid atherosclerosis. METHODS: Three World Health Organization-validated indices in 411 patients with asymptomatic carotid atherosclerosis were evaluated, measuring DMFT (decayed, missing, filled teeth) for dental status, CPITN (community periodontal index for treatment needs) for periodontal status and SLI (Silness-Löe Index) for oral hygiene respectively. Patients were prospectively followed for median 6.2 years (IQR 5.8 to 6.6 years) for all-cause mortality. RESULTS: During follow-up, 107 (26%) deaths occurred (74 cardiovascular causes). DMFT and SLI, but not CPITN, showed a significant and gradual association with mortality. For continuous variables, the adjusted hazard ratios (HR) for death were 1.06 (95% CI 1.0 to 1.12; p = 0.04) for DMFT, and 1.43 (95% CI, 1.01 to 2.03; p = 0.04) for SLI respectively. Edentulousness was a significant risk factor for death (adjusted HR 1.99, 95% CI, 1.18 to 3.02; p = 0.008). CONCLUSION: Dental status and oral hygiene were associated with mortality in patients with carotid atherosclerosis regardless of conventional cardiovascular risk factors.
Introduction
Dental disease is associated with atherosclerosis [1][2]. Previously we reported that dental status and oral hygiene are independent risk factors for progression of atherosclerosis in the carotid arteries [3]. Inflammation plays a key role in the development of atherosclerosis; inflammatory dental and oral processes may thus exacerbate vascular disease and promote its progression [4,5]. However, the bearing of dental disease on mortality among a population with known vascular disease has not yet been fully understood. While the development of coronary heart disease among a large population without known atherosclerotic disease was not associated with dental disease, others observed a significant association [6][7][8]. We hypothesised that dental disease is associated with mortality in patients with pre-existing atherosclerosis. The aim of the present study was to investigate the impact of dental disease on mortality in patients with atherosclerosis in the carotid arteries.
Study design
All consecutive patients who underwent duplex ultrasound scanning of the extracranial carotid arteries from March 2002 to March 2003 were prospectively enrolled in the Inflammation in Carotid Arteries Risk for Atherosclerosis Study (ICARAS). Detailed inclusion and exclusion criteria have been reported previously [9]. In brief, patients with prevalent atherosclerotic carotid artery disease, as defined by the presence of non-stenotic plaques or carotid stenosis of any degree, who were clinically asymptomatic at the time of screening, were enrolled. Patients with enhanced intima media thickness but without plaques were not eligible. Patients with a cardiovascular event (myocardial infarction, stroke, coronary revascularisation, peripheral vascular surgery) during the preceding 6 months were excluded. The rationale behind this exclusion criterion was the assumption that acute cardiovascular events might impact specific biomarker levels or other clinical measures and therefore reflect the severity of an acute situation rather than chronic atherosclerotic disease. Patients with active malignant disease were also not included in ICARAS. The study was approved by the local ethics committee and all patients gave their written informed consent.
ICARAS dental sub-study
A total of 1268 patients were included in ICARAS. Of these, a 450-patient random sample, using computer-generated random digits, was identified for inclusion in the dental sub-study. 411 (91%) accepted the invitation to participate and were included. No significant differences in baseline characteristics and demographics were found when comparing the sub-study participants with the entire ICARAS population.
Study endpoint and follow-up
The study endpoint was defined as death from any cause. This was evaluated by screening the national register of deaths, including screening for the specific cause of death (according to the International Statistical Classification of Diseases and Related Health Problems, 10th Revision).
Dental examination
Dental examinations were performed one to four weeks after the initial ultrasound examination by four specifically trained dentists who were blinded to the patients' clinical and ultrasound data. All patients were investigated by two observers in consensus. Three World Health Organization-approved dental indices were selected to quantify dental disease [10]. We used DMFT (decayed, missing, filled teeth) to evaluate the dental status, SLI (Silness-Löe plaque index) to measure oral hygiene, and CPITN (community periodontal index of treatment needs) as a surrogate marker of periodontal disease. Dental status and the amount of decay in an individual are described by DMFT as a means of expressing caries prevalence numerically. The score is generated by calculating the number of decayed (D), missing (M), and filled (F) teeth (T). DMFT was calculated for 32 teeth. SLI is based on recording both soft debris and mineralised deposits on teeth 12, 16, 24, 36, and 44. Each of the surfaces (buccal, lingual, mesial, and distal) is given a score from 0 (no plaque) to 3 (abundance of soft matter within the gingival pocket and/or on the tooth and gingival margin). The index is obtained by calculating the mean for all investigated teeth and surfaces. In edentulous patients SLI was obtained from the dentures. Assessment of CPITN includes recording of signs of gingival bleeding, supra- or subgingival calculus, and periodontal pockets, subdivided into shallow (4 to 5 mm) and deep (6 mm or more). We used a standardised lightweight periodontal probe with a 0.5-mm ball tip to probe 10 standardised index teeth, which were then classified from 0 (healthy) to 4 (pocket >6 mm). Index teeth were investigated as recommended; if the index teeth were missing, the next adjacent teeth were used for evaluations. CPITN for edentulous patients was calculated in a separate category.
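To make the two indices that proved prognostic concrete, the following minimal Python sketch computes DMFT and SLI from per-patient records; the encoding of tooth status, the column layout and the example values are hypothetical and serve only to illustrate the definitions above.

```python
from statistics import mean

def dmft_score(teeth):
    """DMFT: count of decayed (D), missing (M) or filled (F) teeth over the
    32 teeth considered in the study.

    `teeth` maps a tooth identifier to one of 'decayed', 'missing', 'filled'
    or 'sound' (illustrative encoding).
    """
    return sum(1 for status in teeth.values()
               if status in ("decayed", "missing", "filled"))

def sli_score(plaque_scores):
    """Silness-Löe plaque index: mean of the 0-3 plaque scores recorded on the
    buccal, lingual, mesial and distal surfaces of the index teeth
    (12, 16, 24, 36 and 44 in the study protocol).

    `plaque_scores` is a flat list of per-surface integer scores (0-3).
    """
    return mean(plaque_scores)

# Hypothetical patient record, for illustration only
teeth = {11: "sound", 12: "filled", 16: "decayed", 24: "missing", 36: "sound", 44: "filled"}
surfaces = [1, 0, 2, 1, 0, 0, 3, 1, 1, 0, 2, 1, 0, 1, 1, 0, 2, 2, 1, 0]  # 5 teeth x 4 surfaces
print(dmft_score(teeth), round(sli_score(surfaces), 2))
```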
Colour-coded duplex sonography and grading of internal carotid artery stenosis
Duplex examinations at baseline and during follow-up were performed on an Acuson 128 XP10 with a 7.5-MHz linear array probe (Acuson) by experienced technical assistants who were supervised by 2 of the authors. All duplex operators were blinded with respect to patients' clinical data and dental status. Duplex grading of the carotid stenosis was performed as described previously [3]. The validity of our classification of the degree of stenosis with respect to angiography was assessed previously in our duplex laboratory in an independent cohort including 1006 carotid arteries [11]. Assuming angiography as the gold standard, positive predictive values and negative predictive values ranged from 70% to 98%. With respect to the absolute degree of stenosis, we recorded excellent inter-observer agreement (kappa, 0.83; 95% confidence interval [CI], 0.79 to 0.88).
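For readers reproducing the agreement analysis, Cohen's kappa for the categorical stenosis grades assigned by two observers can be computed as in the short sketch below; the grades shown are invented for illustration, and a bootstrap over patients would be needed to obtain the confidence interval reported above.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical stenosis grades (ordinal categories) assigned by two observers
observer_a = ["<50%", "50-69%", "70-89%", "<50%", "occlusion", "70-89%", "50-69%"]
observer_b = ["<50%", "50-69%", "70-89%", "50-69%", "occlusion", "70-89%", "50-69%"]

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # the study reports 0.83, i.e. excellent agreement
```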
Medication
Pharmacotherapy of patients with evidence of carotid atherosclerosis was prescribed following a standard protocol: patients received antithrombotic therapy with either acetylsalicylic acid 100 mg or clopidogrel 75 mg once daily. Patients with hyperlipidaemia (LDL cholesterol >130 mg/dL) received inhibitors of 3-hydroxy-3-methylglutaryl coenzyme A reductase (statins).
Statistical methods
Continuous data are presented as the median and the interquartile range (IQR, range from the 25th to the 75th percentile). Discrete data are given as counts and percentages. We used Mann-Whitney U tests and Fisher exact tests for univariate analyses, as appropriate. Separate Cox regressions for the independent variables DMFT, SLI, and CPITN in tertiles were performed. Hazard ratios (HR) and 95% CI are presented. Additionally, estimated effects of increasing continuous measures of DMFT, SLI, and CPITN were calculated. To allow for potential confounding effects, we calculated the risk of death by multivariable Cox proportional hazards analysis adjusting for age (years), sex (male/female), body mass index (kg/m2), smoking (in categories), hypertension (yes/no), low-density lipoprotein cholesterol level (mg/dL), glycated haemoglobin A1 level (%), history of myocardial infarction (yes/no), peripheral artery disease (yes/no), history of stroke (yes/no), baseline degree of carotid stenosis (in categories), and statin treatment (yes/no). We assessed the overall model fit using Cox-Snell residuals. Furthermore, we tested the proportional hazards assumption for all covariates using Schoenfeld residuals (overall test) and the scaled Schoenfeld residuals (variable-by-variable testing). P-values below 0.05 are considered statistically significant. Sample size calculation indicated that a sample size of 418 patients allows detection of a difference in mortality of 12%, assuming an overall mortality of 25% during follow-up (alpha 0.05, power 80%). To compensate for possible missing data, 450 patients were included. Data analysis was done in SPSS (version 15.0) and SAS (version 9.1).
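The original analysis was run in SPSS and SAS; as a rough illustration only, an analogous adjusted Cox proportional hazards model with a Schoenfeld-residual-based assumption check could be set up in Python with the lifelines package as sketched below. The file name and all column names are hypothetical, and categorical covariates are assumed to be numerically encoded.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient data; time_years = follow-up time,
# died = 1 for death from any cause, 0 = censored.
df = pd.read_csv("icaras_dental.csv")  # assumed file, not provided with the paper

covariates = [
    "time_years", "died", "dmft",          # exposure of interest (continuous DMFT)
    "age", "male", "bmi", "smoking", "hypertension",
    "ldl", "hba1", "prior_mi", "pad", "prior_stroke",
    "stenosis_grade", "statin",
]

cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="time_years", event_col="died")
cph.print_summary()  # adjusted HR per unit of DMFT with its 95% CI

# Proportional hazards check based on (scaled) Schoenfeld residuals,
# analogous to the variable-by-variable testing described above.
cph.check_assumptions(df[covariates], p_value_threshold=0.05)
```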
Follow-up and mortality
During a median of 6.2 years (IQR, 5.8 to 6.6 years), 107 (26%) deaths were recorded. Of these, 74 patients (69.2%) died from cardiovascular causes (47 patients from coronary artery disease, 10 from stroke, 7 from peripheral vascular disease and 10 from other vascular causes). Thirty patients (28%) died of cancer (13 patients died from lung cancer, 3 from pancreatic cancer, 6 from colorectal carcinoma, 4 from breast cancer and 4 from other cancer entities). Three patients (2.8%) died from other causes.
Dental status
A significant association between dental status and all-cause mortality was observed (fig. 1A). DMFT showed a significant association with all-cause mortality when incorporated as a continuous variable into a proportional hazards model (adjusted HR 1.06 [95% CI, 1.0 to 1.12; p = 0.04]).
Edentulousness
Patients without any teeth were at increased risk of death from any cause. For edentulous patients, the adjusted HR for death was 1.99 (95% CI, 1.18 to 3.02; p = 0.008; fig. 2).
Discussion
Dental disease was found to be a risk factor for death in patients with prevalent carotid atherosclerosis. Tooth status, as evaluated by DMFT, and the level of oral hygiene, as evaluated by SLI, were significantly associated with mortality. Edentulousness was consistently strongly associated with mortality. In contrast, CPITN, as a surrogate for periodontal disease, was not found to be an independent risk factor for death. Adverse socioeconomic circumstances are risk factors for the development of dental disease as well as atherosclerosis [12,13]. This relation certainly confers a higher risk of adverse cardiovascular outcome and death. However, almost all residents of Austria are covered by national health insurance. In this context, a study from Sweden, a country with excellent health coverage and high educational standards, did not find socioeconomic circumstances associated with cardiovascular disease [14]. Economic aspects thus might be less important among our cohort than individual factors such as education and health consciousness. These latter factors might be displayed by the oral hygiene status. Poor oral hygiene, decayed teeth and edentulousness thus may reflect patients' reluctant attitude to health care prophylaxis in general.
Figure 2. Adjusted risk for overall mortality according to tertiles of DMFT, SLI, CPITN and edentulousness, respectively.
Another potentially important aspect is the association between periodontal inflammation and atherosclerosis. In particular, chronic microbial infection, including several periodontal pathogens, may play an important role in the development of atherosclerotic disease [15][16][17]. This has been investigated in numerous publications reporting conflicting data. While several studies suggest a clear association between periodontal infection and mortality, others did not find periodontitis to be a risk factor for poor outcome [5,[18][19][20][21][22]. Nor, in this context, did we observe a significant association between periodontal status as measured by the CPITN index and mortality. A possible explanation for these latter findings may be the fact that our population predominantly consisted of elderly subjects with advanced carotid artery disease, as shown by the presence of atherosclerotic plaques or stenosis. Since bacterial infection is thought to be linked to an early initiation of atherosclerotic lesions, measurement of the intima-media thickness may be a better parameter to investigate the association between periodontal infection and early-stage atherosclerosis. The rate of 28% of patients who died of cancer is in line with cancer death rates in the general population. Since our patients were predominantly elderly subjects (median 70 years), the occurrence of cancer among a certain percentage of our population was not unexpected.
To the best of our knowledge, the present study is the first to demonstrate clearly a significant relationship between dental disease, especially tooth loss, and death among a population with asymptomatic atherosclerosis. Our findings are in line with results from a Japanese study investigating dental disease among a predominantly older population [23]. Clinical implications derived from our findings could be as follows: once a dentist diagnoses advanced dental disease or signs of poor oral hygiene, the patient should be referred to an internist for further screening and/or treatment of cardiovascular risk factors.
Limitations
The following limitations of our study should be mentioned. First, our study population consisted of patients with preexisting atherosclerosis. Hence we were unable to draw conclusions regarding the impact of dental and periodontal disease on mortality in a community-based cohort.
Second, microbial aspects, which have been shown to be more specific than clinical signs of periodontitis, were not covered in our study. Third, specific limitations of the applied indices need to be mentioned. DMFT has been reported to be less suitable in populations with a high prevalence of decay, although this might not be relevant for our cohort [24]. The validity of SLI may be limited in milder forms of inflammation and by the need for probing [25]. Finally, severe stages of periodontitis may be underestimated or even missed (edentulousness) by CPITN [26].
Conclusion
Dental status and oral hygiene were significantly associated with mortality in patients with carotid atherosclerosis, independently of conventional cardiovascular risk factors.
Funding
Figure 1: DMFT (fig. 1A), SLI (fig. 1B), and CPITN (fig. 1C) indices in patients who died (n = 107) compared to survivors (n = 304). The box bottom marks the 25th percentile, the line within marks the median, and the box top marks the 75th percentile; the bottom and top of the vertical lines mark the 5th and 95th percentiles, respectively.
Table 1 :
Patients' baseline characteristics and demographics.
Continuous data are presented as the median and the interquartile range. Discrete data are given as counts and percentages. a Multiply by 0.0259 to convert variables to millimoles per liter.
Potential competing interests: No financial support and no other potential conflict of interest relevant to this article was reported.
Correspondence: Matthias Hoke, MD, Department of Internal Medicine II, Division of Angiology, Vienna General Hospital, Medical University, Währinger Gürtel 18-20, A-1090 Vienna, Austria, matthias.hoke@meduniwien.ac.at | 2018-04-03T01:28:58.261Z | 2011-07-28T00:00:00.000 | {
"year": 2011,
"sha1": "be8c097f517a44d9b225b43d03f7fcd5561acc78",
"oa_license": "CCBY",
"oa_url": "https://smw.ch/index.php/smw/article/download/1322/1528",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "be8c097f517a44d9b225b43d03f7fcd5561acc78",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229494833 | pes2o/s2orc | v3-fos-license | The preconditioning of lithium promotes mesenchymal stem cell-based therapy for the degenerated intervertebral disc via upregulating cellular ROS
Abstract Adipose-derived stem cells (ADSCs) are among the most widely used candidate cells for intervertebral disc (IVD) degeneration-related disease. However, poor survival and low differentiation efficacy in the stressed host microenvironment limit the therapeutic effects of ADSC-based therapy. Preconditioning has been found effective in boosting the proliferation and functioning of stem cells in various pathological conditions. Lithium is a common anti-depression drug and has been shown to enhance stem cell functioning. In this study, the effects of preconditioning with LiCl on the cellular behavior of ADSCs were investigated, particularly in a degenerative IVD-like condition. Method The cellular toxicity to rat ADSCs was assessed by detecting lactate dehydrogenase (LDH) production after treatment with varying concentrations of lithium chloride (LiCl). The proliferative capacity of ADSCs was determined by detecting Ki67 expression and the relative cell number. The preconditioned ADSCs were then challenged with a degenerative IVD-like condition, and cell viability as well as the nucleus pulposus (NP) cell differentiation efficacy of preconditioned ADSCs was evaluated by detecting major marker expression and extracellular matrix (ECM) deposition. The therapeutic effects of preconditioned ADSCs were evaluated using an IVD degeneration rat model, and NP morphology and ECM content were assessed. Results A concentration range of 1–10 mmol/L of LiCl was applied in the subsequent studies, since a higher concentration of LiCl caused substantial cell death (about 40%). The relative cell number was similar between the preconditioned groups and the control group after preconditioning, while Ki67 expression was elevated. Consistently, the preconditioned ADSCs showed stronger proliferation capacity. In addition, the preconditioned groups exhibited higher expression of NP markers than the control group after NP cell induction. Moreover, preconditioning with LiCl reduced cell death and promoted ECM deposition when cells were challenged with a degenerative IVD-like culture. Mechanistically, preconditioning with LiCl induced an increased cellular reactive oxygen species (ROS) level and activation of ERK1/2, which was found closely related to the enhanced cell survival and ECM deposition after preconditioning. Treatment with preconditioned ADSCs showed better therapeutic effects than control ADSC transplantation, with better NP preservation and ECM deposition. Conclusion These results suggest that preconditioning with a medium level of LiCl boosts cell proliferation and differentiation efficacy under normal or hostile culture conditions via activation of the cellular ROS/ERK axis. It is a promising pre-treatment of ADSCs to promote cell functioning and subsequent regenerative capacity, with superior therapeutic effects compared with untreated ADSC transplantation.
Introduction
Low back pain (LBP) is one of the major causes of disability in the elderly [1]. Intervertebral disc degeneration (IVDD) is considered the main cause of LBP [2]. An alteration in the niche of nucleus pulposus (NP) cells results in cell death and matrix imbalance of the NP, with denaturation of type II collagen (Col II) and loss of glycosaminoglycan. Unfortunately, there is no ideal treatment so far. Current treatments include conservative and surgical therapies, which can neither stop the degeneration process of the IVD nor reverse the degeneration [3].
Cell-based therapy has shown great potential in the treatment of multiple diseases and pathological processes [4]. Because of their pluripotent properties and easy access, adipose-derived stem cells (ADSCs) have attracted much attention in bioengineering medicine [5,6]. However, because of the harsh condition in the degenerated intervertebral disc, the therapeutic effects of cell-based therapy are limited by poor survival and impaired cell viability. Chan et al. reported that only 20% of cells survived for over 7 days after transplantation into cryopreserved IVDs under simulated physiological conditions [7,8]. Specifically, previous researchers have shown that acidic and hyperosmotic conditions both impair cell viability and proliferation, as well as extracellular matrix (ECM) deposition [1,[9][10][11][12]. Candidate cells that can adapt to the IVD condition are therefore urgently needed and require further study.
To promote the adaptation and functioning of transplanted cells in pathological conditions, preconditioning treatments have been developed and proved feasible [13,14], mainly involving chemical or biological factors [15,16]. Given the limited space in the degenerated disc, preconditioned cells offer the advantages of higher feasibility and integration, with a lower chance of unexpected effects caused by implanted materials. Recently, preconditioning with lithium has been found beneficial in cell-based therapy against disease models in brain, bone, and heart [17][18][19][20][21]. Lithium can promote the proliferation of bone marrow-derived mesenchymal stem cells (BMSCs) via a glycogen synthase kinase (GSK) 3β-dependent β-catenin/Wnt pathway [22]. In addition, preconditioning with lithium has been found to reduce cell apoptosis by inducing autophagy [23]. These studies suggest that preconditioning with lithium would be a promising method to boost cellular adaptation in a stressed microenvironment. However, the specific influence of preconditioning with lithium on the cellular adaptation of ADSCs to an IVDD-like condition remains elusive.
In this regard, we designed this study to investigate the potential value of preconditioning with lithium for ADSC-based IVDD treatment. Cellular adaptation, including cell proliferation and differentiation under normal or IVDD-like conditions, was evaluated, and the therapeutic effects of preconditioned ADSCs were also assessed using an IVDD rat model.
Cell culture and treatment
Sprague-Dawley (SD) rat ADSCs of passage 2 were purchased from Cyagen Bioscience (China). Cells were cultured in SD rat ADSC basal medium supplemented with 10% fetal bovine serum (FBS), 1% glutamine, and 1% penicillin-streptomycin. The medium was replaced twice every week, and cells at passage 4 were used in subsequent experiments. ADSC microspheres were also prepared: 1 mL of cell suspension containing 3 × 10^5 cells was added to a 15 mL centrifuge tube and centrifuged at 300 g/min at 4°C for 3 min. Microspheres formed after overnight incubation.
For the preconditioning treatment, ADSCs were seeded in 6-well plates at a density of 3 × 10^5 cells/well and maintained overnight for adhesion. LiCl aqueous solution was added to the basal medium at final concentrations of 0, 1, 4, 10, and 20 mmol/L. Preconditioned ADSCs were also prepared for transplantation in vivo; specifically, aliquots of 1 × 10^6 cells were resuspended in basal medium and kept on ice until use.
Cellular ROS production was blocked using apocynin (APO, 5 μM). Cells were maintained in LiCl-containing medium with or without APO for 48 h and then washed with PBS.
Cellular proliferation assay
Cell viability was determined using a Trypan Blue staining reagent. The proliferation ability of ADSCs after preconditioning was determined by counting the cell number, and cell death was determined by the lactate dehydrogenase (LDH) assay. The relative cell number was monitored using a cell counting kit (CCK-8, Dojindo) before treatment, after treatment, and at 2 days post-treatment. Briefly, cells were seeded in 96-well plates at a density of 1 × 10^4 cells/well. The supernatant was discarded, and the cells were incubated with 5% CCK-8 in high-glucose DMEM for 2 h. The supernatant was collected and the absorbance at 490 nm was measured using a spectrophotometer. The LDH assay was performed according to the manufacturer's instructions. The culture medium was collected after preconditioning and 2 days after transfer into the degenerative condition, and the LDH content was determined by measuring the absorbance at 440 nm with a spectrophotometer.
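The conversion of absorbance readings into a relative cell number is not spelled out in the methods; a minimal sketch of one common approach (normalizing background-corrected CCK-8 absorbance to the untreated control) is shown below. All values, including the blank correction, are hypothetical assumptions rather than the authors' procedure.

```python
import numpy as np

# Hypothetical sketch: relative cell number from CCK-8 absorbance at 490 nm,
# normalized to the untreated control after subtracting a blank well.
blank = 0.08                                   # medium-only well (assumed)
control = np.array([1.02, 0.98, 1.05])         # untreated control replicates
treated = np.array([1.21, 1.18, 1.25])         # preconditioned replicates

relative_cell_number = (treated - blank).mean() / (control - blank).mean()
print(f"Relative cell number vs control: {relative_cell_number:.2f}")
```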
Immunofluorescent staining
Cell proliferation ability was also determined by detecting the Ki67 level after preconditioning. Briefly, cells cultured in 12-well plates were washed with cold PBS and fixed with 4% paraformaldehyde for 30 min. Cells were then permeabilized with PBS-Triton (0.5%) for 15 min, blocked with 5% bovine serum albumin (BSA) at room temperature for 1 h, and incubated overnight at 4°C with a Ki67 primary antibody (ab15580).
After washing three times with PBS, the cells were incubated with secondary antibodies conjugated to Alexa Fluor fluorescent dyes (ab150078). After a further wash with PBS, cells were stained with 4′,6-diamidino-2-phenylindole (DAPI, Sigma-Aldrich) for 5 min and washed with PBS before observation. Images were captured using a fluorescence microscope.
Western blot analysis
Western blot analysis was performed to determine the protein content of major NP cell markers. Briefly, cells were washed three times with cold PBS and lysed at 4°C in RIPA buffer (Beyotime) for 30 min. Protein content was measured using the BCA protein assay. For each sample, 30 μg of total protein was loaded, separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto an Immobilon polyvinylidene difluoride (PVDF) membrane (Millipore Billerica). The membranes were blocked with 10% bovine serum albumin (BSA) and incubated with anti-GAPDH antibody (CST#5174), anti-type II collagen antibody (CST#13120), anti-aggrecan antibody (CST#13120), anti-Krt19 antibody (CST#12434), anti-SOX9 antibody (CST#13120), anti-GSK-3b, anti-MMP13, anti-ADAMTS5, anti-ERK1/2, and anti-pERK1/2 antibodies overnight at 4°C. The blots were then rinsed and incubated with an HRP-labeled secondary antibody (Bioker, China) for 2 h at room temperature (RT). The protein bands were visualized using a Bio-Rad imaging system, and band intensities relative to the control were calculated using "Quantity One" software.
Gene expression analysis
RT-PCR was performed to assess the mRNA levels of the essential markers in preconditioned ADSCs under the normal condition or the IVDD condition after differentiation. Briefly, RNAiso reagent was used to extract total RNA, and the PrimeScript Reagent Kit was used for reverse transcription. Quantitative polymerase chain reaction (qPCR) was performed using SYBR Green (Takara) according to the manufacturer's instructions. Primers were synthesized by Sangon Biotech (China), and the sequences are listed in Table 1.
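The quantification method for the qPCR data is not stated; a widely used approach for SYBR Green assays is the 2^-ΔΔCt method. The sketch below is a hypothetical illustration of that calculation; the use of GAPDH as the reference gene and all Ct values are assumptions, not taken from the study.

```python
# Hypothetical sketch of relative quantification by the 2^-delta-delta-Ct method.
# GAPDH as reference gene and all Ct values are assumed for illustration only.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in a sample versus the control condition."""
    d_ct_sample = ct_target - ct_ref            # normalize to the reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # compare against the control
    return 2 ** (-dd_ct)

# Example: a hypothetical ACAN measurement in a preconditioned sample
fold = relative_expression(ct_target=24.1, ct_ref=17.8,
                           ct_target_ctrl=25.6, ct_ref_ctrl=17.9)
print(f"ACAN fold change vs control: {fold:.2f}")
```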
Alcian blue staining
The ECM deposit was also determined by the Alcian blue staining. Briefly, after differentiation, cells were washed with PBS, then fixed with 4% paraformaldehyde for 15 min. The cells were stained using Alcian blue solution for 30 min and washed twice with PBS before observation under microscope.
Cellular ROS level detection
Cellular ROS production in ADSCs was detected using 2′,7′-dichlorofluorescein diacetate (DCFH-DA, Sigma-Aldrich) as previously reported [27]. Cells were washed with PBS after treatment, loaded with DCFH-DA at a concentration of 20 mmol/L in DMEM medium for 30 min, and then washed three times with PBS. Fluorescence images were taken with a fluorescence microscope.
Animal surgery
SD rats (150 g) were purchased from the Shanghai SLAC Animal Research Center. All procedures in this study were approved by the Ethics Committee of Zhejiang Chinese Medicine University Laboratory Animal Research Center. The needle puncture-induced model was applied here. Twenty rats were divided into four groups: normal control (without needle puncture or treatment), degeneration group (needle puncture with PBS treatment), ADSC-treated group (needle puncture with ADSC transplantation), and preconditioned ADSC group (needle puncture with preconditioned ADSC transplantation). Rats were anesthetized with 2% pentobarbital sodium administered intraperitoneally, and a sterile 20-gauge needle was inserted through the annulus fibrosus (AF) into the middle of the NP, rotated 360°, and held for 30 s. The treatment was applied at 2 weeks post puncture, and the total injection volume was about 2 μL.
Histological and biochemical analysis
The rats were euthanized at 3 months after treatment. The specimens were harvested, fixed in 4% PFA, embedded in paraffin, and sectioned at 5 μm thickness. The slices were stained with hematoxylin and eosin (H&E) or safranin O-fast green (S-O).
For immunohistochemical (IHC) staining, the slices were deparaffinized in xylene and rehydrated, treated with 3% H2O2 for 10 min, and then blocked with 5% bovine serum albumin (BSA) for 30 min at room temperature. After that, the slices were incubated with diluted primary antibody at 4°C overnight. After washes with PBS, the slices were incubated at 37°C for 30 min with a biotin-labeled secondary antibody, and the staining was detected by the streptavidin-biotin complex (SABC) method.
The biochemical analysis was performed according to a previous report [3]. Briefly, the NP samples were harvested, frozen at −80°C, and lyophilized for 24 h. The samples were weighed and recorded, then digested with papain (Sigma) for 18 h at 60°C. The contents of sulfated glycosaminoglycans (sGAG) and hydroxyproline in each group were determined using the DMMB assay and a Hydroxyproline Assay Kit (Jiancheng Bioengineering Institute, China), respectively. The results were normalized to the dry weight of the NP sample.
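The conversion from DMMB absorbance to sGAG per milligram dry weight typically runs through a standard curve; the sketch below illustrates that normalization. The standard concentrations, absorbances, digest volume, and dry weights are all hypothetical, and the kit's actual protocol may differ.

```python
import numpy as np

# Hypothetical sketch: sGAG content from DMMB absorbance via a linear standard
# curve, normalized to the dry weight of each NP sample.
std_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # standards, ug/mL (assumed)
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # their absorbances (assumed)
slope, intercept = np.polyfit(std_conc, std_abs, 1)  # fit absorbance = m*conc + b

sample_abs = np.array([0.33, 0.18, 0.26])             # sample absorbances (assumed)
sample_ug_per_ml = (sample_abs - intercept) / slope   # invert the standard curve
digest_volume_ml = 1.0                                # papain digest volume (assumed)
dry_weight_mg = np.array([2.1, 1.8, 2.0])             # lyophilized NP weights (assumed)

sgag_per_mg_dry = sample_ug_per_ml * digest_volume_ml / dry_weight_mg
print(sgag_per_mg_dry)  # ug sGAG per mg dry NP tissue
```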
Statistical analysis
Results are expressed as the mean ± standard deviation of at least three independent experiments. Data were analyzed by one-way analysis of variance (ANOVA; non-parametric test) followed by Dunn's post hoc test for multiple comparisons (SPSS 20). A value of P < 0.05 was considered to indicate statistical significance.
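The group comparisons were performed in SPSS; for readers reproducing a comparable analysis elsewhere, the sketch below shows an analogous non-parametric workflow in Python (Kruskal–Wallis as the global test, Dunn's test for pairwise comparisons). The data are simulated and the scikit-posthocs package is assumed to be available; this is not the authors' code.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp

# Simulated replicate values for four LiCl doses (hypothetical data).
rng = np.random.default_rng(0)
groups = {
    "0 mmol/L": rng.normal(1.00, 0.05, 3),
    "1 mmol/L": rng.normal(1.05, 0.05, 3),
    "4 mmol/L": rng.normal(1.20, 0.05, 3),
    "10 mmol/L": rng.normal(1.10, 0.05, 3),
}

# Global non-parametric comparison across all groups.
h_stat, p_global = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.3f}")

# Dunn's post hoc test for pairwise comparisons, with Bonferroni adjustment.
if p_global < 0.05:
    pairwise_p = sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni")
    print(pairwise_p)
```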
Cellular toxicity of Li
About 40% of cells were found dead after 2 days of treatment with 20 mmol/L LiCl; thus, lower concentrations were applied in subsequent experiments. Cell morphology was also monitored, and cell shape was similar between groups after preconditioning (Fig. 1). Cellular viability was determined using a Trypan Blue staining reagent; the result indicated no significant alteration in the live cell ratio after preconditioning (data not shown). Cell death was also determined using the LDH assay, and the result showed similar LDH levels among the groups treated with 0, 1, 4, and 10 mmol/L LiCl in basal culture medium (data provided in Fig. 5b). These data indicated minimal cellular toxicity of LiCl to ADSCs at concentrations of 10 mmol/L or below.
Preconditioning of LiCl promotes the proliferation of ADSC
Next, cellular proliferation capacity was assessed by cell counting and Ki67 detection after preconditioning with LiCl. Ki67 content was increased after preconditioning, as shown by immunofluorescent staining (Fig. 2a, b). The medium-concentration group (4 mmol/L) showed the greatest increase in Ki67 (P = 0.0134), while the other groups showed no significant difference compared with the control.
The CCK-8 kit was used to monitor the relative cell number. All preconditioned groups showed cell numbers equivalent to the control (Fig. 2c), while the group preconditioned with 4 mmol/L LiCl showed enhanced proliferation capacity compared with the control after preconditioning (Fig. 2d) (P = 0.0134). These results indicated that preconditioning with a medium concentration of LiCl boosts the proliferation ability of ADSCs.
Preconditioning of LiCl enhances the NP-like cell differentiation
In addition to proliferation ability, NP-like cell differentiation potency was evaluated to determine the influence of preconditioning with LiCl. Cells were transferred to NP cell differentiation medium. Notably, the mRNA levels of the NP cell markers were significantly increased in the 10 mmol/L preconditioned group compared with the control after a subsequent NP cell differentiation for 2 weeks (Fig. 3a) (P = 0.0219).
To verify these results, the protein levels of these markers were also detected and were consistent with the mRNA expression pattern (Fig. 3b). Specifically, the contents of ACAN and SOX9 were higher in the 10 mmol/L group than in the other groups. The expression of Col II (P < 0.0001) and Krt19 (P < 0.001) was increased in the 4 mmol/L and 10 mmol/L groups compared with the control. These results suggested that preconditioning could also enhance the NP-like cell differentiation efficacy of ADSCs.
Preconditioning of LiCl promotes cellular adaptation of ADSC to degenerative IVD-like condition
The above data suggest that preconditioning with LiCl can boost the proliferation ability and NP-like cell induction efficacy of ADSCs under normal culture conditions. The hostile microenvironment in the degenerated IVD is one of the major obstacles to ADSC functioning in vivo. To test the effects of preconditioning on cellular adaptation to a degenerative IVD-like condition, the treated cells were challenged with a degenerative IVD-like condition characterized by hyperosmotic pressure, an acidic environment, and low nutrition, according to previous reports [24]. The control group showed a sharp decrease in cell number, while the preconditioned groups did not. The CCK-8 results showed an over 64% cell loss in the control group, with about 10%, 12%, and 13% more cells surviving in the preconditioned groups, respectively (Fig. 4a). In addition, cell death was confirmed using the LDH assay, which was consistent with the CCK-8 result, indicating reduced cell death in all three preconditioned groups under the degenerative IVD-like condition (Fig. 4b) (P < 0.0001).
In addition, the preconditioned ADSCs were cultured in NP cell induction medium and challenged with a degenerative IVD-like condition. After 2 weeks, the ECM deposit was detected by Alcian blue staining. As a result, the preconditioned ADSCs exhibited more ECM deposits than control ADSCs (Fig. 4c). The expression of matrix metalloproteinase 13 (MMP13) and ADAMTS5 was also tested. The results showed a reduction in ADAMTS5 in the 10 mmol/L preconditioned group (P = 0.0219). There was no increase in MMP13 in the 10 mmol/L preconditioned group, though an increase in MMP13 was found in the 4 mmol/L preconditioned group (Fig. 5c, e, f) (P = 0.0219). These results suggested an enhanced adaptation of ADSCs to the degenerative IVD-like condition after preconditioning with LiCl, with reduced cell death, enhanced ECM synthesis ability, and reduced matrix catabolism, especially when preconditioned with 10 mmol/L LiCl.
(Fig. 4 caption fragment: c, ECM content determined using Alcian blue staining; scale bar = 100 μm; error bars depict mean ± SD; ****P < 0.0001, ***P < 0.001, **P < 0.01.)
Before assessing the NP-regenerative effects of LiCl-preconditioned ADSCs in the animal model, the inhibitory effect of LiCl preconditioning on GSK-3b was assessed, because subsequent activation of the Wnt/β-catenin pathway after GSK-3b inhibition could have negative effects on NP regeneration. The expression of GSK-3b in ADSCs was evaluated after preconditioning and after differentiation (Fig. 5a-d). The results showed no significant reduction of GSK-3b in preconditioned ADSCs compared with untreated ADSCs; notably, the medium level of preconditioning even increased the GSK-3b level (P = 0.0134). The content of β-catenin was also tested (data not shown), and its level was not elevated. These data suggested that LiCl preconditioning promotes the adaptation capacity of ADSCs to the degenerative IVD condition without causing significant GSK-3b inhibition or Wnt/β-catenin pathway activation.
Therapeutic effects of preconditioned ADSC in vivo
The therapeutic effects of LiCl-preconditioned ADSCs were assessed with histological and biochemical methods at 3 months after treatment. H&E staining revealed well-organized cells and ECM in the normal control group, while the NP structure was destroyed in the degeneration group, with loss of the lamellar structure of the AF and hyperplastic endplate cartilage. The preconditioned ADSC-treated group exhibited better-preserved NP tissue than the ADSC-treated control and degeneration groups. In addition, S-O staining showed remarkably reduced ECM content in the degeneration group compared with the normal control, and the preconditioned ADSC-treated group showed more ECM deposits than the ADSC-treated control, though still fewer than the normal control (Fig. 6a, b).
(Fig. 5 caption: The preconditioning using LiCl did not elevate the expression of GSK-3b and suppressed catabolism activity. a, b The expression of GSK-3b after preconditioning and c, d after NP differentiation. e, f The protein content of MMP13 and ADAMTS5 after differentiation. Error bars depict mean ± SD. *P < 0.05.)
The distribution of aggrecan and Col II in NP tissues was also determined using IHC staining (Fig. 6c, d). At 3 months after treatment, the degeneration group was found with little positive staining NP tissue. The aggrecan content were distinctly found in NP tissues in normal group and preconditioned ADSC-treated group. Fewer NP tissue was found with aggrecan staining in ADSC-treated control, when compared to preconditioned ADSC-treated group. The Col II staining showed a similar pattern that the preconditioned ADSC-treated group exhibited higher level of Col II content in NP tissue than ADSC-treated control, though still decreased than normal group.
The content of sGAG in NP tissues was also measured by the DMMB assay. The result revealed an increased sGAG content in preconditioned ADSC-treated rats compared with the degeneration control and the ADSC-treated control (Fig. 6e). The hydroxyproline content was also measured to determine the Col II content in NP tissue, and the result revealed an increased Col II content after treatment with preconditioned ADSCs following degeneration (Fig. 6f).
(Fig. 6 caption: LiCl-preconditioned ADSC exhibited NP-regenerative effects. a H&E staining, b S-O staining, and IHC staining of c Col II and d aggrecan of IVD tissue at 3 months after treatment. e The sGAG content normalized to dry NP weight. f Hydroxyproline content normalized to dry NP weight. Error bars depict mean ± SD. **P < 0.001, *P < 0.05, compared to normal control; ##P < 0.01, #P < 0.05, compared to LiCl-preconditioned ADSC (L-ADSC).)
The effects of preconditioning depend on the activation of the cellular ROS/ERK axis
To investigate the potential mechanism of the promotion effects of LiCl preconditioning, the cellular ROS level was monitored, since ROS plays a key role in stem cell fate and function. The results showed a significantly increased ROS level in ADSCs after preconditioning with 4–20 mmol/L LiCl (Fig. 7a, b).
To investigate the relationship between the elevated cellular ROS and the promotion of cellular survival and ECM deposition, ADSC microspheres were prepared and preconditioned with LiCl (10 mmol/L), with (APO+) or without APO (APO−), before NP induction. The cell number was similar between the preconditioned groups, while the APO+ LiCl-preconditioned group exhibited poorer survival in the degenerative IVD-like condition than the APO− LiCl-preconditioned group (Fig. 8a). This result indicated that APO attenuated the promotion of cellular survival by LiCl preconditioning.
ECM deposition with or without APO during preconditioning was also assessed after NP induction in the degenerative IVD-like condition. The APO+ LiCl-preconditioned group showed fewer ECM deposits than the APO− group, as revealed by the weaker Alcian blue staining intensity and the weaker immunofluorescent intensity of aggrecan (Fig. 8b-d).
To further investigate the underlying mechanisms of the LiCl-induced, ROS-dependent behavior, the MAP kinase ERK1/2 axis was examined, since ROS is an important mediator of MAP kinase activation [28]. The results showed an elevated pERK1/2/ERK1/2 ratio after LiCl preconditioning. In addition, preventing ROS generation during preconditioning with APO attenuated the activation of ERK1/2 (Fig. 8e, f). Accordingly, the suppressive effect of LiCl preconditioning on MMP13 and ADAMTS5 was also attenuated (Fig. 8g, h).
Taken together, our data suggest that LiCl preconditioning promotes cellular survival and ECM deposition and suppresses catabolic activity in ADSCs under IVD conditions, and that these effects depend on the activation of the ROS/ERK axis.
Discussion
Cellular adaptation to a stressed host condition is essential for a successful therapy. NP cells have been found to have superior tolerance to the IVD condition, including survival and ECM synthesis ability, compared with naïve MSCs. In this study, we investigated the influence of preconditioning with LiCl on the cellular behavior of ADSCs under normal culture conditions, and the adaptation of preconditioned ADSCs to a degenerative IVD-like condition, via a series of experiments. The results suggested that preconditioning with LiCl boosted the proliferation ability and promoted the cellular adaptation of ADSCs to the degenerative IVD-like condition, which effectively reduced cell death and benefited ECM deposition by ADSCs in vitro and in vivo. Moreover, these effects were found to be closely related to the elevated cellular ROS and activation of ERK1/2 during LiCl preconditioning. The application of a ROS scavenger during preconditioning significantly attenuated the promotion of cellular adaptation of ADSCs in the IVD condition.
The tissue microenvironment comprises biological, physicomechanical, and chemical conditions, which affect almost every cellular activity by influencing transcriptional and major metabolic processes. Accordingly, various strategies have been developed to regulate cellular behaviors with biological factors such as cytokines, by changing physicomechanical properties, or by changing the content of specific components in the milieu [29][30][31]. Compared with biological methods, treatment using physicomechanical or inorganic elements is much easier to deliver and maintain, since the modulatory efficacy is believed to be more stable. For example, the iron balance theory has been widely applied in anti-cancer therapy by facilitating ferroptosis of cancer cells [32,33]. Selenium is also essential for the synthesis of the selenoprotein GPX4, which is indispensable for normal embryogenesis [34]. Lithium has been used to treat manic depression for over 100 years. Lithium has also been found to paradoxically reduce lymphocyte production but enhance their function [35,36]. Recent studies have also identified promotion of the cellular function of MSCs after preconditioning with LiCl, which was also observed in this study, suggesting a promising therapeutic role in cell-based treatment [17,18]. The most important molecular mechanism of lithium is its mediatory action on GSK3, which can in turn phosphorylate transcription factors to turn on genes related to cell growth, differentiation, inflammation, etc., such as Wnt/β-catenin, Myc, and NF-κB [37][38][39].
(Fig. 8 caption fragment: The ECM content of ADSC microspheres determined using Alcian blue staining, scale bar = 50 μm. c Immunofluorescent staining of aggrecan in ADSC microspheres, scale bar = 50 μm. d Quantitative analyses of aggrecan content. e, f ERK1/2 and pERK1/2 expression of preconditioned ADSC with or without APO treatment. g, h The mRNA expression of MMP13 and ADAMTS5 of preconditioned ADSC with or without APO treatment. Error bars depict mean ± SD. ***P < 0.001, **P < 0.01, *P < 0.05.)
In this study, the influence of preconditioning on cell proliferation was comparable across the different groups (1–10 mmol/L). According to a previous study, the favorable effect of lithium on the proliferation and differentiation of MSCs is dose dependent [40]. Satija et al. [41] concluded that promotion of cell proliferation occurred when the concentration was less than 5 mmol/L. This was also supported by de Boer et al., who showed that a low concentration, or low activity of the Wnt pathway, caused proliferation of uncommitted MSCs [42]. Our data are partially consistent with these findings, in that the cells treated with 4 mmol/L LiCl exhibited the highest proliferation rate under the normal condition. Although inhibition of GSK-3b would cause activation of the Wnt pathway, the preconditioning treatment with LiCl used here (1–10 mmol/L) did not cause a significant reduction of GSK-3b, nor did it during the NP differentiation process.
In addition, our data provide the first evidence that preconditioning with lithium enhances the adaptation of ADSCs to the degenerative IVD-like condition, with reduced cell death and increased ECM deposition. Our results also suggest the participation of elevated cellular ROS in this process during preconditioning. Previous studies have found that ROS plays important roles in cell fate decisions and that an optimal ROS level is critical for nuclear reprogramming and the generation of pluripotent stem cells [43]. Subsequently, the ROS elevated after LiCl preconditioning caused activation of ERK1/2, which has been found critical for the adaptation of transplanted stem cells to the harsh IVD condition [11] and also plays a key role in the anti-inflammatory response [44]. Consistently, the addition of an antioxidant to prevent ROS generation during preconditioning attenuated the enhancement of cellular adaptation to the harsh condition. Therefore, we consider the ROS/ERK axis to play a critical role in the preconditioning effects on ADSCs in this study.
In addition, preconditioning with LiCl has been reported to strongly protect MSCs against apoptosis by upregulating the gene expression of Naip1, Erc1, and Faim2 [18]. Beyond these anti-apoptosis genes, lithium activates Akt-1, a critical protein kinase that modulates the apoptotic pathway [35]. Both mechanisms may contribute to the adaptation of preconditioned ADSCs to the stressed degenerative condition and to the enhanced ECM deposition in the degenerative IVD-like condition. The influence of lithium preconditioning on stem cells could be complex; further studies are intended to uncover the interaction between preconditioning with lithium and cellular adaptation.
Conclusion
In this study, we found positive influences of preconditioning with LiCl on ADSCs, including improved proliferation and NP-like cell differentiation efficacy under normal conditions, better cell adaptation in a hostile degenerative IVD-like condition with reduced cell death and enhanced ECM deposition, and superior therapeutic effects against IVDD in vivo. These influences were closely related to the activation of the ROS/ERK axis in preconditioned ADSCs. This novel inorganic method of LiCl preconditioning has promising application potential to boost ADSC activity for IVD regeneration. | 2020-11-26T09:05:37.623Z | 2020-11-19T00:00:00.000 | {
"year": 2021,
"sha1": "9f54d83de19e8f0e0089c9c6a2c87976b7ca4ce4",
"oa_license": "CCBY",
"oa_url": "https://stemcellres.biomedcentral.com/track/pdf/10.1186/s13287-021-02306-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fc7c68d9e191e25a5880a3100db17f097c2fac8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
261847799 | pes2o/s2orc | v3-fos-license | Associations between social isolation and diet quality among US adults with disability participating in the National Health and Nutrition Examination Survey, 2013–2018
Highlights • More than half of U.S. adults with disability have "poor" diet quality. • Healthy Eating Index (HEI) 2015 is a measure of diet quality based on U.S. guidelines. • Social isolation is associated with lower total HEI-2015 score in unadjusted results. • Social isolation is associated with lower vegetable, seafood and plant protein intake. • Associations were modest but underline areas for further research.
Introduction
Disability affects one in four (61 million) US adults (Okoro et al., 2019) and is a well-established risk factor for social isolation (Macdonald et al., 2018; Emerson et al., 2020) and inadequate nutrition (An et al., 2015; Xu et al., 2012; Kim et al., 2013; Sugiura et al., 2016). Improving the health of people living with disabilities and improving Americans' nutrition status are Healthy People 2030 goals (U.S. Department of Health and Human Services, 2020). Improving diet quality is important for chronic disease prevention and management and for preserving the functional status and cognition of older adults (Zhao and Andreyeva, 2022; Fan et al., 2021), especially relevant for adults who already have a disability.
Studies globally have revealed significant associations between disability and nutrition status, considering diet quality based on reported intake only (An et al., 2015; Xu et al., 2012) or overall nutritional risk accounting for environmental, food insecurity, and health factors (Kim et al., 2013; Sugiura et al., 2016). Two studies using US nationally representative National Health and Nutrition Examination Survey (NHANES) data found an inverse association between better nutrition and odds of disability (An et al., 2015; Xu et al., 2012).
Social isolation, the objective lack of interactions with others or lack of a social network, is distinct from loneliness, the subjective perception of absence of social interaction (Leigh-Hunt et al., 2017). Social isolation and loneliness are consistently associated with poor mental health, cardiovascular outcomes, and mortality, and may be as important as traditional clinical risk factors in predicting mortality (Leigh-Hunt et al., 2017; Pantell et al., 2013). In older adults, social isolation and loneliness have been independently associated with poor nutrition (Boulos et al., 2017; Sahyoun and Zhang, 2005). In a study among middle-aged and older US adults using NHANES 2007-2008 data, greater social support was associated with better diet quality as measured by Healthy Eating Index (HEI) scores (Pieroth et al., 2017). There is prior evidence that this relationship differs by gender, with social isolation having a stronger negative effect on men compared with women, particularly single or widowed men (Pieroth et al., 2017; Vinther et al., 2016; Noguchi et al., 2021).
While studies have documented how disability and social isolation affect nutrition status separately, few have examined the association between social isolation and diet quality and food access specifically among people living with a disability. Further, most studies that investigate disability limit the study population to older adults, excluding a substantial proportion of adults living with disability. Younger adults may experience the effects of social isolation differently than older adults (Emerson et al., 2020; Schwartz et al., 2019; Holt-Lunstad et al., 2015) and on average have worse diet quality (U.S. Department of Agriculture, 2015). This study aimed to address these gaps by assessing whether social isolation is associated with diet quality among a nationally representative sample of community-dwelling US adults living with a disability, and whether age and gender (Pieroth et al., 2017) are effect measure modifiers of the relationship between social isolation and diet quality. We hypothesized that social isolation would be associated with lower diet quality among people living with a disability, and that this relationship would be stronger among younger compared with older adults, and among men compared with women.
Data source and study population
We conducted a cross-sectional analysis using NHANES 2013-2018 data. NHANES is a nationally representative survey designed to assess the health and nutrition status of the non-institutionalized US population (Chen et al., 2020). Survey procedures and sampling design have been described extensively previously (Chen et al., 2020). Disability was defined using the Physical Functioning Questionnaire (PFQ), representing physical functioning limitations due to long-term physical, mental, and emotional problems or illness (An et al., 2015; Centers for Disease Control and Prevention, 2020). A positive response was "some difficulty," "much difficulty," or "unable to do" for any question regarding ability to complete activities within the categories of daily living, instrumental activities of daily living, lower extremity mobility, and general physical activities (An et al., 2015; Xu et al., 2012). The 29,400 participants in the NHANES 2013-2018 cycles included 5,994 adults with a self-reported disability. We excluded those who were pregnant or breastfeeding (n = 267) or had missing dietary data (n = 808). The final sample was 5,167 participants (Fig. 1), reflecting a weighted frequency of 67 million. This study was exempt from Institutional Review Board approval due to use of a publicly available, deidentified data source.
Healthy Eating Index
Diet quality was measured using the HEI-2015, which examines how closely an individual's diet aligns with the 2015-2020 Dietary Guidelines for Americans (Kirkpatrick et al., 2019; Krebs-Smith et al., 2019). It comprises 13 component scores: "adequacy" components (higher intake gives higher scores: total and whole fruits, total vegetables, greens and beans, whole grains, dairy, total proteins, seafood and plant proteins, fatty acids) and "moderation" components (higher intake gives lower scores: refined grains, sodium, added sugars, saturated fats). Scores range from 0 to 100; a higher score corresponds with better diet quality. As it is recommended against using multiple versions of the HEI in a study due to the possibility of score differences, we applied the HEI-2015 to all the data in our sample (Kirkpatrick et al., 2019). We categorized scores into < 51 ("poor" diet), 51-80 (diet "needs improvement") and 81 or above ("good" quality diet) (Choi et al., 2021; Basiotis et al., 2004).
We calculated HEI scores by applying the respective scoring algorithm to the day 1 dietary interview data, which was conducted in person using a 24-hour dietary recall via a multiple-pass method (U.S. Department of Agriculture, 2022). Survey data are weighted according to the day of the week of the interview. Dietary intake data are linked to the US Department of Agriculture (USDA) Food and Nutrient Database for Dietary Studies to determine the nutrient breakdown of foods eaten (U.S. Department of Agriculture, 2022). We applied the "Simple HEI Scoring Algorithm - Per Day" using SAS macros, publicly available from the National Cancer Institute (National Cancer Institute, 2023).
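The official scoring is done with the NCI SAS macros applied to the FNDDS-linked intake data; as a rough illustration of the underlying logic, the sketch below scores a single density-based adequacy component by prorating intake per 1,000 kcal between zero and the standard for the maximum score. The 1.1 cup-equivalent standard shown for total vegetables approximates the published HEI-2015 value but is included here only to demonstrate the mechanics, not as a substitute for the official algorithm.

```python
def adequacy_component_score(amount_per_1000_kcal, max_points, standard_for_max):
    """Prorate a density-based HEI adequacy component between 0 and max_points.

    Intakes at or above the standard earn full points; zero intake earns 0;
    intermediate intakes are scored proportionately, mirroring the general
    HEI-2015 scoring logic (moderation components are scored in reverse).
    """
    ratio = min(amount_per_1000_kcal / standard_for_max, 1.0)
    return max_points * ratio

# Illustrative call for the 5-point total vegetables component.
veg_cup_eq = 0.7        # hypothetical day-1 intake, cup equivalents
energy_kcal = 1800.0    # hypothetical day-1 energy intake
density = veg_cup_eq / (energy_kcal / 1000.0)
print(f"Total vegetables score: {adequacy_component_score(density, 5, 1.1):.2f}")
```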
Social isolation
We created a social isolation index based on existing validated social isolation indices that measure social isolation as a construct across multiple domains (Berkman and Syme, 1979; Victor et al., 2008; Cornwell and Waite, 2009). Our index uses available NHANES questions covering the domains of marriage/partnership (Berkman and Syme, 1979; Pohl et al., 2017), living alone (Cudjoe et al., 2020; Victor et al., 2000), and participation in social activities (Berkman and Syme, 1979; Cornwell and Waite, 2009; Cudjoe et al., 2020). Living alone has been used in the National Health and Aging Trends Study to measure social isolation among older adults, and is associated with nutrition risk (Cudjoe et al., 2020; Victor et al., 2000; Weddle and Fanelli-Kuczmarski, 2000).
The social isolation index computes a score ranging from 0 to 4 (higher scores representing more social isolation) based on four components: marital status (1 point for widowed, divorced, separated, or never married; 0 points for married or living with a partner), living alone (1 point for a household size of one), and two items from the PFQ: how much difficulty do you have "going out to things like shopping, movies or sporting events," and "participating in social activities [visiting friends, attending clubs or meetings or going to parties]" (one point for each with answers of "some difficulty," "much difficulty," or "unable to do"). This social isolation index was correlated with depression and with self-reported health status within our study sample (Spearman's correlation p < .01), which is similar to a method described previously for validating a new social support measure (Pohl et al., 2017).
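The scoring rules above translate directly into a small function; the sketch below restates them in Python. The variable names and response codings are hypothetical stand-ins for the underlying NHANES items, not the exact field names.

```python
def social_isolation_score(marital_status, household_size,
                           difficulty_going_out, difficulty_social_activities):
    """Compute the 0-4 social isolation index described in the text.

    One point each for: not being married or partnered; living alone; any
    reported difficulty going out (shopping, movies, sporting events); and any
    reported difficulty participating in social activities.
    """
    difficulty_answers = {"some difficulty", "much difficulty", "unable to do"}
    score = 0
    score += marital_status in {"widowed", "divorced", "separated", "never married"}
    score += household_size == 1
    score += difficulty_going_out in difficulty_answers
    score += difficulty_social_activities in difficulty_answers
    return score

# Example: divorced, lives alone, some difficulty going out, no social difficulty.
print(social_isolation_score("divorced", 1, "some difficulty", "no difficulty"))  # 3
```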
Statistical analysis
We obtained frequencies and proportions of categorical variables and the mean (SE) for HEI score. We used one-way ANOVA to compare HEI scores by covariate categories.
Linear regression estimated the associations between social isolation and continuous HEI score. In the multivariable-adjusted linear regression models, we adjusted for covariates that have been associated with social isolation and/or total HEI score in the literature, with cut points based on similar studies using NHANES data (Pieroth et al., 2017; Bigman and Ryan, 2021): age (categorized into 18-39, 40-59, and ≥ 60 years, similar to how NHANES dietary data are presented by the USDA, which groups age by decade (U.S. Department of Agriculture, 2022)), gender (male, female), race/ethnicity (Mexican American or Hispanic, non-Hispanic White, non-Hispanic Black, Other), education (<high school, high school or GED, >high school), chronic condition count, smoking status (never, former, current), and physical activity level (using the Physical Activity Questionnaire; sedentary: 0 min/week activity; somewhat active: > 0 min and < 75 min/week vigorous or 150 min/week moderate activity; active: ≥ 75 min/week vigorous or 150 min/week moderate activity). Chronic condition count included diabetes, hypertension, heart disease, stroke, chronic lung disease, cancer, arthritis, and osteoporosis (Falvey et al., 2021), categorized as 0, 1, ≥ 2. Depression was defined based on positive screening via the Patient Health Questionnaire 9 (PHQ-9) score ≥ 10 (Manea et al., 2012). We examined social isolation score as a categorical variable rather than continuous due to suspected non-uniformity in differences between score levels and having fewer than five categories (Rhemtulla et al., 2012). We collapsed scores 3 and 4 into a single category due to small sample sizes and suspected similarity in severity of social isolation compared to lower score categories. A score of 0 was the reference category.
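The models themselves were fit with SAS survey procedures; for orientation, the sketch below shows a simplified Python analogue of the adjusted model using weighted least squares with the day-1 dietary weight. Column names are hypothetical stand-ins for the NHANES variables, and because the sketch ignores the strata and cluster design variables, it approximates the point estimates but not the design-based standard errors reported in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file; column names are stand-ins for NHANES variables.
df = pd.read_csv("nhanes_disability_hei.csv")

# Weighted least squares with the day-1 dietary weight; NOTE: this ignores the
# strata/cluster design, so standard errors are not design-based.
model = smf.wls(
    "hei_total ~ C(si_score, Treatment(reference=0)) + C(age_cat) + C(gender)"
    " + C(race_eth) + C(education) + C(chronic_count) + C(smoking) + C(activity)",
    data=df,
    weights=df["diet_day1_weight"],
).fit()
print(model.params.filter(like="si_score"))
```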
To assess whether age and sex were effect measure modifiers, stratified linear regression models assessed the association between social isolation and HEI score by age and sex (Pieroth et al., 2017; Kobayashi and Steptoe, 2018). The interaction was deemed significant if the p-values for likelihood ratio tests for global interaction (social isolation and age category, social isolation and gender) were < 0.05. As a secondary analysis to assess potential mediation of the association between social isolation and nutrition by depression, we compared estimates for social isolation between multivariable models including and excluding depression (defined based on PHQ-9 score ≥ 10 (Manea et al., 2012)). The delta beta between the two models is interpretable as the extent of mediation by depression (MacKinnon et al., 2007).
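Continuing the hypothetical sketch above, the likelihood-ratio test for global interaction and the delta-beta comparison for mediation by depression could be expressed as follows; again this is illustrative, not the paper's SAS code, and the same caveats about the ignored survey design apply.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("nhanes_disability_hei.csv")  # same hypothetical file as above

# Likelihood ratio test for the social isolation x gender interaction.
base = "hei_total ~ C(si_score) + C(gender) + C(age_cat) + C(race_eth)"
reduced = smf.wls(base, data=df, weights=df["diet_day1_weight"]).fit()
full = smf.wls(base + " + C(si_score):C(gender)",
               data=df, weights=df["diet_day1_weight"]).fit()

lr_stat = 2 * (full.llf - reduced.llf)
df_diff = full.df_model - reduced.df_model
p_interaction = stats.chi2.sf(lr_stat, df_diff)
print(f"LRT for interaction: chi2 = {lr_stat:.2f}, p = {p_interaction:.3f}")

# Delta-beta for mediation by depression: change in the social isolation
# coefficients after additionally adjusting for the depression indicator.
with_dep = smf.wls(base + " + depression",
                   data=df, weights=df["diet_day1_weight"]).fit()
delta_beta = (reduced.params.filter(like="si_score")
              - with_dep.params.filter(like="si_score"))
print(delta_beta)
```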
We conducted analyses using SAS Studio (Cary, NC). We used SAS survey procedures, accounting for sample weights, strata, and clustering parameters specific to the 2013-2018 NHANES cycles. We imputed missing data for variables with > 5% missing: depression (missing n = 291) and each of the social isolation variables comprising the index (missing n = 571 for the computed social isolation score), using the hot-deck method with jackknife variance estimation and weighted selection.
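A minimal sketch of random hot-deck imputation within donor groups is shown below; the grouping variables are an assumption, and the weighted selection and jackknife variance estimation used in the paper are not reproduced.

```python
import numpy as np
import pandas as pd

def hotdeck_impute(data, column, group_cols, seed=0):
    """Fill missing values in `column` by sampling observed donor values within
    groups defined by `group_cols` (simple unweighted random hot deck)."""
    rng = np.random.default_rng(seed)
    out = data[column].copy()
    for _, idx in data.groupby(group_cols).groups.items():
        block = out.loc[idx]
        donors = block.dropna().to_numpy()
        missing = block.index[block.isna()]
        if len(donors) and len(missing):
            out.loc[missing] = rng.choice(donors, size=len(missing))
    return out

# Hypothetical usage: impute the depression indicator within age/gender cells.
# df = pd.read_csv("nhanes_disability_hei.csv")
# df["depression"] = hotdeck_impute(df, "depression", ["age_cat", "gender"])
```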
The most common social isolation components were being single (42.6%) and having difficulty going out to things like shopping, movies, or sporting events (34.5%), while fewer participants reported having difficulty participating in social activities (visiting friends, going to parties, meetings, and clubs) or living alone. For social isolation score, the distribution ranged from 36.2% with a score of 0 (n = 1616) to 15.1% with a score of 3 or 4 (n = 630 for 3, and n = 274 for 4).
The most common health problems that participants identified as causing difficulty with activities assessed by the PFQ were chronic bone problems (including arthritis and fractures; 54.4%) and back or neck problems (43.9%). Many reported sensory issues (18.5%), which were similar in frequency to emotional concerns including depression/anxiety (18.4%).
In the unadjusted linear regression model, social isolation score was associated with lower total HEI-2015 score, with the estimate greatest for social isolation score 3 compared to score 0 (β = −2.81, 95% CI −4.30, −1.33) (Appendix, A.1). This association did not remain statistically significant in the adjusted model (Table 2). Social isolation score was associated with lower scores for several adequacy components. These included total vegetables for all social isolation score categories, and seafood and plant proteins for score categories 1 and 3 compared with score 0. For social isolation score 3, the β estimate for total vegetables was −0.36 (95% CI −0.55, −0.17) and for seafood and plant proteins, the estimate was −0.26 (95% CI −0.49, −0.03). In contrast, social isolation score 1 was associated with a higher whole grains score compared to score 0 (β = 0.40, 95% CI 0.06, 0.73).
In adjusted analyses, no individual social isolation components had significant β estimates for total HEI-2015 score when compared with the reference category of not meeting criteria for that component (Table 3). For HEI-2015 components, those who reported single status had a lower total vegetable score than those who were married or living with a partner (β = −0.21, 95% CI −0.33, −0.10). Participants who indicated difficulty with one of the social engagement measures, going out to do things like shopping, movies, or sporting events, had lower scores for total vegetables, greens and beans, total protein foods, and seafood and plant proteins compared to those who did not report this difficulty. The largest magnitude of these associations was for seafood and plant proteins (β = −0.25, 95% CI −0.43, −0.08). This social engagement measure was also associated with a lower added sugars score (β = −0.34, 95% CI −0.62, −0.06).
There was little evidence of effect modification by age or gender (likelihood ratio tests for global interaction, p values > 0.05) (Appendix, A.2).
In our sensitivity analysis with additional adjustment for depression (Appendix, A.3), estimates for social isolation and total HEI-2015 score, adequacy components, and moderation components were all similar to the model without adjustment for depression. This suggests that there was not substantial mediation by depression when adjusting for other covariates.
Discussion
This study is among the first to examine the association between social isolation and diet quality among adults with disability in a nationally representative US sample. Higher social isolation was associated with lower overall diet quality, but this did not remain significant in the adjusted analysis. Observed associations were modest. Social isolation score was associated with lower intake of total vegetables and seafood and plant proteins, but associations were sometimes inconsistent across social isolation score categories. When we evaluated specific social isolation components, we found that single status was associated with lower vegetable intake, and having difficulty going out to things was associated with lower intake of total vegetables, greens and beans, total protein foods, and seafood and plant proteins, and higher intake of added sugars. There were no differences in the association between social isolation and HEI score stratified by gender or age.
Our observed differences were small (<1.0 point lower for the significant HEI-2015 score components), although they are on par with prior literature examining HEI-2015 component scores among women, which observed one-third to one-half points lower on certain adequacy components comparing those with and without disability (Deierlein et al., 2022). It has been proposed that for intervention studies comparing HEI scores across groups, a clinically significant difference in scores would be 5 or 6 points to achieve a moderate effect size of 0.5 (Kirkpatrick et al., 2019). Our adjusted estimates are substantially lower than this. Importantly, however, the majority of observed total HEI scores (overall and across all social isolation scores) would be categorized as diet quality that is poor (<51) or needs improvement (51-80); <2% of scores were above 81, the cutoff often used to reflect "good" diet quality (Kirkpatrick et al., 2019; Choi et al., 2021; Basiotis et al., 2004). Diet quality of adults in the US population overall needs improvement; 10.7% of older adults in the 2013 Health Care and Nutrition Survey had "good" diet quality (Choi et al., 2021). Thus, these results emphasize the need to improve diet quality among adults with disability in the setting of needed improvements among the US population overall. Our results may highlight specific dietary components that scored lower in association with social isolation, but given the small magnitudes observed, this warrants further investigation.
Studies focusing on middle-aged and older adults have reported similar associations between social isolation and nutrition. A study of NHANES 2007-2008 data among adults age 40+ found that social isolation was associated with lower diet quality, and lower component scores for total and whole fruit, whole grains, seafood and plant proteins, fatty acids, and empty calories (Pieroth et al., 2017). Significant findings were restricted to men, while we found evidence of associations for both men and women.
Other studies have observed similar findings, with fewer social contacts or social isolation associated with lower consumption of fruits and vegetables (Sahyoun and Zhang, 2005; Kobayashi and Steptoe, 2018). Additionally, studies outside the US have revealed stronger associations between marital status and less healthy eating patterns among men compared with women (Vinther et al., 2016; Noguchi et al., 2021). We could not identify prior studies examining whether the relationship between social isolation and HEI score differs by age group. The age distribution of our sample, with only 40% below age 60, as well as the limitations of our social isolation index, may have limited the ability to detect effect modification. Given the constraints of the available items in recent NHANES cycles, our social isolation index did not fully capture important domains such as size of social network, frequency of social contact, and church or club participation (Pohl et al., 2017). These domains may be important within strata of age and/or gender.
Single status as a risk factor for lower vegetable intake is consistent with prior literature. Longitudinal studies examining marital transitions among middle-aged and older adults in England and Japan reported lower intake of fruits and vegetables among widowed, separated or divorced adults, particularly men (Vinther et al., 2016; Noguchi et al., 2021). Potential mechanisms for this association pertinent to relationships and health status include gender norms (e.g., food preparation, women monitoring the health of their partners) and marital partnership's effects on promoting healthy behaviors through self-regulation and meaning, purpose, and obligation (Umberson, 1987). One of the two social engagement measures we examined was associated with lower intake of several adequacy components. This measure represents those with difficulty accessing the physical environment outside the home, such as shops and entertainment, and may potentially extend to other activities, such as those related to instrumental activities of daily living. Vegetables, total protein, and seafood and plant proteins may represent fresh foods, thus the association may be driven by difficulty with shopping for or affording these fresh foods. This is consistent with the food insecurity and various environmental barriers to food access that adults with disability face, including transportation, neighborhood environment (curbs, walkability), and store environment (Schwartz et al., 2019; Huang et al., 2012; Jackson et al., 2019). Inadequate social support can exacerbate these barriers, whereas adequate social support can mitigate them. Turning to comfort foods due to stress and anxiety, as suggested previously in analyses of dietary patterns during the COVID-19 pandemic lockdowns, may influence the higher intake of added sugars observed in this group (Bennett et al., 2021).
Limitations
Importantly, our social isolation index is not a validated measure, although it is based on previously validated social isolation measures. Also, the index was correlated with depression and self-reported health status, consistent with previously validated measures (Pohl et al., 2017). It does not capture all important domains of social isolation (e.g., size of social network, frequency of contact), so there is potential for misclassification of exposure. However, the highest level of social isolation measured by our index potentially represents a lower level of social isolation compared to indices that capture all domains, so our results may be an underestimation of the true association between social isolation and diet quality.
The cross-sectional design limits the ability to assess temporality and causality in the relationship between social isolation and diet quality. Additionally, NHANES excludes institutionalized adults, so results may not be generalizable to those living in nursing facilities, who may have more severe disability or experience factors influencing their nutritional intake that differ from adults living independently in the community. For our classification of disability, defined as a physical functioning limitation, we were unable to discern the chronicity/duration of the disability. Finally, use of a single 24-hour dietary recall is also an imperfect albeit acceptable method for assessment of dietary intake (National Cancer Institute, 2023; U.S. Department of Agriculture, 2022).
Strengths
This study has numerous strengths, including the use of nationally representative data. The study population was also novel for inclusion of all adults with disability, whereas most studies in this field have focused on middle-aged or older adults. We controlled for a comprehensive set of covariates with prior established associations with social isolation, nutrition, or both. Our findings also suggest that depression was not a major mediator of the results, emphasizing the need for further investigation of this relationship. Social isolation in this population may operate outside a solely mental health pathway to impact nutrition.
Conclusions
In this nationally representative study of US adults with disabilities, we found that this population is not meeting national nutrition standards, a finding with significant public health implications. Social isolation may adversely impact intake of vegetables and quality protein sources, consistent with results from studies of middle-aged and older adult populations. Among adults with disabilities, these associations were inconclusive and warrant further study. If confirmed, focused interventions would be important, as inadequate intake of these foods may contribute to worsened disability progression and chronic disease risk.
Further study is needed across the age spectrum of adults with disability, particularly younger adults. Validated social support questionnaires should be incorporated into national surveys to further examine social isolation as a risk factor for dietary quality and other health behaviors and outcomes. Emerging research on how social isolation related to COVID-19 pandemic lockdowns affected the nutrition status of certain populations should include adults with disabilities. These findings support the need to screen adults with disability for social isolation and nutrition status.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: During the preparation of this manuscript, K. H. Barry's spouse was employed at Verily Life Sciences. Neither her spouse nor Verily Life Sciences had any role in the present project. The authors have no other conflicts of interest.
Table 1
(continued) a HEI = Healthy Eating Index; using HEI-2015. b The ages of all respondents ages 80 years and older are coded as '80' due to disclosure risk. c Not married or not living with a partner; d Household size of one; e Activities like visiting friends, attending parties, clubs, meetings; f Activities such as shopping, movies, sporting events. g Positive screening based on score ≥ 10 on the PHQ-9. h Active = 150 min/week moderate or 75 min/week vigorous activity; Somewhat active = ≥ 0 min but < active. i Health problems causing difficulty with activities assessed in the physical functioning questionnaire; categories are not mutually exclusive. p-value for ANOVA (yes vs. no for each category). Bold: statistically significant at p < 0.05.
Table 2
Multivariable-adjusted linear regression results for HEI total and component scores by social isolation score among adults with disability, NHANES 2013-2018.
a Healthy Eating Index-2015, maximum score 100 points. b Higher scores indicate more social isolation. c Includes those positive for 3 or 4 components. d Higher intakes result in higher scores. e Dark green and orange vegetables and legumes. f Higher intakes result in lower scores. *p < 0.05, **p < 0.01.
Table 3
Multivariable-adjusted linear regression results for HEI total and component scores by social isolation component among adults with disability a , NHANES 2013-2018.
c Those with household size of one compared to those with household size > 1. d Those with difficulty going out to do things such as shopping, movies, sporting events compared to those without difficulty. e Those with difficulty participating in social activities such as visiting others, going to meetings, clubs, parties compared to those without difficulty. f Higher intakes result in higher scores. g Dark green and orange vegetables and legumes. h Higher intakes result in lower scores. *p < .05; **p < .01. | 2023-09-15T15:18:32.649Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "e7c844d4a5e89022b2aa87d2ec3de7b1d020840c",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "92ce859130e9b9e4582659f70f6e80773242b4bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
125924292 | pes2o/s2orc | v3-fos-license | Fermi Large Area Telescope Observations of the Monoceros Loop Supernova Remnant
We present an analysis of the gamma-ray measurements by the Large Area Telescope onboard the \textit{Fermi Gamma-ray Space Telescope} in the region of the supernova remnant~(SNR) Monoceros Loop~(G205.5$+$0.5). The brightest gamma-ray peak is spatially correlated with the Rosette Nebula, which is a molecular cloud complex adjacent to the southeast edge of the SNR. After subtraction of this emission by spatial modeling, the gamma-ray emission from the SNR emerges, which is extended and fit by a Gaussian spatial template. The gamma-ray spectra are significantly better reproduced by a curved shape than a simple power law. The luminosities between 0.2--300~GeV are $\sim$~$4 \times 10^{34}$~erg~s$^{-1}$ for the SNR and $\sim$~$3 \times 10^{34}$~erg~s$^{-1}$ for the Rosette Nebula, respectively. We argue that the gamma rays likely originate from the interactions of particles accelerated in the SNR. The decay of neutral pions produced in nucleon-nucleon interactions of accelerated hadrons with interstellar gas provides a reasonable explanation for the gamma-ray emission of both the Rosette Nebula and the Monoceros SNR.
Introduction
The shock waves of supernovae accelerate particles to very high energies through the mechanism of diffusive shock acceleration (e.g., Blandford & Eichler 1987). However, the processes of acceleration, release from the shock region, and diffusion in the interstellar medium of such particles are not well understood. Gamma-ray observations in the GeV domain are a powerful probe of these mechanisms. The Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope has detected GeV gamma rays from several SNRs (e.g., Thompson et al. 2012; Acero et al. 2016b, and references therein).
The Monoceros Loop (G205.5+0.5) is a well-studied middle-aged SNR. It has a large diameter (∼ 3.8°), which allows detailed morphological studies in high-energy gamma rays since the LAT has a comparable point-spread function (PSF) above a few hundred MeV (the 68% containment angle above 1 GeV is smaller than 1°). The radio emission of the SNR has a non-thermal spectrum (e.g. Xiao & Zhu 2012), indicating the existence of high-energy electrons. A young stellar cluster and molecular cloud complex, the Rosette Nebula, is located at the edge of the southern shell of the SNR. The Hα line widths in the SNR ridge overlapping with the Rosette region are larger than near the center of the Rosette Nebula (Fountain et al. 1979), suggesting that the SNR is interacting with the Rosette Nebula. Turner (1976) obtained distances of 1.6 kpc for stars associated with the Rosette Nebula from main sequence fitting. Odegard (1986) argued that the distance of the SNR is 1.6 kpc based on his decameter wavelength observations of the absorption of nonthermal emission from the SNR by the Rosette Nebula. In this paper, we adopt this distance of 1.6 kpc for both objects. The age was estimated to be ∼ 3 × 10⁴ yr based on the X-ray data and an SNR model (Leahy et al. 1986).
In Monoceros, a very-high-energy (VHE) gamma-ray source, HESS J0632+057, was first discovered at TeV energies by the High Energy Stereoscopic System (H.E.S.S.; Aharonian et al. 2007), located close to the rim of the Monoceros SNR. It appears to be point-like within experimental resolution; the limit on the size of the emission region was given as 2 ′ (95% confidence level). Detection of variability in the VHE gamma-ray and X-ray fluxes supports interpretation of the object as a gamma-ray emitting binary (Acciari et al. 2009), indicating that the bulk of VHE gamma rays do not come from high-energy particles accelerated by the SNR. No significant emission from the location of HESS J0632+057 was detected in the 0.1-100 GeV energy range integrating over 3.5 yr of Fermi LAT data (Caliandro et al. 2013). Also, an unidentified high-energy gamma-ray source, 3EG J0634+0521 has been detected using the EGRET data (Hartman et al. 1999). However, morphological studies which could associate the gamma-ray emission with molecular clouds require higher photon statistics with better angular resolution.
In order to understand the emission and its mechanism more deeply, a detailed analysis and further discussion are required.
In this paper, we report a detailed study of the emission in the direction of the Monoceros Loop by using the Fermi LAT data. We have analyzed the 67-month LAT data by using the 2FGL catalog. Observations and data selection are briefly described in Section 2. The analysis procedure and results described in Section 3 include a study of the morphology and spectrum of the emission associated with the Monoceros Loop and the Rosette Nebula. Finally, we present our results in Section 4 and conclusions in Section 5. The data selection, analysis procedure, and the modeling of gamma-ray emission are based on the previous studies of the Cygnus Loop by Katagiri et al. (2011) and SNR HB 3 by Katagiri et al. (2016).
OBSERVATIONS AND DATA SELECTION
The main instrument on Fermi is the LAT, which detects gamma rays from ∼ 20 MeV to > 300 GeV. The LAT is an electron-positron pair production telescope, using tungsten foil converters and silicon microstrip detectors and a hodoscopic cesium iodide calorimeter to measure the arrival directions and energies of incoming gamma rays. They are surrounded by 89 segmented plastic scintillators that serve as an anticoincidence detector to reject events originating from charged particles. Detailed information about the instrument can be found in Atwood et al. (2009), the on-orbit calibration is described in , and a summary of event classification strategies and instrument performance is given in Ackermann et al. (2012). The LAT has a larger field of view (∼ 2.4 sr), a larger effective area (∼ 8000 cm² for > 1 GeV on-axis peak effective area) and improved PSF in comparison to previous high-energy gamma-ray telescopes.
We analyzed events toward the Monoceros Loop recorded from the start of science operations on 2008 August 4 until 2014 January 29. The LAT operated in a nearly continuous sky survey mode, to obtain a total exposure of ∼ 1.5 × 10¹¹ cm² s (at 1 GeV). In this observing mode approximately uniform coverage of the entire sky is obtained every 2 orbits (∼ 3 hr).
We used the standard LAT analysis software, the ScienceTools version v9r32, publicly available from the Fermi Science Support Center (FSSC). We used events classified as P7SOURCE that have been reprocessed with an updated instrument calibration (Bregeon et al. 2013). Only events that have a reconstructed zenith angle less than 100° were used in order to minimize the contamination from Earth-limb gamma-ray emission. Furthermore, only time intervals when the center of the LAT field of view is within 52° of the local zenith are accepted to further reduce the contamination by Earth's atmospheric emission. The instrument response functions (IRFs) that correspond to this dataset are P7REP SOURCE V15 (publicly available via the FSSC) throughout this work.
Times when the LAT detected a gamma-ray burst (GRB) or nova were eliminated from the dataset. The transients located within 15 • of the Monoceros Loop were GRB 130504C (Kocevski et al. 2013) and Nova Mon 2012 (Cheung et al. 2013), corresponding to 56416.97797-56417.00390 and 56099.00000-56109.00000 in Modified Julian Day, respectively.
The region around the SNR is dominated by the gamma-ray emission of PSR J0633+0632 (2FGL J0633.7+0633). The pulse profile in the 0.2-300 GeV energy range analyzed in this paper is shown in Figure 1, where the events are within 1° of the pulsar position. Using a timing solution modeling the effects of spin-down and timing noise (Kerr et al. 2015), we assigned rotational phase to each photon using the Fermi plug-in of the TEMPO2 software package (Hobbs et al. 2006). We only used the events during the off-pulse phases of PSR J0633+0632, corresponding to phases of 0.24-0.52 and 0.67-1.00 as adopted in Abdo et al. (2013). We restricted the energy range to > 0.2 GeV to avoid possible large systematics due to the rapidly varying effective area and much broader PSF at lower energies. (Software and documentation of the Fermi ScienceTools are distributed by the Fermi Science Support Center at http://fermi.gsfc.nasa.gov/ssc.)
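As an illustration of the phase gating described above, the following sketch keeps only events whose assigned rotational phase falls in the quoted off-pulse windows. It assumes the phases have already been computed with the TEMPO2 plug-in and are available as a plain array; the array handling and example values here are illustrative and not part of the ScienceTools.

```python
import numpy as np

# Off-pulse phase windows of PSR J0633+0632 quoted in the text
OFF_PULSE_WINDOWS = [(0.24, 0.52), (0.67, 1.00)]

def off_pulse_mask(phases):
    """Boolean mask that is True for photons whose rotational phase lies in an off-pulse window."""
    phases = np.asarray(phases) % 1.0
    mask = np.zeros(phases.shape, dtype=bool)
    for lo, hi in OFF_PULSE_WINDOWS:
        mask |= (phases >= lo) & (phases < hi)
    return mask

# Example with placeholder phases standing in for the assigned pulse-phase values
phases = np.random.uniform(0.0, 1.0, size=1000)
off_pulse_events = phases[off_pulse_mask(phases)]
print(f"kept {off_pulse_events.size} of {phases.size} events")
```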
General settings
The morphology and spectrum of gamma-ray emission from the Monoceros Loop and Rosette Nebula were determined using a binned likelihood analysis based on Poisson statistics (see, e.g., Mattox et al. 1996). The likelihood is the product of the probabilities of the observed gamma-ray counts within each spatial and spectral bin for a specified model. The gamma-ray emission model used here included all sources detected in the 2FGL catalog within 20° of the SNR. We also included the standard LAT diffuse background model (Acero et al. 2016a), gll_iem_v05_rev1.fit, that results from cosmic-ray (CR) interactions with the interstellar medium and radiation fields, and an isotropic component to represent extragalactic gamma rays and charged particle background using a tabulated spectrum (iso_source_v05.txt). Both diffuse models are available from the FSSC. We fit all spectral parameters of the 2FGL sources spatially associated with the SNR and the Rosette Nebula. The analyses were performed within a 14° × 14° square region using 0.1° pixels.
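The binned Poisson likelihood and the test statistic used throughout this analysis can be written compactly. The sketch below is only a schematic of what the ScienceTools compute internally (the actual gtlike machinery also folds the model through the instrument response); it is not a replacement for them.

```python
import numpy as np
from scipy.special import gammaln

def binned_poisson_loglike(observed_counts, model_counts):
    """ln L = sum_i [ n_i ln(m_i) - m_i - ln(n_i!) ] over all spatial/spectral bins."""
    n = np.asarray(observed_counts, dtype=float)
    m = np.clip(np.asarray(model_counts, dtype=float), 1e-30, None)  # guard against log(0)
    return np.sum(n * np.log(m) - m - gammaln(n + 1.0))

def test_statistic(loglike_with_source, loglike_null):
    """TS = 2 (ln L - ln L0); roughly, sqrt(TS) gives the detection significance in sigma
    for one additional degree of freedom."""
    return 2.0 * (loglike_with_source - loglike_null)
```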
The energy range for the likelihood analysis is divided into 40 logarithmically-spaced energy bins from 0.2 GeV to 300 GeV. Figure 2 shows the counts map in the region of interest. We centered the region on the center of the SNR: (R.A., Dec.) = (99.75°, 6.50°) (J2000).
Morphological analysis
For our morphological study we only used events with energies greater than 0.5 GeV (compared to the 0.2 GeV used in our spectral analysis) to take advantage of the narrower PSF at higher energies. Figure 3 shows the counts map in a 7° × 7° region centered on the Monoceros Loop, after subtracting the background: the Galactic emission, the isotropic component, and the 2FGL point sources except for the six 2FGL sources in Groups S and R, the parameters of which were the best-fit ones obtained by the likelihood analysis where the emission associated with the SNR and the Rosette Nebula are modeled as the six sources (Model 1 in Table 1). The CO contours overlaid on the map correspond to line intensity integrated over velocities of 0 km s⁻¹ < V < 20 km s⁻¹ with respect to the local standard of rest, encompassing the velocity of 14 km s⁻¹ corresponding to the distance from the Earth (1.6 kpc assuming the IAU-recommended values R₀ = 8.5 kpc and Θ₀ = 220 km s⁻¹). The correlation between gamma rays and the CO line emission around the Rosette Nebula is evident. We note that the LAT standard diffuse model includes this CO emission (Acero et al. 2016a). Thus the residual excess of the gamma-ray emission indicates that the CR density in this region is enhanced relative to the surrounding region.
The emission north of the CO region appears point-like and is consistent with the position of the source 2FGL J0631.6+0640.
To evaluate the correlation between the gamma-ray and CO line emission quantitatively, we fit the LAT emission with a spatial template based on the CO line emission for the Rosette Nebula instead of the three 2FGL sources in Group R. We restricted the spatial template to a 2.13° radius about the central cloud (R.A., Dec.) = (98.41°, 4.81°) (J2000). Since the edge of the CO emission region is unclear due to statistical noise in the CO spectral measurements, we introduced the CO intensity threshold used to create the spatial template as an additional free parameter in the fit. The spectral model was assumed to be a power-law function. The resulting maximum likelihood values with respect to the maximum likelihood for the null hypothesis (no source component associated with the Rosette Nebula and the SNR other than PSR J0633+0632) are summarized in Table 1. The test statistic (TS) value (e.g. Mattox et al. 1996) for the CO image corresponds to Model 2 in Table 1. The spectral shapes for both additional sources were assumed to be power-law functions. We varied the radius (1 σ for a Gaussian profile) and location of the Gaussian template. The error of the centroid is 0.35° at the 68% confidence level. The detection significances for the best-fit Gaussian profile and 2FGL J0631.6+0640 at energies of > 0.5 GeV are ∼ 14 σ and ∼ 11 σ, respectively. We note that if we add the five eliminated 2FGL sources on top of the best-fit model, the TS value increases by only 6.2 for 10 additional degrees of freedom, i.e. there is no statistical evidence for the presence of these sources in addition to the extended templates. Also, we note that the maximum likelihood value for a 408 MHz radio template with suppression of emission from the Rosette Nebula (Model 5) was significantly worse than that for the best-fit Gaussian model, indicating that the gamma-ray emission around the Monoceros Loop is not strongly spatially associated with the shock region of the SNR as traced by radio. Finally, we examined the residual map after fitting, as shown in Figure 5. There is no prominent gamma-ray emission left in the map. Therefore we adopted the Gaussian template with maximum likelihood parameters for the whole SNR in the following spectral analysis.
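The two kinds of extended template compared above can be sketched as follows. The pixel grid, the use of the CO intensity itself (rather than a binary mask) above the fitted threshold, the example extension sigma_deg = 1.0, and the normalization convention are assumptions for illustration only; the actual templates were prepared for the LAT likelihood tools.

```python
import numpy as np

def gaussian_template(ra, dec, ra0, dec0, sigma_deg):
    """Symmetric 2D Gaussian spatial template on (ra, dec) grids [deg], normalized to unit sum."""
    d_ra = (ra - ra0) * np.cos(np.radians(dec0))   # small-angle approximation
    d_dec = dec - dec0
    t = np.exp(-0.5 * (d_ra**2 + d_dec**2) / sigma_deg**2)
    return t / t.sum()

def thresholded_co_template(co_intensity, threshold):
    """CO-based template: intensity above the (fitted) threshold, zero elsewhere, unit sum."""
    t = np.where(co_intensity > threshold, co_intensity, 0.0)
    s = t.sum()
    return t / s if s > 0 else t

# Example on a 0.1-degree grid around the SNR center quoted in the text
# (the grid extent and sigma are placeholders, not the best-fit values)
ra, dec = np.meshgrid(np.arange(96.0, 103.6, 0.1), np.arange(3.0, 10.1, 0.1))
snr_template = gaussian_template(ra, dec, ra0=99.75, dec0=6.50, sigma_deg=1.0)
```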
Spectral analysis
To measure the spectra of the SNR and the Rosette Nebula we used a maximum likelihood fit using the best-fit spatial model over the energy range from 0.2 GeV to 300 GeV. Figures 6 and 7 show the resulting spectral energy distributions (SEDs) for the SNR and the Rosette Nebula, respectively. If the detection is not significant in an energy bin, i.e., the improvement of the TS value with respect to the null hypothesis is less than 4 (corresponding to 2 σ for one additional degree of freedom) then we calculated a 90% confidence level upper limit assuming a photon index of 2.
At least three different sources of systematic uncertainty affect our analysis: uncertainties in the LAT event selection efficiency, the adopted diffuse model, and the morphological template. We similarly gauged the uncertainties due to the morphological template by comparing the results with those obtained by changing the radius of the Gaussian template within its ± 1 σ error. The total systematic errors were set by adding the above uncertainties in quadrature. If the total systematic error in an energy bin was > 100%, the point was replaced by an upper limit. This is relevant for the fourth energy bin (3.105 GeV - 7.746 GeV) in Figure 6, where an upper limit is presented due to the large systematic error although the TS value is ∼ 9. The dominant systematic error for the measurement of the SNR spectrum arises from the uncertainty of the diffuse model below 0.5 GeV and the morphological uncertainty above 0.5 GeV, respectively.
We searched for a spectral break in the LAT energy range by comparing the likelihood values of a spectral fit over the whole energy range considered based on a simple power law and a log parabola function. TS values and best-fit parameters are summarized in Table 2.
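For reference, the two spectral shapes compared here are conventionally parameterized in LAT analyses as a power law and a log parabola; the symbols below follow the usual ScienceTools conventions rather than values quoted in the text:

\[
\frac{dN}{dE} = N_0 \left(\frac{E}{E_0}\right)^{-\Gamma},
\qquad
\frac{dN}{dE} = N_0 \left(\frac{E}{E_b}\right)^{-\left[\alpha + \beta \ln(E/E_b)\right]},
\]

where $E_0$ and $E_b$ are fixed reference energies and the curvature parameter $\beta$ quantifies the departure from a pure power law.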
The values for a log parabola function correspond to improvements at the > 6 σ confidence level for the SNR and > 9 σ for the Rosette Nebula when only statistical uncertainties are taken into account. For the SNR, we further investigated the systematic effects on the above spectral analysis. Accounting for systematics in the fit, the curved shape is still preferred over a power law at a confidence level > 5 σ. By comparing the spectral parameters of a log parabola function for both sources, the spectral shapes are consistent within the statistical errors at our current sensitivity. Assuming the spectral shape is a log parabola function, the gamma-ray luminosities integrated over the energy range 0.2-300 GeV inferred from our analysis are ∼ 4 × 10³⁴ erg s⁻¹ for the SNR and ∼ 3 × 10³⁴ erg s⁻¹ for the Rosette Nebula, respectively.
DISCUSSION
An extended region of gamma-ray emission was found to be spatially coincident with the Monoceros SNR by Acero et al. (2016b). We confirmed the extended emission with this more detailed analysis. Since no pulsar wind nebula has been discovered so far within the SNR (e.g., Roberts 2004), the likely explanation for the bulk of this gamma-ray emission is the interaction of high-energy particles accelerated in the shocks of the Monoceros Loop with ambient interstellar matter and radiation fields. The morphological difference between the gamma-ray emission and the radio emission can be explained by the inhomogeneity of the nearby gas, which is irradiated by the accelerated CRs that have escaped from the shocked regions. This hypothesis would also readily explain the enhanced emission from the nearby Rosette Nebula where the same population of high-energy particles would produce a bright gamma-ray signal when interacting in the dense molecular clouds traced by the CO emission. We note that the possibility of a pulsar wind nebula without detectable radio emission cannot be ruled out for the explanation of the enhanced emission around the SNR. Also, we cannot rule out that some of the emission around the SNR is produced by dark gas, i.e. gas that is not accounted for in HI or CO surveys. Its distribution cannot be modeled precisely, yet large quantities of dark gas have been found surrounding nearby molecular clouds (Grenier et al. 2005). In contrast, it is difficult to explain the enhanced emission around the Rosette Nebula by dark gas alone, considering the good fit of the CO template and the fact that dark gas is found mostly at the outskirts of the cloud.
Broadband emission from the Monoceros Loop SNR was modeled under the assumption that gamma rays are emitted by a population of accelerated protons and electrons. We assumed relativistic electrons and protons have the same injection spectrum and occupy the same spatial volume characterized by a constant magnetic field strength and matter density. We modeled the momentum distribution of injected particles with a broken power law, where p_br is the break momentum, s_L is the spectral index below the break, s_H is the index above the break, and a_{e,p} are the normalizations for the electron and proton components, respectively. Because the details of the proton/electron injection process are poorly known, we adopt a minimum momentum of 100 MeV c⁻¹.
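Written explicitly, a broken power law consistent with these definitions takes, for example, the form below; the exact (possibly smoothed) parameterization used in the original fit is an assumption here:

\[
Q_{e,p}(p) = a_{e,p}
\begin{cases}
\left(p/p_{\rm br}\right)^{-s_L}, & p_{\min} \le p \le p_{\rm br},\\[4pt]
\left(p/p_{\rm br}\right)^{-s_H}, & p > p_{\rm br},
\end{cases}
\]

with $p_{\min} = 100$ MeV $c^{-1}$.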
Electrons suffer energy losses due to ionization, Coulomb scattering, bremsstrahlung, synchrotron emission and inverse Compton (IC) scattering. The evolution of the momentum spectra N_{e,p}(p, t) is calculated from a transport equation in which b_{e,p} = −dp/dt is the momentum loss rate and Q_{e,p} is the particle injection rate.
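A standard form consistent with these definitions, assuming no escape term (the exact equation used in the original analysis is an assumption), is

\[
\frac{\partial N_{e,p}(p,t)}{\partial t}
= \frac{\partial}{\partial p}\left[\,b_{e,p}(p)\,N_{e,p}(p,t)\,\right] + Q_{e,p}(p).
\]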
We assumed that the shock produced particles at a constant rate, so Q_{e,p} is constant. To derive the gamma-ray emission spectrum we calculated N_{e,p}(p, T₀) numerically, where T₀ is the SNR age of 3 × 10⁴ yr. Momentum losses for protons are neglected because the timescale for radiative losses via neutral pion production is ∼ 10⁷/(n_H/1 cm⁻³) yr, where n_H is the gas density averaged over the volume occupied by high-energy particles. Gamma-ray emission by secondary leptons produced from charged pion decay was neglected. Generally, this is a negligible contribution unless the gas density is comparable to that in dense molecular clouds and the SNR has reached the later stages of its evolution, or the injected electron-to-proton ratio is much lower than locally observed. The calculation of the spectrum of π⁰ decay gamma rays from interactions between protons and ambient hydrogen was adopted from Dermer (1986). A scaling factor of 1.84 accounted for helium and heavier nuclei in target material and CRs (Mori 2009). Contributions from bremsstrahlung and IC scattering by accelerated electrons are computed based on Blumenthal & Gould (1970), and synchrotron radiation is evaluated using the work of Crusius & Schlickeiser (1986).
First, we considered a model with the Monoceros Loop SNR dominated by π⁰-decay.
The gamma-ray spectrum constrains the number index of accelerated protons to be s_H ≈ 2.8 in the high-energy regime. We adopted a spectral index s_L = 1.5 to explain the radio continuum spectrum (Xiao & Zhu 2012). We note that the radio spectrum was estimated from the full SNR with the exception of the Rosette Nebula region, which is dominated by strong thermal emission. Since we expect curvature in the GeV energy band due to the kinematics of π⁰ production and decay, it is difficult to constrain a break in the proton momentum spectrum from the gamma-ray spectrum. The gamma-ray spectrum thus provides only an upper bound to the momentum break at ∼ 10 GeV c⁻¹.
We adopt a break at the best-fit value, 2 GeV c⁻¹. The density is fixed to 3.6 cm⁻³ based on the H I observations (Xiao & Zhu 2012). The resulting total proton energy, W_p ∼ 7.6 × 10⁴⁹ · (3.6 cm⁻³/n_H) · (d/1.6 kpc)² erg, is less than 10% of the typical kinetic energy of a supernova explosion. For the electron-to-proton ratio measured at Earth, K_ep ≡ a_e/a_p = 0.01, the magnetic field strength is determined to be B ∼ 35 µG by the radio data. Using these model parameters (Table 3), we obtained the SEDs shown in Figure 8 (a).
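The scaling factors attached to W_p follow from how the hadronic emission is normalized to the data: the π⁰-decay luminosity scales with the product of the proton energy content and the target density, while the measurement fixes only the flux. Schematically,

\[
F_\gamma \propto \frac{L_\gamma}{4\pi d^2} \propto \frac{W_p\, n_H}{d^2}
\quad\Longrightarrow\quad
W_p \propto F_\gamma\, \frac{d^2}{n_H},
\]

which is why the quoted energy carries the factors $(3.6\ {\rm cm^{-3}}/n_H)$ and $(d/1.6\ {\rm kpc})^2$.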
In the case of leptonic scenarios, we assume K_ep = 1 to produce the gamma-ray emission predominantly from the electrons. The radio spectrum (Xiao & Zhu 2012) is difficult to model as synchrotron radiation when we fit the gamma-ray spectrum with a model dominated by electron bremsstrahlung, as shown in Figure 8 (b).
The other leptonic scenario is an IC-dominated model. IC gamma rays originate from the interaction of high-energy electrons with the cosmic microwave background (CMB) as well as optical and infrared radiation fields. Galactic radiation fields were adopted from Porter et al. (2008) at the location of the Monoceros Loop. These very complex spectra are approximated by two infrared and two optical blackbody components. It is hard to reproduce the multi-wavelength spectrum well with an IC-dominated model, as shown in Figure 8 (c). In addition, the ratio between IC and synchrotron fluxes constrains the magnetic field to be less than ∼ 2 µG and requires a low gas density of n_H ∼ 0.01 cm⁻³ to suppress the electron bremsstrahlung, which is unlikely.
In conclusion, the bulk of the gamma-ray emission from the Monoceros SNR is most likely from π⁰ decay produced by the interactions of protons with ambient hydrogen. It is then reasonable to explain the gamma-ray spectrum of the Rosette Nebula by the same process. If the protons are accelerated in the whole SNR in the same manner and are not strongly affected by spectral deformation due to CR diffusion processes, the shape of the proton spectrum in the Rosette Nebula is expected to be the same as in the Monoceros Loop. Figure 7 shows the gamma-ray spectrum of the Rosette Nebula with the π⁰-decay dominated model assuming the density in the molecular clouds is 100 cm⁻³. The spectrum can be reproduced without any change from the proton momentum spectrum of the SNR.
The resulting total proton energy, W_p ∼ 0.18 × 10⁴⁹ · (100 cm⁻³/n_H) · (d/1.6 kpc)² erg, is about 2% of that for the π⁰-decay model of the SNR, which is reasonable considering the solid angle of the Rosette Nebula with respect to the SNR and the uncertainty of the matter density. We note that these CR energies for the Monoceros Loop SNR and the Rosette Nebula are the enhancements of the CR density in addition to that implicit in the standard Galactic diffuse emission model.
To summarize, the assumption that the gamma-ray emission from the Monoceros SNR is dominated by decay of π⁰ produced in nucleon-nucleon interactions of hadronic CRs with interstellar matter is a natural scenario that can also readily explain the emission from the nearby Rosette Nebula as interactions of the same population of CRs in the dense molecular cloud. Similarly to SNR HB 3 (Katagiri et al. 2016), it should be emphasized that our observations towards the Monoceros Loop provide a rare and valuable example for which the emissions from both the SNR and the interacting molecular clouds are detected.
CONCLUSIONS
We analyzed gamma-ray measurements by the LAT in the region of the Monoceros Loop. The brightest gamma-ray peak is spatially correlated with the Rosette Nebula.
A template derived from the CO gas distribution fits the morphology of the gamma-ray emission associated with the Rosette Nebula.
Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Études Spatiales in France.
We thank Luigi Tibaldo for helpful comments and discussions on the dark gas.
Figure 6 caption (partial): Flux upper limits at the 90% confidence level are shown for energy bins when the detection was not significant (test statistic < 4). The blue region is the 68% confidence range (no systematic error) of the LAT spectrum assuming that the spectral shape is a log parabola.
Figure 8 caption (partial): The LAT spectral points of Figure 6 are shown alongside radio continuum measurements (Xiao & Zhu 2012). Radio emission is modeled as synchrotron radiation, while gamma-ray emission is modeled by different combinations of π⁰-decay (long-dashed curve), bremsstrahlung (dashed curve), and inverse Compton (IC) scattering (dotted curve). As described in the text, the models are: a) π⁰-decay dominated, b) bremsstrahlung dominated, c) IC-dominated.
Table 1, note a: −2 ln(L₀/L), where L and L₀ are the maximum likelihoods for the model with/without the source component, respectively. The model for L₀ includes PSR J0633+0632.
Table 1, note b: The 3 sources in the 2FGL source list associated with the Rosette Nebula (Nolan et al. 2012) are referred to as Group R in the text. The 3 sources listed in the 2FGL source list associated with the Monoceros Loop are referred to as Group S in the text.
Table 1, note c: The additional degrees of freedom for the CO image are 2 for the spectral shape and 1 for the analysis threshold used to extract emission. The details are shown in the text. Note d: 2FGL J0631.6+0640 is included in Group S. Note e: The radio template was obtained from 408 MHz radio data (Taylor et al. 2003) by excluding the region around the Rosette Nebula, where the emission is predominantly thermal. The additional degrees of freedom for the radio template are 2 for the spectral shape (a power law). | 2016-08-23T04:43:10.000Z | 2016-08-23T00:00:00.000 | {
"year": 2016,
"sha1": "7fe9cd0b2297459d6986c54ca4bc9147f930d88f",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/0004-637X/831/1/106/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "7fe9cd0b2297459d6986c54ca4bc9147f930d88f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258411743 | pes2o/s2orc | v3-fos-license | What we need to know and do on sugammadex usage in pregnant and lactating women and those on hormonal contraceptives
Sugammadex is a chemically modified γ-cyclodextrin that is used as a selective reversal agent for steroidal neuromuscular blockade. The use of sugammadex has greatly increased globally; however, little is known about its potential adverse effects in pregnant and lactating women or those using hormonal contraceptives. There are three important theoretical assumptions. Firstly, pregnancy-related physiological changes involve most organs and affect the pharmacokinetic profiles of medications. Considering the physiological changes in pregnant women and the pharmacokinetic properties of sugammadex, alterations in the dosage and safety profiles of sugammadex may occur during pregnancy. Secondly, very large and polarized sugammadex molecules are expected to have limited placental transfer to the fetus and excretion into breast milk. Finally, sugammadex can bind to steroidal neuromuscular blocking agents as well as other substances with similar structures, such as progesterone. As a result of using sugammadex, progesterone levels can be reduced, causing adverse effects such as early pregnancy cessation and failure of hormonal contraceptives. This narrative review aims to demonstrate the correlations between sugammadex and pregnancy, lactation, and reproductive potential based on previously published preclinical and clinical studies. This will bridge the gap between theoretical assumptions and currently unknown clinical facts. Moreover, this review highlights what anesthesia providers should be aware of and what actions to take while administering sugammadex to such patients.
concerned about adverse events (7.8%). Sugammadex use is estimated to increase substantially worldwide, alleviating economic concerns as sugammadex patents have already expired or are about to expire [4,5].
Can sugammadex replace acetylcholinesterase inhibitors in patients undergoing surgery and receiving rocuronium? Pregnant and lactating women are generally excluded from most clinical trials; therefore, no efficacy or safety studies have been conducted on these patients. It is not surprising that there are insufficient data on the Pregnancy and Lactation Labeling Rule for sugammadex. Only theoretical assumptions are made (Table 1). Pregnancy-related physiological changes involve most organs and affect the pharmacokinetic profiles of medications. Considering the physiological changes in pregnant women and pharmacokinetic properties of sugammadex, there may be alterations in its dosage and safety profile during pregnancy. Very large polarized sugammadex molecules are expected to have limited placental transfer to the fetus and excretion into breast milk [6]. Finally, sugammadex binds to steroidal NMBA and other substances with similar structures, such as progesterone [7,8]. The use of sugammadex can reduce progesterone levels, which may cause adverse effects such as early cessation of pregnancy and failure of hormonal contraceptives. However, there is no clinical evidence supporting this assumption.
The purpose of this narrative review is twofold. The first is the scientific aspect, including preclinical and clinical studies of previously published correlations between sugammadex and pregnancy, lactation, and reproductive potential, thereby bridging the gap between theoretical assumptions and currently unknown clinical facts. The second is the practical clinical aspect, which discusses what anesthesia providers and patients should be aware of, and what hospitals need to be institutionalized to use sugammadex for these patients.
THEORETICAL ASSUMPTIONS AND PRECLINICAL EVIDENCE
Pregnancy-related physiologic and pharmacokinetic changes
Table 1 (excerpt): A preclinical study showed a peak concentration in breast milk 30 min after sugammadex administration. Early in the postpartum period, gaps between the mammary alveolar cells are increased, and the peak concentration of sugammadex may pass into breast milk.
CICV: cannot intubate and cannot ventilate; SOAP: Society for Obstetric Anesthesia and Perinatology.
During pregnancy, significant physiological changes occur owing to increased estrogen and progesterone levels, beginning in the first trimester, peaking at term and labor,
and gradually reversing a few weeks postpartum [9]. Pregnancy-related physiological changes involve most organs and affect the pharmacokinetic profile of medications [10,11]. Decreased gastrointestinal motility and increased gastric pH affect drug absorption. Increased total body water, plasma volume, and capillary hydrostatic pressure lead to a significantly increased volume of distribution. Decreased concentrations of drug-binding proteins increase the bioactivity of certain drugs. Increased cardiac output induces greater hepatic and renal blood flow, resulting in the increased clearance of some medications. However, information regarding pharmacokinetic changes or dosage requirements is lacking for most drugs used during pregnancy [12]. Moreover, it is often unclear whether altered pharmacokinetics lead to changes in drug efficacy or to adverse effects. Unfortunately, the use of sugammadex during pregnancy is not exempt. As sugammadex has no affinity for plasma proteins when administered intravenously, it immediately encapsulates rocuronium in a 1:1 molar ratio [13,14]. This leads to a concentration gradient that shifts rocuronium from the peripheral compartment (neuromuscular junction) to the central compartment (plasma), where it is encapsulated by sugammadex [15]. The sugammadex-rocuronium complex is highly soluble, and urinary excretion is the major route for its elimination without being metabolized by the liver [13].
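As a rough illustration of the 1:1 encapsulation described above, the following back-of-the-envelope comparison converts weight-based doses into molar amounts. The sugammadex molecular weight (2,178 Da) is taken from the text, whereas the rocuronium bromide molecular weight (≈ 610 g/mol) and the 0.6 mg/kg rocuronium dose are assumptions used only for this illustration.

```python
# Back-of-the-envelope molar comparison (illustrative assumptions only)
MW_SUGAMMADEX = 2178.0   # g/mol, as stated in the text
MW_ROCURONIUM = 610.0    # g/mol, assumed (rocuronium bromide)

def umol_per_kg(dose_mg_per_kg, molecular_weight):
    """Convert a weight-based dose (mg/kg) to micromoles per kg."""
    return dose_mg_per_kg / molecular_weight * 1000.0

sgx = umol_per_kg(2.0, MW_SUGAMMADEX)   # 2 mg/kg sugammadex   -> ~0.92 umol/kg
roc = umol_per_kg(0.6, MW_ROCURONIUM)   # 0.6 mg/kg rocuronium -> ~0.98 umol/kg (assumed dose)
print(f"sugammadex: {sgx:.2f} umol/kg, rocuronium: {roc:.2f} umol/kg")
```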
Considering the physiological changes in pregnant women and the pharmacokinetic properties of sugammadex, several inferences can be drawn. First, an increase in total body water increases the volume of distribution of this hydrophilic drug, which lowers its plasma concentration. The glomerular filtration rate (GFR) increases during pregnancy, and for drugs excreted by glomerular filtration, renal clearance parallels the changes in GFR during pregnancy [16]. However, the extent to which the volume of distribution increases and whether the dose of hydrophilic drugs should be increased remain unclear. Unlike the uniform increase in the GFR during pregnancy, the effect of renal tubular transport on renal clearance varies among drugs. Therefore, there are limitations in theoretically estimating the efficacy and safety of sugammadex during pregnancy.
Maternal-fetal transfer and fetal development
Sugammadex is a modified γ-cyclodextrin with a lipophilic core and hydrophilic periphery and a molecular weight of 2,178 daltons. The addition of eight side chains extended the cavity size to achieve a better fit for steroidal NMBA [17]. In addition, negatively charged carboxyl groups were added at the ends of the eight side chains to maintain structural integrity and enhance electrostatic binding to the positively charged quaternary nitrogen of rocuronium [6]. Theoretically, it is difficult for sugammadex to pass through the placenta owing to its large molecular size and negatively charged characteristics [18].
Preclinical studies by Merck have reported conflicting results [19]. In an embryo-fetal development study, pregnant rats received daily intravenous administration of sugammadex up to six times the maximum recommended human dose (MRHD). No treatment-related maternal or embryo-fetal changes were observed. In another embryo-fetal development study, pregnant New Zealand white rabbits received daily intravenous administration of sugammadex up to eight times the MRHD. A decrease in fetal body weight was observed in the offspring at maternal doses of 65 and 200 mg/kg. Moreover, incomplete ossification of the sternum and an unossified first metacarpal were found in the offspring at a maternal dose of 200 mg/kg/day. Furthermore, maternal toxicity was observed at 200 mg/kg. Considering that bone retention of sugammadex occurred in rats after intravenous injection with a mean half-life of 172 days, these findings may be attributed to the drug [19]. No evidence of malformation was observed at any dose. In a prenatal and postnatal development study, pregnant rats were intravenously administered sugammadex at concentrations up to six times the MRHD. There were no drug-related effects on rat parturition during prenatal or postnatal development. However, there was postnatal loss due to pup cannibalization. Therefore, effects of sugammadex on steroidal hormones and pheromones should not be excluded.
The effect of drugs on fetal neuronal development is an emerging issue. Palanca et al. [20] reported that sugammadex alone promoted neural apoptosis in primary cultures. Neural apoptosis was further promoted when sugammadex was used alone rather than in combination with steroidal NMBA in primary cultures [21]. It was concluded that sugammadex caused depletion of neuronal cell cholesterol, resulting in oxidative stress and neuronal apoptosis. However, this does not occur in vivo because the mature blood-brain barrier (BBB) prevents the passage of sugammadex. Thus, sugammadex may pass through a compromised BBB, such as an immature or damaged one. The potential of anesthetics to cause neuroapoptosis and other neurodegenerative changes in the developing brain has become evident in animal studies over the past 20 years [22,23]. One study postulated that the co-administration of sugammadex with sevoflurane in neonates may exacerbate neuronal apoptosis due to changes in BBB integrity [24]. Neonatal mice exposed to 2% sevoflurane for 6 h developed BBB ultrastructural abnormalities. The co-administration of sugammadex with sevoflurane in neonatal mice further increased neuroapoptosis in the brain compared to 2% sevoflurane alone, whereas sugammadex alone did not induce apoptosis. This possibility should be considered when administering sugammadex with inhaled anesthetics in pregnant women. However, further studies are required to confirm these findings.
Interaction with progesterone
As sugammadex encapsulates steroidal NMBA, it may also bind to other appropriately sized steroidal substances. Progesterone is such a substance and in vitro binding studies suggest that progesterone levels may decrease by approximately 34% when exposed to sugammadex [19]. Decreased progesterone levels can lead to two serious adverse effects. One is failure to maintain early pregnancy, and the other is hormonal contraceptive failure.
Two animal studies that investigated the effect of sugammadex on progesterone levels in pregnant animals have been conducted [25,26]. Pregnant rats were randomly assigned to three groups and injected under sedation on the 7th day of gestation: control, sugammadex 30 mg/kg, and rocuronium 3.5 mg/kg + sugammadex 30 mg/kg [25]. Blood samples were obtained 35 min after injection to determine progesterone levels. Progesterone levels were not significantly different between the groups, and successful completion of pregnancy and absence of stillbirths or miscarriages were reported.
Pregnant rabbits were randomly divided into three groups: control, rocuronium administered at the onset of general anesthesia (GI group), and rocuronium + sugammadex administered 60 min after general anesthesia (GII group) [26]. In the GII group, progesterone levels at 60 and 90 min after general anesthesia were significantly lower than the baseline progesterone levels. In addition, the progesterone levels at 60 and 90 min after general anesthesia were significantly lower in the GII group than those in the GI group. However, all pregnancies were successful without early birth or stillbirth. Because studies are still lacking and the results are inconclusive, it cannot definitely be concluded that sugammadex affects pregnancy by lowering progesterone levels.
Lactation
Generally, low plasma protein binding, low molecular weight, and highly lipophilic and cationic drugs favor increased drug excretion into breast milk [27]. In contrast, sugammadex is large, hydrophilic, and has a half-life of 2 h and pKa of 2.82 [6]. Therefore, it is appropriate to predict minimal excretion of sugammadex into breast milk. Moreover, the oral absorption of sugammadex is thought to be low; therefore, it can be assumed that the amount of sugammadex delivered to breastfed infants is negligible. In a milk excretion study in rats, 20 mg/kg sugammadex was injected intravenously on postnatal day 9 and the maximum drug level was achieved at approximately 30 min [19]. The oral administration of sugammadex via milk did not induce adverse effects on survival, body weight, or physical or behavioral development in rats. However, there is no published evidence to support this.
Cesarean section
In cesarean sections, neuraxial anesthesia is preferred over general anesthesia; however, general anesthesia is still administered under some conditions [28]. Because pregnancy-related physiological changes peak at term and delivery, the efficacy and safety profiles are of primary concern when using sugammadex after delivery of the fetus. A randomized controlled noninferiority trial was conducted to show that a high dose of rocuronium can achieve intubation conditions comparable to those of succinylcholine for cesarean delivery [29]. In the rocuronium group, 2 mg/kg sugammadex was administered if the train-of-four (TOF) count was ≥ 1, and 4 mg/kg sugammadex was used if the post-tetanic count was ≥ 1. Among the 120 patients, the time from neuromuscular blockade reversal to a TOF ratio > 0.9 was 104 ± 63 (mean ± SD) s. In addition, no signs of residual blockade or side effects were observed. These findings are similar to those reported in other clinical studies [30,31].
Sugammadex doses equivalent to standard adult doses (2-4 mg/kg) appear to be sufficiently effective and safe for the routine reversal of moderate or deep blocks during cesarean section. In emergencies, such as cannot intubate and cannot ventilate (CICV) situations, a high dose of sugammadex (16 mg/kg) must be administered for immediate reversal before fetal birth; however, such cases have not yet been reported. Recent large multicenter studies showed difficult intubation rates of 2.0-5.4%, which is similar to the general surgical population (4.4%) [32-34]. In contrast, the failed intubation rate is higher in pregnant women (0.12-0.53%) compared with that in the general surgical population (0.06%) [32,33,35]. In 2015, the Obstetric Anaesthetists' Association and the Difficult Airway Society developed the first national obstetric guidelines for difficult airway management [36]. The guidelines recommend considering high-dose sugammadex administration for immediate reversal of CICV. Although there is no evidence for the efficacy and safety of high-dose sugammadex in pregnant women and fetuses, it is reasonable to consider sugammadex administration because the risks of exposure to severe hypoxia could be more harmful than the potential risk of using high doses of sugammadex.
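The dosing rules described above can be summarized schematically. The sketch below simply mirrors the thresholds quoted from the cited trial and guideline (TOF count, post-tetanic count, and the 16 mg/kg immediate-reversal dose); it is illustrative only and not a clinical decision tool.

```python
def sugammadex_dose_mg_per_kg(tof_count, post_tetanic_count, immediate_reversal=False):
    """Return the weight-based sugammadex dose (mg/kg) per the scheme described in the text."""
    if immediate_reversal:
        return 16.0                 # immediate reversal, e.g., CICV before fetal delivery
    if tof_count >= 1:
        return 2.0                  # moderate block: TOF count >= 1 in the cited trial
    if post_tetanic_count >= 1:
        return 4.0                  # deep block: post-tetanic count >= 1
    raise ValueError("No response to stimulation; depth of block outside the described scheme")

# Example: deep block (TOF count 0, post-tetanic count 2)
print(sugammadex_dose_mg_per_kg(tof_count=0, post_tetanic_count=2))  # -> 4.0
```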
Non-obstetric surgery
Unlike cesarean sections, pregnant women undergoing non-obstetric surgery do not deliver a fetus and must continue their pregnancy. Therefore, few clinical studies have reported the use of sugammadex in such cases, and these have only recently been published in the form of case series [18,[37][38][39][40]. Theoretically, the passage through the placenta or BBB is limited; however, animal experiments have shown worrisome results [19,24]. In 2019, an interesting case of sugammadex placental transfer was reported [37]. A woman at 29 weeks of gestation required an intrauterine transfusion for Rh (D) alloimmunization. During the intrauterine transfusion procedure, maternal respiratory distress occurred because of the intramyometrial injection of rocuronium, which was intended to be administered intramuscularly to the fetus. After the administration of sugammadex (100 mg), the patient's respiratory distress resolved. After the patient had stabilized, additional rocuronium was administered to the fetal buttocks. Interestingly, adequate paralysis was achieved in the fetus without sustained paralysis induced by the maternal sugammadex injection, suggesting limited maternal-fetal placental transfer of sugammadex.
Recently, several case series and a multicenter retrospective study on maternal and fetal outcomes after sugammadex use in pregnant women have been published [18,38-40] (Table 2). In two case series, patients were at 4-26 weeks of gestation and received 0.7-4.3 mg/kg sugammadex. Although preterm premature rupture of membranes (N = 8/25) and preterm labor (N = 12/25) occurred, none of these episodes occurred within 2 weeks of receiving sugammadex [18]. In another case series, only one patient experienced preterm labor; however, it was induced by severe preeclampsia and developed 12 weeks after sugammadex administration [38].
Table 2 (fragment): Torres et al. [38], single-center case series. Values are presented as number only, median (range), or mean ± SD. GA: gestational age; SGX: sugammadex; CD: cesarean delivery; PPROM: preterm premature rupture of membranes; ASD: atrial septal defect. *None < 2 weeks after sugammadex administration; †None < 12 weeks after sugammadex administration; ‡Data available only for women who received sugammadex in the first trimester.
In a multicenter retrospective observational study with 73 patients who received sugammadex and 51 patients who did not [39], the gestational age was 15.0 ± 5.1 (mean ± SD) weeks and the median total dose of sugammadex was 200 mg. Miscarriages and preterm births within 4 weeks of sugammadex administration were not significantly different between the patients with and those without sugammadex exposure. In one study, a larger dose of sugammadex (8 mg/kg) was administered to 15 patients who underwent electroconvulsive therapy [40]. Spontaneous abortion occurred in one patient and one infant developed neonatal respiratory distress. Moreover, no patients experienced preterm delivery or labor induced by sugammadex administration. Although these studies did not show obvious detrimental effects of sugammadex on maternal and fetal outcomes, their retrospective nature and small sample sizes mean that safety concerns cannot be definitively resolved.
Levels of progesterone and unintended pregnancy
Few clinical studies have examined steroidal hormone levels after sugammadex injection [41,42]. One study investigated the hormonal profiles of 50 young male patients randomly divided into N (neostigmine) and S (sugammadex 4 mg/kg) groups [42]. Sugammadex showed no adverse effects on progesterone and cortisol levels, while it was associated with a temporary increase in aldosterone and testosterone levels. They explained that sugammadex has no effect on progesterone levels because of its relatively low affinity (120 to 700 times lower than that of rocuronium) and the tight binding of progesterone to plasma proteins. A more recent prospective observational study was conducted to investigate the effects of sugammadex on perioperative estrogen and progesterone levels in premenopausal women aged 18-50 years [41]. At 240 min after sugammadex administration, progesterone in patients taking oral contraceptives tended to decrease; however, the decrease was not significant and remained within 20% of baseline, far less than the 34% expected pharmacokinetically. Nonetheless, they did not consider the menstrual cycle or surgical stress, which significantly affect hormonal levels. In addition, because endogenous progesterone is suppressed by oral contraceptives (exogenous progesterone), a small change in progesterone exaggerates the percentage change. The authors of both studies suggested that the statistically significant changes in hormonal levels were borderline or temporary and would be clinically insignificant.
However, investigating the association between unexpected pregnancies and sugammadex use is difficult. Lazorwitz et al. [43] reported a single case (0.7%; 95% confidence interval: 0-4.1%) of unexpected pregnancy after sugammadex administration in 134 patients using hormonal contraceptives. Based on the ultrasound measurements, the estimated date of conception was 19 days after sugammadex administration. Although there are no clinical reports of unintended pregnancy due to sugammadex-progesterone interaction, the manufacturer advises seriously considering this interaction. They recommended that if an oral contraceptive is taken on the same day that sugammadex is administered, or if non-oral hormonal contraceptives are used, the patient must use an additional non-hormonal contraceptive method or a backup method of contraception for the next 7 days [19]. Unintended pregnancy can be personally, socially, and economically burdensome; therefore, patients should be informed and educated even if there is only a slight possibility. However, several studies show that 78-94% of anesthesia providers are aware of the risk of oral contraceptive failure with sugammadex, whereas only 20-33% of anesthesia providers discuss this with their patients [44,45]. Appropriate education and policies are required to overcome this discrepancy between knowledge and practice. Anesthesia providers must assess the risk of oral contraceptive failure induced by sugammadex preoperatively and screen women of childbearing age for oral contraceptive use. If women are at risk of oral contraceptive failure, anesthesia providers should counsel them about sugammadex and its alternatives (acetylcholinesterase inhibitors) and make decisions regarding the choice of NMB antagonists. After surgery, information should be provided through a take-home leaflet or letter to improve postoperative recall [46,47]. Along with these perioperative processes, education of relevant medical staff and feedback from audits are necessary.
Lactation
Currently, there is no clinical evidence regarding the use of sugammadex during breastfeeding [48]. However, owing to the biochemical and pharmacokinetic characteristics of sugammadex and the preclinical evidence, sugammadex is acceptable for use during breastfeeding [49]. In contrast, a statement published by the Society for Obstetric Anesthesia and Perinatology (SOAP) disagrees with immediate breastfeeding [8]. According to the World Health Organization recommendations, breastfeeding should be initiated within the first hour of birth [50]. If a mother who received sugammadex after cesarean section begins breastfeeding within 1 h after delivery, she may breastfeed at the peak concentration of sugammadex. Moreover, in the early postpartum period, large gaps between mammary alveolar cells enhance the delivery of maternal proteins to breast milk and may allow sugammadex to pass into breast milk [51]. Immature metabolism and renal function delay sugammadex clearance in infants. Therefore, SOAP recommends the use of acetylcholinesterase inhibitors and, if sugammadex is used instead, pumping and discarding breast milk for the first 12-14 h after surgery [8].
CONCLUSION
The use of sugammadex in pregnant and lactating women and in those of childbearing age taking oral contraceptives shows a large gap between theoretical estimation and clinical practice. Scientific and clinical evidence is increasingly being published to fill this gap; however, it remains insufficient. Therefore, it seems that now is the time to practice "Do not harm" rather than "Doing good". Premature birth and miscarriage owing to failure to maintain pregnancy, fetal deformities, developmental disorders, and unexpected pregnancies are completely different from the acute and temporary side effects of drugs. These are permanent afflictions and catastrophes for both individuals and society. Therefore, this issue should be approached with more caution than other issues. However, although there is a lack of clear evidence, it is most likely that sugammadex is already playing the role of "Doing good" in some clinical situations, such as cesarean section and CICV. Thus, what we need to know is the theoretical basis and the accumulating scientific data. What we have to do is establish a perioperative process for sugammadex use in pregnant and lactating patients and those on oral contraceptives. In addition, we must conduct related research and share our data worldwide. If we accomplish what we need to know and do, we will be able to move forward from "Do not harm" to "Doing good".
FUNDING
None.
CONFLICTS OF INTEREST
No potential conflict of interest relevant to this article was reported.
DATA AVAILABILITY STATEMENT
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study. | 2023-04-30T15:19:29.684Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "5482078ffadbd645b635098397b9b975bc63b17a",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bac73ce8c49da806a35de9e7a2ac1f4ead692233",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118475073 | pes2o/s2orc | v3-fos-license | New Class of SO(10) Models for Flavor
We present a new class of unified models based on SO(10) symmetry which provides insights into the masses and mixings of quarks and leptons, including the neutrinos. The key feature of our proposal is the absence of Higgs boson 10_H belonging to the fundamental representation that is normally employed. Flavor mixing is induced via vector-like fermions in the 16 + 16-bar representation. A variety of scenarios, both supersymmetric and otherwise, are analyzed involving a 126_H along with either a 45_H or a 210_H of Higgs boson employed for symmetry breaking. It is shown that this framework, with only a limited number of parameters, provides an excellent fit to the full fermion spectrum, utilizing either type-I or type-II seesaw mechanism. These flavor models can be potentially tested and distinguished in their predictions for proton decay branching ratios, which are analyzed.
Introduction
Grand unified theories [1][2][3] based on SO(10) gauge symmetry [4] are attractive candidates for physics beyond the Standard Model (SM). These theories predict the existence of righthanded neutrinos needed for the seesaw mechanism, and unify all fermions of a given family into a single irreducible multiplet, the 16-dimensional spinor representation. Quarks and leptons are thus unified, as are the three gauge interactions of the SM. The unification of fermions into multiplets suggests that SO(10) may serve as a fertile ground for understanding the flavor puzzle. There are challenges involved, since in particular, large neutrino mixing angles should emerge from the same underlying Yukawa structure that allows for small quark mixing angles. This indeed has been realized in a class of SO(10) models with a minimal set of Yukawa coupling matrices [5][6][7][8][9][10][11][12][13], and we shall provide a new class of models that achieves this in this paper. Since SO(10) admits an intermediate symmetry, the Pati-Salam symmetry SU (4) c × SU (2) L × SU (2) R or one of its subgroups, unification of gauge couplings can occur consistently even without low energy supersymmetry. Of course, SO(10) may be realized in the supersymmetric context as well, in which case the intermediate symmetry breaking scale may be the same as the unification scale. As far as the Yukawa sector of the theory is concerned, the two scenarios (non-SUSY versus SUSY) are not all that different. In this paper we shall study a new class of SO(10) models addressing the flavor puzzle both in the non-supersymmetric and in the SUSY contexts.
The Yukawa sector of this theory has only two symmetric matrices (in flavor space), involving a 10 H and a 126 H of Higgs bosons. It is natural to include a 210 H for completing the symmetry breaking. In such a scenario, unfortunately, once the constraints from the Higgs sector are properly taken into account, the model can be ruled out [27][28][29][30], assuming that the low energy supersymmetric threshold corrections to the fermion masses are negligible. With the relatively large Higgs mass m H = 125 GeV, the split supersymmetric scenario [31,32] of the minimal SO(10) model [33] is also found to be inconsistent [34,35] 4 .
One should not abandon the whole elegant grand unified program simply because the simplest supersymmetric realization does not work perfectly. The usual way to rule in a theory that was ruled out is to increase the particle content and thus the number of model parameters. This was the approach of [36], where a new 120-dimensional Higgs representation has been added to the minimal model. 5 In this way the Yukawa sector increases by one antisymmetric matrix, which gives sufficient freedom to fit the data.
In this paper we will go, surprisingly, in the opposite direction, and ask ourselves, if it is possible to fit the data with less, not more, Yukawa matrices. This paradoxical question has obviously a hidden proviso, otherwise we would get no mixing at all. To account for the correct low energy mass spectrum, mixings, and CP violation we will thus make use of an extra vector-like generation 16 + 16, similar to the one used in [37]. The difference with [37] is that we will assume the bilinear spinors 16 a to be coupled with 126 H instead of 10 H . In this way we may hope to describe neutrino masses and mixings in a pattern similar to the charged fermions, which is one of the great achievements of the SO(10) framework.
We shall see that this decreasing of the number of Yukawa matrices at the expense of an extra vector-like family can be successful and we will show several examples where it works. Although we will consider different possible Higgs sectors and take some of their constraints seriously, we will not consider a combined fit of the Higgs and Yukawa parameters, which can obviously pose extra restrictions. This more modest approach nevertheless shows that SO(10) Yukawa sectors with a single Yukawa matrix can be realistic.
The rest of the paper is organized as follows. In Sec. II we present the key features of the new class of SO(10) models. In Sec. III we set up the framework and the formalism. In Sec. IV we adopt a specific basis that removes redundancies, which is well suited for numerical analysis of the flavor observables. Sec. V discusses the constraints imposed on the SUSY models from the minimization of the Higgs potential. Sec. VI has our numerical fits to the fermion masses and mixings for the six models analyzed. Finally, in Sec. VII we conclude. In Appendices A and B some useful relations used for the fermion mass fits are given. Appendix C contains the numerical Yukawa matrices for various cases that result from the fits.
New class of SO(10) models
The key feature of the new proposed models is the absence of 10 H . In its place we introduce a vector-like pair of fermions in the 16 + 16-bar representation. In addition to a 126 H , we employ either a 45 H or a 210 H for symmetry breaking. These fields have non-trivial couplings to the vector-like fermions, which is needed to avoid certain unwanted relations among down-type quark and charged lepton masses. Additional Higgs fields (e.g. 54 H ) are needed for consistent GUT symmetry breaking, but these fields do not enter into the Yukawa sector. The Yukawa Lagrangian of our models has a very simple form,

\mathcal{L}_{\rm Yuk} = \overline{16}\,(m_a + \eta_a\, 45_H)\, 16_a + 16_a\, Y_{ab}\, \overline{126}_H\, 16_b + \overline{16}\,\bar{y}\, 126_H\, \overline{16} + {\rm h.c.} \qquad (2.1)

corresponding to the use of 45 H as the symmetry breaking field (in addition to the 126 H field). Here a, b = 1-4 are the generation indices, which include the 16 from the vector-like family. We thus see that the Yukawa sector has one 4 × 4 matrix Y ab and two four-vectors m a and η a . Since Y ab can be chosen to be diagonal and real, this amounts to 4 + 4 + 4 flavor mixing parameters. The Yukawa coupling of the vector-like \overline{16} to the 126 H does not have any effect on the light fermion masses and mixings. While in the diagonal and real basis for Y ab the vectors m a and η a are in general complex, these being related to GUT scale masses, one complex combination disappears from low energy masses and mixings. One should add to this set two (real) VEV ratios (one from the two SM singlets of 45 H and one for the ratio of the up-type and down-type Higgs doublet VEVs from the 126 H ), and an overall scale for the right-handed neutrino masses. We thus see that the model has 14 real parameters and 7 phases to fit 18 observed values among quark masses, quark mixings and CP violation, charged lepton masses, neutrino mass-squared differences and mixing angles. Thus these models are rather constrained, yet we show that excellent fits are obtained. It may be noted that the minimal supersymmetric SO(10) models with two symmetric Yukawa coupling matrices involving 10 H and 126 H have 12 real parameters and 7 phases that enter into the flavor sector. The basic structure of Eq. (2.1) can be realized in several other ways, and we study all such SO(10) models in this paper. The Higgs field 45 H in Eq. (2.1) may be replaced by a 210 H . In this case, since the 210 H contains three SM singlet fields, there are two ratios of VEVs from the 210 H , which would increase the number of parameters by one. These models may be realized with or without low energy supersymmetry. In the non-SUSY models, the VEVs of 45 H and 210 H are real, while in the SUSY models they are in general complex (thus increasing the number of phase parameters to 8). In the SUSY models we find that although the 210 H has two associated VEV ratios, only one of the two is independent, due to symmetry breaking constraints arising from the superpotential. In SUSY versions, additional fields other than the 126 H and 210 H used in the Yukawa sector are often required, in order to avoid new chiral supermultiplets that remain light and spoil unification of gauge couplings. A summary of the models that fit into this classification and are studied here is given below. The VEV of the SM singlet in 126 H will be found a posteriori to be around 10^13-10^14 GeV in all models. This has an effect on the choice of Higgs fields, especially in the SUSY models: very simple Higgs systems used for GUT symmetry breaking would lead to certain sub-multiplets having mass of order 10^11 GeV, which would spoil perturbative unification of gauge couplings in SUSY SO(10).
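One plausible bookkeeping behind the quoted totals (assuming, as the text suggests, that the dropped complex combination removes one magnitude and one phase) is

\begin{aligned}
\text{magnitudes:}\quad & 4\,(Y_{aa}) + 4\,(|m_a|) + 4\,(|\eta_a|) - 1 + 2\,(\text{VEV ratios}) + 1\,(\text{seesaw scale}) = 14,\\
\text{phases:}\quad & 4\,(\arg m_a) + 4\,(\arg \eta_a) - 1 = 7,
\end{aligned}

in agreement with the 14 real parameters and 7 phases stated above.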
The choice of "other Higgs fields" shown above is in part guided by avoiding this problem. Furthermore, in some simplistic SUSY cases, the Higgs doublet mass matrix becomes proportional to the mass matrix of a colored sector. Making a pair of Higgs doublets light would then also make a pair of colored states light, which affects perturbative unification. Such cases are avoided in the scenarios shown above. In each of the models listed, the seesaw mechanism may be realized via either the type-I or the type-II chain. Such sub-classes will be denoted by a label I or II when needed. Thus AI refers to type-I seesaw in Model A, and likewise AII to type-II seesaw in the same model.
Models A and B are nonsupersymmetric, while models C-F are supersymmetric. For model A, in addition to 45 H , a 54 H is needed to break SO(10) down to the SM without going through an intermediate SU(5)-symmetric limit. In Model B, which uses a 210 H , an additional field, either a 54 H or a 16 H , is needed for the following reason. As noted already, 126 H acquires a VEV of order 10^13-10^14 GeV, which can be ignored for the study of GUT symmetry breaking at around 10^16 GeV. A single 210 H would break SO(10) down to one of its maximal little groups, such as SU(5) × U(1), SU(4)_C × SU(2)_L × SU(2)_R , etc. The fermion mass matrix would then reflect this unbroken symmetry, which is not realistic for the light fermion spectrum. Addition of a 54 H (or a 16 H ) with a GUT VEV would reduce the surviving symmetry and help with realistic fermion masses. SUSY SO(10) is not viable if the symmetry is only broken by 45 H + 54 H , since in this case the Higgs doublet (1,2,1/2) and the Higgs octet (8,2,1/2) mass matrices become identical. So one cannot make the MSSM doublet fields light without also making the octet fields light. To break this degeneracy one needs to extend the Higgs sector. For this purpose, in model C we enlarge the Higgs sector by adding 16 H + 16 H . The SUSY SO(10) model with 210 H + 126 H + 126 H is also not consistent, because with the requirement v R ∼ 10^13-14 GeV, the octet (8,3,0) Higgs field becomes light with a mass of order ∼ 10^10-11 GeV, so the theory does not remain perturbative up to the GUT scale. Thus, in order to avoid this, in model D we include a 54 H Higgs, and in model E we include a 16 H + 16 H . It will be shown later in Sec. V that, in all these SUSY SO(10) models with a 210 H , there is only one independent VEV ratio involving the 210 H field, owing to symmetry breaking constraints. Including more Higgs multiplets, one can break such relationships among VEVs, which can lead to two independent VEV ratios for the 210 H . We also consider this general case, which is labeled as model F, where in addition to the 210 H one has both 54 H and 16 H + 16 H (or some other unspecified) multiplets. We do not consider any model where both the 45 H and 210 H are present simultaneously, which would lead to more parameters and thus less predictivity in the fermion sector. Details of the symmetry breaking schemes will be explained further in Sec. V.
The set-up and formalism
All models we study have one vector like 16 + 16 pair plus 3 generations of chiral 16's. Their mass terms and couplings to a 45 H given in Eq. (2.1) can be expanded to yield Although by redefining the phases of ψ a we can make all these M a real, we will keep them complex in general. Then we project to the heavy states as usual by To this we add the Yukawa couplings to 126 H . Although we are free to choose this 4 × 4 Yukawa matrix to be diagonal and real (in the original basis, i.e. before (3.5)), we will keep it to be complex symmetric and choose a convenient basis later on. The 16 has coupling to the 126 H , but this will turn out to not affect light fermion masses. The relevant Yukawa couplings are (see Eq. (2.1)) 16 a Y ab 126 H 16 b + 16ȳ 126 H 16. (3.9) In this original basis we put all together: we have to project to the light generations. In doing so we need to evaluate (Y is a 4 × 4 matrix, while Y is its 3 × 3 submatrix) (3.14) where we used Λx * =Λx * . For charged fermions this is enough, and we get (mass matrices are defined as ψ c M Ψ ψ) For neutrinos things are slightly more involved, since there are two kinds of heavy neutrinos, the usual right-handed ones, plus the new vector-like ones. The full symmetric Majorana mass matrix is 10×10. However, in the leading order in yv R /M L,ν c (M L,ν c denote the masses of vector-like leptons), the situation returns to ordinary with 20) so that as usual by using the seesaw [17] formula we arrive at the 3 × 3 light neutrino mass matrix as If the approximation yv R /M L,ν c 1 is not good, we write the full symmetric matrix for (3.22) One can integrate out ν 4 andν without any trace, since they mix through a large M L , but otherwise feel just the small VEVs. What remains is for (ν i , ν c i ,ν c , ν c 4 ): This has again the form and thus Eq. (3.21) applies with M ν L given by Eq. (3.20), but now for 5 right-handed neutrinos with a 5 × 3 matrix M ν D and a 5 × 5 symmetric matrix M ν R : (3.26) To conclude, let's write down explicitly the various x's: we can rewrite the above as To get the masses and mixings we change the basis for x = d, u, e, ν and X = D, U, E, N . This means that (for X = N , so that the CKM and PMNS matrices are defined as So far we have been very general. However, there are redundancies that are present, which should be removed for an efficient numerical fitting algorithm. In the next section we shall choose a specific basis, which may appear at first to be less intuitive but which is well-suited for our numerical minimization. There are two obvious basis choices, one where Y ab is diagonal, and a second one where the vectors m a and η a have simple forms. It is the second one that is used in the next section. For further use we give here the relations between the two sets of parameters. and are the VEVs of the three SM singlets of 210 H . This then changes Eq. (3.31) into where now For correspondence with the specific basis chosen in the next section, we still have Eqs.
Analysis in a specific basis
The general formulas given in the previous section for the light fermion mass matrices have built-in redundancies. Here we choose a specific basis where these redundancies are removed. We choose a basis where the four-vectors in Eq. (2.1) have simple forms: These simple forms are achieved by 4 × 4 family rotation, which makes the vector η to have the form shown, and a subsequent 3 × 3 family rotation that brings the vector m to this form. A further rotation in the first two family space can be made, we choose this rotation to make the 4 × 4 Yukawa matrix, denoted as a ij in this specific basis, to be diagonal in the 1-2 subspace, i.e., a 12 = a 21 = 0. The correspondence given in Eqs. (4.51) The effective mass terms that arise after the VEV of Φ is inserted would depend on the VEV ratio of the two SM singlets in 45 H and on two VEV ratios of the three SM singlets in the case of 210 H . For the former, we can define an unbroken charge Q, which is not the electric charge, but a linear combination of hypercharge Y and the U (1) X charge contained in SO(10) → SU (5) × U (1) X -the 45 H leaves this charge Q unbroken. A parameter can be introduced in terms of which the unbroken charge Q can be defined for each of the SM fermions [37]: where X is normalized so that X 10∈16 = 1, X 5∈16 = −3 and X 1∈16 = 5. Thus the charges of fermions ∈ 16 of SO(10) for the case of 45 H are: ; Q e c = 1 + 6 5 ; Q ν c = −1.
(4.53)
For 210 H case the fermion charges are given in terms of two parameters 1,2 : (4.54) These charges are obtained from Eq. (3.43) by setting , the last two terms of the Yukawa Lagrangian in Eq. (4.51) can be written as the heavy (GUT scale) fields (f 4 ,f c 4 ) and the light SM fields (f 3 ,f c 3 ) can be identified as Note that this ratio is not exactly equal to tan β of MSSM, but is closely related to it. If we ignore the mixing of the up and down-type Higgs doublets from 126 H with other doublets present in the theory, r would be equal to tan β in MSSM. The following relations are then readily obtained: Note that a rotation in the 1-2 sector has been made which makes a f 12 = a f 21 = a 12 = 0. These mass matrices are not symmetric, since a f ij = a f ji , although the original matrix obeyes a ij = a ji . These four mass matrices for f = U, D, D, ν D are given in terms of the parameters , T, θ, φ and a ij (with i, j = 1 − 4 and a 12 = a 21 = 0). We choose to take elements of M E to be independent. One can then solve for a 13 and a e 23 = a e 32 , which does not lead to realistic fermion masses.) Similarly for the case of Φ = 210 H , the restriction is 2 = 0 is required as can be seen from Eq. (4.54). All these mass matrices have the same 1-2 sector and one can choose a 11 = a e 11 and a 22 = a e 22 . In addition, a e 33 , a u 33 , a d 33 depend on 3 independent parameters a 33 , a 34 , a 44 that appear only in the (3,3) sector of the light mass matrices. Since this linear system is invertible, one can treat a e 33 , a u 33 , a d 33 as independent parameters. The (3,3) element of the right-handed neutrino Majorana matrix is then not free, and is determined in terms of a e 33 , a u 33 , a d 33 . Expressions for a ij in terms of the independent parameters chosen are given in Appendix A .
The elements of M E are independent parameters. We can express M U and M D in terms of T, θ, φ, a u 33 , a d 33 , a e ij and (or 1,2 ) for the case of 45 H (or 210 H ), so in this basis the charged fermion mass matrices are: In the case of SUSY, is complex, so one additional phase enters (for a total 21 parameters). For Φ = 210 H in the SUSY context with minimal Higgs content, 1 and 2 are not independent of each other (see later), so there are again 13 magnitudes and 8 phases (in total 21 parameters). Later we will also consider a case with non-minimal Higgs sector where both these VEV ratios 1,2 can be in general independent of each other. In the neutrino sector (discussed in the next subsection) the mass matrix is given by these same parameters except for an overall scale (v R,L for type-I and type-II seesaw scenarios respectively) that adds one new parameter.
Type-I seesaw
To write down the mass matrix in the neutrino sector, we make the assumption that M, bΩ ≫ v R , which is a valid approximation provided that M, bΩ ∼ M GUT ∼ 10 16 GeV. Note that in order to generate light neutrino masses through the seesaw mechanism, one roughly needs v R ∼ 10 12−14 GeV. In this approximation, no new parameter enters the neutrino mass matrix except the scale v R . For the type-I seesaw mechanism the Dirac neutrino mass matrix can be read off from Eq. (4.59): (4.72)
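To illustrate the type-I seesaw structure used here, the following is a minimal numerical sketch with invented Dirac and right-handed Majorana matrices (the values are placeholders, not the fitted matrices of this paper), using the standard approximation M_nu ≈ -M_D^T M_R^{-1} M_D:

import numpy as np

# Toy Dirac and right-handed Majorana mass matrices (illustrative values only, in GeV).
MD = np.array([[0.10, 0.02, 0.00],
               [0.02, 1.00, 0.10],
               [0.00, 0.10, 50.0]])
MR = np.diag([1e8, 1e10, 1e13])

# Type-I seesaw: M_nu ~ -M_D^T M_R^{-1} M_D, valid when M_R >> M_D.
Mnu = -MD.T @ np.linalg.inv(MR) @ MD

# Light neutrino masses (in eV) are the singular values of the symmetric matrix M_nu.
print(np.linalg.svd(Mnu, compute_uv=False) * 1e9)

The large hierarchy between M_D and M_R is what produces sub-eV light masses despite O(10) GeV Dirac entries.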
Type-II seesaw
In analogy to the analysis done in Sec. 4.1.1, one can derive the type-II seesaw contribution to the neutrino mass matrix by replacing v R → v L and ν c → ν. In this type-II seesaw scenario the neutrino mass matrix is then given by
Symmetry breaking constraints
In all models studied here, there is no 10 H Higgs and matter fields couple to 126 H + 126 H and 45 H or 210 H scalars. There are considerations as outlined in Sec. II that would require additional Higgs fields to be present for consistent symmetry breaking. While there are no constraints on the VEV ratios when a 210 H is employed in the non-SUSY framework, these ratios are determined in the case of SUSY. We consider the various constraints on the symmetry breaking sector in this section.
Non-SUSY SO(10) models A and B
Model A employs 126 H , 45 H and a 54 H . Breaking of SO(10) down to the SM via the SU(5) channel is not viable due to gauge coupling unification and proton decay limits. If only 45 H and 126 H (or 16 H ) Higgs multiplets are used to break SO(10), breaking takes place through the SU(5)-symmetric channel [38][39][40]. The other two breaking channels of SO(10) do not have a stable vacuum at tree level. Recently, quantum corrections to the tree-level potential have been taken into account [41,42], and the validity of such breaking channels has been demonstrated. However, we do not rely on quantum corrections in this paper. This is why the Higgs sector needs to be extended with a 54 H for consistent SO(10) breaking down to the SM [43,44]. Note that a Higgs system consisting of 126 H and 54 H is sufficient for symmetry breaking purposes if a 10 H is also used [45], but without the 10 H , as in our case, a 45 H is necessary.
Since the SM Higgs doublet is part of the 126 H in this model, a question arises as to the negativity of its squared mass. Consistency of the GUT scale symmetry breaking would require all physical scalar squared masses to be positive, which includes the SM Higgs doublet. There must then be a source that turns this positive mass to negative value. It has been shown in Ref. [46] that indeed such a turn-around is possible, provided that some scalar from any GUT multiplet remains light and has non-negligible couplings to the SM Higgs doublet. The context in Ref. [46] is similar to our present case, where a 144 H of SO(10) is used to break the GUT symmetry as well as the electroweak symmetry. Since our present non-SUSY model has an intermediate scale, we expect some of the scalars to survive down to the intermediate scale, which would enable turning the Higgs mass-squared to negative value so as to trigger electroweak symmetry breaking.
In Model B we employ a 210 H in addition to the 126 H . This is, however, not sufficient for our purpose. Since the VEV of 126 H is much smaller than the GUT scale, a single 210 H would break the GUT symmetry to one of its maximal little groups, such as SU(5) [47]. The fermion mass matrices will then carry traces of this unbroken symmetry, which would lead to unwanted mass relations. This is why we extend the scalar sector by adding a 54 H or 16 H . For the non-SUSY SO(10) model with Higgs multiplets 210 H + 54 H , since 54^2 ⊃ 1_s + 54_s + 770_s and 210^2 ⊃ 1_s + 54_s + 770_s , the scalar potential contains 2 non-trivial quartic couplings between the 210 H and the 54 H . In addition, the 210 H has 3 non-trivial quartic couplings, and the 54 H has one cubic and one non-trivial quartic coupling. This counting of non-trivial couplings dictates that in general the two VEV ratios 1,2 from the 210 H are free parameters. A similar argument can be made if the 54 H is replaced by a 16 H Higgs.
SUSY SO(10) Models C-F
The Higgs sector of Model D consists of 210 H + 54 H + 126 H + 126 H . This system is a subset of the SUSY SO(10) models studied in Ref. [48]. The relevant part of the superpotential with only 210 H , 54 H and 126 H + 126 H is: Since the VEV of 126 H is required to be in the intermediate-scale ∼ 10 13−14 GeV range by the fit to light neutrino masses arising via the seesaw mechanism, in this analysis of the superpotential one can neglect the contribution coming from this field, as the other scalars 210 H + 54 H will get much larger VEVs of order the GUT scale ∼ 10 16 GeV. Then the relevant stationary equations are These correspond to the VEV ratios. While studying the fermion masses and mixings numerically, we will consider both these cases. These models are labelled as D a for the solution 1 = − 2 . There are two different solutions of this system of stationary equations. So the VEV ratios are given by where is a free parameter. We discard the first solution since it corresponds to the SU(5)-symmetric case. The surviving model will be labeled E.
By adding more Higgs multiplets in either of the models D or E, as for example 16 H +16 H or adding another 54 H to model D, these relations for VEV ratios can be made invalid and 1,2 can be made independent parameters. We will also study this general case.
Numerical analysis of fermion masses and mixings
In this section we show our fit results for fermion masses and mixings for the different SO(10) models described in Sections II and V. We do the fitting for both the non-SUSY and SUSY cases, each with type-I and type-II seesaw scenarios. For optimization purposes we do a χ 2 -analysis. The pull and χ 2 -function are defined in Eq. (6.81). From Refs. [51,52] we get the GUT scale inputs. For all the different SUSY SO(10) models, we do the fitting for tan β = 10. For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account the theoretical uncertainties arising for example from threshold effects. The inputs in the neutrino sector are taken from Ref. [53]. For neutrino observables, we do not run the RGE from the low scale to the GUT scale, which is a relatively small effect, except for an overall rescaling of the neutrino masses that can be absorbed in the corresponding scale v R or v L . In the case of an inverted hierarchical neutrino mass spectrum, RGE effects can be important, whereas for all our cases the spectrum turns out to be normal hierarchical. Since the right-handed neutrino masses are extremely heavy, threshold corrections might also have effects on the neutrino observables if the Dirac neutrino matrix elements are of order one, but in our case the elements are much smaller than one. All these inputs are shown in the tables where the fit results are presented. Below we present our best fit results and the corresponding parameters for the different SO(10) GUT models discussed above. Table 1: Fitted values of the observables correspond to χ 2 = 7 · 10 −2 and 0.78 for models AI and AII respectively. These fittings correspond to |a ij | max = |a 44 | = 1.9 and 3.3 for the type-I and type-II cases respectively (see text for details). For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account the theoretical uncertainties arising for example from threshold effects. {8.65 · 10 7 , 2.66 · 10 10 , 6.99 · 10 11 } - (6.84) In performing such optimization, solutions with lower values of χ 2 exist, but we are only interested in the solutions for which the original couplings a ij are also in the perturbative range. In the optimization process we restrict ourselves to the case of (a ij ) max ≲ 2. For all the solutions that are presented, we did find good fits with this cut-off, except for model AII where |a 44 | = 3.3, as can be seen from Eq. (C.150).
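The χ 2 -based optimization described above can be sketched as follows; this is a generic illustration in which the observable values, uncertainties, and the model function are placeholders rather than the GUT-scale inputs used in the paper:

import numpy as np
from scipy.optimize import minimize

# Placeholder targets: central values and 1-sigma uncertainties of a few observables.
central = np.array([1.0e-3, 0.30, 70.0])
sigma = np.array([0.2e-3, 0.03, 3.0])

def prediction(params):
    # Stand-in for the actual diagonalization of the model's mass matrices.
    a, b, c = params
    return np.array([a * b, b, c])

def chi2(params):
    pulls = (prediction(params) - central) / sigma
    return float(np.sum(pulls ** 2))

best = minimize(chi2, x0=[3.0e-3, 0.25, 65.0], method="Nelder-Mead")
print(best.x, best.fun)

In practice one would also impose the perturbativity cut on the underlying couplings when accepting a minimum, as the text describes.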
In Table 2, the predicted quantities correspond to the best fit values. For example, for model AI, the predicted value of the Dirac-type CP violating phase in the neutrino sector is δ PMNS = 2π/3. The fit result presented in this case is very good since χ 2 = 7 · 10 −2 . We have investigated the robustness of the predicted value of δ PMNS and found it to be not very robust. Since the χ 2 for the best fit is extremely small, one can deviate from the minimum χ 2 and still find acceptable fits. We find that the variation of δ PMNS from the predicted value can be quite large. In Fig. 1, we show the variation of δ PMNS with χ 2 /n obs . Most of the fit results presented in this work have small total χ 2 , so this conclusion on the robustness of the δ PMNS prediction is valid for the other models as well. We present the variation plot only for model AI. Table 3: Best fit values of the observables correspond to χ 2 = 5 · 10 −3 and 1 · 10 −5 for models BI and BII respectively. These fittings correspond to |a ij | max = |a 44 | = 0.56 and 0.26 for the type-I and type-II cases respectively. For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account the theoretical uncertainties arising for example from threshold effects. Table 5: Best fit result for models C with inputs corresponding to tan β = 10. The fitted values correspond to χ 2 = 7 · 10 −4 for model CI and 6 · 10 −4 for model CII. These fittings correspond to |a ij | max = |a 44 | = 1.5 and 1.03 for the type-I and type-II cases respectively. For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account the theoretical uncertainties arising for example from threshold effects. Table 9 and 10 respectively. The parameter set for D b I is: Table 7: Fitting result for model D a I with inputs corresponding to tan β = 10. The fitted values correspond to χ 2 = 7.4 for type-I. It should be mentioned that among all the fit results presented in this work, this specific fit has the largest value of χ 2 , which is 7.4 for 18 observables. This fit corresponds to |a ij | max = |a 44 | = 1.55. For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account theoretical uncertainties arising for example from threshold effects. We did not find any acceptable fit within the perturbative range for model D a II. {8.89 · 10 7 , 2.14 · 10 10 , 2.63 · 10 12 } Table 11 and 12 respectively. For model EI, the parameter set is:
d = 5 proton decay
Since the flavor dynamics occurs at the GUT scale in this class of models, the best hope for testing this idea is by studying proton decay, in particular its branching ratios into different modes. While such an analysis can be done for both non-SUSY and SUSY models, here we confine our discussion to the dominant d = 5 decay modes in SUSY, mediated by the color-triplet Higgsinos. Table 14: Fitting result for models F with inputs corresponding to tan β = 10. The fitted values correspond to χ 2 = 9 · 10 −4 and 3 · 10 −5 for model FI and 6 · 10 −4 for model FII respectively. These fittings correspond to |a ij | max = |a 44 | = 0.67 and 1.08 for the type-I and type-II cases respectively. For the charged lepton masses, a relative uncertainty of 0.1% is assumed in order to take into account theoretical uncertainties arising for example from threshold effects.
We will restrict ourselves to the (presumably) dominant d = 5 (charged) wino-mediated mode, so that only SU(2) L non-singlets will appear in the effective operators: We have to project them onto the mass eigenstates defined by the unitary matrices X = U, D, E, N which diagonalize the mass matrices as We will use the notation (X = U, D) After 1-loop w ± dressing, and assuming degeneracy and negligible left-right sfermion mixing, the normalized amplitudes for the different channels [54] are, in the mass eigenbasis, where the numerical values (with maximal error around 30%) of the hadron matrix elements can be found in [55]. The unitary matrices X and the Yukawa matrix elements Y QQ,QL are outputs of each successful fit. As an example, for model D a I we find the following. After squaring (7.102)-(7.106) and multiplying by the appropriate phase space factor (m P , m L , m p are the pseudo-scalar, lepton and proton mass, respectively)
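Once the normalized amplitudes and the phase-space factors are known, the branching ratios follow from squaring and normalizing. A minimal sketch of that last step (the channel list, amplitude values, and the simple phase-space form used here are placeholders, not the paper's results):

# Hypothetical normalized amplitudes |A_i| for a few d = 5 channels (placeholders).
amplitudes = {"K+ nubar": 1.00, "pi+ nubar": 0.30, "K0 mu+": 0.15}
meson_mass = {"K+ nubar": 0.494, "pi+ nubar": 0.140, "K0 mu+": 0.498}  # GeV
m_proton = 0.938  # GeV

# Indicative two-body phase-space suppression, neglecting lepton masses.
def phase_space(channel):
    return (1.0 - (meson_mass[channel] / m_proton) ** 2) ** 2

widths = {ch: amp ** 2 * phase_space(ch) for ch, amp in amplitudes.items()}
total = sum(widths.values())
print({ch: round(w / total, 3) for ch, w in widths.items()})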
Conclusion
We have presented in this paper a new class of SO(10) models that can successfully address the flavor puzzle. The key ingredient of our models is the absence of 10 H that is conventionally used in most SO(10) models. Its absence is compensated by the introduction of a vector-like family in the 16 + 16 representation. The Yukawa sector of these models has just a single 4 × 4 matrix, along with two four-vectors. As a consequence, there are only 14 flavor parameters and 7 phases to fit all fermion masses and mixings, including the neutrino sector. While the Yukawa system is highly nonlinear, by numerical optimization we have found excellent fits to the fermion observables in a variety of models. A 126 H is present in all models, to generate large right-handed neutrino Majorana masses as well as to provide the SM Higgs doublet. The vector-like fermions have couplings to either a 45 H or a 210 H that is used to complete the symmetry breaking. A total of six models, four supersymmetric and two non-supersymmetric, have been studied. In each case type-I or type-II seesaw mechanism was analyzed. In one case (Model D) with SUSY, minimization of the Higgs potential led to a two-fold solution set, with each providing an excellent fit to flavor observables.
While this class of high-scale models cannot be easily tested at collider experiments, proton decay provides an avenue to probe such models. We have investigated the branching ratios for proton decay in the SUSY models, with the results presented in Table 15. While it is an ambitious goal to test flavor models through proton decay discovery, even without such a discovery it is heartening to learn that a large class of models can shed light on the various puzzles of fermion masses observed in nature. In particular, starting from a highly symmetrical quark and lepton sector these models produce large neutrino mixing simultaneously with small quark mixing, a highly nontrivial achievement.
A Expressions for a ij
In this Appendix we give expressions for a ij used in the numerical analysis.
B Expressions for a ν 33 , a R 33 , a L 33
In this appendix, we give the expressions for a ν 33 , a R 33 , a L 33 for both the Φ = 45 H and 210 H cases. Using Eqs.(4.66), (A.118), (A.119) and (A.120) it is straightforward to find for the 45 H -case: And for the case of 210 H we find: | 2016-05-17T11:19:52.000Z | 2016-05-17T00:00:00.000 | {
"year": 2016,
"sha1": "b5f130b7949deaf33ec09eedceaeb204a91690e6",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.94.015030",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "b5f130b7949deaf33ec09eedceaeb204a91690e6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
81607425 | pes2o/s2orc | v3-fos-license | A survey on Biostatisticians Serving in the Italian Ethics Committees
Background: Italian ethics committees (ECs) have the responsibility for evaluating and monitoring clinical research. Methods: An electronic survey targeted to the biostatisticians operating in the 95 ECs in Italy, was launched in November 2016. Several aspects were explored such as education, job title, training in biostatistics and experience in the evaluation of protocols within the EC. Results: Seventy case report forms were returned (74%), and the response rate was highest for ECs located in the South (78%) and lowest in the North (51%). The biostatisticians in the respondent ECs were prevalently male, aged 50-60 years, with postgraduate education in medical specialties and statistics. The annual workload varied depending on the type of institution and geographical area, with an annual median number of protocols examined ranging from 80 in hospital ECs to 198 in university hospital ECs, and from 80 to 108, in the South and the Centre, respectively. Of these, 40% were observational study protocols. The EC biostatisticians proposed to reject 5% of protocols and to suspend with the request of clarification or amendments 10%. Only 61% and 79% of these opinions, respectively, were regarded as binding by the other EC members. Conclusion: The biostatistician will not be able to play a significant role in the EC as long as the required skill-set remains vague and his/her opinion on a protocol is underrated.
INTRODUCTION
At the European level the Ethics Committee (EC) is defined according to the Directive 2001/20/EC of the European Parliament and of the Council as "an independent body in a Member State, consisting of healthcare professionals and non-medical members, whose responsibility it is to protect the rights, safety and well-being of human subjects involved in a trial and to provide public assurance of that protection, by, among other things, expressing an opinion on the trial protocol, the suitability of the investigators and the adequacy of facilities, and on the methods and documents to be used to inform trial subjects and obtain their informed consent" [1].
Even if the European directive has been implemented differently in the European countries [2], it was intended to "harmonize the administrative provisions governing such trials by establishing a clear, transparent procedure and creating conditions conducive to effective coordination of such clinical trials in the Community by the authorities concerned" [1].
In the Italian context, up to 1998, all experimental studies had to be regulated, reviewed and approved at a national level by a Central Drug Committee (Commissione Unica del Farmaco). By Ministerial Decree (MD) 15/7/1997 [3] and MD 18/3/1998 [4], the responsibility for monitoring and approving clinical trials was transferred from the national level of the Ministry of Health to local ECs [5]. Unfortunately, such deregulation in Italy resulted in an overly heterogeneous evaluation of clinical studies by the local ECs and a lack of guidance from the top level, with variations in the type of functions and types of competence [6][7][8][9].
According to MD 18/3/1998, an Italian EC is typically composed of two clinicians, a biostatistician, a pharmacologist, a pharmacist, the medical director (or scientific director in the case of research centers), a legal expert, a general practitioner, a bioethicist, a nurse and a volunteer for the care of patients or a patient advocacy organization [4]. This composition appears to strike a reasonable balance between clinicians and non-clinicians. It is worth noting that with the approval of a new Italian law about ECs, i.e., MD 12/5/2006 [10], partially modified by the MD 07/11/2008 [11], there has been more significant involvement of pharmacologists and biostatisticians [12]. For instance, before the implementation of Directive 2001/20/EC into Italian law by Legislative Decree 24/6/2003 [13], 32 biostatisticians were working in the ECs, because their presence was not mandatory [14].
More recently, MD 19/04/2018 [15] established the National Coordination Center for the Territorial Ethical Committees for clinical trials on medicinal products for human use and on medical devices, with no substantial change to the composition of ECs but a reduction in their number in Italy, with the specific aim of expediting the processing of trial applications and reducing variation therein.
The new European regulation (Regulation EU N. 536/2014) remains in the background of this complicated regulatory environment: although it was issued in 2014, at the time of writing there is still no certainty as to when it will come into force in Italy.
Given the heterogeneity in the work of Italian ECs [16], a survey was carried out by the Network of Biostatisticians in the Ethics Committee (NEBICE), with the aim i) to depict the profile of biostatisticians working in the ECs, ii) to promote a connection between them by means of an internet-based network, and iii) to determine the need for targeted professional development, education programs, and training.
The present report illustrates NEBICE survey results and offers some remarks.
Facets of Evidence and Ethics in Clinical Research
It has been observed that clinical research can be considered ethical if the following conditions are fulfilled (even if there can be exceptions in particular circumstances): (a) social or scientific value; (b) scientific validity; (c) fair subject selection; (d) favourable risk-benefit ratio; (e) independent review; (f) informed consent; (g) respect for enrolled subjects [17,18]. More generally, a trial submitted to an EC must feature, to be approved, a rigorous methodology, clinical relevance and appropriate principles of ethics directed towards patients, society and researchers [19]. Notably, some ethical and methodological issues need to be handled with particular relevance by ECs, namely: the appropriateness concerning placebo use for the control group, the nature of the comparator, the equivalence or non-inferiority design hypothesis of the trial and the choice of study endpoints [20]. ECs should require systematic reviews of existing research to avoid redundant and non-inferior studies [21] and to guarantee clinical equipoise [22]. However, there is no consensus in the bioethical community on the justification of the principle of clinical equipoise for the moral acceptability of conducting a new trial [23,24].
As stated centuries ago by Avicenna, when evaluating clinical research, we have to wonder: do I believe the data presented? Can I use the results for my patients? [25]. It is also essential that the findings of biomedical research have public dissemination, since it is ethical to share medical knowledge with colleagues and lay people [26]. In the past, there has been a substantial request for training programs from the members of the EC to appropriately deal with local needs. Many Italian local ECs are overloaded because of the high number of protocols that every year are submitted to them, even if only a small part of protocols concern innovative research, and there are many differences among the ECs in the country [19]. Heterogeneity exists in Europe regarding the number of ECs, number of EC members [27], and training requirements. As to the latter, the following topics related to training for EC members have been proposed [28]: (a) the purpose and history of medical research, (b) the history of research ethics, (c) working together in the modern regulatory environment, (d) basic ethical principles, (e) critical appraisal of a project, (f) ethical analysis, (g) group working, (h) reaching consensus, (i) fraud and misconduct. Also, it has been noted that there are some negative aspects related to ECs, namely: extreme bureaucracy [29], late decisions and lack of interest in the decision process on genuine bioethical issues.
Ethical evaluation of a study by an EC requires, on the part of at least a majority of members, a sound knowledge of Evidence-Based Medicine principles (EBM) and functional competence in biostatistical methods, since it is unethical to conduct research that is unsound: unsound studies can improperly modify medical evidence on a particular issue, and this may give rise to many ethical problems [30]. Clinicians are mainly concerned with the ethical issues related to the health of their present patient while, in addition to that, the members of the ECs need to treat and evaluate all ethical questions which arise from a study, in order to ensure the safety of a drug or treatment also for future patients [31]. Thus, the interplay between an individual level of ethics and a collective one is required and desirable. Moreover, many other ethical constraints are related to medical research: it is not ethical to deprive patients of useful treatment, and it is necessary to stop a trial when there is sufficient evidence of no clinical significance of a treatment, in order not to expose patients to useless risks.
It has been observed, in fact, that the scientific evaluation of a trial or a treatment is a necessary, but not a sufficient, condition for a sound ethical evaluation [30]. Many errors can be associated with clinical research: poor definition of the research question or of the inclusion and exclusion criteria, wrong determination of the sample size, failure to include a suitable control group, failure to carry out the study objectively, failure to evaluate the results of the subjects withdrawn from the study or to comment on them in the study [32,33]. Also, it has been claimed that "a valuable attribute of statisticians is their ability to ask relevant and important questions, not only about statistical issues but also about the purpose of the research" [34]. Conversely, all the other members of the EC should also have a basic knowledge of medical statistics and of the many forms of bias which may affect clinical and statistical judgment [35,36].
METHODS
The survey was launched in early November 2016, and data were collected until August 2017. An e-mail survey invitation was sent to all secretaries of the 95 Italian ECs contained in the registry of the National Monitoring Center on Clinical Research with Medicines (Osservatorio Nazionale sulla Sperimentazione Clinica dei Medicinali) maintained by the Italian Agency of Medicines (AIFA), as updated in September 2016.
The survey was created within an electronic data capture system hosted at the University of Padova and known as REDCap (Research Electronic Data Capture) [37]. REDCap is a secure, web-based application designed to support data capture for research studies, initially developed at Vanderbilt University (https://projectredcap.org).
The interested EC biostatisticians were able to access the survey anonymously, via a request for use, through the link (https://redcap.dctv.unipd.it/surveys/?s=X89348XXFY) contained in the survey invitation. Several aspects were explored, including education, job title, training in biostatistics and personal experience in protocol evaluation within the EC.
The data were analyzed with R software, version 3.2.5 [38].
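Although the original analysis was performed in R, a summary of the kind reported in the Results (response rates by geographical area) can be sketched as follows; the row-level data below are invented for illustration (chosen so that the North and South rates land near the reported 51% and 78%) and are not the NEBICE dataset:

import pandas as pd

# One hypothetical record per invited EC (95 rows in total).
ecs = pd.DataFrame({
    "area": ["North"] * 35 + ["Centre"] * 25 + ["South"] * 35,
    "responded": [True] * 18 + [False] * 17     # North
               + [True] * 19 + [False] * 6      # Centre
               + [True] * 27 + [False] * 8,     # South
})

# Response rate (%) by geographical area.
print(ecs.groupby("area")["responded"].mean().mul(100).round(1))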
RESULTS
The investigation covered 95 ECs in Italy, and 70 questionnaires were returned, yielding a 74% response rate. The response rate varied by geographical location (Northern, Central and Southern Italy), with 78% compliance for ECs in the South versus 51% for those located in the North, and by the type of institution in which the examined ECs operate.
The biostatisticians in the respondent ECs were prevalently male (n=41, 59%), between 50 and 60 years of age (n=30, 43%), and 39 of them were not affiliated with the facility in which the EC operates. For 42 of them (60%), the highest academic degree was a Ph.D. or a postgraduate specialty (46 academic degrees overall). Among Ph.D. fields, the most common was epidemiology (43%). Among postgraduate specialties, the most frequent was in health/medicine (69%), followed by a specialty in statistics (see Table 1). This distribution exhibits geographical variation, with statistics being the most common specialty in the North and health/medicine the most common specialty in the South.
The self-reported best level of statistical training relevant to EC activity (i.e., descriptive, inferential and medical statistics, clinical epidemiology, ...) was mainly achieved through short courses, from 27% to 47% (depending on the topic), and courses in specialty programs, from 15% to 25%, whereas courses in Ph.D. programs contributed to a lower extent (from 9% to 18%). The practical knowledge of statistics was measured by the frequency with which the EC biostatistician analyses data (categorized as always, never, sometimes). Its distribution by type of data and geographical location of the EC is shown in Table 2.
Concerning scientific activity, the median number of publications of the EC biostatistician in indexed journals over the last five years ranged from 4 to 46, for hospital and university ECs respectively. Regardless of the institution and the geographical location of the ECs, 76% of biostatisticians declared being involved in research activities and 70% in teaching activities.
The annual workload for the EC members varied by the type of institution and geographical area. The median number of examined protocols per year ranged from 80 in hospital ECs to 198 in university ECs, and from 80 to 108, in the South and the Centre, respectively. About 40% of these protocols concerned observational studies.
As for the evaluation of protocols, the study design and objectives, together with statistical issues, were identified as the most relevant aspects to be taken into account. The handling of missing data and economic aspects were regarded as less important (see Table 3, panel A). When asked to rank the principal motivations for a protocol rejection, an unethical treatment or aspects related to the sample size were given high importance (Table 3, panel B). Sample size calculation equally influenced protocol suspension and rejection (Table 3, panels B-C).
Although an EC decision on a research protocol is made by consensus, a question on the individual opinion of the EC biostatistician was included in the survey. The proportion of protocols that were not approved by the EC biostatistician did not appear to depend on the type of institution in which the EC was based or on the affiliation of the principal investigator, except for the suspension of protocols in university ECs (Table 4).
Overall, the biostatistician proposed to reject 5% of the protocols and to suspend, with a request for clarification or amendments, 10%. About 61% and 79% of these opinions, respectively, were regarded as binding by the other EC members in reaching a decision. It is worth noting that EC biostatisticians older than 45 years (n=30, 29 missing answers) were taken into greater consideration when proposing to reject a protocol.
In evaluating a protocol, 58% of EC biostatisticians declared that they consult supplementary material (e.g., textbooks, online databases). This proportion was higher in the North (70%) and lower in the South (33%). As to the time spent, 40% of protocols were examined in 16-30 minutes, but variability exists between the different types of institutions that host the EC (Table 5).
Concerning the implementation of training, education and information programs for the EC biostatistician, 18 respondents declared that their EC did not provide continuing education opportunities, 20 that it never promoted courses on research methodology, and 24 that it regularly provided information on regulatory aspects. When asked whether their skills and experience fit the position of EC biostatistician (on a 0-100 scale), the median self-perceived adequacy was good, ranging from 70 for LHA ECs to 90 for hospital ECs.
DISCUSSION
Italian ECs have the responsibility for evaluating and monitoring clinical studies in human subjects, with the ultimate goal of promoting high ethical standards in health research [19,39].
The NEBICE study was intended to provide an outline of the characteristics and activities carried out by the biostatisticians in the Italian ECs, to identify the best strategies to promote methodological rigor and ethical behavior. Although the survey was not aimed at an evaluation of quality and workload of ECs, it offered an intriguing insight into the operation of the clinical trial regulations in Italy, and a perspective which is different, albeit complementary, from that recorded by AIFA in its annual Bulletin (http://www.aifa.gov.it/en/content/bulletin-clinical-trials-drugs-italy).
To our knowledge, NEBICE is the study which involves the highest number of Italian ECs.
In our investigation, we mainly focused on the ethical and methodological dimensions of the biostatistician's role in the EC. The biostatistician will not play a significant role in the EC as long as the requirements that an individual has to fulfill to be a biostatistician in an EC remain vague and his/her opinion is not binding for the judgment of approval or refusal of a protocol. This fact may also entail severe ethical problems, as a valid quantitative approach to research is a requirement for a complete ethical evaluation of a protocol.
More generally, there is a lack of understanding of the intimate connection between biostatistics and ethics. Biostatistics must not be conceived as a value-free science, since the ethical consequences of making a statistically wrong decision have to be taken into account [40,41].
The evaluation of a research protocol involves many biostatistical issues which present some intricate ethical counterparts. Given that there is such a close relationship between ethics and methodology in ECs, the role of the biostatistician should be enriched with more comprehensive and interdisciplinary training, so that he/she is capable of responding to the new challenges posed by innovations in the biomedical sciences.
It is fundamental to pinpoint the mandatory competencies that the EC biostatistician should have, in order to guarantee standardization, fairness, and rigor in protocol evaluation, ultimately increasing the level of competence as new challenges and new study designs arise [42][43][44].
Professional certification through organizations that establish credible and robust certification systems, incorporating requirements for the continuance of certification (i.e., to ensure that a certificate holder continues to learn and stay up to date in the practice field), might accomplish this task. Nevertheless, the need for professional certification of biostatisticians is still a matter of debate in the Italian scientific community.
The type of institution in which the observed ECs operate was classified as: LHA = local health authority; Hospital = local hospital; IRCCS = public or local private hospital with a research mission acknowledged by the Italian Ministry of Health; University = teaching hospital of a state or private university.
TABLE 1 .
Distribution of PhD or post-graduate specialty as highest academic degree, obtained by the EC biostatistician by type of institution in which the EC operates.
TABLE 2 .
Practical knowledge in data analysis by type of data and geographical location of the EC (15 missing answers).
TABLE 3 .
Frequency of ranks attributed by the EC biostatistician to some aspects in the evaluation of a protocol (panel A) and in a protocol rejection (panel B) or suspension (panel C). (1 = maximum relevance, 10 = minimum relevance). We observed that the role of the biostatistician in the Italian ECs is multifaceted. Only a few of them have a Ph.D. degree, while most have postgraduate education in medical specialties. Also, it is quite remarkable that other EC members do not hold the contrary opinion of the biostatistician in sufficiently high regard.
"year": 2022,
"sha1": "24255483723137ee661e34ee026dda216f754d61",
"oa_license": "CCBY",
"oa_url": "https://riviste.unimi.it/index.php/ebph/article/download/17377/15298",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "de07a285988ec17e118806af992813b7cada43e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265319339 | pes2o/s2orc | v3-fos-license | Strategic dependence on external funding in Finnish higher education
Abstract Universities are facing increasing external pressures to compete for external funding and to develop mixed economies comprising both state budget funding and external sources. This study aims to provide empirical and theoretical insights by utilising the Resource Dependence Theory and the Ecology of Games Metaphor to examine how Finnish universities articulate their goals for external funding in their institutional strategies. The findings indicate that universities’ central strategic goals and interests in this specific domain of funding align with their aspirations for top-tier research university status and how to advance it. Rather than choosing between different funding streams, universities are adapting to the conditions of their funding environments. The strategies also reveal the presence of contradictory goals, where potential positive externalities serve as justifications for these contradictions. The strategies do not necessarily reveal unique or innovative choices made by the universities, but rather express their future institutional image as being more driven by external funding and research. These strategic goals can be seen as responses to critical dependence on academics and academic research, and as heuristic tools of the university management arena in navigating uncertain and dynamic university environments.
Introduction
A large body of higher education research examines policies that consistently highlight the significance of and need for external funding. In today's academic landscape, the scientific reputation and prestige of universities are increasingly established through their participation in competitive external research funding schemes and research networks (Auranen & Nieminen, 2010; Brankovic, 2018; Frølich et al., 2017; Hicks, 2012; Muscio et al., 2013; Musselin, 2018; Pucciarelli & Kaplan, 2016). The availability of and access to external research funding has become strategically critical for universities (Hicks, 2012; Parker, 2013; Sharrock, 2012; Stachowiak-Kudla & Kudla, 2017; Young et al., 2017). In these contexts, performing well on governmental core funding alone does little to build the kind of reputation and prestige that universities can, for example, communicate to potential new external research partners.
The pursuit of external funding opportunities and the preparation of funding applications require a significant amount of time and human resources, thereby influencing how research is understood and conducted.The impact of external funding on science and universities has been extensively discussed in the literature (e.g., Bolli et al., 2016;Chubb & Watermeyer, 2017;Franssen et al., 2018;Lillis & Lynch, 2014;Musselin, 2018;Young et al., 2017).In addition to various other actors, higher education institutions themselves are now taking proactive measures by initiating strategies and policies to increase external funding in order to enhance institutional competitiveness (Musselin, 2018;Parker, 2013;Pucciarelli & Kaplan, 2016;Stachowiak-Kudla & Kudla, 2017;Teixeira & Koryakina, 2013).Furthermore, universities are driven to compete for funding and prestige due to a combination of global pressures and policies, national policies, and funding incentives (Auranen & Nieminen, 2010;Franssen et al., 2018;Gunn & Mintrom, 2016;Hicks, 2012;Musselin, 2018).
Empirical research in the field of higher-education funding has primarily focused on performance-based funding and its implications for both institutions and individual academics (Auranen & Nieminen, 2010, Cattaneo et al, 2016;Hicks, 2012;Rabovsky, 2014;Shin, 2010).In response to these emerging pressures for financial diversification, as well as external and internal factors, universities are increasingly expected to adopt management practices akin to corporate-style management (Frølich et al., 2014;Luoma et al., 2016;Parker, 2013;Sharrock, 2012;Whitley, 2012;Whitley et al., 2018).
Universities in several countries, including Finland, have embraced the movement towards developing institutional strategies. A significant turning point for Finnish universities occurred with the legislative reform in 2009, which granted them financial and legal independence. As a result, the Universities Act (2009) now requires all universities to have an institutional strategy. However, the content of Finnish university strategies has received relatively little attention in research (Luoma et al., 2016), especially in comparison to the international research on university strategy perspectives (Frølich et al., 2014; Frølich et al., 2017; Fumasoli & Huisman, 2013; Fumasoli et al., 2020; Hall & Lulich, 2021; Larsen, 2020). Research on university strategies, both in the Finnish context and internationally, has not extensively explored the role of external funding in shaping and influencing the strategic decisions and actions of universities. Given the importance of understanding the complex relationship between revenues and prestige (Bolli et al., 2016; Miotto et al., 2020) and the limited research on university strategies, particularly from the perspective of external funding, this article aims to analyse university strategies within the context of Finnish universities from a strategy content perspective.
According to Luoma et al. (2016) and also Hall and Lulich (2021), institutional strategies can be seen as valid reflections of universities' strategic interests, choices, and actions within their dynamic governance environments.While strategies themselves do not directly determine university actions, the generic strategic management literature assumes that strategies serve as tools to promote operational excellence (Lillis & Lynch, 2014;Martin, 2021).However, previous research has not specifically addressed external funding as a distinct content domain within university strategies.External funding is of critical importance, as previous research indicates that "the management of financial resources now drives organizational strategy" (Parker, 2013, 7;Whitley et al., 2018).Furthermore, universities are increasingly willing to adjust their priorities based on their revenue streams (Fowles, 2014).
The purpose of this article is to shed light on how universities deal with the pressures they face and integrate external funding into their institutional strategic decision-making. For this purpose, we explore the declared strategic goals for external funding of public universities and foundation universities in Finland. The following research questions guide this article: RQ1: What external funding-related goals and interests do universities articulate and communicate in their institutional strategies?
RQ2: For what end is external funding sought?
To approach the research questions, we have chosen to apply the Resource Dependence Theory (RDT; Pfeffer & Salancik, 2003) and the Ecology of Games Metaphor (EGM) (Firestone, 1989; Lubell, 2013; Nisar, 2015). These frameworks help to shed light on and conceptualise external funding-related strategic agendas (what) and the rationale behind them (reasons and justifications) as communicated in the strategy documents. The study aims to provide valuable insights and contributions to policymakers, university leaders and managers, and higher education researchers. The research findings may offer guidance on how to align external funding goals with broader institutional strategies, helping universities make informed decisions about resource management.
After the introduction, this article briefly reviews the literature on policies related to external funding and then describes how the external funding of Finnish universities has developed since the autonomy reform (2009). From there, the focus turns to the theoretical approach of the RDT and the EGM. It then covers the data, its analysis, and the findings. Finally, it offers the main conclusions and discussion.
A brief overview of policies to enhance external funding
The literature extensively addresses the shift towards market-driven governance in higher education, with various scholars contributing to this topic (Auranen & Nieminen, 2010; Clark, 2001; Hicks, 2012; Nisar, 2015; Parker, 2012, 2013; Shin et al., 2022; Teixeira & Koryakina, 2013). Pressure to maximise the volume of external funding characterises the policy environments of universities, as they strive to balance fragmented funding structures with decreasing governmental budget funding. Contract funding and project funding are competition-based funding mechanisms (Shin et al., 2022) that represent third-stream funding sources (Clark, 2001); in this study, they are referred to collectively as external funding.
Efforts to diversify revenue structures are underpinned by policies aimed at enhancing competitiveness, achieving institutional financial autonomy, and managing governmental budget cuts in core funding (Bennetot et al., 2015;Teixeira & Koryakina, 2013).The financial autonomy of universities also deals with the question of to what extent universities actually have the financial power to accept or reject different funding streams (Stachowiak-Kudla & Kudla, 2017).
The apparent consequence of such external funding policies is to reinforce mixed economies of universities (Sharrock, 2012).The dynamics of mixed economies (a mix of core funding and external funding) highlight both financial objectives and competition strategies as key drivers of the operations and goals of universities.Mixed economies manifest at different organisational levels by shaping the behaviour, values, standards and culture of the academic community overall (e.g.Kohtamäki, 2022;Parker, 2013;Sharrock, 2012).
While universities are cooperating and competing with each other, they also design strategies to distinguish themselves in a governance context where fierce global academic and financial competition is the reality (Bolli et al., 2016;Brankovic, 2018;Kosmützky & Krücken, 2015;Young et al., 2017).Generating external funding has become the dominant goal for academic work in such environments.Academic units are increasingly financially-driven agencies and have the critical task of acquiring and competing for various resources.Launched policies are also rhetorical expectations, and universities' internal resistance to institutional revenue diversification strategies and incentives do not always lead to successful policy expectations (Teixeira & Koryakina, 2013).However, the level of external research funding is recognised as a component of a university's overall scientific prestige and reputation (Kwiek, 2012;Nisar, 2015;Young et al., 2017).To promote a strong commitment to excellence in science and its centrality and relevance to the diverging needs of society, academics have become more motivated to compete for funding (Raudla et al., 2015;Young et al., 2017).
Universities' diversified funding structures tend to lead to governance complexity and potentially to greater financial and reputational risks (Sharrock, 2012).External funding has both intended and unintended impacts on the strategic and financial management of universities.The most paradoxical is that "the more successful the research groups are in obtaining project-based funding from diverse sources, the more strained becomes the budget of the university as a whole", producing both internal financial and strategic coordination problems (Raudla et al., 2015, 958).
Despite the importance of external funding and the perceived trend towards the financialisation of university strategies (e.g., Parker, 2013, Sharrock, 2012;Whitley et al., 2018), there is limited research available on the topic of external funding of university strategies.
External funding in Finnish university context
In Finland, the public policy mechanism to enhance universities' capacities and reform their financial and strategic management was to grant universities independent player status, both financially and legally (Authors 2022; Universities Act, 2009). The university legislation (Universities Act, 2009) enabled the previous state-agency universities to operate either as foundation-run universities (referred to as foundation universities here) or as universities operating under public law (public universities). The legislation established new financial autonomy frameworks for universities, with one of the primary expectations being that universities increase the proportion of external funding in relation to their total funding. Across all universities, external research funding constituted an average of 25% of total funding (range: 1-34%) in 2011, 24% (range: 3-33%) in 2016, 27% (range: 3-37%) in 2018, and 21% (range: 3-31%) in 2021. These statistics indicate slight progress in universities' capacities to generate external funding. Furthermore, if a university was successful in securing external funding in 2011, it also tended to be successful in 2018, and vice versa (R = .689, P = .009). In the Vipunen database, there have been extensive changes to the 2019 financial data collections, which have an impact on the time series. Figure 1 presents valid and comparable financial information up until 2018.
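As a rough illustration of how the year-to-year association reported above could be computed, the following Python sketch correlates per-university external funding shares across two years. The fourteen share values and the variable names are hypothetical placeholders, not the Vipunen figures; only the Pearson correlation call reflects the kind of procedure assumed here.

# Hedged sketch: Pearson correlation between universities' external funding shares in two years.
# The share values below are invented for illustration; they are not the Vipunen data.
from scipy.stats import pearsonr

shares_2011 = [0.01, 0.19, 0.20, 0.22, 0.24, 0.25, 0.26, 0.27, 0.28, 0.30, 0.31, 0.32, 0.33, 0.34]
shares_2018 = [0.03, 0.20, 0.23, 0.24, 0.27, 0.28, 0.26, 0.30, 0.31, 0.33, 0.34, 0.35, 0.36, 0.37]

r, p = pearsonr(shares_2011, shares_2018)  # correlation coefficient and two-sided p-value
print(f"R = {r:.3f}, P = {p:.3f}")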
Consistent with other European countries, there are indications of external funding concentration among a smaller number of universities, creating categories of successful and less successful institutions (Bennetot et al., 2015;Kwiek, 2012).In 2018, the range of external funding for foundation universities was 29%-37%, while for public universities it was 3%-33%.The University of Arts, one of the public universities, had the smallest share of external funding (3%) due to its profile as an art university, relying primarily on state funding.Among the public universities, three stood out as notable players in securing external funding.
In 2020, the share of external funding of the university turnover was an average 30% in foundation universities and 24% in public universities (Vipunen, 2023).
Finnish universities are dependent on performance-based funding from the state. Dependence on performance-based funding refers to the extent to which a university relies on performance-based funding as a key driver and determinant of its strategic decisions and actions. On average, 58% of universities' revenues come from performance-based funding. One element of this funding is the volume of external funding; in the current performance-based funding formula, its weight is 12%.
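To make the scale of this incentive concrete, the short sketch below applies the weights mentioned above to an invented revenue figure. It is a simplified reading in which the 12% weight is taken against the performance-based funding stream, and the euro amount and variable names are purely hypothetical, not drawn from any university's accounts.

# Hypothetical illustration of the funding exposure described above (not an official formula).
total_revenue = 300_000_000                           # invented annual revenue in EUR
performance_based = 0.58 * total_revenue              # share of revenue from performance-based funding
tied_to_external_funding = 0.12 * performance_based   # portion steered by the external funding indicator

print(f"Performance-based funding: {performance_based:,.0f} EUR")
print(f"Steered by the external funding indicator: {tied_to_external_funding:,.0f} EUR")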
Theoretical approach
In this study, RDT and EGM are used as guiding frameworks to explore the external funding-related strategic objectives, goals, and interests that universities articulate in their institutional strategies.RDT has been one of the leading theories in organisational studies for understanding interorganisational relationships and resource dependencies (Pfeffer & Salancik, 2003) while EGM has been applied in different policy contexts to analyse the rationales of interaction and relationships among players and a group of players (Berardo & Lubell, 2019;Long, 1958;Lubell, 2013).It is important to note that the EGM serves here as a metaphor, distinct from the assumptions found in classical game theory codifications.Metaphors are valuable tools for exploring specific aspects of a complex reality, particularly when it is challenging to grasp the entire picture (Firestone, 1989;Lubell, 2013;Weaver-Hightower, 2008).Consequently, metaphors provide relevant and useful perspectives for organisational and policy studies.
Combining the RDT (Davis & Cobb, 2010; Fowles, 2014; Nienhüser, 2008; Pfeffer & Salancik, 2003) and the EGM (Berardo & Lubell, 2019; Firestone, 1989; Li, 2021; Long, 1958; Lubell, 2013; Nisar, 2015), this study enhances understanding of how universities interact with their external environment to acquire and manage resources. In line with RDT and EGM, resources and stakes can be tangible and intangible. Long (1958, 252) pointed out that "games provide the players with a set of goals that give them a sense of success or failure. They provide them determinate roles and calculable strategies and tactics." Universities need tangible resources from their environment to acquire intangible resources and vice versa.
In the current competitive funding environment, the ability to attract external funding and the interest in doing so are considered critical for universities striving to establish themselves as competitive institutions.It is acknowledged that external funding provide significant positive externalities, including prestige, visibility, and a greater capacity to attract additional external funding, within the higher education context (Bennetot et al., 2015;Kwiek, 2012;Musselin, 2018;Nisar, 2015;Young et al., 2017).Consequently, universities with higher levels of external funding have more at stake in terms of these externalities.
RDT emphasises that all organisations must strategically manage their relationships with external organisations to secure essential resources, growth and survival.RDT is useful to examine how university organisations reduce their vulnerability by diversifying their resource base, building strong relationships with resource providers, or developing alternative resource acquisition strategies.EGM views university organisations as competing entities in a dynamic environment where they must adapt, survive, and evolve.EGM helps to explore how organisations compete, adapt to changing environmental conditions, and develop strategies to outperform their rivals.
The above environment, characterised by resource scarcity, competition, and uncertainty, influences organisational behavior and decision-making and creates competition pressures and dependencies to universities.RDT and EGM help to shed light how organisations respond to environmental pressures, how they adapt strategies, and further explore new resource opportunities.
The policies to expand external funding and pressures to add strategic management in higher education draw attention to the content of institutional strategies.An institutional strategy encapsulates universities' intentions and aspirations within a competitive environment (Doyle & Brady, 2018;Frølich et al., 2017;Luoma et al., 2016).In this study, the specific aspect of strategic content under investigation is external funding.
The RDT and the EGM are employed in this study to uncover their potential in illuminating the content of university strategies.In this approach, university strategy is seen as a platform of goals and tactics, where various interests and motives interact.
Data collection
Our study utilises document data as the primary data source to address two research questions: 1) What external funding-related goals and interests do universities articulate and communicate in their institutional strategies?and 2) For what purposes is external funding sought?Our dataset includes the corporate-level strategies of all 14 Finnish universities, covering the period between 2015 and 2020.The number of Finnish universities was reduced to 13 in 2019 after the merger between one public university and one foundation university.As a result of the merger, a new university was established, which continued to operate as a foundation university.
The strategies from Finnish universities were gathered by accessing and downloading them from the universities' publicly available websites.However, in two cases, we requested through e-mails access to the strategy documents directly from the administration of the respective universities.This was because their strategy documents were not available in their websites.
The strategies of the universities have undergone comprehensive discussions during their preparation within the universities.Subsequently, they were further deliberated upon and approved in their respective university boards.These strategies hold official status, and their implementation responsibility lies with the University President.
Data analysis
This study analysed the university strategies using content analysis, following the approach proposed by Schreier (2012).Content analysis served as the method to systematically examine and interpret the content within the strategy texts.Specifically, this article is solely focused on the qualitative analysis of external funding-related strategic agendas, as expressed in the institutional strategies.We initiated the analysis using data-driven content analysis and subsequently transitioned to a theory-driven content analysis approach to interpret the content within the institutional strategies.This study does not include an analysis of strategy execution or other related aspects.
During the content-analysis process, the initial step involved reading and interpreting the institutional strategies, while undertaking a pre-analysis of the strategic goals.Based on this preanalysis, external funding-related strategic goals were coded, grouped, and re-grouped using Atlas Ti software to categorise their content.The research questions guided the selection of data for the analysis.
In the first-round analysis, a data-driven approach was employed, resulting in the identification of substance-related themes.Subsequently, in the second-round analysis, a theory-driven approach was adopted, utilising the RDT and the EGM to identify and analyse the content of external funding-related strategic agendas.
The above process entailed categorising universities' goals, expectations, and actions that represented the "stakes" or interests within the current or emerging dynamics involving the universities, their external funding bodies, and other stakeholders.This also extended to the interactions among the universities themselves.In the third-round analysis, the theme-based clusters were integrated with the RDT and the EGM, resulting in the identification of four distinct clusters, which are presented in the Findings section.It is important to acknowledge that certain strategic goals encompassed more than one Cluster.In other words, some objectives and initiatives adopted by universities were relevant to multiple clusters simultaneously.However, when presenting results the same strategy content is not repeated across the clusters.
Results
Alongside the strategic goals and interests, the university strategies documented concrete actions as responses to pressures from the environment. Furthermore, the strategies articulated expected benefits from external funding. Both the foundation universities and public universities (marked as F and P, respectively) articulated goals, interests, actions, and expectations that were categorised into four clusters (I-IV): Institutional player status (Cluster I); Research cooperation, partners, and networks (Cluster II); Competition for external funding and proactive organisational actions to boost external funding (Cluster III), which accounted for approximately two-thirds of the external funding-related strategic goals; and Promoting excellence of human resources (Cluster IV).
Despite the pressures for external funding, it was not featured in any distinct section of the institutional strategies.In one example, the structure of the entire strategy was divided into five parts: I. Solutions for global challenges, II.Strategic policies and goals, III.Goals for research, IV.Goals for education, and V. Goals for third mission activities.External-funding-related strategic goals and interests were integrated and documented in various parts of the universities' strategies using titles, such as "strategy enablers," "operating conditions," "solutions for global challenges," "strategic goals and tools," and "strategic actions."These titles reflected how external funding was an interactive relationship to the environment.
Strategic articulations on institutional player status
The strategic goals expressed the aim to promote institutional player status and enhance the university's global reputation and competitiveness (Table 1).The goals focused on building leadership, recognition and excellence.They shed light to university players' goals for being positioned nationally and/or internationally and their ambitions and interests concerning their statuses and profiles as internationally recognised scientific institutions.International profile and reputation statements were incorporated into all the strategies.Status and reputation goals and recognition abroad were highly desired.
The strategic goals strive for a distinguished status in relation to other university players.Scientific prestige, status, and reputation were publicly displayed statements to indicate the competence and capability of the university signalling their excellence in the eyes of potential university partners.Finnish university players sought to establish an international (and national) status as single players rather than being part or members of top university clans.
The strategies articulated statements concerning the universities' external images as top-tier research players.A foundation university presented compelling statements showcasing the rapid growth of its international profile.Being perceived as attractive by students, researchers, and various types of potential partners was among the interests and goals of universities that were seeking to extend and develop their levels of cooperation.Image attractiveness and character as a sought-after partner was also promoted globally (see Cluster II).
The statement of becoming "truly competitive and more focused" (F) indicated a strategic agenda to actively engage in the status games of world-class universities. Foundation universities showed a strong interest in, and devoted effort to, extending their international reputation. This pursuit of competitiveness represented a forward-looking approach to elevate the university's status and impact in the academic world and beyond. Universities also emphasised their key disciplines and research activities in their strategies. Some universities categorised their aspirational status and prestige using a ranking table. By doing so, they aimed to distinguish themselves and achieve higher positions in international rankings, thus enhancing their brand images. The pursuit of high rankings, particularly top positions, attracts attention and increases the visibility of these universities on the international stage.
Universities competed for reputation, rankings, and recognition within the global academic community.They aimed to establish themselves as prestigious institutions by achieving excellence in research, teaching, and innovation.The table below highlights the key emphases and includes example quotes extracted from the strategic statements aimed at promoting universities' player status.
The above strategic goals indicate a strong focus on enhancing the university's status, reputation, and international standing to become a prominent player in the global higher education landscape.
Table 1. Emphases and example quotes from strategic statements promoting universities' player status:
• Building competitiveness and excellence " . . . to build a university that is more competitive, truly excellent, genuinely creative, and multidisciplinary" (F). "The university community is being built to become globally, nationally, and regionally recognised and attractive as a key player in shaping the future." (F) "We are one of the most international universities in Finland" (P).
• Global recognition and leadership "Build excellence to position as a global leader in high-quality artistic activities" (F). "Acting as pioneer in technological development, nationally and internationally" (P). "We rank among the top 200 research universities in the world and the top 50 in strong research areas" (P). "Position among the foremost universities in the world" (P).
• Being internationally known and competitive "An internationally competitive science university" (P). "Internationally known as a multidisciplinary science university" (P).
• Improving reputation "Our goal is that our university will have an international and national image, and its reputation and attractiveness will significantly improve" (P).

Strategies for enhancing partnerships and collaboration with leading international universities, research institutes, and companies

Under this cluster the strategic goals focused on enhancing collaboration, research impact, and innovation to strengthen the university's position in the global landscape (Table 2). The strategies recognised the partners, the highest level of academic research, and the multidisciplinary research required to position institutions as research universities. Strong research areas, emerging research areas, and research profiles were articulated as expectations of such externalities.
To enhance its status and reputation, a university must be connected to other top-level universities.The strategic goals and interest in cooperation, partners and networks reflected desired interactions between the university and its partners and other key stakeholders addressing the responsiveness of universities to their environment.Selecting one's partners, prioritising key partners, and preferring international partners and company partners are examples of the strategic goals and interests that affect with whom (players) these universities favoured collaboration.Only one university reported common goal setting with its external stakeholders in its strategy.Universities sought internal research cooperation across disciplines and departments.
Foundation universities pursued selectiveness by strategically identifying and prioritising research networks that could be developed through high-impact international partnerships with leading research institutes. This emphasis on selectiveness signifies the universities' intention to focus their resources and efforts on targeted collaborations that have the potential for significant outcomes and global influence. Likewise, public universities aspired to establish new partnerships and research cooperation, actively seeking the best collaborators for their strategic areas of expertise, based on scientific merit, and without being constrained by geographical boundaries.

Table 2. Emphases and example quotes on partnerships and collaboration:
• Partnerships with industry and businesses "We are building even stronger partnerships with industry and businesses for mutual benefit" (F). "We actively participate in creating new business, companies, and jobs around our research." (P)
• Research networks with international partners "Strengthen research networks with high-impact international partnerships at leading research institutes and initiatives" (F). "We build research environments with critical mass, high ambition, and strong international networks" (P). "We select the best partners for our strategic areas of expertise based on scientific terms without geographical boundaries" (P). "University partnerships are developed on a field-specific basis and based on the university's profile. Strong partnerships between research teams and high-level universities and research institutes will be continued" (P).
• Commercialization of research findings "Improving the recognition of the value of research findings with significant commercial potential" (F). "We promote the commercialization of research results and the creation of new companies" (F). "A wide range of efforts are being made to promote the commercialization of research results in cooperation with funders and industry players" (P)
• Multidisciplinary collaboration "Build a university that is more competitive, more focused, but also more collaborative across disciplines" (F). "Concise and well-organised interaction with stakeholders and multidisciplinary research and development platforms and programs that combine various fields of science enable the integration of cutting-edge science and applied research, linking them to practical innovations at different levels." (F)
• Cultivating a culture of international collaboration "We significantly strengthen the culture of international collaboration." (F)
• Engaging in prestigious international scientific conferences and networks "Our researchers participate in the most prestigious international scientific conferences and other networks in their field" (P).
• Becoming a desired partner "The university is a desired partner that provides its expertise to public authorities and companies, generates innovations, and effectively communicates information outside the scientific community" (P).
Strategic interests were targeted to generate beneficial contributions (externalities) through research collaboration, partnerships, and networking.One key contribution was international research networks used as avenues to build international research environments.Mutual distribution of gains between a university and its partners was also given attention.Contributions were made to develop varied specific study fields or research areas (universities are multidisciplinary) rather than targeted to one or two specific fields.
Opportunities for commercialisation were expected as a potential mutual benefit of collaboration with companies.Strategies articulated expectations related to innovations and to the dissemination of innovations beyond the academic community.Benefits also occurred at the individual level when cooperating with prestigious universities.Individual-level activities were also mentioned to increase the possibilities for international cooperation, specifically with toplevel universities.
Public universities repeatedly emphasised their role in searching for solutions to societal challenges while foundation universities highlighted company relationships.Cooperation with companies was connected to potential commercialisation as an output of such externally funded activities.
Universities noted the patterns and nature of organisation-level relationships with partners.These relationships are expected to be reciprocal, interactive, systematic, and close.Long-term arrangements and mutually beneficial relationships provide universities with greater commitment and enhance their capacities to tackle the great global challenges together.Universities seek both public and private partners for joint interests and collaboration.European research funding was mentioned as an enabler toward having closer and increased cooperation with international partners.
The strategies were status-construction driven and used characterisations like cutting-edge research, first-class science, and high-impact partners. Research capacity, competitiveness, and status were both outputs and expected benefits from connections with high-status organisations. Table 2 presents the key emphases and includes example quotes extracted from the strategic statements aimed at promoting partnerships and collaboration.
To sum up, Cluster II focused on cooperation, partnerships, and network-building with the aim of extending research cooperation, strengthening internationalisation, reputations and status while nurturing research capacity and bolstering status and competitiveness.These collaborations can be viewed as strategic moves within the larger game, aimed at enhancing research capabilities, securing funding, and expanding universities' networks.
Strategic goals on competing for financial resources
Universities articulated proactive organisational actions to boost external funding as an agenda where the university is a player taking proactive actions internally (Table 3).Strategic interests, goals, and activities focused on the professionalisation of the application architecture (practices and processes) of external research funding.Universities establish institution-wide efficient support mechanisms, incentives, and central resources and services that are intended to enhance beneficial access to national and international external funding.Goals as organisational actions were set to advance and improve internal research funding application processes.
Universities use institutional strategies for internal communication and information.Increasing the share of international research funding is one of the main external funding-related goals and messages designed for internal communication.By highlighting this message internally, universities aimed to mobilise and incentivise their academic community towards attracting external funding.To specify and inform the target level, universities define external funding performance indicators in their strategies.However, only two universities used quantitative metrics for their external funding goals.Universities determined their strategic priorities with a focus on external funding to support their institutional research profile and their international external funding sources.
The national state funding model has an indicator for external research funding (9%, of which 3% is international research funding), thereby offering an incentive for universities to compete for external research funding.In their strategies, universities conveyed direct messages to all researchers, encouraging them to proactively seek and apply for funding, with a specific emphasis on acquiring new funds from international sources.
Internal organisational actions were input goals that were seeking to add to the total volume of external funding at the university level.For example, funding was intended to "remain high" (F), "increase" (P), be "significant" (P), and "double by 2020" (P).
Table 3. Emphases and example quotes on competing for financial resources:
• Active project funding applications "Researchers at different stages of their careers will be encouraged to apply more actively for project funding" (F). "All members of the university community must strive to impact university funding" (P).
• Professional funding application architecture "We invest in think tanks and other project development and improve our funding application support processes" (P)." [arranging] supporting services, specifically to obtain international funding" (P)."Increasing the amount of competitive external funding and improving the efficiency of local research services" (P).
• Securing financial sustainability and diversifying funding "Secure financial sustainability by diversifying our funding base, promoting public funding that rewards excellence and impact" (F)."Secure a solid financial position through diverse funding streams" (P)."We see our financial prerequisites as a whole consisting of national and international research and education funding through the Ministry of Education and Culture, the Academy of Finland, Tekes, the EU, and companies" (P).
• Increasing external research funding "The share of external and competitive research funding in total funding will increase" (P).
• Breaking traditional boundaries and gaining a competitive advantage "We create a structure, infrastructure, tools, and services that break traditional boundaries and provide a competitive advantage on the international level" (F)."We allocate resources to new initiatives that have the potential to become internationally renowned and attractive in the fields of research, education, and application" (F).
• Promoting international research funding "International research funding accounts for 25% of the university's external funding" (P).
Some universities in their strategies emphasised diversified funding structures and establishing the financial frameworks for an entire university, including both project-funded research and state-funded teaching activities.
Table 3 presents key emphases and includes example quotes extracted from strategic statements that address the topic of competition for financial resources.
Universities articulated the above goals with the intention of increasing research activity and research clusters by building high-level modern research environments and infrastructures.Universities set such goals as investing in attractive multidisciplinary research platforms, phenomenon-based research platforms, and specifically internationally attractive research platforms.
Strategies to promote the excellence of human resources
University strategies provided clear goals, actions, and directions for human resource (HR) policies and practices, with a focus on enhancing the excellence of staff and front-line teaching and research efforts (Table 4).The human resource-related goals and interests encompassed various aspects, including new international recruitment standards, policies aimed at increasing international recruitment, developing tenure-track systems for professors, and implementing practices to attract the best candidates for research endeavours.Specifically, the universities sought to establish new international recruitment standards to attract talents from around the world with successful funding records and high capacities to obtain new funding and cultivate research networks.In this way universities aimed to bolster its research capabilities and increase its competitiveness on the global stage.
Recruitment goals were often aligned with internationalisation interests, aiming to attract leading and well-connected researchers in their respective fields. The objective was to invest in open recruitment strategies, thereby drawing the best experts globally while also identifying promising new talents. University strategies emphasised the establishment of research teams and research platforms focused on specific key areas. To achieve these objectives, universities aimed to tailor their human resources strategies to fit their competitive funding and academic environment. They strategically recruited the best researchers, prioritising the recruitment of international researchers and the best team players. By aligning their recruitment efforts with internationalisation goals and emphasising team-building, universities sought to enhance their research capabilities, strengthen their global presence, and position themselves as leading institutions.

Table 4. Emphases and example quotes on promoting the excellence of human resources:
• Attracting internationally prominent experts and new talents "We are strengthening our competitive edge by continuing to focus our recruitment and resources on our key areas of competence" (F). "We recruit, support, and develop brave innovators of the future" (F). "In order to obtain the best experts, we invest in the open recruitment of internationally prominent experts and new talents" (F). "Our recruitment practises are efficient and enhance internationalisation" (P).
• Abilities for international funding and research "When recruiting researchers and teachers, consideration will be given to their ability to acquire new international funding, recruit the best students, and carry out internationally recognised research and high-quality teaching" (P).
• Tenure-track model for strategic recruitments "We also use the professor's tenure-track model to recruit new top researchers to strengthen our strategic research programmes" (P).
• Qualifications for professorship "In addition to scientific and pedagogical merit, professors' qualifications require aptitude, international experience, and genuine opportunities for success in competitive research funding application processes" (P).
Professors, researchers, and teachers were specified as key persons in acquiring external funding, recruiting students, and building research capacity.The orientation toward internationalisation was associated with human-resource-related capacities, such as by building capacity through international experience, by increasing the capacity to acquire new international funding, and increasing the capacity to conduct international-level research and teaching.
Table 4 addresses the emphases and shows example quotes from the strategic statements that focus on promoting the excellence of human resources.
The above indicated that universities actively competed for talented students and staff.They made efforts to attract and retain high-quality human resources.These strategic goals signify a clear emphasis on attracting top talents, fostering research excellence, and cultivating an internationally competitive and recognised personnel.
Discussion
In this section we highlight the complex and evolving nature of universities' strategic decisionmaking when navigating within and across multiple resource environments.Furthermore, we discuss the limitations of cohesive organisational behavior inside universities, the presence of interplay between institutional goals and individual goals, and the changing and organic nature of resource-related relationships and interactions that are important to take into account in strategic decision-making.
The major external funding related strategic goal (Cluster I Institutional prestige and player status) reveals universities' dependence on science and researchers and the dynamics of academic research (cf.Whitley 2000).However, this dependence is not often acknowledged when strong financial dependence on performance-based funding dominates the discussion.Finnish universities place significant importance on meeting the goal to be accounted as leading science institutions (Clusters I and II).Promoting the excellence of human resources (Cluster IV) and competition for external funding (Cluster III) become crucial means for achieving the strategic master goal, as they enhance universities' ability to support research activities, attract talented researchers, and contribute to advancing knowledge in their respective fields.While financial dependence on performance-based funding may dominate discussions, recognising the strategic dependence on external funding for scientific excellence emphasises the broader dimension of universities' reliance on resources from their environment.This strategic dependence highlights the link between external funding and the pursuit of research excellence and scientific reputation.
The strategic goals and tactics of universities (Clusters I-IV) indicate that university players are engaged in competitive games within their environment in line with the EGM.Universities' strategic goals can be understood as responses to and their management arenas' heuristic tools for both uncertain and dynamic university environments (cf.Lubell, 2013).They strive to attract and retain the resources to enhance their academic reputation and competitive standing (Clusters I-IV).However, it is the individual academics and research teams that compete for research funding and recognition (Thoenig & Paradeise, 2016;Young et al., 2017).Research funding agencies typically allocate grants and resources based on the quality of research proposals and individual researchers' track records.Competition takes place also between the institutions when universities are becoming more concerned about the status of their universities relative to other institutions (Musselin, 2018;Thoenig & Paradeise, 2016;Young et al., 2017).Universities and individual academics may perceive and face different policy directions due to variations in disciplinary fields, institutional contexts, and regional or national policies.This diversity in policy directions can create further uncertainties for universities while aligning their overall strategies with the needs and aspirations of individual academics and research teams (also Larsen, 2020;Young et al., 2017).
Additional uncertainties arise from rapidly evolving research landscapes, shifting funding priorities, and changing governmental policies (cf.Firestone 1989;Kwiek, 2012;Larsen, 2020;Lillis & Lynch, 2014;Shin et al., 2022;Young et al., 2017).The characterisation of universities as complex and loosely coupled organisations, as described by Orton and Weick (1990) Thoenig and Paradeise (2016), suggests that universities consist of diverse units and subsystems that operate relatively independently.In the context of academia, loose coupling implies that the work carried out by individual academics may not align closely or directly with the strategic goals set at the institutional level.
The aspiration of Finnish universities to achieve both institutional distinctiveness and international connectedness (Clusters I and II) reflects a desire to maintain a unique identity while actively participating in global academic networks.These strategic goals emphasise the importance of balancing local and international priorities to enhance the reputation, research collaborations, and global visibility of Finnish universities (cf.Whitley & Gläser, 2014).It also appeared that several universities preferred to simultaneously be competitive locally, regionally, nationally and internationally because competition occurs in all these domains (Cluster I).Furthermore, the preference for not excluding selections or choices between different funding sources, as noted in the literature (Raudla et al., 2015;Stachowiak-Kudla & Kudla, 2017), suggests that universities seek to diversify their funding portfolio (Cluster III).
The above considerations reflect the complexities in strategic decision-making that universities, including Finnish universities, encounter.Universities also articulated goals that may seem contradictory or at least challenging to reconcile.These goals emerge from different strategic domains or "games" in which universities participate, such as competition for research funding, internationalisation, community engagement, and societal impact.Each game has its own set of rules, dynamics, and expected outcomes, and universities must navigate these interconnected games while considering their individual objectives and external pressures (cf.Crozier & Friedberg, 1980.).The interconnections between these games are often unpredictable, challenging universities' assumptions about their strategies and the expected outcomes (cf.Nisar, 2015.).The inter-connections between games are much more unpredictable than Finnish university strategies linearly assumed (cf.Nisar, 2015;Firestone 1989).Each game is influenced by multiple sources, and the flows of resources and influences into any given game come from various directions.This viewpoint aligns well with the external funding environments of universities, where they must navigate multiple funding sources, stakeholder expectations, policy changes, and societal needs.The works by Nisar (2015) and Firestone (1989) explore these dynamics further, emphasising the organic emergence of games, the limitations of individual perspectives, and the unpredictable nature of interconnections.
Institutional strategy can be seen as a manifestation of the interaction between strategic management rhetoric and public policies. The works of Frølich et al. (2017), Hall and Lulich (2021), Fumasoli et al. (2020), and Nisar (2015) likely explore this relationship further, highlighting how external factors and policy environments shape institutional strategies. Financial autonomy policy reforms often focus on empowering universities as organisational agencies. However, the ability of universities to compete and function as cohesive organisations is limited, as noted by Firestone (1989), Nisar (2015), and Whitley and Gläser (2014). The dynamics within universities, influenced by internal games and individual actors, affect how institutional goals are pursued and realised. While external funding games can influence academic behavior, as mentioned by Chubb and Watermeyer (2017), individual actors within the university also have their own motivations, goals, and games that shape their behavior and decision-making. Interactions within games lack stable structures, and the collection of games is constantly changing. This means that centrally guided structures from the top or outside the organisation are often absent (Crozier & Friedberg, 1980; Nisar, 2015). This is particularly true of external funding games, where external funding bodies, policy makers, universities, academic departments, and individual academics all play a crucial role.
Conclusions and suggestions
Universities face increasing uncertainty and pressure over how to deal with changing competitive environments. This study analysed what external funding-related goals and interests universities articulate and communicate in their institutional strategies and for what end external funding is sought. We raised these two questions and applied RDT and EGM as theoretical approaches to enhance understanding of how universities interact with their external environment to acquire and manage resources. Finnish universities face competition not only from local or regional higher education institutions but also from universities around the world. The RDT and the EGM offer valuable perspectives on how universities interact with their environment and the challenges they face. RDT and EGM revealed that universities' strategy content reflected multiple goals, actions and strategies, as well as potential beneficial financial externalities and research university externalities (cf. Hall & Lulich, 2021; Lillis & Lynch, 2014; Luoma et al., 2016). A top-tier research university status (Cluster I) was expected to contribute benefits such as institutional academic excellence, new high-level external partnerships, and more external funding. Universities' cooperation, partners, and networking (Cluster II) focused on interaction with leading international research players to gain benefits from long-term international cooperation (stronger research capacity and better potential to win funding from varied instruments that require partnerships). University players' internal actions to enhance the internal capacity of their funding application architecture (Cluster III) indicated that universities have launched more proactive external funding tactics. Human resources (Cluster IV) focused on the recruitment of top-level, productive, externally engaged international researchers. Below, we formulate reflections linking universities' strategic agendas with the RDT and with the EGM as a metaphor.
(1) External funding goals reflected the public funding policies, strategic management rhetoric, and the key features of science institutions.Universities were multi-goal setters, but they place one single goal above the other goals, which is consistent with the RDT and EGM.(cf.Berardo & Lubell, 2019, Firestone 1989;Lubell, 2013;Nisar, 2015;Pfeffer and Salancik, 2003).
Universities aspired to grow their institutional status towards the prestigious top science institutions.This acknowledged the importance and dependence on science.It can also be seen as a strategy to decrease uncertainty and guarantee the continuity.The top status was expected to boost academic mastery performance.Universities used the rhetoric and wording of strategic management related to operational excellence (potential externality) for gathering external stakeholders (Brankovic, 2018;Hall & Lulich, 2021;Martin, 2021;Morphew, Fumasoli and Stensaker 2018).Behind the status of a top research university was seen potential for more resources and autonomy.Compared to public universities, Finnish foundation universities have shown relatively higher success in securing external research funding (see Section Background).These foundation universities often excel in technical study fields, which are known for generating significant external funding.Access to external funding grants these universities different funding environments and greater financial flexibility.Furthermore, it has been observed that universities with a track record of successful external funding tend to attract more funding opportunities (Auranen & Nieminen, 2010;Brankovic, 2018;Hicks, 2012;Parker, 2013;Shin et al., 2022).
(2) Strategies articulated how to be effective within the master goal. Prestige and status were aspired to as a single institution, apart from and without cooperating with other Finnish universities, whereas leading international research universities were preferred as partners. In a couple of universities, the university ranking games affected goal setting, which came out as position-seeking and the desire to be noted as world-class research universities. Competing for ranking status (a ranking game) was visible only in a few strategies. In the Shanghai ranking list, one Finnish university (public) was among the top 100.
(3) External funding-related tactics suggest that competitiveness associated with top research university status stimulated responsiveness through the competitive tactical strategy content.As Long (1958) pointed out, games give goals to players.University strategies did not always articulate mechanisms for achieving their priorities (Hall & Lulich, 2021), but RDT and EGM reveal that Finnish university strategies did reflect strategies and tactical games.Universities promoted research partner games that focused on acquiring international prestigious university partners, funding competition games to provide professionalised funding application architecture, and the HR game that sought recruitment of top-level human resources (cf.Brankovic, 2018;Hicks, 2012).The HR game was a signal of what type of academics universities preferred to recruit, how they are expected to behave, and how they are promoted (Chubb & Watermeyer, 2017).Tactics were considered relative to the universities' external and internal environments (cf.Fumasoli et al., 2020;Larsen, 2020;Lillis & Lynch, 2014).Interaction and exchanges occurred among and between various players shaping uncertainty and its dynamic (cf.Firestone 1989;Lubell, 2013;Nisar, 2015).
(4) In line with RDT and EGM, the strategies reflected what behaviour or favourable set of conditions (inputs) is required in one competition to gain benefits in another competition (cf. Firestone 1989; Larsen 2020; Lubell, 2013; Young et al., 2017). Strategies functioned as goal platforms that linked external funding to perceived university environments through sets of goals, interests, actions, and solutions (cf. Larsen, 2020; Lillis & Lynch, 2014; Parker, 2013). Goal-setting revealed that universities articulated contradictory goals. Strategic external funding goals culminated in positioning and in aims to guarantee access to resources. As found in previous research (Luoma et al., 2016), Finnish university strategies do not fundamentally differ from one another and lack new strategic choices (cf. Hall & Lulich, 2021; Lillis & Lynch, 2014). Universities can articulate similar external funding agendas, but they are not identical financial management or strategic management players (Authors 2022). Altogether, university strategies indicated that universities were actively engaged in resource acquisition, adapted to resource dependencies on their environment, and participated in competitive dynamics to thrive and excel in the academic ecosystem. The performance funding indicator of the state funding model created a strong incentive for Finnish universities and had a significant impact on shaping their strategies (cf. Bolli et al., 2016; Fowles, 2014; Shin et al., 2022; Young et al., 2017). While the performance funding indicator provided an incentive for universities to meet certain targets, their strategies also indicated that scientific research and the work of researchers were essential for their institutional reputation, academic standing, and ability to attract funding and resources.
(5) This study suggests not considering the university, its key functions, and its strategic goals and decision-making in isolation from their other key realities. Rather, actors within the management arena should take a holistic approach and recognise strategically significant dependencies, which means recognising the scientific and social contexts within which universities operate and on which they are dependent.
This study did not analyse individual universities' game behaviour or the contextual information or history of Finnish universities.University strategies may be written primarily for their main external stakeholders, and strategies may mirror their external financial accountability and performance accountability and financial resource dependencies rather than actual strategic goals (Luoma et al., 2016;Parker, 2013).Strategy content can also justify actions and solutions already taken (Fumasoli et al., 2019;Hall & Lulich, 2021;Parker, 2013).
To obtain a more comprehensive understanding of the phenomenon of post-reform external funding from the perspective of strategic management teams and individual academic players, additional qualitative studies and in-depth interviews are required.These measures will enable a deeper analysis and capture a more nuanced picture of the subject in individual universities.
Figure 1. Shares of external funding relative to total university revenue in 2011, 2016 and 2018 (Vipunen, 2023). | 2023-11-22T16:29:16.151Z | 2023-11-18T00:00:00.000 | {
"year": 2023,
"sha1": "00376acd7ffca67dd9a5439c4b388d7466648906",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/2331186X.2023.2282816?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "79ceb84987087894c3a2714f0f99fd4326018699",
"s2fieldsofstudy": [
"Education",
"Economics"
],
"extfieldsofstudy": []
} |
210294749 | pes2o/s2orc | v3-fos-license | Influence of fire position on smoke movement in an inner corridor building with multiple openings
The influence of fire position on the neutral plane height was analyzed in order to study smoke movement. An inner corridor building with multiple openings was modelled with computational fluid dynamics (CFD) software, and the airflow velocity and pressure distribution under different fire positions were analyzed. The results show that the influence of fire position on the height of the neutral plane can be divided into two phases, and the base height (Hb) is put forward as the critical value separating the two phases. When the height of fire (hf) is lower than Hb, namely hf < Hb, the neutral plane height increases only slightly as hf increases. However, when hf > Hb, the neutral plane height increases significantly as the fire height increases. Additionally, in this condition, the neutral plane will be on the same level as the fire source.
Introduction
With the rapid development of urbanization, many high-rise buildings have been constructed all over the world. According to the Council on Tall Buildings and Urban Habitat (CTBUH) [1], 1502 buildings over 200 meters have been constructed in the world to date, and that number has almost tripled over the past 10 years. At the same time, 408 buildings of 200 m or more are under construction. Although the development of high-rise buildings meets the needs of urbanization, it increases the difficulty of fire control. In recent years, fire accidents have broken out frequently, bringing great casualties and economic losses to society. Therefore, the fire safety of high-rise buildings has received more and more attention.
Statistics show that smoke and toxic gases are the most fatal hazard to people in fires; almost 85% of the people killed in building fires are killed by toxic smoke [2]. Therefore, the movement mechanism of fire smoke in high-rise buildings has drawn increasing attention. Almost all high-rise buildings have vertical shafts, such as stairwells and elevator shafts. When smoke spreads into these vertical shafts, the stack effect forms. At this time, a horizontal plane appears where the pressure inside equals the pressure outside, which is called the neutral plane [3]. Below the neutral plane, the pressure inside the building is lower, which leads air to flow into the building through the openings in that region. Conversely, above the neutral plane, the pressure inside is higher, and the smoke flows out. Therefore, the position of the neutral plane is a significant parameter for the area affected by toxic smoke. A number of studies have focused on the calculation and prediction of the neutral plane position in buildings over the last decades. Klote [4] developed a calculation model for estimating the height of the neutral plane based on the assumption that the smoke temperature in the vertical shaft is uniform. Zhang et al. [5] proposed an improved calculation model in which the shaft space was divided into two zones, a fire zone and an inner space, with the temperature assumed uniform in each zone. The prediction obtained by this method was found to be more accurate than Klote's. Xu et al. [6] proposed an improved continuum model based on the assumption that the shaft space is divided into multiple zones; numerical simulations and experiments were carried out to validate the models. Mao et al. [7] developed a new model to calculate the neutral plane height which considered the smoke temperature and the opening sizes, and its predictions were more accurate than those of the other models when compared with experiments.
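As a rough illustration of how such uniform-temperature stack-effect models locate the neutral plane, the Python sketch below balances orifice inflow and outflow over a set of equal openings. It is a minimal sketch only: the discharge coefficient, the inside and outside temperatures and the opening layout are assumed for illustration and are not taken from refs [4]-[7].

import math

G = 9.81            # gravitational acceleration, m/s^2
RHO_FACTOR = 353.0  # rho ~= 353 / T  kg/m^3 for air near atmospheric pressure
CD = 0.65           # discharge coefficient for the openings (assumed)

def neutral_plane_height(openings, t_inside=320.0, t_outside=293.0):
    """openings: list of (height_m, area_m2); temperatures in kelvin."""
    rho_in = RHO_FACTOR / t_inside
    rho_out = RHO_FACTOR / t_outside

    def net_mass_flow(zn):
        # Inflow (positive) through openings below zn, outflow (negative) above it.
        total = 0.0
        for z, area in openings:
            dp = (rho_out - rho_in) * G * (zn - z)  # outside minus inside pressure
            if dp > 0.0:
                total += CD * area * math.sqrt(2.0 * dp * rho_out)
            elif dp < 0.0:
                total -= CD * area * math.sqrt(2.0 * -dp * rho_in)
        return total

    lo = min(z for z, _ in openings)
    hi = max(z for z, _ in openings)
    for _ in range(60):          # bisection: net inflow grows as the guess rises
        mid = 0.5 * (lo + hi)
        if net_mass_flow(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# One 1 m^2 opening per floor at mid-opening height (nine 3.0 m floors, assumed)
openings = [(3.0 * floor + 1.5, 1.0) for floor in range(9)]
print(round(neutral_plane_height(openings), 1), "m")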
Many different models for calculating the height of the neutral plane in buildings have been proposed in previous studies. However, in these models the fire source is assumed to be on the first floor of the building. Few studies have focused on the influence of fire position on the neutral plane height. In this paper, a set of numerical simulations was conducted to study the influence of fire location on the height of the neutral plane in an inner corridor building with multiple openings.
CFD simulations
In this study, the neutral plane height was analyzed based on a series of CFD simulations using the FDS software, which was released by the National Institute of Standards and Technology (NIST). In these simulations, different fire locations are considered. The Navier-Stokes equations for fire-driven fluid flow are solved by Large Eddy Simulation (LES), which is second-order accurate with respect to space and time differences. The governing equations for smoke flow in a fire are the conservation laws of mass, momentum and energy. More details on the LES can be found in the references [8].
Model configuration
Fig 1 presents the actual inner corridor building with multiple openings and the model configuration constructed in FDS. The dimension of the modeled building section is 18.2m×3.6m×27.2m (L×W×H). Each floor is 3.0 m high. Since each room door is assumed to be closed in order to prevent toxic smoke from flowing into the rooms, the rooms connected to the corridor can be ignored. Each floor has an opening connected to the outside with an area of 1.0 m×1.0 m. The distance between the opening and the square fire source is 6.5 m, and the side length of the fire source is 1 m. According to the Shanghai engineering construction standard (Technical specification for smoke control), the heat release rate (HRR) in an actual building fire is 0.5 MW-3.0 MW. Considering the most dangerous condition, the HRR of the fire is set to 3 MW in all CFD simulations. In order to analyze the influence of fire position on the neutral plane height, the fire is placed on the first floor, the second floor, the third floor, the seventh floor and the eighth floor, respectively.
Pr and Sc, together with C S, are the most important parameters in the simulation of fire-induced smoke transport, especially in the prediction of smoke temperature. According to the study by Zhang et al. [9], in the FDS simulations the values of Pr, Sc and C S are set to 0.5, 0.5 and 0.18, respectively.
The other details on the simulation settings are summarized in Table 1.
Model Validation
To verify that the parameter values in the simulations were set correctly, the simulation results were compared with the experimental results of Luo et al. [10], in which the corridor sectional dimensions are equivalent to those of the simulated model based on the similarity principle. Fig 2 presents the comparison. From Fig 2, it can be seen that the simulation results agree well with the experimental results, which indicates that the settings used in the simulations were appropriate.
Velocity distribution in inner corridors
When smoke spreads into the vertical shafts, the neutral plane forms and its height gradually stabilizes. As noted above, outside air flows into the building through the openings below the neutral plane, and the smoke inside the building flows out above the neutral plane. At the neutral plane, the pressure inside equals the pressure outside, so the airflow velocity is approximately zero. Therefore, the neutral plane position can be roughly ascertained from the velocity distribution in the building.
Taking the X-axis as the positive direction, the airflow velocity curves measured at the middle of each opening for different fire positions are displayed in Fig 3. It can be seen from Fig 3(a) that when the fire is on floor 1, the neutral plane is on floor 5 (hereafter, the height of the neutral plane when the fire is on floor 1 is called the base height, H b). When the fire height (h f) corresponds to floor 2 or floor 3, the neutral plane height increases as the fire height increases, but the increase is not significant. From the position of the zero-velocity line (shown in Fig 3(b) and 3(c)), the neutral plane height is not obviously increased and it is still on floor 5. However, as the fire position continues to rise, when the fire is on floor 7 or floor 8 the neutral plane height increases significantly. It can be seen from Fig 3(d) and (e) that when the fire position moves from floor 7 to floor 8, the neutral plane position increases significantly.
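A simple way to automate this reading of Fig 3 is sketched below: given the time-averaged velocity at the middle of each floor opening (positive along +X, i.e. inflow), the neutral plane is placed at the first floor whose opening velocity is approximately zero or reversed. The velocities used here are invented placeholders, not values extracted from the simulations.

FLOOR_HEIGHT = 3.0   # m, as in the model configuration

# floor number -> mean velocity (m/s) at the middle of that floor's opening
# (placeholder values; inflow is positive along +X as in Fig 3)
opening_velocity = {1: 1.8, 2: 1.5, 3: 1.1, 4: 0.6, 5: 0.1,
                    6: -0.4, 7: -0.9, 8: -1.3, 9: -1.6}

def neutral_plane_floor(velocities, tolerance=0.1):
    """Return the lowest floor whose opening velocity is near zero or negative."""
    for floor in sorted(velocities):
        if velocities[floor] <= tolerance:
            return floor
    return max(velocities)  # neutral plane lies above the highest opening

floor = neutral_plane_floor(opening_velocity)
height = (floor - 1) * FLOOR_HEIGHT + 0.5 * FLOOR_HEIGHT
print("neutral plane near floor", floor, "about", height, "m above ground")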
The above results suggest that the influence of fire position on the height of the neutral plane can be divided into two phases. When the fire is on the lower floors, the height of the neutral plane increases only slightly as the fire height increases. However, when the fire is on the higher floors, the neutral plane height increases significantly with increasing fire height.
Pressure distribution in inner corridors
In the previous section, the height of the neutral plane was roughly determined from the velocity distribution. However, this estimate is not precise, so the criterion dividing the two phases cannot be made clear and definite. As the pressure difference at the neutral plane is equal to zero, the neutral plane height can be determined by finding the zero-pressure plane. To determine the neutral plane height precisely, the pressure distribution is analyzed in this section. The red lines on the fire floor can be ignored, as they are not the focus here. As shown in Fig. 4(a), when the fire is on floor 1, the neutral plane height is about 13.6 m, which is on floor 5. With the fire position changing to floors 2 and 3 (shown in Fig 4(b) and 4(c)), the neutral plane height is about 15.0 m. The position is almost unchanged and still lies between floor 5 and floor 6. This indicates that in these cases the neutral plane height does not increase significantly as the fire height increases. However, when the fire is on floors 7 and 8, the neutral plane heights are 19.4 m and 22.4 m, respectively, which shows that the neutral plane height increases significantly in these cases. In order to observe the change of neutral plane height accurately under the different cases, the exact neutral plane heights are shown in Fig 5. From Fig. 5, it can be found that when the height of fire (h f) is lower than the base height (H b), the neutral plane height increases only slightly as h f increases. However, when h f is higher than H b, the neutral plane height increases significantly with increasing h f. Therefore, H b can serve as the critical value for analyzing the influence of fire position on the neutral plane height. Additionally, it can be observed that when h f is higher than H b, the neutral plane will be on the same level as the fire source. This conclusion can be used to roughly determine the neutral plane height when the fire position is higher than the base height.
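The zero-pressure height itself can be obtained from the simulated vertical profile of the inside-minus-outside pressure difference by linear interpolation between the two samples that bracket the sign change, as in the short sketch below. The sampled heights and pressures are illustrative only, not the values plotted in Fig 4.

def zero_crossing_height(heights, dp):
    """heights, dp: equal-length sequences; dp = inside minus outside pressure (Pa)."""
    for i in range(len(dp) - 1):
        z0, z1 = heights[i], heights[i + 1]
        p0, p1 = dp[i], dp[i + 1]
        if p0 == 0.0:
            return z0
        if p0 * p1 < 0.0:  # sign change: interpolate the zero-pressure height
            return z0 + (z1 - z0) * (-p0) / (p1 - p0)
    return None  # no crossing within the sampled range

heights = [2.0, 6.0, 10.0, 14.0, 18.0, 22.0, 26.0]   # m (illustrative samples)
dp      = [-6.5, -4.8, -2.9, -0.7, 1.6, 3.9, 6.2]    # Pa (illustrative values)
print("neutral plane at about", round(zero_crossing_height(heights, dp), 1), "m")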
Conclusion
In this study, a set of CFD simulations using FDS was carried out to study the influence of fire position on the neutral plane height for an inner corridor building with multiple openings. Results show that the influence of fire position on the neutral plane height can be divided into two phases, and the base height H b is the critical value separating the two phases. When the height of fire is lower than the base height, namely h f < H b, the neutral plane height does not increase significantly as the fire height increases, and its position remains almost unchanged. However, when the height of fire is higher than the base height, namely h f > H b, the neutral plane height increases significantly as the fire height increases. In this condition, the neutral plane will be on the same level as the fire source. | 2019-10-10T09:34:25.541Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "6a32ba758e0ca4131bff5225fc298e7122f56ff5",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/44/e3sconf_icaeer18_03057.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e3c6ada7bd84d2360c5b2b8ff80209a346d41fd4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
16450905 | pes2o/s2orc | v3-fos-license | Expression of HOXC8 is inversely related to the progression and metastasis of pancreatic ductal adenocarcinoma
Background: The transcription factor HOXC8 regulates many genes involved in tumour progression. This study was to investigate the role of HOXC8 in pancreatic ductal adenocarcinoma (PDAC) growth and metastasis. Methods: The Hoxc8 expression was determined in 15 PDAC cell lines and human specimens by RT–polymerase chain reaction and/or immunohistochemistry. The effects of HOXC8 silencing by RNA interference were investigated by functional tests. Results: The Hoxc8 mRNA expression in PDAC cell lines was negatively related to their growth in vivo. Except for Suit2-007 cells, only those with low Hoxc8 mRNA expression grew in nude rats. Successful down-regulation of HOXC8 expression caused increased proliferation, migration (P⩽0.05) and colony formation (P⩽0.05) in Suit2-007, Panc-1 and MIA PaCa-2 PDAC cells, respectively. The Hoxc8 mRNA levels in diseased human pancreas tissues were significantly increased over normal in PDAC and autoimmune chronic pancreatitis specimens (P<0.01, respectively), but negatively related to tumour stage (P=0.09). In primary and metastatic tumour samples, immunohistochemical staining for HOXC8 was stronger in surrounding than in neoplastic tissues. Furthermore, grading of primary carcinomas was negatively associated with HOXC8 staining (P=0.03). Liver metastases showed the lowest HOXC8 expression of all neoplastic lesions. Conclusion: These data indicate that HOXC8 expression is inversely related to PDAC progression and metastasis.
Pancreatic ductal adenocarcinoma (PDAC) is the third-most frequent neoplasia of the GI-tract and the fifth leading cause of cancer-related mortality in industrialised countries. With an overall 5-year survival rate of 1 -3% and a median survival time of 5 -6 months after diagnosis, PDAC is considered to be one of the most aggressive cancers (Jemal et al, 2005). Lack of early symptoms and reliable screening tests for early detection are main causes for the fact that over 80% of patients with PDAC are already inoperable at the time of diagnosis .
Since cancer is a genetic disease, an improved understanding of gene-expression alteration in PDAC and its distant metastasis is a reasonable way of identifying pathways, which are causal for tumour progression and metastasis. In addition, new markers for early diagnosis as well as potential targets for therapeutic intervention can be expected from that approach . Following these lines, the extra-cellular matrix proteins osteonectin (ON) and osteopontin (OPN) have been found up-regulated by gene array in PDAC. We could show that ON is markedly overexpressed in pancreatic cancer and chronic pancreatitis (CP) . Osteonectin or secreted protein acidic and rich in cysteine is a calcium-binding, anti-adhesive and bone-specific glycoprotein with high affinity to collagen and hydroxyapatite. Its expression is associated with processes such as morphogenesis, angiogenesis, cell differentiation, proliferation and migration, as well as wound healing (Brekken and Sage, 2001). In addition, the expression of ON has been related to cancer progression, an over-expression of ON was found in breast, colon and prostate cancers (Porte et al, 1995;Graham et al, 1997;Thomas et al, 2000). Remarkably, other studies assigned ON a tumour suppressor role, where its de-regulation seems to be related to tumour progression and/or poor prognosis (Podhajcer et al, 2008).
We also could show that OPN is markedly up-regulated in pancreatic cancer (Kolb et al, 2005). The secreted, adhesive noncollagenous phosphorylated calcium-binding glycoprotein OPN is associated with the progression of colon, papillary thyroid, lung, breast and prostate cancers, with its elevated expression being linked to a poor prognosis (Brown et al, 1994;Fedarko et al, 2001). Osteopontin has been found up-regulated by various genes, for example TGF-b and HOXC8 (Roy and Sen, 2005;Shen et al, 2007).
The HOXC8 belongs to the homeobox class I family. Like other members of this family, HOXC8 regulates anterior -posterior patterning and is jointly responsible for skeletal and neural development (Thickett and Morgan, 2002;Juan et al, 2006). In the mouse, HOXC8 is expressed in the neural tube and somitic mesoderm as well as in the prospective thorax (Belting et al, 1998) and is crucial for mouse skeletal and forelimb development (Le et al, 1992). The tissue-specific over-expression of HOXC8 inhibits the maturation and stimulates the proliferation of chondrocytes, resulting in cartilage defects (Yueh et al, 1998). The HOXC8 is also expressed in haematopoietic organs, fetal liver and adult bone marrow (Shimamoto et al, 1999). Hox-binding elements (ATTA) are involved in promoters of osteoblast differentiation and osteogenesis marker genes, such as osteoprotegerin, BMP-4 and ON (Wan et al, 2001;Roy and Sen, 2005). Furthermore, HOXC8 seems to have an essential role in cancer development and progression. In several cancer types, HOXC8 is down-regulated, such as in oesophageal and prostate cancers (Miller et al, 2003;Chen et al, 2005). In human prostate cancer, HOXC8 is both downregulated as well as up-regulated in association with loss of tumour differentiation. Its complex role seems to promote invasiveness, while inhibiting androgen receptor-mediated gene induction at androgen response element-regulated genes associated with differentiated function of the prostate (Axlund et al, 2010). This suggests that HOXC8 may have a role in both the acquisition of the invasive and metastatic phenotype of this malignancy as well as in inhibiting androgen responsive prostate cancer cells (Waltregny et al, 2002;Miller et al, 2003).
The aim of this study was to investigate the HOXC8 expression in PDAC and its liver metastasis in comparison with healthy and inflamed pancreatic tissues. In addition, the effect of Hoxc8 knockdown by RNA interference was investigated on cell proliferation, migration and colony formation. Finally, the relationship of HOXC8 to OPN-or ON-expression levels was to be determined in pancreatic cancer cell lines and related to their growth in vivo.
Cell culture
One rat (ASML) and 14 human pancreatic carcinoma cell lines were used for detecting the expression of HOXC8 and for transplantation into animals (see Supplementary Table). All cells were kept in log-phase and passaged 2-4 times per week depending on their growth rate, and were maintained in an incubator at 37°C in humidified air with 5% CO2.
Patients and tissue collection
Human pancreatic cancer tissue samples were obtained from 47 patients (25 women, 22 men, median age 64 years (range, 39-80 years)) and CP tissue samples from 37 patients (11 women, 26 men, median age 54 years (range, 25-73 years)) who underwent pancreatic resection at the University Hospitals of Bern (Switzerland) and Heidelberg (Germany). Normal human pancreatic tissue samples were obtained through an organ donor programme from 10 previously healthy individuals (four women, six men, 15-69 years, median age 45 years). Freshly removed tissue samples were immediately fixed in paraformaldehyde solution for 12-24 h and paraffin embedded for immunohistochemical analysis. Concomitantly, tissue samples for RNA extraction were immediately snap frozen in liquid nitrogen in the operating room and maintained at −80°C until analysis.
Immunohistochemistry
Paraffin-embedded tissue sections from 37 PDAC and 12 liver metastasis specimens were analysed as described in Supplementary methods S1.
Western blot analysis
Western blotting was performed as described previously (Zhivkova-Galunska et al, 2010). Primary antibodies for HOXC8, OPN, ON and ERK2 as well as their corresponding secondary antibodies are given in Supplementary methods S2.
Real-time light cycler quantitative polymerase chain reaction
All reagents and equipment for mRNA/cDNA preparation were purchased from Roche Applied Science (Mannheim, Germany). mRNA was prepared by automated isolation using MagNA Pure LC instrument and isolation kits I (for cells) and II (for tissue samples). cDNA was prepared using the first Strand cDNA Synthesis kit for RT -polymerase chain reaction (PCR) according to the manufacturer's instructions. Real-time PCR was performed with the Light Cycler Fast Start DNA SYBR Green kit as described previously (Guo et al, 2004) (all primers used are shown in Table 1). The number of specific (Hoxc8) transcripts was normalised to housekeeping genes (cyclophilin-B and hypoxanthine guanine phosphoribosyltransferase). All primers were obtained from Search-LC (Heidelberg, Germany).
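As an illustration of this normalisation step, the short sketch below expresses a measured Hoxc8 copy number relative to the geometric mean of the two housekeeping genes. The exact formula used in the study is not spelled out, so the helper function and the numbers are assumptions for illustration only.

from statistics import geometric_mean

def normalised_copies(target_copies, housekeeping_copies, per=1000.0):
    """Target transcripts per `per` housekeeping transcripts (illustrative scheme)."""
    return per * target_copies / geometric_mean(housekeeping_copies)

# e.g. Hoxc8 copies vs cyclophilin-B and HPRT copies from the same cDNA (made-up values)
print(normalised_copies(target_copies=42.0, housekeeping_copies=[8500.0, 910.0]))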
Hoxc8 siRNA transfection of human pancreatic cancer cells
Cells were plated overnight at a density of 250 000 cells per well in six-well plates. A total of 100 μl transfection solution containing 6 ng (final concentration 100-200 nM) of three different Hoxc8 siRNAs (siRNA oligomers used are shown in Table 1) or negative control siRNA (Invitrogen, Karlsruhe, Germany) and 15 μl transfection reagent (Invitrogen) were added to 1.9 ml medium per well. After 12 h, cells were trypsinised and used for subsequent protein or mRNA extraction, for immunoblot analysis or the in vitro assays mentioned above.
Clonogenicity assay
For determining the response of Suit2-007, Panc-1 and MIA PaCa-2 colony formation after exposure to siRNA oligonucleotides directed against Hoxc8 mRNA, the procedure previously detailed (Adwan et al, 2004) was performed. Clusters of 30 cells were counted as a colony, whereas clusters of ≥60 cells were considered as large colonies. [Table 1 footnotes: SPARC = secreted protein acidic and rich in cysteine; a, obtained from Dharmacon (Lafayette, CO, USA); b, obtained from Invitrogen.]
In vitro cell migration model
This assay was performed to investigate the effect of HOXC8 down-regulation on the migration of Suit2-007, Panc-1 and MIA PaCa-2 cells. The bottom layer in 24-well plates consisted of 50 μl FCS, which was gently over-layered with 200 μl semi-liquid RPMI medium (containing 0.4% methylcellulose and 20% FCS), resulting in the chemotaxis mixture. A period of 24 h was needed to build the chemotaxis gradient. Then, 1 × 10⁴ Suit2-007, Panc-1 or MIA PaCa-2 cells were seeded on 8 μm pore size polycarbonate membranes (Millicell; Millipore, Schwalbach, Germany), which were transferred onto the prepared wells. The next day, the cells were exposed to siRNA directed against HOXC8 for 1-3 days, and then plated onto the polycarbonate inserts. The inserts were removed from the bottom layer after 24 h of co-cultivation and transferred onto a fresh well of the same plate with a chemotaxis gradient. Cells migrating through the pores were then counted daily for 4 days by fluorescence microscopy.
Animals and husbandry
Nude rats (RNU strain) were obtained from Harlan or Charles River (Harlan comp, Borchen, Germany; Charles River, Sulzfeld, Germany) at an age of 6-8 weeks. They were housed under specific pathogen-free conditions in a mini-barrier system of the central animal facility. Autoclaved feed and water were given ad libitum to the animals, which were maintained under controlled conditions (21 ± 2°C room temperature, 60% humidity and a 12 h light-dark rhythm).
Pancreatic liver lesions in vivo
To induce liver lesions, approximately 2 × 10⁷ cells (15 different cell lines were used) were injected either intraportally via a mesocolic vein or under the liver capsule of a nude rat. In the case of a positive outcome, some tumour growth could be visually detected in the liver during re-laparotomy after a period of 7-14 days. The animals were euthanised after 4 weeks and examined a second time for the presence of liver metastases.
Statistics
The results of multiple measurements were given as mean with corresponding standard deviation. The effect of siRNA on cell proliferation, migration and colony formation was described as treated/control × 100 (T/C%). Differences between treated and control groups were assessed by the Kruskal-Wallis test, a non-parametric rank sum test. The same test was used to compare the mRNA-expression levels between normal and diseased pancreatic tissues. The χ²-test was used to examine for independent occurrence of investigated parameters in cell lines (expression of genes and growth in vivo). A P-level ≤ 0.05 was considered significant.
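A hedged illustration of these calculations with SciPy is given below; the measurements and the contingency counts are invented examples, not data from this study.

from scipy.stats import kruskal, chi2_contingency

treated = [142.0, 151.0, 160.0, 148.0]   # e.g. migrated cells after siRNA (example values)
control = [100.0, 95.0, 104.0, 99.0]     # matched control wells (example values)

tc_percent = 100.0 * (sum(treated) / len(treated)) / (sum(control) / len(control))
statistic, p_value = kruskal(treated, control)   # non-parametric rank sum test
print("T/C% =", round(tc_percent), " Kruskal-Wallis P =", round(p_value, 3))

# Chi-square test for independent occurrence of two traits in the cell lines,
# e.g. low vs high Hoxc8 expression against growth vs no growth in vivo
table = [[7, 0],   # low expression:  grew in liver, did not grow
         [1, 7]]   # high expression: grew in liver, did not grow
chi2, p_value, dof, expected = chi2_contingency(table)
print("chi-square P =", round(p_value, 3))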
Expression and localisation of HOXC8 in pancreatic tissues
Quantitative RT-PCR was used to compare the in vivo expression profile of Hoxc8 in normal and diseased pancreatic tissues. The expression of HOXC8 mRNA in all 10 donor samples was extremely low: in 6 of 10 samples it was even under the detection limit (less than one copy per ml), while in the other samples only 1-4 mRNA copies were detected. Compared with these levels, there was a slight but not significant increase in Hoxc8 mRNA expression in the 37 samples of patients with CP (2.4-fold, P = 0.09; Figure 1A), with 50% of all samples being below the detection limit. In contrast, this analysis revealed a 24-fold increase in mean Hoxc8 mRNA level in 47 PDAC samples and a corresponding 28-fold increase in 6 autoimmune CP specimens as compared with normal pancreatic tissue (P < 0.01, respectively). Remarkably, an inverse relation was found between tumour grade and expression level as shown in Figure 1B. Similarly, tumour samples from patients characterised as nodal positive (N1) harboured significantly fewer Hoxc8 mRNA copies than samples from N0 patients (Figure 1C).
To determine the cellular source and localisation of HOXC8 in pancreatic tissues, 15 CP as well as 31 primary and 12 metastatic PDAC tissues were probed for immunoreactivity. This analysis showed that staining for HOXC8 was always stronger in the surrounding than in the neoplastic tissues. Pancreatitis samples showed more intensive immunoreactivity than PDAC tissues. Staining of the tumour samples ranged from positive (n = 8) to faintly, focally positive (n = 9). No HOXC8 immunoreactivity was detected in 10 samples. Grading of primary carcinomas was negatively associated with the extent and intensity of HOXC8 staining (P = 0.03).
Furthermore, weak-to-faint HOXC8 immunoreactivity was observed in the 9 of 12 PDAC liver metastasis specimens and three were completely negative.
In contrast, in AIP and CP tissue samples, strong, diffuse HOXC8 immunoreactivity was observed in the tubular complexes, degenerating acinar cells and islands as well as in the extra-cellular matrix and nerves (Figures 2A and B).
The functional role of HOXC8 in pancreatic cancer cell lines
In vitro
To examine the functional role of HOXC8 in pancreatic cancer, 15 pancreatic cancer cell lines were analysed for the expression of Hoxc8 mRNA by quantitative RT-PCR (primers are shown in Table 1). The levels of Hoxc8 mRNA were high in DANG, Panc-1 and Suit2-007 pancreatic cancer cells, relatively high to moderate in A818-4, MIA PaCa-2, Capan 1, Patu 390, as well as SU 8686 cells, and low to very low in CFPAC, Aspc-1, Panc-89, Colo-357, ASML, S2013 and BxPc-3 cells.
Although there was no significant correlation between the expression levels of Hoxc8 mRNA and differentiation or basal growth characteristics of the cell lines in vitro, there was a noteworthy relationship regarding their growth behaviour in vivo (see below).
In vivo
In vivo, there was a significant inverse relation in 14 out of 15 cell lines (93%; P = 0.002) between the cells' ability to grow in the liver of nude rats and their Hoxc8 mRNA expression (Table 2). All seven cell lines with low to extremely low Hoxc8 mRNA expression were able to extravasate, to escape the primary immune response mediated by Kupffer cells, and to form lesions in the liver. In addition, no cell line with moderate (SU 8686) or relatively high Hoxc8 mRNA expression (A818-4, MIA PaCa-2, Capan 1 and PATU 390) developed lesions in the liver. The only exception to this finding was a cell line (Suit2-007) with a high Hoxc8 mRNA level, which was able to form liver lesions (Table 2).
Relationship of HOXC8 to OPN and ON
There was a significant inverse relationship between Hoxc8 and OPN mRNA expression in 13 of 15 cell lines (86.7%, P = 0.005), as well as a significant direct relationship between Hoxc8 and ON mRNA expression in 11 of 15 cell lines (73%, P = 0.05) (Table 2). In addition, inhibition of Hoxc8 with siRNA caused reduced expression of ON, but stimulation of OPN, as shown in Figure 3. Exposure to siRNA species directed against Hoxc8 inhibited its expression to 15%, respectively, as shown by RT-PCR and western blot (Figures 3A and B).
To further investigate a possible interdependence of Hoxc8 with OPN and ON, these two genes were investigated in parallel. Osteopontin expression was increased at mRNA (six-fold) and protein (three-fold) levels. In contrast, ON expression was downregulated by 60% at mRNA and 80% at protein levels.
Effect of HOXC8 on the growth of pancreatic cancer cells in vitro
For further investigations, three cell lines with high level of Hoxc8 mRNA (Suit2-007, Panc-1 and MIA PaCa-2 cells) were selected, one of which with the ability to form liver lesions (Suit2-007). A successful down-regulation of Hoxc8 by siRNA ( Figure 4A) increased proliferation in vitro by 51%, 60% and 78% compared with untreated controls for Suit2-007, Panc-1 and MIA PaCa-2 cells, respectively ( Figure 4A).
Colony formation
Colony formation was used for studying the effect of HOXC8 down-regulation on the ability of Suit2-007, Panc-1 and MIA PaCa-2 pancreatic cancer cells to form clusters of >30 cells. In fact, siRNA-treated cells formed 2.9-fold (Suit2-007), 1.9-fold (Panc-1) and 1.7-fold (MIA PaCa-2) more colonies than NSO-treated cells (P ≤ 0.05; see Figure 4B). As untreated cells formed more colonies than the NSO-treated cells, the difference between the former experimental group and siRNA-treated cells was only 1.5-fold.
Migration
Migration was used to further characterise the influence of HOXC8 on cellular properties related to metastasis. After knockdown of Hoxc8 by RNA interference in three cell lines (Suit2-007, Panc-1 and MIA PaCa-2), a significant increase in migration was observed (P ≤ 0.05). Compared with the respective NSO treatment, the number of migrating cells increased more than five-fold in Suit2-007 and more than three-fold in Panc-1 as well as in MIA PaCa-2 cells within the observation period (24 h, left; 48 h, middle; 72 h, right; Figure 4C).
DISCUSSION
The family of Hox genes encodes transcription factors that regulate and coordinate the expression of several genes involved in embryonic development, differentiation and malignant transformation (Cillo et al, 2001). In humans, 39 class I Hox genes have been identified and grouped into four clusters (A, B, C and D) (Cillo et al, 2001). It has been shown that Hox genes are expressed in endothelial cells and are involved in the acquisition of the angiogenic phenotype. A relation between de-regulated Hox gene expression and malignant transformation has been reported by many independent studies, not only for leukaemias but also in solid tumours such as breast, cervical, ovarian, prostate and colorectal cancers as well as in melanoma and squamous cell carcinoma. Originally, up-regulation was thought to promote malignancy, but, more recently, both oncogenic and tumour suppressor functions have been attributed to Hox genes (Abate- Shen, 2002;Hung et al, 2003;Miller et al, 2003). This study sought to identify the function of HOXC8 in PDAC and to determine whether there is an interaction between HOXC8 and the non-collagenous proteins OPN and ON. In general, the role of HOXC8 in cancer development has not yet been clearly defined.
For deciphering the function of HOXC8 in PDAC, we initially analysed the expression of Hoxc8 mRNA and protein in diseased and healthy pancreatic tissue samples, respectively. Hoxc8 showed only basal mRNA expression in adult pancreatic tissue (<5 copies per ml), but was markedly over-expressed in PDAC and AIP tissues. In comparison, its expression in CP was not significantly increased. This contradicts the increased presence of this protein in CP tissues, as assessed by immunohistochemistry. We speculate that CP tissues used for mRNA extraction may have undergone damage in vivo because of auto-digestion. This type of error can be excluded for patho-histologic examinations because the pathologist will base his assessment on intact tissues. Remarkably, an inverse relation was found between tumour grade and expression level of Hoxc8 mRNA, indicating that the loss of HOXC8 mRNA expression is related to tumour progression. In line with this, the expression of HOXC8 protein in PDAC tissues was inversely associated with both tumour grade and liver metastasis. This pictures a factor which shows low expression in normal tissue, is up-regulated in premalignant tissue and pancreatitis, but is low again in metastatic tissue, thus possibly suggesting a temporal role for HOXC8 expression in tumour progression. However, the observation that the staining intensity of HOXC8 was higher in the surrounding ECM, including fibroblasts and endothelial cells, as well as in tissue adjacent to a metastasis, than in the tumour cells themselves indicates that HOXC8 may rather have a defensive role against malignant PDAC cells.
This assumption is in line with our subsequent functional analysis showing increased proliferation, colony formation and migration of tumour cells after Hoxc8-mRNA down-regulation. Another support results from the in vivo part of this study. The expression of HOXC8 in 14 of 15 human pancreatic cancer cell lines was inversely related to their ability to grow in the liver of nude rats. Further support for the assumption that HOXC8 has a defensive role against tumour cell growth is derived from the fact that this transcription factor regulates proteins, such as OPN and ON, which are involved in cancer progression. It has been described by others and us that HOXC8 knockdown is associated with increased expression of OPN, which in turn stimulates the proliferation of cancer cells via two different pathways. First, OPN can act as a growth factor itself or can inhibit the onset of apoptosis. Furthermore, we have also shown that the inhibition of ON can stimulate the growth of cancer cells (Adwan et al, 2004;Lei et al, 2005;Zhivkova-Galunska et al, 2010).
Physiologically, OPN and ON are extra-cellular calcium-binding glycoproteins, which participate in the bone mineralisation via hydroxyapatite binding. In addition, they have functions as signalling molecules, either as cytokine (OPN) or for wound healing and angiogenesis (ON). Pathophysiologically, they share an increased expression in a series of malignant tumours, especially in those with skeletal involvement. These functions have been recently reviewed (Brown et al, 1994;Lane and Sage, 1994;Porter et al, 1995;Giachelli and Steitz, 2000;Brekken and Sage, 2001;Agrawal et al, 2002;Zhivkova-Galunska et al, 2010).
With regard to OPN, there was a significantly inverse relation between the mRNA-expression levels of Hoxc8 and OPN in 12 of 14 human pancreatic cancer cell lines investigated. Osteopontin has been associated with tumour progression in various types of cancer such as breast, colon, prostate and pancreatic carcinomas. Our functional analysis in Panc-1 and MIA PaCa-2 cells revealed an up-regulation of OPN mRNA in response to RNA interference-mediated down-regulation of Hoxc8.
In partial contrast, Suit2-007 cells expressing a genuinely high OPN level did not further increase OPN transcription upon HOXC8 silencing. Nevertheless, the functional parameters (proliferation, migration and colony formation) did increase in response to HOXC8 down-regulation. As the tumour cells grew in the liver of nude rats, we assume that HOXC8 lost its function for regulating OPN in this cell line, but not for those factors that are responsible for the aforementioned functional properties.
The direct relationship between HOXC8 and OPN was less significant than the inverse association between the former protein and ON (P = 0.002 vs P = 0.05). Nevertheless, knockdown of Hoxc8 in Suit2-007 cells by RNA interference was followed by down-regulation of ON, indicating that this order of events is still intact, other than that for OPN. Interestingly, the inverse relationship between HOXC8 and ON was less prominent at the RNA than at the protein level. This is unexpected for a transcription factor binding to the promoter region of ON (Wan et al, 2001;Roy and Sen, 2005). The findings are, however, in line with an inhibition of translation, as observed for miRNAs. Whether or not this consideration is valid should be investigated in future experiments.
Currently, there are contradicting assumptions on the role of ON in cancer development, including PDAC (Podhajcer et al, 2008). Osteonectin has been found increased in malignant tumours and was described to correlate in intestinal-type gastric cancer with local tumour growth, nodal spread and tumour stage (Franke et al, 2009). Accordingly, ON was considered a promising novel target for cancer treatment by these authors.
However, up-regulation alone is not sufficient to establish an oncogenic effect, since it could also be interpreted as a defence mechanism. This view is supported by the following findings. In primary PDAC, ON was detected in tumour cells, their adjacent ECM as well as in fibroblasts and endothelial cells. In metastatic PDAC, the strongest ON expression was detected in the surrounding stroma, whereas it remained below detection level within the metastases. This correlates well with the observation that ON has an anti-proliferative effect in vitro . In addition, ON knockout mice are distinctly more prone to enhanced growth of pancreatic tumours following both subcutaneous and orthotopic tumour cell implantation. Finally, the absence of ON in pancreatic cancer cells was reported to be due to hyper-methylation of the protein's promoter in 16 of 18 cell lines investigated (Sato et al, 2003). All these observations point to a role of ON, which suggests that it could be an anti-tumourigenic protein rather than a protein responsible for tumour progression (Zhivkova-Galunska et al, 2010). In line with this assumption, lack of ON expression in colorectal cancer was recently reported to be associated with poor prognosis (Yang et al, 2007). The direct relationship between HOXC8 and ON, in turn, is supportive of the assumption that HOXC8 participates in or regulates the antitumourigenic role of ON.
However, a classical tumour suppressor requires the continued presence of its gene product, and HOXC8 lacks this property, since in normal tissues HOXC8 is expressed at basal levels only. This transcription factor is physiologically up-regulated by signals emitted in response to changes that are not yet known. Based on the connection between HOXC8 and ON, it could be speculated that wound formation (including certain aspects of tumour formation) could be a trigger.
Another possibility could be that HOXC8 up-regulation is associated with tumour cell dissemination from the primary, a function that is dispensable at later, metastasised stages. A dual role of HOXC8 has been described in prostate carcinoma, with a repressive function on gene induction at androgen response element-regulated genes as well as a function promoting invasiveness of this tumour. This observation, however, does not seem to be paradigmatic for PDAC.
In conclusion, Hoxc8 mRNA expression in human pancreatic cancer cell lines was inversely related to their capability to grow in the liver of nude rats, and successful down-regulation of HOXC8 expression caused increased proliferation, migration and colony formation in Suit2-007, Panc-1 and MIA PaCa-2 PDAC cells, respectively. The Hoxc8 mRNA levels in diseased human pancreas tissues were significantly increased over normal in PDAC, but negatively related to tumour stage (P = 0.09). In primary and metastatic tumour samples, immunohistochemical staining for HOXC8 was always stronger in surrounding than in neoplastic tissues. Furthermore, grading of primary carcinomas was negatively associated with the extent and intensity of HOXC8 staining. Liver metastases showed the lowest HOXC8 expression of all neoplastic lesions. These data indicate that HOXC8 expression is inversely related to PDAC progression and metastases and might thus serve as a marker for PDAC progression. | 2016-05-12T22:15:10.714Z | 2011-06-28T00:00:00.000 | {
"year": 2011,
"sha1": "7abe465ae6e6871964afc151286ffffc7e0eb622",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/bjc2011217.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7abe465ae6e6871964afc151286ffffc7e0eb622",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3053514 | pes2o/s2orc | v3-fos-license | General Radiography as Clue for the Working Diagnosis : Sacrospinous Ligament Calcification Leading to Left Ureteric Calculus with Non-Functioning Kidney
General radiography leaves enough clues for the ongoing diagnostic evaluation of the patient. These important clues can save time and spare the patient other unnecessary investigations in the management of the illness. The sacrospinous ligament connects the sacrum with the pelvis and stabilizes the pelvis by providing support. This is important because it helps support the vaginal vault in cases of uterine prolapse in females. We report a 50-year-old male who came for intravenous pyelography for a left ureteric calculus and was found to have multiple other associated findings, such as osteophytosis, bilateral iliac horns and bilateral sacrospinous ligament calcifications. The clue was the calcification and hardening of the left sacrospinous ligament, which led to the formation of a left-sided ureteric calculus. This ureteric calculus caused progressive damage to the left kidney by causing gross hydrouretero-nephrosis due to complete obstruction.
Introduction
Ureteric colic is one of the medical emergencies frequently encountered in medical practice, and the cause is diagnosed after a few radiological investigations. Sometimes rare causes of stone formation can also be highlighted in routine investigations. Impingement by the sacrospinous ligament is one of those causative factors which can cause ureteric stone formation on the affected side. The sacrospinous ligament is triangular in shape, with its base attached to S2-S4 and the coccyx in the midline. It provides support to the pelvic organs. It also forms and divides the greater and lesser sciatic notches, which form the greater and lesser sciatic foramina. This ligament prevents the ilium from riding over the sacrum. The ligament comes under stress while leaning or getting up from a chair.
Case Report
A 50-year-old male presented with complaints of abdominal pain and a stiff back of two years' duration. He also complained of burning micturition. He often felt difficulty in bending, with stiffness of the back especially in the morning. These complaints used to get aggravated during winter. On examination he was an averagely built person without any relevant past history of disease such as ankylosing spondylitis, fluorosis or any endocrinological disorder. All blood parameters were within normal limits. Routine urine examinations done now and earlier had shown recurrent urinary tract infections, which were relieved with appropriate antibiotic medication. He underwent plain radiography and ultrasonography of the abdomen. Plain X-ray of the abdomen showed a radio-opaque shadow on the left side at the pelvic inlet with bilateral calcification of the sacrospinous ligaments. The lumbosacral spine revealed degenerative changes (Figure 1(a) and Figure 1(b)).
Bilateral iliac horns were also noticed on the iliac crests. Ultrasonography of the abdomen showed gross dilation of the left pelvicalyceal system, which was compromising the renal cortical thickness. The upper part of the left ureter was also dilated. Intravenous excretory urography (IVU) showed normal functioning of the right kidney, but no excretion was seen on the left side even in delayed films (Figure 2). Other biochemical parameters were normal. He was planned for retrograde endoscopic (ureteroscopic) removal of the left ureteric calculus.
Discussion
To our knowledge, no case like the present one has been reported in the literature. General radiography has its own importance as a first-tier investigation. Availability at affordable cost is another factor which plays an undisputable role. Many patients are advised a simple plain skiagram of a specific region with different views at the first stage of investigation. Radiographers play an important role while carrying out this task. Plain radiography is conducted under proper instruction from the radiologists, which helps this modality assist the diagnosis. The sacrospinous ligament is of great value for supporting the pelvic organs. Laxity of this ligament leads to a variety of symptoms, depending upon the location and the region being affected. The left sacrospinous ligament was responsible for the pathology in our case, as is evident from its shape and location in the pelvic region (Figure 3). Similar entities have been reported earlier, such as the ovarian vein syndrome, in which the left ovarian vein impinges on the left ureter and is responsible for obstructive uropathy. This impingement can be further complicated by the formation of stones [1]. The complications keep adding to the existing pathology until the obstructive causative factor is removed. The same has also been treated successfully by robotic surgery [2]. Similarly, the ureter can be pressed or displaced by the normal pelvic ligaments when these present with some abnormal pathology. Ureteric calculi are usually of renal origin and pass down into the ureter. These may cause partial obstruction in the beginning, which subsequently leads to total obstruction with superadded infection. Ureteric calculi are usually oblong. 85% of stones will pass down because of various factors and forces in the pushing mechanism. If the size of the calculus is less than 5 mm, no active treatment is required. 70% of ureteric calculi are found in the lower third of the ureter, as in our case. A ureteric calculus presents with classical colicky flank pain associated with either hematuria or infection [3]. Calcium is the main constituent in approximately 80% of ureteric calculi; other varieties are uric acid, struvite and cystine stones [4]. Stone formation takes place by two mechanisms, either by supersaturation of the urine by its constituents or by deposition on the uroepithelium [5]. Although unenhanced computerized tomography is the gold standard for diagnosis, ultrasonography is the diagnostic modality of choice in an emergency; it is also advantageous in pregnant females and children, where radiation risk is a concern [6]. The American Urological Association (AUA) and the European Association of Urology (EAU) set up guidelines in 2005 for the management of ureteric calculi according to their size and location. The following three lines of management were advocated: 1) Observation and medical therapy; 2) Shock-wave lithotripsy or ureteroscopy; 3) Open surgery, laparoscopic or percutaneous antegrade ureteroscopy. A calculus in the lower third of the ureter can be treated either by shock-wave lithotripsy or by ureteroscopy, as in our present case [7].
Conclusion
Sacrospinous ligament calcification may be an incidental finding, but other pathologies have to be ruled out when it is present. General radiography plays a great role in localizing the pathology. The role of radiographers is of great importance, as proper exposure and coverage of the region will unveil the hidden diagnosis. The outcome of surgical removal is always encouraging and relieves the symptoms and associated complications.
Figure 1 .
Figure 1. Plain abdomen radiographs. (a) Bilateral calcified sacrospinous ligaments (white horizontal arrows) with degenerative changes in the spine; (b) Magnified view of the pelvis shows the same ossified ligaments projecting obliquely (thin white arrows). A radio-opaque shadow is also seen on the left side just medial to the inferior margin of the sacroiliac joint (wide white arrow).
Figure 2 .
Figure 2. Plain and 1 hour intravenous excretory urography (IVU) abdomen radiographs. (a) Plain abdomen X-ray shows calcified bilateral sacrospinous ligaments (white arrows), a radio-opaque shadow in the pelvis and a left-sided iliac horn (vertical blue arrow); (b) 3 hour excretory urography abdomen radiograph shows a normally functioning right kidney and a non-functioning left excretory system because of a radio-opaque left ureteric calculus at the lower end (horizontal blue arrow). An iliac horn is also seen on the left iliac bone (white vertical arrow). The urinary bladder is normal in shape and outline, and the calcified ligaments are seen through it.
Figure 3 .
Figure 3. Diagrammatic representation of the ligaments in the pelvis. The sacrospinous ligament connects the ischial spine to the sacrum (orange wide arrow). The location of left ureteric calculus formation in our present case has been shown with a blue star. | 2017-10-02T12:07:21.890Z | 2016-08-02T00:00:00.000 | {
"year": 2016,
"sha1": "163470593cbbaf99ad7cda714a1d0a6b99d57da7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=70196",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "163470593cbbaf99ad7cda714a1d0a6b99d57da7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
72069918 | pes2o/s2orc | v3-fos-license | A Case with Niemann-Pick Disease and Concomitant Kartagener ’ s Syndrome
ABSTRACT
Niemann-Pick disease is a rare lipid storage disorder with autosomal recessive inheritance, which is characterised by the accumulation of sphingomyelin and other sphingolipids in macrophages. Kartagener's Syndrome is a syndrome with autosomal recessive inheritance consisting of chronic paranasal sinusitis, situs inversus, and bronchiectasis. Here we report a case having Kartagener's Syndrome with concomitant Niemann-Pick disease, as such a case has not been reported in the literature.
Introduction
Niemann-Pick disease is a rarely seen heterogeneous lipid storage disorder. It was first described by Niemann in an 18-month-old girl with hepatosplenomegaly and progressive mental and motor retardation, and Pick's later work enabled it to be differentiated from other entities involved in the disease (1). It is sub-classified according to the age of onset and central nervous system involvement. Sphingomyelinase deficiency has been demonstrated in Type A and Type B disease, while sphingomyelinase values are shown to be normal or near-normal in Type C and Type D (2). Diagnosis is established by observation of lipid-containing macrophages (sea-blue histiocytes) in bone marrow biopsy. Although hepatosplenomegaly is a frequent finding, there are case reports in the literature in which Niemann-Pick disease was identified by isolated splenomegaly.
Kartagener's Syndrome, first identified in 1933, is characterised by primary ciliary dyskinesia and complete situs inversus (3). Mucociliary clearance is impaired due to ciliary dysmotility of the airway; thus, secretions accumulate on the epithelial surface, resulting in bacterial infections (4)(5)(6). Bronchiectasis develops at a younger age due to chronic and recurrent infections. Here, we present a case with Kartagener's Syndrome that was referred to a haematologist due to splenomegaly detected during the follow-up period and diagnosed as Niemann-Pick disease by bone marrow biopsy. Such a case has not been reported in the literature.
Case Report
A 21-year-old woman was referred to the Haematology Department due to a finding of splenomegaly by another facility, where she had presented with a cough, abdominal pain, and fatigue. In her history, it was found that she had frequently had upper respiratory tract infections since infancy. It was also found that dextrocardia was detected on a chest radiograph performed 4 years previously, when she presented with recurrent sinusitis and cough. We also learned that, on the thorax CT performed after the confirmation of dextrocardia by echocardiography, there was inverse positioning of the liver and spleen and the appearance of bronchial dilatation, peribronchial thickening and consolidation favouring bronchiectasis in the lungs (Figures 1 and 2). She was diagnosed with Kartagener's Syndrome due to the presence of bronchiectasis, situs inversus, and sinusitis. There was intermittent antibiotic use, expectorant therapy, and postural drainage training in her history. She received regular vaccinations against influenza and pneumococcus.
In the physical examination, general health status was good in the conscious patient. Mild mental retardation was detected in the neurological and psychiatric evaluation. Her body temperature was measured as 36.9°C, blood pressure was 110/80 mmHg, and heart rate was 78 beats/min. On auscultation, the apex beat was heard on the right side and there were crackles in the bilateral lower zones of the lungs. On the right, the spleen was palpated 4-5 cm below the last rib. On the chest radiograph, the heart and gastric air were localised on the right side. Laboratory results showed that the leukocyte count was 7.12×10³/μl (polymorphonuclear leukocytes 57.9%; lymphocytes 33%; eosinophils 3.1%; basophils 0.4%; and monocytes 5.6%), haemoglobin was 13 g/dL, and platelets were 196×10³/μL. HBs Ag, anti-HBs, and anti-HCV were found to be negative. On abdominal ultrasound evaluation, it was found that the liver was normal in size with homogeneous parenchymal echo, and the spleen was larger than normal (165 x 85 mm in size). On the peripheral blood smear, a few atypical monocytes were detected; thus, bone marrow aspiration and biopsy were performed. On the bone marrow smear, elements from three cell lineages and sea-blue histiocyte infiltration were detected (Figure 3). Results of the biopsy were reported as Niemann-Pick disease (Figure 4). Sphingomyelinase activity in leukocytes was found to be 1.38 nmol/17 hours/mg protein (reference value: 7.73±3.08 nmol/17 hours/mg protein). Thus, the patient was diagnosed with Type B Niemann-Pick disease. She declined the upper gastrointestinal endoscopy evaluation recommended for the investigation of splenomegaly aetiology.
Discussion
Niemann-Pick disease is a disorder of lipid metabolism in which sphingomyelin and, secondarily, cholesterol accumulate in lysosomes either as a result of acid sphingomyelinase enzyme (sphingomyelin phosphodiesterase, ASM) deficiency or ASM gene mutations (7). Diagnosis is made by the assessment of sphingomyelinase activity and observation of lipid-containing macrophages (sea-blue histiocytes) in bone marrow biopsy. There are data on Niemann-Pick disease with pulmonary involvement in the literature; however, there is no report of Niemann-Pick disease with concomitant Kartagener's Syndrome (8). Kartagener's Syndrome is characterised by the triad of chronic sinusitis, bronchiectasis, and situs inversus (9). It is classified under the group of disorders known as primary ciliary dyskinesias. It is inherited in an autosomal recessive manner. No evaluation directed to sinusitis was performed in our case, as there was no sinusitis-related symptom at presentation.
In Niemann-Pick disease, there may be increased total cholesterol and LDL cholesterol as well as reduced HDL cholesterol as lipid abnormalities; these values were in the normal range in our case (10). In the follow-up for Kartagener's Syndrome, antibiotic therapy, postural drainage and maintenance of prophylactic vaccination were recommended for bronchiectasis by the Department of Chest Diseases.
Conclusion
We present this case as there is no reported case of Niemann-Pick disease with concomitant Kartagener's Syndrome in the literature. Enzyme replacement, gene therapy, and stem cell transplantation should be recommended in the treatment of appropriate cases. Cases diagnosed as Kartagener's Syndrome should be informed about the autosomal recessive inheritance of the disease. Vaccinations against influenza and other frequent causes of pulmonary infection should be performed annually. It should be kept in mind that multiple syndromes may exist together in cases presenting with recurrent respiratory tract infection in the presence of splenomegaly. | 2018-12-05T08:39:52.538Z | 2013-09-12T00:00:00.000 | {
"year": 2013,
"sha1": "6c36dab9ed0173ac63ca8e28dd78b50b78c88844",
"oa_license": "CCBYNC",
"oa_url": "https://jag.journalagent.com/z4/download_fulltext.asp?pdir=erciyesmedj&plng=eng&un=EMJ-05826",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6c36dab9ed0173ac63ca8e28dd78b50b78c88844",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238210394 | pes2o/s2orc | v3-fos-license | An Empirical Investigation on the Relationship between Carbon Emission and Regional Economic Growth
The paper empirically investigated the relationship between GDP and carbon emissions in Shanxi Province, P. R. China, from 1998 to 2012. Firstly, unit root and co-integration tests were carried out for the two time series, and the co-integration equation was obtained. A Granger causality test was then carried out, and it was found that, at a lag order of 3, the carbon emissions of Shanxi Province are the Granger cause of GDP, while GDP is not the Granger cause of carbon emissions. The conclusion of the study shows that energy consumption in Shanxi Province still brings obvious economic benefits. The economic development of Shanxi Province has not caused a significant increase in carbon emissions over the past 15 years, which may be due to the preliminary effect of the economic transformation of Shanxi Province in recent years. This is consistent with the result that carbon emissions per unit GDP in Shanxi Province show a downward trend over the past 15 years. Finally, it is proposed that active adjustment of the energy structure, development of renewable energy, implementation of technological innovation, optimization of the industrial structure, and reasonable government intervention are effective measures for sustainable regional economic growth.
I. INTRODUCTION
With the rapid development of human economic activities, environmental problems caused by the heavy environmental burden were once almost out of control. The Low-carbon Society initiative is one of the various mechanisms that have been deployed to achieve green economic growth, societal wellbeing and development, and environmental preservation and management in a holistic manner [1]. Over the past decade, the theme of environmental protection has been the focus of the world's attention. Environmental problems such as global warming caused by carbon dioxide emissions from economic development are particularly significant. China has taken measures to address climate change, including the establishment of a national carbon trading market, emphasizing the responsibility of provincial governments to meet emissions reduction targets. The negative environmental effects of economic development have gradually received attention, and countries have begun to take measures to avoid environmental deterioration. While much attention has been addressed to China's national level GHG emission, less is known about its regional and sectoral emission features [2].
Most of the existing literature on the relationship between economic growth and carbon emissions examines whether they are in line with the Environmental Kuznets curve (EKC) hypothesis. This hypothesis interprets the relationship between carbon emissions and economic growth as follows: carbon emissions increase with economic development in the early stage and gradually decrease with further economic growth after reaching a certain point. However, Wu and Gu [3] have shown that it is inappropriate to examine the relationship between the two variables with the EKC hypothesis. They argued that it is more reasonable to study the relationship between the two variables based on causal analysis. In addition, researchers relying on the EKC hypothesis often subjectively assume that there is only a one-sided causal relationship between economic growth and carbon emissions. Carbon emissions per unit of GDP (also called carbon emission intensity, CEI) can be utilized to measure regional carbon emission performance [4].
In fact, a potential causal relationship between carbon emissions and economic growth cannot be ruled out [5]. China emits a large amount of anthropogenic carbon, but its carbon emission estimates are highly uncertain [6]. This limits the ability of research results to effectively guide the formulation of economic policies according to the actual situation of energy choice and the environment. Emissions-mitigation indicators, such as energy-efficiency targets, should be set relative to physical output (such as tons of steel production) rather than to economic growth [7].
City-level CO2 emission scenarios are important for cities' emission reduction policies [8]. The relationship between carbon emissions and economic growth described by the EKC hypothesis shows limitations in China. Carbon emission imbalances of most Chinese regions have been reduced since 2007 [9]. Economic growth and carbon emissions must therefore be studied according to the specific situation of each region. Fuel switching and renewable energy penetration also exhibit a positive effect on the decrease of CO2 [10].
II. DATA SOURCES
The paper uses the energy consumption data of Shanxi Province and the carbon emission coefficients provided by the "IPCC National Greenhouse Gas Emission List Guide". Carbon dioxide emissions are related to carbon emissions by a constant coefficient (44/12), the ratio of the molecular weight of carbon dioxide to the atomic weight of carbon. Since a linear transformation has no effect on the overall trend of a time series, this paper directly uses carbon emission data to study its relationship with GDP.
Since each category of energy is different, in order to facilitate comparison and further calculation of carbon emissions, the energy consumption data are first converted into the same unit: 10,000 tons of standard coal. China defines standard coal as energy with a fixed calorific value of 7,000 kcal per kilogram. All kinds of energy sources are converted into standard coal equivalents according to their ability to produce heat from combustion. The specific process is to first calculate the standard coal conversion coefficient from the real combustion heat production of each energy source: the standard coal coefficient equals the actual heat production (in kcal) per kilogram of a given energy source divided by 7,000. The data on major energy consumption in Shanxi Province from 1998 to 2012 are converted into these unified units.
After obtaining the energy consumption data in unified units, the carbon emissions are calculated as a weighted sum according to the 2006 IPCC "National Greenhouse Gas Emission List Guide". In order to exclude the impact of price changes, the nominal GDP of Shanxi Province from 1998 to 2012 is converted into GDP at constant 1978 prices.
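As a rough illustration of the conversion and weighting steps described above, the Python sketch below converts physical fuel consumption into standard coal equivalents and then into carbon and CO2 emissions. The fuel names, heat values, and emission factors are illustrative placeholders, not the official yearbook or IPCC values used in the paper.

```python
import pandas as pd

# Hypothetical heat values (kcal per kg); the paper uses the real
# combustion heat of each energy source from official statistics.
heat_value = {"raw_coal": 5000, "coke": 6800}

# Standard coal coefficient = actual heat per kg / 7,000 kcal per kg.
std_coal_coeff = {fuel: hv / 7000 for fuel, hv in heat_value.items()}

# Hypothetical carbon emission factors (t C per t of standard coal);
# the paper takes its factors from the 2006 IPCC guideline.
carbon_factor = {"raw_coal": 0.7559, "coke": 0.8550}

# Example consumption for one year, in 10,000 tons of physical fuel.
consumption = pd.Series({"raw_coal": 3000.0, "coke": 450.0})

standard_coal = consumption * pd.Series(std_coal_coeff)    # 10,000 tce
carbon = (standard_coal * pd.Series(carbon_factor)).sum()  # 10,000 t C
co2 = carbon * 44 / 12                                     # CO2 = C * 44/12

print(f"carbon emissions: {carbon:.1f}; CO2 emissions: {co2:.1f}")
```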
A. Unit Root Test
The augmented Dickey-Fuller (ADF) unit root test is used to test the stationarity of GDP and CE respectively, with the lag term determined according to the AIC and SC criteria. The results are shown in Table I. As can be seen from Table I, neither GDP nor CE passed the stationarity test, so first-order differences were taken for the two time series. After differencing, the ADF statistic of the GDP series (-7.02733) is less than the critical value (-4.992279) at the 1% significance level, showing that the first difference of GDP has no unit root. The first difference of CE has no unit root at the 10% significance level, so CE is likewise an integrated series of order one.
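A minimal sketch of the ADF test with Python's statsmodels is shown below; the series here are randomly generated stand-ins for the 1998-2012 GDP and CE data, so the statistics will not reproduce Table I.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Toy, non-stationary stand-ins for the 1998-2012 GDP and CE series.
rng = np.random.default_rng(0)
years = pd.RangeIndex(1998, 2013)
df = pd.DataFrame({"GDP": np.cumsum(rng.normal(50, 10, 15)) + 300,
                   "CE":  np.cumsum(rng.normal(5, 2, 15)) + 100}, index=years)

def adf_report(series, name):
    # Small sample, so cap the lag order; lag selection uses AIC.
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(series, maxlag=3, autolag="AIC")
    print(f"{name}: ADF={stat:.3f}, p={pvalue:.3f}, 1% crit={crit['1%']:.3f}")

for col in ["GDP", "CE"]:
    adf_report(df[col], col)                          # level series
    adf_report(df[col].diff().dropna(), f"d({col})")  # first differences
```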
B. Co-integration Test
The above analysis shows that the GDP series and the CE series are integrated of the same order. Next, the Engle-Granger (EG) two-step method is used to test the residuals of the long-run regression for a unit root. The results are shown in Table II.
According to Table II, the residual series is stationary at the 95% confidence level, so CE and GDP are co-integrated. Thus, there is a long-term equilibrium relationship between CE and GDP. The standardized co-integration equation of the two variables was obtained using EViews 8.0. The equation shows that there is a stable relationship between CE and GDP. The economic development of Shanxi Province increases with the increase of carbon emissions, which is consistent with the actual situation that Shanxi Province takes the energy-dependent secondary industry as the main driving force of economic growth.
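The EG two-step procedure can be reproduced in outline with statsmodels, as sketched below on the same kind of toy data; the long-run regression coefficients will of course differ from the EViews result for the actual Shanxi data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
years = pd.RangeIndex(1998, 2013)
df = pd.DataFrame({"GDP": np.cumsum(rng.normal(50, 10, 15)) + 300,
                   "CE":  np.cumsum(rng.normal(5, 2, 15)) + 100}, index=years)

# Step 1: estimate the long-run equilibrium regression GDP = a + b * CE.
X = sm.add_constant(df["CE"])
ols_res = sm.OLS(df["GDP"], X).fit()
print(ols_res.params)

# Step 2: test the residuals for a unit root; stationary residuals
# indicate that GDP and CE are co-integrated.
resid_stat, resid_p, *_ = adfuller(ols_res.resid, maxlag=3, autolag="AIC")
print(f"residual ADF = {resid_stat:.3f}, p = {resid_p:.3f}")

# statsmodels also wraps both steps in one Engle-Granger test:
t_stat, p_value, crit = coint(df["GDP"], df["CE"], maxlag=3)
print(f"Engle-Granger t = {t_stat:.3f}, p = {p_value:.3f}")
```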
By calculating the carbon emission intensity per unit GDP, it can be seen that carbon emissions per unit GDP show an overall downward trend during the fifteen years from 1998 to 2012. The growth rate of carbon emissions in Shanxi Province is much lower than that of GDP, which may be related to a series of energy-saving and emission-reduction policies launched in recent years as environmental concerns gained importance, as well as the preliminary results of economic transformation in Shanxi Province.
C. Granger Causality Test
A Granger causality test was performed to investigate whether the two variables have a Granger-causal relationship. It can be seen that when the lag order is 3, the null hypothesis that CE is not the Granger cause of GDP is rejected at the significance level of 0.01. CE is therefore considered to be the Granger cause of GDP, while GDP is not the Granger cause of CE. This is consistent with the fact that the economy of Shanxi Province is highly dependent on the energy industry.
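A sketch of the Granger causality test with statsmodels follows, run in both directions on the first-differenced toy series with a maximum lag of 3; with the paper's actual data the corresponding statistics would be reported rather than these toy values.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
years = pd.RangeIndex(1998, 2013)
df = pd.DataFrame({"GDP": np.cumsum(rng.normal(50, 10, 15)) + 300,
                   "CE":  np.cumsum(rng.normal(5, 2, 15)) + 100}, index=years)
d = df.diff().dropna()   # use the stationary first differences

# grangercausalitytests checks whether the SECOND column Granger-causes
# the FIRST column, for every lag up to maxlag.
print("H0: CE does not Granger-cause GDP")
res_ce_to_gdp = grangercausalitytests(d[["GDP", "CE"]], maxlag=3)

print("H0: GDP does not Granger-cause CE")
res_gdp_to_ce = grangercausalitytests(d[["CE", "GDP"]], maxlag=3)
```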
In recent years, Shanxi Province has advocated emission reduction, environmental protection, and a low-carbon economy. In the process of industrial transformation, the economic center of gravity has shifted from the secondary industry to the tertiary industry, promoting the development of the service industry and curbing the energy consumption and carbon emissions of the energy-dependent manufacturing industry, so that economic growth has not significantly driven up carbon emissions. The Granger causality test results are shown in Table III.
A. Conclusions
Firstly, the co-integration equation obtained above shows that there is a significant linear relationship between carbon emissions and economic growth in Shanxi Province, P. R. China. CE and GDP change in the same direction, and this relationship is stable over the long term. This is in line with the fact that Shanxi Province has long formed an industrial structure with energy and raw material industries as its main industries. The uncoordinated industrial structure of Shanxi Province is accompanied by the low scientific and technological content of the energy industry. All these factors contribute to the high carbon emissions of Shanxi Province.
Secondly, from the correlation coefficient of GDP and CE, it can be seen that, because carbon emissions mainly come from the energy industry, the economy of Shanxi Province is still highly dependent on the secondary industry. This situation is not easy to change in the short term. The reliance on high-carbon-emission energy sources such as coal also increases the difficulty of emission reduction in Shanxi Province.
Finally, GDP is not the Granger cause of CE, which to some extent suggests that Shanxi Province should not reduce carbon emissions by restraining economic development, but should instead seek a sustainable development approach that promotes economic growth, reduces environmental pollution, and encourages companies to explore new development paths.
B. Policy Implications
Firstly, it can be seen from the previous analysis that the energy consumption of Shanxi Province is mainly coal, with other types of energy accounting for very little, and the carbon emission coefficients of coal and coke are relatively large. The economy of Shanxi Province is highly dependent on energy and requires a large amount of energy consumption, which is an important reason for the difficulty in controlling and reducing carbon emissions. With the objective of cost minimization, the results indicate that after 2025, the proportion of coal in the country's total energy supply will rapidly decline [11]. Therefore, reasonable adjustment of the energy consumption structure is an effective measure to reduce carbon emissions. In addition, the development of new renewable energy sources, such as solar energy, biomass energy, wind energy, and geothermal energy, can significantly reduce carbon emission intensity. Considering long-term interests, the government should strengthen investment in new energy development and research, support new energy research projects, and guide changes in energy consumption preferences, which will prompt the energy structure to gradually shift toward low-emission-intensity energy.
Secondly, the promotion of technological innovation will inevitably affect the structure of supply and demand. There is a mismatch between supply and demand in production in Shanxi Province, and the new equipment, new materials and new products required by technological innovation will generate consumer demand, thus adjusting the supply and demand structure and finally changing the economic structure.
Thirdly, the function of technological innovation is to optimize resource allocation. Carbon capture and storage, a technology that prevents CO2 emitted by coal-burning factories from being released into the environment, is one of the best options available with large-scale capacity for China to significantly reduce CO2 emissions from factory sectors in the short run [12].
Fourthly, the economy of Shanxi Province is highly dependent on the coal-mining industry, so the focus of industrial structure optimization must be on industrial transformation. Against the background of rising domestic requirements for the quality of economic growth, it is necessary to upgrade the traditional energy industries in a selective and planned way, making them rational and efficient. | 2021-09-28T18:10:22.646Z | 2021-07-05T00:00:00.000 | {
"year": 2021,
"sha1": "feca7ef37790feedfb4138ca4208c9981c2b2408",
"oa_license": "CCBYNC",
"oa_url": "https://ejbmr.org/index.php/ejbmr/article/download/926/506",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4694eefad78441b8777354a2253a90e7f6c67e13",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
236941120 | pes2o/s2orc | v3-fos-license | Ribosomal L1 domain-containing protein 1 coordinates with HDM2 to negatively regulate p53 in human colorectal Cancer cells
Background Ribosomal L1 domain-containing protein 1 (RSL1D1) is a nucleolar protein that is essential in cell proliferation. In the current opinion, RSL1D1 translocates to the nucleoplasm under nucleolar stress and inhibits the E3 ligase activity of HDM2 via direct interaction, thereby leading to stabilization of p53. Methods Gene knockdown was achieved in HCT116p53+/+, HCT116p53−/−, and HCT-8 human colorectal cancer (CRC) cells by siRNA transfection. A lentiviral expression system was used to establish cell strains overexpressing genes of interest. The mRNA and protein levels in cells were evaluated by qRT-PCR and western blot analyses. Cell proliferation, cell cycle, and cell apoptosis were determined by MTT, PI staining, and Annexin V-FITC/PI double staining assays, respectively. The level of ubiquitinated p53 protein was assessed by IP. The protein-RNA interaction was investigated by RIP. The subcellular localization of proteins of interest was determined by IFA. Protein-protein interaction was investigated by GST-pulldown, BiFC, and co-IP assays. The therapeutic efficacy of RSL1D1 silencing on tumor growth was evaluated in HCT116 tumor-bearing nude mice. Results RSL1D1 distributed throughout the nucleus in human CRC cells. Silencing of RSL1D1 gene induced cell cycle arrest at G1/S and cell apoptosis in a p53-dependent manner. RSL1D1 directly interacted with and recruited p53 to HDM2 to form a ternary RSL1D1/HDM2/p53 protein complex and thereby enhanced p53 ubiquitination and degradation, leading to a decrease in the protein level of p53. Destruction of the ternary complex increased the level of p53 protein. RSL1D1 also indirectly decreased the protein level of p53 by stabilizing HDM2 mRNA. Consequently, the negative regulation of p53 by RSL1D1 facilitated cell proliferation and survival and downregulation of RSL1D1 remarkably inhibited the growth of HCT116p53+/+ tumors in a nude mouse model. Conclusion We report, for the first time, that RSL1D1 is a novel negative regulator of p53 in human CRC cells and more importantly, a potential molecular target for anticancer drug development. Supplementary Information The online version contains supplementary material available at 10.1186/s13046-021-02057-8.
Ribosomal L1 domain-containing protein 1 (RSL1D1) is a nucleolar protein encoded by the cellular senescence-inhibited gene (CSIG) [14][15][16]. This protein contains a ribosomal L1 domain in the N-terminus and a lysine-rich domain in the C-terminus [17]. The expression of RSL1D1 is high in early passaged fibroblasts but declines during cellular senescence [18,19]. Under normal conditions, RSL1D1 is mainly localized in the nucleolus. Upon various nucleolar stresses, such as treatment with low-dose actinomycin D (Act-D) and adriamycin or silencing of TIF-IA, RSL1D1 translocates to the nucleoplasm [20].
RSL1D1 regulates a wide range of cellular processes. It induces rRNA processing by destabilizing NOLC1 mRNA through direct interaction with its 5′-UTR [21]. It promotes UV-induced apoptosis by activating BAX [15]. Moreover, it regulates cellular replicative senescence and cell proliferation [16,19,22]. In a human 2BS fibroblast model, overexpression of RSL1D1 significantly promoted cell proliferation and delayed cellular replicative senescence, whereas downregulation of RSL1D1 expression reduced cell proliferation and accelerated cellular replicative senescence [19,22]. RSL1D1 promotes cell proliferation by negatively regulating PTEN expression in HEK 293 and 2BS cells [19]. It interacts with the 5′-UTR of PTEN mRNA to suppress PTEN translation, in turn promoting cell proliferation [19]. RSL1D1 also prevents c-Myc ubiquitination via direct interaction to increase its level in HepG2 and SMMC7721 hepatocellular carcinoma cells, accordingly promoting cell proliferation [16]. PTEN and c-Myc are important regulators of tumorigenesis and metastasis and participate in the p53 signaling pathway [23][24][25]. These data indicate a strong association between RSL1D1 and p53. Recently, Xie et al. have reported that RSL1D1 translocates from the nucleolus to the nucleoplasm in response to nucleolar stress and interacts directly with the C-terminal RING finger domain of HDM2, a primary negative regulator of p53, to inhibit its E3 ubiquitin ligase activity. This in turn stabilizes p53 and arrests cell cycle progression [20]. Here, we report a reverse function of RSL1D1 in the regulation of p53 in HCT116 and HCT-8 colorectal cancer (CRC) cells, which might result from the distribution of RSL1D1 throughout the entire nucleus of CRC cells under normal conditions. In our model, RSL1D1 acts as an oncoprotein that negatively regulates p53 activity by stabilizing HDM2 mRNA and recruiting p53 to HDM2 via direct interaction for ubiquitination and degradation, thereby leading to cancer cell proliferation and survival.
Mice
All animal experiments were approved by the Institutional Animal Care and Use Committee of Yangzhou University and complied with the guidelines of the Jiangsu Laboratory Animal Welfare and Ethical Committee of Jiangsu Administrative Committee of Laboratory Animals. To evaluate the efficacy of siRNA against RSL1D1 (siRSL1D1) in antitumor therapy in female BALB/c-Foxn1 nu /Nju nude mice, 2.5 × 10 6 HCT116 cells were injected into the upper right axillary fossa of 4-to 6-week-old mice with a body weight of 18-22 g. When tumors grew to 0.3-0.4 cm in diameter, the mice were randomly divided into two groups (n = 5) and treated diebus tertius with siNC-PEI or siRSL1D1-PEI mixtures via percutaneous intratumor injection. The antitumor effect was evaluated by determining the mean tumor volume of mice in each group. The mice were sacrificed after a 15-day treatment course. All tumors were photographed and divided into two parts. One part was subjected to H&E staining analysis and the other part was used for western blot analysis to determine the protein level of RSL1D1 protein in the tumors. β-actin was used as a loading control.
Cell culture
HCT116 p53+/+ and HCT-8 cells were kindly provided by the Cell Bank of the Chinese Academy of Sciences. HCT116 p53−/− cells were a gift from Dr. Bert Vogelstein of Johns Hopkins University. Lenti-X™ 293T cells were purchased from Takara Biomedical Technology Co., Ltd. (Beijing, China). HCT116 p53+/+ and HCT116 p53−/− cells were cultured in McCoy's 5A medium supplemented with 10% heat-inactivated fetal bovine serum, 2 mM L-glutamine, 100 U/mL penicillin, and 100 mg/mL streptomycin. The Lenti-X™ 293T and HCT-8 cells were cultured in DMEM medium with the same supplements. All cells were cultured at 37°C in a humidified incubator with a 5% CO2 atmosphere.
siRNA-mediated knockdown of genes
siRNA molecules were designed to downregulate the expression of RSL1D1 (siRSL1D1), HDM2 (siHDM2), or FOXO3a (siFOXO3a). An unrelated siRNA sequence targeting PLEKHB1 (GenBank accession no. XM_018572553.1) from N. parkeri was used as a negative control (siNC). The siRNA was transfected into cells using Lipofectamine™ 2000 according to the reagent manual and the medium was replaced 6 h later. Forty-eight hours post-transfection, the cells were harvested for qRT-PCR and western blot analyses to evaluate the knockdown efficiency. The siRNA sequences are listed in Supplementary Table S1.
Total RNA extraction, cDNA synthesis, and qRT-PCR
Total RNA was isolated from cells using TRIzol according to the reagent manual. The RNA samples were reverse transcribed to cDNA using a HiFiScript cDNA Synthesis Kit (CoWin Biosciences). qRT-PCR was performed using EvaGreen 2× qPCR MasterMix (Applied Biological Materials Inc., Richmond, Canada). Reactions were run in a CFX96 Touch™ Real-time PCR system (Bio-Rad Laboratories, Hercules, USA) and data were analyzed using the built-in CFX Manager™ software. Data normalization was performed as previously described [26,27]. The primer sequences are listed in Supplementary Table S1.
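The relative quantification step can be illustrated with the widely used 2^-ΔΔCt calculation, sketched below in Python; the Ct values are made up for illustration, and the paper's exact normalization follows its cited protocol [26,27].

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the common 2^-ddCt method, normalized to a
    reference gene (e.g. GAPDH) and to a control sample."""
    d_ct_sample = ct_target - ct_ref             # delta Ct in treated cells
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # delta Ct in control cells
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Hypothetical Ct values for HDM2 in siRSL1D1- vs siNC-transfected cells.
print(fold_change(ct_target=24.8, ct_ref=17.2,
                  ct_target_ctrl=23.9, ct_ref_ctrl=17.1))
```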
Extraction of tissue, whole-cell, nuclear, and cytoplasmic proteins
For extraction of proteins from tissues or whole cells, tissue fragments or cells were resuspended in RIPA lysis buffer (Beyotime Biotechnology, Shanghai, China) containing 1× Protease Inhibitor Cocktail (Beyotime Biotechnology) and immediately homogenized (for tissue samples) or sonicated (for cell samples), followed by centrifugation at 12,000×g for 10 min at 4°C. Nuclear and cytoplasmic proteins were prepared from the same amount of cells using the Nuclear and Cytoplasmic Protein Extraction Kit (Beyotime Biotechnology). Protein concentration was quantified using the Micro BCA Protein Assay Kit (CoWin Biosciences Co., Ltd., Beijing, China), followed by western blot analysis of the proteins of interest.
Western blot analysis
Proteins were separated by SDS-PAGE and electrotransferred to a PVDF membrane. The membrane was blocked with 5% milk in PBST at room temperature for 1 h. Subsequently, the membrane was incubated with primary antibody at 4°C overnight and then with HRP-labeled secondary antibody at room temperature for 1 h. Next, the membrane was incubated in UltraECL Chemiluminescence reagent (YuanPinHao Bio, Beijing, China) for 1 to 2 min. The specific bands were visualized using a Tanon imaging system.
Proliferation of RSL1D1 knockdown HCT116 cells
siRSL1D1-transfected HCT116 cells were seeded into 96-well plates and incubated for 12 h to allow firm attachment to the bottom of the wells. The plates were then incubated in a 5% CO2 incubator at 37°C for 0, 1, 2, and 3 days. Subsequently, MTT was added to each well. Four hours later, the culture medium was carefully removed and DMSO was added to each well to dissolve the formazan crystals. The OD values were determined using an Infinite M200 Pro 96-well microplate reader (Tecan Life Science, Männedorf, Switzerland) at 570 nm with a reference wavelength of 630 nm. The values were normalized against the absorbance of the wells seeded with siNC-transfected cells at day 0.
Cell cycle and apoptosis analyses
PI staining analysis was performed to determine the distribution of each cell cycle phase. Briefly, the harvested cells were fixed and permeabilized with 70% ethanol at −20°C overnight. The cells were washed with 1× PBS and treated with 10 μg/mL of DNase-inactivated RNase A at room temperature for 1 h. Subsequently, the cells were incubated with 0.1 mg/mL of PI at room temperature in the dark for at least 10 min immediately prior to FACS analysis. Annexin V-FITC/PI staining analysis was performed to detect apoptosis according to the manufacturer's manual. Briefly, the cells were harvested and washed twice with cold 1× PBS, followed by resuspension in 1× Annexin V binding buffer. Annexin V-FITC and PI were sequentially added into the suspension for staining. The stained cells were then loaded into a FACSCalibur flow cytometer (BD, Santa Clara, USA). Data were collected for cell cycle and apoptosis analyses using FlowJo v10 software (TreeStar, Ashland, USA).
Immunoprecipitation (IP)
Cells were harvested and lysed in Cell Lysis Buffer for Western and IP (Beyotime Biotechnology) containing 1× Protease Inhibitor Cocktail (Beyotime Biotechnology). After ultrasonication and centrifugation at 12,000×g for 10 min at 4°C, the protein concentration of each supernatant was determined. The supernatant was coincubated with mouse anti-p53 monoclonal antibody (DO-1, Santa Cruz Biotechnology) at 4°C for 4 h, followed by further incubation with Protein A + G beads (CoWin Biosciences) at 4°C for 1 h. After washing four times with Cell Lysis Buffer, the beads were boiled in 1× SDS-PAGE sample loading buffer for western blot analysis with primary antibody against p53 (DO-1, Santa Cruz Biotechnology) and then with a secondary antibody against mouse IgG light chain (Abbkine, Wuhan, China). In addition, western blot analysis was performed to evaluate the levels of FLAG-RSL1D1, FLAG-RSL1D1-NT, FLAG-RSL1D1-CT, RSL1D1, p53, and HDM2 proteins in cell lysates (supernatants). β-actin was used as a loading control.
Assessment of mRNA stability
Since high-dose Act-D is reported to rapidly shut off mRNA transcription in cultured cells and is therefore widely used to study the decay rates of the remaining endogenous transcripts, we assessed the cellular stability of HDM2 mRNA following this classical approach [28]. After treatment with 4 μM Act-D, cells were harvested at different time points for qRT-PCR analysis of HDM2 mRNA levels. The extremely stable GAPDH mRNA was used as an internal control [28].
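Assuming first-order decay after transcription shutoff, the remaining fraction of HDM2 mRNA over the Act-D time course can be turned into a half-life estimate with a simple log-linear fit; the time points and fractions below are illustrative, not the paper's measurements.

```python
import numpy as np

time_h = np.array([0, 2, 4, 8])                 # hours after Act-D addition
remaining = np.array([1.00, 0.72, 0.50, 0.26])  # HDM2 mRNA, normalized to t=0

# ln(remaining) = -k * t, so the slope of a log-linear fit gives the
# first-order decay rate k, and the half-life is ln(2)/k.
slope, intercept = np.polyfit(time_h, np.log(remaining), 1)
k = -slope
print(f"decay rate k = {k:.3f} per hour; half-life = {np.log(2) / k:.2f} h")
```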
RNA immunoprecipitation (RIP) assay
RIP assay was performed as previously described [29] with slight modifications. In detail, cells were incubated with 0.4% paraformaldehyde to cross-link RNA and protein at room temperature for 15 min and then with 0.2 M glycine for an additional 5 min to stop cross-linking. The cells were washed twice with PBS and lysed in RIP buffer containing 100 mM KCl, 5 mM MgCl 2 , 10 mM HEPES (pH 7.0), 0.5% NP40, 1 mM DTT, 1000 U/mL RNase Inhibitor (Beyotime Biotechnology), and 1× EDTA-free Protease Inhibitor Cocktail (Beyotime Biotechnology). After centrifugation at 12,000×g for 10 min at 4°C, supernatants were co-incubated with anti-FLAG antibody (M2, Sigma Aldrich, St. Louis, USA) or mouse IgG at 4°C overnight, followed by incubation with Protein A + G beads for 2 h. The beads were then washed four times with RIP buffer and treated with proteinase K to release RNA and protein components. TRIzol reagent was used to isolate RNA, followed by qRT-PCR analysis. RNA isolated directly from cell lysate was used as an input control.
Immunofluorescence assay (IFA)
Cells were fixed in 4% paraformaldehyde at room temperature for 20 min, permeabilized in ice-cold 1× PBS containing 0.2% Triton X-100 for 10-15 min, and then blocked in 3% BSA in 1× PBS at room temperature for 1 h. The cells were incubated with mouse anti-RSL1D1 monoclonal antibody (homemade), rabbit anti-p53 monoclonal antibody (7F5, Cell Signaling Technology), or rabbit anti-HDM2 monoclonal antibody (D1V2Z, Cell Signaling Technology) at 4°C overnight, followed by washing with 1× PBS three times. Subsequently, the cells were incubated with Cy3-or FITCconjugated secondary antibody against mouse or rabbit IgG (Beyotime) at room temperature for 2 h. After washing with 1× PBS three times, the cells were stained with Hoechst 33258 and observed under the Leica TCS SP8 STED laser confocal microscope (Wetzlar, Germany).
GST-pulldown assay
The GST-pulldown assay was performed as previously described [31] with slight modifications. In brief, purified GST-tagged RSL1D1 protein was co-incubated with Glutathione Sepharose 4B beads at 4°C for 1 h. Then, the beads were incubated with purified His-tagged p53 protein at 4°C for 1 h, followed by washing five times with 1% Triton X-100 in PBS. The beads were boiled in SDS-PAGE sample loading buffer for western blot analysis.
Bimolecular fluorescence complementation (BiFC) assay
The BiFC assay was performed to further explore the interaction between RSL1D1 and p53 in vivo according to a published protocol [32] with slight modifications. In brief, the coding regions of RSL1D1-FL, RSL1D1-NT and RSL1D1-CT were cloned into pBiFC-mCherryN159. The coding regions of p53-FL and p53-DBD were cloned into pBiFC-mCherryC160. HCT116 p53+/+ cells were seeded into confocal dishes and co-transfected with recombinant plasmid pairs pBiFC-mCherryN159-RSL1D1-FL (or pBiFC-mCherryN159-RSL1D1-NT or pBiFC-mCherryN159-RSL1D1-CT) and pBiFC-mCherryC160-p53-FL (or pBiFC-mCherryC160-p53-DBD). Thirty-six hours post-transfection, the cells were incubated with Hoechst 33258 for nuclear staining and observed under the laser confocal microscope. Negative control cells were co-transfected with pBiFC-mCherryN159 and pBiFC-mCherryC160, whereas positive control cells were co-transfected with pBiFC-mCherryN159-SV40gp6 and pBiFC-mCherryC160-p53 because of the established p53-SV40gp6 interaction [33]. Relative fluorescent quantitative analysis was performed to assess the interaction between proteins using the software Image J (NIH, Bethesda, USA). To facilitate comparison, the mean level of fluorescence intensity derived from the p53-SV40gp6 interaction was set as 1.
A combination of BiFC and immunofluorescence assays was performed to investigate the intracellular colocalization of RSL1D1, p53, and HDM2. Briefly, after cotransfection with pBiFC-mCherryN159-RSL1D1-FL and pBiFC-mCherryC160-p53-FL, cells were subjected to fluorescent staining with anti-HDM2 antibody following the IFA protocol.
Co-IP
Whole-cell proteins were extracted from lentivirus-transduced HCT116 p53−/− cells stably expressing V5-p53 or HCT116 p53+/+ cells and incubated with anti-V5 (D3H8Q, Cell Signaling Technology) or anti-HDM2 antibody (D1V2Z, Cell Signaling Technology), respectively, at 4°C overnight. Rabbit IgG was used as a negative control. The antibody-protein mixture was then incubated with Protein A + G beads at 4°C for 2 h. After washing four times, the beads were boiled in 1× SDS-PAGE sample loading buffer for western blot analysis with a primary antibody against RSL1D1 (homemade), HDM2 (D1V2Z, Cell Signaling Technology), or p53 (DO-1, Santa Cruz Biotechnology). Cell lysate was used as an input control.
Statistical analysis
All numerical data are presented as the mean ± SD. The significance of the difference between the mean values of the two groups was evaluated using Student's t-test. Differences were considered statistically significant at P < 0.05 (*) and P < 0.01 (**).
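For a two-group comparison of this kind, a minimal sketch with SciPy is given below; the replicate values are invented solely to show the calculation and the significance thresholds used in the paper.

```python
import numpy as np
from scipy import stats

siNC = np.array([1.00, 0.95, 1.06])      # e.g. normalized expression, control
siRSL1D1 = np.array([0.22, 0.18, 0.25])  # e.g. normalized expression, knockdown

t_stat, p_value = stats.ttest_ind(siNC, siRSL1D1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# P < 0.05 (*) and P < 0.01 (**) are treated as significant, as in the paper.
```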
RSL1D1 is required for proliferation and survival of human colorectal Cancer cells
To investigate the function of RSL1D1 (GenBank accession no. NM_015659.3) in cancer cells, we first analyzed the expression of RSL1D1 in human cancer tissues and normal counterparts by interrogating the Oncomine Cancer Microarray database (www.oncomine.org/). Forty-nine of the 73 independent datasets showed that RSL1D1 was significantly upregulated in cancer compared with normal tissues (P < 0.001) (Supplementary Fig. S1). More importantly, RSL1D1 was upregulated in all 18 CRC datasets, suggesting that RSL1D1 might promote the proliferation and survival of CRC cells as an oncoprotein.
Hence, we transfected HCT116 cells with the siRSL1D1 to downregulate the expression of RSL1D1 and assess whether RSL1D1 is involved in cell proliferation. Efficient downregulation of RSL1D1 (approximately 80% at the mRNA level) (Fig. 1A and B) greatly slowed down cell proliferation either in the presence (P < 0.01) or absence (P < 0.05) of p53 (Fig. 1C). Three days after transfection with the siRSL1D1, HCT116 p53+/+ and HCT116 p53−/− cells displayed a remarkable decrease in the proliferation rate by approximately 37 and 14%, respectively (Fig. 1C). Interestingly, even though RSL1D1 knockdown inhibited the proliferation of p53−/− cancer cells, the presence of p53 greatly enhanced the inhibitory effect (Fig. 1C). These findings indicate that RSL1D1 regulates cancer cell proliferation both in a p53-dependent and -independent manner. Since p53 is critical in cell cycle progression [34], RSL1D1 is probably involved in p53-mediated cell cycle control, thereby regulating the proliferation of p53+/+ cells.
(Fig. 1 legend) A The mRNA levels of RSL1D1 were determined by qRT-PCR analysis. GAPDH was used as an internal control to normalize the values. The normalized value of siNC-treated HCT116 p53+/+ cells was set to 1. B The levels of RSL1D1 protein were determined by western blot analysis. β-actin was used as a loading control. C Cell proliferation was evaluated by MTT assay. The cells were seeded to a 96-well plate and incubated for 0, 1, 2, and 3 days. The values on day 0 were normalized to 1. D Cell cycle was determined by PI staining. The stained cells were subjected to flow cytometry to analyze the average percentage of each cell cycle phase. Sub-G1 indicates the apoptotic cell population. E Cell apoptosis was determined by Annexin V-FITC/PI double staining. The stained cells were subjected to flow cytometry to analyze the percentage of apoptotic cells. The cells in the right lower (Annexin V-FITC+/PI−) and right upper (Annexin V-FITC+/PI+) quadrants indicate early and late apoptosis, respectively. A, C-E Data are represented as mean ± SD. Student's t test. *P < 0.05 and **P < 0.01 denote significant difference.
To test whether RSL1D1 affects cell cycle progression, we performed a PI staining assay to analyze the effect of RSL1D1 knockdown on the distribution of each cell cycle phase. In HCT116 p53−/− cells, downregulation of RSL1D1 resulted in a higher percentage of the G2 population and a lower percentage of the G1 population (P < 0.05), but had little effect on the percentage of either the sub-G1 or S population (Fig. 1D), demonstrating that RSL1D1 knockdown induces G2 arrest in the absence of p53. In HCT116 p53+/+ cells, downregulation of RSL1D1 led to a higher percentage of the sub-G1 (apoptotic) and G1 populations, but a lower percentage of the S and G2 populations (P < 0.05) (Fig. 1D), indicating that RSL1D1 knockdown induces cell apoptosis and G1 arrest, a typical feature of senescent cells, in a p53-dependent manner [19,35,36].
Collectively, RSL1D1 promotes cancer cell proliferation and survival, and the status of p53 determines how RSL1D1 regulates these cellular processes. In the absence of p53, RSL1D1 facilitates the G2/M transition. In the presence of p53, the function of RSL1D1 shifts to inhibit apoptosis and facilitate the G1/S transition.
RSL1D1 negatively regulates the protein level of nuclear p53
To investigate how RSL1D1 participates in the p53 signaling pathway, we modulated the expression of RSL1D1 in human CRC cells and analyzed the mRNA and protein levels of p53. The mRNA level of p53 showed no significant change in either RSL1D1-downregulated or -overexpressed HCT116 p53+/+ cells when compared with that in the negative controls ( Fig. 2A and C). However, the mRNA level of p21 increased remarkably in RSL1D1-downregulated cells (P < 0.01) and decreased in RSL1D1-overexpressed cells (P < 0.05) ( Fig. 2A and C). Since p21 is a canonical target of p53 and can be induced by this transcription factor to arrest the cell cycle at the G1/S checkpoint [37,38], RSL1D1 is likely to negatively regulate the level of p53 protein but not p53 mRNA. We therefore analyzed the protein levels of p53 and p21 in RSL1D1-modulated HCT116 p53+/+ cells. The result showed that the protein levels of p53 and p21 increased in RSL1D1-downregulated HCT116 p53+/+ cells (Fig. 2B), thereby inducing G1/S arrest (Fig. 1D). When RSL1D1 was overexpressed, the protein levels of p53 and p21 decreased (Fig. 2D).
Interestingly, RSL1D1 could negatively regulate p21 protein in a p53-independent manner (Fig. 2A-D). The protein level of p21 increased in response to RSL1D1 knockdown and decreased when RSL1D1 was overexpressed in HCT116 p53−/− cells (Fig. 2B and D). However, the mRNA level of p21 showed no significant change when RSL1D1 was either downregulated or upregulated in the absence of p53 (Fig. 2A and C). It is noteworthy that the protein level of p21 in siRSL1D1-transfected HCT116 p53−/− cells was still lower than that in siNC-transfected HCT116 p53+/+ cells (Fig. 2B) and was accordingly insufficient to induce G1 arrest in the absence of the dominant contribution of p53 (Fig. 1D) [39,40].
Furthermore, RSL1D1 could negatively regulate PUMA, another p53 target gene and a major apoptosis-inducing factor [41]. In the presence of p53, the mRNA (P < 0.01) and protein levels of PUMA increased significantly in RSL1D1-downregulated HCT116 cells (Fig. 2A and B), thereby inducing apoptosis in a p53-dependent manner (Fig. 1D and E). In the absence of p53, RSL1D1 knockdown also upregulated the mRNA (P < 0.05) and protein levels of PUMA (Fig. 2A and B). However, the level of PUMA expression in siRSL1D1-transfected HCT116 p53−/− cells was still lower than that in siNC-transfected HCT116 p53+/+ cells (Fig. 2A and B) and was accordingly insufficient to induce apoptosis (Fig. 1D and E). To investigate how RSL1D1 regulates PUMA expression in HCT116 p53−/− cells, we determined the protein level of FOXO3a, a direct transcriptional regulator of PUMA that mainly contributes to the p53-independent upregulation of PUMA in CRC cells [42,43]. Upon RSL1D1 knockdown, the level of FOXO3a protein increased in HCT116 p53−/− but not HCT116 p53+/+ cells (Supplementary Fig. S2A). Further study showed that FOXO3a knockdown decreased the high level of PUMA expression in RSL1D1-downregulated HCT116 p53−/− cells (Supplementary Fig. S2B). These data demonstrate that RSL1D1 knockdown increases PUMA expression by upregulating FOXO3a in the absence of p53. The upregulation of FOXO3a contributed to G2/M arrest in RSL1D1-downregulated HCT116 p53−/− cells, thereby inhibiting cell proliferation (Fig. 1C and D), which is consistent with the current opinion that FOXO3a activation induces G2/M arrest in various cancer cells [44][45][46][47]. However, the mRNA and protein levels of PUMA showed no significant change when RSL1D1 was overexpressed in either p53+/+ or p53−/− CRC cells (Fig. 2C and D).
To further confirm the negative regulation of p53 by RSL1D1, we also evaluated the mRNA and protein levels of p53 and its target genes in RSL1D1-downregulated HCT-8 cells, another human CRC cell line harboring wild-type p53 [48]. Similarly, the mRNA level of p53 showed no significant change. In contrast, the mRNA levels of p21 and PUMA increased remarkably in HCT-8 cells in response to RSL1D1 knockdown (P < 0.05) (Supplementary Fig. S3A). However, the protein levels of p53, p21, and PUMA were all significantly increased ( Supplementary Fig. S3B). Again, the data from HCT-8 cells support our hypothesis.
(Fig. 2 legend, in part) The normalized values of siNC-transfected HCT116 p53+/+ cells were set to 1. B The protein levels of RSL1D1, p53, HDM2, p21, and PUMA were determined by western blot analysis in RSL1D1-downregulated HCT116 p53+/+ and HCT116 p53−/− cells. β-actin was set as a loading control. C The mRNA levels of RSL1D1, p53, HDM2, p21, and PUMA were determined by qRT-PCR in RSL1D1-overexpressed HCT116 p53+/+ and HCT116 p53−/− cells. GAPDH was used as an internal control to normalize the values. The normalized values of EGFP-overexpressed HCT116 p53+/+ cells were set to 1. D The protein levels of RSL1D1, p53, HDM2, p21, and PUMA were determined by western blot analysis in RSL1D1-overexpressed HCT116 p53+/+ and HCT116 p53−/− cells. β-actin was used as a loading control. E The levels of nuclear and cytoplasmic RSL1D1, p53 and HDM2 proteins were determined by western blot analysis in RSL1D1-downregulated HCT116 p53+/+ and HCT116 p53−/− cells. HDAC1 and β-actin were set as internal controls for nuclear and cytoplasmic proteins, respectively. F The levels of RSL1D1, p53, and HDM2 proteins were determined by western blot analysis in RSL1D1-downregulated HCT116 p53+/+ cells treated with Nutlin-3 (40 μM, 12 h). β-actin was used as a loading control. A, C Data are represented as mean ± SD. Student's t test. *P < 0.05 and **P < 0.01 denote significant difference.
As a transcription factor, p53 is mainly localized in the nucleus and binds to the upstream activating sequences of target genes, such as p21 and PUMA, for transcriptional activation, leading to growth inhibition and apoptosis of cancer cells [49,50]. To investigate whether RSL1D1 negatively regulates p53 in the nucleus, we separated the nucleus and cytoplasm from RSL1D1-downregulated HCT116 cells and measured the levels of p53 protein in these two subcellular compartments. Western blot analysis showed that in normal HCT116 p53+/+ cancer cells, the level of p53 protein in the cytoplasm was much lower than that in the nucleus (Fig. 2E). When RSL1D1 expression was silenced by transfection with the siRSL1D1, the level of p53 protein increased significantly in the nucleus, but not in the cytoplasm (Fig. 2E).
Taken together, RSL1D1 negatively regulates the protein level of nuclear p53, thereby suppressing p53 targets to promote the proliferation and survival of CRC cells.
RSL1D1 promotes p53 ubiquitination by upregulating HDM2
Since ubiquitination plays a major part in the negative regulation of p53 by acting as a signal for proteasomemediated degradation [51], we wondered whether RSL1D1 is involved in ubiquitin-mediated p53 degradation and therefore analyzed p53 ubiquitination in RSL1D1-modulated HCT116 p53+/+ cells treated with the proteasome inhibitor MG-132 [20]. The result showed that downregulation of RSL1D1 significantly decreased the amount of ubiquitinated p53 (Fig. 3A). In contrast, overexpression of RSL1D1 increased ubiquitinated p53 remarkably (Fig. 3B).
To test whether the negative regulation of p53 by RSL1D1 is HDM2-dependent, we treated HCT116 p53+/+ cells with Nutlin-3 to block p53-HDM2 interaction [58]. Unlike in the untreated cells (Fig. 2B), RSL1D1 knockdown did not affect the protein level of p53 in Nutlin-3 treated cells, but still decreased the protein level of HDM2 (Fig. 2F). This indicates that RSL1D1 negatively regulates p53 in a HDM2-dependent manner.
Since HDM4 (or HDMX) is also a negative regulator of p53 in the regulatory feedback loop of nucleolar protein-HDM2-p53 [59,60], we wondered whether HDM4 is also involved in the regulation of p53 by RSL1D1. The result showed that the protein levels of HDM4 were not remarkably affected by RSL1D1 knockdown in either HCT116 p53+/+ or HCT116 p53−/− cells ( Supplementary Fig. S4). It is unlikely that RSL1D1 affects p53 levels via HDM4.
To rule out the possibility that HDM2 regulates RSL1D1, we assessed the expression of RSL1D1 in HDM2-downregulated HCT116 p53+/+ cells. As expected, HDM2 knockdown did not change the level of p53 mRNA, but remarkably increased the amount of p53 protein, thereby upregulating the protein and mRNA levels of p21 and PUMA (P < 0.01) (Supplementary Fig. S5). However, downregulation of HDM2 did not significantly change the mRNA or protein levels of RSL1D1 ( Supplementary Fig. S5), indicating that RSL1D1 locates upstream of HDM2 in the RSL1D1-HDM2 signaling axis.
Collectively, RSL1D1 is an upstream factor in the RSL1D1-HDM2 signaling axis and positively regulates HDM2 to promote p53 ubiquitination.
RSL1D1 upregulates HDM2 by stabilizing HDM2 mRNA
As a canonical p53 target, HDM2 is upregulated upon p53 activation, which in turn inhibits p53 [61]. However, in the current study, the mRNA level of HDM2 decreased in response to a high level of p53 protein in RSL1D1-downregulated HCT116 cells (Fig. 2A). Since RSL1D1 participates in the regulation of the mRNA stability of PTEN and NOLC1 via protein-RNA interaction [19,21], RSL1D1 is also possibly involved in regulating the stability of HDM2 mRNA. To verify this, we first evaluated the stability of HDM2 mRNA in siRSL1D1-transfected HCT116 cells. Compared with the controls, downregulation of RSL1D1 remarkably accelerated the degradation of HDM2 transcripts (Fig. 4A), indicating that RSL1D1 is an important factor in maintaining the stability of HDM2 mRNA.
To explore whether RSL1D1 stabilizes HDM2 mRNA via protein-RNA interaction, we performed an RIP assay and determined the amount of HDM2 mRNA in the immunoprecipitate from FLAG-RSL1D1-overexpressed cells. Compared with the negative controls, HDM2 transcripts were significantly precipitated by the anti-FLAG antibody (P < 0.05) (Fig. 4B), demonstrating that RSL1D1 interacts with HDM2 mRNA.
RSL1D1 Colocalizes with p53 and HDM2 in the nucleus of colorectal Cancer cells
Since RSL1D1, as a nucleolus-localized protein, is released to the nucleoplasm of H1299 non-small cell lung cancer cells upon nucleolar stress [20], an IF assay was performed to address whether this nucleolus-nucleoplasm translocation of RSL1D1 occurs in HCT116 CRC cells. We first assessed the applicability of the homemade anti-RSL1D1 monoclonal antibody to IFA. Compared with the negative controls, the antibody led to a weaker staining in RSL1D1-downregulated HCT116 p53+/+ cells and a stronger staining in RSL1D1-overexpressed cells (Supplementary Fig. S6), indicating that the homemade antibody is specific to RSL1D1. Unlike in H1299 cells, RSL1D1 was not limited to the nucleolus but distributed throughout the entire nucleus and colocalized with p53 or HDM2 in HCT116 p53+/+ cells under normal conditions (Fig. 5A and B). Upon Act-D-induced nucleolar stress [20], the intranuclear distribution and colocalization of RSL1D1, HDM2, and p53 remained unchanged (Fig. 5A and B), even though the levels of these three proteins changed (Fig. 5E). Moreover, downregulation of RSL1D1 also did not change the overall distribution of RSL1D1, HDM2, and p53 (Fig. 5C and D). Furthermore, to explore whether nucleolar stress affects the role of RSL1D1 in the regulation of p53 and HDM2, we determined the protein levels of p53 and HDM2 in RSL1D1-downregulated HCT116 p53+/+ cells untreated or treated with Act-D. The results showed that RSL1D1 knockdown reduced HDM2 expression and thereby upregulated p53 regardless of the Act-D treatment (Fig. 5E).
(Fig. 3 legend) RSL1D1-downregulated or -overexpressed HCT116 p53+/+ cells were treated with MG-132 (25 μM) for 6 h to inhibit the degradation of ubiquitinated p53. Monoclonal antibody against p53 (DO-1) was used for IP and western blot analyses of ubiquitinated and non-ubiquitinated p53. A The levels of ubiquitinated p53 proteins were evaluated in siRSL1D1- or siNC-transfected HCT116 p53+/+ cells. The levels of RSL1D1, p53, and HDM2 proteins in the input cell lysate were determined by western blot analysis and β-actin was used as a loading control. B The levels of ubiquitinated p53 proteins were evaluated in RSL1D1- or EGFP-overexpressed HCT116 p53+/+ cells. The levels of RSL1D1, p53, and HDM2 proteins in the input cell lysate were determined by western blot analysis and β-actin was used as a loading control. C The levels of ubiquitinated p53 proteins were evaluated in RSL1D1-overexpressed HCT116 p53+/+ cells transfected with siHDM2 or siNC. The protein levels of RSL1D1, p53, and HDM2 in the input cell lysate were determined by western blot analysis and β-actin was used as a loading control.
(Fig. 5 legend, in part) HDM2 (B, D). Scale bars: 5 μm. E The levels of RSL1D1, p53, and HDM2 proteins were measured by western blot analysis in HCT116 p53+/+ cells transfected with siRSL1D1 or siNC. The cells were treated or untreated with 5 nM Act-D for 24 h. β-actin was set as a loading control.
To further confirm the RSL1D1-p53 interaction, we constructed a panel of BiFC plasmids expressing the p53-FL, the RSL1D1-FL, or their truncated variants. Plasmid pairs were co-transfected into HCT116 cells. Fluorescent images showed that both p53-FL and p53-DBD interacted with the RSL1D1-FL, the RSL1D1-NT, and the RSL1D1-CT in vivo (Fig. 6G). In agreement with the GST-pulldown data (Fig. 6E and F), the relative fluorescence intensity of BiFC (Fig. 6G and H) also indicated the preference of the p53-FL and the p53-DBD to bind the RSL1D1-CT rather than the RSL1D1-NT, which has been identified as the binding site for HDM2 [20].
To explore whether RSL1D1, HDM2, and p53 form a ternary protein complex, we evaluated the colocalization of these three proteins. GST-pulldown analysis showed that RSL1D1 interacted simultaneously with HDM2 and p53 in vitro ( Fig. 7A and Supplementary Fig. S7). The result of co-IP also verified the RSL1D1-p53, RSL1D1-HDM2, and p53-HDM2 interactions in HCT116 cells ( Fig. 7B and C). Then, we transfected HCT116 p53+/+ cells with two BiFC plasmids expressing the RSL1D1-FL and p53-FL proteins, respectively, followed by an immunofluorescence assay against HDM2. The result revealed an obvious colocalization of RSL1D1, HDM2, and p53 (Fig. 7D). Together, these in vitro and in vivo data strongly suggest that RSL1D1, HDM2, and p53 form a transient ternary protein complex in HCT116 cells.
Next, to explore whether the p53-RSL1D1 interaction recruits p53 to HDM2 for ubiquitination, we performed a competitive ubiquitination assay using the RSL1D1-NT or the RSL1D1-CT as a competitive inhibitor. Compared with the EGFP control, overexpression of either RSL1D1-NT or RSL1D1-CT significantly increased the levels of p53 protein (Fig. 7E) by inhibition of p53 ubiquitination (Fig. 7F) in HCT116 p53+/+ cells, consistent with the increased level of p53 protein (Fig. 2B) and decreased p53 ubiquitination (Fig. 3A) in RSL1D1-downregulated cells. These data indicate that RSL1D1 recruits p53 to HDM2 for ubiquitination and that HDM2-mediated p53 ubiquitination can be alleviated by competitive occupation of the RSL1D1-binding site on p53 or HDM2 (Fig. 7G).
Collectively, RSL1D1 recruits p53 to HDM2 via protein-protein interactions to form a transient ternary protein complex, which enhances HDM2-mediated p53 ubiquitination.
RSL1D1 is a potential molecular target for anti-tumor therapy
To address whether RSL1D1 is a potential target in antitumor therapeutics, we evaluated the efficacy of the siRSL1D1 in treating HCT116 p53+/+ or HCT116 p53 −/− tumors in nude mice in a 15-day antitumor treatment. The intratumor levels of RSL1D1 protein were effectively downregulated by siRSL1D1 treatment in both p53+/+ and p53−/− xenografts (Fig. 8A), leading to a significant inhibition of tumor growth ( Fig. 8B and C). The siRSL1D1 group showed a reduction of approximately 90% in the mean volume of p53+/+ tumors (P < 0.01), compared with approximately 50% in that of p53−/− tumors (P < 0.05). Similar to the in vitro data (Fig. 1C), the presence of p53 greatly enhanced the in vivo tumor inhibitory effect of siRSL1D1 treatment. These data demonstrate that RSL1D1 knockdown inhibits tumor growth both in a p53-dependent and -independent manner and p53 contributed to a major part of the efficacy in treating HCT116 p53+/+ tumors ( Fig. 8B and C).
To further evaluate the antitumor efficacy of the siRSL1D1, a histopathological analysis was performed. Compared with the siNC controls, siRSL1D1-treated p53+/+ tumor tissues displayed cavitation and most tumor cells showed a hyperchromatic nucleus and condensed cytoplasm (Fig. 8D). These typical morphological features of cell apoptosis and necrosis, along with the decreased mean tumor volume ( Fig. 8B and C), suggested a potent tumor-suppressive effect induced by siRSL1D1 treatment. Moreover, the histological data also revealed a tumor-suppressive effect of the siRSL1D1 on p53−/− tumors, but with a lower efficacy (Fig. 8D).
Taken together, RSL1D1 is a potential therapeutic target for CRC and downregulation of RSL1D1 is a highly efficient therapeutic strategy against HCT116 p53+/+ tumors.
(Fig. 6 legend) A GST pull-down assays were performed to evaluate the interaction between full-length RSL1D1 (RSL1D1-FL) and p53 (p53-FL) in vitro. RSL1D1-FL and p53-FL were GST and His tagged, respectively. B, C Schematic diagrams showing p53 (B), RSL1D1 (C), and their truncated variants constructed in this study. D, E GST pull-down assays were performed to map the RSL1D1-binding domain on p53 (D) and the p53-binding domain on RSL1D1 (E). RSL1D1-FL and its truncated variants (RSL1D1-NT and RSL1D1-CT) were GST tagged, whereas p53-FL and its truncated variants were His tagged. F GST pull-down assays were carried out to evaluate the interaction between the RSL1D1-binding domain on p53 and the p53-binding domains on RSL1D1. G Bimolecular fluorescence complementation (BiFC) assay was performed to confirm the interaction between RSL1D1 and p53 in vivo. RSL1D1-FL, RSL1D1-NT, and RSL1D1-CT were cloned into pBiFC-mCherryN159, whereas p53-FL and p53-DBD were cloned into pBiFC-mCherryC160. The recombinant plasmid pairs were co-transfected into HCT116 p53+/+ cells. The in vivo interaction between two proteins fused with mCherryN159 and mCherryC160, respectively, was indicated by the red fluorescence in the cells and the nucleus was stained by Hoechst (blue). Co-transfection with empty plasmids pBiFC-mCherryN159 and pBiFC-mCherryC160 was set as a negative control, whereas co-transfection with plasmids pBiFC-mCherryN159-SV40gp6 and pBiFC-mCherryC160-p53 was used as a positive control. Scale bars: 50 μm. H The relative fluorescence intensity in different BiFC groups. To facilitate comparison, the mean value of fluorescence intensity in the cells co-transfected with pBiFC-mCherryN159-SV40gp6 and pBiFC-mCherryC160-p53 was set to 1. Data are represented as mean ± SD. Student's t test. *P < 0.05 and **P < 0.01 denote significant difference.
Fig. 7 RSL1D1 Recruits p53 to HDM2 to Enhance p53 Ubiquitination. A GST pull-down assays were performed to evaluate the interaction between RSL1D1, p53, and HDM2 in vitro. RSL1D1 was GST-tagged, whereas p53 and HDM2 were His-tagged. B, C Co-IP assay was performed to evaluate the interaction between RSL1D1, p53, and HDM2 in vivo. Lentivirus-transduced HCT116 p53−/− cells stably expressing V5-p53 (B) or HCT116 p53+/+ cells (C) were harvested and the lysate was immunoprecipitated with anti-V5 (B) or anti-HDM2 (C) antibody. Rabbit IgG was used as a negative control. Input represents 2.5% (B) or 5% (C) of the lysate utilized for IP. D A combination of BiFC and IF assays was performed to evaluate the intracellular co-localization of RSL1D1, p53, and HDM2. After co-transfection with pBiFC-mCherryN159-RSL1D1 and pBiFC-mCherryC160-p53, HCT116 cells were incubated with anti-HDM2 antibody and then FITC-labeled anti-IgG. Red fluorescence indicates an RSL1D1-p53 complex. Green fluorescence indicates HDM2 protein. Yellow color indicates a ternary protein complex comprising RSL1D1, p53, and HDM2. The nuclei were stained with Hoechst (blue). Scale bars: 50 μm. E The protein levels of p53, RSL1D1, and HDM2 were determined in HCT116 p53+/+ cells overexpressing truncated RSL1D1 variants or EGFP. The cells were transduced with lentiviruses inducibly expressing genes of interest and treated with 1 μg/mL doxycycline to induce gene expression for 24 h. The protein levels of RSL1D1-NT, RSL1D1-CT, p53, RSL1D1 and HDM2 were determined by western blot analysis. β-actin was used as a loading control. F The levels of ubiquitinated p53 proteins were evaluated in HCT116 p53+/+ cells overexpressing truncated RSL1D1 variants or EGFP. The cells were transduced with lentiviruses inducibly expressing genes of interest and treated with 1 μg/mL doxycycline to induce gene expression for 24 h, followed by an additional treatment with 25 μM MG-132 for 6 h. DO-1 was used for IP and western blot analyses of the ubiquitinated or non-ubiquitinated p53 protein. The protein levels of RSL1D1-NT, RSL1D1-CT, p53, RSL1D1, and HDM2 in the input lysate were determined by western blot analysis and β-actin was used as a loading control. G A model showing the interaction between RSL1D1, p53, and HDM2, which can be competitively destroyed by overexpression of either RSL1D1-NT or RSL1D1-CT.
Discussion
RSL1D1 is an important nucleolar protein that participates in multiple biological processes [22], such as cellular senescence [19,64], cell migration and proliferation [16,17], and cell apoptosis [15]. It has recently been reported that RSL1D1 translocates to the nucleoplasm in response to nucleolar stress, which contributes to the stabilization of p53 by inhibiting HDM2-mediated ubiquitination through direct interaction with the RING finger domain of HDM2 in U-2 OS and H1299 cells [20]. In contrast to this model, we discovered that RSL1D1 negatively regulates p53 by upregulating HDM2 and forming a ternary RSL1D1/HDM2/p53 protein complex to promote p53 ubiquitination in CRC cells. The ensuing inactivation of p53 target genes, such as p21 and PUMA, attenuates cell cycle arrest and apoptosis, thereby promoting cell proliferation and survival (Fig. 9). On one hand, RSL1D1 directly downregulates the protein level of p53 by recruiting it to HDM2 for ubiquitination (Figs. 7 and 9). On the other hand, RSL1D1 indirectly downregulates the protein level of p53 by stabilizing HDM2 mRNA (Figs. 2, 4, 9, and Supplementary Fig. S3). The contrary regulatory mechanism is probably attributed to the different subnuclear localization of RSL1D1 in different types of cancer cells. Unlike in some cancer cell types [20], RSL1D1 is not confined to the nucleolus but distributes throughout the entire nucleus in CRC cells under normal conditions (Fig. 5), in which RSL1D1 performs more non-nucleolar functions, such as promoting tumor progression by negative regulation of p53 in this study.
Our results suggest that RSL1D1 is involved in regulating the mRNA stability of HDM2. As a nucleolar protein containing the ribosomal L1p/L10e domain in the N-terminus, RSL1D1 reportedly plays important ribosome-associated functions and participates in the regulation of the mRNA stability of NOLC1 and PTEN [19,21]. Similarly, we found that RSL1D1 interacted with and stabilized HDM2 mRNA (Fig. 4), thereby leading to relatively high levels of HDM2 mRNA and protein in HCT116 cells in a p53-independent manner (Fig. 2).
Beyond ribosome-associated functions, our results also suggest a crucial non-ribosomal function of RSL1D1 as a novel p53-interacting protein. In this study, our data demonstrate that the p53-DBD directly interacts with both RSL1D1-CT and RSL1D1-NT and the RSL1D1-CT displays a stronger binding capability than the RSL1D1-NT (Fig. 6). The RSL1D1-NT also reportedly interacts with the RING finger domain in the C-terminus of HDM2 (aa 349-489) [20], whereas the N-terminus of HDM2 (aa 1-125) interacts with the 15-29 residues of p53 [62]. The complicated regulation and interaction between RSL1D1, p53, and HDM2 has significant biological importance. In HCT116 p53+/+ cells, RSL1D1 recruits p53 to HDM2 via direct interaction to form a transient ternary protein complex (Fig. 7). The colocalization of RSL1D1, p53, and HDM2 enhances HDM2-mediated p53 ubiquitination, leading to a low level of p53 in CRC cells.
Our results also suggest a novel function of nucleolar proteins in the regulation of HDM2-mediated p53 ubiquitination and degradation. In the current opinion, nucleolar proteins play an important role in stabilizing p53. These proteins mainly include nucleolin [65], nucleostemin [66], NPM [67], ARF [68], and several protein components of large and small ribosomal subunits, such as RPL5 [69], RPL6 [11], RPL11 [70], RPL23 [71], RPL26 [72], RPS7 [73], and RPS14 [74]. In general, these nucleolar proteins are localized in the nucleolus under normal conditions. Upon nucleolar stress, they can be inducibly released to the nucleoplasm, where they interact with HDM2 and block HDM2-mediated p53 ubiquitination. Conversely, the findings in the current study demonstrate that RSL1D1 functions as an oncoprotein rather than a typical nucleolar protein in CRC cells. It is not only localized in the nucleolus but also in the entire nucleus under normal conditions (Fig. 5). This allows RSL1D1 to colocalize with p53 and HDM2 in a nucleolar stress-independent manner, which facilitates the formation of the ternary RSL1D1/HDM2/p53 complex, enhances HDM2-mediated p53 ubiquitination, and thereby regulates p53 negatively rather than positively (Figs. 5 and 7). In addition, our findings also provide evidences linking the highly expressed RSL1D1 (Supplementary Fig. S1) and p53 inactivation [75] in CRC cases. Our results help explain the relatively low level of p53 protein in p53+/+ human CRC cells [76,77]. Generally, RSL1D1 normally maintains a relatively high expression level ( Supplementary Fig. S1) and distributes throughout the nucleus (Fig. 5) in CRC cells. It negatively regulates p53 at the post-translational level by augmenting the expression and function of HDM2 (Figs. 2 and 3). As a result, a high level of RSL1D1 protein leads to a very low level of p53 protein in p53+/+ CRC cells, which facilitates cell proliferation and survival (Fig. 1C). When the enhanced HDM2 function is inhibited by downregulating RSL1D1 or introducing HDM2-or p53-binding domains of RSL1D1 into cells to destroy the ternary protein complex, p53 ubiquitination decreases greatly ( Fig. 3A and 7F). The decreased ubiquitination results in an increased amount of p53 protein ( Fig. 2B and 7E), which induces G 1 arrest and apoptosis ( Fig. 1D and E).
Our results also suggest a potential target for drug development against colorectal neoplasms retaining wildtype p53. The tumor suppressor p53 is important in preventing cancer development [78]. HDM2, as a primary cellular inhibitor of p53 [53,62,79], binds and ubiquitinates p53 protein for nuclear export and proteasomal degradation [34,80], thereby inhibiting p53 activity. Therefore, an important antitumor therapeutic strategy is to block the HDM2-p53 interaction to increase the amount of p53 protein. Over the past nearly two decades, scientists have made intense efforts to design and develop a number of structurally distinct, non-peptide, and highly potent small-molecule inhibitors of the HDM2-p53 protein-protein interaction or the HDM2 inhibitors [79], such as Idasanutlin [81], Nutlin-3a [82], RG7112 [83], MI-77301 [84], MI-888 [85], AMG-232 [86], RG7388 [87], NVP-CGM097 [88], and MK-8242 [89]. In the current study, RSL1D1 binds to HDM2 and p53 and upregulates HDM2 in CRC cells, implicating RSL1D1 as a potential antitumor target. In mouse xenograft models, siRSL1D1 treatment produced an excellent therapeutic efficacy in suppressing the growth of HCT116 p53+/+ tumors (Fig. 8), demonstrating that downregulation of RSL1D1 is a highly efficient therapeutic strategy for treating CRC. In addition to gene silencing, blocking p53-or HDM2-RSL1D1 interaction is another potential treatment strategy for treating p53+/+ colorectal tumors, since overexpression of the RSL1D1-NT or the RSL1D1-CT prevents p53 ubiquitination by competitively inhibiting the formation of the RSL1D1/ HDM2/p53 protein complex (Fig. 7). Therefore, an important research avenue is to screen chemicals or biological RSL1D1 inhibitors that downregulate RSL1D1 or inhibit the p53-or HDM2-RSL1D1 protein-protein interaction, which may lead to an alternative to HDM2targeted drugs.
Conclusion
RSL1D1 is distributed throughout the entire nucleus of CRC cells and negatively regulates nuclear p53. Crucially, RSL1D1 stabilizes HDM2 mRNA through protein-RNA interaction and also interacts with and recruits p53 to HDM2 to form a RSL1D1/HDM2/p53 protein complex, which enhances p53 ubiquitination and ultimately promotes the proliferation and survival of CRC cells. Both downregulation of RSL1D1 and destruction of the RSL1D1/HDM2/p53 complex can markedly increase the cellular amount of p53 protein. Furthermore, RSL1D1 downregulation induces G1/S arrest and apoptosis in a p53-dependent manner by upregulating p21 and PUMA, thus reducing the growth of p53+/+ CRC cells in vitro and in vivo. Our findings demonstrate that RSL1D1 is an oncoprotein in CRC and a potential molecular target for anticancer drug development.
Additional file 4: Supplementary Fig. S3. RSL1D1 Regulates the HDM2-p53 Signaling Axis in HCT-8 Colorectal Cancer Cells. Cells were transfected with siRNA and harvested 48 h post-transfection. A The mRNA levels of RSL1D1, p53, HDM2, p21, and PUMA were determined by qRT-PCR in siRSL1D1-or siNC-transfected cells. GAPDH was used as an internal control to normalize the values. The normalized values of siNC-treated cells were set to 1. Data are represented as mean ± SD. Student's t test. *P < 0.05 and **P < 0.01 denote significant difference. B The levels of RSL1D1, p53, HDM2, p21, and PUMA proteins were evaluated by western blot analysis in siRSL1D1-or siNC-transfected cells. β-actin was set as a loading control.
Additional file 5: Supplementary Fig. S4. Downregulation of RSL1D1 Does not Affect the Levels of HDM4 Protein in HCT116 Cells. The levels of RSL1D1 and HDM4 protein were determined by western blot analysis in HCT116 p53+/+ and HCT116 p53−/− cells transfected with siRSL1D1 or siNC. β-actin was used as a loading control.
Additional file 6: Supplementary Fig. S5. Downregulation of HDM2 Does Not Remarkably Change the Expression of RSL1D1 in HCT116 p53+/+ Cells. Cells were transfected with siRNA and harvested for determining the mRNA and protein levels of indicated genes 48 h post-transfection. A The mRNA levels of HDM2, RSL1D1, p53, p21, and PUMA were determined by qRT-PCR in siHDM2-or siNC-transfected cells. GAPDH was used as an internal control to normalize the values. The normalized values of siNC-treated cells were set to 1. Data are represented as mean ± SD. Student's t test. *P < 0.05 and **P < 0.01 denote significant difference. B The levels of HDM2, RSL1D1, p53, p21, and PUMA proteins were evaluated by western blot analysis in siHDM2-or siNC-transfected cells. β-actin was set as a loading control. Additional file 7: Supplementary Fig. S6 Homemade Antibody against RSL1D1 Is Suitable for Immunofluorescence Assay. A, B IF assay was performed to detect RSL1D1 (red) in siRSL1D1-transfected (A) or RSL1D1-overexpressed (B) HCT116 p53+/+ cells. The cells transfected with siNC (A) or overexpressing EGFP (B) were used as a negative control. Homemade anti-RSL1D1 monoclonal antibody was used as the primary antibody. The nuclei were stained with Hoechst (blue). Scale bars: 5 μm.
Additional file 8: Supplementary Fig. S7. Recombinant Proteins Were Purified by Affinity Chromatography and Subjected to SDS-PAGE Analysis to Assess the Purity. A Nucleotide sequences encoding full-length p53 and its truncated variants (aa 1-363, aa 1-292, aa 1-92, aa 293-393, and aa 93-292) were cloned into a prokaryotic expression vector pET-32a(+), respectively. Recombinant plasmids were transformed into E. coli BL21(DE3) and recombinant proteins were purified by affinity chromatography. The purified His-tagged proteins were subjected to SDS-PAGE analysis. B Nucleotide sequences encoding full-length RSL1D1 and its truncated variants (aa 1-281 and aa 282-485) were cloned into a prokaryotic expression vector pGEX-6P-1. Recombinant plasmids were transformed into E. coli BL21(DE3) and recombinant proteins were purified by affinity chromatography. The purified GST-tagged proteins were subjected to SDS-PAGE analysis. C Nucleotide sequence encoding HDM2 was cloned into a prokaryotic expression vector pET-32a-SUMO. Recombinant plasmids were transformed into E. coli BL21(DE3) and recombinant proteins were purified by affinity chromatography. The purified Histagged protein was subjected to SDS-PAGE analysis. | 2021-08-07T13:53:42.743Z | 2021-08-06T00:00:00.000 | {
"year": 2021,
"sha1": "970c621114f05e0e72a9469dea3a0145aa2c4340",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-021-02057-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8a47b4c25e9e07cd04a99536ed0c11291c0b1f5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
99826252 | pes2o/s2orc | v3-fos-license | Structural changes in leaves and roots are anatomical markers of aluminum sensitivity in sunflower
INTRODUCTION
Acid soils account for approximately 50 % of arable land worldwide. In this environment, aluminum (Al) is available in a phytotoxic form (Al3+) and is considered the main factor to reduce plant growth (Panda et al. 2009). This restriction results from several physiological and biochemical changes in plants (Singh et al. 2015).
Roots are the plant part most susceptible to Al toxicity and its effects on shoots are usually associated with root damage (Panda et al. 2009).
Aluminum causes extensive root injury, leading to poor ion and water uptake (Barceló & Poschenrieder 2002). Some studies suggest that Al toxicity induces anatomical changes in plant tissues, reducing cell elongation and division (Gupta et al. 2013), due to changes in the cell wall characteristics. Thus, damage to the epidermis, cortex and vessels has been observed in several plant species (Čiamporová 2002).
Given that root hydraulic conductivity is dependent on tissue radial conductivity (North & Nobel 1996),
ABSTRACT: Aluminum (Al) toxicity in plants evidences the importance of genotype evaluation to the identification of tolerance markers. This study aimed at evaluating the effects of aluminum stress on the relative water content, membrane damage and anatomical changes in Al-tolerant and Al-sensitive sunflower cultivars. Sunflower plants [Catissol (Al-tolerant) and IAC-Uruguai (Al-sensitive)] were grown in nutrient solution (control) or nutrient solution containing 0.15 mM of AlCl3 (Al-stress treatment), in a greenhouse. The experimental design was completely randomized, in a factorial arrangement consisting of four harvest times x two sunflower cultivars x two Al levels, with four replications. The results showed that Al negatively affected the absolute integrity percentage and relative water content only for the IAC-Uruguay cultivar. These results in the stressed leaves of the Al-sensitive cultivar may be due to damage in the xylem structure. In addition, the increase in leaf blade thickness and parenchyma layers, as well as lignification of root tissues, are important traits of IAC-Uruguay plants and may be used as anatomical markers of Al sensitivity in sunflower. KEYWORDS: Helianthus annuus L.; relative water content; aluminum stress.
It has been observed that metal stress induces the lignification of root tissues (Moura et al. 2010). This anatomical change is a barrier against toxic metals, decreasing the influx into the central cylinder (Van de Mortel et al. 2008, Shu et al. 2012). On the other hand, increased root lignin levels may reduce the water uptake (Emam & Bijanzadeh 2012), affecting cell elongation (Sasaki et al. 1996). Therefore, it has been hypothesized that the lignin content in roots is associated with Al stress tolerance in several plant species (Moura et al. 2010).
Aluminum also affects the plasma membrane by interacting with negative charge groups (phosphates) on the membrane surface. It has been reported that the Al3+ affinity to phosphatidylcholine is 560 times higher than Ca2+ (Akeson et al. 1989). Thus, Al directly affects the plasma membrane structure by displacing Ca2+ through binding to membrane phospholipids, increasing the membrane surface potential. Aluminum stress may also indirectly damage membrane polyunsaturated fatty acids by increasing the production of reactive oxygen species (Singh et al. 2015).
Sunflower is considered moderately tolerant to water and salt stress, but as an Al-sensitive plant, it cannot tolerate exchangeable Al levels greater than 5 % (Blamey et al. 1987). However, Al tolerance varies among species and cultivars (Jesus & Azevedo Neto 2013). As such, this study aimed at assessing the effects of Al stress on relative water content, membrane damage and anatomical changes in roots and leaves of Al-tolerant and Al-sensitive sunflower cultivars. These characteristics may be used as Al sensitivity markers in sunflower plants and in screenings for plant breeding programs.
MATERIAL AND METHODS
The experiment was carried out under greenhouse conditions, from July to August 2015, at the experimental unit of the Universidade Federal do Recôncavo da Bahia, Bahia State, Brazil.
Seeds of two sunflower (Helianthus annuus L.) cultivars [Catissol (Al-tolerant) and IAC-Uruguai (Al-sensitive)] were planted in trays containing vermiculite and irrigated daily with distilled water. After emergence, the seedlings were transferred to trays containing aerated, full-strength Clark's nutrient solution (Clark 1975). Ten days later, they were transplanted to 12-L plastic pots containing the same nutrient solution (control treatment) or nutrient solution with 0.15 mM of AlCl3 (Al-stress treatment).
The experimental design was completely randomized, in a factorial arrangement consisting of four harvest times x two sunflower cultivars x two Al levels, with four replicates. Mean values for temperature, relative humidity and photosynthetic active radiation (at noon) were 27 ºC, 65 % and 1,200 µmol m-2 s-1, respectively. Nutrient solutions were replaced every week and pH was adjusted daily to 4.5 ± 0.2 throughout the experiment. Plants from both treatments were grown under the same conditions for 15 days (end of the experimental period).
Absolute integrity percentage, estimated based on electrolyte leakage (Vásquez-Tello et al. 1990), and relative water content were measured in the youngest fully expanded leaf at 1, 5, 10 and 15 days after the AlCl3 application. Leaf discs were placed in closed tubes containing 10 mL of deionized water and incubated in the dark for 24 h, at 8 ºC. The initial electrical conductivity of the solution (C1) was measured after equilibration at 25 ºC. Samples were then boiled for 1 h at 100 ºC and the final electrical conductivity of heat-killed tissues (C2) was measured after equilibration at 25 ºC. The absolute integrity percentage (AIP) was calculated as follows: AIP (%) = 100 − (C1 × 100/C2).
Leaf relative water content (RWC) was determined as described by Barrs & Weatherley (1962), based on the fresh, turgid and dry weights of leaf discs, using the following equation: RWC = [(FW − DW)/(TW − DW)] × 100, where FW, TW and DW are respectively the fresh, turgid and dry weights of leaf discs.
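Both indices reduce to simple arithmetic on the measured quantities. The short Python sketch below is only an illustrative helper (the function names, variable names and example values are ours, not part of the original study) showing how AIP and RWC are computed from the raw measurements.

```python
def absolute_integrity_percentage(c1, c2):
    """AIP (%) = 100 - (C1 * 100 / C2), where C1 and C2 are the electrical
    conductivities measured before and after boiling the leaf discs."""
    return 100.0 - (c1 * 100.0 / c2)

def relative_water_content(fw, tw, dw):
    """RWC (%) = (FW - DW) / (TW - DW) * 100, from fresh, turgid and dry weights."""
    return (fw - dw) / (tw - dw) * 100.0

# Example with hypothetical measurements
print(absolute_integrity_percentage(c1=12.0, c2=85.0))        # about 85.9 %
print(relative_water_content(fw=0.215, tw=0.240, dw=0.045))   # about 87.2 %
```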
Histological assessments were performed in developed roots and leaves of both sunflower cultivars. Samples were fixed in 50 % FAA (37 % formaldehyde, glacial acetic acid and 50 % ethanol) for 24 h (Johansen 1940). Plant material was exposed to vacuum desiccation during the fixation process and then transferred to 70 % ethyl alcohol.
Photomicrographs were taken using a microscope (Olympus BX51) equipped with a digital camera (Olympus A330) and edited using the Adobe Photoshop 9.0 software. The figure scales were obtained by photographing a millimeter scale under the same optical conditions.
Averages ± standard deviation for each treatment (cultivar x Al treatment x time point) were depicted in graphs by using the Sigma Plot ® 12.0 software.
RESULTS AND DISCUSSION
Plant tolerance of toxic elements may be associated with differences in membrane structure and function (Gupta et al. 2013). Accordingly, only small differences at 5 and 10 days were observed between the absolute integrity percentage of the control and Al-stressed plants of the Catissol cultivar (Figure 1a). On the other hand, stressed IAC-Uruguai plants exhibited a decline in absolute integrity percentage at 10 (26 %) and 15 days (39 %) (Figure 1b). Cell membranes are the first elements affected by plant exposure to different stresses. Aluminum acts directly by displacing Ca2+ ions, which act as bridges between membrane phospholipids, leading to membrane disruption (Akeson et al. 1989), which may cause reduced water permeability (Barceló & Poschenrieder 2002). Additionally, Al induces rapid root inhibition, resulting in decreased ion and water uptake (Panda et al. 2009).
In this respect, similarly to membrane integrity, only the IAC-Uruguai cultivar showed an Al-induced relative water content reduction of 11 % at 10 and 15 days (Figures 1c and 1d). Previous studies also report that Al stress affects the roots of IAC-Uruguai plants more significantly than those of the Catissol cultivar (Jesus & Azevedo Neto 2013). Maintaining the membrane integrity is considered an important component of plant tolerance to Al stress (Gomes et al. 2011). Thus, it is likely that reduced root growth with decreased membrane integrity and water status may at least partially explain the greater sensitivity of IAC-Uruguai to Al stress, when compared to Catissol plants.
Figure 2 shows the anatomical leaf structures of both Catissol (Figures 2a and 2c) and IAC-Uruguai (Figures 2b and 2d) cultivars, under controlled conditions. In the absence of Al, the two cultivars studied showed very similar histological features. In the midrib, the cortical region is composed of four layers of collenchyma, with irregular distribution, and parenchymal cells of varying sizes (Figures 2a and 2b). In both varieties, the vascular bundle of the midrib is collateral, with one central bundle and two accessory vascular bundles. The leaf blade has a uniseriate epidermis, with a thin outer periclinal wall and stomata on both sides (Figures 2c and 2d). The mesophyll is dorsiventral, with palisade parenchyma consisting of 2-4 layers of elongated cells, which occupy half of the leaf mesophyll, and spongy parenchyma with up to 8 layers of cells. The root has a uniseriate epidermis, a cortex composed of several layers of irregularly-sized parenchyma cells, and exarch organization of the stele (Figures 2e and 2f).
Metal stress usually prompts changes in the leaf anatomy of several plant species (Čiamporová 2002). Accordingly, in a transverse section of the leaf, the differences resulting from Al tolerance and Al sensitivity were evident (Figure 3). The difference in midrib size is also clearly shown in Figures 3a and 3b, indicating a reduced growth in IAC-Uruguai (Al-sensitive). In IAC-Uruguai, the main vascular bundle is smaller in diameter, with fewer metaxylem elements (Figure 3d), when compared to Catissol plants (Figure 3c). The Al-sensitive cultivar also shows a disruption of the vessel elements and a decline in the size of all sieved elements (Figure 3d). This is most evident when comparing controls to stressed plants (Figures 2b and 3d).
Exposure to toxic metals may alter the structure of vascular bundles in leaves (Marchiol et al. 1996, Mukhtar et al. 2013). Structural changes in the vascular bundle may also alter the water status of leaves (Gowayed & Almaghrabi 2013), suggesting that decreased relative water content in stressed leaves of the Al-sensitive cultivar may be the result of xylem damage. In addition, a slight increase in leaf blade thickness was observed in IAC-Uruguai plants, promoted by increased cell size and cell layers in the palisade and spongy parenchyma (Figure 3f), when compared to Catissol (Figure 3e). Leaf blade thickening has been reported under metal stress, due to an increase in intercellular spaces and/or in the number of parenchymatic cells (Čiamporová 2002).
These results indicate that changes in leaf anatomy may be used as a marker of Al sensitivity in sunflower.
It has been suggested that the maintenance of the root integrity of peripheral tissue is associated with Al-stress tolerance (Alvarez et al. 2012). Accordingly, the results showed that the root epidermis and cortex were significantly affected by Al stress in the Al-sensitive plants (Figures 4b, 4d and 4f). Both cultivars showed partial destruction of the epidermis (Figures 4a and 4b). However, damage was more conspicuous in IAC-Uruguai (Figure 4b). This effect may be due to the fact that roots are the first organ to be exposed to Al (Čiamporová 2002).
In IAC-Uruguai, the harmful effects of Al reach the inner layers of the cortex, with cell disintegration and the presence of enlarged cells. Increased cell size may be considered an additional factor exerting pressure on the epidermal cell layer, leading to rupture (Čiamporová 2002).
Significant differences were also observed between the Catissol and IAC-Uruguai cultivars in the vascular cylinder. In Catissol plants, the endodermis and pericycle are easily observed (Figure 4e), but are not evident in IAC-Uruguai (Figure 4f). Cells undergoing cell division may be seen in the pericycle region, indicating the formation of scar tissue (Figure 4f). In Catissol, the vascular cambium exhibits normal activity with the formation of secondary driving elements, in contrast with IAC-Uruguai, which shows no continuous cambium and complete disorientation of sieve tube elements (Figure 4f). In contrast to Catissol plants (Figures 4a, 4c and 4e), the Al-sensitive cultivar showed cortical parenchyma cells near the vascular cylinder with irregular arrangement and formation of intercellular spaces. Some cortical parenchyma cells, particularly those located close to the endodermis, exhibited safranin staining in the cell wall (Figures 4b, 4d and 4f). The use of safranin colorants enables the visualization of lignified cell walls in tissues, such as vessel elements (stained red) (Srebotnik & Messner 1994, Doğu & Grabner 2010). The Al-sensitive IAC-Uruguai plants showed an increase in the number of root cells with highly lignified cell walls. In some cases, lignification may be beneficial, forming a physical barrier against toxic metals (Van de Mortel et al. 2008). Alternatively, in a number of plant species, increased lignification occurs in response to mechanical damage in tissues, such as rupture of the epidermal cell layer (Moura et al. 2010).
The Al content inside root cells was not measured. As such, we speculate that the lignin deposition in the cell walls of the Al-sensitive IAC-Uruguai cultivar was the result of the Al-induced mechanical damage in root cells. Lignification in the root elongation zone has also been reported in Al-stressed wheat plants (Sasaki et al. 1996). The authors demonstrated that lignification was dose-dependent and directly related to reduced root growth. Similarly, Wu et al. (2014) reported that lignification may be considered an indicator of sensitivity to Al stress. Our findings, showing that lignin deposition in root cell walls was observed only in the Al-sensitive cultivar, support this hypothesis and suggest that Al-induced lignification may be an important trait of Al sensitivity in sunflower.
CONCLUSIONS
1. Aluminum stress alters the leaf structure and water status (absolute integrity percentage and relative water content) of sunflower plants;
2. The increase in leaf blade thickness and parenchyma layers observed in Al-stressed IAC-Uruguay plants indicates that these changes in leaf anatomy may be used as an anatomical marker of Al sensitivity in sunflower leaves;
3. Aluminum-induced root lignification may also be an important trait of Al sensitivity in sunflower roots.
Figure 1. Absolute integrity percentage (AIP; a and b) and relative water content (RWC; c and d) in leaves of Al-tolerant (Catissol; a and c) and Al-sensitive (IAC-Uruguai; b and d) sunflower cultivars grown under controlled (○) or Al-stress (•) conditions. Plants were harvested at 1, 5, 10 and 15 days after the addition of Al to the nutrient solution. Values represent the average ± standard deviation.
Figure 3. Cross sections of leaves of Al-tolerant (Catissol; a, c and e) and Al-sensitive (IAC-Uruguai; b, d and f) sunflower cultivars grown under stress conditions (0.15 mM of Al). a-b: differences in midrib thickness between the two cultivars; c-d: normal vascular bundles in Catissol and apparent disarray of the vessel elements in IAC-Uruguai; e-f: increase in intercellular spaces may be observed in IAC-Uruguai; VB: vascular bundle; Mt: metaxylem; PP: palisade parenchyma; LP: lacunose parenchyma. Scale: 170 µm (a-b); 60 µm (c-f). | 2019-04-08T13:11:35.461Z | 2016-11-14T00:00:00.000 | {
"year": 2016,
"sha1": "88f953870bf181462686291f78fb53e67bb3160b",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/pat/v46n4/1983-4063-pat-46-04-0383.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "88f953870bf181462686291f78fb53e67bb3160b",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
18906709 | pes2o/s2orc | v3-fos-license | 22 Evolutionary Algorithms in Decision Tree Induction
Introduction
One of the biggest problem that many data analysis techniques have to deal with nowadays is Combinatorial Optimization that, in the past, has led many methods to be taken apart. Actually, the (still not enough!) higher computing power available makes it possible to apply such techniques within certain bounds. Since other research fields like Artificial Intelligence have been (and still are) dealing with such problems, their contribute to statistics has been very significant. This chapter tries to cast the Combinatorial Optimization methods into the Artificial Intelligence framework, particularly with respect Decision Tree Induction, which is considered a powerful instrument for the knowledge extraction and the decision making support. When the exhaustive enumeration and evaluation of all the possible candidate solution to a Tree-based Induction problem is not computationally affordable, the use of Nature Inspired Optimization Algorithms, which have been proven to be powerful instruments for attacking many combinatorial optimization problems, can be of great help. In this respect, the attention is focused on three main problems involving Decision Tree Induction by mainly focusing the attention on the Classification and Regression Tree-CART (Breiman et al., 1984) algorithm. First, the problem of splitting complex predictors such a multi-attribute ones is faced through the use of Genetic Algorithms. In addition, the possibility of growing "optimal" exploratory trees is also investigated by making use of Ant Colony Optimization (ACO) algorithm. Finally, the derivation of a subset of decision trees for modelling multi-attribute response on the basis of a data-driven heuristic is also described. The proposed approaches might be useful for knowledge extraction from large databases as well as for data mining applications. The solution they offer for complicated data modelling and data analysis problems might be considered for a possible implementation in a Decision Support System (DSS). The remainder of the chapter is as follows. Section 2 describes the main features and the recent developments of Decision Tree Induction. An overview of Combinatorial Optimization with a particular focus on Genetic Algorithms and Ant Colony Optimization is presented in section 3. The use of these two algorithms within the Decision Tree Induction Framework is described in section 4, together with the description of the algorithm for modelling multi-attribute response. Section 5 summarizes the results of the proposed 445 test condition depending on a splitting method is applied to partition the data into more homogeneous subgroups at each step of the greedy algorithm. Splitting methods differ with respect to the type of splitting predictor: for nominal splitting predictors the test condition is expressed as a question about one or more of its attributes, whose outcomes are "Yes"/"No". Grouping of splitting predictor attributes is required for algorithms using 2-way splits. For ordinal or continuous splitting predictors the test condition is expressed on the basis of a threshold value υ such as (x i ≤ υ?) or (x i > υ?). By considering all the possible split points υ, the best one υ* partitioning the instances into homogeneous subgroups is selected. In the classification problem, the sample population consists of N observations deriving from C response classes. 
A decision tree (or classifier) will break these observations into k terminal groups, and to each of these a predicted class (being one of the possible attributes of the response variable) is assigned. In actual applications, most parameters are estimated from the data. In fact, denoting with t some node of the tree (t represents both a set of individuals in the sample data and, via the tree that produced it, a classification rule for future data), from the binary tree it is possible to estimate P(t) and P(i|t) for future observations as follows: where π_i is the prior probability of each class i (i ∈ 1,2,…,C), τ(x) is the true class of an observation x (x is the vector of predictor variables), n_i and n_t are the number of observations in the sample that are respectively in class i and in node t, and n_it is the number of observations in the sample that are in class i and node t. In addition, by denoting with R the risk of misclassification, the risk of t (denoted with R(t)) and the risk of a model (or tree) T (denoted with R(T)) are measured as follows: where L(i,j) is the loss matrix for incorrectly classifying an i as a j (with L(i,i)=0), τ(t) is the class assigned to t once t is a terminal node, τ(t) is chosen to minimize R(t), and t_j are the terminal nodes of the tree T. If L(i,j)=1 for all i≠j, and the prior probabilities π are set equal to the observed class frequencies in the sample, then P(i|t)=n_it/n_t and R(T) is the proportion of misclassified observations. When splitting a node t into t_L and t_R (left and right sons), the following relationship holds: P(t_L)R(t_L) + P(t_R)R(t_R) ≤ P(t)R(t). An obvious way to build a tree is to choose the split maximizing ΔR, i.e., the decrease in risk. To this aim, several measures of impurity (or diversity) of a node are used. Denoting with f some impurity function, the local impurity of a node t is defined as:
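The display equations referenced in the paragraph above were lost in extraction. The block below is a hedged reconstruction, following the standard CART formulation implied by the surrounding definitions; it is not a verbatim copy of the original chapter. The last expression is the node impurity that the following "where p_it ..." clause refers to.

```latex
P(t) \approx \sum_{i=1}^{C} \pi_i \,\frac{n_{it}}{n_i},
\qquad
P(i \mid t) \approx \frac{\pi_i \, n_{it}/n_i}{\sum_{j=1}^{C} \pi_j \, n_{jt}/n_j},
\qquad
R(t) = \sum_{i=1}^{C} P(i \mid t)\, L\bigl(i, \tau(t)\bigr),
\qquad
R(T) = \sum_{j=1}^{k} P(t_j)\, R(t_j),
\qquad
\varepsilon(t) = \sum_{i=1}^{C} f(p_{it}).
```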
where p_it is the proportion of those in t that belong to class i for future samples. Since ε(t)=0 when t is pure, f must be concave with f(0)=f(1)=0. Two candidates for f are the information index f(p) = −p log(p) and the Gini index f(p) = p(1−p), which differ only slightly for the two-class problem, where they nearly always choose the same split point. Once f has been chosen, the split maximizing the impurity reduction (the criterion referred to elsewhere as (6)) is selected. Data partitioning proceeds recursively until a stopping rule is satisfied: this usually happens when the number of observations in a node is lower than a previously-specified minimum number necessary for splitting, as well as when all the observations in a node belong to the same class or have the same response value.
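The display equation for the goodness-of-split criterion, equation (6) in the original chapter, also did not survive extraction. A plausible reconstruction, consistent with the node-weighted risks used above, is:

```latex
\Delta\varepsilon(s, t) \;=\; P(t)\,\varepsilon(t) \;-\; P(t_L)\,\varepsilon(t_L) \;-\; P(t_R)\,\varepsilon(t_R),
```

where the candidate split s divides node t into t_L and t_R, and the chosen split s* is the one maximizing Δε(s, t).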
FAST splitting algorithm
The goodness of split criterion based on (6) expresses in different way some equivalent criteria which are present in most of the tree-growing procedures implemented in specialized software; such as, for instance, CART (Breiman et al., 1984), ID3 and C4.5 (Quinlan, 1993). In many situations the computational time required by a recursive partitioning algorithm is an important issue that can not be neglected. In this respect, a fast algorithm is required to speed up the procedure. In view of that, it is worth considering a two-stage splitting criterion which takes into account of the global role played by a splitting predictor in the partitioning step. A global impurity reduction factor of any predictor x i is defined as: where ε y|g (t) is the impurity of the conditional distribution of y given the s-th attribute of x s and G is the number of attributes of x s (g ε G). The two-stage criterion finds the best splitting predictor(s) as the one (or those) minimizing (7) and, consequently, the best split point among the candidate splits induced by the best predictor(s) minimizing the (6) by taking account only the partitions or splits generated by the best predictor. This criterion can be applied either sic et simpliciter or by considering alternative modelling strategies in the predictor selection (an overview of the two-stage methodology can be found in Siciliano & Mola, 2000). The FAST splitting algorithm (Mola & Siciliano, 1997) can be applied when the following property holds for the impurity measure: and it consists of two basic rules: • iterate the two-stage partitioning criterion by using (7) and (6): select one splitting predictor at a time and consider, at each time, the previously unselected splitting predictors; • stop the iterations when the current best predictor in the order x(k) at iteration k does not satisfy the condition is the best partition at the iteration (k − 1). The algorithm finds the optimal split with substantial time savings in terms of the reduced number of partitions or splits to be tried out at each node of the tree. Simulation studies show that the relative reduction in the average number of splits analyzed by the FAST algorithm with respect to the standard approaches in binary trees increases as a function of both the number of attributes of the splitting predictor and of the number of observations at a given node. Further theoretical results about the computational efficiency of FAST-like algorithms can be found in Klaschka et al. (1998).
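The two-stage idea behind equation (7) is easier to see in code. The sketch below is only a schematic illustration (our own function names, a Gini-based impurity, and a simplified selection of a single best predictor); it is not the authors' FAST implementation and it omits the iterative stopping condition described above.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(x_col, y, split):
    """Weighted impurity of the two children induced by a candidate split.
    `split` is the set of attribute values sent to the left child."""
    left = [yi for xi, yi in zip(x_col, y) if xi in split]
    right = [yi for xi, yi in zip(x_col, y) if xi not in split]
    n = len(y)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

def predictor_global_impurity(x_col, y):
    """Impurity of y conditional on all attributes of a predictor: a lower
    bound on what any split generated by that predictor can achieve."""
    n = len(y)
    groups = {}
    for xi, yi in zip(x_col, y):
        groups.setdefault(xi, []).append(yi)
    return sum(len(g) / n * gini(g) for g in groups.values())

def two_stage_split(X, y, candidate_splits):
    """First pick the predictor with the smallest conditional impurity, then
    evaluate only the candidate splits generated by that predictor."""
    best_var = min(X, key=lambda v: predictor_global_impurity(X[v], y))
    best = min(candidate_splits[best_var],
               key=lambda s: split_impurity(X[best_var], y, s))
    return best_var, best
```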
Tree pruning
As for the pruning step, it is usually required in DTI in order to control for the size of the induced model and to avoid in this way data overfitting. Typically, data is partitioned into a training set (containing two-third of the data) and a test set (with the remaining one-third). Training set contains labelled observations and it is used for the tree growing. It is assumed that the test set contains unlabelled observations and it is used for selecting the final decision tree: to check whether a decision tree, say T, is generalizable, it is necessary to evaluate its performance on the test set in terms of misclassification error by comparing the true class labels of the test data against those predicted by T. Reduced-size trees perform poorly on both training and test sets causing underfitting. Instead, increasing the size of T improves both the training and test errors up to a "critical size" from which the test errors increase even though the corresponding training errors decrease. This means that T overfits the data and cannot be generalized to class prediction of unseen observations. In the machine learning framework, the training error is named resubstitution error and the test error is known as the generalization error. It is possible to prevent overfitting by haltering the tree growing before it becomes too complex (pre-pruning). In this framework, one can assume the training data is a good representation of the overall data and use the resubstitution error as an optimistic estimate of the error of the final DTI model (optimistic approach). Alternatively, Quinlan (1987) proposed a pessimistic approach that penalizes complicated models by assigning a cost penalty to each terminal node of the decision tree: for C4.5, the generalization error is R(t)/n t +ε, where, for a node t, n t i s t h e n u m b e r o f o b s e r v a t i o n s a n d R(t) is the misclassification error. It is assumed that R(t) follows a Binomial distribution and that ε is the upper bound for R(t) computed from such a distribution (Quinlan, 1993). An alternative pruning strategy is based on the growing of the entire tree and the subsequent retrospective trimming of some of its internal nodes (post-pruning): the subtree departing from each internal node is replaced with a new terminal node whose class label derives from the majority class of observations belonging to that subtree. The latter is definitively replaced by the terminal node if such a replacement induces an improvement of the generalization error. Pruning stops when no further improvements can be achieved. The generalization error can be estimated through either the optimistic or pessimistic approaches. Other post-pruning algorithms, such as CART, use a complexity measure that accounts for both the tree size and the generalization error. Once the entire tree is grown using training observations, a penalty parameter expressing the gain/cost trade off for trimming each subtree is used to generate a sequence of pruned trees, and the tree in the sequence presenting the lowest generalization error (0-SE rule) or the one with a generalization error within one standard error of its minimum (1-SE rule) is selected. Let α be a number in [0,+∞], called complexity parameter, measuring the "cost" of adding another variable to the model. Let R(T 0 ) be the risk for the zero split tree. Define: to be the cost for the tree, and define T α to be that subtree of the entire tree having the minimal cost. Obviously, T 0 is the entire tree and T ∞ is the zero splits model. 
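The cost-complexity measure that should follow "Define: ... to be the cost for the tree" is missing from the extracted text. In the standard CART formulation it reads as follows (our reconstruction, with |T| denoting the number of terminal nodes of tree T):

```latex
R_\alpha(T) \;=\; R(T) \;+\; \alpha\,|T|.
```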
The idea is to find, for each α, the subtree T_α ⊆ T_0 minimizing R_α(T). The tuning parameter α ≥ 0 governs the trade-off between the tree size and its goodness of fit to the data. Large values of α result in small trees, and conversely for smaller values of α. Of course, with α=0 the solution is the full tree T_0. It is worth noticing that, by adaptively choosing α, there exists a unique smallest subtree T_α minimizing R_α(T). A weakest link pruning approach is used to find T_α: it consists in successively collapsing the internal node producing the smallest per-node increase in R(T), continuing this way until the single-node (root) tree is produced. This gives a (finite) sequence of subtrees, and it is easy to show that this sequence must contain T_α (see Breiman et al. (1984) for details). Usually, pruning algorithms can be combined with V-fold cross-validation when few observations are available. Training data is divided into V disjoint blocks and a tree is grown V times on V-1 blocks, estimating the error by testing the model on the remaining block. In this case, the generalization error is the average error made over the V runs. The estimation of α is achieved by V-fold cross-validation: the final choice is the α minimizing the cross-validated R(T) and the final tree is T_α. Cappelli et al. (2002) improved this approach by introducing a statistical testing pruning procedure to achieve the most reliable decision rule from a sequence of pruned trees.
Regression tree
In the case the response variable is numeric, the outcome of a recursive partitioning algorithm is a regression tree. Here, the splitting criterion is SS_t − (SS_L + SS_R), where SS_t is the residual sum of squares for the parent node, and SS_L and SS_R are the residual sums of squares for the left and right sons, respectively. This is equivalent to choosing the split maximizing the between-groups sum of squares in a simple analysis of variance. In each terminal node, the mean value μ_y of the response variable for the cases belonging to that node is taken as the fitted value, whereas the variance is taken as an indicator of the error of the node. For a new observation y_new the prediction error is (y_new − μ_y). In the regression tree case, cost-complexity pruning is applied with the sum of squares replacing the misclassification error.
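To make the regression criterion concrete, the sketch below (illustrative code with our own names, assuming a single numeric predictor) scans all threshold splits and returns the one maximizing SS_t − (SS_L + SS_R).

```python
def sum_of_squares(values):
    """Residual sum of squares around the mean of `values`."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_regression_split(x, y):
    """Return (threshold, gain) maximizing SS_t - (SS_L + SS_R) over splits x <= threshold."""
    pairs = sorted(zip(x, y))
    ss_parent = sum_of_squares([yi for _, yi in pairs])
    best_threshold, best_gain = None, 0.0
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot split between identical predictor values
        left = [yi for _, yi in pairs[:i]]
        right = [yi for _, yi in pairs[i:]]
        gain = ss_parent - (sum_of_squares(left) + sum_of_squares(right))
        if gain > best_gain:
            best_threshold = (pairs[i - 1][0] + pairs[i][0]) / 2.0
            best_gain = gain
    return best_threshold, best_gain
```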
DTI enhancements
A consolidated literature about the incorporation of parametric and nonparametric models into trees appeared in recent years. Several algorithms have been introduced as hybrid or functional trees (Gama, 2004), among the machine learning community. As an example, DTI is used for regression smoothing purposes in Conversano (2002): a novel class of semiparametric models named Generalized Additive Multi-Mixture Models (GAM-MM).
Other hybrid approaches are presented in Chan and Loh (2004), Su et al. (2004), Choi et al. (2005) and Hothorn et al. (2006). Nevertheless, relatively simple procedures combining DTI models in different ways have been proposed in the last decade in the statistics and machine learning literature and their effectiveness in improving the predictive ability of the traditional DTI method has been proven in different fields of application. The first, rather intuitive, approach is Tree Averaging. It is based on the generation of a set of candidate trees and on their subsequent aggregation in order to improve their generalization ability. It requires the definition of a suitable set of trees and their associated weights and classifies a new observation by averaging over the set of weighted trees (Oliver and Hand, 1995). Either a compromise rule or a consensus rule can be used for averaging. An alternative method consists in summarizing the information of each tree in a table crossclassifying terminal nodes outcomes with the response classes in order to assess the generalization ability through a statistical index and select the tree providing the maximum value of such index (Siciliano, 1998). Tree Averaging is very similar to Ensemble methods. These are based on a weighted or non weighted aggregation of single trees (the so called weak learners) in order to improve the overall generalization error induced by each single tree. They are more accurate than a single tree if they have a generalization error that is lower than random guessing and if the generalization errors of the different trees are uncorrelated (Dietterich, 2000). A first example of Ensemble method is Bootstrap Aggregating, which is also called Bagging (Breiman, 1996). It works by randomly replicating the training observations in order to induce single trees whose aggregation by majority voting provides the final classification.
Bagging is able to improve the performance of unstable classifiers (i.e. trees with high variance). Thus, bagging is said to be a reduction variance method. Adaptive Boosting, also called AdaBoost (Freud & Schapire, 1996) is an Ensemble method that uses iteratively bootstrap replication of the training instances. At each iteration, previously-misclassified observations receive higher probability of being sampled. The final classification is obtained by majority voting. Boosting forces the decision tree to learn by its error, and is able to improve the performance of trees with both high bias (such as singlesplit trees) and variance. Finally, Random Forest (Breiman, 2001) is an ensemble of unpruned trees obtained by randomly resampling training observations and variables. The overall performance of the method derives from averaging the generalization errors obtained in each run. Simultaneously, suitable measures of variables importance are obtained to enrich the interpretation of the model.
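As a concrete illustration of the bagging scheme described above, the following sketch trains trees on bootstrap replicates and aggregates them by majority voting. It is our own simplified code: the use of scikit-learn's DecisionTreeClassifier is an assumption of convenience, and it presumes NumPy arrays with integer-coded class labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_trees=25, random_state=0):
    """Grow n_trees trees, each on a bootstrap replicate of the training set.
    X and y are NumPy arrays; y holds integer-coded class labels."""
    rng = np.random.default_rng(random_state)
    trees, n = [], len(y)
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)                 # sampling with replacement
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees

def bagging_predict(trees, X):
    """Aggregate the individual predictions by majority voting."""
    votes = np.stack([tree.predict(X) for tree in trees])   # shape (n_trees, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```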
Combinatorial optimization
Combinatorial Optimization can be defined as the analysis and solution of problems that can be mathematically modelled as the minimization (or maximization) of an objective function over a feasible space involving mutually exclusive, logical constraints. Such logical constraints can be seen as the arrangement of a bunch of given elements into sets. In a mathematical form:
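The display formula announced here was lost in extraction; a standard way to state it, consistent with the definitions in the following sentence, is (our reconstruction):

```latex
\min_{T \in F} \; \alpha(T) \qquad \text{(or, for maximization problems, } \max_{T \in F} \alpha(T)\text{),}
```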
where T can be seen as an arrangement, F is the collection of feasible arrangements and α(T) measures the value of the members of F. Combinatorial Optimization problems are of great interest because many real life decisionmaking situations force people to choose over a set of possible alternatives with the aim of maximizing some utility function. On the one hand, the discreteness of the solutions space offers the great advantage of concreteness and, indeed, elementary graphs or similar illustrations can often naturally be used to represent the meaning of a particular solution to a problem. On the other end, those problems carry a heavy burden in terms of dimensionality. If more than few choices are to be made, the decision-making process has to face with the evaluation of a terribly big expanse of cases. This dualism (intuitive simplicity of presentation of a solution versus complexity of solutions search) has made this area of combinatorics attractive for researchers from many fields, ranging from engineering to management sciences. Elegant procedures to find optimal solutions have been found for some problems, but for most of them only a bunch of properties and algorithms have been developed that still do not allow to reach a complete resolution. This is the case of Computational Statistics, in which computationally-intensive methods are used to "mine" large, heterogeneous, multidimensional datasets in order to discover knowledge in the data.
To give an example, the objective of Cluster Analysis is to find the "best" partition of the dataset according to some criterion, which is always expressed as an objective function. This means that all possible and coherent partitions of the dataset should be generated and the objective function has to be calculated for each of them. In many cases, the number of possible partitions grows too rapidly with respect to the number of units, making such strategy practically unfeasible. Another example is the apparently simple problem of calculating the variance for interval data, for which the maximum and the minimum of the variance function have to be searched over the multidimensional cube defined by all the intervals in which the statistical units are defined.
These are examples of statistical problems that cannot be faced with the total enumeration and evaluation of the solutions. In order to try to tackle with this kind of problems, a lot of theory has been developed. One case is when some properties about the objective function are available. These allow to calculate some kind of upper (or lower) bound that a set of possible solutions could admit. In this case, the search could be performed just on the set of possible solutions whose upper bound is higher. If one solution whose effective value is higher than the bounds of all the other sets is found, it would not be necessary to continue the search, being all the other subsets not able to provide better solutions. This is the case of the aforementioned problem of finding the upper bound of variance for interval data, because it can be verified that the maximum is necessarily reached in one of the vertices of the multidimensional cube, so that exploring the whole cube is not necessary. Such a situation allows to restrict the solutions space to a set of 2 n possible solutions, where n is the number of statistical units. Unfortunately, this does not solve the problem because the solutions space becomes enormous even in presence of small datasets (with just 30 units the number of solutions to evaluate is greater than one thousand millions). The FAST algorithm is another example of a partial enumeration approach, in which a measure of the upper bound of the predictive power of a solutions set is defined and exploited in order to get the same results of the CART greedy approach by using a reduced amount of computations. www.intechopen.com
Another way to proceed is to make use of non exact procedures, often called heuristics. Those algorithms do not claim to find the global optimum, but are able to converge rapidly towards a local one. Non exact algorithms (that will be called heuristics in the rest of this chapter) are certainly not recent. What has changed, in time, is the respectability associated to them, due to the fact that many heuristics have been proved to rival their counterparts in elegance, sophistication and, particularly, usefulness. Many heuristics have been proposed in the literature, but only two kinds of them will be briefly described in this context due to their role in the problems that will be faced in the next sections. These are: Greedy procedures and Nature Inspired optimization algorithms. In Greedy procedures the optimization process selects, at each stage, an alternative that is the best among all the feasible alternatives without taking into account the impact that such choice will have on the subsequent decisions. The CART algorithm makes use of a greedy procedure to grow a tree in which the optimality criterion is maximised just locally, that is, for each node of the tree but not considering the tree as a whole. This approach clearly results in a suboptimal tree but allows, at least, to obtain a tree in a reasonable amount of time. The so-called Nature Inspired heuristics, which are also called "Heuristics from Nature" (Colorni et al., 1993), are instead inspired by natural phenomena or behaviour such as Evolution, Ants, Honey-Bees, Immune systems, Forests, etc. Some important Nature Inspired heuristics are: Simulated Annealing (SA), TABU Search (TS) algorithms, Ant Colony Optimization (ACO) and Evolutionary Computation (EC). ACO and EC are described in the following since they are used throughout the chapter. Ant Colony Optimization represents a class of algorithms that were inspired by the observation of real ant colonies. Observation shows that a single ant only applies simple rules, has no knowledge and it is unable to succeed in anything when it is alone. However, an ant colony benefits from the coordinated interaction of each ant. Its structured behaviour, described as a "social life", leads to a cooperation of independent searches with high probability of success. ACO were initially proposed by Dorigo (1992) to attack the Traveling Salesman Problem. A real ant colony is capable of finding the shortest path from a food source to its nest by using pheromone information: when walking, each ant deposits a chemical substance called pheromone and follows, in probability, a pheromone trail already deposited by previous ants. Assuming that each ant has the same speed, the path which ends up with the maximum quantity of pheromone is the shortest one. Evolutionary computation (Fogel and Fogel, 1993) incorporates algorithms that are inspired from evolution principles in nature. The methods of evolutionary computation algorithms are stochastic and their search methods imitate and model some natural phenomena, namely: (1) the survival of the fittest and (2) genetic inheritance. Evolutionary computing can be applied to problems when it is difficult to apply traditional methods (e.g., when gradients are not available) or when traditional methods lead to unsatisfactory solutions like local optima (Fogel, 1997). Evolutionary algorithms work with a population of potential solutions (i.e. individuals).
Each individual is a potential solution to the problem under consideration and it is encoded into a data structure suitable to the problem. Each encoded solution is evaluated by an objective function (environment) in order to measure its fitness. The bias towards selecting high-fitness individuals exploits the acquired fitness information. The individuals will change and evolve to form a new population by applying genetic operators. Genetic operators perturb those individuals in order to explore the search space. There are two main types of genetic operators: Mutation and Crossover. Mutation type operators are asexual (unary) operators, which create new individuals by a small change in a single individual. On the other hand, Crossover type operators are multi-sexual (multary) operators, which create new individuals by combining parts from two or more individuals. As soon as a number of generations have evolved, the process is terminated according to a termination criterion. The best individual in the final step of the process is then proposed as a (hopefully suboptimal or optimal) solution for the problem. Evolutionary computing is further classified into four groups: Genetic Algorithms (GA), Evolutionary Programming, Evolution Strategies and Genetic Programming. Although there are many relevant similarities between these evolutionary computing paradigms, profound differences among them also emerge (Michalewicz, 1996). These differences generally involve the level in the hierarchy of the evolution being modelled, that is: the chromosome, the individual or the species. There are also many hybrid methods that combine various features from two or more of the methods described in this section. Genetic Algorithms (GAs), which will be used in the following, are part of a collection of stochastic optimization algorithms inspired by natural genetics and the theory of biological evolution. The idea behind genetic algorithms is to simulate natural evolution when optimizing a particular objective function. GAs have emerged as practical, robust optimization and search methods in the last three decades. In the literature, Holland's genetic algorithm is called the Simple Genetic Algorithm (Vose, 1999). It works with a population of individuals (chromosomes), which are encoded as binary strings (genes).
Genetic algorithm for complex predictors
The CART methodology looks for the best split by making use of a brute-force (enumerative) procedure. All the possible splits from all the possible variables are generated and evaluated. Such a procedure must be performed any time a node has to be split and can lead to computational problems when the number of modalities grows. Let us first consider how a segmentation procedure generates and evaluates all possible splits. Nominal unordered predictors (Nups) are more complicated to handle than ordered ones because the number of possible splits that can be generated grows exponentially with the number of attributes m: the number of possible splits is (2^(m-1) - 1). The computational complexity of a procedure that generates and evaluates all the splits from a nominal unordered predictor is therefore O(2^m). In this respect, it is evident that such an enumerative algorithm becomes prohibitive when the number of attributes is high. This is one of the reasons why some software packages do not accept Nups with a number of attributes higher than a certain threshold (usually between 12 and 15). One possible way to proceed is to make use of a heuristic procedure, like the one proposed in this section. In order to design a Genetic Algorithm to solve such a combinatorial problem, it is necessary to identify:
• a meaningful representation (coding) for the candidate solutions (the possible splits);
• a way to generate the initial population;
• a fitness function to evaluate any candidate solution;
• a set of useful genetic operators that can efficiently recombine and mutate the candidate solutions;
• the values of the parameters used by the GA (population size, genetic operator parameter values, selective pressure, etc.);
• a stopping rule for the algorithm.
The aforementioned points have been tackled as follows. As for the coding, the following representation has been chosen: a solution is coded as a string of bits (chromosome) called x, where each bit (gene) is associated with an attribute of the predictor; a gene equal to 1 assigns the corresponding attribute to one child node and a gene equal to 0 assigns it to the other. The choice of the fitness function is straightforward: the split evaluation function of the standard recursive partitioning algorithm is used (i.e. the maximum decrease in node impurity). Since the canonical (binary) coding is chosen, the corresponding two-parent single-point crossover and mutation operators can be used; as a stopping rule, a maximum number of iterations is chosen on the basis of empirical investigations. The rest of the GA features are similar to the classic ones: elitism is used (at each iteration the best solution is kept in memory) and the initial population is chosen randomly.
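As an illustration of the coding and fitness just described, the sketch below evaluates one candidate chromosome by the impurity decrease it produces at a node, using the Gini index as the impurity measure (one common CART choice). The helper names and the toy data are assumptions introduced for the example, not part of the chapter's software.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def impurity_decrease(chromosome, attributes, x, y):
    """Fitness of a candidate split: decrease in Gini impurity at the node.

    chromosome[i] == 1 sends attributes[i] to the left child, 0 to the right.
    """
    left_attrs = {a for a, g in zip(attributes, chromosome) if g == 1}
    left = [yi for xi, yi in zip(x, y) if xi in left_attrs]
    right = [yi for xi, yi in zip(x, y) if xi not in left_attrs]
    if not left or not right:
        return 0.0                       # degenerate split: no improvement
    n = len(y)
    return gini(y) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

# toy usage: a 4-attribute nominal predictor and a two-class response
attrs = ["a", "b", "c", "d"]
x = ["a", "a", "b", "b", "c", "c", "d", "d"]
y = [0, 0, 0, 1, 1, 1, 1, 0]
print(impurity_decrease([1, 1, 0, 0], attrs, x, y))   # split {a, b} vs {c, d}
```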
An ACO algorithm for exploratory DTI
When growing a classification or a regression tree, CART first grows the so-called exploratory tree. Such a tree is grown using the data of the training set. Then, it is validated by using the test set or by cross-validation. In this section, the attention is focused on the exploratory tree-growing procedure. In this phase, in theory, the best possible tree should be built, that is, the tree having the lowest global impurity measure among all the trees that can be generated. It has been shown (Hyafil and Rivest, 1976) that constructing the optimal tree is an NP-complete problem. In other words, a polynomial-time algorithm can only be expected to produce suboptimal trees. For this reason, recursive partitioning algorithms make use of greedy heuristics to reach a compromise between tree quality and computational effort. In particular, most of the existing methods for DTI use a greedy heuristic based on a top-down recursive partitioning approach in which, at each step, the split that maximizes the one-step impurity decrease is chosen. This kind of greedy approach, which splits the data locally (i.e., in a given node) and only once for each node, allows a tree to be grown in a reasonable amount of time. On the other hand, this rule can generate only a suboptimal tree because any time a split is chosen a certain subspace of possible trees is no longer investigated by the algorithm. If the optimal tree is included in one of those subspaces, there is no chance for the algorithm to find it. Taking these considerations into account, we propose an Ant Colony Optimization algorithm to try to find the best exploratory tree. In order to attack a problem with ACO, the following design tasks must be performed:
1. Represent the problem in the form of sets of components and transitions, or by means of a weighted graph, on which ants build solutions.
2. Appropriately define the meaning of the pheromone trails: that is, the type of decision they bias.
3. Appropriately define the heuristic information for each decision an ant has to take while constructing a solution.
4. If possible, implement an efficient local search algorithm for the problem to be solved.
The best results from the application of ACO algorithms to NP-hard combinatorial optimization problems are achieved by coupling ACO with local optimizers (Dorigo and Stutzle, 2004).
5. Choose a specific ACO algorithm and apply it to the problem to be solved, taking the previous issues into account.
6. Tune the parameters of the ACO algorithm. A good starting point is to use parameter settings that were found to be good when applying the same ACO algorithm to similar problems or to a variety of other problems.
The most complex task is probably the first one, in which a way to represent the problem in the form of a weighted graph must be found. We use a representation based on the following idea: let us imagine having two nominal predictors P1 = {a1, b1, c1} and P2 = {a2, b2} with, respectively, three and two attributes. Such simple predictors are considered only to explain the idea, given the combinatorial explosion of the phenomenon. In this case, the set of all possible splits at the root node consists of the three splits generated by P1 ({a1} vs {b1, c1}, {b1} vs {a1, c1}, {c1} vs {a1, b1}) and the single split generated by P2 ({a2} vs {b2}), four splits in total. Any time a split is chosen, it generates two child nodes. For such nodes, the set of possible splits is, in the worst case, equal to 3 (the same as for the parent node, except the split that was chosen). This consideration leads to the representation shown in Figure 1 in which, for simplicity, only the first two levels of the possible trees are considered. It is easy to imagine how the complexity grows when we deal with predictors that generate hundreds or even thousands of splits (which is a common case). In Figure 1, the space of all possible trees is represented by a connected graph. Moving from one level to another corresponds to splitting on a variable. The arcs of such a graph have the same meaning as the arcs of the TSP graph (transition from one state to another or, even better, addition of a component to a partial solution). In this view, it is natural to deposit pheromone on them. The meaning of the pheromone trails, in this case, corresponds to the desirability of choosing the corresponding split from a certain node. As for the heuristic information, it is possible to refer to the decrease in impurity deriving from adding the corresponding node to the tree. Such a measure has a meaning which is similar, in some way, to the one that visibility has in the TSP. An arc is the more desirable the higher the impurity decrease is. As a result, to draw an analogy with the TSP, such an impurity decrease can be seen as an inverse measure of the distance between two nodes. Once the construction graph has been built, and the pheromone trail meaning and heuristic function have been defined, it is possible to attack the problem using an ACO algorithm. It is important to note that, because of the specificity of the problem to be modelled (ants move on a connected graph and there is a measure of "visibility"), the search for the best tree can be seen as a shortest path search, as in the TSP. In the latter, ants are forced to pass through each city only once while, in our case, ants are forced to choose paths that correspond to binary trees, since the solutions to be built must be in the form of tree structures. All the ants will start from the root node and will be forced to move from one node to another in order to build a tour that corresponds to a tree. It is important to understand the basics of the ant moves in the graph shown in Figure 1.
At each step, the ant looks at the heuristic information (impurity decrease) and the pheromone trail of every possible direction and decides which one to choose (and, therefore, the associated split) on the basis of the selected ACO algorithm. Once the ant arrives at a terminal node, it recursively starts to move back to the other unexplored nodes. In several ACO algorithms, pheromone trails are initialized to a value obtained by manipulating the quality measure (the path's length in the TSP case) of a solution obtained with another heuristic (Dorigo suggests the nearest-neighbour heuristic). In our case, the solution quality of the greedy tree induction rule is used. Elitism will also be implemented and, due to the strong similarity with the TSP, the chosen parameters are the same as those that have been used successfully for the TSP.
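The following sketch shows how such a step could look in code, using the standard Ant System transition rule (pheromone^alpha times heuristic^beta) together with a simple evaporation-plus-deposit pheromone update. The weights, data structures and function names are illustrative assumptions rather than the chapter's actual implementation.

```python
import random

def choose_split(candidate_splits, pheromone, impurity_decrease, alpha=1.0, beta=2.0):
    """Ant System transition rule: pick a split with probability proportional to
    pheromone^alpha * heuristic^beta, where the heuristic is the impurity decrease."""
    weights = [
        (pheromone[s] ** alpha) * (impurity_decrease[s] ** beta)
        for s in candidate_splits
    ]
    total = sum(weights)
    if total == 0:
        return random.choice(candidate_splits)
    r, acc = random.uniform(0, total), 0.0
    for split, w in zip(candidate_splits, weights):
        acc += w
        if r <= acc:
            return split
    return candidate_splits[-1]

def update_pheromone(pheromone, tours, rho=0.5, q=1.0):
    """Evaporate, then deposit pheromone on the splits used by each ant's tree,
    proportionally to the quality (e.g. inverse global impurity) of that tree."""
    for s in pheromone:
        pheromone[s] *= (1.0 - rho)
    for splits_used, quality in tours:
        for s in splits_used:
            pheromone[s] += q * quality

# toy usage with three candidate splits at a node
splits = ["s1", "s2", "s3"]
tau = {s: 1.0 for s in splits}            # initial pheromone
eta = {"s1": 0.10, "s2": 0.30, "s3": 0.05}  # impurity decrease of each split
print(choose_split(splits, tau, eta))
```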
Identification of a parsimonious set of decision trees in multi-class classification
In many situations, the response variable used in classification tree modelling has a number of classes that does not allow the recursive partitioning algorithm to be applied in the most accurate manner. It is well known that: a) a multi-class response, namely a nominal variable with several classes, usually causes prediction inaccuracy; b) multi-class and numeric predictors often play the role of splitting variables in the tree-growing process to the disadvantage of two-class ones, causing selection bias. To account for the problems deriving from the prediction inaccuracy of tree-based classifiers grown for a multi-class response, as well as to reduce the loss of interpretability induced by ensemble methods in these situations, Mola and Conversano (2008) introduced an algorithm based on a Sequential Automatic Search of a Subset of Classifiers (SASSC). It produces a partition of the set of response classes into a reduced number of disjoint subgroups and introduces a parameter in the final classification model that improves its prediction accuracy, since it allows each new observation to be assigned to the most appropriate classifier in a previously-identified reduced set of classifiers. It uses a data-driven heuristic based on cross-validated classification trees as a tool to induce the set of classifiers in the final classification model. SASSC produces a partition of the set of response classes into a reduced number of super-classes. It is applicable to a dataset X composed of N observations characterized by a set of J (numeric or nominal) splitting variables x_j (j = 1, ..., J) and a response variable y presenting K classes. Such response classes identify the initial set of classes C^(0) = (c_1, c_2, ..., c_K). Partitioning X with respect to C^(0) allows K disjoint subsets X^(0)_k to be identified, such that X^(0)_k = {x_s : y_s ∈ c_k}, with s = 1, ..., N. In practice, X^(0)_k is the set of observations presenting the k-th class of y. The algorithm works by aggregating the K classes in pairs and learning a classifier on each subset of corresponding observations. The "best" aggregation (super-class) is chosen as the one minimizing the generalization error estimated using V-fold cross-validation. Suppose that, in the ℓ-th iteration of the algorithm, this best aggregation is found for the pair of classes c_i* and c_j* (with i* ≠ j* and i*, j* ∈ (1, ..., K)), so that the subsets X_(i*) and X_(j*) are aggregated. Denoting by T_(i*,j*) the decision tree minimizing the cross-validated generalization error δ^(ℓ)_cv, the heuristic for selecting the "best" decision tree can be formalized as in equation (12). The SASSC algorithm is analytically described in Table 1. It proceeds by learning all the possible decision trees obtainable by joining the K subgroups in pairs, and by retaining the one satisfying the selection criterion introduced in (12). After the ℓ-th aggregation, the number of subgroups is reduced to K^(ℓ-1) - 1, since the subgroups of observations presenting the response classes c_i* and c_j* are discarded from the original partition and replaced by the subset X^(ℓ)_(i*,j*) = X_(i*) ∪ X_(j*) identified by the super-class c^(ℓ) = (c_(i*) ∪ c_(j*)). The initial set of classes C is replaced by C^(ℓ), the latter being composed of a reduced number of classes since some of the original classes form the super-classes resulting from the aggregations.
Likewise, X^(ℓ)_k is formed by a smaller number of subsets as a consequence of the aggregations. The algorithm proceeds sequentially in iteration ℓ+1 by searching for the most accurate decision tree over all the possible ones obtainable by joining the K^(ℓ) subgroups in pairs. The sequential search is repeated until the number of subgroups reduces to one in the (K-1)-th iteration. The decision tree learned on the last subgroup corresponds to the one obtainable by applying the recursive partitioning algorithm on the original dataset. The output of the procedure is a sequence of sets of response classes C^(1), ..., C^(K-1) with the associated sets of decision trees T^(1), ..., T^(K-1). The latter are derived by learning K - k trees (k = 1, ..., K - 1) on disjoint subgroups of observations whose response classes complete the initial set of classes C^(0): these response classes identify the super-classes relating to the sets of classifiers T^(k). An overall generalization error is associated with each T^(k): such an error is also based on V-fold cross-validation and is computed as a weighted average of the generalization errors obtained from each of the K - k decision trees composing the set. In accordance with the previously introduced notation, the overall generalization errors can be denoted as Θ^(1)_cv, ..., Θ^(k)_cv, ..., Θ^(K-1)_cv. Of course, by decreasing the number of trees composing a sequence T^(k) (that is, when moving k from 1 to K-1) the corresponding Θ^(k)_cv increases, since the number of super-classes associated with T^(k) is also decreasing. This means that a lower number of trees is learned on more heterogeneous subsets of observations, since each of those subsets pertains to a relatively large number of response classes. Taking this inverse relationship into account, the analyst can be aware of the overall prediction accuracy of the final model on the basis of the relative increase in Θ^(k)_cv when moving k from 1 to K-1. In this respect, the analyst can select the suitable number of decision trees to be included in the final classification model accordingly. Supposing that a final subset of g decision trees has been selected (g << K-1), the estimated classification model combines these g trees by means of the "vehicle parameter" introduced below.
The parameter ψ is called the "vehicle parameter". It allows a new observation to be assigned to the most suitable decision tree in the subset of g trees. It is defined by a set of g-1 dummy variables.
Each of them equals 1 if the object belongs to the i-th decision tree (i = 1, ..., g-1) and zero otherwise. The estimation of τ_i is based on the prediction accuracy of each decision tree in the final subset of g trees. A new observation is dropped down each of the g trees. The assigned class c_{k,i} is found with respect to the tree whose terminal node best classifies the new observation. In other words, a new observation is assigned to the purest terminal node among all the g decision trees.
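To summarize the aggregation logic described above, the following is a schematic sketch of the SASSC search loop. It assumes a user-supplied cv_error function returning the V-fold cross-validated error of a tree grown on the pooled observations; the data structures, names and the toy "error" used in the example are purely illustrative.

```python
from itertools import combinations

def sassc(subsets, cv_error):
    """Sequential aggregation of response-class subsets (schematic SASSC loop).

    subsets:  dict mapping a frozenset of response classes to the rows carrying them.
    cv_error: callable returning the cross-validated error of a tree grown on rows.
    Returns the sequence of (super-class, error) aggregations performed.
    """
    history = []
    while len(subsets) > 1:
        best_pair, best_err = None, float("inf")
        for ci, cj in combinations(list(subsets), 2):
            err = cv_error(subsets[ci] + subsets[cj])   # tree on the pooled observations
            if err < best_err:
                best_pair, best_err = (ci, cj), err
        ci, cj = best_pair
        merged_rows = subsets.pop(ci) + subsets.pop(cj)
        subsets[ci | cj] = merged_rows                  # the new super-class replaces the pair
        history.append((ci | cj, best_err))
    return history

# toy usage: the "error" below is a stand-in, not a real cross-validated tree error
data = {frozenset({c}): [(c, i) for i in range(5)] for c in "ABCD"}
print(sassc(data, cv_error=lambda rows: len({label for label, _ in rows}) / 10))
```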
Another option of the algorithm is the possibility of learning the decision trees used to select the suitable pair of response classes satisfying (12) with alternative splitting criteria. As in CART, it is possible to refer to both the Gini index and Twoing as alternative splitting rules. It is known that, unlike the Gini rule, Twoing searches for the two groups of classes that together make up more than 50% of the data and allows more balanced trees to be built, even if the resulting recursive partitioning algorithm works more slowly. As an example, if the total number of classes is equal to K, Twoing considers 2^(K-1) possible splits. Since it has been shown (Breiman et al., 1984, p. 95) that the decision tree is insensitive to the choice of the splitting rule, it is interesting to see how it works in a framework characterized by the search for the most accurate decision trees, like the one introduced in SASSC.
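As a worked illustration of the two criteria, the sketch below computes the Gini impurity decrease and the twoing value of one candidate split, using the standard CART formula for twoing, p_L * p_R / 4 * (sum over classes of |p(k|left) - p(k|right)|)^2. The data and helper names are invented for the example.

```python
from collections import Counter

def class_probs(labels):
    n = len(labels)
    return {k: c / n for k, c in Counter(labels).items()}

def gini(labels):
    return 1.0 - sum(p ** 2 for p in class_probs(labels).values())

def gini_decrease(parent, left, right):
    n = len(parent)
    return gini(parent) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

def twoing(parent, left, right):
    n = len(parent)
    p_left, p_right = len(left) / n, len(right) / n
    pl, pr = class_probs(left), class_probs(right)
    spread = sum(abs(pl.get(k, 0.0) - pr.get(k, 0.0)) for k in set(parent))
    return (p_left * p_right / 4.0) * spread ** 2

# toy usage: a 3-class node split into two children
parent = list("AAABBBCC")
left, right = list("AAAB"), list("BBCC")
print(gini_decrease(parent, left, right), twoing(parent, left, right))
```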
Application on real and simulated datasets
Genetic Algorithm. The proposed GA has been applied to two datasets for which the optimal split could be calculated exactly, and to a more complex one for which such a brute-force strategy is not feasible. The first test has been done on the "Mushroom" dataset, available from the UCI Machine Learning Repository (source: http://archive.ics.uci.edu/ml/). This dataset has a two-class response variable ("is the mushroom poisonous?") and a set of categorical and numerical predictors. One of them (gill colour) has 12 categories (attributes), which can be evaluated exhaustively. The GA found the global best solution (which was extracted by using the Rpart package of the R software) in fewer than 10 iterations. The algorithm was then tested on a simulated dataset obtained by uniformly generating a response variable with 26 modalities and a nominal unordered predictor with 16 modalities for 20,000 observations. By keeping the number of modalities of the splitting predictor at 16 it was possible, also in this case, to find the (global) best split by means of exhaustive enumeration. Such experimental studies showed that the most efficient configuration of the GA was the following:
• By randomly selecting the initial population (no other solutions have been tried, in fact).
• By setting the number of solutions composing the population equal to the number of necessary genes (the number of categories of the predictor).
• By setting a crossover proportion of 0.80.
• By setting a mutation probability equal to 0.10.
• By using rank selection for choosing the solutions to be recombined.
For this kind of problem (20,000 units, 26 categories for the response variable and 16 categories for the splitting predictor) the global optimum was reached in fewer than 30 iterations. When the complexity of the problem grows, more iterations seem to be required, though the number of iterations never appeared to grow exponentially. The GA has also been tested on the "Adult" dataset available from the UCI Machine Learning website. This dataset has been extracted from the US Census Bureau database (source: http://www.census.gov/) with the aim of predicting whether a person earns more than 50,000 dollars per year. The dataset has 325,614 observations and some categorical unordered splitting predictors with many attributes. In particular, the native-country predictor has 42 attributes. The GA has been run with the aim of finding a good split using the native-country splitting predictor, which both R and SPSS, for instance, refused to process. As previously mentioned, 30 iterations seemed not to be enough because, in many runs of the algorithm, the "probably best" solution appeared after iteration 80. The solution provided by the algorithm is shown in Table 2. It gives an idea of the complexity of the problem. The corresponding decrease in node impurity is 0.3628465. The algorithm has been tested over many simulated datasets, and the number of iterations required for the algorithm to reach convergence was shown to grow linearly as a function of the number of attributes of the splitting predictor (the number of observations in the dataset appeared to have no influence).
Ant System. The strong complexity of the decision tree growing procedure (Hyafil & Rivest, 1976) does not allow all the possible trees to be exhaustively enumerated and evaluated, even for very small datasets. In this respect, it is not possible to check whether the chosen heuristic is able to find the global optimum (as was previously done for the genetic algorithm).
In the first experiment the algorithm was tested on a simulated dataset of 500 observations with 11 nominal unordered predictors (with a number of attributes ranging between 2 and 9) and 2 numeric (continuous) predictors. It could be seen that, as the required tree depth increases, the difference between the global impurity of the tree obtained by the CART greedy heuristic and the one obtained by the Ant System tends to increase.
Table 3. Global impurity of the decision trees extracted by the proposed algorithm on a simulated dataset
Figure 2 shows the result obtained on the "Credit" dataset that can be found in the SPAD software (source: www.spadsoft.com). This dataset has 468 observations on which 11 nominal variables have been observed together with a two-class response variable. The aim is to predict this response variable ("is a customer good or bad?"). The first decision tree is the one found by the CART heuristic and the second one was extracted after 200 iterations of the Ant System algorithm. Table 4 shows the global impurity of the trees extracted by the CART and Ant heuristics. The algorithms presented here are at an early stage of development.
In these examples, an Ant System has been proposed to attack the problem of finding the best exploratory decision tree, and it turned out that the Ant System-based decision trees performed better than the ones found by the CART greedy heuristic. Even if the improvements were not large (from 2% to 5% in all of the simulation studies), such an algorithm could still be useful in situations in which high accuracy is required from the decision tree. Ant System, on the other hand, is the simplest (and least efficient) ACO technique, so the use of more powerful ACO algorithms (currently under development) would reasonably bring better results. It is well known that ACO algorithms reach their maximum efficiency when coupled with local search techniques, and can also improve their efficiency by making use of candidate lists.
Table 4. Global impurity of the decision trees extracted by the proposed algorithm on the Credit dataset
SASSC algorithm. In the following, the SASSC algorithm is applied to the "Letter Recognition" dataset from the UCI Machine Learning Repository (source: http://archive.ics.uci.edu/ml/). This dataset was originally analyzed in Frey & Slate (1991), who did not achieve a good performance since the correctly classified observations never exceeded 85%. Later on, the same dataset was analyzed in Fogarty (1992) using nearest neighbour classification. The obtained results give over 95.4% accuracy, compared to the best result of 82.7% reached in Frey & Slate (1991). Nevertheless, no information about the interpretability of the nearest neighbour classification model is provided, and the computational inefficiency of such a procedure is openly acknowledged by the authors.
In the Letter Recognition analysis, the task is to classify 20,000 black-and-white rectangular pixel displays into one of the 26 letters of the English alphabet. The character images are based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 numerical attributes that have to be submitted to a decision tree. Dealing with K = 26 response classes, SASSC provides 25 sequential aggregations. Classification trees aggregated at each single step were chosen according to 10-fold cross-validation. A tree was aggregated to the sequence if it provided the lowest cross-validated generalization error with respect to the other trees obtainable from different aggregations of (subgroups of) response classes. The results of the SASSC algorithm are summarized in Figure 3. It compares the performance of the SASSC model formed by g = 2 up to g = 6 super-classes with that of CART using, in all cases, either Gini or Twoing as splitting rules. Bagging (Breiman, 1996) and Random Forest (Breiman, 2001) are used as benchmarking methods as well. Computations have been carried out using the R software for statistical computing. The SASSC model using 2 super-classes consistently improves the results of CART using the Gini (Twoing) splitting rule, since the generalization error reduces to 0.49 (0.34) from 0.52 (0.49). As expected, the choice of the splitting rule (Gini or Twoing) is relevant when the number of super-classes g is relatively small (2 ≤ g ≤ 4), whereas it becomes negligible for higher values of g (results for g ≥ 5 are almost identical). Focusing on the Gini splitting criterion, SASSC's generalization error further reduces to 0.11 when the number of subsets increases to 6. For comparative purposes, Bagging and Random Forest have been trained using 6 and 10 classifiers respectively and, in these cases, the obtained generalization errors are worse than those deriving from SASSC with g = 6. As for Bagging and Random Forest, increasing the number of trees used to classify each subset of randomly drawn objects improves the performance of these two methods in terms of prediction accuracy. The reason is that their predictions derive from ("in-sample") independent bootstrap replications. Instead, cross-validation predictions in SASSC derive from aggregations of classifications made on "out-of-sample" observations that are excluded from the tree-growing procedure. Thus, it is natural to expect that cross-validation predictions are more inaccurate than bagged ones. Of course, increasing the number of subsets of the response classes in SASSC reduces the cross-validated generalization error but, at the same time, increases the complexity of the final classification model. In spite of a relatively lower accuracy, interpretability of the results in SASSC with g = 6 is strictly preserved.
Discussion and conclusions
In the last two decades, computational enhancements have contributed greatly to the increase in popularity of DTI algorithms. This has led to the successful use of Decision Tree Induction (DTI) using recursive partitioning algorithms in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition, to name only a few. Recursive partitioning and DTI are two sides of the same coin: while computational time has been rapidly decreasing, the statistician is making more use of computationally intensive methods to find unbiased and accurate classification rules for unlabelled objects. Nevertheless, DTI should not simply result in a number (the misclassification error), but also in an accurate and interpretable model. Software enhancements based on interactive user interfaces and customized routines should improve the effectiveness of trees with respect to interpretability, identification and robustness. These considerations have been the inspiration for the algorithms presented in this chapter, aimed at improving DTI effectiveness. They lead to easily interpretable solutions for rather complicated data analysis problems and can be fruitfully used in different fields of Knowledge Discovery from Databases (KDD) and data mining such as, for example, web mining and Customer Relationship Management (CRM). A Genetic Algorithm for multi-attribute predictor splitting is proposed in this chapter. The proposed GA works very well in the presence of tractable splitting predictors, for which exhaustive enumeration is affordable; in these cases the algorithm always reaches the global optimum very quickly. This is encouraging, even if nothing can be said, of course, about the cases in which the number of attributes is too large for exhaustive enumeration and evaluation. The obtained results can be considered particularly useful in those cases where there is no other way to attack the problem. Future research directions will include exhaustive enumerations on bigger datasets on a grid computing infrastructure.
In addition, an Ant Colony Optimization algorithm is proposed for exploratory tree growing. Such an algorithm could be useful in situations in which high accuracy is required from the decision tree. As noted above, Ant System is the simplest (and least efficient) ACO technique, so the use of more powerful ACO algorithms (currently under development), possibly coupled with local search techniques or candidate lists, would reasonably bring better results. Finally, a sequential search algorithm for modelling a multi-class response through DTI has also been introduced. The motivation underlying the formalization of the SASSC algorithm derives from the following intuition: since standard classification trees unavoidably lead to prediction inaccuracy in the presence of a multi-class response, it would be favourable to look for a relatively reduced number of decision trees, each one relating to a subset of classes of the response variable, the so-called super-classes. Reducing the number of response classes for each of those trees naturally improves the overall prediction accuracy. To support this intuition, an appropriate criterion to derive the correct number of super-classes and the most parsimonious tree structure for each of them has to be found. In this respect, a sequential approach that automatically proceeds through subsequent aggregations of the response classes is a natural starting point. The analysis of the Letter Recognition dataset demonstrated that the SASSC algorithm can be applied pursuing two complementary goals: 1) a content-related goal, resulting in the specification of a classification model that provides a good interpretation of the results without disregarding accuracy; 2) a performance-related goal, dealing with the development of a model that is effective in terms of predictive accuracy without neglecting interpretability. Taking these considerations into account, SASSC appears as a valuable alternative for evaluating whether a restricted number of independent classifiers improves the generalization error of a classification model.
J-FAST is a Java-based recursive partitioning platform that lets the analyst grow trees with any available recursive partitioning heuristic (the CART greedy heuristic or any other one written by the user). It also allows the results to be interactively visualized and compared. J-FAST divides the recursive partitioning procedure into three main sections. The data-importing Graphical User Interface (see Figure 4) allows data to be read from Excel-like spreadsheets and plain text files and automatically recognises the nature of the variables by distinguishing the categorical, numerical or alphanumerical columns of a data matrix. J-FAST also allows the user to specify the Decision Tree Induction model by choosing the response variable, as well as which predictor(s) should be treated as ordinal, nominal or excluded from the analysis.
Fig. 4. J-FAST data importing Graphical User Interface
A second GUI visualizes some information about the chosen DTI model and provides some descriptive statistics about the data being analyzed. It also allows the user to specify the features of the DTI model, such as the learning sample rate, the stopping conditions and the possibility of obtaining verbose output, and asks the user to choose among all the recursive partitioning heuristics present in the classpath. Then, the software starts the tree-growing procedure. The third component of the J-FAST software is the results navigator. It allows the user to interactively display and navigate the results of the analysis. The results navigator GUI (see Figure 5) consists of two windows. The first one is the main results window. It visualises the obtained decision tree, charts the misclassification rates and shows the selected node's information panel (there is a button for visualizing the splitting rule used to reach the node, the misclassification rate for the node, etc.). The second component is the Tree Console Window (Figure 6). It contains buttons that allow the user to navigate through the pruning sequence and directly access the best, the trivial and the maximal tree. For each tree in the pruning sequence, the node that is going to be pruned is highlighted. By clicking on the node, the interface allows the data units which fall in that node to be retrieved and written to a file in order to continue the analysis of such units using other software. It is also possible, from the second-step GUI, to simultaneously start more than one analysis in order to obtain different tree navigators simultaneously on the screen. This feature is particularly useful for comparing trees grown from different datasets or on the same dataset but using different DTI specifications. J-FAST is more than a simple recursive partitioning software package. Because it has been mainly designed to support research activity, it offers many useful functions, such as the possibility of saving created objects (trees, datasets, nodes, etc.) via the Java serialization mechanism in order to analyze them further using other ad-hoc Java programs (some of them have already been implemented, like a different tree interface called "TreeSurfer"). Interactivity with the R statistical software is also provided: by right-clicking on a node it is possible to send the corresponding data to R in order to continue the analysis. This is particularly useful if another statistical analysis (e.g. a logit model) has to be made on a particular segment (node) extracted from the obtained decision tree. J-FAST should also be considered as a Java object library (or API - Application Program Interface) for building classification and regression trees. Any researcher who is able to program in Java can use the classes from the J-FAST API in order to get trees without having to write all the necessary code. In addition, the J-FAST platform offers many useful objects. The most important ones are:
• Statistics: provides univariate and bivariate descriptive statistics.
• DataSet: stores data for recursive partitioning purposes (response variable, predictors, etc.).
• Split: specifies the type of split (binary, ternary, etc.).
• TreeGrower: a class for growing decision trees.
• Pruner: a class for decision tree pruning.
• TreeViewer: an interactive interface class.
• Utility: encompasses many useful functions, like reading data from plain text files, Excel-like spreadsheets, etc.
• TreeBuild interface: defines the rules a programmer must follow to write his or her own heuristic.
| 2014-10-01T00:00:00.000Z | 2008-11-01T00:00:00.000 | {
"year": 2008,
"sha1": "4b05804f5da26168afd0153dc5d53e046ac2d98a",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/5248",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f2bacec0c94ccfa98c58f547241314fbc827276d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231149887 | pes2o/s2orc | v3-fos-license | A Holistic Systems Approach to Characterize the Impact of Pre- and Post-natal Oxycodone Exposure on Neurodevelopment and Behavior
Background: Increased risk of oxycodone (oxy) dependency during pregnancy has been associated with altered behaviors and cognitive deficits in exposed offspring. However, a significant knowledge gap remains regarding the effect of in utero and postnatal exposure on neurodevelopment and subsequent behavioral outcomes. Methods: Using a preclinical rodent model that mimics oxy exposure in utero (IUO) and postnatally (PNO), we employed an integrative holistic systems biology approach encompassing proton magnetic resonance spectroscopy (1H-MRS), electrophysiology, RNA-sequencing, and Von Frey pain testing to elucidate molecular and behavioral changes in the exposed offspring during early neurodevelopment as well as adulthood. Results: 1H-MRS studies revealed significant changes in key brain metabolites in the exposed offspring that were corroborated with changes in synaptic currents. Transcriptomic analysis employing RNA-sequencing identified alterations in the expression of pivotal genes associated with synaptic transmission, neurodevelopment, mood disorders, and addiction in the treatment groups. Furthermore, Von Frey analysis revealed lower pain thresholds in both exposed groups. Conclusions: Given the increased use of opiates, understanding the persistent developmental effects of these drugs on children will delineate potential risks associated with opiate use beyond the direct effects in pregnant women.
INTRODUCTION
Over the last few years, the increasing trend in opioid abuse has become a major public health crisis across the globe. This steep increase in the abuse of prescription opioids, which include both licit and illicit opioids, has resulted in the opioid epidemic (Volkow and McLellan, 2016). Whilst this epidemic has traversed different groups in society, pregnant women are a particularly vulnerable group since they are prescribed opioids such as morphine, buprenorphine, and methadone, all of which have been shown to cross the placenta (Gerdin et al., 1990; Nanovskaya et al., 2002, 2008), potentially impacting the developing fetus. Limited data exist regarding the effects of in utero (IUO) or postnatal (PNO) exposure to oxycodone (oxy), however. Oxy is prescribed for multiple types of pain and can bind to mu- and kappa-opioid receptors (Kim, 2017). Oxy easily passes through the blood-brain barrier, thus allowing higher concentrations to accumulate in the brain (Okura et al., 2008, 2015; Chaves et al., 2017), subsequently contributing to its analgesic properties and risk for dependency and addiction.
Several studies (Byrnes and Vassoler, 2018) have been conducted with rodent models to investigate the detrimental effects of gestational opioid use on neurodevelopment of the offspring, but a gap in knowledge exists regarding the effects of IUO or PNO oxy exposure on synaptogenesis. We have previously identified novel miRNA signatures related to neurodevelopment contained within brain-derived extracellular vesicles of PNO and IUO offspring (Shahjin et al., 2019), and our current study aims to investigate metabolic, synaptic, molecular, and behavioral alterations in these exposed offspring. Using a Sprague Dawley rat model previously established by our labs (Shahjin et al., 2019;Odegaard et al., 2020), we employed proton magnetic resonance spectroscopy ( 1 H-MRS) to measure biochemical changes of main brain metabolites in the hippocampus. Additionally, we identified synaptic alterations in the hippocampus through the use of electrophysiology experiments. Further, RNA-sequencing (RNA-seq) was conducted on tissue RNA isolated from the prefrontal cortex (PFC) to determine changes in gene expression, particularly in genes related to neurodevelopment, disease states, and mood disorders. The hippocampus and the PFC are key regions involved in substance abuse disorders and the negative emotional state associated with withdrawal (Koob, 2020); indeed, systemic opioid exposure has been shown to attenuate hippocampal afferent-driven activity in the PFC (Giacchino and Henriksen, 1998). For these reasons, we have investigated both regions in this study to identify alterations in either area of the brain during the early developmental period spanning from post-natal day 14 (P14) to P17, which corresponds with peak synaptogenesis (Semple et al., 2013). In our final experiments, we employed Von Frey tests to elucidate any lasting impacts of early-life oxy exposure on pain thresholds. The comprehensive and systematic approach used in this study allows for thorough research into pre-and post-natal oxy abuse, a critical step in closing the knowledge gap surrounding this commonly used opioid analgesic.
Animals
Male and female Sprague Dawley rats were obtained from Charles River Laboratories Inc. (Wilmington, MA, USA) and group housed in a 12 h light-dark cycle and fed ad libitum. The total number of animals used for this study can be found in Supplementary Table 1. All procedures and protocols were approved by the Institutional Animal Care and Use Committee of the University of Nebraska Medical Center (UNMC) and conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
Oxycodone Treatment
The development of the IUO treatment paradigm was adapted from a previously published study (Davis et al., 2010), and the overall treatment paradigm previously established in our lab was followed (Shahjin et al., 2019;Odegaard et al., 2020). Briefly, nulliparous female (64-70 days of age) Sprague Dawley rats were treated with oxycodone HCl (Sigma Aldrich, St. Louis, MO) dissolved in saline or saline vehicle via oral gavage. An ascending dosing procedure was used wherein doses of 10 mg/kg/day of oxy were orally-gavaged for 5 days followed by a 0.5 mg/kg/day escalation for 10 days until reaching a final dose of 15 mg/kg/day, after which females were mated with proven male breeders. The treatment regimen continued throughout mating, gestation, and parturition until weaning (P21). For the PNO paradigm, dams were orally-gavaged with 15 mg/kg/day of oxy only after parturition until weaning. Upon weaning of the pups, dams were euthanized by isoflurane overdose followed by decapitation using a guillotine.
MRI/MRS Acquisitions
P17 pups were used for in vivo localized 1 H-MRS imaging of the hippocampus. Animals were anesthetized by inhalation of 1-1.5% isoflurane in 100% oxygen and maintained 40-80 breaths/minute. The duration of a study for a single animal was about 1 h. MRI and 1 H-MRS data were obtained using a Bruker R Biospin 7 Tesla/21 cm small animal scanner (Bruker, Billerica, MA), operating at 300.41 MHz, using a laboratorybuilt 22 mm diameter quadrature birdcage volume coil. All first-and second-order shim terms were first automatically adjusted in the volume-of-interest (VOI) using MAPSHIM R (Bruker, Billerica, MA), with a final shim performed manually to achieve a water line width of 10-15 Hz. The water signal was suppressed by variable power radiofrequency pulses with optimized relaxation delays (VAPOR) (Tkác et al., 1999). MR images were acquired for anatomical reference using a multi-slice rapid acquisition with relaxation enhancement (RARE) sequence (Effective echo time (TE) = 36 ms, Rare Factor = 8, repetition time (TR) = 4,200 ms, Number of Averages (NA) = 2, Scan Time = 3 m 21 s; FOV = 20 × 20 mm 2 , Matrix Size = 256 × 256, Spatial Resolution = 0.078125 × 0.078125 mm 2 , Number of Slices = 29, Slice Thickness = 0.5 mm). 1 H MRS data sets were obtained using semiLASER localization with timing parameters (TE/TR = 40/4,000 ms, 576 averages, 2,048 points) from a 2 × 5.187 × 1.557 mm 3 (16.15 µl) VOI located in the hippocampus. Pulse types and specifications: Excitation: hermite 90, duration = 0.7 ms, bandwidth = 5,400 Hz; 1st and 2nd Refocusing: hyperbolic secant, duration = 4 ms, bandwidth = 9484.5 Hz. The acquisition time was 38:24 min per data set. All pulses were applied with a frequency offset of −600 Hz to center the pulse bandwidth between Creatine (CRE) and N-Acetyl Aspartate (NAA). For the water suppression module, the spoiler strength matrix was calculated automatically. Spoiler strength was 35%; spoiler duration was 1.5 ms. For each experiment, one data set was acquired without water suppression to be used as the water concentration reference for the quantitation process. Unsuppressed water spectra were obtained with identical metabolite spectra parameters except for the following: TR = 10,000 ms, NA = 1, and Receiver Gain = 64. One 64 average (for quality assessment) plus four 128-average data sets were acquired for metabolite measurements using a combination of VAPOR (Tkác et al., 1999) scheme for water suppression.
Model parameters and constraints for quantification were generated using spectra from phantoms (n = 14) for the following metabolites: Alanine (ALA), Aspartate (ASP), Gamma-Aminobutyric acid (GABA), Glucose (GLC), Glutamine (GLN), Glutamate (GLU), Glycine (GLY), Lactate (LAC), Myo-inositol (MYO), Phosphorylcholine (PC), Taurine (TAU), total choline (tCHO), CRE, and NAA. Phantoms of each metabolite were prepared in pH 7.5 phosphate buffer (100 mM) and contained 3-(trimethylsilyl)-1-propane-sulfonic acid and sodium formate as chemical shift and phasing references. Spectra for each metabolite at known concentrations were acquired using semiLASER (Wijnen et al., 2010) sequences at 40 ms TE, maintaining the phantom at 38 • C with a circulating water jacket during spectral acquisition. The set of metabolite spectra formed a metabolite basis set, which was used as prior-knowledge in the quantification process. In all groups, n = 6 for all metabolites except LAC (IUO n = 4).
Electrophysiology
Coronal hippocampal brain slices were prepared from P17 animals (n = 6 per group) using the "protected recovery" method (Ting et al., 2014). Briefly, rats were euthanized by CO 2 asphyxiation and decapitated; brains were rapidly dissected into a slush of artificial cerebrospinal fluid (ACSF) containing (in mM) 124 NaCl, 2.5 KCl, 1.25 NaH 2 PO 4 , 24 NaHCO 3 , 12.5 glucose, 2 CaCl 2 , and 2 MgSO 4 and continuously bubbled with a mixture of 5% CO 2 and 95% O 2 . The cerebellum was removed with a razor blade, and the brain was affixed to the cutting chamber using cyanoacrylate glue. Two hundred and fifty micron-thick coronal brain sections through the hippocampus were cut using a vibrating microtome (Leica VT1000S) and hemisected into right and left halves through the midline before being transferred to a net submerged in an N-methyl-D-glucamine (NMDG)-based ACSF composed of (in mM) 92 NMDG, 2.5 KCl, 1.25 NaH 2 PO 4 30 NaHCO 3 , 20 HEPES, 0.5 CaCl 2 , 10 MgSO 4 , 2 thiourea, 5 Lascorbic acid, and 3 Na-pyruvate, warmed to ∼30 • C and bubbled with 5% CO 2 and 95% O 2 . After a 10 to 15 min incubation in the NMDG ACSF, slices were transferred to a chamber containing room temperature ACSF and allowed to recover for 1 h before beginning patch clamp experiments. Reagents were purchased from Thermo Fisher Scientific (Waltham, MA) unless noted otherwise.
For whole-cell recording, slices were positioned in a recording chamber on an upright fixed-stage microscope (Olympus BX51WI) and superfused by a gravity-fed system with ACSF warmed to 29-31 °C using an in-line solution heater at approximately 4 mL/min. The ACSF was supplemented with 60 µM picrotoxin. A concentric bipolar stimulating electrode was positioned in the stratum radiatum to stimulate Schaffer collateral axons using a 0.1 ms current delivered at 0.1 Hz from an isolated pulse stimulator (A-M Systems Model 2100). CA1 pyramidal neurons were targeted for whole-cell recording with patch pipettes pulled from thin-walled borosilicate glass on a Sutter P-1000 micropipette puller. The patch pipettes had a resistance of 5-8 MΩ when filled with a solution containing (in mM) 120 Cs-methanesulfonate, 10 HEPES, 8 TEA-Cl, 5 ATP-Mg, 0.5 GTP-Na2, 5 phosphocreatine, and 0.5 EGTA (pH = 7.35, osmolality = 282 mOsm). Reported voltages are corrected for a 10 mV liquid junction potential. The intensity-response profile of the evoked excitatory post-synaptic currents (EPSCs) for each cell was determined by the average of 3-10 responses obtained at each stimulus strength (50-225 µA). The AMPA/NMDA ratio was measured as the ratio of the peak of the inward EPSC recorded at −70 mV to the outward EPSC amplitude at 50 ms post-stimulus at a holding potential of +40 mV. Miniature EPSCs (mEPSCs) were recorded in the absence of stimulus and were detected and analyzed using MiniAnalysis (Synaptosoft). mEPSC frequency for each recorded cell was determined as the median of the instantaneous frequencies of all detected events for that cell. Due to variability in the number of cells patched and recorded, respective sample sizes are provided in Figure 2.
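The two summary measures defined above (AMPA/NMDA ratio and cell-level mEPSC frequency) can be expressed compactly in code. The sketch below is an illustrative reconstruction only; it assumes current traces stored as NumPy arrays sampled at a known rate and pre-detected mEPSC event times, and the names, units and synthetic data are assumptions, not the authors' analysis code.

```python
import numpy as np

def ampa_nmda_ratio(trace_minus70, trace_plus40, stim_idx, fs=10_000):
    """AMPA/NMDA ratio: peak inward EPSC at -70 mV divided by the outward EPSC
    amplitude measured 50 ms after the stimulus at +40 mV."""
    ampa = -np.min(trace_minus70[stim_idx:])           # magnitude of the inward (negative) peak
    nmda = trace_plus40[stim_idx + int(0.050 * fs)]    # outward current 50 ms post-stimulus
    return ampa / nmda

def mepsc_frequency(event_times_s):
    """Cell-level mEPSC frequency: median of the instantaneous frequencies
    (reciprocals of the inter-event intervals) of all detected events."""
    intervals = np.diff(np.sort(np.asarray(event_times_s)))
    return float(np.median(1.0 / intervals))

# toy usage with synthetic traces sampled at 10 kHz (values in pA, illustrative only)
t = np.arange(0, 0.2, 1 / 10_000)
minus70 = -50 * np.exp(-(t - 0.01).clip(0) / 0.01) * (t > 0.01)   # fast inward EPSC
plus40 = 20 * np.exp(-(t - 0.01).clip(0) / 0.05) * (t > 0.01)     # slower outward EPSC
print(ampa_nmda_ratio(minus70, plus40, stim_idx=100))
print(mepsc_frequency([0.05, 0.30, 0.61, 0.95, 1.40]))
```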
Total RNA Extraction, Quality Control, Library Preparation, and RNA-Seq
Total RNA from prefrontal cortex (PFC) tissue was isolated from the randomly selected pups (n = 6) from each treatment group at P14 using the Direct-Zol RNA kit (Zymo Research, CA, USA) based on the manufacturer's protocol. Samples were sent to UNMC's Next Generation Sequencing (NGS) core. RNAseq libraries were generated beginning with 1 ug of total RNA from each sample using the TruSeq V2 RNA sequencing library kit from Illumina following recommended procedures (Illumina Inc., San Diego, CA). Resultant libraries were assessed for size of insert by analysis of an aliquot of each library on a BioAnalyzer instrument (Agilent Technologies, Santa Clara, CA). Each library contained a unique indexing identifier barcode allowing the individual libraries to be multiplexed together for efficient sequencing. Multiplexed libraries (18 samples per pool) were sequenced on a single flow cell of the NextSeq550 DNA Analyzer (Illumina) to generate a total of ∼28 million 75 bp single reads for each sample.
Bioinformatics
Differentially expressed genes (up-and downregulated) between SAL and PNO, SAL and IUO, and PNO and IUO were chosen for further functional characterization using ClueGO plug-in module (Bindea et al., 2009) in Cytoscape software (Shannon et al., 2003). The "biological process" option in Clue-Go analysis was used to visualize the categories of DEG functions in each comparison.
Von Frey
Von Frey experiments were conducted (n = 4 per group) at P17, and the same animals were tested at P75. The test commenced once a rat had placed all four paws comfortably on the mesh floor and the plantar surfaces were clearly visible. The examiner randomly picked the left or right hind paw as the first paw evaluated during each assessment. A Von Frey monofilament was applied perpendicular to the plantar surface until the hair buckled, and it was held in that position for 5 s. The specific forces chosen were: 0.6, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0, 10.0, and 15.0 g. The cut-off force was set at 15.0 g because the paw would be lifted if the next force (26.0 g) were applied. During the measuring process, each force was applied 10 times with an interval of at least 5 s to allow the animal to recover from the previous stimulus. Noxious responses were scored if any of the following robust reflex responses occurred: paw retraction, paw withdrawal, or paw licking. Once there were 4 positive responses in 10 applications, that force was recorded as the mechanical withdrawal threshold. The mechanical withdrawal threshold was recorded for both paws.
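The threshold rule described above (ascending forces, 10 applications per force, threshold = lowest force eliciting at least 4 withdrawal responses, otherwise the cut-off force) can be sketched as follows; the response data here are fabricated purely to illustrate the rule.

```python
FORCES_G = [0.6, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0]
CUTOFF_G = 15.0

def withdrawal_threshold(responses_per_force, positive_needed=4):
    """Return the lowest force (g) eliciting >= 4 positive responses out of 10
    applications; if no force reaches the criterion, report the cut-off force."""
    for force in FORCES_G:
        positives = sum(responses_per_force.get(force, []))
        if positives >= positive_needed:
            return force
    return CUTOFF_G

# toy usage: 1 = withdrawal/retraction/licking, 0 = no response (10 trials per force)
example = {
    4.0: [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],   # 3/10 -> below criterion
    6.0: [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],   # 5/10 -> criterion reached
}
print(withdrawal_threshold(example))        # 6.0
```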
Statistical Analyses
All data represented in the manuscript are reported as mean ± SEM. Data in each analysis were normally distributed. Significant differences were computed using Welch's t-test (electrophysiology) and two-way ANOVA (Von Frey and MRI/MRS data) followed by Tukey's test, with a significance criterion of p ≤ 0.05. All statistical tests were performed with GraphPad Prism (La Jolla, CA, USA), and data are represented as mean ± SEM on the graphs.
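For readers who prefer a scripted workflow, the following is a sketch of an equivalent two-way ANOVA followed by Tukey's post-hoc test. The study itself used GraphPad Prism; the Python/statsmodels code below, the long-format layout and the fabricated data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# fabricated long-format data: one row per animal x metabolite, illustrative only
rng = np.random.default_rng(0)
rows = []
for treatment in ["SAL", "PNO", "IUO"]:
    for metabolite in ["GLU", "ASP", "TAU", "NAA"]:
        for _ in range(6):                       # n = 6 animals per group
            rows.append({"treatment": treatment,
                         "metabolite": metabolite,
                         "concentration": rng.normal(5, 1)})
df = pd.DataFrame(rows)

# two-way ANOVA (metabolite x treatment) followed by Tukey's post-hoc test on treatment
model = ols("concentration ~ C(metabolite) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["concentration"], df["treatment"], alpha=0.05))
```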
Quantitation of Metabolites in Oxy-Exposed Pups
Brain metabolites are spatiotemporally regulated during development (Miyazawa and Aulehla, 2018). However, it is unknown whether IUO or PNO exposure influences the expression levels of these metabolites in the offspring. Accordingly, we conducted 1 H-MRS scans on the brain hippocampus of postnatal day 17 (P17) saline, PNO, and IUO groups (Figure 1). We found that IUO or PNO treatment did affect metabolite concentrations in these animals [Metabolite: F (13,208) = 96.75, P < 0.0001; Treatment: F (2,208) = 5.520, P = 0.0046; Interaction: F (26,208) = 2.093, P = 0.0023]. Specifically, we identified higher levels of neurotransmitter aspartate (ASP) and glutamate (GLU) in both PNO and IUO groups. However, N-acetyl aspartate (NAA), the second most abundant metabolite in the brain, was significantly elevated in the IUO group. Additionally, taurine (TAU) concentration was elevated in the PNO group but was significantly lower in the IUO offspring compared to controls. Together, these data point to alterations in key metabolite levels in both the PNO and IUO groups that are more pronounced in the latter.
Synaptic Alterations in IUO and PNO Offspring
CA1 synapses were monitored in hippocampal slices of control, PNO, and IUO rats (Figure 2). Input-response curves showed that the post-synaptic currents in the PNO group cells were smaller than control (p = 0.02, p = 0.017; Figure 2A). Although the AMPA/NMDA ratio in PNO rats appeared reduced compared to controls, the difference was not significant (p = 0.07; Figure 2A). The paired pulse ratio (PPR) of postsynaptic currents did not significantly differ among the three groups, suggesting no change in presynaptic vesicle release probability (Figure 2A). When measuring miniature excitatory post-synaptic currents (mEPSCs), we found that, although the frequency and amplitude of the currents did not differ between groups, the PNO mEPSCs had slightly faster decay kinetics (p = 0.0048; Figure 2B), which is consistent with altered AMPA receptor subunit composition. Together, these data point to altered synaptic maturation in the PNO offspring.
RNA-Seq Highlights Gene Expression Changes in Oxy-Exposed Pups
To further understand the molecular causes associated with the changes in synaptic currents, we performed RNA-seq analysis on the prefrontal cortex (PFC) (Figure 3A; Supplementary Table 2). Employing a criterion of 1.5-fold change and p < 0.05, we found 62 differentially expressed genes (20 up and 42 down) between saline and PNO and 161 genes (78 up and 83 down) between saline and IUO. When comparing the PNO and IUO groups, we found 1,465 genes (1,199 up and 266 down). We found that three genes (Sytl2, Synaptotagmin-like 2; Vwa5b1, Von Willebrand Factor A Domain-Containing Protein 5B1; and a predicted gene, AABR07042623.1) were differentially regulated among all three groups.
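The selection criterion stated above (at least a 1.5-fold change in either direction and p < 0.05) amounts to a simple filter over the differential-expression table. The sketch below applies it to a small hypothetical table; the column names and gene entries other than those named in the text are assumptions introduced for the example.

```python
import pandas as pd

# hypothetical differential-expression results, one row per gene (illustrative values)
de = pd.DataFrame({
    "gene":        ["Sytl2", "Vwa5b1", "Oprm1", "Gria1"],
    "fold_change": [1.8,      0.55,     1.1,     2.3],     # treatment vs. saline
    "p_value":     [0.01,     0.03,     0.40,    0.004],
})

# keep genes with at least a 1.5-fold change (up or down) and p < 0.05
up   = de[(de.fold_change >= 1.5) & (de.p_value < 0.05)]
down = de[(de.fold_change <= 1 / 1.5) & (de.p_value < 0.05)]
print(len(up), "up-regulated;", len(down), "down-regulated")
```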
Next, the enriched biological pathways associated with these differentially expressed genes were determined using ClueGO analysis (Figure 3B; Supplementary Table 3). Notably, pathways involved in synaptic transmission and in morphine, nicotine, and alcohol addiction were significantly enriched in the two treatment groups. Furthermore, the opioid signaling pathway was affected in the PNO and IUO offspring. A number of the genes affected in the IUO and PNO pups were associated with diseases and psychotic disorders (Table 1; Supplementary Table 4). To summarize, oxy exposure can significantly affect neurodevelopment in exposed offspring by inducing changes in key genes and pathways during synaptogenesis that could persist into late adolescence and adulthood.
Pain Sensitivity in Oxy-Exposed Pups
One key pathway we identified from our RNA-seq analysis was regulation of sensory perception of pain. Because oxy is prescribed for pain management, we investigated whether PNO or IUO exposure has lasting effects on pain sensitivity. Von Frey testing was conducted at P17 (pups exposed to oxy via the breastmilk) and on the same animals at P75 (adulthood) after a sustained absence of oxy exposure (Figure 4). While no significant differences in the pain threshold were observed in the PNO or IUO groups at P17, both groups displayed a significantly lower pain threshold than controls at P75 [Age: F (3,242) = 455.3, P < 0.0001; Treatment: F (2,242) = 88.13, P < 0.0001; Interaction: F (6,242) = 31.18, P < 0.0001]. These data suggest a lasting impact of early life oxy exposure on pain sensitivity during adulthood.
FIGURE 1 | In all groups, n = 6 for all metabolites except LAC (n = 4 in the IUO group). All data represented as mean ± SEM. *p < 0.05, **p < 0.01, ****p < 0.0001.
DISCUSSION
While previous studies have reported poor neurodevelopmental outcomes in offspring exposed to opioids (Davis et al., 2010;Devarapalli et al., 2016;Sithisarn et al., 2017;Fan et al., 2018), a comprehensive analysis comparing changes in neurodevelopment with pre-and post-natal exposure to drugs has not been evaluated. In the present study we show for the first time a comparative analysis on alterations in metabolic, synaptic, molecular, and behavioral changes in offspring exposed to the oxy pre-and post-natally. As mentioned in our previous works, the PNO and IUO groups are clinically relevant (Shahjin et al., 2019;Odegaard et al., 2020). The use of both PNO and IUO groups in this study was critical to elucidate the extent of oxy exposure effects on neonates. While the PNO group was exposed to oxy only via the breastmilk, the IUO group was exposed via placental concentrations of oxycodone throughout gestation as well as via the breastmilk. Oxy as a postoperative analgesic has been reported in the literature for postpartum pain or caesarian sections in lieu of morphine drips (Niklasson et al., 2015;Nie et al., 2017). It is important to note that offspring exposed to oxy via the breastmilk may receive <10% of a typical oral therapeutic infant dose (0.1-0.2 mg/kg) (Seaton et al., 2007). Despite this low dose, infant exposure to oxy via the breastmilk has been associated with sedation and central nervous system depression (Lam et al., 2012), and a number of animal studies have also revealed deficits in behavior and development associated with perinatal opioid exposure (Davis et al., 2010;Devarapalli et al., 2016;Sithisarn et al., 2017;Fan et al., 2018).
The effects of pre- and post-natal oxy use on brain chemistry have not been clearly elucidated. Accordingly, we used 1H-MRS to investigate the biochemical changes present in the hippocampus of P17 rats in the PNO and IUO groups, revealing significant alterations in key brain metabolites in these groups. The IUO group had higher concentrations of the neurotransmitters ASP and GLU compared to controls and higher concentrations of NAA compared to controls and PNO. The IUO group also had lower concentrations of TAU than controls. GLU and ASP regulate a majority of the excitatory synaptic neurotransmission in the brain (Ballini et al., 2008), and their enhanced expression may point to excitotoxicity and possibly enhanced excitatory signaling in the brain. Additionally, higher concentrations of ASP may suggest more ASP is available to react with acetyl-CoA to make NAA (Hajek and Dezortova, 2008), which was also elevated in the IUO group. Interestingly, TAU plays a key role in brain development, and its deficiency can lead to a delay in cell differentiation and migration in certain brain areas such as the cerebellum, pyramidal cells, and visual cortex (Ripps and Shen, 2012). Further, Hernandez-Benitez et al. have shown that TAU promotes neural development in the embryonic brain as well as in adult brain regions (Hernández-Benítez et al., 2010). The lower levels of TAU we observed may point to neurodevelopmental deficits. Intriguingly, we have shown that both IUO and PNO animals display an overall reduction in head circumference (Odegaard et al., 2020), which may be attributed in part to lower concentrations of TAU.
Based on the observation of increased GLU levels in our MRS study, we investigated the extent of synaptic changes in IUO and PNO animals. Glutamate receptors play a role in mediating the reward pathway involved in drug addiction (D'Souza, 2015), and they are also involved in opiate-induced neural and behavioral plasticity (Jackson et al., 2000;Trujillo, 2000;Zhu and Barr, 2004). AMPA receptors, one type of ionotropic glutamate receptor, are crucial for opioid withdrawal during development (Jakowec et al., 1995a,b;Fitzgerald et al., 1996;Washburn et al., 1997;Mahanty and Sah, 1998;Ozawa et al., 1998). While we saw a reduction in the AMPA/NMDA ratio in the PNO rats, the difference was not significant. FIGURE 2 | Evoked EPSC and mEPSC monitored in hippocampal slices of control, PNO, and IUO rats. (A) Evoked EPSC data; Top: A series of EPSCs recorded in each of the three treatment conditions in response to a series of stimulus strengths (50-225 µA). Stimulus timing is marked above the traces. Group data show the intensity-response profiles for cells recorded from Saline, PNO, and IUO rats. Middle: AMPA receptor and NMDA receptor-mediated EPSCs recorded by voltage-clamping CA1 pyramidal cells at −70 and +40 mV, respectively. The AMPA receptor component was measured at the peak of the inward EPSC while the NMDA receptor component was measured 50 ms post-stimulus. Group data of AMPA/NMDA ratios show there was no significant difference between the groups. Bottom: Paired pulse traces showing the characteristic synaptic facilitation in response to a pair of pulses separated by 50 ms. Group data show the paired pulse ratio was not significantly different between treatment conditions. (B) Traces of mEPSCs recorded in the absence of stimulation followed by mEPSC waveforms from individual cells. These were obtained by averaging all detected events in individual cells. Group data show that mEPSC amplitudes and frequencies were not significantly different between treatment groups. Group data of mEPSC decay time constants (τ decay) show that mEPSCs in the PNO group decayed more quickly than in control. Sample sizes of recorded cells are shown in the bars of each graph; all data represented as mean ± SEM. Additionally,
there were no differences in the PPR of post-synaptic currents, suggesting no changes in vesicle release. Intriguingly, PNO mEPSCs had slightly faster decay kinetics, which is consistent with altered AMPA receptor subunit composition. Interestingly, no significant effects were seen in the IUO group. Possible reasons include the potential loss of neurons given the longer exposure to oxy (Hu et al., 2002;Hauser and Knapp, 2017) and the higher glutamate levels in the IUO pups compared to the PNO group, as evidenced by 1 H-MRS. Thus, our study for the first time lends insight into the synaptic changes associated with PNO exposure and its effects on altered glutamatergic signaling.
Recent studies employing high-throughput technologies have further provided new inroads in elucidating the molecular underpinnings associated with long-term oxy dependency. These include alterations in key genes associated with integrated stress response in the brain (Fan et al., 2015), induction of apoptotic signaling in neurons by promoting demyelination (Fan et al., 2018), alterations in reward-related genes, axon guidance molecules, inflammation/immune-related genes (Zhang et al., 2017), neurotransmitter receptor genes (Zhang et al., 2014), and synaptic plasticity genes (Zhang et al., 2015), including key sex-specific neuroplasticity-related genes. Similarly, our RNA-seq data showed alterations in pathways associated with synaptic transmission, axon guidance, inflammasomes, and genes associated with the reward system. Among others, genes affecting synaptic transmission and axon development included Egfr, Adrb2, and Ntrk. Interestingly, Fan et al. found that chronic oxy exposure leads to axonal degeneration in rat brains (Fan et al., 2018). Chronic oxy exposure altered the white matter of the rats via deformation of axonal tracts, reduced size of axonal fascicles, loss of myelin basic protein, and accumulation of the amyloid precursor protein (Fan et al., 2018). Importantly, human studies of infants prenatally exposed to opioids have shown alterations in the white matter, such as punctate white matter lesions or white matter signal abnormalities on structural MR imaging (Walhovd et al., 2012;Merhar et al., 2019). The results from our RNA-seq analysis align with these previous observations from both animal models and human studies. Interestingly, our RNA-seq data showed differences in glutamatergic synapse genes, with seven genes being differentially expressed in the IUO and PNO groups: Adrb2, Egfr, Grik2, Npy2r, Ntrk1, Ntrk2, and Oxtr. Combined with our electrophysiology results, our RNA-seq results further suggest alterations in glutamatergic signaling within the reward pathway of these exposed offspring. In addition, our studies suggest PNO and IUO exposure not only alter gene expression but also may increase the risk of developing other diseases, particularly renal disease. Genes associated with renal adysplasia, renal cancers, renal failure, and several other renal diseases were differentially regulated in the PNO and IUO groups. Interestingly, depletion of TAU, such as that reported in our IUO group MRS data, has been shown to play a role in renal dysfunction (Ripps and Shen, 2012). In human studies, opioid use has been associated with acute kidney injury, particularly in the case of opioid overdose (Mallappallil et al., 2017). FIGURE 4 | Measurement of pain thresholds using the Von Frey test. Animals from each group (n = 4) were tested at P17 and again at P75 to determine changes in pain thresholds. At P75, the oxy-exposed groups had lower pain thresholds than controls. Additionally, IUO pups had lower pain thresholds than PNO pups. All data represented as mean ± SEM. ***p < 0.001, ****p < 0.0001. The altered gene expression and lower levels of TAU in the reward system of the IUO offspring during early development may warrant
further exploration into the potential of these offspring to develop renal diseases as adults. Further, several genes in our analysis are also involved in other substance use-related disorders, such as nicotine, cocaine, and cannabis dependence, morphine addiction, and fetal alcohol spectrum disorders. Opioid use has also been associated with mental health disorders, with a higher proportion of adolescents exposed prenatally to opioids having experiences with major depressive episodes, alcohol abuse, and attention deficit hyperactivity disorder (Nygaard et al., 2019). Genes associated with depression, anxiety disorders, schizophrenia, and obsessive compulsive disorder were all enriched in both PNO and IUO offspring, suggesting a higher risk for such disorders in these offspring. Oxy is generally prescribed for pain management, and the enrichment of the sensory pain pathway from our RNA-Seq analysis was not surprising. When assessed by the Von Frey filament test at P17, the PNO and IUO pups did not exhibit a difference in pain threshold compared to saline controls. However, during adulthood (P75), these same animals, especially the IUO group, displayed significantly lower pain threshold compared to saline offspring. In a study of neonatal morphine exposure, P40 rats exhibited a lower pain threshold than the controls, but the pain threshold approached control levels by P50 when tested using Von Frey (Zhang and Sweitzer, 2008). Because we see similar results continuing up to P75, pre-and post-natal exposure to oxy may alter normal synaptic development involved in nociception. Indeed, a number of genes shown to be differentially regulated in both PNO and IUO in our RNA-seq data are associated with pain: Adrb2, Cck, Htr2c, Npy2r, Oprk1, Oprm1, and P2rx3. Further analyses of these genes could possibly lend more mechanistic insights into the pain etiology in these offspring.
In summary, our study using a holistic systems approach provides a comparative analysis of metabolic, synaptic, molecular, and behavioral alterations in offspring exposed to the prescription opioid oxy pre- and post-natally. Importantly, these changes not only impact the overall development during early stages but also persist into adulthood.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih. gov/geo/, GSE159563.
ETHICS STATEMENT
The animal study was reviewed and approved by Institutional Animal Care and Use Committee of the University of Nebraska Medical Center (UNMC).
AUTHOR CONTRIBUTIONS
KO: animal treatments and maintenance, interpreted results, and drafted and edited manuscript. VS: animal treatments and maintenance, project organization, and scheduling. AC and SK: animal treatments and maintenance. JS: bioinformatic analysis of RNA-seq data and data deposition. ZX and HW: Von Frey behavior testing, analysis, and figures. MM, YL, and MU: MRI/MRS acquisition and analysis. AS and MV: electrophysiology experiments, analysis, and figures. CG: bioinformatic analysis of RNA-seq data. SL: conception of the experimental design and provision of resources. GP and SY: conception of the experimental design, oversaw experiments and analyses, and edited manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by NIH grants DA049577 (GP), DA046284 (GP), DA042379 (SY), DA046852 (GP and SY) and departmental startup funds to GP and SY. The Bioinformatics and Systems Biology Core at UNMC received support from Nebraska Research Initiative (NRI) and NIH (5P20GM103427; 5P30CA036727; 5P30MH062261) for the bioinformatics analysis performed in this study. The University of Nebraska DNA Sequencing Core receives partial support from the National Institute for General Medical Science (NIGMS) INBRE-P20GM103427-19 grant as well as The Fred & Pamela Buffett Cancer Center Support Grant-P30 CA036727. | 2021-01-08T20:08:48.489Z | 2021-01-07T00:00:00.000 | {
"year": 2021,
"sha1": "508a18f1e648aeaf1a43ecb4fb9cedca00506d8a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2020.619199/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "508a18f1e648aeaf1a43ecb4fb9cedca00506d8a",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55493291 | pes2o/s2orc | v3-fos-license | Geographically Weighted Regression ( GWR ) Modelling with Weighted Fixed Gaussian Kernel and Queen Contiguity for Dengue Fever Case Data
Regression analysis is a method for determining the effect of predictor variables on a response variable, yet simple regression does not consider the different properties of each location. Geographically Weighted Regression (GWR) is a pointwise technique that extends a simple regression model into a locally weighted regression model. The purpose of this study is to build Geographically Weighted Regression (GWR) models with Fixed Gaussian Kernel and Queen Contiguity weightings for dengue fever case data and to determine the best weighting, the Euclidean-distance-based Fixed Gaussian Kernel or the Queen Contiguity, based on the value of R2. The results of the study showed that in the GWR model with the Fixed Gaussian Kernel weighting all predictor variables affect the number of dengue fever patients, whereas with the Queen Contiguity weighting not all predictor variables affect the number of dengue fever patients. Based on the value of R2, the Fixed Gaussian Kernel weighting is the better choice.
INTRODUCTION
Spatial data are measurement data that contain location information. Geographically Weighted Regression (GWR) is a pointwise technique that extends a simple regression model into a locally weighted regression model [1]. A spatial weighting matrix is used to determine the closeness of the relationship between regions. The weighting plays a very important role in the GWR model because it represents the locational weight of the observed data. Weightings are grouped into those based on distance and those based on region (contiguity) [2].
GWR models often use a distance-based weighting without considering a region-based (contiguity) weighting. In general, dengue fever is clustered in specific locations. Besides the distance between locations, the state of the location, that is, its neighbouring areas (contiguity), should also be noted. Therefore, this study considers both distance and region (contiguity) as weightings in the search for the best model for the dengue case data.
Fixed Gaussian Kernel
The fixed Gaussian kernel is a weighting matrix based on the proximity of observation location i to every other location j. The fixed Gaussian kernel weight is [1]

$$w_{ij} = \exp\left[-\frac{1}{2}\left(\frac{d_{ij}}{b}\right)^{2}\right],$$

where b is the bandwidth. If location i lies at coordinates (u_i, v_i), the Euclidean distance d_ij between locations i and j is

$$d_{ij} = \sqrt{(u_i - u_j)^{2} + (v_i - v_j)^{2}}.$$

The bandwidth is the radius of a circle around the centre-point location. One method to determine the bandwidth is Cross Validation (CV) [1]:

$$CV(b) = \sum_{i=1}^{n}\left[y_i - \hat{y}_{\neq i}(b)\right]^{2},$$

where the fitted value of y_i is computed with the observation at location i omitted from the local calibration. The optimal bandwidth is the value of b that minimizes CV(b).
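A minimal numerical sketch of the fixed Gaussian kernel weighting and the CV bandwidth score defined above is shown below; the variable names and the grid-search range are illustrative assumptions, not values from this study.

```python
import numpy as np

def gaussian_weights(coords: np.ndarray, i: int, bandwidth: float) -> np.ndarray:
    """w_ij = exp(-0.5 * (d_ij / b)^2), with d_ij the Euclidean distance to location i."""
    d = np.sqrt(((coords - coords[i]) ** 2).sum(axis=1))
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def cv_score(X: np.ndarray, y: np.ndarray, coords: np.ndarray, bandwidth: float) -> float:
    """CV(b): squared prediction error when each location is left out of its own local fit."""
    sse = 0.0
    for i in range(len(y)):
        w = gaussian_weights(coords, i, bandwidth)
        w[i] = 0.0                                   # leave-one-out: drop the focal point
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sse += float((y[i] - X[i] @ beta) ** 2)
    return sse

# The optimal bandwidth minimizes CV(b), e.g. via a simple grid search:
# best_b = min(np.linspace(0.5, 5.0, 20), key=lambda b: cv_score(X, y, coords, b))
```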
Queen Contiguity
Observations at adjacent locations tend to be more similar than observations at locations far apart, because they are related through the location weights [3]. In queen contiguity weighting, w_ij = 1 if regions i and j share a common border or vertex and w_ij = 0 otherwise; the weights are then row-standardized as

$$w^{*}_{ij} = \frac{w_{ij}}{w_i},$$

where:
w_i : the sum of row i (the total number of neighbours of region i)
w_ij : the element of the weighting matrix in row i, column j
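The row standardization of a queen-contiguity matrix can be illustrated with a small, made-up adjacency matrix; the three-region example below is purely hypothetical.

```python
import numpy as np

def row_standardize(adjacency: np.ndarray) -> np.ndarray:
    """Divide each row of a binary contiguity matrix by its row sum."""
    row_sums = adjacency.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1          # isolated regions keep zero weights
    return adjacency / row_sums

# Three regions: region 0 borders regions 1 and 2, while 1 and 2 only border 0
W = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
print(row_standardize(W))   # row 0 -> [0, 0.5, 0.5]; rows 1 and 2 -> [1, 0, 0]
```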
Geographically Weighted Regression
Spatial data are measurement data that contain location information. Geographically Weighted Regression (GWR) is a pointwise technique that extends a simple regression model into a locally weighted regression model [1]. According to Yasin [4], the GWR model is

$$y_i = \beta_0(u_i, v_i) + \sum_{j=1}^{p} \beta_j(u_i, v_i)\, x_{ij} + \varepsilon_i,$$

where (u_i, v_i) are the coordinates (longitude, latitude) of point i, i.e., its geographical location, y_i is the response at location i, x_ij is the value of the j-th predictor at location i, β_j(u_i, v_i) is the local regression coefficient of the j-th predictor, and ε_i is the error term.
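The local coefficients of the GWR model above are obtained by weighted least squares at each location, beta_hat(u_i, v_i) = (X^T W_i X)^(-1) X^T W_i y, where W_i can be supplied either by the fixed Gaussian kernel or by the queen-contiguity weights; a minimal sketch, with illustrative names only, might look as follows.

```python
import numpy as np

def gwr_fit(X: np.ndarray, y: np.ndarray, weights_per_location: np.ndarray) -> np.ndarray:
    """Return an (n, k) array of local coefficients, one row per location."""
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        w = weights_per_location[i]                  # spatial weights for location i
        betas[i] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return betas

# Fitted values use each location's own coefficient vector:
# y_hat = np.einsum("ij,ij->i", X, gwr_fit(X, y, W))
```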
Testing Geographically Weighted Regression Model Parameters
The GWR model parameters are tested simultaneously and partially [5]: 1. Simultaneous testing determines the joint effect of the predictor variables on the response variable.
If H0 is true, the test statistic F* is compared with the critical value of the F distribution, whose denominator degrees of freedom are n − (p + 1). 2. Partial testing determines which predictor variables influence the response variable at each observation location, using a t test based on the hypotheses: H0: β_j(u_i, v_i) = 0; H1: β_j(u_i, v_i) ≠ 0, j = 1, 2, ⋯, p. The t test statistic can be written as

$$t = \frac{\hat{\beta}_j(u_i, v_i)}{\hat{\sigma}\sqrt{c_{jj}}},$$

where c_jj is the j-th diagonal element of the matrix CC^T.
Assessment of the Geographically Weighted Regression Model
The coefficient of determination describes how much of the variability in the response variable can be explained by the predictor variables. The GWR R2 value is obtained from the following equation [1]:

$$R^{2} = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^{2}}{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}.$$
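Given the fitted values from a GWR model, the R2 criterion above reduces to a one-line computation; the sketch below assumes y_hat was produced, for example, by the local fitting sketch shown earlier.

```python
import numpy as np

def r_squared(y: np.ndarray, y_hat: np.ndarray) -> float:
    """R2 = 1 - SSE/SST."""
    sse = np.sum((y - y_hat) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    return float(1.0 - sse / sst)
```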
RESULTS AND DISCUSSION
The results of the Breusch-Pagan test statistics with both weightings are presented in Table 1.
Table 1. Testing Result Breusch-Pagan
Table 1 shows that, compared with the critical point of the χ2 test at significance level 0.05 and (p + 1) degrees of freedom, which is 11.707, H0 is rejected, so it can be concluded that there is spatial heterogeneity in the dengue case data.
Next, the GWR model parameters were tested simultaneously; the test results are presented in Table 2.
Table 2. Model Parameter Testing Results GWR Simultaneous
Table 2 shows that, with the Fixed Gaussian kernel weighting, the predictor variables simultaneously affect the response variable, since the computed F statistic exceeds the critical value F_0.05;(2,31) = 3.305, whereas, with the Queen Contiguity weighting, the predictor variables have no simultaneous effect on the response variable, since the computed F statistic does not exceed the critical value F_0.05;(3,30) = 2.922.
The partial tests show that, with the Fixed Gaussian kernel weighting, all predictor variables affect the response variable at each location, whereas, with the Queen Contiguity weighting, not all predictor variables affect the response variable at each location.
A comparison of the two methods was performed to determine the best weighting. The criterion for selecting the best weighting, based on R2, is presented in Table 3. The best weighting is the one with the largest R2 value.
Table 3. Comparison of R2 of the GWR Models
The R2 value for the model with the Fixed Gaussian kernel weighting is larger than that for the Queen contiguity weighting, so it can be concluded that the Fixed Gaussian kernel weighting is better suited to the dengue case data in this study.
"year": 2017,
"sha1": "7ce84a02a6b89edd9d074e0fb0b38d3c2ccf38b0",
"oa_license": "CCBYSA",
"oa_url": "http://ejournal.uin-malang.ac.id/index.php/Math/article/download/4393/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7ce84a02a6b89edd9d074e0fb0b38d3c2ccf38b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
140310280 | pes2o/s2orc | v3-fos-license | Modeling of Protein–Protein Interactions in Cytokinin Signal Transduction
The signaling of cytokinins (CKs), classical plant hormones, is based on the interaction of proteins that constitute the multistep phosphorelay system (MSP): catalytic receptors—sensor histidine kinases (HKs), phosphotransmitters (HPts), and transcription factors—response regulators (RRs). Any CK receptor was shown to interact in vivo with any of the studied HPts and vice versa. In addition, both of these proteins tend to form a homodimer or a heterodimeric complex with protein-paralog. Our study was aimed at explaining by molecular modeling the observed features of in planta protein–protein interactions, accompanying CK signaling. For this purpose, models of CK-signaling proteins’ structure from Arabidopsis and potato were built. The modeled interaction interfaces were formed by rather conserved areas of protein surfaces, complementary in hydrophobicity and electrostatic potential. Hot spots amino acids, determining specificity and strength of the interaction, were identified. Virtual phosphorylation of conserved Asp or His residues affected this complementation, increasing (Asp-P in HK) or decreasing (His-P in HPt) the affinity of interacting proteins. The HK–HPt and HPt–HPt interfaces overlapped, sharing some of the hot spots. MSP proteins from Arabidopsis and potato exhibited similar properties. The structural features of the modeled protein complexes were consistent with the experimental data.
Introduction
Cytokinins (CKs) are low molecular weight phytohormones that regulate a plethora of physiological processes in higher plants [1,2] (Figure S1). Plants use a modified prokaryotic two-component system (TCS), referred to as multistep phosphorelay (MSP), to transduce CK signal ( Figure 1). This plant signaling system involves three main types of proteins. The first one is the multi-domain transmembrane CK receptor protein, hybrid sensor histidine kinase (HK). Its extracytosolic part facing the endoplasmic reticulum (ER) lumen or apoplast [3] represents the sensor module (SM), which includes dimerization interface, PAS (Per-Arnt-Sim) and PAS-like domains [4,5]. Transmembrane (TM) regions are represented by two to five α-helical stretches depending on receptor type [5]. The cytosolic part of the receptor includes the catalytic module, which consists of the HisKA (DHpD) domain, the ATP-binding domain and the receiver domain. The second protein type in the CK signaling pathway is the conserved histidine-containing phosphotransfer protein, or phosphotransmitter (HPt or HP), which shuttles between cytoplasm and nucleus ( Figure 1). The third and the last protein in the MSP is the response regulator (RR), a transcription factor containing a conserved phosphoaccepting Asp residue [1,[10][11][12].
The ligand binding in the PAS domain is assumed to be coupled with the dimerization and activation of the receptor [1,5,[10][11][12]. CK signal transduction proceeds via the His-Asp phosphorelay of the prokaryotic type [13]. In the ATP-binding domain of the activated receptor, hydrolysis of ATP occurs, and the released phosphate ion binds to a conserved His residue in the HisKA domain, forming a phosphorylated His (His-P); the phosphoryl group is then transferred to the conserved Asp residue of the receptor receiver domain and further, via the HPt protein, to the RR.
Dimerization Interfaces of the Sensor Module
The sensor module (SM) of CK receptor is the extracytosolic part of this transmembrane protein. The crystal structure of AHK4sm is the only CK-receptor domain structure experimentally solved so far [4]. SM consists of dimerization subdomain, formed by a long (pivotal) α1-helix, which seems to be an extension of the upstream TM helix, short α2-helix and loop between these helices; large CHASE domain, including PAS and PAS-like subdomains, and short downstream linker adjacent to the downstream TM helix [5]. The SMs of AHK4 were crystallized as homodimers with different CKs bound; no apo-form structure without a bound hormone was obtained [4]. Therefore, all SM models obtained using AHK4 crystal structure as a template correspond to the ligand-bound state of the receptor, regardless of the presence or absence of ligand in the model itself.
Taking this circumstance into account, homology models of dimeric SM structures of A. thaliana (AHK2-4) and S. tuberosum monoploid var. Phureja (StHK2-4) were built (Figure 2A-D, Supplementary dataset 1), and dimerization interfaces of SM homo- and heterodimers of Arabidopsis and SM homodimers of potato were analyzed (a total of nine complexes). Target proteins were at least 61% identical (Table S1). All models had acceptable Ramachandran plot statistics (Table S2). Only the membrane-distal part of SM dimerization subdomain participated in the formation of protein-protein interface (PPI). The interface area of modeled complexes ranged between 946 Å 2 and 1044 Å 2 depending on dimer subunit composition. Most of SM interface aa were highly conserved and identical in Arabidopsis and potato ( Figures 2B and 3A), but there are a few variable residues on the periphery of the interface. The SM interfaces included a hydrophobic core with at least two aromatic residues (Phe and Tyr) in the center, while the interface periphery was mostly hydrophilic (Figures 2C and 3B). This type of hydrophobicity spreading with a hydrophobic core and a hydrophilic rim is common for protein-protein interfaces [31]. Distribution of electrostatic potentials over the interface surfaces was also studied. This distribution had common as well as some specific traits for studied proteins ( Figure 2D). For example, AHK3 had two positively charged aa, R303 and K162, in the distal part of the interface; negatively charged E182 in the proximal part; and a group of positively (R170 and R178) and negatively (D168 and E174) charged residues in the lateral area ( Figure 3C). Three of these positions (R303, R170, and E174) are variable, as well as G161, which can be replaced by His in paralogous HKs ( Figure 3A).
To find critical aa (hot spots) and their interactions in the interfaces determining strength and specificity of CK receptors dimerization, virtual alanine scanning was performed for Arabidopsis and potato SM. Residues that changed the free energy ΔG of the dimer by more than 2 kJ/mol upon conversion to Ala were considered as hot spots. Robetta scanning for all the homo-and heterodimers revealed two aa positions that were hot spots in both chains of all complexes: F158 and Y175, according to AHK3 numbering. Some of the positions were hot spots in most complexes, at least in one chain: H147, K162, T171, R178 and E182 (Table S5). For AHK3sm homodimer, additional investigation of interface residues was performed, including KFC2 and PPCheck hot spot prediction (Table S6) and calculation of buried surface area (BSA) percent with regard to accessible surface area (ASA) ( Table S7). All of the above-mentioned positions were conserved (except K162) and confirmed The dimerization interfaces bore 5-11 hydrogen bonds and up to five putative salt bridges according to PISA software (Table S3). Hydrophobic p-values, a measure of hydrophobicity degree, differed for studied Arabidopsis (0.35-0.52) and potato (0.2-0.25) SM dimers, indicating a higher specificity of interaction in the dimers from S. tuberosum [32].
To find critical aa (hot spots) and their interactions in the interfaces determining strength and specificity of CK receptors dimerization, virtual alanine scanning was performed for Arabidopsis and potato SM. Residues that changed the free energy ∆G of the dimer by more than 2 kJ/mol upon conversion to Ala were considered as hot spots. Robetta scanning for all the homo- and heterodimers revealed two aa positions that were hot spots in both chains of all complexes: F158 and Y175, according to AHK3 numbering. Some of the positions were hot spots in most complexes, at least in one chain: H147, K162, T171, R178 and E182 (Table S5). For AHK3sm homodimer, additional investigation of interface residues was performed, including KFC2 and PPCheck hot spot prediction (Table S6) and calculation of buried surface area (BSA) percent with regard to accessible surface area (ASA) (Table S7). All of the above-mentioned positions were conserved (except K162) and confirmed (except K162 and R178) as hot spots by KFC2 calculation results for AHK3 homodimer (Table S6). Moreover, all of these residues, except K162 and R178, had a high BSA percentage of more than 85%, according to PISA calculation (Table S7). F158 can participate in a π-π stacking interaction with the corresponding residue of the dimer counterpart. When considering AHK3 homodimer, Y175:B ("B" after colon means subunit B) may form the hydrogen bonds with the backbone oxygen of A150:A ("A" after colon means subunit A); H147 formed stacking contacts with H147 of the dimer partner; K162 may interact with D168 of the other subunit via salt bridges and also may form hydrogen bonds with T171; R178 may form hydrogen bond with counterpart's S156; E182 may interact with the partner's N146 ( Figure 4). Hot spots in S. tuberosum SM dimers were similar to the Arabidopsis ones, especially when comparing the orthologs. This is easily explained by the high identity percent between orthologous SM, 78-80% (Table S8). In general, this also applies to bond composition formed by hot spots in potato SM dimerization interfaces. However, according to PISA analysis, the StHK2sm homodimer was distinguished by the absence of salt bridges in the PPI interface (Table S3).
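The per-residue BSA percentage used above (the fraction of a residue's accessible surface that becomes buried upon dimer formation) can be approximated outside of PISA, for example with Biopython's Shrake-Rupley solvent-accessibility implementation; the sketch below is only an approximation of that workflow, and the PDB file name, chain identifier, and the 85% cutoff in the usage comment are placeholders.

```python
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

def bsa_percent(pdb_file: str, chain_id: str) -> dict:
    """Map (residue number, name) to the percent of its surface buried in the dimer."""
    model = PDBParser(QUIET=True).get_structure("dimer", pdb_file)[0]
    sr = ShrakeRupley()

    sr.compute(model, level="R")                     # ASA of each residue in the complex
    asa_complex = {res: res.sasa for res in model[chain_id]}

    sr.compute(model[chain_id], level="R")           # ASA of the isolated chain
    out = {}
    for res, complexed in asa_complex.items():
        free = res.sasa
        if free > 0:
            out[(res.id[1], res.get_resname())] = 100.0 * (free - complexed) / free
    return out

# Hypothetical usage:
# buried = bsa_percent("ahk3sm_dimer.pdb", "A")
# hot_spot_candidates = {k: v for k, v in buried.items() if v > 85}
```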
2D maps obtained using MolSurfer allowed us to investigate hydrophobicity and electrostatic potential of interfaces in the modeled MSP complexes ( Figure S2). Profiles of hydrophobicity were very similar in all Arabidopsis and potato SM dimers. PQR files (see Section 3.3) were prepared for electrostatic potential determination for all dimers at two different pH values. Electrostatic potential complementarity at pH 5.5, typical for apoplast medium, was less than at pH 7.1, which mimicked ER medium ( [3] and refs therein). Hence, neutral pH inside the cell should favor SM dimerization as compared to acidic pH in the apoplast. These results are consistent with the experimental data showing ER as the main compartment for CK receptor dimerization [30]. Some differences in the subunit surface complementarity were noticed between homodimers at pH 7.1 as well. Arabidopsis AHK2sm-AHK2sm and AHK3sm-AHK3sm homodimers were clearly complementary, whereas AHK4sm-AHK4sm homodimer, all heterodimers, and potato homodimers were complementary to a lesser extent. However, with any paralog combination, the complementarity areas seem to be large enough to ensure the dimer formation.
To test the assumption of the negative effect of decreasing pH on complementarity of SM dimerization interfaces, calculations with a wider range of pH values were made for a distinct complex, AHK3 homodimer. Calculations with PQR files prepared in six different pH conditions corresponding to the dispersion of pH values in the apoplast (4.5, 5.5, and 6.5) and the ER (7.0, 7.5, and 8.0) were performed. The results showed ( Figure 5) a high complementarity at pH 7-8 and a presence of large non-complementary areas at pH values lower than 6.5 (especially at pH 4.5), thus confirming our assumption.
Figure 5 caption (fragment): For electrostatic potential similarity, blue marks the most similar zones and red the most dissimilar zones; greatest dissimilarity (red) means greatest electrostatic potential complementarity.
HisKA (DHpD) Domain Dimerization Interfaces
A set of nine complexes of A. thaliana and S. tuberosum HisKA (DHpD) domain dimers were generated with the same combinations of protein pairs as for SMs (Table S2, Supplementary dataset 1). HisKA domains of modeled proteins shared 35-41% sequence identity with corresponding domain of ERS1 template (Table S1).
Unlike the SM, the HisKA domain was involved in dimerization along its entire length. Cytosolic HisKA domain consists of a long α1-helix and smaller α2-helix located in the membrane-distal part of the interface. This distal part, formed by two helices, is more conserved than the proximal part and consists mostly of hydrophobic aa. The proximal part of the dimeric interface, formed only by the α1-helix, is highly variable and involves both hydrophilic and hydrophobic residues ( Figure 6).
Interaction interfaces of the HisKA domain of the receptor covered a much larger area than the other interfaces considered here. Contact area of the investigated HisKA domain interfaces ranged between 1977 and 2267 Å 2 . HisKA domain dimers were twist-shaped and their interfaces included conserved and variable parts. At the same time, the number of hydrogen bonds was smaller than in the other interfaces (5-8 depending on subunit).
Thus, substantial differences between SM and HisKA dimerization properties were revealed. HisKA dimers had larger interface area and higher average interaction energy. The number of charged-charged and polar-polar contacts was pretty similar and the number of charged-polar contacts in HisKA dimers was even less than in SMs despite a much higher total number of contacts in HisKA. At the same time, apolar-apolar contacts in some cases amounted to more than half of the total contacts in HisKA complexes, whereas, in SM dimers, this proportion was much lower. Differences were also evident in the hydrophobicity pattern: HisKA dimers had a large hydrophobic zone in the distal part of the interface, whereas in SM dimerization interfaces, hydrophobic and hydrophilic areas were distributed fairly evenly. Interestingly, among all studied MSP dimers, the largest number of intermolecular contacts and the highest value of the interaction energy were inherent to HisKA and SM homodimers of the same protein, AHK3. Hydrophobic p-value of HisKA dimers (ranged between 0.06 and 0.25) was lower than that of SM dimers (0.20-0.52), making evident the higher hydrophobicity of the HisKA complex interface.
Despite the fact that HisKA dimers had large interface areas and a lot of intermolecular contacts, Robetta alanine scanning revealed only a small number of hot spots in these complexes (Table S9). There were no residue positions that were detected as presumable hot spots for all complexes. AHK3 homodimer had only two hot spots detected in chain (subunit) A and three ones in chain (subunit) B. K451 and Q485 positions, which were hot spots in both chains of AHK3 HisKA homodimers, were also hot spots in a few other complexes. Only K451:A of AHK3 HisKA homodimer was confirmed as a hot spot by PPCheck and KFC results; in addition, unlike Q485, K451 residue was shown to be very conserved (Table S10). S. tuberosum HisKA dimers differed from the Arabidopsis ones in alanine scanning. For example, eight hot spot positions were detected for chain B of StHK3 HisKA homodimer; this was much more than in other complexes. Three of these positions were unique for this dimer: F407, D469 and L489 (corresponding to AHK3's I420, D482 and L502, respectively).
MolSurfer maps for hydrophobic and electrostatic complementarity of interfaces were obtained using PQR files prepared for pH 7.3, close to that in the cytosol ( Figure S3). Besides hydrophobic complementarity, all of the HisKA dimeric interfaces were complementary in electrostatic potential as well. These observations were equally true for the Arabidopsis and potato CK receptors. Thus, the possibility of CK receptors to form heterodimers (consisting of paralogs) as well as homodimers was confirmed.
Receptor-Phosphotransmitter Interactions
Homology models for all combinations of complexes of Arabidopsis phosphotransmitters AHP1-3 bound to receiver domains of Arabidopsis CK receptors AHK2-4 were built, and the AHK5rd-AHP1 model served as a control (compared to the AHK5rd-AHP1 crystal structure, PDB ID: 4EUK). A total of 10 A. thaliana complexes were modeled ( Figure 7, Table S2, Supplementary dataset 1). Receiver domains of S. tuberosum CK receptors StHK2-4 were modeled as complexes with StHP1a phosphotransfer protein, a total of three complexes ( Figure 7, Table S2, Supplementary dataset 1). The structures of the receiver domains of CK receptors include a five-stranded parallel β-sheet surrounded by five main (and often one additional) α-helices. Phosphotransfer proteins from Arabidopsis comprise six α-helices and lack β-strands. In the complex, the α1 helix of the receiver domain was the closest to the phosphotransmitter domain, forming a prevailing part of the receptor-phosphotransmitter interface. The phosphorylated Asp residue (D941 in AHK3rd) resided in the other region of the AHKrd molecule, namely, at the edge of the β3 strand adjacent to the loop L5. This site was accessible for the phosphoaccepting His residue (H82 in AHP2) of the phosphotransmitter. In the latter, three α-helices (α2, α3 and α4) of a total of six were involved in the formation of the interaction interface.
According to the PISA assessment, the modeled MSP complexes differed in interaction interface properties (Table S3). Most of the complexes had hydrophobic p-values in the range between 0.27 and 0.81, with AHK3rd-AHP1 complex distinguishing by the low p-value. Interface area of different complexes ranged between 777 and 942 Å 2 . Number of hydrogen bonds in the interaction interfaces of modeled complexes varied from 5 to 13, and number of salt bridges varied from 1 to 9. Predicted binding affinity according to Prodigy calculations ranged between −40 and −50 kJ/mol (Table S4).
Almost half of HKrd and most HPt interface aa residues were highly conserved (ConSurf scores of 7 and above) (Figures 7B and 8A). Notably, the interface region is the most conserved part of both interacting proteins. Distribution of hydrophobic and hydrophilic regions on the interfaces of all the investigated HK-HPt complexes was very similar. There was a hydrophobic core in HKrd and HPt interfaces, surrounded by hydrophilic residues with an additional small hydrophobic area on the periphery ( Figures 7C and 8B). The surface patterns of electrostatic potential showed complementarity of the HKrd and HPt interaction interfaces. The central region was neutral or almost neutral in both proteins with a clearly negative sector on one edge of the HPt interface, which matched a positive area of the HKrd interface and positive sector on another edge, which matched a negative part of the HKrd interface ( Figures 7D and 8C). According to MolSurfer results, PPI interfaces of all Arabidopsis and potato complexes had very similar and complementary patterns of hydrophobicity. Electrostatic potential complementarity at pH 7.3 (cytosolic pH) was clearly visible in all combinations of dimer counterparts, with StHK2rd-StHP1a and StHK3rd-StHP1a complexes distinguished by the perfect matching ( Figure S4). This high electrostatic complementarity as well as a high level of conservation of the interface residues (especially in HPt counterpart) can explain HK-HPt interaction promiscuity shown in our experiments earlier [30]. To reveal critical aa (hot spots) and their interactions in the interfaces determining binding between CK receptors and phosphotransfer proteins, virtual alanine scanning was performed (Table S11, Figure S5). All 13 complexes were investigated not only to characterize individual pairs of proteins, but also to highlight general trends.
For the HKrd counterpart, three hot spots' positions K1013, N898, and N901 (AHK3 numbering) were clearly revealed in all the studied complexes, and two additional putative hot spots, V900 and R903, were uncovered in more than two, but not in all complexes. For HPt, two strongly conserved hot spots were revealed, Q83 and S87, as well as two less conserved ones, D54 and S90 (AHP2 numbering).
For AHK3rd-AHP2 complex, additional investigation of the interface residues was performed, including KFC2 and PPCheck hot spot prediction (Table S12), and calculation of BSA percent with regard to ASA (Table S13). The hot spot status was confirmed for N898, N901, V900, and R903 positions in AHKrd counterparts, but not for K1013. Positions N898, N901, and K1013 were highly conserved (ConSurf scores 7, 8, and 9, respectively); N898, N901, and V900 were characterized by more than 90% BSA percentage, whereas K1013 had as little as~40%. All the predicted hot spots of HPt interface were highly conserved (ConSurf scores 8 to 9), but only S90 was confirmed as a hot spot by the KFC2 and PPCheck services. For S90, BSA was almost 100%, for Q83 and S87 above 80%, while for D54 only about 60%.
K1013 did not form any intermolecular hydrogen bonds or salt bridges, but its intramolecular salt bridge with D941 is well known to stabilize active conformation and provide Mg 2+ binding needed for phosphorylation [17]. N898 and N901 of AHK3 formed at least three hydrogen bonds with side chains of AHP residues. N898 interacted with Q83 and S87 of HPts, N901 formed a hydrogen bond with S90 (Supplementary dataset 2, Figure 9). These Asn residues are located within a L1-α1 helix stretch in the RD. In the HPt counterparts, the Ser and Gln residues forming hydrogen bonds with Asn residues of HKrd are located in the α4 helix. Most results of alanine scanning were similar between Arabidopsis and potato, except minor features. For example, K1142 (corresponding to K910 in AHK3) was detected as hot spot only in the RD counterpart of StHK2rd-StHP1a complex, while the other complexes had no hot spot in this position.
Pekárová et al. [17] studied the interaction between CKI1 (which is a histidine kinase, but not a cytokinin receptor) and AHP1-6 proteins. Results of their BiFC study suggested a tight interaction of CKI1 with AHP2, AHP3 and AHP5, a weaker interaction with AHP1, and no interaction with AHP4 and AHP6. Yeast two-hybrid assay experiments confirmed these results except that there was no interaction between CKI1rd and AHP1, and CKI1rd-AHP5 interaction was weaker than the interactions with AHP2 and AHP3. ELISA experiments also showed strong interaction of CKI1rd with AHP2 and AHP3, while the interaction of CKI1rd with AHP5 was weaker. In the case of AHP2 and AHP3, these data are consistent with our results. The difference in the case of AHP1 can be explained both by different localization and structural features of CKI1 and CK receptors. CKI1 is located in the plasma membrane [17], while CK receptors are localized mainly in the ER [3,30]. The main structural difference in the cytosolic parts of these proteins is the absence of a receiver-like domain in CKI1, as well as in all histidine kinases except CK receptors. Hypothetically, this domain may play an indirect role in the interaction with HPt. There was also a difference in the RD-HPt interaction interface upon comparison of the CKI1 and CK receptors. N901 (AHK3 numbering), which was a hot spot in our calculations, was replaced by a serine in CKI1 (S997, CKI1 numbering). This may lead to weaker interactions with some HPts.
Bauer et al. [19] investigated the interaction between CKI2 (AHK5) and AHP1-6 proteins. It was shown in BiFC experiments that CKI2 interacted with all the HPts except AHP4. Interactions with AHP2 and AHP5 were a bit more pronounced than the others. Binding affinities of CKI2 for AHP1-3 were very similar, according to surface plasmon resonance, with a small decrease of the dissociation constant for interaction with AHP2. This correlates with our data on the promiscuous interaction of AHK2-4 with AHP1-3. It is also notable that N901 (AHK3 numbering) was not replaced in CKI2 (N789, CKI2 numbering), unlike in CKI1. This may indicate the effect of this aa position on the specificity of RD-HPt interaction.
HPt-HPt Interface and Its Comparison to the HPt-HKrd Interface
Two structures of phosphotransfer proteins in dimeric form were found in the PDB: OsHP1 (PDB IDs: 1YVI, 2Q4F) and YPD1 (PDB ID: 1C02) homodimers. YPD1 has only about 20% sequence identity with HPts from A. thaliana and S. tuberosum and thus cannot serve as a relevant template. The OsHP1 structure contains two identical chains (A and B), corresponding to two HPt subunits. To check whether the PDB dimer of OsHP1 has a biological meaning, we applied two protocols. First, the structure was verified using the "Dimer Classification" option of the ClusPro service, whose principle is blind redocking of the dimer subunits. The docking results showed that this structure, indeed, should be a dimer with a likely natural (biological) conformation. The probability of being a natural dimer was 80%, so a model based on this structure should be biologically relevant ( Figure S6A).
The second step was the blind docking of AHP2 homology model monomers using the ClusPro and PatchDock servers. The results were compared with the OsHP1 crystal structure and with the homology model of the AHP2 dimer based on the OsHP1 structure (PDB ID: 1YVI). The best result of PatchDock docking, i.e., the structure with the lowest atomic contact energy (ACE), was chosen from the top 20 structures ranked by the PatchDock main score. The best models from both ClusPro and PatchDock had a configuration very close to the OsHP1 crystal structure and the homology model of the AHP2 dimer ( Figure S6B). The all-atom RMSD (root-mean-square deviation of atomic positions) was 1.115 Å for the alignment between the OsHP1 crystal structure and the ClusPro AHP2 homodimer and 0.723 Å for the alignment between the OsHP1 crystal structure and the PatchDock AHP2 homodimer; for the AHP2 homodimer homology model aligned with the ClusPro and PatchDock dimers, it was 1.768 Å and 1.375 Å, respectively. In this conformation of the HPt homodimer and the HPt-HKrd heterodimer, the interfacial surface areas of HPt largely overlapped ( Figure 10).
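The superposition-and-RMSD comparison used above can also be scripted directly. The following is a minimal, hypothetical sketch (not the code used in this study): it assumes two equally sized N x 3 NumPy arrays of matched atom coordinates and applies the Kabsch algorithm before computing the all-atom RMSD; the file names in the commented usage line are placeholders.

```python
import numpy as np

def kabsch_rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Superpose coords_b onto coords_a (Kabsch) and return the all-atom RMSD in the input units (e.g., Å)."""
    # Center both coordinate sets on their centroids
    a = coords_a - coords_a.mean(axis=0)
    b = coords_b - coords_b.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix
    u, _, vt = np.linalg.svd(b.T @ a)
    # Correct for a possible reflection so the result is a proper rotation
    d = np.sign(np.linalg.det(u @ vt))
    u[:, -1] *= d
    rotation = u @ vt
    b_rot = b @ rotation
    return float(np.sqrt(((a - b_rot) ** 2).sum() / len(a)))

# Hypothetical usage with matched atom coordinates extracted from two structures:
# rmsd = kabsch_rmsd(np.loadtxt("oshp1_atoms.xyz"), np.loadtxt("ahp2_dimer_atoms.xyz"))
```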
According to the PISA interface summary, 25 residues of AHP2 were involved in the interaction with the AHK3 receiver domain and 36 in homodimerization (some of them were specific for the A or B chain). Both interactions shared 19 residues ( Figure 10B). The interface area and the numbers of interface hydrogen bonds and salt bridges determined by PISA were 818 Å², 13, and 6 for the AHK3rd-AHP2 complex, and 1150 Å², 12, and 3 for the AHP2-AHP2 dimer. The complexes were also checked in the Prodigy service with the default temperature setting (25 °C), resulting in binding energies of −33.47 kJ/mol for the AHK3rd-AHP2 complex and −48.53 kJ/mol for the AHP2-AHP2 dimer. Thus, both dimeric structures were biologically relevant, and the AHK3rd-AHP2 complex seems to be less stable than the AHP2-AHP2 dimer.

Sequence identity between the modeled proteins and the OsHP1 template ranged from 43% to 51% (Table S1). A total of seven HPt-HPt dimer models were built: A. thaliana AHP1-3 homo- and heterodimers and the S. tuberosum StHP1a homodimer (Table S2, Figure 11, Supplementary dataset 1).

Alanine scanning of HPt-HPt complexes showed that the main hot spots were Q83 and D54 (AHP2 numbering), located at the center of the interaction interface (Table S14). Thus, at least one hot spot position was the same as in the AHKrd-AHP interaction. However, the trends in the HPt-HPt interaction were not as unequivocal as in the HKrd-HPt interaction, and hot spot positions could vary depending on complex composition and protein sequence. In particular, in the AHP2 homodimer (Table S15), four hot spots were identified by Robetta in chain A (D54, Q76, Q83, and S87); two of them, D54 and Q83, were confirmed by KFC2, but only Q83 was also confirmed by PPCheck. Within chain B, two hot spots were detected by Robetta, Q46 and Q83, both confirmed by KFC2, with Q83 additionally confirmed by PPCheck. D54 and Q83 in chain A, as well as Q46 and Q83 in chain B, had a BSA percentage of more than 70% (Table S16). The following residues were suggested to form hydrogen bonds in the AHP2 homodimer (Figure 12, Supplementary dataset 2): D54:A with Q83:B; Q76:A with Q46:B; Q83:A with the side chain of D54:B and the backbone oxygen of L50:B; and Q46:B with S75:A, besides its interaction with Q76:A.
A set of hot spots in the S. tuberosum StHP1a homodimer was substantially different from that in Arabidopsis HPt homodimers. D52 and Q80 (corresponding to D54 and Q82 in AHP2) were no longer hot spots in both StHP1a chains, whereas F41 and E44, corresponding to F43 and E46 in AHP2, were hot spots only in StHP1a. In other respects, the structures of potato and Arabidopsis HPt dimers were very similar. MolSurfer results showed that HPt-HPt complexes in the chosen conformation demonstrate only a moderate residue hydrophobic complementarity, but that of atomic hydrophobicity was rather high, similarly to the complementarity of electrostatic potential ( Figure S7).
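As an aside on interpreting the binding energies reported above for the AHK3rd-AHP2 and AHP2-AHP2 complexes, a binding free energy can be converted into an approximate dissociation constant through the standard relation ΔG = RT ln Kd. The snippet below is only an illustrative conversion of those two values; the resulting Kd estimates are derived here and are not numbers quoted in the text.

```python
import math

def kd_from_dg(dg_kj_per_mol: float, temperature_k: float = 298.15) -> float:
    """Convert a binding free energy (kJ/mol, negative for favorable binding) to a dissociation constant (mol/L) via dG = RT ln(Kd)."""
    r_kj = 8.314462618e-3  # gas constant in kJ/(mol*K)
    return math.exp(dg_kj_per_mol / (r_kj * temperature_k))

print(f"AHK3rd-AHP2: Kd ~ {kd_from_dg(-33.47):.2e} M")  # roughly 1e-6 M
print(f"AHP2-AHP2:   Kd ~ {kd_from_dg(-48.53):.2e} M")  # roughly 3e-9 M
```

The lower Kd obtained for the AHP2 homodimer corresponds to tighter binding, in line with the conclusion above that the AHK3rd-AHP2 complex is the less stable of the two.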
RRrd-HPt Interface and Its Comparison to AHP-AHKrd Interface
Comparison of YPD1-SLN1 and YPD1-SSK1 crystal structures showed a marked similarity ( Figure S8) of HK receiver domain and RR conformations when forming a complex with phosphorelay intermediate. Thus, AHK5-AHP1 complex remained the main template for RRrd-HPt complexes modeling, and the second template for the RD was the E. coli CheY receiver domain.
The interface part of the RRrd surface was more variable compared to HKrd (Figure 13). RRrd interfaces had a similar charge distribution over the surface, but the positively charged area was more pronounced, especially in ARR11 and StRR11. On the contrary, hydrophobic regions were less pronounced and had no clear boundaries. RRrd-HPt interfaces comprised an area varying between 881 and 1013 Å², which was a bit larger than that in the HKrd-HPt complexes. The interfaces included up to nine hydrogen bonds and up to 10 putative salt bridges.
Alanine scanning highlighted three positions, D45, T47, and K138 (ARR1 numbering), that might be considered as hot spots in at least three RRrd-HPt complexes (Table S17). Surprisingly, ARR1 itself had only T47 and K138 as hot spots. The HPt counterpart had two hot spot positions in the RRrd-HPt complex: L35 and S87 (AHP2 numbering). Both T47 of ARR1 and S87 of AHP2 had a BSA of 100% in the ARR1rd-AHP2 complex (Table S18).
Comparing the main HKrd hot spots with residues at the same positions of RRrd, we can see that one HKrd Asn (N898, AHK3 numbering) was replaced by Asp (D45 in ARR1), and another HKrd Asn (N901, AHK3 numbering) was replaced by Cys (C48 in ARR1) in all the studied RRs, except ARR11 and StRR11, where Asn901 changed to Trp.
S. tuberosum complexes differed from Arabidopsis ones in alanine scanning results (Table S17). RD counterpart of StRR1a(rd)-StHP1a had the largest number of hot spots compared to the other studied RRrd-HPt complexes, and included a unique I43 residue (corresponding to I51 of ARR1). S87 in HPt counterpart (corresponding to S90 of AHP2) was a hot spot only in this protein pair. StRR11(rd)-StHP1a had two unique hot spots: R127 (corresponding to R141 of ARR1) in the RD and Q26 (corresponding to Q28 of AHP2) in HPt part. MolSurfer results showed a high level of hydrophobic and electrostatic complementarity for all the studied RRrd-HPt complexes ( Figure S9).
The occurrence of most RRrd-HPt interactions modeled in this study was consistent with experiments by Dortay et al. [33].
Verma et al. [34] investigated putative key aa in ARR4-AHP1 interaction. Using homology modeling, they have suggested that five ARR4 aa (D45, R51, Y96, C97 and P148) could play a critical role in ARR4-AHP interaction. Two of these residues, D45 and Y96, were confirmed experimentally as interaction determinants. There are several reasons why these results are not related to our identification of RRrd-HPt hot spots. First of all, RRrd interaction interfaces include quite extensive variable zones where properties may vary significantly depending on the specific protein. Second, ARR4 belongs to A-type ARRs, whereas our study considered B-type ARRs. Finally, as a criterion for the selection of key aa, Verma et al. [34] used the number of intermolecular bonds formed by these residues but not the interaction energy.
Effect of Phosphorylation on the HKrd-HPt Interactions
To study the effect of specific phosphorylation on the HKrd-HPt interaction, AHK3rd-AHP2 dimer models were built using options of residue phosphorylation and the presence of an Mg2+ ion. Phosphoaccepting residues were modified in the Vienna-PTM service. His was virtually phosphorylated at the Nε atom, with a −2 charge (without protonation), and the same charge was used for phosphoaspartate ( Figure S10). Five versions of the complex were analyzed: one Mg2+-free wild type and four with bound Mg2+: wild type, a dimer with phosphorylated D941 in AHK3rd, a dimer with phosphorylated H82 in AHP2, and a dimer with both phosphoaccepting residues phosphorylated. Additionally, two dimers with phosphorylation-mimicking mutations in AHK3rd (D941E) and in AHP2 (H82E) were modeled, also in the presence of Mg2+ (Table 1, Supplementary dataset 1). Earlier, Scharein and Groth [35], in their experimental study of the interaction between AHP1 and the ethylene receptor ETR1 (an ethylene-sensing HK structurally close to CK receptors), generated mutants of these proteins that either mimicked permanent phosphorylation or prevented phosphorylation. The affinity of interaction decreased when both partners were either in the non-phosphorylated or the phosphorylated state. However, when only one of the interacting proteins was phosphorylated, the high binding affinity was restored.
Pekárová et al. [17] experimentally showed that adding Mg 2+ to the CKI1rd-AHP2 complex increased binding affinity, but addition of Mg 2+ together with BeF 3 reduced it to a level lower than that of the apo-forms. In case of CKI1rd-AHP3 complex, adding Mg 2+ led to a slight decrease in affinity, whereas addition of Mg 2+ together with BeF 3 increased the affinity to a level higher than that of the apo-forms. Our analysis of AHK3rd-AHP2 dimer models in different phosphorylation states (Table 1) showed that Mg 2+ -bound MSP partners markedly increased their mutual affinity in comparison to the apo-form. A further affinity increase was obtained when D941 of AHK3rd was shifted into the phosphorylated state in the presence of magnesium ions. In the opposite case, when AHP2 alone was phosphorylated (H82 transformed to phosphohistidine, also in the presence of Mg 2+ ), the affinity of RD-HPt binding dropped to a level even lower than that of the apo-forms. Phosphorylation of both phosphoaccepting residues led to an affinity decrease compared to Mg 2+ -bound non-phosphorylated form, although the affinity remained slightly higher than that of the apo-form.
Phosphorylation-mimicking mutants of AHK3rd (D941E) and AHP2 (H82E) in the presence of Mg 2+ showed the affinity higher than Mg 2+ -free wild type form. However, in comparison with Mg 2+ -bound wild type, results were qualitatively consistent (although less pronounced) with the data for phosphorylated MSP proteins: D941E mutation slightly increased the affinity while H82E mutation decreased it.
The effect of the phosphorylation on binding affinity can be explained by changes in the surface electrostatic potential ( Figure S11). Phosphorylation of D941 in AHK3 enlarged negatively charged zone of the receiver domain, whereas phosphorylation of H82 in AHP2 changed the charge of corresponding interface area from positive to negative, thus leading to a reduction of electrostatic complementarity and decrease in affinity. These affinity changes have a clear biological meaning: HPt has to dissociate from receptor after H82 phosphorylation to start moving into the nucleus.
Homology Modeling
SWISS-MODEL web service has been used for template search in homology modeling [36]. Sequence alignments were performed with NCBI BLAST [37] and Clustal X 2.1 [38]. Alignment files for individual models are provided in Supplementary Dataset 1. Template assignment for the models is given in Table 2 and Supplementary Table S1 [4,6,17,19,23,24,39]. To compensate for low sequence identity between templates and target proteins, to avoid incorrect geometry, and also to unify the modeling protocols, all HKrd-HPt and RRrd-HPt complexes were modeled using the multitemplate algorithm. Modeling of the A. thaliana and S. tuberosum (double monoploid var. Phureja) protein structures was accomplished in Modeller 9.20 [40] using an automodel class for comparative modeling. For each protein, 200 models were built, and the best model was selected according to the value of DOPE (discrete optimized protein energy) score [41] calculated by Modeller.
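To make the modeling protocol concrete, the following is a minimal Modeller script of the kind typically used for multi-template comparative modeling with DOPE-based model selection. It is a hypothetical sketch, not the authors' script: the alignment file name, template codes, and target name are placeholders, and only the 200-model count and DOPE-based selection are taken from the text.

```python
# Minimal multi-template Modeller sketch (Modeller 9.x API); file and model
# names below are placeholders, not the files used in this study.
from modeller import *
from modeller.automodel import *

env = environ()
env.io.atom_files_directory = ['.']

a = automodel(env,
              alnfile='target_templates.ali',      # alignment of target with templates
              knowns=('template1', 'template2'),   # multiple templates, as in the text
              sequence='target',                   # target sequence code in the alignment
              assess_methods=(assess.DOPE,))       # score each model with DOPE

a.starting_model = 1
a.ending_model = 200                               # 200 models per protein, as in the text
a.make()

# Pick the model with the lowest (best) DOPE score among successfully built models
ok_models = [m for m in a.outputs if m['failure'] is None]
best = min(ok_models, key=lambda m: m['DOPE score'])
print(best['name'], best['DOPE score'])
```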
The main set of HKrd-HPt complexes was built in the absence of Mg 2+ ions to make the resulting model files compatible with the wide range of software. An additional AHK3rd-AHP2 model was built in the presence of Mg 2+ for further modification aiming to study the phosphorylation effect. Models built and studied in this work are available in Supplementary Dataset 1.
Structure Optimization, Validation, and Modification
Phosphoaccepting residues were modified in Vienna-PTM web server [42]. Histidines were virtually phosphorylated at the Nε atom. Modification procedures were performed on selected models before minimization.
All the best models underwent minimization. After adding hydrogen atoms, the models were energy minimized in UCSF Chimera 1.13.1 [43] using an AMBER ff14SB force field [44] with 300 steps of steepest descent and 300 steps of conjugate gradient; step size was 0.02 Å in both cases.
Stereochemical quality of the models was assessed with ProCheck [45] implemented in the PDBsum web service [46], ProSA-web [47], and the QMEAN server [48]. The models obtained had acceptable Ramachandran plot parameters: after minimization, the percentage of residues in the most favored regions was at least 87%, and the percentage of residues in disallowed regions did not exceed 1.2%. Minimization slightly shifted residues from the most favored regions to the additional allowed regions of the Ramachandran plot, but it fixed most of the distorted geometry, including unusual bond lengths and angles.
In some cases, models were refined using UCSF Chimera: distorted geometry was manually fixed by adjusting bond angles and lengths to their "ideal" values according to ProCheck; side chains orientation was refined with Dunbrack rotamer library [49].
Interface Properties Investigation
Alanine scanning was performed with Rosetta implemented in Robetta [50] with default settings. Hot spots (positions, substitution of which led to the largest change of free energy of interaction) were identified according to the ∆∆G values of complex formation after virtual mutation of a single aa residue to alanine. Positions with ∆∆G of 2 kJ/mol or greater were considered as hot spots. Mitchell Lab KFC2 server [51] and PPCheck server [52] were used for additional prediction of hot spots.
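In practice, the hot-spot criterion described above is a simple threshold applied to the per-residue ΔΔG values returned by the alanine-scanning server. The snippet below is a generic illustration of that filtering step with made-up example values; it does not reproduce any server's output format, and the numbers are not data from this study.

```python
# Illustrative ddG values (kJ/mol) per interface position; not real data.
ddg_kj_per_mol = {"N898": 4.1, "N901": 3.2, "K1013": 2.4, "E905": 0.7, "Q909": 1.3}

HOTSPOT_THRESHOLD = 2.0  # kJ/mol, the cutoff stated in the text

hot_spots = sorted(pos for pos, ddg in ddg_kj_per_mol.items() if ddg >= HOTSPOT_THRESHOLD)
print("Predicted hot spots:", hot_spots)  # ['K1013', 'N898', 'N901']
```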
Visualization and superposition of the models were accomplished with UCSF Chimera. Interfaces parameters, including interface area, ∆G (solvation energy gain upon interface formation), total binding energy of the proteins, hydrophobic p-value and information about hydrogen bonds, salt bridges, and disulphide bonds formed between the interfacing chains were obtained using QtPISA tool [31,53]. PRODIGY server with the default 25 • C temperature setting was used to explore binding affinity and dissociation constants of the MSP complexes [54].
To study hydrophobicity and electrostatic potential of interfaces, 2D maps were calculated using MolSurfer service [55]. Electrostatic potential was calculated with default settings: protein dielectric constant: 4; solvent dielectric constant: 80; ionic strength 150 mM; ion radius: 1.5 Å; probe radius for generating molecular surface: 0.0 Å (0.0 Å here resulted in using van der Waals surface to separate the protein and solvent). PQR files were generated with PDB2PQR server version 2.1.1 [56] with AMBER force field chosen for calculations. PROPKA was used to assign protonation states at provided pH. pH values were set different in various complex types, depending on their localization: 7.3 for cytosol (HisKA domains dimers, HKrd-HP complexes), 7.2 for nucleus (HP dimers and RRrd-HP complexes); for sensor module dimers, two types of calculations were performed, with pH 7.1 corresponding to ER lumen value, and pH 5.5, as the average apoplast value ( [3] and the references therein). Additionally, a study for AHK3 sensor module homodimer was conducted with a wider range of pH values: 4.5, 5.5, 6.5, 7.0, 7.5, and 8.0.
Surface Coloring
Conservation analysis along with corresponding coloring was accomplished with ConSurf server [57,58]. The HMMER homolog search algorithm with one iteration and 0.0001 E-value cutoff and CLEAN_UNIPROT Proteins database were used. In addition, 150 sequences were retrieved with identities of no less than 35% and no more than 95%. The CLUSTALW method was used to build the multiple sequence alignment. The Bayesian calculation method and default evolutionary substitution model (best model) were used. Models obtained after ConSurf calculation were colored according to sequence conservation in UCSF Chimera.
Surfaces were colored by hydrophobicity with UCSF Chimera "kdHydrophobicity" attribute according to the hydrophobicity scale of Kyte and Doolittle [59].
Surface coloring by electrostatic potential was performed also in UCSF Chimera, using Coulombic surface coloring, with a range between −10 and 10 kcal/(mol·e). Distance-dependent dielectric option was set on "true" with dielectric constant set on default 4.0 and distance from surface set on default 1.4 Å.
Protein-Protein Docking
Blind protein-protein docking was performed in ClusPro [60] and PatchDock [61]. Clustering RMSD in PatchDock was 4.0 and complex type was set to default. ClusPro has been launched with default settings. ClusPro was also applied to check the template structures' possibility of being a biological dimer (not a crystallographic artifact), using the "Dimer Classification" option in this service.
Conclusions
In this article, we present an in silico investigation of all types of protein-protein interactions involved in CK signaling. We used programs that were widespread and proved to be reliable for the required calculations [31,62]. The ability of CK receptors to give rise to heterodimers (made up of the paralogs) together with homodimers in vivo was consistent with a high conservation of interface residues, especially in the sensor module dimerization interface, and also with a high level of complementarity in hydrophobicity and electrostatic potential. The latter complementarity was assessed as strong at quasi-neutral pH but became less pronounced at acidic pH (5.5) close to that in the apoplast. Thus, previously obtained experimental results showing that receptor dimerization occurred mainly in the ER were corroborated and explained, at least partly. By means of a virtual alanine scanning, a number of aa were identified as hot spot residues for sensor module dimerization interface. Two of these residues (F158 and Y175, AHK3 numbering) behaved as hot spots in all of investigated complexes. Only minor differences in dimerization interface properties were observed between Arabidopsis and potato receptor dimers. The promiscuity in interactions between AHK2-4 receptors and AHP1-3 phosphotransmitters was also explained by strong conservation of the interface (especially in phosphotransmitter counterpart) and high electrostatic complementarity of interfaces of all studied complexes. Three hot spot positions (N898, N901, and K1013, AHK3 numbering) were predicted for the receptor interface in HK-HPt complexes. Two of them (N898 and N901) formed hydrogen bonds with the phosphotransmitter hot spot counterparts (Q83, S87 and S90, AHP2 numbering). Models of HPt-HPt complexes, based on the OsHP1 template, showed a large number of residues in the dimerization interface also involved in the interaction with receptor's RD. Conformation of HPt complex with RR receiver domain was similar to a complex with receptor's RD, but hot spot positions were found to be different. We also showed that the calculated affinity of HKrd-HPt complexes depends on the presence of Mg 2+ ion and on the phosphorylation state of conserved residues (His, Asp). Taking into account that CK signaling is a phosphorelay based on protein recognition and tight interaction, where the "hot" phosphate residue serves as a "transmissible baton", this specific dependence of affinities of interacting proteins on their phosphorylation state has a clear biological meaning. To conclude, our MSP protein modeling and computational model analysis not only explained recent experimental results but also provided justified predictions that can build up the basis for CK signaling modification by site-directed mutagenesis. In addition, the predicted hot spots can serve as targets for the development of new potato varieties. It is known that CKs play an important role as part of the regulatory hormonal complex in the formation of potato tubers [63]. Therefore, engineering of proteins associated with the MSP pathway may be beneficial for potato breeding. | 2019-05-01T13:04:02.920Z | 2019-04-28T00:00:00.000 | {
"year": 2019,
"sha1": "042eca587242bd6aa9fc5a0ece94a4445f0e4733",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms20092096",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "042eca587242bd6aa9fc5a0ece94a4445f0e4733",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
3840485 | pes2o/s2orc | v3-fos-license | Chromosome Cohesion Established by Rec8-Cohesin in Fetal Oocytes Is Maintained without Detectable Turnover in Oocytes Arrested for Months in Mice
Summary Sister chromatid cohesion mediated by the cohesin complex is essential for chromosome segregation in mitosis and meiosis [1]. Rec8-containing cohesin, bound to Smc3/Smc1α or Smc3/Smc1β, maintains bivalent cohesion in mammalian meiosis [2, 3, 4, 5, 6]. In females, meiotic DNA replication and recombination occur in fetal oocytes. After birth, oocytes arrest at the prolonged dictyate stage until recruited to grow into mature oocytes that divide at ovulation. How cohesion is maintained in arrested oocytes remains a pivotal question relevant to maternal age-related aneuploidy. Hypothetically, cohesin turnover regenerates cohesion in oocytes. Evidence for post-replicative cohesion establishment mechanism exists, in yeast and invertebrates [7, 8]. In mouse fetal oocytes, cohesin loading factor Nipbl/Scc2 localizes to chromosome axes during recombination [9, 10]. Alternatively, cohesion is maintained without turnover. Consistent with this, cohesion maintenance does not require Smc1β transcription, but unlike Rec8, Smc1β is not required for establishing bivalent cohesion [11, 12]. Rec8 maintains cohesion without turnover during weeks of oocyte growth [3]. Whether the same applies to months or decades of arrest is unknown. Here, we test whether Rec8 activated in arrested mouse oocytes builds cohesion revealed by TEV cleavage and live-cell imaging. Rec8 establishes cohesion when activated during DNA replication in fetal oocytes using tamoxifen-inducible Cre. In contrast, no new cohesion is detected when Rec8 is activated in arrested oocytes by tamoxifen despite cohesin synthesis. We conclude that cohesion established in fetal oocytes is maintained for months without detectable turnover in dictyate-arrested oocytes. This implies that women’s fertility depends on the longevity of cohesin proteins that established cohesion in utero.
In Brief
How chromosome cohesion is maintained in female germ cells arrested for months or decades is poorly understood. Burkhardt et al. show that cohesion is built in fetal oocytes and after birth is maintained without detectable renewal for months. This implies that the oocyte's inability to renew cohesion contributes to maternal age-related trisomies.
RESULTS AND DISCUSSION
The frequency of clinically recognized trisomic pregnancies increases with maternal age [13]. Most aneuploid pregnancies arise as a consequence of chromosome segregation errors during the first meiotic division of female germ cells called oocytes, leading to aneuploid eggs [13][14][15]. On average, 20% of human eggs and 1%-2% of mouse eggs are aneuploid [14]. In aging human and mouse oocytes, cohesin levels decrease, centromeric cohesion weakens, and chromosome segregation errors increase [16][17][18][19][20][21]. To gain insights into age-related chromosome missegregation, we need a molecular understanding of cohesion establishment and maintenance in oocytes. A defining feature of mammalian oocytes is the prolonged arrest at the dictyate stage of prophase I that lasts for months in the mouse and decades in the human ( Figure 1A). Crucially, it is not known whether bivalent cohesion is maintained with or without turnover during the arrest. If cohesion is maintained by cohesin turnover, then either the cohesion establishment mechanism deteriorates or the cohesin pool needed for replenishment diminishes in aging oocytes (Figure S1A). Alternatively, if cohesion is maintained without cohesin turnover, then cohesin loss from chromosomes is irreversible ( Figure S1B). Either model is interesting and has the potential to explain what goes awry in aging oocytes, leading to the production of aneuploid fetuses.
A Functional Cohesion Rescue Assay in Meiosis I Oocytes
The entrapment of sister DNA molecules by cohesin complexes can be measured indirectly and directly using biochemical and cell biological approaches [7,[22][23][24]. To determine whether cohesion is maintained with or without building additional cohesive structures after DNA replication, we used a functional cohesion assay that we had established previously ( Figure 1B) [3]. Briefly, endogenous Rec8 contains engineered Tobacco Etch Virus (TEV) recognition sites rendering cohesin cleavable by TEV protease. TEV protease expression in Rec8 TEV/TEV oocytes converts 100% of bivalents to chromatids. To test whether new cohesion is built, Rec8-Myc that is not cleavable by TEV protease is induced in addition to endogenous Rec8TEV that establishes cohesion during DNA replication ( Figure 1B). If Rec8-Myc becomes incorporated into cohesin complexes that establish cohesion, then bivalents become resistant to destruction by TEV protease. If no new cohesion is established, then TEV cleavage of Rec8 converts bivalents to chromatids. We assume that over time slow cohesin decay takes place and contemporaneously occurring reloading of Rec8-Myc can be revealed by TEV cleavage to rapidly destroy endogenous Rec8. The genetic components are Rec8 TEV/TEV oocytes that also contain a conditional silent BAC transgene with a Stop cassette flanked by LoxP sites, (Tg)Stop/Rec8-Myc. Cre recombinase deletes the Stop cassette and activates Rec8-Myc transgene expression. Deletion of the Stop cassette using (Tg)Sox2-Cre in the early embryo, before primordial germ cell specification, rescues bivalent cohesion in mature oocytes [3,25]. Therefore, Rec8-Myc is capable of establishing functional cohesion when activated before meiosis.
To visualize the cohesion status of chromosomes in meiosis I oocytes, we used a microinjection and live-cell imaging approach. Mature germinal vesicle (GV) stage oocytes isolated from sexually mature females are cultured in 3-isobutyl-1-methylxanthine (IBMX)-containing medium to inhibit germinal vesicle breakdown (GVBD). GV oocytes are microinjected with mRNA encoding H2B-mCherry to visualize chromosomes, TEV protease and another marker such as CenpB-EGFP that localizes to kinetochores. After expression of the mRNA constructs, oocytes are released into IBMX-free medium to resume meiosis and followed by confocal time-lapse microscopy. Bivalents are converted to chromatids within 3-4 hr in Rec8 TEV/TEV oocytes expressing wild-type but not mutant TEV protease (TEV mut ), demonstrating that bivalent cohesion is intact without TEV cleavage ( Figure 1C). Therefore, the comparison of bivalents versus chromatids enables the visualization of functional cohesion in live oocytes.
(C) Rec8 TEV/TEV oocytes are microinjected with mRNA encoding H2B-mCherry, CenpB-EGFP, and TEV protease. Confocal time-lapse microscopy allows scoring of chromosome type at metaphase I (5 hr post-GVBD). TEV protease efficiently converts bivalents to chromatids, which are detected as at least 72 single chromatids and no bivalents (oocytes analyzed n = 40), while no cleavage of all 20 bivalents is observed using mutant TEV protease (TEV mut; oocytes analyzed n = 16). Scale bar, 10 µm. See also Figure S1.

Since it is important that Rec8-Myc transgene activation occurs after meiotic DNA replication, we carried out due diligence to confirm that Gdf9-iCre does not delete during DNA replication. Using a conditional LacZ reporter strain (Rosa26-LacZ) [35], we analyzed Rosa26-LacZ (Tg)Gdf9-iCre ovaries on embryonic day E13.5, when oocytes enter meiosis. Unexpectedly, seven out of ten fetal ovaries contained X-gal-positive cells ( Figure 2B), suggesting that deletion might occur as early as meiotic DNA replication. Indeed, Rec8-Myc is expressed in up to 50% of replicating germ cells, identified as BrdU- and Ddx4-positive cells, in oocytes from (Tg)Stop/Rec8-Myc (Tg)Gdf9-iCre females ( Figures 2C and 2D). Gdf9-iCre activated Rec8-Myc before or during meiotic DNA replication in two out of three female embryos ( Figure 2D). Overall, Gdf9-iCre deletes with high efficiency (Table S1), but the deletion timing varies between mice and between oocytes within one mouse. In agreement with this, cohesion rescue experiments using oocytes from Rec8 TEV/TEV (Tg)Stop/Rec8-Myc (Tg)Gdf9-iCre females resulted in variable rescue efficiencies ( Figures S2A and S2B). It is not possible to know whether cohesion rescue in these cells is due to cohesion establishment during DNA replication or thereafter. On a technical note, while it cannot be excluded that mouse strain background and genetic locus might have some effect on the timing of Cre-mediated deletion, our analyses using two different target loci showing earlier deletion than previously thought raise concerns about the suitability of Gdf9-iCre for cell cycle phase-specific deletion studies.
A second approach to activate the Rec8-Myc transgene after meiotic DNA replication might rely on Cre recombinase controlled by a promoter driving expression of a protein required for recombination. The topoisomerase-like enzyme Spo11 generates DNA double-strand breaks that initiate recombination [36]. Therefore, we chose to test Spo11-Cre. If Spo11-Cre deletes after DNA replication and before or during recombination, then this system will test whether cohesion is built at all after DNA replication. It would not be possible to distinguish whether cohesion is generated during recombination or the dictyate-stage arrest.
Timely Controlled Rec8 Activation in Fetal and Adult Arrested Oocytes
To demonstrate whether arrested oocytes maintain cohesion with or without turnover, it is important to activate Rec8-Myc transgene expression after meiotic DNA replication and homologous recombination. Since there are currently no mouse strains other than (Tg)Gdf9-iCre that are thought to delete in arrested oocytes before growth, we chose to directly control the timing of Cre-mediated deletion by injection of 4-hydroxytamoxifen (4-OHT). Specifically, the germ cell-specific Dppa3 promoter drives expression of Cre fused to mouse estrogen receptors (MERCreMER) and a PEST degradation motif in (Tg)Dppa3-MCM-P mice [37] (Table S1). MERCreMER is cytoplasmic, and 4-OHT binding to the receptors triggers translocation of the Cre fusion to the nucleus, facilitating timely genetic deletion [38].
The challenges with this approach are a risk of deletion without 4-OHT, inefficient deletion with 4-OHT, and effects of 4-OHT on fertility. Since background deletion could result in a false positive cohesion rescue, we first tested whether there is any deletion without 4-OHT. Reassuringly, vehicle injection into Rosa26-LacZ (Tg)Dppa3-MCM-P females resulted in no X-gal positive oocytes in ovary sections, consistent with the negligible background reported by others ( Figure S3). On the other hand, 4-OHT injection into Rosa26-LacZ (Tg)Dppa3-MCM-P females resulted in ~25% X-gal positive oocytes, indicating that Cre-mediated deletion had occurred ( Figure S3). Since the deletion efficiency is low, oocytes with a deleted Stop cassette in Rec8-Myc will be identified by single-cell PCR genotyping after the cohesion rescue assay (Table S1) Figures 3C and 3D). The localization of cohesin to the inter-chromatid axis of bivalents suggests, but does not demonstrate, that cohesin is entrapping sister chromatids. To test for functional cohesion, we injected oocytes with TEV protease and imaged them. Indeed, bivalent cohesion is rescued in oocytes from Rec8 TEV/TEV (Tg)Stop/Rec8-Myc (Tg)Dppa3-MCM-P F1 females with a deleted Stop cassette ( Figures 3E and 3F). This implies that sufficient levels of Rec8 are synthesized due to 4-OHT to establish cohesion and rescue bivalent cohesion in adult oocytes, at least when Rec8 is activated in fetal oocytes. See also Figure S3 and Table S1.
Before using the timely controlled activation of Rec8 in dictyate-arrested oocytes, it was necessary to test whether Rec8 is transcribed in arrested oocytes. Rec8 transcripts are detectable in adult ovaries [39] (Figures 4A and S4), but these could be stored mRNA that was synthesized in early meiosis. To test whether Rec8-Myc is transcribed de novo, we examined its expression and detected Rec8-Myc transcripts specifically in 4-OHT- and not in vehicle-injected (Tg)Stop/Rec8-Myc (Tg)Dppa3-MCM-P ovaries (Figures 4A and S4). Since Rec8-Myc is under control of the endogenous promoter in the BAC, de novo transcription of Rec8-Myc suggests that endogenous Rec8 is also transcribed in adult oocytes. We next tested whether Rec8 protein is synthesized in adult oocytes. While no Rec8-Myc signal was detectable on chromosome spreads from control (Tg)Stop/Rec8-Myc oocytes, little but detectable Rec8-Myc localized to chromosomes from oocytes isolated from (Tg)Stop/Rec8-Myc (Tg)Dppa3-MCM-P females injected with 4-OHT ( Figure 4B). We conclude that Rec8 protein is synthesized de novo and associates with chromosomes in adult oocytes.
In the key experiment, we asked whether cohesion is maintained with or without turnover in oocytes arrested for months at the dictyate stage of prophase I. Rec8 TEV/TEV (Tg)Stop/Rec8-Myc (Tg)Dppa3-MCM-P adult females were injected with 4-OHT to activate Rec8-Myc in oocytes, and the cohesion rescue assay was performed 2 or 4 months after activation ( Figure 4C). Following TEV protease injection and time-lapse imaging, single-cell PCR identified 46% of oocytes with activated Rec8-Myc (n = 71 oocytes, 8 females, Table S1). Importantly, 100% of oocytes with activated Rec8-Myc both after 2 or 4 months displayed chromatids in meiosis I (Figures 4D-4F; Movies S1 and S2). Therefore, cohesion is maintained without turnover using newly synthesized Rec8 in oocytes arrested for several months at the dictyate stage of prophase I. Current experiments do not allow us to exclude that Rec8-cohesin complexes assembled early in meiosis might turn over. In summary, we conclude that cohesive structures maintaining bivalent cohesion must have been built before (Tg)Dppa3-MCM-P activation, most likely during DNA replication. Since genetic tools like Spo11-Cre cannot distinguish temporally between meiotic DNA replication and homologous recombination (Figures 2F, 2G, and S2E), it remains an open question whether additional cohesive structures are built during meiotic recombination or whether cohesion is established exclusively during DNA replication.
Conclusions
Overall our results show that sister chromatid cohesion mediated by Rec8-containing cohesin complexes is established in fetal oocytes and maintained without detectable turnover after birth, both during the prolonged dictyate-stage arrest and the weeks of oocyte growth [3]. To fully understand cohesin dynamics in female meiosis will require an integrated approach including other cohesin complexes and a variety of techniques. We have employed a functional cohesion rescue assay that overcomes some limitations of cohesin detection by indirect immunofluorescent staining of chromosomes, which could reflect any mode of association, e.g., binding to chromatids (non-cohesive) or holding sister chromatids together (cohesive). Our results are not mutually exclusive with the findings that the cohesin loading factor Nipbl/Scc2 localizes to chromosome axes during meiotic recombination since we have investigated post-recombination, dictyatestage arrested oocytes [9,10]. Moreover, it is conceivable that Nipbl/Scc2 loads different types of cohesin complexes, which may contain Rad21L rather than Rec8, onto chromo-somes. Thus, it remains an open question whether cohesion is built during meiotic recombination.
The advantage of the TEV cleavage assay of cohesin combined with an inducible transgenic rescue construct is that cohesion of sister chromatids is revealed in live cells. At the same time, it is challenging to empirically determine the sensitivity of the assay toward newly built cohesion. Certainly 50% turnover is robustly detected as bivalents remain intact following TEV protease expression in Rec8 TEV/+ oocytes [3]. Given that as little as 13% of cohesin is sufficient for cohesion in yeast [40], it is likely that relatively few cohesin molecules mediating cohesion would be sufficient for rescue of bivalent chromosomes. It is therefore noteworthy that Rec8-Myc expression is controlled by the endogenous promoter on the BAC, and we have investigated the mechanism of cohesion maintenance relevant to the wild-type.
The discovery that bivalent cohesion is established predominantly, if not exclusively, in fetal oocytes has important implications for aging oocytes with increasing chromosome segregation errors [17][18][19][20][21]. Rather than invoking deterioration of cohesion establishment mechanisms or diminishing soluble cohesin proteins, our results suggest that age-related chromosome missegregation is due to the irreplaceable loss of cohesin complexes holding chromosomes together ( Figure S1B). How cohesion is maintained at a mechanistic level for a long time and whether new cohesion is actively prevented by anti-establishment factors such as Wapl remain key questions for the future. Our work in mouse female meiosis supports the hypothesis that the inability of oocytes to build cohesion during the dictyate arrest that lasts for months or decades contributes to maternal age-related chromosome missegregation and production of aneuploid fetuses.
For materials and methods, see the Supplemental Information. The use of mice followed the international guiding principles for biomedical research involving animals (Council for International Organizations of Medical Sciences) and was in agreement with the authorizing committee.
ACKNOWLEDGMENTS
We thank Agnieszka T. Piszczeks and her team at IMBA for histological advice and expertise. We thank Kerstin Klien for taking care of our mice. We are grateful for the (Tg)Stop/Rec8-Myc mice provided by Nobuaki R. Kudo and Rosa26-LacZ mice provided by Elizabeth Robertson. We also thank Kim Nasmyth for fruitful discussions; the Gdf9-iCre experiments were initiated in his laboratory. This work was funded by the Austrian Academy of Sciences and by the European Research Council (ERC-StG-336460 ChromHeritance) to K.T.-K.
Model of cohesion maintenance in aging oocytes.
Models of cohesion maintenance with or without turnover to explain the decrease in chromosomal cohesin and increase in chromosome segregation errors with age. Maternal and paternal homologous chromosomes (black and grey) are held together by chiasmata mediated by cohesin distal to crossover sites.
(A) Cohesion is established during meiotic S phase (yellow) and maintained with turnover as new cohesive structures are built after S phase (red). The cohesion establishment mechanism becomes defective (black arrows) or cohesin complexes needed for replenishment (red) deteriorate in ageing oocytes.
(B) Cohesion is established during meiotic S phase (yellow) and maintained without turnover.
Chromosomal cohesin decays and is irreplaceably lost from chromosomes in ageing oocytes. Table S1.
De novo expression of Rec8-Myc using Gdf9-iCre or Spo11-Cre results in generation of cohesive structures during meiotic S phase and homologous recombination.
(A) Timing of cohesion rescue assay utilizing Gdf9-iCre to activate Rec8-Myc in oocytes shortly after birth.
Green, meiotic DNA replication; beige, homologous recombination; pink, dictyate stage. See also Figure 2 and Table S1 for evaluation of accurate deletion timing of Gdf9-iCre. Oocytes were obtained from > 1 female for all time points except 8 and 12 months.
(C) Since the Spo11 endonuclease produces the DSBs that initiate homologous recombination, we considered using a Cre recombinase under the Spo11 promoter to activate Rec8-Myc. Green, meiotic DNA replication; beige, homologous recombination; pink, dictyate stage. See also Figure 2 and Table S1 for evaluation of accurate deletion timing of Spo11-Cre. Table S1.
(B) Oocytes larger than 30 µm were scored. Total cell numbers are indicated. | 2016-10-07T08:50:01.774Z | 2016-03-07T00:00:00.000 | {
"year": 2016,
"sha1": "f868f2a923679751cfad1bb87b35a80ce053d0f1",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S0960982216000609/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8786c1861a736404cd85287af05db02ab2d88f0c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
270669982 | pes2o/s2orc | v3-fos-license | Reinterpreting Trends: The Impact of Methodological Changes on Reported Sea Salt Aerosol Levels
Since 2017, there has been a considerable increase in the recorded sea salt aerosol (SSA) levels across the United States, particularly the economically critical Baltimore–Washington Corridor (BWC). This unexpected escalation, as reported in the Environmental Protection Agency's (EPA) annual air quality report, has generated worries about the potential effects on air quality, public health, and regional climate dynamics. However, this technical note demonstrates that the apparent rise in SSA levels is mostly due to a change in the EPA's Chemical Speciation Network's (CSN) approach to measuring these aerosols. In 2017, the CSN switched from utilizing chlorine to chloride as a tracer for SSAs. Speciation data for this region show that chloride concentrations are often an order of magnitude greater than chlorine concentrations, explaining the significant increase in SSA levels following the methodological modification. The absence of a similar spike in SSA levels at the nearby IMPROVE site, which has maintained a consistent methodology, provides further evidence to corroborate this conclusion. These findings demonstrate the importance of methodological consistency and openness in environmental monitoring networks. Clear documentation of such changes is critical to avoiding data misunderstanding, which might lead to the development of incorrect public health and environmental policies. We advocate for continued collaboration among researchers to establish standardized measuring procedures and data analysis tools to accommodate and clarify methodological changes, resulting in accurate environmental evaluations and informed decision-making.
Introduction
The atmosphere contains a variety of naturally emitted fine particulate matter (PM2.5) that includes sea salt, organic and elemental carbon, mineral dust, and sulfate aerosol particles from volcanic eruptions [1]. In both fine and coarse size ranges, sea salt is an important aerosol in the atmosphere [2]. Sea salt aerosols (SSAs) are microscopic particles formed by the evaporation of seawater droplets, breaking of ocean waves, or bursting of air bubbles, released into the air and transported by the winds [3]. These particles have a significant impact on climate due to their direct and indirect influence on radiative transfer [4]. SSAs can influence cloud properties, affecting cloud formation, lifetime, and precipitation patterns [5]. The alteration of cloud dynamics due to SSAs can have cascading effects on the regional water cycle, including changes in rainfall patterns and water availability. There have been no major studies which have shown that sea salt aerosols in the atmosphere have a significant and direct health impact. As a result, air quality researchers over the years may have overlooked SSAs in both urban and rural setups and focused on other components of PM2.5 [6].
Measuring the concentrations of SSA accurately presents significant challenges. Elements such as sodium (Na), chlorine (Cl), magnesium (Mg), sulfur (S), calcium (Ca), bromine (Br), and potassium (K) are found in these aerosols, which closely resemble seawater in their initial composition [2]. Measurements have revealed that the two significant components of SSAs are Na and Cl, with mass contributions (g/g) of 55.4 and 30.8, respectively [7]. However, once emitted into the atmosphere, sea salt particles undergo chemical reactions with other airborne pollutants. These transformations result in the depletion of Cl, thereby modifying the aerosol into what is commonly referred to as aged sea salt. This dynamic alteration poses a major challenge in accurately measuring the original and modified compositions of SSAs.
Further complicating the measurement is the high emission rate of SSAs, which is over 20 times greater than that of other aerosol constituents like organics, black carbon, sulfate, nitrate, and ammonium in the atmosphere [8]. This high rate influences numerous atmospheric processes, including cloud formation and radiative forcing, which, in turn, affect the distribution and lifetime of these aerosols. The vast range of organic and inorganic constituents, and their size-dependent variations, add another layer of difficulty in quantifying the precise contributions of each component [9].
Additionally, the interaction of SSAs with industrial pollutants further complicates measurements. The reaction of Cl with anthropogenic precursors increases ozone formation, thereby influencing the oxidative capacity of the atmosphere [10]. The rate constant for the reaction between Cl and many VOCs is higher than that of HO [2]. Therefore, the presence of Cl can increase the rate of VOC oxidation, producing products that result in higher HO concentrations, which can accelerate ozone production. In coastal areas, there can be a depletion of Cl due to the emission of dimethyl sulfide (DMS) by phytoplankton, but this depletion is insignificant compared to reactions with anthropogenic emissions [11]. Overall, sea salt particles play a multifaceted role in atmospheric chemistry, influencing cloud formation, aerosol composition, ozone formation and removal, and ocean-atmosphere exchange processes. Hence, understanding these complex interactions requires accurate measurement and calculation of sea salt aerosols which, in turn, will lead to improved aerosol and climate modelling simulations.
The US Environmental Protection Agency (EPA) issues an interactive report titled "Our Nation's Air" each year (https://gispub.epa.gov/air/trendsreport/2023/#home, accessed on 10 June 2024). The latest 2023 edition of the EPA's report reveals a significant finding: a marked increase in SSA measurements at selected sites in the US since 2017. Figure 1 shows the PM2.5 speciation trends for the Baltimore-Washington Corridor (BWC) sites, namely Washington DC (Figure 1a), Beltsville (Figure 1b), and Essex (Figure 1c). The measurement sites where the increase in SSAs (in black) has been observed are contrasted with other unaffected locations, such as the nearest IMPROVE sites, namely Madison County (Figure 1d) and Piney Run (Figure 1e). This disparity in sea salt aerosol behavior presents the scientific community with the problem of determining the factors responsible for these variations. Understanding the root causes of these discrepancies may be a crucial source of valuable insights into the measurement and calculation of sea salt particles and the mechanisms behind their formation, transport, and dispersion in the atmosphere.
The primary aim of this technical study is to explore the methodologies employed by the CSN for quantifying sea salt aerosols, emphasizing a pivotal methodological shift in 2017 and its implications. This research examines the SSA measurements gathered from various monitoring sites, assessing whether the notable increase in the reported SSA levels following 2017 is a long-term trend or a result of short-term variability. By analyzing the data collected over the past two decades, with a specific focus on sites in the BWC area, this paper seeks to determine whether the observed trends are restricted to the CSN network or are also present in other networks, such as IMPROVE. Additionally, it aims to investigate whether the sharp increase in SSA levels was caused by measurement or reporting errors.
Methodology

Network Description and Sampling
In the United States, two major networks play crucial roles in assessing PM 2.5 speciation, providing critical insights into air quality dynamics: the Interagency Monitoring of Protected Visual Environments (IMPROVE) and the Chemical Speciation Network (CSN). Established in 1985, after the 1977 amendments to the Clean Air Act, IMPROVE monitors the air quality in national parks and wilderness areas, using modern equipment to evaluate particulate matter composition, and currently, there are approximately 160 active sites [12]. In reaction to the health-oriented National Ambient Air Quality Standards (NAAQS) for PM 2.5 established in 1997, the CSN was established by the US EPA in 1997 to complement the National PM 2.5 Monitoring Network (https://www.epa.gov/amtic/chemical-speciation-network-csn, accessed on 10 June 2024). This network comprises the Speciation Trends Network (STN) and supplemental speciation sites. Employment of CSN data has helped achieve various goals including the development of effective State Implementation Plans, formulation of emission control strategies, interpretation of health studies, and characterization of the spatial variation of aerosols throughout the year. In 2020, the CSN had around 50 STN sites and 100 State and Local Air Monitoring Stations (SLAMS) supplemental sites. The CSN program enhances the coverage of IMPROVE by adding monitoring sites in heavily populated urban areas [13,14].
Figure 2 shows the geographical distribution of IMPROVE (red dots) and CSN (blue dots) air quality monitoring networks in the mainland USA. IMPROVE sites are heavily concentrated in the western states, particularly in natural and protected areas such as the national parks, aligning with their focus on visibility and air quality in these regions. In contrast, CSN sites are more evenly distributed but denser in Eastern USA, particularly in the Midwest, Northeast, and along the Atlantic coast, targeting urban and suburban areas to analyze PM pollution.
Table 1. Mass contribution (g/g) of selected elements in fresh sea salt (Millero, 2006).

Since 2000, the CSN and the IMPROVE network have followed a similar sampling process where samples are collected every three days from the monitoring sites. However, the sampling process involves independent sampling techniques and specific filter media. The CSN and IMPROVE networks focus on measuring various components present in the air, including major anions, carbonaceous material, and a range of trace elements. In addition to measuring major anions and carbonaceous material, the CSN network directly measures ammonium (NH 4 +) and other cations. On the other hand, the IMPROVE network estimates ammonium levels by assuming that the measured sulfate (SO 4 2−) and nitrate ions (NO 3 −) are completely neutralized. The IMPROVE network also measures chloride and nitrite (NO 2 −) ions from the beginning of its operations [12]. The data samples at both the sites are systematically gathered according to a 1-in-3 sampling schedule, wherein an integrated 24 h sample is collected every third day.
While the CSN and IMPROVE networks share similarities in their field and laboratory approaches, several subtle differences exist in their sampling and chemical analysis methods. These differences include variations in the techniques used to collect samples and analyze the chemical composition of the collected samples. For this study, the focus is on the measurement of chlorine and chloride. Both the CSN and IMPROVE networks use specific methods for these measurements. They utilize energy dispersive X-ray fluorescence (EDXRF) to measure the chlorine levels, and nylon ion chromatography to measure the chloride levels. There are differences in the acceptance testing protocols between IMPROVE and the CSN, but the impact on measurements is likely minimal due to the rigorous approach of collecting and analyzing blanks throughout the sampling and analysis process [14]. Blanks are used as controls to identify and quantify any contamination or artifacts introduced during collection, processing, transportation, and analysis, thereby improving the data's reliability and validity [15]. Over time, the CSN's methodologies have become more like those of IMPROVE [16,17]. Starting with samples obtained in November 2015, a new contractor was hired to analyze and report on the CSN data.
Sea Salt Aerosol Calculation from Tracers
Accurately quantifying the contribution of sea salt aerosol to the total mass of PM 2.5 is crucial for understanding the composition and sources of atmospheric PM. To achieve this, the identification and utilization of appropriate tracers/markers play a pivotal role. Among the numerous constituents of sea salt, sodium and chlorine emerge as the prominent tracers due to their prevalence and distinctive analytical signatures. In the literature, it is assumed that other significant sources of Na and Cl in the atmosphere are negligible, and hence their concentrations can be employed to estimate the concentration of freshly formed sea salt particles in the air. This estimation relies on the relative contribution of Na and Cl in sea salt (Table 1), as determined by the work of [7]. By dividing the concentration of Na or Cl by their respective relative contributions, the concentration of sea salt particles is derived. The computation of sea salt concentrations follows a simple relationship: the concentration of sea salt particles (in µg/m 3) equals the reciprocal of the element's mass fraction in sea water multiplied by the concentration of the tracer. For chlorine, the sea salt particle concentration is given by 1 divided by 0.554, multiplied by the Cl concentration, which yields a factor of 1.8 times the Cl concentration. Similarly, for Na, the sea salt particle concentration is obtained by dividing 1 by 0.308, and then multiplying it by the Na concentration, resulting in a factor of 3.26 times the Na concentration. This calculation is also based on the major and minor constituents of seawater and marine aerosols described in detail by [18]. The EPA, following the research conducted by [6], calculates sea salt concentrations as 1.8 times the chloride measurement when available, or alternatively, 1.8 times the chlorine measurement.
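To make the tracer-based conversion concrete, a minimal sketch is given below. It is an illustration of the relationship described above, not code from the CSN or IMPROVE processing pipelines; the mass fractions are the values quoted in the text (Table 1), and the function name is our own.

```python
# Minimal sketch of tracer-based sea salt aerosol (SSA) estimation.
# Mass fractions of Cl and Na in fresh sea salt, as quoted in the text (Table 1).
MASS_FRACTION = {"Cl": 0.554, "Na": 0.308}

def ssa_from_tracer(concentration_ug_m3: float, tracer: str) -> float:
    """Estimate SSA concentration (ug/m3) from a single tracer concentration.

    SSA = tracer concentration / mass fraction of the tracer in sea salt,
    i.e. roughly 1.8 x Cl or 3.26 x Na.
    """
    return concentration_ug_m3 / MASS_FRACTION[tracer]

# Example: 0.05 ug/m3 of chloride implies roughly 0.09 ug/m3 of sea salt.
print(round(ssa_from_tracer(0.05, "Cl"), 3))   # ~0.090
print(round(ssa_from_tracer(0.05, "Na"), 3))   # ~0.162
```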
Study Area
The Baltimore-Washington Corridor (BWC) was chosen as the research region due to its vital relevance in terms of air quality and strategic position. The BWC is a major metropolitan center in the Mid-Atlantic region of the United States, spanning the urban and suburban areas between Baltimore, Maryland, and Washington, D.C. It is particularly sensitive to many air contaminants, owing to its geographical location and high degree of urbanization [19]. The area has significant air quality concerns due to mobility emissions, industrial activity, and its geographic location. The BWC, which is home to several manufacturing factories, power stations, and other industrial facilities, emits considerable amounts of pollutants into the atmosphere, including SO 2, nitrogen oxides (NO x = NO + NO 2), and particulate matter (PM) [20].
There are three prominent air monitoring sites in the BWC region, namely McMillan NCore-PAMS (DC), HU-Beltsville (Beltsville) and Essex (Essex), all of which belong to the CSN. These stations play a crucial role in collecting data that reflect the cumulative impact of emissions, particularly near industrial sources (Figure 3). Multiple CSN stations in the region ensure detailed chemical analyses of PM, helping to identify pollution trends and sources and aiding in developing strategies to improve air quality.
To conduct a fair and insightful analysis of the PM speciation dataset trend from the BWC CSN, we chose the two nearest IMPROVE stations, namely Madison County, Virginia, and Piney Run, Maryland (Figure 3). This allowed for a thorough review of the long-term datasets from each station and for the detection and explanation of any SSA measurement/calculation anomalies that may exist. The authors recognize that these stations, as part of different networks, are situated in varying environmental conditions, which may influence the measurement of SSA concentrations. Despite potential variations, the study primarily focused on the overarching long-term trends in the SSA concentration levels. By examining the PM speciation dataset acquired over time, we aimed to gain a comprehensive understanding of regional air quality trends, with a special emphasis on SSAs.

Sea Salt Aerosol Trend
Percentage of SSAs in the Total PM 2.5 Speciation
Table 2 provides a comprehensive overview of the relative contribution of sea salt aerosols to the total PM 2.5 concentration at the study sites, as seen in Figure 1 as well. Analysis of the data from the CSN sites spanning from 2005 to 2014 shows that sea salt particles constitute the smallest fraction of PM 2.5 compared to other constituents such as crustal matter, organic carbon, elemental carbon, nitrate, and sulfate, whereas in the case of the IMPROVE sites, a slight increase in the SSA percentage can be observed starting from 2011. This finding suggests that within the specified period, SSAs had a relatively minor influence on the overall PM 2.5 concentration levels in the study region. The average percentage of SSA during this period was recorded as 0.43%, 0.35%, and 0.56% at the DC, Beltsville, and Essex monitoring stations, respectively. However, an intriguing shift in sea salt aerosol levels emerged in 2015, as evidenced by a substantial increase in its proportion. Remarkably, the Beltsville station stands out with a particularly high percentage of SSAs, approximately three times greater than its average during 2005-2014. In contrast, the DC and Essex stations exhibit comparatively lower percentage increases during the same period. In 2016, a notable decline was observed across all three stations, with SSAs contributing the lowest proportion of PM 2.5 during that year. However, this downturn was short-lived, as a significant upsurge occurred in 2017 at all three CSN sites, with DC showing a significant increase to 6.5% from just 0.1% the previous year. Following the spike in 2017, each of the three CSN locations experienced a decline in subsequent years, although the percentages remained higher than the pre-2017 levels. The two IMPROVE sites, while also showing an increase in 2011, did not exhibit sharp peaks as the CSN sites did. Madison County showed a higher baseline from 2011 onwards, with its peak occurring in 2020 (1.8%). Piney Run's data are generally the lowest of all the sites but follow the overall trend of a slight increase from 2011 onwards. Thus, the comparison between the CSN and IMPROVE dataset trends highlights a striking divergence in the SSA percentages. The data suggest that the CSN sites registered notably higher SSA levels from 2017 onwards.
Annual Average of SSA Concentrations
Figure 4 presents a comprehensive analysis of the annual average sea salt trends in the study area from 2001 to 2021. Notably, the absence of data for the Essex site in 2003 and 2004 contributes to a discontinuity in the trend. Madison County has data available from 2004, whereas the Beltsville and Piney Run sites provide data from 2005 onwards. This discrepancy in data availability also underscores the importance of continuous monitoring to maintain a comprehensive understanding of the PM 2.5 speciation trends.
Examining the measured sea salt levels over time, a slight increase was observed in 2007 at both the DC and Essex sites, although the trend declined until 2012. During this time, Essex regularly had slightly higher annual average SSA concentrations than DC, an observation that could be attributed to the Essex site's proximity to the Baltimore Bay region. In contrast, the Beltsville site reported lower annual average SSA concentrations, most likely due to its location further away from the coastal area. From 2013, a consistent trend appeared as all three CSN sites showed a progressive increase in SSA levels, which lasted until 2015. In contrast to this ascending pattern, 2016 marked an unexpected decline in the SSA levels throughout these CSN sites. This shift cannot be attributed to missing measurements or datasets. From 2017, the data show a pronounced surge in SSA levels, notably at the DC location, which witnessed the most significant escalation in that year. The fact that this major increase was recorded only at the DC site, unlike the moderate trends at Beltsville and Essex, suggests that no shared large-scale event or change in the environmental conditions of the BWC was responsible. Madison County generally showed a stable or slightly downward trend with small noticeable peaks in 2004 and 2011. Piney Run exhibited a relatively stable trend with minor fluctuations in 2011 and 2020. The increase in SSA levels from 2010 to 2011 for both of these IMPROVE sites is minor compared to the one measured by the CSN sites between 2016 and 2017.
The consistently moderate concentrations of SSA observed at the IMPROVE sites suggest that the SSAs may be attenuated due to transport and aging, given that these sites are located a long distance from the coastal areas where such particles originate. The decrease in SSA levels is likely attributed to the deposition and dispersion processes that occur when aerosols move inland. However, the disparity becomes more evident when we look at the CSN sites. Despite being impacted by regional emissions from various sources, such as industrial operations, urban emissions, or natural contributions from the surrounding environment, the CSN sites showed increased SSA concentrations, notably after 2016. This significant increase raises concerns, implying potential technological or procedural changes within the monitoring system that went into force around 2017. These modifications might account for the observed increase in reported levels, indicating improvements in detection capabilities or updates to monitoring techniques that resulted in a more accurate depiction of the SSA abundance in these places.
Domination of Chloride over Chlorine Measurements
Figure 5 displays the measurements of chlorine and chloride taken at the Essex site during the study period. All three CSN sites display similar trends, and hence Essex is used as a representative of the other two sites. The concentration of chlorine remained relatively low throughout the period, with no significant changes occurring during or after 2017. This trend suggests a consistent and stable presence of chlorine at the site. On the contrary, the chloride levels measured at the station were consistently higher since the commencement of the measurements. We observed continuous and relatively higher concentrations of chloride compared to chlorine. The annual average sea salt concentrations from 2017 to 2021, presented in Table 3, support this observation. The data reveal that the annual average chloride concentration is approximately ten times greater than the annual average chlorine concentration, indicating a significant disparity between the two.

Interestingly, similar patterns were also noted at the other two sites in the BWC, suggesting that the lower chlorine levels are not specific to the Essex site alone but may be a characteristic of the entire region. The lower chlorine levels could be attributed to its depletion through atmospheric chemical reactions, leading to its conversion or loss into the gas phase. These reactions can occur between chlorine and various compounds present in the atmosphere [22]. Consequently, the chlorine concentration at the site remains consistently low.
The CSN began reporting chloride concentrations in February 2017, marking a pivotal methodological shift with substantial implications for the way SSA was calculated. The use of chloride as a marker for SSA measurement was a change from the previous usage of chlorine for the same purpose. This transition was crucial in interpreting the SSA data trends, as chloride tends to be present in higher concentrations and is a more substantial constituent of sea salt compared to elemental chlorine. As a result of this analytical refinement, the CSN stations began reporting higher SSA concentrations from 2017 onwards. This methodological change explains the significant increase in SSA percentages recorded after 2017 by the CSN sites, a trend that deviates sharply from the traditionally lower percentages.
Table 4 presents a comprehensive analysis of the data for chlorine and chloride measured at the Essex CSN site over a span of five years, from 2017 to 2021. The data include key statistical measures such as the mean, standard deviation, standard error of the mean, and coefficient of variation (%), which are crucial for understanding the behavior and trends of these chemical elements. Chloride has a significantly higher mean compared to chlorine, indicating that, on average, the chloride measurements are higher than the chlorine measurements. During the study period, chloride also exhibited a larger standard deviation compared to chlorine, indicating that the measurements of chloride vary more widely around their mean than the measurements of chlorine do around their mean. However, chlorine measurements displayed a higher coefficient of variation compared to chloride. This indicates that, relative to its mean, chlorine exhibits significantly more variability than chloride does.
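For readers who want to reproduce the summary statistics reported in Table 4, a minimal sketch is given below. The numbers used here are placeholders, not the actual Essex measurements, and the helper function is our own illustration of the standard definitions.

```python
import statistics as stats

def summarize(values):
    """Return mean, standard deviation, standard error of the mean, and
    coefficient of variation (%) for a list of concentration measurements."""
    n = len(values)
    mean = stats.mean(values)
    sd = stats.stdev(values)           # sample standard deviation
    sem = sd / n ** 0.5                # standard error of the mean
    cv = 100 * sd / mean               # coefficient of variation (%)
    return {"mean": mean, "sd": sd, "sem": sem, "cv_percent": cv}

# Placeholder chlorine and chloride series (ug/m3), not real Essex data.
chlorine = [0.004, 0.002, 0.010, 0.003, 0.015]
chloride = [0.040, 0.055, 0.048, 0.052, 0.045]
print(summarize(chlorine))
print(summarize(chloride))
```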
Conclusions and Discussions
The EPA, following the research conducted in 2008, calculates SSA concentrations as 1.8 times the chloride measurement when available, or alternatively, 1.8 times the chlorine measurement [6]. The 1.8 factor accounts for the other sea salt components like sodium, which is directly measured by the network's routine analyses [7]. However, the observations show significantly lower measured chlorine concentrations at the speciation sites compared to chloride concentrations. If chloride is considered a more accurate tracer for SSA than chlorine, then the SSA calculations performed by the CSN prior to 2017 using chlorine values would show systematically lower concentrations. This discrepancy highlights the potential for erroneous reporting of SSA concentrations if chlorine is used as the only tracer for SSA measurement. Also, this methodological oversight could have led to significant underestimations of SSA levels in previous analyses, impacting the accuracy and reliability of air quality assessments. Consequently, this finding should be taken into consideration when estimating SSA concentrations at sites where the chloride levels are not reported. The disparity between the recorded chlorine and chloride concentrations at CSN locations in the study area (BWC) makes a strong case for a more sophisticated method to determine sea salt aerosol levels. While the EPA has historically used a factor of 1.8 times the chloride value, or 1.8 times the chlorine measurement, this study underscores the need for a more rigorous technique. Given the observed discrepancies in chlorine and chloride concentrations, a single tracer may not adequately represent sea salt aerosol levels. Hence, we recommend the use of more than one tracer for extensive verification of sea salt aerosol amounts. Beyond the traditional use of chlorine or chloride alone, including an extra tracer might improve computation accuracy. This method would require cross-referencing readings from two separate tracers, resulting in a more reliable and representative estimate of sea salt aerosol concentrations. Pairing chlorine or chloride with another compound that exhibits a distinct response to sea salt aerosol could better account for variations in sea salt composition and improve the precision of concentration estimates.
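The reporting rule and the multi-tracer cross-check recommended above can be expressed compactly. The sketch below is illustrative only: the fallback order follows the EPA convention quoted in the text, while the cross-check tolerance, the choice of sodium as the second tracer, and the function names are our own assumptions.

```python
def ssa_epa(chloride=None, chlorine=None):
    """EPA-style SSA estimate: 1.8 x chloride when available, else 1.8 x chlorine."""
    if chloride is not None:
        return 1.8 * chloride
    if chlorine is not None:
        return 1.8 * chlorine
    raise ValueError("no tracer measurement available")

def ssa_cross_checked(chloride, sodium, rel_tol=0.5):
    """Multi-tracer variant: estimate SSA from chloride and sodium independently
    and flag the sample when the two estimates disagree by more than rel_tol."""
    est_cl = chloride / 0.554     # ~1.8 x Cl
    est_na = sodium / 0.308       # ~3.26 x Na (factors quoted in the text)
    agree = abs(est_cl - est_na) <= rel_tol * max(est_cl, est_na)
    return (est_cl + est_na) / 2, agree

print(ssa_epa(chloride=0.05))          # 0.09
print(ssa_cross_checked(0.05, 0.02))   # averaged estimate plus an agreement flag
```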
The analysis of the dataset from the Essex CSN site revealed notable trends and patterns in chlorine and chloride concentrations over the five-year study period. The chlorine concentrations displayed considerable fluctuations, as evidenced by the high coefficient of variation (%), indicating that chlorine levels were subject to seasonal or episodic changes during the study period and pointing to potential local environmental forcing factors. In contrast, the chloride concentrations exhibited more stable trends, as reflected by a lower coefficient of variation (%). The standard errors of the means estimated for both chlorine and chloride provided a reliable basis for the annual mean values, contributing to the robustness of the results. The observed fluctuations in chlorine concentrations emphasize the need for continuous monitoring and assessment of potential sources of contamination.
The findings presented in this study have significant implications for the policymakers and stakeholders involved in assessing and managing air quality at the state and national levels for more accurate future assessments. Policymakers rely on accurate and reliable data to make informed decisions regarding air quality regulations, public health measures, and environmental protection strategies. The subtle variations in data collection and reporting, specifically in relation to sea salt aerosol levels (which are calculated by measuring their tracers), underscore the need for harmonization and standardization across monitoring networks. This is particularly crucial when interpreting historical data or conducting comparative analyses between different monitoring sites or regions. Strong differences in calculated sea salt aerosol concentrations, based on differences in chloride and chlorine measurements, highlight the importance of reconsidering the current calculation methodologies.
We also recommend that the data reporting agency adopt the term 'salt' rather than 'sea salt' when specifying this particular PM 2.5 speciation. This adjustment is crucial for a more comprehensive representation of the particles' sources. Using the term 'sea salt' inherently limits the understanding of the particles' origin to ocean spray, thereby excluding other significant sources such as dry lakebeds, chemical industries, and other anthropogenic activities. By using the broader term 'salt', we ensure that both natural and anthropogenic sources are adequately covered. This change will enhance the accuracy of data reporting and analysis, leading to better-informed environmental policies and research outcomes. Accurate terminology is essential for precise data interpretation, and adopting 'salt' will contribute to a more holistic understanding of particulate matter in various environmental contexts.
Figure 1. PM 2.5 speciation (in percentage) trend for the DC (a), Beltsville (b) and Essex (c) CSN stations, and the Madison County (d) and Piney Run (e) IMPROVE stations, as reported in the EPA Our Nation's Air annual 2023 report.
Figure 2. Location of IMPROVE sites (red dots, top) and CSN sites (blue dots, bottom) throughout mainland USA. Source: Federal Land Manager Environmental Database.
Figure 3. Location of the DC, Beltsville, and Essex (as part of CSN); Madison County and Piney Run (as part of IMPROVE).
Figure 4. Two decades of SSA trend in the study area from 2001 to 2021, showing the annual average concentrations.
Figure 5. Bar representation of chlorine (black) and chloride (red) measurements at the Essex site.
Table 2. Percentage of sea salt aerosol in the total PM 2.5 speciation measured at the study sites [21].
Table 3. Comparisons between annual averages of chlorine and chloride data from the Essex CSN station from 2017 to 2021.
Table 4. Essential statistical measures such as mean, standard deviation, standard error of mean and coefficient of variation (%) for chlorine and chloride measurements from the Essex CSN site from 2017 to 2021. | 2024-06-23T15:16:56.097Z | 2024-06-21T00:00:00.000 | {
"year": 2024,
"sha1": "7c6f292e93b5567b23240451ce82a9c582d4ab0b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/15/7/740/pdf?version=1718953954",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "daa33221c3cbef5c893a68dccaa4bf829b6d3efe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
215805388 | pes2o/s2orc | v3-fos-license | LexSchem: a Large Subcategorization Lexicon for French Verbs
This paper presents LexSchem - the first large, fully automatically acquired subcategorization lexicon for French verbs. The lexicon includes subcategorization frame and frequency information for 3297 French verbs. When evaluated on a set of 20 test verbs against a gold standard dictionary, it shows 0.79 precision, 0.55 recall and 0.65 F-measure. We have made this resource freely available to the research community on the web.
Introduction
A lexicon is a key component of many current Natural Language Processing (NLP) systems. Hand-crafting lexical resources is difficult and extremely labour-intensive, particularly as NLP systems require statistical information about the behaviour of lexical items in data, and this statistical information changes from one dataset to another. For this reason automatic acquisition of lexical resources from corpora has become increasingly popular. One of the most useful types of lexical information for NLP is that related to the predicate-argument structure. Subcategorization frames (SCFs) of a predicate capture at the level of syntax the different combinations of arguments that each predicate can take. For example, in French, the verb "acheter" (to buy) subcategorizes for a single nominal phrase as well as for a nominal phrase followed by a prepositional phrase governed by the preposition "à". Subcategorization lexicons can benefit many NLP applications. For example, they can be used to enhance tasks such as parsing (Carroll et al., 1998; Arun and Keller, 2005) and semantic classification (Schulte im Walde and Brew, 2002) as well as applications such as information extraction (Surdeanu et al., 2003) and machine translation. Several subcategorization lexicons are available for many languages, but most of them have been built manually. For French these include e.g. the large French dictionary "Le Lexique Grammaire" (Gross, 1975) and the more recent Lefff (Sagot et al., 2006) and Dicovalence (http://bach.arts.kuleuven.be/dicovalence/) lexicons. Some work has been conducted on automatic subcategorization acquisition, mostly on English (Brent, 1993; Manning, 1993; Briscoe and Carroll, 1997; Korhonen et al., 2006) but increasingly also on other languages, of which German is just one example (Schulte im Walde, 2002). This work has shown that although automatically built lexicons are not as accurate and detailed as manually built ones, they can be useful for real-world tasks. This is mostly because they provide what manually built resources generally do not: statistical information about the likelihood of SCFs for individual verbs. We have recently developed a system for automatic subcategorization acquisition for French which is capable of acquiring large scale lexicons from un-annotated corpus data (Messiant, 2008). To our knowledge, only one previously published system exists for the acquisition of French SCFs (Chesley and Salmon-Alt, 2006). However, no further work has been published since the initial experiment with this system, and the lexicon resulting from the initial experiment (which is limited to 104 verbs) is not publicly available.
Our new system is similar to the system developed in Cambridge (Briscoe and Carroll, 1997;Preiss et al., 2007) in that it extracts SCFs from data parsed using a shallow dependency parser (Bourigault et al., 2005) and is capable of identifying a large number of SCFs. However, unlike the Cambridge system (and most other systems which accept raw corpus data as input), it does not assume a list of predefined SCFs. Rather it learns the SCF types from data. This approach was adopted because at the time of development no comprehensive manually built inventory of French SCFs was available to us. In this paper, we report work where we used this recent system to automatically acquire the first large subcategorization lexicon for French verbs. The resulting lexicon, LexSchem, is made freely available to the community under LGPL-LR (Lesser General Public License For Linguistic Resources) license. We describe ASSCI, our SCF acquisition system, in section 2. LexSchem (the automatically acquired lexicon) is introduced and evaluated in section 3. We compare our work against previous work in section 4.
ASSCI: the subcategorization acquisition system
ASSCI takes raw corpus data as input. The data is first tagged and syntactically analysed. Then, our system produces a list of SCFs for each verb that occurred frequently enough in data (we have initially set the minimum limit to 200 corpus occurrences). ASSCI consists of three modules: a pattern extractor which extracts patterns for each target verb; a SCF builder which builds a list of candidate SCFs for the verb, and a SCF filter which filters out SCFs deemed incorrect. We introduce these modules briefly in the subsequent sections. For a more detailed description of ASSCI, see (Messiant, 2008).
2.1. Preprocessing: Morphosyntactic tagging and syntactic analysis

Our system first tags and lemmatizes corpus data using the Tree-Tagger and then parses it using Syntex (Bourigault et al., 2005). Syntex is a shallow parser for French. It uses a combination of heuristics and statistics to find dependency relations between tokens in a sentence. It is a relatively accurate parser; e.g., it obtained the best precision and F-measure for written French text in the recent EASY evaluation campaign. The example below illustrates the dependency relations detected by Syntex (2) for the input sentence in (1). Syntex does not make a distinction between arguments and adjuncts; rather, each dependency of a verb is attached to the verb.
Pattern extractor
The pattern extractor collects the dependencies found by the parser for each occurrence of a target verb. Some cases receive special treatment in this module. For example, if the reflexive pronoun "se" is one of the dependencies of a verb, the system considers this verb as a new one. In (1), the pattern will correspond to "s'abattre" and not to "abattre". If a preposition is the head of one of the dependencies, the module explores the syntactic analysis to find whether it is followed by a noun phrase ([prep+SN]) or an infinitive verb ([prep+SINF]).
(3) shows the output of the pattern extractor for the input in (1).
SCF builder
The SCF builder extracts SCF candidates for each verb from the output of the pattern extractor and calculates the number of corpus occurrences for each SCF and verb combination. The syntactic constituents used for building the SCFs are the following:

1. SN for nominal phrases;
2. SINF for infinitive clauses;
3. SP[prep+SN] for prepositional phrases where the preposition is followed by a noun phrase (prep is the head preposition);
4. SP[prep+SINF] for prepositional phrases where the preposition is followed by an infinitive verb (prep is the head preposition);
5. SA for adjectival phrases;
6. COMPL for subordinate clauses.
When a verb has no dependency, its SCF is considered as INTRANS.
(4) shows the output of the SCF builder for (1).
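As an illustration of the counting step described above, the sketch below maps each extracted dependency pattern to a frame label and tallies occurrences per verb. It is a hypothetical reconstruction: the joining convention for multi-constituent frames and the helper names are our own, not ASSCI's actual implementation.

```python
from collections import Counter, defaultdict

def build_scf(pattern):
    """pattern is a list of constituent labels for one verb occurrence,
    e.g. ["SP[sur+SN]"] or ["SN", "SP[a+SN]"]; an empty list means INTRANS."""
    return "_".join(sorted(pattern)) if pattern else "INTRANS"

def count_scfs(occurrences):
    """occurrences is an iterable of (verb, pattern) pairs from the pattern extractor."""
    counts = defaultdict(Counter)
    for verb, pattern in occurrences:
        counts[verb][build_scf(pattern)] += 1
    return counts

occ = [("s'abattre", ["SP[sur+SN]"]), ("s'abattre", []), ("acheter", ["SN", "SP[a+SN]"])]
print(count_scfs(occ))
```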
SCF filter
Each step of the process is fully automatic, so the output of the SCF builder is noisy due to tagging, parsing or other processing errors. It is also noisy because of the difficulty of the argument-adjunct distinction. The latter is difficult even for humans. Many criteria that exist for it are not usable for us because they either depend on lexical information which the parser cannot make use of (since our task is to acquire this information) or on semantic information which even the best parsers cannot yet learn reliably. Our approach is based on the assumption that true arguments tend to occur in argument positions more frequently than adjuncts. Thus many frequent SCFs in the system output are correct. We therefore filter low frequency entries from the SCF builder output. We currently do this using the maximum likelihood estimates (Korhonen et al., 2000). This simple method involves calculating the relative frequency of each SCF (for a verb) and comparing it to an empirically determined threshold. The relative frequency of the SCF i with the verb j is calculated as follows:
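The equation itself does not appear in this version of the text; under the usual maximum-likelihood formulation implied by the description, it would be relfreq(scf_i, verb_j) = count(scf_i, verb_j) / Σ_k count(scf_k, verb_j), i.e. the occurrence count of the SCF with the verb divided by the total number of SCF occurrences observed for that verb. A minimal sketch of the corresponding filter is given below; the 0.05 threshold is purely illustrative, not the empirically determined value used by the authors.

```python
# Hypothetical sketch of the maximum-likelihood relative-frequency filter.
# counts maps verb -> {scf: number of corpus occurrences}.
def filter_scfs(counts, threshold=0.05):
    lexicon = {}
    for verb, scf_counts in counts.items():
        total = sum(scf_counts.values())
        lexicon[verb] = {scf: n / total for scf, n in scf_counts.items()
                         if n / total >= threshold}
    return lexicon

print(filter_scfs({"s'abattre": {"SP[sur+SN]": 180, "INTRANS": 15, "SA": 5}}))
```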
LexSchem
We used ASSCI to acquire LexSchem, the first fully automatically built large subcategorization lexicon for French verbs. We describe this work and the outcome in the subsequent sections.
Corpus
The automatic approach benefits from a large corpus. In addition, as we want our lexicon to be suitable for general use (not only for a particular domain), the corpus needs to be heterogeneous enough to cover many domains and text types. We thus used ten years of the French newspaper Le Monde (two hundred million words in total). Le Monde is one of the largest corpora for French and "clean" enough to be parsed easily and efficiently.
Description of the lexicon
Running ASSCI on this corpus data, we extracted 11,149 lexical entries in total for different verb and SCF combinations. The lexicon covers 3268 verb types (a verb and its reflexive form are counted as 2 different verbs) and 336 distinct SCFs. Each entry has 7 fields:

• NUM: the number of the entry in the lexicon;
• SUBCAT: a summary of the target verb and SCF;
• VERB: the verb;
• SCF: the subcategorization frame;
• COUNT: the number of corpus occurrences found for the verb and SCF combination;
• RELFREQ: the relative frequency of the SCF with the verb;
• EXAMPLES: 5 corpus occurrences exemplifying this entry (the examples are provided in a separate file).
The following shows the LexSchem entry for the verb "s'abattre" with the SCF SP[sur+SN]. Two of the five corpus sentences exemplifying this entry are shown as follows (the syntactic analysis of Syntex is also available): 25458===Il montre la salle : On a fait croire aux gens que des hordes s' abattraient sur Paris .
Evaluation
We evaluated LexSchem against a gold standard from a dictionary. Although this approach is not ideal (e.g. a dictionary may include SCFs not included in our data, and vice versa; see e.g. (Poibeau and Messiant, 2008) for discussion), it can provide a useful starting point. We chose a set of 20 verbs listed in the Appendix to evaluate this resource. These verbs were chosen for their heterogeneity in terms of semantic and syntactic features, but also because of their varied frequency (200 to 100,000) in the corpus. We compared our lexicon against the Trésor de la Langue Française Informatisé (TLFI), a freely available French lexicon containing verbal SCF information from a dictionary. We had to restrict our scope to 20 verbs because of problems in turning this resource into a gold standard. We calculated type precision, type recall and F-measure against the gold standard, and obtained 0.79 precision, 0.55 recall and 0.65 F-measure. These results are shown in Table 1, along with: 1) the results obtained with the only previously published work on automatic subcategorization acquisition (from raw corpus data) for French verbs (Chesley and Salmon-Alt, 2006), and 2) those reported with the previous Cambridge system when the system was used to acquire a large SCF lexicon for English with a baseline filtering technique comparable to the one employed in our work (VALEX sub-lexicon 2) (Korhonen et al., 2006). Due to the differences in the data, SCFs, and experimental setup, direct comparison of these results is not meaningful. However, their relative similarity seems to suggest that LexSchem is a state-of-the-art lexicon.
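For clarity, the type-based evaluation measures referred to above can be computed as in the following sketch. This is our own illustration of the standard definitions, not the evaluation script used by the authors, and the SCF labels in the example are invented.

```python
def type_scores(predicted_scfs, gold_scfs):
    """Type precision, recall and F-measure for one verb's SCF sets."""
    predicted, gold = set(predicted_scfs), set(gold_scfs)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f

# Toy example: 2 of 3 predicted frames are in the gold set of 4 frames.
print(type_scores({"SN", "SP[a+SN]", "INTRANS"}, {"SN", "SP[a+SN]", "COMPL", "SINF"}))
```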
Related work
This section describes other existing syntax dictionaries and lexicons for French (most of the ones we are aware of). For comparison, it also includes a description of VALEX, the first large subcategorization lexicon acquired automatically for English. Table 3 summarizes the key information included in these different lexical resources.
Dictionaries and lexicons for French
The Lexicon-Grammar (LG) (Gross, 1975; Gross, 1994) is the earliest resource for subcategorization information for French: a manually built dictionary including subcategorization information for verbs, adjectives and nouns. It is not ideally suited for computational use but work currently in progress is aimed at addressing this problem (Gardent et al., 2005). Only part of this resource is publicly available.
As mentioned earlier, the Trésor de la Langue Française Informatisé (TLFI) is derived from a syntax dictionary and (as we noted in the evaluation in Section 3) requires substantial manual work for NLP use. The Lefff is an automatically acquired morphological lexicon for 6798 verb lemmas (Sagot et al., 2006) which has been manually supplemented with partial syntactic information.
DicoValence is a manually built resource which contains valency frames for more than 3700 French verbs (van den Eynde and Mertens, 2006). It relies on the pronominal paradigm approach of van den Eynde and Blanche-Benveniste (1978). Note that the information provided by LG, the TLFI, the Lefff and DicoValence is type-based, i.e. no statistical information about the likelihood of SCFs for words is available. TreeLex (http://erssab.u-bordeaux3.fr/article.php3?id_article=150) is a subcategorization lexicon automatically extracted from the French TreeBank (Kupść, 2007). It covers about 2000 verbs. 160 SCFs have been identified (1.91 SCFs per verb on average). To our knowledge, this lexicon has not yet been evaluated in terms of accuracy. Like other resources mentioned in this section, TreeLex relies on manual effort. Resources built in this manner are not easily adapted to different tasks and domains.
As far as we know, the only published work on subcategorization acquisition for French is Chesley and Salmon-Alt (2006), which proposes a method to acquire SCFs from a French cross-domain corpus. That work relies on the VISL parser, which has an "unevaluated (and potentially high) error rate", while our system relies on Syntex, which has been evaluated and found to be accurate in the EASY evaluation campaign. We acquired and made publicly available a large subcategorization lexicon for 3268 verbs (336 SCFs), whereas Chesley and Salmon-Alt (2006) only reported an experiment with 104 verbs (27 SCFs).
The first automatically acquired large scale lexicon for English: VALEX
An interesting comparison point for us is VALEX, a large verb subcategorization lexicon created for English (Korhonen et al., 2006). This lexicon was acquired automatically using the system developed at Cambridge (Briscoe and Carroll, 1997) and is released in several versions that trade off accuracy against coverage: if the aim is to use SCF frequencies to aid parsing, it may be better to maximise the accuracy (rather than the coverage) of the lexicon. On the other hand, an NLP task such as lexical classification tends to benefit from a lexicon which provides good coverage at the expense of accuracy. The accuracy is controlled by using different SCF filtering options to build the different lexicons:

Lexicon 1: Unfiltered, noisy SCF lexicon.

Lexicon 2: High frequency SCFs only (baseline frequency filtering).
Lexicon 3: High frequency SCFs supplemented with additional ones from manually built dictionaries.
Lexicon 4: High frequency SCFs after smoothing with semantic back-off estimates.
Lexicon 5: High frequency SCFs after smoothing with semantic back-off estimates and supplemented with additional SCFs from manually built dictionaries.
LexSchem was released with a comparable filtering method and similar accuracy to Lexicon 2 of VALEX (see the comparison of results in the previous section). Future work could release other, more or less accurate versions of the lexicon once the filtering component of the system undergoes further development.
Another idea for future work concerns the lexical entries. As seen above in Section 3, the lexical entries of LexSchem provide various information. They could be further improved by adding argument head and associated frequency data for the different syntactic slots. In the case of VALEX, such information has proved useful for a number of NLP tasks.
Conclusion
This paper introduced LexSchem, the first fully automatically acquired large scale SCF lexicon for French verbs. It includes 11,149 lexical entries for 3268 French verbs. The lexicon is provided with a graphical interface and is made freely available to the community via a web page. Our evaluation with 20 verbs showed that the lexicon has state-of-the-art accuracy when compared with recent work using similar technology: 0.79 precision, 0.55 recall and 0.65 F-measure. Future work will include improvement of the filtering module (e.g. experimenting with SCF-specific thresholds or smoothing using semantic back-off estimates), automatic acquisition of SCFs for other French word classes (e.g. nouns), and automatic classification of verbs using the SCFs as features (Levin, 1993; Schulte im Walde and Brew, 2002). As mentioned above, we also plan to enhance the lexical entries of the lexicon. It would be useful to include in them information about noun and preposition classes and the morpho-syntactic properties of the words included in SCFs. Finally, as mentioned earlier, given that different NLP applications have different requirements, it is worth building and releasing other versions of LexSchem. | 2014-07-01T00:00:00.000Z | 2008-05-01T00:00:00.000 | {
"year": 2008,
"sha1": "147eb9a3db1d50d941121f9ed41bb8f87205065a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "5dfa38f46261b17b77aa7ee81aad854381e79c49",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3024021 | pes2o/s2orc | v3-fos-license | Divergent Cortical Generators of MEG and EEG during Human Sleep Spindles Suggested by Distributed Source Modeling
Background Sleep spindles are ∼1-second bursts of 10–15 Hz activity, occurring during normal stage 2 sleep. In animals, sleep spindles can be synchronous across multiple cortical and thalamic locations, suggesting a distributed stable phase-locked generating system. The high synchrony of spindles across scalp EEG sites suggests that this may also be true in humans. However, prior MEG studies suggest multiple and varying generators. Methodology/Principal Findings We recorded 306 channels of MEG simultaneously with 60 channels of EEG during naturally occurring spindles of stage 2 sleep in 7 healthy subjects. High-resolution structural MRI was obtained in each subject, to define the shells for a boundary element forward solution and to reconstruct the cortex providing the solution space for a noise-normalized minimum norm source estimation procedure. Integrated across the entire duration of all spindles, sources estimated from EEG and MEG are similar, diffuse and widespread, including all lobes from both hemispheres. However, the locations, phase and amplitude of sources simultaneously estimated from MEG versus EEG are highly distinct during the same spindles. Specifically, the sources estimated from EEG are highly synchronous across the cortex, whereas those from MEG rapidly shift in phase, hemisphere, and the location within the hemisphere. Conclusions/Significance The heterogeneity of MEG sources implies that multiple generators are active during human sleep spindles. If the source modeling is correct, then EEG spindles are generated by a different, diffusely synchronous system. Animal studies have identified two thalamo-cortical systems, core and matrix, that produce focal or diffuse activation and thus could underlie MEG and EEG spindles, respectively. Alternatively, EEG spindles could reflect overlap at the sensors of the same sources as are seen from the MEG. Although our results generally match human intracranial recordings, additional improvements are possible and simultaneous intra- and extra-cranial measures are needed to test their accuracy.
Introduction
Among the most prominent oscillations in the human EEG are sleep spindles, repeated bursts of 10-15 Hz waves waxing and waning over about a second, mainly in stage 2 NREM sleep [1,2]. Spindles also occur in lower mammals [3], where they have been intensively studied during sleep, barbiturate anesthesia, and in vitro, as a prototype of thalamocortical synchronization [4,5,6,7], with a possible role in memory consolidation and regulation of arousal [8,9]. More generally, spindles may represent a basic thalamo-cortical mechanism for modulating widespread cortical areas and synchronizing their interactions [8].
In animals, direct thalamic and cortical recordings have found multiple asynchronous spindle generators in some preparations whereas others find a widespread synchrony [4,6,7]. Most studies showing asynchrony were conducted under anesthesia, or in vitro [6,9,10]. Contreras [6] showed experimentally that the synchrony of thalamic spindles depended upon cortico-thalamic projections, and proposed mechanisms that have been replicated in computational models [11,12].These models are based on intracellular studies demonstrating that spindles emerge from interactions between inhibitory cells in the thalamic reticular nucleus and bursting thalamocortical neurons, that entrain this rhythm on the connected cortical areas [13].
In humans, the high correlation of spindle discharges across widely dispersed scalp EEG channels has been taken to imply a widespread synchrony of spindle generators across the cortical mantle [6]. However, EEG spindles often have lower frequencies over frontal as compared to parietal leads [2], especially toward the end of the spindle burst [14], suggesting that at least two spindle generators may be active. Indeed, a variety of source estimation techniques have found that four sources, placed in the deep parieto-central and fronto-central regions bilaterally, are adequate to explain most of the variation in spindles, including the tendency for frontal spindles to be slower [15,16,17], although Gumenyuk et al [18] estimated the sources for faster and slower spindle components to be overlapping. Conversely, Shih et al. [19] found that more sources were needed to model their measurements, but this could be related to their subjects being sedated [6]. Further evidence against a monolithic distributed synchronous generator has been found in comparisons of simultaneous EEG and MEG (magnetoencephalogram) recordings during spindles. Spindles may appear only in the MEG, only in the EEG, or in both modalities [15,17,20,21,22]. We re-examined these issues in a recent study [23], using high density EEG and MEG. While we replicated the previous findings that EEG signals during spindles are highly coherent across the entire scalp, simultaneous MEG signals were generally incoherent with each other and with the EEG. Further, we showed that many spindles occurring in multiple MEG channels are not readily apparent in the simultaneous EEG [Dehghani et al, submitted]. These findings seem to contradict the well-known fact that MEG and EEG reflect the same cortical generating dipoles, although the biophysics of their projection to their respective sensors are somewhat different. Thus, one might expect that if their sources were separately estimated and then compared, they would be found to be more similar than the MEG and EEG signals at their sensors. That is, projecting the EEG and MEG signals back to their sources might remove, at least in part, differences in their manifestations that are due to their divergent projections from sources to sensors.
In order to evaluate the possibility that some of the differences between MEG and EEG spindles noted at the sensors would be attenuated at their cortical sources, we performed source localization on simultaneously recorded MEG and EEG. A distributed cortically-constrained noise-normalized minimum norm inverse solution was applied to individual spindles, and the time-courses and spatial patterns were compared between solutions based on EEG and those based on MEG. Inverse estimates based on both modalities combined were also calculated. We find that, at the source level, EEG and MEG remain poorly correlated and with divergent characteristics. Sources derived from EEG are widespread, synchronous, and consistent across time and spindles, whereas those derived from MEG gradiometer recordings are relatively focal, independent and variable. We hypothesize that MEG versus EEG may be differentially sensitive to different thalamocortical systems engaged in spindle generation.
Ethics Statement
These studies were approved by the Institutional Review Boards of the University of California at San Diego and Massachusetts General Hospital, and were performed after written informed consent in conformity with the principles expressed in the Declaration of Helsinki.
Participants and Recordings
We recorded the electromagnetic field of the brain during sleep from seven healthy adults (3 males, 4 females, ages 20-35). Participants denied neurological problems including sleep disorders, epilepsy, or substance dependence, were taking no medications, and did not consume caffeine or alcohol on the day of the recording. We used a whole-head MEG scanner (Neuromag Elekta) within a magnetically shielded room (IM-EDCO, Hagendorf, Switzerland) and recorded simultaneously with 60 channels of EEG and 306 MEG channels. MEG SQUID (superconducting quantum interference device) sensors are arranged as triplets at 102 locations; each location contains one "magnetometer" and two orthogonal planar "gradiometers" (GRAD1, GRAD2). Locations of the EEG electrodes on the scalp of individual subjects were recorded using a 3D digitizer (Polhemus FastTrack). HPI (head position index) coils were used to measure the spatial arrangement of the head relative to the scanner. Four subjects had a full night's sleep in the scanner, and three had a daytime sleep recording (2 hours). The sampling rate was either 1000 Hz (downsampled by a factor of 2 for the final analysis) or 600 Hz. The continuous data were low-pass filtered at 40 Hz. An independent component analysis (ICA) algorithm was used to remove ECG contamination [24]. Stage 2 sleep and spontaneous spindles were identified using standard criteria by three electroencephalographers (please see Figure 1 for representative channels and Figure S1 for all channels) [25].
Anatomical MRI and Cortical Reconstruction
Anatomical MRI images were acquired on 1.5 Tesla scanners using an MPRAGE (Magnetization Prepared Rapid Gradient Echo) sequence (on Siemens scanners) or its equivalent on a GE scanner. These T1 images were segmented using Freesurfer [26] and the tessellated border between white matter and gray matter was chosen as the representative cortical surface for forward/inverse solutions [27]. The tessellated surface of each hemisphere had ~140,000 vertices. For computational efficiency, each hemisphere's surface was decimated down to ~3200 dipole seeding points. This decimation provides ~7 mm spacing between seeded dipoles across the cortical surface. For better visualization, tessellated surfaces were inflated to unfold cortical sulci [28]. Cortical parcellation was performed to create a "mid-brain mask" in order to exclude non-cortical structures (such as basal ganglia and corpus callosum) from inverse solution results as these structures are not likely to generate significant MEG signal [29].
Source localization
Realistically shaped models have higher prediction accuracy for source localization in comparison to spherical shell models [30,31,32,33]. In our source localization methods, we used a three-shell realistically shaped boundary element head model (BEM) constructed from tessellated surfaces of the inner-skull, outer-skull and outer-skin (scalp) [33]. It has been suggested that a single-shell BEM has adequate accuracy for forward-inverse calculation of MEG but not for EEG recordings [32,34]. However, as we needed to have an unbiased comparison of MEG and EEG source localization, we used the three-shell BEM model for both. A forward BEM transformation matrix was calculated based on the spatial configuration of EEG electrodes, information from the three-shell boundary element model and the location of dipole seeds on the reconstructed surface.
Dynamic statistical parametric mapping (dSPM) was used to estimate the cortical generators of measured signal at EEG or MEG sensors, as described by Dale and colleagues [35]. This inverse solution is a minimum norm procedure [36], where the source dipoles are constrained to lie in the reconstructed cortical surfaces [27], and the estimate is normalized for noise sensitivity so that statistical significance rather than dipole moment is mapped on the cortical surface. This results in a relatively uniform point-spread function between different dipole locations [37].
Since in the current study, no a priori assumptions were made about the local dipole orientation, three components were required for each location. A sensitivity-normalized estimate of the local current dipole power (sum of squared dipole component strengths) at each source location was calculated [27,38]. Spindle waveforms at the sources were tested for the null hypothesis that the signal was noise. The noise covariance was calculated in one of two ways. In one method, 100 epochs, each 600 ms long, were chosen for each patient. Although occurring in the temporal vicinity of the sleep spindles, these epochs were chosen because they lacked spindle discharges or other sleep grapho-elements. These epochs were filtered with the same filter as was used for the spindle recordings, averaged together, and then used for noise covariance calculations in the same way as the baseline prestimulus period is used when dSPM is applied to event-related potentials [35]. As is shown below, similar results were obtained using noise covariance estimates derived from empty room measurements. The significance of response at each site was calculated using an F-test [35,39]. The resultant dynamic statistical parametric maps (dSPM) were visualized on individual's inflated cortical surfaces [35]. Group averages were made by aligning the sulcal-gyral patterns of individual subjects and minimizing stretching of the surface while morphing into a reference sphere [40]. This approach provides statistical parametric maps of cortical activity, similar to the statistical maps typically generated using fMRI, or PET data, but with a temporal resolution limited by the 500/600 Hz sampling rate. Sources were not estimated for surfaces that represented deep white matter, ventricles, or noncortical structures unlikely to generate extracranial MEG or EEG signals.
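A compact numerical sketch of the noise-normalized minimum-norm step described above is given below (an illustrative simplification, not the actual analysis pipeline: it assumes a known forward gain matrix A and noise covariance C, uses a single orientation component per source and an identity source prior; all names are hypothetical):

```python
import numpy as np

def dspm_inverse(A, C, snr=3.0):
    """Noise-normalized minimum-norm (dSPM-style) inverse operator.

    A : (n_sensors, n_sources) forward gain matrix
    C : (n_sensors, n_sensors) noise covariance
    Returns W (n_sources, n_sensors) mapping sensor data to noise-normalized source estimates.
    """
    n_sensors, n_sources = A.shape
    R = np.eye(n_sources)                  # identity source prior (simplifying assumption)
    lam2 = 1.0 / snr**2                    # regularization derived from an assumed SNR
    G = A @ R @ A.T + lam2 * C             # regularized sensor-space Gram matrix
    W = R @ A.T @ np.linalg.inv(G)         # minimum-norm inverse operator
    # dSPM normalization: divide each source estimate by its noise sensitivity
    noise_power = np.sqrt(np.einsum('ij,jk,ik->i', W, C, W))
    return W / noise_power[:, None]

# usage sketch: s_hat = dspm_inverse(A, C) @ sensor_data
# rows of s_hat are noise-normalized source time courses
```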
For each spindle, the maximum ECD strength of a given dipole was calculated. The average of these maxima within each subject was mapped on that subject's reconstructed cortical surface for visual comparison. These methods were repeated for dSPM calculated from EEG alone ('EEG-dSPM'), MEG alone ('MEG-dSPM'), or both simultaneously ('MEG+EEG-dSPM').
Cross correlation and coherence of source space solutions
For a given spindle, "within-modality" correlations of EEG-dSPM solutions were measured by calculating the cross correlations of activity of all possible pairs of dipoles during spindling. Self-pairs were excluded. Averaging of these cross correlations across spindles and then across subjects yielded the net "within modality" cross correlation of EEG-dSPM. The "within modality cross correlation of MEG-dSPM" was calculated in an analogous fashion.
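As a rough illustration of this within-modality measure (a zero-lag sketch, not the original analysis code; `sources` stands for one spindle's estimated dipole time courses):

```python
import numpy as np

def within_modality_correlation(sources):
    """Average zero-lag correlation over all distinct dipole pairs.

    sources : (n_dipoles, n_times) array of source time courses for one spindle.
    """
    r = np.corrcoef(sources)              # pairwise correlation matrix
    iu = np.triu_indices_from(r, k=1)     # upper triangle excludes self-pairs and repeats
    return r[iu].mean()

# averaging this value over spindles, then over subjects, gives the
# net within-modality correlation reported in the Results
```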
For a given spindle, the "between modality" correlation was measured by finding the cross-correlations of activity of a given dipole as estimated from EEG with dSPM, with the activity of the same dipole as estimated from MEG. These "between modality" cross-correlations were averaged across dipoles, spindles and subjects. Analogous measures were obtained for coherence. When differential coherence in nearby frequency bands needed to be estimated, Capon's nonparametric spectral estimator, known as the "minimum variance distortionless response" (MVDR), was used. MVDR spectral estimation is based on the output of a bank of filters where the bandpass filters are data and frequency dependent [41]. The MVDR may be advantageous over Welch's method in distinguishing the coherences of nearby frequencies. The fact that the cross correlations of EEG-dSPM and MEG-dSPM were very low (see below) shows that these solution time courses do not have a linear dependence.
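A sketch of the between-modality comparison at one cortical location, using Welch-based coherence from SciPy rather than the MVDR estimator described above (array names, sampling rate and segment length are illustrative assumptions):

```python
import numpy as np
from scipy.signal import coherence

def between_modality_similarity(eeg_src, meg_src, fs=500.0, band=(10.0, 15.0)):
    """Zero-lag correlation and 10-15 Hz Welch coherence for one dipole's
    time course estimated from EEG (eeg_src) and from MEG (meg_src)."""
    r = np.corrcoef(eeg_src, meg_src)[0, 1]
    f, coh = coherence(eeg_src, meg_src, fs=fs, nperseg=min(len(eeg_src), 256))
    in_band = (f >= band[0]) & (f <= band[1])
    return r, coh[in_band].mean()

# averaging r and the band-limited coherence across dipoles, spindles and
# subjects yields the modality-comparison numbers reported in the Results
```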
Results
Based on standard clinical criteria, we used the EEG to select 85 spindles occurring in stage 2 sleep from the 7 subjects (~12 from each subject). Spindles immediately preceded by Vertex-waves or K-complexes were not chosen. Spindle duration mean and std were 721±235 ms (range 483 to 1123 ms). Spindle synchrony was examined between cortical locations in source space, using activity time-courses inferred from a distributed inverse solution.
Effects of different noise normalization procedures
For evoked responses, noise covariance in the dSPM procedure is calculated from either averaged or un-averaged pre-stimulus baseline activity [35]. Since there was no stimulus in the current study, noise covariance was calculated from ~100 epochs (with a duration of 600 ms each) selected from stage two sleep recordings which had been band-passed at the spindle frequency, i.e. 10-15 Hz. Epochs were chosen which lacked any recognizable grapho-elements or oscillatory features that could be categorized as one of the signatures of sleep. These epochs were averaged and the diagonal elements of the second power of the standard deviation (i.e., the variance) of the "sample*channel" matrix were used as the noise covariance matrix [27,35,37]. Since electromagnetic activity was not time-locked in any way with the onset of these epochs, the averaging procedure tended to represent the sensor distribution of the biological, instrumentation and environmental noise. In four subjects, an empty room recording provided an estimate of the instrumentation and environmental noise. Inverse solutions using the covariance matrix from this recording provided very similar results to those obtained using the 'inactive epochs,' as shown in Figure 2.
EEG sources appear more synchronous than MEG sources
Estimated dipole strengths in ~6500 cortical locations during a sample spindle were color-coded and plotted as lines that were stacked vertically in Figure 3. ECDs derived from EEG-dSPM oscillated in synchrony, whereas those derived from MEG did not (Figure 3A,B; see also Figure S2). Note also that MEG-derived sources do not exhibit peak amplitude at the same moment as EEG-derived sources. Dipole estimates derived from the combined MEG and EEG measurements display an intermediate pattern (Figure 3C). This pattern of synchronous activity across the cortex when estimated from EEG, and asynchronous when estimated by MEG, was observed for every spindle analyzed.
Dynamic sequence of cortical activity during spindling estimated with combined MEG/EEG
Figure 4 portrays the estimated sequence of cortical activity during a sample spindle (from subject 5) calculated based on MEG and EEG combined. Examination of these snapshots (every 20 ms) suggests that spindles are not synchronized among different regions of the cortex, but rather the peaks in different areas are offset in time, all within a given spindle discharge. Furthermore, successive peaks of the spindle produce maximal activation in different locations. In particular, activities in the left and right hemispheres are not in synchrony with each other. At different times in the example spindle shown in Figure 4, maximal activity is seen in the left parietal, left orbital, left occipitotemporal, right occipital, right temporal, and right parietal. A comparable level of variability was found in each spindle in all subjects.
Contrasting cortical dynamics during spindling estimated from EEG vs. MEG
Figure 5 contrasts the dSPM estimates derived from EEG and MEG in the same spindle, plotted on the subject's reconstructed cortex, after expansion to reveal sulcal as well as gyral cortex. Again, in the MEG-dSPM solution, maximal activity is estimated to different cortical lobes and hemispheres in different parts of the same spindle. In contrast, the anatomical pattern of activity estimated from EEG is relatively constant over time. Also note that the activity estimated from EEG versus MEG is maximal at different times during the spindle discharge.
The overall average source distribution is dissimilar for MEG and EEG
Although the spindle thus appears to have a variable distribution over the cortical surface across time, it is possible that the total set of cortical areas active at some point in generation of a spindle burst is consistent across spindle bursts. In order to evaluate this possibility, we calculated the maximum of each cortical dipole's activation value during a given spindle burst. Next, by inter-spindle averaging of these maps of maximum activity, the overall cortex involved in spindle generation for an individual subject was estimated. This approach was applied to all three inverse solutions (calculated from EEG, MEG and both combined), and in Figure 6 they are mapped onto each individual's cortical surface. These images suggest that regardless of the measurement modality, this inverse method tended to place maximal activation in the deep midline areas, i.e., in the cingulate, subgenual, and parahippocampal areas. Secondary areas, variable across subjects and measurement modalities, are also apparent. The similarity of these estimated source patterns was calculated as the correlation coefficient of the estimated average noise-normalized power across all cortical locations from EEG vs MEG. The average of this measure across the 7 subjects was 0.46±0.13. If the estimated source localizations from MEG vs EEG were always the same for each subject, then the correlations would have been 1; if they are random, then the average would be zero. The observed low correlation indicates that largely dissimilar activation maps are inferred from EEG vs MEG.
MEG and EEG sources are poorly correlated but moderately coherent
The inverse procedure estimates the time courses for equivalent current dipole sources (ECDs) at about 6500 cortical locations. These estimates were made separately from MEG gradiometer and EEG referential recordings, and the between-modality correlation and coherence of such solution time courses were calculated at each cortical location. The results of these correlations were averaged across locations to obtain a single number for every spindle in each subject. This measure of the similarity of the source time courses inferred from MEG versus EEG was very low, with a mean and std of 0.09±0.06. However, the similarity of these time courses, estimated from their coherence from 10-15 Hz at given locations between modalities, was much higher, with mean and std of 0.44±0.08 using the MVDR method and 0.54±0.16 using the Welch method. Since correlation but not coherence is sensitive to phase differences, this suggests that the modalities share a rhythmic pattern but are out of phase. Thus, at the source level, EEG- and MEG-derived solutions were poorly correlated but moderately coherent.
Within-modality correlations indicate greater synchrony of EEG sources
The issue of synchrony of the inferred source activity across the entire cortical surface within each measurement modality was evaluated by calculating the correlation coefficient of the solution time-courses estimated at all possible cortical location pairs (~21 million pairs for 6500 dipoles excluding self-pairs and repeats). The correlation coefficients were averaged to yield a single number for each spindle and the same process was repeated for all spindles in each subject. The average across subjects of this across-dipole correlation within the EEG modality was 0.64±0.05. In contrast, MEG-dSPM had a much lower within modality correlation of 0.13±0.01. These within modality correlation measures show that cortical sources estimated from EEG are more highly synchronous during spindles than those estimated from MEG.
Discussion
The current study was motivated by our finding that signals recorded during spindles simultaneously by EEG and MEG sensors have strikingly different characteristics [23]. EEG was highly coherent across the scalp, with consistent topography across spindles. In contrast, the simultaneously recorded MEG was not synchronous, but varied strongly in amplitude and phase across locations and spindles. These differences were observed between the activity of EEG and MEG sensors during spindles, raising the question as to whether they would also be observed in the cortical activity inferred from the sensor activity. In the current paper we examined this question by first estimating the activity of cortical dipoles during sleep spindles using a distributed cortically-constrained source model (dSPM), separately from EEG and MEG signals. We found that the location and timing of cortical activity inferred from EEG had a low correlation with that inferred from MEG. In agreement with sensor space measures, EEG-dSPM indicated a large-scale synchrony among different cortical sources. In contrast, MEG-dSPM applied to the same spindles estimated generation in shifting cortical locations, with simultaneously active generators that were largely independent of each other (and the EEG-dSPM) in frequency, phase and amplitude.
Comparison to previously applied source localization methods
Our study appears to be the first that has directly compared source estimates to simultaneously recorded whole-head EEG vs MEG during sleep spindles. However, several studies have previously estimated sources to EEG or MEG spindles individually. Most often, a small number of ECDs were used to model the signals. Several workers found that four sources, placed in the deep parieto-central and fronto-central regions bilaterally, are adequate to explain most of the variation in spindles, including the tendency for frontal spindles to be slower [15,17]. Urakami et al [15] specifically selected spindles based on their EEG frequency and topography. They fit a single ECD to ~10-15% of the MEG channels, projected this activity out of the signal, then fit another ECD to another set of selected channels, and so forth until 80% of the signal was accounted for. The resulting dipoles clustered in the white matter midway between the lateral and medial surfaces of the cortex, deep in the Rolandic areas. Sources for both slower and faster spindles were found in both precentral and postcentral cortices, with a slight preference for slower spindles to be located in precentral areas and faster in postcentral. Manshanden et al. [17] similarly found that the ECDs, which best modeled MEG signals during spindles, were clustered in the white matter underlying centro-parietal, parietal and posterior frontal cortices. Shih et al. [19] also modeled MEG spindles using small numbers of ECDs, located in all lobes across different spindles, but their subjects were sedated, and this may tend to cause spindles to be less synchronous [6]. Using Synthetic Aperture Magnetometry (SAM), Ishii et al. [16] also located the sources of MEG spindles mainly in the white matter underlying frontal and parietal cortices. Using another distributed solution (ICA followed by MR-FOCUSS), Gumenyuk et al [18] estimated maximal source activity to frontal, temporal and parietal lobes. Most commonly, maximal activity was in Rolandic cortex, with overlapping sources for faster and slower spindle components. In a study estimating sources from EEG using LORETA (low resolution brain electromagnetic tomography), Anderer et al. [42] localized activity to the medial parietal and frontal cortices, with more frontal areas associated with lower spindle frequencies.
The electromagnetic inverse problem is ill-posed; arriving at a solution requires a priori assumptions whose validity is generally unknown [43]. Despite their contrasting assumptions, the above studies generally yielded consistent results, estimating maximal activity during spindles to the white matter underlying parietal and frontal cortices. Since distributed sources generally result in equivalent dipoles that are deep to the generating surface [44], the previous results are consistent with distributed generators in parietal and frontal cortices. To a limited extent, direct intracranial measures have provided some validation of these conclusions. Several studies have recorded sleep spindles over lateral and medial prefrontal [45,46], medial temporal [46,47], and parietal cortices [48].
When averaged across all time points, spindles and subjects, our results resemble previous results, being maximal in medial parietal, central and frontal areas (Figure 6). However, since sources are constrained by dSPM to the individual subject's cortical surface, our results are unlike previous findings in that the activity is not localized to the white matter. Furthermore, our focus here is not on the location of the spindle sources but on a comparison of the spatiotemporal dynamics within individual spindles of the EEG vs MEG inverse estimates, a question that has not been examined previously.
Contrasting characteristics of MEG and EEG in source space
We used several methods to examine the synchrony of estimated source activity during the spindle between different cortical locations. The stacked amplitude plots from EEG-dSPM (Figure 3A) showed synchronous activity in the ~6500 ECDs tiling the cortical surface, while those estimated from MEG-dSPM (Figure 3B) indicate peaks at different times in different locations. Not only the amplitudes but also the frequency and phase vary across locations for MEG-derived ECDs, but not EEG-derived ECDs. Similar differences are observed when the estimated ECD amplitudes are plotted on the cortical surface reconstructed from the MRI of each individual, as sequential topographical snapshots (Figure 5). Although the power of cortical activation estimated from EEG varies across the course of a spindle, its pattern remains relatively constant. In contrast, the maximal activity estimated from MEG jumps rapidly between cortical areas and hemispheres, all within the same spindle discharge. These contrasting characteristics were observed in all 85 spindles sampled from the 7 subjects in the study. We quantified the degree of synchrony as the average correlation coefficient between the estimated activity time courses in different cortical locations. The average correlation across spindles and subjects of EEG-derived ECDs was 0.64, and for MEG-derived ECDs was 0.13. Thus, the stacked amplitude plots, sequential topographical snapshots, and average correlation coefficients all demonstrate that the cortical sources estimated from EEG are highly synchronous during spindles whereas those estimated from MEG are much less synchronous.
Figure 2. Comparison of source localization using different noise estimates. A. Spindle MEG-dSPM normalized with noise covariance calculated from averages of sleep epochs lacking grapho-elements. Normalized dipole strength for each of 6500 cortical dipoles is plotted as a horizontal line; red is high activity. B. MEG-dSPM from the same spindle, but normalized with noise estimates from empty room recordings. C, D. The activity shown in panels A and B was averaged over the course of the spindle and plotted on the reconstructed cortical surface of this subject. Very similar cortical activity patterns were inferred using the baseline (C) as compared to the empty room (D) noise covariance calculations. doi:10.1371/journal.pone.0011454.g002
The fact that sources estimated from EEG vs MEG have very different characteristics implies that they are poorly correlated with each other. Indeed, the peaks of activation as observed with EEG vs MEG occurred at different times as indicated by stacked amplitude plots ( Figure 3A vs 3B ), or cortical topography snapshots ( Figure 5). When the time courses of all cortical dipoles are added together, the peaks of activation for EEG and MEG are seen to not only misalign, but to shift rapidly in phase and relative amplitude ( Figure 3C). We quantified the degree of synchrony in EEG vs MEG solutions as the correlation coefficient between the activities estimated at the same cortical location with the different modalities. This correlation, averaged across cortical locations, spindles and subjects was very low (0.09), despite the fact that these recordings were made simultaneously and source estimates were obtained using identical inverse methods.
Why are MEG and EEG sources different?
We conclude that in source space, using the dSPM method, EEG and MEG have highly contrasting characteristics during sleep spindles. There are two possible explanations for these findings. One is that our inverse estimates are correct, and EEG vs MEG sleep spindles are generated by different cortical sources with different characteristics. The second is that our inverse estimates are incorrect, mis-estimating either the EEG or MEG sources, or both, resulting in the incorrect conclusion that their sources are asynchronous and distinct.
The ultimate sources of both EEG and MEG signals are active transmembrane currents, balanced by passive transmembrane return currents. Intracellular currents linking active and passive transmembrane currents generate MEG, and extracellular currents generate EEG. The fact that EEG and MEG are thus generated by different limbs of the same circuit would lead one to assume that their sources should generally be estimated to the same locations. However, recent studies have suggested that differences could arise because, in fact, for distributed sources such as the sleep spindle, most of the electrical or magnetic signal that is generated in the cortex never arrives at the sensor, and that which does arrive at the sensor is different for EEG vs MEG. Simulations with actual cortical architectures show that co-activation of just 1% of the cortical dipoles results in cancellation of over 90% of their signal due to cortical folding [49]. For example, co-activation of dipoles lying on opposite sides of a sulcus may result in near-total cancellation [50]. In addition, MEG is relatively insensitive to radial dipoles, whereas EEG is sensitive to dipoles that are either radial or tangential with respect to the skull [51]. It is essential to recognize that inverse estimators attempt to localize only the origins of signals that reach the sensors, not all of the dipolar activity in the cortex. Thus, it is entirely possible that our inverse estimates are correct and the EEG and MEG during spindles do arise from different sources.
A second argument suggesting that our inverse estimates may be in error in ascribing different cortical generators to the EEG versus MEG is that we could successfully estimate cortical source distributions that seemed to account for both the MEG and EEG data (Figures 4 and 6). However, our simultaneous inverse solution attempted to fit the spatial patterns of MEG and EEG but not their relative amplitudes, because arriving at the correct scaling factor requires data from a known single tangential dipole such as the initial response to median nerve stimulation, which was not available in this study [33]. A consideration of the absolute amplitudes of MEG and EEG spindles suggests that this may be critical for an accurate simultaneous solution. On the one hand, the ratio of EEG amplitude recorded at the cortical surface to that recorded at the scalp during spindles is about 2:1 [46,48,52], consistent with an extremely widespread generator [53]. Conversely, a focal source generating a MEG spindle of the observed size would produce an EEG spindle about 50x smaller than that actually observed [33,54]. Thus, it is entirely possible that a simultaneous dSPM solution which estimates cortical sources reproducing the relative amplitudes of the MEG and EEG signals as well as their spatial distribution may estimate both a distributed diffuse synchronous generator which contributes significantly to EEG but not MEG, and multiple asynchronous relatively focal generators which contribute significantly to MEG but not EEG. In addition to fitting the absolute amplitudes of the MEG and EEG signals (and not only their topographical patterns), future modeling studies should include the CSF layer under the skull and the anisotropy of cerebral white matter, which affect the size and distribution of the EEG signal due to cortical dipoles [55,56].
Figure 5. Dynamic spatiotemporal patterns of spindling. Contrasting dSPM solutions from MEG and EEG to simultaneous data, as mapped on the cortical surface throughout the duration of a spindle. Time proceeds from top to bottom in each column, with successive snapshots separated by 40 ms. The left 4 columns show activity from 0 to 360 ms, and the right 4 columns from 400 to 760 ms, of the same spindle discharge. Note that activation peaks are not synchronous in MEG and EEG, nor are they in the same locations. MEG is highly variable across time, with successive peaks of activity (see blue arrows) in left temporal at 0 ms (a), left parietal at 160 (b), right occipital at 240 (c), left occipital at 320 (d), left frontal at 520 (e), right insula at 600 (f), and left occipitotemporal at 720 (g). In contrast, EEG-derived source localizations appear more bilaterally symmetrical and consistent over time. For example, at 200 ms, relatively high activation is estimated to the left and right insula (h, m), superior temporal sulcus (j, n), and parietal lobe (k, p). Very similar activation is seen at 360 and 640 ms (see green arrows). Estimated ECD strength is plotted on the subject's cortex after expansion to reveal sulcal (dark gray) as well as gyral (light gray) cortex. doi:10.1371/journal.pone.0011454.g005
Previous reports have shown that some spindles are recorded by MEG but not EEG, and vice versa [17,20,21,22], and that intracranially recorded spindles often have no clear or consistent relationship to the spindles recorded simultaneously at the scalp [46,47,48,52]. These observations would also be consistent with the view that MEG and EEG are recording from different brain systems during spindles.
Indeed, studies in animals have demonstrated that, although cortical spindles can be widely synchronous, they can also be restricted to small thalamo-cortical modules, oscillating in multiple areas, with largely independent durations, onsets, frequencies and phase [4]. These distributed and focal spindles were interpreted in terms of the 'recruiting' and 'augmenting' responses that characterize thalamo-cortical projections from the 'non-specific intralaminar' and 'specific projection' nuclei respectively [5]. This distinction has evolved into a distinction between the 'matrix' and 'core' thalamo-cortical systems [57]. Thalamo-cortical cells in the matrix system project widely, even to multiple cortical areas, terminating with small boutons in layer I; in contrast, thalamo-cortical cells in the core system may project to a single column, terminating with large boutons in layer IV [58]. Matrix cells are found in all thalamic nuclei, but predominate in intralaminar and other nonspecific nuclei; core cells are concentrated in the specific sensory relay nuclei.
Thus, classical studies of spindle discharges in cats demonstrated both distributed and focal spindles which apparently reflect activation of the matrix and core thalamo-cortical systems, respectively. The differences between EEG versus MEG spindles may arise from their biophysically-determined differential sensitivity to these different thalamocortical systems, with EEG more sensitive to diffuse activation via the matrix system, and MEG to focal activation via the core system. The current study demonstrates that these contrasting characteristics are clearly seen in the cortical sources of MEG and EEG spindles, as estimated with dSPM.
Figure 6. Maps of average estimated cortical activation during spindles. Activity was estimated with dSPM using EEG, MEG or combined EEG and MEG (MEG+EEG) data. For each spindle and modality, a map was made of the maximum activity during that spindle at each of ~6500 cortical locations. These maps were then averaged across all spindles from that subject, normalized to the maximum value, and displayed below on the expanded cortical surface. The subject-specific cortical maps were then averaged together and plotted at the bottom of the figure. All subjects and modalities show maximal activity in medial cortex, varying across subjects between more anterior (green arrows, bottom row) and posterior (blue arrows) regions. Lateral activity is weaker and includes the insula (white arrows) and all lobes. The site of maximum lateral activity varies across subjects between frontal, parietal and temporal lobes. doi:10.1371/journal.pone.0011454.g006 | 2014-10-01T00:00:00.000Z | 2010-07-07T00:00:00.000 | {
"year": 2010,
"sha1": "315fc99ac159cc1d5e7acad40e96bf3e5f76dd25",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0011454&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c242e58390c502a7ed9563503de24a30fa25d46",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
119209572 | pes2o/s2orc | v3-fos-license | Quasinormal modes and Hawking radiation of a Reissner-Nordstr\"om black hole surrounded by quintessence
We investigate quasinormal modes (QNMs) and Hawking radiation of a Reissner-Nordstr\"om black hole surrounded by quintessence. The Wentzel-Kramers-Brillouin (WKB) method is used to evaluate the QNMs and the rate of radiation. The results show that due to the interaction of the quintessence with the background metric, the QNMs of the black hole damp more slowly when increasing the density of quintessence and the black hole radiates at a slower rate.
(2003); Zhang & Gui (2006); Zhang et al. (2007); Mahamat et al. (2009)). On the other hand, Hawking radiation from black holes is one of the most striking effects that is known, or at least widely agreed, to arise from the combination of quantum mechanics and general relativity. As one of the most important achievements of quantum field theory in curved spacetime, the discovery of Hawking radiation supported these ideas by showing that a classical black hole could radiate a thermal spectrum of particles. Hawking and Ellis (Hawking & Ellis (1973)) evoke the possible origin of radiation to be black body radiation left over from a hot early stage of the universe, the result of the superposition of a very large number of very distant unresolved discrete sources, or intergalactic grains which thermalize other forms of radiation. The Hawking radiation for the Reissner-Nordström black hole is widely studied in the literature (Jiang & Wu (2006); Goncharov & Firsova (2010); Zhai & Liu (2010); Zhao et al. (2010)).
There has been growing observational evidence showing that our universe is undergoing accelerated expansion driven by a yet unknown dark energy. Recent results from the cosmic microwave background (CMB) combined with type Ia supernovae, large-scale structure (cosmic shear) and galaxy cluster abundances show that our universe is dominated by a mysterious dark energy with negative pressure (~70%) and contains cold dark matter with negligible pressure (~25%), while ordinary baryonic matter makes up 5% (Mukanov (2005)). Dark energy can be studied by its influence on the expansion of the universe as well as on the growth of large-scale structure. Cosmological models with a dark energy fluid with an equation of state parameter ω_q close to −1 are favored by combining recent CMB, supernova and baryon acoustic oscillation data, suggesting that the Hubble expansion accelerates in the current cosmic epoch. There are several types of models of dark energy such as the cosmological constant (Cardenas et al. (2003)), quintessence (Varun & Wang (2000); Kiselev (2003); Shuang-Yang (2008)), phantom (Kunz & Domenico (2006)), k-essence (Yang & Gao (2009)), and quintom (Guo et al. (2005); Xia et al. (2006)) models. For quintessence, the equation of state parameter is in the range −1 ≤ ω_q ≤ −1/3. Recently, Kiselev (2003) considered Einstein's field equations surrounded by quintessential matter and obtained a new solution dependent on the state parameter ω_q of the quintessence. Hod & Piran (1998) investigated the late-time evolution of charged gravitational collapse and the decay of charged scalar hair. Konoplya (2002) investigated the decay of a charged scalar field in the Reissner-Nordström black hole background. In this paper, a Reissner-Nordström black hole surrounded by quintessence is considered to investigate QNMs and Hawking radiation including the influence of the quintessence on them.
The paper is organized as follows. In section 2, we derive the wave equation of a scalar perturbation in the Reissner-Nordström background surrounded by quintessence. In section 3, we evaluate the QN frequencies of the scalar perturbation by using the third order WKB approximation method. In section 4, Hawking radiation is investigated. The last section is devoted to a summary and conclusion.
Scalar field perturbation
For the Reissner-Nordström black hole, the metric is given by

ds^2 = -f_0(r) dt^2 + f_0(r)^{-1} dr^2 + r^2 (dθ^2 + sin^2 θ dϕ^2), with f_0(r) = 1 - 2M/r + Q^2/r^2,

where M is the black hole mass and Q the charge of the black hole. Due to the interaction of the quintessence with the spacetime, the background metric transforms to (Mahamat et al. (2009))

ds^2 = -f(r) dt^2 + f(r)^{-1} dr^2 + r^2 (dθ^2 + sin^2 θ dϕ^2),

with

f(r) = 1 - 2M/r + Q^2/r^2 - c/r^{3ω_q+1},

where ω_q is the quintessential state parameter and c the normalization factor related to the density of quintessence, ρ_q = -(c/2) · 3ω_q / r^{3(ω_q+1)}. Using the tortoise coordinate r_* defined by dr_* = [1 - 2M/r + Q^2/r^2 - c/r^{3ω_q+1}]^{-1} dr, the metric can be rewritten as

ds^2 = f(r)(-dt^2 + dr_*^2) + r^2 (dθ^2 + sin^2 θ dϕ^2).

We consider the evolution of a massless scalar perturbation. The wave equation for the complex scalar field is given by (Hawking & Ellis (1973))

(1/√(-g)) ∂_μ(√(-g) g^{μν} ∂_ν φ) = k^2 φ,

where k is a constant; m = k represents the mass of the scalar field. We expand the scalar field in spherical harmonics, φ = Σ_{l,m} ψ_{lm}(r) e^{-iωt} Y_l^m(θ, ϕ)/r, and after some algebra the equation of motion takes the form

d^2 ψ / dr_*^2 + (ω^2 - V) ψ = 0, with V = f(r) [ l(l+1)/r^2 + f'(r)/r + k^2 ],     (5)

where the black hole potential V is represented in figure 1. Since we considered a massless scalar field, k = 0. Through this figure, we can see that the non-quintessential potential is higher than the quintessential ones, and the height of the potential decreases with increasing ω_q.
Quasinormal modes
The wave equation (5) can be rewritten as

d^2 ψ / dr_*^2 + Q(r) ψ = 0, where Q(r) = ω^2 - V.

For a black hole, the QN frequencies correspond to solutions of the perturbation equation which satisfy the boundary conditions appropriate for purely ingoing waves at the horizon and purely outgoing waves at infinity. Ingoing and outgoing waves correspond to radial solutions proportional to e^{-iωr_*} and e^{iωr_*}, respectively. Only a discrete set of complex frequencies satisfies these conditions. To evaluate the QN frequencies, we applied the third-order WKB approximation method derived by Schutz and Will (Schutz & Will (1985)) and Iyer to the above equation; the QN frequencies are given by (Zhang et al. (2007))

ω^2 = [V_0 + (-2V_0'')^{1/2} Λ] - i (n + 1/2) (-2V_0'')^{1/2} (1 + Ω),     (8)

where V_0 is the height of the potential at its maximum, Λ and Ω are the higher-order WKB correction terms, and V_0^{(n)} = d^n V / dr_*^n |_{r_* = r_*(r_p)} denotes the n-th derivative of the potential with respect to the tortoise coordinate evaluated at the peak r_p. Using equation (8), we calculated numerically the QN frequencies of the scalar field perturbation for M = 1, Q = 0.1 without quintessence and with quintessence. The results are shown in the following tables, where l is the harmonic angular index, n is the overtone number, ω is the complex QN frequency and ω_q is the state parameter of the quintessence.
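For orientation only, the sketch below evaluates the lowest-order WKB relation ω^2 ≈ V_0 − i(n+1/2)√(−2 d^2V/dr_*^2) (Schutz & Will), not the full third-order formula used in the paper, and it assumes the massless-scalar potential V = f(r)[l(l+1)/r^2 + f'(r)/r] reconstructed above; the quintessence parameters are illustrative choices:

```python
import numpy as np

def f(r, M=1.0, Q=0.1, c=0.0, wq=-2.0/3.0):
    """Metric function; c = 0 switches the quintessence term off (illustrative values)."""
    return 1.0 - 2.0*M/r + Q**2/r**2 - c/r**(3.0*wq + 1.0)

def dfdr(r, **kw):
    eps = 1e-6
    return (f(r + eps, **kw) - f(r - eps, **kw)) / (2.0*eps)

def V(r, l, **kw):
    """Massless-scalar potential V = f(r)[l(l+1)/r^2 + f'(r)/r] (reconstructed form)."""
    return f(r, **kw) * (l*(l + 1)/r**2 + dfdr(r, **kw)/r)

def qnm_wkb1(l=2, n=0, **kw):
    """Lowest-order WKB estimate: omega^2 = V0 - i(n+1/2) sqrt(-2 d^2V/dr*^2)."""
    r = np.linspace(2.2, 12.0, 40000)         # grid outside the outer horizon
    Vr = V(r, l, **kw)
    i0 = int(np.argmax(Vr))
    r0, V0 = r[i0], Vr[i0]
    h = 1e-3
    Vp = (V(r0 + h, l, **kw) - V(r0 - h, l, **kw)) / (2.0*h)
    Vpp = (V(r0 + h, l, **kw) - 2.0*V0 + V(r0 - h, l, **kw)) / h**2
    # chain rule: d^2V/dr*^2 = f (f' V' + f V''), evaluated at the peak
    d2V = f(r0, **kw) * (dfdr(r0, **kw)*Vp + f(r0, **kw)*Vpp)
    return np.sqrt(V0 - 1j*(n + 0.5)*np.sqrt(-2.0*d2V + 0j))

# e.g. compare qnm_wkb1() with qnm_wkb1(c=0.001) to see the slower damping with quintessence
```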
We then plot the behavior of the scalar perturbation for some frequencies. The results are shown in figure 2.
The QNMs of the Reissner-Nordström black hole surrounded by quintessence were investigated for the Dirac field by Wang et al. (2010) and for the charged massive scalar field by Nijo & Kuriakose (2009). Comparing these results, we pointed out that the massless scalar field oscillates more rapidly than the Dirac field, which oscillates more rapidly than the massive scalar field. In terms of damping, however, the Dirac field damps more rapidly than the scalar fields. On the other hand, the massless scalar field damps more rapidly than the massive one. When increasing the state parameter of quintessence ω_q, the real part and the absolute value of the imaginary part of ω increase for the scalar fields, but their variation is negligible for the Dirac field. Moreover, the rate of variation for the massless scalar field is higher than that of the massive one (see Figures 3 and 4).
Hawking radiation
Let us rescale the time coordinate into the Eddington-Finkelstein coordinate (Zhai & Liu (2010)), t = T ± r_*, where the signs + and − represent ingoing and outgoing particles, respectively. The tortoise coordinate r_* is defined as dr_* = dr/f(r). In the following, the study is restricted to the outgoing particle radiated from the black hole horizon. The background metric then transforms to

ds^2 = -f(r) dT^2 + 2 dT dr + r^2 (dθ^2 + sin^2 θ dϕ^2).     (10)

The metric obtained is a Vaidya-Bonner-like metric (Niu & Liu (2010)) and can represent a Vaidya-Bonner black hole surrounded by quintessence.
The apparent horizon of this metric is given by the following equation

f(r) = 1 - 2M/r + Q^2/r^2 - c/r^{3ω_q+1} = 0.     (11)

In the absence of quintessence (c = 0), this equation gives two solutions r_± = M ± √(M^2 − Q^2).
Actually, the normalization factor related to the density of quintessence, c, is smaller than 0.001 (Zhang & Gui (2006)). Therefore, the contribution to the metric background due to the presence of quintessence can be treated as a perturbation.
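To make the size of this perturbation concrete, the horizon condition (11) can be solved numerically and compared with the quintessence-free radius r_+ (a small illustrative sketch; the values of c and ω_q are example choices, not results from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def f(r, M, Q, c, wq):
    # metric function whose root gives the apparent horizon, Eq. (11)
    return 1.0 - 2.0*M/r + Q**2/r**2 - c/r**(3.0*wq + 1.0)

M, Q, c, wq = 1.0, 0.1, 1e-3, -2.0/3.0        # example parameters
r_plus0 = M + np.sqrt(M**2 - Q**2)            # outer horizon without quintessence
R_plus = brentq(f, 0.9*r_plus0, 5.0, args=(M, Q, c, wq))  # horizon with quintessence
print(r_plus0, R_plus, R_plus - r_plus0)      # shift is of order c, i.e. a small perturbation
```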
We regard the quintessence as a perturbation leading to a small modification of the horizon radii, and we therefore write the quintessential horizon radii R_± as the quintessence-free radii r_± plus a small correction. Substituting this expression into Eq. (11) and keeping terms to first order in c gives the corrected radii in first approximation. The radial null geodesic is given by

ṙ = f(r)/2 = (1/2) [1 - 2M/r + Q^2/r^2 - c/r^{3ω_q+1}].

When a particle of energy ω is radiated from the black hole, this transforms to

ṙ = (1/2) [1 - 2(M - ω)/r + Q^2/r^2 - c/r^{3ω_q+1}].

The imaginary part of the action is

Im S = Im ∫ P_r dr = Im ∫∫ dP_r dr = Im ∫∫ (dH/ṙ) dr,

where we have used the Hamilton equation dH/dP_r = ṙ, H = M − ω′ ⇒ dH = −dω′. The imaginary part of the action then takes the form

Im S = −Im ∫∫ (dω′/ṙ) dr.

We used the tunnelling method of Parikh & Wilczek (2000) to evaluate the integral over r. Using the WKB approximation, the rate of radiation is expressed as

Γ ∼ e^{-2 Im S} = e^{-βω},

where β is the Boltzmann factor (the inverse temperature). Explicitly, we plot the variation of the Boltzmann factor with respect to the state parameter of quintessence. Its behavior is represented in Figure 5.
For a black hole with a charge, such as the Reissner-Nordström black hole, the emitted particles can themselves be charged. Thus, not only energy conservation but also electric charge conservation should be considered. The radial null geodesic then acquires additional terms involving the charge q of the emitted particle, and the electromagnetic potential A_t is modified accordingly. The imaginary part of the action for the massive charged particle is given by an integral between r_ie and r_fe (Jiang & Wu (2006)), which represent the locations of the event horizon before and after the particle with energy ω and charge q tunnels out; ṙ and Ȧ_t are given by Hamilton's canonical equations of motion. Substituting equations (22) and (25) into (24) and using the method of Parikh & Wilczek (2000), the integral can be evaluated. Using the WKB approximation, we can then write the tunnelling rate of radiation as

Γ ∼ e^{ΔS_EH},

where ΔS_EH denotes the change of Bekenstein-Hawking entropy at the event horizon before and after the particle tunnelled out. The change of Bekenstein-Hawking entropy at the event horizon can be written as

ΔS_EH = ΔS_0 + ΔS_q,

where ΔS_0 is the free variation of the entropy and ΔS_q is the contribution to the entropy variation due to the quintessence. Supposing that the mass and charge of the black hole are uniformly distributed, and considering that the black hole radiates particles with energy and charge proportional to the total mass and charge, respectively, with the same coefficient of proportionality a (ω = aM, q = aQ, a ≪ 1), the variation of entropy can be written as a function of a, c and ω_q. Its behavior is plotted in Figure 6. Through this figure, we can see that the variation of entropy decreases when decreasing ω_q. We can also see that it decreases when increasing c.
Summary and Conclusion
In summary, QNMs of a scalar field perturbation around a Reissner-Nordström black hole were evaluated using the third-order WKB approximation. The results of table 1 are obtained without the presence of quintessence while those of table 2 are obtained under the presence of quintessence for some values of the state parameter of the quintessence. The Boltzmann factor with inverse temperature was also derived and its behavior is plotted when varying the state parameter of quintessence. The behavior of the variation of entropy is also plotted when varying c and ω_q, respectively. Through the above tables, we can remark that the absolute values of the imaginary parts of the quasinormal frequencies under quintessence are smaller compared to those without quintessence, for a fixed set of l and n. Moreover, we can remark through table 2 that these values decrease when decreasing ω_q. From the variation of the Boltzmann factor plotted below, we can see that it is increasing when decreasing ω_q. From the behavior of the variation of entropy with respect to c and ω_q, respectively, we can remark that this variation of entropy is decreasing when increasing c or when decreasing ω_q, denoting that the rate of radiation is decreasing. Decreasing ω_q for fixed c, or increasing c for fixed ω_q, means increasing the density of quintessence. Thus, we can conclude that when increasing the density of quintessence surrounding the Reissner-Nordström black hole, the QNMs damp more slowly and the black hole radiates at a slower rate. | 2016-04-06T20:23:31.000Z | 2011-02-23T00:00:00.000 | {
"year": 2016,
"sha1": "459bc627665ad3c681d7b470d520200d8cf02a1f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1604.02140",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "459bc627665ad3c681d7b470d520200d8cf02a1f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247871186 | pes2o/s2orc | v3-fos-license | How Heterogeneous Pore Scale Distributions of Wettability Affect Infiltration into Porous Media
Wettability is an important parameter that significantly determines hydrology in porous media, and it especially controls the flow of water across the rhizosphere—the soil-plant interface. However, the influence of spatially heterogeneous distributions on the soil particle surfaces is scarcely known. Therefore, this study investigates the influence of spatially heterogeneous wettability distributions on infiltration into porous media. For this purpose, we utilize a two-phase flow model based on Lattice-Boltzmann to numerically simulate the infiltration in porous media with a simplified geometry and for various selected heterogeneous wettability coatings. Additionally, we simulated the rewetting of the dry rhizosphere of a sandy soil where dry hydrophobic mucilage depositions on the particle surface are represented via a locally increased contact angle. In particular, we can show that hydraulic dynamics and water repellency are determined by the specific location of wettability patterns within the pore space. When present at certain locations, tiny hydrophobic depositions can cause water repellency in an otherwise well-wettable soil. In this case, averaged, effective contact angle parameterizations such as the Cassie equation are unsuitable. At critical conditions, when the rhizosphere limits root water uptake, consideration of the specific microscale locations of exudate depositions may improve models of root water uptake.
Introduction
Despite several decades of intensive research in the field of soil physics, the reliable prediction of water flows in soil remains a major challenge. This is mainly due to the scale- and time-dependent heterogeneity characteristic of soils, which can be of a structural, physical, chemical, and biological nature [1,2]. Especially soil organic matter (SOM) plays an important role in this context. Due to its numerous input possibilities as well as highly varying degrees of degradation and mineralization, SOM can be heterogeneously distributed across scales. Its importance for water transport in soil is partly due to its strong influence on the wettability of soil surfaces [3]. The contact angle (CA), which is present at the three-phase boundary between liquid, gas and soil surface, serves as a characteristic quantity for the wettability [4]. A CA of zero is referred to as wettable (hydrophilic), the range 0° < CA < 90° as reduced wettability and CA ≥ 90° as nonwettable (hydrophobic). In soil, hydrophilicity is mainly caused by mineral compounds [5], while hydrophobicity is mainly caused by non-polar organic compounds in the form of interstitial particulate material and coatings of mineral surfaces [6,7]. It should be noted that most soils are neither completely hydrophilic nor hydrophobic, but exhibit reduced wettability [8][9][10]. Within soil science, wettability or soil water repellency (SWR) is an important factor influencing soil erosion, surface and subsurface hydrology, and plant growth [3]. The significance of SWR for soil hydrology is based on the fact that it promotes preferential flow, surface runoff and the generation of heterogeneous moisture patterns, and thus above all the hydraulic heterogeneity of soil [11]. Despite the impact of SWR on soil hydraulics, still little is known about its spatial variability on different scales [11]. To quantify the effects of SWR on soil hydraulics, information on the specific microscopic distributions may be required. The spatial distributions depend on the specific processes which create the SWR, such as metabolic products of microorganisms, root and fungal exudates and their degradation products [11,12].
The rhizosphere is a narrow layer around the roots which is a hotspot for microbial activity, dominated by interactions between plant roots and soil that cause heterogeneous spatial and temporal distributions of soil organic matter leading to varying SWR [13]. The pore-scale influence of SWR distributions on water transport in the rhizosphere has not yet been fully investigated. Due to root growth and the release of plant root exudates, the rhizosphere differs significantly in physical, chemical and biological properties from the surrounding bulk soil [14][15][16][17]. One of the most pertinent root exudates is mucilage, a hydrogel which can significantly alter the hydraulic properties of the rhizosphere [18][19][20]. It is characterized by a large water holding capacity of up to 1000 times its own weight depending on the type [21,22]. Depending on its hydration status, it shows a hydrophilic or hydrophobic behavior: mucilage is hydrophilic in the water-saturated, swollen state, whereas it becomes hydrophobic after drying and forms hydrophobic structures on the surface of the soil particles [15,23,24]. The morphology of these structures can vary, depending on their type and concentration, from isolated local spots to networks extending over larger areas of the soil surface [12] and into the pore space [15,25,26].
For the investigation of the hydraulic properties of porous systems, such as the rhizosphere, computational fluid dynamics (CFD) is becoming an economical and effective complement or even alternative to the use of often very expensive imaging techniques such as microscale computed tomography and magnetic resonance imaging. For instance, the influence of various environmental factors on water retention curves [27,28], unsaturated hydraulic conductivity [29] or microbial diversity and activity in soils [30,31] can be analyzed by selectively varying appropriate parameters. Within the framework of the application of CFD on the pore scale, frequently used modeling methods for the investigation of fluid flows in porous media are, e.g., the finite element method [32,33] and the Lattice Boltzmann Method (LBM) [34][35][36][37][38].
To model the dynamics of spontaneous infiltration into complex porous structures with heterogeneous CA distributions, LBM is used in this work for its versatility in modelling problems with multi-phase interfaces and complex pore geometries. At the pore scale, LBM is therefore used to examine water infiltration and evaporation [39][40][41] or to obtain soil water retention characteristics [27]. The four most common LBMs are the color-liquid model [42], the Shan-Chen model (SC model) [43,44], the free-energy model [45,46] and the phase field model [47]. Of these, the SC model is utilized in this study to simulate multiphase flows in porous media due to its higher computational efficiency and simplicity compared to the other types of models [48,49]. In addition, the model is suitable for the investigation of the infiltration process in the case of heterogeneous CA distributions since the surface tension can be determined and different CA can be implemented [47].
Such infiltration processes can already be successfully simulated for simple geometric structures [50] as well as for complex porous systems [49,51,52].
Despite the important role of the spatial wettability distribution for the hydraulic properties of porous media, which is intensively investigated e.g., within petroleum science [51,53,54], most simulations in soil science assume homogeneous CA for simplicity [55,56]. Only a small number of studies carry out investigations with heterogeneous wettability. They show experimentally that even hydrophobization of a few particles strongly reduces capillarity [32,57]. Of these studies, only a few deal with the effects within the rhizosphere [39,58], which is why quantitative experimental data and simulations of the influence of heterogeneous wettability on water infiltration in the rhizosphere are correspondingly rare. The aim of this study is to close this gap using a geometrically simplified model of a sandy soil and to demonstrate its practical relevance via pore scale simulations of a real sandy soil under mucilage influence.
The remainder of the paper is structured as follows. In Section 2, first the classical theory for effective contact angles for porous media with a heterogeneous distribution of wettability is recalled. This is followed by the details of the LBM model used in this study to investigate infiltration at the pore scale and by the model validation against an analytical solution. Finally, the two- and three-dimensional simulation cases used in this study to systematically evaluate the effect of heterogeneous wettability distributions are described. The results and discussion from the simulations are presented in Section 3. Section 3.1 deals with the dependence of infiltration on heterogeneous wettability distributions, which has been systematically investigated in over 400 simulations using a highly simplified sandy soil system. These results are compared and evaluated with those from corresponding simulations with effective contact angles. Section 3.2 describes the influence of the different wettability distributions on the water repellency of the simplified sandy soil investigated. Finally, in Section 3.3 the influence of mucilage-induced heterogeneous wettability distributions within the rhizosphere is demonstrated using a sandy soil. The conclusions from this study are finally presented in Section 4.
Method
This section details the classical theory for the effective contact angle for heterogeneous wettability distributions in porous media and the methodology for the numerical simulations. In Section 2.1 the theory of the effective contact angle is first presented, as it is often used to obtain an effective homogeneous CA from heterogeneous wettability distributions. Section 2.2 describes the LBM model used to simulate the infiltration process at the pore scale considering a heterogeneous distribution of contact angles. The model is validated against an analytical solution and the results are presented in Section 2.3. Subsequently, Section 2.4 describes the simulation setups used to investigate the influence of spatially heterogeneous wettability on water infiltration at the pore scale. As a first step, a highly simplified, two-dimensional pore system of a sandy soil is used. The distribution of heterogeneous wettability is varied systematically while the porous medium is kept the same; in total, 400 simulation variations are considered. For selected variations of heterogeneous wettability, three-dimensional simulations are carried out, again on a simplified porous-medium geometry, to corroborate the results simulated in two dimensions. Finally, Section 2.5 describes the illustrative two-dimensional simulation setup used for simulations on a dry, mucilage-amended sandy soil with the pore structure obtained using a scanning electron microscope (SEM) to demonstrate the effect of heterogeneous wetting distributions in a rhizosphere system.
Theory of Effective Contact Angles
Often the real heterogeneous wettability distribution in porous media is represented via an effective contact angle (CA) obtained via analytical theories where the real spatial distribution is usually neglected, and averaged values are considered. One way to obtain such effective contact angles ϕ_eff is to use the Cassie equation [59] which was developed for flat surfaces:

cos ϕ_eff = A_1 cos(ϕ_1) + A_2 cos(ϕ_2), (1)

where A_1 and A_2 are the fractions of the surface with a CA of ϕ_1 or ϕ_2, respectively. However, it should be noted that these theories do not take into account the spatial arrangement of the different wettabilities, but only macroscopically the area fractions of the existing wettabilities. So, to evaluate whether such effective CA can be used for simulations of infiltration processes in porous systems with heterogeneous CA distribution, ϕ_eff was calculated accordingly and simulations with the corresponding homogeneous CA ϕ_eff were carried out.
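A minimal numerical sketch of Equation (1) (the area fractions and contact angles below are illustrative values, not parameters used later in this study):

```python
import numpy as np

def cassie_effective_ca(fractions, angles_deg):
    """Effective contact angle from the Cassie equation:
    cos(phi_eff) = sum_i A_i cos(phi_i), with area fractions A_i summing to 1."""
    A = np.asarray(fractions, dtype=float)
    phi = np.deg2rad(angles_deg)
    return np.rad2deg(np.arccos(np.sum(A * np.cos(phi))))

# e.g. 30% hydrophobic patches (110 deg) on an otherwise wettable surface (20 deg)
print(cassie_effective_ca([0.7, 0.3], [20.0, 110.0]))   # ~56 deg effective contact angle
```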
Multiphase Lattice Boltzmann Model
In this study, a standard pseudopotential-based LBM model for single-component, two-phase flow is used to simulate the infiltration into two-dimensional porous media with heterogeneous CA distributions. The model has been implemented using Yantra, an open-source lattice Boltzmann framework [60,61]. In the LBM, the fluid is described through the evolution of the particle distribution function, where f_i represents the particle distribution function along direction i in velocity space, ∆t is the time step, τ_NS is the relaxation time (set to 1 for all simulations), and the set of discrete velocities e_i depends on the type of lattice chosen; in this study it is the D2Q9 and D3Q19 lattice [47]. f_i^eq is the corresponding equilibrium distribution function. For the D2Q9 lattice, i ∈ [0, 8] and the weighting factors are w_i = 1/9 for i = 1-4, w_i = 1/36 for i = 5-8 and w_0 = 4/9, with e_s = e/√3. For the D3Q19 lattice, i ∈ [0, 18] and w_i = 1/18 for i = 1-4, 9, 14, w_i = 1/36 for i = 5-8, 10-13, 15-18 and w_0 = 1/3, again with e_s = e/√3. The macroscopic quantities such as density (ρ) and velocity (u) are obtained from moments of the particle distribution functions, and the macroscopic kinematic viscosity (ν) is related to the relaxation time. The body force F_b is implemented by adding τF_b/ρ to the velocity computed from Equation (4) before computing the equilibrium distribution function. For the present model the body forces consist of F_int, the interparticle force accounting for the non-ideal pressure during two-component flow which leads to phase change, and F_g, which accounts for gravity. F_int follows the Shan-Chen model [47] with interaction strength G = −120 and interaction potential ψ. In this formulation, ψ_0 and ρ_0 are arbitrary constants which control the shape of the equation of state. For this study, ψ_0 = 4 and ρ_0 = 200 mu lu⁻³ (in lattice units) are chosen such that phase separation can take place with G = −120 [62,63], where mu is the mass unit, lu the length unit and ts the time step. For this set of parameters, the density of vapor is ρ_v = 85.86 mu lu⁻³ and that of water ρ_W = 524.98 mu lu⁻³. The density ratio between the water and vapor phases is 6.11, which is significantly lower than the expected density ratio between water and air. However, the low density ratio helps to mitigate spurious currents. Moreover, the density-ratio mismatch is not an issue in this study, as the flow in the porous media investigated here is at a low capillary number and effects due to inertia and gravity are negligible [62]. This is further confirmed by the agreement of the benchmark simulations with the Washburn equation in Section 2.3. The resulting non-ideal pressure can be computed from the equation of state of the model. All simulations reported in this study are calculated in lattice units and then converted into physical units by a transformation adapted to the physical system (Appendix A).
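The display equations of the lattice BGK update, the equilibrium distribution and the Shan-Chen force did not survive extraction above. As a rough stand-in, the following self-contained Python sketch implements the standard single-component pseudopotential scheme with the parameter values quoted in the text (G = −120, ψ_0 = 4, ρ_0 = 200, τ = 1) on a small periodic domain, so that the quoted vapor and liquid densities can be checked by spinodal decomposition. It is an illustrative toy, not the Yantra implementation used in the study; the array layout, forcing variant and initial condition are our assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (c_s^2 = 1/3)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

G, psi0, rho0, tau = -120.0, 4.0, 200.0, 1.0    # values quoted in the text
nx, ny, steps = 64, 64, 2000

rng = np.random.default_rng(0)
rho = 200.0 + 10.0 * rng.random((ny, nx))       # perturbed density -> spinodal decomposition
f = w[:, None, None] * rho[None, :, :]          # start at rest: f_i = w_i * rho

def psi(r):                                     # Shan-Chen interaction potential
    return psi0 * np.exp(-rho0 / r)

for t in range(steps):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho

    # interparticle force F_int = -G * psi(x) * sum_i w_i * psi(x + e_i) * e_i
    p = psi(rho)
    Fx = np.zeros_like(rho); Fy = np.zeros_like(rho)
    for i in range(1, 9):
        p_nb = np.roll(np.roll(p, -e[i, 1], axis=0), -e[i, 0], axis=1)  # psi at x + e_i (periodic)
        Fx += w[i] * p_nb * e[i, 0]
        Fy += w[i] * p_nb * e[i, 1]
    Fx *= -G * p; Fy *= -G * p

    # Shan-Chen forcing: shift the velocity used in the equilibrium by tau * F / rho
    ueqx = ux + tau * Fx / rho
    ueqy = uy + tau * Fy / rho

    # BGK collision with the usual second-order equilibrium, then streaming (periodic BCs)
    usq = ueqx**2 + ueqy**2
    for i in range(9):
        eu = e[i, 0] * ueqx + e[i, 1] * ueqy
        feq = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
        f[i] += -(f[i] - feq) / tau
        f[i] = np.roll(np.roll(f[i], e[i, 1], axis=0), e[i, 0], axis=1)

# should approach ~86 mu lu^-3 (vapor) and ~525 mu lu^-3 (liquid)
print("min/max density after separation:", rho.min(), rho.max())
```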
Different CAs in the presence of solid surfaces can be achieved by adjusting the pseudo density assigned to the solid phase at the wall positions within the range between the vapor and liquid densities. Solid densities close to the water (fluid) density yield a hydrophilic surface, whereas values close to the vapor density lead to a hydrophobic surface (Figure 1). The dependence of the equilibrium contact angle ϕ on the pseudo wall density ρ_wall (Figure 2) was determined by modelling droplet spreading on a flat surface with homogeneous wall density. The spherical cap method [64] was employed to measure the CA from the equilibrium droplet shape.
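For reference, a minimal helper for the spherical cap evaluation mentioned above: for a circular cap of height h and base half-width a, the equilibrium contact angle follows from θ = 2·arctan(h/a). The droplet dimensions in the example below are hypothetical.

```python
import math

def contact_angle_from_cap(height, base_half_width):
    """Equilibrium contact angle (deg) of a circular/spherical cap droplet,
    from its height h and half of its contact (base) width a:
    theta = 2 * arctan(h / a). Valid for wetting and non-wetting caps."""
    return math.degrees(2.0 * math.atan2(height, base_half_width))

# Hypothetical droplet measurements in lattice units (illustration only):
print(contact_angle_from_cap(10.0, 25.0))   # flat cap -> ~43.6 deg (wetting)
print(contact_angle_from_cap(30.0, 20.0))   # tall cap -> ~112.6 deg (non-wetting)
```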
Model Validation
To validate the pseudopotential LBM implementation, we test its ability to simulate a moving contact line against the capillary intrusion test [65]. Here, the progression of the contact line between parallel plates is simulated for a two-dimensional system. Infiltration is controlled by capillary forces and the viscosity of the penetrating liquid. Taking the capillary geometry into account and neglecting inertia, gravity, and the viscosity of the gas leads to the established Washburn equation, which describes the contact line progression within the capillary [66,67], where l is the position of the contact line, d is the plate spacing and µ is the dynamic viscosity of the liquid. For a homogeneous coating of the plates with a single CA, Equation (10) has an analytical solution, Equation (11). A system of 7 × 0.2 mm² with a conversion factor for length C_L = 10 µm/lu is used for the simulation. The parallel plates, with a length of 5 mm, are arranged at a distance of 0.1 mm from each other and 1 mm each in the x-direction from the side walls (Figure 3). Periodic boundary conditions are used in the y-direction where there are no parallel plates. The parallel plates are implemented as solid obstacles with bounce-back boundary conditions. At the left side of the system, a Dirichlet condition is set to the density of water, representing an infinite water reservoir, whereas at the right edge a Neumann condition (u = 0) prevents drop formation at the right end of the plates. As the initial condition of the simulation, the liquid is placed on the left side of the plates. After a short settling period of 7 µs, during which the liquid meniscus slightly penetrated between the plates up to a point x_0 close to the entry of the capillary, the velocity u was reset to zero in the entire system. Subsequently, the progression of the meniscus relative to the reference point x_0 was measured as a function of time.
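A small Python sketch of how such a benchmark can be post-processed is given below. It uses the commonly quoted parallel-plate form of the Washburn solution, l(t) = sqrt(x_0² + d γ cos(ϕ) t / (3µ)); the paper's exact Equations (10) and (11) are not reproduced here and may differ in detail, and the fluid parameters and the mock "LBM" data are purely illustrative.

```python
import numpy as np

def washburn_length(t, gap, gamma, theta_deg, mu, x0=0.0):
    """Closed-form Washburn penetration between parallel plates, in its commonly
    used form l(t) = sqrt(x0^2 + gap*gamma*cos(theta)/(3*mu) * t)."""
    k = gap * gamma * np.cos(np.radians(theta_deg)) / (3.0 * mu)
    return np.sqrt(x0**2 + k * t)

# Hypothetical comparison: fit the sqrt(t) regime of simulated penetration data.
t = np.linspace(0.0, 1e-3, 200)                  # s
gap, gamma, mu = 1e-4, 0.072, 1e-3               # 0.1 mm plate spacing, water-like fluid (SI)
l_sim = washburn_length(t, gap, gamma, 60.0, mu) * (1 + 0.02*np.sin(4e4*t))  # mock "LBM" data

slope = np.polyfit(t[50:], l_sim[50:]**2, 1)[0]  # fit l^2 = k*t in the late regime
theta_fit = np.degrees(np.arccos(3*mu*slope/(gap*gamma)))
print(f"apparent contact angle from the fit: {theta_fit:.1f} deg")
```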
The results for homogeneous CAs of 0° and 60° are shown in Figure 4 and indicate that the penetration curves are essentially divided into two stages when comparing the Washburn and LBM results. In the first stage, up to about 0.08 ms, the penetration velocity of the LBM is significantly lower and therefore better reflects the actual course of the initial phase [68]. In the second stage, the LBM results approximately follow the Washburn behavior, but the deviation increases over time.
To validate this LBM model also for spatially heterogeneous CAs, the same system as before was simulated with an alternating coating of the parallel surfaces, i.e., two different CAs follow each other at an even spatial distance in the flow direction. With regard to the results of the penetration at alternating CAs, two phases can be identified, as in the case of the homogeneous coating. In the first phase (t < 0.085 ms), the average speed derived from the LBM simulations is lower, while in the second phase it is higher than the numerical solution of the Washburn equation. The spatial distribution of the alternating CAs is clearly reflected in the different penetration velocities and matches them, even though the temporal fit may vary in sections due to the time-dependent velocity difference mentioned above.
Infiltration into Porous Media Depending on Contact Angle Distribution
To examine the influence of CA distributions on infiltration into porous media, the infiltration into geometrically highly simplified porous systems is simulated first, using the previously described and validated program. The 1.17 × 0.17 mm² systems employed here, with a conversion factor for length C_L = 1.7 µm/lu, are identical to those utilized for the capillary intrusion test in terms of boundary conditions and periodicity. The main difference is that the parallel plates are replaced by a porous system consisting of circular particles positioned in a regular way (Figure 6). The porosity is adjusted to resemble that of sand with Φ = 43.4% and the unit conversion was based on a sand particle diameter of d = 0.1 mm. The CA coating of the particles is applied according to three different coating patterns (Table 1, Figure 7): Table 1. Listing and description of all basic particle coating patterns used for the simplified porous medium.
Coating Pattern — Description
homogeneous pattern — all particles homogeneously coated with the same CA (Figure 7a)
…° are used to investigate the dynamics of the infiltration process. Two different wettability patterns are investigated for the body-throat coating pattern: in the body-phobic case, the pore throat exhibits a hydrophilic CA of 0° and the pore body exhibits various reduced wettabilities (Figure 7b). In the throat-phobic case, the wettabilities of the pore throat and body are reversed. In addition to the CA, the influence of the size of the surface with reduced wettability is also investigated for all body-throat simulations; it is varied in the form of a surface angle α from 5° up to 60° in 5° steps. For the body-phobic coating, α describes the strip width with reduced wettability located in the pore body, while for the throat-phobic coating it refers to the strip width with reduced wettability in the pore throat. For the stripes coating pattern, hydrophilic stripes always alternate with stripes of reduced wettability, whereby the stripe width is defined by the angle α of 5°, 10° or 15° (Figure 7c).
To compare the influence of the coating pattern on the dynamic infiltration, the average front velocity v of the fluid was determined by measuring the time until it penetrated 0.58 mm into the porous medium. For each coating pattern, the critical contact angle (CCA) at which no infiltration takes place was determined by additional simulations to an accuracy of 1°.
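The paper does not state how the additional simulations were organized; one straightforward way to bracket the CCA to 1° is a bisection over the contact angle, sketched below with a stand-in for the actual LBM run (the 54° value in the mock predicate is the homogeneous-coating result reported in Section 3.1).

```python
def critical_contact_angle(infiltrates, lo=0.0, hi=180.0, tol=1.0):
    """Bracket the critical contact angle (CCA) to within `tol` degrees by
    bisection, assuming infiltration occurs below the CCA and not above it.
    `infiltrates(ca)` would wrap one full LBM run at contact angle `ca`;
    the average front velocity of such a run is simply 0.58 mm divided by
    the time needed to reach that penetration depth."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if infiltrates(mid):
            lo = mid          # still infiltrates -> CCA is higher
        else:
            hi = mid          # blocked -> CCA is lower
    return lo, hi             # CCA lies inside [lo, hi]

# Stand-in for a real simulation: pretend the (unknown) CCA is 54 degrees.
mock_infiltrates = lambda ca: ca < 54.0
print(critical_contact_angle(mock_infiltrates))   # brackets 54 deg within 1 deg
```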
In addition, the infiltration is simulated in a simplified, three-dimensional porous medium of 1.17 × 0.17 × 0.17 mm³ (Figure 8). Boundary conditions and periodicity are adjusted to three dimensions and are identical for the y- and z-directions. The CA distribution from the two-dimensional throat-phobic case is extended to three dimensions. For this purpose, the wettability is reduced on the surface stripes of the spherical particles that have the smallest distance between neighboring spheres and are oriented perpendicular to the direction of infiltration (x-direction). Experimentally, it has also been observed that mucilage retreats during drying into the locations of smallest distance between particles [15,25]. For comparability with the infiltration into soil from Section 2.5, a CA of 100° was chosen for these surfaces, while the rest has a CA of 20°. Infiltration scenarios were carried out for surface strip widths α of 5°, 10° and 15°. Figure 8. Simplified three-dimensional porous medium, consisting of spherical particles (brown) with a coating of reduced wettability (red) and the water source (blue) at the left side.
Infiltration into Soil
To demonstrate the dependence of water infiltration on the heterogeneous nature of the CA distribution of soils, the infiltration process was simulated for the example of a dry, sandy soil (0.125-0.2 mm) amended with maize (Zea mays L.) mucilage at a content of 8 mg g⁻¹ (mg dry mucilage per g of dry soil). To resolve the spatial distribution of mucilage induced by drying in soil, the sand was amended with hydrated mucilage and dried by evaporation (see [25] for a detailed description). The distribution of dry mucilage structures created in the process of soil drying was resolved using synchrotron-based X-ray tomographic microscopy (SRXTM). Mucilage structures were visible as two-dimensional surfaces connecting multiple pores across the soil domain [25]. The origin of the two-dimensional structures on soil particles is indicated in green in the example displayed below (Figure 9; courtesy of M. Zarebanadkouki, University of Bayreuth, and P. Benard, A. Carminati, ETH Zurich). The simulations were carried out for this two-dimensional partial section of the sandy soil (Figure 9) with a lattice of 0.6 × 0.48 mm² and a conversion factor for length C_L = 0.33 µm/lu. The boundary conditions as well as the periodicity of the simulations correspond to those of the simplified porous medium (Figure 6), whereby two thin parallel plates were additionally added above and below the soil section. The plates have a CA of 90° to ensure that, at the top and bottom boundaries, the curvature of the water front behaves as if the system were mirrored in the y-direction. The CA of the areas covered with mucilage is set to 100°, which corresponds to observed values at high maize mucilage surface concentrations [23]. Such values can be expected locally, as the retreat of liquid during drying leads to a high concentration of mucilage in these regions [25,26]. The CA of the remaining soil particle surfaces was set to 20°, a typical value for untreated soil mineral surfaces. The area covered with mucilage is 5.2% of the total surface area of the particles. For this distribution, the Cassie equation predicts an effective CA of 28°.
Two simulations were performed: one with the heterogeneous distribution of CAs as described above and one assuming a homogeneous effective CA of 28° according to the Cassie equation. To exclude a filling of the soil section due to artificial condensation (DeMaio 2011), the determination of equilibrium is based exclusively on the fluid that is in direct contact with the water reservoir.
Homogeneous Coating Pattern
For the infiltration process in the homogeneous coating pattern case, the time-dependent penetration length directly reflects the geometry of the porous system (Figures 10 and 11): the infiltration front velocity v in the pore throat is significantly greater than in the pore body. This can be explained by the interplay of the continuity equation, which dictates a higher flow velocity in the narrow parts of the system, and the stronger capillary forces in the pore throat compared to the pore body. Note that for infiltration in a capillary of constant radius, in contrast, the Washburn equation (Equation (11)) predicts a lower v at smaller diameters.
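The capillary-force part of this argument can be made quantitative with the Young-Laplace relation for an idealized 2D slit, ΔP = 2γ cos(ϕ)/w: for the same CA, a narrower aperture (pore throat) sustains a proportionally larger driving pressure than a wide one (pore body). The surface tension and apertures in the snippet below are hypothetical and serve only to illustrate the ratio.

```python
import math

def capillary_pressure_2d(gamma, theta_deg, gap):
    """Young-Laplace capillary pressure across a meniscus in a 2D slit of
    width `gap`: dP = 2*gamma*cos(theta)/gap (idealized throat/body geometry)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / gap

gamma = 0.072                   # N/m, water-like (illustrative value)
throat, body = 20e-6, 80e-6     # hypothetical local apertures in the regular packing
for theta in (0, 40, 54):
    print(theta,
          capillary_pressure_2d(gamma, theta, throat),
          capillary_pressure_2d(gamma, theta, body))   # throat pressure is 4x larger here
```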
The mean velocity v decreases in accordance with the Washburn equation, and the CCA is 54° (Figures 10 and 12). For values above the CCA, water cannot infiltrate into the system and the average velocity is 0.
Transferring this to soils in general, this means that water repellency can occur not only for hydrophobic CAs (>90°) but also at merely reduced wettability of the soil particles. The reason is that, due to the system geometry, the macro-scale effective CA depends not only on the curvature of the water surface (as with a flat surface) but also on the curvature of the particle surface [69].
Body-Throat Coating Patterns
For the body-throat coating patterns as well (Figure 13), v generally decreases with increasing CA. The small increase for throat-phobic coatings at α ≤ 20° (Figure 13b) is caused by discretization effects in the pore throat. For the same CA, v decreases with increasing α. It should be emphasized that for the body-phobic case with α ≥ 20°, the value of α has nearly no influence on v, which also corresponds approximately to the homogeneous coating pattern (Figure 13a). For α < 20°, on the other hand, v depends directly on α. For the throat-phobic pattern, in contrast, v depends strongly on α for α ≥ 20°, but is more or less independent of CA for α < 20° (Figure 13b). Figure 13. v as a function of CA for the body-throat coating pattern: (a) body-phobic and (b) throat-phobic. The CA is related to the compartments of the particle surface whose wettability is reduced.
A comparison of v for body-phobic and throat-phobic coatings (Figure 14) shows that, for the same α, v in the body-phobic case is always smaller than in the throat-phobic case. If we consider v as a measure of the infiltration rate σ of the porous system and take into account that the total infiltration rate is limited by the globally lowest infiltration rate in the system, we see that for a composite system of body-phobic and throat-phobic coatings with α_body-phob ≥ α_throat-phob, the effective total infiltration rate σ_eff corresponds to the infiltration rate of a system with only the body-phobic coating. For α_body-phob ≥ 20°, σ_eff is determined by the infiltration rate of the body-phobic coating independently of α, and can be approximately calculated from a homogeneous coating with a CA equal to the CA of the body-phobic regions. Figure 14. Average infiltration velocity v as a function of CA for body-throat coating patterns, comparing the effects of body-phobic and throat-phobic coatings. The CA refers to those compartments of the particle surface whose wettability is reduced.
Applied to porous media, these results show that the infiltration dynamics strongly depend on the CA distribution. Furthermore, the wettability of the pore body is here more decisive for the infiltration than the wettability of the pore throat. However, it should be noted that the two-dimensional findings cannot be directly transferred to the three-dimensional real soil system: in three dimensions the pore body has a much larger surface than the pore throats. Therefore, much more hydrophobic material would be required to make a 3D pore body water repellent.
Stripes Coating Pattern
For the stripes coatings, v also decreases with increasing CA, whereby for CA ≤ 40° it is rather independent of α (Figure 15). For α = 10° and 15° the curves are very similar up to CA ≤ 60°, which could possibly be caused by the fact that for α = 10° the opposing stripes of two neighboring particles have the same CA, whereas for α = 15° they are distinct (Figure 7c). Figure 15. v as a function of CA for the stripes coating. The CA is related to the compartments of the particle surface whose wettability is reduced.
v of the stripes coating is smaller than that of the body-phobic coating for the same α, which can be explained by the smaller fraction of the total surface with reduced wettability on the stripes near the pore body (Figure 16). For α = 15° this is reversed, which is probably due to the different CAs of the stripes facing each other. Figure 16. v as a function of CA for the homogeneous, body-phobic, throat-phobic and stripes coating patterns. The CA is related to the compartments of the particle surface whose wettability is reduced.
From the results of the stripes simulations, it can be concluded for porous systems in general that, even at the same CA area ratio, a smaller characteristic pattern size of the CA distribution can increase the macro-scale effective infiltration rate.
Comparison with Cassie Equation
In the following, the dependence of v on CA is compared for coatings with the same effective CA but different spatial CA distributions, to study the basic scope and limitations of effective CAs. For this purpose, all coating samples with a 1:1 ratio of hydrophilic surfaces to surfaces with reduced wettability are compared with the results of a corresponding homogeneous coating with the effective CA according to the Cassie equation. All results based on the homogeneous, effective CA coating are summarized in the following as Cassie curves. The comparison of the simulation results of the Cassie curve with the other coating patterns with respect to v shows that the deviations from the Cassie curve increase with increasing CA (Figures 17 and 18). The Cassie curve corresponds approximately to the curves of the stripes coating for CA ≤ 40°, whereas strong deviations are recognizable for CA > 60°. Additionally, the stripes coating does not converge toward the Cassie curve for decreasing α, since for the stripes coating at α = 5° the values of v are always larger, and for α = 10° and 15° always smaller, than for the Cassie curve. Comparing the Cassie curve with those of the body-phobic and throat-phobic coatings, the Cassie curve lies between them, whereby the difference for throat-phobic is always greater and also increases with increasing CA.
For porous media, it can be deduced that the specific spatial CA distribution pattern has a decisive influence on the infiltration process and that it is not sufficient to take into account only the relative area fractions of the different CAs. Particularly in the case of relatively large CA differences, as well as large α in the pore body, caution is required when applying the Cassie equation. Moreover, the lack of convergence toward the Cassie curve for small α raises a general question about the reasonable applicability of the Cassie equation in more complex porous media, such as soils.
Critical Contact Angle (CCA)
The CCA is greater for the throat-phobic coating than for the body-phobic coating for all α (Figure 19). In the case of the body-phobic coating it changes from strongly hydrophobic (142°) to reduced wettability (54°) in the range 5° ≤ α ≤ 25°. For larger α the CCA is constant at 54° and thus corresponds to the CCA of the homogeneous coating, which is why it is reasonable to assume that the CCA is also constant at 54° for 60° < α < 90°. For the throat-phobic coating the CCA also initially drops sharply within the hydrophobic range (from 159° to 102°) for 5° ≤ α ≤ 10°. For larger α it decreases more weakly and approximately constantly, changing from hydrophobic to reduced wettability in the range 20° ≤ α ≤ 25°. It is assumed to decrease further for α ≥ 60° until it reaches CCA = 54° at α = 90°, which corresponds to the homogeneous coating. The comparison of the coatings shows that for α = 5° the CCA of the stripes coating, at 90°, is much smaller than for the body-phobic and throat-phobic coatings, which may be explained by the additional stripes of the stripes coating. The CCA for α ≥ 10° is approximately equal to that of the body-phobic coating, since the stripes in the pore body are more relevant (Figure 16). Regarding the behavior for small α, Figure 19 supports two hypotheses: we expect that for the body-throat coating at sufficiently small α, a CCA no longer exists, since the momentum of the infiltration front allows the fluid to flow over a sufficiently small hydrophobic layer even at a CA of 180°. For the stripes coating, on the other hand, it is conceivable that for small α the CCA converges towards a fixed value.
Transferred to porous media, the water repellency can therefore depend strongly on the spatial CA distribution. In our two-dimensional cases, the wettability of the pore bodies is more decisive than that of the pore throats. In order to transfer the findings to three-dimensional systems, two main cases have to be considered, which can be distinguished on the basis of the CA. If only reduced wettability is present, water repellency is largely controlled by the coating of the pore body. In the second case, if additional hydrophobic areas occur in the pore throat, these can lead to water repellency. These areas in the pore throat are especially important because they control water content dynamics under unsaturated conditions. The three-dimensional simulations (Figure 20), showing water repellency from a surface stripe width of α = 15°, underline that even small hydrophobic surface fractions (here 17% of the total surface) in the pore throat are sufficient to cause water repellency.
Cassie Equation in Soil
Water infiltration in sandy soil is highly dependent on the spatial distribution of wettability, and not only on the specific area ratio. The simulated infiltration into a soil system covered at certain spots with mucilage (with mucilage: CA = 100°; without: CA = 20°) shows that the system is water repellent when partially covered with mucilage (Figure 21), while it is conductive with a corresponding homogeneous coating with the equivalent CA = 28° according to the Cassie equation. This underlines the hypothesis that, for complex porous media, theories like the Cassie equation that are based on an effective CA and do not consider the specific distribution may be unsuitable for estimating infiltration processes, and that even small areas covered with dry mucilage (here 5% of the total surface) can have a considerable influence on water repellency.
Conclusions
In our study, we investigated the influence of spatially heterogeneous wettability distributions on infiltration processes in simplified and real two-dimensional sandy soil systems at the pore scale. We not only confirmed that water repellency in porous media such as soil can occur for non-hydrophobic CAs [70], but could also show that water repellency as well as water infiltration dynamics are strongly influenced by the spatial distribution of wettability. We showed that simulations based on effective contact angle theories such as the Cassie equation can predict dynamics that are far from the results obtained when the location and pattern of the local contact angle distribution are considered.
Under certain conditions, such as large CA variations, the Cassie equation is insufficient for calculating effective contact angles for heterogeneous CA distributions in porous media [71,72]. For two-dimensional systems we could show that infiltration and water repellency are more sensitive to increases in the contact angle and the area of coated surfaces in pore bodies than in pore throats. Hydrophobic locations in the pore throat, but also in the pore body, are each sufficient to cause water repellency. Indeed, we could show for a two-dimensional real soil system that covering certain pore throats, corresponding to 5% of the total surface, with a hydrophobic layer is already sufficient to induce water repellency. In the three-dimensional case, pore throats comprise far less soil surface than pore bodies. This means that there, even smaller fractions of the soil surface in pore throats covered with a hydrophobic material can render the soil water repellent, especially under unsaturated conditions. For instance, in the rhizosphere during drying, mucilage retreats to the narrower parts of the pore throats and finally dries there on the soil surface [15], covering the pore throats with a hydrophobic layer with a contact angle of sometimes around 100°. We have shown that even very small amounts of such a layer in the proper location can induce water repellency. Since pore throats are the last parts of a soil (before surfaces) to dry, and since the CA depends on the water content [73], it is also conceivable that a critical water content exists below which the respective soil, and in particular the rhizosphere, is water repellent. With regard to the application to natural soil, it should be noted that the described effects can vary considerably depending on particle geometry, roughness and contact angle.
Our study shows that averaged contact angle parametrizations are not appropriate, and that proper knowledge of local contact angle patterns needs to be considered to advance our understanding of the effect of microscale properties on large-scale hydraulic dynamics and to include the microscale effect of root exudation in larger root water uptake models [74].
Finally, it should be noted that this study focuses on investigating the influence of the wettability distribution when the pore structure properties of the porous medium are constant. The conclusions derived from this study can apply to all porous media in general. However, further simulations are needed to quantitatively evaluate the additional influence of the heterogeneity of the porous media in combination with the wettability distribution.
Appendix A Unit Conversion
The system variables represented in the LBM in grid units (lb) can be converted into physical units (phy) with the help of conversion factors. Starting from the basic conversion factors for length C_L, time C_T and mass C_M, all quantities derived from these can be calculated directly. The conversion for infiltration processes can be done using dimensionless numbers such as the Bond or Reynolds number, which can be adjusted for our simulations with the help of the system length, dynamic viscosity and surface tension. The length conversion factor is defined as the ratio between the physical length L of the computational domain and the corresponding number of lattice nodes N in the grid. With the help of the conversion factors for the dynamic viscosity C_µ of the fluid and the surface tension C_γ, the conversion factors for time and mass can be calculated by rearranging and inserting Equations (A2) and (A3). The kinematic viscosity in lattice units ν_lb is determined by Equation (5) with τ_NS = 1, which leads to ν_lb = 1/6 lu² ts⁻¹ for water and vapor. Using the densities of vapor ρ_v = 85.86 mu lu⁻³ and fluid ρ_W = 524.98 mu lu⁻³, the dynamic viscosity of the vapor is thus µ_lb,v = 14.31 mu lu⁻¹ ts⁻¹ and that of the fluid µ_lb,w = 87.5 mu lu⁻¹ ts⁻¹. The surface tension γ of this multiphase LB model is calculated in lattice units on the basis of the two-dimensional Young-Laplace equation, by determining the pressure difference ∆P in bubbles or droplets of different radii r. Since the interface between vapor and liquid is continuous, the arithmetic mean of the equilibrium vapor and liquid densities was chosen as the limiting density for phase differentiation, ρ_I = 305.42 mu lu⁻³. A linear fit of the points on the curve ∆P ∼ 1/r results in a surface tension of γ = 13.85 mu ts⁻² with a correlation coefficient R² = 0.9988 (Figure A1). Figure A1. Correlation between the pressure difference (∆p) between the inside and outside of a bubble and the reciprocal of its equilibrium radius r⁻¹, as well as the resulting surface tension γ.
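The fitting step described above can be reproduced with a few lines of Python; the (r, ΔP) pairs below are hypothetical values generated to be roughly consistent with the reported γ = 13.85 mu ts⁻², and the last line simply checks the quoted lattice viscosities via µ = ρν with ν = 1/6.

```python
import numpy as np

# Hypothetical droplet measurements (radius in lu, pressure jump in lattice units),
# generated to be roughly consistent with the reported gamma = 13.85 mu ts^-2.
r  = np.array([15.0, 20.0, 25.0, 30.0, 40.0, 50.0])
dP = 13.85 / r + np.random.default_rng(1).normal(0.0, 0.005, r.size)  # 2D Young-Laplace: dP = gamma / r

slope, intercept = np.polyfit(1.0 / r, dP, 1)      # linear fit of dP against 1/r
residuals = dP - (slope / r + intercept)
r2 = 1.0 - residuals.var() / dP.var()
print(f"gamma ~ {slope:.2f} mu ts^-2, R^2 = {r2:.4f}")

# Consistency check of the quoted lattice viscosities: mu = rho * nu with nu = 1/6
print(85.86 / 6, 524.98 / 6)    # 14.31 and ~87.5, as given in the appendix
```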
Appendix B Grid Convergence
To check the grid convergence, the same simulation setup as for the capillary intrusion test with homogeneous hydrophilic wetting was used. The grid spacings are c_1 = 1 × 10⁻⁵ mm, c_2 = 5 × 10⁻⁶ mm, c_3 = 7 × 10⁻⁷ mm and c_4 = 1 × 10⁻⁷ mm.
The relative error (Figure A2) between the grid with the finest spacing c_4 = 1 × 10⁻⁷ mm and the other, coarser grids is expressed in the form of the L2 error norm, with x and x_finest denoting the time-dependent penetration length of the coarser grid and the finest grid, respectively (Figure A2b). Figure A2. Illustration of convergence with respect to grid spacing using the simulation setup for the capillary intrusion test of water into parallel plates. The figure presents the effect of the grid spacing on (a) the curves of time-dependent penetration length and (b) the corresponding L2 diagram with a convergence order of 1. | 2022-04-03T15:51:11.144Z | 2022-03-30T00:00:00.000 | {
"year": 2022,
"sha1": "4d797988d8ba9e2c8a86281a95e005c11c041af5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/14/7/1110/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "121fd7f8921d0f673c09546fdc758759414b711f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
239885855 | pes2o/s2orc | v3-fos-license | Overcoming Pedestrian Blockage in mm-Wave Bands using Ground Reflections
mm-Wave communication employs directional beams to overcome high path loss. High data rate communication is typically along line-of-sight (LoS). In outdoor environments, such communication is susceptible to temporary blockage by pedestrians interposed between the transmitter and receiver. It results in outages in which the user is lost, and has to be reacquired as a new user, severely disrupting interactive and high throughput applications. It has been presumed that the solution is to have a densely deployed set of base stations that will allow the mobile to perform a handover to a different non-blocked base station every time a current base station is blocked. This is however a very costly solution for outdoor environments. Through extensive experiments we show that it is possible to exploit a strong ground reflection with a received signal strength (RSS) about 4dB less than the LoS path in outdoor built environments with concrete or gravel surfaces, for beams that are narrow in azimuth but wide in zenith. While such reflected paths cannot support the high data rates of LoS paths, they can support control channel communication, and, importantly, sustain time synchronization between the mobile and the base station. This allows a mobile to quickly recover to the LoS path upon the cessation of the temporary blockage, which typically lasts a few hundred milliseconds. We present a simple in-band protocol that quickly discovers ground reflected radiation and uses it to recover the LoS link when the temporary blockage disappears.
I. INTRODUCTION
mm-Wave communication systems use highly directional beams to combat high path loss. The base station and the mobile typically employ their beams in the Line of Sight (LoS) direction to maximize link strength for high data rate communication. Due to high oxygen absorption [1] at mmwave frequencies from 28 to 80 GHz there is an at least 40 dB absorption loss. Consequently, a human body interposed between the base station and the mobile completely blocks the narrow directional link between mobile and base station. In outdoors settings, when pedestrians obstruct the LoS path between mobile and base station, packet communication between mobile and base station is disrupted. The mobile is essentially "lost" to the base station, and will need to be re-acquired as a new user. This can however involve a delay of about a second.
For example, in 5G New Radio, for a mobile to start the initial access procedure, the base station sweeps broadcast information in 64 beams every 20 ms. So even to discover a base station beam, the mobile can take up to 1.28 seconds [2]. This not only incurs significant power consumption, but it also disrupts many mobile applications, e.g., user experience for high throughput applications like Virtual Reality (VR) and high-definition video streaming. Blockage by pedestrians is therefore regarded as a serious problem outdoors [3]- [5].
To avoid such disruption it has been suggested that the mobile can perform handover to a different nearby base station with an LoS beam [3]. This however requires a dense, and therefore costly, deployment of base stations in outdoor environments. Another suggested solution is Coordinated Multipoint Transmission [6], [7], which also requires a dense deployment of base stations. Prior works have studied cell density requirements [3] and the extent of coordination needed between base stations [6]- [8] during blockage events. In addition, it incurs delay since to discover a neighboring base station beam and perform handover, the mobile needs to perform cell beam discovery, respond with random access, followed by exchange of control plane messages, e.g., to perform authentication, etc. It also consumes power from the battery-driven mobile [9].
Avoiding a dense and costly deployment of base stations, and the coordination challenges it presents [3], [6]-[8], necessitates a different solution for outdoor environments. Through extensive outdoor signal measurement campaigns using 60 GHz transceivers, we show that it is possible to exploit a ground reflection whose signal strength is about 4 dB less than that of the LoS path in outdoor environments with hard surfaces such as concrete or gravel, when the beam is narrow in azimuth but wide in zenith (Footnote 1). While such reflected beams, indoors or outdoors, cannot support the high rate of the LoS communication prior to blockage, they can support lower-rate bidirectional channel communication. We show that this allows time synchronization between the mobile and the base station to be sustained, so that the mobile can quickly revert to LoS communication as soon as the temporary blockage, which typically lasts a few hundred milliseconds, ends. We also present an in-band protocol to discover ground reflections and quickly recover LoS communication after temporary blockages. This allows us to extend the BeamSurfer protocol for indoor environments [11] to outdoor environments. Footnote 1: We have also determined through experimentation that such a ground reflection also exists indoors, from hard surfaces such as ceramic tiles. Such reflections have apparently not been measured indoors previously, possibly because they require antenna array placement at a height and directed at an appropriate angle to allow the reflected beam to reach the mobile in spite of blockage by another human. In fact, the reflected beam so generated is better than the RSS on NLoS paths previously measured in indoor environments via walls, which is about 10 dB lower than on LoS paths [10].
II. RELATED WORK
The existence of ground reflections is well known. In fact, it is the basis of instrument landing systems [12], operating in the UHF band, that use a reflected wave to form a glide path for landing. For communications, Rajagopal et al. [1] and Jaeckel et al. [13] have conducted measurement campaigns in mm-wave bands to characterize ground reflections. Specifically, [1] has identified the presence of strong ground reflections comparable to LoS directions at 28 GHz in outdoor environments, which from the presented graphs appear to be about 6-8 dB below the LoS signal strength.
Several works [4], [14]- [16] have also shown that human body blockage is severe at mm-wave frequencies. The effect of pedestrian traffic on mm-wave systems has been studied in [10], [17], [18]. In particular, they have characterized the duration of pedestrian blockage events. It has been observed that blockage lasts for 100 to 300 milliseconds in crowded environments.
To overcome the ill effects of pedestrian blockage, researchers have broadly proposed two approaches. One is to directly use NLoS paths from the environment for data communication, and the other is to deploy a large number of base stations.
The experimental measurement campaign reported in [19] specifically mentions scatterers including "lamppost, building, tree, or automobile", but not the ground itself, and reports NLoS beams that have 10-50 dB more loss than LoS. One approach, BeamSpy [20], suggests building a full geometric model at an anchor location through a full space scan. It proposes that a predicted set of transmitter and receiver beam pairs that capture NLoS paths at a location neighboring the anchor point be tested by the receiver to identify a useful pair that restores link signal strength during the blockage. The geometric model needs to be recreated if the anchor point is far, i.e., more than 3 m, from the neighbor location. The work reported in [21] simulated beam switching from LoS to NLoS beams during blockage, and asserts that most NLoS paths from the environment are not useful for high data rate communication.
The density of base stations necessary to meet the performance requirements of high-throughput, low-latency applications is studied in [3]. To meet 5G New Radio application requirements in outdoor environments, a base station density of 200 BS/km² is determined to be needed. Such a high base station density demands tight network coordination to manage inter-cell interference; [6] analyzed coordinated multipoint access for switching base stations during blockage events, and mentions an inverse relation, at high base station density, between reliable blockage recovery and throughput.
Our experiments, conducted in the 60 GHz band, show that strong ground reflections from multiple surfaces outdoors by surfaces such as concrete and gravel, and indoors via hard surfaces such as ceramic tile, can be used to overcome pedestrian blockage problems by sustaining time synchronization and control channels during blockage, and allowing quick recovery to a high data rate LoS when blockage disappears.
III. BACKGROUND
Cellular networks rely on mm-wave spectrum to provide gigabit throughput utilizing the large bandwidth available. Due to their smaller wavelengths, mm-wave links have high path and penetration losses. To overcome link losses in mm-wave networks, the base station and the mobile devices communicate in a directional fashion using narrow radio beams. While the power amplifiers in the radio front end provide certain gains, using the directivity gain from a passive antenna array is critical for mobile mm-wave devices. For minimal link loss, the base station and mobile need to communicate using a directional beam in the LoS direction.
At sub-6 GHz frequencies, the environment scatters electromagnetic radiation from an omni-directional transmitter in all directions. An omni-directional receiver can capture this incoming radiation, i.e., all the multi-path components of the transmitted signal. In contrast, due to directional transmission, there are fewer and more distinct multi-path components in the mm-wave bands. A narrow directional receive beam can only capture signal components that arrive in the beam direction. To discover either LoS or NLoS paths, the base station sweeps beams within a sector and the mobile receiver performs a spatial scan. NLoS signal components in mm-wave have a received signal strength (RSS) at least 10 dB lower than that of LoS paths. The receiver may discover an LoS path using wider beams, but it cannot discover NLoS paths in this way, as they have much lower signal strength.
In the sub 6-GHz band, an obstacle interposed between the base station and the mobile cannot block the transmission as the receiver can capture a large number of multipath components. However, an interposed pedestrian does obstruct a directional mm-wave LoS link. The poor RSS during blockage events leads to link outage. The mobile is left with one of two choices to continue communication with the network -to switch to a NLoS path if such a path exists between the base station and mobile, or to perform handover to a neighboring base station (or otherwise employ a neighboring base station through, say, coordinated multipoint transmission). NLoS paths between the base station and the mobile typically exist in indoor environments.
The base station and mobile need to adapt their LoS beams to compensate for user mobility. They use phased antenna arrays that electronically steer the direction of radio beams. As the size of the array increases, i.e., as the number of antenna elements increases, the directivity gain increases and the resulting radio beam has a smaller beamwidth. For example, a 32×32-element uniform planar array can produce beamwidths as narrow as 4°. Using these narrow directional beams, the mobile must scan the full space to discover an NLoS path. Typically, both the mobile and the base station use beam codebooks that contain pre-calculated phase weights yielding beams in specific directions. Employing these weights, the array steers the beam in the desired directions. The mobile and the base station use beams from their respective codebooks both to communicate along the LoS path and to discover NLoS paths. The number of measurements required to identify at least one NLoS path is proportional to the number of beams in the codebook.
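To make the notion of codebook phase weights concrete, the following sketch computes progressive-phase steering weights for a uniform linear array and estimates the broadside half-power beamwidth of a 32-element, half-wavelength-spaced array (a few degrees, consistent with the order of magnitude quoted above). This is a generic textbook construction, not the codebook of the testbed or of any particular base station.

```python
import numpy as np

def steering_weights(n_elem, spacing_wl, steer_deg):
    """Progressive phase weights that steer an n-element uniform linear array
    towards `steer_deg` from broadside; element spacing is in wavelengths."""
    n = np.arange(n_elem)
    return np.exp(-1j * 2*np.pi * spacing_wl * n * np.sin(np.radians(steer_deg)))

def array_factor_db(weights, spacing_wl, angles_deg):
    """Normalized array-factor magnitude (dB) over a set of observation angles."""
    n = np.arange(len(weights))
    af = np.array([np.abs(np.sum(weights * np.exp(1j*2*np.pi*spacing_wl*n*np.sin(np.radians(a)))))
                   for a in angles_deg]) / len(weights)
    return 20 * np.log10(np.maximum(af, 1e-6))

angles = np.linspace(-90, 90, 3601)
af = array_factor_db(steering_weights(32, 0.5, 0.0), 0.5, angles)   # 32 elements, lambda/2 spacing
main = angles[af >= -3.0]                                           # -3 dB main-lobe region
print(f"broadside half-power beamwidth ~ {main.max() - main.min():.1f} deg")
```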
Performing a complete environment scan is not only resource intensive but it also entails usage of a number of signal measurement opportunities. For the mobile to scan the environment, the base station has to allocate measurement opportunities while catering to data demands of entire network. The mobile needs measurement schedules until it discovers at least one NLoS path. The number of measurements depends on angular resolution of spatial scan.
Link blockage, especially by pedestrians is a sudden and unpredictable event. The mobile must always have in hand at least one NLoS direction, that it can use to avoid an outage when a blockage occurs. As the user moves, the environment changes, so the mobile needs to perform frequent environment scans.
For optimal link performance, both the base station and the mobile must continually adapt their respective directional radio beams to maintain highly aligned LoS beams. During user mobility, beam adaptation is quite challenging and requires several measurement opportunities. Also, to discover NLoS paths, the mobile needs significant additional measurement opportunities. Every time the base station adapts the transmit beam to counter user mobility, the mobile needs to perform a full spatial scan to discover new NLoS paths. As transmit beam adaptation happens frequently during user mobility, the NLoS path discovery process is performed often. The frequent ambient scans reduce the overall network throughput, as the base station is responsible for scheduling measurement opportunities for all the users.
If the mobile does not have an NLoS path in its memory during a blockage event, then a link outage occurs and the mobile gets disconnected from the base station. The mobile will then need to perform the initial network access procedure just as though it were a new user. Similarly, the mobile will also need to perform a similar procedure to hand over to a neighboring base station. Base stations periodically sweep directional beams carrying reference signals and broadcast information such as cell and network identity. A mobile sweeps through all its receive beams one at a time to discover at least one of the base station's beams. To complete the bi-directional connection, the mobile transmits a random preamble in the same direction in which it discovered the base station's beam, and awaits a response. After physical layer procedures to establish reliable data communication, the network authenticates the mobile before granting network access. This complete procedure takes several seconds [9].
In indoor environments, [10], [11], [20], [22] have suggested harvesting NLoS paths to preserve the link between the base station and the mobile during transient blockage events. For outdoor environments, in contrast, dense base station deployment and switching base stations in case of blockage has been suggested [22], [23].
However, outdoors too, there can be strong ground reflections. In fact, such reflections are used to shape glide paths for aircraft instrument landing systems [12]. Outdoor reflections have also been investigated for communication [1], [13]. Motivated by this possibility, we have conducted extensive experiments in the 60 GHz band and have observed that mm-wave signals are reflected from outdoor surfaces such as concrete and gravel. As base stations are usually deployed with a slight downward tilt, as shown in Figure 1, and are equipped with phased arrays that steer beams, the mobile's receiver can capture these ground reflections. The ground reflections are found in the same azimuth direction as the LoS path. To investigate whether such reflections are usable during blockage events, we performed link measurements with humans blocking the LoS link between the base station and the mobile. We found that even in the presence of a human blocker, there is a ground reflection with an RSS that is within 6 dB of the RSS of a direct, unblocked LoS link. We repeated the experiments with different surfaces and found that in some scenarios ground reflections are even strong enough to handle limited data plane traffic. Our experiments and observations are detailed in Section IV.
IV. MEASUREMENTS
To measure the signal strength of ground reflections at mm-wave frequencies, in particular at 60 GHz, we performed measurements in environments with commonly found ground surfaces. We conducted experiments using software-defined radios operating at 60 GHz [24]. Baseband IQ sample generation at transmitter and signal processing at the receiver are implemented in FPGA. An analog baseband signal of 2 GHz bandwidth is upconverted to 60 GHz carrier frequency. A 12 element phased array is used both at the transmitter and receiver.
The phase weights for the desired radiation patterns are calculated and stored as beam codebooks. Our beam codebook has 25 beams, with narrow beams of width approximately 18°, within a 120° azimuth sector. Further details on the transceiver design and implementation are available in [10], [25]. Figs. 2 and 3 present the azimuth and elevation radiation patterns of the boresight beam. The zenith beamwidth is around 60°, whereas the azimuth beamwidth is, as noted above, 18°. The transmit power is fixed at 20 dBm. The directivity gain of the phased array is 17 dB. On each surface under study, we set up the transmitter array 2.5 m above ground level using a tripod, with the receiver antenna array held about 1 m from the surface. The transmitter and receiver arrays are positioned facing each other and are placed 6 m apart. For each scenario, we repeated the experiments for two different cases, in which the transmitter antenna is tilted towards the ground by 10° or 20°. This geometry mimics potential outdoor deployments where base stations are located higher than mobiles. The tilt is responsible for creating additional reflected directions towards the receiver. Moreover, most of the elevation beamwidth is directed towards the receiver.
While the transmitter beam is in the LoS direction of the receiver, the signal strength at the receiver is measured using a beam that is highly aligned with the transmitter beam. We use RSS_LoS to represent the signal strength in the LoS direction at the receiver. It serves as a reference to calculate the total loss suffered by the ground reflection. RSS_LoS in our experiments is −60 dBm. When a human obstructs the LoS direction by standing between the transmitter and receiver, the RSS is −78 dBm, which is the noise floor of our receiver. This indicates that a pedestrian can completely block the signal. Although pedestrian blockage is transient, an undesirable outage event occurs at the receiver.
Let H_T and H_R be the heights of the transmitter array and receiver array above ground level, respectively. D_TR denotes the distance between transmitter and receiver, and H_B is the height of a human blocker. The blocker can obstruct the transmission only when she is close to the receiver. Using ray tracing, we can derive the maximum distance D_BRmax between the blocker and the receiver at which she obstructs the LoS transmission. For a blocker height H_B of 1.78 m and D_TR = 6 m, D_BRmax was found to be 3.12 m in our experiments. Table I presents RSS_GR averaged over 100 measurements from an indoor surface with concrete tiles. Tables II and III show the RSS from reflections outdoors from concrete and gravel pathways.
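The display form of the D_BRmax relation was lost in extraction; by similar triangles on the straight LoS ray it reads D_BRmax = (H_B − H_R)·D_TR/(H_T − H_R), which reproduces the quoted 3.12 m for the measurement geometry (H_T = 2.5 m, H_R ≈ 1 m, H_B = 1.78 m, D_TR = 6 m). A short check:

```python
def max_blocking_distance(h_tx, h_rx, h_blocker, d_txrx):
    """Largest blocker-to-receiver distance at which a pedestrian of height
    h_blocker intersects the straight LoS ray from a transmitter at height
    h_tx to a receiver at height h_rx separated by d_txrx (similar triangles;
    our reconstruction of the relation whose display form was lost above)."""
    return (h_blocker - h_rx) * d_txrx / (h_tx - h_rx)

# Values from the measurement setup: TX at 2.5 m, RX at ~1 m, 6 m apart, 1.78 m blocker
print(max_blocking_distance(2.5, 1.0, 1.78, 6.0))   # -> 3.12 m, matching the text
```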
When both the transmitter and receiver phased arrays are parallel to the ground surface, the only ground reflection available to the receiver comes from the radiation in one half of the elevation beamwidth of the transmitter beam. In our case, the elevation beamwidth is 60°, so the radiation in the bottom half of the beamwidth reflects from the ground. To capture the reflection in this position, the receiver needs to tilt its beams towards the ground while maintaining LoS in azimuth. We observed a slightly lower RSS_GR in this position compared to the positions where the transmitter array is tilted downwards, as the gain of the beam pattern decreases towards the edge of the beamwidth. When the transmitter array is tilted towards the ground, directions with stronger incident radiation get reflected, resulting in a higher RSS. The highest RSS_GR observed on all 3 surfaces under study is around −64 dBm. This implies that the ground reflected radiation is just 4 dB weaker than LoS. RSS_GR is at least 6 dB higher than the RSS of NLoS paths [10], [26]. It is also important to note that ground reflected radiation takes a shorter path to reach the receiver than NLoS paths from the environment.
Based on the experiments, the following are our main observations:
• Pedestrian blockers can create mm-wave link outage.
• Strong ground reflections are available in outdoor environments.
• Ground reflections are available in the same azimuth LoS direction at the receiver.
• Tilting the transmitter towards the ground provides the receiver with even stronger ground reflections.
• Finally, and most importantly, there is no need to hand over to a neighboring base station in outdoor environments during transient blockage events.
V. PROTOCOL TO OVERCOME PEDESTRIAN BLOCKAGE
From our measurement experiments, we observe that pedestrian blockers block mm-wave transmissions completely, resulting in link outages. To avoid the need for network reconnection, mm-wave devices must remain connected to the network during blockage events. Prior works [10], [11], [20] have identified that by adapting both transmitter and receiver beams in the direction of NLoS paths, devices can continue to communicate during the blockage. However, due to the low RSS on NLoS beams, only control plane information can be exchanged reliably. Also, there are several challenges in identifying NLoS paths, as mentioned in Section III. Our experiments indicate that ground reflections provide very good RSS, so mobiles can even continue a limited amount of data plane traffic during LoS blockage. Based on our observations in Section IV, we present a protocol that mobile devices can follow to recover link signal strength using ground reflections. The protocol is simple and only requires mobiles to make link strength measurements to identify the direction of ground reflections. The protocol is environment agnostic as it relies only on in-band information, specifically RSS. The state machine of the protocol is presented in Fig. 4 and is explained below.
During the initial access procedure in directional cellular networks in 5G New Radio, first the mobile discovers the base station beams by sweeping through all its receive beams. It chooses the beam with the highest RSS, and sends a random preamble. The beam with the highest RSS usually is in the line of sight with the base station. The mobile then performs a full spatial scan using its beams. To discover the LoS beam, it may choose to perform either exhaustive search or hierarchical [27] or compressive sensing based approaches [28]. We denote the state where the mobile performs initial access as IA. In IA, the mobile discovers the LoS beam to communicate with the base station.
By transmitting random access preamble in the same direction as discovered in the base station beam, the mobile implicitly informs the base station which LoS beam to use to continue communication. Subsequently, as the mobile moves, the base station and the mobile adapt their respective LoS beams to counter user mobility. This adaptation process is called beam alignment, and several protocols [11], [27], [29] have been proposed to adapt the base station and mobile beams during user mobility. Both base station and mobile switch their beams at appropriate times and maintain LoS beams throughout the user mobility. In the beam adaptation state (BA), the mobile continues to maintain an LoS beam during mobility.
Pedestrians can suddenly block the LoS link, in which case the RSS drops abruptly [10], [30]. At this point, with our protocol, neither NLoS path discovery nor handover is required at the mobile.
We observe from the experiments that the mobile needs to switch only elevation angle to receive ground reflections, while the azimuth direction remains in the LoS direction. Before the blockage event, the BA state ensures that the mobile is communicating with the base station using LoS beams. Communication using LoS beams is necessary to receive the highest possible signal strength and reach optimal link throughput. Any good beam alignment algorithm [11], [27], [29] ensures that the base station and mobile will use LoS beams to communicate. As observed from our experiments, the beam that captures the ground reflections is a neighbor beam to LoS beam. Therefore a mobile connected to the base station needs to switch its current receive beam only in elevation angle to discover the presence of ground reflections.
Suppose that for a base station transmit beam B_TL, the highly aligned LoS beam of the mobile is B_RL, and the mobile receives the ground reflection using beam B_GR. To receive the ground reflection, the mobile need not perform a beam scan in azimuth: the azimuth direction of B_GR is the same as that of B_RL. Moreover, the ground reflected path between base station and mobile can be received by a neighboring beam of B_RL, i.e., the mobile can identify B_GR by switching its receive beam to an upward or downward neighbor of B_RL. In the signal domain, the orientation of the mobile is not available; therefore it is not possible to know whether the upward or the downward neighbor of B_RL can receive the ground reflection, so the mobile makes measurements on both neighbors. Let Θ_T be the tilt of the base station and φ_T the elevation beamwidth of B_TL. Using a ray tracing approach, we can show that the elevation angle of B_GR is Θ_T + φ_T/2 away from the elevation angle of B_RL. For example, given Θ_T = 0° and φ_T = 30°, the mobile needs to search neighbors within 30° of B_RL. This search is performed in the Ground Reflection Discovery (GRD) state. Once B_GR is discovered, the mobile stores it in its memory and uses it in the Reflected Beam Operation (RBO) state. The mobile continues to operate in the Normal Operation state (N.Op) after connecting to the base station, until beam adaptation becomes necessary. Fig. 4 shows the state machine of the protocol.
Figure 4: Protocol State Machine
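To make the GRD elevation search concrete, the following minimal sketch (ours, not the authors' code; function names and the degree-based convention are illustrative assumptions) computes the Θ_T + φ_T/2 offset from the base-station tilt and elevation beamwidth, and returns the two candidate neighbor elevations the mobile must probe:

def grd_elevation_offset(theta_t_deg: float, phi_t_deg: float) -> float:
    """Elevation separation (degrees) between the LoS beam B_RL and the
    ground-reflection beam B_GR, per the ray-tracing argument in the text."""
    return theta_t_deg + phi_t_deg / 2.0

def grd_candidates(los_elevation_deg: float, theta_t_deg: float, phi_t_deg: float):
    """Upward and downward neighbor elevations to probe in the GRD state."""
    off = grd_elevation_offset(theta_t_deg, phi_t_deg)
    return los_elevation_deg + off, los_elevation_deg - off

# Example: base-station tilt 0 deg, elevation beamwidth 30 deg, LoS beam at 0 deg elevation
print(grd_candidates(0.0, 0.0, 30.0))   # (15.0, -15.0)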
As an illustration of how the protocol functions, the following is one plausible sequence of state changes:
VI. PERFORMANCE COMPARISON
We compare our protocol with two other works, BeamSpy [20] and Unblock [10], which implemented their respective protocols on 60 GHz testbeds. We implemented Unblock and our protocol on the testbed described in Section IV. For BeamSpy [20], we simply quote its published results, since reproducing them requires a quasi-omnidirectional transmitter to which we have no access. Our protocol needs three measurements to identify the ground reflection: the mobile measures the signal strength on the upward and downward elevation neighbors of its current receive beam, and then makes a final measurement on whichever neighbor gave the higher signal strength. Table IV compares the number of measurements required by each method to discover an NLoS path, as well as the underlying algorithmic complexity. In the indoor environment, the RSS obtained from the ground reflection is 6 dB higher than that obtained with the beam discovered by Unblock [10].
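A minimal sketch of this three-measurement procedure follows (ours, not the authors' implementation; measure_rss is a hypothetical driver callback returning the RSS for a given receive-beam index, and beams are indexed by azimuth and elevation steps):

def discover_ground_reflection(current_beam, measure_rss, elevation_step=1):
    """Probe the upward and downward elevation neighbors of the current receive
    beam and confirm the stronger one; returns the chosen beam and its RSS."""
    az, el = current_beam
    up = (az, el + elevation_step)      # measurement 1: upper elevation neighbor
    down = (az, el - elevation_step)    # measurement 2: lower elevation neighbor
    readings = {up: measure_rss(up), down: measure_rss(down)}
    best = max(readings, key=readings.get)
    return best, measure_rss(best)      # measurement 3: confirm the stronger beam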
VII. CONCLUSION
In this work, we have demonstrated that ground reflections can rescue mm-wave links during random and unpredictable pedestrian blockage events. We have presented a simple in-band protocol that mobiles can use to discover ground-reflected radiation and recover from temporary pedestrian blockages without outage. | 2021-10-27T01:15:52.916Z | 2021-10-26T00:00:00.000 | {
"year": 2021,
"sha1": "16de75e4ec7934ef5526895240fc14dea87a801f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "16de75e4ec7934ef5526895240fc14dea87a801f",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
16802258 | pes2o/s2orc | v3-fos-license | (-)-Epigallocatechin-3-Gallate Protects against NO-Induced Ototoxicity through the Regulation of Caspase-1, Caspase-3, and NF-κB Activation
Excessive nitric oxide (NO) production is toxic to the cochlea and induces hearing loss. However, the mechanism through which NO induces ototoxicity has not been completely understood. The aim of this study was to gain further insight into the mechanism mediating NO-induced toxicity in auditory HEI-OC1 cells and in ex vivo analysis. We also elucidated whether and how epigallocatechin-3-gallate (EGCG), the main component of green tea polyphenols, regulates NO-induced auditory cell damage. To investigate NO-mediated ototoxicity, S-nitroso-N-acetylpenicillamine (SNAP) was used as an NO donor. SNAP was cytotoxic, generating reactive oxygen species, releasing cytochrome c, and activating caspase-3 in auditory cells. NO-induced ototoxicity also mediated the nuclear factor (NF)-κB/caspase-1 pathway. Furthermore, SNAP destroyed the orderly arrangement of the 3 outer rows of hair cells in the basal, middle, and apical turns of the organ of Corti from the cochlea of Sprague–Dawley rats at postnatal day 2. However, EGCG counteracted this ototoxicity by suppressing the activation of caspase-3/NF-κB and preventing the destruction of hair cell arrays in the organ of Corti. These findings may lead to the development of a model for pharmacological mechanism of EGCG and potential therapies against ototoxicity.
Introduction
Nitric oxide (NO) plays essential roles in the physiological functions of the inner ear, including regulation of neurotransmission and blood flow [1]. Recently, accumulating evidence has suggested that excessive NO production may cause hearing impairment [2,3]. Noise-induced hearing loss can be caused by increased NO production in the inner ear, leading to auditory cell destruction [4][5][6]. Previous studies have suggested that treating animals with ascorbic acid, an agent that attenuates noise-induced hearing loss, reduces the concentration of NO [7]. These results indicate that excessive NO production may play an important role in pathological damage to the cochlea and elevated hearing thresholds. Although the correlation between hearing loss and NO production has been described in vitro and in vivo, the mechanism through which NO mediates ototoxicity has not been completely understood.
Apoptosis is a process involving genetically regulated programmed cell death that plays an essential role in the development and homeostasis of higher organisms [8]. Mitochondria, the central coordinators of apoptotic events, are involved in the intrinsic pathway of apoptosis [9]. Mitochondria induce apoptosis by increasing mitochondrial membrane permeability and producing reactive oxygen species (ROS) [10]. A key event in apoptotic signaling is the release of pro-apoptotic proteins, including cytochrome c (cyt c) and apoptosis-inducing factor (AIF), from the mitochondrial intermembrane space [11,12]. Once released, these proteins promote apoptosis through the activation of both caspase-dependent and caspase-independent pathways [13].
Nuclear factor kappa B (NF-κB) has been implicated in the regulation of proliferation, survival, angiogenesis, apoptosis, and differentiation [14][15][16]. In the nucleus, NF-κB activates genes that regulate apoptosis and respond to inflammation and oxidative stress [17,18]. Many studies have reported the role of NF-κB in hearing loss. Ototoxic stimulants, such as noise exposure and ototoxic drugs, can induce NF-κB activation [19,20], resulting in insults to the cochlear lateral wall via the production of high levels of ROS [21][22][23]. Acoustic overstimulation also increases the expression of inflammatory factors through NF-κB activation in the inner ear [24].
Caspase-1, a member of the caspase family that contains large prodomains [25], is involved in apoptosis and inflammation [26]. Caspase-1 activation induces inflammation via the production of pro-inflammatory cytokines [27], and caspase-1 also plays an important role in cisplatin-induced apoptosis in cochlear hair cells and spiral ganglion neurons [28]. However, the relationship between NO and caspase-1 activation in auditory cells has not yet been described.
Green tea, which contains a wide range of catechins, has a variety of modulatory effects on physiological functions, including antibacterial, radical scavenging, and antioxidant activities. Green tea also has a protective effect on the gastric mucosa and has been implicated in the prevention of atherosclerosis [29]. Epigallocatechin-3-gallate (EGCG), a major component of tea catechins, inhibits allergic reactions [30,31] and penetrates the blood-brain barrier, making it a promising candidate for the treatment of neurodegenerative disorders. However, the otoprotective effects of EGCG in the context of NO damage remain unknown.
The overall aim of this study was to gain further insight into the mechanism of NO-induced toxicity in auditory HEI-OC1 cells and in ex vivo analysis. We also examined whether and how EGCG regulates NO-induced auditory cell damage. The specific aims were as follows: (I) to examine the effects of NO on cell death, ROS generation, mitochondrial membrane potential (MMP) loss, cyt c release, caspase-3 activation, and NF-κB/caspase-1 activation in HEI-OC1 cells; (II) to investigate NO-induced damage to the arrangement of cochlear hair cells in the basal, middle, and apical turns of the organ of Corti from rats; and (III) to investigate the protective effects of EGCG against NO-induced ototoxicity both in vitro and ex vivo.
Cell culture
The HEI-OC1 cell line was a gift from Dr. Federico Kalinec (House Ear Institute, CA, USA). HEI-OC1 cells express several molecular markers that are characteristic of sensory cells of the organ of Corti, including thyroid hormone, brain-derived neurotrophic factor, calbindin, calmodulin, Connexin 26, Math 1, Myosin 7a, organ of Corti protein 2, tyrosine kinase receptor B and C, platelet-derived growth factor receptor, and prestin. HEI-OC1 cells are also extremely sensitive to ototoxic drugs [32]. The cells were maintained in DMEM with 10% FBS at 33°C under 5% CO2 in air.
Ethics statement
All animal procedures and experiments were approved by the Animal Ethics Committee of Wonkwang University (approval number WKU10-038).
Organ of Corti explant cultures
Organ culturing procedures were similar to those described previously [33]. Sprague-Dawley rats were killed on postnatal day 2, and their cochleas were carefully removed by dissection. The basal, middle, and apical turns of the cochlea were used for further studies. Cochlear explants were treated with DMEM containing 10% FBS, SNAP, and EGCG (Sigma), or a combination of these, and incubated for 24 h at 33°C. The culture was then prepared for histological analysis. Organ of Corti explants were fixed for 15 min in 4% paraformaldehyde in phosphate-buffered saline (PBS). The specimens were rinsed in PBS, incubated in 0.25% Triton X-100 for 2 min, and immersed in tetramethylrhodamine isothiocyanate (TRITC)-labeled phalloidin (Sigma; 1:100 diluted) in PBS for 20 min. After rinsing with PBS, the specimens were examined by fluorescence microscopy with the appropriate filters for TRITC (excitation: 510-550 nm; emission: 590 nm).
MTT assay
To investigate the effects of NO on cell viability, SNAP was used as an NO donor. Cell viability was determined using the MTT assay as previously described [34]. Briefly, the cells (3 × 10^5 cells/well) were exposed to various concentrations of SNAP (250-500 μM) or treated with SNAP at a constant concentration (500 μM) for varying periods (4-24 h). MTT solution (5 mg/mL in PBS) was added (50 μL/well), and the plates were further incubated for 4 h at 33°C. Precipitated formazan crystals were dissolved by adding DMSO. Absorption was measured using a spectrometer (Molecular Devices, Sunnyvale, CA, USA) at 540 nm.
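As a rough illustration of how such absorbance readings are converted to viability (our sketch, not from the paper; the triplicate values are hypothetical), relative viability is simply the blank-corrected mean of treated wells expressed as a percentage of the control mean:

import statistics

def percent_viability(treated_a540, control_a540, blank_a540=0.0):
    """Mean viability (%) of treated wells relative to control wells."""
    t = statistics.mean(a - blank_a540 for a in treated_a540)
    c = statistics.mean(a - blank_a540 for a in control_a540)
    return 100.0 * t / c

# Hypothetical triplicate readings at 540 nm
print(percent_viability([0.42, 0.45, 0.40], [0.81, 0.79, 0.83]))  # ~52%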
Flow cytometric analysis of MMP
MMP was measured using the fluorescent probe 3,3′-dihexyloxacarbocyanine iodide (DiOC6; Invitrogen, Carlsbad, CA, USA). DiOC6 uptake by mitochondria is directly proportional to membrane potential. Staining intensity decreases when reagents disrupt the MMP; quantification is based on the depolarized mitochondrial membranes. Briefly, the cells (1 × 10^6 cells/dish) were cultured in the presence or absence of SNAP (250-500 μM). After trypsinization, the cells were washed in PBS, stained with DiOC6, and analyzed by flow cytometry.
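A minimal gating sketch follows (ours, not the authors' analysis; the fluorescence distributions and the 5th-percentile gate are hypothetical), showing how the fraction of cells with depolarized mitochondria could be estimated from per-cell DiOC6 fluorescence:

import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(1000, 150, 10_000)   # hypothetical DiOC6 intensities, control cells
treated = rng.normal(650, 200, 10_000)    # hypothetical intensities after SNAP exposure

threshold = np.percentile(control, 5)      # gate set at the 5th percentile of the control
pct_depolarized = 100 * np.mean(treated < threshold)
print(f"{pct_depolarized:.1f}% of treated cells fall below the control gate")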
Spectrofluorimetric measurement of intracellular ROS generation
Intracellular ROS levels were measured using the fluorescent dye 2′,7′-dichlorofluorescein diacetate (DCFH-DA). In the presence of an oxidant, DCFH is converted to a highly fluorescent molecule, 2′,7′-dichlorofluorescein (DCF). Cells were incubated with 500 μM SNAP for varying times, and then incubated for 30 min with 5 μM DCFH-DA. Fluorescence intensity was measured using a spectrofluorometer (Shimadzu Corporation, Japan) at excitation and emission wavelengths of 485 and 538 nm, respectively.
Flow cytometry analysis
For measurement of intracellular ROS levels by flow cytometry, the oxidation-sensitive probe DCFH-DA was used. NO levels were assessed using the fluorescent NO probe DAF-2/DA. Briefly, cells were incubated with 10 μM DAF-2/DA for 30 min. For flow cytometry analysis, cells were detached by trypsinization, washed once in PBS, and resuspended in 800 μL PBS. Flow cytometric analyses (10,000 events per sample) were performed on a FACSCalibur system (BD Biosciences) with excitation and emission wavelengths of 485 and 538 nm, respectively, and results were evaluated with CellQuest software.
Assay of caspase-3 and caspase-1 activity
Enzymatic activities of caspase-3 and caspase-1 were assayed using a caspase colorimetric assay kit (R&D Systems) according to the manufacturer's protocol. Briefly, the cells were pretreated with 50 μM EGCG, treated with 500 μM SNAP for 24 h, and then lysed. The lysed cells were centrifuged at 14,000 rpm for 5 min. Protein-containing supernatants were incubated with 50 μL reaction buffer and 5 μL caspase substrates (caspase-1 or caspase-3) at 37°C for 2 h. Absorbance was measured using a plate reader at a wavelength of 405 nm. Protein was quantified using a bicinchoninic acid protein quantification kit (Sigma).
Nitrite accumulation
Since NO itself is unstable, NO production was determined by the measurement of nitrite, a stable oxidation product of NO. Nitrite released into the media at various time points was measured by spectrophotometric assay based on the Griess reaction [35].
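As a rough illustration of this quantification step (our sketch, not the authors' code; the sodium-nitrite standard concentrations and absorbance values are hypothetical), nitrite in the medium can be estimated by interpolating sample absorbance on a linear standard curve:

import numpy as np

std_conc = np.array([0, 5, 10, 25, 50, 100])                 # µM nitrite standards (hypothetical)
std_abs = np.array([0.02, 0.07, 0.12, 0.27, 0.52, 1.01])     # hypothetical readings at ~540 nm

slope, intercept = np.polyfit(std_conc, std_abs, 1)           # absorbance = slope*conc + intercept

def nitrite_um(sample_abs):
    """Estimated nitrite concentration (µM) for a sample absorbance."""
    return (sample_abs - intercept) / slope

print(round(nitrite_um(0.33), 1))   # µM nitrite in a hypothetical sample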
Preparation of nuclear and cytoplasmic extracts
Nuclear and cytoplasmic extracts were prepared as described previously [36]. Briefly, after cell activation, cells were washed with ice-cold PBS and resuspended in 60 μL buffer A. Pellets containing the nuclei were resuspended in 40 μL buffer B (50 mM HEPES/KOH, 50 mM KCl, 300 mM NaCl, 0.1 mM EDTA, 10% glycerol, 1 mM DTT, and 0.5 mM PMSF, pH 7.9), left on ice for 20 min, and inverted. Nuclear debris was removed by centrifugation at 15,000 × g for 15 min. Supernatants (nuclear extracts) were collected, frozen in liquid nitrogen, and stored at −70°C until analysis.
Western blot analysis
To analyze caspase-3, IκB-α, cyt c, caspase-1, Bcl-2, and NF-κB levels, western blot analysis was performed. The cells were rinsed with ice-cold PBS and lysed with lysis buffer (1% Triton, 1% Nonidet P-40, 0.1% sodium dodecyl sulfate [SDS], and 1% deoxycholate in PBS). Supernatants were mixed with an equal volume of 2× SDS sample buffer, boiled for 5 min, and separated on 10% SDS-polyacrylamide gels. After electrophoresis, the proteins were transferred to nylon membranes by electrophoretic transfer. Membranes were blocked for 2 h in 5% skim milk, rinsed, incubated overnight at 4°C with primary antibodies, and washed in PBS/0.5% Tween 20 (PBST) to remove excess primary Abs. Membranes were then incubated for 1 h with horseradish peroxidase-conjugated secondary Abs (anti-mouse, anti-goat, or anti-rabbit). After 3 washes in PBST, protein bands were visualized using an enhanced chemiluminescence assay (Amersham).
Cytokine assay
IL-1β secretion was measured using a modification of the enzyme-linked immunosorbent assay (ELISA) described previously [37]. Ninety-six-well plates were coated with 100-μL aliquots of anti-mouse IL-1β monoclonal Abs at 1.0 μg/mL in PBS (pH 7.4) and incubated overnight at 4°C. The plates were then subjected to additional washes, and 100 μL of the cell medium or IL-1β standard was added and incubated at 37°C for 2 h. The wells were washed, followed by addition of 0.2 μg/mL biotinylated anti-mouse IL-1β at 37°C for 2 h. After washing the wells, avidin-peroxidase was added, and plates were incubated for 30 min at 37°C. The wells were washed again, and ABTS substrate was added. Color development was measured at 405 nm using an automated microplate ELISA reader. A standard curve was generated for each assay plate using a serial dilution of recombinant mouse IL-1β.
Transient transfection and luciferase assay
NF-κB luciferase reporter gene constructs (pNF-κB-LUC, a plasmid containing the NF-κB binding site; STANTAGEN, Grand Island, NY, USA) were transfected into HEI-OC1 cells using the transfection reagent Tfx-50 (Promega, Madison, WI, USA) according to the manufacturer's protocol. After 24 h, the culture medium was replaced, and the cells were stimulated with SNAP. Cells were harvested after a 4-h stimulation and washed in cold PBS. After lysis, luciferase activity was measured using a luciferase assay system (Promega), normalized against β-galactosidase activity, and expressed as fold induction relative to the control. All experiments were performed in triplicate and repeated 3 times.
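The normalization described here reduces to a simple ratio; the following minimal sketch (ours, with purely hypothetical raw readings) shows how reporter counts normalized to β-galactosidase activity are expressed as fold induction over the control:

def fold_induction(luc_treated, bgal_treated, luc_control, bgal_control):
    """NF-κB reporter activity, β-galactosidase-normalized, relative to control."""
    return (luc_treated / bgal_treated) / (luc_control / bgal_control)

# Hypothetical raw readings
print(fold_induction(luc_treated=5.4e5, bgal_treated=1.2,
                     luc_control=1.5e5, bgal_control=1.1))   # ~3.3-fold induction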
Transfection with small interfering RNA
Predesigned small interfering RNAs (siRNAs) targeting NF-κB (p65) and a nonspecific control were purchased from Santa Cruz Biotechnology. Briefly, cells were grown in 6-well plates and transiently transfected with 2 μg of NF-κB or control siRNA constructs mixed with X-tremeGENE siRNA transfection reagent (Roche Applied Science, Penzberg, Germany). After incubation at 33°C and 5% CO2 for 24 h, cells were treated with SNAP. Gene silencing was confirmed by western blot analysis.
NF-kB immunofluorescence
Cells were fixed with 4% paraformaldehyde and incubated with 5% bovine serum albumin (BSA) in PBS for 60 min. The preparation was incubated for 1 h at room temperature with NF-κB Abs diluted in 0.1% BSA (1:500). Next, the preparation was washed 3 times with PBS and exposed to secondary Abs (fluorescein isothiocyanate-conjugated anti-rabbit IgG at 1:200 in 0.1% BSA/PBS) for 60 min. For 4′,6-diamidino-2-phenylindole (DAPI) staining, cells were fixed and stained with 1 μg/mL DAPI, a DNA-specific fluorochrome, for 30 min in the dark. Fluorescent images were viewed using an Olympus confocal microscope (New Hyde Park, NY, USA).
Data analysis
Results were expressed as the mean ± SEM of 3 independent experiments, and statistical analyses were performed by one-way analysis of variance with Tukey and Duncan post hoc tests to assess differences between groups. All statistical analyses were performed using SPSS statistical analysis software. A P-value of less than 0.05 was considered statistically significant.
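For readers without SPSS, the same kind of one-way ANOVA with a Tukey post hoc comparison can be run in Python; the sketch below is ours, with hypothetical viability values for three groups, and is only meant to illustrate the analysis described above:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100, 98, 102])       # hypothetical % viability, untreated
snap = np.array([52, 55, 50])            # hypothetical % viability, SNAP
snap_egcg = np.array([81, 84, 79])       # hypothetical % viability, SNAP + EGCG

f, p = stats.f_oneway(control, snap, snap_egcg)
print(f"ANOVA: F={f:.1f}, p={p:.4f}")

values = np.concatenate([control, snap, snap_egcg])
groups = ["control"] * 3 + ["SNAP"] * 3 + ["SNAP+EGCG"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise Tukey comparisons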
Protective effects of EGCG against NO-induced destruction of hair cell arrangement in organ of Corti explants
To investigate the effect of NO on the arrangement of hair cells, organs of Corti isolated from rat cochlea at postnatal day 2 were treated with an NO donor (SNAP). In this study, SNAP was chosen as the NO donor because it is a pure NO releaser [38,39], as opposed to sodium nitroprusside (SNP), which also releases toxic ONOO− or cyanide ions. As shown in Fig. 1A, SNAP destroyed the orderly arrangement of the 3 outer hair cell (OHC) rows and the inner hair cell (IHC) row in the basal, middle, and apical turns. Relative cell viability is shown in Fig. 1B. SNP also affected the orderly arrangement of these hair cell rows (Fig. 1C). As shown in Fig. 1D, L-NAME (an inducible nitric oxide synthase [iNOS] inhibitor) inhibited cisplatin-induced destruction of the hair cell arrangement. In addition, we used the NO scavenger C-PTIO to confirm that NO mediated the effects of SNAP. As shown in Fig. 1E, C-PTIO inhibited SNAP-induced destruction of the hair cell arrangement. The ability of EGCG to prevent SNAP damage to the organ of Corti was also investigated. As shown in Fig. 1F, EGCG prevented SNAP-driven destruction of the arrangement of the 3 OHC rows and the IHC row. Fig. 1G shows relative hair cell viability.
Protective effects of EGCG against NO-induced cell death in HEI-OC1 cells
To investigate the effects of SNAP on cell viability and NO levels in vitro, HEI-OC1 auditory cells were treated with 500 μM SNAP for varying times. SNAP affected cell viability in both time- and dose-dependent manners (Fig. 2A and B). As shown in Fig. 2A, low doses of SNAP (<100 μM) did not affect cell viability. SNAP exposure also resulted in time-dependent cell damage along with a marked increase in NO production (Fig. 2C). Moreover, we found that EGCG exerted a significant protective influence against SNAP-induced cell death (Fig. 2D), thereby providing insight into the mechanism mediating NO-induced ototoxicity in auditory cells treated with SNAP for 24 h.
Protective effects of EGCG on NO-induced MMP loss in HEI-OC1 cells
To determine the effects of SNAP on mitochondrial membrane integrity, cells were incubated with SNAP (250-500 μM) for 24 h, after which MMP levels were measured as an index of mitochondrial membrane integrity. The cells were loaded with DiOC6, an MMP-dependent fluorescent probe, and the resulting fluorescence was measured by flow cytometry. Compared to the control, DiOC6 fluorescence intensity decreased following SNAP exposure (left-shifting of the cell distribution; Fig. 3A). Fig. 3B shows relative fluorescence levels as a function of SNAP concentration. Furthermore, we demonstrated that EGCG inhibited SNAP-induced MMP loss (Fig. 3C). Relative fluorescence levels are presented in Fig. 3D.
Protective effects of EGCG on NO-induced ROS generation in HEI-OC1 cells
To investigate the effects of SNAP on intracellular ROS generation, HEI-OC1 auditory cells were treated with 500 μM SNAP for various times. ROS production increased after SNAP exposure, but the effect became less pronounced as exposure time increased (Fig. 4A). EGCG effectively suppressed the SNAP-induced increase in ROS levels (Fig. 4B). We confirmed the effects of EGCG on ROS levels by using flow cytometry (Fig. 4C). Additionally, we used DAF-2/DA fluorescence to measure NO levels. As shown in Fig. 4D, EGCG significantly attenuated the SNAP-induced increase in NO levels in HEI-OC1 auditory cells.
Regulatory effects of EGCG on NO-induced apoptosis-related gene expression in HEI-OC1 cells
Western blot analysis was performed to assess the effects of SNAP on the release of cyt c into the cytosol. SNAP induced the release of cyt c into the cytosol, and EGCG inhibited this process (Fig. 5A). The relative quantity of cyt c was determined using an image analyzer (Fig. 5B). As shown in Fig. 5C, EGCG also inhibited the reduction in Bcl-2 levels induced by SNAP. Relative Bcl-2 expression is shown in Fig. 5D. Next, we performed western blotting and a caspase-3 activity assay to determine whether NO-induced apoptosis was associated with the regulation of caspase-3 activity. SNAP increased the expression of caspase-3 (active form), while EGCG effectively inhibited this increase (Fig. 5E). EGCG also attenuated the SNAP-induced increase in caspase-3 activity (Fig. 5F).
(Figure 8 legend: Rat organ of Corti explants were pretreated with 50 μM EGCG for 2 h, followed by treatment with 500 μM SNAP. After the explants were homogenized, caspase-1 levels were confirmed using a caspase-1 assay kit. All data represent the mean ± SEM of 3 independent experiments; #P<0.05 vs. control, *P<0.05 vs. SNAP alone. doi:10.1371/journal.pone.0043967.g008)
Protective effects of EGCG on NO-induced NF-κB signaling in HEI-OC1 cells
To determine the association of NO-induced apoptosis with the NF-κB pathway, we silenced endogenous NF-κB using specific siRNA. The siRNA effectively inhibited NF-κB expression in the nucleus relative to control cultures transfected with scrambled siRNA (Fig. 6A). As shown in Fig. 6B, knockdown of NF-κB was effective at inhibiting SNAP-induced caspase-3 activation (as an apoptosis marker). The siRNA transfections resulted in 52% and 48% knockdown of NF-κB and caspase-3, respectively (Fig. 6C). Based on these findings, we investigated the relationship between the protective mechanisms of EGCG and regulation of the NF-κB pathway. Our results revealed that SNAP induced the degradation of IκB-α in the cytosol and translocation of NF-κB into the nucleus; EGCG suppressed these SNAP-induced phenomena (Fig. 6D). Next, we performed a luciferase assay to investigate the effects of EGCG on NF-κB promoter activity. As shown in Fig. 6E, SNAP treatment enhanced NF-κB promoter activity, while EGCG pretreatment inhibited this SNAP-induced increase in NF-κB promoter activity. Immunofluorescent staining of NF-κB (green) and nuclei (blue) revealed that SNAP treatment caused translocation of NF-κB into the nucleus, while pretreatment with EGCG inhibited this phenomenon (Fig. 6F).
Protective effects of EGCG on NO-induced NF-κB activation in organ of Corti explants
Next, we investigated the regulatory effects of SNAP on NF-κB activation ex vivo. As shown in Fig. 7, treatment with SNAP induced NF-κB activation in the organ of Corti, and EGCG inhibited SNAP-induced NF-κB activation (red).
Protective effect of EGCG on NO-induced caspase-1 activation in HEI-OC1 cells and organ of Corti explants
We investigated whether NO-mediated ototoxicity occurred via the production of IL-1β and activation of caspase-1. As shown in Fig. 8A and B, SNAP induced IL-1β production and increased the levels of caspase-1 (cleaved form) in HEI-OC1 cells, while EGCG inhibited these effects. To confirm the effects of EGCG on caspase-1 activation ex vivo, we performed a caspase-1 activity assay in organ of Corti explants. The results demonstrated that SNAP induced caspase-1 activation, and this effect was again inhibited by EGCG (Fig. 8C).
Discussion
We have shown, for the first time, that EGCG is effective in preventing the destruction of hair cell arrays and apoptosis both in vitro and ex vivo. EGCG is also effective in counteracting ototoxicity by suppressing NF-κB and caspase-1 activation.
EGCG is the major constituent of green tea polyphenols and its most abundant and active polyphenolic compound, with potent biological properties, including antioxidant, hepatoprotective, chemopreventive, and anticarcinogenic effects. It has been reported that the active site of EGCG can react with oxygen free radicals, supporting the view that EGCG possesses potent antioxidant properties [40]. In addition, EGCG is a known inhibitor of the STAT1 transcription factor, which has been implicated in the production of ROS and the activation of caspase-3 in cisplatin-induced ototoxicity [41]. However, the effects of EGCG on NO-induced ototoxicity have not yet been established.
NO is a free radical that predominantly functions as a messenger and effector molecule. Many studies have suggested that free oxygen radicals can cause hearing impairment. Recent evidence suggests that excessive NO production plays an important role in pathological damage of the cochlea and elevated hearing thresholds [42]. The induction of apoptotic cell death by NO depends on its concentration and the cell type involved. High concentrations of NO donors have been shown to generate toxic concentrations of NO and induce apoptosis. However, NO treatment at lower, more physiological levels may often have a protective effect, preventing the onset of apoptosis in many mammalian cells [43,44]. In the current study, we found that higher concentrations of SNAP (>250 μM) induced auditory cell death, but low doses of SNAP (<100 μM) did not affect cell viability. This finding is consistent with other studies that have demonstrated the induction of Molt-4 cell death by high concentrations of SNAP [45]. Many studies have shown that the ototoxicity of cisplatin can be mediated by increased NO production in the inner ear, leading to auditory cell destruction. L-NAME, a competitive inhibitor of NOS, was shown to reduce cisplatin-induced hearing disturbances [46]. In this study, we confirmed that L-NAME suppressed cisplatin-induced hair cell destruction and iNOS expression in organ of Corti explants (data not shown), and we investigated the direct effects of NO and the protective effects of EGCG on hair cell death. NO destroyed the orderly arrangement of the 3 OHC rows and the IHC row in the basal, middle, and apical turns in organ of Corti explants, and EGCG abrogated NO-induced destruction of hair cell arrays. Additionally, an NO scavenger effectively inhibited NO-induced hair cell destruction. These results imply that a high concentration of NO is involved in ototoxicity and that this phenomenon can be counteracted by antioxidants.
In mammals, mitochondria act as the central checkpoint for many forms of apoptosis. The mitochondrial pathway is believed to be the main target for survival signaling pathways [47]. NO has been reported to interfere with the mitochondrial respiratory chain at several sites, resulting in increased generation of ROS that subsequently react with NO to form peroxynitrite, which in turn damages cells and leads to cell death. Mitochondrial alterations leading to mitochondrial membrane depolarization induce apoptosis by reduction of MMP and release of cyt c. Thus, we investigated NO-induced cell death, MMP loss, ROS generation, and cyt c release in auditory HEI-OC1 cells. The results revealed that NO-induced ROS production may lead to a decrease in MMP, which in turn increases mitochondrial membrane permeability and releases mitochondrial apoptogenic factors, such as cyt c, into the cytosol. This indicated that NO-induced apoptosis may occur through the mitochondrial pathway. Moreover, EGCG regulated the NO-mediated mitochondrial pathway in auditory cells. These findings demonstrate that the antiapoptotic effects of EGCG on NO-induced apoptosis may be related to its antioxidant potential and its ability to scavenge ROS. However, the mechanisms through which NO triggers other pathways in auditory cells were not examined in this study. Therefore, further studies are needed to identify nonmitochondrial signaling pathways in NO-induced ototoxicity.
Many recent studies have investigated the association between NF-κB activation and hearing loss. Some have suggested that NF-κB family proteins found in the inner ear are required for normal hair cell function [49], while others have reported that signal transduction pathways respond rapidly to ototoxic stimulants, such as noise exposure and ototoxic drugs [19,20]. The activation of NF-κB induces cochlear lateral wall insults by producing large amounts of ROS [21,23]. Acoustic overstimulation also increases the expression of inflammatory factors through NF-κB activation in the inner ear [24]. Despite the results of these studies, the functional role of NF-κB in hearing loss remains controversial.
Moreover, the ability of NO to regulate NF-κB can vary with cell type, NO concentration, and duration of exposure. Some studies have suggested that SNP induces NF-κB activation, as was demonstrated by cytosolic IκB-α phosphorylation and degradation in human periodontal ligament cells [50]. Others have reported that NO-induced apoptosis is a result of downregulation of NF-κB DNA-binding activity, as shown in J774 macrophages [51]. In this study, we sought to determine whether the cytotoxic effects of NO were exerted through the regulation of the NF-κB pathway. The results showed that NO induced the degradation of IκB-α in the cytosol and translocation of NF-κB to the nucleus in HEI-OC1 cells. To test this phenomenon ex vivo, we used rat organ of Corti explants to confirm that NO caused NF-κB activation. Silencing NF-κB with specific siRNA inhibited NO-induced apoptosis, and pretreatment with EGCG suppressed the degradation of IκB-α and translocation of NF-κB to the nucleus. These results suggested that the cytotoxicity of NO was mediated by NF-κB activation both in vitro and ex vivo. Accumulating evidence has shown that the association of NF-κB activation with apoptosis-related gene expression depends on cell type. Moreover, Bcl-2 proteins control the release of mitochondrial cyt c by regulating mitochondrial permeability. Recent studies have shown that NF-κB acts upstream of apoptosis-related genes, including Bcl-2 [52]. In this study, we found that treatment with an NO donor inhibited Bcl-2 expression. Bcl-2 is a marker for antiapoptotic activity and a product of one of the NF-κB target genes. Thus, we postulated that NF-κB may regulate apoptosis-related genes in NO-mediated cytotoxicity.
Caspases serve important functions in apoptosis and have been implicated in NO-induced cell death [48]. In this study, we demonstrated that NO enhanced caspase-3 activity, while EGCG attenuated caspase-3 activation in auditory cells. Therefore, the mechanism mediating NO-induced apoptosis in auditory cells may, at least in part, involve a caspase-dependent pathway. Although NO can induce apoptosis through a caspase-dependent pathway, the effects of NO on caspase-independent processes were not elucidated in the present study. Hence, further studies are needed to determine how NO influences translocation of AIF from the cytosol to the nucleus and how NO mediates caspase-independent apoptosis. Caspase-1 is an IL-1-converting enzyme involved in numerous biological processes, including apoptosis and inflammation. Work by Zhang et al. has indicated that caspase-1 triggers the release of cyt c and activation of caspase-3 in ischemia/hypoxia-mediated neuronal cell death [53]. Studies have also shown that cisplatin induces the activation of caspase-1 in cochlear hair cells and spiral ganglion neurons [28]. In this study, we found that NO treatment resulted in caspase-1 activation and IL-1β production, while EGCG inhibited the observed NO-induced increase in IL-1β production and caspase-1 activation, suggesting that the caspase-1 pathway is a potential therapeutic target for preventing NO-induced ototoxic damage. Receptor-interacting protein (RIP)-2, a specific adaptor, has been found to regulate the activation of caspase-1; the caspase activation and recruitment domains (CARDs) of RIP-2 bind to the CARD of the caspase-1 prodomain via CARD-CARD interactions, inducing caspase-1 activation. This RIP-2/caspase-1 interaction causes IKK phosphorylation and IκB-α degradation. Thus, NF-κB is released and translocates to the nucleus, where it induces gene transcription [54]. Caspase-1 may also contribute to NF-κB activation through the autocrine action of IL-1β [55]. From this, we postulated that the NF-κB pathway may be involved in caspase-1 activation in auditory cells. However, further studies will be needed to clarify the precise relationship between NF-κB and caspase-1 in NO-mediated ototoxicity. Furthermore, we demonstrated that the antiapoptotic mechanism of EGCG may be driven by the regulation of the signaling molecules that participate in the NO-mediated apoptotic process.
In conclusion, high levels of NO resulted in cell death, ROS generation, MMP loss, cyt c release, and caspase-3 activation in auditory cells. In addition, NO destroyed hair cells in the basal, middle, and apical cochlear turns in primary organ of Corti explants from rats. NO ototoxicity was mediated through the activation of NF-κB and caspase-1, and EGCG was effective in counteracting this ototoxicity by suppressing NF-κB and caspase-3 activation and preventing hair cell array destruction. This study therefore indicates that EGCG may be a beneficial agent for preventing or halting the progression of certain types of hearing loss. | 2016-05-12T22:15:10.714Z | 2012-09-28T00:00:00.000 | {
"year": 2012,
"sha1": "b58bf8998fcafee91593d5580e318698a1455083",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0043967&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b58bf8998fcafee91593d5580e318698a1455083",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
241504196 | pes2o/s2orc | v3-fos-license | The Challenges of Health Services at Senior Centers in an Urban South Korean Community: A Mixed-Method Approach, Focusing on the Role of Nurses
Background This study aimed to identify the current challenges of health services provided by senior centers in South Korea and to determine recommendations for improving service to community-dwelling older adults, focusing on the nurse's role. Methods Quantitative data were obtained from a survey of 30 nurses at senior centers in Seoul, South Korea. In addition, focus group interviews were conducted with 6 senior center nurses, 5 health experts, and 4 attendees of senior centers (n = 15). Content analysis was performed to analyze the qualitative data. Results The study results revealed several challenges, including insufficient health services; a lack of human resources; non-systematic and overlapping health services; and a lack of legal clarification of nurses' roles. Our recommendations for improvement are that senior centers should: focus on disease prevention and chronic disease management; be hubs to connect health and welfare services; empower the nurse's role and capacity; and establish legal regulation and adequate staffing for nurses. Conclusions These findings are important for enabling senior centers to play a key role in health promotion, disease prevention, and chronic disease management among older individuals in South Korea.
Previous studies have mainly examined senior center users' health service needs and satisfaction [15][16][17], or the outcomes of health service programs [7,8]. In addition, several studies have explored the role of senior centers and efforts to improve them [13,18]. However, no studies have yet focused on identifying current challenges with regard to the nurse's role in order to suggest possible improvements to senior center health services. Therefore, the purpose of this study was to identify challenges and possible recommendations for the utilization of senior center health services in an urban community, focusing explicitly on the role of senior center nurses.
Methods
This study used a cross-sectional, mixed-methods approach that combined quantitative survey data, which provides a general picture of the current status of health service at senior centers, and qualitative data from focus group interviews (FGIs), which facilitates a deeper understanding of the relevant challenges and recommendations. The study was approved by the institutional review board at a university in South Korea (IRB approval #2013-70). The researchers explained the study purpose and process to participants, and assured them of the anonymity and confidentiality of their data.
Written, informed consent was obtained from all participants.
Quantitative data collection
Out of a total of 59 senior centers in Seoul, 29 hired nurses. Of these, 28 centers hired one nurse and 1 center hired two. We conducted a survey of all 30 of these senior center nurses. The survey instrument was developed by the authors, based on a comprehensive literature review, with the aim of collecting quantitative data on nurses working for senior centers in Seoul. Twenty-five nurses completed the survey while attending a continuing education program at the Seoul Senior Center Association, and the remaining five nurses received the survey via mail. Participants' demographic information, educational profiles, experience, and qualifications were examined, and their roles in and perspectives on health services at senior centers were collected using a closed questionnaire survey.
Qualitative data collection
The participants were recruited via purposive sampling to explore the perspectives of various senior center stakeholders and provide a multi-faceted understanding of health service utilization in senior centers. To ensure the reliability of our data, we recruited people who are familiar with and experienced in senior centers. Six senior center nurses, five experts in health and welfare for older adults, and four senior center attendees were recruited to participate in the FGIs. Four FGIs were performed with two groups of senior center nurses, one group of experts in health and welfare for older adults, and one group of senior center attendees, respectively. The semi-structured FGI questions in this study were formulated based on previous research. Each FGI adhered to the Krueger and Casey method to ensure the auditability of the data [19]. The FGI questions were organized into initial, introduction, transition, major, and final stage questions. All groups participating in the FGIs were guided by the following major questions: What do you think the role of a senior center is?
What do you think the nurse's role in senior center is?
In what health services do senior center attendees mainly participate?
What do you think should be included in senior center health services?
What are the current challenges to senior center health services?
What is needed for improvement?
The FGIs were conducted until data saturation was reached. Each meeting (approximately 60-120 minutes in length) was digitally recorded, and detailed field notes were taken. After each interview, research team members debriefed the interviewees. These results were verified via repeat interviews and a follow-up meeting with three nurses who participated in the interviews.
Data Analysis
Quantitative data analysis was conducted using the SPSS 23.0 statistical software. The database was screened for normality, outliers, and missing data. Descriptive statistics were used to examine the current status of staffing, educational profiles, job experience, qualifications, nurses' roles, and utilization of health services.
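As an illustration of this screening step (our sketch, not the authors' SPSS syntax; the data frame below is hypothetical), the same checks can be expressed in a few lines of Python:

import pandas as pd

# Hypothetical survey extract: one row per senior-center nurse
df = pd.DataFrame({
    "age": [34, 41, 29, 52, 38],
    "years_worked": [2, 6, 1, 12, 4],
    "n_licenses": [1, 2, 1, 1, 2],
})

print(df.isna().sum())            # missing-data screen
print(df.describe())              # means, SDs, quartiles (descriptive statistics)
z = (df - df.mean()) / df.std()
print((z.abs() > 3).any())        # crude outlier flag per variable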
Graneheim and Lundman's content analysis method [20] was followed to analyze the qualitative data.
All recorded data were transcribed, with the final transcriptions checked against the digital recordings. The text was divided into meaning units, which were then condensed, abstracted, and labeled with codes. Differences and similarities between the codes were identified, and the codes were sorted into themes, which constituted the content. Ultimately, 291 codes were extracted, and 8 subthemes were identified under two main themes. To increase the credibility and dependability of the data analysis, the research team held regular meetings to review and analyze the results until consensus was reached.
Quantitative Findings
The senior center nurses (n = 30) were all women with an average age of 38.2 ± 8.5 years. Most participants were college graduates of three-year degrees (n = 15, 50.0%), followed by graduates of four-year degrees (n = 12, 40.0%). Most nurses held only one nursing license (n = 20, 60.6%), and those with multiple licenses were most commonly licensed social workers (n = 6, 20%). Most nurses had worked ≤ 3 years (n = 13, 43.3%), and a similar number had worked for > 5 years (n = 12, 40.0%). Regarding senior center health services (see Table 2), the perceived main goals of the senior center health services are chronic disease management (n = 14, 46.7%) and health screening and prevention (n = 8, 26.7%). The nurses perceived their key roles to be mainly chronic disease management (n = 11, 36.7%), health counseling (n = 10, 33.3%), and health education (n = 10, 33.3%). They noted that chronic disease management (n = 13, 43.3%) and health education (n = 7, 23.3%) are the most frequently provided services. The most utilized services by older attendees are health screening and prevention (n = 11, 36.7%) and chronic disease management (n = 8, 26.7%). The ways in which senior center health services were perceived to differ from those provided by other agencies are health education (n = 10, 33.3%), chronic disease management (n = 8, 26.7%), and health screening and prevention (n = 6, 20.2%).
With respect to their training needs, the participants responded that training in emergency treatment (n = 10, 33.3%), health program development (n = 9, 30.0%), and clinical physiology specific to older people (n = 9, 30.0%) is required.
Qualitative Findings
The participants' characteristics in each of the four focus groups are described in Table 3 (n = 15).
Senior center nurses who participated were aged 28-54 years and had worked for 1.8-12 years. The experts in health and welfare for older adults included a gerontological nursing professor, geriatric doctor, senior center director, and section chief, and they were aged 34-54 years. The senior center attendees were aged 70-94 years and all had used a senior center 1-7 times per week for > 5 years.
They participated in various activities such as yoga, fitness, table tennis, English classes, and diabetes self-management programs. In the qualitative data content analysis, two main themes were identified: "challenges to senior center health services" and "suggestions for future health services." Four subthemes were identified under each main theme.
First Main Theme: Challenges to Senior Center Health Services
Insufficient availability of health services. The senior center attendees confirmed that they had received health management assistance via various health services offered there. They were aware that senior centers are places in which basic health management services, such as blood pressure and blood sugar checks, are offered, and various health programs, such as those involving exercise and chronic disease management, are available. They stated that most senior center attendees were satisfied with the easy accessibility of health and welfare services. However, they were concerned that the current level of health services provided at senior centers falls far short of their needs, and they asked for an expansion of these services. This concern was voiced not only by senior center attendees but also by the nurse and expert groups.
The expert group stated that, despite the numerous recreational and leisure activities provided at senior centers for the elderly, the true priority for older adults is healthcare services.
Lack of legal clarification of nurses' roles. Senior centers in South Korea are defined by law as leisure facilities for older people; therefore, nurses are not considered mandatory staff members. However, because of increasing demand for health services, many of these services are provided by nurses without clarification of their roles or any relevant legal protection for the tasks performed. Hence, the senior center nurses expressed confusion regarding their roles and the limitations on their ability to provide continuous health services. The experts also voiced this concern as an issue to be addressed.
Although senior centers are expected to play an important role in screening high-risk older adults and managing chronic disease in the community, there are no clear regulations or legal protections for these nurses. As an example, a senior center nurse described her senior center as a "one-stop assessment system" in which nurses assessed patients to make referrals and liaised between senior centers and hospitals or public health centers.
Establish legal regulations and adequate staffing for nurses. The senior center nurses and experts noted the necessity of defining nurses as mandatory personnel in senior centers by law. They also pointed out that adequate staffing should be included as part of the legal regulations for nurses.
Further, most suggested that excessive work should be reduced so that nurses can focus on improving health service quality, although there may be differences between senior centers.
"The nurse is the only medical person who actually performs health services in a senior center. It doesn't make sense that nurses are not mandatory in senior centers, which offer various health services for older people. As the demand for health services increases, it is also essential to properly arrange the staffing of nurses to perform the work." (Expert 2) They also said that a standard manual and regulations for the nurse's role is needed. Without these components, confusion concerning the scope and limits of nurses' roles and responsibilities could arise, and, as a result, the improvement of health service quality may be adversely affected:
Discussion
This study was meaningful, in that the current perspectives of various stakeholders on the expansion of health services at senior centers were explored, focusing on the role that nurses play in their provision. The results reveal several vulnerabilities of and suggestions for health services at senior center in an urban South Korean community.
According to the findings, the attendees felt that health services at senior centers were insufficient, and they requested more and better services. These results are consistent with those of previous studies showing that the primary reason senior centers are used is the health care services they provide [3,15], and that older adults desire higher-quality health care services than any other services [4,15]. The rapid expansion of health services at senior centers has resulted in a serious lack of both medical staff and resources. The duplication of services provided by other agencies was identified as another problem, and these results are consistent with previous studies [12]. To use limited resources efficiently, it is necessary to clarify and focus on the center's main purpose and role in providing services, and to avoid indiscriminately expanding various health services already provided by other agencies. The expansion of the center's role should occur in consideration of the entire framework for older adults' health and welfare in South Korea. In that sense, it is significant that older adults, nurses, and experts all have a consistent view that the main purpose of health services in senior centers should be to prioritize health promotion, disease prevention, and chronic disease management.
Previous studies have suggested that health services are inefficient because of the lack of a referral system and of communication and mutual cooperation between the health and welfare systems in Korea [17,18]. In Korean communities in particular, medical services are provided mainly for acute diseases, and a system of referral or coordination of health services has not been properly established [21]. On this point, experts who participated in this study proposed that senior centers should be regional hubs that connect various services, particularly in health and welfare. Considering the general characteristics of older adults in South Korea, where both health and welfare services are in high demand, these services cannot be separated. Therefore, the collaborative and integral role of senior center nurses in providing these services is very important [9,11,22,23].
As shown in this study, although senior center health care needs are increasing and the quantity of health services offered is expanding, a shortage of personnel and resources remains. In both the quantitative and qualitative results, most of the senior center nurses complained of a lack of staffing and resources alongside an excessive workload.
At present, mandatory staff positions in senior centers include facility managers, social workers, physiotherapists, clerks, and cooks, but not nurses [24]. Nevertheless, about half of all senior centers employ nurses. This means that, contrary to the centers' initial purpose as recreational welfare facilities for older people when they were established over 30 years ago, nurses have been hired at directors' discretion as the demand for more specialized health services has increased [25].
According to a systematic review of health services provided at senior centers, the health service programs delivered by nurses are the most frequently used and the most effective [8]. However, given that nurses are not required personnel and that no regulations exist for their role in senior centers, they are limited in their capacity to provide more professional and qualified health services [25]. We suggest that one way to solve these problems would be to hire registered nurses (RNs) as mandatory senior center staff. Furthermore, utilizing existing geriatric nurse practitioners (GNPs) is another appropriate option [26]. Although 2,361 GNPs have been licensed in South Korea since 2006 [27], senior centers do not take full advantage of their advanced training and skills. In order to solve the staff shortage and improve the quality of senior center health services, nurses (either RNs or GNPs) should be included as mandatory personnel at the outset. Further, nurses' roles within senior center health services should be regulated, and a standard manual should be established.
In addition, educational and administrative resources are required to empower nurses and improve their competency. Both our quantitative and qualitative results demonstrate that most senior center nurses reported feeling very limited in terms of their personal competence and their access to the educational and administrative resources needed to perform their roles. This study showed that one third of them (10 out of 30) held certificates other than the RN license; eight had social worker qualifications, and two additionally held NP qualifications. These findings attest to the personal endeavors of individuals to gain the competency and resources needed for their work. Although these individual efforts should be encouraged, formal support should be provided to meet their educational and training needs, given that most nurses wish to improve their work through continuing education.
It has been over 30 years since senior centers were first established in South Korea. Since then, the elderly population has increased several times and interest in health has increased dramatically. In response to these changes, we suggest that regulations be introduced to designate nurses as mandatory medical personnel in senior centers, and to support a greater provision of educational and administrative resources to empower and increase the competence of nurses. Finally, we suggest that senior centers serve as regional hubs, focusing on health promotion, disease prevention, and chronic disease management. Through these interventions, we expect that senior centers may play the key role in improving and managing the health of community-dwelling older adults.
Strengths And Limitations
This study has two limitations. The first is that the sampling method targeted senior centers only in urban community settings, namely, Seoul. Secondly, the study was based on nurses' perceptions of their senior centers (n = 30) and the FGI groups' (n = 15) perspectives on current challenges to and suggestions for senior center health services. Therefore, the results should be interpreted with caution.
Despite these limitations, however, this study was meaningful in that it explored the current opinions of various stakeholders, including all nurses working at senior centers in Seoul, about the expansion of health services at senior centers, focusing on the nurse's role. This investigation yielded significant understanding of and insights into the challenges of health service provision at senior centers, and resulted in policy and practice recommendations to improve these services in the future. Amid the current phenomenon of accelerated aging worldwide, these suggestions could enable senior centers to play a key role in chronic disease management, health promotion, and health maintenance for older people.
Conclusions
The study results revealed several challenges, including insufficient availability of health services; a lack of human resources; non-systematic and overlapping health services; and a lack of legal clarification of nurses' roles. The suggestions for improvement are as follows: senior centers should focus on disease prevention and chronic disease management, and function as hubs to connect health and welfare services. To this end, it is necessary to increase nurses' competencies through training and administrative support; in addition, the establishment of legal regulations and adequate staffing for nurses are just as important.
Ethics approval and consent to participate
The study was approved by the institutional review board at a university in South Korea (IRB approval #2013-70).
The researchers explained the study purpose and process to participants, and assured them of the anonymity and confidentiality of their data. Written, informed consent was obtained from all participants. | 2020-03-19T10:22:33.109Z | 2020-03-17T00:00:00.000 | {
"year": 2020,
"sha1": "47ee4dba231abc748d1661c079bd0ac8cb2d54ef",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-17495/v1",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1edde8f181039c4888897d1c72085619edeae4ca",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
1836852 | pes2o/s2orc | v3-fos-license | Sedation in French intensive care units: a survey of clinical practice
Background Sedation is used frequently for patients in intensive care units who require mechanical ventilation, but oversedation is one of the main side effects. Different strategies have been proposed to prevent oversedation. The extent to which these strategies have been adopted by intensivists is unknown. Methods We developed a six-section questionnaire that covered the drugs used, modalities of drug administration, use of sedation scales and procedural pain scales, use of written local procedures, and targeted objectives of consciousness. In November 2011, the questionnaire was sent to 1,078 intensivists identified from the French ICU Society (SRLF) database. Results The questionnaire was returned by 195 intensivists (response rate 18.1%), representing 135 of the 282 ICUs (47.8%) listed in the French ICU society (SRLF) database. The analysis showed that midazolam and sufentanil are the most frequently used hypnotics and opioids, respectively, administered in continuous intravenous (IV) infusions. IV boluses of hypnotics without subsequent continuous IV infusion are used occasionally (in <25% of patients) by 65% of intensivists. Anxiolytic benzodiazepines (e.g., clorazepam, alprazolam), hydroxyzine, and typical neuroleptics, via either an enteral or IV route, are used occasionally by two thirds of respondents. The existence of a written, local sedation management procedure in the ICU is reported by 55% of respondents, 54% of whom declare that they use it routinely. Written local sedation procedures mainly rely on titration of continuous IV hypnotics (90% of the sedation procedures); less frequently, sedation procedures describe alternative approaches to prevent oversedation, including daily interruption of continuous IV hypnotic infusion, hypnotic boluses with no subsequent continuous IV infusion, or the use of nonhypnotic drugs. Among the responding intensivists, 98% consider eye opening, either spontaneously or after light physical stimulation, a reasonable target consciousness level in patients with no severe respiratory failure or intracranial hypertension. Conclusions Despite a low individual response rate, the respondents to our survey represent almost half of the ICUs in the French SRLF database. The presence of a written local sedation procedure, a cornerstone of preventing oversedation, is reported by only half of respondents; when present, it is used in for a limited number of patients. Sedation procedures mainly rely on titration of continuous IV hypnotics, but other strategies to limit oversedation also are included in sedation procedures. French intensivists no longer consider severely altered consciousness a sedation objective for most patients.
Background
Patients who are mechanically ventilated in the intensive care unit (ICU) commonly receive sedation, either alone or in combination with analgesia, to relieve pain and discomfort and to control agitation and ventilator dyssynchrony. Most sedative drugs have potent hypnotic properties; thus excessive sustained alteration of consciousness is a major side effect of sedation [1]. The main consequence is an increased duration of mechanical ventilation, which is now a common surrogate marker of oversedation. Oversedation also results in increased rates of ventilator-associated pneumonia [2] and ICU-acquired weakness [3].
Different strategies have been proven to reduce oversedation and are recommended by the French ICU Society sedation guidelines in 2007 [4] and more recently by the Society of Critical Care Medicine sedation guidelines in 2013 [5]. These include the targeted titration of continuous intravenous (IV) infusions of hypnotics and daily interruption of continuous IV infusions of hypnotics. The importance of formalizing the sedation strategy in a written, local procedure is also emphasized; the procedure should include repeated measurements of consciousness level on a sedation scale and the detection and treatment of procedural pain. It is unknown whether written local procedures are used and what type of oversedation prevention strategy is currently used in French ICUs.
Alternatives to continuous around-the-clock IV infusions of hypnotics have recently been proposed. These alternatives include short-duration (e.g., 6-h duration) IV infusions of hypnotics [6], repeated IV boluses of hypnotics (with no continuous IV infusion) [7], or the use of nonhypnotic drugs, such as neuroleptics [6,8]. Increasingly the concept of light sedation has emerged, where (once discomfort, dyssynchrony, and agitation have been controlled) patient awakeness and cooperation are promoted, rather than deep alterations of consciousness. It is unknown how often alternatives to continuous IV hypnotic infusion are used in daily practice or what targets of consciousness are currently used among intensivists.
In the present study, we conducted a survey of French ICUs to determine the perceived sedation practices in patients that require mechanical ventilation (invasive ventilation). We investigated the use of continuous IV hypnotics and alternatives to continuous IV hypnotics, the use of local written procedure; the type of strategy used to prevent oversedation, the detection and treatment of procedural pain, and the assessment and objectives of consciousness level. The information provided by this survey may stimulate educational interventions.
Methods
The questionnaire was developed by three senior intensivists (BDJ, FV, and GP) experienced in the sedation of critically ill patients (see Additional file 1). The first of the six sections of the questionnaire summarized the characteristics of the intensivists (experience in ICU, full- or part-time position, description of hospital and ICU). The following sections collected data about the drugs used (midazolam, propofol, nonhypnotic benzodiazepines, hydroxyzine, neuroleptics, opioids), the routes of administration (continuous infusion, IV bolus, enteral route), the use of a sedation scale and Bispectral index (BIS), the use of a pain scale in communicating and non-communicating patients, the use of a written local procedure, and the sedation objective in a patient with no severe respiratory failure or intracranial hypertension. We did not record the use of delirium scales because, despite the uncontroversial prognostic value of delirium in critically ill patients, therapeutic strategies based on delirium assessment, unlike those based on the use of sedation and pain scales, are sparse and, to our knowledge, their impact on outcome has not been assessed so far. Most of the items in the second set of sections were designed to be answered with a Likert scale based on the following four anchors: "in more than 75% of patients"; "in 25-75% of patients"; "in less than 25% of patients"; and "never." After data collection and analysis, the anchor labels were transformed to "routinely," "often," "occasionally" and "never", respectively. Each questionnaire item was discussed by six intensivist members of the Epidemiology and Clinical Research Committee of the French ICU Society (BDJ, FV, GP, JA, SL, AG) until no further issue arose regarding educational value, relevance, clarity, and ease of completion.
In November 2011, the survey was emailed to 1,078 intensivists (seniors or assistants, excluding residents) in university-and nonuniversity-affiliated adult ICUs across France. The intensivists were identified from the French ICU Society (SRLF) database. After 1 and 2 weeks, reminders were emailed to non-respondents. We offered no compensation for participation in the survey.
The data are described as the number and percentage or as the median and interquartile range (IQR).
Results
The questionnaire was returned by 195 intensivists (response rate 18.1%), representing 135 of the 282 ICUs (47.8%) listed in the French ICU society (SRLF) database. Table 1 reports the main characteristics of the respondents. Notably, 77% of intensivists were full-time senior intensivists, and 66% had more than 10 years experience in the care of critically ill patients. Continuous IV midazolam is used routinely (in >75% of patients) by 76% of the responding intensivists, whereas continuous IV propofol is used only occasionally (in <25% of patients) by 66% of the respondents ( Figure 1). Sufentanil is the most frequently used continuous IV opioid, with 48% of the responding intensivists reporting routine use of this opioid ( Figure 1). Subcutaneous morphine is used never or occasionally by 52% and 42% of the respondents, respectively.
Routine use of a sedation scale is reported by only 68% of responding intensivists ( Figure 4). The Ramsay scale and the RASS are used by 50% and 38%, respectively, of the intensivists using a sedation scale. These assessments are primarily performed by nurses ( Table 2). The BIS is almost never used, regardless of whether the patient is receiving neuromuscular blockers or not. The routine use of pain scales for assessing communicating patients undergoing potentially painful procedures is reported by 70% of respondents. However, only 38% of respondents report routine pain scale use in noncommunicating patients ( Figure 4). The most frequently used pain scales were the Behavioral Pain Scale (BPS) in non-communicating patients (80% of respondents), and analogous scales in communicating patients (98% of respondents); again pain levels are mainly assessed by nurses ( Table 2).
The presence of a written, local sedation management procedure in the ICU is reported by 55% of responding intensivists. However, in ICUs with these procedures, only 54% of intensivists declare they use it routinely ( Figure 5). The presence of a written local pain management procedure in the ICU is reported by 45% of the intensivists. However, in ICUs with these procedures, only 40% of respondents use the procedure routinely. Written, local sedation procedures mainly rely on the titration of continuous IV hypnotics according to patient consciousness and tolerance (90% of the sedation procedures). The use of IV boluses that were not followed by continuous infusion also is reported in 30% of procedures. Other strategies, including daily interruption of continuous IV hypnotics, are used much less frequently ( Figure 6).
Among the responding intensivists, 61% consider spontaneous eye opening to be a reasonable consciousness target level in patients with no severe acute respiratory distress syndrome (ARDS) or intracranial hypertension (ICH); 37% consider eye opening after a light physical stimulation to be a reasonable target. Eye opening to a strong noxious stimulation is considered a reasonable consciousness target by 2% of responders, whereas no respondents consider no eye opening, whatever the stimulation, a reasonable target.
Discussion
In this survey, midazolam and sufentanil appear as the most frequently used hypnotic and opioid, respectively, administered in continuous IV infusions. IV boluses of hypnotics without subsequent continuous IV infusion, anxiolytic benzodiazepines (e.g., clorazepam, alprazolam), hydroxyzine, and typical neuroleptics, via either an enteral or IV route, are used occasionally by two thirds of respondents. The existence of a written, local sedation management procedure in the ICU (mainly relying on continuous IV hypnotic titration) is reported by 55% of respondents, 54% of whom declare that they use it routinely. Among the responding intensivists, 98% consider eye opening, either spontaneously or after light physical stimulation, a reasonable target consciousness level in patients with no severe respiratory failure or intracranial hypertension.
Despite the strong recommendations of the 2007 French [4] and 2013 U.S. [5] Consensus Conferences to monitor sedation using clinical scales, and the large number of scales currently available and validated for the ICU setting, less than 70% of the responding intensivists report the routine use of a sedation scale. Routine objective detection of procedural pain in communicating patients is reported by a similar number of respondents (70%), but that frequency contrasts sharply with the further lower rate of detecting pain in non-communicating patients, as only 38% of respondents report the routine use of a pain scale in these patients. This likely reflects the common, persistent belief that noncommunicating patients, whose consciousness is frequently altered, are unlikely to feel pain. Furthermore, in a highly technically sophisticated environment dedicated to the treatment of life-threatening organ failures, pain detection might still not be considered a priority. Yet, validated, simple-to-use tools exist, including the BPS [9]. The detection of pain and assessment of the response to analgesics have been shown to have favorable impact on outcomes for patients in the ICU, including those with a noncommunicating phase during the ICU stay [10].
Despite numerous trials showing that written sedation algorithms beneficially affect important outcome markers of oversedation, including mechanical ventilation duration [2,11-13], only 55% of the responding intensivists have a written sedation procedure in their ICU. Furthermore, when a written sedation procedure exists, it is not used routinely by nearly 50% of respondents. There are numerous barriers to the implementation of written sedation procedures, including insufficient education programs, understaffing (in particular, there is a high patient-to-nurse ratio in most French ICUs) and the reluctance of intensivists to transfer sedation management to nurses, which is an integral part of most published algorithms. It also should be acknowledged that written sedation procedures might not apply to some specific ICU patients, particularly those with severe brain injury or those in whom treatment withdrawal has been decided. Titration of continuous IV infusion of hypnotics is by far the most common method used to prevent oversedation in French ICUs. This is in accordance with the 2007 French Consensus conference guidelines, which included a frame for hypnotics and morphinics use based on continuous titration, whereas the use of daily interruption of continuous IV infusions of hypnotics was not addressed [4]. Inversely, a recent survey showed that 80% of U.S. hospitals use daily interruption of sedatives in mechanically ventilated patients [14]. Although both continuous titration [2,11,12] and daily interruption of sedatives [15] have shown a significant beneficial effect on mechanical ventilation duration, they have not been formally compared. However, a recent randomized trial in North America showed that combining daily interruption of sedatives and continuous titration did not improve outcomes compared to continuous titration alone but was associated with increased nurse workload [16].
The IV bolus of a hypnotic with no subsequent continuous IV infusion is present in more than 30% of written sedation procedures. In a randomized trial of repeated IV boluses of midazolam with a goal of 1-2 on the Ramsay scale compared to a continuous IV infusion with a Ramsay goal of 3-4, the light sedation strategy with repeated IV boluses proved feasible and safe and was associated with significantly shorter mechanical ventilation duration and ICU stay and no long-term, adverse cognitive, or psychological impact [7]. This approach therefore might be an interesting alternative to the continuous IV infusion of hypnotics, particularly in patients with moderately altered tolerance to the ICU environment. The infrequent incorporation of nonhypnotic anxiolytic benzodiazepines, hydroxyzine, or neuroleptics in sedation procedures (15% of the local procedures) contrasts with the high percentage of intensivists (approximately 65%) that report occasional use. This suggests that some aspects of daily practice, such as the care of agitated patients or weaning from IV hypnotics, remain to be captured by local procedures. Neuroleptics have been proposed as first-line drugs for controlling agitation, discomfort, and delirium [6] and ventilator dyssynchrony [8]. However, their sparing effect on hypnotic use requires further investigation.
Finally, we found that more than 60% of respondents consider spontaneous eye opening a reasonable consciousness target in patients with no severe ARDS or brain injury; 35% of intensivists targeted eye opening to a slight verbal or nociceptive stimulus. After several editorials in the 2000s urging the need for lighter sedation objectives and cooperative sedation for ICU patients [17,18], our finding reflects the considerable shift in the paradigm of sedation practice among intensivists during the past decade.
This survey has several limitations. First, the low response rate of 18.1% might question the generalizability of the results. However, compared to postal mail surveys, email surveys, commonly allowing for larger target population, frequently have lower response rates [19][20][21]. The response rate of a recent large email survey with questionnaires sent to 6,227 gastroenterologists listed in the American College of Gastroenterology database was 9.5% [22]. Interestingly, the number of respondents in our survey is comparable to the 273 respondents to a survey of sedation practices over Canada in 2006 sent by postal mail to a relatively low number of critical care physicians (448), resulting in a high response rate of 60% [23]. Of note, the 195 respondents to our survey represent almost half of the ICUs listed in the French ICU society (SRLF) database. Furthermore, the respondents in our survey represent a broad range of ICU characteristics (university and nonuniversity hospitals; medical, surgical, and mixed ICUs; large and small ICUs; and various annual ICU admission rates); additionally, the demographic pattern is similar to that of previous surveys of sedation practices in French ICUs [24,25]. A second limitation is that results of practice surveys might differ from the true bedside practice, mainly because perception is inherently subjective. Our study therefore differs from the observational study of sedation practices conducted in 2007 with patient-based data collected in 44 French ICUs [26]. However, our aim in this study was to address the perception of sedation practices among intensivists, not the actual practices.
Conclusions
Despite a low individual response rate, the respondents to our survey represent almost half of the ICUs in the French SRLF database. This survey revealed that the written sedation procedure, a cornerstone for the prevention of oversedation, is present in only 50% of respondents' ICUs. Furthermore, we found that when a written sedation procedure exists, it is used in only a limited number of patients. In addition, procedural pain is frequently detected in communicating patients, but not in noncommunicating patients. The use of procedures for detecting and treating procedural pain also is limited. Educational measures are warranted to improve these findings. Our study also revealed that several alternatives to the common continuous IV hypnotic infusions, including repeated IV hypnotic boluses or the use of nonhypnotic drugs, can be judiciously included in local sedation procedures to limit oversedation. Finally, French intensivists no longer consider severely altered consciousness an objective of sedation for most patients. | 2018-04-03T03:15:38.075Z | 2013-08-09T00:00:00.000 | {
"year": 2013,
"sha1": "cab49a15f830fc890fed0a21b34ff9a02f09a0d5",
"oa_license": "CCBY",
"oa_url": "https://annalsofintensivecare.springeropen.com/track/pdf/10.1186/2110-5820-3-24",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca8799ed5595110bc9ac262255e74b1fa086cb14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52385005 | pes2o/s2orc | v3-fos-license | Strategies for coupling global and limited-area ensemble Kalman filter assimilation
This paper compares the forecast performance of four strategies for coupling global and limited area data assimilation: three strategies propagate information from the global to the limited area process, while the fourth strategy feeds back information from the limited area to the global process. All four strategies are formulated in the Local Ensemble Transform Kalman Filter (LETKF) framework. Numerical experiments are carried out with the model component of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and the NCEP Regional Spectral Model (RSM). The limited area domain is an extended North-America region that includes part of the north-east Pacific. The GFS is integrated at horizontal resolution T62 (about 150 km in the mid-latitudes), while the RSM is integrated at horizontal resolution 48 km. Experiments are carried out both under the perfect model hypothesis and in a realistic setting. The coupling strategies are evaluated by comparing their deterministic forecast performance at 12-h and 48-h lead times. The results suggest that the limited area data assimilation system has the potential to enhance the forecasts at 12-h lead time in the limited area domain at the synoptic and subsynoptic scales (in the global wave number range of about 10 to 40). There is a clear indication that, among the different coupling strategies, those that cycle the limited area assimilation process produce the most accurate forecasts. In the realistic setting, at 12-h forecast time the limited area systems produce more modest improvements compared to the global system than under the perfect model hypothesis, and at 48-h forecast time the global forecasts are more accurate than the limited area forecasts. Correspondence to: D. Merkova (dagmar.merkova@nasa.gov)
Introduction
An atmospheric limited area model uses time-dependent lateral boundary conditions provided by a global atmospheric model. In current practice, the initial conditions for the limited area model are either analyses prepared using the global model and interpolated to the higher resolution grid of the limited area model, or analyses prepared by using a data assimilation system specifically designed to produce initial states for use by the limited area model. In the latter case, the analysis inside the limited area domain is obtained independently of the global analysis (e.g., Torn et al. 2006;Zhang et al. 2006;Huang et al. 2009). The aforementioned two approaches are motivated by the practical constraint that most weather prediction centers and research groups who run limited area models have access to global analysis products, but do not have the capability to produce global analyses. The only exceptions are a handful of operational NWP centers, e.g., the National Centers for Environmental Prediction (NCEP), who prepare both global and limited area analyses, but, mainly for practical reasons, follow one of the two aforementioned approaches.
In this paper, we consider the scenario in which we have access to both the global and the limited-area model and a model-independent data assimilation system. Our goal is to begin to address the problem of finding that configuration of the coupling between these three components of the forecast system, which provides the best global and limited area model forecasts. In particular, we compare the forecast performance of the system for different coupling strategies using both simulated and operationally used observations of the atmosphere. In our experiments, the global model is the model component of the Global Forecast System (GFS) of the National Centers for Environmental Prediction (NCEP) (Sela 1980) integrated at a T62L28 (about 150 km ) horizontal resolution, the limited area model is the Regional Spectral Model (RSM) of NCEP (Juang 1992;Juang and Kanamitsu 1994;Juang et al. 1997;Juang and Hong 2001) integrated at 48 km and L28 resolution, while the data assimilation system is the Local Ensemble Transform Kalman Filter (Ott et al. 2004;Hunt et al. 2007;Szunyogh et al. 2008). We choose the NCEP RSM for this study, because it has the most consistent dynamics, among all limited area models, with that of the NCEP GFS model. In particular, the two models share the same physical parametrization packages and the GFS model solution affects the RSM solutions not only at the lateral boundaries, but also in the entire limited area domain. We design numerical experiments to start assessing the forecast value added by the limited area assimilation.
The structure of the paper is as follows. Section 2 describes the coupling strategies we consider in this study. Section 3 explains the design of the numerical experiments that we carry out to assess the performance of the different coupling strategies. The results of the numerical experiments obtained for the perfect model scenario are presented in section 4, while the results obtained with assimilating observations of the real atmosphere are reported in section 5. Our conclusions are summarized in section 6.
Coupling Strategies
To design strategies for the coupling of a global and a limited area data assimilation system, we assume that the higher resolution limited area model provides a more accurate representation of the atmospheric dynamics in the limited area than does the global model.
Our goal is to take advantage of the availability of this presumed better model information within the limited area to improve the quality of the analyses. We introduce our strategies assuming that the data assimilation component is based on an ensemble transform algorithm (e.g., Bishop et al. 2001;Hunt et al. 2007). While Strategy 1 is a conventional uncoupled approach, which could be easily implemented using any data assimilation algorithm, and Strategy 2 and 3 could be implemented using any ensemble-based schemes, Strategy 4 takes advantage of the fact that ensemble transform algorithm provides a straightforward way to propagate information from the limited area data assimilation process to the global process.
a. Global and limited area model dynamics
The global model dynamics g, defined by

$x_g(t_f) = g[x_g(t_i)]$,  (1)

propagates an estimate $x_g(t)$ of the global atmospheric state between an initial time $t_i$ and a final time $t_f$. The components of $x_g(t)$ are the spatially discretized atmospheric state variables (e.g., temperature, components of the wind vector, surface pressure, humidity variables, etc.). The limited area model dynamics f, defined by

$x_l(t_f) = f[x_l(t_i)]$,  (2)

propagates an estimate $x_l(t)$ of the atmospheric state in a limited area sub-domain of the globe at a resolution that is higher than that of $x_g(t)$. We introduce the notation

$x'_l(t) = x_l(t) - L[x_g(t)]$  (3)

for the difference between the high resolution and the global state estimate in the limited area domain. In Eq. 3, L is the mapping from the state space of the global model onto the state space of the limited area model. In practice, this mapping is an interpolation from the lower resolution grid of the global model to the higher resolution grid of the regional model.
While the limited area model resolves motions at scales that are smaller than the smallest scales resolved by the global model, there are scales that contribute to both x l (t) and x g (t).
Thus, x ′ l (t) cannot be simply considered to be a small scale perturbation to the global state vector in the limited area domain.
b. The motivation for coupled data assimilation
The derivation of the version of the LETKF which is considered in this study is based on the assumption that a model can provide a perfect representation of the dynamics of the observed system (Ott et al. 2004;Hunt et al. 2007). An implementation of the scheme on a numerical weather prediction model inevitably violates this assumption. One particular source of the error is the spatial discretization of the dynamics: the atmospheric state at time t is represented by a spatially continuous vector field u(t), while a model uses a finitedimensional discretization x(t) of u(t) assuming that a suitable projection P: x t (t) = P [u] exists. (Here, the superscript t indicates that x t (t) is the model state representation of the true atmospheric state.) Thus the finite-dimensional model dynamics g and f ignore an infinite number of interactions associated with the unresolved flow components. While parametrization of the sub-grid (unresolved) processes are designed to account for the effects of the unresolved scales on the resolved scale (e.g., Kalnay 2002), in general, a higher resolution model is expected to provide a more accurate representation of the atmospheric dynamics. The motivation for employing a limited area model is to provide a more accurate representation of the atmospheric dynamics in a limited area domain of particular interest.
Our intended purpose in coupling the global and limited area data assimilation processes is to take advantage of the presumed superiority of the limited area model in the limited domain to improve the accuracy of the limited area analyses.
c. Ensemble transform data assimilation schemes
An ensemble-based data assimilation system obtains the state estimate at analysis time t n in two steps: (i) in the forecast step, a prior estimate of the state, called the background, and an estimate of the uncertainty in the background are obtained by propagating information from the previous analysis time t n−1 to t n = t n−1 + ∆t using the model dynamics; and (ii) in the state update step, the prior estimates of the state and its uncertainty are updated based on the observations collected in the time window [t n − ∆t/2, t n + ∆t/2].
Formally, the forecast step involves preparing a K-member ensemble of background forecasts $\{x^{b(k)}(t_n),\ k = 1, \dots, K\}$. For instance, in a global data assimilation system

$x^{b(k)}(t_n) = g[x_g^{a(k)}(t_{n-1})], \quad k = 1, \dots, K,$  (4)

where $\{x_g^{a(k)}(t_{n-1}),\ k = 1, \dots, K\}$ are the members of the analysis ensemble at the previous analysis time $t_{n-1}$. The background state $\bar{x}^b(t_n)$ is defined by the ensemble mean, while the uncertainty in the estimate $\bar{x}^b(t_n)$ is described by the ensemble based estimate of the background error covariance matrix,

$P^b(t_n) = \frac{1}{K-1} X^b(t_n) \left[ X^b(t_n) \right]^T.$

Here $X^b(t_n)$ is the matrix whose k-th column is $x^{b(k)}(t_n) - \bar{x}^b(t_n)$. In an ensemble-transform-based data assimilation scheme the ensemble mean analysis is obtained by

$\bar{x}^a(t_n) = \bar{x}^b(t_n) + X^b(t_n)\, w^a(t_n),$

where the "weight vector" $w^a(t_n)$ is the value of w that minimizes the quadratic cost function

$J(w) = (K-1)\, w^T w + \left[ y^o(t_n) - h\!\left( \bar{x}^b(t_n) + X^b(t_n) w \right) \right]^T R^{-1}(t_n) \left[ y^o(t_n) - h\!\left( \bar{x}^b(t_n) + X^b(t_n) w \right) \right].$

Here, $y^o(t_n)$ is the vector of observations assimilated at time $t_n$ and the observation operator h(x) maps the model representation of the atmospheric state to observables at observation times. The observation operator is assumed to satisfy

$y^o(t_n) = h\!\left[ x^t(t_n) \right] + e(t_n),$

where the vector of Gaussian random variables $e(t_n)$ with mean 0 and covariance matrix $R(t_n)$ represents the observation noise. In practice, h(x) is an interpolation of the model variables from the model grid points to the locations and times of the observations and a conversion of the model variables to the observed quantities. (Because the observations assimilated at time $t_n$ are collected in the time window $t \in [t_n - \Delta t/2,\ t_n + \Delta t/2]$, the model is integrated for a time $\tfrac{3}{2}\Delta t$ from $t_{n-1}$ to provide a background trajectory.) In addition to the analysis $\bar{x}^a(t_n)$, an ensemble transform scheme also generates an ensemble of analysis perturbations by

$X^a(t_n) = X^b(t_n)\, W^a(t_n).$

The analysis perturbations, which are the columns of $X^a(t_n)$, are added to $\bar{x}^a(t_n)$ to obtain the members of the analysis ensemble $x^{a(k)}(t_n)$, k = 1, ..., K. One approach to compute the weight vector $w^a(t_n)$ and the weight matrix $W^a(t_n)$ is through a square-root Kalman filter algorithm (e.g., Tippett et al. (2003)).
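A minimal numpy sketch may make the weight computation concrete. It follows the standard ensemble transform (LETKF-style) solution of the cost function above for a single local analysis; the function and array names, and the assumption of a diagonal observation error covariance, are ours and do not come from the authors' system.

```python
import numpy as np

def etkf_update(Xb, yo, H, r_diag):
    """Ensemble transform analysis update for one local region.

    Xb     : (n, K) background ensemble (columns are members)
    yo     : (p,) observation vector
    H      : function mapping a model state (n,) to observation space (p,)
    r_diag : (p,) observation error variances (R assumed diagonal)
    """
    n, K = Xb.shape
    xb_mean = Xb.mean(axis=1)
    Xb_pert = Xb - xb_mean[:, None]                 # background perturbations X^b

    # Background ensemble mapped to observation space
    Yb = np.column_stack([H(Xb[:, k]) for k in range(K)])
    yb_mean = Yb.mean(axis=1)
    Yb_pert = Yb - yb_mean[:, None]

    # Analysis error covariance in weight (ensemble) space
    Rinv_Yb = Yb_pert / r_diag[:, None]             # R^{-1} Y^b
    Pa_tilde = np.linalg.inv((K - 1) * np.eye(K) + Yb_pert.T @ Rinv_Yb)

    # Mean weight vector w^a and symmetric square-root transform W^a
    wa = Pa_tilde @ Rinv_Yb.T @ (yo - yb_mean)
    evals, evecs = np.linalg.eigh((K - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    # Analysis mean and analysis ensemble
    xa_mean = xb_mean + Xb_pert @ wa
    Xa_pert = Xb_pert @ Wa
    return xa_mean[:, None] + Xa_pert               # (n, K) analysis ensemble
```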
d. Coupling strategies
In all four configurations of the coupling considered in this paper, the global background ensemble is obtained by equation 4. In the first three strategies we describe, the coupling is in one direction: the limited area data assimilation process uses information provided by the global analysis at the current or the previous analysis time, but the the limited area analysis has no effect on the global analysis at the current or future analysis times. In the fourth strategy, the global analysis within the limited area domain is prepared using information from the limited area analysis, thus feeding back information from the limited area data assimilation process to the global data assimilation process. In our description of coupling strategy 4, we make use of the fact that both the mean analysis and the analysis ensemble members can be computed by linearly combining the background ensemble perturbations.
1) Strategy 1: Limited area analysis by spectral interpolation
The limited area analysis $\bar{x}_l^a(t_n)$ is obtained by interpolating the global ensemble mean analysis to the higher resolution grid of the limited area model:

$\bar{x}_l^a(t_n) = L\!\left[ \bar{x}_g^a(t_n) \right].$

In this configuration, although the global model is run in an ensemble mode, only a single limited area run is prepared using the mean of the global ensemble solution to provide the large scale forcing. In this configuration, the limited area model can outperform the global model if it can develop predictable flow features in response to the higher resolution boundary forcing terms in the limited area domain.
2) Strategy 2: Non-cycled limited area analysis
Members of the global analysis ensemble at $t_{n-1}$ are interpolated to the higher resolution model grid of the limited area model to obtain a limited area analysis ensemble:

$x_l^{a(k)}(t_{n-1}) = L\!\left[ x_g^{a(k)}(t_{n-1}) \right], \quad k = 1, \dots, K.$

This limited area analysis ensemble is then propagated forward in time by the limited area model to obtain the limited area background ensemble:

$x_l^{b(k)}(t_n) = f\!\left[ x_l^{a(k)}(t_{n-1}) \right], \quad k = 1, \dots, K.$

A limited area analysis $\bar{x}_l^a(t_n)$ is then prepared by applying the ensemble transform update described above to this limited area background ensemble.
; applying the same weights to the limited area and global ensemble members, we increase the chance of preserving the dynamical balance between the global ensemble member x a(k) g (t n ) and the high resolution perturbation x ′ (k) l (t n ) during the data assimilation process.
The feedback may also improve the global analysis in the area near and within the limited area domain. In particular using the high-resolution model fields to obtain the global analysis may reduce the effect of the representativeness errors in the observations. There are two practical issues that have to be addressed when implementing Strategy 4. First, using the weights from the higher resolution limited area analysis in the global analysis requires an algorithm to map the weights from the high resolution grid to the lower resolution global grid. Second, abrupt changes may occur in the weights near the boundaries of the limited area domain. This can be addressed by implementing a blending process that smoothes the changes in the weights near the boundary of the limited area domain (see section 3.b).
Experiment design
First, we briefly introduce the three main components of our coupled analysis-forecast system: the global GFS model, the limited area RSM model and the LETKF data assimilation system. Then, we describe the design of the numerical experiments, the observational data sets we assimilate, and the verification scores we use to evaluate the different coupling strategies.
1) The model component of the GFS
The GFS consists of a model and a data assimilation component, but in this study we use only the model component. The dynamical core of the model is described in Sela (1980). The model has been upgraded numerous times since the nineteen-eighties, mainly to improve the physical parametrization and the computational performance, but the general solution strategy of the dynamical core has remained the same. In particular, the model uses the spectral transform technique to solve the model equations; that is, the nonlinear terms and the terms associated with parametrized physical processes are computed on a grid, while the spatial derivatives are computed in spectral space using spherical harmonics for the representation of the atmospheric fields. For vertical discretization the model uses a sigma coordinate system. We integrate the model using a triangular truncation with a cut-off wave-number of 62 and 28 vertical sigma levels (T62L28). At this spectral resolution the nominal resolution of the model (the grid spacing) is about 150 km in the midlatitudes, but, because of the use of a scale dependent diffusion to maintain a realistic kinetic energy spectrum , the small-scale components of the fields are artificially dampened.
2) RSM
The RSM is a perturbation model. That is, it predicts the evolution of a high resolution perturbation to the lower resolution global model solution and obtains the high-resolution model forecast by adding the forecast perturbation to the global forecast (Juang 1992;Juang et al. 1997). For the computation of the sum of the perturbation and the global fields, the spherical harmonics that represent the global fields in spectral space are directly transformed to the grid points of the NCEP RSM.
In the RSM, the time evolution of the perturbation is governed by the nonlinear inter-
b. LETKF
In the LETKF, the state update step of the Kalman filter is performed independently for each component of the state vector (Ott et al. 2004;Hunt et al. 2007). A key step of the LETKF algorithm is the selection of the set of observations that are considered when updating the estimate of a given state vector component. In practice, the different state vector components at a given grid point are analyzed in one step and in situ observations are selected for assimilation if they are closer to the grid point than a given distance. The assimilation of nonlocal radiance observations with the LETKF is also possible, but for those observations the observation selection is done in a different way (Fertig et al. 2008). In this study, we use the same set of LETKF parameters in both the global and the limited area data assimilation system as we used in the global system described in Szunyogh et al. (2008).
The number of ensemble members is K = 40, observations are assimilated if they are within an 800 km radius of the grid point, and the inverse of the assumed observational error variance is tapered linearly from its original value to zero between a distance of 500 and 800 km (thus tapering the effect on the analysis of observations that are further away than 500 km). The initial ensemble members are sampled from a free run with the NCEP GFS.
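As an illustration of the distance-dependent tapering just described, the sketch below scales the inverse observation error variance linearly to zero between 500 and 800 km. The piecewise-linear form and the function name are our reading of the text, not code from the authors' system, and the nominal error variance in the example is an arbitrary value.

```python
def taper_factor(dist_km, full_weight_km=500.0, cutoff_km=800.0):
    """Multiplier applied to the inverse observation error variance.

    Observations closer than `full_weight_km` keep their full weight,
    the weight falls linearly to zero at `cutoff_km`, and observations
    beyond the cutoff are not assimilated at all.
    """
    if dist_km <= full_weight_km:
        return 1.0
    if dist_km >= cutoff_km:
        return 0.0
    return (cutoff_km - dist_km) / (cutoff_km - full_weight_km)

# Example: an observation 650 km from the analysed grid point
# contributes with half of its nominal weight.
inv_error_variance = 1.0 / 1.5**2            # assumed nominal R^-1 entry
effective_inv_var = inv_error_variance * taper_factor(650.0)
```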
The one important difference between our implementation of the LETKF on the GFS and the RSM is that in the GFS implementation we employ a digital filter (Lynch and Huang 1992) to control free gravity wave oscillations, but we cannot perform such a filtering of the high-resolution limited area fields, because a digital filtering capability is not available for the RSM.
In our implementation of strategy 4, we compute the weights w a g (t n ) and W a g (t n ) for the global analysis within the limited area domain by taking the algebraic mean of the weights at the four closest grid point of the high resolution grid. We found that the blending procedure applied by the RSM to the model fields results in a sufficiently smooth transition of the weights of the global system near the boundaries. Thus, applying a blending algorithm directly to the weights was deemed not necessary.
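A small sketch of the weight mapping described above, assuming the indices of the four high-resolution grid points nearest to each coarse grid point are already known; the data layout is hypothetical and only meant to illustrate the averaging step.

```python
import numpy as np

def map_weights_to_coarse(w_highres, nearest4):
    """Average LETKF weight vectors from the fine to the coarse grid.

    w_highres : (n_fine, K) array of weight vectors on the limited-area grid
    nearest4  : (n_coarse, 4) integer array; indices of the four fine-grid
                points closest to each global grid point inside the domain
    """
    return w_highres[nearest4].mean(axis=1)   # (n_coarse, K)
```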
c. Observational data set
The observational data set is identical to the one used in Szunyogh et al. (2008). It includes all conventional (non-radiance) measurements that were operationally assimilated at NCEP between 1 January 2004
Results for the perfect model scenario
We first compare the performance of data assimilation systems based on Strategies 1-3.
Then we compare the performance of the two systems based on Strategies 3 and 4.
a. The comparison of Strategies 1, 2 and 3
In Figures 5 and 6, we show the vertical profile of the root-mean-square error at 12-hr and 48-hr forecast times for the temperature and the two horizontal components of the wind.
The results suggest that all three limited area strategies provide forecasts, which are more accurate than the global forecast. On average, Strategy 3 provides more accurate forecasts than Strategy 2, and Strategy 2 provides more accurate forecasts than Strategy 1. While all three limited area forecast systems maintain their large advantage over the global system for the entire 48-hr, the difference between the performance of the three limited area systems is smaller at 48-hr than at 12-hr forecast time.
We show the spatial distribution of the forecast improvements introduced by the increasingly more sophisticated limited area data assimilation process for the geopotential height at the 300 and 500 hPa levels (Figures 7 and 8) and for the temperature at the 850 hPa pressure level (Figure 9). Using the limited area model only to prepare the forecasts (Strategy 1) consistently improves all verified forecast parameters in the verification domain (see panels a and d of Figs. 7-9).

c. Spectral analysis
Figure 11 shows the spectral distribution of the error in the zonal wind forecasts for strategies 1, 2 and 3 at both the 12-h and the 48-h forecast times. This figure is produced the same way as Fig. 4, except that the Fourier transform is applied to the errors in the high-resolution wind forecast perturbation. Since the error in the large scale component of the forecast is the same for all three strategies, this figure illustrates the difference between the spectral distributions of the errors in the high resolution forecasts for the three strategies.
Results are not shown for Strategy 4, because in that case the difference between errors in the large scale forecasts also contributes to the difference between the errors for Strategy 4 and not for the other strategies.
The most striking feature of Fig. 11 is the large advantage of the system that cycles the limited area analysis (strategy 3) at 12-hr forecast time in the wave number range 10-30.
This result indicates that the LETKF coupled with the RSM can more skillfully predict the covariance in the wave number range 10-30 when the analysis is cycled. At 12-hr forecast time, the difference between the performance of the different configurations of the system is small at the longest resolved scales (wave numbers smaller than 10) and at the shortest resolved scales (wave numbers larger than 60). There is no real difference at 48-hr between the performance of the three configurations, with the exception of a slight advantage of the cycled system (strategy 3).
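A simplified sketch of the kind of spectral diagnostic discussed above: for an error field given on a regular latitude-longitude grid, the kinetic energy of the forecast error can be binned by zonal wave number with a Fourier transform along longitude. This is only an illustration under assumed array shapes; the exact wave-number decomposition used in the paper is not spelled out in the text.

```python
import numpy as np

def zonal_error_spectrum(err_u, err_v):
    """Kinetic-energy spectrum of the wind forecast error by zonal wave number.

    err_u, err_v : (nlat, nlon) errors of the two wind components on a
                   regular longitude grid.
    Returns an array indexed by zonal wave number, averaged over latitude.
    """
    nlat, nlon = err_u.shape
    fu = np.fft.rfft(err_u, axis=1) / nlon          # zonal Fourier coefficients
    fv = np.fft.rfft(err_v, axis=1) / nlon
    ke = 0.5 * (np.abs(fu) ** 2 + np.abs(fv) ** 2)  # (nlat, nlon//2 + 1)
    return ke.mean(axis=0)
```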
Results with observations of the real atmosphere
a. Comparison of Strategies 1, 2, and 3
Verification results for our analysis-forecast experiments using observations of the real atmosphere are shown in Figs. 12 and 13. Overall, the limited area systems perform slightly better than the global system at 12-hr forecast time, while the global system performs better than the limited area systems at 48-hr forecast time. The difference between the performance of the limited area systems and the global system is larger for the two components of the wind than for the temperature. In particular, the clear advantage of the limited area systems for the zonal component of the wind below the 300 hPa level at 12-hr lead time turns into a clear disadvantage by the 48-hr forecast time. Another interesting feature of the verification results for the two components of the wind is the big advantage of the global system in the upper troposphere (above 300 hPa), most of which disappears by the 48-hr forecast time.
One possible explanation is that the poorer performance of the limited area systems at 12-hr forecast time may be due to vertically propagating spurious gravity waves. Such waves may play a more important role in the limited area model than in the global model either because of the lack of initialization or because of a less careful tuning of the mountain drag parametrization for the higher resolution orography of the RSM. A further investigation of this issue is beyond the scope of the present paper.
Similar to the results for the perfect model scenario, there is not much difference between the systems based on the different coupling strategies at 48-hr lead time. However, the picture is very different from what we observed for the perfect model scenario at 12-hr forecast time: Strategy 3, which performed the best under the perfect model scenario, performs the worst in the realistic case, while Strategy 2 maintains its slight advantage over Strategy 1.
This suggests that the RSM at the tested resolution is not a sufficiently better model than the GFS in the limited area to compensate for the problems that arise at the boundaries in Strategy 3.
b. Comparison of Strategies 3 and 4
The comparison of the performance of strategies 3 and 4 with real observations is shown in Figs. 14 and 15. In these figures, we show the impact of the feedback on both the limited area and global forecasts (the two curves without feedback are the same as in Figs. 12 and 13).
We note that some caution should be exercised when interpreting the results shown in this pair of figures: the differences between the errors shown in these figures are statistically not significant when tested using the approach of Szunyogh et al. (2008). That test compares the time (sample) mean of the instantaneous differences between the root-mean-square errors of the two configurations at the different verification times to the variance of the same differences. The failure of the test indicates that the differences in the errors are not due to consistent differences at the different verification times. Instead, they are the net result of differences that are highly variable in magnitude and sign.
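A sketch of the generic form of such a paired comparison: the time mean of the instantaneous RMSE differences is compared with their spread over the verification times. The exact criterion of Szunyogh et al. (2008) is not reproduced in this text, so the function below is only an illustration of the idea.

```python
import numpy as np

def paired_rmse_difference_test(rmse_a, rmse_b):
    """Compare two forecast systems from paired per-verification-time RMSEs.

    rmse_a, rmse_b : arrays of RMSE values for the same verification times.
    Returns the mean difference and its ratio to the standard error; a ratio
    near zero indicates the sign of the difference is not consistent in time.
    """
    d = np.asarray(rmse_a) - np.asarray(rmse_b)
    mean_diff = d.mean()
    std_err = d.std(ddof=1) / np.sqrt(d.size)
    return mean_diff, mean_diff / std_err
```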
Interestingly, at 12-hr forecast time, the feedback has a much larger effect on the performance of the global forecast than on the performance of the limited area forecast. In particular, while the global forecasts of the temperature is clearly degraded by the feedback above 300 hPa and below 700 hPa, and the two horizontal wind components above 500 hPa are clearly degraded, the feedback improves the global forecasts of the two wind components
Conclusions
This paper documents our first attempt at exploring the potential benefits of coupling the global and limited area ensemble Kalman filter data assimilations. To the best of our knowledge, ours is the first study that considers a feedback from the limited area data assimilation process to the global process. We carried out analysis-forecast experiments under a perfect model scenario, where the limited area model was considered to be perfect in the limited area domain and the global model was considered to be perfect elsewhere.
We also carried out experiments in a realistic setting. Our most important findings for the perfect model scenario are the following:
• In the limited area domain, the limited area systems based on the different coupling strategies perform better than the global system. The advantage of the limited area systems is much larger at 12-hr lead time than at 48-hr lead time.
• Preparing a limited area analysis with a cycled limited area system enhances the performance of the limited area forecast system. The main benefit of cycling the limited area analysis may be that, through the inverse energy cascade in the two-dimensional inertial range, it can provide information about the effects of uncertainties of the smallest resolved scales on the uncertainties at synoptic and sub-synoptic scales (global wave number 10-30). A single analysis cycle does not provide sufficient time to achieve a similar effect.
Our additional findings from our tests using real atmospheric data are the following:
• The results with observations of the real atmosphere confirmed that the limited-area data assimilation has potentially larger benefits at the shorter forecast times (12-hr vs. 48-hr in our experiments). The advantage of the limited area systems is smaller than in the perfect model scenario at 12-hr forecast time, and the limited area systems are at a disadvantage at 48-hr forecast time.
• Our attempt to feed back information from the limited area analysis to the global analysis led to mixed results. The feedback improved the 48-hr high resolution wind forecast under the perfect model scenario and the meridional large scale wind forecast at 48-hr in the realistic scenario, but also led to considerable degradation of some of the other verified atmospheric variables.
We emphasize that we consider the current study only to be the first step toward exploring the benefits of coupling the global and limited area data assimilation process. One potential extension of this study would be to increase the ratio of the resolution of the two models from the current 1:3 ratio (48 km vs. about 150 km). Since, in the current system the cutoff wave number for both models is within the inertial range of two-dimensional turbulence, the regional model does not really bring in new physics compared to the global model.
Increasing the resolution of the limited area model to a range where some of the nonhydrostatic processes are explicitly resolved would bring in a new source of kinetic energy (convection), as well as the effects of three-dimensional turbulence. Bringing in new physics could reduce the representativeness component of the observation errors with respect to the limited area model dynamics. This, in turn, could be expected to increase the potential benefits of feeding back information from the limited area data assimilation system to the global data assimilation system. One particular area of research where we expect such an approach to be especially beneficial is in the verification of interaction between a tropical cyclone and the large scale flow. We are currently in the process of testing our coupled data assimilation system for such a scenario.
Fig. 5. Vertical profile of the root-mean-square forecast error in the limited area domain at 12-hr forecast time for the global forecast (solid) and for the limited area forecasts with coupling Strategies 1 (long dashes and dots), 2 (short dashes) and 3 (dots).
Fig. 7. The difference between the root-mean-square errors of the geopotential height forecasts at the 300 hPa level for the different configurations of the analysis system at 12-hr and 48-hr lead times. Shown are the difference between the forecasts started from the global analysis and the limited area analysis of strategy 1 (panels a and d), from the limited area analyses of strategies 1 and 2 (panels b and e), and from the limited area analyses of strategies 2 and 3 (panels c and f). Where the values are positive the forecast from the latter analysis is more accurate. Also shown is the mean flow at the 300 hPa level for the "true states" in the verification period (contours).
Fig. 8. Same as Fig. 7, except for the geopotential height forecast at the 500 hPa level.
Fig. 9. Same as Fig. 7, except for the temperature at the 850 hPa level.
Fig. 10. The difference between the root-mean-square errors of the 48-hr forecasts started from the analyses obtained by Strategy 4 and Strategy 3. Results are shown for the limited area geopotential height forecasts at 300 hPa (panel a) and 500 hPa (panel b), and the limited area temperature forecasts at 850 hPa (panel c); for the global geopotential height forecasts at 300 hPa (panel d) and 500 hPa (panel e) and the global temperature forecast at 850 hPa (panel f). Strategy 4 provides more accurate forecasts where the shades indicate positive values. Contours show the time mean of the true geopotential height at the given level.
Fig. 11. The kinetic energy spectrum of the forecast error with respect to the global wave number at 12-hr and 48-hr forecast lead times in a log-log scale. Shown is the error for strategy 1 (blue dashes), strategy 2 (green dashes and dots) and strategy 3 (red dots). The straight solid line with slope -3 indicates the scaling law for the kinetic energy in the inertial range for two-dimensional turbulence.
Fig. 12. Vertical profile of the root-mean-square forecast error in the limited area domain at 12-hr forecast time for the global forecast (solid) and for the limited area forecasts with coupling strategies 1 (dashes and dots), 2 (dashes) and 3 (dots) assimilating observations of the real atmosphere. | 2016-02-26T08:23:38.721Z | 2011-06-27T00:00:00.000 | {
"year": 2011,
"sha1": "4c5c154265043a370553525c7d3c033809ab397e",
"oa_license": "CCBY",
"oa_url": "https://npg.copernicus.org/articles/18/415/2011/npg-18-415-2011.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7d0910dfc1c10cc0be96e8b93f6f2189016e4081",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
216510961 | pes2o/s2orc | v3-fos-license | Research on the Image Annotation Technology for Product Quality and Safety Inspection Data
In recent years, serious quality and safety incidents in China have continued to harm people's lives and property. The effective analysis and processing of product quality and safety inspection data will provide intellectual support for the overall improvement of product quality and for the effective control of prominent quality and safety problems in China. Because quality inspection data contain a large amount of image information, this paper proposes an image annotation technology based on big data fusion: image similarity and image user similarity are fused by weighting, the total similarity of images is calculated, and the resulting annotations are denoised. The experimental results show that the proposed method annotates the quality inspection data well.
Introduction
With the development of computer technology, network technology and digital media technology, quality inspection data contain more and more image information. Nevertheless, it is very difficult for the relevant departments to search for useful image information and to analyze the data. In light of this, this research studies image annotation technology for quality inspection data.
Characteristics of quality test data
The characteristics of quality test data mainly include: 1) Extensive sources and huge volume: the quality inspection data includes national product quality supervision and random checking, 12315/12365 consumer complaints, WTO/TBT recall notifications, laboratory product testing, product injury and accident, and etc., so the data volume is huge.
2) Different features: quality test data includes text information, image information, audio information, video information, and etc., as well as the structured, semi-structured and unstructured data.
3) Large differences in data quality: the quality of quality test data is uneven. Some data is of high quality, while some data is missing.
Research status of image annotation
The scholars at home and abroad have carried out a large number of studies on image annotation technology. For example, Wang et al. [1] used two-dimensional multi-scale HMM to annotate images, and established Markov chains on multiple scales to express the relationship between multiple scales. [2] established a general two-layer model by using context information based on CRF, and annotated it by using the relationships between regions, regions and objects, as well as objects and objects in images. Monay et al. [3] combined the annotation keywords and regional features of images for training. Martinez et al. [4] used the local classifier SVM-KNN to automatically select unmarked samples and add them to the training set for image annotation by means of active learning. Wang et al. [5] retrieved a large number of images on the Internet based on the annotation keywords by determining an annotation keyword of the image to be annotated, and calculated the visual feature similarity between the retrieval results and images to be annotated, and then annotated the images. Blei et al. [6] proposed a model based GM-Mixture and GM-LDA for image annotation. He et al. [7] extracted the local features, regional features and global features from images respectively, and annotate images by using multi-scale CRF.
Theories and methods
For the collected quality and safety inspection data, the first step is to preprocess the data, including denoising, Chinese word segmentation, stopwords removal, data protocol and data loading processing; the second step is to analyze the image similarity, including attribute similarity calculation and text similarity calculation; the third step is to analyze the image user similarity; then to calculate the total similarity. Finally, noise removal is carried out. The image annotation flow chart based on multi-source big data fusion is shown as figure 1:
Data pre-processing
The supervision and random checking data, risk monitoring data, network public opinion data, product quality damage data, notification and recall data, and other data related to product quality collected in this paper was used as the experimental data, in which the related data accompanied with images was chosen to obtain the release time, release location, image, and image-related information, as well as the location information, authentication information, social contact information and other user information. The quality test data was subject to denoising, Chinese word segmentation, stopwords removal, data protocol, and data loading in this paper.
Data denoising.
There was a large amount of noise in the collected quality inspection data, so the data had to be denoised first. First of all, the quality inspection data collected in this paper contained a large number of records without images. Since this paper studies image annotation technology, the records without images were removed. Secondly, the collected data contained a large number of special symbols, such as "#", "@", "¥", etc., which would affect the accuracy of data analysis; therefore, the special symbols had to be removed.
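A minimal sketch of these two denoising steps (dropping records without images and stripping special symbols). The record layout and the exact symbol set are assumptions made for illustration only.

```python
import re

SPECIAL_SYMBOLS = re.compile(r"[#@¥$%&*<>{}\[\]|~^]")

def denoise(records):
    """Keep only records that carry an image and strip special symbols."""
    cleaned = []
    for rec in records:                      # rec: dict with 'image' and 'text'
        if not rec.get("image"):
            continue                         # discard data without images
        rec = dict(rec)
        rec["text"] = SPECIAL_SYMBOLS.sub("", rec.get("text", ""))
        cleaned.append(rec)
    return cleaned
```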
Chinese word segmentation.
The collected text carries no word segmentation markers, but the frequency with which adjacent Chinese characters or terms appear together in a corpus can be used to judge whether they should be combined. In this paper, Chinese word segmentation was conducted with a statistics-based segmentation algorithm, which acquires empirical information by training on a large, manually segmented corpus, converts this language knowledge into statistical information, and establishes a probability model that reflects the confidence that adjacent characters or terms belong together, so as to identify new words and segment sentences into words.
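The paper does not name a specific segmentation tool; as one common statistics/dictionary-based option, the open-source jieba package can be used, as in the sketch below.

```python
import jieba  # statistics/dictionary-based Chinese word segmenter

def segment(text):
    """Split a Chinese sentence into a list of non-empty words."""
    return [w for w in jieba.lcut(text) if w.strip()]

# e.g. segment("产品质量安全抽查数据") returns a list of segmented terms
```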
Stopwords removal.
Stopwords refer to the words that increase the complexity of data analysis but fail to provide useful information, including auxiliary words, conjunctions, adverbs, etc., in the text. Therefore, these words should be removed before data analysis. For example, "next", "then", "of", "in a word" and so on.
Data protocol.
There are two kinds of attribute information in a given data text, namely image information and user information. Image information includes three valid attributes: release time, release site and text content, while user information includes three valid attributes: location information, authentication information and social contact information.
Data loading.
The image information and user information subject to de-noising, Chinese word segmentation, stopwords removal and data protocol were stored in the database respectively.
Image similarity analysis
Image information attributes include release time, release place, author information and text content, and in some records the text content is missing or irrelevant. The image information was therefore divided into attribute information similarity and text information similarity, and the image similarity was obtained as a weighted combination of the two. The construction of a bipartite graph is the first step of the attribute similarity calculation, and an example is used to illustrate the construction. Suppose there are four data records in the text. In this example the maximum time difference is 13, and there are 2 types of places and 3 types of categories, so a total of 13*2*3 attribute sets are constructed. Among them, only the records numbered 1002 and 1003 satisfy the three conditions for establishing an association, so the images numbered 1002 and 1003 are considered to be associated with the attribute set. In order to better analyze the compactness of images and attribute sets, the time and place are given weights; the higher the weights, the closer the relationship between images and attribute sets. In the weight calculation, W_T represents the weight of the time attribute, W_P represents the weight of the place attribute, and W_T + W_P = 1.
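Because the attribute similarity formula itself is not reproduced above, the following is only a minimal sketch of one plausible form, in which time closeness and place agreement are combined with weights W_T and W_P that sum to one; the scoring functions, parameter names and default values are illustrative assumptions.

```python
def attribute_similarity(img_a, img_b, w_time=0.5, w_place=0.5, max_gap=13):
    """Toy attribute similarity between two image records (dicts with
    'time' as an integer day index and 'place' as a string).
    Assumes w_time + w_place = 1, as stated for W_T and W_P above."""
    gap = abs(img_a["time"] - img_b["time"])
    time_score = max(0.0, 1.0 - gap / max_gap)                    # closer in time -> higher
    place_score = 1.0 if img_a["place"] == img_b["place"] else 0.0
    return w_time * time_score + w_place * place_score
```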
Image text similarity calculation.
In the process of image text similarity calculation, the image text is treated as a word vector composed of the words obtained after data preprocessing, and the editing (Levenshtein) distance algorithm is used to calculate the image text similarity in this study.
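A minimal sketch of the editing (Levenshtein) distance and of a normalized text similarity derived from it follows; computing the distance over the word vectors and the particular normalization used here are assumptions, since the original formula is not reproduced above.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences via dynamic programming."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # delete a[i-1]
                        dp[j - 1] + 1,                   # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))   # substitute
            prev = cur
    return dp[n]

def text_similarity(words_a, words_b):
    """Normalize the distance between two word vectors into [0, 1]."""
    if not words_a and not words_b:
        return 1.0
    d = edit_distance(words_a, words_b)
    return 1.0 - d / max(len(words_a), len(words_b))
```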
Image user similarity analysis
User similarity is calculated by weighting the similarities of users' location information, authentication information and social information.
Analysis of total similarity of images
The total similarity of two images is obtained by weighting the image similarity and the user similarity.
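Since the weighting formulas themselves are not reproduced above, the following sketch only illustrates the weighted fusion described in the preceding subsections: image similarity from attribute and text similarity, user similarity from location, authentication and social similarity, and total similarity from the two. The weight values are placeholders (the experiments later assign values such as 0.4/0.6 and 0.5/0.5 empirically).

```python
def image_similarity(attr_sim, text_sim, a1=0.4, a2=0.6):
    """Weighted fusion of attribute and text similarity (a1 + a2 = 1)."""
    return a1 * attr_sim + a2 * text_sim

def user_similarity(loc_sim, auth_sim, social_sim, b1=1/3, b2=1/3, b3=1/3):
    """Weighted fusion of location, authentication and social similarity."""
    return b1 * loc_sim + b2 * auth_sim + b3 * social_sim

def total_similarity(img_sim, usr_sim, g1=0.5, g2=0.5):
    """Total similarity as a weighted sum of image and user similarity."""
    return g1 * img_sim + g2 * usr_sim
```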
Image annotation de-noising
The similarity between the image to be annotated and the other images in the database was calculated according to the image similarity calculation method described in section 2.4. Images whose similarity exceeded the set threshold were selected from the database to form an image set, and from the existing information of each image in this set the place information, time information, text information and other annotation information could be obtained. However, some annotations were incorrect, which reduces accuracy, so the annotation information has to be denoised. TF-IDF (term frequency-inverse document frequency) was employed in this research to remove irrelevant annotation words, where N_i is the frequency of occurrence of annotation word w_i among all annotation words, N is the total number of annotation words, and I_i is the inverse document frequency of annotation word w_i in the corpus.
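A minimal TF-IDF denoising sketch follows; the exact TF-IDF formula used in the paper is not reproduced above, so the smoothed inverse-document-frequency form and the filtering rule shown here are assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_filter(annotations, corpus_docs, keep_top=10):
    """Score each candidate annotation word by tf-idf and keep the
    highest-scoring ones; low-scoring words are treated as noise.
    `annotations`: list of candidate annotation words for one image.
    `corpus_docs`: list of documents (each a list of words) used to
    estimate inverse document frequency."""
    tf = Counter(annotations)
    n_words = len(annotations)
    n_docs = len(corpus_docs)

    def idf(w):
        df = sum(1 for doc in corpus_docs if w in doc)
        return math.log((n_docs + 1) / (df + 1)) + 1    # smoothed idf (assumed form)

    scores = {w: (tf[w] / n_words) * idf(w) for w in tf}
    return sorted(scores, key=scores.get, reverse=True)[:keep_top]
```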
Selection of experimental data sets
A total of 5,000 items of product-quality-related data were collected for this paper, including product quality supervision and random checking, 12315/12365 consumer complaints, WTO/TBT recall notifications, laboratory product testing, and product injury and accident records, and the precision rate and recall rate were used to evaluate annotation performance. The precision rate is P = C / N and the recall rate is R = C / n, where P is the precision rate, C is the number of correct tag words, N is the total number of candidate tag words, R is the recall rate, and n is the number of correct annotation words.
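The computation implied by these definitions can be sketched as follows (set-based matching of tag words is an assumption here).

```python
def precision_recall(candidate_tags, correct_tags):
    """Precision P = C / N and recall R = C / n, following the definitions above:
    C = number of correct tag words among the candidates,
    N = total number of candidate tag words,
    n = number of correct annotation words for the image."""
    c = len(set(candidate_tags) & set(correct_tags))
    p = c / len(candidate_tags) if candidate_tags else 0.0
    r = c / len(correct_tags) if correct_tags else 0.0
    return p, r
```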
For the similarity analysis, the precision rate of similarity was used as the evaluation criterion; the higher the precision rate of similarity, the more credible the retrieved similar images.
Simulation analysis
The weights of the image similarity, attribute similarity and user similarity terms were assigned empirically: 0.4 and 0.6 were assigned to the two components of image similarity, and 0.5 and 0.5 to the remaining weights. As can be seen from Table 2, with the similarity threshold set at 0.8, the multi-source information text annotation method reduces the precision rate, because both correct and wrong annotation words are introduced, but greatly improves the recall rate compared with the text annotation method. When the multi-source text annotation is denoised, the precision rate is greatly improved and the recall rate remains much higher than that of the text annotation method. The denoised multi-source text annotation method is therefore suitable for image annotation of quality control data. It can be seen from Figures 2 and 3 that the precision rate increases with the similarity threshold; when the threshold exceeds 0.9, the precision rate reaches a high level and its growth gradually slows down.
Conclusions
Product quality and safety data contain a large amount of image information, which makes it difficult to find useful information for analysis and processing. This research analyzed product quality and safety information using multi-source text information annotation technology: the image similarity and the user similarity of each image were calculated, the total similarity was obtained by weighted fusion of these similarities, similar image sets were selected by setting a similarity threshold, and the image to be analyzed was then annotated using the annotations in the similar image sets. The experimental results show that image annotation based on noise reduction and multi-source big data fusion can effectively annotate images related to quality inspection, with both precision and recall better than those of the text annotation method.
"year": 2020,
"sha1": "d9389dd76e6578c981605e738b412022cd82d501",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1487/1/012020/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8ed8f6449abd07cf3cc30cdc1a87ce03f76d20de",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A Collaborative Framework for Collecting Thai Unknown Words from the Web
We propose a collaborative framework for collecting Thai unknown words found on Web pages over the Internet. Our main goal is to design and construct a Web-based system which allows a group of interested users to participate in constructing a Thai unknown-word open dictionary. The proposed framework provides supporting algorithms and tools for automatically identifying and extracting unknown words from Web pages of given URLs. The system yields the result of unknown-word candidates which are presented to the users for verification. The approved unknown words could be combined with the set of existing words in the lexicon to improve the performance of many NLP tasks such as word segmentation, information retrieval and machine translation. Our framework includes word segmentation and morphological analysis modules for handling the non-segmenting characteristic of Thai written language. To take advantage of large available text resource on the Web, our unknown-word boundary identification approach is based on the statistical string pattern-matching algorithm.
Introduction
The advent of the Internet and the increasing popularity of the Web have altered many aspects of natural language usage. As more people turn to the Internet as a new communication channel, textual information has increased tremendously and is also widely accessible. More importantly, the available information varies greatly in terms of topics and languages. It is not uncommon to find a Web page written in Thai lying adjacent to a Web page written in English via a hyperlink, or a Web page containing both Thai and English. In order to perform well in this versatile environment, an NLP system must be adaptive enough to handle this variation in language usage. One of the problems which requires special attention is unknown words.
As with most other languages, unknown words also play an extremely important role in Thailanguage NLP. Unknown words are viewed as one of the problematic sources of degrading the performance of traditional NLP applications such as MT (Machine Translation), IR (Information Retrieval) and TTS (Text-To-Speech). Reduction in the amount of unknown words or being able to correctly identify unknown words in these systems would help increase the overall system performance.
The problem of unknown words in Thai language is perhaps more severe than in English or other latin-based languages. As a result of the information technology revolution, Thai people have become more familiar with other foreign languages, especially English. It is not uncommon to hear a few English words over a course of conversation between two Thai people. The foreign words along with other Thai named entities are among the new words which are continuously created and widely circulated. To write a foreign word, the transliterated form of Thai alphabets is often used. The Royal Institute of Thailand is the official organization in Thailand which has responsibility and authority in defining and approving the use of new words. The process of defining a new word is manual and time-consuming as each word must be approved by a working group of linguists. Therefore, this traditional approach of constructing the lexicon is not a suitable solution, especially for systems running on the Web environment.
Due to the inefficiency of relying on linguists to define new lexicon entries, there must be a way to automatically or at least semi-automatically collect new unknown words. In this paper, we propose a collaborative framework for collecting unknown words from Web pages over the Internet. Our main purpose is to design and construct a system which automatically identifies and extracts unknown words found on Web pages of given URLs. The compiled list of unknown-word candidates is to be verified by a group of participants. The approved unknown words are then added to the existing lexicon along with the other related information such as meaning and POS (part of speech). This paper focuses on the underlying algorithms for supporting the process of identifying and extracting unknown words. The overall process is composed of two steps: unknown-word detection and unknown-word boundary identification. The first step is to detect the locations of unknown-word occurrences in a given text. Since the Thai language belongs to the group of non-segmenting languages, in which words are written continuously without using any explicit delimiting character, detection of unknown words can be accomplished mainly by using a word-segmentation algorithm with a morphological analysis. By using a dictionary-based word-segmentation algorithm, locations of words which are not previously included in the dictionary will be easily detected. These unknown words belong to the class of explicit unknown words and often represent the transliteration of foreign words.
The other class of unknown words is hidden unknown words. This class includes new words which are created through the combination of some existing words in the lexicon. The hidden unknown words are usually named entities such as a person's name and an organization's name. The hidden unknown words could be identified using the approaches such as n-gram generation and phrase chunking. The scope of this paper focuses only on the extraction of the explicit unknown words. However, the design of our framework also includes the extraction of hidden unknown words. We will continue to explore this issue in our future works.
Once the location of an unknown word is detected, the second step involves the identification of its boundary. Since we use the Web as our main resource, we can take advantage of its large availability of textual content. We are interested in collecting unknown words which occur more than once throughout the corpus. Unknown words which occur only once in the large corpus are not considered significant: these words may be unusual words which are not widely accepted, or may be misspelled words. Using this assumption, our approach for identifying the unknown-word boundary is based on a statistical pattern-matching algorithm. The basic idea is that the same unknown word occurring more than once is likely to appear in different surrounding contexts. Therefore, the group of characters which forms the unknown word can be extracted by analyzing the string matching patterns.
To evaluate the effectiveness of our proposed framework, experiments using a real data set collected from the Web are performed. The experiments are designed to test each of the two main steps of the framework. Variations of morphological analysis are tested for the unknown-word detection. The detection rate of unknown words was found to be as high as approximately 96%. Three variations of string pattern-matching techniques were tested for unknown-word boundary identification. The identification accuracy was found to be as high as approximately 36%. The relatively low accuracy is not the major concern since the unknown-word candidates are to be verified and corrected by users before they are actually added to the dictionary. The system is implemented via the Web-browser environment, which provides a user-friendly interface for the verification process.
The rest of this paper is organized as follows. The next section presents and discusses related works previously done in the unknownword problem. Section 3 provides an overview of unknown-word problem in the relation to the word-segmentation process. Section 4 presents the proposed framework with underlying algorithms in details. Experiments are performed in Section 5 with results and discussion. The conclusion is given in Section 6.
Previous Works
Research and study of the unknown-word problem has been carried out extensively over the past decades. Unknown words are viewed as a problematic source in NLP systems. Techniques for identifying and extracting unknown words are somewhat language-dependent. However, these techniques can be classified into two major categories, one for segmenting languages and another for non-segmenting languages. Segmenting languages, such as latin-based languages, use delimiting characters to separate written words. Therefore, once unknown words are detected, their boundaries can be identified relatively easily compared to those in non-segmenting languages.
Some examples of techniques involving segmenting languages are listed as follows. Toole (2000) used multiple decision trees to identify names and misspellings in English texts. Features used in constructing the decision trees are, for example, POS (Part-Of-Speech), word length, edit distance and character sequence frequency. Similarly, a decision-tree approach was used to solve the POS disambiguation and unknown word guessing in (Orphanos and Christodoulakis, 1999). The research in the unknown-word problem for segmenting languages is also closely related to the extraction of named entities. The difference of these techniques to those in non-segmenting languages is that the approach needs to parse the written text in word-level as opposed to character-level.
The research on the unknown-word problem for non-segmenting languages is highly active for Chinese and Japanese. Many approaches have been proposed and experimented with. Asahara and Matsumoto (2004) proposed a technique of SVM-based chunking to identify unknown words from Japanese texts. Their approach used a statistical morphological analyzer to segment texts into segments. The SVM was trained by using POS tags to identify the unknown-word boundary. Chen and Ma (2002) proposed a practical unknown word extraction system by considering both morphological and statistical rule sets for word segmentation. Chang and Su (1997) proposed an unsupervised iterative method for extracting unknown lexicons from Chinese text corpus. Their idea is to include the potential unknown words in an augmented dictionary in order to improve the word segmentation process. Their proposed approach also includes both contextual constraints and the joint character association metric to filter the unlikely unknown words. Other approaches to identify unknown words include statistical or corpus-based approaches (Chen and Bai, 1998), and the use of heuristic knowledge (Nie et al., 1995) and contextual information (Khoo and Loh, 2002). Some extensions to unknown-word identification have also been made. An example includes the determination of POS for unknown words (Nakagawa et al., 2001).
Research on unknown words in Thai has not been as extensive as in other languages. Kawtrakul et al. (1997) used the combination of a statistical model and a set of context sensitive rules to detect unknown words. Our framework has a different goal from previous works. We consider the unknown-word problem as a collaborative task among a group of interested users. As more textual content is provided to the system, new unknown words can be extracted with more accuracy. Thus, our framework can be viewed as collaborative and statistical or corpus-based.
Unknown-Word Problem in Word Segmentation Algorithms
Similar to Chinese, Japanese and Korean, the Thai language belongs to the class of non-segmenting languages in which words are written continuously without using any explicit delimiting character.
To handle non-segmenting languages, the first required step is to perform word segmentation. Most word segmentation algorithms use a lexicon or dictionary to parse texts at the character-level. A typical word segmentation algorithm yields three types of results: known words, ambiguous segments, and unknown segments. Known words are existing words in the lexicon. Ambiguous segments are caused by the overlapping of two known words. Unknown segments are the combination of characters which are not defined in the lexicon.
In this paper, we are interested in extracting the unknown words with high precision and recall results. The three types of unknown words are hidden, explicit and mixed (Kawtrakul et al., 1997). Hidden unknown words are composed of different words existing in the lexicon. To illustrate the idea, let us consider an unknown word ABCD where A, B, C, and D represent individual characters. Suppose that AB and CD both exist in a dictionary; then ABCD is considered a hidden unknown word. Explicit unknown words are newly created words using different characters. Let us again consider an unknown word ABCD. Suppose that no substring of ABCD (i.e., AB, BC, CD, ABC, BCD) exists in the dictionary; then ABCD is considered an explicit unknown word. Mixed unknown words are composed of both existing words in a dictionary and non-existing substrings. From the example of the unknown string ABCD, if at least one substring of ABCD (i.e., AB, BC, CD, ABC, BCD) exists in the dictionary, then ABCD is considered a mixed unknown word.
It can be immediately seen that the detection of the hidden unknown words is not trivial, since the parser would mistakenly assume that all the fragments of the words are valid, i.e., previously defined in the dictionary. In this paper, we limit ourselves to the extraction of the explicit and mixed unknown words. These unknown words usually represent the transliteration of foreign words. Detection of these unknown words can be accomplished mainly by using a word-segmentation algorithm with a morphological analysis. By using a dictionary-based word-segmentation algorithm, locations of words which are not previously defined in the lexicon can be easily detected.
The Proposed Framework
The overall framework is shown in Figure 1. Two major components are information agent and unknown-word analyzer. The details of each component are given as follows.
• Information agent: This module is composed of a Web crawler and an HTML parser. It is responsible for collecting HTML sources from the given URLs and extracting the textual data from the pages. Our framework is designed to support multi-user and collaborative environment. The advantage of this design approach is that unknown words could be collected and verified more efficiently. More importantly, it allows users to select the Web pages which suit their interests.
• Unknown-word analyzer: This module is composed of many components for analyzing and extracting unknown words. The word segmentation module receives text strings from the information agent and segments them into a list of words. The n-gram generation module is responsible for generating hidden unknown-word candidates. The morphological analysis module is used to form initial explicit unknown-word segments. The string pattern matching unit performs the unknown-word boundary identification task: it takes the intermediate unknown segments and identifies their boundaries by analyzing string matching patterns. The results are processed unknown-word candidates which are presented to linguists for final post-processing and verification. New unknown words are combined with the dictionary to iteratively improve the performance of the word segmentation module. Details of each component are given in the following subsections.
Unknown-Word Detection
As previously mentioned in Section 3, applying a word-segmentation algorithm to a text string yields three different segmented outputs: known, ambiguous, and unknown segments. Since our goal is simply to detect the unknown segments without solving or analyzing other related issues in word segmentation, using the longest-matching word segmentation algorithm previously proposed by Poowarawan (1986) is sufficient. An example to illustrate the word-segmentation process is given as follows. Let a text string written in Thai be denoted {a_1 a_2 ... a_i}{b_1 b_2 ... b_j}{c_1 c_2 ... c_k}. Suppose that {a_1 a_2 ... a_i} and {c_1 c_2 ... c_k} are known words from the dictionary, and {b_1 b_2 ... b_j} is an unknown word. For the explicit unknown-word case, applying the word-segmentation algorithm would yield the following segments: {a_1 a_2 ... a_i}{b_1}{b_2}...{b_j}{c_1 c_2 ... c_k}. It can be observed that the detected unknown positions for a single unknown word are the individual characters of the unknown word itself. Based on an initial statistical analysis of a Thai lexicon, the average number of characters in a word was found to be 7. This characteristic is quite different from other non-segmenting languages such as Chinese and Japanese, in which a word can be a single character or a combination of only a few characters. Therefore, to reduce the complexity of the unknown-word boundary identification task, the unknown segments can be merged to form multiple-character segments. For example, a merging of two characters per segment would give the unknown segments {b_1 b_2}{b_3 b_4}.... In the following experiment section, the merging of two to five characters per segment, as well as the merging of all unknown segments without limitation, will be compared.

Figure 1: The proposed framework for collecting Thai unknown words.
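A minimal sketch of the longest-matching segmentation and of the subsequent merging of unknown characters is given below; the function names and the tuple-based representation are illustrative, not the authors' implementation.

```python
def longest_matching_segment(text, dictionary):
    """Greedy longest-matching word segmentation. Characters that cannot be
    matched against the dictionary are returned as single-character unknown
    segments (the explicit unknown-word case described above)."""
    max_len = max(map(len, dictionary)) if dictionary else 1
    segments, i = [], 0
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + l] in dictionary:
                segments.append((text[i:i + l], True))    # known word
                i += l
                break
        else:
            segments.append((text[i], False))             # unknown character
            i += 1
    return segments

def merge_unknown(segments, k=2):
    """Merge runs of consecutive unknown characters into k-character segments."""
    merged, run = [], ""
    for seg, known in segments + [("", True)]:            # sentinel flushes the run
        if known:
            merged.extend((run[j:j + k], False) for j in range(0, len(run), k))
            run = ""
            if seg:
                merged.append((seg, True))
        else:
            run += seg
    return merged
```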
Morphological analysis is applied to guarantee grammatically correct word boundaries. Simple morphological rules are used in the framework. The rule set is based on two types of characters, front-dependent characters and rear-dependent characters. Front-dependent characters are characters which must be merged with the segment preceding them. Rear-dependent characters are characters which must be merged with the segment following them. In Thai written language, these dependent characters are certain vowels and tonal characters which have specific grammatical constraints. Applying morphological analysis helps make the unknown segments more reliable.
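The merging rules can be sketched as below; the character sets are placeholders, since the actual Thai vowels and tonal marks in the rule set are not listed here.

```python
# Placeholder character sets: the actual Thai characters used by the rule set
# are not given in the text, so these are illustrative only.
FRONT_DEPENDENT = set("xy")   # must attach to the segment before them
REAR_DEPENDENT = set("z")     # must attach to the segment after them

def apply_morphology(segments):
    """Merge dependent characters into neighbouring segments so that unknown
    segment boundaries remain grammatically plausible."""
    out = []
    for seg in segments:
        if out and seg and seg[0] in FRONT_DEPENDENT:
            out[-1] += seg            # glue to the preceding segment
        elif out and out[-1] and out[-1][-1] in REAR_DEPENDENT:
            out[-1] += seg            # preceding segment ends with a rear-dependent char
        else:
            out.append(seg)
    return out
```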
Unknown-Word Boundary Identification
Once the unknown segments are detected, they are stored into a hashtable along with their contextual information. Our unknown-word boundary identification approach is based on a string pattern-matching algorithm previously proposed by Boyer and Moore (1977).
Considering the unknown-word boundary identification as a string pattern-matching problem, there are two possible strategies: taking the longest matching pattern or taking the most frequent matching pattern as the unknown-word candidate. Both strategies can be explained more formally as follows.
Given a set of N text strings {S_1, S_2, ..., S_N}, each S_i is a series of len_i characters denoted by {c_i,1 c_i,2 ... c_i,len_i} and is marked with an unknown-segment position pos_i, where 1 <= pos_i <= len_i. Given a new string S_j with an unknown-segment position pos_j, the longest pattern-matching strategy iterates through the strings S_1 to S_N and records the longest string pattern which occurs in both S_j and another string in the set. The most frequent pattern-matching strategy likewise iterates through S_1 to S_N, but records the matching pattern which occurs most frequently.
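The two strategies can be sketched as follows; restricting candidate patterns to substrings that cover the marked unknown position, and the brute-force search used here in place of the Boyer and Moore matching, are simplifying assumptions.

```python
from collections import Counter

def boundary_candidate(target, pos, others, strategy="freq"):
    """Return an unknown-word candidate for `target`, whose unknown segment
    starts at index `pos`, by matching substrings against the other
    context strings collected for the same unknown segment."""
    counts = Counter()
    for start in range(0, pos + 1):
        for end in range(pos + 1, len(target) + 1):
            sub = target[start:end]
            hits = sum(1 for s in others if sub in s)
            if hits:
                counts[sub] += hits
    if not counts:
        return None
    if strategy == "long":                       # longest matching pattern
        return max(counts, key=len)
    return counts.most_common(1)[0][0]           # most frequent matching pattern
```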
The results from the unknown-word boundary identification are unknown-word candidates. These candidates are presented to the users for verification. Our framework is implemented via a Web-browser interface which provides a user-friendly environment. Figure 2 shows a screen snapshot of our system. Each unknown word is listed within a text field box which allows a user to edit and correct its boundary. The surrounding contexts can be used as editing guidelines and are also stored in the database.
Experiments and Results
In this section, we evaluate the performance of our proposed framework. The corpus used in the experiments is composed of 8,137 newspaper articles collected from a top-selling Thai newspaper's Web site (Thairath, 2003) during 2003. The corpus contains a total of 78,529 unknown words of which 14,943 are unique. This corpus was focused on unknown words which are transliterated from foreign languages, e.g., English, Spanish, Japanese and Chinese. We use the publicly available Thai dictionary LEXiTRON, which contains approximately 30,000 words, in our framework (Lexitron, 2006).
We first analyze the unknown-word set to observe its characteristics. Figure 3 shows the plot of the unknown-word frequency distribution. Not surprisingly, the frequency of unknown-word usage follows a Zipf-like distribution. This means that a group of unknown words is used very often, while some unknown words are used only a few times over a time period. Based on the frequency statistics of unknown words, only about 3% (2,375 words out of 78,529) occur only once in the corpus. Therefore, this finding supports the use of the statistical pattern-matching algorithm described in the previous section.
Evaluation of Unknown-Word Detection Approaches
As discussed in Section 4, multiple unknown segments could be merged to form a representative unknown segment. The merging will help reduce the complexity in the unknown-word boundary identification as fewer segments will be checked for the same set of unknown words.
The following variations of the merging approach are compared.
• No merging (none): each unknown character remains a separate one-character segment.
• Merging two to five characters per segment (2-char, 3-char, 4-char, 5-char).
• Merging all segments (all): no limit on the number of characters per segment.
We measure the performance of the unknown-word detection task using two metrics. The first is the detection rate (or recall), which is equal to the number of detected unknown words divided by the total number of previously tagged unknown words in the corpus. The second is the average number of detected positions per word. The second metric directly represents the overhead, or complexity, passed on to the unknown-word boundary identification process, because every detected position from a single unknown word must be checked by that process. The comparison results are shown in Figure 4. As expected, the approach none gives the maximum detection rate of 96.6%, while the approach all yields the lowest detection rate. Another interesting observation is that the approach 2-char yields a detection rate comparable to the approach none, while its average number of detected positions per word is about three times lower. Therefore, to reduce the complexity of the unknown-word boundary identification process, one might consider using the 2-char merging approach.

Figure 4: Unknown-word detection results
Evaluation of Unknown-Word Boundary Identification
The unknown-word boundary identification is based on a string pattern-matching algorithm. The following variations of the string pattern-matching technique are compared.
• Longest matching pattern (long): select the longest-matching unknown-word candidate.
• Most-frequent matching pattern (freq): select the most-frequent-matching unknown-word candidate.
• Most-frequent matching pattern with morphological analysis (freq-morph): similar to the approach freq, but with additional morphological analysis to guarantee that the word boundaries are grammatically correct.
The comparison among all variations of the string pattern-matching approach is performed across all unknown-segment merging approaches. The results are shown in Figure 5. The performance metric is the word-boundary identification accuracy, which is equal to the number of unknown words correctly extracted divided by the total number of tested unknown segments. It can be observed that the choice of merging approach does not greatly affect the accuracy of the unknown-word boundary identification process. However, since the approach none generates approximately 6 positions per unknown segment on average, it is more efficient to use a merging approach, which reduces the number of positions by at least a factor of 3.
The plot also shows the comparison among the three string pattern-matching approaches. Figure 6 summarizes the accuracy of each string pattern-matching approach by averaging over all merging approaches. The approach long performed poorly, with an average accuracy of 8.68%. This is not surprising, because selecting the longest matching pattern does not mean that its boundary will be identified correctly. The approaches freq and freq-morph yield similar accuracy of about 36%, with freq-morph improving on freq by less than 1%. The small improvement is due to the fact that the matching strings are already mostly grammatically correct. The remaining errors are caused by collocations in the unknown-word context: if an unknown word very frequently occurs adjacent to another word, the two are likely to be extracted together by the algorithm. Our solution to this problem is to provide the users with a user-friendly interface so that unknown-word candidates can be easily filtered and corrected.
Conclusion
We proposed a framework for collecting Thai unknown words from the Web. Our framework is composed of an information agent and an unknown-word analyzer. The task of the information agent is to collect and extract textual data from Web pages of given URLs. The unknown-word analyzer involves two processes: unknown-word detection and unknown-word boundary identification. Due to the non-segmenting characteristic of Thai written language, the unknown-word detection is based on a word-segmentation algorithm with a morphological analysis. To take advantage of the large text resource available on the Web, the unknown-word boundary identification is based on a statistical pattern-matching algorithm.
We evaluated our proposed framework on a collection of Web pages obtained from a Thai newspaper's Web site. The evaluation is divided to test each of the two processes underlying the framework. For the unknown-word detection, the detection rate is found to be as high as 96%. In addition, by merging a few characters into a segment, the number of required unknown-word extractions is reduced by at least a factor of 3, while the detection rate is largely maintained. For the unknown-word boundary identification, selecting the most frequently occurring string pattern is found to be the most effective approach. The identification accuracy was found to be as high as approximately 36%. The relatively low accuracy is not the major concern since the unknown-word candidates are to be verified and corrected by users before they are actually added to the dictionary.
"year": 2006,
"sha1": "347ba6b039d1940936cebf5e45d7f1bcc153d14c",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1273118&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "347ba6b039d1940936cebf5e45d7f1bcc153d14c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A People's Green New Deal: Obstacles and Prospects
Within the past years, the Green New Deal (GND) became the common language for Northern climate politics, offering a seeming exit path from Northern social and ecological crises while erasing an older Northern climate discourse tied to Southern demands for climate reparations and rights to development. This Eurocentric GND has become the environmental program for an equally Eurocentric social democratic renewal. This article situates the GND in world-systemic shifts, and Northern reactions to such shifts. It situates the GND as one of three possible Eurocentric solutions to the climate crisis: a great elite transformation from above; a left-liberal “reformist” resolution; a social democratic resolution. It then elaborates a possible “People’s Green New Deal,” a revolutionary transformation focused on state sovereignty, climate debt, auto-centered development, and agriculture. Within each proposed resolution, it traces the role of the land, agriculture, and peasants.
Introduction
The Green New Deal (GND) (Markey, 2019; Ocasio-Cortez, 2019) is now something like a celestial object exerting a force on discourse and politics around global climate change, inciting fear, inquiry, unease, or opportunity. In some cases, Northern scholars in platforms which sidestep the national question (funded by the Rockefeller Foundation) now try to set the agenda for the South (Cohen & Riofrancos, 2020). 1 In others, similar mechanisms pass through multilateral and traditionally Southern institutions like the United Nations Conference on Trade and Development (Perry, forthcoming; UNCTAD, 2019), or emerge as "pacts" from generally Northern-oriented scholars (Various, 2020) studded with buzzwords like "autonomy," which alchemize climate debt into cheaper and agreeable debt cancellation accords and rehearse earlier fatally flawed attempts to propose North-South integration while rejecting the national question (Ahmed, 1981). And in still others, the GND is being creatively appropriated, recast, and refinished to reflect Southern priorities (OSAE, 2020).
Yet much discussion of the GND lacks mooring in the political economy of the production of discourse, the environmental crisis, or shifts in capitalism. In what follows, light will be shed on why and how the GND emerged, as layers of space and time slammed together: shifts and challenges to US accumulation, the broader environmental crisis, and breakdown of capitalism as a political mode of rule in the core. The GND in its dominant anti-racist green Keynesian formulation emerges as one resolution, if not exactly a solution, to these instabilities. Three other possibilities will be discussed and the political alliances each implies. Focus will be brought to the potential elements of a People's Green New Deal, built on foundational demands from the South/Fourth World (Manuel, 2019), which emerge from the quest for national liberation: a renewal and strengthening of state sovereignty, the unfinished conquest of economic sovereignty, and the drive for environmental sovereignty and decolonization, in the form of the 2010 demands of climate debt settlement which emerged from Cochabamba, Bolivia, the state of the art of Southern climate politics (People's Agreement of Cochabamba, 2010).
Ecological and Political Context
The stage upon which the GND occurs has multiple political and environmental planks. First, the climate crisis and an awareness within conservative chambers of scientific-technocratic thought that something must be done about it. The main spur has been the 2018 report of the Intergovernmental Panel on Climate Change, which for the first time used alarming language, demanding immediate and vast change: "[r]apid and far-reaching transitions in energy, land, urban and infrastructure (including transport and buildings), and industrial systems," and added that such "systems transitions are unprecedented in terms of scale, but not necessarily in terms of speed, and imply deep emissions reductions in all sectors… There is no documented historic precedent for their scale" (IPCC, 2018).
The second "natural" phenomenon, although a facet of social nature under imperialist monopoly capitalism, has been environmentally uneven exchange. This is not just the well-documented phenomenon of peripheral under-and de-development through drain, primitive accumulation, and unequal exchange in the world market (Amin, 1974;Patnaik & Patnaik, 2021;Rodney, 2012). It is, furthermore, as prefigured in earlier dependency work (Ajl, 2021a), about how accumulation on a world-scale has meant grabbing peripheral use-values. 2 Those include labor, soil fertility, forests, and the degradation of Southern capacities for social reproduction (Ossome, 2020), including the export of pollution, the enclosure of atmospheric space, and rising Southern climactic disasters (Hornborg, 2006;Roberts & Parks, 2006;Warlenius et al., 2015). 3 Politically, the United States has slowly and only relatively declined from its absolute political and economic predominance within the worldsystem. It is critical to separate such a diagnosis from discourses of the US "declining empire," which understate the capacity of US power to de-develop Third World nation-states through asphyxiation (Ameli, 2020;Weisbrot & Sachs, 2019) and proxy war (Capasso, 2020;Higgins, 2018). The nuclear backstop and enduring dollar seigniorage (Hudson, 2003) testify to the endurance of the US imperialism and the value flows on which imperialism is based and safeguards. Rather, the rise of China and rising Chinese labor shares within bilateral US-Chinese traded goods indicate more fundamental shifts within the world-system (Kadri, 2021;Macheda & Nadalini, 2020). The United States can de-develop peripheral countries but cannot enfold them into the political architecture of value extraction. Asymmetric resistance movements in Yemen and elsewhere cannot be put down through US violence. 4 Second, there are rising social-democratic politics in the core, the backdrop to the GND. After the financial crisis of 2008 and Occupy Wall Street, capitalism as a mode of rule has faced serious ideological challenge. Marxism as a mode of analysis, informing a spectrum of redistributive or anti-systemic politics, has become increasingly normalized. Furthermore, amid the crumbling of the old way of rule, an array of figures arguing for sharp internal redistributions of wealth, with serious deficits and blind spots when it comes to the national question, and usually with anti-Communist politics, have emerged across the Euro-Atlantic arena: from Britain's Jeremy Corbyn to Greece's SYRIZA movement to Podemos and Bernie Sanders. Ruling classes have dismantled or evaporated such challenges, pushed them into compromise, or co-opted them. Yet in each case, the challenges have left behind radicalized if disorganized downwardly mobile petty bourgeois and working-class populations in the core, constituencies for anti-systemic politics. Against this background, the GND has emerged as one option to confront these interlacing crises.
Responding to the Crises
Responses to the crisis essentially fall within four possible camps. First is a far-right response, based on green imperial integration and capitalist engorgement of remaining un-commodified arenas, above all those of peasantries, pastoralists, and forest-dwellers. Second is a left-liberal response, based on green imperial integration, some level of core redistribution, and some extension of renewable infrastructure for the South, possibly through a commodified extension of renewable energy, alongside appropriation of peripheral rural wealth. Third is a green social-democratic response, calling for deep domestic redistribution, relying on parliamentary procedures and some extra-parliamentary pressure, and a green Marshall Plan of sorts for the South, with the echoes of shoring up imperial infrastructure which that name implies. And, fourth is a radical solution, based on widespread decommodification of social reproduction, shrinking of Northern energy use, and payment of climate debt to the periphery, aiming for North-South industrial and developmental convergence with agriculture as a keystone.
The far-right proposals imagine a greened capitalism of circular economies using industrial ecology where possible to remediate or integrate waste into the productive cycle (see, e.g., Smith, 2020). Ruling classes will be laagered up in the settler-states and the European core, and climate change will be controlled enough to avoid difficult-to-manage numbers of climate refugees (Spratt & Dunlop, 2018). Malthusian agendas logically are emerging as well (Shaw & Wilson, 2019).
Those sketching out such blueprints range from the Australian Breakthrough Institute to the Energy Transitions Commission to Transform to Net Zero. 5 These proposals share several traits: one, state-corporate partnerships; two, rhetoric about corporate-state-community co-partnership; three, embrace of the "national security sector"; four, effusion about technological salvation; five, ripping open new frontiers of land-based accumulation in the South through financializing nature or turning such landscapes into carbon farms; six, the hollowing out of Third World sovereignty. Many also gesture at the Wall Street Consensus, which seeks to reorganize "development interventions around selling development finance to the market…escort[ing] capital" into bonds, remaking Third World governments as "de-risking states" by demanding that they and their treasuries take on the risks of investment, removing them from the currently more-or-less idle capital those plans mean to "crowd in" (Gabor, 2020).
Land and bio-mass-based production, in the form of the grasslands, forests, and smallholder plots currently incorporated into accumulation on a world scale as social nature and uncompensated social reproduction (Ossome, 2020), is central. Increasingly, there are calls across the Eurocentric political spectrum for Half-Earth biodiversity corrals, which rest on the apartheid concept of humanity separated from "wild nature," a phantasm of colonial-capitalist ideologues for eons (Gilio-Whitaker, 2019; Merchant, 1990). This idea, which comes cloaked in red (Robinson, 2018) and even in Northern academic production aligned with the northern imperialist agenda (Vettese, 2018), forgets that human history is a history of landscape management (Denevan, 1992, 2001), 6 and that the Indigenous (Schuster et al., 2019) are some of the very best guardians of biodiversity.
This economistic agenda of neo-colonial improvement (for where are the Southern popular demands for Half-Earth?) runs cover for the capitalist right (Wilson, 2016), wherein reserving half the planet for "nature" means a fantastical afforestation, plopping trees where they have never been before, or a reforestation, based on reductionist ecology and dreams about prelapsarian Arcadias which justified colonial agricultural settlement and incursion (Davis, 2007). Indeed, closed-canopy forests were almost certainly mythical in Western Europe, their supposed heartland, as well as the United States (Bond, 2019; Vera, 2000). Furthermore, willy-nilly tree planting is environmentally disastrous (Pearce, 2019; Schmitz, 2016) and, when incorporated into carbon markets through REDD and REDD+, displaces smallholders and sows mono-crop tree plantations (Kansanga & Luginaah, 2019; McElwee, 2009) leading to diminishing biodiversity: "really-existing" Half-Earth (cf. Büscher et al., 2017; Kröger, 2014).
When land is not to be cordoned off for sterile tree plantations, a spectrum of core institutions advocates planting land for biofuel crops, a "clean" energy source. All such reports genuflect to potential displacement of food production and biodiversity. However, as the Energy Transitions Commission (ETC, 2020a) states, "[s]ustainable biofuels or synthetic fuels will need to scale up from today's trivial levels to play a major role in aviation and perhaps shipping," a clear embrace of these technologies to shift difficult-to-decarbonize sectors onto a fuel whose cost is paid on South-North gradients. Similarly, the ETC's national manifesto for Australia states "[f]ull decarbonisation for industries such as steel, cement and chemicals require the use of electrification, hydrogen, bioenergy and carbon capture and storage" (ETC, 2020b). The EU energy transition plan proposes increases of the biofuel mix in airborne and maritime transport (European Commission, 2020). The US Senate's Special Committee does the same, also advocating afforestation (2020). Using land for biofuel growth and planting trees while maintaining existing relations of production will worsen social and ecological outcomes, harming biodiversity, lowering water-tables, displacing smallholders, and reducing land available for smallholder crops. And even under highly optimistic projections, shifting global hydrocarbon use to biofuels would cut deeply into land and water available for agriculture.
The next option is that of the liberals, who propose full replacement of current energy use in core and periphery (Jacobson et al., 2015), preservation of capitalist property structures (Plan for Climate Change and Environmental Justice, 2020), the United States as a "green-tech" powerhouse, and dependency-inducing "aid" to the periphery to assist the renewables transition (Ocasio-Cortez, 2019). These proposals emerge from earlier US discussions about a jobs-for-all program for the underemployed or unemployed déclassé "middle class," and respond to the need for political containment of anti-systemic politics among largely the core petty bourgeoisie. The 2018 draft legislation situated itself as a response to "wage stagnation, deindustrialization, and antilabor policies" and the need to keep the planet below 1.5°C of warming (Ocasio-Cortez, 2019), and urged a "new national, social, industrial, and economic mobilization" updating the original pro-systemic New Deal (Ferguson, 1984) with a new corporatist, core-centered pact. The legislation did not pretend to be anti-systemic: it called for "transparent and inclusive consultation … and partnership with … businesses," alongside allocating "adequate capital… [to] businesses working on the Green New Deal mobilization," partnered with "appropriate ownership stakes and returns on investment" for the public. It does gesture to "consultation, collaboration, and partnership with frontline and vulnerable communities" (Ocasio-Cortez, 2019). Yet such nouns are denuded of class content. Front-line and vulnerable are geographical or spatial indices of heightened threat of direct physical risks, even if they pass through the prism of power relationships. Communities themselves do not refer clearly to material divisions based on access to resources. Indeed, the appropriation and remolding of previously radical rhetoric about community was central to the post-1960s reformation of racial liberalism (Ferguson, 2013).
Internationalism and the national question did quietly enter the Markey/Ocasio-Cortez legislation on two fronts, gesturing at its scope, limits, vulnerabilities, and constituencies necessary for maneuvering on reconfigured US progressive political topography. First, the legislation called for "Promoting the international exchange of technology, expertise, products, funding, and services, with the aim of making the United States the international leader on climate action" (Ocasio-Cortez, 2019): the United States as a new green-tech powerhouse. Such a call foretells future and oncoming maneuvering amidst a new Space Race for monopoly control and leadership over green transition technology (Rifkin, 2019;The Biden Plan … Future, 2021;World Economic Forum, 2016). The second is a small openness to the indigenous question.
A third position, the most capacious and blurry, is the diffuse social-democratic tendency in the imperial core (Aronoff et al., 2019; Chomsky & Pollin, 2020; Klein, 2019). Although this position often demonizes anti-systemic anti-colonial projects like the agrarian reform in Zimbabwe (Selwyn, 2021), or the environmental record of Venezuela (Klein, 2019), ignoring its vanguard role in anti-colonial international climate politics (Frías, 2009), it is in complex alliance with indigenous formations (The Red Nation, 2021), a positioning rife with vulnerabilities for breaking in reformist and radical directions. 7 It argues for retrofitting core countries' infrastructure, and domestic redistribution to return to at least 1950s-era levels of inequality as a "transitional plan" to eco-socialism. It emphasizes care work, borrowing from feminist economics and Northern social reproduction theory, although like that theory, it is blind to the role of peripheral labor in social reproduction on a world scale, extending beyond care work to un-commodified production of subsistence, including using "natural" landscapes, small-scale plots, and animals to do so (Ossome & Naidu, n.d.). It ambiguously calls for grants to the South while muting the touchstone Cochabamba Accords, and sometimes outright refuses calls for global energy-use convergence (Pollin, 2018). Techno-ecologically, this position remains in détente with calls for biofuels, afforestation, or half-Earth "conservation" strategies, reprising, in its blindness to the agrarian question, especially that of reproduction, economistic and core-centered plans for social transformation (Amin, 2019; Moyo et al., 2013). Politically, this position seeks to ride the Markey/AOC GND, expanding it through diffuse commitments to grassroots internationalism. Yet, it is silent on the national question.
A fourth, revolutionary solution, to which we now turn, advocates guaranteed well-being, far smaller core energy use, decommodified access to social needs, and tremendous grants of technology to the Third World through climate debt. It rests on a form of agro-ecological and indigenous management which intertwines with a renewed defense of sovereignty, demilitarization, and decolonization. 8
A People's Green New Deal
Because the world-system is divided materially, superficially antisystemic movements in the core and periphery recurrently deviate into support for rallying behind the flag in the former, and in the latter "Color Revolutions" with a pro-systemic character. The reproduction of Eurocentrism in GND discussions should not be surprising, for that reason: the dominant ideology defaults to upholding the international color-class line. But it suggests the need for clarity concerning programmatic elements of a People's GND North and South, and the complementary and distinct burdens of transformation in each.
I enter such a debate with three postulates. One: the only legitimate aim is world-scale developmental convergence on permanently sustainable ecological and relatively egalitarian social bases, alongside national-regional sovereign industrialization. Reverse-engineering that outcome means sketching political paths which lead there. Two: politics starts with location, implying distinct, converging paths. Three: agriculture and land-use management are central to the Southern and Northern transformative twenty-first-century projects (the subsequent emphasis on agriculture should not be read as rejection of sovereign and ecologically modulated industrialization).
The national question is central, with different faces in the North (Patnaik, 2015) and the South. Four elements are key. First, effective decolonization. Second, a renewed defense of sovereignty, as the political and economic gains of decolonization are now being rolled back, especially in the Arab region (Kadri, 2016). Third, an inflection of the national question visible through the prism of climate debt, which synergistically interacts with stronger sovereignty to advocate for and receive debt settlements. Fourth, the agrarian question, enfolding land, labor, ecology, and gender, requires resolution of the national question including the active defense of peripheral gains and solidarity with those speaking in national-popular grammars (Ajl, 2021b).
The colonial question is far from over. It endures de jure in many settler-states. Furthermore, settler-colonialism, as the foundation for a racist system of social power, remains a potent flash point and accelerator for anti-systemic struggles. In the North American settler-states, the struggles of the Indigenous through Idle No More and at Standing Rock have catalyzed broader consciousness among non-indigenous radicals of the simmering "domestic" national question (Dunbar-Ortiz, 2014; Estes, 2019). These struggles have a latent or explicit environmental edge, since it has been among indigenous people that rights to use-values are intertwined with rights to land and land back (Mihesuah & Hoover, 2019). Because of that, the largest arenas of biodiversity preservation are indigenous-managed. This confluence of environmental and national struggles should not imply reducing the Indigenous to any kind of prospective beneficiary of a restored antelapsarian environment. They are, in the words of indigenous scholars Andrew Curley and Majerle Lister (2020, p. 251), "modern peoples whose greatest threats are political marginalization at the hands of continued colonial processes." Nevertheless, the scope of anti-systemic politics within this arena is potentially immense, since claims to land imply a kind of reversal of primitive accumulation.
A second national question relates to the renewed defense of state sovereignty. Nation-states are the political framework within which accumulation on a world scale can deepen or endure. Yemen, Iraq, Venezuela, or Zimbabwe are targeted with sanctions and war as nation-states, leading to national losses in productive forces. In consequence, the nation is a central political-social vehicle that carries resistance to oppression. From the 1980s to today, many of the most active struggles have deployed a national-popular idiom to unite the people for change to try to place domestic wealth at the service of their popular classes (Moyo & Yeros, 2011). It is understated that the most widely supported struggle for justice, Palestine, is that of a nation fighting for land, liberation, and return. The Zimbabwe flash-point should be obvious as the most fundamental post-Cold War material challenge to the international color line.
It is in and through the national political sphere that certain decisions must be made, alliances built, and internationalisms constructed. It matters who helms Bolivia, a national-popular, sovereign, and Indigenous-led nation-state, which was the political safehold for drafting the Cochabamba documents. A renewal of de facto and an achievement of de jure national sovereignty, the bulwark behind which peoples can consider and plan for the future, secures the institution that can advocate for, receive, and manage climate debt payments. Indeed, the international state system is the basis for calculating ecological debts in world political fora. And planning on the whole, the right to determine the contours of the future, requires sovereignty, although it cannot be reduced to it. To focus on the national question does not mean suppressing social, democratic, or "inner" environmental questions: who does get what within nations, who does make that decision, and what is the environmental texture of national production and distribution? It simply reflects the hierarchical structure of the world-system, based on a sovereignty deficit in other states. This deficit has been a feature of settler-colonialism, pacted decolonization, and neo-colonialism. It has reduced the actual material foundation upon which peoples and especially lower classes can build up their own lives (Tsui et al., 2013).
Climate debt and active respect for national sovereignty and decolonization raise the two faces, the two locations, of the national question. Third World rights to ecological debt repayment or the Fourth World rights to land imply First World political struggle to put meat on textual flesh. If Palestinians have the right to national liberation or Syrians and Yemenis the right to full exercise of state sovereignty in the international system, defense of those rights through anti-systemic struggle is part of the burden of transformation. Rights imply responsibilities, including identifying how global value relations are based on certain exclusions and primitive accumulations.
The national question's ecological face is climate debt, requiring the restoration to radical developmental thought and practice of its state of the art as achieved in the 2010 meetings in Cochabamba. The ecological debt reflects how accumulation on a world scale, including its colonial and settler-colonial underpinnings, occurred alongside unequal access to waste sinks and the biosphere's capacity to absorb and process all manner of waste, especially carbon dioxide. The subset of that debt related to climate refers to the seizure of global capacity to absorb greenhouse gases, with large implications for the Third World's developmental path, preventing it from retreading the path paved in the West by cheap and easily accessible hydrocarbons: the "emissions debt." The Cochabamba meetings laid out a five-point program to settle climate debts. First, to return "occupied" atmospheric space and "decolonize" the atmosphere by removing and reducing emissions to try to fairly distribute atmospheric space and to account for the potentially clashing needs for "developmental space and equilibrium with Mother Earth." Second, to honor the debts incurred by lost development opportunities related to unrepeatable cheap development paths. Third, to honor debts related to climate-induced destruction, including lifting immigration restrictions. Fourth, to honor the "adaptation debt," the resources needed by poor countries to respond to the environmental dislocations produced by those emissions. Fifth, to refuse to wall off the climate crisis from the environmental crisis, clarifying that honoring such debts was part of the "broader ecological debt to Mother Earth" (Final Conclusions Working Group 8, 2010).
Rickard Warlenius (2012) used these positions to try to give numbers for climate debt. He found that if atmospheric space were to have been fairly allocated, the North, or the Annex I countries, would only have emitted 15% of their total emissions, c. 2008. The South, including China, would have been able to emit a bit more: 4.4%. By 2008, the North had over-emitted 746.5 GtCO2. At a price of $50 per ton of CO2, the historical debt value would have been $37.325 trillion. The IPCC estimates that a carbon price of $150-600 is needed for sub-1.5° Celsius global warming. That price would increase the debt's size to $111.975-447.9 trillion (IPCC, 2018, pp. 80-81). Bolivia demanded specifically "[p]rovision of financial resources by developed countries to developing countries amounting to at least 6% of the value of GNP of developed countries, for adaptation, technology transfer, capacity building and mitigation" (Submission by the Plurinational State of Bolivia, 2010). In 2019, US GNP was $21.584 trillion. Six percent of that sum is $1.29 trillion. The GNP of the Organization for Economic Co-operation and Development, approximately equivalent to Annex I, was roughly $54 trillion. Six percent of that sum would be $3.24 trillion per year. These numbers cannot be metabolized by a system devoted to polarized accumulation, and imply a massive burden of transformation on the North to move toward developmental convergence with the South.
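The arithmetic behind these figures can be checked directly; the short Python snippet below simply reproduces the quoted numbers from the inputs given in the text (it introduces no new estimates).

```python
# Reproduces the climate-debt arithmetic quoted above (illustrative only;
# all input figures come from the cited sources, not from new estimates).

over_emissions_gt = 746.5            # GtCO2 over-emitted by the North by 2008 (Warlenius, 2012)
tonnes = over_emissions_gt * 1e9     # gigatonnes to tonnes

for price in (50, 150, 600):         # USD per tonne of CO2
    debt_trillion = tonnes * price / 1e12
    print(f"carbon price ${price}/t -> historical debt ${debt_trillion:.3f} trillion")

# Bolivia's 6%-of-GNP demand applied to the 2019 figures quoted in the text
us_gnp_trillion = 21.584
oecd_gnp_trillion = 54.0
print(f"6% of US GNP:   ${0.06 * us_gnp_trillion:.2f} trillion per year")
print(f"6% of OECD GNP: ${0.06 * oecd_gnp_trillion:.2f} trillion per year")
```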
Agrarian Elements of a People's Green New Deal
The fourth element of convergent development outcomes is a focus on and shift in Southern and Northern agrarian systems: in the South, as the only plausible path to popular development, and in the North, as the only reasonable way to actually husband the land, eliminate the environmental crisis, produce sufficient food, and cease value extraction from the South.
National questions, anti-imperialism, and submerged but still-present Northern agrarian questions of land and labor intersect. If Third World agrarian systems oriented their production away from agro-export (Patnaik, 2015) and toward food sovereignty for their laboring classes, and the North moved away from current environmentally destructive methods of replacing labor and attention with industrial inputs and capital in agriculture, some higher percentage of core populations would be engaged in agricultural labor. This would require a large-scale agrarian reform, dismantling of the monopoly agricultural conglomerates, parity prices which reflect ecological costs, changing the relations of production to convert the sizeable rural proletariat to self-directed labor, and investments in localized processing, thereby bringing the secondary sector into the fold on planning and political levels. This would probably but not certainly require some slightly higher percentage of the US population to enter direct production, but it is more central to ensure that such labor is compensated as well as, if not better than, other forms of labor. 9 In the periphery, the mostly unwalked peasant path, based on large-scale land-to-the-tiller agrarian reforms, state support for cooperatives, due attention to historical and current "internal" oppressions related to race, gender, and ethnicity, alongside protected national agricultures and price engineering to ensure such production is as ecological as is reasonable, is the only path to Third World development. 10 Such shifts in farming systems would, furthermore, produce the raw material for sovereign industrialization, preferably through regionally inter-linked markets (Ajl, 2021b; Fergany, 1987), while industry would serve the technical upgrading of agriculture, including through provision of appropriate-scale technologies. 11 Such shifts would free up surplus for necessary heavy industrialization, especially for reasons of defense (Kontorovich, 2015), renewable energy infrastructure, and the creation of national and regional transportation infrastructures. Through careful adoption of more advanced industrial suites, especially in countries that have yet to fully extend electrification, it may be possible to leapfrog over portions of the ecologically destructive industrial path walked by the North.
Second, it is increasingly clear that the unwalked peasant path, meaning worldwide land-to-the-tiller agrarian reforms, paying attention to gender inequalities, and ideally but not necessarily through cooperatives, is the only path to Third World development. A focus on shifting social power to smallholders and the landless is the only possible way to secure surpluses for sovereign industrialization, until such time as capital grants arrive from the North. Perhaps equally centrally, agro-ecological farming can vastly increase yields on marginal lands and in the Third World may only slightly reduce yields on prime land. In at least some cases, there have been agro-ecological transitions involving decreased labor, increased yields, and decreased inputs: the holy trinity of attention-intensive ecological farming (Rosset et al., 2011). Furthermore, agro-ecological farming and pasturing using landrace and rustic species and breeds lead to superior biodiversity outcomes. Additionally, agro-ecologically managed lands are more drought-resilient and resilient to flooding because the soil retains moisture. This will be a gift beyond value in an age of global warming-induced climatic chaos, producing rural lifeboats for oncoming floods (Altieri & Nicholls, 2017; Holt-Giménez, 2002).
Third, in a bit of historical poetic justice, peasant or small farmer agro-ecology pulls CO2 from the atmosphere, as can attention-intensive pastoralism. The limits of these processes are not at all known, a consequence of epistemology sitting atop capitalist political economy: we know about what it is profitable to know about, rather than what a popular law of value would demand we know. The upper bounds of such absorption may be enough to bring atmospheric CO2 levels down to early industrial levels, if emissions are stopped soon enough. 12 It would then be somewhat ironic that it would be the small peasant class, the preserve of so-called traditional agricultural knowledge, and so often spit upon as a barely-surviving relic of the past, who holds in her hand the keys to the future of humanity.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author received no financial support for the research, authorship and/or publication of this article.
Max Ajl
https://orcid.org/0000-0002-1422-1010
Notes
1. Climate debt is missing from the entire dossier.
2. One can likewise find comment from Rodney (2012) about soil exhaustion and the importation of inappropriate agricultural technology by the colonizers.
3. Southern innovations in political ecology and especially the relationship of erosion and soil loss to colonial-capitalism remain seriously underexplored through the Eurocentric process of canonical disciplinary construction; for "precursors," see Cabral (1954), and his under-examined work on agrarian issues more broadly, and Sari (1977).
4. I follow the Yemeni government in referring to the assault on Yemen as a US attack, rather than the convention of "Saudi-led."
5. The use of "net-zero" means that ongoing emissions will be balanced out by absorption of carbon via afforestation, reforestation, and likely bio-energy, carbon capture and storage, all premised on primitive accumulation of the Southern countryside.
6. Kevin Lin, a frequent speaker at Verso-sponsored events (e.g., MCLC Resource Center (2020) on 'Viral Politics'), is a program officer at the NED-funded International Labor Rights Forum (2016).
7. The de-material "turn" of theory about indigenous people and settler-colonialism (Wolfe, 2016), as sharply distinct from indigenous studies or an older generation of work on settler-capitalism, and which needless to say completely ignores Zimbabwe and South Africa, has underpinned this transformation of settler solidarity with indigenous peoples into a knife more than capable of cutting into the social formations which have most supported indigenous struggles in the broadest sense, as in criticism of Bolivia under Evo Morales for essentially self-inflicting the coup d'état, in the name of solidarity with the country's Indigenous movements; see, for example, Cavooris (2019) (2021) and the many writings of Keston Perry and the blog, Uneven Earth.
9. I discuss some of these issues at much greater length here (Ajl, 2019).
10. In the short-run, much like matters of defensive industrialization, there is a possibility that some measure of agricultural intensification could be necessary given short-term yield increases; but this is far from established and ought not to be the default.
11. The question of appropriate technology North and South urgently needs revisiting; see GREDET (1983) and Mahjoub (1983).
12. Some of these estimates are reviewed in Ajl (2021c, ch. 6). | 2021-08-13T13:12:21.621Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "08bf7fbe6c352b946637c8350c10f1fb0c381252",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/22779760211030864",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "08bf7fbe6c352b946637c8350c10f1fb0c381252",
"s2fieldsofstudy": [
"Political Science",
"Environmental Science"
],
"extfieldsofstudy": []
} |
233974359 | pes2o/s2orc | v3-fos-license | A randomized controlled trial investigating the effects of a mediterranean-like diet in patients with multiple sclerosis-associated cognitive impairments and fatigue
Background: Among multiple sclerosis (MS) related symptoms and complications, fatigue might impact roughly 90% of patients. Decline in cognitive function is one of the other complications that occur in the first years after disease onset. The Mediterranean diet is one of the well-known anti-inflammatory dietary approaches. Therefore, this study aimed to explore the effects of a modified Mediterranean-like diet on cognitive changes and fatigue levels in comparison with a conventional standard diet over a 1-year follow-up. Methods: In the current single-blind randomized controlled trial, 34 MS patients in the Mediterranean-like diet group and 38 patients in the standard healthy diet group were studied for 1 year. The dietary interventions were modified each month by an expert nutritionist. MS-associated fatigue level was evaluated using the Modified Fatigue Impact Scale (MFIS). Cognitive assessment was also performed using the Minimal Assessment of Cognitive Function in MS (MACFIMS). Results: Intergroup comparisons demonstrated that after considering confounding variables in ANCOVA, fatigue scores appeared significantly lower in patients who were treated with the Mediterranean-like diet than in those in the standard healthy diet group [mean (95% confidence interval, CI): 33.93 (32.97-34.89) and 37.98 (36.99-38.97), respectively; P < 0.001]. However, the intergroup analysis of cognitive status only showed a difference in the mean score of the Brief Visuospatial Memory Test-Revised (BVMT-R) subtest of the MACFIMS. The BVMT-R was higher among standard healthy diet patients compared to the Mediterranean-like diet group after the intervention following adjustment for covariates [mean (95% CI): 23.73 (21.88-25.57) and 20.56 (18.60-22.51), respectively; P = 0.020]. Conclusion: In conclusion, the results of this study highlighted the higher protective effects of the Mediterranean-like diet against MS-related fatigue than the standard healthy diet. However, no significant improvement was observed in the cognitive status of MS patients after a 1-year treatment with the Mediterranean-like diet. More randomized clinical trials with larger sample sizes are needed to elucidate the effects of dietary modifications on MS-associated symptoms and complications.
Introduction
Multiple sclerosis (MS) has been recognized as a chronic disabling disorder, with an autoimmune origin, which affects about 2,500,000 individuals in the world. 1 According to the registry system of the Iranian MS Society, it was estimated that approximately 101.39 per 100,000 individuals in Tehran, the capital of Iran, were affected by MS during 1991-2014. Moreover, its prevalence has been rising at an alarming rate over the past 20 years. It was also mentioned that Tehran is ranked as one of the places with the highest prevalence of patients affected by MS. 2 MS manifests through a variety of symptoms including sensory and motor impairment in extremities, ataxia, visual disturbances, fatigue, behavioral and emotional complications such as depressive symptoms, as well as cognitive decline. These different clinical manifestations appear according to the locations of plaques/lesions within the brain and spinal cord, which occur subsequent to the demyelination of central nervous system (CNS) nerves and neurodegeneration.
Symptoms become aggravated with the progression of destruction in the myelin sheath and neuronal transmission impairment. As previously hypothesized, in addition to autoimmune processes, dysfunction in inflammatory responses also seems to be involved in the pathogenesis of MS. 1,3 The disorder affects mostly young women aged between 20 and 40 years and particularly white adults. 1 Genetic vulnerability, immune system overactivation, and environment-related factors, including smoking, childhood obesity, low 25-hydroxyvitamin D (25-OH-D) concentration, and diet-associated factors, are among the proposed risk factors for MS. 4 Although several treatment strategies are available for MS, a specific strategy has not been established to date. Current procedures can only relieve symptoms, reduce the number of relapses, and control or modify disease progression, mainly through the inhibition of immune system activation. Thus, there is still a need for new treatment options to alleviate MS, control accompanying symptoms, and reduce the impact of MS on patients' lives. 1 Among MS-related symptoms and complications, fatigue might impact roughly 90% of patients and can also occur at a very early stage of disease development. Thus, due to its adverse effects on different aspects of a person's life, including a decline in physical, mental, psychological, and social abilities, particularly cognition, 3 designing interventions to alleviate MS-related fatigue could lead to an improvement in patients' overall quality of life (QOL).
Moreover, a decline in cognitive function is one of the other MS-related complications that could occur in the initial years of disease onset. 5,6 Some of the suggested risk factors of cognitive impairment in MS patients include genetics, age, and the male gender. 7 However, it seems that there is no established and highly effective therapy to combat the cognitive decline of different disorders. In this regard, non-pharmacological options such as neuropsychological interventions and dietary modifications have been studied previously; however, their effectiveness is still under investigation. 6 The role of diet in the development, progression, and alleviation of MS-associated symptoms has been examined in a number of studies. For example, it has been suggested that following a gluten-free diet, having a diet with a high amount of plant-based foods (vegetables and fruits) and low amount of high-fat animal-based foods (meat and dairy), or increasing the intake of some beneficial dietary components such as unsaturated fatty acids, nutritional antioxidants, and anti-inflammatory agents may also be useful in preventing and/or controlling MS progression. There is, however, still no consensus for dietary advice in these patients, mostly because of the limitations of the previously designed trials, including a small number of studied subjects, short duration of research, and failure to use double-blind techniques. 4,6,[8][9][10][11] The Mediterranean diet is characterized by high amounts of mono-unsaturated and polyunsaturated fatty acids (MUFA and PUFA) and, conversely, low amounts of saturated fats along with high-fiber consumption. The diet also comprises a high content of anti-inflammatory foods and nutrients such as olive oil, fresh fruit, vegetables, and other plant foods, whole grains, as well as fish and seafood; in contrast, a limited intake of high-fat meats, sweets, and processed foods is recommended. 12 The Mediterranean diet is one of the popular complementary approaches to treating a variety of conditions, especially obesity, metabolic syndrome, diabetes, cardiovascular diseases (CVD), neurodegenerative diseases, and chronic inflammatory disorders. A growing body of evidence has shown the protective effects of the Mediterranean diet against inflammation in rheumatoid arthritis and Crohn's disease, obesity, CVD, and even in healthy subjects via suppressing the expression of a number of pro-inflammatory factors [tumor necrosis factor (TNF-α), interleukin-1 (IL-1), and IL-6], improving endothelial function, and enhancing the antioxidant capacity of the body. It has also been proposed that this type of diet might be related to decreased MS risk. [12][13][14][15] Therefore, the current single-blind, randomized, controlled trial aims to contribute to this growing area of research by exploring the effects of a modified Mediterranean-like diet on MS-associated cognitive changes and fatigue levels in comparison with a standard healthy diet over a 1-year follow-up period.
Materials and Methods
Study participants and randomization: All study participants of this single-blind, randomized, 1-year clinical trial were recruited from the MS clinic of Sina University Hospital, Tehran University of Medical Sciences, Tehran, Iran.
From among about 115 relapsing-remitting MS (RRMS) patients who met the study inclusion criteria, 80 patients were enrolled in the study. We used recommendations of the "Multiple Sclerosis Clinical Trials: Part 1" 16 as the basis for determining the inclusion/exclusion criteria. Moreover, these criteria were established to reduce the source of bias in the interpretation of results. [17][18][19] The inclusion criteria (and their rationale) were the diagnosis of RRMS based on the McDonald 2010 MS diagnostic criteria (due to the progressive nature of other types of MS and higher prevalence of RRMS), undergoing beta-interferon treatment (to rule out effects of different treatment modalities and various types of drugs that could be a source of bias in the interpretation of results), an Expanded Disability Status Scale (EDSS) score of < 5.5 (to include more independent patients and less disabled subjects because of the nature of the intervention, a dietary intervention, which necessitates the patients to be independent to follow the instructions for preparing 5 meals a day, etc.), an age of 18-55 years and body mass index (BMI) of 18-30 kg/m2 (to reduce the effect of obesity and BMI of more than 30 on MS progression and response to dietary intervention). 17 To eliminate the effects of corticosteroids on the immune system, all patients were in the remitting phase with no relapse over the past 3 months before the study. Changes in disease-modifying therapy during the study and consumption of cytotoxic medications, antipsychotic drugs, and cortisone were considered as the exclusion criteria because of their probable effects on weight gain and the response to dietary interventions, 20 and their likely impact on cognitive status and fatigue 21,22 as the outcomes of the study. Having a history of drug abuse, following any special diet because of medical reasons, suffering from any neurological condition other than MS, and psychologic or chronic disorders including head trauma, tumors, eating disorder, major depression, CVD, as well as endocrine, metabolic, liver, or kidney impairment were the other exclusion criteria. Subjects who were pregnant, breastfeeding, or planning a pregnancy were also excluded. Prior to enrollment, information on the research procedures and objectives was given to all participants. Furthermore, written informed consent was obtained from all participants.
The study protocol was approved by the institutional review board of the Iranian Center of Neurological Research (research number = 93-02-54-24463) and received ethical approval from the ethics committee of Tehran University of Medical Sciences (93-02-54-24463-316140).
Demographic and anthropometric data and clinical assessments: After enrollment, all patients were initially examined by our medical team, including expert MS-specialist neurologists, a physician, a registered dietitian, a sports medicine specialist, and a clinical psychologist. Data on demographic and socioeconomic characteristics such as age, sex, educational level, and job were collected at baseline.
Bodyweight was assessed on a Seca 755 dial column medical scale with a weighing accuracy of 0.5 kg, and height was measured using a standard stadiometer with an accuracy of 0.1 cm. BMI was estimated by dividing weight in kg by the square of height in meters.
At the beginning of the study, a neurological examination was performed to assess the level of disability using the Kurtzke EDSS and the 25-foot walk test.
Furthermore, data on past medical history and MS-related features, including the duration of disease onset, disease progression status, relapse rate during the last year, and drug abuse history, were also recorded at baseline.
Fatigue Assessment: MS-associated fatigue level was evaluated using the Modified Fatigue Impact Scale (MFIS), which was previously validated in the Iranian population. This 21-item questionnaire includes 9 items to assess physical status, 10 items to determine cognitive status, and 2 items to evaluate the psychosocial function status. 23 Cognitive Assessment: Cognitive assessment was performed at baseline and after the 1-year trial using a reliable and validated cognitive battery, the Minimal Assessment of Cognitive Function in MS (MACFIMS), whose outcome measures include, among others, the D-KEFS description score and the D-KEFS total sorting score. 24 Intervention: A registered dietitian interviewed all patients. Data on usual dietary intake of study participants were collected using 24-hour diet recall for 3 days (2 working days and 1 day on the weekend) to prescribe the specific diet for each subject considering her/his usual dietary habits and preferences. At the first visit, the energy requirement was calculated according to individuals' anthropometric assessments. After that, nutritional needs and macronutrient needs (protein, carbohydrate, and fat) were estimated. The distribution of macronutrients in the prescribed diet for patients in both groups was 18-20% for protein, 30% for lipid, and 50-52% for carbohydrate. Then, patients were visited by the same dietitian monthly until the end of the study, and their prescribed diet was adjusted according to the new weight assessments.
The energy needs and macronutrients were proportionate to the participants' age, sex, and BMI. In general, the intervention diet was modified in accordance with the Mediterranean diet, except for wine and some other unspecified foods. Thus, some special dietary instructions were made for patients in the Mediterranean-like diet group. The advice mainly focused on encouraging increased consumption of healthy oils (especially olives and olive oil), whole grains, vegetables, fruits, raw and unroasted nuts and seeds, legumes, and healthy plant-based foods. Moreover, the consumption of fish and seafood (about 2 times per week), poultry, eggs, and low-fat or skimmed dairy (daily to weekly) was recommended. Furthermore, the participants were instructed to limit the intake of red meat, fried foods, and refined grains, in addition to minimizing the consumption of simple sugar, sugary foods and beverages, processed meat, and animal-based fats to as low amounts as possible. The main modification that was made to the original Mediterranean diet included eliminating wine and some types of foods according to the Iranian culture based on religious beliefs. It is worth noting that all study participants were Muslim.
Moreover, the control diet was not simply the participants' usual diet, but rather a nutritionist-aided standard healthy diet in accordance with the United States Department of Agriculture (USDA) Dietary Guidelines for Americans, 2010. The guidelines were customized in macronutrients to be proportionate to the patients' age, sex, and BMI.
Furthermore, these guidelines propose food-based recommendations for promoting public health, attempting to ensure that patients' dietary requirements are met, and preventing the development and progression of chronic disease.
Patients in both groups were recommended to have 5 meals each day. In addition, the participants were not aware of the treatment they received (i.e., they did not know whether the diet they received was the control diet or the Mediterranean-like diet). The Mediterranean-like diet adherence scores were assessed by applying a 6-item questionnaire (scored from 0 to 14) every 12 weeks during the study; the higher the value, the higher the adherence to the Mediterranean-like diet.
Sample Size Calculation and Statistical Methods: Randomization was performed using block randomization methods based on pre-generated randomization code lists provided by the "http://www.randomization.com" website. Thus, 40 patients were randomly allocated to each arm of the study in multiple blocks of 4 and a 1:1 ratio according to age and sex. Only the researchers were aware of the assigned dietary intervention (standard healthy diet or the Mediterranean-like diet).
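For illustration only, a permuted-block allocation of this kind (blocks of four, 1:1 ratio) could be generated as in the following sketch; the seed, group labels, and function name are hypothetical and not taken from the study.

```python
import random

def block_randomize(n_participants, block_size=4,
                    groups=("Mediterranean-like", "Standard"), seed=42):
    """Generate a 1:1 allocation list in permuted blocks (illustrative sketch only)."""
    assert block_size % len(groups) == 0
    per_block = [g for g in groups for _ in range(block_size // len(groups))]
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = per_block[:]
        rng.shuffle(block)       # permute group labels within each block
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomize(80)[:8])   # first two blocks of four
```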
The normality of data was assessed using the Kolmogorov-Smirnov test. Categorical variables were analyzed by applying the chi-squared test. The paired t-test or Wilcoxon signed-rank test was applied for the comparison of intragroup changes in variables. In addition, the independent sample t-test or Mann-Whitney U test was used to make intergroup comparisons. Moreover, intergroup differences at the end of the trial were determined using the analysis of covariance (ANCOVA) test adjusted for age, changes in the Mediterranean-like diet adherence scores, and BMI, in addition to the baseline value of each variable. Data are presented as mean [standard deviation (SD)], median [interquartile range (IQR)], percentages, and ANCOVA-derived adjusted means and 95% confidence interval (CI) when appropriate. A two-sided p-value < 0.05 was considered statistically significant. All data analysis was performed using SPSS software (version 19, Chicago, IL, USA).
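As an illustrative sketch of the reported ANCOVA (not the authors' actual SPSS analysis), the end-of-trial fatigue score could be modeled in Python with statsmodels as below; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file: one row per patient, hypothetical column names.
df = pd.read_csv("ms_diet_trial.csv")

# ANCOVA expressed as a linear model: end-of-trial MFIS adjusted for baseline
# MFIS, age, change in BMI, and change in the diet adherence score.
model = smf.ols(
    "mfis_end ~ C(group) + mfis_baseline + age + bmi_change + adherence_change",
    data=df,
).fit()
print(model.summary())

# Adjusted (marginal) group means can then be obtained by predicting from the
# fitted model with the covariates fixed at their sample means.
```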
Results
From among 115 patients, 80 individuals met the study inclusion criteria and were randomly allocated to either the Mediterranean-like diet or standard healthy diet group. Subsequently, 6 patients in the Mediterranean-like diet group and 2 patients in the standard healthy diet group were excluded from the study, leaving 34 and 38 patients, respectively (Figure 1). Finally, 72 subjects completed the study. Women made up about 91.2% of patients in the Mediterranean-like diet group and 86.8% of the standard healthy diet group. The mean (SD) of age in the studied subjects in the Mediterranean-like diet and standard healthy diet groups was 34 (8) and 34 (9) years, respectively. No differences were observed between the groups in terms of baseline disease-related factors (Table 1). Figure 2 presents the studied groups' adherence to the Mediterranean diet advice at 4 study time points. Each time point represents the data obtained every 12 weeks during a year of follow-up. Although no significant differences were observed between the two groups at baseline, the adherence to this diet was higher in the group that underwent treatment with the Mediterranean-like diet. Mean (SD) Mediterranean diet adherence scores were 9.45 (2.49) in the Mediterranean-like diet group versus 7.00 (2.54) in the standard healthy diet group at the end of the trial (P < 0.001).
Changes in fatigue levels of the studied groups, according to MFIS scores, are presented in table 2. At baseline, the mean fatigue score was significantly higher among patients who were randomly allocated to the Mediterranean-like diet group than the subjects in the standard healthy diet (P = 0.040). After 1 year of treatment with either of the two diets, levels of fatigue significantly decreased within both groups (P < 0.050). However, this reduction was more significant in the Mediterranean-like diet group (P < 0.001). After considering age, changes in the Mediterranean diet adherence scores, changes in BMI, and fatigue scores at baseline in ANCOVA, fatigue scores also appeared significantly lower in the patients who were treated with the Mediterranean-like diet than those in the standard healthy diet group after a year of follow-up (P < 0.001) ( Table 2). No significant differences were observed between the study groups in terms of PASAT, SDMT, CVLT-II total learning, CVLT-II delayed recall scores, JLO, NAART, COWAT, MC9HPT, and NONDOM subtests. However, the BVMT-R subtest score was shown to be significantly higher among standard diet patients after the intervention following adjustment for covariates in ANCOVA (P = 0.020). Regarding intragroup changes, CVLT-II total learning score significantly increased among the standard healthy diet group participants, D-KEFS description score slightly decreased among patients in both groups, and DKEFS total sorting score significantly reduced only in the Mediterranean-like diet group (P < 0.050); in contrast, no significant changes were observed in the scores of other subtests in each of the study arms (Table 3).
Diet and fatigue:
The results regarding the effects of diet on fatigue demonstrated the ameliorating effects of both the Mediterranean-like and the standard healthy diet on fatigue levels after a 1-year intervention, albeit this improving effect was more pronounced among patients who were treated with the Mediterranean-like diet, even after controlling for age, changes in BMI levels, and the Mediterranean-like diet adherence scores in addition to fatigue scores at baseline in ANCOVA. The more significant effects of the Mediterranean-like diet on fatigue could be explained through various mechanisms. Approximately 80% of patients with MS experience different levels of fatigue, and this complication has been established as one of the main disabling symptoms related to MS. 25 MS is recognized as a disorder with an imbalance in T cell production toward augmented activation of T helper 1 and 17 and disturbed function of regulatory T cells. In addition, evidence shows that the concentration of TNF-α, a pro-inflammatory factor, might act as a predictor for disease progression in MS sufferers. Moreover, it seems that there might be a correlation between the level of several inflammatory markers and MS-associated fatigue. 25 In this regard, a review study reported that the augmented concentrations of IFNγ, TNF-α, and IL-6 might be related to fatigue levels in MS subjects. 3 Furthermore, although the effects of diet on MS-associated symptoms and complications are not entirely understood, 26 the protective effect of the Mediterranean-like diet, which is mainly composed of a variety of antioxidants, fiber, and unsaturated fatty acids, especially olive oil, on inflammation through suppressing inflammatory markers such as CRP, IL-6, and TNF-α has been confirmed in a growing body of research. [27][28][29][30] Our findings are in agreement with those obtained by Yadav et al. 31 In this study, the researchers examined the effects of a plant-based diet with very low fat content (n = 32 MS patients) compared to a control diet (n = 29 MS patients). They observed that this diet could significantly attenuate MS-related fatigue levels, which were assessed using the Fatigue Severity Scale (FSS) (4.89 at baseline, -0.06 points per month reduction) and MFIS-short version (9.87 at baseline, -0.23 points per month reduction), during a year of intervention. In contrast, there were no differences between the study groups in terms of brain MRI results, MS relapse rate, and disability level; however, the study was not powered to find differences in these endpoints. 31 Furthermore, it has been suggested that MS patients with higher BMI levels may experience higher fatigue levels. 32 In the current study, the BMI of the patients in both groups decreased significantly, which might also be involved in attenuating fatigue levels in the studied subjects.
Diet and cognition: In our study, MACFIMS subtests, which assess verbal learning and memory, auditory information processing speed, flexibility and calculation ability, attention, speed of information processing, visuospatial ability, verbal fluency, and executive function, did not reveal any significant differences between the studied groups either at the beginning of the study or after a year of follow-up. Only after controlling for confounding variables was the BVMT-R subtest score, which evaluates visual learning and visual memory, significantly higher among patients in the standard healthy diet group. Furthermore, intragroup comparisons showed a slight decrease in the executive function scores assessed using the D-KEFS description score in both studied groups and in sorting abilities evaluated by the D-KEFS sorting score among patients who underwent the Mediterranean-like diet intervention. Furthermore, the CVLT-II total learning score, which assessed verbal learning changes, significantly increased in the standard healthy diet group.
These findings regarding the effects of diet on cognition differ from some previous studies. A cross-sectional study on 70 patients suffering from MS in comparison with 142 healthy subjects showed that the higher the adherence to the Mediterranean diet, the lower the risk of MS. 13 Moreover, in a randomized controlled trial on 20 MS patients investigating the effects of a 6-month Mediterranean diet following calorie restriction for about 10 days, it was shown that the intervention resulted in increased levels of attention and memory. 33 However, these results should be interpreted with caution because it is not possible to distinguish the effects of the interventions separately. In addition, the lack of a control group made it difficult to interpret the findings. 33 Regarding our results, it is noteworthy that the relatively high adherence to the Mediterranean diet at baseline and throughout the study, which was observed in both groups, might be related to the main modifications that were made to the original Mediterranean diet. These modifications included eliminating wine and some types of foods according to the Iranian culture based on religious beliefs. Moreover, as mentioned in the methods section, the average macronutrient content of the diets was similar.
In addition, existing evidence shows the protective role of the Mediterranean diet against cognitive decline, especially in the elderly and individuals with dementia. Nevertheless, it is noteworthy that this proposed effect is mainly based on cross-sectional studies in which the association between the Mediterranean-like diet and age-related cognitive decline has been evaluated. According to the concluding remarks of the review studies concerning this association, more robust clinical trials and longitudinal studies are warranted in order to confirm the improving outcomes of the Mediterranean diet in cognitive impairment. [34][35][36][37] Although our research was unsuccessful in proving the positive effects of a year of the Mediterranean-like diet intervention on cognitive functions in MS patients, there could be some explanations. MS-related impairment in cognitive function is an essential reason for the disability that could be attributed to brain tissue damage, indicators of its atrophy, and macroscopic MS lesions. 7 Genetic factors, age, and being male are among the most commonly recognized risk factors for cognitive dysfunction among these patients. 7 As mentioned, it is noteworthy that a decline in cognitive function could occur during the first years after disease onset even when the level of disability is not necessarily affected by disease progression. 5,6 Moreover, the mean duration of disease among studied groups in this trial was about 8 years, which might not be an appropriate time to prevent or attenuate impaired cognition in MS patients. Furthermore, previous studies have shown that non-pharmacological treatments could hardly affect cognitive function in MS subjects. 5 In addition, it has been emphasized that the level of depressive symptoms affects different aspects of cognitive function and could be an important factor in determining the cognitive status in patients. 7 However, the depression status of patients was not evaluated in this study.
In an open-label study by Lee et al., the effects of a multi-intervention strategy consisting of a modified Paleolithic diet (that excluded gluten sources, dairy, and eggs, and increased the intake of vegetables, animal- and plant-based protein, and omega-3-rich oils), exercise practices, stress management, and neuromuscular electrical stimulation (NMES) on 19 MS subjects were investigated. 6 It was demonstrated that during a 1-year intervention, cognition and depression status improved significantly. 6 Moreover, these improvements occurred concurrently with an improvement in fatigue levels in patients. 6 Our study had some strengths and weaknesses. First, as mentioned, we did not assess the changes in depression scores of the study participants. Second, the consumption of dietary supplements was not recorded. Third, the comorbidities of MS patients and their impact on patients' overall health status, QOL, and response to treatment were not investigated. Fourth, we were not able to explore the exact energy intake or expenditure relative to the prescribed daily calories for each individual. Fifth, the near-significant higher adherence to the Mediterranean diet at baseline, which was observed in the control arm, might be a source of bias.
The study strengths include the low dropout rate, long-term duration of the study follow-up, and the definite diagnosis of RRMS according to examination by an expert MS-specialist neurologist based on the McDonald 2010 MS diagnostic criteria. Moreover, the same drug treatment was applied for all studied patients, which could reduce the risk of bias in the interpretation of study results.
Conclusion
In conclusion, the results of this study highlighted the higher protective effects of the Mediterranean-like diet against MS-related fatigue than the standard healthy diet, even after controlling for age, changes in BMI levels, the Mediterranean-like diet adherence scores, and fatigue levels at baseline. However, no significant improvement was observed in the cognitive status of MS patients after a year of dietary intervention. More randomized clinical trials with larger sample sizes are needed to elucidate the effects of dietary modifications on MS-associated symptoms and complications.
"year": 2020,
"sha1": "d364f54f61f8a7d17cef251bea30d010cf684522",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/cjn.v19i3.5424",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3be8d9b3a0c3291ad25d83b1a82da3e962594997",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246822561 | pes2o/s2orc | v3-fos-license | Embedded Quantitative MRI T1ρ Mapping Using Non-Linear Primal-Dual Proximal Splitting
Quantitative MRI (qMRI) methods allow reducing the subjectivity of clinical MRI by providing numerical values on which diagnostic assessment or predictions of tissue properties can be based. However, qMRI measurements typically take more time than anatomical imaging due to requiring multiple measurements with varying contrasts for, e.g., relaxation time mapping. To reduce the scanning time, undersampled data may be combined with compressed sensing (CS) reconstruction techniques. Typical CS reconstructions first reconstruct a complex-valued set of images corresponding to the varying contrasts, followed by a non-linear signal model fit to obtain the parameter maps. We propose a direct, embedded reconstruction method for T1ρ mapping. The proposed method capitalizes on a known signal model to directly reconstruct the desired parameter map using a non-linear optimization model. The proposed reconstruction method also allows directly regularizing the parameter map of interest and greatly reduces the number of unknowns in the reconstruction, which are key factors in the performance of the reconstruction method. We test the proposed model using simulated radially sampled data from a 2D phantom and 2D cartesian ex vivo measurements of a mouse kidney specimen. We compare the embedded reconstruction model to two CS reconstruction models and in the cartesian test case also the direct inverse fast Fourier transform. The T1ρ RMSE of the embedded reconstructions was reduced by 37–76% compared to the CS reconstructions when using undersampled simulated data with the reduction growing with larger acceleration factors. The proposed, embedded model outperformed the reference methods on the experimental test case as well, especially providing robustness with higher acceleration factors.
Introduction
Magnetic resonance imaging (MRI) is one of the most important tools for the clinical diagnosis of various diseases due to its excellent and versatile soft tissue contrast. Clinical MRI is based on expert interpretation of anatomical images of varying contrasts and thus tends to retain a level of subjectivity. Quantitative MRI (qMRI) methods, such as measurements of different relaxation times, allow reducing the subjectivity by providing numerical values on which diagnostic assessment or predictions of tissue properties can be based on.
However, such quantitative MRI measurements necessarily take more time than standard anatomical imaging. For example, in T 1ρ mapping [1,2], typically, 5-7 sets of measurements with varying spin lock times are collected to estimate the T 1ρ map. Such measurements will thus take 5-7 times longer than acquiring similar anatomical images, often approaching 10 min for a stack of quantitative 2D images.
T 1ρ imaging is based on tilting the magnetization into the xy-plane and then locking the magnetization with a spin-lock pulse of a certain amplitude and duration. Quantitative mapping, i.e., the measurement of the T 1ρ relaxation time constant, is realized by repeating the T 1ρ preparation with several different durations of the spin-lock pulse and collecting the full MR image for each of these preparations. The T 1ρ MRI contrast is particularly sensitive to molecular processes occurring at the frequency (ω 1 ) of the spin-lock pulse corresponding to the amplitude of the pulse: ω 1 = γB 1 , where γ is the gyromagnetic ratio, which ties the magnetic field strength (of the radio frequency (RF) pulse) B 1 to its resonance frequency. Generally, spin-lock pulses operate at and are limited to frequencies that correspond to slow molecular processes that are often both biologically important and altered in disease-related changes. The T 1ρ relaxation time has been reported as a promising biomarker for numerous tissues and diseases, such as different disorders of the brain [3,4], cardiomyopathy [5], liver fibrosis [6], musculoskeletal disorders [2,7,8] and many others. For a broader overview of T 1ρ relaxation and its applications, the reader is referred to the reviews by Gilani and Sepponen [1], Wang and Regatte [7] and Borthakur et al. [2].
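As a worked example of the relation ω1 = γB1 (with numbers chosen purely for illustration, not taken from this work): for protons, γ/2π is approximately 42.58 MHz/T, so a spin-lock amplitude of about 11.7 μT corresponds to a spin-lock frequency of roughly 500 Hz, as the short snippet below shows.

```python
# Illustrative spin-lock frequency calculation for protons (1H); example values only.
gamma_over_2pi = 42.58e6   # Hz per tesla, gyromagnetic ratio of 1H divided by 2*pi
B1 = 11.7e-6               # spin-lock amplitude in tesla (illustrative value)

f1 = gamma_over_2pi * B1   # spin-lock frequency in Hz, i.e., omega_1 / (2*pi)
print(f"spin-lock frequency: {f1:.0f} Hz")   # ~498 Hz
```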
Staying still in the scanner for extended periods of time can prove to be challenging, for example, for pediatric patients. The excessively long data acquisition times are also operationally impractical because they lead to a small number of studies that can be performed daily with a single MRI device. Quantitative MRI and T 1ρ imaging in particular can thus greatly benefit from using undersampled measurements, which are a natural and efficient way to reduce the scanning time for a single qMRI experiment. When using undersampled data, conventional MR image reconstruction methods, such as regridding [9], may lead to insufficient reconstruction quality. The usage of compressed sensing (CS) [10,11] methods, where an iterative reconstruction method is used together with a sparsifying transform of the image, has proven highly successful with undersampled MRI data [12].
Usage of CS methods for T 1ρ imaging has been previously studied, for example, in [13][14][15][16]. In [13], the authors used principal component analysis and dictionary learning in the first in vivo application of CS to T 1ρ reconstruction. In [14], the authors used spatial total variation (TV) together with Autocalibrating Reconstruction for Cartesian sampling (ARC) to accelerate the measurements. In [15], the authors compared 12 different sparsifying transforms in 3D T 1ρ mapping. The regularization model combining spatial TV with second-order contrast TV was found to perform the best, with satisfactory results with an acceleration factor (AF, i.e., the number of datapoints in full data divided by the number of data used in the reconstruction) up to 10 when using cartesian 3D sampling together with parallel imaging. In [16], both cartesian and radial data were reconstructed using various different regularization methods. The authors reached acceptable accuracy with AF up to 4 for the cartesian data, whereas with the radial data, the accuracy was acceptable with AF up to 10.
When using CS for T 1ρ mapping, the image series with varying spin-lock durations T SL is first reconstructed, followed by a pixel-by-pixel non-linear least squares fit of a monoexponential (or a biexponential) signal model to the reconstructed image intensity data to obtain the desired T 1ρ relaxation time map. Since the exponential signal model combining the T 1ρ and varying T SL is well known, a direct, embedded model may also be used to reconstruct the desired T 1ρ map directly from the k-space measurement data without the intermediate step of reconstructing the separate intensity maps corresponding to different T SL . Figure 1 shows a schematic of the CS T 1ρ mapping method as well as the direct, embedded model.
The direct one-step reconstruction utilizing the embedded model has clear advantages over the sequential two-step reconstruction model. First, it reduces the number of unknowns in the reconstruction problem significantly; for example, for measurements with seven spin-lock times, the number of unknowns may be reduced from 14N (one complex image for each contrast) to just 3N (T 1ρ , S 0 and a single phase map), where N is the number of pixels or voxels in a single image. Secondly, it allows the regularization of the parameter map of interest, i.e., the T 1ρ parameter map in the case of T 1ρ mapping instead of posing regularization on the complex-valued images corresponding to different contrasts in the intermediate step. Thirdly, since the signal model is embedded in the reconstruction, there is no need to decide what type of a contrast regularization model fits the data best. A disadvantage of the embedded model is that it transforms the MRI inversion into a non-linear problem, which is not necessarily convex and thus requires proper initialization. The resulting non-linear and possibly non-convex optimization problem can, however, be solved conveniently with, for example, the non-linear primal-dual proximal splitting algorithm [17].
Alternatively, various deep learning approaches have also been proposed for different aspects of quantitative MRI. For example, in [18], the authors propose the use of deep learning neural networks to reduce the number of contrasts required for an accurate model fit in myocardial T 1 mapping. Additionally, a model-guided self-supervised deep learning MRI reconstruction framework for direct T 1 and T 2 parameter mapping has been proposed [19]. For an overview of the usage of deep learning in MR relaxometry, see [20].
In this work, we propose an embedded parameterization model to directly reconstruct the T 1ρ , S 0 , and phase maps from the k-space measurement data and use the non-linear primal-dual proximal splitting algorithm to solve the problem. The proposed model is tested with 2D simulated radial phantom data and 2D cartesian ex vivo mouse kidney data. The proposed embedded model is compared with two CS models: one with spatial TV and TV over the T SL contrasts, which, we believe, is generally the most commonly used CS model in MRI, and a second CS model with spatial TV and second-order contrast TV, which in [15] was found to perform the best out of 12 different CS models for T 1ρ mapping. The first CS model is labeled "CS S1+C1", and the second CS model is labeled "CS S1C2" throughout the paper. The models are named slightly differently since in the first model, the spatial and contrast TV components are separate with two different regularization parameters, and in the second model, the spatial TV and the second-order contrast TV are under the same root with a single regularization parameter. In the cartesian test case, results from a direct inverse fast Fourier transform (iFFT) model are also shown as a reference. Reconstructions from both the CS models and the iFFT model are followed by the mono-exponential pixel-by-pixel T 1ρ fit.
Embedded T 1ρ Model
The signal model in T1ρ mapping is

S_c,k = S_0,k exp(−TSL_c / T1ρ_k), (1)

where S_c,k is the signal intensity with spin-lock time TSL_c, c denotes the contrast index and k denotes the pixel index, and S_0 is the proton density map, i.e., the signal intensity when TSL = 0. For the recovery of the T1ρ map, k-space measurement data are collected by scanning the target with multiple spin-lock times TSL_c. The measurement model mapping S_0, T1ρ, and the phase map θ to the k-space measurements then reads

m = K(S_0, T1ρ, θ) + e, (2)

where the vectors S_0, T1ρ, and θ ∈ R^N are the parameter maps to be reconstructed, and the complex measurement vector m ∈ C^(CM) is composed of k-space data with C spin-lock times, each consisting of M measurements. Further, we denote the complex-valued measurement noise by e ∈ C^(CM) and the non-linear forward model by K : R^(3N) → C^(CM). The non-linear forward model can be further decomposed to

K(S_0, T1ρ, θ) = A B(D(S_0, T1ρ), θ), (3)

where A is the block-diagonal matrix containing the Fourier transform operations. In the case of cartesian measurements, the blocks of A read A_c = U_c F, where U_c is the undersampling pattern used with the measurements with contrast index c, and F is the Fourier transform. In the case of non-cartesian measurements, we approximate the forward model using the non-uniform fast Fourier transform (NUFFT [21]), i.e., A_c = P_c F L_c, where P_c is an interpolation and sampling matrix, and L_c is a scaling matrix. Furthermore, D maps the S_0 and T1ρ parameter maps to magnitude images as

D(S_0, T1ρ) = (r_1^T, ..., r_C^T)^T, with r_c = S_0 ∘ exp(−TSL_c / T1ρ), (4)

where ∘ is the Hadamard product, i.e., elementwise multiplication, and the exponentiation and the division of the scalars TSL_c by the vector T1ρ are to be interpreted as elementwise operations. Moreover, B maps the magnitude and phase components of the images to the real and imaginary parts of the complex images and can be expressed blockwise as

B(r_c, θ) = r_c ∘ cos(θ) + i r_c ∘ sin(θ). (5)

Here too, the sin and cos are to be interpreted as elementwise operations. Note that if the phase maps vary between contrasts, the model can be easily modified to reconstruct separate phase maps for all contrasts instead of reconstructing only a single phase map. In a typical T1ρ measurement, however, the contrast preparation is usually well-separated from the imaging segment, and thus, the phase can be expected to be the same between the otherwise identical image acquisition segments.
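To make the structure of K concrete, the following minimal NumPy sketch implements the cartesian case of the forward model; the function and variable names are ours, the phase is applied as a complex exponential (equivalent to the cos/sin formulation above), and FFT scaling and shifting conventions are omitted for brevity.

```python
import numpy as np

def forward_model(S0, T1rho, theta, TSL, masks):
    """Minimal sketch of the embedded forward model K (cartesian case).

    S0, T1rho, theta : real-valued parameter maps of shape (ny, nx)
    TSL              : sequence of C spin-lock times
    masks            : sequence of C binary k-space undersampling masks, shape (ny, nx)
    Returns a list of C undersampled k-space arrays.
    """
    phase = np.exp(1j * theta)                    # B: combine magnitude and phase
    kspace = []
    for tsl, mask in zip(TSL, masks):
        magnitude = S0 * np.exp(-tsl / T1rho)     # D: mono-exponential signal model
        image = magnitude * phase
        kspace.append(mask * np.fft.fft2(image))  # A_c = U_c F
    return kspace
```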
In the embedded reconstruction, we use total variation regularization for the S_0 and T1ρ maps and L2-norm regularization for the spatial gradient of the phase map. TV regularization has been shown to be one of the best performing approaches with CS in T1ρ mapping [15], and thus, for a fair comparison, it was chosen as regularization for the S_0 and T1ρ maps. Additionally, gradient L2 regularization was used for the phase map since the phase maps are most often smooth. We also limit the S_0 and T1ρ parameter maps above a small positive value. With these, the minimization problem reads

min_{S_0, T1ρ, θ} (1/2) ‖K(S_0, T1ρ, θ) − m‖_2^2 + α_1 TV_S(S_0) + α_2 TV_S(T1ρ) + α_3 ‖∇_S θ‖_2^2 + δ_{a_1}(S_0) + δ_{a_2}(T1ρ), (6)

where TV_S denotes spatial total variation, ∇_S is the spatial discrete difference operator, and δ_{a_i} are step functions with an infinite value below the parameter a_i and 0 above or equal to the parameter. Further, α_1, α_2, and α_3 are the regularization parameters for the S_0, T1ρ, and phase maps, respectively, and a_1 and a_2 are the small positive constraints on the S_0 and T1ρ maps, respectively.
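As an illustration of how the objective in Equation (6) can be evaluated, the following sketch combines the data-fidelity term with isotropic spatial TV terms and a squared L2 penalty on the phase gradient. It reuses the forward_model sketch above; the boundary handling and the regularization parameter values are illustrative choices of ours, and the positivity constraints are omitted.

```python
import numpy as np

def spatial_tv(x):
    """Isotropic total variation of a 2D map using forward differences (sketch)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return np.sum(np.sqrt(gx**2 + gy**2))

def embedded_objective(S0, T1rho, theta, TSL, masks, m, alphas=(1e-3, 1e-2, 1e-1)):
    """Objective of the embedded model: data fit + TV(S0) + TV(T1rho) + ||grad theta||^2."""
    a1, a2, a3 = alphas
    pred = forward_model(S0, T1rho, theta, TSL, masks)   # sketch defined earlier
    data_fit = 0.5 * sum(np.sum(np.abs(p - d) ** 2) for p, d in zip(pred, m))
    gtx = np.diff(theta, axis=1, append=theta[:, -1:])
    gty = np.diff(theta, axis=0, append=theta[-1:, :])
    phase_penalty = np.sum(gtx**2 + gty**2)
    return data_fit + a1 * spatial_tv(S0) + a2 * spatial_tv(T1rho) + a3 * phase_penalty
```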
Solving the Embedded T 1ρ Reconstruction Problem
The non-linear, non-smooth optimization problem in Equation (6) is solved using the non-linear primal-dual proximal splitting algorithm proposed in [17], which is described in Algorithm 1 in its most general form. Here, the non-linear mapping H : R^(3N) → C^(CM+6N) contains the non-linear forward model K and the discrete difference matrices. The algorithm applied to the embedded T1ρ reconstruction is described in more detail in Appendix A.
Algorithm 1. Non-linear primal-dual proximal splitting presented in [17] (Algorithm 2.1); the iteration is initialized by choosing ω ≥ 0 and the step lengths τ and σ.

In our implementation, x = (S_0^T, T1ρ^T, θ^T)^T, and we initialize the S_0 and phase parts of x^0 using the iFFT or the adjoint of the NUFFT of the TSL = 0 measurements. T1ρ was initialized to a constant value of 20, and the dual variable y was initialized to 0. When the S_0 map is initialized with a constant value instead of the iFFT or adjoint of NUFFT, the algorithm generally fails to converge to feasible solutions, whereas when the T1ρ map is initialized with different feasible values, the algorithm converges to nearly the same solution, with differences mainly in convergence speed.
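The following schematic sketches the general structure of one such primal-dual iteration as we understand it; the exact update order, proximal operators, Jacobian adjoint, and step-length rule must be taken from [17] and Appendix A, and all names here are illustrative.

```python
# Schematic of one iteration of a non-linear primal-dual proximal splitting method
# (structure only, not the exact listing of [17]; all names are illustrative).
# x = (S0, T1rho, theta) stacked, y = dual variable, H = (K, difference operators).

def nlpdps_iteration(x, y, H, H_jac, prox_G, prox_Fstar, tau, sigma, omega=1.0):
    # primal step: gradient-type step using the adjoint of the Jacobian of H
    # linearized at the current primal iterate, followed by the prox of G
    x_new = prox_G(x - tau * H_jac(x).adjoint(y), tau)
    # over-relaxation of the primal variable
    x_bar = x_new + omega * (x_new - x)
    # dual step: prox of the convex conjugate F*, with H evaluated non-linearly
    # at the relaxed primal point
    y_new = prox_Fstar(y + sigma * H(x_bar), sigma)
    return x_new, y_new
```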
In addition, we use varying primal step sizes for the different blocks of the embedded reconstruction, i.e., different τ_i parameters for the S_0, T1ρ, and phase updates [22]. This essentially replaces the scalar step length parameter τ in Algorithm 1 with the diagonal matrix

T = diag(τ_1 I_N, τ_2 I_N, τ_3 I_N). (7)

The step parameters τ_1, τ_2, and τ_3 are derived from the norm of the corresponding block of the matrix ∇H. Here, however, we only use the non-linear part K of H to estimate the step lengths, as the linear part of H has only a minor impact on the norm of ∇H. We set the parameter σ to σ = 1/max(τ_i) and use ω = 1 for the relaxation parameter.
Since the block-diagonal matrix A is linear and can be normalized to 1, we have ∇K = J_B J_D. Furthermore, the product of the Jacobians can be written in terms of the diagonal blocks E_i = diag(exp(−T_SL_i / T_1ρ)) and r_i = S_0 exp(−T_SL_i / T_1ρ) (Equation (8)). Now, since the matrix J_B J_D consists of only diagonal blocks, and the index of the maximum value is the same for all E_i, it is straightforward to estimate the τ_i from the norms of the maximum values of the column-blocks of Equation (8), yielding the block-wise step lengths. In addition, we calculate the norms in every iteration and update the used τ_i and σ if the step is smaller than the previously used step.
In our experience, these step lengths may, however, prove to be too small, and in some cases, larger step lengths, especially for the T 1ρ update step, may be used to obtain faster convergence. In this work, we used a multiplier of 50 for the T 1ρ update step τ 2 in the radial simulation. Note that the step length criterion of Algorithm 1 still holds with the multiplier since τ 2 · σ remains small due to the selection of σ.
Compressed Sensing Reference Methods
We compare the embedded model to two CS models, which include a complex valued reconstruction of the images with different spin-lock times, followed by a pixel-by-pixel non-linear least squares fit of the monoexponential signal model to obtain the T 1ρ and S 0 parameter maps. The first CS reconstruction model uses spatial total variation together with first-order total variation over the varying T SL contrasts (labeled CS S1+C1), and the second one uses spatial total variation together with second-order total variation over the varying T SL contrasts (labeled CS S1C2).
The measurement model for a single contrast image is

m^c = A_c u^c + e^c,   (12)

where the superscript c denotes the contrast index, m^c ∈ C^M is the k-space data vector for contrast index c, u^c ∈ C^N is the image vector, e^c ∈ C^M is the complex-valued noise vector, and A_c is the forward model, which depends on the measurement sequence and undersampling pattern and is described in more detail in Section 2.1. With the measurement model of Equation (12), spatial total variation, and total variation over the contrasts, the CS minimization problem reads

û = arg min_u { ½ ||Au − m||²_2 + α TV_S(u) + β TV_C(u) },   (13)

where A is a block-diagonal matrix containing the forward transforms A_c corresponding to each image, u ∈ C^{NC} contains all the images vectorized, with C the number of contrasts, and m ∈ C^{MC} contains all the k-space measurements vectorized. Further, TV_S denotes spatial total variation, TV_C denotes total variation over the contrasts, and α and β are the regularization parameters of spatial and contrast TV, respectively.
The second CS minimization problem, which uses the single-regularization-parameter version of combined spatial TV and second-order contrast TV, reads

û = arg min_u { ½ ||Au − m||²_2 + α TV_{S1C2}(u) },   (14)

where

TV_{S1C2}(u) = Σ_k √( |(∇_x u)_k|² + |(∇_y u)_k|² + |(∇²_c u)_k|² ),

where ∇_x and ∇_y are the horizontal and vertical direction spatial discrete forward difference operators, respectively, ∇²_c is the second-order contrast direction discrete difference operator, and k is an index that goes through all the pixels in the set of images.
Both of the minimization problems (Equations (13) and (14)) are solved using the popular primal-dual proximal splitting algorithm of Chambolle and Pock [23].
Finally, in the CS models (and the iFFT model), we fit the mono-exponential T_1ρ signal equation pixel by pixel to the reconstructed intensity images obtained by solving either Equation (13) or Equation (14), i.e.,

(Ŝ_{0,k}, T̂_{1ρ,k}) = arg min_{S_{0,k}, T_{1ρ,k}} || |u_k| − S_{0,k} exp(−T_SL / T_{1ρ,k}) ||²_2,   (15)

where |u_k| = (|u^1_k|, ..., |u^C_k|) is the vector of reconstructed intensity values at pixel location k with T_SL contrasts 1 to C, and similarly, T_SL is the vector of T_SL values of contrasts 1 to C. Note that the final S_0 estimate is obtained from the mono-exponential model fit instead of taking the intensity values from the reconstructions with T_SL = 0.
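For the CS and iFFT pipelines, this per-pixel fit can be carried out with an off-the-shelf non-linear least-squares routine; the sketch below uses SciPy's curve_fit, and the initial guess, bounds, fallback values, and array layout are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(tsl, S0, T1rho):
    return S0 * np.exp(-tsl / T1rho)

def fit_parameter_maps(intensities, TSL):
    """intensities: array of shape (C, N) holding |u_k^c| for C contrasts and N pixels."""
    C, N = intensities.shape
    S0_map = np.zeros(N)
    T1rho_map = np.zeros(N)
    for k in range(N):
        y = intensities[:, k]
        try:
            popt, _ = curve_fit(mono_exp, TSL, y,
                                p0=(max(y[0], 1e-6), 40.0),   # rough starting values
                                bounds=([0.0, 1e-3], [np.inf, np.inf]))
            S0_map[k], T1rho_map[k] = popt
        except RuntimeError:
            S0_map[k], T1rho_map[k] = y[0], 0.0               # fit failed, e.g., pure noise
    return S0_map, T1rho_map
```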
Simulated Golden Angle Radial Data
The simulation of the radial measurement data was based on the Shepp-Logan phantom in dimensions 128 × 128, which was zero-filled to dimensions 192 × 192. The T_1ρ values of the target were set between 20 and 120 ms. The intensity with T_SL = 0 was set to a maximum of 1, and the phase of the target was set to 2πx/192, where x is the horizontal coordinate of the pixel. The images of the simulated T_1ρ, S_0, and phase maps are shown in Figure 2. To generate the varying T_SL measurements, spin-lock times of 0, 4, 8, 16, 32, 64, and 128 ms were used. For each T_SL, 302 (i.e., ∼192 · π/2) golden angle [24] spokes were generated. This corresponds to full sampling for equispaced radial spokes with image dimensions 192 × 192 in the sense that the distance between spokes at their outermost points satisfies the Nyquist criterion [25]. Finally, complex Gaussian noise at 5% of the mean of the absolute values of the full noiseless simulation was added to the simulated measurements.
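The golden-angle spoke geometry used in this simulation is straightforward to reproduce; the sketch below generates the 302 spoke angles and a normalized radial trajectory. The readout length, the modulo-π handling of the full-diameter spokes, and the normalization are assumptions made only for illustration.

```python
import numpy as np

GOLDEN_ANGLE = np.pi * (np.sqrt(5.0) - 1.0) / 2.0   # ≈ 111.25° in radians

def golden_angle_trajectory(n_spokes=302, n_readout=192):
    # each spoke is a full diameter, so angles can be reduced modulo pi
    angles = (np.arange(n_spokes) * GOLDEN_ANGLE) % np.pi
    # sample positions along each spoke, normalized to [-0.5, 0.5)
    r = (np.arange(n_readout) - n_readout / 2) / n_readout
    kx = np.outer(np.cos(angles), r)
    ky = np.outer(np.sin(angles), r)
    return kx, ky                                    # each of shape (n_spokes, n_readout)
```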
Cartesian Data from Ex Vivo Mouse Kidney
Experimental ex vivo data from a mouse kidney was acquired from a separate study. The data were collected in compliance with ethical permits (ESAVI/270/04.10.07/2017) at 9.4 T using a 19 mm quadrature RF volume transceiver (RAPID Biomedical GmbH, Rimpar, Germany) and VnmrJ3.1 Varian/Agilent DirectDrive console. T 1ρ relaxation data were collected using a refocused T 1ρ preparation scheme [26] with a spin-lock frequency of 500 Hz and T SL = 0, 8, 16, 32, 64, and 128 ms. The T 1ρ -prepared data, i.e., T 1ρ -weighted images, were collected using a fast spin echo sequence with a repetition time of 5 s, effective echo time of 5.5 ms, echo train length of 8, slice thickness of 1mm, field-of-view of 17 × 17 mm and acquisition matrix of 192 × 192. Eventually, only spin-lock times up to 64 ms were used in the reconstruction as the signal intensity of the longest spin-lock time was close to the noise level and had minimal or no effect on the reconstruction.
Reconstruction Specifics
The radial data from the 2D phantom were reconstructed with the embedded model and the two CS models with acceleration factors of 1, 5, 10, 20, 30, 50, and 101 (rounded to the nearest integer). In T_1ρ imaging, the images measured with varying spin-lock times are expected to have high redundancy in the sense that the images are expected to be structurally similar with decreasing intensity as T_SL increases, making complementary k-space sampling warranted. In complementary k-space sampling, the subsampling for any measured contrast is different from the others, meaning that each sampling adds to the spatial information gained at the other contrasts. The golden angle radial sampling is especially well suited for this as the measurements are inherently complementary (i.e., each new spoke has a different path in the k-space compared to the previous ones), and each measured spoke traverses through the central (low-frequency) part of the k-space, which contains significant information on the intensity level of the images. Thus, we sampled the golden angle data such that, for example, with an acceleration factor of 10, the first contrast used the first 30 spokes out of the 302 total, the second contrast used spokes 31 through 60, and so on, to achieve complementary k-space sampling. Examples of the radial sampling pattern for an acceleration factor of 20 and the cartesian sampling pattern for an acceleration factor of 5 are shown in Figure 3. In the embedded model, the phase regularization parameter was set to a constant value of 0.01, and the other regularization parameters were varied over a wide range. In the CS models, the regularization parameters were also varied over a wide range to find the best parameters. The reconstructions shown use the regularization parameters that yielded the smallest T_1ρ RMSE with respect to the ground truth phantom.
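A minimal sketch of this complementary spoke assignment is given below; the handling of the fully sampled case and the rounding are assumptions consistent with the numbers quoted in the text (30 spokes per contrast at an acceleration factor of 10, three per contrast at 101).

```python
import numpy as np

def assign_spokes(n_spokes=302, n_contrasts=7, acceleration=10):
    """Give each contrast its own consecutive block of golden-angle spokes."""
    if acceleration == 1:
        return [np.arange(n_spokes) for _ in range(n_contrasts)]   # full sampling
    per_contrast = int(round(n_spokes / acceleration))
    return [np.arange(c * per_contrast, (c + 1) * per_contrast)
            for c in range(n_contrasts)]
```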
The NUFFT operator used in the radial data reconstructions was implemented using the Michigan Image Reconstruction Toolbox (MIRT) [27]. The interpolator used was the minmax:kb interpolator with a neighbourhood size of 4 and scaling factor of 2.
The cartesian ex vivo mouse kidney data were reconstructed with the embedded, the iFFT, and the two CS methods with acceleration factors of 2, 3, 4, and 5 (rounded to the nearest integer). Undersampling was conducted by taking a number of full k-space rows corresponding to the desired acceleration factor, since cartesian data collection in MRI scanners is carried out line by line. For the undersampled reconstructions, 1/4 of the total sampled k-space rows were taken from around the center to include the zero frequency and enough low-frequency data in all contrasts. Half of the remaining 3/4 were taken from the top part and the other half from the bottom part. To achieve complementary sampling, the rows from the top and bottom parts were selected such that all rows were first selected once in random order before continuing to sample from the full set of rows again (Figure 3).
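The following sketch builds such complementary Cartesian row masks: a quarter of the kept rows always come from the centre of k-space, and the remaining rows are drawn without repetition from the top and bottom halves until the pools are exhausted. The matrix size, seed, and exact centring are illustrative assumptions.

```python
import numpy as np

def cartesian_row_masks(n_rows=192, n_contrasts=5, acceleration=2, seed=0):
    rng = np.random.default_rng(seed)
    keep = n_rows // acceleration           # rows kept per contrast
    n_center = keep // 4                    # 1/4 of kept rows from the centre
    start = n_rows // 2 - n_center // 2
    center = np.arange(start, start + n_center)
    outer = np.setdiff1d(np.arange(n_rows), center)
    halves = [outer[outer < n_rows // 2], outer[outer >= n_rows // 2]]
    pools = [list(rng.permutation(h)) for h in halves]
    n_per_half = (keep - n_center) // 2
    masks = []
    for _ in range(n_contrasts):
        rows = list(center)
        for pool, half in zip(pools, halves):
            for _ in range(n_per_half):
                if not pool:                # refill once every row has been used once
                    pool.extend(rng.permutation(half))
                rows.append(pool.pop())
        mask = np.zeros(n_rows, dtype=bool)
        mask[rows] = True                   # broadcast over columns for a full 2-D mask
        masks.append(mask)
    return masks
```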
In the ex vivo test case, too, the phase regularization parameter of the embedded model was set to a constant level, which was 0.0001, and the other parameters of the embedded and both CS models were varied over a wide range to find the optimal T 1ρ estimate. The embedded model reconstructions were compared to the embedded reconstruction with full data, and likewise, the CS and iFFT model reconstructions were compared to the corresponding reconstructions with full data as the true T 1ρ map is not available. Thus, the RMSEs reflect each model's relative tolerance for undersampling compared to the situation where fully sampled data are available for the particular reconstruction model.
Simulated Golden Angle Radial Data
With the radial simulated phantom data, all the methods produce reconstructions with similar RMSEs when using full data (acceleration factor 1). With undersampled data, the embedded model outperforms both the CS models as measured by RMSE of both the T_1ρ (Figure 4) and S_0 (Figure 5) maps with all acceleration factors, and the improvement increases with larger acceleration factors.

Figure 4. The top row contains the CS S1+C1 model, and the middle row the CS S1C2 model S_0 parameter maps obtained from the monoexponential fit of Equation (15), and the bottom row contains the embedded model reconstructions. Columns 2-5 show the S_0 parameter maps at acceleration factors 5, 10, 30, and 101. Images are cropped to content.
The T_1ρ maps computed using the CS models are also visibly noisier, as the model does not allow direct regularization of the T_1ρ map (Figure 4). With an acceleration factor of 101, reconstructions of both CS models start to break down, whereas the embedded model still reconstructs the target reasonably well, with RMSE values below those of the CS models at an acceleration factor of 20-30 (Figures 4-6).
Cartesian Data from Ex Vivo Mouse Kidney
In the cartesian ex vivo test case, the performance of the embedded and CS models in their relative tolerance for undersampling is similar with an acceleration factor of 2, and both CS models perform slightly worse than the embedded model with an acceleration factor of 3 (Figures 7-9). With an acceleration factor of 4, the performance of the CS models is already clearly worse than the performance of the embedded model, and while both of the CS models fail in the reconstruction with an acceleration factor of 5, the embedded model still shows a tolerance for undersampling similar to that at the smaller acceleration factors. The undersampled iFFT reconstructions shown for reference perform worse than the CS or the embedded model reconstructions with all the acceleration factors.

Figure 7. The T_1ρ maps of the cartesian ex vivo mouse kidney data with the iFFT, CS S1+C1, CS S1C2, and embedded models, as well as the RMSEs as compared to the corresponding model reconstructions with full data. The top row contains the iFFT, the second row the CS S1+C1, and the third row the CS S1C2 model T_1ρ parameter maps obtained from the monoexponential fit of Equation (15), and the bottom row contains the T_1ρ maps obtained from the embedded model reconstructions. Columns 1-4 show the parameter maps corresponding to acceleration factors 1, 3, 4, and 5. Images are cropped to content.

Figure 8. The S_0 maps of the cartesian ex vivo mouse kidney data with the iFFT, CS S1+C1, CS S1C2, and embedded models, as well as the RMSEs as compared to the corresponding model reconstructions with full data. The S_0 maps shown here are from the same reconstructions as the T_1ρ maps shown in Figure 7. The top row contains the iFFT, the second row the CS S1+C1, and the third row the CS S1C2 model S_0 parameter maps obtained from the monoexponential fit of Equation (15), and the bottom row contains the S_0 maps obtained from the embedded model reconstructions. Columns 1-4 show the parameter maps corresponding to acceleration factors 1, 3, 4, and 5. Images are cropped to content.

Figure 9. The RMSEs of the T_1ρ (left) and S_0 (right) maps of the cartesian ex vivo mouse kidney data with the embedded, CS S1+C1, CS S1C2, and iFFT models at acceleration factors 2, 3, 4, and 5.
Discussion
In this work, we proposed a non-linear, embedded T 1ρ model for direct quantitative T 1ρ reconstruction. The model is solved using the non-linear primal-dual proximal splitting algorithm [17]. We compared the embedded model reconstructions to two compressed sensing reconstructions followed by a mono-exponential T 1ρ fit in a radial simulated test case and a cartesian ex vivo test case. In the cartesian test case, we also show results from iFFT reconstructions followed by the T 1ρ fit.
In the simulated test case, where the RMSE metric with respect to the true target image is available, the embedded model outperformed both of the CS models with improvement increasing towards the higher acceleration factors. In the experimental test case with Cartesian ex vivo mouse kidney data, the RMSEs reflect the relative tolerance of the method with respect to the case where the fully sampled data were available for that particular method. In this case, the embedded model and the CS models had similar RMSEs for an acceleration factor of 2, and for higher acceleration factors, the embedded model clearly exhibited better tolerance for undersampling, indicating that the embedded model would allow the usage of higher acceleration factors than the CS models.
We believe the main factor for the better performance of the embedded model, especially with higher acceleration factors, is the reduction in the reconstructed parameters. In the simulation, in the standard CS approach, there are 14N unknowns, where N is the number of pixels in a single image, whereas in the embedded model, there are 3N unknowns. This is a reduction of 79% in the number of reconstructed unknowns. The same also holds true for the experimental case, where the reduction is 70%. Thus, when utilizing the embedded model, the problem is less undersampled-in the sense of the number of unknowns compared to the number of measurement points-than when using the CS models.
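The quoted reductions follow directly from counting unknowns per pixel, as the small check below shows.

```python
# Complex images in the CS approach contribute a real and an imaginary unknown per
# contrast and pixel; the embedded model has S0, T1rho, and theta per pixel.
for n_contrasts in (7, 5):          # radial simulation, cartesian ex vivo case
    cs_unknowns = 2 * n_contrasts
    reduction = 1.0 - 3.0 / cs_unknowns
    print(f"{n_contrasts} contrasts: {cs_unknowns}N -> 3N unknowns ({reduction:.0%} reduction)")
# 7 contrasts: 14N -> 3N unknowns (79% reduction)
# 5 contrasts: 10N -> 3N unknowns (70% reduction)
```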
The two CS models perform quite similarly with the second-order contrast TV model CS S1C2 performing slightly better overall than the CS S1+C1 model in the simulated test case. The same observation can be made in the cartesian test case up to an acceleration factor of 4. In the Cartesian test case, the CS S1+C1 model has a smaller RMSE than CS S1C2 with an acceleration factor of 5, but in this case, both of the CS models failed to produce useful T 1ρ or S 0 maps. From the practical point of view, the second-order contrast TV model with the implementation described in [15] is also more convenient than the CS S1+C1 model as it requires selecting only a single regularization parameter.
The embedded model is, however, slower to compute than the CS models. For example, our code implementation running on MATLAB (R2017b, The MathWorks, Inc., Natick, MA, USA) using an Intel Xeon E5-2630 CPU took 104 min for the embedded model and 26 min for the CS S1+C1 model with the radial simulation data with AF = 5. For the experimental cartesian data, the difference was bigger: for example, for AF = 2, the embedded model took 75 min to compute, while the CS S1+C1 model converged to stopping criterion in under a minute. The computation times could, however, be shortened, for example, by optimizing the code, running the code on a GPU, and also loosening the stopping criteria since we ran the iterations with rather strict criteria.
In the radial simulated test case, the embedded model reconstructs the target quite well even with an acceleration factor of 101, using only three spokes per T SL contrast, and 21 spokes in the whole reconstruction. In the cartesian test case, the acceleration factors that can be reached are much smaller. Even though the target used in the radial simulation is rather simple, it is evident that the radial sampling pattern, particularly with the golden angle sampling where k-space spokes are complementary and go through the center part of the k-space, allows much higher acceleration factors than a cartesian line-by-line sampling pattern. This is due to the undersampling artefacts in radial sampling (i.e., streaking) being more noise-like in the transform domain than the undersampling artefacts that arise in cartesian sampling [28,29]. This finding is aligned with the findings of [16].
Testing the proposed embedded model with radial experimental data, in vivo data, 3D data, and parallel imaging data is interesting future work, and our hypothesis is that similar results, where the embedded model outperforms the CS models, are to be expected. In addition, the embedded T_1ρ model could be tested with other regularizers, such as total generalized variation [30], which balances between minimizing the first- and second-order differences of the signal, making the results less piecewise constant; piecewise constancy is an issue for TV regularization and is visible in the embedded reconstructions in, e.g., Figure 7. Other regularizers, which could alleviate the over-smoothness, include, for example, non-local means [31] or dictionary learning [32].
As the contrast manipulation scheme of the signal acquisition and the quantitative signal equation are the only major aspects that change between different qMRI contrasts, the proposed method can easily be adapted to fit other qMRI cases as well. Besides other qMRI methods, other aspects where embedded modelling could offer further benefits are T 1ρ dispersion imaging [33,34], where the data are acquired at multiple spin-locking amplitudes, and reducing RF energy deposition by asymmetric data reduction for the different spin-lock times (i.e., less data for long spin-lock pulses). More generally, shorter scan times may allow for higher spin-lock durations and/or higher amplitude pulses, as the specific absorption rate of RF energy can be minimized via acquiring less data for the most demanding pulses. Alternatively, multi-contrast embedded modelling could offer further avenues for data reduction.
Conclusions
In this work, we proposed an embedded T 1ρ reconstruction method, which directly reconstructs the T 1ρ , S 0 , and phase maps from the measurement data. The reconstruction method also allows direct regularization of these parameter maps, and thus, a priori information about the parameter maps may be incorporated into the reconstruction. We also showed that the proposed method outperforms two compressed sensing models in two test cases, especially when using higher acceleration factors. | 2022-02-15T06:47:40.243Z | 2022-02-14T00:00:00.000 | {
"year": 2022,
"sha1": "df6da45de2c2e67bd6edbb45177dab65d43a942b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-433X/8/6/157/pdf?version=1653995840",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fc5cb8668fc1ab873c0109bdb9fb17cf4b5265c",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Physics",
"Engineering"
]
} |
265034290 | pes2o/s2orc | v3-fos-license | Mini Minds: Exploring Bebeshka and Zlata Baby Models
In this paper, we describe the University of Lyon 2 submission to the Strict-Small track of the BabyLM competition. The shared task is created with an emphasis on small-scale language modelling from scratch on limited-size data and human language acquisition. Dataset released for the Strict-Small track has 10M words, which is comparable to children's vocabulary size. We approach the task with an architecture search, minimizing masked language modelling loss on the data of the shared task. Having found an optimal configuration, we introduce two small-size language models (LMs) that were submitted for evaluation, a 4-layer encoder with 8 attention heads and a 6-layer decoder model with 12 heads which we term Bebeshka and Zlata, respectively. Despite being half the scale of the baseline LMs, our proposed models achieve comparable performance. We further explore the applicability of small-scale language models in tasks involving moral judgment, aligning their predictions with human values. These findings highlight the potential of compact LMs in addressing practical language understanding tasks.
Introduction
LMs accurately encode language-specific phenomena required for natural language understanding and generating coherent continuation of text.LMs gain language understanding about morphosyntax and grammar from large corpora during pretraining.However, they demonstrate partial functional linguistic competence when applying grammatical knowledge to novel expressions at inference time, which is caused by memorising the most occurring linguistic patterns from the training corpus and limited generalization ability of learnt linguistic representations (Wu et al., 2022;Tucker et al., 2022;Mahowald et al., 2023).
Recent pre-training dynamics studies revealed that the performance of LMs can be seen as a function of training corpus vocabulary: (1) grammatical knowledge improves with the expansion of the pretraining data vocabulary (van Schijndel et al., 2019) and ( 2) small-scale LMs can perform on par with RoBERTa if the vocabulary of used tokenizer is close to the actual human and even child's vocabulary (Liu et al., 2019).
In this paper, we introduce small-scale LMs with an architecture optimized for the STRICT-SMALL track data of the BabyLM competition (Warstadt et al., 2023). Our objective is to estimate the general performance and capabilities of shallow LMs in downstream tasks beyond the ones suggested in the evaluation pipeline of the shared task. That was achieved through two main contributions.

Contribution 1. We determine an optimal architecture of encoder-based LMs using the Tree-structured Parzen Estimator algorithm and minimal perplexity as a minimizing objective function. Our parameter search results suggest that optimal LMs have a ratio of attention heads to layers of around 2, while the ratio of previously tested and existing LMs at their base configuration is equal to one. We introduce new small-scale LMs submitted to the shared task: (i) the 4-layer encoder Bebeshka and (ii) the 6-layer decoder Zlata. The parameters of the models are presented in Table 1. Our LMs perform on par with the shared task baselines, while they are half the size of those.
Contribution 2. We investigate the alignment of small-scale LMs' predictions with shared human values in the context of moral judgment tasks. We find that shallow LMs, though trained on limited corpora, perform on par with base LMs in commonsense morality scenarios and, surprisingly, outperform existing baselines in such tasks as virtue and justice assessment. To the best of our knowledge, our work represents one of the earliest attempts to investigate how predictions made by tiny language models trained on a developmentally plausible corpus correlate with human-shared values. This paper has the following structure. After a short section dedicated to related work (§2), we first describe tokenizer training (§3.1), architecture search results and optimal model selection (§3.2), and the final architecture of the pretrained LMs (§3.3). Then, we present scores on datasets included in the shared task (§4), and we present ethics evaluation results (§5).
Related Work
Recent large LMs found applications in many NLP tasks, such as grammatical correction, text completion, and question answering; yet, their usage is constrained by their computational cost.Previous works reduce the model size and inference time with knowledge distillation, parameter quantization and other compression techniques (Sanh et al., 2019;Yao et al., 2021;Tao et al., 2022).Other studies investigated the relationship between model parameter count and performance.Kaplan et al., 2020 has introduced scaling laws, showing the power-law dependency between perplexity and the model size, as well as between the training loss and dataset size.The paradigm of scaling laws further formed the basis for recent research examining the behaviour of LMs at a small scale (Fedus et al., 2022;Fu et al., 2023).For instance, Puvis de Chavannes et al., 2021 presented results of Neural Architecture Search in limited parameter space, suggesting that optimal LMs are smaller than the existing base configurations.
In parallel, there is numerous research focusing on the efficiency of dataset size, vocabulary and representation that can help to reduce computation cost by minimizing the training steps (van Schijndel et al., 2019;Huebner et al., 2021;Schick and Schütze, 2021;Warstadt and Bowman, 2022).van Schijndel et al., 2019 have demonstrated that LMs trained on a small-volume corpus can reach human performance under some grammatical knowledge evaluation scenarios, questioning the necessity of large datasets for pre-training.Huebner et al., 2021 introduced a small encoder-based LM BabyBERTa with 5M parameters and showcased the efficiency of small training data; that work bridged the gap between earlier studies on model size reduction and optimal data size.
The aforementioned related works mainly analyse the difference between compact LMs and their larger counterparts with throughput time measures and performance on the GLUE benchmark (Wang et al., 2018). In this paper, we evaluate LMs at a small scale trained on the 10M-word dataset of the BabyLM shared task and try to complement existing research with an additional evaluation on moral judgment tasks. The decision to focus on the moral judgment task is driven by recent studies that reveal human-like biases in the moral acceptability judgments made by large language models trained on extensive corpora (Schramowski et al., 2022). This paper complements existing research by conducting a moral judgment evaluation for small language models.
Methodology
We follow the pre-training tasks of RoBERTa (Liu et al., 2019) and GPT-2 (Radford et al., 2019) and refer to these as the architecture baselines in this section. We train Bebeshka and Zlata with masked language modelling and causal language modelling objectives, respectively, and compare their vocabularies and architectures with the baselines.
Vocabulary
Training Data We use data provided within the STRICT-SMALL track of the shared task.We report statistics of the training corpus in Table 6 (Appendix A).The transcribed speech, extracted from recordings of casual speech addressed to children and educational movie subtitles, makes up the bulk of the corpus.The average length of the texts is around 30 tokens; considering that and the maximum text length, we lower the maximum sequence length from the base 512 to 128 tokens for the configuration of our LMs.
Input Representation
We follow the tokenization models of the baselines (GPT-2, RoBERTa) and BabyBERTa (Huebner et al., 2021) and use the byte-level Byte-Pair Encoding (BPE) algorithm (Sennrich et al., 2016); that is, a tokenization method based on iteratively merging the most frequently occurring byte pairs into a shared vocabulary. For the encoder Bebeshka, we build a case-insensitive vocabulary of size 8K. We find a few mismatches between Bebeshka and RoBERTa tokenization and provide more details in Appendix B. The decoder Zlata has a 30K vocabulary constructed with the default parameter settings of the Tokenizers trainer; that value also allows for bypassing the inclusion of onomatopoeic words that prevail in some transcribed texts of the shared task data.
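A minimal sketch of such a vocabulary build with the Hugging Face tokenizers library is shown below; the training file path, the minimum frequency, and the special-token list are illustrative assumptions, not the exact settings used for Bebeshka.

```python
from tokenizers import ByteLevelBPETokenizer

# Case-insensitive 8K byte-level BPE vocabulary for the encoder model.
tokenizer = ByteLevelBPETokenizer(lowercase=True)
tokenizer.train(
    files=["babylm_strict_small/train.txt"],   # hypothetical path to the 10M-word corpus
    vocab_size=8000,
    min_frequency=2,                            # assumed, default-like setting
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("bebeshka-tokenizer")
```

The 30K decoder vocabulary can be built the same way by dropping `lowercase=True` and leaving `vocab_size` at the trainer's default of 30,000.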
Model Selection
To determine an optimal configuration of the encoder LM, we use an Optuna-implemented Bayesian optimization algorithm (Akiba et al., 2019) and tune the parameters listed in Table 2 that determine the architecture. The upper bounds of the numerical parameters in the search space are chosen in accordance with the base RoBERTa configuration. We set the lower bounds to 1, ensuring a thorough exploration of architectural variations to find the optimal configuration for the masked language modelling task. Optuna features efficient implementations of optimization algorithms; in our optimization study, we use a standard Tree-structured Parzen Estimator (TPE) algorithm, which uses tree-structured representations and Parzen windows for modelling the probability distributions of hyper-parameters and their density estimation. We use TPE to sample parameter values from the search space and automated early stopping based on pruning runs with an intermediary perplexity higher than the median of preceding runs.
We set the masked language modelling loss (perplexity) of RoBERTa initialized with the TPE-sampled configuration parameters as the minimizing objective function. The perplexity is calculated on the STRICT-SMALL validation set after training the model for 10 epochs on a written English text sample (the Gutenberg and Children's Book Test corpora and Wikipedia) from the training BabyLM corpus (see Table 6). We choose a corpus sample to reduce parameter search execution time, since the dataset size directly impacts an LM's training time at each optimization step. We manually found that training on written texts yields a better score. The optimization study, with an upper bound of 100 trial runs, ran for roughly two days on a single A100 GPU.
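Schematically, such a study can be set up as below; the exact search ranges, the pruning settings, and the `train_and_eval_perplexity` helper are placeholders standing in for the procedure described above, not the authors' code.

```python
import optuna

def train_and_eval_perplexity(params, epoch):
    """Placeholder: train a RoBERTa model built from `params` for one more epoch
    on the written-text sample and return validation perplexity."""
    raise NotImplementedError

def objective(trial):
    params = {
        "num_hidden_layers": trial.suggest_int("num_hidden_layers", 1, 12),
        "num_attention_heads": trial.suggest_int("num_attention_heads", 1, 12),
        "hidden_size_per_head": trial.suggest_int("hidden_size_per_head", 16, 96),
        "intermediate_size": trial.suggest_int("intermediate_size", 128, 3072),
        "hidden_act": trial.suggest_categorical(
            "hidden_act", ["gelu", "relu", "silu", "gelu_new"]),
        "position_embedding_type": trial.suggest_categorical(
            "position_embedding_type",
            ["absolute", "relative_key", "relative_key_query"]),
        "attention_probs_dropout_prob": trial.suggest_float(
            "attention_probs_dropout_prob", 0.0, 0.5),
    }
    for epoch in range(10):
        perplexity = train_and_eval_perplexity(params, epoch)
        trial.report(perplexity, step=epoch)
        if trial.should_prune():        # prune runs above the median of earlier runs
            raise optuna.TrialPruned()
    return perplexity

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(),
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=100)
```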
Table 2 reports parameter search results for the best and worst runs according to perplexity on the validation dataset.
The optimal configuration for encoder LMs can be summarized as follows: (1) the ratio of the number of attention heads to the number of layers fluctuates within the 1.5-2 range, (2) relative key-query positional embeddings are employed, and (3) the dropout ratio for attention probabilities is 0.3. We further use these three key configuration attributes to initialize Bebeshka. Parameters other than the positional embedding type, dropout ratio, and number of layers/heads vary significantly across the top 10% of runs. Precisely, all types of activation functions, except for ReLU, appear evenly in the best range. When it comes to the hidden size per head, it takes values from 65 to 85, with a mean of 81.6. We also observe a notable deviation of the intermediary size from its mean value. Altogether, our results show that the best-performing encoder LMs are smaller than the base configuration of RoBERTa, which aligns with Puvis de Chavannes et al., 2021.
Model Pre-training
We train our models on 4 Graphcore IPUs, with two encoder layers trained on each, using mixed precision, and use the STRICT-SMALL training split.
Table 1 shows the configuration settings of our LMs.
Bebeshka The 16M-parameter model is based on the RoBERTa architecture with the determined optimal layer sizes (§3.2). We train Bebeshka on the 10M-word training corpus of the shared task. We decrease the probability of selecting masked tokens from the standard 15% to 13.5%, which is one way of approximating the effect of setting RoBERTa's unmasking probability to 0, as discussed by Huebner et al., 2021.
Zlata That decoder LM is a light 66M version of GPT-2 with 6 layers trained for 10 epochs on the training STRICT-SMALL data.Motivated by the configuration of the best encoder LM, we use the ratio of attention heads to decoder layers equal to 2. We explain parameter choice in Appendix C.
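Written down with the transformers library, the two configurations look roughly as follows. The hidden and intermediate sizes are illustrative guesses (the exact values are given in Table 1, which is not fully reproduced in this extraction), the tokenizer path refers to the sketch above, and only the layer, head, vocabulary, sequence-length, positional-embedding, dropout, and masking settings follow the text.

```python
from transformers import (RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
                          GPT2Config, GPT2LMHeadModel,
                          DataCollatorForLanguageModeling)

# Bebeshka: 4-layer encoder, 8 attention heads, relative key-query positions,
# 0.3 attention dropout, 8K vocabulary, 128-token sequences.
bebeshka_config = RobertaConfig(
    vocab_size=8000,
    num_hidden_layers=4,
    num_attention_heads=8,
    hidden_size=512,                 # assumed value (8 heads x 64), not from the paper
    intermediate_size=1024,          # assumed value
    max_position_embeddings=130,     # 128 tokens plus RoBERTa's position offset
    position_embedding_type="relative_key_query",
    attention_probs_dropout_prob=0.3,
)
bebeshka = RobertaForMaskedLM(bebeshka_config)

# Masking probability lowered from the standard 15% to 13.5%.
tok = RobertaTokenizerFast(vocab_file="bebeshka-tokenizer/vocab.json",
                           merges_file="bebeshka-tokenizer/merges.txt")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.135)

# Zlata: 6-layer decoder with 12 heads (heads-to-layers ratio of 2), 30K vocabulary;
# n_embd=768 is an assumption that lands near the reported 66M parameters.
zlata_config = GPT2Config(vocab_size=30000, n_layer=6, n_head=12,
                          n_embd=768, n_positions=128)
zlata = GPT2LMHeadModel(zlata_config)
```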
Experiments Results
In this section, we report the results submitted for the BabyLM shared task. The LMs discussed in this section are pre-trained on the shared task data, including the baselines. We use baselines that were created with existing tokenizers and released by the organizers of the BabyLM competition.
Pre-training Objective Loss
We present the evaluation results of our LMs in Table 3, where we compare their performance against the shared task baselines and evaluation runtime.While the baselines were trained for 20 epochs, we can observe competitive results by pre-training our small-scale models for ten epochs.One of the main advantages of the introduced models lies in their compact size, which makes them more efficient at inference time, even though they do not outperform the baselines by a large margin, which can be seen from the average run time.
Linguistic Minimal Pairs
Figure 1 depicts the evaluation results of our LMs on the BLiMP dataset (Warstadt et al., 2020a) in a zero-shot setting.The goal of this evaluation benchmark is to assess a model's ability to distinguish between grammatically acceptable and unacceptable sentences without specific fine-tuning on the task.The dataset consists of minimal pairs annotated with a grammatical phenomenon.We report detailed LMs accuracy scores across various BLiMP tasks in Table 7 (Appendix D).The general trend is that LMs trained on BabyLM data perform well on minimal pairs with morphological tasks, such as Irregular Forms and Determiner-Noun Agreement.
Zlata achieves the best accuracy (92.1%) on Irregular Forms and outperforms OPT-125M baseline on some morphological tasks (Anaphor Agreement, Subject-Verb Agreement), minimal pairs with a violation in phrasal movements (Filler Gap) and other tasks, such as NPI Licensing.Bebeshka achieves the second-best accuracy (64.7%) on Filler Gap minimal pairs and distinguishes sentences with syntactic errors in pronoun and its antecedent relationship or syntactic islands (Binding, Island Effects).The results show that LMs trained on the BabyLM corpus have syntactic and morphology understanding which influences their behaviour on downstream tasks discussed next.
GLUE
Table 4 shows the results of evaluating the fine-tuned LMs on a variety of tasks present in the GLUE and SuperGLUE benchmarks. Submitted to the shared task, Bebeshka and Zlata were fine-tuned for ten epochs on most of the tasks (see Appendix C for more detail). The overall trend is that the introduced small-scale encoder Bebeshka and decoder Zlata demonstrate scores comparable with the large baseline LMs on downstream tasks. This highlights that LMs at a small scale can quickly adapt to the fine-tuning task, though they may achieve lower performance in a zero-shot evaluation on BLiMP. When comparing decoder LMs, we observe that the introduced Zlata outperforms the OPT baseline on paraphrase detection (MRPC & QQP), entailment/contradiction detection (MNLI), and question answering (BoolQ) downstream tasks. As for the encoder LMs, the encoder Bebeshka has moderate scores compared to RoBERTa, which, in general, achieves the best scores on GLUE. However, Bebeshka outperforms the OPT-125M baseline on the QQP and MRPC tasks, with F1 scores of 73.5% and 66.4%, respectively.
The most difficult task for shallow LMs seems to be Recognizing Textual Entailment (RTE).We suppose that LMs trained on STRICT-SMALL corpus with an average length of 28.65 tokens (Table 6, Appendix D) or restricted to the 128 maximum sequence length, can perform well on datasets with short sequences and contexts, which can explain lower results on some fine-tuned tasks; another issue can be the fine-tuning hyper-parameters search: perhaps, shallow LMs require more epochs to improve the submitted scores.
Mixed Signals Generalization
The MSGS dataset introduced by (Warstadt et al., 2020b) comprises 20 binary classification tasks and is used to test whether a LM has a preference for linguistic or surface generalizations.The evaluation pipeline of the shared task includes 11 MSGS tasks; we report obtained accuracy scores for the fine-tuned LMs in Table 8 (Appendix D).The Matthew's Correlation Coefficient (MCC; Matthews, 1975) scores suggest that all LMs fine-tuned in a controlled setting show better results (>0.9) than those fine-tuned in an ambiguous scenario, with the only exception for Control Raising category; the highest scores are achieved on Lexical content and Relative position tasks.Lexical Content is a task of classifying sentences with "the" (the mouse vs a mouse) when Relative Position is a task of determining whether "the" precedes "a" in a sentence.Decoder LMs perform similarly on MSGS tasks chosen for the BabyLM competition, excluding Syntactic Category-Lexical Content (SC-LC) classification task, where SC is a task of detecting sentences with adjectives.A decoder LM Zlata seems to adopt surface generalization during fine-tuning on unambiguous data (SC-LC), whereby the baseline model OPT learns to represent linguistic features.Bebeshka behaves likewise on the Syntactic Category task and reaches scores close to RoBERTa on Lexical Content and Main Verb classification problems, suggesting that Bebeshka tends to encode surface features.
Age of Acquisition
Portelance et al., 2023 introduced a method for measuring the age-of-acquisition in LMs compared to the actual age-of-acquisition by English American children on words set from the CHILDES corpus.Table 9 (Appendix D) illustrates that deviation measured in months for the introduced and baseline LMs.The models Zlata and Bebeshka demonstrate comparable scores to the baselines.
Moral Judgments
In this section, we present the results of additional experiments on moral judgements that we conduct outside of the main shared task evaluation.We evaluate small-scale LM's understanding of fundamental moral principles in various scenarios covered by ETHICS benchmark (Hendrycks et al., 2020).The benchmark consists of 5 morality judgment tasks, including reasonable and fair justice, virtue responses, permitted behaviour depending on context-specified constraints (deontology ethics), pleasant scenario choice (utilitarianism ethics), and commonsense morality.We grid search hyper-parameters for our LMs and use test splits for further evaluation.We fine-tune Bebeshka for ten epochs on each of the tasks and evaluate Zlata in a few-shot setting (see more details in Appendix C).Table 5 outlines the moral judgements classification results.Our small LMs generally outperform existing baselines with respect to accuracy scores on sentence-level tasks, and the best results are achieved on Virtue moral judgements.
We suggest that the efficiency of small LMs in these tasks can be explained by some properties of pre-training data, such as lower mean sequence length, transcribed speech prevalence with single-word reactions or responses, childrendirected speech, and imperatives.For example, Virtue task is a collection of scenario-trait pairs, such as "Jordan will never do harm to his friends.<sep> caring", which have a structure similar to one-word responses in transcribed dialogues.
Conclusion and Future Work
In this paper, we present our results for the STRICT-SMALL track of the BabyLM competition.Our submission to the shared task consists of two LMs, namely encoder Bebeshka and decoder Zlata.We first search for an optimal architecture, minimizing perplexity on the released training corpus, and find that the best models have around 6 encoder layers on average, down from 12 layers of existing base models, and have twice as many attention heads.When the number of encoder layers fluctuates among the best models, we find that they all have an attention-heads-to-layers ratio of two, which we further use for building our LMs.Our final LMs, which are scaled-down versions of RoBERTa and GPT-2 with a total of 16M and 66M parameters, perform better than the baseline LMs on development and test BabyLM corpora.Zero-shot evaluation results suggest that our shallow LMs have some basic grammatical knowledge of language syntax and morphology.The introduced LMs also perform better than OPT model on several downstream tasks when having 2 times fewer parameters.We also observe a good performance of our small LMs in a range of ethics judgment tasks, showing that their vocabulary and after-training knowledge can positively contribute to the morality assessment of the described scenarios.These results can serve as baselines for the evaluation of ethical judgment capabilities in small language models.The achieved scores may be attributed to the interplay between ethical and linguistic rules, particularly in encoding action verbs used to describe moral and immoral behaviour.This aspect can be further explored by examining the usage of verbs in various syntactic contexts within the BabyLM corpus and their encoding by trained language models.
In our future work, we plan to determine more capabilities of small LMs, trained on small-size corpora, such as short stories data containing words only 4-year-old children can understand (Eldan and Li, 2023).We also plan to extend our experiments with an analysis of fine-tuning dynamics to investigate how small models adapt to the tasks.
Limitations
Despite achieving good performance on BabyLM test data, our approach has some limitations.We use a variant of Bayesian optimization (TPE algorithm, §3.2) to find an optimal range of parameters that we further use for building our LMs.We predefine constraints for parameters (Table 2) that narrows down the search space and can influence further parameter distributions built with Parzen (kernel density) estimators and, thus, future candidate selection.Future work can benefit from both expanded search space and parameter limits range.The architecture of our small language models, including the number of layers, heads, and hidden layer size, can serve as a minimum lower bound for the parameter search space.
B Tokenization Tests
We compare the tokenization of Bebeshka and RoBERTa on the corpus of STRICT-SMALL track and find that the tokenization coincides on 87% of the sequences.We manually analyse a random sample of 100 non-matching tokenization cases and find that those fall on transcribed speech sentences with no more than three words or include two words missing in RoBERTa vocabulary but processed as a whole word by Bebeshka LM (sweetie and duke).We also found that the RoBERTa tokenizer splits non-capitalised first names or other terms used for addressing (th-omas, m-ister, mom-my) opposed to Bebeshka.
C.1 Pre-training parameters
We experimented with the same configuration for our decoder LM Zlata as we used for Bebeshka, including 4 layers and the same type of positional embeddings; however, that always resulted in gradient underflow, and the loss was not decreasing. We manually found the 6-layer and absolute positional embedding configuration by increasing and traversing the values of the parameters that were grid searched for Bebeshka (Table 2). We pre-train our LMs using 4x IPUs freely available in Paperspace and use the IPU Trainer API. We use auto loss scaling with an initial value of 16384 and half precision for training our LMs. Training with IPUs requires specifying an IPU configuration containing instructions for mapping layers between the devices; for Bebeshka, we use one layer per IPU, and for Zlata, we set that parameter to 2. For both LMs, we use a per-device training batch size of 1 and 64 gradient accumulation steps. Each batch consists of 1,000 concatenated data examples from the training corpus. The construction of the computational graph took under 10 minutes for both LMs.
C.2 Fine-tuning parameters
BabyLM Evaluation For Bebeshka fine-tuning, we use the parameters used by default in the evaluation pipeline of the competition, that is, a learning rate of 5e-5, a batch size of 64, and a maximum of 10 epochs. For Zlata fine-tuning, we use a learning rate of 1e-4 and fine-tune on the tasks for 5 epochs. That allowed us to reduce fine-tuning time. Note that the performance of our LMs could be improved upon the submitted results if the optimal hyper-parameters were grid searched.
Moral Judgement We use a weighted loss for fine-tuning Bebeshka and grid search optimal parameters using the official implementation by the authors of the dataset. For our GPT-2 based model Zlata, we use an existing evaluation harness benchmark in the k-shot setting with k equal to 15.
Table 1 :
Architecture and pre-training details of the Bebeshka and Zlata LMs compared to RoBERTa-base and GPT-2 medium. Our LMs have configurations of the optimal architecture determined with an architecture search (§3.2). GPT-2 official training information has not been publicly disclosed; we report GPT-2 pre-training hardware details for model parallelism as specified by Shoeybi et al., 2019. We use Graphcore Intelligence Processing Units for training our LMs (Jia et al., 2019 provide a detailed review of IPUs). MLM = Masked Language Modelling, CLM = Causal Language Modelling, L = Layers, A = Attention heads, H = Hidden size per head, F = Feedforward (intermediary) layer size.
Table 2 :
Parameter search space of Optuna study for pre-training encoder LMs on STRICT-SMALL corpus and mean parameter values across 10 best and worst runs sorted by the perplexity.For non-numerical parameters, we report the most common parameter values among study runs.
Table 3 :
Pre-training objective loss on validation and test data of Bebeshka and Zlata compared to baseline models and average run time in seconds.We run an evaluation of all LMs on the same V100 GPU and use Hugging Face Trainer API for calculating the scores.The best score is in bold, and the second-best score is underlined.
Table 4 :
Evaluation results on GLUE and SuperGLUE (BoolQ, MultiRC, WSC) benchmark datasets.We report metrics suggested in the shared task evaluation pipeline and baselines.The best score is in bold, and the second-best score is underlined.
Figure 1. Accuracy on BLiMP tasks of our LMs with RoBERTa-base, OPT-125M, and T5-base baselines. The lighter colours correspond to greater accuracy and, hence, better scores. Morphology: Anaphor Agr., D-N Agr., Irregular Forms, S-V Agr. Semantics: NPI Licensing, Quantifiers. Syntax-Semantics: Binding, Control/Raising. The remaining phenomena correspond to the Syntax category.
Table 5 :
Evaluation results on the ETHICS benchmark (Hendrycks et al., 2020). LMs trained on the STRICT-SMALL corpus reach results close to the large model baselines reported by Hendrycks et al., 2020. We do not report results for the fine-tuning tasks which require a maximum sequence length exceeding that of an LM. The best score is in bold, and the second-best score is underlined.
Table 6 :
Statistics of the training corpus offered in the STRICT-SMALL track of BabyLM competition. | 2023-11-07T06:42:31.146Z | 2023-11-06T00:00:00.000 | {
"year": 2023,
"sha1": "08c964247960e96b07c394ca365dfd7d5b65683a",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.conll-babylm.4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "cc0b6fe5b18b798ea537d43072bf30bfb30c2bc4",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221299841 | pes2o/s2orc | v3-fos-license | False Opposing Fear Memories Are Produced as a Function of the Hippocampal Sector Where Glucocorticoid Receptors Are Activated
Injection of corticosterone (CORT) in the dorsal hippocampus (DH) can mimic post-traumatic stress disorder (PTSD)—related memory in mice: both maladaptive hypermnesia for a salient but irrelevant simple cue and amnesia for the traumatic context. However, accumulated evidence indicates a functional dissociation within the hippocampus such that contextual learning is primarily associated with the DH whereas emotional processes are more linked to the ventral hippocampus (VH). This suggests that CORT might have different effects on fear memories as a function of the hippocampal sector preferentially targeted and the type of fear learning (contextual vs. cued) considered. We tested this hypothesis in mice using CORT infusion into the DH or VH after fear conditioning, during which a tone was either paired (predicting-tone) or unpaired (predicting-context) with the shock. We first replicate our previous results showing that intra-DH CORT infusion impairs contextual fear conditioning while inducing fear responses to the not predictive tone. Second, we show that, in contrast, intra-VH CORT infusion has opposite effects on fear memories: in the predicting-tone situation, it blocks tone fear conditioning while enhancing the fear responses to the context. In both situations, a false fear memory is formed based on an erroneous selection of the predictor of the threat. Third, these opposite effects of CORT on fear memory are both mediated by glucocorticoid receptor (GR) activation, and reproduced by post-conditioning stress or systemic CORT injection. These findings demonstrate that false opposing fear memories can be produced depending on the hippocampal sector in which the GRs are activated.
Keywords: dorsal hippocampus, ventral hippocampus, fear memory, fear conditioning, glucocorticoid receptors, mice

INTRODUCTION

Exposure to an extreme stress can produce highly fearful memories, which contribute to the development of stress-related disorders (de Quervain et al., 1998). In such a situation, excess glucocorticoids, for which the hippocampus constitutes a key brain site of action, impair hippocampus-dependent memory consolidation of the event (McEwen, 2000; Roozendaal, 2003). Therefore, this may explain the emergence of pathological fear memories like those observed in post-traumatic stress disorder (PTSD; Layton and Krikorian, 2002; Desmedt et al., 2015a). In accordance with this, we previously demonstrated in mice that, under a highly stressful situation, post-training infusion of glucocorticoids into the dorsal hippocampus (DH) impairs contextual fear memories while inducing fear memory for an irrelevant (i.e., not predicting the threat) simple tone, thereby mimicking both the contextual amnesia and the maladaptive hypermnesia observed in PTSD (Kaouane et al., 2012).
However, accumulated evidence demonstrates a functional dissociation along the dorsal-ventral axis of the hippocampus (Fanselow and Dong, 2010). The DH receives polymodal sensory information from cortical areas (Witter and Amaral, 2004) and primarily contributes to contextual learning. In contrast, the ventral hippocampus (VH) is strongly connected to the subcortical structures, especially the amygdala, and may rather contribute to emotion-related processes (Moser and Moser, 1998;Bannerman et al., 2004), particularly fear conditioning (Maren, 1999;Bast et al., 2001;Maren and Holt, 2004) and anxiety-related behaviors (Bannerman et al., 2004;Calhoon and Tye, 2015). In addition, stress and glucocorticoids differentially regulate long-term potentiation (LTP) and long-term depression (LTD) in the DH and the VH Segal, 2007a,b, 2009a,b). In particular, they impair LTP in the DH while enhancing it in the VH (Maggio and Segal, 2007b), whereas stress increases LTD in the DH while converting LTD to LTP in the VH (Maggio and Segal, 2009a).
Moreover, the glucocorticoid receptors (GRs), on which the glucocorticoids' memory effects depend (Oitzl et al., 2001), show a higher density in the DH than in the VH (Robertson et al., 2005; Segal et al., 2010). Of particular interest in the context of normal vs. pathological fear memory, GR activation within the hippocampus is crucial for the consolidation of contextual fear memories and can even enhance them (Donley et al., 2005; Revest et al., 2005, 2010, 2014), whereas full GR activation results in impaired spatial memory (Conrad et al., 1999; Brinks et al., 2007). Furthermore, specific activation of GRs also abolishes in vitro synaptic excitability in both the DH and the VH (Segal et al., 2010) and facilitates LTD in both hippocampal sectors (Maggio and Segal, 2009a). Together, these data strongly suggest that GR activation in either the DH or the VH could differentially contribute to the deleterious effects of glucocorticoids on fear memories.
To address this issue: (1) we compared the effects of local infusions of corticosterone (CORT), the major glucocorticoid in rodents, into either the DH or the VH, on the consolidation of tone and contextual fear memories; (2) we assessed whether these effects are mediated by GR activation; and (3) we tested whether these effects could be physiologically mimicked by post-training stress or systemic CORT injection.
Subjects
Three-month-old male mice (C57Bl/6 JI Company, Charles River Laboratories) were individually housed a week before experiments in standard Macrolon cages in a temperatureand humidity-controlled room under a 12-h light/dark cycle (lights on at 07:00) and had ad libitum access to food and water. As all the present experiments were restricted to male mice, future experiments will have to determine the extent to which the present findings can be extended to female mice. Mice were handled a few days before experiments and habituated to intracerebral or systemic injection procedures. All experiments took place during the light phase. All animal care and behavioral tests were conducted in compliance with the European Communities Council Directive (86/609/EEC).
Surgical Procedure
Mice were anesthetized with ketamine (80 mg/kg body weight, i.p.) and xylazine (16 mg/kg body weight, i.p.; Bayer) and secured in a David Kopf Instruments stereotaxic apparatus. Stainless-steel guide cannulas (26 gauge, 8-mm length) were implanted bilaterally 1 mm above either the dorsal hippocampus (A/P, −2 mm; M/L, ±1.3 mm; D/V, 1 mm) or the ventral hippocampus (A/P, −3.6 mm; M/L, ±3 mm; D/V, 3.3 mm; relative to dura and bregma; Franklin and Paxinos, 1997), then fixed in place with dental cement and two jeweler's screws attached to the skull. Mice were then allowed to recover in their home cage for at least 8 days before behavioral experiments.
Fear Conditioning Procedures
The procedures have been fully described in previous studies (Calandreau et al., 2005, 2006, 2007; Kaouane et al., 2012). Briefly, mice were placed in the conditioning chamber and, after a baseline period of 100 s, received two tone cues (63 dB, 1 kHz, 15 s) either paired (intertrial interval of 60 s) or unpaired (pseudo-random distribution of the stimuli) with two electric footshocks (0.8 mA, 50 Hz, 3 s). With the tone presentation always followed by the shock delivery (cue-shock pairing procedure), the animals identified the tone as the main threat predictor of the shock (predicting-cue group). In contrast, when the tone presentation was never followed by shock delivery (cue-shock unpairing procedure), the animals identified the conditioning context as the right predictor of the shock (predicting-context group; Figure 1). Specifically, in the CS-US unpairing procedure, 100 s after being placed into the chamber, animals received a shock, then, after a 20-s delay, a tone; finally, after a 30-s delay, the same tone and the same shock spaced by a 30-s interval were presented. After 20 s, animals were returned to the home cage. The relatively high footshock intensity used (i.e., 0.8 mA) is known to produce strong fear conditioning to the tone or to the context in the predicting-cue group and the predicting-context group, respectively, as demonstrated in our previous study (Kaouane et al., 2012).

FIGURE 1 | Experimental design of the behavioral procedure. Mice were either submitted to a cue-shock pairing procedure (predicting-cue group) or to a cue-shock unpairing procedure (predicting-context group). Immediately after conditioning, animals received intra-hippocampal infusions of artificial cerebrospinal fluid (aCSF), corticosterone (CORT), or dexamethasone (DEX). The next day, mice were first re-exposed to the cue alone in a neutral chamber, then (2 h later) were re-exposed to the conditioning context without the cue. During these tests, the freezing responses were measured during 2-min periods.

The quality of the memory formed was assessed the following day, when mice were submitted to two memory tests. First, after a 2-min baseline period in a neutral chamber, mice were exposed for 2 min to the tone cue alone, followed by a 2-min post-tone period. Conditioned fear to the tone is expressed by the percentage of freezing during the tone presentation, and the strength and specificity of this conditioned fear is attested by a ratio that considers the percentage of freezing increase to the tone with respect to a baseline freezing level (i.e., the mean of the pre- and post-tone periods) and that was calculated as follows: [% freezing during tone presentation − (% pre-tone period freezing + % post-tone period freezing)/2] / [% freezing during tone presentation + (% pre-tone period freezing + % post-tone period freezing)/2]. Two hours later, mice were re-exposed for 6 min to the conditioning context alone for the assessment of their contextual freezing, which attests to their contextual memory of the aversive event.
As for the tone test, the percentage of freezing is assessed during three 2-min blocks. However, because contextual fear responses decline over the second and third 2-min blocks of the context test owing to classical fear extinction, and because the first 2-min block is the only one that allows the observation of the key differences between groups, we chose to restrict the data to the first block. Freezing behavior, defined as a lack of all movement except for respiratory-related movements, was used as an index of the conditioned fear response. Animals were continuously recorded on videotape for off-line second-by-second scoring of freezing by an observer blind to the experimental groups.
Intracerebral Infusions
Immediately after the acquisition of fear conditioning, animals were placed back in their home cage and received intra-DH or intra-VH bilateral infusions (0.3 or 0.1 µl per side, respectively) of artificial cerebrospinal fluid (aCSF), CORT (2-hydroxypropyl-β-cyclodextrin complex; Sigma-Aldrich; 10 ng per side), or the specific GR agonist dexamethasone (DEX: 2-hydroxypropyl-β-cyclodextrin complex; Sigma-Aldrich; 1 ng per side). The dose of CORT was selected based on our previous study reporting that post-training intra-DH infusions of 10 ng disturbed the selection of the correct predictor of the shock under a 0.8-mA footshock intensity (Kaouane et al., 2012). The dose of DEX was selected on the basis of previous in vivo studies using a similar concentration for intra-DH infusions in rats (Mizoguchi et al., 2007) and of reports that a dose 10-fold lower than that of CORT is sufficient to mimic CORT effects observed in vitro (Maggio and Segal, 2007b; Chaouloff et al., 2008). For infusions, stainless-steel cannulas (32-gauge, 9 mm) attached to 1-µl Hamilton syringes with polyethylene catheter tubing were inserted through the guide cannula. The syringes were driven by a constant-rate infusion pump (0.1 µl/min). The cannulas were left in place for an additional 1 min before removal to allow diffusion of the drug.
Restraint Stress
Immediately after the acquisition of fear conditioning using a footshock of 0.8 mA, or a lower one (0.3 mA) known to be too weak to induce significant fear responses on its own but with which post-training stress can enhance contextual fear conditioning (Kaouane et al., 2012), mice were placed for 20 min in a transparent Plexiglas cylinder (2.5 cm diameter, 11 cm long) in a room adjacent to the fear conditioning room. After this restraint period, mice were returned to their home cage.
Systemic Injection of Corticosterone
CORT (2-hydroxypropyl-β-cyclodextrin complex) or vehicle (NaCl 0.9%) was administered intraperitoneally (i.p.) immediately after the acquisition of fear conditioning. Complexing corticosterone with cyclodextrin allows this steroid to be dissolved in aqueous solutions. After the injection, animals were returned to their home cage. We selected a relatively low (1.5 mg/kg) and a high (10 mg/kg) dose of corticosterone (in a volume of 0.1 ml/10 g body weight) known to produce dose-dependent effects on fear memories, as shown in our previous study (Kaouane et al., 2012).
Histology
After behavioral testing, animals were given an overdose of pentobarbital and transcardially perfused with physiological saline, followed by 10% buffered formalin. Brains were postfixed in formalin-sucrose 30% solution for 1 week, frozen, cut coronally on a sliding microtome into 60-µm sections that were mounted on gelatin-coated glass slides, and stained with thionine to evaluate the cannula placements (Supplementary Figure S1).
Data Analysis
Data are presented as the mean ± SEM. Statistical analyses were performed in StatView using analysis of variance (ANOVA) followed by Bonferroni-Dunn post hoc tests when appropriate. Values of p < 0.05 were considered significant. Several experiments using a similar behavioral protocol led to the conclusion that 6-8 mice per group were sufficient to reach statistical significance. To account for the variability introduced by surgery, we increased the sample size to 10 mice per group. For reference, the power of these experiments, with n = 8 mice per group and an alpha of 0.05, is 85%.
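For illustration, a power estimate of this kind can be reproduced with standard software. The sketch below uses the statsmodels library for a one-way ANOVA design; the effect size is a hypothetical assumption, since the effect size underlying the 85% figure is not reported here, so the printed value is not expected to reproduce the study's number.

# Illustrative power calculation with statsmodels; the effect size (Cohen's f)
# and the group layout are hypothetical assumptions, not values from the study.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
power = analysis.solve_power(
    effect_size=0.65,   # hypothetical large effect (Cohen's f)
    nobs=48,            # total observations, e.g., 6 groups x 8 mice
    alpha=0.05,
    k_groups=6,
)
print(f"Estimated power: {power:.2f}")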
Activation of GR in the DH Produces a Simple Tone-Based Fear Memory at the Expense of Contextual Fear Memory
This experiment first replicates our previous findings showing that intra-DH injection of CORT mimicked a simple/salient tone fear conditioning at the expense of contextual conditioning, thereby reproducing the PTSD-like pathological hypermnesia and contextual amnesia (Kaouane et al., 2012). Second, the present results reveal that such abnormal fear memory can also be produced by intra-DH injection of DEX, indicating that GR activation is sufficient to induce such memory alteration. The effects of the intra-DH injections of CORT and DEX on the fear responses to the tone (Figure 2A) and the conditioning context ( Figure 2B) were dependent on the conditioning procedure (i.e., predicting-cue vs. predicting-context) as they were both restricted to the predicting-context condition [Tone conditioning ratio: effect of treatment (F (2,38) = 8.19, p < 0.01) and treatment × procedure (F (2,38) = 13.00, p < 0.001); Context test: effect of treatment (F (2,38) = 6.20, p < 0.01) and treatment × procedure (F (2,38) = 7.04, p < 0.01)].
First, compared with their aCSF-injected controls, CORT- and DEX-injected mice submitted to the predicting-context procedure displayed increased fear responses to the tone, attested by a significantly increased tone conditioning ratio (Figure 2A, left, both CORT and DEX: Bonferroni-Dunn post hoc test p < 0.001) and a significant increase in the percentage of freezing to the tone per se (Figure 2A, right, CORT: p < 0.01, DEX: p < 0.05; see also Supplementary Figure S2A), so that they no longer differed from animals submitted to the predicting-cue procedure (in CORT and DEX groups: both p > 0.05). It must be noted that in our previous study (Kaouane et al., 2012), we also showed that CORT-injected mice displayed a fear response to a cue they had not previously experienced (2-kHz tone) to some extent similar to the one experienced during the conditioning (1-kHz tone), but not to a completely different cue (white noise). Thus, as in PTSD, these mice showed a partial fear generalization to cues more or less similar to trauma-related cues, but not to very different cues.
FIGURE 2 | Activation of glucocorticoid receptor (GR) in the dorsal hippocampus (DH) produces a simple tone-based fear memory at the expense of contextual fear memory. In controls (aCSF), mice of the predicting-cue group (n = 7) expressed a high tone conditioning ratio (A, left) and high percentage of freezing to the tone per se (A, right), as well as low freezing to the context (B), in comparison with the predicting-context group (n = 8), which expressed low levels of freezing to the cue and high levels of freezing to the context. In mice submitted to the predicting-context condition, intra-DH CORT or DEX infusions increased the tone conditioning ratio (A, left), produced a fear response to the cue (A, right), while reducing the conditioned fear to the context (B; CORT: n = 7; DEX: n = 6). No change was observed in mice submitted to the predicting-cue condition (CORT: n = 7; DEX: n = 9). In all conditions, the ratio differed from zero (all p < 0.05). *Procedure effect (predicting cue vs. predicting context group; *p < 0.05, **p < 0.01, and ***p < 0.001); # treatment effect (aCSF vs. CORT or aCSF vs. DEX; # p < 0.05, ## p < 0.01, and ### p < 0.001).
In parallel, although the conditioning context is the objective predictive stimulus in this training condition, the same mice displayed significantly decreased contextual fear responses compared with aCSF controls (Figure 2B, both CORT and DEX: Bonferroni-Dunn post hoc test p < 0.01). As a result, their levels of contextual freezing were as low as those displayed by mice submitted to the predicting-cue condition (in CORT and DEX groups: p > 0.05).
Activation of GR in the VH Promotes a Context-Based Fear Memory at the Expense of Tone-Based Fear Memory
The effects of intra-VH injections of CORT and DEX on the fear responses to the tone (Figure 3A) and the conditioning context (Figure 3B) were dependent on the conditioning procedure (i.e., predicting-cue vs. predicting-context), but in contrast to intra-DH injections, they were both restricted to the predicting-cue condition [Tone conditioning ratio: effect of treatment (F (2,40) = 7.26, p < 0.01) and treatment × procedure (F (2,40) = 7.40, p < 0.01); Context test: effect of treatment × procedure (F (2,40) = 8.61, p < 0.001)].
First, compared with their aCSF-injected controls, CORT- and DEX-injected mice submitted to the predicting-cue procedure did not express any conditioned fear to the tone, which is the objective predictive stimulus in this training condition. This blockade is attested by a significantly decreased tone conditioning ratio (Figure 3A, left, both CORT and DEX: Bonferroni-Dunn post hoc test p < 0.001) and a significant decrease in the percentage of freezing to the tone per se (Figure 3A, right, both CORT and DEX: p < 0.01; see also Supplementary Figure S2B). As a result, these mice no longer differed from those submitted to the predicting-context procedure (in CORT and DEX groups: both p > 0.05).
In parallel, the same mice displayed significantly increased fear responses to the conditioning context ( Figure 3B, CORT: p < 0.01, DEX: p < 0.001) to the extent that their levels of contextual freezing were as high as those of mice submitted to the predicting-context condition (in CORT and DEX groups: both p > 0.05).
Stress or Systemic CORT Injection Mimics the Effects of Local CORT Infusion on Fear Memories
We previously showed that post-training (restraint) stress or systemic CORT injection performed after a predicting-context conditioning mimicked the effects of intra-DH CORT infusions on fear memories, i.e., promoting the selection of the tone cue instead of the context as predictor of the shock when a relatively high stress intensity was used (Kaouane et al., 2012, and see Figure 4A for a summary). Here, with the same aim of using more physiological stress-related manipulations, we tested whether similar manipulations performed after a predicting-cue conditioning could mimic the effects of intra-VH CORT infusions on fear memories, i.e., the selection of the context instead of the tone cue as predictor of the shock.
First, we analyzed whether post-training stress had differential effects on tone fear conditioning depending on the shock intensity, as demonstrated for contextual fear conditioning (Kaouane et al., 2012). The amplitude of the conditioned fear to the tone (Figure 4B, left and middle, Supplementary Figure S2C) was dependent on the footshock intensity (0.3 vs. 0.8 mA; effect of intensity on tone conditioning ratio: F (1,30) = 16.26, p < 0.001; on freezing to the tone per se: F (1,30) = 37.47, p < 0.001), and depending on this intensity the post-training stress had an opposite effect on the tone conditioning (intensity × stress for tone conditioning ratio: F (1,30) = 15.21, p < 0.001; for freezing to the tone per se: F (1,30) = 30.17, p < 0.001). As expected, the conditioned fear responses to the tone were higher after a footshock of 0.8 mA than of 0.3 mA in the control condition (Bonferroni-Dunn post hoc test p < 0.001), but this difference disappeared in the post-training stress condition because both the tone conditioning ratio and the percentage of freezing to the tone increased after a 0.3-mA footshock (p < 0.05 and p < 0.001, respectively) whereas they decreased after a 0.8-mA footshock (both p < 0.05). In parallel, the amplitude of the fear responses to the conditioning context (Figure 4B, right) increased with the footshock intensity (effect of intensity: F (1,30) = 14.12, p < 0.001) and the post-training stress (effect of stress: F (1,30) = 7.7, p < 0.01), which produced a significant enhancement of contextual freezing when the highest footshock intensity was used during training (p < 0.05).

FIGURE 3 | Activation of GR in the ventral hippocampus (VH) promotes a context-based fear memory at the expense of tone-based fear memory. The controls (aCSF) expressed a high tone conditioning ratio (A, left) with high percentage of freezing to the tone per se (A, right; predicting-cue group: n = 8) and low conditioned fear to the context (B) when submitted to the predicting-cue condition (n = 8), whereas they expressed an inverse pattern of results when submitted to the predicting-context condition (n = 8). In the predicting-cue condition, intra-VH CORT or DEX infusions (CORT: n = 8; DEX: n = 8) abolished the tone conditioning ratio (A, left) and the conditioned fear to the tone per se (A, right), while increasing the fear responses to the context (B). After CORT and DEX infusions, the ratios did not differ from zero (all p > 0.05). No change was observed in mice submitted to the predicting-context condition (CORT: n = 7; DEX: n = 7). *Procedure effect (predicting cue vs. predicting context group; *p < 0.05 and ***p < 0.001); ## treatment effect (aCSF vs. CORT or aCSF vs. DEX; ## p < 0.01 and ### p < 0.001).
Second, we analyzed whether systemic (i.p.) injection of CORT could mimic the deleterious effects of intra-VH CORT injections on fear memory when performed after tone fear conditioning using a 0.8-mA footshock (Figure 4C). Systemic CORT injection decreased both the tone conditioning ratio (Figure 4C, left, treatment: F (2,24) = 8.13, p < 0.01) and the percentage of freezing to the tone per se (Figure 4C, middle, treatment: F (2,24) = 15.82, p < 0.001; see also Supplementary Figure S2D), whatever the dose used (1.5 mg/kg: p < 0.01 and p < 0.001, respectively; 10 mg/kg: p < 0.05 and p < 0.01, respectively), but increased the fear responses to the conditioning context at the 10 mg/kg dose (Figure 4C, right).

FIGURE 4 | Stress or systemic CORT injection mimics the effects of local CORT infusion on fear memories. (A) Summary of previously published data relative to the effects of post-training stress or systemic CORT injection on fear memory after a predicting-context procedure. (B) Post-training stress after a predicting-cue procedure using a low shock intensity (0.3 mA) increased the tone conditioning ratio (left) and the fear responses to the tone per se (middle), whereas the same stress applied after this conditioning procedure using a high shock intensity (0.8 mA) reduced them and significantly increased the fear responses to the context (right). 0.3 mA control: n = 8; 0.3 mA + stress: n = 9; 0.8 mA control: n = 9; 0.8 mA + stress: n = 8. (C) Intraperitoneal (i.p.) corticosterone injection after a 0.8-mA predicting-cue conditioning decreased the tone conditioning ratio (left), the fear responses to the tone per se (middle), and increased at 10 mg/kg the fear responses to the context (right). After CORT injection, the ratios decreased such that they did not differ from zero (both p > 0.05). Control (0): n = 9; CORT 1.5: n = 10; CORT 10: n = 8. *Effect of shock intensity (0.3 vs. 0.8 mA; *p < 0.05 and ***p < 0.001); # effect of stress or injection compared with controls ( # p < 0.05, ## p < 0.01, and ### p < 0.001).
DISCUSSION
The present results show that CORT differentially alters fear memories depending on the hippocampal sector where GRs are activated and on the fear learning considered. Replicating our previous results (Kaouane et al., 2012), intra-DH CORT infusion impaired contextual fear conditioning and induced fear responses to a salient cue that did not predict the threat. Strikingly, the present study shows that intra-VH infusion produced the exact opposite pattern: it blocked cue fear conditioning while inducing fear responses to the (background) conditioning context, which was not the main predictor of the shock. The fact that these opposite effects could be reproduced by local infusions of DEX indicates that they are mediated by activation of GRs in the DH and VH, respectively. Finally, post-training stress or systemic CORT injections reproduced the alterations of fear memories induced by local infusions of glucocorticoids.
The replication of the deleterious effect of intra-DH infusion of CORT on contextual fear memory is fully congruent with a vast literature indicating that the dorsal part of the hippocampus supports the establishment of a unified representation of the context (Rudy et al., 2002; Matus-Amat et al., 2004) and is crucial for contextual fear conditioning (Kim and Fanselow, 1992; Phillips and LeDoux, 1992; Anagnostaras et al., 2001). In accordance with previous studies, our past and present results also support the idea that CORT can have different effects on hippocampus-dependent memories, promoting contextual fear memories under low stress situations (Pugh et al., 1997; Cordero and Sandi, 1998; Revest et al., 2005, 2010, 2014; Kaouane et al., 2012), while disrupting spatial (de Quervain et al., 1998; Conrad et al., 1999) and contextual memory (Kaouane et al., 2012) when high doses or high stress situations are used. Interestingly, intra-DH infusion of CORT also resulted in the selection of the simple tone instead of contextual cues as predictor of the shock, leading to a prevalent, although maladaptive, tone-based fear memory. A similar switch from contextual to cue fear conditioning has already been observed after pharmacological manipulations that reduced dorsal hippocampal activity (Calandreau et al., 2006; Desmedt et al., 2015b). This indicates that, when the consolidation of predictive contextual information is disrupted by alteration of the DH, a cognitive switch promotes an association between the footshock and the most salient simple cue (i.e., the tone), despite the absence of any explicit cue-shock pairing during training.
In contrast, when CORT was infused into the VH, the present study shows that tone cue conditioning was blocked to the benefit of contextual fear. Conditioned fear to a discrete tone is classically viewed as involving a brain circuit restricted to the amygdala and the thalamus (LeDoux, 2000). However, numerous studies have reported that tone cue conditioning can be impaired by electrolytic lesions (Maren and Holt, 2004), neurotoxic lesions (Maren, 1999; Bast et al., 2001; Zhang et al., 2001), or inactivation of the VH (Maren and Holt, 2004; Esclassan et al., 2009). Because the VH is strongly connected to the amygdala (Maren and Fanselow, 1995; Pitkänen et al., 2000), it could convey information about the tone to this structure (Sakurai, 2002). Such transmission would here be disrupted by excess glucocorticoids in the VH.
In parallel, intra-VH CORT infusion also increased conditioned fear to the context in animals for which the tone is, in fact, the objective predictor of the shock and, as such, is known to normally overshadow contextual cues. The role of the VH in contextual fear conditioning is unclear because it receives little visuo-spatial information from the sensory cortices (Pitkänen et al., 2000; Witter and Amaral, 2004), displays fewer and less specific place cells than the DH (Jung et al., 1994), and its lesion or inactivation has yielded conflicting results (Zhang et al., 2001; Kjelstrup et al., 2002; Hunsaker and Kesner, 2008). It could thus be hypothesized that CORT-induced alteration of the VH promotes cognitive processes based on DH functioning. In particular, previous data have shown that promoting the activity of the DH can abolish tone fear conditioning while promoting background contextual conditioning (Calandreau et al., 2006), mimicking the present effects of intra-VH CORT infusions. Therefore, our results suggest that when a simple cue-shock association is blocked by interfering with VH-dependent processes, a context-shock association, which could be mainly supported by the DH, is preferentially consolidated, leading to a prevalent contextual fear memory even if the context, which is relegated to the background in this learning situation, is not the best predictor of the threat.
The present study also shows that activation of the same receptor (GR), by infusion of the specific agonist DEX either into the DH or the VH, mimicked the opposite CORT-induced alterations of fear memories. CORT can act on two subtypes of nuclear receptors: the high-affinity mineralocorticoid receptor (MR) and the low-affinity GR (de Kloet et al., 1998). Both receptors are found in the hippocampus and are often co-localized on the same neurons (Joëls, 2007). At the basal level, only MRs are occupied. During mild stress, the increase in CORT levels results in full MR and moderate GR occupancy, resulting in enhanced synaptic plasticity in the hippocampus (Diamond et al., 1992). This effect is thought to mediate the facilitating effects of glucocorticoids on hippocampus-dependent memory function (Conrad et al., 1999; Brinks et al., 2007). In contrast, full GR occupancy, which occurs during highly stressful situations, impairs synaptic excitability and hippocampus-dependent memory functions (Conrad et al., 1999; Brinks et al., 2007). Specifically, GR activation in the DH was shown to disrupt excitability and synaptic plasticity (Diamond et al., 1992; Kim and Yoon, 1998; Garcia, 2001; Maggio and Segal, 2007b), which are crucial for contextual fear conditioning (Sacchetti et al., 2001). In the VH, whereas stress or direct corticosterone bath application promotes LTP via the MRs, specific activation of GRs results in low excitability and synaptic plasticity (Maggio and Segal, 2009a,b), which could explain the disruption of cue fear conditioning in the present study.
Using more physiological conditions, our last experiment shows that post-training (restraint) stress or systemic injection of CORT mimicked the effects of local CORT injections on fear memories. More specifically, we had previously shown that in a predicting-context situation, post-training stress or systemic CORT injection reproduced the effects of intra-DH CORT injections, i.e., enhancing adaptive contextual fear memory after a low stress condition, but producing a false tone fear memory after a high stress condition (Kaouane et al., 2012; see Figure 4A). In contrast, the present study shows that in a predicting-cue situation, the same manipulations reproduced the effects of intra-VH CORT injections, i.e., promoting an adaptive tone fear memory after a low stress condition while promoting a maladaptive contextual fear memory after a high stress condition. Under the low stress condition, the increase in tone fear memory is in accordance with previous studies showing that post-training glucocorticoid injections increase cued fear memory in low or mild stress situations (Hui et al., 2004; Marchand et al., 2007). Under the high stress condition, the observed CORT-induced maladaptive contextual fear memory and deficit in cue fear conditioning strongly suggest that activation of GR, in the VH specifically, constitutes a key molecular device for such fear memory disturbances. How, then, can the same biological manipulation (GR activation) have drastically different, and even opposite, effects on fear memory as a function of the learning procedure used? Even if GRs respond similarly to systemic CORT/DEX injection in the DH and VH, the deleterious impact of this systemic injection on fear behavior is expected to differ as a function of the specific recruitment of the DH and VH in the training procedure considered. Because the DH and VH are known to be differentially involved in contextual and tone fear conditioning, the present findings suggest that the opposite effects of systemic CORT/DEX injection on fear behavior may result from an imbalance between the recruitment of the DH and that of the VH as a function of the training procedure used.
In conclusion, our study shows that glucocorticoids alter fear memories in opposite ways as a function of the hippocampal sector where GRs are activated, providing evidence for a functional dissociation between the DH and the VH. These findings indicate that glucocorticoids, under highly stressful situations, can produce false fear memories on the basis of the erroneous selection of the most salient, but irrelevant, simple cue or of the background context as predictor of an aversive event. Classically, ''false memory'' applies to memories formed without the actual experience of the items that constitute the object of these memories. Here, ''false memories'' refer to totally erroneous memory representations of the stressful situation with regard to the objective training situation. Indeed, animals wrongly attribute an aversive predictive value to the salient but not predictive tone instead of the (foreground) context in the unpairing situation, and to the (overshadowed) context instead of the predictive tone in the pairing situation. These erroneous representations, based on distorted meanings of both the salient tone and the context, are clearly akin to ''false memories.'' These opposing false fear memories might be related to the development of different stress-related disorders. On the one hand, in humans, specific alterations in the posterior hippocampus (DH in rodents) have been linked to PTSD-related cue-based hypermnesia and contextual amnesia (Brewin, 1996; Brewin and Holmes, 2003; American Psychiatric Association, 2013). We precisely reproduced this paradoxical memory alteration with intra-DH CORT (Kaouane et al., 2012) or DEX infusions in mice. On the other hand, the anterior hippocampus in humans (Satpute et al., 2012) and the VH in rodents (Degroot and Treit, 2004) have been implicated in anxiety-related behaviors, including intense and irrational fear reactions to particular stressful situations (American Psychiatric Association, 2013). Such abnormally high fear behaviors toward (background) contextual elements under stress were precisely those observed after intra-VH CORT or DEX infusions in the present study. Furthermore, by highlighting different roles for the DH and the VH in adaptive and maladaptive (anxiety-like) contextual fear, respectively, our findings are congruent with an optogenetic study reporting a similar functional dissociation (i.e., contextual fear vs. anxiety-like behaviors) along the dorso-ventral axis of the dentate gyrus (Kheirbek et al., 2013). Altogether, our findings strongly suggest that the different effects of glucocorticoids induced by their action along the dorsoventral axis of the hippocampus might explain the constellation of memory alterations observed after stressful situations, thereby contributing to a better understanding of the pathophysiology of stress-related disorders.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Bordeaux ethics committee, in accordance with the European Communities Council Directive (86/609/EEC).
AUTHOR CONTRIBUTIONS
NK, E-GD, and AD conceived, designed, performed the experiments, and analyzed the data. NK, AM, MS, and AD wrote the article.
FUNDING
This work was supported by Centre National de la Recherche Scientifique, Institut National de la Santé et de la Recherche Médicale, Fondation pour la Recherche sur le Cerveau, Conseil Régional d'Aquitaine, Ministère de l'Enseignement supérieur et de la Recherche, and University of Bordeaux. This study was also performed in the framework of the Laboratoire Européen Associé ''France-Israel Laboratory of Neuroscience'' and the International Research Network ''France-Israel Center for Neural Computation.'' | 2020-08-26T13:08:49.563Z | 2020-08-26T00:00:00.000 | {
"year": 2020,
"sha1": "cf9d47594a692688da1cb995c1884aca419685f1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnbeh.2020.00144/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf9d47594a692688da1cb995c1884aca419685f1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
256128063 | pes2o/s2orc | v3-fos-license | The safety of immediate extubation, and factors associated with delayed extubation, in cardiac surgical patients receiving fast-track cardiac anesthesia: An integrative review
Background Early extubation (EE), within 8 h of cardiac surgery, is associated with improved resource utilization. Studies have demonstrated that for patients receiving low-dose opioid, fast-track cardiac anesthesia (FTCA) protocols, EE is as safe as conventional care. To date, it is unclear what the earliest timepoint for safe extubation might be. Additionally, some authors have pointed out that certain patients receiving FTCA protocols frequently experience delays during extubation attempts. Understanding the factors associated with delayed extubation is crucial for perioperative planning and resource management. This review seeks to 1) determine whether immediate extubation (IE) in the operating room is as safe as EE and 2) identify factors associated with delayed extubation. Methods MEDLINE, Cochrane Library, EMBASE and CINAHL (up to March 2022) were searched. Studies pertaining to FTCA, IE, EE or factors associated with delayed extubation were included. All authors extracted, appraised and synthesized data. The primary outcome measures were treatment results and factors associated with delayed extubation. Results Six studies investigated treatment outcomes associated with FTCA and IE. One randomized controlled trial reported that outcomes associated with IE were comparable to those with EE. Five observational studies reported incidence for 19 treatment outcomes associated with IE, but no comparisons were made to EE. Six observational studies assessed pre- and intraoperative factors associated with delayed extubation in FTCA patients. Across the studies, 37 factors were investigated, and 22 were identified as associated with delayed extubation in at least one study. The most frequently reported factors were pre-existing cardiac insufficiency or renal disease, time on pump and cross-clamp time. Obesity and stroke were investigated but were not associated with delayed extubation. No study examined the influence of race, ethnicity or gender on outcomes. Discussion and conclusion Evidence pertaining to treatment outcomes associated with FTCA and IE is weak. Observational studies cannot determine causation. Large multicentre randomized controlled trials are required to determine the safety of IE. Although numerous factors have been associated with delayed extubation, several studies do not describe how or which factors were selected for examination. Therefore, certain factors may have yet to be evaluated. Future studies should comprehensively define all factors under investigation.
INTRODUCTION

Early extubation following cardiac surgery
Fast-track cardiac care (FTCC) involves the use of low-dose opioid-based general anesthesia and/or time-directed extubation protocols to facilitate safe early extubation (EE) (ie, within 8 h post cardiac surgery). These protocols were developed to address resource demands associated with an increase in the number of cardiac surgeries [1]. The proposed advantages of FTCC include reduced hospital and intensive care unit (ICU) lengths of stay (LOS) and lower hospital costs [1]. Low-dose opioid-based general anesthesia protocols, also called fast-track cardiac anesthesia (FTCA), are an approach to FTCC and typically include the use of opioids (≤20 µg/kg fentanyl or equivalent) along with sedative-hypnotics such as midazolam, with the goal of earlier emergence from anesthesia. Time-directed extubation protocols often accompany FTCA, are generally based on expert consensus and have broad parameters [2]. EE has been arbitrarily defined as occurring within 8 h of cardiac surgery completion (skin closure); however, a rationale for this time parameter has not been provided [3].
The authors of a recent Cochrane review synthesized 19 randomized controlled trials (RCTs) (n=2834) comparing FTCA to conventional care in cardiac surgery patients [3]. Conventional care was defined as using high-dose opioid anesthesia (≥20 µg/kg fentanyl or equivalent) and extubation greater than 8 h postsurgery in patients undergoing cardiac surgery (eg, coronary artery bypass graft, aortic valve replacement, mitral valve replacement) [3]. This review, which included studies before May 2015, demonstrated that EE under a FTCA protocol appeared as safe as conventional care regarding the risk of mortality and major postoperative complications such as myocardial infarction, stroke, tracheal reintubation, postoperative renal failure, postoperative risk of major bleeding or postoperative mortality. This study demonstrated significantly reduced times to extubation (mean difference -7.40 h, 14 studies) and reduced ICU LOS (mean difference -3.70 h, 12 studies) in the FTCA group. Despite a low level of evidence across studies, these authors concluded that the outcome risks were similar between groups, but the resource demands (eg, shortened ICU LOS) were reduced with FTCA.
Although the review demonstrated the safety of EE with FTCA protocols, identifying the earliest time point for safe extubation may be further advantageous. For example, reductions in ICU LOS or eliminating the need for ICU care may lead to higher patient throughput and shorter surgical wait times. Of the 19 included studies in the Cochrane review, only one described treatment outcomes in patients who were immediately extubated following surgery (following skin closure in the operating room) and reported that treatment outcomes associated with FTCA and immediate extubation (IE) were the same as those associated with conventional care. A synthesis of studies examining the safety of IE may help determine whether it is feasible to do after cardiac surgery.
Factors associated with delayed extubation in patients receiving FTCA protocols
Similarly, FTCA protocols are not always successful in achieving the goal of EE. In two studies, a significant proportion of patients receiving FTCA protocols (11% [4] and 16% [5]) experienced delayed extubation. Patients with delayed extubation have longer LOSs, consume additional resources [4,6] and have higher mortality rates [7]. Understanding pre-existing and intraoperative factors that may predict delays in extubation is essential to treatment planning and resource management (eg, predicting the need for ICU care) [4].
Although prior studies [4,5] have sought to establish prediction models for delays in EE, most studies have used data from a wide array of cardiac procedures to construct their model. Furthermore, models to date have primarily been constructed from outcomes measured in single centres and may lack generalizability. As several studies have sought to define factors associated with delayed EE, synthesizing these studies may support the development of more holistic predictive models.
This review aims to address two related questions: 1) What is the evidence regarding the safety of IE in patients receiving a FTCA protocol for cardiac surgery? and 2) In adult patients receiving FTCA protocols for cardiac surgery, which pre-existing and intraoperative factors may be associated with delayed extubation?
Data sources and search strategies
This integrative review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [8]. We used an integrative review methodology because the studies pertaining to the research questions used diverse methodologies (eg, RCTs and observational studies) that could not be meaningfully combined in a systematic review [9,10]. Four electronic databases (PubMed, CINAHL, Cochrane Library and EMBASE) were searched from inception to March 2022. No restrictions on language, publication date, study location, type or size were applied.
Systematic literature searches were guided, in part, by the search strategy from the Cochrane review [3]. The following search terms were used across databases for research questions 1 and 2: cardiac surgery, coronary artery bypass, valve replacement, EE, IE and ultrafast-track. The complete search strategy for PubMed is provided in Appendix 1 (supplementary materials are available at https://www.cjrt.ca/wpcontent/uploads/Supplement-cjrt-2022-037.docx).
Study selection criteria
Research question 1: What is the evidence regarding the safety of IE in patients receiving a FTCA protocol for cardiac surgery?
We included studies that met the following inclusion criteria: 1) study population and intervention: adult patients undergoing cardiac surgery (coronary artery bypass graft, valve replacement or both) with or without cardiopulmonary bypass and irrespective of severity of disease; IE in patients receiving a FTCA protocol (fentanyl ≤20 µg/kg or equivalent) [11] with or without supplemental propofol, etomidate or volatile anesthesia; with a protocol for confirming readiness for IE within the operating room; 2) comparator or control group: no comparison or studies that compared the use of FTCA versus conventional anesthesia (fentanyl >20 µg/kg or equivalent); 3) treatment outcomes: service outcomes and other postoperative complications, including but not limited to mortality, risk of postoperative reintubation, myocardial infarction, stroke, acute renal failure, major bleeding, sepsis, wound infection, extended ICU or hospital LOS; 4) study design: RCT, nonrandomized studies, observational studies; and 5) language: no restrictions.
Research question 2: In patients receiving FTCA protocols for cardiac surgery, which pre-existing and intraoperative factors may be associated with delayed extubation?
We included studies that met the following inclusion criteria: 1) study population and intervention: adult patients undergoing coronary artery bypass graft surgery and/or valve replacement with or without cardiopulmonary bypass and irrespective of the severity of disease; extubation in patients receiving a FTCA protocol (fentanyl ≤20 µg/kg or equivalent) [11] with or without supplemental propofol, etomidate, or volatile anesthesia; with a protocol for confirming readiness for extubation; 2) comparator or control group: no comparison or studies that compared the use of FTCA versus conventional anesthesia (fentanyl >20 µg/kg or equivalent); 3) outcomes: pre-existing or intraoperative factors associated with delayed extubation; 4) study design: observational studies; and 5) language: no restrictions.
Studies that defined EE as greater than 8 h postsurgery were excluded. We also excluded studies with remifentanil for both research questions because of its short half-life and because it does not accumulate with prolonged administration compared to fentanyl [12]. Studies that examined remifentanil or major regional blockades (epidural or intrathecal) or compared hypothermia or normothermia during cardiac surgery have been reviewed previously and were excluded [13,14].
Because this work was a learning initiative, all steps of the integrative review were done collaboratively. That is, every decision was made through full group consensus. All authors screened titles and abstracts of all identified studies using the selection criteria defined above. For studies that satisfied the selection criteria, full texts were retrieved for review. Reference lists of included studies were hand-searched for relevant papers. All authors contributed to all components of the review and achieved consensus through discussion.
Data extraction
All authors collaboratively extracted the following data from each study using a standardized form: study design; geographic location; year of study; sample size; age and sex of participants; time to extubation; type of surgery; anesthetic protocol; and outcome measures. To ensure that no single voice dominated the discussion, every author participated equally in this process.
Risk of bias assessment
Authors collaboratively assessed the risk of bias in included studies using tools developed by the Joanna Briggs Institute: Checklist for Randomized Controlled Trials [15]; Checklist for Case Series [16]; and Checklist for Case Control Studies [17]. Heterogeneity because of variations in study designs and outcome measures prohibited meta-analysis. As above, all authors contributed equally during analysis and decision-making pertaining to risk of bias.
Search results
A total of 495 unique studies pertaining to either question 1 or 2 were identified ( Figure 1). Two additional studies were located through hand searching the reference list of the included papers. A total of 485 studies were excluded following a review of titles, abstracts or full texts. Six studies met the criteria for research question 1 [18][19][20][21][22][23], and six studies met the criteria for research question 2 [6,20,21,[24][25][26]. Two studies were relevant to both research questions [20,21].
Characteristics of included studies
Research Question 1: What is the evidence regarding the safety of IE in patients receiving a FTCA protocol for cardiac surgery?
This review included one RCT [22] and five case series [18][19][20][21]23]. Studies were published between 2000 and 2018 (Table 1). There were 2109 subjects across the six studies. In the RCT, there were 52 subjects with 26 subjects in the intervention arm (IE) and 26 subjects in the conventional care arm [22]. A total of 2057 subjects participated in the observational studies [18][19][20][21]23]. The percentage of females in the IE groups was 26.4% (one study [23] did not report sex). The mean age of subjects in the IE group was 56.9±9.4 years (one study [19] did not report age). The single RCT was of low methodological quality [22]. Although observational studies included in this review followed most design expectations, the design was not the most appropriate to the research question. Therefore, all included studies were at high risk of bias and the level of evidence was low ( Table 2).
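The aggregate demographic figures above can be obtained by combining study-level summaries. A minimal Python sketch of a sample-size-weighted pooling is shown below; whether the review used weighting or pooled individual-level data is not stated here, and the per-study values are placeholders rather than the data extracted in this review.

# Illustrative sample-size-weighted pooling of study-level summaries.
# The per-study values below are placeholders, not data extracted in this review.
studies = [
    {"n": 100, "mean_age": 58.0, "pct_female": 25.0},
    {"n": 250, "mean_age": 55.5, "pct_female": 28.0},
    {"n": 60,  "mean_age": 60.2, "pct_female": 22.0},
]

total_n = sum(s["n"] for s in studies)
pooled_age = sum(s["n"] * s["mean_age"] for s in studies) / total_n
pooled_pct_female = sum(s["n"] * s["pct_female"] for s in studies) / total_n

print(f"Pooled mean age: {pooled_age:.1f} years")
print(f"Pooled percentage female: {pooled_pct_female:.1f}%")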
Service and treatment outcomes associated with IE
The purpose of research question 1 was to investigate service and treatment outcomes associated with IE in patients receiving a FTCA protocol for cardiac surgery. Across 6 studies, 20 unique outcomes were reported (Table 3). One study reported no incidence of postoperative complications [23].
Hospital length of stay. Hospital LOS was assessed by three studies; however, these studies varied in their definitions [18,21,23].
Mortality
Four of the six included studies assessed the mortality rate associated with IE, which was between 0% and 5.8% [18][19][20][21]. It is noted that the two more recent studies did not assess for mortality [22,23]. No studies investigated a comparison between the IE and conventional extubation groups.
Postoperative complications
Myocardial ischemia/infarction. All studies recorded the incidence of myocardial ischemia or infarction. One study reported no incidence [23]. In the five studies that did, the incidence ranged from 0.6% to 7.7% [18][19][20][21][22]. Only one study compared the incidence of myocardial ischemia/infarction between the IE and conventional care groups and demonstrated a significantly lower incidence in the IE group [22].
Reintubation. All studies recorded the incidence of reintubation. One study had no incidence of reintubation [23]. In the five studies, the incidence ranged from 2.5% to 8% [18][19][20][21][22]. Only one study compared the incidence of reintubation between the IE and conventional care group and showed no significant difference [22].
Reoperation because of bleeding or occlusion. Five studies recorded the incidence of reoperation because of bleeding or occlusion, which ranged from 0.4% to 1.5% [19][20][21][22][23]. One study reported no incidence [23]. Only one study compared incidence between IE and conventional groups and noted no significant difference between the two arms [22].
Arrhythmias or blocks. Four studies recorded the incidence of arrhythmia or block [19][20][21]23]. One study found no incidence [21]. In the two studies that did, the incidence ranged from 10.3% to 16.3% [19,20]. No studies investigated comparisons between groups.
FIGURE 1 | Systematic literature search. CABG, coronary artery bypass graft.

Eight postoperative complications were investigated in a few studies. Two of these complications (bleeding not requiring reoperation [18,22] and vomiting [22]) were found to have lower incidences in the conventional care group (Table 3). Incidence for five postoperative complications (ie, prolonged ventilation [19,23], mediastinitis [20,23], need for postoperative transfusion [19,23], tamponade [19], vasoplegia [20]) was reported; however, these were all observational studies and comparisons were not made. Only one study assessed for a reduction in cardiac output, and no reduction was noted [23].
Research Question 2: In patients receiving FTCA protocols for cardiac surgery, which pre-existing and intraoperative factors may be associated with delayed extubation?
Characteristics of included studies. This review included six studies: one case series [24] and five case-control studies [6,20,21,25,26], comprising a total of 3534 patients (Table 4). Two studies investigated factors associated with delayed IE [20,21], and four investigated factors associated with delayed EE [6,[24][25][26], although the time frame used to define delayed EE varied between studies.
Studies were published between 1999 and 2015. Across all studies, 78.1% were male (range: 68.0%-87.0%). Mean age was provided in five studies and ranged from 33.8 to 69 years. Although all studies adhered to their design expectations, observational studies are subject to high risk of bias (Table 2).
Three studies predefined potential factors associated with delayed extubation [6,24,26], although the reasons for those choices were not provided. No study provided a comprehensive description of how preoperative data were collected or defined exclusion criteria.
Information was collected on 37 factors in patients, but no study investigated all 37. Of the 37 factors, only 2 (age >60 and pre-existing cardiac insufficiency) were investigated by all six studies. In one or more studies, 22 factors associated with delayed extubation were described. Fifteen pre-existing conditions or intraoperative factors were investigated but were not identified as factors associated with delayed extubation (Appendix 2).
Factors associated with delayed extubation
Pre-existing conditions. Six studies considered preoperative factors that may be associated with delayed immediate [20,21] or EE [6,[24][25][26]. Twenty-seven pre-existing conditions were examined, and two were identified as factors that delay EE by three studies. Age (>60). All six studies considered whether age was a factor associated with delayed immediate [20,21] or EE [6,[24][25][26]. Two studies identified age >60 as a factor associated with delayed extubation [6,24]. Cardiac insufficiency (heart failure). All six studies considered whether cardiac insufficiency was a potential factor associated with delayed immediate [20,21] or early [6,[24][25][26] extubation. One study identified cardiac insufficiency as a factor associated with delayed IE [20], and two identified it as a factor associated with delayed EE [24,25].
Renal disease/renal insufficiency. Five studies considered whether renal disease/renal insufficiency was a factor associated with delayed IE [20,21] or EE [6,24,25]. One of these studies identified pre-existing renal disease/renal insufficiency as a factor associated with delayed IE [21], and two studies identified it as a factor associated with delayed EE [6,24].
Diabetes. Five studies considered whether diabetes was a factor associated with delayed IE [20,21] or EE [6,24,25]. One of these studies identified diabetes as a factor associated with delayed IE [21].
Pulmonary disease. Five studies considered whether the pre-existing pulmonary disease was a factor associated with delayed IE [20,21] or EE [6,24,25]. One study identified pulmonary disease as a factor associated with delayed IE [20].
Sex. Five studies considered whether sex was a factor associated with delayed IE (20,21) or EE [6,24,25]. One study described female sex as a factor associated with delayed EE [24].
Emergency surgery. Four studies considered whether the need for emergency cardiac surgery was a factor associated with delayed IE [20,21] or EE [6,24]. One study identified it as a factor associated with delayed EE [24]. Obesity. Four studies considered whether obesity was a factor associated with delayed IE [20] or EE [6,24,25]. None of these studies identified it as a factor.
History of stroke. Four studies considered whether a history of stroke was a factor associated with delayed IE [20] or EE [6,24,25]. None of these studies identified it as a factor.
EuroSCORE. Three studies considered whether the European System for Cardiac Operative Risk Evaluation (EuroSCORE) was a factor associated with delayed IE [21] or EE [6,25]. Two studies identified a higher EuroSCORE as a factor associated with delayed EE [6,25].
Hypertension. Three studies considered whether hypertension was a factor associated with delayed IE [21] or EE [6,24]. One study identified it as a factor associated with delayed EE [6].
Recent myocardial infarction. Three studies considered whether a recent myocardial infarction was a factor associated with delayed EE [6,24,25]. Each study defined recent MI differently (eg, one study defined recent MI as within 2 weeks [25], whereas another defined it as within 1 week [24]). One study identified MI before surgery as a factor associated with delayed EE [24].
Intra-aortic balloon pump. Two studies considered whether the need for a preoperative intra-aortic balloon pump was a factor associated with delayed IE [21] or EE [24]. Each study identified a preoperative intra-aortic balloon pump as a factor associated with delayed extubation.
Fourteen pre-existing factors were investigated only once. Of these, only two, prior cardiac surgery [21] and number of diseased vessels [25], were identified as factors associated with delayed extubation (Appendix 2).
Intraoperative factors. Six studies considered intraoperative factors that may be associated with delayed IE [20,21] or EE [6,[24][25][26]. Eleven intraoperative conditions were examined, and two were identified as intraoperative factors associated with delayed IE or EE by three studies (Appendix 2).
Cross-clamp time. Four studies considered whether a longer crossclamp time was a factor associated with delayed IE [20] or EE [6,24,26]. One study identified a longer cross-clamp time as a factor associated with delayed IE [20], and two studies identified it as a factor associated with delayed EE [6,26].
Time on pump. Four studies considered whether time on pump was a factor associated with delayed EE [6,[24][25][26]. Three studies identified time on pump as a factor associated with delayed EE [6,24,26].
Difficulty discontinuing cardiopulmonary bypass. Two studies considered whether difficulty discontinuing cardiopulmonary bypass was a factor associated with delayed IE [20] or EE [24]. Each study identified it as a factor.
Requirement for intraoperative cardiac pacing. Two studies considered whether the need for intraoperative cardiac pacing was a factor associated with delayed IE [20] or EE [26]. Each study identified the need for intraoperative pacing as a factor. Requirement for transfusion. Two studies considered whether the requirement for transfusion was a factor associated with delayed EE [25,26]. Each study identified it as a factor.
Four intraoperative conditions were investigated only once (see Appendix 2). Of these, two, total surgical time [21] and first lactate or acid-base deficit after surgery [6], were identified as factors associated with delayed extubation. Two conditions, intraoperative hemofiltration and peak intraoperative CK-MB, were only investigated by a single study and were not identified as factors associated with delayed extubation [20,26].
Risk of bias in included studies
Research question 1: Of the six studies, one used a design with the potential to provide the highest level of evidence; however, it had critical methodological weaknesses [22]. The remaining observational studies, while meeting most expectations of their designs, have inherent noncontrollable biases and at best provide low levels of evidence (Table 2; [18][19][20][21]23]). A further challenge with the included studies was that they used a variety of extubation protocols (Appendix 3). Consequently, the evidence for question 1 was low.

Research question 2: All studies used observational designs and met most methodological expectations of their designs ([6,20,21,[24][25][26]; Table 2). The potential for bias is high with observational studies, so the level of evidence must be considered low. Specific limitations of the included studies were that observational studies cannot determine cause and that no study provided a rationale for identifying factors for investigation. Finally, studies employed a variety of extubation protocols (Appendix 3).
Research Question 1: What is the evidence regarding the safety of IE in patients receiving a FTCA protocol for cardiac surgery?
In the early 1990s, FTCA was introduced to reduce resource demand in patients undergoing cardiac surgery. EE with FTCA has been shown to reduce ICU LOS and hospital costs [1]. The authors of the previous Cochrane review of 19 studies [3] found that EE under a FTCA protocol appeared as safe as conventional care with regard to the risk of mortality and major postoperative complications. That review included a single study suggesting that IE with FTCA may be as safe as EE [3]. A synthesis of studies evaluating the safety of IE might ultimately support a change in practice that reduces resource waste.
Our research team reviewed the evidence regarding the safety of IE in patients receiving a FTCA protocol. Although most included studies used designs that did not permit comparison of treatment outcomes in patients undergoing IE versus EE, they do allow the identification of frequently reported postoperative complications in patients undergoing IE.
The studies included in this review were conducted in countries with diverse resources, so the treatment outcomes reported may reflect variations in system capacities. Participants represented in this review were predominantly male (79%). Although women may be proportionally represented in some studies [27], the authors did not conduct gender- or sex-based analyses. Furthermore, none of the studies investigated associations between race and treatment outcomes. Such analyses are important, as prior studies demonstrate that 20% of recently approved drugs exhibit racial and/or ethnic differences in disposition and response [28], whereas other studies have shown associations between sex and cardiac surgery outcomes [27].
Currently, the evidence to support IE under an FTCA protocol is weak, demonstrating the need for adequately powered multicentre RCTs that examine the service and treatment outcomes in patients receiving IE with a FTCA protocol. These studies should seek to control for variables that may introduce bias (eg, variations in extubation protocols across studies, resource and skill capacities of investigating centres). This review also establishes a list of treatment outcomes that should be considered in these future trials. Future studies should also consider the influence of race, sex, gender and ethnicity on treatment outcomes associated with IE.
As with EE, adopting IE in practice may support additional reductions in resource demands, making further primary research in this area important. Evidence that supports safe IE protocols would enable and encourage anesthesia care teams to consider IE.

Research Question 2: In patients receiving FTCA protocols for cardiac surgery, which pre-existing and intraoperative factors may be associated with delayed extubation?
In a small percentage of patients undergoing cardiac surgery with an FTCA protocol, extubation within a desired time frame (eg, within 8 h) is not appropriate and therefore delayed [4,5]. These patients often experience longer LOS, use more medical resources and experience higher mortality rates. Understanding pre-existing and intraoperative factors that may predict delayed extubation is essential for ensuring the best quality of care and pre-and postoperative planning. This review synthesized studies that investigated pre-and intraoperative factors that may predict delayed extubation in cardiac surgery patients on a FTCA protocol. Six studies [6,20,21,[24][25][26] totalling 3534 patients were included. As this synthesis aimed to determine factors associated with delayed extubation, the designs employed in the included studies were appropriate for this research question. Furthermore, all six studies met most expectations of their methodological designs.
Studies included in this synthesis evaluated 37 unique factors; however, no single study investigated all 37. Pre-existing cardiac insufficiency, pre-existing renal disease, time on pump and cross-clamp time were each investigated in four or more of the included studies and identified as risk factors for delayed extubation in three of them. Factors that were associated with delayed extubation in multiple studies may warrant consideration as important potential predictors for future research on this topic.
The EuroSCORE predicts in-hospital mortality risk following major cardiac surgery [29]. Three studies investigated the EuroSCORE as a predictor of delayed extubation, and two of them identified it as a predictor. It is interesting to note that five components used in the calculation of the EuroSCORE (age, diabetes, pulmonary disease, emergency surgery and sex) were not identified as possible predictors of failed extubation when concurrently examined in these studies. This indicates that further consideration should be given to the use of the EuroSCORE as a predictor of delayed extubation.
The total collective sample size addressing research question 2 is small, and few factors were investigated by all studies. A significant limitation of the included studies is the lack of justification by authors regarding how these factors were identified and/or selected. Future studies could be more deliberate in selecting and investigating outcomes related to delayed extubation. Furthermore, other important factors such as race, ethnicity and gender have not been investigated, and other possibly important factors (eg, sleep apnea [30]) have yet to be evaluated. Finally, there were numerous variables across studies that were difficult to account for. Potential jurisdictional variations in resource and equipment availability, practice guidelines, extubation protocols and skill capacity may have influenced study outcomes.
Future studies should include prospective comparative designs that incorporate a more judicious list of outcome measures and analyses that define combinations of factors most associated with delayed extubation. Adequately powered prospective multicentred observational studies will provide a higher level of evidence. Such evidence will support the development of algorithms that predict those at risk for delayed extubation and support resource allocation.
As with research question 1, future studies should seek to control variables that may introduce bias (eg, race, ethnicity, gender, variations in extubation protocols across sites, resource and skill capacities of investigating centres).
Strengths and limitations
Although the present study has strengths, the results should be interpreted in light of its limitations.
Strengths: This is the first review that synthesized the studies examining the outcomes of IE. To our knowledge, this is also the first review to synthesize the studies of pre-and intraoperative factors associated with delayed extubation in patients receiving FTCA protocols. This review also included studies published in multiple languages. All authors worked collaboratively to conduct a comprehensive search, review citations, synthesize data and assess study quality, thereby minimizing the likelihood of bias [31].
Limitations: Our searches were limited to studies published in peer-reviewed journals. There were few studies for each research question and relatively small sample sizes in most studies. Although the studies were generally of high quality for their chosen design, most were observational designs; thus, they produced a low level of evidence. In research question 2, there was heterogeneity across studies regarding the time to extubation. Furthermore, the four most frequently reported risk factors (ie, pre-existing cardiac disease, pre-existing renal disease, time on pump and cross-clamp time) lacked consistent definitions in the papers. For example, the amount of time on pump considered to be a risk for failed extubation was not quantified. Because of the significant heterogeneity and the absence of control groups in most studies, meta-analysis was not possible. It is also important to note that the studies included in this review are all single-centred. Single-centre studies typically lack the external validity required to draw widespread conclusions about a given practice or population, and their incorporation into universal guidelines and policies may be inappropriate.
Finally, studies including remifentanil, an opioid often used in FTCA, were excluded from this review because its pharmacokinetic properties differ significantly from those of fentanyl and sufentanil [12]. This limits the applicability of our findings to settings where remifentanil is used as part of the FTCA protocol.
CONCLUSION
Our review highlights the paucity of high-quality studies examining the safety of IE in patients receiving FTCA. Although the included studies were of insufficient quality or design to enable comparisons of treatment outcomes in patients undergoing IE versus EE, they have allowed the development of a comprehensive list of common treatment outcomes.
This review reveals factors associated with delayed extubation, but the results are limited in that not all studies examined the same pre- and intraoperative factors, and the most frequently reported factors were inconsistently defined. | 2023-01-24T16:26:41.522Z | 2023-01-20T00:00:00.000 | {
"year": 2023,
"sha1": "d9d708c40e6883d158956ffdbcf8d8cbfff173f3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.29390/cjrt-2022-037",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f967203c3596ca08a1027209a79f9e9a76d031be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
272816940 | pes2o/s2orc | v3-fos-license | Can immediate postoperative radiographs predict outcomes in pediatric clubfoot?
BACKGROUND The goal of treatment for pediatric idiopathic clubfoot is to enable the patient to comfortably walk on his or her soles without pain. However, currently accepted treatment protocols are not always successful. Based on the abnormal bone alignment reported in this disease, some studies have noted a correlation between radiographic characteristics and outcome, but this correlation remains debated. AIM To assess the correlation between immediately postoperative radiographic parameters and functional outcomes and to identify which best predicts functional outcome. METHODS To predict the outcome and prevent early failure of the Ponseti method, we used a simple radiographic method to predict outcome. Our study included newborns with idiopathic clubfoot treated with the Ponseti protocol from November 2018 to August 2022. After Achilles tenotomy and a long leg cast were applied, the surgeon obtained a single lateral radiograph. Radiographic parameters included the tibiocalcaneal angle (TiCal), talocalcaneal angle (TaCal), talo-first metatarsal angle (Ta1st) and tibiotalar angle (TiTa). During the follow-up period, the Dimeglio score and functional score were examined 1 year after surgery. Additionally, recurring events were reported. The correlation between functional score and radiographic characteristics was analyzed using simple and multiple logistic regression, and the optimal predictor was also identified. RESULTS In total, 54 feet received approximately 8 manipulation and casting sessions, followed by Achilles tenotomy at a mean age of 149 days. The average TiCal, TaCal, Ta1st, and TiTa angles were 75.24, 28.96, 7.61, and 107.31 degrees, respectively. After 12 mo of follow up, we found 66% excellent-to-good and 33.3% fair-to-poor functional outcomes. The Dimeglio score significantly worsened in the poor outcome group (P value < 0.001). TiCal and TaCal showed significant differences between the functional outcome groups (P value < 0.05), and the TiCal strongly correlated with outcome, with a smaller angle indicating a better outcome; each 1-degree decrease improved the odds of a good functional outcome by approximately 10 percent. The diagnostic test revealed that a TiCal angle above 70 degrees predicts an inferior functional outcome. CONCLUSION The TiCal, derived from lateral radiographs immediately after Achilles tenotomy, can predict functional outcome at 1 year postoperatively, justifying its use for screening patients who need very close follow-up.
AIM
To assess the correlation between immediately postoperative radiographic parameters and functional outcomes and to identify which best predicts functional outcome.
METHODS
To predict the outcome and prevent early failure of the Ponseti method, we used a simple radiographic method to predict outcome. Our study included newborns with idiopathic clubfoot treated with the Ponseti protocol from November 2018 to August 2022. After Achilles tenotomy and a long leg cast were applied, the surgeon obtained a single lateral radiograph. Radiographic parameters included the tibiocalcaneal angle (TiCal), talocalcaneal angle (TaCal), talo-first metatarsal angle (Ta1st) and tibiotalar angle (TiTa). During the follow-up period, the Dimeglio score and functional score were examined 1 year after surgery. Additionally, recurring events were reported. The correlation between functional score and radiographic characteristics was analyzed using simple and multiple logistic regression, and the optimal predictor was also identified.
RESULTS
In total, 54 feet received approximately 8 manipulation and casting sessions, followed by Achilles tenotomy at a mean age of 149 days. The average TiCal, TaCal, Ta1st, and TiTa angles were 75.24, 28.96, 7.61, and 107.31 degrees, respectively. After 12 mo of follow up, we found 66% excellent-to-good and 33.3% fair-to-poor functional outcomes.
However, Ponseti noted that radiography cannot predict prognosis because it is not associated with clinical appearance [8]. Nevertheless, some reports identified a significant relationship between preoperative radiographs and treatment decisions. For example, a lateral tibiocalcaneal angle > 80 degrees indicates a need for Achilles tenotomy [9]. In 2015, a retrospective study found that a preoperative dorsiflexion angle > 16.6 degrees did not require reoperation and was related to recurrence [10]. Furthermore, a lateral tibiocalcaneal angle > 77 degrees and a lateral talocalcaneal angle < 29 degrees at the time of brace withdrawal predict reoperation [11].
The purposes of this study were as follows: (1) to demonstrate the correlation between immediately postoperative radiographic parameters and functional outcome; and (2) to identify the radiographic parameter that best predicts functional outcome.
Population
This work was a retrospective cohort study conducted from November 2018 to August 2022. Any newborn patient who was diagnosed with idiopathic clubfoot was included. Exclusion criteria included syndromic clubfoot, recurrent cases and patients who had received previous treatment. All patients underwent weekly manipulation of their feet according to the Ponseti technique, followed by Achilles tendon tenotomy and long leg cast application. Immediately after this procedure, the surgeon took one radiograph of the lateral foot and ankle.
Radiography and clinical parameters
In an attempt to predict the outcome and prevent early failure of the Ponseti method, we used a simple radiographic method to predict outcome, because postoperative radiographic studies are lacking and radiographic assessment is not associated with significant disadvantages [12].
Radiographic parameters included the tibiocalcaneal angle (TiCal), talocalcaneal angle (TaCal), talo-first metatarsal angle (Ta1st) and tibiotalar angle (TiTa) (Figure 1). Foot characteristics were evaluated according to the Dimeglio classification, and functional scores were assessed 1 year after surgery, as described by Ponseti, and interpreted as follows: a total score < 70, 70-79, 80-89, and 90-100 represents poor, fair, good, and excellent outcomes, respectively [13]. Additionally, recurring events were reported if further surgery was needed. Data were analyzed by 2 observers, a fourth-year orthopedic resident and a pediatric orthopedic surgeon, and interrater reliability was confirmed using the kappa statistic.
Statistical analysis
All statistical analyses were performed using STATA 11 statistical software (Stata Corp., College Station, TX, United States). The chi-squared test and Fisher's exact test were used to assess independence between two dichotomous variables. The chi-squared test was applied under the assumption that the sample was large; when more than 20 percent of cells had expected frequencies < 5, Fisher's exact test was used instead for small samples. The two-sample t-test was used to compare the means of continuous variables, and the Mann-Whitney U test was used when a variable did not have a normal distribution. Logistic regression using penalized maximum likelihood estimation was used to determine factors associated with functional scores. A P value < 0.05 was considered statistically significant, and the magnitude of association was reported as crude odds ratios (OR), adjusted OR, and 95% confidence intervals (CI).
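For readers who want to see the logic of this analysis plan in executable form, the sketch below is a minimal Python illustration (the original analysis was run in Stata 11); the variable names, the Shapiro-Wilk normality check, and the use of standard rather than penalized maximum likelihood are assumptions made for the example only.

```python
# Minimal sketch of the analysis plan described above (assumed variable names;
# the study itself used Stata 11 with penalized maximum likelihood estimation).
import numpy as np
from scipy import stats
import statsmodels.api as sm

def compare_categorical(table_2x2):
    """Chi-squared test, falling back to Fisher's exact test when more than
    20% of cells have expected frequencies below 5."""
    _, p, _, expected = stats.chi2_contingency(table_2x2)
    if (expected < 5).mean() > 0.20:
        _, p = stats.fisher_exact(table_2x2)
    return p

def compare_continuous(x, y):
    """Two-sample t-test, or Mann-Whitney U test when either group departs
    from normality (the Shapiro-Wilk check is an assumption of this sketch)."""
    normal = stats.shapiro(x).pvalue > 0.05 and stats.shapiro(y).pvalue > 0.05
    test = stats.ttest_ind(x, y) if normal else stats.mannwhitneyu(x, y)
    return test.pvalue

def tical_odds_ratio(angles, good_outcome):
    """Logistic regression of the dichotomized functional outcome on the TiCal
    angle; returns the odds ratio per 1-degree increase and its 95% CI."""
    X = sm.add_constant(np.asarray(angles, dtype=float))
    fit = sm.Logit(np.asarray(good_outcome), X).fit(disp=False)
    or_per_degree = np.exp(fit.params[1])
    ci_low, ci_high = np.exp(fit.conf_int()[1])
    return or_per_degree, (ci_low, ci_high)
```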
RESULTS
The study included 54 feet from 35 newborn patients with clubfoot. All feet received manipulation, and a long leg cast was applied approximately 8 times on average. Then, the Achilles tenotomy procedure was performed at an average age of 149 days. Immediately after surgery and cast application, we obtained radiographs and found that the average TiCal, TaCal, Ta1st, and TiTa angles were 75.24, 28.96, 7.61, and 107.31 degrees, respectively. After the last cast was removed, the brace protocol was utilized as usual. The Dimeglio score significantly worsened in the poor outcome group (P value < 0.001), which was clearly evident 6 mo postoperatively. After 12 mo of follow up, 24% of cases required further surgery, 66% of cases had an excellent-to-good functional outcome and 33.3% of cases had a fair-to-poor functional outcome. Demographic data did not significantly differ between groups, as shown in Table 1.
Table 2 presents the significant differences in the TiCal and TaCal angles between the functional outcome groups (P value < 0.05), and the TiCal angle was strongly predictive of outcome, as shown in Table 3. Furthermore, the study shows that a lower TiCal angle corresponded to a better outcome, with an adjusted odds ratio of 0.90 (0.83-0.99); specifically, each 1-degree decrease improved the odds of a good functional outcome by approximately 10 percent. The diagnostic test revealed that a TiCal angle above 70 degrees predicts an inferior functional outcome, with 88.9% sensitivity, 41.7% specificity, and a ROC area of 0.56 (95%CI: 0.42-0.70).
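As an illustration of how the reported sensitivity, specificity and ROC area relate to the 70-degree cutoff, the short Python sketch below computes these quantities from labeled angle measurements; the arrays shown are placeholder values, not the study data.

```python
# Illustrative computation of sensitivity, specificity and ROC area for a
# TiCal cutoff of 70 degrees (placeholder values, not the study's data).
import numpy as np
from sklearn.metrics import roc_auc_score

tical = np.array([68.0, 72.5, 75.0, 66.0, 81.0, 79.5, 70.0, 74.0])  # degrees
poor_outcome = np.array([0, 1, 1, 0, 1, 0, 0, 1])                   # 1 = fair/poor

test_positive = tical > 70                       # flagged as at risk by the cutoff
tp = np.sum(test_positive & (poor_outcome == 1))
fn = np.sum(~test_positive & (poor_outcome == 1))
tn = np.sum(~test_positive & (poor_outcome == 0))
fp = np.sum(test_positive & (poor_outcome == 0))

sensitivity = tp / (tp + fn)   # share of poor outcomes caught by the cutoff
specificity = tn / (tn + fp)   # share of good outcomes correctly not flagged
auc = roc_auc_score(poor_outcome, tical)   # ROC area, treating the angle as a score
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```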
DISCUSSION
Idiopathic clubfoot is the most common multifactorial irreducible foot problem in newborns [2,14]. To date, the Ponseti protocol is widely utilized to treat this condition, in which the deformity is corrected sequentially, followed by Achilles tenotomy and brace application. However, previous studies showed a 33%-41% rate of recurrence [15,16]. Clubfoot is pathogenically characterized by abnormal bone alignment and abnormal radiographic features compared with normal feet, including bony abnormalities resulting from incorrect treatment, whereas radiographic features after correct treatment are clearly better than those before treatment [17,18].
This study found a correlation between radiographic data (lateral tibiocalcaneal and talocalcaneal angles derived immediately after surgery) and functional outcomes at the 12 mo follow-up. This finding is in agreement with previous reports that support the use of radiographs for treatment guidance, especially in residual deformity correction, such as complete subtalar release or posteromedial release procedures [19-21].
The tibiocalcaneal angle was the most reliable feature for predicting outcome in the present study, as a smaller angle predicted a better outcome based on plantigrade ability. We found that a cutoff point of > 70 degrees could predict a fair-to-poor functional outcome at walking age with 88.9% sensitivity, similar to the equinus position, which results in a poor quality of life. Similarly, previous studies recommended using this angle to predict risk of relapse and to decide the surgical type, such as Achilles tenotomy, soft tissue release, and even reconstructive procedures for recurrent clubfoot, to improve functional outcome, but these studies investigated older children [8,10,11,22,23]. Additionally, a close relationship between clinical findings and the talocalcaneal and talo-first metatarsal angles was found in some studies [17,24,25].
Although a later study from 2017 discovered that radiographic abnormalities are not indicative of clinical abnormalities and that the Ponseti method can improve foot shape but cannot correct bone deformities, the treatment protocol needs to be based on various data sources [8,26]. Radiography can serve as a criterion to screen for patients who need very close follow-up.
This study has the following strengths: (1) We used functional outcome as the end result instead of recurrence because recurrence is a subjective assessment that the surgeon utilizes to determine whether to perform additional interventions; (2) We analyzed only ossified bone to provide more accurate results; and (3) We based our analysis on one lateral view radiograph, which is harmless to patients, as shown in a previous study [12].
Limitations of the study
The small sample analyzed in this study limits the statistical power to detect differences between groups. Furthermore, for accuracy, we calculated the angles based only on ossified bone in these young children. Consequently, we may lack information from other, nonossified bones.
Figure 1. Radiographic angles. A: The tibiocalcaneal angle was defined as the angle between the axis of the tibia and the axis of the calcaneus; B: The talocalcaneal angle was defined as the angle between the talus axis and the calcaneus axis; C: The talo-first metatarsal angle was defined as the angle between the axis of the talus and the axis of the first metatarsal bone; D: The tibiotalar angle was defined as the angle between the axis of the tibia and the axis of the talus.
CONCLUSION
The tibiocalcaneal angle, derived from lateral radiographs immediately after Achilles tenotomy and casting, can predict functional outcome at 1 year postoperatively and provide a sufficient rationale for screening patients who need very close follow-up.
Table 2. Mean of each radiographic angle in clubfoot patients after surgery in the lateral view. Columns: Angle; Excellent and good group, mean (SD); Fair and poor group, mean (SD); Mean difference (95%CI); P value (a).
a: Independent samples t-test.
Table 3. Correlation of radiographic parameters and functional outcomes. Functional score: excellent and good group (n = 36), fair and poor group (n = 18). Columns: Angle; Crude odds ratio (95%CI) (a); Adjusted odds ratio (95%CI); P value (b).
a: Simple logistic regression. b: Multiple logistic regression. CI: Confidence interval. | 2022-11-18T16:29:36.190Z | 2022-11-18T00:00:00.000 | {
"year": 2022,
"sha1": "19a2137fb3b2f595611ce74350f2bc77b6484f48",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5e8dad8ab38def459fdf2c9f4ed1d732326d67a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242816568 | pes2o/s2orc | v3-fos-license | Klotho upregulates the interaction between RANK and TRAF6 to facilitate RANKL-induced osteoclastogenesis via the NF-κB signaling pathway
Background α-Klotho (Klotho) plays a wide range of roles in pathophysiological processes, such as the low-turnover osteoporosis observed in klotho mutant mice (kl/kl mice). However, the precise function and underlying mechanism of klotho during osteoclastogenesis are not fully understood. Here, we investigated the effects of klotho on osteoclastogenesis induced by receptor activator of nuclear factor kappa-B ligand (RANKL). Methods The effects of klotho deficiency on osteoclastogenesis were explored using kl/kl mice both in vivo and in vitro. In the in vitro experiments, lentivirus transfection, real-time quantitative PCR (RT-qPCR) analysis, western blot analysis, immunostaining, RNA-seq analysis, differential pathway analysis, energy-based protein docking analysis and co-immunoprecipitation were used to investigate in depth the effects of klotho on RANKL-induced osteoclastogenesis and the underlying mechanism. Results We found that klotho deficiency impaired osteoclastogenesis. Furthermore, in vitro studies revealed that klotho facilitated osteoclastogenesis and upregulated the expression of c-Fos and nuclear factor of activated T cells cytoplasmic 1 (NFATc1) during osteoclastogenesis. Mechanistically, we confirmed that klotho co-localized with receptor activator of nuclear factor kappa-B (RANK) and facilitated the interaction between activated RANK and TNFR-associated factor 6 (TRAF6); thus, klotho exerts its function in osteoclastogenesis through activation of the NF-κB signaling pathway. Conclusions Klotho promotes RANKL-induced osteoclastogenesis by upregulating the interaction between RANK and TRAF6. Targeting klotho may be an attractive therapeutic approach for osteopenic diseases.
Introduction
Bone is a dynamic tissue that is continuously remodeled by osteoblasts and osteoclasts. Bone integrity and mineral homeostasis rely on a balance between osteoblastic bone formation and osteoclastic bone resorption (1). Osteoclasts are derived from hematopoietic stem cells, and they are the only cell type responsible for bone resorption in adulthood. Abnormalities of osteoclasts result in many diseases such as osteoporosis, ectopic ossification, and osteopetrosis (2). Studies on osteoclastogenesis can provide valuable strategies for combating osteoclast abnormalities.
Klotho (KL) is a highly conserved gene that encodes the klotho protein, with a sequence homology of up to 94% among humans, rats, and mice (3-5). Importantly, klotho mutant (kl/kl) mice that carry hypomorphic klotho alleles develop a syndrome exhibiting human aging-related phenotypes, including short life span (3-4 months), neural degeneration, and abnormal mineral metabolism. An interesting finding in kl/kl mice is that both osteoblasts and osteoclasts are impaired, leading to low-turnover osteoporosis; however, this impairment is independent of osteoblast-osteoclast interactions. The osteoclast abnormality is not related to a defect in osteoblastic-cell support of osteoclast differentiation but is due to abnormalities of osteoclast progenitors (6). It is difficult to conclude that klotho alone leads to the abnormality of osteoclast progenitors in kl/kl mice, as klotho is completely deficient in this model. Also, the role of klotho during osteoclastogenesis remains unclear, so it is of great value to explore the function of klotho in osteoclastogenesis and the underlying mechanisms.
Osteoclastogenesis relies on the stimulation of 2 important factors, namely macrophage colony-stimulating factor (M-CSF) and receptor activator of nuclear factor kappa-B ligand (RANKL) (7,8). M-CSF is a cytokine involved in the initiation of bone marrow-derived macrophages (BMMs) differentiation into osteoclast precursors, and it also regulates the survival and proliferation of BMMs and pre-osteoclasts. RANKL is the only ligand that binds to the extracellular portion of RANK (9). It induces osteoclast differentiation, the termination of differentiation, and supports the differentiation of pre-osteoclasts into mature osteoclasts (10). RANK, a type I transmembrane protein, recruits different adaptor proteins for intracellular signal transduction, followed by the activation of various signaling pathways such as NF-κB, ERK, p38, and AKT (1,2,11-14). Among these adaptor proteins, TNFR-associated factors (TRAFs), especially TRAF6, are the most critical ones. TRAF6 can be recruited by RANK after RANKL stimulation to form a RANK-TRAF6 complex through which downstream signaling pathways are activated (15,16). Various activated downstream signaling pathways ultimately trigger the expression and translocation of nuclear factor of activated T cells cytoplasmic 1 (NFATc1), which serves as an essential regulator for a number of osteoclast-specific genes responsible for osteoclast function such as TRAP, cathepsin K (CTSK), and calcitonin receptor (CTR) through cooperation with c-Fos (17-19).
In this study, we used kl/kl mice and several in vitro experiments to demonstrate that klotho promotes RANKL-induced osteoclastogenesis. Exploring the underlying mechanisms, we found that during RANKL-induced osteoclastogenesis klotho activates the downstream NF-κB signaling pathway, thereby upregulating the expression of c-Fos and NFATc1; moreover, the positive effect of klotho on activation of the NF-κB signaling pathway was attributable to klotho binding to RANK to facilitate the interaction between RANK and TRAF6. Klotho may serve as a promising therapeutic target for osteoclast-related diseases. We present the following article in accordance with the ARRIVE reporting checklist (available at https://dx.doi.org/10.21037/atm-21-4332).
Animals
We purchased heterozygous klotho mutant mice (kl/+), named C57BL/6N-Klem1cyagen, from Cyagen Biosciences Inc. Since homozygous klotho mutant mice (kl/kl) are infertile, we generated kl/kl mice and wild-type (WT) mice by crossing kl/+ mice. All experimental mice (klotho mutant homozygotes or WT mice) were 4-6 weeks old. The kl/kl mice and WT mice were bred and maintained in the animal facilities at the Third Military Medical University. Experiments were performed under a project license (NO.: AMUWEC2021881) granted by the ethics board of The Third Military Medical University, in compliance with The Third Military Medical University institutional guidelines for the care and use of animals.
MicroCT analysis
The mice were sacrificed, and we dissected femur specimens from both sides of kl/kl mice and WT mice; all mice used were 4-week-old males. The specimens were then fixed overnight in 10% formalin and analyzed by high-resolution μCT (Skyscan1272, Bruker microCT, Kontich, Belgium). There were 24 samples in total, including 12 specimens from kl/kl mice and 12 specimens from WT mice. The scanner was set at a voltage of 60 kV and a resolution of 12 μm per pixel. Images of perfusion computed tomography (PCT) were used to perform three-dimensional (3D) histomorphometric analyses. The region of interest was defined to cover the whole PCT compartment (the femoral head and the femoral shaft). The images were reconstructed with NRecon v1.6 software (Bioz, Inc., CA, USA), analyzed by CTAn v1.9 software (Bruker microCT), and visualized using the 3D model visualization software CTVol v2.0 (Bruker microCT).
Histochemistry
We used femur specimens from 4-week-old male kl/kl mice and WT mice for histochemical analysis. After 4 weeks of bone decalcification at 4 ℃, specimens were embedded in paraffin blocks. Blocks were sectioned at 4 μm using a paraffin microtome. Staining was performed on the paraffin-embedded sections. The sections were dewaxed and then washed 3 times with phosphate-buffered saline (PBS). The sections were used for tartrate-resistant acid phosphatase (TRAP) staining, which was conducted in accordance with the protocol provided by the manufacturer (387A-1KT, Sigma-Aldrich, USA), followed by counterstaining with methyl green (M884, Sigma-Aldrich, MO, USA) for 10 s. A total of 12 blocks were used for analysis, including 6 femurs from kl/kl mice and 6 femurs from WT mice.
Isolation of BMMs
BMMs were isolated from the femurs and tibias of kl/kl mice and WT mice; all mice used were 4-week-old males. The mice were sacrificed and the hind legs were sterilized with 70% ethanol. All connected soft tissues were then removed from the bones, and the femurs and tibias were dissected. After the epiphyses were removed, the bone marrow was flushed using alpha-modified minimal essential medium (α-MEM) (HyClone, UT, USA), and the cells in the medium were considered bone marrow cells. The bone marrow cells were mixed with red blood cell lysis buffer (Beyotime, Shanghai, China) for 2 min, then the cells were cultured for 12 h. The non-adherent cells were collected and the concentration was adjusted to 2×10^6 cells/mL in BMM medium (α-MEM) containing 10% fetal bovine serum (FBS, HyClone, UT, USA) and 30 ng/mL M-CSF (R&D Systems, MN, USA); the cells were then placed in a humidified incubator with 5% CO2 at 37 ℃. After 2 days of undisturbed culture, the culture medium was removed and the cells that adhered to the dish were considered BMMs and were used for subsequent experiments.
Lentivirus transfection
LV-NCKL, LV-KL, LV-NCshKL, and LV-shKL were supplied by Hanbio (Shanghai, China). The lentiviruses were purified from supernatants using ultracentrifugation (3,000 g and 4 ℃ for 15 min) 72 h post-transfection, and the titers of the lentiviruses were determined; an MOI of 50 was adopted for transfection. RAW264.7 cells were transfected 3 times for 8 h each, and puromycin (3 μg/mL) was then used for selection. Cells that expressed GFP were considered transfected cells and were used to measure transfection efficiency.
Immunohistochemistry
RANKL-induced RAW264.7 cells were seeded in 96-well plates at a starting density of 2×10^4 cells/well. On day 3, cells were washed with PBS 3 times and fixed with 3.75% formaldehyde in cold PBS for 10 min, followed by permeabilization with 0.5% Triton X-100 for 1 min. Blocking was then performed using 5% skim milk overnight at 4 ℃, and cells were incubated in primary antibody solution for 1 h at room temperature using reagents from the Actin Cytoskeleton and Focal Adhesion Staining kit (Merck Millipore, Darmstadt, Germany). Cells were then washed with PBS 3 times and incubated with the secondary antibody for 1 h at room temperature. Cells were counterstained with DAPI for 10 min, followed by observation with fluorescence microscopy.
Pit formation
BMMs isolated from the femurs and tibias of either kl/kl mice or WT mice, as previously described, were used for pit formation. Bovine bone slices (IDS Nordic, Herlev, Denmark) were placed in 48-well plates and 4×10^4 BMMs were seeded per well. Cells were stimulated with M-CSF (50 ng/mL) and RANKL (50 ng/mL) for 5 d to generate multinucleated osteoclasts, and the bone slices were then washed with PBS 3 times. Then, 1N NaOH was used to remove the adherent cells, and resorption pits were visualized by staining with hematoxylin (Beyotime, Shanghai, China) for 3 min. The pit formation area was analyzed with ImageJ 1.53c software (National Institutes of Health, MD, USA).
RT-qPCR
Total RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Single-stranded cDNA was reverse transcribed from 1 μg of total RNA according to the manufacturer's instructions for the Reverse Transcription System kit (Promega). Quantitative PCR was performed on a CFX96 Touch Real-Time PCR System (Bio-Rad, CA, USA) using Power SYBR-Green PCR Master Mix (Takara, Shiga, Japan) according to the instructions. The specific primer sequences were designed as follows:
RNA-seq analysis and differential pathway analysis
Total RNA was extracted using TRIzol from RANKL-induced BMMs of either kl/kl mice or WT mice on day 1, and ribosomal RNA was removed using the Ribo-Zero™ kit (Epicentre, Madison, WI, USA). The purified library products were prepared according to the protocol of the NEBNext ® Ultra™ RNA Library Prep kit for Illumina (NEB, MA, USA), then evaluated with the Agilent 2200 TapeStation and Qubit ® 2.0 (Life Technologies, MD, USA).
The libraries were sequenced on the Illumina HiSeq 3000 platform at Guangzhou RiboBio Co. Ltd. (Guangzhou, China). The limma R package was used for gene expression analysis, and Gene Ontology (GO) analysis and KEGG pathway enrichment were then performed using the clusterProfiler R package.
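The study performed this step with the limma and clusterProfiler R packages; purely as an illustration of the general idea (gene-wise testing followed by multiple-testing correction and filtering), a simplified Python analogue might look like the sketch below, in which the matrix layout, column names and thresholds are assumptions.

```python
# Simplified Python analogue of the differential-expression step (the study
# used limma in R); gene-wise t-tests with Benjamini-Hochberg correction stand
# in for limma's moderated statistics.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_expression(expr, kl_cols, wt_cols, fdr=0.05, min_abs_lfc=1.0):
    """expr: DataFrame of log2 expression values, genes as rows, samples as columns."""
    kl, wt = expr[kl_cols].values, expr[wt_cols].values
    log2_fc = kl.mean(axis=1) - wt.mean(axis=1)
    pvals = stats.ttest_ind(kl, wt, axis=1).pvalue
    _, padj, _, _ = multipletests(pvals, method="fdr_bh")
    res = pd.DataFrame({"log2FC": log2_fc, "pval": pvals, "padj": padj},
                       index=expr.index)
    res["direction"] = np.where(res["log2FC"] > 0, "up_in_kl/kl", "down_in_kl/kl")
    # Keep genes passing both the FDR and fold-change thresholds.
    return res[(res["padj"] < fdr) & (res["log2FC"].abs() >= min_abs_lfc)]
```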
Energy-based protein docking analysis
The protein structure of RANK (ID: 3ME2) was downloaded from the Protein Data Bank (PDB) database. The 3D protein model of the extracellular region of α-klotho (NP_038851.2) was constructed using the Rosetta program (20-22). Rosetta was used to predict an automated 3D structure of klotho, where the benchmarked scoring system helps to obtain quantitative assessments of the Rosetta models. To select the final models, Rosetta clustered all the decoys based on pairwise structure similarity and reported up to 5 models corresponding to the 5 largest structure clusters. The top-scoring model was used for further analysis. The ZDOCK program was primarily used to search for all possible modes of interaction in the space between the 2 proteins by translation and rotation, and to evaluate each binding model using an energy-based scoring function (23). Docking simulations were run with ZDOCK to generate rigid-body docking poses, which were rescored by energy-based functions composed of van der Waals, electrostatics, and solvation energy terms (24).
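To make the idea of energy-based rescoring concrete, the toy sketch below ranks docking poses by a weighted sum of van der Waals, electrostatic and solvation terms; the weights, pose names and energy values are invented for illustration and do not reproduce the actual ZDOCK or Rosetta scoring functions.

```python
# Toy illustration of energy-based rescoring of rigid-body docking poses as a
# weighted sum of van der Waals, electrostatic and solvation terms. Weights and
# pose energies are placeholders; this is not the ZDOCK scoring function.
from dataclasses import dataclass

@dataclass
class Pose:
    name: str
    vdw: float            # van der Waals term (example values, kcal/mol)
    electrostatic: float   # Coulombic term
    solvation: float       # desolvation penalty

def rescore(pose, w_vdw=1.0, w_elec=0.9, w_solv=0.6):
    """Lower scores indicate more favorable predicted binding."""
    return w_vdw * pose.vdw + w_elec * pose.electrostatic + w_solv * pose.solvation

poses = [
    Pose("decoy_0001", vdw=-42.1, electrostatic=-8.3, solvation=5.2),
    Pose("decoy_0002", vdw=-35.6, electrostatic=-12.9, solvation=3.8),
]
# Rank the candidate RANK/klotho complexes by their rescored energy.
for p in sorted(poses, key=rescore):
    print(p.name, round(rescore(p), 2))
```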
Co-IP assay
RAW264.7 cells were treated with RANKL, and the supernatant was obtained after homogenization and centrifugation, followed by incubation with beads bound to the anti-RANK antibody. The mixture was centrifuged and the supernatant was discarded. The beads were washed and the protein complex was collected. The western blot assay described above was used for detection. Anti-GAPDH antibody (Bioss, Beijing, China) was used as the loading control.
Statistical analysis
The data were statistically analyzed using Statistical Product and Service Solutions (SPSS) version 15 software (IBM, Armonk, NY, USA) and are presented as mean ± SD. An unpaired two-tailed Student's t-test was conducted for comparisons between 2 groups. The level of significance was set at P<0.05. GraphPad Prism 8 software was also employed for statistical analysis.
Deficiency of klotho impairs osteoclastogenesis
We collected femur specimens of kl/kl mice and WT mice, which were then systematically scanned and analyzed by micro-computed tomography (μCT). The images showed increased trabecular bone in the kl/kl group compared with the WT group, whereas cortical bone was decreased (Figure S1A,S1B). Impaired osteoclastogenesis in kl/kl mice was indicated by significantly lower values of Oc.s/BS (%) and N.OC/B.Pm compared with WT mice (Figure S1B). A previous study identified that klotho causes impairment of osteoclasts independently from osteoblasts (3), but its effect on osteoclastogenesis remains unknown. Therefore, tartrate-resistant acid phosphatase (TRAP) staining of femur heads was performed and revealed that the number of TRAP+ osteoclasts was significantly decreased in kl/kl mice compared to WT mice (Figure 1A,1B). To further assess the effects of klotho deficiency on osteoclast formation and function, a RANKL-stimulated osteoclastogenesis assay was performed in BMMs isolated separately from kl/kl mice and WT mice. As shown by the results, there were dramatically fewer TRAP+ multinucleated cells in the kl/kl group compared to the WT controls (Figure 1C,1D). The pit formation assay was carried out to evaluate the effects of klotho deficiency on osteoclastic bone resorption using bovine bone slices, which showed that the bone resorption of osteoclasts significantly decreased in the kl/kl group compared with the WT group, suggesting reduced osteoclast function (Figure 1E,1F). Taken together, we concluded that deficiency of klotho impaired osteoclastogenesis.
Klotho promotes osteoclastogenesis in vitro
To examine the role of klotho during osteoclastogenesis, RT-qPCR was employed in both RANKL-treated RAW264.7 cells and BMMs from WT mice to detect the transcriptional levels of klotho, c-Fos, NFATc1, and several other osteoclast differentiation-related marker genes (CTSK, CTR, DC-stamp). The results revealed that klotho was expressed in osteoclastic cells and shared the same expression pattern with NFATc1 and c-Fos during osteoclastogenesis, reaching its expression peak at the early phase of osteoclast differentiation, unlike the other related marker genes (CTSK, CTR, DC-stamp) (Figure 2A,2B). Furthermore, the measurement of protein expression by western blot also showed consistent results (Figure 2C). It has been well established that NFATc1, in cooperation with c-Fos, is a master regulator of osteoclastogenesis marker genes (25-27). The above results suggested that klotho might exert functions during osteoclastogenesis. To confirm this, we transfected RAW264.7 cells using a lentivirus with a GFP expression vector for stable overexpression or knockdown of klotho. Transfection efficiency was tested by counting GFP+ cells/well after puromycin selection (Figure 2D-2F), and the transfection effect was tested by RT-qPCR for the klotho mRNA level. Three experimental vectors for each group were tested, and the subgroup LV-KL1 with the highest transfection effect was chosen for the subsequent experiments (Figure 2G). We found that wells with klotho overexpression in RANKL-treated RAW264.7 cells had more osteoclasts with actin rings compared to the sham controls on day 3 (Figure 2H,2I); under the same conditions, the subgroup LV-shKL2 with the highest transfection effect was chosen for the subsequent experiments (Figure 2J). Klotho knockdown in RAW264.7 cells led to fewer osteoclasts with actin rings (Figure 2K,2L). Collectively, these results revealed that klotho facilitates RANKL-induced osteoclastogenesis in vitro.
Klotho upregulates the expression of NFATc1 and c-Fos during osteoclastogenesis
To further explore whether klotho affects essential regulators during osteoclastogenesis, we carried out RT-qPCR and western blot analyses and found that overexpression of klotho in RANKL-treated RAW264.7 cells increased the expression of NFATc1 and c-Fos at the mRNA and protein levels on day 1 compared to sham controls (Figure 3A,3B). Knockdown of klotho in RANKL-treated RAW264.7 cells decreased the expression of NFATc1 and c-Fos at the mRNA and protein levels on day 1 compared to sham controls (Figure 3C,3D). These results suggested that klotho positively regulates the expression of NFATc1 and c-Fos during osteoclastogenesis.
Klotho exerts functions through the NF-κB signaling pathway
To explore the underlying mechanisms by which klotho promotes RANKL-induced osteoclastogenesis, we first identified differentially expressed mRNAs in kl/kl mice and WT mice, which are presented in a heatmap and volcano plots (Figure 4A,4B). A total of 2,335 differentially expressed mRNAs were identified between kl/kl mice and WT mice. Of these, 1,120 were upregulated and 1,215 were downregulated in kl/kl mice. Then, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was used to examine the processes in which the differentially expressed mRNAs were involved. The results revealed that the 'NF-κB signaling pathway' was the most enriched pathway (Figure 4C), suggesting that this was the primary signaling pathway in which differentially expressed mRNAs were involved. According to the above results, we employed western blot to detect whether klotho exerted any promotive impact on the RANKL-induced phosphorylation of specific downstream signals of the NF-κB pathway. The results showed that RANKL did not significantly increase the phosphorylation of IκB and P65 in BMMs isolated from kl/kl mice, in contrast to the elevated phosphorylation levels of IκB and P65 in BMMs from WT mice (Figure 4D). In addition, the elevation of the phosphorylation levels of IκB and P65 by RANKL was more pronounced in klotho-overexpressing RAW264.7 cells compared with sham controls (Figure 4E). However, these 2 factors showed no significant changes in klotho knockdown RAW264.7 cells compared with sham controls (Figure 4F). Furthermore, the effect of klotho on osteoclastogenesis in RANKL-treated RAW264.7 cells was significantly suppressed after treatment with BAY 11-7082, a widely used inhibitor of the NF-κB pathway (Figure 4G,4H). Taken together, the above results suggested that klotho facilitates osteoclastogenesis by upregulating the activation of the NF-κB pathway.
Klotho combines with RANK to promote the interaction between RANK and TRAF6
To further identify the underlying mechanism of NF-κB pathway activation, RT-qPCR and western blot were performed to explore changes upstream of the NF-κB pathway. To our surprise, mRNA and protein levels of RANK and TRAF6 in RANKL-treated RAW264.7 cells on day 1 showed no statistically significant differences in either the LV-KL group or the LV-shKL group compared to sham controls (Figure 5A-5C). These data suggested that klotho might exert its function through mediating the interaction between RANK and TRAF6. We next analyzed the protein structure of RANK and constructed a 3D protein model of the extracellular region of klotho (Figure 5D). We used the ZDOCK program to explore the top 20 hot spots of the RANK and klotho interface from 2000 complex structures (Figure 5E, Table 1). Finally, we identified the docking structure of the RANK/klotho complex (Figure 5F). To further examine the function of the RANK/klotho complex, we carried out co-immunoprecipitation (Co-IP) to detect the recruitment of TRAF6 by activated RANK in RAW264.7 cells with klotho overexpression or knockdown. We found that overexpression of klotho promoted the process in which TRAF6 was recruited and bound to activated RANK (Figure 5G), whereas knockdown of klotho significantly slowed this process (Figure 5H). Therefore, these data demonstrate that klotho binds to RANK, thus facilitating the interaction between RANK and TRAF6.
Discussion
The KL gene was initially identified as an anti-aging gene.
Klotho mutant mice, in which the expression of klotho is disrupted, develop a syndrome resembling human aging (28-30). In this study, we focused on the functional role and related mechanism of klotho in osteoclastogenesis. We found that klotho combined with RANK to promote the interaction between RANK and TRAF6, and subsequently activated the RANKL-induced NF-κB signaling pathway, ultimately facilitating osteoclastogenesis. Klotho is encoded by the KL gene in humans (3). The extracellular domain is composed of 2 internal repeats and has homology to family 1 glycosidases. These 2 domains form a butterfly-shaped molecule on the surface of the cellular membrane (22); however, the specific function of the intracellular domain is still unknown. Klotho mRNA is predominantly expressed in the kidneys, brain, and reproductive organs (3). One hallmark of the aging-related phenotypes of klotho mutant mice is osteoporosis (3). When bone resorption exceeds bone formation, osteoporosis develops. The pathophysiology of the osteopenia observed in kl/kl mice is characterized as low-turnover osteoporosis in which both bone formation and bone resorption are impaired; however, the decrease in bone formation is more significant (6). In the current study, we identified that the numbers of osteoblasts and osteoclasts per bone perimeter were decreased in femurs of klotho mutant mice, which is consistent with previous findings (Figure S1A,S1B). Moreover, we used RANKL to induce osteoclastogenesis in BMMs isolated from kl/kl mice and found that the lack of klotho impaired osteoclastogenesis (Figure 1A-1D). Studies have identified the presence of the klotho protein in osteocytes (31), and the disruption of klotho in osteocytes contributes to the osteoporotic bone phenotype in kl/kl mice, especially with respect to bone formation. However, the deficiency of klotho in osteocytes showed no significant effect on osteoclast resorption (32). Although bone is not the main distribution organ of klotho, recent studies provide new insights into the function of klotho expressed in bone cells, indicating that klotho may be expressed in different bone cells and function as a regulator within these cells, such as osteocytes, osteoblastic cells and osteoclastic cells. Therefore, we measured the expression of klotho mRNA and protein during osteoclast differentiation. Notably, the results showed that klotho was indeed expressed in RANKL-induced BMMs and RAW264.7 cells and shared the same expression pattern changes with NFATc1 and c-Fos during osteoclastogenesis, being elevated at the early phase of differentiation and decreasing gradually thereafter. NFATc1 is an indispensable factor for osteoclastogenesis (17,18,28). Furthermore, c-Fos is a key transcription factor at the early stage of osteoclast differentiation that belongs to the activator protein-1 (AP-1) family (25).
Studies have also identified that c-Fos is recruited to the NFATc1 promoter and is an indispensable factor for the early induction of NFATc1 during osteoclast differentiation (26,27). Our data suggested that klotho might be a regulator of osteoclastogenesis; thus, we transfected RAW264.7 cells with lentivirus vectors for klotho overexpression or klotho knockdown (Figure 2D-2G,2J). RAW264.7 cells are generally used for studies of osteoclasts as the macrophage/pre-OC population, which will differentiate into functional osteoclasts upon stimulation with RANKL (33,34). We found that overexpression of klotho in RANKL-treated RAW264.7 cells promoted osteoclastogenesis (Figures 2H,2I,3A,3B) and upregulated the expression of NFATc1 and c-Fos, while knockdown of klotho achieved the opposite results (Figures 2K,2L,3C,3D). These data identified that klotho is expressed in osteoclastic cells and upregulates NFATc1 and c-Fos to promote osteoclastogenesis. As NFATc1 is a master transcription factor for osteoclast differentiation, our subsequent studies focused on the relationship between klotho and NFATc1 during osteoclastogenesis. In osteoclastic cells, several upstream signaling pathways of NFATc1 are activated by RANKL, including the NF-κB, MAPK, and AKT signaling pathways (35-37). We performed KEGG signaling pathway enrichment analysis using RANKL-induced BMMs isolated from kl/kl mice and WT mice and found that the NF-κB pathway was the most significantly activated pathway (Figure 4A-4C). Further tests for the phosphorylation of IκB and P65 in different RANKL-induced groups of cells identified that klotho functioned via activation of the NF-κB pathway (Figure 4D-4F); the effect of klotho on osteoclastogenesis in RANKL-treated RAW264.7 cells could be suppressed by the NF-κB pathway inhibitor BAY 11-7082 (Figure 4G,4H). Klotho mRNAs and proteins are localized at different sites in various cells. For example, in the kidney, klotho mRNAs and proteins are localized in the distal tubular cells (38). Moreover, klotho co-localizes with other proteins involved in tubular calcium reabsorption (39), suggesting that klotho proteins carry out their functions in various ways, especially through co-localizing with other proteins. To explore whether klotho affects the activation of the NF-κB pathway through regulating RANK and/or TRAF6, we first identified that klotho has no effect on the expression of RANK and TRAF6 (Figure 5A-5C). Further studies using the energy-based protein docking assay, which can be efficiently applied to identify interfaces and hotspot residues in protein-protein complexes (Figure 5D-5F), revealed that klotho co-localized with RANK, suggesting it may regulate the interaction between RANK and TRAF6. Co-IP identified that TRAF6 was recruited and bound to RANK upon RANKL stimulation, and klotho significantly facilitated this process (Figure 5G,5H), indicating that the RANKL-induced recruitment of TRAF6 to RANK was positively regulated by klotho.
There are several limitations of the current study. Lentivirus transfection was not carried out in BMMs: in our pre-study, we found that the cytotoxicity of lentivirus vectors led to unstable proliferation of BMMs. Therefore, adenovirus may be an alternative vector for future study. One of the roles of klotho is to act as a co-receptor with fibroblast growth factor (FGF) receptor 1 (FGFR1) for FGF23. A previous study found that external FGF23 also plays a role in osteoclastic cells. We performed some experiments and found that FGF23 was also expressed in osteoclastic cells during RANKL-stimulated osteoclastogenesis, although our subsequent experiments found that the expression of klotho and FGF23 was independent of each other (Figure S2). The specific role of FGF23, and whether it interacts with klotho during osteoclastogenesis, was not fully elucidated in our research, and further study is needed.
Collectively, in this study, we demonstrated that klotho facilitates RANKL-induced osteoclastogenesis. Klotho co-localized with RANK, through which the interaction between RANK and TRAF6 was upregulated in RANKL-treated osteoclastic cells. The downstream NF-κB signaling pathway was thereby activated, ultimately increasing the expression of NFATc1 and c-Fos. Therefore, klotho may be a promising therapeutic candidate for the treatment of osteoclast-associated osteopenic diseases. | 2021-10-15T15:18:09.129Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "0c3a155f7f46616625cc6c46a55f848672a27aca",
"oa_license": "CCBYNCND",
"oa_url": "https://atm.amegroups.com/article/viewFile/80768/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cd55e42ff2ac40af63a52e7bd047d77acd0e95e2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
247931695 | pes2o/s2orc | v3-fos-license | Maximizing Student Clinical Communication Skills in Dental Education—A Narrative Review
Dental student training in clinical communication skills and behavioral aspects of treatment is lauded as clinically meaningful in the dental education literature. However, many dental school curricula still provide only didactic, one-time coursework with multiple-choice examination assessment and little or no student skill-activating activities. This article aims to review literature relevant to optimizing clinical communication and behavioral skills in dental education. The review summarizes the findings of several relevant reviews and usable models to focus on four themes: (1) special characteristics of dentistry relevant to communication skill needs, (2) essential components of dental student learning of communications skills, (3) clinical consultation guides or styles and (4) optimal curricular structure for communication learning effectiveness. The contexts of communication in the dental chair differ from those in medical and other allied health professions, given the currently mostly dentist-dominant and patient-passive relationships. Patient-centered communication should be trained. Dental students need more practical learning in active listening and patient-centered skills, including the use of role-play, videotaping and, ultimately, real patient training. Medical consultation guides are often unwieldy and impractical in many dental contexts, so a shortened guide is proposed. Communication skills need to be learned and taught with the same rigor as other core dental skills over the entire course of the dental curriculum.
Introduction
Effective communication skills, that is, accurate listening or observation and focused verbal or non-verbal response, enhance the efficiency of diagnoses, ethical clinical decision making and positive clinical outcomes, promote patient use of services, and increase patient-clinician satisfaction [1-3]. They also help to decrease patient anxiety and pain perception [4]. Conversely, poor communication is the most common reason for dissatisfaction with care and promotion of distrust, including malpractice claims [5], and is the most common cause for termination of the relationship [6].
Most of the literature about student learning of clinical communication skills appears in allied health fields, especially for medical and nursing students. This literature is vast compared with the corresponding dental educational literature [7,8]. The role of the dentist or dental student and the context of interactions with patients involve some very special conditions, such as the supine dental chair position and the fact that the patient's mouth is often occupied. These, among others, make this role unique among health care workers. Moreover, unlike modern medical care situations, the patient role is most often passive and the relationship is dentist dominated [9,10]. For this reason, it is important not to extrapolate allied health findings and draw conclusions for dental student communication skills learning. Furthermore, given this persistent dentist-dominated relationship over time, many educators emphasize the need for dental students to increase practical learning in active listening and patient-centered skills [8-11]. Patient-centered communication has shown several advantages in the medical literature, including improved patient recall, compliance and satisfaction [11]. Research on the advantages of patient-centered communication in dentistry is scant in comparison, but includes decreased patient dental anxiety [4,12], decreased perceived operative pain [4] as well as improved oral hygiene and periodontal compliance [13,14].
Finally, dental communication skills have traditionally been taught as one-time didactic coursework, with little or no practical components prior to 2010 [7]. Although this has changed considerably, some dental education experts [7,15,16] have called for more longitudinal teaching that would follow students' clinical development through the course of the dental curriculum.
The following review attempts to synthesize knowledge gained from the literature specifically related to dental student learning of clinical communication skills in order to make recommendations towards optimization. Knowledge synthesis is important in health care research and practice because it can help contextualize and make sense of evidence that might not be obvious to some readers, yet can be influential for, as in this case, educational policy and practice. This review placed relatively few restrictions on inclusion criteria in order to sample essential or underemphasized points in the literature that speak to the unique nature and expected quality of teaching dental clinical communication skills. As described in the paragraph above, several reviews about dental school communication skills training, among others referred to below, seemed to share several conceptual agreements over the time between their respective publications. These temporal reiterations appeared important to their respective authors, but sometimes lacked depth of description contextualized to the daily needs of dental students. Four themes appeared to provide important contexts for optimizing the learning of communication skills by dental students. The review research agenda and search aimed to improve understanding of (1) special characteristics of dentistry relevant to communication skills needs, (2) essential components of dental student learning of communications skills, (3) clinical consultation guides or styles used in communication skills learning and (4) optimal curricular structure for clinical communication learning effectiveness. The hope of this review is to bring into focus these important themes of dental communications skills training that could improve educational policy and practice for optimizing dental student learning in the clinic.
Materials and Methods
In order to make a list of considerations that would be helpful for optimizing communication skills training (CST) in dental education, the review focused on including (1) review studies, (2) articles published in 2010 or later and (3) English-language articles that examined CST components in dental education. The initial search query was run in PubMed. Text terms used in the blanket query were "communication skills +/in dental education". The search retrieved 369 articles. A Cochrane Library search contributed 7 more articles after duplicates were culled out. Since most of the review articles in that batch were published in or before 2019, Google Scholar was used to repeat the same blanket query search for updates from 2020 to November 2021 to supplement the previous searches. Google Scholar also has another convenient feature, its selection choice for "reviews", which was helpful to refine the overall search for the most up-to-date review articles. The Google Scholar search found 10 more potentially relevant articles after duplicates were culled. Thirteen articles were identified as bona fide review articles related to learning or teaching of communication skills in dental education. The reason review articles were the focus of selection from the initial search was to explore themes from an overview perspective and in context, rather than to search out of context for "uniqueness of dental communications", "essential components of dental communication skills training", "clinical consultation guides in communication skills training" or "optimal dental curricular structure for communication skills training". Thus, the presently selected review articles and their references were used to sample the literature that contained contextual information about any of the four themes. Seven of the studies were systematic reviews, one was a systematic scoping review and five were narrative reviews.
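As a simple illustration of the merge-and-deduplicate step applied to records retrieved from PubMed, the Cochrane Library and Google Scholar, the Python sketch below removes duplicates by a normalized title key; the field names and the normalization rule are assumptions, not a description of any specific tool used for this review.

```python
# Illustrative merge-and-deduplicate step for records retrieved from several
# databases (field names and normalization rule are assumptions).
import pandas as pd

def normalize_title(title):
    # Collapse case, punctuation and spacing so near-identical titles match.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def merge_searches(*record_sets):
    """Each record set is a list of dicts with at least 'title' and 'source'."""
    df = pd.DataFrame([r for rs in record_sets for r in rs])
    df["title_key"] = df["title"].map(normalize_title)
    return df.drop_duplicates(subset="title_key").drop(columns="title_key")

pubmed = [{"title": "Communication skills in dental education", "source": "PubMed"}]
cochrane = [{"title": "Communication Skills in Dental Education.", "source": "Cochrane"}]
print(merge_searches(pubmed, cochrane))  # the second record is dropped as a duplicate
```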
The main observations of the included review articles, organized by theme, are summarized below.

Special characteristics of dentistry relevant to communication skills needs
Cheng et al., 2015 ■ Systematic reviews published a decade apart reiterated the need for communication skills learning not only for initial consultations but also for dental intra-operative communication, owing to differences between medical and dental contexts.

Essential components of dental student learning of communication skills
Khalifah & Celenza, 2019 ■ Systematically identified 26 communication skills that fell under four categories: generic skills, case-specific skills, time-specific skills and emerging skills (see Figure 1). Tabled each of the 50 relevant studies for type of communication skills taught, teaching method (e.g., role-play, video supervision, lectures), assessment method and outcomes. Active listening, empathy and professionalism were prominent, indicating a trend toward patient-centered communication.
• Patient-centered care
King & Hoppe, 2013 ■ A narrative review of the medical literature showed considerable evidence supporting positive associations between patient-centered physician communication and positive patient outcomes, such as improved recall, understanding, satisfaction and compliance.
Scambler et al. ■ Systematic review concluded that PCC is about delivering humane care involving good communication and shared decision making. Noted that these concepts were neither assessed empirically nor clearly understood in dental settings. Presented a model of four levels of information and choice provision and/or agreement between clinician and patient.
Mills et al. ■ Systematic review revealed a lack of understanding of PCC, particularly in general dental practice. Reported that current patient outcome measures are inadequate as indicators of patient-centeredness. Qualitative research in special care dentistry on the treatment of phobic or economically disadvantaged patients provided some evidence of good outcomes using PCC for vulnerable patients.
• Other aspects of education of dental students' communication skills
Khalifah & Celenza, 2019 ■ Systematic review showed dental students were positive about actively learning communication skills, regardless of whether role-play or clinical video supervision was used. However, video supervision of actual patient-dentist interactions, especially over one to three concentrated course days, was best for learning optimal communication skills.
Carey et al., 2010 ■ Systematic review indicated that skills are best evaluated during interactions with real patients, thus calling for at least some clinical coursework after initial role-play in earlier coursework.
• Role of clinical instructors
Burkert, 2021 ■ Narrative review reported that the ultimate learning of optimal communication skills requires teachers to be role models, effective supervisors, powerful tutors and supportive persons who use diverse teaching methods with an individual approach to educating their students.
Ayn et al., 2017 ■ Clinical instructors present communication role models with very little institutional control over learning quality; teacher education is required to maximize student learning.

Clinical consultation guides used in communication skills learning
Buduneli, 2020; Khalifah & Celenza, 2019 ■ Reviews identifying literature relevant to learning consultation styles, guides and models that have been used to organize learning of CST: motivational interviewing (MI), the Calgary-Cambridge Guide (C-CG), the Macy Foundation model, the Manitoba model, the Dental Consultation Communications Checklist (DCCC) and the Four-plus-one Habit model (4 + 1HD).
Gillam & Yusuf, 2019 ■ MI developed into a patient-centered communication approach for patients, such as alcoholics and smokers, who wanted to change their behavior. Promotes the use of the six-functions model as well as a strategy with the acronym "OARS", i.e., asking Open-ended questions, providing Affirmations about patient goals, using Reflective listening including "change talk" and discussing Summaries that capture the process in reflection.
Gao et al., 2014 ■ MI outperformed conventional education in improving at least one outcome in four studies on preventing early childhood caries, one study on adherence to dental appointments and two studies on prevention of facial injury after abstinence from illicit drugs and alcohol abuse. MI had a superior effect on oral hygiene in five of seven CBT trials.
King & Hoppe, 2013 ■ The six-functions model as the foundation of all consultation models, in which the goals for medical encounters are (1) fostering the relationship, (2) gathering information, (3) providing information, (4) making decisions, (5) responding to emotions and (6) enabling disease- and treatment-related behavior (help to self-management).

Optimal curricular structure for effective learning of clinical communication skills
Khalifah & Celenza, 2019 ■ Systematic review of curricula proposing that learning of CST is best delivered in a longitudinal curriculum.
Rütterman et al., 2017 ■ Systematic review of German-speaking schools found that 30% of those surveyed pursued a longitudinal curriculum, i.e., CST taught at multiple points over time.
Ayn et al., 2017 ■ Systematic scoping review suggested that CST may be most effective if integrated throughout the curriculum, with effectiveness optimized as students gain clinical experience, as in other clinical disciplines. Students seemed to benefit from increased self-evaluation and reflection on their CST performance. Assessments similar in nature to those of other clinical competencies might improve the perceived importance of CST.
Essential Components of Dental Student Learning of Communication Skills

Khalifah and Celenza systematically identified 26 communication skills that fell under four categories: generic skills, case-specific skills, time-specific skills and emerging skills (see Figure 1) [8]. Generic skills are those to be used at any dental visit and must become natural habits of the dentist. Case-specific skills regard individual cases and situations and vary according to patient and case. Time-specific skills are appropriate at certain times in a consultation. Emerging skills are skills to be applied in distinctive cases with special considerations.
Khalifah and Celenza also tabled each of the 50 relevant studies for the type of communication skills taught, teaching method (e.g., role-play, video supervision, lectures), assessment method and outcomes [8]. From these, we obtain a picture of the specific contexts for learning communication skills in dentistry and can rate them according to quality. Across most of the articles reviewed and tabled by Khalifah and Celenza, "active listening", gathering information, establishing rapport, empathy, professionalism and motivation were predominant [8]. According to Khalifah and Celenza [8], the highest quality studies put the patient's perspective at the center of communication, no matter how short or long the consultation or what needs arose within the context described.
Patient-Centered Care: Active Listening and Empathy
People tend to overestimate their listening comprehension, suggesting that they may not perceive listening as a skill requiring development in the same way that speaking, reading, writing or manual techniques are skills acquired through instruction, effort and time [22]. Active listening is the major skill promoted in patient-centered communication [8,[23][24][25][26][27]. Active listening was initially promoted, among others, by psychologist Carl Rogers [27,28]. Rogers argued that active listening was the most effective way to explore and understand a patient's problems and to help them as well. A common first reaction to the idea of listening as a possible therapy for human problems is that listening is passive and insufficient, and that it does not communicate anything to the speaker. However, by consistently listening and verifying what one hears with a speaker, the listener is conveying the idea: "I'm interested in you and I think that what you have to say is important. I respect your thoughts, and (even if I may not agree with them), I know that they are valid for you. I'm not trying to change you or evaluate you. I just want to understand you."

Active listening promotes empathy with the speaker, which promotes positive outcomes [11,24]. Active listening involves learning to work with both non-verbal and verbal communication in order to "mirror" for a patient, that is, to verify meaning by summarizing and reformulating statements for clearer mutual understanding. Active listening is a "healthy combination" of critical listening, reflective listening and passive listening. Active listeners are critical in trying to interpret a message and evaluate the speaker's emotions and non-verbal cues; reflective listening helps the speaker to "feel heard"; and silence and pauses in passive listening signal to the speaker that there is uninterrupted time for them to communicate their message [29]. Active listening is used both in initial consultations and during communication in active dental treatment. Adopting active listening as a central element in a communication skills curriculum is not only essential to optimizing one-on-one communication between the student dentist and patients, but it also signals a philosophical change from doctor-centered to patient-centered consultations and treatment [11,15].
Patient-centered care (PCC) has grown out of observations that active listening and empathy maximize clinical communication and health outcomes, and it was first officially espoused by the Institute of Medicine in 2001 [11]. King and Hoppe's 2013 extensive review of patient-centered care indicated a consensus about what constitutes "best practice" for communication in clinical encounters, the so-called "six-functions model" described above, which pervades all of the consultation models and styles described below. King and Hoppe [11] surmised that there was abundant evidence in the medical literature supporting the importance of patient-centered communication skills as a dimension of physician competence. They cited evidence of positive outcomes in patient recall, understanding, satisfaction and adherence to therapy [11]. King and Hoppe stated that "efforts to enhance teaching of communication skills to medical trainees would likely require significant changes in instruction at undergraduate and graduate levels, as well as changes in assessing the developing communication skills of physicians." An added critical dimension is faculty understanding of the importance of communication skills, and their commitment to helping trainees develop those skills [11].
In a systematic review of PCC in dentistry, Scambler et al. [10] concluded that it is about delivering humane care involving good communication and shared decision-making. However, they noted there was no evidence in the dental literature that the concept is either clearly understood or empirically and systematically assessed in dental settings. They presented a model of four levels of information and choice provision and/or agreement between dentist and patient. Level 1 is one-way information from the dentist. Level 2 is when the patient makes an informed choice among offered treatment options. Level 3 is when patients are given the tools to make the choice themselves, under advisement. Finally, Level 4 is when the patient is in full control of their care and capable of making informed choices about what they do or do not wish to achieve.
The model does not assume that all patients would want, or be happy with, a Level 4 approach; that is, Level 4 may not be every patient's hierarchical endpoint.
Another systematic review, by Mills et al. [9], revealed a lack of understanding of PCC in dentistry, and in general dental practice in particular. Mills et al. [9] pointed to a poor evidence base and no support for the use of then-current patient-reported outcome measures as indicators of patient-centeredness. However, Mills et al. did find that qualitative research in special care dentistry about the treatment of phobic and economically disadvantaged patients provided some evidence of good outcomes using patient-centered communication for these vulnerable populations.
In summary, unlike the medical literature, the dental literature on patient-centered care and communication is scant. This possibly reflects the values of dental education and of the profession as a whole, and a reluctance to adopt what has proven, in the medical profession and in society in general, to be a developmental paradigm shift.
Role-Play vs. Clinical Video Projects
In their seminal review of teaching dental students communication skills, Khalifah and Celenza expressed disappointment that CST was still being taught using passive lecture techniques and that assessment was by written examination [8]. It is generally agreed that these teaching formats limit student learning and create difficulties in assessing practical communication skills [7,8,30]. Khalifah and Celenza [8] found that role-playing, patient interviewing and clinical observation were used in CST, especially in the clinical years of dental study. Two of the studies they cited found these methods useful in distinguishing effective from ineffective communication skills, and dental students showed positive attitudes toward actively learning skills, regardless of whether role-play or clinical video supervision was used [31,32].
However, the Carey et al. review of dental communication skills training [7] indicated that skills are best evaluated during interactions with real patients, thus calling for at least some clinical coursework after initial role-play in earlier coursework. In contrast to scripted role-play, the nuances of live patient interactions may provide a more realistic sense of clinical situations and are thus naturally more motivating for student learning [7]. Khalifah and Celenza also assessed that video supervision of actual patient-dentist interactions, especially delivered in one to three concentrated course days, was best for learning optimal communication skills [8]. Currently, several video platforms can be used for secure storage of video clips obtained during students' interactions with patients, given proper written consent. These are also excellent resources for case presentations in plenum, allowing teachers and students to discuss the many clinical situations filmed within the learning group.
The Role of Clinical Instructors in Communication Skills Learning
Burkert's review [33] reported that the ultimate learning of optimal clinical communication skills requires teachers to be "good role models, effective supervisors, powerful tutors and supportive persons who use dynamic and diverse teaching methods and have an individual approach to educating their students." Ayn et al. [15] also underscored the role of the clinical instructor in modeling clinical communication skills, a form of learning over whose quality the educational institution has very little control. If students are taught by instructors with poor communication skills, there is a greater chance that the students will learn less optimal communication skills. Ayn et al. [15] described it as follows: "Investing in faculty as well as student communication skill development provides an opportunity for positive role-modeling and patient-focused rapport between students and instructors." A major takeaway from the review was that instructors, as well as students, need to value good communication skills and be willing to adapt to patient-centered communication.
Clinical Consultation Guides and Styles Used in Communication Skills Learning
The literature on structural checklists or guides for health care consultations agrees on a central premise: facilitating patient-centered communication. However, from the start there is some tension, since the overwhelming majority of the literature on structural guides for clinical student learning comes from health science education outside dentistry; there is not nearly as much literature on dental consultation formats and styles. The descriptions under the first theme at the beginning of this review, regarding the special characteristics of the dental clinical environment and its special clinical communication conditions, fundamentally point to a difference between medical and dental consultations and to the need for additional emphasis on intra-operative communication in dentistry. The intention of the descriptions below is to provide dental students with a helpful, relevant guide for learning the case-wise temporal structure of longer patient consultations. Hopefully, the following narrative will provide some needed synthesis about this theme and clarify directions for teaching dental students these skills.
According to the literature, there are two main types of patient-centered consultation strategies that have been developed as guidelines for interviewing patients for diagnostics and/or health promotion: motivational interviewing (MI) [34][35][36] and the Calgary-Cambridge Guide (C-CG) [37,38] or similar scales such as the Macy Foundation or Manitoba models [8,13,19]. All consultation guides or styles require information gathering, active listening, empathy and relationship building as a premise, fundamentally similar to the "six functions" described above [11]. They provide structure for learning how to conduct systematic interviews, mostly in relation to anamnestic consultations, in health promotion, or as needed in specific cases of clinical communication about behavioral or medical problems. They differentiate specific communication skills (e.g., being attentive, using appropriate language, tending to patients' comfort), while also incorporating, to a lesser or greater degree, assessment of communication skills and particularly students' self-assessment, since they tend to be checklists.
Motivational Interviewing Consultation Style
Motivational interviewing (MI) was perhaps the earliest consultation model strategy. It came about from a desire of health professionals to support patients in achieving desired behavior change in areas of special need (e.g., decreased smoking, increased oral hygiene) [14,34,39]. MI is defined as "a person-centered counseling style for addressing the common problem of ambivalence about change" [36]. MI originally arose from the efforts of Miller and colleagues to help patients with risky alcohol behavior [35,36]. The interview structure assumes initiation of rapport building, discussion of a patient's motivations, information sharing and the use of Rogers' humanistic psychology techniques [34]. Miller and others [34,36] described the strategy as identifiable by the acronym "OARS", i.e., asking Open-ended questions, providing Affirmations (positive feedback), using Reflective listening, and discussing Summaries. These are described in detail by Miller and others [34,36] as follows.
• Open-ended questioning-In contrast to closed questions, which generally require a simple yes/no or numeric answer, open questions do not direct a patient to respond in a particular manner. Instead, they enable a patient to think through and provide richer, fuller responses. The conversation should be started with words such as how or what or describe so that the patient does most of the talking.
• Affirmations-Sincere affirmations can help build a stronger relationship with a patient. These are the statements and gestures of health care workers that help the patient to recognize strength and acknowledge behaviors that lead to positive change, regardless of how big or small.
• Reflective listening-Akin to active listening, this demonstrates that the clinician has accurately heard and understood a patient's communication. This relational empathy encourages further exploration of problems and feelings and encourages "change talk".
• Summaries-Summaries reinforce what has been said. Summary statements include trying to obtain the full picture of a patient's behavior and checking with the patient to make sure that they feel the health care professional has reflected their situation accurately. Summarizing helps in integrating the communication that has occurred between the patient and provider.
Macy Model of Doctor-Patient Communication
The Macy Foundation model identifies seven parts of the medical encounter (preparing, opening, gathering information, eliciting patient perspectives, educating patients, agreeing on treatment plans and closing the interview) and focuses on relationship-building skills and interview-managing skills [8]. The Manitoba model is conceptually and structurally similar to the Macy model [8].
Calgary-Cambridge Guide (C-CG) to Health Consultations
The Calgary-Cambridge Guide [37] has often been used for training mostly medical and nursing students in the essential steps for communication in medical consultation with patients. It is used in some dental educational programs as well. Unlike MI, the C-CG is not necessarily seen as a tool for promoting motivation and behavioral change in patients' health behaviors. It is mostly seen as a checklist approach to assure maximum patient contact and useable information in making diagnoses and treatment decisions. Depending on the version, the C-CG has been a 59 to 71 item checklist that can be scored by an observer to assess student behavior in a consultation on a 0-2 Likert scale [40]. The C-CG has been described as unwieldy given the number of items [40]. Several attempts have been made to economize it. A potentially useful and significantly shortened measure was recently adapted for learning/assessing clinical consultation skills in students at a multiple health care provider setting using a 12-item observation scheme (Observation Scheme 12, OS-12) [40]. OS-12 condenses the Calgary-Cambridge Guide to 12 action parameters assessed on a 0-4 scale (48 max.) and, if more detail is needed, provides detail of micro-skills. The items, in combination or singly, are more concise than the C-CG but correspond to C-CG domains (Figure 2), as shown in Table 2 below.
Skill Level Representing Multiple Micro-Skills / Corresponding C-CG Domains:
(1) Identifies problems the patient wishes to address. - Initiating the session
(2) Clarifies patient's prior knowledge and desire for information. - Building a relationship
(5) Provides support: expresses concern and willingness to help. - Building a relationship
(6) Structures the interview in logical sequence. - Providing structure
(7) Attends to passage of time and keeps the interview on track. - Providing structure
(8) Shares thoughts and reflections with the patient. - Explanation and planning
There is also a dental version inspired by the C-CG, the Dental Consultation Communications Checklist (DCCC) [41][42][43]. The original version was formulated and tested on 43 third year English dental students by Theaker et al. in 2000 [43]. It was then validated on 204 Iranian clinical dental students in 2015 [41]. This original version had 31 items, with the patient also asked for an assessment in three of the items. A 27-item version of the DCCC was tested in 2013 [42] and validated for assessment of an experimental intervention group versus a control group of Indian dental students (see Figure 3). Sangappa et al. [44] more recently created another 40-item version that was modified and factor analyzed as both an interview guide and an assessment tool. The authors of these studies suggest that the DCCC is appropriate for use in guiding consultations, feasible for routine use as an assessment tool for dental students, and reliable [41,43,44]. Although not as complicated or lengthy as the C-CG, even the DCCC might be unnecessarily tedious for routine dental student use and seems unwieldy in intra-operative consultations.
The "Four Habits" Model Plus One Extra Good Dentist Habit
Torper et al. [20] present a consultation model for dentistry that they propose is "applicable for most types of visits, patients and problems". The original medical Four Habits model (4H), as described by Frankel and Stein [45], was modified by Torper et al. [20] to the specific structure and content of dental visits. The 4H model proposes four phases: (1) invest in the beginning (relationship building), (2) elicit the patient's perspective, (3) demonstrate empathy and (4) invest in the end (successful closure). It is thus generally similar to the C-CG. Facilitate Perceived Control (FPC) was added to the model by Torper et al. because of its crucial importance in dental visits, so Torper et al. named the model the "Four-plus-one Habit model for Dental Visits (4 + 1HD)" [20]. The model was intended to become a flexible framework for communication skills training at all levels of dental education, given its simplicity [20]. It draws on Torper et al. [20], the "six functions" [11] and the Calgary-Cambridge Guide and welds them into a model of pre-clinical consultation and intra-operative communication. However, even though this model better addresses the context of dental clinical consultations, it still has a checklist of up to 27 habit components and seems nearly as tedious as the DCCC.
Given the need described above for a concise yet flexible guide for training dental student consultations, the following suggestion is made with the special communication needs of the dental clinic in mind. In their systematic review, Khalifah and Celenza [8] differentiated generic skills from case- or time-specific skills. However, all of the models or guides described above mix these skill sets as if they belong to the same dimension. Logically, generic skills such as active listening, empathy, professionalism and patient-centered communication should be precursors to learning the structure and timing of communications for initial consultations. Generic skills are relevant both to initial consultations and to more sporadic intra-operative communication. In other words, use of case- and time-specific guides that help students structure their communications with patients in a simplified manner should follow after they have some competency in the basic generic skills that apply to all forms of clinical communication. Generic skills often require the most time to learn, given their qualitative nature [8]. If learning generic skills were assumed to be a necessary precursor for the case- and time-specific structural guide of the DCCC, and thus were removed from the list in Figure 3, then the consultation guide would resemble the shortened list in Table 3 below. This would be a new streamlined but effective consultation guide that assumes that generic skills are learned earlier. Assessment could be a tally of 15 items on a 0-4 scale (60 max.), similar to the OS-12 for students of allied health care. Table 3. A Select Dental Consultation Communications Checklist as a case- and time-specific structural guide that does not include generic skills, such as active listening and other PCC skills.
Action / Corresponding Domains:
(1) Greet the patient. - Investing in relationship
(2) Introduce yourself and that you are prepared to listen. - Investing in relationship
(3) Ask patient to explain reason for visit from own perspective. - Investing in relationship
(4) Explain what will happen during the visit; no jargon. - Investing in relationship
(5) Be ready to reformulate questions with patient confusion. - Gathering information
(6) Handle personal questions sensitively: What has patient heard? - Gathering information
(7) Explain what you want to do and why before you do it.
(10) Reassure patient if necessary and use X-rays, other aids. - Examination/Explanation/Planning
(11) Negotiate a mutual plan of action and next steps. - Examination/Explanation/Planning
(13) Point out that conversation is coming to an end. - Investing in closure
(14) Summarize session briefly; invite further questions/concerns. - Investing in closure
(15) Explain what will happen next; make new appointment. - Investing in closure
In summary, clinical consultation guidance schemes for dental students need to fit with the assumptions for dental clinical situations. Based on analysis of the consultation guide literature, recommendations described above are (1) two phase learning in CST for dental students, with focus on learning patient-centered generic skills in the first phase and (2) an efficient and effective guidance and structural plan for the chronological order of longer consultations in dentistry in the second phase.
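Because the proposed checklist is scored as a simple tally, the following minimal sketch in Python illustrates how the 15 items of Table 3 above, each rated 0-4 by an observer, would be summed to a maximum of 60, analogous to the OS-12 scoring. The function name and the example ratings are hypothetical illustrations, not part of any published instrument.

```python
# Minimal sketch: tallying the proposed 15-item dental consultation checklist.
# Each item is rated by an observer on a 0-4 scale, for a maximum of 60 points.
# The ratings below are hypothetical examples, not data from any study.

def checklist_total(item_scores):
    """Return the total score for 15 observer ratings, each on a 0-4 scale."""
    if len(item_scores) != 15:
        raise ValueError("Expected ratings for all 15 checklist items")
    if any(not (0 <= s <= 4) for s in item_scores):
        raise ValueError("Each item must be rated on a 0-4 scale")
    return sum(item_scores)

# Hypothetical observation of one student consultation
ratings = [3, 4, 2, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3, 3, 2]
print(f"Total: {checklist_total(ratings)}/60")  # Total: 44/60
```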
Optimal Curricular Structure for Clinical Communication Learning Effectiveness
Longitudinal vs. One-Time Learning
Longitudinal learning is almost taken for granted in educating dental students about caries/restorative dentistry, periodontology, prosthodontics and oral physiology. However, in dental educational programs for communication, there is limited progressive development during the curriculum in most cases [8,15]. The premise about superiority of a longitudinal communications curriculum versus one-time learning was also a conclusion drawn from the systematic review by Carey et al. [7]. Later Ayn et al., 2017 [15] and Khalifah and Celenza in 2019 [8] emphasized that dental students should start receiving communication training in the pre-clinical years, continuing into their clinical years and be evaluated during interactions with real patients in order to provide maximum competence.
A curricular example of teaching communication skills was developed by Ayn et al. [15] after they compiled their review from the dental education literature and other higher education experience. Inspired by the review, the suggested model curriculum reemphasized that knowledge learned during the earlier, generally preclinical stages of dental education should be the foundation upon which both experiential and lecture-based learning strategies are built throughout students' clinical learning experiences. A similarly structured timeline for communication skills in medical education, described by Deveugele et al. [46], was shown to have the benefits of early detection and correction of communication skill issues and improved retention, while providing a greater understanding of the importance of communication skills in the patient-professional relationship. The earliest possible experiences in the dental clinic should also improve the motivation of young dental students for optimal communication with patients and personnel [47]. Ayn et al. [15] believe that this longitudinal learning approach should also be the mainstay of learning and becoming competent in communication skills in dental education.
Studies of German-speaking dental schools [16,48] have indicated that a longitudinal curricular approach to dental student communication skills is rated best in both teacher and student evaluations [16]. However, as of 2016, only 18 of the 34 German-speaking schools had implemented a fully or partially longitudinal curriculum, while the other sites offered only standalone courses [16]. Of the 34 dental schools, only six assessed communication skills in a summative way. Three of those schools also used formative assessments for their students.
It was apparent in the literature that there is a need to assess dental students' skills within a longitudinal communication skills curriculum. Although this review does not focus on assessment as a theme, the importance of competence assessment begs the question: "Did dental students learn what they needed to learn to improve their clinical communication skills?" Two of the tabled reviews [13,33] cited Miller [49], who proposed a pyramid framework for assessing clinical competence, which also applies to the learning of communication skills in dental education (see Figure 4). At the lowest level of the pyramid is knowledge (what a student knows), followed by competence (when a student knows how to use their knowledge), performance (when a student shows a teacher how they applied their knowledge) and action (when a student implements the practiced knowledge without tutelage). So, the ultimate test of clinical competence is that the knowledge is used in practice rather than what happens in artificial testing situations. Practical methods of assessment target this highest level of the pyramid. Miller [49] posited that other common methods of assessment, such as multiple choice questions, simulation tests and objective structured clinical examinations (OSCEs), target the lower levels of the pyramid. Thus, this pyramid model assumes that actual practice is a much better reflection of routine performance than assessments done under academic testing [13,33], which would also be the long-term goal of a longitudinal curricular program in CST.
Conclusions and Recommendations
The following conclusions and recommendations can be drawn from this study: (1) The work environment for dentists is unique in that communication occurs most often while the patient is sitting passively in the dental chair, which for some can be symbolic of vulnerability. Often the patient cannot talk, since their mouth is open for dental treatment. It is important that, regardless of whether there is an initial consultation or whether treatment needs to be interrupted for a consultation, the basis for patient communication at dental teaching clinics should be a patient-centered approach. It should emphasize high-quality dialogue based on the generic skills of active listening, empathy and mirroring of patient perceptions.
(2) Differentiation of skill sets required in the dental clinical environment and appropriate adjustment to needs for generic skills, case-specific skills, time-specific skills and emerging or advanced skills would be best fine-tuned with the aid of video clip supervision both in dentist-patient role-play in pre-clinical years and with patients in clinical years.
(3) Motivational interviewing as a consultation guide is optimally useful in health promotion and appreciative inquiry approaches to oral health behavioral change interventions.
(4) The basic protocol for systematic initial anamnestic dental consultations should start with a modified Calgary-Cambridge-like Guide, such as the Dental Consultation Communication Checklist. However, learning in dental communication skills should not be limited to initial anamnestic consultations, which also requires rethinking consultations during the treatment phase much as the Four-plus-one Habit model suggests. A DCCC short form was proposed for learning initial consultation structure and timing in dental educational CST. The short form assumes prior learning and competence in generic skills such as active listening, empathy, rapport building and motivation.
(5) Longitudinal learning should be a strategic goal in communication skills education, in which learning progresses with time and experience over the curriculum along with students' general clinical experience levels. Communication skills need to be taught and learned with the same rigor as other core dental skills.
"year": 2022,
"sha1": "ba315754dcc3f0cccefe9b94341af3be95436f93",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-6767/10/4/57/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da5a5ec89f32fef4b7f72295687568293b4eb78b",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
Extranodal primary non hodgkin lymphoma of breast: Multimodal approach to diagnosis
Primary non-Hodgkin's lymphoma of the breast is rare. Primary lymphomas of the breast tend to be bilateral in the younger age group and unilateral in the older age group. We report a rare case of primary non-Hodgkin's lymphoma (NHL), diffuse large B cell type, of the right breast, diagnosed with a multimodal approach. Fine needle aspiration cytology showed features of undifferentiated carcinoma, whereas histopathology, the gold standard method for diagnosis, showed features of an aggressive malignant diffuse large cell lymphoma. Immunohistochemistry positive for CD20, MUM1 and Bcl6 supported the histopathological diagnosis and, by subtyping, pointed towards the poor prognosis of the malignancy. We compare the incidence of NHL of the breast at our institute with the available literature.
Introduction
Primary breast lymphoma (PBL) is very rare, with an incidence of 0.04 to 0.5% of all breast malignancies; it accounts for 0.38 to 0.7% of all non-Hodgkin lymphomas (NHL) and between 1.7 and 2.2% of all extranodal NHL. 1,2 PBL and breast carcinoma show similar clinical and radiological features. Females are most commonly affected, presenting with a painless lump. We report a rare case of recurrent primary breast lymphoma and discuss its diagnosis and its incidence at our institute in comparison with the available literature.
Case History
A 52 year old female presented with a progressively increasing painless lump in the right breast of 3-4 months' duration. Clinically the lump was 5x5 cm and firm to hard in consistency. The overlying skin was normal and there was no discharge from the nipple. Fine needle aspiration cytology (FNAC) from the breast mass showed high cellularity of malignant cells (Fig. 1) and a diagnosis of undifferentiated carcinoma was rendered; Modified Radical Mastectomy (MRM) was done. On gross examination, a right MRM specimen measuring 18x6x4 cm with a skin tag measuring 13x3 cm was received (Fig. 2 A & B). The tumour involved all four quadrants, measuring 5x4 cm. The cut section revealed a solitary irregular mass, fleshy and whitish in appearance. The axillary tail showed 14 lymph nodes, the largest measuring 3x2 cm. Microscopy revealed a monotonous population of cells with pleomorphic, hyperchromatic, cleaved to vesicular nuclei, prominent nucleoli and scanty eosinophilic cytoplasm.
Focal areas showed atypical mitoses, 6-8/hpf. A few areas showed fibrous septae separating the tumour, which infiltrated the surrounding fat. On histopathology the diagnosis of PBL was made (Fig. 2 C&D), with involvement of one lymph node. On immunohistochemistry (IHC), the tumour cells were immunopositive for CD20, MUM1 and Bcl6 and immunonegative for CD3, CD10 and CD138. The Ki67 proliferative index was >60% (Fig. 4). A diagnosis of diffuse large B cell lymphoma (DLBL), activated B-cell like, high grade (Hans algorithm) was confirmed. Clinical and CT imaging evaluation did not reveal any other mass/neoplasm in the body. The patient received chemo- and radiotherapy at an outside hospital. Despite the treatment, after 6 months the patient presented at our institute with a hard lump in the breast, and on FNAC monomorphic malignant cells were seen. The patient again received chemo- and radiotherapy and, for the second time after mastectomy, developed a swelling over the right breast region. On FNAC there were cells of malignant lymphoma, and a few cells showed regressive changes due to radiotherapy.
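For readers unfamiliar with the Hans algorithm referenced above, the following sketch outlines the published immunohistochemistry decision rule (Hans et al.) for assigning DLBL cell of origin from CD10, Bcl6 and MUM1 staining. The function name and input format are illustrative only, and marker positivity is conventionally defined as staining in at least 30% of tumour cells.

```python
# Sketch of the Hans immunohistochemistry algorithm for DLBL cell-of-origin
# subtyping (germinal center B-cell [GCB] vs non-GCB/activated B-cell like).
# Function name and boolean inputs are illustrative, not from the case report.

def hans_cell_of_origin(cd10_positive, bcl6_positive, mum1_positive):
    """Classify DLBL as 'GCB' or 'non-GCB (ABC-like)' from IHC marker status."""
    if cd10_positive:
        return "GCB"
    if not bcl6_positive:
        return "non-GCB (ABC-like)"
    # CD10 negative and Bcl6 positive: MUM1 status decides
    return "non-GCB (ABC-like)" if mum1_positive else "GCB"

# The present case: CD10 negative, Bcl6 positive, MUM1 positive
print(hans_cell_of_origin(cd10_positive=False, bcl6_positive=True, mum1_positive=True))
# -> non-GCB (ABC-like), consistent with the reported activated B-cell like subtype
```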
Discussion
PBL is a malignant lymphoma that primarily occurs in the breast in the absence of previously detected lymphoma. In a retrospective five-year analysis we encountered 399 (71.12%) cases of benign breast lesions and 162 (28.88%) cases of malignant lesions. Out of these 162 malignant tumours, only one case of PBL (0.6%) was observed; this is in concurrence with the data put forward by Wendy Jeanneret-Sozzi et al 1 and Enver Vardar et al 2 in their articles. PBL is rare because the breast contains very little lymphoid tissue compared with other organs such as the lungs and intestine, where lymphomas are more common. 2 In Roberto Giardini's study of 33 cases of PBL, it was found that the right breast (51.5%) was more often involved than the left (42.4%). 5 Wiseman and Liao in 1972 reported that a diagnosis of PBL must satisfy the following criteria: adequate pathological evaluation, close association of mammary tissue and lymphomatous infiltrate, and exclusion of concurrent widespread lymphoma or previous extramammary lymphoma. 4 Clinical examination and radiological features, along with screening mammography, do not provide any specific characteristic for the diagnosis of PBL. 5 Diagnosis of PBL is made exclusively by aspiration cytology or breast excision. Histopathology and IHC are helpful in differentiating PBL from other malignancies. IHC using CD20, MUM1, Bcl6, CD3, CD10, CD138 and Ki67 antibody markers provides evidence for the histopathological diagnosis and further helps in subtyping of the lesion. The studies of Roberto Giardini et al, 3 Ho Jong Jeon et al 6 and Mu-Tai Liu et al 7 showed DLBL to be the most common histologic type of PBL. Gene expression profiling described in 2000 identified two prognostic types on the basis of cell of origin: germinal center B-cell (GCB) and activated B cell (ABC) like. Recent studies showed that the GCB profile predicts better survival than the ABC-like profile. 8 The management of PBL is based on histological grade. Cytohistological evaluation with IHC offers a better guide for therapy. Our study illustrates the multimodal approach to diagnosis and the differentiation of undifferentiated carcinoma and lobular carcinoma from PBL.
Conclusion
We report a rare case of recurrent PBL (DLBL type) involving the right breast with a single axillary lymph node metastasis. Our case fulfilled Wiseman and Liao's criteria for PBL. The diagnosis of PBL should always be confirmed by histopathology and subtyping by IHC. Further studies of PBL in the Indian population are necessary to improve understanding of this disease.
"year": 2020,
"sha1": "230ddf30fd31aa9c3eee64447acdc446ef96d94f",
"oa_license": null,
"oa_url": "https://doi.org/10.18231/2394-6792.2018.0066",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9d9badac14b64d094df9a9f4b564f6278dd2ccec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Effects of osteopathic manipulative treatment and bio-electromagnetic energy regulation therapy on lower back pain
Context: Lower back pain (LBP) is prevalent and is a leading contributor to disease burden worldwide. Osteopathic manipulative treatment (OMT) can alleviate alterations in the body that lead to musculoskeletal disorders such as LBP. Bio-electromagnetic Energy Regulation (BEMER; BEMER International AG), which has also been shown to relieve musculoskeletal pain, is a therapeutic modality that deploys a biorhythmically defined stimulus through a pulsed electromagnetic field (PEMF). Therefore, it is possible that combined OMT and BEMER therapy could reduce low back pain in adults more than either treatment modality alone. Objectives: To investigate the individual and combined effects of OMT and BEMER therapy on LBP in adults. Methods: Employees and students at a medical college were recruited to this study by email. Participants were included if they self-reported chronic LBP of 3 months' duration or longer; participants were excluded if they were experiencing acute LBP of 2 weeks' duration or less, were currently being treated for LBP, were pregnant, or had a known medical history of several conditions. Ultimately, 40 participants were randomly assigned to four treatment groups: OMT only, BEMER only, OMT+BEMER, or control (light touch and sham). Treatments were given regularly over a 3 week period. Data on LBP and quality of life were gathered through the Visual Analog Scale (VAS), Short Form 12 item (SF-12) health survey, and Oswestry Low Back Pain Questionnaire/Oswestry Disability Index prior to treatment and immediately after the 3 week intervention protocol. One-way analysis of variance (ANOVA) was performed retrospectively and absolute changes for each participant were calculated. Normal distribution and equal variances were confirmed by Shapiro-Wilk test (p>0.05) and Brown-Forsythe, respectively. Significance was set at p<0.05. Results: Despite a lack of statistical significance between groups, subjective reports of pain on the VAS showed a substantial mean percentage decrease (50.8%) from baseline in the OMT+BEMER group, compared with a 10.2% decrease in the OMT-only and 9.8% in the BEMER-only groups when comparing the difference in VAS ratings from preintervention to postintervention. Participants also reported improvements in quality of life assessed on the Oswestry Low Back Pain Questionnaire/Oswestry Disability Index, with the OMT+BEMER group showing a decrease of 30.3% in score, the largest among all groups. The OMT+BEMER group also reported the greatest improvement in the physical component score of the SF-12, with an increase of 21.8%. Conclusions: The initial data from this study show a potential additive effect of combination therapy (OMT and BEMER) for management of LBP, though the results did not achieve statistical significance.
Low back pain (LBP) has a relatively high incidence and prevalence [1], affects people of all ages, and is a leading contributor to socioeconomic burden [2]. Chronic LBP is defined as occurring for 3 months or more, whereas acute LBP is defined as occurring for 2 weeks or less [3]. Individuals affected by LBP may experience progressive physical discomfort and may suffer psychological effects for several months; a proportion may remain severely disabled [3,4]. The etiology of LBP is varied, from visceral causes to a lack of adequate blood flow to the muscles or musculoskeletal imbalance [5]. Inadequate blood supply and/or inadequate oxygen consumption can lead to fatigue and muscle pain [6][7][8]. For many people experiencing LBP, it is not possible to identify a specific nociceptive cause. Those suffering from LBP may recover but recurrence is common, and LBP often becomes persistent and disabling [9]. Despite the available management options, this back related disability and the number of individuals affected by it have increased steadily [5,10]. Recent changes to key recommendations in national clinical practice guidelines now emphasize self management, physical and psychological therapies, and some forms of complementary medicine (such as spinal manipulation, massage, or acupuncture), with less emphasis placed on pharmacological and surgical treatments [11].
Even with multiple clinical guidelines providing similar recommendations for managing LBP, a substantial gap between evidence and practice exists [12][13][14][15]. Osteopathic manipulative treatment (OMT) is a distinctive modality used by osteopathic physicians to complement conventional management of LBP. A previous randomized, controlled trial [14] of 455 patients found that OMT significantly improved patient outcomes and functionality compared with OMT sham treatment (p<0.001) and decreased the need for prescription analgesics (p=0.048). Franke et al. [16], in a meta-analysis of six studies (n=769 patients), reported clinically relevant effects of OMT for reducing pain (mean difference, −14.93; 95% confidence interval [CI], −25.18 to −4.68) and improving functional status (standard mean difference, −0.32; 95% CI, −0.58 to −0.07) in patients with low back pain. These studies paved the way for updated guidelines from the American Osteopathic Association in 2016, which stated, "The AOA believes that patients with low back pain should be treated with OMT given the high level of evidence that supports its efficacy." [17] An alternative approach for the treatment of LBP is the use of Bio-electromagnetic Energy Regulation (BEMER) therapy (BEMER International AG). BEMER is a therapeutic modality that deploys a biorhythmically defined stimulus through a pulsed electromagnetic field (PEMF). This stimulus has a targeted effect on the microvasculature, and the primary effect is an improvement in tissue microcirculation [18,19]. The positive effects of vasomotion on the microcirculation have been shown to result in significant increases in arteriovenous oxygen difference, the number of open capillaries, arteriolar and venular flow volume, and flow rate of red blood cells in the microvasculature [20,21]. Several studies have reported promising outcomes in musculoskeletal pain management with the use of BEMER therapy [7,[22][23][24]. Furthermore, a systematic review [25] of six randomized, controlled trials investigating whether PEMF was effective in low back pain showed that it resulted in a decrease in pain intensity and improved functionality for patients suffering from LBP. The reduction in pain intensity from baseline to the end point in that study ranged from 2.1 to 6.4 points out of 10 on the visual analog scale [25].
OMT has been shown to be effective in the treatment of LBP [14,16]. All osteopathic techniques use the concept of fascial connectivity throughout the body and help increase circulation and lymphatic flow [26,27]. As discussed previously, BEMER therapy has been shown to increase microcirculation. Moreover, BEMER with physiotherapy showed reductions in pain and fatigue in patients with chronic low back pain [18]. The results from these studies suggest that a combination of OMT and BEMER therapy could potentially help increase circulation to myofascial structures that influence low back restriction and pain. The overarching goal of this study was to assess the individual and combined effects of OMT and BEMER therapy in patients with chronic LBP.
Methods
This study was approved by the Institutional Review Board at Lake Erie College of Osteopathic Medicine. Prior to the start of the study, written and informed consent was obtained from all research participants. The authors did not prospectively submit this study to a clinical trial registry, but it was registered post hoc at ClinicalTrials.gov (NCT04704375).
Subjects were compensated for their time with a $50 gift card to a local grocery store.
Study participants
To investigate the individual and combined effects of OMT and BEMER therapy on LBP, participants were recruited and randomly placed into one of four treatment arms. A standardized recruitment email was sent to approximately 400 employees and students at Lake Erie College of Osteopathic Medicine. A total of 77 volunteers responded by emailing the lead student researcher, were recorded in an Excel spreadsheet, and were screened for eligibility by student researchers (K.A., G.S., K.C.) based on inclusion and exclusion criteria at the time when informed consent was obtained in hard copy. Participants were included if they self-reported chronic LBP 3 months or longer in duration [3]. Participants were excluded if they were experiencing acute LBP (2 weeks or less in duration), were currently undergoing treatment for LBP, were pregnant, or had a known medical history of psychiatric conditions, skin disorders, myositis, neurological symptoms in the lower extremities, cancer, bone fracture, deep vein thrombosis, osteopenia, a body mass index greater than 30, or autoimmune disorder. Of the 77 respondents, 75 met inclusion criteria; two volunteers were excluded due to having acute LBP at the time of screening. Each of the 75 participants were assigned a unique number, and using the Microsoft Excel RAND function, a random number generator was used to select 40 participants to be involved in the study, because the authors wanted to establish four groups of 10 subjects each. Two participants dropped out during the first week due to scheduling conflicts. These two participants were replaced with participants from the original pool of screened volunteers ( Figure 1). Random number generation using Microsoft Excel RAND and RANK functions was used to randomly place 10 participants in each of the four treatment groups: OMT only, BEMER only, OMT and BEMER (OMT+BEMER), or sham (light touch) therapy. The participants in each group were unaware of other treatment arms.
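As a rough illustration of the randomization step described above (performed in the study with Excel RAND and RANK functions), the following Python sketch selects 40 participants from the screened pool and allocates them to the four groups. The participant identifiers and seed value are hypothetical.

```python
# Illustrative sketch of the random selection and group allocation described
# above, using Python instead of the Excel RAND/RANK functions from the study.
# Participant identifiers and the seed value are hypothetical.

import random

screened = [f"participant_{i:02d}" for i in range(1, 76)]  # 75 eligible volunteers

random.seed(2019)                        # hypothetical seed, for reproducibility only
selected = random.sample(screened, 40)   # draw the 40 study participants
random.shuffle(selected)

group_names = ["OMT only", "BEMER only", "OMT+BEMER", "Control (light touch/sham)"]
groups = {name: selected[i * 10:(i + 1) * 10] for i, name in enumerate(group_names)}

for name, members in groups.items():
    print(name, len(members))            # each group receives 10 participants
```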
Treatment groups
The intervention protocol was 3 weeks long and the data collection ranged from February-September 2019, because the OMT and OMT+BEMER groups were treated between February and May, while the BEMER and control groups were treated between July and September. Participants in the OMT group received three treatments per week, participants in the BEMER group received five treatments per week, participants in the OMT+BEMER group received five BEMER and three OMT treatments per week, and participants in the control group received light touch and sham BEMER treatments at the same intervals. Participants who did not adhere to the full treatment regimen or experienced any adverse effects from treatment would have been removed from the study, but none were lost for those reasons. As noted earlier, two of the 40 participants were unable to complete the protocol and dropped out week one of treatment. They were replaced with two participants who had already been screened, met inclusion criteria, and signed informed consent. No adverse events were reported.
Assessment and treatment protocols
A standardized osteopathic assessment and treatment protocol was developed based on common dysfunctions associated with chronic low back pain [28]. This standardized assessment and treatment protocol was used to ensure consistency across participants, as all somatic dysfunctions could not be feasibly treated. This protocol was developed with a board certified NMM/OMM osteopathic physician (N.M.). The osteopathic structural exam and motion testing were focused on the lumbar spine, iliosacral, and sacral areas to diagnose dysfunctions commonly associated with chronic, nonspecific LBP [28]. The experience and comfort level of the osteopathic medical students and the supervising osteopathic physician (N.M.) were considered in the development of the treatment protocol. Common techniques that are well tolerated were identified and a treatment sequence was refined to reduce the number of patient position changes and operator variability.
Second year osteopathic medical students (G.S., K.A., K.C.) were trained for a minimum of 10 h and assessed on two separate occasions by a board certified NMM/OMM osteopathic physician (N.M.) to ensure uniform technique and a standardized protocol for each group. For every participant during each treatment session, a standardized osteopathic structural examination was performed, and diagnoses of somatic dysfunctions were made. If no somatic dysfunctions were found in the areas examined, the associated part of the treatment protocol was not performed. The students performed the following protocol for each session and recorded the findings on a standard form:
-Observe and palpate thoracic and lumbar muscles for TART (tenderness, asymmetry, restriction of motion, and tissue texture) changes
-Perform lumbar intersegmental diagnosis
-Perform counterstrain tenderpoint assessment for psoas major, quadratus lumborum, and piriformis
-Perform spring test at the base of the sacrum
-Perform ASIS compression test to lateralize restriction in iliosacral mobility
After the osteopathic structural exam sequence was completed and recorded, participants receiving OMT were treated with a standardized sequence. Techniques were only applied to the areas where somatic dysfunction was found during the structural exam.
-Regional thoracic myofascial release (MFR), prone
-Lumbar soft tissue unilateral pressure, prone
-Psoas, piriformis, quadratus lumborum counterstrain, supine (only the most severe counterstrain tender point was treated)
-Lumbosacral/pelvic myofascial release, supine
-Sacrum balanced ligamentous tension, supine
-Lumbar muscle energy, seated
Participants receiving BEMER therapy lay supine on the BEMER mat (BEMER International AG) in a darkened and quiet room, with the B.Pad (BEMER International AG) on their lower back. The BEMER mat intensity setting was adjusted each week to a progressively higher intensity: intensity three was selected in week 1, intensity four for week 2, and intensity five for week 3. B.Pad ® settings were set for Program 1 (8 min long) in week 1, Program 2 (16 min long) in week 2, and Program 3 (20 min long) in week 3. These settings were selected based on the manufacturer's recommendations. For the participants in the combination therapy group, OMT was performed prior to BEMER therapy on the 3 days the therapies overlapped.
Patients in the control group received light touch and BEMER sham treatments. The trained students placed their hands lightly on the lumbar and sacral regions to mimic MFR techniques; however, no lifting or action was done. After the light touch treatment, the subject lay supine on the deactivated BEMER mat.
All treatment sessions, including the osteopathic structural exam, lasted no longer than 30 min for all groups.
Outcomes assessments
Prior to intervention and within 30 min after the 3 week intervention protocol, research participants were required to complete a paper version of validated surveys to assess pain and disability. The validated surveys used were a Visual Analog Scale (VAS) [29], the Short Form 12 item (SF-12) health survey (Appendix 1) [30], and the Oswestry Low Back Pain Questionnaire/Oswestry Disability Index (Appendix 2) [31]. SF-12 data were separated into a Physical Component Summary (PCS) and Mental Component Summary (MCS) [30]. The VAS score [29] measures subjective pain along a 100 mm line continuum from "no pain" to "pain as bad as it could possibly be." The SF-12 health survey [30] is a self-reported survey that measures the effects of health on daily activities. The Oswestry Low Back Pain Questionnaire [31] is a subjective tool to measure the functional effects of low back pain on everyday life that could lead to disability.
All outcomes analyses were performed in a blinded fashion with the subjects' survey and corresponding data deidentified and analyzed retrospectively; the survey contained only the participants' ID numbers. Absolute changes in questionnaire scores from preintervention to postintervention were calculated for each participant. A one-way analysis of variance (ANOVA) was used to determine any statistical significance between mean changes in the four groups. Normal distribution and equal variances were confirmed by Shapiro-Wilk test and Brown-Forsythe, respectively. Significance was set at p<0.05, and values are presented as means ± standard deviation (SD).
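A minimal sketch of the analysis pipeline described above is given below, using SciPy and hypothetical change scores: absolute pre-to-post changes per participant, Shapiro-Wilk tests for normality, the Brown-Forsythe test for equal variances (Levene's test centered on the median), and a one-way ANOVA across the four groups. All data values in the sketch are invented for illustration, not the study data.

```python
# Sketch of the statistical workflow described above, run on hypothetical
# absolute change scores (post minus pre) for each of the four groups.

import numpy as np
from scipy import stats

omt       = np.array([-5, 0, -10, -2, -8, 3, -4, -1, -6, -6])
bemer     = np.array([-3, -4, -2, -6, -1, -5, -2, -4, -7, -3])
omt_bemer = np.array([-30, -22, -40, -15, -28, -10, -35, -20, -32, -30])
control   = np.array([-12, -8, -15, -5, -10, -9, -14, -7, -11, -13])
groups = [omt, bemer, omt_bemer, control]

# Normality of the change scores in each group (Shapiro-Wilk)
for g in groups:
    print("Shapiro-Wilk p =", round(stats.shapiro(g).pvalue, 3))

# Equality of variances (Brown-Forsythe = Levene's test centered on the median)
print("Brown-Forsythe p =", round(stats.levene(*groups, center="median").pvalue, 3))

# One-way ANOVA on the absolute changes, significance threshold p < 0.05
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```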
Results
The 40 research participants had a mean age of 25.1 years. Preintervention mean scores for all groups are shown in Table 1. One-way ANOVA analyses showed that preintervention mean values were not statistically different between groups.
All groups fell into the minimally disabled category for Oswestry score data, with an average score of 12.8 on the following scale: 0-4 indicating no disability; 5-14 indicating minimal disability; 15-24 moderate disability; 25-34 severe disability; 35-50 complete disability [31]. PCS values on the SF-12 questionnaire were slightly below the national mean for all groups (mean, 46.6 ± 8.4; national mean, 50.0 ± 10.0), whereas the MCS values were slightly above the national mean (mean, 51.7 ± 7.4; national mean, 50.0 ± 10.0). These data suggest that our subjects had lower pain and disability compared with the national mean. The absolute change results for each outcome variable are shown in Figure 2A-D. For three of the four outcome variables measured, the OMT+BEMER treatment group showed the greatest improvements in LBP and disability scores. For example, the OMT+BEMER group had a mean decrease of 26.2 ± 28.8 in their VAS score; the control group showed the next best decrease in the VAS, with a 10.4 ± 17.3 mean decrease. The OMT group and the BEMER group were similar in result, with mean decreases of 3.9 ± 20.3 and 3.7 ± 8.4, respectively. The OMT+BEMER group had the largest change in mean Oswestry score, with a 6.0 ± 8.7 decrease, followed by the OMT group, with a 3.8 ± 5.9 decrease. The OMT+BEMER group also had the largest mean change in the physical component (PCS), with a 7.2 ± 8.1 mean increase. The next largest increase in score was from the OMT group, with a 3.9 ± 6.3 mean increase. The OMT group had the largest mean increase on the mental component of the SF-12 (MCS), with a 5.7 ± 9.6 increase, while the BEMER group and OMT+BEMER group followed with mean increases of 3.3 ± 6.2 and 3.4 ± 6.2, respectively.
One-way ANOVA analyses showed that absolute change values were not statistically different between groups (p>0.05).
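For reference, the Oswestry disability categories quoted above can be expressed as a simple mapping from the raw 0-50 score. The function below is an illustrative sketch, and the example scores are hypothetical rather than individual participant data.

```python
# Sketch: mapping a raw Oswestry score (0-50) to the disability categories
# listed above. Example scores are illustrative only.

def oswestry_category(score):
    if not 0 <= score <= 50:
        raise ValueError("Raw Oswestry score must be between 0 and 50")
    if score <= 4:
        return "no disability"
    if score <= 14:
        return "minimal disability"
    if score <= 24:
        return "moderate disability"
    if score <= 34:
        return "severe disability"
    return "complete disability"

print(oswestry_category(12.8))  # the group-average baseline -> minimal disability
print(oswestry_category(28))    # a hypothetical higher score -> severe disability
```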
Discussion
LBP is a major cause of disease burden and a major reason why patients seek medical attention [32]. Pain of any caliber can affect an individual's productivity and put them at risk for experiencing a decrease in quality of life. National guidelines for treatment of LBP have changed to focus on alternative therapies, including OMT, physiotherapy, and psychotherapy [11]. The overarching goal of the present study was to assess the individual and combined effects of OMT and BEMER therapy in adults with LBP. OMT as an adjunct to traditional treatment of LBP has been studied for cost effectiveness with promising results but more research is needed [14,33,34]. Osteopathic physicians who offer OMT also admit fewer patients to the hospital for LBP and have been shown to successfully treat LBP in fewer office visits [35]. Findings from our study could influence future studies and lead to incorporation of more OMT with BEMER therapy in clinical practice in the treatment of LBP, thus further decreasing healthcare costs and improving patient outcomes.
Our results suggest that the combination of OMT and BEMER therapy may have a positive impact on reducing LBP, although additional studies are warranted. Despite the lack of statistically significant differences between groups, the improvements in LBP observed in the OMT+BEMER group may be meaningful and clinically relevant; this could be demonstrated with a study of larger sample size and higher statistical power. Trends were observed showing that the combined OMT+BEMER treatment had a greater impact on LBP than each of the treatment modalities alone. The VAS analyses showed the largest decrease in LBP after 3 weeks of combined treatment (mean decrease, 26.2 ± 28.8). Moreover, the OMT+BEMER group showed the largest decrease in Oswestry score (mean decrease, 6.0 ± 8.7), lending evidence to the theory that combined treatment may lead to a more beneficial result than each treatment modality separately.
In a systematic review of six randomized controlled trials (n=210) [25], PEMF therapy was shown to improve the functionality of individuals with LBP, but studies failed to show any added benefit when combined with standard therapy [25]. However, none of the studies in that review utilized OMT or BEMER Pro-Set as treatment modalities as ours did. Our results suggest there may be an additive effect between these two modalities. The observed decrease in LBP in our study could be due to the increase in microcirculation after OMT+BEMER sessions, bringing in new nutrients and flushing out cellular waste in a properly aligned lower back. Due to the low risk associated with these two therapies, BEMER can be considered a potentially promising adjunctive therapy in the treatment and management of LBP [14,23]. Future studies should aim to adjust the protocol to maximize the beneficial effects of the combination treatment.
Gyulai et al. [18] investigated the synergistic effect of BEMER therapy with complex standard physiotherapy in 20 patients aged 20-80 years with chronic LBP in a double-blind study. Results showed that in the short term, physiotherapy with BEMER therapy demonstrated a significant improvement in resting VAS (mean decrease of 26.8 ± 13.7; p=0.02), but no changes in the Oswestry scores or Quality of Life (determined by the General Quality of Life Questionnaire SF 36; p>0.05). These results are comparable with our findings. Although our study did not achieve statistical significance, the change in the VAS scores observed in our study was remarkably similar to the results from Gyulai et al., which was approximately 26 mm on the 100 mm visual analogue scale [18].
The lack of statistical significance in our study could be attributed to several factors. First, our study groups consisted of only 10 subjects, whereas the Gyulai et al. study [18] included 20 subjects per group. Moreover, our subjects were young adults (mean age, 25.1 years), while their subjects were older adults (mean age, 67.3 years for men and 66.7 years for women) [18]. Finally, the baseline VAS scores were lower in our study, which leaves a smaller window for any potential improvements. Nevertheless, the results from our study and those from Gyulai et al. [18] suggest a potential benefit for the use of BEMER therapy in combination with OMT for the management and treatment of LBP.
Our control group showed unexpected beneficial results, given that the research subjects in that group did not receive any actual treatments. This may have been one important reason for the lack of statistical significance observed, despite improvements in the treatment groups. That said, the subjects in the control group were put through diagnostic screening, went through light touch treatment, and had to lie in a quiet and darkened room for 20 min for the sham BEMER therapy. The sham BEMER therapy could have given the subjects a sense of relaxation, and the light touch of the researcher could have been interpreted by the subjects as treatment, leading to a placebo effect. It has been shown that simple touch can elicit a neural event, making pain more bearable [36].
Limitations
A small sample size was a limitation to our investigation. The low number of subjects per group (n=10) and the high variability of the measurements could have contributed to the nonsignificance of our results despite the postintervention improvements in some of our outcomes measures. Based on the present data, a power analysis estimated that 17 subjects per group would be needed to demonstrate significant differences between groups.
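The sample-size estimate mentioned above could be reproduced along the following lines with statsmodels' power calculation for a one-way ANOVA. The effect size shown is a hypothetical Cohen's f and is not taken from the study data, so the resulting number is illustrative only.

```python
# Illustrative power calculation for a four-group one-way ANOVA.
# The effect size (Cohen's f) is a hypothetical value, not the study's estimate.

from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.45,  # hypothetical Cohen's f
    k_groups=4,
    alpha=0.05,
    power=0.80,
)
print(f"Total sample size: {total_n:.0f} (about {total_n / 4:.0f} per group)")
```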
The length of the protocol must also be considered. Our study was designed to investigate the individual and combined effects of BEMER and OMT therapies. The BEMER protocol was adapted from the manufacturer's recommendations. To explore how OMT compared with or complemented BEMER therapy, we developed a shortened, more targeted OMT protocol. Future follow-up studies with longer OMT protocols are warranted. Furthermore, the low baseline values recorded preintervention left a smaller window for any potential postintervention improvements. As shown in the preintervention measurements (Table 1), all groups were categorized in the minimally disabled group by Oswestry score [31] and were generally below 40 out of 100 on the VAS score, placing them in the category of mild low back pain. Therefore, studies investigating the individual and combined effects of OMT and BEMER therapy in individuals with more severe LBP are warranted.
Another limitation that could have led to high variability in our results was the use of second year osteopathic students to perform the standardized treatment protocol. Although the students were trained on the protocol for a minimum of 10 h and assessed on multiple occasions, their limited clinical experience could have altered the effectiveness of treatment and ultimately affected the external validity of the study.
Future studies
There is a definite need for future studies to expand on the findings from our investigation. Additional studies with larger sample sizes to improve the statistical power and validity of the results are warranted. Also, future studies are needed to compare acute vs. chronic LBP and the potential benefits of OMT and/or BEMER treatment for each. The treatment protocol developed for this study could also be adapted for other musculoskeletal complaints such as hip and neck pain. Further research is needed regarding the effects of OMT and BEMER therapy on LBP, including standardized protocols, larger samples, and adjustment for low back pain confounders in order to achieve stronger conclusions.
Conclusions
This study investigated the individual and combined effects of OMT and BEMER therapy for LBP. Our results showed that a combination of OMT and BEMER therapy produced additive, although not statistically significant, effects on decreasing the level of LBP and increasing functionality in 40 volunteer participants. While more studies are warranted to further investigate the combined effects of OMT and BEMER therapy, the trends observed in our study shed light on the possibility of utilizing a combination of OMT and BEMER in the management of LBP in clinical practice.
Informed consent: Prior to the start of the study, written and informed consent was obtained from all research participants. Ethical approval: This study was approved by the Institutional Review Board at Lake Erie College of Osteopathic Medicine. The authors did not prospectively submit this study to a clinical trial registry, but it was registered post hoc at ClinicalTrials.gov (NCT04704375).
"year": 2021,
"sha1": "4c22f2d51a7d0cab28a5942fde16c101173a2e69",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/jom-2020-0132/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d9ad564a9358d8b91c8301da4951079bba12ecf0",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
47479704 | pes2o/s2orc | v3-fos-license | Spontaneous Resolution of Paraparesis Because of Acute Spontaneous Thoracolumbar Epidural Hematoma
Symptomatic spontaneous spinal epidural hematoma (SSEH) is an uncommon cause of cord compression that is commonly considered an indication for emergent surgical decompression. We report a patient with an SSEH that completely resolved clinically and radiographically without surgical treatment. The patient presented three days after the sudden onset of back pain, numbness, and weakness. Magnetic Resonance Imaging (MRI) revealed a posterior thoracolumbar epidural hematoma extending from the level of T10 to L2 with significant cord compression. Decompression was recommended but he refused surgery and was managed conservatively. One month later, the weakness had fully recovered and the hematoma was no longer visible on MRI.
Introduction
Spontaneous spinal epidural hematoma (SSEH) is an uncommon cause of cord compression and is associated with vascular malformations, neoplasms, infections, coagulopathy, pregnancy and idiopathic causes. [1][2][3][4] Magnetic Resonance Imaging (MRI) is the gold standard for the diagnosis of SSEH. We describe a patient with an SSEH in whom complete motor and sensory recovery was observed at 1-month follow-up, with clinical and radiographic resolution of the thoracolumbar epidural hematoma without surgical treatment.
Case Report
A 46-year-old man presented 3 days after the sudden onset of back pain, numbness, and weakness of lower limbs after warfarin therapy for deep vein thrombosis. Clinical examination showed that the degree of motor weakness of both lower limbs was 3/5 and the level of numbness was T11 dermatome. Reflexes were depressed. Rectal examination showed normal anal tone and urinary retention was not detected. There was no neurological deficit in the upper limbs. The MRI revealed a posterior thoracolumbar epidural hematoma from the level of T10 to L2 with significant cord compression. The epidural mass was hyperintense on the T1W images ( Figure 1).
The patient was admitted to our department, an emergency decompression was recommended and preparation for the operation was started, but he refused surgical treatment. Therefore, he was managed conservatively with cessation of warfarin therapy and initiation of low-molecular-weight heparin therapy. He was not placed on intravenous or oral steroids because his neurological complaints had started 3 days earlier. His lower-limb weakness gradually recovered within one week and he was mobilized. After one month, he regained full power and a control MRI was performed, which revealed complete resolution of the thoracolumbar epidural hematoma (Figure 2). SSEH has been reported to occur in all age groups. For instance, some pediatric cases of spinal subdural and epidural hematoma have been documented in the literature; these reports argued that aggressive surgical treatment should be delayed as long as possible in pediatric patients because the spinal structure is still developing. 6,8 The causative hematomas most frequently occur at the lower cervical and thoracolumbar spinal levels in adults, and from the C5 to T1 spinal levels in children. 7,9,10 Symptoms such as numbness, radicular paresthesia and progressive paraparesis appear within minutes to days. 3,11 Children often suffer from additional symptoms of irritability and occasionally urinary retention. 12 The etiology of SSEH is unknown, but predisposing factors such as increased venous pressure, hypertension, anticoagulant therapy for prosthetic cardiac valves, therapeutic thrombolysis for acute myocardial infarction, hemophilia B, factor XI deficiency, long-term use of acetylsalicylic acid as a platelet aggregation inhibitor, vascular malformation and pregnancy have been reported. However, the exact pathogenesis of spinal epidural hematomas still remains obscure. 2, [13][14][15] Most authors have contended that SSEH arises from the epidural venous plexus in the spinal epidural space: because of fluctuations in intrathoracic and intraabdominal pressures after exercise or other maneuvers, reversal of blood flow may induce rupture of a delicate vein in the valveless epidural plexus. Other researchers have proposed the spinal epidural arteries as a source of hemorrhage. 12,16 MRI is the first-choice diagnostic method for SSEH; if MRI is unavailable, a CT scan should be obtained. The differential diagnosis includes spinal abscess, ischemia, transverse myelitis, acute herniated intervertebral disc and epidural tumor. MRI recognition of the blood products is the most important sign that distinguishes SSEH from other spinal extramedullary lesions. Spinal subdural hematoma should be differentiated from spinal epidural hematoma: spinal epidural hematoma has a more lentiform shape and occasionally extends into the intervertebral foramina, whereas spinal subdural hematoma has a crescent shape and traps the spinal cord or cauda equina. 8 Our patient was admitted to our department with mild paraparesis and hypoesthesia. We decided on emergent surgical treatment and preparation for the operation was started, and his warfarin therapy was changed to low-molecular-weight heparin therapy. However, the patient refused surgical treatment. Therefore, we decided to give him painkillers and strict bed rest with serial neurological examinations. After a week, his lower-limb weakness recovered gradually.
After three weeks, the Department of Cardiovascular Surgery was consulted and he was managed with cessation of low-molecular-weight heparin therapy and resumption of warfarin therapy. After a month, the patient had recovered completely, and his MRI revealed complete resolution of the thoracolumbar epidural hematoma.
Spontaneous spinal epidural hematoma is an uncommon cause of cord compression that is commonly considered an indication for emergent surgical decompression. It should be considered in the differential diagnosis of other conditions. In our case, the patient had mild paralysis and was recovering gradually, so conservative treatment was recommended.
Acknowledgement
This study was done in Bezmialem Vakif University, Faculty of Medicine.
Conflict of interest: None declared. | 2018-04-03T04:08:06.741Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "3ecfd606c8285e0898406761f45c29309e11fa6a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e4468ac228c0b86510000680794d19fe006c770f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211228925 | pes2o/s2orc | v3-fos-license | Therapeutic Evaluation of Computed Tomography Findings for Efficacy of Prone Ventilation in Acute Respiratory Distress Syndrome Patients with Abdominal Surgery
Abstract Introduction In Acute Respiratory Distress Syndrome (ARDS), the heterogeneity of lung lesions results in a mismatch between ventilation and perfusion, leading to the development of hypoxia. The study aimed to examine the association between computed tomographic (CT scan) lung findings in patients with ARDS after abdominal surgery and improved hypoxia and mortality after prone ventilation. Material and Methods A single site, retrospective observational study was performed at the Sapporo Medical University School of Medicine, Sapporo, Hokkaido, Japan, between 1st January 2004 and 31st October 2018. Patients were allocated to one of two groups after CT scanning according to the presence of ground-glass opacity (GGO) or alveolar shadow with predominantly dorsal lung atelectasis (DLA) on lung CT scan images. Patients were also divided into a prone ventilation group and a supine ventilation group when the treatment for ARDS was started. Results We analyzed data for fifty-one patients with ARDS following abdominal surgery. CT scans confirmed GGO in five patients in Group A and in nine patients in Group B, and DLA in 17 patients in Group A and nine patients in Group B. Both GGO and DLA were present in two patients in Group A and nine patients in Group B. Prone ventilation significantly improved patients' impaired ratio of arterial partial pressure of oxygen to fraction of inspired oxygen from 12 h after prone positioning compared with that in the supine position. Weaning from mechanical ventilation occurred significantly earlier in Group A with DLA vs Group B with DLA (P < 0.001). Twenty-eight-day mortality was significantly lower for Group A with DLA vs Group B with DLA (P = 0.035). Conclusions These results suggest that prone ventilation could be effective for treating patients with ARDS who show DLA.
Introduction
Acute respiratory failure after abdominal surgery for conditions such as pan-peritonitis frequently necessitates long-term mechanical ventilation, and treatment often ends in failure [1,2]. In particular, respiratory failure after abdominal surgery is known to cause respiratory damage both directly, when intra-abdominal infection or invasion affects the lungs via the diaphragm, and indirectly, mediated by the bloodstream [3]. Thoracic X-rays and the PaO2/FiO2 (P/F) ratio are used in the diagnosis of acute respiratory distress syndrome (ARDS), but lesions in the injured lungs may encompass a variety of different conditions. In ARDS, the heterogeneity of lung lesions results in a mismatch between ventilation and perfusion, leading to the development of significant hypoxia. Prone ventilation may potentially improve this ventilation-perfusion mismatch [4], but has not as yet been fully investigated. The present comparative investigation of the association between computed tomography (CT scan) findings from patients who had developed ARDS after abdominal surgery and improvements in hypoxia as a result of prone ventilation was therefore performed.
The study aimed to examine the association between computed tomographic lung findings in patients with ARDS after abdominal surgery and improved hypoxia and mortality after prone ventilation.
Materials and methods
This study was approved by the Institutional Review Board of Sapporo Medical University (Authorized number 302-156).
The single site retrospective study investigated patients admitted to the intensive care unit (ICU) in the hospital between 1st January 2004 and 31st October 2018.
Inclusion Criteria
Participants were patients admitted to the intensive care unit (ICU) in the hospital between 1st January 2004 and 31st October 2018 who had developed ARDS following surgery for intra-abdominal infection and who had undergone lung CT scan on admission to the ICU. ARDS was developed and diagnosed following the criteria of the Berlin Definition [5] after being admitted to the ICU.
Exclusion Criteria
Patients who received less than 72 hours of mechanical ventilation and those who were under 15 years old were excluded.
Subsequently, patients were divided into two groups: Group A, which underwent prone ventilation within twenty-four hours of the start of mechanical ventilation following ICU admission, and Group B, which did not.
Classification of lung CT Scan findings
Images from lung CT scan performed on admission to the ICU were evaluated by a single radiologist, and classified into two types according to the following procedure.
Six lung CT scan images were used for evaluation: 5 cm above the tracheal bifurcation, 5 cm below the tracheal bifurcation, and 5 cm above the diaphragm, covering each of the right and left lungs. Patients were categorised as showing either increasing ground glass opacity (GGO) or alveolar shadows with predominantly dorsal lung atelectasis (DLA) if these findings were evident in at least 3 of the 6 images
Mechanical ventilation
Respiration was managed so as to preserve spontaneous respiration. The ventilator mode used was pressure support ventilation. Blood gas analysis was performed every 4 hours, and support pressure was regulated to maintain PaCO 2 at 35-50 mmHg. The fraction of inspired oxygen (FiO2) was regulated to maintain PaO 2 ≥ 60 mmHg. The positive end-expiratory pressure (PEEP) value was set as recommended by the ARDS Network using allowable combinations of FiO2 and PEEP [6] Sedation was carried out with fentanyl continuous infusion combined with continuous infusion of midazolam (0.03-0.06 mg/kg/h), propofol (0.5-3 mg/kg h), or dexmedetomidine (0.2-0.7 μg/kg/h), and was regulated to maintain the patient at between -1 and -2 on the Richmond Agitation Sedation Scale. Prone ventilation did not involve any particular variation in sedation type or dosage.
Prone method
Prone ventilation was carried out using an air-cushioned bed. Patients were moved into the prone position under mechanical ventilation following the method previously reported [7]. An air-floating bed was used for changing patients to the prone position. At least five hospital staff members including medial doctors, intensive care nurses, and clinical engineers participated in each position change. Vital signs are monitored before and after the position change. Prone ventilation was continued for sixteen hours during which time blood gas analysis was performed every 4 hours.
Prone ventilation was ended when either of the following criteria was met:
- PaO2 was maintained at ≥ 80 mmHg at FiO2 0.5 for more than four hours after the patient had been returned to the supine position.
- There was no improvement in oxygenation compared with before the use of prone ventilation, even after prone ventilation had been performed twice in succession.
The criteria for prone ventilation were as follows:
- moderate ARDS according to the Berlin definition;
- GGO, DLA or GGO+DLA in the CT scan findings;
- agreement between the attending physician and the intensivists, after discussion, that prone ventilation could be effective considering the patient's vital signs and general condition.
The following data were obtained:
- age and sex;
- underlying diseases;
- Acute Physiology and Chronic Health Evaluation (APACHE) II score;
- Sequential Organ Failure Assessment (SOFA) score on ICU admission;
- duration of stay in the ICU;
- duration of ventilation, and outcome after 28 days and 90 days;
- PEEP value at the start of ventilation, and maximum PEEP value within 72 hours after the start of ventilation;
- the number of ventilator-free days (VFDs), during which the patient was not attached to a ventilator, and ICU-free days (IFDs), during which the patient was cared for in a ward other than the ICU.
To calculate the P/F ratio in the Group A, PaO 2 and FiO 2 were measured before patients were moved into the prone position and 12, 24, 48, and 72 hours after the start of prone ventilation.
In the Group B, PaO 2 and FiO 2 were measured 12 hours after the start of mechanical ventilation and at 12, 24, 48, and 72 hours after the initial measurement.
To analyse the association between CT scan findings and the efficacy of prone ventilation, patients were divided into three groups based on the lung CT scan images (GGO, DLA and GGO+DLA). The P/F ratio at the start of mechanical ventilation and 72 hours later was compared between the GGO and DLA groups in both Group A and Group B, and VFDs, the weaning rate from mechanical ventilation, and outcomes at twenty-eight days were also compared.
Statistical analysis
Changes over time in the P/F ratio were analysed using repeated-measures analysis of variance.
The unpaired Student's t-test was used for comparisons between Group A and Group B and between the GGO and DLA groups.
Kaplan-Meier curves were produced for ventilation weaning rates over time.
Intergroup comparisons were made using the logrank test.
The level of significance was set at α = 0.05. Values of P < 0.05 were regarded as significant.
Patient demographics and lung CT scan findings
In total, fifty-one patients were admitted to the ICU during the study period with respiratory failure following abdominal surgery. Twenty-four underwent prone ventilation (Group A) within 24 hours of being admitted to the ICU, and twenty-seven underwent ventilation in the supine position (Group B). In Group A, mechanical ventilation was started within 24 hours of ICU admission, followed by prone positioning.
In Group B, mechanical ventilation was started within 24 hours of being admitted to the ICU. Patient demographics are shown in Table 1.
Comparisons between Group A and Group B showed that no significant differences in background characteristics were evident between the two groups for age, sex, APACHE II scores, SOFA scores on ICU admission or frequency of shock.
Upper gastrointestinal surgery was common in both the prone (Group A) and supine (Group B) groups.
No significant difference in frequency of steroid administration was identified.
CT scan showed GGO in five patients in Group A and nine patients in Group B.
DLA occurred in 17 patients in Group A and nine patients in Group B, and both (GGO + DLA) in two patients in Group A and nine patients in Group B.
No significant differences in the ventilation settings of PEEP level, peak pressure and respiratory rate were identified between the two groups. The P/F ratio was ≤ 200 in both groups at the start of mechanical ventilation, meeting the diagnostic criteria for ARDS.
Comparison of changes in the P/F ratio and weaning rate from mechanical ventilation
Table 1 shows changes in the P/F ratio over time. No significant difference in the P/F ratio at the start of the study was seen between the two groups.
In Group A, the P/F ratio rose significantly by 12 hours after the start of prone ventilation, and was significantly higher at 24, 48 and 72 hours compared with the value prior to prone ventilation, maintaining the improvement in oxygenation. In Group B, the P/F ratio was also significantly elevated at 24, 48 and 72 hours after the start of measurements compared with the value at the start of the study.
In Group A, the P/F ratio was significantly higher than the corresponding values for Group B at each time point.
Duration of ventilation, duration of ICU stay, and outcome after 28 days and 90 days
Group A had significantly more VFDs and IFDs ( Table 1). The rate of weaning from mechanical ventilation at twenty-eight days after the start of mechanical ventilation was also significantly higher in Group A than in Group B (P = 0.02) (Fig. 1). After 28 days, four patients in Group A had died (16.7% mortality), and 10 of 27 patients in Group B had died (37.0% mortality). This difference was not significant (P = 0.127). However, the mortality rate after 90 days in Group A was significantly higher than that in Group B (P = 0.048).
Prone ventilation
This technique was only applied once or twice (mean, 1.5 ± 0.5 times overall; 1.6 ± 0.5 times in the DLA group and 1.4 ± 0.5 times in the GGO group).
The mean time spent in the prone position was 16.1 ± 0.8 hours. No serious complications such as accidental removal or kinking of the central venous catheter, tracheal tube or drains, or wound dehiscence occurred during prone ventilation. Mild reddening was identified around the cheekbones, iliac bones, and knees, but this resolved after patients were returned to the supine position. Table 2 shows the data for patients in whom DLA and GGO were shown on CT scans.
Relationship between lung CT scan findings and efficacy of prone ventilation
In total, 14 patients were classified as belonging to the GGO group on the basis of lung CT scan findings, of whom five underwent prone ventilation.
The DLA group contained 26 patients, 17 of whom underwent prone ventilation. The GGO and DLA groups showed no significant differences in age, sex, APACHE II score, SOFA score, surgical site, use of steroids, or ventilator settings at the start of this study.
In the GGO and DLA groups, no significant difference in the P/F ratio was apparent between Group A and Group B at the start of the study.
In patients with GGO, no significant difference in the P/F ratio was seen in either Group A or Group B 72 hours after the start of this study.
In patients with DLA, the P/F ratio was significantly higher in Group A 72 hours after the start of prone ventilation. There were no significant differences between Group A and Group B in the numbers of VFDs and IFDs for patients with GGO, but in patients with DLA, there were significant differences between the two groups with regard to VFDs and IFDs (Table 3).
Weaning from mechanical ventilation in patients with DLA was also significantly earlier in Group A than in Group B (P < 0.001), but no significant difference between Group A and Group B was identified for patients with GGO (P = 0.294) (Fig. 2).
The mortality rate after twenty-eight days in patients in Group A with DLA was significantly lower than that in patients in Group B with DLA, but no significant difference between groups was seen for patients with GGO.
The mortality after twenty-eight days in the DLA group was thus significantly higher in Group B than in Group A.
The mortality rate at ninety days in Group A with DLA was also significantly lower than that in Group B.
At 90 days, there was no significant difference in mortality between the two groups with GGO (Table 3).
Fig. 1. Comparison of weaning rates from mechanical ventilation between the prone and supine ventilation groups in patients with intra-abdominal sepsis-induced ARDS.
Cumulative weaning rate over 28 days was compared using the log-rank test. ARDS: acute respiratory distress syndrome Abbreviations: APACHE II, acute physiology and chronic health evaluation II; SOFA, sequential organ failure assessment; PEEP, positive end-expiratory pressure.
Discussion
Prone ventilation was performed for patients who developed acute respiratory failure after surgery for intraabdominal infection, and the association between the efficacy of prone ventilation and image findings from lung CT scan was examined. This technique was only applied once or twice because oxygenation could be improved. Of the 51 subjects in this study, twenty six (51.0%) showed dorsal infiltration as the main finding on lung CT scan images, with only fourteen of the fifty one patients (27.5%) showing mainly diffuse infiltration. Prone ventilation rapidly restored impaired oxygenation, and more than 50% of patients could be weaned off mechanical ventilation after 72 h. A comparison of efficacy in terms of CT scan imaging findings showed that prone ventilation was clearly more effective in patients showing dorsal infiltration as the main finding compared with those with diffuse infiltration. The present results suggest that prone ventilation may be an effective method for treating patients with intra-abdominal infection who develop acute respiratory failure. Acute respiratory failure associated with intra-abdominal infection is frequently extremely difficult to treat [1], and the mortality rate is reportedly high [2].
In an animal sepsis model of intra-abdominal infection, intra-abdominal fluid contained larger amounts of cytokines than seen in circulating blood [8]. These cytokines are continuously transferred into circulating blood, causing damage to the vascular endothelium of internal organs. In the lungs, this increases vascular permeability, increasing the volume of interstitial fluid and causing the appearance of diffuse infiltration on CT [9]. In particular, if vascular permeability increases in the dorsal region or on the surface of the diaphragm, where perfusion is greater because of gravity, intraabdominal fluid with a high concentration of inflammatory mediators may spread inflammation directly to the diaphragm concomitant with inflammation spreading from the diaphragm to the diaphragmatic surface of the lungs, causing the appearance of dorsal infiltration on CT. In addition, exposure of the diaphragm to inflammatory mediators may easily reduce its contractility [8]. The dorsal lungs are always vulnerable to deflation as a result of intra-abdominal pressure, but ventilation is normally maintained by appropriate contraction of the diaphragm during spontaneous respiration. Even during spontaneous respiration, however, atelectasis may easily occur if diaphragmatic function deteriorates [10]. The results of the present study suggest that, in acute respiratory failure associated with intra-abdominal infection, the spread of intra-abdominal inflammation via the surface of the diaphragm and inflammatory mediators in circulating blood may cause more dorsal infiltration in the lungs due to the action of gravity.
Prone ventilation has long been used to treat acute respiratory failure [11]. Randomised controlled trials carried out since 2000 have demonstrated its efficacy in restoring impaired oxygenation [12][13][14]. Although the mechanism whereby prone ventilation improves oxygenation is as yet unknown, the following hypotheses have been proposed: 1) improved diaphragm movement in the prone position [15]; 2) improved ventilation-perfusion mismatch [4]; 3) drainage of secretions that have collected in dorsal lung atelectasis [16]; 4) decrease in gravity-dependent increase of hydrostatic pressure [17]; and 5) improved trans-pulmonary pressure, which decreases due to abdominal pressure or increased lung mass [18]. On the basis of these mechanisms, it has been reported that prone positioning is more effective in patients with dorsal infiltration than in those with diffuse infiltration [19]. The greater improvement in hypoxia as a result of prone ventilation compared with the Group B may have been due to the fact that this study included more patients who showed dorsal atelectasis.
There is scope for debate concerning the improvement in outcomes as a result of prone ventilation. No such improvement in outcomes was evident in a randomised controlled trial carried out by Gattinoni et al. (2001) [12], but in a subgroup analysis, severe cases with a P/F ratio ≤150 did show improved outcomes. A recent randomised controlled trial by Guerin et al. (2013) also found that prone ventilation improved outcomes for patients with ARDS and a P/F ratio ≤150 [20]. A meta-analysis by Sud et al. (2010) likewise found that prone ventilation improved outcomes under conditions of a P/F ratio ≤140 [21]. Such findings suggest that prone ventilation may have an important role to play as one method of treatment for severe respiratory failure with a P/F ratio ≤150.
In the present study, prone ventilation did not have any effect on improving outcomes. This may have been because the number of patients included in this study was too small to investigate outcomes, and only around half of the present patients (12 patients in each of Group A and Group B) had a P/F ratio ≤150 at the time of inclusion, meaning that the study included few patients for whom prone ventilation could be expected to be effective.
Most previous studies of prone ventilation have used American-European Consensus Conference criteria to diagnose ARDS [22]. These diagnostic criteria stipulate the presence of bilateral infiltration on thoracic X-rays, but investigations of CT scan images have shown that a variety of conditions are included [23]. Few studies have addressed the association between the clinical efficacy of prone ventilation and findings on CT scan images in ARDS, which encompasses a heterogeneous range of conditions. In the present study, prone ventilation did not improve outcomes for patients with diffuse infiltration on CT scans, but did significantly improve outcomes in patients with dorsal infiltration. The great majority of patients with intra-abdominal infection-related ARDS also showed dorsal infiltration on CT scans. ARDS associated with intra-abdominal infection may thus be highly likely to progress to a condition in which prone ventilation may be effective, and prone ventilation may be a useful treatment option during mechanical ventilation.
In summary, most patients with ARDS associated with intra-abdominal infection showed dorsal atelectasis on CT scan, and prone ventilation enabled earlier weaning from mechanical ventilation. The present results indicate that prone ventilation may improve outcomes for patients with dorsal atelectasis on CT scan, and suggest prone ventilation as a useful treatment for patients with ARDS associated with intra-abdominal infection.
Declarations of interest
None to declare. | 2020-02-20T09:11:18.390Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "bdc6549867cbf1b8c5dd30f6763836ac20364c92",
"oa_license": "CCBYNCND",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/jccm/6/1/article-p32.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e7e179520d121ac6eeb82866af5de075e631871",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18775128 | pes2o/s2orc | v3-fos-license | GEVA: grammatical evolution in Java
We are delighted to announce the release of GEVA [1], an open source software implementation of Grammatical Evolution (GE) in Java. Grammatical Evolution in Java (GEVA) was developed at UCD's Natural Computing Research & Applications group (http://ncra.ucd.ie).
Grammatical Evolution in Java
GEVA has been released under GNU GPL version 3, and uses Java 1.5 and greater. As well as providing the characteristic genotype-phenotype mapper of GE, an evolutionary search engine and a GUI are also provided.
GEVA comes out-of-the-box with a number of demonstration problems that can be easily switched between from the GUI or command line. Sample problems include simple String Pattern Matching, an LSystem generator, the Paint problem, and a number of classic Genetic Programming problems such as an example of Symbolic Regression, the Santa Fe ant trail and Even Five Parity.
A screenshot of the default GUI screen showing settings for the pattern matching problem can be seen in Fig. 1. The goal of the problem is to rediscover the string "geva". Simple graphing support is also provided, which allows the user to observe various attributes of the population live during the course of a run. The resulting graphs can then be saved for later use. Attributes that can be plotted include the best fitness, the average fitness (with error bars), the number of invalid (incompletely mapped) individuals, the average number of codons in each individual, and the average number of expressed codons in the population. For an example see Fig. 2.
A number of tutorials have been developed to help the novice user get up to speed: from running the software out-of-the-box, to using the command-line parameters, writing your own grammars and fitness functions, to developing your own search engine. These include a tutorial describing a bonus demo problem, Battleship, that the interested user can add to a GEVA installation.
Design Overview
GEVA takes advantage of GE's modular structure as outlined in Fig. 3. This allows us to create a framework in which any search engine algorithm can be used to generate the genotypes (the GEChromosome class) that are used to direct the GE Mapper's use of the Grammar during the development of the output solution. In recent years this approach has included the adoption of a Particle Swarm algorithm and Differential Evolution as alternative search engines, resulting in Grammatical Swarm [14] and Grammatical Differential Evolution [16] variants. GEVA facilitates the adoption of alternative search engines through the provision of an Algorithm interface. This will work correctly as long as a GE Mapper object is provided with a legal GEChromosome object, so any alternative algorithm must ensure that it maps its search engine's individual representation to a GEChromosome to generate an output solution. In this first release a standard Genetic Algorithm engine is provided, with plans to add alternative engines in future releases. The current version uses individuals with (32-bit) integer codon values and adopts a corresponding integer mutation operator. Grammars are made available to GEVA through plain text files that adopt BNF notation. A simple parser is provided which handles standard BNF and can also recognise special symbols including GECodonValue, which returns the current codon's numerical value as a terminal symbol to the developing output phenotype sentence.
Simply by altering the contents of a BNF text file you can radically change the output generated by GEVA. A number of studies have illustrated this flexibility: for example, grammars have been used to represent a diverse array of structures including binary strings, code in various programming languages (e.g., C, Scheme, Slang, Postscript), music, financial trading rules, 3D surfaces, and even grammars themselves (examples include [18,4,19,20,21,22,23,24]). A number of demonstration grammars are provided in the example problems and are available through the GUI and from the command line.
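To make the mapping process concrete, the sketch below shows in plain Java how integer codons can drive the expansion of a small BNF grammar into an output sentence, in the depth-first, left-most fashion used by GE. The grammar, class and method names are illustrative only and do not correspond to GEVA's actual classes or API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal illustration of the GE genotype-to-phenotype mapping: integer codons
 * select productions of a BNF grammar, always expanding the left-most
 * non-terminal. This is a toy sketch, not GEVA's implementation.
 */
public class ToyGEMapper {

    // Toy grammar: <expr> ::= <expr> + <expr> | <expr> * <expr> | x | 1
    static final Map<String, String[][]> GRAMMAR = new HashMap<>();
    static {
        GRAMMAR.put("<expr>", new String[][] {
            {"<expr>", "+", "<expr>"},
            {"<expr>", "*", "<expr>"},
            {"x"},
            {"1"}
        });
    }

    /** Maps a codon sequence to a sentence; wraps the chromosome if codons run out. */
    static String map(int[] codons, String startSymbol, int maxWraps) {
        List<String> sentence = new ArrayList<>(Arrays.asList(startSymbol));
        int used = 0;
        while (true) {
            int nt = firstNonTerminal(sentence);
            if (nt < 0) break;                                  // fully mapped
            if (used >= codons.length * (maxWraps + 1)) {
                return null;                                    // invalid (incompletely mapped) individual
            }
            String symbol = sentence.get(nt);
            String[][] productions = GRAMMAR.get(symbol);
            int codon = codons[used % codons.length];           // wrapping of the chromosome
            used++;
            String[] chosen = productions[Math.floorMod(codon, productions.length)];
            sentence.remove(nt);
            sentence.addAll(nt, Arrays.asList(chosen));
        }
        return String.join(" ", sentence);
    }

    /** Returns the index of the left-most non-terminal, or -1 if none remain. */
    static int firstNonTerminal(List<String> sentence) {
        for (int i = 0; i < sentence.size(); i++) {
            if (GRAMMAR.containsKey(sentence.get(i))) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] chromosome = {0, 2, 3};
        System.out.println(map(chromosome, "<expr>", 2));
    }
}
```

With the codon sequence {0, 2, 3} this sketch expands <expr> into the sentence "x + 1"; in GEVA the same mapping idea is applied to the BNF grammars and GEChromosome individuals discussed above.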
How to find out more
GEVA is available for download from the UCD NCRA group website (http://ncra.ucd.ie/geva) or http://www.grammatical-evolution.org. Included in the release are instructions on how to run GEVA out-of-the-box, and more detailed tutorials for those who wish to modify the software for new purposes.
We also welcome feedback on the software as we plan to actively maintain the code, releasing new versions as features are added. A GEVA Google group has been set up to facilitate communication amongst the GEVA community [25]. We hope that GEVA will be a useful resource for the EC community and beyond.
Fig. 4: GEVA includes an interactive form of GE as an example problem that allows users to generate interesting LSystems. A similar approach was used to evolve the logo for the Natural Computing Research & Applications Group at UCD (see Fig. 5).
Fig. 1: A screenshot of the GEVA GUI. When opened for the first time it adopts the parameter settings for the pattern matching example problem where the goal is to rediscover the string "geva".
Fig. 2: An example graph produced while running GEVA on the pattern matching example problem. Displayed are the best fitness and average fitness with error bars. It is also possible to observe the average codon length of individuals, the average expressed codon length and the number of invalid individuals in the population.
"year": 2008,
"sha1": "70a9d2b6c76d0458e2dd5955c49567c19daad538",
"oa_license": "CCBYNCSA",
"oa_url": "http://researchrepository.ucd.ie/bitstreams/b9c31ee6-a0ca-43e3-8360-6dcdf50eea33/download",
"oa_status": "GREEN",
"pdf_src": "ACM",
"pdf_hash": "70a9d2b6c76d0458e2dd5955c49567c19daad538",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
111102938 | pes2o/s2orc | v3-fos-license | Estimating atmospheric stability from observations and correcting wind shear models accordingly
Atmospheric stability strongly influences wind shear and thus has to be considered when performing load calculations for wind turbine design. Numerous methods exist however for obtaining stability in terms of the Obukhov length L as well as for correcting the logarithmic wind profile. It is therefore questioned to what extent the choice of adopted methods influences results when performing load analyses. Four methods found in literature for obtaining L, and five methods to correct the logarithmic wind profile for stability are included in the analyses (two for unstable, three for stable conditions). The four methods used to estimate stability from observations result in different PDFs of L, which in turn results in differences of up to 81% in estimated lifetime fatigue loads. For unstable conditions hardly any differences are found when using either of the proposed stability correction functions, neither in wind shear nor in fatigue loads. For stable conditions however the proposed stability correction functions differ significantly, and the standard correction for stable conditions might strongly overestimate fatigue loads caused by wind shear (up to 15% differences). Due to the large differences found, it is recommended to carefully choose how to obtain stability and correct wind shear models accordingly.
Introduction
Wind shear is a major cause of cyclic loads of wind turbines, and should therefore be described as accurately as possible when designing new wind turbines. The IEC standard [1] prescribes the use of either a power law or logarithmic wind profile (log-profile) as a shear model when performing load calculations. These profiles are independent of atmospheric stability, while it is well known that stability has a major impact on wind shear. As such, studies have recently been carried out to assess the impact of atmospheric stability on resource assessment [2], wind turbine performance [3] and fatigue loads [4].
The impact of stability on wind shear, and subsequently on wind turbine design, can be studied based on observation data. One has to choose however i) how to determine or classify stability from observations and ii) how to correct wind profiles accordingly. For both of these choices various methods are used in literature. This raises the question if choosing a specific methodology to determine stability and correct shear profiles accordingly has a significant impact on wind turbine design. In this research the impact of both choices on estimated lifetime fatigue loads of wind turbines is analysed. The impact of using specific stability correction functions on wind shear is analysed for five stability correction functions (two for unstable and three for stable conditions). The dependency of the frequency of occurrence of stability on a chosen method to estimate stability is studied for four specific methods. As a final analyses the impact of choosing specific methods (both for obtaining and correcting wind shear for stability) on wind turbine design is studied based on load simulations of a reference wind turbine.
Wind shear and Atmospheric Stability
Wind shear in the atmospheric boundary layer is typically well described with a logarithmic shear profile. Neglecting stability effects, the neutral logarithmic wind profile (neutral log-law) is given by

u(z) = (u_*/κ) ln(z/z_0)    (1)

Here u(z) is the wind speed at height z, u_* is the friction velocity, κ is the von Karman constant and z_0 is the roughness length. Following Monin-Obukhov similarity theory, the neutral log-law can be rewritten to include stability effects

u(z) = (u_*/κ) [ln(z/z_0) − Ψ(z/L) + Ψ(z_0/L)]    (2)

Here Ψ is a stability correction function that depends on the Obukhov length L. The last term in equation 2 is generally neglected since Ψ(z/L) ≫ Ψ(z_0/L). The Obukhov length is defined as

L = −(u_*³ θ_v) / (κ g (w'θ_v')_s) = (u_*² θ_v) / (κ g θ_*)    (3)

Here θ_v is the mean virtual potential temperature, g is the acceleration due to gravity, (w'θ_v')_s is the surface virtual potential heat flux and θ_* is the surface layer temperature scale. If L is negative the atmosphere is unstable, while for positive values the atmosphere is stable. The definition of the stability correction function in equation 2 varies in literature. Generally the standard corrections proposed by Businger and Dyer (BD-functions [5,6]) are used, which are defined as

Ψ(L ≤ 0) = 2 ln[(1 + x)/2] + ln[(1 + x²)/2] − 2 arctan(x) + π/2, with x = (1 − γ z/L)^(1/4)
Ψ(L > 0) = −β z/L    (4)

where the parameters β and γ were first determined as 4.7 and 15 based on the Kansas experiments [5]. In literature various other values of β and γ are found, and here the correction proposed by Högström [7] is adopted (β = 6 and γ = 19.3). The validity of the BD-functions is questionable, either based on dimensional analyses (for unstable conditions) or based on observational data (for stable conditions). For extreme unstable conditions stresses are no longer significant, and one can show that the BD-functions are incorrect [8]. For such conditions u_* is no longer a scaling parameter, and based on dimensional analyses one obtains a stability correction function that should hold in the free convection limit

Ψ(L ≤ 0) = (3/2) ln[(1 + y + y²)/3] − √3 arctan[(1 + 2y)/√3] + π/√3, with y = (1 − γ z/L)^(1/3)    (5)

Here the parameter γ is set to 10 [8]. A similar expression was already proposed in the seventies [9], still generally the BD-functions are applied. For stable conditions observations show that the BD-functions overestimate wind shear, specifically for very strong stable stratifications. As such, it is proposed independently by Brutsaert and Holtslag to (empirically) alter the stability correction function for stable conditions [10,11]. The formulation of Brutsaert is given by

Ψ(L > 0) = −a ln[z/L + (1 + (z/L)^b)^(1/b)]    (6)

with the constants a and b as proposed in [10], whereas the formulation of Holtslag is given by

Ψ(L > 0) = −[a z/L + b (z/L − c/d) exp(−d z/L) + b c/d]    (7)

In equation 7 the proposed parameters of Beljaars and Holtslag are used [12]. One can show with equation 2 that the ratio of the wind speed at two heights becomes a function of height, stability and the roughness length only if one assumes u_* is constant with height (which is approximately true in the atmospheric surface layer, the lowest 10% of the boundary layer).
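As a concrete illustration of equations (2) and (4), the following short Java sketch evaluates the Businger-Dyer correction with the Högström parameters quoted above and the resulting stability-corrected wind profile. The numerical values in the example (friction velocity, roughness length, Obukhov lengths) are illustrative only and are not taken from the observations discussed later.

```java
/** Illustrative implementation of the stability-corrected log-law (Eq. 2)
 *  with the Businger-Dyer corrections (Eq. 4) and Högström parameters. */
public class StabilityCorrectedProfile {

    static final double KAPPA = 0.4;   // von Karman constant
    static final double BETA  = 6.0;   // stable-side parameter (Högström)
    static final double GAMMA = 19.3;  // unstable-side parameter (Högström)

    /** Businger-Dyer stability correction for momentum. */
    static double psiBD(double zOverL) {
        if (zOverL >= 0.0) {                                  // stable / neutral branch
            return -BETA * zOverL;
        }
        double x = Math.pow(1.0 - GAMMA * zOverL, 0.25);      // unstable branch
        return 2.0 * Math.log((1.0 + x) / 2.0)
             + Math.log((1.0 + x * x) / 2.0)
             - 2.0 * Math.atan(x) + Math.PI / 2.0;
    }

    /** Wind speed at height z for a given friction velocity, roughness length and Obukhov length. */
    static double windSpeed(double z, double uStar, double z0, double L) {
        return uStar / KAPPA * (Math.log(z / z0) - psiBD(z / L));
    }

    public static void main(String[] args) {
        double uStar = 0.3, z0 = 2e-4;                        // offshore-like, illustrative values
        for (double L : new double[] {-100.0, 1e6, 100.0}) {  // unstable, near-neutral, stable
            System.out.printf("L=%8.0f  u(21m)=%.2f  u(116m)=%.2f%n",
                    L, windSpeed(21, uStar, z0, L), windSpeed(116, uStar, z0, L));
        }
    }
}
```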
Obtaining L from observations
One can use various methods to estimate L from regular observations [13]. If high temporal resolution observation data is available, one can calculate L directly based on the eddy-covariance method and the observed turbulent fluxes of momentum and heat. In absence of this data however, one must rely on empirical methods to estimate L. In general these empirical methods depend either on the Richardson number (RI-methods), or one estimates L iteratively from wind and temperature profiles (Profile-methods). The Richardson number is given by

Ri = (g/θ_v) (∂θ_v/∂z) / (∂u/∂z)²    (8)

One can come up with a gradient Richardson number by considering wind speed and temperature measurements at two heights in the atmosphere (thus considering the gradient of wind speed and temperature within the atmosphere). If one considers the surface conditions (with u(z_0) = 0 m s -1) in combination with wind speed and temperature observations of the atmosphere at one height, one comes up with a bulk Richardson number (thus considering the atmosphere as one bulk layer). Subsequently we define a gradient-Richardson method (RI-Grad) and a bulk-Richardson method (RI-Bulk) that can both be used to estimate L. The general form of both methods is similar, but parameter values differ. Both methods relate the stability parameter z/L to the Richardson number as

z/L = α Ri (Ri < 0)    (9)
z/L = α Ri / (1 − β Ri) (Ri ≥ 0)    (10)

where the coefficients α and β take different values for the RI-Grad and RI-Bulk methods [14]. For the profile methods it is assumed that the stability corrected log-profiles are valid (thus observations must be carried out in the atmospheric surface layer). The following set of equations can iteratively be solved in combination with equation 3

u_* = κ [u(z_2) − u(z_1)] / [ln(z_2/z_1) − Ψ_m(z_2/L) + Ψ_m(z_1/L)]
θ_* = κ [θ_v(z_2) − θ_v(z_1)] / [ln(z_2/z_1) − Ψ_h(z_2/L) + Ψ_h(z_1/L)]    (11)

where z_2 > z_1 and z_1 equals z_0 if the sea surface is considered as lowest observation height. The Ψ_h-functions for the temperature profile are not equal to those used for the wind profile; one can find the Ψ-functions for temperature in literature [15,16]. For the calculation of u_* we consider the BD-correction functions, which as discussed might be inappropriate for very stable/unstable conditions. Just as for the Richardson-methods, we define two profile-methods: one considering observations of surface conditions (Profile-Sea method) and one considering observations of wind speed and temperature at two heights in the atmosphere (Profile-Air method). Both the Richardson and Profile methods depend on Monin-Obukhov similarity theory, which is strictly speaking only valid for stationary conditions. We adopt a similar filtering procedure as [13], and only consider situations where the wind speed, wind direction, sea temperature and air temperature do not change significantly in time.
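The iterative character of the profile methods can be illustrated with the following Java sketch, mirroring the Profile-Air variant: it alternates between evaluating equation (11) for u_* and θ_* and updating L via equation (3). For brevity the same Businger-Dyer correction is used here for both momentum and heat, which is a simplification of the procedure described above, and the observation values in the example are made up.

```java
/** Illustrative iterative estimate of the Obukhov length from wind-speed and
 *  virtual potential temperature observations at two heights (profile method).
 *  Simplification: the same Businger-Dyer correction is used for momentum and heat. */
public class ObukhovProfileMethod {

    static final double KAPPA = 0.4, G = 9.81, BETA = 6.0, GAMMA = 19.3;

    static double psi(double zOverL) {
        if (zOverL >= 0.0) return -BETA * zOverL;
        double x = Math.pow(1.0 - GAMMA * zOverL, 0.25);
        return 2.0 * Math.log((1.0 + x) / 2.0) + Math.log((1.0 + x * x) / 2.0)
             - 2.0 * Math.atan(x) + Math.PI / 2.0;
    }

    /** z1 < z2 in metres; u in m/s and thetaV in K are the observations at those heights. */
    static double estimateL(double z1, double z2,
                            double u1, double u2,
                            double thetaV1, double thetaV2) {
        double L = 1e6;                                       // start from near-neutral
        for (int it = 0; it < 100; it++) {
            double denom = Math.log(z2 / z1) - psi(z2 / L) + psi(z1 / L);
            if (denom <= 0.0) return Double.NaN;              // no similarity solution
            double uStar     = KAPPA * (u2 - u1) / denom;     // Eq. (11), momentum
            double thetaStar = KAPPA * (thetaV2 - thetaV1) / denom;  // Eq. (11), heat
            double thetaMean = 0.5 * (thetaV1 + thetaV2);
            if (Math.abs(thetaStar) < 1e-9) return 1e6;       // effectively neutral
            double newL = uStar * uStar * thetaMean / (KAPPA * G * thetaStar);  // Eq. (3)
            if (Math.abs(newL - L) < 0.01 * Math.abs(L)) return newL;
            L = newL;
        }
        return L;
    }

    public static void main(String[] args) {
        // Example: weakly stable stratification between 21 m and 70 m (made-up values).
        System.out.println(estimateL(21, 70, 8.0, 9.5, 288.0, 288.1));
    }
}
```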
Observation data analyses
Both the accuracy of the various Ψ-functions and the differences found when using the various methods to estimate L are analysed based on observation data of the OWEZ wind farm. At the OWEZ wind-farm a meteorological observation mast is present where wind and temperature observations are carried out at 3 heights (21, 70 and 116m), as well as observation of water temperature, waves and currents. Due to the presence of the wind farm North-East of the meteorological mast, only a limited amount of undisturbed observation data is available. In this study 10 months of data (from July-2005 until May-2006) is included in the analyses. For a detailed description of the meteorological mast and the sensors used, the reader is referred to the website of the OWEZ wind farm (www.noordzeewind.nl).
All temperature sensors have an accuracy of 0.1 • C, and wind sensors have an accuracy of at least 95%. The methodologies assessed to determine stability are most sensitive to measurement errors when ∆u or ∆θ v is small. Since both wind speed and temperature gradients between 21m and 70m height are far smaller than those between 0 and 21m height, the RI-Grad and Profile-Air methods are most sensitive to measurement errors. This is especially true for near neutral conditions (∆θ v ≈ 0) or very unstable conditions (∆u ≈ 0). Besides, both profile methods assume validity of the logarithmic wind speed and temperature profiles, either up to 21m height or up to 70m height. These logarithmic profiles are valid in the lowest 10% of the boundary layer, and for very stable conditions the observation heights (especially at 70m height) are likely no longer located within the surface layer. As such the accuracy of both profile methods decreases for increasing atmospheric stability. Since the bulk-Richardson method is least sensitive to measurement errors, and does not depend on the assumption of validity of the logarithmic wind and temperature profiles, we consider the Richardson-bulk method as reference methodology.
It is noted in [13] that the sea temperature observations have a -0.82 • C offset compared to ECMWF Re-analysis data, hence we correct all sea temperature observations by subtracting 0.82 • C.
Calculating wind turbine fatigue loads
The impact of choosing specific stability correction functions and choosing a method to estimate stability from observations on the design of wind turbines is analysed by looking at lifetime fatigue loads of a wind turbine. For these analyses the software package Bladed is used, and the 5MW NREL wind turbine is used as a reference wind turbine since it is widely used in literature for similar studies. Notice that the 5MW NREL wind turbine has a hub height of 90 m while the OWEZ metmast does not have observations at this height. It is therefore decided to use an iterative method similar to the profile methods to estimate the 90m wind speed based on the observed wind speed (and temperature) at 70m and 116m height, and subsequently fit a Weibull distribution to the calculated wind speed data. Since only wind shear is considered in this study, a steady state situation is imposed in Bladed, thereby neglecting fatigue loads caused by turbulence. Despite being less realistic, this enables us to focus on the impact that stability has on the loads caused by wind shear only. The fatigue load analyses are carried out for the blade root, since here maximum bending moments due to cyclic loadings occur. The bending moments calculated with Bladed for a given wind profile and hub height wind speed are converted to stresses according to

σ = M y / I    (12)

Here σ is the stress at the blade root, M is the bending moment calculated with Bladed, y is the distance at which the moment acts and I is the area moment of inertia. The blade root is simulated as a thin cylinder with an inner radius of 3.42 m and an outer radius of 3.5 m (hence y = 3.5 m and I = 0.65 m^4). These stresses are plotted against time, and from the (nearly) sinusoidal pattern the stress amplitude S is calculated. One generally converts load cycles to actual fatigue damage with a SN-curve, but since there is no data available for this test case the Damage Equivalent Load (D_EQ) is introduced [17].
For the analyses it is assumed that m = 12 (m is the Wöhler exponent), N_EQ = 10^7 and N(u) is the lifetime number of cycles for a given wind speed,

D_EQ = ( Σ_i n_i S_i^m / N_EQ )^(1/m)    (13)

We assume a wind turbine lifetime of 20 years, and N(u) is calculated with the known RPM of the turbine,

N(u) = RPM(u) · T_life    (14)

with T_life the assumed 20-year lifetime expressed in minutes. By defining several stability classes (here ranging from very unstable to very stable) and assuming the cut-in and cut-out wind speeds are respectively 4 and 25 m s -1, the cumulative lifetime fatigue load D_EQ,C is calculated following

D_EQ,C = ( Σ_u Σ_L P(u) P(L|u) N(u) S(u, L)^m / N_EQ )^(1/m)    (15)

Here P(L|u) equals the chance that for a given hub height wind speed u the given stability class L occurs, and P(u) is the chance that a given hub height wind speed occurs. The boundaries of the stability classes are found in table 1, where also the specific Obukhov length for each class is given that is used to create wind shear profiles. This classification is based on [18], though here only five classes are included for computational efficiency. Furthermore, extremely stable and unstable situations are also included here that were neglected in the original classification.

Figure 1. Relative occurrence of stability classes as a function of wind speed according to the various methods used.
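The bookkeeping implied by equations (13)-(15) can be sketched in Java as follows; the probability tables, stress amplitudes and rotor speeds in the example are placeholders rather than values from this study.

```java
/** Illustrative aggregation of lifetime damage-equivalent loads over wind-speed
 *  bins and stability classes (cf. Eqs. 13-15). All numbers are placeholders. */
public class LifetimeDEL {

    static final double M = 12.0;        // Woehler exponent
    static final double N_EQ = 1e7;      // equivalent number of cycles
    static final double LIFE_MIN = 20.0 * 365.25 * 24.0 * 60.0;  // 20-year lifetime in minutes

    /** Lifetime number of rotor revolutions spent at a given wind speed (Eq. 14, weighted by P(u)). */
    static double lifetimeCycles(double rpm, double pWind) {
        return rpm * LIFE_MIN * pWind;
    }

    /**
     * stress[i][j]: steady-state stress amplitude (Pa) for wind-speed bin i and stability
     * class j; pWind[i]: P(u); pStab[i][j]: P(L|u); rpm[i]: rotor speed in that bin.
     */
    static double cumulativeDEL(double[][] stress, double[] pWind,
                                double[][] pStab, double[] rpm) {
        double damage = 0.0;
        for (int i = 0; i < pWind.length; i++) {
            for (int j = 0; j < pStab[i].length; j++) {
                double n = lifetimeCycles(rpm[i], pWind[i]) * pStab[i][j];
                damage += n * Math.pow(stress[i][j], M);       // n_i * S_i^m
            }
        }
        return Math.pow(damage / N_EQ, 1.0 / M);               // Eq. (15)
    }

    public static void main(String[] args) {
        // Two wind-speed bins x two stability classes, placeholder numbers only.
        double[][] stress = {{2.0e6, 3.0e6}, {4.0e6, 6.0e6}};
        double[]   pWind  = {0.6, 0.4};
        double[][] pStab  = {{0.7, 0.3}, {0.4, 0.6}};
        double[]   rpm    = {9.0, 12.0};
        System.out.printf("Lifetime D_EQ = %.3e Pa%n",
                cumulativeDEL(stress, pWind, pStab, rpm));
    }
}
```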
Stability and wind shear
The sensitivity of the occurrence of stability classes to the specified methods to determine L is analysed first. The frequency of occurrence of stability as a function of wind speed is plotted in figure 1. It is clear that the frequency of occurrence of the five stability classes differs when using various calculation methods. In general it is found that for weak wind speeds (below 10 m s −1 ) very unstable conditions prevail, while for strong wind speeds (above 15 m s −1 ) neutral conditions start to occur more frequently. For moderate wind speeds however there is a significant difference in the occurrence of stability classes for the various methodologies considered here. Those methodologies considering surface temperature find prevailing (very) unstable conditions for wind speeds between 10 and 15 m s −1 , while both other methods find far more (very) stable conditions. The general stability distributions are strikingly similar between the two methods that consider surface observations, and likewise between the two methods that consider only atmospheric observations. Especially in terms of (very) stable conditions, those methods considering surface observations find a gradual increase in stable conditions for increasing wind speeds. In contrast, those methods considering only atmospheric observations find a majority of stable conditions for wind speeds between 10 and 20 m s −1 . It is generally thought that for strong wind speeds the atmosphere becomes neutrally stratified. Although we do find an increase in neutral conditions for very strong wind speeds, notice that there is still a significant number of non-neutral observations for wind speeds above 20 m s −1 . Notice also that when using the RI-Bulk and Profile-Sea methods, one finds no stability classification for wind speeds above 22 m s −1 , in contrast to the RI-Grad and Profile-Air methods. The reason is that for the few events where such strong wind speeds were observed, sea temperature observations were not available and stability could not be calculated.
The impact of using specific stability correction functions for the stability corrected log-law can be seen in figure 2. We consider here stability as calculated by the RI-Bulk method on the x-axes. The assumption frequently used in wind energy that wind shear is independent of stability, assuming a power law (here with a power of 0.2) or neutral logarithmic shear profile, is clearly invalid. For unstable conditions up to 100/L = -1 the stability corrected log-profile performs well, no matter which correction function is used. Differences between both correction functions are small, and the neutral log law and power law both perform worse and overestimate shear. For neutral conditions a lot of scatter is found in the observations and the logarithmic shear profile tends to underestimate wind shear, while the power law overestimates shear. A potential cause of the increase in wind shear compared to the theoretical profiles might be the occurrence of internal boundary layers, as discussed in [2]. For stable conditions scatter increases, though the stability corrected logarithmic wind profile tends to perform reasonably well up to 100/L = 2 if one considers the stability correction of Holtslag. Clearly the BD-functions cause an overestimation of wind shear for very stable situations, which has also been found in other studies. Both other stability correction functions perform better, and the formulation of Holtslag performs slightly better than the formulation of Brutsaert.
The power law typically overestimates wind shear for 100/L < 1, and underestimates shear for 100/L > 1.
Equivalent load analyses
Multiple simulations with Bladed are performed with various shear models and stability distributions, and calculated equivalent loads are visualised in figure 3. As a reference we choose the stability corrected log law with BD-functions and the stability distribution obtained with the RI-Bulk method (third bar in figure 3). The remaining bars show the equivalent loads calculated when changing either the shear profile (bars 1 and 2), the Ψ-functions (bars 4, 5 and 6) or the stability distribution (bars 7, 8 and 9) from the reference case. One can see here that using the power law or neutral logarithmic wind profile results in significant differences compared to using the stability corrected logarithmic wind profile.

Figure 3. Relative lifetime equivalent loads for various methods (see x-axes) normalised with the equivalent loads calculated when using the stability corrected log law with BD-functions and the stability distribution obtained with the RI-Bulk method (third bar).

When using the power law, we find lifetime fatigue
loads nearly twice as high compared to considering atmospheric stability, while using the neutral log law results in an underestimation of the fatigue loads by 17%. When changing the stability correction function for unstable conditions little differences are found (1%), hence changing this correction is not significant when calculating lifetime equivalent loads. In contrary, applying the different correction functions for stable conditions leads to a reduction in calculated lifetime equivalent loads of 8% (Brutsaert) or 15% (Holtslag) compared to using the original Businger-Dyer correction functions for stable conditions. Although these differences are significant, the impact of using specific methodologies to estimate stability is even more profound. Using the stability distributions obtained from the RI-Grad and Profile-Air methods result in a significant increase in the expected lifetime fatigue loads (respectively 30% and 70% increase). This is primarily caused by the fact that these methods estimate more frequently stable conditions, which in turn results in high shear and high fatigue loads. Using the stability distribution obtained from the profile-sea method results in a slight decrease in fatigue loads since there are slightly less (very) stable conditions for moderate wind speeds. As a final step, the equivalent loads as a function of wind speed are analysed when using specific shear models, stability correction functions and stability distributions (figure 4). Here again the equivalent loads are normalised similar as was done in figure 3. Results indicate that the simulated equivalent loads are most sensitive to the shear model used and the methodology used to estimate stability. One can also see here that for wind speeds above 11 m s −1 the calculated equivalent loads decrease for all simulations due to pitching of the blades.
Since the power law typically overestimates shear, it is not surprising that equivalent loads are also highest for the large majority of wind speeds considered here. The neutral log law underestimates shear, but the resulting fatigue loads do not differ much compared to using the stability corrected logarithmic wind profile, at least for wind speeds up to 13 m s −1 . This can be explained by the fact that for these wind speeds the atmosphere is primarily (very) unstable or neutrally stratified, and wind shear decreases slightly when the atmosphere changes from neutral to unstable conditions.

Figure 4. Sensitivity of fatigue loads to shear profiles, Ψ-functions and stability distributions for a given hub height wind speed.

For higher wind speeds the atmosphere becomes more often
stable stratified, and as such we find that using the neutral logarithmic wind profile results in a decrease in the simulated fatigue loads. The simulated fatigue loads are very sensitive to the methodology used to obtain a stability distribution. One can clearly see here that fatigue loads calculated when using stability distribution obtained with the RI-Bulk and Profile-Sea methods correlate quite well, though the loads calculated with the RI-Bulk method are typically slightly higher. Despite the small differences shown here, this accounts for a 10% difference on the lifetime fatigue loads as shown in figure 3. Using the RI-Grad or profile-air methods results in a significant increase in fatigue loads due to the increase in stable conditions, primarily found for wind speeds between 10 and 16 m s −1 .
When looking at the impact of using the various stability correction functions, little difference is found for unstable conditions. For stable conditions, the fatigue loads decrease in the same way as wind shear decreases when applying either the Brutsaert (small decrease in wind shear) or Holtslag (large decrease in wind shear) stability correction functions. These differences are, however, relatively small compared to the impact of using various shear models or various methodologies to estimate stability, as was also shown in figure 3.
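For reference, the stability corrected logarithmic wind profile and the two stable-side corrections compared above can be written down compactly. The sketch below uses commonly cited forms of the Businger-Dyer and Holtslag-type Ψ-functions; the coefficients and the example surface-layer parameters are assumptions for illustration and may differ from the exact formulations used in this study.

    import numpy as np

    def psi_m_businger_dyer(zeta):
        # Commonly cited Businger-Dyer momentum correction: linear for stable,
        # quarter-power form for unstable conditions.
        zeta = np.atleast_1d(np.asarray(zeta, dtype=float))
        psi = np.empty_like(zeta)
        stable = zeta >= 0
        psi[stable] = -5.0 * zeta[stable]
        x = (1.0 - 16.0 * zeta[~stable]) ** 0.25
        psi[~stable] = (2.0 * np.log((1.0 + x) / 2.0) + np.log((1.0 + x ** 2) / 2.0)
                        - 2.0 * np.arctan(x) + np.pi / 2.0)
        return psi

    def psi_m_holtslag(zeta, a=0.7, b=0.75, c=5.0, d=0.35):
        # Holtslag-type correction for stable conditions (zeta >= 0).
        zeta = np.asarray(zeta, dtype=float)
        return -(a * zeta + b * (zeta - c / d) * np.exp(-d * zeta) + b * c / d)

    def wind_speed(z, u_star, z0, L, psi_m, kappa=0.4):
        # Stability corrected logarithmic wind profile U(z).
        return u_star / kappa * (np.log(z / z0) - psi_m(z / L))

    # Stable example (Obukhov length L = 200 m): the Holtslag form predicts less
    # shear across the rotor than Businger-Dyer, hence lower simulated fatigue loads.
    z = np.array([40.0, 120.0])  # illustrative lower/upper rotor heights in m
    for name, psi in (("Businger-Dyer", psi_m_businger_dyer), ("Holtslag", psi_m_holtslag)):
        u = wind_speed(z, u_star=0.4, z0=0.03, L=200.0, psi_m=psi)
        print(name, "speed difference over rotor:", round(float(u[1] - u[0]), 2), "m/s")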
Conclusions
In this study we assessed the accuracy of shear profiles, and the sensitivity of fatigue load analyses to specific shear models, stability corrections and stability distributions. It is found that wind shear depends strongly on atmospheric stability, especially for stable conditions. The shear profiles that do not consider stability (power law and neutral log law) deviate strongly from the observed stability dependence. For unstable conditions the exact formulation of the stability correction is not significant. For stable conditions, however, results do differ and the stability correction of Holtslag tends to perform best. The distribution of stability is very sensitive to the method used to determine stability. In particular, the choice of whether to incorporate surface observations has a significant impact on the stability distribution. These meteorological results have a distinct impact on the calculated fatigue loads for the simple reason that when shear increases, fatigue loads increase as well. The equivalent loads are most sensitive to the shear profile used (differences up to 88% in lifetime fatigue loads) and to the methodology used to obtain the PDF of stability (differences up to 71% in lifetime fatigue loads). The sensitivity to the exact stability correction is less pronounced, but for stable conditions we find that the Businger-Dyer functions that are typically used overestimate fatigue loads by 15%.
Based on these results it is proposed to consider the stability corrected logarithmic wind profile as a shear model in combination with the "Free Convection" and "Holtslag" stability correction functions. In addition, it is suggested to consider the Bulk-Richardson method to estimate stability from regular observations since this method is least sensitive to observation errors. | 2019-04-13T13:08:03.268Z | 2014-12-31T00:00:00.000 | {
"year": 2014,
"sha1": "d17a50473b4f1d8975aa9a80b8828eeffe53db31",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/555/1/012052",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c3e3776330379c780f394c3dbed031ead43668a9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
268404557 | pes2o/s2orc | v3-fos-license | An Engineered Laccase from Fomitiporia mediterranea Accelerates Lignocellulose Degradation
Laccases from white-rot fungi catalyze lignin depolymerization, a critical first step to upgrading lignin to valuable biodiesel fuels and chemicals. In this study, a wildtype laccase from the basidiomycete Fomitiporia mediterranea (Fom_lac) and a variant engineered to have a carbohydrate-binding module (Fom_CBM) were studied for their ability to catalyze cleavage of β-O-4′ ether and C–C bonds in phenolic and non-phenolic lignin dimers using a nanostructure-initiator mass spectrometry-based assay. Fom_lac and Fom_CBM catalyze β-O-4′ ether and C–C bond breaking, with higher activity under acidic conditions (pH < 6). The potential of Fom_lac and Fom_CBM to enhance saccharification yields from untreated and ionic liquid pretreated pine was also investigated. Adding Fom_CBM to mixtures of cellulases and hemicellulases improved sugar yields by 140% on untreated pine and 32% on cholinium lysinate pretreated pine when compared to the inclusion of Fom_lac to the same mixtures. Adding either Fom_lac or Fom_CBM to mixtures of cellulases and hemicellulases effectively accelerates enzymatic hydrolysis, demonstrating its potential applications for lignocellulose valorization. We postulate that additional increases in sugar yields for the Fom_CBM enzyme mixtures were due to Fom_CBM being brought more proximal to lignin through binding to either cellulose or lignin itself.
Introduction
Lignin is a complex organic polymer that plays a crucial role in the structure of plant cell walls. It is one of the three main components of plant cell walls, together with cellulose and hemicellulose. While cellulose provides strength and rigidity to the cell wall and hemicellulose contributes to its flexibility, lignin acts as a binding agent, providing additional support and resistance to decay [1]. Lignin has great potential in the biofuel industry, but challenges remain, such as developing cost-effective and scalable processes for lignin depolymerization and conversion to valuable products. Ongoing research and advancements in biotechnology and chemical engineering are critical for unlocking lignin's full potential in producing sustainable biofuels [2].
Lignin degradation in nature is primarily carried out by various microorganisms, including fungi and bacteria, which produce a variety of enzymes to break down the complex structure of lignin. The three key ligninolytic enzymes involved in lignin degradation are laccases, lignin peroxidases (LiP), and manganese peroxidases (MnP), and the process is often referred to as ligninolysis. These enzymes work together in a coordinated manner to depolymerize lignin into smaller fragments that microorganisms can further metabolize for energy and carbon. It is important to note that the specific enzymes and mechanisms involved can vary among different microorganisms, and some species may produce a combination of these enzymes to degrade lignin efficiently. Studying these natural lignin-degrading enzyme systems is critical to gaining insights into how they can be harnessed for industrial applications, such as biofuel production and bioremediation. The complex interactions among the agents in secretomes can lead to difficulties in elucidating the mechanisms of lignin-degrading enzymes and make it particularly difficult to compare enzymes from either the same enzyme family or the same fungus. Therefore, instead of working with mixed secretomes of ligninolytic enzymes, heterologous expression of individual genes, purification of the resulting enzymes, and quantification of bond-breaking events is a valuable approach to studying and comparing the structure-function relationships of these essential enzymes and to building potent enzyme mixtures for efficient lignin depolymerization.
Laccases (EC 1.10.3.2) are copper-containing enzymes capable of oxidizing electron-rich organic and inorganic substrates using molecular oxygen as an electron acceptor and are found in plants, fungi, and bacteria [3].In fungi, they play critical roles in several physical functions, such as morphogenesis, fungal plant pathogen/host interaction, stress defense, and lignin degradation.Due to their ability to oxidize many substrates, laccases have applications in various industries, including pulp and paper processing, textile dyeing, and wastewater treatment.Additionally, laccases have been used in biotechnological processes, such as the modification of lignin for biofuel production and the degradation of environmental pollutants.The versatility of laccases makes them valuable tools in both natural ecosystems and industrial applications.Fomitiporia mediterranea is a polypore fungal species that grows on olive trees [4] and has been associated with esca in grapevines and their roots [5,6].Functionally, F. mediterranea is a white rot-causing basidiomycete that secretes both cellulolytic enzymes that catalyze depolymerization of cellulose and hemicellulose and ligninolytic enzymes, including laccases and peroxidases, which catalyze depolymerization of lignin.It is reported that incubation of purified laccase from the secretomes of F. mediterranea with natural phenolic and polyphenolic compounds resulted in the oxidation of both compounds [6].To date, studies aimed at the recombinant expression of laccases from F. mediterranea, and subsequent characterization of the ability of the purified laccase to catalyze the breaking of bonds commonly found in lignin and their optimal reaction conditions, have not been reported, suggesting further research is needed to explore the potential of F. mediterranea laccase in processes requiring depolymerization of lignin and polysaccharides.
To this end, the primary aim of this work was to characterize the catalytic performance of a heterologously expressed (Komagataella pastoris) and purified laccase from the white rot-causing basidiomycete F. mediterranea (Fom_lac) by quantifying β-O-4 ′ ether, C α -C β , and C α -C 1 bond cleavage products and by-products produced from C α -oxidation and polymerization of reaction products using a nanostructure-initiator mass spectrometry (NIMS) assay with phenolic and non-phenolic lignin-like model compounds [7]. Various factors influence lignin degradation, and pH is one of the critical parameters that can significantly affect this process. Indeed, numerous studies have explored the effect of pH on laccase-catalyzed lignin degradation [8,9]. Both the effect of pH and the presence of the reaction mediator 1-hydroxybenzotriazole (HBT) were investigated. A secondary aim was to evaluate the utility of Fom_lac and an engineered variant of Fom_lac to which a carbohydrate-binding module (CBM) was fused to its C-terminus (Fom_CBM) to enhance saccharification yields from dry milled and cholinium lysinate ([Ch][Lys]) pretreated pine (Pinus radiata) by measuring glucose and xylose yields. Pine was chosen for these experiments because it is a highly recalcitrant feedstock with a high lignin content, making it an ideal substrate for studies aimed at exploring the utility, in terms of increased monosaccharide yields, of adding lignin-modifying enzymes such as laccases to the cellulolytic enzyme mixtures. Pretreatment with ionic liquid, particularly [Ch][Lys], was chosen because [Ch][Lys] pretreatment produces a less recalcitrant biomass [10]. This study will investigate whether the addition of laccases to hydrolytic enzyme mixtures can further improve sugar yields over what was achieved by a very efficient pretreatment solvent.
Cloning and Expression of Laccase Variants
The laccase gene from the genome of F. mediterranea (Fom_lac) was identified in the JGI-MycoCosm database (Protein ID: 127515, https://mycocosm.jgi.doe.gov/Fomme1/Fomme1.home.html, accessed on 31 May 2020) as a recently added entry [11]. The CBM from the Trichoderma reesei exoglucanase 1 (UniProt protein ID: P62694) was selected from a pool of eleven candidate CBMs, as described in the Simulation Methods section, and fused to the C-terminus of Fom_lac. Expression and purification of both Fom_lac and Fom_CBM followed methods reported in a previous publication [8]. Briefly, codon-optimized genes (Genscript Co., Piscataway, NJ, USA) with XhoI and EcoRI restriction sites were cloned into the pPICZαA vector from Invitrogen™ (Life Technologies, Carlsbad, CA, USA), linearized with PmeI and SacI, and transformed into the K. pastoris X33 strain, as described in the Pichia Expression Kit User Manual (Life Technologies, USA). Transformants were grown at 30 °C on yeast extract peptone dextrose medium with sorbitol (YPDS) agar plates containing 100 µg/mL Zeocin™ antibiotic (Thermo Fisher Scientific, Waltham, MA, USA). Clones from positive colonies grown on YPDS agar plates containing 1000 µg/mL Zeocin™ were selected and grown in buffered complex methanol medium (BMMY) overnight at 30 °C at 200 rpm. The K. pastoris was grown to an OD 600 of 0.6 in 250 mL flasks with 50 mL BMMY. Cultures were shaken at 30 °C and 200 rpm for 3-5 days, fed daily with a 1% (v/v) methanol solution and 1 mM CuSO 4 , and stopped when the laccase activity, measured as oxidation of ABTS substrate, reached saturation levels.
Purification of Recombinant Fom_lac Enzyme
Enzyme purification followed procedures described previously [8].Briefly, culture supernatants were centrifuged (9000 rpm for 15 min), clarified with 0.2 µm membrane filtration, concentrated 15X using Amicon ® Ultra-50-kDa Centrifugal Filter Units (MilliporeSigma, Burlington, MA, USA), dialyzed overnight through a 10-kDa membrane against 100 mM sodium acetate (pH 3.0) at 4 • C, and again dialyzed overnight against 10 mM sodium acetate (pH 6.0) at 4 • C. The precipitate was removed by centrifugation at 13,000 rpm for 15 min and filtered through 0.2 µm membrane filtration, which was then loaded onto the Hitrap-Q XL (5 mL) column with an AKTA FPLC (G.E.Healthcare, Chalfont Saint Giles, UK) purification column.Active fractions were pooled using stepwise gradients of buffer A (10 mM sodium acetate, pH 6.0) and buffer B (500 mM sodium acetate, pH 6.0), and fractions with the highest activity towards ABTS oxidation were collected.
Enzyme Reactions with the Fluorous-Tagged Phenolic/Nonphenolic β-O-4 Linked Model Compound
The phenolic and non-phenolic β-O-4 aryl ether lignin-like model compounds, NIMS-tagged guaiacylglycerol-beta-guaiacyl ether (GGE) and NIMS-tagged veratrylglycerol-beta-guaiacyl ether (VGE), respectively, were synthesized according to a previously established protocol [7]. Enzyme reactions with the NIMS-tagged GGE (1 mM) were performed at pH 2.0-10.0 and in the absence and presence of 20 mM of 1-hydroxybenzotriazole (HBT) as a mediator. The reaction was stopped after 3 h, and analysis of reaction products was performed using nanostructure-initiator mass spectrometry (NIMS) as previously described [7]. As a negative control, substrate solution in the absence of laccase was treated in the same way.
Biomass Feedstock Preparation
Dry-milled pine (Pinus radiata) and 80% [Ch][Lys]-pretreated pine were used as feedstocks for enzymatic saccharification reactions used to study synergy among laccases, cellulases, and hemicellulases.Dry-milled pine was separated into fractions using a sieve with an aperture size of 2 mm.The [Ch][Lys] pretreated pine was generated as reported previously with slight modifications [12].Briefly, pine was pretreated in a 2:8 ratio by weight of pine to the ionic liquid [Ch][Lys] in a 1 L Parr 4520 series benchtop reactor (Parr Instrument Company, model 4871, Moline, IL, USA) for 3 h at 140 • C with stirring at 80 rpm using a three-arm self-centering anchor with Polytetrafluoroethylene (PTFE) wiper blades.The process was controlled and monitored using the Parr 'Instruments' model 4871 process controller and a model 4875 power controller.After 3 h, the pretreated slurry was cooled down to room temperature by removing the heating jacket.Pretreated pine was washed with DI water until the pH of the washing water was neutral and it was finally freeze-dried to obtain a free-flowing solid residue.
Enzymatic Saccharification Reactions
A 9:1 (v/v) mixture of cellulase (Cellic® Ctec3, 1853 BHU-2-HS g−1, 1.212 g mL−1) (Novozymes North America, Franklinton, NC, USA) and hemicellulase (Htec3 NS 22244, 1760 FXU g−1, 1.210 g mL−1) (Novozymes) was used for all saccharification reactions. Reactions were carried out using a 2.5% biomass loading and an enzyme dose of 10 mg protein per 1 g biomass, and they were supplemented with 0.02% sodium azide to prevent microbial contamination [10]. The synergistic effect of laccases with the CTec3/HTec3 enzyme mixture was studied by adding 5 µM Fom_lac or Fom_CBM and 5 mM HBT for reactions with the mediator. Reactions were run for 72 h at pH 5.5 and 60 °C, and hydrolysates were separated from the residual solids by centrifugation followed by filtration through 0.2 µm sterile filter units.
Glucose and xylose yields were quantified by HPLC using an Agilent HPLC 1260 Infinity system (Agilent Technologies, Santa Clara, CA, USA) with an Aminex™ HPX-87H column (Bio-Rad, Hercules, CA, USA) and a refractive index detector. The column was eluted using a 4 mM sulfuric acid solution at a 0.6 mL/min flow rate and a column temperature of 60 °C. Standards for quantification were obtained from Sigma-Aldrich (St. Louis, MO, USA). As a negative control, biomass in the absence of enzymes was treated in the same way.
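As a simple illustration of how percent yields can be derived from such HPLC data, the sketch below converts measured sugar concentrations into percent-of-theoretical yields. The glucan and xylan mass fractions, the anhydro correction factors and the example concentrations are assumed values for illustration only; the compositional data actually used for the pine feedstocks are not given in this excerpt.

    def percent_yield(sugar_g_per_l, volume_l, biomass_g, polysaccharide_fraction, anhydro_factor):
        # Percent of theoretical monomer yield: released sugar relative to the
        # sugar potentially available in the loaded biomass.
        released = sugar_g_per_l * volume_l
        theoretical = biomass_g * polysaccharide_fraction / anhydro_factor
        return 100.0 * released / theoretical

    # Example: 10 mL reaction at 2.5% (w/v) biomass loading -> 0.25 g biomass.
    biomass_g, volume_l = 0.25, 0.010
    glucose_yield = percent_yield(7.0, volume_l, biomass_g, polysaccharide_fraction=0.40, anhydro_factor=0.90)
    xylose_yield = percent_yield(1.2, volume_l, biomass_g, polysaccharide_fraction=0.07, anhydro_factor=0.88)
    print(f"glucose: {glucose_yield:.0f}% of theoretical, xylose: {xylose_yield:.0f}% of theoretical")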
Simulation Methods
Gibbs free energy of oxidation of the lignin dimers was calculated, and AIMD simulations of the cationic radical intermediate and bond dissociation energies for all bond types in lignin dimers were performed as previously described [13]. Calculations were performed in the Gaussian 09 software package [14] using unrestricted density functional theory (DFT) with the Becke three-parameter Lee-Yang-Parr hybrid exchange-correlation functional (B3LYP) [15,16], the 6-311G** basis set [17,18], and the implicit solvation model based on density (SMD) [19].
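The reaction free energies reported from these calculations come from simple differences between the computed free energies of products and reactants. The sketch below shows that bookkeeping step in Python; the hartree values are placeholder numbers, not outputs of the calculations described here.

    HARTREE_TO_KCAL = 627.5094740631

    def gibbs_free_energy(e_electronic_hartree, g_thermal_correction_hartree):
        # G = electronic (SCF, SMD) energy + thermal correction to Gibbs free energy,
        # both in hartree, as printed by a frequency calculation.
        return e_electronic_hartree + g_thermal_correction_hartree

    def reaction_delta_g(products, reactants):
        # Delta G (kcal/mol) = sum(G products) - sum(G reactants).
        dg_hartree = sum(products) - sum(reactants)
        return dg_hartree * HARTREE_TO_KCAL

    # Placeholder values (hartree) for a radical-intermediate bond cleavage step.
    g_intermediate = gibbs_free_energy(-1072.3041, 0.2497)
    g_fragment_a = gibbs_free_energy(-536.1402, 0.1268)
    g_fragment_b = gibbs_free_energy(-536.1120, 0.1180)
    print(f"dG = {reaction_delta_g([g_fragment_a, g_fragment_b], [g_intermediate]):.1f} kcal/mol")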
The TeraChem quantum chemistry package (Petachem LLC, Los Altos, CA, USA) was used to perform ab initio molecular dynamics (AIMD) simulations [20][21][22][23]. The ab initio calculations in these simulations were performed using unrestricted density functional theory (DFT) with the long-range corrected ωPBEh exchange-correlation functional [24] and the 6-31G basis set. A bond was determined to be broken when the separation distance between two atoms exceeded the bond length of the corresponding bond in the initial structure, e.g., breaking of the C α -OH bond, C α -C β bond, β-O-4 ′ ether bond, and C α -C 1 carbon bonds. Bonds were monitored over 5000 time steps of the AIMD simulation.
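The bond-cleavage frequencies quoted in the Results (e.g., for the β-O-4 ′ ether bond) follow from applying this distance criterion frame by frame across many trajectories. A minimal sketch of that bookkeeping is given below; the tolerance factor and the synthetic trajectory data are assumptions for illustration, since in practice the distances come from the AIMD trajectories themselves.

    import numpy as np

    def bond_cleavage_frequency(distances, reference_length, tolerance=1.5):
        # distances: array (n_trajectories, n_steps) of atom-pair distances.
        # A bond counts as broken once the distance exceeds tolerance * reference_length
        # (the tolerance factor here is an assumed choice).
        distances = np.asarray(distances, dtype=float)
        broken = (distances > tolerance * reference_length).any(axis=1)
        return broken.mean()

    # Synthetic example: 4 trajectories x 5000 steps for a C-O ether distance (angstrom).
    rng = np.random.default_rng(0)
    traj = 1.43 + 0.05 * rng.standard_normal((4, 5000))
    traj[0, 3000:] += 1.2   # one trajectory dissociates
    print(f"cleavage frequency: {bond_cleavage_frequency(traj, reference_length=1.43):.2f}")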
Molecular dynamics (MD) simulations were performed using NAMD (ver. 2.1) [25]. Potential energies were calculated using the CHARMM36 force field [26]. Previously published information describes the same methods and conditions used [8]. Briefly, the protonation state for each titratable residue at pH 6.0 was determined using PROPKA [27,28]. Counterions (Na + , Cl − ) were added to a box of TIP3P water molecules to achieve a NaCl concentration of 0.1 M. MD simulations were conducted at 333.15 K and atmospheric pressure using a Langevin thermostat with periodic boundary conditions [29]. Long-range electrostatic interactions were calculated using the particle mesh Ewald method and a cutoff distance of 12 Å [30]. The simulation was performed by running three successive 10 ps energy minimization steps and then heating the system to the desired temperature (333.15 K) over 300 ps. The heated system was then equilibrated at 333.15 K for 5 ns under an isothermal−isobaric ensemble (NPT). Finally, the system's 50 ns production simulation was performed in the canonical ensemble (NVT). The flexibility and dynamics of the protein structure and its amino acid residues were calculated as the per-residue root mean square fluctuation (RMSF) using VMD 1.9.4 software [31].
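Per-residue RMSF is the quantity used below to rank the candidate CBM fusions, so a compact definition is useful: for each residue, it is the root-mean-square deviation of that residue's position from its time-averaged position over the production trajectory. The sketch computes it from plain coordinate arrays; in practice the coordinates would be read from the NAMD trajectory (e.g., in VMD or a trajectory-analysis library), and the array used here is synthetic.

    import numpy as np

    def per_residue_rmsf(coords):
        # coords: array (n_frames, n_residues, 3) of representative-atom positions,
        # already aligned to a common reference frame. Returns RMSF per residue.
        coords = np.asarray(coords, dtype=float)
        mean_pos = coords.mean(axis=0)                   # time-averaged position
        sq_dev = ((coords - mean_pos) ** 2).sum(axis=2)  # squared displacement per frame
        return np.sqrt(sq_dev.mean(axis=0))

    # Synthetic trajectory: 500 frames, 60 residues; the last 10 "linker/CBM surface"
    # residues are given larger fluctuations to mimic a flexible region.
    rng = np.random.default_rng(1)
    coords = rng.normal(scale=0.4, size=(500, 60, 3))
    coords[:, -10:, :] *= 3.0
    rmsf = per_residue_rmsf(coords)
    print("mean RMSF, core vs flexible tail:", rmsf[:-10].mean().round(2), rmsf[-10:].mean().round(2))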
Low pH Accelerates Catalysis of Bond Cleavage of Lignin Dimers by Wildtype Laccase from F. mediterranea
Products that resulted from catalysis of four main reactions, β-O-4 ′ ether bond cleavage, C α -C 1 carbon bond (ring A) cleavage, C α -OH oxidation, and aromatic ring hydroxylation, by Fom_lac with NIMS-tagged GGE and VGE dimers as substrates were quantified using the NIMS assay (Figure 1). The data in Figure 1A,B show that wildtype Fom_lac showed its highest catalytic capability at pH 3.0 for GGE and VGE dimers, with more than 95% of the dimers being converted to products (Figure 1, cyan bars). At pH > 7.0, reaction with the wildtype Fom_lac laccase resulted in more than 40% and 95% of the GGE and VGE substrates being unmodified, respectively. At all pH levels, β-O-4 ′ ether bond cleavage (Figure 1, blue bars) was the dominant catalytic event, followed by C α -C 1 carbon bond cleavage (Figure 1, red bars), C α oxidation (Figure 1, green bars) of the GGE dimer, or aromatic ring hydroxylation of the VGE dimer (Figure 1, purple bars). These pH profiles are consistent with previous studies of laccases from the white rot-causing fungus Cerrena unicolor [8] and with reports on the pH effect on the activity of laccases from Trametes versicolor, where the authors suggested the pH of the surrounding environment can influence the configuration of the metal ions of the laccase and the redox potential of the copper ions, which, in turn, affects the enzyme's ability to bind and oxidize substrates [9]. It was reported that under acidic conditions, the formation of protonated hydroxyl groups on GGE (Figure 3, left panel) drives the reaction to further completion by lowering bond dissociation energies. In these simulations, the β-O-4 ′ ether bond cleavage frequency for the GGE cationic radical intermediate was 33.66%, but it was only 11.42% for the non-protonated GGE cationic radical (Figure 3A) [13]. The same trend in bond cleavage frequencies was observed from the analysis of AIMD trajectories of the non-phenolic lignin dimer (VGE, Figure 3B). Protonated VGE intermediates at the C α -OH or C β -OH positions resulted in a bond-breaking frequency of 26.25% for the β-O-4 ′ ether bond, which is much higher than the frequency for the non-protonated VGE cationic radical (0.54%).
Enhanced Lignocellulosic Biomass Degradation Was Achieved by Fusing a CBM to the Laccase from F. mediterranea
CBMs are classified based on substrate binding properties and sequence homology reflecting cellulose binding similarities. Molecular dynamics (MD) simulations were performed to screen candidate CBM family 1 domains (fungal cellulases) for the consequent generation of Fom_CBM chimeras. Eleven CBM domains from different fungal cellulases were fused to the C terminus of Fom_lac using the polylinker (ASPPPPTTTTSSAPATTTTAS), and their stabilities were investigated using MD simulations (Figure 4A). These simulations aimed to identify the most stable CBM-Fom_lac complex, so values of per-residue root mean square fluctuation (RMSF) for CBM domains were used to guide the selection of more stable CBM domains. In general, increased flexibility was observed for adjacent residues (aa 450-498) located on the protein surface of the CBM domain, where this region is more likely to be easily disrupted by heat and solvent interactions (Figure 4B). The most stable CBM domain was CBM10 of Exoglucanase 1 from Trichoderma reesei, and CBM10 was thus used to construct the Fom_CBM used in further studies and applications in this work.
Fom_CBM was characterized for its ability to catalyze bond breaking in the phenolic GGE and non-phenolic VGE NIMS-tagged dimers over a range of pH conditions using the NIMS assay. Fom_CBM catalysis of bond cleavage in GGE and VGE at pH 2-5 was comparable to that for the wildtype Fom_lac laccase (Figure 5), indicating that the addition of a CBM did not alter the catalytic efficiency of the laccase. The total products from breaking both β-O-4 ′ ether and C α -C 1 bonds in the presence of the mediator HBT totaled ~80% for phenolic GGE at pH 2.0 and 60% for non-phenolic VGE at pH 2.0. Similar to Fom_lac, the Fom_CBM laccase showed deficient activity towards both of the model dimers at alkaline pH (7-10), even in the presence of HBT.
Adding laccase to enzyme mixtures of cellulases and hemicellulases in saccharification reactions represents a potential synergistic approach to more efficiently convert lignocellulosic biomass into fermentable sugars compared to mixtures of just cellulases and hemicellulases. We further hypothesized that adding a CBM to Fom_lac would allow Fom_lac to bind to biomass, position it in closer proximity to lignin, and increase its ability to catalyze the depolymerization of lignin and thus work synergistically with cellulases and hemicellulases to improve glucose and xylose yields. Conversion of dry-milled pine with particle sizes greater than 250 µm resulted in low yields (~3-5%) of fermentable sugars with and without the addition of Fom_lac or Fom_CBM to the Ctec3/Htec3 enzyme mixture (Figure 6A). In the conversion of finely dry-milled pine (particle size < 25 µm), the addition of Fom_lac to the Ctec3/Htec3 enzyme mixture did not show increased sugar yields; when the mediator HBT was added to the mixture, sugar yields increased by 38.9% compared to the Ctec3/Htec3 enzyme mixture alone (Figure 6B). The highest sugar yields were achieved when Fom_CBM was added to the Ctec3/Htec3 mixture, which resulted in a 41.6% increase in sugar yields without HBT and a 140.3% increase in yields when HBT was included in the reaction (Figure 6B). Increased sugar yields were also observed for mixtures of Ctec3/Htec3, Fom_CBM, and HBT in saccharification of [Ch][Lys]-pretreated pine. The highest sugar yield (72% of glucose and 23% of xylose) was observed from the saccharification of [Ch][Lys]-pretreated pine, which showed a 32.5% increase in total sugars compared to the Ctec3/Htec3 mixture without Fom_CBM (Figure 6C). We did not observe a significant yield of low molecular weight products from lignin degradation after the reaction. Further analysis of the large structure of lignin before and after the reaction is required in future studies.
Discussion
The presence of different bond types and the structural heterogeneity and complexity of lignin make it challenging for enzymes to evolve to break specific bonds selectively. Despite these challenges, understanding lignin degradation is crucial for developing sustainable processes for utilizing lignocellulosic biomass in producing bio-derived fuels and chemicals. Advancements in analytical techniques, such as nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry, and high-performance liquid chromatography, have enabled researchers to gain insights into the structure of lignin and the products formed during enzymatic depolymerization of lignin. In this study, the highest yield of products from catalysis of C α -OH bond, C α -C β bond, β-O-4 ′ ether bond, and C α -C 1 carbon bond cleavage by the laccase from F. mediterranea (Fom_lac) was observed at pH 2-3 for both phenolic and non-phenolic lignin dimers as the substrate. Furthermore, data from AIMD simulations and Gibbs free energy calculations performed on protonated hydroxyl lignin dimers showed lower activation energies for bond breaking, which helped explain why the bond-breaking efficiency was much higher at a low pH, especially for the β-O-4 ′ ether bonds, and results in higher overall conversion yields of dimer degradation. These results again emphasize the important role of low pH conditions in driving the reaction equilibrium toward the favorable formation of the active cationic radical intermediate and in controlling bond-cleavage frequencies through the protonation of hydroxyl groups [13]. Because the protonation of lignin drives the reaction, these results provide insights into the optimal design of reaction conditions to improve the efficiency of lignin degradation catalyzed by ligninases and potentially any catalytic approach to lignin depolymerization.
Fusing laccase with a carbohydrate-binding module (CBM) is a biotechnological strategy to improve the enzyme's efficiency in lignocellulose and plastic degradation. Laccases are multicopper oxidases capable of oxidizing a variety of phenolic and non-phenolic substrates. At the same time, CBMs are non-catalytic domains that can specifically bind to carbohydrates, facilitating the enzyme's attachment to the complex structure of lignocellulosic biomass or synthetic polymers [32][33][34][35]. The potential benefit of fusing a CBM onto a laccase was also emphasized in this study, in which wildtype Fom_lac catalyzed oxidation of lignin only through inefficient long-distance electron transfer through an aqueous solution from lignin to the mediator (Figure 7A). Adding a CBM to Fom_lac to form Fom_CBM significantly improved fermentable sugar yields from finely dry-milled pine particles and ionic liquid pretreated pine. Our working model is that the CBM provides Fom_CBM with an additional domain that can specifically bind to cellulose, other carbohydrate components, or possibly even lignin in lignocellulosic biomass, resulting in the laccase being closer in proximity to lignin and thus enhancing the required electron transfer between the laccase and lignin. We observed improved laccase activity and sugar yields only in the presence of a mediator. This result further suggested that, with the CBM bringing the laccase much closer in proximity to lignin, the electron transfer pathway between the mediator and lignin was significantly shortened, resulting in enhanced lignin depolymerization, improved access to cellulose and hemicellulose, and enhanced enzymatic saccharification (Figure 7B). This approach reflects the ongoing efforts in bioengineering to create tailored enzymes for more efficient and sustainable bioprocessing of biomass. Optimization of the fusion protein and process conditions may be necessary to fully realize the benefits of adding CBMs to laccases and other lignin-degrading enzymes for lignocellulose degradation.
Conclusions
The results from experimental and computational studies of both the wildtype laccase from F. mediterranea (Fom_lac) and an engineered variant containing a carbohydrate-binding module (Fom_CBM) showed that both variants efficiently catalyzed breaking of β-O-4 ′ ether and C α -C 1 bonds in phenolic and non-phenolic lignin model dimers. Both Fom_lac and Fom_CBM showed highest activity at an acidic pH (pH = 3 to 4) and in the presence of the reaction mediator 1-hydroxybenzotriazole. Further, this work investigated the use of the two laccases to improve the enzymatic saccharification of ionic liquid pretreated pine. The highest sugar yields were detected when Fom_CBM was added to a mixture of cellulases and hemicellulases. Taken together, adding laccases to current cellulase and hemicellulase mixtures can increase substrate accessibility for hydrolytic enzymes, which is a promising approach to improve lignocellulose degradation and is essential for producing biofuels, biochemicals, and other value-added products.
DE-AC02-05CH11231 with Lawrence Berkeley National Laboratory.Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly-owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges that the U.S. Government has a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript or allow others to do so for U.S. Government purposes.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Figure 1. Product distribution from bond cleavage of fluorous-tagged GGE (A) and VGE (B) by wildtype Fom_lac. The reaction contained 1 mM of NIMS-tagged lignin dimer, 5 µM of Fom_lac enzyme, and 20 mM of 1-hydroxybenzotriazole as mediator and was performed in sodium acetate buffer pH 2.0-10.0. Error bars are the standard deviation for three replicates.
Pham et al. [13] proposed a pH-dependent mechanism for the degradation of a phenolic lignin dimer using quantum calculations and AIMD simulations. These calculations were extended to include the Gibbs free energy of reaction (ΔG, kcal/mol) and AIMD simulations of the non-phenolic lignin dimer. The cationic radical intermediate formed from Fom_lac-catalyzed 1-electron oxidation of GGE (Figure 2A) and VGE (Figure 2B) can undergo a variety of reactions such as side-chain oxidation (C α oxidation of the GGE dimer or aromatic ring hydroxylation of the VGE dimer), C-C bond cleavage, and β-O-4 ′ ether bond cleavage. β-O-4 ′ ether bond cleavage required +32.8 kcal/mol for the GGE dimer and +15.6 kcal/mol for the VGE dimer, and the calculated β-O-4 ′ bond breaking was more energetically favorable than C α -C 1 carbon bond cleavage in both of the lignin dimer types. The high ΔG (+312 kcal/mol) for C α -C 1 carbon bond breaking also helps explain the lack of detection of its products in the non-phenolic dimer reaction.
Figure 2. The proposed scheme of Fom_lac laccase catalyzed depolymerization and ΔG. Relative Gibbs free energy of reaction (kcal/mol) was calculated for various bond types via the cationic radical intermediate from the phenolic lignin dimer (A) and from the non-phenolic lignin dimer (B).
Figure 3. Bond-cleavage frequency computed by AIMD simulation of non-protonated and protonated-OH GGE (A) and VGE (B) cationic radicals for different linkages.
Figure 5. Product distribution from bond cleavage of fluorous-tagged GGE (A) and VGE (B) by Fom_lac fused with a CBM (Fom_CBM). The reaction contained 1 mM of NIMS-tagged lignin dimer, 5 µM of enzyme, and 20 mM of 1-hydroxybenzotriazole as mediator and was performed in sodium acetate buffer pH 2.0-10.0. Error bars are the standard deviation for three replicates.
Figure 7. Proposed mechanism of Fom_lac-catalyzed lignin oxidation in the presence/absence of CBM. (A) Long-distance electron transfer through an aqueous solution is inefficient from lignin to mediator. (B) CBM brings Fom_lac in close contact with lignin and shortens the electron transfer pathway between mediator and lignin. | 2024-03-15T16:28:29.230Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "ea3ed069e39807b3c174f7d1b41b9fd0f88488b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/14/3/324/pdf?version=1709888244",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "126cced8ac012d70baa0e9a16744649cefeb70a2",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
258402838 | pes2o/s2orc | v3-fos-license | Primary and Secondary Reflective Color Realized by Full‐Solution‐Processed Multi‐Layer Structures
A full solution‐based method is reported to fabricate multilayer thin film stacks for structural color applications. This is in contrast to the conventional fabrication methods that require high vacuum processes such as physical vapor deposition and magnetron sputtering, which significantly limits their practical use due to the high cost and long processing time. Copper/silicon dioxide/copper (Cu/SiO2/Cu) and copper/titanium dioxide/copper (Cu/TiO2/Cu) are chosen as the model system due to their simple structure and wide color tunability. A systematic investigation is carried out for each layer to ensure good film quality as well as its compatibility with all previous layers. Especially, a particulate Cu layer having optical properties distinct from that of bulk is obtained through such solution deposition, where the refractive index of Cu can be continuously tuned with deposition time. Both primary and secondary colors are achieved with a continuous manipulation of both the dielectric thickness and top Cu morphology on different substrates (e.g., silicon, glass, plastics, etc.).
Introduction
Color pigments of all types have been used extensively in our daily life. In contrast to the natural dye pigments that usually suffer from long-term degradation, artificial structural colors made of inorganic elements are often much more durable, robust, and environmentally friendly. However, conventional fabrication of structural colors either relies heavily on vacuum-based deposition techniques [1][2][3][4] or involves complicated patterning procedures, [5][6][7][8][9] which greatly limits their applications for large-scale and cost-effective production. Therefore, much effort has been devoted to alternative, solution-based processes. Though photonic crystals synthesized through nanoparticles [10][11][12][13][14] or block-copolymer self-assembly [15][16][17][18] with various colors have been extensively explored, their color properties are largely compromised when being ground into micro/nano-flakes. Besides, these photonic crystals usually require thicknesses of several to hundreds of microns in order to achieve a high color brilliancy. In comparison, simple layered structures such as the metal-dielectric-metal (MDM) structure and its variants [19][20][21][22] could easily give strong coloration and only require thicknesses within several hundreds of nanometers. Reflection color is produced when light at a certain wavelength resonates inside the thin film cavity and gets absorbed, giving a subtractive color (i.e., secondary color). Though the optical properties of such a simple tri-layer structure have been widely studied in the past decades, only quite recently [23,24] have researchers realized the usage of nanoparticles as a top broadband absorbing layer, where a primary reflection peak can be achieved. Since almost all current fabrication strategies involve some level of vacuum-based deposition, a more cost-effective way of multilayer film fabrication is preferred. Previously, our group has reported the first effort of utilizing electrochemical deposition to fabricate a layered gold (Au)/cuprous oxide (Cu 2 O)/Au structural color. [25] Both film thickness and roughness are well controlled with the film nucleation and growth rate. However, in the electrochemical deposition process, a conductive substrate is required for each add-on layer, and the cost of gold is also a big concern. In this work, we focused on developing a general solution-based strategy for MDM structure fabrication, producing both reflective primary and secondary colors on various substrates without requiring the conductive substrate that was needed for electrochemical deposition. The optical properties of the metal layer, which depend on its morphology, can also be tuned by the deposition process.
In our experiment, inexpensive copper (Cu) was chosen as the top and bottom metals and was electroless-deposited on substrates. Full solution-processed dielectrics with different refractive indices, such as silicon dioxide (SiO 2 ) or titanium dioxide (TiO 2 ) from a sol-gel process, were chosen as the middle dielectric layer to tune different colors. Appropriate deposition conditions for each layer have been successfully investigated with an aim of not only good morphology and thickness control, but also chemical compatibility with all previous layers.
Scheme 1. An overview of the full solution process of MDM structural color fabrication.
All depositions were
driven with the inherent chemical potentials of the solution, without any external power (i.e., electric, photonic, mechanical, etc.) input. Different substrates including silicon wafer, glass, polyethylene terephthalate (PET), and acrylonitrile butadiene styrene (ABS) were tested to demonstrate the generality of our full-solution-based method. Color coatings from orange to cyan were produced by varying the thickness of the dielectric layer. Angle-dependent spectral responses were also observed for SiO 2 -based samples and gave the sample an iridescent appearance due to the low refractive index of SiO 2 . This fabrication strategy can be extended to the deposition of other metals (e.g., nickel (Ni), silver (Ag), etc.) and dielectrics (zirconium dioxide (ZrO 2 ), etc.) as well, which greatly enriches the material library. We also expect that such facile and cost-effective fabrication can be easily scaled up for mass production and leads to an extensive usage of structural color in diverse fields such as color displays, cosmetics, solar cells, aesthetic decorations, etc.
Results and Discussion
Scheme 1 gives the MDM structure fabrication overview, with each layer being deposited additively. The bottom Cu layer was deposited through a Cu electroless deposition process. Silicon or glass substrates underwent a silanization [26,27] step with 3-aminopropyltrimethoxysilane (APTMS) to ensure an amine-terminated surface, which serves to anchor the catalyst particles. For plastic substrates such as ABS and PET, where silanization is not possible, stearyltrimethylammonium chloride (SC) [28] was adopted for surface treatment to help with palladium (Pd) nanocolloid adsorption due to the positively-charged amine group. The treated substrate was then immersed in a presynthesized Pd nanocolloidal solution for Pd adsorption. The autocatalytic Cu electroless deposition process was then carried out on the silicon surface, where the Cu complex was reduced by formaldehyde: [29] [Cu(II)−Tar] + 2HCHO + 2OH − → Cu 0 + 2HCOOH + H 2 + Tar 2− , where Tar 2− is the tartrate anion. A strong basic solution (pH > 11.5) is typically required since the formation of the methylene glycolate anion is favored at high pH. The Cu color gradually appeared on the top of the silicon substrate, indicating an increasing Cu layer thickness with immersing time. As shown in Figure 1a, the Cu layer grows at the rate of 0.27 nm s −1 , following a nucleation and growth mechanism (Figure 1c). The Pd nanocolloids serve as an autocatalytic center as well as a nucleation center, where Cu nanoparticles start to form and merge as the deposition goes on. Such a mechanism could be clearly seen from the scanning electron microscopy (SEM) images (Figure 1c) as well as the refractive index changes with increasing Cu thickness (Figure S8, Supporting Information). Since the bottom copper serves as a good reflector, we adopted a longer deposition time of 120 s for ≈40 nm thick Cu, where the reflection spectra hardly change thereafter. The resulting shiny Cu film has a root-mean-square (RMS) surface roughness of 5.8 nm (Figure S1a, Supporting Information), which ensures the smoothness and uniformity of the bottom reflector in the MDM structure. The second dielectric layer controls the resonance wavelength, which is used to tune the color appearance. Hence, a smooth dielectric layer with good thickness control is highly desired. A typical sol-gel process [30][31][32] was carried out, where tetraethyl orthosilicate (TEOS) and titanium isopropoxide (TTIP) solutions were first prepared to ensure hydrolysis (Equations S1 and S3, Supporting Information). The pH of the solution was carefully tuned to 3 for SiO 2 coating to maximize TEOS hydrolysis kinetics while minimizing bulk condensation. The substrate deposited with Cu was then immersed into the solution for several minutes, followed by withdrawing the substrate from the TEOS solution at a constant speed. As the solvent quickly evaporates upon withdrawing, condensation reactions (Equations S2 and S4, Supporting Information) take place among the pre-hydrolyzed precursors, forming a thin coated oxide layer. The thickness can be controlled by either varying the precursor concentration or changing the withdrawing rate. A linear concentration-dependent thickness increase was observed (Figure 2a,b) with the TEOS concentration increasing from 14.8 to 35.6 wt.% and the TTIP concentration from 13.1 to 34.1 wt.% at a constant withdrawing rate of 200 μm s −1 . Accordingly, the coated samples appeared to show a color from pale orange to pale yellow with a negligible dip across the spectra (Figures S2 and S3, Supporting Information).
Another facile way to tune the dielectric film thickness is through the control of the withdrawal rate. As shown by Landau and Levich [33] and Faustini, [34] a change in the withdrawal rate leads to a different deposited layer thickness, h ∝ E/(L·U) + D·U^(2/3), where U is the withdrawal rate and E, L, and D stand for the solvent evaporation rate, the width of the film, and a physicochemical constant of the solution, respectively. In Figure 2c,d, we observed a similar relation between the deposited dielectric thickness and the withdrawal rate from 100 μm s −1 to 500 μm s −1 , as shown by the fitted curves. The photos in Figure S12 (Supporting Information) show samples of several cm 2 area, limited by the size of the beaker used in our experiment. The scale bar in the pictures is 1 cm. Withdrawing is in the vertical direction for both the Cu/SiO 2 /Cu (left) and Cu/TiO 2 /Cu (right) samples. The color difference at the edge of the sample is due to the faster evaporation of solvent. The bottom edge color variation is caused by the remaining meniscus liquid on the sample surface when the substrate leaves the solution. Note that there is no fundamental limitation on color uniformity with the dip coating process as long as rigorous engineering controls are employed.
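To make the two competing regimes in this relation concrete, the sketch below evaluates a thickness curve of the form h(U) = k·(E/(L·U) + D·U^(2/3)) and locates the withdrawal rate at which the film is thinnest. All constants are illustrative placeholders rather than values fitted to the data in Figure 2.

    import numpy as np

    def dip_coat_thickness(U, E, L, D, k=1.0):
        # Thickness vs withdrawal rate: evaporation (capillarity) regime ~ E/(L*U)
        # plus draining (Landau-Levich) regime ~ D*U**(2/3).
        U = np.asarray(U, dtype=float)
        return k * (E / (L * U) + D * U ** (2.0 / 3.0))

    # Illustrative constants only (not fitted to the measured curves).
    E, L, D = 2.0e-9, 0.02, 0.05
    U = np.linspace(50e-6, 600e-6, 500)          # withdrawal rates, m/s
    h = dip_coat_thickness(U, E, L, D)
    print(f"thinnest film near U = {U[np.argmin(h)] * 1e6:.0f} um/s")
    # Faster withdrawal beyond this minimum thickens the film again (draining regime).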
The resulting SiO 2 and TiO 2 films have refractive indices of 1.45 and 1.81 at 550 nm by ellipsometry measurement, respectively (Figure S4, Supporting Information), which are lower than those of vacuum-deposited films. We attributed the reduction of the refractive index to the low density of the solution-deposited materials, which is likely due to nanopores generated within the film during the condensation reaction. A very small RMS surface roughness of 1.8 nm (Figure S1b, Supporting Information) was obtained across the sample surface except for the drying front, where a meniscus formed once the substrate left the solution. The smoothly and uniformly coated dielectric layer provides a solid foundation for the top Cu layer coating as well as ensures a good overall color performance.
A similar repetition of the top Cu layer plating was carried out after silanization of the sol-gel deposited dielectric layer and Pd nanocolloid activation. However, the Cu plating recipe developed for the bottom Cu layer could not be applied because of the residual hydroxyl groups left in the dielectric layer: in the strongly basic plating bath (pH > 11.5), the high pH would further trigger the condensation reaction inside the dielectric layer, leading to an internal stress build-up across the film. The stressed film can crack and delaminate from the substrate. Hence, a milder Cu plating recipe was developed to work under neutral pH [35][36][37] with minimum heating. Instead of the formation of methylene glycolate at high pH with formaldehyde, the formation of a copper-boron (Cu-B) phase with dimethylamine-borane (DMAB) is more favorable for Cu plating at lower pH values.
The growth of the top copper layer also follows a nucleation and growth mechanism (Figure 1c and Figure S7, Supporting Information), where the refractive index changes with increasing deposition time. Such a tunable property can lead to new colors that are not possible with the conventional MDM structure. As shown in Figure 3a, a decrease in the real part of the refractive index was observed with time, while the imaginary part shows an opposite trend, especially in the red end of the spectrum. We noticed that a moderate imaginary part of the refractive index leads to a lossy metal, i.e., a too large imaginary part gives too much reflection (like a shiny metal) but very limited absorption, while a too small imaginary part leads to negligible absorption across a thin film. Therefore, 1 min electroless-deposited copper was chosen as the top absorbing layer due to its lossy nature in the spectra simulation with various dielectric layer (i.e., SiO 2 ) thicknesses (Figure 3b). It can be seen that a broadband absorber shows up at small dielectric layer thicknesses, followed by a reflection peak (Figure 3d inset) moving from the blue to the red end as the resonance builds up. In comparison to a continuous metal (i.e., 3 min Cu electroless deposition, Figure 3c), where a typical secondary color is observed, the color gamut is greatly expanded (Figure 3d).
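The reflection spectra referred to above can be reproduced in a few lines with a standard transfer-matrix calculation for a metal-dielectric-metal stack. The sketch below is a minimal, normal-incidence version; the constant complex indices, the layer thicknesses and the treatment of the bottom Cu as a semi-infinite layer are simplifying assumptions, whereas the actual simulations would use the wavelength-dependent optical constants extracted from ellipsometry.

    import numpy as np

    def tmm_reflectance(n_list, d_list, wavelength):
        # Normal-incidence reflectance of a layer stack (characteristic-matrix method).
        # n_list: complex indices [ambient, finite layers..., substrate];
        # d_list: thicknesses (nm) of the finite layers, in the same order.
        k0 = 2 * np.pi / wavelength
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_list[1:-1], d_list):
            delta = k0 * n * d
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        n0, ns = n_list[0], n_list[-1]
        B, C = M @ np.array([1.0, ns])
        r = (n0 * B - C) / (n0 * B + C)
        return abs(r) ** 2

    # Illustrative (non-dispersive) optical constants for a Cu/SiO2/Cu stack.
    wl = np.arange(400, 801, 10)                         # nm
    n_cu_top, n_sio2, n_cu_bottom = 1.0 + 2.5j, 1.45, 0.5 + 3.0j
    for d_sio2 in (100, 150, 200):                       # nm, tunes the resonance
        R = [tmm_reflectance([1.0, n_cu_top, n_sio2, n_cu_bottom], [15, d_sio2], w) for w in wl]
        print(d_sio2, "nm SiO2 -> reflectance minimum near", wl[int(np.argmin(R))], "nm")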
As extracted from ellipsometry data, the discontinuous copper layer showed a refractive index very distinct from that of bulk copper, where the imaginary part k is greatly reduced in the long wavelength range (> 600 nm). Especially, the reduction in the k values makes the Cu extremely lossy toward longer wavelengths and results in a broadband absorption above 600 nm (Figure 3a, blue curve). This is in contrast with bulk Cu, which is highly reflective toward the NIR wavelength range. The sample gave a reflective blue color with only 30 s of top Cu layer plating due to this stronger absorption in red, followed by a dark purple color at 1 min (Figure 4a). These colors are not obtainable for a typical MDM structure and offer great potential in expanding the possible color gamut of the simple tri-layered structures. Prominent peaks show up when the dielectric layer becomes thicker. This can be understood as the particulate Cu morphology broadening both the first-order and second-order resonances at longer and shorter wavelengths, respectively. Figure 4d shows the non-trivial color generated with 1 min Cu electroless deposition on top of the SiO 2 dielectric surface, giving reflection peaks across the visible spectra (Figure 4e, black arrows indicate the peak position).
Here, we treated the discontinuous Cu as an effective medium [38,39] made of Cu islands and air. As it is hard to distinguish the host and the inclusion for the discontinuous Cu layer, we adopted a Bruggeman model with an appropriate Cu filling fraction to simulate the spectra of these non-conventional colors, which matches the measured spectra accurately (Figure 4b,c). The simulated reflection spectra (Figure 4f) match fairly well with the experimental results when the top Cu layer refractive index extracted from ellipsometry is used.
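For reference, the two-phase Bruggeman condition can be solved in closed form for the effective permittivity, which is one way a filling-fraction-dependent effective index of the particulate Cu layer can be generated for such simulations. The permittivities and the 40% metal filling fraction below are illustrative assumptions, not the fitted values used for Figure 4.

    import numpy as np

    def bruggeman_eps(eps_a, eps_b, f_a):
        # Two-phase Bruggeman medium (3D, spherical inclusions):
        # f_a*(eps_a-eps)/(eps_a+2*eps) + (1-f_a)*(eps_b-eps)/(eps_b+2*eps) = 0,
        # rearranged to -2*eps**2 + b*eps + eps_a*eps_b = 0. The root with a
        # non-negative imaginary part is the physical one for absorbing media.
        b = (3 * f_a - 1) * eps_a + (2 - 3 * f_a) * eps_b
        roots = np.roots([-2.0, b, eps_a * eps_b])
        return roots[np.argmax(roots.imag)]

    # Illustrative values: Cu islands (permittivity from n + ik) in air, 40% metal fill.
    n_cu = 0.5 + 3.0j
    eps_eff = bruggeman_eps(n_cu ** 2, 1.0 + 0j, f_a=0.40)
    n_eff = np.sqrt(eps_eff)
    print(f"effective index ~ {n_eff.real:.2f} + {n_eff.imag:.2f}j")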
Secondary (or subtractive) colors can be obtained with a longer deposition time (> 3 min) for a continuous Cu layer with ≈25 nm thickness, where the structure behaves like that of a typical MDM. The cross-section SEM image (Figure 5a) clearly showed the metal-dielectric-metal stack in a layer-by-layer fashion. Colors (Figure 5b and Figure S9, Supporting Information) from orange to cyan (depending on the dielectric layer thickness) were observed immediately after taking the substrate out of the plating bath. A red shift in the resonant wavelength in the reflection spectra (Figure 5c-d and Figure S10a, Supporting Information) was observed with increasing dielectric layer thickness. There is no fundamental limitation on the size or the uniformity of the fabricated sample as long as rigorous engineering controls are employed. A potential continuous fabrication of large films could also be achieved with a roll-to-roll setup. We also showed that this MDM structure can be coated on various substrates (Figure S6) including glass, ABS plastics, and PET films, to name a few. The ability to coat either a high index dielectric or a low index dielectric gave us more flexibility in tuning the color performance in practical applications. It is interesting to note that the Cu/SiO 2 /Cu samples demonstrated an iridescent color (Figure 6a) when changing the viewing angle, while the color change in the Cu/TiO 2 /Cu samples was less noticeable. As clearly shown in Figure 6b-d, the resonance wavelength blue shifted for all samples with SiO 2 as the middle dielectric. The angle-dependent color can be understood with Bragg's law [40] (assuming air as the incident medium), where the resonance wavelength of the cavity depends on the incident angle θ: Nλ = 2d√(n² − sin²θ), where n, d, and N are the refractive index of the cavity material, the cavity length, and an integer, respectively. It is also easy to note that an increase in the refractive index reduces the angle dependence, since dλ/dθ = A/(N²√(n² − sin²θ)) with A = −2Nd·sin(θ)cos(θ), and thus the color of the TiO 2 samples (Figure S5, Supporting Information) is much less angle-sensitive than that of the SiO 2 samples.
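Evaluating the Bragg condition above for the two dielectrics makes the difference in angle sensitivity explicit. In the sketch below, the cavity thicknesses are chosen only so that both stacks resonate near the same wavelength at normal incidence and are not the fabricated values.

    import numpy as np

    def resonance_wavelength(theta_deg, n, d, N=1):
        # First-order cavity resonance from the thin-film Bragg condition
        # N*lambda = 2*d*sqrt(n**2 - sin(theta)**2), air incidence.
        theta = np.radians(theta_deg)
        return 2.0 * d * np.sqrt(n ** 2 - np.sin(theta) ** 2) / N

    angles = np.array([0.0, 30.0, 60.0])
    for name, n, d in [("SiO2 cavity", 1.45, 190.0), ("TiO2 cavity", 1.81, 152.0)]:
        lam = resonance_wavelength(angles, n, d)
        print(f"{name}: {lam.round(0)} nm, shift 0 -> 60 deg = {lam[0] - lam[-1]:.0f} nm")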
Conclusion
In summary, we have developed a full-solution-based additive approach toward both primary and secondary reflective structural color fabrication. The novel fabrication process at ambient conditions with minimal heating (50 °C, for a faster top copper layer deposition rate) distinguishes our work from previous works based on vacuum deposition technologies. Tuning the metal and dielectric layers together gives colors from orange to cyan with good control of the metal layer morphology and the dielectric layer thickness. The color angle-sensitivity can also be easily tuned by switching the dielectric between SiO 2 and TiO 2 under an almost identical fabrication process. Other metals (e.g., Ni or Ag) and dielectrics (e.g., ZrO 2 ) can be deposited in a similar manner, which enriches the material library. Colors beyond the conventional MDM color gamut were achieved by carefully tuning the top Cu layer morphology, where an effective-medium layer plays an important role in modifying the imaginary part of the refractive index. Further pigmentation and packaging strategies (including anti-scratch coatings) will be implemented in the future for scale-up production but are beyond the scope of this work. Taking advantage of the electroless deposition and dip-coating techniques, we expect that the approach presented here can be scaled up to continuous fabrication in the near future. All these aspects help to reduce the cost and accelerate the implementation of MDM structural colors for practical applications.
Substrate Preparation: All substrates (i.e., Si wafer, glass, PET, ABS) were sequentially sonicated for 5 min in acetone, methanol, isopropanol, and water before any surface treatment. Si wafer and glass substrates were immersed into a 10 wt.% APTMS solution (ethanol as solvent) for 1 h, followed by a rinse with ethanol. PET and ABS substrates were immersed into a 0.1 wt.% SC solution for 1 min, followed by a rinse with water. The pretreated substrates were then immersed into the Pd nanocolloidal solution for 10 min for surface activation, followed by rinsing with water. The Pd nanocolloidal solution was prepared as follows: a) dissolve 1 g of sucrose and 10 mg of dextran (MW = 200000) in 92.5 mL water as solution A; b) solution B contains 20 mM PdCl 2 and 100 mM NaCl; c) solution C contains 40 mM NaBH 4 ; d) then add 2.5 mL of solution B and 5 mL of solution C into solution A, followed by heating at 80 °C for 24 h. The solution was then cooled down to room temperature and stored as the Pd nanocolloidal solution. Note that the nanocolloidal solution remains good to use for at least 6 months with proper handling.
Deposition of the Bottom Cu Layer: The electroless plating bath consists of three parts: solution A contains 2 wt.% CuSO 4 ·5H 2 O; solution B contains 26 wt.% potassium sodium tartrate dihydrate and 8.5 wt.% NaOH; solution C contains 37 wt.% formaldehyde solution. The plating bath was then prepared by mixing equal weights of solutions A, B, and C in sequential order. The Pd nanocolloid activated substrate was then immersed into the plating bath for 120 s, followed by a rinse with water. A shiny Cu appearance could be observed.
Deposition of the Dielectric Layer-SiO 2 Deposition Solution: The TEOS solution was prepared by mixing 35.5 mL ethanol, 5 mL water, TEOS, and MTES in order. 0.1 M HCl was used to adjust the solution pH to 3 for hydrolysis. The amount of TEOS added increased gradually from 6.4 mL to 22.4 mL in 3.2 mL intervals, marked as #1 to #6 (Table S1, Supporting Information). The MTES was added in proportion to TEOS with a volume ratio of 4:1 (TEOS:MTES). The Cu-coated substrate was then immersed into the TEOS solution for several minutes and withdrawn at a constant rate of 200 μm s −1 using a linear actuator (Z6254B 25 mm motorized actuator, Thorlabs Inc.). Various withdrawal rates from 100 μm s −1 to 500 μm s −1 were also tested with recipe #4.
Deposition of the Dielectric Layer-TiO 2 Deposition Solution: The TTIP solution (Table S2, Supporting Information) was first obtained by mixing 30 mL ethanol, AcAc, and TTIP for 30 min. The amount of TTIP was increased from 4.44 mL to 17.76 mL in 4.44 mL intervals, marked as #7 to #10. The AcAc was added in proportion to TTIP with a volume ratio of 2.9:1 (TTIP:AcAc). Then 1.2, 2.4, 3.6, and 4.8 mL of water was added dropwise to TTIP solutions #7 to #10, respectively. The final solution was stirred for another hour to ensure hydrolysis. The Cu-coated substrate was then immersed into the TTIP solution for several minutes and withdrawn at a constant rate of 200 μm s −1 using a linear actuator. Various withdrawal rates from 100 μm s −1 to 500 μm s −1 were also tested with recipe #9.
Deposition of the Top Cu Layer: The top Cu layer plating bath contained 50 mM CuCl 2 , 50 mM EDTA-2Na, 0.1 m boric acid and 0.9 m NaOH, and was tuned to neutral pH with 1 m NaOH. The plating solution was then heated to 50 °C with the addition of DMAB (0.4 g per 50 mL plating bath). The dielectric/Cu-coated substrate was then immersed in the solution for 3 min, and an MDM structural color sample was obtained.
Optical Simulation: Simulated reflectance spectra were calculated based on the transfer matrix method. Refractive index of each layer was extracted from spectroscopic ellipsometer (M-2000, J. A. Woollam Co.) as inputs.
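The transfer-matrix calculation referred to here is standard; the sketch below shows a minimal normal-incidence implementation for a three-layer stack on a substrate. The layer indices and thicknesses in the example are rough placeholders, not the dispersive, ellipsometry-extracted values actually used as inputs in this work.

```python
import numpy as np

def reflectance(layers, n_substrate, wavelengths_nm, n_ambient=1.0):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic (transfer) matrix method.
    layers: list of (n + 1j*k, thickness in nm), top layer first."""
    R = []
    for lam in wavelengths_nm:
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2.0 * np.pi * n * d / lam
            M = M @ np.array([[np.cos(delta), -1j * np.sin(delta) / n],
                              [-1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_substrate])
        r = (n_ambient * B - C) / (n_ambient * B + C)
        R.append(abs(r) ** 2)
    return np.array(R)

# Illustrative Cu/SiO2/Cu stack on Si (placeholder, non-dispersive indices)
stack = [(0.5 + 3.0j, 25.0),    # semi-transparent top Cu
         (1.45 + 0.0j, 180.0),  # SiO2 spacer
         (0.5 + 3.0j, 120.0)]   # optically thick bottom Cu
print(np.round(reflectance(stack, 3.8 + 0.1j, np.linspace(400, 800, 5)), 3))
```

Replacing the constant indices with wavelength-dependent (n, k) data for each layer, as done in the paper, turns this into the fit used for Figures 4f and 5.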
MDM Film Characterization: Reflectance spectra at normal incidence were measured using a thin-film measurement instrument integrated with a spectrometer (HR4000CG-UV-NIR, Ocean Insight) and a white halogen light source (HL-2000-FHSA, Ocean Insight). Grazing-incidence XRD of the dielectrics was taken with a Rigaku SmartLab XRD from 0 to 5°. Angle-resolved reflectance spectra of the as-prepared MDM stacks were measured with an unpolarized light source on a UV-Vis-NIR spectrometer (Lambda 1050, PerkinElmer Inc.), and the thickness and refractive index of each layer were determined with a spectroscopic ellipsometer (M-2000, J. A. Woollam Co.). Cross-sectional SEM was performed with an FE-SEM (TFS Helios 650 Nanolab SEM/FIB) with a Schottky field emitter operated at a 5 kV beam voltage. The surface morphology of each layer was also investigated by tapping-mode AFM (TESPA-V2 tip and Dimension Icon AFM, Bruker Corporation).
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2023-04-30T15:16:01.940Z | 2023-04-28T00:00:00.000 | {
"year": 2023,
"sha1": "d1345bfdf441eea78a97bfa1041f5e25075c8cac",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adom.202300456",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "32ca084bb410300b034d0753f50ae30241021088",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
212969536 | pes2o/s2orc | v3-fos-license | Research of dynamic characteristics of the hybrid electrohydraulic aircraft actuator with combined speed control
The paper presents the results of mathematical modeling and experimental research of the dynamic characteristics of a prototype aircraft hybrid electrohydraulic steering actuator with combined speed control. The mathematical modeling of the dual-mode electrohydraulic actuator was performed in MATLAB Simulink, and the experimental research was carried out at the control systems test rig of the Central Aerohydrodynamic Institute. The main goal of the experimental research was to determine the output characteristics of the actuator, among them static (speed and mechanical) and dynamic (Bode diagrams, transient responses and dynamic stiffness) characteristics in the backup (autonomous) and main modes of its operation, including when the actuator was under load.
Introduction
Currently, the term «More Electric Aircraft» (MEA) is increasingly common in the technical literature [1,2]. By this term, experts in the field of automatic control systems mean an aircraft in which the number of centralized hydraulic and pneumatic systems, together with their units, is reduced, these being replaced by electrical power systems and actuators with an electric power supply.
From the point of view of the actuation part of the integrated flight control system (IFCS) of the aircraft, this trend leads to the necessity to develop and implement new types of steering actuators that can be powered from the onboard aircraft electrical power system and that have high dynamic characteristics, energy efficiency and a sufficient level of reliability and failure safety. Among the existing steering actuators that are used or planned to be used onboard «more electric aircraft» for the primary flight control system, several main types can be distinguished, among them actuators controlled by electrohydraulic servo valves (EHSV), electrohydrostatic actuators (EHA) and electrical backup hydraulic actuators (EBHA). In addition, the actuators of the main control surfaces of the modern passenger aircraft Airbus A-380 are heterogeneous both in the type of power supply and in the way the speed of the output link is regulated [2,5]. For example, each section of the rudder (the rudder surface is split) is deflected by an EBHA, and the elevators and ailerons (middle and inboard) are equipped with EHSV and EHA [5,6].
The hybrid electrohydraulic actuator that was the object of research is a modification of electrohydraulic steering actuators with a backup source of hydraulic power supply [12], but it was made according to an original design [13], which allowed a significant reduction in the weight and dimensions of the hydraulic unit. This makes the research of the characteristics of this type of actuator a relevant task.
Object of research and research methods
The object of the research, as noted above, was a prototype of a hybrid electrohydraulic steering actuator with combined speed control in the backup (autonomous) mode of its operation. The actuator was made according to the original design scheme [13]. A photograph of the investigated hybrid actuator with combined speed control on the flight control systems test rig at the Central Aerohydrodynamic Institute [14] is shown in figure 1. To analyze the operation of the hybrid electrohydraulic actuator, a detailed mathematical model was created in MATLAB Simulink. The mathematical model was built according to a modular scheme, and its general view is shown in figure 2.
The following factors were taken into account when the mathematical model was compiled: the hydraulic fluid is compressible, with a bulk modulus that depends on the fluid pressure and on the specified amount of air contained in the fluid; the control object and the actuator rod have mass, so the effect of the inertial load on the output characteristics of the actuator was taken into account; the implementation of the ECU takes into account the "digital part" with specified time delays and sampling of the control signals; the mathematical model of the brushless DC motor was built using elements of the SimPowerSystems library; the mathematical model of the hydraulic cylinder takes into account a possible differential piston area; the EHSV model takes into account the microgeometry of the valve spool. The mathematical model allows the static and dynamic characteristics of the hybrid electrohydraulic actuator to be determined. It also allows the actuator operation to be investigated both when it is powered by the external (centralized) hydraulic system and when it is powered from the internal source of hydraulic energy. Switching between the actuator operating modes is carried out by a command signal in the ECU.
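To give a flavor of how such a model is assembled, the sketch below implements a heavily simplified, single-degree-of-freedom version in Python: a load-pressure build-up equation with a pressure-dependent bulk modulus (accounting for entrained air) driving a piston with an inertial load, closed by a proportional position loop on the commanded flow. All parameter values are illustrative assumptions and do not correspond to the prototype actuator or to the Simulink model described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not the prototype's data)
A, Vt, m, b = 1.2e-3, 4.0e-4, 40.0, 2.0e3      # piston area, volume, mass, damping
beta0, x_air, p_atm, kappa = 1.4e9, 0.01, 1.0e5, 1.4
Kp, Qmax = 0.08, 6.0e-4                        # position-loop flow gain, flow limit
x_cmd, F_load = 0.002, 500.0                   # 2 mm step command, constant load

def beta_eff(p):
    """Bulk modulus of the fluid with entrained air (pressure dependent)."""
    return 1.0 / (1.0 / beta0 + x_air / (kappa * (abs(p) + p_atm)))

def rhs(t, y):
    x, v, pL = y
    q = np.clip(Kp * (x_cmd - x), -Qmax, Qmax)   # commanded flow (simplified loop)
    dpL = 4.0 * beta_eff(pL) / Vt * (q - A * v)  # load-pressure build-up
    dv = (A * pL - b * v - F_load) / m           # piston + inertial load dynamics
    return [v, dv, dpL]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0, 0.0], max_step=1e-4)
print(f"final position: {sol.y[0, -1] * 1e3:.2f} mm (command {x_cmd * 1e3:.1f} mm)")
```

A full model, as in the paper, would additionally include the EHSV spool dynamics, the brushless DC motor drive of the backup hydraulic supply, the digital ECU with sampling and delays, and the differential piston areas.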
The results of research
The results of mathematical modeling of the step response of the hybrid electrohydraulic actuator are shown in figures 3 and 4. The input control signals corresponded to piston rod displacement amplitudes of 2 mm and 35 mm (35 mm is the full stroke). Figure 3 shows the characteristics of the actuator in the autonomous working mode, and figure 4 shows the characteristics of the actuator in the main mode of its operation (powered by the external hydraulic system). Figures 3 and 4 also show the experimental characteristics of the actuator. As can be seen from the figures, the modeled and experimental characteristics agree closely. A slight discrepancy was observed when the actuator worked at low amplitudes of the control signals in the autonomous mode of its operation. In the authors' opinion, this is due to the configuration of the control unit of the brushless DC motor, which could not be completely reproduced in the mathematical model because the control algorithms of the BLDC motor are proprietary.
Diagrams of the logarithmic amplitude and phase frequency characteristics (frequency response diagrams) of the actuator are shown in figures 5(a,b). Figure 5a shows the frequency response of the actuator when it operates in the autonomous mode with combined speed control, while figure 5b shows the frequency response when it operates in the main mode. The characteristics were determined for piston rod displacement amplitudes of 1, 2 and 5 mm.
Figure 5. Frequency response of the actuator: (a) autonomous mode, (b) main mode. As can be seen from the figures, when the actuator worked in the main mode it had a fairly wide bandwidth (more than 10 Hz). In the autonomous mode of operation, the power of the actuator's backup hydraulic supply source was limited, and high dynamic characteristics were provided only in the region of low control-signal amplitudes (due to the implementation of combined speed control).
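For comparison with such measured Bode diagrams, a low-order linear approximation of the actuator is often useful. The sketch below builds an assumed third-order model (a position loop around a lightly damped hydraulic mode) with scipy.signal and reads off approximate bandwidth figures; the gain, natural frequency and damping values are illustrative assumptions, not identified parameters of the tested actuator.

```python
import numpy as np
from scipy import signal

# Assumed loop gain [1/s], hydraulic natural frequency [rad/s] and damping ratio
K, wn, zeta = 70.0, 220.0, 0.3
G_open = signal.TransferFunction([K * wn**2],
                                 [1.0, 2.0 * zeta * wn, wn**2, 0.0])
# Unity-feedback closed loop: G/(1 + G)
den_cl = np.polyadd(G_open.den, G_open.num)
G_cl = signal.TransferFunction(G_open.num, den_cl)

w = 2.0 * np.pi * np.logspace(-1, 2, 500)        # 0.1 Hz ... 100 Hz
w, mag, phase = signal.bode(G_cl, w)
f = w / (2.0 * np.pi)
f_3db = f[np.argmax(mag < -3.0)]
f_90deg = f[np.argmax(phase < -90.0)]
print(f"-3 dB bandwidth ~ {f_3db:.1f} Hz, -90 deg phase crossover ~ {f_90deg:.1f} Hz")
```

Such a linear model cannot, however, reproduce the amplitude dependence seen in the autonomous mode, where the limited power of the backup hydraulic supply restricts the bandwidth at larger command amplitudes.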
In general, according to the results of the experimental research, it can be noted that the obtained dynamic characteristics of the hybrid actuator correspond to the stated requirements. At the same time, the dynamic capabilities of the actuator in the main mode somewhat exceed these requirements and can be reduced in the future.
Conclusions
As a result of the research, a highly detailed mathematical model of the hybrid electrohydraulic steering actuator was compiled. The model allows its operation to be studied in the main mode (when the hybrid actuator is powered from the external, centralized hydraulic system) and in the backup (autonomous) mode, when it is powered from the aircraft electrical power system. The model also allows different methods and strategies for controlling the speed of the output link to be investigated.
The mathematical model was validated against the results of the experimental research in terms of the dynamics of the control elements and the characteristics of the sensors, and it showed close agreement with the experimental characteristics. On the basis of the refined mathematical model, ways of improving the actuator dynamic characteristics were considered and the influence of the control settings on the output characteristics was determined.
The developed mathematical model can be recommended for the detailed study of hybrid electrohydraulic actuators implemented according to the scheme given in [15], as well as for the educational process conducted at department 702 of MAI. It can also be useful for specialists in the field of actuator technology.
According to the results of the tests at the Central Aerohydrodynamic Institute, the hybrid actuator can be used in the primary flight control system of prospective civil aircraft realized under the "more electric" concept. The actuator provides the required level of flight safety due to the selected design scheme and the heterogeneity of its power supply channels, and it has satisfactory dynamic characteristics. | 2020-01-30T09:15:14.907Z | 2020-01-29T00:00:00.000 | {
"year": 2020,
"sha1": "7f6665cacdb8effda355913a1147a4577fc02546",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/734/1/012016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d43e4ab8edf3105053f3171653be8810a7ac0d0b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119272171 | pes2o/s2orc | v3-fos-license | Classification of Leavitt path algebras with two vertices
We classify row-finite Leavitt path algebras associated to graphs with no more than two vertices. For the discussion we use the following invariants: decomposability, the $K_0$ group, $\det(N'_E)$ (included in the Franks invariants), the type, as well as the socle, the ideal generated by the vertices in cycles with no exits and the ideal generated by vertices in extreme cycles. The starting point is a simple linear algebraic result that determines when a Leavitt path algebra is IBN. An interesting result that we have found is that the ideal generated by extreme cycles is invariant under any isomorphism (for Leavitt path algebras whose associated graph is finite). We also give a more specific proof of the fact that the shift move produces an isomorphism when applied to any row-finite graph, independently of the field we are considering.
Introduction and preliminary results
In the 1960s, Leavitt algebras arose from the work of Leavitt in his search for non-IBN algebras [16]. The name Leavitt path algebra was associated to this structure in particular because the Leavitt path algebra of a graph with one vertex and n loops, where n > 1, is exactly the Leavitt algebra of type (1, n). However, there are many Leavitt path algebras having IBN. For the definition of the type of a ring see, for example, [3, Definition 1.1.1].
The classification problem of Leavitt path algebras (up to isomorphisms) has been present in the literature since the pioneering works [1] and [2]. The study of the classification of Leavitt path algebras associated to small graphs was started in [6], where the authors considered graphs with at most 3 vertices satisfying Condition (Sing), i.e, there is at most one edge between two vertices. This work can be also of interest, not only for people studying Leavitt path algebras, but also for a broader audience; concretely, for those working on graph C˚algebras (as these are the analytic cousins of Leavitt path algebras). Moreover, one can view Leavitt path algebras as precisely those algebras constructed to produce specified K-theoretic data in a universal way, data arising naturally from directed graphs (sic [3]), which could make these algebras and results a source of inspiration.
Throughout this paper we mean algebra isomorphism whenever we mention isomorphism. When referring to a ring isomorphism we will specify. In general, when there is a ring isomorphism between two algebras, these are not necessarily isomorphic as algebras. However, we will prove in Proposition 1.2 that when the center of the Leavitt path algebra is the ground field, then any ring isomorphism gives rise to an algebra isomorphism.
The goal of this article is the classification of Leavitt path algebras with at most two vertices and finitely many edges. This study will be initiated by fixing our attention on the IBN property; concretely, our starting point is [15, Theorem 3.4], which gives the necessary and sufficient condition that determines when a Leavitt path algebra has the IBN property in terms of a simple linear span of vertices. The outline of the paper is as follows. Section 1 gives the necessary preliminaries. Moreover, we give a detailed proof of the fact that the shift move produces isomorphisms for row-finite graphs (see Theorem 1.1). We also prove in Proposition 1.2 that a ring isomorphism between two Leavitt path algebras whose center is the ground field produces an algebra isomorphism between them. In Section 2 we compute the type of Leavitt path algebras not having the IBN property via the criteria given in [15] and we give a first classification in Figure 3. Section 3 contains the computation of the K 0 -groups, which is stated in Figure 4. The main section of the paper, Section 4, follows the procedure of the decision tree given in Figure 1 and discusses some algebraic invariants which are listed in Figures 5, 6 and 7. The core result, Theorem 4.6, classifies Leavitt path algebras not having the IBN property. As a result of our research we prove in Theorem 4.1 that for a finite graph the ideal generated by the vertices in extreme cycles is invariant under ring isomorphisms.
Finally, in Section 5 we classify the Leavitt path algebras having the IBN property and conclude the sequel by addressing an open problem on the isomorphism of Leavitt path algebras over a particular pair of non-isomorphic graphs. In Theorem 5.1 we classify Leavitt path algebras having the IBN-property; the invariants we use are listed in Figures 8 and 9. Figure 1. Decision tree Throughout the paper, E " pE 0 , E 1 , s, rq will denote a directed graph with set of vertices E 0 , set of edges E 1 , source function s, and range function r. In particular, the source vertex of an edge e is denoted by speq, and the range vertex by rpeq. We call E finite, if both E 0 and E 1 are finite sets and row-finite if s´1pvq is a finite set for all v P E 0 . A sink is a vertex v for which s´1pvq " te P E 1 | speq " vu is empty. For each e P E 1 , we call e˚a ghost edge. We let rpe˚q denote speq, and we let spe˚q denote rpeq. A path µ of length |µ| " n ą 0 is a finite sequence of edges µ " e 1 e 2 . . . e n with rpe i q " spe i`1 q for all i " 1, . . . , n´1. In this case µ˚" en . . . e2e1 is the corresponding ghost path. A vertex is considered a path of length 0. The set of all vertices on the path µ is denoted by µ 0 . The set of all paths of a graph E is denoted by PathpEq.
A path µ " e 1 . . . e n in E is closed if rpe n q " spe 1 q, in which case µ is said to be based at the vertex spe 1 q. A closed path µ is called simple provided that it does not pass through its base more than once, i.e., spe i q ‰ spe 1 q for all i " 2, . . . , n. The closed path µ is called a cycle if it does not pass through any of its vertices twice, that is, if spe i q ‰ spe j q for every i ‰ j.
An exit for a path µ " e 1 . . . e n is an edge e such that speq " spe i q for some i and e ‰ e i . We say the graph E satisfies Condition (L) if every cycle in E has an exit. We denote by P E c (P c if there is no confusion about the graph) the set of vertices of a graph E lying in cycles without exits.
A cycle c in a graph E is called an extreme cycle if c has exits and for every path λ starting at a vertex in c, there exists µ P PathpEq, such that 0 ‰ λµ and rpλµq P c 0 . A line point is a vertex v whose tree T pvq does not contain any bifurcations or cycles. We will denote by P E l the set of all line points, by P E ec the set of vertices which belong to extreme cycles, while P E lec :" P E l \ P E c \ P E ec . We will eliminate the superscript E in these sets if there is no ambiguity about the graph. We refer the reader to the book [3] for other definitions and results on Leavitt path algebras.
If there is a path from a vertex u to a vertex v, The set of all hereditary saturated subsets of E 0 is denoted by H E , which is also a partially ordered set by inclusion.
Let K be a field, and let E be a row-finite graph. The Leavitt path K-algebra L K pEq of E with coefficients in K is the K-algebra generated by the set tv | v P E 0 u, together with te, e˚| e P E 1 u, which satisfy the following relations: (V) vw " δ v,w v for all v, w P E 0 , (E1) speqe " erpeq " e for all e P E 1 , (E2) rpeqe˚" e˚speq " e˚for all e P E 1 , and (CK1) e˚e 1 " δ e,e 1 rpeq for all e, e 1 P E 1 . (CK2) v " ř tePE 1 |speq"vu ee˚for every v P E 0 which is not a sink. It was studied in [15] the necessary and sufficient conditions for a separated Cohn-Leavitt path algebra to have the Invariant Basis Number (IBN) property. In particular, when a Leavitt path algebra has IBN. We refer the reader to [3] for the definitions of separated graph, separated Cohn-Leavitt path algebra, etc.
The monoid of isomorphism classes of finitely-generated projective modules over a ring A is denoted by VpAq. Recall also that UpAq is the cyclic submonoid of VpAq generated by the isomorphism class of A. The Grothendieck group of VpAq is the K 0 -group of A denoted K 0 pAq, and by [15,Proposition 2.5], there is a monomorphism from the Grothendieck group of UpAq into K 0 pAq.
By [8,Theorem 3.5] the abelian monoid M E associated with a row-finite graph E is isomorphic to VpL K pEqq. Concretely, when E is finite, the isomorphism class of L K pEq is mapped to r ř vPE 0 vs P M E . Denote it by r1s E . Note that a Leavitt path algebra L K pEq which does not have IBN, necessarily has type p1, mq for some natural number m ą 1. The reason is the following: If pn, mq were the type of L K pEq for 1 ă m ď n, then nrRs " mrRs and, by the separativity of the monoid VpL K pEqq (see [8,Theorem 3.5 and Theorem 6.3]), pn´1qrRs " pm´1qrRs, a contradiction to the type of the Leavitt path algebra.
For any finite graph E, we denote by A E the incidence matrix of E. Formally, if E 0 " tv i | 1 ď i ď nu, then A E " pa i,j q is the nˆn matrix for which a i,j is the number of edges e having speq " v i and rpeq " v j . In particular, if v i P E 0 is a sink, then a i,j " 0 for all 1 ď j ď n, i.e., the i th row of A E consists of all zeros. Following [7] we write N E and 1 for the matrices in Z pE 0ˆE0 zSinkpEqq obtained from A t E and from the identity matrix after removing the columns corresponding to sinks. Then there is a long exact sequence (n P Z)¨¨Ñ In particular K 0 pL K pEqq -cokerp1´N E : Z pE 0 zSinkpEqq Ñ Z pE 0 q q. The effective computation of the K 0 group of a given L K pEq is explained in [1,Section 3].
Note that the K 0 pL K pEqq can be computed by obtaining the Smith normal form of the matrix N 1 E :" A E´1 1 , where 1 1 denotes the matrix built from the identity matrix changing the columns corresponding to sinks by columns of zeros. The element r1s E , seen inside K 0 pL K pEqq, will be called the order unit.
We will use intensively the Smith normal form of a matrix with entries in Z. Denote by M n (Z) the ring of n × n matrices with integer coefficients. Following [17], for any matrix A ∈ M n (Z) there are invertible matrices P, Q in M n (Z) such that P AQ is a diagonal matrix P AQ = diag(d 1 , . . . , d n ) ∈ M n (Z), where d i | d i+1 and the diagonal entries are unique up to their signs. The diagonal matrix P AQ is called the Smith normal form of A.
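Since all the K 0 computations in the following sections reduce to Smith normal forms of small integer matrices, they are easy to check by computer. The sketch below uses SymPy to form N'_E = A_E − 1' for a two-vertex graph and returns the diagonal of its Smith normal form, whose entries are the invariant factors of K 0 (L K (E)); the example graphs are chosen ad hoc for illustration and are not taken from the figures of this paper.

```python
from sympy import Matrix, eye, ZZ
from sympy.matrices.normalforms import smith_normal_form

def k0_invariant_factors(A, sinks=()):
    """Diagonal of the Smith normal form of N'_E = A_E - 1', where 1' is the
    identity matrix with the columns corresponding to sinks replaced by zeros.
    K_0(L_K(E)) is then the direct sum of the groups Z/d_i (with Z/0 = Z)."""
    n = A.shape[0]
    one_prime = eye(n)
    for j in sinks:
        for i in range(n):
            one_prime[i, j] = 0
    snf = smith_normal_form(Matrix(A) - one_prime, domain=ZZ)
    return [snf[i, i] for i in range(n)]

# No sinks: 3 loops at u, 3 loops at v, 2 edges from u to v
A = Matrix([[3, 2],
            [0, 3]])
print(k0_invariant_factors(A))               # invariant factors of K_0

# One sink: u is a sink, v has 3 loops and 2 edges into u
A = Matrix([[0, 0],
            [2, 3]])
print(k0_invariant_factors(A, sinks=(0,)))   # a zero entry contributes a copy of Z
```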
For the definition of the shift move we refer the reader to [1, Definition 2.1]. It was shown in [1, Theorem 2.3] that every shift of a graph E produces an epimorphism between the corresponding Leavitt path algebras over a field K, which is an isomorphism provided the graph E satisfies Condition (L) or the field K is infinite. This result can be extended to arbitrary fields, and the condition can be eliminated, as the second, third and fourth author mentioned in [6] (see page 583) and proved in a condensed way. Here we include a more detailed proof. Theorem 1.1. Let K be an arbitrary field and let E be a row-finite graph. Assume that F is a graph obtained from E by shift moves. Then L K pEq and L K pF q are isomorphic.
Proof. Let ϕ : L K pEq Ñ L K pF q be the K-algebra epimorphism defined in [1, Theorem 2.3]. Take K, the algebraic closure of K, and consider the K-algebra homomorphism ϕ b 1 K : L K pEq b K Ñ L K pF q b K, where 1 K is the identity from K into K. Since ϕ and 1 K are epimorphisms, then by [18,Theorem 7.7] the map ϕ b 1 K is an epimorphism. The same result states that the kernel of ϕ b 1 K is generated by By [3, Corollary 1.5.14] we have that L K pEq and L K pF q are isomorphic to L K pEq b K and to L K pF q b K, respectively, via isomorphisms that we will denote by α and β, respectively. Therefore, there exists a unique K-algebra homomorphism ϕ that makes the following diagram commute.
Note that ϕ is just the K-algebra homomorphism given in [1,Theorem 2.3]. Since K is an infinite field, this result states that ϕ is an isomorphism. By the commutativity of the diagram, the map ϕ b 1 K is an isomorphism, therefore Kerpϕq b K " 0. This implies Kerpϕq " 0, as required.
We finish this section by including two results on isomorphisms which will be used in the sequel.
Proposition 1.2. Let E be a graph such that the center of L K pEq is isomorphic to K (which implies that E is a finite graph), and let F be another graph. Then there is a ring isomorphism L K pEq Ñ L K pF q if and only if there is an algebra isomorphism L K pEq Ñ L K pF q.
Proof. Assume that f : L K pEq Ñ L K pF q is a ring isomorphism. We can restrict the map f to f | ZpL K pEqq : ZpL K pEqq Ñ ZpL K pF qq, where Zp¨q denotes the center of the algebra, to get an automorphism σ : K Ñ K such that f pk1q " σpkq1 for any k P K. We can say that f is σ-linear in the sense that f pkxq " σpkqf pxq for any k P K and x P L K pEq. Now, by [3, Corollary 1.5.12], we may fix a basis tw i u iPΛ of L K pF q whose structure constants are 0, 1,´1. Assume w i w j " ř l c l ij w l where c l ij P t0,˘1u. Define ψ : L K pF q Ñ L K pF q by ψp ř i k i w i q :" This map is a σ´1-linear bijective map and ψpw i w j q " ψp ř l c l ij w l q " ř l c l ij w l " w i w j " ψpw i qψpw j q. From this, we deduce that ψpxyq " ψpxqψpyq for any x, y P L K pF q. Thus the composition ψf is a K-linear isomorphism from L K pEq to L K pF q. Remark 1.3. Although we have stated Proposition 1.2 for Leavitt path algebras, because we are in this setting, the result is more general: it is true for arbitrary K-algebras having center isomorphic to K and a basis with structure constants in the prime field of K. Proof. Assume that there is an isomorphism ϕ : M 8 pL K p1, mqq Ñ M 8 pL K p1, nqq. Let e P M 8 pL K p1, mqq be the matrix having 1 in place 1,1 and zero everywhere else and let e 1 :" ϕpeq. Then L K p1, mq -eM 8 pL K p1, mqqee 1 M 8 pL K p1, nqqe 1 , which is Morita equivalent to L K p1, nq. This implies that L K p1, mq is Morita equivalent to L K p1, nq and, consequently, their K 0 groups are isomorphic. Since the first one is isomorphic to Z m´1 and the second one is isomorphic to Z n´1 , necessarily m " n.
Computation of the type of Leavitt path algebras not having IBN
In this section we will determine all Leavitt path algebras not having the IBN property and compute their types in terms of the number of edges of the associated graphs.
We start by quoting [15,Theorem 3.4], which gives a necessary and sufficient condition for the algebra to have the IBN property in the more general setting of separated Cohn-Leavitt path algebras.
Theorem 2.1. For a given triple pE, Π, Λq, with E finite, let L denote the separated Cohn-Leavitt path algebra CL K pE, Π, Λq over the triple. Then L is IBN if and only if ř vPE 0 v is not in the Q-span of the relations tsX´ř ePX rpequ XPΛ in QE 0 .
If the Leavitt path algebra has type p1, mq, for some natural m ą 1, then where m is the minimum natural number satisfying this property.
Before we move on to the two-vertex graphs, for completeness of the argument, we state the easy case of one-vertex graphs. The Leavitt path algebras associated to one-vertex graphs are isomorphic either to the ground field K or to the Laurent polynomial algebra Krx, x´1s, which have the IBN property, or to the Leavitt algebras Lp1, nq, with n ą 1, which do not have the IBN property and are of type p1, nq.
Let us consider a finite graph E with two vertices u and v, and assume l 1 , l 2 , t 1 , t 2 ∈ N = {0, 1, 2, . . .} are the numbers of arrows appearing in the graph; that is, l i is the number of loops based at the i-th vertex and t i is the number of edges from the i-th vertex to the other one. Now, consider the set N × N and identify u with (1, 0) and v with (0, 1). According to the number of sinks in E, we have several different relations in the monoid M E . If all the vertices are sinks, the graph consists of two isolated vertices and its Leavitt path algebra is K × K, which clearly has the IBN property. So we only consider the two cases below.
2.1. One sink case. Without loss of generality, let u be the sink so that the graph looks like: Since there is only one vertex which is not a sink, we have: Then M E is identified with NˆN M xp0, 1q " pt 2 , l 2 qy and we get the equivalence relation generated by the pair A consequence of Theorem 2.1 is that the algebra L K pEq has not the IBN property and is of type p1, mq, m ą 1, if and only if pm´1, m´1q is in the integer span of the pair in (3). In other words, if and only if there is a nonzero natural number k such that We will split the discussion of the solution of this system into two cases: Case 1. If t 2 " 0 or l 2 " 1 then the system is inconsistent (it has no solution) and L K pEq has IBN. More precisely: when t 2 " 0, then L K pEq is isomorphic to KˆKrx, x´1s when l 2 " 1 or to KˆLp1, l 2 q when l 2 ‰ 1 and in every case it is an IBN algebra. Case 2. If t 2 ‰ 0 and l 2 ‰ 1 then: (a) If t 2 ‰ l 2´1 , then the system is again inconsistent and L K pEq is IBN. (b) If t 2 " l 2´1 , then m´1 " k t 2 and the minimum solution is m " 1`t 2 " l 2 , so L K pEq has not IBN and type p1, l 2 q. Summarizing the results of the one sink case, we get the following lemma.
Lemma 2.2. Let K be a field and E be a graph with two vertices having exactly one sink. Then L K pEq has not IBN if and only if E is of the form where n ě 2. Furthermore, L K pEq has not IBN and has type p1, nq.
2.2.
No sink case. If both u and v are not sinks, then we have the following relations Then M E can be identified with NˆN M xp1, 0q " pl 1 , t 1 q, p0, 1q " pt 2 , l 2 qy and the equivalence relation is the one generated by the pairs Using Theorem 2.1 we can affirm that if there exists m P N, m ą 1 such that (1) is satisfied, then pm, mq´p1, 1q is in the Z-span of the relations given in (6), that is, there exist k 1 , k 2 P Z such that Our aim is to find the minimum value of m P N satisfying the system above, if it exists.
We may assume that pl i , t i q ‰ p0, 0q for any i " 1, 2 (otherwise the graph has a sink and this case has been considered already). There are also some particular cases to consider: Case 1. pl i , t i q " p1, 0q for any i " 1, 2. The associated Leavitt path algebra is isomorphic to Krx, x´1sˆKrx, x´1s which has IBN. Case 2. Without loss of generality we may assume pl 1 , t 1 q " p1, 0q but pl 2 , t 2 q ‰ p1, 0q. The graph is which is the same system as (4). Consequently, the algebra does not have IBN and has type p1, l 2 q if and only if t 2 " l 2´1 .
Here, we get another class of Leavitt path algebras not having IBN and we note the result as the following lemma. Lemma 2.3. Let K be a field and E be a graph of the form where l 2 ě 2. Then L K pEq does not have IBN if and only if t 2 " l 2´1 . In this case, L K pEq has type p1, l 2 q. Case 3. pl i , t i q ‰ p1, 0q for any i " 1, 2.
We state this result in the following lemma.
Lemma 2.4. Let K be a field and E be a graph of the form where t 1 , t 2 ě 1. Then L K pEq does not have IBN and has type p1, 1`gcdpt 1 , t 2 qq.
Case 3a (ii). If t 2 ‰ l 2´1 we have k 2 " 0 and the minimum solution for m is m " 1`t 1 " l 1 , which gives a Leavitt path algebra not having IBN and of type p1, l 1 q.
This case is summarized below.
Lemma 2.5. Let K be a field and E be a graph of the form where pl 2 , t 2 q ‰ p0, 0q, l 2´t2 ‰ 1 and t 1 ě 1. Then L K pEq does not have IBN and has type p1, 1`t 1 q.
Case 3b. We analyze now the case l i´1 ‰ t i for any i.
In what follows we will recall how to solve the following system of equations on Z. Let a, b, a 1 , b 1 , c, c 1 P Z be such that at least one of the following elements: a, a 1 , b, b 1 is non zero, and consider: Without loss in generality we may assume a or a 1 is different from zero. Since a or a 1 is nonzero, we may define d :" gcdpa, a 1 q, which is nonzero. We know that there exist s, t P Z such that d " as`a 1 t.
Now (8) can be rewritten as the matrix equation A simple computation shows that the matrix A "ˆs t Multiplying (9) by A on the left hand side we get where ∆ " pl 1´1 qpl 2´1 q´t 1 t 2 . By swapping the roles of the vertices and following a similar argument, we get Now we consider the cases that follow, taking into account if ∆ is zero or not.
Case 3b (i). If ∆ " 0 there is no solution neither for (11) nor (12). Hence the Leavitt path algebra has IBN and it will be studied later.
At this point, to make reference to the graph we are considering, we change the notation and take ∆ E " ∆ " pl 1´1 qpl 2´1 q´t 1 t 2 for simplicity, we have These computations are summarized in the result that follows.
Lemma 2.6. Let K be a field and E be a graph of the form where pl i , t i q ‰ p0, 0q and l i´ti ‰ 1, for i " 1, 2. Then L K pEq does not have IBN if and only if ∆ E ‰ 0. Moreover, L K pEq has type´1, 1`| ∆ E | gcdpl 1´1´t1 , l 2´1´t2 q¯.
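The case analysis above can also be double-checked numerically. The following sketch applies the criterion behind Theorem 2.1 directly: it searches for the minimal m > 1 with (m−1, m−1) in the Z-span of (l 1 −1, t 1 ) and (t 2 , l 2 −1), using a bounded brute-force search over the integer coefficients (the bound is an assumption chosen generously for small examples, not part of the theory).

```python
from itertools import product

def lpa_type(l1, t1, l2, t2, bound=60):
    """Type (1, m) of the two-vertex Leavitt path algebra with l_i loops at
    vertex i and t_i edges from vertex i to the other vertex.  Returns None
    when no m is found within the search bound (IBN for these examples)."""
    r1, r2 = (l1 - 1, t1), (t2, l2 - 1)
    best = None
    for k1, k2 in product(range(-bound, bound + 1), repeat=2):
        a = k1 * r1[0] + k2 * r2[0]
        b = k1 * r1[1] + k2 * r2[1]
        if a == b and a > 0 and (best is None or a < best):
            best = a
    return None if best is None else (1, best + 1)

print(lpa_type(3, 2, 3, 2))   # Lemma 2.4: type (1, 1 + gcd(2, 2)) = (1, 3)
print(lpa_type(4, 1, 4, 2))   # Lemma 2.6: |Delta_E| = 7, gcd(2, 1) = 1 -> (1, 8)
print(lpa_type(1, 0, 1, 0))   # two Laurent polynomial rings -> IBN, returns None
```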
We collect all the information of Lemmas 2.2, 2.3, 2.4, 2.5 and 2.6 in Figure 3. To simplify, we associate the set S " tpl 1 , t 1 q, pl 2 , t 2 qu to any two-vertex Leavitt path algebra. All possible Leavitt path algebras not having IBN are the ones whose associated sets S are listed below: In the previous section we have computed the type of the two-vertex Leavitt path algebras which do not have IBN. The type is not the only invariant that we must use in order to classify those algebras. This is why we compute here K 0 pL K pEqq in the cases that appear in Figure 3. We will remark that the order of the order unit is related to the type.
Recall that N 1 E :" A E´1 1 , where 1 1 denotes the matrix built from the identity matrix changing the columns corresponding to sinks by columns of zeros.
, whose associated graph E is By [14], the Smith normal form of N 1 This implies (as follows by [7,Theorem 4.2]) that K 0 pL K pEqq is isomorphic to and its associated graph E is Again, the Smith normal form of N 1 Case III. We have S " , whose associated graph E is Case IV. Here S " ! pt 1`1 , t 1 q, pl 2 , t 2 q | pl 2 , t 2 q ‰ p0, 0q, l 2´t2 ‰ 1, t 1 ě 1 ) and its associated graph E is In this case, the Smith normal form of ) and the associated graph E is as follows We will write d E and ∆ E to refer to the greatest common divisor of the entries and to the determinant of N 1 E , respectively. To end this section we remark that the order of r1s E in K 0 pL K pEqq is n, where p1, 1`nq is the type of L K pEq. To prove this, take into account the monomorphism from the Grothendieck group of UpL K pEqq into K 0 pL K pEqq given in [15,Proposition 2.5], together with the fact that L K pEq n`1 -L K pEq (as L K pEq-modules). Observe that in Case IV, t 1 divides ∆ E d E and in Case V, we have that In Figure 4, that summarizes the computation of the K 0 groups, we note that K 0 pL K pEqq is of the form Z|∆ E | d EˆZ d E in each of the cases.
S
Type p1, kq and III in Figure 4, ∆ E " 0, hence the K 0 groups are isomorphic to Z d EˆZ . So, in these cases the K 0 group has a torsion-free part while in Cases IV and V the K 0 is a torsion group.
Classification of Leavitt path algebras not having IBN
In this section we study the isomorphisms between the algebras in the different cases in Figure 4 following the decision tree. Recall that P l , P c and P ec denote the set of all line points, vertices in cycles with no exits and vertices in extreme cycles of E, respectively. We will compute the ideals generated by the above sets, namely, IpP c q, IpP l q " SocpL K pEqq and IpP ec q. Clearly, the socle is invariant under isomorphisms and the ideal IpP c q is proved to be also invariant under isomorphisms in [9].
We start by proving that IpP ec q remains invariant under ring isomorphisms when E is a finite graph. Proof. Assume that E and F are finite graphs and that ϕ : L K pEq Ñ L K pF q is a ring isomorphism.
Denote by P E ec and P F ec the hereditary subsets of E 0 and F 0 , respectively, consisting of vertices in extreme cycles in E and F , respectively.
As any isomorphism sends idempotents to idempotents and by [3, Corollary 2.9.11], ϕpIpP E ec qq is a graded ideal (at a first glance it is a graded ring ideal but, taking into account [3, Remark 1.2.11] it is actually a graded algebra ideal). Hence, ϕpIpP E ec qq " IpHq for some hereditary saturated set H in F . Moreover, H " F 0 X ϕpIpP E ec qq by [3, Theorem 2.4.8]. Take v P H. Since the graph is finite, v has to connect to a line point, to a cycle without exits, or to an extreme cycle (by [13, Theorem 2.9 (ii)] the ideal of L K pF q generated by the hereditary set P F lec consisting of line points, vertices in cycles without exits and vertices in extreme cycles is dense and by [13, Propostion 1.10] we have that IpP F lec q is dense if and only if every vertex of the graph connects to a vertex in P F lec ). We are going to prove that the only option for v is to connect to an extreme cycle. Assume that v connects to a line point, say w, or to a cycle without exits, say c. Since H is hereditary, w P H or c 0 Ď H. This implies that IpHq contains a primitive idempotent (see [9,Proposition 5.3]). Since primitive idempotents are preserved by isomorphisms, this means that IpP E ec q contains a primitive idempotent. But this is a contradiction because of [9, Corollary 4.10], since IpP E ec q is purely infinite simple (by [13, Proposition 2.6]). Applying the (CK2) relation, v is in the ideal IpP F ec q; therefore, ϕpIpP E ec qq Ď IpP F ec q. Reasoning in the same way with ϕ´1 we get ϕ´1pIpP F ec qq Ď IpP E ec q, implying ϕpIpP E ec qq " IpP F ec q. In the proceeding study we will follow the steps indicated in Figure 1. Note that the three sets P l , P c , P ec will play an important role in the classification of two-vertex Leavitt path algebras not having IBN. We start by considering the different possibilities for the socle (non-zero or zero).
SocpL
From now on we will denote by C the class of Leavitt path algebras with nonzero socle, not having IBN which are associated to two-vertex graphs.
Summarizing the information contained in the one-sink case (Subsection 2.1) we get the lemma that follows, where TypepXq denotes the type of X (in the sense of [3, Definition 1.1.1]). Lemma 4.2. For every A P C the associated two-vertex graph, say E l 2 , is with l 2 ą 1. Then, SocpAq -M 8 pKq,Ā " A{ SocpAq -Lp1, l 2 q and the type of A is p1, l 2 q.
A complete system of invariants for C is the type. Also the quotient algebra is an invariant for C. More pecisely, for two algebras A and B in C the following assertions are equivalent: Proof. Since SocpAq ‰ 0 there must be line-points in E l 2 , so this graph contains sinks. The unique graphs of this type which produce a Leavitt path algebra not having IBN are given in Lemma 2.2. Assume A, B P C. If A -B thenĀ -B since every isomorphism preserves the socle. Now, ifĀ -B, A " L K pE l 2 q and B " L K pE m 2 q, then we haveĀ -Lp1, l 2 q and B " Lp1, m 2 q, hence p1, l 2 q " TypepLp1, l 2 qq " TypepLp1, m 2 qq " p1, m 2 q giving l 2 " m 2 hence the underlying graphs are the same and so A " B. Finally, if TypepAq " TypepBq, then, by Lemma 2.2, l 2 " m 2 so that A -B.
We focus our attention on algebras A " L K pEq with |E 0 | " 2 and SocpAq " 0. We also rule out the purely infinite simple case because for this class a system of invariants is well known: the Franks triple pK 0 , r1s E , |N 1 E |q (see [ ‚ v pl 2 q h h with l 1 , l 2 ą 1. Given two graphs E l 1 ,l 2 and E n 1 ,n 2 , we have L K pE l 1 ,l 2 q -L K pE n 1 ,n 2 q if and only if pl 1 , l 2 q " pn 1 , n 2 q or pn 2 , n 1 q. Thus, a complete system of invariants for the algebras A in D is the pair pl 1 , l 2 q, where l 1 , l 2 are the types of the unique two graded ideals of A, considered as algebras.
Proof. Take A P D and let E be the two-vertex graph associated to A. By [11,Theorem 6.5] there must be hereditary saturated nonempty subsets H 1 , H 2 whose union is E 0 " tu, vu. Then, necessarily H 1 " tuu and H 2 " tvu, so there is no edge connecting u to v and vice versa.
Thus, E consists of l 1 loops based at u and l 2 loops based at v. Taking into account Case 3.b in Subsection 2.2, the necessary and sufficient conditions for A not to have IBN are l 1 , l 2 ą 1. Suppose A " L K pE l 1 ,l 2 q -Lp1, l 1 q ' Lp1, l 2 q and B " L K pE n 1 ,n 2 q -Lp1, n 1 q ' Lp1, n 2 q are in D. The unique proper non-zero graded ideals in A are isomorphic to Lp1, l 1 q and Lp1, l 2 q, while for B they are isomorphic to Lp1, n 1 q and Lp1, n 2 q. By [8, Proof of Theorem 5.3] an ideal is graded if and only if it is generated by idempotents, therefore graded ideals are preserved by isomorphism. Thus, if A and B are isomorphic, then pl 1 , l 2 q " pn 1 , n 2 q or pn 2 , n 1 q. By [13,Lemma 2.7] the sum of the ideals IpP l q, IpP c q and IpP ec q is direct. In fact, since E is finite, then this sum is a dense ideal in A by [13,Theorem 2.9]. Since A is simple then A must coincide with IpP l q, with IpP c q or with IpP ec q.
Using Remark 4.4 we get that IpP l q " 0; also IpP c q " 0 because the algebra is purely infinite simple; therefore, necessarily A " IpP ec q.
Case 2.2. Non-purely infinite simple algebras. By Remark 4.4, the non-purely infinite simple Leavitt path algebras in this case are nonsimple.
Case 2.2.1. IpP c q ‰ 0. Notice that there are no sinks and there is a cycle with no exits. The possibilities are: The Leavitt path algebra associated to the first graph does not have IBN only if t 2 " l 2´1 (see Lemma 2.3), the type of the corresponding Leavitt path algebra is p1, l 2 q. The Leavitt path algebra associated to the second graph has IBN, concretely it is isomorphic to M 2 pKrx, x´1sq by [5,Theorem 3.3]. So the type is again a sufficient invariant to determine the isomorphism classes in this case.
Case 2.2.2. IpP c q " 0. By [13, Proposition 1.10 and Theorem 2.9 (ii)] we have that in a finite graph any vertex connects either to a sink or to a cycle without exits or to an extreme cycle. Since there are no sinks and no cycles without exits, any vertex connects to an extreme cycle. Hence IpP ec q ‰ 0. Moreover, as IpP ec q is purely infinite simple (see [13,Proposition 2.6]), it has to be a proper ideal of L K pEq. We see that the only possible graph is of the form: with l 2 ą 1, as any cycle should have an exit. Also t 1 ě 1, otherwise the algebra would be decomposable. Moreover, l 1 ě 1 because if l 1 " 0 then the Leavitt path algebra would be simple (by [4, 3.11 Theorem]), a contradiction as we are assuming that the algebra is nonsimple. Furthermore, if l 1 " 1 the algebra would have IBN by Lemma 2.6 because ∆ E " 0 in this case, so we have to assume l 1 ě 2.
To convince the reader that we have completed the decision tree we just point out that P ec ‰ H because every vertex connects to one vertex in P l \ P c \ P ec and, in our case, only P ec survives.
We summarize all the data of this section in Figure 5. The order in which the graphs appear in the table below corresponds to the order in which the cases have been studied in the decision tree.
h h with l1, l2 ě 2; t1 ě 1 0 0 M8pLKp1, l2qq Lp1, l1q The cases appearing in this table follow from the cases in Figure 4. We call Case IV (a) to Case IV in Figure 4 for t 2 ‰ 0, Case IV (b) is Case IV in Figure 4 for t 2 " 0, and Case V(b) is Case V in Figure 4 for t 2 " 0, l 1 , l 2 ě 2.
We justify that Cases IV(b) and V(b) are isomorphic. Any graph E in Case IV(b) is as follows: where s " pl 2´1 q`t 1 and ∆ F " t 1 pl 2´1 q ‰ 0, which is in Case V(b). Note that E is produced by a shift move from F and by Theorem 1.1 the Leavitt path algebras L K pEq and L K pF q are isomorphic. Therefore, it is enough to find the isomorphism classes in Case V(b).
In what follows we are going to compare Cases V(c) and V(d) to V(e).
Take any graph from V(c), where t 1 ě 2 or t 2 ě 2 and without loss of generality t 1 ě t 2 .
Consider the graph which is in V(e). It can be transformed into the graph E via consecutive t 2 -many shift moves of s´1puq Ñ s´1pvq. By Theorem 1.1, the Leavitt path algebras L K pEq and L K pF q are isomorphic.
Take any graph from V(d), for example: which is in V(e), can be transformed into the graph E via consecutive t 2 -many shift moves of s´1puq Ñ s´1pvq. Again, by Theorem 1.1, the Leavitt path algebras L K pEq and L K pF q are isomorphic. Thus, in Figure 5 we may eliminate the rows corresponding to the cases V(c) and V(d).
Any graph from V(e) produces a Leavitt path algebra isomorphic to M t 1`1 pLp1, l 2 qq. Take a graph E in Case V(e): The previous reasoning, as well as the table that follows will allow to refine Figure 5. Recall that the Betti number of a finitely generated abelian group G, denoted by BpGq is the dimension (as a Z-module) of the free part of G.
In Figure 6, an entry 1 or 0 in the first and the second columns will mean that P l pEq and P c pEq are non-empty or empty, respectively. In the third column an entry 1 will mean that the Leavitt path algebra is decomposable, while 0 will stand for the opposite. An entry 1 in the PIS column stands for a Leavitt path algebra which is purely infinite simple. An entry 1 in the BpK 0 q column represents the Betti number of the K 0 of the corresponding Leavitt path algebra. Figure 6. Non-IBN cases There is an overlap in Cases IV(a), V(e) and V(f). Consider, for example, the graphs We have that E, F and G are in cases IV(a), V(e) and V(f), respectively, and the corresponding Leavitt path algebras are isomorphic (via shift moves).
We state the main result for Leavitt path algebras not having IBN under consideration.
Theorem 4.6. Let E be a finite graph with two-vertices whose Leavitt path algebra L K pEq does not have IBN. Then, L K pEq is isomorphic to a Leavitt path algebra whose associated graph is (i) in Case I, if and only if P l pEq ‰ H (which implies P c pEq " H, P ec pEq " H, L K pEq is neither decomposable nor purely infinite simple and BpK 0 pL K pEqqq " 1). Any two Leavitt path algebras in this situation are isomorphic if and only if their types are the same. (ii) in Case II, if and only if P c pEq ‰ H (which implies P l pEq " H, P ec pEq " H, L K pEq is neither decomposable nor purely infinite simple and BpK 0 pL K pEqqq " 1). Any two Leavitt path algebras in this situation are isomorphic if and only if their types are the same. (iii) in Case V(a) if and only if L K pEq is decomposable (which implies P l pEq " H, P c pEq " H, P ec pEq ‰ H, L K pEq is not purely infinite simple and BpK 0 pL K pEqqq " 0). Any two Leavitt path algebras in this situation are isomorphic if and only if the sets of the types of the non-zero proper ideals coincide. (iv) in Case III if and only if L K pEq is purely infinite simple and BpK 0 pL K pEqqq " 1 (which implies P l pEq " H, P c pEq " H, P ec pEq ‰ H). Any two Leavitt path algebras in this situation whose associated graphs have S " tpt 1`1 , t 1 q, pt 2`1 , t 2 qu, S 1 " tpt 1 1`1 , t 1 1 q, pt 1
21
, t 1 2 qu are isomorphic if and only if gcdpt 1 , t 2 q " gcdpt 1 1 , t 1 2 q. (v) in Cases IV(a), V(e) or V(f ) if and only if the Leavitt path algebra L K pEq is purely infinite simple and BpK 0 pL K pEqqq " 0 (which implies P l pEq " H, P c pEq " H, P ec pEq ‰ H). Any two Leavitt path algebras in this situation whose Franks triples coincide are isomorphic. On the other hand, if two Leavitt path algebras in these cases are isomorphic, then their Franks triples coincide up to the sign of the determinant. (vi) in Case V(b) if and only if P l pEq " H, P c pEq " H (which implies P ec ‰ H), L K pEq is neither decomposable nor purely infinite simple and BpK 0 pL K pEqqq " 0. Any two Leavitt path algebras in this situation whose associated graphs E and F are in Case V(b) which are isomorphic must satisfy d E " d F and gcdpl 1´1´t1 , l 2´1 q " gcdpl 1´1´t 1 1 , l 2´1 q.
Proof. By looking at Figures 6 and 7 we can distinguish the different cases that appear in the statement. Now we study isomorphisms within each case. Consider a graph E in either Case I or Case II. Since L K pEq{IpP lec q is determined by t 2 , which is the only variable of the graph, each graph in these cases produces a non-isomorphic Leavitt path algebra. Similarly, any graph in Case V(a) produces a distinct isomorphism class by Lemma 4.3. Let us study the graphs in Case III. Consider E to be the graph with S E " tpt 1`1 , t 1 q, pt 2`1 , t 2 qu, and F to be the graph with S F " tpn`1, nq, pn`1, nqu, where n :" gcdpt 1 , t 2 q. Then K 0 pL K pEqq and K 0 pL K pF qq are both isomorphic to ZˆZ n and there is an isomorphism from K 0 pL K pEqq to K 0 pL K pF qq sending r1s E to r1s F , which are both mapped to p0,1q in ZˆZ n . Moreover, the determinants agree: ∆ E " ∆ F " 0. So, by [2, Corollary 2.7], L K pEq is ring isomorphic to L K pF q. Since the center of L K pEq is isomorphic to K because the Leavitt path algebra is unital and purely infinite simple (see, for example [13,Theorem 3.7] and [10, Theorem 4.2]), we may apply Proposition 1.2 to get that there is an algebra isomorphism from L K pEq to L K pF q. Therefore, for any positive integer n, the graph with S " tpn`1, nq, pn`1, nqu produces an algebra isomorphism class.
Consider any two graphs
with t 1 , t 1 1 ě 1; l 1 , l 2 , l 1 2 , l 1 1 ě 2. Recall that IpP E ec q is isomorphic to M 8 pL K p1, l 2 qq and IpP F ec q is isomorphic to M 8 pL K p1, l 1 2 qq. The unique proper nonzero graded ideals in L K pEq and L K pF q are IpP E ec q and IpP F ec q, respectively (because in both graphs, the only proper nontrivial hereditary and saturated subset is tvu). We know, by [8, Proof of the Theorem 5.3], that an ideal in a Leavitt path algebra is graded if and only if it is generated by idempotents. Therefore, graded ideals are preserved by isomorphisms. Thus, if L K pEq and L K pF q are isomorphic, by Theorem 4.1 the isomorphism maps IpP E ec q to IpP F ec q. By Proposition 1.4 we get l 2 " l 1 2 . Moreover Lp1, l 1 q -L K pEq{IpP E ec q -L K pF q{IpP F ec q -Lp1, l 1 1 q, which implies l 1 " l 1 1 . Note also that t 1 " pl 2´1 qq`r for some q P N and 0 ă r ď l 2´1 . Observe that by applying successively shift moves to E, we produce G, where By Theorem 1.1, L K pEq is isomorphic to L K pGq. Hence, to find the isomorphism classes in Case V(b), it is enough to consider the graphs: where 1 ď t 1 , t 1 1 ď l 2´1 . Now, ∆ E " |∆ E | " |∆ F | " ∆ F . If d E ‰ d F , then K 0 pL K pEqq is not isomorphic to K 0 pL K pF qq, and L K pEq cannot be isomorphic to L K pF q.
If d E " d F and gcdpl 1´1´t1 , l 2´1 q ‰ gcdpl 1´1´t 1 1 , l 2´1 q, then L K pEq cannot be isomorphic to L K pF q as they have different types.
Remark 4.7. We do not know if the converse of (vi) in Theorem 4.6 is true or not. Take E and F as in Case V(b). If d E " d F and gcdpl 1´1´t1 , l 2´1 q " gcdpl 1´1´t 1 1 , l 2´1 q, then both the type and the K 0 groups are the same. Moreover, when the Leavitt path algebras have type p1, n`1q, the order unit will be an element of order n. But we do not know if this implies that the Leavitt path algebras L K pEq and L K pF q are isomorphic or not. Note that the graphs of this form do not produce purely infinite simple Leavitt path algebras, hence we cannot use the algebraic Kirchberg-Philips Theorems.
We illustrate that there are graphs in Case V(b) such that some of them produce Leavitt path algebras which are not isomorphic while others produce Leavitt path algebras for which we cannot say whether they are isomorphic or not.
Example 4.8. Consider the three graphs that follow.
The first one produces a Leavitt path algebra of type p1, 2q whereas both, the second and the third graphs, produce Leavitt path algebras of type p1, 4q such that their K 0 groups are It is an open question (at least for the authors of this paper) if the Leavitt path algebras associated to the graphs in Example 4.8 are isomorphic. However, we have studied if they are graded isomorphic, and the answer is no. Example 4.9. Consider the graphs that follow.
Classification of Leavitt path algebras having IBN
In the previous section we have classified the Leavitt path algebras not having IBN. Now we complete the classification of Leavitt path algebras associated to finite graphs having two vertices by describing Leavitt path algebras having IBN. We list them in the same order of dichotomies outlined in Figure 1. The invariant ideals of the families of Leavitt path algebras having IBN are summarized in Figure 8.
DEC PIS Graph
IpP l q IpP c q IpP ec q L K pEq{I Figure 9. K 0 for two-vertex Leavitt path algebras having IBN By looking at the invariants in Figure 8, it is easily deducible that two Leavitt path algebras whose graphs are in different families from A1 through A14 are non-isomophic. Now, we study the isomorphisms within the Leavitt path algebras in each family. Every graph in A1, A2, A3, A4, A7, A8, A9 and A12 produces a unique Leavitt path algebra which is not isomorphic to any other. By looking at Figure 9, the K 0 group in the classes A5 and A10 is ZˆZ t 2 and each graph produce a distinct isomorphism class again.
In A6 and A11, the Leavitt path algebra L K pEq{IpP lec q is isomorphic to Lp1, l 2 q this assures that any two graphs from the same family giving rise to isomorphic Leavitt path algebras must have the same l 2 . By looking at Figure 9, for distinct t 2 , t 1 2 , with l 2´t2 , l 2´t 1 2 ‰ 1, if gcdpt 2 , l 2´1 q ‰ gcdpt 1 2 , l 2´1 q, the K 0 groups are non-isomorphic, hence they produce different isomorphism classes. However, if gcdpt 2 , l 2´1 q " gcdpt 1 2 , l 2´1 q, then we do not know whether the corresponding graphs produce isomorphic Leavitt path algebras.
In the group A13, similarly, the invariant ideal $I(P_{ec})$ of $L_K(E)$ is isomorphic to $M_\infty(L(1, l_2))$ and hence, by Proposition 1.4, any two graphs in this group having isomorphic Leavitt path algebras will have the same $l_2$. For distinct $t_1, t_1'$, if $\gcd(t_1, l_2 - 1) \ne \gcd(t_1', l_2 - 1)$, the $K_0$ groups are non-isomorphic, hence producing different isomorphism classes. However, if $\gcd(t_1, l_2 - 1) = \gcd(t_1', l_2 - 1)$, then we do not know whether the corresponding graphs produce isomorphic Leavitt path algebras.
The family A14 contains Leavitt path algebras having IBN that are purely infinite simple. For any Leavitt path algebra $L_K(E)$ with $E$ in the family A14, $\Delta_E = 0$, so the Franks triple determines when the graphs belonging to A14 induce isomorphic Leavitt path algebras. Now we can state the following theorem, which we have proved above.
Theorem 5.1. Let $E$ be a finite graph with two vertices whose Leavitt path algebra $L_K(E)$ has IBN. Then $L_K(E)$ is isomorphic to a Leavitt path algebra whose associated graph is one in Cases A1-A14. Moreover:
(i) In each of the Cases A1, A2, A7 and A12 the Leavitt path algebra is isomorphic to $K \times K$, $K \times K[x, x^{-1}]$, $K[x, x^{-1}] \times K[x, x^{-1}]$ and $M_2(K[x, x^{-1}])$, respectively.
(ii) In Case A3 the Leavitt path algebra is isomorphic to $K \times L(1, l_2)$. Two Leavitt path algebras $K \times L(1, l_2)$ and $K \times L(1, l_2')$ are isomorphic if and only if $l_2 = l_2'$.
(iii) In Case A4 the Leavitt path algebra is isomorphic to $M_{t_2+1}(K)$. Two Leavitt path algebras $M_{t_2+1}(K)$ and $M_{t_2'+1}(K)$ are isomorphic if and only if $t_2 = t_2'$.
(iv) In Cases A5 and A10, the Leavitt path algebras are determined by their $K_0$ groups.
(v) For every graph in Case A8 the associated Leavitt path algebra is decomposable and isomorphic to $K[x, x^{-1}] \times L(1, l_2)$, for $l_2 \ge 2$. The isomorphisms are determined by the value of $l_2$.
(vi) In Case A9 the associated Leavitt path algebra is isomorphic to $M_{t_2+1}(K[x, x^{-1}])$. The isomorphisms are determined by $t_2$.
(vii) In Case A14 the associated Leavitt path algebra is purely infinite simple (in fact, it is the only purely infinite simple one having IBN). Two Leavitt path algebras associated to graphs in this case are isomorphic if and only if their Franks triples coincide, because the determinant is zero.
(viii) In Cases A6, A11 and A13, two different graphs $E$ and $F$ with $d_E \ne d_F$ give rise to non-isomorphic Leavitt path algebras.
We conclude this paper with the following open question, whose answer, whether affirmative or negative, would complete the full classification.
Question 5.2. Given any two graphs $E$ and $F$ with $d_E = d_F$, either in the same family $C$, where $C \in \{A6, A11, A13\}$, or in the family of Case V(b), with associated sets $S_E = \{(l_1, t_1), (l_2, 0)\}$ and $S_F = \{(l_1, t_1'), (l_2, 0)\}$ satisfying $\gcd(l_1 - 1 - t_1, l_2 - 1) = \gcd(l_1 - 1 - t_1', l_2 - 1)$, are the Leavitt path algebras $L_K(E)$ and $L_K(F)$ isomorphic? | 2017-09-14T08:54:28.000Z | 2017-08-10T00:00:00.000 | {
"year": 2019,
"sha1": "b1b00585fb5e60141463f6ae259014539d2ac48f",
"oa_license": "CCBYNCND",
"oa_url": "http://acikerisim.duzce.edu.tr/xmlui/bitstream/20.500.12684/3064/1/3064.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b1b00585fb5e60141463f6ae259014539d2ac48f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
261076237 | pes2o/s2orc | v3-fos-license | Audio Difference Captioning Utilizing Similarity-Discrepancy Disentanglement
We proposed Audio Difference Captioning (ADC) as a new extension task of audio captioning for describing the semantic differences between input pairs of similar but slightly different audio clips. The ADC solves the problem that conventional audio captioning sometimes generates similar captions for similar audio clips, failing to describe the difference in content. We also propose a cross-attention-concentrated transformer encoder to extract differences by comparing a pair of audio clips and a similarity-discrepancy disentanglement to emphasize the difference in the latent space. To evaluate the proposed methods, we built an AudioDiffCaps dataset consisting of pairs of similar but slightly different audio clips with human-annotated descriptions of their differences. The experiment with the AudioDiffCaps dataset showed that the proposed methods solve the ADC task effectively and, as confirmed by visualizing the attention weights in the transformer encoder, improve those weights for extracting the difference.
INTRODUCTION
Audio captioning is used to generate the caption for an audio clip [1][2][3][4][5][6][7][8][9][10]. Unlike labels for scenes and events [11][12][13][14][15], captions describe the content of the audio clip in detail. However, conventional audio captioning systems often produce similar captions for similar audio clips, making it challenging to discern their differences solely based on the generated captions. For instance, suppose two audio clips of heavy rain are input into a conventional captioning system. The system will generate a caption describing the content of each, like "It is raining very hard without any break" and "Rain falls at a constant and heavy rate" 1 as illustrated in Fig. 1(a). The difference, such as which rain sound is louder, is difficult to understand from the generated captions in this case.
To address this problem, we propose Audio Difference Captioning (ADC) as a new extension task of audio captioning. ADC takes two audio clips as input and outputs text explaining the difference between two inputs as shown in Fig. 1. We make the ADC clearly describe the difference between the two audio clips, such as "Make the rain louder," which describes what and how to modify one audio clip to the other in the instruction form, even for audio clips with similar texts. Potential real-world applications include machine condition and healthcare monitoring using sound by captioning anomalies that differ from usual sounds.
1 These captions were taken from the Clotho dataset [2].
Figure 1: Conceptual diagram of conventional audio captioning and audio difference captioning. Audio difference captioning describes the difference between paired audio clips, while conventional audio captioning describes the contents of each.
The ADC task has two major challenges: detecting the differing content and detection sensitivity. Since the difference between a pair of audio clips can lie in the classes of the contained events or in an attribute such as loudness, the ADC needs to detect what difference to describe. When the difference lies in an attribute, the ADC needs to be sensitive enough to detect the magnitude of that attribute, such as whether the rain is hard or moderate in the example in Fig. 1.
To handle these challenges, the ADC should extract features of the difference based on cross-referencing the two audio clips. These features should carry enough information to differentiate critical attributes such as loudness. A typical choice of feature extractor would be a model pre-trained for label classification [16][17][18]. However, such models learn to discriminate sound event classes, learning what is common while ignoring subtle differences, such as whether rain is heavy or quiet, unless the class definition covers them.
To meet the requirements of the ADC mentioned above, we propose (I) a cross-attention-concentrated (CAC) transformer encoder and (II) a similarity-discrepancy disentanglement (SDD). The CAC transformer encoder utilizes the masked multi-head attention layer, which only considers the cross-attention of two audio clips to extract features of difference efficiently. The SDD emphasizes the difference feature in the latent space using contrastive learning based on the assumption that two similar audio clips consist of similar and discrepant parts.
We demonstrate the effectiveness of our proposals using a newly built dataset, AudioDiffCaps, consisting of pairs of similar but slightly different audio clips synthesized from existing environmental sound datasets [11,15] and human-annotated difference descriptions. Experiments show that the CAC transformer encoder improves the evaluation metric scores by making the attention focus only on cross-references. The SDD also improves the scores by emphasizing the differences between audio clips in the latent space. Our contributions are proposals of (i) the ADC task, (ii) the CAC transformer encoder and SDD for solving ADC, (iii) the AudioDiffCaps dataset, and (iv) demonstrating the effectiveness of these proposals.
AUDIO DIFFERENCE CAPTIONING
We propose ADC, a task for generating texts to describe the difference between two audio clips. ADC estimates a word sequence w from the two audio clips x and y.
The general framework to solve ADC includes three main functions: audio embedding, audio difference encoding, and text decoding. Audio embedding calculates two audio embedding vectors from two audio clips, respectively. Audio difference encoding captures the difference between two audio embedding vectors. Text decoding generates a description of the differences from captured differences. Audio embedding and audio difference encoding require approaches specific to ADC. In particular, difference encoding is the function unique to audio difference captioning. This function requires a model structure to capture the subtle differences between two audio clips, unlike conventional audio captioning that captures the content of a single audio clip. Moreover, the sensitivity to the subtle difference between two similar audio clips is also necessary for audio embedding. The pre-trained audio embedding models widely used for conventional environmental sound analysis tasks are often trained for classification tasks and are suitable for identifying predefined labels. Consequently, the outputs of these pre-trained audio embedding models are not sensitive to the subtle differences between audio clips with the same label. Therefore, learning to emphasize the differences between similar audio clips in the latent space is necessary when applying pre-trained audio embedding models to the ADC.
PROPOSED METHOD
Based on the above discussion, we propose the ADC system illustrated in Fig. 2. Our system consists of an audio feature extractor (red), difference encoder (blue), text decoder (green), and similarity-discrepancy disentanglement (purple).
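For illustration, the following is a minimal PyTorch-style sketch of this three-part pipeline (audio embedding, difference encoding, text decoding). It is our own simplification, not the authors' code: the module sizes, the stand-in linear audio embedding, and the vocabulary size are placeholders, and the SDD branch is omitted here.

    import torch
    import torch.nn as nn

    class ADCSkeleton(nn.Module):
        """Audio embedding -> difference encoding -> text decoding (sketch only)."""
        def __init__(self, hidden=768, vocab_size=5000):
            super().__init__()
            # Stand-in for a pre-trained audio embedding model such as BYOL-A.
            self.audio_embed = nn.Linear(64, hidden)
            enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                                   dim_feedforward=512, batch_first=True)
            self.diff_encoder = nn.TransformerEncoder(enc_layer, num_layers=1)
            dec_layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=4,
                                                   dim_feedforward=512, batch_first=True)
            self.text_decoder = nn.TransformerDecoder(dec_layer, num_layers=1)
            self.word_head = nn.Linear(hidden, vocab_size)

        def forward(self, x_feats, y_feats, caption_embeds):
            X = self.audio_embed(x_feats)      # (B, Tx, H)
            Y = self.audio_embed(y_feats)      # (B, Ty, H)
            Z = torch.cat([X, Y], dim=1)       # concatenate the two clips
            Z_hat = self.diff_encoder(Z)       # difference-aware memory
            out = self.text_decoder(caption_embeds, Z_hat)
            return self.word_head(out)         # word logits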
Audio feature extractor
The audio feature extractor uses a pre-trained audio embedding model to calculate audio embedding vectors. Two audio clips x and y are the input, and the audio embedding vectors corresponding to the clips, X ∈ R^(H×Tx) and Y ∈ R^(H×Ty), are the output, where H is the size of the hidden dimension, Tx is the time length of X, and Ty is the time length of Y.
Difference encoder
The difference encoder extracts information about the differences between the two audio clips from audio embedding vectors X and Y . To extract difference information efficiently, we utilize a cross-attention-concentrated (CAC) transformer encoder as the main function of the difference encoder. The CAC transformer encoder utilizes the masked multi-head attention layer, allowing only mutual cross-attention between two audio clips by the attention mask illustrated in the upper right of Fig. 2.
The detailed procedure is as follows. First, a special token indicating the order of the audio clips (one token per clip, each in R^(H×1)) is concatenated at the beginning of X and of Y, respectively. Next, these two sequences are concatenated to form the input of the difference encoder Z, consisting of the token for X, X, the token for Y, and Y. Then, positional encoding P is applied to Z. Finally, P(Z) is input to the CAC transformer encoder to obtain the output Ẑ, consisting of the encoded tokens and the encoded sequences X̂ and Ŷ.
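As a rough illustration of cross-attention-concentrated masking, the sketch below (ours, not the authors' implementation) builds a boolean mask that lets each frame attend only to frames of the other clip and applies it in a PyTorch multi-head attention layer; the handling of the special order tokens and the sequence lengths are simplified placeholders.

    import torch
    import torch.nn as nn

    def cross_only_mask(tx: int, ty: int) -> torch.Tensor:
        """True marks positions that are NOT allowed to attend to each other.

        The concatenated sequence is [clip X frames (tx), clip Y frames (ty)];
        within-clip attention is blocked, cross-clip attention is kept.
        """
        n = tx + ty
        mask = torch.zeros(n, n, dtype=torch.bool)
        mask[:tx, :tx] = True   # X -> X blocked
        mask[tx:, tx:] = True   # Y -> Y blocked
        return mask

    # Toy usage: 4 frames for clip X, 3 frames for clip Y, hidden size 8.
    tx, ty, hidden = 4, 3, 8
    z = torch.randn(1, tx + ty, hidden)                 # (batch, seq, hidden)
    attn = nn.MultiheadAttention(hidden, num_heads=2, batch_first=True)
    out, weights = attn(z, z, z, attn_mask=cross_only_mask(tx, ty))
    print(weights.shape)  # (1, 7, 7); within-clip entries are zero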
Text decoder
The transformer decoder is utilized as the text decoder, as in [5]. The text decoder calculates word probabilities from the output of the difference encoder Ẑ.
Similarity-discrepancy disentanglement
The similarity-discrepancy disentanglement (SDD) loss function is an auxiliary loss function aimed at obtaining a difference-emphasized audio representation. When there is an explainable difference between two audio clips, these clips consist of similar and discrepant parts. To introduce this hypothesis, we design contrastive learning to bring the similar parts closer and keep the discrepant parts apart. We propose two types of implementations that apply SDD to the input of the difference encoder Z or to its output Ẑ, as shown in Fig. 2, and call the former and latter implementations early and late disentanglement, respectively.
We explain the procedure in the case of early disentanglement. Note that the case of late disentanglement only replaces Z with Ẑ. First, Z is split along the hidden dimension and assigned to similar and discrepant parts, as in the upper-left illustration of Fig. 2. The SDD loss is then computed with SymInfoNCE, the symmetric version of the InfoNCE loss used in [19], so that the similar parts of the two clips are pulled together in the latent space while the discrepant parts are kept apart.
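The following is a minimal sketch of a symmetric InfoNCE term of this kind applied to split embeddings. It is an illustration of ours, not the paper's exact formulation: the split point, the pooling to one vector per clip, the temperature, and the simple cosine repulsion used for the discrepant halves are assumed choices.

    import torch
    import torch.nn.functional as F

    def sym_info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """Symmetric InfoNCE: matching rows of a and b are positives, all others negatives."""
        a = F.normalize(a, dim=-1)
        b = F.normalize(b, dim=-1)
        logits = a @ b.t() / temperature                    # (B, B) similarity matrix
        targets = torch.arange(a.size(0), device=a.device)
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    def sdd_loss(zx: torch.Tensor, zy: torch.Tensor) -> torch.Tensor:
        """zx, zy: pooled embeddings of the two clips, shape (B, H)."""
        h = zx.size(-1) // 2
        # 'Similar' halves: contrastive attraction via symmetric InfoNCE.
        sim = sym_info_nce(zx[:, :h], zy[:, :h])
        # 'Discrepant' halves: keep them apart by penalizing high cosine similarity
        # between paired clips (an illustrative repulsion term).
        dx = F.normalize(zx[:, h:], dim=-1)
        dy = F.normalize(zy[:, h:], dim=-1)
        rep = (dx * dy).sum(dim=-1).mean()
        return sim + rep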
EXPERIMENT
Experiments were conducted to evaluate the proposed CAC transformer encoder and SDD loss function. We constructed the AudioDiffCaps dataset, consisting of pairs of similar but slightly different audio clips and human-annotated descriptions of their differences, for the experiments.
AudioDiffCaps dataset
The constructed AudioDiffCaps dataset consists of (i) pairs of similar but slightly different audio clips and (ii) human-annotated descriptions of their differences.
The pairs of audio clips were artificially synthesized by mixing foreground event sounds with background sounds taken from existing environmental sound datasets (FSD50K [15] and ESC-50 [11]) using the Scaper library for soundscape synthesis and augmentation [20]. We used the same mixing procedure as our previous work [21]. Data labeled rain or car passing by in FSD50K were used as background, and six foreground event classes were taken from ESC-50 (i.e., data labeled dog, chirping bird, thunder, footsteps, car horn, and church bells). Each created audio clip was 10 seconds long. The maximum number of events in one audio clip was two, with 0-100% overlap (no overlap-range control applied). Each foreground event class had 32 or 8 instances in the development or evaluation set, respectively. Similar to previous work, we focused on three types of difference: increase/decrease of background sounds, increase/decrease of sound events, and addition/removal of sound events. The development and evaluation sets contained 5996 and 1720 audio clip pairs, respectively (that is, the development and evaluation sets contained 11992 and 3440 audio clips). The human-annotated descriptions were written as instructions explaining "what and how" to change the first audio clip to create the second audio clip. In the preliminary study, we found that declarative sentences, in some cases, tend to use ordinal numbers, such as "First sound is louder than second sound". Since these cases do not express what the actual difference is, the AudioDiffCaps dataset uses instruction forms with a fixed direction of change from the first audio clip to the second one, e.g., "Make the rain louder" 2. A wider variety of descriptions explaining the same concept, such as declarative sentences, should be included in future work. The presentation order of the pair to the annotator was randomly selected. Annotators were five naïve workers remotely supervised by an experienced annotator. Each pair of audio clips in the development set had between 1 and 5 descriptions (a total of 28,892), while each pair in the evaluation set had exactly five descriptions assigned to it (a total of 8600).
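As a rough illustration of how such a pair can be synthesized with Scaper, the sketch below builds two 10-second soundscapes that differ only in the SNR of one foreground event. It assumes Scaper's standard API; the directory layout, labels, SNR values, and file names are placeholders, and this is not the exact generation script used for AudioDiffCaps.

    import scaper

    def make_clip(out_wav, out_jams, dog_snr, seed=0):
        # 'fg' and 'bg' are placeholder folders organized by label, as Scaper expects.
        sc = scaper.Scaper(duration=10.0, fg_path='fg', bg_path='bg', random_state=seed)
        sc.ref_db = -30
        sc.add_background(label=('const', 'rain'),
                          source_file=('choose', []),
                          source_time=('const', 0))
        sc.add_event(label=('const', 'dog'),
                     source_file=('choose', []),
                     source_time=('const', 0),
                     event_time=('uniform', 0, 7),
                     event_duration=('const', 3),
                     snr=('const', dog_snr),
                     pitch_shift=None,
                     time_stretch=None)
        sc.generate(out_wav, out_jams)

    # A pair differing only in how loud the dog is relative to the rain;
    # the shared seed keeps the event placement identical across the two clips.
    make_clip('pair_0_first.wav', 'pair_0_first.jams', dog_snr=0)
    make_clip('pair_0_second.wav', 'pair_0_second.jams', dog_snr=10)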
Experimental conditions
We used 10% of the development set for validation. The optimizer was Adam [22]. The number of epochs was 100. We used the BLEU-1, BLEU-4, METEOR, ROUGE-L, CIDEr [23], SPICE [24], and SPIDEr [25] as evaluation metrics. They were also used for conventional audio captioning [26].
We used BYOL-A [27], a pre-trained audio embedding model, as the audio feature extractor in our ADC implementation, and we fine-tuned the BYOL-A throughout the experiments. The transformer encoder and decoder used the official PyTorch implementations. The number of layers was 1. The hidden size was 768. The number of heads was 4. The activation was ReLU. The dimension of the feedforward layer was 512. The dropout rate was 0.1. For the attention mask of the transformer encoder, we compared two types: one with the proposed cross-attention mask and the other without a mask. The text decoder used the teacher forcing algorithm during training and the beam search algorithm [28,29] during inference. The value of λ was empirically set to 0, 0.5, 1.0, or 2.0.
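For reference, the stated encoder/decoder configuration corresponds to something like the following PyTorch construction; this is a sketch of ours with arbitrary variable names, not the authors' training code.

    import torch.nn as nn

    hidden, heads, ff_dim, layers, dropout = 768, 4, 512, 1, 0.1

    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, dim_feedforward=ff_dim,
                                   dropout=dropout, activation='relu', batch_first=True),
        num_layers=layers)

    decoder = nn.TransformerDecoder(
        nn.TransformerDecoderLayer(d_model=hidden, nhead=heads, dim_feedforward=ff_dim,
                                   dropout=dropout, activation='relu', batch_first=True),
        num_layers=layers)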
Results
The evaluation metric results are shown in Table 1, where bold font indicates the highest score, and "Mask" and "Disent." indicate the attention mask used in the transformer encoder and the input of the SDD loss function, respectively. When the CAC transformer encoder was evaluated by comparing the top two rows, the proposed method had scores superior or equivalent to the conventional method in all evaluation metrics. There was no significant difference in the metrics related to the degree of matching with single-word references, such as BLEU-1. One likely reason is that, for these metrics, scores above a certain level can be obtained simply by outputting common words such as "a" and "the" in arbitrary sentences. In contrast, the scores of BLEU-4, ROUGE-L, CIDEr, and SPIDEr, which are affected by the accuracy of consecutive words, were improved by the proposed cross-attention mask. Therefore, the proposed cross-attention mask appears to make the extraction of difference features more efficient and to simplify the training of the text decoder; as a result, phrase-level accuracy improved.
The effect of SDD was verified from the results of the second to eighth lines. The results in (a) and (b) were for the conventional transformer without the cross-attention mask or SDD loss and for the CAC transformer without SDD loss (λ = 0), respectively. Those from (c) to (h) were the results when using early/late disentanglement. Since the scores of BLEU-4, ROUGE-L, CIDEr, and SPIDEr improved under all conditions when comparing (b) with the others, the SDD loss function was effective for the audio difference captioning task. The improvement in the case of late disentanglement, (f), (g), and (h), was remarkable, and the best scores in all evaluation metrics were obtained with late disentanglement. In other words, it was essential to use the information of the comparison target to decompose the similar and discrepant parts in the feature space. This corresponds to the fact that the difference is determined relative to the comparison target. Fig. 3 shows one example from the evaluation data, together with the estimated caption and the attention weights of the transformer encoder from each system.
Figure 3: Examples of output caption and attention weights. The leftmost column shows the Mel-spectrograms of the two audio clips and one reference caption. The three panels on the right show the attention weights of the transformer encoder and the output captions.
The leftmost column is the Mel-spectrogram of the two input audio clips and one of the reference captions. The three on the right are the attention weights of the transformer encoder and the output caption, where the attention weight shown is the average over multiple heads. The audio clips to the left of and above the weights correspond to the input and memory of the transformer, respectively. The areas colored pink and yellow on the weights correspond to the dog barking. Since there was a difference in the loudness of the dog barking between the two clips, the attention was expected to focus on areas where pink and yellow overlap in order to extract the difference.
First, in (a), since the attention weights were not constrained, they were also distributed widely over areas other than those mentioned above, compared with the other two systems. On the other hand, the attention weights of (b) and (h) concentrated on areas where pink and yellow overlap, since attention between the same input and memory was unavailable. Comparing (b) and (h), while in (b) the attention on the part of the memory containing the dog barking was large at every time frame, in (h) more attention was paid to the pink-and-yellow overlapping areas where both the input and the memory contain the barking. Since late disentanglement requires that similar and discrepant parts be retained in the output of the transformer encoder computed with these attention weights, late disentanglement appears to induce attention to the parts that actually differ when the two sounds are compared, rather than to the parts that are merely likely to contain a difference according to the training-data distribution, such as a dog barking.
CONCLUSION
We proposed Audio Difference Captioning (ADC) as a new extension task of audio captioning for describing the semantic differences between similar but slightly different audio clips. The ADC solves the problem that conventional audio captioning sometimes generates similar captions for similar but slightly different audio clips, failing to describe the difference in content. We also propose a cross-attention-concentrated transformer encoder to extract differences by comparing a pair of audio clips and a similarity-discrepancy disentanglement to emphasize the difference feature in the latent space. To evaluate the proposed methods, we newly built an AudioDiffCaps dataset consisting of pairs of similar but slightly different audio clips and human-annotated descriptions of their differences. We experimentally showed that, since the attention weights of the cross-attention-concentrated transformer encoder are restricted to the mutual direction between the two inputs, the differences can be extracted efficiently. Thus, the proposed method solved the ADC task effectively and improved the evaluation metric scores.
Future work includes utilizing a pre-trained generative language model such as BART [30] and applying a wider variety of audio events and types of differences.
ACKNOWLEDGMENTS
BAOBAB Inc. supported the annotation for the dataset. | 2023-08-24T06:41:07.276Z | 2023-08-23T00:00:00.000 | {
"year": 2023,
"sha1": "2e3a53b871dd008dcf4fc378468e94c60fd6d5a5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2e3a53b871dd008dcf4fc378468e94c60fd6d5a5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
244761512 | pes2o/s2orc | v3-fos-license | High‐throughput and multi‐phases identification of autoantibodies in diagnosing early‐stage breast cancer and subtypes
Abstract Autoantibodies (AAbs) targeted against tumor-associated antigens (TAAs) have the potential for early detection of breast cancer. Here, 574 early-stage breast cancer (ES-BC) patients comprising 4 subtypes (Luminal A, Luminal B, HER2+, TN), 126 benign breast disease (BBD) patients, and 199 normal healthy controls (NHC) were separated into three phases to discover, verify, and validate AAbs. In the discovery phase, using a high-throughput protein microarray, 37 AAbs with sensitivities of 31.25%-86.25% and specificities over 73% in ES-BC, and 40 AAbs with different positive rates between subtypes, were identified as candidates. In the verification phase, 18 AAbs were significantly increased compared with the Control (BBD and NHC) in the focused array. Ten of the 18 AAbs exhibited a significant difference between subtypes (P < .05). In the ELISA validation phase, 5 novel AAbs (anti-KJ901215, -FAM49B, -HYI, -GARS, -CRLF3) exhibited significantly higher levels in ES-BC compared with BBD/NHC (P < .05). The sensitivities of the individual AAbs and of a 5-AAbs panel were 20.41%-28.57% and 38.78%, whereas the specificities were over 90% and 85.94%, respectively. Simultaneously, 4 AAbs, all except anti-GARS, differed significantly between the TN and non-TN subtypes (P < .05). We constructed 3 random forest classifier models based on the AAbs to discriminate ES-BC from Control or BBD, and to discern the TN subtype, which yielded areas under the curve of 0.870, 0.860, and 0.875, respectively. Biological interaction analysis revealed that the 4 TAAs other than KJ901215 were associated with well-known BC-related proteins. This study discovered and stepwise validated 5 novel AAbs with the potential to diagnose ES-BC and discern the TN subtype, indicating the easy-to-detect and minimally invasive diagnostic value of serum AAbs ahead of biopsy for future application.
| INTRODUCTION
According to GLOBOCAN 2020, BC has risen to be the most prevalent malignant tumor and remains the leading cause of cancer-related death in women worldwide. 1 China accounts for 18.4% of the global incidence and 13.2% of the global deaths. Early diagnosis and treatment are critical to improving BC survival, as the 5-year relative survival rate for the 44% of patients with Stage I BC approaches 100%. 2 Mammography is the contemporary detection modality, but it cannot detect carcinoma growing within normal breast architecture and has low specificity in women with dense breasts. 3,4 Therefore, rapid and cost-effective blood-based biomarkers are needed. Serum AAbs are attractive candidates: AAbs against TAAs are easier to detect because they are amplified by clonal expansion of B cells and have a longer half-life, and AAbs produced early during tumorigenesis are present in serum several months, even years, before clinical diagnosis. 5 The AAbs most often reported as biomarkers in BC are those against p53, MUC1, and HER2/Neu. Anti-p53 is the most studied, with a ubiquitous presence in various cancers. 6 Anti-MUC1 is a classic AAb that frequently appears in BC and other cancers and that was reported to show no significant difference between BC patients and controls. 7 Anti-HER2 was found to be increased in BC before and at the time of diagnosis in a rigorously designed study, but with a relatively small cohort. 8 The positive rates of these 3 and other AAbs ranged from 10% to 20%, with specificities of ~90%. It is acknowledged that individual AAbs suffer from low sensitivity, so a combinatorial AAbs panel is likely to be a better approach for early detection of BC. However, AAbs panels have performed variably across studies, even with similar AAbs. AAbs against 7 TAAs (p53, c-MYC, HER2, NY-ESO-1, BRCA1, BRCA2, and MUC1) in a European cohort reached a sensitivity of 60% or 45% to distinguish primary BC or ductal carcinoma in situ from controls at 85% specificity. 9 More recently, 6 AAbs in combination (p53, Cyclin B1, p16, p62, 14-3-3ξ, survivin) used to construct discriminant models showed a high sensitivity of 69.5%-78.2% at 64.8%-89.0% specificity in a Chinese cohort. 10 Additionally, BC is highly heterogeneous in the expression of estrogen receptor (ER), progesterone receptor (PR), and epidermal growth factor receptor ERBB2/HER2, and is now divided into 4 accepted subtypes: Luminal A, Luminal B, HER2-enriched, and basal-like (most of which are triple negative). 12,13 There are still no validated serum biomarkers to characterize the different subtypes, and only a few studies have investigated AAbs in specific subtypes. 14 In this study, we utilized a comprehensive high-throughput protein microarray containing ~20 K proteins from the human proteome to survey novel AAbs for diagnosing ES-BC and characterizing subtypes. Furthermore, we established discriminating classifier models based on the 5 AAbs for diagnosis of ES-BC and subtyping.
| Construction of high-density microarrays and serum profiling assays
High-density microarrays, HuProt TM version 3.0, were provided by CDI Laboratories, Inc. HuProt TM library clones, derived from public open reading frames (ORFs) or independently synthesized, were expressed as proteins with a GST-His6 tag through a yeast expression system. 19 HuProt TM v.3.0 contained 21 888 proteins covering >81% of the canonically expressed proteins defined by the Human Protein Atlas; the 21 888 proteins plus 2304 controls were printed as 24 blocks onto glass slides.
The experimental procedures for AAbs profiling have been described in previous studies. 20 Briefly, microarrays were blocked with 5% BSA diluted in PBS at room temperature for 1.5 h. After discarding the BSA, microarrays were incubated for 1 h with serum samples diluted 1:1000 in 5% BSA. After washing, Alexa Fluor 647 goat anti-human IgG (Jackson) diluted 1:1000 in 5% BSA was added to the microarrays with 0.1% PBS, and the microarrays were incubated at room temperature in darkness for 1 h. After thorough washing with PBST, microarrays were dried naturally and scanned using a GenePix 4000B microarray scanner (Grace Bio-Labs) with a 635 nm excitation laser. GenePix Pro v.6.0 software (Molecular Devices) was used to obtain signal intensities as the foreground signal divided by the background signal (F/B). Positive hits were defined as average signal intensities above the cut-off, set as the mean + 6SD of all the signal points per chip after block correction and Z-score normalization. 21
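A minimal sketch of this positive-hit rule is given below for illustration only; the array size, the toy intensity distribution, and the omission of block correction are our own simplifications.

    import numpy as np

    def positive_hits(signals: np.ndarray, n_sd: float = 6.0) -> np.ndarray:
        """signals: 1-D array of per-protein F/B intensities for one chip,
        assumed already block-corrected. Returns a boolean mask of positive hits."""
        z = (signals - signals.mean()) / signals.std()   # Z-score normalization
        cutoff = z.mean() + n_sd * z.std()               # mean + 6 SD of the chip;
        return z > cutoff                                # equivalent to Z-score >= 6

    rng = np.random.default_rng(0)
    chip = rng.lognormal(mean=0.0, sigma=0.3, size=20000)  # toy intensities
    print(positive_hits(chip).sum(), "proteins called positive")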
| Construction of focused arrays and serum profiling assays
Candidate proteins from the HuProt TM selection and from the literature were picked to fabricate focused arrays, laid out as 2 × 7 subarrays separated by a 14-chamber rubber gasket. The hybridization process, scanning, and data acquisition were similar to those of the high-density microarray, except that the blocking and dilution buffer was changed to 3% BSA.
| ELISA assay
Recombinant proteins (CDI) with the GST tag were used to detect serum AAbs according to a protocol described in previous studies. 22 Briefly, 50 ng of recombinant protein was coated onto each well of 96-well plates (Corning) at 4℃ overnight. After blocking with 5% skimmed milk for 2 h and washing with 0.2% PBST, 50 μL of each serum sample diluted 1:100 was added and incubated at 37℃ for 1 h. Next, 50 μL of peroxidase-conjugated goat anti-human IgG antibody (Jackson) diluted 1:20 000 was added and incubated at 37℃ for 1 h; the chromogenic reaction was then conducted at room temperature for 15 min, after which the reaction was stopped. Plates were scanned on a Multiskan GO automatic microplate reader (Thermo), and the OD value of the blank control was subtracted from the OD of each well.
| Study design and objects
In total, 899 sera from 574 ES-BC patients, 126 BBD patients, and 199 NHC participants were collected for the high-density HuProt TM array, the low-density focused array, and ELISA detection, used for novel AAbs discovery, verification, and validation, respectively (Table 1).
| Discovery of candidate AAbs in high-density protein microarray
In the discovery cohort, consisting of 80 ES-BC patients, 20 BBD patients, and 19 NHC participants, the high-density microarray showed a linear correlation of 0.93 between parallel duplicates, ensuring the reproducibility of serum IgG AAbs detection (Figure S1A).
Using a stringent cut-off of Z-score ≥ 6 to determine positive AAbs, 37 differential IgG AAbs candidates were identified for the comparison of ES-BC vs BBD/NHC according to the filtration criteria: Fisher exact test P < .05 for ES-BC vs BBD/NHC, positive rate over 30% in ES-BC, and specificity ranked at the top. The positive rates of the 37 AAbs in patients with ES-BC ranged from 31.25% to 86.25%, while the positive rates of the majority of AAbs were no more than 15% in BBD/NHC (Table S1). In total, 40 AAbs were selected by considering the top-ranking positive rate ratios and P < .05 in pairwise comparisons between the ES-BC subgroups (Table S2). Profiles of the differential AAbs showed higher levels in ES-BC compared with NHC, and relatively higher levels in the HER2+ subtype, but the differences were less obvious between subtypes and warranted later validation (Figure 2). Finally, the aforementioned 76 AAbs from the inter-group and intra-group comparisons in ES-BC, together with 24 other available AAbs, formed a 100-AAbs panel for focused array fabrication (Table S3).
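A minimal sketch of this kind of per-AAb filtering is shown below; the counts are made up and the exact ranking rules of the study are not reproduced.

    from scipy.stats import fisher_exact

    def keep_candidate(pos_case, n_case, pos_ctrl, n_ctrl,
                       min_rate=0.30, alpha=0.05):
        """Fisher exact test on a 2x2 positivity table plus a minimum positive rate."""
        table = [[pos_case, n_case - pos_case],
                 [pos_ctrl, n_ctrl - pos_ctrl]]
        _, p = fisher_exact(table)
        sensitivity = pos_case / n_case
        specificity = 1 - pos_ctrl / n_ctrl
        return p < alpha and sensitivity >= min_rate, p, sensitivity, specificity

    # Toy example: 30/80 positive in ES-BC vs 3/39 positive in BBD/NHC.
    print(keep_candidate(30, 80, 3, 39))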
| Verification of AAbs in focused protein microarray
Signals of the negative controls for ES-BC and Control are given in Table S4. In total, 18 AAbs were taken as preliminarily validated biomarkers, highlighted in a volcano plot (Figure 3B). Levels of the 18 AAbs were significantly higher than those in the Control (Figure 3C). Generally, the majority of the 18 AAbs exhibited a gradually descending trend from ES-BC to BBD to NHC, and a significant difference was observed for ES-BC vs NHC. Compared with BBD, anti-GARS remained significantly higher in patients with ES-BC (Figure S1D). In the comparisons between subtypes and Control shown in Figure 3D, A vs HER2+ contained the greatest number of differential AAbs (n = 10; Figure 3D,E).
| Validation of AAbs in ELISA
The ability of the validated AAbs to distinguish ES-BC from BBD or NHC, respectively, is shown in Figure 4E and Table S5.
This is a newly identified AAbs panel for diagnosing ES-BC, distinct from previously reported combinational AAbs in BC. In addition, as expected, the BI-RADS categories of breast imaging in ES-BC tended toward higher grades (4, 5, and 6) than those in BBD (3 and 4) (Figure 5A). We also observed that the levels of the 5 AAbs generally increased with increasing BI-RADS category, indicating a positive association between them and underlining their mutually complementary diagnostic value (Figure 5A). Moreover, the 5 AAbs were not associated with clinical characteristics including age, stage, lymph node invasion, vascular invasion, nerve invasion, and EGFR in IHC, even though ES-BC patients at Stage IIB tended to have higher AAbs levels (Figure 5B).
We then investigated the 5-AAbs profiles in the subtypes and in BBD/NHC. First, compared with BBD/NHC, the Luminal A, Luminal B, and HER2+ subtypes had significantly increased levels of the 5 AAbs (P < .05). The 5 AAbs in the TN subtype also tended to increase, but without significance (Figure 5C and Table S6). Therefore, we compared the non-TN subtypes (Luminal A plus Luminal B plus HER2+) with the TN subtype. In total, 4 AAbs showed significantly higher levels (P < .05) in non-TN than in the TN subtype, and anti-GARS displayed marginal significance (P = .050) between the non-TN and TN subtypes (Figure 6A).
The positivity of the 5 AAbs in the subtypes varied from 14.29% in TN to 36.59% in HER2+, and the positive rates of anti-KJ901215, anti-FAM49B, and anti-CRLF3 were >30% in one or more non-TN subtypes (Figure 6B and Table S7).
| Classifier models and interaction network analysis
We further established classifier models for the diagnosis of ES-BC using RF based on the ELISA cohort. After oversampling, the cohort was randomly partitioned into training and testing sets at a ratio of 70%:30%. Ten-fold cross-validation was performed in the training set, followed by validation in the testing set. We built RF models to discriminate ES-BC from Control, to discriminate ES-BC from BBD, and to discern the TN subtype; as noted above, these yielded areas under the curve of 0.870, 0.860, and 0.875, respectively. Related expression data are shown in Figure S2D. Similarly, except for HYI, which showed an opposite trend without significance, the mRNA levels of the 3 proteins in tissues from the TCGA and GTEx databases were higher in BC than in normal controls (Figure S2E).
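A minimal sketch of this modelling pipeline is shown below for illustration; the oversampling method, hyperparameters, and data are placeholders, not the study's exact settings.

    import numpy as np
    from imblearn.over_sampling import RandomOverSampler
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(700, 5))                            # toy OD values for 5 AAbs
    y = (X[:, 0] + rng.normal(size=700) > 0.8).astype(int)   # toy case/control labels

    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3,
                                              random_state=0, stratify=y_bal)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("10-fold CV AUC:", cross_val_score(clf, X_tr, y_tr, cv=10,
                                             scoring='roc_auc').mean())
    clf.fit(X_tr, y_tr)
    print("Test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))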
| DISCUSSION
The lack of validated serum molecular biomarkers to detect ES-BC more accurately is a major obstacle to prolonging patient survival; therefore, it is imperative to discover novel biomarkers. 23 Notably, the lower levels of all 5 AAbs, especially anti-KJ901215 and anti-CRLF3, in TN compared with the other subtypes were probably a result of the "cold" tumor immune milieu in TN. 34 The ability to discriminate the TN subtype is significant because of its association with a poor prognosis. Previous studies using the NAPPA microchip with 10 000 antigens identified a 13-AAbs panel to differentiate basal-like BC (most of which were TN) from controls with 33% sensitivity and 98% specificity, but the inestimable effect of therapy on AAbs was an added caveat because only 52% of the basal-like BC cases were pre-treated. 14 Minimally invasive serum AAbs characterization prior to invasive biopsy is meaningful for early diagnosis, as it is both more cost effective and more acceptable to patients. In addition, the use of a comprehensive proteome-wide microchip facilitated the de novo identification of novel AAbs, and stepwise validation ensured the reliability of the AAbs. More importantly, the convenient, rapid, and widely accessible ELISA detection suggests potential for future clinical translation.
However, this retrospective case-control study used serum samples obtained from a single center and is therefore subject to selection bias, a small sample size, and a lack of in-depth molecular function experiments.
Additional efforts are warranted especially in prospective multicenter studies with larger sample sizes to extrapolate the 5 AAbs to the general population and further investigate their biological function.
Altogether, to the best of our knowledge, the discovery and stepwise validation of these 5 AAbs in ES-BC is reported here for the first time, and we characterized different subtypes of ES-BC with AAbs. The findings suggest the potential of the 5 AAbs to distinguish BC at early stages and to identify the TN subtype; further investigation of their roles in diagnosis is needed in the future.
ACKNOWLEDGMENTS
We thank all the participants in this study and thank Dr. Heng Zhu's laboratory at Johns Hopkins University for providing microarrays.
This work was supported by the China National Major Project for New Drug Innovation (2017ZX09304015, 2019ZX09201-002).
DISCLOSURE
The authors have no conflict of interest. | 2021-11-30T06:22:57.711Z | 2021-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "d6cbbdd3bd2b68af38373515ece3b92f0064ffc3",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cas.15227",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e95ee59d31dc6f9b7803afe17fa57aecd0a0d514",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8743777 | pes2o/s2orc | v3-fos-license | Emissivity Measurements of Foam-Covered Water Surface at L-Band for Low Water Temperatures
For a foam-covered sea surface, it is difficult to retrieve sea surface salinity (SSS) with L-band brightness temperature (1.4 GHz) because of the effect of a foam layer with wind speeds stronger than 7 m/s, especially at low sea surface temperature (SST). With foam-controlled experiments, emissivities of a foam-covered water surface at low SST (−1.4 °C to 1.7 °C) are measured for varying SSS, foam thickness, incidence angle, and polarization. Furthermore, a theoretical model of emissivity is introduced by combining wave approach theory with the effective medium approximation method. Good agreement is obtained upon comparing theoretical emissivities with those of experiments. The results indicate that foam parameters have a strong influence on increasing emissivity of a foam-covered water surface. Increments of experimental emissivities caused by foam thickness of 1 cm increase from about 0.014 to 0.131 for horizontal polarization and 0.022 to 0.150 for vertical polarization with SSS increase and SST decrease. Contributions of the interface between the foam layer and water surface to the foam layer emissivity increments are discussed for frequencies between 1 and 37 GHz.
Introduction
Passive microwave measurements of sea surface brightness temperatures allow retrievals of geophysical variables, such as wind speed, sea surface salinity (SSS), and sea surface temperature (SST). The Aquarius Mission and the European Soil Moisture and Ocean Salinity Mission [1,2] are prime examples. However, under high wind speeds, the foam layer produced by wave breaking over a sea surface always affects the microwave emissive brightness temperature, because foam permittivity is different from that of seawater [3,4]. To adequately account for foam effects on brightness temperature or emissivity, one needs to understand the microwave electromagnetic properties of a foam layer, such as permittivity and emissivity. In fact, these two complex properties are related not only to foam microstructure parameters such as air volume fraction (AVF), foam layer thickness, and the size of seawater-coated air bubbles, but also to microwave frequency, SST, and SSS [5][6][7][8][9][10][11]. Although the sea foam layer can increase sea surface emissivity [12], this mechanism is not clearly understood, especially with regard to calculating or predicting the emissivity and permittivity of that layer.
Recently, much research has focused on foam permittivity and emissivity through both experimental and theoretical investigation, toward developing a specific forward geophysical model for ocean remote sensing. Over the sea surface, the sea foam layer is an aggregation of seawater-coated air bubbles and of free water in the interstitial spaces between air bubbles [13][14][15]. Therefore, several effective medium approximations (EMA) have been developed to quantitatively calculate the effective permittivity of a foam layer for microwave wavelengths greater than the mean air bubble size [16][17][18][19]. For example, considering the interactions of dense coated spherical particles, Liu et al. [20] developed a Rayleigh method to predict the effective permittivity of a foam layer at different microwave frequencies. Anguelova [3] systematically investigated well-known effective permittivity formulae of composite media according to their applicability to the sea foam layer. To investigate the emissivity increment induced by a foam-covered sea surface, theoretical models [8][9][10][11][16][17][18][19][20][21] were developed with electromagnetic wave theory, the microwave vector radiative transfer equation (VRTE), and EMA theory. For instance, Guo et al. [17] investigated the influence of foam microstructure, microwave frequency, and foam-layer thickness on the emissivity of a foam-covered sea surface with the quasi-crystalline approximation and the VRTE, by treating the foam as densely packed air bubbles coated with seawater. To disclose the effects of other foam parameters, such as the ratio of the coated air bubbles' inner to outer radii, coherent wave interaction, and the sticky force parameter, on foam emissivities, microwave radiative transfer theory has also been applied to the foam layer [7,9,22,23]. For an AVF with a vertical distribution in the foam layer, Wei [10] proposed an EMA method to estimate the foam emissivity for a non-uniform AVF. Similarly, Anguelova et al. [5] and Raizer [21] presented radiative transfer models for estimating the emissivity of a vertically structured foam layer at microwave frequencies. Although the aforementioned theoretical models have demonstrated the influence of foam structure on emissivity, it is still difficult to calculate foam layer emissivity.
To improve the accuracy of theoretical models and obtain more experimental data on foam layers, various controlled experiments on foam-covered sea surfaces have recently been conducted to measure the emissivity and geometric parameters of those layers, such as the bubble size distribution, coating thickness, foam thickness, and AVF [6]. As an example, at 10.8 GHz and 36.5 GHz, Rose et al. [8] obtained a formula for foam emissivity by means of a power-series polynomial of incidence angles for a foam thickness of 2.8 cm. To investigate the effects of breaking waves and the foam layer on sea surface brightness temperatures, Padmanabhan et al. [24] conducted an emissivity experiment on the wave-breaking surface at 10.8, 18.7, and 37 GHz. With experiments on various foam shapes and foam-water interfaces, Williams [25] showed that the meniscus of the foam-water interface can contribute a significant fraction of the foam emissivity increment at 9.2 GHz. With an artificial foam experiment at 37.5 GHz, Millitskii [26] revealed that the major contributor to the foam emissivity increase is the thin monolayer of bubbles near the foam-water interface. To address the effects of SST and foam thickness on foam emissivity at 6.8 GHz, Wei et al. [11] conducted an experiment with a completely foam-covered surface, obtaining averaged emissivity increments, for a foam thickness of 1 cm, from 0.25 to 0.35 for both horizontal (H) and vertical (V) polarizations at incidence angles from 20° to 40°. To eliminate foam layer influences on SSS retrieval at L-band (1.4 GHz), Camps et al. [6] carried out a specific experiment to measure foam emissivity for various SSSs and higher SSTs (≥ 14 °C). The above experiments were conducted at water temperatures of 10 °C-30 °C. There have been few experiments focused on emissivity at low SST.
For remote sensing of SSS, SST is a key parameter affecting SSS retrieval because the absolute value of the sensitivity of the 1.4 GHz brightness temperature to SSS declines with decreasing SST [27][28][29]. Moreover, in high-latitude ocean regions, SST is low, from −2 °C to 5 °C. To develop a theoretical model of a foam-covered sea surface at L-band (1.4 GHz), we performed emissivity experiments on such a surface at low SST for varying SSSs, so that the emissivity increment induced by the foam layer could be estimated for various foam factors. Furthermore, based on the experimental data on foam parameters, a refractive formula for the foam permittivity is determined to simulate the measured emissivities using a two-layer emission model derived from the wave approach theory.
Measurement Approach
To derive the emissivity of a foam-covered sea surface from measured brightness temperatures of the foam layer and atmospheric downwelling radiation, the brightness temperature noise of the foam experimental system had to be reduced. If the brightness temperatures of both the foam-free flat and the foam-covered water surfaces are measured within a short time, the brightness temperature noise T_p^N of the experimental system can be estimated from that of the foam-free calm water surface, where the subscript p = H or V corresponds to horizontal or vertical polarization, respectively.
In the foam generation procedure, the foam generating area (Figure 1b) is the sum of foam region "A" (i.e., water-coated air bubbles over the water surface) and air-water mixture region "B" (i.e., air bubbles immersed in water).Foam coverage fraction w1 and air-water mixture coverage fraction w2 were calculated with the foam generating area and L-band automatic radiometer (LBAR) antenna-boresighted area over the experimental water surface at different incidence angles.Then, the coverage fraction of the foam-free water surface was where the unknown emissivity mixt p e of the air-water mixture can be calculated by the following method.Because the microwave wavelength of L-band (1.4 GHz) is larger than the size of air bubbles in the air-water mixture, that mixture is regarded as an effective medium.Then, the effective permittivity ε e of the mixture is estimated by the Maxwell-Garnett Equation ( 4) of the spherical composite (i.e., air bubbles embedded in seawater) [30]: where 0 (ε ε ) / (ε 2ε ) a w a w b = − + and ε a and ε w are the dielectric constants of air and seawater, respectively.The seawater dielectric constant is a function of microwave frequency, SST and SSS [31].
The AVF aw f of the air-water mixture was extracted by Equation ( 4), for which effective conductivities of the air-water mixture and seawater were measured with a conductivity instrument.Note that the conductivity and permittivity are interchangeable in that equation.Furthermore, it is assumed that the air-water mixture surface is flat.Emissivity mixt p e is estimated by the Fresnel reflection coefficient.
Experiment Description
A foam emission experiment was conducted in December 2012 at Tangdao Bay of Qingdao to measure foam emissivity with the LBAR designed by the National Space Science Center of the Chinese Academy of Sciences. The LBAR was installed on a trestle platform of height 4.5 m over a foam-covered water pool of length 32 m, width 11 m, and depth 1.2 m. The LBAR height ensured far-field conditions for the conic antenna of bore diameter 0.5 m (Figure 1a). The incidence angle was automatically recorded by the radiometer control system. The LBAR had a dual-polarization (V and H) antenna with a 25° half-power beam width, a sensitivity of 0.2 K (Kelvin), and an integration time of 1 s. With a hot load in a temperature-controlled box coupled with a noise source and a cold reference at ambient temperature, combined with an outside microwave absorber and cold space, the on-site calibration error of the brightness temperature was less than 1 K for a flat sea surface.
Foam generators fixed at the bottom of the water pool covered an area of length 12 m and width 11 m. During the experiment, a Sony DSC-HX7 digital camera and a Sony HDR-PJ50E video camera were used to record the foam generation area, foam thickness, and bubble sizes in the foam layer (Figure 1b-d). SST was −1.4 °C to 1.7 °C, and SSS was controlled by adding sea salt, with salinity between 31 and 38 psu. The conductivity and temperature sensors of an Infinity-A7CT (JFE Advantech Co. Ltd., Kobe, Japan) were used to measure SST, SSS, and the conductivities of seawater and air-water mixtures. The mean diameter of the measured air bubbles within the foam layer was about 1.6 mm. The foam generation area (Figure 1b) was calculated from the incidence angle, microwave beamwidth, and antenna height. The ratio of w1 to w2 was statistically estimated at ~1.2 by analyzing the areas of the foam and air-water mixture regions in photos of the foam generation surfaces. The effective conductivities of the air-water mixture and seawater were measured with the conductivity instrument and were used to retrieve the AVF f_aw of the air-water mixture using Equation (4) (the retrieved value in our experiments was f_aw = 0.05). For each experiment with variable SST and SSS, the two measurements of the flat-water and foam-covered surfaces were completed within an hour. First, the brightness temperatures of the calm water surface, the sky, and the atmospheric downwelling radiation were measured within the first half hour at different incidence angles, when the antenna scanned the water surface and the sky (i.e., including the atmospheric downwelling radiation and the attenuated cosmic radiation brightness temperature), respectively. Similarly, the brightness temperatures of the foam-covered water surface and the sky downwelling radiation were measured within the last half hour. Here, the average sky brightness temperature T_p^sky for each experiment was used in Equation (3).
Theoretical Emissivity Model
To seek a theoretical emissivity model of the foam-covered water surface, we regarded the foam as a medium consisting of densely packed air bubbles embedded in seawater. The foam-covered surface can be modeled by three layers: the air layer on top (Layer 0), the foam layer in the middle (Layer 1), and seawater at the bottom (Layer 2), with the interfaces between the layers assumed flat. With the wave approach theory for a two-layer medium [32], the emissivity is given by Equation (5), where the seawater dielectric constant ε_w at low temperature and L-band is given in [31], and ε_e is the effective permittivity of the foam layer. Clearly, the emissivities of theoretical Equation (5) are determined by the effective permittivity of the foam layer and its thickness. Thus, given that thickness, the foam effective permittivity will be the sole factor in modeling the experimental emissivities. It is well known that there are many effective permittivity formulas for two-phase composites, such as the Maxwell-Garnett (MG), refractive model (RM), Looyenga model (LM), and Polder-van Santen model (PSM) [3]. The question is which formula is suitable for modeling our measured emissivities. As a reference, Anguelova [3] theoretically ranked these permittivity formulae according to their applicability to sea foam; the three top-ranking formulae were RM (Equation (8)), LM (Equation (9)), and MG (Equation (4)), respectively.
In these formulae, f_a is the AVF of the foam layer, which replaces f_aw in Equation (4); this AVF is very important for computing the emissivities.
To seek the best theoretical model for our experimental emissivities, we constructed a cost function χ² (Equation (10)) from the experimental data and the theoretical models, so that the valid theoretical model attains the minimum cost function by tuning the AVF; here, the foam layer AVF in our experiment is unknown. To determine the best theoretical model with the measured data, we should constrain the AVF in Equation (10) because the theoretical emissivity model depends on it. Some references show that the average AVF of an artificial foam layer is larger than 85% [6,8,11], which differs from that of natural sea foam produced by wave breaking, i.e., 55% to 76% [13]. In an example artificial foam experiment, Rose et al. [8] found that the AVF is about 85% in the center of the foam layer and that the AVF on its top surface increases as the foam ages and water drains from the interstitial areas between bubbles. Chen et al. [7] showed, by analyzing bubble images, that the AVF of artificial foam was 80% to 90% in most cases; an AVF of 90% was adopted for their emissivity model. The AVF difference between artificial and natural sea foam layers results from air bubbles continuously aggregating over the sea surface during artificial foam experiments, so that the AVF of the artificial layer is larger than that of the natural layer. Here, an AVF constraint of greater than 85% was used to select the best theoretical model for our experiment. Based on the theoretical models of Equations (4), (8) and (9), the minimum root mean square errors (RMSE) between the experimental and theoretical emissivities are presented in Table 1, along with the tuned AVF f_a of the foam layer. From Table 1, although the minimum RMSEs of the LM emissivity model at both H and V polarizations are smaller than those of the other two models, its average AVF does not meet the AVF constraint. Thus, considering the AVF values, the RM and MG models are valid for our experiments. However, because the RMSEs of the MG model are larger than those of the RM, the latter model was chosen to analyze our experiments. Here, the effect of water and foam surface roughness on the measured brightness temperature is neglected.
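To illustrate the kind of calculation this model selection relies on, the sketch below computes a coherent air-foam-water (two-layer) emissivity with the refractive mixing rule written in its common square-root form. It uses textbook Fresnel and phase-coherent layer formulas rather than the exact expressions of Equations (5), (8) and (10), and the seawater permittivity, AVF, and thickness below are illustrative values only.

    import numpy as np

    C = 299792458.0  # speed of light, m/s

    def refractive_mix(eps_air, eps_water, f_a):
        """Refractive (square-root) mixing rule for the foam effective permittivity."""
        return (f_a * np.sqrt(eps_air) + (1 - f_a) * np.sqrt(eps_water)) ** 2

    def fresnel(n_i, n_t, cos_i, cos_t, pol):
        if pol == 'H':   # TE
            return (n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t)
        return (n_t * cos_i - n_i * cos_t) / (n_t * cos_i + n_i * cos_t)  # TM / V

    def foam_emissivity(freq_hz, theta_deg, eps_foam, eps_water, d, pol):
        """Emissivity of air / foam layer (thickness d, meters) / water, coherent sum."""
        n0, n1, n2 = 1.0, np.sqrt(eps_foam + 0j), np.sqrt(eps_water + 0j)
        s0 = np.sin(np.deg2rad(theta_deg))
        c0 = np.cos(np.deg2rad(theta_deg))
        c1 = np.sqrt(1 - (n0 * s0 / n1) ** 2)   # Snell's law in the foam
        c2 = np.sqrt(1 - (n0 * s0 / n2) ** 2)   # Snell's law in the water
        r01 = fresnel(n0, n1, c0, c1, pol)
        r12 = fresnel(n1, n2, c1, c2, pol)
        beta = 2 * np.pi * freq_hz / C * n1 * c1 * d     # one-way phase in the layer
        phase = np.exp(-2j * beta)                        # decaying round-trip factor
        r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
        return 1 - abs(r) ** 2

    # Illustrative numbers: 1.4 GHz, 35 degrees, 1.3 cm foam, AVF 0.9,
    # placeholder low-temperature seawater permittivity.
    eps_w = 75 - 60j
    eps_f = refractive_mix(1.0, eps_w, 0.9)
    for pol in ('H', 'V'):
        print(pol, round(foam_emissivity(1.4e9, 35, eps_f, eps_w, 0.013, pol), 3))

Fitting the AVF as in Equation (10) then amounts to scanning the AVF value in such a forward model and keeping the one that minimizes the RMSE against the measured emissivities.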
Experimental Results and Theoretical Analyses
The measured foam emissivity was obtained via Equation (3). For the same experimental conditions (SST, SSS, and foam thickness d), we modeled the foam emissivity using Equation (5) in combination with the RM for the foam permittivity; hereafter, we refer to this combination of emissivity and permittivity models as the RM emissivity model. We compared the measured and modeled foam emissivities and found their strongest agreement by tuning the AVF values (Table 2). Figure 2 shows that the theoretical results are generally in good agreement with the measured emissivities at both H and V polarizations for incidence angles from 30° to 59°. Comparing their RMSEs, the agreement with the measured emission data is stronger for H polarization than for V polarization. To qualitatively validate the AVF estimated by the RM emissivity model, we analyzed the characteristics of the AVF variation with SST and SSS, which are shown in Figure 3a,b, respectively. Moreover, from Camps' foam experimental data [6], the AVFs retrieved by the RM emissivity model and by the Rayleigh method [19] are plotted in Figure 3c,d. Although there are some differences between the AVF values retrieved by the two models from Camps' data, the extracted AVFs have similar trends with SST (or SSS). That is, the AVFs increase (or decrease) with increasing SST (or decreasing SSS). This result implies that an AVF increase with increasing SST is reasonable thermophysically. The decrease of the emissivities with increasing SSS can be explained by the mean radius of the water-coated air bubbles decreasing non-linearly with increasing SSS in previous foam observations [6,11]. This is because the AVF f_a of a foam layer can be approximated in terms of the ratio of δ to b, where the constant δ is the film thickness of the water coating of an air bubble and b is the outer radius of a coated air bubble. To discern the mechanism of the emissivity increment for the foam-covered sea surface, we investigated the effect of each foam factor on foam emissivity using the RM emissivity model, with the other factors fixed. For example, with SST 0.5 °C, SSS 34 psu, foam thickness 1.3 cm, AVF 0.9, and incidence angle 35°, the sensitivities of the emissivities to SST and to SSS (the latter in units of 1/psu) were comparable in magnitude. Thus, SSS and SST were almost equally important in estimating the foam layer emissivity at low SST. However, the AVF and foam thickness had stronger effects on the emissivities, with sensitivities to the AVF (per 0.01 of AVF) and to the foam thickness (about 0.021 and 0.025 per mm for vertical and horizontal polarization, respectively) that were larger than those to SST and SSS. It is clear that the sensitivity for H polarization was larger than that for V polarization. Furthermore, regarding the effect of AVF on foam emissivity with the RM emissivity model, the emissivity decreases (increases) with increasing AVF for AVF greater (less) than 0.7 (Figure 4a). This result indicates that the air in the foam layer more strongly controlled the foam emissivity than did the water, owing to the large AVF.
Figure 4b shows that the emissivity from the RM emissivity model generally fluctuates with increasing foam layer thickness. The emissivity first rises with increasing foam thickness for thicknesses less than about 3 cm. For thicknesses greater than 25 cm, a saturation emissivity is maintained; the thickness threshold is related to the microwave wavelength and the optical thickness of the foam composite. The fluctuation of emissivity for foam thicknesses of 3-25 cm results from the phase-coherent effect of the two-layer model [32]: the oscillatory behavior is caused by the coherent addition of the multiple reflections at the air-foam and foam-water boundaries. As the foam thickness increases, the attenuation through the foam medium increases, thereby reducing the magnitude of the reflections from the water surface [33]. In contrast, with the incoherent model [33] used in [7], the emissivity increases monotonically with foam thickness up to its saturation value and does not exhibit the oscillatory behavior of the coherent reflectivity, because it does not account for phase interference effects. Note that the wave-approach emissivity model is a coherent model, and its oscillation magnitude is related to both foam thickness and microwave frequency. In our experiments, the emissivity declined with increasing AVF and decreasing foam thickness, because the AVF was larger than 0.7 and the foam thickness was less than 3 cm. From the above theoretical discussion, we conclude that the AVF and foam thickness are the key parameters in predicting the emissivity of a foam layer. However, if the foam thickness and AVF vary only within a very small range, SSS and SST become important for calculating the emissivity.
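The thickness oscillations described above follow from the standard coherent reflection coefficient of a single homogeneous layer over a half-space; the wave-approach model used here may contain further refinements, but the interference mechanism is captured by the familiar two-boundary expression

$$r_p = \frac{r_p^{\mathrm{af}} + r_p^{\mathrm{fw}}\, e^{2i\beta}}{1 + r_p^{\mathrm{af}}\, r_p^{\mathrm{fw}}\, e^{2i\beta}}, \qquad \beta = k_0\, d\, \sqrt{\varepsilon_{\mathrm{foam}} - \sin^2\theta},$$

where $r_p^{\mathrm{af}}$ and $r_p^{\mathrm{fw}}$ are the Fresnel reflection coefficients at the air-foam and foam-water boundaries, d is the foam thickness, and $e_p = 1 - |r_p|^2$ under the specular (roughness-free) assumption. The factor $e^{2i\beta}$ produces interference maxima and minima as d increases, while the imaginary part of β attenuates the water-surface reflection, so the oscillations damp out and the emissivity saturates once the foam layer becomes optically thick.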
Emissivity Increments Induced by Foam Layer
Compared with the emissivities of flat sea surfaces, the emissivity increments of foam-covered water surfaces were calculated from the measured emissivities. In Figure 5, for a foam thickness of 1 cm and SSS increasing from 31 to 38 psu, the average emissivity increments increase from about 0.014 to 0.131 for H polarization and from 0.022 to 0.150 for V polarization. This result is very similar to that of Camps' experiment at higher SST [6]. However, as AVF and SST increase, the emissivity increments of both polarizations generally decrease for a foam thickness fixed at 1 cm. For the foam thicknesses in our experiments, which varied between 1.1 and 1.5 cm, the emissivity increments fluctuated around averages of 0.081 for H polarization and 0.089 for V polarization, under the influence of the other foam factors. Therefore, interactions of foam factors such as AVF, foam thickness, SSS, and SST are also important in estimating the foam emissivity increments. These increments did not depend clearly on incidence angle. To address the effects of the foam layer bottom boundary on the emissivity, some studies investigated the influence of the distorted water surface between the foam and the water, i.e., the meniscus interface [25,34]. The contribution of the meniscus zone to the increment of sea foam emissivity depends on the size of the air bubbles and on microwave frequency, owing to the gradual transition from the permittivity of the air-water mixture (or air) to that of seawater [25]. For example, Anguelova [35] concluded that wet foam near the foam-water interface has a greater impact on emissivity than dry foam at the top of the foam layer. For 6.6 and 10.7 GHz, Wilheit pointed out that a significant fraction of the foam emissivity increment comes from the contribution of meniscus interfaces [34]. In the present study, to theoretically investigate the contribution of the meniscus to the foam emissivity increment, the RM emissivity model was used to estimate the emissivity of the meniscus interface from 1 to 37 GHz. For simplicity, the meniscus zone was approximated as a periodic unit-cube medium of a single layer of dense spherical air bubbles embedded in seawater across the sea surface, where the meniscus zone thickness is around the diameter of an air bubble. The theoretical AVF of the meniscus zone is about π/6 [11]. Figure 6 shows the calculated emissivity increments caused by a meniscus with thickness 1.5 mm (the air bubble diameter) and AVF 0.5236, together with a foam layer with thickness 1.3 cm and AVF 0.91; the other parameters were SSS = 34 psu, SST = 0.5 °C, and incidence angle = 35°. The calculated emissivity increments varied with frequency and exhibited clear peaks; from their ratios, the meniscus made its largest contributions to the foam layer emissivity increments, 59% for H polarization and 66% for V polarization, at 8 GHz. For frequencies higher than 20 GHz, the ratios were stable at about 36% for H polarization and 45% for V polarization. However, at 1.4 GHz, the meniscus zone contributed only a small fraction of the foam emissivity increment, 7.8% and 8.6% for H and V polarizations, respectively. Generally, from the aforementioned findings, we conclude that the meniscus transition zone has a stronger effect on the foam emissivity increase for microwave frequencies higher than 5 GHz. Nevertheless, the emissivity of the complex meniscus structures of a natural sea surface should be further investigated by theoretical and experimental methods.
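The quoted meniscus AVF of π/6 follows directly from the unit-cube idealization used here: a single sphere of diameter D (volume πD³/6) occupies a cube of edge D (volume D³), so

$$a_f^{\mathrm{menis}} = \frac{\pi D^3/6}{D^3} = \frac{\pi}{6} \approx 0.5236,$$

which is the value adopted for the meniscus layer in Figure 6.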
Effects of Foam Layer on Retrieving Sea Surface Salinity
Considering the natural ocean, the effect of the foam layer on SSS retrieval can be estimated by combining the foam coverage fraction w with the emissivity increment of the foam-covered surface, where w depends on wind speed, the air-water temperature difference, and other parameters. As an example, the foam coverage fraction on the sea surface is about 1% at a (10 m height) wind speed of 10.0 m/s [36]. For a flat sea surface with foam coverage fraction w, the total brightness temperature is T_B^p = e_p T_sea + w Δe_p^F T_sea (Equation (11)), where T_sea is the sea surface temperature (in K). For SST = 1.52 °C, SSS = 33.63 psu, foam thickness 1.5 cm, and tuned AVF 0.9137, the emissivity increments Δe_p^F in our experiments were about 0.079 for H polarization and 0.083 for V polarization at an incidence angle of 44.6°. The brightness temperature errors induced by the foam layer (i.e., the second term on the right side of Equation (11)) were about 0.22 K for the H polarization model and 0.23 K for the V polarization model at w = 1%. For the low SST of 1.52 °C and SSS = 33.63 psu, the sensitivities of sea surface brightness temperature to SSS were about 0.21 and 0.31 K/psu for H and V polarizations, respectively. The SSS retrieval errors were then about 1.0 and 0.74 psu for the H and V polarization models of the flat sea surface, respectively. For comparison with this low-SST case, the SSS retrieval errors at higher SST were estimated using the measured emissivity increments (0.098 for H polarization and 0.15 for V polarization) of Figure 11g in [6] for a foam-covered sea surface at an incidence angle of 45°, where SST = 18.7 °C, SSS = 33.21 psu, and foam thickness = 1.665 cm. In this case, the sensitivities of sea surface brightness temperature to SSS were about 0.45 and 0.69 K/psu for H and V polarizations, respectively. For w = 1%, the SSS retrieval errors were around 0.64 and 0.63 psu for H and V polarizations, respectively. This indicates that the effect of the foam layer on SSS retrieval at low SST is greater than at high SST, owing to the weak sensitivity of sea surface brightness temperature to SSS at low SST. For a rough sea surface, the emissivity of the flat surface in Equation (11) can be replaced by that of the rough surface. This example indicates that the foam layer indeed generates a large SSS retrieval error under high wind speeds and low SST, and should be considered in establishing a theoretical SSS retrieval model. The SSS retrieval error is estimated as the foam-induced brightness temperature error ΔT_w^p divided by the sensitivity of sea surface brightness temperature to SSS.
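The error propagation in this example can be reproduced with the simple chain below, which assumes the two-term form of Equation (11) used above (flat-surface term plus a foam term w·Δe_p^F·T_sea); the input numbers are those quoted in the paragraph.

# Foam-induced brightness-temperature error and resulting SSS retrieval error,
# assuming T_B = e_p*T_sea + w*delta_e_F*T_sea (the form implied by Equation (11)).

def sss_retrieval_error(sst_c, delta_e_F, sensitivity_K_per_psu, w=0.01):
    t_sea = sst_c + 273.15                     # sea surface temperature in kelvin
    dT = w * delta_e_F * t_sea                 # second term of Equation (11), in K
    return dT, dT / sensitivity_K_per_psu      # (brightness T error, SSS error in psu)

# Low-SST case from this study (SST 1.52 degC, incidence angle 44.6 deg):
print(sss_retrieval_error(1.52, 0.079, 0.21))  # H pol: ~0.22 K, ~1.0 psu
print(sss_retrieval_error(1.52, 0.083, 0.31))  # V pol: ~0.23 K, ~0.74 psu

# Higher-SST case from Camps et al. [6] (SST 18.7 degC, incidence angle 45 deg):
print(sss_retrieval_error(18.7, 0.098, 0.45))  # H pol: ~0.29 K, ~0.64 psu
print(sss_retrieval_error(18.7, 0.15, 0.69))   # V pol: ~0.44 K, ~0.63 psu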
Conclusions
At low SST, emissivity experiments on an artificial foam-covered sea surface at L-band were conducted for variable salinities and incidence angles. Emissivities were obtained from the measured brightness temperatures of both foam-free and foamy surfaces. Based on the experimental data, the RM emissivity model was selected from among well-known theoretical EMA models. In these experiments, the emissivity increments ranged from 0.016 to 0.161 for H polarization and from 0.025 to 0.184 for V polarization. These emissivity increments indicate a large SSS retrieval error from sea surface brightness temperature at L-band under high wind speeds. Furthermore, the mechanism of the emissivity increase of the foam-covered surface was investigated with both the experimental data and a theoretical model. The results show that foam thickness, AVF, SSS, and SST are important factors for predicting foam emissivities, with the effects of AVF and foam thickness being stronger than those of SSS and SST. The theoretical RM emissivity model implies that, at a fixed foam thickness, the emissivity increments increase with increasing SSS and decreasing AVF when the AVF is larger than 0.7. As the foam thickness increases, the emissivity of the wave approach clearly fluctuates up to a specific saturation value, which depends on SST, SSS, AVF, and incidence angle. In addition, the foam coverage fraction is also an important parameter affecting SSS retrieval.
For the interface between the foam layer and the water surface, we discussed the contributions of the meniscus zone to the emissivity increments over microwave frequencies of 1-37 GHz. The results indicate that the greatest contribution of the meniscus layer to the emissivity increments of a 1.3 cm foam layer occurred at ~8 GHz. The ratios of the meniscus emissivity increment to that of the foam layer were stable at about 36% for H polarization and 45% for V polarization for frequencies between 20 and 37 GHz. However, at L-band (1.4 GHz), the meniscus had only a weak effect on increasing the emissivity of the foam layer.
In summary, our experimental results are applicable to building an emissivity model of a foam-covered sea surface and an SSS retrieval model based on satellite-observed brightness temperatures at L-band. To reduce the brightness temperature error induced by the foam layer, the AVF and foam layer thickness are the key parameters, owing to the greater sensitivity of emissivity to them. However, because of the complex microstructure of a foam layer, in which the AVF and bubble size vary with depth, it is very difficult to measure the AVF exactly. Generally, the vertical distribution of AVF and the foam layer thickness depend on the dynamics of wave breaking at the ocean surface. To retrieve geophysical parameters from satellite data at various frequencies, the spatiotemporal distributions of AVF, foam thickness, and foam coverage over the ocean should be measured.
The brightness temperature of the experimental water surface includes the brightness temperatures of the foam-generating region and the seawater region, plus the reflected sky and atmospheric downwelling radiation: the first two terms on the right side of the equation are the brightness temperatures of the foam-covered and air-water mixture surfaces, respectively, where e_p^F and e_p^mixt are the emissivities of the foam and air-water mixture regions. The third term is the brightness temperature contribution of the foam-free water surface. The fourth term is the total brightness temperature of the sky and atmospheric downwelling radiation reflected by the experimental water surface.
Figure 1.
Figure 1. Photographs of the foam emissivity experiment: (a) experimental scene; (b) image of the foam region (denoted by "A") and the air-water mixture region (denoted by "B"); (c) top view of air bubble size; (d) side view of foam thickness and air bubble size.
Figure 3 .
Figure 3. Air volume fraction (AVF) of foam layer versus SST and SSS: (a) AVF obtained by RM emissivity model in our experiments versus SST; (b) AVF obtained by RM emissivity model for our experiments versus SSS; (c) AVF obtained by RM emissivity model and Rayleigh method from Camps' experiments [6] versus SST; (d) AVF obtained by RM emissivity model and Rayleigh method from Camps' experiments [6] versus SSS.
Figure 5 .
Figure 5. Average emissivity increments. The emissivity increments induced by the meniscus peaked at 12 GHz, increasing with frequency from 1 to 12 GHz and decreasing from 12 to 25 GHz. The emissivity increments for a foam layer thickness of 1.3 cm fluctuated strongly with microwave frequency, with two maxima at 4 and 12 GHz, from which the ratios of the meniscus emissivity increments Δe_p^menis to those of the foam layer were obtained.
Table 1.
RMSE between experimental and theoretical emissivities for H and V polarizations, and the tuned AVF a_f of the foam layer.
Table 2.
Measured parameters of Figure 2 and the tuned AVF used in the RM emissivity model.
In addition, for the open ocean with high wind speeds, foam coverage fraction w is an important variable in retrieving SSS. From Equation (11), brightness temperature error | 2016-03-01T03:19:46.873Z | 2014-11-07T00:00:00.000 | {
"year": 2014,
"sha1": "cfe2c61f10d9689d39872cb53d1f3d1d7a5b8e6d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/6/11/10913/pdf?version=1415355043",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cfe2c61f10d9689d39872cb53d1f3d1d7a5b8e6d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
252776968 | pes2o/s2orc | v3-fos-license | A Review of Recent Advances in Microbial Fuel Cells: Preparation, Operation, and Application
The microbial fuel cell has been considered a promising alternative to traditional fossil energy. It has great potential in energy production, waste management, and biomass valorization. However, it has several technical issues, such as low power generation efficiency and operational stability. These issues limit the scale-up and commercialization of MFC systems. This review presents the latest progress in microbial community selection and genetic engineering techniques for enhancing microbial electricity production. The summary of substrate selection covers defined substrates and some inexpensive complex substrates, such as wastewater and lignocellulosic biomass materials. In addition, it also includes electrode modification, electron transfer mediator selection, and optimization of operating conditions. The applications of MFC systems introduced in this review involve wastewater treatment, production of value-added products, and biosensors. This review focuses on the crucial process of microbial fuel cells from preparation to application and provides an outlook for their future development.
Introduction
With population growth and industry development, the global energy demand is increasing rapidly. At present, human lives and industrial productions mainly depend on fossil fuels. However, gaseous emissions from the combustion of fossil fuels lead to air pollution and the greenhouse effect. In addition, the massive consumption of fossil energy will also result in a potential energy crisis [1]. Although clean energy sources, such as wind and nuclear energy, have been widely developed and deployed, no solution can replace fossil fuels independently [1,2]. Therefore, it is still necessary to further develop renewable energy alternatives to achieve efficient environmental protection and sustainable economic development.
In recent years, microbial fuel cell (MFC) technology has become one of the most representative research hotspots in the bioenergy field. It has been considered a promising solution with sustainable potential to meet energy demands [3]. The MFC system works by converting chemical energy to electrical energy through the metabolic activity of certain microbes (Figure 1). As a typical bioelectrochemical system, the MFC consists of an anode region and a cathode region separated by a proton exchange membrane (PEM). The electricity generation of MFCs relies on biological oxidation and oxygen reduction occurring in the anode and cathode regions, respectively. In the anode region, microbes act as biocatalysts that decompose substrates to generate electrons and protons through cellular respiration [4]. The electrons transported through the external circuit and the protons transported through the PEM then reduce oxygen to water in the cathode region [5]. This energy generation process has many advantages, such as mild production conditions, simple operation, and a wide range of biocatalyst sources [6,7].

The application of microbial fuel cells is mainly combined with wastewater treatment [8]. It provides a feasible way to address both water pollution and energy shortage. The discharge and accumulation of organic substances in wastewater can result in heavy water pollution. At present, the widely used aerobic digestion treatment can efficiently decompose organic pollutants in wastewater into carbon dioxide under the action of microorganisms [9]. However, like other conventional wastewater treatment methods, this treatment leaves the chemical energy of the organic pollutants unexploited. These organic substances in wastewater have been considered available substrates for many strains [10]. Because microbes can use these organic pollutants to support metabolic activities and generate electrons, the MFC system can achieve simultaneous organic pollutant degradation and electricity production [11,12]. In addition, MFC-based anaerobic digestion avoids the higher energy consumption of conventional aerobic wastewater treatment methods [1,11].
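For the electrode processes outlined above, a concrete pair of half-reactions can be written with acetate as the model substrate (acetate is chosen here only for illustration; the same scheme applies to other organics oxidized in the anode chamber):

$$\mathrm{Anode:}\;\; \mathrm{CH_3COO^- + 2H_2O \rightarrow 2CO_2 + 7H^+ + 8e^-}, \qquad \mathrm{Cathode:}\;\; \mathrm{2O_2 + 8H^+ + 8e^- \rightarrow 4H_2O}.$$

The eight electrons released per acetate ion flow through the external circuit while the protons cross the PEM, so the overall cell reaction is simply the aerobic oxidation of acetate with part of the free-energy change harvested as electrical work.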
Currently, the application of microbial fuel cells also focuses on the simultaneous production of electricity and value-added products, owing to the diversity of strains and metabolic pathways [6,13]. Microbes can produce a variety of biofuels, volatile fatty acids, biopolymers, and other platform compounds through fermentation during the electricity generation of MFCs [14][15][16][17]. Furthermore, substrates for MFCs have extended from pure chemicals and organic wastewater to lignocellulosic biomass (LCB) because of the wide substrate range of many strains [18]. As one of the most abundant renewable resources, LCB has an annual production of about 200 billion tons [19]. LCB resources mainly exist in the form of agricultural and forestry wastes, and their disposal or burning causes a serious waste of resources and environmental pollution. However, the sugars generated by the hydrolysis of LCB are ideal carbon sources for the growth and metabolism of microbes. As with organic wastewater, using LCB hydrolysates as the substrate in MFCs can effectively combine the recycling of biomass energy with the treatment of agricultural and forestry wastes. Therefore, the MFC system is a promising sustainable technology for simultaneous energy production and waste valorization (Figure 2).

The electrogenic capacity of MFCs generally depends on the strains, substrates, electrode properties, and operating conditions. The application of electron mediators and the control of operating conditions can also improve the performance of MFCs. Therefore, this review provides a detailed discussion of recent progress in these research fields. In addition, it summarizes recent advances of MFC systems in wastewater treatment, the production of value-added products, and applications as biosensors, and it includes prospects for the future development of the MFC system.
The Form of Cell Cultures
Substrate oxidation by microbes in the anode is the only source of electron generation in MFC systems. Geobacter and Shewanella are electrogenic microbes commonly used in MFCs [20]. Several yeast strains, such as Saccharomyces cerevisiae, Candida melibiosica, and Kluyveromyces marxianus, have also been used in MFC systems [21]. In addition, archaebacteria, cyanobacteria, and proteobacteria are promising strains for electricity generation [22]. Eukaryotic algae, by contrast, can act as electron producers in the anode and electron acceptors in the cathode [22]. The anodic inoculations of MFCs include pure cultures and mixed cultures. Pure cultures might achieve more efficient conversion of substrates to electricity because of their simple and well-defined metabolic pathways. However, they also place higher requirements on the purity and concentration of the substrate. Therefore, selectivity for specific substrates might limit the ability of pure cultures to generate electricity from complex substrates such as wastewater and LCB hydrolysates.
Currently, pure cultures mainly participate in studies on electricity generation performance and electron transport mechanisms of specific strains. However, Pandit et al. [23] developed a pure culture-based bioaugmentation strategy to improve the volumetric current density and shorten the start-up time of MFCs. Mixed cultures are advantageous for the scale-up of MFC systems due to their higher adaptability to complex substrates. The synergistic effect of various strains in the mixed culture might also be conducive to the efficient operation of MFC systems. Activated sludge is the most representative mixed culture used for MFC systems. The sludge pretreated with acid and heat can further enhance electricity generation [24]. However, the microbe composition of activated sludge is very complex. It is difficult to determine the precise pathway of substrate conversion. Therefore, the co-culture of defined strains might enhance the performance of MFC systems through synergy based on their specific functions. Schmitz and Rosenbaum [25] developed a co-culture scheme of Pseudomonas aeruginosa and Enterobacter aerogenes. The electron mediator produced by Pseudomonas aeruginosa can improve the electron transfer efficiency to the anode. This co-culture system can achieve an over 400% increase in electrical current generation under an optimized oxygen supply.
Strain Modification Based on Genetic Engineering
Although various wild-type strains have successfully achieved electricity production, it is still necessary to further improve the electrochemical activity of strains through suitable methods. Several physical and chemical methods have been considered as potential ways to enhance the ability of strains to generate electricity [26]. Genetic engineering is also a promising strategy to improve the electrochemical activity of strains. It mainly involves gene modification related to metabolic activity, the electron-shuttle pathway, and substrate utilization. The enhancement of extracellular electron transfer (EET) based on genetic engineering is an effective method to improve the performance of MFC systems. The modification of cytochrome c maturation can achieve a 77% increase in current generation [27]. A constructed hybrid system of cytochrome c maturation can also increase the overall current by 121% [28]. In addition, the cytochrome OmcZs expressed by Escherichia coli can enhance current production by binding riboflavin. Synthetic biology has also played a role in enhancing the EET efficiency of strains. Liu et al. [29] enhanced the electricity output of a Pseudomonas aeruginosa strain-based MFC by assembling type IV pili with high conductivity. Lin et al. [30] promoted the EET efficiency of Shewanella oneidensis by enhancing the biosynthesis and transport of flavins. Kasai et al. [31] and Cheng et al. [32] also enhanced the EET of Shewanella oneidensis by raising the intracellular level of 3′,5′-cyclic adenosine monophosphate. Min et al. [33] developed an engineered Shewanella oneidensis strain carrying a flavin biosynthesis gene cluster. This strain can achieve a 110% increase in the maximum current density of MFC systems. In addition, increasing the intracellular NAD(H/+) pool with a modular synthetic biology strategy can improve both intracellular electron flux and EET efficiency [34]. McAnulty et al. [35] focused on substrate expansion for the power generation of MFC systems. They achieved the conversion of methane to electricity by developing a synthetic consortium whose main strains comprised engineered Methanosarcina acetivorans, Paracoccus denitrificans, and Geobacter sulfurreducens. Li et al. [36] developed an engineered Shewanella oneidensis strain for electricity generation directly from xylose. It can achieve a maximum power density of 2.1 mW/m 2 . Genetic-engineering-based strain modification for enhancing electricity generation has achieved the desired results in laboratory-scale MFC systems. However, there is still a lack of specific progress in the scale-up of MFC systems.
Defined Substrates
Currently, defined substrates for MFC systems mainly include sugars and organic acids. Glucose is a common substrate for MFC systems. Christwardana et al. [37] used glucose for a yeast-based MFC to achieve a maximum power density of 374.4 mW/m 2 .
However, Obileke et al. [4] pointed out that glucose might lead to low coulombic efficiency of MFC systems due to the electron loss caused by competition strains and the substrate consumption for fermentation. There are also studies using xylose as the substrate for MFC systems. Haavisto et al. [38] observed the highest power density of 333 mW/m 2 using xylose for an up-flow MFC system. Li et al. [39] developed a microbial consortium consisting of engineered Klebsiella pneumoniae and Shewanella oneidensis. It can achieve a maximum power density of 104.7 mW/m 2 using co-substrates of xylose and glucose. Several studies have compared the performance of MFC systems using different substrates. Ullah and Zeshan [40] studied the electricity generation of the double-chamber MFC using glucose, acetate, and sucrose, respectively. They reported a maximum power density of 91 mW/m 2 using acetate as the most effective substrate. Jin et al. [41] also obtained similar results. They observed the best electricity production performance of the dual-chamber MFC using sodium acetate compared with glucose and lactose. In addition, acetates also have an advantage in the electrochemical performance of MFC compared to lactate and octanoate [42]. It might be related to the lower ohmic loss of the biofilm when using acetate as the substrate for MFC systems [42]. In recent years, acetate has also participated in the scheme of co-substrates with pollutants to improve the performance of MFCs in electricity generation and toxicant degradation. Shen et al. [43] obtained a voltage output of 389.0 mV with an initial phenol degradation of 78.8% using acetate as the co-substrate for the single-chamber MFC. Yu et al. [44] also used the dual-chamber MFC with acetate co-substrate to achieve increases of 4.3-fold in power generation and ∼42% in removal efficiency of 4-chlorophenol. In addition, Ndayisenga et al. [45] obtained increases of 60.1% in coulombic efficiency and 64.7% in microcystin-LR using acetate co-substrate for the dualchamber MFC. Mancilio et al. [46] used acetate co-substrate to achieve a power density of 398 mW/m 2 and a p-Coumaric acid degradation of 79%. However, they also reported higher potential and power density of the MFC using acetate as the single substrate.
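The coulombic-efficiency concern with glucose noted at the start of this subsection can be made concrete with its oxidation stoichiometry: complete anodic oxidation releases 24 electrons per glucose molecule,

$$\mathrm{C_6H_{12}O_6 + 6H_2O \rightarrow 6CO_2 + 24H^+ + 24e^-},$$

so any glucose diverted into fermentation products or consumed by non-electrogenic competitors carries part of this 24-electron budget away from the anode, lowering the fraction of substrate electrons recovered as current.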
Wastewater
Organic wastewater can provide essential nutrients for the growth and metabolism of microbes. This review focuses on the utilization of practical wastewater for MFC systems. There are studies using municipal wastewater as the substrate for MFCs. A natural microflora-based MFC system [47] achieved the maximum current density of 525 ± 20 mA/m 2 and coulombic efficiency of 54% using 100% septic tank wastewater (STWW). Thulasinathan et al. [48] compared the electricity generation by Cronobacter sakazakii AATB3 and Pseudomonas otitidis AATB4 using STWW in the dual-chamber MFC. They observed a higher power density of 280 mW/m 2 and a higher current density of 800 mA/m 2 by Pseudomonas otitidis AATB4. This strain can also achieve the maximum coulombic efficiency of 15.5%. The study also compared the electricity generation by the co-culture of Serratia marcescens AATB1 and Klebsiella pneumoniae AATB2 and the single culture of each strain using STWW as the substrate [49]. It indicated that the co-culture has the best performance. It can achieve the maximum power density of 398.69 mW/m 2 and the maximum current density of 869.11 mA/m 2 . Ramu et al. [50] focused on the food industry wastewater substrate for the MFC system. They used a Klebsiella pneumoniae-FA2 strain based MFC with food industry wastewater to obtain the maximum power generation of 428.71 mW/m 2 and coulombic efficiency of 74.6%. Dairy wastewater is also an available substrate to achieve the maximum power density of 621.13 mW/m 2 and the maximum current density of 795.74 mA/m 2 [51]. In recent years, agro-processing wastewater for electricity generation has received extensive attention. Raychaudhuri and Behera [52] observed the maximum volumetric power density of 656.10 mW/m 3 and coulombic efficiency of 17.21% using rice mill wastewater in the anode chamber with intermittent air exposure. Ng et al. [53] used the palm oil mill effluent (POME) to obtain the maximum power density of 0.45 mW/m 2 by an algal-biophotovoltaic device. Islam et al. [54,55] studied electricity generation from POME by co-cultures. They observed the maximum power density of 12.87 W/m 3 by the co-culture of Klebsiella pneumonia and Lipomyces starkeyi [54] and 14.78 W/m 3 by the co-culture of Pseudomonas aeruginosa and Klebsiella variicola [55]. Sarmin et al. [56] also used POME to achieve a power density of 500 mW/m 2 by the co-culture of Saccharomyces cerevisiae, Klebsiella variicola, and Pseudomonas aeruginosa. Zhang et al. [57] focused on electricity generation of the MFC system using molasses wastewater as the substrate. It can achieve a maximum power density of 1410.2 mW/m 2 . Naina et al. [58] observed the highest power density of 194.7 mW/m 2 using distillery wastewater under the borate buffer environment. Several studies also focus on animal wastewater for electricity generation. Oyiwona et al. [59] obtained a volumetric power density of 6.9 W/m 3 using poultry wastewater. Ren et al. [60] observed a power density of 33.3 mW/m 3 using swine wastewater. Ni et al. [61] also used swine wastewater to achieve a power density of 770.97 mW/m 2 .
LCB Substrates
As a renewable source rich in carbon, LCB exists mainly in the form of agricultural and forestry waste. LCB with appropriate pretreatment can participate in the operation of the MFC system as a hydrolysate substrate or direct substrate ( Figure 3). The proper hydrolysis method can efficiently decompose the cellulose and hemicellulose contained in LCB into monosaccharides. These LCB hydrolysates containing a variety of hexoses and pentose have been considered promising substrates for cell growth and metabolism. Catal et al. [62] used the sulfuric acid hydrolysate of pinewood flour to achieve a voltage of 0.43 V at 1000 Ω external resistance of a single-chamber MFC. Jablonska et al. [63] obtained a power density of 54 mW/m 2 using rapeseed straw hydrolysates produced by hydrothermal pretreatment and enzymatic hydrolysis. Gurav et al. [64] compared the electricity generation of a Shewanella marisflavi BBL25 strain based MFC using hydrolysates of barley straw, Miscanthus, and pine, respectively. As the most effective substrate, barley straw hydrolysate can achieve the maximum current output density of 6.850 mA/cm 2 and the maximum power density of 52.80 mW/cm 2 . The authors also pointed out that barley straw hydrolysates lead to more elongated strain cells due to higher concentrations of lactate and formate. However, there are also studies directly using LCB materials as substrates for electricity generation. Simultaneous LCB degradation and electricity generation might rely on the combined action of multiple strains. Flimban et al. [65] studied the electricity generation of a dual-chamber MFC using the direct substrates of potato peels and rice straw, respectively. The power densities obtained from potato peels and rice straw can reach 152.55 mW/m 2 and 119.35 mW/m 2 , respectively. Mohd Zaini Makhtar and Tajarudin [66] compared the electricity generation of a membrane-less MFC system using banana peel, corn bran, and POME. They observed a voltage generation of 237.1 mV with a power density of 23.75 mW/m 2 achieved using the banana peel as the most effective substrate. Yoshimura et al. [67] developed a hydrodynamic cavitation system for the pretreatment of rice bran. They reported an increase of 26% in the total electricity generation using such pretreated rice bran because of the efficient substrate utilization. In addition, Jenol et al. [68] compared the electricity generation of a Clostridium beijerinckii SR1 strain based MFC using the direct substrate and hydrolysate substrate of sago hampas. The power density achieved from these two substrate forms of sago hampas can reach 73.8 mW/cm 2 and 56.5 mW/cm 2 , respectively. Despite the potential of LCB for MFC-based biomass valorization, difficulties in collection and transportation limit the large-scale application of LCB.
Anode Modification
The modification of MFC anodes mainly focuses on improving the specific surface area and surface properties. Heat treatment and acid treatment are feasible surface treatment methods to increase the specific surface area of the anode [69]. Electrochemical oxidation methods can increase the specific surface area of the anode and introduce new functional groups to the anode surface [69,70]. These methods are all conducive to facilitating electrical contacts of strain cells to form electron-donating biofilms. However, more studies have used different materials for electrode modification to enhance the adhesion of strain cells and promote electron transfer to the anode surface. Metals and metal oxides have widely participated in anode modification. Xu et al. [71] studied the electricity generation of dualchamber MFCs with carbon cloth anodes modified with MnO 2 , Pd, and Fe 3 O 4 , respectively. The maximum power densities achieved by anodes modified with such materials can reach 824, 782, and 728 mW/m 2 , respectively. The authors also pointed out that anodes modified with different materials lead to the enrichment of different strains on the anode surface. Yu et al. [72] observed the maximum power density of 29.98 mW/m 2 using the anode modified with bentonite-Fe, and 18.28 mW/m 2 using the anode modified with Fe 3 O 4 . They also reported increases in the stable voltage and decreases in the internal resistance of the MFCs with modified anodes compared to the bare graphite felt anode. The carbon cloth modified with cobalt oxide [73] and the nitrogen-doped carbon nanorods modified with Co-modified MoO 2 nanoparticles [74] can also improve the electricity generation of MFCs. Li et al. [75] reported a positive effect of zero-valent iron on the improvement of maximum power density with structured biofilm and enriched functional microbial communities. However, they also observed the inhibition of electricity generation by the high concentration of zerovalent iron. Several studies have modified anodes with carbon materials, such as graphene oxide (GO) and carbon nanotubes (CNT). Paul et al. [76] used the carbon felt (CF) anode modified with GO and zeolite to achieve a 3.6-times higher power density and 2.75-times higher coulombic efficiency than having used the bare CF anode. They indicated the higher biocompatibility that originated from the improved specific surface area by graphene oxide and enhanced microbe adhesion by zeolite. The power densities achieved by the CF anode modified with GO and Fe 2 O 3 are 1.72 times and 2.59 times that of MFCs with the graphene anode and the unmodified anode, respectively [77]. Liang et al. [78] pointed out that anodes modified with graphene, GO, and CNT have higher electrochemically active surface areas and enriched microbial communities. Zhang et al. [79] indicated that the graphite felt modified with CNT can promote biofilm growth and enhance electron transfer. In addition, polymers usually participate in anode modification combined with metal-type or carbon-type materials as composites. The power density achieved by the anode modified with polydopamine and reduced GO reaches 2.2 and 1.9 times that of MFCs with anodes modified with polydopamine and reduced GO, respectively [80]. The anode modified with polyaniline (PANI) and Au can also improve bioelectrochemical activity [81]. Mashkour et al. [82] pointed out the positive effect of PANI on biofilm growth. 
The CF anode modified with nitrogen-doped CNT, PANI, and MnO 2 can achieve a 2.76-times higher cell biomass content than that of the bare anode [83].
Cathode Catalyst
The efficiency of cathode-based oxygen reduction directly affects the electricity generation of the MFC system. Appropriate cathode catalysts can improve the power output efficiency of MFC systems by promoting electron transfer and enhancing oxygen reduction. Platinum-based cathode catalysts can improve the activity of oxygen reduction [84]. However, they have limited availability in large-scale applications due to high costs and low stability [85]. Currently, more studies are attempting to develop nanocomposite-based cathode catalysts to enhance the electrochemical activity of MFC systems. Liu et al. [86] focused on the cathode catalysts based on metals and metal oxides. They developed an activated carbon cathode modified with Cu 2 O and Cu to achieve a peak power density of 16.12 W/m 2 . They also pointed out that the catalytic activity of Cu 2 O to oxygen reduction and the high electrical conductivity of Cu improve the performance of the cathode. Majidi et al. [87] observed a power density of 180 mW/m 2 using a carbon cloth cathode modified with α-MnO 2 nanowires and carbon Vulcan. Chiodoni et al. [88] also reported the positive effect of manganese-oxide-based cathode catalysts on MFC performance. Rout et al. [89] focused on the cathode catalysts combined with metal oxides and non-metal materials. They developed a nanocomposite of MnO 2 and reduced GO to achieve a 2.7-times increase in volumetric power density. They also pointed out that this nanocomposite can provide a four-electron oxygen reduction pathway and enhance electron transfer. Mecheri et al. [90] reported that the cathode catalyst based on FePc and GO can improve the electrochemical performance of the MFC. The 3D composite of CNT and MoS 2 has also been considered an available cathode catalyst for efficient oxygen reduction [91]. Li et al. [92] observed a maximum power density of 1177.31 mW/m 2 and a current density of 6.73 A/m 2 using the cathode catalyst of bacterial cellulose doped with P and Cu. They indicated that more active sites in this cathode catalyst improve the catalytic activity of oxygen reduction. Kaur et al. [93] developed a composite catalyst of PANI and an iron-based metal-organic framework. It can achieve a power density of 680 mW/m 2 and a limiting current density of 3500 mA/m 2 . The Ni-based metal-organic framework has also been considered an efficient cathode catalyst to promote oxygen reduction [94]. In addition, the layered double hydroxide (LDH) has also participated in cathode catalyst development. Jiang et al. [95] developed a composite of Fe 3 O 4 and NiFe-based LDH to achieve the maximum power density of 211.40 mW/m 2 . They indicated the advantages of LDH in terms of electroactive site availability, rate capability, and cycling stability. In another study, the composite catalyst of NiFe-based LDH and Co 3 O 4 achieved the maximum power density of 467.35 mW/m 2 [96]. Tajdid et al. [97] synthesized CoNiAl-based LDH. This material improved the performance of graphite cathode by working independently or combined with NiCo 2 O 4 . With the development of material science, co-composites based on various materials have become the choice for cathode catalyst development [98,99]. These co-composite cathode catalysts can exploit the specific advantages of each material. It is conducive to improving the comprehensive electrochemical performance of MFC systems, including electron transfer efficiency, oxygen reduction catalytic efficiency, and operation stability.
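The "four-electron oxygen reduction pathway" mentioned above refers to the direct reduction of oxygen to water, which is preferred over the two-electron route because the latter terminates at hydrogen peroxide and recovers less charge per O2 molecule:

$$\mathrm{O_2 + 4H^+ + 4e^- \rightarrow 2H_2O}\;(\text{four-electron}), \qquad \mathrm{O_2 + 2H^+ + 2e^- \rightarrow H_2O_2}\;(\text{two-electron}).$$

Catalysts that favor the four-electron pathway therefore deliver a higher cathode potential and more usable current, and avoid peroxide accumulation near the electrode.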
Electron Transfer Mediators
EET in MFC systems includes direct electron transfer and mediated electron transfer. Several strains, such as Geobacter and Shewanella, can transfer electrons directly to the anode surface via intricate networks of outer membrane cytochromes [100,101]. However, more strains need redox mediators for electron transfer due to the lack of electrochemically active surface proteins [102]. Electron transfer mediators acquire electrons within strain cells and transfer the electrons to the cathode surface. They can achieve continuous electron transfer via the conversion of oxidized and reduced states. A variety of organic compounds are common artificial exogenous mediators for electron transfer in MFC systems. Pal and Sharma [103] studied the electricity generation of a Pichia fermentans strain based MFC with the mediator of methylene blue (MB). They reported higher maximum power densities of both single-and dual-chamber MFCs containing MB than those without mediators. MB can also achieve a 1.22-fold increase in the steady-state voltage of a dual-chamber MFC [104]. Christwardana et al. [105] compared the electricity generation of a Saccharomyces cerevisiae strain based MFC with the addition of MB and methyl red. They pointed out a more efficient electron transfer of MB due to more effective capture by yeast and higher electron collection. MB also has advantages in enhancing the electricity generation of MFC systems compared to congo red and crystal violet [106]. Chauhan et al. [107] reported positive effects of both MB and methyl orange on the electricity generation of a dualchamber MFC. Chen et al. [108] reported a~400% increase in coulombic efficiency of a dual-chamber MFC using neutral red (NR) as the mediator. They pointed out that the proper concentration of NR can improve electricity transfer efficiency and promote the growth of the exoelectrogens. Moreno et al. [109] also observed the positive effect of NR on the electricity generation of continuous flow MFCs. The addition of NR can improve the maximum power density from 777.8 mW/m 3 to 1428.6 mW/m 3 and the maximum current density from 3444.4 mA/m 3 to 5714.3 mA/m 3 . Marcílio et al. [110] used methylene green as the mediator to achieve a 20% increase in the voltage of an acetate-fed MFC with a stable operation for about six days. In addition, metabolites of specific microbes can also act as endogenous mediators to participate in the electron transfer [20]. Ajunwa et al. [111] determined the electricity generation of the glucose-fed MFC with flavins and pyocyanin as mediators. These endogenous mediators can improve the power production efficiency of MFC systems by simplifying the electron transfer process.
Operation Conditions of MFC Systems
As with microbial fermentation, microbial activity in MFC depends on multiple operating parameters. Mechanistic studies and parameter optimization of various operating conditions are conducive to further improving the performance of MFCs. The temperature is considered one of the primary conditions affecting the electrical power generation of MFC systems due to its significant influences on the metabolic activity of microbes, mass transfer efficiency, and thermodynamic properties. The increased temperature can increase power density and reduce internal resistance due to an improved conductivity [112][113][114]. However, higher temperatures also negatively affect microbial activity, membrane stability, and partial pressure of oxygen [4]. The temperature range of 30 to 45 °C is conducive to maintaining growth efficiency and electrochemical activity of microbes in the MFC system [114]. Environment or room temperature is generally the operating temperature of MFCs, although it might reduce the efficiency of electrical power generation. However, Heidrich et al. [115] reported a minor effect of low temperature on the power density of MFCs due to the potential self-heating performance of MFC biofilms. Gonzalez-Martínez et al. [116] studied the performance of the MFC system at 25 °C and 8 °C, respectively. They observed a difference in bacterial communities but similar voltage at these temperatures. It indicates the potential for stable MFC operation at lower temperatures. The pH is the other primary condition affecting the performance of MFC systems. The promotion of proton transfer from anode to cathode in MFC systems usually depends on the different pH of the anode and cathode. However, the limited proton transfer efficiency of the PEM might lead to a decrease in anodic pH due to proton accumulation and an increase in cathodic pH due to a lack of protons. The lower anolyte pH might result in lower electron generation efficiency due to the inhibition of the growth and metabolism of microbes. The higher catholyte pH might decrease oxygen reduction efficiency [117]. Therefore, the unstable anodic and cathodic pH might reduce the power production efficiency of MFC systems. Phosphate- [118] and borate-based [58] buffers can effectively maintain the electrolyte pH of MFCs. HCO3−/H2CO3 buffer systems based on anolyte or catholyte recirculation are promising alternatives to phosphate-based buffers [119,120]. There are also studies focusing on effects of initial substrate concentration, aeration rate, and hydraulic retention time [121,122]. Promoting cell growth and metabolism is still the primary solution to enhance MFC performance by controlling MFC operating conditions. In addition, the parameter optimization for the operating conditions of MFC systems can further improve power generation efficiency (Table 1).
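As an illustration of how a HCO3−/H2CO3 system buffers the anolyte, the Henderson-Hasselbalch relation with the commonly cited first dissociation constant pKa1 ≈ 6.35 (25 °C) gives

$$\mathrm{pH} = \mathrm{p}K_{a1} + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3^*}]} \approx 6.35 + \log_{10}(10) \approx 7.35$$

for a 10:1 bicarbonate-to-carbonic-acid ratio, so protons accumulating at the anode are largely absorbed by converting HCO3− to dissolved CO2/H2CO3 with only a modest pH drop.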
Wastewater Treatment
Redox reactions based on MFC systems have achieved simultaneous chemical oxygen demand (COD) removal and electricity generation from wastewater. COD removal efficiency and maximum power density are the main parameters to describe the performance of MFC systems in wastewater treatment. Currently, both single-and dual-chamber MFC systems perform good COD removal efficiencies. Dual-chamber MFCs can achieve COD removal efficiencies of 79.8% [129], 83% [130], and 94.6% [131] from sugar wastewater, seafood processing wastewater, and brewery wastewater, respectively. Similarly, single-chamber MFCs can achieve COD removal efficiencies of 88% [132], 90% [133], and 96% [134] from tannery wastewater, wastewater of fish markets, and dairy wastewater, respectively. Constructed wetland (CW) MFC systems have also focused on wastewater treatment. The COD removal efficiency can reach 70% from dyestuff wastewater using the CW dual-chamber MFC [135] and 79.83% from Zn (II) contaminated wastewater using the CW single-chamber MFC [136]. However, there are significant differences in the maximum power density of these MFC systems. Zhang and Liu [137] reported the positive effect of electrode modification on the performance of MFCs in wastewater treatment. They obtained a COD removal capacity of 3.07 kg COD/m 3 /d and a maximum power density of >1680 mW/m 3 from coking wastewater using a dual-chamber MFC-membrane bioreactor system with a modified granular activated carbon cathode and catalytic cathode membrane. Kadivarian et al. [138] studied the COD removal and electricity generation of single-chamber MFC packs with parallel and serial connections. They pointed out that the serial connection of MFCs can achieve a higher COD removal efficiency while the parallel connection of MFCs can achieve a higher power density. Therefore, the performance of MFC systems in COD removal and electricity generation depends on the combined effect of multiple factors. In recent years, wastewater treatment based on MFC systems has also focused on certain pollutants. Xia et al. [139] used the dual-chamber MFC to achieve a power density of 543.75 mW/m 2 with 76.15% total nitrogen removal and 83.23% ammonia-nitrogen removal from organic acid fermentation wastewater. Zeng et al. [140] also achieved a nitrogen removal of 63.4% using a three-phase single-chamber MFC with a phase of immobilized Halomonas strain. However, Srikanth et al. [141] obtained a power density of 225 mW/m 2 with removal efficiencies of 95% oil, 80% phenol, and 79.5% sulfide using the single-chamber MFC in a continuous mode. The capacity of MFC systems to remove different pollutants depends on the metabolic pathway and activity of strains. In addition, there are also studies using the MFC system to achieve simultaneous wastewater treatment and the recovery of heavy metals, such as silver [142] and copper [143].
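The two performance metrics used throughout this subsection can be computed as in the sketch below. The coulombic-efficiency expression is the standard COD-based form (8 g COD per mole of electrons, from 32 g O2 per mole of O2 and 4 electrons per O2); the numbers in the example are illustrative only and are not taken from any study cited above.

import numpy as np

F = 96485.0  # Faraday constant, C per mol of electrons

def cod_removal_efficiency(cod_in, cod_out):
    # Fractional COD removal; cod_in and cod_out in the same units (e.g., mg/L).
    return (cod_in - cod_out) / cod_in

def coulombic_efficiency(current_A, time_s, anode_volume_L, delta_cod_g_per_L):
    # CE = charge recovered as current / charge available from the removed COD,
    # with 8 g COD equivalent to 1 mol of electrons.
    charge = float(np.sum(0.5 * (current_A[1:] + current_A[:-1]) * np.diff(time_s)))
    electrons_available_mol = anode_volume_L * delta_cod_g_per_L / 8.0
    return charge / (F * electrons_available_mol)

# Illustrative batch run: 0.25 L anode, 1200 -> 240 mg/L COD, ~2 mA average current for 48 h.
t = np.linspace(0.0, 48 * 3600.0, 500)
i = 0.002 * np.ones_like(t)
print(cod_removal_efficiency(1200.0, 240.0))                        # 0.80 (80% COD removal)
print(coulombic_efficiency(i, t, 0.25, (1200.0 - 240.0) / 1000.0))  # ~0.12 (12% CE)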
The Production of Value-Added Products
The production of value-added products from various organic wastes has achieved significant progress using conventional fermentation equipment. However, the application of MFC technology in this field is still limited. In the anaerobic environment of the anode chamber, microbes cannot fully oxidize substrates to CO 2 to generate electrons. Substrates might also flow into the fermentation pathway to generate anaerobic metabolites. Therefore, the MFC system can achieve integrated production of electricity and value-added products by combining fermentation and electrochemical processes. As typical bioenergy, bioethanol originates from the conversion of sugars by ethanologenic strains in anaerobic fermentation. The operation of yeast-based MFCs can achieve simultaneous production of bioethanol. Birjandi et al. [144] used a Saccharomyces cerevisiae strain based MFC to obtain a maximum ethanol production of 11.52 g/L with a maximum power density of 30.46 mW/m 2 from glucose. There are a variety of wild-type and engineered strains that can utilize different sugars for ethanol production. Metabolic engineering and mixed culture technologies can achieve an integrated MFC-based system for ethanol production and electricity generation from LCB hydrolysates and sugar-containing wastewaters with complex sugar compositions and concentrations. In addition, the MFC system can achieve efficient conversion of LCB substrates to ethanol and electricity with the combination of advanced fermentation strategies, such as simultaneous saccharification and fermentation (SSF) and simultaneous saccharification and co-fermentation (SSCF). Moradian et al. [145] focus on the integrated production of electricity and gaseous bioenergy using MFC systems.
They isolated the yeast Cystobasidium slooffiae strain JSUX1 from the activated sludge. This strain can achieve a power output of 67 mW/m 2 and a hydrogen production of 23 L/m 3 from xylose in the anode chamber of a dual-chamber MFC. With the increased focus on bioplastics, there are also attempts to produce polyhydroxybutyrate (PHB) using MFC systems. Lee et al. [17] developed an engineered Shewanella marisflavi BBL25 strain inserting polyhydroxyalkanoate synthesis genes from Ralstonia eutropha H16. This strain can achieve a PHB production of 6.31 g/L with a maximum current output density of 1.71 mA/cm 2 from barley straw hydrolysates in the anode chamber of a dual-chamber MFC. However, Srikanth et al. [146] achieved a PHB production of 19% dry cell weight from synthetic wastewater in the cathode chamber. They indicated that the low level of dissolved oxygen resulting from the oxygen reduction in the cathode chamber is conducive to PHB accumulation. For the metabolic process in the anode chamber, the generation of metabolites and electricity might be affected by organic loading. Kondaveeti et al. [16] studied the production of electricity and volatile fatty acids (VFAs) from citrus peel extract using a single-chamber MFC. They reported that a high level of electricity generation can be achieved at low organic loading, while a high production of VFAs can be achieved at high organic loading.
The Application of MFC-Based Biosensors
The MFC has been considered a promising system for biosensors. Because the current output of an MFC is affected by biological activity, MFC-based biosensors can reflect various conditions of liquid samples, such as biochemical oxygen demand (BOD) and toxicity [117]. The operation of MFC-based BOD biosensors depends on the positive linear correlation between the electrical current output and the BOD value in a specific range. Karube et al. [35] were the first to confirm that biosensors based on the MFC system can determine the BOD concentration of wastewater. In recent years, there have been several MFC-based biosensors for BOD measurement. Commault et al. [147] developed an MFC-based biosensor with Geobacter-dominated biofilms to determine the BOD of milk over 17.5 h. This biosensor achieved a reproducibility of 94% with only a 7.4% error during milk BOD determination, compared with the conventional BOD5 method. This indicates the potential of MFC-based biosensors to accurately determine the BOD of dairy wastewater in a much shorter period. Hsieh and Chung [148] developed an MFC-based biosensor with a mixed culture of six strains to determine BOD concentrations lower than 240 mg/L in practical wastewater. They also indicated high reproducibility and stability of this biosensor over long operation periods. Similarly, a self-powered floating MFC-based biosensor achieved autonomous operation for 150 days for monitoring and early warning of water pollutants [149]. In addition, the development of MFC-based BOD biosensors has also focused on the selectivity of specific compounds [150]. The toxicity detection of MFC-based biosensors usually depends on the inhibitory effect of toxic substances on cell metabolism. Therefore, the electrical current output usually has a negative linear correlation with the concentration of cytotoxic substances in a specific range. Some studies have developed MFC-based biosensors based on this mechanism to detect antibiotics [151] and organic toxicants [152]. Yu et al. [153] also used an MFC-based biosensor to determine the biotoxicity response of Cu²⁺, Hg²⁺, Zn²⁺, Cd²⁺, Pb²⁺, and Cr³⁺. In addition, this mechanism and function might also extend to the determination of pH values [154]. However, the toxicity detection of MFC-based biosensors might also involve other actions of microbes. The detection of Cr⁶⁺ depends on the electron competition between the anode and the Cr⁶⁺ reduction by Cr⁶⁺-reducing anaerobes, such as Ochrobactrum anthropi YC152 [155] and Exiguobacterium aestuarii YC211 [156]. Furthermore, there are also MFC-based biosensors based on the positive linear correlation between the electrical current output and the concentration of certain substances, such as Fe²⁺ [157] and p-nitrophenol [158]. Guo et al. [159] focused on maintaining MFC-based biosensors for long-term use. They reported that inducing hibernation of the electroactive bacteria is a practical maintenance method to efficiently recover the accurate BOD detection of MFC-based biosensors after their lay-up periods.
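The BOD-sensing principle described above reduces to a linear calibration between steady-state current and BOD within the sensor's working range. The sketch below illustrates that calibration step with entirely hypothetical data; the slope, intercept, and readings are assumptions, not values from the cited sensors.

```python
# Minimal sketch (hypothetical calibration data): linear current-vs-BOD calibration
# and its inversion to estimate BOD from a new current reading.
import numpy as np

bod_mg_l = np.array([20.0, 60.0, 120.0, 180.0, 240.0])   # known BOD standards (mg/L)
current_ma = np.array([0.11, 0.29, 0.56, 0.84, 1.08])     # measured steady-state currents (mA)

# Fit current = slope * BOD + intercept within the calibrated range.
slope, intercept = np.polyfit(bod_mg_l, current_ma, deg=1)

def estimate_bod(current_reading_ma: float) -> float:
    """Estimate BOD (mg/L) from a current reading inside the calibrated range."""
    return (current_reading_ma - intercept) / slope

print(f"Estimated BOD for 0.70 mA: {estimate_bod(0.70):.0f} mg/L")
```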
Future Perspectives
The MFC technology has shown potential as an integrated system for sustainable energy recycling, waste treatment, and biomass valorization. Lab-scale MFC systems have met the primary requirement for enhanced electricity generation. They have also achieved considerable progress in COD removal, the production of value-added products, and biosensor applications. However, there are still many challenges in the scale-up and commercialization of MFC systems.
The low output power of the MFC system is still one of the main problems in this field. The electricity generation of MFC systems is directly affected by the metabolic activity of microbes. Therefore, the screening of high-performance electrogenic strains and strain modification based on genetic manipulation remain effective methods to improve the electrogenic performance of microbes. With the further development of genetic engineering and synthetic biology, the gradual replacement of wild-type strains with engineered strains might be the future trend of strain selection for MFC systems. In addition, the development of electrode materials and the modification of electrode structure are also effective methods to enhance the electricity generation of MFC systems. The main aim of electrode improvement is to increase microbial adhesion and electron transfer efficiency. Therefore, electrode development for MFCs still needs to focus on improvements in electrical conductivity, surface area, and microbial affinity. Currently, the high cost of electrode materials is also one of the main factors limiting the commercialization of MFC systems. The production of high-performance electrode materials from wastes and renewable natural feedstocks will be a way to solve this problem in the future. In addition, 3D technology has great application potential in improving the electrode surface structure. However, electrode modification also needs to consider other physical and chemical properties to maintain stability in various substrate environments while improving electrical performance.
The operation of MFC systems relies on the utilization of substrates by microbes. Therefore, the selection of substrates needs to focus on the utilization efficiency of specific substrates by specific strains. Currently, most studies have determined the wastewater treatment and power generation capabilities of MFC systems using synthetic wastewater. However, it is still necessary to further study the adaptability of different strains to the complex compositions and concentrations found in practical wastewater. The recalcitrance of lignin is a challenge for the large-scale bioconversion of LCB. Efficient pretreatment methods still need to be developed to improve the substrate availability of LCB. For LCB hydrolysates, sufficient removal of cytotoxic substances might be conducive to improving the metabolic activity of strains but leads to an increase in process costs. Although strains with tolerance to such cytotoxic substances might be a way to reduce these costs, the performance of such strains in MFC systems still needs to be determined. In addition, the selection of substrates and strains needs to be aimed at the specific function and application of the MFC system. For MFC systems focusing on electricity generation, more of the substrate needs to be oxidized for electron generation instead of being converted to unnecessary metabolites through other metabolic pathways. In contrast, if the priority of the MFC system is to produce value-added products, more of the substrate needs to be converted to the desired products rather than being oxidized. Proper metabolic regulation might be an effective way to enhance the performance of strains consuming specific substrates for specific functions and applications of MFC systems. Better performance of MFC systems relies on a better fit among strains, substrates, and functions.
The efficient operation of the MFC system depends on a stable internal and external environment. The scale-up of MFC systems needs to be supported by proper control of operating conditions. Temperature is still the main external factor affecting the performance of MFC systems. A low-temperature environment might lead to longer start-up times and lower operating efficiency of MFCs. Therefore, technical improvement of MFC systems is still needed to achieve stable performance over a wide range of temperatures. As for internal factors, such as pH, cell biomass concentration, substrate concentration, and electron mediator concentration, parametric optimization based on experimental design and mathematical modeling is helpful. For the maintenance of MFC devices, growth inhibition and periodic removal of non-electroactive biofilms on the cathode are conducive to maintaining oxygen reduction efficiency during long-term MFC operation. However, the feasibility of this approach in the large-scale application of MFCs remains unclear. In addition, there are technical difficulties in maintaining the activity of anodic electroactive microbes for a long time in the scale-up of MFC systems. Therefore, it is necessary to develop convenient maintenance methods for MFC devices. The operation and maintenance of MFCs need to focus on maintaining high energy production efficiency and reducing energy loss.
Conclusions
As an emerging energy technology, the MFC system realizes the effective combination of electricity production, waste treatment, and biomass valorization. Mixed culture techniques and genetic engineering strategies are conducive to improving the substrate availability and electricity generation of electrochemically active microbes. Wastewater and LCB materials are promising substrates for achieving efficient operation of MFC systems and reducing substrate costs. Anode and cathode modifications based on different materials can improve the electricity output of MFC systems by enhancing extracellular electron transfer and oxygen reduction efficiency, respectively. The use of electron transfer mediators and the optimization of operating conditions have also improved the performance of MFC systems. Despite the challenges, MFC systems still have significant potential for scale-up in various applications.

Acknowledgments: The authors thank SUNY College of Environmental Science and Forestry for the help and support in this study.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2022,
"sha1": "f8eaea1c149d58af965c2e98b3122cc608df75d0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-6284/11/4/44/pdf?version=1664547677",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06ca3ccdabd10a6c1489393bc1105cd7e98333e0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
A reactive transport model for geochemical mitigation of CO2 leaking into a confined aquifer
Long-term storage of anthropogenic CO2 in the subsurface generally assumes that caprock formations will serve as physical barriers to upward migration of CO2. However, as a precaution and to provide assurances to regulators and the public, monitoring is used to detect any unexpected leakage from the storage reservoir. If a leak is found, the ability to rapidly deploy mitigation measures is needed. Here we use the TOUGHREACT code to develop a series of two-dimensional reactive transport simulations of the hydrogeochemical characteristics of a newly formed CO2 leak into an overlying aquifer. Using this model, we consider: (1) geochemical shifts in formation water indicative of a leak; (2) hydrodynamics of pumping wells in the vicinity of a leak; and (3) delivery of a sealant to a leak through an adjacent well bore. Our results demonstrate that characteristic shifts in pH and dissolved inorganic carbon can be detected in the aquifer prior to the breakthrough of supercritical CO2, and could offer a potential means of identifying small and newly formed leaks. Pumping water into the aquifer in the vicinity of the leak provides a hydrodynamic control that can temporarily mitigate the flux rate of CO2 and facilitate delivery of a sealant to the location of the caprock defect. Injection of a fluid-phase sealant through the pumping well is demonstrated by introduction of a silica-bearing alkaline flood, resulting in precipitation of amorphous silica in areas of neutral to acidic pH. Results show that a decrease in permeability of several orders of magnitude can be achieved with a high molar volume sealant, such that the CO2 flux rate is decreased. However, individual simulation results are highly contingent upon the properties of the sealant, the porosity-permeability relationship employed in the model, and the relative flux rates of CO2 and alkaline flood introduced into the aquifer. These conclusions highlight the need for both experimental data and controlled field tests to constrain modeling predictions.
Introduction
Physical and chemical trapping mechanisms are both associated with long-term storage of CO2. Initially, structural and hydrodynamic trapping dominates, requiring low permeability units, or caprocks, to act as seals above the injection reservoir. Efficient physical trapping will lead to chemical trapping processes over longer periods of time, including dissolution, sorption and mineralization [1]. The time period over which chemical processes become dominant is long enough that physical trapping offers the first defence against CO2 migration and loss.
Although tens to thousands of years may be required for chemical trapping to become a substantial contribution to CO2 sequestration [2], perturbations to the geochemical composition of formation water in response to the introduction of concentrated CO2 can be rapid. As a result, if CO2 migrates into an overlying reservoir due to a defect in the caprock structure, this migration could be associated with abrupt chemical changes [3]. Furthermore, the response of the formation water to CO2 intrusion (e.g., decreased pH) may be leveraged for engineered intervention strategies.
Remediation strategies for leakage scenarios commonly require discontinuation of injection into the primary reservoir, but from an operations standpoint, such disruption to a full-scale project may be costly and prohibitively difficult. Here we evaluate a chemical mitigation strategy requiring continuous injection of CO2 to establish a reactive mixing zone with a chemical sealant.
Geochemistry of a CO2 leak
Solubility trapping involves the dissolution of CO2 into the aqueous phase and is thus a function of pressure, temperature and ionic strength of the fluid [2]. While the majority of total dissolved inorganic carbon (DIC) in solution exists as CO2(aq), the dissociation of carbonic acid governs the shift in pH and the stability of solid phases. The composition and mineralogy of the reservoir thus exert a principal control on the geochemical response of the system to a CO2 leak [4]. Initial pore water acidification after introduction of CO2 results in dissolution of readily soluble minerals, such as calcite, dolomite, siderite, iron (oxy)hydroxides and even the cement and steel comprising installed well casings [5,6]. This enhanced dissolution increases the concentration of cations, carbonate and bicarbonate ions and thus subsequently increases pH [3,7], and potentially trace metal concentrations [8]. During the initial phases of a leak, the abundance of carbonate minerals in the upper reservoir influences both the magnitude of the pH decrease and the increase in cation and trace element concentrations in the vicinity of this pH drop.
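As a rough illustration of the direction and magnitude of the pH response described above, the sketch below uses the closed-form dilute-solution approximation [H+] ≈ √(K1·KH·pCO2) for water equilibrated with CO2. The equilibrium constants are approximate 25 °C values and mineral buffering, ionic strength and reservoir temperature are all ignored, so the output is indicative only and is not a substitute for the reactive transport calculations presented here.

```python
# Minimal sketch (illustrative only, approximate 25 C constants, ideal dilute solution):
# estimate the pH of water equilibrated with CO2 at a given partial pressure.
import math

K_H = 10 ** -1.47   # Henry's law constant for CO2(aq), mol/L/bar (approximate)
K_1 = 10 ** -6.35   # first dissociation constant of carbonic acid (approximate)

def ph_from_pco2(p_co2_bar: float) -> float:
    """pH from the approximation [H+] ~ sqrt(K1 * KH * pCO2), valid when bicarbonate
    is the dominant anion and carbonate/hydroxide contributions are negligible."""
    h_plus = math.sqrt(K_1 * K_H * p_co2_bar)
    return -math.log10(h_plus)

for p in (0.0004, 1.0, 10.0):   # atmospheric, 1 bar, and an elevated CO2 pressure
    print(f"pCO2 = {p:7.4f} bar -> pH ~ {ph_from_pco2(p):.2f}")
```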
The combined effect of the processes described above leads to a general expectation that the pore water immediately adjacent to a newly formed CO2 leak will exhibit an initial drop in pH and increase in DIC. This chemical shift is significant because it may precede the breakthrough of a supercritical CO2 plume in a downgradient monitoring well. Such pH and DIC shifts prior to the arrival of CO2 were observed in the primary injection reservoir during the Frio Carbon Capture and Storage (CCS) project [5]. Detecting such a shift requires that the chemical and hydrodynamic conditions allow for pore water in contact with the leak to subsequently mix into the surrounding fluid and advect down-gradient in advance of the CO2 plume. In total, pH and DIC responses associated with a CO2 leak into an upper reservoir are predictive, but the magnitude of the observed response will also depend on the timing and location of the leak relative to monitoring well observations, the hydrodynamics of the system and the host rock mineralogy.
Observations from natural [9,10,11] and anthropogenic [5,[12][13][14][15][16][17][18][19] examples of subsurface CO2 infiltration highlight two important points. First, characteristic responses in pore water chemistry, namely decreasing pH and elevated alkalinity, may be detected before the breakthrough of CO2. These indicators may offer a means of advanced warning where measurements are feasible. Second, these shifts distinguish the geochemical signature of the CO2 leak from the undisturbed reservoir and thus allow for engineered interventions, such as the introduction of a sealant with pH- or CO2-dependent solubility.
Properties of chemical sealants
Creating a seal is a matter of preventing fluid flow through a specified volume by emplacement of a permeability or pressure barrier [20]. Current research is underway to develop CO2-resistant cements to seal potential leaks in existing wellbores, as well as to improve resistance of new wells to acidic fluids [21,22]. In contrast, remediation of CO2 leakage through fault or fracture systems presents a more substantial technological challenge and has received comparatively less attention [e.g. 20].
One of the key issues in mitigating extended leakage zones (e.g., fractured systems in connected wells or in fault/damage zone systems) with current sealant technology is that the initial viscosity and setting time of most sealants will not allow sufficient lateral penetration across a fracture system. For example, hydraulic cements have high viscosity (up to 1000s of cP) and a short setting time of a few hours [22,23,24]. Resins offer somewhat better parameters, with typical initial viscosities on the order of several 10s of cP and setting times of about 10 hours, though this still results in complete polymerization within a few meters of the injection well [22]. Currently only a few sealants reported in the literature have an initial viscosity low enough to support delivery to large damage zones. These include in situ generated polymer (IGP) with an initial viscosity similar to water [25,26] and a CO2-activated silicate polymer initiator (SPI-CO2) [DOE project DE-FE0005958]. The IGP phase change is temperature dependent, resulting in a maximum gelation time on the order of 10 hours [26,27,28]. The SPI-CO2 gelation time is highly dependent upon the pH of the solvent, but can remain in solution for up to several days [29,30]. The high pH required to maintain SPI in solution reflects the pH dependence of silica saturation. This pH dependence suggests the potential development of sealants that are not dictated by a setting time, but rather remain in solution until an acidic CO2 plume is contacted.
In general, effective sealant delivery to a CO2 leakage zone requires extensive study. Conformance issues present a significant challenge because a comprehensive rock/fluid property database and operational experience are still lacking [20,31], and rock formations are heterogeneous at all scales and cannot be fully characterized with enough resolution to detect small fractures in the caprock by geophysical techniques. An additional limitation is that current reactive transport simulators, which should be used to guide intervention strategies, do not yet have capabilities for accurately modeling the chemical and transport properties of many sealant classes, and for many of these sealants the properties required to develop even basic simulations are currently unavailable. The purpose of the present chapter is to initiate this effort by demonstrating the current capabilities and limitations of a reactive transport modeling approach to describe emplacement of a reactive barrier to mitigate a CO2 leak. Here we present a conceptual model of CO2 infiltrating into a confined aquifer above the primary storage reservoir and examine the delivery of a hypothetical SiO2-based sealant to the damage zone through a nearby injection well.
Model Development
Simulations are conducted using the TOUGHREACT nonisothermal multicomponent reactive transport code [32,33,34,35] with the ECO2N thermophysical property module for H2O-NaCl-CO2 mixtures in the range of temperatures and pressures appropriate for CO2 sequestration [36]. Mathematical formulations for the sequential iteration approach [37] utilized in TOUGHREACT to solve the basic mass and energy conservation equations are described in detail elsewhere [34,35,38]. The TOUGHREACT code has been used previously to simulate the hydrogeochemistry of CO2 storage in saline aquifers [39,40,41,42], and more recently to consider both the geochemical behaviour of a CO2 leak into an overlying aquifer [43,44,45] and the associated consequences for drinking water quality [46,47,48]. To the authors' knowledge, the current study is the first to report the application of a reactive transport code to the simulation of sealant delivery for remediation of a CO2 leak into an overlying aquifer.
Model domain and thermophysical conditions
The current study focuses on the immediate geochemical response to a newly formed CO2 leak into an overlying aquifer. As a result, the model domain is restricted to the area immediately adjacent to the leak, comprising a 50 m vertical and 2 km lateral extent and simplified to two dimensions (2-D). The upper boundary of the domain is held to a no-flow condition, representing an upper confining unit above the aquifer. A highly refined grid of 1 m² blocks is used to discretize the domain from the left boundary to a distance of 450 m laterally, at which point the grid blocks increase exponentially in area with further distance to yield a quasi-infinite boundary condition on the right side of the domain. The left boundary of the domain is specified as no-flow and may be conceptualized as a fault or sedimentary basin margin. Grid cells in this left boundary are specified with a low injection rate, resulting in a net flux of 1 cm/day from left to right in the aquifer to represent regional groundwater flow. The bottom boundary of the domain constitutes the upper section of the primary CO2 injection reservoir, and is separated from the overlying aquifer by a thin, horizontal caprock. The initial thermophysical conditions of the system are shown in Table 1. Prior to initialization, a pressure of 17 MPa and a temperature of 55 °C are specified across the domain. These values fall within the range bounded by the hydrostatic and lithostatic gradients and correspond to an approximate depth of 1-1.5 km below land surface. To separate the injection reservoir from the overlying aquifer, permeability in the caprock is set to 1 × 10⁻²⁰ m², eliminating any flow through this portion of the domain. Two adjacent grid cells within the caprock unit centered 150 m from the left boundary of the domain are assigned an initial z-permeability less than that of the surrounding caprock to create a 2 m wide 'defect' in the otherwise impermeable unit. Two starting values of z-permeability for this defect, 1.0 × 10⁻¹⁵ and 1.0 × 10⁻¹⁷ m², were tested in model simulations. At the specified temperature and pressure range of these simulations, two liquid phases are present as saline H2O and supercritical CO2. Relative permeability of the H2O liquid phase (k rl) is calculated based on H2O saturation (S l) using a van Genuchten relation [49] for a specified irreducible water saturation (all parameters defined in Table 1) as

k rl = √S* { 1 − [ 1 − (S*)^(1/m) ]^m }²,   (1)

where S* = (S l − S lr)/(1 − S lr). Relative permeability of the CO2 phase (k rc) is calculated using a Corey relation [50] based on H2O saturation and the irreducible water and CO2 saturations (Table 1) as

k rc = (1 − Ŝ)² (1 − Ŝ²),   (2)

where Ŝ = (S l − S lr)/(1 − S lr − S cr). The capillary pressure (p cap) necessary to overcome interfacial tension between H2O and CO2 phases in the porous media is also calculated using a van Genuchten relationship [49] as

p cap = −P0 [ (S*)^(−1/m) − 1 ]^(1−m).   (3)
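For readers who want to see how these characteristic curves behave, the sketch below evaluates the standard van Genuchten and Corey forms written above (Eqs. 1-3) as they are commonly implemented in TOUGH2-family codes. The parameter values are placeholders chosen for illustration, not the Table 1 values, which are not reproduced here.

```python
# Minimal sketch of the van Genuchten / Corey characteristic curves (Eqs. 1-3);
# parameter values below are illustrative placeholders, not the paper's Table 1 values.
import numpy as np

S_LR, S_CR = 0.30, 0.05      # assumed irreducible water and CO2 saturations
M, P0 = 0.46, 1.96e4         # assumed van Genuchten exponent and strength parameter (Pa)

def k_rl(s_l: np.ndarray) -> np.ndarray:
    """Liquid relative permeability, van Genuchten-Mualem form (Eq. 1)."""
    s_star = np.clip((s_l - S_LR) / (1.0 - S_LR), 0.0, 1.0)
    return np.sqrt(s_star) * (1.0 - (1.0 - s_star ** (1.0 / M)) ** M) ** 2

def k_rc(s_l: np.ndarray) -> np.ndarray:
    """CO2 phase relative permeability, Corey form (Eq. 2)."""
    s_hat = np.clip((s_l - S_LR) / (1.0 - S_LR - S_CR), 0.0, 1.0)
    return (1.0 - s_hat) ** 2 * (1.0 - s_hat ** 2)

def p_cap(s_l: np.ndarray) -> np.ndarray:
    """Capillary pressure, van Genuchten form (Eq. 3), in Pa (negative by convention)."""
    s_star = np.clip((s_l - S_LR) / (1.0 - S_LR), 1e-6, 1.0)
    return -P0 * (s_star ** (-1.0 / M) - 1.0) ** (1.0 - M)

s = np.array([0.4, 0.6, 0.8, 1.0])
print(np.round(k_rl(s), 3), np.round(k_rc(s), 3), np.round(p_cap(s), 1))
```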
Geochemical conditions
In the TOUGHREACT code, chemical mass balance is calculated in terms of the total number of linearly independent (basis) species [51], leading to a mixed equilibrium-kinetic (i.e. differential-algebraic) equation set [52]. The short timespan of the current simulation relative to the chemical trapping mechanisms supports the use of simplified geochemical conditions, as the extent of primary silicate dissolution expected to occur is extremely limited. The initial reservoir composition was thus simplified to 80% quartz, 20% feldspar. The feldspar composition is a 20% anorthite, 80% albite solid solution (Ca0.2Na0.8Al1.1Si2.8O7.8) with temperature-dependent equilibrium constants calculated from Arnórsson and Stefásson [53] to account for non-ideal mixing. The regression coefficients necessary to obtain this temperature dependence were refit from the Arnórsson and Stefásson [53] values to match the equation used in the TOUGHREACT code.
In addition to the quartz and feldspar initially present in the domain, the secondary minerals kaolinite and calcite are allowed to form. As noted previously, the small time interval of this simulation negates appreciable accumulation of these secondary clays and carbonates, but they are included for completeness. All thermodynamic data other than those specified for the albite-anorthite solid solution were taken from Wolery [54]. Initialized fluid concentrations, mineral volume fractions and kinetic rate parameters for the current simulations are provided in Table 2.
CO2 leak and sealant
The primary injection well for CO2 into the lower reservoir is located outside of the current high-resolution model domain. As a result, the presence of a CO2 source is simulated by fixing the grid cell in the lower left boundary of the domain to a constant CO2 saturation of 75% and an elevated pressure. Two fixed pressure values of 18 MPa and 20 MPa (approximately 1 MPa and 3 MPa higher than the ambient pressure in the upper aquifer, respectively) were tested. For each combination of caprock defect permeability and initial CO2 reservoir pressure (Table 3), a 1 year simulation was run prior to implementing any sealant or hydrodynamic mitigation in order to establish the presence of a CO2 leak in the upper aquifer. A pH-dependent silica-based sealant ("hypothetical sealant") was simulated using the thermodynamic and kinetic properties of amorphous silica. Due to the lack of published data on Si polymers, we use the properties of SiO2(aq) and SiO2(am), and increase the molar volume of SiO2(am) to 500 cm³/mol to represent a hypothetical gel or polymer that undergoes a large volume increase during gelation. Dissolved silica is introduced to the system through a pumping well along the left boundary of the upper aquifer at a uniform fluid injection rate of 0.001 kg/m²/s for a total fluid injection across the boundary of 0.047 kg/s. This injection raises the pressure near the left boundary of the aquifer to approximately 18.2 MPa. The injectate solution comprises the same initial concentrations as aquifer formation water (Table 2) but is equilibrated with Na2SiO3 such that pH is increased to 10 and SiO2(aq) to 0.1 M. Bromine, serving as an inert tracer, is increased to 0.01 M. The concentration of sodium in this alkaline flood is also slightly increased.
The accumulation of the hypothetical sealant at the interface between the alkaline flood and acidic CO2 plume is intended to reduce the porosity and thus the permeability of the aquifer in the region of reactivity. This porosity-permeability relationship is difficult to constrain and is often dependent upon the site-specific geometry and reactivity of a given porous medium. The present simulations utilize a modified Hagen-Poiseuille relationship [55], in which permeability scales with the fourth power of the average pore-throat diameter,

k ∝ C k · NP · d⁴,   (4)

where C k is the number of pore throats connecting to an individual pore, NP is the number of pores in a given area of porous media and d is the average diameter of a pore throat (which will decrease with secondary mineral precipitation). This relationship is suggested as an accurate description of the porosity-permeability relationship in conglomerates and sandstones [55]. Typical values for C k and NP of 2 throats/pore and 1000 pores/m², respectively, were used [56,57].
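To make the sensitivity of this relationship concrete, the short sketch below assumes only the d⁴ scaling of Eq. (4); the mapping from precipitated sealant volume to pore-throat diameter is site-specific and is not modeled here, so the numbers are purely illustrative.

```python
# Minimal sketch (illustrative only): with a Hagen-Poiseuille-type d^4 dependence,
# permeability is extremely sensitive to pore-throat narrowing by precipitated sealant.
import math

def k_ratio(d_over_d0: float) -> float:
    """Permeability reduction factor for a given fraction of the original throat diameter."""
    return d_over_d0 ** 4

for remaining in (0.5, 0.1, 0.01, 0.001):
    ratio = k_ratio(remaining)
    print(f"throat diameter at {remaining:6.1%} of original -> "
          f"k/k0 = {ratio:.1e} ({-math.log10(ratio):.0f} orders of magnitude lost)")
```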
Primary injection of CO2 into the lower reservoir and the associated pressure build-up due to this injection continue uninterrupted regardless of the development of a leak or any remediation efforts tested in the upper aquifer. As will be shown in the subsequent sections, this continued CO2 injection pressure is required in order to establish a reactive mixing zone sufficient to precipitate substantial quantities of sealant.
Results and discussion
Each leak was allowed to develop for 365 days of simulation prior to introducing any hydrodynamic or chemical mitigation measures. These parameters resulted in a variety of initial scenarios ranging from substantial influx and pooling of CO2 against the upper no-flow boundary of the domain (scenario A), to moderate infiltration that may be barely possible to detect by seismic methods (scenarios B and C), to virtually undetectable presence of CO2 within the caprock (scenario D). The flux rate of CO2 out of the top of the caprock defect at 365 days of uninterrupted flow for scenario A is 3.60 g/s. This is the largest flux rate considered in these simulations. Scenario C corresponds to a CO2 flux rate of 0.56 g/s. The flux rate for scenario B is 0.05 g/s and that for scenario D is 0.02 g/s, both virtually undetectable after 365 days of CO2 injection. Each of these scenarios will be used as the initial conditions for introduction of an alkaline flood containing a hypothetical sealant from the left boundary.
For the conditions generating the largest CO2 leak rate (scenario A), amendment of the alkaline flood continues for three consecutive years. Interaction of the alkaline flood with the acidic CO2 plume results in some enhanced mixing at the pH boundary [58], though this reactive front stabilizes after approximately two years of continuous flooding and remains fairly stable thereafter (Fig. 1). At the specified sealant injection rate of 0.047 kg/s, the introduction of this flood pressurizes the overlying aquifer such that CO2 flux through the top of the caprock defect is reduced from 3.60 to 3.03 g/s (i.e. by 16%). However, the pressure from the underlying CO2 injection reservoir is still great enough that the shape of this stabilized profile is influenced by the balance between injection of alkaline flood at the left boundary, infiltration of CO2 from the bottom of the domain at x = 150 m and a quasi-infinite right boundary.
Figure 1: pH of fluid after 3 years of constant sealant amendment from the left boundary. Sealant is formed in the mixing zone between the alkaline flood (pH 10, red) and acidic CO2 plume (pH 5, blue). All distances reported in meters.
In the case of a high (3 MPa; scenarios A and B) pressure differential between the lower CO2 reservoir and upper aquifer, a reactive mixing front develops between the alkaline Si-flood and acidic CO2 plume, resulting in the precipitation of large volumes of silicate polymer (ca. 2% of the total solid volume). This porosity decrease is associated with a substantial reduction in permeability, as high as 16 orders of magnitude assuming the modified Hagen-Poiseuille porosity-permeability relationship (Eq. 4). However, the high injection reservoir pressure supports rapid infiltration of CO2 into the upper aquifer even during sealant delivery, thus precluding the formation of a coherent barrier. After three consecutive years of flooding, the sealant injection well was shut off and the simulation continued, demonstrating no observable decrease in CO2 flux rate into the upper aquifer despite substantial accumulation of sealant in the area adjacent to the leak. Thus, for a high pressure differential between the injection reservoir and upper aquifer, the accumulation of precipitated sealant may influence the location and flow velocity of CO2, but will not decrease the loss of CO2 from the injection reservoir while primary CO2 delivery is still active.
In contrast, simulations at a lower pressure differential (1 MPa; scenarios C and D) result in a less distinct mixing zone as a result of the high pressure in the upper aquifer that forces the alkaline, sealant-bearing fluid into the primary CO2 reservoir. Sealant introduced to this portion of the domain is rapidly precipitated as a result of mixing with the acidic, CO2-rich plume (Eq. 2) and interaction with residual CO2 trapped within the low-permeability caprock (Eq. 3). In contrast to the previous cases, this rapid consumption of dissolved sealant is not easily replenished by incoming alkaline flood, as the flow rate of the alkaline fluid into the primary CO2 reservoir is very low. Since reactant is not easily supplied to this mixing zone, the total accumulation of solid phase sealant is quite small compared to scenarios A and B. For both scenarios C and D, the minimum permeability in the lower reservoir after 730 days of flooding is only decreased to 85% of the original value. However, this minor decrease occurs in the area immediately adjacent to and within the caprock defect itself, leading to formation of a coherent seal that limits CO2 migration. After sealant delivery is stopped, the newly emplaced seal reduces the net flux of CO2 through the defect by 21% of the original value in scenario C and 90% in scenario D. This outcome represents a substantial reduction in the leakage rate of CO2 while the primary injection reservoir remains active, provided that the pressure differential between the lower CO2 reservoir and upper aquifer remains small.
Summary and conclusions
Emplacing a seal where CO2 leakage has already occurred is challenging because of multiphase flow dynamics. These results demonstrate that effective delivery of a pH-dependent sealant to a leaking damage zone during active CO2 injection requires that the pressure buildup during seal emplacement be comparable to the pressure buildup in the underlying CO2 reservoir. However, if a sealant with a higher viscosity is able to penetrate the CO2 plume, this constraint may be relaxed. For a pH-dependent sealant, generating an effective seal requires establishing a mixing zone between the alkaline flood and the acidic CO2-rich water. Development of the mixing zone is enhanced by providing a constant supply of both fluids to the reaction front, which requires a continued flux of CO2 during the emplacement process. Shutting off injection prior to sealant emplacement can limit the creation of a mixing zone and, hence, limit the establishment of an effective barrier.
These results are predicated on the assumption of a porosity-permeability relationship that achieves large permeability reduction for fairly small changes in mineral volume fraction. If a relationship such as the Kozeny-Carman or cubic law provides a better description of the behaviour observed in a given reservoir, then generation of an effective seal through precipitation of a solid phase at the CO2-flood boundary will be much harder to achieve. Therefore, a fundamental result of this study is the recognition that sealant properties must be developed in conjunction with an accurate understanding of the porosity-permeability relationship of the target system.
Table 1:
Hydrogeologic parameters for the confined aquifer
"year": 2014,
"sha1": "1663e44233dd069ea0d6f1f4f9f0d1423cd4a369",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.egypro.2014.11.495",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "af9969048167bf4545832d89ad026bec4b93de53",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
THE OSCILLATORY BEHAVIOR OF LIGHT IN THE COMPOSITE GOOS-HÄNCHEN SHIFT
For incidence in the critical region, the propagation of gaussian lasers through triangular dielectric blocks is characterized by the joint action of angular deviations and lateral displacements. This mixed effect, known as the composite Goos-Hänchen shift, produces a lateral displacement dependent on the axial coordinate, recently confirmed by a weak measurement experiment. We discuss under which conditions this axial lateral displacement, which only exists for the composite Goos-Hänchen shift, presents an oscillatory behavior. This oscillation phenomenon shows a peculiar behavior of light for critical incidence and, if experimentally tested, could stimulate further theoretical studies and lead to interesting optical applications.
I. INTRODUCTION
The easiest way of describing laser propagation through dielectric structures is in terms of ray optics. The geometrical approach is useful to explain most of the practical applications [1,2]. In the extraordinary experiment realized by Goos and Hänchen in 1947 [3], the discovery of a lateral displacement of transverse electric (TE) beams totally reflected by a dielectric/air interface suggested deviations from the laws of geometrical optics and stimulated new searches for them. For θ 0 > θ cri + λ/w 0 , where θ 0 is the incidence angle at the left air/dielectric side of the triangular dielectric block depicted in Fig. 1 (sin θ = n sin ψ, ϕ = ψ + π/4 and sin ϕ cri = 1/n), λ is the wavelength of the beam and w 0 its minimal waist, the gaussian beam is totally reflected by the lower dielectric/air interface. The Fresnel coefficient is complex and acts on the whole gaussian angular distribution. The complex phase is responsible for the transversal shift of the beam. For θ 0 ≫ θ cri + λ/w 0 , the shift is of the order of λ. Its derivation, done by using the stationary phase method, appeared for the first time in the literature one year after the experiment of Goos and Hänchen and was proposed by Artmann [4]. He also observed that for transverse magnetic (TM) beams a different lateral displacement occurs. In 1949, this prediction was experimentally confirmed by Goos and Hänchen [5]. Approaching the critical region, θ 0 ≈ θ cri + λ/w 0 , the lateral displacement gains an amplification, passing from λ to √(λ w 0 ) [6][7][8]. In the Artmann region, θ 0 ≫ θ cri + λ/w 0 , where the shift is proportional to λ, amplifications can be obtained by a multiple reflections device, as was the case in the first Goos-Hänchen (GH) experiment [3], or by using the weak measurement technique [9,10], as recently done by Jayaswal, Mistura, and Merano [11].
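As a quick numerical cross-check of the prism geometry defined above, the sketch below computes the internal critical angle, the corresponding external incidence angle and the angular width λ/w 0 of the critical region for a BK7 prism (n = 1.515) and a 633 nm beam with w 0 = 150 µm; the output reproduces the values quoted later in the text (ϕ cri ≈ 41.3°, θ cri ≈ −5.6°).

```python
# Small numerical check of the BK7 prism geometry (n = 1.515) used in the text.
import math

n = 1.515
phi_cri = math.degrees(math.asin(1.0 / n))          # internal critical angle at the lower face
psi_cri = phi_cri - 45.0                            # refraction angle at the left face (phi = psi + pi/4)
theta_cri = math.degrees(math.asin(n * math.sin(math.radians(psi_cri))))  # external incidence angle

lam, w0 = 0.633e-6, 150e-6
half_width_deg = math.degrees(lam / w0)             # angular size of the critical region

print(f"phi_cri   = {phi_cri:.3f} deg")             # ~41.3 deg
print(f"theta_cri = {theta_cri:.3f} deg")           # ~ -5.6 deg
print(f"lambda/w0 = {half_width_deg:.3f} deg")      # ~0.24 deg
```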
For θ 0 < θ cri − λ/w 0 , the angular gaussian distribution is modulated by real Fresnel coefficients, the complex phase is lost, and, consequently, the lateral GH shift is not present. Nevertheless, in this incidence region, angular deviations from the laws of ray optics occur. This is due to the fact that while the incident gaussian field has a symmetric angular distribution, g(θ − θ 0 ) = exp[ − (θ − θ 0 )² (k w 0 )²/4 ], k = 2 π/λ, centered at θ 0 , and consequently moves along the z LAS axis, the transmitted field (see Fig. 1), obtained by modulating this distribution with the transmission coefficients of the left and right interfaces and the reflection coefficient at the lower interface (here y 0 = ( sin θ 0 + cos θ 0 ) a + { [ cos θ 0 − n cos ψ(θ 0 ) ] sin θ 0 / n cos ψ(θ 0 ) } b is the geometrical shift [12], α the polarization, and ϕ = π/4 + ψ), is characterized by an angular distribution whose initial gaussian shape is distorted by the Fresnel coefficients. The symmetry breaking caused by the Fresnel coefficient [13,14] creates an axial dependence of the transversal component of the transmitted field and, as a consequence, angular deviations from the optical path predicted by the laws of geometrical optics. These deviations are of the order of (λ/w 0 )². For transverse magnetic waves and incidence in the vicinity of the Brewster angle, the angular deviations gain an amplification of w 0 /λ, leading to the giant GH angular shift [15]. This amplification has recently been detected in direct [16] and weak measurement technique-based [17,18] experiments.
The GH lateral displacement has also recently been investigated in the reflection of a light beam by a graphene layer, controlled by a voltage modulation [19], and in the reflection of terahertz radiation from a uniaxial antiferromagnetic crystal, where the action of an external magnetic field induces a nonreciprocity in the shift for positive and negative incidence [20].
We did not aim to give a complete review of theoretical analyses or experimental facts. We confined ourselves to outlining the general field in which their investigation is seated. For the reader who wishes to deepen any question of lateral displacements, angular deviations and/or breaking of symmetry in its entirety, we suggest reading the excellent works of Bliokh and Aiello [21] and Götte, Shinohara, and Hentschel [22], where clear presentations, detailed discussions, and relevant aspects of deviations from the laws of ray optics are reported.
In the next section, we give a brief discussion of the motivations which stimulated our investigation in the critical region of incidence, where angular deviations and lateral displacements act together generating the composite GH shift. This discussion will then be followed (section III) by an analytical description of the beam transmitted through a triangular dielectric block and by the calculation of the transversal displacement of the transmitted beam as a function of its axial coordinate z. The analytical approximation is then tested and confirmed, in section IV, by the numerical calculation done by using directly Eq. (3). The surprising result of oscillations in the lateral displacement is, probably, the most important evidence of the strange behavior of light near critical incidence. Conclusions and outlooks are drawn in the final section.
II. THE COMPOSITE GOOS-HÄNCHEN SHIFT
The critical region, see Eq. (5), surely represents the most interesting incidence region to study deviations from the laws of ray optics. This is due to the fact that, in this region, angular deviations and lateral displacements are both present. Furthermore, at critical incidence, the breaking of symmetry is maximized [13,14] and the Goos-Hänchen shift is amplified by a factor √(k w 0 ) [6][7][8]. In such a region, we thus expect amplified angular deviations. Numerical calculations done in this region show a clear amplified axial dependence of the GH shift [23]. These (numerical theoretical) predictions have recently been confirmed by a weak measurement experimental analysis [24]. Once the amplified axial dependence is confirmed, it becomes important to understand whether, and under which conditions, negative lateral displacements occur. The idea of light oscillations around the path predicted by geometrical optics, for incidence in the critical region, was stimulated by the fact that in this region the gaussian angular distribution is modulated by a reflection Fresnel coefficient which is in part real (θ < θ cri ) and in part complex (θ > θ cri ). The modulated angular distribution, which is responsible for the spatial shape of the transmitted field (no longer a gaussian field), should present an interference between the real and complex parts, generating oscillation phenomena.
The possibility of finding an analytical expression for the transmitted field makes it possible to calculate the maximal transmitted intensity and to determine the beam parameters for which oscillations can be experimentally detected. Clearly, once the analytical approximation is obtained, it has to be tested by numerical calculations done by using directly the transmitted intensity given in Eq. (3). Numerical calculations are often hard, require time and leave it unclear how to optimize the choice of the beam parameters or the experimental device. So, the analytical expression proposed in the next section could be very useful for experimentalists interested in studying, investigating and, possibly, detecting, in the critical region, angular deviations from the laws of geometrical optics and oscillation phenomena.
Analytical expressions for the GH shift, extensively studied in the literature, have always represented an intriguing challenge. The first attempt was carried out by Artmann in 1948 [4]. Its derivation explained the lateral displacement of TE waves found in the experiment realized by Goos and Hänchen one year before [3] and predicted a different behavior of TM waves, later confirmed by Goos and Hänchen in the experiment of 1949 [5]. Nevertheless, the Artmann formula contains a divergence at the critical angle, and this is due to the fact that the Taylor expansion used for the complex phase breaks down when the incidence angle approaches the critical one. In 1971 [6], Horowitz and Tamir (HT) proposed an analytical expression for the lateral displacement of a gaussian light beam incident from a denser to a rarer medium. They obtained an approximation for the Fresnel coefficient which allowed them to analytically solve the integral determining the propagation of the reflected beam and found, for the TE and TM lateral displacements, a closed expression in terms of parabolic-cylinder (Weber) functions. They also found normalized curves valid for a wide range of parameters and suggested that the general functional behavior of the lateral shift should be similar for other symmetric angular distributions. In 1986 [25], Lai, Cheng, and Tang (LCT) overcame the cusp-like structure in the HT formula, obtaining a theoretical result for the lateral shift of a gaussian light beam which varies continuously and smoothly around the critical angle. Recently [8], a closed-form expression for the GH lateral displacement was proposed by Araújo, De Leo, and Maia (ADM). The ADM formula, differently from the HT and LCT ones, is not based on the reflection coefficient expansion but on the integral analysis of the complex phase. In ref. [8], the analytical expression obtained for the lateral displacement of a gaussian light beam, neglecting the axial dependence, is given in terms of modified Bessel functions of the first kind. The analysis done in [8] is also extended to different angular distribution shapes and also distinguishes between the lateral displacement of the optical beam maximum and the mean valued calculation of its shift which, due to the angular breaking of symmetry in the critical region, are different. The HT [6], LCT [25], and ADM [8] formulas reproduce for θ 0 ≫ θ cri + λ/w 0 the Artmann prediction and overcome the infinity problem for incidence at the critical angle. Such formulas are obtained for z ≪ z R (= π w 0 ²/λ) and do not contain any axial dependence. This means, for example, that to experimentally reproduce the theoretical results given in refs. [6,8,25] the camera has to be moved very close to the (right) interface of the triangular dielectric block. Axial dependence requires a more complicated study and often numerical calculations [23]. Its effect, which also appears before the critical region leading to angular deviations, produces, in the critical region, an axial amplification of the lateral displacement with respect to the amplification proportional to √(k w 0 ) predicted by the HT, LCT, and ADM formulas. This amplification has recently been seen in the weak measurement experiment cited in ref. [24]. In the next section, we will find an analytical expression of the transmitted intensity without any axial simplification. So, our final formula will explicitly contain the z-dependence of the camera position coming from the z-dependent term of the spatial phase appearing in Eq. (3).
If a gaussian beam, incident upon a dielectric/air interface (like the lower interface of the triangular prism), has its incidence angle in the critical region and if its angular distribution is broad enough (w 0 ≪ 1 mm), plane waves in the angular spectrum with θ > θ cri will be totally internally reflected and plane waves with θ < θ cri partially reflected. The angular breaking of symmetry and the real (θ < θ cri ) and complex (θ > θ cri ) nature of the reflection coefficient for incidence in the critical region play a fundamental role in the oscillatory behavior of the lateral displacement seen in the composite GH shift. It is the interplay between the Goos-Hänchen shift (total internal reflection) and the angular deviations (partial reflection) which generates the composite Goos-Hänchen effect. The z-dependence of the lateral displacement in the critical region [23] has recently been experimentally confirmed by using the weak measurement technique in ref. [24]. In this paper, we analyze under which conditions an oscillatory behavior occurs and when the pattern of oscillation can be reproduced by wider beams. The analysis presented in this paper applies to coherent light fields and leads to an analytical formula in terms of confluent hypergeometric functions of the first kind. Partially coherent light fields have to be treated by using the Mercer expansion, as done in ref. [26].
III. TRANSMITTED INTENSITY'S ANALYTICAL EXPRESSION
To obtain an analytical formula for the transmitted beam, some approximations have to be made. The first approximation is to factorize the left/right transmission coefficients. Such coefficients are very smoothly varying functions in the critical region and can thus be calculated in θ 0 . The second approximation is to change the limits of integration from ± π/2 to ± ∞. This is possible as our incident gaussian is strongly centered in θ 0 , which varies between θ cri − λ/w 0 and θ cri + λ/w 0 (for the BK7 prism, θ cri = −5.603°). Without loss of generality, we can thus rewrite the transmitted intensity in the form of Eq. (6), where δ GH = (y − y 0 )/w 0 and ζ(z) = 1 + 2 i z/(k w 0 ²). The crucial point in the analytical approximation is to develop the reflection coefficient D α (θ) in square-root powers around the critical angle. To do this, we rewrite the incidence angle θ in terms of the critical one, θ cri , as θ = θ − θ cri + θ cri = δθ + θ cri .
By observing that sin θ = n sin ψ ⇒ δθ cos θ cri = n δψ cos ψ cri and ϕ = ψ + π/4 ⇒ δϕ = δψ, we obtain δϕ = [cos θ cri /(n cos ψ cri )] δθ ≈ δθ/n, where for BK7 ϕ cri = arcsin(1/n) = 41.305°, ψ cri = −3.695° and θ cri = −5.603°. By expanding around the critical angle, n cos ϕ ≈ n cos ϕ cri − n sin ϕ cri δϕ = n cos ϕ cri − δϕ and 1 − n² sin² ϕ ≈ 1 − n² sin² ϕ cri − 2 n² sin ϕ cri cos ϕ cri δϕ = −2 n cos ϕ cri δϕ, and introducing the quantity δφ = −δϕ/(n cos ψ cri ), we can approximate the TE and TM reflection coefficients, given in (4), as a square-root expansion in δφ, with { γ TE , γ TM } = 4 { 1 , n⁴ } / (n √(n² − 1)). By using this expansion and introducing the new integration variable τ = k w 0 ζ(z) (θ cri − θ)/2, we obtain Eq. (8), where d(δ GH , z; θ 0 ) = k w 0 ζ(z) (θ 0 − θ cri )/2 + i δ GH /ζ(z) contains, besides the transversal and axial variables, the incidence angle dependence, and G(δ GH , z) represents the gaussian function exp[−2 δ GH ²/|ζ(z)|⁴]. The square-root and linear terms in the brackets of Eq. (8) act as modulators of the gaussian function G(δ GH , z). The effect of their modulation can be evaluated once the integral in Eq. (8) is analytically solved. To do this, we observe that the integral with a square-root integrand can be converted into a series. The series found is a well-known Gamma function series leading to a combination of confluent hypergeometric functions of the first kind. Thus, the integral can be analytically solved in terms of these functions. Finally, the analytical approximation for the transmitted intensity is given by Eq. (11). The study of the maximum of this function will be the topic of the next section.
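The following sketch is a schematic numerical illustration of this approach rather than an implementation of Eq. (8): it propagates a gaussian angular spectrum modulated by the exact TE Fresnel reflection coefficient of the lower glass/air interface and tracks the peak of the transmitted transverse profile as a proxy for the composite Goos-Hänchen shift. The left/right transmission coefficients, the geometrical offset y 0 and all overall constants are omitted, so the numbers are only indicative of the order of magnitude and of the z-dependence.

```python
# Schematic sketch (toy model, NOT the paper's Eq. (8)): gaussian angular spectrum,
# modulated by the exact TE Fresnel reflection coefficient of the lower glass/air
# interface, propagated paraxially to the camera plane; the peak of |E|^2 is used
# as a proxy for the composite Goos-Hanchen shift.
import numpy as np

n, lam, w0 = 1.515, 0.633e-6, 150e-6
k = 2 * np.pi / lam
theta_cri = np.arcsin(n * np.sin(np.arcsin(1.0 / n) - np.pi / 4))  # external critical angle

def peak_shift(theta0: float, z: float) -> float:
    """Transverse position (m) of the transmitted intensity maximum at axial distance z."""
    dtheta = np.linspace(-4.0, 4.0, 2001) * (lam / w0)           # angular offsets from theta0
    phi = np.arcsin(np.sin(theta0 + dtheta) / n) + np.pi / 4     # internal angle at the lower face
    cos_t = np.sqrt((1.0 - (n * np.sin(phi)) ** 2).astype(complex))
    r = (n * np.cos(phi) - cos_t) / (n * np.cos(phi) + cos_t)    # TE Fresnel coefficient
    g = np.exp(-(dtheta * k * w0) ** 2 / 4.0)                    # gaussian angular spectrum
    y = np.linspace(-2 * w0, 2 * w0, 1201)
    phase = np.exp(1j * k * (np.outer(y, dtheta) - 0.5 * z * dtheta ** 2))
    profile = np.abs(phase @ (g * r)) ** 2
    return y[np.argmax(profile)]

for z_cam in (0.0, 0.25, 0.50):
    shift = peak_shift(theta_cri + 0.1 * lam / w0, z_cam)
    print(f"z = {z_cam:4.2f} m -> peak displacement ~ {1e6 * shift:6.2f} um")
```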
IV. THE AXIAL OSCILLATORY BEHAVIOR
In order to check the validity of our approximation and compare our results with the previous ones that appeared in the literature, we first analyze the case in which the axial dependence is removed from Eq. (11).
In the experimental setup this means the case in which the camera is positioned very close to the right side of the dielectric block, i.e. z ≪ z R . This is, for example, the situation of the experiment done by Cowan and Anicin in ref. [27]. In such an experiment, the collected data were compared with the theoretical formulas of Artmann [4] and Tamir and Horowitz [6]. In Fig. 2a, we plot, for TE waves, the lateral displacement for a laser gaussian beam with wavelength λ = 0.633 µm (the wavelength of choice for most HeNe lasers) and beam waists of 150, 300, and 600 µm, transmitted through a triangular BK7 (n = 1.515) block and detected by a camera very close to the right side of the dielectric block. As observed, this means removing the axial dependence in Eq. (11). In this case, we obtain, in accordance with the previous theoretical calculations that appeared in the literature [6,8,25], an amplification of the lateral shift proportional to √(k w 0 ) for critical incidence, see Fig. 2a. In this approximation, the amplification prefers wider spatial distributions. The numerical calculations, obtained by using directly Eq. (6) with ζ(z) ≈ 1, are in excellent agreement with the results predicted by the analytical formula Eq. (11) for z ≈ 0. Phenomena such as angular deviations and/or oscillations cannot be seen in this case. By moving the camera away from the right side of the dielectric block, along the axial propagation direction of the transmitted beam predicted by geometric optics, an additional z-dependent lateral displacement appears. This is, for example, the case of the experimental setup in ref. [28]. This z-dependent lateral displacement is called in the literature the angular shift. This angular shift is clearly visible in the left incidence region of Fig. 2b-c. The axial dependence for the critical region mixes two effects: the angular deviations (caused by the symmetry breaking of the angular distribution) and the GH shift (caused by the additional complex phase in the Fresnel coefficient). This mixed effect, known as the composite GH shift, was recently proven by a weak measurement experiment [24]. The axial effects depend on the ratio z/z R and, consequently, for a fixed axial position of the camera, narrower spatial beams experience larger amplifications. This can be understood by observing that narrower spatial beams have wider angular distributions and are consequently more sensitive to the breaking of symmetry caused by the Fresnel reflection coefficient. This axial amplification, different from the standard amplification obtained for z ≈ 0, is clearly seen in the plots of Figs. 2b-c. In such plots, we also see new oscillation phenomena for w 0 = 150 µm. The numerical data show an excellent agreement with the analytical calculation. The axial amplification was recently confirmed by the experiment done in ref. [24]. Nevertheless, the oscillatory behavior was not detected in such an experiment because, as seen in Fig. 2b-c, the oscillatory behavior starts, for a beam waist of 150 µm, from an axial position of the camera of 25 cm and is seen for incidence angles greater than the critical one. In that experimental article, the beam waist was 170 µm, the camera was positioned at z = 20 and 25 cm and the incidence angle was not great enough to reach the right zone of the critical region. The mathematical explanation of the oscillation phenomenon comes from the presence of confluent hypergeometric functions in the transmitted intensity.
For θ 0 ≤ θ cri (the dominant part of d is the real one) and θ 0 ≥ θ cri + λ/w 0 (the dominant part of d is the imaginary one), the argument of the confluent hypergeometric functions is real and no oscillation can be seen. In the right zone of the critical region, θ cri ≤ θ 0 ≤ θ cri + λ/w 0 , depending on the value of z/z R the real and imaginary parts of d become comparable and the confluent hypergeometric functions will have a complex argument, opening the door to oscillation phenomena. For increasing values of the incidence angle, the beam reaches the Artmann zone and the angular distribution recovers its original gaussian symmetry, leading to the Artmann results. In this incidence region, the composite GH shift tends to the standard GH shift, which only depends on the geometry of the dielectric structure and on the beam wavelength; consequently, the shift is the same for different beam waists, see the right zone of the critical incidence in Fig. 2.
The amplification γ TM /γ TE = n² between TE and TM waves is of a factor 2.3 (BK7 prism) and is shown in Figs. 3a-b. The amplification between the different beam waists of 150 µm (Fig. 3c) and 600 µm (Fig. 3d) is of a factor w 600 /w 150 = 2. The fact that the axial coordinate z always appears in the analytical formula with the denominator k w 0 ² also allows one to predict when, given two different gaussian beam waists, it is possible to visualize the same pattern of oscillation. For the cases examined in our study, the axial coordinate multiplication factor is given by z 600 /z 150 = (w 600 /w 150 )² = 16. According to the results shown in Fig. 3, it seems that by increasing the beam waist we improve our experimental measurement. In reality, what we improve is the lateral displacement, by a factor w 600 /w 150 = 2. Nevertheless, what we measure is the lateral displacement of a beam with waist w(z). Thus, it is more appropriate to introduce an adimensional quantity given by the ratio between the lateral displacement and the beam waist at the axial point where the camera is located. This ratio assesses the experimental performance needed to measure the lateral displacement. For the cases examined in Fig. 3c and 3d, we have λ/w 150 and λ/w 600 respectively, clearly showing a better efficiency for a measurement done with a beam waist of 150 µm.
The analysis presented so far has been carried out by calculating the lateral displacement as a function of the incidence angle θ 0 for different axial positions of the camera. It is also interesting to calculate such a displacement as a function of the axial position z for fixed incidence angles; this is shown in Fig. 4. Approaching the critical angle from the left (see Fig. 4a
V. CONCLUSIONS
Finally, we can conclude that, in the critical region, for θ 0 < θ cri the real part of the angular distribution dominates over the imaginary one and angular deviations are the main evidence of the angular breaking of symmetry. For θ 0 > θ cri , the situation is reversed and the main contribution comes from the imaginary part, generating oscillation phenomena. By increasing the incidence angle and approaching the right border of the critical region, the real part of the angular distribution vanishes and we recover the Gaussian symmetry in the transmitted beam. In this case, we do not find any notable angular deviations. The beam practically moves along the z-axis but it is transversally displaced by the GH shift. For incidence at θ 0 = −5.4°, and different axial positions of the camera, say 20, 40, 70, 100 cm, we find, for example, the following lateral displacements: 2.6, 2.7, 2.8, 2.9 µm.
To conclude this work let us emphasize once more that the challenge of detecting deviations from the laws of geometrical optics is still a current issue of optics, containing a number of unsolved and, at the same time, interesting questions of very general significance. We have not given a rigorous mathematical elaboration of the theory but only a simplified analytical formula to calculate angular deviations and oscillation phenomena in the critical region. It is the authors' hope that this study will find many readers among theoretical and experimental physicists and specialists in related branches of optics, by helping them in future theoretical studies as well as stimulating experiments that could confirm the oscillatory Goos-Hänchen shift.
ACKNOWLEDGEMENTS.
In deep gratitude and appreciation, the authors would like to thank the referees and editor for their attentive reading, their willingness to discuss, their suggestions, and their challenging questions.
They belong to what is still a small but fortunately growing minority of reviewers: people who are able to delve deeply into the article and whose stimulating questions allow the authors to improve its scientific content. Some additional questions were asked by the editors in order to provide further clarification of certain critical points. The paper in its present form would not have come into existence without their support.
Figure 1: The geometrical set-up of the experiment on detecting oscillations. The incoming Gaussian beam propagates along the z INC axis forming an angle θ 0 (the incidence angle) with the normal to the left (air/dielectric) prism interface. Its minimal beam waist is found at the point in which the beam is refracted by the left interface. After the first refraction (ψ 0 ), the beam is reflected (ϕ 0 ) by the lower (dielectric/air) interface and, finally, refracted (θ 0 ) by the right one, reaching the camera positioned at an axial distance z CAM from the point of minimal beam waist.
Figure 3: In (a) and (b), the lateral displacement for an optical beam with w 0 = 150 µm is shown for TE and TM waves respectively. For a BK7 prism the amplification factor is 2.3. In (c) and (d), the lateral displacement is plotted for two different beam waists, 150 and 600 µm. For an axial distance amplified by (w 600 /w 150 )^2 and an incidence region reduced by w 150 /w 600 , we recover in (d) the same oscillation pattern as in (c). The numerical calculations (circles and triangles) show an excellent agreement with the analytical results (continuous and dashed lines). As expected, the axial dependence breaks down when the incidence angle approaches the Artmann zone, where the stationary method works fine.
Figure 4: Lateral displacement as a function of the axial position of the camera. In (a), the phenomenon of angular deviations as well as its amplification for incidence angles approaching the critical one is evident. In (b-f), oscillations are visible. Their amplitude is reduced when the incidence angle approaches the left border of the critical region. In (g), the symmetry of the angular distribution is completely recovered and angular deviations as well as oscillations are lost. | 2017-05-16T20:55:59.000Z | 2017-05-12T00:00:00.000 | {
"year": 2017,
"sha1": "2767022e847bae63cfbdfb71a5172832a26227e5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1705.05914",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2767022e847bae63cfbdfb71a5172832a26227e5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
213204349 | pes2o/s2orc | v3-fos-license | Renewable Energy Generation Assessment in Terms of Small-Signal Stability
The popularity and role of renewable energy in the power grid are increasing nowadays as countries are shifting to cleaner forms of energy. This brings new challenges in maintaining a secure and stable power system, as renewable energy is known to be intermittent in nature and may introduce stability issues to the grid. In this paper, a screening framework of renewable energy generation scenarios is proposed to determine which power system conditions and scenarios will make the system unstable. The scenario screening framework is based on a sensitivity analysis of the system eigenvalues with respect to the renewable energy penetration into the system. The average scheduled renewable energy output, forecasting error standard deviation, average forecasting error, and bus location of the renewable energy source were used to define a renewable energy generation scenario. Depending on the amount and variability of renewable energy, there is a possibility for a critical eigenvalue to cross the imaginary axis. The estimated eigenvalue location resulting from the penetration of variable renewable energy is computed by adding the computed eigenvalue sensitivity to the initial operating point. If any of the estimated system eigenvalues cross the imaginary axis, the power system might be unstable in this scenario, so it requires more detailed simulations and countermeasures. Renewable energy forecasting was done using the long short-term memory model, and the proposed method was simulated using the IEEE 39-bus New England test system. The results of the proposed method were verified by comparing the simulation results to the eigenanalysis solution. The obtained results have shown that the proposed method can determine whether the renewable energy generation scenario is critical in power system operation.
Introduction
As the safe global warming limit has changed from 2 to 1.5 °C above preindustrial levels [1], tightened policies and procedures from governments around the world are expected to be passed in order to eradicate CO2 emissions and to shift fully to renewable energy (RE). Because of the heightened interest in integrating RE sources such as wind and solar energy in the power system, the need for integration studies of these renewable sources also increases. Because of the intermittent nature associated with RE sources, a secure power system should be able to maintain stability even with fluctuating RE output and be able to withstand the resulting contingencies or disruptions. Security assessment is done to determine if the power system can withstand disturbances without suffering a drop in the customer service level provided by the transmission and distribution companies [2]. Dynamic security assessment (DSA) is a type of security assessment that studies the security of the transient and dynamic responses to system disturbances in different timescales [3]. With the help of an online DSA [4,5], power system planners and operators can monitor the power system security in real time, and they have the ability to act immediately in case of a critical disturbance occurring in the system. The transient energy function [6] and trajectory sensitivities [7] have both been used to analyze dynamic security assessments of the system. The difference between the two methods is that the analysis becomes more complex when using transient energy functions, especially when dealing with big differential and algebraic equation (DAE) models of power systems [7]. The main features of an online DSA system include contingency screening, time simulations for stability assessment, calculation of power transfer limits, and concurrent processing of contingencies [4]. Contingency analysis is done on a single base case scenario to determine how the system will react to any contingency event, which can be a planned or unplanned loss of a system element. The contingency screening function reduces the number of contingencies to be considered by the system operators by identifying the list of contingencies with a high likelihood of occurrence. Similarly, a scenario screening function identifies the base case scenarios that will be critical to the stability of the power system. There are different power system parameters in a base case scenario that can be analyzed, but for the purposes of this paper, scenarios will be limited to an analysis of RE generation scenarios.
Depending on the amount of RE generation forecasted for the next day, system operators will plan a day ahead which generators will be operated and at what generation output each generator is contracted to produce. During actual operations, the actual RE generation may differ from the forecasted RE output because of unforeseeable and uncontrollable external factors. The fluctuations in actual RE generation may result in a lower or higher total generated energy. This intermittency of RE can cause small disturbances to the power system because the power balance between the generated and distributed energy will not be equal. Small-signal stability refers to the ability of the power system to maintain synchronism when subjected to small disturbances [8].
In this paper, RE is considered as a small disturbance because the equations representing the power system can be linearized for the purpose of small-signal analysis [8]. In order to describe the small-signal stability of the power system for RE generation scenarios, eigenvalues of the power system must be analyzed. If an eigenvalue is found at the right-half of the complex plane, the system is unstable. Various methods were proposed to determine the trajectory of the eigenvalues as a system parameter is increased [9][10][11]. In [9], eigenvalue tracing is done by combining the invariant subspace method and the predictor-corrector scheme. An integration-based approach is used in [10] to trace critical eigenvalues and is combined with an index to determine which eigenvalues are considered critical. While in [11], the invariant subspace method is used with the projected Arnoldi method to trace the critical eigenvalues. The difference of trajectory tracing with the proposed method is that trajectory tracing involves a single parameter to be increased until a bifurcation occurs. This will not be applicable for scenario screening because each scenario is tested for its stability according to the small-signal stability rules. A more applicable approach is to determine the sensitivity of the eigenvalues. An eigenvalue sensitivity analysis was proposed in [12][13][14][15] to depict how the eigenvalues are affected by any change in the parameters of the power system. The method for determining the sensitivity of eigenvalues for sparse formulations of the power system equations has been presented in [12]. Eigenvalue sensitivity analyses, with respect to operating parameters [13] and power system parameters [14], have also been proposed in the literature. The augmented matrix using only dominant eigenvalues and their corresponding left and right eigenvectors were used in [15]. Lower-upper (LU) factorization, sparse matrices, and parallel computing techniques were used in computing the eigenvalue sensitivity of operating parameters [16]. Aside from power systems, the eigenvalue sensitivity was also used in parametric optimization. For instance, eigenvalue sensitivity was used to optimize the parameters of a doubly-fed induction generator (DFIG) wind turbine system [17,18]. Variation of different system parameters of microgrids with constant power loads was analyzed using eigenvalue sensitivity analysis in [19]. In the methods mentioned above, the screening of scenarios by analyzing the sensitivity of the eigenvalues with respect to the variation of RE has not been studied for power systems.
In daily operations, RE generation at each time period needs to be estimated based on day-ahead forecasting, so that operating points are established for security assessment. In the literature, several regression techniques have been adopted for RE forecasting. Autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) have become two of the most popular methods that can be used for linear time series analysis [20]. Aside from regression techniques, artificial intelligence (AI) techniques and hybrid methods were also proposed in the literature. For photovoltaic (PV) forecasting, the long short-term memory (LSTM) recurring neural network, which is a method under AI techniques, has become popular to implement because of its better performance compared to other conventional implementations [21]. This paper employs LSTM as the method for forecasting RE day-ahead generation because, aside from the historical PV output, historical cloud cover data are also utilized. Even though the accuracy of the forecasting techniques has improved over time, the actual generated output might still be different from the forecasted output because of the limitations of forecasting as well as unpredictable external sources affecting the generation of RE. Thus, for secure operation of the power system with RE resources, diverse RE generation scenarios need to be assessed in terms of several security issues. RE generation scenarios for examination are established for a specific time period using RE bus location, average forecasting error, forecasting error standard deviation, and average scheduled RE output. Depending on these factors, certain scenarios might be critical and result in insecure or unstable conditions. In order to keep the power system secure and stable under any circumstances, critical scenarios must be identified by power system planners to come up with countermeasures against the corresponding insecure phenomena. This paper presents a framework for screening of RE generation scenarios in terms of small-signal stability, and it is based on eigenvalue sensitivity analyses with respect to RE generation. Eigenanalysis for the initial operating point is performed first, and sensitivities of critical eigenvalues with respect to RE variation are evaluated to estimate the movement of the eigenvalues and to check whether the resulting locations violate small-signal stability criteria. If some of the estimated eigenvalue locations are not in the acceptable region, the corresponding RE generation scenario is included in the critical scenario list and should be reassessed using a detailed stability analysis. The results of the proposed method were verified by comparing it with an eigenanalysis of the newly established operation with the corresponding RE generation scenario. Instead of performing an eigenanalysis for all possible operating points, the proposed screening framework can quickly determine whether each RE generation scenario might cause insecurity and instability. By implementing the proposed method in screening RE generation scenarios, only the critical scenarios need to be analyzed in depth by system operators for countermeasure determination. 
The contributions of this paper can be summarized as follows: (i) establishment of the framework to screen RE generation scenarios in terms of small-signal stability, (ii) inclusion of RE generation parameters such as average forecasting error, forecasting error standard deviation, and average scheduled RE output in the computation of the RE variability factor, (iii) application of eigenvalue sensitivity with respect to RE generation to determine small-signal stability, and (iv) formulation of the uncertainty of the real part of critical eigenvalues with respect to RE generation.
The paper is organized as follows: In Section 2, the mathematical background and the formulation of the system model, system state matrix and the eigenvalue sensitivity concept are presented. In Section 3, the proposed scenario screening framework based on the eigenvalue sensitivity analysis is presented. In Section 4, a day-ahead RE forecast is made using a LSTM model for RE generation scenarios. In the same section, the proposed method is applied to the New England Test System (IEEE-39 bus test system) to demonstrate its capability of determining the small-signal stability of the system for the five given scenarios. In Section 5, the conclusions of the study are presented.
Mathematical Background and System Formulation
This section presents the formulation and mathematical background of the system model, system state matrix, and the eigenvalue sensitivity concept used in this paper. The descriptions of the variables and parameters in the formulation of the system model are shown in Table 1. The units of them are also given in Table 1. The test system used for this analysis was the modified New England 39 bus test-system including RE generation, the one-line diagram of which is shown in Figure 1. There are 39 buses and 10 synchronous generators in the New England test system. The five locations for the RE source that were considered in the simulations are also illustrated in Figure 1. The original case power flow without RE generation and dynamic data are provided in [22,23]. The state equations for each state variable are discussed in Section 2.1. The state equations are also called the differential equations of the system. The active and reactive power network equations for each bus are discussed in Section 2.2. The power network equations are also called the algebraic equations of the system. The formulation of the differential algebraic equations (DAEs) model of the system is shown in Section 2.3. From the DAE model, the system state matrix and the concept of eigenvalue sensitivity can be derived. This will serve as the foundation for the scenario screening framework that will be presented in Section 3.
State Variables
In this paper, each synchronous nonreference generator was represented by 10 state variables. The 10 state variables were rotor angle (δ i ), machine frequency (ω i ), quadrature axis emf (E qi ), direct axis emf (E di ), field winding voltage (E f di ), AVR output (V ri ), exciter feedback (R f i ), prime mover mechanical power output (P mi ), and generator steam valve opening (µ i ). For reference generators, there will only be 9 state equations. There will be no state equation for the rotor angle of the reference generator since the rotor angles of the nonreference generators are referenced to the rotor angle of the reference generator.
The differential equations for the rotor angle (δ i ), machine frequency (ω i ), quadrature axis emf (E qi ), and direct axis emf (E di ) of the synchronous generators are written below [22]. The rotor angle of the mth generator is chosen as the reference angle of the system. In this paper, the generator in bus 39 was chosen as the reference generator. . . .
In the two-axis model shown in Equations (1)-(4), the stator transients were ignored. The machine direct axis (I di ) and quadrature axis currents (I qi ) are expressed [22] in the succeeding equations. V, θ, and δ in the following equations refer to the bus voltage magnitude, bus voltage angle, and rotor angle respectively. The nonreference generator (i = 1, . . . , m − 1) is as follows: The reference generator (i = m) is as follows: For the excitation control system of the synchronous generator, IEEE type DC-1 excitation system was used. The model for this excitation system [22] is shown in Equations (9)-(11). The following equations are for the field winding voltage (E f di ), AVR output (V ri ), and exciter feedback (R f i ): . .
The differential equation for the prime mover and speed governor model are shown in Equations (12) and (13) [22]. P mi refers to the prime mover mechanical power output, while µ i is the mathematical expression for the physical representation of the generator steam valve opening. To compensate for any governor speed deviation from the reference speed (ω re f ), there is a corresponding change in µ i depending on the speed difference and the droop constant (R i ). The droop constant represents the speed governor inherent speed-droop characteristics [22]. A typical value for the droop constant is 5% droop. .
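Equations (12) and (13) themselves are not reproduced in this extract. As an illustration only, the sketch below shows the steady-state droop relation described in the text (the valve opening changes in proportion to the speed deviation divided by the droop constant R); the function name and the exact algebraic form are assumptions, not the paper's equations.

```python
def governor_droop_response(omega, omega_ref, mu_ref, R=0.05):
    """Steady-state change in steam-valve opening for a droop governor.

    A minimal sketch consistent with the description above: the valve
    opening mu decreases in proportion to the per-unit speed deviation
    divided by the droop constant R (typically 5%).  The dynamic
    Equations (12)-(13) of the paper are not reproduced here.
    """
    return mu_ref - (omega - omega_ref) / (R * omega_ref)

# Example: a 0.5% over-speed with 5% droop reduces the valve opening by 0.1 pu.
print(governor_droop_response(omega=1.005, omega_ref=1.0, mu_ref=0.8))  # 0.7
```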
Algebraic Variables
Consider a system with n buses, the network power equations for each bus of the system are shown as: In this paper, the test system used was the New England test system and was composed of 39 buses. Since there is one algebraic equation each for the active power and reactive power, there will be 78 power network equations for this test system. By following this, a test system with n buses is expected to have 2n power network equations. P gi and Q gi refer to the active power output and reactive power output of the generator at bus i. P gi and Q gi are zero if there is no generator in bus i. Otherwise, the equations for P gi and Q gi are shown below [22]. In (14), P REi stands for RE generation at bus i. In this paper, P REi was considered as a given parameter at a specific time period. Similar to the notation in the previous subsection, the variables V, θ, and δ refer to the bus voltage magnitude, bus voltage angle, and rotor angle respectively.
The nonreference generator (i = 1, . . . , m − 1) is as follows: The reference generator (i = m) is as follows: P ti and Q ti refer to the total network active power and reactive power injections at the ith bus respectively. These equations are similar to the equations used in the power flow analysis [24]. Y ij and γ ij in the following equations refer to the bus admittance magnitude and angle respectively.
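Since the explicit forms of the injection equations are not reproduced in this extract, the sketch below uses the standard power-flow injection formulas that the text refers to; the function and matrix names are illustrative, and the sign conventions of a specific implementation may differ.

```python
import numpy as np

def bus_power_injections(V, theta, Ybus):
    """Active/reactive power injections P_t, Q_t at every bus.

    Standard power-flow form consistent with the text:
        P_ti = sum_j V_i V_j |Y_ij| cos(theta_i - theta_j - gamma_ij)
        Q_ti = sum_j V_i V_j |Y_ij| sin(theta_i - theta_j - gamma_ij)
    V and theta are per-unit magnitudes and angles (rad); Ybus is complex.
    """
    Ymag, gamma = np.abs(Ybus), np.angle(Ybus)
    dtheta = theta[:, None] - theta[None, :] - gamma
    P = V * ((Ymag * np.cos(dtheta)) @ V)
    Q = V * ((Ymag * np.sin(dtheta)) @ V)
    return P, Q

# Tiny 2-bus example with an illustrative admittance matrix (one line, no shunts).
Ybus = np.array([[10 - 20j, -10 + 20j], [-10 + 20j, 10 - 20j]])
P, Q = bus_power_injections(np.array([1.0, 0.98]), np.array([0.0, -0.02]), Ybus)
print(P, Q)
```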
The active power load model and the reactive power load model are given as shown in Equations (22) and (23) respectively [22]. P lo,i , Q lo,i , and V o,i refer to the initial load active power, initial load reactive power, and initial load voltage respectively. KP, KQ, and K f req are load factors.
By inspecting the active power and reactive power network equations, it can be seen that the equations are in terms of V and θ. Because of this, V and θ are called the algebraic variables of the system.
System State Matrix and Eigenvalue Sensitivity
The general form of the differential and algebraic equations (DAEs) describing the power system is shown as [22]: where f and g refer to the set of differential and algebraic equations respectively. f and g can be represented in terms of the state variables (x), algebraic variables (y), and the control parameter (a). From Sections 2.1 and 2.2, x and y are defined as the set of differential and algebraic variables. In this paper, the control parameter is the RE variation in the system.
The solution of the DAE model is the equilibrium point of the system. By linearizing the DAE model around the equilibrium point, the resulting equations are shown in state matrix form in Equation (28). The matrix elements f x , f y , g x , and g y are called the Jacobian matrices of the system. The symbol ∆ represents parameter deviation.
By assuming that the Jacobian matrix g y is nonsingular, the variable ∆y can be eliminated in favor of ∆x. The resulting equation is shown in Equation (30). From Equation (30), the equation for the system state matrix A s is derived.
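A minimal numerical sketch of this reduction, assuming small hypothetical Jacobian blocks, is given below; it computes A_s = f_x − f_y g_y⁻¹ g_x and its eigenvalues.

```python
import numpy as np

def system_state_matrix(fx, fy, gx, gy):
    """A_s = f_x - f_y * inv(g_y) * g_x, obtained by eliminating Δy."""
    return fx - fy @ np.linalg.solve(gy, gx)

# Illustrative (hypothetical) 2-state / 2-algebraic-variable example.
fx = np.array([[0.0, 1.0], [-2.0, -0.5]])
fy = np.array([[0.0, 0.0], [1.0, 0.0]])
gx = np.array([[1.0, 0.0], [0.0, 1.0]])
gy = np.array([[2.0, 0.0], [0.0, 2.0]])

A_s = system_state_matrix(fx, fy, gx, gy)
eigvals = np.linalg.eigvals(A_s)
print(A_s)
print("stable:", np.all(eigvals.real < 0))
```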
The sensitivity of the eigenvalue λ with respect to system parameter α is given by Equation (32) [8]. Eigenvalue sensitivity depicts how the system eigenvalues are affected by a change in a system parameter.
In Equation (32), it is shown that the eigenvalues and eigenvectors of the system state matrix A s are used to determine the eigenvalue sensitivity of the system. ψ pertains to the left eigenvector, while φ pertains to the right eigenvector of the system state matrix A s .
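The following sketch evaluates the standard sensitivity formula of Equation (32) numerically; the derivative ∂A_s/∂α is approximated here by a finite difference, whereas the paper derives it analytically, so treat the helper as illustrative.

```python
import numpy as np
from scipy.linalg import eig

def eigenvalue_sensitivities(A_func, alpha, h=1e-6):
    """First-order sensitivities dλ_i/dα for the state matrix A_s(α).

    Implements the standard formula of Equation (32):
        dλ_i/dα = ψ_i (dA_s/dα) φ_i / (ψ_i φ_i),
    where ψ_i and φ_i are the left and right eigenvectors of λ_i.
    dA_s/dα is approximated by a central finite difference.
    """
    lam, vl, vr = eig(A_func(alpha), left=True, right=True)
    dA = (A_func(alpha + h) - A_func(alpha - h)) / (2.0 * h)
    sens = np.empty_like(lam)
    for i in range(lam.size):
        p = vl[:, i].conj()      # row left eigenvector: p @ A = λ_i p
        q = vr[:, i]             # right eigenvector:    A @ q = λ_i q
        sens[i] = (p @ dA @ q) / (p @ q)
    return lam, sens
```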
Renewable Energy (RE) Generation Scenario Screening Framework Based on Eigenvalue Sensitivity
The overview of the proposed scenario screening framework is shown in Figure 2. The generation dispatch was scheduled based on the day-ahead forecast of RE. The initial eigenanalysis was then analyzed according to the scheduled dispatch. Based on the initial eigenanalysis, the eigenvalues with the lowest damping ratios and eigenvalues with the largest real parts were considered as critical eigenvalues. The critical eigenvalues were monitored closely because they were the most probable candidates to cross the imaginary axis first. The eigenvalue sensitivity with respect to the renewable energy was simulated based on the eigenvalue and eigenvector information from the initial eigenanalysis. This is discussed further in Section 3.1.
RE generation scenarios were defined by the RE location, average forecasting error, forecasting error standard deviation, and average forecasted RE output. The computation of the ∆P RE input was based on the RE generation scenario variables. The formulations for ∆P RE and the RE generation scenario variables are shown in Section 3.2.
The estimated eigenvalues were calculated by adding the eigenvalue sensitivity information to the initial critical eigenvalues. If an estimated eigenvalue was found in the right complex plane, then the system might be unstable in this RE generation scenario, so it requires more detailed simulation and countermeasure determination. Likewise, if all eigenvalues were found in the left complex plane, then the system is said to be stable in this RE generation scenario.
The formulation for calculating the uncertainty of the real part of critical eigenvalues is discussed in Section 3.3. Additional discussion and a detailed flowchart are presented in Section 3.4.
Eigenvalue Sensitivity with Respect to Variation of RE
The sensitivity of the eigenvalues with respect to RE was calculated because it could determine the effect of the variable RE to the eigenvalues of the power system. The eigenanalysis of the base case scenario was evaluated first to determine the initial eigenvalues and eigenvectors of the system. The eigenvalues and the eigenvectors were used to determine the sensitivity of eigenvalues. From Equation (32), the sensitivity of the critical eigenvalue λ c with respect to RE generation P RE can be derived. The resulting equation is shown in Equation (33).
To determine the partial derivative of the system state matrix A s with respect to P RE , the derivative of the system state matrix with respect to V and δ were used in this method. V refers to the voltage magnitude and δ refers to the phase angle.
When connected to the grid, the added RE acts to decrease the active power being supplied by the generators to the loads. P RE is added in the Jacobian matrix used for power flow analysis. ∂|V|/∂P RE and ∂δ/∂P RE can be derived from the inverse of the Jacobian matrix.
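A minimal sketch of this step is given below, assuming the usual Newton power-flow ordering of the mismatch and state vectors; the argument names and the ordering are assumptions.

```python
import numpy as np

def voltage_angle_sensitivities(J, re_bus_row, n_angles):
    """Sensitivities dδ/dP_RE and d|V|/dP_RE from the power-flow Jacobian.

    A unit active-power injection at the RE bus corresponds to a unit
    entry in the mismatch vector; the sensitivities are the matching
    column of the inverse Jacobian, obtained here with a linear solve.
    The ordering [dP; dQ] vs. [dδ; d|V|] follows the usual Newton
    power-flow convention and may differ in a specific implementation.
    """
    rhs = np.zeros(J.shape[0])
    rhs[re_bus_row] = 1.0                  # +1 pu injection at the RE bus row
    sens = np.linalg.solve(J, rhs)
    ddelta_dPre = sens[:n_angles]          # bus-angle sensitivities
    dV_dPre = sens[n_angles:]              # voltage-magnitude sensitivities
    return ddelta_dPre, dV_dPre
```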
By taking the derivative of the system state matrix with respect to V, Equation (31) is changed to the following equation: In [25], a simplified expression for Likewise, an analysis is done to get the derivative of the system state matrix A s with respect to δ.
RE Generation Scenario
A RE generation scenario is a set of conditions that describes how much the actual RE output deviates from the forecasted RE output. In some cases, the actual output differs from the forecasted output because of the intermittency of RE. Obviously, several RE generation scenarios need to be checked in operation. The framework of this paper can quickly evaluate the impact of each scenario, in terms of small-signal stability, to screen critical RE generation scenarios.
Each RE generation scenario is defined by RE location, average forecasting error (mean absolute percent error, MAPE), forecasting error standard deviation (σ err ), and average forecasted RE output (P f or RE ). The location of the RE source is included as part of the scenario because it also affects the generator loading and the exchange of power within the grid.
PE k,t refers to the percent error between the forecasted (P f or RE ) and actual (P act RE ) RE output for time slot t in the kth RE generation scenario.
The formulations for solving the MAPE, σ err , and P f or RE for each RE generation scenario are shown in the following equations.
The corresponding ∆P RE for each RE generation scenario can be evaluated using Equation (44). When setting K to 3, 99.7% of all probable forecasting error margins will be covered in the analysis.
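The formulas for MAPE, σ_err, and the average forecasted output are not reproduced in this extract, and the exact form of Equation (44) for ∆P RE is not shown either. The sketch below computes the scenario descriptors from forecast/actual series and uses an assumed form ∆P_RE = K·σ_err·P̄_for/100; that last expression is only a guess consistent with the surrounding text, not the paper's formula.

```python
import numpy as np

def scenario_variables(p_forecast, p_actual, K=3):
    """Scenario descriptors and RE variation (a sketch; the paper's equations are not shown).

    p_forecast, p_actual: arrays of forecasted and actual RE output per time slot.
    Returns MAPE (%), the forecasting-error standard deviation (%), the average
    forecasted output, and an assumed ΔP_RE = K·σ_err·P̄_for/100.
    """
    pe = 100.0 * (p_forecast - p_actual) / p_forecast     # percent error per slot
    mape = np.mean(np.abs(pe))
    sigma_err = np.std(pe)
    p_for_avg = np.mean(p_forecast)
    delta_p_re = K * sigma_err / 100.0 * p_for_avg        # assumed form of Eq. (44)
    return mape, sigma_err, p_for_avg, delta_p_re
```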
The estimated critical eigenvalues can be calculated using Equation (45). The calculated eigenvalue sensitivity ∂λ c /∂P RE from Equation (33) is multiplied by the change in the RE generation, ∆P RE . This term is added to the initial critical eigenvalue λ c to estimate the movement of the eigenvalue.
Using Equation (45), all critical eigenvalues are estimated with respect to ∆P RE . Then, the framework checks whether there are any estimated eigenvalues in the right side of the complex plane. If there are eigenvalues in the right side of the complex plane, then the system might be unstable in this RE generation scenario, and it requires more detailed analysis in terms of small-signal stability.
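Equation (45) as described, together with the right-half-plane check, can be sketched as follows; the numeric values in the example are purely illustrative and are not taken from the paper's tables.

```python
import numpy as np

def screen_scenario(critical_eigs, sensitivities, delta_p_re):
    """Estimate eigenvalue movement per Equation (45) and flag possible instability.

    critical_eigs: initial critical eigenvalues λ_c (complex array)
    sensitivities: dλ_c/dP_RE for each critical eigenvalue (complex array)
    delta_p_re:    RE variation ΔP_RE for the scenario
    Returns the estimated eigenvalues and True if the scenario is critical
    (any estimated eigenvalue crosses into the right half of the complex plane).
    """
    estimated = critical_eigs + sensitivities * delta_p_re
    critical_scenario = bool(np.any(estimated.real > 0.0))
    return estimated, critical_scenario

# Illustrative values only (not taken from the paper's tables).
lam_c = np.array([-0.05 + 3.2j, -0.30 + 7.1j])
dlam = np.array([0.02 + 0.00j, 0.01 + 0.00j])
print(screen_scenario(lam_c, dlam, delta_p_re=4.0))
```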
Uncertainty of the Real Part of Critical Eigenvalues
In [26], a framework for quantifying the uncertainty of the transmission reliability margin was proposed with respect to uncertain parameters. In this paper, a similar formulation was employed to evaluate the uncertainty of the real part of the critical eigenvalues with respect to the RE generation scenarios. The formulation for the uncertainty U c of the real part of a critical eigenvalue, λ c , is shown in Equation (46). In the evaluation, the sensitivity of the real part with respect to renewable energy and the variance of the renewable energy are both used.
where l is the total number of considered RE generation scenarios.
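Equation (46) itself is not reproduced in this extract; the sketch below assumes a root-sum-of-squares combination of the real-part sensitivities and the RE variances over the l scenarios, in analogy with the reliability-margin formulation cited in [26]. Treat this form as an assumption to be checked against the paper.

```python
import numpy as np

def real_part_uncertainty(re_sensitivities, re_variances):
    """Uncertainty of the real part of a critical eigenvalue (assumed form of Eq. (46)).

    re_sensitivities: d(Re λ_c)/dP_RE for each of the l RE generation scenarios
    re_variances:     variance of the RE generation in each scenario
    Assumes a root-sum-of-squares combination, analogous to the transmission
    reliability margin formulation cited in the text.
    """
    s = np.asarray(re_sensitivities, dtype=float)
    v = np.asarray(re_variances, dtype=float)
    return float(np.sqrt(np.sum(s**2 * v)))
```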
Additional Discussion on the Proposed Framework
As shown in Equation (45), the sensitivity of the eigenvalues is used to calculate the estimated eigenvalues. Figure 3 depicts how eigenvalue sensitivity can be used to screen RE generation scenarios. The blue circle in Figure 3 represents the critical eigenvalues of the system. The critical eigenvalues can either be eigenvalues with the lowest damping ratios or eigenvalues with the largest real parts. The black dashed lines represent the damping ratio margin that will serve as the criteria to determine which eigenvalues are considered as critical. Depending on the variability of RE in the system for a given scenario, the system eigenvalues may cross the imaginary axis and move to the right half of the complex plane. The area within the red circle represents the possible location of the eigenvalues as a result of the added variable RE. If the eigenvalues are found to be at the right side of the complex plane or if they cross the imaginary axis, the system is unstable for that scenario. A flowchart is presented in Figure 4 to summarize the process of screening RE generation scenarios.
Simulation Results and Analysis
As in Figure 1, the modified New England system was adopted in the simulation. The eigenvalues of the New England Test System without any RE penetration are shown in Figure 5 below. The eigenvalue analysis shows that the system had 89 eigenvalues: 41 of which were real eigenvalues and the other 48 were complex eigenvalues (24 complex eigenvalue pairs). Eighty-nine eigenvalues were expected because each nonreference generator was represented by 9 state equations, and the reference generator was represented by 8 state equations.
In Figure 5a, all eigenvalues are shown. The eigenvalues were all in the left-hand side of the imaginary axis. Hence, in this case, the system was stable. The system will be unstable if any of the system eigenvalues cross the imaginary axis and move to the right-hand side of the complex plane. Some of the closest eigenvalues near the origin are shown in Figure 5b. These eigenvalues were considered as critical eigenvalues and may have the highest probability of crossing the imaginary axis first, depending on the amount of RE added to the system.
The type of RE considered in the simulation for this paper was solar energy. The long short-term memory (LSTM) model was used in this paper to predict the photovoltaic (PV) cell system output based on historical one-year PV output data and cloud cover data. The standard deviation of the error between the actual and predicted PV output (σ err), average forecasted PV output (P for RE), and average forecasting error (MAPE) were used to define the scenario conditions for the proposed method. A sample one-week snapshot of the training data is shown in Figure 6. The total PV output per day is also shown in Figure 6. In Figure 7, a sample prediction and actual curve of the total PV plant for a sunny day and a cloudy day are shown. The amount of cloud cover affects PV generation output, and this makes it harder for system operators to forecast PV accurately [27].
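The paper does not specify the LSTM architecture or hyperparameters in this extract; the following Keras sketch illustrates one plausible setup for day-ahead PV forecasting from hourly PV-output and cloud-cover histories. The window lengths, layer sizes, and training settings are all assumptions.

```python
import numpy as np
import tensorflow as tf

# Illustrative shapes: 7 days of hourly history (168 steps) of [PV output, cloud cover]
# used to predict the next 24 hourly PV values.  Layer sizes are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(168, 2)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(24),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for one year of historical PV and cloud-cover records.
X = np.random.rand(300, 168, 2).astype("float32")
y = np.random.rand(300, 24).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
day_ahead_pv = model.predict(X[:1])   # 24-hour forecast for one sample window
```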
In the first two scenarios, the PV resource was placed first on bus 20 and then changed to bus 8 to compare results. The same RE generation conditions were assumed for the first two scenarios. In the third scenario, the PV resource was moved to bus 4 with higher variability of PV assumed in the system. The results of the proposed method were verified by comparing it with the eigenanalysis results.
In this paper, the least damping ratio criterion, used to classify which eigenvalues are considered critical, was 0.10 or 10%. The damping ratio of each eigenvalue (λ = σ + jω) is given by Equation (47) [8]. For the largest real parts criterion, eigenvalues with real part greater than −0.10 were considered as critical eigenvalues.
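A small sketch of Equation (47) and of the two criticality criteria described above (damping ratio below 0.10 or real part above −0.10) is given below; the eigenvalues in the example are illustrative.

```python
import numpy as np

def critical_eigenvalues(eigs, zeta_min=0.10, sigma_max=-0.10):
    """Select critical eigenvalues using the criteria described in the text.

    Damping ratio (Equation (47)): zeta = -sigma / sqrt(sigma^2 + omega^2).
    An eigenvalue is critical if zeta < zeta_min or if its real part exceeds sigma_max.
    """
    eigs = np.asarray(eigs, dtype=complex)
    sigma, omega = eigs.real, eigs.imag
    zeta = -sigma / np.sqrt(sigma**2 + omega**2)
    mask = (zeta < zeta_min) | (sigma > sigma_max)
    return eigs[mask], zeta

# Illustrative eigenvalues: only the first violates the damping-ratio criterion.
eigs = np.array([-0.05 + 3.0j, -1.2 + 8.0j, -0.5 + 1.0j])
print(critical_eigenvalues(eigs))
```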
The results of the simulations for the three scenarios are summarized in Tables 2-4. The results shown in the initial critical eigenvalues column are based on the scheduled dispatch. If an eigenvalue had a damping ratio of less than 0.10 or 10%, or if the real part of the eigenvalue was greater than −0.1, the eigenvalue was considered as a critical eigenvalue. The corresponding damping ratio for each complex eigenvalue is shown in the next column. Critical eigenvalues were given special importance in the analysis because they were the most probable candidates to cross the imaginary axis first. If any of these eigenvalues crossed the imaginary axis to the right-hand side of the complex plane, then the system might be unstable in this scenario. The estimated eigenvalues column shows the result of the proposed method by adding the eigenvalue sensitivity with respect to PV variability to the initial critical eigenvalues. These results were then verified with the results of the eigenanalysis with the corresponding PV generation. In the results shown in Table 2, the assumed condition was that the scheduled dispatch included a mean PV output of 5.41 pu in bus 20. The values shown in the estimated eigenvalues column were evaluated using the proposed method and without solving for the eigenanalysis solution. Based on the results, the proposed method showed that the eigenvalues were still in the left-hand side of the complex plane. As such, the system was considered stable in this scenario. To verify this result, PV output was then added to the system, and an eigenanalysis was done to determine the eigenvalues of the system in this case. The results are shown in the eigenanalysis verification column. The eigenanalysis result also showed that the eigenvalues were in the left-hand side of the complex plane. Hence, Scenario 1 was considered as a stable scenario as both estimated eigenvalues and the eigenanalysis verification showed the same result.
To test if the method will be applicable in other locations in the system, the same analysis was done with the same scenario conditions but with a different PV location. In the simulation results shown in Table 3, the same scenario conditions were used, but the PV location was changed to bus 8. It showed that the results of the proposed method agreed with the results of the eigenanalysis verification for this scenario as well. The system was shown to be stable for the assumed conditions in this scenario. In the last scenario, the PV bus was changed to bus 4, but with a higher scheduled PV output mean and higher PV output variability. The scheduled dispatch included a mean PV output of 5.65 pu. The results for the analysis done with this scenario are shown in Table 4. For increased variability of the PV output, the proposed method showed that the system was stable in this scenario since all eigenvalues were in the left-hand side of the complex plane. The results agreed with the eigenanalysis verification in this scenario. In the evaluation of the uncertainty of the real part of the critical eigenvalues, two (2) additional scenarios were considered, as described below. By considering the five scenarios, the uncertainty of the real part of the critical eigenvalues can be evaluated using Equation (46). The five largest and smallest real parts of the critical eigenvalues for each scenario are tabulated in Table 5. The uncertainty for the real part of each critical eigenvalue is shown in the last column of the table. In Scenario 4, the RE generation scenario was similar to Scenario 3, but the PV location was moved from bus 4 to bus 29. After evaluating the sensitivity of the critical eigenvalues with respect to PV generation, the proposed scenario screening framework determined that Scenario 4 was a stable scenario as all of the estimated critical eigenvalues were in the left complex plane.
While in Scenario 5, the scheduled PV generation was higher compared to the other scenarios, and the PV location was moved to bus 39. After applying the proposed method, it was determined that there was an estimated critical eigenvalue in the right complex plane. If an eigenvalue is found in the right complex plane, the system might be unstable in this scenario. The results of both scenarios agreed with the eigenanalysis verification. The system eigenvalues of the critical scenario are shown in the left image of Figure 8. The eigenvalues near the imaginary axis are shown in the right image of Figure 8 for easier visualization. It can be seen from this image that a complex eigenvalue pair crossed the imaginary axis, therefore making the scenario unstable and critical.
The significance of the uncertainty results showed that, by considering the variation in the PV output in any of the considered PV locations and the sensitivity of the eigenvalues, the real part of the critical eigenvalues varied by the uncertainty value. This is a measure of how the critical eigenvalues were affected on a system-level perspective.
The results of the proposed scenario screening framework and their run times are tabulated in Table 6. The simulations were done on a computer with a 3.30 GHz Intel processor and 8 GB RAM. All scenarios, except Scenario 5, were stable scenarios since the estimated critical eigenvalues in these scenarios were found in the left complex plane. Since there was a critical eigenvalue that crossed the imaginary axis in Scenario 5, the system might be unstable in this scenario, according to small-signal stability rules, and it requires further detailed simulations. The run times of the proposed method were similar to those of the analytical methods using the original New England test system in [16]. However, it should be noted that the proposed framework had a different applicability, which was to quickly screen critical RE generation scenarios in terms of small-signal stability, and not only to calculate eigenvalue sensitivity.
For critical scenarios determined by the proposed framework, countermeasure determination was needed with several control parameters such as active power injection with energy storage systems (ESSs) and generation re-dispatch. As in [28,29], the ESS capability of local energy supplying systems including microgrids could increase the operational efficiency by reducing the uncertainty of RE. In addition, it could improve system stability, which might be threatened by RE intermittence, if the local systems with ESS followed the signals to meet the required control amount as ancillary service providers. As in [30,31], the concept of microgrids also includes the role of energy hub and energy router. If power injections from those energy hubs are properly scheduled to minimize the negative impacts of RE intermittence on several security issues, system security can be enhanced as well. The proposed method of this paper can be used to quickly filter alarming situations in terms of small-signal stability, so that adequate control signals for the required power injection can be determined.
Conclusions
This paper presents the framework for screening RE generation scenarios in terms of small-signal stability. Instead of performing eigenanalyses for all possible operating points, the proposed screening framework adopted eigenvalue sensitivity with respect to renewable generation and checked if the system might violate the small-signal stability rules due to RE fluctuation. The case studies using the proposed method showed that it could quickly determine whether there might be the possibility of instability for several RE scenario conditions such as different bus locations and different RE variability levels. For all the scenarios in the simulation, the proposed method showed that the results agreed with the eigenanalysis verification with the corresponding operating points. The proposed method was very helpful in finding critical scenarios that may possibly lead to severe conditions. By using the information from the method, a more detailed analysis may be done in terms of small-signal stability, especially for unstable scenarios. Thus, the method can be used in system planning and operational planning stages to evaluate the effect of different RE generation scenarios and to quickly filter critical ones. As a result, system engineers could come up with adequate countermeasures against those cases to improve the grid reliability and to mitigate the security risks. | 2019-12-19T09:16:13.956Z | 2019-12-11T00:00:00.000 | {
"year": 2019,
"sha1": "959f38b17fd80036fa06683479b287cfb38c8b67",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/11/24/7079/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ba5fc029ee1c62bb202307dc66e168d984c2ab86",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
56230285 | pes2o/s2orc | v3-fos-license | THINK-PAIR-SQUARE LEARNING : IMPROVING STUDENT ’ S COLLABORATIVE SKILLS AND COGNITIVE LEARNING OUTCOME ON ANIMAL DIVERSITY COURSE
Empowering collaborative skills and optimizing learning outcomes are essential goals in every course. The aim of this study was to determine the effect of the Think-Pair-Square (TPS) learning model on students' collaborative skills and their cognitive learning outcomes. This study was a Lesson-Study-based Classroom Action Research (CAR) carried out in two cycles. The subjects of this study consisted of 32 students who took the Animal Diversity course. The CAR consisted of four phases, i.e. planning, action, observation, and reflection. In the action phase, LS was conducted and consisted of Plan, Do, and See. The instruments used were an LS observation sheet, a collaborative skills observation sheet, and a cognitive test. The observation and test results of both cycles were calculated and compared with each other. There were improvements in both the students' collaborative skills and cognitive learning outcomes, as high as 14% and 7.56, respectively. Therefore, the TPS model can strengthen students' collaborative skills and cognitive learning outcomes.
INTRODUCTION
Education is a process that facilitates a person or group of people to gain knowledge, skills, and attitudes. Education can be managed in formal or informal settings. Formal education is divided into several levels, namely early childhood education and kindergarten, elementary school, secondary school, and higher education. Each level of formal education can be experienced in various educational institutions, both public and private.
The purpose of various educational institutions in carrying out the process of education is to develop the nation's intellectual life. By achieving this goal, the community can build a nation with good morals through education. To meet these objectives, the learning process not only needs to emphasize concept understanding, which is reflected in learning outcomes (Fauzi, 2013; Sukmawati, Ramadani, Fauzi, & Corebima, 2015), but it also should ensure the empowerment of skills needed in the 21st century era, such as metacognition (Ramadani, Fauzi, Sukmawati, & Corebima, 2015), critical thinking skills, and collaborative skills (Ladd et al., 2014).
Collaboration is one form of social interaction in society. Thus, the skills needed for this interaction must be possessed by students by the time they graduate (Huang et al., 2010; Ouellet, Sabbagh, Bergeron, Mayer, & St-Onge, 2016). Consequently, efforts to empower collaborative skills are essential. One learning condition that supports these efforts is setting the students to face communal problems.
However, based on the observation of B class students in the 2016 academic year of the State University of Malang in the Animal Biodiversity course on September 30, 2017, the students' ability to work in their teams was relatively limited. The students solved the given problem without any discussion. This evidence indicated that the students' collaborative skills had not yet been empowered. These skills belong to the social behaviors of students that are important to develop for their social life. Thus, it is necessary to design learning that develops these skills by observing and improving the learning process itself.
Problems that arise in the learning process, including problems in empowering important skills, are conditions that must be solved by educators. The effort to overcome this kind of learning problem is aimed at improving the learning process. Various approaches can be used as solutions to such problems. Various solutions have been reported to solve learning problems in biology learning, such as the development of learning media or learning sources (Fauzi, 2017; Widiansyah, Indriwati, Munzil, & Fauzi, 2018), the use of model organisms (Fauzi & Corebima, 2016c, 2016a, 2016b), the application of research activities in the learning process (Fauzi, Corebima, & Zubaidah, 2016; Fauzi & Ramadani, 2017), as well as the application of cooperative learning (Fauzi, 2013; Ramadani et al., 2015).
Cooperative learning is a learning model that groups students with the aim of creating learning conditions that effectively facilitate the improvement of social skills (Lavasani, Afzali, & Afzali, 2011; Lie, 2005; Strebe, 2013; Suprijono, 2009). The advantage of implementing cooperative learning is that it gives students opportunities to express and discuss the learning material. Cooperative learning is a form of learning that covers all types of group learning. Through cooperative learning, the teacher assigns tasks and questions and provides materials and information designed to help students solve the problem.
The Think-Pair-Square (TPS) learning model is a modification of the Think-Pair-Share model developed by Spencer Kagan (Fisher & Frey, 2014; Lapp & Moss, 2012; Strebe, 2013). TPS provides students the opportunity to work on their own and to work with others. The syntax of TPS is basically similar to that of Think-Pair-Share. First, the students think for themselves. Second, the students discuss with their partner. Finally, the students form a group that consists of four members to solve the existing problems. Because it enables problem-solving activities during the learning process, TPS can likely facilitate the improvement of students' collaborative skills. Some previous studies used this cooperative learning model as an alternative solution to solve problems in the learning process (Erra, Portnova, & Scanniello, 2010; Erra & Scanniello, 2011; Karyawati, Murda, & Widiana, 2014; Magsino, 2014; Scanniello & Erra, 2014; Tahueyo, Martawijaya, & Azis, 2013). Besides collaborative skills, learning outcomes are one of the goals to be achieved in the learning process (Buku, Mite, Fauzi, Widiansyah, & Anugerah, 2015; Fauzi, 2013; Ramadani et al., 2015). The results achieved by the students give information about the position of their academic success compared to others. Learning outcomes can be measured through tests that are often known as learning result tests. Moreover, the results of such tests can reveal the quality of the learning.
Previous reports have described the potential of implementing TPS in classroom learning. Several studies reported that implementing TPS can improve students' comprehension of learning material (Bennett, 2012; Hermiati, 2017). Other reports stated that applying this learning model may improve critical thinking skills (Sumaryati & Sumarmo, 2013), creative thinking skills (Utami, 2014), communication skills (Talat & Chaudhry, 2014), and learning outcomes (Isharyadi, 2015; Januartini, Agustini, & Sindu, 2016; Karyawati et al., 2014; Tahueyo et al., 2013), and that it can also affect students' participation (Zainollah, 2014) and motivation (Januartini et al., 2016) during the learning process. However, no report has described the application of TPS as a solution for improving learning outcomes together with students' collaborative skills, even though the stages of this model facilitate students in empowering their collaborative skills.
Based on the problems found in the Animal Biodiversity class and on the potential of the TPS model for empowering learning outcomes and collaborative skills, classroom action research (CAR) was conducted in the B class that followed the Animal Biodiversity course. Moreover, several recent CAR reports have indicated that lesson study (LS)-based CAR implementation can make the learning process more optimal (Buku et al., 2015; Mustofa et al., 2016). Therefore, in order to improve the learning process, lesson study was also conducted within this CAR.
This study is a Lesson Study-based Classroom Action Research (LS-based CAR).
The research was conducted in the Department of Biology Education, the State University of Malang, in the Animal Biodiversity course. The research subjects were the B class students of the 2016 academic year. This CAR consisted of two cycles; the first cycle consisted of three meetings, while the second cycle consisted of four meetings. The course material taught in cycle 1 was "Low Chordata", while in cycle 2 it was "Mammalia".
Each CAR cycle consists of four phases, namely planning, action, observation, and reflection, while LS is composed of three phases, namely plan, do, and see. In the planning phase, the lecturers collaborate with LS members to design and arrange the lecture plan for the next meeting. The lecture plan is focused on a student-centred learning process. The learning model planned in this research was TPS. The learning phases in this model were: 1) "Thinking", in which the lecturer asks a question or raises an issue related to the lesson and asks the students to think independently about how to solve it; 2) "Pairing", in which the lecturer asks the students to pair up and discuss what they think, so that the interaction lets the students share their answers with their teammate; and 3) "Square", the final step, in which the lecturer asks two pairs to meet in a group of four, giving the students the opportunity to share their work within that group.
In the do phase there were two main activities: (1) learning activities conducted by the lecturer, who practised the lesson plan that had been prepared together, and (2) observation activities conducted by the other LS members. In the see phase, the LS members reviewed the lecture process that had been implemented and proposed improvements for the next meeting.
The research instruments used included: 1) observation sheets of LS activities; 2) a collaboration skill observation sheet; and 3) cognitive tests. The second and third instruments were used to collect data on collaborative skills and student learning outcomes, respectively: collaborative skills data were obtained from the observers' records, and cognitive learning outcome data were obtained from the test results. The indicators used in scoring collaborative skills were: 1) positive interdependence; 2) face-to-face promotive interaction; 3) individual accountability and personal responsibility; 4) interpersonal and small group skills; and 5) group processing. The results of the collaborative skill observations were analysed using Formula 1 (Purwanto, 2014).
Np = (R / SM) × 100%        (1)
where Np is the percentage rate of achievement of collaborative skills, R is the total score of all obtained points, and SM is the maximum score of the total points.
At the end of each CAR cycle, both the collaborative skills and the cognitive learning outcomes were analysed to determine the improvement in those parameters from cycle 1 to cycle 2.
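To make Formula 1 concrete, the short Python sketch below computes Np for two observation cycles and the resulting improvement. The indicator names, raw scores, and the 4-point maximum per indicator are hypothetical illustrations, not values taken from this study.

```python
# Minimal sketch of Formula 1: Np = (R / SM) x 100%.
# Indicator names, raw scores and the 4-point maximum per indicator are
# hypothetical examples, not data reported in this study.

def collaboration_percentage(scores, max_score_per_item):
    """Return Np, the percentage rate of achievement of collaborative skills."""
    r = sum(scores)                        # R: total score of all obtained points
    sm = max_score_per_item * len(scores)  # SM: maximum score of the total points
    return r / sm * 100.0

indicators = ["positive interdependence", "promotive interaction",
              "individual accountability", "small group skills", "group processing"]
cycle1_scores = [3, 2, 3, 2, 2]
cycle2_scores = [4, 3, 4, 3, 4]

np1 = collaboration_percentage(cycle1_scores, max_score_per_item=4)
np2 = collaboration_percentage(cycle2_scores, max_score_per_item=4)
print(f"Cycle 1: {np1:.1f}%, Cycle 2: {np2:.1f}%, improvement: {np2 - np1:.1f} points")
```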
RESULTS AND DISCUSSION
The calculation of the students' collaborative skills in Cycle 1 and Cycle 2 can be seen in Table 1 and Table 2. As shown in Table 2, the students' collaborative skills improved on every indicator. Each indicator increased by more than 5%, and "group processing" showed the greatest increase, from 61% to 88%.
The results of this study are in line with previous research reports that also used cooperative learning models (Ding et al., 2014; Talat & Chaudhry, 2014). Collaborative skills improve through cooperative learning because the students are divided into small groups in which the members have different abilities, and each member of the group is responsible not only for learning the material but also for helping the other members of the group to learn.
Regarding the implementation of TPS, the increase in collaborative skills from cycle 1 to cycle 2 shows that the TPS learning model can empower students' skills in collaborating with each other. Collaborative skills require several abilities related to the collaboration process among students. In this connection, several previous reports stated that applying the TPS learning model can empower and improve students' communication skills (Talat & Chaudhry, 2014; Zainollah, 2014), social skills (Apriliyani, Wasis, & Supardi, 2015), and speaking skills (Lubis, 2014).
When TPS is analysed in more depth, students' collaborative skills can be raised at every stage of the model. The first stage, Think, introduces the concept through a given phenomenon; at this stage students think individually about an existing problem, which can open self-awareness of the need to solve problems by working together. In the Pair stage, students work in pairs to solve problems, so paired discussion emerges. In the final Square stage, two pairs merge into one group to discuss the existing problems, so the discussion proceeds actively (Fisher & Frey, 2014; Lapp & Moss, 2012; Strebe, 2013).
Furthermore, the TPS learning model is a cooperative learning model that requires students to work together to solve a problem. It also gives students the opportunity to work on their own and with others, and it optimizes student participation; the TPS model provides at least eight times more opportunities for each student to be recognized and to show their participation to others. This is in line with Anas, Atmoko, & Suyono (2012), who explained that the TPS learning model allows students to work individually or in groups and optimizes student participation. This condition is essential for empowering collaborative skills. Moreover, this learning model also gives each student more opportunities to be recognized and to show their participation to others (Lie, 2005).
For the second parameter, students' cognitive learning outcomes also improved from cycle 1 to cycle 2, by 7.56 points. The data on students' cognitive learning outcomes are presented in Table 3. The finding that TPS can improve students' learning outcomes is in line with previous reports (Isharyadi, 2015; Januartini et al., 2016; Karyawati et al., 2014; Tahueyo et al., 2013). The learning experiences that occur in a learning process affect the achievement of student learning outcomes (Sudjana, 2017). In this regard, the use of an appropriate learning model has a positive impact on student learning outcomes, and such positive results are often due to the selection of instructional models that lead students to be active during learning (Savitri & Wahyuni, 2013). Relatedly, a previous study reported that implementing TPS can help students to be more active during the learning process in class (Zainollah, 2014), because the TPS learning model gives students the opportunity to discuss possible ideas and solutions for a particular problem through discussion activities (Scanniello & Erra, 2014).
Regarding its syntax, the TPS learning model sets up paired groups of two students, and each group discusses and solves the given problem (Januartini et al., 2016). Through this activity the quality of learning improves, because students become more engaged with the learning process through interviews, discussion, and question-and-answer activities (Suyanto, 2008). Furthermore, the TPS learning model gives students an advantage in discussing their ideas and an opportunity to understand problem solving in different ways. All of these activities help students to better understand the concepts being studied.
Collaborative skills and cognitive learning outcomes are two essential components that need to be empowered during learning. If students do not achieve optimally on one or both of these parameters, the cause may be an imprecise learning process. Therefore, the selection of an appropriate learning model is one key to success in helping students reach optimal competence. One appropriate learning model, according to the results of this study, is TPS.
Table 1. The average score of all the collaboration skills in cycle 1 and cycle 2 | 2018-12-19T18:53:35.260Z | 2018-07-09T00:00:00.000 | {
"year": 2018,
"sha1": "f2012ba51aab864111abf3b9c03e2ddedf9d2b49",
"oa_license": "CCBYSA",
"oa_url": "http://ejournal.umm.ac.id/index.php/jpbi/article/download/5514/5521",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f2012ba51aab864111abf3b9c03e2ddedf9d2b49",
"s2fieldsofstudy": [
"Education",
"Environmental Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
29405734 | pes2o/s2orc | v3-fos-license | Genetic dissection of the fuzzless seed trait in Gossypium barbadense
Five genetic loci were found to be associated with the fuzzless seed trait in Gossypium barbadense, one of them containing MYB25-like_Dt, the best candidate for the N2 gene.
Introduction
Mature cotton seeds are covered with two types of fibres: lint (up to ~3.5 cm) and fuzz (<0.5 cm). Both are single-celled, tubular outgrowths that arise from the epidermal cells of the seed coat and are indistinguishable in appearance during the early stages of their growth (Lang, 1938;Stewart, 1975), suggesting that their growth may involve the same physiological and biochemical processes. Lint fibre initials in Gossypium hirsutum (Gh) and G. barbadense (Gb) start developing on the day of anthesis, i.e. 0 d post-anthesis (dpa), with approximately a quarter to a third of the epidermal cells becoming fibre initials and finally lint fibres (Lang, 1938;Stewart, 1975). Fuzz fibres start developing after the lint at ~4 dpa, do not elongate to the same extent as the lint, and are variable in length and abundance among different genotypes (Lang, 1938;Joshi et al., 1967;Lee et al., 2006), with some cottons having no fuzz fibres, but still producing normal lint fibres.
Fuzzless cotton seeds are referred to as 'naked seeds' and have advantages during ginning because they generally require much less force to remove the lint from the seed than fuzzy seeded cottons, and hence less power consumption at the gin and less breakage of the lint fibres (Boykin, 2007;Bechere et al., 2009Bechere et al., , 2012. A number of fuzzless loci have been reported in cotton, including the dominant N 1 and the recessive n 2 (Percy & Kohel, 1999). Homozygous N 1 N 1 mutants are completely fuzzless, and lack any tuft (seen in the recessive mutants) at the micropylar tip of the seed and also have a significantly reduced lint percentage (Ware, 1940;Turley & Kloth, 2002;Turley et al., 2007) probably because of delayed lint initiation and their shorter lint (Lee et al., 2006;Turley et al., 2007;Zhang et al., 2007;Romano et al., 2011). In earlier genetic studies, the recessive n 2 fuzzless mutant was believed to have a genotype of n 2 n 2 . However, Turley & Kloth (2002) demonstrated that a third unlinked recessive locus, n 3 , is required for expression of the fuzzless trait in accessions carrying n 2 . This second locus may have confounded some earlier genetic studies of the n 2 mutant, which shows variable fuzz development that is influenced by genetic background and environmental conditions (Turley & Kloth, 2002;Rong et al., 2005). Compared with the wild-type TM-1, in which fuzz initiation was observed at 4 dpa, few or no epidermal protrusions were observed in the N 1 and n 2 n 3 mutants at the same time point (Zhang et al., 2007). Cotton plants homozygous for all three fuzzless loci (N 1 N 1 n 2 n 2 n 3 n 3 ) are fibreless (i.e. lack both lint and fuzz; Turley & Kloth, 2002), so clearly the genes at these loci have central roles in the development of both types of fibres. A new ethyl methanesulfonate-induced mutant, n 4 t , was recently reported that appears to be different from the other naked seed loci (Bechere et al., 2009(Bechere et al., , 2012. The n 4 t mutation has been reported to have a less negative effect on lint percentage and hence lint development (Bechere et al., 2012). In addition, no fuzzy seeded but lintless phenotype has ever been observed in cotton, suggesting that fuzz genes are epistatic to lint genes (Du et al., 2001) and/or that development of fuzz and lint are temporally and spatially regulated by the same regulatory genes. The genetic control of fuzz development is clearly very complex and there are many different genes other than these major naked seed genes that modify the amount of fuzz on the seed (Turley & Kloth, 2002) even on the same plant (Kearney & Harrison, 1928), and few of these have been characterized. Cotton breeders have long aimed to develop cotton cultivars with fuzzless seeds and a high lint percentage to capitalize on their ginning advantages, but to achieve that goal, it is essential to identify the genes underlying the regulation of fuzz and lint development and to understand their biological roles and genetic interactions.
Cotton is an allotetraploid with At and Dt subgenomes. The N 1 and the n 2 genes have been localized to chromosomes 12 (A12) and 26 (D12) (a pair of homoeologous chromosomes), respectively (Endrizzi & Ramsay, 1980;Samora et al. 1994). The n 3 locus has not yet been mapped, but is reported to be unlinked to n 2 (Turley & Kloth, 2002). Commercial Gb cultivars produce some of the highest quality lint fibres of any cotton species, but all appear to contain the n 2 gene making them almost universally fuzzless, although there is considerable environmental effect on the amount of fuzz (Kearney & Harrison, 1928). It is not known if all Gb cultivars also carry the n 3 locus, but this is highly likely. A recent study using a map-based cloning strategy has identified the dominant N 1 gene to be a defective allele of the At-subgenome homoeolog of MYB25-like (i.e. MYB25-like_At) that generates small interfering RNAs from its 3′ end due to production of an overlapping antisense transcript that post-transcriptionally silences both homoeologs of this gene (Wan et al., 2016). MYB25-like is a master regulator involved in fibre initiation and development as silencing MYB25-like results in cotton seeds lacking both fuzz and lint fibres (Walford et al., 2011). However, the identities of the n 2 and n 3 genes are as yet unknown.
High-throughput genotyping and next-generation sequencing technologies are revolutionizing the speed and ability to fine-map and clone genes underlying agronomically important traits, which are usually controlled by multiple genes with complex interactions (Schneeberger, 2014;Zhu et al., 2014a). In cotton, a large number of single nucleotide polymorphism (SNP) markers have been reported based on whole genome re-sequencing of diverse cotton cultivars and transcriptome sequencing (Byers et al., 2012;Zhu et al., 2014b;Hulse-Kemp et al., 2015a;Wang et al., 2015;Fang et al., 2017). A cotton SNP array containing ~63 000 SNPs, mainly from US and Australian cotton cultivars, was recently developed (Hulse-Kemp et al., 2015b) and has been widely used by the cotton community for a diversity of studies (Ellis et al., 2016;Li et al., 2016;Wang et al., 2016;Gapare et al., 2017;Hinze et al., 2017;Huang et al., 2017a), including identification of the gene responsible for okra leaf shape . One of the applications of next-generation sequencing in identification of genes underlying specific phenotypes is mapping-by-sequencing (MBS) where pools of segregating progeny are bulked according to their mutant phenotype and their genome sequenced to identify genomic regions inherited predominantly from the parent displaying the phenotype . This approach has been applied in cotton to map genes related to the development and properties of fibres (Thyssen et al., 2014(Thyssen et al., , 2015(Thyssen et al., , 2017Islam et al., 2016) and branching architectures . In model plant species, such as Arabidopsis, it is possible to narrow down the resolution of MBS to the single gene level as demonstrated recently by the cloning of an imprinted gene involved in endosperm cellularization (Huang et al., 2017b), but this is still difficult in allopolyploid species such as cotton, and therefore final identification of the causative genes usually requires other additional approaches once a genomic region containing the gene is identified.
In this study, we aimed to uncover the genetic basis of the fuzzless trait from G. barbadense and to explore the possibility of breeding G. hirsutum cottons with fuzzless seeds and without the lint percentage penalty normally associated with this trait that has so far prevented its adoption in commercial breeding programmes. To this end, we developed and used near isogenic lines (NILs) for identification of genetic loci tightly linked to the fuzzless trait using the Cotton SNP63K array and the MBS approach. We found that the fuzzless seed trait in Pima S-7, a Gb cultivar, is controlled by multiple recessive loci, including a locus containing MYB25-like_Dt.
Our results indicate that lint development is associated with the expression levels of both MYB25-like_At and MYB25-like_Dt at ~0 dpa, while fuzz development is mainly determined by the expression level of MYB25-like_Dt at ~3 dpa.
Methods and materials
Plant materials, development of nearly isogenic lines, and segregating populations One G. barbadense (Pima S-7, fuzzless) and four G. hirsutum accessions (Sicala 40, Sicala V-2, T586, and Xu142fl) were used in genetic and gene expression analyses. Sicala 40 and Sicala V-2 are normal fuzz commercial cultivars producing copious amounts of lint, T586 is a lint bearing cotton genetic standard line containing the dominant fuzzless gene N 1 (Endrizzi & Taylor, 1968), while Xu142fl is a fibreless mutant known to carry n 2 amongst other fibre mutations (Zhang & Pan, 1991;Du et al., 2001). In addition, 134 Gb accessions with variable fuzz phenotypes were selected from the National Midterm Genetic Bank of Cotton at the Institute of Cotton Research (ICR) of the Chinese Academy of Agricultural Sciences, Anyang, China, and used for association analysis.
An F 2 population derived from Pima S-7 × Sicala 40 (PS; 169 plants) was used in segregation and fuzz percentage analysis. Pima S-7 and two genetically related Gh cultivars, Sicala V-2 and Sicala 40, were used in the development of a set of NILs each showing fuzzless or fuzzy seeded phenotypes that were selected from a breeder's population aimed at generating a fuzzless seed in an elite commercial Gh background. After four backcrosses, homozygous fuzzless and normal fuzz NILs were identified and confirmed in BC 4 F 4 and BC 4 F 5 generations (see Supplementary Fig. S1 at JXB online). Selected BC 4 F 4 and BC 4 F 5 plants showing normal fuzz (eight NILs) or reduced fuzz (20 fuzzless or intermediate fuzz NILs) were genotyped using the Cotton SNP63K array. BC 4 F 6 progeny showing uniform fuzzless or segregating fuzz phenotypes were used in bulk segregant sequencing ( Supplementary Fig. S1A). A BC 5 F 2 population (designated FLNS) derived from a cross between one of the BC 4 F 5 fuzzless NILs, FLN1-10, and Sicala 40 was used in fuzz segregation and lint percentage analyses. Gene expression analysis was performed using Pima S-7, Sicala 40, T586, Xu142fl, and different BC 4 F 5 NILs that had normal fuzz or were fuzzless. The contributing effects of MYB25-like homoeoalleles on fuzz and lint development were evaluated using two more F 2 populations: PX (92 plants) from Pima S-7 × Xu142fl and FLNX (87 plants) from FLN1-10 × Xu142fl. The reason for use of Xu142fl was that both of its MYB25like homoeologs are very lowly expressed and therefore potentially dysfunctional. All cotton plants, except the 134 Gb accessions, which were planted in the field of ICR's Experiment Station in Xinjiang, China (2015), were grown in a glasshouse (Canberra, Australia) at 28 ± 2 °C with natural lighting.
Phenotyping
Fuzzy, intermediate, and fuzzless seed phenotypes were scored based on visual inspection. The fuzz phenotype of the PS F 2 population and the 134 Gb accessions was measured quantitatively using fuzz percentage, which was calculated using at least 100 seeds and the formula of [(weight of ginned seeds−weight of delinted seeds)/weight of ginned seeds]×100, where the seeds were equilibrated to a constant moisture content before and after sulfuric acid delinting to burn off the fuzz fibres.
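The following Python sketch illustrates the fuzz percentage calculation described above; the seed weights used are hypothetical examples rather than measurements from this study.

```python
# Sketch of the fuzz percentage formula described above:
# [(weight of ginned seeds - weight of delinted seeds) / weight of ginned seeds] x 100.
# Seed weights here are hypothetical; the study weighed >= 100 seeds at constant
# moisture content before and after sulfuric acid delinting.

def fuzz_percentage(ginned_weight_g, delinted_weight_g):
    return (ginned_weight_g - delinted_weight_g) / ginned_weight_g * 100.0

print(f"{fuzz_percentage(12.40, 11.15):.2f}% fuzz")  # ~10%, in the range of a fuzzy Gh cultivar
```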
DNA sample preparation and SNP genotyping DNA extraction, SNP genotyping using the Cotton SNP63K array or KASP (Kompetitive Allele Specific PCR) were performed as previously described . For calculation of SNP frequencies based on the SNP array, Pima S-7, Sicala 40 and heterozygous alleles were designated 1, 0 and 0.5, respectively. For MYB25-like_Dt, a KASP marker was designed using a SNP located within its coding region (see Supplementary Fig. S2A). The MYB25-like_At alleles of Pima S-7 and Xu142fl were distinguished using species-specific SNPs (Gb vs Gh) and a non-synonymous SNP within the coding region of the Xu142fl MYB25-like_At ( Supplementary Fig. S2B), respectively. KASP oligos are shown in Supplementary Table S1.
Bulk segregant analysis and mapping-by-sequencing DNA was extracted from 24 randomly selected BC 4 F 6 progeny derived from NILs segregating for the fuzz phenotype and bulked to form the Recessive Fuzzless Bulk1 (RFB1) DNA pool. DNA was extracted from 15 BC 4 F 6 progeny derived from fuzzless NILs and bulked to form the RFB2 DNA pool. Two barcoded DNA sequencing libraries were created using RFB1 and RFB2, and sequenced in a single lane using the Illumina HiSeq2500 platform at the Australian Genome Research Facility (Melbourne, Australia). Approximately 50 Gb of 100-bp paired-end reads were generated.
After quality check using FastQC, the clean reads of the two bulks were separately mapped to the Sicala 40 genome sequence, which was generated by mapping re-sequenced Sicala 40 reads to the TM-1 (G. hirsutum) reference genome using CLC Genomics Workbench (version 7.5.1) with the following parameter settings: mismatch cost, 2; insertion and deletion cost, 3; length fraction, 0.5; similarity fraction, 0.95; and non-specifically matched reads ignored. Variants (including SNPs and indels) between RFB1 or RFB2 and Sicala 40 were identified using the 'basic variant detection' module implemented in CLC Genomics Workbench with the following settings: ploidy, 2; default read quality filters (i.e. neighbourhood radius, 5; minimum neighbourhood quality, 15; minimum central quality, 20); minimum read coverage, 10; minimum variant read count, 3; minimum variant frequency, 35%. SNPs were filtered from the results. Average SNP, i.e. Pima S-7 allele, frequency of a 1-Mbp window (500 kb overlapping between the two adjacent windows) was calculated for RFB1 and RFB2 using a sliding-windowbased approach and plotted for each chromosome using the physical co-ordinates of the TM-1 genome . Candidate regions associated with the recessive fuzzless trait should be homozygous for Pima S-7 in the RFB2 pool and hence have a Pima S-7 allele frequency of 1 or close to 1, while in the segregating RFB1 pool the Pima S-7 allele frequency should be ~0.5 because this pool contains plants that are homozygous Pima S-7, heterozygous Pima S-7 and homozygous Sicala 40.
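The sliding-window allele-frequency profile described above can be sketched as follows. The input format (a per-chromosome, position-sorted list of SNP positions with Pima S-7 allele frequencies) and the function name are illustrative assumptions; the variants themselves were called with CLC Genomics Workbench as described above.

```python
# Sketch of the sliding-window Pima S-7 allele-frequency profile (1 Mbp window,
# 500 kb step) used to scan for candidate regions. The input format, a sorted
# list of (position_bp, pima_allele_frequency) tuples for one chromosome, and
# the function name are assumptions for illustration only.

def window_allele_frequency(snps, window=1_000_000, step=500_000):
    """Return a list of (window_midpoint_bp, mean Pima S-7 allele frequency)."""
    if not snps:
        return []
    last_pos = snps[-1][0]
    profile, start = [], 0
    while start <= last_pos:
        in_window = [f for pos, f in snps if start <= pos < start + window]
        if in_window:
            profile.append((start + window // 2, sum(in_window) / len(in_window)))
        start += step
    return profile

# A fuzzless-associated region is expected to sit near 1.0 in the fuzzless bulk
# (RFB2) and near 0.5 in the segregating bulk (RFB1).
example = [(120_000, 1.0), (480_000, 0.9), (1_350_000, 0.5), (2_100_000, 0.45)]
print(window_allele_frequency(example))
```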
Gene expression analysis using quantitative real-time PCR Two types of tissues, whole ovules and ovule outer integuments (a 3-4-cell layer including the epidermis that produces the lint and fuzz fibres) were used in gene expression analysis. Whole ovules of different developmental stages were collected from normal fuzz (NFN) or fuzzless (FLN) NILs and used for dissection of outer integuments according to a previously reported protocol (Bedon et al., 2014). For the four cotton accessions (Sicala 40, Pima S-7, T586, and Xu142fl), whole ovules of different developmental stages were directly used for RNA extraction. Total RNA extraction and quantitative realtime PCR (qRT-PCR) procedures were performed as previously described (Zhu et al., 2009(Zhu et al., , 2013 except that the reference gene used was the cotton ubiquitin gene (GenBank accession no. EU604080), and the reactions were run on the ViiA7 Real-Time PCR System (Life Technologies) using the FasterStart Universal SYBR Green Master Mix (ROX) (Roche). Gene expression levels were determined based on three biological replicates each with three technical replicates. Primers specifically amplifying MYB25-like_At could not be designed, so the expression level of MYB25-like_At was thus determined by subtraction of the expression level of MYB25-like_Dt from the total expression level of MYB25-like. Primers used in qRT-PCR are shown in Supplementary Table S1. All primer pairs had a similar PCR efficiency (89.9-99.2%) determined by LinRegPCR (http:// www.hartfaalcentrum.nl/index.php?main=files&sub=LinRegPCR).
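As a rough illustration of how the homoeolog-specific expression levels were combined, the sketch below normalizes qRT-PCR Ct values to the ubiquitin reference gene and obtains MYB25-like_At by subtraction, as described above. The use of the 2^-dCt transformation and all Ct values are assumptions for illustration; the exact quantification formula used in the study is not restated here.

```python
# Rough sketch of the expression bookkeeping described above: relative expression
# is normalized to the ubiquitin reference gene, and MYB25-like_At is obtained by
# subtracting MYB25-like_Dt from total MYB25-like. The 2**(-dCt) transformation
# and all Ct values are illustrative assumptions, not data from the study.

def relative_expression(ct_target, ct_reference):
    """Expression of the target relative to the reference gene (2^-dCt)."""
    return 2.0 ** -(ct_target - ct_reference)

ct_ubiquitin = 20.1
total_myb25_like = relative_expression(ct_target=24.3, ct_reference=ct_ubiquitin)
myb25_like_dt = relative_expression(ct_target=26.6, ct_reference=ct_ubiquitin)
myb25_like_at = total_myb25_like - myb25_like_dt  # At level inferred by subtraction

print(f"At homoeolog share of total MYB25-like: {myb25_like_at / total_myb25_like:.0%}")
```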
Results
The fuzzless trait in G. barbadense is controlled by multiple recessive genetic loci F 3 seeds of 169 F 2 plants derived from Pima S-7 (Gb) × Sicala 40 (Gh; hereafter named PS) showed a continuous gradation of the fuzz phenotype, from Pima S-7-like (fuzzless) through to Sicala 40-like (normal fuzz) (Fig. 1). Of the 169 F 2 :F 3 families, seeds of only three families (e.g. PS-38; Fig. 1) looked similar to the naked seeds of Pima S-7, fitting a model with three recessive genes controlling fuzz formation (χ 2 63:1,1 =0.0497, P=0.8236). However, for the FLNS F 2 population created using Sicala 40 and a fuzzless NIL (FLN1-10; Fig. 1; Supplementary Fig. S1), five out of the 60 F 2 :F 3 families showed a Pima S-7-like naked seed phenotype (e.g. FLNS-37; Fig. 1), fitting a model with two recessive genes (χ 2 15:1,1 =0.4444, P=0.5050). We noticed that the three fuzzless F 2 :F 3 families observed in the PS population were not completely fuzzless and had a little more fuzz than Pima S-7, and we also observed transgressive segregants (with more fuzz than Sicala 40) in the PS but not in the FLNS F 2 populations. These results indicate that (i) fuzz development is complex with at least three recessive genes being involved in the regulation of the fuzzless trait in Pima S-7, and (ii) Pima S-7 must contain additional epistatic modifier(s) affecting the function and/or the interaction of the three loci responsible for its fuzzless phenotype that have not been identified here due to the small population sizes used.
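The segregation models quoted above can be checked with a chi-square goodness-of-fit test. The sketch below reproduces the reported statistics from the observed counts (3 fuzzless of 169 PS F2:F3 families against a 63:1 ratio, and 5 of 60 FLNS families against 15:1), using scipy.stats.chisquare.

```python
# Sketch of the chi-square goodness-of-fit tests behind the segregation models
# quoted above, using the counts reported in the text: 3 fuzzless of 169 PS
# F2:F3 families tested against a 63:1 ratio (three recessive genes), and 5 of
# 60 FLNS families against a 15:1 ratio (two recessive genes).

from scipy.stats import chisquare

def segregation_test(n_recessive_phenotype, n_total, ratio):
    """ratio is (other, recessive), e.g. (63, 1); returns (chi2, p) with 1 df."""
    other, recessive = ratio
    expected = [n_total * other / (other + recessive),
                n_total * recessive / (other + recessive)]
    observed = [n_total - n_recessive_phenotype, n_recessive_phenotype]
    return chisquare(f_obs=observed, f_exp=expected)

print(segregation_test(3, 169, (63, 1)))  # ~ chi2 = 0.0497, P = 0.824
print(segregation_test(5, 60, (15, 1)))   # ~ chi2 = 0.4444, P = 0.505
```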
Identification of genetic loci, including the MYB25-like_Dt-containing locus, associated with low seed fuzz
We used NILs with different fuzz phenotypes to identify genetic loci controlling fuzz development (see Supplementary Fig. S1). Eight normal fuzz and 20 reduced fuzz (fuzzless or intermediate) BC 4 F 4 or BC 4 F 5 NILs were used in the SNP array analysis. Of the ~63 000 SNPs, 5426 were polymorphic between Pima S-7 and Sicala 40, and these were distributed across all 26 chromosomes, although chromosomes A02, A04, and D04 had less than 20 SNPs each (Supplementary Table S2). The Pima S-7 allele frequency of the polymorphic SNPs was plotted for each chromosome based on their genomic coordinates relative to the TM-1 genome. For the pool of the 20 reduced fuzz NILs, each candidate locus associated with low fuzz would have a Pima S-7 allele frequency close to 1 because the fuzzless NILs should be homozygous for the Pima S-7 allele and the NILs close to being fuzzless could be still heterozygous for the Pima S-7 allele. For the pool of the eight normal fuzz NILs, each candidate locus would have a Pima S-7 allele frequency <0.5 because these NILs could contain heterozygous or null Pima S-7 alleles. Three regions on chromosomes A12, D05, and D12 met these criteria (Supplementary Table S3; Fig. 2E; Supplementary Fig. S3). The A12 locus contained only three SNP markers and all of them were mapped to the D12 interval based on previous interspecific mapping results (Hulse-Kemp et al., 2015b) and so can be excluded as their positions were incorrectly assigned based on blasting. We further created and sequenced two bulked DNA libraries, RFB1 (progeny of normal fuzz NILs) and RFB2 (progeny of fuzzless NILs). Genome-wide SNP identification was performed for RFB1 and RFB2 using Sicala 40 as a reference. Candidate loci associated with low fuzz seeds would be expected to have a Pima S-7 allele frequency of ~0.5 and 1 in RFB1 and RFB2, respectively. Nine regions on seven different chromosomes met these criteria (Supplementary Table S3; Fig. 2A-D; Supplementary Fig. S4). The D12 locus was identified in both MBS and SNP array analyses, and so represents the best candidate for a major locus contributing to fuzzless seeds.
Because the fuzzless NILs used were self-pollinated for a few generations, some of the ten candidate regions identified could thus be false positives due to fixation of Pima S-7 alleles not related to the fuzzless phenotype. To refine these candidate loci we designed KASP marker assays (between 1 and 3 assays for each region) across all the ten candidates and genotyped the PS F 2 population with those markers and also measured fuzz percentage for each F 2 individual. For each SNP marker, F 2 plants were grouped based on their genotypes (i.e. homozygous Pima S-7 or Sicala 40, and heterozygous) and the average fuzz percentage was compared for each genotype. Of the ten candidate regions, five (regions 2, 4, 5, 9, and 11) were found to be positively or negatively correlated with the amount of seed fuzz. They were designated loci I-V (Supplementary Table S3; Fig. 3). For loci II-V, plants with a genotype of homozygous Pima S-7 regions had a significantly lower fuzz percentage than those with a genotype of homozygous Sicala 40, but for locus I (region 2), plants with a genotype of homozygous Pima S-7 had a significantly higher fuzz percentage than those with a genotype of homozygous Sicala 40 (Fig. 3A), suggesting Pima S-7 contained an enhancer rather than an inhibitor of fuzz development within that locus. For locus V (region 11), the fuzz percentage of heterozygous plants was intermediate and significantly different from either homozygous Pima S-7 or Sicala 40 (Fig. 3A), suggesting a major and additive effect of locus V on fuzz percentage, whereas the other loci were closer to one or other of the homozygous parental genotypes. This was supported by the fuzz phenotypes of the segregants within the FLNS population, as its F 2 :F 3 families with a homozygous Pima S-7 region at locus V always had less fuzz than those with a homozygous Sicala 40 region (see Supplementary Fig. S5), and this trend was not observed for the other loci.
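The genotype-by-fuzz-percentage comparison used to refine the candidate loci can be sketched as below: F2 plants are grouped by their KASP genotype at a marker and the mean fuzz percentage per genotype class is compared. The column names, example records, and the choice of a two-sample t-test are illustrative assumptions; the statistical test actually applied is not specified in the text.

```python
# Sketch of the marker-by-marker comparison described above: F2 plants grouped
# by KASP genotype at a candidate locus, with mean fuzz percentage compared
# between homozygous classes. Column names, the example records and the use of
# a two-sample t-test are illustrative assumptions only.

import pandas as pd
from scipy.stats import ttest_ind

f2 = pd.DataFrame({
    "genotype": ["PimaS7/PimaS7", "PimaS7/Sicala40", "Sicala40/Sicala40",
                 "PimaS7/PimaS7", "Sicala40/Sicala40", "PimaS7/Sicala40"],
    "fuzz_pct": [1.2, 6.8, 10.5, 0.9, 11.3, 7.4],
})

print(f2.groupby("genotype")["fuzz_pct"].mean())

hom_pima = f2.loc[f2["genotype"] == "PimaS7/PimaS7", "fuzz_pct"]
hom_sicala = f2.loc[f2["genotype"] == "Sicala40/Sicala40", "fuzz_pct"]
print(ttest_ind(hom_pima, hom_sicala))
```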
By checking the co-segregation of markers on the individual plants used with the Cotton SNP63K array analysis and the overlap between the SNP array and MBS analyses, we were able to deduce the intervals containing the candidate gene(s) for the five individual loci that contained between 20 and >300 annotated genes. For example, the size of the overlapping interval of locus V is ~1.4 Mbp containing 84 annotated genes (see Supplementary Tables S3 and S4). Fine mapping and/or other strategies will be required to pinpoint the actual causal gene(s) within these loci. However, one of the locus V genes is MYB25-like_Dt (Gh_D12G1628), a homoeolog of MYB25-like_At that has been shown to be critical for fuzz development (Wan et al., 2016). MYB25-like_Dt is thus the best candidate at this locus.
We also found that the average fuzz percentage of the 134 Gb accessions used in this study was low at 2.68% (with a range of 0.29-9.20%), and generally considerably less than the 10-12% found with normal fuzz Gh cultivars. Although the majority of these Gb accessions were fuzzless or close to fuzzless, a few were normal fuzzy seeded (e.g. M210353 in Supplementary Fig. S6). Despite their variable fuzz phenotypes, all of these Gb accessions had a homozygous Pima S-7 genotype in the five loci we have identified, suggesting that there could be additional modifiers of those five fuzzless loci in the rare fuzzy seeded Gb accessions, but this will have to await a more detailed genetic and molecular analysis to ensure that these accessions have not been misclassified or are partial hybrids with Gh.
Expression of MYB25-like_Dt is down-regulated in outer integuments of fuzzless NILs during fuzz initiation
In 0-9 dpa ovule outer integuments of NILs with normal fuzz (NFN) or fuzzless (FLN) seeds, MYB25-like_At and MYB25-like_Dt had a similar expression profile to each other, but were differentially expressed in both seed phenotypes, with approximately 70-80% of the total expression of MYB25-like contributed by its At homoeolog (Fig. 4). In NFN, the expression level of MYB25-like_Dt dropped steadily from 0 to 5 dpa, and then remained relatively low thereafter. The fuzzless FLN had a lower expression level of MYB25-like_Dt from 0 dpa and declined more rapidly than in the normal fuzz NFN NILs (Fig. 4C), such that there was little expression at 3 dpa, just prior to when fuzz fibres would normally initiate. Although the significantly reduced fuzz percentage observed in F 2 plants homozygous for the Pima S-7 allele of MYB25-like_Dt (i.e. MYB25-like_Dt PimaS7/PimaS7 ; Fig. 3A) could be contributed by other gene(s) within the same linkage block as MYB25-like_Dt, this expression profile of MYB25-like_Dt, together with the previously reported role of its At homoeolog in fuzz development (Wan et al., 2016), provides additional support for MYB25-like_Dt being one of the best candidates for a gene involved in the reduced fuzz of Pima S-7.
The protein coding sequences of MYB25-like_Dt between the reference sequence of TM-1 (fuzzy seeded) and Pima S-7 differ at three nucleotides with two of them giving rise to changed amino acids in Pima S-7 (see Supplementary Fig. S2A). In both cases, however, the changed bases are all found in either the Dt-subgenome or the At-subgenome homoeologs from other cultivars including TM-1, Sicala 40, and Xu142 (Supplementary Figs S2A and S7) that have fuzzy seeds, so these SNPs are unlikely to lead to a loss of function of MYB25-like_Dt in Pima S-7. The core promoters (−1 to −250 bp) of MYB25-like_Dt and MYB25-like_At in Pima S-7, Sicala 40, and TM-1 were very similar, but the remaining upstream sequences of their promoters were quite different. Whether any of these differences is the cause of their differential expression remains to be investigated by fusing different lengths of promoter to reporter genes. The promoter of MYB25-like_Dt in Pima S-7, compared with that in Sicala 40 and TM-1, had seven differences, including three SNPs, two small indels (1 bp deletion or insertion), and two deletions (>20 bp) ( Supplementary Fig. S8).
Expression levels of MYB25-like_Dt but not of MYB25-like_At predominantly determine the fuzz phenotypes
[Caption displaced from Fig. 2: graphs of Pima S-7 allele frequency generated with a sliding-window approach (1 Mbp windows, 500 kb overlap between adjacent windows), x-axes in chromosomal coordinates (kb); regions I-V mark the five loci associated with fuzz development, while the asterisked region was not associated with fuzz based on fuzz percentage analysis of the PS F2 population and the results from other segregating NILs genotyped with the Cotton SNP63K array; panel (E) shows the Pima S-7 allele frequency of the 267 SNP markers polymorphic between Pima S-7 and Sicala 40 on D12, based on genotyping of eight normal fuzz and 20 reduced fuzz (including fuzzless) BC4F4 or BC4F5 NILs with the Cotton SNP63K array.]
We compared the expression levels of the two MYB25-like homoeologs in 0-5 dpa whole ovules of the linted-fuzzless T586 (N1) and fibreless Xu142fl lines with those of Pima S-7 and Sicala 40 (Fig. 5). At 0 dpa, when lint fibres have started to initiate, the total expression level of MYB25-like was highest in Sicala 40 and lowest in Xu142fl, with a ranking of Sicala 40 > Pima S-7 > T586 > Xu142fl (Fig. 5A), which is consistent with their rankings in final lint percentage (Sicala 40: 41.58%; Pima S-7: 34.51%; T586: 11.37%; Xu142fl: 0), suggesting that the expression level of MYB25-like at ~0 dpa is correlated
with lint fibre initiation and development. In 1-5 dpa ovules, Pima S-7 had a more lowly expressed MYB25-like_Dt than Sicala 40, but a significantly more highly expressed MYB25-like_At (Fig. 5B, C) despite being fuzzless, supporting the association between the expression of MYB25-like_Dt, but not MYB25-like_At, and the poor fuzz development in Pima S-7. From 0 to 5 dpa, the expression levels of both MYB25like homoeologs were consistently lower in Xu142fl and T586 than in Sicala 40 and Pima S-7. Around the time of fuzz initiation at 3 dpa, Xu142fl had the lowest expression of both homoeologs in any of the genotypes, but particularly the Dt homoeolog. In addition, there is a non-conservative substitution at position 314 within the conserved MYB DNA binding domain of MYB25-like_At in Xu142fl (see Supplementary Fig. S2B) that is likely to negatively impact on the activity of this transcription factor, supporting the assertion previously reported (Walford et al., 2011) that this fibreless mutant line has two dysfunctional MYB25-like homoeologs.
To investigate the effect of different MYB25-like homoeoalleles from Gh and Gb on lint and fuzz development, we generated two other segregating F 2 populations, PX from Pima S-7 x Xu142fl and FLNX from FLN1-10 (MYB25-like_Dt Pima S7/ Pima S7 locus in an essentially Sicala 40 genetic background) × Xu142fl and examined lint and fuzz phenotypes of the F 2 plants with different allele combinations determined by the presence of SNPs specific to each of the MYB25-like homoeologs and their parental origins (Table 1; Supplementary Fig. S9). The difference in these two populations was in the maternal allele of MYB25-like_At, which was from Pima S-7 in PX and from Sicala 40 in FLNX, both of which should be highly expressed and as demonstrated below, fully functional (Fig. 5B). For each F 2 population, individual plants were classified into 9 types (1-9 and 10-18 for PX and FLNX, respectively) based on the homozygosity or heterozygosity of parental origin for the At and Dt homoeolog of MYB25-like. A number of observations were clear from this analysis: (i) Fuzzless-lintless seeds (all lacking a micropylar tuft) (types 1 and 10, Table 1) were only produced when the two homozygous defective homoeologs, MYB25-like_ At Xu142fl/Xu142fl and MYB25-like_Dt Xu142fl/Xu142fl , were combined together, as in the original Xu142fl fibreless parent. Replacing a single copy of MYB25-like_Dt Xu142fl with the Pima S-7 Dt allele (types 2 and 11) or the more highly expressed At allele from Pima S-7 (type 4) was sufficient to allow lint, but not fuzz, to develop on the seeds. This suggests that MYB25-like_Dt PimaS7 and MYB25-like_At PimaS7 both produce fully functional MYB proteins able to activate lint fibre initiation. However, although MYB25-like_At PimaS7 is highly expressed from 0 through to 5 dpa (Fig. 5B), a single copy (type 4) was unable to rescue fuzz fibre production (Table 1), although two copies (type 7) allowed the development of some intermediate and normal fuzzy seeds in some plants, suggesting that fuzz and lint development are both dependent on gene dosage and hence total expression of MYB25-like homoeoalleles at specific times during development. Interestingly, plants in the PX population with homozygous Pima S-7 alleles for both homoeologs (type 9) were not all fuzzless like the Pima S-7 parent, attesting to the influence of some of the other identified fuzzless loci. All plants in the two populations have a lowly expressed MYB25-like_Dt allele, i.e. MYB25-like_Dt Xu142fl or MYB25-like_Dt PimaS7 , and regardless of whether their MYB25-like_At allele was functional or mutant, produced mostly fuzzless or reduced fuzz seeds, suggesting it is the expression level of the Dt homoeolog that is critical for fuzz development, but either homoeolog can support lint fibre development provided that it expresses a functional protein during lint initiation.
As Pima S-7 and Xu142fl are both believed to carry alleles of n 2 causing their fuzzless seeds, this provides support for MYB25-like_Dt being a good candidate for N 2 . (ii)The Gb allele MYB25-like_At PimaS7 was less effective than that from Gh, MYB25-like_At Sicala40 , in suppressing fuzz development in the presence of mutated Dt homoeologs, producing fewer fuzzless and more intermediate and fuzzy seeded F 2 segregants (comparing types 7-9 with types 16-18), perhaps because of its higher expression during fuzz initiation (Fig. 5B), or because of other modifier genes brought in with the different genetic backgrounds. (iii)Lint percentage (and hence lint fibre development) was largely determined by the functionality and expression (gene dosage) of the At homoeolog of MYB25-like, but could also be influenced by the Dt homoeolog. The negative effects of the defective MYB25-like_Dt on lint percentage was less severe in the predominantly Gh background of the FLNX plants than in the Gb background of the PX plants that had lower lint percentages with all allele combinations other than in those with fibreless seeds (Table 1).
Breeding fuzzless cottons without a penalty on lint fibre production is possible
Fuzzless seeds have historically been associated with an undesirable lint percentage (Ware, et al. 1944), most likely due to both fuzz and lint fibres being regulated by differential expression of the same genes. The N 1 gene, i.e. the dysfunctional MYB25-like_At, has a strong negative effect on lint development (Turley et al., 2007), and therefore our ability to breed cottons with both N 1 and a high lint percentage is quite poor. Most commercial Gb cultivars have a relatively high lint percentage (~34%), so their fuzzless seed trait should have smaller deleterious effects on lint development. To know whether it is possible to combine the fuzzless seed trait of Gb with the high lint percentage of Gh, we created a BC 5 F 2 population (FLNS from FLN1-10 × Sicala 40) and measured lint percentages of individual plants. These plants segregated for all five fuzzless associated loci, and contained homozygous MYB25-like_At alleles from Sicala 40, i.e. MYB25-like_At Sicala40/Sicala40 , which is expressed at a similar level to MYB25-like_At PimaS7/PimaS7 at 0 dpa. Overall, the average lint percentage of the BC 5 F 2 plants with a genotype of MYB25-like_Dt Sicala40/Sicala40 (normal fuzz) and MYB25-like_Dt PimaS7/ PimaS7 (fuzzless) were similar (41.96% and 41.71%, respectively), and as high as that of the elite Gh cultivar Sicala 40 (41.58%). These results suggest that Sicala 40 contains alleles that are able to compensate for any negative effects of MYB25-like_Dt PimaS7 on lint percentage. Individually, the Pima S-7 allele of loci V and II had a positive and negative effect on lint percentage, respectively, whereas the other three loci had no significant effect on lint percentage (see Supplementary Fig. S10).
Discussion
Identification and characterization of genes contributing to initiation of lint and fuzz fibres is essential for understanding the biological processes underlying fibre development and for improvement of lint yield through traditional breeding or transgenic approaches. Several fibreless or fuzzless mutants, such as Xu142fl, SL1-7-1, N 1 , and n 2 , have been reported and characterized (Zhang & Pan, 1991;Percy & Kohel, 1999;Turley & Kloth, 2008). They are valuable genetic resources for uncovering genes related to lint and fuzz fibre development. Transcriptome analyses using some of these fibre mutants have identified a large number of genes with a potential role in fibre development (Wu et al., 2006;Padmalatha et al., 2012;Wan et al., 2014); however, only a few of those genes (GhMYB25-like, GhVIN1, and GhJAZ2) have been shown to be essential for fibre initiation and development (Walford et al., 2011;Wang et al., 2014;Hu et al., 2016). Using a forward genetics approach, Wan et al. (2016) identified the dominant fuzzless N 1 gene to be a dysfunctional allele of MYB25-like_ At that generates siRNAs and showed that suppression of MYB25-like_At by virus-induced gene silencing phenocopied the fuzzless phenotype. In this study, we identified five loci associated with the fuzzless seed trait from Pima S-7 with the MYB25-like_Dt-containing locus having a more prominent effect on fuzz development than the other loci ( Fig. 3A; Supplementary Fig. S5).
We propose that MYB25-like_Dt is a candidate for the N 2 gene based on a number of pieces of evidence. First, N 1 and n 2 have previously been assigned to the two homoeologous chromosomes A12 and D12, respectively (Endrizzi & Ramsay, 1980). N 1 has been shown to be localized to an interval containing MYB25-like_At (Wan et al., 2016), and here we show that there is a major locus associated with the fuzzless seed phenotype of Gb carrying n 2 that maps to an interval containing its homoeolog, MYB25-like_Dt. While there are several other loci that associated with fuzzless seeds in different genetic populations segregating for n 2 , only the locus containing MYB25-like_Dt is on D12, known to contain n 2 . Second, in a highly introgressed population (FLNS) that is predominantly in a Sicala 40 background, those lines carrying the allele of MYB25-like_Dt from Pima S-7 all have much reduced fuzz relative to lines carrying the corresponding wild-type allele from Sicala 40, regardless of the combinations of loci for the other four fuzzless associated regions from Gb (see Supplementary Fig. S5). Third, expression of MYB25-like_Dt when fuzz fibres begin to initiate at 3 dpa is consistently much lower in NILs and other lines producing fuzzless seeds compared with those with fuzzy seeds, whereas the expression of MYB25-like_At in Pima S-7 is even higher than in the fuzzy seeded cultivar Sicala 40 and so is not indispensable for fuzz development. MYB25-like_Dt from Pima S-7 does not appear to contain any obvious deleterious mutations within its coding region, and a single copy of this gene is able to restore lint development in lines containing otherwise defective homoeoalleles of MYB25-like from Xu142fl. The Pima S-7 MYB25-like_Dt locus must thus be sufficiently expressed at 0 dpa and produce a functional protein to be able to activate transcription of its downstream gene(s), at least within the lint fibre pathway. There are a number of upstream sequence differences between the MYB25-like_Dt alleles from Pima S-7 and Sicala 40 ( Supplementary Fig. S8) that may be responsible for the low expression of the Pima S-7 allele during fuzz initiation in lines carrying n 2 , but further experiments will be required to verify that they are the causal mutations that convert N 2 into n 2 .
Taking advantage of those dysfunctional MYB25-like homoeoalleles in Xu142fl, we used it to infer both the functionality and the dosage effects of MYB25-like_Dt PimaS7 in fuzz development by comparing the fuzz phenotypes of F2 segregants with MYB25-like_Dt Xu142fl/Xu142fl or MYB25-like_Dt PimaS7/PimaS7 in common MYB25-like_At backgrounds (Table 1). This investigation allowed us not only to demonstrate a key role for MYB25-like_Dt in both fuzz and lint development, but also to uncover subtle differences in expression and dosage effects between the MYB25-like_Dt PimaS7 and MYB25-like_Dt Xu142fl alleles in fuzz and lint fibre development. In addition, the loss of fuzz production by MYB25-like_Dt PimaS7 could be alleviated by the relatively highly expressed MYB25-like_At PimaS7 allele, which was more effective than the less expressed MYB25-like_At Sicala40 allele (Table 1; Fig. 5B). Based on our results, we propose a working model for the role of MYB25-like homoeologs in lint and fuzz development (Fig. 6). According to this model, transferring a homozygous allele with low expression at ~3 dpa, such as MYB25-like_Dt PimaS7/PimaS7, into a Gh background by backcrossing, such that there is a high level of total MYB25-like at ~0 dpa to enhance lint fibre development, should allow the breeding of elite commercial cotton lines with fuzzless seeds and without any penalty on lint yield, provided that other yield components (such as seeds per boll and bolls per plant) are not affected. We believe we are well on the way to achieving that result.
Interactions between MYB25-like_At, MYB25-like_Dt, and other loci indicate that the genetic model regulating the fuzzless seed trait from Pima S-7 is genetic background dependent. For instance, the female parents of the PS and FLNS F 2 populations (Pima S-7 and FLN1-10, respectively) had the same genotype at loci I-V (i.e. homozygous Pima S-7), but their F 2 populations showed a three-and two-genemodel for the fuzzless trait, respectively. In addition, the PS but not the FLNS F 2 population showed transgressive segregation, and a few Gb accessions (with homozygous Pima S-7 alleles at all five loci) produce considerable amounts of fuzz (see Supplementary Fig. S6). These results suggest the presence of as yet unknown modifier(s) affecting expression of the fuzzless genes or the interaction amongst the fuzzless genes in the Gb background, consistent with the previous finding that the genetics of the Pima S-7 fuzzless phenotype is complex (Rong et al., 2005).
In conclusion, the Gb fuzzless seed trait is regulated by multiple recessive genetic loci. The role of MYB25-like_Dt in regulating fuzz initiation and development is yet to be confirmed by specific silencing of its expression using gene editing, but the expression profile of MYB25-like_Dt and genetic analyses of cottons with variable lint and fuzz phenotypes provided quite convincing evidence for MYB25-like_Dt being the major fuzz gene in addition to its role in regulating lint development together with MYB25-like_At.
Supplementary data
Supplementary data are available at JXB online.
Fig. S1. Schematic of the development of the near isogenic lines (NILs) used in SNP genotyping and mapping-by-sequencing.
Fig. S2. Alignment of the coding sequences of MYB25-like.
Fig. S3. The Cotton SNP63K array based frequency of the Pima S-7 allele of the polymorphic SNPs between Pima S-7 and Sicala 40 in the NILs with normal or reduced fuzz.
Fig. S4. Distribution of the Pima S-7 allele frequency across the 26 cotton chromosomes in the NILs showing fuzzless (RFB2) and segregating fuzz phenotype (RFB1) determined by MBS.
Fig. S5. Representative fuzz phenotype of the FLNS F2:F3 families.
Fig. S6. Seed fuzz phenotype of representative G. barbadense accessions.
Fig. S7. Alignment of the amino acid sequences of MYB25-like homoeoalleles from Xu142, Xu142fl, TM-1, and Pima S-7.
Fig. S8. Alignments of the promoter sequences of MYB25-like from mutant and wild-type lines.
Fig. S9. Distribution of fuzz phenotypes (F3 seeds) in the F2 populations PX and FLNX by MYB25-like homoeoalleles.
Fig. S10. Effects of fuzz associated loci on lint percentage of the FLNS F2:F3 families.
Table S1. Primers used in the study.
[Legend displaced from Fig. 6 (working model for MYB25-like homoeologs in lint and fuzz development): If the expression level of MYB25-like is below threshold level 2 at ~0 dpa, no lint fibre will initiate; otherwise lint fibres develop and their amount is positively correlated with the total expression level of MYB25-like. If the expression level of MYB25-like_Dt is lower than threshold level 1 at ~3 dpa, cotton seeds will be fuzzless; otherwise fuzz fibres develop and their amount is positively correlated with the expression level of MYB25-like_Dt. When the total expression level of MYB25-like at ~0 dpa and the expression level of MYB25-like_Dt at ~3 dpa are lower than threshold levels 2 and 1, respectively, cotton seeds will be fibreless (e.g. Xu142fl). When the total expression level of MYB25-like at ~0 dpa is higher than threshold level 2 but the expression level of MYB25-like_Dt at ~3 dpa is below threshold level 1, cotton seeds will be fuzzless (e.g. T586). No mutant with lintless but fuzzy seeds has ever been reported, probably because cottons with an expression level of MYB25-like_Dt at ~3 dpa high enough for fuzz development usually also have a total expression level of MYB25-like_Dt and MYB25-like_At at ~0 dpa high enough for lint development.]
Table S2. Chromosomal distributions of the 5426 polymorphic SNPs between Pima S-7 and Sicala 40 based on the Cotton SNP63K array.
Table S3. Candidate regions identified based on mapping-by-sequencing (MBS) and SNP array association analysis.
Table S4. List of annotated genes in the five chromosomal regions associated with fuzz development. | 2018-04-03T01:24:24.567Z | 2018-01-17T00:00:00.000 | {
"year": 2018,
"sha1": "f17384878b8e9b58a09f99f1513ab32ee4613d8e",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jxb/article-pdf/69/5/997/25089289/erx459.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f17384878b8e9b58a09f99f1513ab32ee4613d8e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
209948167 | pes2o/s2orc | v3-fos-license | Measurement of the Zγ→νν γ production cross section in pp collisions at √s=13 TeV with the ATLAS detector and limits on anomalous triple gauge-boson couplings
The production of Z bosons in association with a high-energy photon (Zγ production) is studied in the neutrino decay channel of the Z boson using pp collisions at √ s = 13 TeV. The analysis uses a data sample with an integrated luminosity of 36.1 fb−1 collected by the ATLAS detector at the LHC in 2015 and 2016. Candidate Zγ events with invisible decays of the Z boson are selected by requiring significant transverse momentum (pT) of the dineutrino system in conjunction with a single isolated photon with large transverse energy (ET). The rate of Zγ production is measured as a function of photon ET, dineutrino system pT and jet multiplicity. Evidence of anomalous triple gauge-boson couplings is sought in Zγ production with photon ET greater than 600 GeV. No excess is observed relative to the Standard Model expectation, and upper limits are set on the strength of ZZγ and Zγγ couplings.
Introduction
The production of a Z boson in association with a photon in proton-proton (pp) collisions has been studied at the Large Hadron Collider (LHC) since the beginning of its operation in 2010 [1][2][3][4][5]. These studies have been used to test the electroweak sector of the Standard Model (SM) and to search for new physics effects, such as potential couplings of Z bosons to photons. Previous publications from experiments at LEP [6-10] and the Tevatron [11][12][13] have shown no evidence for anomalous properties of neutral gauge bosons. The set of data from the second period of the LHC operation provides the opportunity for more accurate measurements of the diboson production rate in pp collisions, and facilitates higher-precision tests of triple gauge-boson couplings (TGCs).
This paper presents a measurement of Zγ production with the Z boson decaying into neutrinos. The analysis uses 36.1 fb −1 of pp collision data collected with the ATLAS detector 1 at the LHC, operating at a centre-of-mass energy of 13 TeV. The measurements are made both with no restriction on the system recoiling against the Zγ pair (inclusive events) and by requiring that no jets with |η| < 4.5 and p T > 50 GeV (exclusive events) are present in addition to the Zγ pair.
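As a simple illustration of the inclusive versus exclusive event categories defined above, the sketch below applies the jet-veto requirement (no jet with pT > 50 GeV and |η| < 4.5). The event representation is a deliberately simplified stand-in, not the ATLAS analysis software.

```python
# Simple illustration of the inclusive vs exclusive selection defined above:
# an event enters the exclusive category only if it contains no jet with
# pT > 50 GeV and |eta| < 4.5. The event representation below (a list of
# (pT_GeV, eta) pairs) is a simplified stand-in for the ATLAS event model.

def passes_jet_veto(jets, pt_cut_gev=50.0, eta_max=4.5):
    """True if no jet exceeds the pT threshold within the |eta| acceptance."""
    return not any(pt > pt_cut_gev and abs(eta) < eta_max for pt, eta in jets)

event_jets = [(63.0, 1.2), (22.0, -3.1)]
category = "inclusive and exclusive" if passes_jet_veto(event_jets) else "inclusive only"
print(category)  # this example fails the veto because of the 63 GeV central jet
```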
The ννγ final state in the SM can be produced by a Z boson decaying into neutrinos in association with photon emission from initial-state quarks or from quark/gluon fragmentation. These processes are illustrated by the leading-order Feynman diagrams shown in figures 1(a)-(c). An example of an anomalous triple gauge-boson coupling (aTGC) of Z bosons and photons is shown in figure 1(d). Such couplings are forbidden at tree level in the SM but can arise in theories that extend the SM [14,15].
A study of the Z(νν)γ process has several advantages over processes with Z decay into hadrons or charged leptons. The channel with hadrons in the final state is contaminated by a large multijet background. A higher Z boson branching ratio into neutrinos relative to that into charged leptons provides an opportunity to study the Zγ production in a more energetic (higher E γ T ) region, where the sensitivity of this process to bosonic couplings is higher [5,16]. In addition, the neutrino channel is sensitive to anomalous neutrino dipole moments, although a higher integrated luminosity than that available to this study would be required to significantly improve upon LEP results [17,18].
The measurements of the rate and kinematic properties of the Zγ production from this study are compared with SM predictions obtained from two higher-order perturbative parton-level calculations at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) in the strong coupling constant α S , as well as with a parton shower Monte Carlo (MC) simulation. The measured Zγ production cross section at high values of photon E T is used to search for aTGCs (ZZγ and Zγγ). For these searches an exclusive selection is used, providing higher sensitivity to the anomalous couplings due to further background suppression.
ATLAS detector and experimental data set
The ATLAS detector at the LHC is described in detail in ref. [19]. A short overview is presented here, with an emphasis on the subdetectors needed for a precision measurement of the Z(νν)γ final state. The ATLAS detector covers nearly the entire solid angle surrounding the collision point. Its major components are an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic (ECAL) and hadron (HCAL) calorimeters, and a muon spectrometer (MS). The ID is composed of three subsystems. Two detectors cover the pseudorapidity range |η| < 2.5: the silicon pixel detector and the silicon microstrip tracker (SCT). The outermost system of the ID, with an acceptance of |η| < 2.0, is composed of a transition radiation tracker (TRT). The TRT provides identification information for electrons by the detection of transition radiation. The MS is composed of three large superconducting air-core toroid magnets, a system of three stations of chambers for tracking measurements, with high precision in the range |η| < 2.7, and a muon trigger system covering the range |η| < 2.4.
The ECAL is composed of alternating layers of passive lead absorber interspersed with active liquid-argon gaps. It covers the range of |η| < 3.2 and plays a crucial role in photon identification. For |η| < 2.5 the calorimeter has three longitudinal layers in shower depth, with the first layer having the highest granularity in the η coordinate, and the second layer collecting most of the electromagnetic shower energy for high-p T objects. A thin presampler layer precedes the ECAL over the range |η| < 1.8, and is used to correct for the energy lost by EM particles upstream of the calorimeter. The HCAL, surrounding the ECAL, is based on two different technologies, with scintillator tiles or liquid-argon as the active medium, and with either steel, copper, or tungsten as the absorber material. Photons are identified as narrow, isolated showers in the ECAL with no penetration into the HCAL. The fine segmentation of the ATLAS calorimeter system allows an efficient separation of jets from isolated prompt photons.
Collision events are selected using a hardware-based first-level trigger and a softwarebased high-level trigger. The resulting recorded event rate from LHC pp collisions at √ s = 13 TeV during the data-taking period in 2015 and 2016 was approximately 1 kHz [20]. After applying criteria to ensure good ATLAS detector operation, the total integrated luminosity useful for data analysis is 36.1 fb −1 . The uncertainty in the combined 2015+2016 integrated luminosity is 2.1%. It is derived, following a methodology similar to that detailed in ref. [21], and using the LUCID-2 detector for the baseline luminosity measurements [22], from calibration of the luminosity scale using x-y beam-separation scans.
Simulation of signal and backgrounds
Simulated signal and background events were produced with various Monte Carlo event generators, processed through a full ATLAS detector simulation [23] using Geant4 [24], and then reconstructed with the same procedure used for data. Additional pp interactions (pileup), in the same and neighbouring bunch crossings, were overlaid on the hard-scattering process in the MC simulation. The MC events were then reweighted to reproduce the distribution of the number of interactions per bunch crossing observed in data.
For the signal modeling Sherpa 2.2.2 [25] with the NNPDF3.0 NNLO PDF set [26] is used as the baseline event generator. The signal sample was generated with up to three additional final-state partons at leading order (LO) and up to one additional finalstate parton at next-to-leading order (NLO). Alternative signal samples, the first generated using Sherpa 2.1.1 with the CT10 PDF set [27] and the second generated using MG5 aMC@NLO 2.3.3 [28] with the NNPDF3.0 NLO PDF set and interfaced to the Pythia 8.212 [29] parton shower model, are considered for studies of systematic uncertainties. Signal samples with non-zero anomalous triple gauge-boson couplings were also generated using Sherpa 2.1.1 with the CT10 PDF set. The values of coupling constants used in the generation are chosen to be equal to the expected limits obtained in a previous ATLAS study [5].
Background events containing Z bosons with associated jets were simulated using Sherpa 2.1.1 with the CT10 PDF set, while background events containing W bosons with associated jets were simulated using Sherpa 2.2.0 with the NNPDF3.0 NNLO PDF set. For both of these processes the matrix elements were calculated for up to two partons at NLO and four partons at LO. Background events containing a photon with associated jets were simulated using Sherpa 2.1.1 with the CT10 PDF set. Matrix elements were calculated with up to four partons at LO. Background events containing a lepton pair and a photon with associated jets were simulated using Sherpa 2.2.2 with the NNPDF3.0 NNLO PDF set. Matrix elements including all diagrams with three electroweak couplings were calculated for up to one parton at NLO and up to three partons at LO.
Selection of Z(νν)γ events
The event selection criteria are chosen to provide precise cross-section measurements of Z(νν)γ production and good sensitivity to anomalous gauge-boson couplings between photons and Z bosons. The selection is optimized for obtaining a high signal efficiency together with good background rejection.
Events are required to have been recorded with stable beam conditions and with all relevant detector subsystems operational. Event candidates in both data and MC simulation are selected using the lowest-E T unprescaled single-photon trigger: this requires the presence of at least one cluster of energy deposition in the ECAL with transverse energy E T larger than 140 GeV, satisfying the loose identification criteria described in ref. [30]. The trigger efficiency is greater than 98% for photons selected for this analysis.
Object selection
Photon candidates are reconstructed [31] from ECAL energy clusters with |η| < 2.37 and E T > 150 GeV. They are classified either as converted (candidates with a matching reconstructed conversion vertex or a matching track consistent with having originated from a photon conversion) or as unconverted (all other candidates). Both kinds of photon candidates are used in the analysis. Electron candidates are reconstructed [32] from ECAL energy clusters with |η| < 2.47 that are associated with a reconstructed track in the ID with transverse momentum p T > 7 GeV. The ECAL cluster of the electron/photon candidate must lie outside the transition region between the barrel and endcap (1.37 < |η| < 1.52). Muon candidates are reconstructed from tracks in the MS that have been matched to a corresponding track in the inner detector, and are referred to as "combined muons". The combined track is required to have p T > 7 GeV and |η| < 2.7.
The shower shapes produced in the ECAL are used to identify photons and electrons. Photons are required to pass all the requirements on shower shape variables which correspond to the tight photon identification criteria [30]. The tight photon identification efficiency ranges from 88% (96%) to 92% (98%) for unconverted (converted) photons with p T > 100 GeV. A sample of "preselected" photons, used for the calculation of missing transverse momentum, are required to satisfy the less restrictive loose identification criteria of ref. [30]. Electron candidates are required to satisfy loose [32] electron identification criteria, whose efficiency is greater than 84%. Muon candidates are required to satisfy tight identification criteria as described in ref. [33], with efficiency greater than 90% for combined muons used in the selection.
Electron and muon candidates are required to originate from the primary vertex 2 by demanding that the significance of the transverse impact parameter, defined as the absolute value of the track's transverse impact parameter, d 0 , measured relative to the beam trajectory, divided by its uncertainty, σ d 0 , satisfy |d 0 |/σ d 0 < 3 for muons and |d 0 |/σ d 0 < 5 for electrons. The difference z 0 between the value of the z coordinate of the point on the track at which d 0 is defined, and the longitudinal position of the primary vertex, is required to satisfy |z 0 · sin(θ)| < 0.5 mm for both the muons and electrons.
Photon, electron and muon candidates are required to be isolated from other particles. The following criteria are used for photons: the total transverse energy in ECAL energy clusters within ∆R = 0.4 of the photon candidate is required to be less than 2.45 GeV + 0.022 · E γ T , and the scalar sum of the transverse momenta of the tracks located within a distance ∆R = 0.2 of the photon candidate is required to be less than 0.05 · p γ T . For preselected photons, isolation criteria are not applied. For muons and electrons, the isolation requirement is based on track information and is tuned to have an efficiency of at least 99% [33].
Jets are reconstructed from topological clusters in the calorimeter [34] using the anti-k t algorithm [35] with a radius parameter of R = 0.4. Events with jets arising from detector noise or other non-collision sources are discarded [36]. A multivariate combination of track-based variables is used to suppress jets originating from pile-up in the ID acceptance [37]. The energy of each jet is calibrated and corrected for detector effects using a combination of simulated events and in situ methods [38] using data collected at √ s = 13 TeV. The selected jets are required to have p T larger than 50 GeV and |η| < 4.5.
2 Each primary vertex candidate is reconstructed from at least two associated tracks with pT > 0.4 GeV. The primary vertex is selected among the primary vertex candidates as the one with the highest sum of the squared transverse momenta of its associated tracks.
The missing transverse momentum is defined as the negative vector sum of the transverse momenta of all reconstructed physics objects in the event [39] (leptons with p T > 7 GeV, preselected photons with p T > 10 GeV and jets with p T > 20 GeV), plus a "soft term" incorporating tracks from the primary vertex that are not associated with any such objects [40]. The resulting vector is denoted E miss T since it includes calorimetric energy measurements, and its magnitude E miss T is used as a measure of the total transverse momentum of neutrinos in the event.
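As an illustration of the vector sum entering this definition, the following Python sketch computes E miss T from a list of object transverse momenta and azimuthal angles. It is a minimal example that assumes a simple (pt, phi) representation of the calibrated objects and a precomputed soft term; it is not the ATLAS reconstruction.

```python
import numpy as np

def missing_transverse_momentum(objects, soft_term=(0.0, 0.0)):
    """Negative vector sum of the transverse momenta of the selected objects.

    `objects` is an iterable of (pt, phi) pairs (GeV, radians); `soft_term`
    is a precomputed (px, py) track soft term. Which objects enter the sum
    (leptons, preselected photons, jets) follows the analysis selection and
    is left to the caller.
    """
    px = sum(pt * np.cos(phi) for pt, phi in objects) + soft_term[0]
    py = sum(pt * np.sin(phi) for pt, phi in objects) + soft_term[1]
    ex_miss, ey_miss = -px, -py
    return ex_miss, ey_miss, np.hypot(ex_miss, ey_miss)

# Toy event: one photon and two jets.
print(missing_transverse_momentum([(250.0, 0.1), (60.0, 2.9), (55.0, -2.5)]))
```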
To resolve ambiguities in the object reconstruction, jet candidates lying within ∆R = 0.3 of the photon candidates are removed.
Signal region definition
The signal region (SR) is defined to have exactly one tight isolated photon, as described above. In order to reduce the contamination from events that do not contain high-energy neutrinos (mainly γ + jet background with fake E miss T from jet momenta mismeasurements) the selected events are required to have E miss T > 150 GeV. To reduce the number of W (ℓν)γ and Z(ℓℓ)γ events, a lepton veto is applied: events with any selected electrons or muons are discarded. A requirement of at least 10.5 GeV 1/2 for the E miss T significance, defined as E miss T /√(Σp jet T + E γ T ), further suppresses background contributions with fake E miss T . An additional angular separation requirement ∆φ( E miss T , γ) > π/2 is made, which suppresses the pp → W (eν) + X background. These object and event selection requirements define the reconstruction-level fiducial region and are summarized in table 1.
To simplify the interpretation of the results and comparison with theory predictions, the cross section is measured in an extended fiducial region, defined at particle level 3 in table 2. Compared with the fiducial region, the extended fiducial region removes requirements on E miss T significance, ∆φ( E miss T , γ), the lepton veto and the transition η region for photons. In the signal event selection at particle level, the E miss T significance and ∆φ( E miss T , γ) are given by p νν T /√(Σp jet T + E γ T ) and ∆φ( p νν T , γ), respectively. Photon isolation at the particle level is performed using the same requirements and cone sizes as described for the reconstruction-level isolation in section 3.1.
3 "Particle level" quantities are defined in terms of stable particles in the MC event record with a proper decay length cτ > 10 mm which are produced from the hard scattering, including those that are the products of hadronization. The particle-level jets are reconstructed using the anti-kt algorithm with a radius parameter of R = 0.4, using all stable particles except for muons and neutrinos. The particle-level jets do not include muons because jets are built from calorimeter clusters.
Background estimation
Backgrounds to the Z(νν)γ signal originate from several sources. The dominant sources (listed in decreasing order of importance) are estimated with data-driven techniques: electroweak processes such as W (ℓν)γ, where the lepton is not detected; events with prompt photons and mismeasured jet momenta that give rise to missing transverse momentum; events with real E miss T from neutrinos (such as Z(νν) or W (eν)) and misidentified photons from either electrons or jets. The procedures used to estimate these backgrounds closely follow those of the previous ATLAS measurement [5]. A less important source is ℓℓγ (mainly τ τ γ) production, which is estimated from MC simulation and is expected to contribute roughly 1% of the selected event yield. In the following, each source of background is discussed in detail together with the method used for its estimation.
Misidentified events from W (ℓν)γ production are one of the dominant background contributions. A large fraction (about 60%) of this contamination arises from W (τ ν)γ events. Photon+jets events form another sizeable background contribution to the signal region. For the estimation of these backgrounds, two control regions (CRs) are defined by selecting events with the same criteria used for the SR but requiring either exactly one charged lepton (e or µ) in the event, or requiring the E miss T significance to be less than 10.5 GeV 1/2 . The first CR is enriched with W (ℓν)γ events (about 77%) while the second CR is enriched with γ+jets events (about 55%). The use of the 1-lepton (e or µ) control region for the estimation of the W (ℓν)γ background to the signal region, where ℓ can be any of e, µ or τ , relies on the assumption of lepton flavour universality. A simultaneous fit to the background-enriched CRs is performed to allow the CR data to constrain the yield of these main backgrounds, initially estimated with MC simulation, by establishing the normalization factors for the W (ℓν)γ and γ+jets background contribution as described in refs. [5,41]. The same background normalization factors are assumed in the CR and SR and the fit uncertainties on these factors account for the uncertainty from this assumption. The normalization factor for the W γ background is found to be close to one, while the normalization factor for the γ+jets background is 1.7 ± 0.5, since the pre-fit expectation is computed at LO, for which higher-order corrections would be expected to be considerable. The pre-fit kinematic distributions of these backgrounds are taken from the MC simulation. The variations of the background yield in each bin due to each of the experimental and MC modelling uncertainties reported in section 5.1 are treated as Gaussian-distributed nuisance parameters in the likelihood function fit used to obtain the final background predictions in the SR. The dominant systematic uncertainties in the W (ℓν)γ process come from MC modelling (mostly due to the QCD scale uncertainty) and from the uncertainty in the electron-photon energy scale. Their contributions are 5.8% and 3.8%, respectively. The systematic uncertainty for γ+jets events is also dominated by the QCD scale component, and amounts to approximately 19%.
Misidentification of electrons as photons also contributes to the background yield in the signal region. The main source of this background is the inclusive W (eν) process, but contributions also arise from the single top-quark and tt production processes. The estimation of the size of these background contributions is done in two steps. The first is the determination of the probability for an electron to be misidentified as a photon using Z(e + e − ) decays reconstructed as e + γ, as described in refs. [5,41]. The probability of observing an e + γ pair with invariant mass near the Z boson mass is used to determine an electron-to-photon fake factor f e→γ . The fake factor is found to vary between 0.6% to 2.7%, depending on the photon's η and p T . The second step is the construction of a control region by applying the nominal ννγ selection criteria described in section 3, with the exception that an electron is required instead of the final-state photon, leading to a control region dominated by the W (eν)+jets process. The estimated background is then given by the number of events in the chosen control sample scaled by the electron-to-photon fake factor. The statistical uncertainty is determined by the size of the control sample and does not exceed 5%. The systematic uncertainty for this background varies from 13% to 25%, depending on the photon p T and η, and is dominated by the difference between the fake rates obtained from Z(ee) and W (eν) MC events. This source of systematic uncertainty on the fake factor is estimated from MC simulation in order to avoid double counting the uncertainty associated with the estimation of backgrounds under the Z boson mass peak in collision data. The total relative systematic uncertainty of this background estimate is less than 15%, since the main contribution comes from the most populated central pseudorapidity region and has p T < 250 GeV, where the systematics on the fake factor is the smallest.
To estimate the contribution from background due to the misidentification of jets as photons, a two-dimensional sideband method is used, as described in ref. [5]. In this method the Z(νν)γ events are separated into one signal and three control regions. Events in the signal region require the photon to satisfy the nominal photon isolation and tight identification requirements, as described in section 3. The photon isolation and identification N data (obs) 3812 2599 Table 3. Summary of observed and expected yields (all backgrounds and signal) for events passing the selection requirements in data for the inclusive (N jets ≥ 0) and exclusive (N jets = 0) selections. The W γ and γ+jet backgrounds are scaled by the normalization factor from the fit, luminosity and cross section. The e → γ and jet → γ backgrounds are estimated using data-driven techniques. The row labelled "N sig (exp)" corresponds to the Sherpa NLO prediction. The row labelled "N sig+bkg total " corresponds to the sum of the expected background contributions and expected signal. The first uncertainty is statistical, while the second is systematic.
criteria are modified in order to build the control regions, which are disjoint from each other and from the signal region. The modified photon identification criteria requires photons to pass a "non-tight" identification but fail the tight identification. The non-tight selection criteria remove requirements on four out of the nine shower shape variables required for tight photons; the variables that are removed from the list of requirements are those that are least correlated with calorimeter isolation [42]. Two of the control regions are defined by modifying either the photon isolation or photon identification requirement, while for the third control region both the isolation and identification requirements are modified. The number of background events in the signal region can be derived from the number of observed events in the control regions according to the methodology described in ref. [5]. The statistical uncertainty of the background is established by the event yields in the four regions, while the systematic uncertainty is 29% and is dominated by the size of changes to the background estimate arising from the variation of the control regions' definitions, which leads to changes exceeding the expected size of the statistical fluctuations. This systematic uncertainty also covers possible effects due to the correlation between the isolation and identification criteria.
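The counting logic of such a two-dimensional sideband can be sketched as follows. This is a generic ABCD estimate that assumes the isolation and identification variables are uncorrelated for the background and ignores the signal-leakage and correlation corrections applied in the analysis; the yields shown are placeholders, not values from the data.

```python
import math

def abcd_estimate(n_b, n_c, n_d):
    """Background in the signal region A estimated from control regions B, C, D
    as N_A = N_B * N_C / N_D, with the Poisson statistical uncertainty."""
    estimate = n_b * n_c / n_d
    rel_unc = math.sqrt(1.0 / n_b + 1.0 / n_c + 1.0 / n_d)
    return estimate, estimate * rel_unc

# Hypothetical control-region yields (not the values observed in the data).
print(abcd_estimate(n_b=120, n_c=340, n_d=2100))
```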
The resulting signal and background composition is shown in table 3. Kinematic distributions of the photon transverse energy, missing transverse momentum, and jet multiplicity in the fiducial region for the inclusive selection (N jets ≥ 0) are shown in figure 2. Kinematic distributions of the photon transverse energy and the missing transverse momentum in the fiducial region for the exclusive selection (N jets = 0) are shown in figure 3.
Good agreement between data and the SM expectation is observed in the shapes of most of the measured distributions. The discrepancy in the last bin of the inclusive E γ T distribution, which is not used to set aTGC limits, was found to be consistent with having arisen from a statistical fluctuation of the data.
The cross section for Z(νν)γ production in the extended fiducial region, defined in table 2, is calculated as σ ext.fid. = (N − B) / (A Zγ · C Zγ · ∫L dt), where N is the number of observed candidate events, B is the expected number of background events and ∫L dt is the integrated luminosity corresponding to the analyzed data set. The factors C Zγ and A Zγ correct for detection efficiency and acceptance, respectively:
• C Zγ is defined as the number of reconstructed signal events satisfying all selection criteria divided by the number of events that, at particle level, meet the acceptance criteria of the fiducial region;
• A Zγ is defined as the number of signal events within the fiducial region divided by the number of signal events within the extended fiducial region, with both numerator and denominator defined at particle level.
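A minimal numerical sketch of this extraction is given below. The correction factors and luminosity are the values quoted in the text, while the background yield is a placeholder; no uncertainty propagation or nuisance-parameter profiling is attempted.

```python
def extended_fiducial_xsec(n_obs, n_bkg, a_zg, c_zg, lumi_fb):
    """Cross section in fb: sigma = (N - B) / (A_Zgamma * C_Zgamma * L_int)."""
    return (n_obs - n_bkg) / (a_zg * c_zg * lumi_fb)

# Inclusive selection: N from table 3, A and C from table 4, L = 36.1 fb^-1.
# The background B below is a placeholder, not the fitted value.
print(extended_fiducial_xsec(n_obs=3812, n_bkg=1600.0,
                             a_zg=0.816, c_zg=0.904, lumi_fb=36.1))
```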
The corrections A Zγ and C Zγ are determined using the Zγ signal events generated by Sherpa and are summarized in table 4 along with their uncertainties.
Systematic uncertainties
Systematic uncertainties in the acceptances A Zγ are evaluated by varying the PDF sets, the value of α S , the renormalization and factorization scales (QCD scale uncertainty), and the Monte Carlo parameter tunes for the parton shower (PS) and multi-parton interactions (MPI). In total, 100 error sets are checked for the NNPDF3.0 NNLO PDF variation, leading to a relative uncertainty of 0.76% for the inclusive case and 0.35% for the exclusive case. These numbers fully cover variations arising from the use of alternative PDF sets such as CT14 [43] and MMHT2014 [44]. The uncertainty from α S is estimated by varying it within the range of its world-average value as provided in ref. [45] and is found to be negligible. The effects of the renormalization and factorization scale uncertainties are assessed by varying these two scales independently by a factor of two from their nominal values, removing combinations where the two variations differ by a factor of four, and taking the envelope of the resulting cross-section variations as the size of the associated systematic uncertainty. Uncertainties from the PS and MPI are evaluated using a series of eigentunes for the Pythia generator with its A14 parameter tune [46]. The size of the uncertainty from the renormalization and factorization scales does not exceed 3.0%, while PS and MPI uncertainties cause variations from 1.9% to 2.7% for the inclusive and exclusive cases, respectively. The total uncertainties in the acceptance factors are summarized in table 4.
Table 4. Summary of values of the correction factors (C Zγ ) and acceptances (A Zγ ) for the Zγ cross-section measurements. The uncertainty presented here includes only systematic components, since the statistical uncertainty is found to be negligible.
        N jets ≥ 0       N jets = 0
A Zγ    0.816 ± 0.029    0.952 ± 0.026
C Zγ    0.904 ± 0.031    0.889 ± 0.037
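The envelope construction for the renormalization and factorization scale variations can be sketched as follows, assuming a callable that returns the cross section for given scale factors; the toy scale dependence used in the example is purely illustrative.

```python
def scale_variation_envelope(xsec):
    """Envelope of independent mu_R, mu_F variations by factors of 0.5 and 2,
    excluding the two combinations in which the scales differ by a factor of four."""
    nominal = xsec(1.0, 1.0)
    variations = [xsec(r, f)
                  for r in (0.5, 1.0, 2.0)
                  for f in (0.5, 1.0, 2.0)
                  if r / f not in (4.0, 0.25)]
    return min(variations) - nominal, max(variations) - nominal

# Toy parametrization of the scale dependence, for illustration only.
toy = lambda r, f: 80.0 * (1.0 + 0.02 * (r - 1.0) - 0.015 * (f - 1.0))
print(scale_variation_envelope(toy))
```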
Systematic uncertainties affecting the correction factor C Zγ include contributions arising from uncertainties in the efficiencies of the trigger, reconstruction and particle identification, as well as the uncertainties in the energy, momentum scales and resolutions of the final-state objects. Additional systematic uncertainty sources arise from the modelling of particle spectra and pile-up events. Spectrum modelling uncertainties are estimated by varying the PDF set and QCD scales as described above for the case of the acceptance factor A Zγ . Some of these contributions are found to have a non-linear dependence on photon transverse energy, E miss T or jet multiplicity. In these cases, uncertainties estimated as a function of these observables are used in the unfolding process of section 5.5 when the corresponding kinematic distributions are derived from the signal sample. Table 5 displays the size of the individual contributions to the uncertainties in the C Zγ factor; the total uncertainty is summarized in table 4.
Integrated extended fiducial cross section
The measurements of the cross sections, along with their uncertainties, are based on the maximization of the profile-likelihood ratio Λ(σ) = L(σ, θ̂(σ)) / L(σ̂, θ̂), where L represents the likelihood function, σ is the cross section, and θ are the nuisance parameters corresponding to the sources of systematic uncertainty. The σ̂ and θ̂ terms denote the unconditional maximum-likelihood estimates of the parameters, i.e., the parameters for which the likelihood is maximized for both σ and θ. The term θ̂(σ) denotes the value of θ that maximizes L for a given value of σ.
The likelihood function is defined as the product of the Poisson probability of observing N events, given expectations of S for the signal and B for the background, multiplied by the Gaussian constraints on the nuisance parameters θ associated with the systematic uncertainties, with central values θ 0 from auxiliary measurements, as described in section 5.1. The measured cross sections for Z(νν)γ production in the extended fiducial region are summarized in table 6, along with the theoretical predictions of the Mcfm [47] generator described in section 5.4. The measured cross sections agree with the SM expectations to within one standard deviation. Systematic uncertainties arise from uncertainties in the acceptances and correction factors, as well as from uncertainties in the background estimates. These two sources contribute roughly equally to the uncertainty in the measured cross sections. Compared with the Zγ measurements at √ s = 8 TeV [5], the systematic uncertainty is significantly reduced. This improvement is due primarily to the reduction of systematic uncertainty allowed by the data-driven estimate of the γ+jets and W γ backgrounds.
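The structure of such a fit can be illustrated with a single-bin counting model: one Poisson term and one Gaussian-constrained nuisance parameter scaling the background. This is a schematic stand-in for the full likelihood, with placeholder inputs and none of the actual nuisance-parameter model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

def nll(params, n_obs, sig_per_fb, b_nominal, b_rel_unc):
    """Negative log-likelihood: Poisson(n_obs | S + B) x Gaussian(theta)."""
    sigma, theta = params
    expected = sig_per_fb * sigma + b_nominal * (1.0 + b_rel_unc * theta)
    if expected <= 0.0:
        return 1e12
    return -poisson.logpmf(n_obs, expected) - norm.logpdf(theta)

def two_nll_profile(sigma, n_obs, sig_per_fb, b_nominal, b_rel_unc):
    """-2 ln Lambda(sigma), profiling the nuisance parameter theta."""
    args = (n_obs, sig_per_fb, b_nominal, b_rel_unc)
    conditional = minimize(lambda t: nll((sigma, t[0]), *args), x0=[0.0])
    unconditional = minimize(nll, x0=[50.0, 0.0], args=args)
    return 2.0 * (conditional.fun - unconditional.fun)

# Placeholder inputs: expected signal events per fb of cross section,
# a nominal background yield and its relative uncertainty.
print(two_nll_profile(80.0, n_obs=3812, sig_per_fb=30.0,
                      b_nominal=1500.0, b_rel_unc=0.05))
```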
An overall check of the SM predictions is done with the Matrix generator [48]. Cross sections obtained by Matrix (inclusive case: σ ext.fid. = 78.6 ± 0.4 ± 4.4 fb; exclusive case: σ ext.fid. = 55.8 ± 0.3 ± 3.6 fb, where the uncertainties are statistical and systematic, respectively) are found to be consistent with those from Mcfm to within their statistical uncertainty. Table 6. Measured cross sections for Z(νν)γ production within the extended fiducial region for a centre-of-mass energy of √ s = 13 TeV, with corresponding SM expectations obtained from the Mcfm [47] generator at next-to-next-to-leading order in the strong coupling constant α S .
Standard Model calculations
The resulting measurement of the rate and kinematic distributions of Zγ production is compared with SM expectations using the parton shower Monte Carlo generator Sherpa and the parton-level NNLO calculations of the Mcfm [47] and Matrix [48] generators. The photon isolation criterion at the parton level is applied by considering a cone of variable opening angle ∆R (with maximum opening angle ∆R max = 0.1) centred around the photon direction, and requiring that the transverse energy flow inside that cone be always less than a given fraction of the photon p T ; this fraction is set to 0.1 when ∆R = ∆R max , and tends smoothly to zero when ∆R → 0, as described in ref. [49]. Due to this procedure, the contribution from photon fragmentation to the NNLO calculations of the Mcfm and Matrix SM predictions is zero.
Events generated with Sherpa, as described in section 2.2, are also compared with the particle-level measurements. For the NNLO parton-level predictions, parton-to-particle correction factors C * (parton→particle) must be applied in order to obtain the particle-level cross sections. These correction factors are computed as the ratios of the pp → Zγ cross sections predicted by Sherpa with hadronization and the underlying event disabled to the cross sections with them enabled. The systematic uncertainty in the correction factors is evaluated by using a signal sample from an alternative generator (MG5 aMC@NLO), taking the resulting change in C * (parton→particle) as the one-sided size of a symmetrized value for the uncertainty. This accounts for uncertainties in both the parton shower modelling and the description of the underlying event. The value of C * (parton→particle) is found to be 0.87 ± 0.04 for the inclusive predictions and 0.97 ± 0.04 for the exclusive predictions. For the exclusive case, the parton-to-particle correction includes an additional contribution from the jet veto, which compensates for the difference in the photon isolation between the parton and particle levels. The particle-level cross sections are then obtained by multiplying the NNLO parton-level cross-section values by the C * (parton→particle) correction factors, and are displayed in table 6.
The systematic uncertainty in the expected NNLO SM cross sections arising from uncertainties in the QCD scale is estimated by varying the QCD scales by factors of 0.5 and 2.0 (separately for the renormalization and factorization scales, removing combinations where the two variations differ by a factor of four). The effect of the QCD scale uncertainty on the prediction for the first bin of the various differential cross-section measurements also accounts for uncertainties arising from the incomplete cancellation of divergences associated with soft gluon emission in fixed-order perturbative calculations of Zγ production. This effect is appreciable because of the symmetric E γ T and p νν T thresholds used in defining the SR. The corresponding corrections are estimated conservatively from the cited MC generators by evaluating the degree of compensation of the divergence that arises when the p νν T (E γ T ) requirement is lowered to a value significantly below the value of the E γ T (p νν T ) requirement of 150 GeV. The systematic uncertainty due to the PDF choice is computed using the eigenvectors of the NNPDF 3.0 PDF set [26] and the envelope of the differences between the results obtained with the CT14 [43] and MMHT2014 [44] PDF sets, according to the PDF4LHC recommendations [50]. Matrix predictions do not include the systematic uncertainty due to the PDF choice.
Differential extended fiducial cross section
The measurement of the Zγ production differential cross sections allows a comparison of experimental results with SM expectations for both the absolute rates and the shapes of kinematic distributions. The measurements are performed as a function of several observables that are sensitive to higher-order perturbative QCD corrections [51] and to a possible manifestation of aTGCs [52]: photon transverse energy (E γ T ), the transverse momentum of the neutrino-antineutrino pair (p νν T ), and jet multiplicity (N jets ). The differential cross sections are defined in the extended fiducial region, and are extracted with an unfolding procedure that corrects for measurement inefficiencies and resolution effects that modify the observed distributions. The procedure described in ref. [5] is followed, using an iterative Bayesian method [53]. For each distribution, events from simulated signal MC samples are used to generate a response matrix that accounts for bin-to-bin migration between the reconstruction-level and particle-level distributions.
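A minimal version of such an iterative Bayesian update, given a response matrix and a reconstructed spectrum, is sketched below. It starts from a flat prior and omits the efficiency, fake and uncertainty treatments of the actual unfolding; the inputs are illustrative.

```python
import numpy as np

def bayes_unfold(reco_counts, response, n_iterations=4):
    """Iterative Bayesian (D'Agostini-style) unfolding.

    response[i, j] = probability for an event in particle-level bin j to be
    reconstructed in bin i. Returns the estimated particle-level spectrum.
    """
    n_truth = response.shape[1]
    truth = np.full(n_truth, reco_counts.sum() / n_truth)  # flat starting prior
    for _ in range(n_iterations):
        folded = response @ truth
        safe = np.where(folded > 0.0, folded, 1.0)
        # Posterior "unfolding matrix" M[j, i] = P(truth bin j | reco bin i).
        m = (response * truth).T / safe
        truth = m @ reco_counts
    return truth

response = np.array([[0.8, 0.1],
                     [0.2, 0.9]])
print(bayes_unfold(np.array([120.0, 80.0]), response))
```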
The statistical uncertainties of the unfolded distributions are estimated using pseudoexperiments, generated by fluctuating each bin of the observed spectrum according to a Poisson distribution with a mean value equal to the observed yield. The shape uncertainties arising from the limited size of the signal MC sample are also obtained by generating pseudo-experiments. The sources of systematic uncertainty are discussed in section 5.1, with their impact on the unfolded distribution assessed by varying the response matrix for each of the systematic uncertainty sources by one standard deviation and combining the resulting differences from the nominal values in quadrature.
The differential cross sections as a function of E γ T and p νν T are shown in figures 4 and 5, respectively, for both the inclusive and exclusive measurements. Figure 6 shows the cross section measured in bins of jet multiplicity. The values of the SM expectations shown in the figures are obtained as described in section 5.4. Good agreement with SM expectations is observed in all but the last bin of the E γ T inclusive distribution. This disagreement is a consequence of the corresponding disagreement observed in figure 2, which was investigated and found to be consistent with having arisen from a statistical fluctuation of the data.
Limits on triple gauge-boson couplings
Vector-boson couplings are completely fixed within the Standard Model by the SU(2) L ×U(1) Y gauge structure. Their measurement is thus a crucial test of the model. Any deviation from the SM prediction is referred to as an anomalous coupling.
Within the framework of the effective vertex function approach [52], anomalous triple gauge-boson coupling contributions to Zγ production can be parameterized by four CP-violating (h V 1 , h V 2 ) and four CP-conserving (h V 3 , h V 4 ) complex parameters. Here the V indices are Z and γ, and h Z i and h γ i are the parameters of the ZZγ and Zγγ vertices, respectively. The h V 3 (h V 1 ) and h V 4 (h V 2 ) parameters correspond to the electric (magnetic) dipole and magnetic (electric) quadrupole transition moments of V , respectively [54].
All of these parameters are zero at tree level in the SM. Since the CP-conserving couplings h V 3,4 do not interfere with the CP-violating couplings h V 1,2 , and their sensitivities to aTGCs are nearly identical [52], the limits from this study are expressed solely in terms of the CP-conserving parameters h V 3,4 .
The yields of Zγ events with high E γ T from the exclusive (zero-jet) selection are used to set limits on h V 3,4 . The exclusive selection is used because it significantly reduces the SM contribution at high E γ T and therefore optimizes the sensitivity to anomalous couplings. The contribution from aTGCs increases with the E T of the photon, and the measurement of Zγ production is found to have the highest sensitivity to aTGCs by restricting the search to the portion of the extended fiducial region with E γ T greater than 600 GeV. The anomalous couplings influence the kinematic properties of the Zγ events and thus the efficiency factor of the event reconstruction (C Zγ ). The maximum variation of C Zγ due to non-zero aTGC parameters within the aTGC limits measured in this paper (about 7%) is adopted as an additional systematic uncertainty. The effect of anomalous couplings on the acceptance factor (A Zγ ) and parton-to-particle factor (C * (parton→particle) ) is an order of magnitude smaller than that on C Zγ , and so is neglected.
Limits on a given aTGC parameters are extracted from a frequentist profile-likelihood test similar to that of section 5.3. The profile likelihood depends on the observed number of exclusive Zγ candidate events, the amount of expected signal as a function of aTGC given by eq. (6.1), and the estimated number of background events. A point in the aTGC space is accepted (rejected) at the 95% confidence level (CL) if fewer (more) than 95% of randomly generated pseudo-experiments exhibit larger profile-likelihood ratio values than that observed in data. In this context, a pseudo-experiment is a set of randomly generated numbers of events that follow a Poisson distribution with mean equal to the sum of the number of expected signal events and the estimated number of background events. Systematic uncertainties are incorporated into the pseudo-experiments via a set of nuisance parameters with correlated Gaussian constraints. All nuisance parameters are allowed to fluctuate in the pseudo-experiments.
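The pseudo-experiment logic can be illustrated for a single counting bin with a simplified likelihood-ratio test statistic (no nuisance parameters), as follows; all yields are placeholders and the statistic is not the profile-likelihood ratio used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def atgc_point_rejected(n_obs, s_sm, s_atgc, b, n_pseudo=20000, cl=0.95):
    """Reject the tested aTGC point if the observed likelihood ratio lies in the
    lowest (1 - cl) tail of its distribution over pseudo-experiments generated
    under the aTGC hypothesis."""
    def q(n, mu_test, mu_alt):
        # -2 ln [ L(mu_test) / L(mu_alt) ] for a single Poisson count n.
        return -2.0 * (n * np.log(mu_test / mu_alt) - (mu_test - mu_alt))
    mu_test, mu_alt = s_sm + s_atgc + b, s_sm + b
    q_obs = q(n_obs, mu_test, mu_alt)
    pseudo = rng.poisson(mu_test, size=n_pseudo)
    p_value = np.mean(q(pseudo, mu_test, mu_alt) >= q_obs)
    return p_value < (1.0 - cl)

# Hypothetical yields in the high-E_T photon search region.
print(atgc_point_rejected(n_obs=4, s_sm=3.0, s_atgc=6.0, b=1.5))
```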
No evidence of anomalous couplings is observed. The allowed 95% CL ranges for the anomalous couplings are shown for the ZZγ (h Z 3 , h Z 4 ) and Zγγ (h γ 3 , h γ 4 ) vertices. Limits on anomalous couplings imposed by this analysis are 3-7 times more stringent than those from prior studies [5].
Limits on possible combinations of each pair of aTGC parameters are also evaluated. The ellipses of 95% CL on linear combination of the pairs of anomalous couplings are shown on the (h γ 3 , h γ 4 ) and (h Z 3 , h Z 4 ) planes in figure 7, which are the only such pairs that are expected to interfere [52].
Allowed ranges are also determined for parameters of the effective field theory (EFT) of ref. [55], which includes four dimension-8 operators describing aTGC interactions of neutral gauge bosons. The coefficients of these operators are denoted C B̃W /Λ 4 , C BW /Λ 4 , C W W /Λ 4 and C BB /Λ 4 , as described in ref. [56]. The parameter Λ has the dimension of mass and is associated with the energy scale of the new physics described by the EFT. The 95% CL limits on these EFT parameters are displayed in Table 9.
Table 9. Observed and expected one-dimensional 95% CL limits on the C B̃W /Λ 4 , C BW /Λ 4 , C W W /Λ 4 and C BB /Λ 4 EFT parameters, assuming that any excess in data over the SM expectation is due solely to a non-zero value of the parameter C B̃W /Λ 4 , C BW /Λ 4 , C W W /Λ 4 or C BB /Λ 4 . For each row, all parameters other than the one under study are set to 0.
Conclusion
The cross section for the production of a Z boson in association with an isolated high-energy photon is measured using 36.1 fb −1 of pp collisions at √ s = 13 TeV collected with the ATLAS detector at the LHC. The analysis uses the invisible decay mode Z → νν of the Z boson, and is performed in a fiducial phase space closely matching the detector acceptance.
Kinematic distributions are presented in terms of differential cross sections as a function of the transverse energy of the photon, the missing transverse momentum, and the jet multiplicity. Measurements are made for both the inclusive case, with no requirements on the system recoiling against the Zγ pair, and the exclusive case in which no jets with p T > 50 GeV are allowed within |η| < 4.5.
The results are compared with SM expectations derived from a parton shower Monte Carlo generator (Sherpa) and from parton-level perturbative calculations carried out at NNLO (Mcfm and Matrix). Good agreement is observed between the measured and expected total and differential cross sections.
In the absence of significant deviations from SM expectations, the data are used to set limits on anomalous couplings of photons and Z bosons. Limits on aTGCs are determined using a modified SM Lagrangian that includes operators proportional to the h V 3 and h V 4 (V = Z or γ) parameters of the vertex function parameterization of aTGC contributions to Zγ production. The limits are also transformed into limits on the C B̃W /Λ 4 , C BW /Λ 4 , C W W /Λ 4 and C BB /Λ 4 parameters of an effective field theory formulation of aTGC effects. The limits obtained from the current study are 3-7 times more stringent than those available prior to this study. | 2020-01-08T00:05:53.917Z | 2018-12-03T00:00:00.000 | {
"year": 2020,
"sha1": "07764c6a3aadbb34bc4f4db9d5b1bb8d0e893ee8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2020)054.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1bf83a5c077f7b2be33b2166e3b2838ef0488b48",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
36446306 | pes2o/s2orc | v3-fos-license | The Bottom-Up Formation and Maintenance of a Twitter Community: Analysis of the #FreeJahar Twitter Community
Purpose – The article explores the formation, maintenance and disintegration of a fringe Twitter community in order to understand if offline community structure applies to online communities. Design/methodology/approach – The research adopted Big Data methodological approaches in tracking user-generated content over a series of months and mapped online Twitter interactions as a multimodal, longitudinal ‘social information landscape’. Centrality measures were employed to gauge the importance of particular user nodes within the complete network, and time-series analyses were used to track ego centralities in order to see if this particular online community was maintained by specific egos. Findings – The case study shows that communities with distinct boundaries and memberships can form and exist within Twitter’s limited user content and sequential policies, which, unlike other social media services, do not support formal groups, demonstrating the resilience of desperate online users when their ideology overcomes social media limitations. Analysis in this article using social network approaches also reveals that communities are formed and maintained from the bottom-up. Research limitations/implications – The research is based on a particular dataset collected within a specific time and space. However, due to the rapid, polarising group behaviour, growth, disintegration and decline of the online community, the dataset presents a ‘laboratory’ case with which many other online communities can be compared. It is highly possible that the case can be generalised to a broader range of communities and that online community theories can be proved or disproved against it. Practical implications – The article showed that a particular group of egos with high activity, if removed, could entirely break the cohesiveness of the community. Conversely, strengthening such egos will reinforce the community. The questions mooted within the paper and the methodology outlined can potentially be applied in a variety of social science research areas. The contribution to the understanding of a complex social and political arena, as outlined in the paper, is a key example of such an application within an increasingly strategic research area, and this will surely be applied and developed further by the computer science and security community. Originality/value – The majority of research covering these domains has not focused on communities that are multimodal and longitudinal. This is mainly due to the challenges associated with the collection and analysis of continuous datasets that have high volume and velocity. Such datasets are therefore unexploited with regard to cyber-community research.
Introduction
Can communities form within Twitter? The social networking and multiplatform micro-blogging service that allows a limited tweet of 140 characters is generally associated with the spread of information.
Users, however, are able to follow, or subscribe to, the posts of other users, which creates a network of followers and followees. It should not be argued that followers and followees constitute a community, as most users are largely inactive, with outbursts of retweets around certain viral news. It may be argued that the follower-followee phenomenon is at most a network with equal ties. We must however probe deeper in order to identify and isolate the potential community behaviour that the Twitter environment can support. This is important partly because many recent world events that tipped the balance of power of governments originated from coordinated activities within Twitter, and partly because of the need to understand collective behaviours in cyber-communities. Twitter as a social medium, however, is certainly a research environment (Golder & Macy, 2013) where we can explore our questions.
The Twitter service interprets keywords prefixed by a hashtag '#' as topical, and usernames are preceded by '@' in tweets. Unlike other social networks, the service does not allow formal group creation. The sequential presentation of tweets as viewed within Web browsers or on mobile devices is chronological.
Can communities form under such a limited environment? If communities are able to form, in what way do they maintain and support their existence, especially when Twitter does not allow formal groups?
The interest of this research lies in the controversial online teen #FreeJahar movement calling for the freedom of the Boston bombing suspect Dzhokhar Tsarnaev because teenage girls believe he is "too beautiful to be a terrorist" (Nelson, 2013). Concerns were raised that in on-line forums the younger Boston Bombing suspect appeared to attract a cult teen following expressing affection and concern for him (DailyMail, 2013). Facebook, Tumblr tribute accounts were set up in support of the teen with the #FreeJahar Twitter tag, which were trending when these activities first appeared. Teen activities in Twitter do need to be monitored (Wiederhold, 2012) as observed below: "I can't be the only one who finds the suspected bomber to be sexy, can I?" 19 April 2013. "i don't even care if jahar is a terrorist he's cute i don't want him to die." @******, 20 April 2013. "I'm not gonna lie, the second bombing suspect, Dzhokhar Tsarnaev, is hot. #sorrynotsorry" @******, 20 April 2013. "Yes I like Justin Bieber and I like Jahar but that has nothing to do with why i support him. I know hes innocent, he is far too beautiful" @******, 25 April 2013.
The term community has two major uses. The first is 'territorial' and 'geographical', which refers to the notion of community as neighbourhood, town and city, whereas the latter refers to the relational aspect of a community (Gusfield, 1975). Whilst geography plays an important role in the formation of social ties within online communities (Takhteyev, Gruzd, & Wellman, 2012), it is believed that online social networks such as Twitter can be highly relational. Such communities are concerned with the quality and character of human relationships (spiritual, professional, ideological, etc.) without reference to territories. It was noted early on that modern society develops community around interests and skills more than around locality (Durkheim, 1964), and the consistency of such a notion of community has been maintained in the Information Age. Community is better defined by the nature of relationships between individuals than by geographical proximity (Preece & Maloney-Krichmar, 2005).
It was noted a decade ago that "the Internet has altered our sense of boundaries, participation, and identity" (Renninger & Shumar, 2002), and much ink has been spilled on the topic of whether online communities are really communities (Bruckman, 2006). Twitter, unlike other services such as Usenet newsgroups, Internet Relay Chat (IRC), Facebook, SecondLife, etc., that allow the formal formation of communities (Wellman & Gulia, 1999), is different as it was originally created as a messaging service. It therefore may not be suitable to study Twitter using the approaches established in the literature, see (Preece & Maloney-Krichmar, 2005; Wellman & Gulia, 1999).
What then does a community look like in Twitter, if a community as we know it can form there at all? To answer this question, we must first look at communities familiar to us, i.e., those that have been formulated in the literature. A community should exhibit a 'Sense of Community' (D. W. McMillan & Chavis, 1986). Although we take a nominalist approach in defining the concepts, particularly community boundaries, in reality the automated computational approach that is employed in the research gathers datasets within the realist strategy (Laumann, Marsden, & Prensky, 1989). Considering the volume of Twitter data that this research collects, it would be extremely tedious to attempt to find a boundary using qualitative approaches; thus, a computational, quantitative approach based on social network analysis is used. Comparisons can thus be made and hypotheses tested once the data have been analysed. We may begin to see patterns of community as we explore the datasets in the subsequent sections.
Methods
As opposed to the conventional method of mapping the follower-followee network, the approach defined here practically maps the actual evolution of instantaneous activities occurring within a timescale, over a series of days encompassing both large and small events. This is more useful as activities define the true interactions of active members.
Big Data Twitter streaming software (Ch'ng, 2014) was used for mapping Twitter users (egos) and tweets as nodes, with edges representing links between egos and their tweets. Tweets are represented as nodes so that the flow of information is made obvious. Only tweets containing the keywords #Dzhokhar, #FreeJihad, #FreeJahar, #Tsarnaev are recorded. The reason for using only four keywords was that these were the keywords that were consistently used in the tweets. In fact, #FreeJahar would have been sufficient as it appeared in all of the tweets. This resulted in 60 longitudinal datasets, each containing 5 hours of continuous data, from 17 May 3.00pm to 31 May 12.05pm (15 days). The data were recorded from 17 May onwards as news of the 'movement' was not reported until then. An additional 30 days of data records the decline of the #FreeJahar activities.
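A sketch of how streamed tweets could be mapped into such a multimodal graph is shown below using networkx; the tweet fields and the overall schema are illustrative assumptions and do not reproduce the streaming software cited above.

```python
import networkx as nx

def add_tweets_to_graph(graph, tweets):
    """Add ego and tweet nodes, with edges linking each tweet to its author and
    to any mentioned egos. Field names (id, user, text, mentions) are assumed."""
    for tweet in tweets:
        t_node = "tweet:{}".format(tweet["id"])
        graph.add_node(t_node, kind="tweet", text=tweet["text"])
        graph.add_node(tweet["user"], kind="ego")
        graph.add_edge(tweet["user"], t_node)        # author -> tweet
        for mentioned in tweet.get("mentions", []):
            graph.add_node(mentioned, kind="ego")
            graph.add_edge(t_node, mentioned)        # tweet -> mentioned ego
    return graph

g = add_tweets_to_graph(nx.Graph(), [
    {"id": 1, "user": "@a", "text": "#FreeJahar", "mentions": ["@b"]},
    {"id": 2, "user": "@b", "text": "RT #FreeJahar", "mentions": []},
])
print(g.number_of_nodes(), g.number_of_edges())
```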
The file sizes of the series are shown in Figure 1. Peaks and valleys are consistent with the time of activities during the hours spanning both days and nights, except when the keywords were trending. The relative importance of nodes is measured using Betweenness, Closeness (Freeman, 1979; Newman, 2005; Sabidussi, 1966) and Eigenvector (Bonacich, 1987) centrality measures. Betweenness is a measure of information brokerage between parties; Closeness measures the spread of information from a node to all other nodes, where lower closeness implies a shorter distance to other nodes. High Eigenvector centrality indicates increased numbers of egos who were connected to important egos in the network, an indication of heightened activity.
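The three measures can be computed for each 5-hour window with standard graph libraries; the snippet below uses networkx on a small illustrative graph (the study used its own pipeline and Gephi), and the ranking logic is the only point being demonstrated.

```python
import networkx as nx

def centrality_profile(graph, top_n=3):
    """Rank nodes by the three centrality measures used in the study."""
    scores = {
        "betweenness": nx.betweenness_centrality(graph),
        "closeness": nx.closeness_centrality(graph),
        "eigenvector": nx.eigenvector_centrality(graph, max_iter=1000),
    }
    return {name: sorted(s, key=s.get, reverse=True)[:top_n]
            for name, s in scores.items()}

# Small illustrative interaction graph; in the study one graph is built for
# every 5-hour window so that ego centralities can be tracked over time.
g = nx.Graph([("@a", "t1"), ("t1", "@b"), ("@b", "t2"),
              ("t2", "@c"), ("@c", "t3"), ("t3", "@a")])
print(centrality_profile(g))
```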
Results
An analysis of the datasets shows that news events were all related to the Boston Bombers with the topics: 'triple murder', 'FBI kills man', 'Ibrahim Todashev', 'Al Qaeda Mag praises Tsarnaev brothers', 'Dzhokar Recovers', 'Mother and Father', 'Russians provides info about brothers'. Does the #FreeJahar community exist within the graph? Visualising the mapped networks will reveal this information.
Figure 2 visualises 16 graphs, 5 hours each (ranked by file size from the smallest 5708 to the largest 803, left to right, top to bottom). The graphs were reconfigured using Gephi's ForceAtlas algorithms so that connected nodes due to higher interactions, appear closer together (clusters in Figure 2). Each graph shows a different signature as they carry varying explosions of information when news went viral. Egos
The Bottom-Up Organisation of the Community
Communities do not simply form and then disintegrate. Efforts are needed to maintain the boundary and reinforce membership bonds so that the community becomes stronger over time. The limited and sequential nature of the Twitter environment makes it difficult to maintain an active community boundary; there is a higher probability of disintegration unless members assume some form of leadership.
Conversely, as the #FreeJahar group is formed from disparate actors with a strongly shared ideology, the community may be organised from the bottom-up, where all members have equal importance. This reinforces the theory that positivity and success in interactions create cohesion (Cook, 1969) and that external conflict increases internal cohesion (Stein, 1976).
Figure 6.
The centrality measures of the cluster corresponds to Leavitt's observation (Leavitt, 1951), that "where high centrality, and hence independence are evenly distributed, there will be no leader, many errors, high activity, slow organisation, and high satisfaction". The edges that play a central role in connecting the small-world network can be traced within 2 steps of egos with high Betweenness centrality within the social movement, the same agent with high Betweenness centrality makes the community cohesive. Removing these egos will invariably disrupt the entire community. Moody and White (Moody & White, 2003) observed that "a group is structurally cohesive to the extent that multiple independent relational paths among all pairs of members hold it together". In this context, removing the egos that keep the cluster alive will disrupt the community. Twitter's removal of highly active members confirmed Moody and White's concept of 'structural cohesion', defined as "the minimum number of actors who, if removed from a group, would disconnect the group". The removal of #FreeJahar members was due to the infringement of Twitter policies. As a result of the infringements, these accounts have since been suspended or deactivated. There was a Twitter post on the 30 July 2013 by one of the active members -"Aint nobody wanna #freejahar no more?" and 31 July -"Why is everyone deactivating their accounts?
This battle is just beginning! #freejahar". This member's account has also been suspended. Figure 5 presents samples from the final decline of the #FreeJahar community.
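Structural cohesion in this sense can be probed directly by removing the most central egos and counting the resulting components; the following sketch does this with networkx on a toy two-cluster graph held together by a single broker node.

```python
import networkx as nx

def fragmentation_after_removal(graph, n_remove=1):
    """Remove the nodes with the highest betweenness centrality and report the
    number of connected components of what remains."""
    between = nx.betweenness_centrality(graph)
    removed = sorted(between, key=between.get, reverse=True)[:n_remove]
    reduced = graph.copy()
    reduced.remove_nodes_from(removed)
    return removed, nx.number_connected_components(reduced)

# Two clusters connected only through a single highly active broker ego.
g = nx.Graph([("@a", "@b"), ("@b", "@c"), ("@c", "@a"),
              ("@x", "@y"), ("@y", "@z"), ("@z", "@x"),
              ("@a", "@hub"), ("@hub", "@x")])
print(fragmentation_after_removal(g))
```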
Discussion
In this article, a longitudinal Twitter dataset associated with the #FreeJahar group calling for the freedom of the Boston bombing suspect was explored. The datasets, consisting of 5-hourly tweets over 45 days mapped as a network of activities, present opportunities for discovering global behaviours from instantaneous content produced by collective social actors as they interacted desperately at the local level within the confines of a digital display.
The tracking of Twitter activities, as opposed to the follower-followee network, reveals distinct spatial expressions between tweets, retweets and conversations. Using this approach, tweet nodes and edges of a conversational nature could be identified and isolated from characteristic retweets. The tracking of multimodal connections will give us a more accurate measure of information than a follower-followee network.
A number of questions were presented at the beginning, probing the possibility of communities forming and maintained within the limited Twitter environment. Data analysis shows that communities do form within Twitter, and as a consequence, raise specific issues on coordinated behaviour and information dissemination within the social media. Twitter community differs from offline community in many ways due to the limits of the Twitter environment, the most apparent is reinforcement, and the support needed amongst members. It is not clear if online Twitter community facilitates offline gatherings, or if Twitter social ties led to other online groups (FaceBook friendships, email and phone exchanges, and etc.) as data could not be obtained. However, to this end, we are at least able to describe the nature of Twitter communities and how they are formed and maintained.
We have learned that Twitter communities are relational, formed via a common ideology and justified by validation of the ideology and the commonality of symbols. These worked together to segregate the in-groups from the out-groups. Members fulfil their needs via discussions and defended their cause against conflicts from another community, which creates internal cohesion. MacMillan and Chavis stated that, "people possess an inherent need to know that the things they see, feel, and understand are experienced in the same way by others" Such a group norm validates their experience. Influence therefore is unidirectional -members influence the group. The community is organised from the bottomup, with equal distribution of leading roles and activities over time. The eventual decline of the #FreeJahar community was due to the suspension of important egos from Twitter, resulting in the destruction of the community structure.
The #FreeJahar event is an exemplar case study that could be generalised to much broader scopes for this sort of work as it demonstrates rapid, polarising grouping, behaviour, growth, disintegration and decline of an online community. The study shows that communities with distinct boundaries and memberships can form and exist within Twitter's limited user content and sequential policies, which unlike other social media services, does not support formal groups, demonstrating the resilience of desperate online users when their ideology overcome social media limitations.
Social networks can increase our range of human connectedness beyond the boundary of users' geographical location. Communications sent now may be retrieved and responded to, much later in time.
This invariably opens up a broad range of opportunities as space and time, in the eye of a user are 'compressed' to within a digital display. The fact that communities can form where services that facilitate group formation are not supported is an interesting phenomenon to look at. It will be beneficial to collate extremely large datasets from ad-hoc communities within Twitter in the future, particularly where revolutions and socially mediated civil uprisings are concerned. | 2018-04-03T03:44:35.349Z | 2015-05-11T00:00:00.000 | {
"year": 2015,
"sha1": "5a37e4b9a97e7520fe7d7f6b3940df5eff62d0ff",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/743010/The%20Bottom%20Up%20Formation%20and%20Maintenance%20of%20a%20Twitter%20Community%20%20Analysis%20of%20the%20FreeJahar%20Twitter%20Community.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "0ec7d4073ef0f473c10bf014e4da4495249a02a1",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": [
"Sociology",
"Engineering",
"Computer Science"
]
} |
187152938 | pes2o/s2orc | v3-fos-license | Identification of Tolerant Plant Species Growing Alongside National Highway-21 in Himachal Pradesh, India
The transport sector is a catalyst for the economic development of any nation, but the rapid growth in motor vehicles poses a great threat to the environment. In India, 60–70% of air pollution is caused by the transport sector, followed by industries (Khandar and Kosankar 2014; Sisodia and Dutta 2016). Air pollution due to vehicular emissions has become one of the most serious problems in the whole world and has resulted in a huge threat to both the environment and the health of living organisms. The major pollutants emitted in the exhaust of gasoline-fuelled vehicles are CO, HC, NOx and Pb, while the pollutants from diesel-fuelled vehicles are particulate matter (including smoke), SO2 and PAHs, which play a great role in determining the ambient air quality of surrounding areas (Gidde and Sonawane 2012).
The continuous increase in vehicular pollution has deteriorated air quality and exerted a deep influence on the morphological parameters (leaf number, leaf area, stomata number, stomata structure, flowering, growth, and reproduction), biochemical parameters, such as ascorbic acid content and chlorophyll content (Sharma et al., 2017a), and physiological parameters (leaf extract pH and relative water content) of roadside plants (Seyyednejad et al., 2011; Pandit et al., 2017).
The degradation of air quality is a major environmental crisis that affects the surrounding regions, and plants offer the only suitable ecological approach to controlling air pollution. Plants are the prime receivers of different types of pollutants, acting as a green belt component and as a sink by cleaning the atmosphere (Sharma et al., 2017b). Plants clean the atmosphere through the mechanisms of absorption, adsorption, accumulation and detoxification, which not only control air pollution but also improve air quality by providing a sufficient amount of oxygen to the atmosphere (Mondal et al., 2011; Kapoor and Bhardwaj 2016). Development of a green belt by planting tolerant plant species can mitigate air pollution, prevent soil erosion and create an aesthetically pleasing environment; for this, the selection of plant species is an important factor to be considered. Screening of tolerant plant species was done on the basis of the air pollution tolerance index (APTI), which includes four parameters (ascorbic acid, total chlorophyll content, relative water content and pH).
Site description
The present study was conducted along National Highway-21 (now NH-154), also known as the Kiratpur-Nerchowk Expressway, during the year 2016-17. This highway in Northern India connects Chandigarh to popular tourist destinations such as Kullu and Manali in Himachal Pradesh, which has resulted in a heavy traffic load and consequent air pollution. In order to assess the impact of expansion and construction activities on the highway, a detailed survey was conducted and a uniform stretch from Garamoura in Bilaspur district to Nerchowk in Mandi district, under the jurisdiction of Himachal Pradesh, was selected, situated between north latitude 31º21'64" to 31º38'56" and east longitude 76º56'77" to 76º46'46".
The study area experiences a sub-tropical climate with considerable variation in seasonal temperature. In general, May-June are the hottest months and December-January the coldest in the region. The average annual rainfall is 1200 mm, the bulk of which is received during the monsoon months (June-September), with a few pre-monsoon showers during early June. The average maximum and minimum temperatures vary from 22.50 to 38.77 °C and 2.40 to 20.40 °C, respectively.
Survey
A field survey was conducted alongside NH-21 from Garamoura in Bilaspur to Nerchowk in Mandi. For studying the distribution of species, random points were selected on both sides of the highway and 10 x 10 m quadrats were laid in order to select the most commonly occurring species. In total, eleven commonly growing species were identified and selected for the study. Among these, six were trees, viz. Dalbergia sisso, Grewia optiva, Leucaena leucocephala, Toona ciliata, Morus alba and Ficus palmata, and five were shrubs, viz. Adhatoda vasica, Vitex negundo, Murraya koenigii, Carissa opaca and Debregeasia hypoleuca.
Experimental details
To complete the objectives of the present study, the eleven identified commonly growing plant species were selected along the edge of the National Highway. The influence of vehicular pollution on the selected plants was studied in the month of October. National Highway 21 was divided into four uniform segments, each of which was treated as one replication. In total there were 11 treatments (11 x 1), each replicated four times under a factorial Randomised Block Design.
Leaf analysis
To assess the impact of vehicular pollution on various biochemical parameters of the selected plant species, fully matured leaves were collected in the morning hours. The collected leaf samples were transported to the laboratory in an ice box and analysed for total chlorophyll, ascorbic acid, leaf extract pH and relative water content using the following standard procedures:
Total chlorophyll content
The leaf chlorophyll content was estimated using the method given by Hiscox and Israelstam (1979). The fresh leaves were chopped into fine pieces under subdued light and 100 mg of chopped leaf sample was placed in vials containing 7 ml of dimethyl sulphoxide. The vials were incubated at 65 °C for half an hour, the extract was transferred to a graduated test tube, and the final volume was made up to 10 ml with dimethyl sulphoxide. Optical density values of the extract were recorded on a spectrophotometer (Model Spectronic-20) at 645 nm (A645) and 663 nm (A663) against a dimethyl sulphoxide blank. The total chlorophyll content (T) accounts for the absorbances at both wavelengths and was calculated using the standard Arnon-type relation underlying this method, Total chlorophyll (mg g-1) = [20.2(A645) + 8.02(A663)] x V / (1000 x w x a), where: V = volume of extract made; a = length of light path in the cell (usually 1 cm); w = weight of the sample taken; A645 = absorbance at 645 nm; A663 = absorbance at 663 nm.
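As an illustration of this calculation, the short sketch below evaluates the Arnon-type relation for example absorbance readings; the numeric inputs are invented for demonstration and are not data from this study.

```python
# Minimal sketch of the total chlorophyll calculation (Arnon-type relation as
# used with the DMSO extraction of Hiscox and Israelstam, 1979).
# All input values below are illustrative, not measurements from this study.

def total_chlorophyll_mg_per_g(a645, a663, extract_volume_ml=10.0,
                               sample_weight_g=0.1, path_length_cm=1.0):
    """Total chlorophyll (mg per g fresh weight) from absorbance readings."""
    return (20.2 * a645 + 8.02 * a663) * extract_volume_ml / (
        1000.0 * sample_weight_g * path_length_cm)

# Example: a hypothetical leaf extract read at 645 nm and 663 nm.
print(round(total_chlorophyll_mg_per_g(a645=0.25, a663=0.40), 2), "mg g-1")
```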
Ascorbic acid
The ascorbic acid content was estimated using the A.O.A.C. (1980) method. Fresh leaves (10 g) were homogenised in metaphosphoric acid solution and the volume was made up to 100 ml. This solution was titrated against indophenol dye, with the appearance of a rose-pink colour taken as the end point. The amount of ascorbic acid in milligrams per 100 g was then calculated from the titre value and the dye factor.
Dye factor
The dye factor was determined using the method of Ranganna (2008). For estimation of the dye factor, 5 ml of the standard ascorbic acid solution was mixed with 5 ml of metaphosphoric acid solution, and this mixture was titrated with the prepared indophenol dye to a pink colour that persisted for 15 seconds. The dye factor was calculated as the amount of ascorbic acid (mg) per ml of dye, i.e. dye factor = ascorbic acid in the titrated aliquot (mg) / volume of dye used (ml).
Leaf extract pH
Recently matured leaves (5 g) were homogenised in 10 ml of deionised water and the supernatant obtained after centrifugation was collected for determination of pH using a pH meter (Model ESICO 1013) calibrated with buffer solutions of pH 4 and 9 (Barrs and Weatherly, 1962).
Relative water content
Relative water content of the samples was estimated using the method proposed by Singh (1977). Leaves were collected from the different sites in polythene bags, brought to the laboratory, rinsed thoroughly, and blotted free of excess water with filter paper.
Three steps were then followed: first, the fresh weight (FW) was obtained by weighing the fresh leaves; second, the turgid weight (TW) was obtained by placing the leaves in a water-filled Petri plate overnight and weighing the turgid leaves in the morning; and third, the turgid leaves were dried in an oven at 70 °C overnight and the dry weight (DW) was recorded. Relative water content was then computed using the standard equation RWC (%) = [(FW - DW) / (TW - DW)] x 100, where FW, TW and DW are the fresh, turgid and dry weights of the leaf samples, respectively.
Air pollution tolerance index
The air pollution tolerance index (APTI) was estimated from the four biochemical parameters, namely ascorbic acid (A, mg g-1), total chlorophyll (T, mg g-1), leaf extract pH (P) and relative water content (R, %), using the equation given by Singh and Rao (1983), commonly written as APTI = [A(T + P) + R] / 10.
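For readers who want to reproduce the index on their own measurements, the sketch below implements the RWC and APTI relations described above. The example values are invented for illustration, and the helper names are our own; this is not a published script from this study.

```python
# Minimal sketch of the RWC and APTI calculations described above.
# Example values are illustrative only; they are not data from this study.

def relative_water_content(fresh_w, turgid_w, dry_w):
    """RWC (%) = (FW - DW) / (TW - DW) * 100."""
    return (fresh_w - dry_w) / (turgid_w - dry_w) * 100.0

def apti(ascorbic_acid_mg_g, total_chlorophyll_mg_g, leaf_ph, rwc_percent):
    """APTI = [A(T + P) + R] / 10 (Singh and Rao, 1983)."""
    return (ascorbic_acid_mg_g * (total_chlorophyll_mg_g + leaf_ph)
            + rwc_percent) / 10.0

# Hypothetical leaf measurements for one species.
rwc = relative_water_content(fresh_w=2.10, turgid_w=2.60, dry_w=0.55)
print("RWC (%):", round(rwc, 1))
print("APTI:", round(apti(4.42, 3.81, 6.11, rwc), 2))
```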
Analytical tools employed
The observations recorded for the various biochemical parameters of the plant species were subjected to statistical analysis under a Randomised Block Design. Analysis of variance (ANOVA) was worked out and the critical difference at the 5 per cent level of significance was calculated as suggested by Cochran and Cox (1967).
Results and Discussion
The chlorophyll content
The plant species growing alongside the construction activities of the National Highway showed significant variations in their leaf chlorophyll content (Table 1). The highest chlorophyll content among the tree species (3.81 mg g-1) was recorded in the leaves of Toona ciliata, followed by Grewia optiva, Dalbergia sisso, Leucaena leucocephala and Ficus palmata, with respective values of 3.14 mg g-1, 3.08 mg g-1, 3.08 mg g-1 and 2.69 mg g-1, and the lowest value of 2.68 mg g-1 in Morus alba. Among the shrubs, the maximum value of 2.89 mg g-1 was observed in Adhatoda vasica, followed by Debregeasia hypoleuca, Murraya koenigii and Carissa opaca, with values of 2.20 mg g-1, 2.17 mg g-1 and 1.95 mg g-1, while the minimum value of 1.65 mg g-1 was observed in Vitex negundo.
Hence, the total chlorophyll content of different plants varies from species to species, because the photosynthetic capacity of a plant depends on leaf age, biotic and abiotic conditions, and the level of vehicular pollutants (Katiyar and Dubey 2001; Tak and Kakde 2017; Sen et al., 2017). Higher chlorophyll content indicates that a plant has a higher tolerance to air pollutants. The results are in line with the findings of Joshi et al. (1993), Santosh et al. (2008) and Ninave et al. (2001).
Ascorbic acid content
Ascorbic acid is a powerful antioxidant that maintains cell division and cell membrane stability in plants during stress conditions by scavenging cytotoxic free radicals and reactive oxygen species produced by the photo-oxidation of SO2 to SO3 (Jyothi and Jaya 2010; Sanghi et al. 2015). The plant species growing alongside the construction activities of the National Highway showed significant variations in leaf ascorbic acid content (Table 2). The ascorbic acid content in the leaves of the selected plants ranged from 2.34 to 4.42 mg g-1. The highest content among the tree species (4.42 mg g-1) was recorded in Toona ciliata, followed by Dalbergia sisso (3.90 mg g-1), Leucaena leucocephala (3.85 mg g-1), Ficus palmata (2.93 mg g-1) and Grewia optiva (2.90 mg g-1), with the lowest in Morus alba (2.61 mg g-1). Among the shrubs, the maximum value of 3.55 mg g-1 was observed in Adhatoda vasica, followed by Debregeasia hypoleuca (2.89 mg g-1), Carissa opaca (2.81 mg g-1) and Murraya koenigii (2.55 mg g-1), whereas the minimum value of 2.34 mg g-1 was recorded for Vitex negundo.
High ascorbic acid content protects plant species from the harmful effects of air pollutants (Kuddus et al., 2011), as ascorbic acid has an inherent reducing power that is defensive against air pollution. Hence, plant leaves with high ascorbic acid content are more tolerant to air pollution. The results are in line with Gholami et al. (2016) and Ogunrotimi et al. (2017).
Leaf extract pH
The leaf extract pH plays a crucial role in regulating pollution sensitivity in plants (Das and Prasad 2010). The data pertaining to the pH of the plant species are given in Table 3. A perusal of the data indicates that the selected plant species growing alongside the National Highway showed significant variation in leaf extract pH, which ranged from 5.63 to 6.11. The highest value of 6.11 among the tree species was recorded in Toona ciliata, followed by Morus alba (6.09), Leucaena leucocephala (6.04) and Dalbergia sisso. The results showed that the pH of the leaf extract was acidic, which may be due to the diffusion of gaseous air pollutants such as NO2, CO2 and SO2 into the cell sap; when plants are exposed to air pollutants (especially SO2), the cellular fluid produces abundant H+ that reacts with the SO2 entering through the stomata and intercellular spaces, generating H2SO4 and thereby lowering leaf pH. The present results are in line with GHassanen et al. (2016) and Kaur and Nagpal (2017). It has also been reported that, in the presence of an acidic pollutant (SO2 and NO2), the leaf pH is reduced, and the rate of reduction is greater in sensitive plants than in tolerant plant species (Scholz and Reck 1977).
Relative Water Content
Relative water content (RWC) is an important factor within plants that maintains their physiological balance under stressful conditions and enhances the tolerance capacity of the plants to air pollution (Veni et al. 2014; Sen et al. 2017). The data on RWC are presented in Table 4.
Table 4 note: A = ascorbic acid, R = relative water content, T = total chlorophyll content, APTI = air pollution tolerance index; *significant at p ≤ 0.05.
Figure 1. Location map of the study area.
Figure 3. Linear regression analysis of individual variables (biochemical parameters) with APTI values: a) pH vs APTI, b) relative water content (%) vs APTI, c) total chlorophyll content (mg g-1) vs APTI, and d) ascorbic acid content vs APTI.
High RWC in plants ensures the maintenance of physiological balance under stresses such as air pollution (Buchchi et al., 2013; Sharma et al., 2017a). Also, higher water content in plants can dilute acidity inside the leaf cell sap and improve drought tolerance (Palit et al., 2013; Rai et al., 2013). Therefore, it can be inferred that plant species with high RWC have a high capacity for tolerance of air pollution.
Air pollution toleration index (APTI)
The selected plant species growing alongside the national highway expansion showed significant variations in the air pollution tolerance index; shrubs, in general, were more tolerant to air pollution than trees. Higher APTI values represent the potential of plants to thrive in polluted areas and contribute as a sink for air pollutants (Joshi and Swami, 2007; Sharma et al. 2017b; Kumari and Deswal 2017). Variation in the tolerance of the plant species of a region to stress conditions has also been reported by Agbaire and Esiefarienrhe (2009), Kapoor and Bhardwaj (2016), and Pandit et al. (2017).
Linear regression analysis
The APTI of a plant species increases mainly with increases in the pH, relative water content, total chlorophyll content and ascorbic acid content of its leaves.
Therefore, a linear regression analysis was performed to ascertain which leaf biochemical parameter had the greatest influence on the APTI of the plant species. The analysis showed a significantly strong positive effect of ascorbic acid content (R2 = 0.647) on APTI. Relative to pH, relative water content and total chlorophyll content, ascorbic acid contributed the most to APTI, indicating that ascorbic acid content is the key parameter for screening out the most tolerant plant species (Figure 3).
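As a worked illustration of this kind of regression, the sketch below fits a simple least-squares line of APTI against ascorbic acid content using scipy; the data points are invented placeholders, not the values measured in this study.

```python
# Minimal sketch: simple linear regression of APTI on ascorbic acid content.
# The arrays below are illustrative placeholders, not the study's measurements.
import numpy as np
from scipy import stats

ascorbic_acid = np.array([2.34, 2.55, 2.81, 2.90, 3.55, 3.90, 4.42])  # mg g-1
apti_values   = np.array([8.1, 8.6, 9.0, 9.3, 10.2, 10.9, 11.9])

fit = stats.linregress(ascorbic_acid, apti_values)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}")
print(f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```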
A significant positive correlation at p ≤ 0.05 was also observed between APTI and total chlorophyll content in the case of Ficus carica (r = 0.877) and Murraya koenigii (r = 0.729).
Overall, the analysis indicated that each parameter plays a significant role in the evaluation of the air pollution tolerance index of a particular plant species; in particular, the ascorbic acid content and relative water content of the leaves had an especially strong effect on resistance to stress conditions.
In conclusion, the present study showed that the tree species Toona ciliata and Leucaena leucocephala and the shrubs Adhatoda vasica and Murraya koenigii emerged as tolerant species growing alongside NH-21. The air pollution tolerance index increased with increases in biochemical parameters such as total chlorophyll content, ascorbic acid, relative water content and pH, and these plant species are therefore recommended for plantation alongside the national highway falling in the sub-tropical region of Himachal Pradesh. | 2019-06-13T13:17:50.091Z | 2018-06-10T00:00:00.000 | {
"year": 2018,
"sha1": "516f5b0487e910bd36cf8bb556630a22e7176528",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/7-6-2018/Abhay%20Sharma,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "beaed67a9411f9023dce1b0a8896a33125906252",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
40642489 | pes2o/s2orc | v3-fos-license | Controlling Morphological Parameters of Anodized Titania Nanotubes for Optimized Solar Energy Applications
Anodized TiO2 nanotubes have received much attention for their use in solar energy applications including water oxidation cells and hybrid solar cells [dye-sensitized solar cells (DSSCs) and bulk heterojunction solar cells (BHJs)]. High surface area allows for increased dye adsorption and photon absorption. Titania nanotubes grown by anodization of titanium in fluoride-containing electrolytes are aligned perpendicular to the substrate surface, reducing the electron diffusion path to the external circuit in solar cells. The nanotube morphology can be optimized for the various applications by adjusting the anodization parameters, but the optimum crystallinity of the nanotube arrays remains to be realized. In addition to morphology and crystallinity, the method of device fabrication significantly affects photon and electron dynamics and the energy conversion efficiency. This paper provides the state-of-the-art knowledge needed to achieve experimental tailoring of morphological parameters including nanotube diameter, length, wall thickness, array surface smoothness, and annealing of nanotube arrays.
Introduction
Ordered TiO2 nanostructures, including nanoparticles, nanotubes, and nanorods [1,2], have garnered much research for their use in solar energy applications [3-5]. In hybrid solar cells, the titania nanostructures accept electrons from photoexcited dye molecules or polymers adsorbed to the surface and direct the electrons into an external circuit. In photoelectrochemical cells for the degradation of pollutants or the oxidation of water, the photoexcited titania nanostructures donate electrons or holes to chemical species adsorbed to the surface. TiO2 nanotubes have also been experimentally applied as gas sensors and supercapacitors, but these applications will not be discussed [6,7].
In 1999, Zwilling et al. reported on the anodization of titanium in solutions of fluoride-containing electrolytes to form porous titania nanotubes, and Gong et al. later formed nanotubes using higher voltages (Figure 1) [8,9]. Although titania nanotubes can also be formed by other routes [10], the anodization method leads to an aligned array with an adjustable morphology that can be optimized for its various applications. The morphology parameters, e.g., nanotube length, diameter, and smoothness, depend on the anodization conditions, such as voltage, electrolyte composition, temperature, and duration. After anodization, the amorphous nanotubes can be annealed to increase the electron mobility, sensitized with dyes or polymers to increase solar photon absorption, and doped or surface-functionalized to adjust the density of states [11-13].
Since Honda and Fujishima reported water oxidation by titania thin films in 1972, titania nanoparticles, nanorods and nanotubes have been investigated [15-18]. Due to their hollow nature, nanotubes have twice the surface area per unit volume compared to nanoparticles and nanorods that have the same outside diameter as the nanotubes. Recently, Zhang and Wang fabricated a photoelectrochemical cell for water splitting that achieved a photoconversion efficiency of 0.84% under AM 1.5 illumination using titania nanotubes without any catalysts [18].
Hybrid solar cells with titania nanotubes, illustrated in Figure 2, have several advantages over other nanostructures and planar solar cells. Nanotubes, which are aligned perpendicular to the conducting substrate, increase electron mobility within the nanotube by directing electrons along a shorter path than nanoparticles [19,20]. The high surface area of nanotubes, compared to nanorods or flat surfaces, allows for more adsorption by electron donors such as molecular dyes and polymers, thus increasing solar photon absorption and charge collection [21]. Commonly used donors include ruthenium polypyridyl complexes (N719, N749), porphyrin dyes, poly(3-hexylthiophene), and poly(p-phenylene vinylene) derivatives [22-25]. Although titania nanotubes have attracted extensive research as photoanodes in hybrid solar cells, several complications need to be overcome, including phase separation between electron donors and titania, polymer penetration into the nanotubes, and efficient electrical contact with conductive glass [20,26].
Anodized Titania Nanotube Formation
The formation of titania nanotubes by potentiostatic anodization proceeds by similar mechanisms as porous alumina [27,28]. In the first step of the anodization process, the titanium surface is electrochemically oxidized, and a compact layer of titanium oxide is formed on the titanium surface through Equation (1) [27-29].
Pitting of the oxide layer provides preferential locations for the field-assisted chemical dissolution of TiO2 by fluoride ions through Equations (2) and (3) [27,29]. Nanotubes are formed as the pits are chemically dissolved further into the oxide layer; the pits provide the least resistive route for the current, so the high dissolution rate forms the inside of the tubes from the pits. To form highly ordered nanotubes, the first nanotube array is often removed from the titanium foil, leaving indentations that facilitate the pitting behavior during re-anodization (Figure 3) [30]. During the formation, the current typically behaves as illustrated in Figure 4. As the voltage increases to its set magnitude, the current increases (Region I) until the oxide layer provides enough resistance and the current decreases (Region II). The current increases again as Equation (2) begins to increase the surface area and thin the oxide layer. The oxidation continues and a steady state between Equations (1)-(3) is reached in Region III (Figure 4) [3,28]. Wang et al. recently published on the formation of metal oxides by anodization and analyzed the formation thermodynamically and mechanistically [27]. Illustrations reprinted from [31]. Creative Commons 2010.
Control of Morphology
Many experimental conditions of the anodization process are controlled to form nanotubes with the desired morphology: duration, applied voltage, temperature, Ti foil roughness, and electrolyte composition. While the duration, voltage, and fluoride concentration primarily control the nanotube length, diameter, and growth rate, many characteristics of the electrolyte affect the process, including the solvent, water content, pH, viscosity, conductivity, and organic additives. Table 1 shows the range of conditions that have been used to form titania nanotubes. Table 1. Range of conditions used to control the nanotube morphology. Electrolyte age refers to its previous use for titanium anodization.
Nanotube Length
It is well established that the titania nanotube growth rate is directly proportional to the duration of anodization [3,41-45], the concentration of fluoride ions [3,4,43,45,46], the voltage (Figure 5) [3,43-47], and the electrolyte conductivity [40,46,48]. Aqueous electrolytes limit the nanotube length to 500 nm for acidic and 2 µm for neutral electrolytes, since the rate of Equation (3) is faster in aqueous electrolytes [3]. The longest nanotubes reported, 1 mm, required nine days of anodization at 60 V with 0.5 wt % NH4F and 3% water in ethylene glycol [37]. With the same electrolyte concentration and voltage, 5 µm long nanotubes are obtained after 17 h [46]. Nanotube formation requiring long anodization durations would not be time-efficient on an industrial scale, and several routes have addressed this concern and achieved fast growth rates. Most notably, nanotubes 7 μm long have been grown in 15 s by the addition of 1.5 M lactic acid to the electrolyte solution [40]. Fast nanotube growth rates are also determined by a balance between water and fluoride concentration. High fluoride concentration (>0.1 M NH4F) enhances the dissolution of TiO2 by Equation (3), while the addition of water allows for a sufficient rate of titanium oxidation by Equation (1). However, the addition of water also slows the dissolution of titania by Equation (3), as supported by experimental results. Upon addition of 1% water to 0.5 M NH4F in anhydrous ethylene glycol, the growth rate increased from 83 nm/min to 308 nm/min at 60 V, but 2% water only increased the formation rate to 217 nm/min [43]. Table 2. Experimental conditions to achieve efficient lengths.
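To illustrate how these reported growth rates translate into nanotube length, the sketch below does the simple rate-times-duration arithmetic for a constant growth rate; assuming a strictly constant rate over the whole anodization is a simplification used here purely for illustration.

```python
# Minimal sketch: estimating nanotube length from a constant growth rate.
# Real growth rates vary during anodization; a constant rate is assumed here
# only for illustration, using rates quoted in the text for 60 V in ethylene
# glycol with 0.5 M NH4F and different water contents [43].

def estimated_length_um(growth_rate_nm_per_min, duration_min):
    return growth_rate_nm_per_min * duration_min / 1000.0

for label, rate in [("anhydrous", 83.0), ("1% water", 308.0), ("2% water", 217.0)]:
    length = estimated_length_um(rate, duration_min=60.0)  # one hour
    print(f"{label}: ~{length:.1f} um after 1 h at {rate} nm/min")
```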
Liu et al. used a theoretical model, based on the reduction of oxygen, to determine the most efficient dimensions of un-sensitized titania nanotubes for photocatalysis [49]. Based on the diffusion of oxygen and the molar absorptivity of TiO2, the photocatalytic efficiency plateaus for nanotubes greater than 5 µm long (Figure 6) [49]. Experimental results show similar saturation behavior, albeit at longer nanotube lengths [50-53]. For example, un-sensitized nanotube arrays 12 µm long were most efficient for the degradation of gaseous benzene and toluene when compared to nanotubes ranging from 800 nm to 12 µm in length [50]. Similarly, for the catalysis of acetaldehyde and phenol, the efficiency continued to increase with nanotube length over the range 200 nm to 17 µm [51,52]. In solar cells using titania nanotubes as the photoanode, there is a balance between absorbing the most photons and reducing the distance the electron must travel in the nanotubes. In accordance with the Beer-Lambert law, more photons are harvested with longer nanotubes that can adsorb more dye or polymer (Figure 4). However, longer nanotubes have more recombination centers, higher series resistance, and lower open circuit potentials [54,55]. Thus, to ensure efficient electron collection, optimized nanotube lengths do not exceed the electron diffusion length, estimated to be 10-100 µm in titania nanotubes and 10 µm in nanoparticles [55-58].
Experimental results demonstrate the nanotube length optimization, since the efficiency of hybrid solar cells employing nanotubes decreases after a certain length is exceeded (Figure 7). Park et al. found that the photocurrent density increased with increasing TiO2 nanotube length up to 35 µm and attributed the effect to the higher surface area for dye-loading [59]. Dubey et al. found that the photocurrent density and energy conversion efficiency were maximal for 22 µm long nanotubes (16.3 mA/cm2, 6.12%) and decreased at 38 µm because of increased recombination at surface defects [60].
Diameter and Wall Thickness
Although great control over the nanotube length has been demonstrated, primarily adjusted by the anodization duration, less systematic control over the nanotube diameter and wall thickness has been shown. The nanotube diameter mostly varies with the anodization voltage (Figure 8) [3,29,41,44-46,61], but also varies with the solvent [29,32,62], duration [41,46], water content [38] and fluoride concentration [11,45]. Nanotubes with inner diameters ranging from 15 nm to 709 nm have been synthesized by anodization by adjusting the voltage, solvent, and duration, as seen in Table 3 [32,62]. Table 3. Experimental conditions to achieve nanotubes with different inner diameters.
where W is the wall thickness and D is the inner diameter. Based on the theoretical model used by Liu et al., 20 nm is the most efficient inside diameter for photocatalysis of gaseous reactants (Figure 6), and 20-30 nm is the most efficient wall thickness [49].
In hybrid solar cells, the high surface area for contact between the electron donor and acceptor (titania) reduces the distance excitons must travel before electrons are collected at the donor-acceptor interface. Thus, excitons are less likely to decay and a higher incident-photon-to-current efficiency (IPCE) is expected, compared to planar solar cells of the same thickness. Exciton diffusion lengths for commonly used organic polymer sensitizers are 8-20 nm for poly(3-hexylthiophene) (P3HT) [22], 20 nm for the poly(p-phenylene vinylene) derivative MEH-PPV [24], and 14 nm for ladder-type poly(p-phenylene) [64]. Correspondingly, Ghicov et al. found that the nanotube diameter in DSSCs correlated with the solar cell's short circuit current and photoconversion efficiency, which is attributed to higher dye-loading due to higher surface area ( Figure 10) [54]. However, for polymer sensitizers, small diameter nanotubes that have high porosity present an issue for polymer packing within the nanotubes. In the bulk, exciton mobility is enhanced by π-π stacking which allows excitons to delocalize over multiple polymer chains [65]. However, in confining nanotubes, disordered configurations are favored and π-π stacking is largely prevented [65]. Although So et al. report that changing the nanotube diameter within 100-200 nm has no significant effect on the solar cell efficiency, 100 nm diameter nanotubes are more efficient at certain lengths ( Figure 10) [36,54].
Nanotube Roughness and Intertube Spacing
Titania nanotubes with smooth walls are grown using viscous solvents for anodization. Solvents with high viscosity (Table 4) reduce the mobility of fluoride ions and other ionic species, reducing the growth rate but also reducing current fluctuations and therefore forming smoother nanotube walls [29,34,43,66]. Diffusion through a fluid is inversely proportional to the viscosity of the fluid according to the Stokes-Einstein equation, D = kBT / (6πηr), where D is the diffusion coefficient of a particle with radius r in a fluid with viscosity η at temperature T, and kB is the Boltzmann constant [14,29]. Current fluctuations during the anodization process result from local inhomogeneities in the concentration of ionic species, which cause rough nanotube walls to form [67,68]. Spacing between nanotubes, on the order of 100 nm, has been achieved by increasing the fluoride concentration and using diethylene glycol. Nanotube arrays grown in diethylene glycol electrolytes have intertube space but require 48 h of anodization to reach 7-20 μm in length [34]. Nanotubes spaced almost 1 µm apart have also been obtained by increasing the HF concentration to 4 wt % in ethylene glycol [70].
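To give a feel for the magnitudes involved, the sketch below evaluates the Stokes-Einstein relation for a small ion-sized particle in water versus ethylene glycol; the radius and viscosity values are rough illustrative figures chosen for demonstration, not values taken from this review.

```python
# Minimal sketch: Stokes-Einstein diffusion coefficient D = kB*T / (6*pi*eta*r).
# Viscosities and the ionic radius below are rough illustrative values,
# not numbers reported in this review.
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(temp_k, viscosity_pa_s, radius_m):
    return KB * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

radius = 0.2e-9  # ~0.2 nm, roughly the size of a small ion
for solvent, eta in [("water", 0.89e-3), ("ethylene glycol", 16e-3)]:
    d = stokes_einstein(298.15, eta, radius)
    print(f"{solvent}: D ~ {d:.2e} m^2/s")
```

The roughly twenty-fold drop in D from water to a viscous glycol illustrates why viscous electrolytes suppress ion mobility and current fluctuations during anodization.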
Well-ordered nanotube arrays with smooth walls and no intertube contact enhance electron transport by directing the injected electrons toward the conducting substrate and preventing undesired lateral transport between nanotubes [71]. Also, dyes or polymers can be adsorbed to the outside of the nanotubes if they are spaced apart, fully utilizing the available surface area. Intertube spacing smaller than that currently obtained (100 nm to 1 μm) should be more efficient by maximizing the number of nanotubes per unit area. However, no studies on the solar cell or photocatalytic efficiencies of nanotube arrays with intertube space have been published.
Control of Crystallinity
Amorphous titania nanotubes have minimal use in solar energy applications due to the high concentration of recombination centers (Table 5, "unannealed") [11,44,72]. The anatase crystalline phase of titania is favored due to its higher electron mobility and larger surface area compared to the rutile phase [1,10,73,74]. Although titania nanotubes transform to mostly rutile when annealed at 750 °C, the nanotubes collapse at that temperature, preventing an experimental comparison between pure rutile and pure anatase nanotubes [44]. Rather, amorphous titania nanotubes are typically annealed at 450 °C in various atmospheres to form the anatase phase [55]. By annealing the nanotubes at temperatures between 450 °C and 750 °C, a mixture of anatase and rutile is formed [44].
The annealing atmosphere affects the anatase-to-rutile phase transformation and the oxygen vacancies and other defects referred to as Ti+3 states, which lead to different recombination mechanisms and kinetics [10,11,75]. Ti+3 states create an impurity band in the titania nanotubes and limit electron transport, but the number of Ti+3 states can be reduced by annealing the nanotubes in an oxygen-rich atmosphere [76]. Dry atmospheres inhibit the transformation of anatase to rutile in the nanotube walls, while the interfacial region between the nanotubes and the Ti foil substrate transforms to rutile even at 430-450 °C, which may give the false indication that rutile is present throughout the nanotube array [10,11].
Although uncollapsed pure rutile nanotubes have not been studied, mixed-phase nanotubes have been shown to be more efficient for photocatalysis than pure anatase nanotubes. The photocatalytic degradation of methyl orange using titania nanotubes is enhanced with rutile/anatase mixing by annealing the nanotube array at 550 °C [77]. Likewise, photodegradation of toluene and rhodamine B by titania powder and nanofibers, respectively, is enhanced when both rutile and anatase are present (3/97 wt %) by using calcination temperatures ≥600 °C [78,79]. Water oxidation is also most efficient after annealing nanotubes at 580 °C, where both phases are present [80]. To explain the mixed-phase phenomenon, Li et al. proposed that the rutile crystals provide electron trapping sites that extend the lifetime of photo-generated electron-hole pairs (Figure 11) [78]. However, Richter et al. found that rather than increasing the electron lifetime, the calcination temperature reduces the number of exciton-like trap states from oxygen vacancies, therefore improving electron transport [60]. Ghicov et al. attributed the mixed-phase phenomenon to increased crystallization at 600 °C, which minimized the number of recombination centers by reducing the amount of grain boundaries and amorphous TiO2 [11]. Figure 11. Proposed schematic illustration of the band structure related to the photocatalytic mechanism of an un-sensitized mixed-phase TiO2 structure. Reprinted with permission from [78]. Copyright 2011 Wiley.
Although there are many studies on un-sensitized crystalline nanotubes, systematic studies of crystallinity in titania nanotube solar cells are lacking. However, the crystallinity of nanoparticles has been studied. Anatase nanoparticle films have a higher energy conversion efficiency (21%), photocurrent (30%), and electron diffusion length (10×) than rutile nanoparticle films of the same thickness [81]. The difference was partly attributed to the increased interparticle contact and dye loading from the higher surface area of the anatase nanoparticles [81].
Solar Cell Fabrication
Hybrid solar cells benefit from front-side illumination, where light is incident on the transparent conducting oxide and immediately reaches the sensitized TiO2 nanotube array (Figure 12b,c) [31,48]. In this orientation, reflection and absorption of light by the counter electrode and electrolyte are avoided. Nanotube arrays left on the titanium foil substrate can only be used in the less efficient back-side illuminated solar cell configuration since the foil is opaque (Figure 15a) [31,48]. Two routes have been used to fabricate front-side illuminated solar cells with TiO2 nanotube arrays: (1) transferring the nanotube array from the titanium foil to fluorine-doped tin oxide coated glass (FTO glass) [48,59] and (2) anodizing a film of titanium sputter-coated onto FTO glass [29].
Removing the Array
Titania nanotube arrays have been removed from the Ti foil by dissolution in a bromine/methanol solution [82], aqueous HCl [59,83], solvent evaporation of methanol [84], ultrasonication in water [46], acetone [85] or ethanol/water solutions [86], and drying in air [31]. After removing the nanotube array, it can be attached to FTO glass with a few drops of 100 mM titanium isopropoxide or a layer of TiO2 nanoparticle paste 3 µm thick and then annealed (Figure 13d) [31,59,60,78]. Dubey et al. enhanced the adhesion by placing a 100 g weight onto the nanotube-FTO glass assembly in a freezer [60].
Anodizing on Conductive Substrates
Titania nanotubes have been grown by anodization of titanium films sputter-coated on alumina, indium-doped tin oxide [88] coated polyethylene terephthalate (ITO PET) [29], and fluorine-doped tin oxide coated glass [29,48]. Titania films 0.5 to 20 µm thick have been coated onto the conducting substrates by RF or DC magnetron sputtering and subsequently anodized in fluoride-containing electrolytes to form nanotubes [29]. To improve the adhesion of titanium films to FTO glass, the glass is heated to 45-400 °C before deposition [29,48] and bombarded with Ar+ during deposition of the titanium films [48]. By bombarding the titanium film with ions during deposition, weakly bound titanium atoms are removed, leaving titanium atoms that are strongly bound to the substrate.
Removal of Barrier Layer
The closed ends of the nanotubes (barrier layer), originally attached to the titanium foil, hinder light absorption in front-side illuminated hybrid solar cells [31,87]. Although nanotubes grown directly on conductive substrates suffer from light reflection by the barrier layer, the barrier layer can be removed from nanotube arrays transferred from titanium foil. After removing the nanotube array from the titanium foil, the barrier layer can be removed by HF etching [31] or ion milling [87], similar to ion milling of carbon nanotubes [89]. In the ion milling technique, the barrier layer is bombarded with Ar+ and removed by the sputtering process.
The barrier layer thickness decreases with increasing argon ion milling duration (0-90 min), and the barrier layer is perforated after 90 min, as seen in Figure 14d [87]. Under backside illumination of the ion-milled N-719 sensitized solar cell, the photocurrent and energy conversion efficiency increased by 46% and 48% to 7.85 mA/cm2 and 3.7%, respectively, after 90 min of ion milling the nanotubes [87]. From electrochemical impedance spectroscopy measurements, Rho et al. determined that the barrier layer contributes to transport resistance in the nanotubes (Figure 15) [87]. Since the open-circuit
Conclusions
The extensive research on anodized titania nanotube arrays has led to steady improvements in morphological control [3,25]. A wide range of nanotube array dimensions have been grown and tested in various solar energy conversion applications for optimum performance (Table 1). Great advances have been made in controlling the nanotube length since the first anodized titania nanotube report, but systematically controlling the nanotube diameter and wall thickness to the narrow dimensions that are theoretically efficient requires continued research. Considering that the titania nanotubes' crystallinity drastically affects their photoconversion efficiency and electron dynamics, the electron behavior in mixed-phase nanotubes requires attention to resolve disagreement in the literature [11,76,78]. Studies on hybrid solar cells with mixed-phase titania nanotubes may contribute to the understanding of the mixed-phase phenomenon in un-sensitized nanotubes.
Further optimization and characterization of the attachment of nanotube arrays to conductive substrates could benefit electron transport by reducing transport resistance between the phases [29,41,60]. For BHJs, improving polymer π-π packing and preventing phase separation with titania are needed to fully utilize the surface area available in the nanotubes and enhance photoconversion efficiencies [21]. | 2016-03-01T03:19:46.873Z | 2012-10-01T00:00:00.000 | {
"year": 2012,
"sha1": "f7584b9eda1c4a6aaaba6de9e68b0c32ca9b7ae3",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1996-1944/5/10/1890/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "663c3e1cd9d3f95416f0e8c482318973e7d6f10c",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
267720759 | pes2o/s2orc | v3-fos-license | Exploring the role of ubiquitin regulatory X domain family proteins in cancers: bioinformatics insights, mechanisms, and implications for therapy
The UBXD family (UBXDF), a group of proteins containing ubiquitin regulatory X (UBX) domains, plays a crucial role in the imbalance between proliferation and apoptosis in cancer. In this study, we summarised bioinformatics evidence from multi-omics databases together with the literature on UBXDF's effects on cancer. Bioinformatics analysis revealed that Fas-associated factor 1 (FAF1) has the largest number of gene alterations in the UBXD family and has been linked to survival and cancer progression in many cancers. UBXDF may affect the tumour microenvironment (TME) and drug therapy and should be investigated in the future. We also summarised the experimental evidence for the mechanism of UBXDF in cancer, both in vitro and in vivo, as well as its application in clinical and targeted drugs. We compared bioinformatics findings with the literature to provide a multi-omics insight into UBXDF in cancers, review the evidence for and mechanisms of UBXDF effects on cancers, and outline future research directions in depth. We hope that this paper will be helpful for directing cancer-related UBXDF studies. Supplementary Information: The online version contains supplementary material available at 10.1186/s12967-024-04890-9.
Introduction
Ubiquitin is a small protein found in all eukaryotic organisms (most eukaryotic cells). It regulates the function of proteins in a significant way, and ubiquitin disorders can lead to a variety of human diseases [1]. Irwin Allan Rose, Aaron Ciechanover, and Avram Hershko were awarded the 2004 Nobel Prize in Chemistry for their discovery of ubiquitin-regulated protein degradation [2]. The UBXD family (UBXDF) is a group of proteins containing ubiquitin regulatory X (UBX) domains. According to sequence similarity outside the UBX domain, these proteins are categorised as members of evolutionarily conserved subfamilies [3].
Bioinformatics evidence was obtained from public databases. Structures of protein domains were collected from cBioPortal [13] and GEPIA2 [14]. The Cancer Genome Atlas (TCGA) clinical and transcriptome raw data were collected from UCSC [15,16]. The complete terms and abbreviations of TCGA cancer types are listed in Additional file 1: Table S1. Staining and protein expression levels of the UBXDF in tumour cells were collected from the Human Protein Atlas (HPA) [17]. Immune cell infiltration was estimated by ImmuCellAI [18] and GSCA [19] with TCGA data. GSCA [19] and CellMiner [20] were utilised to assess the impact of UBXDF on tumour drug sensitivity. Additional file 1 offers comprehensive data and methods. To compile the literature evidence, English-language PubMed articles published before November 2022 were gathered using the keywords (("UBXD" OR "UBXN6" OR "UBXN4" OR "UBXN10" OR "UBXN2A" OR "UBXN11" OR "UBXN8" OR "UBXN7" OR "FAF2" OR "ASPSCR1" OR "NSFL1C" OR "UBXN2B" OR "FAF1" OR "UBXN1") AND ("Cancer" OR "Tumour")). We then outlined the preclinical in vitro, in vivo, and clinical evidence of UBXDF impacts on tumours, in addition to UBXDF molecular mechanisms and UBXD-related tumour drugs. In this paper, the UBXD proteins and their roles and related mechanisms in cancer are reviewed systematically, and the application of UBXD proteins in cancer is thoroughly interpreted, providing new ideas and directions for antitumour drug targets.
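A literature search like this can be reproduced programmatically; the sketch below queries PubMed for the same Boolean keyword string via Biopython's Entrez utilities. The e-mail address and the date cut-off handling are illustrative assumptions, and the returned hit count will differ from the authors' manual screening.

```python
# Minimal sketch: counting PubMed hits for the UBXD keyword query via NCBI Entrez.
# The e-mail address is a placeholder; NCBI requires a real one for API use.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder, replace with your own

query = (
    '("UBXD" OR "UBXN6" OR "UBXN4" OR "UBXN10" OR "UBXN2A" OR "UBXN11" OR '
    '"UBXN8" OR "UBXN7" OR "FAF2" OR "ASPSCR1" OR "NSFL1C" OR "UBXN2B" OR '
    '"FAF1" OR "UBXN1") AND ("Cancer" OR "Tumour")'
)

# Restrict to articles published before November 2022, mirroring the review's cut-off.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1900/01/01", maxdate="2022/10/31", retmax=0)
record = Entrez.read(handle)
handle.close()
print("PubMed hits:", record["Count"])
```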
Protein domain structures of UBXD
Mammalian cells contain 13 members of the UBXDF, which are grouped according to their ubiquitin-related protein motifs: 8 members in the UBX (ubiquitin regulatory X) group and 5 members in the UBA (ubiquitin-associated)-UBX group (Fig. 1A) [21]. In the UBX group, the UBX domain is the only ubiquitin-related domain [22]. UBXD9 differs from other UBXDF members in the UBX group because it has two UBX domains [23]. In addition to the UBX domain, the N-terminus of the protein contains another domain [24,25]. In the UBA-UBX group, the UBA domain is found at the N-terminus of members, whereas the UBX domain is located at the C-terminus. Some members of the UBA-UBX group have additional ubiquitin-related domains such as UIM (ubiquitin-interacting motif) and UBL (ubiquitin-like) [22].
Trimerisation of p47 (UBXD10) occurs at its central SEP (Shp1, eyes-closed, p47) domain, and the UBX domain at its C-terminus defines the p47 subfamily. The authentic p47, which has a UBA domain at its N-terminus, belongs to this subfamily [26,27]. In addition, the relatively distant socius protein shares homology with other p47 members in the UBX and SEP domains. P47 often acts as an adapter in the homotypic membrane fusion mediated by the AAA ATPase (ATPases associated with various cellular activities) p97/VCP. The UBX domain of p47 interacts directly with p97/VCP, thus mimicking the ubiquitinated substrate. A large part of the function of other family proteins is unknown, although the UBX-containing protein p47 has been reported to bind to the AAA ATPase Cell Division Cycle 48 (CDC48)/p97 [28]. Recently, it was recognised that UBX proteins usually act as cofactors of p97 and that a second p97-binding site lies at the C-terminus of the SEP domain [29]. The FAF1 (UBXN3A) subfamily is distinguished by a thioredoxin-like folding motif shared by the N-terminal UBA domain, the C-terminal UBX domain, and the central UAS (upstream activating sequence) domain of unknown function [30]. True FAF1 homologues are restricted to insects and vertebrates, while ETEA (UBXD8) and UBXD7 are found in yeast and humans, respectively. The true FAF1 homologue is distinguished by one or two transmembrane domains close to the N-terminus, while the ETEA homologue is characterised by two biguanide-like domains of unknown function [31]. True SAKS1 (UBXD13) homologues and erasin-like proteins make up the two halves of the SAKS1 subfamily; these proteins are conserved from yeast to humans and have a highly similar central region that is not present in the other UBX subfamilies. SAKS1 contains both an N-terminal UBA domain and a C-terminal UBX domain [32]. Members of the TUG (UBXD9) subfamily can be found in all known eukaryotic organisms, and their central UBX domains reflect this diversity. An intriguing feature of this family is the presence of N-terminal ubiquitin-like domains in some of its members [33]. There is extensive sequence conservation among the UBXD1 subfamily members, which are found in all eukaryotes apart from fungi [34,35]. UBXD1 has a carboxy-terminal UBX domain and a central PUB (PNGase/UBA or UBX) domain. An N-terminal transmembrane span and a C-terminal UBX domain are features of the rep8 (UBXD6) subfamily found in vertebrates [36]. Finally, UBXD3 homologues are observed only in mammals [37].
Due to the presence of a wide variety of domains and their potential permutations within the UBXDF, the individual proteins within this family display a wide range of functional characteristics. These variations enable them to crosstalk with various protein complexes and bind to a limited number of partners depending on their subcellular localisation [23,38-41]. Because UBXD proteins contain additional ubiquitin-related motifs in addition to the UBX domain, they do not participate redundantly in the ubiquitin-proteasome pathway [42,43].
The mutation of UBXD family in tumours
We analysed UBXDF gene mutations in TCGA. The 993 gene mutations of UBXDF in tumours are dispersed throughout the entire proteins rather than clustered at selected locations (Additional file 1: Figure S1A, Fig. 1B). FAF1 has the most gene alterations in the UBXD family (Fig. 1C), and its germline mutations have been linked to hereditary colon cancer [44]. The majority of these regions had a single modification, whereas 109 locations had two to eight modifications, indicating that the mutations could result from random mutations accumulated during gene replication. Most UBXDF gene mutations are missense. High levels of gene amplification were found in UBXN7, while deep deletions were found in UBXN8 (Additional file 1: Figure S1B, Fig. 1D). The UBXDF genes were most amplified in LUSC and most deeply deleted in PRAD (Fig. 1E).
The UBX domain contains 80 residues and is often located at the C-terminus of eukaryotic proteins. Based on sequence alignment of structures, proteins containing the UBX domain have been identified in all eukaryotic species [45]. As UBXDF is an evolutionarily conserved [3] and short-protein subfamily, its overall mutation rate is typically low in TCGA tumours. UBXN7 (6%) was the only UBXDF gene with a mutation frequency > 3% in the TCGA database. There was a statistically significant difference in overall survival (OS) between patients with and without UBXDF gene mutations (Fig. 1F); UBXDF gene alteration was associated with poor prognosis (p < 0.05), indicating that these gene mutations may be associated with tumour progression.
The mRNA expression of UBXD family in tumours
To reduce methodological differences between datasets, we analysed the mRNA expression of UBXDF in the TCGA tumour tissue database alone. We observed the expression levels of individual UBXD genes (Fig. 2A). Within the UBXDF, UBXN1 expression was the highest in tumours. To summarise UBXDF gene expression across tumour types in TCGA, we constructed UBXDF expression profiles of cancer versus noncancerous tissues; however, the sample size (noncancer tissue sample size >= 5) limits the quality of the findings (Fig. 2B, Additional file 1: Table S2A, Figure S2A). We conducted Wilcoxon tests to calculate significance. Surprisingly, UBXN10 expression was downregulated in the majority of tumours except KIRP and BRCA, while ASPSCR1 expression was upregulated in tumours except KICH, KIRC, and THCA. In addition, all UBXDF members were downregulated in KICH. Most members of UBXDF were upregulated in LIHC (p < 0.01), except UBXN10 and UBXN8 (the two members with the lowest expression levels in the major tumours).
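A tumour-versus-normal comparison of this kind can be sketched with a rank-based test as below. The expression arrays are random placeholders standing in for TCGA values, and the Mann-Whitney U test is used here as the unpaired form of the Wilcoxon comparison, which may differ in detail from the authors' exact pipeline.

```python
# Minimal sketch: rank-based tumour vs normal comparison for one gene.
# Expression values are simulated placeholders, not TCGA data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tumour_expr = rng.normal(loc=6.0, scale=1.0, size=100)  # e.g. log2(TPM + 1)
normal_expr = rng.normal(loc=5.2, scale=1.0, size=20)

stat, p_value = stats.mannwhitneyu(tumour_expr, normal_expr,
                                   alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3g}")
print("Median log2 difference:",
      round(float(np.median(tumour_expr) - np.median(normal_expr)), 2))
```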
The co-expression of UBXD family in tumours
Protein p97/VCP is a highly conserved type II AAA protein containing two AAA ATPase domains [46,47]. Many known p97 adapters utilise a conserved binding motif, with the UBX domain being the most common [48]. Through the N-terminal domain of p97, the UBX domain allows all groups of the UBXD family (UBXDF) to connect to the multifunctional AAA ATPase p97/VCP protein [48-50]. The UBX domain binds to the hydrophobic pocket between the two subdomains of the p97 N-terminal domain. Protein-protein interaction studies have shown that p97 is more likely to bind to the UBX domain than to ubiquitin [51]. UBXD proteins bind to the endoplasmic reticulum lumen via p97 and become key cofactors of the endoplasmic reticulum-associated degradation (ERAD) pathway [52].
We observed significant co-expression among most UBXDF members (p < 0.001) (Additional file 1: Figure S2B, Table S2B), and p97/VCP was significantly co-expressed with all family members except UBXN1 (Fig. 2C). These results are consistent with previous studies; thirteen mammalian proteins have been found to bind to p97 and have been shown to contain a UBX domain [22]. UBXD proteins regulate p97 through the massive interaction networks they create and the structural constraints they impose on p97 and its complexes [53]. The correlation network revealed a significant positive co-expression relationship between UBXN6 and UBXN1 (r = 0.58), and a similar correlation between UBXN7 and UBXN2A (r = 0.56). In contrast, there was negative co-expression between UBXN4 and UBXN1 (r = -0.44).
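Pairwise co-expression of this sort is typically summarised with a correlation matrix; the sketch below computes Spearman correlations and p-values for a few genes across simulated samples. The gene names are reused from the text, but the expression matrix is random placeholder data.

```python
# Minimal sketch: pairwise Spearman co-expression across samples.
# The expression matrix is simulated; only the gene names come from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
genes = ["UBXN6", "UBXN1", "UBXN7", "UBXN2A", "VCP"]
expr = {g: rng.normal(size=200) for g in genes}          # 200 mock samples
expr["UBXN1"] += 0.6 * expr["UBXN6"]                      # induce one correlation

for i, g1 in enumerate(genes):
    for g2 in genes[i + 1:]:
        rho, p = stats.spearmanr(expr[g1], expr[g2])
        print(f"{g1} vs {g2}: rho = {rho:+.2f}, p = {p:.2g}")
```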
The protein expressions of UBXD family in cancers
We used the HPA database to retrieve UBXDF tissue staining results to evaluate protein level expression.In healthy tissue, UBXN4 was highly expressed in most tissues, while UBXN6 was lowly expressed in most tissues, and ASPSCR1 had not highly expressed in the normal tissues.UBXD members are moderately expressed in most normal tissues but are often lowly expressed in muscles (smooth, skeletal, heart) (Additional file 1: Figure S2E).We also observed protein expression in tumour tissues (Additional file 1: Figure S2F).
According to the tissue staining data of UBXD family, we can know that cancer cells show varying degrees of cytoplasmic or nuclear immune reactivity in the UBXD family of proteins.For example, most cancer tissues exhibit weak to moderate cytoplasmic and/or nuclear immunoreactivity in the ASPSCR1, UBXN2A, UBXN10, UBXN1, FAF1, FAF2, UBXD2B proteins.Of UBXN4, UBXN8, and NSFL1C proteins, most malignant cells showed moderate to strong cytoplasmic immune reactivity.Cancer cells exhibit moderate cytoplasmic and/ or nuclear immunoreactivity in the UBXN6 protein.The staining results in testicular neoplasms, urothelial, gastric, and pancreatic cancers, and occasional melanoma, breast, and prostate cancers are strongly positive.Some hepatocellular carcinoma, endometrial carcinoma, and kidney cancer showed weak positive or negative.In the UBXN11 protein, cancer cells are weakly stained or negative in most cases, and only a small number of breast, prostate, and pancreatic cancer cases were moderately stained.But breast cancer shows a strong immune response in the subpopulation of cells.Most tumour tissues showed moderate nuclear positivity in the UBXN7 protein, and a small number of skin cancers and rare tumours of the ovary, cervix, lung, and testicle are strongly positive.Additional cytoplasmic positivity has been observed in cervical, endometrial, testicular, liver, and prostate cancers.
Nevertheless, protein expression was discordant with mRNA expression for most tumour types, or protein data were unavailable. The HPA provides examples of UBXDF protein staining in the U-2 OS cell line, where the majority of UBXDF members are detected and are predominantly located in the nucleoplasm (UBXN7, ASPSCR1, NSFL1C, UBXN2B, FAF1, UBXN1, and UBXN8) (Fig. 2H).
UBXD family and prognosis of cancer patients
To assess the prognostic value of UBXDF mRNA expression in cancer, we constructed 68 K-M curve plots of overall survival (OS) across cancer types and UBXDF members with p < 0.05 (Additional file 1: Figure S2C). The expression level of some UBXDF members is significantly correlated with patients' overall survival and may be involved in cancer progression (Fig. 2D). In KIRC, 11 UBXDF genes were significantly negatively correlated with OS, and one gene (ASPSCR1) was significantly positively correlated with OS. In LGG, eight UBXDF genes were significantly positively associated with OS, and one gene (UBXN1) was significantly negatively associated with OS. These findings indicate that the UBXD gene family is closely associated with cancer and may serve as a biomarker and prognostic indicator for various cancers. In addition, low expression of UBXDF genes (except for UBXN4, UBXN10, and UBXN2A) in DLBC was associated with poor prognosis, whereas high expression of UBXDF genes (except for UBXN6) in KICH was associated with poor prognosis (Additional file 1: Figure S2D, Table S2C). In the analysis of mRNA expression trends across stages (Fig. 2E), UBXDF expression trends were significantly correlated with stage in most cancers; expression of most UBXDF members increased with stage in KICH, suggesting that UBXDF may play a crucial role in KICH progression.
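For readers who want to reproduce this kind of survival comparison, the following is a minimal sketch using the lifelines package, splitting patients by median expression of one gene and comparing the two Kaplan-Meier curves with a log-rank test; the input file and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical input: one row per patient with overall-survival time (days),
# an event flag (1 = death observed) and the expression of one UBXDF gene.
clin = pd.read_csv("kirc_os_faf1.csv")
high = clin["FAF1_expr"] >= clin["FAF1_expr"].median()

kmf = KaplanMeierFitter()
ax = kmf.fit(clin.loc[high, "os_time"], clin.loc[high, "os_event"],
             label="FAF1 high").plot_survival_function()
kmf.fit(clin.loc[~high, "os_time"], clin.loc[~high, "os_event"],
        label="FAF1 low").plot_survival_function(ax=ax)

# Log-rank test between the two expression groups.
res = logrank_test(clin.loc[high, "os_time"], clin.loc[~high, "os_time"],
                   event_observed_A=clin.loc[high, "os_event"],
                   event_observed_B=clin.loc[~high, "os_event"])
print(f"log-rank p = {res.p_value:.3g}")
```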
Additionally, we analysed the correlation of UBXDF with cancer stemness, which is strongly related to cancer prognosis. Stemness represents the loss of differentiated traits and the acquisition of progenitor and stem-cell-like properties [54]. For the DNA methylation-based stemness index (DNAss), it was notable that NSFL1C showed a strong positive association in OV (r = 0.85, p = 0.023) and UBXN8 in TGCT (r = 0.80, p < 0.001), whereas FAF1 showed a strong negative correlation in THYM (r = −0.65, p < 0.001) (Fig. 2F, Additional file 1: Table S2D, E). For the mRNA expression-based stemness index (RNAss), UBXN10 showed a strong negative correlation in PRAD (r = −0.69, p < 0.001), and FAF1 a strong positive correlation in THCA (r = 0.74, p < 0.001) (Fig. 2G, Additional file 1: Table S2F, G). These results revealed that some UBXDF members are strongly correlated with stemness, and we therefore consider FAF1 a potentially useful prognostic biomarker for these tumours. To our knowledge, no studies have investigated the relationship between FAF1 and cancer stemness, and the stemness of UBXDF in tumours has received little attention [55]. It remains uncertain whether the stemness associations of UBXDF can guide cancer prognosis, and we believe that studies of UBXDF and cancer stemness are urgently needed.
UBXD family and cancer drug therapy
To explore the effect of UBXDF on drug therapy, we analysed associations of UBXDF transcript levels with therapeutic responses (drug sensitivity) (Additional file 1: Figure S2F). We found that UBXN8 expression was negatively associated with sensitivity to drugs such as everolimus, AP-26113, and denileukin diftitox (Ontak), and NSFL1C with sensitivity to drugs such as dolastatin 10, vinblastine, and vinorelbine. At the same time, UBXN1 expression was positively correlated with sensitivity to cladribine, 5-fluoro-deoxy uridine 10mer, and fludarabine, and FAF1 expression was negatively correlated with sensitivity to alectinib.
Furthermore, we analysed the relationship between UBXDF expression and the sensitivity (IC50) of cancer cell lines to various drugs using datasets from the GDSC [56] and CTRP [57], which include detailed information on cancer cell lines. Remarkably, among the significant associations, expression of half of the UBXDF members was inversely correlated with the IC50 of cancer cell lines in CTRP (Fig. 2J, Table 2H), whereas it was positively correlated in GDSC (Fig. 2I, Table 2I). These findings suggest that UBXDF may be a valuable predictive biomarker for pharmacological therapy; however, further research is required.
UBXD family and tumour microenvironment
Immune molecules and immune cells within the tumour microenvironment (TME) are essential variables influencing carcinogenesis and can determine the responsiveness of malignancies to immunotherapy [18]. Consequently, examining the relationship between UBXDF and immunological molecules may help clarify the possible influence of UBXDF on the TME and on immunotherapy.
Solid tumour tissue includes tumour cells as well as immune, stromal, and vascular cells. We used ESTIMATE [58] to calculate stromal, immune, and tumour purity scores in TCGA, and the Spearman correlation between UBXDF expression and these scores was evaluated (Additional file 1: Table S3A, Figure S3A-D). FAF1 expression was inversely correlated with tumour purity in SARC, TGCT, THYM, and UCS, indicating that elevated FAF1 expression may be associated with reduced tumour purity. FAF2 showed a comparable pattern in ACC, where it was negatively associated with the stromal and immune scores but positively associated with tumour purity.
The five primary hallmarks of tumour immune expression are macrophage/monocyte signatures [59], overall lymphocyte infiltration (mainly T and B cells) [60], TGF-β response [61], IFN-γ response [62], and wound healing [63]. Based on these immune expression patterns, all cases can be classified into six reproducible immunological subtypes [64]. Using the Kruskal-Wallis test, we investigated UBXDF gene expression across the six immunological subtypes of TCGA pan-cancer. The expression levels of all thirteen UBXDF members differed significantly among immunological subtypes (Fig. 3A); these subtypes are not limited to specific tumour types and may play a crucial role in prognosis prediction [64].
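A minimal sketch of this kind of subtype comparison, assuming a table with one row per sample carrying an immune-subtype label and the expression of a single UBXDF gene (hypothetical file and column names), is shown below.

```python
import pandas as pd
from scipy import stats

# Hypothetical input: one row per tumour sample with its immune subtype
# label (C1-C6) and the expression of a single UBXDF gene.
df = pd.read_csv("pancancer_immune_subtypes.csv")

groups = [grp["UBXN1_expr"].to_numpy()
          for _, grp in df.groupby("immune_subtype")]
h_stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.2e}")
```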
Finally, to build a comprehensive profile of UBXDF in cancer immunity, we determined the relationships between immune cell infiltration and the UBXDF gene set expression level (GSVA score) [65], single nucleotide variant (SNV) level, and copy number variation (CNV) level. The infiltration of 24 immune cell types was evaluated with ImmuCellAI [18]. Strikingly, the UBXDF expression level was negatively correlated with the infiltration score and with central memory T cells, CD4 T cells, Tr1, Treg, cytotoxic T cells, Tfh, NKT, NK cells, macrophages, and MAIT cells, whereas UBXDF was positively correlated with neutrophils and effector memory T cells in most cancer types (Fig. 3B). The gene set SNV level represents the integrated SNV status of the inputted gene set for each sample; immune cell infiltration differed significantly between the mutant and wild-type groups in most cancers (Fig. 3C). The gene set CNV level represents the integrated CNV status of the inputted gene set for each sample; we noted that Th17 infiltration was significantly downregulated and NKT infiltration significantly upregulated in the amplified and deleted groups relative to the WT group in KICH (Fig. 3D). These findings reveal the biomarker potential of UBXDF for immunotherapy targeting NKT, neutrophil, or effector memory cells. In addition, the correlation between UBXDF and Th17 in certain tumours indicates that it may influence IL-17 release in these tumours.
A significant limitation of these correlation analyses is that spurious associations may arise in certain cancer types because of small sample sizes. Consequently, these data must be interpreted with caution, and more research is needed to investigate the potential impact of UBXDF on immunotherapy.
In vitro evidence of UBXD family involvement in cancer
Numerous articles have presented in vitro evidence for the role of UBXDF in cancer (Table 1). Some members of the UBXDF have been reported to play key roles in dysregulated formation, growth, proliferation, migration, invasion, and apoptosis pathways in specific tumours.
UBXN10-AS1 [66] showed a trend toward low expression in colon adenocarcinoma tissues. UBXN10-AS1 acts as a tumour suppressor that regulates the miR-515-5p/SLIT3 axis, and its overexpression inhibited the proliferation of COAD cells in vitro and in vivo, consistent with an antitumour role. UBXN2A, by binding to the substrate-binding domain of mot-2, can control or limit the development of colorectal tumour cells by competitively disrupting the p53-mot-2 interaction [39,[67][68][69]. UBXN8 is an endoplasmic reticulum transmembrane protein that couples p97 to misfolded ERAD substrates. Low expression of UBXN8 interferes with this process, causing misfolded or unassembled proteins to accumulate in the endoplasmic reticulum lumen, which in turn induces endoplasmic reticulum stress. Endoplasmic reticulum stress can induce cytoplasmic localisation and degradation of p53; therefore, UBXN8 can regulate the expression of the cell cycle inhibitors TP53 and p21CIP1/WAF1, which function as tumour suppressors in hepatocellular carcinoma [71]. As an adaptor protein of the CRL2/VHL ligase complex and a specific substrate of MUL1 ligase, UBXN7 regulates HIF-1α protein expression under aerobic or anaerobic conditions. The interaction between UBXN7 and cullins is not mediated by its ubiquitinated substrate but involves the UIM motif of UBXN7 interacting directly with the ubiquitinated cullins [40]. UBXN1, p47, and FAF1 can target and inhibit crucial proteins involved in tumourigenesis and tumour development and block the transcription of oncogenes activated by the NF-κB pathway. UBXN1 can regulate IκBα expression and the nuclear expression of NF-κB and phospho-NF-κB to control the development and tumourigenesis of cancer cells.
The physical interaction of FAF1 with IKKβ disrupts the assembly of IKK complexes, inhibiting NF-κB activity and its downstream signalling pathways [4]. FAF1 renders TβRII unstable at the cell surface by recruiting the VCP/E3 ligase complex, thereby preventing an excessive TGF-β response [75]. Markedly activated AKT directly phosphorylates FAF1, disrupting the FAF1-VCP complex and reducing FAF1 at the plasma membrane; this in turn promotes TGF-β-induced SMAD and non-SMAD signalling and increases TβRII expression on the cell surface.
Preclinical in vivo evidence of UBXD family involvement in cancer
The role of UBXDF in cancers has also been verified in xenograft mouse models (Table 2). In colon adenocarcinoma [90], UBXN10-AS1 is expressed at low levels and is predominantly localised in the cytoplasm of COAD cells; its overexpression reduced the proliferation and migration of COAD cells in vitro and slowed tumour growth in vivo. UBXN2A, a UBX domain-containing protein, can promote the ubiquitination and proteasomal degradation of mot-2 mediated by the ubiquitin E3 ligase CHIP. The level of UBXN2A protein in colon tumour tissues is markedly lower than that in adjacent normal tissues, and enhancement of UBXN2A leads to apoptosis at the cellular level and in living animals, thereby inhibiting tumour growth, proliferation, and metastasis [67][68][69]. TGF-β can promote the metastasis of advanced breast cancer cells; TβRII accumulated in FAF1-deficient embryonic cells from FAF1-knockout mice, indicating that FAF1 has a physiological function in suppressing TβRII [75]. In non-small cell lung carcinoma [83], sanguinarine increases the expression of FAF1, and the up-regulated FAF1 inhibits cell proliferation, invasion, and migration and induces cell cycle arrest and apoptosis, confirming that FAF1 can serve as a new therapeutic target. Studies of tumour progression in asbestos-induced malignant mesothelioma mouse models have shown that FAF1 is an essential factor in regulating the NF-κB pathway; in this mouse model, the loss of FAF1 may be related to aberrant NF-κB signalling and tumour progression [91]. The expression of YTHDF2 in diffuse gliomas [88] promotes glioma progression, and UBXN1, as a UBX domain-containing protein, inhibits the activation of NF-κB by maintaining the expression of IκBα; its expression can inhibit the glioma cell proliferation and migration stimulated by YTHDF2 upregulation. In glioblastoma and colon adenocarcinoma [87], FAF1 inhibits cell growth and carcinogenesis through TNF-triggered NF-κB signalling.
Clinical evidence
FAF1 is a tumour suppressor gene that plays a role in various cancers. In a study of recurrent leiomyosarcoma [92], analysis of DNA exon sequences, RNA and protein expression, and transcription factor binding in sarcomas and in unaffected muscle and bone revealed a point mutation (S181G) in FAF1 as the cause of the disease, which may lead to loss of apoptotic function following DNA damage. The loss of FAF1 function may affect the activity of the constitutive Wnt pathway and promote the occurrence of leiomyosarcoma. Further studies are needed to fully understand how UBXDF affects cancer cells.
UBXD family targeting drugs
Adult T-cell leukaemia/lymphoma (ATLL) is a malignant tumour caused by human T-cell leukaemia virus type 1 (HTLV-1) infection. A previous study [93] revealed that chloroquine (CQ) and hydroxychloroquine (HCQ), FDA-approved antimalarial drugs, induced apoptosis and inhibited ATLL cell growth in vitro and in vivo. Autophagy was inhibited in CQ- or HCQ-treated ATLL cells, which promoted the recovery of the negative regulator p47 (NSFL1C) and the inhibition of NF-κB activation, triggering ATLL cell apoptosis. The work of Abdullah et al. [68] demonstrated through high-throughput drug screening that veratridine, a natural plant alkaloid, upregulates UBXN2A expression in cancer cells; this upregulation increases cell death and inhibits cell proliferation, especially in colon cancer lines, highlighting the potential of targeting UBXD family proteins in cancer therapy [68].
Abnormal de novo lipid synthesis contributes to the progression and therapeutic resistance of various cancers, including lung cancer. Orlistat (an FDA-approved anti-obesity drug) inhibited tumour growth in human and mouse cancer cells in vivo and in vitro [94]. Using RNA-seq to explore orlistat-mediated changes in genome-wide gene expression profiles, FAF2/UBXD8 was identified as a new target associated with lipid metabolism among the significantly affected genes; knockout of FAF2 further enhanced orlistat-induced inhibition of cell survival, whereas overexpression of FAF2 reversed it. Nevertheless, the mechanism by which orlistat inhibits FAF2 remains to be explored.
Protein partners of UBXD family and their network
The UBXD family, known for its wide-ranging interactions with various protein partners, significantly influences cellular functions and cancer pathology. Raman et al. identified 169 interacting proteins (54 unique) of 13 UBXDF members using N- and C-terminally tagged anti-FLAG and anti-HA AP-MS studies [37]. Riehl's team, using GFP-tagged UBXD9 AX2 strains and a new BirA-UBXD9 strain, identified 185 potential binders of UBXD9 across multiple methods, notably including p97, UBXD9 itself, and GSIII [95].
Each UBXDF member plays a distinct role in various biological and pathological processes.UBXD3's involvement in ciliogenesis, particularly its interaction with the intraflagellar transport B (IFT-B) complex, links it to tumorigenesis, presenting a new perspective on cancer development associated with defective ciliogenesis [37,[96][97][98].UBXD4 emerges as a potential cancer therapeutic target due to its interactions with E3 ubiquitin ligases and its role in proteasomal degradation pathways, including its modulation of p53 tumour suppressor proteins [52,70,99].Furthermore, UBXD5's identification as an antigen in colon tumour-reactive T cells by Maccalli et al. positions it as a promising target for immunotherapy in colorectal and melanoma cancers [100,101].In the context of hypoxia response, UBXD7's targeting of HIF-1α for degradation via interactions with the p97 complex and CUL2/VHL E3 ubiquitin ligase complexes open new avenues for targeting hypoxic tumours [22,40].
UBXD8's regulation of neurofibromin, influencing the Ras-mediated signalling pathway, is particularly notable.Phan et al. 's discovery that UBXD8 silencing reduces Ras activity suggests its potential in treating neurofibroma [102].UBXD9's involvement in cellular dynamics is also significant, interacting with actin cytoskeletal proteins and implicating it in processes like Golgi reassembly and vesicle redistribution [25,[103][104][105].
UBXDF, through its diverse protein interactions, plays a critical role in various cellular processes and diseases, particularly cancer.
The mechanism of UBXD family in cancer
Numerous UBX proteins contain a UBA domain, which shows a conserved arrangement within the UBX family relative to the UBX domain. FAF1, SAKS1, p47, UBXD7, and UBXD8 have their UBA domains relatively close to their N-termini, so that the "linking" regions between the UBA and UBX domains make it possible to attach additional cofactors and substrates within a smaller volume than p97. Thirteen mammalian proteins have been found to bind p97 and to contain a UBX domain [22]. UBXD proteins regulate p97 through the extensive interaction networks they create and the structural constraints they impose on p97 and its complexes [53]. Some structural characteristics of UBX proteins relevant to their function in the p97 complex may not be conserved at the sequence level; these include the tendency toward oligomerisation, the prevalence of a second binding site for p97, and the conservation of various domain configurations. p97 contributes to the regulation of protein homeostasis, and tumour cells are highly dependent on protein quality control mechanisms, indicating that p97 is a potential therapeutic target for cancer. Moreover, the expression level of p97 is upregulated in many cancers, including human melanoma and breast cancer [10,[106][107][108].
FAF1 is a key regulator of TβRII at the cell surface and prevents overactivation of TGF-β-induced SMAD and non-SMAD signals (Fig. 4A). During cancer development, AKT activation mediates FAF1 phosphorylation and the subsequent dissociation of FAF1 from the plasma membrane and TβRII, thereby enhancing the cell-surface stability of TβRII and activating TGF-β-induced pro-metastatic functions in breast cancer cells [109]. Abnormal AKT overactivation may also alter intracellular TGF-β signalling, providing a catalyst for the transformation of TGF-β from a tumour suppressor into a promoter. Thus, AKT-mediated inactivation of the FAF1 protein maintains high expression of cell-surface TβRII, further enhancing SMAD and AKT (one of the non-SMAD pathways) signalling. In this way, AKT, through FAF1 inactivation, triggers a tumour-promoting, self-reinforcing cycle of the TGF-β pathway, thereby stimulating cancer cell invasion and metastasis.
It is also worth noting that FAF1 acts as a negative regulator of mitochondrial antiviral signalling (MAVS) (Fig. 4B). The innate immune receptor retinoic acid-inducible gene I (RIG-I) is linked to antiviral signalling through the mitochondrial antiviral signalling protein (MAVS), which mediates the recognition of viral RNA. After interacting with RIG-I, MAVS triggers downstream signalling effectors through lysine 63 (K63)-linked polyubiquitination by the E3 ligase TRIM31. FAF1 can form aggregates and bind to MAVS via its UBL domain, inhibiting TRIM31-mediated K63-linked polyubiquitination and MAVS aggregation. Through acetylation of four lysine sites (K139, K143, K146, and K221) in the UBL domain of FAF1, virus-induced phosphorylation of FAF1 at Ser556 promotes FAF1 de-aggregation [110].
UBXN1, p47, and FAF1 can target and inhibit key regulatory proteins in tumourigenesis and tumour development and block the transcription of oncogenes activated by the NF-κB pathway (Fig. 4C).
UBXN2A was originally identified as a protein controlling the trafficking of nicotinic receptors in the nervous system [99]. Sane et al. [69] reported that, in colon cancer cells, UBXN2A binds to mot-2 and inhibits the binding of mot-2 to p53. Genetic analysis showed that UBXN2A binds to the substrate-binding region of mot-2 and partially overlaps with the p53 binding site, indicating that UBXN2A and p53 may bind mot-2 competitively. UBXN2A protects the tumour suppressor function of p53 by binding mot-2 and releasing p53 from cellular sequestration, which suggests that UBXN2A can promote cell death by interfering with the p53-mot-2 interaction in colon cancer cells (Fig. 4D).
UBXD7 functions as an adaptor of the p97 ATPase, which is essential for p97-mediated degradation of misfolded or damaged proteins by the ubiquitin-proteasome system (UPS) [40,111]. When ubiquitinated substrates are present, UBXD7 binds to them via its UBA domain and then recruits p97 or p97 core complexes via the interaction of its UBX domain with the p97 N-terminal domain; both the UBA and UBX domains can be held inactive by intramolecular or intermolecular interactions. As a transcription factor, hypoxia-inducible factor 1 (HIF-1) plays an important role in tumours under hypoxia, promoting tumour aggressiveness and possibly impairing the response to radiation and chemotherapy [112]; reducing HIF-1 levels can disrupt multiple pathways, including cell survival, glucose metabolism, invasion, and angiogenesis [113]. With its UBA domain binding ubiquitinated HIF-1, UBXD7 can actively promote the interaction of p97 and the CUL2/VHL E3 ubiquitin ligase with HIF-1. In another study [40], researchers discovered that UBXD7 binds to the neddylated form of CUL2 and uses its UBA and UBX domains to recruit ubiquitinated HIF-1 and p97 complexes; nevertheless, overexpression of UBXD7 showed that this docking mechanism negatively regulates CUL2 ubiquitin ligase activity and leads to the accumulation of HIF-1. Together, these two studies show that the regulation of the ubiquitin-proteasome pathway by UBXD7 is complicated (Fig. 4E).
UBXN6 collaborates with protein tyrosine phosphatase 4A2 (PTP4A2) to assemble the endo-lysosomal damage response (ELDR) complex, thereby promoting autophagosome formation and facilitating the clearance of damaged lysosomes [114]. UBXN4 exhibits a negative correlation with macrophage-related markers, suggesting its potential as a prognostic marker for lung cancer [115]; its involvement in regulating the WNT secretory factor EVI/WLS at the protein level further underscores its significance [116]. Meanwhile, UBXD5, encoding the colorectal cancer neoantigen colon antigen-1 (COA-1), induces peripheral blood mononuclear cell (PBMC) antigen- and tumour-specific CD8+ immune responses, highlighting its immunogenic potential [117]. On the epigenetic front, UBXN1 undergoes silencing through promoter-region methylation mediated by the RUNX1-RUNX1T1 fusion protein, resulting in significant inhibition of acute myeloid leukaemia (AML) proliferation [118]. The ASPSCR1-TFE3 fusion not only regulates the activity of a super-enhancer (SE) but also promotes angiogenesis in alveolar soft part sarcoma (ASPS) [119].
In summary, these diverse roles of UBXD family members underscore their importance in various cellular processes and disease contexts.
Conclusions and perspectives
Undeniably, both the bioinformatic evidence from TCGA and the evidence in the literature have their limitations. Although TCGA provides relatively reliable information, with a large amount of clinical data and high-throughput mRNA sequencing results, it is less customised and cannot support in-depth mechanistic investigation. Prospective literature studies, in contrast, can provide well-designed investigations with well-validated evidence when appropriate hypotheses need to be tested, but they may be affected by limited research resources and by biases inherent in their assumptions. Therefore, combining and comparing bioinformatics and literature studies makes sense in order to obtain a complete view of the field.
Therefore, we summarise the bioinformatics analysis and literature evidence collected in this paper in Table 3. Although the mechanisms of action of the UBXD family are understood in some cancers, current data suggest that UBXD family members play an important role in many different types of cancer. Two major aspects of UBXDF in cancer that have been studied less are its effect on drug therapy and its impact on immunotherapy. Bioinformatics and literature studies have demonstrated the potential effect of UBXDF on cancer susceptibility to anticancer drugs. The role of UBXDF in drug action may vary depending on the pharmacological mechanism of the drug, highlighting the value of screening for drugs associated with UBXDF. Although several previous studies have examined drugs targeting UBXDF, this review is the first to summarise potential candidates (Fig. 2I, J). FAF1 inhibits tumour growth, migration, and invasion and regulates apoptosis through signal transduction pathways in breast, stomach, lung, and other tumours. However, little research has been done on drugs targeting FAF1, which could be a breakthrough in the development of UBXDF-directed drugs.
From another perspective, our bioinformatics analysis indicates that UBXDF may be associated with immune scores in several cancer types (Additional file 1: Figure S3D); these cancer types will be the subject of future research. In addition, the 13 UBXDF members showed statistically significant differences among immunological subtypes (p < 0.001) [64], suggesting that UBXDF may influence pan-cancer immunotherapy (Fig. 3A). To date, no research has explored the potential impact of UBXDF on cancer immunotherapy, and we believe this is a new direction for future UBXD family research.
Colorectal cancer and lung cancer are currently the most studied cancer types in this context. In future work, we will study cancer types that have been studied less but that, based on our bioinformatics results, may be affected by UBXDF; for example, THYM, in which survival is related to FAF1 expression in TCGA, has not been studied. We also propose to study further some less-examined members of the UBXD family, which may also affect tumour formation and progression in vitro and in vivo. For example, UBXD5, which encodes the carboxyl terminus of COA-1, has recently been identified as a novel colorectal cancer antigen [117]; however, the relationship between UBXD5 and the efficiency of the COA-1 immune response has not been studied. We believe that our analysis and review provide a new perspective on the study of UBXDF in cancer and raise new questions for future research.
Conclusions
We reviewed the bioinformatics and literature evidence for the UBXD family in cancer. Members of the UBXD family play an important role in different types of cancer, and the family may affect cancer immunotherapy and drug therapy, which should be investigated in the future. Literature evidence suggests that by controlling the levels of these ubiquitin-like proteins, UBXDF may disrupt the pathways on which cancer cells rely for rapid, unchecked growth while protecting the health of normal cells. However, it remains unknown whether the remaining members of the UBXD family, such as UBXN4, UBXN11, and UBXN2B, affect tumour formation and progression in vitro and in vivo. More studies are needed to determine whether the UBXD family is a promising new target for non-genotoxic targeted therapies in the treatment of human cancer.
Fig. 1
Fig. 1 Structure and gene alteration of UBXDF. A UBXD protein domain structures and groups. B Bar plot of counts of the four mutation types for each UBXD gene. C The FAF1 gene mutation locations and counts from the TCGA pan-cancer raw data. D OncoPrint with the mutation spectrum and UBXDF gene alterations. E The UBXDF mutation frequency in TCGA. F K-M plot of OS (UBXDF altered group vs. UBXDF unaltered group)
Fig. 2
Fig. 2 Expression of UBXDF in cancers.A The boxplot shows the mRNA expression levels of UBXDF based on the TCGA dataset.B Heatmap shows differential UBXDF expression (normal vs. tumour) in TCGA.C Co-expression network among UBXDF and VCP (p97).D Survival contribution of UBXDF genes in 33 cancer types.The coloured border represents a significance level of less than 0.05.E Trend plot presents the trend of gene expression from stage I to stage IV.F, G Correlations between UBXDF expression and DNAss or RNAss.H Representative images of UBXDF (except UBXN8) protein in the U-2-OS cell line.I, J Correlations between UBXDF and drug sensitivity data from the GDSC or CTRP.
Fig. 3
Fig. 3 UBXDF and cancer immunity.A Immunological subtype analysis of UBXDF genes in TCGA tumours.( * * * p < 0.001) B Heatmap summarises the significance of p-value and FDR for the spearman correlation analysis between GSVA score of the inputted gene set and immune cells' infiltrates.(*p ≤ 0.05; #FDR ≤ 0.05) C The scatter plot summarises the significance of p-value and FDR for comparing mean infiltrate between SNV groups.Using colour to indicate the p-value significant (green) and FDR significant (red) results.D The scatter plot summarises the significance of p-value and FDR in comparing mean infiltrate between CNV groups.Using colour to indicate the p-value significant (green) and FDR significant (red) results
Fig. 4
Fig. 4 The mechanism of the UBXD family in cancer. (Created in Biorender.com) Additional file 2
Table 1
In vitro evidence for roles of UBXDF in cancers
Table 2
Preclinical in vivo evidence of UBXDF effect on cancers
Table 3
Summary of bioinformatics and literature evidence for UBXDF in cancers | 2024-02-18T05:12:20.094Z | 2024-02-15T00:00:00.000 | {
"year": 2024,
"sha1": "a5ee00a4d0171a529929e7b6435a327473dff08b",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/counter/pdf/10.1186/s12967-024-04890-9",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "96b9b8c6172dc4fba4775089aad128265402e67b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5295620 | pes2o/s2orc | v3-fos-license | Agronomic Factors Affecting the Potential of Sorghum as a Feedstock for Bioethanol Production in the Kanto Region, Japan
In the Kanto region of Japan, the possibility of running a bioethanol plant on rice straw has been assessed, and sorghum production has been considered a necessary part of the system. Two field experiments were conducted in 2012 and 2013 at the NARO Agricultural Research Center in Tsukuba, Ibaraki, to estimate the yielding ability of sorghum in the Kanto region. Two cultivars of sweet sorghum and one of grain sorghum were sown using a pneumatic seeder. Above-ground dry matter (DM) yield ranged from 1.03 to 1.82 kg m−2 for the sorgo-type cultivars and from 0.70 to 1.18 kg m−2 for the grain-type cultivar. The observed yields were lower than the simulated potential yields, i.e., 1.61 to 2.66 kg m−2, indicating that biomass production was restricted in this study. Stem brix values for the sweet sorghum cultivars were generally low (3.3-16.2%) compared with the values reported in the literature. It appears that there is still room to improve the field management of sorghum to minimize the gap between the potential and actual production observed in these experiments.
Introduction
Unlike the southwest islands, where sugarcane is almost exclusively expected to play a key role as a feedstock for bioethanol plants [1], rice straw is likely to be a major feedstock in the Kanto region [2,3]. Effective utilization of lignocellulosic biomass has been attempted by many researchers at both laboratory [4,5] and plant scales [6]. Cost relative to fossil fuels is probably the major challenge to commercial implementation, i.e., the construction and management of an actual plant. Cost aside, the handling of by-products is another major issue to be solved. For an ethanol plant with the capacity to produce 15,000 kL of ethanol annually, a capacity similar to that of a plant that used to operate in Hokkaido (Hokkaido Bioethanol Co., Ltd.), 150,000-300,000 kL of vinasse could be generated, assuming a vinasse-to-ethanol ratio of 10-20 [7]. Being stripped of energy in the form of ethanol, vinasse still contains organic matter and therefore has a high chemical oxygen demand (COD). Releasing it casually, for example into surface waters, would not only damage the environment but also waste valuable energy and nutrients. In Brazil, the practice of applying sugarcane vinasse to fields to replace mineral fertilizers has been accused of causing environmental problems, including odor and possible emissions of greenhouse gases, and anaerobic digestion has been suggested as a possible remedy [8]. For an ethanol plant in the Kanto region whose major feedstock is rice straw, vinasse could be applied to paddies, but only at the particular time of year when rice needs nutrients for growth, whereas vinasse is generated in the ethanol plant all year round, or close to it. It would therefore be necessary to develop cropping systems, apart from paddies, that could receive vinasse at various times throughout the year. It would be preferable if the constituent crops could also contribute to the operation of the ethanol plant as feedstocks when necessary. It would also be beneficial to incorporate dual-purpose cropping, like that reported for wheat [9], which employs a chance-seeking way of production, i.e., growers can change the use of the standing crop in the field according to the prevailing meteorological and socioeconomic conditions, within a newly developed cropping system. Sugarcane mills in Brazil are known to alter the ratio of ethanol to sugar production according to demand [10], and this flexibility within the biofuel industry is clearly advantageous where competition with food production for fertile cropland is concerned [11].
Sorghum has long been used as a staple, sugar, and feed crop in arid and semi-arid areas prone to drought [12,13], although its use, especially for sugar [14] and as feed [15], has spread to temperate regions. Sorghum has been widely recognized as a potential source of biofuel since the oil shocks of the 1970s [16][17][18] because of its ability to produce high biomass and sugar yields over a relatively short growing period. The recent upsurge in the number of biomass studies is largely attributable to the adoption of the Kyoto Protocol in 1997 [19], which was concerned in part with the increasing greenhouse gas emissions caused by anthropogenic activities. Recent field studies conducted at relatively high latitudes have generally focused on characterizing sorghum as a feedstock and on estimating the amount of ethanol that can be produced [20][21][22]. In addition, some studies have examined the effects of planting date and/or harvest date on sorghum biomass yield in the context of energy production [23][24][25][26].
In Japan, reflecting the increased demand from growers to use sorghum as a feed crop after the 1950s [27], a substantial number of studies of this crop can be found in the literature [28,29]. However, the number of domestic studies that clearly position sorghum as a feedstock for the biomass-energy industry is rather limited. Hoshikawa [16,17] was probably among the first to recognize the potential of sorghum as a biomass crop in the country, and he and his group have conducted extensive work, particularly on sweet sorghum, the type that accumulates sugar in the stem, in the Tohoku region [17,18,30,31]. Wu et al. [32] reported above-ground DM yields of a grain sorghum variety (12.85 Mg ha−1) and of two sweet sorghum varieties (22.75 Mg ha−1 and 23.66 Mg ha−1) planted in Minamiminowa in the Chubu region. In the Kanto region, Yasui et al. [33,34], in a series of studies conducted under the project named "biomass transmutation plan" funded by MAFF, investigated stem constituents, especially sugars, of sorghum sown on different planting dates; however, they did not investigate the biomass yielding ability of sorghum. Inuyama et al. [27] obtained stem fresh weights in the range of 33.7-44.8 Mg ha−1 for three sorghum cultivars planted on 20 May, but the DM yield was unfortunately not clear from their study.
The objective of the present study was to estimate the potential biomass yield of sorghum in upland cropping systems in the Kanto region for different planting dates with the aid of simple simulation techniques. The estimates of potential biomass yield were used to help identify the factors that could hinder successful sorghum production in the target region and to propose production techniques to overcome them. Establishing and running a biomass energy system requires cooperation among stakeholders with different interests and disciplines. It is hoped that the present study will provide an agronomic perspective on this issue so that participants and stakeholders in present and future biomass energy projects have information about feedstock crops (e.g., sorghum) in terms of climate and crop management.
Field Experiments
A field experiment was conducted at the NARO Agricultural Research Center (the predecessor of the Central Region Agricultural Research Center, NARO) (36°02′ N, 140°10′ E) (3-1-1 Kannondai, Tsukuba, Ibaraki 305-8666, Japan) in 2012 and 2013. The pH (H2O), total nitrogen, available phosphoric acid, exchangeable potassium, and humus content of the soil sampled in 2013 were 6.5, 0.35%, 1.5 mg/100 g, 65.3 mg/100 g, and 6.8%, respectively. Seeds of three sorghum cultivars, SIL05 (NARO), high sugar sorgo (FS501) (Snow Brand Seed, Sapporo), and meter sorgo (8080) (TAKII & Co., Ltd., Kyoto, Japan), were sown using a pneumatic seeder (AS404-HW, KUBOTA). SIL05 and FS501 are sweet sorghum, while 8080 is a grain sorghum. Sowing dates and seed rates are presented in Table 1. The target seed rates were 20, 10, and 10 for SIL05, FS501, and 8080, respectively, depending on the seed size of each cultivar. Furrow spacing was 0.7 m. Nitrogen, phosphorus, and potassium were applied as a high-analysis compound fertilizer (N, P2O5, K2O: 14%, 14%, 14%) by side dressing at sowing at a rate of 12 g m−2 (Table 2). A split-plot design with two replicates was used, with sowing date as the main plot. The size of each plot was 5.6 m by 48 m. After harvest and removal of the above-ground parts of the sorghum crop, the field was planted with winter crops, mostly oat in 2012 and rye in 2013.
Sampling and Analysis
Above-ground biomass from a known area, i.e., 1.4 m2 (two rows), was harvested every two weeks until harvest (Table 3). In 2013, samples other than the final harvest were taken from an area of 0.7 m2 (a single row) instead of two rows, as it proved difficult to find two adjacent rows of plants showing average growth, i.e., plants in one row or the other tended to show poor growth. For the sampled plants, height was measured and the number of effective tillers counted. A sub-sample of 2-3 plants (sub-sample A) was taken from each sample, separated into organs, i.e., leaf blade, stem, and leaf sheath, and its fresh weight determined. Sub-sample A was then dried at 80 °C in an oven to constant weight and its DM weight determined. Another sub-sample of similar size (sub-sample B) was taken to determine leaf area index (LAI) using an automatic area meter (AAM-8, Hayashi Denko Co., Ltd., Tokyo, Japan). At the same time, weeds were sampled from the same area as the sorghum plants and their DM weight determined by the same method used for the sorghum samples. After heading, the brix value of stems was measured using a digital refractometer (PR-101α, ATAGO, Tokyo, Japan).
Statistical Analysis and Simulation
Analysis of variance (ANOVA), followed by multiple comparisons with Bonferroni's test, and regression analysis were performed using SPSS 13.0 (IBM Japan, Ltd., Tokyo, Japan). Intercepted radiation was calculated on a daily basis from emergence to maturity according to Monsi and Saeki [35]. Extinction coefficients for sorghum of 0.37 and 0.60 were taken from the literature [36,37]. Daily gross assimilation (Equation (1)) and growth (Equation (2)) and maintenance (Equation (3)) respiration were calculated according to Lövenstein et al. [38]. The assimilation efficiency (Ea) and reflection coefficient (pc) were fixed at 9, a typical value for C4 crops, and 0.08, respectively. A temperature correction coefficient (TC) was calculated for each day from the daily mean temperature. Although the experiments included two replicates, one replicate considered an outlier was omitted from the calculations when the values of the two replicates differed to a large degree, i.e., when the value of one replicate was smaller than that of the other by more than 30%.
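As the original equations are not shown here, the sketch below illustrates only the general structure of such a calculation: Monsi-Saeki (Beer's law) light interception combined with a generic gross assimilation term and a Q10-type maintenance respiration correction. The coefficient values and the respiration formulation are illustrative assumptions, not the formulas of Lövenstein et al. [38].

```python
import numpy as np

def simulate_dm(lai, srad, tavg, k=0.37, eps_gross=2.0,
                growth_eff=0.75, maint_coef=0.015, q10=2.0, t_ref=25.0):
    """Daily above-ground dry-matter accumulation (g m-2).

    lai  : daily leaf area index (interpolated between samplings)
    srad : daily global solar radiation (MJ m-2 d-1)
    tavg : daily mean air temperature (deg C)
    All coefficients are illustrative placeholders.
    """
    dm = 0.0
    for L, S, T in zip(lai, srad, tavg):
        f_int = 1.0 - np.exp(-k * L)          # Monsi-Saeki / Beer's law interception
        gross = eps_gross * f_int * S         # gross assimilation, g m-2 d-1
        maint = maint_coef * dm * q10 ** ((T - t_ref) / 10.0)  # maintenance respiration
        dm += growth_eff * max(gross - maint, 0.0)             # growth respiration as a conversion loss
    return dm
```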
Weather
Mean daily temperature, total rainfall, and total solar radiation during the experiments are presented in Table 4. There was a long dry spell of nearly 50 days in July and August 2013.
Leaf Area Index (LAI)
LAI observations made at each sampling were linearly interpolated (Figure 1). In 2012, the peak LAI was in the range of 5-6, while it was in the range of 8-11 in 2013. LAI tended to decrease very quickly in 2013 owing to the severe aphid infestation. As in the simulation procedure, one replicate of the two was omitted from the calculations and treated as an outlier when the values of the two replicates differed to a large degree, i.e., when the value of one replicate was smaller than that of the other by more than 30%. The interpolated LAI data were then used to estimate light interception and biomass yield in the simulation.
DM Yield and Brix
Height, LAI, effective tiller number, DM yield of the different organs, DM content of the above-ground part, and stem brix are presented in Table 5. Interactions were observed for panicle DM yield, DM content, and stem brix. For panicle DM yield, significant cultivar × year and sowing × year interactions were observed: in the former, panicle DM yield was greater in 2012 than in 2013 for FS501 and 8080, while no significant difference between the two years was recognized for SIL05; in the latter, panicle DM yield was greater for early sowing than for medium sowing in 2013, while there was no difference between the two sowing timings in 2012. For DM content, no difference was observed between SIL05 and FS501 for early sowing, while for medium sowing, DM content was higher for SIL05 than for FS501. For stem brix, there was no difference between the two years for medium sowing, while a greater value was observed in 2012 than in 2013 for early sowing; there was no difference between SIL05 and FS501 for early sowing, while stem brix was significantly higher for SIL05 than for FS501 for medium sowing. Effects of sowing date were hardly seen except for height, where greater height was associated with early sowing. FS501 showed greater LAI than SIL05 and 8080. Pearson's correlation analysis revealed that stem brix was negatively correlated with leaf DM yield and panicle DM yield (Table 6) and positively correlated with stem DM yield and especially with stem DM content.
Estimation of Radiation Use Efficiency (RUE)
RUE was estimated by plotting above-ground DM yield against intercepted radiation (Figure 2a,b). Radiation intercepted by the sorghum canopy was simulated from daily solar radiation using an analogue of Beer's law [35,39] for each cultivar in each season. RUE is presented in Table 7. Light extinction coefficients (k) of 0.37 (Figure 2a,b and Table 7) and 0.60 (Table 7) were taken from the literature [36,37]. RUE values were 13.8-24.4% greater when a light extinction coefficient of 0.37 was employed than when 0.60 was employed.
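A minimal sketch of this estimation, regressing cumulative above-ground DM on cumulative intercepted radiation computed with Beer's law, is given below (illustrative only; variable names are hypothetical).

```python
import numpy as np
from scipy import stats

def estimate_rue(lai, srad, dm_observed, sample_days, k=0.37):
    """Slope of above-ground DM (g m-2) on cumulative intercepted radiation (MJ m-2)."""
    f_int = 1.0 - np.exp(-k * np.asarray(lai))            # daily interception fraction
    cum_intercepted = np.cumsum(f_int * np.asarray(srad))  # cumulative intercepted radiation
    x = cum_intercepted[sample_days]                        # value at each sampling date
    res = stats.linregress(x, dm_observed)
    return res.slope, res.rvalue ** 2                       # RUE (g MJ-1) and R^2
```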
Estimation of Biomass Yield
The estimated daily assimilation and accumulated assimilation, i.e., yield, are presented as an example of the simulation for the data set of SIL05 sown early in 2012 (Figure 3a,b); the remaining results are presented in Table 8. Net assimilation of FS501 exceeded 20 Mg ha−1 for all sowing dates in both years, while that of SIL05 was below 20 Mg ha−1 for 5 of the 10 simulation sets (Table 8). The simulated results were plotted against the observed biomass yield (Figure 4); the yield level obtained in the present study was below the estimated potential (region B in Figure 4) except in a few cases (region A in Figure 4). A further attempt was made to estimate the potential biomass production of sorghum at different locations from RUE and water use efficiency (WUE) (Table 9), assuming an RUE of 1.4 g MJ−1 and a WUE of 5.0 g kg−1 following Narayanan et al. [40]. In contrast to the USA, biomass production was greater when estimated from rainfall than from radiation at most locations in Japan, except for Takamatsu, which is known for its dry summers, and Tsukuba (Kannondai) in 2012 and 2013, where the present study was conducted.
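The location comparison in Table 9 amounts to taking the smaller of a radiation-limited and a water-limited estimate; a small worked sketch of that arithmetic, using the assumed RUE of 1.4 g MJ−1 and WUE of 5.0 g kg−1, is shown below.

```python
def potential_dm_mg_ha(season_srad_mj_m2, season_rain_mm, rue=1.4, wue=5.0):
    """Radiation-limited vs. water-limited potential DM (Mg ha-1).

    rue: g DM per MJ of solar radiation; wue: g DM per kg of water.
    1 mm of rain over 1 m2 equals 1 kg of water; 1 g m-2 equals 0.01 Mg ha-1.
    """
    rad_limited = rue * season_srad_mj_m2 * 0.01
    water_limited = wue * season_rain_mm * 0.01
    return min(rad_limited, water_limited), rad_limited, water_limited

# Example: a growing season with 2500 MJ m-2 of radiation but only 300 mm of rain
# would be water-limited at 15 Mg ha-1 rather than radiation-limited at 35 Mg ha-1.
print(potential_dm_mg_ha(2500, 300))
```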
Estimation of Ethanol Yield
Ethanol yield was estimated from the stem yield and brix obtained in the present study following methodologies from the literature [20,25,41] (Table 10). Estimated ethanol production exceeded 2000 L ha−1 for SIL05 and FS501 with early sowing, as well as for SIL05 with medium sowing, in 2014. With late sowing in 2014 and early and medium sowing in 2015, estimated ethanol production was generally low.
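The exact formulas of the cited methods [20,25,41] are not reproduced here; the sketch below illustrates one common way such an estimate can be built from stem fresh yield and brix, with the juice extraction fraction, fermentation efficiency, and sugar-to-ethanol conversion factor stated explicitly as assumptions.

```python
def ethanol_l_ha(stem_fresh_mg_ha, brix_pct, juice_frac=0.5,
                 ferm_eff=0.85, l_ethanol_per_kg_sugar=0.65):
    """Rough ethanol estimate (L ha-1) from stem fresh yield (Mg ha-1) and brix (%).

    Assumptions (all adjustable): half of the fresh stem mass is extractable
    juice, brix approximates the fermentable sugar fraction of the juice, and
    about 0.65 L of ethanol per kg of sugar (near-stoichiometric) scaled by a
    fermentation efficiency of 0.85.
    """
    juice_kg = stem_fresh_mg_ha * 1000.0 * juice_frac
    sugar_kg = juice_kg * brix_pct / 100.0
    return sugar_kg * l_ethanol_per_kg_sugar * ferm_eff

# Example: 50 Mg ha-1 of fresh stems at 10% brix gives roughly 1380 L ha-1.
print(round(ethanol_l_ha(50, 10)))
```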
Yield and Brix
As mentioned in the Introduction, Inuyama et al. [27] obtained stem weights of sorghum in the range of 3.37-4.48 kg m−2 on a fresh matter basis. These values are comparable to the lower end of the values obtained in the present study, i.e., 3.35-7.56 kg m−2. To compare the yield levels obtained in the present study with those reported by other workers, DM yield was plotted against DM content (Figure 5a,b) [20,43,44]. Compared on a DM basis, the yields obtained in the present study were lower than those reported by Wortmann et al. [20] (Figure 5a) and Fukazawa et al. (Figure 5b) [43], while they were comparable to those reported by Harada et al. (Figure 5b) [44]. The high DM contents reported by Wortmann et al. [20] and Fukazawa et al. [43] appeared to explain, at least partly, the differences in DM yield between their studies and the present study (Figure 5a). As to stem brix, Kamiyama et al. [45] observed values in the range of 13.7% to 16.2% for three cultivars in Ibaraki prefecture, while stem brix in the present study, especially for FS501, was generally lower. Kawahigashi et al. [46] reported brix values ranging from 2.8% to 19.6% for 109 sorghum accessions, with SIL05 showing a value as high as 19.4%, compared with the range of 5.3% to 16.2% observed in the present study for the same cultivar. When plotted against DM content, stem brix was positively correlated with DM content (Figure 5c) [20,43], in accordance with Table 6.
RUE
RUE estimated in the present study was in the range of 1.01-1.58 g MJ−1 solar and 0.85-1.27 g MJ−1 solar for light extinction coefficients of 0.37 and 0.60, respectively. In the case of 8080, the leaves are arranged on a plant of short stature, while the number of leaves is not very different from that of SIL05 and FS501; this might explain the smaller RUE obtained with 8080 compared with the other cultivars. Monteith [47] pointed out that, at least during the vegetative stage, the relationship between intercepted radiation and the annual production of DM was surprisingly similar for barley, potatoes, sugar beet, and apples, and showed that the slope, RUE, would be approximately 1.4 g MJ−1 solar in the UK. Values between 1.10 and 2.16 g MJ−1 solar have been cited for sorghum [48]. Hammer et al. [36] obtained RUE in the range of 1.19-1.84 g MJ−1 solar for dwarf sorghum. Horie and Okada [49] reported an RUE of 2.75 g MJ−1 PAR (photosynthetically active radiation) for rice up to maturation; their calculation, however, included root biomass, which was not assessed in the present study. In a recent study with sorghum in Northern Italy (44°32′ N, 11°11′ E, 38 m a.s.l.), Ceotto et al. [37] obtained an RUE as high as 3.48 g MJ−1 PAR when above-ground DM yield was plotted against intercepted PAR. Similarly, in Kansas, USA (39°24′ N, 101°4′ W, 963 m a.s.l.), Narayanan et al. [40] obtained RUE in the range of 2.13 to 3.53 g MJ−1 PAR. Values of 1.4 g MJ−1 solar and 2.0 g MJ−1 solar have been reported [50] for C3 and C4 species, respectively, grown under optimum conditions. Bearing these values in mind, the RUE obtained in the present study was considered relatively low for sorghum, suggesting room for improvement in cultivation.
Estimation of Potential Yield and Ethanol Yield
An attempt was made to simulate the potential yield of sorghum in the present study. As the LAI data set used was the one obtained in the present study, it should be borne in mind that potential yield here means the yield restricted by the observed LAI. The observed yields were below the simulated yields, especially in 2013, implying that the photosynthetic ability of the green leaves was likely impaired to some extent. Possible factors include limited precipitation as well as the aphid infestation; pronounced effects of both factors, such as leaf rolling and leaves covered by black sooty mold growing on aphid honeydew, were observed especially in 2013. The peak LAI values observed in the present study appeared comparable to or even greater than those reported by Narayanan et al. [40], which is interesting considering that some of the genotypes in their study produced above-ground DM yields greater than 2000 g m−2. It should be noted that the estimation in the present study was conducted in a quite simplified manner without differentiating the vegetative phase from the reproductive phase; besides, the respiratory costs associated with sugar accumulation in the stem during the later phase of growth could complicate the balance sheet of assimilates in sorghum, in a similar way to sugarcane. In Table 9, a simple estimation of potential DM yield from weather data was attempted to evaluate the climatic resources of different places for sorghum production. Solely from the perspective of solar radiation, production of 30 to 40 Mg ha−1 of biomass might be expected in the Kanto region, a slightly lower level than in the USA, although it would be strongly influenced by precipitation during the growth period. In both 2012 and 2013 of the present study, potential yield was suppressed greatly by the amount of rainfall; in 2013 in particular, only half of the production estimated from solar radiation was considered theoretically possible. The issue of the dry summers experienced in the study area is discussed further below. As brix was generally low in the present study, the potential ethanol yield was low compared with the values reported in the literature [20,25].
Seed Rates and Weeds Infestation
It was possible to reduce the amount of sorghum seed needed for sowing by 80-90% using a pneumatic seeder. A small plot experiment conducted in 2013 (data not shown), however, indicated that the use of a power harrow seeder might be a better way of sowing sorghum in terms of early crop growth and increased competition with weeds for resources, even though more seed is needed with this machinery; the issue of crop establishment requires further examination. Digitaria (Digitaria ciliaris (Retz.) Koel.), spotted lady's-thumb (Persicaria maculosa Gray), white goosefoot (Chenopodium album L.), and Amaranthus retroflexus (Amaranthus viridis L.) dominated some of the plots and appeared to have affected the growth and yield of sorghum in both years (data not shown). When sown in the middle of May, sorghum seeds took a week to germinate compared with 4-5 days in June. This, combined with the slow growth of sorghum during the early growth phase, which is sometimes compared with that of maize [51], is likely to allow weeds to establish faster than the crop. Sorghum originates from the semi-arid tropics [52], and one cannot deny that temperature in the Kanto region is too low to exploit the yielding ability of this crop to its full capacity. Low temperature during the early growth phase up to canopy establishment, a situation similar to that of sugarcane in the southwest islands [53,54], is one of the main factors limiting yields. It appears that, in this region, when sorghum is sown as early as the middle of May, a cultivar with faster early growth would be required to compete effectively with weeds.
Aphids
Aphids were present in both years. In 2013, an outbreak was observed from early August until late August, when it was stopped by a rainfall event. Setokuchi [55] pointed out that dry weather and high temperature are factors that favor the growth of aphids (Longiunguis sacchari (Zehntner)) and reported a reduced DM content of sorghum following infestation by this pest. As previously mentioned, the DM contents in the present study were generally low compared with those reported in other studies, and since a positive correlation was observed between DM content and stem brix, it is possible that stem brix was negatively affected by the presence of aphids. Based on mean temperatures over the last 30 years [56], the maximum temperature in August in Tsukuba is lower by approximately 2 °C than that in Tadotsu, Kagawa, a prefecture known within Japan for its hot and dry summers; similarly, the amount of rainfall during summer in Tsukuba is greater than that in Tadotsu. The weather data for 2012 and 2013 [56], however, tell a different story: there was almost no difference between the two locations in maximum temperature, while summer precipitation was lower in Tsukuba than in Tadotsu, indicating that the weather conditions in Tsukuba were more likely to favor aphids. Evidence from the present study suggests that it is important to control aphids in sorghum during this growth phase and that this aspect needs to be investigated further, as it could be one of the major issues affecting the successful cultivation of sorghum in the Kanto region.
Typhoons
In 2012, two typhoons affected the field experiment. The first, in June, caused only some leaf damage; high ridging was considered to have saved the seedlings from lodging. The second typhoon, at the end of September, almost completely knocked down the cultivars of more than 3 m in height, i.e., SIL05 and FS501. The number of typhoons that have come close to the Kanto region in the last 20 years is much smaller than the number approaching Okinawa, one of the regions most vulnerable to typhoons in the country (Figure 6) [56]; however, it is probably optimistic to expect no typhoons to affect the Kanto region during the period from May to September. From the perspective of minimizing the risk of lodging, it is preferable that sorghum with a tall growth habit be harvested by early September. In this respect, it might be better to place sorghum with a shorter growth habit, such as 8080, which survived the typhoon at the end of September, after wheat or barley in the crop rotation, as wheat and barley are usually harvested in June in this part of the world.
Future Perspective
Other issues to be examined are the nutrient management of sorghum for biomass use and the preservation of the crop. As to the former, excess nitrate accumulation in feed crops including sorghum [44], very often the consequence of excessive fertilizer and/or manure application [57], has been regarded as a factor that can cause serious health problems in livestock. However, fertilizer recommendation rates for feed sorghum vary among prefectures, with the differences sometimes exceeding twofold [58]. It is therefore important, from the perspective of nutrient recycling and energy saving [22], that wastes from neighboring ethanol plants, such as vinasse, be utilized as crop fertilizers. The latter issue of preservation is worth studying because the stem juice of sorghum is known to deteriorate rather quickly after harvest [59], a possible factor that could hinder the use of sorghum as a feedstock. In addition, with a view to completing the cropping system as a whole, the inclusion of high-yielding winter crops [60] would be an essential component to consider in future studies. The poor early growth observed in the plots sown on 13 May 2013 was considered to be at least partly attributable to possible allelopathic effects of oat residues [61], especially roots [62], as all the above-ground parts of the oat had been removed from the field prior to sowing sorghum. Although also sown after oat, the early growth of sorghum sown on 28 May 2013 did not appear to have been affected by the previous crop to the extent observed for the crop sown two weeks earlier. The issue of allelopathy would probably require further examination to establish cropping systems that can support sustainable biomass production in the target region; maize cultivation could be a possible alternative to avoid continuous sorghum cropping. A large part of the issues discussed above could probably not be solved solely by improving agronomy. Multiple cultivars are required to run any cropping system in a sustainable manner. To cultivate sorghum in the Kanto region, especially in a global warming context, more cultivars, preferably with early vigor and resistance to aphids, need to be developed.
Conclusions
Sorghum seeds of both sweet sorghum (SIL05 and FS501) and grain sorghum (8080) cultivars were sown in two field experiments conducted in 2012 and 2013 to estimate the yielding ability of this crop in the Kanto region in the context of bioethanol production. Above-ground DM yields in the ranges of 1.03-1.82 kg m−2, 1.22-1.77 kg m−2, and 0.70-1.18 kg m−2 were obtained for SIL05, FS501, and 8080, respectively, over the two years. The yield level obtained in the present study was below the estimated potential except in a few cases. In contrast to the USA, potential biomass production was greater when estimated from rainfall than from radiation for most locations in Japan. Observed yields, however, were greatly suppressed by the amount of rainfall in both experimental seasons at the study site. As brix was generally low in the present study, the potential ethanol yield was low compared with values reported in the literature. A positive correlation observed between DM content and stem brix suggests the possibility that stem brix was negatively affected by the presence of aphids. Controlling the population of aphids was identified as one of the crucial factors determining the successful cultivation of sorghum in the Kanto region.
Figure 2 .
Figure 2. (a) The relationship between intercepted radiation and above-ground dry matter (DM) yield in 2012. (b) The relationship between intercepted radiation and above-ground DM yield in 2013.
Figure 3 .
Figure 3. (a) Estimated daily gross and net assimilation for SIL05 sown early in 2012. (b) Estimated accumulation of gross and daily assimilation for SIL05 sown early in 2012.
Figure 4 .
Figure 4. Comparison between simulated and observed yield: (A) the plots where the simulated yield was close to the observed yield; and (B) the plots where the simulated yield was higher than the observed yield.
Figure 5 .
Figure 5. (a) The relationship between stem dry matter (DM) content and stem DM yield. (b) The relationship between above-ground DM content and above-ground DM yield. (c) The relationship between stem DM content and stem brix. (d) The relationship between above-ground DM content and stem brix.
Figure 6 .
Figure 6. The average number of typhoons that came close to the Kanto and Nansei regions in the last 20 years.
Table 1 .
Seed rates over two seasons.
Table 3 .
Sampling dates and days after sowing (DAS) over two cropping seasons.
* The last sampling corresponds to the harvest.
Table 4 .
Mean temperature, total rainfall and total solar radiation over two cropping seasons.
Table 6 .
Pearson's correlation analysis between traits over two seasons.
Table 7 .
Results of regression analysis between intercepted radiation and biomass yield.
Table 8 .
Simulated gross assimilation, respiration and net assimilation.
Table 9 .
Estimation of biomass production of sorghum from solar radiation and rainfall.
Table 10 .
Estimation of ethanol yield and production by three methods. | 2017-07-28T23:27:01.276Z | 2017-06-02T00:00:00.000 | {
"year": 2017,
"sha1": "76194ca51b0bb80061b242bf6651dd64fa90110d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/9/6/937/pdf?version=1496660740",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "76194ca51b0bb80061b242bf6651dd64fa90110d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
64370545 | pes2o/s2orc | v3-fos-license | Finite Element Analysis on the Creep Constitutive Equation of High Modulus Asphalt Concrete
In order to obtain a viscoelastic constitutive relation of High Modulus Asphalt Concrete (HMAC) that can be used in numerical calculations, an integral transformation is applied to the Bailey-Norton creep law and a new, practical model is established. Combined with data collected from uniaxial compression creep tests, regression calculations were carried out with the 1STOPT fitting software; the creep parameters of High Modulus Asphalt Concrete at different temperatures were thereby obtained, which can be used in the HMAC creep model.
Introduction
High Modulus Asphalt Concrete (HMAC) is an asphalt concrete mixture originally produced by mixing hard asphalt, stone of a certain grading, and additives. Under 15 °C, 10 Hz test conditions, its dynamic modulus can reach over 14000 MPa [1-6], exhibiting advantages such as high modulus, good rutting resistance, and low sensitivity to low temperature cracking and thermal fatigue cracking. As a viscoelastic material, HMAC has a constitutive relation different from that of elastic or elastic-plastic materials, so its mechanical characteristics are more difficult to grasp. However, the development of viscoelastic theory provides an effective tool for the mechanical analysis of HMAC: the finite element method. Since no complex analytic derivation is required in the finite element method, it is convenient for simulating the viscoelasticity of road materials and analyzing the actual stress state of the road surface [7-11].
Due to the simplicity and high controllability of the creep test method, many researchers have conducted laboratory tests to reveal the actual stress condition of the road surface through the stress-strain relationship of asphalt concrete materials. However, it is still very difficult to obtain viscoelastic parameters from these asphalt stress-strain data and then use computer software to simulate the material's mechanical characteristics.
There are two reasons: one is the low precision of the data fitting; the other is the inconsistency between the formula used in data fitting and that used in the computer simulation software. Therefore, although many researchers have viscoelastic data, they instead adopt elastic or elastic-plastic methods to approximate the stress state of the road surface [12, 13]. In order to make better use of these data, this paper analyzes the creep model in ABAQUS suitable for high modulus asphalt and applies an integral transformation to the Bailey-Norton creep law to establish a new practical model. Based on numerical simulation of the uniaxial compression creep test, the creep parameters of High Modulus Asphalt Concrete at different temperatures were obtained by regression calculation with the efficient and user-friendly software 1STOPT.
The Creep Constitutive Equations of Asphalt Concrete
The finite element analysis of material nonlinear problems has two aspects, namely, the establishment of the constitutive formula and the solution of the nonlinear equations. Material constitutive relations are usually divided into two categories: total-form constitutive equations and incremental constitutive equations. Since plastic deformation is unrecoverable inelastic deformation, the stress state cannot be determined by the current state of deformation alone but rather by the loading route and deformation history. To accommodate this situation, incremental constitutive equations are usually adopted in finite element analysis. Generally, the creep effect is time-dependent; that is, under constant load conditions, the deformation of the material increases with time. Under constant load and displacement, the creep effect has two stages: redistribution of stress within the structure as the first stage, and a steady stress state of the structure as the second. In the process from the transient creep state to the steady creep state, the first stage is termed the transient creep stage and the second the steady-state creep stage. The steady creep stage can be analyzed with a total-form finite element analysis. However, for transient creep, for loads that vary over time, and for the structural creep effect under prescribed displacement, an incremental finite element analysis is usually used along with thermal elastic-plastic incremental analysis, known as thermal elastic-plastic creep analysis.
Asphalt concrete, a material whose response depends on time, temperature, and stress, exhibits elastic, plastic, viscoelastic, and viscoplastic deformation under repeated loads. The strain components can be expressed as

ε = ε_e + ε_p + ε_ve + ε_vp.   (1)

In (1), ε is the total strain varying with time, ε_e is the elastic strain (recoverable and independent of time), ε_p is the plastic strain (unrecoverable and independent of time), ε_ve is the viscoelastic strain (recoverable and time-dependent), and ε_vp is the viscoplastic strain (unrecoverable and time-dependent).
Obviously, the plastic components of the asphalt mixture response, the plastic strain and the viscoplastic strain, generate permanent deformation, and these plastic strains accumulate under repeated loads. Thus only the viscoplastic strain remains to be calculated. However, it is very hard to separate the viscoelastic and viscoplastic parts, especially when both vary under repeated loads. On the other hand, creep tests can easily distinguish elastic strain from inelastic strain, that is, creep strain, which includes the viscoplastic strain and part of the viscoelastic strain at a given point in time. Since the current test method can only measure the combined effect of the two, creep and plasticity cannot be treated separately.
The creep deformation of the material can be expressed as a function of temperature T, stress σ, and time t, that is, ε_c = f(T, σ, t), which can be used to analyze the creep deformation [14-17]. Therefore, the Bailey-Norton creep model in ABAQUS can be used to simulate the nonlinearity of the asphalt concrete layer. Equation (2) expresses this law in the form of a creep strain rate; in (2), ε̇_cr is the creep strain rate, e is 2.718, t is time, T is temperature, and C1, C2, C3, and C4 are material parameters.
In order to simplify the model and enhance its usability, the Bailey-Norton creep law was transformed by integration, yielding equation (3). When the temperature is constant, C4 = 0, and (3) can be converted into the time hardening creep model (expressed as a creep rate) used in ABAQUS,

ε̇_cr = C1 σ^C2 t^C3,   (4)

where C1, C2, and C3 are model parameters dependent on temperature, which can be determined by material testing.
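The regression that recovers C1, C2, and C3 from measured creep curves was performed with 1STOPT in this work, but any nonlinear least-squares routine can reproduce it. The Python sketch below is a minimal illustration: it assumes the rate form of equation (4) and its integral at constant stress, ε_cr(t) = C1·σ^C2·t^(C3+1)/(C3+1), and fits synthetic strain-time records at several stress levels; the data values and variable names are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(X, c1, c2, c3):
    """Accumulated creep strain from the time-hardening rate law
    eps_dot = C1 * sigma**C2 * t**C3, integrated at constant stress."""
    sigma, t = X
    return c1 * sigma**c2 * t**(c3 + 1.0) / (c3 + 1.0)

# Illustrative data: strain-time records at three stress levels (MPa, s).
t = np.tile(np.linspace(60, 3600, 30), 3)
sigma = np.repeat([0.1, 0.3, 0.5], 30)
eps_measured = creep_strain((sigma, t), 2.0e-4, 0.8, -0.6) \
               * np.random.default_rng(0).normal(1.0, 0.02, t.size)

# Fit C1, C2, C3 by nonlinear least squares (the role played by 1STOPT in the paper).
popt, _ = curve_fit(creep_strain, (sigma, t), eps_measured,
                    p0=[1e-4, 1.0, -0.5], maxfev=10000)
c1, c2, c3 = popt
print(f"C1 = {c1:.3e}, C2 = {c2:.3f}, C3 = {c3:.3f}")
```

The fitted C1, C2, and C3 can then be supplied to a time hardening creep definition in the finite element model.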
Creep Tests on High Modulus Asphalt Concrete
Since High Modulus Asphalt Concrete is mainly used in sections with large traffic volumes and harsh stress environments, grade A asphalt per the specification [18] was chosen as the asphalt material. In this study, two base asphalts, 50# and 70#, from the Zhonghai Company were chosen based on a market survey, and, following the recommended mixing dosages of the additive from the French PR company, five asphalt binders were prepared: 50#-1, 50#-2, 70#-1, 70#-2, and 70#-3. The performance of all of them meets the relevant norms. The optimum asphalt content is shown in Table 1. Aggregate grading was selected from the specification AC-20 median grading, and the grading composition is shown in Table 2. An MTS810 materials testing machine imported from America was used to conduct uniaxial compression creep tests. The specimens were 100 mm in diameter and 100 mm in height, and the test temperatures were 20 °C, 40 °C, and 60 °C. A series of stresses (0.1 MPa, 0.2 MPa, 0.3 MPa, 0.4 MPa, and 0.5 MPa) was applied to the specimen for 60 minutes and then removed for 10 minutes before data processing began. Through these tests, the change of vertical accumulated strain over time during the creep process was obtained for specimens of different types at different temperatures. Due to space limitations, only two graphs are presented here, with Figure 1 showing creep curves for the different types of asphalt mixture at 40 °C and Figure 2 showing the creep curves of the 70#-1 asphalt mixture at different temperatures.
From Figure 1, it can be seen that, in the loading stage, instantaneous deformation of the specimen first occurs under the initial load, and then, under the sustained load, the specimen deformation keeps increasing until the deformation increments become steady. After unloading, the elastic deformation recovers immediately, the viscoelastic deformation gradually recovers over time, and the plastic deformation remains as permanent deformation.
From Figure 2, it can be seen that, under the same stress conditions, as the temperature increases, both the accumulated deformation and the residual deformation after relaxation increase. At the high temperatures of 40 °C and 60 °C, the instantaneous deformation of the High Modulus Asphalt Concrete with 0.7% additive is consistent with that of the ordinary asphalt concrete, while the cumulative deformation of the former is far less than that of the latter as the loading time grows, indicating a significant improvement in the high temperature deformation resistance of the asphalt mixture with admixture.
Creep Model Parameters Results
According to the creep performance test results of High Modulus Asphalt Concrete under different temperature conditions, and after data treatment and regression with the professional fitting software 1STOPT, the creep parameters C1, C2, and C3 of the 5 different types of asphalt concrete were obtained, as shown in Table 3.
From Table 3, the following can be seen. At the same temperature, the creep model coefficient C1 has the largest amplitude of variation, changing by orders of magnitude with the additive, while the partial stress index C2 is reduced across the creep parameters of the five different types of asphalt concrete. The time index C3 is always negative, its absolute value is less than 1, and it changes little with additive variation.
As for the creep parameters at different temperatures, whatever the material, the amplitude of variation in the creep model coefficient C1 is always the largest, changing by orders of magnitude, and the higher the temperature, the smaller the value of C1. The partial stress index C2 increases as the temperature increases. The time index C3, always lying between −1 and 0, shows no significant change with temperature.
At different temperatures, compared with the other four mixtures, the creep model coefficient C1 of the High Modulus Asphalt Concrete (70#-3) increases by orders of magnitude, while the partial stress index C2 decreases by a factor of 4 to 6.

Conclusions

(1) By integration of the time hardening creep model in ABAQUS, a new simplified creep model is established.

(2) From the laboratory creep tests on High Modulus Asphalt Concrete, the accumulated creep strain of the asphalt mixtures follows the order 70#-3 < 50#-2 < 70#-2 < 70#-1 < 50#-1. Among the five types, the high modulus asphalt mixture with the added additive shows the strongest high temperature deformation resistance.

(3) Creep parameters of the different types of High Modulus Asphalt Concrete at different temperatures were obtained with the professional fitting software 1STOPT on the basis of numerical simulation of the uniaxial compression creep test.

Figure 1: Creep curves for different types of asphalt mixture.
Table 1 :
The optimum asphalt content.
Table 3 :
Creep parameters for asphalt concrete. | 2018-12-20T11:29:02.072Z | 2015-10-11T00:00:00.000 | {
"year": 2015,
"sha1": "0577646677b9d5494c29ad993b496a260b18b301",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/amse/2015/860454.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0577646677b9d5494c29ad993b496a260b18b301",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
257061246 | pes2o/s2orc | v3-fos-license | Neurological scoring and gait kinematics to assess functional outcome in an ovine model of ischaemic stroke
Background Assessment of functional impairment following ischaemic stroke is essential to determine outcome and efficacy of intervention in both clinical patients and pre-clinical models. Although paradigms are well described for rodents, comparable methods for large animals, such as sheep, remain limited. This study aimed to develop methods to assess function in an ovine model of ischaemic stroke using composite neurological scoring and gait kinematics from motion capture. Methods Merino sheep (n = 26) were anaesthetised and subjected to 2 hours middle cerebral artery occlusion. Animals underwent functional assessment at baseline (8-, 5-, and 1-day pre-stroke), and 3 days post-stroke. Neurological scoring was carried out to determine changes in neurological status. Ten infrared cameras measured the trajectories of 42 retro-reflective markers for calculation of gait kinematics. Magnetic resonance imaging (MRI) was performed at 3 days post-stroke to determine infarct volume. Intraclass Correlation Coefficients (ICC's) were used to assess the repeatability of neurological scoring and gait kinematics across baseline trials. The average of all baselines was used to compare changes in neurological scoring and kinematics at 3 days post-stroke. A principal component analysis (PCA) was performed to determine the relationship between neurological score, gait kinematics, and infarct volume post-stroke. Results Neurological scoring was moderately repeatable across baseline trials (ICC > 0.50) and detected marked impairment post-stroke (p < 0.05). Baseline gait measures showed moderate to good repeatability for the majority of assessed variables (ICC > 0.50). Following stroke, kinematic measures indicative of stroke deficit were detected including an increase in stance and stride duration (p < 0.05). MRI demonstrated infarction involving the cortex and/or thalamus (median 2.7 cm3, IQR 1.4 to 11.9). PCA produced two components, although association between variables was inconclusive. Conclusion This study developed repeatable methods to assess function in sheep using composite scoring and gait kinematics, allowing for the evaluation of deficit 3 days post-stroke. Despite utility of each method independently, there was poor association observed between gait kinematics, composite scoring, and infarct volume on PCA. This suggests that each of these measures has discreet utility for the assessment of stroke deficit, and that multimodal approaches are necessary to comprehensively characterise functional impairment.
Introduction
Ischaemic stroke is a leading cause of death and neurological disability worldwide (1,2). New approaches to reperfusion have extended the previously narrow window for intervention (3,4), resulting in reduced mortality, yet a higher incidence of patients facing persistent neurological impairment (5). To improve functional outcomes in the increasing number of patients who survive stroke, new therapies targeting secondary injury and neurological recovery mechanisms are urgently required (6). Animal models are an essential step in the development of novel stroke therapeutic agents, with restoration of function a key indicator of a treatment's efficacy. Despite this, the translation of pre-clinical findings to clinically efficacious stroke therapies has been largely ineffective to date (7). This may be a consequence of pre-clinical experimental design, selection of model species, and lack of comprehensive functional assessment (8). Due to their potential for enhanced clinical translation, large animal species, including sheep, pigs and nonhuman primates (NHP's) are increasingly being used as a screening tool once initial therapeutic efficacy has been demonstrated in small animals (9)(10)(11)(12), with accurate assessment of functional deficit in these species necessary for relevance to clinical disability.
Clinical assessment of post-stroke function is often carried out using composite scoring systems such as the National Institutes of Health Stroke Scale (NIHSS) or the modified Rankin Scale (mRS) which are used to determine acute stroke severity and long-term stroke outcomes respectively (13,14). The NIHSS assesses 11 criteria including vision, facial movement, motor function of the lower and upper extremities, and language disturbances, where a higher score allocated indicates greater stroke severity, providing useful information in the acute care setting. By comparison, the mRS comprises a 7-point scale which assesses functional independence and gait stability ranging from no symptoms to severe disability and death. Although relatively crude, the mRS is commonly employed as a long-term outcome measure in stroke clinical trials. Quantitative differences in gait kinematics have also served as a means of determining asymmetry and extent of neurological motor deficit, with studies demonstrating a significant increase in swing duration, and decrease in gait speed and stride length following stroke onset (15,16).
Comparable assessment of functional deficits in animal stroke models vary depending on the species. In rodents, composite scores such as the Bederson scale and Modified Neurological Severity Score (mNSS), motor function tests such as rotarod, cylinder test, ledged beam, grid walking, reaching chamber and staircase test, and quantitative systems to assess gait such as the Catwalk and DigiGait (17) are frequently employed. Large animal NHP models utilise the NHP Stroke Scale (NHPSS), which assesses level of consciousness, defence/startle reactions, upper/lower extremity movement, gait, circling, bradykinesia, balance, neglect, visual field, facial weakness, and grasp reflex (18). Additionally, the 2 tube choice test (hand preference, spatial neglect), the hill and valley staircase test (hemiparesis) and the Kluver board (motor control and planning) are common outcome measures used in NHP stroke models to evaluate the severity of post-stroke deficits (19). Although the ability to assess grasp in NHP's is of particular relevance given their comparable dexterity to humans (20-22), strict housing requirements, ethical considerations, and overall expense can limit use in large scale studies (10,11,23,24). Pigs have also been used as a large species to study stroke, with open field [exploratory behaviour; (25)] and gait analysis [step length, step velocity, swing duration, stance duration and maximum hoof height; (26)] described to assess post-stroke deficits. However, some porcine species are not available internationally, such as the Yucatan minipig, limiting widespread use.
The relative availability, amenable nature, and gyrencephalic cerebral structure have led to the increased use of sheep as a species to model stroke (27-31). In addition, the high proportion of white matter within the sheep brain (27.7%) is much closer to that in humans (40-45%) compared with rodent species (10-20%): an important consideration given the vulnerability of white matter to ischaemic injury (24). Functional deficits have been documented following ovine middle cerebral artery occlusion (MCAo), including noticeable hemiplegia of the contralateral limbs and general apathetic behaviour (27,30). These deficits are comparable to those seen clinically, where malignant MCAo often presents as unilateral hemiplegia and hemiparesis, and resultant compensatory reliance on unaffected ipsilateral limbs (32, 33). Functional assessment in sheep, however, presents unique challenges. Firstly, although composite scoring systems for hooved animals do exist, clinical tasks such as grasp reflex cannot be assessed. Secondly, motor function tests developed for rodents are often difficult to translate to large animals due to the need for increased size of test and measurement apparatus, whilst others are completely inappropriate for large species (e.g., rotarod). Thirdly, although quantitative systems to assess gait kinematics have been reported in ovine musculoskeletal, orthopaedic, and spinal cord injury models (34-37), no assessment for stroke has been described to date.
Developing functional assessment methods that overcome these challenges is key in enabling detection of acute and long-term functional changes post ovine stroke. As such, this study sought to establish a neurological composite scoring system and subsequently develop a method to assess gait kinematics using motion capture in an ovine model of ischaemic stroke. Specifically, this study aimed to: (1) develop a neurological composite scoring system and assess its repeatability in healthy animals pre-stroke; (2) develop a method to assess gait kinematics using motion capture and assess repeatability in healthy animals pre-stroke; (3) determine if a change in neurological composite scoring was detected 3-days post-stroke; (4) determine if a change in gait kinematics was detected 3-days post-stroke and (5) determine the relationship between functional outcomes obtained via neurological composite scoring, gait kinematics, and infarct volume quantified via magnetic resonance imaging (MRI) at 3-days post-stroke.

FIGURE 1
Experimental timeline. Animals arrived at the facility six months prior to stroke induction. Thereafter, procedures commenced four weeks prior to stroke (where stroke induction is indicated as day 0), including staged habituation, palpation and marker attachment, baseline assessments, post-stroke assessment, and MRI.
Materials and methods
Ethics
This study was approved by the South Australian Health and Medical Research Institute (SAHMRI) Animal Ethics Committee (SAM 3) and conducted in accordance with the Australian National Health and Medical Research Council code of care and use of animals for scientific purposes (8 th Edition, 2013), and Animal Research Reporting of In vivo Experiments (ARRIVE) guidelines (38). A total of 26 adult Merino sheep (Ovis aries, 57 ± 4 kgs, 18-36 months), obtained from a single farm (Gum Creek, South Australia) were used (n = 13F; 13M). Six months prior to commencing the study, animals were moved from the farm to the research facility [SAHMRI Preclinical Imaging and Research Facility (PIRL)], where on arrival they were examined by a veterinarian and judged to be healthy prior to study inclusion based on complete physical and orthopaedic examinations. Sheep were treated prophylactically with antiparasitic ivermectin administered intramuscularly (0.25 mg/kg, Ivomec 0.8 g/L) and cydectin administered by oral drench (0.1% Moxidectin). Animals were fed once daily with a combination of feedlot, nuts, grain, and lucerne hay (Laucke Mills, South Australia), with free access to water.
Experimental design
To determine the repeatability of neurological composite scoring and gait kinematics, pre-stroke, baseline assessment was carried out on three occasions; 8-, 5-and 1-day prior to stroke induction. To compare pre-and post-stroke parameters, assessment was performed 3 days following stroke onset. MRI was carried out at 3 days post-stroke following completion of functional assessments. Experimental timeline and procedures are shown in Figure 1.
Neurological composite scoring
A 10 criteria neurological assessment score was adapted from previous work (27) based on the common system for neurologic dysfunctions in large animals (39). This study specifically focused on the functional deficits observed in animals following transient MCAo, including changes in demeanour, behaviour, and motor dysfunction ( Table 1). A score of 0 was considered normal, with a possible total score of 36 indicating severe deficit.
Each criterion was scored at the time of assessment upon agreement of two independent assessors. Observations of level of consciousness and state of activity gave a score for animal demeanour (Table 1, criterion 1). Animals who were comatose warranted euthanasia, and no further investigations were performed. Abnormalities in animal behaviour were assessed by cumulative scores for presence of food debris in the mouth indicating inability to properly masticate, torticollis, evidence of abnormal flexion at the fetlock and/or carpus/tarsus joints, general ataxia or dysmetria in limb movements, and circling (Table 1, criteria 2, 3, 4, 5, and 6 respectively). Circling behaviours (criterion 6) were monitored prior to animal handling on assessment days by undisturbed video recording of the animal for 10 min within their home pen environment.
Three postural reaction tests (Table 1, criteria 7, 8 and 9) were conducted by forcefully shifting the animal's weight over their centre of gravity on individual limbs and assessing their ability to correct the movement. Criterion 7 refers to "hemi-standing", which evaluated the animal's ability to correct and co-ordinate fore-and hind-limbs during a lateral movement on the left and right side of the body. Criterion 8 refers to the "hopping reaction" which assessed forelimbs individually to determine the animal's ability to correct the limb during lateral movement. Additional quarter scores were allocated in criteria 7 and 8 if the animal exhibited inability to fully extend a limb upon release, causing 'knuckling' on the ground. Criterion 9 encompassed "lateral dragging", which involved the forced lateral movement of each individual limb and assessment of the animal's ability to return the limb back to the medial starting position. Quarter scores were given for criterion 9 if animals dragged a limb on return (0.25/limb) or if correction back to original position was only partial (0.25/limb). Scores for hemi-standing, hopping, and lateral drag were incorporated into a single postural reaction measure for the contralateral and ipsilateral side of the body, respectively. Forced forward movement of the animal on both forelimbs ("wheelbarrowing", Table 1, criterion 10) assessed for any sideways deviation, indicative of hemineglect .
Motion capture of gait kinematics

System design and hardware

The relative size and strength of sheep require the construction of robust systems that are adaptable for use in farming environments yet enable safe handling throughout assessment to ensure both animal and handler wellbeing. Given these requirements, a fenced motion capture run measuring 10 × 5 × 1 m was fabricated using standard building and farming equipment (Supplementary Figure S1). One length of the run was defined as the capture volume (the space in which cameras can detect movement of the animal), with the remainder providing a circular pathway back to the capture volume. Sheep were encouraged to walk forwards through the run, with a familiar researcher walking behind them at a consistent pace. As the sheep turned the corners of the run, the researcher appeared in their visual field along the edge of their flight zone (40-42), encouraging continuous forward movement.
Ten motion capture cameras (Vicon Vero, Vicon Motion System Ltd., Oxford, UK) were placed equidistant around the periphery of the capture area, five on either side, approximately 1 m from the fence line and at a height of 1-1.5 m. Vicon Nexus software (v2.10) was used to capture marker data at a frame rate of 200 Hz. An additional video camera (Vicon Vue, Vicon, Oxford, UK) operating at a frame rate of 60 Hz captured video footage which was superimposed to the motion capture data for quality control when post-processing.
Habituation
Staged habituation was undertaken prior to assessment to familiarise animals with handling and testing procedures. On facility arrival, animals were initially housed in protected outdoor pens in groups of six. Four weeks prior to surgery, animals were separated into pairs in the same outdoor pens. During this time, they underwent a three-stage habituation protocol ( Figure 1): Stage 1: pairs of animals (housed together) were allowed to roam the functional run without a handler for 30 min on five consecutive days; Stage 2: individual animals traversed the run in a clockwise direction (30 min for five consecutive days), with a handler walking behind them to encourage forward movement; Stage 3: animals were trained to step into, and out of, a modified transport crate; grain (Laucke Mills, South Australia) was used to encourage animals to step into the crate without handler intervention. Habituation and testing procedures were carried out by four trained handlers familiar to the sheep.
Anatomical landmarks
Spherical retro-reflective markers (9- and 15-mm diameter; B&L Engineering, California, USA) were non-invasively attached to 42 anatomical landmarks (Figure 2) using hooked Velcro® (Velcro USA Inc, Manchester, NH, US). The opposing loop surface of the Velcro® was adhered to the animal using cyanoacrylate adhesive (Bostik, Australia). To ensure consistent marker placement, 6 days prior to baseline testing, animals were intubated and anaesthetised (1.5% isoflurane, Henry Schein, Australia), anatomical locations palpated, and landmarks tattooed using a handheld tattoo gun and India ink (Windsor and Newton, Australia). Animals were shorn weekly to facilitate marker reattachment and visualisation.
Motion capture data collection
On testing days, the Vicon motion capture system was calibrated, and the global coordinate system (GCS) set using a light emitting diode wand (Vicon Active Wand, Vicon Motion System Ltd., Oxford, UK). The GCS z-axis corresponded to the vertical (sagittal) direction with the positive axis pointing up; the y-axis corresponded to the direction of the progression (forward); and the x-axis corresponded to the lateral direction of the animal (left/right) with the positive axis pointing to the right side. Prior to assessment, animals were placed into a modified transport crate, reflective markers attached to anatomical locations, and moved to the functional testing space where they completed a minimum of 20 traverses of the functional run. Once motion capture was complete, reflective markers were removed and animals were returned to their home pen.
Motion capture data post-processing
Post-processing of motion capture data was performed using Vicon Nexus software (version 2.10, Vicon Motion System Ltd., Oxford, UK) with 5 trials in which the animal maintained a consistent walking pace reconstructed for each session (8-, 5-, and 1-day pre-stroke and 3 days post-stroke). Markers were labelled and spline and cyclic algorithms used to fill all visible gaps (Vicon Motion System Ltd., Oxford, UK). A fourth order, zero lag, low pass Butterworth filter was applied with a cut-off frequency of 10 Hz. Data were exported to C3D format and further processed with custom MATLAB® code (Mathworks, Natick, MA, USA). Motion capture parameters selected for analysis sought to capture post-stroke gait, asymmetry and general apathetic behaviour observed, such as lowering of the head and shoulders and inability to extend the fetlock joint contralateral to the stroke affected hemisphere. Parameters of interest were subsequently classified into global and limb-specific. The final parameters selected for analysis and their purpose are provided in Supplementary Table S1 (global parameters) and Supplementary Table S2 (limb-specific parameters). Global parameters correspond to the outcome measures pertaining to the entire trial, for example forward velocity. These outcomes were calculated as the mean value across the entire trial. Limb-specific parameters were computed from the observation of the kinematic data of each limb within its corresponding gait cycle. For each trial, the gait cycles were identified following the method from Ghoussayni et al. (43). Changes in the velocity of each limb's hoof marker (DPHAL, Figure 2) were detected in the vertical and progression directions, determining when the marker stopped moving (entering stance phase) or started moving (entering swing phase). One complete gait cycle per limb was extracted from each trial to calculate kinematic measures of interest using two-dimensional (2D) planar analysis. Planar analysis was independently performed in the vertical (sagittal) and lateral (left/right) directions. Joint angles were defined between two vectors in the sagittal plane for the fetlock, carpus, and elbow of the forelimb, and fetlock, tarsus, and stifle of the hindlimb (Figure 2).
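The study's post-processing was done in Vicon Nexus and custom MATLAB code; the Python sketch below illustrates the three numerical steps described above: zero-lag low-pass filtering, velocity-threshold detection of stance and swing from the hoof marker, and planar joint angles. The 0.05 m/s threshold and the array layout (n × 3 trajectories ordered x, y, z as in the GCS described earlier) are assumptions for illustration, not the study's actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0  # capture rate (Hz)

def lowpass(traj, cutoff=10.0, order=4):
    """Zero-lag 4th-order Butterworth filter applied to an (n, 3) marker trajectory."""
    b, a = butter(order, cutoff / (FS / 2.0), btype="low")
    return filtfilt(b, a, traj, axis=0)

def gait_events(hoof, speed_threshold=0.05):
    """Stance/swing events from hoof-marker speed in the progression and vertical
    directions (Ghoussayni-style velocity thresholding); threshold is illustrative."""
    vel = np.gradient(hoof[:, 1:3], 1.0 / FS, axis=0)       # y (forward), z (vertical)
    speed = np.linalg.norm(vel, axis=1)
    stance = speed < speed_threshold                         # True while the hoof is stationary
    changes = np.flatnonzero(np.diff(stance.astype(int)))
    foot_strikes = changes[~stance[changes]] + 1             # moving -> stationary
    foot_offs = changes[stance[changes]] + 1                 # stationary -> moving
    return foot_strikes, foot_offs

def sagittal_angle(proximal, joint, distal):
    """Planar (y-z) angle at `joint` between proximal and distal segments, in degrees."""
    u = (proximal - joint)[:, 1:3]
    v = (distal - joint)[:, 1:3]
    cosang = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```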
Preoperative preparation
Twelve hours prior to surgery, animals were moved to indoor pens and fasted. Anaesthesia was induced with intravenous ketamine (0.05 mL/kg, 100 mg/kg Injection, CEVA, Australia) and diazepam (0.08 mL/kg, 5 mg/mL injection, Pamlin, CEVA, Australia). A jugular catheter (18 g, Terumo SURFLO R ) was inserted for delivery of intraoperative crystalloid fluids (Hartmann's, Baxter Health, Australia). Anaesthesia was maintained with inhaled isoflurane (1.5-2.0% in 3 L of air and 500 mL of oxygen, Henry Schein, Australia) and continuous ketamine infusion (4 mg/kg/hr) via the jugular catheter. An arterial catheter (20 g, Terumo SURFLO R ) was placed in the distal hindlimb to yield arterial blood samples for blood gas analyses.
A paediatric blood pressure cuff (Easy Care Cuff, Phillips) was placed on the proximal forelimb for a non-invasive measure of arterial blood pressure which was manually recorded at 5-minute intervals.
Intraoperative procedures
Stroke surgery was performed as previously described in detail (30). Due to the presence of a rete mirabile in sheep, endovascular methods are precluded and direct access to the cerebrovasculature is required [for details and review please see (10, 44-46)]. To achieve this, an incision was made between the right ear and orbital rim, the coronoid process of the mandible lateralised, and the skull base exposed to perform a small craniotomy using a pneumatic drill (Midas Rex® Legend Electric System, Medtronic USA). A 2 cm skull flap was removed, the underlying dura breached, the proximal middle cerebral artery (MCA) located, and an aneurysm clip (Aesculap YASARGIL® Aneurysm Clip, Germany) placed over the vessel, which remained in situ for 2 h. The clip was subsequently removed to achieve reperfusion, the dura closed watertight with synthetic matrix (Durepair®, Medtronic, USA) and cyanoacrylate adhesive (Bostik, Australia), cranioplasty performed using dental cement (Sledgehammer, Keystone, Germany), and the surgical site closed in layers using polyglactin suture (Vicryl®, ETHICON). Arterial blood samples were obtained at hourly intervals intra-operatively to maintain the animal within normal physiological limits.
Postoperative recovery
Animals were removed from anaesthesia and, once lucid, treated with subcutaneous non-steroidal anti-inflammatory (NSAID, 0.7 mg/kg, 50 mg/mL every 12 h, Carprofen, Norbrook, Australia) and intramuscular Buprenorphine (Temgesic, 1.0 mL, 324 µg/mL Buprenorphine hydrochloride, Reckitt Benckiser, Australia) for pain relief, and intramuscular Depocillin for antibiosis (1 mL/25 kg every 12 h, Procaine benzylpenicillin, Intervet, Australia). NSAID and antibiotic treatment was continued for 3 days post-operatively, and as required thereafter. Clinical assessment was carried out twice daily to determine animal wellbeing, including urine and faecal output, food and water intake, and signs of apathy. Animals remained in indoor housing for 3 days post-operatively, after which they were returned to protected outdoor pens and housed individually.
Magnetic resonance imaging
Twenty (n = 10M; 10F) of the 26 animals underwent MRI at 3 days post-stroke under general anaesthesia (1.5% isoflurane, Henry Schein, Australia) on a 3 Tesla (T) Siemens Magnetom (Siemens AG, Munich, Germany) using a posterior 20-channel head coil enabling collection of T2 fluid attenuated inversion recovery (FLAIR) and diffusion weighted images (DWI). Axial T2 FLAIR sequences were acquired with a slice thickness of 0.89 mm, repetition time (TR) 5000 ms, echo time (TE) 386 ms, 1 average, flip angle of 12°, acquisition matrix of 256 × 256, and in-plane resolution of 0.39 mm/pixel. DWI sequences were acquired with a slice thickness of 3 mm, TR 5600 ms, TE 80 ms, 1 average, flip angle of 180°, acquisition matrix of 190 × 190, in-plane resolution of 0.85 mm/pixel, with 4 diffusion directions and b-values of 0 and 1000 s/mm2. Semi-automated segmentation of b-1000 DWI data was performed using ITK-SNAP (v3.8) to estimate infarct volume as previously described (47). Midline shift was calculated using axial T2 FLAIR images on RadiAnt (v2020.2). The midline between the left and right hemisphere was defined at the level of the foramen of Monro and the degree of shift measured perpendicular to the midline in mm where the septum pellucidum was most displaced. Three measurements were recorded at intervals (one at the level of the foramen of Monro, one 4 mm superior, and one 4 mm inferior) and subsequently averaged to provide a single value in mm.
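Given a binary lesion mask exported from the segmentation (ITK-SNAP in this study), the infarct volume is simply the number of lesion voxels multiplied by the voxel volume. A minimal sketch using nibabel is shown below; the file name is a placeholder.

```python
import numpy as np
import nibabel as nib

# Placeholder path to a binary lesion mask exported from ITK-SNAP as NIfTI.
mask_img = nib.load("sheep01_dwi_lesion_mask.nii.gz")
mask = np.asarray(mask_img.dataobj) > 0

# Voxel volume (mm^3) from the image header, then lesion volume in cm^3.
voxel_mm3 = float(np.prod(mask_img.header.get_zooms()[:3]))
infarct_cm3 = mask.sum() * voxel_mm3 / 1000.0
print(f"Infarct volume: {infarct_cm3:.2f} cm3")
```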
Statistics
Statistical analysis was performed using Stata (version 17.0, StataCorp, College Station, TX). Normality was assessed via visual inspection of histograms. Normally distributed continuous variables are reported as mean and standard deviation (SD) and analysed using parametric modelling. Skewed data are reported as median and interquartile range (IQR) and analysed using non-parametric tests.
Baseline repeatability analysis
Intraclass Correlation Coefficients (ICC) were used to describe repeatability across the three baseline sessions (8-, 5-, and 1-day pre-stroke). For neurological composite scoring, ICC estimates were based on a mean-rating (k = 3), absolute agreement, two-way mixed effects model with non-parametric bootstrapped 95% confidence intervals (CI). Differences between baseline testing days were analysed using a Kruskal-Wallis test.
For gait kinematics, ICCs were based on a mean-rating (k = 3), absolute agreement, two-way mixed-effects model with parametric 95% CI. For limb-specific measures, two ICC values were derived: one un-adjusted and one adjusted for the potential confounding effect of walking speed (measured as mean absolute velocity at T1) using linear regression analyses. One-way repeated measures analysis of variance (ANOVA) was used to determine if there was a difference between the three baseline measures for each kinematic variable.
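The baseline ICCs described above can be reproduced with standard packages once the data are in long format (one row per animal per baseline session). The sketch below uses pingouin; the column names are illustrative, and the average-measures, absolute-agreement estimate (labelled ICC2k by pingouin, whose formula coincides with the mixed-effects absolute-agreement definition used here) is the value of interest.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per animal per baseline session (column names illustrative).
df = pd.read_csv("baseline_kinematics.csv")   # columns: sheep, session, stride_duration

icc = pg.intraclass_corr(data=df, targets="sheep", raters="session",
                         ratings="stride_duration")
# Average-measures, absolute-agreement estimate across the three sessions.
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])
```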
Post-stroke analysis
To determine differences in neurological scoring pre- vs. post-stroke, the mean across all baseline trials for each criterion was calculated to provide a single value. The mean baseline value was subsequently compared with 3 days post-stroke using a Mann-Whitney U-test.
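The pre- versus post-stroke comparison of total neurological scores reduces to a rank-sum test on two sets of values. A minimal sketch is shown below; the score values are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative values: mean baseline and day-3 post-stroke total scores per sheep.
baseline_scores = np.array([0.0, 0.25, 0.0, 0.5, 0.0, 0.25])
poststroke_scores = np.array([6.5, 4.0, 9.25, 3.5, 7.0, 5.75])

stat, p = mannwhitneyu(baseline_scores, poststroke_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```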
To determine the change in gait kinematics pre- vs. post-stroke, a single baseline measure comprising the mean of the three baselines was also calculated for each variable of interest. Linear mixed models (LMMs) were used to determine differences post-stroke. A random effect of sheep was used to account for the correlation between repeated or multiple measures on the same animal. Fixed effects were time (pre-, post-stroke), limb (left, right), and a time-by-limb interaction term. The interaction term was necessary as the right sided stroke was expected to cause left-sided deficits (with potential right-sided compensation) thus producing a side-dependent effect of time. Two models were fitted for each limb-specific measurement; the first un-adjusted and the second adjusted for velocity. Estimates of the difference between baseline and post-stroke were derived for each limb.
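The mixed models described here translate directly into a formula interface. The sketch below uses statsmodels with a random intercept per sheep and a time-by-limb interaction; the long-format column names (and the choice of stance duration as the response) are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per sheep x time point (pre/post) x limb (left/right); names illustrative.
df = pd.read_csv("stance_duration_long.csv")  # columns: sheep, time, limb, velocity, stance_s

# Un-adjusted and velocity-adjusted models, each with a random intercept for sheep.
unadjusted = smf.mixedlm("stance_s ~ time * limb", data=df, groups=df["sheep"]).fit()
adjusted = smf.mixedlm("stance_s ~ time * limb + velocity", data=df, groups=df["sheep"]).fit()
print(adjusted.summary())
```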
Gender analysis
The effect of gender was assessed pre- and post-stroke for the following variables: infarct volume on DWI (post-stroke only), total neurological score, kinematic global measures including mean absolute velocity and mean head to T1, and limb specific measures for the fore- and hind-limbs (both left and right) including minimum, maximum and range of the fetlock in stance, minimum, maximum and range of the fetlock in swing, and duration of stance, swing, and stride. Pre-stroke comparisons used the mean of all baseline measures. Infarct volume and neurological score were analysed using a Mann-Whitney U-test. Kinematic global measures were analysed using linear regression modelling. Limb specific measures were analysed using LMMs with fixed effects for gender and leg and random effect for animal. All models were adjusted for velocity.
Principal component analysis
To determine the relationship between gait kinematics, total neurological examination score, and infarct volume at 3 days post-stroke, a Principal Components Analysis (PCA) was performed. The measures considered for inclusion in the PCA were infarct volume on DWI, total neurological score, kinematic global measures including mean absolute velocity and mean head to T1, and limb specific measures for the forelimbs (both left and right) including minimum, maximum and range of the fetlock in stance, minimum, maximum and range of the fetlock in swing, and duration of stance, swing, and stride. The number of extracted components for analysis was based on eigenvalues >1, and inspection of scree plots. A correlation matrix was used to assess correlations between variables. Kaiser-Meyer-Olkin (KMO) measures of sampling adequacy were used to assess how suitable the data was for PCA, with scores assigned to each variable and the complete model. Individual scores <0.50 implied that the variable was not sufficiently correlated with the other variables to warrant inclusion and was excluded from final analysis. Bartlett's test was used to assess whether the variables, after PCA, presented variable homogeneity.
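The component extraction and its adequacy checks can be reproduced with factor_analyzer and scikit-learn, as sketched below. The input dataframe and its column names are illustrative; variables with an individual KMO below 0.5 are dropped and components are retained by the eigenvalue-greater-than-one rule, mirroring the procedure described above.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("poststroke_measures.csv")   # infarct volume, total score, kinematic variables

kmo_per_variable, kmo_total = calculate_kmo(df)
df = df.loc[:, kmo_per_variable >= 0.5]       # drop variables poorly correlated with the rest
chi2, p = calculate_bartlett_sphericity(df)

z = StandardScaler().fit_transform(df)
pca = PCA().fit(z)
n_keep = int((pca.explained_variance_ > 1).sum())   # Kaiser criterion (eigenvalues > 1)

loadings = pd.DataFrame(pca.components_[:n_keep].T,
                        index=df.columns,
                        columns=[f"PC{i+1}" for i in range(n_keep)])
print(f"KMO = {kmo_total:.2f}, Bartlett chi2 = {chi2:.1f} (p = {p:.3g})")
print(loadings.round(2))
```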
Statistical interpretation
Results for ICCs are presented as ICC and 95% confidence intervals (CI). ANOVA, Kruskal-Wallis and Mann-Whitney U-tests between baseline sessions are presented as p-values. Results from LMM and linear regression models are presented as mean difference, 95% CI, and p-value. Interpretation of ICC values was <0.50 poor; 0.50-0.75 moderate; 0.75-0.90 good; >0.90 excellent (48). A p < 0.05 was considered statistically significant throughout.
Results
Two animals were euthanised prematurely and excluded from the study (intravenous administration of 160 mg/kg sodium pentobarbital, Lethabarb, Australia). One animal had unsuccessful reperfusion of the MCA resulting in a permanent stroke, and the other had kidney failure leading to seizures. Twenty-four animals (n = 12M; 12F) reached the experimental endpoint for neurological scoring and gait kinematics and were included in the final analysis. Twenty (n = 10M; 10F) of these animals underwent MRI and were subsequently used for the PCA.
Baseline repeatability analysis of limb parameters
There was no difference between baseline trials for any limb-related variable (all p ≥ 0.050), with the exception of the range of the hoof height in swing for the left forelimb (p = 0.036, 6.73 ± 0.33 m/s). ICC repeatability was good for some, but not all, of the recorded measures (highlighted by an asterisk (*) in Supplementary Table S4 for the forelimbs and Supplementary Table S5 for the hindlimbs). To allow for comparison with post-stroke trials, the outcome measures for each individual limb are described herein.
To probe these observations further, differences between ipsi- and contra-lateral limb pairs (fore- and hind-limbs), indicative of asymmetry, are reported in Supplementary Table S10 (forelimbs) and Supplementary Table S11 (hindlimbs). No differences were observed between left and right forelimbs post-stroke when unadjusted and adjusted for velocity (all p > 0.17, Supplementary Table S10). Significant differences were, however, observed between the left and right hindlimbs [highlighted by an asterisk (*) in Supplementary Table S11].
Following stroke there were no differences observed between genders for total neurological score or global kinematic measures (all p > 0.05, Supplementary Table S15). Results for the forelimbs are reported in Supplementary Table S16.
Infarct volume
All animals displayed evidence of infarction in the right parietal lobe encompassing the thalamus and/or cortical regions as quantified on DWI at 3 days post-stroke. Median (IQR) infarct volume was 2.7 (1.4 to 11.9) cm3 (raw values shown in Table 2). Those animals with larger infarcts exhibited a greater degree of midline shift, indicative of space occupying oedema (Supplementary Table S18), although infarct volume was not corrected for oedema due to lesion variability. Due to significant variation in lesion volume, animals with infarcts >18 cm3 (median: 21.99 cm3, IQR: 19.33 to 25.46 cm3, n = 5) were compared to those with infarcts measuring <6 cm3 (median: 1.99 cm3, IQR: 0.90 to 3.19 cm3, n = 15) (Supplementary Table S19) for each of the following variables: total neurological score, kinematic global measures including mean absolute velocity and mean head to T1, and limb specific measures for the fore- and hind-limbs (both left and right) including minimum, maximum and range of the fetlock in stance, minimum, maximum and range of the fetlock in swing, and duration of stance, swing, and stride. All limb-specific measures were adjusted for velocity. Differences were assessed as per the gender analysis described in the Statistics section.
No differences were observed between infarcts >18 cm3 and <6 cm3 for total neurological score or either kinematic global measure (all p > 0.05, Supplementary Tables).
Principal component analysis
Due to high correlations (r > 0.85), the following variables were removed from the PCA: minimum and maximum fetlock angle in stance, maximum fetlock angle in swing, and stance and stride duration. Gait kinematic variables below the threshold for KMO (<0.5) were excluded from PCA, including: minimum, maximum, and range of the right fetlock in stance, maximum and range of the left fetlock in stance, the minimum and maximum angle of fetlock in swing (left right forelimbs) and stance duration. The final PCA was thus fitted with infarct volume, total neurological score, kinematic global variables including mean absolute velocity and mean position of the head to T1, and limb specific variables for the left forelimb including minimum angle of the fetlock in swing and stance duration, and duration in swing for both left and right forelimbs. A summary of these variables is provided in Table 2. The final PCA produced an overall KMO = 0.67, implying that the data was appropriate for performing PCA. Two components had eigenvalues >1 which explained 67.2% of the variance. Bartlett's test of sphericity showed that there was an interrelationship among the final variables reported (χ 2 = 91.7, p < 0.001).
The final PCA yielded two components (Table 3), with the summary loading plot shown in Figure 4. Principal component 1 (PC1) accounted for 51.5% of the overall variance. PC1 was characterised by positive associations with stance duration of the left forelimb, total neurological score, swing duration (both left and right forelimbs); and negative associations with mean head to T1 and mean absolute velocity. Principal component 2 (PC2) related positively to infarct volume and minimum fetlock in stance (left forelimb); and negatively to swing duration (left forelimb) and mean head to T1.
Discussion
In this study we present a comprehensive approach to assessing functional outcome in an ovine model of ischaemic stroke. First, through adaptation of a neurological assessment score, we characterised the pre- and post-stroke response of animals, including demeanour, behaviour, and postural reactions. Second, using motion capture, we developed a method to detect changes in gait kinematics, representing the first description of this approach to functional assessment in an ovine stroke model. We have shown both approaches to be repeatable in healthy animals through comparison of baseline pre-stroke trials, and subsequently used these findings to assess changes in functional outcomes at 3 days post-stroke.
Neuroscore
Neurological composite scoring remains a valuable tool both in pre-clinical models and clinical patients to assess functional outcomes across the post-stroke time course. Through adaptation of an ovine neurological score, this study demonstrated that composite scoring was a repeatable means to assess neurological function for most measures of interest. Stroke prognostic scores such as the NIHSS perform well in predicting clinical outcomes post-stroke (49). Post-stroke neuroscore values in the present study reflected significant functional impairment post-ictus, in keeping with the clinical literature and previous ovine studies (27). The most profound deficits observed included alterations in demeanour, including lowering of the head, and general apathy. It must be highlighted that the postoperative course is frequently reported as a potential confounder of animal demeanour (50), such that this may not be an observation linked solely to post-stroke sequelae given assessment was carried out 3 days post-operatively.
In comparison, postural disturbances are a reliable indicator of veterinary neurological dysfunction, including stroke (51). Herein, post-stroke animals displayed abnormal movement of the forelimbs, evidenced by both ipsi- and contra-lateral postural reactions during conscious proprioceptive positioning. Slight variability was observed in baseline postural reaction tests and postural assessment of the left limbs was not considered repeatable. This likely reflects difficulty in performing postural reactions in large animal species due to the need for significant manual handling, in addition to the fact that on occasion, animals were unwilling to perform the task, lying down or showing no desire to respond to the perturbation. In evaluating ipsi- vs. contra-lateral deficit post-stroke, postural reaction tests revealed significant deficits in the left limbs when compared to pre-stroke, although differences were also observed in the right limbs following stroke onset. These findings suggest global deficits, rather than limb-specific changes, were apparent in our ovine cohort 3 days following stroke.
Gait kinematics

Repeatability of gait kinematics

Assessment of gait kinematics using motion capture sought to detect subtle changes beyond the scope of composite scoring. Repeatability of human gait kinematics using motion capture in multiple laboratories is good (ICC > 0.80), supporting its use as a valuable tool across a range of environments and in different species (52). When determining the repeatability of global outcome measures of interest, this study revealed that the position of the head in relation to T1, T13, and L7 across baseline trials had good repeatability, with animals consistently walking with their head upright, and slightly towards the right of the functional run. Although the average speed of walking was comparable across animals, velocity had poor repeatability. This may have subsequently influenced limb-specific parameters of interest given gait patterns change as a function of velocity (53, 54). This is true under normal physiological conditions, and factoring in all velocity-related changes when assessing gait in disease is especially challenging. Previous studies have suggested that observation of gait characteristics when speed is not controlled leads to variation from trial to trial, which is true for both experimental animals (55-58) and human participants (59-61). Neglect of velocity has also been proposed to lead to oversimplification of analysis and loss of potentially valuable data (54).
To address this, previous studies have used treadmills for functional assessment to control for velocity (35-37). Although this offers the advantage of regulating walking speed, the selected speed of the treadmill has been shown to directly influence walking patterns (62-64). Importantly, faster speed has been shown to facilitate a more normal walking pattern following stroke in humans (65). Given the unilateral effects of MCAo, the ability to accurately assess deficits of symmetry is imperative. Allowing animals to walk at self-selected pace may enable more accurate assessment of asymmetric gait following stroke, which can retrospectively be adjusted for velocity. In the current study, significant differences between left and right hindlimbs were observed following stroke. These differences remained when adjusted for walking speed via regression analysis. Use of regression-based analyses has been suggested as a robust approach to translational gait analysis and may be particularly relevant in the setting of stroke (54, 57). Indeed, the application of regression-based velocity adjustment reported in the current study suggests the method is reproducible. This enables application in various experimental conditions and environments such as those where velocity is not a controlled measure. Enabling animals to walk at their own pace also allowed us to determine the 'comfortable' walking speed pre- and post-injury: an important consideration from an animal welfare perspective. Thus, adjusting for velocity during post-processing is of benefit from both ethical and experimental perspectives, and favours the generation of more reliable data.
Regarding limb-specific outcome measures, we observed good repeatability for most, but not all, variables, with the few that were not repeatable varying between left and right limbs. Specifically, all joint angles were repeatable except for the left forelimb and right hindlimb minimum fetlock angle during swing, regardless of velocity adjustment. Retro-reflective markers on the distal limbs were smaller (9 mm) than the other markers (15 mm), which was essential due to the close proximity of placement on the hooves. Consequent reduction in spatial resolution necessitated more extensive gap filling of these markers during post-processing. Tattooing was also less distinguishable over the superficial bones of the distal limb, such that reattachment of markers at these sites may have been more variable. Together, these factors may have introduced more error, potentially confounding experimental results. Other fetlock angle parameters; however, were repeatable in baseline testing, including the minimum fetlock angle during swing for the right forelimb. Discrepancy may thus also represent variation of animal behaviour during overland walking. Future analyses should aim to improve data capture for the distal forelimbs and focus on assessing repeatable measures reported herein to accurately evaluate the effect of stroke and post-stroke interventions on gait kinematics.
The baseline gait outcomes reported in this study were generally consistent with other gait assessments performed in healthy sheep. Shelton and colleagues reported a stride length of approximately 1 m in mature female sheep (66), comparable to the present study (∼98 cm for all limbs). The duration of the gait cycle phases was also consistent with previous studies (35, 37, 67-70), as summarised in Table 4.
Post-stroke assessment of gait kinematics
Following stroke, sheep had reduced forward velocity and a lowered head position relative to T1, in addition to lowering of the shoulders and thorax (position of T1 to T13 and T1 to L7, respectively). These findings potentially indicate motor deficit and/or animal apathy. Post-stroke apathy, mood and emotional disturbances are commonly reported clinically, presenting as a loss of motivation and initiative (74). Conducting cognitive tasks may provide a more accurate measure of motivation, and systems developed for use in sheep for other pathologies (75-78) may be a helpful avenue for assessment in ovine stroke models to probe underlying mechanisms.
Regarding limb-specific parameters, swing, stance, and stride durations were substantially longer post-stroke compared to baseline. These changes were observed in both the ipsi- and contra-lateral fore- and hind-limbs. This, in conjunction with decreased velocity, indicates that animals were less willing/able to execute forward movement. Furthermore, lateral deviation of the hoof in both left and right forelimbs was less than pre-stroke, as were forward and lateral swing velocity, indicating more "drag" of the limb and slower pace, respectively. Dysfunction of the left forelimb, contralateral to the lesion, was qualitatively observed following stroke, as per previous studies (27, 30). However, this was not uniform in all animals, as shown in the exemplar data for two animals in Figure 5. Therefore, over all animals, we did not detect pre-/post-stroke differences in joint angle minimum, maximum and range of the left forelimb, with the exception of the range of the elbow joint. It is important to note that the minimum angle of the elbow had poor repeatability across baseline sessions, so the significance of this finding is questionable.
Adjusting for velocity reduced the mean difference between pre- and post-stroke for most outcomes, although significant differences remained for stance, stride, and hoof lateral deviation, irrespective of limb. This suggests that the change observed in some of the gait parameters following stroke may reflect an alteration in gait signature due to the underlying pathology, not just a change in gait speed. Although these changes remained, they were not as anticipated when observing the animal qualitatively. As per the exemplar data (Figure 5), if significant side-dependent deficit was present, a reduction in stance of the affected limb due to hemiparesis and inability to execute motor control, and a subsequent compensatory increase in stride of the unaffected limb, would be expected. The lack of significant differences between pre- and post-stroke forelimb joint angles was unexpected, particularly for the fetlock. Although fetlock paresis was observed during testing, deficit was not pronounced for every animal, and if deficit was present, it was not consistent for every step of the gait cycle. Consequently, although we observed a qualitative loss of motor control of the left (contralateral) fetlock in 5/20 animals, this deficit was not captured in the data reported herein.
Furthermore, although we did not observe asymmetry between the left and right forelimbs post-stroke, differences in symmetry were observed in the hindlimbs. Results suggest the left hindlimb, contralateral to the stroke, was impacted more than the right, particularly the tarsus joint. Specifically, the minimum and maximum angle of the tarsus was greater in the left compared with the right hindlimb, which was evident during both stance and swing. This may be indicative of left hindlimb deficit, especially as both the minimum and maximum angle of the tarsus had moderate-to-good repeatability across baseline trials, both adjusted and un-adjusted for velocity. Nevertheless, we cannot discount that this finding may be indicative of variability between animals following stroke, rather than discrete post-stroke deficits of symmetry. Further trials and/or sessions post-stroke may be necessary to increase the likelihood of accurately detecting sided deficit, suggesting an avenue for future development.
Following stroke, male animals exhibited extended stance, swing and stride duration of the forelimbs, and longer swing and stance duration in the hindlimbs, compared with female animals. Nevertheless, these differences were also observed at baseline, and appear to reflect inter-gender variation rather than a consequence of male animals being more affected by the stroke itself.
Relationship between gait kinematics, neuroscore, and MRI parameters
The PCA revealed that the included parameters clustered into two components of associated variables. For component 1, stance duration (left and right), swing duration (right) and neurological score had a positive association, while mean absolute velocity and mean head to T1 were negatively associated. Given the relationship between stance and swing duration in the normal gait cycle, it is logical that these variables load onto the same component. Further, the inclusion of neurological score with this cluster of variables is also logical considering that the neurological assessment provides an indicator of overall disability and encompasses measures of balance, which likely align with the stance and swing variables. The negative association between mean absolute velocity and mean head to T1 likely reflects the observation that following stroke, animals that were more disabled tended to walk more slowly and had an apathetic demeanour, including a lowered head position whilst walking through the run. Decreased velocity subsequently increased the duration of stance and swing, hence the negative association. Minimum fetlock during stance (left) and infarct volume were both positively associated with component 2. The component loading plot showed that infarct volume did not load strongly onto component 1, and this likely reflects the variation in infarct volume we observed within the cohort following stroke. Taken together, these findings suggest that there are some associations between neuroscore, gait kinematic and infarct volume variables, but they do not support one measure being used in isolation. The results highlight the importance of a multimodal approach to assessing post-stroke outcome that encompasses both medical imaging information and assessment of function (neuroscore, gait kinematics). In pre-clinical studies, the use of multiple outcome measures encompassing infarct volume and behavioural assessment serves to underscore the recommendations of the Stroke Therapy Academic Industry Roundtable (STAIR) preclinical guidelines (79). As different neurological deficits recover at varying rates post-stroke (80), modality-specific approaches to assess functional outcomes of interest may be warranted when assessing putative stroke therapies and should be factored into experimental design.
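A principal component analysis of the kind summarised above can be sketched as follows; the variables, data and the two-component choice are illustrative assumptions rather than the study's actual pipeline.

# Hedged sketch: PCA over standardised outcome variables, inspecting loadings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical matrix: rows = animals, columns = outcome variables
# (e.g. stance L/R, swing R, neuroscore, velocity, head-to-T1, infarct volume).
X = rng.normal(size=(20, 7))

Xz = StandardScaler().fit_transform(X)   # z-score each variable first
pca = PCA(n_components=2).fit(Xz)

loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pca.explained_variance_ratio_)     # variance captured by components 1 and 2
print(loadings.round(2))                 # sign and magnitude of each variable's loading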
Finally, despite significant variation in infarct volume, animals with comparably large stroke volumes (>18 cm³) exhibited a worsening of functional deficit only in limb-specific variables compared with animals with smaller stroke volumes (<6 cm³). Specifically, animals with a greater lesion burden displayed an increase in stance duration of the forelimbs, although this was not isolated to the contralateral limb. Furthermore, although a significant increase in stance duration was also observed in the hindlimbs, this was not unilateral. These findings suggest an overall increase in global deficit for animals with a greater stroke burden, rather than the unilateral impairment often seen clinically.
Limitations and future directions
There were several limitations in this study. Regarding composite scoring, we did not assess for sensory deficits despite their inclusion in clinical stroke scoring systems. Previous studies have reported that sheep rapidly habituate to nociceptive stimulation (27), and thus we chose to focus on behavioural and motor deficits. Regarding kinematics, we must firstly acknowledge that reflective markers attached to areas with more overlying tissue were prone to skin motion artefact. While marker pins inserted directly into the bone can eliminate skin motion artefact, this was not possible from an animal welfare perspective given that pins can be painful and increase the likelihood of infection, which is especially relevant given the large number of markers in the present study. Skin motion artefact was minimised by selecting marker positions with minimal overlying soft tissue, not performing analysis of joints/bones with substantial overlying muscle mass, and tattooing the skin to make marker placement repeatable. Secondly, we only performed 2D analysis, predominantly in the sagittal plane. Three-dimensional joint angle analysis provides more comprehensive (i.e., rotations about three axes) and accurate (i.e., relative to anatomical coordinate systems rather than a GCS) joint angle assessment. We were limited in our approach due to the relative size of the animal, where, given the number of joints assessed, there was insufficient space on rigid bodies to place additional markers. Future analyses should focus on refining the assessment to the outcomes of most relevance and, where possible, ensuring markers are placed on rigid bodies. Thirdly, due to 2D analysis, any deviation from straight-line walking in the forward direction (y axis in the GCS) could lead to errors in sagittal plane measures. We sought to minimise this by limiting the width of the run (1 m), using only one gait cycle per trial for which the animal was walking over the centre of the capture volume, and discarding trials where animals deviated from straight-line walking. Fourthly, although this represents a comprehensive study for a large animal model, the number of animals used may limit interpretation of the statistical analyses. Certainly, utilising an even larger sample size than employed in the current study would limit variability and improve PCA interpretation.
It must also be acknowledged that for gait kinematics, neuroscore, and MRI, we only report a single time-point post-stroke. To accurately capture the temporal profile of post-stroke changes, functional studies should, ideally, mimic the clinical scenario where assessment is performed up to 90 days following stroke onset. Nevertheless, the purpose of the current study was to describe the capability of the functional assessment methods, in particular gait kinematics, to detect changes at 3 days post-stroke, rather than to characterise the temporal profile of post-stroke functional changes in detail. We acknowledge that the stroke may not be fully organised by this time and that animals may still be affected by the post-operative course; however, the decision to focus on day 3 post-stroke was made to avoid any residual effect of long-duration anaesthesia at day 1, and to precede the onset of space-occupying oedema at day 5 post-stroke (30). A follow-up study which goes beyond day 3 to provide a comprehensive and long-term assessment of post-stroke functional changes in this model is certainly warranted.
Finally, the 2-hour transient MCAo model reported herein was associated with a greater variability in lesion volume compared with permanent MCAo stroke (7.40 ± 9.59 cm³ at 3 days compared with 16.3 ± 5.2 cm³ at 1 day) (27), which may reflect differing arterial collateralisation between individual animals. In addition to large vessel stroke such as the MCA infarction described here, it is also pertinent to investigate the functional consequences of small vessel stroke. Specifically, lacunar infarcts typically have quite favourable functional prognoses (81), although there is a paucity of small vessel stroke models described in the literature, representing an avenue for future research.
Conclusions
Functional outcome is a major end-point in stroke clinical trials, and an essential component of pre-clinical stroke models. In this study we developed and described comprehensive methods to assess function post-stroke in a clinically-relevant ovine model. Following stroke, animals exhibited deficit, observed both via composite scoring and kinematically via motion capture. Taken together, these methods of functional assessment may provide an opportunity for the evaluation of medical and surgical interventions following stroke, and assessment of their contribution to function in a sheep model.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author. | 2023-02-22T16:13:55.671Z | 2023-02-20T00:00:00.000 | {
"year": 2023,
"sha1": "f9a9469d35e48d34b39f9739127947409f47fa8d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "74b43f28fc8d4aec86b336b0eef2d992458e93c4",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
16667229 | pes2o/s2orc | v3-fos-license | Low-Mass $e^+e^-$ Pairs from in-Medium $\rho$ Meson Propagation
Based on a realistic model for the rho meson in free space we investigate its medium modifications in a hot hadron gas generated by hadronic rescattering processes, i.e. renormalization of intermediate two-pion states as well as direct rho meson scattering off hadrons. Within the vector dominance model the resulting in-medium rho spectral function is applied to calculate $e^+e^-$ spectra as recently measured in heavy-ion collisions at CERN-SpS energies in the CERES experiment.
Introduction
The main goal of ultrarelativistic heavy-ion collisions is the identification of possible phase transitions in strongly interacting matter associated with chiral symmetry restoration and/or deconfinement. Experimental signatures of such transitions have to be disentangled from hadronic rescattering processes occurring in the later stages of central collisions. Even though electromagnetic probes (photons and dileptons) can traverse the hadronic interaction zone without further distortion, the eventually observed spectra will be contaminated with contributions arising from conventional hadronic mechanisms and even decays after the hadronic freezeout. Focussing on dilepton production, various 'conventional' sources of background radiation, depending on the invariant mass range, are to be expected: for $M_{l^+l^-} \geq 1.5$ GeV (l = µ, e) Drell-Yan processes have to be disentangled from, e.g., a possible enhancement due to thermal $q\bar q$ annihilation in a QGP, or from anomalous J/Ψ, Ψ′ suppression, which may or may not be due to QGP formation; for $M_{l^+l^-} \leq 1.5$ GeV, the spectrum should be dominated by hadron decays. Here, the light vector mesons ρ(770), ω(782) and φ(1020) are of particular interest, since they can directly couple to dilepton pairs. Among these the rho meson is of special importance due to its short lifetime ($\tau^{\rm free}_\rho = 1.3$ fm/c), which is about an order of magnitude smaller than the typical lifetime of the hadronic fireball, $\tau_{\rm fireball} \approx 10$ fm/c (to be compared with $\tau^{\rm free}_\omega = 23$ fm/c and $\tau^{\rm free}_\phi = 44$ fm/c). A systematic study of low-mass $e^+e^-$ production in p-Be, p-Au and S-Au collisions at CERN-SpS energies has recently been performed by the CERES/NA45 collaboration [1]. Whereas their event generator (accounting for 'primary' hadron decays) can successfully describe the p-A data, an overall factor of about 5 enhancement was observed in the S-Au case, reaching a maximum factor of ∼10 around invariant masses $M_{e^+e^-} \simeq 0.4$ GeV (similar results have been obtained by the HELIOS-3 collaboration [2]; preliminary data from Pb-Au collisions further confirm these findings [3]). The inclusion of free $\pi^+\pi^- \to e^+e^-$ annihilation in transport [4,5] or hydrodynamical [6,7] simulations of the collision dynamics has been shown to reduce this discrepancy, still leaving a factor of up to 3 too little yield below the ρ mass. So far, the only quantitative explanation of these data could be achieved by assuming a density- and temperature-dependent dropping ρ mass according to the Brown-Rho scaling conjecture [8], interpreted as a signature of (partial) chiral symmetry restoration. However, in this contribution we try to demonstrate that 'conventional' hadronic rescattering mechanisms of the ρ meson in a hot and dense hadronic environment seem to be sufficient to account for the experimentally observed $e^+e^-$ excess in central S-Au (200 GeV/u) and Pb-Au (158 GeV/u) collisions [9].
The ρ Meson in Free Space
A satisfactory yet simple description of the ρ meson in the vacuum can be achieved by renormalizing a 'bare' ρ of mass $m^{\rm bare}_\rho$ through coupling to intermediate two-pion states. The scalar part of the free ρ propagator is then determined by a selfenergy that contains the ρππ vertex function $v_{\rho\pi\pi}$ as well as the free two-pion propagator $G^0_{\pi\pi}(M,k)$; the explicit forms are sketched below. The subtraction at zero energy ensures the correct normalization of the pion electromagnetic form factor ($F_\pi(0)=1$), which in the vector dominance model (VDM) is expressed through the same propagator. The bare mass $m^{\rm bare}_\rho$ and coupling $g_{\rho\pi\pi}$ (entering $v_{\rho\pi\pi}$) are easily tuned to reproduce the experimental data on p-wave ππ scattering and the pion electromagnetic form factor in the timelike region [9,10].
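The propagator and form-factor expressions referred to above can be written, in one commonly used convention, roughly as below. This is a sketch only: the exact normalisations, sign conventions and the form of the selfenergy integral are assumptions rather than quotations of the original equations.

$$ D^{0}_{\rho}(M) \;=\; \Big[\, M^{2} - \big(m^{\rm bare}_{\rho}\big)^{2} - \Sigma^{0}_{\rho\pi\pi}(M) \,\Big]^{-1}, \qquad \Sigma^{0}_{\rho\pi\pi}(M) \;\equiv\; \Sigma_{\rho\pi\pi}(M) - \Sigma_{\rho\pi\pi}(0), $$
$$ \Sigma_{\rho\pi\pi}(M) \;\sim\; \int \frac{k^{2}\,dk}{(2\pi)^{2}}\; v_{\rho\pi\pi}(k)^{2}\; G^{0}_{\pi\pi}(M,k), \qquad F_{\pi}(M) \;=\; \frac{\big(m^{\rm bare}_{\rho}\big)^{2}}{\big(m^{\rm bare}_{\rho}\big)^{2} + \Sigma^{0}_{\rho\pi\pi}(M) - M^{2}}, $$

so that the subtraction at zero energy enforces $F_{\pi}(0)=1$.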
Medium Modifications in ππ Propagation
The most important medium effects in the intermediate two-pion states of the ρ propagator are attributed to interactions with surrounding baryons, as discussed in refs. [11,12,13] for the case of cold nuclear matter. Therefore one first needs a realistic model for the in-medium single-pion propagator $D_\pi$. As is well known from pion-nuclear phenomenology, pion-induced p-wave nucleon-nucleon-hole ($NN^{-1}$) and delta-nucleon-hole ($\Delta N^{-1}$) excitations are the dominant mechanism. Since we are interested in URHICs at CERN-SpS energies (160-200 GeV/u), thermal excitations of the system should be taken into account, which, in the baryonic sector, are dominated by a large ∆(1232) component. Thus we extend the particle-hole picture to include π-∆ interactions as well, in the form of $N\Delta^{-1}$ and $\Delta\Delta^{-1}$ excitations. To calculate the corresponding medium-modified ρ selfenergy, vertex corrections of the ππρ vertex have to be included to ensure the conservation of the vector current. We here employ the approach of Chanfray and Schuck [13]. Within a full off-shell treatment of the pion propagation, in connection with the aforementioned extension to finite temperature, the imaginary part of the in-medium ρ selfenergy at zero 3-momentum, $\mathrm{Im}\,\Sigma_{\rho\pi\pi}$, can be expressed [10] through the longitudinal and transverse spin-isospin response functions $\Pi_L(k_0,k)$ and $\Pi_T(k_0,k)$, a factor α characterizing vertex corrections and thermal Bose distributions $f^\pi$. The real part is obtained from a dispersion integral:
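The dispersion integral announced above is not reproduced explicitly; a generic form consistent with the description (the two-pion threshold, principal-value prescription and any subtraction constant are assumptions) would be:

$$ \mathrm{Re}\,\Sigma_{\rho\pi\pi}(M^{2}) \;=\; \frac{\mathcal{P}}{\pi}\int_{4m_{\pi}^{2}}^{\infty} dk^{2}\; \frac{\mathrm{Im}\,\Sigma_{\rho\pi\pi}(k^{2})}{k^{2}-M^{2}} . $$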
Rho-Nucleon and Rho-Delta Interactions
In analogy to the pionic interactions with the surrounding medium, direct interactions of the (bare) rho meson with nucleons and deltas may have a substantial impact. Based on the observation that certain baryonic excitations (especially N(1720) and ∆(1905)) exhibit a strong coupling to the ρN decay channel (which suggests identifying them as 'ρN resonances'), Friman and Pirner proposed to derive a corresponding in-medium ρ selfenergy. As for pions, this is conveniently done in terms of p-wave (ρ-like) particle-hole excitations [14]. The ρN(1720)N and ρ∆(1905)N coupling constants are fixed by the experimental branching ratios (where it is very important to account for the finite ρ width in free space to obtain realistic values). Within our off-shell treatment we are also able to incorporate lower-lying ρN and ρ∆ contributions, the coupling constants for which are taken from the Bonn potential. Thus we obtain a selfenergy given by a sum over the excitations $\alpha = NN^{-1}, \Delta N^{-1}, N\Delta^{-1}, \Delta\Delta^{-1}, N(1720)N^{-1}, \Delta(1905)N^{-1}$ (sketched below), where the susceptibilities $\chi_{\rho\alpha}$ contain the loop integrals of the corresponding particle-hole bubble as well as short-range correlation corrections (parametrized by Migdal parameters $g'$) [9]. In Fig. 1 we display the transverse part of the ρ spectral function at normal nuclear matter density and small temperature T=5 MeV, with no medium modifications applied to the two-pion states. A pronounced structure of various branches is observed, in particular the $\rho N(1720)N^{-1}$ branch, which may be phrased 'Rhosobar' (in analogy to the 'Pisobar' $\pi\Delta N^{-1}$).
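The summed baryonic selfenergy ('Thus we obtain ...') has the generic structure sketched here; the RPA-type way in which the Migdal parameters $g'$ dress the susceptibilities is an assumption about the standard treatment, not a quotation of the original equation:

$$ \Sigma_{\rho B}(q_{0},q) \;=\; \sum_{\alpha} \Sigma_{\rho\alpha}(q_{0},q), \qquad \Sigma_{\rho\alpha}(q_{0},q) \;\propto\; \frac{\chi_{\rho\alpha}(q_{0},q)}{1-g'_{\alpha}\,\chi_{\rho\alpha}(q_{0},q)}, $$
$$ \alpha \;=\; NN^{-1},\ \Delta N^{-1},\ N\Delta^{-1},\ \Delta\Delta^{-1},\ N(1720)N^{-1},\ \Delta(1905)N^{-1}. $$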
Rho-Pion and Rho-(Anti)Kaon Interactions
Since in URHICs at CERN-SpS energies large numbers of secondaries are produced, we furthermore evaluate ρ scattering off the most abundant surrounding mesons, i.e. pions and (anti-)kaons. Assuming the interactions to be dominated by $a_1(1260)$ and $K_1/\bar K_1(1270)$ formation, the corresponding ρ selfenergy (in the Matsubara approach) can be written as a thermal average over the ρM scattering amplitude ($M = \pi, K, \bar K$), sketched below. The invariant scattering amplitude $\mathcal{M}_{\rho M}$ is derived from a suitable (gauge-invariant) lagrangian [15]. As long as the meson chemical potentials are kept zero, the effect of ρ-M scattering is rather small: at the highest temperatures considered (T = 170 MeV) we find a ∼60 MeV broadening of the ρ spectral function [9], which is similar to the results of ref. [16].
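The thermal selfenergy from ρ scattering off the light mesons, referred to above, is of the standard form in which the ρM scattering amplitude is folded with the thermal meson occupation; the precise kinematic factors below are assumptions about that standard form:

$$ \Sigma_{\rho M}(q;T) \;=\; \int\!\frac{d^{3}k}{(2\pi)^{3}}\; \frac{f^{M}(\omega_{k};T)}{2\,\omega_{k}}\; \mathcal{M}_{\rho M}(s), \qquad M=\pi,\,K,\,\bar{K}, \quad s=(q+k)^{2}, $$

with $f^{M}$ the thermal Bose distribution of meson $M$.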
3 $e^+e^-$ Spectra from in-Medium $\pi^+\pi^-$ Annihilation at the CERN-SpS
Invoking the phenomenologically well-established VDM, the dilepton production rate from $\pi^+\pi^-$ annihilation can be expressed in terms of the ρ spectral function, with $\mathrm{Im}\,D_\rho = \frac{1}{3}(\mathrm{Im}\,D_\rho^{L} + 2\,\mathrm{Im}\,D_\rho^{T})$. The full in-medium ρ selfenergy $\Sigma_\rho$ is the sum of the contributions discussed in sects. 2.2-2.4 (decomposed into transverse and longitudinal parts) [9]. Fig. 2 shows the ρ spectral function at fixed chemical potentials and given three-momentum: with increasing temperature/density a dramatic broadening is found, which, in particular, results in a pronounced enhancement over the free curve for invariant masses below M ≃ 0.6 GeV. For calculating $e^+e^-$ invariant mass spectra as measured in the CERES experiment, the differential rate eq. (9) has to be integrated over 3-momentum and the space-time history of a central 200 GeV/u S-Au reaction. For that we assume a temperature/density evolution as found in recent transport calculations [5]. The experimental acceptance cuts on the dilepton tracks as well as the finite mass resolution of the CERES detector are also included. We supplement our results for $\pi^+\pi^-$ annihilation with contributions from free Dalitz decays ($\pi^0, \eta \to \gamma e^+e^-$, $\omega \to \pi^0 e^+e^-$) and free $\omega \to e^+e^-$ decays as extracted from ref. [5], where medium effects are expected to be of minor importance. As can be seen from Fig. 3, the use of the in-medium ρ propagator (full curve) leads to reasonable agreement with the experimental $e^+e^-$ spectrum as observed in central S+Au collisions at 200 GeV/u. The same is true when comparing to the preliminary data for the heavier Pb+Au system at 158 GeV/u (see Fig. 4), where an accordingly modified temperature/density evolution has been employed. To summarize, our findings indicate that hadronic rescattering processes in in-medium ρ propagation seem to resolve the discrepancy between the experimentally observed $e^+e^-$ enhancement at CERN-SpS energies and theoretical results based on free $\pi^+\pi^-$ annihilation. Even though further improvements to our analysis need to be made, we tend to conclude that the BR-scaling conjecture of a dropping ρ mass is presumably not an independent phenomenon. For disentangling such a uniform mass shift from the dynamic mechanisms we discussed, the measurement of invariant mass spectra in various $p_T$ bins might provide new insights. | 2014-10-01T00:00:00.000Z | 1997-01-30T00:00:00.000 | {
"year": 1997,
"sha1": "3836e9c8ef585710c6fcd089e16fd9c9fce512ed",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "05ac2c1061a38b846be6906f0414e417be0b3325",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56066430 | pes2o/s2orc | v3-fos-license | Ponceau 4R: A Novel Staining Agent for Resolve Food Proteins on PAGE and Its Impact on Digestibility
Ponceau 4R interaction with the proteins nisin and BSA was concentration dependent and may be used for protein assay. As the dye binds with almost all proteins, the current methodology may be used for the estimation of proteins in various food systems. During the course of the present work, staining of resolved proteins on PAGE (polyacrylamide gel electrophoresis) with Ponceau 4R was comparable with Coomassie Brilliant Blue R250. Ponceau 4R was highly sensitive and rapid, and produced sharp red bands on the gel at 0.2% concentration. The effects of pH and of the concentrations of proteins and dye were also investigated under various conditions, which would help food processors to use a calculated amount of dye. The tryptic digestibility of Ponceau 4R-protein complexes (PPC) illustrated that the dye may safely be used without any adverse effect on the digestion of PPC.
Introduction
It is an age-old practice to enhance food quality and its aesthetic appeal by incorporation of edible colors in a variety of food systems like candy, marshmallow, soft drinks, vermicelli, etc. Synthetic colorants with wide chemical diversity represent a well-recognized group of food additives, performing the prime function not just of compensating for the loss of natural colors during processing but also of improving the appearance of the products. However, many of these dyes lead to health hazards (Downham & Collins, 2000), as indicated by the World Health Organization (WHO) and Food and Agriculture Organization (FAO). Synthetic food colors are represented by five major groups: azo dyes, triarylmethane dyes, chinophthalon derivatives of quinoline yellow, xanthenes and indigo colorants (Minioti, Sakellariou, & Thomaidis, 2007). Azo compounds are the most widely used dyes; they are characterized by the presence of chromophoric azo bonds (-N=N-) and include more than 300 components of different colors (Pardo, Yusa, Leon, & Paster, 2009).
Synthetic dyes, especially those complexed with biopolymers, are present in foods or in living organisms. Human serum albumin, bovine serum albumin and other proteins are known for their ability to form stable linkages with azo-based colors (Sereikaite, Bumeleine, & Bumelis, 2005). Inclusion complexes of some azo dyes with β-cyclodextrins are reported to be formed, as illustrated by X-ray diffraction and UV spectrophotometry (Pardo et al., 2009). Natural flavonoids and carotenoids are reported to bind with proteins both in vivo and in vitro (Wade, Tollenaere, Hall, & Degnan, 2009). Some of the azo dyes, such as Carmoisine (Saeed, Abdullah, Sayeed, & Ali, 2010), Allura Red (Abdullah, Badaruddin, Sayeed, Ali, & Riaz, 2008) and Sunset Yellow (Badaruddin, Abdullah, Sayeed, Ali, & Riaz, 2007), have already been shown to interact and bind with various food proteins. The in vitro digestibility of protein-azo dye complexes has also been investigated, and it was found that the dye rarely affects the digestion process by the gut proteases (Saeed et al., 2011). Earlier studies reported some natural colors, such as lawsone, to bind with a variety of food proteins (Ali & Sayeed, 1990; Ali, Sayeed, & Khan, 1995). It seems that the covalent bonds formed during the color-protein interactions do not disturb the overall configuration of the proteins and allow enzymes to attack easily, as in the usual kinetics.
Ponceau 4R is well reported to interact with biological molecules, and more particularly with proteins. Differential pulse polarography is currently used for the estimation of Ponceau 4R in mixtures with Carmoisine and Allura Red found in sweets and soft drinks (Chanlon, Joly-Pottuz, Chatelut, Vittori, & Cretier, 2005). A simple spectrophotometric method was also developed using the first derivative of the ratio spectra, involving determination at the zero-crossing wavelength. This technique, later named the zero-crossing wavelength method, successfully separated mixtures of three dyes, namely Tartrazine, Sunset Yellow and Ponceau 4R (Berzas, Flores, Cabanillas, Llerena, & Salcedo, 1998). Recently, a binary mixture of Carmoisine and Ponceau 4R was separated using a simple, sensitive and inexpensive technique, the H-point Standard Addition Method (HPSAM), involving a pair of wavelengths (460 and 549 nm) (Hajimahmoodi, Oveisi, Sadeghi, Jannat, & Nilfroush, 2008).
Ponceau 4R is a safe food dye and has no harmful effects on reproductive or neurobehavioural parameters in human beings, as shown by evaluating the no-observed-adverse-effect level (NOAEL), which is approximately 205 mg/kg body weight/day (Tanaka, 2006). Ponceau 4R consumption in the daily diet is much lower than the suggested safe value (SSV).
Ponceau 4R shows interaction with the protein nisin, with BSA and with various nut proteins, including those of walnut, pistachio and peanut. The present study illustrates the ability of Ponceau 4R to stain different resolved food proteins on PAGE and also demonstrates that Ponceau 4R-protein complex (PPC) formation is concentration dependent. Moreover, variation in pH strongly affects the configuration and structural stability of the dye-protein complexes. It may be concluded that spectral analysis may be used to determine either the dye or the total protein in a mixture under specific conditions.
Materials
N,N'-methylenediacrylamide, acrylamide and Coomassie Brilliant Blue R 250 were purchased from BDH, England; Tris (Trizma base) was obtained from Sigma-Aldrich Life Science, USA; ammonium peroxodisulfate (APS) from Omicron Sciences Limited, London, UK. Sodium dodecyl sulfate (SDS), glycine and bromophenol blue were supplied by Merck, 64271 Darmstadt, Germany. TEMED was purchased from Scharlau, Spain. BSA (bovine serum albumin) was from BDH, England, while Nisaplin (nisin) was purchased from Suzhou Hengliang, China. The source of trypsin was fungal type XIII (from Aspergillus saitoi), Merck, Germany. Ponceau 4R was obtained from National Foods (Pvt) Ltd. All other chemicals were of analytical grade. All aqueous solutions were prepared in double-distilled deionized (DDD) water.
Protein Solubilizing Solution (PSS)
The PSS (40 mL) was prepared by mixing 9.6 mL of 20% glycerol, 2.5% SDS (sodium dodecyl sulfate), 1.8 mL of mercaptoethanol and 8 mL of Tris-HCl buffer of pH 6.8; a few crystals of bromophenol blue were also added as a marker.
Sample Preparation
Twenty milligrams of BSA protein were dissolved in 1 mL of DDD water. The nut protein solutions of peanut, pistachio, walnut and almond were prepared by taking 5 g of fresh, crushed and defatted sample in 25 mL of 0.5 M phosphate buffer (pH 7.6) to form a thin slurry. The mixture was stirred constantly overnight in an orbital shaker at 10°C and filtered through fine cloth. The filtered protein solution was centrifuged at 10,000 rpm.
Gel System
A 10% polyacrylamide gel (acrylamide/bisacrylamide in the ratio of 30:0.8 [wt/wt]) was prepared according to the method of Laemmli (1970). Briefly, 20 µL samples of protein were gently placed into the wells of the gel (8 cm wide, 7.3 cm high, 0.75 mm thick). Protein samples were resolved using the Bio-Rad Mini-Protean 3 cell system No. 67S/06917 at a constant 120 V for 3 h.
Staining Solutions
Coomassie Brilliant Blue R-250 (0.2 g) was dissolved in 5 mL methanol with 7.5 mL acetic acid and made up to a total volume of 100 mL with DDD water. Similarly, the new staining agent Ponceau 4R was prepared with 8.5 mL acetic acid and 2 mL methanol.
Destaining Solution
The destaining solution for Coomassie Brilliant Blue R-250 and Ponceau 4R was prepared by taking 30 mL of methanol and 10 mL of glacial acetic acid and making up to a total volume of 100 mL with DDD water.
Gel Staining
The polyacrylamide gel was stained in the Ponceau 4R staining agent overnight and washed several times with destaining solution at 20-minute intervals until red bands appeared clearly. A duplicate portion of the gel, stained overnight with Coomassie blue, was destained in the same way as for Ponceau 4R, which took almost 24 hours.
Protein Binding
BSA and nisin protein solutions (1 mL) at concentrations of 0, 1.5, 3, 4.5, 6, 7.5, 9, 10.5, 12, 13.5 and 15 mg/mL in separate test tubes were mixed with an equal volume of Ponceau 4R solution (2 mg/mL) and incubated at 37 °C for one hour. Protein was precipitated by adding 1 mL of TCA (40%). The supernatant was decanted after centrifugation at 6000 rpm and the total volume was made up to 10 mL with DDD water. The dye-protein complex was measured with a spectrophotometer at 505 nm.
Tryptic Digestion
For the quantification of tryptic digestibility, Ponceau 4R-bound proteins (BSA and nisin) were digested with trypsin (1 mg per 50 mg of substrate) for different intervals of time (Pfleiderer & Krauss, 1965). The reactions were stopped by adding 1 mL of 10% TCA. The extent of proteolytic activity in the supernatant was measured spectrophotometrically at 280 nm.
Absorption Spectroscopy
Spectra in the 400-750 nm wavelength region were recorded with a spectrophotometer using a path length of 1 cm. Experimental curves were constructed at variable pH: (1) at constant BSA protein concentration, and (2) at constant Ponceau 4R concentration.
In both experimental designs, the stock solution of dye was 1 mg/mL while the protein concentration was adjusted to 0.1 mg/mL. For the constant-protein spectral assay, 1 mL of protein and known volumes of Ponceau 4R (5 to 30 µL) were adjusted to a volume of 1 mL using phosphate buffer at different pH. The same methodology was followed for the constant-dye assay, with protein volumes of 5-25 µL adjusted to a volume of 1 mL using phosphate buffer at different pH.
Statistical Analysis
Statistical analysis was performed with Minitab version 13.1. The regression analysis for digestibility showed a linear relation to the time interval of exposure to the enzyme. The 'r' values were in the range of 0.96-0.99 and the calculated 'p' values were <0.005.
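A linear regression of the kind described above (absorbance of digestion products against time of exposure to trypsin) can be reproduced with any standard statistics package; the sketch below uses hypothetical absorbance values and is not the study's actual data.

# Hedged sketch: linear regression of digestion absorbance (A280) versus time.
import numpy as np
from scipy import stats

time_min = np.array([0, 15, 30, 45, 60, 90, 120])               # hypothetical time points
a280     = np.array([0.05, 0.12, 0.19, 0.25, 0.33, 0.46, 0.60])  # hypothetical A280 readings

res = stats.linregress(time_min, a280)
print(f"slope={res.slope:.4f}  r={res.rvalue:.3f}  p={res.pvalue:.4f}")
# An r in the 0.96-0.99 range with p < 0.005 would indicate a strong linear relation.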
Protein Binding through PAGE
The present study illustrates, through PAGE, the strong binding potential of Ponceau 4R for various food proteins. To explain the staining ability of Ponceau 4R, the relevant factors are compared in Table 1 with the standard dye Coomassie Brilliant Blue R-250. Ponceau 4R is non-toxic and takes only 1½ hours to destain, while the Coomassie Brilliant Blue R-250 bands are clearly seen only after 48 hrs. Although the protein bands are lighter with Ponceau 4R, they appear very sharp at 0.2% concentration compared with Coomassie Brilliant Blue R-250. The different protein bands stained with the two dyes, as shown in Figure 1, demonstrate an equivalent ability to bind the various proteins.
It has been shown that Ponceau 4R binds with proteins through electrostatic and hydrophobic interactions, forming complexes at the surfaces of protein molecules. It has also been estimated in minute quantities, as reported earlier using resonance light spectroscopy (Zhong et al., 2005). The linkages involved in dye-protein complexes include not only electrostatic forces and hydrophobic bonding but also hydrogen bonding and van der Waals forces, as demonstrated by multiple experiments conducted earlier (Tal, Silberstein, & Nusser, 1985). The present method, being rapid, economical and safe, may be used as a staining agent for routine PAGE analysis, especially in future research where immediate results are often desired.
Protein Binding through Absorption Spectroscopy
The binding of protein with Ponceau 4R dye can be measured by absorption spectrophotometry at 507 nm. Protein-dye binding may be defined as the chemical interaction of the two molecules. The amount of Ponceau 4R was kept constant throughout the experiments; the only change was in the concentration of the proteins. When the protein is precipitated by TCA, the Ponceau 4R interacting with the protein is simultaneously removed from solution, as demonstrated in Figure 2.
Protein Digestibility
Protein digestibility was estimated spectrophotometrically. The BSA-Ponceau complex had lower absorbance compared with BSA without bound dye; in the case of the nisin-Ponceau complex, digestibility was also modified, with a decrease in absorbance at 280 nm compared with unbound nisin (Figure 3). These results describe the digestion quality of the proteins and the impact of the enzyme, and also exhibit the binding capacity of Ponceau 4R with food proteins. Serum albumin has an affinity for negatively charged hydrophobic molecules, which may interact with the aromatic regions of the dyes (Peters, 1985). The high helical content of the BSA molecule forms cylindrical open channels with suitable functional groups that dye molecules can easily occupy (Brodersen, Honore, Pedersen, & Klotz, 1988). Trypsin cleaves proteins at basic amino acid residues such as lysine and arginine, on their C-terminal side, so it partially digests both BSA and nisin.
Serum albumin consists of three domains, comprising amino acid residues 1-80, 187-372 and 379-570 (Brown, 1975). The amino acid sequences of BSA and nisin, with their possible binding and digestion sites, are given in Figure 4. BSA is a bigger molecule containing 607 amino acid residues, of which 156 (26%) may act as interaction sites for dye binding, while nisin is a small protein molecule with 57 amino acid residues, of which 10 (17.5%) may act as dye-binding sites. Finally, the tertiary structure of both proteins and the ionic environment determine the binding with the dye and the mechanism of its digestibility.
Structural Elucidation by Absorption Spectroscopy
As observed in Figure 5A, the concentration of BSA protein is constant while the concentration of the dye Ponceau 4R is varied at different pH. Variation in pH causes folding and unfolding of the BSA molecule, which is also demonstrated through the spectral observations. Figure 5B illustrates that when the concentration of protein increases while the dye concentration remains constant, binding is observed in the order pH 2 > pH 10 > pH 7 > pH 4 > pH 3.
Figure 5. Absorption curves of (A) BSA protein (0.1 mg/mL) with variable concentrations of Ponceau 4R and (B) Ponceau 4R (1 mg/mL) with variable concentrations of BSA protein, at different pH.
The pH variation causes reversible conformational isomerization in serum albumin (Foster, 1977). That is why the variation in absorbance was due not only to the pH but also varied with the concentration of the dye. The folding and unfolding of the protein molecule exposes different numbers of negatively charged amino acids where electrostatic binding is possible. BSA molecules undergo a reversible transition from the N (normal) to the F (fast) form at pH 4.3 due to unfolding of domain III, in which the helix content shifts from 55% to 45% (Geisow, 1977; Khan, 1986). The unfolding of the helical structure in the F (fast) form is responsible for an increase in viscosity and a decrease in solubility (Foster, 1960). At pH below 4, albumin undergoes expansion with degradation of the intra-domain helices (35%), called the expanded (E) form, achieved at pH 2.7, which increases viscosity (Harrington, Johnson, & Ottewill, 1956). At pH 9 the albumin molecule shifts to the basic (B) form with 48% helix. The albumin molecule isomerizes into the aged (A) form at pH 9 with low ionic strength after 3 to 4 days at refrigeration temperature.
Conclusion
Given its rapid and simple staining procedure and the clear visualization of stained protein bands, Ponceau 4R can be used as a novel staining agent for routine PAGE analysis. The binding of the color depends on pH, as exhibited by the spectrophotometric investigation. The sharpness of the Ponceau 4R color is not altered in acidic or basic environments. It can be clearly seen that the binding of the two proteins, BSA and nisin, varies due to the difference in their molecular structures. It has been shown for the first time that the digestibility of protein was not affected in the presence of Ponceau 4R.
Figure 2. In vitro protein binding of BSA-dye and nisin-dye complexes at various protein concentrations
Figure 4. The amino acid sequences of BSA and nisin were obtained from Swiss-Prot with accession numbers P02769 and P13068, respectively. The two amino acids around the cleavage sites found for trypsin digestion are underlined, and possible dye-binding amino acids are shown in bold | 2018-12-05T16:56:30.928Z | 2012-11-26T00:00:00.000 | {
"year": 2012,
"sha1": "08397f15f0e5b66348c2c780af12360c80db45c0",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/enrr/article/download/22486/14491",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "08397f15f0e5b66348c2c780af12360c80db45c0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
243801760 | pes2o/s2orc | v3-fos-license | Interleukin 1 receptor‐like 1 rs13408661/13431828 polymorphism is associated with persistent post‐bronchiolitis asthma at school age
Abstract Aim Interleukin (IL) 1 receptor-like 1, encoded by the IL1RL1 gene, is a receptor for IL-33. In European birth cohorts, IL1RL1 rs102082293, rs10204137 (rs4988955), rs13424006 and rs13431828 (rs13048661) variations were associated with asthma at school age. In a Dutch multi-centre study, IL1RL1 rs1921622 variation was associated with severe bronchiolitis. We evaluated the associations of these five IL1RL1 variations with asthma and lung function at school age after hospitalisation for bronchiolitis in infancy. Methods Follow-up data, including impulse oscillometry at age 5-7 years and flow-volume spirometry at age 11-13 years, and the IL1RL1 genotype data were available for 141 children followed until 5-7 years and for 125 children followed until 11-13 years of age after bronchiolitis in infancy. The IL1RL1 rs10204137 and rs4988955, and the IL1RL1 rs13048661 and rs13431828, are 100% co-segregating in the Finnish population. Results The variant IL1RL1 rs13048661/13431828 genotype was consistently associated with increased asthma risk by various definitions at 5-7 and 11-13 years of age. The result was confirmed with analyses adjusted for current confounders and early-life environment-related factors. Statistical significance was lost when maternal asthma and atopic dermatitis in infancy were included in the model. Conclusion IL1RL1 rs13048661/13431828 variation was associated with post-bronchiolitis asthma outcomes at school age.
| INTRODUCTION
Interleukin 1 receptor-like 1 (IL1RL1), also known as suppressor of tumorigenicity-2 (ST2), is a receptor for IL-33, which is known to play a role in the pathogenesis of asthma. 1 IL1RL1 is encoded by the IL1RL1 gene and signals via the intracellular toll-like/interleukin-1 receptor (TIR) domain. 2 The production of IL1RL1 can be assessed, for example, by measuring the IL1RL1-a concentration in serum. 3 IL1RL1 was involved in the signalling of the IL-33/IL1RL1 pathway, also called the IL-33/ST2 pathway, in three previous studies. 4 The association between the severity of respiratory syncytial virus (RSV) bronchiolitis and three IL1RL1 single nucleotide polymorphisms (SNPs), rs1921622, rs11685480 and rs1420101, was studied in ventilated (severe bronchiolitis) and non-ventilated infants in a Dutch multi-centre study. 5 The IL1RL1 rs1921622 SNP was associated with severe bronchiolitis compared with controls. In addition, severe bronchiolitis was associated with higher soluble IL1RL1-a concentrations in nasopharyngeal aspirates of the bronchiolitis patients. 5 Genome-wide association studies have documented that IL1RL1 polymorphisms were associated with asthma in children attending the COPSAC (Copenhagen Prospective Studies on Asthma in Childhood) cohort. 6 The IL1RL1 rs102082293 and rs13431828 were associated with asthma at eight years of age in the PIAMA (Prevalence and Incidence of Asthma and Mite Allergy) cohort and the IL1RL1 rs102082293, rs10204137 and rs13424006 with asthma at the same age in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort. 7 In combined analyses, the IL1RL1 rs102082293 and rs13424006 SNPs were associated with the late-onset wheezing phenotype. 7 We have prospectively followed, until school age, 166 children hospitalised for bronchiolitis at younger than six months of age. The control visits were arranged at the ages of 5-7 years 8 and 11-13 years. 9 As published previously from this cohort, the IL33 rs1342326 variation was associated with severe post-bronchiolitis asthma treated with inhaled corticosteroids (ICS) at school age. 10 The aim of the present study was to evaluate the associations of four IL1RL1 variations, selected based on the findings of the PIAMA and ALSPAC birth cohorts, with the presence of asthma, the use of asthma medication and the presence of lung function abnormalities at ages 5-7 and 11-13 years after hospitalisation for bronchiolitis under age six months. The panel was supplemented by one SNP, which was associated with severe bronchiolitis in the Dutch multicentre bronchiolitis study. Bronchiolitis was defined as a lower respiratory tract infection associated with diffuse wheezes and/or crackles. 11 Viral aetiology of bronchiolitis was studied with antigen detection and polymerase chain reaction (PCR) in nasopharyngeal aspirates. 11 Presence of atopic dermatitis in infancy was registered during the inpatient care and at the post-bronchiolitis control visit at 1.5 years of age. 12 Data on asthma in the families, with a special focus on asthma in mothers, were obtained by interviewing the parents.
| Design
The children hospitalised for bronchiolitis in infancy were invited to attend two follow-up visits at school age. The first was arranged in 2008-2009 when the children were 5-7 years old. 8 The second was arranged in 2014-2015 when the children were 11-13 years old. 9 Before the follow-up visits, the parents completed a structured questionnaire comprising questions on doctor-diagnosed asthma and self-reported allergic rhinitis, on current use of asthma medication including ICSs and bronchodilators, and on symptoms presumptive for asthma before the control visit at age 5-7 years, 8 or, respectively, from that visit to the present. 9 The follow-up included an interview of children and parents to check the questionnaire data. 8,9 Lung function before and after inhalation of a bronchodilator was measured with impulse oscillometry (IOS) at 5-7 years of age, 13 and with flow-volume spirometry (FVS) at 11-13 years of age. 14 The weights and heights were recorded at both control visits, and the weight status was reported as body mass index-for-age z-scores (zBMI) using Finnish growth references. 15
| Definitions
At the control visit at 5-7 years of age, current asthma was defined as continuous or scheduled intermittent ICS use for asthma during the preceding 12 months, or alternatively, as reporting of asthma-presumptive symptoms during the preceding 12 months and a diagnostic finding in the exercise challenge test. 8
Key Notes
• In European birth cohorts, interleukin (IL) 1 receptor-like 1 (IL1RL1) variations were associated with asthma at school age.
• We found that IL1RL1 rs13048661 and rs13431828 variations were in full linkage, and that this variation was constantly associated with post-bronchiolitis asthma at 5-7 and 11-13 years of ages.
• Examinations of IL1RL1 rs13048661 and rs13431828 variations need to be included in future studies on the genetics of bronchiolitis and post-bronchiolitis outcome.
Current asthma was present in 21 (12.7%) of 166 cases. 8 Allergic rhinitis was parent-reported and needed to be symptomatic during the last 12 months, and was present in 48 (29%) cases. 8 At the control visit at 11-13 years of age, current asthma was defined as continuous ICS use for asthma during the preceding 12 months, or alternatively, as reporting of asthma-presumptive symptoms during the preceding 12 months with a diagnostic finding in the bronchodilation test. 9 In all, 138 children attended the study, and current asthma was present in 18 (13.0%) cases. 9 Allergic rhinitis was parent-reported and needed to be symptomatic during the last 12 months, and was present in 60 (43.5%) cases. 9 Eleven children (8.0%) presented with persistent asthma, defined as current asthma at both the 5-7-year and 11-13-year follow-up studies. 9
| Lung function
Lung function at age 5-7 years was measured in 103 cases by IOS (Jaeger, Master Screen IOS, Höchberg, Germany), consisting of baseline, post-exercise and post-bronchodilation (0.3 mg salbutamol with a spacer) measurements, as described in detail previously. 13 The studied parameters were baseline and post-bronchodilator (post-BD) respiratory system resistance at 5 Hz (Rrs5) and reactance at 5 Hz (Xrs5), expressed as height-adjusted z-scores from population-based references. 13 Lung function at age 11-13 years was measured in 89 cases with flow-volume spirometry (FVS).
| Genetics
Numerous IL1RL1 SNPs are 100% co-segregating in the Finnish population, for example the four SNPs rs13408661, rs10173081, rs10197862 and rs13431828 (http://ensembl.org). Thus, the genotypes of the IL1RL1 rs13408661 we determined are identical in this cohort with the IL1RL1 rs13431828, which was associated with childhood asthma in the PIAMA cohort. 7 Similarly, the SNPs IL1RL1 rs4988955, rs4988956, rs4988957, rs10192036, rs10204137, rs4988958, rs10192157, rs10206753, rs3755276 and rs7558339 are 100% co-segregating (http://ensembl.org). Thus, the genotypes of the IL1RL1 rs4988955 we determined are identical in this cohort with the IL1RL1 rs10204137, which was associated with childhood asthma in the ALSPAC cohort. 7 Five SNPs, as presented in Table S1, were studied with PCR-based sequencing. Invitrogen Platinum Taq DNA polymerase (Thermo Fisher Scientific Inc.) was used for PCR according to the manufacturer's instructions. The primers were designed using the Primer-BLAST tool (National Center for Biotechnology Information, NCBI). The primers and annealing temperatures used in PCR are listed in Table S1. Prior to sequencing, PCR products were purified enzymatically with Thermo Scientific Exonuclease FastAP and Exo I (Thermo Fisher Scientific). Purified PCR products were sent for sequencing at Eurofins Genomics, Ebersberg, Germany.
| Controls
The controls for minor allele frequencies (MAFs) of the IL1RL1 were obtained from the Finnish data of two publicly available databases: the 1000 Genomes Project (available at http://ensembl.org) and the Genome Aggregation Database (available at https://gnomad.broadinstitute.org).
| Statistics
Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS 25.0, IBM Corp.). Chi-square and Fisher's exact tests, when appropriate, were used in the analyses of categorised variables. Student's t-test was used for normally distributed and the Mann-Whitney test for non-normally distributed continuous variables. The results were expressed as frequencies, percentages, medians, means and standard deviations.
Multivariate logistic regression was used to confirm the significant findings in non-adjusted analyses on IL1RL1 wild versus variant genotypes as risk factors for asthma outcomes at ages 5-7 years and 11-13 years. The analyses were adjusted first for age, sex and current allergic rhinitis (current confounders), then for age, sex, RSV aetiology of bronchiolitis and maternal smoking in infancy (early-life environment-related risk factors), and finally for age, sex, maternal asthma and atopic dermatitis in infancy (early-life atopy-related risk factors). The results were expressed as odds ratios (OR) and 95% confidence intervals (95% CI).
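An adjusted logistic regression of the type described above, yielding odds ratios with 95% confidence intervals for the variant versus wild genotype, can be sketched as follows; the data frame, column names and adjustment set are hypothetical illustrations rather than the study's actual model or data.

# Hedged sketch: logistic regression of asthma on genotype, adjusted for covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 140
df = pd.DataFrame({
    "asthma":   rng.integers(0, 2, n),      # 1 = current asthma (hypothetical)
    "variant":  rng.integers(0, 2, n),      # 1 = variant IL1RL1 genotype (hypothetical)
    "age":      rng.normal(6.3, 0.5, n),
    "male":     rng.integers(0, 2, n),
    "rhinitis": rng.integers(0, 2, n),
})

fit = smf.logit("asthma ~ variant + age + male + rhinitis", data=df).fit(disp=0)
or_variant = np.exp(fit.params["variant"])
ci_low, ci_high = np.exp(fit.conf_int().loc["variant"])
print(f"adjusted OR={or_variant:.2f}  95% CI {ci_low:.2f}-{ci_high:.2f}")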
Analysis of covariance, adjusted for RSV aetiology of bronchiolitis, maternal smoking during infancy, current asthma and current zBMI, was used for confirming the significant findings revealed in non-adjusted comparisons of IOS and FVS parameters between children with wild versus variant IL1RL1 genotypes. One study subject was excluded from the lung function analyses due to underweight at both ages (zBMI < 18).
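The analysis of covariance can likewise be expressed as a linear model with the genotype group as the factor of interest and the listed covariates as adjustments; the data and column names below are, again, hypothetical.

# Hedged sketch: ANCOVA as an OLS model of a lung function z-score on genotype plus covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 98
df = pd.DataFrame({
    "xrs5_postbd": rng.normal(0, 1, n),     # post-BD reactance z-score (hypothetical)
    "variant":     rng.integers(0, 2, n),   # IL1RL1 genotype group (hypothetical)
    "rsv":         rng.integers(0, 2, n),
    "smoking":     rng.integers(0, 2, n),
    "asthma":      rng.integers(0, 2, n),
    "zbmi":        rng.normal(0, 1, n),
})

fit = smf.ols("xrs5_postbd ~ variant + rsv + smoking + asthma + zbmi", data=df).fit()
print(fit.params["variant"], fit.pvalues["variant"])  # adjusted group difference and its p-value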
| ETHICS
We obtained informed consent from the parents, including consent for the use of samples collected during hospitalisation and at the control visits for genetic studies on bronchiolitis and asthma risk. The study was approved by the Ethics Committee of the Tampere University Hospital district, Tampere, Finland. The personal data of the study subjects were not given to the laboratory that performed the genetic studies, the Department of Medical Microbiology and Immunology, Turku, Finland.
| RESULTS
The MAFs of the five determined IL1RL1 SNPs were rather similar in the post-bronchiolitis cohort consisting of 165 cases, in the population-based Finnish data of the 1000 Genomes Project, which included 99 subjects, and in the Finnish data of Genome Aggregation Database, which included 1471-1737 subjects, depending on the SNP in question (Table S2).
The variant genotype of the IL1RL1 rs13408661/13431828 SNP was associated with current ICS use in 141 former bronchiolitis patients at age 5-7 years (Table 1) and with persistent asthma in 123 patients at age 11-13 years (Table 2). There were no significant associations between the other four IL1RL1 SNPs and asthma outcomes at 5-7 or 11-13 years of age.
Multivariate logistic regression confirmed that the presence of the variant IL1RL1 rs13408661/13431828 genotype was associated with increased ICS use at preschool age (Table 3). The finding was robust to adjustments with current confounders, such as allergic rhinitis, and to adjustments with early-life environment-related risk factors, such as RSV aetiology of bronchiolitis or exposure to maternal smoking. However, statistical significance was lost when maternal asthma and early-life atopic dermatitis were included in the model (Table 3). Likewise, the presence of the variant IL1RL1 rs13408661/13431828 genotype was associated with increased risks of current asthma and persistent asthma in early adolescence when adjusted for early-life environment-related factors (Table 3).
The variant IL1RL1 rs4988955/10204137 and rs13424006 genotypes were associated with higher baseline Rrs5 in IOS in 98 former bronchiolitis patients at 5-7 years of age (Table 4). The variant genotypes of the IL1RL1 rs10208293 and rs13408661/13431828 were associated with lower post-BD Xrs5 compared with the respective wild genotypes in adjusted analyses (Table 4). These four results were robust to adjustments with RSV aetiology of bronchiolitis, maternal smoking in infancy, current zBMI and current asthma (data not shown).
There were no significant associations between the five studied IL1RL1 polymorphisms and FVS parameters at 11-13 years of age.
| DISCUSSION
The present study evaluated the associations of five IL1RL1 polymorphisms with asthma and lung function at school age after bronchiolitis in infancy in a prospective follow-up setting. The main result was that the variant genotype of the IL1RL1 rs13408661/13431828 was associated with ICS use at age 5-7 years and with persistent asthma at age 11-13 years. The findings were robust to adjustments with current confounders such as allergic rhinitis, and to early-life environment-related factors such as RSV aetiology of bronchiolitis and exposure to maternal smoking.
TABLE 1 Genotypes of the IL1RL1 rs10208293, rs4988955/10204137, rs13424006, rs13408661/13431828 and rs1921622 polymorphisms in relation to asthma outcomes at early school age in 141 former bronchiolitis patients
The IL1RL1 gene locus has been associated with asthma in children in many studies, 1,6,7 but the contribution of different SNPs in this locus and the functional mechanisms remain unsolved. 18,19 Within the IL1RL1 gene, the SNPs present with strong mutual linkage.
TABLE 2 Genotypes of the IL1RL1 rs10208293, rs4988955/10204137, rs13424006, rs13408661/13431828 and rs1921622 polymorphisms in relation to asthma outcomes in early adolescence in 123 former bronchiolitis patients. Note: Adjustments: 1 Age, gender, current rhinitis; 2 Age, gender, RSV aetiology of bronchiolitis, maternal smoking in infancy; 3 Age, gender, maternal asthma, atopic dermatitis at <12 months of age.
Statistical significance is expressed as bolded. were associated with asthma at eight years of age in the PIAMA cohort, and the IL1LR1 rs102082293, rs10204137 and rs13424006 with asthma at the same age in the ALSPAC cohort. In addition, similar associations with school-age asthma were confirmed for three IL33 variations but for none of the IL1RAP variations. 7 We selected these four IL1RL1 SNPs for the present study, supplemented with the IL1RL1 rs1921622, which has been associated with the severity of RSV bronchiolitis. 5 Our post-bronchiolitis finding that the variation of the IL1RL1 rs13048661/13431828 was associated with an increased risk of severe and persistent school-age asthma and with lower post-BD reactance at 5Hz in IOS at 5-7 years are in line with each other.
TABLE 3
Three IL1RL1 variations were studied in 81 ventilated (severe) and 384 non-ventilated infants under 12 months of age hospitalised with RSV bronchiolitis and compared to 930 healthy controls. 5 The IL1RL1 rs1921622 variation was associated with bronchiolitis severity. Furthermore, the concentrations of soluble IL1RL1-a in nasopharyngeal aspirates were higher in ventilated compared with non-ventilated bronchiolitis patients. 5 In the present study, the IL1RL1 rs1921622 variation was not associated with post-bronchiolitis asthma or lung function at either 5-7 or 11-13 years of age. The result concerning lung function was unexpected, since RSV caused two-thirds of the bronchiolitis cases of our cohort and all cases needed hospitalisation. Severe bronchiolitis, especially when caused by RSV, is a known risk factor for later lung function deficits. 21 The main limitation of the present post-bronchiolitis follow-up study at school age was the small number of cases for genetic analyses, which entails a risk of type-2 statistical errors. In contrast, there was one consistent finding concerning the IL1RL1 rs13408661/13431828 polymorphism, and the results for the other four studied polymorphisms were clearly negative. The variant genotype of the IL1RL1 rs13408661/13431828 was associated with severe or persistent asthma at both 5-7 years and 11-13 years of age in both univariate and multivariate analyses. The strengths of the study are the homogeneous study population consisting of ethnically Finnish children hospitalised for bronchiolitis at younger than six months of age, careful registration of clinical data including data on medication for asthma at control visits, and prospective long-term follow-up to the mean age of 11.7 years.
Our interpretation, although thus far speculative, is that genetic factors to a great extent determine which children will develop post-bronchiolitis asthma. The same children may also be prone to wheezing during rhinovirus infections. The IL1RL1/IL-33 pathway induces the production of IL-33 and, further, other Th2-type cytokines, 1,18 which are involved in the development of allergy and asthma. However, the emergence of post-bronchiolitis asthma is a complex process influenced by genes, viruses and various environmental factors. No doubt, Th2-oriented versus Th1-oriented immunity plays a role, but the current knowledge is not sufficient for interventions during or after bronchiolitis.
| CONCLUSION
We found evidence that the IL1RL1 rs13408661/13431828 variation was associated with an increased post-bronchiolitis asthma risk consistently at 5-7 and 11-13 years of age. The results were confirmed in multiple adjusted analyses and, in addition, were in line with findings from previous European birth cohorts. Future studies on asthma emergence in children should place emphasis on the IL1RL1/IL-33 pathway.
CONFLICT OF INTEREST
The authors declare no conflicts of interest. | 2021-11-07T06:16:39.153Z | 2021-11-06T00:00:00.000 | {
"year": 2021,
"sha1": "2309bb03db37d89935da5cc248b7ce394a3ff4d0",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/apa.16176",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "139647b8fe033fd832b795fbd7a7ebe10689eefb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54039941 | pes2o/s2orc | v3-fos-license | Neurological Sequelae in Acute Encephalitis Syndrome one month post discharge from the hospital in the Children Aged 1-14 years
Introduction Encephalitis is a complex clinical syndrome of the central nervous system (CNS) associated with fatal outcome or severe permanent damage including cognitive impairment, behavioral impairment and epileptic seizures. It is important to understand the clinical spectrum and outcome of acute encephalitis syndrome (AES) at the local level to better define the problem and to draw inferences for management and policy formulation. Material and Methods: This study was a hospital-based observational, longitudinal and descriptive study conducted at the Department of Pediatrics, Nobel Medical College Teaching Hospital, Biratnagar. Seventy cases with a diagnosis of AES (irrespective of the underlying etiology) were studied over a period of one year. All cases from 1 to 14 years of age fulfilling the standard WHO case definition of AES were included in the study. A pre-designed semi-structured questionnaire was used to obtain the clinical profile and investigations. The cases were followed up one month after discharge from the hospital and the outcomes were recorded. Results: On follow-up of the cases at the end of 1 month, 35 (50.7%) cases were found to have complete cure and were labelled as cured. Neurological sequelae were seen in 8 (11.6%) cases, which were labelled as not cured. Death was documented in 26 (37.7%) of the cases. Conclusion: Despite early diagnosis and aggressive treatment, neurological sequelae are not uncommon in AES. Therefore, regular follow-up and early rehabilitative efforts should be instituted for all cases of AES after discharge from the hospital.
Introduction
Encephalitis is a clinical syndrome of the central nervous system (CNS) associated with fatal outcome or severe irreversible damage including cognitive and behavioral impairment and epileptic seizures. It is often acute, although symptoms may progress with rapid onset, causing severe debilitation to patients including otherwise healthy children [1]. AES may manifest as encephalitis, meningoencephalitis or meningitis. A hospital-based study conducted in Dharan showed mortality of 8.3% and neurological sequelae of 50% among the AES cases [2].
WHO defines AES as an acute onset of fever and a change in mental status (including symptoms such as confusion, coma, disorientation or inability to talk) and/or new onset of seizures (with the exception of simple febrile seizures) in a person of any age at any time of year [3]. Acute encephalitis can be caused by several conditions, including bacterial or viral infection in the brain, complication of an infectious disease, ingestion of toxic substances and complication of an underlying malignancy. Hence, differentiating encephalitis from other similar conditions continues to be a challenging task. Infection of the CNS is considered to be the major cause of encephalitis, and more than a hundred different pathogens have been recognized as causative organisms, with Japanese encephalitis accounting for one quarter of all diagnosed cases of encephalitis [1].
Materials and Methods
The study was conducted in the Department of Pediatrics at Nobel Medical College and Teaching Hospital, Biratnagar, Nepal from May 2014 to April 2015, a period of one year. All pediatric patients in the age group of 1-14 years fulfilling the WHO criteria for acute encephalitis syndrome were enrolled in the study. A complete evaluation of the patients was done with detailed history and clinical examination, with a special focus on symptoms and signs of acute encephalitis syndrome. All the relevant information was documented on pre-designed semi-structured questionnaires. A thorough clinical examination was done with special attention to the neurological system. Signs of meningeal irritation were assessed by examination for nuchal rigidity, Kernig sign and Brudzinski sign.
Extrapyramidal features in the form of dystonia, dyskinesia and any other movement disorders were noted. Any form of neurological deficit found on detailed neurological examination was noted. To identify variables associated with outcome, complete recovery and sequelae at the time of discharge and sequelae at 6 weeks were set as dependent variables and all others as independent variables. Data were analysed first using univariate regression analysis.
Results
A majority of cases in our study, 37 (52.9%), were discharged without neurological sequelae.
Twenty-one (30%) cases expired. Follow-up assessment was done one month after discharge; parents were contacted by telephone and invited to re-attend the hospital. On follow-up of the cases at the end of 1 month, 35 (50.7%) cases were found to have complete cure and were labelled as cured. Neurological sequelae were seen in 8 (11.6%) cases, which were labelled as not cured. Death was documented in 26 (37.7%) of the cases, and 21.13% of patients had neurological sequelae at the time of discharge. No mortality was documented among the cases followed up after discharge. In our study, neurological sequelae took the form of left-sided hemiparesis in 4 (50%) cases, quadriparesis in 2 (25%) cases and seizure disorder in 2 (25%) cases. An association of focal neurological deficit and extrapyramidal features with neurological sequelae was not observed in our study.

Discussion

Acute encephalitis is a major public health problem, and treating pediatricians should be aware that patients with AES of unknown viral etiology also have a high risk of morbidity and mortality [5]. Our study showed that AES affected all age groups from childhood to adolescence. The mean age of the cases was 6.59 (±3.831) years. There was a higher incidence of AES in males, 48 (68.6%), as compared to females, 22 (31.4%). The long-term outcome of encephalitis in children has not been well characterized; however, the evidence is concerning for high rates of neurocognitive and behavioural sequelae. A study from Finland showed cognitive and personality problems in over half [6], and an Israeli study showed moderate to severe sequelae in 63% of children, with high rates of behavioural problems; low IQ scores, attention deficit hyperactivity disorder and learning disorders were over-represented [7]. In our study, neurological sequelae were seen in the form of left-sided hemiparesis in 4 (50%) cases, quadriparesis in 2 (25%) of the cases and seizure disorder in 2 (25%) cases. In contrast, other studies have reported right-sided hemiparesis to be more common [2,8]. Hemiparesis was the most common neurological sequela found in our study. In our study, none of the symptoms were significantly associated with mortality at discharge. The presence of signs of meningeal irritation was not found to be statistically significant, similar to the results observed in the study by Kakoli et al. [9]. In contrast, the study conducted by Avabratha et al. in Bellary, Karnataka, revealed an association between mortality and meningeal signs [10]. There were 7 cases (10%) of post-encephalitic epilepsy in a study done by Fowler et al. [6], which showed epilepsy as one of the most important sequelae seen in AES cases, as also seen in our study. An association of focal neurological deficit and extrapyramidal features with neurological sequelae has been reported in a few studies but was not present in our study [5].
Conclusion
To conclude, although AES is associated with high mortality and debilitating neurological sequelae, with early diagnosis and aggressive treatment, timely follow-up, early institution of rehabilitative care and a holistic approach from the family and medical personnel, a complete cure can be attained, as seen in our study. However, to better understand the clinical presentation, the outcome and the association of the clinical profile with the outcome, more multicentric, randomized clinical trials are needed.
Figure 1: Outcome at 1 month of follow up | 2018-11-29T16:51:12.627Z | 2016-12-26T00:00:00.000 | {
"year": 2016,
"sha1": "d9df7d2cce18d06336483f952c2211b2145c3701",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3126/jonmc.v5i2.16316",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d9df7d2cce18d06336483f952c2211b2145c3701",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235916336 | pes2o/s2orc | v3-fos-license | DNA Damage Repair Status Predicts Opposite Clinical Prognosis Immunotherapy and Non-Immunotherapy in Hepatocellular Carcinoma
Immune checkpoint inhibitors (ICIs) that activate tumor-specific immune responses bring new hope for the treatment of hepatocellular carcinoma (HCC). However, there are still some problems, such as uncertain curative effects and low objective response rates, which limit the curative effect of immunotherapy. Therefore, it is an urgent problem to guide the use of ICIs in HCC based on molecular typing. We downloaded The Cancer Genome Atlas Liver Hepatocellular Carcinoma (TCGA-LIHC) and Mongolian-LIHC cohorts. Unsupervised clustering was applied to the highly variable expression data of DNA damage repair (DDR) genes. CIBERSORT was used to evaluate the proportions of immune cells. The connectivity map (CMap) and pRRophetic algorithms were used to predict drug sensitivity. There were significant differences between DDR molecular subclasses in HCC (DDR1 and DDR2): DDR1 patients had low expression of DDR-related genes, while DDR2 patients had high expression of DDR-related genes. Of the patients who received traditional treatment, DDR2 patients had significantly worse overall survival (OS) than DDR1 patients. In contrast, of the patients who received ICIs, DDR2 patients had significantly prolonged OS compared with DDR1 patients. Of the patients who received traditional treatment, patients with high DDR scores had worse OS than those with low DDR scores. However, the survival of patients with high DDR scores after receiving ICIs was significantly longer than that of patients with low DDR scores. The DDR scores of patients in the DDR2 group were significantly higher than those of patients in the DDR1 group. The tumor microenvironment (TME) of DDR2 patients was highly infiltrated by activated immune cells and showed high expression of immune checkpoint molecules, proinflammatory molecules and antigen presentation-related molecules. In this study, HCC patients were divided into the DDR1 and DDR2 groups. Moreover, DDR status may serve as a potential biomarker to predict opposite clinical prognoses with immunotherapy and non-immunotherapy in HCC.
INTRODUCTION

Hepatocellular carcinoma (HCC) accounts for 75%~85% of primary liver cancer cases, ranking sixth among the most common cancers in the world and fourth in terms of cancer-related deaths (1). However, traditional treatment is not ideal for patients with advanced HCC (2). As an inflammation-related tumor, HCC features an immunosuppressive tumor microenvironment (TME) that can promote immune tolerance through various mechanisms. There have been a series of advances in immunotherapy, and immunotherapy that activates the tumor-specific immune response brings new hope for the treatment of HCC (3)(4)(5). However, there are still some problems, such as uncertain curative effects, low objective response rates (ORRs), many adverse reactions, and even drug resistance after initial patient response (6,7). Therefore, how to use molecular typing to improve the immune microenvironment, modify the immune response of patients, and guide the choice of immunotherapy or combination therapy scheme to effectively improve the efficacy of immunotherapy is an urgent problem to be solved and a future direction for the development of accurate treatments for HCC.
Various HCC-related risk factors can cause DNA damage. If damaged DNA is not repaired correctly in time, it can lead to gene changes and genome instability, which are generally considered common features of human HCC. Dysfunction of the DNA damage repair (DDR) process is related to susceptibility to HCC, and this process is often enhanced in HCC, resulting in a poor anticancer treatment effect against HCC cells (8). Chemotherapy is one of the few choices for most patients with advanced HCC who do not need surgery, and HCC shows different degrees of drug resistance to most chemotherapy regimens; as such, few chemotherapy drugs are available for HCC (9,10). Many conventional chemotherapy drugs produce effects by inducing DNA double-strand breaks. HCC cells counteract the DNA damage caused by chemotherapy drugs by strengthening their DDR ability, which often leads to chemotherapy resistance (11,12).
Mutations in members of DDR pathways may affect the efficacy of immunotherapy. Alterations in DDR signaling pathway members can lead to genomic instability and increased mutation frequency. Mutations can be used as potential biomarkers for the efficacy of immunotherapy. High mutation loads are closely related to increases in neoantigen loads (NAL) and tumor-infiltrating lymphocytes (TILs) (13,14). Among the possible biomarkers, mismatch repair deficiency (MMR-D), homologous recombination gene mutations and POLE mutations (which affect the DDR signaling pathway) play an important role in the efficacy of immune checkpoint inhibitors (ICIs). The main mechanism is that mutations in repair genes are related to increases in NAL, CD4+ and CD8+ TILs, and the expression of cytotoxicity-related genes, PD-1 and PD-L1 (15,16). However, the molecular status of the DDR pathway, the activity of the DDR pathway and the efficacy of immunotherapy in HCC are not clear. Therefore, it is particularly important to explore the potential significance of DDR pathway molecular typing and DDR pathway activity in predicting response to immunotherapy or routine treatment in HCC.
HCC Cohort and Immunotherapy Cohort
We downloaded The Cancer Genome Atlas-Liver hepatocellular carcinoma (TCGA-LIHC) cohort data, which includes mutation data, expression data and clinical data, from the TCGA database (https://portal.gdc.cancer.gov/) using the "TCGAbiolinks" R package (17). Additionally, we collected data from another LIHC cohort (Mongolian-LIHC, N = 70) (18), which included mutation data, expression data and clinical data, from published literature. We collected data from a bladder cancer cohort (ICI-treated BLCA, N = 348) receiving immunotherapy, which included mutation data, expression data and immunotherapy prognosis data (19), by using the IMvigor210CoreBiologies R package. Data from another melanoma cohort receiving ICIs were obtained from the Gene Expression Omnibus (GEO) database (GSE78220, N = 27) (20). These two immunotherapy cohorts were used to verify the potential utility of DDR typing for predicting immunotherapy response. See Figure 1A for the detailed analysis flow of this study.
DDR Clustering and DDR Score Construction
The R package "ConsensusClusterPlus" was used to identify subtypes of DDR-related genes with highly variable expression (Median(-)> 1) in the TCGA-LIHC cohort, Mongolian-LIHC cohort and ICI-treated cohort (21). After unsupervised clustering (using the following parameters: maxK=8, reps=1,000, pItem=0.8, pFeature=0.8, clusterAlg="km", distance="Euclidean", innerLinkage="average", and finalLinkage="average"), we obtained two types of DDR clusters. Then, we used the "limma" R package to analyze the differences in expression data in different DDR molecular subclasses in the TCGA-LIHC cohort, Mongolian-LIHC cohort and ICI-treated cohort (22). Singlesample gene set enrichment analysis (ssGSEA) algorithm and DDR-related gene sets were used to construct the DDR signature (23,24). ssGSEA was similar to GSEA. For a given signature G of size N G and single sample S, of the data set of N genes, the genes are replaced by their ranks according to their absolute expression from high to low: L = {r 1 ,r 2 ,…,r N }. An enrichment score ES(G,S) is obtained by a sum (integration) of the difference between a weighted ECDF of the genes in the signature PGw and the ECDF of the remaining genes P NG (25): where P w G (G, S, i) = orj∈G, j≤i jr j ja S r j∈G jr j ja and P NG (G, S, i) = o r∉G,j≤i This calculation is repeated for each signature and each sample in the data set.
Immune Correlation Analysis
The CIBERSORT algorithm (https://cibersort.stanford.edu/index.php) was used to evaluate the relative abundances of 22 immune cells in the TME of patients with LIHC (26). Immune-related genes and immune signatures were collected from published literature (27). The GSEA algorithm evaluates the difference in the enrichment degree of immune pathways, metabolic pathways and pathological pathways between different groups according to expression differences between the groups. Pathways with a P value less than 0.05 were considered to have statistically significant differences (28).
Compound-Targeting Analysis
To identify which inhibitors/compounds may be useful for targeting tumors in the high DDR score group, we employed the Broad Institute's connectivity map (CMap) build 02 (29), a publicly available online analytical tool (https://portals.broadinstitute.org/cmap/) that allows the analyst to predict potential inhibitors/compounds based on the upregulated and downregulated genes in a gene expression signature.
To further explore the mechanisms of action (MoA) (30) of these inhibitors/compounds, we used the CMap tools (https://clue.io/). The CMap method is similar to GSEA and can identify similarities and interactions (range: -1 to 1) based on differential gene expression data.
By using the pRRophetic R package (31), we constructed a ridge regression model based on the Genomics of Drug Sensitivity in Cancer (GDSC) cell line expression profiles (www.cancerrxgene.org) (32) and the TCGA-LIHC, Mongolian-LIHC and ICI-treated cohort gene expression profiles to predict the half maximal inhibitory concentration (IC50) values of compounds/inhibitors.
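The idea behind this kind of expression-based drug-sensitivity prediction can be illustrated with a small sketch: fit a ridge regression from cell-line expression to measured log(IC50) and apply it to tumor expression profiles. This is not the pRRophetic package itself (which additionally homogenizes the training and test expression data); all data below are simulated and the gene and sample counts are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical training data: expression of 500 genes in 300 GDSC-like cell lines
# with measured log(IC50) for one compound, plus expression for 50 tumors.
cell_expr = rng.normal(size=(300, 500))
log_ic50 = cell_expr[:, :10].sum(axis=1) * 0.3 + rng.normal(scale=0.5, size=300)
tumor_expr = rng.normal(size=(50, 500))

# Standardize genes on the training data, then fit a ridge regression model.
scaler = StandardScaler().fit(cell_expr)
model = Ridge(alpha=10.0).fit(scaler.transform(cell_expr), log_ic50)

# Predicted sensitivity for each tumor (lower predicted log IC50 = more sensitive).
pred = model.predict(scaler.transform(tumor_expr))
print(pred[:5])
```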
Statistical Analysis
For comparisons of factors such as immune cells and immune gene expression between the DDR1 and DDR2 groups, we used the Mann-Whitney U test. Fisher's exact test and the chi-square test were used to analyze the contingency table. The Kaplan-Meier (KM) method and the log-rank test were applied in the survival analysis. When carrying out the survival analysis and comparing the efficacy of immunotherapy with that of traditional treatment, the survminer package (33) (surv_cutpoint function) was used to calculate the best cutoff for each cohort according to the relationship between the survival result and the ssGSEA score of DDR signaling. In this study, P < 0.05 was considered statistically significant, and all statistical tests were two-tailed. All statistical analyses and generation of visuals were performed using R software (version 3.6.3).
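A minimal sketch of the cutoff selection and log-rank comparison described above is given below, using simulated data. It mimics the idea behind survminer's surv_cutpoint (scanning candidate cutoffs of the DDR score and keeping the one that maximizes the log-rank statistic); the real function applies maximally selected rank statistics, and selecting the maximum without correction inflates the type I error, so this is only an illustration of the principle.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)

# Hypothetical cohort: a continuous DDR score, follow-up time and event indicator.
n = 200
score = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-0.5 * score) * 24, size=n)  # higher score -> shorter survival
event = rng.random(n) < 0.7

# Scan candidate cutoffs between the 10th and 90th percentiles and keep the one
# with the largest log-rank statistic separating the high- and low-score groups.
best = None
for c in np.quantile(score, np.linspace(0.1, 0.9, 33)):
    hi = score > c
    res = logrank_test(time[hi], time[~hi],
                       event_observed_A=event[hi], event_observed_B=event[~hi])
    if best is None or res.test_statistic > best[1]:
        best = (c, res.test_statistic, res.p_value)

print(f"best cutoff {best[0]:.2f}, log-rank statistic {best[1]:.1f}, p = {best[2]:.3g}")
```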
Relationship Between DDR Type and Clinical Prognosis
On the basis of the TCGA-LIHC expression data, we used the R package "ConsensusClusterPlus" to identify subtypes of samples based on the expression of DDR-related genes with highly variable expression (MAD) by unsupervised clustering (Figure 1B) and identified two main subtypes via heatmap analysis (Figure 1C). The cluster with low expression of DDR-related genes was called DDR1, while the cluster with high expression of DDR-related genes was called DDR2. Differential expression analysis of DDR-related genes between DDR1 and DDR2 in the TCGA-LIHC cohort showed that the DDR1 group had significantly lower, and the DDR2 group significantly higher, expression of DDR-related genes (P < 0.05; Figure 2A). This typing result was also verified in another cohort (Mongolian-LIHC), whose samples were likewise divided into a DDR1 group with low expression of DDR-related genes and a DDR2 group with high expression of DDR-related genes (Figure 2B). In the TCGA-LIHC cohort receiving traditional treatment, the DDR2 group had a significantly shorter OS time than the DDR1 group (Figure 2C, P < 0.001, HR = 1.94; 95%CI: 1.26-2.99). In the Mongolian-LIHC cohort receiving traditional treatment, the DDR2 group likewise had a shorter OS than the DDR1 group (Figure 2D; P = 0.033, HR = 2.46, 95%CI: 1.08-5.6). In a cohort of patients with advanced bladder cancer receiving immunotherapy, we were also able to divide the patients into two molecular subclasses: DDR1 patients with low expression of DDR-related genes and DDR2 patients with high expression of DDR-related genes (Figure 2E). Interestingly, among these ICI-treated patients, DDR2-type patients had significantly longer survival times from immunotherapy than DDR1-type patients (Figure 2F, P = 0.043, HR = 0.76, 95%CI: 0.59-0.99). To determine the associations between common clinical factors and DDR type, we compared the ages and clinical stages of patients in each DDR group. In the TCGA-LIHC cohort, DDR2 patients were significantly younger than DDR1 patients (Figure 2G, P < 0.05). In terms of clinical stage, we found that the DDR1 group had a higher proportion of early-stage patients (stage I and II) than the DDR2 group (Figure 2H, P < 0.05).
Analysis of the Mutation Landscape in the Different DDR Groups
The nonsynonymous mutation data of TCGA-LIHC and Mongolian-LIHC were used to compare the mutations in the different DDR groups. The analysis of driver genes showed that in the TCGA-LIHC cohort, the DDR2 group had more patients with TP53 mutations than the DDR1 group, and the mutation frequencies of other driver genes were not significantly different between the DDR1 and DDR2 groups (Figure 3A). Some nondriver genes were also mutated at high frequencies; the 10 genes with the highest mutation frequencies (Top10 genes) were TTN, ALB, PCLO, RYR2, ABCA13, APOB, FLG, OBSCN, XIRP2 and HMCN1 (Figure 3A). Most of these mutations were missense mutations. In the Mongolian-LIHC cohort, we found that patients in the DDR2 group had a higher mutation frequency of TP53 than patients in the DDR1 group. Similarly, the mutation frequencies of the Top10 genes were not significantly different between the DDR1 and DDR2 groups (Figure 3B). Then, we performed mutual exclusion/co-occurrence analysis on the mutations in each cohort. In the TCGA-LIHC cohort (Figure 3C), TP53 mutation was mutually exclusive with MUC16 or CTNNB1 mutation. In the Mongolian-LIHC cohort (Figure 3D), TP53 mutation and FAT3 or LRP1B mutation were co-occurring mutations. The above results suggest that there is no significant difference in the frequencies of high-frequency nonsynonymous mutations in driver genes between the DDR1 and DDR2 groups.
Analysis of the Immune Microenvironment in the Different DDR Types
The immune microenvironment is one of the key factors affecting the efficacy and clinical benefit of ICIs in cancer patients. Therefore, we explored the differences in the immune microenvironment between the DDR molecular subclasses in terms of the proportions of immune cells, the expression levels of immune-related genes, the immune status and the enrichment degree of specific pathways. CIBERSORT analysis showed that DDR2 patients had significantly increased proportions of activated immune cells (Figure 4A; P < 0.05), such as memory B cells, activated memory CD4+ T cells, M0 macrophages, plasma cells and T follicular helper cells (Tfhs). Checkpoint molecules are an important target of ICIs, and we analyzed the expression of checkpoint molecules in the different DDR types. Compared with the DDR1 type, the DDR2 type had significantly increased expression of immune checkpoint molecules (Figure 4B; P < 0.05), such as CD274, HAVCR2, LAG3, CD276, CTLA4, TIGIT and PDCD1. Additionally, inflammatory factors and proteins with other immune functions (antigen presentation and other functions) also play a key role in the response to immunotherapy. The expression levels of genes related to antigen processing and presentation (HLA-DPA1, HLA-DPB1, HLA-DQA1, HLA-DQB1, HLA-DQB2, and HLA-DRA), chemokine genes (CX3CL1 and CXCL9) and proinflammatory molecule genes (TNFSF9, IL1B, IL1A, and IFNG) in the DDR2 group were significantly higher than the respective levels in the DDR1 group (P < 0.05; Figure 4C). Immune signature analysis showed that the DDR2 group had significantly higher scores for immune signatures such as BCR richness, BCR Shannon diversity, TCR richness and Th2 cells than the DDR1 group (P < 0.05; Figure 4D). In contrast, the stromal score of patients in the DDR1 group was significantly higher than that of patients in the DDR2 group (P < 0.05; Figure 4D). In the TCGA-LIHC cohort, GSEA was used to analyze and compare the enrichment of pathways in DDR1 and DDR2 patients. Some DDR-related signaling pathways (such as the nucleotide excision repair pathway) and immune-related pathways (such as pathways related to the positive regulation of interleukin-6 biosynthetic processes, B cell activation, the positive regulation of T cell activation, TCR signaling, antigen processing and the presentation of peptide antigen via MHC class II) were significantly activated in the DDR2 group compared with the DDR1 group. In contrast, the activities of some immune depletion-related pathways (such as pathways related to lipid biosynthetic processes and fatty acid metabolic processes) were significantly higher in the DDR1 group than in the DDR2 group (Figure 4E). Additionally, the above GSEA results were also verified in the Mongolian-LIHC cohort (Figure S1). We further demonstrated the differences in the expression levels of genes in the above pathways between the DDR1 and DDR2 groups via heatmap analysis (Figures S2, S3).
Relationship Between DDR Score and Clinical Prognosis
To explore the relationship between the DDR score and the prognosis of LIHC patients receiving traditional treatment, we determined the DDR score using the ssGSEA algorithm and DDR-related gene sets. In TCGA-LIHC, patients with higher DDR scores had significantly reduced OS compared with those with lower DDR scores (Figure 5A, P < 0.001, HR = 2.23, 95%CI: 1.51-3.28). Similarly, in the Mongolian-LIHC cohort, patients with high DDR scores had shorter OS than patients with low DDR scores (Figure 5B; P = 0.001, HR = 3.54, 95%CI: 1.35-9.27). Next, we analyzed the difference in DDR scores between the DDR groups. In both the TCGA-LIHC and Mongolian-LIHC cohorts, DDR2 patients had higher DDR scores than DDR1 patients (P < 0.0001, Figures 5C, D). This result was consistent with the analysis of DDR classification and clinical prognosis. To further verify the utility of the DDR score in patients receiving ICIs, we also determined the DDR scores in an ICI-treated BLCA cohort. In the ICI-treated BLCA cohort, patients with high DDR scores had significantly longer OS after immunotherapy than patients with low DDR scores (Figure 5E, P = 0.03, HR = 0.75, 95%CI: 0.57-0.99). Additionally, patients with the DDR2 type had higher DDR scores than patients with the DDR1 type (Figure 5F, P < 0.0001). In another ICI-treated melanoma cohort, compared with the survival time of patients with low DDR scores, the survival time of patients with high DDR scores was significantly prolonged (Figure 5G, P = 0.037, HR = 0.33). In the TCGA-LIHC cohort, patients with high DDR scores had significantly higher TMB levels than those with low DDR scores (P < 0.05; Figure 5H). Similarly, in a BLCA cohort receiving immunotherapy, we also found that patients with high DDR scores had higher immunogenicity than patients with low DDR scores, and the patients with high DDR scores showed increased TMB and NAL levels (Figure 5I, all P < 0.05).

FIGURE 5 (A) Overall survival for subjects grouped according to DDR score subtype (high DDR score and low DDR score) in the TCGA-LIHC cohort. (B) Overall survival for subjects grouped according to DDR score subtype (high DDR score and low DDR score) in the Mongolian-LIHC cohort. (C) Comparison of the DDR scores between two molecular subclasses (DDR1 and DDR2) in the TCGA-LIHC cohort. (D) Comparison of the DDR scores between two molecular subclasses (DDR1 and DDR2) in the Mongolian-LIHC cohort. (E) Overall survival for subjects grouped according to DDR score subtype (high DDR score and low DDR score) in the ICI-treated BLCA cohort. (F) Comparison of the DDR scores between two molecular subclasses (DDR1 and DDR2) in the ICI-treated BLCA cohort. (G) Overall survival for subjects grouped according to DDR score subtype (high DDR score and low DDR score) in the ICI-treated melanoma cohort. (H) Comparison of TMB between two molecular subclasses (high DDR score and low DDR score) in the TCGA-LIHC cohort. (I) Comparison of TMB and NAL between two molecular subclasses (high DDR score and low DDR score) in the ICI-treated cohort. (*P < 0.05; ***P < 0.001; ****P < 0.0001; Mann-Whitney U test).
Association Between the DDR Score and Sensitivity to Other Drugs
We used CMap analysis to predict therapeutic drugs and targets for the high DDR score group. CMap is a gene expression profile database containing gene expression data developed by the Broad Research Institute that is mainly used to reveal functional relationships between small molecule compounds, genes and disease states. The relationships between these factors are evaluated by a score, which ranges from -1 to 1. The results are arranged in descending order from high to low. The closer the value is to -1, the more likely the small molecules are to be an antagonist in patients with a high DDR score ( Figure 6A). Therefore, these antagonistic small molecules can be candidate drugs for the treatment of patients with high DDR scores. We found that 8-azaguanine, bufexamac, estriol (an estrogen receptor agonist), oxetacaine, pyrvinium, repaglinide (an insulin secretagogue), rimexolone (a glucocorticoid receptor agonist) and trazodone (an adrenergic receptor antagonist) may be candidate drugs for treating patients with high DDR scores ( Figure 6B). Additionally, we predicted the drug sensitivity of TCGA-LIHC patients by using the pRRophetic algorithm and a ridge regression model. Targeting the cell cycle (CGP-60474, GW 843682x, BI-2536, and CGP-082996), PI3K/mTOR signaling (JW-7-52-1, MK-2206, and A-443654), RTK signaling (sunitinib and PHA-665752) and WNT signaling (CHIR-99021) was significantly more effective in high DDR score patients than in low DDR score patients ( Figure 6C).
Differences in Pathway Activation Degree in the High and Low DDR Score Groups
In the TCGA-LIHC cohort (Figure 7A), we found that LIHC patients with high DDR scores showed significant activation of DNA repair-related signaling pathways (pathways related to nucleotide excision repair, DNA gap filling and DNA double-strand break repair), immune pathways (pathways related to downstream TCR signaling and positive regulation of activated T cell proliferation), cell cycle-related pathways (pathways related to G2/M checkpoints, the G2/M transition, and the G1/S transition of the mitotic cell cycle), and traditional drug resistance pathways (pathways related to MAPK6/MAPK4 signaling and PIP3-activated AKT signaling) compared with patients with low DDR scores. In contrast, the activity of some pathways, such as pathways related to fatty acid metabolic processes, lipid catabolic processes and cholesterol transport, was significantly higher in patients with low DDR scores than in those with high DDR scores. In the Mongolian-LIHC cohort, the activity of DNA repair and immune-related pathways in the immune microenvironment of patients with higher DDR scores was significantly higher than that of patients with lower DDR scores. However, LIHC patients with low DDR scores showed a significant decrease in the activity of immune depletion and drug resistance-related pathways (Figure 7B). The above GSEA results were verified in the ICI-treated BLCA cohort in the same way (Figure 7C). We further confirmed the differences in the expression levels of the genes in the above pathways between the low DDR score group and the high DDR score group via heatmap analysis (Figures S4-S6).
DISCUSSION
In this study, we found that there were significant differences in the activation of pathways between the DDR groups in the HCC cohorts: DDR1 patients had low expression of DDR-related genes, while DDR2 patients had high expression of DDR-related genes. After receiving traditional treatment, DDR2 patients had significantly shorter OS than DDR1 patients. In contrast, patients in the DDR2 group had significantly longer OS after receiving ICIs than those in the DDR1 group. After traditional treatment, patients with high DDR scores had worse survival prognoses than those with low DDR scores. However, the survival of patients with high DDR scores after receiving ICIs was significantly longer than that of patients with low DDR scores. The DDR score of patients in the DDR2 group was significantly higher than that of patients in the DDR1 group. To explore the differences in the TME between the DDR groups, we investigated the potential molecular mechanism underlying the increased response to immunotherapy in the DDR2 group in terms of immune cells, immune-related gene expression, immune signatures and activation of specific pathways. The TME of DDR2 patients was highly infiltrated by activated immune cells and had high expression of immune checkpoint molecules, proinflammatory molecules and antigen presentation-related molecules. GSEA showed that the activity of immune-related pathways, DNA repair pathways and traditional drug resistance pathways in DDR2 patients was significantly higher than that in DDR1 patients, while the activity of some immune depletion pathways in DDR1 patients was significantly higher than that in DDR2 patients. Similarly, the activity of immune pathways, DDR-related pathways and traditional drug resistance pathways was significantly higher in patients with high DDR scores than in those with low DDR scores. Additionally, the CMap algorithm and a heuristic algorithm were used to predict potential drugs for LIHC patients (of the DDR2 type or with a high DDR score).
Patients with DDR1 HCC had lower DDR scores and benefited more from traditional treatment than patients with DDR2 HCC, which may be related to the low DDR activity, cell cycle activity and traditional drug resistance pathway activity in the DDR1 group. Studies have shown that high activity of the MAPK or PI3K/AKT pathway is related to chemotherapy resistance in cancer patients (34). Additionally, when DNA damage occurs, intracellular damage receptors, such as ataxia telangiectasia mutated protein (ATM), Rad3-related protein and the Rad9-Rad1-Hus1 protein complex, detect DNA damage and initiate the signal transduction cascade involving checkpoint kinases 1 and 2 and cell cycle regulators. These cell cycle regulators can block the G1 and S cell cycle stages and the G2/M cell cycle transition by activating p53 and inhibiting cyclin-dependent kinases, enabling the cells to re-enter the cell cycle after successful repair. Once the DNA in cells cannot be repaired, the apoptosis pathway is activated, inducing self-directed apoptosis to prevent the damaged DNA from being transmitted to daughter cells (35). Yang et al. found that the overexpression of XRCC4-like factor (XLF), a key gene in the DDR pathway, was significantly related to the poor OS rate of HCC patients receiving traditional treatment. Knocking out XLF increased the sensitivity of HCC to chemotherapy by inhibiting DNA repair (36). Chen et al. showed that reducing the DNA repair ability of HCC cells can further enhance the cytotoxicity of radiotherapy and chemotherapy (11).
Patients with DDR2 cancer had a higher DDR score and benefited less from traditional treatment than patients with DDR1 cancer, but the DDR2 type was significantly related to a longer survival time with immunotherapy. This increased survival in the DDR2 group may be related to the TME of patients in the DDR2 group or patients with a high DDR score, who were more likely to receive immunotherapy. Patients with the DDR2 type had higher proportions of activated immune cells and higher expression of immune checkpoint molecules, chemokines (CXCR3 and CXCL9), proinflammatory factors (such as IFNg), and antigen processing-and presentationrelated molecules (HLA-related molecules) than patients with the DDR1 type. Additionally, the activity of pathways related to immune cell activation, proinflammatory factor secretion, and antigen processing and presentation was significantly higher in DDR2 patients than in DDR1 patients. Immune checkpoint molecules are important targets for ICI therapy, and studies suggest that high expression of immune checkpoint molecules is related to superior immunotherapy efficacy (37,38). TILs are an important part of HCC TME. High levels of lymphocyte infiltration, especially infiltration of CD4+ T cells and CD8+ T cells, are related to superior prognosis after immunotherapy (39,40). Andrea Necchi et al. showed that a high lymphocyte infiltration level indicates a strong antitumor immune response across cancers (41). CD8+ T cells are the main TILs in liver cancer and can release perforin and granzyme B through the Fas/ FasL pathway or kill target cells by releasing IFN-g and TNF (42). The expression of Fas/FasL in CD8+ T cells is positively correlated with the antitumor immunity of liver cancer (43). Additionally, cytokines in the TME play an important role in the formation of the inflammatory immune microenvironment (44). For example, chemokines (CXCL9 and CXCR3) can exert antitumor responses by recruiting CD4+ T cells, CD8+ T cells, NK cells and M1 macrophages into the tumor center (45)(46)(47).
Additionally, IFN-g can not only promote TILs to exert antitumor reactions but also mediate iron-induced death in tumor cells (48). Additionally, an IFN-g-related gene expression profile has also been significantly associated with superior ICI efficacy (49). The activity of pathways related to antigen processing and presentation in the TME will also affect the immunogenicity of tumors, and an increase in antigen processing and presentation activity is beneficial for improving the body's recognition of tumor antigens (50). Additionally, high expression levels of tumor-specific MHC-II molecules are significantly correlated with a superior immunotherapy response (51). Studies have shown that an increase in lipid metabolism can promote cancer metastasis and progression (50). Additionally, the enhancement of cholesterol metabolism will further inhibit T cells from enacting tumor cell killing. Consistent with the published literature (50)(51)(52), the activity of some immune exhaustion-related pathways (such as pathways related to cholesterol metabolism and lipid metabolism) was significantly decreased in patients in the DDR2 group.
However, this study also has some limitations. First, due to a lack of an ICI-treated HCC cohort, the prognostic differences between the DDR molecular subclasses (DDR1 and DDR2) and between the DDR-high and DDR-low score groups could not be further verified in HCC patients receiving ICIs. Second, the patients in the TCGA-LIHC and Mongolian-LIHC cohorts may have heterogeneous tumors, which may have a potential impact on the results of this study. We hope to collect and include HCC patients receiving ICI treatment in future research and further verify the influence of DDR molecular type (DDR1 and DDR2) and the DDR score on the outcomes of HCC patients receiving immunotherapy.
CONCLUSIONS
In this study, through unsupervised clustering of the DDR-related expression profiles of HCC samples, we found that HCC patients could be divided into a DDR1 group (with low activation of DDR pathways) and a DDR2 group (with high activation of DDR pathways). Patients with the DDR2 type had higher DDR scores than those with the DDR1 type. Intriguingly, after traditional treatment, the OS of DDR1 patients with a low DDR score was significantly prolonged. In contrast, after immunotherapy, DDR2 patients with high DDR scores had a better prognosis than those with low DDR scores. Based on the analysis of the TME, we found that DDR2 patients with high DDR scores had an inflammatory TME, which was characterized by high enrichment of activated immune cells, high expression of proinflammatory cytokines, high immune signature scores and high immune-related pathway activity, while DDR1 patients with low DDR scores had molecular features that are potentially more conducive to response to traditional treatment, such as a low ability to repair DNA damage and low activity of pathways mediating resistance to traditional treatment.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors. Supplementary Figure 6 | Heatmap of core genes in significantly differentially enriched pathways between high DDR score and low DDR score tumors in the ICI-treated cohort. | 2021-07-16T13:25:48.029Z | 2021-07-15T00:00:00.000 | {
"year": 2021,
"sha1": "e7b953b8454746b4749b08e0568ac14dde661554",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.676922/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7b953b8454746b4749b08e0568ac14dde661554",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6619112 | pes2o/s2orc | v3-fos-license | High-resolution short-exposure small-animal laboratory x-ray phase-contrast tomography
X-ray computed tomography of small animals and their organs is an essential tool in basic and preclinical biomedical research. In both phase-contrast and absorption tomography high spatial resolution and short exposure times are of key importance. However, the observable spatial resolutions and achievable exposure times are presently limited by system parameters rather than more fundamental constraints like, e.g., dose. Here we demonstrate laboratory tomography with few-ten μm spatial resolution and few-minute exposure time at an acceptable dose for small-animal imaging, both with absorption contrast and phase contrast. The method relies on a magnifying imaging scheme in combination with a high-power small-spot liquid-metal-jet electron-impact source. The tomographic imaging is demonstrated on intact mouse, phantoms and excised lungs, both healthy and with pulmonary emphysema.
imaging, the extra optical elements (gratings) in the GBI arrangement typically cause a dose and exposure-time disadvantage compared to the free-space propagation of PBI. There are only a few comparisons of observable detail vs dose for the two methods, but for imaging gas-filled structures (like CO2-filled blood vessels or air-filled lung alveoli) the necessary dose for observing sub-50-μm structures may differ by a factor of ten in favor of PBI 9 . Given that laboratory systems are typically limited by source power, this factor of ten also translates directly into a shorter exposure time. Although this dose and exposure-time advantage of PBI over GBI is not a general result and the performance of the methods must be evaluated for each individual imaging task, the present study was performed with PBI since it has demonstrated lower-dose and shorter-exposure-time high-resolution imaging of the specific class of objects (gas-tissue interfaces) discussed here.
Any small-animal phase-contrast imaging system aiming for high spatial resolution benefits from a high-brilliance source. Consequently, much of the early as well as present phase-contrast imaging was and is performed at synchrotron facilities 3 . However, small-animal imaging is typically an integral part of other investigations, making it beneficial to have a laboratory imaging system in-house. A dedicated GBI-based tomographic scanner based on a classical microfocus source 10 has demonstrated impressive rodent imaging 11 . Still, the microfocus source of this system lacks the brightness and small spot size needed for high-spatial-resolution imaging with reasonable scan times. Several laser-/accelerator-based "compact" systems have been proposed as alternative high-brightness laboratory sources 12,13,14 . One of these, the inverse-Compton scattering Compact Light Source (CLS), has achieved sufficient stability for high-quality tomographic imaging, both in absorption, where sub-100 μm bone imaging has been demonstrated with approx. 20 minutes exposure time 14,15 , and in grating-based phase-contrast tomography, where 80-μm resolution was reached with several hours of exposure time 16 . Although the accelerator-based compact sources appear to have potential for improvement in brightness, their complexity and size make it difficult to envision them on a rotating gantry.
In addition to the immediate small-animal-imaging applications discussed above, absorption- and phase-contrast tomography on excised samples is presently emerging as an alternative to conventional destructive histology 17,18 . The advantages include speed, 3D imaging with thinner effective slicing, and less destructive and simplified sample preparation. Also in this application, high spatial resolution and short exposure times are essential, while dose is of lesser concern. With a short exposure time the risk of sample movement decreases, which is vital for obtaining the high resolution important in most histological analyses. Present systems for high-spatial-resolution phase-contrast virtual histology typically use hours of exposure time at synchrotrons 17,18 .
Here we demonstrate that laboratory x-ray tomography can be performed with minute exposure times and few-10-μm resolution, both in absorption-contrast bone imaging and in phase-contrast soft tissue imaging. The method relies on a magnifying propagation-based arrangement in combination with a high-brightness liquid-metal-jet electron-impact source 19 . This source type has previously demonstrated its applicability for very high spatial resolution (cellular and sub-cellular) phase-contrast imaging of blood vessels, tumors, lung tissue, and muscle tissue in organs and in whole-body mouse and zebrafish, but with long exposure times and high dose 20,21,22,23 . In the present paper our imaging system is optimized to allow observation of high-spatial-resolution features (few-10 μm range) at reasonably short scan times (few-minute range) and at a dose acceptable for in-vivo rodent imaging (few-100 mGy range). We demonstrate the system for bone absorption imaging in intact mouse and for phase-contrast imaging of phantoms and excised mouse lungs, both healthy and with pulmonary emphysema.
Results
Laboratory x-ray tomography with short exposure time and high resolution. Figure 1a depicts the experimental arrangement. This is a classical magnifying x-ray tomography arrangement with a microfocus x-ray source, a sample, and a detector. The magnification $M = (R_1 + R_2)/R_1$ allows high-resolution imaging also beyond the limitation set by the detector resolution, provided the source spot size is small. By changing the effective propagation distance $z_\mathrm{eff} = R_1 R_2/(R_1 + R_2)$ the contrast can be tuned from pure absorption (short $z_\mathrm{eff}$) to increasing phase contrast (longer $z_\mathrm{eff}$, resulting in propagation-based phase imaging, PBI) 24 , albeit typically at the price of longer exposure times. Thus, the arrangement allows for quick and simple adaptation to optimize imaging properties (e.g., resolution and contrast) to the studied sample. Both absorption and phase-contrast imaging are illustrated below. However, for the arrangement to provide proper contrast, resolution and exposure times, it critically relies on the microfocus source properties.

Figure 1. (a) The small-spot high-brightness liquid-metal-jet microfocus source illuminates the object in a magnifying scheme. Depending on the setting of the source-object distance (R1) and the object-detector distance (R2), the contrast can be tuned from pure absorption to phase contrast (PBI). The object is rotated around its vertical axis for the tomography. (b) The emitted Al-filtered spectrum consists of line emission from Ga (Kα at ∼9 keV) and In (Kα at ∼24 keV) as well as a bremsstrahlung background. We use a 210 μm Al filter to reduce low-energy emission that would otherwise contribute to excessive dose via absorption in the object.
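A small numerical sketch of the geometry relations above, with illustrative (not experimental) distances, shows how moving the detector changes both the magnification and the effective propagation distance that controls the phase contrast:

```python
# Geometry helper for the magnifying propagation-based setup:
# M = (R1 + R2) / R1 and z_eff = R1 * R2 / (R1 + R2).
# The distances below are made-up examples, not the values used in the experiments.
def geometry(r1_mm: float, r2_mm: float) -> tuple[float, float]:
    m = (r1_mm + r2_mm) / r1_mm
    z_eff = r1_mm * r2_mm / (r1_mm + r2_mm)
    return m, z_eff

for r1, r2 in [(200, 40), (200, 400), (200, 800)]:
    m, z = geometry(r1, r2)
    print(f"R1 = {r1} mm, R2 = {r2} mm -> M = {m:.2f}, z_eff = {z:.0f} mm")
```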
For whole-body imaging of, e.g., a mouse we need a high-brightness microfocus source operating at a few tens of kV. We employ a 50 kV liquid-metal-jet source operating at 400 W electron-beam power with a spot size of 8 μm. The high power is critical for the short exposure times, and the small spot size provides high spatial resolution as well as the spatially coherent illumination necessary for high contrast in PBI. Furthermore, care was taken in the e-beam design to minimize low-intensity tails in the x-ray source spot and in the thermal design to minimize spot movements, since both are known to reduce spatial coherence and, thus, contrast 22 . For the experiments described below, primarily the emission in the 15-35 keV energy interval is of relevance. Typically, sub-15-keV photons contribute more to dose than to image contrast due to high absorption in the sample, and we therefore use a 210 μm Al filter to reduce the low-energy emission of the source. The emitted Al-filtered spectrum is shown in Fig. 1b. The higher-energy photons (>35 keV) interact to a lesser degree with the sample and are in addition detected with low efficiency. In this 15-35 keV range the full flux is 3.5 × 10^12 ph/(s × sr) and the corresponding relevant brightness is 7.0 × 10^10 ph/(s × mm² × mrad²). The source and system parameters are described in more detail in the Methods section and in the Supplementary Material.
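As a consistency check of the quoted numbers, and under the assumption of a circular 8 μm source spot, the flux per steradian can be converted into the quoted brightness:

```python
import math

flux_per_sr = 3.5e12          # ph / (s * sr) in the 15-35 keV band, as quoted above
spot_diameter_mm = 8e-3       # 8 um spot, assumed circular

flux_per_mrad2 = flux_per_sr * 1e-6                  # 1 sr = 1e6 mrad^2
spot_area_mm2 = math.pi * (spot_diameter_mm / 2) ** 2
brightness = flux_per_mrad2 / spot_area_mm2

print(f"{brightness:.1e} ph/(s*mm^2*mrad^2)")        # ~7.0e10, matching the quoted value
```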
High-resolution short-exposure absorption tomography of a mouse. The arrangement allows high-spatial-resolution absorption CT of rodents. Figure 2a shows the 3D rendering of the mouse head from a whole-animal scan reconstructed with 7.6-μm isotropic voxels. The image provides a high degree of detail as evidenced by structures in the teeth and the bone structure. Figure 2b shows a tomographic slice through the skull and Fig. 2c a zoom-in of part of the slice. Several 3-5-pixel (25-40 μm) bone structures are observable in the images. The observable detail and spatial resolution is estimated from the intensity patterns of sharp edges in the image, cf. knife-edge scans 25 . Edge scans of the bone structures in Fig. 2b and c typically exhibit a 25-75% intensity rise over approx. 25 μm, from which we estimate a half-period resolution of approx. 25 μm, which is consistent with our observations above. The total exposure time was 73 seconds (121 projections × 0.6 s, in steps of 1.5° over 180°). The magnification was 1.19 and the dose 400 mGy. In the Supplementary Material, surface renderings and tomographic slices based on reconstructions from 1-, 6- and 30-minute total exposures are depicted for comparison.
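The edge-scan resolution estimate used here can be illustrated with a short sketch: given a line profile across a sharp edge, measure the distance over which the intensity rises from 25% to 75% of the step. The profile below is synthetic (a smooth edge sampled at the 7.6 μm voxel size), not measured data.

```python
import numpy as np

def rise_25_75(profile, x_um):
    """Distance over which a monotonically increasing edge profile rises from 25% to 75%."""
    lo, hi = profile.min(), profile.max()
    t25 = lo + 0.25 * (hi - lo)
    t75 = lo + 0.75 * (hi - lo)
    # Interpolate the positions where the profile crosses the two thresholds.
    x25 = np.interp(t25, profile, x_um)
    x75 = np.interp(t75, profile, x_um)
    return x75 - x25

# Synthetic blurred edge sampled at the 7.6 um voxel size.
x_um = np.arange(0.0, 300.0, 7.6)
profile = 1.0 / (1.0 + np.exp(-(x_um - 150.0) / 8.0))
print(f"25-75% rise: {rise_25_75(profile, x_um):.1f} um")
```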
High-resolution phase-contrast tomography of a phantom. Our angiography/lung phantom mimics CO2-filled blood vessels or air-filled lung structures in a mouse-size object, where the gas-filled vessel structures range from 23 to 684 μm in diameter. Figure 3 shows the result of propagation-based phase-contrast imaging. Figure 3a-c show projection imaging at M = 4.2 magnification with different exposure time and dose, from (a) 120 s/229 mGy over (b) 60 s/114 mGy to (c) 12 s/23 mGy. It is interesting to note that the 23 μm vessel is observable at the 60 s/114 mGy exposure, while for 12 s/23 mGy the smallest observable vessel diameter is 176 μm. This assessment of the observable detail is supported by calculations of the signal-to-noise ratio (SNR). Table S1 in the Supplementary Material shows the results. Figure 3d shows a tomographic reconstruction of the gas-filled structures. Here we used a 6-minute total exposure time (180 projections × 2 s, in steps of 1° over 180°) and M = 3. All gas-filled vessels are observable in this 686 mGy exposure, where the smallest vessel is clearly visible from examining several adjacent slices. Figure 3e shows a sagittal view of the smallest vessel. From these experiments we conclude that our system allows high-spatial-resolution phase-contrast imaging (few tens of μm) in soft tissue/gas structures with very short exposure times (approximately a minute) and at dose levels acceptable for in-vivo rodent studies.

High-resolution phase-contrast tomography of mouse lungs. Figure 4 depicts the high-resolution tomographic imaging of two excised air-filled mouse lungs, one with pulmonary emphysema (a) and one healthy control (b). The air-filled bronchi and alveoli are black and the soft tissue of the lung is light gray. The lungs are surrounded by air (black) and the control lung is not completely filled with air. It is clear that the alveolar structures of the emphysematous lung are much larger than those of the healthy control, as expected. The inset in Fig. 4b shows a magnified view of the alveoli in the healthy lung. In this slice, air-filled structures (alveoli) with diameters < 50 μm are observed. Edge scans of the alveoli boundaries typically exhibit a 25-75% rise of ∼28 μm, suggesting a half-period resolution of < 30 μm. The total exposure time for each sample was 6 minutes (180 projections × 2 s, in steps of 1° over 180°). The magnification was 1.67 and the dose 2.6 Gy. In the Supplementary Material the same sample is reconstructed from 30-minute exposure data (900 projections × 2 s, in steps of 0.2° over 180°; 13 Gy) for comparison. Here we observe < 40 μm diameter alveoli and the estimated half-period resolution is < 20 μm (25-75% rise of 18 μm). We note from the introduction that the higher doses used here are appropriate for imaging excised samples. The Supplementary Material also includes a video surface rendering of an emphysematous lung, indicating the high contrast in the data and the extra structural information obtained from tomography.
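The SNR assessment referred to above for the phantom images (detailed in Table S1 of the Supplementary Material) can be sketched with a simple region-of-interest definition; the definition and the synthetic image below are illustrative only, not the authors' exact procedure.

```python
import numpy as np

def roi_snr(image, vessel_mask, background_mask):
    """SNR = |mean(vessel ROI) - mean(background ROI)| / std(background ROI)."""
    signal = image[vessel_mask].mean() - image[background_mask].mean()
    noise = image[background_mask].std(ddof=1)
    return abs(signal) / noise

# Synthetic projection: noisy flat background with a darker "gas-filled vessel" stripe.
rng = np.random.default_rng(3)
img = rng.normal(1000, 30, size=(128, 128))
img[:, 60:64] -= 120

vessel = np.zeros_like(img, dtype=bool)
vessel[:, 60:64] = True
background = np.zeros_like(img, dtype=bool)
background[:, :40] = True

print(f"SNR = {roi_snr(img, vessel, background):.1f}")
```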
Comparison with histology. After the scan, the lung samples were embedded, sliced, and stained for histology. The results are shown in Fig. 4c and d. In general, the PBI tomography data are in good agreement with the histology, confirming that we do in fact detect individual alveoli with high contrast. The typical alveolar diameters in the emphysematous lung are in the 200-μm range (with a considerable spread), both in CT and in histology, while in the healthy lung the typical diameter is in the 50-60 μm range. These numbers are consistent with recent synchrotron experiments on healthy mouse lungs 26 .
Discussion
We have demonstrated that small-animal imaging can be performed with high spatial resolution and at reasonable exposure time and acceptable dose with a laboratory system, both with absorption contrast (for bone imaging) and with phase contrast (for soft tissue imaging). The method relies on a magnifying propagation-based arrangement and a high-brightness electron-impact microfocus source. The arrangement allows for rapid and simple change of parameters to optimize resolution, contrast and signal-to-noise ratio for each imaging situation.
The absorption-contrast imaging of a whole mouse demonstrates that few-tens-of-μm details can be observed with minute-range exposure times and a 400-mGy dose. It is interesting to note that the magnifying arrangement necessary for the PBI also benefits the absorption imaging when a small-spot high-brightness source is used, since the resolution limitation due to the detector point-spread function (here 27 μm full width at half maximum) can be overcome. For the phase-contrast imaging it is encouraging to observe few-tens-of-μm gas/tissue structures with minute exposure times and acceptable dose, about 100 mGy in the phantom. For comparison, present typical live-mouse imaging is performed with a few-100 mGy dose, while up to a Gy may be used for special purposes 2 . As stated above, propagation-based imaging (PBI) was chosen over grating-based imaging (GBI) due to PBI's dose efficiency and lower scan times, even though phase retrieval is more complex for realistic multi-material objects. Fortunately, algorithms handling such situations are presently emerging 27 .
The high-resolution short-exposure-time laboratory PBI imaging of gas/tissue interfaces has important applications, both for small-animal imaging and for 3D histology-like examinations of organs. As for laboratory phase-contrast lung imaging, elegant previous work based on the integrated (low-spatial-resolution) dark-field scattering signal from a GBI system has successfully demonstrated discrimination between healthy lungs and lungs with pulmonary emphysema 28,29 . The method shown in the present paper provides detailed imaging data of the 3D lung structure from the organ scale down to close to the cellular scale. Such data are valuable for improved multiscale lung modelling 30 , and quantitative measures of, e.g., surface areas and alveolar density may be extracted for a better understanding of the state of the healthy and diseased lung without the need for classical histology 31 . It also allows for a detailed assessment of the structural changes associated with, e.g., pathological states or lung development, as presently demonstrated by high-dose synchrotron-based experiments 26,32,33 . In addition, CO2 angiography of tumor microvasculature may become important in angiogenesis research 34 .
As for the source, the brightness of the 400 W electron-impact liquid-metal-jet source in the energy range relevant for small-animal imaging (15-35 keV) exceeds the brightness of compact accelerator-based sources several-fold, resulting in imaging with significantly shorter exposure times. In addition, the x-ray emission angle of electron-impact tubes is large compared to the typically few-mrad emission from accelerator-based sources, thereby allowing for compact magnifying arrangements for high-resolution whole-animal imaging, both in absorption and for PBI. Although the monochromatic emission of accelerator-based sources 14 may be favorable in certain applications, simulations of the present experiments using our in-house software 35 show a negligible difference in image quality at comparable dose when a monochromatic source is used instead of the actual source spectrum. Finally, we note that the liquid-jet electron-impact tubes are significantly less complex than their accelerator-based alternatives and, thus, easier to integrate in small-animal imaging equipment.
In summary, we conclude that the methodological advances demonstrated here for absorption-based as well as propagation-based phase-contrast imaging open up the possibility of imaging bone and soft-tissue structures with cellular spatial detail in whole-body small-animal objects at acceptable dose and exposure time. Furthermore, the present 400 W e-beam power and 8-μm spot-size operation of the source is far from its theoretical limits, making future increases in source power and brightness highly realistic. Possibly, exposure times can be reduced by more than 10 times, making, e.g., gated kinematics studies presently requiring synchrotron sources 36 feasible in the laboratory as well.
Methods
Laboratory x-ray tomography arrangement for high resolution and short exposure time. Figure 1 depicts the experimental arrangement with its x-ray source, sample and detector. The microfocus source is an electron-impact liquid-metal-jet source based on a prototype platform from Excillum AB, Sweden, using a Galinstan alloy (Ga-In-Sn, 68.5%:21.5%:10%) as anode jet material. The emission is filtered by 210 μm Al to reduce the low-energy radiation, including the Ga Kα and Kβ line emission at 9.3 and 10.3 keV, that contributes more to dose than to image contrast via significant sample absorption. The 15-35 keV x-ray spectrum relevant for the imaging is dominated by the broad bremsstrahlung and the Kα and Kβ line emission from In and Sn at 24.2 and 27.3 keV, and 25.2 and 28.5 keV, respectively. The sample is placed on a rotation stage. The 36 × 24 mm², 4008 × 2671 pixel CCD detector (FDI-VHR, Photonic Science, United Kingdom) has a 15-μm-thick Gadox (Gd2O2S:Tb) scintillator, a pixel pitch of 9 μm, and a measured point spread function with a full width at half maximum (FWHM) of 27 μm.
The short-exposure, high-resolution imaging in highly magnified systems demonstrated here requires a high-power source with a spatially stable and small x-ray spot. Compared to commercially available liquid-jet micro-focus sources 37 , the prototype liquid-jet source used in the present paper is operated at a significantly increased electron-beam power while still keeping the spot size small and spatially stable. The LaB6 cathode generates a 400 W, 50 kV electron beam which is focused onto the 250-μm-diameter metal jet by a magnetic lens in combination with alignment and deflection coils, generating a high-quality x-ray spot with a FWHM of 8 μm and very limited low-intensity tails. For comparison, in our previous small-spot (< 8 μm) high-resolution imaging we typically operated at 30-40 W electron-beam power (cf., e.g., Refs 21,22). The stable operation of the present small-spot/high-power source was enabled by increased water cooling for improved thermal stability and a bent e-beam column that protects the cathode from anode vapor by removing the line-of-sight between anode and cathode. A well-defined and stable x-ray spot is important for high image contrast, especially in high-spatial-resolution PBI but also in absorption imaging. The spot size was measured with a 100-nm outermost-zone-width Au zone plate; the quantitative spectrum is shown in Fig. 1b.

Angiography/lung phantom. The phantom consists of 4 air-filled vertical low-density polyethylene (LDPE) tubes with inner diameters of 23 μm, 50 μm, 176 μm, and 684 μm, placed in a water-filled cylindrical PMMA holder of 16 mm inner diameter and 22 mm outer diameter. This corresponds in absorption to about 21 mm of tissue, a typical object thickness in mouse imaging. LDPE was chosen because its density is similar to that of water and tissue, thus making the tubes a proper representation of CO2-filled blood vessels 21 or air-filled lung structures 28 . Although the phantom correctly represents the image SNR of gas-tissue structures, it naturally does not include the more complex background from, e.g., bone, hair, and movement in a live mouse.
Mouse lungs and the pulmonary emphysema protocol. The excised lungs came from 6- to 8-week-old pathogen-free female C57BL/6N (Charles River Laboratories, Sulzfeld, Germany) mice. For the induction of pulmonary emphysema, a solution of pancreatic elastase in sterile phosphate-buffered saline was applied orotracheally (80 U per kilogram of body weight). Control mice received 80 μL of sterile phosphate-buffered saline. Mouse lungs were excised 28 days after elastase application, inflated with air, tied at the trachea, and placed in a formalin-filled plastic container. Approximately 2 weeks passed between excision and the x-ray imaging, leading to some leakage of air in some cases. The lung experiments were performed with permission of the Institutional Animal Care and Use Committee of the Helmholtz Zentrum Munich and carried out in accordance with national (Gesellschaft für Versuchstierkunde - Society for Laboratory Animal Science) and international (Federation for Laboratory Animal Science Associations) animal welfare guidelines. The Institutional Animal Care and Use Committee of the Helmholtz Zentrum Munich approved all the experimental protocols.

Data acquisition. The mouse absorption tomography of Fig. 2 was performed with a source-object distance (R1) of 29.8 cm and an object-detector distance (R2) of 5.7 cm, resulting in a magnification of M = 1.19. The reconstruction is based on 120 projections, each exposed for 0.6 s, with an angular step size of 1.5° over 180°, resulting in 73 s total exposure time and a 400 mGy dose. In the Supplementary Material we also show reconstructions with 180 × 2 s exposures in 1° steps (6 minutes, 1.9 Gy), and 900 × 2 s exposures in 0.2° steps (30 minutes, 9.5 Gy) for comparison.
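As a quick sanity check of these acquisition parameters, the snippet below (an illustrative sketch, not part of the published analysis) recomputes the geometric magnification, the effective voxel size and the total exposure time from the quantities quoted above.

```python
# Simple cone-beam geometry bookkeeping for the whole-mouse scan (Fig. 2).
R1_cm = 29.8            # source-object distance
R2_cm = 5.7             # object-detector distance
pixel_pitch_um = 9.0    # detector pixel pitch
n_projections = 121     # projections in 1.5° steps over 180° (endpoints included)
exposure_s = 0.6        # exposure per projection

M = (R1_cm + R2_cm) / R1_cm          # geometric magnification
voxel_um = pixel_pitch_um / M        # effective voxel size at the object
total_s = n_projections * exposure_s

print(f"magnification M      = {M:.2f}")            # ~1.19
print(f"effective voxel size = {voxel_um:.1f} um")  # ~7.6 um
print(f"total exposure time  = {total_s:.0f} s")    # ~73 s
```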
The PBI phase-contrast lung tomography in Fig. 4 was performed with M = 1.67 (R1 = 30 cm and R2 = 20 cm). The reconstructions are from 180 projections × 2 s with an angular step of 1° (6 minutes, 2.4 Gy). In the Supplementary Material we show reconstructions from 900 projections × 2 s with 0.2° steps (30 minutes, 9.5 Gy).

Data processing and reconstruction. All experimental data was processed with the same procedure.
The projections were first flat-field corrected and then phase-retrieved using the Paganin method 38 before the tomography. The phase-retrieval assumed the appropriate constants for each experiment, i.e., bone/soft tissue (Fig. 2), water/air (Fig. 3) and tissue/air (Fig. 4). We note that the phase-retrieval step had a negligible influence on the absorption imaging of Fig. 2. The tomographic reconstruction was performed with the cone-beam-corrected filtered back projection in the Octopus software (Inside Matters, Aalst, Belgium) with 7.6 μ m voxel size. The 3D surface rendering employed the Amira software (Visage Imaging, San Diego, CA, US) on 2 × 2 binned data.
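For readers unfamiliar with the single-distance Paganin method 38, the following minimal numpy sketch illustrates the filtering step applied to each flat-field-corrected projection before reconstruction. It is an illustrative re-implementation under simplifying assumptions (a homogeneous object; the δ, β and geometry values below are placeholders), not the processing pipeline actually used, which relied on the Octopus software.

```python
import numpy as np

def paganin_filter(projection, pixel_size, prop_distance, delta, beta, wavelength):
    """Single-distance Paganin phase retrieval for one flat-field-corrected projection.

    projection    : 2D array of I/I0 (dimensionless transmission)
    pixel_size    : effective pixel size at the object plane [m]
    prop_distance : effective propagation distance, here taken as R2/M [m]
    delta, beta   : refractive index decrement and absorption index of the object
    wavelength    : x-ray wavelength [m]
    Returns the retrieved projected thickness map [m] (homogeneous-object assumption).
    """
    proj = np.asarray(projection, dtype=float)
    mu = 4.0 * np.pi * beta / wavelength          # linear attenuation coefficient
    ny, nx = proj.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    k2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    # Low-pass filter suppressing the propagation-induced edge enhancement.
    filt = 1.0 / (1.0 + prop_distance * delta * k2 / mu)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(proj) * filt))
    return -np.log(np.clip(filtered, 1e-6, None)) / mu

# Hypothetical parameters, roughly in the range of the lung scans (M = 1.67, R2 = 0.2 m).
proj = np.ones((256, 256)) * 0.8                   # placeholder transmission image
thickness = paganin_filter(proj, pixel_size=5.4e-6, prop_distance=0.2 / 1.67,
                           delta=1e-7, beta=1e-10, wavelength=5e-11)
print(thickness.shape)
```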
Histology. The lungs were washed to remove paraformaldehyde and then dehydrated and embedded in paraffin. At intervals of 0.5 mm, multiple 10-μm-thick slices were prepared in the coronal plane. The slices were stained using the routine Mayer hematoxylin-eosin protocol. Subsequently, the slices were scanned at 2.5× and 20.0× magnifications to create digital images. | 2018-04-03T03:31:31.464Z | 2016-12-13T00:00:00.000 | {
"year": 2016,
"sha1": "09e79750da0c4a92fe24b05b3719e7698776c02b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/srep39074",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6fbeaf748c061a7d51ace06a9d83c6e439072b2a",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
247447237 | pes2o/s2orc | v3-fos-license | The classical Jellium and the Laughlin phase
I discuss results bearing on a variational problem of a new type, inspired by fractional quantum Hall physics. In the latter context, the main result reviewed herein can be stated as "the phase of independent quasi-holes generated from Laughlin's wave-function is stable against external potentials and weak long-range interactions". The main ingredient of the proof is a connection between fractional quantum Hall wave-functions and statistical mechanics problems that generalize the 2D one-component plasma (jellium model). Universal bounds on the density of such systems, coined "incompressibility estimates", are obtained via the construction of screening regions for any configuration of points with positive electric charges. The latter regions are patches of constant, negative electric charge density, whose shape is optimized so that the total system (points plus patch) does not generate any electric potential in its exterior.
The mathematical set-up is first described as concisely as possible in Section 2. Next, the physical motivation for such investigations and the elements of context allowing one to interpret the main result are discussed in Section 3. The keywords here are the fractional quantum Hall effect and the Laughlin function.
As regards proofs, everything proceeds from the "plasma analogy", which maps trial states for the original, quantum, variational problem onto classical statistical mechanics Hamiltonians. More precisely, the modulus squared of the quantum wave-functions has an interpretation in terms of Gibbs states of effective 2D Coulomb Hamilton functions. In Section 4 we give a brief review of the statistical mechanics of the simplest of such equilibria: the jellium (or one-component plasma), namely a system of classical point charges interacting via repulsive Coulomb forces and attracted to a neutralizing background of opposite charge. Basic questions (for which the recent [60] provides an in-depth review) concern
• the existence of the thermodynamic limit for homogeneous systems, as investigated first in pioneering works by Elliott H. Lieb and co-workers [68,59,66] and then taken up e.g. in [100,24,42].
• the local density approximation for inhomogeneous systems, investigated in a long series of works by Serfaty and co-workers, reviewed e.g. in [102,103,104].
In our applications to the variational problem to be described shortly, our needs are somewhat more specific. What we need are "incompressibility estimates": universal local density upper bounds for a certain class of generalized Coulomb systems. The simplest of such estimates follows from an unpublished theorem of Elliott: in the ground state of a classical jellium, the minimal distance between two point charges is bounded below, uniformly in the thermodynamic limit. The reason is that each point charge neutralizes a circular patch of the background around it, creating a region where it is never energetically favorable to put another point charge. This method was generously communicated by Elliott to Sylvia Serfaty in private conversation. Generalizations thereof played a key role in the aforementioned program [88,84,83]. Passed on to the present author and Jakob Yngvason, it made it possible to prove the first unconditional incompressibility estimate [92], after the problem had been isolated and first progress made in [91]. To obtain a satisfactory bound, whose use unlocked the main theorem of Section 2 below, a new key idea was necessary, proposed by Elliott to Jakob Yngvason and the present author. One can in fact associate to any set of point charges a patch of the background such that the potential generated by the ensemble vanishes outside of the patch 1 . It is never energetically favorable to add another charge inside such a patch. The construction of such screening regions is not straightforward, nor is their use to complete the proof of the needed incompressibility estimates. All this is the topic of [69,70]. Further developments are in [93,80], as we will explain below (see [87] for another exposition).
Before proceeding, some directions related to this note, but whose discussion goes beyond its scope, are worth mentioning. Indeed, Elliott made other inspiring contributions to the study of the Laughlin function [46,47] or the statistical mechanics of jellium-like models [61,62,63].
AN UNUSUAL VARIATIONAL PROBLEM
2.1. Statements. We start from a very standard Hamilton function (2.1), where V, w ∶ ℝ² ↦ ℝ are respectively a one-body and a two-body potential and λ ∈ ℝ is a coupling constant.
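For concreteness, a plausible form of the Hamilton function (2.1), consistent with the description just given (the symbols V, w and λ are inferred rather than quoted from the original), is:

```latex
% Plausible form of (2.1); symbols are inferred, not verbatim.
\begin{equation}
  \mathcal{H}_N(x_1,\dots,x_N) \;=\; \sum_{j=1}^{N} V(x_j)
  \;+\; \lambda \sum_{1 \le i < j \le N} w(x_i - x_j),
  \qquad x_1,\dots,x_N \in \mathbb{R}^2 .
  \tag{2.1}
\end{equation}
```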
We are interested in minimizing the expectation value of the above in particular quantum wavefunctions of the following form. Let > 0 and be the Laughlin function of exponent ∈ ℕ * , where the planar coordinates 1 , … , are identified with complex numbers 1 , … , and Lau = Lau ( ) is a 2 -normalization constant. For any ∶ ℂ ↦ ℂ analytic in all its argument and symmetric in the sense that (2.6) What we aim at is a significant simplification in the → ∞ limit. Namely, consider the simplest functions of the form (2.4): where ∶ ℂ ↦ ℂ is analytic and is a normalization constant. Define a restricted infimum by setting ( , ) = inf , [Ψ ] | Ψ of the form (2.7), ∫ ℝ 2 |Ψ | 2 = 1 . (2.8) Obviously ( , ) ≤ ( , ). What we would like to prove is that ( , ) ≃ ( , ) as → ∞ with fixed. (2.9) In fact, functions from our variational set (2.4) naturally live over thermodynamically large length scales ∼ √ . It is hence natural to scale the potentials and accordingly. We thus set, for fixed functions , , and (the −1 pre-factor ensures that the potential and interaction energies stay of the same order when → ∞) (2.11) We are now ready to state the main result we want to discuss. It was proved in [80] after the (simpler but still highly non-trivial) = 0 version was obtained in [70,93]. We do not state precise or optimal assumptions, nor associated corollaries, for the sake of a simpler exposition.
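For orientation, plausible forms of the Laughlin function (2.2), of the general trial states (2.4) and of the uncorrelated quasi-holes states (2.7), following the standard conventions recalled above (normalization constants and symbols are inferred, not quoted), are:

```latex
% Inferred forms of (2.2), (2.4) and (2.7), following standard conventions.
\begin{align}
  \Psi_{\mathrm{Lau}}(z_1,\dots,z_N) &= c_{\mathrm{Lau}}
    \prod_{1\le i<j\le N} (z_i - z_j)^{\ell}\;
    e^{-B\sum_{j=1}^N |z_j|^2/4},
  \tag{2.2}\\[2pt]
  \Psi_F &= c_F\, F(z_1,\dots,z_N)\, \Psi_{\mathrm{Lau}},
    \qquad F \text{ analytic and symmetric},
  \tag{2.4}\\[2pt]
  \Psi_f &= c_f\, \prod_{j=1}^{N} f(z_j)\, \Psi_{\mathrm{Lau}},
    \qquad f:\mathbb{C}\to\mathbb{C} \text{ analytic}.
  \tag{2.7}
\end{align}
```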
Theorem 2.1 (Energy of the Laughlin phase).
Assume that V and w are smooth fixed functions. Assume that V goes to +∞ polynomially at infinity, and that it has finitely many non-degenerate critical points. There exists λ₀ > 0 such that the asymptotics (2.9) holds with B > 0 fixed, ℓ > 0 a fixed integer and |λ| ≤ λ₀.
Proof outline and the connection with jellium.
To set the stage for the material to be discussed below, we briefly sketch the main steps of the proof of Theorem 2.1.
Plasma analogy.
The main difficulty is to understand what the densities of wave-functions of the form (2.4) have in common. We use Laughlin's plasma analogy from [52,53], writing |Ψ | 2 as a Boltzmann-Gibbs factor, with ensuring 1 -normalization (partition function of the effective plasma) and an effective Hamilton function (2.13) Hence |Ψ | 2 is the probability density of particles minimizing the classical free energy associated with , that is, realizing the infimum over probability measures on ℝ 2 . This rewriting is fruitful because the latter functional has an interpretation in terms of 2D electrostatics. The potential Φ generated by a charge distribution is given by Hence is the energy of mobile negatively charged 2D particles (at locations 1 , … , ∈ ℂ ↔ ℝ 2 ) of charge − √ 4
2. Attracted to a fixed uniform background of positive charge density.
3. Feeling the potential ∶= −2 log | | (2.14) generated by additional "phantom" positive charges. The location of the latter can be essentially arbitrary, and correlated with the positions of 1 , … , , but their charge must be positive because for any (recall that is analytic).
The last equation (2.15) is key to the method I expose here. For the more specific functions (2.7) the interpretation can be made cleaner. Taking f to be a polynomial corresponds to the electrostatic interaction of our mobile charges z_1 , … , z_N with fixed point charges of the opposite sign, located at the zeroes of the polynomial.
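To make the electrostatic interpretation concrete, here is the plasma analogy written out for the quasi-holes states (2.7); this is a standard rewriting, with the normalization constant and the precise temperature convention inferred rather than quoted:

```latex
% Standard form of the plasma analogy; constants and conventions are inferred.
\begin{equation}
  |\Psi_f(z_1,\dots,z_N)|^2
  = \frac{1}{\mathcal{Z}_N}
    \exp\Bigl( -\frac{B}{2}\sum_{j=1}^{N} |z_j|^2
    + 2\ell \sum_{1\le i<j\le N} \log|z_i - z_j|
    + 2 \sum_{j=1}^{N} \log|f(z_j)| \Bigr),
\end{equation}
i.e., a Boltzmann-Gibbs factor for 2D point charges repelling one another
logarithmically, confined by the harmonic well generated by a uniform background,
and feeling the potential $-2\log|f|$ generated by the zeroes of $f$.
```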
Flocking energy and the main strategy. There is a third energy that we use as an intermediary step to relate ( , ) to ( , ). This is the flocking energy (2.18) Note the mean-field character: we minimize over a single one-body density, assuming negligible correlations. However, the original nature of the problem is retained in the upper constraint imposed on admissible densities. This is a kind of "super-Pauli principle": particles are prevented to gather in space beyond a certain fixed density, lower than the usual Pauli principle ( = 1) would require. This is reminiscent of simple models [14] for flocks or swarms of animals, e.g. birds, a direction where Elliott also made contributions [28]. Since we clearly have ( , ) ≤ ( , ), the main theorem will follow from the following arguments: (1) Energy upper bound. By constructing suitable trial states we have The idea is developed in [93,80] after particular cases were dealt with in [89,90]. One writes the function in (2.7) as in (2.16) and optimizes the number and locations of the zeroes 1 , … , for the effective plasma to have a density matching that of the solution to (2.18). For this to be possible it is crucial that the latter saturates the density upper bound (2.19) everywhere on its support, i.e. is in the "solid" phase [28]. This is where we use the assumption that is small enough. For reasons explained in more details in [80], such a constraint is a natural requirement. One should not expect Theorem 2.1 to hold for repulsive interaction potentials if is allowed to be arbitrarily large.
(2) Energy lower bound, = 0. With no interaction potential in (2.1), the flocking energy is a simple bath-tub problem [67, Theorem 1.14] and the many-body problem (2.6) only depends on the oneparticle density What is needed is a proof that the set of admissible is in some sense included in the variational set defining (2.18), namely in some appropriate sense, for any symmetric analytic function entering (2.4). This is what we called an "incompressibility estimate" in [91,92]. The lower bound essentially follows.
(3) Energy lower bound, ≠ 0. With interactions there is an extra mean-field limit to deal with (recall the scaling (2.10)-(2.11)), and it is necessary to understand why correlations can be neglected.
In [80] we used the approach to classical mean-field limits based on the de Finetti-Hewitt-Savage theorem, see [85,86,Chapter 2] for review. The energy now genuinely depends on the pair density and we prove that we can approximate, for any , where is a probability measure over one-particle densities satisfying The most important is that only charges functions satisfying the incompressibility bound (2.21).
To obtain an energy upper bound at ≠ 0 we also prove that, for the pair density associated to a trial state Ψ of the form (2.7) (2) Ψ ( , ) ≃ Ψ ( ) Ψ ( ), which is again a mean-field kind of problem, but for the effective plasma Hamiltonians, not the original physical one.
In this note we wish to focus on the main ingredients, namely the incompressibility estimates necessary for steps (2) and (3) above. After partial progress in [91,92], together with Elliott and Jakob Yngvason we obtained [69,70] where | | is the area of the disk and (1) tends to zero as → ∞.
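For concreteness, a plausible form of the estimate of [69,70] just alluded to (Theorem 2.2), consistent with the discussion that follows (the precise constants and error terms are inferred, not quoted), reads:

```latex
% Inferred form of the incompressibility estimate; constants are not verbatim.
\begin{equation}
  \int_{D(a,R)} \rho_{\Psi_F} \;\le\; \frac{B}{2\pi \ell}\, |D(a,R)|\, \bigl(1 + o(1)\bigr)
  \qquad \text{for } R \gg N^{1/4},
\end{equation}
where $\rho_{\Psi_F}$ is the one-particle density of a state of the form (2.4),
$D(a,R)$ the disk of center $a$ and radius $R$, and $o(1) \to 0$ as $N \to \infty$.
```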
The above means that (with additional mild assumptions, [70, Section 5.1]) the desired bound holds in the sense of averages (on disks, or actually on any other nice set) of length-scale ≫ N^{1/4}. Note that the thermodynamic length scale is ∼ N^{1/2} in this problem, and the typical inter-particle distance ∼ B^{−1/2} (the magnetic length, fixed in our convention). We expect that the bound actually holds on any mesoscopic length scale ≫ 1.
To obtain the stronger notion of incompressibility (2.23), we construct the measure in the standard de Finetti-Hewitt-Savage-Diaconis-Freedman way [86,85]. The difficulty is to prove that it charges only densities satisfying the appropriate bound, which follows from the following result from [80]:
Theorem 2.3 (Probability of violating the incompressibility estimate).
Let be a disk of radius as above. Let |Ψ | 2 be the probability measure on ℝ 2 associated with Ψ of the form (2.4). Denote ℙ ( ) the associated probability of events ⊂ ℝ 2 and ♯( ) the cardinal of a discrete set . Then, for any > 0, This "large deviation bound" implies that not only (2.21) but also the stronger bound holds for all -particles reduced densities ( ) at least if is fixed in the limit → ∞. The de Finetti-Hewitt-Savage-Diaconis-Freedman theorem and arguments from [27] (originally used for fermionic semi-classical measures on phase-space) then imply (2.23).
PHYSICAL CONTEXT: THE FRACTIONAL QUANTUM HALL EFFECT
We now explain how the above strange variational problem sheds light on some aspects of fractional quantum Hall physics. This section can be skipped by readers who prefer to jump to the connection between the incompressibility bounds of Theorems 2.2-2.3 and the statistical mechanics of the one-component plasma.
Summary.
It is useful to first set a road-map for this section:
• In a typical fractional quantum Hall experiment at filling factor ∼ ℓ^{-1}, the Laughlin function (2.2) is a well-educated guess for the system's vacuum. It freezes the magnetic kinetic energy and greatly reduces the short-range part of the interaction.
• The quasi-holes wave-functions (2.7)-(2.16) are proposed to describe the vacuum's excitations in response to a slightly smaller filling factor, to impurities in the sample, to residual interactions, to external fields etc.
• The arguments leading to the above points in fact allow one, in full generality, to work with the class of states (2.4). The further reduction to (2.7) is motivated by (legitimate!) arguments based on simplicity/guesses, and, ultimately, experimental confirmation.
• That the Laughlin phase of uncorrelated Laughlin quasi-holes does in fact emerge as the effective ground state of the system is justified in the thermodynamic limit by Theorem 2.1 and corollaries.
The rest of the section is meant as a clarification of the above summary.
The many-body quantum Hamiltonian. We start from a basic Hamiltonian for the quantum 2D electron gas (in adimentionalised form ℏ = = = 2 = 1) acting on 2 asym (ℝ 2 ), the Hilbert space for 2D fermionic particles. Here ⟂ denotes the vector ∈ ℝ 2 rotated by ∕2 counter-clockwise, so that and thus 2 ⟂ is the vector potential of a uniform magnetic field, expressed in symmetric gauge. In view of our choice of units, is actually √ times the physical magnetic field, with = 2 ∕(ℏ ) ∼ 1∕137 the fine structure constant, see e.g. [71, Section 2.17].
We take into account an external potential ∶ ℝ 2 ↦ ℝ modeling trapping and/or impurities in the sample, and repulsive pair interactions ∶ ℝ 2 ↦ ℝ between particles. Typically should be the 3D Coulomb kernel (with the fine structure constant again) | − | or some screened version. We have made the customary assumption that the magnetic field is strong enough to polarize all the electrons' spins.
The quantum Hall effect [45,32,33,107,54] is a peculiar feature of the transport properties of 2D electron gases under strong perpendicular magnetic fields. The main experimental findings (see Figure 1) are plateaux in the Hall (transverse) resistance at particular quantized values, accompanied with huge drops in the longitudinal resistance . The extremely precise quantization 3 to particular values of (read on the vertical axis of Figure 1) has an interpretation in terms of topological invariants of the system [11,29,30], but that is not what we focus on here. Instead, looking at the horizontal axis of Figure 1, we see that the particular features occur around special values (the numbers associated with arrows on the picture) of the filling factor of the system with the electrons' density, the applied magnetic field and ℎ, , respectively Planck's constant, the speed of light and the elementary charge. In this note we (partially) address only the question "why does something special happen at these parameter values ?" without touching much on the "how does the particular observed experimental signature emerge ?" Landau levels. The workhorse of the quantum Hall effect is the quantization of kinetic energy levels in the presence of a magnetic field. Namely, the appropriate kinetic energy operator for a 2D particle in a perpendicular magnetic field is acting on 2 (ℝ ). The energy levels (eigenvalues) of the above are well-known [94,45] to be 2 ( +1∕2) for integer , since one can write = 2 † + 1 2 for appropriate ladder operators , † with [ , † ] = 1. The lowest eigenspace (lowest Landau level, corresponding to the eigenvalue ) can be represented as and the -th Landau level can be obtained as † LLL. Hence each energy level is infinitely degenerate when working on the full plane. Well-known arguments indicate that this degeneracy is reduced in finite regions, with a degeneracy ∝ × Area . We give one such heuristic argument 4 . 4 Another one is that (3.3) can be restricted to a rectangle whose area is a multiple of 2 −1 , imposing magneticperiodic boundary conditions see [1,2,26,81,79] or [45, Sections 3.9 and 3.13]. The energy levels are then the same as above, with degeneracy exactly (2 ) −1 × area of the rectangle.
The orthogonal projector Π 0 on LLL can be expressed using vortex coherent states [19,94] in the form and similarly the orthogonal projector on the -th Landau level is The exact expression of Ψ 0, is known, but what matters here is that this function (as well as † Ψ 0, ) is very localized (on the scale of the magnetic length −1∕2 ) around the point ∈ ℝ 2 . Hence, to approximate the number of eigenstates of with eigenvalue 2 ( + 1∕2) localized in a given large domain Ω ⊂ ℝ 2 , it makes sense to restrict the integration in (3.5)-(3.6) to Ω and compute the trace of the so-obtained operator, namely This gives a good approximation of the rank of the operator 1 Ω Π Π 1 Ω , hence of the number of orthogonal energy eigenstates with eigenvalue ∼ 2 ( + 1∕2) that one can fit in the domain Ω.
The integer quantum Hall effect. Some plateaux (left of Figure 1) in /drops in occur at integer values of and it is not surprising that something special should happen there (again, it is highly non-trivial to derive the specific signature of the "something special"). This can be understood in a non-interacting electrons picture, taking only the Pauli exclusion principle into account. One assumes that the magnetic kinetic energy, proportional to , is the main player and that all other energy scales in (2.1) are negligible against it. By this we mean that is dropped in (2.1) and that the only effect of is to essentially confine the gas to a domain Ω.
As the name indicates, the filling factor measures the ratio of electron number to number of available one-body states in a given Landau level (see the above considerations, keeping in mind that (2 ) −1 ℎ = = = 1): if electrons are confined to the region Ω with density = ∕|Ω|. In the ground state of an independent electron picture, one fills the eigenstates of (3.3) with one electron each, starting from the lowest one. At integer , the lowest Landau levels are thus completely filled, and the others completely empty, a very rigid and non-degenerate situation. This rigidity is actually important in order to treat the energy scales other than perturbatively.
The fractional quantum Hall effect. Many plateaux however occur at particular rational filling factors and are impossible to explain in an independent electrons picture. Laughlin's groundbreaking theory [52,53,54] explains why something special ought to occur at e.g. at the right-most plateau = 1∕3 of Figure 1, but also at = 1∕5, a fraction also observed in experiments ( = 1∕9 and lower is not observed, while = 1∕7 is borderline). The = 1∕3 fraction is the first to have been observed [108], and the most stable. Fractions from the principal Jain sequence (very prominent on the figure) are explained in terms of the composite fermions theory [45], a generalization of Laughlin's theory we will not touch upon. Fractions of the form = 1 − 2 + 1 are particle/hole symmetric pendants of the former (3.8), and thus we cover most of the fractions seen on Figure 1. There are other, more exotic, fractions and features, but let us not get into that to focus on Laughlin's theory of the mother of all fractions, namely (3.7).
Restriction to the lowest Landau level.
We henceforth restrict to filling factors < 1 . In the regime relevant to the quantum Hall effect, the gap between the magnetic kinetic energy levels is so large that the first approximation we make is to project all the physics down to as few Landau levels as possible. With filling ratio ≤ 1, the lowest Landau level is vast enough (again, see the above heuristics) to accommodate all particles, and thus we restrict available many-body wave-functions to those made entirely of lowest Landau 5 levels orbitals (3.4). It is in fact convenient to work on the full space at first. The restrictions to finite area/density will actually be performed later, and we will have to make sure they are coherent with our aim: a thermodynamically large system with density ∼ (2 ) −1 .
Killing the interaction's singularity. The main energy scale, the magnetic kinetic energy, is now frozen by projecting all one-body states to (3.4). Laughlin's key idea is that the next energy scale to be considered is the pair interaction, and more precisely its singular short-range part. The wavefunction (2.2) is introduced in order to reduce as much as possible the probability of particle encounters. Since we need the function to belong to analytic and antisymmetric (3.9) there is not much freedom. Ψ Lau is designed to vanish when = while preserving the antisymmetry and analyticity. It may seem that is a free variational parameter. But so far we thought somewhat grand-canonically: we have not fixed the density of our system yet. It turns out (this follows from Theorem 4.2 below) that the one-particle density of Laughlin's function satisfies That is, it lives on a thermodynamically large length scale 6 and has filling factor = −1 (if you bear with me concerning the choice of units in (3.1)).
Now we can answer our original question "what is special about filling factor = −1 ?" The answer is that, at such parameter values, we may form a Laughlin state of exponent as approximate ground state of our system. It minimizes the magnetic kinetic energy exactly, and does a very good job at reducing the short-range part of the interaction.
Towards a rigorous derivation of Laughlin's function. The second part of the above derivation is very heuristic, and will probably stay that way in the case of the true 3D Coulomb interaction. However, if one is willing to approximate the short-range part of the interaction as a sharply peaked delta-like potential, one may indeed derive rigorously the Laughlin state (and/or variants) in a physically relevant limit. This is based on the fact that the Laughlin function is an exact ground state for an approximate interaction of zero-range, projected on the lowest Landau level. For a precise formulation of this, and the derivation of such model interactions from scaled ones, I refer to [64,101]. A Gross-Pitaevskii-like limit of bosonic models based on effective delta interactions projected in the lowest Landau level is studied in a very nice paper by Elliott and co-workers [72].
The main open problem in this direction is to make the derivation of Laughlin's function alluded to above uniform in the particle number . This depends on a spectral gap conjecture for effective zero-range interactions, whose formulation can be found in [87,Appendix] and references therein. Partial progress towards the conjecture are in [77,76,109,110].
Laughlin quasi-holes. So far we have argued that Laughlin's function is a good ansatz for the ground state of the system at the relevant filling factor, when neglecting the effect of the external potential and the long-range part of the interaction in (2.1). That is not the end of the story, for the latter ingredients do exist in actual experiments, in particular, the disorder landscape that impurities enforce in is crucial to the quantum Hall effect.
The Laughlin state should in fact be seen as the "vacuum" of a theory explaining the FQHE experimental data. The next step is to construct the quasi-particles generated from said vacuum when suitably moderate external fields are applied, such as those generating the currents in experiments.
It is in fact easier to argue about quasi-holes, generated e.g. when the filling factor is lowered a little from the magic fraction −1 , as when moving towards the right on Figure 1. The salient feature is that we stay on the same FQHE plateau for a while when doing so. It must hence be that the ground state of the system stays "Laughlin-like" for reasonably smaller . In fact, Laughlin's next key idea is two-fold • for smaller filling factors, the ground state is generated from (2.4) by adding uncorrelated quasi-holes as in (2.7)-(2.16). These are typically pinned by the impurities of the sample (modeled by in (2.1)). • when applying an external field at close to −1 , the current is carried by the motion of such quasi-holes.
The second idea in particular is quite far-reaching: it has by now been measured [97,74,23] that the current is carried in fractional lumps of −1 and [8,78] that the charge carriers obey fractional quantum statistics, i.e. are emergent anyons [6,43,73,51,111,20].
Stability of the Laughlin phase.
The last point motivates the variational problem studied in Section 2. The model incorporates the ingredients from (3.1) that are not frozen by the aforementioned reductions: the external potential representing trapping and disorder and the long-range part of the interaction potential . By restricting the variational set as in (2.5) we take for granted the basic ingredients sketched above, but back-up a little by noting that they actually point to the general form (2.4) for trial states. The Laughlin function (2.2) and associated quasi-holes states (2.7) certainly are the simplest, and hence the first to try in order to explain experimental data. But ideally they should be singled out from the full set (2.4) by minimizing 7 the remaining energy scales in (2.6). This is what Theorem 2.1 proves, under some simplifying assumptions that we now discuss.
In scaling the external potential as in (2.10) we make it live on the natural, thermodynamically large, length-scale of the Laughlin function. This is very reasonable for the trapping part of the potential, but much less so for the part modeling disorder, which typically lives on a much shorter length scale. In fact the shortest length scale we could allow is that dictated by Theorem 2.2, so it does not need to be thermodynamically large. Improving it to realistic values however remains an open problem, and we prefer for simplicity to work on a single length scale in order not to obscure the main statements.
In Theorem 2.1 we assume the interaction to be smooth. This is because it is supposed to represent the long-range part only, the singular short-range part being taken care of by restricting to (2.4). Scaling as in (2.11) has the merit of making the two terms in (2.6) of the same order of magnitude, as in a mean-field limit. This also simplifies statements a lot, but for interactions scaling like 3D Coulomb, this is actually the correct thing to do, see [80,Section 2.2].
Concerning the smallness assumption on in Theorem 2.1, it corresponds to the fact that the filling factor should stay close to −1 for the theorem to be true. Too large a deviation makes the system jump to a different FQHE plateau, e.g. a Laughlin state with higher exponent. What is slightly tricky is that we do not work at fixed density but fixed particle number. But increasing the (repulsive) interaction strength has the net effect of spreading the system further, and hence lowering the density (see again [80, Section 2.2] for more details). An upper bound on | | is thus necessary for the statement to hold. We do not however provide a meaningful estimate of the size of | | needed for the proof to carry through, which is probably model-dependent.
THE ONE-COMPONENT PLASMA
Our ultimate goal is to get to grips with the many-body density |Ψ | 2 of functions defined as in (2.4). Since this will be made possible by the plasma analogy (2.12)-(2.13), we first briefly review the statistical mechanics of classical Coulomb systems (referring to [102,103,104,60] for more complete accounts). The challenging step of including a general many-body analytic factor as in (2.13) will mostly be dealt with in the next section. We first focus on a case closer to the target = ⊗ appropriate to describe quasi-holes wave-functions (2.7). In view of (2.14)-(2.17), a general Hamiltonian including (2.13) with = ⊗ as a particular case is as follows: where 1 , … , ∈ ℝ are coordinates of particles in the Euclidean space, ∶ ℝ ↦ ℝ is an external potential and We only consider space dimensions ≥ 2 in the sequel, where is the fundamental solution of Laplace's equation: so that the potential generated by a charge distribution is obtained through We will be interested in equilibrium properties, namely in the Gibbs states at temperature > 0 minimizing the free-energy amongst probability measures over ℝ . As usual the infimum is given in terms of the partition function , normalizing (4.5) as and we identify the = 0 problem with the ground state. Namely (0, ) = inf ℝ and 0, is the empirical measure associated with a minimum point.
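For concreteness, a plausible reconstruction of the general Hamiltonian and of the Coulomb kernel described in this paragraph (the symbols and equation tags are inferred rather than quoted) is:

```latex
% Inferred reconstruction of (4.1)-(4.2); not a verbatim quote of the source.
\begin{align}
  \mathcal{H}(x_1,\dots,x_N) &= \sum_{j=1}^{N} W(x_j)
    + \sum_{1\le i<j\le N} g(x_i - x_j),
    \qquad x_1,\dots,x_N \in \mathbb{R}^d,
  \tag{4.1}\\
  g(x) &=
  \begin{cases}
    -\log|x| & d = 2,\\[2pt]
    |x|^{2-d} & d \ge 3,
  \end{cases}
  \tag{4.2}
\end{align}
so that $-\Delta g = c_d\,\delta_0$ for a dimensional constant $c_d$, and the potential
generated by a charge distribution $\mu$ is $\Phi^{\mu} = g * \mu$.
```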
Homogeneous systems and the thermodynamic limit.
We start by describing a fundamental contribution by Elliott Lieb and Heide Narnhofer [68], inspired by the methods of [59,66]. Namely we confine the Coulomb gas described above to a finite container of volume , fix the density ∶= and take the thermodynamic limit , → ∞. For this to make sense, we need to make the system neutral. In the jellium model this is done by taking an external potential generated by a constant neutralizing background of density − . Hence we think of one species of charges as fixed and spread in space, and the other as point-like and moving in the "jelly" thus generated. The highly non-trivial point is to quantify the screening of the background by the mobile point charges, leading to a system neutral on length scales much larger than the microscopic typical interparticle distance −1∕ . Hence the long-range tail of the Coulomb potential is not felt all across the box and the free energy can be extensive (after, of course, having taken the energy of the background into account), as indicated by the Theorem 4.1 (Thermodynamic limit for jellium).
Let Ω be a regular simply connected set and Ω its dilation by a factor . Let ∈ ℝ + , = be given by exists and is independent of the shape of Ω.
This follows [68,100] from the method of [59,66] which uses Newton's theorem and averages over rotations around well-chosen centers to quantify screening. A difficulty is that the background is fixed, so that extra care has to be taken with this procedure compared to [59,66] where the background is replaced by point charges. On the other hand, stability of matter is not an issue in this set-up, so that one can deal with the classical model. In [59,66] one has to use the Heisenberg and Pauli principles of quantum mechanics in a highly non-trivial way [65,71] to prove that even the lim inf makes sense.
Inhomogeneous systems and the mean-field approximation.
A natural follow-up question is to choose a inhomogeneous background distribution of charge in (4.1). Thus we now choose a general external potential . It is still desirable that the system lives on a thermodynamic length scale ∼ 1∕ , and that the energy be extensive. We achieve this by picking with a fixed confining potential (i.e growing at infinity). Then, changing length units by setting = 1∕ in (4.1) we obtain an energy in mean-field scaling 8 In the rescaled Hamiltonian in parenthesis, the two terms formally weigh the same and thus the 's will want to stay in a domain of fixed volume. The latter, and the overall shape of the density is obtained by minimizing a continuum/uncorrelated version of the above: (4.10) In fact, rescaling lengths in (4.5)-(4.6), one sees that the temperature is effectively small, so that the entropy does not appear in the leading order of the energy (it does [75,48,49,50,15,16,105,4,5] if one allows for to grow appropriately fast when → ∞, see also [86,85,Chapter 2]).
Theorem 4.2 (Mean-field limit for inhomogeneous Coulomb systems).
In the set-up just described, with a sufficiently (requirements are low) regular function growing polynomially (not really required) at infinity, and ≥ 0 fixed, we have that (cf Footnote 8) where MF is the infimum of (4.10) amongst probability measures on ℝ . Moreover, let ( ) be the -particle density of the full Coulomb system 9 , i.e. with , the Gibbs measure (4.5) if > 0. Then weakly as measures, where MF is the unique (using that ( . ) is of positive type) minimizer for MF .
We did not aim at the greater generality or precision in the above. At various degrees of both these criteria, proofs may be found in [3,25,103,90,88,95,13,12,44,17,18] and many related sources. This vindicates (3.10), for in this case it is particularly easy to compute MF .
To see more precisely why the entropy does not contribute at this order, observe that one can expect the mean-field "uncorrelated" behavior (4.13) In fact, the above result is compatible with this ansatz, and shows there is a lot of truth in it. The entropy in such an ansatz is (4.14) Since the temperature in (4.6) is fixed, the contribution of − × entropy to the free energy is much smaller than the terms identified in (4.11).
Inhomogeneous systems and the local density approximation.
Informally, Theorem 4.1 is a very precise version of Theorem 4.2 in the particular case (homogeneous system) to which it applies: the next-to-leading order beyond mean-field is identified. The equivalent result for Inhomogeneous systems was obtained much later, in [99,88,83,84,82] at = 0 and in [4,57] at > 0. Later developments can be found e.g. in [105,4,5,55,56,58,10,9]. The formulation of the result is in a somewhat different spirit from Theorem 4.1 in these references (and many more things are proved beyond what we state), but we refer to [60] for an explanation of the fact that, indeed, the statement below follows from [99,88,83,57]:
Theorem 4.3 (Local density approximation for inhomogeneous Coulomb systems).
Under the same assumptions as in the previous theorem, we have, in the limit → ∞ with fixed where ( , ) is defined in Theorem 4.1 and =2 = 1 in 2D and 0 otherwise. This is called a "local density approximation" because the correction to mean-field theory is obtained by integrating the free-energy density of the homogeneous system at density MF ( ) over . This means that, locally at the microscopic scale, the system is in thermal equilibrium at the density set by the macroscopic mean-field theory. This separation of scales is again a powerful manifestation of screening in Coulombic matter. See [61,62,63,21,22] for similar results in the context of the uniform electron gas and density functional theory. It is noteworthy that • the fixed temperature shows up at the level of precision of the above but is absent from the leading order in Theorem 4.2. • precise estimates on the remainder are obtained in [4]. The order of magnitude thereof are presumably optimal, for they scale precisely like boundary terms (at least in the homogeneous case where Theorem 4.3 reduces to Theorem 4.1).
Renormalized Jellium energy.
In the proof of Theorem 4.3 (and for the derivation of important corollaries not mentioned here), it is useful to characterize the homogeneous Jellium's free energy not as the thermodynamic limit of an infimum in finite volume, but as the infimum of a quantity directly defined in infinite volume. We briefly sketch this below, referring to [98,99,88,83,57] for more details. That the quantities defined below coincide with those of Section 4.1 follows from the fact that the results of [98,99,88,83,57,4], bearing firstly on the inhomogeneous setting of Sections 4.2 and 4.3, apply as well in the homogeneous setting by choosing the external potential as in Theorem 4.1 (see [60], in particular Remark 38 therein for more comments on this point).
A first key point is to define the energy of a charge configuration via the electric field it generates.
Definition 4.1 (Admissible electric fields).
Let > 0. Let be a vector field in ℝ . We say that belongs to the class if = ∇ℎ with for some discrete set Λ ⊂ ℝ , and integers in ℕ * .
One should think of as the electric field (i.e. gradient of the potential) generated via Laplace's equation (4.3) by a configuration of point charges and a uniform background of density . The "renormalization" alluded to in this subsection's title enters via the smearing of point charges on a length scale , ultimately sent to 0 after the subtraction of appropriate counter-terms. in some subset of ℝ , with Λ ⊂ a discrete set of points, we let We have To define the energy (per volume) of an infinite configuration of point charges minus uniform background, we observe that formally (that is, modulo subtracting the infinite self-energies of the point charges) it ought to be given (using (4.3) again) by the mean value where is the electric field (see Definition 4.1). This is where the renormalization via screening takes place: we smear charges, consider − ∫ ℝ | | 2 as in Definition 4.2, remove the self-energies of individual smeared charges, and then pass to the limit → 0. A key point in the following definition is that we pass to the infinite volume limit before letting → 0.
Let be as in Definition 4.2 and the energy of charges smeared on radius 1 be
For any ∈ , we define ( is a hypercube of side-length , − ∫ is the mean of over it) and the renormalized jellium energy is given by Again, =2 = 1 in 2D and 0 otherwise.
A first statement we can make is that where (0, ) is the ground-state energy per volume defined by Theorem 4.1. This follows from the aforementioned works [99,88,83,84,82] by choosing the external potential as in Theorem 4.1 (see also [60]). An extension to positive temperatures requires more definitions [57], for which we shall be somewhat less precise. We call a point process a probability measure over locally finite point configurations in ℝ , or equivalently over the set of non-negative, purely atomic Radon measures on ℝ giving an integer mass to singletons. Then we have where ( , ) is the free-energy per volume defined by Theorem 4.1. The identity follows from [57,4] in the same way as (4.25). There would be a lot more to say about these definitions (including the extension to tagged point processes, crucial to [57]) and their uses. To keep things between bounds, we stick to the following comments. The infinite volume quantities defined above are very useful to obtain estimates and limit theorems on the fluctuations around the mean-field Theorem 4.2: central limit theorems, large deviation principles ... derived in the aforementioned references. The formulation via the electric field in Definition 4.3 permits to quantify screening mechanisms differently from what was alluded to above, i.e. without appealing to local rotational invariance and then Newton's theorem. Briefly, "good" configurations (energy-wise) of point charges can be modified slightly in order not to change the energy or entropy too much, while making the associated electric field vanish outside of a large hyper-rectangle. Modified configurations in neighboring hyper rectangles can then be glued together without introducing divergences in the field at the interface, and thus making the energies add up. Thereby one obtains trial states for large domains by gluing equilibrium configurations in smaller domains, which is key to a form of additivity allowing to deduce the existence of thermodynamic quantities.
Separation of points in the ground state.
There is one (unpublished) result of Elliott's which, in addition to its independent interest, played an important role in putting the tools of the previous section to good use. To see this, observe that it is not obvious from Definition 4.3 that the infimum of the infinite volume renormalized jellium energy (4.24) is finite. We have to make sure that the negative, diverging when → 0, counter-terms do indeed cancel corresponding infinities in the main term. In the approach of [88] (simplifying [99], and in turn simplified in [83] and [4] using other ingredients) this is achieved as follows: • for a lower bound it is sufficient to consider (quasi-)minimizers.
• by a variant of Elliott's argument, point charges corresponding to (quasi-)minimizers via (4.16) are well-separated.
• if points are more than a distance of order apart, the radial smearing of point charges in Definition 4.2 does not change the interaction between different points, by Newton's theorem. • a few calculations and estimates then vindicate that indeed infinities compensate one-another in the limit → 0. The crucial ingredient, in the second point above, shows that the minimal distance between points from quasi-minimizing configurations is bounded below, uniformly in the limit → ∞ (thermodynamic limit) of (4.23) and in the limit → 0. This is obtained by a variant (mostly accounting for the positive smearing parameter ) of the following (see also [60,Lemma 24]).
Even though the use of this theorem has now been by-passed [4,83] to bound energies of the type of Definition 4.3 from below, variants of it are still crucial to prove separation/equidistribution of charge in related systems [83,84,82].
The proof is as short as it is elegant.
Proof. There is a ball of center 0 and radius −1∕ fully included in Ω . Assume for contradiction that there is at least another point ≠ 0 in said ball, and consider variations of the energy with respect to the motion of that point, all the others being fixed. For the configuration to be a minimizer, it must be that sits at a minimum of the potential generated by the other points and the background, with as in (4.7). We split Φ as Φ can be decreased by moving to , which contradicts the fact that our configuration was assumed to be a minimizer.
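The splitting of Φ used in this argument can be written out as follows; this is the standard version of the argument, with the notation inferred rather than quoted:

```latex
% Standard version of the splitting; notation is inferred.
\begin{equation}
  \Phi = \Phi_1 + \Phi_2, \qquad
  \Phi_1 := g(\,\cdot\, - x_0) \;-\; \rho \int_{B(x_0, r)} g(\,\cdot\, - y)\,\mathrm{d}y,
\end{equation}
with $r \sim \rho^{-1/d}$ chosen so that the background charge contained in the ball
$B(x_0,r)$ exactly neutralizes the point charge at $x_0$, and $\Phi_2$ generated by all
remaining charges. By Newton's theorem $\Phi_1 \equiv 0$ outside $B(x_0,r)$ and
$\Phi_1 > 0$ inside, while $\Phi_2$ is superharmonic in the ball and therefore attains
its minimum over $\overline{B(x_0,r)}$ on the boundary. Moving the point to such a
boundary minimum thus strictly decreases $\Phi$, giving the desired contradiction.
```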
INCOMPRESSIBILITY BOUNDS FOR COULOMB GROUND STATES
We turn to explaining how one can extend the idea of the proof of Theorem 4.5 to prove density bounds of the form (2.21). Using the plasma analogy described in Section 2.2, this translates to density bounds on Gibbs equilibria of generalized 2D classical Coulomb systems. It turns out that the effective temperature in the plasma analogy is quite small in the limit → ∞: in fact one is exactly in the scaling described in Sections 4.2-4.3. Our proof of Theorems 2.2-2.3 proceeds from bounds for the ground state at = 0, coupled with rough estimates relating Gibbs to ground states for small . The latter part is that where we have to restrict to the non-optimal length scales in our statements, and for which an improvement would be most desirable. But this remains an open problem.
In this note we restrict to explaining how to obtain density upper bounds for ground states of 2D classical Coulomb Hamiltonians of the form (2.13). Modulo a change of length and energy units, we can consider the following general Hamiltonian with 1 , … , ∈ ℝ 2 and a (quite possibly -dependent) function superharmonic in each variable 11 : The first term in (5.1) has a constant Laplacian, hence corresponds to the potential generated by a constant neutralizing background, of density 1 in our units. We proved in [69,70] that the density of charge in the ground state cannot exceed that of the background, on any length scale much larger than the typical inter-particle distance (namely, 1 in these units).
Theorem 5.1 (Incompressibility for 2D Coulomb ground states).
There exists a bounded function ∶ ℝ + ↦ ℝ + , independent of and , with such that, for any 0 = ( 0 1 , … , 0 ) minimizing , any point ∈ ℝ 2 and any radius > 0 where ( , ) is the disk of center and radius and ♯ stands for the cardinal of a discrete set. 11 Such functions are sometimes called "plurisuperharmonic" in the literature. Elliott suggested that we stick to "superharmonic in each variable" when writing [69,70], on the grounds that this was 2016, and that in the Trump era, simple words should be preferred.
A first observation is that, due to the superharmonicity (5.2), the proof of Theorem 4.5 applies to this system (there is simply one more superharmonic term in (4.27)), and shows that the minimal distance between points is any case larger than 1∕ √ . Hence, one can place a disk of radius 1∕(2 √ ) around each point without any overlap between the disks. This leads to a non-trivial bound on the density, but 4 times too large, something we used in [92] to obtain (2.21) with 2 −1 in the righthand side.
To obtain the optimal bound needed as an input for Theorem 2.1, a new idea is needed. In the proof of Theorem 4.5, we used that any point neutralizes the background in a disk of radius 1/√π around it (i.e. the total charge of the disk generates no field in its exterior). This can be called a "screening region" for a single point charge, and the main tool in the proof of Theorem 5.1 is to define such a screening region associated to any discrete set of point charges. This idea was known in potential theory under other names prior to our work, see Remark 5.4 below.
Screening regions.
For neutrality, the charge contained in a screening region must be equal to the number of points in the region. Since this is a necessary condition, we include it in the definition of a screening region (Definition 5.2). There is a saying 12 that "a good definition is the assumption of a theorem". At the very least, such definitions are a noteworthy subset of all good definitions. Somewhat dually, a noteworthy subset of all good theorems are "theorems which prove that a natural definition is not empty". The following belongs to this class.
Theorem 5.3 (Thomas-Fermi molecules and screening regions). Let x_1, …, x_N be points in ℝ². Consider the energy functional (5.7) on densities ρ subject to the constraints (5.8). It has a unique minimizer ρ^TF in this class. Moreover, ρ^TF is the indicator function of an open set which is a screening region for the points x_1, …, x_N. In addition, for any radius larger than max_j |x_j|, one has the bound (5.10) controlling the extent of the screening region. The functional minimized to obtain the screening region is referred to as "incompressible Thomas-Fermi", for it is reminiscent of a semi-classical approximation of the energy of a 2D molecule (with "nuclei" at x_1, …, x_N and a continuous density of "electrons" ρ). We use the word "incompressible" because we impose the constraint ρ ≤ 1. A "real" 2D Thomas-Fermi theory would instead have a penalizing term in the energy, with an energy density proportional to ρ², the semi-classical energy density of the free (quantum) 2D electron gas at density ρ. The minimization problem (5.7)-(5.8) corresponds formally to taking a power-law energy density ρ^p and letting p → ∞ to enforce the uniform upper bound ρ ≤ 1.
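The functional (5.7) and the constraint set (5.8) are not reproduced in the extracted text, so the following LaTeX sketch of an incompressible Thomas-Fermi functional of this type is only indicative; the normalization of the Coulomb kernel w and the exact form of the constraint set are assumptions, chosen to match the Hamiltonian sketched after (5.1).

% Sketch of an incompressible Thomas-Fermi functional: attraction of the "electron"
% density rho to the "nuclei" x_1,...,x_N plus self-repulsion of rho.
% The kernel normalization w(x) = -2 log|x| is an assumption.
\[
\mathcal{E}^{\mathrm{TF}}[\rho]
 \;=\; -\sum_{j=1}^{N}\int_{\mathbb{R}^2} w(x-x_j)\,\rho(x)\,dx
   \;+\; \frac12 \iint_{\mathbb{R}^2\times\mathbb{R}^2} w(x-y)\,\rho(x)\,\rho(y)\,dx\,dy,
 \qquad w(x) = -2\log|x|,
\]
% assumed constraint set for the minimization:
\[
0 \le \rho \le 1, \qquad \int_{\mathbb{R}^2}\rho = N .
\]

The incompressibility constraint ρ ≤ 1 is what forces the minimizer to be an indicator function 1_Ω, with |Ω| = N by neutrality.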
The bound (5.10) will be important in the sequel, for it gives a control on the shape of the screening region, which could in general be somewhat wild.
Remark 5.4 (Subharmonic quadrature domains and partial balayage). When proof-reading this text, I became aware of the fact (unbeknownst to us when working on [69,70]) that what we call "screening regions" in Definition 5.2 were known as "subharmonic quadrature domains" in the potential theory literature [41,36,96]. The method to construct such sets in Theorem 5.3 is itself known as "partial balayage of the measure ∑ =1 to the Lebesgue measure" [35]. Another name used more on the physics side [112] is "equigravitational mass scattering". Most of the content of Theorem 5.3 can be found in a variety of sources [96,34,38,31,106], see also [7,37] and references therein. All this is in turn connected to the classical obstacle problem. Our method of proof seems to differ from those in these references, although there is certainly some overlap that we were unaware of. ⋄
Exclusion by screening.
Equipped with the above concept we can now briefly sketch the proof of Theorem 5.1. Actually we sketch one of two proofs presented in [70], relying on (5.10). The other proof we provided is similar in its first steps, but does not use (5.10).
Step 1. Consider a minimizing configuration, a subset thereof, and the screening region associated to it by Theorem 5.3. Since the additional potential in (5.1) is superharmonic in each variable, the arguments of the proof of Theorem 4.5 imply that no other point of the configuration can lie in the screening region just defined. We refer to this as the exclusion rule. In the sequel we can forget about minimizers of (5.1) and consider all point configurations satisfying this rule, namely that no point can lie within the screening region defined by any subset of the other points of the configuration.
Step 2. Consider the set of all configurations satisfying the exclusion rule. We already know (by the argument sketched below Theorem 5.1, using screening regions for single points) that the density of any such configuration is bounded above by 4 (the precise sense in which this holds is similar to (5.3), but with a 4 in the right-hand side). Hence the maximal (or supremal) density achievable by a configuration satisfying the exclusion rule is a well-defined number, and we aim at proving that this maximal density is at most 1.
Step 3. Consider now a configuration (x_1, …, x_N, …) satisfying the exclusion rule and achieving the maximal density. This configuration cannot have any large vacancy. Indeed, a density lower than the maximum in some region would have to be compensated for by a density higher than the maximum in another region. This would contradict the definition of the maximal density as the largest one achievable while satisfying the exclusion rule.
Main and final step. It is hence sufficient (roughly) to consider a configuration satisfying the exclusion rule and having the maximal density "everywhere". Consider the points of said configuration lying in some disk of radius R (say, without loss of generality, of center 0), and the associated screening region Σ_R. Since, by definition, |Σ_R| = number of points in D(0,R), the conclusion of the theorem will follow if we prove that |Σ_R| ≲ R² for large R. This we do by proving that Σ_R cannot "leak" too much out of the original disk D(0,R). Namely, we want to be in the "good case" sketched in Figure 2, where the screening region is included in a slightly larger disk of radius R + ℓ with ℓ ≪ R. Since the configuration satisfies the exclusion rule, the screening region has to avoid all the points in the exterior of the disk. Our main enemy is thus, as sketched in Figure 3, that a tendril of the screening region is sent to infinity, winding its way bizarrely around all the other points. Let Φ be the potential (5.5) generated by the points in the original disk and the screening region. By the exclusion rule and (5.6), it must vanish at all the configuration's points outside of the disk. Since the configuration may not have large vacancies, the points the screening region must avoid, at which Φ = 0, are numerous. A few estimates prove that these points are sufficiently dense to deduce sup_{x ∈ D(0,R)} |Φ(x)| ≪ R².
Remark 5.5 (A proof variant). According to [35, Theorem 5.4], Σ_R can be written as a union of disks with centers in D(0,R) (the proof is in [39,40]). This also excludes the pathological configuration in Figure 3. Indeed, a disk of this union which is not included in D(0,R) and protrudes a distance ℓ beyond it has size ∼ ℓ², so if ℓ → ∞ when R → ∞, it must touch points from the configuration outside D(0,R) (recall that the configuration may not have large vacancies). This would be a contradiction, so ℓ must stay bounded by a (possibly large) constant when R → ∞. It follows that Σ_R ⊂ D(0, R + C) for some constant C, and thus we get (5.12) again. ⋄
SHORT CONCLUSION
We have discussed an unusual variational problem in many-body quantum mechanics, and the motivation for introducing it, which comes from fractional quantum Hall (FQH) physics. The aim is to show that the Laughlin state with quasi-holes is stable under weak external and interaction potentials. The approach to this problem we have been following in the past few years (with Elliott Lieb, Alessandro Olgiati, Sylvia Serfaty and Jakob Yngvason) proceeds by analogy with the study of (somewhat contrived, if seen independently from the FQH motivation) classical Gibbs states of Coulomb systems. This led us to a brief and partial review of known results bearing on more standard classical Coulomb systems: the homogeneous and inhomogeneous jellium. An unpublished (but generously communicated to people who had use for it) theorem of Elliott Lieb bearing on such systems then provided the source of inspiration for the derivation of the main tools used in the study of the FQH variational problem.
Acknowledgments: I am financially supported by the European Research Council (under the European Union's Horizon 2020 Research and Innovation Programme, Grant agreement CORFRONMAT No 758620). It is a pleasure to thank the editors of the present collection for inviting me to write this text, and the aforementioned collaborators, on joint work with whom it is based. I am indebted to Björn Gustafsson for useful discussions and references related to Remarks 5.4 and 5.5. Finally, it is an honor to thank Elliott H. Lieb for the opening or widening of so many fields of mathematical physics, for us to continue exploring. | 2022-03-15T01:16:15.281Z | 2022-03-14T00:00:00.000 | {
"year": 2022,
"sha1": "e46e17c8c66364e05dd8cb6c5617c4c443837524",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e46e17c8c66364e05dd8cb6c5617c4c443837524",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
51766362 | pes2o/s2orc | v3-fos-license | Dimensions of some locally analytic representations
Let $G$ be the group of points of a split reductive group over a finite extension of ${\mathbb Q}_p$. In this paper, we compute the dimensions of certain classes of locally analytic $G$-representations. This includes principal series representations and certain representations coming from homogeneous line bundles on $p$-adic symmetric spaces. As an application, we compute the dimensions in Colmez' unitary principal series of ${\rm GL}_2({\mathbb Q}_p)$.
Introduction
Let L be a finite extension of ℚ_p and let G = 𝐆(L) be the group of L-valued points of a split connected reductive algebraic group 𝐆 over L. Let P ⊆ G be a parabolic subgroup.
Admissible Banach space representations and locally analytic representations of G admit a well-behaved notion of (canonical) dimension. The rational representations coming from the algebraic group 𝐆 and the traditional smooth representations from Langlands theory are known to have dimension zero. Moreover, any representation which is not zero-dimensional has dimension greater than or equal to half the dimension of the minimal nilpotent orbit of 𝐆 [2], [31]. Besides these general results, the dimensions of even very explicit representations like principal series representations have not been computed so far. In this paper, we make an attempt to close this gap and determine the dimensions of certain families of representations. This includes principal series representations as well as representations coming from p-adic symmetric spaces. The technical key result is that the functor F^G_P introduced by Orlik and the second author in [26], from Lie algebra representations of g endowed with a compatible action of P to locally analytic G-representations, preserves the dimension.
As an application, we compute the dimensions in Colmez' unitary principal series of GL_2(ℚ_p) [9]. Let Π(V) denote the unitary representation associated by Colmez' p-adic local Langlands correspondence [10] to an absolutely irreducible 2-dimensional p-adic Galois representation V of Gal(Q̄_p/ℚ_p). The resulting map V ↦ dim Π(V) is bounded above by 2 due to the presence of infinitesimal characters. We show that its restriction to trianguline representations is constant with value 1. This raises the question whether there are absolutely irreducible 2-dimensional representations V of Gal(Q̄_p/ℚ_p) with the property dim Π(V) = 2.
In the following we give more details on the individual sections of this paper. In section 2 we review basic notions of dimension theory and establish two auxiliary lemmas. In section 3 we develop a framework which allows us to prove faithful flatness of Arens-Michael envelopes in many situations. In section 4 we combine this result, in the case of the universal enveloping algebra U(g) of g = Lie(G), with a study of the functor F^G_P and prove that the latter preserves dimensions. On the level of Lie algebra representations, canonical dimension coincides with the more traditional Gelfand-Kirillov dimension, and this enables us to give explicit dimension formulas for the representations F^G_P(M) whenever the Gelfand-Kirillov dimension of M (viewed as a U(g)-module) is known. We illustrate this in section 5 in the case of the classical parabolic Bernstein-Gelfand-Gelfand category for p ⊆ g, where p = Lie(P). For example, the dimension of the locally analytic parabolic induction Ind^G_P(V), where V is a locally analytic P-representation on a finite-dimensional vector space, equals the vector space dimension of g/p. We also remark that the dimensions of irreducible objects in the BGG category can be computed from the Kazhdan-Lusztig conjecture through Joseph's Goldie rank polynomials. The main result of [26] shows that the functor F^G_P preserves irreducibility in many cases, which yields the dimensions of all the irreducible G-representations which can be constructed through a functor of type F^G_P. In section 6 we let G = GL_{d+1}(L) and compute the dimension of locally analytic representations coming from homogeneous line bundles on Drinfeld's upper half space [25]. In section 7 we give the aforementioned application to GL_2(ℚ_p) and its unitary principal series.
Notation and conventions:
We denote by p a prime number and consider fields L ⊂ K which are both finite extensions of ℚ_p. Let o_L and o_K be the rings of integers of L, resp. K, and let |·|_K be the absolute value on K such that |p|_K = p^{-1}. The field L is our "base field", whereas we consider K as our "coefficient field". For a locally convex K-vector space V we denote by V'_b its strong dual, i.e., the K-vector space of continuous linear forms equipped with the strong topology of bounded convergence. Sometimes, in particular when V is finite-dimensional, we simplify notation and write V' instead of V'_b. All finite-dimensional K-vector spaces are equipped with the unique Hausdorff locally convex topology.
We let 𝐆 be a split reductive group scheme over o_L and 𝐓 ⊂ 𝐁 ⊂ 𝐆 a maximal split torus and a Borel subgroup scheme, respectively. We denote the base change to L of these group schemes by the same letters. We let 𝐁 ⊆ 𝐏 be a parabolic subgroup and let 𝐋_P be the unique Levi subgroup which contains 𝐓. By G_0 = 𝐆(o_L), B_0 = 𝐁(o_L), etc., and G = 𝐆(L), B = 𝐁(L), etc., we denote the corresponding groups of o_L-valued points and L-valued points, respectively. Finally, Gothic letters g, p, etc., will denote the Lie algebras of 𝐆, 𝐏, etc.: g = Lie(𝐆), t = Lie(𝐓), b = Lie(𝐁), p = Lie(𝐏), l_P = Lie(𝐋_P), etc. Base change to K is usually denoted by the subscript K, for instance, g_K = g ⊗_L K.
Grade and dimension
In this section we introduce some basic notions in dimension theory and establish two simple lemmas. The term module always means left module. Noetherian rings are two-sided noetherian, and other ring-theoretic properties are used similarly.
We recall the notion of an Auslander regular ring [21]. Let R be an arbitrary associative unital ring. For any R-module N the grade j_R(N) is defined to be either the smallest integer k such that Ext^k_R(N,R) ≠ 0, or ∞. Now suppose that R is (left and right) noetherian. If N ≠ 0 is finitely generated, then its grade j_R(N) is bounded above by the projective dimension of N. A noetherian ring R is called Auslander regular if its global dimension is finite and if every finitely generated R-module N satisfies Auslander's condition: for any k ≥ 0 and any R-submodule L ⊆ Ext^k_R(N,R) one has j_R(L) ≥ k. Let R be an Auslander regular ring of finite global dimension gld(R) and M an R-module. The number dim(M) := gld(R) − j_R(M) is called the (canonical) dimension of M. Let τ be an automorphism of R and let M be a left R-module. We denote by ^τM the abelian group M with the left R-action r.m := τ(r)m and call ^τM the twist of M with τ. In case of a right module M we denote the analogous construction by M^τ.
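In LaTeX, the two basic invariants just introduced read as follows; this is only a restatement of the definitions above, with the canonical dimension being the quantity used later, e.g. in Section 4 where dim_{U(g)} M = d − j_{U(g)}(M).

% grade of a module N over a noetherian ring R
\[
j_R(N) \;=\; \min\bigl\{\, k \ge 0 \;:\; \operatorname{Ext}^k_R(N,R) \neq 0 \,\bigr\} \;\in\; \mathbb{N}\cup\{\infty\},
\]
% canonical dimension of a module M over an Auslander regular ring R of finite global dimension
\[
\dim_R(M) \;:=\; \operatorname{gld}(R) \;-\; j_R(M).
\]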
Lemma 2.1. Twisting with τ has the following properties: (i) the functor M ↦ ^τM is an auto-equivalence on the category of all R-modules, (ii) M is finitely generated if and only if ^τM is finitely generated, (iii) there are canonical isomorphisms Ext^k_R(^τM, R) ≅ Ext^k_R(M, R)^τ for all k, (iv) one has j_R(M) = j_R(^τM).
Proof. Twisting with τ^{-1} yields a quasi-inverse, so (i) is clear. (ii) is trivial, so let us turn to (iii). In the case k = 0 the isomorphism is given explicitly by sending a linear form f on ^τM to the linear form τ∘f on M. According to (i), a projective resolution P_• of M yields a projective resolution ^τP_• of ^τM. Since the isomorphism for k = 0 is natural in M, we are done. (iv) follows formally from (iii).
Lemma 2.2. Let R → S be a faithfully flat ring extension between noetherian rings. Let M be a finitely generated R-module and put M_S := S ⊗_R M. We have Ext^k_S(M_S, S) ≅ S ⊗_R Ext^k_R(M, R) for all k ≥ 0 and, consequently, j_S(M_S) = j_R(M). Indeed, since R → S is flat, choosing a free resolution of M by finitely generated free modules reduces us to the case k = 0 and M = R, where the statement is obvious. By faithful flatness of R → S, we then have Ext^k_S(M_S, S) ≠ 0 if and only if Ext^k_R(M, R) ≠ 0, whence the assertion on the grades.
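For later reference (the completions of U(g) studied below are compared to U(g) precisely through this kind of base change), the content of Lemma 2.2, as reconstructed above, in display form:

% base change of Ext along a faithfully flat extension R -> S, for M finitely generated over R
\[
\operatorname{Ext}^k_S(S\otimes_R M,\, S) \;\cong\; S \otimes_R \operatorname{Ext}^k_R(M, R)
\quad\text{for all } k\ge 0,
\qquad\text{hence}\qquad
j_S(S\otimes_R M) \;=\; j_R(M).
\]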
Arens-Michael envelopes and faithful flatness
Let R be a complete discrete valuation ring with field of fractions K and uniformizer π. Let A be an R-algebra, flat as an R-module, equipped with an increasing and exhaustive filtration F_•A by R-submodules such that 1 ∈ F_0A and F_iA · F_jA ⊆ F_{i+j}A for all i, j. In particular, F_0A is an R-subalgebra of A. We make the following three assumptions on this filtration.
(1) We have F_iA · F_jA = F_jA · F_iA as R-submodules of A for all i, j; (2) the ring F_0A is a commutative noetherian integral domain such that F_0A/πF_0A is a regular integral domain; (3) the associated graded ring gr_{F_•}(A) is commutative and isomorphic to a polynomial ring over F_0A in finitely many, say r, variables (where the polynomial ring has its usual positive grading by total degree with the variables placed in degree one). The regularity assumption in (2) means that all local rings of F_0A/πF_0A at prime ideals are regular or, equivalently, that the ring F_0A/πF_0A has finite global dimension. Of course, any filtration with F_0A = R satisfies (2), but there is no point in restricting to this special case at the moment.
Positively filtered algebras A that satisfy these requirements abound. The main examples we have in mind are universal enveloping algebras of Lie algebras as well as the rings of (crystalline) differential operators on certain smooth affine R-schemes. We will give more details at the end of this section.
In the following we will assume that these conditions hold. We then have the K-algebras A_K := A ⊗_R K and (F_0A)_K := F_0A ⊗_R K. The algebra (F_0A)_K has a natural structure of normed algebra by declaring the lattice F_0A to be the unit ball. We give A_K the finest locally convex topology making the inclusion map (F_0A)_K ↪ A_K continuous (where the source has its norm topology). Our aim in this subsection is to analyze the algebraic and homological properties of the Arens-Michael envelope Â_K of the locally convex algebra A_K. Recall [13] that Â_K := (Hausdorff) completion of A_K with respect to all continuous submultiplicative seminorms.
Among our main results will be that Â_K is a Fréchet-Stein algebra in the sense of [35] and that the canonical completion homomorphism A_K → Â_K is a faithfully flat ring extension. As we will see, these results make the homological algebra of Â_K quite transparent.
As a first step we will obtain a more accessible description of Â_K. To this end, we consider the Rees ring R_{F_•}(A) := ⊕_{i≥0} (F_iA)X^i of the filtered ring A, viewed as a subring of the polynomial ring A[X]. The ring R_{F_•}(A) is noetherian according to [21, II.2.2.1]. For each number n ≥ 0 we let A_n be the image of R_{F_•}(A) under the evaluation homomorphism A[X] → A given by X ↦ π^n. Obviously, A_{n+1} ⊆ A_n and A_0 = A. Let Â_n be the π-adic completion of A_n and put Â_{n,K} := Â_n ⊗_R K. All rings A_n, Â_n and Â_{n,K} are noetherian.
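The objects just introduced, written out in LaTeX (the explicit description of A_n as a sum of π-power multiples of the filtration steps is taken from the computation later in this section):

% Rees ring of the filtered ring A, and its evaluations at X = pi^n
\[
R_{F_\bullet}(A) \;=\; \bigoplus_{i\ge 0} (F_iA)\,X^i \;\subset\; A[X],
\qquad
A_n \;=\; \operatorname{im}\bigl( R_{F_\bullet}(A) \xrightarrow{\,X\mapsto \pi^n\,} A \bigr)
\;=\; \sum_{i\ge 0} \pi^{\,n i}\, F_iA ,
\]
% pi-adic completion and its generic fibre
\[
\widehat{A}_n \;=\; \varprojlim_m A_n/\pi^m A_n,
\qquad
\widehat{A}_{n,K} \;=\; \widehat{A}_n \otimes_R K .
\]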
In the following we will need some basic results on the interplay between the positive filtration F ‚ A on A, the π-adic filtration on A and the rings A n . Such results are established by K. Ardakov and S. Wadsley in [2] and to be completely clear, we therefore relate our situation to the terminology used in loc.cit. The positively filtered ring A is an almost commutative R-algebra in the sense of the definition [2, 3.4]. Moreover, it is deformable and A n is its n-th deformation [2, 3.5]. According to [2,Prop. 3.8] the algebra n,K is therefore an almost commutative affinoid K-algebra in the sense of the definition [2, 3.8].
In particular, n,K is a complete doubly filtered K-algebra with slice n {π n [2, 3.1]. Of course, we have A n {πA n " n {π n .
Each ring A n has its induced filtration F m A n :" A n X F m A. Since gr F ‚ A is flat over R one has In particular, F 0 A n " F 0 A. The graded ring gr F ‚ A n is in fact isomorphic to the graded ring gr F ‚ A via the map given on the i-th homogeneous component as In particular, gr F ‚ A n is isomorphic, as a graded ring, to a polynomial ring in r variables over F 0 A n .
The slice A n {πA n has the quotient filtration coming from F ‚ A n . We let gr F ‚ pA n {πA n q be the associated graded ring. According to [2,Lem. 3.7] the map A n Ñ A n {πA n induces an isomorphism of graded rings Following [2, 3.1] we finally abbreviate Grp n,K q :" gr F ‚ pA n {πA n q .
This is a polynomial ring over F 0 A n {πF 0 A n in r variables and is therefore a noetherian regular integral domain according to (2).
Proposition 3.4. The homomorphism n`1,K Ñ n,K is flat for all n.
Proof. We follow an overall strategy of Berthelot [5, 3.5.3] which is made explicit in [12, 5.3.10]. As a starting point, we equip the ring A n with the following 'augmented' filtration: for all m. We claim that this filtration satisfies for all k, ℓ so that we have an associated graded ring gr F 1 ‚ A n . To prove the claim, it suffices to verify A n`1¨Fm A n " F m A n¨An`1 .
Because of A n`1 " ř jě0 π pn`1qj F j A together with (3.1) this reduces to for each i, j. However, this is a direct consequence of our hypothesis (1). Secondly, we observe that F 0 A n " F 0 A which implies gr F 1 0 A n " F 1 0 A n " A n`1 . Finally, we claim that the ring gr F 1 ‚ A n is finitely generated over gr F 1 0 A n by central elements. To start with, the composite is surjective and factors through F m`1 A n {F m A n for all m ě 0. We obtain a graded ring homomorphism According to the isomorphism (3.2) and our hypothesis (3) on gr F ‚ A, the source of f is a polynomial ring over F 0 A n in finitely many variables, say y 1 , ..., y r P gr F 1 A n . It therefore suffices to see that the images of these generators in gr F 1 ‚ A n are central, that is, they commute with To this end, we choose elements x 1 , ..., x r in F 1 A such that y i " π n x i`F0 A n . This is possible according to (3.2). The commutator rgr F 1 0 A n , f py i qs vanishes in gr F 1 ‚ A n , if we can show the inclusion rA n`1 , π n x i`F0 A n s Ď A n`1 inside A n . Since F 0 A n " F 0 A n`1 Ă A n`1 and since r¨, π n x i s is additive, we are reduced to show rπ pn`1qj z, π n x i s P A n`1 for any z P F j A and j ě 0. Since gr F ‚ A is commutative, the commutator rz, x i s P F j`1 A lies in fact in the subgroup F j A. This implies rπ pn`1qj z, π n x i s " π pn`1qj`n rz, x i s P π n¨πpn`1qj F j A Ă π n¨F j A n`1 Ă F j A n`1 which proves the claim. All in all, we have now verified the conditions (i),(ii),(iii) appearing in [12,Lem. 5.3.9] for the augmented filtration F 1 ‚ A n and its subring F 1 0 A n " A n`1 . Hence, [12,Prop. 5.3.10] implies the flatness of n`1,K Ñ n,K .
The proposition implies that the projective limit lim Ð Ý n n,K , with its projective limit topology, is a Fréchet-Stein algebra in the sense of [35]. Proof. Clearly, any A n gives rise to a continuous submultiplicative seminorm, say ||.|| n , on A K and it suffices to see that these are cofinal in the directed set of all such seminorms on A K . According to [2,Lem. 3.1], the graded ring of A n relative to its π-adic filtration is isomorphic to a polynomial ring pA n {πA n qrts in one variable t over A n {πA n . Since Grp n,K q is an integral domain, the rings A n {πA n and pA n {πA n qrts are integral domains, too. This implies that ||.|| n is in fact multiplicative. After these preliminaries, we consider an arbitrary continuous and submultiplicative seminorm ||.|| on A K . Choose a graded isomorphism between gr F ‚ A and a polynomial ring over F 0 A and lift the variables to elements x 1 , ..., x r in F 1 A. By (2) the ring F 0 A is an integral domain and, hence, so is gr F ‚ A. In particular, the principal symbol map for gr F ‚ A is multiplicative. It follows that the ordered monomials x k :" x k 1 1¨¨¨x kr r for k :" pk 1 , ..., k r q P N r form a basis of the F 0 A-module A. Take an element a P A K and write a " ÿ k a k x k with uniquely determined a k P pF 0 Aq K . Let |.| be the norm on pF 0 Aq K and choose n large enough such that ||x i || ď |π|´n for all i. By (3.2) the symbols of the elements π n x i in gr F ‚ A n are in degree one and constitute a complete set of variables over F 0 A n . Repeating the argument above for A n shows that ||π n x i || n " 1 for all i and that Our assertion follows now from Proposition 3.6. The canonical homomorphism A K Ñ K is faithfully flat.
Before we turn to the proof of the proposition we establish two auxiliary lemmas. We consider the π-adic filtration on A n , n and n,K . Let gr π ‚ A n be the associated graded ring of A n and let t be the principal symbol of π. Of course, gr π ‚ A n " gr π ‚ n . As we have explained above gr π ‚ A n " pA n {πA n qrts equals the polynomial ring over A n {πA n in the variable t. In particular, gr π ‚ n,K " pA n {πA n qrt˘1s .
Since Grp n,K q is noetherian, the ring A n {πA n is noetherian, too. So gr π ‚ A n is noetherian. Since A n is R-flat, the π-adic filtration on A n is separated. Since π is a central and regular element in A n , we have the Artin-Rees property for the π-adic filtration on A n [21, Cor. I. 4.4.8]. This implies that the Rees ring associated with the π-adic filtration of A n is noetherian [21, Thm. II.1.1.5] and this finally allows us to apply the theory of lifted Ore sets as explained in [22]. To do this, let T n Ď gr π ‚ A n be the central and multiplicative subset equal to t1, t, t 2 , ...u and put S n :" ts P A n : σpsq P T u.
Here, σ denotes the principal symbol map for the π-adic filtration on A n . One has S n " tπ m p1`I n q : m ě 0u where I n denotes the ideal of A n generated by π. Recall the notion of an Ore set in a (noncommutative) ring [24, 2.1.13].
Lemma 3.7. The set S n is an Ore set in A n . There is a filtration on the localization S´1 n A n making A n Ñ S´1 n A n a filtered homomorphism. The associated graded ring is canonically isomorphic to the localization T´1 n pgr π ‚ A n q. The completion homomorphism Proof. The statements about the Ore set, the filtration and the graded ring follow from [22, Cor. 2.2/Cor. 2.4]. Note that the filtration on S´1 n A n is Zariskian in the sense of [21] and therefore Lemma 3.8. In the situation of the preceding lemma, the canonical homomorphism A n Ñ A n,K extends to an isomorphism of K-algebras Proof. The canonical homomorphism h : A n Ñ n,K is of course filtered relative to π-adic filtrations. Moreover, hp1`I n q consists of units in n which implies hpsq P p n,K qˆfor each s P S n . For any m we denote the homogeneous component of gr π ‚ A n of degree m by gr π m A n , and similarly for the graded rings gr π ‚ and gr π ‚ n,K . Given s P S n with σpsq P gr π m A n we have σphpsqq P gr π m n,K . We have already explained that A n {πA n is an integral domain. Hence, the graded ring gr π ‚ n,K " pA n {πA n qrt˘1s is an integral domain, too, and therefore its principal symbol map is multiplicative. Since σp1q " 1 P gr 0Ân,K , we deduce that σphpsq´1q P gr´m n,K . The universal property of microlocalization [21, Prop. IV.1.1.3] applied to h therefore yields a filtered homomorphism We claim thatĥ is an isomorphism. Since the filtrations on source and target are exhaustive, separated and complete, it suffices to check that its graded map is an isomorphism [21, Cor. I.4.2.5]. However this graded map equals the canonical map between the graded ring of S´1 n A n and gr π ‚ n,K " T´1 n pgr π ‚ A n q which is an isomorphism according to the preceding lemma. We now turn to the proof of the proposition.
is injective for any n. The ring A K being noetherian, the A K -module J is finitely presented and, hence, so is the K -module K b A K J. It is therefore a coadmissible module for the Fréchet-Stein algebra K [35,Cor. 3.4] and, consequently, equals the projective limit over the modules n,K b A K J. Since the projective limit is left-exact, we obtain thereby from (3.9) the injectivity of the map K b A K J Ñ K . This establishes the flatness of the map A K Ñ K . We turn to faithful flatness. To this end, consider a (left) A K -module M and assumê [5, 3.3.5] that M is a cyclic module on one generator, say m. According to the first lemma, the completion homomorphism S´1 n A n Ñ { S´1 n A n is faithfully flat. Moreover, π P S n , so that S´1 n A n " S´1 n A K . According to the second lemma, we have an isomorphism { S´1 n A n »Â n,K . We may therefore deduce from Thus, there exists an element f n P S n with f n m " 0 for all n. However, S n is of the form Y mě0 π m¨p 1`I n q where I n denotes the ideal generated by π in A n " ř iě0 π ni F i A. Since n,K is π-adically complete, the elements in 1`πF 0 A are units in n,K which allows us to assume that f n is of the form 1`π n g n with some element g n P A. Since Am is contained in the K-vector space M, it is R-free and hence π-adically separated. The limit of the sequence f n m P Am in the π-adic topology equals m. Thus, m " 0 and M " 0. This completes the proof of the proposition.
We have already explained that the ring Grp n,K q is a noetherian regular integral domain. Let d denote its (finite) global dimension. Of course, d equals the sum of the global dimension of F 0 A{πF 0 A and the number r as defined in (3).
Proposition 3.11. The noetherian ring n,K is Auslander regular of global dimension ď d.
According to the proposition the Fréchet-Stein algebra K verifies that assumption (DIM) as formulated in [35, 8.8]. Consequently, the grade number j K is a well-behaved codimension function on the abelian category of coadmissible modules. This implies the following corollary, cf. [35,Lem. 8.4].
We finish with a discussion of examples of algebras A satisfying our requirements. Let g be a R-Lie algebra which is finite and free as an R-module, say of rank d. Let A :" Upgq be its universal enveloping algebra equipped with its usual positive filtration. Then A satisfies all our requirements. Indeed, (1) follows by definition of the filtration and (2) is trivial since F 0 A " R. It is well-known that the graded ring of Upgq equals the symmetric algebra of the R-module g whence (3). Note that A K " Upg K q with the K-Lie algebra g K :" g b R K and that n,K "Ûpπ n gq K , i.e. n,K coincides with the π-adic completion with subsequent inversion of π of the universal enveloping algebra Upπ n gq of the R-Lie algebra π n g for all n. Note that Grp n,K q is isomorphic to the symmetric algebra of the R{πR-vector space g{πg. In particular, the global dimension ofÛ pπ n gq K is in fact equal to d as follows from [2, Prop. 9.1] applied to the augmentation characterÛ pπ n gq K Ñ K given by x " 0 for all x P π n g. Since F 0 A " R, the Arens-Michael envelope K equals the completion of Upg K q with respect to all submultiplicative seminorms on the abstract K-algebra Upg K q. This completion was first introduced and studied in [29] and [30]. For future reference we restate its faithful flatness property.
Theorem 3.13. The natural homomorphism U(g_K) → Û(g_K) is faithfully flat.
As a second example we consider a smooth affine integral scheme X of finite type over R whose closed fibre is integral. We assume that the locally free module of differentials Ω X{R is already free, say of rank d. Let A :" DpXq be the ring of (crystalline) global differential operators on X with its natural filtration. 2 In particular, F 0 A " OpXq, the ring of global sections of X. Then A satisfies all our requirements: again, (1) follows by definition of the filtration and (2) follows from F 0 A " OpXq and our assumptions on X. It is well-known that the graded ring of DpXq equals the symmetric algebra of the OpXq-module consisting of the global vector fields on X whence (3).
More generally, the enveloping algebra of a Lie algebroid [27] gives rise to many examples. Let us briefly recall the definition (taken from [1]). Let R Ñ S be a ring homomorphism to some commutative ring S. A Lie algebroid is a pair pL, aq consisting of an R-Lie algebra and S-module L, together with an S-linear R-Lie algebra homomorphism a from L to the R-linear derivations of S, such that rv, sws " srv, ws`apvqpsqw for all v, w P L and s P S. It is possible to form a unital associative R-algebra UpLq called the enveloping algebra of pL, aq which is generated as a R-algebra by S and L subject to appropriate natural relations. Whenever L is a projective S-module, UpLq has a natural positive filtration with associated graded ring the symmetric algebra Sym S pLq. Suppose now that L is already a free S-module, say of rank d. Then F 0 A " S and A :" UpLq satisfies all our requirements if and only if F 0 A " S satisfies (2). Our two first examples above are the special cases S :" R and pL, aq :" pg, 0q respectively S :" OpXq and pL, aq :" pΩ _ X{R pXq, idq.
From Dpg, P q-modules to DpGq-modules
We consider the locally L-analytic groups P and G as well as the maximal compact subgroup G 0 Ď G. We let P 0 " G 0 X P . The locally analytic distribution algebras with coefficients in K are denoted by DpP q, DpGq, DpP 0 q and DpG 0 q. In this section, we will consider a certain functor F G P p.q 1 from Lie algebra representations of g endowed with a compatible locally analytic action of P to locally analytic G-representations. This functor, or rather its restriction to certain highest weight categories was introduced and studied in [26]. To alleviate notation, we denote the universal enveloping algebra of the base change to K of the L-Lie algebra g by Upgq.
The group G and its subgroup P act via the adjoint representation on the Lie algebra g. We denote by Dpg, P q :" DpP q b U ppq Upgq the corresponding skew-product ring. Similarly, we denote by Dpg, P 0 q the skew-product ring DpP 0 q b U ppq Upgq. Proof. Let UṔ be the group of points of the opposite unipotent radical of P and let uṔ be its Lie-algebra. In particular, g " p ' uṔ . The multiplication map PˆUṔ Ñ G is injective and induces an injective homomorphism DpPˆUṔ q Ñ DpGq. The linear map appearing in the lemma is injective being the composite of the injective linear maps Dpg, P q " DpP q b K UpuṔ q ÝÑ DpP q b K DpUṔ q ÝÑ DpPˆUṔ q ÝÑ DpGq .
The remaining assertions are clear.
An obvious variant of the above proof for the group G 0 shows that the natural linear map Dpg, P 0 q Ñ DpG 0 q is an injective ring homomorphism with image equal to the subring of DpG 0 q generated by DpP 0 q and Upgq.
Lemma 4.2. One has
DpGq " DpG 0 q b Dpg,P 0 q Dpg, P q as bimodules. In particular, Proof. The bimodule map equal to the composite DpG 0 q b DpP 0 q DpP q ÝÑ DpG 0 q b Dpg,P 0 q Dpg, P q Ñ DpGq is an isomorphism according to [36,Lem. 6.1]. Since the first map is surjective, both individual maps are isomorphisms as well. The second statement is clear.
We consider the functor M ↦ F^G_P(M)' := D(G) ⊗_{D(g,P)} M from D(g,P)-modules to D(G)-modules. Here, we follow the notation of [26]; compare in particular Prop. 3.7 in loc.cit. If the parabolic subgroup P is clear from the context, we will occasionally abbreviate ℳ := F^G_P(M)'. We now start a more detailed analysis of the module ℳ, closely following the discussion in [26, 5.5]. We put κ = ⋯ . Let in the following r always denote a real number in (0,1) ∩ p^ℚ with the property: there is m ∈ ℤ_{≥0} such that s = r^{p^m} satisfies 1/p < s and s^κ < p^{−1/(p−1)}.
For such numbers r we let D r pG 0 q and D r pP 0 q be the Banach algebras appearing in loc.cit. Let us briefly sketch their construction. One chooses suitable uniform pro-p groups H Ă G 0 and H`:" H X P 0 such that H is open normal in G 0 . The distribution algebras of H and H`admit canonical r-norms coming from the canonical p-valuation on the group [35]. The rings DpG 0 q resp. DpP 0 q are finite free ring extensions over DpHq resp. DpH`q and carry the corresponding maximum norms. The rings D r pG 0 q resp. D r pP 0 q are the associated completions. They define the Fréchet-Stein structure of DpG 0 q resp. DpP 0 q. Let U r pgq and U r ppq be the topological closure of Upgq in D r pG 0 q and Uppq in D r pP 0 q respectively. Put D r pg, P 0 q :" D r pP 0 q b Urppq U r pgq .
An argument completely analogous to Lem. 4.1 shows that the natural linear map D r pg, P 0 q Ñ D r pG 0 q is an injective ring homomorphism with image equal to the subring of D r pG 0 q generated by D r pP 0 q and U r pgq. If HP 0 denotes the subgroup of G 0 generated by H and P 0 , the intersection P 0,r :" HP 0 X D r pg, P 0 q is thus well-defined. Proof. This follows form [26, 5.6].
We let For g P G we denote by Adpgq the automorphism of Upgq (or U r pgq) induced by the left conjugation action h Þ Ñ ghg´1 of g on G. We note that the group P 0 acts on M r via p.px b mq :" pAdppqpxqq b pm.
Lemma 4.6. The natural map Proof. The ring D r pP 0 q is a finite and free module over U r ppq on a basis given by distributions δ p with p P P 0 . The pP 0 , U r pgqq-module structure on M r therefore extends to a module structure over the ring D r pg, P 0 q. The resulting map D r pg, P 0 q b Dpg,P 0 q M Ñ M r provides an inverse for the map in question.
Using the two lemmas we can derive the following decomposition of M r as U r pgq-module, We compute a class of examples related to locally analytic parabolic induction. Recall the Levi decompositions P " L P¨UP and p " l P ' u P . Let V be a locally analytic L P -representation on a finite-dimensional K-vector space. We set u P V " 0 and consider V a Uppq-module. The induced Upgq-module MpV q " Upgq b U ppq V is then naturally a Dpg, P q-module which is finitely generated over Upgq. Indeed, we have the diagonal action of L P on the tensor product MpV q where L P acts on the factor Upgq via the adjoint action. It extends extends to a DpL P q-action and it suffices therefore to check that the u P -action extends compatibly to DpU P q. However, the action of the Lie algebra u P even integrates uniquely to an algebraic action of U P on MpV q as follows. Given an element u " exppxq P U P pKq, where K denotes an algebraic closure of K, we define ρpuq :" ř ně0 ρpxq n n! , where ρpxq n " 0 for n " 0. The representations of L P and U P are compatible in the sense that h˝ρpuq˝h´1 " ρpAdphqpuqq, for h P L P , u P U P . Hence, MpV q is a DpP q-module and then even a Dpg, P q-module as claimed.
Proof. The map for δ P DpGq, x P Upgq, v P V is well-defined and provides a two-sided inverse.
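To keep track of the objects at play, the constructions of this section can be summarized in LaTeX as follows; the last isomorphism is the content of the proposition whose proof appears just above (its statement did not survive extraction), so the displayed form is a reconstruction consistent with the Remark that follows.

% the functor of Orlik and the second author, and its value on parabolically induced modules
\[
F^G_P(M)' \;:=\; D(G)\otimes_{D(\mathfrak{g},P)} M,
\qquad
M(V) \;:=\; U(\mathfrak{g})\otimes_{U(\mathfrak{p})} V
\quad (\text{with } \mathfrak{u}_P \text{ acting on } V \text{ by } 0),
\]
% reconstructed statement of the proposition proved above
\[
F^G_P\bigl(M(V)\bigr)' \;=\; D(G)\otimes_{D(\mathfrak{g},P)}\bigl(U(\mathfrak{g})\otimes_{U(\mathfrak{p})}V\bigr)
\;\cong\; D(G)\otimes_{D(P)} V .
\]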
Remark: The module D(G) ⊗_{D(P)} V is dual to the locally analytic parabolic induction Ind^G_P(V'). In the following we will investigate the behavior of the functor F^G_P(−)' in terms of dimensions. To this end, recall that the ring U(g) is a noetherian Auslander regular ring of global dimension d := dim_L g. For a finitely generated U(g)-module M we therefore have its canonical dimension dim_{U(g)} M := d − j_{U(g)}(M), cf. section 2.
Remark: Traditionally, dimension theory over the ring Upgq is developed using the socalled Gelfand-Kirillov dimension, cf. [16]. However, it follows from [20,Remark 5.8 (3)] together with [24,Prop. 8.1.15 (iii)] that for finitely generated Upgq-modules, Gelfand-Kirillov dimension coincides with canonical dimension. If DpHq " lim Ð Ýr D r pHq is a Fréchet-Stein structure for DpHq and M r :" D r pHq b DpHq M, then (4.10) dim DpHq pMq " sup r dim DrpHq pM r q according to [35, §8]. Moreover, if M is even a DpGq-module, then, according to [35] and [28], the number dim DpHq M is independent of the choice of H. In this case, we denote it by dim DpGq M, or simply dim M, if no confusion can arise, and call it the canonical dimension of the coadmissible DpGq-module M.
We shall also need the Arens-Michael envelopeÛ pgq of g as introduced in the preceding section. Recall that this is a Fréchet-Stein algebra equal to the completion of Upgq with respect to all submultiplicative seminorms on Upgq. As such, it comes with a natural completion homomorphism Upgq ÑÛ pgq which is faithfully flat, cf. Thm. 3.13.
Theorem 4.11. If M is a D(g,P)-module which is finitely generated as a U(g)-module, then dim F^G_P(M)' = dim_{U(g)}(M). Proof. It suffices to prove j_{D(G_0)}(ℳ) = j_{U(g)}(M). The left-hand side of this identity equals min_r j_{D_r(G_0)}(M_r) according to (4.10). Now D_r(G_0) is a finite free U_r(g)-module on a basis which consists of units satisfying the assumptions of [35, Lem. 8.8]. Hence, j_{D_r(G_0)}(M_r) = j_{U_r(g)}(M_r) for all r. By (4.7) together with Lem. 2.1, we have j_{U_r(g)}(M_r) = max(⋯). So it remains to show j_{U(g)}(M) = min_r j_{U_r(g)}(M_r). Since M̂ := Û(g) ⊗_{U(g)} M is coadmissible, we have j_{U(g)}(M) = j_{Û(g)}(M̂) = min_r j_{U_r(g)}(M_r) according to (3.10) and (3.12).
Combining the theorem with [16,Lem. 8.9] gives the dimension of parabolically induced representations.
Corollary 4.12. One has dim F^G_P(M(V))' = dim_L(g/p), where dim_L denotes vector space dimension.
Highest weight modules and dimension
In this section we explain the relation to the parabolic BGG-categories for the pair p Ď g appearing in [26] and compute the dimensions of certain irreducible G-representations occurring in the image of the functor F G P . As in the previous section, we make the general convention that, when dealing with universal enveloping algebras, we write Upgq, Uppq etc. to denote the corresponding universal enveloping algebras after base change to K, i.e., what is precisely Upg K q, Upp K q and so on.
5.1.
The category O and its parabolic variants O p . The category O in the sense of Bernstein, Gelfand, Gelfand, cf. [4], [15], is defined for complex semi-simple Lie algebras. Here we consider the following variant for split reductive Lie algebras over a field of characteristic zero. Thus we let O be the full subcategory of all Upgq-modules M which satisfy the following properties: (1) M is finitely generated as a Upgq-module.
(2) M decomposes as a direct sum of one-dimensional t K -representations.
(3) The action of b K on M is locally finite, i.e. for every m P M, the subspace Upbq¨m Ă M is finite-dimensional over K. As in the classical case one shows that O is a K-linear, abelian, noetherian, artinian category which is closed under submodules and quotients, cf. [15, 1.1, 1.11]. In particular, every object of O has a Jordan-Hölder series and a simple object of O is simple as abstract Upgq-module. Following [26] we define a certain 'algebraic' subcategory of O. Note that by property (2), we may write any object M in O as a direct sum where M λ " tm P M | @x P t K : x¨m " λpxqmu is the λ-eigenspace attached to λ P tK " Hom K pt K , Kq. Let X˚pTq " HompT, G m q be the group of characters of the torus T which we consider via the derivative as a subgroup of tK.
We denote by O alg the full subcategory of O whose consisting of objects M P O where the t K -module structure on every M λ lifts to an algebraic action of T. Again, O alg is an abelian noetherian, artinian category which is closed under submodules and quotients. The Jordan-Hölder series of a given Upgq-module lying in O alg is the same as the one considered in the category O.
Example 5.2. For λ ∈ t*_K, let K_λ = K be the 1-dimensional t_K-module where the action is given by λ. Then K_λ extends uniquely to a b_K-module. Let M(λ) := U(g) ⊗_{U(b)} K_λ be the corresponding Verma module. Denote by L(λ) ∈ O its simple quotient. Suppose the character λ integrates to a locally analytic character of T. As we have explained before Prop. 4.9, the module M(λ) is then a D(g,B)-module finitely generated over U(g), and the same holds true for L(λ). In this situation, M(λ) resp. L(λ) is an object of O_alg if and only if λ ∈ X*(T).
We shall also need the parabolic versions of the above categories. We define O p to be the category of Upgq-modules M satisfying the following properties: (1) M is finitely generated as a Upgq-module.
(2) Viewed as a l P,K -module, M is the direct sum of finite-dimensional simple modules.
(3) The action of u P,K on M is locally finite. This is analogous to the definition over an algebraically closed field, cf. [15, ch. 9]. Clearly, the category O p is a full subcategory of O. Furthermore, it is K-linear, abelian and closed under submodules and quotients, cf. [15, 9.3]. Hence the Jordan-Hölder series of every Upgq-module in O p Ă O lies in O p as well. If Q is a standard parabolic subgroup with Q Ą P , then O q Ă O p . Finally, consider the extreme case p " g: the category O g consists of all finite-dimensional semi-simple g K -modules. On the other hand, Similarly as before we define a subcategory O p alg of O p as follows. Let Irrpl P,K q fd be the set of isomorphism classes of finite-dimensional irreducible l P,K -modules. Again, any object in O p has by property (2) a decomposition into l P,K -modules . Let ∆ be the set of simple roots of G with respect to T Ă B. Let λ P tK and set I " tα P ∆ | xλ, α _ y P Z ě0 u. We let P " P I is the standard parabolic subgroup of G attached to I. Then λ is dominant with respect to the reductive Lie algebra l P . Denote by V I pλq the corresponding irreducible finite-dimensional l p -representation and consider the generalized Verma module (in the sense of Lepowsky [19]) There is a surjective map where the kernel is given by the image of ' αPI Mps α¨λ q Ñ Mpλq. Now suppose the l p -representation on V I pλq integrates to a locally analytic L P -representation. As we have explained before Prop.4.9, the module M I pλq is then a Dpg, P q-module and finitely generated over Upgq. In this situation, M I pλq is an object of O p alg if and only if the l p -action on V I pλq integrates to an algebraic L P -action. This happens if and only if λ P XpTq. In this case, Lpλq is an object of O p alg , as well, cf. [15, sec. 9.4]. Let M be an object of O p alg as above. Then M is the union of finite-dimensional p Kmodules. Denote by X one of these finite-dimensional submodules. Then X lifts uniquely to an algebraic P K -representation [26,Cor. 3.6]. Let us sketch the argument. The Uppqmodule X, considered as a Upl p q-module, decomposes into a direct sum of isotypic modules X a and each module X a lifts uniquely to an algebraic representation of L P,K . The action of the Lie algebra u p,K integrates uniquely to an algebraic action of U P on X in the manner we have explained before Prop. 4.9. This shows that X is uniquely endowed with an algebraic representation of P K . Consequently, there is a unique Dpg, P q-module structure on M that extends its Upgq-module structure and such that the action of Uppq, as a subring of Upgq, coincides with the action of Uppq as a subring of DpP q. Moreover, any morphism M 1 Ñ M 2 in O p alg is automatically a homomorphism of Dpg, P q-modules. In other words, we have a fully faithful embedding of categories O p alg ãÑ category of all Dpg, P q´modules, finitely generated over Upgq .
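The generalized Verma module mentioned above (whose defining formula did not survive extraction) and the surjection onto it can be written in LaTeX as follows; this is the standard Lepowsky construction, consistent with the kernel description given in the text, and the convention that the nilpotent radical acts by zero on V_I(λ) is an assumption (standard for this construction).

% parabolic (generalized) Verma module attached to lambda and I, and the surjection from M(lambda)
\[
M_I(\lambda) \;:=\; U(\mathfrak{g}) \otimes_{U(\mathfrak{p}_I)} V_I(\lambda),
\qquad
M(\lambda) \twoheadrightarrow M_I(\lambda),
\]
% kernel of the surjection, as described in the text
\[
\ker\bigl(M(\lambda) \to M_I(\lambda)\bigr)
\;=\; \operatorname{im}\Bigl(\bigoplus_{\alpha\in I} M(s_\alpha\cdot\lambda) \longrightarrow M(\lambda)\Bigr).
\]

Here V_I(λ) denotes the irreducible finite-dimensional l_{P_I}-representation of highest weight λ, viewed as a p_I-module.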
We now explain how one may compute the dimensions of the irreducible G-representations that occur in the image of O^p_alg via the functor F^G_P. It follows from [14, sec. 9.4] that for every object M of O there is a unique standard parabolic subalgebra p which is maximal for M. The same definition applies for objects in the subcategory O_alg, in which case we also say that the standard parabolic subgroup P corresponding to p is maximal for M. In that case M lies in O^p_alg. We recall [26, Thm. 5.3]. Theorem 5.6. If the root system Φ = Φ(g,t) has irreducible components of type B, C or F_4, we assume p > 2, and if Φ has irreducible components of type G_2, we assume that p > 3. Let M ∈ O^p_alg be simple and assume that P is maximal for M. Then F^G_P(M)' is a simple D(G_0)-module (and so, in particular, a simple D(G)-module).
We let λ ∈ t*_K be the differential of a locally analytic character of T and let P = P_I be adapted to λ in the sense of Example 5.4. Then p_K is maximal for M(λ). Consider the simple quotient L(λ) and the coadmissible D(G)-module L_I(λ) := F^G_P(L(λ))'.
We have dim Lpλq " dim Lpλq by Thm. 4.11. Moreover, if λ P X˚pTq, then Lpλq is an object of O p alg and the DpGq-module Lpλq is simple under the assumptions of the preceding theorem. We therefore briefly recall the classical relation of dim Lpλq to classical Goldie rank polynomials. To this end, we need to introduce some extra notation following [16, 2.7]. We let X˚pTq Ď Λ be the integral weight lattice and let Λ`and Λ``be the subsets of dominant resp. strictly dominant weights. For simplicity we assume λ P Λ. Recall that the isomorphism classes of the Lpµq, µ P t˚as well as the isomorphism classes of the Mpµq, µ P t˚form two different Z-bases of the Grothendieck group of the abelian category O, cf. [16, 4.5]. In particular, for any µ P tr Lpµqs " ÿ µ 1 Pt˚p Lpµq : Mpµ 1 qqrMpµ 1 qs for some uniquely determined coefficients pLpµq : Mpµ 1 qq P Z. For any µ P Λ``, the number a Λ pw, w 1 q :" pLpw¨µq : Mpw 1¨µ qq for w, w 1 P W is independent of the choice of µ, cf. [16, 4.14]. We fix once and for all an element t P t such that αptq " 1 for all α P ∆. For fixed w P W we let m " m w P N ě0 be minimal such thatf is nonzero, cf. [16, 9.13]. The number m w does not depend on the particular choice of t P t˚. In fact, different choices of t lead to polynomials that differ by a scalar in Lˆ, cf. [16, 14.7]. The polynomialf Λ w is, up to scaling, the so-called Goldie rank polynomial of w P W . 3 We pick µ P Λ`such that λ " w¨µ " wpµ`ρq´ρ for some w P W , put (5.7) S :" B 0 µ :" tα P ∆ | xµ`ρ, α _ y " 0u and let W S be the subgroup of W generated by all s α , α P S. Hence, W S coincides with the stabilizer tw 1 P W : w 1¨µ " µu according to [16, 2.5]. Let W S be the unique system of representatives of maximal length for the left cosets in W {W S . Since W¨µ " W S¨µ we may and will assume that w P W S .
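Written out in LaTeX, the change-of-basis multiplicities in the Grothendieck group of O that enter the Goldie-rank discussion above read:

% decomposition of a simple module in the basis of Verma modules, in the Grothendieck group of O
\[
[L(\mu)] \;=\; \sum_{\mu' \in \mathfrak{t}^*} \bigl(L(\mu) : M(\mu')\bigr)\,[M(\mu')]
\quad\text{in } K_0(\mathcal{O}),
\]
% the resulting integers, independent of the choice of a strictly dominant weight mu
\[
a_\Lambda(w,w') \;:=\; \bigl(L(w\cdot\mu) : M(w'\cdot\mu)\bigr),
\qquad w,w'\in W,\ \mu\in\Lambda^{++}.
\]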
Theorem 5.8. The module L(λ) = L(w·μ) has the dimension dim_{U(g)} L(w·μ) = #Φ⁺ − m_w, where m_w denotes the degree of the polynomial f^Λ_w. 3 The polynomials f^Λ_w and their generalizations to arbitrary cosets in t*/Λ were introduced and studied by Joseph and build a bridge between primitive ideals of U(g), nilpotent adjoint orbits and the representation theory of W. For more details we refer to [16, Kap. 14].
Proof. Since w ∈ W^S, we have B^0_μ = S ⊂ τ_Λ(w) according to [16, 2.7(1)] (and in the notation of loc.cit.). We may therefore apply [16, Satz 9.12] to obtain dim_{U(g)} L(w·μ) = #Φ⁺ − m_w. Note that all arguments extend from the split semisimple case of loc.cit. to the more general split reductive case considered here and that Gelfand-Kirillov dimension may be replaced by canonical dimension.
Remark: The wish to explicitly compute the polynomial f^Λ_w and its degree led to the formulation of the so-called Kazhdan-Lusztig conjecture [18]. This conjecture is now a theorem thanks to the work of Beilinson-Bernstein [3] and Brylinski-Kashiwara [6].
Application to equivariant line bundles on Drinfeld's upper half space
In this section we explain briefly how the results of the preceding sections, combined with a theorem from [26], allow us to compute the dimension of representations coming from line bundles on Drinfeld's upper half space.
We let 𝐆 = GL_{d+1}. Moreover, 𝐁 ⊂ GL_{d+1} equals the Borel subgroup of lower triangular matrices and 𝐓 ⊂ 𝐁 the diagonal torus. For a decomposition (n_1, …, n_s) of d+1 the symbol P_{n_1,…,n_s} denotes the corresponding lower standard parabolic subgroup of GL_{d+1} with Levi subgroup L_{n_1,…,n_s}.
Let X be Drinfeld's half space of dimension d ≥ 1 over K. This is a rigid-analytic variety over K given by the complement of all K-rational hyperplanes in projective space ℙ^d_K, i.e., X = ℙ^d_K \ ⋃_H H, where H runs over the set of K-rational hyperplanes in K^{d+1}. There is a natural action of G = GL_{d+1}(K) on X induced by the algebraic action m : 𝐆 × ℙ^d_K → ℙ^d_K of 𝐆 defined by g·[q_0 : ⋯ : q_d] := m(g, [q_0 : ⋯ : q_d]) := [q_0 : ⋯ : q_d]·g^{-1}.
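In display form, restating the definitions just given:

% Drinfeld's upper half space and the (inverse-twisted) GL_{d+1}(K)-action on it
\[
X \;=\; \mathbb{P}^d_K \;\setminus\; \bigcup_{H\ K\text{-rational hyperplane}} H,
\qquad
g\cdot[q_0 : \cdots : q_d] \;:=\; [q_0 : \cdots : q_d]\,g^{-1}
\quad\text{for } g \in \mathrm{GL}_{d+1}(K).
\]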
Let s P Z and denote by λ 1 " ps, ..., sq P Z d the constant integral weight for GL d . Let r " λ 0 P Z and set λ " pr, s, ..., sq P Z d`1 .
We denote by ℒ_λ the homogeneous line bundle on ℙ^d_K = GL_{d+1}/P_{1,d} such that its fibre in the base point is the irreducible algebraic L_{1,d}-representation corresponding to λ. Then we obtain ℒ_λ = 𝒪(r−s), where the G-linearization is given by the tensor product of the natural one on 𝒪(r−s) with det^s. The space of global sections H^0(X, ℒ_λ) is a coadmissible D(G)-module. We may compute its dimension as follows.
Put w j :" s j¨¨¨s1 , where s i P W is the (standard) simple reflection in the Weyl group W -S d`1 of G. Recall that¨denotes the dot action of W on X˚pTq Q . There is at most one integer 0 ď i 0 ď d, such that H i 0 pP d K , L λ q is non-vanishing which is i 0 " 0 for r ě s resp. i 0 " d for s ě r`d`1. Otherwise, there is a unique integer i 0 ă d with w i 0¨λ " w i 0`1¨λ . This is the case for 0 ď i 0 " s´r´1 ă d`1. We put µ i,λ :" This is a L pi,d´i`1q -dominant weight with respect to the Borel subgroup L pi,d´i`1q X Bẁ here B`denotes the upper triangular matrices in GL d`1 . Consider the block matrix where I k P GL k pKq denotes the kˆk-identity matrix. We may regard z j as an element of W and consider the weights z´1 j¨µ j,λ for any j " 0, ..., d´1. For each j we choose an element v j P W such that v´1 j¨p z´1 j¨µ j,λ q P Λ`.
As explained in the discussion following (5.7), we can and will here assume additionally that v j lies in the subset W S j Ă W corresponding to S j :" B 0 v´1 j¨p z´1 j¨µ j,λ q .
Theorem 6.1. The coadmissible DpGq-module H 0 pX , L λ q has the dimension dim H 0 pX , L λ q " #Φ`´min j"0,...,d´1 Here, m v j denotes the degree of the polynomialf Λ v j . Proof. We abbreviate LpX q :" H 0 pX , L λ q. For j " 0, ..., d´1 the Upgq-module Lpz´1 j¨µ j,λ q lies in the category O p j`1,d´j alg . The parabolic p j`1,d´j is maximal for it, cf. [26,Prop. 7.5], and so under the assumption of Thm. 5.6 the DpGq-module F G P pj`1,d´jq pLpz´1 j¨µ j,λ qq is simple, but we do not need this. In any case, there is a filtration of LpX q by coadmissible DpGq-submodules The passage to locally analytic vectors V Þ Ñ V an is an exact functor from admissible Banach space representations to admissible locally analytic representations [35, §7]. Let V 1 and pV an q 1 be the corresponding coadmissible modules.
Proposition 7.1. One has dim V 1 " dimpV an q 1 for any admissible Banach space representation V of G. We now specialize to certain reductive groups G. To this end, let G be a connected split reductive group scheme over o L . Letκ be an algebraic closure of κ, the residue field of o L . Let us consider the following three hypothesis on the geometric closed fibre Gs " G b o Lκ of G which are familiar from the theory of modular Lie algebras (cf. [17], 6.3).
(H1) The derived group of Gs is (semisimple) simply connected.
(H2) The prime p is good for theκ-Lie algebra LiepGsq.
(H3) There exists a Gs-invariant non-degenerate bilinear form on LiepGsq. For example, the general linear group GL n satisfies these conditions for all primes p (using the trace form for (H3)). Any almost simple and simply connected Gs satisfies these conditions if p ě 7 (and if p does not divide n`1 in case Gs is of type A n ). For a more detailed discussion of these conditions we refer to loc.cit.
We assume from now on that (H1)-(H3) hold. As before, Φ`denotes a set of positive roots of G and the number r denotes half the dimension of the minimal nilpotent coadjoint orbit of Gs ( [7], Rem. 4.3.4). We let G from now be on a locally L-analytic group whose L-Lie algebra LiepGq is isomorphic to g L :" L b o L g. Let us identify these Lie algebras via such an isomorphism. Let d " dim L g L . Proposition 7.2. Let V be an admissible G-Banach space representation. If V an has an infinitesimal character, then dimpV q ď 2¨#Φ`. If dimpV q ě 1, then dimpV q ě r.
Proof. The second statement follows from the main result of [2], extended to reductive groups satisfying (H1)-(H3) in [31]. Suppose now that V an has an infinitesimal character. By [31, 9.4/9.6] it suffices to show that, for any n ě 1, the dimension of a finitely generated module M over the Auslander regular ring z Upgq n,K with infinitesimal character is bounded above by 2¨#Φ`. Here, z Upgq n,K denotes the π-adic completion (with subsequent inversion of π) of the universal enveloping algebra Upπ n gq for a choice of uniformizer π of o L . We may choose a good double filtration for M and form its double graded module GrpMq in the sense of [2, 3.2]. The latter is a finitely generated module over Grp z Upgq n,K q " Sympg κ q whose support has dimension equal to the dimension of M. Since M has a central character, GrpMq is annihilated by Sympg κ q G k , the ideal of invariant polynomials without constant term. Its support lies therefore in the cone of nilpotent elements of g κ which has dimension 2¨#Φ`.
We recall that if $V$ is absolutely irreducible, then $V^{\mathrm{an}}$ admits an infinitesimal character [11].
We now let $G = \mathrm{GL}_2$ and turn to the unitary principal series of $G = \mathrm{GL}_2(\mathbb{Q}_p)$. As usual, $B \subset G$ denotes the Borel subgroup of upper triangular matrices. We fix a finite extension $\mathbb{Q}_p \subseteq K$ as a coefficient field for the representations. We denote the continuous character $x \mapsto x|x|$ of $\mathbb{Q}_p^\times$ by $\chi$. Finally, $G_{\mathbb{Q}_p} = \mathrm{Gal}(\overline{\mathbb{Q}}_p / \mathbb{Q}_p)$ denotes the absolute Galois group of $\mathbb{Q}_p$. In [10] Colmez establishes a correspondence $V \mapsto \Pi(V)$ from absolutely irreducible 2-dimensional representations $V$ of $G_{\mathbb{Q}_p}$ over $K$ to absolutely irreducible unitary admissible $G$-representations. This correspondence is based on the construction of a $G$-representation $D(V) \boxtimes \mathbb{P}^1$ attached to $V$ with central character $\delta(x) = \chi^{-1}\det_V(x)$, where $\det_V$ is the character of $\mathbb{Q}_p^\times$ corresponding by local class field theory to the determinant of $V$. The representation $D(V) \boxtimes \mathbb{P}^1$ is an extension of $\Pi(V)$ by its dual twisted by $\delta \circ \det$. In particular, the central character of $\Pi(V)$ equals $\delta$, and Prop. 7.2 implies $\dim \Pi(V) \le 2$. In the remainder of this section, we will determine $\dim \Pi(V)$ in case $\Pi(V)$ belongs to the unitary principal series, i.e. in case $V$ is trianguline [9].
In the following, all $(\varphi, \Gamma)$-modules are taken over the classical Robba ring $\mathcal{R}$. Given a continuous character $\eta : \mathbb{Q}_p^\times \to K^\times$, the associated $(\varphi, \Gamma)$-module of rank 1 is denoted by $\mathcal{R}(\eta)$. Recall that a 2-dimensional Galois representation is called trianguline if the associated étale $(\varphi, \Gamma)$-module is an extension of two (non-étale if $V$ is irreducible) modules of rank 1. If $\eta_1, \eta_2$ are two continuous characters $\mathbb{Q}_p^\times \to K^\times$, we denote the locally analytic induction $\mathrm{Ind}^G_B(\eta_2 \otimes \eta_1 \chi^{-1})$ simply by $B^{\mathrm{an}}(\eta_1, \eta_2)$ (note the reversed order of the $\eta_i$!).

Proposition 7.3. One has $\dim \Pi(V) = 1$ for any irreducible trianguline representation $V$.
Proof. Let $\Delta(s)$ be the étale $(\varphi, \Gamma)$-module associated with $V$. Here, $s = (\delta_1, \delta_2, \mathscr{L})$ is the associated parameter consisting of continuous characters $\delta_1, \delta_2 : \mathbb{Q}_p^\times \to K^\times$ and an element $\mathscr{L} \in \mathbb{P}\bigl(\mathrm{Ext}^1(\mathcal{R}(\delta_1), \mathcal{R}(\delta_2))\bigr)$. In [8, Thm. 0.7] (compare also [23]) the locally analytic representation $\Pi(V)^{\mathrm{an}}$ is computed. Either we have the exact sequence of locally analytic $G$-representations
\[
  0 \to B^{\mathrm{an}}(\delta_1, \delta_2) \to \Pi(V)^{\mathrm{an}} \to B^{\mathrm{an}}(\delta_2, \delta_1) \to 0
\]
(the generic case) or we have an exact sequence of locally analytic $G$-representations
\[
  0 \to E_{\mathscr{L}} \to \Pi(V)^{\mathrm{an}} \to B^{\mathrm{an}}(\delta_2, \delta_1) \to 0
\]
(the special case), where $E_{\mathscr{L}}$ is an extension of a representation $W(\delta_1, \delta_2)$ on a finite-dimensional $K$-vector space by $\mathrm{St}^{\mathrm{an}}(\delta_1, \delta_2)$. Here, $W(\delta_1, \delta_2)$ is in fact a subrepresentation of $B^{\mathrm{an}}(\delta_1, \delta_2)$ and $\mathrm{St}^{\mathrm{an}}(\delta_1, \delta_2)$ denotes the corresponding quotient of $B^{\mathrm{an}}(\delta_1, \delta_2)$. We have $\dim B^{\mathrm{an}}(\eta_1, \eta_2) = 1$ according to Cor. 4.12 for any pair of continuous characters $(\eta_1, \eta_2)$, which settles the generic case. Since $\dim W(\delta_1, \delta_2) = 0$, we have $\dim E_{\mathscr{L}} = 1$, and this settles the special case.
The preceding proposition suggests the following question: Are there any absolutely irreducible 2-dimensional representations $V$ of $G_{\mathbb{Q}_p}$ such that $\dim \Pi(V) = 2$? | 2014-09-29T04:53:15.000Z | 2014-09-29T00:00:00.000 | {
"year": 2014,
"sha1": "d4385ba16f1148049813bcd50525a69aae17e787",
"oa_license": null,
"oa_url": "https://www.ams.org/ert/2016-20-02/S1088-4165-2016-00475-9/S1088-4165-2016-00475-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d4385ba16f1148049813bcd50525a69aae17e787",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
247105183 | pes2o/s2orc | v3-fos-license | Update on Epidemiology, Diagnosis, and Biomarkers in Gastroenteropancreatic Neuroendocrine Neoplasms
Simple Summary

Neuroendocrine neoplasms are divided into two groups: well-differentiated neuroendocrine tumors and poorly differentiated neuroendocrine carcinomas. Progress in diagnostic methods, including pathology optimization and imaging, may be one reason for the increasing incidence of gastroenteropancreatic neuroendocrine neoplasms; however, the contribution of other biological factors remains undetermined. Rapid advances in molecular diagnostic and treatment strategies in recent years have contributed significantly to personalized management for patients with these rare neoplasms. This review aims to provide an update on the epidemiology, diagnosis, and biomarkers in gastroenteropancreatic neuroendocrine neoplasms.

Abstract

Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) are a heterogeneous group of malignancies that originate from the diffuse neuroendocrine cell system of the pancreas and gastrointestinal tract and whose incidence has increased steadily over recent decades. GEP-NENs are broadly classified into well-differentiated neuroendocrine tumors and poorly differentiated neuroendocrine carcinomas; it is essential to understand the pathological classification according to the mitotic count and Ki67 proliferation index. In addition, with the advent of molecular-targeted drugs and somatostatin analogs and advances in endoscopic and surgical treatments, the multidisciplinary treatment of GEP-NENs has made great progress. In the management of GEP-NENs, accurate diagnosis is key to proper selection among these diversified treatment methods; the evaluation of hormone-producing ability, diagnostic imaging, and histological diagnosis is central. Advances in the study of the genetic landscape have led to a deeper understanding of tumor biology; it has also become possible to identify druggable mutations and predict therapeutic effects. Liquid biopsy based on blood mRNA expression has been developed for GEP-NENs and is useful not only for early detection but also for assessing minimal residual disease after surgery and predicting therapeutic effects. This review outlines the updates and future prospects of the epidemiology, diagnosis, and management of GEP-NENs.
Introduction
Neuroendocrine neoplasms (NENs) are a group of epithelial tumors with morphological and immunohistochemical features of neuroendocrine differentiation [1]. Recently, the World Health Organization (WHO) published a uniform classification framework for all NENs to resolve the longstanding confusion regarding differences in terminology among organ systems [1]. The disease can arise in most epithelial organs of the body, with the gastrointestinal (GI) tract and pancreas accounting for approximately 50% of the primary sites [2]. Although all NENs share similar morphological configurations and neuroendocrine marker expression, they behave very differently depending on the site of origin, histological grade, clinical stage, and hormone production [3,4]. The clinical presentation and prognosis of NENs are diverse; therefore, various diagnostic and therapeutic approaches have been attempted to date. Multidisciplinary management strategies have improved the survival of patients with NENs; however, the prognosis of patients with advanced NENs is still unfavorable [5,6]. Additionally, the etiology of NENs is largely unknown outside of certain hereditary genetic syndromes, such as multiple endocrine neoplasia type 1 syndrome (caused by MEN1), MEN 2, von Hippel-Lindau syndrome (VHL), and tuberous sclerosis (TSC1, TSC2) [7]. Recent advances in genomic and epigenetic sciences have provided significant benefits in oncology [8][9][10], whereas such evidence remains insufficient for NENs. Although NENs are considered rare, their incidence has been increasing globally, which has drawn increasing attention from clinicians and researchers in recent years. This review focuses on updated findings on the epidemiology, diagnosis, genetic data, and future perspectives of gastroenteropancreatic (GEP)-NENs.
Epidemiology
The incidence of GEP-NENs increases with age. The median age at diagnosis is 60 years or more for most GI-NENs but reportedly less than 50 years for NENs of the appendix and pancreas [11,12]. The incidence is similar among males and females [11,13]. The reported incidence of GEP-NENs has been increasing worldwide [14,15]. A large population-based study using the Surveillance, Epidemiology, and End Results (SEER) database estimated that the age-adjusted incidence of GEP-NENs in 2012 was 3.56 per 100,000 persons in the United States (US) [2]. The incidence has continuously increased over the last four decades, especially in the small intestine, rectum, and pancreas. Increasing trends have also been observed in European countries, where the prevalence of GEP-NENs ranges from 2.1 to 6.6 cases per 100,000 population according to recent reports [12,[16][17][18][19]. Several population-based studies have been published in Asian countries. In Japan, the age-adjusted incidence of GI-NENs was 2.10 per 100,000 people in 2005 and 2.84 in 2016, an approximately 1.3-fold increase, while the incidence of pancreatic NENs was 1.01 per 100,000 people in 2005 and 0.70 in 2016, a slight decrease [20,21]. In Taiwan, the age-adjusted incidence of GI and pancreatic NENs rose between 1996 and 2015 from 0.13 to 1.87 cases and from 0.02 to 0.45 cases per 100,000 population, respectively [22]. The most common site was the rectum, comprising 30% of all NENs and 47% of GEP-NENs. A Korean multicenter study reported dramatic changes in the incidence of GEP-NENs, with the incidence in 2009 reaching nine times that reported in 2000. The most significant increase was found in the rectum, while no apparent changes were observed at other sites [23]. The recent age-adjusted incidence of GEP-NENs worldwide is shown in Table 1. Recent advances in diagnostic techniques, including endoscopy and imaging, are considered responsible for the increased prevalence of GEP-NENs, especially those in the rectum, stomach, and pancreas [11,13,25]. Indeed, the reported incidence of localized and regional NENs has increased more than that of NENs with distant metastases [2,26].
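As a concrete illustration of how age-adjusted rates of the kind quoted above are typically computed, the following is a minimal sketch of direct age standardization in Python; the age bands, case counts, person-years, and standard-population weights are hypothetical and are not taken from SEER or any registry cited in this review.

# Minimal sketch of direct age standardization for an incidence rate.
# All numbers are hypothetical and only illustrate the arithmetic.
def age_adjusted_incidence(strata):
    """strata: list of (cases, person_years, std_weight) per age band;
    std_weight is that band's share of the chosen standard population."""
    total_weight = sum(w for _, _, w in strata)
    rate = sum((cases / py) * (w / total_weight) for cases, py, w in strata)
    return rate * 100_000  # cases per 100,000 standard population

hypothetical_strata = [
    (12, 1_500_000, 0.40),  # under 40 years
    (45, 1_200_000, 0.35),  # 40-64 years
    (80,   800_000, 0.25),  # 65 years and older
]
print(f"{age_adjusted_incidence(hypothetical_strata):.2f} per 100,000")

The crude rate would simply divide total cases by total person-years; the weighting by a standard population is what makes figures from populations with different age structures comparable across countries and time periods.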
The distribution of GEP-NENs is known to differ regionally [14]. In Asia, rectal NENs are the most prevalent, followed by pancreatic or gastric NENs [20][21][22][23]. In contrast, small intestinal and appendiceal NENs are predominant in Europe (Figure 1) [12,[16][17][18]. Although combinations of biological and environmental factors have been proposed, the reason for these regional disparities has not been clearly elucidated. Notably, Kessel et al. reported analogous racial disparities within the US; rectal NENs were more likely to occur in Asians and African Americans but less likely to occur in Whites. In contrast, small intestinal NENs were common in Whites, African Americans, and Hispanics but rare in Asians [27]. This phenomenon suggests that there might be an association between genetic background and the biological characteristics of GEP-NENs.
The behavior of GEP-NENs varies depending on their primary site, grade, and stage [11,28]. For instance, rectal and appendiceal NENs are more likely to be low-grade and localized, with a better prognosis. However, high-grade NENs are common in the pancreas, stomach, and colon. Esophageal NEN, a rare presentation of NEN, is mostly diagnosed at advanced stages [11,13,21,23,29]. Although SEER data comparing the periods 2000-2004 and 2009-2012 show improved survival for patients with metastatic GEP-NENs, the overall survival of patients with high-grade GEP-NENs and distant metastases remains unfavorable [2,6]. While a subset of NENs is functional, presenting with characteristic endocrine-related symptoms, the majority are non-functional [1] and do not cause symptoms until later stages. Therefore, further development of early identification and targeted therapies for GEP-NENs is warranted.
Diagnosis
The diagnosis of GEP-NENs is based on biopsy, anatomical and functional imaging, and positron emission tomography (PET) with DOTATATE, a gallium (Ga)-68-labeled octreotide derivative, to identify tumors expressing somatostatin receptors (SSTRs). Some blood biomarkers are specific to functional GEP-NENs; however, their utility for comprehensive diagnosis is limited. On the other hand, biomarkers would be helpful for detecting very small tumors, which are difficult to diagnose by imaging or biopsy [30]. We discuss biomarkers, including novel multianalyte biomarkers developed in recent years, in Section 5 (Biomarkers).
Pathology
The 2019 WHO classification of tumors of the digestive system [1] defines GEP-NENs as G1 (Ki67 < 3%), G2 (Ki67: 3-20%), and G3 (Ki67 > 20%), according to the Ki67 proliferation index. G3 GEP-NENs are classified based on cell morphology and proliferation into well-differentiated G3 and poorly differentiated neuroendocrine carcinomas (NECs). NECs are further morphologically classified into two subtypes: small-cell and large-cell carcinomas. Mixed neuroendocrine-non-neuroendocrine neoplasms (MiNENs) were proposed for mixed tumors with exocrine components ( Table 2). For a proper pathological diagnosis, the morphology, grade, and immunohistochemical staining for chromogranin A (CgA) and synaptophysin should be assessed. SSTR2 is expressed in many NENs, and immunostaining for SSTR2 is useful in assessing tumor differentiation and estimating the effects of somatostatin analog therapy [31,32]. Another promising new immunohistochemical neuroendocrine marker is the transcription factor insulinoma-associated protein 1 (INSM1), which appears to be more specific to neuroendocrine cells than synaptophysin [33].
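To make the grading thresholds above concrete, the following is a minimal sketch in Python that assigns a WHO 2019 grade from the Ki67 proliferation index alone; it deliberately ignores the mitotic count and the morphology-based distinction between well-differentiated NET G3 and NEC, both of which require pathological assessment, so the function and its labels are illustrative only.

# Minimal sketch: WHO 2019 grade from the Ki67 proliferation index alone.
# Real grading also uses the mitotic count and tumor morphology
# (well- versus poorly differentiated), which this toy function ignores.
def who_grade_from_ki67(ki67_percent):
    if ki67_percent < 3:
        return "G1"
    if ki67_percent <= 20:
        return "G2"
    return "G3 (NET G3 or NEC, depending on differentiation)"

for ki67 in (1.5, 10.0, 35.0):
    print(ki67, "->", who_grade_from_ki67(ki67))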
Endoscopy
Endoscopy with biopsy is the gold standard for diagnosing NENs of the stomach, duodenum, and colorectum [34][35][36]. For NENs in the small intestine, video-capsule endoscopy and double-balloon endoscopy (DBE) are additional endoscopic techniques that are indicated when primary lesions cannot be detected by conventional imaging such as computed tomography (CT), magnetic resonance imaging (MRI), and somatostatin receptor imaging (SRI) [37]. The sensitivity of DBE in identifying the primary lesion in the small intestine was 90% or more, which is considerably higher than that of other imaging modalities [38].
In the diagnosis of pancreatic NENs, endoscopic ultrasound (EUS) avoids the interference from intestinal gas and subcutaneous fat that limits extracorporeal ultrasound. The sensitivity and specificity of EUS in the diagnosis of pancreatic NENs have been reported to be higher than those of CT [39]. EUS is particularly useful in detecting small pancreatic lesions. EUS-guided fine-needle aspiration (FNA) for cytology and histology can subsequently be performed to grade pancreatic NENs, enabling selection of an appropriate treatment strategy. However, it should be noted that the diagnostic yield of EUS-FNA samples may depend on the operator's technical skill. The low concordance rates for histological grading based on the WHO classification between EUS-FNA and resected specimens are believed to be due to tumor heterogeneity and failure to sample "hot spots" within a lesion. Therefore, to maximize the concordance rate, it is important to collect more than 2000 tumor cells from EUS-FNA samples, as recommended by the European Neuroendocrine Tumor Society [40].
CT
CT is a widely used, standardized, and reproducible technique that generally results in high diagnostic yields, making it the basic radiologic diagnostic imaging method for NENs [41]. The sensitivity is 73% for suspected primary tumors, 95% for unknown primary tumors, 80% for liver metastases, and 75% for extrahepatic metastases. The threshold of detection is 0.5 cm [42]. Morphological imaging may fail to detect small tumors, especially those located in the stomach, duodenum, and small intestine [34].
MRI
MRI is advantageous in the examination of the liver and pancreas and is usually preferred for initial staging and preoperative imaging. Diffusion-weighted MRI is now routinely used in cell-rich tissues, such as tumors, to take advantage of restricted water movement, which facilitates lesion detection. The sensitivity of MRI for detecting pancreatic NETs is 79% (range 54-100%) [43][44][45]. The sensitivity of MRI in detecting metastatic liver lesions is 91% (range 82-98%), which is superior to that of CT [46][47][48][49][50]. MRI is also advantageous over CT in bone and brain imaging [41].
Functional Imaging
Functional imaging studies are based on the expression of SSTRs by GEP-NETs. Historically, imaging of SSTRs included 111In-pentetreotide scintigraphy (Octreoscan®); however, 68Ga-DOTATATE PET/CT was found to be more accurate and is now the technique of choice [4]. SSTR-PET imaging is regarded as the most sensitive and specific method for detecting NEN and its metastases, with a sensitivity of 93-96% and specificity of 85-100% [51]. 68Ga-DOTATATE PET/CT is also important for determining radionuclide uptake, which is associated with response to peptide receptor radionuclide therapy (PRRT) [52]. PET/CT is often performed for the imaging of GEP-NENs, but since MRI provides greater contrast in soft tissues than CT, PET/MRI is more appropriate, particularly when liver and bone metastases are suspected and need to be excluded [53]. Other methods such as 64Cu-DOTATATE are currently in use. It has been reported that 64Cu-DOTATATE has a higher detection rate than 68Ga-DOTATATE. In the future, 68Ga-DOTATATE might be replaced by 64Cu-DOTATATE [54][55][56].
Genetic Features and Targeted Therapy
Data regarding somatic mutations in GEP-NENs were obtained and analyzed for a total of 859 specimens collected from 820 patients from the AACR Project GENIE database (ver.10) (https://www.aacr.org/professionals/research/aacr-project-genie/ accessed on 20 August 2021) (Figure 2, Supplementary Table S1). A total of 490, 191, 36, 19, and 124 specimens from 464, 182, 35, 18, and 121 patients with pancreatic NETs (PANET), small bowel well-differentiated NETs (SBWDNET), well-differentiated NETs of the rectum (RWDNET), well-differentiated NETs of the appendix (AWDNET), and high-grade NECs of the colon and rectum (HGNEC) were included, respectively. These NETs include grade 1 to grade 3 specimens, and the NECs include both small-cell and large-cell types of poorly differentiated neuroendocrine carcinoma according to the WHO classification. PANET mainly harbors mutations in genes that encode regulators of the PI3K/mTOR pathway. The most frequently mutated gene was MEN1, with variants detected in 30.2% of the patients, followed by DAXX (14.9%), TP53 (14.5%), ATRX (7.8%), and TSC2 (6.9%). The MEN1, DAXX, and ATRX genes play important roles in chromatin remodeling. MEN1 binds to the TERT promoter and affects the machinery that controls telomere integrity [57]. Inactivating mutations in DAXX and ATRX are strongly correlated with somatic telomere repeat content and telomere length [58]. Tumors with mutations in DAXX, ATRX, or MEN1 are associated with a worse prognosis than those without these mutations [59,60]. These mutations are rarely present in gastrointestinal NETs. TP53 mutations were predominantly found in poorly differentiated pancreatic NECs and G3 PANET [58,61,62], with mutations detected in 62.9% of HGNEC (Figure 2).
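Patient-level mutation frequencies of the kind quoted here (for example, MEN1 variants in 30.2% of PANET patients) can be reproduced from a GENIE-style mutation table with a few lines of code; the records and field layout below are hypothetical and do not reflect the actual GENIE export schema.

# Minimal sketch: per-gene mutation frequency at the patient level from a
# list of (patient_id, mutated_gene) records. The records are hypothetical.
from collections import defaultdict

records = [
    ("PT-001", "MEN1"), ("PT-001", "DAXX"),
    ("PT-002", "MEN1"),
    ("PT-003", "TP53"),
    ("PT-004", "MEN1"), ("PT-004", "ATRX"),
]
patients = {patient for patient, _ in records}

carriers = defaultdict(set)
for patient, gene in records:
    carriers[gene].add(patient)

for gene, mutated in sorted(carriers.items(), key=lambda kv: -len(kv[1])):
    print(f"{gene}: {100 * len(mutated) / len(patients):.1f}% of patients")

Counting distinct patients per gene rather than raw variant records avoids double-counting patients who carry more than one variant in the same gene.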
SBWDNET and RWDNET have a low rate of candidate driver events. CDKN1B mutations were most frequently identified in SBWDNET, as previously reported [63]. ERBB2 mutations were frequently identified in RWDNET, together with recurrent mutations in TP53, PTEN, and SMAD4, as in a previous report [64]. In well-differentiated neuroendocrine tumors of the appendix, mutations in the KRAS and PIK3CA genes, which frequently occur in right-sided colorectal cancer [65], were detected (Figure 2).
In HGNEC, high mutation rates of colorectal adenocarcinoma-associated genes such as APC, KRAS, BRAF, and TP53 were found. BRAF mutations were detected in 16.1% of patients with HGNEC. BRAF mutations occur in 5-10% of patients with advanced colorectal adenocarcinomas and are associated with a poor prognosis [66,67]. Furthermore, HGNEC displays a high frequency of recurrent TP53 and RB1 mutations, which are commonly observed in small-cell lung cancer [68], and are rare events in other NETs. These mutations may play critical roles in the aggressiveness of malignant tumors ( Figure 2).
All types of GEP-NENs had at least one potentially actionable mutation predictive of a drug response according to evidence levels 1-3B in OncoKB (http://oncokb.org accessed on 30 August 2021) (Figure 3) [69]. For instance, 40 patients (8.1%) with pancreatic NETs may have benefited from mTOR inhibitors. The RADIANT-3 clinical trial with the mTOR inhibitor everolimus demonstrated its safety and efficacy in the treatment of advanced PANET [70]. Moreover, a phase 2 pilot study is currently investigating the utility of the mTOR inhibitor ABI-009 as a single agent in patients with metastatic, unresectable, low- or intermediate-grade NETs of the lung or GEP system (NCT03670030). Eighteen patients (14.5%) with HGNEC harbored a BRAF V600E mutation, which can be targeted by vemurafenib. A previous report demonstrated vemurafenib responses in two patients with NECs [71]; one had a partial response that was sustained for 4.1 months, and the other had stable disease (SD) of unknown duration. The data for utilizing these candidate genes in patients with GEP-NENs are insufficient, and future studies are needed to identify novel therapeutic targets.
Biomarkers
There are no established biomarkers for patients with GEP-NENs. In patients with symptoms suggestive of functional GEP-NENs, hormone biomarkers such as insulin, gastrin, and glucagon are specific, although their utility for accurate diagnosis is limited [30]. Patients with functional NENs can benefit from somatostatin analogs to relieve their hormonal symptoms [3,4]. Additionally, hereditary endocrine tumor syndromes, including MEN1 and VHL, might be present in the background of these patients; therefore, attention to multifocal and multiorgan tumors is needed [7]. CgA has been commonly used as a blood-based biomarker for NETs regardless of tumor type (functionality or location), but its accuracy has been questioned in recent studies [30,72,73]. Several factors, such as heart failure, renal failure, malignant tumors, and the use of proton-pump inhibitors, may cause false-positive CgA results [30,72].
In recent years, the analysis of somatic mutations associated with NETs has provided a new strategy for their diagnosis and follow-up. Liquid biopsy based on mRNA is thought to be useful as a novel biomarker for NENs in place of monoanalyte biomarkers. The NET transcriptome signature assay, NETest (Wren Laboratories, Branford, CT, USA), is an accurate circulating multianalyte biomarker [73]. NETest is a prespotted polymerase chain reaction (PCR) plate targeting 51 genes, in which tumor-derived mRNA is extracted from the patient's blood and quantified by PCR [30,74]. The output is an activity index of 0-100%, with a cut-off value of 20%. An index of 20-40% is considered consistent with SD and 41-100% with progressive disease (PD) [30]. NETest shows high sensitivity and specificity for diagnosis (Table 3). The diagnostic accuracy of NETest is significantly higher (99%) than that of CgA (21-36%) for GEP-NENs [72]. NETest is especially valuable for follow-up after radical resection of NETs. After R0 resection, the NETest index significantly decreased from 62% to 22% 30 days after the initial surgery. For 30% of patients who underwent R0 resection, the NETest index remained high (≥20%); 81% of those patients experienced recurrence 18 months after the initial surgery [79]. A persistently high NETest index after tumor resection suggests the presence of minimal residual disease (MRD) and early recurrence [79,80].
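The activity-index bands described above can be summarized in a small helper; the thresholds follow the text (cut-off 20%, 20-40% consistent with stable disease, 41-100% with progressive disease), but the function and its labels are illustrative only and are not part of the NETest assay itself.

# Minimal sketch of the NETest activity-index bands as described in the text.
def interpret_netest(index_percent):
    if not 0 <= index_percent <= 100:
        raise ValueError("the activity index is reported on a 0-100% scale")
    if index_percent < 20:
        return "below cut-off (negative)"
    if index_percent <= 40:
        return "elevated, consistent with stable disease (SD)"
    return "elevated, consistent with progressive disease (PD)"

for score in (10, 25, 62):
    print(score, "->", interpret_netest(score))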
PRRT is thought to be an effective therapeutic option for unresectable or relapsed NETs. PRRT using 177Lu-DOTATATE was approved by the US Food and Drug Administration (FDA) in 2018. Among "Responder" patients after PRRT, the NETest score significantly decreased from 61% to 29%, while "Non-responders" showed unchanged or increasing scores [81].
NETest is also suitable for the evaluation of disease progression and prognosis. In total, 87% of patients diagnosed with SD by RECIST 1.1 had a low NETest score (≤40%), whereas 81% of patients with PD showed a high NETest score (≥80%). Comparison of the three classes of NETest scores (low: <40%; intermediate: 41-79%; and high biological activity: 80-100%) indicates shortened progression-free survival in the intermediate- and high-biological-activity groups [80]. NETest reflects disease activity, and a high score indicates a poor response to drug therapies or PRRT. The multianalyte biomarker NETest has multiple uses: not only the diagnosis of GEP-NENs but also the assessment of disease activity and therapy effectiveness and follow-up after tumor resection. NETest can detect disease progression 5-24 months before imaging changes. Identification of MRD that cannot be detected by imaging studies should lead to earlier therapeutic intervention in GEP-NENs [76,78]. Follow-up of GEP-NENs requires frequent endoscopy with biopsy and/or CT scanning, which causes physical pain and radiation exposure and is costly. In the US, a follow-up strategy using NETest resulted in a 42% cost saving [82]. NETest would therefore also be effective in reducing these burdens on patients.
GEP-NENs are highly heterogeneous diseases, which complicates their diagnosis or evaluation of progression. Although NETest shows highly sensitive and specific results as presented above, a comprehensive genetic analysis of GEP-NENs is needed for more accurate diagnosis and early therapeutic intervention in the future.
Conclusions
There has been a rapid increase in the number of clinically identified GEP-NENs in the last few decades. Given the different distribution of GEP-NENs among races, there might be biological differences based on genetic background; hence, evidence from Asian populations is required. Recently, next-generation sequencing has provided new insights into the genetic and epigenetic landscape of a subset of GEP-NENs [5]. Various therapeutic options are currently available for treating GEP-NENs. Although surgery is the first choice for resectable GEP-NENs, drug therapies, such as somatostatin analogs, molecular-targeted drugs, and cytotoxic agents, play a key role in the treatment of unresectable or relapsed GEP-NENs [83]. With regard to molecular-targeted drugs, sunitinib is available for pancreatic NETs, whereas everolimus is used for all types of NETs. Recently, Japan approved the agent used for PRRT (177Lu-DOTATATE), which had already been approved by the FDA and has been broadly used for the treatment of GEP-NETs in Europe and the US [72,83]. The efficacy of immune checkpoint inhibitors for GEP-NENs remains controversial; meanwhile, several clinical trials are ongoing [83,84]. Although these novel and personalized therapeutic options are expected to improve the prognosis of patients with GEP-NENs, their application in clinical settings is still limited. To fill this gap, the development of optimized diagnostic modules and therapies is underway. For instance, continuous molecular monitoring via liquid biopsy might serve as a predictive tool for tailoring a personalized diagnostic and treatment strategy that improves patient outcomes. Enrolling patients with these uncommon neoplasms into novel personalized clinical trials will require an international and transdisciplinary endeavor.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers14051119/s1, Table S1: Candidate somatic mutations in gastroenteropancreatic neuroendocrine neoplasms in the GENIE cohort.

Institutional Review Board Statement: Ethical review and approval were waived for this review due to its descriptive nature and the exclusive use of published data.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available in the present manuscript or in the Supplementary Materials.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-02-26T00:02:40.671Z | 2022-02-22T00:00:00.000 | {
"year": 2022,
"sha1": "2c2a29fda5f64bff225b8b8c3e3e387787d5bdff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/5/1119/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97bfdc4e9d4c77219cde994ec9439290dbff72ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |