Green Logistics: A System of Methods and Instruments – Part 2 / Zelena logistika: sustav metoda i instrumenata – 2
DOI 10.17818/NM/2018/1.7
UDK 502/504:656
Review / Pregledni rad
Paper accepted / Rukopis primljen: 31. 8. 2017.

Aleksandr Rakhmangulov, Department of Logistics and Transportation Systems Management, Nosov Magnitogorsk State Technical University, Russia, e-mail: ran@magtu.ru
Aleksander Sladkowski, Department of Logistics and Industrial Transportation, Silesian University of Technology, Poland, e-mail: aleksander.sladkowski@polsl.pl
Nikita Osintsev, Department of Logistics and Transportation Systems Management, Nosov Magnitogorsk State Technical University, Russia, e-mail: osintsev@magtu.ru
LITERATURE REVIEW / Pregled literature
A review and analysis of publications and of the results of current scientific studies in the field of sustainable development [1]-[3], green logistics [3], [4] and the integration of the environmental factor into the practice of logistics management [5]-[7] show that:
- the concepts and terminology of green logistics and green supply chain management have been developed, approaches and principles of sustainable development have been formulated, and a system of indicators for assessing this activity and a legal framework for its implementation have been created;
- environmental conscience and skills of ecological behaviour are being actively formed in business and private life; training and the development of competencies for sustainable development are being implemented;
- various environmental programs and projects are carried out with the support of public and state institutions, business structures, research institutions and international associations.
However, generally accepted principles of green logistics have not yet been formulated, and there is no unified system of methods and tools for implementing them. Many researchers have noted the problem of implementing green principles in practice, because there is a contradiction between logistics principles aimed at maximising profits and achieving economic growth, and activities related to reducing the harmful impact on the environment [8], [9].
A review of existing and prospective instruments of green logistics [10] combined these instruments into four groups:
- economic instruments aimed at minimising transport costs, for example through the use of cheaper and environmentally friendly modes of transport, the optimisation of rolling stock loading, the optimisation of shipment sizes, and the selection of efficient routes and transportation schemes;
- legal instruments, i.e. regulatory limits established in advance and adopted in the prescribed manner;
- instruments of social policy, based on the complex application of economic and legal instruments with the aim of creating and operating transport infrastructure in accordance with social and environmental requirements, for example through the implementation of intelligent transport systems and the rational organisation of passenger transportation;
- information and analytical tools providing information support for the application of the other instruments of green logistics, including, for example, scientific studies, training, dissemination of best practices of environmental education and education for sustainable development, benchmarking, consulting, the use of carbon calculators and eco-labelling.
The authors reviewed the methods of green logistics with regard to business and included among them: management of the transport system (combined transport, 3PL logistics), packaging management (to reduce the impact of packaging materials on the environment), the organisation of green communications and production, warehouse management and waste management [11]. The matrix of green logistics methods presented in [12] is systematised by the levels of transportation management, warehousing and the provision of additional services.
The ways to reduce the harmful impact of logistics companies, outlined in study [13], are systematised in three directions: technical, operational (operating) and logistical. The authors classified ten such ways, according to their complexity and efficiency, as priority actions for the sustainability of logistics systems [14].
Studies [15], [16] analyse the logistics operations of green supply chain management (designing, planning and controlling the objects of the logistics infrastructure, as well as the processes of delivery and storage of products) from the perspective of strategic, tactical and operational management.
Thus, the analysis of scientific studies in the field of sustainable development leads to the conclusion that there is a wide variety of approaches to, and views on, the content of the methods and instruments of green logistics, which results in weak consistency in their implementation. In the practice of logistics companies, this reduces the efficiency of these methods and instruments when applied separately and does not contribute to the planned reduction of the harmful impact of transport on the environment while the economic efficiency of supply chain operation increases.
SYSTEM OF GREEN LOGISTICS METHODS AND INSTRUMENTS / Sustav metoda i instrumenata zelene logistike
The authors have systematised the methods and instruments of green logistics by applying structural-functional [17], [18] and systemic [19] approaches to the description of the logistics and transport system. These approaches are based on the selection of the fundamental (basic) functions of the elements of logistics systems.
According to this approach, the following elements of logistics systems were identified (Fig. 1):
- the input element, implementing the basic function of the material flow's entry into the logistics system and providing purchasing and the supply of the logistics system with raw materials, materials or services;
- the cumulative element, providing the function of managing the speed of material flows as a result of their braking, accumulation and storage;
- the transport element, implementing the basic function of expediting and braking material flows;
- the processing element, providing the function of changing the qualitative properties of material flows, i.e. their transformation from raw materials into finished products;
- the output element, ensuring the removal of the material flow from the logistics system and the sales and distribution of finished products and services;
- the management element, providing information and financial relationships between the elements of the logistics system, monitoring the implementation of its functions and operations, and regulating the promotion of information and financial flows in the logistics system.
The structural-functional approach used by the authors to systematise the well-known methods of green logistics is fundamentally different from the standard way of selecting functional areas of logistics: transport logistics, distribution logistics, industrial logistics, supply logistics and warehouse logistics [24]. The disadvantage of this functional approach is the «linking» of logistics functions and operations to the infrastructure elements of supply chains: warehouses, industrial enterprises, and supply, sales and transport departments. Moreover, when the functional approach is used to systematise logistics methods, a situation arises in which the same method of managing logistics flows is implemented in different functional areas of logistics.
This is one of the leading causes of the non-harmonised application of the methods and instruments of green logistics, where the same methods and instruments are applied on different methodical bases and are supported by different normative-legal documents that sometimes conflict with each other. A typical example is the selection of a separate functional area in green logistics, the so-called «reverse logistics». In our view, this choice is excessive, since the object of reverse logistics management is a material flow consisting of waste products, packaging and secondary raw materials, which differs from the main material flow only in direction: it moves counter to the main one. Green methods of reverse flow management are implemented by the same logistics elements whose management object is the material flow.
It is quite evident that means of transport are one of the main components of any logistics system. Therefore, when we talk about green logistics, we should understand that it must be based on environmentally friendly modes of transport. Bicycle transport fully meets the principles of green logistics and should be used as much as possible, especially in urban conditions [20]; however, it is not suited to the delivery of heavy and bulky cargo. A good solution is the use of inland waterways [21], yet the application of this mode of transport also has limitations. A radical environmental solution for urban transport is the widespread use of electric or solar-powered vehicles for the delivery of goods [22], [23]. However, this solution belongs to the not-so-distant future. At present, the use of gaseous fuels (compressed or liquefied gas) is of great importance. Additives for traditional engines, for example the use of hydrogen, can also now be considered [25]. These solutions can help ensure that modern transport meets the principles of green logistics.
Consequently, one of the leading advantages of the structural-functional approach to the systematisation of green logistics methods is the possibility of grouping all well-known green methods by two main criteria. The first criterion is membership of the logistics element realising one of the fundamental logistics functions; the second is whether the method acts on one of the logistics elements, on the material flow and flow of services, or on the information and financial flows. The described systematisation approach makes it possible not only to identify cases of the duplication of green methods at different stages of the logistics process, but also to determine missing prospective methods and tools that are successfully applied in traditional logistics but are not considered green methods because of a misunderstanding of the sources of their environmental effect. Table 1 presents the results of the systematisation of the methods and instruments of green logistics according to the structural-functional approach. The formulation of the methods and instruments in the table is similar to the formulation of traditional logistics methods; in terms of green logistics, however, they must be considered as methods and instruments for achieving the sustainable development goals. For example, the instrument «analysis of the suppliers' market», which is generally used for selecting optimal suppliers according to the criterion «quality/price» (goal 8), must in green logistics also take into account the requirements of the rational use of water (goals 6 and 14) and forest resources (goal 15). Moreover, this instrument forms mutually beneficial logistics networks with suppliers of raw materials, engaging them in the process of implementing green logistics methods (goal 17).
THE RESULTS OF THE ANALYSES ON SYSTEMATIZATION OF GREEN LOGISTICS METHODS AND INSTRUMENTS / Rezultati analiza sistematizacije metoda i instrumenata zelene logistike
The analysis of the frequency of use of the methods and instruments of green logistics for achieving the goals of sustainable development in the elements of the logistics system allows the following conclusions. The implementation of the identified 27 methods and 104 instruments of green logistics achieves thirteen of the seventeen goals of sustainable development. The highest number of instruments (21, with a goal-achievement frequency of 164) is implemented by the management element of the logistics system, while the smallest number (13 instruments, with a goal-achievement frequency of 71) is implemented by the input element. The numbers of instruments and the frequencies of their use in the other logistics elements are quite similar (17-18 instruments, with goal-achievement frequencies in the range of 94 to 108).
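The frequency analysis above can be tallied programmatically. The following Python sketch ranks the logistics elements by goal-achievement frequency; the counts for the management and input elements are taken from the text, while the element names and the two mid-range entries are illustrative assumptions consistent with the reported 17-18 instrument / 94-108 frequency range.

```python
# Illustrative sketch of the frequency analysis described above.
# Only the management and input figures are stated explicitly in the text;
# the other two entries are assumed values within the reported range.

element_stats = {
    "input":      {"instruments": 13, "goal_frequency": 71},
    "management": {"instruments": 21, "goal_frequency": 164},
    "cumulative": {"instruments": 17, "goal_frequency": 94},   # assumed
    "transport":  {"instruments": 18, "goal_frequency": 108},  # assumed
}

def rank_elements(stats):
    """Order logistics elements by how often their instruments
    contribute to achieving sustainable development goals."""
    return sorted(stats, key=lambda e: stats[e]["goal_frequency"], reverse=True)

print(rank_elements(element_stats))  # management first, input last
```

Such a tally makes it easy to spot the imbalance noted below: the boundary (input) element is the least covered by existing green instruments.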
It should be pointed out that the input logistics element, together with the output element, is a boundary element of the logistics system, providing the connection between the system and its external environment. This element also determines the properties of the material flow in the system and ultimately defines the impact of this flow on the ability of the other logistics elements to achieve the goals of sustainable development. Therefore, in our opinion, intensive research is needed to search for and develop new methods and instruments of green logistics specific to the input logistics element. The largest number of instruments is applied to achieving goals No. 8 (decent work and economic growth), No. 9 (industry, innovation and infrastructure) and No. 13 (climate action).
These goals coincide with the traditional economic and infrastructure goals of logistics, while goal No. 13 corresponds to the current normative-legal restrictions and requirements in the field of ecology that must be observed by companies operating in the market of logistics services. The instruments of green logistics are little used for achieving goals No. 3 (good health and well-being), No. 4 (quality education) and No. 16 (peace, justice and strong institutions), owing to the indirect impact of these instruments on goals that are the priority of areas such as health, education and law. The instruments of green logistics have no direct impact on achieving goals No. 1 (no poverty), No. 2 (zero hunger), No. 5 (gender equality) and No. 10 (reduced inequalities); the main reason is that solving these problems relates to global and national priorities at the state level. The authors did not identify logistics methods and instruments ensuring the direct achievement of these goals. Additional research is necessary to establish the impact of the instruments of green logistics on such common goals, as well as to develop appropriate new instruments and methods.
CONCLUSION / Zaključak
The paper has presented a new approach to achieving the goals of sustainable development in the operation of logistics and transport systems through an originally developed system of methods and instruments of green logistics. The structural-functional and systemic approaches, involving the allocation of the basic functions of the elements of logistics systems, are applied to the systematisation of the methods. The grouping of instruments is carried out according to the purpose of each green logistics method and taking into account the functions of passing and processing logistics flows.
The application of the proposed approach could be used to form balanced programs for improving the sustainability and efficiency of supply chain operation. The systematic implementation of the methods and instruments of green logistics will ensure the achievement of the goals of sustainable development. Moreover, the developed system of methods could be used to assess the compatibility of green supply chains and their elements with the principles of sustainable development and to identify gaps in the recommended methods. In the authors' opinion, the further development of the approach presented in the paper lies in developing a mathematical apparatus allowing the global optimisation of the parameters of logistics flows, with the aim of ensuring the sustainable development of supply chains through the coordinated selection and realisation of the methods and instruments of green logistics.

Table 2 The analysis of the frequency of use of the methods and instruments of green logistics for achieving the goals of sustainable development in the elements of the logistics system / Tablica 2. Analiza korištenja učestalosti metoda i instrumenata zelene logistike da bi se postigli ciljevi održivoga razvoja u elementima sustava logistike
"year": 2018,
"sha1": "6aa240b3e2f0d53bccb4c36baa3567a7e1f06b3b",
"oa_license": "CCBY",
"oa_url": "http://www.nasemore.com/wp-content/uploads/2018/04/7_Rakhmangulov_Sladkowski_Osintsev_Muravev.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6aa240b3e2f0d53bccb4c36baa3567a7e1f06b3b",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
} |
Roles of the NH2-terminal Domains of Cardiac Ryanodine Receptor in Ca2+ Release Activation and Termination*
Background: The NH2-terminal region of cardiac ryanodine receptor (RyR2) contains three domains (A, B, and C) that harbor many disease-causing mutations. Results: Domains A, B, and C distinctively regulate the activation and termination of Ca2+ release. Conclusion: Individual NH2-terminal domains play distinct roles in RyR2 channel function. Significance: These data shed new insights into the actions of RyR2 NH2-terminal disease mutations. The NH2-terminal region (residues 1–543) of the cardiac ryanodine receptor (RyR2) harbors a large number of mutations associated with cardiac arrhythmias and cardiomyopathies. Functional studies have revealed that the NH2-terminal region is involved in the activation and termination of Ca2+ release. The three-dimensional structure of the NH2-terminal region has recently been solved. It is composed of three domains (A, B, and C). However, the roles of these individual domains in Ca2+ release activation and termination are largely unknown. To understand the functional significance of each of these NH2-terminal domains, we systematically deleted these domains and assessed their impact on caffeine- or Ca2+-induced Ca2+ release and store overload-induced Ca2+ release (SOICR) in HEK293 cells. We found that all deletion mutants were capable of forming caffeine- and ryanodine-sensitive functional channels, indicating that the NH2-terminal region is not essential for channel gating. Ca2+ release measurements revealed that deleting domain A markedly reduced the threshold for SOICR termination but had no effect on caffeine or Ca2+ activation or the threshold for SOICR activation, whereas deleting domain B substantially enhanced caffeine and Ca2+ activation and lowered the threshold for SOICR activation and termination. Conversely, deleting domain C suppressed caffeine activation, abolished Ca2+ activation and SOICR, and diminished protein expression. 
These results suggest that domain A is involved in channel termination, domain B is involved in channel suppression, and domain C is critical for channel activation and expression. Our data shed new insights into the structure-function relationship of the NH2-terminal domains of RyR2 and the action of NH2-terminal disease mutations.
The cardiac ryanodine receptor (RyR2) is an essential player in excitation-contraction coupling in the heart. It governs the release of Ca2+ from the sarcoplasmic reticulum that drives muscle contraction (1, 2). This RyR2-mediated sarcoplasmic reticulum Ca2+ release also plays a critical role in the control of heart rhythm (1, 2). Consistent with its fundamental role in cardiac function, naturally occurring mutations in RyR2 are associated with cardiac arrhythmias and cardiomyopathies (2-5). Interestingly, most of the disease-associated RyR2 mutations are clustered in three hot spots in the linear sequence of the channel: the NH2-terminal, central, and COOH-terminal regions (5, 6). Although the functional impact of disease-linked RyR2 mutations has been extensively studied, the molecular basis of the actions of these disease mutations is largely unknown. This is in part due to the lack of understanding of the structure-function relationship in the RyR2 channel.
The recently solved crystal structures of the NH2-terminal region of RyR have provided novel insights into the structural basis of disease mechanisms associated with the NH2-terminal mutations (7-14). The three-dimensional structure of the NH2-terminal region of RyR contains three domains: domain A (residues 1-217), domain B (residues 218-409), and domain C (residues 410-543) (9). This NH2-terminal region harbors more than 50 disease mutations. Interestingly, almost all of the disease-causing mutations in this region are located at domain interfaces (9). Docking the NH2-terminal structure into low resolution cryoelectron maps of the RyR1 structure places these NH2-terminal domains at the top of the cytoplasmic assembly, forming a ring structure around the 4-fold axis of the RyR channel (9). This central ring structure is connected to the channel pore-forming domain via inner branches (15). Furthermore, this central region has been shown to undergo large conformational changes upon channel activation (15). Based on these observations, it has been hypothesized that disease mutations in the NH2-terminal region destabilize domain interfaces, which in turn alters conformational changes in the NH2-terminal region that are important for channel gating (7, 9-12, 14). Consistent with this hypothesis, NH2-terminal disease mutations have been shown to enhance the activation of the RyR2 channel (16-19). We have recently shown that a naturally occurring deletion of exon 3, corresponding to residues Asn57-Gly91 within domain A in the NH2-terminal region, markedly reduces the threshold at which Ca2+ release terminates (18). However, it is unclear how mutations in the NH2-terminal region of RyR2 alter the activation and/or termination of Ca2+ release.
The structure of the NH2-terminal region of RyR is remarkably similar to that of the inositol 1,4,5-trisphosphate receptor (IP3R) despite considerable differences in their amino acid sequences (9, 20). The IP3R NH2-terminal region is also composed of three domains: the suppressor domain (SD) (residues 1-223), the IP3-binding core β domain (IBC-β) (residues 224-436), and IBC-α (residues 437-604), corresponding to domains A, B, and C of RyR, respectively (20-23). Functional studies revealed that domains IBC-α and IBC-β form the IP3-binding pocket, whereas the SD inhibits IP3 binding (20-22, 24, 25). Given the structural similarities between the NH2-terminal domains of IP3R and RyR, it is possible that the individual NH2-terminal domains of RyR2 may also play distinct roles in channel function. To test this possibility, in the present study, we deleted individual NH2-terminal domains of RyR2 and assessed the impact of these deletions on the activation and termination of Ca2+ release. We found that deletion of domain A markedly delayed the termination of Ca2+ release, whereas deletion of domain B significantly enhanced the activation of Ca2+ release. Deletion of domain C drastically reduced the expression of the channel protein. Our data suggest that the individual NH2-terminal domains of RyR2 play distinct roles in channel function.
Construction of NH2-terminal Deletion Mutants of RyR2
The NH 2 -terminal deletions in mouse RyR2 were generated by the overlap extension method using PCR (26,27). Briefly, an NheI/ClaI fragment containing deletion-B (Del-B), Del-C, Del-AB, or Del-ABC was obtained by overlapping PCR and used to replace the corresponding wild type (WT) fragment in the full-length RyR2 cDNA in pcDNA3, which was then subcloned into pcDNA5. An NheI/AflII fragment containing Del-A was obtained by overlapping PCR and was used to replace the corresponding WT fragment. The sequences of all deletions were confirmed by DNA sequencing.
Generation of Stable, Inducible Cell Lines Expressing WT and Deletion Mutants of RyR2
Stable, inducible HEK293 cell lines expressing RyR2 Del-A, Del-B, Del-C, Del-AB, and Del-ABC were generated using the Flp-In T-REx Core kit from Invitrogen. Briefly, Flp-In T-REx HEK293 cells were co-transfected with the inducible expression vector pcDNA5/FRT (flippase recognition target)/TO containing the mutant cDNAs and the pOG44 vector encoding the Flp recombinase in 1:5 ratios using the calcium phosphate precipitation method. The transfected cells were washed with phosphate-buffered saline (PBS; 137 mM NaCl, 8 mM Na2HPO4, 1.5 mM KH2PO4, and 2.7 mM KCl, pH 7.4) 24 h after transfection, followed by a change into fresh medium for 24 h. The cells were then washed again with PBS, harvested, and plated on new dishes. After the cells had attached (~4 h), the growth medium was replaced with a selection medium containing 200 μg/ml hygromycin (Invitrogen). The selection medium was changed every 3-4 days until the desired number of cells had grown. The hygromycin-resistant cells were pooled, aliquoted (1 ml), and stored at −80°C. These positive cells are believed to be isogenic because the integration of the RyR2 cDNA is mediated by the Flp recombinase at a single FRT site.
Caffeine-induced Ca2+ Release in HEK293 Cells
The free cytosolic Ca2+ concentration in transfected HEK293 cells was measured using the fluorescent Ca2+ indicator dye Fluo-3 AM (Molecular Probes). HEK293 cells grown on 100-mm tissue culture dishes for 18-20 h after subculture were transfected with 12-16 μg of WT or deletion mutant RyR2 cDNAs. Cells grown for 18-20 h after transfection were washed four times with PBS and incubated in Krebs-Ringer-Hepes (KRH) buffer 1 (125 mM NaCl, 5 mM KCl, 1.2 mM KH2PO4, 6 mM glucose, and 25 mM HEPES, pH 7.4 with NaOH) without MgCl2 and CaCl2 at room temperature for 40 min and at 37°C for 40 min. After being detached from culture dishes by pipetting, cells were collected by centrifugation at 1,000 rpm for 2 min in a Beckman TH-4 rotor. Cell pellets were loaded with 10 μM Fluo-3 AM in high-glucose Dulbecco's modified Eagle's medium at room temperature for 60 min, washed three times with KRH buffer 1 plus 2 mM CaCl2 and 1.2 mM MgCl2 (KRH+ buffer), and resuspended in 150 μl of KRH+ buffer plus 0.1 mg/ml BSA and 250 μM sulfinpyrazone. The Fluo-3 AM-loaded cells were added to 2 ml (final volume) of KRH+ buffer in a cuvette. The fluorescence intensity of Fluo-3 at 530 nm was measured before and after repeated or single additions of various concentrations of caffeine (0.025-5 mM) in an SLM-Aminco series 2 luminescence spectrometer with 480-nm excitation at 25°C (SLM Instruments). For ryanodine sensitivity studies, the RyR2 WT or mutant channels were first sensitized by a relatively low concentration of caffeine (0.1 or 0.25 mM). The caffeine-sensitized channels were then treated with ryanodine (25 μM). The ryanodine-treated channels were further activated by multiple additions of a relatively high concentration of caffeine (1 mM). The peak level of each caffeine-induced Ca2+ release was determined and normalized to the highest level (100%) of caffeine-induced Ca2+ release for each experiment.
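The peak normalisation described above can be sketched as follows. This is an illustrative Python snippet, not the authors' actual analysis code, and the peak amplitudes are made-up values.

```python
def normalize_peaks(peaks):
    """Normalize each caffeine-induced Ca2+ release peak to the
    largest peak in the experiment (set to 100%), as described above."""
    top = max(peaks)
    return [100.0 * p / top for p in peaks]

# Hypothetical fluorescence peak amplitudes (arbitrary units)
peaks = [12.0, 30.0, 60.0, 48.0]
print(normalize_peaks(peaks))  # [20.0, 50.0, 100.0, 80.0]
```

Normalising within each experiment removes day-to-day variability in dye loading and cell number, so responses can be compared across constructs.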
Single Cell Ca2+ Imaging
Cytosolic Ca2+ Measurements: Cytosolic Ca2+ levels in stable, inducible HEK293 cells expressing RyR2 WT or mutants were monitored using single cell Ca2+ imaging and the fluorescent Ca2+ indicator dye Fura-2 AM as described previously (16, 28). Briefly, cells grown on glass coverslips for 8-18 h after induction (as indicated) by 1 μg/ml tetracycline (Sigma) were loaded with 5 μM Fura-2 AM in KRH buffer 2 (125 mM NaCl, 5 mM KCl, 6 mM glucose, 1.2 mM MgCl2, and 25 mM HEPES, pH 7.4 with NaOH) plus 0.02% Pluronic F-127 and 0.1 mg/ml BSA for 20 min at room temperature (23°C). The coverslips were then mounted in a perfusion chamber (Warner Instruments) on an inverted microscope (Nikon TE2000-S). The cells were perfused continuously with KRH buffer 2 containing increasing extracellular Ca2+ concentrations (0, 0.1, 0.2, 0.3, 0.5, 1.0, and 2.0 mM). Caffeine (10 mM) was applied at the end of each experiment to confirm the expression of active RyR2 channels. Time lapse images (0.25 frame/s) were captured and analyzed with Compix Simple PCI 6 software. Fluorescence intensities were measured from regions of interest centered on individual cells. Only cells that responded to caffeine were analyzed. The filters used for Fura-2 imaging were λex = 340 ± 26 nm and 387 ± 11 nm and λem = 510 ± 84 nm with a dichroic mirror (410 nm).
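The ratiometric Fura-2 readout from the two excitation wavelengths reduces to a simple per-cell ratio. The snippet below is an illustrative sketch with made-up intensity values, not the Simple PCI analysis pipeline; background subtraction is assumed to have been done upstream.

```python
def fura2_ratio(f340, f387):
    """Ratiometric Fura-2 signal: emission at 510 nm with 340 nm
    excitation divided by emission with 387 nm excitation.
    A rising ratio indicates rising cytosolic Ca2+."""
    if f387 <= 0:
        raise ValueError("background-subtracted 387 nm signal must be positive")
    return f340 / f387

# Hypothetical region-of-interest intensities before and after a Ca2+ rise
print(fura2_ratio(200.0, 400.0))  # 0.5 (resting)
print(fura2_ratio(450.0, 300.0))  # 1.5 (elevated Ca2+)
```

Taking the ratio of the two excitation channels cancels cell-to-cell differences in dye loading and path length, which is why Fura-2 is preferred for comparing absolute cytosolic Ca2+ levels across cells.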
Luminal Ca2+ Measurements: Luminal Ca2+ levels in HEK293 cells expressing RyR2 WT or mutants were measured using single cell Ca2+ imaging and the fluorescence resonance energy transfer (FRET)-based endoplasmic reticulum (ER) luminal Ca2+-sensitive cameleon protein D1ER as described previously (29, 30). The cells were grown to 95% confluence in a 75-cm² flask, passaged with PBS, and plated in 100-mm-diameter tissue culture dishes at ~10% confluence 18-20 h before transfection with D1ER cDNA using the calcium phosphate precipitation method. After transfection for 24 h, the growth medium was changed to an induction medium containing 1 μg/ml tetracycline. In intact cell studies, after induction for ~22 h, the cells were perfused continuously at room temperature (23°C) with KRH buffer 2 containing various concentrations of CaCl2 (0, 1, and 2 mM) and tetracaine (1 mM) for estimating the store capacity, or caffeine (20 mM) for estimating the minimum store level by depleting the ER Ca2+ stores. In permeabilized cell studies, the cells were first permeabilized with 50 μg/ml saponin (31) in incomplete intracellular-like medium (125 mM KCl, 19 mM NaCl, and 10 mM HEPES, pH 7.4 with KOH) at room temperature (23°C) for 3-4 min. The cells were then switched to complete intracellular-like medium (incomplete intracellular-like medium plus 2 mM ATP, 2 mM MgCl2, 0.05 mM EGTA, and 100 nM free Ca2+, pH 7.4 with KOH) for 5-6 min to remove the saponin. The permeabilized cells were then perfused with various concentrations of Ca2+ (0.1, 0.2, 0.4, 1, and 10 μM) followed by tetracaine (1 mM) for estimating the store capacity and caffeine (10 mM) for estimating the minimum store level by depleting the ER Ca2+ stores. Images were captured with Compix Simple PCI 6 software every 2 s using an inverted microscope (Nikon TE2000-S) equipped with an S-Fluor 20×/0.75 objective.
The filters used for D1ER imaging were λex = 436 ± 20 nm for CFP, λex = 500 ± 20 nm for YFP, λem = 465 ± 30 nm for CFP, and λem = 535 ± 30 nm for YFP with a dichroic mirror (500 nm). The amount of FRET was determined from the ratio of the light emission at 535 and 465 nm.
Western Blotting
HEK293 cell lines grown for certain periods of time after induction were washed with PBS plus 2.5 mM EDTA and harvested in the same solution by centrifugation for 8 min at 700 × g in an IEC Centra-CL2 centrifuge. The cells were then washed with PBS without EDTA and centrifuged again at 700 × g for 8 min. The PBS-washed cells were solubilized in a lysis buffer containing 25 mM Tris, 50 mM HEPES, pH 7.4, 137 mM NaCl, 1% CHAPS, 0.5% soy bean phosphatidylcholine, 2.5 mM DTT, and a protease inhibitor mixture (1 mM benzamidine, 2 µg/ml leupeptin, 2 µg/ml pepstatin A, 2 µg/ml aprotinin, and 0.5 mM PMSF). This mixture was incubated on ice for 1 h. Cell lysate was obtained by centrifuging twice at 16,000 × g in a microcentrifuge at 4°C for 30 min to remove unsolubilized materials. The RyR2 WT and mutant proteins were subjected to SDS-PAGE (6% gel) (32) and transferred onto nitrocellulose membranes at 90 V for 1.5 h at 4°C in the presence of 0.01% SDS (33). The nitrocellulose membranes containing the transferred proteins were blocked for 30 min with PBS containing 0.5% Tween 20 and 5% (w/v) nonfat dried skimmed milk powder. The blocked membrane was incubated with the anti-RyR antibody (34C) (1:1,000 dilution) and then incubated with the secondary anti-mouse IgG (heavy and light) antibodies conjugated to horseradish peroxidase (1:20,000 dilution). After washing for 5 min three times, the bound antibodies were detected using an enhanced chemiluminescence kit from Pierce. The intensity of each band was determined from its intensity profile obtained using ImageQuant LAS 4000 (GE Healthcare), analyzed using ImageJ software, and normalized to that of β-actin.
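The band quantification described above (each RyR2 band intensity normalized to its lane's β-actin) can be sketched as follows; the function, the WT-first lane ordering, and the rounding are illustrative assumptions, not the authors' actual analysis script:

```python
def relative_expression(ryr2_bands, actin_bands):
    # Normalize each lane's RyR2 band intensity to its beta-actin loading
    # control, then express everything relative to the first (WT) lane.
    ratios = [r / a for r, a in zip(ryr2_bands, actin_bands)]
    wt = ratios[0]
    return [round(x / wt, 3) for x in ratios]
```

With equal loading controls, a lane with half the WT band intensity comes out as 0.5 relative expression.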
Statistical Analysis
All values shown are mean ± S.E. unless indicated otherwise. To test for differences between two groups, we used unpaired Student's t tests (two-tailed). A p value <0.05 was considered to be statistically significant.
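The unpaired two-tailed Student's t test described above compares two group means against a pooled variance estimate. A minimal sketch of the test statistic and its degrees of freedom (the p-value lookup against a t distribution is omitted; the function name is illustrative):

```python
import math
from statistics import mean, variance

def unpaired_t(a, b):
    # Student's t statistic with pooled (equal-variance) estimate for two
    # independent samples, plus the degrees of freedom (n_a + n_b - 2).
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2
```

Identical samples give t = 0; shifting one group by a constant moves t away from zero while the degrees of freedom stay fixed.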
Construction and Expression of RyR2 NH₂-terminal Deletion Mutants-To understand the role of individual NH₂-terminal domains (A, B, and C) in RyR2 function, we used a deletion approach in which NH₂-terminal domain A (residues 1–217), B (residues 218–409), C (residues 410–543), AB (residues 1–409), or ABC (residues 1–543) was deleted in the full-length RyR2 (Fig. 1A). The boundary of each domain was selected based on the three-dimensional structure of the NH₂-terminal region (residues 1–543) of RyR (9, 13). These deletion constructs were generated by site-directed mutagenesis and transiently expressed in HEK293 cells. Immunoblotting analysis revealed that the expression level of Del-A was reduced, whereas the expression level of Del-B was increased compared with that of RyR2 WT. The expression levels of Del-AB and WT were comparable. Conversely, the expression level of Del-C or Del-ABC was markedly reduced compared with that of WT (Fig. 1, B and C). Thus, deletion of domain C considerably impaired the expression of the RyR2 protein.
Distinct Roles of NH₂-terminal Domains of RyR2
The NH₂-terminal Deletion Mutants of RyR2 Form Caffeine- and Ryanodine-sensitive Functional Ca²⁺ Release Channels-We first determined whether these NH₂-terminal deletion mutants are functional. HEK293 cells were transfected with RyR2 WT or Del-A, Del-B, Del-C, Del-AB, or Del-ABC mutants. The transfected HEK293 cells were loaded with the fluorescent Ca²⁺ indicator dye Fluo-3 AM, and the cytosolic Ca²⁺ level was monitored by using a luminescence spectrometer before and after the addition of caffeine or ryanodine. The RyR2 WT or mutant channels were first sensitized by a relatively low concentration of caffeine (0.1 or 0.25 mM). The caffeine-sensitized channels were then treated with ryanodine (25 µM). The ryanodine-treated channels were further activated by multiple additions of a relatively high concentration of caffeine (1 mM). As shown in Fig. 2, the ryanodine-untreated (−ryanodine) HEK293 cells expressing RyR2 WT responded to repeated stimulations by submaximal concentrations of caffeine, each resulting in partial Ca²⁺ release (Fig. 2A, top panel). In contrast, WT-expressing HEK293 cells treated with ryanodine (25 µM) (+ryanodine) only responded to the first subsequent stimulation by caffeine (Fig. 2A, bottom panel). It is known that ryanodine only binds to the open RyR channel and that the binding of ryanodine converts the channel to a mainly fully open state (34, 35). Thus, in the presence of ryanodine, the caffeine-activated channels would be modified by ryanodine into a fully activated state, leading to a depletion of the intracellular Ca²⁺ store. Therefore, subsequent additions of caffeine yielded little or no Ca²⁺ release in ryanodine-treated cells. However, in the absence of ryanodine, a submaximal concentration of caffeine induced only partial Ca²⁺ release, a phenomenon known as quantal Ca²⁺ release (36–38).
Importantly, similar to cells expressing RyR2 WT, HEK293 cells expressing Del-A, Del-B, Del-C, Del-AB, or Del-ABC all exhibited quantal Ca²⁺ release induced by submaximal concentrations of caffeine in the absence of ryanodine (Fig. 2, B–F, top panels). Ryanodine pretreatment rendered all these deletion mutant cells unresponsive to repeated caffeine stimulations (Fig. 2, B–F, bottom panels). These observations indicate that all these NH₂-terminal deletion mutants are able to form caffeine- and ryanodine-sensitive functional Ca²⁺ release channels. It is noted that there were immediate drops in the fluorescence level after additions of caffeine. This is due to fluorescence quenching by caffeine (39, 40).
Effect of NH₂-terminal Deletions on the Sensitivity of Caffeine Activation of RyR2-We next assessed whether NH₂-terminal deletions affect the sensitivity of the RyR2 channel to caffeine activation. To this end, we determined the response of each of these deletion mutants to activation by increasing concentrations of caffeine. As shown in Fig. 3, the level of Ca²⁺ release in HEK293 cells transfected with RyR2 WT increased progressively with each consecutive addition of caffeine (from 0.05 to 1.0 mM) and then decreased with further additions of caffeine (2.5 and 5 mM), likely due to the depletion of the intracellular Ca²⁺ stores by the prior additions of caffeine (0.025–1.0 mM) (Fig. 3A). The response to caffeine activation of HEK293 cells transfected with Del-A was similar to that of WT-expressing cells (Fig. 3, B and G). Conversely, Del-B caused a significant leftward shift in caffeine response (Fig. 3, C and G), whereas Del-C (Fig. 3D) and Del-ABC (Fig. 3F) resulted in a significant rightward shift (Fig. 3G). Del-AB slightly inhibited the caffeine response (Fig. 3, E and G). Collectively, these data indicate that Del-A has no significant effect on the activation of RyR2 by caffeine and Del-B enhances it, whereas Del-C reduces it.

NH₂-terminal Deletions of RyR2 Alter the Propensity for SOICR-Disease-causing mutations in the NH₂-terminal region of RyR2 have been shown to increase the propensity for arrhythmogenic spontaneous Ca²⁺ release during store Ca²⁺ overload, a process also known as store overload-induced Ca²⁺ release (SOICR). It is of interest to assess whether deletion of individual NH₂-terminal domains of RyR2 alters the propensity for SOICR. To this end, we generated stable, inducible HEK293 cell lines expressing the RyR2 WT and Del-A, Del-B, Del-C, Del-AB, and Del-ABC mutants. These HEK293 cells were perfused with elevating extracellular Ca²⁺ (0–2.0 mM) to induce spontaneous Ca²⁺ oscillations as described previously (16, 28).
The resultant SOICR was then monitored by using a fluorescent Ca²⁺ indicator, Fura-2 AM, and single cell Ca²⁺ imaging. As shown in Fig. 4, HEK293 cells expressing the Del-A (Fig. 4B) and Del-AB (Fig. 4E) mutants exhibited a similar fraction of cells that displayed spontaneous Ca²⁺ oscillations as compared with WT cells (Fig. 4, G and H). In contrast, the Del-B (Fig. 4C) mutant-expressing cells exhibited an increased fraction of oscillating cells (p < 0.01) as compared with WT (Fig. 4G). Conversely, HEK293 cells expressing Del-C (Fig. 4D) and Del-ABC (Fig. 4F) showed a caffeine response but no SOICR at all (Fig. 4H). [Fig. 2 legend: Arrows indicate the presence of Ca²⁺ release in ryanodine-untreated cells and the absence of Ca²⁺ release in ryanodine-treated cells. Note that the immediate drops in fluorescence after the addition of caffeine were due to fluorescence quenching by caffeine.] It is important to note that the enhanced SOICR
activity observed in Del-B-expressing HEK293 cells is unlikely to result from its increased expression level because enhanced SOICR activity was still observed in Del-B-expressing HEK293 cells when the expression of Del-B was reduced to a level less than that of WT (Fig. 5, A and C). Similarly, the lack of SOICR in Del-C or Del-ABC is unlikely due to the reduced expression level of these mutants as SOICR still occurred in WT-expressing HEK293 cells when the expression of the WT protein was reduced to a level similar to or less than that of Del-C or Del-ABC (Fig. 5, B and C). Thus, these results demonstrate that Del-A has no major impact on SOICR and Del-B enhances the propensity for SOICR, whereas Del-C abolishes SOICR.
Effect of NH₂-terminal Deletions on the SOICR Activation and Termination Thresholds-To assess the impact of NH₂-terminal deletions on the activation and termination threshold for SOICR, we monitored the ER luminal Ca²⁺ dynamics in HEK293 cells using a FRET-based ER luminal Ca²⁺-sensing protein, D1ER (29, 30). As shown in Fig. 6, elevating extracellular Ca²⁺ from 0 to 2 mM induced spontaneous ER Ca²⁺ oscillations in RyR2 WT-expressing HEK293 cells (depicted as downward deflections of the FRET signal). SOICR occurred when the ER luminal Ca²⁺ content increased to a threshold level (F_SOICR) and terminated when the ER luminal Ca²⁺ content fell to another threshold level (F_termi) (Fig. 6A). The ER luminal Ca²⁺ dynamics in Del-A-, Del-B-, and Del-AB-expressing cells during SOICR is shown in Fig. 6, B, C, and D. The Del-A and Del-AB mutations markedly reduced the SOICR termination threshold (34.7 ± 2.3% in Del-A and 38.0 ± 2.9% in Del-AB versus 59.4 ± 1.0% in WT) (p < 0.01) but had no significant effect on the SOICR activation threshold (93.2 ± 0.4% in Del-A and 92.6 ± 0.7% in Del-AB versus 93.1 ± 0.5% in WT). As a result, the fractional Ca²⁺ release during SOICR (activation threshold − termination threshold) in Del-A or Del-AB mutant cells (58.5 ± 2.5% in Del-A and 54.7 ± 3.6% in Del-AB) was significantly increased compared with that of the WT cells (33.7 ± 0.9%) (p < 0.01) (Fig. 6, E, F, and G). Conversely, the Del-B mutation substantially decreased the SOICR activation threshold (80.0 ± 1.0 versus 93.1 ± 0.5% in WT) (p < 0.01), which is in agreement with its increased SOICR propensity (Fig. 4). The Del-B mutation also significantly reduced the SOICR termination threshold (41.6 ± 1.4 versus 59.4 ± 1.0% in WT) (p < 0.01). The fractional Ca²⁺ release in Del-B mutant cells (38.4 ± 0.5%) was also significantly different from that of WT cells (33.7 ± 0.9%) (p < 0.01) (Fig. 6, E, F, and G).
It should be noted that there was no significant difference in the store capacity (F_max − F_min) between RyR2 WT and deletion mutant cells (Fig. 6H). Consistent with their lack of SOICR activity (Fig. 4), no ER luminal Ca²⁺ oscillations were observed in HEK293 cells expressing Del-C or Del-ABC (not shown). Furthermore, SOICR did not occur in control HEK293 cells expressing no RyR2, and SOICR was not affected by the IP₃R inhibitor xestospongin C (18), indicating that SOICR is mediated by RyR2. Collectively, these data indicate that deletion of domain A only affects the termination threshold for SOICR, whereas deletion of domain B alters both the SOICR activation and termination thresholds.
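The threshold arithmetic used above (fractional release = activation threshold − termination threshold; store capacity = F_max − F_min) can be made explicit. A small sketch, with the percentage scaling of a raw FRET level against the F_min/F_max window written out as an assumed normalization; the reported WT values (93.1% and 59.4%) serve as a consistency check:

```python
def relative_level(f, f_min, f_max):
    # Express a raw FRET level as a percentage of the store capacity
    # window (F_max - F_min); an assumed normalization for illustration.
    return 100.0 * (f - f_min) / (f_max - f_min)

def fractional_release(activation_pct, termination_pct):
    # Fractional Ca2+ release during SOICR, in percentage points.
    return activation_pct - termination_pct
```

Plugging in the reported WT thresholds, fractional_release(93.1, 59.4) reproduces the stated 33.7% fractional release.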
MARCH 20, 2015 • VOLUME 290 • NUMBER 12

Effect of NH₂-terminal Deletions on the Steady State ER Ca²⁺ Level-The steady state ER Ca²⁺ level reflects the equilibrium between ER Ca²⁺ release and Ca²⁺ uptake. As shown in Fig. 7, elevating cytosolic Ca²⁺ reduced the steady state ER Ca²⁺ level in permeabilized HEK293 cells expressing RyR2 WT in a concentration-dependent manner, most likely due to increased Ca²⁺ release as a result of enhanced cytosolic Ca²⁺ activation of the RyR2 channel (Fig. 7, A and G). HEK293 cells expressing Del-A showed a response to cytosolic Ca²⁺ similar to that seen with the WT cells (Fig. 7, B and G). Conversely, HEK293 cells expressing Del-B showed a very different response to cytosolic Ca²⁺ (Fig. 7, C and G). The steady state ER Ca²⁺ level at resting cytosolic Ca²⁺ (100 nM) in Del-B cells was markedly reduced as compared with that in WT cells (42.2 ± 0.03% in Del-B versus 73.3 ± 0.01% in WT) (p < 0.001). This suggests that Del-B may enhance cytosolic Ca²⁺ activation of RyR2. Increasing cytosolic Ca²⁺ from 100 to 200 nM reduced the steady state ER Ca²⁺ level in Del-B cells similarly to that seen in WT cells. However, different from that seen in WT cells, further elevation in cytosolic Ca²⁺ concentration to 400 nM, 1 µM, and 10 µM increased the steady state ER Ca²⁺ level in Del-B cells (Fig. 7, C and G). These observations suggest that Del-B may also enhance cytosolic Ca²⁺-dependent inactivation of RyR2. Cells expressing Del-AB exhibited a reduced steady state ER Ca²⁺ level at 200 nM cytosolic Ca²⁺ as compared with that in WT cells (Fig. 7, D and G), suggesting that Del-AB is able to sensitize RyR2 to cytosolic Ca²⁺ activation. However, Del-AB cells displayed increased steady state ER Ca²⁺ levels at 1 and 10 µM cytosolic Ca²⁺ as compared with those in WT cells (Fig. 7, D and G), suggesting that Del-AB may also sensitize RyR2 to cytosolic Ca²⁺-dependent inactivation.
The steady state ER Ca²⁺ level in HEK293 cells expressing Del-C or Del-ABC did not respond to increasing cytosolic Ca²⁺ concentrations (100 nM–10 µM) and was only slightly reduced upon caffeine addition (Fig. 7, E and F). These data indicate that Del-C and Del-ABC diminish the cytosolic Ca²⁺ response and impair caffeine activation of RyR2. Taken together, our results suggest that the NH₂-terminal domains play an important role in cytosolic Ca²⁺ activation and inactivation of RyR2.
DISCUSSION
The NH₂-terminal region of RyR2 is a hot spot of naturally occurring mutations associated with cardiac arrhythmias and cardiomyopathies (5, 6). We have recently shown that disease-causing RyR2 mutations in the NH₂-terminal region alter the activation and/or termination of Ca²⁺ release (18). However, how the NH₂-terminal region regulates the activation and termination of Ca²⁺ release and how mutations in this region impair these processes are unclear. The NH₂-terminal region of RyR2 encompasses three well defined domains: domain A (residues 1–217), domain B (residues 218–409), and domain C (residues 410–543) (9, 13). In the present study, we assessed the role of these individual domains in Ca²⁺ release activation and termination. Our data indicate that domain A is an important determinant of Ca²⁺ release termination, whereas domains B and C play a critical role in Ca²⁺ release activation. These results provide novel insights into the structure-function relationship of the NH₂-terminal domains of RyR2 and the understanding of disease mechanisms.
The NH₂-terminal domains (A, B, and C) of RyR have been mapped to the central region around the 4-fold symmetry axis of the channel. There are extensive domain-domain interactions in the NH₂-terminal region. Domains A and B, through intra- and intersubunit interactions, form a central ring structure at the top of the cytoplasmic assembly (9). This ring structure is connected to the transmembrane domain of the channel via some central electron-dense columns and to the peripheral "clamp" region via domain C (9, 15). To gain insights into the functional significance of these domain-domain interactions, we determined the role of each NH₂-terminal domain in channel function. We found that removing domain A (Del-A) markedly reduced the threshold for Ca²⁺ release termination, suggesting that domain A is involved in the termination of Ca²⁺ release. Hence, it is possible that mutations that alter interactions with domain A may affect Ca²⁺ release termination. We have recently shown that cardiomyopathy-associated RyR2 mutations A77V and R176Q and exon 3 deletion markedly reduce the termination threshold for Ca²⁺ release (9). Interestingly, these mutations are located in the domain interface between domain A and the central electron-dense columns (also known as interface 4) (9, 15), suggesting that interface 4 may be involved in Ca²⁺ release termination.
The intra- and intersubunit interactions between domains A and B are believed to be important for stabilizing the closed state of the channel. Disease mutations located in interfaces between domains A and B would weaken these interactions, thus facilitating channel opening (7–14). Del-A would be expected to remove both intra- and intersubunit interactions between domains A and B, leading to destabilization of the closed state and channel activation. Surprisingly, Del-A did not significantly affect channel activation. The sensitivity to activation by caffeine or Ca²⁺, the propensity for SOICR, and the SOICR activation threshold of the Del-A mutant were not significantly different from those of the WT. Conversely, deleting domain B (Del-B) significantly enhanced the sensitivity of RyR2 to caffeine, increased cytosolic Ca²⁺ activation and the propensity for SOICR, and reduced the threshold for SOICR activation. These observations suggest that disease mutations located in interfaces between domains A and B may enhance channel activity by affecting the function of domain B. It should be noted that Del-B also reduced the threshold for Ca²⁺ release termination, implying that domain B may also be involved in Ca²⁺ release termination directly or indirectly via interaction with domain A. Furthermore, Del-B also altered the cytosolic Ca²⁺-dependent inactivation of RyR2. Thus, domain B plays an important role in stabilizing the closed state of the RyR2 channel.
Del-A or Del-B resulted in gain of function either by delaying Ca²⁺ release termination or by sensitizing Ca²⁺ release activation. In contrast, deleting domain C (Del-C) suppressed caffeine activation of RyR2 and completely abolished cytosolic Ca²⁺ activation and SOICR. Furthermore, unlike Del-A or Del-B, Del-C drastically reduced the protein expression of RyR2. It should be noted that reducing the expression level of WT to a level similar to or less than that of Del-C did not abolish SOICR in WT-expressing cells. Thus, the lack of SOICR in Del-C-expressing cells is unlikely due solely to their reduced expression level. These observations suggest that domain C is required for channel activation and expression.
Docking the crystal structure of the NH₂-terminal domains of RyR1 into the open and closed states of the cryo-EM structure of RyR1 revealed that the opening of the channel is associated with large conformational changes in the NH₂-terminal domains (14, 15). These have been confirmed in recent FRET-based studies using conformational probes inserted into the NH₂-terminal domains (41). During the transition from the closed to the open state, the triangle-like structure formed by domains A, B, and C within the same subunit appears to be tilted upward and outward around a hinge located near domain C. As such, domains A and B rotated ~7–8 Å, whereas domain C rotated ~4 Å (14). Hence, part of domain C may act as a hinge and play an important structural role in mediating and controlling the movement of domains A and B during channel gating. Therefore, deleting domain C may affect the structure/folding of this region, which may contribute to the markedly reduced expression level of the Del-C or Del-ABC mutant protein.
We also determined the impact of deleting the first two NH₂-terminal domains (Del-AB) or all three domains (Del-ABC) on Ca²⁺ release. Del-AB substantially reduced the termination threshold for Ca²⁺ release, which is consistent with the impact of Del-A or Del-B on Ca²⁺ release termination. Del-AB also enhanced cytosolic Ca²⁺-dependent activation and inactivation of RyR2 similarly to Del-B. However, unlike Del-B, Del-AB did not significantly affect the activation of SOICR. One would expect that Del-AB would have the combined effect of Del-A and Del-B, but this is not the case. The reason for this seemingly contradictory data is unclear. It is possible that the stimulating effect of Del-B on Ca²⁺ release may require the presence of domain A. Del-ABC markedly inhibited caffeine activation, reduced protein expression, and completely abolished cytosolic Ca²⁺ activation of RyR2 and SOICR, which are similar to the effects of Del-C, suggesting that Del-C has a dominant impact on channel function. These results also demonstrate that the NH₂-terminal region is not essential for the gating of the RyR2 channel, although it plays an important role in regulating it.
Crystal structures of the NH₂-terminal region of the IP₃R have also been solved recently. The overall structure of the NH₂-terminal region of IP₃R is very similar to that of RyR (20–22). As with RyR, IP₃R contains three NH₂-terminal domains: the SD, IBC-β, and IBC-α, corresponding to domains A, B, and C in RyR, respectively. The functional role of individual NH₂-terminal domains of IP₃R has been well studied. IBC-β and IBC-α are involved in IP₃ binding, whereas the SD is believed to clamp domains IBC-β and IBC-α in a conformation with reduced affinity for IP₃, thus acting as a suppressor of IP₃ binding (20–25). Interestingly, it has recently been shown that the IP₃R SD and domain A of RyR are functionally interchangeable (20). An RyR-IP₃R chimeric channel in which the SD in the full-length IP₃R was replaced with domain A of RyR was still gated by IP₃. These observations suggest that the SD of IP₃R and domain A of RyR may share similar functional roles. However, it is important to know that deletion of the SD in IP₃R completely abolished IP₃-induced Ca²⁺ release (25), whereas Del-A or even the deletion of the entire NH₂-terminal region (Del-ABC) retained caffeine-induced Ca²⁺ release. Thus, the respective NH₂-terminal region plays a very different role in IP₃-dependent gating of IP₃R and the caffeine-induced activation of RyR. These observations also suggest that the mechanism of IP₃-induced opening of IP₃R differs from that of caffeine-induced opening of RyR.

[Figure 6 legend: Cells were transfected with the FRET-based ER luminal Ca²⁺-sensing protein D1ER and induced using tetracycline before the experiment. The cells were perfused with KRH buffer 2 containing increasing levels of extracellular Ca²⁺ (0–2 mM) to induce SOICR. FRET recordings from representative cells (a total of 40–75 cells each) are shown. To minimize the influence of CFP/YFP cross-talk, relative FRET measurements were used for calculating the activation threshold (E) and termination threshold (F) using the equations shown in A. F_SOICR indicates the FRET level at which SOICR occurs, whereas F_termi represents the FRET level at which SOICR terminates. The fractional Ca²⁺ release (G) was calculated by subtracting the termination threshold from the activation threshold. The maximum FRET signal F_max is defined as the FRET level after tetracaine treatment. The minimum FRET signal F_min is defined as the FRET level after caffeine treatment. The store capacity (H) was calculated by subtracting F_min from F_max. Data shown are mean ± S.E., and error bars represent S.E. (n = 3) (*, p < 0.01 versus WT; NS, not significant).]
In summary, our data show that domain A is important for Ca²⁺ release termination but not for Ca²⁺ release activation. Conversely, domain B is involved in stabilizing the closed state of the channel, which is important for both activation and termination of Ca²⁺ release, whereas domain C is important for channel activation. RyR2 lacking domain AB remains functional, indicating that it is not essential for channel gating. Thus, domain AB plays a regulatory role in channel gating. These results provide new insights into the function of the NH₂-terminal domains and the disease mechanism of mutations associated with the NH₂-terminal region.
"year": 2015,
"sha1": "0d44cade74aae4efb408959c7ede48b2d3a6bfd4",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/290/12/7736.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "92d737bcf1d89328e3ed0508acff3aa589fe314c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
In Vitro Activity of Statins against Naegleria fowleri
Naegleria fowleri causes a deadly disease called primary amoebic meningoencephalitis (PAM). Even though PAM is still considered a rare disease, the number of reported cases worldwide has been increasing each year. Increased awareness of this disease and global warming (these amoebae thrive in warm water bodies) seem to be the key factors behind this trend. To date, no fully effective drugs have been developed to treat PAM, and the current options are amphotericin B and miltefosine, which present side effects such as liver and kidney toxicity. Statins are able to inhibit the 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase, which is a key enzyme for the synthesis of ergosterol of the cell membrane of these amoebae. Therefore, the in vitro activity of a group of statins was tested in this study against two strains of Naegleria fowleri. The obtained results showed that fluvastatin was the most effective statin tested in this study, able to eliminate these amoebae at concentrations of 0.179 ± 0.078 to 1.682 ± 0.775 µM depending on the tested strain of N. fowleri. Therefore, fluvastatin could be a potential novel therapeutic agent against this emerging pathogen.
The first reported case of PAM was recorded in 1965 in Australia [15]. After this first report, PAM cases have been recorded worldwide, reaching a total of around 440 officially diagnosed cases [16,17]. Currently, the most affected countries are the United States and Pakistan; for example, in the US, 143 cases were reported during the period of 1962-2016 [18].
As mentioned above, N. fowleri can infect humans after amoebae-contaminated water enters the nose during water-related activities [3,6,8,12,14]. After that, the amoebae are able to pass through the nasal cavity and penetrate into the olfactory neuroepithelium, migrating through the olfactory nerves to the cribriform plate. Once the cribriform plate is passed, amoebae invade the brain, causing extensive parenchymal inflammation and haemorrhagic necrosis [3,6,9,10,19]. It is also important to mention that PAM is characterised as a rapid and fulminant disease with non-specific clinical symptoms. The average time of symptom appearance after exposure is 1-9 days (median 5 days) after exposure to contaminated water sources, whereas the average patient death is 1-18 days (median 10 days) after symptoms begin [4,7,17,19,20].
Moreover, diagnosis is often performed post-mortem because of the non-specific clinical symptoms and the rapid course of the disease mentioned above. Among the most common symptoms are fever, seizures, stiff neck, severe bi-frontal headaches and, in the later stages of the disease, coma [3,4,7,19,21].
Regarding treatment of PAM when diagnosed, current therapy involves a combination of amphotericin B and other drugs such as azithromycin, rifampin, azoles and, lately, miltefosine [3,9,10,20,22,23]. The addition of miltefosine to this drug combination, as well as hypothermia, has recently resulted in the successful survival of treated patients [9,24-27]. However, the treatment is frequently associated with severe adverse effects such as renal toxicity, anaemia, nausea, vomiting or even brain damage. Worryingly, the mortality as a consequence of PAM is above 95-97% of registered cases [3,21,28,29]. According to these data, there is an urgent need to develop new anti-Naegleria agents to treat PAM quickly and efficiently while also causing low toxicity.
The 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase is an enzyme that regulates the mevalonate pathway, the metabolic route that produces cholesterol in humans and ergosterol in fungi, plants and protozoa. Ergosterol, 7-dehydrostigmasterol (7DHC) and cholesterol have been reported as the main sterols in N. fowleri, the former being essential for the integrity of cell membranes [30]. Among known commercially available inhibitors of the HMG-CoA reductase, statins are a group of molecules widely used to lower cholesterol levels in patients. Moreover, their activity against another FLA was recently demonstrated, showing promising results [23,31,32]. Therefore, the in vitro activities of six statins (simvastatin, fluvastatin, atorvastatin, pravastatin, mevastatin and lovastatin) were evaluated against the trophozoite stage of N. fowleri, using a colorimetric method based on alamarBlue® in comparison with the reference drug amphotericin B (Amph B). In addition to the activity assays, an in vitro toxicity assay was also performed [32-34].
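IC50 values from an alamarBlue dose-response series can be estimated in several ways. A minimal sketch using linear interpolation between the two concentrations bracketing 50% viability; real analyses typically fit a sigmoidal curve instead, so this function and its monotonicity assumption are illustrative only:

```python
def ic50_linear(concs, viability_pct):
    # Find the first concentration interval where viability crosses 50%
    # and interpolate linearly; assumes viability decreases with dose.
    pairs = list(zip(concs, viability_pct))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 50.0 >= v2:
            return c1 + (v1 - 50.0) * (c2 - c1) / (v1 - v2)
    return None  # curve never crosses 50% in the tested range
```

For a curve dropping from 90% to 50% to 10% viability across 0.1, 1 and 10 µM, the interpolated IC50 falls at 1 µM.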
In Vitro Activity of the Tested Statins against the Trophozoite Stage of Naegleria fowleri
In this study, the activity of six statins was tested against two strains of N. fowleri. All the tested compounds were active against the tested amoebic strains. However, atorvastatin, fluvastatin, simvastatin and lovastatin showed amoebicidal effects, whereas pravastatin only induced amoebostatic effects. Moreover, in the case of mevastatin, this compound was only active against the N. fowleri ATCC 30215 strain (Table 1).
From all the tested statins, fluvastatin was the most active one, showing IC50 values ranging from 0.179 ± 0.078 to 1.682 ± 0.775 µM depending on the tested strain of N. fowleri. Furthermore, atorvastatin also showed high activity, with values between 6.278 ± 1.085 and 7.629 ± 0.696 µM. Moreover, strong changes in morphology as well as in the number of intracellular vesicles were observed in the two strains of Naegleria fowleri when they were incubated with serial dilutions of atorvastatin and, especially, fluvastatin (Figures 1-4). Table 1. Activity of the evaluated statins against the trophozoite stage of Naegleria fowleri and cytotoxicity against the J774A.1 macrophage cell line.
In Vitro Toxicity against Murine Macrophages
The toxicity of the tested compounds was determined in vitro against the J774A.1 murine macrophage cell line. The least toxic molecules were fluvastatin and atorvastatin, with CC50 values between 100 and 1000 times higher than the IC50 values obtained in the case of fluvastatin (Tables 1 and 2).
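The relationship between cytotoxicity and amoebicidal activity is commonly summarized as a selectivity index (SI = CC50/IC50). A small sketch; the numeric values in the check below are hypothetical, chosen only to illustrate the ≥100-fold window described for fluvastatin:

```python
def selectivity_index(cc50, ic50):
    # SI = cytotoxic concentration / amoebicidal concentration;
    # larger values indicate a wider therapeutic window.
    return cc50 / ic50
```

A compound with a hypothetical CC50 of 100 µM and an IC50 of 0.5 µM would have an SI of 200, i.e. within the 100-1000-fold window reported here.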
Discussion
The number of encephalitis cases due to Naegleria fowleri has been increasing worldwide. Moreover, as PAM is a highly lethal infection, there is an urgent need to develop novel antiamoebic agents which are able to eliminate the pathogen in a fast and highly effective way [6].
Statins have been previously tested in another pathogenic genus of FLA in our laboratory, Acanthamoeba, showing that atorvastatin, simvastatin and fluvastatin were able to eliminate both life cycle stages of Acanthamoeba [30,31]. Furthermore, the enzyme 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase is the main target of this family of molecules, and, thus, it is suspected that the same enzyme is present in Naegleria fowleri and is also inhibited by these drugs. Moreover, the same enzyme is widely expressed in vertebrates and other parasitic protozoa, and the active site of this enzyme has been reported previously to be highly conserved from an evolutionary point of view [31,32].
As has been previously reported in FLA, the main sterol membrane components are ergosterol and 7-dehydrostigmaterol (7DHC), apart from cholesterol in the case of N. fowleri. Moreover, ergosterol is essential for the integrity of the cell membrane in this species [30]. In our study, we identified statins which were able to eliminate trophozoites belonging to two different N. fowleri strains at highly effective doses while causing low toxic side effects (Tables 1 and 2 and Figures 1-4). Therefore, statins, at least in vitro, could be a potential family of therapeutic agents against this emerging pathogen. Another important fact to highlight is that statins are able to penetrate the blood-brain barrier [31,32] and, thus, if future in vivo studies support the in vitro data, at least fluvastatin and atorvastatin could become therapeutic candidates, as the results presented in this study suggest (Figure 5). However, because of the very serious nature of PAM, higher statin levels may be used, and any side effects must be accepted, lessened to some extent, and compensated for by dietary uptake [31,32]. Moreover, the bioavailability of statins differs greatly, ranging from 5% to 60%, as reported previously, with elimination half-lives ranging from 1 h for fluvastatin to 19 h for rosuvastatin [35,36]. In a previous clinical assay, a two-week simvastatin treatment allowed the researchers to check for CSF (cerebrospinal fluid) biomarkers, showing a reduction in some of them in the statin-treated group [37]. Additionally, further experiments should be carried out in order to confirm whether statins act differently at different temperatures, as this does not seem to have been investigated.
From the obtained data, we conclude that the mentioned statins are as effective as the drug currently used to treat PAM and, to the best of our knowledge, their lower costs and milder side effects support further development of statins as novel effective therapeutic agents against N. fowleri infections.
Amoebic Cultures
To test the amoebicidal activity of the compounds, two strains of Naegleria fowleri (ATCC® 30808™ and ATCC® 30215™) from the American Type Culture Collection (LG Promochem, Barcelona, Spain) were used. The strains were axenically cultured at 37 °C in 2% (w/v) Bactocasitone medium (Thermo Fisher Scientific, Madrid, Spain) supplemented with 10% (v/v) foetal bovine serum (FBS), containing 0.5 mg/mL of streptomycin sulfate (Sigma-Aldrich, Madrid, Spain) and 0.3 µg/mL of Penicillin G Sodium Salt (Sigma-Aldrich, Madrid, Spain). Strains were kept in the biological security facilities level 3 of our institution following Spanish biosafety guidelines for this pathogen.
For the toxicity assays, the murine macrophage J774A.1 (ATCC # TIB-67) cell line was cultured in Dulbecco's Modified Eagle's medium (DMEM, w/v), supplemented with 10% (v/v) fetal bovine serum and 10 µg/mL gentamicin (Sigma-Aldrich, Madrid, Spain), at 37 °C in a 5% CO2 atmosphere. For the experiments, all the strains were used during the logarithmic phase of growth.
Chemicals
A total of six statins were used in this study. The statins were purchased from Cayman Chemicals (Vitro SA, Madrid, Spain) and included atorvastatin, fluvastatin, simvastatin, pravastatin, mevastatin and lovastatin. The stock solutions for the experiments were prepared in dimethyl sulfoxide (DMSO) and were maintained at −20 °C until required for the experiments. As a positive control, amphotericin B was used.
In Vitro Activity Assays against the Trophozoite Stage of Naegleria fowleri
The activity of the tested statins against the trophozoite stage of Naegleria fowleri was determined in vitro using a modified colorimetric assay based on the oxido-reduction of the alamarBlue® reagent (Life Technologies, Barcelona, Spain), as previously described [38,39].
Briefly, the trophozoites were counted using a Countess II FL automatic cell counter (Thermo Fisher Scientific, Madrid, Spain) to prepare a working cell suspension (10^5 cells/well), and 50 µL per well was added in a 96-well plate (Thermo Fisher Scientific, Madrid, Spain).
After that, a serial dilution of the different statins diluted in the same culture medium was added to the plate (50 µL) (in all tests, 2% DMSO was used to dissolve the highest dose of the compounds without inducing any effects on the parasites). As a negative control, the trophozoites were incubated with the medium alone. Finally, the alamarBlue® reagent was added to each well (10% of medium volume) and the plates were incubated with slight agitation for 48 h at 37 °C. Subsequently, the plates were analysed with an EnSpire® Multimode Plate Reader (Perkin Elmer, Madrid, Spain) using a wavelength of 570 nm and a reference wavelength of 630 nm. To calculate the percentages of growth inhibition and 50% inhibitory concentrations (IC50), a non-linear regression analysis was performed with a 95% confidence limit using the SigmaPlot 12.0 software (Systat Software Inc., London, UK). All the experiments were performed in triplicate and the mean values were also calculated. A paired two-tailed t-test was used for the analysis of the data and values of p < 0.05 were considered statistically significant.
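The IC50 estimation described above (non-linear regression on dose-response data) can be sketched as a four-parameter logistic fit. This is a minimal illustration only: the concentrations and inhibition percentages below are invented, and SciPy's `curve_fit` stands in for the SigmaPlot analysis the authors actually used.

```python
# Illustrative IC50 estimation from dose-response data by non-linear
# regression (four-parameter logistic). All data values are hypothetical,
# NOT the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Increasing four-parameter logistic: inhibition rises with concentration,
    crossing the midpoint (bottom + top) / 2 at conc == ic50."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical serial dilution (uM) and % growth inhibition
conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
inhib = np.array([2.0, 10.0, 30.0, 52.0, 80.0, 92.0, 98.0])

popt, _ = curve_fit(logistic4, conc, inhib,
                    p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"estimated IC50 = {ic50:.3f} uM")
```

In practice the fit would be run per compound and per strain on the triplicate means, and the confidence limits would come from the regression as described in the text.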
Cytotoxicity Assays
To evaluate the toxicity of the statins used in this study, the murine macrophage cell line J774A.1 (ATCC TIB-67) was used. First, the macrophages were cultured in RPMI 1640 medium without phenol red (Roswell Park Memorial Institute, Thermo Fisher Scientific Inc., Waltham, MA, USA). After that, the cells were seeded (50 µL) in a 96-well plate (10^5 cells/mL) and serial dilutions (diluted in medium) of the statins were added (50 µL) to reach a final volume of 100 µL/well, as previously described [38,39]. A negative control was used, consisting of cells incubated with the medium alone. Finally, the alamarBlue® reagent was placed into each well (10% of final volume) and incubated for 24 h at 37 °C in a 5% CO2 atmosphere.
The plates were analysed with an EnSpire® Multimode Plate Reader, as mentioned above. The 50% cytotoxic concentrations (CC50) were calculated using the statistical analysis software SigmaPlot 12.0, as previously reported. All the experiments were performed three times and the mean values calculated. Finally, the selectivity index was calculated based on the obtained IC50 and CC50 values, as shown in Table 2.
Conclusions
This study assesses the in vitro efficacy of statins against Naegleria fowleri. Although rare, infections due to this pathogen are usually lethal and, therefore, new effective treatments are needed. The obtained results highlight that at least fluvastatin could be a potential new candidate for the treatment of PAM. Further in vivo studies should be performed to fully establish this compound as a novel therapeutic agent against PAM.

Determination of the Molecular Mechanism of Torularhodin against Hepatic Oxidative Damage by Transcriptome Analysis
Torularhodin, extracted from Sporidiobolus pararoseus, is a significant carotenoid that is similar to lycopene in structure. Some studies have indicated that torularhodin has antioxidative activities. However, its antioxidative activity and molecular mechanisms in liver injury have not been thoroughly studied. Therefore, the aim of this study was to elucidate the antioxidative activity of torularhodin against hydrogen peroxide- (H2O2-) induced damage and the mechanism involved through transcriptome analysis and to explore its antioxidant potential. BRL cells were first subjected to H2O2 damage and then treated with torularhodin. The results showed that at 10^-5 g/ml, torularhodin had significant protective effects against H2O2-induced oxidative damage. Morphological and immunofluorescence staining showed that torularhodin could maintain cell integrity and enhance the activity of antioxidant enzymes in the cells. According to transcriptome analysis, 2808 genes were significantly differentially expressed (1334 upregulated and 1474 downregulated) after torularhodin treatment. These genes were involved in three major Gene Ontology categories (biological process, cellular component, and molecular function). Moreover, torularhodin was involved in some cellular pathways, such as cancer inhibition, antioxidation, and aging delay. Our data highlight the importance of multiple pathways in the protection of the liver against oxidative damage by torularhodin and will contribute to elucidating the molecular mechanisms by which torularhodin inhibits hepatic oxidative damage.
Introduction
The liver is the most important metabolic organ, accounting for approximately 2% of the total body weight. More than 500 significant functions are performed by this organ, such as conversion of food components to critical blood components, storage of vitamins and minerals, manufacture of many vital plasma proteins and minerals, maintenance of hormonal balance and metabolism, and detoxification of toxic wastes in the body [1]. Numerous chemicals have been reported to cause liver injury, such as drugs, pollutants, fried foods, and alcohol [2][3][4][5]. Although drugs are the most effective way to treat diseases, most of them are metabolized in the liver and kidneys and can eventually lead to hepatotoxicity. Although alcohol has been stated to be beneficial to health when taken in moderation, it is still harmful to the liver. Alcohol can also increase the metabolic pressure in the liver and cause oxidative damage to the organ, which can modify the structure and function of proteins, damage DNA, and lead to fatty liver and cirrhosis in severe cases [6]. Alcohol consumption has resulted in 3.3 million deaths worldwide, accounting for 5.9% of all deaths in 2015. Nearly 700,000 people died from alcohol consumption in China, ranking the highest in the world in 2016. Pollutants known to be toxic to the liver include organic toxicants and heavy metals. Fried foods also contain a high amount of trans fats and toxic substances, both of which can injure the liver. Additionally, liver injuries can develop into a variety of illnesses, such as fatty liver, hepatitis, fibrosis, cirrhosis, and liver failure, as well as cancer [7]. Therefore, liver injury is regarded as a serious health problem, raising worldwide concern.
Carotenoids, which are organic compounds that belong to the family of 40-carbon terpenoids, occur naturally in fruits, vegetables, algae, fish, eggs, and oil [8,9]. Until now, approximately 750 compounds of this type have been identified, out of which 50 exhibit provitamin A activity [10][11][12]. They exert health-promoting effects, such as enhancing the immune system and accelerating wound healing, and can also be used to prevent organ injury owing to their antioxidative property [13,14]. Humans are unable to biosynthesize carotenoids, and therefore, these compounds must be supplied with the diet [12]. Torularhodin is one of the most important carotenoids in Sporidiobolus pararoseus (Figure 1, including several other carotenoids in the yeast). Because it contains a hydroxyl group, it belongs to the xanthophyll (lutein-like) class of carotenoids. It has a noncyclic β-ionone ring and is the precursor of β-carotene [15]. Although torularhodin and lycopene are similar in structure, torularhodin has one more double bond [16][17][18]. Therefore, we surmised that the antioxidative activity of torularhodin is stronger than that of lycopene. Some research has suggested that torularhodin from yeast has strong scavenging activity toward peroxyl radicals and effectively inhibits degradation by singlet oxygen; thus, it increases cellular resistance to oxidation, such as cell damage induced by excessive selenium intake [6,17,19]. Other studies have indicated torularhodin as having antioxidative, anticancer, and antimicrobial activities [20][21][22][23][24], suggesting that it has good potential to protect the liver against oxidative damage.
The antioxidative function of torularhodin in liver injury has not been thoroughly studied. Therefore, the objective of the present study was to elucidate the antioxidative activity of torularhodin against oxidative damage in BRL cells. To elucidate the potential molecular mechanism underlying this process, injured BRL cells treated with torularhodin were used for transcriptome sequencing. Then, genes that were differentially expressed between the torularhodin-treated and control groups were identified, verified, and analyzed.
Materials and Methods
2.1. Materials. Sporidiobolus pararoseus JD-2 was obtained from the School of Biotechnology of Jiangnan University (China). Torularhodin was isolated and purified from the S. pararoseus extract according to a previously published method [25]. Its purity was greater than 95%, as determined by high-performance liquid chromatography with UV detection at 450 nm. Torularhodin was stored at -80°C; it was first dissolved in dimethyl sulfoxide (DMSO) and then in Dulbecco's modified Eagle's medium (DMEM) before use.
2.6. Morphological Observation and Determination of the Antioxidative Capacity of Torularhodin in BRL Cells. BRL cells in the logarithmic growth phase were seeded in 96-well plates at 5 × 10^3 cells/well and incubated for 24 h. Then, the cells were divided into a normal control group (BRL cells were incubated for 24 h), an injury group (BRL cells were incubated for 16 h and then incubated with H2O2 for 8 h), and an intervention group (BRL cells were incubated with different concentrations of torularhodin solution (10^-7, 10^-6, 10^-5, or 10^-4 g/ml) for 16 h and then incubated with H2O2 for 8 h). Cell viability was analyzed with the CCK-8 reagents according to the manufacturer's instructions. Cell morphology images were obtained with microscope equipment (Leica Microsystems, Germany).
Immunofluorescence Staining. BRL cells were treated and incubated with H2O2 as described in Section 2.6. Then, the cells were fixed with 4% (m/v) paraformaldehyde, permeabilized with 0.1% Triton X-100, and blocked with 5% bovine serum albumin. Thereafter, the cells were stained with the primary antibodies (anti-Superoxide Dismutase (SOD) rabbit polyclonal antibody and anti-COX IV mouse polyclonal antibody (Proteintech, USA)) and subsequently with Alexa Fluor 488-conjugated donkey anti-mouse secondary antibody or Alexa Fluor 568-conjugated donkey anti-rabbit secondary antibody (Invitrogen, USA). After cell staining, images were acquired with a confocal laser scanning microscope (Carl Zeiss AG, Germany).
2.8. RNA Extraction and Analysis. BRL cells were treated and incubated with H2O2 as described in Section 2.5. Then, total RNA was extracted from the cells using TRIzol Reagent according to the manufacturer's instructions, and genomic DNA was removed using DNase I (TaKaRa, Dalian, China). Then, the RNA was sent to Majorbio (Shanghai, China) for sequencing. The RNA quality was determined with the 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA) and quantified using the ND-2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA).
Only high-quality RNA samples (concentration ≥ 200 ng/μl, OD260/280 = 1.8-2.2) were used to construct the sequencing library. Then, the oligo(dT)-enriched mRNA was fragmented in a fragmentation buffer, following which the cleaved RNA fragments were reverse transcribed to establish the final cDNA library. After adaptor connection, paired-end sequencing was performed on an Illumina HiSeq 4000 System (Illumina, San Diego, CA, USA) according to the vendor's recommended protocol. Each group was tested with three biological replicates and three technical replicates. Pathways containing statistically enriched genes were identified utilizing the Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.genome.jp/kegg) and Gene Ontology (GO, http://www.geneontology.org/) database analyses.
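The quoted library-construction cutoffs (concentration ≥ 200 ng/μl, OD260/280 between 1.8 and 2.2) amount to a simple per-sample filter. A sketch with invented sample names and values:

```python
# Sample QC filter implementing the cutoffs quoted in the text:
# RNA concentration >= 200 ng/ul and OD260/280 within [1.8, 2.2].
# The sample records below are hypothetical, for illustration only.
def passes_rna_qc(conc_ng_per_ul, od_260_280):
    return conc_ng_per_ul >= 200.0 and 1.8 <= od_260_280 <= 2.2

samples = [
    {"name": "ctrl_rep1", "conc": 350.0, "od": 1.95},
    {"name": "injury_rep2", "conc": 150.0, "od": 2.00},        # too dilute
    {"name": "torularhodin_rep3", "conc": 420.0, "od": 2.35},  # OD out of range
]
kept = [s["name"] for s in samples if passes_rna_qc(s["conc"], s["od"])]
print(kept)  # ['ctrl_rep1']
```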
2.9. Statistical Analysis. All experimental data were obtained from at least three independent experiments. The results are presented as the means ± standard deviations (SD). One-way analysis of variance was conducted using data processing software. A P value of less than 0.05 was considered statistically significant.
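The one-way ANOVA with a 0.05 significance threshold can be illustrated with SciPy's `f_oneway` (standing in for the unspecified data-processing software); the three triplicate groups below are invented viability values, not the study's data:

```python
# One-way ANOVA across three hypothetical treatment groups (triplicates),
# mirroring the analysis described above. Values are invented.
from scipy.stats import f_oneway

control      = [100.0, 98.5, 101.2]
injury       = [51.0, 49.8, 52.6]
intervention = [78.4, 80.1, 76.9]

stat, p = f_oneway(control, injury, intervention)
significant = p < 0.05
print(f"F = {stat:.1f}, p = {p:.2e}, significant = {significant}")
```

With group means this far apart relative to the within-group spread, the null hypothesis of equal means is rejected at the 0.05 level.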
Results
3.1. BRL Cell Growth Curve. As shown in Figure 2, after 24 h of culture, the BRL cells entered the logarithmic growth phase. BRL cells at this stage were selected for subsequent experiments.
Effect of H2O2 on BRL Cell Viability.
In order to determine the median lethal H2O2 concentration for BRL cells, the cells were incubated with different concentrations (0, 100, 200, 300, 400, 500, 600, 700, 800, 900, or 1000 μmol/ml) of H2O2 for 8 h. As shown in Figure 3, the different concentrations of H2O2 had obvious inhibitory effects on cell proliferation, with the inhibitory action being dose dependent. At the 700 μmol concentration of H2O2, the cell survival rate decreased to 51.26% (P < 0.05), which was approximately the half-lethal dose.
Effect of Torularhodin on BRL Cell Viability.
Before determining the antioxidative capacity of torularhodin in BRL cells, its toxicity toward this cell line needed to be tested. We used the CCK-8 assay to assess the effect of different concentrations of torularhodin on BRL cell viability. As shown in Figure 4, cell viability was not affected in the presence of torularhodin at concentrations less than or equal to 10^-4 g/ml. However, cell viability was obviously inhibited by torularhodin at 10^-3 and 10^-2 g/ml (P < 0.05). Therefore, concentrations of torularhodin less than or equal to 10^-4 g/ml were chosen for the study.
Morphological Changes and Antioxidative Capacity of Torularhodin in BRL Cells. Torularhodin at various concentrations (10^-7, 10^-6, 10^-5, or 10^-4 g/ml) was first incubated with BRL cells for 16 h, and then the cells were treated with H2O2 for 8 h. According to the cell viability assay (Figure 5), torularhodin at the different concentrations had protective effects against cell damage by H2O2, particularly at 10^-5 g/ml (P < 0.05). As shown in Figure 6(a), cells in the control group had a better growth status and appeared polygonal in shape with intact membrane integrity. The cells in the injury group were shrunken, and their number was obviously decreased (Figure 6(b)). Moreover, dead cells were observed in the medium. It was noteworthy that the oxidation-damaged cells treated with torularhodin were protected to a certain extent. As shown in Figure 6(c), most cells treated with torularhodin had unrestricted intercellular edges with intact membrane integrity, and their survival rate increased significantly.

Immunofluorescence Observation. Figure 7 shows the immunofluorescence staining results for the control (Figure 7(a)), injury (Figure 7(b)), and intervention (Figure 7(c)) groups of cells. Compared with the control cells, the cells in the injury group had much fewer mitochondria and lower superoxide dismutase (SOD) activity. As expected, the number of cells in the injury group was the lowest. On the other hand, the intervention group had more cells, more mitochondria, and stronger SOD activity compared with the injury group. The results indicated that torularhodin could maintain cell integrity and enhance the antioxidative capacity in BRL cells.
3.6. Results of Transcriptome Analysis for the Torularhodin Intervention and Injury Groups. The differentially expressed genes (DEGs) between the torularhodin intervention and injury groups were analyzed using the rat reference genome. The results (Figure 8) showed that a total of 2808 genes were significantly differentially expressed, with 1334 being upregulated and 1474 downregulated after torularhodin treatment. As shown in Figure 9, the DEGs between the torularhodin intervention and injury groups were classified by Gene Ontology (GO, http://www.geneontology.org/) enrichment into three main categories: biological process, cellular component, and molecular function. The most obvious difference found was in the biological process category. Thus, torularhodin could protect the biological processes of cells under oxidative damage.
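DEG counts like the 1334 up / 1474 down reported above typically come from thresholding fold change and adjusted p-values per gene. A minimal sketch of that classification, with invented genes and the common (assumed, not stated in the paper) cutoffs |log2FC| ≥ 1 and padj < 0.05:

```python
# Classify genes as up/down-regulated from (log2 fold change, adjusted p)
# pairs. Gene names and values are invented for illustration; the cutoffs
# are conventional defaults, not the paper's stated parameters.
genes = {
    "Sod1":   (1.8, 0.001),   # up in intervention vs injury
    "Tp53":   (1.2, 0.020),   # up
    "Cdkn1a": (-1.5, 0.004),  # down
    "Actb":   (0.1, 0.900),   # unchanged
    "Gpx1":   (2.3, 0.300),   # large fold change but not significant
}

def classify(log2fc, padj, fc_cut=1.0, p_cut=0.05):
    if padj >= p_cut or abs(log2fc) < fc_cut:
        return "not_significant"
    return "up" if log2fc > 0 else "down"

counts = {"up": 0, "down": 0, "not_significant": 0}
for log2fc, padj in genes.values():
    counts[classify(log2fc, padj)] += 1
print(counts)  # {'up': 2, 'down': 1, 'not_significant': 2}
```

The resulting up/down tallies are what feed the GO and KEGG enrichment steps discussed next.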
According to the GO enrichment of the torularhodin-regulated DEGs, the main functions of these genes were in regulating cell cycle processes and enzymes in cells (Figure 10). Meanwhile, the results of the KEGG pathway enrichment analysis of the torularhodin-regulated DEGs showed that the main functions of these genes were related to cancer, antioxidation, and senility (Figure 11).
Discussion
Reactive oxygen species (ROS), which are oxidation products of cellular metabolism, can break DNA and oxidize proteins and lipids [26][27][28]. Usually, higher organisms can maintain a balance between oxidation and antioxidation. When organisms are exposed to harmful substances, they produce a stress reaction that breaks the balance between oxidation and antioxidation, leading to cell-and organism-level damage [29]. To some extent, the occurrence and aggravation of all diseases are directly or indirectly related to oxidative stress or the damage it causes [30]. The liver is susceptible to ROS-mediated damage because the ROS produced by the mitochondria, microsomes, and peroxisomes in parenchymal cells regulate peroxisome proliferator-activated receptor-alpha, which is mainly related to the expression of genes involved in liver fatty acid oxidation [28]. Excessive ROS accumulation disrupts the oxidative balance and leads to oxidative stress, which can cause or accelerate the occurrence of liver disease [31]. Oxidative stress can damage proteins, lipids, and DNA, and even change the pathways that control the normal physiological functions of organisms. Furthermore, the oxidative stress caused by liver disease can also cause injury to other organs of the body, such as kidney failure and brain impairment [32].
Carotenoids are a group of important natural pigments that are ubiquitous in animals, higher plants, fungi, and algae. They are the main source of vitamin A in vivo and also have antioxidative, immune regulatory, anticancer, and antiaging functions [11,33]. Torularhodin is one such carotenoid. Studies have shown that torularhodin has a stronger ability than carotenes to scavenge peroxide free radicals [17]. Other studies suggested that torularhodin was more potent than α-tocopherol in inhibiting lipid peroxidation [34]. In the anticancer field, studies have shown that torularhodin has protective properties against the preneoplastic changes in the liver induced by dimethylnitrosamine and the ability to inhibit the development of prostate cancer [33,35]. It is noteworthy that torularhodin also has a strong antioxidative capacity. Previous reports showed that torularhodin neutralized free radicals more efficiently than β-carotene [18]. Moreover, torularhodin also has strong antimicrobial activity [18]. Therefore, this carotenoid can protect organs by reducing the risks of oxidative stress, infection, and inflammation damage. So far, research on torularhodin has mainly focused on its antioxidative, anticancer, and bacteriostatic effects, and few studies have reported the molecular mechanism underlying its antioxidative activity. Our study found that at 10^-5 g/ml, torularhodin had a significant protective effect against the oxidative damage of hepatocytes, with the results showing that cell viability and integrity were protected. This is considered to be due to the neutralization of free radicals by torularhodin, which can reduce oxidative stress-induced cell damage and protect the integrity of cell membranes; thus, it maintains the normal morphology of cells and reduces their mortality. The immunofluorescence staining results indicated that torularhodin could maintain cell integrity and enhance the cellular antioxidative capacity. Torularhodin was considered to regulate the cell cycle and the activity of intracellular enzymes. The transcriptome analysis results showed that a total of 2808 genes were significantly differentially expressed. According to the GO enrichment analysis of the torularhodin-regulated DEGs, the main functions of these genes are in regulating cell cycle processes, enzymes in cells, and the cell response to oxygenated compounds. The response of cells to stress was similar to that in some previous studies, which was considered to be associated with the alleviation of cell damage [36,37]. Therefore, torularhodin stabilizes the intracellular environment and increases cellular activity and thus increases the cell survival rate. Meanwhile, the KEGG pathway enrichment analysis showed that the main functions of these genes are relevant to cancer, antioxidation, cell cycle processes, metabolic pathways, and senility.
This was similar to the GO results, in that the function of torularhodin was to protect cells from damage.
In addition, the transcriptome results showed that torularhodin could modulate the insulin metabolic pathway, likely because this pathway is affected by excessive ROS stimulation, in turn affecting other functions. As an antioxidant, torularhodin can improve this condition and protect the health of the cells and thus the organism as a whole. This was similar to other published research results [38]. The KEGG results also indicated that torularhodin modulated aging-related pathways and attenuated the effect of ROS on cells, thereby reducing the probability of premature cell aging [39]. We also found that torularhodin regulated the pathway involving p53, which is a tumor suppressor, and we speculate that the carotenoid attenuated oxidative stimulation to a certain extent, thereby reducing the likelihood of cancer [40]. In summary, torularhodin affects many cellular pathways, especially the anticancer and antioxidative pathways, and thus plays a significant role in stabilizing the intracellular environment.
In conclusion, we consider that torularhodin has a significant protective effect against the oxidative damage of hepatocytes; however, further study is necessary to verify this. The results showed that neutralization of free radicals, antioxidative and anticancer activities, and cell cycle pathways played an important role in this process of protection. The findings are similar to those of many studies on antioxidants. For example, Ungureanu and Ferdes showed that torularhodin had strong antioxidative activity, and Wu et al. concluded that torularhodin showed neuroprotective activity against H2O2-induced oxidative injury, related to its strong antioxidative activity [18,20]. In the future, we will study the pathways and specific effects of carotenoids with the aim of utilizing their full potential as antioxidants. To date, torularhodin has not been detected in foods. However, considering its obvious antioxidant capacity, torularhodin could be used as a food additive and has good commercial market prospects [12]. In addition, some studies have suggested that torularhodin also has anticancer and antimicrobial activities. Du et al. confirmed that torularhodin at 18 mg/kg body mass significantly inhibited the development of prostate cancer in the studied mice. Ungureanu and Ferdes also concluded that torularhodin showed antibacterial and antifungal properties toward all tested strains [18]. Therefore, we will research torularhodin further and explore its potential in the fields of medicine, health, and industrial production.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The data used to support the findings of this study are included within the article.

On the evaluation of matrix elements in partially projected wave functions
We generalize the Gutzwiller approximation scheme to the calculation of nontrivial matrix elements between the ground state and excited states. In our scheme, the normalization of the Gutzwiller wave function relative to a partially projected wave function with a single non-projected site (the reservoir site) plays a key role. For the Gutzwiller projected Fermi sea, we evaluate the relative normalization both analytically and by variational Monte Carlo (VMC). We also report VMC results for projected superconducting states that show novel oscillations in the hole density near the reservoir site.
I. INTRODUCTION
This paper concerns the calculation of matrix elements using projected wave functions of the form |Ψ = P |Ψ 0 . Here, P = ∏ i (1 − n i↑ n i↓ ) is a projection operator which excludes double occupancies at sites i, and |Ψ 0 a trial wave function. Projected wave functions of this form were originally proposed by Gutzwiller to study electronic systems with repulsive on-site interactions 1 . The choice of |Ψ 0 depends on the problem under consideration. For instance, a projected Fermi liquid state was used successfully in the description of liquid ³He as an almost localized Fermi liquid 2,3 . Soon after the discovery of high temperature superconductivity in the cuprates, projected BCS wave functions were proposed as possible ground states of the so-called t − J model 4,5 .
Early results from variational Monte Carlo (VMC) studies as well as a renormalized mean field theory based on the Gutzwiller approximation showed that a projected d-wave BCS state reproduces many features seen in the phase diagram of the high temperature superconductors 6,7,8 . The projection operator P N , which fixes the particle number N in (2), is useful when considering the phase diagram near half filling 6 . Without P N in (2), one would need to consider the effects of particle number fluctuations, which become singular near half-filling 9,10 . Detailed VMC studies have been carried out recently using projected d-wave BCS states as variational wave functions for the two dimensional Hubbard model 11 , after a suitable canonical transformation 3 . Similar wave functions have been proposed in the literature for cobaltate superconductors as well as organic superconductors 12,13 . To make analytical progress however, it is desirable to extend Gutzwiller's scheme and construct normalized single particle excitations and calculate matrix elements. In this paper, we take the first step in this direction. We construct normalized excitations of the Gutzwiller projected Fermi sea and consider the evaluation of matrix elements.
In his original paper, Gutzwiller proposed that in calculating expectation values of operators with projected wave functions, the effects of projection on the state |Ψ 0 could be approximated by a classical statistical weight factor, which multiplies the quantum result 14 . Thus, for example, where Ô is any operator, and g, a statistical weight factor. The basic idea is that the projection operator P reduces the number of allowed states in the Hilbert space, and invoking a simple approximation, such a reduction can be taken into account through combinatorial factors. For example, expectation values of the kinetic energy operator c † i c j + c † j c i and the superexchange interaction between sites i and j, S i · S j , in the projected subspace of states are renormalized by the Gutzwiller factors, where n is the density of electrons. In deriving these renormalization factors, one considers the number of states that contribute to Ψ|Ô|Ψ and to Ψ 0 |Ô|Ψ 0 respectively. The ratio of these two contributions is identified as the renormalization factor. It is clear that this approach can be generalized to evaluate matrix elements of an operator Ô between different projected states. However, as we will see in this paper, many of the matrix elements that are of interest are reduced to the calculation of matrix elements between partially projected wave functions of the form (5). The wave function |Ψ ′ l describes a state where double occupancies are projected out on all sites except the site l, which we call the reservoir site. The reason for the appearance of reservoir sites is not far to seek. Consider, for example, the operator P c l↑ . Clearly, it can be rewritten as c l↑ P ′ l .

[Fig. 1 caption: Eqs. (6) and (1). Note the good agreement between the Gutzwiller result (solid line), Eqs. (10) and (15), and the VMC results for the projected Fermi sea (open circles). Statistical errors and finite-size corrections are estimated to be smaller than the symbols.]
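For the paramagnetic case (n↑ = n↓ = n/2), the counting argument sketched above yields the standard renormalization factors g_t = 2(1 − n)/(2 − n) for hopping and g_s = 4/(2 − n)² for superexchange. The short sketch below quotes these closed forms from the standard Gutzwiller-approximation literature, since they are assumptions here rather than the paper's displayed equations:

```python
def g_t(n):
    """Hopping (kinetic-energy) renormalization in the Gutzwiller
    approximation for a paramagnet at electron density n."""
    return 2 * (1 - n) / (2 - n)

def g_s(n):
    """Superexchange (S_i . S_j) renormalization at density n."""
    return 4 / (2 - n) ** 2

# At half filling (n = 1) hopping is completely blocked, g_t = 0,
# while spin exchange is enhanced, g_s = 4; at n = 0 both are 1.
for n in (0.0, 0.5, 1.0):
    print(n, g_t(n), g_s(n))
```

The vanishing of g_t at n = 1 is the same Hilbert-space restriction that later makes the particle tunneling matrix element vanish at half filling.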
Since calculation of matrix elements involving excited states involve the commutation of projection operators with creation/destruction operators, partially projected states arise inevitably within the Gutzwiller scheme.
In this paper, we present a method to calculate matrix elements between a partially projected Fermi sea, i.e., a projected Fermi sea with a reservoir site at l, as in (5). We will show that this problem has to be solved if we were to construct normalized particle/hole excitations of the (fully) projected Fermi sea. The same problem arises when calculating matrix elements for particle/hole tunneling into the projected Fermi sea. We develop an analytical approximation to solve this problem, and use it to calculate various matrix elements. We use VMC to test the validity of the approximation and find that our analytical results for the partially projected Fermi sea are in good agreement with the results from VMC.
The outline of the paper is as follows. In Sec. II, we present results for the occupancy of the reservoir site. We use these results in Sec. III, where we show how normalized single particle excitations can be constructed from the projected Fermi sea. In Sec. IV we calculate the matrix elements for particle/hole tunneling into the projected Fermi sea. VMC results for density oscillations in the vicinity of the reservoir site for both projected Fermi sea and BCS states are presented in Sec. V. The final section contains a summary and discussion of results.
II. OCCUPANCY OF THE RESERVOIR SITE
Consider a partially projected wave function, Double occupancy is projected out on all sites except the site l, called the reservoir site. Unless specified otherwise, we take |Ψ 0 to mean the Fermi sea. For the calculation of single particle excitations and matrix elements, we need expectation values such as that generalize the Gutzwiller renormalization scheme (3) to partially projected wave functions.
A. Gutzwiller approximation
In order to evaluate the generalized renormalization parameters g ′ in (7), we obviously need the normalization Ψ ′ l |Ψ ′ l . We define the norm of the fully projected state relative to the state with one reservoir site. Invoking the Gutzwiller approximation, we estimate this ratio by considering the relative sizes of the Hilbert spaces, where L = N ↑ + N ↓ + N h is the number of lattice sites, and N ↑ , N ↓ and N h are the numbers of up spins, down spins and empty sites respectively. The first term in the denominator of (9) represents the number of states with the reservoir site being empty or singly occupied; the second term represents the states with the reservoir site being doubly occupied. Eq. (9) can be simplified in the thermodynamic limit. We get, where the particle densities, n σ = N σ /L (σ = ↑, ↓) and n = n ↑ + n ↓ . The above argument can be extended to the case of two unprojected sites in an otherwise projected Fermi sea. We then get, where P lm = ∏ i≠l,m (1 − n i,↑ n i,↓ ). We note for later use that
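The Hilbert-space counting behind Eqs. (9)–(12) can be checked numerically. The sketch below is our reading of that argument: the closed-form thermodynamic limit X = (1 − n)/(1 − n + n↑n↓) is inferred from the counting, and is consistent with the (N_h + 1)/L finite-size dependence quoted later and with the limit X → 0 as n → 1 used in Sec. IV:

```python
from math import comb

def norm_ratio_exact(L, n_up, n_dn):
    """X = <PΨ0|PΨ0> / <P'_lΨ0|P'_lΨ0> from Hilbert-space counting.

    `full` counts configurations with no double occupancy anywhere;
    `double` counts configurations with the reservoir site doubly
    occupied and the remaining particles spread over L-1 sites.
    """
    full = comb(L, n_up) * comb(L - n_up, n_dn)
    double = comb(L - 1, n_up - 1) * comb(L - n_up, n_dn - 1)
    return full / (full + double)

def norm_ratio_limit(n_up, n_dn):
    """Thermodynamic limit of X (inferred closed form)."""
    n = n_up + n_dn
    return (1 - n) / ((1 - n) + n_up * n_dn)

# finite-size values converge to the limit as L grows (here n = 0.5)
for L in (40, 200, 1000):
    print(L, norm_ratio_exact(L, L // 4, L // 4), norm_ratio_limit(0.25, 0.25))
```

Note that (1 − X)/X from this counting equals n↑n↓/(1 − n), which diverges at half filling; this is the quantity that controls the hole-excitation norm below.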
B. Exact relations
Assuming translation invariance, it is possible to derive the following exact expressions for the occupancy of the reservoir site, where The proof is straightforward. Consider for instance, the probability (13) of finding the reservoir site empty. Since, Eqs. (14) and (15) can be proved analogously.
C. VMC results for projected Fermi sea and BCS states
In Fig. 1, we compare (10) with VMC results for d Ψ ′ l = 1 − X. We find that the results from the generalized Gutzwiller approximation are in excellent qualitative agreement with the VMC results for a partially projected Fermi sea. We also used VMC to obtain the same quantity using projected s/d-wave BCS states as variational states in the simulation. The results for d Ψ ′ l in BCS states are shown in Fig. 2. In contrast to the projected Fermi sea, a clear deviation from the Gutzwiller approximation is seen. This underscores the importance of pairing correlations in the unprojected wave function that are not completely taken into account by the Gutzwiller approximation scheme. These differences between Fermi sea and BCS states are discussed in more detail in Sect. V, where we consider density oscillations in the vicinity of the reservoir site.
In the following we discuss some details of the VMC calculations with one unprojected (reservoir) site l. As mentioned earlier, single occupancy is enforced (by projection) on all other sites. Simulations are performed on a finite square lattice spanned by two vectors (L x , L y ) and (−L y , L x ) with periodic boundary conditions 16 . The number of sites, L = L x ² + L y ². The numbers of up- and down-electrons are chosen to be equal, N ↑ = N ↓ . The simulation for the local quantity d Ψ ′ l = n l↑ n l↓ Ψ ′ l has a larger statistical error than results for macroscopic quantities in uniform systems because the summation over site indices yields effectively L times more statistics for the latter. In order to overcome this problem, we update the reservoir site more often than the projected sites. Accordingly, the transition probability needs an extra weighting factor to keep the local balance. With this procedure, we can improve the statistical accuracy by about one order of magnitude. In addition, we carry out measurements after every update. Usually, in VMC simulations, measurements are performed every O(L) updates to obtain independent samples since similar states return similar sampled data. However, in the case of n l↑ n l↓ , a measurement returns only 0 or 1; viz., the sampled data can be different even when the states are similar. Given this, measurement after every update seems more reasonable as it reduces statistical errors. Furthermore, we have restricted updates to the transfer of a single electron to an unoccupied site, and excluded updates via the exchange of two electrons. The calculation of the transition probabilities for the former update consumes time of O(N σ ), whereas the time taken for the latter update is O(N σ ²). As the system size increases, this restriction achieves efficiency. We have collected statistics from up to 60 independent runs over two days, and the total number of updates amounts to 10⁸ ∼ 10⁹.
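The biased-update scheme described above — proposing the reservoir site more often and compensating in the acceptance probability to preserve balance — is an instance of Metropolis–Hastings sampling with a non-uniform proposal. A toy sketch on a four-state distribution (the weights and proposal probabilities are invented for illustration, not taken from the simulations in the text):

```python
import random

def metropolis_biased(weights, prop, steps, seed=1):
    """Metropolis-Hastings with a state-independent, biased proposal.

    prop[i] is the probability of proposing state i; state 0 plays the
    role of the oversampled 'reservoir'. The factor prop[x]/prop[y] in
    the acceptance ratio is the extra weighting needed to keep the
    chain balanced with respect to the target weights.
    """
    rng = random.Random(seed)
    states = list(range(len(weights)))
    x = 0
    counts = [0] * len(weights)
    for _ in range(steps):
        y = rng.choices(states, weights=prop)[0]        # biased proposal
        accept = (weights[y] * prop[x]) / (weights[x] * prop[y])
        if rng.random() < accept:
            x = y
        counts[x] += 1                                  # measure every update
    return [c / steps for c in counts]

weights = [1.0, 2.0, 3.0, 4.0]     # unnormalized target distribution
prop = [0.5, 1 / 6, 1 / 6, 1 / 6]  # state 0 proposed three times as often
freq = metropolis_biased(weights, prop, 200_000)
```

Without the `prop[x]/prop[y]` correction the oversampled state would be visited too often; with it, the empirical frequencies converge to `weights` regardless of the proposal bias.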
For superconducting states, one can perform the VMC simulation either with fixed particle number P N P |Ψ BCS , or with a fixed phase P |Ψ BCS 10,16 . For the latter choice, particle number fluctuation hinders the variational wave function from reaching half filling unless the chemical potential µ goes to infinity. On the other hand, the wave function can be optimized by varying the gap ∆ k even at half filling, if we choose to fix the particle number. It is important to note that simulations with fixed particle number are done not with the most probable N of P |Ψ BCS , but that of |Ψ BCS . This is because P decreases the average particle number 10 . Despite these differences, both choices of wave functions yield quantitatively similar results 10 . Throughout this paper we choose to fix the particle number while working with projected BCS states.
Let us define a k ≡ v k /u k . For the d-wave BCS state, a k=0 = 0 in the thermodynamic limit. However, if one chooses a k=0 = 0 in the finite system, the k = 0 state is unoccupied, although it is the lowest energy state. One can also choose a large value for a k=0 . Usually, the difference between these choices is O(1/N ). We expect an N electron system with a k=0 = 0 to be similar to an N + 2 electron system with large a k=0 , because the two electrons that are no longer in the k = 0 state can occupy other available states. However, this argument fails at half filling for projected states, because there are no available states left for the two extra electrons. So it should not be surprising that X depends strongly on these choices close to half filling; the a k=0 = 0 definition gives larger X than the other does, as shown in Fig. 3. At the other fillings, our results show only O(1/N ) of difference between these choices. Except for Fig. 3, where we show both cases, all other results in this paper are obtained for a choice of large a k=0 ; i.e., we take a k=0 larger than any other a k .

[Fig. 2 caption: Eqs. (6) and (2). The parameterization follows Ref. 16. Statistical errors and finite-size corrections are estimated to be smaller than the symbols.]
The system size dependence is quite small except in the vicinity of half filling. In fact, it is qualitatively consistent with the Gutzwiller approximation; size dependence enters only as (N h + 1)/L in Eq. (9) and is negligible for large N h . In Fig. 3, we show the dependence of d on the system size. As shown in Fig. 3, d approaches unity for the projected Fermi sea. For the projected d-wave BCS state, we speculate that the value of d goes to unity too, because it does not saturate, but increases more rapidly as 1/L decreases.
III. SINGLE PARTICLE EXCITATIONS OF THE PROJECTED FERMI SEA
We consider the particle excitation and the hole excitation Any calculation involving |Ψ ± kσ needs the respective norms, N ± kσ = Ψ ± kσ |Ψ ± kσ . We now calculate these norms within the generalized Gutzwiller approximation.
A. Particle excitation

Equation (19) has appeared frequently in the literature. Here, we repeat its derivation to facilitate a comparison with the analogous problem for hole excitations. The norm Ψ + kσ |Ψ + kσ is given by where we have used (16) for the diagonal contribution in the last step. Invoking the Gutzwiller approximation for the off-diagonal term, Eq. (19) follows directly from (20).
B. Hole excitation
The normalization of the hole excitation can be done analogously. We get, where P lm = ∏ i≠l,m (1 − n i,↑ n i,↓ ). The last term in the above equation corresponds to a hopping process between two reservoir sites. The generalized Gutzwiller approximation assumes that the matrix elements are proportional to the square roots of the corresponding densities (13,14,15).
Invoking the Gutzwiller approximation and using (11), we get, for the normalization of the hole excitation relative to the norm of the Gutzwiller wave function. The general expression (21) for the hole normalization, can be simplified upon using the Gutzwiller result (12) for the relative norm X. We then get, for the last term in (21). Finally, we get the simple result, It is interesting to compare this result for the normalization of the hole excitation with the corresponding expression (19) for the particle excitation. The vanishing of the latter at half filling could have been expected. But the divergence of N − kσ as n → 1 is surprising. We will return to this point in the next section.
C. Consistency check
The norm N + kσ has to vanish whenever |Ψ + kσ = P c † kσ |Ψ 0 vanishes. For the Fermi sea this is the case when k < k F , i.e. when n 0 kσ = 1. This physical condition is obviously fulfilled by (19). Similarly, we expect N − kσ to vanish for n 0 kσ = 0, which is satisfied by (22). Thus, the Gutzwiller result (10) obeys the normalization condition for the hole excitation and the theory is consistent.
IV. TUNNELING MATRIX ELEMENTS
We now consider the tunneling of electrons and holes into a projected wave function. Single particle tunneling into a projected superconducting state has been considered recently by Anderson and Ong 9 , and Randeria et al. 15 . Here, we restrict ourselves to the projected Fermi liquid state and evaluate the tunneling matrix elements by systematically retaining all terms arising from the commutation of the electron creation and destruction operators with the projection operator P , as outlined in Sec. III.
A. Particle tunneling
Consider first, the matrix element The numerator may be calculated easily by using the result of (19). From the above expression we find that the particle tunneling matrix element takes the form, It vanishes at half filling n → 1, implying that the addition of electrons is not possible exactly at half filling because of the restriction in the Hilbert space.
B. Hole tunneling
Next we evaluate the matrix element corresponding to the tunneling of holes into the projected state. Naively, we might expect this process to be allowed at half filling, since the removal of electrons is not forbidden by the projection operator. Consider now, the matrix element in the numerator of (25). We follow the same procedure used to evaluate the norm of the hole wave function in Sec. III B and use (12) and (14) and find, Using this expression together with the norm (21) of the hole excitation, we obtain the hole tunneling matrix element (25), a surprising result, in that it vanishes at half filling (n ↑ = n ↓ = 0.5) too.
The vanishing of the hole tunneling matrix element at half filling is clearly related to the divergence of the norm of the hole excitation. This, in turn, is related to the fact that X → 0, as n → 1 (cf. Eq(10)). The vanishing hole tunneling matrix element can then be understood as follows. When the reservoir site is doubly occupied, a single hole in the otherwise projected Fermi sea can be found in any of the lattice sites. Consequently, when double occupancy of the reservoir site occurs with probability 1, as it does at half filling, an "orthogonality catastrophe" occurs leading to zero overlap for the tunneling matrix element. Note that the result (27) hinges on the exact functional dependence (12) of (1 − X)/X on the particle densities n σ . On the other hand, the particle tunneling matrix element M + kσ is not affected by the functional form of the relative normalization factor X. If X were to vanish more slowly than (1 − n) at half filling, then from (21) and (26), one could conclude that the hole tunneling matrix element M − kσ does not vanish as n → 1, possibly leading to an asymmetry between particle and hole tunneling. Our analytical results preclude this possibility for the projected Fermi sea. But we are unable to provide a definite answer for the projected superconducting states, in view of the discrepancy between the Gutzwiller approximation and the VMC results (Fig. 2). To understand this discrepancy, we study density oscillations in the vicinity of the reservoir site using VMC.
V. DENSITY OSCILLATIONS NEAR THE RESERVOIR SITE
To clarify the limitations of the Gutzwiller approximation for projected superconducting states, we use VMC to calculate the hole density in the vicinity of the reservoir site. We find that the density oscillations seen are very different for the projected Fermi sea and the BCS states. VMC results for the hole density in the partially projected state |Ψ ′ l are presented in the first row of Fig. 4. The sites m are distinct from the reservoir site l (marked by a cross in the figure). All results shown correspond to half filling; viz., n ↑ = n ↓ = 0.5. We choose ∆ = 1 for the BCS states. The vectors of periodic boundary conditions are L 1 = (L x , L y ) and L 2 = (−L y , L x ) respectively, with L x = 37, L y = 1; including the reservoir site, L = L x ² + L y ² = 1370 sites. In the figure, white/black correspond to high/low values of n h (m), which is scaled by a logarithmic gray scale varying in the range −8.5 < log n h (m) < −6. Thus, the same gray represents the same value in all the three cases shown.
For the Fermi sea, we see that the hole is distributed more uniformly than in the other cases, even though the diagonal direction has a larger probability of being occupied by a hole. The s-wave shows a checker-board pattern. The d-wave has a quasi checker-board pattern where only one of four sites is black, and the hole tends to be near the reservoir site. The VMC results for the projected BCS wave functions are strikingly different in that the hole density is not uniform. On the other hand, the Gutzwiller approximation would be exact if all states in the Hilbert space contributed equally to the wave function. That would correspond to a uniform density of holes. Clearly, the Gutzwiller approximation has to be extended to treat projected superconducting wave functions. This is in agreement with our previous considerations, where we found that the functional form of X (Eq. 10, derived using the Gutzwiller approximation) agrees with the VMC calculations only for the projected Fermi sea, but not for BCS states (see Fig. 1 and Fig. 2).
To further investigate the effects of Gutzwiller projection, we also plot (second row of Fig. 4) the correlation function in systems without the Gutzwiller projection. This correlation function between a hole at site m and a doubly occupied site at l corresponds to the quantity n h (m) for the partially projected wave function close to half filling. This is because, in the latter case, the unprojected site is doubly occupied. Note that translation invariance implies that the second term in (28) does not depend on the site indices l and m, and is a constant factor. Then, using Wick's theorem, the correlation function d can be decomposed into single-particle contractions. For the Fermi sea, only c † i,↑ c j,↑ is finite. The nesting of the Fermi surface by Q = (π, π) then leads to the checker-board pattern for the hole density observed in Fig. 4 (second row).
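The role of nesting can be seen in a minimal one-dimensional analogue (an illustrative sketch, not the two-dimensional calculation behind Fig. 4). At half filling, k_F = π/2, the equal-time correlator ⟨c†_0 c_x⟩ of the free Fermi sea vanishes on every even site, so its square, which enters the density correlation via Wick's theorem, alternates between sublattices:

```python
import cmath
import math

def fermi_sea_correlator(L, N, x):
    """<c^dag_0 c_x> for N spinless fermions filling the lowest momenta
    k = 2*pi*m/L, m = -(N-1)/2 .. (N-1)/2 (N odd), on an L-site ring."""
    assert N % 2 == 1
    ks = [2 * math.pi * m / L for m in range(-(N // 2), N // 2 + 1)]
    return sum(cmath.exp(1j * k * x) for k in ks).real / L

L, N = 402, 201                      # half filling, k_F ~ pi/2
G1 = fermi_sea_correlator(L, N, 1)   # ~ sin(pi/2)/pi = 1/pi on odd sites
G2 = fermi_sea_correlator(L, N, 2)   # vanishes on even sites
```

The vanishing on one sublattice reflects the nesting vector Q = π connecting ±k_F in one dimension; the two-dimensional checker-board pattern arises from Q = (π, π) in the same way.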
For the s-wave, the Friedel oscillations of c † i,↑ c j,↑ are similar to those of the Fermi sea, while the oscillation of c i,↑ c j,↓ is phase shifted by π/2. Summing both contributions to d, the oscillations are largely smeared out. Let us compare these results with those obtained after projection. For the Fermi sea, we see clearly that the density oscillations are suppressed by projection. This is likely because projection reduces the discontinuity at the Fermi level, thereby suppressing the nesting by Q and the corresponding Friedel oscillations.
The emergence of the checker-board pattern in the projected s-wave suggests that Gutzwiller projection affects c i,↑ c j,↓ more strongly than c † i,↑ c j,↑ . With only one contribution, the Friedel oscillations are no longer smeared out and are observed.
Projection changes the pattern qualitatively for the d-wave too. The observed pattern resembles approximately the function ∼ sin²(xπ/2) sin²(yπ/2) (see Fig. 4, top row), with m = (x, y). This indicates that the nodal points at (±π/2, ±π/2) contribute dominantly after projection. Furthermore, in this case, the hole tends to stay near the reservoir site. It means that only a part of the Hilbert space has a large weight, leading to a deviation from the Gutzwiller approximation. We believe this effect cannot be captured within the Gutzwiller approximation without invoking off-site correlations 17 .
VI. DISCUSSION
In this paper, we extended the Gutzwiller approximation scheme to construct normalized excitations and matrix elements for the projected Fermi sea. In typical calculations, one needs to determine matrix elements between partially projected Gutzwiller states, where double occupancies are projected out at all but one site l (called the "reservoir" site). The occupancy of the reservoir site, n l , turns out to be an important quantity in the calculation of matrix elements. Since the wave function projects out double occupancies on all sites m ≠ l, it follows that the occupancy n m ≤ 1, whereas n l ∈ {0, 1, 2}. Therefore, our results for n l are nontrivial in that the Gutzwiller approximation is extended to calculate the occupancy at an unprojected site. We presented an analytical method to calculate such matrix elements and showed that the approximations are in good agreement with results from variational Monte Carlo (VMC) for the Fermi sea. These results were used to construct normalized single particle excitations of the fully projected Fermi sea, and to calculate matrix elements for tunneling into the projected Fermi sea.
Single particle tunneling in projected BCS wave functions has been discussed recently, by Anderson and Ong 9 , and Randeria et al. 15 . In our calculations for tunneling into the projected Fermi sea, we find the surprising result that the matrix elements for both particle and hole tunneling vanish as n → 1 (half filling). Within our scheme, the result follows from the behavior of the charge density in the vicinity of the reservoir site. In particular, for the projected Fermi sea the analytical result hinges on the expression for single occupancy of the reservoir site, Eq. (10). As can be seen in Fig. 2, the analytical result does not agree with numerical calculations done for projected BCS wave functions. This discrepancy underscores the importance of pairing correlations in the unprojected wave functions, which are not taken into account within the Gutzwiller approximation scheme.
There are two ways by which electron correlations arise in the Gutzwiller scheme: one is through the mean field or trial wave function |Ψ 0 , and the other via the projection on the subspace of no double occupancy, |Ψ = P |Ψ 0 . The latter effect, which results in the reduction in the size of the Hilbert space can be described by combinatorial arguments, leading to (10). As seen in Fig. 1, the analytical and VMC results are in good agreement for the case of the projected Fermi sea. We can trace this agreement back to the fact that the Fermi sea does not contain any additional explicit correlations.
Consider instead |Ψ BCS , which contains additional, molecular field correlations in the unprojected wave function. Here, we may expect deviations for quantities like the relative normalization X from the combinatorial result (10). Indeed, the VMC data presented in Fig. 2 confirm this expectation. For instance, the data show a qualitatively different dependence of X on doping for the s-wave BCS states. The VMC data indicate a possibly different limiting behavior for X in the limit n → 1, as indicated by the analysis of the data as a function of inverse cluster-size, presented in Fig. 3.
For the s-wave BCS state, we observe a dramatic enhancement in the double occupancy at the reservoir site for low doping, which we understand as a consequence of enhanced on-site pairing, relative to the Fermi liquid state. On the other hand, the double occupancy of the reservoir site is reduced for the d-wave, since the d-wave state suppresses on-site pairing fluctuations. The quantitative behavior of the normalization ratio X as a function of doping for projected superconducting states is thus a subtle problem which we hope to solve in the future.
We also studied the hole density near the reservoir site for projected superconducting wave functions at half filling using VMC. The results are shown in the top row of Fig. 4. For the projected Fermi sea, we find that the hole density is uniform. However, for the superconducting states, we find that projection induces oscillations in the hole density near the reservoir site. For the projected d-wave state, we find that the hole density is mostly near the reservoir site. We believe that the Gutzwiller approximation needs to be extended to treat pairing correlations in the superconducting wave functions to understand these results fully. This issue, along with the study of systems away from half filling and its possible relevance to the checker-board pattern observed in scanning tunneling microscopy of the high temperature superconductors, is left to future research.
We thank P.W. Anderson, N. P. Ong, and H. Yokoyama for several discussions. N.F. is supported by the Deutsche Forschungsgemeinschaft. V.N.M. acknowledges partial financial support from The City University of New York, PSC-CUNY Research Award Program.
Overload and exhaustion: Classifying SNS discontinuance intentions
Abstract Social networking sites (SNS) have transformed communication systems; along with their positive effects, maladaptive usage of SNS brings some adverse outcomes too. The current study investigates the adverse impact of SNS usage. It focuses on social overload, information overload, and SNS exhaustion, resultants of maladaptive usage, which cause dissatisfaction and regret and influence users' continuation intentions. The Stressor-Strain-Outcome (SSO) framework is adopted in this study to investigate the antecedents of user intentions to discontinue SNS usage. In the proposed research model, stressors, strains, and outcome are empirically examined with data collected from 505 SNS users. Findings based on statistical analysis show that psychological and behavioral alterations caused by maladaptive usage force users to discontinue SNS usage due to dissatisfaction and regret caused by SNS usage. Excessive usage causes social overload, information overload, and SNS exhaustion, which lead to dissatisfaction and consequent regret that push the user to decide to discontinue SNS. This research work develops theoretical implications for future SNS-based work and puts forward practical suggestions for organizations using SNS, SNS users, and SNS service providers.
ABOUT THE AUTHORS

Muhammad Asim Nawaz is a PhD candidate in the School of Management at the University of Science and Technology of China. He received his Master's degree in Business Administration from the University of Central Punjab, Pakistan, and is affiliated with Lyallpur School of Management, GCUF, as a Lecturer.
Zakir Shah is a PhD student in the School of Humanities and Social Sciences at the University of Science and Technology of China. His current focus of research is social media and its role in disaster management.
Ali Nawaz and Junaid Raza are freelance lecturers of Management. Their current focus is on the integration of supply chains with the mediating role of social media.
Dr Fahad Asmi holds a postdoctoral position at the University of Science and Technology of China. His research work has been published in various well-recognized international journals.
Zameer Hassan is a PhD student in School of humanities and Social Sciences at the University of Science and Technology of China. His current focus of research is on social media and Disaster Management.
PUBLIC INTEREST STATEMENT
Excessive engagement with social media brings out negative perceptions of varying enormity, such as anxiety, depression, and boredom, causing behavioral alteration. Such adverse outcomes push the consumer away from the service with intentions to reduce usage or terminate it permanently. This behavioral outcome is mainly due to social overload (excessive social engagement with friends, peers, colleagues, and family), information overload (excessive exposure to undesired information), and SNS exhaustion (negative feelings due to excessive interaction with technology). Such maladaptive usage carries multiple cognitive and physical stresses for the user. These stressors cause dissatisfaction with the service and generate regret in users that later converts into the adverse behavioral response of discontinuance intentions. The current results validate the negative perception associated with social media due to excessive interaction. Moreover, this study has implications for service providers, general users, and organizations using SNS for different communicational purposes.
practical suggestion for organizations using SNS, SNS user, and SNS service providers.
Such trends have directed more interest toward the discontinuance behavior of SNS users, which is observed to have different determinants than continuance usage (Turel, 2014). The research work conducted by Maier, Laumer, Eckhardt, and Weitzel (2014) considered "discontinuance of usage" as a strategy to cope with the stress induced by SNS, due to factors such as information overload (Kefi & Kalika, 2015; Tarafdar, Gupta, & Turel, 2013; Zhang, Zhao, Lu, & Yang, 2016), social overload (Baum, Calesnick, Davis, & Gatchel, 1982; Maier, Laumer, Eckhardt, & Weitzel, 2012; Maier et al., 2014), and SNS exhaustion (Cao & Sun, 2018). These studies highlighted the negative side of excessive SNS usage, in which the stressful experience of SNS leads to discontinued usage. Yet many questions remain to be addressed (Berger, Klier, Klier, & Probst, 2014).
The user experience with any product or service is an essential determinant of intentions to continue using it. A positive experience results in satisfaction and leads to prolonged use (Chang, Liu, & Chen, 2014). In contrast, a negative experience results in regret, causing service switching, discontinuance intentions (Lemon, White, & Winer, 2002), lower satisfaction (Bui, Krishen, & Bates, 2009; Inman, Dyer, & Jia, 1997; Taylor, 1997; Tsiros & Mittal, 2000), and an adverse impact on reuse intentions (Tsiros & Mittal, 2000). The purpose of the current research is to identify the adverse effects of excessive SNS use and the resultant individual behavioral change (Amichai-Hamburger, Kingsbury, & Schneider, 2013), as social networking takes the place of other communication networks. SNS is used for posting private and personal messages but also serves as a marketing tool (Culnan, Mchugh, & Zubillaga, 2010), a recruitment source (Eckhardt, Laumer, & Weitzel, 2009), and a communication medium with stakeholders (Majchrzak, 2009). If exhaustion and overloads cause a discontinuance of SNS usage, it will lead to less user participation in social networks. Hence, a better understanding of SNS is essential to avoid the adverse impacts resulting from excessive usage (e.g. Barley, Meyerson, & Grodal, 2011).
Based on this gap in prior research, this study adopts the stressor-strain-outcome (SSO) model to examine the IS post-adoption outcome of regret in the context of SNS usage. The study addresses two research questions: 1) How do perceived overloads and exhaustion contribute to users' online regret and dissatisfaction with SNS? 2) How do regret and dissatisfaction affect SNS users' discontinuance intentions? The current research contributes to enhancing researchers' understanding of the online regret experience, a construct of growing interest that still leaves room for more empirical study. A further contribution is to examine regret from a post-adoption perspective, capturing the critical features of SNS users' cognitive experience and how it results in adverse outcomes for them. Finally, this study includes dissatisfaction and regret as distinct strains resulting from perceived overloads and exhaustion. This enhances the post-adoption IS literature on SNS by including more antecedents of discontinuance intention beyond dissatisfaction alone.
Stressor-strain-outcome model (SSO)
Based on the prior literature on technostress and information system (IS) discontinuation intentions, we adopt the SSO model to develop our framework (Ragu-Nathan, Tarafdar, Ragu-Nathan, & Tu, 2008). Stressors (S) represent the factors that generate stress for the SNS user; in the present study, social overload (Maier, Laumer, Weinert, & Weitzel, 2015), information overload (Krasnova, Spiekermann, Koroleva, & Hildebrand, 2010), and SNS exhaustion (Maier et al., 2014) are considered stressors. Strain (S) characterizes the psychological result of the stress experienced by the individual during maladaptive usage; we consider regret and dissatisfaction (Chen, Lu, Gupta, & Xiaolin, 2014) as strain factors. Finally, the outcome (O) refers to the behavioral result of the stressful situation; here we consider user discontinuance intentions as the outcome (Furneaux & Wade, 2010).
Regret and dissatisfaction
Regret experience in virtual communities has only recently started receiving researchers' attention. It is challenging to manage the potential receivers of information and content shared online, to control the audience and the spread of content, and to forecast others' reactions to one's recent online activity. SNS use has both good and bad aspects; for example, self-disclosure results in better social relationships and better quality of well-being (Valkenburg, Peter, Valkenburg, & Peter, 2009). At the same time, this information disclosure can result in embarrassment, social snubbing, and revictimization (Bellmore, Xu, Burchfiel, & Zhu, 2013). Most SNS users experience online regret (Madden, 2012). Users' disclosure of personal information online can result in feelings of regret later on (Moore & Mcelroy, 2012). This makes regret an important aspect to be discussed alongside the satisfaction level of SNS users. Satisfaction, a concept originating in the marketing literature, is the result of a comparison between expected and actual performance, whereas in regret the comparison is between the chosen option and forgone alternatives (Tsiros & Mittal, 2000). Many researchers discuss this distinction and note that regret relates to choices while satisfaction relates to outcomes.
The relationship of dissatisfaction with switching intentions is discussed by Zeelenberg and Pieters (2004) and Zeelenberg, Van Dijk, and Manstead (2000). Similarly, the relationship between regret and switching intentions is well discussed by Chang et al. (2014). The relationships between regret and dissatisfaction and between regret and discontinuance intentions, however, have not yet been examined. SNS users are expected to feel regret that might result in discontinuance intentions such as a reduction in use, a short break, or termination. This study covers the relationship between regret and dissatisfaction and also discusses regret as a determinant of discontinuance intentions.
Overloads and SNS exhaustion
Users engage with social media for entertainment, informational, and communication purposes, and such use can yield immediate gratification. This gratification can be accompanied by a weakened sense of volitional control and encourage continuous activity leading to excessive use (Thomée, Härenstam, & Hagberg, 2011). Excessive social use of SNS creates expectations obligating the user to respond to other users' demands; to do so, the user continuously visits SNS accounts, and this behavior exposes the user to an overwhelming volume of social demands, resulting in SNS exhaustion and in the physical and psychological strain called "social overload" (Maier et al., 2012). Similarly, growth in virtual relationships demands an increase in social support (Maier et al., 2015). This demand for social support results in negative psychological and behavioral consequences such as "SNS exhaustion." SNS platforms permit users to share loads of information on walls, profiles, and blogs, resulting in "information overload" (Eppler & Mengis, 2004). The current SSO framework examines these three stressors to study the regret and dissatisfaction influencing the behavioral intention to discontinue the use of SNS, as shown in Figure 1 below.
Conceptual framework and hypothesis
The conceptual framework incorporates the SSO framework.
Hypothesis
Social overload is described in terms of crowding: too many friend requests and messages to respond to, and the time and attention needed to respond and maintain social relationships in an ever-growing social circle (Maier et al., 2012; McCarthy & Saegert, 1978). Humans have a limited ability to maintain stable social relationships, estimated at 150 and known as Dunbar's number (Dunbar, 1992). Recent studies show this limit is exceeded by most users (Walther, Van Der Heide, Kim, Westerman, & Tong, 2008). Sociological research indicates that social overload results from unwanted social interaction and induces psychological distress (Evans & Lepore, 1993; Maier et al., 2014). Research in sociology also describes that, beyond a certain point, increasing density of the regional population affects residential satisfaction (Bonnes, Bonaiuto, & Ercolani, 1991; Machleit, Eroglu, & Mantel, 2000). Maier et al. (2012) studied psychological reactions to social overload in the context of SNSs and found that users experience adverse motivation from excessive virtual activities and consequently lower satisfaction. SNS users have social expectations of networks; when these expectations are not satisfied, it is anticipated that they might encounter lower satisfaction in the context of SNS. Thus, we propose this hypothesis: H1(a). Social overload has a positive influence on user dissatisfaction.
Overload arises when social activities in the virtual world exceed an individual user's ability to process the interactions and respond to them accordingly (McCarthy & Saegert, 1978). SNS users continuously visit profiles to manage their virtual personalities, updating status, replying to queries, and liking photos and messages. When this situation is compared with the social norms and bonds of the offline world, it demands that individuals look after friends and meet their demands accordingly (Koroleva et al., 2010). This social overload leads to a feeling of regret over wasted time and energy. Prior literature on user behavior suggests a relationship between SNS usage and SNS-based regret. Recent research empirically shows that negative emotions are more likely to occur when the expected outcome is not favorable. When users find SNS inferior to what was expected, they might experience more regret, as the outcome was unexpected. Social overload is an unexpected outcome leading to regret over the wasted opportunity for better utility. Hence, another hypothesis can be proposed to test this relationship: H1(b). Social overload has a positive influence on regret feeling.
People have a limited ability to process information, and when this boundary is surpassed, they experience information overload (Eppler & Mengis, 2004). Information overload can be discussed in terms of two variables, information processing capability and information processing requirement: when the second overruns the first, information overload arises, and prior research shows that this decreases information use (Lusk, 1993; Pennington & Tuttle, 2007). Information processing ability differs from individual to individual, so it is hard to set a standard for measuring information overload (Chen, Shang, & Kao, 2009). SNS-induced information overload results in emotional distress and dissatisfaction (Eppler & Mengis, 2004). Consequences of information overload are confusion and an inability to recall information and set priorities (Schick, Gordon, & Haka, 1990); it also leads to stress and anxiety (Eppler & Mengis, 2004). Psychology research states that psychological fatigue worsens users' ability to continue a task (Bartlett, 1953). Ravindran, Kuan, and Goh (2014) found in detailed qualitative interviews that people who face SNS fatigue were inclined to reduce their usage for a short period or abandon SNS usage. Based on these facts, we propose the following hypothesis: H2(a). Information overload is positively related to user dissatisfaction.
Users who feel regret complain more and have low intentions to re-experience the product or service (Keaveney, Huber, & Herrmann, 2007). The marketing literature has studied regret and concludes that the regret feeling might influence behavioral intentions that are not determined by satisfaction (Tsiros & Mittal, 2000). Messner and Waenke (2009) studied information overload and found that too much information sets users astray and makes it hard to make choices, which later results in regret. Studies show a positive relationship between satisfaction and repurchase intentions and a negative correlation between regret and reuse (Keaveney et al., 2007; Oliver, 1980). In short, the digital explosion of information is leading to information overload that has an adverse effect on human behavior and health (Jackson et al., 2008; Stokols, Misra, Runnerstrom, & Hipp, 2009). Both regret and information overload are considered to have a negative relationship with satisfaction, and satisfaction is observed to have a negative correlation with regret (Inman et al., 1997; Maier et al., 2012; Taylor, 1997); based on this argument, it is interesting to study the relationship between regret and information overload in SNS, so a new testable statement is proposed: H2(b). Information overload is negatively related to regret feeling.

SNS connects like-minded people from similar backgrounds, nationalities, and regions, and this social embeddedness benefits the user in terms of better social support (Ellison, Steinfield, & Lampe, 2007). At the same time, users are exposed to an increasing number of friends, news items, happenings, information, and situations demanding social support out of a sense of duty to respond to social requests (Maier et al., 2015). This phenomenon might lead to negative psychological and behavioral outcomes, leaving the user feeling tired and exhausted.
Users are expected to have a higher level of SNS exhaustion if they are tired of SNS usage; this feeling is the result of interpersonal relationships with other users online (Ayyagari, Grover, & Purvis, 2011). SNS exhaustion is the psychological and behavioral reaction to compulsive and maladaptive usage of social media leading to a lower level of satisfaction; this process reflects users' psychological reaction to the stress-creating situation triggered by SNS usage (Maier et al., 2015). These findings suggest that extensive usage driven by increased social demand and interpersonal interaction on SNS can result in SNS exhaustion, and this feeling of exhaustion can contribute to a lower level of satisfaction with SNS performance. To test this relationship, we develop the following hypothesis: H3(a). SNS exhaustion is positively related to user dissatisfaction.
The negative consequences of SNS exhaustion include tiredness, and this level of exhaustion can be higher when users are tired of SNS usage, a condition that arises from continuous interaction in interpersonal relationships (Ayyagari et al., 2011). These findings suggest that SNS exhaustion leads to adverse outcomes, and regret is likewise a negative emotion that results when forgone options are imagined to have been better (Yi & Baumgartner, 2004; Zeelenberg et al., 2000). Regret involves self-blame, the consideration that one made the wrong choice, and a desire to take corrective action (Roseman, Wiest, & Swartz, 1994; Zeelenberg, van Dijk, Manstead, & Der Pligt, 1998). Both regret and SNS exhaustion hurt end users; based on the above arguments, it is interesting to study the relationship between SNS exhaustion and regret generation in users, so another hypothesis is proposed: H3(b). SNS exhaustion is positively related to regret feeling.
When a product's or service's performance meets the user's expectations, satisfaction is confirmed (Mckinney, Yoon, & Mariam, 2002); the inverse results in dissatisfaction (Tsiros, 1998). Regret, by contrast, is the negative emotion that occurs when the user realizes the difference in performance between the current and forgone choices (Landman, 1987; Zeelenberg et al., 2000). When the performance of a chosen product fails to meet the user's expectations, dissatisfaction occurs, whereas regret occurs when a selected product performs worse than forgone choices (Keaveney et al., 2007). Similarly, Taylor (1997) found that regret has a significant influence on satisfaction level. In an experiment examining the relationship between regret and satisfaction, it was observed that regret influenced customers' satisfaction level (Inman et al., 1997). Regret and satisfaction are different concepts, but they can occur together (Tsiros, 1998; Tsiros & Mittal, 2000; Zeelenberg et al., 2000). To test this relationship between regret and dissatisfaction, we develop another hypothesis: H4(a). Feeling of regret is negatively associated with user dissatisfaction.
The misuse of social media leads to worse outcomes, and regret is one such outcome. A recent study reports that 29% of young adults posted job-related secrets and 74% of adults removed some material from their walls to avoid negative effects on the job (Kuegler, Smolnik, & Kane, 2015). Regret felt over online activities generates negative consequences, as users blame themselves for the situation and consequently develop feelings of guilt and embarrassment (Connolly & Zeelenberg, 2002). Recent studies observe that regret has a negative effect on continuance intentions toward online services. Experiencing online regret can result in lower satisfaction and a tendency to discontinue the service (Bui et al., 2009; Lemon et al., 2002). Online users experience regret, and this regret might develop into an intention to discontinue SNS services, so we propose another hypothesis: H4(b). Feeling of regret is positively associated with users' SNS discontinuance intentions.
Expectancy disconfirmation theory holds that users' continuation intentions are established by their level of satisfaction, which is determined by three factors: individuals' initial expectations, perceived performance, and perceived disconfirmation. User satisfaction with an adopted IS is a critical ingredient of IS-related research, as it is linked to continuous usage of IS (Bhattacherjee, 2001). Organizations invest considerable time, budget, and human resources to keep track of user satisfaction and simultaneously attempt to improve it (Islam, 2011). Satisfaction is considered the strongest determinant of users' intentions toward continuous usage (Bhattacherjee, 2001), and the same is perceived for its inverse, as dissatisfied users are more likely to develop discontinuance intentions (Bhattacherjee, Limayem, & Cheung, 2012). Hence, by this reasoning and the research stream presented, we propose the final hypothesis: H5. Lower user satisfaction is positively associated with users' SNS discontinuance intentions.
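The nine hypothesized relationships above can be summarized as a small signed path structure. The sketch below is illustrative only; construct names are ours, and the signs follow the hypothesis statements H1(a) through H5 as written (note that some signs differ from the coefficients later reported in the results):

```python
# Hypothesized paths of the SSO model, signs as stated in H1(a)-H5.
# Each entry: (source construct, target construct, hypothesized sign).
HYPOTHESES = {
    "H1a": ("social_overload", "dissatisfaction", "+"),
    "H1b": ("social_overload", "regret", "+"),
    "H2a": ("information_overload", "dissatisfaction", "+"),
    "H2b": ("information_overload", "regret", "-"),   # negative as stated
    "H3a": ("sns_exhaustion", "dissatisfaction", "+"),
    "H3b": ("sns_exhaustion", "regret", "+"),
    "H4a": ("regret", "dissatisfaction", "-"),        # negative as stated
    "H4b": ("regret", "discontinuance_intentions", "+"),
    "H5":  ("dissatisfaction", "discontinuance_intentions", "+"),
}

def paths_into(construct):
    """Return the hypothesis labels whose target is the given construct."""
    return [h for h, (_, tgt, _) in HYPOTHESES.items() if tgt == construct]

print(sorted(paths_into("regret")))  # ['H1b', 'H2b', 'H3b']
```

Laying the model out this way makes it easy to see that the three stressors feed both strains, while only the two strains feed the behavioral outcome.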
Data collection
The questionnaire was pre-tested with five IS researchers with experience in virtual communities. The back-translation method was used to translate the original English questionnaire into the local language (Urdu). A pilot study was conducted with 50 volunteer respondents who were active members of SNS; this study validated the results and measures for further data collection. Empirical data were collected through an online survey distributed by e-mail. A total of 568 questionnaires were received. The survey system filtered duplicate copies according to respondents' IP addresses, after which a total of 505 questionnaires remained and were regarded as valid. To assess non-response bias, we compared the means of all variables and demographics for initial and later submissions; the results of the T-test demonstrated that no substantial dissimilarity occurred. Table 1 summarizes the demographics: approximately 60% of the respondents were male and 40% female. The largest group of respondents was aged 20 to 30, with diverse educational backgrounds ranging from undergraduates to professional degree holders.
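The non-response bias check described above (comparing early and late respondents on each variable) is typically an independent-samples t-test. A minimal stdlib-only sketch follows; the Likert-scale scores below are invented for illustration, not data from the study:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / na + vb / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Invented 7-point Likert responses for one variable, early vs late waves.
early = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
late  = [4.0, 3.9, 4.3, 4.1, 3.8, 4.2, 4.4, 3.6]

t = welch_t(early, late)
print(round(t, 3))  # a small |t| suggests no early/late difference
```

In practice this would be repeated for every variable and demographic, with p-values taken from the t distribution with Welch-Satterthwaite degrees of freedom.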
Measurement model
To check the factor reliability of the scales, principal component analysis (PCA)-based factor analysis was conducted; six factors with eigenvalues greater than 1 were extracted, explaining 73.348% of the total variance. The item loadings on the expected factors were greater than 0.5 and showed no cross-loading complications, establishing good convergent and discriminant validity. High reliability of the scale was recorded, as all Cronbach's alpha values were higher than 0.70. Confirmatory factor analysis (CFA) was then conducted; Table 2 shows the results. Table 3 shows the average variance extracted (AVE) for each construct, all above 0.5, indicating decent convergent validity of the scale (Bagozzi & Youjae, 1988). The composite reliabilities (CRs) were all above 0.7, ensuring the scale's sound reliability (Nunnally, 1978). The correlations between the latent constructs are also given in Table 3. The diagonal elements show the square root of the AVE for the corresponding construct, and all were greater than the corresponding correlation coefficients with other constructs, indicating sound discriminant validity. To check for common method bias, Harman's one-factor test was conducted on the factors explaining 73.348% of the variance. The first factor accounted for 16.017% of the total variance, showing that no single factor accounted for the majority of the variance and ensuring that common method bias is not a considerable threat to the current research.
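The reliability and validity statistics used above have simple closed forms. The sketch below shows the standard formulas for Cronbach's alpha, AVE, and composite reliability; the factor loadings are invented for illustration (they are not the study's values), but the thresholds match those cited in the text (loadings > 0.5, alpha > 0.70, AVE > 0.5, CR > 0.7):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from rows of respondents' item scores."""
    k = len(item_scores[0])
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def ave_and_cr(loadings):
    """Average variance extracted and composite reliability from
    standardized loadings, with error variances taken as 1 - loading^2."""
    sq = [l * l for l in loadings]
    errors = [1 - s for s in sq]
    ave = sum(sq) / len(sq)
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(errors))
    return ave, cr

# Invented standardized loadings for a single four-item construct.
ave, cr = ave_and_cr([0.78, 0.82, 0.75, 0.80])
print(round(ave, 3), round(cr, 3))  # 0.621 0.867
```

For the Fornell-Larcker discriminant validity check described in the text, one would additionally compare `ave ** 0.5` for each construct against its correlations with the other constructs.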
In the measurement model, six constructs were assessed by CFA through AMOS. Goodness-of-fit indices were used to assess overall model fit. The overall CFA fit is acceptable (Hair, Anderson, Tatham, & Black, 1998). RMSEA is 0.061, less than the accepted cutoff of 0.10 (Anderson & Gerbing, 1988). CMIN/DF is 2.876, within the acceptable range. CFI and IFI are both 0.970, NFI is 0.954, and TLI is 0.965; all are above the 0.90 threshold (J. F. Hair, Black, Babin, & Anderson, 2010). Results are given in Table 4.
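The fit assessment above boils down to checking each index against a conventional cutoff. A small sketch of that logic follows; the cutoffs for RMSEA and the incremental indices are the ones cited in the text, while the CMIN/DF cutoff of 3 is a common convention and an assumption here:

```python
# Conventional cutoffs: RMSEA <= 0.10 (Anderson & Gerbing, 1988),
# CFI/IFI/NFI/TLI >= 0.90 (Hair et al., 2010); CMIN/DF <= 3 is assumed.
CUTOFFS = {"rmsea": (0.10, "max"), "cmin_df": (3.0, "max"),
           "cfi": (0.90, "min"), "ifi": (0.90, "min"),
           "nfi": (0.90, "min"), "tli": (0.90, "min")}

def fit_ok(indices):
    """Return the names of indices that fail their conventional cutoff."""
    failed = []
    for name, value in indices.items():
        cut, kind = CUTOFFS[name]
        bad = value > cut if kind == "max" else value < cut
        if bad:
            failed.append(name)
    return failed

# The CFA values reported for the measurement model.
cfa = {"rmsea": 0.061, "cmin_df": 2.876, "cfi": 0.970,
       "ifi": 0.970, "nfi": 0.954, "tli": 0.965}
print(fit_ok(cfa))  # [] -> every index is within its conventional range
```

The same check applied to the structural model values reported below (RMSEA 0.061, CMIN/DF 2.902, CFI/IFI 0.969, NFI 0.953, TLI 0.965) also returns an empty list.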
Structural model
The structural model was tested on the collected data with validity measures; SPSS was used for this purpose. All results were within the acceptable range. RMSEA is 0.061, within the accepted cutoff of 0.10 (Anderson & Gerbing, 1988). CMIN/DF is 2.902, within the acceptable range. CFI and IFI are both 0.969, NFI is 0.953, and TLI is 0.965; all are above the 0.90 threshold (J. F. Hair et al., 2010). Results are shown in Table 4. These results indicate an acceptable model, so we proceed to estimate the path coefficients.
The path coefficient calculation leads to significant results. The results indicate that social overload positively contributes to user dissatisfaction (H1(a): b = 0.239) and regret (H1(b): b = 0.125); hence, H1(a) and H1(b) are accepted. Information overload has a significant impact on user dissatisfaction (H2(a): b = 0.164) and regret (H2(b): b = 0.211), so H2(a) and H2(b) are confirmed. SNS exhaustion also has a positive impact on customer dissatisfaction, though it is third in line behind social overload and information overload (H3(a): b = 0.083), and it affects regret (H3(b): b = 0.273), so H3(a) and H3(b) are also accepted. Regret has a significant impact on users' discontinuance intentions (H4(b): b = 0.231). Moreover, regret contributes positively to user dissatisfaction (H4(a): b = 0.198), meaning that regret adds to the level of dissatisfaction and to its impact on later outcomes. Hence, both H4(a) and H4(b) are accepted, as shown in Table 5 below. Dissatisfaction positively contributes to users' discontinuance intentions (H5: b = 0.441). Figure 2 shows the hypothesis results. Regret and dissatisfaction together explain almost 53% of the variance. The model further shows that 24.5% of the variance is explained in dissatisfaction, 27.9% in regret, and 30.5% in the dependent variable (i.e. user discontinuance intentions). Control variables have a nonsignificant effect on discontinuance intentions, except gender. So, the hypothesized model is accepted (Figure 2).
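Because regret influences discontinuance intentions both directly (H4(b)) and through dissatisfaction (H4(a) then H5), its indirect effect can be sketched with the standard product-of-coefficients logic using the reported path values. This is our illustrative calculation, not one performed in the paper, and testing the significance of such an indirect effect would normally require bootstrapping:

```python
# Reported standardized path coefficients (Table 5 / Figure 2).
b_regret_dissat = 0.198   # H4(a): regret -> dissatisfaction
b_dissat_disc   = 0.441   # H5:    dissatisfaction -> discontinuance
b_regret_disc   = 0.231   # H4(b): regret -> discontinuance (direct)

# Product-of-coefficients sketch of regret's indirect and total effects
# on discontinuance intentions via dissatisfaction.
indirect = b_regret_dissat * b_dissat_disc
total = b_regret_disc + indirect
print(round(indirect, 3), round(total, 3))  # 0.087 0.318
```

The sketch illustrates the claim in the discussion that regret's overall influence on discontinuance intentions exceeds its direct path alone.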
Discussion
Based on SSO, the study investigates how SNS-based stressors (social overload, information overload, and SNS exhaustion) induce strains (dissatisfaction and regret) in users and how the resulting outcomes influence the SNS user. All three stressors had a positive influence on dissatisfaction and on the generation of regret in users, confirming recent findings on regret and dissatisfaction (Chang et al., 2014; Liao, Lin, Luo, & Chea, 2016). The current study investigated nine hypotheses, and the results were found significant. The results show that users with higher-intensity SNS use are more likely to discontinue, confirming recent findings (Zhang et al., 2016). This study also finds that social overload has the most significant impact on dissatisfaction, followed by information overload and SNS exhaustion. This differs from the recent findings of Cao and Sun (2018), who argue that excessive cognitive usage has the most significant impact, and from Lee, Son, and Kim (2016), who advised that system feature overload, information overload, and communication overload exert the same impact on strains. A possible explanation for this phenomenon is that SNS users have numbers of friends that surpass the norm (Cannarella & Spechler, 2014; Dunbar, 1992). The pattern is inverted in the case of regret: SNS exhaustion contributes the most to regret generation, followed by information overload and social overload.
Recent research on regret and satisfaction finds that regret contributes significantly to repurchase/reuse intentions in marketing (Liao et al., 2016). From an IS perspective, Liao, Liu, Liu, To, and Lin (2011) discussed how information quality disconfirmation, system quality disconfirmation, and service quality disconfirmation generate regret in customers regarding online purchasing and repurchasing in the post-adoption phase, but to the best of our knowledge, none has discussed regret together with information overload, social overload, and SNS exhaustion, so this is the first study of its kind.
Addressing this gap, the current study exhibits the effect of social overload, information overload, and SNS exhaustion on users' regret and dissatisfaction levels. The current conceptual framework presented some new and interesting findings. Social overload, followed by information overload and SNS exhaustion, had a significant influence on the generation of regret and dissatisfaction in SNS users. These findings differ from the recent work of Lee et al. (2016). Prior work shows that SNS users experience online regret (Dhir, Kaur, Chen, & Lonka, 2016) and that this experience leads to switching intentions and a negative impact on reuse intentions (Tsiros & Mittal, 2000). In contrast, the current SSO framework finds that information overload, social overload, and SNS exhaustion have significant effects on dissatisfaction and regret; moreover, regret has a direct impact on users' discontinuance intentions and an indirect impact through its contribution to dissatisfaction, which further strengthens users' intentions to discontinue usage. To conclude, besides dissatisfaction, regret also plays a vital role in influencing SNS users' discontinuance intentions; this is a significant finding that provides a fuller picture of online regret and its outcomes.
Theoretical and practical implication
The current study can be of potential interest to psychologists, psychiatrists, educators, researchers, and practitioners concerned with studying online regret. It contributes to a better understanding of the relationship between regret and dissatisfaction, much discussed in the marketing literature but largely ignored with regard to social media. Practitioners can also use this study to develop a better conceptualization of online regret, such as how SNS-based overloads and exhaustion develop into dissatisfaction and regret and, by strengthening dissatisfaction, lead to discontinuance intentions.
The current research is also useful for organizations using SNS platforms such as Facebook, Twitter, or Qzone to reach potential and current customers. Organizations using SNS platforms struggle to retain existing participants and to keep active participation alive on such platforms (Habibi, Laroche, & Richard, 2014). The present results show that social overload, information overload, and SNS exhaustion can contribute to feelings of online regret, so managers and administrators of SNS service providers should develop a better path forward to ensure that their users can participate actively without incurring online regret. For example, they can restrict the amount of data (megabytes) shared by an individual user each day, offer smart and easy-to-understand service filters, and provide a warning clock showing the continuity of online activity, which would help users realize their actual daily time of engagement with the SNS. Such limitations might assist in curtailing the regret experience among users and fulfilling the organizational goal of engaging the user in a better way for a more extended period.
Limitations and future research
The current study has certain limitations. First, the results are based on SNS users from a single country (Pakistan), representing a single geographical location and culture; this restricts the generalizability of the current findings to the entire population of SNS users. Second, the present work is general, and more specific studies should be done to develop a better understanding of the users of each SNS community, such as Twitter or Facebook. Third, self-reported and cross-sectional data are used, which might not be appropriate for causal studies because they might lead to common method bias.
Future researchers can investigate the effect of regret in other walks of life such as education, the work environment, or other real-life interactions. We used dissatisfaction and regret as strain variables; other stressors could also be used as mediating variables that influence users' discontinuance intentions. The "center of gravity" at which users start feeling dissatisfaction and regret should be further investigated in future research. More psychological and behavioral factors should be engaged in future studies. Since personality traits might influence regret-related aspects, personality-based variables could be used in the future to develop a better understanding of this phenomenon. Moreover, it will be important to see the impact of regret in terms of age and gender. Finally, future researchers should examine the occurrence of online regret in SNS-specific features such as picture tagging (Dhir, Chen, & Chen, 2017) and picture sharing (Malik, Dhir, & Nieminen, 2016), which are considered very popular among SNS users.
Funding
The authors received no direct funding for this research.
"year": 2018,
"sha1": "9094a8a80415cd4a70006f99c41d51e92c23c332",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311908.2018.1515584?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "9094a8a80415cd4a70006f99c41d51e92c23c332",
"s2fieldsofstudy": [
"Psychology",
"Computer Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Breastfeeding in Patients with Chronic Myeloid Leukaemia: Case Series with Measurements of Drug Concentrations in Maternal Milk and Literature Review
Breastfeeding in patients with chronic myeloid leukaemia (CML) during tyrosine kinase inhibitor (TKI) therapy is not recommended, but interruption of TKI treatment may cause the loss of remission. We studied 3 cases of pregnancy and breastfeeding in women with CML and observed that stopping treatment without a major molecular response may end in haematological relapse. The concentrations of nilotinib and imatinib in maternal milk were measured, and nilotinib distribution in human breast milk was demonstrated for the first time. The estimated maximal doses of imatinib and nilotinib which an infant may ingest with the maternal milk were less than the therapeutic doses. However, the unknown impact of low-dose chronic exposure to these TKIs in infants imposes limitations on their use during breastfeeding. Breastfeeding without TKI treatment may be safe with molecular monitoring, but preferably in those patients with CML who have a durable deep molecular response.
Introduction. Currently, patients with chronic myeloid leukaemia (CML) who achieve an optimal response to treatment with tyrosine kinase inhibitors (TKIs) have a high life expectancy, and therefore planning a family is a significant issue for them. 1,2 However, the TKIs used for CML treatment have been classified as Category D by the US Food and Drug Administration (FDA) due to their potential teratogenicity, and their use during pregnancy is not recommended unless treatment benefits outweigh potential risks. [3][4][5] It has been proved that the first-generation TKI imatinib is distributed into breast milk. [6][7][8][9][10] It is reasonable to suggest that the second- and third-generation TKIs used for CML treatment (nilotinib, dasatinib, bosutinib, ponatinib and radotinib) also distribute into maternal milk, but to the best of our knowledge this has not yet been demonstrated in humans. According to calculations made from the experimental data, the dose of imatinib which a child may ingest with the maternal milk is considerably lower than the therapeutic drug dose, since it corresponds to the plasmatic level. 7 However, the effects that even low doses of TKIs may cause in infants in the first months of life are unknown. Therefore, breastfeeding is not recommended for women who use these drugs. On the other hand, if a woman insists on breastfeeding, a delay in resuming TKI after labour may lead to loss of response to treatment. We aimed to describe the course of the disease in women with CML who were off-treatment during the breastfeeding period and to measure the concentrations of TKIs in breast milk when available.
Materials and Methods. Three women with Ph-positive CML in chronic phase (CP) were observed during the years 2014 to 2017. Two patients interrupted imatinib in order to conceive without TKI; one of them had in vitro fertilization. One woman conceived while taking nilotinib and stopped the drug immediately after pregnancy confirmation. The haematological and molecular responses of the patients were assessed every 4-6 weeks during the off-treatment period, or more often if required. The definitions of the haematological and molecular responses were in accordance with the European LeukaemiaNet (ELN) recommendations. 11 One patient resumed imatinib in the second trimester due to the loss of complete haematological response (CHR), and 2 patients were off-treatment until labour (Table 1). The pregnancy ended in childbirth in all 3 patients, and all 3 babies were healthy. The women insisted on breastfeeding their children and were observed without treatment during the breastfeeding period.
When the breastfeeding period came to an end, the patients were asked to collect breast milk samples after TKI intake. The patients took the same TKI they had before pregnancy and avoided breastfeeding during the sampling day. The time points for the milk sample collection were established as 1, 2, 4, 6, 8, 12 and 24 hours after the drug intake. The samples were stored at -20°C until evaluation. Quantitative detection of drug concentrations was done by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). All patients signed an appropriate informed consent for analysis of their biological samples and clinical data.
Results.
Molecular monitoring of BCR-ABL levels during pregnancy and breastfeeding. The molecular response and management of CML differed in each case (Table 1). In order to provide the details, we present a brief description of these cases. CML = chronic myeloid leukaemia; TKI = tyrosine kinase inhibitor; CHR = complete haematological response; MR2 = molecular response with BCR-ABL level <1%; MMR = major molecular response with BCR-ABL level <0.1%; DMR = deep molecular response with BCR-ABL level <0.01%, including undetectable level; IM = imatinib; NIL = nilotinib.
Case 1.
A 32-year-old woman with CML CP and a low Sokal score had achieved a CHR but no cytogenetic response after 6 months on imatinib 400 mg and was switched to nilotinib at a dose of 800 mg. The patient conceived after 3.8 years of nilotinib therapy and stopped the drug from the fourth week of gestation. The patient had a stable DMR for 2 years before pregnancy and during the whole pregnancy, with BCR-ABL levels less than 0.0032%. The median (Me) time interval between subsequent molecular tests during pregnancy was 7 weeks (from 5 to 9 weeks). The treatment-free period was prolonged in order to breastfeed, and it lasted for 19 months with no loss of DMR. The Me time interval of molecular monitoring during the breastfeeding period was 12 weeks (from 3 to 33 weeks). On the day breastfeeding ended, the patient took 400 mg nilotinib and samples of breast milk were collected. After that, the patient did not restart nilotinib and continued treatment-free observation with molecular monitoring. Molecular tests were done every 3-6 months. The DMR was maintained (Figure 1a).
Her total treatment-free period at the last follow-up was 37 months. The follow-up of the child for more than 2 years showed no developmental delay.
Case 2.
A 30-year-old woman with CML CP and a low Sokal score had been receiving treatment with imatinib at a dose of 400 mg for 7 years. A DMR was achieved, which was stable for more than 6 years, and the BCR-ABL level was undetectable with a PCR method sensitivity of >4.5 log.
The patient wished to become pregnant and stopped the drug intake. A pregnancy occurred after 5 months. At the onset of the pregnancy, the major molecular response (MMR) was lost and the BCR-ABL level was 0.11%. Further tests during pregnancy showed fluctuations of BCR-ABL levels between 0.1% and 0.35%. The Me time interval between molecular tests during pregnancy was 6 weeks (from 3 to 9 weeks). The patient insisted on breastfeeding, and the treatment-free period was extended. Two molecular tests were done during the breastfeeding period, with time intervals of 10 and 5 weeks. The last test showed a BCR-ABL level of 1.65% after nearly 3 months of breastfeeding. Breastfeeding was terminated and treatment with imatinib at a dose of 400 mg was resumed. The total duration of the treatment-free period for conception, pregnancy and breastfeeding was 18 months. The DMR was restored 4 months after restarting imatinib and remained stable for the following 2 years of follow-up. Molecular monitoring was done every 6 months after treatment resumption (Figure 1b). The child met the developmental milestones during 2.5 years of follow-up.
Case 3.
A 33-year-old woman with CML CP and a low Sokal score had received imatinib treatment for nearly 9 years before pregnancy. A first attempt to conceive was made after 1 year of imatinib 400 mg, when no MMR had been achieved and only a BCR-ABL level <1% was observed. The patient stopped taking imatinib and was switched to interferon alpha (IFN). No pregnancy took place, the BCR-ABL level increased to 35%, and the patient restarted treatment with imatinib. The dose of imatinib was increased to 600 mg and the patient continued this treatment for 6 years. A DMR was reached, but it was not stable and long-lasting. Two more attempts to conceive, with imatinib interruptions of 3-7 months, were made by the patient. The DMR was lost, the BCR-ABL level rose to 3%, and again no pregnancy occurred. The patient restarted treatment with imatinib at a dose of 400 mg (Figure 1c).
The last attempt to stop taking imatinib and to conceive with the help of in vitro fertilization was successful. The off-treatment period for conception lasted for 1 month and it was prolonged after pregnancy confirmation. The molecular test which was done at the 10th week of gestation (2.5 months after treatment was stopped) showed a BCR-ABL level of 65%.
The haematological relapse of CML, reflected by the loss of CHR, was observed after 1 month. The whole treatment-free period during conception/pregnancy lasted for 5 months. Imatinib at 400 mg was resumed in the second trimester, after the 16th week of gestation, as imatinib had high efficacy in this patient and has low placental transfer. 12 The CHR was restored in 3 weeks. The next molecular test during pregnancy was done 3 months after the administration of imatinib; the level of BCR-ABL was 5.16%. It was strongly recommended that the patient continue imatinib after labour. However, the patient interrupted treatment to breastfeed and resumed imatinib at a dose of 600 mg after 1 month. She maintained CHR, but nearly 3 months after delivery the level of BCR-ABL increased to 10%. No BCR-ABL mutations were found. The patient was switched to nilotinib at a dose of 800 mg and the MMR was achieved in 3 months (Figure 1d). The MMR remained stable during further observation. The recommended frequency of molecular monitoring every 3 months was not followed properly by the patient. The follow-up of the child for nearly 3 years showed no developmental delay and no growth retardation.
Concentration of imatinib and nilotinib in maternal breast milk. Four series of samples were analysed (Figure 2). In case 1, the patient received nilotinib at 400 mg; in case 2, the patient received imatinib at 400 mg; and in case 3, the patient received imatinib at 400 mg on day 1 and imatinib at 600 mg on the second day of milk-sample collection. One sample, 24 hours after nilotinib intake, was missed; the other samples were collected according to the schedule.
The maximum concentration (Cmax) of nilotinib in breast milk was 129 ng/ml 4 hours after drug intake in case 1. The Cmax of imatinib in breast milk at a dose of 400 mg was 1402 ng/ml after 4 hours and 420 ng/ml after 8 hours in cases 2 and 3, respectively. The Cmax of imatinib at a dose of 600 mg was 1411 ng/ml 6 hours after drug intake in case 3.
Discussion.
Lactation and breastfeeding are biological mechanisms that have been established in mammals, including humans, over years of evolution. Besides nutrition, the benefits of breastfeeding for the child include supporting the immune system and protection from infectious, autoimmune and other diseases. 13 The emotional perception of women regarding breastfeeding may be connected with psychological, social and cultural factors. 14 Mothers with CML may also ask whether they are permitted to breastfeed their children. It has been found that imatinib distributes into maternal milk, as does its active metabolite, the N-desmethyl derivative (CGP74588) (Table 2). The milk/plasma ratio for CGP74588 was higher than for imatinib: 0.9-3 vs 0.5. 7,9 The calculated maximal dose of imatinib plus CGP74588 that a child could take daily with the maternal milk was less than 3 mg. This dose corresponds to 0.75% of the standard maternal dose of 400 mg and is much lower than the lowest paediatric dose of imatinib of 260 mg/m² recommended for children with CML. 15 However, experience of imatinib use in the first year of an infant's life is very rare, as the median age of paediatric CML patients is nearly 12 years. 16 Some studies have reported impaired bone growth, growth hormone synthesis and vitamin D metabolism, resulting in growth retardation, in children with CML who received imatinib. 17,18 Nilotinib has only recently been approved for use in children with CML, and no extensive data can yet be taken from the paediatric population. Our concentration measurements of imatinib in maternal breast milk correspond with the drug levels described earlier (Table 2) and demonstrate inter-individual and dose-dependent variations (Figure 2). The concentration measurements of nilotinib in maternal milk described here are, to the best of our knowledge, the first in a woman taking a single dose of nilotinib 400 mg once a day. Nilotinib penetration into human breast milk is evident.
Based on our data, the estimated maximum daily dose which an infant may take is nearly 1 mg for imatinib and 0.1 mg for nilotinib, since the maximum daily milk intake is considered to be 1000 ml. Therefore, we deduce that the calculated doses of these TKIs which an infant may ingest with the maternal milk are less than the therapeutic doses. However, the unknown effects of low-dose chronic exposure to imatinib in infants in the first year of life and the lack of data on the durable impact of nilotinib on infants' development are the main concerns limiting the use of these TKIs during breastfeeding.
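The arithmetic behind these estimates can be sketched as a short back-of-the-envelope calculation (added for illustration, not part of the original analysis). It assumes the worst case of 1000 ml of milk per day at the measured maximum concentration, so the imatinib figure lands slightly above the paper's ~1 mg, which presumably reflects average rather than peak milk levels:

```python
# Worst-case estimate of the daily TKI dose an infant could ingest via
# breast milk, following the paper's assumption of 1000 ml/day intake
# and using the maximum milk concentrations reported in the Results.

MILK_INTAKE_ML = 1000  # assumed maximum daily milk intake

def infant_dose_mg(cmax_ng_per_ml, milk_ml=MILK_INTAKE_ML):
    """Convert a milk concentration (ng/ml) into a daily dose (mg)."""
    return cmax_ng_per_ml * milk_ml / 1e6  # ng -> mg

# Measured maximum milk concentrations (ng/ml) and maternal doses (mg)
drugs = {
    "nilotinib": {"cmax": 129, "maternal_dose": 400},
    "imatinib": {"cmax": 1402, "maternal_dose": 400},
}

for name, d in drugs.items():
    dose = infant_dose_mg(d["cmax"])
    rel = 100 * dose / d["maternal_dose"]  # percent of the maternal dose
    print(f"{name}: worst-case infant dose = {dose:.3f} mg "
          f"({rel:.3f}% of the maternal dose)")
```

Even this worst-case bound stays well below one percent of the maternal dose, which is the sense in which the ingested amounts are "less than the therapeutic doses."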
The key issue for treatment interruption during pregnancy or breastfeeding in patients with CML is the risk of disease progression. It has been demonstrated that treatment-free remission is safe in CML patients with a stable and long-lasting DMR, with a 40%-60% probability of maintaining an MMR without treatment. 19,20 Our case series represents different situations of leukaemic cell kinetics in CML patients without treatment, ranging from stable DMR to haematological relapse. Stopping treatment during breastfeeding may be dangerous in patients without DMR/MMR and lead to a further insufficient treatment response. Close molecular monitoring is needed for patients who extend the off-treatment period for breastfeeding. If the loss of MMR after treatment cessation is confirmed, breastfeeding needs to be terminated and TKI treatment should be restarted. We consider the recommendation of bottle feeding to be the safe choice. The recommendation to avoid TKIs and to breastfeed only for the short period of the first 2-5 days after labour, in order to give the child colostrum, 5 may be acceptable as well.
The women with CML who plan pregnancy should be aware of the risks of taking TKIs during breastfeeding as well as the risks of remission loss if the treatment is discontinued.
Effect of inter and intra row spacing on seed tuber yield and yield components of potato (Solanum tuberosum L.) at Ofla Woreda, Northern Ethiopia
Farmers in the southern zone of Tigray are using different spacings below or above the national recommendation, depending on whether the crop is planted for seed tubers or for consumption, due to the lack of a recommended plant spacing. This study was therefore conducted with the objective of determining the best inter- and intra-row spacing for optimum seed tuber yield and quality of potato at Ofla Woreda, Northern Ethiopia. Four different intra-row (20, 25, 30 and 35 cm) and inter-row (65, 70, 75 and 80 cm) spacings were used in the experiment. The results reveal that inter- and intra-row spacing significantly (p<0.001) affected seed tuber yield per hectare; the maximum seed tuber yields (36.89 and 37.54 t/ha) were recorded at 65 cm inter-row and 20 cm intra-row spacing, respectively. From this study, it can be concluded that the narrow spacing (20 cm intra- and 65 cm inter-row) produced a higher seed tuber yield per hectare than the other spacings. Thus, potato (Jalenie variety) growers in the study area can benefit if they use this narrow spacing.
INTRODUCTION
Potato (Solanum tuberosum L.) originated from the high Andes of South America and was first cultivated in the vicinity of Lake Titicaca near the present border of Peru and Bolivia (Horton, 1987). In terms of quantity produced and consumed worldwide, potato is the most important vegetable crop. It is one of the most important food crops in the world; it produces more energy and protein per unit area and unit of time than most other major food crops.
The potato crop was introduced to Ethiopia around 1858 by Schimper, a German botanist (Pankhurst, 1964). Among African countries, Ethiopia has possibly the greatest potential for potato production; 70% of its arable land, mainly in highland areas above 1500 m, is believed to be suitable for potato. Since the highlands are also home to almost 90% of Ethiopia's population, the potato could play a key role in ensuring national food security (FAO, 2008). However, the current area cropped with potato is about 0.16 million hectares and the national average yield is about 7.2 t/ha, which is very low compared to the world's average production of 16.8 t/ha (Adane et al., 2010). The crop yield in Ethiopia is lower than that of most potato-producing countries in Africa, like South Africa and Egypt, which produce 34.0 and 24.8 t/ha, respectively (FAO, 2008).
Many diverse and complex biotic, abiotic and human factors have contributed to the existing low productivity of potato.Some of the production constraints which have contributed to the limited production or expansion of potato in Ethiopia include shortages of good quality seed tubers of improved cultivars, disease and pests, and lack of appropriate agronomic practices including optimum plant density, planting date, soil moisture, row planting, depth of planting, ridging and fertility status (Berga et al., 1994).
The optimization of plant density is one of the most important subjects of potato production management, because it affects seed cost, plant development, yield and quality of the crop (Bussan et al., 2007). The yield of seed potato can be maximized at a higher plant population (closer spacing) or by regulating the number of stems per unit area and, to a certain extent, by removing the haulm earlier during maturity (O'Brien and Allen, 2009). Rahemi et al. (2005) reported that the effect of intra-row spacing on the yield of potatoes was significant, especially at 20 cm intra-row spacing, which showed a 36.85% yield increment compared to 30 cm intra-row spacing. An intra-row distance of 20 cm increased total tuber number and weight, and tuber weight per plant, and the marginal rate of return increased by 13% when intra-row distance decreased from 35 to 25 cm. EARO (2004) also determined that there is little difference in yield between intra-row spacings of 25 and 30 cm for all varieties released so far in Ethiopia, and the 30 cm intra-row and 75 cm inter-row spacing is accepted as the standard.
Farmers in the study area (Southern Zone of Tigray) are using different spacings below or above the national recommendation, depending on whether the crop is planted for consumption or for seed tubers, due to the lack of a recommended inter- and intra-row spacing. Hence, it is important to maintain an appropriate plant population per unit area to obtain high yield, marketable size and good quality seed tubers. Even though research on potato plant density has been done in different parts of the country, the question has not been studied in Ofla Woreda, Southern Zone of Tigray. This study was therefore conducted to determine the best inter- and intra-row spacing for optimum seed tuber yield and quality at Ofla Woreda, Northern Ethiopia.
MATERIALS AND METHODS
The experiment was conducted in 2011/2012 under irrigation in the Southern Zone of Tigray, Ofla Woreda, at Hashenge Kebele, on a farmer's field. The experimental site is located at an elevation of 2500 m above sea level. Maximum and minimum temperatures are 22.57 and 6.8°C, respectively. The mean annual rainfall of the area is 806.5 mm. The major soils include clay (28%), loam (57%) and sandy (15%) with a pH of 6.8 (BoARD, 2009).
The Woreda is classified into three agro-ecological zones, namely highland, midland and lowland. The midland covers the largest part, accounting for about 42% of the total 133,296 ha, while the highland and lowland each cover 29%. The average land holding in the Woreda is about 0.5 ha per household, and the estimated total population is 132,491 (BoARD, 2009).
Different local and improved potato varieties are grown in the area. Among the improved varieties, Jalenie is widely grown and has gained acceptance by farmers due to its high yielding ability and acceptability to consumers.
The experiment was laid out in a 4 x 4 factorial arrangement using a Randomized Complete Block Design (RCBD) with three replications and two factors, which consisted of four different intra-row spacings (20, 25, 30 and 35 cm) and four different inter-row spacings (65, 70, 75 and 80 cm). Each plot contained four rows, with plot sizes of 3.15 x 3.2, 3.15 x 3, 3.15 x 2.8 and 3.15 x 2.6 m and different numbers of plants per row: 15, 12, 10 and 9 plants for the 20, 25, 30 and 35 cm intra-row spacings, respectively. Footpaths of 0.5 and 1 m were left between plots and blocks, respectively.
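The 4 x 4 factorial above implies sixteen distinct plant populations. The sketch below (a simple area calculation added for illustration, not a computation from the paper) derives the population per hectare for each spacing combination:

```python
# Plant population per hectare implied by each inter-/intra-row spacing
# combination in the 4 x 4 factorial: 10,000 m^2 per hectare divided by
# the ground area (m^2) allotted to one plant.

inter_cm = [65, 70, 75, 80]
intra_cm = [20, 25, 30, 35]

def plants_per_ha(inter, intra):
    """Plant density (plants/ha) for spacings given in centimetres."""
    return 10_000 / ((inter / 100) * (intra / 100))

for inter in inter_cm:
    row = ", ".join(f"{intra} cm: {plants_per_ha(inter, intra):,.0f}"
                    for intra in intra_cm)
    print(f"inter-row {inter} cm -> {row}")
```

The densest treatment (65 x 20 cm, about 77,000 plants/ha) carries more than twice the population of the sparsest (80 x 35 cm, about 36,000 plants/ha), which is the density contrast behind the yield differences the study reports.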
The data collected at different growth stages were analyzed using SAS computer software version 9.0 (SAS Institute Inc., 2008).
Leaf area index
Intra-row spacing showed a very highly significant (P<0.001) effect on leaf area index. However, the effects of inter-row spacing and the interaction showed no significant difference in leaf area index (Figure 1). The results revealed that the highest leaf area index (3.21) was recorded at 20 cm intra-row spacing, which could be due to the high number of haulms per unit area, whereas the lowest leaf area index (2.32) was recorded at 35 cm intra-row spacing and was statistically different from the other three (30, 25 and 20 cm) intra-row spacings.
This result is in agreement with the findings of Ronald (2005) and Tamiru (2005) who reported that the highest density increased leaf area index, possibly indicating potential partitioning of assimilates for vegetative growth.
Total tuber seed yield (t/ha)
The effects of inter-row and intra-row spacing showed very highly significant (P<0.001) differences in total tuber yield per hectare (Table 1). However, the interaction effect was non-significant (P>0.05). The highest yield (36.89 t/ha) was obtained from 65 cm inter-row spacing, whereas the lowest yield (31.87 t/ha) was recorded at 80 cm inter-row spacing.
Regarding intra-row spacing, the highest total yield per hectare (37.54 t/ha) was obtained from 20 cm intra-row spacing. As intra-row spacing increased from 20 to 35 cm, total tuber yield decreased from 37.54 to 29.38 t/ha. The intra-row spacing of 35 cm gave the lowest total tuber yield (29.38 t/ha) and was significantly different from the other three levels. It was clearly evident from the results that seed tuber yield per hectare increased with decreasing plant spacing.
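The relative advantage of the closer spacings can be worked out directly from the reported means (simple percentage arithmetic on the values above, added for illustration):

```python
# Percent yield difference between the extreme spacing levels,
# computed from the mean total tuber yields (t/ha) reported in the text.

def pct_increase(high, low):
    """Relative increase of `high` over `low`, in percent."""
    return 100 * (high - low) / low

# intra-row: 20 cm vs 35 cm
intra = pct_increase(37.54, 29.38)  # about 27.8%
# inter-row: 65 cm vs 80 cm
inter = pct_increase(36.89, 31.87)  # about 15.8%

print(f"20 vs 35 cm intra-row: +{intra:.1f}% total tuber yield")
print(f"65 vs 80 cm inter-row: +{inter:.1f}% total tuber yield")
```

On these means, narrowing the intra-row spacing moved yield nearly twice as much as narrowing the inter-row spacing, consistent with the paper's emphasis on intra-row spacing as the dominant factor.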
The increased yield was attributed to more tubers produced at the higher plant population per hectare although average tuber size was decreased because of increased inter-plant competition at closely spaced plants leading to more unmarketable tuber yield.At closer spacing there is high number of plants per unit area which brings about an increased ground cover that enables more light interception, consequently influencing photosynthesis.It is therefore, very likely that substantial increases in rate of land coverage and thereby tuber yield could be achieved by dramatically increasing the stem density per unit area.
The present result agrees with the findings of Zabihi et al. (2011), who reported that plant density in potato affects some important plant traits such as total yield, tuber size distribution and tuber quality. An increase in plant density led to a decrease in mean tuber weight, but the number of tubers and the yield per unit area increased. In contrast, Berga et al. (1994) reported that a wider row width combined with a wider in-row distance (80 x 40 cm) gave the highest yield (34 t/ha) and the 60 x 20 cm treatment gave the lowest yield (22.2 t/ha).
Marketable seed tuber yield (t/ha)
The data concerning marketable yield as influenced by planting density are presented in Table 1. Inter- and intra-row spacing showed a very highly significant (P<0.001) effect on marketable yield. The maximum marketable yields (35.89 and 35.09 t/ha) were obtained at 20 cm intra-row and 65 cm inter-row spacing, respectively, while the lowest marketable yields (28.65 and 31.42 t/ha) were obtained at the wider spacings (35 cm intra-row and 80 cm inter-row spacing, respectively). However, the interaction effect did not show a significant difference in marketable yield per hectare. The highest marketable yield recorded at closer spacing is attributed to more tubers being produced at the higher plant population per hectare. The present result agrees with the findings of many authors (Stoffella and Bryan, 1988; Khalafalla, 2001) regarding the effect of plant density on the marketability of the crop. Close spacing of 15-25 cm was reported to give a better proportion of marketable yield than wider spacing of 35 cm.
Total number of tubers per hectare
The results for total number of tubers per hectare as influenced by inter- and intra-row spacing are presented in Table 2. Inter- and intra-row spacing very highly significantly (P<0.001) affected the total number of tubers per hectare. The maximum total number of tubers per hectare (532,865) was recorded at 65 cm inter-row spacing, while the lowest (447,586) was obtained at the wider (80 cm) inter-row spacing.
As far as intra-row spacing is concerned, the maximum total number of tubers per hectare (558,174) was obtained from 20 cm spacing, whereas the lowest (430,311) was obtained at 35 cm spacing. The total tuber number per hectare increased with closer spacing. The highest number of tubers at closer spacing is due to the high number of plants per unit area. Rahemi et al. (2005) reported that an intra-row distance of 20 cm increased total tuber number and weight per unit area.
Marketable seed tuber number per hectare
Marketable tuber number per hectare as influenced by inter-row and intra-row spacing is presented in Table 2. Inter- and intra-row spacing had a very highly significant (P<0.001) effect on marketable tuber number per hectare. However, the interaction effect had no significant (P>0.05) effect on marketable tuber number per hectare.
The maximum marketable tuber numbers (485,144 and 501,651) were obtained at 65 cm inter-row and 20 cm intra-row spacing, respectively; the result recorded at 20 cm intra-row spacing was significantly different from the other intra-row spacings. The lowest numbers of marketable tubers per hectare (411,315 and 395,106) were obtained at 80 cm inter-row and 35 cm intra-row spacing, respectively. Among the inter-row spacings, statistically the same results were obtained from 65 and 70 cm, which scored the highest marketable tuber numbers per hectare, 485,144 and 455,026, respectively. LSD (5%) = 6.89; CV (%) = 13.17. A related finding was reported by Burton (1989): wider spacing may produce few tubers, as it gives rise to few stems, which can lead to a high number of possibly misshapen tubers, while closer spacing improved quality and saleable yield.
Average fresh tuber weight (g)
Intra-row spacing showed a highly significant (P<0.01) difference in average fresh tuber weight per plant (Figure 2). However, the main effect of inter-row spacing and its interaction with intra-row spacing had no significant (P>0.05) effect on average fresh tuber weight. The maximum mean tuber weight (79.68 g) was recorded at 35 cm intra-row spacing but was not statistically different from 25 cm intra-row spacing. The smallest average fresh tuber weight (67.3 g) was recorded at 20 cm intra-row spacing; however, it was not significantly different from the 25 and 30 cm intra-row spacings (74.24 and 69.16 g, respectively).
The increase in density probably increased competition between and within plants and hence decreased the availability of nutrients to each plant, resulting in a decline in mean tuber weight. This result is in line with that of Ali (1997), who found higher average fruit weight at wider spacing compared to closer spacing. Berga and Caesar (1990) also reported that stem number per plant and tuber number per plant are positively related, but average tuber weight increased with wider spacing.
Tuber size category
Intra-row spacing had a highly significant (P<0.01) effect on the number of tubers graded less than 20 mm (Table 3). The maximum percentage (9.96%) of tubers less than 20 mm by number was recorded at an intra-row spacing of 20 cm; however, it was not significantly different from 25 cm intra-row spacing, while the lowest (6.629%) was at 35 cm. Intra-row spacing also showed a very highly significant (P<0.001) effect on the weight of tubers graded less than 20 mm. The maximum percentage (0.74%) of tubers less than 20 mm by weight was recorded at an intra-row spacing of 20 cm, significantly different from the other intra-row spacings. However, the effect of inter-row spacing and the interaction effect had no significant (P>0.05) effect on the number or weight of tubers graded less than 20 mm.
Intra-row spacing also showed a very highly significant (P<0.001) effect on tubers graded greater than 50 mm in terms of both number and weight. The maximum percentages of tubers greater than 50 mm (23.74% by number and 52.91% by weight) were recorded at 35 cm intra-row spacing, while the lowest (18.50% by number and 42.30% by weight) were recorded at 20 cm intra-row spacing. Inter-row spacing showed a highly significant (P<0.01) effect on the weight of tubers graded 30-40 mm.

The highest percentage of tubers graded 30-40 mm by weight (17.14%) was recorded at 65 cm inter-row spacing. The results of this investigation clearly indicated that the level of intra-row spacing largely affected potato tuber size distribution. Thus, based on market and consumers' demand, it is possible to produce either seed potato or ware potato of the required size through the selection of an appropriate planting density (intra-row spacing).
The present result is in agreement with the findings of Wiersema (1987), who reported that at higher stem densities the tubers produced remain smaller than at lower stem densities. Khajehpour (2006) also reported that an increase in plant density decreases mean tuber size, probably because of reduced plant nutrient availability, increased interspecies competition and the large number of tubers produced by high numbers of stems. Generally, the results of this study indicate that tuber size category is influenced mainly by intra-row rather than inter-row spacing.
Summary and conclusion
The results of this study demonstrated that yield per unit area is influenced by the different levels of inter- and intra-row spacing. From this study, it can be concluded that the narrow spacing (20 cm intra- and 65 cm inter-row) produced higher seed tuber yield and marketable yield per hectare than the other spacings. Thus, potato (Jalenie variety) growers in the study area (southern zone of Tigray) can benefit if they use this narrow spacing.
Figure 1. Means for the effect of intra-row spacing on leaf area index.
Figure 2. Means for the effect of intra-row spacing on average fresh tuber weight.
Table 1. Means for the effect of inter- and intra-row spacing on total tuber yield and marketable seed tuber yield per hectare.
Table 2. Means for the effect of inter- and intra-row spacing on total and marketable seed tuber number per hectare. Means followed by the same letter within the same column are not significantly different at the 5% level of significance.
Table 3. Means for the effect of intra-row spacing on tuber size category. Means followed by the same letter within the same column are not significantly different at the 5% level of significance.
The relationship between psychological wellbeing and body image in pregnant women
Background: The aim of the present study was to determine the association between body image and psychological wellbeing during pregnancy. Materials and Methods: This descriptive correlational study was conducted on 320 pregnant women who were referred to health centers in Isfahan, Iran, during 2016 and met the inclusion criteria. They were selected by nonprobability convenience sampling. Data were gathered using standard psychological wellbeing and body image satisfaction questionnaires. The data were analyzed using Statistical Package for the Social Sciences software by descriptive and inferential statistical methods. Results: The results showed that the mean (SD) score of psychological wellbeing among participants was 77.50 (10.10) and their mean (SD) score of satisfaction with body image was 89.30 (14.60). Moreover, the results revealed a positive and significant relationship between the scores of psychological wellbeing and body image satisfaction (r = 0.354, p < 0.001). The results of regression analysis showed that the two variables of self-acceptance (t = 5.6, p < 0.001) and personal growth (t = 2.06, p = 0.04) can predict body image in pregnant women. Conclusions: The findings revealed a significant positive relationship between body image satisfaction and psychological wellbeing. Therefore, training a positive attitude toward body image or increasing the level of knowledge on psychological wellbeing can create a positive cycle between these variables and thus make the pregnancy more enjoyable and acceptable.
Introduction
Psychological wellbeing, as one of the important subjects in the present day, is the center of attention of many societies. [1] Positive psychology has defined mental health as positive psychological functioning and has conceptualized it in the form of the phrase "psychological wellbeing." [2] In this novel approach toward psychology, contrary to the traditional approach which defines health as lack of an illness, adaptability, happiness, self-confidence, and other such positive characteristics are signs of health, and an individual's main goal in life is the development of their capabilities. [3] In this respect, models have been provided that view individuals from a positive perspective. Carol Ryff's six-factor model of psychological wellbeing is one of the most important models in the field of psychological wellbeing. Ryff has defined psychological wellbeing as an endeavor for perfection in realizing the real potential abilities of an individual. [4] Ryff has stated that wellbeing is multidimensional and consists of the dimensions of autonomy, personal growth, environmental mastery, purpose in life, positive relations with others, and self-acceptance. [5] These six factors define psychological wellbeing both theoretically and practically. [6] The results of previous studies show that psychological wellbeing and its components have varying status in different stages of life and in relation to demographic characteristics, and various factors can impact the psychological wellbeing level of individuals. [7] The experience of pregnancy, with its accompanying profound physical and mental changes in women's life, affects all dimensions of life including psychological wellbeing. 
[8] Researchers believe that pregnancy, in addition to disruption in psychological wellbeing and mental health, creates the basis for stress, anxiety, and depression during and after pregnancy and future emotional disorders in the child, and increases the risk of behavioral issues in early childhood. [9]
Among the characteristics that undergo change during this period are physical appearance and body image. Studies in this respect have shown that many women undergo substantial changes in weight, body shape satisfaction, and eating habits during pregnancy and the period after childbirth, and these changes can have positive or negative effects on the health of the mother and fetus. [10] Today, being slim is considered one of the criteria of beauty and sexual attraction for some women; [11] therefore, increased weight and changes in appearance during pregnancy can result in body image dissatisfaction. The results of the study by Garrusi et al. [12] on 255 pregnant women showed that 48.7% of subjects were dissatisfied with their body image and that there is a positive and significant relationship between body image dissatisfaction and depression. In addition, Wilson et al. [13] stated that body image dissatisfaction can have consequences such as anxiety, depression, social isolation, and weakened self-concept and self-esteem. On the contrary, Dotse, [14] in a study on 100 individuals (56 women and 44 men) with an age range of 12-50 years, found a positive and significant correlation between body image satisfaction and psychological wellbeing.
The majority of previous studies, especially in Iran, have evaluated the relationship of body image satisfaction or dissatisfaction with the negative aspects of mental health, such as depression, or the relationship of body image with psychological wellbeing in individuals with high body mass index (BMI). Thus, the question arises as to whether there is a relationship between body image and psychological wellbeing during pregnancy, which is accompanied with rapid changes in weight and body image. Therefore, the present study was conducted to determine the relationship between body image and psychological wellbeing in pregnant women referring to health centers in Isfahan, Iran.
Materials and Methods
This descriptive, correlational study was performed on 320 pregnant women referred to nine health centers selected from among the 46 health centers in Isfahan using a nonrandom quota sampling method (based on the number of referrals). The subjects were selected through nonprobability convenience sampling from among individuals who met the inclusion criteria. Sampling was conducted from April to August 2016. The sample size was calculated based on Z₁ = 1.96, Z₂ = 1.24, and r with a minimum absolute value of 0.2 (an estimate of the correlation coefficient between psychological wellbeing score and the other variables). Inclusion criteria consisted of being an Iranian pregnant woman of any gestational age covered by the health centers, lack of any diagnosed mental disorders such as depression and bipolar disorder, and no smoking or use of sedatives. Data were collected using Ryff's Psychological Wellbeing (PWB) scale and the Body Image Satisfaction scale.
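The sample size parameters quoted above correspond to the standard Fisher z-transformation formula for detecting a target correlation coefficient. The sketch below reproduces that calculation under this assumption (the function name is ours, not the paper's):

```python
import math

def sample_size_for_correlation(z_alpha, z_beta, r):
    """Minimum n to detect correlation r, via the Fisher z-transformation."""
    c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of the target correlation
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# With the values reported in the paper (Z1 = 1.96, Z2 = 1.24, r = 0.2):
print(sample_size_for_correlation(1.96, 1.24, 0.2))  # -> 253
```

The minimum of about 253 is consistent with the 320 women actually enrolled, which leaves a margin for nonresponse.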
The PWB scale consists of 18 items scored based on a five-point Likert scale and evaluates the six components of psychological wellbeing (autonomy, personal growth, environmental mastery, purpose in life, positive relations with others, and self-acceptance). The internal consistency of this scale was calculated using Cronbach's alpha in a study by Shahidi et al. [15] and was reported as 0.72, 0.73, 0.76, 0.52, 0.75, and 0.51 for the components of autonomy, personal growth, environmental mastery, purpose in life, positive relationships with others, and self-acceptance, respectively, and as 0.71 for the whole scale.
The Body Image Rating scale consists of 22 items that assess the satisfaction or dissatisfaction of the individual with her/his body. This scale was designed by Souto and Garcia in 2002. [16] The items are scored based on a five-point Likert scale ranging from never to always. The validity and reliability of this scale were approved in the study by Taheri Torbati et al. [17] Independent t-test results showed that this scale has good differential validity and can differentiate between the two groups of good and bad body image (p < 0.001). Moreover, Cronbach's alpha showed the acceptable reliability of this scale (α = 0.91). [17] The collected data were analyzed using descriptive (mean and standard deviation, and frequency distribution) and inferential statistics (Pearson correlation coefficient) using Statistical Package for the Social Sciences software (version 16, SPSS Inc., Chicago, IL, USA).
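Cronbach's alpha, used above to report scale reliability, is the ratio of shared item variance to total-score variance. A minimal self-contained sketch (the respondent data are illustrative toy values, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix."""
    k = len(items[0])                               # number of items
    item_vars = [pvariance(col) for col in zip(*items)]
    total_var = pvariance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents answering 3 Likert items (illustrative only).
scores = [
    [3, 4, 3],
    [5, 5, 4],
    [1, 2, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(scores), 2))  # -> 0.93
```

Values near 0.9, like the 0.91 reported for the Body Image scale, indicate high internal consistency among the items.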
Ethical considerations
To observe ethical principles, before beginning the study, written informed consent forms were obtained from the participants and they were assured of the confidentiality of their information and that the results will not be analyzed individually and the participants' personal information will be protected.
Results
The mean (SD) psychological wellbeing and body image satisfaction scores of the participants were, respectively, 77.50 (10.10) and 79.30 (14.60). Furthermore, the results suggested a significant positive relationship between psychological wellbeing score and body image satisfaction score (r = 0.354, p < 0.001). In addition, the results illustrated that body image score had a direct relationship with the components of self-acceptance (r = 0.40; p < 0.001), positive relationships with others (r = 0.20; p < 0.001), environmental mastery (r = 0.19; p < 0.001), purpose in life (r = 0.14; p = 0.01), and personal growth (r = 0.27; p < 0.001). However, it did not have a significant relationship with the component of autonomy (p = 0.74).
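The coefficients above are Pearson product-moment correlations between the two score sets. For clarity, a self-contained sketch of the computation, using illustrative data only (not the study's raw scores):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative wellbeing and body image scores (toy values):
wellbeing = [70, 75, 80, 85, 90]
body_image = [72, 74, 83, 84, 95]
print(round(pearson_r(wellbeing, body_image), 3))  # -> 0.964
```

A value of r = 0.354, as reported, indicates a moderate positive linear association rather than the near-perfect one of this toy example.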
Multiple linear regression results also showed that, among the scores of the components of psychological wellbeing, self-acceptance and personal growth were, respectively, the best predictors of the body image score, and the scores of the other components were not significant predictors of body image [ Table 1].
Discussion
The aim of the present study was to determine the relationship between psychological wellbeing and body image in pregnant women referred to health centers in Isfahan. The results of the Pearson correlation coefficient showed a significant positive relationship between psychological wellbeing and body image. The results of this study regarding the relationship between psychological wellbeing and body image are in agreement with those of the studies by Dotse, [14] Jane Sabik, [18] and Winefiel et al. [19] Ojha and Kumar [20] conducted a study on 223 students and found that body image dissatisfaction reduces individuals' happiness, while body image satisfaction increases self-esteem, which in turn increases psychological wellbeing. Their findings were in accordance with those of the present study. Based on the findings of Asgari and Shabaki, [21] body image is an essential element of individuals' personality and self-concept which impacts their mental life and views. This image can be positive or negative, can impact the psychological wellbeing of individuals, and can become a source of positive or negative emotions, and thus affect individuals' quality of life (QOL). High inconsistency in body image impacts social and marital relations, daily activities, interpersonal communication, and familial relationships, which are effective components of QOL. [21] Conversely, individuals with higher psychological wellbeing are more satisfied with their body image. Furthermore, in individuals with a purpose in life and positive feelings toward themselves and their future, body image dissatisfaction has little effect on their experiences, goals, and values. [22] Women pay more attention to their bodies than men do and are more strongly influenced by their body image. This issue is more pronounced in pregnant women.
Pregnancy is a challenge of psychological adjustment, and achieving the desirable weight gain that ensures the health of the fetus can be affected by body image satisfaction. During the 40 weeks of pregnancy, the body of the mother changes drastically. [12] These rapid changes may cause the mother to reevaluate her body image. She may have a positive view of these changes, consider them natural consequences of the pregnancy, and, given their transience, remain satisfied with her body image. Nevertheless, these rapid changes may cause a negative body image, and consequently, reduced self-esteem, self-belief, self-acceptance, and self-worth in some mothers. A positive body image in pregnant women increases their self-confidence and, through the creation of positive emotions, increases their positive relations with others, self-acceptance, environmental mastery, and purposefulness, and thus results in increased psychological wellbeing.
The results of the present study also illustrated the lack of a significant relationship between body image score and the score of the component of autonomy. This finding was in agreement with that of the study by Chung. [23] It seems that today individuals consider employment and increasing of skills as strategies to gain autonomy, and body image satisfaction plays a more subtle role in women's independence.
The results of multiple regression analysis showed that, among the scores of the components of psychological wellbeing, self-acceptance and personal growth scores were the best predictors of body image. Chung found that self-acceptance and environmental mastery were the strongest predictors of body image. [23] These findings are in agreement with that of the present study in terms of the variable of self-acceptance, but are not in accordance with the present study in terms of the variable of environmental mastery. Pregnancy conditions, the feeling of motherhood, and presence of individuals who continually support the mother may cause her to have less need for environmental mastery during pregnancy.
In explanation of the results of the present study, it can be stated that self-acceptance and personal growth are effective components that can create the basis for other components; therefore, it can be concluded that an individual who has accepted her/himself and has achieved personal growth may be better able to connect with others and lead a purposeful life. Self-acceptance is a perception that provides individuals with awareness of their strengths and weaknesses and a realistic view of their abilities, through which and by the development and improvement of their activities, they can achieve a positive view of themselves. [14] Pregnancy is most often considered as a strength which will result in the reinforcement of the sense of femininity and self-acceptance [24] that can result in body image satisfaction.
Personal growth represents the individual's constant participation in activities and resolution of issues in order to expand his/her abilities. [14] It seems that individuals, who have achieved personal growth and a clear view of themselves, have a high mental performance and have achieved growth in different aspects of life such as pregnancy. [25] These individuals have a better view of themselves, are less concerned with changes in their appearance, and are satisfied with their body image. The majority of the participants in the present study considered the changes in their appearance as part of the process of pregnancy. They spoke of God's wisdom, accepted themselves as a pregnant individual, and were satisfied with their physical appearance. In Iran, both in terms of culture and religion, pregnancy is considered as a holy and precious period and this may increase the body image satisfaction and psychological wellbeing of pregnant women. The present study is a starting point for the performance of more comprehensive and practical studies. Moreover, the performance of studies with the aim to analyze and assess the different aspects of body image and the effects of negative body image during pregnancy and after childbirth seems necessary. The limitation of this study was that data on the mothers' medical history were obtained through asking the participants themselves and there may have been cases that the women were unaware of.
Conclusion
It can be concluded that body image is one of the factors related to psychological wellbeing during pregnancy. Hence, it is recommended that this issue be taken into consideration in the provision of pregnancy care and planning for the improvement of the mental health of pregnant women. Moreover, for the creation and maintenance of a positive body image during pregnancy, women must be provided with information on physical changes during pregnancy and after childbirth in order to help them better accept changes in their body and prevent the formation of unrealistic expectations during pregnancy and after childbirth.
"year": 2018,
"sha1": "61a3048ed57d5c50d20881cd7eea7010b3744b95",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijnmr.ijnmr_178_16",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "ddfd63fa54c1c49c5d72d04006558d039c88a112",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Clinical and morphological profile of aneurysms of the anterior communicating artery treated at a neurosurgical service in Southern Brazil
Background: The aim of the study was to characterize the clinical profile of patients with anterior communicating artery (ACoA) aneurysms and examine potential correlations between clinical findings, aneurysm morphology, and outcome. Methods: A review of medical records and diagnostic neuroimaging reports of patients treated at a neurosurgical service in Porto Alegre, Brazil, between August 2008 and January 2015 was performed. Results: During the period, 100 patients underwent surgery for ACoA aneurysms. Fifteen had unruptured aneurysms and 85 had ruptured aneurysms. Ruptured aneurysms had a higher aspect ratio than unruptured ones (2.37 ± 0.71 vs. 1.93 ± 0.51, P = 0.02). Intraoperative rupture occurred in 3%, and temporary clipping was performed in 15%. Clinical vasospasm occurred in 43 patients with ruptured aneurysms (50.6%). Overall, mortality was 26%; 25 patients in the ruptured group (29.4%) and one in the unruptured group (6%). The Glasgow Outcome Scale (GOS) was favorable (GOS 4 or 5) in 54% of patients, significantly more so in those with unruptured aneurysms (P = 0.01). In patients with ruptured aneurysms, mortality was associated with preoperative Hunt and Hess (HH) score (P < 0.001), hydrocephalus (P < 0.001), and clinical complications (P < 0.001). Unfavorable outcomes were associated with HH score (P < 0.001), Fisher grade (P = 0.015), clinical vasospasm (P = 0.012), external ventricular drain (P = 0.015), hydrocephalus (P < 0.001), and presence of clinical complications (P = 0.001). In patients with unruptured aneurysms, presence of clinical complications was the only factor associated with mortality (P < 0.001). Conclusion: Despite advances in the management of subarachnoid hemorrhage and surgical treatment of aneurysms, mortality is still high, especially due to clinical complications.
INTRODUCTION
Intracranial aneurysms (IAs) are present in 2%-5% of the population [9,35] and are more prevalent in women and individuals over the age of 30. [35] Anterior communicating artery (ACoA) aneurysms are the most frequent in several series [13,16] and are, according to some studies, those most likely to rupture. [16,23] Subarachnoid hemorrhage due to rupture of an IA is an extremely serious event, with a mortality rate reaching 25% and permanent sequelae occurring in up to half of those who survive. [6] When considering only subarachnoid hemorrhage due to ACoA aneurysm rupture, mortality can be even higher. [16] Several factors are related to an unfavorable outcome, such as age, large aneurysm size, Fisher grade, and poor neurological status. [28,31] The ACoA complex often exhibits anatomical variations, such as asymmetry of the A1 segment, lateral rotation of the complex, ACoA aplasia, and hypoplasia. Aneurysms usually arise at the junction of A1 with the ACoA. Due to their multiple vascular relationships, deep location, and frequent anatomical variations, they are considered complex aneurysms. [1,13] Surgery of ACoA aneurysms is usually performed through the pterional approach, [40] which provides direct visualization of the aneurysm while minimizing the necessary cerebral retraction. There is no consensus about the best therapeutic modality for ACoA aneurysms, [25,32] and no definitive evidence favors either technique (surgical or endovascular) in terms of short- and long-term results. [32] Within this context, the aim of the present study was to characterize the clinical and morphological profile of ACoA aneurysms treated surgically at Hospital Beneficência Portuguesa de Porto Alegre, Brazil, from August 2008 to January 2015. We present the clinical data of this group of patients and the morphological features of the aneurysms, and we correlate these data with clinical outcome in light of the existing literature.
METHODS
This was a retrospective chart review study of patients with ACoA aneurysms who underwent microsurgical treatment by physicians of the Department of Neurosurgery, Hospital Beneficência Portuguesa de Porto Alegre (Dr. Mario Coutinho Neurosurgical Service), Brazil, from August 2008 to January 2015. Only those patients in whom the aneurysm diagnosis was established or confirmed by digital angiography or computed tomography (CT) angiography before the intervention were considered. Patients who underwent endovascular treatment were not included in the sample. Ethics committee approval was obtained before data collection (CAAE 79257717.9.1001.5327).
Demographic data (age and sex) and clinical information (risk factors, craniotomy side, and presence of postoperative complications) were obtained from medical records. In patients with ruptured aneurysms, additional data were obtained: initial symptoms, Hunt and Hess (HH) classification at admission and at the time of the procedure, and Glasgow Coma Score (GCS). Clinical vasospasm was defined as the late onset of neurological deficits, including a GCS decline of two or more points, with no other attributable cause, such as fluid-electrolyte disturbance, hydrocephalus, or ventriculitis. The clinical outcome at discharge was assessed using the Glasgow Outcome Scale (GOS). The following neuroimaging data were obtained through direct analysis of patient scans: aneurysm dome direction in the coronal plane, aneurysm size, aneurysm neck size, presence of A1 dominance, presence of multiple aneurysms, and presence of preferential angiographic filling. The aspect ratio (AR) (i.e., the ratio of aneurysm size to neck size) was calculated on the basis of the aforementioned data.
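As defined above, the aspect ratio is simply the dome size divided by the neck size. The sketch below (helper name is ours) applies it to the group means reported later in this series; note that the ratio of group means generally differs slightly from the mean of per-patient ratios, which is what the paper reports:

```python
def aspect_ratio(dome_mm, neck_mm):
    """Aneurysm aspect ratio: maximum dome size divided by neck size."""
    return dome_mm / neck_mm

# Ratios of the reported group means (per-patient mean ARs differ slightly):
print(round(aspect_ratio(5.32, 2.31), 2))  # ruptured group means
print(round(aspect_ratio(4.79, 2.69), 2))  # unruptured group means
```

Either way, the ruptured group shows the higher ratio, consistent with the AR's role as a rupture-risk marker discussed below.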
CT angiography images were obtained in a GE Brightspeed CT scanner, using a specific thin-slice protocol (0.625 mm). Digital angiography images were obtained with GE OEC 9800 Series and Novomédica Radius S/R C-arms, using a bilateral three-view protocol (anteroposterior, lateral, and oblique) for the anterior circulation.
Statistical analysis
Quantitative variables were described as mean and standard deviation or median and interquartile range as appropriate. Categorical variables were expressed as absolute and relative frequencies.
Student's t-test or the Mann-Whitney U test (in case of asymmetrically distributed data) was used to compare means. Pearson's Chi-square test or Fisher's exact test was used to compare proportions. For polytomous variables, a supplemental analysis using adjusted residuals was performed as well.
To control for confounding factors, Poisson regression analysis was carried out to evaluate factors independently associated with mortality and unfavorable outcomes. The level of significance was set at 5% (P ≤ 0.05), and all analyses were performed using SPSS, version 23.0.
RESULTS
From August 2008 to January 2015, 100 patients with ACoA aneurysms underwent surgery at the study facility. The mean age was 53.1 ± 12.1 years. On average, patients with unruptured aneurysms were older (ruptured = 51.4 years and unruptured = 62.3 years). There was a slight female predominance (43 men and 57 women). Among the 100 patients, 85 had ruptured aneurysms and 15 had unruptured aneurysms. Of those with ruptured aneurysms, most had an HH score of 1 or 2 (HH1/2 = 44, HH3 = 32, HH4 = 10, and HH5 = 0). Detailed demographic data are described in Table 1.
During the study period, only three patients with ACoA aneurysms underwent endovascular treatment: two unruptured and one ruptured aneurysm (HH4). There were no deaths. The GOS was 5 for the two patients with unruptured aneurysms and 3 for the patient with the ruptured aneurysm.
The morphological features of the aneurysms are described in Table 2.
On average, ruptured aneurysms were larger than unruptured ones (5.32 ± 1.96 mm vs. 4.79 ± 0.97 mm), but the difference was not statistically significant (P = 0.49). There was also no significant difference in neck size between ruptured and unruptured aneurysms (2.31 ± 0.79 vs. 2.69 ± 0.97, P = 0.11). Surgical intervention was performed most often 4 days after the hemorrhagic stroke (range, 2-6 days). In all cases, access was obtained through pterional craniotomy (left-sided in 54 cases and right-sided in the remaining 46). The laterality of the approach was defined by preferential angiographic filling. In cases with no evidence of preferential filling, craniotomy was performed contralateral to the projection of the aneurysm dome. In patients with multiple aneurysms, craniotomy was performed on the side that would allow access to the largest number of lesions. Symmetrically filling single aneurysms with no lateral projection were approached from the right. Hydrocephalus was present in 43 patients (43%).
Intraoperative aneurysm rupture occurred in 3% of cases, and temporary clipping was performed in 15%.
The mean duration of temporary clipping was 115 s. An external ventricular drain was placed at some point during hospitalization (either at admission or intraoperatively) in 37% of patients. Patients with mild ventricular enlargement and a normal level of consciousness (HH score 1 or 2) were treated with daily therapeutic lumbar puncture and cerebrospinal fluid (CSF) manometry per routine hospital protocol, obviating the need for ventriculostomy.
Clinical vasospasm occurred in 43 patients (43%). Clinical complications occurred in 41% of patients and are listed in Table 3.
In the unruptured group, the only death occurred due to pulmonary thromboembolism on the third postoperative day, which occurred despite routine prophylactic measures.
Regarding clinical outcome, information on the GOS was available for 98 of the 100 patients. The clinical outcome was favorable (GOS 4 or 5) in 53 patients (54%) and unfavorable (GOS 1, 2, or 3) in 45 (46%). Outcomes were significantly better among patients with unruptured aneurysms than in those with ruptured aneurysms (P = 0.01).
Prognostic factors
Among the factors of interest, the following were associated with mortality in patients with ruptured aneurysms: HH score in the immediate preoperative period (P < 0.001), hydrocephalus (P < 0.001), and presence of clinical complications (P < 0.001). Detailed data are provided in Table 4. On multivariate analysis to evaluate factors independently associated with death, only the presence of clinical complications (P = 0.003) remained statistically significant [ Table 5].
Comparison of patients with favorable versus unfavorable outcomes in the group of ruptured aneurysms revealed that the following factors were associated with an unfavorable outcome: HH score (P < 0.001), Fisher grade (P = 0.015), clinical vasospasm (P = 0.012), external ventricular drainage (P = 0.015), hydrocephalus (P < 0.001), and presence of clinical complications (P = 0.001) [ Table 6]. On multivariate analysis, no factor was independently associated with an unfavorable outcome. In the unruptured aneurysms group, presence of clinical complications was the only factor associated with an unfavorable outcome (P = 0.008).
DISCUSSION
There have been a few recent case series of patients undergoing surgical treatment for ACoA aneurysms. With the advent of endovascular techniques, these approaches have become increasingly popular, although they are not necessarily superior to conventional surgical treatment. [25,26,33] In our service, surgical treatment was offered to all patients with anterior circulation aneurysms except when the patient was considered inoperable due to clinical conditions or, in the case of unruptured aneurysms, when the patient expressly opted for endovascular treatment and there were no contraindications. [18,27] In the present study, we analyzed a series of 100 aneurysms operated on over a period of 7 years. The mortality rate was 26%, considerably higher than rates reported elsewhere in the literature. The presence of clinical complications was the only factor independently correlated with mortality in ruptured aneurysms in our sample.
We observed a high rate of infectious clinical complications, which explains the high mortality. It should be noted that, in our series, most patients (85%) had ruptured aneurysms and among these cases, 48.8% were in severe neurological condition (HH score 3 or higher) preoperatively. The postoperative course of this patient population tends to be worse due to the higher incidence of clinical complications and clinical vasospasm. [21,22] Ventriculitis was present in 7 patients (18.9% of those who received an external ventricular drain), which is equivalent to 19.1 cases/1000 catheter days. Ramanan, in a meta-analysis of 35 observational studies, found an overall incidence of 11.4/1000 catheter days. When analyzing only smaller studies (<1000 catheter-days), the observed incidence was higher (18.3/1000 catheter-days). [30] Considering that the total number of catheter days in our series is 365, our rate is consistent with that of the smaller studies included in the meta-analysis. Lozier, in a review article, observed that, in most studies, the presence of hemorrhagic CSF is associated with a higher incidence of ventriculitis. [24] Clinical vasospasm occurred in 50% of patients. Although the occurrence of clinical vasospasm had no direct correlation with mortality, it did correlate with unfavorable outcomes (GOS 1, 2, or 3). However, this statistical association was not maintained on multivariate analysis. Rosengart and Orakdogen, among other authors, have reported an association between clinical vasospasm and mortality. [28,31] Angiographic vasospasm was found in 34% of patients in a case series by Brown, who reported that the incidence of late ischemia was 31% higher in patients with angiographic vasospasm than in patients without it. [3] However, 25% of patients with late ischemia did not exhibit angiographic evidence of vasospasm.
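The ventriculitis incidence quoted above can be checked arithmetically. The sketch below (function name is ours) reproduces the reported figure to within rounding: 7 cases over 365 catheter-days gives about 19.2 per 1000 catheter-days, matching the reported 19.1:

```python
def rate_per_1000_catheter_days(cases, catheter_days):
    """Incidence expressed per 1000 catheter-days."""
    return cases / catheter_days * 1000

# 7 ventriculitis cases over the series total of 365 catheter-days:
print(round(rate_per_1000_catheter_days(7, 365), 1))  # -> 19.2
```

Expressing rates per 1000 catheter-days normalizes for exposure time, which is what allows the comparison with the pooled incidences from the Ramanan meta-analysis.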
There are several possible explanations for the presence of clinical vasospasm and late cerebral ischemia in the absence of detectable angiographic vasospasm. These include initial early damage related to intracranial hypertension in the first 72 h after stroke, which could lead to subsequent global cerebral ischemia; increased concentrations of procoagulant factors in CSF; and cortical spreading depolarization, secondary to dysfunctional cation influx in the neuronal membranes, with subsequent dysfunction and spasm in the cerebral microvasculature. [10] In our series, there was no significant difference in overall aneurysm size or aneurysm neck size between ruptured and unruptured aneurysms. Aneurysm size has been studied by several authors as a potential predictor of rupture. [5] In 1998, a cooperative analysis of a retrospective cohort (International Study of Unruptured Intracranial Aneurysms) concluded that aneurysms smaller than 10 mm in patients with no history of SAH have a risk of rupture of 0.05% per year. [38] Juvela et al., in their cohort, observed that, although larger size is a risk factor for aneurysm rupture, most ruptured aneurysms were smaller than 7 mm. [15] In a series reported by Weir, 77% of ruptured aneurysms were <10 mm in size. [37] Regarding the aneurysm AR, in our series it was larger in ruptured than in unruptured aneurysms, which is consistent with the literature. In a retrospective study by Weir, the mean AR of unruptured aneurysms was 1.8 versus 3.4 in ruptured aneurysms, and the odds of rupture were 20-fold greater when the AR was >3.47 than when the AR was <1.38. [36] Ujiie, in another retrospective study, found that almost 80% of ruptured aneurysms had an AR >1.6, while almost 90% of unruptured aneurysms had an AR <1.6.
[34] In our series, the criterion used to define the laterality of craniotomy was preferential filling, as described by Chemale. [4] Preferential filling corresponds to the side on which the aneurysm is most completely visualized on angiography, if there is a difference. Chemale noted in his series that, in most cases, the dome of the aneurysm is directed contralaterally to the side of preferential filling, even when both A1 segments are symmetrical. When this did not occur, the A1 segment was tortuous and its terminal portion, adjoining the ACoA, was directed contralaterally to preferential filling. Chemale argues that, although the right side is classically preferred when obtaining access to ACoA aneurysms because of the lower risk of morbidity in the nondominant hemisphere, [39] the preferential filling approach allows easier dissection of the aneurysm neck and is associated with a lower risk of intraoperative rupture. [4] Similar findings were observed in our study: nearly 85% of aneurysms exhibited preferential filling on one side; however, A1 segment hypoplasia occurred in only 45%. The aneurysm dome was directed contralateral to the side of preferential filling in 85.8% of cases (73/85). When filling was symmetrical, most aneurysms (10/15) did not project laterally. The laterality of craniotomy did not influence morbidity or mortality in our series.
Intraoperative rupture occurred in 3% of the cases, a lower rate than those reported in the literature. Leipzig et al., in a series of 1694 aneurysms, [19] found moderate or severe intraoperative rupture (disregarding small bleeds that could be controlled immediately using the microsurgical aspirator or clipping) in 3.2% of aneurysms. ACoA aneurysms ruptured intraoperatively in 9.3% of cases, a rate higher than in the present series. In another series of 694 aneurysms, reported by Kheireddin et al., [17] the overall intraoperative rupture rate was 11.7%, being highest in ACoA aneurysms. Hsu et al., [14] in a series of 538 surgically treated aneurysms, found that experienced surgeons (more than 300 procedures performed) had a significantly lower intraoperative rupture rate than surgeons with little experience (8% vs. 16%).
We found no association between temporary clipping and worse outcomes in our patients, a finding consistent with the literature. Araújo Jr., in a series of 32 patients with ACoA aneurysm, of whom 21 required temporary clipping, did not find a significant association between duration of temporary clipping and outcome. [7] Griessenauer, in two follow-up studies of patients who underwent temporary clipping during treatment of cerebral aneurysms, also found no association between duration of clipping and outcome, even with an average time as high as 19 min. [11,12]

Study limitations

The limitations of this case series are those inherent to retrospective study designs. Our data were collected from medical records completed by different individuals in a heterogeneous manner over time. Sometimes, specific data were unavailable for a specific patient.
Regarding morbidity and mortality, due to the paucity of data available in outpatient medical records at our facility, it was impossible to evaluate late outcomes in the cohort.
CONCLUSION
The present study reports a series of 100 cases of ACoA aneurysms treated surgically over 7 years at a tertiary care center in Southern Brazil. The overall mortality rate was 26%, demonstrating that, despite advances in the management of subarachnoid hemorrhage, it is still an event that carries high morbidity and mortality rates, especially in patients who present with severe neurological deficit (as did a substantial portion of our sample). The development of clinical complications, especially infectious ones, was the key determinant of mortality, highlighting the importance of adequate neurointensive care in these patients.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2019,
"sha1": "c856455e6c4814123b36d96e6915a9a8375a45c2",
"oa_license": "CCBYNCSA",
"oa_url": "http://surgicalneurologyint.com/wp-content/uploads/2019/10/9684/SNI-10-193.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "79a78b5613fa55ae27824fe18ba06193a6d6966d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The design of Superpave compared with the Marshall design method
In order to compare and analyze the Superpave and Marshall design methods for asphalt mixtures, this paper examines gradation selection, estimation of asphalt content, the method for determining the theoretical maximum density, specimen compaction methods, the determination of the optimum asphalt content, and the mixture performance tests of the two design methods. The comparison leads to the conclusion that the Superpave design method is better suited to asphalt mixture design than the Marshall method. Keywords: Superpave; Marshall design method; gradation; compaction methods; optimum asphalt content
FOREWORD
At present, the common asphalt mixture design methods are the Marshall method and the Superpave method. The Marshall method was first proposed by Bruce Marshall; in 1948, the U.S. Army Corps of Engineers improved it, added several performance tests, and developed it into a standard mix proportion design procedure for asphalt mixtures. The Superpave method is the outcome of the asphalt research program of the U.S. SHRP plan carried out between 1987 and 1992; this system puts forward a new performance-based design approach in which material selection and mixture design are related to traffic and climate.
The Marshall method was developed on the basis of experience and has certain limitations in its specimen compaction method and test indexes, while the Superpave method has not been widely adopted in China because of its high equipment cost. The two methods differ in gradation selection, specimen compaction, and the determination of the optimum asphalt content (OAC), so many technicians want to know whether, and how, the two methods are related.
COMPARISON OF THE METHODS
(1) Gradation selection. The Marshall method is suited to the design of continuously dense-graded asphalt mixtures. In the past, the gradation was usually chosen as the mid-value of the range recommended by the specification. The current specification requires an "engineering design gradation range" to be confirmed through engineering practice, and a coarse type (C) or fine type (F) to be selected according to highway classification, climate and traffic conditions. The gradation curve must normally be continuous and smooth and give high compactness, and it should be analyzed whether a skeleton structure is formed.
Superpave gradation selection is implemented through control points and a restricted zone. The control points specify the range that the aggregate gradation shall not exceed, defined at the nominal maximum size sieve, an intermediate sieve (2.36 mm) and the minimum sieve (0.075 mm). Their purpose is to limit the amount of sand and to provide sufficient VMA (voids in mineral aggregate). The restricted zone lies along the maximum density line between the intermediate sieve and the 0.3 mm sieve. A gradation passing through the restricted zone may cause compaction problems and lack the ability to resist permanent deformation in service [3]; it may also make the VMA too small, which affects the asphalt content. Therefore, a Superpave design gradation should lie between the control points while avoiding the restricted zone, and such gradations usually form a skeleton structure.
(2) Estimating the binder content. The Marshall method estimates the asphalt content with an empirical formula based on data from previously constructed projects. This gives a quick result, but the estimated asphalt content cannot be guaranteed to be close to the true optimum, because data taken from constructed projects are not fully reliable [4].
The Superpave estimate of asphalt content is derived from a formula based on the volumetric properties of the mixture. The initial binder content Pb is taken from experience, and the coefficient C is selected according to the water absorption of the aggregate: the smaller the value of C, the greater the absorption, with a common range of 0.5 to 0.8.
Because the design air void content Va is fixed at 4% in Superpave, only Pb and C need to be set once the densities of the aggregate and the binder have been measured. The Superpave air void content is always 4%, while in the Marshall method it is a range: when calculating the asphalt-aggregate ratio, the Marshall method first confirms a target air void content, which is determined by experience.
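The 4% target refers to air voids computed from the mixture's volumetrics. A minimal illustration using the standard relation Va = (1 - Gmb/Gmm) x 100; the specific gravity values below are hypothetical, chosen only to show a trial mix landing near the target:

```python
# Air void content (Va) from bulk specific gravity (Gmb) and
# maximum theoretical specific gravity (Gmm): Va = (1 - Gmb/Gmm) * 100
def air_voids_percent(gmb: float, gmm: float) -> float:
    return (1.0 - gmb / gmm) * 100.0

# Hypothetical trial values for illustration only.
va = air_voids_percent(gmb=2.380, gmm=2.479)
print(f"Va = {va:.1f}%")  # Superpave fixes the design target at Va = 4.0%
```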
(3) Determining the maximum theoretical relative density. The Marshall method uses the vacuum test to determine the maximum theoretical relative density of ordinary asphalt mixtures, while for modified asphalt and SMA mixtures it is calculated using a formula. Superpave, in contrast, determines the maximum theoretical density by direct measurement in all cases.

For mixture performance, Superpave uses the shear tester (SST) and the indirect tensile tester (IDT) to test and predict performance. These two instruments improved performance prediction methods, but they are expensive and related research remains rare. The Superpave water sensitivity test of the mixture is not a performance-based test and has two purposes: first, to determine whether the combination of asphalt cement and aggregate is sensitive to water; second, to measure the effect of antistripping agents. The Superpave water sensitivity test method is similar to the freeze-thaw splitting test in the current specification [3].
Because the shear test and indirect tensile test methods and standards are still being perfected, domestic research institutions carrying out Superpave mixture design usually verify the performance of the mixture with Marshall test methods. Although this does not fully reflect the excellent pavement performance of Superpave mixtures, it can be regarded as an auxiliary method within Superpave mixture design.
CONCLUSION
1. In gradation selection, Superpave uses the concepts of control points and the restricted zone, and the designed gradation, which generally forms a skeleton structure, is better than that of the Marshall method.
2. Superpave is more reasonable than the experience-based Marshall method because its prediction of binder content is based on the volumetric properties of the mixture.

3. To determine the theoretical maximum density of the mixture, the Marshall method uses the vacuum test for ordinary asphalt mixtures and a calculation method for modified asphalt mixtures, whereas Superpave always uses direct measurement. Because the measurement method has many shortcomings for modified asphalt mixtures, the Marshall approach is preferable for testing the maximum theoretical density of such mixtures.
4. Superpave incorporates traffic parameters into specimen molding and also takes the pavement situation into account; as a result, the designed mixture density is closer to field conditions than that of the Marshall design method.
Environmentality in biomedicine: microbiome research and the perspectival body
Microbiome research shows that human health is foundationally intertwined with the ecology of microbial communities living on and in our bodies. This challenges the categorical separation of organisms from environments that has been central to biomedicine, and questions the boundaries between them. Biomedicine is left with an empirical problem: how to understand causal pathways between host health, microbiota and environment? We propose a conceptual tool, environmentality, to think through this problem. Environmentality is the state or quality of being an environment for something else in a particular context: a fully perspectival proposition. Its power lies partly in what Isabelle Stengers has called the efficacy of the word itself, contrasting the dominant sense of the word environment as something both external and fixed. Through three case studies, we argue that environmentality can help think about the causality of microbiota vis-à-vis host health in a processual, relational and situated manner, across scales and temporalities. We situate this intervention within historical trajectories of thought in biomedicine, focusing on the challenge microbiome research poses to an aperspectival body. We argue that addressing entanglements between microbial and human lives requires that the environment is brought into the clinic, thus shortening the conceptual gap between medicine and public health.
Introduction
In this paper, we argue that the prominence of microbiome research in the 21st century is bringing about changes in the status of the environment within biomedicine. Categorical distinctions between organisms and environments are brought into question, and the biological boundaries between them become less clear. We argue that this introduces challenges regarding usage of the term environment: what constitutes an environment, for whom, and with which consequences for health? We situate microbiome research historically within medical and biomedical currents of thought from the 19th century onwards, particularly vis-à-vis the rise to dominance of a place neutral medicine. We then offer a conceptual response to the challenge of proliferating environments in microbiome research.
The word microbiome refers to the combined genetic composition of the microbiota (bacteria, viruses, archaea and fungi) that live on and in the body of another organism (a host, e.g., a human). Microbiome composition differs from individual to individual, seems to impact host health and wellbeing in far-reaching ways, and changes over the lifespan of the host according to myriad factors such as diet, social interactions, place and antibiotic intake (e.g., Lynch & Pedersen, 2016). This complex collection of microscopic organisms is frequently described as an inner environment or ecosystem (Nerlich & Hellsten, 2009) in metabolic dialogue with the environment outside. It has also been described as a functional component of the macro-organism, with microbiota and host making an integrated unit called the holobiont. Through either lens, microbiome research brings organism and environment closer: they are co-constituting over time, and across scales from macro to micro.
Whether and when microbiota-host relations should be framed in ecosystemic terms or as part of an integrated unit of individuality is a matter of ongoing debate. Diverse attempts have been made to delineate levels of integration (for example, evolutionary, developmental, physiological, immunological) and the consequences for individuality.
Arguments have been made that host and microbiota are integrated biological individuals, the aforementioned holobionts (e.g., Gilbert et al., 2012; Gilbert & Tauber, 2016; McFall-Ngai, 2013; Theis et al., 2016); that holobionts are constitutively embedded in their world but not biological individuals (Smith, 2017); that host and organisms are ecological communities without unity (e.g., Douglas & Werren, 2016; Moran & Sloan, 2015; Skillings, 2016; Stencel & Proszewska, 2018); and that holobionts are individuals construed as processes of intersecting lineages collaborating in metabolism (Dupré & O'Malley, 2009). Suárez and Stencel (2020) have argued that both ecosystemic and integrated individuality perspectives can hold, depending on whether we adopt the perspective of the host or the microbiota; Dupré (2012) has argued that boundary demarcation should be decided in context, depending on biologically salient aspects of the analysis. Within biology, the two perspectives are not generally considered incompatible (e.g., see Gilbert, 2019).
A kind of ontological fracturing is thus taking place (Landecker, 2019). Categories such as individual/community, organism/environment, inside/outside, are creaking under the weight of experimental data (Landecker, 2019). Whilst not dissolving completely, categories begin to leak into one another, with epistemic and ontological consequences that invite re-thinking the kinds of questions we ask in biology and biomedicine, the modes of intervention we consider, and the worldviews that underlie them. Landecker and Kelty's (2019) invitation to think metabolic disorder through short-chain fatty acids produced at the metabolic interface between microbiota and host, rather than thinking through genes or calories, is one example of such thinking reversals. Faecal microbiota transplantation (see section 3) is an example of a shift towards ecosystemic modes of thinking within medical intervention; another is the development of probiotic-based cleansing products for use in hospitals (see Caselli & Purificato, 2020).
Many have argued that shifts in imagery and metaphor indicate (and follow) shifts of thought in science (Haraway, 2004; Keller, 2020). Indeed, ecosystemic imagery and metaphors have been a core driver of scientific articulations of the importance of microbiome research, as well as its uptake in public and media discourses (Sangodeyi, 2014). McFall-Ngai et al. (2013, p. 3233) write of the holobiont as "the ecosystem that is an individual animal and its many microbial communities." Gilbert (2019, p. 308) similarly writes that "we are not only organisms, we are biomes-sets of integrated ecosystems", and Gilbert et al. (2012, p. 336) write that "we are all lichens." This richness of imagery and metaphor has been coupled with widespread societal attention to the implications of thinking with and through microbial environments. Popular books and journal articles alike draw on cultural touchpoints such as Walt Whitman's I contain multitudes (e.g., Podolsky, 2012; Yong, 2017) and sci-fi tropes such as We are not alone (e.g., Shivaji, 2017; YourekaScience, 2014), to indicate disturbance to notions of the sovereign individual.
Responding to this landscape of categorical frictions and fractures as interdisciplinary scholars, our paper proceeds in three sections. In section 2, we lay out the historical landscape: situating the microbiome projects as the successors and intellectual disruptors of the human genome project, and arguing that clinical microbiome research sits uncomfortably within the prevailing paradigm of place neutral medicine. We argue that this research is changing attention to environmental factors within biomedicine in several ways: for example, by rendering evident the limits of controlled laboratory experiments that isolate microbiota strains from their community context, and by directing attention to microbial embeddedness as an environmental factor within human health. Notably and importantly for our argument, we argue that microbiome research troubles notions of what actually constitutes an environment, and for whom.
In section 3 of the paper, we propose a conceptual response to the challenges of proliferating usages of environment in microbiome research. This is environmentality, which we define as the locally described state or quality of being a causal context for something else. Environmentality has a dual action as concept and as term; that is to say, its value lies partly in contributing to conceptual debate about what constitutes environment, and partly in the action of the word itself. It is a tool for sharpening attention to what may count as an environmental factor in localised and case-specific ways, over time and across scales, from micro to macro. And it is also a way of highlighting cultural meanings imbued in the word environment, namely a gravity towards the external, the adjacent and the fixed, which may be influencing ability to respond to epistemological and ontological challenges posed by a microbial view. We illustrate how environmentality has shaped our thinking through three case studies. Sections 2 and 3 should be read as in dialogue with each other. They speak in different voices, and section 3 is a conceptual response to a problem laid out in section 2.
In the final section 4, we revisit the landscapes of microbiome research and contemporary medicine through the lens of environmentality. We argue that environmentality helps us to think in terms of relations, beyond categories and across scales and temporalities. This facilitates a case-specific perspectival understanding of the body: the body in place, co-constituted with microbiota and environment. Finally, we reflect on some consequences for medicine, which has (outside specific sub-disciplines such as tropical medicine) tended to separate body and environment.
The growth of environmental thinking
Over the past two decades, microbiome research has given rise to an extensive proliferation in ways of locating environment. Already before the publication of the full human genome sequence in 2003, microbiologists began calling for the use of genetic sequencing technologies to investigate the trillions of microbes living on and in our bodies. In 2001, noted microbiologists David Relman and Stanley Falkow remarked that science was "still woefully ignorant of the composition and variability of our endogenous microflora," and that "we still do not fully appreciate to what extent human life is dependent on its microflora" (Relman, 2001, p. 208). This paper was part of a push in microbiology to deepen the understanding of commensal microbes living on and in human bodies, which would eventually lead to the establishment of the Human Microbiome Project in 2007. In describing the importance of beginning this work, the authors explicitly likened the scope and nature of the undertaking to the study of other natural environments: "The human biome is as much an unexplored frontier as the collection of life found at deep-sea thermal vents, if not more so" (Relman, 2001, p. 208).
Humans, from this perspective, shared a fate similar to most other things on the planet (certainly all living ones), namely to be environments for and with microbial communities (Gilbert, 2017). The environmental imagery of landscapes, jungles and deep-sea vents used in the first decade of the 2000s served several functions. Conceptually, it reoriented the study of human-microbe relations away from previously dominant war metaphors (Institute of Medicine Forum on Microbial Threats, 2006) by emphasizing the communal, even natural, entanglement of organisms and environments. Methodologically, it suggested a shift from microbes as either model organisms (Ankeny & Leonelli, 2011) or singular pathological entities, towards the study of microbes as ecologies and communities (Paxson & Helmreich, 2014). Growing concern in the 1990s about the rapid spread of antibiotic-resistant microorganisms as a major health threat was a key factor in this shift; see Landecker (2016) and Sariola and Gilbert (2020) on changing biologies of bacteria in a world of human desires, and ensuing changes to human ideas of the bacterial. Joshua Lederberg wrote in 2000 (p. 290) that a re-conceptualization of disease as "instabilities within this context of cohabitation" was imperative. Highlighting environmental co-existence with microbes was thus not only a rhetorical device used to further an exciting and underexplored research area, but was also a substantive response to the growing realization of the depth and complexity of the entanglement between human and microbe. From this perspective, disease in general had to be understood as an environmental phenomenon, or at least as always having environmental qualities.
The newness of this ecological vision for biomedicine must be appreciated in historical context; it stood in stark contrast to the genomic science it was built upon. As Nerlich and Hellsten (2009) have described, post-genomic microbiome research cast itself and its objectives in a very different register than the linguistic metaphors that had dominated the Human Genome Project (HGP). Where the HGP had been presented as a deciphering of the book of life, with the genomic sequence constructed from letters (bases) and chapters (chromosomes) organised in books (genomes), microbiome research deployed a language of interactions, communities and ecologies (Baty et al., 2014). The genomic vision of the late 20th century was one in which the mysteries of the human organism would be mostly solved by looking within, deeper and deeper into the molecular information contained in its cells (Keller, 2000). Geneticist Walter Gilbert in 1992 summarised this view and its belief in the revolutionary promises of the Human Genome Project by stating that one day "three billion bases of DNA sequence can be put on a single compact disc and one will be able to pull a CD out of one's pocket and say, 'Here is a human being; it's me!'" His essay was entitled A Vision of the Grail, and was published in a book called The Code of Codes (Kevles & Hood, 2000, p. 96).
Unlike the relatively bounded metaphors of organisms as books, the rise of microbiome research was part of dragging the human organism into what Paxson and Helmreich (2014, p. 166) describe as a "newly ascendant model of 'nature', one swarming with organismic operations unfolding at scales below everyday human perception, simultaneously independent of, entangled with, enabling of, and sometimes unwinding of human, animal, plant, and fungal biological identity and community." This ecosystemic view of humans as entangled environments was metaphorically enshrined in the paper The Human Microbiome: Eliminating the Biomedical/Environmental Dichotomy in Microbial Ecology by microbiome research pioneers Ruth E. Ley, Rob Knight and Jeffrey I. Gordon, in which they write: "When a new human being emerges from its mother, a new island pops up in microbial space. Although a human lifespan is a blink in evolutionary time, the human island chain has existed for several million years, and our ancestors stretch back over the millennia in a continuous archipelago" (Ley et al., 2007, p. 3). Similarly, Scott Gilbert has written of what he calls a holobiont birth narrative, emphasizing the continuous communal and environmental embeddedness of individuals (Gilbert, 2014). Humans, from this perspective, cannot be understood as singular foregrounded entities standing against a backgrounded environment; they are themselves environments, embedded in and traversed by other environments larger and smaller.
This shift was, as Juengst and Huss (2009) describe, integral to the entire Human Microbiome Project (HMP), right from its inception. The researchers articulated a vision of the human genome as part of a human metagenome, which included the genomes of all the microbes associated with the body. The human body, it was argued, should be thought of as an ecosystem, and to be human was to be a superorganism consisting of multiple organisms that together produced a self. Bodies were environment for microbes, and microbes were environments for bodies. Thus, the thrust of the project was at once both methodological (using new technologies to study living things in new ways) and conceptual (re-organizing and re-describing the structure and function of the bodies in question). Despite the clarity of the HMP's vision, defining and situating these multiple, entangled environments was a complicated affair right from the start. Early discussions around the project began with an informal brainstorming session in February 2006. Here, participants noted the many variables that would have to be taken into account in order to understand the microbiome: temporal, genetic, environmental, seasonal and individual factors. The question was also raised of what a normal or core microbiome might be, and if such a thing even existed. As Sangodeyi (2014, p. 264) discusses, the questions raised at these workshops were at once practical and philosophical: "What did a healthy body look like in microbial terms? Was there such a thing as a core microbiome? What was the difference between the microbiome in health and disease? These questions, central to the practicalities of research and study design, came down to a deceptively simple question that had deep cultural resonance: what did it mean to be normal? And what were the boundaries of health?". Eventually, phase 1 of the project was structured according to distinct body sites with particularly rich and diverse microbial ecosystems: gut, skin, vagina, nose and mouth (The iHMP Research Network consortium, 2019). The groups working on the different sites each proceeded with different understandings of how to determine what a normal microbiome might be (Sangodeyi, 2014), an indication of the complexity, variety and environmental embeddedness of the human microbiota.
A history of place neutrality
The tension between the inwards-oriented, programme metaphors of the Human Genome Project and the Human Microbiome Project's reliance upon an ecological and environmental conception of human bodies also points to a deeper tension in the historical development of biomedicine. Environmental historian Christopher Sellers has argued that modern biomedicine derived a significant part of its conceptual rigor, analytical power and clinical efficiency by separating organism and environment (Sellers, 2018, p. 1). He terms this place neutrality, suggesting that from the late 19th century and onward into the 20th century, medicine and medical science increasingly aspired to "a medicine in which patients' own places didn't matter to what doctors thought or did" (Sellers, 2018, p. 1). Inspired by the increasing emphasis on the clinic and the hospital as the sites where medicine happened, was studied, and became conceptualised (Foucault, 1973), this new vision of medicine turned inwards. Disease became localised, and the interior landscapes of human bodies became the primary if not exclusive domain of the medical practitioner and the medical scientist (Jewson, 2009). Environment and body came to be conceptualised and studied at an increasing distance from one another.
This inwards move has to be understood in the context of the medical theories and practices that it diverged from: it was quite distinct from mid-19th century and earlier medicine, which was built on a much more fluid ontology in which bodies and environments were in constant exchange and bodies were far more porous and permeable. Neo-Hippocratic atmospheric thinking, which dominated the 18th century and the first half of the 19th century, emphasised environmental concepts as key ways to understand and intervene in disease. Even as anatomical knowledge was growing, and a more clinical pathology developing, medicine at that time still understood bodies through environmental ontologies (Nash, 2007) such as miasma, the theory that epidemic disease was caused by 'bad air' resulting from putrefaction. Airs, winds, weather, fluids and landscape topography were the keystones of this environmentally inclined medicine. By the mid-19th century, as medical statistics and the categorisation of pathologies were gaining increasing traction in medical thinking and practice, the lack of rigidity and causal explanatory power of earlier theories became increasingly visible, for example in understanding epidemic diseases such as cholera. Doctors began looking to more empirically focused theories of disease.
This new place neutral medicine was bolstered by the development and explanatory power of experimental physiology (Stahnisch, 2012). Pioneers such as Claude Bernard emphasised the body as having a self-regulating inner environment (Landecker, 2017), and outer influences were increasingly seen as less relevant, pushed aside by the drive to understand the mechanisms of internal regulation. As experimental physiology began having effects on clinical practice and medical ontology, slowly at first and then much faster through successes such as germ theory, disease increasingly became a thing to be studied inside the body. Even as germ theory found the source of the disease in the environment, the clinician's task began only once the patient was sick. The battle for health was fought by attempting to restore homeostatic balance within the patient (Sellers, 2018). Its explanatory power and its understanding of the interior terrain bolstered the success of this approach; a success that was achieved in part by the degree to which experimental, lab-based medical science managed to establish a space in which the mess of environmental conditions could be suspended. This suspension allowed the experimentalist to establish causal influences between organs, organ systems and disease. Medical science (and later biomedicine) thus had important origins in an epistemological push for control over, and separation of, environmental factors.
The environment outside the body was in this way rendered at a certain distance from interior life; body and environment were studied at a distance from one another. As Sellers argues, this was accompanied by the establishment of a number of more environmentally inclined medical sub-disciplines and specialties that studied relations between body and environment, but separated from the core medical faculties: tropical medicine, industrial health and public health. Sellers (p. 1) writes: "The rise of place neutrality from the late nineteenth century onward, I suggest, had close and enabling historical ties to the near-simultaneous formation of place defined specialties-tropical medicine, bacteriological public health, and industrial medicine and hygiene." This relieved other clinicians from having to consider environmental influences. Medical science as it developed in the 20th century was thus epistemologically and ontologically inclined towards place neutrality and transferability of knowledge across differing environments and different bodies. Medicine took place primarily within the body; as science, as clinical practice, and as a driving force in cultural articulations of health, illness and corporeal existence in general.
Microbiomes and the situated medical body
If the development of biomedicine in the late 19th and 20th centuries was marked by a remarkable shift in understandings of the inner workings of human bodies, the Human Microbiome Project marked both a practical and conceptual push towards shortening the distance between bodies and environments. Alongside this ontological impact of microbiome research, another major contribution has developed: as investigations into the human microbiome began in earnest, microbes were found to be involved in many of the major health issues facing postindustrial societies, such as metabolic, inflammatory, immune and systemic disorders (Lynch & Pedersen, 2016), as well as a range of mental disorders (Reider et al., 2017). These are all disease states sharply on the rise, as more and more countries experience major industrial advances and shifts towards urbanised living. The deep evolutionary microbial embeddedness and co-existence of humans with microbes has, it seems, been perturbed (Flandroy et al., 2018).
Martin Blaser coined the expressions disappearing microbiota (Blaser & Falkow, 2009) and missing microbes (2014) to highlight the dangers of perturbation to this foundational co-existence. As the co-metabolism (Smith et al., 2013, p. 549) between host and microbiota is disrupted, for example through widespread use of antibiotics and sanitisers, or through fibre-impoverished diets,[8] the physiological integrity of the holobiont suffers. Indeed, Blaser (2014) argues that this loss of microbial (bio)diversity within and on our bodies is so pernicious that it surpasses in severity the dangers associated with the rise of antibiotic resistant pathogens such as Methicillin-resistant Staphylococcus aureus (MRSA) and Clostridium difficile.
Microbiome research, then, is in some sense a lightning rod within biomedicine and broader ideas about health vis-à-vis environmental thinking;[9] it marks a deep concern with what geographer Jamie Lorimer has called a more probiotic understanding of the relationship between bodies and environment (Lorimer, 2020). While antibiotic attempts to manage life through control and separation had great impacts on public health and longevity during the 20th century, there is now a growing sense that the unintended side effects on microbial ecologies are serious health disruptors. As Lorimer writes, "in recent decades, scientists and citizens have in many cases considered this antibiotic approach to be excessive; obsessions with purity, division, simplicity, and control lead to blowback and the emergence of new pathologies. Modern modes of managing life and the earth may be disturbing and intensifying natural processes, helping drive the planetary transition into the Anthropocene" (Lorimer, 2020, p. 3).
Thinking with microbiomes has also re-surfaced process approaches within philosophy of biology,[10] questioning the adequacy to the present of metaphysical foundations of thought that have facilitated the categorical separation of organisms and environments. Under a process ontology, dialogical change is the foundation of all living processes, and stability becomes the phenomenon in need of explanation (Dupré, 2020). Organisms with their symbiotic microbiota (holobionts) are biological verbs, not nouns; products of lineage-forming entities collaborating in metabolism[11] (Dupré & O'Malley, 2009). Boundaries and biological identities are formed and continuously re-formed in dynamic interactions. Here, the question of boundaries is rendered empirical: there are multiple ways of drawing boundaries between organism and environment, "reflecting real biologically salient aspects of the multiply interconnected systems that make up the living world" (Dupré, 2012, p. 241).
In dialogue with a process ontology of life, anthropologists Niewöhner and Lock (2018) have proposed the notion of situated biologies. They point to the need to construe and document biologies as constituted in dialogue with place, objects, and through the materialization of ideas. Bodies as shaped through biomedical sanitation practices; bodies as shaped through cultural dining norms. And to include in our understanding of organismal biologies the impact of the very processes through which organisms (macro and microscopic) come to be known by humans, what Donna Haraway (1988) has called situated knowledges. Through interdisciplinary scholarly attention to microbiome research, boundaries of nature and culture blur; place and its practices become embedded in bodies that continuously individuate-in-relation.
This emphasis on the situatedness of knowledge is also finding echoes in recent calls to include more of the 'wild' into microbiome laboratory studies. To date, experiments with laboratory rodent microbiota have provided foundational knowledge about how hosts and microbiota interact under highly controlled laboratory conditions. However, these studies are built on the place neutral approach embedded deep in contemporary biomedicine. Summarised here by researchers calling for a re-wilding of laboratory studies: "The most convincing evidence for host-microbiome interactions has been gleaned through microbiome transplantation studies […] However, there is a trade-off: highly controlled experiments isolate mechanisms of interest, but they cannot simultaneously capture the full suite of ecological processes (drift, dispersal, competition etc.) that influence reciprocal host-microbiome interactions in nature" (Greyson-Gaito et al., 2020, p. 2). This points to the complex trade-off between experimentally verifiable microbiome knowledge gained through isolated laboratory conditions on the one hand, and the limitations on comparing and translating this knowledge into 'the wild' of human health on the other.
Environmental complexity is thus a confounder, but also the stage on which the extraordinary variety of microbial-macrobial interactions evolve, acquire stability and de-stabilise. In a sense, microbiome research within biomedicine is struggling with questions of holism and embeddedness. It comes up against the limits of metrics of distinct parts and wholes, seeking ways to disentangle complex relational dynamics long enough to make distinct readings, even as the system continuously shifts. In interesting ways, this mirrors medical preoccupations before the advent of place neutral medicine: microbes as the new, empirically grounded miasma.
Introducing environmentality
In section 2, we situated microbiome research in a history of reaching for a more scientific medicine; a more controlled, precise and generalizable knowledge that would separate body from environment and locate it instead in the aperspectival 'view from nowhere' of the clinic. We argued that microbiome research challenges these core paradigms, rendering environmental factors more multiple and mutable than ever before. Attending to the microbiome makes foreground and background repeatedly switch place, and biological boundaries move. It blurs the biological boundaries between species and forces the environment (back) inside the clinic. We do not intend to play down the life-saving advances of clinical medicine, but rather to elucidate some of its current conceptual challenges (see section 4). As argued by Hannah Landecker (2019), categories such as inside/outside and organism/environment are creaking under the weight of experimental data, bringing about a kind of ontological fracturing. It is not that categories dissolve, but that the data puts pressure on them; it becomes clear that, as thinking tools, they are not adequate to the multidimensional complexity of the living world.
In response to these conceptual challenges, we offer a concept that arises directly out of the ways that environment seems to shift around in microbiome research: environmentality as the state or quality of being a causal context[12] for something else. This is a firmly perspectival concept, to be used locally in relation to a particular case; a form of situated knowledge (Haraway, 1988), aware of its own situatedness as well as the situated nature of its object of study. Cases can be diverse: e.g., a research paper, a clinical observation, or a scientific hypothesis. Within the case, environmentality invites us to start by foregrounding a particular entity, phenomenon or set of relations, and then to identify how surrounding entities or relations are acting as causal context for the foreground and vice versa, following lines of environmentality across time and space. What is backgrounded and what is foregrounded might shift as we follow a particular line of thinking, and new agents might be identified and brought into play. For example, in case study two, we begin by tracing the environmentality of the mother's gut microbiome in relation to foetal development, and we end the analysis by foregrounding the role of fibre.
Environmentality addresses the proliferation problem, not by drawing tighter rings around usages of the term environment, but by sidestepping to see afresh what may already be there: the relational nature of the term environment itself, cutting across boundaries of time, space and scale. As Trevor Pearce (2010, p. 241) writes in his description of the term's mid-19th century introduction by Herbert Spencer: "The word 'environment' seems to refer to a mishmash of unrelated entities: sunlight, soil, climate, air, organisms, and so on." With environmentality, we don't propose to give up on better understanding the 'mishmash', but to accept its originary looseness and underdetermination (Walsh, 2021) and then orient attention on a case-by-case basis to specific, and often surprising, relationalities within it. We have found value here in thinking alongside Niewöhner & Lock's situated biologies (2018), which attend (anthropologically) to the particular ways in which place, practices and ideas become constitutive of biologies; the ways that environments become bodies across spatial and temporal dimensions and, we add, across scales. We follow Niewöhner and Lock's insistence that "what constitutes 'the environment' only becomes meaningfully defined in relation to a second entity to which that environment can be environment; or put in a different way-organism and environment always penetrate each other in several ways or co-construct each other" (2018, p. 691). See also Smith (2017) on this co-construction.
A related line of perspectival thinking is unfolding in the philosophy of biology around the question of what constitutes a biological individual. Suárez and Stencel (2020, p. 1310) have developed a part-dependent account of individuality whereby "holobionts are biological individuals from the perspective of the host, and ecological communities from the perspective of the microbes." Dupré (2012) defends a promiscuous individualism, whereby boundary demarcation is decided in context depending on the biologically salient aspects of relevance to the current analysis. Lloyd and Wade (2019) have proposed the terms euholobiont and demibiont to demarcate differential adaptive consequences of the associations for hosts vs. for microbiota. We find kinship in the perspectival nature of these accounts, but work with a different focus. Rather than examining microbiota/organism relations in order to clarify what counts as an individual and from which perspective, we attend anew to the diversity and context-specificity of microbiota/organism/environment relations, and thus the kinds of things that can act as environment and for whom. That is to say, we are interested in encountering the relations that make an organism and an environment, rather than defining the entities between which those relations hold.
Environmentality is an epistemic or operational tool rather than an ontological one. It is a way of being attentive to causal factors and relations that may not obey categories such as inside/outside. Although kin in worldview to a process ontology (Dupré, 2020; Nicholson & Dupré, 2018), we propose it as a tool for analysis within the prevailing ontology of things. The task is not to sharpen the definitions of boundaries, nor to advocate for their dissolution, but instead to hold the tensions of boundaries in place long enough that something meaningful can be said about constitutive relations between situated entities that themselves are temporally stabilised processes (Dupré, 2020). We are aware that the term environment is itself often used operationally rather than ontologically within the biomedical literature. But the estrangement of a new word, unburdened by the strong associations of externality and fixity that accompany the word environment, may help to identify and articulate new and surprising lines of connection. Here, we are inspired by Stengers' (2008) description of the power of words to help us think differently as efficace (efficacy). The work of environmentality may lie partly in the efficacy of the word itself: for us at least, it has helped re-think environments through the ground of microbiome research.
Three case studies
Below we illustrate how environmentality has shaped our thinking in relation to three microbiome case studies, all based on biomedical research articles. The first exemplifies the ways in which foodstuffs travel into, become and exit bodies, acquiring environmentality in relation to the entities involved (Fernández et al., 2020). The second lays out how environmentality can help us construe microbe and human as co-metabolic partners within embryonic development in utero; a partnership negotiated through fibre (Kimura et al., 2020). And the third considers faecal microbiota transplants, in which a traditional waste product takes on environmentality for another, less 'healthy' environment (Wilson et al., 2019).
It's not enough to call it 'ham'
A paper by Fernández et al. (2020), A diet based on cured acorn-fed ham with oleic acid content promotes anti-inflammatory gut microbiota, provides an interesting case study for environmentality. This paper reports the results of a study designed to assess whether consumption of acorn-fed ham by laboratory rats may serve as a prevention strategy for development of ulcerative colitis. Specifically, whether a diet composed exclusively of acorn-fed ham (as opposed to standard feed) changes the susceptibility of laboratory rats to ulcerative colitis (UC) induced in the lab by the chemical dextran sodium sulfate.
The key entities in this study are acorn-fed ham and the inflammatory bowel disease ulcerative colitis. Acorn-fed or Iberian ham is produced from the muscle of free-range pigs who, in the months running up to slaughter, feed exclusively on acorns and grass in traditional agroforestry systems in southern Spain and Portugal. Acorns (the nuts of the oak-tree family), when metabolised by pigs, are considered to be of high nutritional value, as they contain high quantities of oleic acid, an anti-inflammatory fatty acid. Throughout months of feeding, the oleic acid content of the seeds becomes stored in the pig's muscle tissue, creating a ham with a much darker colour, valued taste, and specific nutritional profile.
The interesting thing about this study for the purposes of our analysis is that the 'environment' travels through flesh across different scales, resulting in altered clinical outcomes. Agroforestry oaks produce acorns, which are eaten by the pig, whose muscle acquires an acorn-like quality, which is then consumed ad libitum for 7 days by rats (and their microbiota) in laboratory cages. The rats are then induced with ulcerative colitis for another 7 days and tested for disease index and gut microbiota composition, as compared to conspecifics fed on standard rat feed. With as many other clinical factors as possible controlled and standardised, disease activity indexes are traced back to the 'oak-like quality' of the ham: its high oleic acid content. The acorn acquires environmentality in relation to the rat's ulcerative colitis, as it is metabolised through the pig.
Here, what counts as environment, in clinically relevant ways, is amplified and simultaneously sharpened through an environmentality framework. Oak health, agroforestry systems and free-range pig production, as well as oak-fed ham, oleic acid, diet and gut microbiota, become environmental factors for inflammatory gut disease. Of course, many other entities have environmentality in relation to other entities in this study (e.g., the acorn/ham has environmentality in relation to the bacterial populations of the rat's gut) but the foreground of our analysis rests on the acorn-pig-rat line.
The word environmentality is doing different work in the analysis from the work that the word environment, with its imbued imagery of immediate externality, can do. It would sound strange to argue that the acorn in the dehesa (the local agroforestry system) is an environment for the health of the laboratory rat. But the first has acquired environmentality (and explanatory value) in relation to the latter. It is this ability to hold and stretch thought across scales and temporalities, to follow circumscribed, case-specific, metabolic travelways that link environment to organism(s) to disease (or health), that is of value. Thinking through environmentality facilitates a process of holding threads of thought as foodstuffs move through (in this case) other foodstuffs to impact health outcomes. The analysis highlights that, for the purposes of health outcomes, 'ham is not just ham'.[13] That is to say, naming a foodstuff is not enough: its environmentality for health states will be dependent on provenance, on environmental and production factors. Switching the perspective, ham is also a different ham depending on circumstantial factors at the time of ingestion: e.g., gut microbiota ecology. As Lock (2017) argued, continual interactions of biological and social processes across time and space sediment into biologies that are local; they precipitate differently.
Not all mothers are equal in the metabolic afterlife of fibre
A paper by Kimura et al. (2020), Maternal gut microbiota in pregnancy influences offspring metabolic phenotype in mice, provides another example through which to think alongside environmentality. The paper shows that, in a mouse model, short-chain fatty acids produced by the pregnant mother's gut microbiota travel through the placenta to the foetus and influence development, making offspring less prone to metabolic disorders and obesity later in life. However, if the mother is fed a low-fibre diet during pregnancy, depriving the gut microbiota of fibre to metabolise and thus diminishing production of short-chain fatty acids, mouse offspring are highly susceptible to obesity later in life.
The study indicates that the mother's gut microbiota has environmentality in relation to the developing foetus, in mutual implication with the mother's dietary intake. Put differently, the mother's fibre intake acquires environmentality in relation to the foetus' future susceptibility to obesity, but only in triangulation with the mother's gut microbiota. In fact, the study shows that the protective effects of a maternal high fibre diet are cancelled when the mother is treated with antibiotics to eradicate gut microbiota. Similarly, supplementing germ-free or low fibre-fed females with short-chain fatty acids during pregnancy resulted in adult offspring resistant to obesity. This indicates that it is the relational node between mother's gut microbiota and mother's fibre intake that acquires environmentality in relation to the offspring's metabolic health.
The experimental layout was the following: pregnant mice were bred under control (SPF) and germ-free (GF) conditions, and pups were then raised by foster mothers to align post-natal growth environment. Upon weaning, the two groups were fed a high fat diet to induce obesity. The offspring of GF mothers were found to be highly susceptible to metabolic syndrome and developed an obese phenotype not seen in the control group. The authors postulated that short chain fatty acids (SCFAs) produced by the mother's microbiota may have been travelling from mother to foetus via the placenta, influencing development of its metabolism (this process would be impeded in a germ-free mother, where there were no microbiota to produce SCFAs).

[13] We are not attempting to adjudicate on the clinical relevance of acorn-fed ham in relation to ulcerative colitis (e.g., it is interesting to note that SCFA butyric acid levels usually associated with gut health were lower in the rats fed with the Iberian ham treatment vs. standard feed, presumably due to lack of fibre in the diet). Rather, we are thinking environmentality through this study.
To further investigate the role of these SCFAs, a dietary intervention was performed. Non-germ-free mothers were fed a low-fibre vs. high-fibre diet, and the susceptibility of offspring to metabolic syndrome was tested, as per the previous experiment. A low-fibre diet in the mother was found to render the offspring highly susceptible to metabolic syndrome, as had been the case with a germ-free reared mother. This indicates that microbial fermentation of the mother's dietary fibre is responsible for resistance to obesity in the offspring. To further investigate this, the authors conducted further studies supplementing low-fibre diet mothers and high-fibre diet, germ-free mothers with the SCFA propionate during pregnancy. The resulting offspring were resistant to obesity as adults, indicating that the SCFAs were indeed responsible for influencing developmental pathways in the foetus in such a way as to render them metabolically healthy.
Where the first case study we reviewed indicated that not all ham is equal as an environmental factor in gut inflammatory disease, this study indicates that (otherwise identical) fibre is not equal as an environmental factor across different pregnant mothers, as its metabolism into SCFAs depends on the mother's gut microbiota ecology. It is the microbial fermentation, the relational node between mother's dietary intake and microbiota, that acquires environmentality in relation to obesity resistance in the adult offspring. Fibre itself is only an environment in relation to the microbiota's metabolism; and vice versa. We could say that the mother and her microbiota are held in reciprocal capture[14] (Stengers, 2010), continuously re-actualised and re-negotiated through her dietary choices. And this relationship becomes part of the maternal metabolic communication with the foetus, influencing developmental pathways.
An environmentality line of thinking may help construe this. The 'external setting' association for the word environment makes it difficult to argue that a relational pathway is itself an environment. Thus, environmentality could help with relational interpretations of data, a need clearly arising within biological practice: e.g., see the Human Microbiome Project phase 2's focus on interactivity factors between host and microbiota (The iHMP Research Network Consortium, 2019). This need is also echoed within philosophy of science: for example, see Longino (2020) for a paper explicitly calling for interactivity as an ontologically distinct explanatory target in biology. And within interdisciplinary scholarship: for example, Landecker and Kelty (2019) propose that we approach microbiome research from a metabolic interface angle, where the protagonists are not the humans nor the bacteria, but the metabolic products of their chemical relation, e.g., short-chain fatty acids.
One stool does not fit all
Our final case study is a review paper by Wilson et al. (2019), which assesses the research on the phenomenon of super donors in faecal microbiota transplantation (FMT). FMT is the process of taking faeces from a healthy donor and transplanting it either rectally or orally (in specialised capsules) to the colon of the patient; in a sense, attempting to restore a damaged environment by re-seeding it with a healthier and hopefully beneficial microbial ecosystem (Young & Hayden, 2016). FMT has become widely utilised as a treatment for infection with antibiotic resistant Clostridium difficile, but there are also a growing number of studies examining its potential for treating other conditions associated with dysbiosis of the gut microbiota, such as inflammatory bowel disease, irritable bowel syndrome, obesity, constipation, ulcerative colitis, and also autism and other neurological disorders (Antonopoulos & Chang, 2016; Cryan et al., 2020; Sharon et al., 2019; Zhou et al., 2019). These disease states are a fraught and complicated terrain for medicine, as they are rapidly on the rise and medicine has limited means with which to treat them (Eom et al., 2018). If FMT could be used to successfully treat these conditions, it would be a significant breakthrough. However, there are deep uncertainties about its efficacy and safety (Wilson et al., 2019), even if many patients are willing (Kahn et al., 2012). The stakes surrounding the idea of super donors, donors whose stool could be used across a range of disease states and patient types, are thus high, as it would represent an important step towards making FMT a more controlled and predictable procedure. Wilson et al.
(2019) survey a range of FMT studies, in order to assess whether this idea has traction. Their conclusion is that no, it appears that 'one stool does not fit all'; that clinical screening guidelines are insufficient in determining potential effects, and that donors and patients need to be matched in much greater detail than anticipated. As they write, "it appears a patient's response to FMT predominantly depends on the capability of the donor's microbiota to restore the specific metabolic disturbances associated with their particular disease phenotype" (Wilson et al., 2019, p. 7). Thus, while microbial diversity in the donor stool seems to be a possible predictor of a successful treatment, donor-recipient compatibility also plays a major role. This compatibility, the authors argue, can range from environmental factors including diet, xenobiotic exposure and microbial interactions, to genetic factors associated with immune response (although whether or not host genetics is a relevant factor is yet unclear, and other studies have emphasised the role of diet).
In other words, the complex of factors shaping the donor's microbiota, their life history, genetics and diet, gains environmentality for the patient. At the same time, the patient's microbiota acquires environmentality for the ecosystem donated through the stool, as the two enter into ecosystemic relations as soon as the transfer is made. The donor's past fibre consumption gains a kind of environmentality for the patient's present. The patient's past fibre consumption, or lack thereof, has environmentality for the microbial communities shaped by the fibre consumption, genetics etc. of the donor. In essence, this gets closer to the environmentality that each acquires for the other, through the 'subjective' capabilities of the donor's microbiota in the context of the patient's microbiome ecology.
Environmentality thus offers a relational mode of attentiveness in this clinical procedure. It shifts the locus of attention away from donor or patient factors in and of themselves, and into the metabolic interactivity that comes to occur through stool transfer. The efficacy of FMT cannot be reduced to the properties of one ecosystem alone, no matter how healthy, because how the different factors in the procedure come to acquire environmentality for each other changes what happens in the transfer. Even if the complex of factors that has produced the healthy body that produces donor stool (diet, living environment, socioeconomic factors, age, etc.) is partially transferable via the stool itself into the patient, the donor stool comes to acquire environmentality for the patient at the intersection of life histories.
Attending with environmentality
The case studies illustrate how we have used environmentality to expand our sensitivity to what counts as environment, in three examples of microbiome research. In concluding this section, we gather together some of the key aspects of the cases that environmentality drew our attention to, particularly where these stand in contrast to a more standard perspective on environmental factors. In the first case study, agroforestry-derived acorns acquire environmentality in relation to rats' ulcerative colitis, as they are metabolised through the pig the rats will later eat (Fernández et al., 2020). The 'environment' travels through, becomes, exits and re-enters flesh, disturbing the isolation of 'food' that is actually embedded in both commercial food systems and laboratory practices. Environmentality helps us to trace these metabolic lines across scales, holding threads of thought as foodstuffs move through other foodstuffs to impact health outcomes.
In the second case study (Kimura et al., 2020) we identified the relational pathway between the mother's fibre intake and her gut microbiota to be an environment for the offspring's metabolic health. Environmentality here helped us see that, just as not all ham is an equal 'environmental' factor for IBD (case 1) but is actually a process embedded in a production system, otherwise identical fibre is not equal in its environmentality to the metabolism of offspring of different mothers. Maternal fibre consumption acquires environmentality for the offspring's future metabolic health in mutual implication with the mother's (localised, temporally dynamic) gut microbiota. Similarly, the mother's microbiota acquires environmentality for the offspring only in triangulation with her fibre intake. In the final case (Wilson et al., 2019) environmentality helps us examine a currently unfolding clinical strategy: searching for super donors to provide stool for faecal microbiota transplants. Environmentality stretches the line between donor and recipient further back in time and across scales of microbial-human relations, helping to highlight the dependence of transplant success on the specific interaction between donor and recipient microbiota, and revealing a possible clash of ethos between the super donor strategy and key insights of microbiome research.
In sum, environmentality helped us trace lines of relationality across scales, back in time, through flesh, and across organismic boundaries. In doing so, the 'environmental factors' at play became stranger and more embedded in matter, their effects dependent on the particular relationalities at play. Ham is not just ham, fibre is not just fibre, a super stool is not super for everyone. This can potentially help us identify relevant configurations for deeper analysis, and support the more relational, context-embedded thinking that both biology and medicine are reaching towards.
An environmentality analysis is a shift of perspective, a 'making-strange' that echoes the effect of microbiome research itself on our conceptions of organism, environment, health. It does not solve the challenge of proliferation, and in some senses could be said to recapitulate it, drawing still more entities and relations under the 'environment' umbrella. Our contention is that by doing so locally we can get a better picture of what counts; attending to the shape-shifting proliferation of reciprocal relations but with a constrained and case-specific focus. Environmentality does not define the specific kind of causality at stake in the relations it highlights, or place limits on the kinds of entities and relations that can enter into those relations. Causality here may be direct or indirect, and like environmentality itself, resides in a particular relation rather than being an essential, generalizable quality. Environmentality cannot deliver definitions of entities or relations that will apply across contexts and instances-it rather helps elucidate some of the reasons that definitions can be challenging in this field, and we hope it can be a useful attentional tool, a sensibility whose strength lies partly in the efficacy (Stengers, 2008) of the word itself.
Discussion
In this concluding discussion, we revisit the landscape of microbiome research through the lens of environmentality. We return to the contrast between the genome and microbiome projects, and to the challenge of articulating relations between human and environment, showing how these disputed differences led us to environmentality. We bring to the foreground notions from Isabelle Stengers' work that help us to articulate the concept and consider the kind of work it might do, and reflect on how environmentality has illuminated for us some of the conundrums facing biology and the health sciences. Finally, we argue that the distance between medicine and public health is shortened through microbiome research.
In section 2, we situated microbiome research and its proliferation of notions of environment in the context of an intellectual inheritance from the human genome project. We argued that the two projects-genome (HGP) and microbiome (HMP)-can be peered at through the metaphors they generated: informational in the first case, ecological in the second.
One points to a human fully intelligible through its biology, a code to be deciphered; the other points to a human made strange through its biology, an environmentally porous new being that demands re-framing across disciplines. Stengers (2020, p. 228) has argued that the two projects actuate a contrast between the telescope and the microscope: the telescope allowing science to soar beyond earth-sensitive knowledge; the microscope opening up the realm of the small, a "world of teeming, swarming complicated activity" that overflows from all sides the tranquil categories that ordered it intelligibly.15 This contrast is of course a simplification. As Keller has argued (2020), the Human Genome Project (HGP) was most significant in its implications for our understanding of the relations between genes and genomes; and the rise of genomics has brought about a collapse in any framing of the genome as simply a collection of genes. Simultaneously, most (though not all) microbiome research within biology and biomedicine attempts to bring that "teeming, swarming, complicated", earthly activity into cultivation in controlled laboratory conditions-re-taming into intelligibility those 'escaped' tranquil categories. Nevertheless, the transition into genomics and metagenomics marks a general transition in biology towards knowledge-in-interaction. Keller (2020) wrote that genomics research reveals, at every level, a biology which is itself constituted by interactions; and has proposed that genomes be construed as reactive systems (Keller, 2020)-an interesting interplay between circumstantial responsivity and constraint.
This metaphor of a reactive system points to a dynamic interplay between circumstantial responsivity and structural stability. The microbiome clearly has less structural stability than the genome-its composition varies widely within and across individuals and over time. Looked at in one way, it can be argued to be environment for the host-an environmental factor we have the power to intervene in (Knight, 2019). Looked at in another way, the microbiome is cast as part of the self. For example, Gilbert et al. (2012, p. 325) have argued that organisms "have never been individuals" by any criteria of individuality, whether anatomical, developmental, physiological, immunological, genetic or evolutionary. And Rees et al. (2018) have argued the microbiome is integral to all core categories traditionally used to define self: immune defence, higher functions of the brain, and genome.16 Thus the microbiome might be seen as a tentacular bridge, or rather millions of tentacular bridges, invisible to the naked eye, extending the human into environment and pulling the environment back into the body. In that understanding, our identity is now cast as woven in a mesh of reciprocal captures (Stengers, 2010) with microbes. A mesh that extends far back in evolutionary time but is also continuously re-actualised in our daily activities; our dietary choices; our chemical exposures; our intake of antibiotics and other drugs; our sanitation practices; our place dwellings; our social networks.
It is in the attempt to get a practical handle on the depth and complexity of these reciprocal captures, and their consequences for human health, that environmentality is useful for us. As we attempt to think through the particularities of cases where human healths and microbial lives entangle, the concept captures the contextual nature of environment without its gravitas towards the external and contiguous. It helps us to think processually across elements of a landscape (case study 1), across scales and generations (case study 2), and ecologies entangled through clinical practice (case study 3). For example, a line drawn across generations from a mouse mother's fibre consumption to its offspring's obesity travels across scales through the microbiota, as it actions metabolic processes and developmental cascades. Importantly for us, environmentality helps with the process of seeing and attending to relations and interactions across temporalities and scales, macro to micro. It acts as an epistemological tool for thinking about particular situated cases where conceptions of environment are rendered strange through microbiome research.
Stengers (2020) introduces the notion of responsibility17 as a way of thinking about what is recognised as 'real' within biology, dictating what can be experimented upon and thus rendered intelligible. For instance, she argues that the microbes grown in Pasteur's laboratory were granted existing status, experimentally, because he was able to attribute to them responsibility for a broad range of other phenomena in the world: beer brewing, bread, epidemics. As microbiome research proliferates, so do such responsible entities. In the last decade, microbes have been attributed potential responsibility for an astonishing range of phenomena in human health, from ulcerative colitis to metabolic health, to evolution of the nervous system (Klimovich et al., 2018), and even to "control of brain development, function, and behavior by the microbiome" (Sampson & Mazmanian, 2015, p. 565). Yet it becomes increasingly clear that the living world does not fit neatly within our categories (of which microbes are one) and biology is, as Keller put it (2020), itself constituted of interactions. Furthermore, as we discovered in the case studies, the menu of responsible entities within microbiome research resists full experimental intelligibility-a resistance reflected in translational challenges.
As microbes rise to visibility then, so do our structures of thought. It becomes clear, on the one hand, that the conceptual scaffolding we currently have for thinking the living world is not adequate to its complexity. The microbial world is, to borrow from Law (2016), baroque in sensibility-it overflows categories, changes and unfolds as we peer into its detail, and resists an overview. Incorporating its richness into our knowledge may require that we learn to think beyond categories-that we surround ourselves with tools and epistemologies for thinking beyond parts that add up to a neat whole (Latour & Weibel, 2020; Mol & Law, 2002). Process ontologies (e.g., Dupré & Nicholson, 2018) and metabolic thinking (Landecker, 2011; Landecker & Kelty, 2019) may come to our aid. As Latour and Weibel (2020, p. 17) argue, "landing on Earth requires a different view of the material world than has been framed, delineated and entrenched since the modern period". It is striking that this 'different view' is emerging as a perspectival one across many different disciplinary contexts. From the philosophical debate about biological individuals and holobionts (e.g., Dupré, 2012; Suárez & Stencel, 2020); to concepts such as situated knowledges (Haraway, 1998), the embedded body (Niewöhner, 2011) and situated biologies (Niewöhner & Lock, 2018); proposals to overturn a "view from nowhere" imaginary of the planet (Arènes, 2018, p. 15); microbial metabolisms as material and metaphor in creative practices (e.g., Bencard et al., 2020); or calls for a more contextual public health (Cohn et al., 2013; Sariola & Gilbert, 2020).
It is clear that our concepts and structures of thought are shaken and stirred through the biological protagonism of the microbial. Furthermore, we see that concepts themselves have material consequences; that they are agents in the world. Landecker, thinking through the consequences of 20th and 21st century industrialisation of metabolism at a recent presentation, argues that our very concepts shape the biology we come to have: concepts sediment into bodies, into biologies and into societies. There is such a thing as the biology of history (Landecker, 2016)-antibiotic resistance is a case in point in the making. Tragically, so is the ongoing Covid-19 pandemic crisis. Microbes live and evolve in a world of human concepts-and alongside their histories, our own, and our healths.18 In a (bio)medical world, it becomes imperative that we attend to the medicine of history.
What, then, do we see when we look at contemporary medicine from the perspective of environmentality? Place-neutral medicine derived much of its explanatory power from its ability to universalise and transfer knowledge and treatments across bodies. One patient's tuberculosis was functionally and categorically similar to another's. Everything was registered and diagnosed within the body. As the post-industrial health landscape shifts more and more towards disease states that are multifactorial and lifestyle-embedded (obesity, allergies, autoimmune diseases, gut dysbiosis, etc.), it is becoming increasingly vital to include environmental factors in understanding health and disease (Fuller, 2017). Indeed, the complex totality of the organism's environmentally enmeshed life is pushing its way forcibly into biomedical research. Microbiome research is one example; another is the emerging notion of the exposome (Wild, 2005), a term that refers to all the health-relevant exposures of an individual over their lifetime. Yet this fits somewhat uneasily with the thrust toward the development of therapeutic drugs to stave off and control the effects of these 'lifestyle diseases'; and with the development of one-type-fits-most probiotics, prebiotics, and even psychobiotics.
Amidst incomplete knowledge, public health policy can play a role that clinical biomedicine, charged with offering individually-oriented therapies for identifiable conditions, cannot (at this moment) do. Whilst we still understand little of the complexities of causal pathways, we do know that the lives and healths of humans are profoundly implicated with the ecologies of their resident and neighbouring microbial communities. In public health, where the central unit is communities in situ, interventions can potentially be designed to nurture more probiotic environments (borrowing from Lorimer, 2020)-through policy, community engagement and cross-sector dialogue. As Greenhough and Lorimer have recently argued (personal communication, March 25, 2021), "microbiome research might call for a more provincial kind of understanding: How are healthy microbiomes configured through conjunctions of environmental conditions & cultural practices, carried out at particular places over generations?".
In this paper, we have argued that microbiome research insists on the co-constitution of organism and environment: clinically relevant factors are obscured when an organism is extracted from environment or studied without reference to it. This is not a new insight-the problems of translation between laboratory animal studies and the real-world conditions they model, the difficulties of zoo breeding programs, and the intransigence of health inequities all speak to it. Yet the entanglements behind these difficulties are often treated as noise to be extracted, rather than as constitutive and a potential resource for understanding. This is often for good reason-to yield quantifiable forms of knowledge, or to deal with resource limitations or environmental degradation-but we argue that the time has come to foreground co-constitution and environmental embeddedness. The time is ripe for a more perspectival body: within biology, biomedicine and medicine.
A worm plucked from the soil and placed on the kitchen table cannot show us what it is and does; it is constituted via its enmeshment with the soil microbiota, meteorological conditions, daily and seasonal cycles, other organisms, fertiliser and shaking of the ground from traffic and drilling. We cannot truly address and optimise its health without attending to this complex of factors. Different as humans are from earthworms in their porosity to environment, the difference "is one of degree, not kind".19 Gut microbiome research is revealing that our biological enmeshment with what we call environment is profound. The soil can no longer stay outside the clinic.
Financial support
This study was supported under a core group grant from the Velux Foundation, Denmark (00017008) to Louise Whiteley (PI) and Adam Bencard (Co-PI), and a Novo Nordisk Foundation Mads Øvlisen Postdoctoral fellowship in Art and Natural Sciences to Adam Bencard (NNF17OC0024454). The work was also supported via internal funding from the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR), an independent research center at the University of Copenhagen partially funded by an unrestricted donation from the Novo Nordisk Foundation (NNF18CC0034900).
The relationship between borderline personality disorder and bipolar disorder
It is clinically important to recognize both bipolar disorder and borderline personality disorder (BPD) in patients seeking treatment for depression, and it is important to distinguish between the two. Research considering whether BPD should be considered part of a bipolar spectrum reaches differing conclusions. We reviewed the most studied question on the relationship between BPD and bipolar disorder: their diagnostic concordance. Across studies, approximately 10% of patients with BPD had bipolar I disorder and another 10% had bipolar II disorder. Likewise, approximately 20% of bipolar II patients were diagnosed with BPD, though only 10% of bipolar I patients were diagnosed with BPD. While the comorbidity rates are substantial, each disorder is nonetheless diagnosed in the absence of the other in the vast majority of cases (80% to 90%). In studies examining personality disorders broadly, other personality disorders were more commonly diagnosed in bipolar patients than was BPD. Likewise, the converse is also true: other axis I disorders such as major depression, substance abuse, and post-traumatic stress disorder are also more commonly diagnosed in patients with BPD than is bipolar disorder. These findings challenge the notion that BPD is part of the bipolar spectrum.
effective,23,24 and the possible overprescription of medications that have little benefit and carry the risk of medically significant side effects.25 Because of the potential treatment implications, it is clinically important to recognize both bipolar disorder and BPD in patients seeking treatment for depression, and it is important to distinguish between the two. However, this presupposes that each is a valid diagnostic entity. During the past 20 years there have been increasing suggestions that BPD should be conceptualized as part of the spectrum of bipolar disorder.
Advocates of the bipolar spectrum suggest that treatments that have been found effective in treating bipolar disorder should be used when treating patients with BPD because of its inclusion on the bipolar spectrum. 6,[26][27][28] Literature reviews considering whether BPD belongs to the bipolar spectrum have reached differing conclusions. Smith et al 29 suggested that a strong case could be made that a significant percentage of patients with BPD fall into the bipolar spectrum, and Belli et al 30 concluded that the two disorders are closely linked in phenomenology and treatment response. Antoniadis et al 31 and Coulston et al 32 did not draw a conclusion regarding BPD's inclusion on the bipolar spectrum, whereas Paris et al 33 and Dolan-Sewell et al 34 concluded that empirical evidence did not support BPD's link to the bipolar spectrum. Sripada and Silk, 35 reviewing neuroimaging studies, noted that there were some areas of overlap and some differences between BPD and bipolar disorder. Some of the authors of these reviews noted that few studies have directly compared patients with bipolar disorder and BPD, and they called for such empirical data to help clarify the relationship between the two disorders. 32,35 In the present review we focus on the most studied question on the relationship between BPD and bipolar disorder-their diagnostic concordance. More than 30 studies have examined the frequency of bipolar disorder in patients with borderline personality disorder, or the frequency of BPD in patients with bipolar disorder. We address the following questions: (i) What is the frequency of each disorder when the other is present? (ii) Is the level of co-occurrence elevated? That is, is the prevalence of BPD significantly higher in patients with bipolar disorder than in other psychiatric disorders? (iii) Is BPD the most common personality disorder in bipolar patients or are other personality disorders more frequent?
Methodological issues in personality disorder assessment
Any review of a topic involving personality disorders needs to consider assessment methodology, because assessment issues can have a significant impact on the findings. In short, there should be some consideration of the who, what, and when of personality disorder assessment. To be sure, these are also issues in the evaluation of Axis I disorders, though they have not been studied as much as they have been studied in the personality disorder field. Who should be questioned when assessing personality disorders-the target individual or someone who knows the target individual well? The evaluation of personality disorders presents special problems that may require the use of informants. In contrast to the symptoms of major Axis I disorders, the defining features of personality disorders are based on an extended longitudinal perspective of how individuals act in different situations, how they perceive and interact with a constantly changing environment, and the perceived reasonableness of their behaviors and cognitions. Only a minority of the personality disorder criteria are discrete, easily enumerated behaviors. For any individual to describe their normal personality they must be somewhat introspective and aware of the effect their attitudes and behaviors have on others. But insight is the very thing usually lacking in individuals with a personality disorder. DSM-IV notes that the characteristics defining a personality disorder may not be considered problematic by the affected individual (ie, ego-syntonic) and suggests that information be obtained from informants. Research comparing patient and informant report of personality pathology has found marked disagreement between the two sources of information.36-39 Only one of the studies examining the frequency of personality disorders in patients with bipolar disorder examined the impact of informant assessment on the rates of personality disorder diagnoses.40
Peselow et al40 presented personality disorder rates based on independent patient and informant interviews, and we have included in Table I the results based on the patient information in order to be consistent with other studies. What measures should be used to diagnose personality disorders? Several instruments exist, and while there is no evidence that any one interview schedule is more reliable or valid than another, there is consistent evidence that prevalence rates are higher based on self-administered scales than clinician interviews.41-43
When should personality disorders be assessed during the course of the mood disorder? The impact of psychiatric state on personality disorder assessment has been well established, and to minimize this effect some researchers evaluate personality disorders after a patient has improved and is in a euthymic state.44-46 The potential problem with this approach is that it underestimates the prevalence of personality disorders because the presence of personality pathology predicts poorer outcome. Therefore, we included all studies, regardless of when personality disorders were assessed, with the plan to examine the potential impact of psychiatric state on prevalence rates.
Excluded studies
To obtain a systematic and comprehensive collection of published studies of comorbidity, we conducted a Medline and PsycInfo search on the terms bipolar and borderline. We reviewed the titles from this search to identify studies that potentially included information on the comorbidity of bipolar disorder and BPD. We also identified studies in reference lists of identified studies and review articles. Several studies that have been included in other reviews of bipolar disorder-BPD comorbidity were excluded from the present review. Self-report measures of personality disorders are more appropriately considered screening instruments than diagnostic measures. Consistent with this, as noted above, prevalence rates based on self-report scales are higher than those based on clinician-administered interviews. We therefore did not include studies that relied on self-report scales to make personality disorder diagnoses. [47][48][49] We also did not include studies in which the personality disorder diagnoses were based on unstructured clinical evaluations 46,[50][51][52][53][54][55][56][57] because these evaluations are less reliable 58,59 and underdetect personality disorders. 20,60 Studies in which diagnoses were based on chart review were also excluded 61,62 because diagnoses were based on unstructured evaluations.
Reports based on overlapping samples were included only once. We included the data from Pica et al,63 but not from Jackson et al,64 because the samples overlapped. It was unclear whether the two reports by Benazzi71,72 were overlapping. We concluded that they were based on different samples because the sample sizes were different, the second paper referenced the first without indicating that the samples overlapped, and the time frames over which the samples were collected were relatively brief (6 months and 10 months) and were consistent with the rate of recruitment over separate periods of time.
Coid et al 73 studied the frequency of bipolar disorder in prisoners with BPD who manifested affective instability. Because of the uncertain impact that requiring affective instability might have on the prevalence of bipolar disorder, this study was excluded. We also excluded the report by Schiavone et al 74 because the authors only recorded one personality disorder diagnosis even when patients had more than one. Thus, a patient with BPD who had another personality disorder that was considered more clinically significant than BPD would not be counted as having BPD. This would artificially reduce the number of patients with bipolar disorder who would be diagnosed with BPD. The report by Zanarini and colleagues 75 on the frequency of Axis I disorders in patients with BPD was excluded because they indicated that patients with a history of a major psychotic disorder such as schizophrenia or bipolar disorder were excluded from the sample. It is therefore not surprising that no patients were diagnosed with bipolar disorder. We excluded studies of the frequency of BPD in patients with cyclothymic temperament, 76 a construct that is not in DSM-IV and differs from cyclothymic disorder.
Frequency of borderline personality disorder in patients with bipolar disorder
Twenty-four studies reported the frequency of BPD in patients with bipolar disorder (Tables I and II). Most studies were of psychiatric outpatients, and only four were of samples of inpatients (or predominantly inpatients). The majority of the studies assessed BPD when the patients were in remission (n=9) or with no more than mild symptom severity (n=6); the remainder (n=9) assessed BPD when the patient was symptomatic.
Table I. Frequency of BPD in patients with bipolar disorder: study (ref), n, % (n) with BPD
Alnaes (80): 19, 0.0% (0); 19, 36.8% (7)
Barbato (85): 42, 14.3% (6)
Benazzi (71): 50, 12.0% (6)
Benazzi (72): 78, 11.5% (9)
Brieger (86): 60, 6.7% (4)
Carpenter (87): 23, 0.0% (0)
Carpiniello (78): 57, 31.6% (18)
Comtois (81): 34, 23.5% (8)
Dunayevich (88): 56, 5.4% (3)
Garno (77): 100, 17.0% (17)
Gasperini (89): 54, 5.5% (3)
George (90): 52, 3.8% (2)
Joyce (82): 26, 11.5% (3); 19, 31.6% (6)
Loftus (91): 51, 19.6% (10)
Perugi (83): 25, 48.0% (12)
Peselow (40): 47, 23.4% (11)
Pica (63): 26, 11.5% (3)
Preston (123): 35, 40.0% (14)
Rossi (92): 71, 29.6% (21)
Ucok (93): 90, 10.0% (9)
Vieta (67): 129, 6.2% (8)
Vieta (68): 40, 12.5% (5)
Wilson (84): 30, 50.0% (15)
Zimmerman (79)
Some studies focused on either bipolar I or bipolar II disorder, and many did not discuss the bipolar I-bipolar II distinction. Two reports specified the number of patients with bipolar I and bipolar II disorder, but only reported the prevalence of BPD for the entire group without specifying the prevalence of BPD in the bipolar subtypes.77,78 Only two groups of investigators examined the frequency of BPD in patients with bipolar I and bipolar II disorder.67,79 Across all studies, the frequency of BPD in the 1255 patients with bipolar disorder was 16.0% (n=201). In the 12 studies of 598 patients with bipolar I disorder, the prevalence of BPD was 10.7% (n=64). In the seven studies of 261 patients with bipolar II disorder, the prevalence of BPD was twice as high (22.9%, n=60). Only two groups of investigators reported data on both bipolar I and bipolar II disorder.
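The pooled rates quoted here are patient-weighted averages: total BPD cases divided by total patients across the pooled studies. A minimal illustrative sketch of this pooling, using three of the study counts reported above (an arbitrary subset chosen for illustration, so the pooled value below applies to these three studies only, not the full 24-study pool):

```python
# Pooled prevalence across studies = total cases / total n.
# (cases, n) pairs for three of the studies reported in Table I.
studies = {
    "Vieta (67)": (8, 129),
    "Vieta (68)": (5, 40),
    "Barbato (85)": (6, 42),
}

cases = sum(c for c, _ in studies.values())   # 19
total = sum(n for _, n in studies.values())   # 211
pooled_pct = 100 * cases / total
print(f"{pooled_pct:.1f}%")  # 9.0% for this three-study subset
```

Note that this weights each study by its sample size, so large studies such as Vieta (n=129) pull the pooled estimate toward their own rate.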
In two separate reports Vieta et al67,68 found that BPD was diagnosed twice as frequently in patients with bipolar II disorder as in those with bipolar I disorder (12.5% vs 6.2%). While they did not statistically compare these prevalence rates, we conducted a chi-square test based on the raw data provided in the two articles and found that the difference was not significant (χ2 = 1.71, ns). Similarly, Zimmerman et al79 reported a higher prevalence of BPD in patients with bipolar II disorder, but the difference was not significant. Thus, while the summary across studies suggests a significantly higher rate of BPD in patients with bipolar II than bipolar I disorder, the only two studies that allowed for a direct comparison did not find a significant difference between the two groups. In the seven studies of 389 patients that either did not specify the type of bipolar disorder, or did not present results separately for bipolar I and bipolar II disorder, the rate of BPD was similar to the rate in patients with bipolar II disorder (20.8%, n=81). Nine studies indicated that they assessed patients upon presentation for treatment or when the patients were symptomatic.71
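The reported chi-square for the Vieta comparison can be reproduced from the raw counts (8 of 129 bipolar I patients and 5 of 40 bipolar II patients diagnosed with BPD). A sketch of Pearson's 2x2 chi-square without continuity correction, the variant that matches the quoted value; the helper function name is ours:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Bipolar I: 8 with BPD, 121 without; bipolar II: 5 with BPD, 35 without.
chi2 = chi_square_2x2(8, 121, 5, 35)
print(round(chi2, 2))  # 1.71, below the 3.84 critical value at alpha = .05, so ns
```

With 1 degree of freedom the critical value at the .05 level is 3.84, so 1.71 is indeed nonsignificant, as the text states.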
Is borderline personality disorder the most frequent personality disorder in patients with bipolar disorder?
Fifteen studies examined the full range of personality disorders in patients with bipolar disorder.40,63,67,68,80,82,85-93 In only four of the 15 studies was BPD the most frequent diagnosis.40,68,82,91 Histrionic personality disorder was the most common diagnosis in four studies63,67,85,89 and tied for the most common in another two studies,90,93 and obsessive-compulsive personality disorder was the most common in three studies86,87,92 and tied for the most common in another two studies.90,93 While this suggests that there is no clear evidence that BPD is the most common personality disorder in patients with bipolar disorder, it is noteworthy that BPD was the most frequent personality disorder diagnosis in the only two studies of bipolar II disorder.68,82
Is borderline personality disorder more common in patients with bipolar disorder than psychiatric control groups?
Eight studies compared the frequency of BPD in patients with bipolar disorder and major depressive disorder.33,71,81-83,86,89,92 Four studies found no difference between the two groups,81,86,89,92 whereas three of the four studies of bipolar II disorder found a higher rate of BPD in the bipolar patients.33,71,82,83 Another study found no difference in the rate of BPD in patients with bipolar disorder and schizophrenia.63 One study compared the frequency of Axis I disorders in a heterogeneous sample of psychiatric outpatients, and sufficient data were provided to calculate the rate of BPD in patients with different diagnoses.79 BPD was significantly more frequent in patients with bipolar disorder than in patients with major depressive disorder, as well as more common than in patients with any psychiatric disorder. Another study of psychiatric outpatients with mixed diagnoses found a lower rate of BPD in patients with bipolar disorder.80 Thus, four of ten studies found a significantly higher rate of BPD in patients with bipolar disorder compared with a psychiatric control group, and three of these four positive studies were comparisons of bipolar II disorder versus major depressive disorder.
Frequency of bipolar disorder in patients with borderline personality disorder
Twelve studies reported the frequency of bipolar disorder in patients with BPD (Tables III and IV). Three studies of psychiatric outpatients with mixed diagnoses and one study of patients with a major depressive episode contributed data both to this analysis and to the previous analysis examining the frequency of BPD in patients with bipolar disorder. 79-81,83 Most studies were of psychiatric outpatients, and only two were samples of inpatients. 94
Co-occurrence of bipolar disorder and borderline personality disorder in nonpatient samples
To this point we have summarized studies of psychiatric patients. Only four studies of nonpatient samples have examined the association between bipolar disorder and BPD. Because comorbidity may be associated with seeking treatment, an examination of the degree of co-occurrence should be based on non-treatment-seeking samples.
While there are many studies of the epidemiology of personality disorders, 97 we are aware of only four studies that reported bipolar-BPD comorbidity. Zimmerman and Coryell 98 assessed DSM-III Axis I and Axis II disorders in 797 first-degree relatives of healthy controls and psychiatric patients. Trained interviewers experienced in evaluating psychiatric patients administered the fully structured Diagnostic Interview Schedule (DIS) 99 for Axis I disorders and the semi-structured SIDP for Axis II disorders. BPD was the third most frequently diagnosed personality disorder in individuals with bipolar disorder (obsessive-compulsive and antisocial personality disorders were the most frequent diagnoses). The rate of BPD was nearly twice as high in bipolar disorder as in major depressive disorder (12.5% vs 6.9%), though this difference was not significant. The rate of bipolar disorder in the subjects with BPD was 15.4%, significantly higher than the rate in individuals [text truncated in extraction; a table listing rates of cyclothymia and bipolar I/II disorder in BPD samples (Akiskal, Alnaes, Comtois) was garbled here, along with intervening text introducing the subsequent studies]. These subjects also completed the IPDE screening questionnaire. A multiple imputation method was used to approximate the diagnosis of BPD in the NCS-R respondents who completed the IPDE screening questionnaire but were not administered the diagnostic interview. DSM-IV Axis I diagnoses were based on the fully structured Composite International Diagnostic Interview. 104 The Axis I diagnostic information presented in the article focused on diagnoses in the past year, and the data for bipolar disorder combined bipolar I and bipolar II disorder.
The rate of bipolar I or II disorder in subjects with BPD (14.8%) was nearly identical to the rates reported by Zimmerman and Coryell 98 and Swartz et al. 100 The prevalence of BPD in subjects with bipolar I or bipolar II disorder was 15.5%. Odds ratios (OR) were computed controlling for demographic variables. The odds ratio between BPD and bipolar disorder (12.5) was higher than all other odds ratios between BPD and Axis I disorders except any impulse control disorder (OR=14.4) and intermittent explosive disorder (OR=12.5). Grant et al 105 conducted face-to-face interviews with approximately 35 000 participants in the second wave of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Diagnoses were based on the DSM-IV version of the fully structured Alcohol Use Disorder and Associated Disabilities Interview Schedule. 106 The overall rate of BPD was 5.9%, higher than the rates reported in other epidemiological surveys. 101,107,108 The prevalence of BPD in respondents with a lifetime history of bipolar disorder was high (bipolar I, 35.9%; bipolar II, 26.7%). The rates were even higher when the analyses were limited to bipolar diagnoses in the past 12 months (bipolar I, 50.1%; bipolar II, 39.4%). The higher rates for diagnoses based on the past year are likely due to BPD being associated with greater chronicity and recurrence of bipolar disorder episodes. The lifetime prevalence of bipolar I and bipolar II disorder among individuals with BPD was 31.8% and 7.7%, respectively. Grant et al 105 computed odds ratios between BPD and the lifetime rates of 15 Axis I disorders, controlling for demographic variables, and found that the odds ratio was highest for bipolar I disorder (OR=9.9), whereas for bipolar II disorder several disorders had higher odds ratios. When the presence of other Axis I disorders was also controlled for, lifetime diagnoses of bipolar I and bipolar II disorder had the highest odds ratios with BPD.
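The odds ratios cited in these surveys were estimated from models adjusting for demographic variables; the basic unadjusted statistic, however, is just the cross-product ratio of a 2x2 comorbidity table. A minimal sketch with entirely invented counts (not data from any of the cited studies):

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
        a = cases with the exposure,     b = non-cases with the exposure
        c = cases without the exposure,  d = non-cases without the exposure
    """
    return (a * d) / (b * c)

# Hypothetical counts for illustration only: 40 respondents with both BPD
# and bipolar disorder, 60 with BPD only, 100 with bipolar disorder only,
# and 1800 with neither.
print(odds_ratio(40, 60, 100, 1800))  # 12.0
```

Published survey estimates differ from this raw ratio because they are computed within regression models that control for demographics (and, in some analyses, other Axis I disorders).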
However, another report from the Wave 2 assessment of the NESARC study, on the association between narcissistic personality disorder and Axis I disorders, raises questions about the specificity of the association between BPD and bipolar disorder. Stinson et al 109 computed odds ratios between narcissistic personality disorder and the lifetime rates of the same 15 Axis I disorders, controlling for demographic variables, and, similar to the results of Grant et al 105 on BPD, found that the odds ratio was highest for bipolar I disorder (OR=5.2), whereas for bipolar II disorder several disorders had higher odds ratios.
To summarize the results of these four epidemiological and quasi-epidemiological studies, three were consistent in finding that approximately 15% of the community respondents with BPD were diagnosed with bipolar disorder, 98,100,101 whereas the NESARC data were an outlier, with a combined bipolar I and bipolar II prevalence of nearly 40%. 105 The NESARC study was also an outlier in finding a higher prevalence of bipolar disorder than other epidemiological studies. It is not surprising that significant odds ratios were found between bipolar disorder and BPD. However, BPD was significantly associated with other Axis I disorders as well. The specificity of the relationship between BPD and bipolar disorder was not clearly established. The only report of the full range of personality disorders found that BPD was the third most frequent diagnosis in adults with bipolar disorder, and that the rate of bipolar disorder in subjects with BPD was not significantly higher than the rate in subjects with other personality disorders. 98 However, the sample size in that study was relatively small, and diagnoses were based on DSM-III, which had not yet officially recognized bipolar II disorder.
Summary and conclusions
The goal of this review was to examine the relationship between bipolar disorder and BPD, particularly the specificity of the relationship. While many studies have examined comorbidity rates, particularly in psychiatric patients, methodological considerations limit some of the conclusions that can be drawn. How frequent is BPD in bipolar patients, and does this vary by subtype of bipolar disorder? Across studies, approximately 10% of patients with BPD had bipolar I disorder and another 10% had bipolar II disorder; thus, a total of about 20% of patients with BPD were diagnosed with bipolar disorder. Likewise, approximately 20% of bipolar II patients were diagnosed with BPD, though only 10% of bipolar I patients were. Psychiatric status at the time of assessment did not appear to influence these rates.
Most of the studies in the present review were based on small sample sizes; only 1 of the 24 studies summarized in Table II had a sample size greater than 100. Small sample sizes result in large confidence intervals, and this contributes to the wide variation in prevalence rates. The small-scale studies typically focused on only one bipolar disorder subtype, with only two investigators providing information on both bipolar I and bipolar II disorder.
Much has been written about the bipolar-borderline link, and some authors have suggested that BPD is on the bipolar spectrum. 76,110 It was therefore surprising that, in the 15 studies examining the full range of personality disorders in patients with bipolar disorder, BPD was the most frequent diagnosis in only four. Obsessive-compulsive and histrionic personality disorders were more frequently the most commonly diagnosed personality disorders. This raises questions about the specificity of the bipolar-borderline link. However, BPD was the most frequent personality disorder in the only two studies of bipolar II disorder. Consistent with the stronger association between BPD and bipolar II disorder than bipolar I disorder, three of the four studies comparing the prevalence of BPD in bipolar II patients with psychiatric control groups found a significant difference, versus one of the six studies of bipolar I or unspecified bipolar disorder. Why is there a seemingly stronger link between bipolar II disorder and BPD? We believe that this is primarily related to diagnostic error. As one of us has discussed elsewhere, when diagnosis is based on the presence of symptom episodes that occurred in the past, as is the case with bipolar disorder in currently depressed patients, diagnostic clarity is sometimes elusive, resulting in some false-negative as well as false-positive diagnoses. 111 DSM-IV is a categorical system that provides descriptive diagnostic criteria for psychiatric syndromes. The definition of mental disorder in DSM-IV notes that these syndrome descriptions represent underlying behavioral, psychological, or biological dysfunction, albeit imperfect representations of the potentially unknown, underlying core dysfunction. The descriptive diagnostic criteria should not be considered the last word on whether a patient has the illness in question; instead, the criteria should be conceptualized as a type of test for the underlying, etiologically defined illness.
Accordingly, as with any other diagnostic test, diagnoses based on the DSM-IV criteria produce some false-positive and some false-negative results. That is, some patients who meet the DSM-IV diagnostic criteria will not have the illness (ie, false positives), and some who do not meet the criteria because their symptoms fall below the DSM-IV diagnostic threshold will have the illness and incorrectly not receive the diagnosis (ie, false negatives). According to this conceptualization, the gold standard with which DSM-IV diagnoses are being compared is a not-yet-discovered index of illness such as a biomarker. The lack of congruence between phenomenological diagnosis and underlying pathophysiology is one cause of diagnostic error. A second cause is related to the limits of the accuracy of retrospective recall and reporting. Transient episodes of affective instability and emotional lability associated with borderline personality disorder might be confused with hypomanic episodes, thereby resulting in false-positive diagnoses. 33,112 This is not to suggest that affective instability is pathognomonic for borderline personality disorder, but rather to illustrate how phenomenological similarities might result in diagnostic error. This error is likely greater with bipolar II disorder than bipolar I disorder, and we hypothesize it would be even greater if the diagnostic thresholds for bipolar disorder were lowered below the current DSM-IV standard. Thus, some patients diagnosed with both borderline and bipolar II disorders are likely to have false-positive bipolar disorder diagnoses, and some likely have false-positive BPD diagnoses.
In clinical practice, additional sources of diagnostic error include clinical unfamiliarity with Axis II disorders, 113 the perception that bipolar disorder is more easily treated (thus "erring on the side of caution"), 114 the desire to protect patients from a stigmatizing diagnosis, 115 or lower reimbursement rates for treating Axis II vs Axis I disorders. 115 To us, the question is not whether diagnostic error exists, but rather which type of error predominates and what can be done to reduce such errors. There is much need for research comparing patients with BPD to those with bipolar disorder, particularly bipolar II disorder. As noted in the introduction, few studies have compared these groups. Moreover, the few studies that have directly compared the two disorders have been based on small samples and examined a limited number of variables. 84,116-120 We are not aware of any study that has focused on depressed patients presenting for treatment and compared those who are diagnosed with either bipolar II disorder or BPD, a clinically important distinction faced by clinicians. A direct comparison of these two groups of patients could identify variables that would assist clinicians in making this differential diagnosis, and subsequently in making treatment decisions. Similarly, few direct comparisons of patients with bipolar disorder and BPD have been conducted with respect to treatment. Even fewer include groups of patients with comorbid bipolar disorder and BPD in their comparisons, and those that do neglect one of the other two groups. Similar to other studies reviewed here, existing treatment studies suffer from small sample sizes, 56,121 use unclear diagnostic methods, 122 or rely on atypical measures to diagnose one or both disorders. 123 With some exceptions, they also largely use pharmacotherapy, typically with medications such as mood stabilizers that have been shown to be effective for treatment of bipolar disorder.
Importantly, the preferential use of medication trials neglects the psychosocial and behavior-change interventions inherent in treatments for BPD. More research is needed on the degree to which these disorders benefit from various treatments relative to one another, and on best treatment practices for comorbid BPD and bipolar disorder.
An examination of comorbidity, and the specificity of the association, is informative regarding the link between BPD and the bipolar spectrum; however, the most informative approach towards answering this question is to compare depressed patients with and without BPD on validators that are specific for bipolar disorder. 124 Thus, the demonstration that compared with depressed patients without BPD, depressed patients with BPD have more anxiety disorders, more substance-use disorders, and a younger age of onset, does not support the bipolar spectrum hypothesis because these differences would be expected for BPD as well. Instead, studies attempting to demonstrate that BPD is part of the bipolar spectrum should focus on variables that are specific to bipolar disorder such as a family history of bipolar disorder which would not be expected to be elevated in BPD probands unless BPD was part of the bipolar spectrum.
In the final analysis, though, we believe that the results of the present review challenge the notion that BPD is part of the bipolar spectrum. While the comorbidity rates are substantial, each disorder is nonetheless diagnosed in the absence of the other in the vast majority of cases (80% to 90%). In studies examining personality disorders broadly, other personality disorders such as histrionic and obsessive-compulsive were more commonly diagnosed in bipolar patients than was BPD. Although not reviewed here, the converse is also true: other Axis I disorders such as major depression, substance abuse, and post-traumatic stress disorder are also more commonly diagnosed in patients with BPD than is bipolar disorder. 115 In both of these cases, rates of comorbidity alone have not led to the argument that the disorders exist along the same spectrum. In valid cases of co-occurrence, it is possible that this reflects a common etiology where risk factors for one disorder lead to the co-occurrence of the other. 125
From Academia to Reality Check: A Theoretical Framework on the Use of Chemometric in Food Sciences
There is no doubt that current knowledge in chemistry, biochemistry, biology, and mathematics has led to advances in our understanding of food and food systems. However, the so-called reductionist approach has dominated food research, hindering new developments and innovation in the field. In the last three decades, food science has moved into the digital and technological era, bringing several challenges resulting from the use of modern instrumental techniques, computing, and algorithms incorporated into the exploration, mining, and description of data derived from this complexity. In this environment, food scientists need to be mindful of the issues (advantages and disadvantages) involved in the routine application of chemometrics. The objective of this opinion paper is to give an overview of the key issues associated with the implementation of chemometrics in food research and development. Please note that specifics about the different methodologies and techniques are beyond the scope of this review.
Introduction
Advances in biology, biochemistry, chemistry, and mathematics have increased our knowledge and understanding of the main issues facing food systems (e.g., food integrity, safety, omics) [1-3]. Nevertheless, only a limited number of the interactions and cause-effect associations reported in foods are well understood, owing to the intricate nature of these relationships. This has hindered fundamental understanding of these singularities in food and, consequently, the development of new innovations to boost R&D and new applications in the food industry [1-3].
The so-called bottom-up ("reductionist") approach has dominated research in food science, where only one compound or nutrient is considered or analyzed, independent of the food matrix [4]. In this context, several scholars in the field have argued that this line of thinking has created an "unreal world view", in which chemical components, molecules, or nutrients (e.g., protein, carbohydrates, lipids) analyzed in isolation from the whole food matrix (e.g., beer, fruits, flour, grains) might not be solely responsible for explaining the observed differences in the food [5-10]. Understanding the inherent complexity of food will require correspondingly intricate answers; hence, studies in this space must move toward a more holistic, multidisciplinary, and integrative arrangement (e.g., a systems approach), as stated by several researchers in the field [5-10].
This new approach (e.g., systems approaches, omics) to the analysis of food and food systems has the capacity to address a level of complexity that has not been explored before by food scientists and that many researchers in the field still consider a "scientific utopia" [5-10]. In the last 30 years, the modern food industry (and food science) has moved into the digital and technological age, providing better tools to deal with the numerous challenges that result from the use of novel instrumental techniques and methods, hardware (e.g., computing, mobile telecommunications), and software (algorithms). These methods and techniques have been incorporated into the exploration, mining, and description of the data arising from such complexity (see Figure 1). Nevertheless, in this digital environment, food scientists need to be mindful of the issues (advantages and disadvantages) involved in the routine application of modern analytical and instrumental methods during the analysis of food and food systems [5,11-15]. Embracing this approach, researchers in food science have been proactive in the integration and evaluation of multivariate data analysis methods (chemometrics), as demonstrated by the increased number of published articles in the field (over 1300 articles have been published containing keywords such as "food", "chemometrics", and/or "multivariate data") (Web of Science, March 2019). During the last decade, more than 1000 papers alone were published demonstrating the ability of these methods to target issues related to food integrity and safety, authenticity, and applications of instrumental methods and techniques (e.g., near infrared, mid infrared, Raman, electronic noses and tongues), to mention a few examples.
This exponential growth in the number of available articles can be explained by the accessibility of instruments and chemometric software to both researchers and industry, allowing for the development of new applications. Please note that the keywords used in this search only included the words chemometrics, multivariate data analysis, and food (see Figure 2).
The objective of this opinion paper is to give an overview of the key issues associated with the implementation of chemometrics in food research and development. Please note that specifics about the different methodologies and techniques are beyond the scope of this review.
Chemometrics Linking the Univariate with the Multivariate World
According to Hopke (2003) [5], the assimilation of modern statistical and chemometric methods and techniques in food R&D has become more important since the 1980s. Unfortunately, the universal application of chemometrics in food science has been jeopardized by the selection or inappropriate use of the different techniques or methods available, as highlighted by Nunez and co-workers (2015) [2]. These authors also emphasized that several of these studies ignored the basic requirements for a proper experimental design before incorporating these methodologies into the analysis of food [2,3,16]. Furthermore, it has been shown that a prevailing lack of interest among scientists in performing complex mathematical analysis has led to misinterpretation of statistical results or misuse of the statistical packages available to perform the analysis [2,3,16]. At the same time, the ease of access to a wide range of commercial statistical software (some of it bundled with modern instrumental techniques) has provided researchers with valuable tools that facilitate the incorporation of statistical and mathematical methods to explore and develop new applications [2,3,16].
The development and fast growth of technology (e.g., computing power) has had an important influence on the routine application of mathematics and analytics in food science (e.g., R&D), as the availability of computing packages facilitates the statistical analysis of different types of datasets [2,3,16]. This has given researchers the ability to analyze data from food experiments in less time and to generate models and charts, or even to resolve complex mathematical processes, using diverse types of algorithms and pre-processing techniques. The availability of these tools has also facilitated the rapid expansion and integration of these methodologies in food sciences; however, it has also resulted in the inappropriate use of such methods, since users need to understand both the mathematics behind the model and the digital tools needed to create models, charts, etc. [2,3,16]. In many cases, misleading or wrong conclusions were drawn owing to a lack of understanding of the chemometric background. On a positive note, Granato and collaborators (2014) [3] highlighted the importance of the integration and use of these approaches in a new era of food research and development.
Tremendous gains have been made in analytical capabilities resulting from the integration of a diverse array of methods and techniques in food science. Despite this, researchers still face several issues: the practicality of the information gained from the data generated (e.g., the need for data integration tools) and the requirement for rigorous control systems to verify the integrity of the data generated, among other factors, remain constraints for users of these methods in food sciences [5,11-15].
A recent review defined chemometrics as "the application of statistical and mathematical methods, to handle chemical or process data" [17]. This review also highlighted that a more comprehensive definition of chemometrics was introduced by Massart and colleagues [18], according to whom chemometrics uses mathematics, statistics, and formal logic to design or select optimal experimental procedures, to maximize the interpretation of information from chemical data, and to gather deep knowledge about the system under study [17-19]. It is in this space that the widespread application of chemometrics in the modern food sciences is concerned with issues related to the analysis of big and multivariate data [17-19].
The routine use of statistics in food R&D focuses on the investigation of the effects of single variables (i.e., univariate analysis) by means of standard statistical tools (e.g., analysis of variance). Although the routine use of ANOVA delivers valuable data, detailed information about associations among variables, as well as other relevant information related to sampling, the sample, or the experiment, may be missing [8,9,20,21].
Nowadays, large datasets (e.g., with many variables and samples) are collected every day in most laboratories and industrial sites around the world owing to the introduction of modern instrumental methods [8,9,20-22]. Additionally, the standard and conventional methods and techniques currently in use tend to eliminate matrix interference by detaching or removing the analyte to be measured (chemically or physically), resulting in an apparently simple analytical process [8,9,20-23]. A systematic, multivariate approach, by contrast, allows for a better understanding of the intrinsic associations between the several constituents and properties that define a food.
The main characteristic of the various rapid analytical and instrumental techniques used by the food industry is that, in most cases, the parameters estimated (i.e., measured) during the analysis do not necessarily have a direct link with the analyte of interest, resulting in a proxy or correlative method [8,9,20-26]. Chemometric methods, in turn, make it possible to analyze food beyond the one-dimensional (univariate) space. Therefore, chemometric analysis and interpretation of the data can reveal properties, relationships, and levels of interference or interaction in the food matrix not easily observed when univariate analysis is used [8,9,20-26].
Researchers are concerned with the quantifiable interactions between dependent and independent variables as a source of evidence and data about the system (e.g., interactions, models, simulation charts, among others). In quantitative analysis, the development of a linear function or model that connects dependent and independent variables is one of the most common applications of these methods in food R&D [24,27-29]. The terms regression and calibration are often used interchangeably when reporting calibrations (e.g., fitting a model) or quantifying the associations between variables [24,27-29]. However, it is important to remember that such an association does not necessarily imply a cause-effect relationship [24,27-29]. Table 1 summarizes the most common algorithms used in several of the applications of chemometrics in food R&D.
The Importance of Experimental Design
Before sample analysis, data collection, mining, and interpretation of the results, the design of the experiment (DoE) (e.g., treatments, variables) is of fundamental importance in this approach [32-34]. However, this significant first step is often overlooked or misjudged in many of the applications or analyses reported. Moreover, a DoE founded on the assumption that only one variable changes relative to the others is no longer valid when state-of-the-art instrumental techniques and chemometrics are combined for the analysis of complex systems such as food [32,33]. Recent applications of chemometric methods and techniques have highlighted as a prerequisite the need to optimize the variables in combination with an appropriate DoE protocol for carrying out the experiments [32,33]. It has been demonstrated that a good DoE not only provides the means of exploring several different factors or interactions at the same time but also offers an efficient way to make savings in routine applications of any given method [32-34].
Sampling and Samples
The most often misjudged components of any analysis, which play a vital part during model building and mining of the data generated, are the sampling process and the sample itself [35-37]. One of the best-known applications of chemometric methods in the food industry is the development of calibration models [35-37]. During this process, finding a "robust model" encompasses, among other issues, the careful collection of appropriate samples to be incorporated into the model (e.g., for calibration development) [35-37].
The sampling method and the selection of the sample are undoubtedly the most important stages to consider before developing a calibration model [35-37]. This process involves different stages and ideally results in the selection of a wide range of samples covering current and future sources of variability (e.g., ranges in protein, temperature, moisture) to be subsequently measured [35-37]. Samples in both the training and test sets must belong to the same population (e.g., same origin, similar chemical or physical properties), as the model will not be able to predict samples outside these settings [35-37].
Consequently, samples collected for inclusion in a calibration must span the different expected levels of variability, and the selected samples must be distributed evenly between the calibration (training) and validation sets [35-37]. Any further sample to be incorporated into the model should be exposed to conditions (e.g., temperature, moisture, treatments) identical to those in the training set. The purpose of this is to generate the broadest possible range in composition and thereby compensate for unwanted variations in the system during the testing and routine use of the model [35-37].
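One widely used heuristic for selecting calibration samples that span the variable space is the Kennard-Stone algorithm. It is not discussed in this paper, but it is common in chemometric practice; the sketch below assumes samples are described by numeric variables and uses Euclidean distance:

```python
import numpy as np

def kennard_stone(X, n_select):
    """Select n_select calibration samples that span the variable space.
    Starts from the two most distant samples, then repeatedly adds the
    sample farthest from all those already selected (max-min distance)."""
    X = np.asarray(X, dtype=float)
    # Pairwise Euclidean distance matrix between all samples
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        # For each candidate, its distance to the nearest selected sample
        min_d = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(int(remaining[int(np.argmax(min_d))]))
    return selected

# 1-D toy example: samples at 0..10; selecting 3 picks the two extremes
# plus the sample in the middle of the range
X = np.arange(11.0).reshape(-1, 1)
print(kennard_stone(X, 3))  # [0, 10, 5]
```

In practice, samples chosen this way go into the training set, while the remainder can be held out for validation, which supports the even coverage of variability described above.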
Interpretation of Results and Validation
In most applications of instrumental methods, the main objective is the creation of a model or calibration to predict unknowns [30,38-42]. However, before the calibration model is used in the real world, it must be validated [30,38-42]. The validation process requires the model to predict the desired property for a set of samples not involved in the calibration process [40]. Any results obtained during validation must be compared with the reference values; if both values agree closely (identical values being the exception rather than the rule), the model can be used to accurately predict the property in the future [30,38-42].
Based on the data available in published reports, cross validation has been the preferred tool to check the capability of a model to predict new samples [30,38-42]. However, in some cases, such as in the so-called bottom-up approach, exploratory research starts with the mining of well-known analyses. In this scenario, a scientist experienced in these new methodologies compares results through interlaboratory studies, and it is in such cases that cross validation is of great utility.
As reported by other authors, numerous statistics and acronyms have been described to interpret the results obtained during calibration development and validation experiments [30,[38][39][40][41][42][43][44]. These statistics include the prediction error of a calibration model, which is defined as the root mean square error for cross validation (RMSECV), when cross validation is used, or the root mean square error for prediction (RMSEP) when internal or external validation is used [30,[38][39][40][41][42][43][44]. These statistics provide an estimation of the average uncertainty that can be expected for predictions of future samples [30,[38][39][40][41][42][43][44]. The standard error of prediction (SEP) can be reported instead of the RMSEP [30,[38][39][40][41][42][43][44]. The residual predictive deviation (RPD) value has also been proposed to evaluate the ability of a calibration model to predict new samples [44,45]. The RPD value is defined as the ratio of the standard deviation of the response variable to the RMSEP or RMSECV (other authors use the term SDR) [44,45].
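The statistics above can be computed directly from paired predicted and reference values. The following is a minimal sketch with hypothetical data; note the hedge in the RPD line, since some authors divide the population SD by RMSEP rather than SEP.

```python
import math

def prediction_stats(y_ref, y_pred):
    """Bias, RMSEP, SEP (bias-corrected), and RPD from paired values."""
    n = len(y_ref)
    resid = [p - r for r, p in zip(y_ref, y_pred)]
    bias = sum(resid) / n
    rmsep = math.sqrt(sum(e * e for e in resid) / n)
    # SEP: standard deviation of the residuals after removing the bias
    sep = math.sqrt(sum((e - bias) ** 2 for e in resid) / (n - 1))
    mean_ref = sum(y_ref) / n
    sd_ref = math.sqrt(sum((r - mean_ref) ** 2 for r in y_ref) / (n - 1))
    rpd = sd_ref / sep  # some authors use RMSEP in the denominator instead
    return {"bias": bias, "RMSEP": rmsep, "SEP": sep, "RPD": rpd}

# Hypothetical reference and predicted values for 7 validation samples
y_ref  = [10.2, 11.5, 12.8, 14.1, 15.4, 16.9, 18.0]
y_pred = [10.5, 11.3, 13.1, 13.9, 15.8, 16.6, 18.3]
stats = prediction_stats(y_ref, y_pred)
print({k: round(v, 3) for k, v in stats.items()})
```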
A common statistic often used to report the capability of the model to predict new samples is the coefficient of determination (R²). This statistic represents the proportion of explained variance of the response variable in either the training or test set [28,45]. Overall, relationships between variables can be defined by the existence of some structured association (linear, quadratic, etc.) between the independent (X) and dependent (Y) variables [28,45]. Correlation quantifies how strong the association between two variables is, and the robustness of a model is usually linked to its ability to predict the future behavior or results it was designed for [28,45].
Unfortunately, different authors have described and reported similar results using different statistics/acronyms, making the assessment and interpretation of results published in the literature very difficult. One of the most important issues is related to the differences in the magnitude and structure of the population with respect to the measured parameter (e.g., range, standard deviation, coefficient of variation). It is therefore critical to report the standard deviation (SD), minimum and maximum values of the population for the attribute of interest [28,45].
Most applications report or interpret models using the statistical parameters described above. However, these are not sufficient to describe a model; other parameters, such as the loadings or regression coefficients, need to be added to the interpretation or reporting of the results (e.g., the why and how of the analysis). Even once the model is established, the fit-for-purpose criterion needs to be included in its evaluation. Users of these technologies therefore need to interpret the models in the overall context of the application, and not only through a cold reading of the statistics [46]. Another important step during calibration and validation, which contributes to the robustness of the models, is the incorporation of appropriate pre-processing methods [21][22][23][24]30,38]. This step is important when instrumental methods (e.g., GC, HPLC) are used, as the chromatogram needs some degree of pre-processing (e.g., peak alignment, standardization) before being used for data mining. Numerous pre-processing methods have been proposed by different authors, such as the use of derivatives (e.g., first and second), smoothing, and bias and slope corrections [21][22][23][24]30,38]. A detailed description of these pre-processing methods is beyond the scope of this report, and the reader is directed to the relevant references on the topic.
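For illustration, two common pre-processing steps can be sketched on a toy signal: a first derivative (named in the text) and standard normal variate (SNV) scatter correction, a widely used centering-and-scaling step not named above. In practice a Savitzky-Golay filter is the usual way to take derivatives, so the crude finite difference below is only a stand-in.

```python
import math

def snv(spectrum):
    """Standard normal variate: center each spectrum and scale it
    to unit standard deviation, removing multiplicative scatter."""
    n = len(spectrum)
    m = sum(spectrum) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in spectrum) / (n - 1))
    return [(v - m) / sd for v in spectrum]

def first_derivative(spectrum):
    """Crude finite-difference first derivative (gap size 1);
    a Savitzky-Golay filter would be used on real spectra."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

# Toy absorbance values across 10 hypothetical wavelengths
raw = [0.10, 0.12, 0.18, 0.35, 0.60, 0.80, 0.85, 0.70, 0.45, 0.25]
corrected = snv(raw)
deriv = first_derivative(raw)
print([round(v, 2) for v in deriv])  # sign change marks the peak maximum
```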
The Misuse of Chemometrics
The development of new applications (e.g., sensory, instrumental, and advanced methods) in food science, whether in research or industry, has been boosted by the use of chemometrics [47,48]. However, a word of caution on potentially biased practice of these methods in food science is needed [47,48]. Several issues must be considered during the development of quantitative models: sample selection (e.g., number, replicates, origin), which is critical to model development; the need for independent validation (not just cross validation); and the appropriate selection of pre-processing. These remain the most common errors made by practitioners of chemometrics [3,33,47,48].
Before embracing chemometric methods, it is important to define the exact purpose of the analysis, as food "quality" can be defined as fitness for purpose. Most reports on the use of chemometrics highlight the importance of defining fitness for purpose, as this is associated with aspects of sampling (e.g., origin and number of samples) and the process analyzed. Central to the development and utilization of these methods is the understanding that the results are only as good as the sampling method and the DoE. Numerous examples indicate that calibrations and models become invalid because of inadequacies in sampling (e.g., sample selection).
Finally, after the model or calibration is established, validation and systematic updating of the model must be considered and implemented. It is important to remember that the overall error of the developed model will be the accumulation of the (squared) errors from all the steps followed during the process (e.g., DoE, sampling, laboratory errors and mistakes) [3,33,47,48]. Table 2 summarizes drawbacks and misuses of the application of chemometrics gathered from published reports.
Table 2. Summary of common drawbacks and mistakes encountered during the application of chemometrics in food science, research, and development.
Common Drawbacks and Mistakes
- Lack of understanding of the chemometric tools (e.g., background, limitations of the method)
- Diverse types of algorithms and pre-processing techniques (e.g., improper selection of the appropriate tool for the task)
- Lack of the fundamentals and information required to interpret the results
- Incorrect use of the sampling protocol
- Lack of, or inappropriate, experimental design
- Inappropriate sample selection (e.g., number of samples, source)
- Validation (e.g., cross-validation versus independent validation)
- Issues reporting results (e.g., no information about the laboratory error associated with the reference method; inconsistencies in reporting errors)
- Lack of, or minimal, training/education
- Easy access to hardware and software
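The point that the overall model error accumulates from the squared errors of each step (DoE, sampling, laboratory error, and so on) corresponds to adding independent error sources in quadrature. A minimal sketch, with purely hypothetical error magnitudes:

```python
import math

def combined_error(*component_errors):
    """Independent error sources add in quadrature:
    total = sqrt(e1^2 + e2^2 + ...)."""
    return math.sqrt(sum(e * e for e in component_errors))

# Hypothetical standard errors: reference lab method, sampling, instrument
total = combined_error(0.30, 0.15, 0.10)
print(round(total, 3))  # -> 0.35, dominated by the largest single source
```

One practical consequence: the total error can never be smaller than its largest component, so reducing the dominant source (often the reference laboratory error) matters far more than polishing the smaller ones.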
Final Considerations and Perspectives
The growing uptake of rapid analytical methods and techniques in food science, in either research or industry, has been boosted by the increasing use of mathematics and statistics (e.g., software, algorithms, the internet of things, databases), as these have become a key component of the analysis. While the development of such applications in food science could be considered a simple computational exercise, the whole process needs to be considered. To this end, researchers must develop a comprehensive understanding of the complexity of the analysis (system), in which the integration of the sample into the evaluation and mining of the data, the role of the instrument (e.g., signal-to-noise ratio), the soundness of the selected multivariate data analysis method, and the end use are all components of the analytical system.
Modelling in modern food science relies on gathering data to generate knowledge, as well as on understanding the different aspects of the system that might influence the analysis of samples or processes. Among the most important factors governing the incorporation of chemometric techniques into the analysis and the interpretation of the results are (i) knowledge of the reference laboratory method (reported as the standard error of the laboratory), (ii) intrinsic characteristics of the selected method or technique (e.g., limit of detection, extraction steps), (iii) inherent characteristics and properties of the sample (e.g., chemical and physical properties), and (iv) associations between the sample, the instrument, and the data collected (e.g., signal-to-noise ratio, peak alignment, drifts, pre-processing). Unlike the routine use of analytical methods, this approach also requires that the person developing such models understands the whole process used to generate the sample and is willing to engage in multidisciplinary work.
The current developments and uses of chemometrics in food science are related to the prediction of nutritional value and functional properties evaluated with instrumental methods. The ability to simultaneously evaluate multiple parameters in a single analysis has revolutionized the way that instrumental methods are used, allowing for the development of new applications. Future progress of these developments will provide analytical tools to interrogate about the composition or variations in commodities and foods in real time in addition to testing the integrity of the foods (undesirables or faults, food safety, traceability, and origin, among others).
Nevertheless, inadequate academic support in topics such as the novel use of instrumental techniques and chemometrics is among the several limitations facing the routine utilization of these methods by both the food industry and R&D. Some academic organizations do actively interact with the food industry, sharing knowledge and expertise with mutual benefits for both parties. Regrettably, there are still several roadblocks to the widespread use of these methods in food R&D and their translation into the food industry.
Author Contributions: Conceptualization, J.C. and D.C.; writing-original draft preparation, D.C., M.D., J.C., V.K.T. and A.E.; writing-review and editing, V.K.T., M.D., A.E., S.G., P.R.P., S.C., J.C. and D.C. (terms follow the CRediT taxonomy).
Funding: This research was funded by RMIT University.
Calcium fructoborate coating of titanium–hydroxyapatite implants by chemisorption deposition improves implant osseointegration in the femur of New Zealand White rabbit experimental model
Background: The identification of biocomposites that improve cell adhesion and reduce bone integration time is a great challenge for implantology and bone reconstruction. Aim: Our aim was to evaluate a new method of chemisorption deposition (CD) for improving the biointegration of hydroxyapatite-coated titanium (HApTi) implants. CD method was used to prepare a calcium fructoborate (CaFb) coating on a HApTi (HApTiCaFb) implant followed by evaluation of histological features related to bone healing at the interface of a bioceramic material in an animal model. Methods: The coating composition was investigated by high-performance thin-layer chromatography/mass spectrometry. The surface morphology of the coating was studied by scanning electron microscopy (SEM), before and after the in vitro study. We implanted two types of bioceramic cylinders, HApTi and HApTiCaFb, in the femur of 10 New Zealand White (NZW) rabbits. Results: The release of CaFb from HApTiCaFb occurred rapidly within the first three days after phosphate-buffered saline immersion; there was then a linear release for up to 14 days. SEM analysis showed similar morphology and particle size diameter for both implants. Around the porous HApTiCaFb implant, fibrosis and inflammation were not highlighted. Conclusions: Easily applied using CD method, CaFb coatings promote HApTi implant osseointegration in the femur of NZW rabbits.
Introduction
There is growing interest in obtaining biomaterials for reconstructing bone tissue. Unwanted post-implantation outcomes include assimilation of the implant and the production of proteolytic enzymes and pro-inflammatory mediators that form a granuloma around the implant, triggering a series of biochemical reactions leading to osteolysis and bone resorption. Therefore, when considering metal implants or composites, compatibility with the bone structure is needed [1,2].
Hydroxyapatite (HAp) is a biomaterial that exhibits biocompatibility and bioactivity; it is frequently used for bone grafting and for coating orthopedic metallic components [3]. After implantation, it produces chemical species that promote the adhesion of the implant to the surrounding tissue by forming a functional connective structure [4]. However, pure HAp has some major drawbacks, including poor mechanical properties and the challenging preparation of harder HAp ceramic composites. Reinforcement with particles, whiskers, and long fibers has been used to make HAp composites with superior mechanical properties. Bio-inert metal particles of titanium (Ti) are useful for reinforcing HAp, having a positive effect on the mechanical and biological properties of the composites [5]. Calcium fructoborate (CaFb; Ca[(C6H10O6)2B]2·4H2O) is a superoxide anion scavenger and anti-inflammatory agent, as shown by several in vitro studies [6]. Many studies have reported that CaFb positively influences calcium metabolism, the growth and development of bone and soft tissues, and the formation of antibodies and collagen [7]. Given these considerations, combining Ti-reinforced HAp materials with active components such as CaFb may lead to new bone substitutes that successfully combine the properties of these classes of materials [8].
Aim
In the present study, we aimed to evaluate the osteoformation after femoral implantation of rabbits with two types of bioceramic cylinders, hydroxyapatite-coated titanium (HApTi) and CaFb coating on a HApTi (HApTiCaFb) implant.
Design and fabrication of implanted cylinders
The biocomposite implant preparation was described thoroughly in our previous work [9][10][11][12]. In short, the matrix was produced from HAp powdered particles with an average size of 200 nm (Merck, Darmstadt, Germany) and reinforced with titanium hydride (TiH2) particles of approximately 100 μm (Merck) in a 75:25 ratio (HAp:TiH2). Several steps were conducted to obtain the implants in their final form. The first stage was to dry the HAp particles and reinforce them with the TiH2, while the second was to compact the material and submit it to a two-step sintering (TSS) heat process. After the sintered samples were obtained, one set was dipped into a CaFb stock solution, and another was left untreated as a control [9][10][11][12].
Preparation of CaFb coating by chemisorption deposition
To examine potential ways to improve the osseointegration of the biocomposite samples into genuine bone, some sintered samples were immersed in a CaFb-based solution. CaFb is well recognized as a biomaterial with considerable benefits for the human body, not only from the nutritional perspective but also therapeutically. Recent research shows that CaFb may be used as a bio-adhesive to produce biocompatible implants, as well as an osseoinductive factor owing to its anti-inflammatory and antioxidative properties. The thermal behavior of CaFb was studied in previous research [13].
Chemisorption deposition
The HApTi cylinders were immersed in CaFb solution (0.4 g/10 mL). The cylinders were weighed before and after chemisorption deposition (CD), after preliminary drying for 24 hours, at 20°C.
CaFb release and mass spectrometry confirmation
Cylinders were first immersed in sealed containers with 10 mL phosphate-buffered saline (PBS), at 37±0.5°C, for 15 days. At regular time intervals, 0.5 mL of solution was taken and immediately replaced with an equal volume of PBS. The amount of released CaFb was determined by the high-performance thin-layer chromatography (HPTLC) method [14]. To confirm that the compound truly was CaFb, we eluted one of the bands from the sample directly into the mass spectrometer and we obtained the expected mass spectrum [6]. The settings used for mass spectrometry (MS) analysis were as follows: the mobile phase was Methanol-Ammonium Acetate 10 mM aqueous solution (9:1, v/v); negative mode, electrospray ionization (ESI); probe temperature, 450°C; capillary voltage, 0.8 kV; cone voltage, 25 V.
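Because each 0.5 mL withdrawal is replaced with fresh PBS, cumulative release calculations must add back the analyte mass carried out in earlier withdrawals. The sketch below illustrates this standard dissolution correction; the volumes match the protocol above, but the concentration readings are hypothetical, not data from this study.

```python
def cumulative_release(concs_mg_per_ml, v_total=10.0, v_sample=0.5):
    """Cumulative mass released at each time point, corrected for the
    analyte removed by previous 0.5 mL withdrawals (replaced with PBS)."""
    released = []
    removed = 0.0  # mass carried out by earlier withdrawals
    for c in concs_mg_per_ml:
        released.append(c * v_total + removed)
        removed += c * v_sample
    return released

# Hypothetical HPTLC-quantified CaFb concentrations (mg/mL) per time point
concs = [0.05, 0.12, 0.20, 0.24, 0.26]
print([round(m, 3) for m in cumulative_release(concs)])
# -> [0.5, 1.225, 2.085, 2.585, 2.905]
```

Without the correction term, the later time points would underestimate the true cumulative release, since part of the drug has already left the vessel in the sampled aliquots.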
Surface characterization of Ti implants
Scanning electron microscopy (SEM) was performed to highlight aspects related to the morphology of the samples, such as size, particle shape, agglomeration tendency, and porous characteristics. Micrograph acquisitions were completed with a high-resolution SEM (FEI Inspect F50) at 30 keV voltage and various magnifications. We used the same protocol for mesenchymal stem cell (MSC) isolation described thoroughly in our previous work [10]. To analyze cytotoxic and proliferation effects, both biomaterials were cultivated with MSCs after prior ultraviolet (UV) sterilization of the implants. The biomaterials were seeded with MSCs for 48 hours, fixed in 2.5% Glutaraldehyde for one hour, washed with PBS, dehydrated through a graded ethanol series, and vacuum dried. All samples were coated with gold using a sputter coater, and the morphology of the MSCs was observed by SEM (FEI Inspect F50).
Animals, anesthesia, and surgical technique
For our study, we used 10 male New Zealand White (NZW) rabbits aged six months, with an average weight of 3000-3500 g. All NZW rabbits were kept in animal facilities at 25°C, with 12-hour light:dark cycles. Throughout the entire experimental period, the rabbits were kept in individual plastic cages and were provided a normal chow diet and water ad libitum.
The implantation and post-operative protocols, followed by rabbit euthanasia for bone tissue harvesting, have already been successfully used in other research projects and were approved by the Ethics Committee of the University of Medicine and Pharmacy of Craiova (Approval No. 134/2019). During surgery, anesthesia was maintained by administration of Fentanyl diluted with saline (1 mL Fentanyl in 9 mL saline). General anesthesia was completed by administering 1% Lidocaine (5 mL) at the incision site. At the beginning of the surgical procedure, the incision site was depilated and washed well with water and soap as well as Betadine solution, after which the animal was covered with a sterile field. A 5 cm incision was performed at the anterior face of the proximal femoral region. This included the epidermis, dermis, and fascial layers and highlighted the femur covered by the periosteum. The periosteum was incised and removed from the surface of the femur using a scraper. At the level of each femur, we made one excavation that completely removed the cortical bone near the medullary canal; for this, we used the Stryker Core Reamer orthopedic engine at low speed. In the femoral excavations, we inserted the implants (3 mm diameter, 5 mm long), as follows: in the left femur, we introduced HApTi (control implant); in the right femur, we introduced HApTiCaFb.
We chose the implantation of the two biocomposites in different femurs of the same animals to exclude variations owing to different animal healing responses. Subsequently, structure incisions were sutured with 4-0 Dexon thread. After the surgical procedure, we subcutaneously administered two doses of Buprenorphine diluted in saline at a dose of 0.05 mg/kg every four hours between doses. The operative wound was controlled and patched daily until healed. Prior to sacrifice, the rabbits were sedated by subcutaneous administration of Fentanyl (0.1 mL/kg) and Midazolam (2 mg/kg).
Histopathology and immunohistochemistry analysis
Histological tissue analysis was performed to observe the degree of composite osteointegration, osteoformation, and biocompatibility with bone tissue. Eight weeks after the implantation of the composites, the animals were sacrificed, and the femoral bones were removed and processed according to classical decalcification and paraffin embedding protocols.
Bone fragments were first decalcified for two months in 10% buffered ethylenediaminetetraacetic acid (EDTA; pH 7.4), with constant mechanical agitation on an orbital shaker. A fresh solution was prepared each week, and the decalcification endpoint was checked by testing the density of the bone fragment with a sharp metallic pointer, until the consistency was below that of cartilage. All tissue fragments had the same size (2×1×0.3 cm) and were decalcified for the same amounts of time.
After decalcification, tissue fragments were thoroughly washed in tap water, distilled water, dehydrated in increasing ethanol concentrations (75-100%), cleared in Xylene for three hours, and incubated in two paraffin baths, overnight, at 67°C. On the next day, the fragments were embedded in paraffin blocks that were sectioned at very low speed on a rotary microtome (Microm), producing 4 μm-thick sections that were collected on poly-L-Lysine coated slides.
For histological staining, the slides were deparaffinated, rehydrated and then stained sequentially in Hematoxylin and Eosin (HE) solutions. The sections were then dehydrated, cleared, and placed under coverslips utilizing a permanent Xylene-based mounting medium (Sigma-Aldrich).
CaFb coating analysis
To characterize the release of CaFb from the proposed biomaterial coatings, we immersed the biomaterials in a PBS physiological buffer, at 37°C. The sampled supernatants were quantified by HPTLC. The results showed that CaFb release occurred rapidly within the first 75 hours; then, it plateaued over a period of up to 14 days (Figure 1).
Analysis of coating surface morphology
Both implants showed a granular appearance on the ceramic surface, as determined by SEM analysis. In both cases, nanometric particles or agglomerates, most likely consisting of HAp, were present and uniform in size. The morphology was predominantly spherical, but smaller rods and polyhedral particles were sometimes present.
Owing to the TSS method used to obtain the implants, the nanometric HAp particles retain their size. Particle diameters ranged from 70 to 120 nm and were similar for both materials. The implants have low porosity, with occasional triple junctions resulting from the bonding of several particles during the applied heat treatment (Figures 3 and 4).
SEM analysis of Ti implants after in vitro study
Using the SEM technique, we tested the potential of the composites to serve as substrates for the in vitro adhesion and growth of osteoblasts on their surface. We observed that with HApTi implants, the number of cells adhering to the surface was small compared to HApTiCaFb (Figures 5 and 6). The addition of CaFb has a direct positive effect on the process of cell adhesion and growth. In the case of the HApTiCaFb sample, a high number of cells of micron dimensions, with typical morphological appearance, is found on the surface of the analyzed composite, demonstrating good adhesion to the substrate and a reduced cytotoxic effect of the material (Figure 6). Experimental data show that HApTiCaFb can promote cell proliferation and thus tissue and bone regeneration. Correlating the pharmacological effect of CaFb with the SEM images, we can conclude that CaFb, at the concentration used, stimulates cell adhesion and proliferation.
In vivo testing
Histopathology showed that the implants ended abruptly where the bone began, with no histological indication of newly formed bone tissue; neither fibrosis nor inflammation was observed. Characteristic elements of chronic inflammation (neutrophils, macrophages, foreign body giant cells) and necrosis were not detected up to eight weeks following initial implantation (Figure 7, a and b). The lack of histologically visible implant-bone osseointegration elements can be explained by the relatively short period for which the femoral implants were maintained in the studied animals.
IHC examination of the treated sections for osteocalcin (OC) and osteopontin (OPN) immunoexpression was performed under both transmitted and polarized light, to visualize osteoblasts and OC as a non-collagenous protein (NCP) (Figures 8-11, a, c, e and g), as well as birefringent collagen fibers (Figures 8-11, b, d, f and h).
At the implant-bone interface of the left femur, we examined OC immunoexpression in osteoblasts, Havers channels, and bone lamellae. Following transmitted light microscopy analysis, we observed implant adhesion to the surface of the bone tissue, implant fragments embedded in bone mass, and the presence of newly formed collagen fibers. Polarized light images revealed the presence of birefringent collagen fibers that ensure the incorporation of implant fragments into the bone integration area (Figure 8, a-h).
The implant-bone interface of the right femur revealed the presence of OC in the extracellular bone matrix and Havers channels. Brightfield light examination showed the presence of implant fragments in the extracellular bone matrix and osteoblasts. When examined under polarized light, our analysis revealed the presence of birefringent collagen fibers (Figure 9, a-h).
Figure 7 -Implant-bone interface histological analysis (HE staining, ×40): (a) HApTi (left femur); (b) HApTiCaFb (right femur). HApTi: Hydroxyapatite-coated titanium; HApTiCaFb: Calcium fructoborate coating on a HApTi; HE: Hematoxylin-Eosin.
OPN appeared to be expressed in the left femur at the level of osteoblasts and the Havers canals. Brightfield light analysis of OPN indicated the presence of a partial adherence zone of the implant to the adjacent bone tissue and the presence of osteoblasts. Under polarized light, birefringent collagen fibers near the implant-bone interface were observed; however, birefringent collagen bundles were not detected in the implant incorporation area (Figure 10, a-h).
In the right femurs of the NZW rabbits, OPN was expressed in the extracellular bone matrix and Havers channels. In brightfield light analysis, we observed implant fragments adhering to the adjacent bone tissue and the presence of osteocytes. Under polarized light, collagen fibers and implant incorporation zones without the birefringence phenomenon were observed (Figure 11, a-h).
Figures 8-11 - OC and OPN antibody immunostaining: (a and b) ×28; (c and d) ×140; (e and f) ×210; (g and h) ×280. HApTi: Hydroxyapatite-coated titanium; HApTiCaFb: Calcium fructoborate coating on a HApTi; OC: Osteocalcin; OPN: Osteopontin.
For both types of implants tested, the analysis performed at the implant-bone interface reveals the adhesion areas of the implant, the implant fragments incorporated in the bone tissue mass, the birefringent collagen fibers, and the presence of osteoblasts. The CaFb functionalization strategy significantly improves osseointegration, representing an interesting option for the treatment of osteoporotic fractures or other bone defects.
Discussions
The success of interventions involving the implantation of prostheses depends on the ability of the prosthesis components to rapidly fixate on the surface where the bone mass is located [18]. Because opportunities for human studies are limited, a good alternative used by many researchers to investigate the implant-bone interface is the use of animal models [19]. This type of experiment accounts for 35% of the rabbits used in medical research worldwide.
At the base of bone development stand three main mechanisms: modeling, remodeling, and longitudinal growth [20]. Adult bone remodeling is represented by a succession of events carried out by a group of cells that form bone multicellular units [21]. When osteoclasts are activated, they begin the bone resorption and erosion. When they have reached a certain depth of resorption, the osteoclasts will be replaced by mononuclear cells that will help complete the bone resorption [22]. After resorption, the area is invaded by preosteoblasts that differentiate into osteoblasts and begin the formation of the bone matrix. After a certain period (bone maturation time), the bone matrix will be mineralized into lamellar bone [23]. The osteoblasts will continue to form a bone matrix that will later mineralize, thus repairing the so-called resorption defect. During this process, some osteoblasts will be included into the matrix.
Resorption and formation are closely interconnected both temporally and spatially. In the normal remodeling phenomenon, the order of the processes is clearly determined, so the resorption will always be followed by formation, and formation will always be preceded by resorption [24]. An important parameter is represented by the bone balance, which stands for the difference between the amount of resorbed and reformed bone during the remodeling cycle. This parameter may vary on different surfaces of the bone and is influenced by a range of factors, both local and systemic [25].
The superior osseointegration of the HApTiCaFb implant in the rabbit femur can be explained by the release of CaFb over a period of two weeks. Barna et al. (2015), in other biological assays, suggest that HApCaFb biocomposites are potential materials that can prevent further bone loss and could increase or restore bone mass [8]. The release time of CaFb is satisfactory, considering the simplicity and accessibility of the CD method. Moreover, our recent research indicates that HApTi and HApTiCaFb exhibit a good in vitro biocompatibility on osteoprogenitor cell culture [10].
Woven bone is found in different processes, such as rapid ossification during fracture development and healing, or in different tumors and some metabolic bone disorders. The isotropic mechanical characteristics of woven bone result from the disorientation of its collagen fibers [26]. The osteocyte is considered the most mature, or most differentiated, cell of the osteoblastic line, located in lacunae and interconnected by canaliculi. Through cytoplasmic extensions within these canaliculi, osteocytes are nourished and maintain contact with other osteocytes or with cells on the bone surface [27]. In woven or immature bone, collagen fibers are randomly arranged. The structure of lamellar bone is characterized by the placement of collagen fibers in parallel sheets and spindles, with alternating orientation between successive lamellae. This structure explains the variation of bright and dark bands seen under polarized light [28]. Histological analysis of the implant-bone interface revealed a lack of inflammation and good biocompatibility for both implants.
OC and OPN are two major NCPs involved in the organization and deposition of the bone matrix, with important roles in both the mechanical and biological functions of bone. Both are expressed in the bone formation process and control bone mass, mineral size, and orientation [29]. To emphasize osteointegration and osteoformation, we performed IHC analysis. In the left femurs, OC was expressed in osteoblasts, Havers channels, and bone lamellae; OPN was expressed in osteoblasts and Havers channels. In the right femurs, OC and OPN were expressed in the extracellular bone matrix and Havers channels.
Analysis at the implant-bone interface for both types of implants reveals adhesion areas of the implant, implant fragments embedded in the mass of bone tissue, birefringent collagen fibers, and the presence of osteoblasts. Functionalization strategies significantly improve osseointegration, representing an interesting option for the treatment of osteoporotic fracture or other bone defects. Although these advances are yet to be fully applied clinically, functionalization represents a promising strategy to help improve the implant stability and to ensure fast, functional improvement in patient quality of life [30].
Recently, Tao et al. (2020) suggested that local incorporation of acetylsalicylic acid into HAp-coated Ti implants improves osseointegration by increasing bone formation around the implant through activation of Notch signaling pathways, in both osteoporotic and normal conditions [31]. In addition, current animal studies demonstrate a possible improvement of osseointegration of orthopedic implants in animal models from both systemic and local administration of Zoledronate [32].
In other experimental tests using an ovariectomized rat model, there was evidence that bisphosphonates induced extracortical subperiosteal femoral bone neoformation [33]. Another study indicates that the fixation of porous-coated implants that have also been subject to HAp surface coating and peri-implant bone compaction can be improved by local Alendronate treatment. A beneficial effect for HAp-coated joint replacements can also be obtained from the combined effect of local bisphosphonate treatment and bone compaction [34].
Another study found that local administration of Silymarin stimulated bone formation around the implant in osteoporotic rats. The helpful effects of Silymarin were demonstrated through parameters such as increased implant osseointegration, binding strength, and osteogenic activity, and improved trabecular microarchitecture. Thus, the fixation of HAp-coated implants in ovariectomized rats is improved by the local incorporation of Silymarin [35].
Conclusions
Our study showed that CaFb coatings can easily be applied by the CD method and that they have a promoting effect on implant osseointegration in the femurs of the NZW rabbit experimental model. Both types of implants showed a good degree of osseointegration; however, several improvements support the superior osseointegration of HApTiCaFb implants and the possibility of using them in orthopedic surgery for bone reconstruction.
Political Economy of the Polarization of LEs-SMEs Industrial Structure in Korea
The purpose of this study is to analyze the polarization of the LEs (Large Enterprises)-SMEs (Small-Medium Enterprises) industrial structure in Korea within the context of political economy. SMEs began increasing in numbers, production, and value added in the 1980s. This resulted from the rise in self-employment following increased unemployment, rapid liberalization, and structural adjustment since democratization. The LEs and SMEs interacted with each other through the subcontracting relationship. By applying the new institutional approach, three factors can be suggested as the institutional context for the limited success or failure of SME policies: SMEs' exclusion from a winning political coalition, the absence of a political role for SME interest groups, and the exploitative subcontracting relations between the LEs and the SMEs. The state did not put forth real efforts to prohibit the exploitative subcontracting system or to pursue a productive discourse.
I. INTRODUCTION
The recent U.S. financial crisis of 2008 has resulted in a negative ripple effect that has shaped the global economy. Yet the conventional wisdom that everyone has a difficult time during economic recessions does not apply in the case of Korea. The profit margins of SMEs and a few LEs are worsening day by day. However, the profitability of the majority of LEs shows no sign of having been affected and is instead improving. Consequently, polarization between LEs and SMEs is worsening again, as it did immediately after the 1997 financial crisis.
Economic polarization describes the widening gap between different sectors of the economy caused by the weakening of their interrelationships. Polarization appears in many forms: between exports and domestic demand, between industries, between corporations, and between employment and wages. The causes of intensifying polarization include globalization, the rise of China, the acceleration of technological advances, and the global financial crisis, which in turn negatively affected Korea's domestic structure. This research focuses primarily on the problem of polarization between LEs and SMEs, and sheds light on its policy implications from the perspective of political economy.
The academic fields of business administration and economics have conducted and accumulated many studies on LEs and SMEs. Most have focused on measuring fair transactions and on policies that support SMEs in order to enhance the coexistence of LEs and SMEs, including technological support, financial support, tax support, and startup assistance. It is important to discuss the relationship between LEs and SMEs from the perspective of efficiency, but it is also necessary to analyze it from the perspective of politics or political economy (Kim et al. 2008). For quite a while the government has put much effort into creating a harmonious relationship between LEs and SMEs, one vehicle being its SME Promotion Policy. Despite these efforts, the polarization phenomenon persists. The causes range from changes in the external economic environment to internal economic conditions. Among these, this study focuses on the political economy; more specifically, it examines the truth and falsity of governmental policies initiated to promote SMEs, the role of SME interest groups, and the subcontracting relationships between LEs and SMEs.
The structure of this study is as follows: section 2 examines the current state of the polarization between LEs and SMEs. Section 3 applies the new institutional approach to examine why SME promotion policies brought only limited results. Section 4 explains why these policies produced limited outcomes by discussing government policies, SME interest groups, and the subcontracting relationships between LEs and SMEs. Section 5 concludes with a summary and policy implications.
II. POLARIZATION OF LES AND SMES
Economic polarization occurs when economic outcomes are divided toward extreme ends as heterogeneous economic actors react to internal and external change (Ju, H. 2007: 16). Economic actors thus face disparity in their outcomes because of gaps in technical level and scale, adaptation capabilities, and the education level of their employees. Polarization refers to a significant gap between the two extremes, with a propensity for this disparity to continuously expand. The term is used especially when the cause of the gap is structural.
Beginning in 1980, Korean SMEs began to grow and their number began increasing. According to Figure 1, before the 1980s the scale of businesses expanded rapidly because economies of scale determined competitiveness in the labor-intensive export industry. As a result, the priority and importance of SMEs decreased. (The proportion of SMEs in manufacturing employment and added value between 1963-73 decreased from 66.4% to 52.8% and from 39.4% to 27.2%, respectively (Baek, N. 1996).) In the 1970s, with the promotion of the heavy chemical industry, support for LEs was further strengthened, and mergers and acquisitions of SMEs further diminished their importance. Nevertheless, it was necessary to create subcontracting relationships between the LEs that produced final products and the SMEs that made parts and components. This provided an opportunity to reinforce awareness that there must be policy support for SMEs. In fact, the reinforcement of SME growth resulted from the expansion of subcontracting relationships and
the change of governmental policy for promoting SMEs. As can be seen in Figure 1, the proportion of SMEs in the mining and manufacturing industry is 99.4% in terms of firms and 75.9% in terms of employees. When the commerce sector is added, the proportions rise to 99.9% and 87.5%, respectively. The constant and steady increase in the number of self-employed small business owners is another reason for the quantitative increase of SMEs after democratization (Kim, S. 2008). Part of the reason lies in the large number of workers who chose self-employment in restaurants, wholesale, and retailing amid the high unemployment rates that followed democratization and market liberalization. Figure 2 compares the proportional importance of SMEs in several countries; the quantitative proportion of Korean SMEs does not fall behind that of other countries.
(Using the added-value standard, the proportion of SMEs in manufacturing was only 31.7% in 1975, but it rose to 37.6% in 1985 and reached 49.2% in 1994 (Baek, N. 1996).)

Korean SMEs expanded quantitatively after the 1980s, yet polarization between the LEs and SMEs has progressed as well. This phenomenon is reflected in the profit and productivity gap between the two. According to Figure 3, the business profit rate of LEs was relatively high compared to that of SMEs from 1991 until 2007. Over those 16 years, the average profit rate of LEs was 7.8% versus 4.9% for SMEs, a difference of 2.9 percentage points. During 2002-2005, when polarization worsened, the profit rates of LEs and SMEs moved in opposite directions. In 2005 the disparity decreased but it increased again in 2007; in 2004 the gap in the profit margin ratio reached 5.3 percentage points. The fact that the profitability of LEs increased while that of SMEs declined, even though the two are intimately tied through subcontracting relations, is cause for concern. Labor productivity (value added per capita) of SMEs was 48.6% of that of LEs in 1991, 31.4% in 2004, and 30.9% in 2007, showing a steady decline in SMEs' relative productivity (Kbiz, each year).
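The gap figures in this paragraph are simple arithmetic on the cited series. As a minimal sketch, the following Python snippet makes the computation explicit; the averages and the relative-productivity values are the numbers quoted in the text, not recomputed from the underlying Kbiz source data:

```python
# Illustrative arithmetic behind the LE-SME gap figures cited in the text.
# All input numbers are taken from the paragraph above, not from raw data.

le_avg_profit = 7.8    # average LE business profit rate, 1991-2007 (%)
sme_avg_profit = 4.9   # average SME business profit rate, 1991-2007 (%)
profit_gap = le_avg_profit - sme_avg_profit  # 2.9 percentage points

# SME labor productivity (value added per capita) as a share of the LE level.
relative_productivity = {1991: 48.6, 2004: 31.4, 2007: 30.9}  # in %
decline = relative_productivity[1991] - relative_productivity[2007]

print(f"average profit-rate gap: {profit_gap:.1f} pp")
print(f"relative SME productivity fell {decline:.1f} pp between 1991 and 2007")
```

The falling ratio series is what the text calls qualitative polarization: SME output per worker shrank from roughly half the LE level to under a third.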
When compared with other countries such as the United States or Japan, it can be seen that the polarization of Korean LEs and SMEs is more severe. In the case of Japanese SMEs in the manufacturing sector, the labor productivity of SMEs has been 50% of that of the LEs, and this rate had been stable for quite a long period. The SMEs of the US also accounted for 58.2% in 1997 and 59.2% in 2002, showing even some improvement. This sharply contrasts with the Korean case, where the ratio decreased from 53.8% in 1988 to 33.6% in 2003 (Ju, H. 2007: 106-112).
While the proportional significance of LEs in number of businesses, volume of output, and added value shows a tendency to decline, the significance of SMEs has been increasing. However, when indices of productivity and profitability are examined, we see that polarization has intensified in qualitative terms. Some gap between the two is to an extent inevitable, but it becomes a concern when the profitability of LEs increases while that of SMEs decreases even though the two are in a coordinated subcontracting relationship. Problems such as the weakness of SMEs in the Korean economy, their low equipment investment levels, low efficiency in research and development investment, and the LE-centered industrial structure of South Korea have all been continuous subjects of concern.
As can be seen in Figure 3 and Figure 4, polarization accelerated after democratization in the 1990s. As labor unions became more active after democratization, corporations decreased employment while expanding their automated production facilities and subcontracting relationships. This resulted in expanding polarization between companies. (Interview, Joo Hoon Kim, Senior Researcher at KDI, 2009.7.1.) Some say the reason for greater polarization between LEs and SMEs was the relatively late restructuring of SMEs compared to the prompt response by LEs. The difficulty of M&A among weak SMEs resulted from the rigid financial system, which was one cause that made SME restructuring difficult. Also, the long-term high exchange rate policy implemented by the government in order to increase exports aggravated problems for domestic-demand-oriented SMEs (Hankyoreh, 2005/1/3). In contrast, the economic power of LEs after 1997 increased significantly. The reason for this increased influence after the financial crisis is that, as weak SMEs were sorted out, the LE-centered economic structure was consolidated in each industry. Also, competition with China weakened the profitability of SMEs, which further intensified this economic structure (Kim, D.
2007: 486). The rise of China's economy intensified competition between South Korea and China, resulting in weakened export competitiveness, industrial hollowing-out, and limited job creation in domestic SMEs. Because this study takes a political-economic perspective, it presents the causes of polarization and the relevant countermeasures accordingly. It points out the truth and falsities of the policies the government has implemented by examining the dynamics of public and private interest groups related to SMEs and the reality of subcontracting relations between LEs and SMEs.
III. THEORETICAL DISCUSSION: FAILURE OF THE SME POLICY AND THE APPLICATION OF NEW INSTITUTIONALISM
While studies may not agree on all the causes of polarization, many will likely agree that one of them is governmental policy. The Korean government has continuously pursued LE-friendly policies, which resulted in the development and expansion of LEs. In comparison, the weakness of Korean SMEs stems not from the absence of governmental SME promotion policies, but from the fact that many of them were mere policies of "relief out of courtesy" that did not reap much tangible result. This research sheds light on SME development policies by applying the perspective of new institutionalism.
Old institutionalism, the origin of new institutionalism, emphasizes official and static laws and institutions. Behavioralism in the 1950s-1960s criticized the old institutional perspective for its focus on merely perfunctory law and administration, and in response began emphasizing the informal distribution of power and political behavior. However, the behavioral revolution, while focusing on the attitudes and behavior of individuals and groups, cannot fully explain why different countries with similar groups and individuals with similar resources demonstrate different behaviors. It was new institutionalism that proposed a significant critique on this point. Different countries reacted differently to the crises that followed the financial shocks of the 1970s. New institutionalism focuses on the middle-range institutional attributes of states and the institutional network connecting corporate networks, the state bureaucracy, political parties, and economic groups. Through these factors, it examines how the institutional environment offers specific incentives and constraints to political actors (Katzenstein 1978; Gourevitch 1986). Katzenstein explains the different responses that different actors display in similar crises by focusing on the political network connecting the state and society, or the "relational character of institutions." Similar groups or individuals display different reactions because of their differing institutional contexts. However, this perspective also has limitations: an emphasis on institutions struggles to explain why the reform process and its outcomes vary over time within a single country.
New institutionalism has developed into two branches: rational-choice institutionalism and historical institutionalism. Rational-choice institutionalism accepts the role of institutions but assumes that preference is exogenously given, stable, and coherent. When there is institutional change, the rules of the game change, and that changes the strategic choices humans make. Historical institutionalism does not regard human preferences as exogenously given but sees them as internally constructed and redetermined (Hall and Taylor 1996; March and Olsen 1984; Katznelson and Weingast 2005).
Rational-choice institutionalism sees the institution as a 'structure' that influences the strategic choices humans make, while historical institutionalism holds that the institution itself shapes human preferences. Both perspectives place greater emphasis on 'structure' than on the 'actor', and in the process of policy decision they emphasize 'institutions' more than 'ideas'. As pointed out before, when structure and institutions are emphasized, it is difficult to discover why some policies succeed and others fail under similar institutions. In fact, individual preferences can be shaped by structural and institutional influences, but at the same time powerful political leaders who accept new ideas can form new preferences. Also, policy choices and results may vary according to political coalitions and conflicts (Peter et al. 2005; Schmidt 2009).
New institutionalism has also overlooked the role of the state. Rational-choice institutionalism recognizes the state as a structure influencing the actors, while the historical institutional perspective recognizes the state as an institution that leads to path dependency. The Varieties of Capitalism (VOC) perspective regards the state as a regulator in the globalized environment and focuses on the firm as the basic unit of analysis (Hall and Soskice 2001). Recently, the new perspective of discursive institutionalism has begun to draw scholarly attention as a result of the internal development of new institutionalism. It emphasizes the role of 'ideas' rather than 'institutions' and the role of 'agency' rather than 'structure', and stresses dynamic politics to understand the role of the state. Politics is understood either as power conflicts or coalitions among various interest groups, or as a process of interactive discourses. Governmental discourse can be divided into coordinative discourse and communicative discourse: coordinative discourse refers to mutual exchange and persuasion among politicians, policy experts, and entrepreneurs, whereas communicative discourse refers to politicians or a government spokesperson attempting to persuade the public (Schmidt 2009, 529-534). In many cases the failure of reform policies can be attributed to the opposition of interest groups, but even more important is a lack of effort by politicians to persuade the public. Discursive institutionalism places significance on the political alliances between actors, their interaction, and the process of persuasion between the government and the major actors. Political coalitions are crucial in understanding state policy and its consequences.
To explain why the SME policies failed, this study adopts the theory of new institutionalism. Its utility is that, in addition to emphasizing the state's role, it includes the relationship between the state and other middle-range institutions in its analysis. As the new institutional framework is applied in this study, the political coalitions among the state, the LEs, and the SMEs, the relationship between the state and SMEs, the subcontracting relations between the LEs and the SMEs, and the governmental rules on these subcontracting relations all become significant analytical tools. This comprehensive analysis of institutional characteristics will reveal the reasons behind the limited results of the SME Promotion Policies. To begin with, many policies were ineffective; most were merely 'relief out of courtesy,' and the government did not fully implement the laws on subcontracting due to its continuously LE-friendly policies. This study will also analyze the role of SME interest groups and the characteristics of LE-SME subcontracting relations.
Government Policy: Limited outcomes
As observed earlier, the productivity and profitability of LEs and SMEs have become polarized since the 1990s, a trend heightened during 2002-2005 after the 1997 Asian financial crisis. Economic variables such as internal and external changes of the economic environment are significant for understanding polarization, but it is also necessary to examine political variables: the governmental policies implementing the SME policies, the role of interest groups advocating the interests of SMEs, and the subcontracting relationship between the LEs and SMEs, which significantly affected SME growth.
The effectiveness of the governmental policies implemented to promote SMEs was in fact quite low; as mentioned earlier, these policies were "out of courtesy." Despite this, the basic foundation of SME support policy changed after democratization. The South Korean economy had to adapt to an environment of increasing sophistication and unlimited competition, and economic democratization took the form of deregulation and market liberalization (Lim 2008). SMEs also underwent a fundamental change, from receiving protection to being thrown into competition. The change in the overall framework of SME policy can be summarized as a shift from protection and support in the 1980s to liberalization and competition in the 1990s, and from direct to indirect support. The most representative policies of the 1990s were the abolition of the designation of the kyeyŏlhwa plan, the "core business industry designation policy," and the "collective private contract policies".
The SME policies prior to democratization emphasized protecting and supporting SMEs while regarding them as relatively weak compared to LEs. SMEs were treated not as a source of growth but as a countermeasure to concentrated economic power and a means to promote social justice (Cho, M. and Kim, S. 2008). Because of the government's heavy and chemical industries (HCIs) policy, capital concentration accelerated during the 1970s. M&As of SMEs by LEs reduced the number of SMEs, and SME promotion policies did not reap many effective results. Rather, the governmental SME policies of the 1970s focused on passive protection, acceleration of vertical integration between LEs and SMEs, and consolidation of subcontracting relationships.
The Chun Doo Hwan administration of the 1980s proposed a more active SME support policy. The necessity of supporting SMEs was raised at the time because the limitations of LE-centered heavy chemical industries had surfaced with the change of the international political economy. The HCI industrialization strategy required constant imports of components and materials, and as a result became a main cause of the weakness of the intermediary-goods sector and thus the trade deficit with Japan. In this situation, SME support policies were thought to be urgently needed (Kim, S. et al. 2008: 25). However, economic policies at the time favored economic growth, and given the persistence of an exclusive political coalition comprising the government, LEs, and technocrats, SME support policies amounted to "relief out of courtesy." The difficulty in the relationship between LEs and SMEs lay primarily in exploitative subcontracting relationships.
The Roh Tae Woo administration, after the 1987 political democratization, emphasized equality, competition, and welfare as the democratic principles of the economy. The general consensus is that the Roh administration's SME policies were relatively more regressive than those of the Chun administration. Market non-intervention policies predominated to facilitate free competition, and SME policy shifted its emphasis from 'protection' to 'competition'. Revising and legislating SME laws was not actively pursued (Kim, S. et al. 2008: 30).
We next examine in detail the government's SME support policy since democratization in the 1990s. The Kim Young Sam administration initiated policies that reflected the contemporary trends of globalization and liberalization. First, while active protection and support policies characterized the 1980s, during the 1990s the Kim administration emphasized a political ideology of "autonomy and competition." Until then the government had used direct support policies focused on specific categories of business or specific individual firms; it switched to indirect support policies based on a neutral incentive system. This transition began in the 1980s and became central after the establishment of the WTO and South Korea's membership in the OECD in 1995. Second, the administration revised SME laws, merging the SME Business Regulation Act and the Gye-yol hwa Promotion Act into the SME Business Protection and Cooperation Enhancement between Enterprises Act to strengthen global competitiveness and relax regulations. The Special Act on the Promotion of Venture Enterprises (1997.8) was also enacted at that time. Third, the Small and Medium Business Administration was established under the Ministry of Trade and Industry in February 1996 to strengthen practical support for SMEs and to promote systematic governmental policies.
Under the Kim Dae-jung administration, SME policy focused on the development of knowledge-intensive SMEs under the rubric of a democratic market economy. After the financial crisis, the development of small and medium venture companies became the center of the nation's industrial policy with the legislation of the Special Measure on Venture Company Support (1997). The Kim administration proclaimed that over the next 5 years the development of 20,000 venture companies would be promoted to transform the industrial structure into one that was technology- and knowledge-intensive, and ultimately to produce more jobs. Venture companies were promoted as the new principal agent of growth to overcome the financial crisis and recover economic vitality. While former policy centered on input, protection and development, direct support, and producer orientation, the new policies focused on promoting reform, competition and cooperation, the creation of an infrastructural ecosystem, and consumer orientation (Small and Medium Business Administration 2007). Also, the Presidential Commission on Small and Medium Enterprise was established in 1998 as a presidential body for supporting SME development. Its purpose was to review, revise, and assess the SME development policies of related departments and to analyze SME business trends necessary for the Commission to perform its tasks. Its general task was to act as a mediator among the policies of the different departments involved, encouraging cooperation and coordination (Oh, C.
2003: 206). The government wanted cooperation among the Presidential Commission on Small and Medium Enterprise, the Small and Medium Business Administration, the Korea Federation of Small and Medium Businesses, the Small & Medium Business Corporation, and other SME supporting institutions to compensate for inadequacies in SME policies. (The Small and Medium Business Administration is the central agency where SME-related work is carried out. It was founded in February 1996 to build corporate support policies more systematically and effectively, replacing the Industrial Advancement Administration and expanding the SME division within the Ministry of Trade and Industry. Its regional organization comprises 11 Regional Small and Medium Business Administration Bureaus (Kim, S. et al. 2008).) SMEs' weakness
in Korea is more attributable to the ineffectiveness of SME policies than to their absence (Baek, N. 1996; Park, D. et al. 2006).
Despite the Kim administration's state ideology prioritizing SME policies, the Collective Private Contract Policies and Core Business Industry Designation Policies for developing and protecting SMEs were threatened by advocates of liberalization and deregulation. Objections increased in response to possible trade conflicts resulting from market liberalization after joining the WTO in 1995, claims of reverse discrimination against national LEs, and the declining competitiveness of SMEs. The Core Business Industry Designation Policies that restricted market entry by LEs into business areas deemed suitable for SMEs were gradually abolished as being contrary to the market economy; by 2007, all 256 designations had been completely abolished (Table 2). On principle, the Regulatory Reform Committee decided to remove 43 of the 83 remaining designations in 2000, and 45 more over the next 5 years (Kim, S. et al. 2008). The Fair Trade Commission insisted on prompt abolition of the Collective Private Contract Policies because they also limited competition.
After the SME Core Business Industry Designation Policies were abolished in 2007, it was confirmed that 3 out of 4 SMEs in related industries showed a reduction in their sales. This was mainly caused by extensive competition between businesses due to LE market entry (68%), recession in the domestic market (63%), increased cost of raw materials (50.5%), and increased imports of foreign products (10.3%). 89.9% responded 'no' to the question of whether SME competitiveness increased as a result of the market entry of LEs (Hankyoreh 2008/10/17). (The Small & Medium Business Corporation is a channel that efficiently conducts work for the promotion and development of SMEs. It manages and operates financial loans for prospective SMEs that face difficulties in obtaining loans from banks, using its own policies and criteria. It was established to foster public economic development by efficiently promoting businesses for SME development under the Small and Medium Business Corporation Law in December 1978 (Park, D. et al., 2005: 368; Oh, C. 2003: 208).)
The Collective Private Contract Policies, introduced in 1965 as a policy for SMEs, allowed the government to conclude contracts at its own discretion with the Korea Federation of Small and Medium Businesses, without competitive bidding, when purchasing specific products. However, this policy was criticized because only a few SMEs with vested interests benefited. In 2003, only 14.2% of the SMEs producing products subject to the Collective Private Contract Policies were involved in collective contracts as members of the Federation, and the top 20% provided 77% of the total supply. The policy removed members' incentive to invest in improving technology and product quality and weakened SMEs' competitiveness. It, too, was abolished in 2007 after a two-year grace period.
The Roh Moo Hyun administration's SME policy can be summarized as the development of innovative SMEs known as Inno-Biz, with a promise to develop 30,000 SMEs through a tailored system. The policy focused on developing innovative SMEs, developing the parts and components industries, promoting 'Innovation Clusters', and so on. The number of innovative SMEs increased to 20,000 in 2007, new venture capital investment increased to 6.3 billion won, and the number of SME-affiliated research labs reached 12,300. The Small and Medium Business Administration supported Inno-Biz enterprises with technical skills, venture companies, and high-value-added Main-Biz firms. <Table 3> shows differences in efficiency between SMEs overall and Inno-Biz firms in the number of jobs created, total sales, and R&D. To support these innovative SMEs, the Korea Technology Finance Corporation (KIBO) decided to increase the share of guarantees based only on 'technology' from 15.2% in 2005 to 60% in 2009. It also increased the frequency of meetings to promote cooperation between LEs and SMEs. However, as can be seen in <Table 4>, although the Kim and Roh administrations strongly promoted policies to foster venture companies and Inno-Biz, the implementation process suffered from overlap and confusion.
When we analyze the overall change in growth contribution of LEs and SMEs over several administrations, as in <Table 5>, we see that SMEs showed the best performance under the Kim administration and the worst under the Park Chung-hee administration. 8 During the Third Republic period and under the industry-designation policies of the Roh Moo Hyun administration, the growth of production and shipment of SMEs, as well as their added value, fell behind that of the LEs. As a result, polarization increased.

8 The results compare and analyze the development of each administration's SME support policy, and the growth and contribution rate of LEs and SMEs.
In surveying this progress of SME policy development, we can see that business polarization intensified beginning with democratization and the financial crisis, and that government efforts to alleviate the problem were ineffective. Several causes of this ineffectiveness can be pointed out. First, as labor unions became increasingly active, corporations expanded their automated production facilities, resulting in reduced employment and increased outsourcing. As a result, the per capita added value of LEs increased, and polarization intensified. After the 1997 financial crisis, the links between chaebols and banks ceased, resulting in rising financing costs for LEs. With the relationship between the banks and chaebols shifting, corporations began to increase employment instead of investing in mechanical equipment. Consequently, from the end of the financial crisis through 2006, investment in machinery decreased while employment grew. Employing temporary as well as dispatched workers within corporations became possible, and the outsourcing rate rose continuously.
In addition, polarization within SMEs also exacerbated polarization among firms. The number of small enterprises with fewer than 20 employees gradually increased, the subcontracting chain multiplied fourfold, and the number of foreign employees and temporary workers increased to lower costs. Thus, the increased number of SMEs in Korea is an increase in the number of small enterprises, and polarization between SMEs and LEs worsened accordingly. Another reason behind the increased polarization is a problem embedded in the Korean financial system. When the SMEs were divided, according to their production or sales, into categories of 30% or higher growth, less than 30% growth, and less than 0% growth, between 1999 and 2006 there was no significant increase in SMEs that experienced growth of more than 30% or less than 30%. On the other hand, the number of SMEs that experienced negative growth increased, which implies that they were not forced out of the market. The fact that these companies survived despite their negative profits shows that companies subject to restructuring did not have any exit points via M&A. Insolvent enterprises cannot even be liquidated because they cannot secure loans. Currently, the number of businesses with one or more employees is 3.4 million; however, only 300,000, less than 10%, have credit guarantees. The number of businesses that did not receive a credit guarantee from the government is considerable, and that 90% must struggle to survive in a system of unlimited competition regardless of government support. 9 This phenomenon is clear evidence of the limited outcome of the government's financial policy towards SMEs.
In conclusion, political democratization influenced economic democratization, but that democratization was limited to market opening and liberalization. Consequently, the direction of SME support policy changed from protection and promotion to liberalization and competition. The problem was that vulnerable SMEs were forced to face unlimited competition without the structural reform needed to improve competitiveness. The government basically continued its pro-LE policies, and the SME support policies reaped only limited results, in that they were mere lip service with no binding force.
The Role of SME Interest Groups
Another reason behind the limited achievements of the numerous government policies for SME support is related to the role of the institutions that represent SMEs' interests.
Generally, the Japanese industrial structure, compared to that of South Korea, has developed with a good balance between LEs and SMEs. In fact, before the war, the Japanese subcontracting system also showed imbalanced relationships. However, the Japanese government continued its efforts to protect and support SMEs by initiating various governmental policies and establishing laws preventing unfair subcontracting practices. Behind these efforts were pressures from the interest groups of Japanese SMEs, coupled with the government's political will to win electoral support (Nishigushi 1994; Lim, H. 1998). Japanese self-employed business groups' politics are more institutionalized and pluralistic. The SME groups decided to strengthen their external political connections during periods of crisis. From the 1950s to the 1960s, the Federation of SMEs emerged with connections to the leftist party, which exercised a fair amount of political pressure. By demonstrating that self-employed businesses and SMEs could break away from political support, they were able to influence the Liberal Democratic Party and led it to propose the following policy responses: the enactment of the Minor Enterprises Act, a comprehensive development plan for SMEs, and proposals for creating a favorable environment for SMEs, such as the establishment of exclusive banks for SMEs, signature loans, the SME Restructuring Act, and the Minor Enterprises Act (Kim, S. et al. 2008: 41). The SME loan policy was effective to the extent that, by 1967, the amount of bank loans received by SMEs exceeded that of the LEs. Japan's SME development began from below, initiated by independent SME movements.
Apart from being pushed aside in the LE-centered structure, SME interest groups also play a significantly limited role in Korea. When the influence of the Federation of Korean Industries (FKI) and the Korea Federation of Small and Medium-sized Businesses (Kbiz), the main interest groups of LEs and SMEs respectively, is compared, the respective rates are 20.6% and 10.6% (Hwang, J. 1997). It is also problematic to regard Kbiz as the main representative of all the SMEs in Korea. In 1999, Kbiz had a total of 735 associations and 64,780 companies as members, which is only 4.9% of the total SMEs in Korea (Jeong, S. 2002: 196). The ratio of registered Kbiz members fell to 2.3% in 2006, but after it altered membership eligibility with regard to business categories, the membership rate rose to 19.5%. 10 The decisive factor that weakens the influence of Kbiz is its low fiscal self-reliance ratio. Since 1963, Kbiz has been receiving government support, and until 1995 government grants accounted for approximately 26.5% of the total ordinary earnings of its general account (Jeong, S. 2002). The proportion of the budget supplied by membership fees is only 4%. Its dependence on government led it to circumvent major issues, such as conflict with LEs or financial problems, and to respond only to minor issues. The overrepresentation of the manufacturing industry and the underrepresentation of the commerce industry in Kbiz's membership also weakened its representativeness of SMEs.
There is a big difference between LEs and SMEs in the membership rates of their labor unions. In 2004, workers employed in LEs with over 300 employees accounted for 10.1% of the workforce, whereas the remaining 89.9% were employed in SMEs. Of the members of the Federation of Korean Trade Unions and the Korean Confederation of Trade Unions, 72.5% work in LEs with over 500 employees, 4.9% in enterprises of 50-99 employees, and 3.3% in small businesses with fewer than 49 employees (Hankyoreh 2005/1/3). The fact that labor unions are concentrated in LEs is closely related to the market power of LEs.
In contrast with Japan's balanced industrial structure, Korea's imbalanced, LE-centered structure is mainly due to the political ties between the government and LEs, and the continuance of LE-oriented industrial policies. However, as we can see from the Japanese experience, the political role of SME interest groups is also significant. The government policy for supporting SMEs in Korea was ineffective because the government was reluctant to impose punishments or show political will in cases of non-compliance by LEs. SME interest groups are able to exert positive influence on the effectiveness of government policy by arranging measures to solidify their internal organization and strengthen their external political ties. Their continuously weak political-economic position is also caused by the subcontracting relationship between the SMEs and LEs, which will be further discussed in the next section.
Structure of the LEs-SMEs Subcontracting Relationship
The subcontracting rate of Korea's SMEs reached 63.1% in 2003, up from 48.9% in 1994. These subcontractors supply, on average, 82% of their total products to LEs. The ratio of SMEs which supply more than 90% of their total sales to LEs was 71.4%. The subcontracting rate between LEs and SMEs has increased, and the monopoly status of the former over the latter has strengthened (Kim, D. 2007: 480). According to Figure 5, whereas the number of SMEs that receive orders from other SMEs decreased, their dependence on the LEs increased to 85% in 2007. The main complaints of SMEs in transactions with LEs are: increases in the price of raw materials not being reflected in the supply price (67.2%), pressure to lower the supply price (49.8%), pressure to shorten delivery dates (28.8%), and delayed payment (24.6%). 11 According to a study conducted in 2005, 80% of the SMEs identified themselves as being in a subordinate position vis-à-vis LEs. According to another study by Kbiz on 150 SMEs, only 0.7% supported LEs expanding into the business fields of SMEs, while 84.5% opposed it. The difficulties for SMEs conducting business included the highhandedness of LEs in their subcontracting transactions (32.4%), manpower shortage (28.3%), lack of demand for their products (13.7%), and government interference and regulations (9.4%). The most frequent cases of unfair subcontracting transactions were unilateral automatic cuts of the supply price (63.2%), arbitrary modification or cancellation of orders (15%), and delayed payments (10.3%). The measures needed to facilitate fairness were fair subcontracting practices (64.6%), localization (17.4%), financial support (10.2%), and joint marketing (5.4%). 11 The average number of assemblers supplied by each supplier increased to 14.5, which the Small and Medium Business Administration (SMBA) interprets as a diversification trend in the supply structure. SMEs' financial problems were aggravated after the 1997 financial crisis due to delayed payments. Even though the supply price was set through negotiations, the majority believed that the buying assembler's influence was determinant in the price decision. No more than 5.2% agreed that the suppliers' position was adequately considered (Lee, Y. 2003: 223).
Businesses at each step of the transaction chain were asked about their experiences of receiving demands for reduced product unit prices. According to the results, more demands for cutting unit prices were received at the second and third steps of the transactions (33.1% for step 1, 37.2% for step 2, and 55.3% for step 3). At the second and third steps of the transactions, there was a tendency to depend more on simple manufactured products than on complicated ones, and a larger possibility of price competition within the same industry. Second- and third-step vendors would do better to provide differentiated products based on superior skills, as a tool to evade price competition (KOSBI 2009).
Among the companies that replied to the survey on low-cost demands, most cited the continued LE demands for cost cutting, the LE-centered economic structure, and wage increases following LE labor unions as the primary causes. More than half (57.8%) claimed that the fundamental problem had to do with the behavior of LEs. The following solutions were suggested regarding unit price negotiations. Most respondents claimed that a Requirement for Product Unit Price Negotiation policy would be more efficient than a Pegged Unit Price policy: 35.7% claimed that the former would be more effective, and 33.7% replied that there would be positive results if the government thoroughly oversaw the process and secured the laws. Also, 35.7% were concerned about retaliation from LEs, which would decrease the effectiveness of such measures.
According to <Table 9>, 55.5% of the companies claimed that the LE-centered economic structure and structural imbalance were the primary causes of polarization, and 33.0% suggested the cause to be the trend towards globalization and the lack of competitiveness of SMEs, showing that a majority attributed polarization to the LE-centered economic structure. The most popular suggestions to rectify this problem included reforming the current LE-centered economic structure and improving the technical capabilities of SMEs; the second was strengthening the fairness of subcontracting relations. Institutional reform is necessary to prevent unfair corporate practices. For instance, in order to prevent any disadvantages for the consignees, 38.2% of the investigated group claimed that it was necessary to frequently initiate investigations of the unjust transactions of consigners when reported by the consignees. Also, 27.1% answered that precedents of unjust cases should be reflected more effectively in the laws to exclude all unfair practices, 12.7% claimed that punishment should be intensified to increase losses when unjust practices are carried out by the consigners, and 12.5% suggested that policy consolidation for SME technology and quality competitiveness is needed.
A majority of those studied claimed that polarization resulted from the LE-centered industrial structure, and this shows that government policies favor LEs. Suggested reforms include punishing unfair transaction practices and frequent investigations of LEs. This shows that there has been a constant, exclusive political alliance between the government and LEs.

V. CONCLUSION

Korean SMEs have grown quantitatively since the 1980s, yet there has also been a structural weakness in the form of constant LE-SME polarization. The quantitative development of Korean SMEs does not lag when compared with other countries. After the 1980s, the number of LEs decreased while SMEs increased in number, production, and added value. Yet the profitability of SMEs decreased in their subcontracting relations with the LEs. Korean SMEs may have increased in numbers, but in qualitative terms they have reflected structural problems such as small size and low profitability.
The causes of LE-SME polarization may be explained not just in economic terms, but also in terms of the government's LE-favored policies, the ineffectiveness of SME policies, and the exploitative subcontracting relationships between the LEs and SMEs. In fact, the number of SME support policies and LE-SME cooperative policies was actually overwhelming. Moreover, polarization showed signs of intensifying after democratization and the financial crisis. Thus, the focus of the problem should not be on whether there was a shortage of such policies but on why these policies were ineffective. Causes included the government's exclusive political alliance with LEs, the lack of political representation for the interests of SMEs, and the lack of win-win strategies between LEs and SMEs. To foster cooperation between LEs and SMEs for a "win-win" strategy, the government needs to stress the importance of "win-win" strategies and continue its efforts in persuading the LEs. At the same time, the government's willingness was not strong enough to impose severe punishments for unfair subcontracting practices by LEs. Political democratization also influenced the realm of economics in Korea, but economic democratization was perceived as only the opening of markets and liberalization. Following democratization, the SME support policies focused on liberalization and competition and moved away from previous policies of protection and development. However, without structural reform for fair competition, SMEs were subjected to unlimited competition, which caused excessive polarization of LEs and SMEs. Japan was able to achieve a relatively balanced industrial structure because the interest groups of SMEs were institutionalized enough to significantly influence the government. The government also placed sustained effort on developing SMEs to obtain firm political support. The most important factor was that there were effective policies, such as the initiation of the Monopoly Regulation and Fair Trade Act to foster fair subcontracts between LEs and SMEs, and an expansion of financial support for SMEs. The transformation of subcontracting relations between LEs and SMEs from exploitative to cooperative was an important factor that contributed to balanced development.
According to a study conducted in 2003 identifying the number of SMEs in the manufacturing industry that developed into LEs exceeding 300 employees, of 56,000, only 57 became LEs. The number of companies that expanded to 500 employees was only 8. The current situation in Korea is that there is a structural barrier between LEs and SMEs, which cannot be considered a healthy industrial structure. 12 The financial support policies for SMEs have shown limitations, as financial institutions maintain conservative tendencies during takeover and M&A processes involving weak SMEs. In order to prevent further polarization and to develop a balanced industrial structure, it is necessary to implement more effective policies and foster win-win strategies between LEs and SMEs. Also, SME interest groups need to consolidate internally and strengthen their external political ties in order to positively influence the effectiveness of governmental policies.
Figure 2. Comparison of SMEs in Major Countries: Share of SMEs in manufacturing sector.
9 Interview. Kim, Joo Hoon, KDI Senior Researcher, 2009.7.1.
Political Economy of the Polarization of LEs-SMEs Industrial Structure in Korea 243
Table 1. Change in Gap between LEs and SMEs in their Key Indicators
Table 2. Progress of SME Core Business Industry Designation
Table 3. Comparison of SMEs Efficiency
Table 4. Progress of Inno-Biz. Note: The research was based on the manufacturing sector, using chronologically organized statistical data; the Korea Federation of Small and Medium Businesses conducted the research in the 60th year of the establishment of the Korean government.
Table 5. Changes in Growth Contribution of LEs and SMEs. Note: The SMBA reported in the Status of Korean SMEs 2007, which surveyed 4,100 small and medium manufacturing companies (Daily Labor News 2008/11/19).
Figure 5. Trend of suppliers' transaction dependence on parent company (%).
Table 6. Complaints of SME subcontractors in transactions with LEs (%)
Table 7. Experiences of reduction in supply price according to each transaction step. Source: KOSBI, "The Survey on Transactions between LEs and SMEs, 2009".
Table 8. Fundamental cause of the pressure of low cost
Table 9. Causes of polarization of LEs and SMEs. Source: "The Survey on Transactions between LEs and SMEs, 2009".
Table 10. Measures for LEs-SMEs "win-win" strategy. Source: "The Survey on Transactions between LEs and SMEs, 2009".
TGF-beta 1 levels are associated with lymphocyte percentages in patients with lung cancer treated with radiation therapy.
Purpose
Plasma TGF-β1 protein levels have been reported to predict the treatment outcomes of lung cancer. We hypothesized that in patients with lung cancer treated with radiation therapy (RT), TGF-β1 levels may correlate with the percentages of CD4+ T cells, CD8+ T cells, and the CD4+/CD8+ T cell ratio in peripheral blood.
Patients and methods
Eighty-two lung cancer patients satisfied the inclusion criteria. Platelet-poor plasma was obtained before RT, at the second and fourth weeks during RT, and at the end of RT (pre-, during-, and post-RT, respectively). TGF-β1 was measured via ELISA, while recording the percentages of lymphocyte subsets in peripheral blood. Short-term efficacy was categorized as complete response, partial response, stable disease, or progressive disease.
Results
Patients who had significantly lower TGF-β1 protein levels after RT than pre-RT seemed to have a better short-term effect (P<0.05) than those who had higher TGF-β1 levels. There was a significant association between the TGF-β1 levels and percentages of CD4+ T cells, CD8+ T cells, or CD4+/CD8+ T cell ratio during and at the end of RT. Changes in CD3+ T cells, B cells, or natural killer cells were not statistically related to the changes in TGF-β1 levels.
Conclusion
Lung cancer patients with TGF-β1 levels in plasma after RT that are below pre-RT levels may experience better short-term efficacy. The underlying mechanism may be related to the influence of TGF-β1 on antitumor immunity.
Introduction
Lung cancer is the leading cause of cancer-related death in the world. It has been reported that TGF-β1 levels in the lung cancer tissues of patients [1-4] or in plasma [5,6] may be associated with prognosis of the disease. In locally advanced non-small-cell lung cancer (NSCLC), an increase in TGF-β1 levels in plasma during radiation therapy (RT) indicates a poorer prognosis. This may relate to immune escape or suppression induced by TGF-β1 at the end of RT. [7-9]
This study investigated the prognostic significance of plasma TGF-β1 levels during RT in the treatment of lung cancer, and the possible correlations during the course of RT between TGF-β1 levels and CD4 + T cell or CD8 + T cell levels, or the CD4 + / CD8 + T cell ratio.
Materials and methods

Study design
Ethical approval for this investigation was obtained from the Research Ethics Committee, Tianjin Cancer Hospital & Institute. In brief, 82 patients with lung cancer were included in the final analysis. Blood samples were collected at the following timepoints: within a week prior to receiving RT (pre-RT), at the second and fourth weeks during RT (2 weeks during-RT or 4 weeks during-RT), and within a week after RT (post-RT). The blood samples were analyzed for CD4 + T cells, CD8 + T cells, B cells, and natural killer (NK) cells, and the CD4 + /CD8 + T cell ratio was calculated. TGF-β1 levels were investigated for associations with CD4 + T cell or CD8 + T cell levels, or CD4 + /CD8 + T cell ratio.
The response to RT treatment was evaluated by chest computed tomography (CT) image 1 month after RT, and classified as a complete or partial response, or stable or progressive disease. The short-term efficacy of treatment was then categorized as effective (complete or partial response) or ineffective (stable or progressive disease). The response rates were evaluated to determine the prognostic value of plasma TGF-β1 levels during RT.
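The grouping described above (CR/PR as effective, SD/PD as ineffective) can be sketched as a small lookup; the function name and dictionary below are illustrative, not from the paper.

```python
# Illustrative sketch of the study's short-term efficacy grouping:
# RECIST 1.1 categories mapped to "effective" / "ineffective".
RECIST_TO_EFFICACY = {
    "CR": "effective",    # complete response
    "PR": "effective",    # partial response
    "SD": "ineffective",  # stable disease
    "PD": "ineffective",  # progressive disease
}

def short_term_efficacy(recist_category: str) -> str:
    """Return 'effective' or 'ineffective' for a RECIST 1.1 category."""
    return RECIST_TO_EFFICACY[recist_category.upper()]
```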
Patients
We identified all patients with lung cancer, confirmed and treated at Tianjin Cancer Hospital, between March 1, 2016, and May 1, 2017. After appropriate eligibility was established, 82 patients with lung cancer participated. All patients involved in this study were informed of the content of the research and provided written informed consent.
All the participating patients had lung cancer, confirmed by pathology or cytology at Tianjin Cancer Hospital. Excluded from the analysis was any patient with medical complications or serious infectious diseases that could affect immune function; obvious contraindications for intensity-modulated RT; malignant tumors other than lung cancer; immune therapy or immunosuppressive drugs within the previous 3 months; or incomplete medical records; as well as any patient who did not comply with the criteria mentioned above, gave up the treatment for various reasons, or was otherwise lost to follow-up.

Radiotherapy

All patients received RT with or without sequential or concurrent chemotherapy. Radiation was given via consistent intensity-modulated RT. Treatment planning was performed with a Philips Pinnacle 3 radiation treatment planning system (Philips Medical Systems, Amsterdam, Netherlands).
For all patients, the gross tumor volume was identified based on the CT images. The gross tumor volume included the tumor and the metastatic lymph nodes. The clinical target volume was based on the gross tumor volume, but also included the primary tumor bed and metastatic lymph nodes before chemotherapy. The margin from the gross tumor volume to the clinical target volume was 5 mm. The margin from the clinical target volume to the planning target volume was 5-10 mm. The radiation dose was 50-66 Gy in 20-33 fractions, 1.8-3 Gy per fraction, 1 fraction per day.

Chemotherapy

Chemotherapy was given before, during, or after RT, or at combinations of these intervals. When chemotherapy was given sequentially with RT, small-cell lung cancer (SCLC) patients received etoposide and cisplatin, or etoposide and carboplatin. NSCLC patients received platinum-based doublets (carboplatin or cisplatin combined with vinorelbine, paclitaxel, or gemcitabine). When chemotherapy was given concurrently, the combination of carboplatin or cisplatin and etoposide or paclitaxel was commonly used. The dosage of all the chemotherapy drugs was within normal limits. A chemotherapy cycle was considered to be 21 days. The median number of chemotherapy cycles was 4.
Sample collection and TGF-β1 measurement
Blood samples were collected, with dipotassium EDTA as the anticoagulant, at the timepoints described above in the study design. Blood samples were placed on ice immediately after collection and centrifuged within 2 hours of collection at 3,000× g for 30 minutes. The upper one-third of the supernatants was collected and stored at −80°C. Plasma TGF-β1 levels were then measured by ELISA using a Quantikine ELISA Kit (R&D Systems, Inc., Minneapolis, MN, USA). The percentages of CD4+ T cells, CD8+ T cells, B cells, and NK cells, and the CD4+/CD8+ T cell ratio, were tested by the Department of Clinical Laboratory, Tianjin Cancer Hospital, using a FACSCanto™ II flow cytometer (BD Biosciences, San Jose, CA, USA).
Criteria for evaluating therapeutic effect
The response rate was evaluated by chest CT image, 1 month after RT. In accordance with the Response Evaluation Criteria in Solid Tumors version 1.1, short-term efficacy was classified as a complete response, partial response, stable disease, or progressive disease. Immeasurable lesions (such as bone metastasis sites or malignant pleural effusion) were generally not evaluated, unless involved in disease progression. Effective treatment was defined as a complete or partial response after RT. Ineffective treatment was defined as stable disease or progressive disease after RT.
Statistical analysis
Clinical characteristics were compared using the chi-squared test. The Mann-Whitney U test was used to compare differences in plasma TGF-β1 and lymphocytes between groups, while the two-tailed Student's t-test was used to compare the same patients at different timepoints. The independent t-test and one-way ANOVA were used to compare differences in TGF-β1 between different clinical characteristics. Correlations were tested by Pearson's correlation analysis. All P-values were two-sided. Data are presented as mean ± SD unless otherwise specified.
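As a rough illustration of the test battery described above, the following sketch runs the corresponding SciPy routines on synthetic data; all values, group splits, and variable names are invented for illustration and are not the study's data.

```python
# Sketch of the statistical tests named above, on synthetic TGF-beta1 values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tgfb_pre = rng.normal(30, 5, 40)                       # pre-RT levels (synthetic)
tgfb_post = tgfb_pre - rng.normal(3, 2, 40)            # post-RT, same patients
cd4_pct = 40 - 0.5 * tgfb_post + rng.normal(0, 2, 40)  # synthetic CD4+ %

# Mann-Whitney U: compare two independent groups (e.g. effective vs ineffective)
u_stat, u_p = stats.mannwhitneyu(tgfb_post[:20], tgfb_post[20:])

# Two-tailed t-test on the same patients at two timepoints
t_stat, t_p = stats.ttest_rel(tgfb_pre, tgfb_post)

# Pearson correlation: TGF-beta1 level vs lymphocyte percentage
r, r_p = stats.pearsonr(tgfb_post, cd4_pct)
print(f"Mann-Whitney p={u_p:.3f}, paired t p={t_p:.3g}, Pearson r={r:.2f}")
```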
Results

Clinical characteristics of patients
In this prospective study, 104 patients with lung cancer were initially considered for inclusion, and 22 patients were subsequently excluded for the following reasons: 11 received a diagnosis of esophageal cancer; 5 were found to have thymoma; and 6 had missing data. Thus, 82 patients were included in the final analysis; their clinical characteristics are listed in Table 1. In addition, patients with different clinical characteristics had similar TGF-β1 levels (Table 1).

Association between TGF-β1 levels and short-term response to treatment

Of the whole group, 47 (57%) achieved an effective response (CR+PR), and 35 (43%) achieved an ineffective response (SD+PD; Table 2). At different timepoints, TGF-β1 levels varied. Furthermore, patients with SCLC and NSCLC also had different TGF-β1 levels (Table 2). The TGF-β1 levels of the patients before RT were comparable (P>0.05; Table 2). After RT, the mean TGF-β1 level of the patients who had shown an effective response was significantly lower than that of the patients for whom RT was ineffective (P<0.05; Table 2).
Overall, 47 patients showed an effective response to RT, while treatment was ineffective for 35 patients (Table 3). Of those with an effective response, the majority (36/47) had TGF-β1 levels after RT that were lower than the pre-RT levels. The TGF-β1 levels of most of the patients who showed an ineffective response (27/35) were higher after RT relative to pre-RT. In the SCLC group, 12 and 9 patients experienced effective and ineffective responses to RT, respectively (Table 3). Of those who demonstrated an effective response, the majority (11/12) had TGF-β1 levels after RT that were lower than the pre-RT levels. The TGF-β1 levels of most of the SCLC patients who showed an ineffective response (6/9) were higher after RT relative to pre-RT. In the NSCLC group, 35 and 26 patients experienced effective and ineffective responses to RT, respectively (Table 3). Of those who demonstrated an effective response, the majority (25/35) had TGF-β1 levels after RT that were lower than pre-RT levels. The TGF-β1 levels of most of the NSCLC patients who showed an ineffective response (21/26) were higher after RT relative to pre-RT.
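As an illustration only (not an analysis reported in the paper), the overall counts described for Table 3 (36 of 47 effective responders with lower post-RT TGF-β1, versus 27 of 35 ineffective responders with higher levels) can be checked with a chi-squared test on the 2x2 table:

```python
# Chi-squared test on the 2x2 counts described in the text (illustrative).
from scipy.stats import chi2_contingency

#        lower post-RT, higher post-RT
table = [[36, 11],  # effective response (n=47)
         [8, 27]]   # ineffective response (n=35)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}, dof={dof}")
```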
TGF-β1 levels and lymphocytes
Overall, the TGF-β1 levels decreased only slightly from pre-RT to post-RT, while the percentages of CD3+ T cells and B cells were significantly lower at post-RT (both P<0.01; Table 4). In addition, compared with pre-RT levels, at post-RT the CD4+ T cells and the CD4+/CD8+ ratio were only slightly higher; CD8+ T cells were significantly higher (P<0.05; Table 4); and NK cells remained stable. At the second and fourth weeks during RT, and at the end of RT, TGF-β1 levels were significantly associated with the percentages of CD4+ T cells, CD8+ T cells, and the CD4+/CD8+ ratio (all P<0.05; Table 5). There were no significant associations between TGF-β1 levels and the percentages of CD3+ T cells, B cells, or NK cells (Table 5).
Discussion
TGF-β is a member of a multifunctional cytokine family, and TGF-β1 is considered to participate in carcinogenesis 10,11 by promoting cell proliferation, differentiation, and extracellular matrix production. 12,13 Studies have shown that TGF-β1 levels may be a marker of patient prognosis; patients with tumors who had higher TGF-β1 levels after treatment compared with TGF-β1 levels before treatment seemed to have a significantly poorer overall prognosis. 2,4,14 This may be due to the immune suppression effect of TGF-β1. Zhao et al 9 observed that in locally advanced NSCLC, a decrease of TGF-β1 levels during RT correlated with a favorable prognosis. Huang et al 15 reported that high TGF-β1 protein levels were associated with a poor prognosis. What is more, studies have reported that chemotherapy may affect TGF-β1 levels. Our present small-sample study suggests that patients with decreased TGF-β1 levels at the end of RT have a higher response rate, which is in accord with the reported literature. 16 Patients who received more than 4 cycles of chemotherapy had lower TGF-β1 levels compared with those who received fewer than 4 cycles.
The changes of lymphocytes could also be a prognostic biomarker. Results from Yang 17 showed that the CD4+/CD8+ ratio counts were consistently higher in prostate cancer patients with a better response to RT, while CD3+ and CD8+ cell counts were lower. These results are in accord with our research and indicate that variations in peripheral lymphocyte subpopulations are predictive of outcome after RT. Another study from Spain 18 indicated that in prostate cancer patients undergoing RT, in vitro radiation-induced apoptosis of CD4+ T lymphocytes assessed before RT was associated with the probability of developing chronic genitourinary toxicity, and radiation-induced apoptosis of CD8+ T lymphocytes was associated with overall survival. In our study, instead of analyzing the potential of T lymphocytes as a prognostic biomarker, we focused on the relationship between TGF-β1 levels and T lymphocytes and hypothesized that the underlying mechanism by which TGF-β1 could reflect treatment outcome may be related to an immunosuppression effect on T lymphocytes. Recent studies have indicated that the specific immunologic microenvironment of tumors may be crucial to carcinogenesis and anti-tumorigenesis, 19 and T cell-mediated cellular immunity may be an important mechanism for killing tumor cells. 20-22 Results from Sakaguchi et al 23 showed that TGF-β1 and interleukin 2 (IL-2) induced tumor tissues to produce more Treg (regulatory T) cells, which can inhibit the cytotoxic effect of cells, thereby impairing the immune system. 23-25 Strauss et al 26 reported that Treg cells with a distinct phenotype in tumor-infiltrating lymphocytes could produce TGF-β, which contributed to local immune suppression. There can be no doubt that radiotherapy can induce the suppression of immunity, 27 with a significant decline in T lymphocyte levels. 28 In the present study, we found that after RT the percentage of CD3+ T cells and B cells had significantly decreased, but the percentage of CD8+ T cells significantly increased from pre-RT values.

[Table 3 notes: In both SCLC and NSCLC patients, the TGF-β1 levels before RT were comparable; patients who experienced an effective response (CR+PR) also had a significantly lower mean TGF-β1 level after RT compared with those for whom RT was ineffective (SD+PD). Relative to pre-treatment levels, the majority (36/47) of patients with an effective response showed a drop in TGF-β1 after RT, while the TGF-β1 levels of most patients with an ineffective response (27/35) were higher after RT. Abbreviations: CR, complete response; NSCLC, non-small-cell lung cancer; PD, progressive disease; PR, partial response; SCLC, small-cell lung cancer; RT, radiation therapy; SD, stable disease.]
This suggests that the immune system, especially cellular immunity, was inhibited after RT. As for the CD4+ T cells, in our results there was a slight rise after radiotherapy that did not reach statistical significance; therefore, we cannot determine the influence of the change in CD4+ T cells on the changes of the whole T lymphocyte subsets. Large-sample clinical studies are urgently needed. Over the course of RT treatment, TGF-β1 levels significantly and negatively correlated with the percentages of CD4+ T cells and the CD4+/CD8+ ratio, but significantly and positively correlated with the percentages of CD8+ T cells. Previously, it was demonstrated that a decrease in the CD4+/CD8+ ratio was an independent negative prognostic factor for survival in NSCLC patients. 29 We previously showed that TGF-β1 levels negatively correlated with the CD4+/CD8+ ratio, and an increase in TGF-β1 accompanied by a decline in the CD4+/CD8+ ratio indicates a poor prognosis. This is consistent with previous reports.

[Table 4 notes: a Compared with pre-RT, P<0.01; b compared with pre-RT, P<0.05. The TGF-β1 levels decreased only slightly from pre-RT to post-RT, and the percentages of CD3+ T cells and B cells were significantly lower at post-RT. In addition, compared with pre-RT levels, at post-RT the CD4+ T cells and the CD4+/CD8+ ratio were only slightly higher; CD8+ T cells were higher; and NK cells remained stable. Abbreviations: NK, natural killer; RT, radiation therapy.]
Conclusion
Our research indicated that lung cancer patients whose plasma TGF-β1 levels after RT are below pre-RT levels may experience better short-term efficacy. The underlying mechanism may be related to the influence of TGF-β1 on antitumor immunity. All these results support the hypothesis that the underlying mechanism of TGF-β1 may be related to an influence on antitumor immunity, especially cellular immunity. Studies have reported that Treg cells express membrane-bound TGF-β1, which directly inhibited the functions of NK effector cells and downregulated NK cell receptors on the NK cell surface. 30,31 Our present research suggested a weak correlation between TGF-β1 and NK cells, but unfortunately, the correlation was not statistically significant. Besides, our study included one patient with large-cell neuroendocrine carcinoma (LCNEC); considering that the current treatment for LCNEC is the same as for NSCLC, 32 this patient was classified under the NSCLC group. We failed to refine the pathological types into four types (SCLC, adenocarcinoma, squamous cell carcinoma, and LCNEC), and hence further exploration is required.
Disclosure
The authors report no conflicts of interest in this work.
Biology and devouring propensity of lady bird beetle, Coccinella septempunctata Linnaeus on rapeseed-mustard aphid, Lipaphis erysimi Kaltenbach
An experiment was conducted to study the biology and devouring propensity of Coccinella septempunctata Linnaeus in the laboratory at 25 ± 1°C and 65 ± 5% relative humidity (RH) on mustard aphid, Lipaphis erysimi (Kalt.) infesting rapeseed-mustard crop during the rabi cropping seasons of 2009 to 2010 and 2010 to 2011. The mean fecundity was 378.00 ± 26.51 eggs/female while ovipositional period, size of egg cluster, incubation period, percentage of grub emergence, larval period, pupal period, total developmental period (egg to adult), mating period and adult longevity were 4.32 ± 0.26, 9.00 ± 0.21, 99.00 ± 0.49, 4.50 ± 0.29, 94.65 ± 0.68, 11.15 ± 0.50, 5.60 ± 0.18, 25.57 ± 1.20, 49.43 ± 39.79 and 122.93 ± 4.05 days respectively. The mean devouring propensity of grubs and adult was 53.11 ± 1.46 and 86.20 ± 1.34 aphids per day per individual, respectively.
INTRODUCTION
Rapeseed-mustard, Brassica juncea (Linnaeus) is one of the important cruciferous oilseed crops cultivated all over India, but its yield is largely affected by a number of insect pests. Of these, the mustard aphid, Lipaphis erysimi Kalt. is the most dreaded insect, infesting the crop right from the seedling stage to maturity. The losses in yield caused by mustard aphid ranged from 9 to 95% at different places in India (Singh et al., 1980; Singh et al., 2012). Bio-control agents such as coccinellids and chrysopids have been reported to be effective for controlling the aphid, L. erysimi (Shukla et al., 1990; Singh and Singh, 2013). Among the different predators and parasitoids, the lady bird beetle, C. septempunctata Linn. belongs to one of the most important groups of entomophagous predators, preying upon a wide variety of aphid species, and has been reported as a potential predator of aphids able to manage pest populations in the field (Agarwala et al., 1987; Afroz, 2001; Pandey and Khan, 2002; Bilashini and Singh, 2009). In evolving an eco-friendly strategy using bio-agents for the management of mustard aphid, C. septempunctata could be a potential predator.
For the effective use of predaceous coccinellids in an integrated pest management programme, a complete investigation of their bio-ecology and predation potential is of utmost importance. The present study was, therefore, carried out to gather relevant information with particular reference to the biology and devouring propensity of the lady bird beetle on rapeseed-mustard aphid in the eastern region of Uttar Pradesh.

*Corresponding author. E-mail: kuldeepsingh153@gmail.com. Tel: 09235132023, 09889230797.
MATERIALS AND METHODS
The experiment was conducted in the departmental laboratory, Department of Agricultural Entomology, U.P. Autonomous College, Varanasi (U.P.). Twenty pairs of C. septempunctata were collected from the experimental fields of the same institution and reared in the laboratory at 25 ± 1°C and 65 ± 5% relative humidity on mustard aphid in specimen jars (15 L × 15 W × 25 H) during the rabi cropping seasons of 2009 to 2010 and 2010 to 2011. The experiment was replicated 10 times for each set during both crop seasons.
Mustard twigs infested with mustard aphid were provided as food. The eggs were collected from the specimen jars and reared in other jars until the adults emerged. Males and females were collected from this stock culture and kept separately in petri dishes (15 cm dia) for mating. The mated females were individually allowed to oviposit in separate petri dishes (15 cm dia) containing mustard aphid, and observations on fecundity and ovipositional period were recorded. Twenty freshly laid eggs were kept individually in separate petri dishes (15 cm dia) with moistened filter paper at the bottom. These filter papers were replaced daily to avoid contamination until hatching, and the incubation period was recorded. Soon after hatching, the grubs were provided with mustard aphid as food, at least 3 times the number of aphids consumed on the previous day.
The number of aphids left uneaten was counted the next morning. The final-instar grubs were provided with additional mustard leaves as shelter for pupation. The emergence of adults was observed, as well as their longevity. The daily consumption of mustard aphid by adults was also recorded until death to determine devouring propensity. Observations were also recorded on the duration of the different instars and their devouring propensity, the pupal period and adult longevity.
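The daily devouring propensity described above (aphids offered minus aphids left uneaten, averaged over the observation days) can be sketched as follows. The daily counts are hypothetical placeholders, not the experimental data:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical daily records for one adult beetle:
# (aphids offered, aphids left uneaten the next morning)
daily_records = [(120, 30), (130, 45), (125, 40), (140, 55), (135, 50)]

eaten_per_day = [offered - uneaten for offered, uneaten in daily_records]
avg = mean(eaten_per_day)
se = stdev(eaten_per_day) / sqrt(len(eaten_per_day))  # standard error of the mean

print(f"devouring propensity: {avg:.2f} \u00b1 {se:.2f} aphids/day")
```

The paper reports mean ± a dispersion statistic; whether that statistic is the standard error or the standard deviation is not stated in the text, so the standard error here is an assumption.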
RESULTS AND DISCUSSION
The freshly laid eggs were small, cigar-shaped and shiny deep yellow, and turned light grey just before hatching. Table 1 clearly documents that the average incubation period was 4.60 ± 0.31 and 4.40 ± 0.26 days during 2009 to 2010 and 2010 to 2011 respectively, with a mean of 4.50 ± 0.29 days, which is similar to that of Agarwala and Saha (1986). The average durations of the first, second, third, and fourth instar grubs were 2.45 ± 0.13 and 3.30 ± 0.10, 2.20 ± 0.09 and 2.65 ± 0.13, 2.15 ± 0.12 and 2.45 ± 0.14, and 3.20 ± 0.13 and 3.90 ± 0.16 days during 2009 to 2010 and 2010 to 2011 respectively. The total grub period was recorded as 10.00 ± 0.47 and 12.30 ± 0.53 days respectively; the mean periods of the first, second, third and fourth instars were 2.86 ± 0.12, 2.43 ± 0.11, 2.30 ± 0.13 and 3.55 ± 0.14 days, and the mean total grub period was 11.15 ± 0.50 days. Agarwala and Saha (1986) and Behera et al. (1999) reported grub periods of C. septempunctata on aphids of 12.6 and 9.35 ± 0.20 days, respectively.
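The pooled means reported in this section are, to rounding, the simple averages of the two seasonal means; a quick arithmetic check using the incubation, total grub, and pupal period values quoted in the text:

```python
# (2009-10 mean, 2010-11 mean, reported pooled mean), all in days
durations = {
    "incubation": (4.60, 4.40, 4.50),
    "total grub": (10.00, 12.30, 11.15),
    "pupal": (6.50, 4.70, 5.60),
}

for name, (y1, y2, reported) in durations.items():
    pooled = round((y1 + y2) / 2, 2)
    status = "OK" if pooled == reported else "mismatch"
    print(f"{name}: pooled {pooled} vs reported {reported} -> {status}")
```

Note that a few other pooled values in the paper (e.g., the first-instar mean of 2.86 days) differ slightly from the average of the seasonal means, presumably because they were computed from the raw replicate data rather than from the rounded seasonal means.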
The average pupal period was recorded as 6.50 ± 0.20 and 4.70 ± 0.15 days during 2009 to 2010 and 2010 to 2011 respectively, with a mean of 5.60 ± 0.18 days. Agarwala and Saha (1986) reported a pupal period of 6.4 days on Aphis gossypii, and Singh et al. (2009) reported 5.35 ± 0.15 days on L. erysimi. The adults copulated/mated after 5.80 ± 0.26 to 6.20 ± 0.31 days of emergence during 2009 to 2010 and 2010 to 2011 respectively, with an average of 6.00 ± 0.29 days. Copulation took place during daytime and night, and lasted for an average of 48.60 ± 37.21 and 50.25 ± 42.37 min during the first and second years of study respectively, with an overall average of 49.43 ± 39.79 min and a range of 3 to 130 min.
The females started oviposition after 6.20 ± 0.30 and 6.40 ± 0.42 days of mating, with an average of 6.30 ± 0.36 days within a range of 4.00 to 7.00 days, and the eggs were laid in clusters of 8 to 100 eggs/cluster. These results corroborate those of Singh and Malhrotra (1979), Behera et al. (1999), Petro and Behera (2005) and Singh et al. (2009). The adult females laid an average of 360.75 ± 25.71 and 395.25 ± 27.31 eggs during 2009 to 2010 and 2010 to 2011 respectively, while the mean fecundity was 378.00 ± 26.51 eggs. Behera et al. (1999) reported a fecundity of 330.80 ± 22.41 eggs. The average longevity of adults was 120.25 ± 3.75 and 125.60 ± 4.35 days during 2009 to 2010 and 2010 to 2011 respectively, while the mean was 122.93 ± 4.05 days. Saha (1987) reported the adult longevity of Menochilus sexmaculatus as 80 to 112 days.
The life cycle from egg to adult was completed in 25.38 ± 1.14 and 25.75 ± 1.25 days during 2009 to 2010 and 2010 to 2011 respectively, while the mean was 25.57 ± 1.20 days. Agarwala and Saha (1986) and Behera et al. (1999) reported total life cycles of 24.20 and 16.02 days respectively, which is in accordance with the present study.
It is inferred that C. septempunctata has good longevity and a high predatory potential/devouring propensity against the mustard aphid, and it could be concluded that it might play a suitable role in biointensive Integrated Pest Management programmes (Table 2).
Table 2. Devouring propensity of C. septempunctata Linnaeus on rapeseed-mustard aphid, L. erysimi Kalt.

The mean consumption of female adults was 87.54 ± 1.36 aphids per day. The average consumption of adults was 85.98 ± 1.38 and 86.43 ± 1.29 aphids per day during 2009 to 2010 and 2010 to 2011 respectively, with a mean of 86.20 ± 1.34 aphids per day. These findings are similar to those of earlier reports.
Is the Health Behavior in School-Aged Survey Questionnaire Reliable and Valid in Assessing Physical Activity and Sedentary Behavior in Young Populations? A Systematic Review
Background: Using self-reported questionnaires to assess levels of physical activity (PA) and sedentary behavior (SB) is a widely recognized method in the public health and epidemiology research fields. Selected items of the Health Behavior in School-aged (HBSC) Survey Questionnaire have been used globally for the measurement and assessment of PA and SB in children and adolescents. However, there are no comprehensive and critical reviews assessing the quality of the studies on the reliability and validity of the PA and SB items derived from the HBSC. Thus, this review aimed to critically assess the quality of those studies and summarize the evidence for future recommendations. Methods: A systematic review protocol was used to search potentially eligible studies on the reliability and validity of the PA and SB measures of the HBSC questionnaire. Electronic academic databases were used. Information on the reliability and validity of the PA and SB measures was extracted and evaluated with well-recognized criteria and assessment tools. Results: After the literature search, six studies were included in this review. The reliability of the PA measures of the HBSC questionnaire showed moderate agreement, while the reliability of the SB measures showed great variation across the different items in different subgroups. The validity of the PA measures had acceptable performance, whereas no studies assessed the validity of the SB measures. The included studies all had quality weaknesses in their reliability or validity analyses. Conclusions: The PA and SB measures of the HBSC questionnaire were reliable in assessing PA and SB among adolescents. However, limited evidence showed that the PA measures are partially valid in assessing PA, and no evidence confirmed the validity of the SB measures. The included studies all had methodological weaknesses in examining the reliability and validity of the PA and SB measures, which should be addressed in the future.
Further studies are encouraged to use a more standardized study design to examine the reliability and validity of the PA and SB measures in more young populations.
INTRODUCTION
It is well-known that sufficient physical activity (PA) and limited sedentary behavior (SB) are two key determinants of health outcomes among children and adolescents, such as improved fitness, reduced body fat, increased cognitive ability, lower levels of depression and anxiety, and fewer suicidal attempts (1-8). The World Health Organization (WHO) and some national health sectors have released guidelines on PA and SB based on epidemiological evidence, which recommend that children and adolescents should accumulate at least 1 h of moderate-to-vigorous PA and spend <2 h in SB during leisure time (9, 10). Despite the numerous health benefits of PA and limited SB based on convincing evidence, the prevalence of meeting the PA and SB guidelines is not ideal. Specifically, a global study including 1.6 million participants by Guthold et al. (11) reported that only about 20% of adolescents were physically active according to the PA guidelines. This result is highly similar to another study published in the Lancet 2012 PA Research Series (12). In the face of this concerning public health issue, it is of vital significance to promote PA while discouraging SB among children and adolescents (13).
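The guideline thresholds above (at least 1 h of moderate-to-vigorous PA, and under 2 h of leisure-time SB) reduce to a simple per-person compliance check. The sketch below is illustrative only; the function and sample values are hypothetical, and real guideline operationalization (e.g., averaging MVPA across days of the week) is more nuanced:

```python
def meets_guidelines(mvpa_minutes_per_day, leisure_sb_hours_per_day):
    """Return (meets_pa, meets_sb) for one adolescent's typical day."""
    meets_pa = mvpa_minutes_per_day >= 60      # at least 1 h of MVPA
    meets_sb = leisure_sb_hours_per_day < 2    # under 2 h of leisure-time SB
    return meets_pa, meets_sb

# Illustrative sample of five adolescents: (MVPA min/day, leisure SB h/day)
sample = [(75, 1.5), (30, 3.0), (60, 2.0), (90, 1.0), (45, 1.5)]
both = sum(1 for mvpa, sb in sample if all(meets_guidelines(mvpa, sb)))
print(f"{both}/{len(sample)} meet both guidelines")
```

Prevalence estimates such as the ~20% figure cited above are, conceptually, this kind of count divided by the sample size.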
To increase PA and decrease SB, an essential step is to accurately know and understand the actual levels of PA and SB (e.g., the prevalence of meeting the PA or SB guidelines, or time spent in PA or SB) (12, 14-16). At a population level, using self-reported questionnaires to collect data or information on PA and SB is a feasible and economical measurement approach because of its lower costs, reduced testing burdens, and easy data management (17-20). To date, there are many questionnaires to assess PA and SB levels, such as the International Physical Activity Questionnaire (IPAQ), the Global Physical Activity Questionnaire (GPAQ) (21), and the Health Behavior in School-aged Questionnaire (HBSC) (22). These questionnaires have been used frequently across populations in different countries (23-30). Among the three questionnaires, the HBSC questionnaire is specifically designed for assessing child and adolescent health behaviors, including PA and SB. In the HBSC questionnaire, some selected items are used for PA and SB measurement: four items for PA and eight items for SB. Using the PA and SB measures (selected items) from the HBSC questionnaire, many national estimates, reports, and studies of PA and SB levels in young populations have been published (31-33), which in turn provide nationally comparable evidence.
Although the PA and SB measures derived from the HBSC questionnaire have been tested for reliability and validity in multiple young populations (e.g., Chinese, Japanese, and Slovakian) (34-36), no systematic review has comprehensively assessed those studies and summarized the evidence on the reliability and validity of the PA and SB measures. This is a barrier to forming an overview of the studies using the PA and SB measures of the HBSC questionnaire. Moreover, being unaware of the quality of these studies is a critical issue for further behavioral epidemiological research and for population monitoring and surveillance. Another issue in this research topic is that no studies have assessed the quality of the studies on the reliability and validity of the PA and SB measures. If researchers understand the information on reliability and validity, it will be easier to interpret the PA and SB levels assessed among young populations through the HBSC questionnaire.
Thus, this review aimed (1) to comprehensively assess the studies on the reliability (test-retest) and validity (criterion) of the PA and SB measures derived from the HBSC questionnaire; and (2) to evaluate the testing performance of the PA and SB measurements of the HBSC questionnaire. This review is expected to provide valuable and supportive information for future studies using the HBSC questionnaire to assess PA and SB, and to offer implications for future research recommendations.
Selection Criteria
Papers identified by the searches were screened against the following inclusion criteria: (1) full-text original report published in a peer-reviewed journal; (2) the study participants were healthy or typically developing; (3) the study participants were children or adolescents; (4) the study reported either reliability or validity information for a PA or SB measure; (5) the publication language was English. The exclusion criteria were: (1) studies published as a conference paper, review, or meta-analysis; (2) studies not published in English; (3) studies not using PA and SB measures from the HBSC questionnaire. Finally, following the literature search protocol and study screening process (see Figure 1), 6 eligible studies (34-39) meeting the selection criteria were included in this review.
Data Extraction
Information was extracted from the included studies regarding the first author, published year, sample characteristics (e.g., sample size, % of sex), PA and SB measures (questions of PA and SB measures), statistical analyses, and information on reliability (e.g., intraclass correlation coefficient, ICC; interval days) or validity (e.g., criterion validity correlation coefficient; objective standard). Two independent reviewers (YS and YZ) conducted the data extraction, and any disagreement between them was discussed with and resolved by a third author (HW). If studies reported the information on reliability and validity by age (grade) group, sex, or other sociodemographic factors, those results were also extracted. The extracted data from the included studies are shown in tabular format.
Methodological Quality Assessment of the Included Studies
Using the consensus-based standards for the selection of health measurement instruments (COSMIN) (40), the included studies were rated. This checklist was used to assess the methodological quality of the six included studies. Two authors (YS and YZ) independently conducted the quality assessment; any differences between their independent assessments were resolved through discussion with the third author (HW) until an agreement was reached. For test-retest reliability, 10 mandatory items concerned study design, and 4 optional items depended on the statistical analysis of each study (some studies used the ICC while others used Cohen's kappa to assess reliability). Hence, the full score of the test-retest reliability assessment was not the same for each study (11-14 points). For criterion validity, five mandatory items concerned study design, and two optional items depended on each study's statistical analysis. Hence, each study's full score for the validity assessment varied (6 or 7 points).
For the results of reliability and validity (coefficients), the criteria recommended by Landis and Koch (41) were used to assess the reliability and validity performance of each included study. These criteria have been used frequently in previously published studies (42-46). In detail, coefficient values of <0.2 were considered poor, 0.21-0.4 fair, 0.41-0.6 moderate, 0.61-0.8 substantial, and 0.81-1.0 almost perfect. Table 1 summarizes the specific questions or items for the PA and SB measures derived from the HBSC questionnaire. Reliability analyses covering both PA and SB measures were reported in two studies (34, 35). Only three studies performed validity analysis for the PA measures (36, 38, 39). No included study assessed the criterion validity of the SB measures. Concerning the statistical methods for test-retest reliability and criterion validity, the intraclass correlation coefficient and Spearman rank correlation were used frequently across the included studies.
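The Landis and Koch bands described above map directly to a small lookup; a sketch using the cut-points from the text (the handling of values falling exactly on a boundary, e.g. 0.20, is simplified here):

```python
def landis_koch(coefficient):
    """Classify a reliability/validity coefficient per the Landis and Koch bands."""
    bands = [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if coefficient <= upper:
            return label
    raise ValueError("coefficient must be <= 1.0")

# The coefficients below are sample values spanning the five bands
for value in (0.16, 0.35, 0.55, 0.75, 0.90):
    print(value, "->", landis_koch(value))
```

Applying such a lookup uniformly across studies is what allows the heterogeneous ICC and kappa coefficients reported later to be summarized on a single qualitative scale.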
RESULTS
Supplementary Table 1 shows the summarized coefficients of reliability and validity, as well as their evaluated performance, for the studies included in this review. In terms of the reliability of the PA measures, most included studies reported test-retest reliability coefficients ranging from about 0.5 to about 0.8, regardless of the PA measurement items and subgroups, which indicates that the PA measures showed moderate (or better) test-retest reliability (34, 35, 37, 38). In the two studies reporting the reliability coefficients of the SB measures (34, 35), the coefficients of the different SB measures varied greatly (from 0.16 to 0.90, signifying poor to almost perfect). The two studies that reported validity coefficients for the PA measures indicated a fair level of validity performance (36, 38). Table 3 exhibits the methodological quality assessments of the included studies for reliability analysis using the COSMIN tool. The quality assessment scores varied from 4 to 7. Although four studies had a full score of 11 and another study had a full score of 13, the quality assessment results of each study were not high. Table 4 displays the methodological quality assessments of the included studies for validity analysis using the COSMIN tool. The three studies that conducted validity analysis all gained 2 points on the quality assessment, indicating low quality.
DISCUSSIONS
This comprehensive review summarized the evidence on the reliability and validity of the PA and SB assessments derived from the HBSC questionnaire. This review also assessed the methodological quality of each included study that conducted reliability or validity analysis, using the COSMIN tool (40). This systematic review yielded the following findings. First, only a few studies have examined the reliability and validity of the PA and SB measures derived from the HBSC questionnaire. Second, the reliability of the PA measures showed an acceptable level across the included studies, while the validity of the PA measures was at a fair level. Third, the reliability of the SB measures showed great variation in performance, while no studies assessed the validity of the SB measures. Fourth, the quality assessment revealed that the studies that examined the reliability and validity of the PA and SB measures derived from the HBSC questionnaire were all of low quality, which casts doubt on their results and findings.
The PA and SB measures of the HBSC questionnaire have been used in many national surveys, such as in China (47-49) and some European countries (24, 50, 51). However, this review revealed that only a few studies have examined the properties of these measures in particular populations (34-39). The limited number (n = 6) of targeted studies indicates that evidence on the feasibility and utility of these PA and SB measures in other young populations is limited. On this basis, more future studies are encouraged to examine the reliability and validity of the PA and SB measures, because adequate and rigorous validation of the PA and SB measures derived from the HBSC questionnaire is an essential foundation for large-scale use (34, 36). With more studies on the reliability and validity of the PA and SB measures, the questionnaire's adaptability can be extended to different cultures, countries, and societies (39).
Another interesting finding, beyond the small number of studies conducting reliability and validity assessments, is that some age groups were missing from the reliability studies. For example, the study by Yang et al. (34) failed to examine the reliability of the PA measures in adolescents aged 13 years. Ng et al. (37) did not include adolescents aged 12-14 years. Such issues also occurred in other studies (35, 39). Thus, theoretically, the current evidence can only show that the PA measures had satisfactory reliability in adolescents of specific ages rather than in all adolescent populations. We thereby advocate that more studies address this issue to expand the generalizability of the finding that the PA measures are reliable for adolescents across a wider age range.
This review found that the PA measures of the HBSC questionnaire show acceptable test-retest reliability. This in turn indicates that the PA measures of the HBSC questionnaire are reliable for collecting PA data or information in adolescent populations. Interestingly, only one study, by Yang et al. (34), examined the reliability of PA over the usual week, and it showed that this PA question had satisfactory reliability in Chinese samples (Beijing). However, the current evidence is insufficient to confirm that this kind of PA measure has good reliability; more studies examining the reliability of the usual-week PA measure are therefore urgently needed.
Concerning the reliability of the SB measures, only two studies reported coefficient values (34, 35), and they indicated that the different SB questions had varying reliability coefficients across different subpopulations. For example, in Polish samples, the sitting-time questions had reliability coefficients over 0.9, indicating perfect performance (35). However, those measures showed poor performance in the Chinese samples aged 15 years in Yang et al.'s study (34). Such a large inconsistency may be owing to different measurement protocols and sociocultural differences between countries (34, 35). Overall, however, the SB measures of the HBSC questionnaire showed moderate (acceptable) reliability regardless of sex, age, and national differences. This suggests that the SB measures of the HBSC are reliable for capturing information on SB among adolescents. We still recommend that more studies re-examine the reliability of the SB measures of the HBSC questionnaire in more young populations.
Two studies included in this review examined the validity of the PA measures of the HBSC questionnaire (36, 39), demonstrating fair to moderate validity performance. This evidence suggests that the PA measures of the HBSC questionnaire are partially valid in assessing young people's PA. However, only two studies examining the validity of the PA measures are inadequate to support any robust conclusion that these measures are valid for assessing PA in young populations with different socio-cultural backgrounds. More studies are encouraged to conduct validity analyses in other young populations.
Surprisingly, no studies in the current review assessed the validity of the SB measures of the HBSC questionnaire. It therefore remains unknown whether the SB measures of the HBSC questionnaire are valid for assessing SB among adolescents. We also have to admit that assessing SB is a complex scientific issue (15, 52). However, because the SB measures of the HBSC have been used frequently in many national surveillance programs, knowing the validity of the SB measures is a vital foundation for estimating SB more accurately. Thus, addressing this research gap would greatly benefit the use of the SB measures of the HBSC questionnaire across the world. To achieve these research aims, well-designed measurement protocols are strongly recommended in the future.
This systematic review also assessed the quality of the included studies and found shortcomings in how test-retest reliability and validity were examined. For the test-retest reliability studies, there were methodological issues: according to the COSMIN guidelines, some studies did not include a sufficient sample size (recommended sample size = 100) for the analysis (34,38), and one study by Ng et al. failed to report the test-retest interval in days (37). Similar weaknesses were observed in the studies that conducted validity analyses. For example, Booth et al. (39) used an aerobic fitness test to examine the validity of the PA measures, yet an aerobic fitness test cannot be regarded as a gold standard against which to validate PA measures. In addition, there were sample-size issues in the validity studies (36,38). Previous studies examining the reliability or validity of the PA and SB measures of the HBSC questionnaire therefore had inherent study design shortcomings, which may negatively influence the interpretation of their results. It is strongly recommended that future studies undertake more standardized and rigorous study designs to examine the reliability and validity of the PA and SB measures of the HBSC questionnaire.
Study Strengths and Limitations
A primary strength of this review is that it is the first to assess the literature for evidence on the reliability and validity of the PA and SB measures derived from the HBSC questionnaire, highlighting the challenges of using these questionnaires in population surveillance surveys across the world. Second, this review is the first to assess the quality of the studies that examined the reliability and validity of the PA and SB measures of the HBSC questionnaire, which identifies research gaps for future similar studies. Third, this study consolidates the evidence on the validity and reliability of the PA and SB items of the HBSC questionnaire, supporting more standardized use of the questionnaire in future research. One limitation should be mentioned: the literature search and included studies were restricted to English, which may have omitted studies published in other languages.
CONCLUSIONS AND RECOMMENDATIONS
This study offers systematic evidence on the reliability and validity of selected items of the HBSC questionnaire for assessing PA and SB among young populations across the world. The review indicates that the PA and SB measures of the HBSC questionnaire are reliable (moderate agreement) for assessing PA and SB among adolescents. The PA measures show only fair to moderate validity, indicating that they are partially valid, while the validity of the SB measures remains unknown, a gap that future research should fill.
Based on the present review, it is highly recommended that more studies re-examine the reliability and validity of the PA and SB measures of the HBSC questionnaire in additional young populations using more standardized study designs. In this way, the PA and SB measures of the HBSC questionnaire can be used for health surveillance in a wider range of populations around the world.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Highly stable ultrabroadband mid-IR optical parametric chirped-pulse amplifier optimized for superfluorescence suppression
We present a 9 GW peak power, three-cycle, 2.2 μm optical parametric chirped-pulse amplification source with 1.5% rms energy and 150 mrad carrier-envelope-phase fluctuations. These characteristics, in addition to excellent beam, wavefront, and pulse quality, make the source suitable for long-wavelength-driven high-harmonic generation. High stability is achieved by careful optimization of superfluorescence suppression, enabling energy scaling.
Since the prediction of high-yield, soft-x-ray (>120 eV photon energy) high-harmonic generation (HHG) in gases with mid-IR drive pulses [1-6], the development of low-noise, high-energy, few-cycle, carrier-envelope-phase (CEP) stable light pulse sources in this wavelength range has attracted wide attention. Ultrabroadband optical parametric chirped-pulse amplification (OPCPA) is a competitive route toward this technology; it is, to date, unique in its demonstrated ability to produce few-cycle pulses with multiterawatt peak power, achieved at 800 nm wavelength [7,8]. Recent work has extended few-cycle OPCPA to 2 μm wavelength with multigigawatt peak powers; intrapulse difference-frequency generation (DFG) of a 5 fs Ti:sapphire (Ti:S) oscillator output provides half-octave-bandwidth, self-CEP-stabilized 2 μm wavelength seed pulses, and Nd-based amplifiers seeded by the same oscillator produce high-energy picosecond pump pulses at 1 μm for degenerate parametric amplification at 2 μm with half-octave phase-matching bandwidth in bulk periodically poled lithium niobate [9,10]. Scaling of these systems to both higher pulse energy and average power for use as an HHG drive laser (e.g., using >100 W cryogenically cooled Yb:YAG 1 μm wavelength picosecond lasers [11] as the OPCPA pump) seems promising as a route to high-flux, tabletop, soft-x-ray HHG sources.
Extension of the OPCPA technology to the few-cycle, high-energy regime, however, has uncovered difficulty in achieving satisfying noise performance, especially in designs employing high gain and low seed energy. In this case, OPCPA is particularly susceptible to parasitic depletion of the pump by superfluorescence (SF), the amplification of spontaneous parametric generation at signal and idler wavelengths. In OPCPA, noise gain generally exceeds signal gain during amplification, a result of locally imperfect phase matching of the seed (and/or lack of a seed altogether) throughout the spatiotemporal interaction cross section defined by the pump wave. In the spatial domain, the conical geometry of phase matching about the pump beam gives rise to preferential noise amplification in poorly seeded high-order spatial modes about the signal beam. In the temporal domain, the frequency sweep of the seed pulse results in a higher gain for phase-matched noise available at delays where the seed wavelength is imperfectly phase matched. Large drops in signal-to-noise ratio (SNR) occur especially during amplifier saturation, when there is preferential amplification of coordinates where both signal phase matching and seeding are poorest, thus boosting conversion efficiency and bandwidth but also preferentially amplifying noise. As a result, when gain is high and the initial SNR is low, SF energy can become comparable with or even overtake the amplified signal energy, thus placing a ceiling on the usable energy extractable by the signal and heavily degrading the noise performance.
For example, the half-octave phase-matching bandwidth and 30 ps pump pulse duration of the pioneering 2.1 μm few-cycle OPCPA system of [9] set the equivalent noise energy of vacuum fluctuations (with the equivalent of one photon per mode [12]) in that amplifier to ~40 aJ (4 × 10⁻¹⁷ J). With a 4 pJ seed energy (by DFG of a 5 nJ Ti:S oscillator pulse), the initial SNR was only ~10⁵. After amplification, owing to SF buildup, 80 μJ was the practical limit in amplified signal energy [9], and the full available pump energy could not be employed. Using a Ti:S regenerative amplifier/nonlinear pulse compressor front end to boost the seed energy to a few nanojoules, [10] recently reported 2.1 μm, 15 fs pulses with 920 μJ of energy in the signal band. This work obtained both good efficiency and excellent bandwidth in the final amplification stage (indicating amplifier saturation) but with an SF noise level of 20%, resulting in 9% rms energy fluctuations. Despite the increase in seed energy, scaling to higher pulse energies seems difficult.
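The signal-to-noise figures quoted above follow from simple arithmetic on the stated energies; a back-of-envelope sketch (not part of the original analysis) using the quoted 40 aJ noise-equivalent energy and 4 pJ seed:

```python
h, c = 6.626e-34, 2.998e8   # Planck constant (J s), speed of light (m/s)

noise_energy = 4e-17         # J, quoted vacuum-noise equivalent (~40 aJ)
seed_energy = 4e-12          # J, quoted DFG seed energy

# Under the one-photon-per-mode picture, the noise energy implies a
# number of phase-matched modes of order a few hundred at 2.1 um
photon = h * c / 2.1e-6
print(round(noise_energy / photon))              # -> 423

# Initial SNR = seed energy / noise-equivalent energy
print(f"initial SNR ~ {seed_energy / noise_energy:.0e}")  # -> 1e+05, as quoted
```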
This Letter outlines techniques essential for building highly stable, ultrabroadband, high-gain OPCPA systems, demonstrated here in a 2.2 μm system with energy stability comparable to the standard of commercial high-peak-power Ti:S laser systems (1.5% rms energy and 0.8% rms intensity fluctuations). We obtain a clean amplified signal spectrum and fully compressible pulses while maintaining good efficiency and bandwidth by means of saturation in the final amplification stage. With higher-energy pump pulses, these SF suppression methods should allow scaling of the system to multimillijoule signal energy and potentially terawatt peak powers with noise performance suitable for HHG.
The OPCPA system (see Fig. 1) is constructed as follows. The full power (150 mW) of the 80 MHz Ti:S oscillator is used to generate a passively CEP-stabilized, half-octave-bandwidth, 2 μm, 3 pJ energy pulse train by intrapulse DFG (mixing of the 650 and 940 nm components) in a 2 mm MgO-doped periodically poled congruent lithium niobate (MgO:PPCLN) crystal (poling period Λ = 13.1 μm). The spectrum [Fig. 2(a), red dotted curve] covers 1570 to 2470 nm at −20 dB. Once the remaining 1 μm Ti:S light is sent to the pump amplifier chain, the 2 μm seed pulses are stretched using normal dispersion in a 30 mm block of bulk silicon to 6.2 ps length (full width at −10 dB) and then preamplified in an optical parametric amplifier (OPA), OPA1 (3 mm MgO:PPCLN, Λ = 31.0 μm), to 1.5 μJ. After OPA1, an acousto-optic programmable dispersive filter (AOPDF, Fastlite) increases the signal duration to 9.5 ps, both optimizing the efficiency-bandwidth product and SF suppression in the power amplifier stage [13] and compensating for higher-order dispersion mismatch between the stretcher and compressor materials. Losses from the AOPDF (~90%) and spatial filters are compensated by OPA2. The resulting 5 μJ pulse is amplified to 220 μJ in OPA3 and compressed in three passes through an antireflection-coated 10 cm, high-purity quartz glass block that introduces ~10% loss. The OPA2 and OPA3 crystals are, respectively, 3-mm- and 1.6-mm-length stoichiometric lithium tantalate (MgO:PPSLT) gratings with Λ = 31.4 μm. In all stages, a 1° angle between pump and signal beams allows separation of signal and idler after amplification. Figure 2 shows the amplified spectrum [(a), black, solid] and the corresponding interferometric autocorrelation trace [(b), black, solid] of the final 2.2 μm pulse. It is compressed to 23 fs (1.1× its transform limit), or three cycles, FWHM.
The 4.5 W pump laser system produces 4.5 mJ, 12 ps, 1 μm pulses. It consists sequentially of two Yb-doped fiber amplifiers (YDFA), an Nd:YLF regenerative amplifier, and two Nd:YLF multipass slab (MPS) amplifiers, and is seeded by the 1047 nm component of the Ti:S oscillator. To reduce the peak power in the MPS amplifiers, a chirped fiber Bragg grating (CFBG) is placed between YDFA stages and imparts 440 ps/nm group delay, resulting in a 1 kHz, 110 ps, 1.05 mJ pulse train with 0.25 nm bandwidth from the regenerative amplifier (High-Q Laser). The two three-pass MPS amplifiers (Q-Peak MPS gain modules) are customized to avoid B-integral-related damage. Each module is 28 mm long, 2 mm by 6 mm in aperture, and side pumped with 78 W optical power. Employing gains of 2.9 and 2.4, respectively, we obtain 7 mJ pulses. After compression in a grating pair, we obtain 4.5 mJ, 12 ps Gaussian pulses (FWHM, ~1.3× their transform limit).
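Several of the quoted pump-chain numbers can be cross-checked with back-of-envelope arithmetic; this consistency sketch is ours, not part of the original Letter:

```python
# Stretch imparted by the chirped fiber Bragg grating:
# 440 ps/nm of group delay over the 0.25 nm amplified bandwidth
stretch = 440 * 0.25
print(stretch)                 # -> 110.0 ps, matching the quoted duration

# Energy through the two multipass slab stages (gains 2.9 and 2.4)
e_mps = 1.05 * 2.9 * 2.4       # mJ
print(round(e_mps, 1))         # -> 7.3 mJ, consistent with the quoted 7 mJ

# Implied grating-compressor throughput, 7 mJ in -> 4.5 mJ out
print(round(4.5 / e_mps, 2))   # -> 0.62
```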
Methods for SF noise suppression are summarized as follows. First, the stretcher/compressor scheme maximizes the 2 μm seed energy by avoiding lossy elements in the pulse stretcher. In comparison to [9,10], we use an AOPDF as a compensator for stretcher/compressor dispersion mismatch rather than as a stretcher, placing it between the pre- and power amplifier stages, where its 90% transmission losses do not affect the seed energy of OPA1 and can be compensated by OPA2. This increases the initial SNR by 1 order of magnitude.
Second, we use multiple apertures after OPA1 to eliminate the phase-matched, SF-dominated high-order spatial modes of the signal. In addition to a hard aperture after OPA1, by setting the pump beam width to less than half the signal beam width in OPA2 and OPA3 and placing the nonlinear crystal 2-3 diffraction lengths away from the signal focus, the amplifiers act as soft apertures and spatial filters. The apertures clean the signal beam by sequentially selecting a smaller portion of the initial seed beam. This cleans the wavefront, preserves only the region of the beam with the highest SNR, eliminates a slight spatial chirp from the AOPDF, and impresses the clean pump beam profile on the amplified signal. AOPDF and aperture losses are recovered in OPA2. This ensures that the gain of the final amplification stage (OPA3) is kept as low as possible in order to maximize the conversion efficiency.
Third, we carefully optimize the signal duration and spectrum at each stage. Here, several features of SF growth in OPCPA are relevant [13]: each temporal coordinate is essentially an independent amplifier, with a local signal frequency and SNR; SF gain equals signal gain when the signal is perfectly phase matched but is otherwise greater, with the discrepancy increasing with the local signal phase mismatch; and unseeded temporal coordinates close to the pump pulse peak are most susceptible to depletion of the pump by SF. Optimization of SF suppression in ultrabroadband OPCPA requires, therefore, that all signal frequencies near the pump pulse peak are well seeded, and that the signal is chirped enough to push frequencies at the edge of the phase-matching bandwidth away from the pump pulse peak. This results in a slight sacrifice in amplifier bandwidth relative to the full phase-matching bandwidth but strongly improves SF suppression. Separate optimization of the seed chirp at each stage is necessary, since the peak gain determines the duration of the OPA gain window (i.e., the degree of temporal gain narrowing). To ensure these conditions, we use a broadband seed with a spectrum spanning 1.6-2.5 μm, covering the full phase-matching bandwidth, but, through adequate chirp, limit the effective amplifier bandwidth at each stage to a central 600-nm-wide wavelength range. Seed chirps corresponding to signal durations of 6.2 and 9.5 ps for OPA1 (10⁶ gain) and OPA2/OPA3 (10²-10³ gain) achieve this.
Finally, while some amplifier saturation in the final stage is necessary to obtain good conversion efficiency and is helpful in suppressing gain fluctuations due to pump intensity noise, in all stages we avoid pushing hard into saturation (as a tool to expand the effective amplifier bandwidth), since this preferentially amplifies coordinates of the signal pulse where the difference between SF and signal gain is highest.
As a result of these methods, we obtain a clean signal spectrum (single shot), 1.5% rms energy fluctuation of the 220 μJ amplified pulse, and 0.8% rms peak intensity fluctuation after compression, while maintaining a conversion efficiency of 7% (comparable to [10]) and enough bandwidth to support a three-cycle pulse. These numbers (and a 15% rms SF energy fluctuation measured in the absence of a seed pulse) allow us to calculate an SF level of 7%. With slightly less saturation in OPA3, we can obtain 170 μJ with a slightly narrower spectrum (Fig. 2, dashed curve) and 2% SF. Using higher-energy pump pulses at OPA2 and OPA3, we estimate we will be able to further amplify the signal to the millijoule level while maintaining suppression of SF to <10% of the total energy, without the need for costly high-energy seed generation. The CEP fluctuation, measured by an f-to-3f spectral interferometer, is 150 mrad rms over 10 s [Fig. 2(c)], where the residual phase excursion at time ≈ 2 s was traceable to amplitude-to-phase noise coupling in the continuum generation arm of the interferometer. No significant CEP drift is observed over 10 s. Finally, excellent beam and wavefront quality (M² = 1.3) allows high-quality focusing. With 200 μJ and a ~50 μm waist, our beam generates a 2-mm-long plasma column in air.
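Two headline figures can be sanity-checked from the quoted numbers. This is our sketch, not the authors' calculation; the peak-power line is a crude E/τ estimate that ignores pulse-shape factors, so it only shows order-of-magnitude consistency with the 9 GW claim in the abstract:

```python
c = 2.998e8
lam = 2.2e-6                  # central wavelength
tau = 23e-15                  # compressed FWHM duration

# Number of optical cycles under the FWHM: duration / cycle period
cycles = tau / (lam / c)
print(round(cycles, 1))       # -> 3.1, the quoted "three-cycle" pulse

# Crude peak-power estimate: 220 uJ minus the ~10% compressor loss,
# divided by the FWHM duration (no pulse-shape deconvolution)
energy = 220e-6 * 0.9
peak = energy / tau
print(f"{peak / 1e9:.1f} GW")  # -> 8.6 GW, of order the quoted 9 GW
```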
In conclusion, we have demonstrated techniques of general use for highly stable ultrabroadband OPCPA with excellent noise performance even while maintaining high conversion efficiency and bandwidth in the final stage. Energy scaling of the demonstrated OPCPA system may provide a route toward the development of high-flux extreme UV and soft-x-ray HHG sources.
RISK RELEVANCE OF COMPREHENSIVE INCOME: EVIDENCE FROM NON-FINANCIAL INDONESIA COMPANIES
Purpose of the study: This study aims to examine the effects of net income volatility, other comprehensive income volatility, and comprehensive income volatility on stock return volatility. Methodology: This study employed a quantitative method with multiple linear regression. The sample is all non-financial companies listed on the Indonesia Stock Exchange from 2012 to 2017. Data used in this study are panel data sourced from www.idx.co.id and www.finance.yahoo.com. Sample selection used a purposive sampling method, yielding a total of 246 observations. Results: This study suggests that net income volatility is not associated with stock return volatility. However, other comprehensive income volatility and comprehensive income volatility are positively associated with stock return volatility. Implications: Future studies can employ data from companies in other developing countries and in developed countries to compare with the results of this study. Based on the findings, existing and potential investors should improve their ability to understand IFRS-based financial accounting standards. The Accounting Standards Board, especially in Indonesia, is expected to improve the rules of financial accounting standards as well as the access to and availability of financial accounting standards for financial statement users, primarily regarding disclosure policies. Novelty: This study measures risk relevance differently from previous studies, using annual stock return volatility and annual volatility of the comprehensive income components. Annual stock return volatility is calculated as the standard deviation of monthly stock returns multiplied by √12. The annual volatility of the comprehensive income components is the standard deviation of the quarterly comprehensive income components, each divided by the market value of equity at the beginning of the period, multiplied by √4.
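The volatility measures described in the Novelty paragraph reduce to simple arithmetic; a minimal sketch in Python (the function names and sample numbers are ours, not the paper's):

```python
import statistics

def annualized_return_vol(monthly_returns):
    """Annual stock-return volatility: stdev of monthly returns x sqrt(12)."""
    return statistics.stdev(monthly_returns) * 12 ** 0.5

def annualized_income_vol(quarterly_income, market_value_begin):
    """Annual volatility of a comprehensive-income component: stdev of
    quarterly values scaled by beginning market value of equity, x sqrt(4)."""
    scaled = [q / market_value_begin for q in quarterly_income]
    return statistics.stdev(scaled) * 4 ** 0.5

# Hypothetical inputs
print(round(annualized_return_vol([0.02, -0.01, 0.03, 0.00]), 4))   # -> 0.0632
print(round(annualized_income_vol([10, 12, 8, 14], 100), 4))        # -> 0.0516
```

Because scaling by the (constant) beginning market value is linear, dividing each quarterly value before taking the standard deviation is equivalent to dividing the standard deviation afterward.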
INTRODUCTION
The International Accounting Standards Committee (IASC) and the International Accounting Standards Board (IASB) are the bodies that draft international financial reporting standards using the concept of principles-based standards, hereafter referred to as International Financial Reporting Standards (IFRS) and previously International Accounting Standards (IAS). The aim of standards built on this principle is to produce high-quality financial reporting. IFRS is implemented in each country through adoption, in the hope that companies in that country can produce financial reports of high quality and credibility. Some studies suggest that the adoption of IFRS generally improves the quality of accounting standards in most countries (Chen & Lin, 2010; Daske et al., 2008). The obligation to use IFRS for companies listed on capital markets is one of the most significant changes in the history of accounting regulation (Daske et al., 2008).
In Indonesia, since January 1, 2012, there has been a change in financial accounting standards, namely the full implementation of financial accounting standards converged with IFRS. This policy was implemented because Indonesia is a member of the G-20, whose members agreed to have a single set of high-quality global accounting standards to provide quality financial information in international capital markets (Cahyonowati & Ratmono, 2012). One effect of IFRS convergence on earnings reporting is that the Indonesian Institute of Accountants regulates the presentation of financial statements in PSAK 1 (IAI, 2018), which states that the statement of income and other comprehensive income consists of net income, other comprehensive income, and comprehensive income.
Financial statements prepared under IFRS-based accounting standards present two main measures of overall performance, namely net income and total comprehensive income. Net income is the difference between income realized in transactions and the related historical costs incurred in a certain period, based on the accrual basis, the realization principle, and the matching principle (Liu & Liu, 2007). Comprehensive income covers all wealth acquired by the company, reflecting a measurement of its overall performance (Devalle & Magarini, 2012). One function of information on the components of comprehensive income (net income, other comprehensive income, and comprehensive income) is to reflect stock prices and stock returns. One of the main problems for companies in reporting comprehensive income is that other comprehensive income is more volatile than net income (Hirst & Hopkins, 1998). Thus, investors assume that the higher the volatility of a comprehensive income component, the higher the company's risk. Under these conditions, value relevance research that tests the components of comprehensive income against stock prices and stock returns has developed into risk relevance research that tests the volatility of the comprehensive income components against stock return volatility. The risk of information presented in accounting data can be reflected in the volatility of net income, other comprehensive income, and comprehensive income, the main business performance results, which may confuse financial statement users and cause significant misinterpretation of company performance (Khan & Bradbury, 2015). However, studies that examine risk relevance are still limited.
Since IFRS adoption began in Indonesia in 2012, research in Indonesia examining the effects of net income volatility, comprehensive income volatility, and other comprehensive income volatility on stock return volatility (stock risk) has been rare. It is therefore essential to examine the extent to which the risk in accounting data, as measured by the volatility of net income, other comprehensive income, and comprehensive income, influences the market risk reflected in stock return volatility. The presence of IFRS in Indonesia should assure users of financial statements, especially investors, that the financial statements presented by companies are more reliable and relevant. From financial report data prepared under IFRS, investors can also assess the risk of accounting information by examining the volatility of the earnings measures (net income, comprehensive income, other comprehensive income). On the other hand, the adoption of IFRS does not necessarily provide empirical evidence of an increase in the quality of financial statements, nor does it necessarily imply low risk, especially regarding deviations in earnings quality, because opportunities for opportunistic behavior in the presentation of financial statements remain open.
Based on the description above, this study aims to examine the effects of net income volatility, other comprehensive income volatility, and comprehensive income volatility on stock return volatility. This study includes the liquidity ratio as a control variable. The liquidity ratio relates to the company's operating performance, measuring the extent to which cash flow from operating activities can cover the company's current liabilities; higher operating cash flow better covers current liabilities as they fall due. Rajgopal & Venkatachalam (2011) stated that operating performance usually has a negative influence on stock return volatility, so higher operating performance is expected to reduce the stock return volatility that reflects company risk. This study employs the financial statement data of non-financial companies listed on the Indonesia Stock Exchange from 2012, the year IFRS was adopted in Indonesia, until 2017. Nasser & Hajilee (2016) stated that emerging markets are integrated with global markets. This study employs non-financial companies that have derivative transactions. Black (2016) emphasized that, for a company with derivative transactions for hedging purposes, other comprehensive income volatility may be associated with stock return volatility. Zhang (2009) provided further evidence that accounting standards governing derivatives transactions can reduce speculative practices; thus, managers' efforts to reduce the volatility of performance measures through hedging that complies with applicable standards result in lower volatility of both performance measures and stock returns. Barton et al. (2010) concluded that derivatives and accounting manipulation are substitutable instruments for managing earnings volatility.
Managers can use derivatives to reduce the volatility of cash flows arising from changes in interest rates, foreign exchange rates, and commodity prices, and can thereby manage company accounts by reducing earnings volatility. Meanwhile, Huang et al. (2009) stated that managers can use derivatives when they want to smooth earnings for the benefit of investors in the long run.
Also, whereas previous studies measured the volatility of the comprehensive income components from annual time-series data, this study uses quarterly comprehensive income components, from which annual volatility is obtained. The 2012-2017 time frame used in this study therefore makes it possible to use panel data. In the literature, studies that examine the components of comprehensive income against company risk have rarely used three-month financial report data. Black (2014) stated that the use of shorter-window financial data (quarterly comprehensive income statements) aims to discover whether accounting information from shorter financial statements can attract investors' attention in describing company performance (the comprehensive income components). In Indonesia, net income volatility has been used only by Baskoro & Wardhani (2016), with annual net income data for three years.
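The estimation this design implies, regressing stock return volatility on the volatilities of the comprehensive income components with a liquidity control, can be sketched in miniature. Everything below is hypothetical: the variable names, the toy numbers, and the plain normal-equations OLS solver stand in for whatever panel-data estimator the study actually used.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination; each row of X starts with a 1."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination, pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Toy panel: rows of [1, NI volatility, OCI volatility, liquidity]
X = [[1, 0.10, 0.02, 1.5], [1, 0.20, 0.05, 1.2], [1, 0.15, 0.03, 2.0],
     [1, 0.30, 0.08, 0.9], [1, 0.25, 0.06, 1.1]]
# Return volatility built from known coefficients, so OLS should recover them
y = [0.05 + 0.2 * ni + 1.5 * oci - 0.01 * liq for _, ni, oci, liq in X]
print([round(v, 2) for v in ols(X, y)])   # -> [0.05, 0.2, 1.5, -0.01]
```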
Research Questions
The investigation of the risk relevance of comprehensive income in this study examines the effects of net income volatility, other comprehensive income volatility, and comprehensive income volatility on company risk. The data used are from companies in Indonesia after IFRS was adopted into Indonesian financial accounting standards, starting in 2012. The adoption of IFRS led Indonesian companies to present statements of comprehensive income consisting of net income sourced from the company's normal activities, other comprehensive income derived from activities outside the company's operations, and comprehensive income, which combines the two. Before Indonesia adopted IFRS, the only information available was net income in the income statement. Investigating the risk relevance of comprehensive income attempts to provide evidence of whether the risk of accounting information contained in financial statements after the adoption of IFRS in Indonesia is associated with the risk in investors' responses to the condition of the company in the Indonesian capital market.

LITERATURE REVIEW

Barth et al. (2008) stated that IFRS implementation can limit management's opportunistic actions. Restrictions on managerial discretion in choosing a measurement method can reduce management's ability to provide accounting information that better describes the economic condition of the company. Also, the flexibility of principles-based standards can provide greater opportunities for companies to engage in earnings management. Beyond these conceptual debates, the results of previous studies show contradictory empirical evidence on the benefits of IFRS/IAS in improving the quality of accounting information, as indicated by earnings quality. According to Kanagaretnam et al. (2009), periodic measurement of the performance and financial position of business entities has always been a challenge for accounting decision-makers and a significant concern for users of accounting information. As noted above, one function of information on the components of comprehensive income is to reflect stock prices and stock returns; in accounting research, this condition is called value relevance. Research on value relevance tests which component of earnings better explains stock prices or stock returns, information that can serve as a signal from the company to investors making investment decisions in the market. Research related to value relevance in Indonesia has been conducted by Cahyonowati & Ratmono (2012), who found that net income before IFRS adoption had higher value relevance than profit and loss after IFRS adoption, while Sinarto & Christiawan (2014) found an increase in the value relevance of net income after the implementation of IFRS and that comprehensive income had higher value relevance than net income after IFRS implementation. Furthermore, Harimurti & Hidayat (2013) found that comprehensive income in aggregate has value relevance, and that the value relevance of other comprehensive income is lower than that of net income. That study also suggested that the other comprehensive income items with value relevance are changes in the revaluation surplus and the effective portion of gains and losses on hedging instruments in cash flow hedges. Furthermore, Ryan (2012) used the Financial Accounting Standards Board (FASB) conceptual framework and a stock return volatility benchmark to test the effect of the other comprehensive income components on the company's total risk. Easton & Zmijewski (1989) concluded that the relationship between earnings and returns varies both with earnings persistence and with the firm's systematic risk in the equity market. Barth et al. (1995) found that the income statement was more volatile than comprehensive income, while Bamber et al. (2010) and Khan & Bradbury (2014, 2015) found that comprehensive income is more volatile than net income.
Meanwhile, Hodder et al. (2006) showed that comprehensive income based on fair value (after adjustment) is more volatile than comprehensive income and net income. Furthermore, research examining the effect of the volatility of profit and loss, other components of comprehensive income, and comprehensive income on stock return volatility (risk relevance) is still limited.
Hypothesis Development
Beaver et al. (1970) stated that portfolio theory specifies risk measurement solely in terms of market interactions. However, what matters for the accounting profession is to understand the relationship between accounting measurements and measures of market risk. An accounting risk measure can be considered a substitute for the total variability of returns on a company's equity securities (Beaver et al., 1970). Accounting risk measures are closely related to the volatility of the net income obtained by the company over time. Thus, accounting measures reflect both the systematic and the unsystematic (idiosyncratic) components of risk. Beaver et al. (1970) found a high degree of contemporaneous association between accounting measures and measures of market risk. Ryan (2012) found that earnings variability has historically been the accounting variable most related to equity risk. Khan & Bradbury (2014, 2015) found that net income volatility has historically been the accounting variable most associated with firm risk.
Agency theory states that problems in the relationship between agent and principal can lead to information asymmetry. This problem allows the agent to freely use his incentives in determining accounting policies and other policies related to the company. The policies taken by the company are reflected in the amount of profit and loss obtained by the company in a certain period. If the income statement is not stable over time, this can indicate that the policies chosen by the company result in uncertainty about the company's future condition. This condition may be caused by opportunistic management policies that result in instability of profit and loss from period to period. Therefore, the policies taken by management, specifically those related to the company's operating activities, can cause uncertainty about the future of the company. Thus, the hypothesis in this study is as follows:
H1: Net income volatility is positively associated with stock return volatility.
Other comprehensive income is a component of comprehensive income generated outside the company's operating activities. Still, it influences changes in equity that occur due to transactions or economic events in a reporting period other than transactions with owners. Other comprehensive income is a new item that usually appears after IFRS adoption is carried out by companies within a country, at least for countries that do not use US GAAP. In PSAK 1 (IAI, 2018), other comprehensive income consists of several components, namely changes in the fixed asset revaluation surplus, gains and losses from remeasuring available-for-sale financial assets, the effective portion of gains and losses on hedging instruments in cash flow hedges, gains and losses arising from the translation of financial statements in foreign currencies, and actuarial gains and losses on defined benefit plans.
Other comprehensive income items are the result of changes in interest rates, exchange rates, and other essentially random processes. In general, changes in the fair value of certain assets and liabilities of the company itself give rise to other comprehensive income items (Chen & Lin, 2008). Khan & Bradbury (2015) found that when the financial statement results reflected in fluctuations of net income and comprehensive income are not confirmed, the market can be confused by other comprehensive income information, which can mislead users of financial statements.
Other comprehensive income arises from activities outside the company's normal operations. If the amount of other comprehensive income is not stable in each period, this shows that the company's policies for activities outside regular operations are changing. Given investors' limited understanding of other comprehensive income items, high volatility of other comprehensive income makes investors worried about the condition of the company. Therefore, the instability of other comprehensive income can trigger unsystematic risk arising from management policies within the company. Thus, the hypothesis in this study is as follows:
H2: Other comprehensive income volatility is positively associated with stock return volatility.
Prior studies have examined the reporting of separate components of other comprehensive income, at least the fixed asset revaluation component and exchange differences arising from foreign currency translation. Chambers et al. (2007) found evidence that after SFAS 130 on the disclosure of comprehensive income took effect, other comprehensive income as transitory income has value relevance on a dollar-for-dollar basis. However, Chambers et al. (2007) did not find that investors pay more attention to other comprehensive income presented in the statement of financial performance. Kanagaretnam et al. (2009) found that comprehensive income in aggregate had a stronger influence on stock prices and stock returns than net income. That study also showed that comprehensive income volatility is positively associated with stock return volatility. Khan & Bradbury (2014, 2015) stated that comprehensive income volatility leads to a perception of increased company risk. Comprehensive income volatility can reflect both the instability of the income statement component and the instability of the other comprehensive income components.
The accounting policies chosen by management, whether related to the company's normal activities or to activities outside them, are reflected in the comprehensive income generated by the company during a period. Therefore, changes in the accounting policies that a company applies across all of its activities can give rise to company risk stemming from policy mistakes made by management. Thus, the hypothesis in this study is as follows:
H3: Comprehensive income volatility is positively associated with stock return volatility.
METHODOLOGY
This study uses a quantitative method. The objects of the research are companies listed on the Indonesia Stock Exchange; the population is non-financial companies listed on the Indonesia Stock Exchange. Data were collected using the documentation method through the official website of the Indonesia Stock Exchange, www.idx.co.id, and finance.yahoo.com. For the components of comprehensive income, this study uses quarterly financial statement data, while other data are annual. The sample was selected using a non-random (purposive) sampling technique with several criteria. First, the sampled companies are non-financial companies that listed their shares on the Indonesia Stock Exchange before January 1, 2012. Second, financial companies were removed from the sample because the characteristics of their asset and liability structure generate high leverage. Third, non-financial companies with incomplete financial statements, including information on comprehensive income components and other data needed in this study for the period from January 1, 2012, to December 31, 2017, were removed. Fourth, the non-financial companies must disclose at least one type of derivative transaction, whether for hedging or speculative purposes, or both, during the period from January 1, 2012, to December 31, 2017. Based on the calculation of the data for each variable, this study excludes one company with anomalous outlier values. This leaves 41 companies, giving a sample of 246 firm-year observations. The data and information from the sampled companies' financial statements were obtained by content analysis.
The dependent variable in this study is stock return volatility, proxied by the standard deviation of monthly stock returns, following Khan & Bradbury (2014, 2015). The annual stock return volatility is calculated as the standard deviation of monthly stock returns multiplied by √12 to avoid bias. Standard deviations based on daily, weekly, monthly, or quarterly stock return data can be annualized by multiplying the standard deviation of those data by the square root of the number of daily, weekly, monthly, or quarterly observations per year, turning them into annual standard deviations or volatilities (financetrain.com). Therefore, to obtain annual stock return volatility, the monthly standard deviation needs to be multiplied by √12.
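As a concrete illustration of the annualization rule described above, the following short Python sketch computes the proxy from a list of monthly returns (the numbers are invented for illustration only, and the sample standard deviation is used):

```python
import math
import statistics

def annualized_return_volatility(monthly_returns):
    # Sample standard deviation of monthly returns, scaled by sqrt(12)
    # to obtain an annualized volatility, as described above.
    return statistics.stdev(monthly_returns) * math.sqrt(12)

# Hypothetical monthly returns for one firm-year.
vol = annualized_return_volatility([0.01, 0.03, -0.02, 0.04, 0.00, 0.02,
                                    -0.01, 0.03, 0.01, -0.03, 0.02, 0.01])
```

The same function annualizes weekly or daily return series by replacing √12 with the square root of the corresponding number of observations per year.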
This study uses comprehensive income volatility, net income volatility, and other comprehensive income volatility as independent variables, following the proxies used by Black (2014) and Khan & Bradbury (2014, 2015). Each volatility is calculated from the quarterly components during one year: the standard deviation of the quarterly figures is divided by the market value of equity at the beginning of the period and multiplied by √4. Also, this study employs a liquidity ratio as a control variable, measured by the ratio of operating cash flows to total current liabilities each year, as in Khan & Bradbury (2014, 2015).
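The quarterly scaling rule can be sketched in the same way; the quarterly income figures and the market value of equity below are hypothetical:

```python
import math
import statistics

def income_volatility(quarterly_income, mve_begin):
    # Standard deviation of the four quarterly figures, divided by the
    # market value of equity at the beginning of the period and
    # multiplied by sqrt(4), as described above.
    return statistics.stdev(quarterly_income) / mve_begin * math.sqrt(4)

# Hypothetical quarterly comprehensive income and beginning-of-period
# market value of equity (same monetary unit for both).
ci_vol = income_volatility([10.0, 20.0, 30.0, 40.0], mve_begin=1000.0)
```

The same function applies unchanged to net income and other comprehensive income, since all three volatility proxies share this construction.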
The equation of the regression analysis using stock return volatility is as follows:
RESULTS
The descriptive statistics used in this research are the mean, median, maximum value, minimum value, and standard deviation. The regression test results for equation 2 are reported in Table 3.
DISCUSSIONS
The effect of net income volatility on stock return volatility
The result of hypothesis testing suggests that net income volatility is not associated with stock return volatility. This result differs from Rajgopal & Venkatachalam (2011) and is also not in line with Khan & Bradbury (2014, 2015), who used the volatility of annual income over several years. This study instead follows the recommendation of Black (2014), who stated that shorter financial statement data may attract investors' attention in describing the company's performance. Apparently, investors as users of financial statements do not use quarterly earnings movement information in determining company risk. That net income volatility does not affect stock return volatility may reflect that quarterly net income volatility does not capture earnings management activities within one period. Under IFRS-based financial accounting standards, the volatility of current-year quarterly net income still requires further analysis and interpretation. Thus, net income volatility is not information that investors can use in determining systematic risk in the current year. Although Khan & Bradbury (2014, 2015) stated that net income volatility is the accounting risk measure that best captures company risk information, this is not confirmed in the Indonesian context. Investors may still use net income information as accounting information that has economic consequences, even though they need more time to interpret the income statement under the new financial accounting standards. They may also be more careful in responding to accounting information issued by the company to the public, and pay more attention to annual than to quarterly net income information. This is supported by Khan & Bradbury (2014, 2015), who found that investors use annual net income information as a basis for decision making related to company risk.
Information on quarterly net income in the current year cannot be used in decision making related to investment risk, especially firm risk. Investors may be more careful in responding to movements in quarterly net income in the current year, which are not necessarily used in investment decision making in the capital market. Users of financial statements, especially investors, also need time to understand the company's activities reflected in net income, including the policies the company chooses in recognizing revenue and expenses in each quarter of the year. The change to IFRS-based financial accounting standards, implemented by companies in Indonesia starting in 2012, is not easy for investors to interpret. Net income information is still used by investors in making decisions about company risk, given that investors are familiar with net income information under the previous accounting standards.
This study found that the stability of net income cannot be used to infer the systematic risk of a company. In the context of Indonesian companies and stock markets, efficient market theory does not seem to apply, because earnings information cannot be used as a basis for decision making, especially in assessing systematic risk. Under efficient market theory, financial statement data should be usable in investment decision making related to firm risk; the data tested in this study do not support the theory. Net income that is more volatile within a period, possibly because of overly aggressive internal company policies, does not capture firm risk.
The effect of other comprehensive income volatility on stock return volatility
The result of hypothesis testing suggests that other comprehensive income volatility is positively associated with stock return volatility. This indicates that information on the instability of a company's other comprehensive income can reflect firm risk. The descriptive statistics show that the other comprehensive income of the sampled companies varies considerably in amount. Other comprehensive income items, which emerged after the adoption of IFRS in Indonesian financial accounting standards in 2012, have come to the attention of investors, even though they do not arise from a company's regular activities and their amounts are relatively low compared to net income. This finding differs from the results of Khan & Bradbury (2015), who found that other comprehensive income volatility does not affect firm risk. Other comprehensive income items can be a problem for users of financial statements because they appeared as a consequence of the change in financial accounting standards in Indonesia. Unlike the net income component, which is partly accrual-based and reflects the company's regular activities, the emergence of other comprehensive income items is perceived as risky by users of financial statements, especially investors. In addition to the value of other comprehensive income being lower than net income, investors do not necessarily understand the other comprehensive income items that arose from the change in accounting standards starting in 2012. The result of this study is in line with the findings of Maines & McDaniel (2000), who stated that other comprehensive income volatility can capture the presence of firm risk. The findings indicate that investors may use information on other comprehensive income to assess firm risk.
Although other comprehensive income items are unstable and relatively low in amount, investors respond to them as information about firm risk.
Investors may consider other comprehensive income items to be unstable and of low relevance to the entity's core business results, and therefore associate them with firm risk. Also, quarterly other comprehensive income information within one year can confuse users of financial statements and cause significant misinterpretations of the entity's performance (Khan & Bradbury, 2014, 2015). Other comprehensive income is regulated in IFRS-based financial accounting standards, and it is not easy for investors to interpret the activities that give rise to these items. According to Khan & Bradbury (2015), other comprehensive income items have different properties, are less controllable, are challenging to predict, and cannot be linked to management performance. Investors nevertheless consider the usefulness of other comprehensive income information in making investment decisions related to firm risk. The result does not confirm the findings of Bima & Afri (2017), who stated that comprehensive income information is less able to provide better-quality financial information.
This study shows that other comprehensive income items arising from management policies related to company activities outside the regular operation of the company are related to firm risk. Since companies applied IFRS-based financial accounting standards in 2012, the resulting instability of other comprehensive income items is taken into account by investors as information related to company risk. Accounting information on other comprehensive income may not be used to assess the current and future condition of a company, but it reflects market-based firm risk. Furthermore, the test results confirm the prediction of Black (2016) that the use of shorter-horizon data can better capture firm risk in general and be more attractive to investors, especially for other comprehensive income.
The effects of comprehensive income volatility on stock return volatility
The result of hypothesis testing suggests that comprehensive income volatility is positively associated with stock return volatility. This result is similar to Hodder et al. (2006) and Khan & Bradbury (2014, 2015), who found that comprehensive income volatility is positively associated with firm risk. This study uses data from Indonesian companies, a developing market, and quarterly comprehensive income data within one year, while Khan & Bradbury (2014, 2015) employed data from companies in developed countries and used annual comprehensive income data over several years. With IFRS-based financial accounting standards used by companies in Indonesia, investors may be more interested in annual accounting information, such as annual comprehensive income, than in quarterly data; quarterly comprehensive income comprises four financial statements in one year, which requires more time to interpret. However, the result of this study is not in line with the findings of Dhaliwal et al. (1999), who showed that comprehensive income volatility does not affect company risk.
Information on comprehensive income differs from net income information because the market reacts only to comprehensive income information. Investors assume that if quarterly comprehensive income across the four periods of a year is volatile, the investment is riskier. Financial accounting standards in Indonesia, which have been IFRS-based since 2012, regulate the separation of the net income component, arising from normal company activities, from the other comprehensive income components, which come from activities outside the company's regular business; the combination of these two components forms comprehensive income. This separation should make it easier for investors to analyze the financial statement information submitted by the company, so that comprehensive income information is useful for investors: it is easier to detect the items included in net income and those included in other comprehensive income. Although most of comprehensive income comes from net income, investors may regard comprehensive income as new information after companies in Indonesia applied IFRS-based accounting standards. The other comprehensive income components within comprehensive income are subject to change from time to time; therefore, investors consider comprehensive income information to be influenced by other comprehensive income, so that this information is related to firm risk. The finding confirms the prediction of Black (2016) that shorter-horizon financial data, namely quarterly comprehensive income information, can attract investors' attention in Indonesia.
CONCLUSIONS
Net income volatility is not associated with stock return volatility, but other comprehensive income volatility and comprehensive income volatility are positively associated with stock return volatility. Although it is not easy for Indonesian investors to interpret net income information under IFRS-based financial accounting standards, they know net income information from the previous financial accounting standards; thus, they respond to comprehensive income information in the same way as they respond to net income. Information on the volatility of the net income obtained by the company each quarter in the current year does not reflect unsystematic risk in Indonesia. Comprehensive income items that appeared after companies in Indonesia adopted IFRS-based financial accounting standards have not become information that is easily understood by users of financial statements. In addition to their relatively low amounts, other comprehensive income items originating from activities outside the company's regular operations are not very attractive information for investors.
This study suggests that investors should develop a sound understanding of financial accounting standards, because changes in the standards can result in interpretations different from those under the prior standards; this will be beneficial for making investment decisions in the capital market. Based on the findings of this study, investors as users of financial statements need time to interpret information in financial statements, especially information on the components of comprehensive income. The accounting standards board, especially in Indonesia, is expected to keep improving the rules of financial accounting standards, especially the policies on the disclosure of activities included in comprehensive income, to increase the usefulness of financial statements for their users. Changes in financial accounting standards require investors to adjust their understanding to the new standards. Therefore, IAI needs to improve the quality of financial accounting standard setting as well as access to the standards, so that information in financial statements can be better understood and be more useful for investors.
LIMITATION AND STUDY FORWARD
This study has several limitations. First, this study examines the risk of accounting information due to IFRS adoption in Indonesia, which began in 2012. Therefore, the scope of the data and information is limited to the condition of companies in Indonesia, and the results cannot be generalized to data from other developing and developed countries.
An Improved Nonlinear Predictive Control Strategy Enhanced by Fractional Order Extremum Seeking Control of the Antilock Braking System of a Vehicle
Extremum seeking control can search the optimal slip rate of the antilock braking system of a vehicle through a high-frequency sinusoidal excitation signal. However, because of the bandwidth limitation of the braking actuator, the search speed of the optimal slip rate decreases and the stability of the extremum seeking control system becomes worse. To search and control the optimal slip rate, an improved nonlinear predictive control strategy enhanced by fractional order extremum seeking control is proposed for the vehicle antilock braking system. First, the nonlinear dynamic model of the braking system is established. Then, nonlinear prediction control is designed with the prediction of the slip rate response based on the nonlinear model to achieve slip rate control. Using fractional order calculus, a fractional extremum seeking controller is proposed to search for the optimal slip rate. Nonlinear predictive control integrated with fractional extremum seeking control is proposed to achieve the function of vehicle antilock braking. Finally, the effectiveness of the proposed method is verified by simulating the vehicle antilock braking system under different road conditions. The result shows that by considering the actuator available bandwidth, the proposed fractional order extremum seeking control can improve the search speed of the optimal slip rate compared with traditional integer order extremum seeking control. The proposed integrated controller achieves wheel slip rate optimal control regardless of the road conditions.
I. INTRODUCTION
With the development of the automobile industry, automobile safety requirements have risen, especially regarding braking performance at high speeds [1]-[3]. The antilock braking system (ABS) is an active safety device that controls and adjusts the braking torque to prevent the wheels from locking during braking, so that the vehicle makes maximum use of the ground adhesion to slow down and stop [4], [5]. Therefore, the ABS plays a vital role in vehicle driving safety. With the development of new energy vehicles and autonomous vehicles, brake-by-wire technology has advanced significantly [6], [7]. Using the brake-by-wire system to control the slip rate improves the ABS performance not only of traditional vehicles but also of new energy vehicles and autonomous vehicles.
Although the ABS has been widely used in automobiles, designing an ABS controller with better performance has remained a challenge until now [8]. The difficulty in designing the ABS controller arises from two main reasons. First, the ABS is a strongly nonlinear system. Second, changes in the road conditions cause uncertainty in control objectives. For the first problem, numerous nonlinear controllers were designed, such as sliding mode control (SMC) [9], PID control [10], model predictive control (MPC) [2], nonlinear optimal control [11], [12], fuzzy logic control [13], neural network control [14], [15], iterative learning control [16], and other intelligent control methods [17]. The SMC method was widely used in control engineering because of its potential for handling the nonlinearity and to achieve the inherent robustness. Reference [18] used the memory and genetic properties of a fractional order calculation to design a fractional order SMC for antilock control. Reference [10] suggested that the combination of a fractional order sliding mode controller (FOSMC) and fuzzy logic control (FLC) further improved the robust performance of the ABS. However, the application of sliding mode control is restricted by its chattering effect. References [11] and [12] proposed an ABS nonlinear predictive control method and applied it to predicting the nonlinear response of a continuous nonlinear vehicle dynamics model. Through integral feedback technology and radial basis neural network technology, a better ABS performance is obtained compared to that of SMC. Reference [19] considered asymmetric slip rate constraints to track the optimal slip rate. The above control strategies all adopted a fixed slip rate regardless of the changing road conditions. However, the actual tire longitudinal characteristics indicate that the corresponding optimal slip rate varies under different road conditions.
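The point that the optimal slip rate depends on the road surface can be illustrated with a static Burckhardt-type friction curve, mu(lambda) = c1 * (1 - exp(-c2 * lambda)) - c3 * lambda, using parameter values commonly quoted in the literature for dry asphalt and snow; the grid search below is purely illustrative and is not part of the controller in this article:

```python
import math

def burckhardt_mu(slip, c1, c2, c3):
    # Static longitudinal friction coefficient as a function of slip rate.
    return c1 * (1.0 - math.exp(-c2 * slip)) - c3 * slip

# Typical literature values for (c1, c2, c3) on two surfaces.
roads = {"dry asphalt": (1.28, 23.99, 0.52), "snow": (0.19, 94.13, 0.06)}

optimal_slip = {}
for name, (c1, c2, c3) in roads.items():
    grid = [i / 1000.0 for i in range(1, 1001)]
    optimal_slip[name] = max(grid, key=lambda s: burckhardt_mu(s, c1, c2, c3))
```

With these parameters, the friction peak lies near a slip rate of 0.17 on dry asphalt but near 0.06 on snow, which is why a fixed target slip rate is suboptimal across road conditions.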
To solve the second problem, much research has focused on automatically identifying road surface friction coefficients online and automatically adjusting the target slip rate according to the identified road conditions [20]- [22]. Reference [20] proposed a road adhesion coefficient recognition method based on IMM Kalman filtering. Reference [21] proposed a method for estimating the tire-road friction coefficient in real time that can independently estimate the friction coefficient of the front and rear wheels. However, automatic identification requires additional sensors to be installed on the vehicle, which increases the hardware costs. The algorithm is complex, and the poor real-time performance under emergency braking conditions also limits the application. Extremum seeking control (ESC) can adaptively converge and stabilize the optimal slip rate without the road surface friction coefficient recognition, which is an effective way to solve the second problem [23], [24]. Great progress has been made in parameter design, stability analysis, structural design of control systems, and performance improvement of the ESC algorithm. In [25], fractional-order calculus was used to improve the convergence speed of ESC, and the stability of fractional order extremum seeking control (FOESC) was proved. Scholars have studied the application of ESC in the ABS system. Reference [26] proposed an improved sliding mode extremum seeking control method that addressed the problem of time delay in the ABS and solved the problem of excessive oscillation of the system. Reference [27] treated the problem of extremum seeking control as an optimization problem with dynamic system constraints. An ESC control scheme based on numerical optimization was proposed and applied to the ABS. In [28], an ESC was successfully designed, and the steering law was adopted to modify the control law to compensate the lateral stability of the vehicle during cornering. 
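To make the extremum seeking idea concrete, the following is a minimal, idealized sketch rather than the controller proposed in this article: a sinusoidal perturbation is added to the slip rate estimate, the measured friction is high-pass filtered and demodulated to estimate the gradient, and an integrator climbs toward the peak of a static Burckhardt-type friction curve. All gains and parameters are invented for illustration:

```python
import math

def friction(slip):
    # Static Burckhardt-type curve with typical dry asphalt parameters;
    # its peak lies near a slip rate of 0.17.
    return 1.28 * (1.0 - math.exp(-23.99 * slip)) - 0.52 * slip

def esc_optimal_slip(t_end=20.0, dt=1e-4):
    a, omega, k = 0.01, 50.0, 5.0   # perturbation amplitude, frequency, gain
    omega_h = 5.0                   # cutoff of the washout (high-pass) filter
    lp = friction(0.05)             # low-pass state; (j - lp) is the high-pass
    slip_hat = 0.05                 # initial slip rate estimate
    t = 0.0
    while t < t_end:
        slip = slip_hat + a * math.sin(omega * t)  # perturbed slip command
        j = friction(slip)                         # measured objective
        lp += dt * omega_h * (j - lp)              # update low-pass state
        grad_est = (j - lp) * math.sin(omega * t)  # demodulated gradient estimate
        slip_hat += dt * k * grad_est              # integrator climbs the slope
        t += dt
    return slip_hat
```

With these illustrative settings the estimate drifts from 0.05 toward the friction peak near 0.17. The fractional order variant discussed in this article replaces the integer order integrator in the adaptation loop with a fractional one to speed up this convergence under actuator bandwidth limits.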
However, the ESC algorithm scarcely considers the limitation of the brake actuator bandwidth. Gunter Stein proposed the concept of available bandwidth and the limitation principle through frequency domain analysis [29].
He emphasized the importance of actuator bandwidth to control system design.
Therefore, the available bandwidth is very important and must be considered in the design process of the ABS controller. Limited by the available bandwidth of the actuator, the convergence speed of ESC will be reduced. The fractional algorithm can improve the stability and response speed of ESC. To our knowledge, the application of FOESC to the ABS is completely unexplored. This article studies the ABS control based on an integrated control combining nonlinear predictive control (NPC) and FOESC. FOESC is proposed to search the optimal slip rate. NPC is developed to predict the slip rate response from the nonlinear vehicle model and control the searched slip rate. Compared with the traditional integer order extremum seeking control (IOESC), the FOESC algorithm improves the search speed of the optimal slip rate with the available bandwidth of the braking actuator.
The remainder of this article is arranged as follows. Section II introduces the definition of fractional calculus. In Section III, the dynamic model is established. Section IV presents the results of the ABS controller design. Section V illustrates the superiority and effectiveness of the proposed control method through simulation. Section VI draws the conclusion of this article.
II. DEFINITION OF FRACTIONAL CALCULUS
Fractional calculus has a 300-year history, but it initially focused primarily on theoretical research. In recent years, fractional calculus theory has begun to be applied in many fields, for example, the theory of fractional order control in the field of automation.
Many definitions of fractional calculus have been proposed during its development, such as the direct extension of the integer-order Cauchy integral formula, the Grunwald-Letnikov definition, the Riemann-Liouville definitions of fractional integration and differentiation, and the Caputo definition.
A. DEFINITION OF THE GRUNWALD-LETNIKOV FRACTIONAL CALCULUS
The Grunwald-Letnikov (GL) definition is directly extended from the simple integer order derivative:

{}_{t_0}D_t^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-t_0)/h]} (-1)^j \binom{\alpha}{j} f(t-jh)

The coefficients w_j = (-1)^j \binom{\alpha}{j} are obtained directly from the following recursive equations:

w_0 = 1, \quad w_j = \left(1 - \frac{\alpha+1}{j}\right) w_{j-1}, \quad j = 1, 2, \ldots

According to this definition, the algorithm for the fractional differential calculation can be derived as

{}_{t_0}D_t^{\alpha} f(t) \approx \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-t_0)/h]} w_j f(t-jh)

Assuming that the step size h is sufficiently small, this equation can be used to directly compute an approximate value of the function's fractional derivative. The accuracy of this formula is of order o(h).
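The recursion above can be turned into a few lines of code. The following Python sketch evaluates the GL approximation for a given function; the step size and the test function are arbitrary illustrative choices, not values from this article.

```python
import math

def gl_fracdiff(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at time t,
    using the recursion w_0 = 1, w_j = (1 - (alpha + 1)/j) * w_{j-1}.
    Truncation accuracy is of order o(h)."""
    n = round(t / h)
    w = 1.0
    acc = w * f(t)
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j
        acc += w * f(t - j * h)
    return acc / h ** alpha

# order 1 reduces to the ordinary backward difference of f(t) = t
d1 = gl_fracdiff(lambda x: x, 1.0, 1.0)
# half-order derivative of f(t) = t at t = 1 equals 1/Gamma(1.5)
d_half = gl_fracdiff(lambda x: x, 1.0, 0.5)
```

For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), so the sum is exactly the first-order backward difference, which is a convenient sanity check of the recursion.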
B. DEFINITION OF THE RIEMANN-LIOUVILLE FRACTIONAL CALCULUS
The Riemann-Liouville (RL) fractional order integral is defined as

{}_{t_0}D_t^{-\beta} f(t) = \frac{1}{\Gamma(\beta)} \int_{t_0}^{t} (t-\tau)^{\beta-1} f(\tau)\, d\tau, \quad \beta > 0

Fractional differentiation is also defined through this integral. Assuming the fractional order n - 1 < \beta \le n, the fractional order differential is defined as

{}_{t_0}D_t^{\beta} f(t) = \frac{d^n}{dt^n} \left[ \frac{1}{\Gamma(n-\beta)} \int_{t_0}^{t} (t-\tau)^{n-\beta-1} f(\tau)\, d\tau \right]
C. DEFINITION OF THE CAPUTO FRACTIONAL CALCULUS
The Caputo fractional differential is defined as

{}_{t_0}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\gamma)} \int_{t_0}^{t} \frac{f^{(m+1)}(\tau)}{(t-\tau)^{\gamma}}\, d\tau

where \alpha = m + \gamma; m is an integer; and 0 < \gamma \le 1. Similarly, the Caputo fractional integral (\gamma > 0) is defined as

{}_{t_0}^{C}D_t^{-\gamma} f(t) = \frac{1}{\Gamma(\gamma)} \int_{t_0}^{t} (t-\tau)^{\gamma-1} f(\tau)\, d\tau
III. DYNAMICS MODEL
A. VEHICLE MODEL

A quarter-vehicle model is adopted to describe the longitudinal braking dynamics:

m_t \dot{v} = -F_x \quad (8)

I_t \dot{w} = R F_x - T_b \quad (9)

where R is the wheel radius; I_t is the wheel inertia; v is the vehicle speed; w is the wheel angular velocity; T_b is the braking torque; F_x is the tire longitudinal force; and m_t is the quarter vehicle total mass. The tire longitudinal force F_x depends on the vertical load of the tire. The vertical load consists of two parts: the static load due to the vehicle's mass distribution and the tire dynamic load generated during braking. Accordingly, the vertical load of the rear tire of the quarter-vehicle model is the static load reduced by the dynamic load F_L (equation (10)), where l is the wheelbase and h_cg is the height of the mass center. The slip rate of the tire during braking is expressed as

\lambda = \frac{v - R w}{v} \quad (11)

Differentiating the slip rate with respect to time, we obtain

\dot{\lambda} = \frac{1}{v} \left[ (1-\lambda)\dot{v} - R\dot{w} \right] \quad (12)

Substituting (8) and (9) into (12) gives

\dot{\lambda} = \frac{1}{v} \left[ -\left( \frac{1-\lambda}{m_t} + \frac{R^2}{I_t} \right) F_x + \frac{R}{I_t} T_b \right] \quad (13)

Equations (8) and (13) constitute the state-space equations for the vehicle braking system. The vehicle speed v and the slip rate \lambda are the state vector, and the braking torque T_b is the control vector.
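To make the role of these equations concrete, the following Python sketch integrates the quarter-car model of this section with a fixed, uncontrolled braking torque; all parameter values here are illustrative assumptions, not the article's Table 1. With a constant torque the wheel eventually locks (slip rate goes to 1), which is precisely the behavior the ABS controller is designed to prevent.

```python
R, I_t, m_t = 0.3, 1.0, 350.0  # wheel radius [m], wheel inertia [kg m^2], quarter mass [kg]

def step(v, w, T_b, Fx, dt=1e-3):
    """One forward-Euler step of m_t*dv/dt = -Fx and I_t*dw/dt = R*Fx - T_b."""
    v_new = v - Fx / m_t * dt
    w_new = w + (R * Fx - T_b) / I_t * dt
    return v_new, max(w_new, 0.0)          # the wheel cannot spin backwards

def slip(v, w):
    """Slip rate lambda = (v - R*w)/v during braking."""
    return (v - R * w) / v

v, w = 30.0, 30.0 / R                       # free rolling at 30 m/s
for _ in range(2000):                       # 2 s of braking
    lam = slip(v, w)
    Fx = min(8.0 * lam, 0.8) * m_t * 9.81   # crude saturating friction force (assumed)
    v, w = step(v, w, T_b=1000.0, Fx=Fx)
```

With this fixed 1000 N·m torque the wheel locks within a fraction of a second, after which the tire slides with the saturated friction force only.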
B. TIRE MODEL
The tire longitudinal force F x is a function of the tire longitudinal slip rate. When the longitudinal slip rate is small, the longitudinal force is linearly related to the slip rate. As the slip rate increases, the tire longitudinal force F x reaches a maximum value. When the slip rate is greater than the optimal slip rate value, the tire longitudinal force decreases with increasing slip rate.
We use the Dugoff tire model. For pure longitudinal slip (straight-line braking), the tire longitudinal force is expressed as

F_x = C_i \frac{\lambda}{1-\lambda} f(L)

and

f(L) = \begin{cases} L(2-L), & L < 1 \\ 1, & L \ge 1 \end{cases}, \quad L = \frac{\mu F_z (1 - \varepsilon_r v \lambda)(1-\lambda)}{2 C_i \lambda}

where C_α is the cornering stiffness of the tire; µ is the road friction coefficient; ε_r is the factor of road adhesion reduction; and C_i is the tire longitudinal stiffness.
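The Dugoff force can be coded directly. This is a sketch of the standard pure-braking (zero slip angle) Dugoff formulation, which may differ in detail from this article's exact equations; the parameter values used below are assumptions for illustration only.

```python
def dugoff_fx(lam, Fz, mu, Ci, v=0.0, eps_r=0.0):
    """Dugoff longitudinal tyre force for pure braking (zero slip angle).
    lam: slip rate, Fz: vertical load [N], mu: road friction coefficient,
    Ci: tyre longitudinal stiffness [N], eps_r: adhesion-reduction factor."""
    if lam <= 0.0:
        return 0.0
    s = Ci * lam / (1.0 - lam)                                   # unsaturated force
    L = mu * Fz * (1.0 - eps_r * v * lam) * (1.0 - lam) / (2.0 * Ci * lam)
    f = L * (2.0 - L) if L < 1.0 else 1.0                        # saturation factor
    return s * f
```

For small slip the force grows linearly with slope roughly C_i; for large slip it saturates toward µF_z, reproducing the rise-then-saturate shape described above (the post-peak decay appears when ε_r·v > 0).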
C. REFERENCE SLIP MODEL
To avoid a large slip rate tracking error, a reference model of the tire slip transient response is established. In the Laplace domain, the reference model of the tire slip is expressed as

\lambda_{ref}(s) = \frac{1}{a s + 1} \lambda_{opt}(s)

where λ_opt is the target slip and a is the time constant [29]. Taking the inverse Laplace transform on both sides, the first order differential equation with zero initial condition is obtained:

a \dot{\lambda}_{ref}(t) + \lambda_{ref}(t) = \lambda_{opt}(t) \quad (16)

Equation (16) describes a wheel slip reference model in the time domain. Based on this model, a nonlinear controller is designed to control the optimal slip rate.
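Discretised, this reference model is a one-line first-order filter. The time constant and step sizes below are assumed values, not parameters taken from this article.

```python
def reference_slip(lam_opt, a=0.05, dt=1e-3, t_end=0.5):
    """First-order reference model a*dlam/dt + lam = lam_opt,
    i.e. lam_ref(s)/lam_opt(s) = 1/(a*s + 1), integrated by forward Euler."""
    lam, traj = 0.0, []
    for _ in range(int(t_end / dt)):
        lam += (lam_opt - lam) * dt / a
        traj.append(lam)
    return traj

traj = reference_slip(0.11)   # smooth monotone rise towards the target slip 0.11
```

The trajectory rises smoothly toward the target without overshoot, which is exactly the transient behavior the reference model is meant to impose on the slip tracking loop.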
IV. CONTROL METHOD OF ABS

A. DESIGN OF NPC
The NPC is used to realize the ABS function. Equations (8) and (13) are used to construct the dynamic state equation of the ABS, with the slip rate as the system output: the vehicle speed and the slip rate are taken as the states x_1 = v and x_2 = λ. To improve the robustness of the controller, the slip rate and the slip rate integral are selected as the control targets of the ABS. The new state variable x_3 is defined as the integral of the slip rate, \dot{x}_3 = x_2. The purpose of the control system is to drive the tire slip x_2 and its integral x_3 to track the optimal slip rate and its integral.
The idea behind the NPC is that a Taylor series expansion can be used to predict the state vector x(t + h) at the next time instant. The concept of the prediction step h is similar to the prediction horizon in model predictive control. The control T_b is calculated according to the principle of minimizing the tracking error.
The state variables x_2 and x_3 are selected as the outputs of the system, and a performance function (20) is constructed to optimize the tracking error at the next moment; it can be simplified to a weighted quadratic form of the predicted tracking errors, where w_2 and w_3 are the weight coefficients of the tire slip rate and its integral, respectively.
The k-th order Taylor series of the state vector at time t is approximated accordingly. The control order is one of the controller design parameters and must be a compromise between performance and input energy consumption. A sufficient condition for the Taylor series prediction is that the control order is not lower than the order of the predicted vector. Therefore, x_2 is expanded as a first-order Taylor series, and x_3 is expanded as a second-order Taylor series. Similarly, a Taylor series expansion is performed on the state vector of the reference slip rate. By introducing (23)-(26) into (21), the performance function is obtained with the control input as a variable. According to optimal control theory, the necessary condition for the optimality of the performance function is that its derivative with respect to the control input vanishes, which leads to the control law, where e_2 and e_3 are the tracking errors of the outputs. The traditional IOESC based on a disturbance signal is combined with the NPC to realize the ABS function; the schematic is shown in Fig. 2. The purpose of the NPC is to track the tire slip rate required by the IOESC. The IOESC uses the braking deceleration as the objective function and a sine function as the disturbance to obtain the direction of the convergence gradient.
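The essence of this Taylor-series prediction can be shown on a scalar simplification of the problem: predict the slip one step ahead from dλ/dt = f + g·T_b and solve for the torque that makes the prediction hit the reference. This is a single-output sketch of the idea, not this article's full two-output control law; the numbers below are arbitrary.

```python
def npc_torque(lam, lam_ref, f, g, h=0.01):
    """One-step Taylor-series predictive control for dlam/dt = f + g*T_b.
    Predict lam(t+h) = lam + (f + g*T_b)*h and choose T_b so that the
    prediction equals lam_ref, i.e. the squared tracking error is zero."""
    return (lam_ref - lam - f * h) / (g * h)

T_b = npc_torque(lam=0.05, lam_ref=0.11, f=-2.0, g=0.001)
predicted = 0.05 + (-2.0 + 0.001 * T_b) * 0.01    # equals lam_ref = 0.11
```

With two weighted outputs, as in the article, the same first-order optimality condition yields a weighted combination of the tracking errors instead of the exact one-step inversion above.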
According to the basic principles of the above IOESC, the mathematical model of the IOESC can be expressed in terms of the following elements: the sinusoidal excitation signal d sin(ωt); the IOESC gain factor k_1; the transfer function G_HPF(s) of a first-order high-pass filter; the transfer function G_LPF(s) of a first-order low-pass filter; the braking deceleration z; the actual tire slip y during braking (see Fig. 2); the slip \hat{λ} obtained from the ESC search; the target slip λ that is actually applied to the nonlinear control system; and the gradient signal γ obtained by multiplying the high-pass-filtered braking deceleration by the sinusoidal excitation signal. The averaged linearized model relating the searched slip \hat{λ} to the optimal slip rate λ* is given by (31), where \tilde{λ} = \hat{λ} − λ*. Equation (31) is used for the stability analysis of the IOESC average model. If the phase delay of the disturbance signal is set to 0, the model can be simplified to (32), which shows that the system is in a stable state when k_1 > 0.
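The perturbation-based IOESC scheme described above can be illustrated with a minimal discrete-time loop on a static quadratic map that stands in for the braking deceleration; a first-order washout plays the role of G_HPF, and all gains here are assumed tuning values rather than the article's.

```python
import math

def esc_search(lam_opt=0.11, k=50.0, d=0.02, omega=50.0, wh=5.0,
               dt=1e-3, t_end=20.0):
    """Minimal discrete-time perturbation-based extremum seeking.
    The static map J(lam) = -(lam - lam_opt)**2 stands in for the braking
    deceleration; (J - J_lp) is a washout acting as the high-pass filter."""
    lam_hat, J_lp = 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        lam = lam_hat + d * math.sin(omega * t)   # perturbed slip command
        J = -(lam - lam_opt) ** 2                 # measured objective
        J_lp += wh * (J - J_lp) * dt              # low-pass state of the washout
        grad = (J - J_lp) * math.sin(omega * t)   # demodulated gradient estimate
        lam_hat += k * grad * dt                  # integrator (the 1/s block)
    return lam_hat
```

Replacing the integrator 1/s with a fractional 1/s^q (approximated, for instance, by the Oustaloup filter discussed in this section) is the step that turns this IOESC loop into FOESC.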
However, the closed-loop transfer function (32) has a pair of poles near the imaginary axis, which causes a weak damping effect and a slow convergence rate of the ESC system. If the integer order terms in the closed-loop transfer function of the average ESC model are replaced with fractional order ones, no pole lies near the stability boundary of the fractional order closed-loop transfer function. Therefore, the system has a very fast convergence speed and a more robust performance. We propose to use FOESC to search for the optimal slip rate of the ABS. The integer order integrator 1/s is replaced by the fractional order integrator 1/s^q, and the high-pass filter is replaced by the fractional-order filter s^q/(ω_h + s^q), where q is the fractional order, as shown in Fig. 3.
The loop transfer function L(s) in FOESC is expressed as in (33). By defining ρ = s^q, the averaged linearized model relating \hat{λ} and λ* for FOESC can be described accordingly. The GL formula calculates the fractional differential of a given signal more accurately, but this type of algorithm has great limitations in the study of control systems: it needs the sampled values of the signal in advance, whereas the value of the function is unknown during the simulation of a control system. Therefore, the Oustaloup filter algorithm is used to approximate the fractional differential operator [30]. Assuming that the selected fitting frequency band is (w_b, w_t), the transfer function model of the continuous filter is constructed as

G_f(s) = K \prod_{k=-N}^{N} \frac{s + \omega'_k}{s + \omega_k}

The zeros, poles, and gain of the filter are obtained directly from the following formulas:

\omega'_k = w_b \left( \frac{w_t}{w_b} \right)^{\frac{k + N + \frac{1}{2}(1-\gamma)}{2N+1}}, \quad \omega_k = w_b \left( \frac{w_t}{w_b} \right)^{\frac{k + N + \frac{1}{2}(1+\gamma)}{2N+1}}, \quad K = w_t^{\gamma}

where γ is the fractional order; 2N + 1 is the order of the filter; and w_b and w_t are the lower and upper limits of the fitting frequency band, respectively. The fractional differential operator is fitted well inside this band, while outside the band the fit deviates from the true operator. The presented algorithm avoids the constraint w_b w_t = 1 and allows two arbitrary band frequencies.
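These zero/pole/gain formulas can be checked numerically: the magnitude of the resulting filter should follow |ω|^γ inside the fitting band. The band and order below are example values chosen for illustration, not this article's settings.

```python
def oustaloup(gamma, wb, wt, N=4):
    """Zeros, poles and gain of the Oustaloup filter approximating s**gamma
    on the frequency band (wb, wt); the filter order is 2N + 1."""
    M = 2 * N + 1
    zeros = [wb * (wt / wb) ** ((k + N + 0.5 * (1.0 - gamma)) / M)
             for k in range(-N, N + 1)]
    poles = [wb * (wt / wb) ** ((k + N + 0.5 * (1.0 + gamma)) / M)
             for k in range(-N, N + 1)]
    return zeros, poles, wt ** gamma

def mag(zeros, poles, K, w):
    """|G(jw)| for the zero/pole/gain triple."""
    s = 1j * w
    g = K
    for z, p in zip(zeros, poles):
        g *= (s + z) / (s + p)
    return abs(g)

# approximate s**0.5 over four decades centred on 1 rad/s
z, p, K = oustaloup(0.5, wb=0.01, wt=100.0, N=4)
```

At the band centre ω = 1 rad/s the magnitude of the 9th-order fit is within a fraction of a percent of the ideal value 1^0.5 = 1, confirming the recursive zero/pole distribution.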
V. SIMULATION RESULTS
To verify the effectiveness of the integrated control of NPC enhanced by FOESC, ABS simulations and comparative analyses of NPC enhanced by FOESC and by IOESC, hereafter referred to as FOESC and IOESC, were performed on a single road and on a changing road. The ABS performance of the two control methods is studied both with and without considering the bandwidth of the actuator. The braking torque without considering the actuator bandwidth takes the commanded torque directly, whereas the braking torque considering the actuator bandwidth is filtered by the actuator dynamics [31], [32]. The simulation parameters are shown in Table 1 [11], [29]. Different road adhesion coefficients result in different tire mechanical properties. First, the relationship between the slip rate and braking torque is simulated for a high adhesion road, a medium adhesion road, and a low adhesion road, with adhesion coefficients µ of 0.8, 0.5, and 0.3, respectively. Fig. 4 shows that the optimal slip rates on the high, medium, and low adhesion pavements are 0.11, 0.088, and 0.068, respectively. In the following simulations, the tire characteristics shown in Fig. 4 are used for the analysis. Fig. 5 (a) and Fig. 5 (b) compare the braking torque of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the high adhesion road. When using the ideal actuator, FOESC has a 1709 N·m peak torque in the early stage of braking, which allows the tire to reach the target slip rate faster. The IOESC braking torque peak is 555 N·m at the initial stage of braking, and the slip rate reaches the optimal slip rate near 0.11 without overshoot. When using the available bandwidth actuator, the IOESC braking torque has a 0.58 s phase lag when searching for the optimal slip rate, whereas the FOESC response time is 0.11 s, although 893 N·m torque overshoots still occur. Fig. 6 (a) and Fig.
6 (b) compare the slip rate dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the high adhesion road. Under ideal actuator conditions, the slip rate of IOESC transitions smoothly to the optimal slip rate of 0.11 in 0.27 s without overshoot, while the slip rate of FOESC transitions to the optimal slip rate of 0.11 in 0.03 s; the response speed of FOESC is better than that of IOESC. Corresponding to the braking torque, the FOESC slip rate also produces an overshoot with a peak value of 0.42. When the available bandwidth actuator is used, the convergence speed of IOESC to the optimal slip rate decreases. The slip rate overshoot peak of FOESC decreases to 0.28, and FOESC converges to the optimal slip rate in 0.4 s, which is faster than the convergence of IOESC. Fig. 7 (a) and Fig. 7 (b) compare the braking deceleration dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the high adhesion road. With the ideal actuator, the ESC braking deceleration quickly and smoothly transitions to the maximum braking deceleration in 0.10 s, corresponding to the braking torque and slip rate. When the available bandwidth actuator is used, IOESC reaches the maximum braking deceleration only after 1.5 s, whereas FOESC converges to the maximum braking deceleration in 0.5 s. Moreover, the maximum braking deceleration obtained by FOESC, 6.04 m/s^2, is higher than that of IOESC, 5.71 m/s^2.
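The "available bandwidth actuator" used in these comparisons can be sketched as a first-order lag, a common model for hydraulic brake actuators; the bandwidth and torque values below are assumptions for illustration, not this article's parameters.

```python
def actuator_lag(T_cmd, T_act, bw, dt):
    """First-order actuator model dT/dt = bw*(T_cmd - T_act): the applied
    torque chases the commanded torque with bandwidth bw [rad/s]."""
    return T_act + bw * (T_cmd - T_act) * dt

# step command of 1000 N*m through an assumed 70 rad/s actuator
T, dt = 0.0, 1e-4
for _ in range(int(0.05 / dt)):            # 50 ms of response
    T = actuator_lag(1000.0, T, 70.0, dt)
# the applied torque lags the command; this lag is what slows the slip search
```

After 50 ms the applied torque has only reached about 97% of the command, which illustrates why a bandwidth-limited actuator delays the convergence of the slip-rate search.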
Comparing Fig. 5, Fig. 6 and Fig. 7 shows that the overshoot of the FOESC braking torque makes the tire slip rate reach the optimal slip rate within a limited time. Because the vehicle needs to reach the optimal slip rate within a limited time, the actuator must provide the braking torque overshoot to reach the optimal slip rate. If the tire slip rate remains optimal, the vehicle attains the maximum braking deceleration. In short, the overshoot of the braking torque shapes the transient behavior of the vehicle, and the braking deceleration of the vehicle reaches its maximum rapidly. The braking deceleration of FOESC is greater than that of IOESC at the initial moment. The fractional order analysis verifies that the fractional order improves the system lag caused by the integer order. Another reason is that the optimal slip rate estimated by FOESC overshoots at the initial moment, which also leads to the overshoot of the braking torque. Although the transient characteristics show overshoot, the braking deceleration obtained by FOESC is still better than that of IOESC. Of course, the increase in longitudinal acceleration inevitably leads to a decrease in ride comfort; however, for emergency braking, braking safety should be prioritized over ride comfort. Notably, the slip rate obtained by FOESC before 0.4 s is greater than the optimal value. This leads to a slight decrease in the braking deceleration of the vehicle over the same interval, which is consistent with the vehicle dynamics. Fig. 8 (a) and Fig. 8 (b) compare the braking distance dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the high adhesion road. Under ideal actuator conditions, comparing the braking distance and vehicle speed of IOESC and FOESC shows that the braking distances of the two methods are both approximately 73.31 m.
This result is determined by the nonlinear characteristics of the tire: the friction torque provided by the ground fluctuates little near slip rates of 0.1 and 0.4. Thus, with an ideal brake actuator on the high adhesion road, both control methods achieve a good anti-lock braking performance. However, when the available bandwidth actuator is used, during the braking process from 30 m/s to 5 m/s, the braking distance of FOESC is 74.07 m and that of IOESC is 84.43 m, indicating that FOESC has better robustness than IOESC. Figs. 9-12 compare the braking performances of the two ABS control methods with the ideal actuator and the available bandwidth actuator on the low adhesion road. Consistent with braking on the high adhesion road, both controllers achieve good braking performance when the actuator bandwidth is not considered. Neither the brake torque nor the slip rate of IOESC undergoes overshoot. However, the braking torque of FOESC reaches 200 N·m in 0.06 s, whereas that of IOESC reaches 200 N·m in 0.42 s; FOESC has a faster braking torque response. When the actuator is limited by the bandwidth, compared with FOESC, the slip rate obtained by IOESC increases slowly and converges to the optimal slip rate of 0.068 in 4.2 s, while the FOESC search converges to the optimal slip rate in 1.58 s. The braking deceleration controlled by FOESC reaches 2.57 m/s^2 in 0.06 s, and that controlled by IOESC reaches 2.49 m/s^2 in 0.42 s. The convergence speed of IOESC's search for the optimal slip rate and its response speed in reaching the maximum deceleration are low. During the braking process in which the speed is reduced from 30 m/s to 5 m/s, the braking distance of the two controllers is approximately 158 m with the ideal actuator, whereas with the available bandwidth actuator the braking distance of FOESC is 158.8 m and that of IOESC is 162.6 m.
To further analyze the control effect of the FOESC controller, simulations were performed on a road surface whose adhesion coefficient increases stepwise from 0.3 to 0.8, as shown in Fig. 13. Fig. 14 (a) and Fig. 14 (b) compare the braking torque dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the changing road. Fig. 14 shows that when the actuator is not limited by the bandwidth, the braking torques of the two controllers basically match; only in the initial braking phase and at the road step change does the FOESC braking torque show a peak of 720 N·m. When the actuator is limited by the bandwidth, the braking torque of IOESC has a 0.42 s lag at the initial moment of braking, whereas the braking torque of FOESC has only a 0.11 s lag; the braking torque response speed of FOESC is thus not greatly affected. Fig. 15 (a) and Fig. 15 (b) compare the slip rate dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the changing road. When the actuator is not limited by the bandwidth, IOESC quickly transitions to the optimal slip rate near 0.068 without overshoot. At 2 s, IOESC again searches quickly and without overshoot to obtain an optimal slip rate of 0.111. The slip rate controlled by FOESC produces a certain overshoot at the beginning of braking and at 2 s. When the available bandwidth actuator is used, the slip rates obtained by IOESC on the changing road are 0.03 and 0.06, respectively, which are not the optimal values. FOESC finds the slip rates of 0.068 and 0.11 at 0.82 s and 3.2 s, respectively, on the changing road; the convergence speed of FOESC's search for the optimal slip rate is barely affected. Fig. 16 (a) and Fig.
16 (b) compare the braking deceleration dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the changing road. The decelerations obtained by the two controllers transition from approximately 2.6 m/s^2 to 6.0 m/s^2 when the actuator is not limited by the bandwidth. However, when the available bandwidth actuator is used, FOESC and IOESC attain the maximum braking deceleration of 2.5 m/s^2 in 0.1 s and 2.4 m/s^2 in 0.5 s, respectively. When the road changes at 2 s, FOESC and IOESC attain the maximum braking deceleration of 6.0 m/s^2 at 2.08 s and 5.8 m/s^2 at 2.20 s, respectively. Compared with FOESC, IOESC generates the maximum braking deceleration more slowly and to a smaller extent when the available bandwidth actuator is used. Fig. 17 (a) and Fig. 17 (b) compare the braking distance dynamic characteristics of FOESC and IOESC with the ideal actuator and the available bandwidth actuator, respectively, on the changing road. During the braking process in which the speed is reduced from 30 m/s to 5 m/s, the braking distance of the two controllers is approximately 104.5 m when the actuator is not limited by the bandwidth; however, the braking distance of FOESC is 105.1 m and that of IOESC is 108.5 m when the available bandwidth actuator is used.
To demonstrate the performance advantage of FOESC, its braking distance is comprehensively compared to that of IOESC during braking from 30 m/s to 5 m/s, as shown in Table 2. Superscripts 1 and 2 indicate the ideal actuator without and with the bandwidth limitation, respectively. Table 2 shows that, without considering the actuator bandwidth, the braking distance of FOESC in the ABS operation is almost identical to that of IOESC; compared to IOESC, the largest performance reduction of FOESC occurs on the high adhesion road surface and is only 0.43%. However, when the actuator bandwidth is considered, the braking distance of FOESC is greatly improved compared to that of IOESC. Under ABS emergency braking on the high-adhesion and low-adhesion roads, the braking distance of FOESC is 11% and 2.3% shorter than that of IOESC, respectively. Furthermore, the braking distance of FOESC is 3.1% shorter than that of IOESC on the road step condition. To further investigate the effectiveness of FOESC, a comparison between robust predictive control (RPC) and FOESC is made on the changing road. The objective of the RPC is to track a constant wheel slip of 0.15. Fig. 18 shows the braking torque dynamic characteristics of FOESC and RPC. At the initial moment, the peak braking torque of FOESC is 508 N·m, whereas that of RPC is 331 N·m. When the road changes, the peak braking torque of FOESC is 637 N·m, whereas that of RPC is 366 N·m. More importantly, the peak braking torque times of FOESC are 0.10 s and 2.08 s, while those of RPC are 0.12 s and 2.2 s, respectively. The braking torque of RPC is relatively gentle, while that of FOESC always carries a sinusoidal periodic disturbance, because FOESC by nature searches for the optimal value through a sinusoidal perturbation.
Fig. 19 shows that, at the initial moment of braking, the wheel slip rate controlled by the RPC method and the FOESC method is 0.15 and 0.068, respectively. When the friction coefficient of the road changes, the wheel slip rate controlled by the RPC method remains at 0.15, while that controlled by the FOESC method becomes 0.11. This shows that FOESC obtains the optimal wheel slip rate for each road surface, while the target wheel slip rate of RPC is not the optimal value. Fig. 20 shows that, at the initial moment of braking, the braking deceleration controlled by the RPC method and the FOESC method is 2.45 m/s^2 and 2.57 m/s^2, respectively. At 2 s, the road friction coefficient changes, and the braking deceleration controlled by the RPC method and the FOESC method becomes 5.86 m/s^2 and 6.04 m/s^2, respectively. This shows that FOESC obtains a greater braking deceleration than RPC on different roads. Fig. 21 shows that the braking distance of the car controlled by the FOESC method and the RPC method is 105.1 m and 106.6 m, respectively; FOESC decreases the braking distance by 1.41% compared with the RPC method. These results show that FOESC further improves the braking performance compared with the RPC method.
In terms of the computational effort and the possibility of implementation in a production ABS ECU, FOESC is an enhancement of ESC by fractional order operators. The fractional operator only requires an integer order continuous filter to approximate the fractional order action. These techniques are mature, and the computational effort is not large; introducing FOESC into the ECU of the ABS is therefore technically feasible, while further improving the braking performance.
VI. CONCLUSION
In this research, an improved NPC enhanced by FOESC is proposed for the ABS considering the available actuator bandwidth. The key idea of the proposed controller is to use FOESC to improve the optimal slip rate search speed, which is limited by the bandwidth of the actuator. At the same time, the NPC is developed to predict the slip rate from the nonlinear vehicle model and to control the optimal slip rate. The simulation results show that both IOESC and FOESC can search for the optimal slip rate regardless of the limitation of the actuator bandwidth. However, IOESC is limited by the actuator bandwidth, and its convergence rate to the optimal slip rate becomes slower, whereas FOESC is not greatly affected by the actuator bandwidth and can still search for the optimal slip rate quickly. Compared with the traditional IOESC, the integrated FOESC controller effectively obtains the optimal slip rate and implements effective tracking control under the actuator bandwidth constraint; on the high adhesion road, the braking distance of FOESC is 11% shorter than that of IOESC. In future work, it will be interesting to implement and analyze the proposed method on an actual vehicle, considering the actuator time delay and the uncertainties of the test vehicle.
"year": 2020,
"sha1": "14a734b3ae276ed939ca7fe6f26a7849c779a4af",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09195430.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "14a734b3ae276ed939ca7fe6f26a7849c779a4af",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Quatsomes Loaded with Squaraine Dye as an Effective Photosensitizer for Photodynamic Therapy
Photodynamic therapy is a non-invasive therapeutic strategy that combines external light with a photosensitizer (PS) to destroy abnormal cells. Despite the great progress in the development of new photosensitizers with improved efficacy, the PS’s photosensitivity, high hydrophobicity, and tumor target avidity still represent the main challenges. Herein, newly synthesized brominated squaraine, exhibiting intense absorption in the red/near-infrared region, has been successfully incorporated into Quatsome (QS) nanovesicles at different loadings. The formulations under study have been characterized and interrogated in vitro for cytotoxicity, cellular uptake, and PDT efficiency in a breast cancer cell line. The nanoencapsulation of brominated squaraine into QS overcomes the non-water solubility limitation of the brominated squaraine without compromising its ability to generate ROS rapidly. In addition, PDT effectiveness is maximized due to the highly localized PS loadings in the QS. This strategy allows using a therapeutic squaraine concentration that is 100 times lower than the concentration of free squaraine usually employed in PDT. Taken together, our results reveal the benefits of the incorporation of brominated squaraine into QS to optimize their photoactive properties and support their applicability as photosensitizer agents for PDT.
Introduction
Photodynamic Therapy (PDT) is a minimally invasive localized clinical treatment that has been developed to treat many diseases, including psoriasis [1,2] and several types of cancer [3][4][5]. PDT is based on the presence of three components: a photosensitizer (PS), light, and molecular oxygen. Studies on cells and animals started in the 1960s and led to the clinical approval by the Food and Drug Administration (FDA) of the first photosensitizer, Photofrin, in 1995 [5,6]. In PDT, PSs are exposed to light at a specific wavelength, depending on the nature of the molecule in use [7]. After irradiation, the PS absorbs the light, causing the electron to transition from its ground state (singlet state) to an excited singlet state. Then, the PS can lose energy and return to the ground state. Alternatively, the singlet state can undergo intersystem crossing (ISC), forming an excited triplet state caused by the spin conversion of the electron in the higher energy orbital. From this triplet state, the molecule can relax and go back to the singlet state via two different routes: (i) the molecule can reduce the substrate, forming radicals that then react with oxygen, producing oxygenated radicals (Reactive Oxygen Species, ROS), known as a Type I reaction, or (ii) the PS can directly react with molecular oxygen, producing singlet oxygen (1O2), known as a Type II reaction [8,9]. These reactions also explain the importance of oxygen's presence in PDT. Both products then induce apoptosis or necrosis, causing damage to tumor cells and tumor-associated vascular structures and contributing to the stimulation of the immune response in the host [5,10].
For a safe and effective photodynamic treatment, the PS must be non-cytotoxic in the dark but highly cytotoxic after irradiation, photo- and chemically stable, non-mutagenic, and selective against neoplastic tissues. In addition, it should present a high degree of purity and ideally absorb light between 600 and 800 nm to promote deeper tissue penetration and minimize light scattering by tissues [11,12]. Most of the PSs that are FDA-approved or currently in clinical trials [13,14] are based on porphyrin or chlorin structures, such as Photofrin [15] or Foscan [16,17], i.e., porfimer sodium and temoporfin, respectively. These PSs, based on an extended aromatic ring system, are highly hydrophobic and susceptible to π-π stacking. This results in poor solubility in aqueous media and rapid clearance in blood circulation, severely compromising the therapeutic effectiveness of PDT. In this context, organic dyes, such as polymethine dyes (squaraine and cyanine dyes), represent a promising alternative class of PSs thanks to their higher selectivity, purity, and absorption at longer wavelengths compared to porphyrin-derived PSs [18][19][20][21][22]. Indeed, both cyanine and squaraine dyes have shown excellent light-induced toxicity on different types of tumors.
Despite their many merits as PSs, poor water solubility (leading to aggregation in aqueous media) and low chemical stability remain the main challenges for their biomedical application [21,23,24]. In order to improve the PSs' performance and to protect them from photodegradation, different delivery systems have been developed [24][25][26][27], with liposomes and polymeric micelles being the most common ones. These colloidal nanostructures, formed by the self-assembly of amphiphilic molecules in water, are used for the encapsulation of non-water-soluble PSs in either the hydrophobic core of micelles [28] or the membrane of liposomes [29,30]. It is worth noting that other strategies, such as the insertion of functional groups into the PS structure to increase water solubility, have been reported [31]. However, those modifications usually compromise the photochemical properties of the PS [32]. Thus, the nanoencapsulation of the PS improves its solubility in the aqueous environment without altering its chemical structure and benefits from either active targeting, via functionalization with targeting agents [33][34][35], or passive targeting through the enhanced permeation and retention (EPR) effect [36]. However, most of those formulations are limited by a lack of stability over time. For example, liposomes tend to change morphology, aggregate, or suffer from PS leakage over time [37,38], thus requiring complex formulations and coatings [39] to overcome these issues.
To address these challenges, in the present work, we have employed non-liposomal nanovesicles, named quatsomes (QS), as a nanocarrier for a hydrophobic squaraine dye. QS are non-liposomal, thermodynamically stable nanometric vesicles with very low dispersity [40] composed of sterols and quaternary ammonium surfactants [41]. These sterols and surfactants self-assemble in water, forming amphiphilic spherical nanometric structures with high homogeneity. This kind of vesicle has been proven to be safe and non-toxic for biomedical applications and remains stable for years [42,43]. The encapsulation of PSs into QS is an attractive approach since it offers not only a strategy to bring non-soluble squaraine dyes into aqueous media but also a list of advantages for in vivo application: (i) longer times in circulation and higher cellular uptake [44,45], (ii) improved therapeutic efficiency with a lower dye concentration, since the photosensitizer is highly localized at the therapeutic site [42,46], and (iii) QS allow targeted delivery through nanovesicle functionalization with targeting units [47,48].
In previous work, Bromo-Squaraine-C4 (Br-Sq-C4), a newly synthesized squaraine photosensitizer, demonstrated successful results in vitro [49]. However, a small organic molecule has important limitations for in vivo application, such as low solubility, poor spectroscopic properties in aqueous media, and a tendency to aggregate. Thus, as a first trial, non-water-soluble Br-Sq-C4 was incorporated into QS nanovesicles to enhance its stability in aqueous media. Nonetheless, the Br-Sq-C4 entrapment efficiency was lower than 50% after preparation, and the dye was not stably anchored in the QS membrane over time, showing significant leakage with only 15% of the initial dye concentration remaining after 6 weeks (see Figure S1 in the Supporting Information). Considering the low dye loading efficiency as well as the instability of this system, the use of Br-Sq-C4-loaded QS as a potential photosensitizer agent was dismissed. Instead, we synthesized a similar squaraine bearing longer alkyl chains (Br-Sq-C12, see Scheme 1), i.e., C12 instead of C4 hydrocarbon chains. Thanks to its higher lipophilicity, Br-Sq-C12 can be stably anchored to the QS membrane.
Herein, we present the design of a new QS composed of Cholesterol and the surfactant Stearalkonium Chloride, loaded with different concentrations of Br-Sq-C12 (Figure 1). First, we studied the physicochemical and spectroscopic characteristics of the newly synthesized Br-Sq-C12-loaded quatsomes. The entrapment of Br-Sq-C12 into the QSs does not interfere with its ability to generate ROS rapidly, an essential requirement for PDT activity. Cellular uptake and PDT efficiency are then studied in vitro in a cancer cell model, showing the benefits of loading Br-Sq-C12 into a QS vs. Br-Sq-C12 in its free form.
In addition to overcoming the non-water solubility limitation of Br-Sq-C12, the nanometric volume provided by the QS allows highly localized PS loadings, maximizing in this way the PDT effectiveness. This Br-Sq-C12-loaded QS can not only be explored for the development of highly efficient PDT treatment against cancer but also offers a basis for the development of photosensitizers with improved characteristics for in vivo applications.
Synthesis of Bromo-Squaraine-C12 Dye
All the chemicals were purchased from Merck (Darmstadt, Germany), Alfa Aesar (Haverhill, MA, USA), or TCI (Tokyo, Japan) and were used without any further purification. All microwave reactions were performed in a single-mode Biotage Initiator 2.5 (Biotage, Uppsala, Sweden). TLC was performed on silica gel 60 F254 plates. ¹H NMR (600 MHz) spectra were recorded on a Bruker Avance 600 NMR (Bruker, Billerica, MA, USA) in CDCl₃. ESI-MS spectra were recorded using an LTQ Orbitrap (Thermo Scientific, Waltham, MA, USA) spectrometer, with an electrospray interface and an ion trap as the mass analyzer. The flow injection effluent was delivered into the ion source using nitrogen as sheath and auxiliary gas.
Preparation of Dye-Loaded Chol/Stk QS by DELOS-Susp
All the QS formulations described were prepared using the DELOS-susp method [47,51]. The employed quantities (Table S1) and the detailed protocol used are listed in the Supporting Information. To prepare the organic phase of the DELOS-susp, the desired amounts of Chol (PanReac AppliChem, Castellar del Vallès, Spain) and Stk (Tokyo Chemical Industry Co., Ltd., Tokyo, Japan) were dissolved in ethanol (HPLC grade purity, Avantor Performance Materials Poland S.A., Silesia, Poland). This ethanolic solution contains the already solubilized Br-Sq-C12 (see Table 1 for the exact concentration of each component). The solution was introduced into the high-pressure vessel and compressed CO₂ was added, reaching a final temperature of 38 °C and a pressure of 11.5 MPa. After one hour, the expanded solution, with all the membrane components dissolved, was depressurized over the desired amount of water. After production, the dye-loaded Chol/Stk QS was purified by diafiltration to remove ethanol and the non-incorporated dye and membrane components.

Determination of the Dye Concentration and Dye Loading in QS Nanovesicles

The concentration of dye entrapped in QS was determined by measuring the UV-Vis absorbance A using a UV-Vis spectrophotometer (V-780, Jasco, Easton, MD, USA) and a high precision cell (Hellma Analytics, Müllheim, Germany) with a pathlength l of 1 cm. All the samples were diluted in ethanol to disrupt the membrane and release all the entrapped dye molecules. The concentration C of Br-Sq-C12 was determined using the Lambert-Beer law (A = ε l C), knowing that the extinction coefficient (ε) of Br-Sq-C12 in EtOH is 290,484 M⁻¹ cm⁻¹.
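The Beer-Lambert back-calculation described above is straightforward to script; a minimal sketch (the helper name, absorbance, and dilution values are illustrative assumptions, not measured data from this work):

```python
def dye_concentration_uM(absorbance, epsilon_M_cm, path_cm=1.0, dilution=1.0):
    """Beer-Lambert law, C = A / (eps * l), rescaled by the dilution
    applied before the measurement; result returned in micromolar."""
    c_molar = absorbance / (epsilon_M_cm * path_cm)
    return c_molar * dilution * 1e6

# Illustrative example: with eps = 290,484 M^-1 cm^-1 (Br-Sq-C12 in EtOH),
# A = 0.581 in a 1 cm cell at a hypothetical 1:100 dilution gives ~200 uM.
```

Diluting in ethanol before the measurement, as in the protocol, is what makes the simple linear law applicable: it releases the dye from the membrane and avoids aggregate scattering.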
The dye-loading coefficient was determined by lyophilization of the samples (LyoQuest-80, Telstar, Terrassa, Spain) at 193 K and 5 Pa for 5 days. The samples were then weighed, and the loading in mass was calculated as the ratio between the mass of dye and the mass of membrane components (Stk + Chol).

To determine the molar extinction coefficient of Br-Sq-C12, different dilutions in ethanol were prepared from a stock solution (0.5 mM). The absorbances were measured and their maxima were plotted vs. the sample concentration; the slope of the linear fit gives the molar extinction coefficient (ε). The analysis was performed in duplicate, and the data were considered acceptable when each measured log ε differed from their average by no more than 0.02.
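The slope-based determination of ε and the log ε acceptance criterion can be sketched as follows (the helper names are ours, and the data used in testing are synthetic, collinear points, not measurements from this study):

```python
import math

def molar_extinction(conc_M, absorbance, path_cm=1.0):
    """Least-squares slope of the A-vs-C line; slope divided by the
    path length gives eps in M^-1 cm^-1 (Beer-Lambert: A = eps*l*C)."""
    n = len(conc_M)
    mx = sum(conc_M) / n
    my = sum(absorbance) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc_M, absorbance))
             / sum((x - mx) ** 2 for x in conc_M))
    return slope / path_cm

def duplicates_acceptable(eps_a, eps_b, tol=0.02):
    """Acceptance criterion from the text: each measured log(eps) must
    lie within `tol` of the average of the duplicate measurements."""
    mean_log = (math.log10(eps_a) + math.log10(eps_b)) / 2.0
    return (abs(math.log10(eps_a) - mean_log) <= tol and
            abs(math.log10(eps_b) - mean_log) <= tol)
```

Working on log ε rather than ε itself makes the acceptance window a relative (percentage-like) criterion, appropriate for quantities spanning orders of magnitude.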
The determination of the solvatochromism was performed by preparing different solutions of Br-Sq-C12 and dye-loaded QSs in acetone, absolute ethanol (EtOH), methanol (MeOH), double-distilled water (ddH₂O), and dimethyl sulfoxide (DMSO). The absorption was measured at room temperature by UV-Vis spectroscopy (Cary 300 Bio spectrophotometer, Varian, Santa Clara, CA, USA, or V-780, Jasco, Easton, MD, USA) in the range of 500-800 nm in quartz cuvettes with a 1 cm path length.
Fluorescence Spectroscopy
Fluorescence emission measurements were acquired in steady-state mode and recorded in the range of 595-750 nm using a Horiba Jobin Yvon Fluorolog 3 TCSPC fluorimeter (Kyoto, Japan) equipped with a 450 W xenon lamp and a Hamamatsu R928 photomultiplier (Hamamatsu Photonics, Hamamatsu, Japan), using solvents of different polarity to investigate the solvatochromic behavior of both Br-Sq-C12 and the dye-loaded QS. The excitation wavelength depended on the solvent and was set at the squaraine hypsochromic shoulder previously recorded in the UV-Vis spectra. The excitation and emission slits were both 5 nm.
Fluorescence quantum yields (QY) were determined using the same instrument with a Quanta-ϕ integrating sphere and the De Mello method. The QY was evaluated in absolute ethanol for Br-Sq-C12 and in ddH₂O for the dye-loaded QSs. The analyzed samples had an absorbance of around 0.1 to avoid aggregation/fluorescence quenching. The final result is the average of three independent measurements of different dye solutions.
Fluorescence lifetimes (LT) were determined using the time-correlated single photon counting method (Horiba Jobin Yvon, Horiba, Kyoto, Japan), using a 636 nm Horiba Jobin Yvon NanoLED (Horiba, Kyoto, Japan) as the excitation source with a pulse repetition frequency of 1 MHz, positioned at 90° with respect to a TBX-04 detector. Lifetimes were calculated using the DAS6 decay analysis software. The LT was evaluated in absolute ethanol for Br-Sq-C12 and in ddH₂O for the dye-loaded QSs.

The mean size and size distribution of the QS loaded with 200 and 300 µM Br-Sq-C12 (QS_Sq_160 and QS_Sq_200, respectively) were determined by DLS, while the ζ-potential values (z-pot) were determined by ELS. Both measurements were carried out using a Zetasizer Ultra (Malvern Instruments, Malvern, UK). The DLS measurements were performed using a fluorescence filter to block the light resulting from fluorescence emission, which may alter the correlation function (the instrument exploits a 633 nm laser). The ELS measurements were performed using a DTS1070 folded capillary cell (Malvern Instruments, Malvern, UK), applying a voltage of 40 mV between the gold electrodes; the ζ-potential was calculated using the Helmholtz-Smoluchowski equation, which can potentially underestimate the real zeta-potential [52,53]. All the measurements were performed in triplicate to ensure the reliability of the results.
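The Helmholtz-Smoluchowski conversion from electrophoretic mobility to ζ-potential mentioned above is a single expression; a sketch with water-at-25 °C default parameters (the mobility value in the example is illustrative, not a measured value from this work):

```python
VACUUM_PERMITTIVITY = 8.854e-12  # F/m

def zeta_smoluchowski(mobility_m2_Vs, rel_permittivity=78.5, viscosity_Pa_s=8.9e-4):
    """Helmholtz-Smoluchowski: zeta = eta * mu / (eps_r * eps0), in volts.
    Assumes a thin electrical double layer (kappa*a >> 1), which is one
    reason the model can underestimate zeta for small charged particles."""
    return viscosity_Pa_s * mobility_m2_Vs / (rel_permittivity * VACUUM_PERMITTIVITY)

# Illustrative example: a mobility of ~5.47e-8 m^2/(V*s) corresponds to ~+70 mV,
# the order of magnitude reported for these Stk-stabilized vesicles.
```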
Cryogenic Transmission Electron Microscopy
Cryogenic transmission electron microscopy (cryo-TEM) images were acquired with a JEOL JEM microscope (JEOL JEM 2011, Tokyo, Japan) operating at 200 kV under low-dose conditions. First, 10 µL of the sample was deposited onto a holey carbon grid and, immediately after, vitrified by rapid immersion in liquid ethane. The vitrified sample was mounted on a cryo-transfer system (Gatan 626, Gatan, Pleasanton, CA, USA) and introduced into the microscope. Images were recorded on a CCD camera (Gatan Ultrascan US1000, Gatan, Pleasanton, CA, USA).
Evaluation of ROS Generation with DPBF and DCFH
As a probe molecule, 1,3-diphenylisobenzofuran (DPBF, Sigma Aldrich, Darmstadt, Germany) was used to evaluate Reactive Oxygen Species (ROS) generation, following the protocol previously described in the literature [49]. DPBF rapidly reacts with ¹O₂, forming the colorless o-dibenzoylbenzene derivative. The ¹O₂ scavenging activity can be monitored through the decrease in the electronic absorption band of DPBF at 415 nm. Stock solutions were prepared in DMSO, absolute ethanol, and phosphate buffer (2 mM, pH 7.4) for DPBF, free Br-Sq-C12, and Br-Sq-C12-loaded quatsomes, respectively. Each solution was then diluted in phosphate buffer (2 mM, pH 7.4) to obtain a DPBF concentration of 25 µM and a final concentration of 2.5 µM for both the free and the encapsulated dye. The solutions were placed in a 1 cm quartz cell and irradiated at various time intervals under stirring in an aerated solarbox (Solarbox 3000e, 250 W xenon lamp, CO.FO.ME.GRA, Milan, Italy). The light was filtered with an optical filter with a 515 nm cut-off to avoid DPBF degradation. At predefined time points (30, 60, 90, 120, and 180 s), absorption spectra were recorded on a Cary 300 Bio spectrophotometer (Varian, Santa Clara, CA, USA). The decrease in the DPBF absorption contribution at 415 nm was plotted as a function of the irradiation time.

Cells were maintained in a humidified incubator (HeraCell 150, Heraeus, Hanau, Germany) with 5% CO₂ at 37 °C, using Falcon™ plates as supports.
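DPBF consumption of the kind monitored above is commonly treated as pseudo-first-order in the probe; a sketch of extracting the apparent rate constant from absorbance-vs-time data at the time points listed (the decay trace used for testing is synthetic, not data from this study):

```python
import math

def pseudo_first_order_k(times_s, absorbances, a0):
    """Apparent first-order rate constant (s^-1) of DPBF consumption,
    taken as minus the least-squares slope of ln(A/A0) vs. time."""
    ys = [math.log(a / a0) for a in absorbances]
    n = len(times_s)
    mx = sum(times_s) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(times_s, ys))
             / sum((t - mx) ** 2 for t in times_s))
    return -slope
```

Comparing such rate constants for the dye and a reference such as Rose Bengal is one standard way to rank relative ROS generation efficiency.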
To investigate QS cytotoxicity, MCF-7 cells (0.5 × 10⁴ cells/well) were seeded in 96-well plates (Sarstedt, Nümbrecht, Germany). Six hours after plating, cells were treated with QS_Blank at two different final concentrations of membrane components (Chol + Stk): 10 µg/mL and 2 µg/mL. Cell viability was assessed using the CellTiter 96® AQueous Non-Radioactive cell proliferation assay (Promega, Madison, WI, USA) according to the manufacturer's instructions 24, 48, and 72 h after treatment. Briefly, 2 h after MTS incubation at 37 °C, absorbance at 490 nm was recorded using a microplate reader (FilterMax F5 Multi-Mode Microplate Reader, Molecular Devices, San Jose, CA, USA). Absorbance values were normalized to the control at 24 h and taken as proportional to the number of viable cells. Similarly, the cytotoxicity of QS (2 µg/mL) loaded with increasing dye concentrations (Table 2) was assessed.

To evaluate the photodynamic effect of QS_Sq, MCF-7 cells (0.5 × 10⁴ cells/well) were seeded in 96-well plates. Six hours after plating, cells were treated with QS_Sq at the concentrations reported in Table 2 or with Br-Sq-C12 in its free form at the same concentrations. After overnight incubation at 37 °C and 5% CO₂, the cells were irradiated for 15 min with a RED-LED array (96 LEDs in a 12 × 8 arrangement, excitation wavelength: 640 nm, irradiance: 8 mW/cm²) specifically designed and produced by Cicci Research s.r.l. (Grosseto, Italy). Cell viability was assessed 24, 48, and 72 h after irradiation using the CellTiter 96® AQueous Non-Radioactive cell proliferation assay (Promega, Madison, WI, USA) as described above. The photodynamic effect of Br-Sq-C12-loaded QS was evaluated by comparing the viability of cells treated with QS_Sq or with the same concentration of Br-Sq-C12 in its free form upon irradiation. For each condition, eight technical replicates were set up and three independent experiments were performed.
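The normalization of MTS absorbance readings to the untreated control can be expressed as a tiny helper (the blank-subtraction parameter is our assumption for generality; the protocol above normalizes directly to the 24 h control):

```python
def viability_pct(sample_abs, control_abs, blank_abs=0.0):
    """Percent viability: background-corrected sample absorbance relative
    to the background-corrected untreated control (490 nm MTS readout)."""
    return 100.0 * (sample_abs - blank_abs) / (control_abs - blank_abs)

# e.g. a well reading 0.45 against a control of 0.90 corresponds to ~50% viability
```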
Cellular Uptake
To verify the intracellular uptake of Sq-loaded QS and compare it with that of the dye in its free form, Calcein (Molecular Probes®, Invitrogen, Waltham, MA, USA) was used to label and track the whole cellular volume in live MCF-7 cells. Briefly, MCF-7 cells were seeded in Ibidi µ-Slide 8-well chambers (1.6 × 10⁴ cells/well), and 24 h after seeding, cells were treated overnight with 85 nM of Br-Sq-C12 in its free form or incorporated within 2 µg/mL QS (QS_Sq_200). After the incubation with QS_Sq_200, the cells were washed twice with PBS, incubated with Calcein (500 nM) for 30 min, washed twice with Hanks' Balanced Salt Solution (HBSS), and fixed in 4% paraformaldehyde (PAF) at 37 °C for 2 min. The cells were observed using a Leica TCS SP8 confocal system (Leica Microsystems, Wetzlar, Germany) equipped with an HCX PL APO 63X/1.4 NA oil-immersion objective. To simultaneously detect the probes, Br-Sq-C12 was excited with a HeNe laser at 633 nm, whereas Calcein was excited with a DPSS laser at 561 nm. Images were acquired on the three coordinates of space (XYZ planes) with a resolution of 0.081 µm × 0.081 µm × 0.299 µm and were processed and analyzed with ImageJ software (Rasband, W.S., U.S. National Institutes of Health, Bethesda, MD, USA). Three-dimensional images with Calcein allowed for assessing whether the Br-Sq-C12, encapsulated into QS or in its free form, was included within the cellular volume or not.

Data are shown as the average values of three independent pooled experiments ± standard error of the mean (SEM). Statistical analyses were performed using GraphPad Prism 6.0 software (La Jolla, CA, USA). The statistical significance between different conditions was determined by performing a t-test or Mann-Whitney test, according to the populations' distribution (normal or non-normal, respectively). Differences with p-values < 0.05 were considered statistically significant (*: p-value < 0.05, ***: p-value < 0.0005, ****: p-value < 0.0001).
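The analyses above were run in GraphPad Prism; for illustration, the U statistic underlying the Mann-Whitney test can be computed directly (a toy implementation: ties credited 0.5, no normal approximation, no p-value):

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs against ys: the count of pairs where
    x > y, with tied pairs counted as half. Ranges from 0 to len(xs)*len(ys)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

The rank-based construction is what makes the test appropriate for the non-normally distributed conditions mentioned in the text: it compares orderings, not means.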
Results and Discussion
In a previous study, Bromo-Squaraine-C4 (Br-Sq-C4), a non-water-soluble indolenine-based dye quaternarized with a four-carbon chain, demonstrated successful PDT results in vitro [18,49]. However, the application of Br-Sq-C4 in biomedicine is hindered by its poor solubility and low chemical stability, especially in aqueous solutions. To overcome this drawback, a promising approach is incorporation into nanoparticle systems to shield its hydrophobicity, prevent the formation of dye aggregates, and improve solubility under physiological conditions [24,26]. Quatsome (QS) nanovesicles have shown successful results incorporating different cyanine dyes [42,[54][55][56][57], demonstrating long-term stability and biocompatibility. Thus, as a first trial, we prepared Br-Sq-C4-loaded quatsomes (QS). However, the incorporation of Br-Sq-C4 into QS resulted in a limited amount of encapsulated dye (entrapment efficiency was ~50%), as well as significant dye leakage over time (nearly 50% of Br-Sq-C4 was released after one month, see Figure S1). Considering the low stability of this system, Br-Sq-C4 was dismissed, and a new squaraine bearing a longer alkyl chain, i.e., Br-Sq-C12, was developed. The longer alkyl chain provides higher hydrophobicity, which we hypothesized would promote stable incorporation in the vesicular membranes. This study evidences the importance of the hydrocarbon chain length for the stabilization of the dye in a vesicular membrane, and in particular, in a quatsome nanovesicle.
Synthesis of Br-Sq-C12 and Preparation of Br-Sq-C12-Loaded Quatsomes
The synthesis of the Bromo-Squaraine-C12 dye, Br-Sq-C12 (Scheme 1 and Figure 1), started with the quaternarization of the bromoindolenine ring (1), synthesized following a procedure reported in ref. [49], to obtain compound 2. This reaction was performed under microwave irradiation and increased the acidity of the methyl group, promoting the subsequent condensation reaction. The final dye was then obtained in a one-step reaction under microwave heating, following our well-established method for indolenine-based squaraines [50].
QS composed of Stearalkonium (Stk) and Cholesterol (Chol), with a 1:1 Stk/Chol molar ratio, and loaded with Br-Sq-C12 were prepared using the DELOS-susp methodology [47] at two different initial (pre-processing) Br-Sq-C12 concentrations, 200 and 300 µM. As a result, two batches of QS encapsulating Br-Sq-C12 were obtained, named after their final post-processing dye concentrations: QS_Sq_160 and QS_Sq_200, from the 200 and 300 µM batches, respectively (Figure 2). In addition, non-loaded QS (QS_Blank) was also prepared for comparison with the PS-loaded QS. All formulations were diafiltrated to remove the ethanol and non-entrapped dye or free membrane components from the solution, finally obtaining three batches of water-suspended filtered nanovesicles (see Section 2 for details).
All formulations showed a very similar concentration of membrane components (Table 3). Sample QS_Sq_160 showed a higher average dye encapsulation efficiency (~80%), yielding a final dye concentration of 160 µM, while sample QS_Sq_200 showed, as expected, a higher dye loading in mass (L), with an effective dye concentration of ~200 µM at the vesicles.

[Table 3, last row, avg. geometric diameter (nm)⁶: QS_Blank 63 ± 19, QS_Sq_160 66 ± 20, QS_Sq_200 58 ± 18. Footnotes: ¹ Determined using UV-Vis spectroscopy; ² Determined from the weight of the lyophilized sample; ³ Calculated as mg dye per mg of membrane components (Stk + Chol), ×10⁻²; ⁴ Average value determined from DLS ± SD of three repeat measurements; ⁵ Average ζ-potential determined from ELS ± SD of three repeat measurements; ⁶ Geometric diameter distribution determined from cryo-TEM analysis of one batch ± SD of the size distribution (n = 100).]
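The encapsulation efficiencies quoted above follow directly from the pre- and post-processing dye concentrations; a one-line helper (our naming) reproducing the reported ~80% for QS_Sq_160:

```python
def entrapment_efficiency_pct(c_final_uM, c_initial_uM):
    """Percent of the dye fed to the DELOS-susp process that is
    recovered in the purified vesicle suspension."""
    return 100.0 * c_final_uM / c_initial_uM

# QS_Sq_160: 160 uM recovered from 200 uM fed -> 80%
# QS_Sq_200: 200 uM recovered from 300 uM fed -> ~67%
```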
Physicochemical Properties
QS_Sq_160 and QS_Sq_200 were analyzed with DLS to determine the mean hydrodynamic diameter and polydispersity index (PDI). First, a systematic DLS study was performed in order to elucidate the optimal dilution for QS measurements. As detailed in the section "Systematic DLS study" included in the SI, DLS and ELS measurements were performed at a 1:10 dilution from the final formulation to ensure reliable results. Average hydrodynamic diameter (z-average), PDI, and ζ-potential values are summarized in Table 3. Both samples showed similar hydrodynamic diameters (~90 nm) and PDI values (<0.2), with a monomodal size distribution (Figure 2b). In addition, plain quatsomes (QS_Blank) display similar characteristics, confirming the high reproducibility of the QS preparation. A highly positive ζ-potential (~70 mV), due to the positive charge of Stearalkonium Chloride, is also comparable among the samples and contributes to the colloidal stability of the nanovesicles [58]. Transmission electron microscopy in cryogenic conditions (cryo-TEM) confirmed the unilamellar vesicle morphology, unaffected by the dye encapsulation (Figure 2c-e). Br-Sq-C12-loaded samples showed high homogeneity in the size distribution (Figure 2d,e), in line with the low PDI values obtained by DLS. From the analysis of the cryo-TEM images, the average geometric diameters were estimated as 66 ± 20 nm and 58 ± 18 nm for QS_Sq_160 and QS_Sq_200, respectively (n = 100). It should be kept in mind that the averaged values obtained from cryo-TEM come from representative images, which help to confirm the data obtained from DLS, the latter being more statistically representative.
Spectroscopic Characterization
Br-Sq-C12 shows an absorption maximum at around 640 nm in ethanol (Figure 3a) with a very high molar extinction coefficient (~290,000 M⁻¹ cm⁻¹). The UV-Vis spectrum is characterized by a narrow absorption band in the NIR, an essential requirement for PDT treatment, and a characteristic hypsochromic shoulder typical of polymethine dyes. The main absorption peak is associated with the π→π* HOMO-LUMO transition, mainly localized on the squaraine core; on the other hand, the shoulder at higher energy can be ascribed to the HOMO-LUMO+1 transition [59,60]. As already observed for other SQs [61], Br-Sq-C12 shows excellent fluorescence emission, with an emission maximum at 649 nm when dissolved in ethanol, although both the absorption and the fluorescence emission are completely quenched when dissolved in water due to aggregation-caused quenching (ACQ). As shown in Figure 3b,c, loading into QSs fully overcomes this drawback, increasing the solubility of the dye in aqueous media, with absorbance and fluorescence emission maxima at 644 nm and 655 nm, respectively. The different intensity of the 600 nm shoulder for the free dye in ethanol and the dye-loaded QS is related to the presence of some H-aggregates. However, after entrapment, the squaraine is dispersible in 100% water. The solvatochromic effect on the absorption and emission spectra of both the dye and the dye-loaded QSs was also investigated and is reported in Table 4. In general, neither the absorption maxima nor the band shape was affected by the solvent polarity. A slight difference was observed in the absorption peak maxima between protic and aprotic solvents; in fact, DMSO induced a 15 nm bathochromic shift in comparison to MeOH, suggesting a higher polarity of the ground state compared to the excited state [62][63][64].
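Solvatochromic shifts like the 15 nm DMSO-vs-MeOH difference noted above are often compared on an energy scale; a sketch converting a wavelength shift to wavenumbers (the 640/655 nm pair below is illustrative, chosen to match the magnitude of shifts in this work, not a tabulated value):

```python
def shift_wavenumbers_cm1(lambda_a_nm, lambda_b_nm):
    """Energy difference between two band maxima in cm^-1
    (positive when band b is red-shifted relative to band a)."""
    return 1e7 / lambda_a_nm - 1e7 / lambda_b_nm

# e.g. a 640 -> 655 nm bathochromic shift corresponds to ~358 cm^-1
```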
It is worth noticing that both the absorbance and emission maxima of the dye-loaded QSs, irrespective of the amount of incorporated dye, are very close to the values obtained with the free Br-Sq-C12, suggesting that the association with the QS did not change the energies and relative probabilities of the electronic transitions. The fluorescence quantum yield of Br-Sq-C12 in ethanol is in the range typical for squaraines in organic media. On the contrary, the QY in water cannot be detected due to the complete insolubility of the dye in aqueous media. However, after entrapment in the QS, we were able to measure the QY in aqueous dispersion. The values are nevertheless very low, probably due to the presence of some H-aggregates, as also evidenced in the UV-Vis spectra reported in Figure 3b,c. The Br-Sq-C12 fluorescence lifetime showed a monoexponential decay and is in the nanosecond range, as already observed for several squaraines [59].
Table 4. Photochemical properties of the Br-Sq-C12 and Br-Sq-C12-loaded quatsomes.
Comparing the values obtained for QS_Sq_160 and QS_Sq_200 in water, bi-exponential functions were necessary to fit the decay curves, suggesting that two different types of interaction occurred with the QSs. Specifically, one fluorescence lifetime component is only slightly longer than that of the free dye, while the major component (ca. 85%) increased by ca. 2.7 times. This longer decay could be ascribed to the decrease in rotational/twisting degrees of freedom, confirming a good degree of dye entrapment into the QS vesicles. On the other hand, the shorter lifetime could be due to the presence of a small amount of free dye on the QS surface, leading to a detrimental effect caused by the interaction with highly polar media, such as water [59]. From these data, we can conclude that Br-Sq-C12 has been successfully incorporated into the QS membrane, allowing its fluorescence in aqueous media.
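The bi-exponential analysis described above is usually summarized by an amplitude-weighted mean lifetime; a sketch (the amplitudes and lifetimes in the example are illustrative stand-ins, not the fitted values from this study):

```python
import math

def biexp_decay(t, a1, tau1, a2, tau2):
    """Two-component fluorescence decay model:
    I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def amplitude_weighted_lifetime(a1, tau1, a2, tau2):
    """<tau> = (a1*tau1 + a2*tau2) / (a1 + a2), the usual one-number
    summary of a bi-exponential fit."""
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

# e.g. an 85% component at 2.7 ns plus a 15% component at 1.0 ns
# gives an amplitude-weighted lifetime of 2.445 ns.
```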
Evaluation of Colloidal Stability and Photostability
Previous works on QS have already demonstrated the long colloidal stability, up to a year, of these nanovesicles [41][42][43]. In this work, we have evaluated the stability over time and photostability of the Br-Sq-C12-loaded QS with DLS and fluorescence spectroscopy, respectively. The obtained data proved the dye-loaded nanovesicular systems to be very stable for 18 months, with hydrodynamic diameter values around 90 nm and optimally low PDI values maintained around 0.2 (Figures 4a and S4). As previously mentioned, the high positive ζ-potential plays a crucial role in providing colloidal stability; thus, +70 mV ζ-potential values after 10 weeks for both dye-loaded systems demonstrated the long-term stability of those nanovesicles (Figure 4b).
Similarly, photostability was evaluated with periodic UV-Vis absorbance measurements for up to 18 months. We noticed a lowering of the main absorbance peak at 644 nm over time for both samples in the study (Figures 4c,d and S5), indicating a decrease in the PS concentration at the QS nanovesicle. The residual dye concentration encapsulated in QS was quantified again 4 months after production, and the obtained values were ~130 µM for sample QS_Sq_160 and ~180 µM for sample QS_Sq_200, corresponding to a dye leakage of ~20% and ~9% in 4 months, respectively (Figure 4e). In order to better understand this phenomenon, we followed the variation of the peaks' amplitude over time (ratio of the 660 nm peak to the 600 nm shoulder), which can be indicative of the formation of dye aggregates. The results, presented in Figure S5, show that both bands progressively decrease over time with no significant change in the ratio, suggesting that there is no significant dye aggregation in the QS membrane. Given the obtained results, we can assume that the stability of the dyes is not compromised in a major way by their inclusion in the QS membrane at the obtained loadings.
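The leakage percentages quoted above follow directly from the residual concentrations; a helper (our naming) reproducing them from the approximate values reported:

```python
def dye_leakage_pct(c_initial_uM, c_residual_uM):
    """Percent of the initially entrapped dye lost over the storage period."""
    return 100.0 * (c_initial_uM - c_residual_uM) / c_initial_uM

# With the approximate residual concentrations reported in the text:
# QS_Sq_160: 160 -> ~130 uM gives ~19% leakage (quoted as ~20%)
# QS_Sq_200: 200 -> ~180 uM gives ~10% leakage (quoted as ~9%)
```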
ROS Production
A preliminary evaluation of the ability of both the free Br-Sq-C12 and the dye-loaded QS to generate Reactive Oxygen Species (ROS) was carried out by using 1,3-diphenylisobenzofuran (DPBF) as a probe [49]. DPBF rapidly reacts with ROS generated by the light-activated dye, resulting in the disappearance of DPBF's characteristic absorption band at 415 nm due to the formation of the colorless o-dibenzoylbenzene derivative. The decrease in the DPBF absorption band at 415 nm as a function of the irradiation time was compared to the values obtained by irradiating a standard, the efficient and well-known ROS generator Rose Bengal (RB). As shown in Figure 5, both the free squaraine and the squaraine loaded into QSs show faster and higher ROS generation than RB. In particular, Br-Sq-C12 is able to promote the complete decay of DPBF absorption within 180 s, while the same result required 10 min for the RB reference. This fast ROS generation could be ascribed to the presence of bromine, which may facilitate singlet-to-triplet intersystem crossing due to the well-known heavy atom effect [18,19]. More importantly, the entrapment of Br-Sq-C12 into the QSs does not interfere with the ability of the dye to generate ROS rapidly, an essential requirement for PDT activity.
Cytotoxicity and PDT Assays
Despite the outstanding properties of QS, the remarkable cytotoxicity of quaternary ammonium surfactants, including Stk, could represent a challenge for their in vivo application [65,66]. Therefore, to assess QS biocompatibility, we first performed cell viability assays on MCF-7 cells treated with two different concentrations of blank QS, i.e., 10 and 2 µg/mL (Stk/Chol); see Section 2.6.1 for details. As shown in Figure 6a, QS diluted to a final membrane-component concentration of 10 µg/mL revealed marked cytotoxicity starting 24 h after treatment and persisting up to 72 h. On the contrary, QS at 2 µg/mL showed good biocompatibility on MCF-7 cells (Figure 6b). QS_Sq_160 and QS_Sq_200 were diluted to 2 µg/mL membrane components, corresponding to final dye concentrations of 68 nM and 85 nM, respectively. Interestingly, the newly synthesized Br-Sq-C12-loaded quatsomes QS_Sq_200 (85 nM dye) are slightly more cytotoxic than QS_Blank at 24 and 48 h after treatment with 2 µg/mL, although still highly biocompatible compared to QS_Blank at 10 µg/mL (Figure 6c). This effect is in agreement with previously reported data on polymethine dyes loaded in solid lipid nanoparticles [24]. Consequently, the measurement of the photoactivity of Br-Sq-C12-loaded QS was performed by diluting the formulation to a membrane-component concentration of 2 µg/mL to avoid any non-targeted cytotoxicity from the carrier itself.
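The reported final dye concentrations (68 and 85 nM at 2 µg/mL membrane components) can be checked for internal consistency. Assuming the nominal stock loadings of 160 and 200 µM taken from the sample names, both samples imply the same dilution factor; the resulting stock membrane concentration (~4.7 mg/mL) is a derived, hypothetical figure, not a value reported in the text.

```python
# Hedged sketch: consistency check on the reported dilutions. Stock dye
# loadings (160 and 200 uM) are assumptions inferred from the sample names;
# the implied stock membrane concentration is derived, not reported.
def implied_stock_membrane_mg_ml(stock_dye_uM, final_dye_nM,
                                 final_membrane_ug_ml=2.0):
    """Stock membrane-component concentration implied by the dye dilution."""
    dilution = (stock_dye_uM * 1000.0) / final_dye_nM  # uM -> nM
    return dilution * final_membrane_ug_ml / 1000.0    # ug/mL -> mg/mL

print(round(implied_stock_membrane_mg_ml(160, 68), 1))  # → 4.7
print(round(implied_stock_membrane_mg_ml(200, 85), 1))  # → 4.7
```

Both samples giving the same implied stock concentration supports the stated pairing of 2 µg/mL membrane components with 68 and 85 nM dye.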
The photoactivity of the molecules was quantified by irradiating MCF-7 cells for 15 min in the presence of free Br-Sq-C12 or Br-Sq-C12 incorporated into the previously described QS nanosystem (Figure 7).
Interestingly, Sq-loaded QS are significantly more phototoxic than free Sq, even at the lowest concentration tested. In fact, Br-Sq-C12 in its free form showed no phototoxicity at any of the tested concentrations (data shown in Figure S6). Of all formulations, QS_Sq_160 (at a final Br-Sq-C12 concentration of 68 nM) was found to be the most active, as it was significantly different from free Br-Sq-C12 at all monitored time points (Figure 7). On the contrary, QS_Sq_200 (corresponding to an 85 nM effective dye concentration) did not yield a significant improvement in the photoactive properties of the nanosystem after irradiation. These results indicate that a higher concentration of photosensitizer in nanovesicles does not always correlate with higher PDT efficiency. We suspect that the Br-Sq-C12 molecules in QS_Sq_200 suffer from aggregation due to the higher loading compared to QS_Sq_160, leading to a decline in the squaraine's photochemical properties and, consequently, a lower PDT efficiency. Of note, the concentration of free squaraine usually employed for PDT studies ranges from 1 to 100 µM [49,67,68]. Here, we show that the use of a nanocarrier system allows the use of much lower concentrations of photosensitizer (10- to 1000-fold lower). In agreement with previous data obtained on other types of nanosystems [24], our results demonstrate that encapsulation within QS greatly enhances the dye's photoactivity. It is likely that the nanocarrier, by significantly increasing the local concentration of the dye, in addition to reducing its aggregation phenomena, improves its overall spectroscopic properties. Furthermore, taking into account our results on ROS production (Figure 5), it is also possible to hypothesize that the relatively low photoactivity of the free dye in contact with MCF-7 cells may be due to a failure of the molecule to enter the cell, contrary to the nanoparticle system.
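The "10-1000 times lower" figure follows from the quoted numbers: the 1-100 µM literature range for free squaraines versus the 68-85 nM working concentrations used here. A small hedged sketch of that comparison:

```python
# Hedged sketch: fold-reduction in photosensitizer concentration enabled by
# nanoencapsulation. Literature range (1-100 uM) and working concentrations
# (68 and 85 nM) are taken from the text; the pairings below are illustrative.
def fold_reduction(free_uM, loaded_nM):
    """How many times lower the QS-loaded dose is than a free-dye dose."""
    return free_uM * 1000.0 / loaded_nM  # uM -> nM, then ratio

print(round(fold_reduction(1, 85)))    # → 12   (lowest literature dose vs 85 nM)
print(round(fold_reduction(100, 68)))  # → 1471 (highest literature dose vs 68 nM)
```

The extremes of the two ranges thus span roughly one to three orders of magnitude, consistent with the 10-1000-fold statement.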
To test this hypothesis, we performed confocal laser scanning microscopy experiments on MCF-7 cells treated O/N with QS_Sq_200 and with the corresponding amount of Br-Sq-C12 in its free form (85 nM). More specifically, we labeled the whole cellular volume using Calcein (illustrated in red), while Br-Sq-C12 is shown in blue (Figure 8). We acquired the images in three-dimensional space (XYZ), allowing the 3D cellular volume reconstruction and elucidating whether the Br-Sq-C12 (encapsulated or in its free form) was included within the cellular volume. As shown in Figure 8a, we found that Sq-loaded QS are efficiently internalized by MCF-7 cells after O/N incubation, as demonstrated by the orthogonal view revealing probe signals included within the Calcein-labeled cell volume. By contrast, as previously hypothesized, Br-Sq-C12 in its free form, at the same concentration present in the Br-Sq-C12-loaded QS samples, was not properly internalized (Figure 8b). This confirms that the absence of photoactivity is due to hindered cellular uptake of the dye in its free form.
Taken together, these results point out the advantage that nanoencapsulation offers for photosensitizers such as squaraines. First, the entrapment of Br-Sq-C12 into a nanovesicular system such as QS ensures not only its dispersion and stability in aqueous media but also the preservation of its photophysical characteristics. Secondly, the cell permeability issues that the PS faces in its free form can be overcome by its encapsulation (i.e., QS uptake through phagocytosis [42]). As a result, higher phototoxicity is observed for Br-Sq-C12-loaded QS than for the free dye.
Conclusions
A novel formulation based on Cholesterol-Sterealkonium QS loaded with a squaraine dye (i.e., Br-Sq-C12) has been proposed as an alternative photosensitizer for PDT. The QS were prepared with a green and scalable technology in a single step; the membrane components are available at pharmaceutical grade and are readily adaptable for the incorporation of active targeting ligands. Considering the poor water solubility of squaraine dyes, their loading into QS offers an interesting strategy to bring them into aqueous media, enabling their use in bioapplications. Br-Sq-C12-loaded QS are stable for at least 18 weeks in aqueous media, showing modest dye leakage over time and only weak photoinstability. Moreover, we have demonstrated that Br-Sq-C12 incorporation into QS compromises neither its photophysical characteristics nor the ability of the dye to generate ROS rapidly, an essential requirement for PDT activity. Indeed, both free Br-Sq-C12 and Br-Sq-C12-loaded QS possess faster and higher ROS generation ability than the well-known ROS generator Rose Bengal. Phototoxicity assays demonstrated that MCF-7 cells internalize Br-Sq-C12-loaded QS and that, upon irradiation, an increase in the apoptotic/necrotic cell population is observed. The higher phototoxicity observed for QS-loaded Br-Sq-C12 vs. its free form can be explained by the higher efficiency in PS delivery: QS provide greater photostability in aqueous media and higher cellular uptake, and significantly increase the local concentration of the PS. Indeed, the use of the nanocarrier system allows the use of much lower concentrations of photoactive dye (10- to 1000-fold lower). Taken together, our in vitro results support the applicability of QS as nanocarriers for PDT and highlight the benefits of encapsulating squaraine dyes into stable nanostructures to optimize their photoactive properties.
Neonatal Seizures and Inborn Errors of Metabolism: An Update
Early identification of an underlying inborn error of metabolism in newborns with otherwise unexplained seizures may guide appropriate disease-specific treatment and provide important clues for the choice of antiepileptic drugs. Neonatal seizures usually present as prolonged or recurrent, often configuring status epilepticus. Striking features of an underlying metabolic disorder include an abnormal neurological examination, lethargy and/or symptoms of acute decompensation. Ex adiuvantibus, a trial of intravenous pyridoxine administration could be attempted in refractory unexplained neonatal seizures. Peculiar EEG patterns such as suppression-burst may direct the diagnosis and laboratory work-up, being most frequently associated with specific metabolic disorders. Publication History: Received: May 23, 2015; Accepted: December 02, 2015; Published: December 04, 2015.
Introduction
Neonatal seizures (NS) constitute the most frequent and distinctive neurological symptom of the neonatal period. The incidence is estimated to be between 1.5 and 5.5/1000 live births, with onset during the first week of life in 80% of cases [1]. Neonates may present with different types of seizures: clonic, tonic, myoclonic (axial, focal, erratic), epileptic spasms, and subtle seizures, including autonomic signs or automatisms [2]. Seizures in the neonatal period differ considerably from those observed later in life with respect to their aetiological profile and clinical presentation [2,3]. NS often represent the first clinical indicator of a central nervous system (CNS) dysfunction. Although 40-50% of them are secondary to hypoxic-ischemic encephalopathy (HIE), other less frequent etiologies must be taken into account in the diagnostic work-up, including infections, cortical malformations (readily identifiable through routine testing and imaging) and inborn errors of metabolism (IEMs) [3,4]. Diagnosis can be quite difficult; thus, a high index of suspicion is required [4]. Notably, when epilepsy occurs in a patient with an IEM it is commonly associated with other neurological and extra-neurological symptoms that may direct appropriate laboratory and neuroimaging investigations [5]. Epilepsy occurring in newborns with IEMs may be classified according to clinical or etiopathogenetic criteria. From a pathogenetic point of view, these epilepsies can be divided into those due to 1) "intoxication-type" disorders of intermediary metabolism; 2) neurotransmitter defects and related disorders; 3) disorders of energy metabolism; 4) storage disorders with impaired neuronal function; and 5) IEMs associated with brain malformations [6,7]. The identification of a treatable disorder is always mandatory in IEMs. This review emphasizes the importance of considering an IEM in the differential diagnosis of neonatal seizures, discusses red flags for a metabolic origin of seizures, and provides an overview of diagnosis and treatment.
Early myoclonic encephalopathy
Early myoclonic encephalopathy (EME) is a severe epileptic syndrome with onset within the neonatal age, mostly occurring in newborns with IEMs. Patients present marked hypotonia or hypertonia, opisthotonus, apneic spells and abnormal eye movements [2]. The main epileptic triad encompasses erratic or fragmentary myoclonus, simple partial seizures and tonic spasms [2]. Bursts of paroxysmal activity alternating with periods of suppressed activity define the suppression-burst (SB) EEG pattern, which is a striking feature of EME (Figure 1). Several IEMs may be associated with EME, such as non-ketotic hyperglycinaemia (NKH), propionic or methylmalonic acidurias, methylenetetrahydrofolate reductase deficiency, GABA transaminase deficiency, serine deficiency, congenital glutamine deficiency, sulfite and xanthine oxidase deficiency, and vitamin-responsive syndromes such as pyridoxine, pyridoxal-phosphate, folinic acid and biotin deficiencies [2]. Other less specific seizure types, such as focal or generalised clonic, tonic and/or myoclonic seizures, may also frequently occur [8].
Urea cycle defects
The urea cycle disorders (UCDs) result from single-gene defects of the enzymes of the final common pathway for the excretion of waste nitrogen through ammonia detoxification [9]. The components of the pathway are: carbamyl phosphate synthetase I (CPSI); ornithine transcarbamylase (OTC); argininosuccinic acid synthetase (ASS); argininosuccinic acid lyase (ASL); arginase (ARG); and the cofactor-producing enzyme N-acetylglutamate synthetase (NAGS) [6]. Deficiencies of CPSI, ASS, ASL, NAGS, and ARG are inherited in an autosomal recessive manner; OTC deficiency is inherited in an X-linked manner [9]. Infants with a UCD often appear normal initially but rapidly develop cerebral edema and the related signs of lethargy, anorexia, hyperventilation or hypoventilation, hypothermia, seizures, neurologic posturing, and coma [10]. In milder (or partial) UCDs, ammonia accumulation may be triggered by illness or stress at almost any time of life, resulting in multiple mild elevations of plasma ammonia concentration; the hyperammonemia is less severe and the symptoms more subtle [6]. Seizures are frequent during the early stages of hyperammonemia, especially in newborns [8]. The EEG may show variable patterns of epileptic discharge, i.e., multifocal spike- and sharp-wave discharges, repetitive paroxysmal activity, unusually low-voltage fast activity, and findings consistent with complex partial seizures [7]. The therapy of UCDs includes dialysis to reduce the plasma ammonia concentration, intravenous administration of arginine chloride and nitrogen scavenger drugs to allow alternative-pathway excretion of excess nitrogen, restriction of protein for 24-48 h to reduce the amount of nitrogen in the diet, and measures to reduce catabolism [6].
Maple syrup urine disease
Maple syrup urine disease (MSUD) is the prototype of the disorders of catabolism of branched-chain amino acids (BCAAs; leucine, isoleucine, and valine) and is caused by deficiency of the branched-chain keto-acid (BCKA) dehydrogenase enzyme [11]. Neonatal presentation with poor feeding, vomiting, lethargy and abnormal movements (rhythmic boxing and cycling movements of the limbs), fluctuating ophthalmoplegia, and seizures is quite common [8]. Early detection is critical, as initiation of therapy within the first 5 days may be associated with a near-normal cognitive outcome [12]. The diagnosis is suggested by the odor of maple syrup or burnt sugar in cerumen at 24 to 48 hours of life, or in urine during the latter part of the first week, and is confirmed by detecting increased values of BCAAs and BCKAs in blood and urine [2,8]. MRI may show brain edema affecting the myelinated white matter (cerebellar white matter, dorsal brainstem, cerebral peduncles, posterior limb of the internal capsule, and peri-Rolandic cerebral white matter), thalami, and globi pallidi [5]. Vasogenic edema due to blood-brain barrier disruption may occur in MSUD, mainly related to water increase in the extracellular spaces and mostly evident during acute metabolic decompensation [13].
Isovaleric acidemia
Isovaleric acidemia (IVA) is due to a defect of the isovaleryl-CoA dehydrogenase gene, with increased plasma and urine levels of free isovaleric acid, 3-hydroxyisovaleric acid, N-isovalerylglycine and isovalerylcarnitine [6]. Diagnosis can be strongly suggested by the typical "sweaty feet" odor of the patients' urine [6]. Seizures may occur within the course of acute metabolic decompensation or as a consequence of the intracranial hemorrhages (subarachnoid, intra- or periventricular, cerebellar, and diffuse petechial lesions in the white matter) that have been reported in IVA [5]. Hemorrhages may be the result of various factors, such as CNS edema due to accumulation of abnormal organic acids, thrombocytopenia, coagulopathy secondary to associated liver disease, and complications of anticoagulation therapy during hemofiltration [5].
Propionic acidemia
Propionic acidemia (PPA) presents during the neonatal period with vomiting, dehydration and rapid deterioration after a short symptom-free interval [14]. Seizures may present as focal or generalised seizures, spasms and myoclonic jerks [8]. Diffuse swelling may be present on MRI in neonatal-onset PPA [15]. The basal ganglia may be normal in PPA during neonatal life, whereas lesions of the globi pallidi and delayed myelination are typically identified at later ages. An increased frequency of intracranial hemorrhages has also been reported in PPA [15,16].
Methylmalonic acidemia
Methylmalonic acidemia (MMA) presents in its neonatal form with rapid deterioration after a short symptom-free interval [17]. Seizures may complicate acute metabolic decompensation. Diffuse swelling may be present on MRI in neonatal-onset MMA, while later imaging reveals volume loss, delayed myelin maturation, calcification of the basal ganglia, and focal necrosis of the globi pallidi [5]. Additionally, an increased frequency of intracranial hemorrhages has been reported in MMA [15].
Glutaric aciduria type 1
Glutaric aciduria type 1 (GA1) is an inborn error of lysine, hydroxylysine, and tryptophan catabolism [18]. Typically, GA1 presents in later infancy as an acute encephalopathy with predominant dystonia and dyskinesia due to necrosis of the basal ganglia, particularly affecting the putamina [19]. However, neonates may present with macrocephaly and subtle neurological signs such as hypotonia, irritability and jitteriness. Seizures usually occur within the context of acute decompensation, in association with symptoms of rapid deterioration. Dyskinetic movements may often be misdiagnosed as seizures [20]. The urine organic acid profile shows increased 3-OH-glutaric acid, with glutarylcarnitine as the major acylcarnitine peak [6]. Neuroimaging typically shows enlarged frontotemporal CSF spaces, wide Sylvian fissures, and a large cavum septi pellucidi [5]. Treatment relies on dietary restriction, carnitine and vitamin supplementation [6].
Pyridoxine or pyridoxal-phosphate-dependent seizures
Pyridoxine- or pyridoxal-phosphate-responsive epilepsies must be considered in a neonate with unexplained and refractory seizures with onset before or shortly after birth [20]. Seizures are mainly prolonged or recurrent, configuring status epilepticus or EME [6]. Rarely, they appear brief and intermittent [8]. These conditions constitute a group of metabolic disorders that share as a common mechanism a defective production of pyridoxal phosphate (PLP), the active form of pyridoxine. The first, pyridoxine-responsive epilepsy, is caused by mutation in the ALDH7A1 gene encoding the protein antiquitin, involved in lysine catabolism within the CNS [21]. Antiquitin deficiency results in increased alpha-aminoadipic semialdehyde (alpha-AASA) and piperideine-6-carboxylic acid (P6C) [21,22]. P6C has been shown to inactivate PLP, leading to a secondary PLP deficiency [21,22]. The final dysfunctional pathway is brain GABA deficiency, leading to an imbalance between excitatory and inhibitory activity and a reduced epileptic threshold. Intravenous administration of pyridoxine (100 mg) induces cessation of clinical seizures and electrographic discharges within minutes; however, seizures may relapse, and pyridoxine may be repeated up to a total of 500 mg within 24 hours or continued at 30 mg/kg/d for seven days in case of partial response [6]. Alternatively, in patients with persistence of seizures, pyridoxal phosphate 30-40 mg/kg may be attempted [6]. Pyridoxamine 5'-phosphate oxidase (PNPO) deficiency causes PLP-dependent seizures by means of impaired conversion of dietary vitamin B6 into PLP. The first patients reported were preterm newborns with neonatal encephalopathy, early acidosis and hypoglycemia [21]. The activity of the PLP-dependent enzyme aromatic L-amino acid decarboxylase is reduced, with consequent low CSF concentrations of the dopamine and serotonin catabolites homovanillic acid and 5-hydroxyindoleacetic acid, and high L-DOPA catabolites [6]. Chronic therapy with oral PLP 30-50 mg/kg/d may induce seizure remission. Folinic acid-responsive seizures were reported in a few newborns with encephalopathy and apneas within 5 days after birth, ceasing with enteral folinic acid 3-5 mg/kg/d. These patients were later shown to have mutations in the ALDH7A1 gene; thus, the administration of adequate doses of pyridoxine has been proposed [23].
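The pyridoxine trial schedule above is simple enough to express as arithmetic; the sketch below merely restates the figures from the text (100 mg IV boluses, a 500 mg cap within 24 hours, 30 mg/kg/d maintenance) for illustration, under the assumption that boluses are uniform.

```python
# Hedged sketch: the pyridoxine trial figures quoted in the text, restated
# as arithmetic for illustration only (not a clinical dosing tool).
def remaining_iv_pyridoxine_mg(boluses_given, bolus_mg=100, cap_24h_mg=500):
    """Milligrams of IV pyridoxine still available within the 24 h cap."""
    return max(0, cap_24h_mg - boluses_given * bolus_mg)

def maintenance_dose_mg(weight_kg, per_kg_mg=30):
    """Daily maintenance dose for a partial responder, at 30 mg/kg/d."""
    return weight_kg * per_kg_mg

print(remaining_iv_pyridoxine_mg(2))  # → 300 (after two 100 mg boluses)
print(maintenance_dose_mg(3.5))       # → 105.0 (for a 3.5 kg neonate)
```

At 100 mg per bolus, the 500 mg cap therefore allows at most five boluses within 24 hours.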
Biotin-responsive disorders
Biotinidase deficiency is an autosomal recessive disorder characterized by a reduced ability to recycle biotin from both exogenous and endogenous sources, resulting in reduced or absent biotin in plasma and urine [6]. Residual biotinidase activity in serum is low or absent, and the diagnosis is confirmed by molecular analysis [6]. Symptoms may also appear during the neonatal age and include hypotonia, lethargy, respiratory abnormalities and the skin lesions that are the hallmark of the disease [24]. Patients may present with eczematoid dermatitis covering large parts of the body and/or alopecia [25]. Seizures may occur, mostly with generalized tonic-clonic or myoclonic features. Neonates or infants may benefit from biotin supplementation 5-10 mg/d; in contrast, untreated older patients may develop irreversible brain damage [2].
Patients with holocarboxylase synthetase deficiency show acute onset of lethargy, hypotonia, vomiting, hypothermia and seizures within the first days of life. Severe metabolic acidosis, ketosis and hyperammonemia may lead to misdiagnosis as one of the classical organic acidurias [26]. It is an autosomal recessive disorder due to mutations of the HCS gene on 21q22.1. Diagnosis is based on the detection of hyperammonemia, high plasma and CSF lactate, and characteristic organic acids in urine and CSF, and is confirmed by molecular analysis [6]. Treatment is based on oral biotin administration 5-10 mg/d [6].
Non-ketotic hyperglycinaemia
Non-ketotic hyperglycinaemia (NKH) is a rare IEM manifesting with severe drug-resistant epilepsy and neonatal encephalopathy [27]. The more common neonatal type is a severe glycinergic encephalopathy occurring a few hours after birth, characterized by lethargy, hypotonia, apneic attacks, hiccups and a weak Moro response, leading to deep coma without biochemical evidence of ketoacidosis [28]. EME is a striking feature of NKH, and a typical suppression-burst (SB) pattern is commonly observed on the EEG record. Response to antiepileptic drugs is poor [2]. The biochemical basis lies in defects of one of the four proteins composing the large enzyme complex of the glycine cleavage system (GCS), but there is no genotype-phenotype correlation. The mechanism underlying the SB pattern partly depends on the overflow of glutamate, or of glycine, the co-neurotransmitter for NMDA transmission, mainly in the immature brain [29]. Although glycine encephalopathy has a very severe outcome in its classical expression, it may be transient in the neonatal period, for reasons not yet identified [8]. Diagnosis is supported by the detection of high plasma glycine levels, in the absence of biochemical markers of an organic acidemia (mainly propionic and methylmalonic acidemias), together with simultaneously elevated CSF glycine [6]. Magnetic resonance imaging (MRI) has detected brain malformations such as dysgenesis of the corpus callosum and gyral abnormalities in some of these patients [5].
Treatment may take advantage of dietary glycine and serine restriction. Administration of sodium benzoate, dextromethorphan or pantothenic acid did not induce clear benefits in previous reports [6].
Creatine deficiency
Creatine is a crucial compound for energy metabolism and is carried into the brain and muscle by a specific transporter [31]. Three inherited defects in the biosynthesis and transport of creatine have been reported: guanidinoacetate methyltransferase deficiency (GAMT gene), L-arginine:glycine amidinotransferase deficiency (GATM gene) and the X-linked creatine transporter deficiency (due to SLC6A8 gene mutations) [31]. GAMT deficiency is usually associated with the worst phenotype and may become apparent within the first month of life with epileptic seizures [31]. Diagnosis needs to be supported by brain magnetic resonance spectroscopy showing a reduced or completely absent creatine peak [31]; a reduced creatine peak has been shown as early as 9 days of age [5]. Creatine supplementation may partially restore brain creatine levels and provide clinical improvement [6].
Disorders of GABA metabolism
GABA transaminase deficiency and succinic semialdehyde dehydrogenase (SSADH) deficiency are two inborn errors of GABA metabolism [6]. GABA transaminase deficiency is a very rare disease, with only a few reported cases, possibly associated with neonatal-onset seizures [32]. It is later characterized by abnormal development and epilepsy, with high levels of GABA in serum and cerebrospinal fluid [32].
Defects of Energy Metabolism
Mitochondrial disorders can be the underlying cause of neonatal refractory seizures [33]. To date, they are defined as the group of disorders due to defects of the respiratory chain complexes [34]. Epilepsy has been associated with 26-60% of all mitochondrial disorders; however, few of them show neonatal onset [33,34].
Pyruvate dehydrogenase complex
Pyruvate dehydrogenase complex (PDHc) is a multienzyme complex that catalyzes the conversion of pyruvate to acetyl-CoA, with subsequent oxidation and entry into the Krebs cycle [6]. PDHc subunits are encoded by nuclear genes and inherited in an autosomal recessive manner. Only the E1α-subunit gene is located on chromosome Xp22.3 and is most severely expressed in males. PDHc deficiency is an important cause of neonatal encephalopathy associated with lactic acidosis [6]. Clinical symptoms include severe muscular hypotonia, lethargy, poor sucking, microcephaly, facial dysmorphic signs, and tachypnea [33]. Epileptic seizures occur in about one-third of the patients. Detection of a markedly increased lactate level in blood and CSF, or in CSF alone, is mandatory for the diagnosis [6]. MRI discloses severe cortical/subcortical atrophy, dilated ventricles and brain malformations including complete corpus callosum agenesis, pachygyria or heterotopias [5].
Mitochondrial oxidative phosphorylation disorders
Oxidative phosphorylation (OXPHOS) disorders may present in the neonatal period [6,33,34]. The OXPHOS system comprises the mitochondrial respiratory chain complexes (complexes I-IV) and adenosine triphosphatase (complex V) [34]. In newborns, deficiencies of complexes I, II, and IV have been reported [34]. Features of OXPHOS disorders at and before birth include fetal hydrops, IUGR, prematurity, respiratory disturbances, poor feeding, vomiting, lactic acidosis, and lack of a symptom-free interval after birth [6,33]. Newborns with neurologic and behavioral deterioration, lethargy and seizures who present with marked lactic acidosis in blood and CSF are suspected to have OXPHOS deficiency [34]. MRI in newborns may be less indicative than in infants, in whom it usually shows Leigh syndrome features. Common findings include cerebral atrophy, white-matter abnormalities, involvement of the posterior columns in the lower brainstem, the pontine corticospinal tracts and subcortical white matter, and an HIE-like involvement of the cortex and thalami in the absence of an obstetric history of birth asphyxia [5].
Molybdenum cofactor and sulphite oxidase deficiencies
Sulphite oxidase deficiency (SOD) can present as an isolated finding or in association with molybdenum cofactor (MOCO) deficiency; both may occur during the first days of life with seizures, feeding difficulties, and vomiting [6]. The diagnosis is based on a high sulfite level in fresh urine. S-sulfocysteine and taurine concentrations are also increased. Low uric acid levels in serum and urine and increased urinary xanthine and hypoxanthine concentrations allow MOCO deficiency to be distinguished from isolated SOD [6,35]. Isolated sulfite oxidase deficiency (ISOD) may present with neonatal encephalopathy and seizures [35]. Diagnosis of ISOD may be complicated by neuroimaging findings that resemble those of HIE; it is therefore not surprising that most SOD cases are initially misdiagnosed as HIE [36]. However, unlike newborns experiencing HIE, those affected by ISOD usually do not stabilize a few weeks after delivery [36].
Peroxisomal disorders
Peroxisomal disorders with neonatal onset and seizures include Zellweger syndrome (ZS), neonatal adrenoleukodystrophy (NALD), Refsum disease and rhizomelic chondrodysplasia punctata (RCDP) [8]. ZS is an autosomal recessive disorder that can manifest at birth, reflecting the ubiquity of peroxisomes, and is characterized by multiple congenital abnormalities involving the eyes, bone, liver, kidneys and endocrine glands. Brain malformations may cause severe neonatal hypotonia and seizures [37]. The intracellular accumulation of VLCFA damages developing organs (e.g. liver, bone, kidneys) and is especially deleterious to the organizing brain, with disorganization of the physiological structure of the neocortex. The cortical abnormalities include gyral abnormalities (lissencephaly, pachygyria, polymicrogyria), generalized or focal leukoencephalopathy, and brain atrophy [5]. NALD is an autosomal recessive disease characterized by accumulation of complex lipids, including cholesterol, cholesterol esters, total phospholipids, total galactolipids, and gangliosides. Clinical onset may be at birth with deafness and blindness, severe hypotonia and failure to thrive [38]. A retinopathic "leopard spot" is a pathognomonic sign. Seizures may occur during the first weeks of life as focal or generalised, often involving the limbs or the perioral area [5]. Interictal EEGs of patients with ZS showed infrequent bilateral multifocal spikes, predominantly in the frontal and pre-frontal motor cortex. Patients with NALD had drug-resistant tonic seizures or epileptic spasms; interictal EEGs showed high-voltage slow waves and bilateral multifocal spikes [5,6].
Krabbe Disease or Globoid-Cell Leukodystrophy
Krabbe disease or globoid-cell leukodystrophy (GLD) is a neurodegenerative disorder with neonatal-infantile onset due to deficiency of the lysosomal enzyme galactocerebroside β-galactosidase, with subsequent substrate accumulation in brain and other tissues [39]. Neonatal onset of GLD is exceptional [40], but the clinical presentation is similar to the infantile form. Newborns show hypotonia, macrocephaly and, rarely, seizures [40]. At 3-6 months of age these infants show arrest of psychomotor development and a picture of generalized spasticity [39]. Neonatal MRI may be normal, although abnormalities of the lateral thalami, corona radiata, and dentate nuclei have been reported in one newborn [5]. Diagnosis is confirmed by detecting low or absent β-galactosidase levels in plasma, by measurement of galactosylceramidase activity in leukocytes or cultured skin fibroblasts, and by molecular analysis of the galactosylceramidase gene (GALC) located on 14q31 [6].
Conclusion
Seizures may represent a "red flag" for the diagnosis of an IEM in newborns. Several types of seizures and epileptic syndromes have been associated with neonatal-onset IEMs, with variable phenotypic expression. Diagnosis of an IEM in a newborn with seizures can be a challenge, in particular when the clinical picture and even the neuroimaging findings resemble those of more common causes of neonatal encephalopathy, such as HIE or sepsis, without signs and symptoms of clear metabolic decompensation. As a rule, metabolic investigations should be performed in a newborn with seizures, especially when epilepsy is not the sole neurological manifestation and seizures occur together with other extraneurological signs and symptoms. Furthermore, supplementation with specific compounds or dietary restriction may act synergistically with antiepileptic therapy.
Scalable Single-Phase Multi-Functional Inverter for Integration of Rooftop Solar-PV to Low-Voltage Ideal and Weak Utility Grid
Venkata Subrahmanya Raghavendra Varaprasad Oruganti 1,* , Venkata Sesha Samba Siva Sarma Dhanikonda 1 and Marcelo Godoy Simões 2,* 1 Department of Electrical Engineering, National Institute of Technology Warangal, Warangal 506004, India; sivasarma@gmail.com 2 Electrical Engineering Department, Colorado School of Mines, 1610, Illinois Street, Golden, CO 8400, USA * Correspondence: varaprasad.oruganti@gmail.com (V.S.R.V.O.); msimoes@mines.edu (M.G.S.); Tel.: +91-99487 97712 (V.S.R.V.O.); +1-303-384-2350 (M.G.S.)
Introduction
The number of installations of rooftop solar-PV (RTSPV) systems in low-voltage distribution systems (LVDS) is growing. Such high penetration is driven by rising consumer demand, high electricity tariffs, and worsening environmental concerns [1,2]. The availability of low-cost SPV panels and advanced power electronic converters has made RTSPV feasible and viable, reducing dependency on conventional energy resources [3,4]. Moreover, government policies encourage consumers to install RTSPV systems and become prosumers, receiving payback for the energy supplied to the grid through net metering, which reduces consumption of grid power [4,5].
While the growth of grid-connected RTSPV systems is welcome, the poor power quality (PQ) introduced by power electronic converters is a concern. The other reason for the deterioration of PQ in LVDS is power electronic-based nonlinear loads such as air conditioners, arc welding machines, compact fluorescent lamps, consumer electronics, light-emitting-diode lamps, personal computers, uninterruptible power supplies, and electric vehicle charging stations [6-9]. Poor PQ affects the performance of appliances in a single-phase LVDS: current harmonic distortion and induced reactive power cause overheating of equipment, machine vibration, blowing of capacitor fuses, imprecise metering, and malfunctioning of protection systems [9]. This influences the stability and reliability of the LVDS, impacting the national economy negatively.
A two-stage RTSPV integration is an efficient system for single-phase rooftop installations, performing both active power feeding and power conditioning at the point of common coupling (PCC) to the LVDS in line with global standards. This approach avoids the use of separate power conditioning equipment [10-15]. A two-stage RTSPV integration system consists of a DC-DC boost stage and a multi-functional inverter (MFI) stage. The literature suggests that the selection of an efficient maximum power point tracking (MPPT) technique for the DC-DC boost stage, and the design of a simple control methodology for the inverter to coordinate multi-functional operations such as grid synchronization with active power feeding, reactive power compensation, and harmonic compensation, are still major topics to be explored under both ideal and distorted grid conditions. However, the design of a simple control for a single-phase multi-functional system is not sufficiently addressed in the literature [15].
Several MPPT techniques have been reported in the literature [15-18]. The perturb and observe (P&O) technique has been adopted for maximum power extraction in MFI applications [15,16]. Incremental conductance (INC)-based MPPT is another technique for maximum power extraction. The advantages of INC-based MPPT over P&O-based MPPT are: (i) no need to compute power, (ii) better dynamic response, and (iii) low power ripple. Given the simplicity, efficiency, accuracy, and tracking capability of the INC-based MPPT technique, it is exploited for the DC-DC boost stage in this research work.
The control technique is crucial to the robustness and efficiency of an MFI. Various current control techniques are reported in the literature, each with its own merits and demerits depending on the system operating conditions [10]. Cupertino et al. proposed a proportional-integral (PI) current control method [19] to perform multi-functional operations using a grid-connected SPV inverter. However, tuning the gain values of the PI controller to follow variations in solar irradiation and consumer loads is quite complicated. The proportional-resonant (PR) controller has been proposed for better dynamics than the PI controller [20], but the PR controller is complex, and the need to select individual harmonic frequencies for compensation under distorted grid conditions is one of its limitations. Chatterjee et al. reported a model predictive controller-based PV inverter integrated with the grid [21]. Nevertheless, it requires a high sampling frequency, i.e., 200 kHz, and heavy computation of the controller cost function during operation of single-phase systems with nonlinear loads. Kim proposed sliding mode control for grid-connected SPV systems; however, the selection of the time-varying sliding surface is a complex task for multifunctional operation [22].
In contrast to the above methods, hysteresis current control (HCC) is a simple method for multi-functional operation, offering rapid current controllability, easy implementation, insensitivity to load parameter variations, an inherent maximum current limit, and good stability [15,16,23-28]. However, the major limitation of HCC is its highly variable switching frequency, which must match variations in solar irradiation and the associated compensation requirements, and which results in higher switching losses. To overcome this drawback, advanced HCC methods are described in the literature [29-31]. A variable double-band (VDB)-HCC concept is presented in [30] for single-phase full-bridge bi-directional converters; however, the selection of the optimum variable hysteresis band to attain the highest efficiency has not yet been reported.
Moreover, the double-band (DB)-HCC concept for single-phase active filtering is presented in References [29,31]. The bandwidth of the DB-HCC method proposed in References [29,31] is constant, and the compensation objective depends on the bandwidth values. Moreover, there is an offset issue under distorted grid conditions, which limits compensation under varying, highly distorted nonlinear load conditions on a weak grid. A detailed comparison of the HCC methods, in view of the performance under weak grid (PWG), active power injection (API), PQ improvement (PQI), steady-state and transient characteristics (SS & TC), inverter peak efficiency (IPE), inverter efficiency under low irradiation (IELI), average switching frequency (ASF), and implementation complexity (IC), is exhibited in Table 1.

Table 1. Comparison of HCC methods.

Method | PWG | API | PQI | SS & TC | IPE | IELI | ASF | IC
GI-based HCC [15] | NR 1 | Yes | Yes | Good | M 5 | Low | High | M 5
SB-HCC [22,23] | NR 1 | Yes | NR 1 | Good | Low | Low | High | High
NF-based HCC [24] | NR 1 | Yes | Yes | VG 3 | Low | Low | High | M 5
Grid-interactive system using HCC [25] | NR 1 | Yes | Yes | Good | M 5 | Low | High | M 5
Single-phase PQ theory based [26] | NR 1 | Yes | Yes | Good | High | Low | High | High
Modified PQ theory-based HCC [27] | NR 1 | Yes | Yes | Good | M 5 | Low | High | M 5
SOGI-based HCC [28] | NR 1 | Yes | Yes | VG 3 | High | Low | High | M 5
VDB-HCC [30] | NR 1 | Yes | NR 1 | VG 3 | High | NR 1 | M 5 | Low
DB-HCC [29,31] | NR 1 | NR 1 | Yes | VG 3 | Low | Low | - | -

1 NR = Not Reported, 2 Ex = Excellent, 3 VG = Very Good, 4 VH = Very High, 5 M = Medium.
Table 1 shows the metrics for single-phase grid-tied inverters, where the HCC method shows a limitation related to its highly variable switching frequency, leading to switching losses and hence a reduction in MFI conversion efficiency. On the other hand, the VDB-HCC-based bi-directional converter [30] and the DB-HCC-based active power filter configuration [31] have not been explored under weak-grid conditions, and their IELI is not reported. In particular, the DB-HCC [29,31] exhibits unipolar switching characteristics to reduce switching losses, but at zero modulation index it suffers from high switching frequency. Moreover, the inverter switches are triggered with imbalanced switching pulses, and the multi-functional capabilities of the inverter have not been explored using DB-HCC. The VDB-HCC method is an alternative that overcomes some limitations of the DB-HCC method; however, the selection of the optimum variable band to attain maximum efficiency has not yet been reported, and VDB-HCC has also not been considered for MFI operation. Hence, it is necessary to explore the DB-HCC approach with respect to the fast, reliable, and efficient operation of the MFI, to perform multifunctional tasks effectively, and to overcome the limitations of the VDB-HCC method for MFI operation. The selection of an appropriate MPPT for the boost converter is also important, to extract the maximum power from the RTSPV under variable environmental conditions and so support the multi-functional capabilities of the MFI. Further, it is necessary to consider the effect of distorted grid conditions while designing the control method for the MFI, as per IEEE 519-2014 [32]. To fulfill the aforementioned tasks, in this paper the authors propose a scaling factor-based multi-band (MB)-HCC with a simple switching logic employing two hysteresis bands with reduced switching frequency, thereby reducing switching losses in the inverter and increasing the inverter efficiency.

The proposed MB-HCC bandwidths are adjusted according to the current reference value using scaling factors under ideal and distorted grid conditions. In the proposed MB-HCC-based two-stage RTSPV system, the DC-DC boost converter stage is controlled with INC-based MPPT to extract the maximum power from the RTSPV, and the MFI stage is tuned with the proposed MB-HCC with scaling factors to execute the multifunctional operations, reducing grid consumption and improving the PQ of the LVDS. The proposed system configuration is modeled and simulated in a MATLAB/Simulink environment using the Sim Power Systems toolbox. The simulated results are validated in real time (RT) using an RT grid simulator.
The main contributions of this article are as follows:
• A scaling factor-based MB-HCC is proposed for the MFI to perform power injection and power conditioning operations.
• The proposed MB-HCC MFI operation is verified under both ideal and distorted grid conditions using simulation and RT experimental studies.
• The effectiveness of the proposed MB-HCC method is compared with the VDB-HCC method reported in Reference [30].
The organization of the paper is as follows: the proposed RTSPV integration system is presented in Section 2, followed by the control methodology in Section 3. The simulation and RT results are presented in Sections 4 and 5. The results discussion is given in Section 6, and, finally, conclusions are presented in Section 7.
Configuration of RTSPV Integration System for LVDS Applications
In this section, the detailed schematic of the RTSPV integration system configuration for LVDS applications is illustrated in Figure 1. The RTSPV system is connected in parallel to the LVDS at the point of common coupling (PCC). The single-phase system is modeled as an ideal AC voltage source in series with the source impedance, and it is connected to the nonlinear loads as depicted in Figure 1. The key building blocks of the RTSPV system are the two-stage power circuit and the MB-HCC architecture. The two-stage power circuit consists of an INC-based MPPT-controlled DC-DC boost converter coupled with a current-controlled voltage source inverter (VSI) acting as the MFI with RTSPV interfacing.
The nomenclature of the RTSPV integration system is illustrated in Appendix A.
RTSPV System Modeling
A single diode model is considered as the PV cell for building the PV array in simulation studies [33]. According to the single diode model, the RTSPV output current and power are expressed by the standard single-diode equations. In this work, the RTSPV array is designed to deliver a maximum power of 6 kWp at 1000 W/m2 solar irradiation and 25 °C temperature conditions.
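The single-diode relations referred to above can be written in a common textbook form. The symbols below (photocurrent I_ph, diode saturation current I_0, series and shunt resistances R_s and R_sh, ideality factor n, cell temperature T) are generic choices, not necessarily the paper's exact notation:

```latex
I_{pv} = I_{ph} - I_{0}\left[\exp\!\left(\frac{q\,(V_{pv} + I_{pv}R_{s})}{n k T}\right) - 1\right] - \frac{V_{pv} + I_{pv}R_{s}}{R_{sh}},
\qquad
P_{pv} = V_{pv}\, I_{pv}
```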
The shade-free rooftop space required for the SPV installation is taken as 732 sq. ft, as per the design procedures [5]. While building the SPV array in the MATLAB/Simulink environment, five series and four parallel SUN POWER SPR-305 WHT PV modules [34] were considered, capable of generating 6.1 kWp using the MPPT-controlled DC-DC boost converter. The Irtpv-Vrtpv and Prtpv-Vrtpv characteristics of the RTSPV array for varying solar irradiation are depicted in Figure 2.
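The quoted 6.1 kWp figure follows directly from the array layout; a one-line check, taking the 305 Wp module rating from the SPR-305 designation:

```python
# Peak power of the simulated RTSPV array: 5 series x 4 parallel
# SUN POWER SPR-305 modules (305 Wp each, per the module designation).
def array_peak_power(n_series: int, n_parallel: int, p_module_wp: float) -> float:
    """Peak array power in Wp, assuming identical modules and no mismatch losses."""
    return n_series * n_parallel * p_module_wp

p_array = array_peak_power(5, 4, 305.0)
print(p_array)  # 6100.0 Wp, i.e. the 6.1 kWp quoted in the text
```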
DC-DC Boost Converter Stage with MPPT Control
The RTSPV is connected to a DC-DC boost converter to supply a regulated DC-link voltage to the inverter. Here the DC-link voltage of the MFI is rated at 500 V. The boost-stage DC-DC converter amplifies the RTSPV array voltage to the rated DC-link voltage while simultaneously injecting the PV power. The duty cycle (D) for boost operation is calculated according to the INC MPPT algorithm to maintain the set value of the MFI DC-link voltage. The MPPT control is depicted in Figure 1. Due to the intermittent nature of solar energy and the limited shade-free rooftop area of the building, the boost stage with MPPT is essential for a single-phase system to extract the maximum power from the RTSPV.
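The INC update rule referred to above (at the MPP dP/dV = 0, i.e. dI/dV = -I/V) can be sketched as follows; the duty-step size, clamping range, and sign convention are illustrative assumptions, not the paper's exact implementation:

```python
def inc_mppt_step(v: float, i: float, v_prev: float, i_prev: float,
                  duty: float, step: float = 0.005) -> float:
    """One incremental-conductance update of the boost duty cycle.

    A minimal sketch: at the MPP, dI/dV = -I/V. Left of the MPP,
    dI/dV > -I/V, so the PV operating voltage should rise (lower duty,
    since for a boost stage V_pv ~ (1 - D) * V_dc); right of the MPP,
    the duty is raised instead. Step size and clamp are assumptions.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return duty                      # at the MPP: hold
        duty += -step if di > 0 else step
    else:
        g = di / dv                          # incremental conductance
        if g == -i / v:
            return duty                      # at the MPP: hold
        duty += -step if g > -i / v else step
    return min(max(duty, 0.0), 0.95)         # keep duty in a valid boost range
```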
Multi-Functional Inverter Design
A single-phase four-switch H-bridge VSI is considered as the MFI to execute the active power injection and power conditioning operations. The inverter is connected to the PCC through a ripple filter (LMFI) as shown in Figure 1. Here the source current (is) is sensed and compared with the reference current (isref) generated by the sine-based unit vector template approach, then processed through the proposed MB-HCC method to perform the multifunctional operation with reduced switching frequency. The system parameters used in this study are tabulated in Table 2. The detailed control configuration is presented in the next section.
Multi-Functional Inverter Control Configuration
The objective of the controller is to pump the harvested maximum power from the RTSPV to the PCC of the LVDS through the MFI while enhancing the PQ at the PCC. The active power generated by the MFI is expressed as follows:

P_MFI = P_pv + P_losses (6)

where P_losses consists mainly of the switching losses of the MFI. While converting the RTSPV DC power to AC power using the MFI, some losses are inevitable, but reducing the inverter switching losses improves the MFI efficiency. Hence, the MB-HCC switching logic is designed to reduce the switching losses, thereby improving the MFI efficiency. The MFI controller configuration has two control loops: one for the inverter DC-link voltage control, and the other for the current control.
Reference Current Generation Scheme for MFI Control and Grid Synchronization
The detailed block diagram of the RTSPV-interfaced MFI control is illustrated in Figure 1. The sensed DC-link voltage supplied from the DC-DC boost stage is processed by a low-pass filter (LPF) to reduce DC-link voltage ripples and compared with its reference DC-link voltage value. To regulate the DC-link voltage, the error voltage is processed through the proportional-integral (PI) voltage controller. The DC-link voltage error (Vdce) at the n-th sample is the difference between the reference and the filtered measured DC-link voltage, and the output of the discrete PI voltage controller processing this error is the peak value of the source current, where the kp and ki gain values are obtained by using the Ziegler-Nichols second method [35,36]. Initially, the kp gain value was set to the Ziegler-Nichols second-method table value of 0.6; the tuning procedure was then continued with the proportional-integral-derivative (PID) controller auto-tuning in the MATLAB/Simulink environment to obtain the ki value for improved performance. After this auto-tuning procedure, the ki gain value was obtained as 10. The main objective of this tuning is to attain the lowest percentage total harmonic distortion (THD) of the grid current, within the limits of the IEEE 519-2014 and IEEE 1547 standards, respectively [11,32]. The PI controller gains of the DC-link voltage loop were tuned for a low crossover frequency, i.e., between 10 Hz and 20 Hz, to attenuate the high-magnitude ripple content in the MFI DC-link voltage. The Bode plot of the tuned PI controller with the obtained gain values of the MFI DC-link voltage control loop, which regulates the DC-link voltage and reduces the steady-state error, is depicted in Figure 3.
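The discrete PI voltage loop described above can be sketched as follows, using the gains quoted in the text (kp = 0.6, ki = 10); the sampling time and the positional PI form are illustrative assumptions, not the paper's exact discrete expression:

```python
class DiscretePI:
    """Minimal discrete PI loop for the MFI DC-link voltage.

    The output is the source-current peak value i_m. Gains default to the
    values quoted in the text (kp = 0.6, ki = 10); the sampling time ts
    and the positional form are assumptions for illustration only.
    """
    def __init__(self, kp: float = 0.6, ki: float = 10.0, ts: float = 1e-4):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integral = 0.0

    def step(self, v_dc_ref: float, v_dc: float) -> float:
        err = v_dc_ref - v_dc                  # V_dce(n)
        self.integral += self.ki * err * self.ts
        return self.kp * err + self.integral   # peak source current i_m
```

For example, one step with a 500 V reference and a 490 V measured DC-link voltage yields a small positive current peak command that grows while the error persists.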
The unit vector (Us) is generated by using the grid synchronizing angle (ωt), which is obtained from a phase-locked loop (PLL) [37-39]. The PLL used in the MFI control under both ideal and distorted grid conditions is illustrated in Figure 4. The PLL parameters selected in the MATLAB/Simulink 2013 environment for both simulation and RT implementation are tabulated in Table 3. The main objective of this PLL is to obtain the synchronization angle accurately under ideal and distorted grid conditions, in order to generate the sine unit vector template (Us). The peak value of the source current (im) is multiplied by Us to generate the source current reference (isref) in phase with the source voltage at unity power factor. To purge the harmonics and reactive power under ideal and distorted grid conditions, it is necessary to force the source current to maintain a sinusoidal nature and to remain in phase with the source voltage. The sine unit vector template and reference source current under ideal and distorted grid conditions are given by Us = sin(ωt) and isref = im × Us. The reference current is compared with the actual current to initiate the switching of the inverter. The design of the MB-HCC control methodology is
discussed in the next section.
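The reference-current generation step above can be sketched in a few lines; the PLL angle is assumed to be available from the synchronization block:

```python
import math

def reference_current(i_m: float, theta: float) -> float:
    """Sine unit-vector template: i_sref = i_m * sin(theta).

    theta is the PLL synchronization angle (omega * t). Forcing the
    source current to track this reference keeps it sinusoidal and in
    phase with the source voltage (unity power factor), under either
    ideal or distorted grid conditions.
    """
    u_s = math.sin(theta)   # unit vector U_s
    return i_m * u_s

# At theta = pi/2 the reference equals the peak value i_m.
print(reference_current(8.0, math.pi / 2))   # -> 8.0
```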
Electronics 2019, 8, 302

Table 3. PLL parameters.
Filter cut-off frequency for frequency measurement (Hz): 25
Sample time: 20 µs
Automatic gain control: enabled
Current Control Methodology
In this section, the VDB-HCC and the proposed MB-HCC methods are described.
VDB-HCC
In the VDB-HCC, the switching logic is derived by considering two hysteresis bands (HBs), whose bandwidths comprise a minimum band value and a maximum band value. Here, the maximum band is modulated over one fundamental periodic cycle [30].
The switching logic of the VDB-HCC method used in Reference [30] is illustrated as given below:
VDB-HCC switching logic for MFI:
o Turn OFF S1, S3 and Turn ON S2, S4; alternatively, Turn ON S1, S3 and Turn OFF S2, S4
o Turn OFF S1 and S2
o Turn ON S3 and S4
o Turn ON S1, S3 and Turn OFF S2, S4; alternatively, Turn OFF S1, S3 and Turn ON S2, S4

Here, the HB is given by the average hysteresis band expression of Reference [30]. By considering this expression, various combinations are derived, as listed in Reference [30]. Among the various combinations of maximum and minimum hysteresis bandwidth values, hmin = 0.005 and H = 0.149226 were selected in view of the better performance indices reported in Reference [30]. The main limitation of the VDB-HCC is its high, variable switching frequency during variations in solar irradiation and under nonlinear load compensation requirements. The inverter switches turned ON and OFF for a long time while the source current touched the maximum hysteresis band, as shown in Figure 5a. Moreover, the efficiency was low for modulation index values of less than 0.7 [30]. The mathematical expression for the switching frequency (fsw) of the single-phase inverter using VDB-HCC is given in Reference [30] as Equation (11). Here, the DC-link voltage (Vdc) and ripple inductor (LMFI) values are fixed; hence, the switching frequency can be controlled by varying the hysteresis bandwidths, as represented in Equation (11). The reported bandwidth values are not optimum for the multifunctional operation of the inverter, and the optimum bandwidths of the hysteresis bands to attain high efficiency have not yet been reported. To overcome the aforementioned limitations and to achieve a low average switching frequency, an MB-HCC method is proposed in this paper and described in the next subsection.
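The band-limited tracking principle behind these hysteresis controllers can be sketched with a single fixed band in Python (a didactic toy, not the VDB-HCC of Reference [30]: the inverter and grid are reduced to a constant-slew inductor model, and Vdc, LMFI, and the band value are placeholder numbers):

```python
import numpy as np

# Placeholder plant and controller parameters (not from Reference [30]).
V_dc, L = 500.0, 5e-3            # DC-link voltage (V), ripple inductor (H)
dt, f = 1e-6, 50.0               # simulation step (s), fundamental (Hz)
HB = 0.5                         # fixed hysteresis band (A)

t = np.arange(0.0, 0.02, dt)     # one fundamental cycle
i_ref = 20.0 * np.sin(2 * np.pi * f * t)

i_s, state, toggles = 0.0, 1, 0  # inverter state: +1 / -1 (S1,S3 vs S2,S4)
errors = []
for i_k in i_ref:
    err = i_s - i_k
    if err > HB and state != -1:
        state, toggles = -1, toggles + 1   # force the current downward
    elif err < -HB and state != 1:
        state, toggles = 1, toggles + 1    # force the current upward
    i_s += state * V_dc / L * dt           # inductor current slew
    errors.append(err)

# One switching cycle = two state changes.
avg_fsw = toggles / 2.0 / (len(t) * dt)
print(round(avg_fsw / 1e3, 1))             # average switching frequency, kHz
```

Narrowing HB lowers the current ripple but raises avg_fsw, which is the trade-off Equation (11) captures; the VDB-HCC varies the band over the fundamental cycle, while the MB-HCC of the next subsection scales its bands from the reference current.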
Proposed MB-HCC
In this proposed MB-HCC method, two hysteresis bands are derived based on the scaling factors approach.This method controls the MFI switches in such a way that it can force the actual source current (i s ) to rise and fall and closely tracks reference current (i sref ) between the main and sub hysteresis bands (HB1 and HB2) as depicted in Figure 5b.The detailed flowchart of the MB-HCC algorithm is illustrated in Figure 6.The switching pulses are generated through tracking current response between dual hysteresis bands (HB1 and HB2).In contrast to the VDB-HCC method, the MB-HCC method balances the switching frequency based on the current tracking uniformly, as shown in Figure 5b.
The h 1 and h 2 are the scaling factors of the dual hysteresis bands (HB1 and HB2) of the MB-HCC.The optimum values of the scaling factors were set and verified by the RT simulation studies to obtain the reduced THD as well as the MFI switching frequency simultaneously.
The scaling factors are obtained by the following step-by-step procedure.
Step-by-step procedure for MB-HCC scaling factor selection

Step 1: Initially, the hysteresis band scaling factor (h1) range is obtained as 0.01 to 0.1 by using the generalized instantaneous switching frequency formula reported in Reference [40], to get the lowest % THD at a reduced switching frequency (fsw).
Step 2: The scaling factor (h 2 ) is considered as 10% of the h 1 to prevent the offset issues.
Step 3: MATLAB and RT simulations are performed for the range of scaling factors 0.01 to 0.1 in order to obtain the optimum scaling factors for accurate tracking of the actual source current (i s ).
Step 4: Based on the series of simulation studies with nonlinear loads, the current tracking is accurate with the scaling factor values of h1 = 0.0125 and h2 = 0.00125 (i.e., 10% of h1). Moreover, with these scaling factors in the MB-HCC method, the % THD is low and the average instantaneous switching frequency is nearly constant for different nonlinear loads under ideal and distorted grid conditions.
In the MB-HCC method, the ranges of the two hysteresis bands are adjusted based on the reference current (isref) and the scaling factors (h1 and h2), whereas in the VDB-HCC method, the hysteresis band is determined by using Equation (11). The switching logic of the MB-HCC method is simple: unlike the VDB-HCC, the hysteresis bandwidths are determined by using the reference current and the scaling factors, as illustrated in Figure 6, in order to obtain the optimum bandwidths for improved efficiency and power quality. The MFI switches are triggered by the switching pulses as per the sequence illustrated in Figure 6. These switching pulses generate a pulse width modulated (PWM) AC voltage at the MFI output side (vMFI). This voltage causes a current (iMFI) to flow through the ripple inductor (LMFI), which is injected at the PCC to reduce the grid consumption and mitigate the current harmonics and the induced reactive power.
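How the two bands could be derived from the scaling factors can be sketched as follows (a sketch under stated assumptions: the paper gives h1 = 0.0125 and h2 = 0.00125, but the exact band-scaling rule and the freewheeling action inside the sub band are read off the flowchart of Figure 6 and are assumptions here, as is the peak value im):

```python
# Scaling factors from the step-by-step procedure; scaling the bands by the
# reference peak value im is an assumption made for illustration.
h1, h2 = 0.0125, 0.00125          # h2 is 10% of h1
i_m = 20.0                        # hypothetical reference peak (A)
HB1, HB2 = h1 * i_m, h2 * i_m     # main and sub hysteresis bands (A)

def mb_hcc_state(err, state):
    """Return the leg command for tracking error err = is - isref.

    +1 forces the current up, -1 forces it down, 0 freewheels; the outer
    (main) band triggers hard switching, the inner (sub) band relaxes the
    switching (assumed reading of the Figure 6 flowchart)."""
    if err > HB1:
        return -1
    if err < -HB1:
        return +1
    if abs(err) <= HB2:
        return 0
    return state                  # between the bands: keep the last action

print(round(HB1, 4), round(HB2, 4))
```

Because the sub band is an order of magnitude narrower than the main band, the state changes are spread uniformly along the reference, which is consistent with the balanced switching described for Figure 5b.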
Simulation Study and Results
A set of simulations on the proposed MB-HCC-based RTSPV integration system configuration was carried out in the MATLAB/Simulink software environment to validate the multi-functionalities under ideal and distorted grid voltage conditions. The distorted voltage was modeled with third- and seventh-order harmonics, in accordance with the limits of the IEEE 519-2014 standard [32], i.e., a voltage THD of 8%. The nonlinear loads connected to the LVDS were modeled using a front-end diode bridge rectifier fed with RL and RC elements. The system parameters specified in Table 2 were used in the simulation studies. The performance of the RTSPV-interfaced MFI was demonstrated in four modes, under both ideal and distorted grids, classified as follows:
• Mode 1: MFI is OFF, with no power injection or power conditioning.
• Mode 2: MFI is ON, with grid sharing and power conditioning.
• Mode 3: MFI is ON, with grid feeding and power conditioning.
• Mode 4: MFI is ON, with grid sharing and power conditioning during irradiation change.
Mode 1: MFI OFF, with No Power Injection or Power Conditioning
Initially, the load behavior at PCC under MFI OFF condition was assessed by simulation studies, and the simulation results illustrate the source current harmonic distortion and reactive power effect under ideal and distorted grid conditions as depicted in Figure 7.
Ideal Grid Voltage Case
Under ideal grid conditions, the source current had a THD of 34.08%. The active and reactive power profiles at PCC under ideal grid conditions are depicted in Figure 7a. The loads draw an active power of 9.068 kW from the grid under the ideal grid condition, as shown in Figure 7a. The reactive power under the ideal grid case was 1.005 kVAR.
Distorted Grid Voltage Case
The THD of the source current under the distorted grid condition was 42.69%. The active and reactive power profiles at PCC under the distorted grid condition are depicted in Figure 7b. The active power consumed from the grid under the distorted grid condition was 9.331 kW, as described in Figure 7b. The reactive power under the distorted grid case was 0.655 kVAR. In both cases, the harmonics and reactive power were deteriorating the LVDS power quality. Therefore, it is necessary to compensate for the harmonics and reactive power to improve the operating efficiency and reliability of the LVDS under both ideal and distorted grid conditions.
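The THD percentages quoted above come from the paper's simulations; in general, such figures are obtained from the sampled current by comparing the harmonic energy against the fundamental, which can be sketched generically as follows (the test waveform below is synthetic, not the Mode 1 source current):

```python
import numpy as np

def thd_percent(x, fs, f0):
    """% THD of waveform x sampled at fs (Hz) over an integer number of
    fundamental (f0) cycles: harmonic RMS over fundamental RMS."""
    X = np.abs(np.fft.rfft(x)) / len(x)
    k = int(round(f0 * len(x) / fs))          # fundamental bin index
    harmonics = [X[n * k] for n in range(2, len(X) // k)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / X[k]

fs, f0 = 10_000.0, 50.0
t = np.arange(0.0, 0.2, 1.0 / fs)             # ten fundamental cycles
# Synthetic distorted current: 20% third and 10% fifth harmonic
i = (np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.1 * np.sin(2 * np.pi * 5 * f0 * t))
print(round(thd_percent(i, fs, f0), 2))       # → 22.36
```

The same routine applied to the simulated Mode 1 source current would return the 34.08% (ideal) and 42.69% (distorted) figures reported in the text.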
Mode 2: MFI is ON, with Grid Sharing and Power Conditioning
In this mode of operation, the MFI functionalities as an active power injector (reducing the grid consumption by sharing the RTSPV power) and as a power conditioner (compensating the harmonics and reactive power at the PCC) are exhibited under both ideal and distorted grid cases.
Ideal Grid Voltage Case
The source voltage and current responses after compensation of harmonics and reactive power under the ideal grid condition are depicted in Figure 8a, along with the MFI and load currents. From these results, it is observed that the source current harmonics were compensated and the current was in phase with the source voltage; thus, the power conditioning task was achieved effectively. Moreover, the grid consumption was minimized simultaneously. The detailed response of the RTSPV power generation at an irradiation of 1000 W/m² and the DC-link voltage response are presented in Figure 8b. These results confirm that the DC-link voltage is maintained stably at the rated value of 500 V. The load active and reactive power profiles are also illustrated in Figure 8b. The grid sharing response is depicted in Figure 8c; from this, it is observed that the source power consumption was reduced owing to the active power fed from the MFI. The MFI also supplied the necessary reactive power to reduce the source-side reactive power close to zero, as shown in Figure 8c.
Distorted Grid Voltage Case
In Figure 8d the source voltage and current responses after compensation of harmonics and reactive power, under the distorted grid voltage condition, are represented along with MFI and load currents.Here, the source current was transformed to sinusoidal and in-phase with the source voltage, irrespective of the load current nature, with the help of the MFI current.The inverter DC-link and RTSPV generation at an irradiation level of 1000 W/m 2 is depicted in Figure 8e, where the DC-link was stable at 500 V, and the RTSPV power was 6 kW.In addition to that, the load active and reactive power profiles under distorted grid conditions are also illustrated in Figure 8e.The source reactive power was also reduced close to zero by injecting the required reactive power, as shown in Figure 8f.
Based on the listed active and reactive power summary of the source, load, and MFI under both ideal and distorted grid voltage conditions in Table 4, the MFI performed the grid sharing and power conditioning tasks effectively.
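The grid-sharing numbers in this mode follow from simple power bookkeeping at the PCC: the source supplies only what the load demands beyond the MFI injection. A sketch using the figures quoted in the text for the ideal grid case (load 9.068 kW and 1.005 kVAR, RTSPV/MFI injection 6 kW; the full Table 4 entries are not reproduced here):

```python
# Active power balance at the PCC (kW), consumption taken as positive.
p_load = 9.068              # load demand under the ideal grid (from the text)
p_mfi = 6.0                 # RTSPV power injected by the MFI at 1000 W/m^2
p_source = p_load - p_mfi   # remaining grid consumption

# Reactive power (kVAR): the MFI supplies the full load demand, driving
# the source reactive power close to zero.
q_load = 1.005
q_mfi = q_load
q_source = q_load - q_mfi

print(round(p_source, 3), q_source)  # → 3.068 0.0
```

In grid feeding mode (Mode 3), p_load drops below p_mfi and p_source goes negative, which corresponds to the out-of-phase source current seen in Figure 9a.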
Mode 3: MFI is ON, with Grid Feeding and Power Conditioning
In this mode, the grid feeding and power conditioning operations of the MFI during the reduced load condition under both ideal and distorted grid situations are demonstrated.
Ideal Grid Voltage Case
The source voltage and current responses under the grid feeding and power conditioning mode are illustrated in Figure 9a. Here, the source current is out of phase with the source voltage, which means that the surplus current, after serving the local loads at the PCC, is fed to the grid; the reduced load current due to the load reduction is also illustrated in Figure 9a. The MFI was thus serving the local load while simultaneously pumping the excess power to the grid. The inverter DC-link voltage response and the active and reactive power profiles of the loads are presented in Figure 9b. The active and reactive power supplied by the MFI is depicted in Figure 9c.
Distorted Grid Voltage Case
The simulation studies verify the MFI performance under distorted grid voltage condition.Here, Figure 9d illustrates the source voltage and source current responses after compensation of harmonics and reactive power along with MFI current and load current.In this condition, the source current was out of phase with the source voltage, similar to the ideal grid case, which enumerates the grid feeding of surplus MFI current in sinusoidal form, irrespective of source voltage disturbance.
The inverter DC-link voltage response and the partial load active and reactive power profiles under the distorted grid condition are presented in Figure 9e.The active and reactive power profiles of the source and MFI are illustrated in Figure 9f.
Given the listed active and reactive power summaries of the source, MFI, and load under both ideal and distorted grid conditions in Table 5, the MFI executed the grid feeding and power conditioning tasks successfully.

Mode 4: MFI is ON, with Grid Sharing and Power Conditioning during Irradiation Change

Ideal Grid Voltage Case

In this mode of operation, the MFI performance during the irradiation change under ideal and distorted grid conditions is illustrated in Figure 10a,b.
In the source current response depicted in Figure 10a, there was a rise in amplitude during the irradiation change from 1000 W/m² to 500 W/m² at time t = 1.6 s because of the reduction in RTSPV power; this means the grid consumption increased. However, the source current harmonics were compensated for successfully. The inverter current injected at the PCC and the load current responses are also illustrated in Figure 10a. The DC-link voltage and PV power with respect to the variation in solar irradiation at t = 1.6 s are described in Figure 10b. Even though the solar irradiation decreased, the MFI could supply the active power corresponding to that irradiation as per the INC-based MPPT characteristics. Thereby, the MFI performed the grid sharing partially and the power conditioning without interruption.
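The INC-based MPPT invoked here perturbs the operating voltage according to the incremental-conductance test (dI/dV compared with −I/V; the two are equal at the MPP). A minimal sketch on a hypothetical P–V curve (the curve shape and step size are illustrative, not the array of Table 2):

```python
def inc_mppt_step(v, i, v_prev, i_prev, step=1.0):
    """One incremental-conductance iteration: return the next voltage command."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Voltage unchanged: react to the current change alone.
        return v + (step if di > 0 else -step if di < 0 else 0.0)
    g_inc, g = di / dv, -i / v       # incremental vs. negative conductance
    if g_inc == g:
        return v                     # at the MPP (dP/dV = 0): hold
    return v + (step if g_inc > g else -step)

# Hypothetical PV curve: I = Isc * (1 - (V/Voc)^8); MPP near 0.76 * Voc.
Isc, Voc = 10.0, 100.0
pv_current = lambda v: Isc * max(0.0, 1.0 - (v / Voc) ** 8)

v_prev, v = 49.0, 50.0
i_prev, i = pv_current(v_prev), pv_current(v)
for _ in range(200):
    v_next = inc_mppt_step(v, i, v_prev, i_prev)
    v_prev, i_prev = v, i
    v, i = v_next, pv_current(v_next)
print(round(v))  # oscillates near the maximum-power voltage
```

Halving Isc (the 500 W/m² case) shifts the curve down and the same loop settles at the new MPP, which is what lets the MFI keep injecting the irradiation-matched power without interrupting the power conditioning.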
Distorted Grid Voltage Case
The dynamic response of the system during the solar irradiation change under the distorted grid condition is presented in Figure 11a,b. In this case, the current consumption from the grid increased due to the reduction in PV power caused by the irradiation change, as shown in Figure 11a. However, the MFI executed the power conditioning effectively and was capable of extracting the maximum power with respect to the reduced irradiation as per the INC-based MPPT. Thus, the MFI successfully executed the multi-functional tasks simultaneously without interruption.
Real-Time Experimental Validation
In this section, the RT software in loop (SIL) testing of the proposed RTSPV MFI system interfaced to the LVDS is presented.
The proposed RTSPV integration system with the LVDS loads was modeled in the RT-LAB environment and tested in RT using the OP4500 RT grid simulator [41,42]. The main purpose of validating in RT is to understand the proposed MFI system behavior for real-world implementation. The OP4500 is one of the commercially available RT power grid simulators. The detailed architecture and specifications are illustrated in Reference [42]. The laboratory test setup for RT validation is shown in Figure 12.
SIL RT Test Results of Mode 1: MFI OFF
The SIL RT test results of the LVDS during the MFI OFF condition are depicted in Figure 13.These responses are identical to the simulation results depicted in Figure 7, and showcase the harmonic and reactive power effects at the PCC of the LVDS under both ideal and distorted grid conditions.Hence, it is essential to improve the LVDS power quality for efficient and reliable operation.
SIL RT Test Results of Mode 2: MFI ON, with Grid Sharing and Power Conditioning

Ideal Grid Voltage Case
The SIL RT results of the MFI grid sharing and power conditioning mode are presented in Figure 14a–c. The RT results authenticate the simulated results of the MFI grid sharing and power conditioning operations presented in Figure 8. Here, the source current harmonics were compensated, and the source voltage and source current were in phase. The compensated current injected at the PCC by the MFI to make the source current sinusoidal is presented in Figure 14a. However, the load draws a nonlinear current, as shown in Figure 14a. The regulated DC-link voltage of 500 V and the RTSPV maximum power of 6 kW extracted at the solar irradiation of 1000 W/m² are illustrated in Figure 14b. In Figure 14c, the active and reactive power profiles of the source and MFI are depicted.
This enumerates the reduction of the grid consumption from 9 kW to 3 kW. Furthermore, the source reactive power was reduced to zero by the MFI, as described in the simulated results. The simulated DC-link voltage controller had a high bandwidth and responded instantaneously, including to the high-frequency current harmonics flowing through the capacitor, observed as rapid variations in the DC-link voltage, as depicted in Figure 8. However, in the RT-implemented system, the DC-link voltage was discretized, the overall bandwidth decreased, and there was also moving-average filtering of the analog-to-digital converter (ADC) feedback signals. Therefore, the closed-loop response of the DC-link voltage control loop was not as rapid, and the capacitor voltage had fewer fluctuations and less ripple when compared to the simulated case study, as observed in Figure 14. Such a smoother capacitor voltage in RT control is, indeed, a very desirable feature.
Distorted Grid Voltage Case
The RT results of the MFI grid sharing and power conditioning operation under the distorted grid condition are presented in Figure 14d–f. These RT results are identical to the simulated results of the MFI grid sharing and power conditioning mode. In this case, the source current was sinusoidal, and it was in phase with the source voltage. However, the load behavior was nonlinear, as illustrated in Figure 14d. The MFI current injected at the PCC to compensate for the harmonic distortion is described in Figure 14d. The DC-link voltage regulated at 500 V and the RTSPV maximum power of 6 kW extracted at the solar irradiation of 1000 W/m² under the distorted grid are illustrated in Figure 14e, which are similar to the simulated results. In Figure 14f, the active and reactive power profiles of the source and MFI obtained by the RT simulation are illustrated. These profiles enumerate the reduction of the grid consumption from 9 kW to 3 kW, as discussed in the simulated results. Furthermore, the reactive power effect was compensated for successfully.

SIL RT Test Results of Mode 3: MFI ON, with Grid Feeding and Power Conditioning

Ideal Grid Voltage Case

The MFI grid feeding and power conditioning responses under the ideal grid case, when the load consumption was reduced, are described in Figure 15a–c. In this case, the MFI was feeding the load as well as the grid, simultaneously. Moreover, it was also taking care of the harmonic and reactive power compensation. The DC-link voltage regulated at 500 V and the reduced load active and reactive power responses are presented in Figure 15b. The active and reactive power profiles under the grid feeding mode are depicted in Figure 15c. Here, the negative response of the active power represents the grid feeding operation; simultaneously, the reactive power at the source side was reduced to zero. The RT results are identical to the simulated results.
Distorted Grid Voltage Case
The MFI grid feeding and power conditioning responses under the distorted grid case, when the load consumption was reduced, are described in Figure 15d–f. In this case, the MFI was feeding the load as well as the grid, simultaneously. Moreover, it was also taking care of the harmonic and reactive power compensation. The DC-link voltage regulated at 500 V and the reduced load active and reactive power responses are presented in Figure 15e. The active and reactive power profiles under the grid feeding mode are depicted in Figure 15f. Here, the negative response of the active power represents the grid feeding operation. Moreover, the reactive power at the source side was simultaneously reduced to zero. The RT results are identical to the simulated results.

SIL RT Test Results of Mode 4: MFI ON, with Grid Sharing and Power Conditioning during Irradiation Change

The pictorial representations of the dynamic source current variation during the irradiation change under both the ideal and distorted grid conditions are presented in Figure 16a–d. Here, the source current magnitude increased because of the drop in active power delivery from the MFI due to the irradiation change from 1000 W/m² to 500 W/m². The responses in Figure 16 are identical to the simulation responses in Figures 10 and 11. Hence, it was also confirmed in RT that the proposed RTSPV-interfaced MFI performs the power conditioning and power injection effectively during the irradiation change, for both ideal and distorted supply conditions.
Active and Reactive Power Exchange
The summary of the active and reactive power profiles of the source, MFI, and loads obtained from the RT SIL test are tabulated in Table 6.
Discussion
In this section, the detailed summary of the SIL RT results of the proposed RTSPV integration system and its comparison with the VDB-HCC method presented in Reference [30] are discussed.
Switching Frequency
The instantaneous switching frequency values of the VDB-HCC and the proposed MB-HCC are calculated concerning the switching frequency formula given in Reference [40].The instantaneous average switching frequency of the MFI using MB-HCC was maintained at 10 kHz, whereas the VDB-HCC [30] had 16 kHz under nonlinear load conditions.The precise tracking of the current in between the two bands (HB1 and HB2) and the corresponding switching logic of the MFI resulted in a reduction of MFI switching frequency.
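The exact switching-frequency formula of Reference [40] is not reproduced here; as an illustration only, the following Python sketch estimates the average switching frequency of a binary gate sequence by counting transitions (two transitions per switching cycle), using a hypothetical gate pattern rather than the actual MFI gate signals:

```python
def avg_switching_frequency(gate, dt):
    """Average switching frequency (Hz) of a binary gate sequence sampled
    every dt seconds; one ON/OFF transition pair makes one switching cycle."""
    transitions = sum(1 for a, b in zip(gate, gate[1:]) if a != b)
    duration = (len(gate) - 1) * dt
    return transitions / (2.0 * duration)

# Hypothetical gate toggling at every 50 us sample -> 10 kHz switching.
gate = [k % 2 for k in range(20001)]
f_sw = avg_switching_frequency(gate, 50e-6)
```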
MFI Efficiency
The MFI efficiency concerning the solar irradiation under both full load and reduced load conditions using VDB-HCC [30] is presented in Figure 17. The MFI had a peak efficiency of 98.3% and an average efficiency of 97.18% under the ideal grid condition at full load. In the distorted grid case, the MFI had a peak efficiency of 95.9% and an average efficiency of 93.82% at the full load condition. In the reduced load case, the MFI had a peak efficiency of 98.1% and an average efficiency of 96.9% under the ideal grid voltage, whereas in the distorted grid condition the peak efficiency was 96.9% and the average efficiency was 95.62%.
The MFI efficiency variation concerning the solar irradiation under both the full load and reduced load conditions using MB-HCC is presented in Figure 18. The peak efficiency of the MFI was 99.01%, and the average MFI efficiency under the ideal grid voltage was 98.77% at the fully loaded condition, whereas in the distorted grid voltage case the peak efficiency was 98.5% and the average efficiency was 97.56%.
During the reduced load case, the peak efficiency of the MFI was 98.9%, and the average MFI efficiency under the ideal grid voltage was 98.55%. In the distorted grid voltage condition, the peak efficiency was 98.5%, and the average efficiency was 97.57%. This means that the MFI exhibited better efficiency using MB-HCC under both ideal and distorted grid voltage conditions when compared to the VDB-HCC method. The MFI efficiency was also reasonable under the lower irradiation case when compared to the VDB-HCC method. However, in the proposed MB-HCC method, the MFI efficiency under the lower irradiation condition (i.e., <500 W/m 2 ) in the distorted grid case was slightly lower than the peak efficiency of the MFI at irradiation >500 W/m 2 , but the power conditioning task was performed effectively as per the requirements of IEEE 519-2014 and IEEE 1547. This was because, under the lower irradiation case, the MB-HCC based MFI was compensating for a higher percentage THD than the nonlinear load percentage THD of the ideal voltage case.
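The peak and average efficiency figures quoted here are ratios of inverter output power to PV input power over the irradiation sweep. A minimal Python sketch, using hypothetical power samples rather than the measured data of Figures 17 and 18:

```python
def efficiency(p_out_w, p_in_w):
    """Conversion efficiency in percent."""
    return 100.0 * p_out_w / p_in_w

# Hypothetical (input PV power, inverter output power) samples over an
# irradiation sweep, illustrating peak vs. average efficiency figures.
samples = [(1000.0, 990.1), (800.0, 789.0), (600.0, 590.0), (400.0, 391.0)]
effs = [efficiency(po, pi) for pi, po in samples]
peak_eff = max(effs)
avg_eff = sum(effs) / len(effs)
```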
Percentage THD at the Point of Common Coupling
The %THD at the PCC during the no compensation and MFI compensation cases under ideal and distorted grid voltage conditions using the VDB-HCC method [30] is illustrated in Figure 19. The %THD was brought down from 34.08% to 3.56% under the ideal voltage condition at full load. For the reduced load case, it was reduced from 53.79% to 3.19%. In the distorted grid voltage condition, the %THD was reduced from 42.69% to 2.64% in the full load case. In the reduced load case, the %THD was minimized from 64.5% to 3.4%. The %THD comparison at the PCC during the no compensation and MFI compensation cases using the MB-HCC method under ideal and distorted grid conditions is depicted in Figure 20. The %THD at full load was brought down from 34.08% to 2.34% under the ideal grid condition, whereas in the half load case it was reduced from 53.79% to 2.98%. In the distorted grid condition, the %THD was reduced from 42.69% to 2.04% in the full load case. In the reduced load case, the %THD was minimized from 64.5% to 2.77%. The %THD results using VDB-HCC [30] and the proposed MB-HCC method comply with the IEEE 519-2014 and IEEE 1547 standards.
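%THD figures of this kind can be reproduced from sampled waveforms by Fourier correlation. The sketch below is a generic illustration in plain Python, applied to a synthetic current with 20% fifth and 10% seventh harmonics (an assumed waveform, not the measured load current):

```python
import math

def percent_thd(samples, f0, fs, n_harmonics=19):
    """%THD of a waveform sampled at fs over an integer number of
    fundamental (f0) periods, via single-frequency Fourier correlation."""
    n = len(samples)

    def amplitude(h):
        # Correlate with the h-th harmonic of the fundamental f0.
        c = sum(s * math.cos(2 * math.pi * h * f0 * k / fs)
                for k, s in enumerate(samples))
        sn = sum(s * math.sin(2 * math.pi * h * f0 * k / fs)
                 for k, s in enumerate(samples))
        return 2.0 * math.hypot(c, sn) / n

    v1 = amplitude(1)
    distortion = math.sqrt(sum(amplitude(h) ** 2
                               for h in range(2, n_harmonics + 1)))
    return 100.0 * distortion / v1

fs, f0, n = 10000, 50, 1000          # 0.1 s window = 5 fundamental periods
w = [2 * math.pi * f0 * k / fs for k in range(n)]
# Synthetic distorted current: 20% fifth and 10% seventh harmonic,
# so the exact %THD is sqrt(0.2**2 + 0.1**2)*100 ~ 22.36%.
i_s = [math.sin(x) + 0.2 * math.sin(5 * x) + 0.1 * math.sin(7 * x) for x in w]
thd = percent_thd(i_s, f0, fs)
```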
The TPF was improved from 0.9407 to 0.9997 at full load under the ideal grid voltage, whereas in the half load case it was improved from 0.8379 to 0.9995. The TPF under the distorted grid condition was enhanced from 0.9174 to 0.9997 in the full load case and from 0.816 to 0.9996 in the reduced load case. Given the TPF results of the VDB-HCC and MB-HCC methods, both maintain the TPF close to unity. From the summary of the results presented in Table 7, it is justified that the MFI efficiently performs the power feeding at the PCC as per the solar irradiation and compensates the source harmonics and reactive power with a reduced switching frequency as per the IEEE 519-2014 and IEEE 1547 standards. These results exhibit the scalability and feasibility of the proposed system in the LVDS under ideal and distorted grid conditions.
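The true power factor is the ratio of active power to apparent power, so unlike the displacement power factor it also reflects harmonic distortion of the current. A small Python sketch with a synthetic voltage/current pair (assumed values, not the test data):

```python
import math

def true_power_factor(v, i):
    """TPF = P / (Vrms * Irms), computed from sampled waveforms."""
    n = len(v)
    p = sum(a * b for a, b in zip(v, i)) / n
    vrms = math.sqrt(sum(a * a for a in v) / n)
    irms = math.sqrt(sum(b * b for b in i) / n)
    return p / (vrms * irms)

fs, f0, n = 10000, 50, 1000
w = [2 * math.pi * f0 * k / fs for k in range(n)]
v = [math.sin(x) for x in w]                          # sinusoidal grid voltage
i = [math.sin(x) + 0.3 * math.sin(5 * x) for x in w]  # distorted current
tpf = true_power_factor(v, i)   # analytically 1/sqrt(1.09), about 0.958
```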
Conclusions
This research article has demonstrated a two-stage RTSPV integration system using a DC-DC boost converter with INC-based MPPT and a single-phase MFI with the proposed scaling factor-based MB-HCC method. The grid sharing and grid feeding capabilities of the RTSPV-interfaced MFI were investigated by MATLAB/Simulink simulation and validated with OPAL-RT results. The proposed MB-HCC triggers the insulated-gate bipolar transistor (IGBT) switches of the MFI by tracking the current response between the inner and outer hysteresis bands, leading to a reduced switching frequency which, in turn, reduces the switching losses. The RT results enumerated the potential of the proposed system regarding efficiency and improved PQ capabilities as per the IEEE 519-2014 and IEEE 1547 standards. During the variation of solar irradiation, the proposed MFI had a peak efficiency of 99.01% and an average output efficiency of 98.77% under the ideal grid, and in the distorted grid case the peak efficiency was 98.5% and the average efficiency was 97.31%. Moreover, the percentage of total harmonic distortion under ideal and distorted grid conditions was brought down to below 5%, and the reactive power compensation was effective, resulting in unity power factor operation. The MFI efficiency under the full load condition was elevated by the proposed MB-HCC over the VDB-HCC by 1% under the ideal grid condition and 4% in the distorted grid condition. In the reduced load condition, the MFI efficiency was raised by the proposed MB-HCC over the VDB-HCC by 2% under both ideal and distorted grid conditions. MB-HCC is scalable for variations in the solar irradiation and is more efficient than the VDB-HCC. The average percentage THD was reduced to 2.75%, and the source side power factor was close to unity. The simulation and RT results substantiate the hypothesis of the higher efficiency and scalability of the MB-HCC-controlled MFI for a single-phase LVDS. The proposed two-stage MFI with MB-HCC is a unified solution to reduce grid consumption and improve the PQ of the LVDS.
Figure 4. Single-phase phase-locked loop (PLL) used for both the ideal and distorted grid voltage cases.
Figure 5. Comparison of the variable double-band (VDB)-HCC method and the proposed MB-HCC method current tracking; (a) VDB-HCC current tracking; (b) MB-HCC current tracking.
Figure 7. Simulated waveforms under MFI OFF mode; (a) ideal grid source voltage, current, active, and reactive power; (b) distorted grid source voltage, current, active, and reactive power.
Figure 9. Simulated output waveforms under MFI ON with grid feeding and power conditioning mode; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, and load active and reactive power under the ideal grid; (c) source active power, reactive power, and MFI active and reactive power under the ideal grid; (d) distorted grid input voltage, current, MFI current, and load current; (e) MFI DC-link voltage, and load active and reactive power under the distorted grid; (f) source active power, reactive power, and MFI active and reactive power under the distorted grid.
Figure 10. Dynamic response at the PCC during grid sharing and power conditioning; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, and PV power under the ideal grid.
Figure 11. Dynamic response at the PCC during grid sharing and power conditioning; (a) distorted grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, and PV power under the distorted grid.
Figure 12. Laboratory test setup of the real-time (RT) software-in-loop (SIL) validation.
Figure 13. RT SIL test waveforms under MFI OFF mode under (a) the ideal grid and (b) the distorted grid.
Figure 14. RT SIL test output waveforms under MFI ON, with grid sharing and power conditioning mode; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, and PV power under the ideal grid; (c) source active power, reactive power, and MFI active and reactive power under the ideal grid; (d) distorted grid input voltage, current, MFI current, and load current; (e) MFI DC-link voltage, solar irradiation, and PV power under the distorted grid; (f) source active power, reactive power, and MFI active and reactive power under the distorted grid.
5.3. SIL RT Test Results of Mode 3: MFI is ON, with Grid Feeding and Power Conditioning
5.3.1. Ideal Grid Voltage Case
Figure 15. RT SIL test output waveforms under MFI ON, with grid feeding and power conditioning mode; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, and PV power under the ideal grid; (c) source active power, reactive power, and MFI active and reactive power under the ideal grid; (d) distorted grid input voltage, current, MFI current, and load current; (e) MFI DC-link voltage, solar irradiation, and PV power under the distorted grid; (f) source active power, reactive power, and MFI active and reactive power under the distorted grid.
5.4. SIL RT Test Results of Mode 4: MFI is ON, with Grid Sharing and Power Conditioning during Irradiation Change
Figure 16. RT SIL test dynamic response at the PCC under grid sharing and power conditioning mode during the solar irradiation change; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, and PV power under the ideal grid; (c) distorted grid input voltage, current, MFI current, and load current; (d) MFI DC-link voltage, solar irradiation, and PV power under the distorted grid.
Figure 22. True power factor at the PCC using MB-HCC; (a) ideal grid condition; (b) distorted grid condition.
Table 1. Comparison of previous hysteresis current control (HCC)-based single-phase grid-tied inverter literature.
PID gains (k p, k i, k d): k p = 180, k i = 3200, k d = 1; time constant for derivative action (s).
Table 4. Active and reactive power summary of the source, MFI, and load under grid sharing mode.
Simulated output waveforms under MFI ON, grid sharing, and power conditioning mode; (a) ideal grid input voltage, current, MFI current, and load current; (b) MFI DC-link voltage, solar irradiation, PV power, and load active and reactive power under the ideal grid; (c) source active power, reactive power, and MFI active and reactive power under the ideal grid; (d) distorted grid input voltage, current, MFI current, and load current; (e) MFI DC-link voltage, solar irradiation, PV power, and load active and reactive power under the distorted grid; (f) source active power, reactive power, and MFI active and reactive power under the distorted grid.
Table 5. Active and reactive power summary of the source, MFI, and load under grid feeding mode.
Table 6. Active and reactive power summary under ideal and distorted grid conditions under Ir = 1000 W/m 2.
"year": 2019,
"sha1": "8a8a1eda2afd3c90049b82756e8256bf923d8443",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/8/3/302/pdf?version=1552555144",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8a8a1eda2afd3c90049b82756e8256bf923d8443",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
202541683 | pes2o/s2orc | v3-fos-license | Fuller singularities for generic control-affine systems with an even number of controls
In this article we study how bad the singularities of a time-optimal trajectory of a generic control-affine system can be. In the case where the control is scalar and belongs to a closed interval, it was recently shown in [6] that singularities cannot, generically, be worse than finite-order accumulations of Fuller points, with order of accumulation lower than a bound depending only on the dimension of the manifold where the system is set. We extend here such a result to the case where the control has an even number of scalar components and belongs to a closed ball.
Introduction
1.1. Time-optimal trajectories of control-affine systems. Let M be a smooth and connected n-dimensional manifold. Given k + 1 smooth vector fields f_0, . . . , f_k on M, we study control systems of the form

q̇ = f_0(q) + Σ_{i=1}^k u_i f_i(q), u = (u_1, . . . , u_k) ∈ B̄^k_1, (1.1)

where B^k_1 = {u ∈ R^k | ‖u‖ < 1} is the (open) unit ball in R^k, and B̄^k_1 denotes its closure. Systems of the form (1.1) are called control-affine systems, and the geometric aspects of their evolution have attracted a lot of interest in the mathematical control community (see e.g. [4,10,17]).
An admissible trajectory of (1.1) is a Lipschitz continuous curve q : [0, T] → M, T > 0, for which there exists u ∈ L^∞([0, T], B̄^k_1) such that

q̇(t) = f_0(q(t)) + Σ_{i=1}^k u_i(t) f_i(q(t))

holds almost everywhere on [0, T]. Definition 1. The time-optimal control problem associated with (1.1) consists in finding the admissible trajectories q : [0, T] → M of the system that minimize the time needed to join q(0) and q(T), among all the admissible curves. Admissible trajectories that solve the time-optimal control problem associated with (1.1) are called time-optimal trajectories.
Candidate time-optimal trajectories are characterized by the Pontryagin maximum principle [20] (PMP, in short). Every admissible time-optimal trajectory can be lifted to a Lipschitz continuous trajectory λ : [0, T] → T*M of an associated time-dependent Hamiltonian system (see Section 2.1 for details). Moreover, λ(t) ≠ 0 for every t ∈ [0, T], and for almost every t ∈ [0, T] the triple (q(t), λ(t), u(t)) satisfies the maximality condition

⟨λ(t), f_0(q(t)) + Σ_{i=1}^k u_i(t) f_i(q(t))⟩ = max_{v ∈ B̄^k_1} ⟨λ(t), f_0(q(t)) + Σ_{i=1}^k v_i f_i(q(t))⟩. (1.2)

The triple (q(·), λ(·), u(·)) is said to be an extremal triple, and the PMP reduces the study of time-optimal trajectories to the study of extremal triples. We call extremal trajectory any admissible trajectory which is part of an extremal triple, so that any time-optimal trajectory is an extremal trajectory, but the converse does not hold in general.
1.2. Regularity of extremal trajectories.
Our goal is to establish regularity results for time-optimal trajectories of control-affine systems. Our methods, however, apply to the broader class of extremal ones. Given an extremal triple (q(·), λ(·), u(·)), the control u can be smoothly reconstructed from the maximality condition (1.2) whenever λ(t) is not simultaneously orthogonal to f_1(q(t)), . . . , f_k(q(t)). However, smoothness may stop at times where λ(t) annihilates f_1(q(t)), . . . , f_k(q(t)) and, actually, for any given measurable control t ↦ u(t), there exist a dynamical system of the form (1.1) and an initial datum q_0 ∈ M for which the admissible trajectory driven by u and starting at q_0 is time-optimal. This has been noticed in [24] for the single-input case, i.e., when k = 1, but can be easily extended to the general case. It makes anyhow sense to investigate regularity properties of extremal trajectories for generic systems or, more generally, for systems satisfying low-codimension non-degeneracy conditions. The single-input case, in particular, gave rise to a vast literature (see, e.g., [5,7,8,21,22,23,25] and the references therein).
Recently, the same questions about the regularity of time-optimal trajectories have been posed in the multi-dimensional input case, but only few results are available [3,11,12,14,19,26].

Definition 2. Given an extremal triple (q(·), λ(·), u(·)) defined on [0, T], let O_q be the maximal open subset of [0, T] on which there exists a representative of the control u(·), associated with q(·), which is smooth on O_q. We also define Σ_q (or Σ, if no ambiguity is possible) as Σ_q = [0, T] \ O_q.

An isolated point of Σ is usually called a switching time. The accumulation of switching times is referred to in the literature as Fuller phenomenon (after the pathbreaking work [15]), or also chattering or Zeno behavior.
Definition 3 (Fuller times). Let us define Σ_0 to be the set of isolated points of Σ. Inductively, we set Σ_j to be the set of isolated points of Σ \ (⋃_{i=0}^{j−1} Σ_i). A time t ∈ Σ_j is said to be a Fuller time of order j. Finally, we declare points of Σ \ ⋃_{j∈N} Σ_j to be Fuller times of infinite order.
Remark 4. For every j ∈ N, the set Σ j consists of isolated points only, hence it is countable.
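To make the nesting in Definition 3 concrete, the toy Python sketch below computes the maximal Fuller order of a switching set encoded symbolically: a number stands for an isolated switching time (order 0), while a sublist stands for a time at which the listed lower-order times accumulate. This encoding only illustrates the recursive structure; it is not the topological definition above:

```python
def fuller_order(sigma):
    """Maximal Fuller order of a symbolically encoded switching set:
    a number is an isolated switching time (order 0); a sublist is a
    time at which the listed lower-order times accumulate."""
    orders = [1 + fuller_order(t) if isinstance(t, list) else 0 for t in sigma]
    return max(orders, default=0)

assert fuller_order([0.1, 0.4, 0.9]) == 0          # switching times only
assert fuller_order([0.1, [0.5, 0.7, 0.8]]) == 1   # one Fuller point of order 1
assert fuller_order([[0.2, [0.5, 0.6]], 0.9]) == 2 # nesting of depth 2
```

Theorem 6 below asserts that, generically, this nesting depth is bounded by a constant K depending only on n.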
We measure the worst stable behavior of "generic" systems of the form (1.1) in terms of the maximal order of their Fuller times. The more an instant t is nested among Fuller times of high order, the greater is the number of relations satisfied by the vectors f 0 (q(t)), . . . , f k (q(t)).
Transversality theory is then used to guarantee that generically not too many of these conditions can hold at the same point. As opposed to the analysis in [6], we restrict ourselves to the case of global frames of everywhere linearly independent vector fields, and the word generic must be intended with respect to this property.
Definition 5.
For every open set U ⊂ M, we denote by Vec(U) the set of smooth vector fields on U, endowed with the C^∞-Whitney topology.
The next statement contains the precise formulation of our main result, which is obtained under the condition k = 2m, that is, assuming that the number of controlled vector fields is even.
Theorem 6. Let m, n ∈ N be such that 2m + 1 ≤ n. Let M be an n-dimensional smooth manifold. There exist a positive integer K, depending only on n, and an open and dense subset V of the set of tuples f = (f_0, . . . , f_2m) ∈ Vec(M)^{2m+1} of everywhere linearly independent vector fields such that, for every f ∈ V, every extremal trajectory q(·) of the associated system (1.1) has at most Fuller times of order K, i.e.,

Σ = ⋃_{j=0}^K Σ_j,

where Σ and Σ_j are as in Definitions 2 and 3.
Combining Theorem 6 and Remark 4, we deduce that any extremal trajectory q(·) of a generic control-affine system of the form (1.1) with k = 2m is smooth outside a countable set.
1.3. Remarks on the main result and open problems. We conclude this introduction by proposing two lines of investigation related to our study. The first one consists in extending our analysis to the case of linearly dependent frames, as the first and the third author have done in [6, §4.1] for the single-input case. Even though we expect that similar arguments work also in the multi-input case, the differential structure of the singular locus where the fields f_0, . . . , f_2m become dependent is more complicated, and needs to be properly investigated.
A different, and possibly more substantial, line of research consists in establishing Theorem 6 for systems of the form (1.1) with an odd number (greater than one) of controls. The fact that an extremal triple (q(·), λ(·), u(·)) crosses the singular locus {λ ∈ T*M | ⟨λ, f_i(q)⟩ = 0, i = 1, . . . , 2m, q = π(λ)} imposes in the even case a differential condition that we can exploit to begin our iterative arguments (Proposition 20). This condition is based on the results in [3], where the switching behavior of time-optimal trajectories for multi-input control-affine systems is characterized (see also [11] for a study in the same spirit for a class of control-affine systems issuing from the circular restricted three-body problem). In the odd case, it is not clear how to derive such a first additional relation at times at which an extremal triple (q(·), λ(·), u(·)) crosses the singular locus. In the single-input case, this difficulty has been overcome with a suitable analysis of extremal trajectories around Fuller times [6, Theorem 18], but the arguments there depend decisively on the fact that the control is scalar. For the general odd case, the problem is open, and new ideas are required.
1.4. Structure of the paper. In Section 2 we present the Pontryagin maximum principle (PMP) to recast the time-optimal problem into its proper geometric framework. Based on the Hamiltonian formalism of the PMP, we establish a differentiation lemma that we will use intensively in the paper (Lemma 10). Section 2 also contains some general observations on the maximal order of the Fuller times in a set (Section 2.3) and classical definitions about jet spaces and transversality theory (Section 2.4). Section 3 collects additional algebraic material on skew-symmetric matrices that we need in subsequent arguments. Sections 4 and 5 are devoted to the recursive characterization of dependence conditions holding at accumulations of Fuller times, when the Goh matrix is, respectively, invertible and singular. Finally, in Section 6, we conclude the proof of the main result, Theorem 6.
Main technical tools
2.1. The Pontryagin maximum principle. Let us introduce some notation that we will employ extensively throughout the rest of the paper. Let π : T*M → M be the cotangent bundle projection, and let s ∈ Λ^1(T*M) be the tautological Liouville one-form on T*M. The non-degenerate skew-symmetric form σ = ds ∈ Λ^2(T*M) endows T*M with a canonical symplectic structure.
With any C^1 function p : T*M → R let us associate its Hamiltonian lift \vec{p} ∈ C(T*M, TT*M), defined by the condition

σ_λ(·, \vec{p}(λ)) = d_λ p, λ ∈ T*M.

Fix f = (f_0, . . . , f_2m) ∈ Vec(M)^{2m+1} and consider the time-optimal control problem for the system

q̇ = f_0(q) + Σ_{i=1}^{2m} u_i f_i(q), u ∈ B̄^{2m}_1. (2.1)

The Pontryagin Maximum Principle (PMP, for short) [20] gives a necessary condition satisfied by candidate time-optimal trajectories of (2.1), recalled in the theorem below. Introducing the control-dependent Hamiltonian function

H : T*M × B̄^{2m}_1 → R, H(λ, u) = ⟨λ, f_0(π(λ))⟩ + Σ_{i=1}^{2m} u_i ⟨λ, f_i(π(λ))⟩,

the precise statement is the following.
For every i = 0, . . . , 2m, let us define the smooth functions h i : More generally, let k be an integer and D = i 1 · · · i k a multi-index of {0, 1, . . . , 2m}, and let |D| := k be the length of D. A multi-index D = i · · · ij with k consecutive occurrences of the index i is denoted as D = i k j. We use f D to denote the vector field defined by and h D to denote the smooth function on T * M given by λ, f D for λ ∈ T * M .
By a slight abuse of notation, given a time-extremal triple (q(·), λ(·), u(·)) defined on [0, T ], we define h i (t) := h i (λ(t)) for every i = 1, . . . , 2m and t ∈ [0, T ]. Throughout the rest of the paper, we further extend this convention in the following way: whenever ϕ : T * M → R is a scalar function defined on T * M and t → λ(t) is an integral curve of H, we denote by ϕ(t) the evaluation of ϕ at λ(t) if no ambiguity arises.
Denote by I the set {1, . . . , 2m} and by h I the map h I = (h 1 , . . . , h 2m ). Let us first recall that the time-extremal control u is smooth (up to modification on a set of measure zero) on the open set R := {t ∈ [0, T ] | h I (t) ≠ 0}, which can be expressed in terms of the set Σ introduced in Definition 2. Indeed, the maximality condition (2.3) provided by the PMP yields an explicit characterization of u(t) for t ∈ R. Therefore an extremal trajectory on R is an integral curve of a Hamiltonian vector field which is well-defined and smooth on T * M \ {λ ∈ T * M | h I (λ) = 0}. In particular, its integral curves are smooth as well.
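The explicit characterization of the control on R alluded to above can be sketched as follows (hedged: it assumes maximization of the Hamiltonian, linear in u, over the closed unit ball B 2m 1 ):

```latex
u(t) \;=\; \frac{h_I(t)}{\| h_I(t) \|}
\qquad \text{for a.e. } t \in R ,
\qquad h_I = (h_1, \dots, h_{2m}) .
```

This expression is well defined precisely where h I ≠ 0, which is why u is smooth on R.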
We also recall the following differentiation formula along a time-extremal lift t → λ(t), which follows as a consequence of the symplectic structure on T * M (see [1, Section 3.3]).
In particular, Proposition 9 implies that for every X ∈ Vec(M ) and every extremal triple associated with (2.1) the identity holds true for a.e. t (here we apply the proposition to ϕ(λ) = λ, X(π(λ)) ).
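Applying the proposition to ϕ(λ) = ⟨λ, X(π(λ))⟩ yields the classical identity below; the display is a hedged reconstruction of the standard formula from [1, Section 3.3], written in the notation of the present section:

```latex
\frac{d}{dt}\,\bigl\langle \lambda(t),\, X(q(t)) \bigr\rangle
\;=\; \Bigl\langle \lambda(t),\, \Bigl[\, f_0 + \sum_{i=1}^{2m} u_i(t)\, f_i,\; X \,\Bigr](q(t)) \Bigr\rangle
\qquad \text{for a.e. } t .
```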
Denote by M j,k (R) the set of j × k matrices with real entries and let M j (R) = M j,j (R). We introduce the Goh matrix H II = (h ij ) i,j∈I and, differentiating h I along a time-extremal triple, we find by the previous considerations the expression of ḣ I (t) for a.e. t (notice that the minus sign is a consequence of considering the transposition in (2.6)). In particular, within the set R, the dynamics of h I are described by the resulting differential equation.
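The differentiated relation and the Goh matrix can be sketched as follows (a hedged reconstruction: the minus sign matches the remark on the transposition in (2.6), and the formula is consistent with the identity u(t) = H II (t) −1 h 0I (t) derived later when h I ≡ 0):

```latex
(H_{II})_{ij} = h_{ij} = \bigl\langle \lambda,\, [f_i, f_j](q) \bigr\rangle \quad (i, j \in I),
\qquad
\dot h_I(t) \;=\; h_{0I}(t) \;-\; H_{II}(t)\, u(t) \quad \text{for a.e. } t .
```

On R, substituting the maximizing control gives a closed ordinary differential equation for h I .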
2.2. A differentiation lemma. We present in this section a result that we will use extensively in the paper. It concerns the differentiation along an extremal curve of a smooth function on T * M that vanishes at a converging sequence of times.
such that, for every smooth function ϕ, there exists a subsequence (t lw ) w∈N such that the limit exists and belongs to B 2m 1 . Consider a smooth function ϕ : T * M → R such that ϕ(λ(t l )) = 0 for every l ∈ N. By continuity we have ϕ(λ(t * )) = 0, so that by Proposition 9 for every l ∈ N we can write (2.8). Rewriting (2.8) along the subsequence t lw and taking the limit as w → ∞ then permits us to conclude, since t → {h i , ϕ}(λ(t)) is absolutely continuous for every i = 0, . . . , 2m.
2.3. Fuller order of a set. For a subset Ξ of R we denote by Ξ 0 its subset made of isolated points and, inductively, by Ξ j the set of isolated points of Ξ \ (Ξ 0 ∪ · · · ∪ Ξ j−1 ). Definition 11. We say that Ξ has Fuller order k ∈ N if Ξ = Ξ 0 ∪ · · · ∪ Ξ k and Ξ k ≠ ∅. We say that ∅ has Fuller order −1 and that Ξ has infinite Fuller order if it has Fuller order k for no k ∈ N. Remark 12. The notion of Fuller order is strictly related to that of Cantor–Bendixson rank: if X is a topological space (in particular, a subset of R with the induced topology), the Cantor–Bendixson rank of X is the least ordinal α such that X (α+1) = X (α) , where X (α+1) = (X (α) ) (1) denotes the derived set of X (α) and X (β) = ∩ α<β X (α) for limit ordinals β. For scattered sets, i.e., sets such that X (k) = ∅ for some k ∈ N, the Cantor–Bendixson rank is equal to the Fuller order plus 1. For perfect sets, on the contrary, the Fuller order is infinite and the Cantor–Bendixson rank is zero.
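Restating the construction in symbols, with a standard example for illustration:

```latex
\Xi_0 = \{\text{isolated points of } \Xi\}, \qquad
\Xi_j = \Bigl\{\text{isolated points of } \Xi \setminus \bigl(\Xi_0 \cup \dots \cup \Xi_{j-1}\bigr)\Bigr\}, \quad j \ge 1 .
```

For instance, Ξ = {0} ∪ {1/k | k ≥ 1} has Ξ 0 = {1/k | k ≥ 1} and Ξ 1 = {0}, hence Fuller order 1; its Cantor–Bendixson rank is 2, in agreement with Remark 12.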
The properties of the Fuller order described in the following two results have probably already been observed in the context of the Cantor–Bendixson rank, but we were not able to find a precise reference for them.
Lemma 13. Let Ξ, S be two subsets of R. If Ξ has Fuller order at least k and S has Fuller order at most j, with k > j ≥ 0, then Ξ \ S has Fuller order at least k − j − 1.
Proof. Without loss of generality, Ξ has Fuller order k and S has Fuller order j. Notice that it is enough to prove the lemma in the case j = 0, since every set S i , i = 0, . . . , j, is of Fuller order 0. Let us prove the property by induction on k, assuming that S = S 0 . In the case k = 1, we just need to notice that Ξ \ S is nonempty and hence has nonnegative Fuller order. Assume now that the property holds for k − 1 and let us prove it for k. Consider a point x ∈ Ξ k . If x is in S, then there exists a neighborhood of x which does not contain any point of S except x. Since x is a density point for Ξ k−1 , we deduce that there exist points in Ξ k−1 at positive distance from S. Hence Ξ \ S has Fuller order at least k − 1. Assume now that x is in Ξ \ S. Notice that, by the induction hypothesis, for every neighborhood U of x, the set U ∩ ((Ξ 0 ∪ · · · ∪ Ξ k−1 ) \ S) has Fuller order at least k − 2. We can then extract a sequence in ((Ξ 0 ∪ · · · ∪ Ξ k−1 ) \ S) k−2 converging to x. We deduce that Ξ \ S has Fuller order at least k − 1.
As an immediate consequence, we get the following result.
2.4. Jet spaces and transversality. Following [13], for any nonempty open subset U of M we introduce the jet spaces J N T U and J N 2m+1 T U of the smooth vector fields (respectively, of the (2m + 1)-tuples of smooth vector fields) on U , endowed with the Whitney C ∞ topology. If N is a positive integer and f ∈ Vec(U ) (respectively, f ∈ Vec(U ) 2m+1 ), we use j N (f ) and j N q (f ) to denote respectively the jet of order N associated with f (respectively, the (2m + 1)-tuple of jets of order N associated with f ) and its evaluation at q ∈ U .
Fix N ∈ N and let P (n, N ) be the set of all polynomial mappings from R n to R n of degree at most N . Similarly, we call P (n, N ) 2m+1 the set of all (2m + 1)-tuples of elements in P (n, N ). Assume from now on that U is the domain of a coordinate chart (x, U ) centered at some q ∈ U . This allows one to identify the typical fiber T 2m+1,N of J N 2m+1 T U with P (n, N ) 2m+1 as explained below. There is a standard way [7] of introducing coordinates on a suitable semi-algebraic subset V , which we briefly recall.
Let K 0 = {0}, and let K k be the set of k-tuples of ordered integers in {1, . . . , n}. If f : R n → R is a homogeneous polynomial of degree k, and ξ = (ξ 1 , . . . , ξ k ) ∈ (R n ) k , the polarization of f along ξ is the real number obtained by iterated directional differentiation, where, for every η ∈ R n , D η f denotes the directional derivative of f along η.
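The polarization referred to above is, in the standard convention (the 1/k! normalization is an assumption; some references scale differently):

```latex
f_{\mathrm{pol}}(\xi_1, \dots, \xi_k) \;=\; \frac{1}{k!}\, D_{\xi_1} \cdots D_{\xi_k} f .
```

It is symmetric and multilinear in (ξ 1 , . . . , ξ k ) and recovers f on the diagonal, since D η applied k times to a homogeneous f of degree k gives k! f (η).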
The evaluation map ev associates with any element Q ∈ V a basis of R n . For 1 ≤ i ≤ n and Q ∈ V , we also employ the notation ev(Q) i to refer to the i-th component of ev(Q); in particular ev(Q) i ∈ R n . This allows one to introduce a coordinate chart X V on V , in such a way that every Q = (Q 1 , . . . , Q 2m+1 ) ∈ V can be written in the coordinates X j i,σ , where the element X j i,σ denotes the polarization of the j-th coordinate of the homogeneous part of degree k = |σ| of Q i along the element (ev(Q) σ1 , . . . , ev(Q) σ k ).
Consider now the chart (X V , x). In local coordinates, Q i is represented in terms of the elements X i,σ , and each X i,σ is a constant vector field.
l is a polynomial in the coordinates X a s,σ , with 1 ≤ a ≤ n, 1 ≤ s ≤ 2m + 1, |σ| ≤ l and σ = j l . Similar computations can be carried out for all iterated brackets.
Remark 15. Let (x, ψ), π −1 (U ) be the induced chart on T * U , where ψ = (ψ r ) n r=1 . In particular, we use λ ψ to denote the elements of T * 0 M given in coordinates by (0, ψ). The typical fiber T 2m+1,N of the vector bundle J N 2m+1 T U × U T * U is isomorphic to P (n, N ) 2m+1 × R n . Clearly, h ik (λ ψ ) = ψ, X k,i − ψ, X i,k and, for l ≥ 1, where R ′ i,k,l is a polynomial in the coordinates ψ r , X a s,σ with 1 ≤ a, r ≤ n, 1 ≤ s ≤ 2m + 1, |σ| ≤ l and σ = j l . By an induction argument, h D (λ ψ ), with D a multi-index, can be expressed as a polynomial function in terms of the coordinates ψ r , X a s,σ . Therefore, this choice of the chart (X V , x) allows one to see every h D and h D • ev as a real-valued function on J N 2m+1 T U × U T * U and on its typical fiber T 2m+1,N , respectively, where N is large enough. This will also be the case for any polynomial function in the h D 's.
The following result follows by standard transversality arguments (see, e.g., [2, 16]). Lemma 16. Let V be the set of f = (f 0 , . . . , f 2m ) such that, for every q ∈ M , j N q (f ) ∉ B q . Assume that B q has codimension larger than or equal to n + 1 in J N 2m+1,q T M for every q ∈ M . Then V is open and dense in Vec(M ) 2m+1 0 .
3. Algebraic considerations
3.1. Decomposition of skew-symmetric matrices. We collect in this section some general facts regarding the algebraic structure of skew-symmetric matrices. For any l ∈ N, we recall that the notation so(l) stands for the linear space of l × l skew-symmetric real matrices. We begin by recalling some useful properties concerning the Pfaffian of a skew-symmetric matrix. Proof. Item i) is classical, and we refer the reader to [18] for a proof. Concerning Item ii), it can be found, for instance, in [9,Equation (3.2)].
The next proposition collects a list of useful properties valid for general skew-symmetric matrices of size k.
Proposition 18. Let k ∈ N and A ∈ so(k) be nonzero. Then the following holds true.
i) The rank of A is an even integer 2m 0 with 2 ≤ 2m 0 ≤ k, and there exists a nonzero principal minor of order 2m 0 . As a consequence, there exists a permutation matrix P such that P T AP has an invertible upper-left 2m 0 × 2m 0 block A 1 . ii) Writing P T AP in block form with blocks A 1 , A 2 , A 3 , the matrices A 1 , A 2 , and A 3 satisfy the relation A T 2 A −1 1 A 2 + A 3 = 0. iii) Let e 1 , . . . , e k−2m0 be the canonical basis of R k−2m0 . Define the vectors v i accordingly, where adj Pf (A 1 ) denotes the adjoint Pfaffian of A 1 introduced in Lemma 17. Then the family v 1 , . . . , v k−2m0 is a basis of ker(P T AP ), and the coordinates of each v i , for i = 1, . . . , k − 2m 0 , are homogeneous polynomials of degree m 0 in the entries of A.
Proof. We begin by i). First note that the conclusion is equivalent to proving that A admits a 2m 0 × 2m 0 nonzero principal minor, i.e., a nonzero determinant of a 2m 0 × 2m 0 principal submatrix. Recall that, for 1 ≤ l ≤ k, the coefficient of (−1) l x k−l in the characteristic polynomial of any k × k matrix is equal to the sum of its l × l principal minors. If A is a k × k skew-symmetric matrix, notice that its principal submatrices are themselves skew-symmetric. One deduces that the coefficients of (−1) l x k−l in the characteristic polynomial P A of A are zero if l is odd and sums of squares if l is even, according to i) of Lemma 17. Moreover, if the rank of A is equal to 2m 0 , then P A (x) = x k−2m0 Q(x) with Q(0) ≠ 0, since A is diagonalizable over C. Hence the coefficient of x k−2m0 in P A is nonzero, yielding the existence of a 2m 0 × 2m 0 nonzero principal minor.
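The two algebraic facts used in this proof, namely det(A) = Pf(A) 2 for skew-symmetric A (point i) of Lemma 17) and the evenness of the rank, can be checked numerically. A minimal sketch in Python; the helper pfaffian_4x4 is ad hoc (not from the paper) and hard-codes the 4 × 4 formula Pf(A) = a 12 a 34 − a 13 a 24 + a 14 a 23 :

```python
import numpy as np

def pfaffian_4x4(A):
    # Pfaffian of a 4x4 skew-symmetric matrix:
    # Pf(A) = a12*a34 - a13*a24 + a14*a23
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T  # a generic skew-symmetric matrix

# det(A) = Pf(A)^2  (point i) of Lemma 17)
assert np.isclose(np.linalg.det(A), pfaffian_4x4(A) ** 2)

# the rank of a skew-symmetric matrix is even (Proposition 18, point i))
C = np.zeros((5, 5))
C[0, 1], C[1, 0] = 1.0, -1.0
assert np.linalg.matrix_rank(C) % 2 == 0  # here the rank is 2
```

The same identity det = Pf² holds in every even dimension, but the explicit Pfaffian formula grows combinatorially; the 4 × 4 case already illustrates the sums-of-squares structure of the characteristic coefficients.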
We pass now to Point ii). Let us consider any element w = (w 1 , w 2 ) ∈ ker(P T AP ). Computing the product P T AP w = 0, and recalling that A 1 is invertible, we obtain the relations w 1 = −A −1 1 A 2 w 2 and (A T 2 A −1 1 A 2 + A 3 )w 2 = 0. By assumption, ker(P T AP ) has dimension k − 2m 0 , therefore there exists a basis w 1 2 , . . . , w k−2m0 2 of R k−2m0 such that the elements (−A −1 1 A 2 w i 2 , w i 2 ), i = 1, . . . , k − 2m 0 , belong to ker(P T AP ) and are linearly independent. In particular the (k − 2m 0 ) × (k − 2m 0 ) skew-symmetric matrix A T 2 A −1 1 A 2 + A 3 has a (k − 2m 0 )-dimensional kernel, and therefore it is the zero matrix.
As for Point iii), it is sufficient to notice that the elements
3.2. Consequences on the structure of the Goh matrix. We apply here Proposition 18 to the skew-symmetric Goh matrix H II defined in (2.7).
Let (q(·), λ(·), u(·)) be a time-extremal triple of (2.1), and assume that t * ∈ [0, T ] is such that 1 ≤ rank (H II (t * )) = 2m 0 ≤ 2m. Then, up to a permutation of the basis of R 2m , we can present H II (t * ) in a block form in which H 2m0 II (t * ) ∈ M 2m0 (R) and F (t * ) ∈ M 2(m−m0) (R) are skew-symmetric matrices, H 2m0 II (t * ) is invertible, and E(t * ) ∈ M 2m0,2(m−m0) (R). Then the following holds true: i) each vector built from adj Pf (H 2m0 II (t)) and e i as in Proposition 18 is a 2m-dimensional vector whose components are homogeneous polynomials of degree m 0 in the entries h ij (t) of the Goh matrix; ii) if t ∈ I is such that rank (H II (t)) = 2m 0 , then the block relations of point ii) of Proposition 18 hold; iii) if t ∈ I is such that rank (H II (t)) = 2m 0 , the non-trivial relations expressed by the matrix equality E(t) T (H 2m0 II (t)) −1 E(t) + F (t) = 0 are homogeneous polynomial relations of degree m 0 + 1 in the entries h ij (t) of the Goh matrix.
4. Iterated accumulations of points in Σ with invertible Goh matrix
Let (q(·), λ(·), u(·)) be an extremal triple of (2.1). Consider the set Σ 2m := {t ∈ Σ | rank (H II (t)) = 2m}, where Σ is the set constructed in Definition 2. In analogy with Definition 3, we define Σ 2m 0 to be the set of isolated points of Σ 2m and, inductively, we set Σ 2m j to be the set of isolated points of Σ 2m \ (∪ j−1 i=0 Σ 2m i ). The starting point of the study of accumulations of singularities in Σ 2m is the following result.
Proof. Since t * ∈ Σ 2m ⊂ Σ, we have that det(H II (t * )) ≠ 0 and we deduce from (2.5) that h I (t * ) = 0. Moreover, since t * ∉ Σ 2m 0 , there exists a nontrivial sequence (t l ) l∈N ⊂ Σ 2m converging to t * such that h I (t l ) = 0 for every l ∈ N. Applying Lemma 10 to ϕ = h i , i ∈ I, we infer the existence of u * ∈ B 2m 1 for which the conclusion of Lemma 10 holds. Then we deduce from [3, Theorem 3.4] that h I vanishes identically in a relative neighborhood I ⊂ [0, T ] of t * . Note that [3, Theorem 3.4] is stated for time-optimal trajectories, but it actually holds true for extremal trajectories, since its proof only relies on the properties of the extremal flow characterized by the PMP.
Upon shrinking I, we can assume that det(H II (t)) ≠ 0 for every t ∈ I. Differentiating the relation h I | I ≡ 0, we find that u(t) = H II (t) −1 h 0I (t) holds true a.e. on I. The differential system generated by the Hamiltonian function H 0 is well-defined on the set {p ∈ T * M | rank (H II (p)) = 2m}. Moreover, the time-extremal triple (q(·), λ(·), u(·)) satisfies λ̇(t) = H 0 (λ(t)) almost everywhere on I, that is, it is an integral curve of H 0 on I. But this forces u(·) to be smooth on I, contradicting the assumption that t * is an element of Σ 2m . The contradiction argument yields ‖H II (t * ) −1 h 0I (t * )‖ = 1, and the statement follows.
As a direct consequence of Lemma 17 and Proposition 20, we deduce the following.
It is useful to make the following observation on the structure of the constraint φ ℓ (λ) = 0. Its proof can be obtained by an easy inductive argument.
The following result illustrates the relation between the functions φ ℓ and the Fuller order of the set Σ 2m .
We proceed by induction, observing that the case ℓ = 0 follows from Corollary 21.
In the next lemma, using the fact that the conditions φ ℓ = 0 define independent constraints on the jets, we deduce from Proposition 25 and Lemma 16 that the set Σ 2m has Fuller order at most 2n − 1.
Lemma 26. There exists an open and dense set V 2m ⊂ Vec(M ) 2m+1 0 such that, for every f = (f 0 , . . . , f 2m ) ∈ V 2m and every extremal triple (q(·), λ(·), u(·)) of (2.1), (4.4) holds true. Proof. The proof of the lemma follows a classical strategy found, e.g., in [7]. Let us construct the set B cut out by the functions φ 0 , . . . , φ 2n−1 defined in (4.2) and (4.3). We denote then by B the canonical projection of B onto J 2n+1 2m+1 T M . Similarly, for q ∈ M , we define B q ⊂ J 2n+1 2m+1,q T M × T * q M analogously, and by B q the canonical projection of B q onto J 2n+1 2m+1,q T M . Notice that, for every coordinate chart (x, U ), B ∩ (J 2n+1 2m+1 T U × T * U ) is an algebraic subset of J 2n+1 2m+1 T U × T * U for the coordinates (X V , x, ψ) introduced in Section 2.4. Hence, B ∩ J 2n+1 2m+1 T U is a semi-algebraic subset of J 2n+1 2m+1 T U . We now consider the set V 2m of vector fields f ∈ Vec(M ) 2m+1 0 verifying the following: for every q ∈ M , j 2n+1 q (f ) ∉ B q . We claim that (4.4) holds true if f ∈ V 2m . In fact, arguing by contradiction, assume that for such an f and an extremal triple (q(·), λ(·), u(·)) of (2.1), there exists t * ∈ Σ 2m \ (∪ 2n−1 j=0 Σ 2m j ). Then Proposition 25 implies that j 2n+2 q(t * ) (f ) ∈ B q(t * ) , contradicting the fact that f ∈ V 2m . The claim follows. We conclude the proof of Lemma 26 thanks to Lemma 16, by showing that for every q ∈ M , the set B q defined above has codimension larger than or equal to n + 1 in J 2n+1 2m+1,q T M . Let q ∈ M , and consider a local coordinate chart (x, U ) on M centered at q. Lift this chart to a coordinate chart ((x, ψ), π −1 (U )) on T * U as in Remark 15, and recall that J 2n+1 2m+1,q T M × T * q M is isomorphic to P (n, 2n + 1) 2m+1 × R n . By taking into account Remark 23, the map E 2n φ is well defined, up to the identification of J 2n+1 2m+1,q T U × T * q U with P (n, 2n + 1) 2m+1 × R n .
In order to prove that B q has codimension larger than or equal to n + 1 we first show that B q has codimension 2n by proving that E 2n φ is a submersion at every point of B q . To that purpose, we compute in local coordinates the maps φ i (λ ψ ) for 0 ≤ i ≤ 2n − 1.
We proved that B q has codimension 2n, from which it follows readily that the codimension of B q is larger than or equal to 2n − n + 1 = n + 1 by projection, where the extra term +1 is due to the homogeneity of each of the relations φ l (λ ψ ) = 0 with respect to λ ψ . This concludes the proof of Lemma 26.
5. Iterated accumulations of points in Σ with singular Goh matrix
We consider in this section the complementary case in which the Goh matrix H II does not have full rank.
Let us fix 1 ≤ a ≤ m, and consider the sets Σ 2(m−a) and (T * M ) 2(m−a) , singled out by the condition rank (H II ) = 2(m − a). Observe that the notation is consistent with the notation Σ 2m introduced in (4.1), which effectively corresponds to the case a = 0. By point i) of Proposition 18, for every λ ∈ (T * M ) 2(m−a) there exists a permutation matrix P λ ∈ M 2m (R) bringing the Goh matrix into block form and, finally, letting G λ be the resulting 2a × 2a skew-symmetric block, we list all of the a(2a − 1) independent entries of G λ as a collection of functions g λ l , 1 ≤ l ≤ a(2a − 1).

Proposition 28. Let 1 ≤ a ≤ m and consider, for 1 ≤ i ≤ 2a and 1 ≤ l ≤ a(2a − 1), the functions κ λ i and g λ l defined in (5.2) and (5.3), respectively. Consider an extremal triple (q(·), λ(·), u(·)). Then the following holds true:

Proof. Our considerations being local, it is not restrictive to work with the Goh matrix H II in the block form (5.1). The fact that for t ∈ Σ 2(m−a) and 1 ≤ l ≤ a(2a − 1) one has g λ(t) l (t) = 0 is the content of Point iii) of Proposition 19. If, in addition, t is in Σ 2(m−a) \ Σ 0 , then by definition there exists a nontrivial sequence (t l ) l∈N ⊂ Σ 0 that converges to t, yielding by (2.5) and Lemma 10 the existence of some u * ∈ B 2m 1 satisfying the corresponding limit relations. Since H II (t) is a skew-symmetric matrix, these relations imply the conclusion.

The following rather long and technical definition aims at identifying sufficiently many independent functions that vanish at high-order density points of Σ. Then ρ λ r+1 = ρ λ r . Let, moreover, Z λ r (·) be the matrix extracted from S λ r (·) with column indices in J λ r , and define S̃ λ r accordingly.
For every λ ∈ ∪ m a=1 (T * M ) 2(m−a) the sequence (ρ λ r ) r∈N is nondecreasing and takes values in {0, . . . , 2m}. Hence, given any N ∈ N, the pigeonhole principle implies that for every λ there exists r ≤ 2mN such that ρ λ r = ρ λ r+N . We denote by Υ N the range of R N and we notice that it is of finite cardinality.
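The pigeonhole step can be made concrete: a nondecreasing sequence with values in {0, . . . , 2m} increases at most 2m times, so among the 2m + 1 windows [kN, (k + 1)N ], k = 0, . . . , 2m, at least one contains no increase. A small illustrative sketch in Python (the function name is ad hoc, not from the paper):

```python
def stable_window_start(rho, N, two_m):
    # rho: nondecreasing sequence with values in {0, ..., two_m},
    # given here as a list of length at least (two_m + 1) * N + 1.
    # Pigeonhole: among the two_m + 1 windows [k*N, (k+1)*N],
    # at least one contains no increase, since rho can increase
    # at most two_m times overall.
    for k in range(two_m + 1):
        r = k * N
        if rho[r] == rho[r + N]:
            return r  # note r <= two_m * N
    raise AssertionError("unreachable for a valid nondecreasing input")

# example with m = 1 (values in {0, 1, 2}) and N = 3
rho = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2]
r = stable_window_start(rho, 3, 2)
assert r <= 2 * 3 and rho[r] == rho[r + 3]
```

Since rho is nondecreasing, rho[r] == rho[r + N] forces rho to be constant on the whole window, which is the stabilization used in the text.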
The main property justifying the above definition is the following.
Proof. Let us first notice that ρ λ k , J λ k , V λ k and the other matrices introduced in Definition 29 do not depend on λ provided that R N (λ) = R. To simplify the notation we then drop the index λ.
Let us prove the proposition by induction on k. For k = 0, recall that µ 0 = g 1 and the conclusion follows from Proposition 28. The same argument works in the inductive step from k − 1 to k whenever ρ k−1 < ρ k , since in this case µ k = κ ρ k −ρ 0 . When, instead, ρ k−1 = ρ k , notice that by the inductive assumption and by Lemma 10 there exists u * ∈ B 2m 1 such that the corresponding relations hold for every j = 1, . . . , k − 1 and every ℓ = 1, . . . , 2m. In particular, since the ranks of S k−1 and of its extracted matrix Z k−1 coincide, the conclusion follows.
In order to study the independence of the constraints µ j (λ) = 0 we investigate in the next lemma their expression.
Proof. Let us prove Equation (5.5) by induction on j. In the case j = 0, by the assumption made on r, µ r = κ ρr −ρ0 and the conclusion follows. For j = 1, . . . , k, µ r+j = det(S r+j−1 ), V r+j = V r , Z r+j = Z r , and a simple recursive argument allows us to conclude.
Using the properties of the functions µ j obtained in the last two results, we are able to prove the following lemma on the Fuller order of the set SR introduced in the statement of Proposition 30.
To conclude as in Lemma 26 and deduce from Lemma 16 that VR is dense in Vec(M ) 2m+1 0 , it suffices to show that for every q ∈ M the codimension of B q in J (2m+1)N +2 2m+1,q T M is larger than or equal to n + 1.
Let q ∈ M , and consider a local coordinate chart (x, U ) on M centered at q. Lift this chart to a coordinate chart ((x, ψ), π −1 (U )) on T * U as in Section 2.4. By construction, B ∩ J (2m+1)N +2 2m+1 T U is a semi-algebraic subset of J (2m+1)N +2 2m+1 T U .
6. Proof of Theorem 6
Let N ≥ 2n and define U = V 2m ∩ (∩ R∈Υ N V R ), where V 2m is as in Lemma 26 and the sets V R are as in Lemma 32.
| 2019-09-03T11:02:39.000Z | 2019-09-03T00:00:00.000 | {
"year": 2019,
"sha1": "50e14884b4e893f8765c1c434b02801cc9108c20",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1909.01061",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "69768dceff5a5d9eee895fe6b07b104ffa152d76",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
237600545 | pes2o/s2orc | v3-fos-license | Editorial: Microbial Communities of Polar and Alpine Soils
In recent years Arctic, Antarctic, and Alpine regions have experienced the highest rates of warming worldwide (Zemp et al., 2006; Anisimov et al., 2007). In Arctic and Alpine environments these phenomena are resulting in an increase in the duration of ice-free periods and an overall greening of terrestrial areas. The effects of warming on microbial decomposition of vast carbon pools in permafrost soils have the potential to cause a significant positive feedback to global climate change (Cavicchioli et al., 2019). Climate change in Antarctica is feared primarily to result in the loss of unique and highly adapted ecosystems, mainly because of shifts in temperature and precipitation regimes, as well as longer-term changes in edaphic profiles and the invasion of allochthonous, more competitive species (Convey and Peck, 2019). Soil microorganisms play a crucial role in mediating the carbon and nitrogen balance and other biogeochemical cycles of global importance. Therefore, understanding soil microbial diversity and ecology, including the ecological drivers that shape microbial communities, may be key to understanding how biogeochemical cycles will respond to large-scale environmental and climatic changes. Given the key role of microorganisms in maintaining the balance of these environments, they can be viewed both as sentinels and amplifiers of global change (Maloy et al., 2016). In this framework, the e-book Microbial Communities of Polar and Alpine Soils aimed to collect original and notable research papers about the diversity and functionality of soil microbial communities, their interactions with the other biotic components, including the aboveground plant cover, and the abiotic factors determinant for the colonization of these habitats, as well as their adaptation and resilience to stressful conditions and environmental changes.
THE THREE POLES AND THE CHALLENGES OF CLIMATE CHANGE
In recent years Arctic, Antarctic, and Alpine regions have experienced the highest rates of warming worldwide (Zemp et al., 2006; Anisimov et al., 2007). In Arctic and Alpine environments these phenomena are resulting in an increase in the duration of ice-free periods and an overall greening of terrestrial areas. The effects of warming on microbial decomposition of vast carbon pools in permafrost soils have the potential to cause a significant positive feedback to global climate change (Cavicchioli et al., 2019). Climate change in Antarctica is feared primarily to result in the loss of unique and highly adapted ecosystems, mainly because of shifts in temperature and precipitation regimes, as well as longer-term changes in edaphic profiles and the invasion of allochthonous, more competitive species (Convey and Peck, 2019).
Soil microorganisms play a crucial role in mediating the carbon and nitrogen balance and other biogeochemical cycles of global importance. Therefore, understanding soil microbial diversity and ecology, including the ecological drivers that shape microbial communities, may be key to understanding how biogeochemical cycles will respond to large-scale environmental and climatic changes. Given the key role of microorganisms in maintaining the balance of these environments, they can be viewed both as sentinels and amplifiers of global change (Maloy et al., 2016).
In this framework, the e-book Microbial Communities of Polar and Alpine Soils aimed to collect original and notable research papers about the diversity and functionality of soil microbial communities, their interactions with the other biotic components, including the aboveground plant cover, and the abiotic factors determinant for the colonization of these habitats, as well as their adaptation and resilience to stressful conditions and environmental changes.
COLD-LIVING MICROORGANISMS AND THEIR ROLE IN POLAR AND ALPINE ENVIRONMENTS
This brief editorial summarizes and highlights experimental research carried out in different environments, ranging from the Qinghai-Tibetan Plateau, to the Arctic, to maritime and continental Antarctica, or spreading across the poles. Different types of soils were studied, from oligotrophic to nitrogen-rich soils, from soils underneath plants to thawed permafrost, dry soils, moraines, gullies, polygon soils, or soils around rocks. Some papers dealt with fungi, others with bacteria, and others with actinomycetes and/or other organisms.
Ray et al. wanted to define the spread of "atmospheric chemosynthesis," a microbial carbon-fixation process supporting primary production in dry and nutrient-poor environments. The genes associated with this process were reported to be widespread across cold desert soils, spanning the Tibetan Plateau and both Antarctic and high Arctic sites.
In contrast, in some coastal sites of maritime Antarctica, seabird and marine mammal colonies exert, through the accumulation of their feces and urine, a strong influence on the edaphic N content. The nitrogen cycle in Antarctic tundra ecosystems has also been investigated by Dai et al. and Acuña-Rodríguez et al. The former recorded differences in the denitrification rates and the denitrifier community structures between nitrogen-rich soils and animal-lacking tundra soils. The latter observed that fungal symbionts (root endophytes) associated with the only two Antarctic vascular plants, Colobanthus quitensis and Deschampsia antarctica, actively participate in the plants' N uptake, even in non-N-limited soils, with positive impacts on plant biomass.
Two contributions by Newsham et al. dealt with the effects of climate change on a single fungal species and on whole soil communities from maritime Antarctica, respectively. An undescribed member of the order Helotiales was recorded to be superabundant in Antarctic island soils under D. antarctica (Newsham, Cox et al.). A range of its physiological and morphological features was reported, and an increase in its growth rate was suggested under the rising temperatures that are expected to occur in maritime Antarctica by the end of this century, with the potential loss of ancient C from soils.
Three fungal guilds and growth forms (lichenized and saprotrophic fungi, and yeasts) of barren fellfield soils sampled along a transect encompassing almost the entire maritime Antarctic were studied, in order to define the main environmental factors affecting their richness, relative abundance, and taxonomic structure. Air temperature and edaphic factors were reported as the main drivers, and discussed in view of the expected future climate changes of the region (Newsham, Davey et al.).
Glacier retreat exposes new ice-free barren soils. Some areas have been deglaciated only recently, while others have been deglaciated for millennia. The bacterial communities of these soils have been studied by Almela et al. Those of older soils appeared to differ significantly along the soil profiles, while they were similar in recently (within decades) deglaciated soils. A high degree of heterogeneity was also observed among the microbial communities of soils at different sampling locations.
Water tracks, which seasonally flow on the ice-free soils of the McMurdo Dry Valleys in continental Antarctica, are expected to increase with ongoing climate change. They select for bacterial taxa able to cope with challenging environmental conditions. Significant differences in microbial community assembly between on- and off-water-track samples, and across two distinct locations, were recorded, mainly driven by soil salinity (George et al.). Heterogeneous microbial communities were found to characterize four different habitats present at higher elevations of Taylor Valley, where biological soil crusts were reported in a gully and moraine next to Canada Glacier, regarded as islands of biodiversity able to spread organisms and nutrients into the surrounding landscape (Solon et al.).
The Northern high latitudes are a preferential open-air laboratory to study the impact of climate change on soil microbial communities, as they are warming twice as fast as the global average. Four out of 13 contributing articles of the Topic report studies carried out in the Arctic.
Arctic permafrost has become particularly vulnerable to thaw, with consequences for microbial communities that are not yet perfectly known. Bacterial community assembly during permafrost thaw was studied using in situ observations and a laboratory incubation of soils from sub-Arctic Sweden, where permafrost thaw has occurred over the past decade. It was shown to be driven by randomness (i.e., stochastic processes) immediately after thaw, while environmentally driven (i.e., deterministic) processes became increasingly important in structuring communities in post-thaw successions (Doherty et al.). Geospatial differences in hydrology in polygon soils, causing gradients in biogeochemistry, soil C storage potential, and thermal properties that influence the distribution of microbial CO 2 and CH 4 release, have been studied by Roy Chowdhury et al. through laboratory incubation at increasing temperatures of frozen soil cores collected in the Arctic coastal tundra in Alaska.
A comparison between the microbial community structure of rocks (and surrounding soils) in a high Arctic polar desert (Svalbard) showed significant differences between these substrates. Differences were also reported between sandstones and limestones, owing to the determinant role of rock physicochemical properties in the successful establishment of lichens in lithic environments (Choe et al.).
Shifts in vegetation and soil fungal communities have been recorded in the Arctic tundra as a response to warming temperatures. In this context, fungal community compositions in long-term experimental plots simulating the expected increase in summer warming and winter snow depth were compared using DNA metabarcoding data, and dry and moist tundra appeared to follow different trajectories in response to climate change (Geml et al.).
Cultivable actinomycetes isolated from soils near the roots of different plants from the Qinghai-Tibetan Plateau were investigated for their enzymatic activity and for the production of diffusible pigments and organic acids (Ma et al.).
This volume brings together the scientific community to cover all aspects of cold-adapted microorganisms and their role in Polar and Alpine environments. It makes a significant contribution to the long-standing debate on the multiple ecological roles of microorganisms in cold soil ecosystems. We hope that this range of articles will be highly attractive to researchers worldwide involved in the study of soil microbial communities.
AUTHOR CONTRIBUTIONS
The authors have equally contributed to the Editorial, and approved it for publication.
Geography, not lifestyle, explains the population structure of free-living and host-associated deep-sea hydrothermal vent snail symbionts
Background: Marine symbioses are predominantly established through horizontal acquisition of microbial symbionts from the environment. However, genetic and functional comparisons of free-living populations of symbionts to their host-associated counterparts are sparse. Here, we assembled the first genomes of the chemoautotrophic gammaproteobacterial symbionts affiliated with the deep-sea snail Alviniconcha hessleri from two separate hydrothermal vent fields of the Mariana Back-Arc Basin. We used phylogenomic and population genomic methods to assess sequence and gene content variation between free-living and host-associated symbionts. Results: Our phylogenomic analyses show that the free-living and host-associated symbionts of A. hessleri from both vent fields are populations of monophyletic strains from a single species. Furthermore, genetic structure and gene content analyses indicate that these symbiont populations are differentiated by vent field rather than by lifestyle. Conclusion: Together, this work suggests that, despite the potential influence of host-mediated acquisition and release processes on horizontally transmitted symbionts, geographic isolation and/or adaptation to local habitat conditions are important determinants of symbiont population structure and intra-host composition. Supplementary Information: The online version contains supplementary material available at 10.1186/s40168-023-01493-2.
Introduction
Mutualistic animal-microbe associations are globally significant phenomena, shaping the ecology and evolution of both host animals and microbial symbionts [1]. These symbiotic associations are maintained by transmission of symbionts from host parent to progeny either (1) directly, for example via the germline (vertical transmission), (2) indirectly, for example through an environmental population of symbionts (hereafter referred to as "free-living" symbionts) (horizontal transmission), or (3) via a combination of both vertical and horizontal transmission (mixed mode transmission) [2].
Horizontal transmission is more commonly found in aquatic than terrestrial habitats, likely due to the ease with which microbes can be transported in water compared to air or soil [3]. However, even for marine symbioses where horizontally transmitted microbial symbionts are observed in the environment [4], it is not yet clear whether free-living, environmental populations of symbionts represent host-associated populations at the strain level, or whether their diversity and composition differ. Free-living symbiont populations may be shaped by local environmental conditions as well as by dynamic interactions with their host: for example, host animals may "seed" the environment by the release of their symbionts into the water column only upon host death [5] or via continuous release from live adults [6,7]. In addition, ecological and evolutionary processes, such as dispersal barriers, natural selection, and genetic drift, can contribute to the diversity and biogeography of environmental symbionts [8,9].
Deep-sea hydrothermal vents are discontinuous, island-like habitats dominated by vent-endemic invertebrates that host primarily horizontally transmitted chemoautotrophic bacterial symbionts, making them opportune natural systems for understanding the biogeography of free-living microbial symbionts. In these mutualisms, the symbiotic bacteria are either obtained during a narrow competence window in early developmental stages or throughout the lifetime of the host [10,11] and are, in most cases, housed intracellularly within the host's tissues, e.g., gill or trophosome. The symbionts oxidize chemical reductants (e.g., H2S, H2, CH4) in venting fluids to generate energy for the production of organic matter, thereby providing the primary food source for the host in an otherwise oligotrophic deep ocean [12] and accounting for the high ecosystem productivity characteristic of hydrothermal vents [13][14][15].
Despite reliance on horizontal transmission, the majority of host species from hydrothermal vents affiliate with only one or two specific endosymbiont phylotypes (i.e., species or genera based on 16S rRNA gene sequence similarity) [12], possibly as a means to reduce the acquisition of cheaters [16]. While a significant number of studies have focused on the diversity, composition and structure of the host-associated symbiont populations (e.g., [10,[17][18][19][20][21][22]), their free-living, environmental stages remain poorly investigated [4], partly due to the difficulty of detecting low abundance free-living symbionts in environmental samples. As a consequence, few free-living symbiont studies exist. Most of these studies have so far relied on investigations of particular marker genes [4,23]; only one used an -omics approach but was limited to a single metagenome [24].
A recent shotgun metagenomic study found putative free-living symbiont populations of the provannid snail Alviniconcha hessleri in low-temperature diffuse venting fluids at two distinct vent fields of the Mariana Back-Arc, Northwest Pacific (15.5-18° N) [25], providing a unique opportunity to compare free-living and host-associated stages of chemosynthetic symbionts at hydrothermal vents. Alviniconcha hessleri belongs to the dominant fauna at hydrothermal vents in the Mariana Back-Arc Basin, where it lives in nutritional endosymbiosis with one species of sulfur-oxidizing, environmentally acquired Gammaproteobacteria [26,27]. Although patterns of host-symbiont phylogenetic discordance strongly support a mode of horizontal transmission for the A. hessleri symbiont [26,27], the exact dynamics of symbiont uptake and release are unknown. As an endemic species to the Mariana region, A. hessleri is currently listed as "Vulnerable" on the International Union for Conservation of Nature Red List of Threatened Species (https://www.iucnredlist.org), highlighting the need to identify the factors that contribute to its limited biogeographic range, including the population structure of its obligate microbial symbiont.
In this study, we applied phylogenomic and population genomic methods to evaluate the evolutionary relationships as well as the genetic and functional variation of Alviniconcha hessleri symbionts based on lifestyle by comparing free-living and host-associated symbiont populations collected from the same habitats. In addition, we addressed the effect of geography by comparing populations of both host-associated and free-living symbionts between vent fields of the northern and central Mariana Back-Arc Basin that are approximately 300 km apart and differ notably in their geochemistry: the central vent sites are known to support both low-temperature diffuse flow and black smokers that emit high-temperature fluids, with high amounts of hydrogen sulfide (H2S), whereas the northern sites only harbor diffuse flow habitats with lower concentrations of H2S [25].
Methods
Host-associated symbiont collection, sequencing, and genome assemblies

Three A. hessleri specimens each were collected from snail beds at the Illium vent field (3582 m) and the Voodoo Crater-2 (VC2) location within the Hafa Adai vent field (3277 m) in the Mariana Back-Arc Basin (Fig. 1; Table 1). In addition, one gill sample from each vent field, Hafa Adai 172 (VC2) and Illium 13, was selected for long-read Nanopore sequencing on 2-3 MinION flow cells (Oxford Nanopore Technologies, Oxford, UK) using the SQK-LSK109 ligation kit (Supplementary Table 1). Financial constraints prevented long-read sequencing of all host-associated samples.
Free-living symbiont collection, sequencing, and genome assembly
All sequences from free-living samples used here were retrieved from a previous study [25], including two high-quality MAGs of environmental A. hessleri symbionts from the Illium (GCA_003972985.1) and Hafa Adai-VC2 (GCA_003973075.1) vent sites. The fluid samples from which these MAGs were assembled were collected in direct vicinity of snail beds where the A. hessleri specimens for host-associated analyses were obtained [25]. We further included a third previously assembled free-living symbiont MAG from diffuse venting fluids at the Voodoo Crater-1 (VC1) (GCA_003973045.1) location within the Hafa Adai vent field, ~ 5 m from Hafa Adai-VC2. Though symbiont MAGs could not previously be assembled from the other vent sites sampled in ref. [25] (Burke, Alice Springs, Perseverance, and Hafa Adai-Alba), we attempted again to retrieve symbiont MAGs from these samples by assembling and binning the raw reads from these sites with the methods described above, but did not obtain usable symbiont MAGs.
All details of the hydrothermal fluid collection, sample storage, sample processing, sequencing, assembly, and binning of metagenome-assembled genomes can be found in ref. [25]. Information about raw sequencing reads is provided in Supplementary Table 1.
Genome similarity and phylogenomic analyses
To confirm that all symbiont MAGs belong to the same bacterial species, we calculated average nucleotide identities (ANIs) via FastANI [42]. A phylogenomic tree that included the six host-associated and the three free-living symbiont MAGs as well as reference genomes of other chemosynthetic Gammaproteobacteria (Supplementary Table 2) was then constructed with IQ-TREE2 [43] based on 70 single-copy core genes in the Bacteria_71 collection [44]. Parameter choice for phylogenomic reconstructions followed ref. [18].
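The ANI computation can be illustrated with a toy sketch. This is not FastANI itself (which fragments genomes, finds orthologous mappings, and filters alignments); the fragment sequences below are invented, and only the core arithmetic, mean percent identity over aligned fragment pairs, is shown:

```python
def percent_identity(frag_a, frag_b):
    """Percent identity between two aligned, equal-length DNA fragments."""
    matches = sum(x == y for x, y in zip(frag_a, frag_b))
    return 100.0 * matches / len(frag_a)

def toy_ani(fragment_pairs):
    """Toy ANI: mean percent identity over aligned fragment pairs."""
    values = [percent_identity(a, b) for a, b in fragment_pairs]
    return sum(values) / len(values)

pairs = [("ACGTACGT", "ACGTACGA"),  # 7/8 positions match -> 87.5%
         ("TTGACCGA", "TTGACCGA")]  # identical -> 100%
print(toy_ani(pairs))  # 93.75
```

Genome pairs above roughly 95% ANI are conventionally treated as the same bacterial species, which is why ANI values > 97.7% among the nine MAGs (see Results) support a single-species assignment.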
Population structure and gene content analysis
To determine symbiont population structure according to geography and lifestyle, we inferred DNA sequence polymorphisms in the free-living and host-associated samples by mapping metagenomic reads to a pangenome created with Panaroo [45] from all nine symbiont MAGs. Variants were called and filtered following the pipeline in ref. [18]. All samples from Illium and Hafa Adai met our minimum 10 × coverage threshold. Free-living samples from Burke and Alice Springs mapped at 5.9 × and 3.9 × coverage, respectively. The population structure and gene content analyses (see below) were repeated for these lower-coverage samples. All other free-living metagenomic samples from the remaining vent sites collected in ref. [25] (i.e., Perseverance, Hafa Adai-Alba) had an insufficient number of reads mapped for further analyses. Principal coordinate analysis (PCoA) plots were created based on nucleotide counts converted to Bray-Curtis dissimilarities with the ggplot2 [46] and vegan [47] packages in RStudio [48]. To quantify the qualitative variant calling results depicted in the PCoAs, fixation indices (FST) between individual metagenomic samples (wherein each individual gill metagenome was treated as a population) were calculated following ref. [49] and plotted with pheatmap [50]. The method from ref. [49] as well as scikit-allel (https://github.com/cggh/scikit-allel) were further used to calculate pairwise FST values between samples pooled by lifestyle or vent field.
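To ground the fixation-index analysis, here is a minimal per-site, two-population Wright-style FST computed from allele frequencies. This is a simplified illustration, not the exact estimator of ref. [49] or scikit-allel, which work from allele counts across many variant sites and average numerators and denominators genome-wide:

```python
def fst_wright(p1, p2):
    """Per-site Wright's FST for two populations with allele frequencies
    p1 and p2: FST = (HT - HS) / HT, where H = 2p(1 - p) is the expected
    heterozygosity of a biallelic site."""
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-population
    pbar = (p1 + p2) / 2.0                               # pooled frequency
    ht = 2 * pbar * (1 - pbar)                           # total heterozygosity
    return 0.0 if ht == 0 else (ht - hs) / ht

print(fst_wright(0.5, 0.5))  # 0.0 -> identical populations, no differentiation
print(fst_wright(1.0, 0.0))  # 1.0 -> fixed difference, complete isolation
```

Values between 0 and 1 quantify partial differentiation; pooling samples by vent field versus by lifestyle, as in the text, amounts to choosing which samples contribute to p1 and p2.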
Gene content variation among symbiont populations was determined via Pangenome-based Phylogenomic Analysis (PanPhlAn) [51] and visualized through a PCoA plot based on the Jaccard Similarity Coefficient. Genes that were uniquely associated with lifestyle and vent field, respectively, were extracted from the PanPhlAn gene presence/absence matrix. Functional predictions for these genes were either obtained from the Prokka [52] annotations created during pangenome construction or inferred by blasting the respective protein sequences against the NR database. Hypothetical and unknown proteins were further annotated via KEGG [53] and Alphafold [54]. Differences in gene content between symbiont populations were visualized through Likert plots with the HH package [55] in RStudio.
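The Jaccard Similarity Coefficient underlying the gene-content PCoA can be sketched directly on gene presence/absence sets. The gene names below are hypothetical placeholders, not taken from the paper's annotation tables:

```python
def jaccard(genes_a, genes_b):
    """Jaccard similarity between two gene sets: |A intersect B| / |A union B|."""
    a, b = set(genes_a), set(genes_b)
    union = a | b
    return 1.0 if not union else len(a & b) / len(union)

pop_x = {"soxB", "dsrA", "feoB", "cheA"}  # hypothetical gene clusters
pop_y = {"soxB", "dsrA", "fliC"}
print(jaccard(pop_x, pop_y))  # 2 shared out of 5 total = 0.4
```

A PCoA then ordinates the populations on the matrix of pairwise Jaccard distances (1 minus similarity).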
Validation of free-living symbiont populations
To gain confidence that the symbionts detected in our environmental samples represented truly "free-living" symbiont stages as opposed to symbionts associated with host larvae or shed gill cells, we calculated the ratio of symbiont 16S rRNA genes to host mitochondrial CO1 genes in all nine samples by mapping raw metagenomic reads from the snail gills to custom-generated Alviniconcha symbiont 16S rRNA and host mtCO1 gene databases. To account for false positive mappings, we created additional background databases consisting of select bacterial (SUP05 clade bacteria, Thiomicrospira, and Marinomonas) and mollusk gene sequences. Bacterial 16S rRNA genes were downloaded from SILVA [56], while all Alviniconcha and mollusk mtCO1 genes were downloaded from BOLD [57]. BBSplit (https://sourceforge.net/projects/bbmap/) was then used to separate Alviniconcha symbiont and host reads based on the taxon-specific and background 16S rRNA and CO1 gene databases.
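The logic of the 16S:mtCO1 check can be sketched with a toy calculation (the read counts below are invented for illustration; the real counts come from the BBSplit mappings described above): if environmental symbiont reads came from host larvae or shed gill cells, host mtCO1 reads should travel with them and keep the ratio near host-tissue levels.

```python
def ratio_16s_to_co1(symbiont_16s_reads, host_co1_reads):
    """Ratio of symbiont 16S rRNA read counts to host mitochondrial CO1 read counts."""
    if host_co1_reads == 0:
        return float("inf")
    return symbiont_16s_reads / host_co1_reads

# Hypothetical counts: a diffuse-fluid sample versus a gill-tissue sample.
fluid_ratio = ratio_16s_to_co1(9_000, 3)       # almost no host signal in fluid
gill_ratio = ratio_16s_to_co1(50_000, 8_000)   # host CO1 well represented in gill
print(fluid_ratio, gill_ratio)  # environmental ratio orders of magnitude higher
```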
Results

Free-living and host-associated symbionts belong to the same bacterial species
Our analysis included nine A. hessleri symbiont MAGs from the Illium and Hafa Adai vent fields: six host-associated symbiont genomes assembled in this study, and three previously published, free-living symbiont genomes from the diffuse venting fluids around A. hessleri beds [25] (Supplementary Table 3). All host-associated and two of the three free-living MAGs were of very high quality, with > 90% completeness and < 3% contamination. The third free-living MAG, Hafa Adai-VC1, had a medium quality (~ 67% completeness). ANI values between all MAGs were > 97.7% (Supplementary Table 4), suggesting that the nine A. hessleri symbiont genomes belong to the same bacterial species [42] within the genus Thiolapillus based on the Genome Taxonomy Database, and confirming that the previously assembled free-living symbiont genomes were indeed A. hessleri symbionts (Supplementary Table 3). Corroborating the ANI results, the nine A. hessleri MAGs were monophyletic in our phylogenomic analysis relative to the gammaproteobacterial symbionts of other vent invertebrates (Fig. 2). In agreement with phylogenetic analyses of the 16S rRNA gene [27], the nearest neighbors of the A. hessleri symbionts were the Ifremeria nautilei SOX symbiont, as well as Thiolapillus brandeum, a microbe not known to be symbiotic [58].
Environmental samples contain free-living symbiont populations
To investigate whether the symbionts observed in the diffuse flow samples were true free-living symbionts rather than symbionts associated with A. hessleri larvae or shed gill tissue, we calculated the ratio of symbiont 16S rRNA gene to host mitochondrial CO1 gene reads in all nine environmental and host-associated samples (Table 1). If the environmental symbiont samples were associated with larvae or host tissue debris/cells, we would expect the ratios in the environmental and host-associated samples to be similar to one another. However, the 16S rRNA:mtCO1 ratio was consistently orders of magnitude higher in environmental samples than in host-associated samples, indicating the presence of a population of symbiont cells independent of host tissue. This finding provides evidence that our environmental samples include truly free-living A. hessleri symbiont populations.
A. hessleri symbiont populations are structured primarily by vent field, not lifestyle
Our genome assemblies from both host tissue and diffuse vent fluids likely represent the dominant symbiont strain in each sample, but do not reveal the full extent of strain-level population variation between samples. To determine whether A. hessleri symbionts form subpopulations consistent with geography or lifestyle, we created a pangenome out of the individual symbiont MAGs from the Illium and Hafa Adai vent fields that we used as a reference for variant calling (Supplementary Tables 5, 6). Our variant detection method resulted in 2177 sequence polymorphisms for investigation of population genomic structure based on FST and ordination analyses (Fig. 3).
FST values were calculated pairwise between all nine populations (Fig. 3a), as well as between samples pooled by lifestyle and vent field (Supplementary Table 7). Genetic isolation among individual samples was typically stronger between (0.54-0.76) than within (0.21-0.46) vent fields (i.e., Illium vs Hafa Adai-VC2). Within vent sites, the degree of differentiation was comparable among samples independent of lifestyle at Illium, while host-associated samples were more similar to one another than to free-living samples at Hafa Adai-VC2. When samples were pooled, overall pairwise FST values were markedly higher by vent field (0.47 ± 0.03 s.d.) than by lifestyle (0.05 ± 0.01 s.d.). The dominant effect of geography on symbiont population structure was supported by PCoAs in which both free-living and host-associated samples from Illium clustered distinctly from Hafa Adai (VC1 and VC2) (Fig. 3b). Despite the fact that Hafa Adai-VC1 and -VC2 differ spatially by only ~ 5 m, the free-living VC1 sample formed its own subpopulation, distinct from both host-associated and free-living populations at VC2 (FST 0.58-0.67), suggesting very fine-scale geographic or environmental structuring.
These patterns were consistent in analyses based on 1271 and 793 variant sites that included the free-living, low-coverage symbiont samples from Burke and Alice Springs, respectively (Supplementary Figs. 1, 4; Supplementary Tables 8, 9). Burke represented the most divergent population, reaching FST values > 0.8 in all pairwise comparisons. Although Alice Springs clustered closely with free-living and host-associated symbionts from Illium in the PCoAs, FST values indicated a high degree of genetic isolation for this population (FST > 0.7). Analyses with samples pooled by vent field confirmed patterns of strong genetic differentiation between geographic locations without evidence for isolation-by-distance (Supplementary Table 10).
A. hessleri symbiont gene content differs by vent field, not lifestyle
Gene content variation between symbiont populations was assessed based on lifestyle and geography. Similar to the population structure analyses, PCoA plots based on gene content variation across all nine host-associated and free-living populations revealed clustering by vent field but not by lifestyle: symbiont populations from Hafa Adai-VC1 and Hafa Adai-VC2 were more similar to one another than to Illium (Fig. 4), although Hafa Adai-VC1 clustered as an independent population from all other samples. Gene content differed more substantially by geography than by lifestyle: the Illium symbionts had 44 unique gene clusters, and the Hafa Adai (VC1 and VC2) symbionts had 26 (Fig. 5a, Supplementary Table 11), while only three gene clusters in total were unique by lifestyle (group_681 for host-associated; group_2104 and group_2131 for free-living). However, these genes could not be characterized by any database we used for functional annotations. For all unique gene clusters across both biogeography and lifestyle, hypothetical and unknown proteins based on Prokka and the NR database were also assessed via KEGG and Alphafold, but yielded low-confidence results. Of the successfully annotated genes unique to the Illium symbionts, most were predicted to be involved in the mobilome and DNA metabolism, followed by membrane transport; virulence, disease, and defense; RNA metabolism; sulfur metabolism; cell signaling and regulation; conjugation; iron metabolism; glycolysis and gluconeogenesis; and detoxification and stress response. Genes unique to the Hafa Adai (VC1 and VC2) symbionts were predominantly associated with the mobilome, followed by membrane transport; RNA metabolism; motility and chemotaxis; DNA metabolism; virulence, disease, and defense; and glycolysis and gluconeogenesis.
Given the small-scale geographic structuring found between VC1 and VC2 at Hafa Adai, and given that VC2 has a larger sample size to represent its subpopulation, we also compared the unique genes between Illium and Hafa Adai-VC2 symbionts alone (i.e., without VC1) (Supplementary Table 12, Fig. 5b). In this case, there were 62 unique gene clusters for symbionts from Illium and 28 unique gene clusters for symbionts from Hafa Adai-VC2 (Fig. 5b), i.e., two additional as compared to VC1 and VC2 combined. Only one of the genes unique to the Hafa Adai symbionts could be annotated and fell under the larger subcategory of "Virulence, Disease and Defense," whereas unique genes of the Illium symbionts spanned a variety of metabolic functions. Analyses that included symbiont reads from Alice Springs and Burke (Supplementary Figs. 2, 3, 5; Supplementary Tables 13, 14) further supported the effect of geography over lifestyle on gene content variation in the A. hessleri symbionts. The population at Burke harbored a single unique, uncharacterized gene (Supplementary Table 13). When pooled with Illium as a "northern site," additional unique genes related to DNA metabolism and membrane transport were found, followed by genes involved in the mobilome, RNA metabolism, virulence, glycolysis and gluconeogenesis, cell signaling, conjugation, and stress response (Supplementary Fig. 3; Supplementary Table 13).
Alice Springs harbored three uncharacterized or hypothetical genes. When all three northern sites (Alice Springs, Illium, and Burke) were pooled together, seven unique genes were found. Four of these were related to DNA metabolism, virulence, conjugation, and transposition (Supplementary Table 14). Since Alice Springs and Illium are more geochemically similar to one another than either vent is to Burke [25], we also investigated the unique genes shared by these two vent fields alone: four unique genes were found, one of which fell under the functional category of virulence, disease, and defense.
Discussion
Here, we compared free-living and host-associated symbiont populations of Alviniconcha hessleri from two vent fields in the Mariana Back-Arc. Based on ANI and taxonomic assignments, our nine representative, medium- to high-quality MAGs can be considered to represent a single species within the genus Thiolapillus [58]. Our results provide strong evidence that diffuse fluid flow microbial communities include populations of free-living symbionts, further supporting an expected model of horizontal transmission in Alviniconcha species [18,59].
Both population structure and gene content analyses suggest that A. hessleri symbionts form subpopulations that segregate by geography more strongly than by lifestyle. These patterns agree with previous studies of non-symbiotic hydrothermal vent microbial communities, which show that microbes are shaped by their local environment [60], as well as of host-associated A. hessleri symbiont biogeography at the 16S rRNA gene level [27] and other horizontally transmitted associations from hydrothermal vents, such as bathymodiolin mussels [49,61,62] and provannid snails [18], that have been shown to partner with habitat-specific symbiont strains. These results, therefore, provide further evidence for horizontal transmission in the A. hessleri symbiont system. Such uptake of environmental symbiont strains bears a risk of infection to the host by cheaters [16], but also enhances an animal's ability to flexibly associate with locally available symbiont strains and, therefore, to maximize the habitat range in which they can settle [2,18]. Furthermore, since hydrothermal vents are ephemeral and geochemically dynamic habitats that harbor microbial communities shaped by local environmental conditions [60], it may be ecologically and evolutionarily advantageous for vent animals to acquire symbiont strains that are likely locally adapted [63].
The dynamics of microbial interaction with the host during acquisition and release processes can have significant impacts on the population structure and composition of horizontally transmitted symbionts. It is not known whether A. hessleri can replenish or recycle its symbionts, or if symbiont acquisition occurs only once upon settlement. For example, hydrothermal vent tubeworms seed the environment with their symbionts only upon death [5], Bathymodiolus mussels can acquire and release their symbionts throughout their lifetime [11,64], and Vibrio fischeri symbionts are expelled every morning by their sepiolid squid host [65]. In V. fischeri, it is well established that evolution in the free-living stage (for example, via horizontal gene transfer) impacts the evolution of host-microbe interactions, though the role of novel mutations remains unclear [65]. Although A. hessleri symbionts were overall more strongly partitioned by geography than by lifestyle, all symbiont samples were genetically distinct from each other and formed separate free-living or host-associated subpopulations. These findings suggest that symbiont exchanges between host and environment throughout the lifetime of the host are limited but might occur occasionally via symbiont uptake or release [49], thereby leading to mixing of host-associated and free-living symbiont pools. Periodic switching of symbiont strains could increase shared genetic variation among intra- and extra-host symbiont populations, while maintaining geographic differentiation in the presence of dispersal barriers and/or environmental selection. All samples from Illium showed a comparably small degree of differentiation from each other, while samples from Hafa Adai were notably divergent between free-living and host-associated lifestyles.
These patterns could arise from differences in the sampling locations of the free-living symbiont populations (e.g., distance from the snail beds) and/or the age of the Alviniconcha host individuals. Although we do not have size-related data for the collected specimens, it is possible that the snail individuals from Hafa Adai were older than those from Illium, giving host-associated symbiont populations more time to diverge from their free-living counterparts. Strong genetic differentiation between host-associated and free-living symbiont populations can be expected if hosts take up similar symbiont strains that have limited exchange with the environment post-infection, while the free-living symbiont population experiences more turnover.
The high genetic isolation of symbiont populations observed between vent fields may reflect the influence of both neutral (e.g., dispersal barriers and isolation-by-distance) and selective processes (e.g., adaptation to habitat differences between vent fields) on symbiont biogeography. Illium, Burke, and Alice Springs are all northern vent fields within the Mariana Back-Arc Basin that are characterized by sites of low-temperature diffuse fluid flow, while Hafa Adai is located further south and contains high-temperature black smokers [25]. Illium and Alice Springs are similar geochemically, notably in that they are both low in H2S concentrations, while Burke and Hafa Adai exhibit elevated H2S concentrations [25]. The close proximity (~ 360 m) and overlap in geochemical characteristics between Alice Springs and Illium may explain why these vent fields clustered together in our population structure analyses. By contrast, Burke's distinct geochemical signature might contribute to the high genetic isolation seen for this vent field, despite its relative proximity to Alice Springs and Illium (~ 4 km). Overall, however, no clear pattern of isolation-by-distance was observed, indicating that ecological factors might play a more important role than dispersal barriers in shaping symbiont population structure, in agreement with the oceanographic connectivity between the northern and central Mariana Back-Arc Basin [66].
Interestingly, Hafa Adai-VC1, while more similar to Hafa Adai-VC2 than to any other vent site, represented its own symbiont subpopulation, suggesting small-scale population structuring of symbionts within vent fields. Local patchiness of symbionts, as observed in our study, mirrors patterns found for host-associated symbionts of cold-seep vestimentiferan tubeworms [67] and Acropora corals [68]. Although Hafa Adai-VC1 and -VC2 were only ~ 5 m apart, it is possible that Alviniconcha hessleri symbionts have extremely low dispersal potential that could be further reduced by small-scale circulation within vent sites due to physical structuring in the subseafloor [69,70]. Alternatively, micro-niche adaptation driven by locally fluctuating environmental conditions might contribute to these patterns.
Among the identified differences in gene content, symbionts from Illium uniquely harbored genes related to iron and sulfur metabolism. As iron and sulfur concentrations appear to be reduced at northern Mariana Back-Arc vents [25,71] and are typically lower in diffuse flow than in black smoker fluids such as those found at Hafa Adai, it is possible that symbionts at Illium harbor high-affinity sulfur and Fe2+ transporters to efficiently obtain these essential elements for their metabolism. All symbiont populations, including the low-coverage samples from Alice Springs and Burke, showed differences in the presence of genes related to the mobilome and to virulence, disease, and defense. This suggests that each vent field supports distinct viral communities that may uniquely infect and interact with the symbionts, as hydrothermal vent viruses have restricted bacterial and archaeal host ranges, and viral communities are typically endemic to a given vent site due to limited dispersal or environmental selection [72,73]. The high number of unique genes related to the mobilome may be a consequence of integrated phage-derived genetic material that reflects the local, free-living viral communities.
Conclusions
Our research demonstrates that Alviniconcha hessleri symbiont populations are primarily structured by geography rather than by their host-associated or free-living lifestyle. Future work using population genomic approaches should help clarify the predominant force(s) shaping the geographic population structure, as recent analyses of the symbionts associated with other Alviniconcha species suggest that both genetic drift and local adaptation play a role in symbiont biogeography [18]. Although our analyses indicate a weak effect of lifestyle on symbiont genetic structure, it is possible that free-living and host-associated populations are characterized by differences in gene expression. A comparison of gene expression between lifestyles may provide additional clarity on the extent to which these symbiont subpopulations differ functionally. Our work also strengthens previous evidence for horizontal symbiont transmission in Alviniconcha species [18,59], despite the fact that almost nothing is currently known about the dynamics of symbiont acquisition and release in these species. Given that A. hessleri has been classified as "Vulnerable" on the IUCN Red List (https://www.iucnredlist.org) and is a dominant species at vents that are part of the Marianas Trench Marine National Monument, it is critical for future conservation and management that we understand the genetic connectivity of the symbiotic microbes that support this foundation species.
Dimensionality of datasets in object detection networks
In recent years, convolutional neural networks (CNNs) have been used for a large number of tasks in computer vision. One of them is object detection for autonomous driving. Although CNNs are used widely in many areas, what happens inside the network remains unexplained on many levels. Our goal is to determine the effect of intrinsic dimension (i.e. the minimum number of parameters required to represent data) in different layers on the accuracy of an object detection network for augmented data sets. Our investigation determines that there is a difference between the representation of normal and augmented data during feature extraction.
Introduction and Related work
(Fig 1 is taken from [3].)
Autonomous driving is a trending area of research in computer vision. Neural networks are an integral part of the autonomous driving pipeline, through which images and lidar points are processed to predict objects. Events have been witnessed where weather conditions led to disastrous consequences for self-driving cars; e.g., in 2016 a Tesla self-driving car failed to discriminate between a white tractor and a bright sky [16]. Our objective is to estimate the intrinsic dimension (ID) of augmented data sets in an object detection network trained on normal data, in order to observe changes in data representation due to noise or affine transformations. Bac et al. [4] state that estimation of ID is important in choosing machine learning methods and their applications, including validation, deployment, and explainability. Recognition of labels in intrinsic space is efficient in terms of memory requirements and computation time [8]. It was found in [1] that addition of noise to the input increases ID. In our study, the TwoNN [6] algorithm (Fig 1) is used to estimate ID. It is based on the ratio of distances to the two nearest neighbours, which makes it computationally efficient and also overcomes the issue of data lying on a curved manifold. It is a numerically consistent and reliable estimator even in the presence of a low number of points. From the available local and global ID estimators, the TwoNN algorithm is used for ID estimation because of the interesting results in [3]. The aim of this paper is to verify, first, whether a similar characteristic shape is evident in the case of augmented data sets; second, whether the classification layer ID provides an idea about network performance; third, whether there is an increase in ID due to irrelevant features; and fourth, whether the augmented data representations behave like those of an untrained network. Three data sets are used to study the effect of different data on augmentations.
ID is analyzed in Faster R-CNN [14] with VGG-16 and VGG-19 [15] backbones for the KITTI [7], MS COCO [11] and VOC [5] data sets. An increase in ID due to vertical shift augmentation is observed for KITTI data. The behaviour of rotated images resembles the representation of data in untrained networks for all data sets, and the dimensional behaviour of COCO data at the classification layer is opposite to that of KITTI and VOC.
Intrinsic Dimension
One of the geometric properties of data representations in a neural network is the intrinsic dimension, i.e. the minimum number of coordinates required to represent the data without information loss. Local ID estimators [2,9] compute the ID in local subspaces of the data representation, while global ID estimators [6] compute it over the whole set of data points. Both global and local ID estimators can thus be used, depending on the data neighbourhood of interest. Our aim is to estimate ID at different layers of object detection networks and determine the relationship between the average precision on augmented data and the estimated ID [1]. In [17], ID characteristics are distinguishable for normal and adversarially generated samples in local space. This motivates us to experiment with ID estimation in global space. The TwoNN algorithm is implemented in our paper to estimate ID.
• Compute pairwise distances for each point in the data set.
• For each point i find the two shortest distances r_1 and r_2 (with r_1 ≤ r_2) and compute µ_i = r_2/r_1.
• Sort the values of µ in ascending order through a permutation σ, then define the empirical cumulate F_emp(µ_σ(i)) = i/N.
• Fit the points of the plane given by the coordinates {log(µ_i), −log(1 − F_emp(µ_i))} with a straight line passing through the origin.
The slope of the line gives us an estimate of ID. With this approach, the estimated ID is asymptotically correct even for data sampled from non-uniform probability distributions. The TwoNN algorithm is referenced from [6].
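The four steps above can be sketched in a few lines of NumPy. This is our own minimal re-implementation of the published TwoNN procedure, not the authors' code:

```python
import numpy as np

def twonn_id(points):
    """TwoNN intrinsic dimension estimate: fit
    -log(1 - F_emp(mu)) = d * log(mu) by a line through the origin."""
    X = np.asarray(points, dtype=float)
    n = len(X)
    # Pairwise squared Euclidean distances (ignore self-distance).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    dists = np.sort(np.sqrt(d2), axis=1)
    r1, r2 = dists[:, 0], dists[:, 1]   # two nearest-neighbour distances
    mu = np.sort(r2 / r1)               # ratio of 2nd to 1st NN distance
    # Empirical cumulate; drop the last point where log(1 - F) diverges.
    f_emp = np.arange(1, n + 1) / n
    x = np.log(mu[:-1])
    y = -np.log(1.0 - f_emp[:-1])
    # Least-squares slope of a straight line through the origin.
    return float(np.dot(x, y) / np.dot(x, x))
```

On data sampled uniformly from a 2-D plane embedded in a higher-dimensional space, the estimate returned is close to 2, as the method intends.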
Experiments
In this paper, ID is computed at each pooling layer of the VGG backbone network (labeled pool1, ..., pool5). After the feature extraction layers, in Faster R-CNN, ID is computed at the classification layer (rpn_c) and the bounding box layer (rpn_b) in the region proposal network; then at the ROI pooling layer (roi), the second FC layer (fc), and again at the classification (cls_p) and bounding box (box_p) layers at the end. In the RetinaNet architecture, ID is computed at each pooling layer in the VGG backbone, followed by the classification head convolution block (cls_h), the classification layer (cls_l), the regression head convolution block (box_h) and the bounding box layer (box_r). The reason to compute ID after a block of layers instead of after every single layer is computational requirements [3]. ID is also estimated for the MS COCO data set on a Faster R-CNN model trained on VOC data, and alternatively for the VOC data set on a model trained on COCO data (Fig 3b). Other implemented augmentations from … . During ID estimation, the bounding box with the highest score is used as input to the ROI pooling layer from the region proposal network; due to a constraint of our ID estimation algorithm, where each image is represented as a single point at each layer of the network, this leads to no change in ID at layers after the RPN. What happens when using the bounding boxes with the lowest scores? Our results were not affected, because average precision depends on all objects predicted by the network. Another reason is the removal of images from the estimation process if there are no bounding box predictions, because in such scenarios there would be no data points for representation at the ROI pooling layer. With a 1200-pixel square image, the memory requirement for computing the high-dimensional tensor (400 x 2304000) is 33.8G; so, to reduce the computational requirement and save time, ID is estimated using 400 images.
To check the stability of the results, ID is estimated for both small and large image sizes; the ID value is higher in the case of a larger image, but the ID follows a similar structure when plotted against the layers used for estimation. Plots can be found in our repository (https://github.com/ajaychawda58/ID CNN). As per the findings in (see 3.1 in [3]), for classification tasks a hunchback shape is evident in trained networks, whereas an untrained network displays a flat profile. In our experiments, a flatter trajectory for rotated images (Fig 2) is observed, which indicates that rotated images have a poor representation in the manifold. This is supported by the evaluation of rotated images, where average precision (Table 1) is low compared to other augmentations over all data sets. A hunchback profile for the other augmented data sets, with varying ID at different layers, is present in (Fig 2); hence they are represented better within the network in comparison to rotated images.
Results
Vertical shift in KITTI (Fig 2a) has a high ID of ∼187, whereas the normal data has an ID of ∼84 at the pool1 layer. This may be because of irrelevant features, like the filling of the resized image with interpolation, that contribute to an increase in ID [3], and because the original image size of KITTI is around 1200 x 350. When an image is shifted vertically and the empty pixels are filled by interpolation, the added pixels are irrelevant features to the network. Compared with COCO and VOC (Fig 2b & 2c), the large difference between vertical shift and normal data is absent. Therefore, the increased ID can be attributed to the 3:1 aspect ratio of KITTI images, because in the case of COCO and VOC the aspect ratio is close to 1:1. If the increase in ID were only due to the filling of the shifted image, it would also be present in horizontally shifted images; the absence of increased ID in the initial pooling layers for horizontal shift supports our claim.
The ID of the classification layer does not predict object detection performance, in contradiction to (see 3.2 in [3]), which reports a relationship between the last hidden layer and classification accuracy. In our case, the ID of the last hidden layer (fc layer) also has no relationship with AP (Table 1). So, using the TwoNN [6] algorithm, a dependence of ID on AP over the data sets cannot be confirmed, but a difference in ID is observed at the feature extraction level, which motivates us to investigate our hypothesis using a different approach later. RetinaNet on KITTI data performs similarly for both backbone networks, with a slightly increased ID at the bounding box head, indicating that with VGG-19 the network generalizes [18] worse at the bounding box head layer in comparison to the VGG-16 backbone.
Comparing Fig 2a, Fig 2b and Fig 2c, we observe that the ID is lower in the classification layer than in the bounding box layer for KITTI and VOC data, but for COCO data the phenomenon is reversed, with a higher ID in the classification layer than in the bounding box layer. One possibility is that the network generalizes poorly at the classification layer due to the large number of classes (n=91) [12]. By evaluating COCO data on a model trained on VOC data and vice versa, our goal is to investigate how different data sets behave within a network trained on another data set. There is a decrease in ID at the pool3 layer for both data sets. The decrease can be attributed to the change in the number of classes in the network affecting the ID at this particular layer, because the other hyperparameters of the network are the same for both data sets.
Conclusion and Future work
The presented approach is based on data representation in object detection networks estimated via ID. Results are compared against the classification task in [3], and we observe that they are comparable at the feature extraction level but not beyond the region proposal network. The approach is constrained by the choice of ID estimator, but interesting behaviour is still observed at the backbone level, which motivates continuing the research with different estimators. Further research will compare the current results with models trained on augmentations and with networks without proposals, e.g. YOLO [13], and eliminate the bottleneck of ID estimation in the current approach. Our work starts at a basic level by estimating the ID of data sets on Faster R-CNN, which indicates the novelty of the approach, and we hope to find more explanations about object detection networks in the future.
The Potential Effect of Sugar-Sweetened Beverages Tax on Obesity Prevalence in Tanzania
Background: Obesity and the associated non-communicable diseases contribute significantly to the disease burden in Tanzania. Obesity can be attributed to the consumption of Sugar-Sweetened Beverages (SSBs) due to their high sugar content, which leads to high caloric intake. This study estimates the effect of an SSB tax on the prevalence of obesity. Methods: A mathematical model is developed that compares a reference population, which is unchanged, with a counterfactual population in which the tax intervention has been introduced. Changes in the price and consumption of SSBs, and subsequent changes in energy intake, are applied to estimate the body mass change by age group. The change in body mass by age group is merged with the reference population to estimate changes in body mass index and obesity. Results: Imposing a 20% SSB tax in Tanzania is estimated to reduce the average overall energy intake by 76.1 kJ per person per day. This change is associated with an overall reduction in the prevalence of obesity of 6.6%, and of 12.9% and 5.2% in adult males and adult females, respectively. The number of obese people would potentially decrease by about 47,000 among adult males and about 85,000 among adult females from current levels. Conclusions: The SSB tax is a potential strategy to complement efforts to reduce obesity prevalence in Tanzania. The revenue generated from the tax should be channelled towards public health promotion programs.
BACKGROUND
Obesity is a growing global challenge in terms of prevalence, health outcomes and economic burden. The World Health Organisation (WHO) estimated that 39% (1.9 billion) of adults aged 18 years and above were overweight and about 13% (650 million) of the world's adult population (11% of men and 15% of women) were obese. 1 The WHO report also shows that an estimated 41 million children under the age of 5 years were overweight or obese. Once considered a high-income country problem, overweight and obesity are now on the rise in low- and middle-income countries, particularly in urban settings. In Africa, the number of overweight children under 5 increased by nearly 50 per cent between 2000 and 2014. 2 The prevalence of obesity and overweight in Sub-Saharan Africa has continued to increase among women and urban populations. 3,4 In Tanzania, the obesity prevalence rate has increased drastically for both men and women, from 5.9% in 2014 to 8.4% in 2016. 5 Being overweight or obese contributes to the high prevalence of non-communicable disease (NCD) risk factors, such as diabetes, cardiovascular diseases (CVDs) and cancer, and to the overall health effects. 6][9] It is estimated that in Tanzania the share of deaths due to NCDs increased from 19.5% of all deaths in 2000 to 25.8% in 2010, and further to about 32.9% in 2016. 1 According to the Global Burden of Disease report, 10 metabolic risks contributed 17.3% of total deaths (both sexes, all ages), of which 4.95% are directly attributed to high body mass index. Metabolic risks contributed 7.36% of total disability-adjusted life years (DALYs) in 2019, of which 2.55% of total DALYs were directly attributed to high body mass index.
2][13] In addition, increasing SSB consumption leads to other NCDs such as CVD, type II diabetes, dental caries and metabolic syndrome. 14 Furthermore, through habitual consumption of a higher caloric intake from SSBs in childhood, the risk of obesity can persist into adulthood. 15 The catastrophic expenses of the cost of care, loss of income and other indirect costs of treating NCDs place a heavy financial burden on families. As more people suffer and die from costly chronic NCDs and fall into poverty, the government is consequently expected to shoulder the tremendous cost of treating NCDs. 7][18] Several studies have used mathematical simulation models to analyse the impact of SSB taxes on SSB consumption, subsequent caloric intake, and obesity prevalence. 16,18][21][22][23] However, very few studies have been conducted in developing countries, where consumption patterns and the tax structure and mechanism are different from those in developed countries.
Over the years, Tanzania has been increasing taxes on alcohol and SSBs with the aim of generating revenue. Rarely were those fiscal measures implemented as corrective taxes with the aim of discouraging consumption of alcohol and SSBs. Evidence on how taxation of SSBs would reduce SSB consumption and consequently reduce obesity prevalence in Tanzania remains unknown. This study, therefore, seeks to fill that gap by investigating the potential impact of an SSB tax on obesity prevalence in Tanzania using mathematical simulation models.
METHODS
The imposition of an SSB tax is expected to be passed on to consumers through higher prices of SSB products. Assuming that SSBs are normal goods, the income and substitution effects of the price increase will lead to lower consumption of SSBs according to the price elasticity of demand for the SSB product. Changes in the amount of SSBs consumed will lead to changes in the total intake of calories, which in turn will lead to a change in energy balance, resulting in changes in body weight and eventually a change in Body Mass Index (BMI), the measure of obesity and overweight (Figure 1). Obesity and overweight follow the standard BMI classification: Underweight (BMI < 18.5); Normal weight (BMI 18.5-24.9); Overweight (BMI 25.0-29.9); and Obesity (BMI 30+). A mathematical simulation model is constructed and executed using Microsoft Excel and STATA software to estimate the effect of the SSB tax on obesity in Tanzania. The effects of different tax rates on the prevalence of obesity are tested. The analysis presents a partial equilibrium effect and is disaggregated by gender and across age groups to explore heterogeneities.
Data and Assumptions Pass Through Rate
The pass-through rate is the proportion of a tax change that is passed on to buyers in the form of price changes. The SSB tax, once introduced, may be passed on in full to consumers, or manufacturers and retailers may absorb some of the tax by reducing price margins. In some cases a pass-through rate may even exceed 100%. Various research informed the pass-through rate to be assumed. The study by Besley and Rosen using data from the USA suggested that the pass-through rate was in excess of 100% for soft drinks. 24 The study by Berardi, Sevestre, Tepaut and Vigneron 25 showed that a 'soda tax' was fully shifted to soda prices and almost fully shifted to the prices of fruit drinks. However, the study of the Irish tax on SSBs in the 1980s 26 suggests a pass-through rate of less than 100%. In cases where there is uncertainty about the pass-through rate, it is considered reasonable to assume a rate of 100%. 27 The study by Briggs and co-authors 20 assessing the impact of a 10% SSB tax on obesity assumed a pass-through rate of between 80% and 100%, whereas the study by Manyema and co-authors in South Africa 16 assumes a pass-through rate of 100%. Since we do not have data for Tanzania, it seems reasonable to assume a pass-through rate of 100%.
Price Elasticities
Price elasticity refers to the rate of response of the quantity of a good demanded when its price increases. Own-price elasticity measures the change in demand for a good in response to price changes of the same good. Cross-price elasticity is the change in purchases of a good in response to price changes of another good. Price elasticity estimates from the Economic and Social Research Foundation (ESRF) survey data are used. The survey collected data on how much SSBs and their substitutes individuals consumed in the past seven days, and then asked how much they would consume if the price rose by 20%. The responses were then used to calculate the elasticities. The reference period of one week was used to reduce information and recall bias.
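The elasticity computation implied by this survey design (quantity consumed in the past week versus quantity under a hypothetical 20% price rise) can be sketched as follows; the function name and example volumes are ours, invented for illustration:

```python
def price_elasticity(q_before, q_after, price_change=0.20):
    """Elasticity = percentage change in quantity demanded divided by
    the percentage change in price (here, the survey's 20% rise).
    With the same good's quantities this gives the own-price elasticity;
    with a substitute's quantities it gives the cross-price elasticity."""
    return ((q_after - q_before) / q_before) / price_change

# Illustrative weekly volumes (ml): SSB consumption falls, a substitute rises.
own = price_elasticity(1050.0, 800.0)     # own-price elasticity of SSBs
cross = price_elasticity(1370.0, 1450.0)  # cross-price elasticity of tea/coffee
```

A negative own-price elasticity and a positive cross-price elasticity reproduce the sign pattern the study reports in Table 3.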
The ESRF survey data collected information from the following groups: households, patients, caretakers, and health workers. The field work was conducted in eight (8) regions, namely: Dar es Salaam, Dodoma, Arusha, Mbeya, Tanga, Mwanza, Mtwara, and Kigoma, representing each of the geographical zones in Tanzania.
In each region, one district was randomly selected from which a random selection of households and patients was then done.For health workers, the sampling was purposive to get one who could provide the best required information.Different questionnaires and interview guides were developed for each category of respondents depending on the type of information sought from each group.The number and distribution of targeted samples that have been collected by regions can be seen in Table 1.
Prevalence of Obesity in Tanzania
Obesity was measured by BMI. BMI was estimated from the anonymized dataset of the third wave of the Tanzania National Panel Survey (TNPS), which was conducted in 2012/13. These TNPS were implemented by Tanzania's National Bureau of Statistics (NBS) and are part of the Living Standard Measurement Studies initiated and partially funded by the World Bank. The survey data was collected from October 2012 to November 2013. The TNPS is a national-level longitudinal survey designed to provide data from the same households over time in an attempt to understand poverty dynamics and to evaluate policy impacts in the country. The TNPS is based on a stratified, multi-stage cluster sample design. The sampling frame for the third wave is the 2002 Population and Housing Census, more specifically the National Master Sample Frame, which is a list of all populated enumeration areas in the country. The dataset contains information for 25,412 individuals from 5,050 households. Among the individuals, only those who were 15 years of age or above were considered for the analysis (from the sample, 13,239 individuals were 15 years or above). The TNPS household survey aimed to collect household and individual data as well as anthropometric measures. Data was cleaned and coded using STATA Version 14. For analysis, the sample was disaggregated by age and sex. BMI for each adult whose measurement was taken was computed as weight in kilograms divided by the square of height in metres. Extreme BMI values falling below 10 and above 60 were excluded from the sample used for the analysis.
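A minimal sketch of the BMI derivation and cleaning rule described above (the record keys are hypothetical, not the TNPS variable names):

```python
def clean_bmi(individuals):
    """Compute BMI (kg/m^2) for individuals aged 15+ and drop extreme
    values (BMI below 10 or above 60), as described in the text."""
    kept = []
    for person in individuals:
        if person["age"] < 15:
            continue  # only those 15 years of age or above are analysed
        bmi = person["weight_kg"] / person["height_m"] ** 2
        if 10 <= bmi <= 60:
            kept.append({**person, "bmi": bmi})
    return kept
```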
Modelling
Step 1 - Effect of SSB tax introduction on SSB consumption. A valoric tax rate of 20% and a 100% pass-through rate are used to estimate a price rise, which together with the own-price elasticity for SSBs is used to estimate the percentage change in purchasing, and hence consumption, of SSBs. The own-price elasticities for SSBs and the cross elasticities for SSB substitutes are used to estimate the changes in their consumption. Consumption of beverages was measured in millilitres per person per day.
Step 2 - Effect of change in SSB consumption on energy intake. Average calorie density estimates for each drink are used to convert the change in volume consumed into a change in energy intake, assuming the percentage change in energy intake to be the same as the percentage change in the volumes of SSBs and their substitutes consumed. The changes in caloric intake for each beverage type are assumed to give the net change in energy intake. The different baseline beverage consumption levels by age and sex, combined with the percentage change in consumption, give different absolute estimates of the change in amount consumed by age and sex.
Step 3 - Effect of change in energy intake on body mass index and obesity prevalence. Change in body mass is estimated using mathematical relationships established by previous studies. It is assumed that a new 'steady state' body mass is reached if total energy intake and/or the level of physical activity change. 21 In the modelling conducted in this study, we assume that the average level of physical activity is unchanged, so all the derived changes in body mass come from the change in energy intake. The study adopts the conversion rate used by Manyema and co-authors, 16 under which a sustained change in daily energy intake of 94 kJ/day changes the equilibrium body mass of an adult by 1 kg. 28 On average, half of the body mass change occurs in one year and 95% of the change in three years. 29 This change in average body mass is converted to a change in average BMI in a particular age group by using the heights of individuals in that age group from the third wave of TNPS data.
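Steps 1-3 can be chained in a single function. The 100% pass-through and the 94 kJ/day-per-kilogram steady-state conversion come from the text; the elasticity and energy density values in the example are illustrative placeholders, not the study's estimates:

```python
def tax_to_body_mass(tax_rate, own_price_elasticity,
                     baseline_ml_per_day, energy_kj_per_ml,
                     pass_through=1.0):
    """Return (change in daily energy intake in kJ, change in
    steady-state body mass in kg) for one beverage category."""
    # Step 1: price rise and resulting percentage change in consumption.
    pct_price = tax_rate * pass_through
    pct_consumption = own_price_elasticity * pct_price
    delta_ml = baseline_ml_per_day * pct_consumption
    # Step 2: change in daily energy intake.
    delta_kj = delta_ml * energy_kj_per_ml
    # Step 3: steady-state body mass change (94 kJ/day sustains 1 kg).
    delta_kg = delta_kj / 94.0
    return delta_kj, delta_kg

# 20% tax, placeholder elasticity -1.2, the survey's average 150.8 ml/day,
# and a placeholder energy density of 1.8 kJ/ml.
dkj, dkg = tax_to_body_mass(0.20, -1.2, 150.8, 1.8)
```

In the full model this calculation is repeated per beverage, age group and sex, and the substitutes' cross-elasticity responses are netted against the SSB reduction.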
Apart from SSB consumption, other factors may also correlate with obesity. The main confounders are physical activity and diet; others include age, sex, socioeconomic status, and location (rural/urban). While the data used cannot account for all confounding factors, we account for some by disaggregating the analysis by age group and sex.
Baseline Consumption
Baseline consumption data from the ESRF survey show that, on average, adults in the sample consume 150.8 ml of SSBs, 3.4 ml of diet drinks, 123.6 ml of milk and 196.4 ml of tea or coffee a day. Adults in the 25-34 years age group consume more SSBs (213 ml per day) than any other group, and consumption then declines with age, while those who are 65 years and above consume the least amount of SSBs (on average, 98.5 ml per day) (Table 2).
The baseline consumption of SSBs substitutes is on average 323.4 ml per day.A great part of this consumption is tea or coffee (196.4 ml).Those above 45 years consume relatively more of SSB substitutes compared to those below.
Change in Daily Energy Intake and Body Mass
We assume a pass-through rate of 100%, which implies that if a 20% valoric tax is imposed on SSBs, the price will also increase by the same percentage (20%). The change in the price of SSBs will translate into a change in consumption of SSBs; the magnitude of this change will depend on the price elasticity of the particular product. The change in the price of SSBs may also affect the consumption of SSB substitutes, the magnitude of which will depend on the cross-price elasticity.
Table 3 presents the own- and cross-price elasticities computed from the ESRF survey. The own-price elasticity of SSB products is negative, implying that imposing a tax on SSBs decreases the amount of SSBs purchased and consumed. This is because the price of SSBs becomes relatively higher compared to their substitutes (substitution effect), and/or because, for a given income, purchasing power decreases because of the higher general price level (income effect). With the exception of diet drinks, the cross elasticity of SSB substitutes is positive, implying that an increase in the price of SSBs will increase consumption of SSB substitutes. Table 4 presents the impact of an SSB valoric tax of 20% on total daily energy intake. The average overall change in energy intake is 76.14 kJ per day per person. The changes in energy intake are statistically significant for all age groups in the overall sample and among males, but significant only for some age groups (25-34 years and 55-64 years) among females. Changes in energy intake are greater among males than among females. The reduction in energy intake varies by age but without a clear and consistent pattern, reflecting the consumption pattern. The reduction in energy intake is highest among males in the 25-34 years age group (288.6 kJ per day) compared to any other group, and is lowest among females aged 35-44 years (159 kJ per day) (Table 4).
Reductions in daily energy intake translate into reductions in body mass according to the established conversion rates. The changes in body mass presented in Table 5 are directly proportional to the changes in energy intake. So, similar to the changes in energy intake, the changes in body mass are statistically significant for all age groups in the overall sample and among males, but significant only for females in the 25-34 years and 55-64 years age groups.
Change in BMI
Figure 2 shows the mean BMI levels at the baseline and after the SSB tax intervention for both men and women based on anthropometric measures of adults above 15 years from the third wave of the TNPS.
The baseline mean BMI is higher for females than for males. On average, at baseline, the BMI for males is 21.6 kg/m² and for females 23.3 kg/m². The 20% SSB tax leads to a decline in BMI for both men, by 0.5 kg/m² (equivalent to a 2.3% decrease), and women, by 0.29 kg/m² (equivalent to 1.3%); BMI still remains higher for females after the tax (Figure 3). Figure 3 shows the mean BMI levels before and after the intervention by sex and age group.
The baseline mean BMI is higher among females than males in all age groups except those aged 65 and above. Those in the lowest age group (15-24 years) have the lowest average BMI for both males and females. Among females, the middle-aged (45-54 years) have the highest average BMI, while among males the older group (65+ years) has, by far, the highest average BMI compared to other age groups.
The imposition of 20% SSB tax leads to significant BMI declines for all adults in all sex and age groups (Table 6).The decline is higher among those in age group 18-29 years.
Effect on Obesity
The prevalence of obesity at the baseline is 6.5%, and it is more pronounced among females, 9.5% of whom are obese, compared to 2.7% of males, as shown in Figure 4. The mathematical model projects the overall prevalence of obesity to go down by 0.4 percentage points, which is equivalent to a 6.6% change. Obesity declines more among males, going down by 0.3 percentage points, equivalent to a 12.9% change. Obesity among females declines by 0.5 percentage points, which is equivalent to a 5.2% change.
Analysing the prevalence of obesity by age group, it is observed that prevalence is high among middle-aged adults (35-64 years). This could probably be explained by the low level of metabolism in these age groups. However, the effect of the SSB tax on obesity prevalence does not suggest a systematic pattern by age group (Figure 5). The reduction in obesity is greater among those in the 25-34 years and 55-64 years age groups, while there is no change in obesity prevalence among those above 65 years. This is probably because the young and old age groups have less income, or tend to be more cautious with income allocation, and are thus more likely to switch consumption after the tax introduction. Further exploration of the impact of the SSB tax on BMI classes shows that the tax will reduce the prevalence of obesity and overweight (Figure 6). A reduction in SSB consumption is unlikely to increase the number of underweight people, since the source of the calories affects the quality of the nutrition (though this is not captured in the model). It is assumed that those who already have a low caloric intake will adjust their food intake, if spending on SSBs falls, so as to maintain a sufficient caloric intake.
Sensitivity Analysis
Sensitivity analysis is undertaken to assess the effects of various tax rates and pass-through rates on obesity prevalence. The results in Table 7 show, for different pass-through rates, that the higher the SSB tax rate, the greater the reduction in obesity prevalence for both males and females. There are therefore higher gains in terms of reduced obesity prevalence with higher tax rates. However, this may also be associated with higher costs to producers and consumers.
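The sensitivity analysis amounts to re-running the tax-to-body-mass chain over a grid of tax and pass-through rates; a sketch with placeholder elasticity and consumption values (only the 94 kJ/day-per-kilogram conversion comes from the text):

```python
def sensitivity_grid(tax_rates, pass_through_rates,
                     elasticity=-1.2, baseline_ml=150.8, kj_per_ml=1.8):
    """Steady-state body-mass change (kg) for each (tax, pass-through) pair."""
    grid = {}
    for tax in tax_rates:
        for pt in pass_through_rates:
            delta_kj = baseline_ml * elasticity * tax * pt * kj_per_ml
            grid[(tax, pt)] = delta_kj / 94.0  # 94 kJ/day sustains 1 kg
    return grid

grid = sensitivity_grid([0.10, 0.20, 0.30], [0.8, 1.0])
```

The grid reproduces the monotonic pattern reported in Table 7: higher tax rates and higher pass-through rates both yield larger body-mass reductions.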
The Cost of Implementing the SSB Tax Intervention
As well as knowing how effective a tax is as a public health intervention, its cost-effectiveness should also be understood. 30 The introduction or increase of a tax may bring in revenue, but implementation comes with its own administration cost. The due process for implementing a tax change in Tanzania should necessarily start with the finance ministry's fiscal (tax) policy decision regarding such matters as the relevant tax base or taxable item and the applicable tax rate. The policy choices made should then be enacted into law by parliament before the tax administration authority assumes responsibility for giving effect to the resulting legal provisions. This is the context in which the notion of implementation cost arises. The approach by Lal and co-authors, 18 which considered the cost of passing legislation in parliament; administration and compliance time costs; field audit time costs; field audit direct costs; accountant yearly salary (government); and accountant yearly salary (industry), is adopted with some modifications based on practical realities. (The details of the estimation are provided as supplementary materials.)
A total of TZS 69.1 million is estimated to be incurred in the first year of introduction of the fiscal (tax) policy intervention, comprising TZS 29.8 million as one-off cost in terms of preparation of the reform proposal and its passing into law, in the year of introduction of the reform.The other component comes in terms of continuous monitoring associated with increase in non-compliance risk arising from the additional/increased tax, estimated at TZS 39.3 million annually.
To permit a cost-benefit analysis, an estimate of the tax revenue that will arise from the introduction of a 20% increase in the tax rate on SSBs is also made. In computing this estimate, it is assumed that there are two goods, SSBs and substitutes for SSBs, and that specific taxes Ti are imposed on the two goods (consistent with Tanzania's imposition practice in the area of beverages). The total revenue R from these excises can be obtained by R = T1Q1 + T2Q2, where Qi is the quantity demanded of good i. Assuming that the supply of the two goods is perfectly elastic, the amount of tax increase per unit is equal to the increase in the demand price; that is, ΔPi = ΔTi. If the tax levied on SSBs is increased, the change in the total tax revenue can be obtained by differentiating R with respect to T1, the tax on good 1 (SSBs). The percentage increase in total tax revenue can then be calculated from this derivative using the own- and cross-price demand elasticities εi, where εi stands for the demand elasticity of the ith good.

From this formula, the total tax revenue is calculated and found to rise by 108.7%, from TZS 416 billion to TZS 868 billion, an increase of TZS 452 billion. Comparing the TZS 452 billion increase in tax revenue with the administration cost in the year of introduction of the measure (TZS 69.1 million) results in a cost-of-collection ratio (a measure of tax administration efficiency) of 0.02%, determined as cost of collection divided by tax revenue collected.

Fourthly, the model used in this study took into account the substitution effect of SSB substitutes through the use of cross-price elasticities. This ensures that the reduction in total liquid caloric consumption is not overestimated. However, it was assumed that the substitution between SSBs and foods is insignificant, as suggested by the study by Finkelstein and co-authors.17 On the other hand, the study has some limitations.
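Before turning to the limitations, the revenue and cost-of-collection arithmetic quoted above can be checked numerically; this is our own illustrative sketch (function names are not from the paper):

```python
def excise_revenue(t1, q1, t2, q2):
    # Total excise revenue R = T1*Q1 + T2*Q2 for two taxed goods.
    return t1 * q1 + t2 * q2

def cost_of_collection_ratio(admin_cost, revenue_increase):
    # Administration cost as a percentage of the extra revenue collected,
    # the tax-administration efficiency measure used in the text.
    return 100.0 * admin_cost / revenue_increase

# Figures quoted in the text, in TZS.
revenue_increase = 868e9 - 416e9          # TZS 452 billion
ratio = cost_of_collection_ratio(69.1e6, revenue_increase)
# ratio is about 0.015%, reported in the text (rounded) as 0.02%.
```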
The first limitation of this study is that no nationally representative data were available for consumption of SSBs and SSB substitutes. Second, the consumption and price estimates of SSBs were self-reported and may have been affected by recall bias. Third, the data on SSB consumption could not capture all the SSBs consumed in Tanzania, only those most commonly consumed. Also, the data on SSB consumption could not specifically ascertain the amount of sugar in each type of drink reported; the study therefore used the sugar content of the most commonly consumed variety of each type of drink. Fourth, the study assumed a "full" pass-through rate of the tax increase.
DISCUSSION
Imposing a 20% SSB tax in Tanzania is predicted to reduce obesity by 6.6% overall, and by 12.9% and 5.2% in adult males and adult females, respectively. The average overall reduction in energy intake is estimated at 76.1 kJ per person per day. The SSB tax has more effect on adults in the age groups 15 to 34 and 55 to 64 years, while it has no impact on those aged 65 years and above.
The results obtained are similar to those of other studies, such as Manyema and co-authors and Briggs and co-authors.16,21 It is projected that in 2020 the population of Tanzanian adults aged 15 years and above was around 32.65 million, of which 15.66 million were males and 16.99 million were females. Using the baseline levels of obesity prevalence, this implies that there are about 423,000 males and 1.6 million females who are obese. The introduction of a 20% SSB tax will potentially reduce the number of obese people by about 47,000 among adult males and about 85,000 among adult females.
The SSB tax already exists, and the proposed reform is essentially one of increasing the rates of an existing tax rather than introducing a new tax. Thus, the cost of administering the proposed tax policy intervention is insignificant, and the SSB tax can also potentially generate significant revenue.
Strengths and Limitation to the Study
This is the first study in East Africa to model and quantify the potential effect of an SSB tax on obesity. A number of studies have been conducted in developed countries18,31,32 and South Africa,16 where the levels of obesity prevalence and NCDs are high, but none have been done in East Africa, where levels of obesity and NCDs are increasing. The assumed full pass-through rate, however, may not hold in practice; pass-through rates may differ across age or income groups. Fifth, the model predicts a one-time effect of the tax-induced changes in SSB consumption on body mass. However, with a persistent tax, the level of consumption will remain low and may also trigger behaviour change, which implies that the impacts may be underestimated. Sixth, this study has focused on the effect of the SSB tax on obesity, while other NCDs such as CVDs, diabetes and cancer have not been considered. Lastly, the study has not considered the effect of the SSB tax on non-health outcomes, such as disposable income and employment; and like other indirect taxes, this tax is likely to be regressive.
Policy Implications
Our findings suggest that an SSB tax is one of the strategies that can contribute to reversing excess weight in the population and reducing obesity prevalence. The introduction of an SSB tax should not, per se, be seen as a solution. It should be part of a broader approach, complementing other strategies to reduce obesity prevalence and related NCDs, such as promotion of physical activity and increased health promotion activities. Special attention should be given to women, who already have a higher rate of obesity prevalence but are less affected by the tax than men, to reduce their consumption of SSBs. The complex gender-specific socioeconomic and cultural factors that increase women's risk of obesity need to be taken into account.
Secondly, the study recommends that revenue raised from the SSB tax be dedicated to public health promotion programs, including incentivising the production, supply and consumption of healthy foods such as fruits and vegetables, nutrition programs, improving the infrastructure that supports increased physical activity, and early detection of NCDs. Health care coverage, especially that related to NCDs, should be expanded at all levels of care, starting from community health care programs, and to a large extent be targeted to reach the poor, who will be disproportionately affected by the SSB tax increase.
CONCLUSION
There is limited specific recognition of sugar and SSBs as a major contributor to NCDs, despite the increasing evidence showing consumption of SSBs to be a risk factor for obesity and diet-related NCDs (for example,11,13,14). Diet-related NCDs have become an increasing problem in our developing countries. Introduction of SSB taxation in Tanzania is a complex process that requires evidence on the potential impact on obesity.
Recently, SSB taxes have been introduced in both developed and developing countries, such as France (2012), Mexico (2014), Berkeley, USA (2015), Mexico (2017), the United Kingdom (2018), Ireland (2018) and South Africa (2018). It has been documented that these taxes have led to a drop in household purchases of sugary drinks for the general population, especially for the poor.33,34 It is also documented that the reduction was substituted by an increase in sales of light/zero drinks, and that the reduction in purchases was stronger in areas with a higher incidence of obesity, higher household incomes and for products with higher sugar content.34 The experience in South Africa shows that habitual and addictive behaviour towards consumption of SSBs, fuelled by mass advertising campaigns and the wide accessibility of SSBs, requires the introduction of an SSB tax to be complemented with a multipronged behaviour change strategy.35 The findings of this study show that an SSB tax in Tanzania will lead to a reduction in average overall energy intake and consequently an overall reduction in the prevalence of obesity. It is practically feasible to introduce an SSB tax beyond the existing excise tax in Tanzania, since the system already exists and what remains is to increase the rate. The challenge will be stakeholder and public support and understanding of the aim of the proposed tax increase. Although there is a general recognition of NCDs as an emerging problem across the board, there is still an imbalance between public health concerns and commercial and economic interests. The soft drink industry is economically powerful and has strong lobbying power. This very influential industry may diminish the feasibility of introducing an SSB tax.
Further, its implementation requires active involvement of all stakeholders, guided by evidence-based policies of implementation, monitoring and evaluation, and this should be done within the parameters of the country's legal framework. The importance of policy champions in the Tanzanian policy-making context cannot be overstated in boosting political commitment on NCDs. There has been a lack of active civil society engagement in the fight against SSBs, though civil society has a big opportunity and role in strengthening this effort.
East African Health Research Journal 2023 | Volume 7 | Number 2
FIGURE 2: Mean BMI by Sex at Baseline and After SSB Tax
FIGURE 6: BMI Classification Groups Before and After SSB Tax From the NPS Data
TABLE 1: Selected Regions with Sample Size From the Fieldwork
TABLE 2: Baseline Consumption of SSBs
TABLE 3: The Own- and Cross-price Elasticities of SSBs and Their Substitutes
TABLE 4: Estimated Changes in Energy Intake
TABLE 5: Estimated Changes in Body Mass
TABLE 7: Sensitivity Analysis of the Effect of Changing Pass-through Rate and SSB Tax Rate on Obesity. Source: Tanzania National Panel Survey (2012/13) and authors' calculations
Predicting effect of emotional-social intelligence on academic achievement of nursing students
Background. Academic achievement refers to the extent to which a learner, instructor or institution has accomplished their short- or long-term educational goals. There are inconclusive results about the individual factors that successfully predict academic performance. Emotional intelligence has been a popular topic in the field of higher educational learning. Several research reports have shown that emotional intelligence is one of the factors that successfully predicts students' academic achievement. Objectives. To examine the relationship between emotional-social intelligence (ESI) and self-reported academic achievement among nursing students. Methods. A descriptive-comparative approach was used. The study was carried out on 127 nursing students from different academic levels. The study used two tools, namely an ESI questionnaire and an academic achievement scale. Results. The females had statistically significantly higher means than the males in their scores on the ESI questionnaire (p=0.042) and interpersonal competencies (p=0.003). There were positive correlations between the ESI score, its five components and students' self-reported academic achievement. Conclusion. The outcome of this study suggests that educational planners and academicians should embrace emotional intelligence-developing courses at college and university levels.
Research
The instrument uses a 5-point Likert scale, with item response scores ranging from 1 (not true for me) to 5 (true for me). The total score for each student (out of 265) was calculated and converted into a percentage score, categorised as unsatisfactory if the score was <60% and satisfactory if the score was ≥60%. The researchers tested the reliability of the questionnaire and found that it had high internal consistency (Cronbach's α coefficients were 0.980, 0.850, 0.880, 0.840 and 0.870 for intrapersonal competencies, interpersonal competencies, adaptability, stress management and general mood, in that order).
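The Cronbach's α coefficients reported here follow the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ²_total), where k is the number of items. A generic stdlib-only sketch, not the authors' SPSS computation:

```python
def variance(xs):
    # Population variance of a list of numbers.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    # items: list of k item-score lists, one per questionnaire item,
    # each holding the scores of the same n respondents.
    k = len(items)
    item_var = sum(variance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly correlated items yield α = 1, while items whose covariance is zero pull α towards 0.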
In terms of determining the academic achievement of the students, a tool was constructed and developed by researchers based on the review of literature, as per Zimmerman and Schunk. [10] Students indicated their level of agreement using the scale 0 = very little efficacy, 1 = little, 2 = moderate, 3 = a lot and 4 = quite a lot. This scale consisted of 45 self-report statements to capture respondents' views of their academic achievement. The tool was divided into five main categories as: academic performance; extracurricular activities; student's interaction; student's behaviour; and student's attendance. Each category has nine statements, and the scores for each category were summed to give a total score and categorised into lower academic achievement (0 -<60), moderate academic achievement (60 -<120) and higher academic achievement (120 -180).
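The scoring scheme just described (45 items scored 0-4, so totals range 0-180, classified by the stated cut-offs) can be sketched as follows; the function name is ours:

```python
def achievement_category(total_score):
    # Classify a total academic-achievement score using the cut-offs in
    # the text: lower (<60), moderate (60 to <120), higher (120-180).
    if not 0 <= total_score <= 180:
        raise ValueError("score out of range")
    if total_score < 60:
        return "lower"
    if total_score < 120:
        return "moderate"
    return "higher"
```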
The higher the score, the greater the student's academic achievement. High internal consistency was observed by the researchers through Cronbach's α coefficients used to measure the reliability of the tool in this study, guided by Sun et al. [11] and Tavatol and Dennis. [12] The reliability coefficients of the five main categories of academic achievement mentioned above were 0.935, 0.860, 0.860, 0.935 and 0.900, respectively.
A pilot study with 11 randomly selected nursing students (10% of the study sample) was performed to ensure applicability, clarity and feasibility of the instruments. The students took around 15 -20 minutes to complete the questionnaire. No modifications were made, and the results of the pilot study were included in the study results.
Ethical approval (ref. no. E1032) was given by the Deanship of Scientific Research, Shaqra University (Saudi Arabia). The study was conducted at the College of Applied Medical Sciences in Shaqra. The students' participation in the study was voluntary and all participants were assured that their marks would not be affected if they did not participate in the study, and that they could withdraw from the study at any time.
Statistical design
The data collected were computerised, revised, categorised, tabulated, analysed and presented in descriptive and associated statistical form using Statistical Package for Social Sciences version 20 (IBM Corp., USA). Numerical data were expressed as mean (standard deviation (SD)). Qualitative data were expressed as frequency and percentage. Differences between quantitative variables were tested using the independent t-test and the one-way analysis of variance test. Correlations between numerical variables were tested using Pearson's correlation test. P≤0.05 was considered significant, and p≤0.001 highly significant. Table 1 contains demographic traits of the students, and shows that 66.1% of the participants were male, and the mean (SD) age of participants was 20.7 (2.4), with 57.5% >20 years old. The majority (83.5%) reported a high grade point average score. Most of the students' parents were highly educated (62.2% and 54.3% of the mothers and fathers of the students, respectively, were university graduates). Fig. 1 describes categories of ESI among participants: it shows that more than two-thirds of the study sample had a satisfactory level of emotional intelligence (66.9%), compared with 33.1% who had an unsatisfactory level. Table 2 illustrates that there were statistically significant differences between male and female students in their scores on the ESI questionnaire and the interpersonal competencies score, in favour of females, with mean differences of 9.52 and 4.65, respectively, and p-values of 0.042 and 0.003, respectively, at a 95% confidence interval. Fig. 2 describes levels of academic achievement among the study sample, and demonstrates that achievement was high in 70% of participants, compared with 8.7% whose academic achievement was low. Table 3 shows the difference in academic achievement between male and female students.
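Pearson's correlation test used throughout the results (e.g. r=0.223 between age and ESI score) follows the standard product-moment formula; a stdlib-only illustrative sketch, not the SPSS implementation used in the study:

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient of two equal-length
    # numeric sequences: covariance divided by the product of the
    # standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```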
However, there were no significant differences between the two sexes regarding the total mean score of self-reported academic achievement or its dimensions. Table 4 presents the results of analysis of variance tests for the relationship between the level of education of students' parents and ESI questionnaire scores, and shows a significant difference in ESI score across parents' education levels (p≤0.05).
Additional results found a low degree of positive correlation between age and ESI score (p=0.012 and r=0.223). Table 5 shows that there is an association between ESI score and academic achievement score. Also, all ESI questionnaire dimensions (intrapersonal and interpersonal competencies, adaptability, stress management and general mood) separately correlated positively with academic achievement score.
Discussion
It is believed that the application of the ESI concept in nursing education programmes helps students to deal with pressures related to their studies, and also improves their communication skills. [13] Earlier studies have indicated an association between ESI and academic achievement. The aim of the present study was to examine the association between emotional intelligence and academic achievement in undergraduate nursing students at the College of Applied Medical Sciences, Saudi Arabia. The results identified an association between ESI and the demographic characteristics of participants, and recognised a correlation between academic achievement and level of ESI. About two-thirds of the study sample were male, the mean age of the participants was 20.73 years, with most aged >20 years, and the majority of participants' parents had obtained higher education.
The demographic characteristics of participants in this study were different from those in earlier studies among nursing students. Jacob and Pavithran [14] and Fallahzadeh [15] studied the impact of ESI in nursing students, the majority of whom were female. The age of participating nursing students in our study was similar to that reported by Sinha et al. [16] and Moawed et al. [17] With respect to parents' education, the data in the present study were similar to those in a study by Moawed et al., [17] who carried out a comparative study of emotional intelligence skills of nursing students in Riyadh (Saudi Arabia) and Tanta (Egypt), and showed that most of the students' parents in Riyadh had high education levels.
The current study revealed that more than two-thirds of participants had a satisfactory ESI level (>60% of the total score). This may be due to the presence of more extracurricular activities and summer courses, which enhance and refresh abilities that help students improve their ESI. This emotional intelligence level may also be due to increased attention given to the affective and emotional domains during the teaching process. This result was similar to that reported by Manjusha et al. [18] on emotional intelligence and academic performance among nursing students, where 68.3% of assessed students had a satisfactory emotional intelligence level. The results are also in line with Sinha et al., [20] who reported that 61% of assessed students had normal or high emotional intelligence (46% had normal levels and 15% had high levels).
The mean total ESI scores of male and female participants were 233.27 and 242.79, respectively, and this difference was statistically significant (p=0.042). The interpersonal competencies score was significantly higher in female students (p=0.003).
Earlier studies comparing ESI scores in male and female students reveal varying results. Fallahzadeh [15] reported no statistically significant difference in total ESI scores between male and female students. Further, he reported that there was no significant difference between male and female students in the mean of all dimensions of ESI (p>0.05) except for the difference in mean score of adaptability scale. The findings of this study were supported by Saddki et al. [19] and Acebes-Sánchez, [20] who reported that females had a significantly higher emotional intelligence score than males.
About two-thirds of the participants had a high level of self-reported academic achievement in the present study. This is possibly because most participants had a satisfactory emotional intelligence level. Our results support an earlier report by Manjusha et al. [18] in which 69% of nursing students had good and very good levels of emotional intelligence and academic performance.
Regarding academic achievement and sex, results of this study showed that there was no significant correlation between the two, which supports the findings of earlier studies. Blackman et al. [21] and Ugoji [22] showed that there was no significant correlation between sex and academic achievement of students. Wan Chik et al. [23] indicated that male students have lower academic performance than female students, as measured by grade point average. This may have been due to a difference in the percentage of male and female participants in their study.
The Pearson correlation test presented a positive correlation between the age of participants and the total score in ESI. The findings are in line with Carstensen et al., [24] Snowden, [25] Suleman et al., [26] Nagar [27] and Hamouda and Al Nagshabandi, [28] who reported the presence of a significant positive relationship between age and ESI. This may be due to the fact that older adults are more skilled at regulating their emotions than younger adults, and that particular aspects of emotional intelligence may increase with age.
The education of participants' parents significantly affected participants' ESI scores, with high scores among participants whose parents had a high educational level. The result was similar to Haralur et al. [29] and Pant and Singh, [30] who reported significant statistical differences in ESI scores based upon parents' education levels.
Our study revealed a positive correlation between emotional intelligence and academic achievement. Several studies support this result. Manjusha et al. [18] and Kouchakzadeh et al. [31] also support a significant positive relationship between academic performance and emotional intelligence of nursing students. Similarly, Kumar et al. [32] and Suleman et al. [26] confirmed the strong positive relationship between emotional intelligence and academic success. The results of our study are also supported by a comprehensive quantitative review by Ranjbar et al. [33] for all published studies on emotional intelligence and academic achievement in Iranian students, and a systemic review by Hanafi and Noor. [34] ESI can be considered as a predictor for academic achievement level, as students with low emotional intelligence may have low concentration and show aggression in their relations and in dealing with their peers. They may also struggle to communicate their feelings to their colleagues. In contrast, students with higher levels of emotional intelligence are able to manage themselves better and communicate more effectively with their peers and teachers. This can assist them to improve self-motivation and effective communication skills, and help students become more confident learners.
The findings of our study contradict earlier reports by Shah et al. [35] and Gilani et al. [36] that show that academic achievement and ESI are negatively correlated. Our results are also different from those reported by Zirak and Ahmadian, [37] which suggest an absence of a significant relationship between total emotional intelligence and academic achievement.
Our study results also indicated the presence of positive correlations between the five dimensions of the ESI questionnaire, including intrapersonal and interpersonal competencies, adaptability, stress management and general mood (r values 0.440, 0.379, 0.385, 0.236 and 0.407, respectively). Our results are supported by Oyewunmi et al. [38]
Conclusion
Our results indicate an association between emotional intelligence and the academic achievement of nursing students. Nursing educators should create ESI-developing courses that can be taught by experts in the field at college and university level, and workshops on strategies to boost the ESI of learners. Emotional intelligence should be part of the educational plan for students, and students should be provided with workshops to boost their ESI. This study should be replicated with a larger sample size and in a different setting to further confirm its findings.
Smooth Stable Foliations of Anosov Diffeomorphisms
In this paper, we focus on the rigidity of $C^{2+}$-smooth codimension-one stable foliations of Anosov diffeomorphisms. Specifically, we show that if the regularity of these foliations is slightly bigger than $2$, then they will have the same smoothness as the diffeomorphisms.
Introduction
Let f be a C r -smooth (r ≥ 1) Anosov diffeomorphism of a smooth closed Riemannian manifold M , i.e., there exists a D f -invariant splitting T M = E s f ⊕ E u f such that D f is uniformly contracting on E s f and uniformly expanding on E u f . It is well known that the distributions E s f and E u f are Hölder continuous and uniquely integrable to foliations F s f and F u f , respectively, with C r -smooth leaves varying continuously with respect to the C r -topology. However, the regularity of these foliations may not be C r . Indeed, if the regularity r ≥ 2, the foliation F s f (or symmetrically F u f ) is absolutely continuous, and if we further assume that it is codimension-one, then it is C 1 -smooth [2,8,15] but could not be C 2 in general [2].
Conjecture 1 ([4]). If the foliations F s f and F u f of a C k -smooth Anosov diffeomorphism f : M → M are both C 2 -smooth, then f is C max{2,k} -conjugate to a hyperbolic automorphism of an infra-nilmanifold.

This conjecture can be divided into two parts, both of which are still open. One is the famous conjecture of Smale [20], which seeks to classify Anosov diffeomorphisms in the topological sense, i.e., every Anosov diffeomorphism is topologically conjugate to a hyperbolic automorphism of an infra-nilmanifold. The other is a rigidity issue, i.e., whether a smooth foliation leads to higher regularity of the conjugacy or not, since the conjugacy between two Anosov diffeomorphisms is usually only Hölder continuous.
The topological classification conjecture has some evidence in its favour [5,13,14]. For instance, when E s f (or E u f ) is codimension-one, then M is a torus. Moreover, under the assumption that M is a nilmanifold, f is conjugate to a hyperbolic algebraic model. This is also why research on rigidity usually focuses on toral Anosov diffeomorphisms. The rigidity issue has been extensively and deeply studied under some restrictions on Lyapunov exponents [3,7,18]. However, we know little about rigidity for smooth foliations. Indeed, as far as the authors know, it has only partial answers in [3,4,6,9].
In [4], Flaminio and Katok proved that a volume-preserving Anosov diffeomorphism f of the 2-torus T 2 with C r (r ≥ 2) stable and unstable foliations is C r -conjugate to a linear one. Moreover, they obtained a similar result for an Anosov diffeomorphism f of T 4 preserving a symplectic form with C ∞ -smooth stable and unstable foliations. However, de la Llave [3, Theorem 6.3] constructed counterexamples on T d (d ≥ 4) for any k ∈ N, involving hyperbolic automorphisms A : T d → T d . As a corollary of [4], C 2 -regularity of a hyperbolic foliation on T 2 implies higher regularity of itself. In the same sense of such a bootstrap of foliations, Katok and Hurder [9] proved that for a C r (r ≥ 5) volume-preserving Anosov diffeomorphism f of T 2 , if the distributions E s/u f are C 1,ω , i.e., the derivatives are respectively of class ω(s) = o(s|log(s)|), then F s/u f are actually C r−3 -smooth and f is C r−3 -conjugate to a toral hyperbolic automorphism. Similarly, Ghys [6] showed that for a C r (r ≥ 2) Anosov diffeomorphism f of T 2 , if the stable foliation F s f is C 1+Lip -smooth, then it is actually C r -smooth. Our aim in this paper is to obtain higher regularity of codimension-one hyperbolic foliations under an assumption of more or less C 2 -smoothness; see Theorem 1.1 and Theorem 1.5. In particular, we get some rigidity results on T 2 . Let us give two notations. We denote by λ u f (x) the sum of the Lyapunov exponents (if it exists) of f on the unstable subbundle at the point x, and by λ u A the corresponding quantity for the linearization A of f ; these quantities are compared, in particular, at periodic points p of f .

Remark 1.2. Here we briefly explain why we just get C r * -smoothness. In this paper, the regularity of a foliation is given by foliation charts; see Section 2 for the precise definition. Instead of the regularity of local charts, we will first prove that the foliation has C r -smooth holonomy. However, the regularity of a foliation may be lower than that of its holonomy, e.g., see [16, Section 6].
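In the standard convention, the sum of unstable Lyapunov exponents is the exponential volume growth rate on the unstable bundle; spelled out, as a hedged reconstruction of the defining formula referred to above:

```latex
\lambda^u_f(x) \;=\; \lim_{n\to\infty}\frac{1}{n}\,
  \log\bigl\lvert \det\!\bigl(Df^n|_{E^u_f(x)}\bigr)\bigr\rvert ,
\qquad
\lambda^u_A \;=\; \log\bigl\lvert \det\!\bigl(A|_{E^u_A}\bigr)\bigr\rvert .
```

For the linear map A the limit is independent of the point, which is why λ u A carries no argument.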
In particular, we have the following corollary linking the regularity of the foliation with the Lyapunov exponents of its transversal.

Corollary 1.3. Consider the following conditions:
1. There exists small ε > 0 such that F s f is C 2+ε -smooth;
2. For all periodic points p of f , λ u f (p) ≡ λ u A ;
3. The foliation F s f is C r * -smooth.
Remark 1.4. By the same way of proving "2 =⇒ 3" in Corollary 1.3, one can get an interesting result for non-invertible Anosov maps: their codimension-one unstable foliations are always smooth, with nearly the same regularity as the maps. Concretely, for a C r (r > 1) non-invertible Anosov endomorphism f with codimension-one unstable bundle, the proof of Corollary 1.3 leads to C r * -regularity of F u f . We mention in advance that our method of proving Theorem 1.1 is different from that of [4,6,9]. Indeed, we will consider a diffeomorphism of the circle S 1 induced by the codimension-one foliation and apply KAM theory (see Theorem 2.6) to it. Hence the C 2+ε regularity of the foliation is in fact a condition on the induced circle diffeomorphism needed for using KAM. In particular, when T d is restricted to T 2 , we can lower the regularity assumption to C 1+AC , i.e., the derivatives of the foliation charts are absolutely continuous.
λ u f (p) ≡ λ u A for all periodic points p of f , where A is the linearization of f .
By combining our result with a rigidity result of R. de la Llave [3], which says that constant periodic Lyapunov exponents imply smooth conjugacy on T 2 , we have the following two direct corollaries.
Corollary 1.6. Let f be a C r (r ≥ 2) Anosov diffeomorphism of T 2 . If the stable and unstable foliations of f are both C 1+AC , then f is C r * -conjugate to its linearization. In particular, f preserves a smooth volume measure.
Preliminaries
As usual, a foliation F with dimension l of a closed Riemannian manifold M of dimension d is given locally by foliation charts from D l × D d−l , where D l and D d−l are open disks of dimension l and d − l respectively. An Anosov diffeomorphism f admits a D f -invariant hyperbolic splitting together with constants C and λ > 1 such that for all n > 0 the usual contraction and expansion estimates hold. Every Anosov diffeomorphism f of T d induces a hyperbolic automorphism A [5], which is called the linearization of f . Denote the A-invariant hyperbolic splitting by E s A ⊕ E u A . Since f and A are always conjugate [5], we denote the conjugacy by h : T d → T d . By the topological character of the (un)stable foliations, i.e., h(F s/u f (x)) = F s/u A (h(x)), it is convenient to observe the foliations on the universal cover R d . Let π : R d → T d be the natural projection. Denote by F , A and H : R d → R d the lifts of f , A and h : T d → T d respectively. For convenience, we can assume that H (0) = 0. We denote the lifts of the foliations F s/u f accordingly. Note that the holonomy map Hol s f and the foliation F s f are both absolutely continuous [15]. As mentioned before, the regularity of a foliation may be lower than that of its holonomy. However, we still have the following lemma. We refer to [16, Section 6] for more details about the next lemma and also the counterexample of foliations whose regularity is strictly lower than that of their holonomy.
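The contraction and expansion estimates referred to above, with constants C > 0 and λ > 1, are the standard hyperbolicity inequalities; our reconstruction of the textbook form:

```latex
\lVert Df^{\,n} v \rVert \;\le\; C\,\lambda^{-n}\,\lVert v \rVert
  \quad (v \in E^s_f,\; n > 0),
\qquad
\lVert Df^{-n} v \rVert \;\le\; C\,\lambda^{-n}\,\lVert v \rVert
  \quad (v \in E^u_f,\; n > 0).
```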
Lemma 2.1 ([16]). Let f : T^d → T^d be a C^r-smooth (r ≥ 1) Anosov diffeomorphism. Then:
1. If the holonomy maps Hol^s_f of F^s_f are uniformly C^r-smooth, i.e., for any x, y, z ∈ R^d the holonomy map Hol^s_{f,x,y} is C^r and its derivatives (with respect to z) of order ≤ r vary continuously with respect to (x, y, z), then F^s_f is a C^{r*}-foliation.
Remark 2.2. Note that the second item of Lemma 2.1 is trivial, and the first item is just an application of Journé's lemma [10], which asserts that the regularity of a diffeomorphism can be obtained from the uniform regularity of its restrictions to two transverse foliations with uniformly smooth leaves. Indeed, consider a point x ∈ R^d and let α : D^l → F^s_f(x) and β : D^{d−l} → F^u_f(x) be parametrizations; together they give a foliation chart whose derivatives along D^l and D^{d−l} are both C^r. Hence, by Journé's lemma, it is a C^{r*}-foliation chart.
On the other hand, the regularity of the holonomy induced by F^s_f can be obtained from the smoothness of the conjugacy H restricted to the transversal direction.
Lemma 2.3. Assume that the conjugacy H : R^d → R^d is uniformly C^r-smooth along the unstable foliation F^u_f. Then the holonomy Hol^s_f is uniformly C^r-smooth.
Proof. For given x, y ∈ R^d and z ∈ F^u_f(x), the holonomy map satisfies Hol^s_{f,x,y} = H^{-1} ∘ Hol^s_{A,H(x),H(y)} ∘ H, since H preserves the foliations. Note that the holonomies Hol^s_A induced by F^s_A are actually translations. Therefore the holonomies Hol^s_f have the same regularity as H|_{F^u}.
Combining Lemma 2.1 and Lemma 2.3, we can obtain Theorem 1.1 and Theorem 1.5 by proving that H is C^r-smooth along the unstable leaves. Precisely, we will prove the following property (Proposition 2.4): under the assumptions of Theorem 1.1 or Theorem 1.5, the conjugacy H between the lifts F and A is uniformly C^r-smooth along each unstable leaf.
We will prove this proposition in Section 3. Before that, we note that to get the C^r-regularity of H along unstable leaves we can just prove a lower one. Indeed, by an enlightening work of de la Llave [3], one can get C^r-smoothness from absolute continuity; see [3]. Once H|_{F^u_f} is smooth, so is h|_{F^u_f}. Hence λ^u_f(p) ≡ λ^u_A for every periodic point p of f. Combining this with Lemma 2.1 and Lemma 2.3, we can get these two theorems immediately.
We will obtain the absolute continuity of H restricted to unstable leaves by applying the following KAM theory. Let R_α : R → R be the translation R_α(x) = x + α, x ∈ R. Denote the induced rigid rotation on S^1 by R_α.
Theorem 2.6 ([11, 12]). Let T be an orientation-preserving circle diffeomorphism with irrational rotation number α which is algebraic. Then one has the following two properties:
2. If the pair (T, α) satisfies the K.O. condition [11] (in particular, the conditions that T is C^{1+AC}, T″/T′ ∈ L^p for some p > 1, and deg(α) = 2 imply the K.O. condition), then T is absolutely continuously conjugate to R_α.
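The rotation number in Theorem 2.6 can be illustrated numerically: for a lift of an orientation-preserving circle map, ρ(T) = lim (T̃^n(x) − x)/n. The sketch below is our own numerical illustration, not part of the paper's argument; the function names are invented.

```python
import math

def rotation_number(lift, x0=0.0, n_iter=20000):
    """Estimate the rotation number of a circle map from its lift
    via the Birkhoff average (lift^n(x0) - x0) / n."""
    x = x0
    for _ in range(n_iter):
        x = lift(x)
    return (x - x0) / n_iter

# Rigid rotation lift R_alpha(x) = x + alpha: rotation number is alpha.
rigid = lambda x: x + 0.25
print(rotation_number(rigid))  # 0.25

# A small perturbation (still an orientation-preserving diffeomorphism,
# since 0.1 < 1/(2*pi)); the estimate converges at rate O(1/n).
pert = lambda x: x + 0.3 + 0.1 * math.sin(2 * math.pi * x)
rho = rotation_number(pert)
```

For a rigid rotation the average is exact; for the perturbed map only the limit is the rotation number, which is why a large iterate count is used.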
In case 2, the fact [19] that deg(α) = 2 if and only if α has a periodic simple continued fraction expansion is helpful for checking the K.O. condition.
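By Lagrange's theorem, an irrational α satisfies deg(α) = 2 exactly when its simple continued fraction expansion is eventually periodic, which is the criterion cited from [19]. A rough floating-point illustration (our own sketch; only short prefixes of the expansion are numerically trustworthy):

```python
import math

def continued_fraction(x, n_terms):
    """First n_terms partial quotients of the simple continued fraction
    of x (floating point, so only a short prefix is reliable)."""
    terms = []
    for _ in range(n_terms):
        a = math.floor(x)
        terms.append(a)
        x = 1.0 / (x - a)
    return terms

# 1 + sqrt(2) is a quadratic irrational; its expansion is [2; 2, 2, ...],
# i.e. periodic, consistent with deg(alpha) = 2.
print(continued_fraction(1 + math.sqrt(2), 10))  # [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
```

The golden ratio (1 + √5)/2 similarly gives the periodic expansion [1; 1, 1, ...].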
Absolutely continuous rotation induced by smooth foliation
In this section we prove our main result, Proposition 2.4. As mentioned before, we consider the circle diffeomorphism induced by the codimension-one foliation F^s_f and show that it is smoothly conjugate to the rigid rotation given by F^s_A. We use the same notation as in Section 2. To reduce the action of F^s_f on F^u_f(0) to an action on S^1, one can apply the Z^d-actions. By the Global Product Structure, the following map T^n_i is well defined for n ∈ Z^d and i ∈ {f, A}.
Proof. The regularity of T^n_i = Hol^s_{i,n,0} ∘ R_n(x) follows directly from the C^k holonomy, since the holonomy is smoother than the foliation (see Lemma 2.1). Moreover, {T^n_i}_{n∈Z^d} is commutative because the holonomy maps commute with the Z^d-actions on R^d. Recall that we can assume H(0) = 0, and note that H preserves the foliations. This completes the proof of the proposition.
Now we are going to prove Proposition 2.4. Let {e_i}^d_{i=1} be an orthonormal basis of R^d. We will reduce a pair of conjugate Z^d-actions, for instance (T_f, T_A), to a pair of conjugate circle diffeomorphisms and show that the conjugacy is absolutely continuous by applying KAM theory (Theorem 2.6). This method has a similar spirit to the one used by Rodriguez Hertz, F. in [17].
Proof of Proposition 2.4. We pick two unit vectors of the orthonormal basis, for example e_1 and e_d. Assume that F^s_f is a C^k-smooth codimension-one foliation, where k satisfies the condition of the proposition. First, we use T^{e_1}_f to construct a C^k circle diffeomorphism. For short, we still denote the translation on R by R_α(x) = x + α and the natural projection by π : R → S^1.
Claim 3.2. There exists a C
Proof of Claim 3.2. We define the conjugacy h_f locally and extend it to R by T^{e_d}_f. More specifically, let γ : (−ε, ε) → F^u_f(0) be a C^r diffeomorphism onto its image, with ε small enough that T^{e_d}_f γ(−ε, ε) ∩ γ(−ε, ε) = ∅. This can be done since the leaf F^u_f(0) is C^r. Here φ is a C^k diffeomorphism onto its image and can be chosen arbitrarily, and [x] stands for the integer part of x. By the construction, one can verify that h_f and T_f are both C^k diffeomorphisms, so we obtain the desired diffeomorphism h_f and hence a C^k diffeomorphism T_f : S^1 → S^1. Let v be an eigenvector of A in F^s_A(0). Then α is an irrational algebraic number. Indeed, the irrationality of the eigenvectors of A implies that there is at least one pair of irrationally related coordinates (x_i, x_j), i ≠ j, of v, which we may assume to be (x_1, x_d); we also use the fact that the set of algebraic numbers is a field. Moreover, α =
Claim 3.3. There exists a C
f and H^{-1} are also absolutely continuous along unstable leaves. Finally, by Lemma 2.5, H is C^r-smooth restricted to unstable leaves.
Corollary 1.3. Let f be a C^r (r > 2) Anosov diffeomorphism of T^d (d ≥ 2) with (d − 1)-dimensional stable foliation F^s_f and linearization A : T^d → T^d. Then the following are equivalent:
where α is an irrational algebraic number. In particular, R_α induces a rotation R_α on S^1. Moreover, if d = 2 (the T^2 case), one has deg(α) = 2.
Proof of Claim 3.3. Let h_A : R → F^u_A(0) be the linear map with h_A(0) = 0 ∈ R^d and h_A(1) suitably normalized, so that h_A^{-1} ∘ T^{e_1}_A ∘ h_A is actually a translation R_α(x) = x + α, x ∈ R. By elementary calculation, α = x_1/x_d, where v = (x_1, ..., x_d) (given in the orthonormal basis {e_i}^d_{i=1}).
x_1/x_d is a quadratic irrational in the case d = 2. Since H ∘ T^{e_1}_f = T^{e_1}_A ∘ H (see Proposition 3.1), H also induces a conjugacy H : S^1 → S^1 from T_f to R_α. Indeed, let H = h_A^{-1} ∘ H ∘ h_f : R → R. Then by Proposition 3.1, Claim 3.2 and Claim 3.3, one has:
1. H ∘ R_1 = R_1 ∘ H;
2. R_α ∘ H = H ∘ T_f.
In particular, H : R → R induces H : S^1 → S^1 with π ∘ H = H ∘ π. Moreover, H ∘ T_f = R_α ∘ H. Namely, we have the following commutative diagram:
H^{±1} is absolutely continuous, and so is H^{±1} : R → R. Note that in the case of T^2 and k = AC, both T_f and T_f^{-1} are C^{1+AC}-smooth. It follows that there is C > 1 such that |T″_f(x)| < C and |T′_f(x)| > 1/C for Lebesgue-almost every x ∈ S^1. Hence T_f satisfies the K.O. condition. Denote the lifts of the stable/unstable foliations to R^d by F^σ_i, which are also the stable/unstable foliations of the lifts F/A. Recall that H(F^σ_f) = F^σ_A, σ = s/u, and hence F^s_f and F^u_f admit the Global Product Structure just like F^s_A and F^u_A, i.e., each pair of leaves intersects in exactly one point [3, Lemma 4.1, Lemma 4.5 and Lemma 4.6]. Here we state it for convenience.
C^r-smooth. Now we can finish the proof of our main theorems.
Proof of Theorem 1.1, Corollary 1.3 and Theorem 1.5. Let f satisfy the condition of Theorem 1.1 or Theorem 1.5. By Proposition 2.4, H|_{F^u_f} is uniformly C^r-smooth.
"year": 2023,
"sha1": "830cdd6d3ac64eae9902e7e62cb3971dbac5ad40",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "830cdd6d3ac64eae9902e7e62cb3971dbac5ad40",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Exosome-mediated lncRNA SND1-IT1 from gastric cancer cells enhances malignant transformation of gastric mucosa cells via up-regulating SNAIL1
Background Gastric cancer (GC), as one of the most common malignancies across the globe, is the fourth leading cause of cancer-related deaths. Though a large body of research has been conducted to develop therapeutic methods for GC, the survival rate of advanced patients is still poor. We aimed to dig into the potential regulatory mechanism of GC progression. Methods Bioinformatics tools and fundamental assays were first used to confirm the candidate genes in our study. Functional assays and mechanism experiments were conducted to verify the regulatory mechanisms of these genes underlying GC progression. Results Long non-coding RNA (lncRNA) SND1 intronic transcript 1 (SND1-IT1) is highly expressed in exosomes secreted by GC cells. SND1-IT1 was verified to bind to microRNA-1245b-5p (miR-1245b-5p) through competitive adsorption to promote ubiquitin specific protease 3 (USP3) messenger RNA (mRNA) expression. SND1-IT1 was validated to recruit DEAD-box helicase 54 (DDX54) to promote USP3 mRNA stability. SND1-IT1 induces malignant transformation of GES-1 cells through USP3. USP3 mediates the deubiquitination of snail family transcriptional repressor 1 (SNAIL1). Conclusions Exosome-mediated lncRNA SND1-IT1 from GC cells enhances malignant transformation of GES-1 cells via up-regulating SNAIL1. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03306-w.
Background
Gastric cancer (GC) is one of the most common digestive malignant tumors [1]. Although much progress has been made in surgical techniques and the 5-year survival rate of early GC can reach > 95%, timely diagnosis remains difficult, which means that most patients are diagnosed at an advanced stage and miss the opportunity for surgery. Therefore, we aimed to deepen the understanding of the mechanisms underlying GC in order to improve its treatment.
Long non-coding RNAs (lncRNAs) are defined as a type of non-coding RNAs with over 200 nucleotides in length. Most of lncRNAs are generated from gene introns, intergenic regions, promoter regions of coding messenger RNA (mRNA), antisense strands of mRNAs and pseudogenes [2]. More importantly, lncRNAs have been widely reported to function as potential biomarkers in modulating a variety of biological and pathological processes, including GC progression. For instance, lncRNA MEG3 inhibits proliferation and metastasis of GC through targeting p53 signaling pathway [3]; ALKBH5 propels invasion and metastasis of GC via hampering methylation of the lncRNA NEAT1 [4]; lncRNA AK023391 facilitates tumorigenesis and invasion of GC by activating PI3K/ Akt signaling pathway [5]. By using Gene Expression Omnibus (GEO) database, we found that lncRNA SND1-IT1 was highly expressed in the exosomes from plasma of GC patients. However, its effects on GC progression have rarely been reported. Therefore, the present study focuses on the role of SND1-IT1 in GC.
MicroRNAs (miRNAs) are small non-coding RNAs, which down-regulate gene expression by repressing or degrading target mRNAs [6]. Moreover, mRNA-based therapeutics has been widely applied in preclinical and clinical studies for cancer treatment [7]. It has been reported that the competing endogenous RNA (ceRNA) network (lncRNA-miRNA-mRNA) has been identified in various cancers including squamous cell carcinoma [8], melanoma [9] and lung adenocarcinoma [10]. In addition, the ceRNA network has also been widely studied in GC. For example, LINC01133 as a ceRNA inhibits the progression of GC via sequestering miR-106a-3p to modulate APC expression and the Wnt/β-catenin pathway [11]; MT1JP as a ceRNA regulates FBXW7 via competitively binding to miR-92a-3p in GC [12]; HOTAIR functions as a ceRNA to modulate the expression of HER2 via sequestering miR-331-3p in GC [13].
Graphical Abstract
Jin et al., Journal of Translational Medicine (2022) 20:284, page 3 of 18.
Previous studies have identified that lncRNAs play important roles through interactions with RNA-binding proteins (RBPs) in various cancer cells [14]. The functions and roles of RBPs in GC have already been studied. However, the RBPs that can bind to SND1-IT1 in GC cells remain to be disclosed.
To summarize, the main focus of our research was to study the underlying mechanisms by which SND1-IT1 affects the progression of GC.
Cell culture
The human gastric mucosa epithelial cell line GES-1 was commercially acquired from Shanghai EK-Bioscience Biotechnology Co., LTD (Shanghai, China). Cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) + 10% fetal bovine serum (FBS). The human GC cell line AGS was procured from the American Type Culture Collection (ATCC) and cultured in F-12K medium + 10% FBS. HEK-293T cells were procured from ATCC and cultured in DMEM + 10% FBS + 2 mM glutamine. All cells were maintained at 37 °C with 5% CO2.
Plasmid transfection
For the overexpression of SND1-IT1 and USP3, the full-length sequences were separately synthesized and subcloned into pcDNA3.1 vectors. The empty pcDNA3.1 vector was utilized as the negative control (NC). MiR-1245b-5p was overexpressed using miR-1245b-5p mimics. For the inhibition of DDX54 and SND1-IT1, specific small interfering RNAs (siRNAs) were respectively synthesized, with non-targeting siRNA (si-NC) as the NC.
Bioinformatics analysis
GEO database is a public functional genomics data repository supporting MIAME-compliant data submissions. GEO database (GSE153413) was used in our study to explore lncRNAs significantly overexpressed in exosomes from the plasma of GC patients. UALCAN database (http:// ualcan. path. uab. edu/ index. html) is a web resource for the analysis of cancer OMICS data, which was utilized in our study to predict the expressions of ZNF529-AS1, LIMD1-AS1, LINC00837, SND1-IT1, VWA8-AS1, LINC01005, ST20-AS1, LINC00511 and LINC01266 in stomach adenocarcinoma (STAD). StarBase (https:// rna. sysu. edu. cn/ encori/ index. php) is designed to decode the interaction networks; it was used in our study to predict the miRNAs binding to SND1-IT1, mRNAs binding to miR-1245b-5p and their expressions, RBPs binding to both SND1-IT1 and USP3 and interaction between DDX54 and USP3 3′UTR. BioGRID (https:// thebi ogrid. org/) is an online interaction repository with data compiled through comprehensive curation efforts, which was used to predict the proteins interacting with USP3.
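The database screens described above are, computationally, simple threshold filters over differential-expression tables. A minimal stdlib-only sketch of the GEO-style filter used in this study (p < 0.05 and logFC > 2); the table rows below are invented placeholders, not values from GSE153413:

```python
def screen_lncRNAs(rows, p_cut=0.05, logfc_cut=2.0):
    """Keep genes significantly over-expressed in the exosome group:
    p-value below p_cut and log2 fold-change above logfc_cut."""
    return [r["gene"] for r in rows
            if r["pvalue"] < p_cut and r["logFC"] > logfc_cut]

# Hypothetical rows for illustration only.
table = [
    {"gene": "SND1-IT1",  "logFC": 3.1, "pvalue": 0.001},
    {"gene": "LINC00511", "logFC": 2.4, "pvalue": 0.020},
    {"gene": "GENE-X",    "logFC": 1.2, "pvalue": 0.001},  # fails logFC cutoff
    {"gene": "GENE-Y",    "logFC": 2.8, "pvalue": 0.300},  # fails p cutoff
]
print(screen_lncRNAs(table))  # ['SND1-IT1', 'LINC00511']
```

In practice such tables come from GEO2R or limma output and the p-values are usually multiple-testing adjusted, but the filtering logic is the same.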
Western blot
Western blot was conducted to estimate the protein levels of CD63, CD81, CD9, DDX54, ISLR, SNAIL1, CHEK1, β-actin and USP3. Total protein extracted from GC cell lines was isolated using RIPA buffer and then separated via sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Subsequently, the samples were transferred to polyvinylidene fluoride (PVDF) membranes and blocked in 5% skim milk. The membranes were then cultivated with primary antibodies overnight at 4 °C. After being washed in TBST, the samples were incubated with the secondary antibodies. Finally, the results were visualized by enhanced chemiluminescence (ECL) substrate.
Transwell assay
Transwell assays were conducted for the evaluation of GC cell migration and invasion. Transfected cells were planted into the upper chambers of 24-well transwell chambers at a density of 2 × 10 4 cells per well. The chambers coated with Matrigel were applied for the implementation of invasion assay while ones without Matrigel for migration assay. Complete medium was added into the lower chambers. Twenty-four hours later, cells in the upper chambers were abraded with caution using a cotton swab and cells in the lower chambers were fixed in methanol solution for 15 min. Crystal violet was adopted to stain the membranes for 10 min. The invaded or migrated cells were observed and calculated under a microscope (10 × 10).
RNA-binding protein immunoprecipitation (RIP) assay
RIP assay was performed using Magna RIP ™ RNA-Binding Protein Immunoprecipitation Kit. To set up immunoglobulin G (IgG), Argonaute 2 (AGO2) and DDX54 groups, 50 μg Protein A/G Agarose magnetic beads were incubated with the antibodies against AGO2, IgG and DDX54 overnight at 4 °C through rotation. 6 × 10 7 gastric cells were collected, followed by lysis in Immunoprecipitation (IP) Lysis Buffer (Beyotime Biotechnology Co., LTD). 100μL lysis buffer was added to IgG and DDX54 groups respectively, and 10 μL to the Input group. Next, lysates were cultured with anti-DDX54 antibody and anti-IgG antibody (Abcam) overnight at 4 °C in rotation. Finally, the Imprint ® RNA Immunoprecipitation Kit (RIP-12RXN, Sigma-Aldrich, USA) was employed to purify and extract the RNA precipitates.
RNA pull-down assay
In brief, a single desthiobiotinylated cytidine was attached to the 3′ end of the RNAs (SND1-IT1, miR-1245b-5p and USP3 3′UTR). After reaching a final concentration of 20 nmol/L, the biotinylated RNAs were co-incubated with streptavidin-coated magnetic beads (Ambion, Life Technologies). Afterwards, the beads were incubated with the cell lysate. Western blot and qRT-PCR were performed to analyze the abundance of DDX54 in the bound fractions after pull-down of the biotin-coupled RNA complex.
After 48 h of transfection, luciferase activities were analyzed using the dual-luciferase reporter assay system (Promega).
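In dual-luciferase readouts, firefly signal is typically normalized to the co-transfected Renilla control and then expressed relative to the NC group. The paper does not state its exact arithmetic, so the helper below is an assumed, generic convention with invented readings:

```python
def relative_luciferase(firefly, renilla, control_ratio):
    """Firefly/Renilla ratio, normalized so the control group equals 1."""
    return (firefly / renilla) / control_ratio

control = 1200.0 / 600.0                             # NC well: ratio 2.0
print(relative_luciferase(1200.0, 600.0, control))   # 1.0 (control itself)
print(relative_luciferase(480.0, 600.0, control))    # 0.4, i.e. reduced activity
```

A value below 1 in the WT-reporter group, as in Fig. 4F and 4I, indicates that the miRNA represses the reporter.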
Wound healing assay
In the wound healing assay, the 5 × 10 5 transfected GES-1 cells were seeded in 24-well plates and cultivated at 37 °C until cells reached 100% confluence. Next, cells were scraped by 200 μL sterile micropipette tip, followed by being cultured at 37 °C for 24 h. Afterwards, the cells were washed three times in serum-free medium for the removal of the detached cells. The scratch was imaged via microscopy at the time 0 h and 24 h for analysis.
Co-immunoprecipitation (Co-IP) assay
The prepared cell lysates were collected from the treated cells in IP lysis buffer, followed by incubation with indicated antibodies of SNAIL1 and USP3 and control IgG antibody overnight at 4 °C. After being mixed with beads, samples were washed in IP lysis buffer, followed by analysis of western blot.
In vivo xenograft experiments
Nude mice aged 6-8 weeks were randomly divided into 2 groups (n = 5). Each group received subcutaneous injection of exosomes Exo/pcDNA3.1 or Exo/SND1-IT1 (1 × 10 9 exosomes/ml). Seven days after the injection, the tumor volume was measured every 3 days. Twenty-eight days after the injection, all the mice were sacrificed for tumor resection, followed by the measurement of tumor weight. The animal studies were implemented with the approval of the First Hospital of Jilin University.
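The paper does not state how tumor volume was calculated from the caliper measurements; a common convention for xenograft studies, shown here only as an assumed sketch, is the ellipsoid approximation V = (length × width²)/2:

```python
def tumor_volume(length_mm, width_mm):
    """Common ellipsoid approximation for caliper measurements:
    V = (L * W^2) / 2, in cubic millimetres."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume(10.0, 5.0))  # 125.0 mm^3
```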
Statistical analysis
Statistical analyses of separate, triplicated experiments were conducted using SPSS version 17.0. The experimental data are presented as mean ± standard deviation (SD). Differences between two groups were analyzed by Student's t-test, and differences among more than two groups by one-way or two-way analysis of variance (ANOVA). A P value below 0.05 was considered statistically significant.
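For reference, the two test statistics named above can be reproduced with stdlib arithmetic (SPSS additionally reports the corresponding p-values); this is a generic sketch, not the authors' script:

```python
import math
from statistics import mean

def students_t(a, b):
    """Pooled-variance two-sample Student's t statistic."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)          # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    all_x = [x for g in groups for x in g]
    gm = mean(all_x)
    ssb = sum(len(g) * (mean(g) - gm) ** 2 for g in groups)
    ssw = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    return (ssb / df_b) / (ssw / df_w)

print(round(students_t([1, 2, 3], [2, 3, 4]), 4))    # -1.2247
print(one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```

The statistic is then compared against the t or F distribution with the stated degrees of freedom to obtain the p-value.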
GC cells induce the malignant progression of GES-1 cells via exosomes
It was reported that in the tumor microenvironment (TME), cancer cells induce the carcinogenesis of normal cells [15]. Hence, we performed assays to explore whether GC cells have the same influence on gastric mucosa epithelial cells. At first, we co-cultured the GC cell line AGS with the gastric mucosa epithelial cell line GES-1 (Fig. 1A). Secondly, the GES-1 cell migratory capacity before and after the co-culture was detected by wound healing assay. The result showed that in GES-1 cells co-cultured with AGS, the wound width was narrower than that in GES-1 cells without AGS, which indicated that co-culture with AGS cells promotes GES-1 cell migration (Fig. 1B). Thirdly, transwell assay was performed to detect the cell invasive capability. After co-culture with AGS, GES-1 cell invasion was promoted (Fig. 1C). It has been reported that cancer cells can regulate the malignant progression of normal epithelial cells through secreting exosomes [16]. Hence we speculated that GC cells can influence GES-1 cells via secreting exosomes. GW4869, with its ability to inhibit exosome secretion, was adopted [17]. AGS cells, with and without GW4869 treatment, were then co-cultured with GES-1 cells respectively. The results of wound healing assay showed that cell migration in co-cultured cells was suppressed after GW4869 treatment (Fig. 1D). The results of transwell assay showed that the invasive ability of GES-1 cells was attenuated after GW4869 treatment in the co-culture system (Fig. 1E). To conclude, GC cells promote GES-1 cell migration and invasion via secreting exosomes.
GC cells secrete SND1-IT1 via exosomes
From Fig. 1 we realized that GC cells can induce malignant transformation of gastric mucosa epithelial cells.
Next, we dug into the specific mechanisms underlying this induction. By utilizing the GEO database (GSE153413), lncRNAs significantly overexpressed in exosomes from the plasma of GC patients were screened out under the conditions of p < 0.05 and logFC > 2. Nine overexpressed lncRNAs were listed as shown in Fig. 2A. Then the UALCAN database was used to analyze the expressions of the lncRNAs in STAD (Additional file 1: Fig. S1). SND1-IT1, VWA8-AS1 and LINC00511 were chosen as candidates. SND1-IT1 was reported to promote the proliferation and migration of osteosarcoma [18], while VWA8-AS1 and LINC00511 have never been reported previously. The exosomes were extracted from AGS and GES-1 cells, and named Exo/AGS and Exo/GES-1 respectively. Nanoparticle tracking analysis (NTA) was implemented to examine the diameter of Exo/AGS and Exo/GES-1. The result showed that the diameter of the extracted exosomes was about 60-160 nm (Fig. 2B). Afterwards, the morphology of the exosomes was observed by electron microscopy, which accorded with the basic characteristics of exosomes (Fig. 2C). Western blot assay was then performed to detect the protein levels of CD63, CD81 and CD9, showing their existence (Fig. 2D). The results from Fig. 2B-D all demonstrated that exosomes were successfully extracted. Subsequently, the expressions of the candidate lncRNAs were measured by q-PCR in Exo/AGS and Exo/GES-1. As SND1-IT1 expression was the most up-regulated among the candidates, it was chosen for the following experiments (Fig. 2E). The expression of SND1-IT1 in the exosomes from the plasma of GC patients was analyzed through the GSE153414 dataset. As shown in Fig. 2F, SND1-IT1 was predicted to be up-regulated in the exosomes from the plasma.

Fig. 1 GC cells induce malignant progression of GES-1 cells via exosomes. A GES-1 cells were co-cultured with AGS cells. B GES-1 cell migration before and after co-culture with AGS cells was detected by wound healing assay. C Transwell assay was performed to detect the invasive capability of GES-1 cells before and after co-culture with AGS cells. D Wound healing assay was performed to measure the migration of AGS-cultured GES-1 cells before and after treatment with GW4869. E Transwell assay was used to determine AGS-cultured GES-1 cell invasion before and after treatment with GW4869. ** P < 0.01
Next, SND1-IT1 was overexpressed by transfection with the pcDNA3.1-SND1-IT1 vector, the efficiency of which was detected by q-PCR in AGS and GES-1 cells (Additional file 2: Fig. S2A, B). Exosomes extracted from GC cells transfected with pcDNA3.1 and pcDNA3.1-SND1-IT1 were termed Exo/pcDNA3.1 and Exo/SND1-IT1 respectively. Similarly, the diameter and morphology of these exosomes were examined (Fig. 2G, H). The protein levels of CD63, CD81 and CD9 were detected by western blot in these exosomes (Fig. 2I). The results from Fig. 2G-I demonstrated that GC cell-derived exosomes were extracted successfully. The expression of SND1-IT1 in AGS cells treated with Exo/pcDNA3.1 or Exo/SND1-IT1 was detected by q-PCR. The results showed that the expression of SND1-IT1 in exosomes was up-regulated in the Exo/SND1-IT1 group compared with the control group (Fig. 2J). Next, GES-1 cells were treated with Exo/pcDNA3.1 and fluorescent-labeled Exo/SND1-IT1 respectively, followed by observation under a fluorescence microscope. It was proved that SND1-IT1 can be delivered through GC cell-derived exosomes into GES-1 cells (Fig. 2K). Taken together, GC cells secrete exosomal SND1-IT1 into GES-1 cells.
Exosomal SND1-IT1 promotes GES-1 cell migration and invasion
In this section, we decided to explore the effect of exosome-mediated SND1-IT1 on GES-1 cells. The GES-1 cells were treated with Exo/pcDNA3.1 or Exo/SND1-IT1 respectively. Next, q-PCR was utilized to detect SND1-IT1 expression, showing an increase in Exo/SND1-IT1 group compared to Exo/pcDNA3.1 group (Fig. 3A). Wound healing assay and transwell assays were conducted to measure the migratory and invasive capabilities of GES-1 cells. It was found that exosomal SND1-IT1 promoted GES-1 cell migration and invasion (Fig. 3B, C). At last, the expression levels of MMP2 and MMP9 were detected by q-PCR in GES-1 cells. MMP2 and MMP9 are invasion-related factors, and their expressions are positively related to cell invasion. The results of q-PCR showed that, after treatment with Exo/SND1-IT1, the levels of MMP2 and MMP9 were increased, suggesting that exosomal SND1-IT1 promotes GES-1 cell invasion (Fig. 3D). Taken together, exosomal SND1-IT1 promotes GES-1 cell migration and invasion.
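Relative q-PCR expression levels such as those reported here for MMP2 and MMP9 are conventionally computed with the Livak 2^(−ΔΔCt) method; the paper does not spell out its normalization, so the sketch below assumes the usual reference-gene scheme with invented Ct values:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: normalize the target Ct to a reference gene,
    then to the control sample."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Target amplifies 2 cycles earlier than in control => 4-fold up-regulation.
print(fold_change(22.0, 16.0, 24.0, 16.0))  # 4.0
```

A fold change above 1 corresponds to up-regulation relative to the control group, matching the increased MMP2/MMP9 levels after Exo/SND1-IT1 treatment.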
SND1-IT1 competitively adsorbs miR-1245b-5p to elevate USP3 expression
Next, we investigated the specific regulatory mechanism of SND1-IT1 in GES-1 cells through the following experiments. It was found that SND1-IT1 can regulate the progression of osteoblasts via a ceRNA pattern [18]. We conjectured that in GES-1 cells, exosomal SND1-IT1 regulates downstream genes by competitively adsorbing miRNA. In GES-1 cells, SND1-IT1 expression was detected by q-PCR after RIP assay. It was found that SND1-IT1 existed in the RNA-induced silencing complex (RISC), as evidenced by the preferential enrichment of SND1-IT1 in the Anti-AGO2 group, which indicated that SND1-IT1 might regulate downstream gene expression via miRNA (Fig. 4A). The miRNAs which can bind to SND1-IT1 were predicted by starBase under the conditions AgoExpNum ≥ 2 and Pan-Cancer ≥ 3. Seven miRNAs were screened out and listed as shown in Fig. 4B. Among them, miR-1245b-5p and miR-873-3p were chosen as candidates as they had not been studied in GC. According to the literature review, miR-873-3p can inhibit lung cancer progression [19], and there are few researches on the role of miR-1245b-5p in cancers. RNA pull-down assay in GES-1 cells was performed to detect the interaction of SND1-IT1 with miR-873-3p and miR-1245b-5p. As evidenced by the higher enrichment, miR-1245b-5p had a better binding ability to SND1-IT1 than miR-873-3p (Fig. 4C). Thus miR-1245b-5p was selected for further study. For verification, RNA pull-down assay proved that SND1-IT1 can bind to miR-1245b-5p (Fig. 4D). After SND1-IT1 was overexpressed, q-PCR was performed to detect miR-1245b-5p expression. It was shown that the expression of miR-1245b-5p had no obvious change after SND1-IT1 overexpression, indicating that SND1-IT1 competitively adsorbs miR-1245b-5p (Fig. 4E). In 293 T cells, a dual-luciferase reporter assay was conducted to detect the luciferase activity under different conditions. After co-transfection with miR-1245b-5p mimics, the luciferase activity in the pmirGLO-SND1-IT1-WT group was decreased, further proving the interaction between miR-1245b-5p and SND1-IT1 (Fig. 4F). The mRNAs which can bind to miR-1245b-5p were predicted and screened by the starBase database under the conditions AgoExpNum ≥ 10 and CleaveExpNum ≥ 1 (Fig. 4G). Furthermore, the expressions of the predicted mRNAs in STAD were analyzed by starBase (Additional file 2: Fig. S2C-J). USP3, SLC7A6, IGF2BP1 and TKT were chosen as candidates because they were predicted to be highly expressed in STAD.

Fig. 4 legend (partial): B The miRNAs which can bind to SND1-IT1 were predicted by starBase (https://rna.sysu.edu.cn/encori/index.php) when AgoExpNum ≥ 2 and Pan-Cancer ≥ 3. C In GES-1 cells, the interaction of SND1-IT1 with miR-873-3p and miR-1245b-5p was detected by RNA pull-down assay. D The interaction between SND1-IT1 and miR-1245b-5p was detected by RNA pull-down assay. E The expression of miR-1245b-5p was detected by q-PCR after SND1-IT1 was overexpressed. F In 293 T cells, dual-luciferase reporter assay detected the interaction between SND1-IT1 and miR-1245b-5p. G The mRNAs which can bind to miR-1245b-5p were predicted and screened by the starBase database. H In GES-1 cells, q-PCR detected the expression of candidate mRNAs before and after up-regulation of SND1-IT1. I In 293 T cells, dual-luciferase reporter assay detected the interaction between USP3 3′UTR and miR-1245b-5p. J The expression level of USP3 was detected by q-PCR in GES-1 cells after transfection of pcDNA3.1, pcDNA3.1-SND1-IT1, pcDNA3.1-SND1-IT1 + mimics NC or pcDNA3.1-SND1-IT1 + miR-1245b-5p mimics. K The interaction between USP3 and miR-1245b-5p was detected in GES-1 cells by RNA pull-down assay before and after SND1-IT1 overexpression. L In 293 T cells, dual-luciferase reporter assay was performed to detect the interaction between USP3 3′UTR and miR-1245b-5p before and after SND1-IT1 overexpression. ** P < 0.
According to the literature review, USP3 promotes GC cell migration and invasion [20]; SLC7A6 hasn't been reported in studies related to cancer; IGF2BP1 accelerates GC progression [21]; and TKT facilitates breast cancer metastasis [22]. In GES-1 cells, q-PCR was performed to detect the expression of the candidate mRNAs after up-regulation of SND1-IT1. Among the candidates, USP3 expression was the most increased after SND1-IT1 overexpression (Fig. 4H), thus it was selected for our study. The overexpression efficiency of miR-1245b-5p mimics was assessed by q-PCR as shown in Additional file 3: Fig. S3A. Afterwards, a dual-luciferase reporter assay in 293 T cells was performed to detect the interaction between USP3 3′UTR and miR-1245b-5p. It was shown that when miR-1245b-5p was overexpressed, the luciferase activity of the pmirGLO-USP3 3′UTR-WT group was decreased, while that of the mutant groups had no significant change compared with the control group (Fig. 4I). Subsequently, a rescue experiment was conducted. We performed q-PCR to detect the expression of USP3 after transfection of pcDNA3.1, pcDNA3.1-SND1-IT1, pcDNA3.1-SND1-IT1 + mimics NC or pcDNA3.1-SND1-IT1 + miR-1245b-5p mimics. It was found that the expression of USP3 was evidently increased by SND1-IT1 overexpression, and this was then partially reversed by miR-1245b-5p overexpression (Fig. 4J). The enrichment of USP3 under different conditions was detected by q-PCR in GES-1 cells after RNA pull-down assay. We found that USP3 was highly enriched in the Bio-miR-1245b-5p-WT group, and the abundance of USP3 was enhanced after the overexpression of SND1-IT1, indicating that the binding between USP3 and miR-1245b-5p is strengthened by SND1-IT1 overexpression (Fig. 4K). In 293 T cells, dual-luciferase reporter assay was performed to detect the luciferase activity under different conditions.
The results showed that the luciferase activity was decreased after miR-1245b-5p was overexpressed, which was recovered after co-transfection with pcDNA3.1-SND1-IT1 (Fig. 4L). To sum up, SND1-IT1 competitively adsorbs miR-1245b-5p to up-regulate USP3 expression.
SND1-IT1 recruits DDX54 to facilitate USP3 mRNA stability
The results of the rescue experiment showed that the increase in USP3 expression caused by SND1-IT1 overexpression was only partially reversed by overexpressed miR-1245b-5p. Therefore, we speculated that SND1-IT1 might regulate USP3 expression in other ways as well. The literature indicates that lncRNAs can regulate downstream mRNAs by recruiting RBPs [23]; thus we speculated that SND1-IT1 might promote USP3 expression via recruiting an RBP. The RBPs (TARDBP, DDX54 and HNRNPC) which can bind to both SND1-IT1 and USP3 were predicted by starBase under the conditions Pan-Cancer ≥ 15, ClusterNum ≥ 30 and ClipExpNum ≥ 5 (Fig. 5A). Afterwards, RIP assay followed by q-PCR was performed in GES-1 cells to detect the enrichment of the candidate RBPs. As evidenced by the higher enrichment, DDX54 had the best binding ability to SND1-IT1 (Fig. 5B). Therefore, DDX54 was chosen for further study. RNA pull-down assay followed by western blot was performed to detect the interaction of SND1-IT1 with DDX54, further proving that DDX54 can bind to SND1-IT1 (Fig. 5C). DDX54 was then predicted to bind to USP3 3′UTR by starBase (Fig. 5D). Next, RNA pull-down and RIP assays in GES-1 cells were used to verify the binding between USP3 3′UTR and DDX54.

Fig. 5 legend (partial): B In GES-1 cells, the interaction of SND1-IT1 with RBP candidates was detected by RIP assay. C RNA pull-down assay detected the interaction of SND1-IT1 with DDX54. D DDX54 was predicted to bind to USP3 3′UTR by using starBase. E In GES-1 cells, western blot was used to verify the binding between USP3 3′UTR and DDX54 after RNA pull-down assay. F In GES-1 cells, RIP assay was used to verify the binding between USP3 3′UTR and DDX54. G The expression of DDX54 in GES-1 cells was measured by q-PCR after the overexpression of SND1-IT1. H In GES-1 cells, the binding between USP3 3′UTR and DDX54 was detected by RIP assay before and after SND1-IT1 overexpression. I USP3 and β-actin half-lives were measured by q-PCR after DDX54 knockdown. ** P < 0.01
The results proved the interaction (Fig. 5E, F). The expression of DDX54 in GES-1 cells was then measured by q-PCR after SND1-IT1 overexpression and remained almost unchanged (Fig. 5G). The binding between USP3 3′UTR and DDX54 was detected by RIP assay in GES-1 cells and was shown to be enhanced after SND1-IT1 overexpression (Fig. 5H). The interference efficiency of si-DDX54-1/2/3 was detected by q-PCR and western blot (Additional file 3: Fig. S3B, C). Due to its higher efficiency, si-DDX54-1 was selected for follow-up experiments. Next, we performed q-PCR to explore the effect of DDX54 on USP3 mRNA stability in GES-1 cells treated with the transcription inhibitor actinomycin D (ActD). USP3 and β-actin half-lives were measured after DDX54 knockdown, which showed that the USP3 half-life was shortened (Fig. 5I). These experimental outcomes suggested that SND1-IT1 promotes USP3 mRNA stability via DDX54.
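An actinomycin D chase yields a half-life by fitting exponential decay to the q-PCR time course: with remaining fraction m(t) = m0·e^(−kt), the half-life is ln 2 / k. A minimal sketch of that calculation, assuming hypothetical time points and normalized values (this is not the authors' data or analysis script):

```python
import math

def mrna_half_life(hours, fraction_remaining):
    """Estimate mRNA half-life (h) from an actinomycin D chase.

    Fits ln(fraction) = -k * t by ordinary least squares and returns
    t_1/2 = ln(2) / k. `fraction_remaining` is the q-PCR signal
    normalized to the 0 h sample.
    """
    ys = [math.log(f) for f in fraction_remaining]
    n = len(hours)
    t_mean = sum(hours) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(hours, ys))
             / sum((t - t_mean) ** 2 for t in hours))
    k = -slope                      # first-order decay rate (1/h)
    return math.log(2) / k

# Hypothetical chase in which the transcript halves roughly every 4 h
t_half = mrna_half_life([0, 2, 4, 6, 8], [1.0, 0.71, 0.50, 0.35, 0.25])
print(round(t_half, 2))  # 3.98
```

A shortened USP3 half-life after DDX54 knockdown corresponds to a steeper decay slope (larger k) in this fit.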
SND1-IT1 induces the malignant transformation of GES-1 cells via USP3
In this section, we explored whether SND1-IT1 regulates the malignant transformation of GES-1 cells via USP3. Firstly, the interference efficiency of si-USP3-1/2/3 was detected by q-PCR (Additional file 3: Fig. S3D). Due to their higher efficiencies, si-USP3-1 and si-USP3-2 were selected for follow-up experiments. Afterwards, experiments were conducted in GES-1 cells to detect cell migration and invasion after transfection with pcDNA3.1, pcDNA3.1-SND1-IT1, pcDNA3.1-SND1-IT1 + si-NC or pcDNA3.1-SND1-IT1 + si-USP3-1, respectively. Wound healing assay showed that cell migration was promoted by SND1-IT1 up-regulation and then totally reversed by USP3 inhibition (Fig. 6A). The expressions of the invasion-related factors MMP2 and MMP9 were measured by q-PCR to assess cell invasion in transfected cells; the results showed that the cell invasion promoted by SND1-IT1 overexpression was fully recovered by USP3 depletion (Fig. 6B). Transwell assay was then conducted to assess cell migration and invasion, which confirmed that the enhanced migration and invasion caused by SND1-IT1 overexpression were totally reversed by USP3 interference (Fig. 6C). Taken together, SND1-IT1 induces the malignant transformation of GES-1 cells via USP3.
USP3 mediates SNAIL1 deubiquitination
According to the literature, USP3 can mediate protein deubiquitination by acting as a deubiquitinase to promote protein expression and regulate cancer progression [24]. Proteins that can interact with USP3 were predicted by the BioGRID database under the condition Evidence ≥ 3. Three proteins were screened out: ISLR, SNAIL1 and CHEK1 (Fig. 7A). According to previous studies, ISLR promotes colorectal cancer (CRC) progression [25]; SNAIL1 promotes GC cell proliferation and migration [26]; and CHEK1 is up-regulated in lung cancer and inhibits the radiotherapy sensitivity of GC cells [27]. The interference efficiency of si-SND1-IT1-1/2/3 was confirmed by q-PCR in GES-1 cells (Additional file 1: Fig. S3E). Due to their higher efficiencies, si-SND1-IT1-1 and si-SND1-IT1-2 were selected for follow-up experiments.
The protein levels of the candidates were analyzed by western blot after SND1-IT1 down-regulation. As the protein level of SNAIL1 was decreased most evidently, SNAIL1 was selected for follow-up assays (Fig. 7B). The interaction between USP3 and SNAIL1 was verified by Co-IP assay (Fig. 7C). In GES-1 cells, SNAIL1 mRNA and protein levels were detected by q-PCR and western blot, respectively, after USP3 ablation. It was shown that the mRNA level of SNAIL1 remained unchanged, while its protein level was significantly decreased (Fig. 7D, E). Subsequently, the SNAIL1 ubiquitination level under different conditions was detected by IP-western blot assay in cells treated with MG132, an inhibitor of protein degradation, at 10 µM for 24 h [28]. The outcome showed that USP3 mediated the deubiquitination of SNAIL1 (Fig. 7F). The overexpression efficiency of pcDNA3.1-USP3 is shown in Additional file 3: Fig. S3F. The SNAIL1 protein level under different conditions was detected by western blot assay; SNAIL1 protein was decreased when SND1-IT1 was interfered with, and fully recovered when USP3 was overexpressed (Fig. 7G). Finally, western blot assay was utilized to measure the SNAIL1 protein level at different time points after the addition of cycloheximide (CHX), a protein synthesis inhibitor, at 20 μM [28]. The result displayed that USP3 ablation inhibited SNAIL1 protein degradation (Fig. 7H). To sum up, USP3 mediates SNAIL1 deubiquitination.
Exosomal SND1-IT1 is carcinogenic in vivo
Outcomes from Figs. 3-7 indicated that, in in vitro experiments, exosomal SND1-IT1 induces GES-1 malignant transformation. Thus, we next verified the carcinogenic role of exosomal SND1-IT1 in vivo. Nude mice received subcutaneous injection of exosomes Exo/pcDNA3.1 or Exo/SND1-IT1. Beginning seven days after the injection, tumor volume was measured every 3 days [29]. The volume of xenografts from 0 to 28 days of subcutaneous tumorigenesis in nude mice was recorded. Compared to the control group, tumor volume was higher in nude mice injected with Exo/SND1-IT1 (Fig. 8A). Twenty-eight days after injection, the weight of xenografts resected from nude mice was measured; likewise, tumor weight was higher in the Exo/SND1-IT1 group than in the Exo/pcDNA3.1 group (Fig. 8B). Western blot assay was conducted to detect the protein levels of USP3 and SNAIL1 in xenografts; both were obviously increased in the Exo/SND1-IT1 group compared to the control group (Fig. 8C).
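Caliper-based xenograft volumes are typically derived from two perpendicular diameters. The paper does not state which formula it used, so the widely used ellipsoid approximation V = (length × width²)/2 and the measurement series below are assumptions for illustration only:

```python
def caliper_volume(length_mm, width_mm):
    """Approximate xenograft volume (mm^3) as (length * width**2) / 2.

    This is a common caliper formula; the paper does not state which
    formula was applied, so this choice is an assumption.
    """
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical measurement series every 3 days from day 7 to day 28
days = [7, 10, 13, 16, 19, 22, 25, 28]
dims = [(4, 3), (5, 4), (6, 5), (8, 6), (9, 7), (11, 8), (12, 9), (13, 10)]
volumes = [caliper_volume(l, w) for l, w in dims]
print(volumes[0], volumes[-1])  # 18.0 650.0
```

A growth curve like Fig. 8A is then simply `volumes` plotted against `days` for each treatment group.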
Discussion
GC, as a prevalent and heterogeneous disease, features an unsatisfying prognosis [30,31]. Despite great progress made in GC treatment, proper therapeutic strategies for GC still remain to be explored. Biological and molecular markers are of great value in the diagnosis, prognosis and treatment of malignant tumors. For instance, metadherin has the potential to function as a diagnostic and prognostic marker in CRC [32]; neuropilin-1 and angiopoietin-2 serve as markers for hepatocellular carcinoma [33]; miR-150 expression can be used to predict imatinib response in chronic myeloid leukemia patients [34]; and the expression of Oct4 is correlated with GC progression [35]. In order to improve the treatment of GC, we studied the specific mechanism of lncRNA SND1-IT1 and its potential as a biomarker in GC cells.
LncRNAs have been reported to participate in the regulation of disease development and play a critical role in various biological functions [36]. As indicated in previous studies, lncRNA SND1-IT1 modulates various cancers. For instance, SND1-IT1 accelerates the proliferation and migration of osteosarcoma via sponging miR-665 to up-regulate POU2F1 expression [18]. In addition, SND1-IT1 is involved in rat myocardial ischemia/reperfusion injury via regulating miR-183-5p [37].
It has been reported that, in the TME, cancer cells can induce normal cells to promote carcinogenesis [15]. In our research, we first co-cultured GC cells with the gastric mucosa epithelial cell line GES-1. It was found that the malignant transformation of GES-1 cells was promoted by exosomes secreted from GC cells. With the help of the GEO database (GSE153413), lncRNAs that were highly expressed in the exosomes from the plasma of GC patients were analyzed. LncRNA SND1-IT1 was found to be obviously overexpressed in exosomes and was therefore chosen as the focus of the present study. Next, we conducted experiments and discovered that exosomal SND1-IT1 can induce the malignant transformation of GES-1 cells. In order to figure out the specific mechanism underlying the promoting effect of SND1-IT1 on canceration, we conducted experiments after bioinformatics prediction. It was verified that SND1-IT1 can promote USP3 mRNA expression through competitively adsorbing miR-1245b-5p. Meanwhile, SND1-IT1 was proved to recruit DDX54 to improve USP3 mRNA stability. As USP3 can mediate protein deubiquitination, we used BioGRID to predict proteins that can interact with USP3. Further assays confirmed that USP3 mediated the deubiquitination of SNAIL1 and facilitated its protein expression. In the end, animal experiments verified that exosome-mediated SND1-IT1 can promote cancer progression in vivo. The graphical abstract illustrates the underlying mechanism: as shown in Fig. 9, exosomal lncRNA SND1-IT1 secreted from GC cells competitively absorbs miR-1245b-5p and simultaneously recruits DDX54 to up-regulate USP3 expression, thus mediating SNAIL1 deubiquitination and inducing the malignant transformation of GES-1 cells.
Conclusion
Our study is the first to demonstrate the promoting effect of exosomal SND1-IT1 on gastric mucosa cell malignant transformation. It was also first discovered in the present study that SND1-IT1 can competitively adsorb miR-1245b-5p and recruit DDX54 to facilitate USP3 expression. Moreover, our study suggested for the first time that USP3 can mediate the deubiquitination of SNAIL1. However, our report can be improved in many aspects. Because of the lack of funding and personnel, we could not conduct clinical trials. In addition, only one type of GC cell line was utilized in the experiments, which limits the stringency of our report. In the future, we will probe into the clinicopathological relevance and validate the results in other GC cell lines.

Fig. 9 Exosome-mediated SND1-IT1 from GC cells interacts with miR-1245b-5p and DDX54 to up-regulate the expression of USP3, which mediates SNAIL1 deubiquitination to enhance the malignant transformation of gastric mucosa epithelial cells
A neutrophil - B-cell axis governs disease tolerance during sepsis via Cxcr4
Sepsis is a life-threatening condition characterized by uncontrolled systemic inflammation and coagulation, leading to multi-organ failure. Therapeutic options to prevent sepsis-associated immunopathology remain scarce. Here, we established a model of long-lasting disease tolerance during severe sepsis, manifested by diminished immunothrombosis and organ damage in spite of a high pathogen burden. We found that both neutrophils and B cells emerged as key regulators of tissue integrity. Enduring changes in the transcriptional profile of neutrophils included upregulated Cxcr4 expression in protected, tolerant hosts. Neutrophil Cxcr4 upregulation required the presence of B cells, suggesting that B cells promoted tissue tolerance by suppressing tissue-damaging properties of neutrophils. Finally, therapeutic administration of a Cxcr4 agonist successfully promoted tissue tolerance and prevented liver damage during sepsis. Our findings highlight the importance of a critical B-cell/neutrophil interaction during sepsis and establish neutrophil Cxcr4 activation as a potential means to promote disease tolerance during sepsis.

Summary We show that a B cell/neutrophil interaction in the bone marrow facilitates tissue tolerance during severe sepsis. By affecting neutrophil Cxcr4 expression, B cells can impact neutrophil effector functions. Finally, therapeutic activation of Cxcr4 successfully promoted tissue tolerance and prevented liver damage during sepsis.
Introduction

Sepsis is a life-threatening condition triggered by severe infections with bacteria, viruses or fungi. In spite of the successful use of antimicrobial therapies, mortality rates remain high with up to 50% (1,2). The main determinant of sepsis-associated mortality is rarely the pathogen, but instead the combination of dysregulated systemic inflammation, immune paralysis and hemostatic abnormalities that together cause multi-organ failure (3). Upon pathogen sensing, ensuing inflammation promotes the activation of coagulation, which in turn generates factors that further amplify inflammation, thus creating a vicious, self-amplifying cycle. These events result in systemic inflammation and the widespread formation of microvascular thrombi that together cause vascular leak, occlusion of small vessels and ultimately multi-organ failure (4,5). Whether a patient suffering from sepsis enters this fatal circuit of immunopathology or instead is able to maintain vital organ functions and survives sepsis is not well understood (6-8).

The concept of "disease tolerance" describes a poorly studied, yet essential host defense strategy, […] DNA damage response, tissue remodeling or oxidative stress (10). However, little is known about the specific contribution of immune cells to disease tolerance during severe infections, and therapeutic options to increase disease tolerance are limited due to a lack of knowledge about detailed molecular and cellular tolerance mechanisms (6-8).

In this study, we investigated mechanisms of disease tolerance by comparing tolerant and sensitive hosts during a severe bacterial infection. While sensitive animals developed severe coagulopathy and tissue damage during sepsis, tolerant animals were able to maintain tissue integrity in spite of a high bacterial load. Tolerance was induced by the prior exposure of animals to a single, low dose of LPS and could be uncoupled from LPS-induced suppression of cytokine responses. We provide evidence for a deleterious and organ-damaging interaction between B cells and neutrophils during sepsis in sensitive animals, while in tolerant animals neutrophils and B cells jointly orchestrated tissue protection during sepsis, which was associated with transcriptional reprogramming of neutrophils and B cell-dependent upregulation of neutrophil Cxcr4. Our data suggest that B cells can modulate the tissue-damaging properties of neutrophils by influencing neutrophil Cxcr4 signaling. Consequently, the administration of a Cxcr4 agonist prevented sepsis-associated microthrombosis and resulting tissue damage, thereby exposing a potential therapeutic strategy to foster tissue tolerance in severe sepsis.
LPS pre-exposure induces long-term tissue tolerance during Gram-negative sepsis

To establish a model for the study of tissue tolerance during sepsis, we challenged mice intravenously (i.v.) with a subclinical dose of LPS 1 day, 2 weeks, 5 weeks or 8 weeks, respectively, prior to the induction of Gram-negative sepsis by intraperitoneal (i.p.) injection of the virulent E. coli strain O18:K1. While LPS pretreatment 24h prior to infection significantly improved pathogen clearance, any longer period (i.e. 2-8 weeks) between LPS administration and infection did not affect the bacterial load when compared to control mice (Figure 1A, Figure S1A). Importantly though, all LPS pre-treated groups were substantially protected from sepsis-associated tissue damage, illustrated by the absence of elevated liver transaminase (ASAT and ALAT) plasma levels (Figure 1B). Thus, short-term (24h) LPS pre-exposure improved resistance to infection and consequently tissue integrity, while long-term (2-8 weeks) LPS pre-exposure enabled the maintenance of tissue integrity irrespective of a high bacterial load, which per definition resembles disease tolerance.

To dissect the underlying mechanism of tissue tolerance, we thus performed all subsequent experiments by treating mice with either LPS or saline two weeks prior to bacterial infection, allowing us to compare tolerant with sensitive hosts. Mice were either sacrificed two weeks after LPS pretreatment to assess changes in tolerant hosts prior to infection, or six to 18h after E. coli infection to determine early (6h) or late inflammation and organ damage (18h), respectively, during sepsis (Fig. 1C). Doing so, we observed that organ protection (Figure 1B […] Figure S1B and 1C). A major cause of organ damage during sepsis is the disseminated activation of coagulation, which is characterized by systemic deposition of micro-thrombi and substantial platelet consumption, resulting in a critical reduction in tissue perfusion (4-6). While we discovered a severe decline in platelet numbers upon E. coli infection in sensitive mice, tolerant mice maintained significantly higher blood platelet counts (Figure 1G) and, in sharp contrast to sensitive animals, showed almost no deposition of micro-thrombi in liver (Figure 1H and 1I) and lung sections (Figure S1D), indicating that tissue tolerance occurred systemically and was not organ specific.

Considering that LPS exposure itself can impact coagulation factor levels and blood platelet numbers (11,12), we importantly found similar platelet counts in sensitive and tolerant mice at the onset of E. coli infection (2 weeks post LPS) (Figure 1G). In addition, we did not detect any indication for an altered coagulation potential in tolerant mice before sepsis induction, as both groups showed a similar plasma thrombin generation potential prior to infection (Figure 1J left panel, Figure S1E). However, compared to sensitive animals, the thrombin generation capacity was only preserved in tolerant mice after infection (18h p.i.), suggesting that tolerance mechanisms prevented sepsis-associated consumption coagulopathy (Figure 1J right panel and 1K). Taken together, low-dose LPS pretreatment prevented the formation of micro-thrombi and induced a long-lasting state of tissue tolerance during subsequent sepsis.
These findings indicated that B cells, but not T cells, played an ambiguous role as they were involved in both sepsis-associated organ damage and the establishment of LPS-triggered tissue tolerance. We then tested if splenectomy would replicate the protective effects of B cell deficiency during sepsis and, interestingly, found that splenectomy was associated with reduced liver damage in naïve, sensitive mice, but, in contrast to complete lymphocyte deficiency, was not sufficient to abrogate LPS-induced tissue protection in tolerant animals (Figure 2G and S2F). This suggested that mature splenic B cells contributed to tissue damage during severe infections, while other, non-spleen-derived B cell compartments were instrumental in driving disease tolerance.

Given that B cells were shown to promote early production of proinflammatory cytokines such as IL-6 during sepsis in a type I IFN dependent manner (13), we next investigated if LPS pretreatment […] cytokine production during endotoxin tolerance in vitro (16,17). These data suggested that in tolerant hosts, B cells contributed to tissue protection during sepsis, and that an LPS-mediated modulation of early inflammation is unlikely to explain these protective effects.

(H-I) IL-6 levels in plasma and liver of NaCl or LPS pretreated wildtype or Rag2-/- mice at 6h p.i.

It seemed counterintuitive at first that the absence of neutrophils or B cells, respectively, prevented tissue damage in a primary infection, while they at the same time seemed critical for tissue protection in a model of LPS-induced tolerance. We thus hypothesized that B1 and B1-like cells, in contrast to B2 cells, reduced neutrophils' tissue-damaging effector functions. Using sIgM-deficient mice enabled us to rule out a major role for IgM in tissue tolerance during sepsis, even though IgM was reported to exhibit anti-thrombotic functions in cardiovascular diseases (39) and high plasma IgM levels positively correlate with a better outcome in human sepsis (24) and mouse models (23). However, while sIgM deficiency did not prevent LPS-induced tolerance, naïve sIgM-/- mice developed less organ damage during primary sepsis as compared to control animals. As sIgM deficiency goes along with a decreased abundance of B2 and an increased abundance of B1 cells (40), this further supported the notion of tissue-damaging B2, and tissue-protective B1, cells.

Since we discovered that LPS-induced protection was still observed in splenectomized animals, […] to the periphery as well as their homing back to the bone marrow when they become senescent (28,29). Importantly, Cxcr4 signaling is essential, as Cxcr4 knockout mice die perinatally due to severe developmental defects ranging from virtually absent myelopoiesis and impaired B lymphopoiesis to abnormal brain development (43). Antagonizing SDF1/Cxcr4 signaling is approved for stem cell mobilization from the bone marrow and is under extensive research in oncology, as it is critical for tumor development, metastasis and tumor cell migration (44). More recently, Cxcr4 signaling was described to delay neutrophil aging and to protect from vascular […] effects of upregulated Cxcr4 on neutrophils in sepsis. Strikingly, activating, but not antagonizing, Cxcr4 during sepsis induced tissue tolerance, suggesting that B cell-driven regulation of Cxcr4 is a potential mechanism of disease tolerance and thus might be an interesting therapeutic target during severe sepsis.

[…] Greenberger lysis buffer (300mMol NaCl, 30mMol Tris, 2mMol MgCl2, 2mMol CaCl2, 1% Triton X-100, 2% protease inhibitor cocktail) (61), and supernatants were stored at -20°C. For RNA isolation, lysates were stored in RLT buffer (Qiagen, containing 1% β-mercaptoethanol) at -80°C.
Pathogen burden was evaluated in organ homogenates by plating serial dilutions on blood agar plates (Biomerieux), as previously described (57). Blood platelet counts were determined in freshly isolated anticoagulated EDTA blood using a VetABC differential blood cell counter. Liver […]

CD4+ and CD8+ T cell depletion was performed by i.v. administration of anti-CD4 (200μg/mouse) or anti-CD8 (400μg/mouse) antibodies 36h prior to LPS treatment and repeated every three days […]

Cell transfers and splenectomy

Splenocytes were isolated from naïve WT C57BL/6 mice and i.v. injected into Rag2-deficient mice […] or LPS and two weeks later, challenged with E. coli as described above. Resting B cells were isolated from spleens of naïve UBC-GFP mice using magnetic beads (Miltenyi Biotec, Mouse B cell isolation kit) and i.v. injected into Rag2-deficient mice (5 x 10^6 cells/mouse) after erythrocyte lysis (ACK lysis buffer) two weeks and four days prior to LPS/NaCl treatment. After pretreatment with NaCl or LPS, transplanted animals were challenged with E. coli as described above. Mice were splenectomized or sham operated as described previously (62) and, after 1 week recovery, treated with NaCl/LPS and challenged with E. coli as described above.
In vitro thrombin-generation assay

Thrombin generation was assayed according to the manufacturer's instruction (Technoclone).

Flow cytometry

Splenocytes were isolated by passaging spleens through 70μm cell strainers and, after erythrocyte […], followed by filtering through 70μm cell strainers. Cells were counted using a CASY cell counter and, after unspecific binding was blocked using mouse IgG (Invitrogen), cells were stained in PBS containing 2% FCS using antibodies (see table) against mouse CD45, CD3, CD19, CD23, IgM, CD21, CD43, CD11b and Ly-6G. This was followed by incubation with a Fixable Viability Dye eFluor 780 (eBioscience) according to the manufacturer's instructions to determine cell viability. After several washing steps, cells were fixed (An der Grub Fix A reagent) and analyzed via flow cytometry using a BD LSRFortessa™ X-20 cell analyzer.

Liver sections (4 μm) were stained with H&E and analyzed by a trained pathologist in a blinded fashion according to a scoring scheme involving necrosis, sinusoidal and lobular inflammation, steatosis and endothelial inflammation (0 representing absent, 1 mild, 2 moderate, and 3 severe). The sum of all parameters indicated the total histology score. After staining for fresh fibrin (MSB stain, performed at the routine laboratory at Newcastle University), samples were scored for the presence of microthrombi by a trained pathologist in a blinded fashion. NIMP-R14 immunostaining was performed on paraffin-embedded liver sections as described earlier (64). Briefly, antigen retrieval was achieved using a citrate-based buffer at pH 6.0 (Vector laboratories), followed by several blocking steps. Incubation with anti-NIMP-R14 antibody (Abcam) was performed at 4°C, […]
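The histology scoring scheme reduces to summing five parameters, each graded 0 to 3. A minimal sketch with hypothetical grades for a single section (the parameter names follow the text; the grades are invented for illustration):

```python
# Each parameter is graded 0 (absent) to 3 (severe), as in the text;
# the total histology score is the sum over all parameters.
PARAMETERS = ("necrosis", "sinusoidal_inflammation",
              "lobular_inflammation", "steatosis",
              "endothelial_inflammation")

def total_histology_score(grades):
    """Sum the per-parameter grades of one H&E liver section."""
    assert set(grades) == set(PARAMETERS)
    assert all(0 <= v <= 3 for v in grades.values())
    return sum(grades.values())

# Hypothetical grading of one section
score = total_histology_score({
    "necrosis": 2, "sinusoidal_inflammation": 1,
    "lobular_inflammation": 1, "steatosis": 0,
    "endothelial_inflammation": 2,
})
print(score)  # 6
```

With five parameters graded 0-3, the total score ranges from 0 to 15.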
Statistical analysis

Statistical evaluation was performed using GraphPad Prism software, except for statistical analysis of RNA sequencing data, which was performed using R. Data are represented as mean ± SEM and were analyzed using either Student's t-test, comparing two groups, or one-way ANOVA, followed by Tukey multiple comparison test, for more than two groups. Differences with a p-value ≤ 0.05 were considered significant. For DEG, genes with an FDR-adjusted p-value of < 0.1 were considered differentially expressed.
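FDR adjustment of DEG p values is typically done with the Benjamini-Hochberg step-up procedure; the text does not name the exact method used in R, so BH is an assumption here. A minimal sketch with hypothetical p values:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment (step-up procedure).

    Returns adjusted p values in the original order. Genes with an
    adjusted value < 0.1 would be called differentially expressed,
    matching the threshold stated in the text.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

adj = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27])
deg = [p < 0.1 for p in adj]
print(deg)  # [True, True, True, True, False]
```

In this toy example the four smallest raw p values survive the FDR < 0.1 cut, while 0.27 does not.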
Declaration of interests
The authors declare no financial or commercial conflict of interest.

Data in (A-E) are from a single experiment (n = 4-5/group) and data in (F) are from a different single experiment (n = 3-4/group). All data are presented as mean ± SEM. * p ≤ 0.05.
A Comprehensive Mini-review on COVID-19 Pathogenesis on Perspectives of Cytokine Storm and Recent Developments in Anti-Covid Nucleotide Analogues
The world has been rocked by the 2019 coronavirus disease (COVID-19), which has significantly changed our way of life. Despite the unusual measures taken, COVID-19 still exists and affects people all over the world. A remarkable amount of research has been done to find ways to combat the unprecedented scale of the infection. No ground-breaking antiviral agent has yet been introduced to eliminate COVID-19 and bring about a return to normalcy, even though numerous pharmaceuticals and therapeutic technologies have been repurposed and discovered. The cytokine storm phenomenon is of utmost importance, since fatality is strongly connected with the severity of the disease. This severe inflammatory phenomenon, marked by increased amounts of inflammatory mediators, can be targeted to save patients' lives. Our analysis demonstrates that SARS-CoV-2 specifically generates a lot of interleukin-6 (IL-6) and results in lymphocyte exhaustion. Tocilizumab is an IL-6 inhibitor that is currently thought to be both generally safe and effective. Additionally, corticosteroids, tumor necrosis factor (TNF) blockers and Janus kinase (JAK) inhibitors could be effective and dependable methods to reduce the cytokine-mediated storm in SARS-CoV-2 patients.
INTRODUCTION
Riding on the roller coaster of coronavirus evolution, it became evident that the first coronavirus was discovered in domestic poultry and companion animals in the 1930s, and since then, coronaviruses have been recognized as pathogens that cause respiratory, gastrointestinal, liver, and neurologic diseases in animals. Subsequently, seven coronavirus strains/genera came to be known to cause respiratory disease in humans.1,2 Coronaviruses are classified by the International Committee on Taxonomy of Viruses as belonging to the realm Riboviria, order Nidovirales, and subfamily Orthocoronavirinae of the family Coronaviridae.3 The synthesis of a 3′-coterminal nested collection of subgenomic mRNAs during infection is what defines the order Nidovirales.4 They are enveloped viruses with a positive-sense single-stranded RNA genome and a nucleocapsid with helical symmetry;5 coronaviruses are among the largest RNA viruses, with genomes ranging from about 26 to 32 kilobases. The presence of peplomers, rod-shaped glycoprotein spikes that protrude from their surface and remind us of the solar corona from which their name is derived, was discovered through electron microscopy research.5 In general, coronaviruses 229E and OC43 cause the common cold; the serotypes NL63 and HKU1 were also found to produce common cold symptoms.5 These coronaviruses, which cause respiratory infections of varying degrees, showed zoonotic transmission from animals to humans and then displayed an uncanny ability for horizontal transmission in humans.
Serious lower respiratory tract infections, such as pneumonia, have been reported, usually in children, veterans, and immunocompromised people.3 The World Health Organization (WHO) regards coronaviruses as a vast family of viruses that have the potential to cause a wide range of illnesses in humans or animals. It was in 2003 that severe acute respiratory syndrome coronavirus (SARS-CoV) was the major cause of an outbreak of severe acute respiratory syndrome (SARS). The recently identified coronavirus disease, COVID-19, caused by the single-stranded RNA virus known as SARS-CoV-2, was reported with its clinical signs in Wuhan, China in 2019. As a result of the novel SARS-CoV-2 virus' post-infection worldwide spread, WHO declared it a pandemic on March 11, 2020. Similar to the previous two coronaviruses, i.e., SARS-CoV and the Middle East Respiratory Syndrome Coronavirus (MERS-CoV), which have caused life-threatening infections during the past 20 years, SARS-CoV-2 is a betacoronavirus.6 During COVID-19 progression, the body responds aggressively by releasing a lot of proinflammatory cytokines, a phenomenon described as a "cytokine storm".6 The severity of SARS-CoV-2 infection tends to be directly related to the cytokine storm. An excessive amount of inflammation results from the immune system's hyperactive response to the SARS-CoV-2 virus.5,6 The present review provides a comprehensive scenario not only of the cytokine storm and its clinical manifestations in COVID-19-infected individuals but also presents the treatments given to COVID-19 patients.
Cytokine Storm: a general definition
It is a hyperactive immune response marked by the release of interferons, interleukins, tumor necrosis factors (TNFs), chemokines, and other mediators. 4These mediators are a crucial component of an effective innate immune response required for the clearance of pathogens.Cytokine storm alludes to cytokine levels that are harmful to host cells. 5However, it has been very difficult to distinguish between a healthy and a dysregulated inflammatory response in the pathophysiology of severe illness. 6Most of the mediators linked to the cytokine storm have pleiotropic downstream effects and interact regularly with one another in biological activity, which adds to its complexity. 7imilarly, hypercytokinemia or cytokine storm is thought to be the cause of the condition seen in critically ill individuals infected with the SARS-CoV-2 virus in the current COVID-19 global pandemic scenario. 7Acute respiratory distress syndrome, thromboembolic illnesses such acute ischemic strokes brought on by major artery occlusion and myocardial infarction, encephalitis, acute kidney injury, and vasculitis are among the severe signs of COVID-19 that are associated with SARS-CoV-2 infections (Kawasaki-like syndrome in children and renal vasculitis in adult) (Figure 1).Understanding the immunopathogenesis of the cytokine storm in COVID-19 patients may provide new opportunities for early diagnosis and the implementation of therapeutic measures to reduce the risks of morbidity and death brought on by cytokine storm. 8
Cytokine storm: Mechanism, Pathogenesis, and Clinical Manifestations
Mechanism of cytokine storm
The inflammatory response is indispensable for pathogen identification, which then leads to the activation of immune cells, helps eliminate these unwanted hostile guests, and enables the tissue repair process.9 As an exception, SARS-CoV-2 triggers a rather prolonged and dysregulated cytokine/chemokine response in many infected individuals, known as the cytokine storm. Spike glycoproteins are extremely immunogenic parts of the coronaviruses;9,10 the SARS-CoV-2 peplomer binds angiotensin-converting enzyme 2 (ACE-2) receptors to access human type II alveolar epithelial cells.9 Using the highly flexible three-hinged stalk domain, which acts "like a balloon on a string", the spikes appear to hover over the virus surface and can thus scan for the presence of specific receptors for docking to the alveolar macrophages. The uniform distribution of ACE-2 receptors on the surfaces of type II alveolar epithelial cells, endothelial cells, and the renal and intestinal cells of the target organs has been found to correlate strongly with the manifestation of clinical symptoms in COVID-19 infection.10,11 Contrary to "secondary cytokine" storms, which are caused by various subsets of T lymphocytes activated at later stages of viral infection or as a side effect of T cell-involving therapies, "primary cytokine" storms are caused by viral infections and are primarily produced by cells such as epithelial cells, endothelial cells, and alveolar macrophages.12 It is interesting to note that proinflammatory T cell subsets, such as cytotoxic T cells that express perforin and granulysin and produce IL-17 (T-helper 17 or TH17 cells), were observed to rise, severely harming the lungs' immune system.12 Besides, an efficient and long-lasting antiviral response is typically exhibited by the natural killer (NK) cells in association with essential players in the immune system, including neutrophils, macrophages, and dendritic cells. Under normal conditions, NK cells kill the infected macrophages responsible for cytokine storms; therefore, diminished NK cell counts may augment the severity of disease in COVID-19 patients.13 During viral infections, these intricate cellular interactions can control the cytokine milieu, the initial viral load, and CD4+ T cell-mediated cellular immune responses. Though killing off infected target cells effectively lowers the viral load, NK cells also counter the systemic inflammatory response and the unwelcome cytokine storm known as "hyperferritinemic syndrome", synonymous with "macrophage activation syndrome", by lysing and eliminating activated inflammatory cells, namely neutrophils, dendritic cells, monocytes/macrophages, and T cells.13
Pathogenesis of cytokine storm
As a part of the general pathophysiological phenomenon, cellular infection and viral replication lead to the activation of the host-cell inflammasome.14 These aggressive proinflammatory responses cause the release of proinflammatory cytokines with concomitant cell death by a process known as pyroptosis.14 The damaged cells are further triggered to exhibit amplified inflammatory responses and elevated cytokine release in response to the viral infection, a pathophysiological condition termed cytokine release syndrome (CRS)15 or cytokine storm, deemed partially responsible for the acute respiratory distress syndrome (ARDS) and multiple organ dysfunction syndrome (MODS) seen in COVID-19.16 A rapid burst of intracellular virus replication initiates pyroptosis, immune system evasion, and cell lysis, and together these trigger the mass release of proinflammatory cytokines and chemokines. Therefore, the deterioration of COVID-19 victims' clinical symptoms may be the result of a combination of cytopathic effects caused directly by the virus infection and immunopathological injury caused by a turbulent cytokine storm.17 In this context, a few studies on cytokine profiles from COVID-19 patients indicated that the cytokine storm correlated positively with pulmonary cell and tissue damage, the unfavorable prognosis of severe COVID-19, and extrapulmonary multiple-organ failure.18 Experimental results from various studies on old and young non-human primates led viral epidemiologists to postulate that the virus titer could be less significant and instrumental than the uncontrolled inflammatory responses in inflicting the deaths of the old non-human primates.16,18 Thus, it may be stated that the "cytokine storm" potentially exacerbates the pathophysiological conditions in COVID-19-infected patients.12,19
Clinical manifestations of cytokine storm
A recent series of studies has shown that COVID-19-infected patients had increased levels of inflammatory cytokines, such as interleukin (IL)-1β, IL-2, IL-6, IL-7, IL-8, IL-9, IL-10, IL-18, tumor necrosis factor (TNF)-α, granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage colony-stimulating factor, fibroblast growth factor, and macrophage inflammatory protein 1, compared to healthy individuals.10 Besides this, amongst intensive care unit (ICU) patients, the circulating levels of three cytokines, IL-6, IL-10, and TNF-α, also correlated with the severity of infection, as reflected by their elevated concentrations compared to mild/moderate cases. ILs such as IL-1 and IL-6, TNF, and interferon (IFN)-γ orchestrate the pathological process that leads to vascular permeability, plasma leakage, and disseminated intravascular coagulation (DIC).20 This drastic increase in cytokine release results in an influx of macrophages, neutrophils, and T lymphocytes from the circulation into the adjacent infection site, severely damaging human tissue, destabilizing endothelial cell-to-cell interactions, damaging the vascular barrier and capillaries, and causing diffuse alveolar damage. It has been reported that suppression of the usual T-cell activation is caused by IL-6,29 and TNF-α can induce T-cell apoptosis via interaction with its receptor, TNF receptor 1.21 In a separate animal study, it was observed that the rapid replication of SARS-CoV in BALB/c mice induces the delayed release of IFN-α/β accompanied by an influx of numerous pathogenic inflammatory mononuclear macrophages.22 The accumulated mononuclear macrophages receive many activating signals via the IFN-α/β receptors on their cell surface and produce more monocyte chemoattractants (such as CCL2, CCL7, and CCL12), resulting in the further accumulation of mononuclear macrophages. Subsequently, these mononuclear macrophages produce higher levels of proinflammatory cytokines (TNF, IL-1β, and IL-6), thereby worsening the disease progression. Mesenchymal stem cell (MSC) therapy has been suggested recently to abrogate the undesirable activation of macrophages and T lymphocytes by stimulating their appropriate differentiation and thus thwarting the burst of proinflammatory cytokine release.23,24 Stem cells have been found to suppress the activities of viruses via Chaf1a-mediated and Sumo2-mediated epigenetic regulation (termed proviral silencing).25 For patients suffering from lung fibrosis and cytokine storm, MSC-based immunomodulation has been suggested as a suitable therapeutic approach.26 Recently, intravenous transplantation of MSCs has been suggested to be safe and effective in critically ill patients suffering from COVID-19 pneumonia,27 and although no approved MSC-based approaches have been reported to date for the prevention and/or medication of COVID-19 patients, the initial data from clinical trials have been immensely promising.
It has been observed that acute COVID-19 patients admitted to the ICU had an elevated erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), and enhanced IL-6, TNF-α, IL-1β, IL-8, and IL-2R, besides being associated with ARDS, hypercoagulation, and disseminated intravascular coagulation (DIC), manifested as thrombosis, thrombocytopenia, and gangrene.28,29 The illness due to COVID-19 is characterized by thrombosis and inflammation, causing extensive alveolar injury as a result of heightened macrophage activity and cytokine storms. Due to these events, cell membranes are disrupted and significant endothelial damage occurs, leading to thrombosis.30 Thrombocytopenia is a pathophysiological condition in which there is a reduction in platelet counts. Since platelets are involved in antifungal immune responses, thrombocytopenia may aggravate the risk of mucormycosis infection. Besides, the spikes in mucormycosis cases have been correlated with the immunosuppression caused by the corticosteroids and dexamethasone administered for COVID-19 treatment, which may trigger cytokine storm and vascular cell damage. Another pathophysiological condition, secondary haemophagocytic lymphohistiocytosis (sHLH), is a hyperinflammatory syndrome characterized by fulminant and fatal hypercytokinaemia with multiple-organ failure.30 In adults, sHLH is most commonly triggered by viral infections and occurs in 3.7-4.3% of sepsis cases.31
Remedies for the cytokine storm
Accumulating clinical studies have observed the "cytokine storm" in critical patients with COVID-19. Correct treatment of hyperinflammation using existing and approved therapies, with the safety aspect in mind, has been recommended to check the spiraling mortality. Therefore, appropriately suppressing the cytokine storm would be a significant clinical step to prevent this eventuality.31 Timely clinical intervention to quell the cytokine storm at its early stage, using a regimen of immunomodulators and cytokine antagonists together with the reduction of lung inflammatory cell infiltration, holds the key to improving the recovery and survival rates amongst critically ill patients.19 Several anti-cytokine drugs or formulations may potentially address the cytokine storm and mitigate the severity of the storms. One such medication that can be used to treat cytokine storms and macrophage activation syndrome (MAS) in autoimmune/autoinflammatory illnesses is corticosteroids.32,33 If used at the right moment, they may be helpful in the COVID-19 scenario in the more severe forms of CRS to control the systemic inflammatory response and avoid the development of ARDS.33 The prompt use of corticosteroids can result in early improvements such as a decrease in body temperature and an increase in oxygenation.17 The correct administration of glucocorticoids in patients with severe SARS considerably decreased the mortality rate and shortened the hospital stay, according to a retrospective analysis of 401 patients with the illness.16 Studies have revealed, however, that administering corticosteroid medication during SARS-CoV-2 infection had unfavorable effects. Early corticosteroid administration to SARS patients increased the plasma viral load in non-ICU patients, worsening the illness.17,19 The short-term (3-5 days) use of glucocorticoids has been hypothesized to be acceptable and may be advised for individuals who have excessive inflammation and a steady decline in oxygenation indicators.33 It should be kept in mind that high glucocorticoid doses can weaken the immune system, which can cause a delay in coronavirus clearance.20
TNF blockers
As previously discussed, TNFs, being among the major inflammatory factors, are key players in triggering cytokine storms. TNFs are thus deemed attractive targets for subduing the cytokine storm.19 Certain in vivo studies in murine models have demonstrated that TNFs could potentially contribute to acute alveolar injury and cause the impairment of T-cell responses in SARS-CoV-challenged mice.19 It has been demonstrated in mice that either the loss of the TNF receptor or its neutralization confers protection against virus-mediated morbidity and mortality.18,19 Nevertheless, the clinical efficacy of TNF blockers should be tested further, as TNF blockers have failed to earn recommendations for the treatment of COVID-19 patients.34 The SARS-CoV-2 spike protein appears to induce a TNF-α-converting enzyme (TACE)-dependent alteration of ACE-2, which allows virus penetration into host cells.35 Adalimumab, a TNF blocker, is now being tested in a clinical trial for COVID-19 infection (ChiCTR2000030089).
IL-1 family antagonists
A surge of IL-1 family interleukins, including IL-1β, IL-18, and IL-33, was reported previously.12 Studies on the inhibition of IL-1β to suppress the cytokine storm have gained immense importance. The IL-1β antagonist anakinra has been considered for treating the cytokine storm.36 Re-evaluation of data from a phase 3 randomized controlled trial of IL-1 blockade (anakinra) in severe sepsis showed a significant survival benefit in patients with hyperinflammation, without adverse effects.36 However, there is presently no clinical recommendation for any specific IL-1 family blocker, as their effects have not been clinically demonstrated in in vivo systems and clinical trials.
IL-2 and IL-6 Immunotherapy
The unrestrained release of IL-2, like that of TNF, in COVID-19 not only leads to the development of fever but also to capillary leakage, i.e., increased capillary permeability to various proteins, with clinical manifestations of edema, ARDS, and renal injury.
A prominent inflammatory cytokine, IL-6, is raised in the serum of COVID-19 patients and is involved in inflammatory cytokine responses.34 In COVID-19 patients admitted to the ICU, an aberrant increase in the number of CD14+ CD16+ inflammatory monocytes capable of producing IL-6 was noticed.34 A potential IL-6 antagonist is tocilizumab, a recombinant humanized monoclonal antibody directed toward the IL-6 receptor. Tocilizumab binds to the membrane-associated and soluble IL-6 receptors, thereby suppressing the Janus-activated kinase (JAK)-signal transducer and activator of transcription (STAT) signaling pathway and the production of downstream inflammatory molecules.37 Zhou et al. reported that GM-CSF produced by hyperactivated TH1 cells, and IFN-γ in lung cells promoting the production of monocytes through the release of GM-CSF, could potentially be therapeutic targets for the treatment of COVID-19 patients.38
JAK Inhibitors
The possibility of treating CS using cytokine downstream inhibitors, such as JAK inhibitors, is also being investigated. Baricitinib, fedratinib, and ruxolitinib were among the authorized medications identified by Stebbing et al.,39 approved for myelofibrosis, rheumatoid arthritis, and other conditions. These medications may prevent clathrin-mediated endocytosis, which prevents viral cell infection. Members of the NAK family, such as AP2-associated protein kinase 1 (AAK1)40 and cyclin G-associated kinase (GAK), are the targets of these medications; inhibition of these enzymes has been demonstrated to decrease viral infection in vitro. These medications are being investigated for the treatment of CS because they are specific inhibitors of JAK-STAT signaling and have anti-inflammatory characteristics. The disruption of AAK1 may therefore prevent the virus from entering cells and from assembling virus particles inside cells.41 The AAK1 enzyme is one of the recognized regulators of endocytosis.41 According to reports, baricitinib can bind cyclin G-associated kinase, another regulator of viral endocytosis, and suppress AAK1 activity when used at therapeutic doses.41 Due to its low plasma protein binding and negligible interaction with CYP enzymes and drug transporters, baricitinib presents great potential for combination therapy.42 Owing to its limited interaction with the relevant CYP drug-metabolizing enzymes, baricitinib may also be used in combination with the direct-acting antivirals (lopinavir/ritonavir and remdesivir) that are now being utilized to treat the COVID-19 epidemic. Baricitinib and related direct-acting antivirals may lessen virus multiplication, infectivity, and the abnormal host inflammatory response. Subject to adequate clinical testing, baricitinib has been deemed useful in treating SARS-CoV-2 infections.43 The justification for utilizing JAK inhibitors during a SARS-CoV-2 infection is that many cytokines, which immune system cells release to activate one another, need JAKs to do their jobs.44 According to Schett et al.,45 JAK inhibitors can specifically block the cytokine IL-6, which is produced by alveolar cells in the lungs. High levels of IL-6 have previously been associated with acute lung injury in SARS-CoV-1. The Table lists the pharmacotherapies used to treat diabetes and its complications as well as COVID-19 infection.
Therapeutics for specific treatment of patients with both COVID-19 and diabetes
Although insulin has been suggested for diabetic people with severe COVID-19, insulin therapy is determined by the severity of COVID-19, and patients are closely watched.45 In one study, patients who were given insulin had poorer clinical outcomes than those who were given metformin.46 Despite the evidence of better results in diabetic patients with COVID-19 receiving metformin, this medicine should be stopped if patients develop respiratory distress, renal dysfunction, or cardiac failure, as a result of acidosis.45 In the CORONADO study, Cariou et al.47 showed that the usage of metformin was lower in patients who died, and that other treatments such as insulin therapy, renin-angiotensin-aldosterone system (RAAS) blockers, β-blockers, and loop diuretics were related to fatality on the seventh day.58 They hypothesized that this observation might be linked to the existing comorbidities and diabetic issues in those who died, because those individuals had more frequent therapy with insulin and multiple other medicines.48 A recent study found that patients with COVID-19 had considerably greater postprandial glycemic variations and exposures to hyperglycemia when evaluated.49 Excessive stress and the increased release of hyperglycemia-related hormones, including catecholamines and glucocorticoids, may be triggered by COVID-19 illness in a diabetic individual. These hormones produce unpredictable glucose fluctuations in the blood and raise blood glucose levels (Figure 2). Periodic blood sugar monitoring therefore has to be part of the therapy strategy.49 Inhibitors of sodium-glucose transporter-2 should be used with caution because these drugs may lead to ketoacidosis and poor fat metabolism.50 Furthermore, glucagon-like peptide-1 receptor (GLP-1R) analogues should be used with caution because they can cause diarrhoea, nausea, vomiting, and headaches.52 Sitagliptin, a highly selective dipeptidyl peptidase 4 (DPP4) inhibitor, was utilized as an additional oral medication for patients with type 2 diabetes and COVID-19 in a recent multicenter, retrospective, case-control, observational trial. In this trial, sitagliptin medication was linked to lower mortality, better clinical outcomes, and a higher number of hospital discharges.52 DPP4 and ACE2, two of the most important coronavirus receptor proteins, are well-known metabolic signal transducers that regulate inflammation and glucose homeostasis. Furthermore, glucose-lowering medicines like DPP4 inhibitors, which are commonly used in type 2 diabetes patients, have been shown to alter the biological activities of a variety of immunomodulatory substrates.53 Several therapies for COVID-19 have been proposed by researchers. For instance, the benefits of ACEIs and ARBs for renal and heart health in people with diabetes have already been reported.54 However, as previously indicated, the use of ACEIs and ARBs in COVID-19 patients with diabetes should be carefully considered. Hyperglycemia is a known side effect of glucocorticoids in both diabetic and non-diabetic people. Even though these substances can worsen insulin resistance, reduce insulin sensitivity, and lead to severe hyperglycemia, they have been used to treat severely ill patients to suppress the high levels of cytokines and C-reactive protein frequently seen in those patients. In clinical trials, no research has indicated that they can reduce mortality or impede virus clearance.53
CONCLUSION
The elevations of IL-6 and IL-10 in COVID-19 are very consistent. IL-6 targets the IL-6 receptor,50 and the receptor recruits JAK, which activates signal transducer and activator of transcription 3 via a cascade signal.55 Some experts think that tofacitinib, a small-molecule medication that targets JAK1 and JAK3, could be used to treat COVID-19; tofacitinib was effective in treating COVID-19 patients suffering from ulcerative colitis.56 Because IL-10 can impede the activity of NF-κB to downregulate the synthesis of IL-6, high levels of IL-10 might be regarded as negative feedback that counteracts the increase in IL-6.57 When utilizing any strategy to modulate cytokine dysregulation, the laboratory indices should be closely monitored to avoid over-treatment. For example, if tocilizumab is administered to lower IL-6 levels, checking IL-6 levels every two days to keep them at a safe level could be investigated in the future.58 People with comorbidities, such as cardiovascular illness, hypertension, and diabetes, have been found to have severe cases of COVID-19. Diabetes has been demonstrated in a growing number of studies to be a significant risk factor for the severity of a variety of other infections. The dysregulated immune response of diabetic patients plays a key role in worsening severity. Diabetes is one of the comorbidities linked to COVID-19-related mortality and morbidity. Cardiovascular diseases, obesity, and hypertension, as well as a dysregulated immune response, altered ACE2 expression levels, and endothelial dysfunction, may aggravate the risk of COVID-19 infection in diabetic individuals.
People's awareness and opinions are likely to influence a large number of safety strategies and, in turn, clinical study findings. As a result, it is crucial to investigate COVID-19's unique characteristics in diabetics and to treat the comorbidities that come with COVID-19 infection, especially among the elderly who have existing critical diseases. Except for corticosteroids, there is little information in the COVID-19 literature about the efficacy and safety of the other prospective treatments. The advantages, duration, dose, and timing of corticosteroids are still up for discussion, and clinical evidence is needed to support the other, less promising treatments.
Figure 1. Various infection phases of SARS-CoV-2 and potential therapeutics
Figure 2. COVID-19 and diabetes have reciprocal effects. Diabetic individuals with COVID-19 infection have serious repercussions as a result of several coexisting diseases that increase the risk. SARS-CoV-2 may cause hyperglycemia during hospitalization due to its affinity for β-cells. Critical metabolic diseases, such as diabetic ketoacidosis, can be caused by β-cell destruction, leading to cytokine storm and a counter-regulatory hormone response.50
Table.
Pharmacotherapies used during COVID-19 infection and associated complications
The first-ever attempt to apply nickel gallic acid metal–organic framework (NiGA MOF) in analytical method development was done in this research by the extraction of some plasticizers from aqueous media. The greenness of the method is owing to the use of gallic acid and nickel as safe reagents and water as the safest solvent. Low boiling point solvents were applied as desorption solvents that underwent temperature-assisted evaporation in the preconcentration step. Performing the evaporation using a low-temperature water bath for a short period of time streamlines the preconcentration section. Into the solution of interest enriched with sodium sulfate, a mg amount of NiGA MOF was added alongside vortexing to extract the analytes. Following centrifugation and discarding the supernatant, a μL level of diethyl ether was added onto the analyte-loaded NiGA MOF particles and vortexed. The analyte-enriched diethyl ether phase was transferred into a conical bottom glass test tube and located in a water bath set at the temperature of 35 °C under a laboratory hood. After the evaporation, a μL level of 1,2-dibromoethane was added to the test tube and vortexed to dissolve the analytes from the inner perimeter of the tube. One microliter of the organic phase was injected into a gas chromatograph equipped with flame ionization detection. Appreciable extraction recoveries (61–98%), high enrichment factors (305–490), low limits of detection (0.80–1.74 μg L−1) and quantification (2.64–5.74 μg L−1), and wide linear ranges (5.74–1000 μg L−1) were obtained at the optimum conditions.
Introduction
Adipate and phthalate esters are categorized as plasticizers that are exploited to increase the exibility of plastic containers used for food and drink packaging. 1Although they act successfully to soen plastic bottles and containers, their entrance into the content of the containers due to low molecular weights and not having chemical bonds with the polymers 2,3 such as polyethylene terephthalate (PET) 4 and polyvinyl Chloride (PVC) 5 is a big concern.Subsequently, the entrance of plasticizers into the human body is health-threatening.They have been detected even in amniotic uid, breast milk, and serum. 6The maximum contaminant levels of di(2-ethylhexyl)adipate (DEHA) and di(2ethylhexyl)phthalate (DEHP) have been documented to be 400 and 6 mg L −1 , respectively. 7DEHA is also associated with liver cancer in mice 6 and postnatal death in rats. 8DEHP which is used in PVC medical packages such as blood bags is known to be carcinogenic for humans.Its tolerable daily intake is 50 mg kg −1 per body weight per day. 9DEHP has also shown DNA damage to human lymphocytes. 10Di-iso-butyl phthalate (DIBP) triggers male and female reproductive toxicity.It also results in adverse effects on the liver.Moreover, DIBP presence in the body is associated with the risk of diabetes. 11Di-n-butyl phthalate (DNBP) decreases progesterone production at midpregnancy. 12DNBP and DIBP have shown genotoxicity in human epithelial cells of the upper aerodigestive tract. 13DNBP was documented to be correlated with DNA damage to human mucosal cells and DIBP is linked with lymphocytes' DNA damage. 14Also, to restrict the oxidation of polymeric compounds, butylated hydroxytoluene (BHT), as an antioxidant, is added to polymers. 15The maximum limit of BHT, butylated hydroxyanisole (BHA), and tert-butyl hydroquinone (TBHQ) should not be more than 200 mg kg −1 (either single or in combination) in oil samples. 
16BHT was observed to be toxic to the neurobehavioral activity of rats.Also, it shows pathological effects on the brain, heart, and lungs. 17ccording to the health-threatening effects of the compounds of interest, they should be monitored in foods and beverages stored in plastic containers.Up to now, highperformance liquid chromatography 18 and gas chromatography (GC) 19 have been used for monitoring BHT, and phthalate and adipate esters.Since direct analysis is rarely possible using GC or results in high limits of detection (LODs) and quantication (LOQs) and also suffers from the matrix effect of the real samples, sample preparation procedures are inevitably necessary to be applied on samples prior to their injection into analytical apparatuses.Solid phase extraction, 20 liquid-liquid extraction, 21 solid phase microextraction, 22 headspace solid phase microextraction, 23 and dispersive liquid-liquid microextraction (DLLME) 24 have been performed for the extraction of the target compounds.The evolved version of dispersive solid phase extraction was introduced as dispersive micro solid phase extraction (DmSPE) by applying low sorbent weights which makes the approach more efficient. 25Although DmSPE is bene-cial, a preconcentration method is needed to couple with it in order to dwindle the LOD and LOQ values.Previously, DLLME has been coupled to DmSPE. 26To ease the extraction process, this study eliminates the use of DLLME and applies temperature-assisted evaporation (TAE) by using low boiling point desorption solvents for the preconcentration aim.
Metal-organic frameworks (MOFs) as hybrid and crystalline coordination polymers have revolutionized various elds including sample preparation, 27 supercapacitors, 28 and water treatment. 29,30Specically in the eld of extraction, MOF-70, 31 MIL-101(Cr), 32 MIL-68 (Al), 33 ZIF-8, 34 MIL-53 (Cr), 35 Basolite F300 MOF, 36 magnetic graphene@ZIF-8, 37 TMU-23@TMU-24, 38 and TMU-6 (ref.39) have been utilized for sample preparation of matrices containing plasticizers.The application of bio-MOFs is missing among MOF uses for the extraction of plasticizers.Bio-MOFs are superior to MOFs due to being synthesized from biologically active compounds, green, medium-compatible, nontoxic, and well-dispersed in solutions. 40,41ecause of the plasticizers' addition to the structure of polymers, they can enter into different liquids that are stored in plastic bottles.Since plasticizers have health-threatening effects, their presence in different edible stuff has to be monitored.It is worth mentioning that their direct analysis in samples is barely possible due to the matrix effect of samples and their low concentrations.So, they have to be extracted, preconcentrated, and subsequently injected into analytical instruments.Plastic bottled water samples were selected to monitor the quality of the stored drinkable water in this study.Moreover, their presence in tap water was investigated due to the spread of plastic pipes in the construction industry.
Furthermore, based on the widespread utilization of plasticizers and the environment's contamination, their presence was also monitored in rainwater samples.For the rst time in this study, a bio-MOF called nickel-gallic acid MOF (NiGA MOF) was applied for the extraction of some phthalate and adipate esters and BHT.Using a bio-MOF instead of an MOF is an asset for the study.Applying no organic solvents for the sorbent preparation is also an asset.The approach is green owing to the use of nickel, gallic acid, and water in the synthesis process.No long reaction time, high reaction temperature, and expensive apparatus are needed to propel the bio-MOF synthesis.The low weight of bio-MOF used in the extraction process is also appreciable.The elimination of DLLME streamlined the procedure by reducing the applied tools, organic solvent volumes, and the analyst's fatigue.Centrifugation was also eliminated from the preconcentration step.The reasons for the selection of NiGA MOF in this study can be summarized as being composed of green reagents (nickel and gallic acid), application of the safest solvent (water), and no need for high temperatures in the synthesis process, being bio-MOF and beneting from its related natural advantages, medium compatibility, well dispersion into the aqueous medium, and the ability for the creation of intermolecular bonds with the surveyed analytes (see Section 3.6.).Based on the given facts, both the extraction and preconcentration steps are economical which is precious.Also, for the rst time, NiGA MOF-based DmSPE was coupled to TAE of the desorption solvent.A mL level of an organic solvent was applied to dissolve the residues obtained from the evaporation step and one microliter of it was injected into GC-ame ionization detection (FID).
Chemicals and solutions
The chemicals utilized for the preparation of NiGA MOF, including nickel(II) chloride hexahydrate (NiCl2·6H2O), gallic acid, and potassium hydroxide, were provided by Merck (Darmstadt, Germany). Deionized water was bought from Ghazi Co. (Tabriz, Iran). The target compounds of the survey, including BHT, DNBP, DIBP, DEHP, and DEHA, were purchased from Sigma-Aldrich (St Louis, MO, USA). Their chemical structures and physicochemical properties are consolidated in Table 1. The desorption solvents, including diethyl ether (DE), tert-butyl methyl ether (TBME), carbon disulfide, n-pentane, and petroleum ether (PE), were provided by Sigma-Aldrich. Sodium chloride and sodium sulfate for performing the salting-out effect were from Merck. The elution solvents, including carbon tetrachloride, 1,2-dibromoethane (1,2-DBE), and 1,1,1-trichloroethane (1,1,1-TCE), were purchased from Janssen (Beerse, Belgium). Sodium hydroxide and hydrochloric acid solution (37%, w/w) were purchased from Merck and utilized for pH adjustment. A methanolic stock solution with a concentration of 250 mg L−1 (with respect to each analyte) was prepared and used for direct injection into the separation system and also for spiking into the deionized water and the surveyed aqueous samples.
Samples
Four freshly produced bottled water samples were bought from a local hypermarket in Tabriz city (East Azerbaijan Province, Iran). They underwent the extraction and preconcentration method as purchased. Also, two tap water and two rainwater samples were collected from Tabriz city and subjected to the developed method. The samples were extracted directly, with no dilution.
Apparatus
The separation of the five surveyed analytes was done using a Shimadzu gas chromatograph (2014, Kyoto, Japan) equipped with an FID and a splitless/split injection port. The temperature of the column oven was fixed at 60 °C for 1 min, then increased to 300 °C at a rate of 18 °C min−1, and finally maintained at 300 °C for 1 min. A Zebron capillary column (5% diphenyl, 95% dimethyl polysiloxane; Phenomenex, Torrance, CA, USA; 30 m × 0.25 mm i.d., film thickness 0.25 μm) was used in the study. Helium (99.999%; Gulf Cryo, Dubai, United Arab Emirates) was used as the makeup (flow rate, 30 mL min−1) and carrier (linear velocity, 30 cm s−1) gases. The temperatures of both the FID and the injection port were fixed at 300 °C. The sampling time and split ratio of the injection port were 1 min and 1 : 10, respectively. The air inlet of the FID was set at 300 mL min−1, and the fuel (hydrogen) at a flow rate of 30 mL min−1 was generated by a Shimadzu hydrogen generator (OPGU-1500S). A Metrohm pH meter (Herisau, Switzerland), model 654, was utilized in the preparation of the samples. A Hettich centrifuge (D-7200, Kirchlengern, Germany) was used in the DmSPE step. A Falc (Labsonic LBS2) thermostatic and ultrasonic water bath (Treviglio, Italy) was used in the preconcentration step. For dispersing NiGA MOF into the solutions in order to facilitate the adsorption process, an L46 vortex (Labinco, Breda, the Netherlands) was used. A UT 12 Heraeus oven (Hanau, Germany) was applied to propel the synthesis of the bio-MOF. Different analyses were carried out on the synthesis product, including Brunauer-Emmett-Teller (BET; BELSORP-mini II, Japan) for surface area, total pore volume, and average pore diameter; scanning electron microscopy (SEM; Mira 3 microscope, Tescan, Czech Republic) for the morphology of the bio-MOF; energy dispersive X-ray (EDX) analysis for the elemental composition; Fourier transform infrared (FTIR) spectrophotometry (Bruker, Billerica, USA) for the functional groups; and X-ray diffraction (XRD; Siemens D500 diffractometer, Siemens AG, Karlsruhe, Germany) for crystallinity evaluations.
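As a small aside, the total run time implied by the single-ramp oven program above can be computed from the two holds and the ramp rate. The helper below is illustrative only (the function name and structure are our own, not part of the original method).

```python
def gc_run_time(initial_hold_min, t_start_c, t_end_c, ramp_c_per_min, final_hold_min):
    """Total run time (min) for a hold -> linear ramp -> hold oven program."""
    ramp_min = (t_end_c - t_start_c) / ramp_c_per_min
    return initial_hold_min + ramp_min + final_hold_min

# Program from the text: 60 °C (1 min hold), ramp at 18 °C/min to 300 °C, 1 min hold.
total_min = gc_run_time(1.0, 60.0, 300.0, 18.0, 1.0)
print(round(total_min, 2))  # ≈ 15.33 min per injection
```

This is consistent with the short analysis time claimed for the method.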
Synthesis of NiGA MOF
NiGA MOF was synthesized by upscaling a previously introduced method42 and used in the analytical workflow. Initially, 50 mL of a 0.16 mol L−1 potassium hydroxide aqueous solution was prepared, and 10 mmol (2.38 g) of NiCl2·6H2O and 20 mmol (3.75 g) of gallic acid were added and sonicated for 30 min. The mixture was then transferred into a Teflon-lined stainless steel autoclave and heated for 24 h at 120 °C. After the reaction was completed, the brown product was filtered and washed with 50 mL of deionized water. Then, it was dried at room temperature, transferred into a beaker, and put in an oven for 24 h at 100 °C for activation. Finally, the bio-MOF was collected and stored in a sealed airtight vial.
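The stated reagent masses can be cross-checked against the stated molar amounts. The sketch below uses standard molar masses; note that 3.75 g matches gallic acid monohydrate (≈188.13 g mol−1) rather than the anhydrous acid (≈170.12 g mol−1). That monohydrate reading is our assumption, not a statement from the original method.

```python
# Molar masses (g/mol) from standard atomic weights; illustrative check only.
MW_NICL2_6H2O = 237.69       # NiCl2·6H2O
MW_GALLIC_ACID = 170.12      # anhydrous C7H6O5
MW_GALLIC_ACID_H2O = 188.13  # monohydrate C7H6O5·H2O (assumed form)

def grams(mmol, mw):
    """Mass in grams corresponding to an amount in mmol."""
    return mmol / 1000.0 * mw

ni_salt_g = grams(10, MW_NICL2_6H2O)              # stated as 2.38 g in the text
ga_anhydrous_g = grams(20, MW_GALLIC_ACID)        # ~3.40 g: does not match 3.75 g
ga_monohydrate_g = grams(20, MW_GALLIC_ACID_H2O)  # ~3.76 g: close to the stated 3.75 g
print(round(ni_salt_g, 2), round(ga_anhydrous_g, 2), round(ga_monohydrate_g, 2))
```

The nickel salt mass agrees with 10 mmol to two decimal places, supporting the stated stoichiometry.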
2.5. Extraction procedure
2.5.1. DmSPE section. A 250 μg L−1 concentration of each target compound was spiked into 5 mL of deionized water placed in a 10 mL conical-bottom glass test tube. 750 mg of sodium sulfate (15%, w/v) was dissolved in the above-mentioned solution via vortexing to perform the salting-out effect. 15 mg of NiGA MOF was added to the solution of interest and vortexed for 5 min to streamline the adsorption of the analytes onto the bio-MOF particles. Following this, 5 min of centrifugation at 5000 rpm isolated the analyte-loaded NiGA MOF particles from the solution. 700 μL of DE was added onto the bio-MOF, and the glass test tube was sealed using a lid and sealing film. Vortexing for 3 min was implemented to desorb the analytes from the NiGA MOF particles.
2.5.2. TAE section. The analyte-enriched DE phase obtained from the above-mentioned section was poured into a conical-bottom glass test tube and placed in a thermostatic water bath set at 35 °C under a laboratory hood, where the DE phase evaporated. 10 μL of 1,2-DBE was then added into the tube and vortexed for 3 min to dissolve the residues from the inner perimeter of the tube. One microliter of the organic phase was injected into the GC-FID system for analysis.
Enrichment factor and extraction recovery calculations
The preconcentration achieved by the method is expressed by the enrichment factor (EF), defined as the ratio of the analyte concentration in the final organic phase (C_org) to its initial concentration in the aqueous phase (C_0): EF = C_org / C_0 (eqn (1)).
The fraction of the analytes transferred into the extracted phase is expressed by the extraction recovery (ER), i.e. the percentage of the amount of analyte in the organic phase (n_org) relative to the amount initially present in the aqueous solution (n_0): ER = (n_org / n_0) × 100 = EF × (V_org / V_aq) × 100 (eqn (2)).
In this equation, V_aq is the volume of the initial aqueous phase and V_org is the volume of the final organic phase.
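Under the volumes used in this method (5 mL of aqueous sample, 10 μL of final 1,2-DBE phase), eqns (1) and (2) link EF and ER directly. The snippet below is an illustrative sketch of that relationship, not code from the paper.

```python
def enrichment_factor(c_org, c0):
    """EF = C_org / C_0 (eqn (1))."""
    return c_org / c0

def extraction_recovery(ef, v_org_ml, v_aq_ml):
    """ER% = (n_org / n_0) * 100 = EF * (V_org / V_aq) * 100 (eqn (2))."""
    return ef * (v_org_ml / v_aq_ml) * 100.0

# With V_aq = 5 mL and V_org = 10 uL (0.010 mL), the theoretical maximum EF
# (at ER = 100%) is V_aq / V_org = 500; the paper's best EF of 490 thus
# corresponds to ER = 98%.
ef_max = 5.0 / 0.010
er_at_best_ef = extraction_recovery(490.0, 0.010, 5.0)
print(round(ef_max), round(er_at_best_ef, 1))
```

This worked check shows why the reported EF range (305-490) is consistent with the reported ER range, given the phase volumes.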
Characterization of NiGA MOF
Once the desired bio-MOF was synthesized, XRD, SEM, BET, FTIR, and EDX analyses were carried out to reveal the chemical characteristics of the sorbent used in the method.
XRD analysis was carried out on NiGA MOF to obtain the XRD pattern of the coordination polymer, which demonstrates the crystalline nature of the product. Fig. 1a shows the XRD pattern of NiGA MOF, recorded over the 2θ range of 4-74°. As can be seen, there are several small XRD peaks in this window, denoting the presence of different crystallographic planes in the structure of NiGA MOF. The planes appear at 2θ values of around 11, 14, 20, 21, 24, 25, 27, 36, and 41°. The low intensity of the crystallographic peaks and the upshift of the XRD pattern result in small XRD peaks. In addition to the various peaks showing the crystallographic planes of the bio-MOF, the overlap of the obtained pattern with a documented XRD pattern from a previous study proves the successful synthesis of NiGA MOF.43 FTIR analysis reveals the existence of different functional groups, whose presence streamlines the adsorption of the target compounds onto the bio-MOF structure. Fig. 1b demonstrates the FTIR spectrum of NiGA MOF, recorded in the range of 400-4000 cm−1. The absorption peaks at 1614.07 and 1535.64 cm−1 are related to C=C stretching, which stems from the cyclic section of the bio-MOF. The absorption peak at 1376.67 cm−1 is ascribed to C-H bending of the organic section of the framework. The absorption peak at 1054.49 cm−1 shows C-O stretching, which is the basis of bio-MOF formation through the deprotonation of the hydroxide groups that leads to oxygen-nickel bond creation. C=C bending triggered by the organic section of NiGA MOF is shown by the absorption peaks at 880.13, 786.78, 751.10, and 710.13 cm−1. The observed peaks at 623.04 and 558.86 cm−1 prove the formation of nickel-oxygen bonds that pave the way to obtaining NiGA MOF. SEM analysis is helpful in providing informative data about the chemical's morphology, dimensions, and shape distribution. Fig. 1c-e illustrate the SEM images obtained with a 15 000 V electron beam and working distances of 9.57, 9.63, and 9.63 mm, respectively; magnification scales of 1500, 1500, and 2500 times were applied, respectively. Fig. 1c reveals a μm-level layer of NiGA MOF resulting from vertical stacking of the bio-MOF particles, creating a rugged surface. In Fig. 1d, it is seen that the longitudinal dimension of the bio-MOF particles ranges from 15.95 to 32.17 μm, and the transverse dimension ranges from 2.68 to 5.91 μm. Fig. 1e also demonstrates the needle-like morphology of the synthesized NiGA MOF.
EDX analysis provides surface elemental analysis of an MOF. From the EDX results, the composing elements of an MOF are revealed, the presence of any impurity or undesired element can be detected, and the percentages of the ligand elements and the cation can be disclosed. The results of the EDX analysis carried out on NiGA MOF are shown in Fig. 1f. No extra peaks except for those of the composing elements (nickel, carbon, and oxygen) are detected. The gold peak results from the procedure applied for gold coating of the sample. The surface of NiGA MOF is composed of 35.59% carbon, 44.18% oxygen, and 20.23% nickel.
BET analysis based on the adsorption and desorption of nitrogen gas is able to reveal average pore diameter, surface area, and total pore volume.Fig. 1g illustrates the obtained BET plot for the synthesized NiGA MOF.19.39 nm average pore diameter, 0.0087 cm 3 g −1 total pore volume, and 1.80 m 2 g −1 surface area were the recorded data for the synthesized bio-MOF.
Optimization of effective parameters
3.2.1. Optimization of the weight of NiGA MOF. In DmSPE-oriented procedures, the weight of sorbent is of great importance since it determines the adsorptive efficiency of the sorbent and the economical aspect of the method. In the case of MOFs, this point is highlighted. So, in order to optimize the bio-MOF weight for the adsorption of the plasticizers from the aqueous medium, different weights including 5, 10, 15, 20, and 25 mg were applied. Fig. 2 illustrates that increasing the NiGA MOF weight to 15 mg enhances the ERs. This happens because the increasing bio-MOF weight provides sufficient surface area for the adsorption of the target compounds. On the other hand, increasing the weight of sorbent to 20 and 25 mg diminishes the ER values of all the analytes. This observation denotes that using more than 15 mg of sorbent decreases the extraction efficiency, because of agglomeration of the NiGA MOF particles in the solution or deficient desorption of the target compounds from the bio-MOF surface. Obtaining 15 mg as the optimum weight is an advantage for the procedure, since the process can be performed with a low bio-MOF weight. So, 15 mg of NiGA MOF was selected to perform the extraction process.
3.2.2. Optimization of the ionic strength of DmSPE. Studying the ionic strength of a solution is of great importance to infer the efficiency of the salting-out effect on the procedure. The salting-out effect is based on reducing the solubility of the analytes in the aqueous solution so that they are extracted with higher ER values. To evaluate this effect, 15%, w/v, Na2SO4 and NaCl (separately) were dissolved in the aqueous solution containing the analytes and subjected to the developed extraction process. The resulting data were compared with the data obtained from the extraction of the salt-free solution. Fig. 3 demonstrates the preference for the Na2SO4-dissolved solution over the other tested ones; the presence of Na2SO4 increases the ERs significantly for all the target compounds. In the next step, the concentration of Na2SO4 was evaluated by investigating 5-30%, w/v, Na2SO4 (with intervals of 5%). The results of the analyses are shown in Fig. 4. It is seen that 15%, w/v, Na2SO4 enhances the ERs more than the other tested concentrations for most of the analytes. Concentrations lower than 15%, w/v, perform the salting-out effect deficiently and so result in lower ERs. Also, concentrations higher than the optimum decrease the ERs, which stems from the increased viscosity of the solution that hinders the migration of the target compounds from the aqueous solution onto the sorbent surface. So, 15%, w/v, Na2SO4 was selected.
Optimization of vortexing time in DmSPE.
To facilitate the extraction of the plasticizers from the aqueous medium, vortexing can be helpful for decreasing the equilibrium time. To evaluate this parameter, 1, 3, 5, and 7 min of vortexing were tested, and the obtained ERs are compared in Fig. 5. It is seen that 5 min of vortexing is sufficient to reach high ERs. Increasing the vortexing time beyond 5 min has no positive consequence; it can even lead to back-extraction of the surveyed analytes, as seen by the reduced ERs of most of the analytes. So, vortexing was implemented for 5 min.
3.2.4. Optimization of solution pH in DmSPE. Deviating the pH of the solution of interest in DmSPE can impact the obtained ERs. pH alteration when using bio-MOFs can significantly affect their structure and even destroy them. Also, severely basic and acidic conditions can result in the decomposition of the analytes and the deprotonation of the free O-H sections in NiGA MOF. These effects alter the ERs of the analytes and the solubility of the bio-MOF in the aqueous phase. Decomposition of the analytes diminishes their affinity to be adsorbed onto the bio-MOF surface, which reduces their ERs. On the other hand, dissolution of NiGA MOF in the solution decreases the accessible adsorption surface in the DmSPE step, which leads to deficient extraction of the target compounds and likewise induces lower ERs. To investigate the impact of pH on the ER values of the procedure, different pH values were adjusted, including 3, 5, 6, 7, 8, 9, and 10. It was seen (data not shown here) that pH values of 8 and 7, which represent the pH of the Na2SO4-dissolved solution and the neutral pH, resulted in the highest ERs. So, the process was carried out without pH alteration.
3.2.5. Optimization of desorption solvent type and volume. In order to streamline the desorption process, low boiling point organic solvents including DE, TBME, PE, n-pentane, and carbon disulfide (500 μL of each, separately) were applied to transfer the analytes from the NiGA MOF surface into the organic solvents. After desorption, the analyte-containing solvents were subjected to TAE under a laboratory hood. Fig. 6 demonstrates the ERs resulting from each desorption solvent. Except in the case of DNBP, DE acts as the best desorption solvent among the tested ones for the surveyed analytes. Moreover, DE has the lowest boiling point among the investigated solvents, which eases its evaporation and requires a minimum temperature to fulfill the evaporation; this is easier and more economical, and results in higher ER values. So, the volume of DE was evaluated in the next step. For this aim, 300, 500, 700, 1000, 1200, and 1500 μL of DE were tested for proper desorption of the analytes. Fig. 7 shows the ERs obtained with the mentioned DE volumes. It is seen that 700 μL of DE results in the highest ER values for most of the analytes. Using less than 700 μL of DE leads to lower ERs, owing to inefficient desorption of the analytes due to the lack of DE volume. With higher DE volumes, the ERs also decrease. This stems from a deficiency in the subsequent dissolution of the residues: when more than 700 μL of desorption solvent is used, the solvent level in the conical-bottom test tube increases. TAE evaporates the solvent, and as DE leaves the tube the analytes remain on its inner perimeter. With 1000, 1200, and 1500 μL of DE, 10 μL of the elution solvent cannot dissolve the analytes from the higher levels of the tube via vortexing; with 700 μL of the desorption solvent, the elution solvent can effectively dissolve the residues at the perimeter of the tube. So, 700 μL of DE was chosen to continue the optimization steps.
3.2.6. Optimization of vortexing time in the desorption step. Another parameter that determines the efficiency of desorption is vortexing time. This parameter was evaluated by implementing 0.5, 1.0, 3.0, and 5.0 min of vortexing. The results are shown in Fig. 8. It is seen that 3 min of vortexing is sufficient to reach proper desorption, and increasing the vortexing time has no positive effect on the ERs. So, 3 min of vortexing was implemented in the desorption step. Although increasing the bath temperature enhances the evaporation rate, it leads to analyte loss through their susceptibility to evaporation at higher temperatures. Also, a bath temperature of 35 °C is close to the boiling point of DE, and it is economical and energy-saving, too. So, 35 °C was set as the water bath temperature for the evaporation of the desorption solvent.
3.2.8. Selection of the elution solvent type and volume. After the complete evaporation of DE, 15, 10, and 15 μL of 1,1,1-TCE, 1,2-DBE, and carbon tetrachloride, respectively, were used to elute the inner perimeter of the tube containing the analytes. A collected phase of 10 μL was obtained for each tested solvent. Fig. 9 illustrates the efficiencies of the elution solvents. Although there is no significant difference among the obtained ERs for the analytes using the applied solvents, 1,2-DBE has priority over the other two solvents, resulting in higher ERs in the cases of BHT and DNBP and requiring less organic solvent for dissolving the residues. So, 1,2-DBE was chosen as the elution solvent. Then, the volume of 1,2-DBE was evaluated by testing 10, 15, and 20 μL. The obtained data (not shown here) showed decreasing EFs with increasing elution solvent volume, which stems from the dilution effect that occurs at higher volumes. So, 10 μL of 1,2-DBE was selected.
3.2.9. Optimization of vortexing time in the elution step. Vortexing the elution solvent creates a μL-level eddy of 1,2-DBE, which induces the transfer of the target compounds from the inner perimeter of the test tube into the organic solvent. In order to reach the optimum vortexing conditions in the elution step, 1, 2, 3, 4, and 5 min of vortexing were implemented. The results (data not shown here) demonstrated the sufficiency of 3 min of vortexing; implementation of more than 3 min of vortexing did not enhance the ERs of the analytes. So, 3 min of vortexing was selected.
Fig. 9 Selection of the elution solvent type. Extraction conditions: the same as those used in Fig. 8, except that 3 min vortexing was selected for the desorption step.
Validation of the developed method
The analytical figures of merit obtained in this study for the extraction of the analytes are presented in Table 2. Different values including the linear range (LR), relative standard deviation (RSD), LOQ, LOD, coefficient of determination (r2), ER, and EF are presented in the table and discussed here. The obtained LRs were 2.64-600 μg L−1 for BHT, 5.74-1000 μg L
Analysis of real samples
In order to connect the accomplished optimizations and the calibrations drawn in the aqueous medium with the matrices of the real samples, relative recovery data were calculated. Relative recovery equals the ratio of an analyte's peak area at a specific concentration extracted from the real sample to the same term extracted from deionized water, multiplied by 100%. The relative recovery data for the extraction of the target compounds from the surveyed real samples are consolidated in Table 3. All the calculated relative recoveries were in the acceptable range. Fig. 10 shows three GC-FID chromatograms: the direct injection of a 250 mg L−1 standard solution of the compounds of interest, the extracted aqueous solution with a concentration of 250 μg L−1 with respect to each analyte, and the extracted bottled water sample. None of the analytes were detected in the samples.
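The relative recovery defined above compares the peak area of a spiked real sample with that of the same spike in deionized water. A minimal sketch follows; the peak areas are made-up illustration values, not data from Table 3.

```python
def relative_recovery(peak_area_real, peak_area_di_water):
    """Relative recovery (%) = area(spiked real sample) / area(spiked DI water) * 100."""
    return peak_area_real / peak_area_di_water * 100.0

# Hypothetical peak areas for one analyte spiked at the same concentration
# in a real sample and in deionized water:
rr = relative_recovery(9.4e4, 1.0e5)
print(round(rr, 1))  # 94.0
```

A value near 100% indicates a negligible matrix effect for that sample.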
Comparison of the method with similar approaches
To reveal the differences among some similar extraction procedures and to observe the advantages of the methods over each other, Table 4 was compiled.
The interaction mechanism between NiGA MOF and analytes
According to the successful extraction of the surveyed compounds from the aqueous media using the MOF, the discussion of the adsorption mechanism can be interesting.
Based on the chemical structure of BHT, hydrogen bonds can play a significant role in the adsorption of BHT onto NiGA MOF, through interactions between the hydrogen atom of BHT and the oxygen atoms in the MOF ligand. Moreover, π-π stacking occurs between the conjugated cyclic sections of the MOF ligand and BHT, DIBP, DNBP, and DEHP, and between the gallate section of NiGA MOF and the double bonds of DEHA. Nonpolar-nonpolar interactions also take place between the organic structure of the MOF and the compounds of interest. Furthermore, the organic nature of the analytes propels their adsorption onto the MOF surface, given their low solubility in the aqueous medium. The structural similarity between the MOF ligand and BHT, DIBP, DNBP, and DEHP, based on the shared cyclic section, also strengthens the adsorption affinity. The branched organic structures of the target compounds increase their adsorption affinity onto the MOF structure by enhancing their organic character. So, based on the discussed points, it can be inferred that NiGA MOF can successfully extract the surveyed analytes from the aqueous media.
Conclusions
For the first time in this study, NiGA MOF, a green bio-MOF synthesized using nickel, gallic acid, and water, was applied to develop a method for the extraction of some plasticizers and BHT. NiGA MOF was characterized using SEM, BET, XRD, EDX, and FTIR analyses. XRD proved the crystallinity, FTIR demonstrated the creation of metal-organic bonds, and EDX showed no extra elemental peaks, denoting the purity of the final product. A TAE approach was adopted using low boiling point desorption solvents. Using DE as the desorption solvent streamlined the preconcentration by reducing the evaporation time and temperature. Bottled water, tap water, and rainwater samples were chosen as the real samples of the study. Low bio-MOF weight (15 mg), μL-level utilization of organic solvents (700 μL of DE and 10 μL of 1,2-DBE), the greenness of the sorbent, the elimination of DLLME from the preconcentration approach, wide linear ranges (5.74-500 μg L−1), low LODs (0.80-1.74 μg L−1) and LOQs (2.64-5.74 μg L−1), appreciable ERs (61-98%), and high EF values (305-490) were the achievements of the developed method. In further studies, DmSPE-TAE can be adopted as a streamlined extraction and preconcentration method for pesticides, drugs, etc. in different matrices, owing to its low cost, ease of performance, and short application time. Also, different MOFs and bio-MOFs can be tested to observe their efficiencies for the extraction of the target compounds.
Fig. 3 Influence of salt type on the ERs of the analytes. Extraction conditions: the same as those used in Fig. 2, except that 15 mg of NiGA MOF was used.
Fig. 4 Optimization of the Na2SO4 concentration. Extraction conditions: the same as those used in Fig. 3, except that Na2SO4 was chosen as the salting-out agent.
Fig. 5 Optimization of vortexing time in the adsorption step. Extraction conditions: the same as those used in Fig. 4, except that 15%, w/v, Na2SO4 was selected.
3.2.7. Optimization of the water bath temperature. To observe the efficiency of the water bath temperature, different temperatures including 35, 55, 75, and 95 °C were set, and their effects on the obtained ERs were evaluated. The outcome of the experiments (data not shown here) demonstrated the priority of 35 °C over the other tested temperatures.
Fig. 6 Selection of the desorption solvent type. Extraction conditions: the same as those used in Fig. 5, except that 5 min vortexing was selected.
Fig. 7 Selection of the DE volume. Extraction conditions: the same as those used in Fig. 6, except that DE was used as the desorption solvent.
Fig. 8 Optimization of vortexing time in the desorption step. Extraction conditions: the same as those used in Fig. 7, except that 700 μL of DE was used.
−1 for DIBP, 3.46-1000 μg L−1 for DNBP, 4.72-700 μg L−1 for DEHA, and 5.02-500 μg L−1 for DEHP. The r2 values ranged from 0.993 to 0.998. The ER values, denoting the migration of the analytes from the aqueous solution into the final organic phase, were in the range of 61-98%. The EFs, representing the preconcentration of the target compounds, ranged from 305 to 490. According to the EFs, low LODs (0.80-1.74 μg L−1) and LOQs (2.64-5.74 μg L−1) were recorded for the method. The obtained RSDs ranged from 3.7 to 5.0% for intra-day (n = 5) and from 4.6 to 6.4% for inter-day precisions, recorded by extracting the analytes at a concentration of 50 μg L−1 (each). Appreciable ERs, high EFs, and low RSD values, besides the low NiGA MOF weight used in the development of the method, are the highlights of the research.
Table 4 Comparison of the NiGA MOF-based method with some similar approaches. a Limit of detection (μg L−1). b Limit of quantification (μg L−1). c Linear range (μg L−1).
Table 1
Chemical structures and physicochemical properties of the surveyed target compounds
Table 2
The obtained figures of merit for the developed method based on NiGA MOF
Table 3
Study of matrix effect in the surveyed samples spiked at different concentrations. © 2023 The Author(s). Published by the Royal Society of Chemistry. RSC Adv., 2023, 13, 30378-30390.
Table 4 consolidates the figures of merit for the developed method and the previously developed ones. The LOD and LOQ values are comparable with those of previous studies, except for the ones in which a mass spectrometer (MS) was applied; MS is inherently more selective and sensitive than FID. Except for two methods, the LRs of this study are wider than the others. The r2 values, representing the linearity of the calibration curves, are comparable with the examples given in the table. The RSD values are lower than those of most of the compared methods. Unfortunately, most of the developed methods suffer from not reporting the ER and EF values. Appreciable ERs and high EFs are also highlights of the developed NiGA MOF-based method.
Oocytes maintain ROS-free mitochondrial metabolism by suppressing complex I
Oocytes form before birth and remain viable for several decades before fertilization1. Although poor oocyte quality accounts for most female fertility problems, little is known about how oocytes maintain cellular fitness, or why their quality eventually declines with age2. Reactive oxygen species (ROS) produced as by-products of mitochondrial activity are associated with lower rates of fertilization and embryo survival3–5. Yet, how healthy oocytes balance essential mitochondrial activity with the production of ROS is unknown. Here we show that oocytes evade ROS by remodelling the mitochondrial electron transport chain through elimination of complex I. Combining live-cell imaging and proteomics in human and Xenopus oocytes, we find that early oocytes exhibit greatly reduced levels of complex I. This is accompanied by a highly active mitochondrial unfolded protein response, which is indicative of an imbalanced electron transport chain. Biochemical and functional assays confirm that complex I is neither assembled nor active in early oocytes. Thus, we report a physiological cell type without complex I in animals. Our findings also clarify why patients with complex-I-related hereditary mitochondrial diseases do not experience subfertility. Complex I suppression represents an evolutionarily conserved strategy that allows longevity while maintaining biological activity in long-lived oocytes.
Human primordial oocytes are formed during fetal development and remain dormant in the ovary for up to 50 years. Despite a long period of dormancy, oocytes retain the ability to give rise to a new organism after fertilization. Decline in oocyte fitness is a key contributor to infertility with age 2 . However, little is known about how oocytes maintain cellular fitness for decades to preserve their developmental potential, complicating efforts to understand the declining oocyte quality in ageing women.
Oocytes remain metabolically active during dormancy 6,7 , and thus must maintain mitochondrial activity for biosynthesis of essential biomolecules 8 . Yet, mitochondria are a major source of ROS, generating them as by-products of mitochondrial oxidative metabolism. Although ROS can function as signalling molecules 9 , at high concentrations ROS promote DNA mutagenesis and are cytotoxic. Indeed, ROS levels are linked to apoptosis and reduced developmental competence in oocytes and embryos 3-5 . However, the mechanisms by which oocytes maintain this delicate balance between mitochondrial activity and ROS production have remained elusive.
Mitochondrial ROS in early oocytes
Early human oocytes can be accessed only through invasive surgery into ovaries. Therefore, biochemical investigations into oocyte biology have historically been hindered by severe sample limitations. As a consequence, mitochondrial activity in primordial oocytes remains largely unstudied. Here we overcome challenges imposed by human oocytes by utilizing an improved human oocyte isolation protocol recently developed in our laboratory 6 , which we combine with a comparative evolutionary approach using more readily available Xenopus stage I oocytes (both referred to as early oocytes hereafter; Extended Data Fig. 1a,b). This approach allowed us to generate hypotheses using multi-species or Xenopus-alone analyses, and subsequently test those hypotheses in human oocytes.
We began our studies by imaging live early human and Xenopus oocytes labelled with various mitochondrial probes that quantify ROS levels. Neither Xenopus nor human early oocytes showed any detectable ROS signal, whereas mitochondria in somatic granulosa cells surrounding the oocytes exhibited ROS and served as positive controls (Fig. 1a-c and Extended Data Fig. 1c-g). ROS induction in oocytes also served as a positive control for live ROS probes (Extended Data Fig. 1h,i).
To distinguish between the possibilities that low ROS probe levels resulted from low ROS production or, alternatively, a high scavenging capacity to eliminate ROS, we treated Xenopus oocytes with menadione and assessed their survival (Extended Data Fig. 1j). Mild treatment with menadione promotes the formation of ROS (ref. 10 ) but does not affect survival negatively in cell lines and fruit flies 11,12 . However, most early oocytes (78.3%) died when they were left to recover overnight after menadione treatment, in contrast to what was observed for late-stage oocytes ( Fig. 1d and Extended Data Fig. 1j). Treatment with an antioxidant that quenches ROS was able to rescue oocyte survival (Fig. 1d). These results indicate that evasion of ROS damage in oocytes involves tight control of ROS generation, rather than a higher scavenging capacity of oocytes against ROS.
Mitochondrial respiration in oocytes
Using dyes that sense membrane potential (tetramethylrhodamine ethyl ester perchlorate (TMRE) and the cyanine dye JC-1), we found that mitochondria in human and Xenopus early oocytes exhibit lower membrane potentials compared to those of neighbouring granulosa cells, which served as positive controls (Fig. 2a,b and Extended Data Fig. 2a-d). Undetectable ROS levels and low membrane potential suggest that the mitochondrial electron transport chain (ETC) activity in early oocytes is either low or absent. To differentiate between these two possibilities, we measured respiration rate in Xenopus oocytes. Early oocytes stripped of granulosa cells exhibited a low basal respiration rate but a similar maximal respiration rate compared to those of growing oocytes ( Fig. 2c and Extended Data Fig. 2e,f). This respiration was efficiently coupled to ATP synthesis, resulting in an undetectable proton leak (Extended Data Fig. 2e). Therefore, we conclude that mitochondria in early oocytes have a functional ETC, with low activity.
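The basal and maximal respiration rates discussed above are derived from a standard sequential-inhibitor assay (oligomycin, an uncoupler, then rotenone plus antimycin A; see Methods). A minimal Python sketch of that arithmetic follows; all OCR values are invented for illustration only (chosen to mimic the early-oocyte profile of low basal respiration, near-zero proton leak and high maximal capacity) and are not measurements from this study.

```python
def respiration_params(baseline, oligo, fccp, rot_aa):
    """Derive respiration parameters from mean OCR values (pmol O2/min)
    measured after each sequential injection.

    baseline: OCR before any inhibitor
    oligo:    OCR after oligomycin (ATP synthase blocked)
    fccp:     OCR after uncoupler (maximal electron transport)
    rot_aa:   OCR after rotenone + antimycin A (non-mitochondrial)
    """
    non_mito = rot_aa
    basal = baseline - non_mito         # mitochondrial basal respiration
    proton_leak = oligo - non_mito      # oligomycin-insensitive mitochondrial rate
    atp_linked = basal - proton_leak    # respiration coupled to ATP synthesis
    maximal = fccp - non_mito           # uncoupled maximal capacity
    spare = maximal - basal
    return {"basal": basal, "proton_leak": proton_leak,
            "atp_linked": atp_linked, "maximal": maximal, "spare": spare}

# Invented values illustrating the early-oocyte profile described in the text
params = respiration_params(baseline=12.0, oligo=4.2, fccp=55.0, rot_aa=4.0)
```

With these toy numbers, nearly all basal respiration is ATP-linked and the proton leak is negligible, matching the coupled-respiration phenotype reported for early oocytes.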
To assess the importance of individual complexes of the oxidative phosphorylation (OXPHOS) machinery for oocyte health, we exposed Xenopus oocytes to inhibitors specific for each OXPHOS complex. We found that both early and late-stage oocytes died after treatment with inhibitors of complexes II, III, IV and V (malonate, antimycin A, KCN and N,N′-dicyclohexylcarbodiimide (DCCD), respectively). Although late-stage oocytes died after treatment with the complex I inhibitor rotenone, 78% of early oocytes survived exposure to rotenone ( Fig. 2d and Extended Data Fig. 2g). The insensitivity of early oocytes to complex I inhibition indicates that they do not utilize complex I as an essential entry port for electrons.
Mitochondrial proteome in oocytes
Mitochondria in early oocytes have an apparent lack of ROS, low membrane potential, low basal respiration rates and rotenone resistance in culture. We next investigated the mechanistic basis of this unusual mitochondrial physiology.
To do this, we purified mitochondria from early and late-stage Xenopus oocytes isolated from wild-type outbred animals, and performed proteomics using isobaric-tag-based quantification including muscle mitochondria as a somatic cell control (Extended Data Fig. 3a). Our efforts identified 80% of all known mitochondrial proteins (Extended Data Fig. 3b,c and Supplementary Table 1). Most ETC subunits showed a lower absolute abundance in early oocytes compared to that in late-stage oocytes (Fig. 3a), and to muscle (Extended Data Fig. 3d), which is expected owing to the presence of fewer cristae in mitochondria of early oocytes 13-15 and compatible with their NADH levels 16 . In support of our findings with the ETC inhibitors ( Fig. 2d and Extended Data Fig. 2g), the depletion of complex I in early oocytes was the most pronounced of all ETC complexes ( Fig. 3a and Extended Data Fig. 3e). We reinforced this result by repeating proteomics with heart, liver and white adipose tissues (Extended Data Fig. 3f -h and Supplementary Table 2).
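Per-complex depletion of the kind described above is commonly summarized as a log2 fold-change of subunit abundances between stages. The sketch below illustrates that calculation with invented, arbitrary subunit intensities (not data from this study); the complex names are the only assumption carried over from the text.

```python
import math

# Hypothetical normalized subunit intensities (arbitrary units) for
# early vs late-stage oocyte mitochondria; values are illustrative only.
etc = {
    "CI":  {"early": [1.0, 0.8, 0.9], "late": [20.0, 18.0, 22.0]},
    "CII": {"early": [5.0, 6.0],      "late": [9.0, 10.0]},
    "CV":  {"early": [7.0, 8.0],      "late": [15.0, 14.0]},
}

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def complex_log2_fc(data):
    """Median per-complex log2(early/late) ratio across subunits."""
    return {c: median([math.log2(e / l) for e, l in zip(v["early"], v["late"])])
            for c, v in data.items()}

fc = complex_log2_fc(etc)
most_depleted = min(fc, key=fc.get)
```

With these toy numbers, complex I has the most negative median log2 fold-change, i.e. the strongest disproportionate depletion, mirroring the pattern reported for early oocytes.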
Furthermore, among the most abundant proteins in the mitochondria of early oocytes were mitochondrial proteases and chaperones ( Fig. 3b and Extended Data Figs. 3i,j and 4a). These proteins are upregulated after the activation of the mitochondrial unfolded protein response (UPR mt ) 17-19 , which is often triggered by an imbalance of ETC subunits in mitochondria. Consistent with an active UPR mt (ref. 20 ), nuclear transcripts encoding complex I subunits were downregulated in early oocytes whereas mitochondrially encoded transcripts of complex I did not show significant changes compared to those of late-stage oocytes (Extended Data Fig. 3k,l).
We next examined whether complex I subunits were also depleted in human oocytes. Early oocytes and ovarian somatic cells were isolated from ovarian cortices of patients, and analysed by label-free proteomics.

Fig. 1 | a, The ROS level was measured using MitoTracker Red CM-H2XRos (H2X), a reduced mitochondrial dye that does not fluoresce until it is oxidized by ROS. The boxed area is magnified in the top right image. Xenopus granulosa cells were imaged at the basal plane of the oocyte. DIC, differential interference contrast. Scale bars, 15 µm (human oocytes), 50 µm (Xenopus oocytes), 3 µm (human granulosa cells) and 10 µm (Xenopus granulosa cells). b,c, Quantification of the mean fluorescence intensity (MFI) of H2X in the oocyte and in the population of granulosa cells surrounding the equatorial plane of the oocyte for human (b) and Xenopus (c) oocytes. The data represent the mean ± s.e.m. of three biological replicates, shown in different colours. **P = 0.0001 and ***P = 4.13 × 10−11 using a two-sided Student's t-test. d, Overnight survival of oocytes at the indicated stages of oogenesis after treatment with menadione, N-acetyl cysteine (NAC) or the combination of both (see Extended Data Fig. 1j for experimental design). At least ten oocytes were incubated per condition. The data represent the mean ± s.e.m. across four biological replicates. *P = 1.94 × 10−9, **P = 3.77 × 10−18 and ***P = 2.37 × 10−19 compared with the untreated condition using a two-sided Student's t-test with Šidák-Bonferroni-adjusted P values for multiple comparisons.
Article
We identified 40% of all known mitochondrial proteins (Supplementary Table 3). The upregulation of proteins related to UPR mt was conserved in human early oocytes, and further confirmed with immunofluorescence ( Fig. 3c and Extended Data Fig. 4b). An analysis of the OXPHOS machinery comparing oocytes and ovarian somatic cells revealed that, in line with the Xenopus data, many complex I subunits were either at very low levels or not identified in human oocytes (Fig. 3d,e and Extended Data Fig. 5a).
In conclusion, our proteomic characterization of mitochondria revealed an overall reduction of ETC subunits in early oocytes of human and Xenopus, with complex I levels exhibiting the strongest disproportionate depletion.
Absence of complex I in early oocytes
Taken together, the results of our proteomics and survival experiments suggest that both early human and Xenopus oocytes remodel their ETC to decrease complex I levels to an extent that complex I becomes unnecessary for survival. This result is unexpected, because no other animal cell type with functioning mitochondria has been shown to be able to dispense with complex I in physiological conditions, and only one other multicellular eukaryote, the parasitic plant mistletoe, is known to dispense with complex I entirely 21 . Therefore, we directly assayed complex I assembly status and function in early oocytes, using colorimetric, spectrophotometric and metabolic assays. We first investigated the assembly status of complex I in oocytes, which is tightly linked to its function 22 . Complex I is an approximately 1-MDa complex composed of 14 core and 31 accessory subunits in humans, some of which are essential for its assembly and function 23 . We examined our proteomics data for any specific downregulation of a particular complex I module in early oocytes. However, levels of subunits belonging to the four major functional modules of complex I, namely the N, Q, PP and PD modules, were not significantly different between Xenopus early and late-stage oocytes (Extended Data Fig. 6a). The size of complex I in native protein gels has been used as a tool to reveal the assembly status of the complex 22,24,25 . Thus, we compared mitochondria isolated from early oocytes to those from late-stage oocytes, and from muscle tissue of Xenopus and mice as somatic cell controls, by blue native polyacrylamide gel electrophoresis (BN-PAGE) followed by complex I in-gel activity assays or by an immunoblot against a complex I core subunit, Ndufs1. Notably, in early oocytes, complex I was neither fully assembled nor active in the in-gel assay (Fig. 4a and Extended Data Fig. 6b,c).
Denaturing SDS-PAGE gels also verified comparable mitochondrial loading and very low protein levels of complex I subunits in early oocytes (Extended Data Fig. 6d). To rule out any possibility of immunoblotting detection problems, areas corresponding to assembled complex I and complex II from BN-PAGE gels were analysed by proteomics (Extended Data Fig. 6e). Although complex II subunits were detected at comparable levels in all samples, most complex I subunits were not detected in early oocytes (Extended Data Fig. 6f and Supplementary Table 4). Thus, we conclude that complex I is not fully assembled in early oocytes.
In-gel activity assays detect the presence of flavin mononucleotide (FMN)-containing (sub)assemblies of complex I, but do not detect the physiological activity of the assembled complex. Therefore, we measured NADH:CoQ oxidoreductase activity in isolated mitochondrial membranes from early and late-stage oocytes, as well as muscle tissue, to measure substrate consumption by complex I, which reflects physiological activity of complex I (Fig. 4b). We also measured complex IV and citrate synthase activities to confirm the presence of mitochondrial activity in these samples. Complex IV and citrate synthase activities were detected in all three samples ( Fig. 4b and Extended Data Fig. 6g). However, complex I activity was absent in early oocyte samples, in contrast to the findings for late-stage oocyte samples and muscle samples (Fig. 4b).
Finally, to validate the absence of complex I in early oocytes, we checked the levels of FMN, an integral part of complex I in early and late-stage oocytes. Although levels of another flavin nucleotide, flavin adenine dinucleotide (FAD), were within a 2-fold range between these stages, FMN levels were about 200-fold higher in late-stage oocytes, compared to the low levels detected in early oocytes (Fig. 4c). The remarkable depletion of FMN is complementary evidence supporting complex I deficiency in early oocytes.
The absence of complex I could also explain the reduced activity of other ETC complexes in early oocytes by affecting the stability of supercomplexes 26 . Assessment of supercomplex distribution showed no supercomplex formation in early oocytes, in contrast to the findings in late-stage oocytes and muscle (Extended Data Fig. 6h,i). Thus, we conclude that the absence of complex I impedes the formation of supercomplexes, which might contribute to the overall reduction of ETC activity in early oocytes.
Complex I and ROS throughout oogenesis
We then reasoned that an absence of complex I, one of the main ROS generators in the cell, might be sufficient to explain the undetectable ROS levels in early oocytes 27 . Therefore, we studied the relationship between complex I abundance and ROS levels throughout oogenesis.
First, we investigated the assembly of complex I during oogenesis. Complex I activity was barely detectable in stage II oocytes, but peaked and plateaued in maturing (stage III) oocytes (Fig. 5a). We then assessed the survival of oocytes in the presence of rotenone throughout oogenesis. The overnight survival of oocytes in rotenone was consistent with their levels of assembled complex I: stage I and II oocytes survived in the presence of rotenone whereas maturing and mature oocytes died (Fig. 5b). Hence, we conclude that complex I is assembled and fully functional in maturing (stage III) and late-stage oocytes but absent in early oocytes.

(Displaced figure-legend fragment: n = 6; ***P = 6.92 × 10−9 and **P = 3.57 × 10−5 using a two-sided Student's t-test with Šidák-Bonferroni-adjusted P values for multiple comparisons.)
Second, we investigated whether the assembly of complex I throughout oogenesis was accompanied by the production of ROS in oocytes. The opacity of maturing Xenopus oocytes impedes the use of most fluorescent ROS markers. Therefore, we turned to known metabolic and protein 'sentinels' of ROS levels in cells and evaluated the redox state of glutathione 28-30 and mitochondrial peroxiredoxin 3 (Prdx3) in oocytes. We found that the ratio of reduced glutathione to oxidized glutathione was 20-fold higher in early oocytes compared to that in late-stage oocytes (Extended Data Fig. 7a), indicating a more reduced cellular redox state in early oocytes, consistent with their undetectable levels of ROS. Next, we checked the redox state of Prdx3 during oogenesis. Peroxiredoxins dimerize in the presence of peroxide, and thus the ratio of peroxiredoxin dimers to monomers correlates with the level of cellular peroxide 31-33 . Prdx3 dimerization increased throughout oogenesis, from negligible levels in early (stage I) oocytes to the highest measured level in late-stage (stage VI) oocytes (Fig. 5c and Extended Data Fig. 7b). Stage II oocytes, in which complex I activity is very low (Fig. 5a), showed a nonsignificant increase in the dimer/monomer ratio (Fig. 5c).
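Both redox sentinels above reduce to simple ratios of measured quantities. The following Python sketch uses invented densitometry and glutathione values purely to illustrate the arithmetic; only the 20-fold figure quoted above is a measured result, and none of these numbers reproduce it.

```python
def dimer_monomer_ratio(dimer_intensity, monomer_intensity):
    """Prdx3 dimer/monomer band-intensity ratio from gel densitometry;
    higher values indicate more peroxide-driven dimerization."""
    return dimer_intensity / monomer_intensity

# Invented band intensities (arbitrary units) illustrating the trend:
bands = {"stage I": (0.05, 1.0), "stage II": (0.08, 1.0), "stage VI": (0.9, 1.0)}
ratios = {s: dimer_monomer_ratio(d, m) for s, (d, m) in bands.items()}

def gsh_redox_fold(gsh_early, gssg_early, gsh_late, gssg_late):
    """Fold difference of the GSH/GSSG ratio, early vs late-stage oocytes."""
    return (gsh_early / gssg_early) / (gsh_late / gssg_late)
```

For example, hypothetical GSH/GSSG measurements of 100/1 (early) versus 50/10 (late) would give a 20-fold difference, the same order as the measured result.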
The timing of complex I assembly and the increase in ROS levels correlate: ROS start to build up as soon as complex I is assembled in oocytes. On the basis of these results, we speculate that the maturation of oocytes involves a slow, gradual transition to a metabolism that involves a functional complex I.
Combining the in vivo evidence with proteomics and biochemical assays in vitro, our results demonstrate that early oocytes avoid ROS by eliminating one of the main ROS generators in the cell, mitochondrial complex I. Complex I subunits are reduced to such low levels that complex I cannot be fully assembled, nor can its activity be detected in early oocytes. This reveals a new strategy used by Xenopus and most likely human oocytes to maintain a low-ROS-producing mitochondrial metabolism. Although quiescence is associated with ETC remodelling in Drosophila oocytes 34 , to our knowledge, vertebrate early oocytes are the first and only physiological cell type in animals that exist without a functional mitochondrial complex I.
Discussion
Here we have shown that dormancy involves survival with an inactive mitochondrial complex I. By shutting down complex I and keeping the rest of the OXPHOS system active, early oocytes keep their mitochondria polarized to support the synthesis of haeme, essential amino acids and nucleotides, while keeping their activity low to avoid ROS. Other quiescent cells, such as neuronal and haematopoietic stem cells, exhibit similarly low ROS levels, and reduced ETC activity 9,35 , raising the possibility that this regulatory mechanism might be utilized by other cell types. Furthermore, UPR mt is activated in early oocytes (Fig. 3b,c and Extended Data Fig. 4), probably in response to an imbalance of ETC complexes caused by the absence of complex I. Given that UPR mt activation itself is sufficient to increase the lifespan of Caenorhabditis elegans and mouse 17-19 , we speculate that complex I inhibition further enhances the longevity of oocytes through its downstream activation of UPR mt . The causal relationships between these interacting factors and oocyte lifespan remain a fascinating future direction to investigate.
Severe sample limitations prevent biochemical assays of human oocytes: around 30,000 donor ovaries would be required for one experiment to directly measure complex I function using current technologies. Ideally, future methodological developments will allow direct evaluation of complex I activity in human oocytes. It would also be interesting to investigate whether similar mechanisms apply in the oocytes of other mammals such as mice. Until then, we rely on proteomics, imaging and the activation of downstream pathways (UPR mt ), which suggest that complex I is also absent in human primordial oocytes. Moreover, the absence of complex I in early oocytes can also explain why complex-I-related mitochondrial pathologies (such as Leber's hereditary optic neuropathy) do not lead to subfertility or to selection against homoplasmic mitochondrial DNA mutations, as occurs in other types of ETC dysfunction 36-38 . As the oogenic mitochondrial bottleneck occurs in early oogenesis 39 , there would not be a selective pressure against mutations affecting an inactive complex I.
Our findings reveal yet another unique aspect of physiology that oocytes have evolved to balance their essential function of beginning life with the requirement for longevity. This raises the question whether complex I deficiency in primordial oocytes can be exploited for other purposes. Some cancers seen in young women are highly treatable; however, their treatment leads to a severe reduction of the ovarian reserve and reduced prospects of motherhood. Drugs against complex I exist, and are already proposed for cancer treatments 40 . Future studies will show whether repurposing complex I antagonists can improve chemotherapy-related infertility, and thus life quality of young female cancer survivors.
Online content
Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-022-04979-5. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Ethics
Ethical committee permission to work with primordial oocytes from human ovary samples was obtained from the Comité Étic d'Investigació Clínica CEIC-Parc de Salut MAR (Barcelona) and Comité Ético de Investigación Clínica-Hospital Clínic de Barcelona with approval number HCB/2018/0497. Written informed consent was obtained from all participants before their inclusion in the study.
Animals used in this study were housed in the Barcelona Biomedical Research Park, accredited by the International Association for Assessment and Accreditation of Laboratory Animal Care. Animal euthanasia was performed by personnel certified by the competent authority (Generalitat de Catalunya) and conformed to the guidelines from the European Community Directive 2010/63 EU, transposed into Spanish legislation on RD 53/2013 for the experimental use of animals.
Animal models
Xenopus laevis adult females of between 2 and 4 years old were purchased from Nasco and maintained in water tanks in the following controlled conditions: 18-21 °C, pH 6.8-7.5, O 2 4-20 ppm, conductivity 500-1,500 µs, ammonia <0.1 ppm. The C57BL/6J mice used in the experiments were purchased from Charles River Laboratories and maintained in the Animal Facility of the Barcelona Biomedical Research Park under specific-pathogen-free conditions at 22 °C with 40-60% humidity, in a 12 h light/dark cycle, and with access to food and water ad libitum. Female mice of 7 weeks of age were used for extracting muscle tissue.
Oocyte isolation and culture
Human primordial oocytes. Ovaries were provided by the gynaecology service of Hospital Clinic de Barcelona, from women aged 19 to 34 undergoing ovarian surgery and were processed as previously described 6 . Briefly, ovarian cortex samples were digested in DMEM containing 25 mM HEPES and 2 mg ml −1 collagenase type III (Worthington Biochemical, LS004183) for 2 h at 37 °C with occasional swirling. Individual cells were separated from tissue fragments by sedimentation, and collagenase was neutralized by adding 10% FBS (Thermo, 10270106). Follicles were picked manually under a dissecting microscope. All human oocyte imaging experiments were conducted in DMEM/F12 medium (Thermo, 11330-032) containing 15 mM HEPES and 10% FBS (Thermo, 10270106).
Xenopus oocytes. Ovaries were dissected from young adult (aged 3 to 5 years) female X. laevis that had undergone euthanasia by submersion in 15% benzocaine for 15 min. Ovaries were digested using 3 mg ml −1 collagenase IA (Sigma, C9891-1G) in Marc's modified Ringer's (MMR) buffer by gentle rocking until dissociated oocytes were visible, for 30 to 45 min. The resulting mix was passed through two sets of filter meshes (Spectra/Mesh, 146424 and 146426). All washes were performed in MMR. For live-imaging experiments with intact granulosa cells, oocytes were transferred to oocyte culture medium (OCM) 41 at this stage. For the rest of the experiments, oocytes were stripped of accompanying granulosa cells by treatment with 10 mg ml −1 trypsin in PBS for 1 min, followed by washes in MMR. Removal of granulosa cells was confirmed by Hoechst staining of a small number of oocytes.
Live-cell imaging
Human or Xenopus early oocytes were labelled in their respective culture medium (see above). Human oocytes were imaged using a 63× water-immersion objective (NA 1.20, Leica, 506346) with an incubation chamber maintained at 37 °C and 5% CO 2 . Frog oocytes were imaged using a 40× water-immersion objective (NA 1.10, Leica, 506357) in OCM at room temperature and atmospheric air, unless stated otherwise. All images were acquired using a Leica TCS SP8 microscope with the LAS X software (Leica, v3.5.5.19976). Mean fluorescence intensities in granulosa cells and oocytes were quantified using Fiji software.
ROS probes.
Oocytes and associated granulosa cells were incubated in 500 nM MitoTracker Red CM-H2Xros (Thermo, M7513) for 30 min, 5 µM MitoSOX Red (Thermo, M36008) for 10 min, or 5 µM CellROX for 30 min. Cells were then washed and imaged in 35-mm glass-bottom Mat-Tek dishes in culture medium, except for CellROX labelling, for which MMR was used for imaging to satisfy the manufacturer's instructions.
Oxygen consumption rate
Oxygen consumption rate (OCR) of Xenopus oocytes was measured using a Seahorse XFe96 Analyser (Agilent) with Seahorse Wave software (Agilent, v2.6). Granulosa-cell-stripped oocytes were placed in XFe96 culture plates immediately after their isolation in Seahorse XF DMEM medium pH 7.4 supplemented with 10 mM glucose, 1 mM pyruvate and 2 mM glutamine (Agilent; 103015-100, 103577-100, 103578-100 and 103579-100). A cartridge was loaded with concentrated inhibitor solution to achieve 5 µM oligomycin, 2 µM carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP) or a combination of 0.5 µM rotenone and 0.5 µM antimycin A. Mock medium injections were performed to account for inhibitor-independent decline in OCR. For each sequential injection, at least 4 measurement cycles were acquired, each consisting of 20 s mix, 90 s wait and 3 min measure, in at least 3 replicates. For basal and maximal respiration rates, assay-independent OCR decline was corrected, and non-mitochondrial respiration (resistant to the rotenone-antimycin mix) was subtracted. OCR measurements for growing oocytes (stage III; with a diameter of 450-600 µm (ref. 42)) had to be performed statically because the probe of the analyser compressed and destroyed these large oocytes in long-term measurements. For growing (stage III) oocytes, OCR was acquired during 5 cycles per well, each cycle being 20 s mix, 90 s wait and 3 min measure, in at least 4 replicates. The well size imposed a technical limitation on the maximum number of oocytes per well (100 early and 8 growing oocytes); thus, respiration data were normalized to the total protein amount per sample.

Treatments with OXPHOS inhibitors

(Abcam, ab141229). Survival was assessed by counting the number of oocytes with intact morphology before and after treatments. Cell death in stage III to VI oocytes was recognized by the development of a mottling pattern in the pigmentation 43 . Images were taken with a Leica IC90 E stereoscope.

Early (stage I) oocytes were treated with 10 µM menadione (Sigma, M5625) or left untreated, for 2 h in OCM, and washed into fresh OCM. Untreated oocytes were labelled with wheat germ agglutinin 488 (Biotium, 29022-1) to mark their plasma membrane and mixed with menadione-treated oocytes in a glass-bottom MatTek dish 4 h after menadione was removed. The mixed population of oocytes was then labelled with MitoSOX and imaged. At least 50 stage I and II oocytes and at least 10 stage III and VI oocytes were treated with 10 µM menadione (Sigma, M5625) in the presence or in the absence of 10 mM N-acetyl cysteine (NAC) (Sigma, A9165). After 2 h, menadione was removed and N-acetyl cysteine was retained for an overnight incubation. Survival was determined by counting the number of oocytes immediately before menadione treatment (t = 0) and after 16 h in recovery.
Mitochondrial-enriched extracts
Mitochondrial-enriched fractions were obtained as described previously for gastrocnemius muscle and with minor adaptations for oocyte samples 44 . Freshly isolated early oocytes from Xenopus were lysed in mitochondria buffer (250 mM sucrose, 3 mM EGTA, 10 mM Tris pH 7.4), and spun at low speed to remove debris. The resulting supernatant was centrifuged at 20,000g for 20 min at 4 °C. Late-stage oocytes were spin-crashed, and yolk-free fraction was combined 1:1 with mitochondria buffer and centrifuged at 20,000g for 20 min at 4 °C to pellet mitochondria. Mitochondrial pellets from early and late-stage oocytes were resuspended in mitochondria buffer and subjected to DNase treatment for 10 min and proteinase K treatment for 20 min. Phenylmethylsulfonyl fluoride was added to stop proteolytic activity and samples were centrifuged again at 20,000g for 20 min at 4 °C. Protein concentration was estimated and aliquots of crude mitochondria were stored at −80 °C until use.
Spectrometric assessment of enzymatic activities of mitochondrial complexes
The specific activities of mitochondrial complex I, complex IV and citrate synthase were determined as described before with minor modifications 45 . Briefly, mitochondrial extracts were subjected to three freeze-thaw cycles in hypotonic buffer (10 mM Tris-HCl) before activity analysis using an Infinite M200 plate reader (Tecan) with Tecan i-control software (Tecan, v3.23) in black-bottom 96-well plates (Nunc) at 37 °C. For complex I NADH:CoQ activity assessment, reaction solutions (50 mM KP pH 7.5, 3 mg ml −1 BSA, 300 µM KCN and 200 µM NADH) with or without rotenone (10 µM) were distributed into each well first. Mitochondrial extracts were then added and NADH absorbance at 340 nm was measured for 2 min to establish baseline activity. The reaction was then started by the addition of ubiquinone (60 µM). NADH absorbance was recorded for 15 min every 15 s.
For complex IV activity assessment, reaction solutions (50 mM KP pH 7, 60 µM reduced cytochrome c) with or without KCN (600 µM) were distributed into each well first, and absorbance of reduced cytochrome c at 550 nm was recorded for 2 min to establish baseline oxidation. Mitochondrial extracts were then added and absorbance was measured for 15 min every 15 s.
For citrate synthase activity, reaction solution (100 µM Tris pH 8, 0.1% Triton X-100, 100 µM DTNB and 300 µM acetyl CoA) was distributed into each well first. Mitochondrial extracts were then added and absorbance at 410 nm was measured for 2 min to set the baseline; then the reaction was started by addition of the substrate oxaloacetic acid (500 µM). Production of TNB (yellow) was recorded by measuring the absorbance at 410 nm for 15 min every 15 s. Enzymatic assays were plotted with the baseline represented as 1 for simplicity.
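The three assays above share one calculation: the inhibitor-sensitive rate is the difference between absorbance slopes with and without the specific inhibitor, converted to a specific activity via the Beer-Lambert law. The sketch below assumes a 1 cm optical path and the standard millimolar extinction coefficients (about 6.22 mM−1 cm−1 for NADH at 340 nm, about 13.6 mM−1 cm−1 for TNB at 412 nm); the slope and protein values are invented for illustration, and real plate-reader path lengths must be calibrated per well volume.

```python
def specific_activity(slope_total, slope_inhibited, epsilon_mM, path_cm, protein_mg_per_ml):
    """Inhibitor-sensitive specific activity (nmol min^-1 mg^-1) from
    absorbance slopes given in delta-A per minute.

    epsilon_mM: millimolar extinction coefficient (mM^-1 cm^-1)
    path_cm:    optical path length of the well (cm)
    """
    sensitive = abs(slope_total - slope_inhibited)        # remove inhibitor-resistant background
    rate_mM_per_min = sensitive / (epsilon_mM * path_cm)  # Beer-Lambert: A = epsilon * c * l
    # mM min^-1 equals umol ml^-1 min^-1; divide by protein and convert to nmol
    return rate_mM_per_min * 1000.0 / protein_mg_per_ml

# Example: rotenone-sensitive NADH:CoQ (complex I) activity with invented slopes
ci_activity = specific_activity(
    slope_total=-0.060,      # NADH consumption without rotenone
    slope_inhibited=-0.010,  # residual slope with rotenone
    epsilon_mM=6.22,         # NADH at 340 nm
    path_cm=1.0,
    protein_mg_per_ml=0.5,
)
```

A sample is scored as lacking detectable complex I activity when the slopes with and without rotenone coincide, which is the outcome reported above for early-oocyte mitochondria.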
Denaturing SDS gel electrophoresis
Oocytes were collected after isolation, frozen in liquid nitrogen and kept at −80 °C until further use. Samples were processed as described previously 46 . Gastrocnemius total homogenates were obtained as described previously 47 . HeLa cells were lysed in RIPA buffer (50 mM Tris-HCl, 150 mM NaCl, 1% Nonidet P-40, 0.1% SDS and 1 mM EDTA, supplemented with protease inhibitor cocktail (Complete Roche Mini, 1 tablet per 50 ml)) and spun at 20,000g to eliminate cell debris. Oocyte lysates for determination of the redox state of peroxiredoxin were protected against artefactual oxidation by alkylation as described previously 48 , but in OCM. Cell lysates or mitochondrial-enriched fractions were resolved by SDS-PAGE using 4-12% NuPAGE Bis-Tris gels.
BN-PAGE electrophoresis, and in-gel activity assays
Mitochondrial content in samples of different cell types (different stages of oocytes and muscle tissue) was first assessed by western blotting for their citrate synthase levels ( Supplementary Figs. 1b and 2c,d).
Next, similar amounts of mitochondrial fractions were solubilized in 1% n-dodecyl-β-d-maltoside (DDM) or digitonin, and were resolved in the native state using NativePAGE 3-12% Bis-Tris (Thermo, BN1001BOX) gradient gels as described before 49 . The left part of the gel was cut and stained with Coomassie (InstantBlue, Sigma) after BN-PAGE to reveal the native molecular weight marker proteins (Supplementary Figs. 1a,b and 2a,c,d). Complex I and complex IV in-gel activity assays were performed as described previously 24 . Briefly, immediately after the run, BN-PAGE gels were incubated in assay solution: for complex I, in 2 mM Tris pH 7.4, 0.1 mg ml −1 NADH and 2.5 mg ml −1 nitro blue tetrazolium chloride (NBT) to assess NADH:FMN electron transfer, denoted by the appearance of a dark purple colour; and for complex IV, in 10 mM phosphate buffer pH 7.4, 1 mg ml −1 cytochrome c and 0.5 mg ml −1 3,3′-diaminobenzidine (DAB) in the presence or absence of 0.6 mM KCN to assess specific cytochrome c oxidation, denoted by the appearance of a dark brown colour. The intensities of reduced NBT were normalized to citrate synthase levels of the same samples, detected by SDS-PAGE followed by immunoblotting. Gels were imaged using an Amersham Imager (GE Healthcare; Supplementary Figs. 1 and 2). Intensity measurements were performed using Fiji software.
Measurement of FMN and glutathione
Samples were prepared using the automated MicroLab STAR system from Hamilton Company in the presence of recovery standard for quality control by Metabolon. After protein precipitation in methanol, metabolites were extracted and analysed by ultrahigh-performance liquid chromatography with tandem mass spectrometry by negative ionization. Raw data were extracted, peak-identified and processed for quality control using Metabolon's hardware and software.
Statistics and reproducibility
Sample sizes were chosen based on published studies to ensure reliable statistical testing and to account for variability among outbred populations. Experimental limitations were also taken into account, such as the number of primordial oocytes that could be obtained from human ovaries. All experiments were performed on isolated oocytes or tissues. Sample randomization was performed in two ways. First, all outbred frogs used in this study were chosen by blinded animal facility personnel without our knowledge. Second, all isolated oocytes or tissue samples were first grouped together and then randomly distributed to different experimental groups. Blinding during data collection was not required as standard experimental procedures were applied for different groups, such as western blots and immunohistochemistry. Blinding during data analysis was performed in oocyte survival experiments by involving multiple lab members for analysing blinded datasets. Blinding for the analysis of other experiments was not required since the different experimental groups were analysed using the same parameters. All data are expressed as mean ± s.e.m. A simple linear regression was performed to fit a model between the mitochondrial protein abundances of primordial follicle and ovarian somatic cell samples (Fig. 3d,e). Unpaired two-tailed Student's t-test was used in all other analyses; P values are specified in figure legends, and those <0.05 were considered significant. Multiple t-tests were used in Figs. 1d, 4c and 5b,c and Extended Data Figs. 2c,d, 3k,l and 6b, and were corrected by the Šidák-Bonferroni method using GraphPad Prism. In Xenopus proteomics experiments, q values were calculated as adjusted P values and significance was considered for q value < 0.05 for comparing protein levels. A fold-change heatmap was generated using JMP (version 13.2) software. For Extended Data Fig. 6f, we excised the indicated bands in Extended Data Fig.
6e from one of three gels represented in Fig. 4a; gel-identification MS was performed once.
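The Šidák-Bonferroni adjustment applied to the multiple t-tests has a closed form, p_adj = 1 − (1 − p)^m for m tests. A minimal sketch of that adjustment (this is the plain Šidák step only; GraphPad Prism's implementation may additionally apply a Holm-style step-down ordering):

```python
def sidak_adjust(pvals):
    """Šidák-adjusted p-values for a family of m tests:
    p_adj = 1 - (1 - p)**m, capped at 1.0."""
    m = len(pvals)
    return [min(1.0, 1 - (1 - p) ** m) for p in pvals]

adjusted = sidak_adjust([0.01, 0.5])  # m = 2 tests
```

For a single test the adjustment is a no-op; as m grows, each raw p-value is inflated to keep the family-wise error rate at the nominal level.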
MS
Sample preparation. For isobaric-tag-based quantification for Xenopus, mitochondrial extracts from early (stage I) oocytes, late (stage VI) oocytes, gastrocnemius muscle, heart, liver and white adipose tissues were processed in two parallel experiments: stage I, stage VI and muscle in triplicates; and stage I, heart, liver and white adipose tissue in duplicates. Samples were quantified and 100 µg of each sample was processed with slight modifications from ref. 46 . In brief, methanol-precipitated proteins were dissolved in 6 M guanidine hydrochloride (GuaCl). Samples were then digested with LysC (20 ng µl −1 ) in 2 M GuaCl overnight at room temperature. The next morning, samples were further diluted to 0.5 M GuaCl and digested with trypsin (10 ng µl −1 ) and further LysC (20 ng µl −1 ) for 8 h at 37 °C. Later, samples were speed-vacuumed, and the resulting pellet was resuspended in 200 mM EPPS pH 8.0. Ten-microlitre volumes of tandem mass tag (TMT) stock solutions (20 µg µl −1 in acetonitrile) were added to 50 µl of samples, and samples were incubated 3 h at room temperature. The TMT reaction was quenched with a 0.5% final concentration of hydroxylamine. The samples were combined in one tube, acidified by 10% phosphoric acid, and subjected to a MacroSpin C18 solid-phase extraction (The Nest Group) to desalt and isolate peptides. TMT mixes were fractionated using basic pH reversed-phase fractionation in an Agilent 1200 system. Fractions were desalted with a MicroSpin C18 column (The Nest Group) and dried by vacuum centrifugation 50 .
For label-free proteomics for human oocytes, human primordial follicles and ovarian somatic cells were collected from two individuals who underwent ovarian surgery. Samples were dissolved in 6 M GuaCl pH 8.5, diluted to 2 M GuaCl and digested with LysC (10 ng µl −1 ) overnight. Samples were further diluted down to 0.5 M GuaCl and digested with LysC (10 ng µl −1 ) and trypsin (5 ng µl −1 ) for 8 h at 37 °C. Samples were acidified by 5% formic acid and desalted with home-made C18 columns.
For detection of complex I and II subunits from BN-PAGE gels, gel bands were destained, reduced with dithiothreitol, alkylated with iodoacetamide and dehydrated with acetonitrile for trypsin digestion. After digestion, peptide mix was acidified with formic acid before analysis through liquid chromatography with MS/MS.
Chromatographic and MS analysis. TMT and label-free samples were analysed using an Orbitrap Eclipse mass spectrometer (Thermo) coupled to an EASY-nLC 1200 (Thermo). Peptides were separated on a 50-cm C18 column (Thermo) with a gradient from 4% to 32% acetonitrile in 90 min. Data acquisition for TMT samples was performed using a Real Time Search MS3 method 51 . The scan sequence began with an MS1 spectrum in the Orbitrap. In each cycle of data-dependent acquisition analysis, following each survey scan, the most intense ions were selected for fragmentation. Fragment ion spectra were produced through collision-induced dissociation at a normalized collision energy of 35% and they were acquired in the ion trap mass analyser. MS2 spectra were searched in real time with data acquisition using the PHROG database 52 with added mitochondrially encoded proteins. Identified MS2 spectra triggered the submission of MS3 spectra that were collected using the multinotch MS3-based TMT method 53 .
Label-free samples were acquired in data-dependent acquisition mode and full MS scans were acquired in the Orbitrap. In each cycle of data-dependent acquisition analysis, the most intense ions were selected for fragmentation. Fragment ion spectra were produced through high-energy collision dissociation at a normalized collision energy of 28%, and they were acquired in the ion trap mass analyser.
Gel bands were analysed using a LTQ-Orbitrap Velos Pro mass spectrometer (Thermo) coupled to an EASY-nLC 1000 (Thermo). Peptides were separated on a 25-cm C18 column (Nikkyo Technos) with a gradient from 7% to 35% acetonitrile in 60 min. The acquisition was performed in data-dependent acquisition mode and full MS scans were acquired in the Orbitrap. In each cycle, the top 20 most intense ions were selected for fragmentation. Fragment ion spectra were produced through collision-induced dissociation at a normalized collision energy of 35%, and they were acquired in the ion trap mass analyser.
Digested bovine serum albumin was analysed between each sample and QCloud (ref. 54 ) was used to control instrument performance.
Data analysis. Acquired spectra were analysed using the Proteome Discoverer software suite (v2.3, Thermo) and the Mascot search engine (v2.6, Matrix Science 55 ). Label-free data were searched against the SwissProt Human database. Data from the gel bands were searched against a custom PHROG database 52 that includes 13 further entries that correspond to mitochondrially encoded proteins for the Xenopus samples and the SwissProt mouse database for the mouse samples. TMT data were searched against the same custom 'PHROG' database. False discovery rate in peptide identification was set to a maximum of 5%. Peptide quantification data for the gel bands and the label-free experiments were retrieved from the 'Precursor ion area detector' node. The obtained values were used to calculate an estimation of protein amount with the top3 area, which is the average peak area of the three most abundant peptides for a given protein. For the TMT data, peptides were quantified using the reporter ion intensities in MS3. Reporter ion intensities were adjusted to correct for the isotopic impurities of the different TMT reagents according to the manufacturer's specifications. For final analysis, values were transferred to Excel. For all experiments, identified proteins were selected as mitochondrial if they were found in MitoCarta 3.0 (ref. 56 ). MS3 spectra with abundance less than 100 or proteins with fewer than 2 unique peptides were excluded from the analysis. Each TMT channel was normalized to total mitochondrial protein abundance. A total of 926 mitochondrial proteins were identified (and 807 quantified) in 3 biological replicates from wild-type outbred animals, representing 80% of known mitochondrial proteins (Supplementary Table 1 and Extended Data Fig. 3b). 
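The "top3 area" protein-amount estimate and the exclusion filters stated above can be sketched as follows (function names are ours; the thresholds of 2 unique peptides and MS3 abundance 100 are those given in the text):

```python
def top3_abundance(peptide_areas):
    """Estimate protein amount as the mean peak area of the three most
    abundant peptides (or fewer, if fewer were quantified)."""
    top = sorted(peptide_areas, reverse=True)[:3]
    return sum(top) / len(top)

def keep_protein(n_unique_peptides, ms3_abundance):
    """Exclusion filter described in the text: drop proteins with fewer
    than 2 unique peptides or MS3 abundance below 100."""
    return n_unique_peptides >= 2 and ms3_abundance >= 100
```

A per-channel normalization to total mitochondrial protein abundance then amounts to dividing each protein's reporter intensity by the channel's summed mitochondrial intensity.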
Although the mitochondrial proteome in diverse cell types could be quite different 57 , we found comparable levels of mitochondrial housekeeping proteins (such as the import complexes TIMMs and TOMMs) across different maturity stages (Extended Data Fig. 3c and Supplementary Table 1), enabling us to compare and contrast changes in other pathways.
For human somatic cell samples, we analysed three dilutions: the 1× reference had a similar level of protein loading to that of the primordial follicle sample (0.55 µg total protein); a twofold dilution (0.25 µg total protein); and a fivefold dilution (0.1 µg total protein). In scatter plots (Fig. 3d,e), we estimated differences in mitochondrial complex I protein abundance using the twofold somatic cell dilution, a conservative approach that compared primordial follicle samples (0.55 µg total protein) to somatic cells half their loading concentration (0.25 µg total protein), nevertheless observing similar levels of the mitochondrial import machinery subunits TOMMs and TIMMs. The fivefold-dilution somatic cell sample was useful for establishing detection limits; indeed, many complex I subunits absent in oocytes were detected with high confidence even at this dilution. In the heatmap (Extended Data Fig. 5), we considered normalizing our data using the mitochondrial loading controls citrate synthase and COX4I1 to estimate differences in protein abundance. The abundance of COX4I1 fell within the linear range of our proteomic methodology (R 2 = 0.99), in contrast to that for citrate synthase (R 2 = 0.89) whose higher abundance led to measurement saturation at higher concentrations. Therefore, COX4I1 was chosen to normalize protein abundances in the heatmap representation. We identified 454 mitochondrial proteins (Supplementary Table 3; 298 and 397 proteins were quantified for early oocyte and somatic cell samples, respectively), representing 40% of all known mitochondrial proteins. Here too, levels of the mitochondrial import proteins TIMMs and TOMMs were similar between oocytes and ovarian somatic cells (Fig. 3d,e), demonstrating an equivalent mitochondrial abundance that facilitated comparison of protein levels between different cell types.
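The choice of COX4I1 over citrate synthase as the normalization control rests on how linearly each protein's measured abundance scales across the dilution series (R² = 0.99 vs 0.89). That check is an ordinary least-squares fit; a minimal, dependency-free sketch (illustrative numbers, not the study's measurements):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                       # slope
    b = my - a * mx                     # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical loading amounts (µg) vs measured abundance: a control in its
# linear range tracks loading closely (R² near 1); a saturating one does not.
r2_linear = r_squared([0.1, 0.25, 0.55, 1.0], [0.9, 2.3, 5.0, 9.1])
```

A control whose signal saturates at higher loading, as described for citrate synthase, would show a visibly lower R² under the same fit.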
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
Data availability
Isobaric-tag-based quantification data shown in Fig. 3, Extended Data Fig. 3 and Supplementary Tables 1 and 2 are available through PRIDE (ref. 58 ) with the identifiers PXD025366 and PXD030576. Label-free data shown in Fig. 3, Extended Data Fig. 5 and Supplementary Table 3 are available through PRIDE (ref. 58 ) with the identifier PXD025369. Data for the gel band identification in Extended Data Fig. 6 and Supplementary Table 4 are available through PRIDE (ref. 58 ) with the identifier PXD025371. Source data are provided with this paper.

Fig. 1 | Undetectable levels of ROS in the early oocytes. a, A comparison between select reproductive traits of humans and Xenopus laevis, which live on average 76 years 59 and 15 years in captivity 60 , respectively. Xenopus oocytes have long been used in reproduction studies 61,62 , in part because they are more accessible and share many conserved features with human oocytes, such as (1) a long dormancy period: Xenopus oocytes arrest at late-stage I for several years. Moreover, a large population of early oocytes is maintained in adult female ovaries throughout most of the lifetime, suggesting the presence of an oocyte reserve similar to humans 63-65 . (2) A similar duration of the maturation period from early oocytes to mature eggs. (3) A measurable decline in fertility with age 60,66,67 . (4) A cytoplasmic distribution and activity of organelles similar to humans, including a Balbiani body found in the ooplasm of both species 6,68,69 . On the other hand, humans and Xenopus differ in their modes of fertilization: humans undergo internal fertilization, while Xenopus fertilization takes place externally. This important difference affects several features related to fertilization: Xenopus lay many eggs, each with considerable internal nutrient reserves for survival outside of the body, whereas humans ovulate only 1-2 eggs per cycle with little internal nutrient reserves [70][71][72] . b, A schematic of Xenopus laevis oogenesis according to ref. 42 .
Oogenesis in Xenopus is divided into six stages based on the morphology of the developing oocytes: oocytes are transparent and measure 50-300 microns in stage I. Oocytes grow and gradually accumulate pigments and yolk to become opaque and measure more than 1 mm in stage VI, when they are ready to be ovulated. c, Schematic representation of human and Xenopus early oocytes with attached granulosa cells. Nuclei (n) are depicted in blue and Balbiani bodies (Bb) in green. Note that Xenopus early oocytes are so large that their granulosa cells are visible only as small puncta on the periphery of the oocyte at the same magnification. d, f, Live-cell imaging of Xenopus early (stage I) oocytes with attached granulosa cells with MitoSOX (d) and CellROX (f) to detect their ROS levels. Granulosa cells were imaged in the basal plane of the oocyte. DIC, differential interference contrast. Scale bars: 50 µm and 10 µm for oocytes and granulosa cells, respectively. e, g, Quantification of MitoSOX (e) and CellROX (g) probes inside oocytes and in granulosa cells (n = 3; biological replicates shown in colours). The data represent the mean ± s.e.m. ***P = 4.298 × 10 −8 and **P = 1.86 × 10 −5 using two-sided Student's t-test. h, Live-cell imaging of ROS in early oocytes untreated or treated with 10 µM menadione for 2 h. Untreated oocytes were incubated with Wheat Germ Agglutinin (WGA) 488 (green) to mark the plasma membrane, then combined with treated oocytes in the same dish and labelled with MitoSOX. Scale bar: 50 µm. i, Quantification of MitoSOX in oocytes at the 4 h time point (untreated or treated with 10 µM menadione for 2 h followed by a 2 h wash). The data represent the mean ± s.e.m.; n = 3 biological replicates; at least 3 oocytes were quantified per replicate. ****P = 2.21 × 10 −9 using two-sided Student's t-test. j, Experimental design for the assessment of survival upon mild ROS production.
Freshly isolated early (stage I), maturing (stage II and III), and late-stage (stage VI) oocytes were treated with 10 µM menadione in the presence or absence of 10 mM N-acetyl cysteine (NAC). After 2 h, menadione was removed and NAC was maintained overnight, when survival was determined.
Troponin I and echocardiography in patients with systemic sclerosis and matched population controls.
Objectives: Cardiac manifestations in systemic sclerosis (SSc) are associated with poor prognosis. Few studies have investigated cardiac troponins in SSc. We studied the relationships between echocardiographic abnormalities, cardiac biomarkers, and disease manifestations in a population-based cohort of patients with SSc and controls. Method: The study comprised 110 patients with SSc and 105 age- and sex-matched population-based controls. We examined ventricular function, heart valves, and estimated pulmonary arterial pressure (ePAP) by echocardiography in all participants. Disease characteristics, manifest ischaemic heart disease (IHD), and measurements of N-terminal prohormone brain natriuretic peptide (NT-proBNP) and high-sensitivity cardiac troponin I (hs-cTnI) were tabulated. Results: NT-proBNP and hs-cTnI levels were higher in SSc patients than controls. Both NT-proBNP and hs-cTnI were associated with the presence of echocardiographic abnormalities. Forty-four SSc patients and 23 control subjects had abnormal echocardiograms (p = 0.002). As a group, SSc patients had lower (but normal) left ventricular ejection fraction (LVEF, p = 0.02), more regional hypokinesia (p = 0.02), and more valve regurgitations (p = 0.01) than controls. Thirteen patients and four controls had manifest IHD. Decreased right ventricular (RV) function (n = 7) and elevated ePAP (n = 15) were exclusively detected among SSc patients. Conclusions: Both NT-proBNP and hs-cTnI were associated with echocardiographic abnormalities, which were more prevalent in SSc patients than in controls. Our results thus suggest that hs-cTnI could be a potential cardiac biomarker in SSc. Low RV function and signs of pulmonary hypertension (PH) were uniquely found in the SSc group. SSc patients had more valve regurgitation than controls, an observation that warrants more clinical attention.
Systemic sclerosis (SSc) is an autoimmune systemic disease involving a diversity of internal organs. The hallmarks of the disease are vasculopathy, extensive fibrosis, and autoantibody production. Focal myocardial fibrosis, as described by Bulkley et al in 1976 (1), progresses silently while clinical symptoms such as arrhythmias, left and right heart dysfunction, or cardiac death may manifest suddenly without warning (2). The occurrence of cardiac manifestations in SSc is associated with poor prognosis (3) and international guidelines recommend yearly echocardiographic screening to detect pulmonary hypertension (PH) and/or cardiac abnormalities (4). Echocardiography has been used in SSc since the 1980s but, with few exceptions (5)(6)(7)(8), previous studies have been small or performed on selected patient groups.
Biomarkers such as N-terminal prohormone brain natriuretic peptide (NT-proBNP) and cardiac troponin (cTn) have more recently been used to identify subjects at risk of cardiovascular disease in the general population (9,10). NT-proBNP is mainly produced by ventricular myocytes under haemodynamic stressful conditions. NT-proBNP levels are used to monitor heart failure (9) and have been studied extensively in SSc, where they have emerged as a biomarker to monitor pulmonary arterial hypertension (PAH) (11). cTn has a high specificity for myocardial tissue and is the preferred biomarker to diagnose myocardial necrosis/infarction. Recently, high-sensitivity (hs) immunoassays for cTn have become available. These can detect low levels of circulating troponins, which may be released in other conditions than acute ischaemic heart disease (10).
Both cardiac troponin T (cTnT) and cardiac troponin I (cTnI) are widely used today, and elevated levels of both predict unfavourable long-term outcomes (10). Because of the elevated levels of cTnT in patients with muscular or renal disease even in the absence of cardiac manifestations, cTnI has been suggested as a better marker of cardiac disease in patients with these conditions (12). Uric acid (UA) is the final oxidation product of purine metabolism. Elevated levels occur in conditions with impaired oxidation such as chronic heart failure (13) and PH (14) and are associated with poor prognosis (15), but whether they can be used for screening is still under investigation. We compared echocardiographic findings in a population-based group of SSc patients and matched controls. Additionally, we investigated the associations between echocardiographic abnormalities, clinical characteristics, and circulating levels of NT-proBNP, hs-cTnI, and UA.
Patients and controls
All participants were > 18 years old and recruited from the adult population in Stockholm County between August 2006 and December 2009 (n = 1 534 272). During this period we identified 149 prevalent cases who fulfilled the American College of Rheumatology (ACR) criteria for SSc (16). We asked all 149 SSc patients if they wanted to participate in this study and 110 patients (74%) gave their consent. We recruited 105 control subjects from the same population. These were identified through use of the national registration number (includes date of birth and is coded for gender) and matched to the patients for age, gender, and region of living.
All participants underwent a thorough medical examination at the Department of Rheumatology, Karolinska University Hospital. The echocardiograms were performed at the Department of Clinical Physiology, Karolinska University Hospital or at Aleris Fysiologlab, Sophiahemmet. All were investigated for previous cardiovascular disease (CVD), traditional CVD risk factors, biomarkers of systemic inflammation, and autoantibody patterns. Carotid ultrasound and electrocardiograms were performed. These data have been described previously (17,18). Skin thickness was measured by the modified Rodnan skin score (mRSS) (19). Patients were classified as limited cutaneous SSc (lcSSc) or diffuse cutaneous SSc (dcSSc) (20). Organ involvement was defined as follows:
• Pulmonary fibrosis: signs of fibrosis on X-ray or high-resolution computed tomography (HRCT)
• PAH: a resting mean pulmonary artery pressure (PAP) ≥ 25 mmHg with a pulmonary capillary wedge pressure of ≤ 15 mmHg measured at right heart catheterization
• Myositis: muscular weakness and elevated creatine kinase (CK) and signs of inflammation on magnetic resonance imaging (MRI), electromyography, or muscular biopsy
• Kidney disease: a history of scleroderma renal crisis (SRC) (21)
• Ischaemic heart disease (IHD): myocardial infarction (MI) [confirmed by electrocardiography and a reversible rise in plasma CK muscle-brain fraction (CK-MB) or troponin T] or angina pectoris (confirmed by an exercise stress test).
The local ethics committee of Karolinska University Hospital approved the study and all participants gave their written informed consent.
Echocardiography
Echocardiography was performed either with an Acuson Sequoia ultrasound system (Acuson, Mountain View, CA, USA) with a 2.5- or 3.5-MHz transducer or with a GE Vingmed ultrasound system (Vivid 7; Horten, Norway). The results were interpreted by a single experienced reader, blinded to patient or control status and without knowledge of other test results. The patients and controls were investigated in random order. Two-dimensional measures were taken as recommended by the American Society of Echocardiography (22). Measures of wall thickness and left ventricle diameter are given as the mean of two measurements. Global left ventricular (LV) function was assessed with visual estimation of LV ejection fraction (LVEF) (23) and by measuring atrioventricular plane displacement (24). The valves were studied carefully for valve thickening and other malformations. Doppler and colour Doppler were used to assess valvular stenosis and/or leakage. Regurgitations were graded from the spectral Doppler intensity, the width of the colour jet at the base, and the appearance of the colour Doppler jet. Regurgitation was graded from 1 to 4, where 1 is mild and 4 severe, and considered present if it was grade 1 or more. Valvular abnormalities were classified as either abnormal localized echodensity adjacent to valve leaflets or valve thickening. PAP was estimated by continuous wave Doppler measurement of the peak systolic velocity of the tricuspid regurgitation. We used the following criteria for suspected PH:
1. Tricuspid regurgitation velocity > 2.9 m/s, corresponding to an estimated pulmonary artery pressure (ePAP) > 34 mmHg at rest, with or without additional echocardiographic parameters suggesting PH such as a dilated right ventricle and impaired right ventricular (RV) function.
2. Tricuspid regurgitation velocity < 2.9 m/s but additional echocardiographic parameters suggesting PAH.
In the absence of tricuspid regurgitation, PAP was considered normal.
RV function was considered abnormal if the tricuspid annular plane systolic excursion (TAPSE) of the RV free wall was < 17 mm. Tissue Doppler measurements were made at the septal basal segment of the left ventricle and the systolic velocity and diastolic E and A wave velocities were measured.
Laboratory analyses
hs-cTnI (reagent 3P23) and UA (reagent 3P39-21) were measured with an Architect ci16200® Integrated System (Abbott Laboratories, Abbott Park, IL, USA). The limit of detection of the troponin I assay was 2 ng/L and the total coefficient of variation (CV) was 5.5% at 22 ng/L and 4.4% at 200 ng/L. The UA method had a total CV of 2.1% at 280 μmol/L and 0.7% at 580 μmol/L. NT-proBNP was measured with a Roche cobas 8000 analyser, using the e602 module (Roche Diagnostics, Mannheim, Germany) according to the manufacturer's specifications. The instrument had a total CV of 0.9% at 107 ng/L and 1.3% at 2060 ng/L. Glomerular filtration rate (GFR) was estimated from cystatin C measurements. Cystatin C (reagent 1014, Gentian, Moss, Norway) was analysed on an Architect ci8200® analyser (Abbott Laboratories). The total analytical imprecision of the cystatin C method was 1.1% at 1.25 mg/L and 1.4% at 5.45 mg/L. GFR in mL/min/1.73 m² was calculated from the cystatin C concentration x (in mg/L) as y = 79.901x^(−1.4389).
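The cystatin C-to-GFR conversion above is a single power-law equation; as a sketch (input assumed in mg/L, consistent with the concentrations quoted for the assay's imprecision):

```python
def egfr_cystatin_c(cystatin_c_mg_per_l):
    """eGFR (mL/min/1.73 m^2) from cystatin C via the study's equation
    y = 79.901 * x**(-1.4389). Function name is ours."""
    return 79.901 * cystatin_c_mg_per_l ** (-1.4389)

egfr = egfr_cystatin_c(1.2)  # hypothetical cystatin C value in mg/L
```

The negative exponent encodes the inverse relationship: higher circulating cystatin C implies lower filtration.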
Statistics
Continuous variables are presented as mean ± standard deviation or, when non-normally distributed, as median and interquartile range. Categorical variables are presented as proportions. Non-normally distributed variables were log transformed to achieve a normal distribution, when possible. Continuous variables were compared using an analysis of variance (ANOVA) or a t-test or, if a normal distribution was not achieved, the Mann-Whitney test. The χ² test or Fisher's exact test was used to evaluate categorical variables, the latter when any cell contained five or fewer observations. As our primary aim was to describe patterns of associations for the investigated variables, we did not adjust for multiple comparisons.
Logistic regression models were used to estimate crude odds ratios (ORs) and 95% confidence intervals (CIs) for the association between echocardiographic findings and age and sex. Thereafter, multivariable logistic regression models, adjusted for age and sex, were performed to investigate the association between echocardiographic findings and disease characteristics and biomarkers.
Statistical analyses were performed using JMP software (SAS Institute, Cary, NC, USA). A p-value of < 0.05 was considered statistically significant.
Results
The SSc patients had a lower body mass index (BMI), diastolic blood pressure (BP), and eGFR, but higher triglyceride and NT-proBNP levels than the controls. The levels of inflammatory biomarkers and hs-cTnI were also higher in SSc patients, although the numerical absolute differences were fairly small. Levels of UA did not differ. All patients and 88% of the controls had detectable hs-cTnI levels (> 2 ng/L). A history of IHD was more common in patients than in controls; seven patients and three controls were diagnosed with MI and six patients and one control with angina pectoris. Although almost 50% of both patients and controls had ever smoked, only 11% of the patients and 7% of the controls were current smokers. Participant characteristics are presented in Table 1.
Echocardiographic findings in patients and controls
Overall, 44 SSc patients and 23 controls had abnormal echocardiograms. The patients had a lower (but within the normal range) LVEF and more often had LVEF < 50% and/or LV hypokinesia than the controls. Of the 11 participants with LVEF ≤ 50%, seven SSc patients and the only control had a history of IHD.
The mitral E velocity/septal e velocity (the E/eʹ ratio) was higher (but normal) in patients than in controls (p = 0.04), but signs of diastolic dysfunction (an E/eʹ ratio > 13) did not differ between patients and controls (16 patients vs. 10 controls, p = 0.1).
Eight patients, but no controls, had TAPSE < 17 mm and/or large RV diameter and the patients had a lower TAPSE than the controls (p = 0.03).
Fifteen patients but no controls had a tricuspid insufficiency (TI) velocity > 2.9 m/s and the pressure difference between the right atrium and the right ventricle was greater in the patients than in the controls (p = 0.0001). Ten of the 15 patients with a TI velocity > 2.9 also had pulmonary fibrosis.
The patients had more valvular regurgitation than the controls (p = 0.04), especially when we included the three SSc patients who had been subject to valvular replacement due to regurgitation; two mitral and one aortic valve prosthesis (p = 0.01). One additional SSc patient was diagnosed with severe mitral regurgitation and subsequently had a valve replacement. The two controls with prostheses had aortic valve replacements due to stenosis.
Echocardiographic findings are reported in Table 2. The characteristics of the 44 patients with echocardiographic abnormalities are described in detail in Supplementary Table S1.
Factors associated with echocardiographic abnormalities in SSc patients
All variables presented in Table 1 were investigated for associations with the following four echocardiographic outcomes: (i) any echocardiographic abnormalities, (ii) ePAP > 34 mmHg, (iii) valvular regurgitation, and (iv) LVEF < 50% and/or LV hypokinesia. As these outcomes and investigated variables were often associated with age and sometimes with sex, we adjusted for age and sex in the multivariable analysis.
1. Any echocardiographic abnormality was associated with kidney disease, PAH, a low eGFR, inflammatory biomarkers, and higher levels of hs-cTnI, NT-proBNP, and UA.
2. ePAP was associated with pulmonary fibrosis, PAH, conduction defects, hs-cTnI and NT-proBNP, a low eGFR, and inflammatory biomarkers.
3. Valvular regurgitation was associated with kidney disease, PAH, and high levels of hs-cTnI and NT-proBNP. Notably, no patient with dcSSc or ATA had valvular regurgitation.
4. A low LVEF/LV hypokinesia was associated with kidney disease, higher levels of hs-cTnI and NT-proBNP, and conduction defects. There was also a trend towards association with IHD, as expected.
Associations between echocardiographic outcomes and clinical/laboratory variables in SSc patients are presented in Table 3.
Patient characteristics associated with cardiac biomarkers
All variables presented in Table 1 were investigated for associations with cTnI, NT-proBNP, and UA. Patients with manifest IHD or PAH had higher hs-cTnI and NT-proBNP levels, but the levels of UA did not differ. Elevated hs-cTnI and NT-proBNP were also seen in patients with previous myositis, but there was no association with present CK levels. Patients with kidney disease and/or a low eGFR had higher levels of hs-cTnI, NT-proBNP, and UA. hs-cTnI was associated with elevated markers of inflammation while NT-proBNP was associated with a lower β-glucose level, lower BMI, and lower levels of low density lipoprotein (LDL). We did not find any association with smoking status. Levels of hs-cTnI and characteristics of the outliers are presented in Figure 1.
Age- and sex-adjusted β-coefficients and p-values are presented in Table 4.
Discussion
Our results demonstrate that high circulating levels of hs-cTnI constitute, similar to NT-proBNP, a good biomarker of echocardiographic abnormalities in SSc.
Our study is the first to evaluate hs-cTnI in SSc patients but a few other studies have investigated other troponins. In 2006, Montagnana et al analysed cTnT in 40 female SSc patients and 40 controls and found no difference in troponin levels (25). Since then, high-sensitivity troponin assays have been introduced (10).
Impaired renal function is often associated with IHD (28). In our study, abnormal echocardiographic findings were associated with both kidney disease and a low eGFR, but the positive associations between echocardiographic findings and NT-proBNP and hs-cTnI remained after adjustment for measures of renal disease/function. Thus, the high levels of these cardiac biomarkers cannot solely be explained by accumulation due to impaired renal clearance, a mechanism previously reported in other settings (29).
An increased prevalence of cardiac disease in SSc patients with myopathy has been reported previously (30) and we observed a higher hs-cTnI in patients with a history of myositis (p = 0.02), but there was no association between CK levels and hs-cTnI. The high levels of hs-cTnI found in SSc patients are thus likely to originate from the heart and not from skeletal muscles. Aggarwal et al reported similar observations in patients with polymyositis without cardiac involvement. They noted a positive association between high levels of CK and cTnT but no association between CK and hs-cTnI (31).
In line with the study by Avouac et al (8), we found an association between PAH and elevated hs-cTn, but in contrast we did not find any association between hs-cTn and traditional risk factors for cardiac disease. Instead, we found an association between inflammatory parameters and elevated hs-cTn. This could imply an underlying inflammatory component to the findings of elevated cTn, such as myocarditis, but we were not able to verify this in the present study. Myocarditis needs to be evaluated in larger studies or confirmed by MRI.
We consider it would be useful to evaluate hs-cTn as a potential biomarker for cardiac involvement in SSc. Whether hs-cTnI or hs-cTnT is preferable in SSc remains to be determined, a topic discussed recently by Hughes et al (32).
The small number of patients with a history of myositis or kidney disease in our study is a limitation and these findings should be confirmed in larger cohorts. However, we consider it is important to examine both renal and muscular disease/dysfunction in SSc patients in further studies to determine the specificity of cardiac troponins as measures of SSc-related cardiac disease.
In this population-based study, 40% of the SSc patients had one or more echocardiographic abnormalities. Both the left and right sides of the heart were affected. We also found that the patients had more valvular regurgitations than the controls, which in four of our cases had led to valvular prosthesis surgery. In autopsy studies only minor valve abnormalities have been reported (1,33). However, autopsy records report morphological changes whereas valvular insufficiencies can also be found in heart valves with normal structure. Although there are several echocardiographic studies on SSc, there are only a few reports on valve regurgitations or stenosis. In one small study of 11 patients with progressive SSc, mitral valve prolapse was recorded in two patients (34). In 1990, Kazzam reported that patients with SSc had the highest frequency of mitral and/or aortic regurgitation, with 10% vs. 1.7% in SLE and 0% among myositis patients and controls (36). In a large study comprising 570 SSc patients, 7.2% had mitral regurgitation and 2.4% aortic regurgitation (6). Taken together, our study provides further evidence that valve regurgitation is enhanced and should be specifically looked for in SSc, a fact that has not yet gained much attention. In our study, impaired LV function was associated with male gender, conduction defects, and a history of kidney disease. There was also a trend towards an association with myositis and IHD. These observations are essentially similar to the large multicentre European League Against Rheumatism (EULAR) scleroderma trial and research (37) study of 7073 patients, although the contribution of manifest IHD was not addressed in that study.
In many previous studies, systolic dysfunction seems to be less common than diastolic dysfunction (38)(39)(40)(41)(42) and, as in our study, mainly occurs together with coronary artery disease.
The decline in diastolic function has been the subject of numerous studies of SSc because of the assumption that it mirrors myocardial fibrosis. In our study, LV diastolic dysfunction was seen in only 15% of patients. The prevalence of diastolic dysfunction in SSc measured with conventional Doppler echocardiography has been higher in previous studies: 20-40% (5,6,39,42). Several variables affect the assessment of diastolic function and studies use various definitions. Lee et al found that the measure used in this study, the E/eʹ ratio, was more sensitive than the E/A ratio in SSc patients (43). We did not record any difference in diastolic dysfunction between patients and controls. Other studies have similar results, especially after adjusting for other predisposing factors such as heart rate, systolic dysfunction, and PH (44,45).
In our study, RV abnormalities such as decreased RV function and/or large RV diameter and signs of PH were exclusively detected in SSc patients. Altogether, 15 patients (14%) had findings indicating PH. The association between pulmonary fibrosis, age, and PH is well known (46), but we also found an association with a low eGFR and a trend towards an association with SRC. This finding further highlights the importance of including kidney function when examining cardiac disease in SSc. We did not adjust for medication in our study, which is a limitation as different drugs can affect both the GFR and the troponin values.

[Table abbreviations: SSc, systemic sclerosis; hs-cTnI, high-sensitivity cardiac troponin I; NT-proBNP, N-terminal prohormone brain natriuretic peptide; lcSSc, limited cutaneous systemic sclerosis; PH, pulmonary hypertension; PAH, pulmonary arterial hypertension; ACA, anticentromere antibodies; ATA, anti-topoisomerase 1 antibodies; ARA, anti-RNA polymerase 3 antibodies; IHD, ischaemic heart disease; BMI, body mass index; BP, blood pressure; HDL, high density lipoprotein; LDL, low density lipoprotein; CK, creatine kinase; eGFR, estimated glomerular filtration rate; hsCRP, high-sensitivity C-reactive protein; ESR, erythrocyte sedimentation rate.]
Conclusions
Levels of NT-proBNP and hs-cTnI were higher in SSc patients than controls, and both NT-proBNP and hs-cTnI were associated with pathological findings on echocardiography. Our results thus suggest that hs-cTnI could be a potential biomarker for detecting cardiac involvement in SSc.
SSc patients have a higher prevalence of abnormal echocardiograms than matched population-based controls. As a group, our SSc patients had lower (but normal) LVEF. More SSc patients than controls had regional hypokinesia. We also observed that valvular regurgitation is associated with SSc, while the occurrences of valve thickening and valve prostheses were similar to controls. None of the controls, but 14% of SSc patients, had signs of PH. RV abnormalities were only detected in SSc patients.
FAM122A Inhibits Erythroid Differentiation through GATA1
Summary

FAM122A is a highly conserved housekeeping gene, but its physiological and pathophysiological roles remain largely elusive. Based on the fact that FAM122A is highly expressed in human CD71+ early erythroid cells, herein we report that FAM122A is downregulated during erythroid differentiation, while its overexpression significantly inhibits erythroid differentiation in primary human hematopoietic progenitor cells and erythroleukemia cells. Mechanistically, FAM122A directly interacts with the C-terminal zinc finger domain of GATA1, a critical transcription factor for erythropoiesis, and reduces GATA1 chromatin occupancy on the promoters of its target genes, thus decreasing GATA1 transcriptional activity. Public datasets show that FAM122A is abnormally upregulated in patients with β-thalassemia. Collectively, our results demonstrate that FAM122A plays an inhibitory role in the regulation of erythroid differentiation and could be a potential therapeutic target for GATA1-related dyserythropoiesis or an important regulator for amplifying erythroid cells ex vivo.
INTRODUCTION
Erythropoiesis, a stepwise process of differentiation by which red blood cells (RBCs) are generated from hematopoietic stem and progenitor cells (HSPCs), is finely controlled by master transcription factors that tightly regulate erythroid-specific gene expression networks (Alvarez-Dominguez et al., 2017; Li et al., 2019; Merryweather-Clarke et al., 2011; Nandakumar et al., 2016; Perreault and Venters, 2018). The core erythroid network of transcription factors comprises the DNA-binding factors GATA1, TAL1, and KLF1, as well as the non-DNA-binding factors LDB1 and LMO2 (Nandakumar et al., 2016; Xu et al., 2012). For example, it is well known that erythroid differentiation depends upon GATA1 in a dose-dependent manner, and GATA1 is also important for the survival and cell-cycle regulation of erythroid progenitors by erythropoietin (EPO) signaling (Fujiwara et al., 1996; Gutierrez et al., 2020; Xu et al., 2012). Accordingly, GATA1 deficiency arrests erythropoiesis at the proerythroblast stage and induces apoptosis (Fujiwara et al., 1996; Gutierrez et al., 2020).
As a housekeeping gene, FAM122A (also known as C9orf42) is highly conserved among a variety of mammalian species (Eisenberg and Levanon, 2013). Previously, we reported that FAM122A inhibits the phosphatase activity of protein phosphatases of the type 2A family (PP2A), a major fraction of cellular Ser/Thr phosphatase activity in any given human tissue, by interacting with its Aα scaffold and Bα regulatory subunits (Fan et al., 2016); PP2A plays important roles in germ cell maturation, embryonic development, metabolic regulation, tumor suppression, and the homeostasis of many adult organs (Reynhout and Janssens, 2019). We also demonstrated that FAM122A is critical for maintaining the growth of hepatocellular carcinoma cells and acute myeloid leukemia (AML) cells in a PP2A activity-independent or -dependent manner (Zhou et al., 2020). However, the biological functions of the FAM122A protein remain poorly understood to date. Based on the fact that FAM122A is highly expressed in human CD71+ early erythroid cells, here we report that FAM122A significantly inhibits erythroid differentiation in primary human erythroid cells and erythroleukemia cells by interacting with GATA1 and inhibiting its transcriptional activity.
Downregulation of FAM122A Expression during Erythroid Differentiation
During erythropoiesis, CD34, CD71, GATA1, and hemoglobin are expressed in close relation to the early and late erythroid progenitors, such as the erythroid burst-forming unit (BFU-E) and colony-forming unit (CFU-E), and to the early stages of terminal erythroid differentiation. In brief, CD34 is a marker of progenitor cells, including the early erythropoietic progenitors BFU-E and CFU-E, but is lost with differentiation, while CD71 is expressed in late erythroid progenitors and during the early stages of terminal erythroid differentiation (Chen et al., 2009; Hu et al., 2013; Li et al., 2014). Searching for human FAM122A gene expression in the BioGPS datasets (http://biogps.org/dataset/GSE1133/geneatlas-u133a-gcrma/) (Su et al., 2004), we found that FAM122A is highly expressed in human CD71+ erythroid cells among all tissues and cells (Figure 1A). Thus, we asked whether FAM122A is involved in erythrocytic development, during which GATA1 is expressed in both early and late erythroid progenitors and during the early stages of terminal differentiation, peaking in late erythroid progenitors at the same time as CD71 expression (Kobayashi and Yamamoto, 2007), while hemoglobin (Hb) is expressed during terminal differentiation. Erythroid cells at successive terminal stages of human erythropoiesis were also identified by using the combination of glycophorin A (GPA) and Band 3 (Auffray et al., 2001; Hu et al., 2013). Toward this end, we used hemin at 50 μM to treat human erythroleukemic K562 cells as a cellular model for erythroid differentiation induction, as indicated by benzidine staining-positive (DAB+) cells (an indicator of Hb production) and accumulation of HbG (Rutherford et al., 1979; Wang et al., 2018). Intriguingly, our results demonstrated that FAM122A mRNA and protein levels gradually decreased upon hemin-induced erythroid differentiation (Figures S1A, 1B, and 1C).
We also expanded in vitro primary CD34+ HSPCs from human umbilical cord blood with 100 ng/mL stem cell factor (SCF), 10 ng/mL interleukin-3 (IL-3), and 1 U/mL EPO for 6 days, followed by 3 U/mL EPO for an additional 6 days, as depicted in Figure 1D. Consistent with a previous report (Sun et al., 2015), this treatment effectively induced CD34+ cells to undergo erythroid maturation, as assessed by morphological features and HbA/HbG expression (Figures S1B and 1E). As expected, both FAM122A protein and mRNA were reduced during terminal erythroid differentiation after EPO treatment for 12 days (Figures 1E and 1F). In line with this, human RBCs are devoid of FAM122A protein (Figure 1G). All these results indicate that FAM122A is downregulated during erythroid differentiation.

[Figure 1 legend, parts B–G: (B and C) K562 cells were treated with 50 μM hemin for the indicated times; FAM122A protein and mRNA levels were examined by western blot (B) and qPCR (C) (n = 3; mRNA data are means ± SD). (D) Schematic of CD34+ cell expansion and EPO-induced differentiation. (E–G) FAM122A levels were examined by western blot (E and G) and qPCR (F) (n = 3) in cells treated as in (D); GAPDH served as a protein loading control, and HbG or HbA as indicators of erythroid differentiation; FAM122A protein was quantified densitometrically, with relative levels shown as means ± SD from three independent experiments.]

[Figure 2 legend (beginning): CD34+ cells were infected with lentivirus carrying shFAM122A or negative control shRNA (shNC) (A–F), or Flag-FAM122A or empty vector (G–L), for 48 h, followed by induction with 3 U/mL EPO for 4 days; legend continued below.]

Stem Cell Reports Vol. 15, 721–734, September 8, 2020
Inhibition of Terminal Erythroid Differentiation by FAM122A

Next, we attempted to explore the potential roles of FAM122A in erythropoiesis by small hairpin RNA (shRNA)-mediated knockdown in human CD34+ HSPCs from human umbilical cord blood. For this, CD34+ cells were expanded for 4 days and followed by lentivirus infection with specific shRNA against FAM122A (shFAM122A) or a negative control shRNA (shNC). Two days post-infection, a significant silencing effect was confirmed in shFAM122A-expressing CD34+ cells (Figure S1C). By utilizing the flow cytometry-based strategy for isolating human BFU-E and CFU-E, we found that FAM122A knockdown did not impact the amounts of the BFU-E (IL-3R−GPA−CD34+CD36−) and CFU-E (IL-3R−GPA−CD34−CD36+) populations (Figure S1D), and also failed to affect their colony-forming abilities (Figure S1E). Thus, we examined whether FAM122A knockdown impacts terminal erythroid differentiation in CD34+ cells induced by EPO. Intriguingly, the results showed that FAM122A knockdown significantly increased various globin protein and/or gene expressions (Figures 2A and 2B), and enhanced the percentages of CD71+/GPA+ and GPA+/Band3+ cells (Figures 2C and 2D) and DAB+ cells (Figure 2E). The morphological observation also showed that, under EPO induction for 4 days, the percentages of orthochromatic erythroblasts and reticulocytes were significantly increased in FAM122A-knockdown CD34+ cells (Figure 2F).
We also used CRISPR/Cas9 to delete FAM122A in K562 cells and found that FAM122A knockout (FAM122A KO) significantly enhanced hemin-induced erythroid differentiation (Figures S2A-S2C), which could be significantly rescued by re-expression of FAM122A (Figures S2D-S2F). Because the FAM122A gene is localized within the first intron of PIP5K1B (phosphatidylinositol 4-phosphate 5-kinase), we also found that PIP5K1B knockdown failed to influence hemin-induced erythroid differentiation, excluding a role of this gene in erythroid differentiation. Notably, FAM122A knockout by itself promoted the expression of globin genes and HbG protein and increased DAB+ cells, indicating that FAM122A deletion may spontaneously trigger, or predispose cells to, erythroid differentiation.
Contribution of GATA1 to FAM122A-Regulated Erythroid Differentiation
To identify possible proteins interacting with FAM122A, we incubated nuclear extracts of K562 cells together with in-vitro-translated GST-FAM122A, with GST as a control, followed by GST pull-down. The precipitates were fractionated by SDS-PAGE and stained with Coomassie brilliant blue. The proteins separated by electrophoresis from GST-FAM122A- and GST-bound lysates were excised and further identified by liquid chromatography-tandem mass spectrometry analysis (Table S1). In total, we identified 142 FAM122A-interacting proteins, including GATA1 (Figures 3A and 3B), the latter being further confirmed by western blot (Figure S3). On the other hand, we also performed RNA sequencing to examine the global gene expression profiling of K562 cells with (FAM122A KO no. 1) and without (NC) FAM122A knockout (Figures 3C and 3D). A comparison of the transcriptomes using a statistical cutoff of p < 0.01 and a fold change >1.5 revealed that FAM122A knockout significantly altered the transcriptome of K562 cells, with 133 increased transcripts and 129 decreased ones (Table S2). By gene ontology analysis, many upregulated genes were closely associated with the components and functions of hemoglobin in terms of molecular functions, cellular components, or biological processes (Figures 3C and 3D; Table S2). Among these upregulated genes, some of which were confirmed by qPCR (Figure 3E), were globins (HbA1, HbA2, HbB, HbG1, and HbG2) and ALAS2 (a critical enzyme in heme biosynthesis).

[Figure 2 legend (continued): lentivirus-infected, EPO-treated CD34+ cells were analyzed for the indicated protein expression by western blot (A and G) (n = 3), globin gene levels by qPCR (B and H) (n = 3), percentages of CD71+/GPA+ (C and I) (n = 3) and GPA+/Band3+ cells (D and J) (n = 3) by FACS, percentages of DAB+ cells (E and K) (n = 3), and morphology by Giemsa staining (F and L) (n = 3); qPCR data were normalized against the corresponding shNC or empty-vector cells treated with EPO for 4 days (B and H); quantitative data are shown in the right panels, and percentages of distinct erythroid stages (ProE + Baso, proerythroblasts and basophilic erythroblasts; Poly, polychromatic erythroblasts; Ortho + Reti, orthochromatic erythroblasts and reticulocytes) are shown (F and L).]
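The qPCR data above are reported as relative fold changes normalized against the corresponding control cells. A common way to compute such values, assumed here since the text does not name the exact procedure, is the 2^-ΔΔCt method; the Ct values below are invented:

```python
def fold_change_ddct(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression by 2^-ΔΔCt: a sample condition vs. a control
    condition, each normalized to a reference gene (e.g. a housekeeping gene)."""
    ddct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2.0 ** -ddct

# Invented Ct values: the target crosses threshold 2 cycles earlier
# (relative to the reference gene) in the sample than in the control,
# i.e. a 4-fold upregulation.
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
```

This assumes near-100% amplification efficiency for both the target and the reference gene; efficiency-corrected variants exist for when that assumption fails.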
The above-described results led us to extrapolate that GATA1 has a role in FAM122A-modulated erythroid differentiation. To consolidate this, two pairs of GATA1 shRNAs (shGATA1 no. 1 and shGATA1 no. 2), together with a control shRNA (shcontrol), were infected into K562 cells to knock down GATA1 in either FAM122A-knockdown or shNC cells (Figure 4A). Subsequently, these cells were treated with or without hemin at 50 μM for 48 h. As shown in Figures 4B-4D, GATA1 knockdown significantly abrogated the increase in hemoglobin and globin gene expression caused by FAM122A silencing, either in the presence or absence of hemin induction. However, knockdown of GATA2 did not affect the effects of FAM122A silencing on differentiation (Figures S4A-S4C). These results suggest that GATA1 mediates FAM122A-regulated erythroid differentiation.
Direct Interaction of FAM122A with GATA1
To elucidate how GATA1 works in FAM122A-regulated erythroid differentiation, we first found that FAM122A did not affect GATA1 expression in K562 and CD34+ cells (Figures S4D and S4E). As described above, our protein interactomic analysis showed that FAM122A interacts with GATA1. To confirm this, 293T cells were co-transfected with Flag-tagged FAM122A and GFP-tagged GATA1, followed by co-immunoprecipitation assays. The results showed that Flag-FAM122A could pull down GATA1, and GFP-GATA1 could reciprocally precipitate FAM122A (Figure 5A). The physical interaction of endogenous FAM122A and GATA1 was further confirmed in K562 cells (Figure 5B). The immunofluorescence assay also revealed intranuclear colocalization of FAM122A and GATA1 proteins in K562 and CD34+ cells (Figure 5C). The in vitro GST pull-down assay further showed that FAM122A interacts directly with GATA1 (Figure 5D). With a series of GATA1-truncated mutants (Figure 5E), we showed that the C-terminal zinc finger domain of GATA1 is crucial for the physical interaction of GATA1 and FAM122A (Figures 5F and 5G).

[Figure 3 legend, parts C–E: (C and D) FAM122A KO and NC K562 cells were subjected to RNA sequencing and bioinformatics analysis; differentially expressed genes were analyzed by gene ontology (C), with the most upregulated genes in FAM122A KO cells enriched in the indicated molecular functions, cellular components, and biological processes; the heatmap presents enriched candidate upregulated genes involved in erythroid maturation and function (D). (E) Representative candidate genes regulated by FAM122A were confirmed by qPCR (n = 3); data are means ± SD of an independent experiment.]
Inhibition of the Transcriptional Activity of GATA1 by FAM122A
Considering that the C-terminal zinc finger domain of GATA1, which mediates its interaction with FAM122A, is also a critical region for the binding of GATA1 to its target DNA (Ferreira et al., 2005; Kaneko et al., 2012), we tested whether FAM122A affects the DNA binding activity of GATA1. For this, a biotin-labeled DNA probe bearing the core canonical GATA DNA sequence (Cantor and Orkin, 2002) was incubated with purified GATA1 and/or FAM122A proteins expressed in E. coli. The electrophoretic mobility shift assay (EMSA) showed that a specific shift of the DNA-protein complex was observed only upon incubation with the GATA1 protein (lane 3, Figure 6A) but not with the FAM122A protein (lane 2, Figure 6A). This shift band was competitively eliminated in the presence of excess amounts of wild-type (WT) probe without biotin labeling (lanes 6 and 7, Figure 6A), but still appeared in the presence of excess probe with a mutant DNA binding sequence (lane 8, Figure 6A). A super-shift band could be seen when co-incubating the GATA1 antibody with the GATA1 protein (lane 9, Figure 6A), suggesting that GATA1 specifically and efficiently bound to this probe. As expected, the co-incubation of FAM122A together with GATA1 almost eliminated the shift band produced by GATA1 binding activity (lanes 4 and 5, Figure 6A).

[Figure 6 legend, parts B and C: (B) 293T cells were co-transfected with increasing amounts of GFP-FAM122A, with or without Flag-GATA1, together with pro-ALAS2-pGL3-basic (ALAS2-pro) or pro-PRG2-pGL3-basic (PRG2-pro) and pRL-SV40 (internal reference) plasmids; luciferase activities were measured 24 h after transfection, and protein expression was confirmed by western blot (n = 3). (C) ChIP assay of GATA1 binding to the promoter and enhancer regions of the PBGD, AHSP, and AQP1 genes in FAM122A knockout (upper panels) or overexpressing (bottom panels) K562 cells treated with 50 μM hemin for 48 h; input, 1% of chromatin lysate subjected to immunoprecipitation; normal IgG as a negative control (n = 3).]
Furthermore, FAM122A also reduced the super-shift band intensity (lane 10, Figure 6A). These results implied that FAM122A can significantly inhibit the DNA binding activity of GATA1.
These facts prompted us to ask whether FAM122A influences the transcriptional activity of GATA1. For this purpose, 293T cells were co-transfected with luciferase reporters containing the promoter regions of ALAS2 or PRG2 (Surinya et al., 1997; Wu et al., 2014) and increasing amounts of GFP-FAM122A plasmids, together with or without Flag-GATA1. The results showed that FAM122A inhibited GATA1-triggered reporter activity in a dose-dependent manner (Figure 6B).
We further assessed whether FAM122A affects the chromatin occupancy of GATA1 in FAM122A KO and NC K562 cells treated with hemin by a chromatin immunoprecipitation (ChIP) assay, and monitored GATA1 recruitment to several promoter and enhancer regions of erythroid-specific genes, including PBGD, AHSP, and AQP1 (Hasegawa et al., 2012;Welch et al., 2004). As shown in Figure 6C, FAM122A KO significantly enhanced GATA1 chromatin occupancy at the promoter regions of these genes (upper, Figure 6C), suggesting that FAM122A deletion also increases the association of GATA1 with the promoter of its target genes in vivo. In addition, FAM122A overexpression reduced GATA1 chromatin occupancy on the promoters of its target genes (bottom, Figure 6C).
To further investigate the potential role of FAM122A in dyserythropoiesis, we analyzed FAM122A mRNA expression levels in the purified early and late erythroblasts derived from CD34+ cells, isolated from the peripheral blood of six transfusion-dependent patients with β-thalassemia (before transfusion) and six healthy controls (Forster et al., 2015). We found that FAM122A is significantly upregulated in the patient group, which showed delayed erythroid maturation after induction with EPO for 14 days (Figure S5A). Furthermore, we examined another dataset from a patient and her mother in one family with inherited β-thalassemia (Taghavifar et al., 2019) and found that FAM122A expression is also abnormally upregulated in the blood of the patient (daughter) and the carrier (mother) with β-thalassemia (Figure S5B).
DISCUSSION
In this work, we showed that FAM122A is abundant in erythroid progenitor cells and downregulated during terminal differentiation, similar to the expression pattern of CD71 and/or GATA1. Moreover, FAM122A negatively regulates the terminal differentiation of erythrocytes but does not affect the early process of erythropoiesis, as determined in either human CD34+ or K562 cells with genetically modulated FAM122A expression, suggesting that FAM122A specifically contributes to the process of terminal erythroid differentiation. How FAM122A expression is regulated during terminal erythroid differentiation remains to be further investigated. Given that both the mRNA and the protein of FAM122A were downregulated during terminal differentiation, we extrapolate that the regulation of FAM122A expression during erythroid differentiation occurs mainly at the transcriptional level.
The in vitro protein binding assay, accompanied by mass spectrometry (MS) analysis and RNA sequencing data, showed that GATA1 might be involved in the effects of FAM122A on erythroid differentiation. Knockdown of GATA1, but not GATA2, can significantly rescue FAM122A silencing-enhanced erythroid gene expression and maturation potential, indicating that GATA1 mediates the effect of FAM122A in regulating erythroid differentiation. GATA1 plays a central role in the development of erythrocytes, especially in terminal erythroid differentiation (Moriguchi and Yamamoto, 2014), and abnormal regulation of GATA1 is associated with dyserythropoietic disorders (Ferreira et al., 2005; Gutierrez et al., 2020; Tremblay et al., 2018).
GATA1 activity can be regulated by transcriptional and/or translational regulation, posttranslational modification, and protein-protein interaction (Ferreira et al., 2005; Morceau et al., 2004). FAM122A modulation does not change the mRNA and protein levels of GATA1, excluding regulation at the transcriptional or translational level. The posttranslational modifications of GATA1, including acetylation, phosphorylation, and sumoylation, have been found to regulate its DNA binding and/or transcriptional activity (Gutierrez et al., 2020; Hernandez-Hernandez et al., 2006; Yu et al., 2010). Considering that FAM122A was previously identified as a PP2A inhibitor, we further found that FAM122A modulation did not alter the phosphorylation of GATA1 at Ser142 and Ser310 (data not shown), the latter site being correlated with the binding and transcriptional activities of GATA1 (Kadri et al., 2005; Zhao et al., 2006).
Mounting evidence shows that GATA1 exerts its function by interacting with a series of cofactors, either co-activators or co-repressors (Ferreira et al., 2005; Gutierrez et al., 2020; Morceau et al., 2004). FAM122A interacts directly with GATA1 and inhibits its DNA binding and transcriptional activities, supporting the notion that FAM122A may act as a co-repressor of GATA1: their interaction reduces the association of GATA1 with its target gene promoters and thus interferes with erythroid differentiation. On the other hand, several lines of evidence show that GATA1 is acetylated at two conserved lysine-rich motifs located close to its C-terminal zinc finger domain and that this modification promotes its transcriptional activity (Lamonica et al., 2006). Because FAM122A interacts directly with the C-terminal zinc finger of GATA1, we do not exclude the possibility that this interaction affects the acetylation state of GATA1 and/or influences its transcriptional activity, which deserves to be investigated in the future.
Recently, we demonstrated that FAM122A is abnormally upregulated in AML patients and that its expression level is negatively correlated with the overall survival of AML patients. More importantly, FAM122A was found to be essential for the growth of AML cells in vitro and in vivo by modulating PP2A activity and sustaining c-Myc protein levels, showing an essential role of FAM122A in hematological malignancy. In this study, we found that FAM122A is a negative regulator of the normal human erythropoiesis process, possibly by acting as a co-repressor that interferes with the DNA binding and transcriptional activities of GATA1, pointing to a potential physiological role of FAM122A as a GATA1 coregulator in erythroid differentiation. The aberrant upregulation of FAM122A in patients with β-thalassemia further implies an important role of FAM122A in the regulation of erythropoiesis.
A deep understanding of the mechanisms of erythropoiesis is extremely important, not only for generating massive amounts of erythroid cells in vitro or ex vivo for transplantation and therapeutics (Chang et al., 2011; Zeuner et al., 2012), but also for providing the opportunity to govern stress or pathological dyserythropoiesis (such as blood loss, allogeneic stem cell transplantation, anemia, and β-thalassemia). Our findings propose a novel mechanism for the inhibitory effect of FAM122A on the regulation of human erythropoiesis ex vivo using CD34+ cells, and inhibition of FAM122A may enhance erythroid differentiation and amplify the bulk production of erythroid cells, potentially overcoming current hurdles in bulk RBC production caused by the lack of blood donor resources and high costs (Zeuner et al., 2012). During the last decade, efficient procedures and technologies to produce RBCs ex vivo using primary HSCs, embryonic stem cells, or induced pluripotent stem cells have attracted increasing attention, with the aim of achieving maximal RBC quality, quantity, and maturation. Our results suggest that limiting the inhibitory effects of negative regulatory factors, such as FAM122A, may significantly enhance the quantity of matured RBCs, similar to TRAIL (Migliaccio et al., 2011; Zeuner et al., 2012).
In summary, our study demonstrates that FAM122A plays an inhibitory role in human erythropoiesis in a GATA1-dependent manner by suppressing the DNA binding and transcriptional activities of GATA1. These findings not only elucidate a new function of FAM122A in the regulation of erythropoiesis, but also propose that FAM122A may be a potential therapeutic target for GATA1-related dyserythropoietic disorders or an important regulator for amplifying erythroid cells ex vivo.
Cells and Culture Conditions
293T cells were maintained in Dulbecco's modified Eagle's medium (Invitrogen). Human erythroleukemia K562 cells were cultured in RPMI 1640 medium (Invitrogen). All media were supplemented with 10% FBS (Invitrogen) and 1% penicillin/streptomycin. For erythroid differentiation, K562 cells were induced by the addition of 50 μM hemin (Sigma, USA).
Purification and In Vitro Culture of Human CD34⁺ Cells
CD34⁺ cells were purified from human umbilical cord blood using a CD34⁺ magnetic bead selection system (Miltenyi Biotec, Germany) according to the manufacturer's instructions. Cells were cultured at 10⁵ cells/mL for 5-6 days in Serum-Free Expansion Medium (STEMCELL Technologies) supplemented with 10% FBS (STEMCELL Technologies), 100 ng/mL SCF, 10 ng/mL IL-3, and 1 U/mL EPO (STEMCELL Technologies) at 37°C in 5% CO₂ for cell expansion, and then cultured in medium containing 30% FBS and 3 U/mL EPO for erythroid differentiation for the indicated days (4 or 6 days). CD34⁺ cells were derived from human umbilical cord blood obtained in the Department of Obstetrics and Gynecology of Ren-Ji Hospital. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (SJTU-SM and national) and with the Helsinki Declaration of 1975, as revised in 2013. Informed consent was obtained from all patients included in the study.
Benzidine Staining, Cell Morphology, and Colony Assay
For benzidine staining, cells were washed twice with ice-cold phosphate-buffered saline. Benzidine dihydrochloride (Sigma, USA) solution was prepared in 0.5 M acetic acid. One microliter of 30% hydrogen peroxide was added to 50 mL of benzidine solution immediately before use. Then, 1 mL of the fresh benzidine solution containing hydrogen peroxide was added to 10 mL of cell suspension. The dark blue particles of oxidized benzidine were readily distinguishable under a light microscope. Two to three hundred cells (about five fields) were examined in each assay, and the percentage of benzidine-positive cells was calculated. For cell morphology and colony assays, see Supplemental Information.
RNA Sequencing and qPCR
See Supplemental Information. RNA sequencing data have been deposited in the Gene Expression Omnibus (GEO).
GST Pull-Down
GST alone and GST-tagged FAM122A fusion proteins were expressed in E. coli BL21 by induction with isopropyl β-D-1-thiogalactopyranoside (IPTG) at 28°C for 6 h and purified with GST Bind Resin (Novagen). GATA1 was bacterially expressed as a 6×His-tagged protein, followed by purification using nickel-nitrilotriacetic acid resin (QIAGEN). The purified GST or GST-tagged FAM122A proteins were incubated with the purified GATA1 protein for 2 h at room temperature. The precipitates were then eluted in SDS sample buffer and analyzed by western blot.
Luciferase Assay
For reporter plasmid construction, the human ALAS2 gene promoter (−797 to −617 bp) and the human proteoglycan 2 gene promoter (−117 to −67 bp) were PCR amplified and cloned into the pGL3 basic vector (Promega). 293T cells were plated at 5 × 10⁴ cells per well in 12-well plates 1 day before being transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. After 24 h, transfected cells were lysed with passive lysis buffer, and lysates were analyzed for both firefly and Renilla luciferase activity using a Dual-Luciferase Reporter Assay Kit (Promega). Luciferase activity was normalized for transfection efficiency using the activity of the co-transfected Renilla reporter (10 ng) as an internal control.
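The firefly/Renilla normalisation described above can be sketched as a short calculation. All luminescence readings below are hypothetical illustration values, not data from this study:

```python
# Dual-luciferase normalisation: each firefly reading is divided by its paired
# Renilla reading (the transfection control), then conditions are compared.
# All raw readings here are invented for illustration.

def relative_luciferase(firefly, renilla):
    """Normalise firefly readings by the Renilla internal control."""
    return [f / r for f, r in zip(firefly, renilla)]

# Hypothetical raw luminescence readings (arbitrary units)
control = relative_luciferase(firefly=[12000, 11500, 12500],
                              renilla=[600, 575, 625])
treated = relative_luciferase(firefly=[48000, 46000, 50000],
                              renilla=[600, 575, 625])

mean = lambda xs: sum(xs) / len(xs)
fold_activation = mean(treated) / mean(control)
print(f"Fold activation: {fold_activation:.2f}")
```

Dividing by Renilla corrects for well-to-well differences in transfection efficiency before the promoter activities are compared.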
ChIP Assay
ChIP assays were performed using the Pierce Agarose ChIP Kit (26156, Pierce Biotechnology, Rockford, IL) according to the manufacturer's instructions. K562 cells were treated with 50 μM hemin for 48 h, and protein-DNA complexes were crosslinked with 1% formaldehyde for 30 min at room temperature. The reaction was stopped by adding glycine to a final concentration of 125 mM and incubating for 5 min at room temperature. Chromatin solutions were precipitated overnight with rotation at 4°C using GATA1 antibody (NBP1-47492, Novus) or anti-mouse IgG (sc-3877, Santa Cruz) as a negative control. The DNA associated with the immunoprecipitates was isolated and used as a template for PCR to amplify the promoter and enhancer sequences containing the GATA1 binding element. The PCR conditions were as follows: 95°C for 3 min and 38 cycles of 30 s at 94°C, 30 s at 60°C, and 40 s at 72°C, followed by an extension of 5 min at 72°C. The primer pairs used were as follows: PBGD gene, 5′-TCTAGTCTACTCCATGTGGC-3′ and 5′-ACCAAGGCAGTTGTCAGTGG-3′, yielding a 231-bp fragment; AHSP gene, 5′-AGGGCTCAGTAAACGTC-3′ and 5′-AGAAGGGAGAGGCTTCC-3′, yielding a 186-bp fragment; AQP1 gene, 5′-AATGCAGGGCTGGGTTAGCCCGGCTC-3′ and 5′-TGACACCTCTTATCGCATCTGCCTCC-3′, yielding a 120-bp fragment. The precipitated DNA was further analyzed by qPCR. Each sample was detected in triplicate, and the amount of precipitated DNA was calculated as a percentage of the input sample.
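The "percentage of input" quantification mentioned above is commonly computed from qPCR Ct values with the standard percent-input method. The sketch below is generic and uses hypothetical Ct values; the study does not spell out its exact arithmetic beyond "percentage of input":

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.01):
    """ChIP-qPCR '% input' from Ct values.

    The input Ct is first adjusted for the fraction of chromatin saved as
    input (e.g. 1% input -> dilution factor 100), then the IP signal is
    expressed relative to 100% input, assuming perfect doubling per cycle.
    """
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

# Hypothetical Ct values for illustration only
enrichment = percent_input(ct_input=25.0, ct_ip=28.0)  # specific antibody
background = percent_input(ct_input=25.0, ct_ip=33.0)  # IgG control
print(f"antibody: {enrichment:.3f}% input, IgG: {background:.5f}% input")
```

A specific antibody should recover a clearly larger fraction of input than the IgG control at the same locus.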
Immunofluorescence
Cells were cytospun onto slides, fixed for 10 min at room temperature in 4% formaldehyde, and permeabilized in 0.1% Triton X-100 for 15 min at room temperature. Nonspecific sites were blocked by incubation with PBS containing 2.5% BSA for 1 h at room temperature. Cells were then incubated with anti-GATA1 (NBP1-47492, Novus) and anti-FAM122A (NBP2-31646, Novus) antibodies overnight at 4°C. Cells were subsequently washed three times with 1× PBS. Secondary antibodies (Alexa Fluor 488/595; z25402/z25407, Invitrogen, Carlsbad, CA) were applied at 1:200 dilution for 1 h at room temperature. Finally, the cells were incubated with 4′,6-diamidino-2-phenylindole (DAPI) for 10 min at room temperature. Stained cells were visualized using a Nikon Eclipse Ti confocal laser scanning microscope (Nikon, Kanagawa, Japan).
Statistical Analyses
Data are expressed as mean ± SD and were analyzed by Student's t test, with p < 0.05 indicating a significant difference. All experiments were repeated at least three times.
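As a minimal illustration of the stated analysis, the sketch below applies a two-sample Student's t test to hypothetical triplicate measurements; the data are invented, and only the choice of test and the p < 0.05 threshold follow the text. Instead of computing an exact p value, it compares |t| with the tabulated two-tailed critical value for df = 4:

```python
import math
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic with pooled (equal) variances."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical triplicates (e.g. % benzidine-positive cells in two conditions)
condition_a = [62.0, 65.0, 60.0]
condition_b = [40.0, 43.0, 38.0]

t = students_t(condition_a, condition_b)
T_CRIT = 2.776  # two-tailed critical value, alpha = 0.05, df = 4
print(f"t = {t:.2f}, significant at 0.05: {abs(t) > T_CRIT}")
```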
Data and Code Availability
The accession number for the RNA-seq data reported in this paper is GEO: GSE141735.
AUTHOR CONTRIBUTIONS
J.C. designed the research and performed most of the experiments. Q.Z. provided normal human umbilical cord bloods. M.H.L., Y.S.Y., and Y.Q.W. cultured the cells, analyzed the results, and carried out some of the experiments. G.Q.C. and Y.H. designed the research, analyzed and interpreted data, and prepared the manuscript.
Mental health issues in antenatal women with prior adverse pregnancy outcomes: Unmasking the mental anguish of rainbow pregnancy
Background & objectives: Mental health issues in pregnancy have adverse implications for quality of life; however, they still go unevaluated and underreported. Women with a previous history of abortions or stillbirth may have a higher risk of experiencing mental health problems. The present investigation aimed to study the prevalence of depression, anxiety, stress and domestic violence in antenatal women with prior pregnancy losses and the need for interventions to treat the same. Methods: One hundred pregnant women with a history of prior pregnancy losses (group 1) and 100 women without obstetrical losses (group 2) were enrolled in this cross-sectional study carried out in a tertiary care hospital in India. Women were screened for depression, anxiety, stress and domestic violence using various questionnaires: EPDS (Edinburgh postnatal depression scale), PRAQ-2 (pregnancy-related anxiety questionnaire-revised 2), GAD-7 (generalized anxiety disorder-7) and PSS (perceived stress scale). Results: The prevalence of depression (EPDS scale) and pregnancy-specific anxiety (PRAQ-2 scale) was significantly higher in group 1 than in group 2 (27 vs. 10%, P=0.008; and 15 vs. 6%, P=0.03). The prevalence of general anxiety (GAD-7 scale) and stress (PSS), however, was high and comparable in both groups (33 vs. 29%, P=0.44; and 33 vs. 27%, P=0.35, respectively). Recurrent abortion was found to be an independent risk factor for depression [adjusted odds ratio=26.45; OR=28]. In group 1, 31 per cent required counselling in the psychiatry department and nine per cent required medication. Interpretation & conclusion: Mental health issues, especially depression, are prevalent in antenatal women with previous losses but often remain unrecognised and untreated. There is a need for counselling and for developing screening protocols at India's societal and institutional levels.
women is estimated to be seven per cent in the general population 2 , with records in developing countries like India being overall higher 3 . During pregnancy, mental health issues have various implications affecting quality of life, such as nutritional deprivation and poor maternal weight gain 4 . Depression in pregnancy may persist into the postpartum period and lead to difficult parenting 5 . Studies show the association of antenatal mental health problems with foetal growth restriction (FGR) and low neonatal birth weight 6,7 . Women with a history of miscarriages are prone to marital disharmony and domestic violence 8 . In addition, poor social support, including conflict, ineffective communication and dissatisfaction with one's partner, can further precipitate antenatal mental disorders. Studies show that abortions and perinatal loss increase the odds of depression and anxiety, and very few bereaved mothers with anxiety symptoms access psychiatric treatment 9 . India leads the world in having a high number of stillbirths. However, limited data are available evaluating mental health issues in antenatal women with recurrent abortions and stillbirths 10 . In this study, we aimed to determine the prevalence of depression, anxiety, stress and the occurrence of domestic violence, and their correlation with perinatal outcome in subsequent pregnancy.
Material & Methods
This comparative cross-sectional study was conducted by the department of Obstetrics & Gynaecology, Postgraduate Institute of Medical Education & Research, a tertiary care hospital in northern India, over a period of 18 months from July 2018 to December 2019. The study was approved by the Institutional Ethics Committee (INT/IEC/2018/002160) and the tenets of the Declaration of Helsinki were strictly adhered to. Written informed consent for participation and publication was duly taken from all the study participants.
A total of 200 antenatal women were recruited from the Recurrent Abortions Clinic/Antenatal Clinic/Gynaecology and Maternity Ward of the hospital. The participants were divided into two groups. Group 1 consisted of antenatal women from the first, second or third trimester who had a previous history of stillbirths and/or recurrent pregnancy loss. Stillbirth was defined as a baby delivered with no signs of life and known to have died after 24 completed weeks of pregnancy or weighing 500 g. Recurrent pregnancy loss was defined as the loss of three or more consecutive pregnancies until 24 wk of gestation. Group 2 included antenatal women with a viable pregnancy but without a history of stillbirth or recurrent pregnancy loss.
Assuming a prevalence of depression of 28 per cent in women with stillbirth and eight per cent in women without obstetrical losses 11 , and using the formula n = (Z α/2 + Z β )² × [p 1 (1−p 1 ) + p 2 (1−p 2 )] / (p 1 − p 2 )², a sample size of 100 participants per group was considered at 90 per cent power and a 95 per cent confidence interval.
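The sample-size formula above can be checked numerically. The sketch below uses only the stated inputs (p₁ = 0.28, p₂ = 0.08, 90% power, 95% confidence level); it yields a minimum of about 73 participants per group, so the 100 enrolled per group exceeds this computed minimum:

```python
from math import ceil
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.90):
    """Per-group n = (z_{alpha/2} + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for a 95% confidence level
    z_beta = nd.inv_cdf(power)           # ~1.28 for 90% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Assumed prevalences of depression: 28% with stillbirth, 8% without losses
n = two_proportion_sample_size(0.28, 0.08)
print(f"minimum per-group sample size: {ceil(n)}")  # the study enrolled 100 per group
```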
All participants were administered questionnaires in English/Hindi: the Edinburgh postnatal depression scale (EPDS) for depression, the generalized anxiety disorder-7 (GAD-7) scale for general anxiety, the pregnancy-related anxiety questionnaire-revised 2 (PRAQ-2) for pregnancy-specific anxiety and the perceived stress scale (PSS) for stress [12][13][14][15][16] . The intimate partner violence (IPV) questionnaire was used to screen for domestic violence. The questionnaires were self-administered by 85 per cent of the participants and assisted/read out by research workers for 15 per cent. Those who scored above the questionnaire-specific cut-offs were followed up in the psychiatry department for counselling/treatment with medications. While the primary outcome was the prevalence of depression, secondary outcomes included the prevalence of anxiety, stress and domestic violence, and the need for therapeutic intervention. The associations of mental health issues with demographic factors, recurrent abortions, stillbirth, comorbidities, trimester of pregnancy and perinatal outcomes were also studied under secondary outcomes.
Statistical analysis: Data were analysed using SPSS v 22.0 (IBM Corp., Armonk, NY, USA). Comparisons of quantitative variables between the two groups were performed using Student's t test for age and the Mann-Whitney U test, while Fisher's exact test or the Chi-square test was used for categorical variables. The Spearman correlation coefficient was calculated to examine the relation between variables. Logistic regression analysis was carried out to identify independent factors associated with depression and anxiety.
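As an illustration of the categorical comparisons described here, the sketch below recomputes the crude odds ratio and a Pearson Chi-square statistic for the depression counts reported in the Results (27/100 in group 1 vs. 10/100 in group 2). The crude OR comes out at about 3.33, consistent with the reported OR of 3.32; the published P value may differ slightly depending on the exact test variant used:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Depression counts from the Results section
a, b = 27, 73   # group 1: depressed / not depressed
c, d = 10, 90   # group 2: depressed / not depressed

chi2 = chi_square_2x2(a, b, c, d)
or_ = odds_ratio(a, b, c, d)
CHI2_CRIT = 3.841  # df = 1, alpha = 0.05
print(f"chi2 = {chi2:.2f} (significant: {chi2 > CHI2_CRIT}), crude OR = {or_:.2f}")
```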
Results and Discussion
The demographic characteristics of both the study groups were comparable (Table I).
Anxiety: A meta-analysis of 19 studies published in 2017 showed that women with a previous history of perinatal loss had higher anxiety levels 17 . In our study, the prevalence of pregnancy-specific anxiety was high in group 1; however, general anxiety was high in both groups (Table I). Women with previous losses had 3.1 times higher odds of suffering from anxiety after adjusting for differences in the period of gestation, pregnancy-related complications, income and previous live births, though this was not significant. The anxiety rate did not differ with the number of abortions (Table II). The maximum prevalence of anxiety was found in the first trimester in group 2 (Table III). In group 1, the highest anxiety was found in the 15-28 wk period of gestation, and general anxiety persisted even after crossing the period of gestation of the previous pregnancy loss.
Depression: The prevalence of depression in group 1 and group 2 was 27 and 10 per cent, respectively, and the difference was significant (P=0.008; OR=3.32), with a significantly higher mean score in group 1 (Table I). This is comparable with other studies [18][19][20][21] . Group 1 had 3.1 times higher odds of suffering from depression after adjusting for differences in the period of gestation, pregnancy-related complications, income and previous live births (P=0.01). This difference was seen significantly in women with previous losses between 15-28 wk and beyond 33 wk (Table III). Recurrent abortion was found to be an independent risk factor associated with depression after adjusting for the effects of confounding factors such as age, income, married life years, period of gestation, history of live birth and pregnancy complications (adjusted odds ratio (aOR)=26.45; 95% CI: 1.84-378.92; P=0.01; OR=28). Rallis et al, in their study, concluded that maximum depression was found at 16 and 32 wk of gestation 1 . However, in our study, it was more prevalent between 15 and 28 wk in group 1 and in the initial 14 wk in group 2 (Table III).

Stress: The prevalence of stress was high and comparable in both groups (Table I). The stress levels were similar in the two groups after adjusting for the gestation of the previous loss, pregnancy-related complications, previous live births and income status. In group 1, stress was positively correlated with the number of abortions (P=0.004; Table II). High stress levels in the control group could be due to physical changes, emotional changes, increased demands in the form of the need for frequent antenatal check-ups, increased dietary requirements, increased expenditure and social pressure. However, it could also be due to the participants being chosen from a tertiary care referral institute.
Domestic violence:
The prevalence of domestic violence was three per cent in group 1 compared to one per cent in group 2; the difference was not significant.
Drug abuse: A study conducted by Carvalho et al 21 in Brazil reported a 13 per cent prevalence of drug abuse, but it was not prevalent in our study population.
Effect of social, marital and parental support:
The presence of marital, parental and social support was inversely correlated with perceived stress (Figure).

Obstetric outcomes: In our study, anxiety and depression were more prevalent in group 1, but these did not lead to an adverse obstetric outcome. This is in contrast to previous studies, which had shown an association of FGR with antenatal depression 4 .
Intervention: It is noteworthy that 57 (28%) participants required counselling (31 from group 1 and 26 from group 2) and 13 (5.5%) required drug therapy (9 from group 1 and 4 from group 2). Although the differences between group 1 and group 2 in the need for counselling and drug therapy were not significant, our study revealed that such differences exist. Starting antenatal patients on psychiatric medications is itself accompanied by fear of potential side effects on the foetus, ultimately affecting compliance. Family and the social environment might influence their decision-making in this regard.
This study was not without limitations. There may have been a selection bias, as it was a single-facility study that did not represent the general population. The study was also limited by self-report bias. However, this study adds to the existing knowledge and can help us to plan antenatal services in rainbow pregnancy clinics. To conclude, our study highlights that pregnancy-specific anxiety and depression are significantly more prevalent in women with a bad obstetric history, which is evident from the significant differences in PRAQ-2 and EPDS scores. Recurrent abortion was an independent risk factor associated with depression. There is a need to develop social support, community-level screening and institutional protocols, and to address mental health issues in health programmes, where they have largely remained unaddressed. Developing such protocols in a low- and middle-resource setting like India and building coping mechanisms to overcome mental health issues during pregnancy is the need of the hour.
Figure. Scatter plots of correlation of stress with (A) social, (B) marital, and (C) parental support. PSS, perceived stress scale.
Table II. Prevalence of mental health problems in group 1 according to the number of abortions.
Table III. The proportion of participants with mental health issues according to the period of gestation. P≤0.01. There was no significant difference in the prevalence of mental health problems between the two groups across the different periods of gestation overall; group 1 had a significantly higher prevalence of anxiety and depression than group 2 between 15-28 wk, and also a significantly higher prevalence of depression beyond 33 wk. Social, marital and parental support were inversely correlated with perceived stress (correlation coefficient (r)=−0.28, P=0.005, OR 0.93, 95% CI 0.79-1.10; r=−0.41, P<0.001, OR 0.81, 95% CI 0.64-1.04; and r=−0.22, P=0.03, OR 0.75, 95% CI 0.49-1.11, respectively; Figure).
Towards Digital Twin Oriented Modelling of Complex Networked Systems and Their Dynamics: A Comprehensive Survey
Abstract-This paper aims to provide a comprehensive critical overview on how entities and their interactions in Complex Networked Systems (CNS) are modelled across disciplines as they approach their ultimate goal of creating a Digital Twin (DT) that perfectly matches the reality. We propose a new framework to conceptually compare diverse existing modelling paradigms from different perspectives and create unified assessment criteria to assess their respective capabilities of reaching such an ultimate goal. Using the proposed criteria, we also appraise how far the reviewed current state-of-the-art approaches are from the idealised DTs. We also identify and propose potential directions and ways of building a DT-orientated CNS based on the convergence and integration of CNS and DT utilising a variety of cross-disciplinary techniques.
Index Terms-Complex Network Systems, Digital Twins, Dynamic Processes, Network Dynamics.
I. INTRODUCTION
A complex network can be seen as a universal concept used for the representation and analysis of complex systems. Given the growing interest in real complex systems and the fast development of modelling techniques, the complex networked system (CNS) area has become a highly cross-disciplinary field that involves multiple modelling approaches, with various research aims posed and achieved over the years.
There is a considerable literature about complex networks and various researchers have published several surveys reviewing and exploring the topic from different perspectives and application areas. Those include works on complex networks and their applications covering multiple application areas [1] or orientated towards specific topics such as networks of cryptocurrency transactions [2], vehicular networks [3], internet of things [4] and networks of short written text [5].
When it comes to the models of complex networks in the context of their structure, there is also a body of work surveying a variety of network topologies [5], [6] and dynamics. Network dynamics can be considered either as: (i) dynamic processes over networks, covered by surveys on spreading processes such as epidemic and information spreading processes [7], [8], or (ii) dynamic networks with evolving structures and features [9], [10].
When it comes to modelling techniques that have been applied to complex networked systems, there are also surveys that review certain types of modelling approaches for complex networks, including Graph Neural Networks [9], game theory [11] and non-parametric Bayesian modelling [12].
Complex networked systems are modelled with the goal of accurately reflecting reality for the purposes of simulation, prediction and/or control. Over the years, proposed CNS models have become more and more accurate, with increasingly realistic network topologies, characteristics and evolving dynamics being modelled. Increasingly, they can capture features of real-world scenarios and behave like their twins. Researchers have already focused on studies of Digital Twinning of real systems across disciplines, which involves a wide range of applications, but so far the complex networks area has been marginalised in this development space.
Therefore, the convergence of Digital Twinning and Complex Networked Systems emerges as an exciting research focus with a potential to address some of the outstanding modelling and representational challenges ultimately leading to the establishment of a DT-orientated CNS area as its main goal.
Multiple modelling approaches have been applied to answer various questions about CNSs, and there is a need to review and explore where, how and why complex networked systems are modelled across many different disciplines. However, most existing surveys only account for certain types of complex networked systems from a specific perspective or a single discipline. Therefore, to fill this gap, we review research on complex networks from a holistic view while addressing the questions concerning diverse modelling paradigms and their distance to the idealised DTs. We devise a framework that enables the comparison of various modelling paradigms and the evaluation of their respective distances to DTs, which involves answering the following four fundamental questions: (1) What is the aim of the modelling?; (2) How to represent information about a system in the form of a network?; (3) How to model the dynamics in a networked system?; and (4) How do we approach the ultimate goal of building a CNS that models the reality at a Digital Twin level?
Hence, while answering the above questions, this critical survey aims to integrate and overview the current, relevant state-of-the-art from multiple disciplines and inform future research directions and foci for this new multidisciplinary area.
Our paper is organised as follows. Section II-B illustrates and discusses different modelling aims across different disciplines and summarises their research foci. This is followed by two sections concerned with complex networked systems, with Section III reviewing different network representational and modelling paradigms and Section IV focusing on modelling dynamics in networked systems. In Section V, a new framework is proposed for evaluating models of complex network systems given the ultimate goal of achieving DTs that faithfully represent the reality. The research gaps and future research directions are also identified and discussed in this section. Finally, the Conclusions are provided in the last section.
II. MODEL'S AIMS
There is always a need to determine the aim of developing a new model at the start of the modelling process. The choice of modelling paradigms, which can be understood as the approaches employed to model the real state or dynamics of the system, is largely dependent on the model's aims determined based on the research questions. The model's aims include a wide range of functions, such as link prediction, community detection and the mimicking of real systems, while they are developed given different disciplinary foci such as epidemiology, sociology and microeconomics. We categorise the modelling paradigms based on their concrete aims, which helps us to understand what is expected from modelling complex network systems and serves as the basis for further discussion of detailed modelling paradigms. In this section, we also set an ultimate goal of modelling complex networked systems, having in mind the various model's aims, and this goal is to create a Digital Twin (DT) of a real-world system.
A. Prerequisites of setting and fulfilling model's aims
Selecting the right modelling paradigm, together with the availability of the needed input data, are the prerequisites of setting and fulfilling the model's aims. Given the varying complexity of reality, which can only be partially observed and modelled, there are no concrete standards or measures to ensure and prove the fulfilment of these prerequisites. Thus the model is always built on strong assumptions of appropriate variable and model selection. The discrepancy between these assumptions and the truth may result in an inability to predict dramatic changes due to a partially observable input space [13], or in overfitting, which is typical for overly complex models [14]. Therefore, researchers have started to evaluate and reduce their distance to these prerequisites by considering observability.
Observability, in the structural sense, is the ability to reconstruct the state of a system from a limited set of measured variables in finite time [15], while from the perspective of dynamics, observability relates to a deeper understanding of how the selected variables interact and evolve to change the states of the system. Structural observability depends on the available data set, while dynamical observability is analysed based on a model of the dynamics, which is inevitably wrong to some extent but may be useful either in reflecting the reality or in diagnosing an input space that is only partially observable. The linear time-invariant system proposed by [16] is one typical modelling paradigm widely used in studies on the observability of dynamical systems [17].
The measures of observability vary depending on the linearity of the system. Linear networks without symmetries are well-studied research objects with respect to observability, where the observability matrix based on the dynamic model of a linear (time-invariant) system proposed by [16] is widely used. The nonlinearity of dynamical networks has recently been considered in studies on dynamical observability. For example, [20] quantify the observability and controllability of nonlinear networks with explicit symmetries, showing the connection between symmetries and nonlinear measures of observability and controllability, and [21] propose a nonlinear graph-based theory for dynamical network observability derived from the Jacobian matrix of the governing equations of nonlinear systems.
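The Kalman observability matrix mentioned above gives a concrete rank test. The sketch below is an illustrative toy example (not taken from the cited works): for a three-node chain network, a sensor on an end node yields full rank, while the structural symmetry around the middle node destroys observability when only that node is measured:

```python
def mat_mul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def matrix_rank(M, eps=1e-9):
    """Matrix rank via Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if abs(M[r][col]) > eps), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and abs(M[r][col]) > eps:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def observability_matrix(A, C):
    """Kalman observability matrix O = [C; CA; ...; CA^(n-1)].
    The LTI system x' = Ax, y = Cx is observable iff rank(O) = n."""
    O, row = [], C
    for _ in range(len(A)):
        O.extend(row)
        row = mat_mul(row, A)
    return O

# Toy 3-node chain network 1 - 2 - 3 (A is its adjacency matrix)
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
rank_end = matrix_rank(observability_matrix(A, [[1, 0, 0]]))  # sensor on an end node
rank_mid = matrix_rank(observability_matrix(A, [[0, 1, 0]]))  # sensor on the middle node
print(f"end-node sensor: rank {rank_end}/3; middle-node sensor: rank {rank_mid}/3")
```

Placing the sensor at an end node breaks the symmetry between nodes 1 and 3 and restores full rank, consistent with the role of symmetries discussed above.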
B. Model's aims in different research fields
The model's aims can be categorised as: (i) specific and (ii) abstract goals. Specific model's aims, such as community discovery, link prediction, anomaly detection, synchronisation and controllability of networked systems, focus on specific external tasks for an observable research object (i.e. a CNS in our case) with measurable model outputs. Abstract model's aims, like topological feature analysis and the mimicking of real-life systems, approach the inner rules of real dynamics, analyse observables and simulate unobservables for further research on specific model's aims. Examples of different CNS model's aims, together with the relevant references, are shown in Table I.

Community discovery aims to decompose complex networks into meaningful sub-networks that better describe local phenomena [10]. A local phenomenon refers to a set of entities that share some closely correlated sets of actions with the other entities of the community [10], [74]. This has been explored and discussed in a wide range of applications, including the detection of community structure hidden in real social networks [22]-[24], collaboration network analysis such as detecting citation patterns [25], improving the routing of telecommunication networks [26], reconfiguration of the brain network [27] and political affiliation [28].
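The idea of community discovery can be illustrated on a toy graph. The sketch below scores every bipartition of a six-node graph by Newman modularity and recovers the two obvious communities; this brute-force search is purely illustrative and is not one of the algorithms used in the cited works:

```python
from itertools import product

def modularity(edges, labels):
    """Newman modularity Q = sum_c [e_c/m - (d_c/(2m))^2] for a partition."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    intra = sum(labels[u] == labels[v] for u, v in edges) / m
    expected = sum((sum(d for n, d in deg.items() if labels[n] == c) / (2 * m)) ** 2
                   for c in set(labels.values()))
    return intra - expected

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
nodes = list(range(6))

# Exhaustive search over all bipartitions (feasible only for tiny graphs)
best = max((dict(zip(nodes, assignment))
            for assignment in product([0, 1], repeat=len(nodes))),
           key=lambda labels: modularity(edges, labels))
community0 = sorted(n for n in nodes if best[n] == best[0])
print(f"community of node 0: {community0}, Q = {modularity(edges, best):.3f}")
```

Practical community-detection algorithms (e.g. greedy modularity optimisation or label propagation) approximate this maximisation heuristically, since exhaustive search grows exponentially with network size.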
Link prediction aims to infer the behaviour of the network link formation process by predicting missed or future relationships based on observed links and the attributes of both nodes and relationships [30], [31]. Link prediction involves questions of dealing with missing links or link labels of networks and predicting links in changing networks, including social networks [29], [32], [33], food webs [34], networks in collaborative recommendation tasks [35], knowledge graphs [36] and biochemical networks of protein interaction [37] and metabolism [34].
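A minimal illustration of similarity-based link prediction is the Adamic-Adar index, which scores a candidate link by its common neighbours, weighting rare neighbours more heavily. The toy graph below is invented for illustration:

```python
import math

def neighbours(edges):
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    return nbrs

def adamic_adar(edges):
    """Score every non-adjacent pair by sum over common neighbours z of 1/log(deg z)."""
    nbrs = neighbours(edges)
    nodes = sorted(nbrs)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in nbrs[u]:
                continue  # already linked
            common = nbrs[u] & nbrs[v]
            scores[(u, v)] = sum(1.0 / math.log(len(nbrs[z])) for z in common)
    return scores

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
scores = adamic_adar(edges)
best_pair = max(scores, key=scores.get)
print(f"most likely missing link: {best_pair}, score = {scores[best_pair]:.3f}")
```

Here nodes 0 and 3 share two common neighbours, so the index ranks the link (0, 3) as the most likely missing edge.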
Node classification aims to provide a labelling for unlabelled nodes in a network composed of partially labelled nodes and edges [38]. Node classification, as an important way to explore node features and links, has been widely studied in social networks [38], [39], citation networks [40] and co-author networks [41].
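One classic way to label the unlabelled nodes is harmonic-function label propagation: labelled nodes are clamped and every unlabelled node repeatedly takes the average score of its neighbours. The path graph and seed labels below are an illustrative toy setting, not an example from the cited studies:

```python
# Hedged sketch of harmonic-function label propagation for node
# classification: clamp labelled nodes, relax the rest to the neighbour mean.

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a 5-node path
seed = {0: 0.0, 4: 1.0}          # two labelled endpoints, classes 0 and 1
score = {n: seed.get(n, 0.5) for n in adj}

for _ in range(200):             # fixed-point (Gauss-Seidel) iteration
    for n in adj:
        if n not in seed:
            score[n] = sum(score[m] for m in adj[n]) / len(adj[n])

labels = {n: int(score[n] > 0.5) for n in adj}
print(labels)  # nodes near node 0 inherit class 0, nodes near node 4 class 1
```

On this path the scores converge towards 0, 0.25, 0.5, 0.75, 1, so proximity in the graph decides the predicted class, which is the core idea behind graph-based semi-supervised node classification.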
Synchronisation of complex networks implies that the states of two or more interacting nodes in a network with different initial conditions gradually approach each other and finally reach the same state [42]. The applications of synchronisation in complex networks range from the stability of power grids [43], [44], controllability of neuronal networks [45]-[47] and optimising timetables for transportation [48] to synchronisation patterns affected by network topology in chemical systems [49] and IoT systems [50].
Controllability of a network represents the ability to control the network and is independent of the way the outputs are formed, while the related concept of observability depends only on the outputs and not on the inputs [75]. Studies on controllability are often combined with observability and range from social networks [76], protein interaction networks [77] and brain networks [78] to transportation networks [79].
Anomaly detection of networks is about finding objects, relationships, or points in time that are unlike the rest [52]. There are many studies on anomaly detection in various application scenarios, ranging from anomaly detection in social networks [53]- [55], public health [54], IP networks [56], wireless sensor networks [57] to intrusion of networks [58]. Similarly to the principles of anomaly detection, pattern recognition is also used for diagnostic analysis [59] and heterogeneous component detection in knowledge networks [60].
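A very simple structural instance of this aim is flagging nodes whose degree deviates strongly from the network's mean, e.g. via a z-score threshold. The star graph and threshold below are toy choices of ours for illustration:

```python
# Illustrative sketch of degree-based structural anomaly detection:
# flag nodes whose degree z-score exceeds a threshold.
from math import sqrt

# Toy star graph: one hub connected to nine leaves (an obvious outlier).
edges = [("hub", f"leaf{i}") for i in range(9)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

vals = list(degree.values())
mean = sum(vals) / len(vals)
std = sqrt(sum((d - mean) ** 2 for d in vals) / len(vals))

anomalies = [n for n, d in degree.items() if abs(d - mean) / std > 2.0]
print(anomalies)  # ['hub']: degree 9 vs. a mean of 1.8 (z-score 3.0)
```

Real anomaly detectors replace the degree with richer features (egonet statistics, temporal activity, embeddings), but the "score, then threshold against the rest" pattern is the same.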
Topological feature analysis is a very popular model's aim for both real-life and artificial networks. It probes network topological features from real data for applications across disciplines based on network-based models, such as the probed topological features of text networks for language organisation [5], [61], probed boundary features within small-world complex networks for image classification [62] and the organisation of the brain network [45]. Local network topological features like the three-node motif [80], directed closure [81] and the quadrangle structure [82] are researched through topological feature analysis of social networks. There are also discussions of artificial network topological effects on network-based models, involving studies of topological effects on P2P trading in financial markets [63], artificial neural networks in computation tasks [64], epidemic spreading [65] and enhancing the synchronisation of IoT systems [50].
Mimics of reality help to deal with questions about analyses of dynamics over networks or features of networks [66]. There are studies of spreading processes in artificial networks, including epidemic dynamics [8], [67], opinion dynamics [68], [69] and meme diffusion [70]. There are also studies that simplify real complex systems as data-driven networks to assist further analyses, with applications ranging from Digital Twins of IoT systems [50], [71], image information representation [59] and mimics of transportation systems [72] to the representation of human-object interactions via networks [73].
C. Digital Twin: an ultimate goal
Researchers focus on twinning real systems across many disciplines, and those efforts have already resulted in the development of a field of its own known as Digital Twins (DT). DT serves as an "almighty" paradigm of mimics across spatial and temporal scales. It has also grown to become an ultimate goal of modelling complex networked systems due to its reality-friendly nature, integration of model functions and wide range of applications.
A Digital Twin is a virtual extension of reality, which not only allows current conditions to be compared with historical data to provide meaningful information that assists decision-making, but also enables forecasting and feedback on eventualities that have never happened before [83]. Researchers have defined DTs from different perspectives across application scenarios. In a fully digitalised product life cycle, a DT is a comprehensive virtual product model with the features of real-time monitoring, simulation and forecasting [84]. For mechanical and cyber-physical systems, a DT is a linked collection of digital artefacts that evolves with the real system along the whole life cycle and integrates currently available knowledge with the purpose of describing behaviour and deriving solutions for the real system [85].
DTs have three elementary components that are repeatedly emphasised: the digital (virtual) part, the real physical product and the connection between them [86], while other imperative components have been added with the accumulation of practice, including data, service, machine learning and DT performance evaluation [87]-[89]. DTs also resemble a series of models with integrated functions like simulation, optimisation and data analytics [90] and features of real-time processing and continuous updates [91]. This makes DTs impossible to replace by any single tool and ideal modelling paradigms for health monitoring [92], planning of manufacturing [93], management of smart cities [94], accurate healthcare [95] and anomaly detection [96] within a wide range of complex systems, including complex networked systems like IoT systems [97], [98] and blockchain-encapsulated systems [99], [100].
The above-mentioned components, integrated functions and universal applicability of DTs differentiate them from any other simulation tool or modelling paradigm by emphasising the properties of real-time data acquisition of observations and feedback, and self-evolution through continuous machine learning analysis. They contribute to DTs' status as a powerful tool for mimicking a series of realities and an ultimate goal for modelling complex networked systems across disciplines.
Modelling complex networked systems using a Digital Twin paradigm has the potential to build a universal model that can be adapted to fulfil the multiple, different models' aims discussed in this section. But before this can be attempted, we need to review and assess how the modelling of complex networks, their dynamics, as well as dynamics on those networks, is currently approached, and this is the focus of the following sections.
NETWORK
This question involves two important issues in building complex networked systems: (1) what types of networks are needed in modelling certain phenomena, including the ways of building the required network topology with appropriate complexity using real data and simulations, and (2) how to obtain these networks from different perspectives of obtaining and processing data to realise such complexity. To achieve a faithful representation of a network that preserves as much information as needed in the modelling of networked systems (see section III) for certain models' aims (see II-B), one needs to collect and process observable data and information about the network structures and associated dynamics with modelling paradigms that minimise the information loss in the process of network generation.
A. How to represent networked data

According to [101], network topology is a representation of the physical connections that exist among entities in a communication network. This definition can be, and has been, easily expanded across disciplines, as network topology describes how entities of any type relate to each other in any type of network. The establishment of network topology is a crucial step in modelling complex networked systems, and its effect on the dynamics has been a popular research focus.
1) Complexity dimensions: Network topologies vary in complexity when they represent networked information that can vary in data availability and modelling necessity. Complexity of the topology results from different types of nodes, edges and their attributes. As shown in Figure 1, we propose to describe this complexity along the following four dimensions: (i) a structural dimension connected to the scale and diversity of the topology, (ii) a temporal dimension concerned with the time-to-live of different components of the network, (iii) a spatial dimension connected with the space in which the topology can be embedded, and (iv) a dynamics dimension connected to the topology's exerted or encapsulated dynamics. The complexity in the structural, temporal and spatial dimensions describes the necessary reality required to be represented via networks, while the dynamics complexity depends on the models selected to explore the complex reality. These modelling paradigms are shown in Figure 1. Given an available networked data set that is observable from the perspective of the above-mentioned four complexity dimensions, in topological feature analysis only structural information about networks needs to be represented and analysed based on a model that reveals the inner rules of network formation, while for the prediction of link formation over time, temporal complexity is further incorporated and a complex model that performs well in such an external task is employed.

a) Structural complexity: Structural complexity of the scale and diversity of nodes, edges and their attributes involves discussions of non-attributed and attributed networks in terms of their diversity, as well as small/big networks and sparse/dense networks in terms of their scale, where structural complexity increases as more detailed information needs to be represented.
Research studies usually start from the exploration of non-attributed networks that are built only with homogeneous nodes and edges. Such networks represent simplified real-world scenarios and are thoroughly studied in terms of their topological features, like the analysis of a potential energy landscape with both a small-world and scale-free character [102], or in the pursuit of a specific model's aim, e.g. community discovery, which only uses network topology to find partitions [103].
Attributed networks can better represent real-world networked interactions and information as they introduce auxiliary information via node or edge attributes [104], [105]. The node attributes describe the features of nodes within interactions or relations, while the edge attributes capture information about how adjacent nodes interact with others in the network [104]. These attributes vary with application scenarios. Taking online social networks as an example, nodes represent users and are attributed with user profiles [106], while edges represent online relationships and have attributes like the nature of the relation, direction, intensity and durability [107]. Many researchers study the structural complexity of node-attributed networks with such modelling aims in mind as community discovery [23], [108], link prediction [109], anomaly detection [106], controllability [110], topological feature analysis and the mimicking of reality [111]. However, few studies focus on the representation and modelling of generalised edge-attributed networks. Edge attributes of such networks in most studies typically take numerical or categorical forms [104], [112], where directions [113] and edge weights [114], [115] are often studied. Compared with the large number of studies on node-attributed networks with various model's aims, only a small number of studies focuses on community discovery [116] and anomaly detection [104]. The structural complexity also increases with the requirements of representing more information within large-scale networks and the difficulty of processing networks with sparse edge information. Structural complexity of large-scale networks with thousands and millions of nodes results from a complicated, higher-order inner structure [117], which is common in DTs like city IoT [111] and DTs of manufacturing with big data [118].
Structural complexity of sparse networks, given fewer edges, lies in the restrictions they impose on attribute processing [105], [119] and optimal modelling [120]. There is an even more complex case of large sparse networks, where both a complicated large-scale inner structure and problematic sparse edges are involved [120].

b) Temporal complexity: Temporal complexity of networks increases when more temporal information about nodes, edges and their attributes can be captured and modelled, meaning that less information is lost. Networks can be conceptually described as static, edge-weighted, evolving and temporal [9], [10], as they transform towards instantaneous representations and approach a state of continuity without temporal aggregation.
The basic modelling of real-world phenomena and systems using a CNS starts with a static network topology, where nodes and edges are fixed and are assumed to be "frozen" in time. Such an assumption greatly simplifies the modelling process but fails to capture the evolving features of real-world systems.
Attempts to account for temporal information in the network modelling process have gained attention as they usually improve model performance. For example, social network analysis initially viewed networks as static rather than changing over time [121], [122]. As the field developed, social interactions started being represented using temporal networks to capture the dynamics and instantaneous character of the contacts [115], [123]. In terms of biomolecular networks, studies on protein-protein networks first employed static data-driven networks to represent and analyse protein-protein interactions [124], until dynamic protein-protein networks were found to benefit the study of the molecular systems of protein complexes [125]. Analyses of transportation networks also initially used static networks [126] and then shifted to dynamic management of transportation systems for better model performance [127]. However, the transition from static networks under the most stringent assumptions to temporal networks that result in a smaller loss of information cannot be achieved overnight, while the extent to which the time dimension is involved also differs from case to case.
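One common first step beyond a static representation is to aggregate timestamped contacts into a sequence of snapshot networks, one per time window. The contacts and window length below are illustrative toy values of ours:

```python
# Sketch of aggregating timestamped contacts into evolving-network snapshots:
# interactions are grouped into fixed time windows, each yielding one edge set.
from collections import defaultdict

contacts = [("a", "b", 1), ("b", "c", 2), ("a", "b", 5), ("c", "d", 6), ("a", "d", 9)]
window = 4  # aggregate contacts into windows [0,4), [4,8), [8,12), ...

snapshots = defaultdict(set)
for u, v, t in contacts:
    snapshots[t // window].add(frozenset((u, v)))

for w in sorted(snapshots):
    print(w, sorted(tuple(sorted(e)) for e in snapshots[w]))
```

The window length controls how much temporal information is lost: a single window recovers the static network, while shrinking windows towards single timestamps approaches a fully temporal representation.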
Some studies focus on edge-weighted networks with temporal weights in the analysis of social relations [115] and mobility in wireless networks [114]. Other studies focus on evolving networks where the network topology changes slowly over time, so that its instantaneous snapshot yields a well defined network [9]. Some evolving networks simply represent more stable relations, rather than instantaneous interactions between the nodes, which can be captured with durable edges like citations [128] and friendships [107], [129], while some studies use evolving networks built with snapshots to aggregate the temporal information of interactions within a time window for a more stable representation in the analysis of instantaneous features [25], [26], [51], [129]. There are also studies on temporal networks that have non-trivial topology changes and cannot be represented via instantaneous snapshots. They preserve all the temporal information and build networks in a more faithful way, such as instantaneous contacts of communication via e-mail, text messages or phone calls with temporal edges of networks [130].

c) Spatial complexity: Spatial complexity involves the discussion of spatial networks, which are defined as networks whose nodes are located in a space equipped with a metric, usually the Euclidean distance [131]. For example, spatial networks that represent urban street patterns can be built with a metric of distance measured not just in topological terms (steps), but in properly spatial terms (meters, miles) [132]. Considerable applications of spatial networks involve modelling human activities that take place on a spatial matrix obtained from largely three types of transportation networks: matter (streets, roads, highways, railways, airport networks), energy (the power grid) and information (the Internet, telephone networks) [133], [134].
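The simplest spatial network of this kind can be sketched as a toy geometric graph: nodes carry Euclidean coordinates and an edge exists whenever two nodes lie within a distance threshold, as in random geometric graph models. Coordinates and radius below are illustrative assumptions:

```python
# Minimal sketch of a spatial network: nodes with Euclidean coordinates,
# edges for all pairs within a distance threshold (toy values).
from itertools import combinations
from math import dist

coords = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.0, 1.0), "d": (5.0, 5.0)}
radius = 1.5

edges = [(u, v) for u, v in combinations(coords, 2)
         if dist(coords[u], coords[v]) <= radius]
print(edges)  # [('a', 'b'), ('a', 'c'), ('b', 'c')]: 'd' is spatially isolated
```

The spatial constraint is visible directly in the result: topology alone would not tell us why 'd' has no neighbours, but its coordinates do.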
Researchers have been characterising and understanding the structure and evolution of networks under the impact of spatial constraints [131], where topology alone does not contain all the information needed to understand the dynamics of networks and spatial information is required. Some socio-spatial networks consider the interplay of social networks and locations, such as social interactions under the impact of transportation [135], employment outcomes in a labour market driven by social contacts under an explicit geographic structure [136] and the social properties of Twitter users' networks with the spatial proximity of the networks [137]. Traffic under the constraints of transportation networks also involves studies on commuter flows constrained in large transportation networks [138] and the planning of unobstructed paths in traffic-aware spatial networks [139].

d) Dynamics complexity: Dynamics complexity concerns the dynamics of networks, which enables a deeper understanding of temporal complexity, built on structural and spatial complexity, by investigating the rules of network evolution via simulation or modelling. Temporal and dynamics complexity, which are respectively derived from structural and dynamical observability, represent varying degrees of reality. The relations between observability and the complexity dimensions are shown in Figure 2.
For dynamics built on inner rules that direct network formation and evolution, the dynamics complexity increases as networks are generated with less human involvement based on more realistic rules, ranging from statistical relations [140], [141] to realistic principles like homophily [142], [143]. This dynamics can either be exerted dynamics that controls the network changes towards a desired state of the CNS [144]-[146], or encapsulated dynamics that motivates the network formation and evolution with higher degrees of automation [39], [142], [143]. For example, exerted dynamics used for social network intervention can control each step of network evolution by changing attributes of nodes that are identified based on different man-made strategies [145]. Encapsulated dynamics mainly focuses on edge formation mechanisms, e.g. based on preferential attachment principles [39], [143].

e) The synergy effects of complexity dimensions: The synergy effects of complexity dimensions describe the "1+1>2" effect of combining complexities from different dimensions, which are more complex but closer to reality. Spatial-temporal networks are typical examples of the combined complexity of the temporal and spatial dimensions; they are proposed for a more faithful representation of reality, with the influence of space on constraining the structure of temporal networks considered. Some studies employ temporal networks to capture and process temporal information under consecutive frames, while constructing spatial networks to extract certain static features. Such spatial-temporal networks have been widely used in computer vision, e.g. for facial expression [147], video-based person re-identification [148] and identification of human-object interaction [73]. Other studies introduce temporal information into spatial networks.
Taking a recommendation task as an example, [123] build spatial-temporal networks with temporal edge weights by incorporating the time dimension into user-location graphs and using sessions that capture the co-locations among two or more users during a time window.
It is clear that different complexity dimensions are intertwined. They influence and build on each other, and this is one of the challenges that need to be considered when the modelling of complex networked systems is attempted. Research has been conducted in each of the complexity dimensions, but building an overarching framework that would enable flexible adjustment of the level of complexity of each of the dimensions and the simulation of various what-if scenarios is still an outstanding challenge. With the recent developments in the Digital Twin space, modelling CNS using the DT paradigm is a promising way forward.
2) Data-driven vs simulation-based networks: Researchers have made a lot of effort to faithfully represent the information from real-world systems by developing a variety of modelling approaches. This involves data-driven networks, simulation-based networks and networks that are built by combining these two approaches. These networks feature varying degrees of complexity. Components of the different network types are shown in Figure 3. Data-driven networks built on rich real data sets may not necessarily capture all the information, as they may be confined by relatively simple model's aims under strict assumptions, like ignoring temporal information for temporal networks analysed on a "static" time scale [121]. As researchers explore real data sets with more complex modelling aims and advanced techniques, data-driven networks gradually approach reality as more complexity is allowed to be introduced to the network topology with more relaxed assumptions. For example, the analysis of social networks started from statistical analysis of static networks and their topology [121] and then evolved into more complex modelling aims such as community discovery in networks that are increasingly complex, starting from static [149] to evolving [22], [150], attributed [119] and both evolving and attributed networks [10].
When real data is not available, simulated (synthetic) networks can be generated and used to analyse various phenomena [7]. These simulated networks enable the modelling of network phenomena with different levels of complexity. The statistics-based simulations built on predetermined network statistics involve classical examples like the Barabási-Albert model for the scale-free network [151], the Watts-Strogatz model for the small-world network [152] or the Erdős-Rényi model for the random graph network [140], [141]. The simulation-based approaches increase their complexity and flexibility as more complex rules governing the creation of structure and its dynamics are introduced. The capability of generating networks with distinctive features has enabled the simulation-based approaches to become a universal tool for topological feature analysis. They also help with assessing the impact of network topology on dynamic processes, involving studies such as the synchronisation of IoT systems using networks ranging from scale-free to small-world models [50].
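The statistics-based generation described above can be sketched in a few lines. The snippet below is a simplified generator in the spirit of the Barabási-Albert model (a small fully connected core, then degree-proportional attachment via a "stub" list); the parameters and sampling scheme are our own simplifications, not the exact published algorithm:

```python
# Hedged sketch of preferential-attachment network generation:
# each new node attaches m edges, biased towards high-degree nodes.
import random

def preferential_attachment(n, m, seed=0):
    rng = random.Random(seed)
    adj = {i: set() for i in range(m + 1)}
    for u in range(m + 1):                 # small fully connected core
        for v in range(u + 1, m + 1):
            adj[u].add(v)
            adj[v].add(u)
    stubs = [u for u in adj for _ in adj[u]]  # node repeated once per degree
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:            # degree-proportional sampling
            targets.add(rng.choice(stubs))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            stubs += [new, t]
    return adj

g = preferential_attachment(n=100, m=2)
num_edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(len(g), num_edges)  # 100 nodes, 3 + 2*97 = 197 edges
```

Swapping the attachment rule (uniform choice instead of stub sampling, or rewiring a ring lattice) turns the same skeleton into Erdős-Rényi-like or Watts-Strogatz-like generators, which is precisely what makes statistics-based simulation a flexible tool for topological feature analysis.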
Another type of simulation-based networks, the principle-based simulations, are built according to different connection principles like homophily [142], [143], triadic structure [143] and geographic proximity [146], [153]. These networks typically have higher degrees of temporal and dynamics complexity than statistics-based network simulations, as they self-evolve with a highly autonomous and interpretable edge formation process and generate temporal networked information. However, few studies involve principle-based network simulations with various edge attributes [39] or network simulations embedded in the spatial dimension [154]. They are more complex but closer to reality, which calls for future research on network simulations that combine different levels of complexity from the structural, temporal, spatial and dynamics space.
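A minimal principle-based simulation can be written directly from the homophily principle: same-group node pairs connect with a much higher probability than cross-group pairs. Group sizes and the two probabilities below are illustrative assumptions of ours:

```python
# Sketch of a principle-based (homophily-driven) network simulation:
# same-attribute pairs connect far more often than cross-attribute pairs.
import random
from itertools import combinations

rng = random.Random(42)
group = {i: ("red" if i < 25 else "blue") for i in range(50)}
p_same, p_diff = 0.6, 0.02        # illustrative connection probabilities

edges = [(u, v) for u, v in combinations(group, 2)
         if rng.random() < (p_same if group[u] == group[v] else p_diff)]

same = sum(group[u] == group[v] for u, v in edges)
print(same, len(edges) - same)    # homophilic ties dominate cross-group ties
```

Principles such as triadic closure can be layered on top by boosting the connection probability for pairs that already share a neighbour, which is the kind of cumulative-effect design studied in [143].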
There are also networks built using a combination of real data and simulations. They feature a mix of different levels of complexity either captured from real data or represented via simulation. Such networks consist of real attributed nodes, while connections between the nodes are created using network simulation dynamics. For example, [145] build social networks by extracting the necessary information about nodes' attributes from a real database and simulate the edges using scale-free networks based on a network density from the referenced literature. Some studies simulate missing edges for real networks, where the single imputation methods, including null-tie imputation and reconstruction, summarised by [155], are typical examples. The complexity of networks built using this hybrid approach increases as more data-driven features are captured by having the network simulation dynamics trained to fit observable real network components. For example, as an improvement over single imputation, [155] propose a multiple imputation method that fits an exponential random graph model (ERGM) to the real data and simulates missing ties via inference.
B. How to generate networks using different modelling paradigms
Networks, no matter to what extent real data or simulation is used, are created to faithfully represent information about a given system in preparation for further analyses. Depending on the modelling goal, the networks used for analysis will differ with respect to the four complexity dimensions introduced above and will have varying settings of nodes, edges and attributes. There are several modelling paradigms that are used to obtain the desired networks; those include rule-based, agent-based and event-based approaches that focus on the fundamental generative process from a local perspective of network formation, as well as basic graph-based, probabilistic graph-based and network-embedding based approaches that aim at a condensed network representation from the global perspective. These modelling paradigms, at the local or global level, differ in the ways of observing and processing networks but converge towards a faithful representation of reality that aims at minimising the information loss between reality and the model. Observed local interactions lead to global emerging characteristic behaviours observed and analysed via graphs, while graphs lose less information as the four complexity dimensions are introduced via local-level observations.

1) A local view: Modelling paradigms of networks from a local view focus on the mechanisms ruling network formation. They take local-level observations as a starting point and explore how these construct the characteristic CNS at a global level, corresponding to the discussion of observability considering the structural, temporal, spatial and dynamics complexity. The rule-based, agent-based and event-based approaches at a local level are introduced below, with more information observed or simulated for the four complexity dimensions.
a) Rule-based paradigm: The rule-based paradigm generates networks under explicit dependence laws derived from predetermined assumptions or from rules detected in real-world cases; it not only focuses on network topology, but also investigates the dynamics of network structures via simplified rules of edge formation and evolution.
Rule-based paradigms are controlled by a limited number of parameters according to a rule-based mathematical function, where randomness is introduced via certain probability distributions involved in edge formation or variable changes, like scale-free network generation via a scale-free power-law distribution [156], edge formation with a biased probability dependent on the similarity of node attributes [143], and changes of node properties according to a uniform distribution [39]. For rule-based paradigms, there is inevitable information loss resulting from the divergence between rules and reality, as the rules greatly simplify the partially observable and complex real-world scenarios. Researchers have made much effort to bridge this gap, involving studies on rule-based dynamics transforming from simulated [140], [141], [151], [152] to trained to better fit real networks [143], as well as rules evolving from statistics-based [155], [157]-[160] to principle-based [143], [161]. These include temporal complexity by incorporating temporal changes of the network topology into the rule-based network generation process and, sometimes, to better characterise network generation, introduce the impact of node attributes with increasing structural complexity.
The rule-based paradigm started from simulated dynamics of networks based either on statistical rules [140], [141], [151], [152] or principles [142], [143], [162] (as mentioned in section III-A2), where they can be tuned to approach reality by seeking optimal parameters for the rule-based mathematical functions to fit real data and make inference [143]. For example, the ERGM involved in the multiple imputation of networks can fit real data and simulate missing ties via inference [155], where the involved dynamics represent data-driven features to some extent, but neither preserve rich information about node/edge attributes nor explore topological information other than the typical interactive patterns of ERGM rules.
Statistics-based network generators are able to fit real networks via statistical inference based on an explicit likelihood function, like the scale-free model [157], ERGM [155], [158] and the geometric branching growth model [160]. As statistics-based dynamics on typical interactive patterns can hardly represent diverse real-world networks, principle-based network generators have been studied with their flexible and adaptable design of principles, which fit the statistics of real networks via likelihood-free inference such as approximate Bayesian computation [143]. For example, scale-free structure is empirically rare in social networks [157], while a principle-based simulator built on the cumulative effects of triadic closure and homophily is able to reveal social network dynamics [143].

b) Event-based paradigm: The event-based paradigm refers to network representations with two elementary components: nodes and their local pairwise interactions referred to as events [163], or nodes of events and their logical relationships [164]-[166]. This involves event-based network representation and analysis [165]-[167] as well as stochastic point processes for events that can perform network inferential tasks [168].
The event-based paradigm started from a simple network representation that captures richer heterogeneous information about interactions related to events with increasing structural complexity. As a typical example, event-based social networks (EBSNs) can further capture and use information about offline social interactions, in addition to the online social relationships included in conventional online social networks [167]. The enriched information observed about events assists further analysis and modelling tasks. For example, the consideration of both online and offline interactions in EBSNs provides adequate information for global-level analysis and modelling via graph-based models [166] and improves the prediction power for event recommendation tasks [167].
Stochastic point processes then make it possible to perform inference about nodes [168] or edges (events) [169] by estimating dependencies between events, or between events and latent space models, with increasing dynamics complexity. Taking the Hawkes process as an example, [168] employ a self-exciting point process on the edges to perform an identity-inference task, considering the effect of available observations of geographically distributed interactions (edges). [169] apply a mutually exciting point process on the edges that includes the effect of node-specific latent vectors to assess the significance of previously unobserved connections for anomaly detection tasks.
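The self-excitation idea behind such models can be illustrated by simulating a univariate Hawkes process with intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) via Ogata-style thinning; every accepted event temporarily raises the intensity and so makes further events more likely. All parameter values below are illustrative:

```python
# Hedged sketch of simulating a univariate self-exciting (Hawkes) point
# process by thinning: propose candidate times from an upper-bound intensity,
# then accept with probability lambda(t) / lambda_bar.
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=1):
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # Between events the intensity only decays, so the current value
        # is a valid upper bound for the next candidate.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate inter-arrival time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # thinning acceptance step
            events.append(t)
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
print(len(events), all(a < b for a, b in zip(events, events[1:])))
```

In edge-level applications like [168], [169], one such process sits on each edge (or node pair) and the inference task runs in the opposite direction: given observed event times, estimate mu, alpha and beta rather than simulate from them.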
c) Agent-based paradigm: The agent-based paradigm applied in the context of complex networked systems refers to interactive structures that are typically composed of three basic components: agents, interaction rules and space (this could be geo-space or some other abstract space) [170]. It enables more degrees of freedom in building networks from rich data or simulations, as one can select more detailed information about interactions from the microscopic perspective of agents.
Agent-based paradigms vary in degrees of information loss with different dimensions and degrees of complexity required for various research goals. The interactive structure can either be static with neighbour sets determined once and for all, or dynamic with evolution along time depending on model assumptions [171], while the interaction rules may either be simulation-based under certain constraints of space [63], [172], [173] or data-driven based on realistic scenarios [174]- [176].
The information about interactions of agents is initially represented in a fixed network topology, where decisions of agents are affected by the network structures over which they interact. Such networks serve as an environment and a constraint for agents' behaviours [171]. Most static networks involved in research on agent-based models are simulation-based, focusing on the network effect on interactions in agent-based systems, such as trading behaviours in double auctions [63], [177] and tax compliance and evasion [173]. Recently, these agent-based networks have approached reality by introducing real data with features of time and space. [174] proposed a data-driven agent-based model for forecasting emerging infectious diseases, where data-driven networks are built with spatial information and the social contacts of a realistic synthetic population. [175] also proposed an agent-based computational model under a data-driven decision-making framework for supply chain networks, given their complicated micro structures, macro emergencies and dynamic evolution.
Multi-agent systems are also important simulation tools for modelling evolving networked systems with multiple agents interacting under certain constraints. [51] proposed a multi-agent system that replays the evolution of a network and reproduces the rise and fall of communities, with the strength of adapting to real-time, changing problems. [178] study the constrained consensus and optimisation of multi-agent networks, where multiple agents align their estimates with a particular value over a network with time-varying connectivity in different local constraint sets.
To better capture the structural patterns and instantaneous dynamics of networks, event-driven models (also known as activity-driven models) have been proposed with an activity potential, a time-invariant function characterising the agents' interactions and encoding the instantaneous-time description of the network dynamics [179]. These paradigms include the rich information observed or simulated from microscopic views of agents and model their connections activated by an event trigger, involving observations varying from binary interactions [179] to simplicial complexes [180], as well as applications ranging from event-based consensus [181] to contagion problems [180], [182].
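A minimal version of the activity-driven mechanism of [179] can be sketched as follows; the activity distribution, parameter values and function name are illustrative assumptions, not those of the original paper.

```python
import random

def activity_driven(n, m, eta, dt, steps, seed=None):
    """Activity-driven temporal network sketch: in each time step every
    node i fires with probability a_i * eta * dt and links to m randomly
    chosen others; all edges are discarded at the end of the step."""
    rng = random.Random(seed)
    # heterogeneous, time-invariant activity potentials (illustrative choice)
    activity = [rng.uniform(0.0, 1.0) ** 3 for _ in range(n)]
    others = list(range(n))
    snapshots = []
    for _ in range(steps):
        edges = set()
        for i in range(n):
            if rng.random() < min(1.0, activity[i] * eta * dt):
                for j in rng.sample(others[:i] + others[i + 1:], m):
                    edges.add((min(i, j), max(i, j)))
        snapshots.append(edges)
    return snapshots

snaps = activity_driven(n=100, m=2, eta=10.0, dt=0.1, steps=50, seed=3)
```

The output is a sequence of instantaneous edge sets, i.e. the snapshot representation of a temporal network whose structure is entirely driven by the agents' activity potentials.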
2) A Global view: Modelling paradigms of networks from a global view aim at representing high-dimensional and heterogeneous networked information in a way that can be easily analysed. As local methods capture more information about interactions, more information can be observed at the global level, which raises the question of how global performance can be changed by controlling and modifying those interactions, corresponding to the research aim of controllability. The basic graph-based, probabilistic graph-based and network embedding-based methods at a global level are introduced below in order of increasing complexity, as they are able to preserve progressively more observable information in network representation and modelling. a) Basic graph-based paradigm: The basic graph-based paradigm is grounded in graph theory and can be seen as a set of selection principles for microscopic laws of behaviour in network science [183]. It typically involves a simplified graph representation and analysis of networks concerned only with nodes and their connections, e.g. [184], [185].
Graph theory began when, in 1735, Leonhard Euler presented the first mathematical demonstration based on the geometry of position to solve the Seven Bridges of Königsberg puzzle [186], [187]. Graph theory focuses on providing rigorous proofs for graph properties, such as graph enumeration, coloring and covering [183], [188], while the study of random graphs motivated graph theory to spawn a new branch, network science, devoted to a separate direction: quantifying the structure and dynamics of real-world complex systems [183].
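Euler's argument can be reproduced directly: a connected multigraph admits an Eulerian path if and only if it has zero or two odd-degree vertices, a condition the Königsberg bridge graph fails. The encoding of land masses and bridges below is the standard one.

```python
from collections import Counter

def has_eulerian_path(edges):
    """Euler's criterion: a connected undirected multigraph has a path
    traversing every edge exactly once iff at most two vertices have
    odd degree (connectivity is assumed here, not checked)."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# Königsberg: land masses A-D joined by the seven bridges
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
```

All four land masses have odd degree (5, 3, 3, 3), so `has_eulerian_path(konigsberg)` is `False`, exactly Euler's 1735 conclusion.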
The graph-based paradigm simply represents networks as basic graphs composed of nodes and connections, enabling readily available graph analysis but at the cost of information loss to varying degrees, especially in terms of structural complexity [184], [185], [189]. For example, from an accumulation of experimental data on biomolecules, graph-based models for cell biology build the graph only with cellular components (nodes) and their interactions (edges), which allows for network topology analyses using graph-theoretical concepts but loses information other than the graph structure [184].
Graphs are one of the most widely studied data structures in computer science and discrete mathematics [190], and graph-based models are also widely applied in the modelling of CNS across disciplines, such as analyses of graphic characteristics for networks in cell biology [184], anomaly detection in computer networks using protocol graphs [189], and graph representation of vulnerability relations for industrial IoT networks [185].
b) Probabilistic graph-based paradigm: The probabilistic graph-based paradigm models networks with uncertainties on the relationships between nodes [191]. It has two elementary components: a graph that defines the network structure and a set of local functions whose product is the joint probability of this compact representation [192]. The network representation with the probabilistic graph-based paradigm is typically flexible in the directed/undirected and static/dynamic dimensions, each corresponding to varying degrees of structural and temporal complexity. The exploration of the local functions for dependence rules or cause-effect relationships enables the inference and learning of real networks in sophisticated models [193].
The probabilistic graphs can either be undirected with symmetric relations, like conditional random fields [194] and Markov networks [195], [196], or directed with cause-effect relationships between the nodes, such as sigmoid belief networks [197], Bayesian networks [198] and hidden Markov models [199]. The choice of directed or undirected graphs depends on the application scenario. For example, Markov networks, as a typical undirected graph, represent relational dependencies without the hindrance of the acyclicity constraint and are thus well suited for discriminative training [195]. They have been widely used in argumentation tasks like finding labellings, or in probabilistic inference tasks by deciding credulous and sceptical acceptance [200]. Bayesian networks, in contrast, as directed acyclic graphs with explicit cause-effect assumptions for interactions [198], can handle problems including missing data, prediction and data over-fitting [201]. Some researchers also employ Bayesian networks in DTs due to efficient linear computation [97], [202], [203], while nonparametric Bayesian networks are more flexible in capturing time-varying features with computational efficiency [12], [204].
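As a concrete illustration of directed probabilistic graphs, exact inference in a small Bayesian network can be performed by enumerating the hidden variables. The network and CPT values below are the classic textbook "sprinkler" example, not taken from the cited works.

```python
from itertools import product

# Classic sprinkler network: Cloudy -> Sprinkler, Cloudy -> Rain,
# (Sprinkler, Rain) -> WetGrass, with the standard textbook CPT values.
P_c = {1: 0.5, 0: 0.5}                                       # P(C)
P_s = {1: {1: 0.1, 0: 0.9}, 0: {1: 0.5, 0: 0.5}}             # P(S=s | C=c)
P_r = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.2, 0: 0.8}}             # P(R=r | C=c)
P_w = {(1, 1): 0.99, (1, 0): 0.9, (0, 1): 0.9, (0, 0): 0.0}  # P(W=1 | S, R)

def joint(c, s, r, w):
    """Joint probability factorises along the directed edges of the DAG."""
    pw = P_w[(s, r)] if w == 1 else 1.0 - P_w[(s, r)]
    return P_c[c] * P_s[c][s] * P_r[c][r] * pw

def posterior_rain_given_wet():
    """Exact inference by enumerating all hidden states."""
    num = sum(joint(c, s, 1, 1) for c, s in product((0, 1), repeat=2))
    den = sum(joint(c, s, r, 1) for c, s, r in product((0, 1), repeat=3))
    return num / den

p = posterior_rain_given_wet()  # P(Rain=1 | WetGrass=1) ≈ 0.708
```

Enumeration is exponential in the number of hidden variables, which is precisely why the approximation algorithms discussed below (Gibbs sampling, belief propagation) are needed at scale.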
Probabilistic graph-based paradigm can either be static or dynamic based on whether the probabilistic graphs represent variables at a time point or across different times [192], such as the Bayesian networks [198] versus the dynamic Bayesian networks [202]. They are transforming from static to dynamic with the trend of modelling networks with more temporal complexity. For example, continuous time Bayesian networks [205] and continuous time Markov networks [206] are proposed to capture changeable variables given the evolving features of structured stochastic processes and the graphical structure of Markov networks and Bayesian networks.
Probabilistic graph-based paradigms are able to conduct inference, which helps answer different probabilistic queries based on the model and some evidence, as well as learning, which estimates the graph structure and the parameters of the paradigm's local functions [192]. As exact inference is often intractable, researchers tend to use approximation algorithms to find distributions that are close to the correct posterior distribution [207], like Gibbs sampling [208] and belief propagation [209]. Maximum likelihood estimation methods and the expectation-maximization (EM) algorithm [210] are employed for learning problems without or with hidden variables, respectively. However, probabilistic graph-based paradigms have difficulty representing, inferring and learning high-dimensional, heterogeneous networks, which calls for the combined application of network embedding methods. c) Network embedding-based paradigm: The network embedding-based paradigm aims at network construction and network inference via embedding node information into a low-dimensional space [211], and is characterised by the concatenation of an encoder and a decoder [9], [212], [213].
Network embedding starts from dimensionality reduction techniques that are also applicable in scenarios other than networks, including stochastic multidimensional scaling [214], isometric mapping (ISOMAP) [215], principal component analysis (PCA) [216], linear discriminant analysis (LDA) [217], stochastic neighbor embedding (SNE) [218] and t-distributed stochastic neighbor embedding (t-SNE) [219]. There are also basic models that focus purely on network embedding tasks, which, as summarised by [220], are respectively built upon skip-gram models [221] and matrix factorization models [222]. These models are able to encode network information into a low-dimensional space, but do not focus on decoding that information to reconstruct networks or perform inferential tasks.
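The flavour of these linear techniques can be shown with PCA via SVD: a feature matrix of low intrinsic rank (e.g. node attributes) is projected onto its leading principal components with almost no information loss. This is a generic numpy sketch on synthetic data, with invented names and sizes.

```python
import numpy as np

def pca_embed(X, k):
    """Project the rows of X onto the top-k principal components
    (directions of maximal variance), obtained from the SVD of the
    mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# synthetic "node attribute" matrix: 50 nodes, 10 features of intrinsic rank 2
base = rng.normal(size=(50, 2))
X = base @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(50, 10))
Z = pca_embed(X, 2)  # 50 nodes embedded into a 2-dimensional space
```

The embedded coordinates are uncorrelated and ordered by explained variance, which is exactly the encoding half of the encoder-decoder view; PCA's decoder is simply the transpose of the projection.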
To preserve more information in the modelling process and perform inference, deep learning (DL) techniques have been widely utilised to embed highly diverse, heterogeneous, high-dimensional network information into a low-dimensional latent space. DL-based network embedding, also referred to as network representation learning, is able to make inferences and assist network analytic tasks including node classification, link prediction, clustering, recommendation, similarity search and visualization, involving unsupervised and semi-supervised learning methods [223]. As categorised by [224], DL-based graph embedding methods are either random-walk based, like Skip-Gram based deep learning models [225]- [227], or dispense with random walks and directly apply DL methods to a whole graph or its proximity matrix via autoencoders [228], deep neural networks [229] and graph convolutional networks [230].
Graph Neural Networks (GNNs) are a widely used DL-based embedding method that encodes graph structures via a neural network architecture and is able to decrease information loss by aggregating the features of neighbouring nodes together [9]. This motivates the combination of GNNs with other models, typically composed of an encoder, a generative model and a decoder. For example, graph Bayesian networks [231] and graph Markov neural networks [232], [233] can be employed to infer graph parameters (statistical relational learning), with graph convolutional neural networks as the encoder/decoder and the involved probabilistic graphical model as the generative model. A GNN combined with an ordinary differential equation (ODE) is able to infer the latent states of irregular observations (encoding), learn the state transition in latent space via the ODE as the generative model, and make predictions continuously (decoding) [234].
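The neighbourhood-aggregation step that characterises GNN encoders can be written compactly. The sketch below implements a single Kipf-Welling style graph-convolution layer with random weights on a toy path graph; all names, sizes and values are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolutional layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Adding self-loops (A+I) keeps each node's own features; symmetric
    normalisation averages over neighbourhoods before the linear map."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# toy 4-node path graph with 3-d node features embedded into 2 dimensions
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))   # input node features
W = rng.normal(size=(3, 2))   # learnable weights (here random)
Z = gcn_layer(A, H, W)        # per-node embeddings
```

Stacking k such layers lets information from k-hop neighbourhoods flow into each node's embedding, which is the mechanism the combined encoder/generative-model/decoder architectures build upon.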
3) An Overall view: Three trends can be discovered if we look through the modelling paradigms of generating networks from either local or global view: (i) increasing complexity of network topology and dynamics, (ii) decreasing interpretability with the increase of complexity, (iii) models' aims transforming from abstract to specific. These trends are shown in Figure 4: Complexity of networks increases when they are represented and generated more faithfully with less information loss.
Models at a local level focus on the network formation process. Rule-based paradigms are usually statistics-based and generate static networks that are just composed of fixed nodes and edges [156]. They have recently turned principle-based, allowing for the introduction of node attributes with increasing structural complexity and edge addition over time with increasing temporal complexity [39], [143]. Given the above complexity dimensions, rule-based paradigms generate networks under the simplest rules of edge addition with the lowest level of dynamics complexity. To incorporate more information about attributes and temporal changes of network topology, including edge addition and removal, event-based paradigms based on more complex rules of local pairwise interactions, such as stochastic point processes, are employed with increasing dynamics complexity [168]. Agent-based paradigms introduce even more complex interaction rules to account for more detailed information about agents (nodes), including their attributes and the various actions that result in edge addition or removal over time [51].
Models at a global level aim at a condensed representation of high-dimensional and heterogeneous networked information. Basic graph-based paradigms focus on a simplified graph representation of networks concerned only with nodes and edges [184]. Probabilistic graph-based paradigms further incorporate information about network components with increasing structural and temporal complexity via the introduction of node attributes, edge directions or the addition of edges over time [31], [192]. These models, with increasing dynamics complexity, also enable the inference and learning of real networks via modelling the uncertainties on the relationships between nodes [191]. Network embedding-based paradigms can conduct network construction and network inference via embedding highly diverse, heterogeneous, high-dimensional node information into a low-dimensional space, which is characterised by the highest level of complexity in terms of network representation and dynamics.
Interpretability represents the ability to explain or to present in understandable terms to a human [235]. The interpretability of networks is about the understanding of network representation and the corresponding network dynamics. It decreases as the network complexity increases with more information represented and modelled. From a local view, more complex rules of network formation enable smaller information loss but may result in less interpretable global emerging characteristic behaviours observed in networks. For example, compared with rule-based paradigms, agent-based paradigms may generate less interpretable networks as they incorporate the impact of various observables from microscopic perspective of agents [51]. From a global view, more complex models can embed more complex networked information but are characterised by less interpretable embedding process and inference results, such as the network embedding paradigms which are less interpretable in encoding or decoding but can represent highly diverse and heterogeneous networks.
Measurability represents the ability to measure a characteristic of a class of objects [38]. Measurability of networks generated for abstract model's aims is about measuring the similarity between the inner rules of a networked system and those of real networks, while the measurability of networks under specific model's aims focuses on the measurable output for external tasks. Networks of higher interpretability but lower complexity, like the non-attributed static networks generated by rule-based or graph-based paradigms, can be easily analysed, measured and compared in terms of network components and dynamics. These networks are usually involved in research with abstract model's aims, including topological feature analysis [151], [156], [236], [237] and mimics of reality [158], [186]. Networks of high complexity but low interpretability, like attributed temporal networks, are typically generated by event-based, agent-based, probabilistic graph-based or network embedding-based paradigms. They incorporate rich information, especially in the structural and temporal dimensions, while preserving enough dynamics complexity for the fulfilment of specific model's aims including link prediction, node classification, community discovery, anomaly detection, synchronization and controllability.
IV. MODELLING DYNAMICS IN COMPLEX NETWORKED SYSTEMS
In this section, we explore the answers to the question "how?" about the modelling of dynamics in complex networked systems. This involves a discussion of two key concepts of complex networked systems: (i) dynamics of networks and (ii) dynamics over networks. Dynamics of networks considers the network's generation process and the network's changes over time (evolution), while dynamics over networks refers to the dynamic processes that occur on networks over time. In addition to the process and network complexity, we also emphasise the complexity resulting from various interrelations between dynamic processes and network dynamics, where unilateral or mutual influences from one-way or two-way interactions are reviewed and discussed. Given complex networks (obtained via methods from section III) and observable information on dynamic processes, their dynamics and interrelations should be modelled in a way that models' aims (see section II-B) are fulfilled.
A. An overview of dynamics and their interrelations Taking the networks obtained via the modelling paradigms introduced in section III as a starting point, the modelling of dynamics within the CNS, at the appropriate complexity level as per the complexity dimensions introduced in section III-A, enables a deeper understanding of changeable network structures and the processes over them, and contributes to achieving certain models' aims under the constraints of observability.
The complexity of modelling CNS dynamics comes from three primary elements: the modelling of networks, processes over networks, as well as the interrelation between the two. To have an overview of various CNS resulting from those three elements, we introduce a 4-generation modelling framework (see Figure 5) to navigate a pathway through different levels of complexity of CNS.
As is shown in Figure 5a, in the first generation (generation 1), the research concentrates on static networks, where spreading processes are introduced without changing their parameters. The static networks with fixed nodes and edges just provide necessary spatial or structural information rather than a one-way influence that might trigger the parameter changes (evolution) of dynamic processes. Some studies employ spatial networks that utilize a metric to embed spatial information [131], [133], where the dynamic processes are modelled under the constraints of space, like human activity on transportation networks [133]. The majority of research on dynamic processes over complex networks can be classified as the first generation of models [7].
Assume a virus a is spreading in a society where all relevant information is observable. In the generation 1, we can simplify this scenario into an epidemic spreading process of virus a with a fixed parameter (e.g. infection rate) on a static non-attributed network built only with fixed nodes (e.g. people) and edges (e.g. social contact). This simplified CNS increases complexity in the structural and spatial dimensions once we incorporate more information about node attributes (e.g. age, gender, location, etc.) and edge attributes (e.g. direction, weights, distance, etc.).
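A generation-1 model of this scenario is easy to write down: a discrete-time SIR process with fixed parameters on a static contact network. The network, rates and seeding below are illustrative assumptions, not calibrated values.

```python
import random

def sir_on_network(adj, beta, gamma, patient_zero, steps, seed=None):
    """Generation-1 sketch: discrete-time SIR epidemic with a fixed
    infection rate beta and recovery rate gamma on a static network."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    state[patient_zero] = "I"
    history = []  # running count of ever-infected (non-susceptible) nodes
    for _ in range(steps):
        new_state = dict(state)
        for v, s in state.items():
            if s == "I":
                for u in adj[v]:
                    if state[u] == "S" and rng.random() < beta:
                        new_state[u] = "I"
                if rng.random() < gamma:
                    new_state[v] = "R"
        state = new_state
        history.append(sum(1 for s in state.values() if s != "S"))
    return state, history

# illustrative contact network: a ring of 30 people plus three shortcuts
adj = {i: {(i - 1) % 30, (i + 1) % 30} for i in range(30)}
for a, b in [(0, 15), (5, 22), (9, 27)]:
    adj[a].add(b)
    adj[b].add(a)
state, history = sir_on_network(adj, beta=0.4, gamma=0.1,
                                patient_zero=0, steps=60, seed=11)
```

Note that neither beta nor the adjacency structure ever changes during the run: the network only supplies the contact structure, which is precisely what distinguishes generation 1 from the later generations.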
With the temporal complexity introduced in CNS in the second generation (generation 2), parameter changes of dynamic processes or network topology changes over time can be observed via snapshots in discrete time steps. In the generation 2a, CNS represents the interactions via evolving dynamic processes (captured in snapshots) and static networks (see Figure 5b), where the spreading process on static networks can evolve through parameter changes [238]. In the generation 2b, CNS represents the interactions directly via evolving networks (captured in network snapshots) and dynamic process with no parameter changes (see Figure 5c). In this context, researchers aim to transform the snapshots of CNS into latent states and model their transition process discretely, involving the networks that switch arbitrarily between different adjacency matrices according to stochastic mechanisms like positive linear switched systems [239] and Markov switching rules [240], [241].
Taking the above-mentioned spreading of virus a as an example, in the generation 2a, a parameter (e.g. infection rate) of virus a can change arbitrarily or according to an external factor outside of the CNS (e.g. temperature, etc.), while in the generation 2b, edge addition (e.g. social contact over time) or edge removal (e.g. enabled by implemented policies of social distancing) can naturally lead to observations of evolving networks.
The modelling framework steps into the third generation (generation 3) as the dynamic process starts to co-evolve with the evolving networks (see Figure 5d). The involved evolving dynamic processes are captured via evolving snapshots with discrete parameter changes, where the changes of parameters and the transition of the latent states of evolving networks can either be independent or interrelated. The interrelations, including unilateral and mutual influences, enable the modelling of co-evolution of dynamics and greatly increase the complexity of modelling CNS. For example, [242] model the co-evolution process of dynamic social networks and the opinion migration on networks via introducing mutual influence between these two dynamics, where opinions migrate based on the social structure and social networks evolve considering the similarity of opinions. Generation 3 easily collapses into the generation 2a when the evolving networks have only one snapshot and no interrelation is considered, and into the generation 2b when the dynamic process shows no parameter changes.
To have a better illustration of the interrelation in the generation 3, we also start with the simplest scenario where the virus a spreads on a static social network (a single snapshot of evolving networks). In the generation 3, we can allow the network topology to change over time in response to the spreading of virus a (e.g. people die of virus a and are removed from the social networks). These evolving networks are interrelated with dynamic processes and are characterised with increasing temporal complexity. The parameter (e.g. infection rate) of virus a can also change according to the network topology (e.g. number of social contacts, etc.) or node attributes (e.g. age, gender, etc.). In this way, the complexity of the CNS also increases with an introduction of the evolving dynamic process and interrelations. The CNS will become even more complex if we further incorporate a vaccination b in the modelling framework of the generation 3, where the spreading of the vaccination b can affect the node attribute (e.g. vaccinated or not) and directly change the parameter (e.g. infection rate) of the virus a.
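A toy generation-3 model can couple the two dynamics in both directions, as in the example above: the infection probability depends on the evolving topology (a node's current degree), and the topology evolves in response to the process (deceased nodes are removed along with their edges). All parameters and the coupling rule below are illustrative assumptions.

```python
import random

def coevolve(adj, beta0, mortality, steps, seed=None):
    """Generation-3 sketch of interrelated dynamics: the per-contact
    infection probability scales with a node's current degree, while
    infected nodes may die, removing themselves and their edges."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    state[0] = "I"
    for _ in range(steps):
        for v in [u for u in adj if state[u] == "I"]:
            for u in list(adj[v]):
                # network -> process: the rate grows with u's connectivity
                if state[u] == "S" and rng.random() < min(1.0, beta0 * len(adj[u]) / 4):
                    state[u] = "I"
            if rng.random() < mortality:
                # process -> network: death changes the topology
                for u in adj.pop(v):
                    adj[u].discard(v)
                del state[v]
    return adj, state

# illustrative contact network: a ring where each node knows 4 neighbours
adj = {i: {(i - 1) % 40, (i + 1) % 40, (i - 2) % 40, (i + 2) % 40}
       for i in range(40)}
adj, state = coevolve(adj, beta0=0.3, mortality=0.05, steps=40, seed=5)
```

The two update rules are the interrelations of generation 3 in miniature: deleting either one collapses the model back to generation 2b or 2a respectively.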
The modelling framework then goes through a fundamental change in the fourth generation (generation 4), as the time gaps between the CNS snapshots are narrowed, in the limit, to zero and the co-evolving dynamics are represented and modelled continuously with the introduction of real-time data. As is shown in Figure 5e, based on both historical data from a data storage and the real-time data, the temporal networks capture and model all the instantaneous interactions, while the temporal dynamic processes are modelled with continuous parameter changes. The complexity of the CNS increases as more complex interrelations between continuous dynamics are introduced. There are studies on modelling continuous changes of networks via introducing ordinary differential equations (ODE) to GNN methods [234], but none of them involve the most complex case of modelling continuous co-evolving dynamics of interrelated dynamic processes and dynamic networks. CNS in the generation 4 can approach the ultimate goal of DTs when the model output of the CNS is additionally fed back to and can influence reality in a real-time manner as a reference for practice. We refer to this scenario as the fifth generation of the modelling framework (generation 5), where a closed feedback loop of real-time monitoring, simulation, forecasting and deriving solutions for reality is formed and enables the CNS to approach DTs as an extension of reality.
To be more specific and continuing with our illustrative example, in the generation 4, the constant event streams about the spreading of the virus a and the temporal networks can be observed and modelled simultaneously. The instantaneous social contact and the virus a spreading can be captured instantly. CNS in this context can react to sudden changes observed in reality and evolve in real time. Any changes to the temperature, recorded all the time, can trigger continuous parameter (e.g. infection rate) changes of the spreading virus a. The vaccination b can also be introduced at any time and trigger network attribute (e.g. vaccinated or not) change right away. In generation 5, the simulation result of CNS about the spread of the virus a can be fed back to the policy makers and trigger, for instance, a launch of a "promote vaccination b" campaign, where the spread of vaccination b will be simultaneously monitored and modelled by CNS with a real time output about social networks fed back to the policy makers.
The above mentioned modelling framework shows with examples of real scenarios that CNSs can be represented and modelled with increasing complexity through generations and finally reach the goal of a DT in generation 5. To be more aware of the progress of studies on CNSs under this modelling framework, we need to review and discuss what kind of model's aims can be achieved by modelling the dynamics in networked systems, and how these dynamics are modelled in network dimension, process dimension and both of these dimensions.
B. Modelling Dynamics of Networks
Dynamics of networks can result in networks with different structural characteristics and their changes over time. Their modelling fulfils models' aims via analysing and learning these resulting patterns and changes of nodes, edges and attributes. There are overlapping areas between modelling dynamics of networks (in a way that models' aims can be fulfilled) and modelling paradigms of network generation (in a way that minimises information loss), as a good model that approaches real network dynamics can naturally accomplish these two tasks simultaneously. Studies on modelling dynamics of networks start from patterns of non-attributed static networks and then turn to dynamic networks that consist of nodes and edges that change over time. Those networks include temporal information about characteristics and behaviours of their components. Time dimension is introduced to break the assumption of static networks with fixed nodes and edges [9], [10], [52].
Modelling of the network dynamics, considering the patterns and changes of nodes, edges or attributes from a structural view, can be categorised into models that allow for: (i) no changes, (ii) topology changes, (iii) attribute changes, (iv) both attribute and topology changes and (v) structural pattern changes. They differ in whether network structures change and how they change over time, each coping to varying degrees with models' aims including the prediction and classification of network components, as well as the pattern discovery of network structures. As networks with only attribute changes are only involved in studies of dynamic processes on networks, we discuss models of network dynamics for categories (i), (ii), (iv) and (v) only, just from the perspective of the network dimension.
1) No changes: The modelling of network dynamics starts from static networks without any changes of structural components over time and focuses on the prediction of unobservable network components.
Networks built only with fixed nodes and edges are of the least complexity within this no change category, which involves missing link prediction fulfilled with approaches that are applicable for static networks. Some techniques take a link prediction of these networks as an unsupervised ranking problem based on a score for each non-observed link, which can be calculated either with a structural similarity index or probabilistic and statistical functions [31]. Structural similarity-based methods assume that nodes tend to form links with other similar nodes, which solely consider the local or global topological information of networks [30]. Probabilistic and statistical function-based models abstract the network structure and then predict probabilities of the missing links using the learned model [31], [243], where rule-based modelling paradigms in section III-B1 and probabilistic graph-based paradigms mentioned in section III-B2 can be employed to fit network topology for a probability score of each link. There are also techniques that take link prediction as a supervised binary classification problem about whether each pair of nodes are connected or not, where machine learning models like logistic regression and decision tree can be applied to account for the effects of different topological similarity metrics [244], [245]. Another universally applied supervised approach is the network embedding modelling paradigm mentioned in section III-B2, where the low-dimensional latent space representation of nodes is learned and their connections can be inferred via dependencies of latent space.
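The unsupervised, similarity-index flavour of link prediction can be illustrated with the Jaccard coefficient of neighbourhoods, one of the standard local structural indices; the toy graph below is our own.

```python
def jaccard_scores(adj):
    """Rank non-observed links by the Jaccard similarity of the two
    endpoints' neighbourhoods, a purely local structural index."""
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # link already observed
            union = adj[u] | adj[v]
            if union:
                scores[(u, v)] = len(adj[u] & adj[v]) / len(union)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# toy path graph 0-1-2-3-4-5: pairs sharing a neighbour rank highest
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
ranked = jaccard_scores(adj)
```

Here the top-ranked candidate links are (0, 2) and (3, 5), each with score 0.5, since each pair shares a common neighbour; an actual predictor would return the top-k of this ranking as the missing links.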
Networks increase in structural complexity when attributes are gradually introduced for nodes and edges. Some approaches use node attributes to assist the prediction of links [37], where the basic approaches for link prediction introduced above have been altered to incorporate node features. For unsupervised ranking algorithms, as summarised by [244], the similarity score between two attributed nodes can be calculated to assist link prediction using methods including vertex feature aggregation [246], kernel feature conjunction [247], extended graph formulation [248] and the generic SimRank method [249]. In addition, probabilistic and statistical function-based models can also incorporate attributes to model the probability of links between each pair of nodes [244]. Supervised models that take link prediction as a binary classification problem can also be improved based on the above-mentioned information about attributed nodes. For example, [248] introduces node attributes via vertex feature aggregation to machine learning algorithms like a decision tree or SVM in link prediction tasks. There are also improved network embedding methods, extensively reviewed by [211], for the link prediction task considering the effect of both network topology and node attributes. An example of such an approach is the deep attributed network embedding method designed by [250] using a deep neural network based on topology proximity and attribute proximity.
Some studies additionally introduce attributes to the edges and transform the link prediction into a classification task for multi-relational networks. Once the transformation is completed, this problem can be approached by the above mentioned probabilistic and statistical function-based approaches and supervised learning approaches considering the similarity of nodes and dependencies of inner principles. For example, [251] uses relational Markov network to investigate the probability of link labels given the known node attributes. [252] performs link prediction in multi-relational networks using a non-negative matrix factorization algorithm based on relational similarity.
Other methods introduce attributes to the networks and focus on node classification tasks, which employ models that consider the effect of node attributes [250], [253], [254] and edge attributes such as weights [250], [255]-[257]. Given the sparsity of graphs with fully labelled nodes and the time-consuming nature of manual labelling, most studies employ partially labelled graphs and train a classifier to predict the labels of unlabelled nodes. Referring to the models for node classification, well summarised by [258], we further categorise these modelling approaches into three types: (i) unsupervised learning approaches including probabilistic and statistical relational learning [255], metric modelling [256], spectral partitioning [257] and graph clustering [259]; (ii) supervised learning approaches that build classifiers, such as a logistic regression model, given the features of labelled nodes [253]; and (iii) semi-supervised learning approaches that use the labels of a few nodes, where network embedding methods such as random walk-based [260] and GCN-based network embedding [250], [261] are employed to account for both topological and attribute proximity effects [250].
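A minimal semi-supervised baseline for the partially labelled setting is iterative label propagation: each unlabelled node repeatedly adopts the majority label among its labelled neighbours. This sketch is a simplification of the random walk-style methods cited above; the fixed iteration cap and deterministic tie-breaking are our own assumptions.

```python
def label_propagation(adj, labels, n_iter=20):
    # adj: node -> set of neighbours; labels: seed labels for a few nodes.
    current = dict(labels)
    for _ in range(n_iter):
        updated = dict(current)
        for node, nbrs in adj.items():
            if node in labels:          # seed labels stay fixed
                continue
            votes = {}
            for n in nbrs:
                if n in current:
                    votes[current[n]] = votes.get(current[n], 0) + 1
            if votes:
                # sorted() makes tie-breaking deterministic
                updated[node] = max(sorted(votes), key=votes.get)
        if updated == current:          # converged
            break
        current = updated
    return current
```

On a graph of two dense groups with one seed label each, the propagation assigns each group the label of its seed.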
2) Topology changes: The modelling of network dynamics becomes more complex as network components start to change over time. Topology changes include edge addition and removal as well as node addition and removal. As network formation and evolution can be captured in a time series of static snapshots, the modelling approaches of link prediction for static networks mentioned in section IV-B1 can be extended and improved to learn one or several types of topology changes.
Studies on network topology changes start from networks built with fixed nodes and added edges, where the edges can be predicted and inferred via the edge formation process extracted from networks over time using structural similarity-based models or probabilistic and statistical function-based models discussed in section IV-B1. For example, [262] employ a Markov model based on both topological and semantic feature similarity between two nodes to evaluate the probability of a link. [33] predict the formation of new links based on a combined popularity and similarity measure, which incorporates both global and local topological information via the introduction of Newton's gravitational law.
Modelling network dynamics becomes more complex when it comes to networks built with fixed nodes and temporal edges that are added or removed over time. Link prediction tasks for these networks focus on the temporal topology information and its evolution, where modelling approaches for the prediction of missing links in static networks have been improved to account for the effect of historical topology. For unsupervised similarity ranking-based models, the link prediction task is based on predicted similarity scores calculated from past structural similarity scores via a time series forecasting model such as ARIMA [263]. To deal with the model capacity and computational efficiency problems of probabilistic and statistical function-based models, efficient learning algorithms can be introduced to account for the influence of topology, such as the neighbour influence clustering algorithm proposed within a conditional temporal restricted Boltzmann machine for the prediction of temporal edges [264]. As for supervised learning approaches, a graph convolutional network (GCN) is widely used to learn the node structure of a network snapshot for each time slice, and an LSTM is employed to perform temporal feature learning across all the network snapshots [40], [265]. The approaches discussed so far only use network topology; heterogeneous prior information, such as node attributes, has been suggested as a way to further improve accuracy [266] but is not commonly explored in the research community. [267] propose a nonparametric link prediction algorithm that can use both topology and node labels to calculate linkage probabilities with seasonal linkage patterns.
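The unsupervised temporal variant can be sketched by computing a structural similarity score on each historical snapshot and forecasting the next value. We use a plain moving average as a stand-in for an ARIMA-style forecaster; the window size is a hypothetical parameter.

```python
def common_neighbors(adj, u, v):
    # Structural similarity on one snapshot.
    return len(adj[u] & adj[v])

def forecast_score(snapshots, u, v, window=3):
    # snapshots: list of adjacency dicts (node -> set of neighbours),
    # ordered in time. The forecast similarity of (u, v) is the moving
    # average of its past similarity scores.
    scores = [common_neighbors(adj, u, v) for adj in snapshots]
    recent = scores[-window:]
    return sum(recent) / len(recent)
```

Pairs with the highest forecast score are predicted as the most likely links in the next snapshot.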
There is an even more complex case when the networks shrink or grow in size as they evolve with temporal nodes and edges. However, as the networks can be modified to include all observed nodes from all the snapshots of temporal networks, it is always assumed that there is a fixed set of nodes for all the networks at different time points [268], where the above mentioned modelling approaches for link prediction of networks built with fixed nodes and temporal edges can be used.
To conclude, the existing studies focus on modelling topology changes where nodes are fixed. The current methods consider one of the two scenarios: (i) edge addition and (ii) edge addition or removal. There is space for further research on modelling networks with temporal nodes and edges, where addition and removal of nodes, as well as the resulting change of network sizes should be considered.
3) Attribute and topology changes: The modelling of network dynamics becomes more complex as we allow, on top of the topology changes, for network attributes to change over time. As current models of topology evolution are limited to networks built with a fixed set of nodes and edge changes over time, the models of networks where both attributes and topology change also have the same limitation. These models, given an addition or removal of edges, consider: (i) edge attributes changes, (ii) node attributes changes and (iii) both of these changes. Modelling approaches for the link prediction and node classification of static networks presented in section IV-B1 and the network topology changes introduced in section IV-B2 can be extended to learn both attribute changes and topology changes.
The topology change, linked to edge addition or removal, can be accompanied by changes of edge attributes. There are studies on link prediction in temporal networks that have edge weights [40], [266], [268] and directions [268]. The typical supervised learning approach for topology changes mentioned in section IV-B2, where a GCN explores the local topology of each snapshot and an LSTM characterises the evolving features of dynamic networks, can be improved by introducing a generative adversarial network (GAN) to tackle the sparsity and the wide-value-range problem of edge weights [40]. Network embedding methods based on matrix factorisation can also be improved to include information about edge weight or direction in the adjacency matrix for the prediction of temporal edges [266], [268]. Further research is needed on other variations of edge attribute changes.
When node attributes change in networks built with fixed nodes and non-attributed edges that change over time, modelling approaches of node classification can be employed to learn the changes of the node attributes by considering the evolution of attributes and topology. [39] use a GCN to conduct a node classification task on social networks built with a fixed number of attributed nodes, changeable node labels and non-attributed edges that are added over time. The GCN implemented in this research not only considers the local topology and its attributes, but also uses similarity-based matrices to account for patterns of the high-order neighbourhood. Further research is needed for node classification in networks built with dynamically labelled nodes and temporal edges. There is also the most complex case where node attributes, edge attributes and network topology can all change at the same time. Currently, there is no research in this space and further studies are required to model these very complex scenarios. 4) Structural pattern changes: The above mentioned structural components and their changes result in various structural patterns and corresponding pattern changes. Structural patterns refer to the correlated combination of nodes, edges and attributes within a community or a network. Research in the space of structural patterns includes the discovery, analysis and prediction of patterns and their dynamics. Common model aims here include community discovery and anomaly detection.
A community discovery starts from defining a community, which characterises the structural patterns of the sub-networks to be discovered, generally in an unsupervised way. A community in a complex network, as defined by [10], [74] in a generic way, is a set of entities that share some closely correlated sets of actions with the other entities of the community. To reflect certain features of reality, the closeness within each community can be measured based on density, vertex similarity, actions of nodes or influence spread, each corresponding to a different type of community discovery algorithm, well summarised by [74].
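Density-based closeness is commonly quantified with Newman's modularity, which density-driven discovery algorithms try to maximise. The following sketch only evaluates the modularity Q of a given candidate partition of an undirected graph; it is not itself a discovery algorithm.

```python
def modularity(adj, communities):
    # Q = sum over communities of e_c/m - (d_c/(2m))^2, where e_c is the
    # number of intra-community edges, d_c the total degree inside the
    # community and m the total number of edges.
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        comm = set(comm)
        e_c = sum(1 for u in comm for v in adj[u] if v in comm and u < v)
        d_c = sum(len(adj[u]) for u in comm)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q
```

On two triangles joined by a single bridge edge, splitting along the bridge scores higher than the trivial one-community partition, matching the intuition of dense communities sparsely connected to each other.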
Considering the increasing temporal and structural complexity of networks resulting from topology variations and the introduction of attributes, community discovery approaches have also been further developed and categorised from the perspective of the process, i.e. the inner structure of the algorithms. To deal with the community instability problem in dynamic networks, temporal smoothing operations have been included in community discovery models to smooth out the evolution of communities, which leads to a new categorisation based on the varying extent of temporal smoothness [10]. For attributed networks, built with attributed nodes and edges, a fusion procedure has been introduced into community discovery models to account for the effects of both topology and node features, where another categorisation based on when and how they use and fuse network structure and attributes is proposed [119]. To deal with directed networks, characterised by edge directions and asymmetric adjacency matrices, different community discovery models have been proposed and summarised depending on the way directed edges are treated [269].
However, the perfect community discovery algorithm does not exist despite all the above mentioned attempts, as each of them performs well on one specific variant of the general problem and different algorithms can produce different partitions even for the same networks [10]. Based on that, [270] further categorise community detection algorithms according to the similarity of their results, which attempts to confirm valid definitions of a community and helps with the choice of algorithms for future research. There is still a need for further research on community discovery for network variations with varying degrees of temporal and structural complexity, given the superposed challenges of community definition, temporal smoothness, and the incorporation of topology and attribute information.
Studies on anomaly detection focus on rare occurrences of structural components and patterns, as well as their changes, involving the detection of anomalous nodes, edges, subgraphs, events and graphs [52], [55], [271]. They start from static networks built with fixed nodes and edges, where anomaly detection of nodes, edges and subgraphs can be realised via traditional non-deep learning approaches based on network statistical features or via representation learning methods, as summarised by [271], [272]. These methods have also been improved for attributed networks, which carry richer information about network structures [271], [272].
As temporal complexity is introduced to networks, two types of anomaly detection methods can be distinguished. More specifically, there is a two-stage approach that maps networks into a vector of real numbers and then employs an anomaly detector on it for node, edge or subgraph anomalies, involving well categorised community-, compression-, decomposition-, distance- and probabilistic model-based models [52]. There are also deep learning approaches applied to the anomaly detection of nodes, edges, subgraphs and graphs, respectively [271]. There is also a research gap that calls for further study of anomaly detection models incorporating both attribute and temporal information of networks.
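The two-stage pattern can be illustrated with the simplest possible choices: map each node to a structural feature (here, its degree) and then run a z-score detector on the features. Both choices are illustrative stand-ins for the richer feature maps and detectors used in the cited surveys; the threshold of 2 standard deviations is a hypothetical setting.

```python
import math

def degree_zscores(adj):
    # Stage 1: map each node to a structural feature (its degree).
    degrees = {u: len(nbrs) for u, nbrs in adj.items()}
    vals = list(degrees.values())
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((x - mean) ** 2 for x in vals) / len(vals))
    # Stage 2: score each node by its deviation from the mean.
    return {u: (d - mean) / std if std else 0.0 for u, d in degrees.items()}

def anomalous_nodes(adj, threshold=2.0):
    # Flag nodes whose feature deviates by more than `threshold` std devs.
    return {u for u, z in degree_zscores(adj).items() if abs(z) > threshold}
```

In a star graph, the hub's degree is far above the mean and it is flagged as the only anomalous node.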
C. Modelling Dynamic Processes
In this subsection, we focus on the dynamic processes that can take place over networks. These dynamic processes interact with the dynamics of networks and vice versa. Dynamic processes can result in three types of changes: (i) a network topology change; (ii) a network attribute change; and (iii) a parameter change of dynamic processes [273]. The network topology change and the attribute change can also lead to the parameter change of dynamic processes [274]. There are many interesting variations of the combination of dynamic processes and dynamic networks, which differ depending on the research objectives and application scenarios. Corresponding studies start from single parameterized dynamic processes on static networks and then turn to multiple dynamic processes with dynamically changing parameters on dynamic networks, which is characterised by increasing process complexity, network complexity, as well as the complexity resulting from various interrelations between dynamic processes and network dynamics. Modelling of dynamic processes can be categorised into the following groups: (i) a single dynamic process, (ii) independent multiple processes and (iii) interrelated multiple processes. They differ in the number of dynamic processes and how these processes interact, each to a varying degree mimicking the real world. Within each category, researchers either focus on parameterized dynamics or on dynamically changing dynamics.
1) A single dynamic process: The modelling of spreading dynamics on networks starts from classic population models with the simplest analysis that considers the evolution of the state of the whole population rather than the state of each individual [8]. They include the stochastic population model, which describes the evolution of the population state via a Markov process, as well as its approximation, the deterministic population model with deterministic definitions of the population state [8], [284]. To model the states of each individual independently and allow for arbitrary interactions among them, as summarised by [8], these population models are extended and improved to faithfully learn spreading processes on networks.
A spreading process on static networks is an example of a simplified representation of various real world scenarios where networks underlying the dynamic processes are simulated or represented under the most stringent assumption of nodes and edges that do not change. When a dynamic process evolves much faster than the network of interactions, static networks can serve as accurate proxies of slowly switching topologies. This real world situation can be approximated as dynamic processes over static networks [284].
The modelling of dynamic processes on static networks starts from analysing the impact of pairwise interactions using the extended classic models, including the stochastic network model and the deterministic network model [8], [284], where networks with varying features and structures can be introduced to the studies. Stochastic network models describe the state transition for each node as a Markov jump process or its extension under relaxed Markovian assumptions [284]-[287]. Deterministic network models, as approximations of stochastic network models with deterministic definitions of node states, are also widely used [288], [289]. A threshold model is one example of a typical deterministic model for an epidemic process [283]. The modelling of spreading dynamics becomes more complex and realistic when complex contagion is also considered with the incorporation of group interactions, which has currently been realised via simplicial complexes [290].
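A minimal sketch of a stochastic network model is a discrete-time SIR process on a static network: each susceptible node is infected by each infectious neighbour with per-step probability `beta`, and each infectious node recovers with probability `gamma`. The synchronous update scheme and the parameter values are our own illustrative assumptions.

```python
import random

def sir_step(adj, state, beta, gamma, rng):
    # One synchronous step of stochastic SIR on a static network.
    new_state = dict(state)
    for node, s in state.items():
        if s == "S":
            for nbr in adj[node]:
                if state[nbr] == "I" and rng.random() < beta:
                    new_state[node] = "I"
                    break
        elif s == "I" and rng.random() < gamma:
            new_state[node] = "R"
    return new_state

def simulate_sir(adj, seeds, beta=0.3, gamma=0.1, steps=50, seed=0):
    rng = random.Random(seed)
    state = {u: "I" if u in seeds else "S" for u in adj}
    for _ in range(steps):
        state = sir_step(adj, state, beta, gamma, rng)
    return state
```

With extreme parameters the behaviour is easy to verify: `beta=0` blocks all transmission, while `beta=1, gamma=0` deterministically infects every reachable node.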
A spreading process on dynamic networks refers to one of the most common real-world application scenarios. The underlying networks of interactions are not static, but dynamically change while co-evolving with and being influenced by the dynamic processes over the networks [291]. Models that capture their co-evolution at comparable time-scales have been categorised into temporal-switching, activity-driven, and edge-Markovian networks [284]. Temporal switching networks model dynamic networks as snapshots switching arbitrarily between a set of topologies according to stochastic mechanisms such as Markov switching rules [240], [241]. Activity-driven approaches focus on network interactions generated according to a time-invariant function characterising individual properties, which involves a series of extensions with the introduction of the epidemic threshold due to its analytical tractability, as summarised in detail by [292]. Edge-Markovian dynamic graphs can model the stochastic evolution of dynamic networks, which also involves analytically tractable extensions with spreading dynamics [293], [294].
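The temporal-switching idea can be sketched as an SIS process that, at each step, propagates over the current snapshot's topology. The snapshot sequence here is given explicitly rather than generated by a Markov switching rule, and the parameter names are our own.

```python
import random

def sis_on_switching(snapshots, seeds, beta=0.5, mu=0.2, seed=0):
    # SIS spreading over a temporally switching network: at step t the
    # process uses the topology of snapshots[t] (adjacency dicts).
    rng = random.Random(seed)
    infected = set(seeds)
    history = [set(infected)]
    for adj in snapshots:
        new_inf = set()
        for u in infected:
            if rng.random() >= mu:        # stays infected with prob 1 - mu
                new_inf.add(u)
            for v in adj.get(u, ()):      # infects current neighbours
                if v not in infected and rng.random() < beta:
                    new_inf.add(v)
        infected = new_inf
        history.append(set(infected))
    return history
```

A link that exists only in a later snapshot can still carry the infection, which is exactly what distinguishes these models from a static aggregation of the snapshots.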
Spreading dynamics on dynamic networks can also be modelled with data-driven machine learning approaches. They focus on transforming spatial information and other temporal features involved in a spreading process into temporal information that can be handled well, where deep learning-based predictive models like recursive neural networks and convolutional neural networks can be employed to predict spreading processes, as summarised by [295]. Researchers have also started to employ network embedding approaches, as mentioned in section III-B2, to incorporate network information into the predictive systems, with typical examples of predicting epidemic spreading with graph neural networks [296] or using node regression based on transfer learning [297].
Parameter changes of processes are discussed here in terms of the above mentioned single dynamics. The existing modelling approaches to a single dynamic process propagating over the network are generally parameterized without any change of parameters implemented. Only a few studies focus on the evolution of spreading processes on networks. [238] incorporate mutation of pathogen strains and corresponding changes of epidemic transmission probability, which trigger evolutionary adaptations of the spreading processes with a dynamically changing epidemic threshold. In this example, transmissibility changes between a limited number of fixed values and is controlled via mutation and transition probabilities.
2) Independent multiple processes: Dynamic processes can result in three types of changes, including: (i) a network topology change; (ii) a network attribute change; and (iii) a parameter change of dynamic processes [273]. As almost all the models of processes in section IV-C1 are based on the probability of state transition, the parameters of multiple processes in section IV-C2 and section IV-C3 refer to the transmissibility, adoptability or probability of an entity being infected or activated, or of an entity adopting a given behaviour or state.
Independent multiple processes take place independently, without direct influence on each other's parameters, as they either ignore the dependence between spreads in a co-infected status or simply exclude concurrent infections by multiple spreads. Since no research can be found on independent multiple processes where co-infected status is allowed, we mainly focus on independent multiple processes that exclude co-infected status and interact via changes of network topology, attributes or structural patterns.
Mutually exclusive processes refer to the multiple processes propagating over the network where the concurrent infection by more than one spread is not possible. The goal is to investigate circumstances under which the dominance of a single spread can emerge [298]- [300].
These dynamic processes interact on the same network while preserving their independence via assumptions including temporal separation [299], structural separation [274], cross immunity [298], [301], [302] and cross adoption [303], [304]. Under temporal separation, two pathogens can spread independently in separate time steps and interact via network topology changes, such as node removal as a result of death or immunity [299]. Studies on concurrent multiple processes generally adopt the cross immunity or cross adoption assumption to deal with the concurrent infection that is not allowed for mutually exclusive competing processes. Under cross immunity, an infected/recovered network vertex becomes immune to any other infection [298], [301], [302]. Under cross adoption, an infected network node can transition to being infected by another spread with a specified probability [303]-[305]. There is also research on concurrent multiple processes that uses structural separation, where network nodes are grouped and are only available for specific spreading dynamics, such as a simple contagion and a complex contagion that operate on predetermined vertex groups [274].
Under the above mentioned assumptions and basic settings, the models widely employed in studies of single spreading dynamics in section IV-C1 can be extended and used to model independent multiple processes. In addition, there are already studies using SI [274], SIS [301], SIR [302], [303], the independent cascade model [300], the percolation model [299], [302] and the Reed-Frost model [298]. For example, [274] propose dynamic message-passing equations for two SI-type competing processes to incorporate the message passing into the parameters. [301] use an extended model, SI1I2S, to model the propagation of two concurrent epidemic spreading processes.
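Two competing SI-type spreads under cross immunity can be sketched as follows: once a node adopts state A or B it can never adopt the other, so the two spreads partition the network as they race across it. The synchronous update and the neighbour ordering (sorted, for determinism) are our own illustrative choices.

```python
import random

def competing_si(adj, seeds_a, seeds_b, beta_a, beta_b, steps=10, seed=0):
    # Two mutually exclusive SI spreads with cross immunity: only
    # susceptible ("S") nodes can change state, and each node keeps the
    # first state (A or B) it adopts.
    rng = random.Random(seed)
    state = {u: "S" for u in adj}
    for u in seeds_a:
        state[u] = "A"
    for u in seeds_b:
        state[u] = "B"
    for _ in range(steps):
        new_state = dict(state)
        for node in adj:
            if state[node] != "S":
                continue
            for nbr in sorted(adj[node]):
                if state[nbr] == "A" and rng.random() < beta_a:
                    new_state[node] = "A"
                    break
                if state[nbr] == "B" and rng.random() < beta_b:
                    new_state[node] = "B"
                    break
        state = new_state
    return state
```

On a path with one A seed and one B seed at opposite ends, each spread captures its own half of the network.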
Parameter changes of dynamic processes are discussed here in terms of independent multiple processes. There are scenarios (e.g. election information and entertainment information spreading in the same social network) concerning parameterized independent multiple processes where co-infected status is allowed, the dependence between spreads is ignored and the parameters do not change; however, no research can be found here.
Most of the mutually exclusive processes involved in the existing studies do not consider dynamically changing parameters of the process [298], [299], [301], [302]. A small number of studies introduce dynamics whose parameters change over time under the impact of node states, where transmissibility varies with node groups (structural patterns) [303], [304], the states of neighbouring nodes [305], or message passing [274]. The parameters can either change between a limited number of fixed values [303], [304] or change continuously according to the network attributes resulting from another spreading process [274].
3) Interrelated multiple processes: Interrelated multiple processes are characterised by direct unilateral or mutual influence of processes on their parameters. They can interact not only by changing network topology or attributes, but also via parameter changes.
Partially inclusive processes refer to the multiple processes that allow the concurrent infections of nodes while also incorporate the dependence between spreads themselves.
Interrelated multiple processes can have suppressing [306], [307] or supporting [274] relations, which are involved in only a limited number of studies, all of which so far consider static networks [274], [306], [307]. These spreading dynamics change parameters with the transition of concurrent infection states. For example, an epidemic spreading process, under the suppressing impact of awareness spreading, has different infection probabilities given nodes' different levels of awareness [306]-[308]. Similarly, collaborative multiple processes have different probabilities under their supporting impact [274].
Under the above mentioned assumptions of concurrent infection, the models mentioned in section IV-C1 and section IV-C2 can also be extended and used, and some of them have already been used to model interrelated multiple processes, including SI [274], SIS [308], SIR [306] and SIS-SIRS [307].
Parameter changes of processes are discussed here in the context of interrelated multiple processes. In almost all the existing research, at least one of the considered spreading dynamics has dynamically changing parameters in response to the impact of another spreading dynamics [306]-[308]. [274] further introduce collaborative multiple processes that all change parameters with the message passing of nodes. Similarly to the case of independent multiple processes, the parameters of interrelated multiple processes can either change between a limited number of fixed values [306]-[308] or change continuously according to the network attributes resulting from another spreading process [274].
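The awareness-suppressed epidemic pattern can be sketched in one step of coupled dynamics: aware susceptible nodes are infected with a discounted probability `beta * suppression`, while awareness itself spreads over the same network. The `suppression` discount factor and the deterministic awareness spreading rule are hypothetical simplifications.

```python
import random

def aware_epidemic_step(adj, infected, aware, beta, suppression, rng):
    # One step of an epidemic whose transmissibility is lowered for aware
    # nodes (interrelated processes: awareness suppresses infection).
    new_infected = set(infected)
    for node in adj:
        if node in infected:
            continue
        p = beta * suppression if node in aware else beta
        for nbr in adj[node]:
            if nbr in infected and rng.random() < p:
                new_infected.add(node)
                break
    # Awareness spreads alongside the epidemic: here, deterministically
    # to all neighbours of aware nodes.
    new_aware = set(aware)
    for node in aware:
        new_aware |= adj[node]
    return new_infected, new_aware
```

With full suppression (`suppression=0`) an aware node cannot be infected at all, while an unaware neighbour of an infected node is infected with probability `beta`.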
D. Combination of the network and process dimensions
In this section, we focus on the superposition of the network dimension and the process dimension, as well as the increasing complexity of modelling CNS considering the interactions and interrelations between the network and a dynamic process.
1) Superposition of networks and processes: Propagation process dynamics, either on static networks or dynamic networks, has been extensively studied using non-machine learning approaches [8], [284], and data-driven machine learning approaches have recently become a popular choice for incorporating more structurally and temporally complex network information [296], [297]. Dynamic networks involved in these studies only allow for topology changes [240], [241], [296], [297], which influence the results of spreading dynamics.
The majority of the independent multiple processes considered in existing research take place on static networks [274], [298], [300]-[303] and only a small number of studies can be found for those on dynamic networks [299], [304]. As a typical example of competing epidemic processes, [299] allow the removal of nodes and their edges over time as a representation of death or immunity, which results in topology changes. Interrelated multiple processes, however, are involved in only a limited number of studies and so far all of them take place on static networks [274], [306], [307]. Modelling approaches for multiple processes generally employ the non-machine learning approaches used to model single dynamics on networks; further research is needed on modelling multiple processes using both non-machine learning and data-driven machine learning approaches.
2) Interactions of networks and processes: The network dimension and the dynamic process dimension can either be interrelated or independent, based on whether one dimension can trigger the dynamics of the other dimension to change. Parameter changes of dynamic processes, as well as network changes of (i) topology, (ii) attributes and (iii) structural patterns, each indicate changes of dynamics in the process dimension or the network dimension. Thus, interrelations exist in two scenarios: (a) certain states of networks trigger the parameter changes of a dynamic process; (b) a dynamic process results in one of the above mentioned three types of network changes. The interrelations can either be described as a one-way influence, covering just (a) or (b), or a mutual influence, covering both (a) and (b).
Independent vs. interrelated relations between a network and a dynamic process are discussed considering their changes and the corresponding causes of change within the CNS.
An independent relation between the network and a dynamic process is common in terms of dynamic processes on networks and the research space is dominated by this approach [8], [241], [284]. In this case, a dynamic process only influences and causes changes of the node attributes connected with the process itself (e.g. whether a node, as a result of the process, has been infected or adopted new behaviour) rather than a change of the network structure or dynamics. In this scenario parameters of a dynamic process are not altered by the changes in the network.
Networks and dynamic processes with interrelated relations between them involve mutually or unilaterally triggered changes. Multiple processes that interact with each other via changing network attributes are typical examples of a one-way influence of type (a), where the parameters of spreading dynamics can change with network attributes [274], [303]-[308]. For example, in a rumour-truth mixed spreading scenario, the truth spreading rate gets lower when nodes are attributed as rumour-believers [305]. There are also cases of a one-way influence of type (b), where networks change topology in response to the spread. For example, a disease spreading through the network can leave some nodes dead and get them removed [299]. Currently, no research can be found on a mutual influence between the process dimension and the network dimension, i.e. the situation where a closed feedback loop between the process and the network is considered.
Parameter changes of dynamic processes are discussed here in terms of interactions and interrelations between networks and processes. All the available examples of processes on networks that dynamically change under the impact of networks are about multiple processes, where networks serve as a medium for their interactions [274], [303]-[305]. Further research is required in the space of dynamic processes that dynamically change under the impact of network changes. Network topology that changes in response to dynamically changing dynamics is another interesting and not yet addressed research gap.
E. Control mechanisms on CNS
Control mechanisms of CNS aim to find the optimal strategy to attain its desired state, which involves controllability and synchronization of networks and a control of dynamic processes on networks.
1) Network control: Controllability of networks is achieved using model-based or data-driven approaches [309]. Model-based approaches aim to find an optimal set of driver nodes for the CNS under the assumption of a tractable model for network dynamics, where linear time-invariant systems are often employed to approximate the nonlinear processes that drive the directed networks [17], [310], [311]. This approach identifies the driver nodes via the maximum matching approach [312], which enables the calculation of structural controllability [310] and exact (state) controllability [311]. In this context, many studies investigate the impact of topology variations on the controllability of directed networks, ranging from degree distributions [310], connection types [17] and topology switching [19] to all possible network structures [312]. To measure the robustness of controllability, simulations of node removal and edge removal attacks are also conducted [312], and convolutional neural networks can be further utilised to improve computational efficiency [313]. Data-driven approaches, on the contrary, learn controls from network data without knowing the network dynamics [309]. Relevant studies generally focus on undirected, directed or weighted networks, where machine learning methods like reinforcement learning are used to find the optimal control parameters for desired network states [309], [314].
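The maximum matching approach to structural controllability can be sketched directly: build a bipartite graph whose left copies are edge tails and right copies are edge heads, compute a maximum matching, and take the unmatched (right-side) nodes as driver nodes. We use a simple augmenting-path matching (Kuhn's algorithm) rather than the faster Hopcroft-Karp variant; `driver_nodes` follows the convention that a fully matched network still needs one driver.

```python
def max_matching(edges, nodes):
    # Augmenting-path maximum matching on the bipartite representation
    # of a directed network (tails on the left, heads on the right).
    succ = {u: [] for u in nodes}
    for u, v in edges:
        succ[u].append(v)
    match_right = {}                      # head node -> matched tail node

    def try_augment(u, visited):
        for v in succ[u]:
            if v in visited:
                continue
            visited.add(v)
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    for u in nodes:
        try_augment(u, set())
    return match_right

def driver_nodes(edges, nodes):
    # Unmatched nodes are the driver nodes; if every node is matched,
    # a single driver node suffices.
    matched_heads = set(max_matching(edges, nodes))
    unmatched = set(nodes) - matched_heads
    return unmatched or {min(nodes)}
```

On a directed path only the first node is a driver, while a hub pointing at many leaves needs the hub plus all but one leaf, illustrating why dense homogeneous networks are easier to control than networks with many sources or sinks.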
Synchronization of CNS, as another type of control towards a desired synchronized state, also involves intensive studies that are well summarised from the perspectives of phase oscillator models, stability of the synchronised state, and synchronisation in complete or sparse networks [42], [315]. In addition, given the similar definitions of synchronization and the consensus problem of multi-agent systems [316], they can also be studied from a unified viewpoint by applying ideas about consensus problems across disciplinary areas to complex networks [317]-[319].
2) Process control: Controllability of dynamic processes is achieved via changing networks or introducing another dynamic process.
Control via changes of networks refers to the control of spreading dynamics via changing network topology and attributes, involving non-deep learning-based, deep learning-based and manual strategy-based approaches. Taking the control of epidemic spreading processes as an example, researchers seek an optimal set of control actions including topology changes like node removal and edge removal, as well as attribute changes via antidote allocation, to minimise infections [8], [284], [314]. Non-deep learning-based approaches, as summarised in [284], mimic spreading dynamics with the non-deep learning models reviewed in section IV-C1 and optimise the action sequence under the corresponding model constraints via a mean-field approximation or geometric programming. Deep learning-based approaches use machine learning methods to seek the optimal action sequence over graphs based on network information incorporated via network embedding. For example, [320] control epidemic processes over a temporal attributed network using reinforcement learning as a ranking module for actions of changing node attributes, where a GNN is encapsulated to embed information about the network and the epidemic process.
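Node-removal control can be sketched with a deterministic SI spread (infection always transmits along edges), where the final outbreak size is simply the set reachable from the seeds, and a greedy controller picks the single removal that minimises it. Both the deterministic spread and the single-removal budget are illustrative simplifications of the optimisation problems above.

```python
def si_final_size(adj, seeds, removed=frozenset()):
    # Deterministic SI spread: the outbreak is the set of nodes reachable
    # from the seeds after removing the given nodes.
    infected = set(seeds) - set(removed)
    frontier = list(infected)
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr not in infected and nbr not in removed:
                infected.add(nbr)
                frontier.append(nbr)
    return len(infected)

def best_single_removal(adj, seeds):
    # Greedy control action: remove the non-seed node whose removal
    # minimises the final outbreak size.
    candidates = set(adj) - set(seeds)
    return min(sorted(candidates),
               key=lambda u: si_final_size(adj, seeds, {u}))
```

On a barbell-shaped graph the greedy controller removes the bridge node, cutting the outbreak off from the far side of the network.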
Manual strategy-based approaches focus on the simulation and comparison of manual strategies that change network topology or attributes, such as the different social network-based distancing strategies proposed and compared in [146] to reduce Covid-19 infections.
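The simulate-and-compare workflow can be sketched with a stochastic SIR process on a synthetic contact network, where a manual "distancing" strategy removes a fraction of edges before the outbreak. This is an illustrative toy, not the model of [146]; all parameter values are arbitrary:

```python
import random
import networkx as nx

def sir_outbreak(G, beta=0.3, gamma=0.1, seed_node=0, rng=None):
    """Discrete-time stochastic SIR on a network; returns the total
    number of nodes that were ever infected (i.e. finally recovered)."""
    rng = rng or random.Random(0)
    infected, recovered = {seed_node}, set()
    while infected:
        new_inf = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and rng.random() < beta:
                    new_inf.add(v)
        recovered |= {u for u in infected if rng.random() < gamma}
        infected = (infected | new_inf) - recovered
    return len(recovered)

rng = random.Random(42)
G = nx.barabasi_albert_graph(300, 3, seed=1)

# manual "distancing" strategy: remove a random 40% of contacts
H = G.copy()
H.remove_edges_from(rng.sample(list(H.edges()), int(0.4 * H.number_of_edges())))

base = sir_outbreak(G, rng=random.Random(7))
dist = sir_outbreak(H, rng=random.Random(7))
print(base, dist)  # distancing typically yields a smaller outbreak
```

Comparing strategies then reduces to re-running the simulation with different topology changes (random vs. targeted edge removal, node quarantine, etc.) under matched random seeds.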
Control via interactions between processes refers to the control of spreading dynamics realised by introducing another spreading process with a competitive, suppressing or supporting impact. For example, the containment of a single epidemic spreading process A can be controlled by introducing a competitive process B, which is realised by an optimal allocation of a limited number of B spreaders to minimise the spreading of A [274], [300]. An advertising campaign A can also be controlled by introducing a collaborative spreading process B, where the best joint advertising campaign is designed via the optimal allocation of B spreaders with the aim of maximising the number of susceptible nodes [274].
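The competitive case can be sketched as two deterministic SI processes racing over a network, where a node adopts whichever process reaches it first and the control question is where to seed the competing process B. A minimal illustration under our own simplifying assumptions (simultaneous rounds, ties resolved in favour of B), not the optimisation of [274], [300]:

```python
import networkx as nx

def compete(G, seeds_a, seeds_b):
    """Two competing deterministic SI processes spread in rounds; a node
    adopts whichever process reaches it first (ties go to B, the control).
    Returns the number of nodes finally claimed by process A."""
    state = {s: "B" for s in seeds_b}
    for s in seeds_a:
        state.setdefault(s, "A")
    frontier = dict(state)
    while frontier:
        nxt = {}
        for u, lab in frontier.items():
            for v in G.neighbors(u):
                if v not in state and (v not in nxt or lab == "B"):
                    nxt[v] = lab
        state.update(nxt)
        frontier = nxt
    return sum(1 for lab in state.values() if lab == "A")

G = nx.barabasi_albert_graph(200, 2, seed=3)
hubs = sorted(G, key=G.degree, reverse=True)[:5]
periph = sorted(G, key=G.degree)[:5]

a_vs_hubs = compete(G, seeds_a=[100], seeds_b=hubs)
a_vs_periph = compete(G, seeds_a=[100], seeds_b=periph)
print(a_vs_hubs, a_vs_periph)  # hub allocation of B usually suppresses A more
```

Searching over the placement of the B seeds (here just hubs vs. peripheral nodes) is a crude stand-in for the optimal allocation problem discussed in the text.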
V. HOW DO WE APPROACH THE ULTIMATE GOAL?
In this section, we answer the question of how to approach the ultimate goal of modelling complex networked systems: building a Digital Twin (DT) for real world networked systems. This question is decomposed into three sub-questions: (1) What have we done so far to achieve the goal? (2) How far are we from building the DT for CNS? and (3) How can we move forward?
Existing research on modelling networked systems and their dynamics aims at representing the complex reality through networked structures that minimise information loss and ensure that the model's aims can be fulfilled. The complementary effect of fulfilling the model's aims and minimising information loss contributes to a good model of CNS and its convergence to a DT. To assess networked systems models and narrow their gaps with DTs, we build an assessment framework from two perspectives: (i) the fulfilment of the CNS model's aims, which diverges into specific model aims focused on external tasks and abstract model aims focused on inner rules; and (ii) faithful DT representation and modelling, which helps to merge the requirements of both the specific model aims concerning external tasks and the abstract model aims concerning inner rules.
A. What have we done so far?
Researchers have done a lot of work on modelling real world networked systems. A small number of studies have already attempted to develop Digital Twins of complex networked systems for specific application contexts, like IoT systems [97], [98] and blockchain-encapsulated systems [99], [100]. However, while many recent studies on modelling, simulation and control of complex networked systems have started taking into account the details necessary to faithfully represent aspects of complex reality, none of them explicitly attempted to create a DT of CNS with all its implications, which we will now discuss in more detail.
1) Bottom-up view: CNS-based attempts: Researchers have been trying to build a Complex Networked System (CNS) that can faithfully represent and adapt to the real world situation. The networks, as the basic representations of CNS, are approaching reality with more faithful representation of real world information and infusion of evolving dynamics. These attempts partially enable CNS to meet the requirements of Digital Twins in terms of similarity to reality by incorporating complex inner rules and fulfilling external tasks.
Networks are approaching reality as the structural, temporal, spatial and dynamics dimensions are gradually taken into account. For networks constructed in a data-driven way from readily observable data sets, the description of the impact of time enables the modelling of data-driven networks to capture the evolving features of real-world systems, while spatial information represented in the network structure serves as the space where dynamics take place. Spatio-temporal networks, where temporal networks are modelled under the constraint of a space that influences the structure of the networks, are proposed to encompass both temporal and spatial information; they are the closest to reality among the data-driven network structures considered so far. For simulation-based networks and hybrid networks, rule-based simulations have been developed to approach reality as more complex inner rules of complex networked systems are simulated to incorporate the above-mentioned complexity dimensions. Networks simulated from the microscopic view of agents enable the representation of either real or simulated information in a flexible and faithful manner. From another perspective, networks have also been widely employed in different scenarios across disciplines, where a wide range of research objects can be represented and analysed with network structures. As is detailed in section III-B, networks are employed to represent information in agent-based systems and graph structures, relations that are statistically or semantically extracted and constructed, and complex systems infused with networks like IoTs and Blockchains.
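The simplest data-driven temporal representation is a time-stamped edge list, from which static snapshots over time windows can be extracted on demand; spatial attributes can be attached to nodes in the same fashion. A minimal sketch with illustrative data of our own:

```python
from collections import defaultdict

# time-stamped contact list: (u, v, t) records an interaction at time t
contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 6)]

def snapshot(events, t_start, t_end):
    """Aggregate a temporal edge list into one static (undirected)
    adjacency structure per time window [t_start, t_end)."""
    adj = defaultdict(set)
    for u, v, t in events:
        if t_start <= t < t_end:
            adj[u].add(v)
            adj[v].add(u)
    return dict(adj)

early = snapshot(contacts, 0, 4)  # window captures edges a-b, b-c
late = snapshot(contacts, 4, 8)   # window captures edges a-c, c-d
print(sorted(early), sorted(late))
```

The choice of window width is exactly the static/evolving/temporal trade-off described above: one window over all time freezes the system, narrow windows approximate continuous temporal structure.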
Dynamics of and on CNSs, with the different possible interrelations between the two, are another key aspect of building an accurate model of CNS. With the infusion of dynamics into the networks, the CNS moves further along the road to a DT. Referring to section IV, there are built-in dynamics that trigger the evolution of networks, while there are also dynamics that take place on the networks under the influence of network structures. CNS composed of networks and dynamics are approaching reality as researchers try to understand and model the interrelations between network dynamics and dynamic processes. Until now, there is considerable literature on dynamic processes on networks that are either static or dynamic. Among the different interrelations between dynamics and networks, we find that some studies consider the unilateral interrelation between dynamic processes and static networks. However, we can hardly find any literature on the interrelation of dynamic processes and network dynamics, which is actually the closest to the real world situation.
2) Top-down view: DT-directed methods: Digital twinning tasks vary across cases and need to be adopted and adapted in the context of a CNS. For example, [83] model the DT of an urban-integrated hydroponic farm, where they decompose the modelling process into three crucial elements: data creation that enables an extensive monitoring system for a virtual representation of the farm through data, data analysis that helps to identify key influencing variables, and data modelling that enables forecast and feedback. [321] try to shape the actual state and a possible future of Product Data Technologies from a Closed-Loop Product Lifecycle Management (C-L PLM) perspective, where they see an intelligent product as a product system which contains sensing, memory, data processing, reasoning and communication capabilities at four intelligence levels. [322] view the physical asset and its digital twin as two coupled dynamical systems that evolve over time through their respective state spaces, where the digital twin acquires and assimilates observational data from the asset (e.g., data from sensors or manual inspections) and uses this information to continually update its internal models so that they reflect the evolving physical system. These up-to-date internal models can then be used for analysis, prediction, optimisation and control of the physical system. Referring to these previous studies, we generally decompose the modelling of a DT into tasks including (a) data processing, which includes data creation and data integration, (b) data analysis with the purpose of parameter selection, and (c) data modelling, which enables the forecast of eventualities and feedback to the real system under the impact of decisions made with reference to the forecast.
Data processing, as the fundamental task of modelling a DT, is composed of two parts: (i) data creation, enabled by an extensive and robust monitoring system that tracks the observable information, and (ii) data integration, which features the recording, management and retrieval of information in real time. In the data creation stage, taking the DT of the hydroponic farm built by [83] as an example, they track changing environmental conditions and crop growth through unstructured manual records and a wireless sensor network that sends data in real time to a server. For the data integration function, semantic modelling that includes the application of ontologies has also been employed to equip the DT with context awareness through the recording of data, answering of queries and information retrieval [85], [323]- [327]. Another popular data integration approach employs blockchain technology: the blockchain can serve as the middleware of IoT with improved interoperability, privacy, security, reliability and scalability [328]- [330]. However, in some cases it is hard to readily collect and process real-time data for a well-established DT in an efficient way, and there are studies attempting to deal with problems of data integrity. For example, [331] propose a collaborative city digital twin based on federated learning, where multiple city DTs can learn a shared model while keeping all the training data local. This is a promising solution to accumulate insights from multiple data sources efficiently while avoiding the violation of privacy rules.
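The federated idea behind such collaborative twins can be sketched with a minimal FedAvg loop: each client takes a gradient step on its private data, and a server averages the resulting weights. This is a generic illustration with synthetic linear-regression data, not the method of [331]; all names and parameter values are ours:

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One local gradient step on private data (the data never leaves the client)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg(datasets, rounds=200, dim=2):
    """Federated averaging: clients train locally, the server averages weights."""
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in datasets], float)
    for _ in range(rounds):
        locals_ = [local_step(w, X, y) for X, y in datasets]
        w = np.average(locals_, axis=0, weights=sizes)  # size-weighted mean
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three "city twins", each with private observations
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = fedavg(clients)
print(np.round(w, 2))  # ≈ [ 2. -1.]
```

Only model parameters cross the network; the per-client observation matrices stay local, which is what makes the approach attractive under privacy rules.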
The data analysis and variable selection task is important for the establishment of DTs, and is presented by [83] as including: (a) the influence of the environment on the physical asset, (b) the influence of operable controls on the environment, and (c) the influence of manual changes on the operational controls, where, within the limitations of the data, this exercise identifies the variables which are crucial to track and forecast. Data analysis that enables variable selection is thus important to the modelling of a DT. However, most studies ignore this process and simply predetermine the observable variables, while the fundamental question of how to identify the minimum number of observable variables has been understudied over the years and needs systematic research. As is detailed by [322], a well-designed digital twin should comprise models that provide a sufficiently complex digital state space, capturing variation in the physical asset that is relevant for diagnosis, prediction and decision-making in the application of interest. On the other hand, the digital state space should be simple enough to enable tractable estimation of the digital state, even when only partially observable. As a rare example of selecting variables, [83] identify key influencing variables on energy use and crop yield by analysing the relationships within the broad data collected on temperature, visible radiation and CO2 levels.
The data modelling process for a DT is essentially a forecasting model that predicts and provides feedback on the real-world system to help control the DT, which includes two fundamental tasks: a forecast of the extended future and feedback to the real-world system. Prediction and inference of reality that has happened before is the basic function of forecasting models, while DTs can further forecast the extended reality by predicting events that have never happened before. For example, [332] propose a disaster city DT for enhancing disaster response and emergency management processes, where disasters that have never happened before are simulated and real world systems are extensively forecast to enable increased visibility into the network dynamics of complex disaster management and humanitarian actions. The digital twinning of complex networked systems is also characterised by a decision-making feedback loop with dynamically updated, asset-specific computational models infused [322]. Especially in cases of solving multi-objective optimisation problems for complex systems using a DT, which are common in analyses of the entire product lifecycle in manufacturing, researchers have proposed DT frameworks aimed at multi-objective optimisation with effective feedback from different dynamics. [333] enhance DTs of autonomous manufacturing systems through reinforcement learning on continuous data fed back from the DT, where residual errors between the DT and its physical counterpart are compensated and an improved autonomous system can be established. [334] propose a bi-level iterative coordination mechanism to achieve optimal design performance for AFMS, where effective feedback of collected decision-support information from the intelligent multi-objective optimisation of the dynamic execution is presented.
B. How far are the CNSs, as they are currently modelled, from DTs?
In most cases, CNS models presented in the literature, while fulfilling relatively simple model aims under relatively stringent assumptions, have not been developed with the goal of becoming DTs of their modelled aspects of reality. In effect, they only possess partial features of DTs. Therefore, in order to bring the two areas closer together, we propose a unified assessment approach by discussing and attempting to answer the following three questions: (1) What constitutes a good DT? (2) What is a good CNS model? (3) To what extent do current CNS models approach a DT? We try to answer the first two questions with measures that aim at assessing the performance of CNS and metrics that evaluate the quality of a DT, as shown in Fig. 6, and try to answer question (3) in the context of a good CNS that performs well under certain model aims and thus has the potential of becoming a DT.
1) What constitutes a good DT and how to assess it?: DTs feature integrated functions like simulation, optimisation and data analytics [90]. DTs use real-time processing and updates characterised by: (1) a real-time connection with the physical entity, (2) self-evolution that enables a DT to learn and adapt in real time by providing feedback to both the physical asset and the DT, (3) continuous machine learning analysis (dependent on the frequency of the synchronisation), not just one-time output forecasting, (4) availability of time-series (or time-continuous) data for monitoring, (5) a level of autonomy that defines whether a DT can make changes to the physical asset itself or relies on a human in control who can make changes to the DT, where the property of a DT being autonomous, not autonomous or partly autonomous is case-dependent, and (6) synchronisation, which can be partly continuous or partly event-based [89]. A good DT should meet the requirements of researchers using relatively simple models while preserving trust in the data, the model and their updates [335], [336]. Based on the above features, the assessment of a good DT in the context of CNS can be categorised into two parts: (i) efficiency of data processing and modelling, and (ii) similarity with reality from the perspectives of multiple model aims, self-evolving dynamics and model updates (see Fig. 6).
The evaluation of efficiency includes data processing efficiency and modelling efficiency. Data processing efficiency involves data quality, such as validity and reliability, enabled by handling the imperfection of real-time data ranging from imprecision, uncertainty and incompleteness to ambiguity during the process of information retrieval and data integration [337]. Metadata ("data about the data") captures aspects of the measurement process that may affect the reliability and future usability of the data, which partially addresses trust in data gathered by sensors [335]. Belief function theory is also utilised to estimate the reliability of information sources [337]. However, as the efficiency of data processing is centred on the observability of the experimental physical asset, confined by the availability of time-series data and the real-time connection with the physical entity, it is hard to create quantifiable measures for various evolving application scenarios.

Fig. 6: The unified assessment criterion for CNS and its distance to a DT

For the modelling efficiency of a DT, cost, model maturity and model adaptability, summarised as DT cross-phase metrics, can be utilised in the assessment [336], where a high-quality model may cost less in maintenance and reuse. A more mature model gives the expected outcomes and meets application requirements better as the time and frequency of using the model increase, and a highly adaptable model recreates the status of the real system better. Cost and model maturity are quantifiable in each application scenario [338], [339], while adaptability is hard to quantify but can be enhanced via parameter sensitivity analysis [340] and continuous monitoring of the model's accuracy over time.
The similarity level between modelled dynamics and reality can be evaluated from the perspectives of multiple model aims, dynamics, and model updates of parameters in response to real-time data integration and feedback from real systems. A good DT is characterised by the fulfilment of multiple model aims and a well-handled trade-off between model performance and model complexity. The validation and verification of a good DT depend on the model-aim-directed evaluation of model output for external tasks and the faithful representation of the inner rules of real systems. In the case of a CNS modelled using a DT approach, the evaluation involves the comprehensive application of CNS model-aim evaluation methods (see section V-B2). Similarity of the modelled network dynamics and dynamic processes with those of real world systems can be evaluated using the DT construction metrics summarised by [336], including quantifiable credibility, fidelity and maturity as well as qualitative descriptions of complexity and DT standardisation, while similarity in the context of model updates with the evolving reality can also be evaluated using DT application metrics, including failure rate and qualitative descriptions of decoupling ability and parallelisability, as well as DT reuse metrics, including the degrees of reconfigurability, reconstructibility and composability. Based on the aforementioned methods, the capability of forecasting events that have never happened before and the synchronisation of nodes existing in the CNS can be assessed and enhanced. However, there are no unified quantifiable measures across application scenarios, where the assessment and the comparison of DT modelling methods can be further studied.
2) Is it a good CNS?: To answer this question, both the measures and standards proposed for CNS and those for DT can be considered towards the goal of building a good DT with high data processing efficiency and high similarity of its dynamics with reality. The assessment of CNS can be divided into two parts: (i) model-aim fulfilment assessment, and (ii) model efficiency assessment (see Fig. 6).
There is considerable literature on specific model aims with measurable outputs such as link prediction, community discovery, synchronisation, observability and controllability. The evaluation methods for community discovery can be summarised as internal and external quality evaluation, where more detailed measures can be found in [10]. The assessment of synchronisation of networks focuses on the stability of identical states [42], where the evaluation of consensus in multi-agent systems can also be utilised, as the two have similar definitions and can be studied from a unified viewpoint [317]- [319]. Observability, with its dual, controllability, can be categorised as structural and dynamical, representing, respectively, the observability of the topology [17] and of the variables for coupling nodes and node dynamics [18], [19]. For its evaluation, the observability matrix based on the dynamic model of a linear time-invariant system proposed by [16] has been widely used and extended, while the observability and controllability of nonlinear networks are also studied to investigate the effect of nonlinear dynamical interdependences among variables or the connection with symmetries of networks [20], [21]. In particular, as an evolving model can be mapped to a link prediction algorithm, performance metrics for link prediction can also assist the quantitative comparison of the accuracies of different evolving models [30]. For example, link prediction can be utilised to validate dynamic social network simulators with graph convolutional neural networks (GCN) [39]. Link prediction performance can be evaluated using precision [30], the Area Under the Precision-Recall curve (AUPR) [341], Receiver Operating Characteristic (ROC) curves and the Area Under the ROC curve (AUC) [30], the Geometric Mean of AUC and PRAUC (GMAUC) [342], Error Rate [265], SumD [264], Kendall's Tau Coefficient (KTC) [245] and Micro/Macro/Weighted Average Precision/Recall/F1 Score [343].
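The classical observability test for a linear time-invariant system x' = Ax, y = Cx builds the Kalman observability matrix O = [C; CA; ...; CA^(n-1)] and checks whether it has full rank. A minimal sketch on a small directed chain (the example network and sensor placements are our own illustration):

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix O = [C; CA; ...; CA^(n-1)] for x' = Ax, y = Cx."""
    n = A.shape[0]
    blocks = [np.atleast_2d(C)]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# directed chain 1 -> 2 -> 3 (A[i, j] = 1 if node j feeds node i)
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])

O_sink = observability_matrix(A, np.array([[0., 0., 1.]]))  # sensor on node 3
O_src = observability_matrix(A, np.array([[1., 0., 0.]]))   # sensor on node 1

print(np.linalg.matrix_rank(O_sink))  # 3: full state recoverable
print(np.linalg.matrix_rank(O_src))   # 1: downstream states unobservable
```

The contrast between the two sensor placements illustrates why observability depends on topology: a sensor at the downstream end of the chain sees the whole state through propagated signals, while one at the source sees nothing of the nodes it feeds.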
For abstract model aims, such as mimicking reality using simulation-based networks, we can evaluate these models based on their similarity with the characteristics of reality that they are required to capture, where DT construction metrics [336], as well as observability and controllability measures for CNS, can be utilised. In addition, when simulation-based networks are encapsulated in complex networked systems with the aforementioned specific model aims, the evaluation measures for specific model aims can also reflect to what extent the network simulations reach the research goals, especially in the case of link prediction, where the similarity of nodes, edges and their dynamics is the focus.
DT feature assessment that is applicable to CNS is mainly about the efficiency of simulation or modelling, as modelling efficiency is a quality pursued by both CNS and DT. The DT application metrics and DT cross-phase metrics used to assess DT modelling efficiency are also applicable in the context of a CNS, even though it is not a DT. Data processing efficiency can be assessed and pursued via the observability and controllability measures for CNS, as well as the measures utilised to ensure trust in the data in the evaluation of data processing efficiency for a DT.

3) To what extent is it approaching a DT?: The assessment of the distance of a CNS model to a DT is built on the prerequisite that the CNS is a good CNS, where a good CNS has partial DT features and the potential to approach a DT. A DT has an appropriate level of complexity that enables it, with good model performance in terms of the faithful representation of real systems, to meet the model's aims. As we have discussed in the previous sections, a CNS can approach a DT with better model performance through appropriately increased complexity. The distance between a CNS and a DT can thus be discussed from these two perspectives: (i) complexity and (ii) model performance (see Fig. 7).
There are no clear boundaries for the development path of a CNS towards a DT, either in terms of model performance or complexity. CNSs with an unnecessary level of complexity, below the lower bound of the development path, can be identified when there exists a less complex CNS with equally good or even better model performance. The upper bound of model performance for a CNS naturally exists under the limit of modelling paradigms. When a CNS achieves better model performance through increasing complexity, while falling outside the "unlikely" scope and the "unnecessary" scope, it gets closer to a DT. The bounds of the development path for DT-orientated CNS can also be updated with empirical findings in this space.

a) Complexity: The complexity of a CNS is hard to measure with one concrete measure, but we are able to rank the complexity of (i) the network representation in each complexity dimension (see section III-A), and (ii) CNS modelling based on the 5-generation framework (see section IV). Based on that, a complexity metric can be identified for each generation of CNSs, covering their two components: (i) process and (ii) network representation (see Fig. 8).
For the generations of CNSs shown in Fig. 8, their components, the process dimension and the network dimension, are each represented with the coloured symbols G1, G2a, G2b, G3, G4 and G5. The CNSs in each generation of models vary in dynamics complexity and temporal complexity, while for structural and spatial complexity they can be built with any complexity level from those two dimensions. The temporal complexity of a CNS increases as its process, its network representation, or both, start to change over time in a manner ranging from static (frozen in the time scale) and evolving (captured in time windows) to temporal (continuous). The dynamics complexity of a CNS also increases when the dynamics are changeable, with the process modelled based on changing parameters or the networks evolving via changes of inner rules. We can compare the complexity of one component of CNSs in one complexity dimension based on this complexity metric. For example, compared with the CNSs in G2a, CNSs in G2b are characterised by a temporal complexity that is higher in the network representation but lower in the process dimension.

Fig. 9: The model performance metric of a CNS
b) Model performance: The model performance of a CNS can be assessed based on the two requirements of building a good CNS: (i) model-aim fulfilment, and (ii) model efficiency (see section V-B2 and Fig. 6). There are both quantitative measures and qualitative descriptions for CNS assessment, but how to combine them for a comprehensive assessment and how to deal with the multi-objective optimisation problem for a good CNS still require further study. To gain a rough understanding of the levels of model performance for each generation of CNSs, a model performance metric is built based on (i) accuracy, from the perspective of model-aim fulfilment, and (ii) efficiency (see Fig. 9).
As is shown in Fig. 9, efficiency is described as ex-post, delayed, real-time and ex-ante based on the CNSs' ways of data processing and modelling. CNSs and their components in the ex-post group have the lowest ranking of efficiency due to completely post-hoc modelling, like the CNSs in G1. CNSs in the delayed group are characterised by streams of snapshots feeding into the systems across time windows with a time lag behind the real systems. CNSs in G4 fall in the real-time group as they conduct real-time data processing and modelling. CNSs in G5, also termed DTs, are classified in the ex-ante group as they are not only reactive to the observations of real systems in a real-time manner, but also proactive to events that have never happened before (enabled by the closed feedback loop). The other perspective, i.e. accuracy, represents a generalised conception of model performance considering how accurately the model's aims are fulfilled. It is classified as punctual, periodic, continuous and advanced; these groups respectively require a faithful representation and modelling of the information at only one static time point, within a discrete period, captured continuously, or simulated in advance. The required accuracy level increases with the upgraded assessment criterion and the paradigm shift from G1 to G5. For example, the evaluation metrics of community discovery, like modularity and error rate, are widely used for static networks in G1. They can be further supplemented with a relative reconstruction error rate to analyse the temporal evolution of dynamic networks and communities in G2 [150].

c) Current CNS: In terms of data efficiency, current CNS tend to rely more on complex simulation-based networks to capture more realistic features, or to employ data-driven networks introduced with observable temporal and spatial information.
However, data quality is case-dependent in each application scenario and confined by data sparsity, data security, as well as data processing and representation techniques. It is hard to find studies on CNS built and modelled with real-time information because of limited observability and the difficulty of building realistic real-time data simulators. Though there are studies on CNS built with big data [117], it is still hard to achieve data efficiency at a "real-time" level. In some applications of CNS in a DT, like IoT, networked information can be gathered in real time by sensors and integrated into a knowledge graph or a blockchain, but such a method is not applicable in all application scenarios given the equipment requirements. Therefore, there exists a large gap in data efficiency between a good CNS in current studies and a good DT. On the other hand, for model efficiency, some CNS start to approach a DT with the development of modelling and computation techniques like parallel computation [344], edge computing [345], [346] and cloud computing [347], but it is hard to find such empirical research on CNS except for CNS encapsulated in a DT like IoT.
When it comes to the similarity of the dynamics of CNS to that of reality, there are studies on the dynamics of spatio-temporal networks where both time and space are considered to mimic reality [73], [147], [148]. There are also some studies that model dynamics over dynamic networks with interrelations between dynamics, including interrelations of dynamic processes over networks [7] and the unilateral influence of dynamic processes on static networks [348]. However, we can hardly find a CNS with mutual influence between the dynamics on the networks and the dynamics of the networks, or dynamic processes over spatio-temporal networks with interrelations between dynamics across the temporal and spatial dimensions. Therefore, as the state-of-the-art CNS models have relatively simple aims and are often developed under strict assumptions, it is hard to find a CNS that fulfils DT standards, unless we focus on DT systems with encapsulated networks like IoT [349] and blockchain [329], where the networked systems are built using DT-oriented and directed methods and network features like topology and the interaction of networked node dynamics are ignored.
In terms of model updates, most studies model evolving dynamics in an offline manner without model updates. There are very few attempts to enable model updates, e.g. where complex systems are built on networked mobile devices utilising Federated Learning (FL) [350], [351]. FL selects random subsets of devices in an offline manner to collect the local model updates and share the updated global model with the devices. This method is also used to deploy distributed data processing and learning in wireless networks within a blockchain encapsulated in a DT [352], [353]. The divergence of model updates remains a future research gap. In addition, the assessment methods of CNS are also in need of updates, as they are required to be more dynamic and able to evolve with model changes.
C. How can we go further?
To answer the question of how to go further to achieve the ultimate goal of building a complex networked system model that faithfully reflects a real system, we build a framework of CNS-based DT, where the modelling can be decomposed into a series of tasks that require modelling methods from both the DT and CNS spaces, while setting the half-way point as the DT-orientated CNS. We start by listing the research gaps that will guide us in setting goals and tasks for future research.
1) Current research gaps: Based on the five-generation modelling framework proposed in section IV and the conducted review of the state of the art, current research realises the modelling frameworks of generations 1 and 2, and a small number of approaches reach generation 3. There is no research on generations 4 and 5, where further studies are required to model these very complex scenarios. To build CNSs under generation 4 or 5, and in this way achieve DT-orientated CNS, seven research gaps need to be tackled: 1) fulfilment of external tasks (model aims) while faithfully mimicking the inner rules of the real system; 2) meaningful feature extraction and model selection that enables the network representation to preserve as much information as needed for model-aim fulfilment; 3) network simulation via models built with interpretable inner rules which are able, at the same time, to deal with structural observability and dynamical observability problems; 4) dynamics of networks that not only focuses on topology change but also incorporates attribute changes; 5) modelling dynamically changing dynamic processes in a way that allows for continuous parameter changes under the impact of network changes; 6) real-time data acquisition and processing for CNS modelling; 7) the establishment of a feedback loop that enables continuous updates of the CNS and changes of real systems with reference to the CNS modelling.

2) Set the half-way point: DT-orientated CNS: DT-orientated CNS emerges with the convergence of DT and CNS modelling approaches, where CNS models approach reality by introducing DT features to the modelling process, while a DT incorporates networked information through blockchain, knowledge graphs or IoT to assist data processing and modelling. Studies on CNS generally focus on a single model aim and simple research objectives with a predefined set of assumptions. Also, a vast majority of them use historical rather than streaming and continuous data.
DTs can encompass the functions of various tools such as simulation, optimisation and data analytics [90] via real-time processing and updates, with research objects ranging from a single product to the whole of society, and with relaxed assumptions that allow for CNS representation and modelling in the structural, temporal, spatial and dynamics complexity dimensions. As a CNS approaches a DT, more assumptions are relaxed, more model's aims are fulfilled, and more complex features can be modelled via more efficient data processing and modelling.
There are still several challenges to overcome and trade-offs to be made on the way to a DT-orientated CNS: (i) the trade-off between model performance and multiple model's aims, (ii) the trade-off between controllability and complexity, and (iii) the trade-off between efficiency and accuracy. The existence of multiple model's aims poses a demanding challenge for modelling, which can only be met by compromising model performance for a given model's aim. As the complexity of a CNS increases, with richer information about network components represented and more complex inner rules modelled, it becomes more difficult to control the CNS with a limited number of features. Achieving higher accuracy, which is a measure of model performance for external tasks, also requires more complex CNS structures and dynamics, diminishing the efficiency of CNS representation and modelling.
A good DT-orientated CNS is a CNS that simulates or models the necessary reality to achieve the predetermined model's aim with DT-level efficiency; it need not be a fully fledged DT, but it must deal with the aforementioned trade-offs. Unified assessment criteria, composed of mathematical measures for the particular model's aims of the CNS and DT evaluation metrics that reflect model efficiency, can be used to assess a DT-orientated CNS in a dynamic way, assisting model selection and updates. Modelling paradigms of DTs across disciplines can also be introduced to CNS to go further along the road to a DT. Therefore, the modelling tasks of a DT-orientated CNS and the research gaps are mainly about the resolution of trade-offs between performance (output accuracy, input controllability, model efficiency) and complexity, the dynamic assessment of evolving dynamics, and the introduction of DT features and paradigms to CNS.
3) Embarking on the modelling tasks: The modelling tasks of a DT-orientated CNS should build on the modelling tasks of CNS and DT detailed in section V-A and the assessment approach from the unified viewpoint of CNS and DT in section V-B. Network representation involves tasks of data processing, data analysis and variable selection, while the training of node and relationship dynamics and dynamics over networks can be summarised as a data modelling process, in which trust in the data, the model and the updating procedure should be considered together with the requirements of observability, similarity and synchronisation. Therefore, the modelling tasks of a DT-orientated CNS can be generally categorised as: 1) data processing that copes with the imperfection of data; 2) data analysis and feature selection that considers observability and controllability; 3) network representation based on the selected variables and similarity measures; 4) modelling of real-time self-evolving dynamics; 5) model updates enabled by reconfiguration and reconstruction; 6) model evaluation over the entire process.
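The six tasks above can be sketched as a minimal end-to-end pipeline. The sketch below is pure Python with illustrative thresholds, toy data and quality measures chosen by us; it is not an implementation of any specific DT system, only a structural illustration of how the tasks chain together:

```python
import math

def process(raw):
    """Task 1: cope with data imperfection -- drop records with missing values."""
    return [r for r in raw if all(v is not None for v in r)]

def select_features(rows, k=2):
    """Task 2: keep the k highest-variance features (a crude observability proxy)."""
    cols = list(zip(*rows))
    def var(c):
        m = sum(c) / len(c)
        return sum((x - m) ** 2 for x in c) / len(c)
    keep = sorted(range(len(cols)), key=lambda i: -var(cols[i]))[:k]
    return [[r[i] for i in keep] for r in rows], keep

def build_network(rows, thr=0.9):
    """Task 3: similarity-based network representation (cosine similarity)."""
    def cos(a, b):
        den = math.hypot(*a) * math.hypot(*b)
        return sum(x * y for x, y in zip(a, b)) / den if den else 0.0
    n = len(rows)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if cos(rows[i], rows[j]) > thr}

def step_dynamics(state, edges, alpha=0.5):
    """Task 4: one step of a diffusion-like dynamic process over the network."""
    new = dict(state)
    for i, j in edges:
        mid = (state[i] + state[j]) / 2
        new[i] += alpha * (mid - state[i])
        new[j] += alpha * (mid - state[j])
    return new

def update_network(rows, thr=0.9):
    """Task 5: model update by reconstruction as new data arrives."""
    return build_network(rows, thr)

def evaluate(edges, n):
    """Task 6: a toy quality measure -- network density in [0, 1]."""
    return len(edges) / (n * (n - 1) / 2)

raw = [[1.0, 2.0, 0.1], [1.1, 2.2, None], [0.9, 1.9, 0.1], [5.0, 9.0, 0.2]]
rows = process(raw)                        # one imperfect record dropped
feats, kept = select_features(rows)        # two most informative features kept
edges = build_network(feats)               # network representation
state = step_dynamics({i: float(i) for i in range(len(rows))}, edges)
edges = update_network(feats)              # would be re-run as data streams in
print(len(rows), sorted(kept), evaluate(edges, len(rows)))
```

The diffusion step conserves the total state value (each edge moves both endpoints symmetrically toward their midpoint), which makes the toy dynamics easy to sanity-check.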
For task (1), data processing and data management, uncertainty analysis has been utilised to deal with data imperfection, while data fusion has emerged as a prevalent way to capture reliable, valuable and accurate information. Knowledge graphs and blockchain have also been popular choices for data integration and information retrieval. However, it is challenging to ensure the efficiency of data processing and data management, especially given the "real-time" feature of a DT and the requirements on data quality, so further research is needed here. There is also the issue of data sparsity, which calls for further research on the simulation of CNS to deal with the unobservability and unavailability of data.
Task (2), data analysis and variable selection that considers observability and controllability, has been studied in the context of CNS over the years. However, more effort is needed given the high demand for adaptability as CNS approach reality. Specifically, for simulation-based networks, how to choose the changeable variables that drive the evolution of networks while preserving the characteristics of the real-world situation is an interesting research gap, given the data scarcity that results from data-security rules.
Task (3), network representation based on the selected variables and similarity measures, is a thoroughly studied research area, whereas CNS built with DT approaches that emphasise network properties have not been deeply studied. Spatio-temporal networks, together with the interrelation and interconnection of dynamics within or over such networks, are very interesting topics for further study.
Task (4), modelling of real-time self-evolving dynamics enabled by continuous machine learning, is the core element of building a DT, where the interrelations between dynamics in CNS remain an unexplored area, especially the mutual influence of dynamics on and of networks. More specifically, the evolving dynamics on and of a network is most relevant to the case-dependent autonomy of a DT system, which can be autonomous, not autonomous, or partly autonomous; here, research on interventions in networked systems can be further introduced with context-awareness and autonomy.
Task (5), model updates enabled by reconfiguration and reconstruction, is closely related to task (4), where the construction of a feedback loop is crucial for the continuous modelling process. There is research on DTs built from the perspective of discrete state transitions; how to narrow the time gap between states and extend the state transition to continuous modelling remains a research gap.
Task (6), model evaluation over the entire modelling process of a DT-orientated CNS, needs a unified assessment framework in which concrete measures that consider the features of both DT and CNS should be explored. There is already literature on measures for network analysis and DT analysis, though a principled combination of the two, or new integrated quality measures, remains an outstanding research gap.
The above tasks show the complexity of the research that needs to be accomplished on the way towards DT-orientated Complex Networked Systems.
VI. CONCLUSIONS
This survey focuses on the modelling approaches for Complex Networked Systems that pave the path towards the ultimate goal: a Digital Twin of a CNS.
We review and discuss CNS from three perspectives: (i) the model's aims that have been studied for CNS (see section II), (ii) the modelling paradigms that enable a networked system to be represented in a way that preserves as much information as needed (see section III), and (iii) the modelling approaches for dynamics of networks and dynamics over networks that enable model's aims to be met (see section IV). These themes are discussed through the lenses of the four complexity dimensions of complex systems that we propose: (i) structural, (ii) temporal, (iii) dynamics and (iv) spatial. A discussion that considers these complexity dimensions enables a better understanding of current modelling challenges and quantifies how far we are from achieving Digital Twin modelling capabilities when representing networked systems.
The model's aims for CNS differ in whether they focus on an external task to be performed on the system or on modelling the inner rules of real systems; this division and specialisation can be eliminated when a Digital Twin is considered, since it is able to undertake multiple external tasks by faithfully covering and reflecting the complexities of real systems. Models of CNS proposed over the years have been found to approach real systems with increasing complexity in the structural, temporal, spatial and dynamics dimensions. To generate and preserve this heterogeneous networked information, modelling paradigms for network representation become more complex at the cost of interpretability. These models either focus on the inner rules of network generation at a local level or aim at a compressed network representation at a global level, but all converge towards the goal of a faithful representation of real systems.
Dynamics of networks, dynamic processes on networks and their interrelations are the three elementary sources of complexity for dynamics in CNS. To navigate a pathway through the different levels of complexity in modelling CNS, we devise a modelling framework of CNS that considers all three elements and consists of five generations reflecting the progress of work in this field. Each generation builds upon the previous one, meaning that the next generation encompasses higher complexity levels than the previous one. This modelling framework is agnostic to the model's aim, so any of the discussed aims can be attempted using models built within each generation, though one needs to remember that models from different generations will achieve a selected aim to different extents. The proposed framework also shows how models of Complex Networked Systems approach a Digital Twin with increasing complexity through the generations: (i) generation 1: a dynamic process on static networks; (ii) generation 2, with two variations: a dynamic process on evolving networks, and an evolving dynamic process on static networks; (iii) generation 3: evolving dynamic processes on evolving networks with interrelations between them; (iv) generation 4: temporal dynamic processes on temporal networks with interrelations between them and the acquisition of real-time information; and finally (v) generation 5, which extends the modelling framework of generation 4 with information feedback from the CNS's model to the real system. From generation 1 to generation 5, the real system can be represented more faithfully with richer information captured, and finally a CNS-based DT can be created in generation 5. Current studies have made good progress under the modelling frameworks of generations 1 and 2. Only a small number of approaches reach generation 3.
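As a toy illustration of the jump from generation 1 to generation 2, the following pure-Python sketch runs a simple susceptible-infected (SI) spreading process on a fixed ring network versus one whose edges are randomly rewired at every step. All parameters (network size, infection probability, rewiring probability) are hypothetical choices of ours, not values from any cited study:

```python
import random

def ring_graph(n):
    """Adjacency sets for an n-node ring network."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def rewire(adj, p):
    """Generation-2 ingredient: with probability p, move one endpoint of an
    edge to a random node (self-loops are simply dropped in this toy model)."""
    n = len(adj)
    edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    new_adj = {i: set() for i in range(n)}
    for u, v in edges:
        if random.random() < p:
            v = random.randrange(n)
            if v == u:
                continue
        new_adj[u].add(v)
        new_adj[v].add(u)
    return new_adj

def si_step(adj, infected, beta):
    """One step of an SI process: each infected node infects each susceptible
    neighbour with probability beta."""
    new = set(infected)
    for u in infected:
        for v in adj[u]:
            if v not in new and random.random() < beta:
                new.add(v)
    return new

def simulate(n=50, steps=30, beta=0.3, rewire_p=0.0, seed=1):
    random.seed(seed)
    adj, infected = ring_graph(n), {0}
    for _ in range(steps):
        if rewire_p > 0:              # evolving network: generation 2
            adj = rewire(adj, rewire_p)
        infected = si_step(adj, infected, beta)
    return len(infected)

# Static network (generation 1) vs evolving network (generation 2):
print(simulate(rewire_p=0.0), simulate(rewire_p=0.2))
```

On the static ring the process can only creep along two fronts, whereas rewiring creates shortcuts, which is exactly the kind of interplay between network evolution and process dynamics that the higher generations are meant to capture.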
For generations 4 and 5, there is no research in this space, and further studies are required to model these very complex scenarios to achieve better performance in the context of any of the presented model's aims.
To be more aware of how to approach a Digital Twin with CNS, we propose an assessment framework (see section V) that aims at quantifying the distance of a CNS from a DT from the perspective of the CNS model's aim fulfilment and the perspective of a DT's faithful representation of reality. A half-way point, referred to as a DT-orientated CNS, is proposed to bridge the gap between the current approaches to modelling CNS and the ultimate goal of a DT (generation 5 models) for future study.
The goal of future research in the space of complex networked systems, and network science more broadly, is to develop a DT-orientated CNS that addresses the research gaps presented in Section V-C. Integrating dynamic networks with dynamic processes and allowing for mutual influence between them, while at the same time allowing for continuous adaptation of the system using streaming data as an input, will make it possible to create DT-orientated CNSs. This will be a major breakthrough in the modelling of CNS.
Existence of potable water is considered as one of the important issues that are related to the survival of human life, especially in fresh water scarce areas. So, it is necessary to find a solution to this problem. In the current work, the productivity of fresh water in conventional single-slope single-basin solar still is increased by using two modification methods. The first method is reflecting the solar ray to the still basin by using aluminum foils that are pasted on the interior surfaces of the still walls. This method will enhance the fallen solar rays on basin water and reduce heat losses. The second method increases the evaporation surface area by introducing blackened stainless steel balls with different diameters at the still basin. Balls of two diameters chosen: 5 and 10 mm. The experimental results show that the productivity of solar still with 10 mm-diameter balls is higher than that of the conventional solar still by 38.07%. The corresponding values of the stills with 5 mm-diameter balls and aluminum foils are 31.41 and 14.87%, respectively. The thermal efficiency of the highest productivity solar still is 27.81%. Other stills are characterized by lower thermal efficiencies by various rates.
Introduction
For the survival of humanity, fresh water is urgently needed. High ratio of water resources on the earth is oceans and seas. Water from these resources has high salt concentration, which is not suitable for human consumption. The other water resources such as lakes, rivers, marshes, and underground water supply fresh water that is not completely fresh to match the international standard due to the existence of bacteria, viruses, and undesirable impurities. As the population increases, the demand for drinking water also increases. So, the development of water purification systems is important to keep human life and reduce the danger of water scarcity on our planet. Thermal energy, generated from the burning of fossil or hydrocarbon fuels, is widely used to convert brackish or impure water into potable water by the water distillation process. However, this method is not environment friendly due to air pollution from the exhaust that contains carbon dioxide and monoxide gases. So, to produce fresh water with keeping clean environment, solar energy can be used as thermal energy for the water distillation process. Solar energy is free and abundantly available in most days of the year. Different solar distillation systems are used to produce fresh water, and a solar still is one of them. Solar still is a simple device constructed from an insulated metallic box enclosed by a glass cover. Impure water is kept inside the solar still bottom, which is a rectangular basin with black inside surfaces. Sun ray passes through the transparent cover to reach basin water that is heated and evaporated. Due to the difference between temperatures of the water vapor and cover inside surface, fresh water is produced by water vapor condensation and is collected outside the solar still. Compared with other solar distillation systems, productivity and efficiency of a solar still are low. 
In several studies, researchers have been attempted to enhance fresh water productivity using a solar still by modifying its design for different operational conditions. Several methods have been implemented to enhance the productivity of conventional solar still such as increasing its evaporating surface area or using absorbing materials by introducing additional geometries at the still basin. Al-Karaghouli and Minasian [1] concluded that the yield of solar still increased with the use of floating type wick that was the cause of the increase in the evaporating surface area. This was also presented in the study by Jani and Modi [2]. Akash et al. [3] observed that the distillate yield of solar still increased by 35 to 60% by using various types of heat storage materials such as rubber mat, black dye, and black ink. Nafey et al. [4] modified a conventional solar still by using black rubber sheet with different thicknesses (2, 6, and 10 mm) and black gravel with different sizes (7-12, 12-20, and 20-30) as absorbing materials. Also, the obtained daily distillate output increased by 20% using 10 mm-thick black rubber and by 19% using 20-30 mm black gravel. Naim and Abd El Kawi [5] enhanced the evaporation surface area of solar still by using charcoal particles. Abu-Hijleh and Rababa'h [6] used black and yellow sponge cubes, black steel cubes, and coal cubes to increase fresh water productivity of a conventional solar still. They found that the still distillate output increased by about 255% using sponge cubes.
Velmurugan et al. [7] showed 29.6, 15.3, and 45.5% increase in fresh water productivity of a conventional solar still using wick, sponges, and fins, resepctively, at the still basin. Badran [8] compared the operational parameters of a solar still with and without the asphalt basin liner. It was observed that with the modification method, the efficiency of the solar still enhanced up to 51%. Abdallah et al. [9] used three types of absorbing materials at the still basin to modify the performance of solar still. The absorbing materials were uncoated metallic wiry sponge, coated metallic wiry sponge, and black volcanic rocks. Fresh water productivities of the modified solar stills were 28 and 43% using coated and uncoated metallic wiry sponge, respectively, and 60% using black volcanic rocks. Kabeel [10] modified the daily productivity of solar still with four pyramid-shaped sides cover by using a concave jute wick. In the experiments conducted by Sakthivel et al. [11], a conventional solar still was modified by using jute cloths placed at its middle and rear wall. The results showed that the increase in the still efficiency was 8% due to this modification. As well as, the daily productivity of fresh water increased by about 20%. Srivastava and Agrawal [12] conducted an experiment to improve the performance of conventional solar still by using blackened jute cloth at the still basin. In this study, pieces of blackened jute cloth were porous absorbers to modify a conventional solar still. In case of the modified solar still, fresh water production increased by 68 and 35% during clear and cloudy days, respectively. In the study by Srivastava and Agrawal [13], it was observed that the maximum distillate output of modified solar still was about / 7.5 kg m 2 , which was the result of using extended porous fins manufactured from blackened old cotton rags at the still basin. This amount of fresh water was 15% higher than that of the conventional solar still.
Ahmed [14] studied the effect of five different wick materials on performance of conventional solar still. The wicks covered the whole area of the still basin. It was observed that the distilled output value of still with black cotton fabric was the highest. Different energy storage materials were experimentally used in the study by Samuel et al. [15] to increase the distillate output of a conventional solar still. Spherical salt balls and sponge were used in this process. It was shown that the daily distillate output of the conventional solar still was 2.2 kg/m 2 , and it increased due to the effect of the energy storage materials to 3.7 kg/m 2 with spherical salt balls and 2.7 kg/m 2 with sponge. Alaian et al. [16] performed an experiment to enhance the productivity of conventional solar still by using pin-fin wick. The results showed that the efficiency of modified solar still was 55% with higher productivity by 23%. Sellami et al. [17] experimentally evaluated the performance of a solar still by employing blackened sponge sheets of different thicknesses pasted on the still basin. The study found that the decrease in the thickness of sponge sheet led to the increase in the yield of solar still. Fresh water productivity increased by 23.03% with 10 mm-thick sponge sheet, while using 5 mm-thick sponge sheet enhanced the distillate output by 57.77%. The performance of a solar still with the vertical rotating wick was presented by Haddad et al. [18]. It was seen that the modified still daily productivity was 7.17 and 5.03 kg/m 2 in summer and winter, respectively. The experiment was performed by Kabeel et al. [19] to enhance the productivity of conventional solar still by wrapping knitted jute cloths around sand heat energy storages. The daily distillate output of modified solar still was 5.9 kg/m 2 , which was 18% more than that of the conventional solar still.
Ahmed [14] studied the effect of five different wick materials on the performance of a conventional solar still; the wicks covered the whole area of the still basin. It was observed that the distillate output of the still with black cotton fabric was the highest. Different energy storage materials were experimentally used in the study by Samuel et al. [15] to increase the distillate output of a conventional solar still: spherical salt balls and sponge. It was shown that the daily distillate output of the conventional solar still was 2.2 kg/m², and that it increased to 3.7 kg/m² with spherical salt balls and 2.7 kg/m² with sponge. Alaian et al. [16] performed an experiment to enhance the productivity of a conventional solar still by using a pin-fin wick. The results showed that the efficiency of the modified solar still was 55%, with productivity higher by 23%. Sellami et al. [17] experimentally evaluated the performance of a solar still by employing blackened sponge sheets of different thicknesses pasted on the still basin. The study found that decreasing the thickness of the sponge sheet increased the yield of the solar still: fresh water productivity increased by 23.03% with a 10 mm-thick sponge sheet and by 57.77% with a 5 mm-thick sheet. The performance of a solar still with a vertical rotating wick was presented by Haddad et al. [18]; the modified still's daily productivity was 7.17 and 5.03 kg/m² in summer and winter, respectively. An experiment was performed by Kabeel et al. [19] to enhance the productivity of a conventional solar still by wrapping knitted jute cloths around sand heat energy storages. The daily distillate output of the modified solar still was 5.9 kg/m², 18% more than that of the conventional solar still.
Rashidi et al. [20] introduced black sponge rubber into a conventional solar still to improve its performance; the amount of fresh water produced by the modified solar still was higher than that of the conventional one by 17.35%. Carbon-impregnated foam as a porous absorber and bubble-wrap insulation were used by Arunkumar et al. [21] to modify a conventional solar still. The results showed that the modified solar still provided different amounts of fresh water per day depending on the materials used: 1.9 L/m² without bubble-wrap insulation, 2.3 L/m² with bubble-wrap insulation, 3.1 L/m² with both bubble-wrap insulation and the porous absorber, and 2.2 L/m² with wooden insulation only. V-shaped floating wicks were placed at the basin of a conventional solar still in the study by Agrawal and Rana [22]; the productivity and efficiency of the modified solar still were 6.20 kg/m² and 56.62% in summer and 3.23 kg/m² and 47.75% in winter, respectively. Bhargva and Yadav [23] compared the thermal performance of solar stills modified with different rectangular fin wicks: bamboo cotton, jute, wool, and cotton. The efficiency and daily productivity obtained with the bamboo cotton wick, 34.5% and 3.03 L/m², were the highest in the experiments. Modi and Modi [24] conducted an experiment to enhance the productivity of a single-slope double-basin solar still by employing jute and black cotton cloths as basin wicks, arranged as a small pile over the basin plate; the distillate output of the still with jute cloth was higher than that with black cotton cloth. Jaafar et al. [25] experimentally enhanced the thermal performance of a single-basin single-slope solar still by introducing various basin wicks. The first wick was an iron mesh with a grid spacing of 25 × 25 mm; the grid spacing was increased to 50 × 50 mm in the second wick. It was observed that the efficiency of the solar still increased by 86.65% with the first wick and by 72.53% with the second wick.
In the study by Tiwari and Tiwari [26], the effect of basin water depth on the productivity of the solar still was investigated. It was shown that a decrease in basin water depth led to an increase in the productivity of the solar still. This relation between the productivity of the solar still and the basin water depth was also presented by Modi and Modi [24], Agrawal et al. [27], Jaimes et al. [28] and Kumar et al. [29]. It was also documented in the experiment of Jaimes et al. [28] that an increase in the efficiency of the solar still resulted from a decrease in the thickness of the still condenser (glass cover).
In the present work, experiments are performed to modify a conventional single-slope single-basin solar still (referred to hereafter as the CS) using two modification methods. In the first modification method, the absorber surface of the solar still is increased to enhance the distillate output through an increase in the evaporation surface area. The solar still is modified by placing blackened stainless steel balls at the still basin, with balls of two sizes giving two modified solar stills. The first modified solar still is a single-slope single-basin solar still in which 10 mm-diameter balls are used (referred to hereafter as the BS1). In the second modified solar still, 5 mm-diameter balls are used to increase the evaporation surface area of the single-slope single-basin solar still (referred to hereafter as the BS2). It is known that increasing the evaporation surface area of a solar still will increase fresh water productivity. In the second modification method, 0.5 mm-thick aluminum foils (reflection coefficient near 0.8) are pasted on the inner surfaces of the solar still sides, except the bottom. The aluminum foils increase the effective solar radiation by reflecting it back toward the still basin and reduce the heat loss through the walls. This case study will be referred to hereafter as the MS.
The present study sheds light on the increase in fresh water productivity achieved using aluminum foils or blackened stainless steel balls and finds the relation between the ball size and the distillate output of the solar still. In addition, the experiments are conducted to evaluate the thermal performance of the conventional and modified solar stills during summer by measuring different parameters: the hourly temperatures of the basin water, the hourly temperatures of the vapor, the hourly temperatures of the inner and outer surfaces of the glass cover, and the hourly and daily productivities of fresh water. The hourly and total efficiencies of the solar stills are then calculated.
Experimental setup
The photograph and line diagram of the conventional solar still (CS) are shown in Figures 1 and 2, respectively. The single-slope single-basin solar still was locally manufactured using available materials. The basin and sides of the still were fabricated from 1.5 mm-thick galvanized iron sheets (specific heat capacity and thermal conductivity of 0.462 kJ/kg·K and 73 W/m·K, respectively) to construct the still box, which is open at the top. To minimize heat losses, the outside surfaces of the box were insulated with a 25 mm-thick white cork layer (thermal conductivity of 0.045 W/m·K). The inside surface of the basin was painted with a muddy black paint (absorptivity 0.88) to enable the maximum absorption of solar radiation. The inner dimensions of the basin are 100 × 100 cm. The heights of the higher (right) and lower (left) sides of the box are 528 and 60 mm, respectively. The box was covered with a condensing surface made of a 4 mm-thick window-type glass sheet (average transmissivity 0.88). A rubber gasket was applied between the box edges and the glazing cover to avoid leakage of the inside vapor. Holes were also provided in the still body to fix thermocouples. To collect the condensed fresh water that flows down the tilted glass cover, a U-shaped galvanized steel channel was used as the distillate water channel. This channel was fitted on the lower side of the still and joined to a container by a flexible tube. The container was located outside the still to collect fresh water, which was then poured into a measuring jar to measure the amount of distillate water at each hour of the experiment. To speed up the condensate flow and avoid re-evaporation, the collection channel was inclined by 5° toward the fresh water container. To keep the solar still off the ground and protect its components, a wooden frame was designed to cover the whole still except its cover. The basin water depth for all current case studies was 15 mm.
One side of the still was joined to a tank of brackish water by a flexible tube. The brackish water tank is a closed plastic water storage (with a salinity of 1,850 ppm).
A blow-off valve was fitted under the basin bottom for cleaning operation after each experiment.
The experiments were carried out in Najaf city, Iraq (latitude 32.0259°N, longitude 44.3463°E). Thus, the tilt angle of the condenser is 32° with the horizontal to obtain the maximum solar radiation on the still throughout the day [30].
To modify the CS, two modification methods were implemented: the first gives the BS1 and BS2, and the other gives the MS. For the experiments on the BS1 and BS2, stainless steel balls (specific heat capacity and thermal conductivity of 0.468 kJ/kg·K and 20 W/m·K, respectively) were randomly distributed at the still basin of the CS to increase the evaporation surface area. The balls were painted with black muddy paint. For the BS1, 100 balls of 10 mm diameter were used, while 100 balls of 5 mm diameter were used for the BS2.
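The extra surface contributed by the balls can be estimated from simple sphere geometry. This is only a rough geometric upper bound of our own (the paper does not report this figure): the submerged balls also shade part of the basin and lose area at contact points, so the effective evaporation gain will be smaller:

```python
import math

def added_ball_area(n_balls, diameter_m):
    """Total outer surface area of n spheres, pi * d^2 each (m^2)."""
    return n_balls * math.pi * diameter_m ** 2

basin_area = 1.0 * 1.0                    # 100 cm x 100 cm basin (m^2)
for d_mm in (10, 5):
    a = added_ball_area(100, d_mm / 1000)
    print(f"100 balls of {d_mm} mm: +{a:.4f} m^2 "
          f"({100 * a / basin_area:.1f}% of the basin area)")
# roughly +0.0314 m^2 (3.1%) for 10 mm balls and +0.0079 m^2 (0.8%) for 5 mm balls
```

The 10 mm balls add about four times the geometric area of the 5 mm balls (area scales with the square of diameter at fixed count), which is consistent with the BS1 outperforming the BS2 in the reported results.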
For the MS, 0.5 mm-thick aluminum foils (reflection coefficient near 0.8) were pasted on the inner surfaces of the CS sides, except the bottom. The aluminum foils increase the effectiveness of solar radiation by reflecting it back toward the still basin and reduce the heat loss through the walls. Line diagrams of the BS1, BS2, and MS are illustrated in Figure 3. Properties of the CS and modified solar stills are presented in Table 1.
The experiments were conducted on May 8, 9, 10, and 11, 2021, where the first day was for the CS and the following days were for the BS1, BS2 and MS, respectively. The daily experimental period was from 8:00 to 16:00.
To capture as much solar energy as possible during all experiments, the solar stills were oriented from east to west, facing south. The wind speed was measured hourly by a digital anemometer (type AM-4206M). Temperatures of the ambient air, glass cover surfaces, vapor, and basin water were recorded at each experimental hour by calibrated K-type thermocouples, which were distributed at different locations inside and outside the solar still. The incident solar radiation was recorded hourly with a digital solar radiation meter (type TM-207), which was placed on a plane parallel to the glass cover plane, i.e., at the same condenser tilt angle (32°). More details about the measuring instruments employed here can be found in ref. [31]. Table 2 details the accuracy and range of the measuring instruments.
Experimental uncertainty analysis
In the present work, the measuring instruments are K-type thermocouples for temperature measurements, a pyranometer for measuring the solar radiation intensity, and an anemometer for measuring the wind speed. To identify the experimental uncertainty, these devices are calibrated by comparing their readings with those of standard equipment under the same measurement conditions. It is worth pointing out that the range and accuracy of the instruments can affect the accuracy of the measurements.
The K-type thermocouples were calibrated using a standard mercury thermometer, with both placed in the same temperature bath over a range of temperatures (from 0 to 80 °C). The comparison result is shown in Figure 4. To calibrate the measuring devices for solar radiation intensity and wind speed, a standard Davis weather station located 10 m above the ground at Najaf Engineering Technical College, Iraq, was used. The ranges of solar radiation intensity and wind speed that can be measured by this station are 0-1,800 W/m² with an accuracy of ±0.3% and 0.1-89 m/s with an accuracy of ±5%, respectively. Figures 5 and 6 show the reading error ratio for the pyranometer and anemometer, respectively.
Productivity and thermal efficiency
In the current work, the productivity of potable water was collected at each experimental hour. The daily or total productivity, which is the amount of the cumulated fresh water within the daily working hours (d.w.h) of the solar still, is calculated from: The fresh water productivity enhancement (p enh. ) of solar still is evaluated as: ( ) p d m and ( ) p d c are the total productivity of fresh water from the modified and conventional solar still, respectively.
As presented in refs [32][33][34][35], the hourly thermal efficiency of solar still is defined as the ratio of the heat transfer per unit mass (q e ) by evaporation-condensation in the still to the incident solar radiation (I) in the still.
The hourly productivity of fresh water (in kg/h) is defined as follows:

p_h = 3600 · q_e · A / L (3)

where A is the basin area. So, the hourly thermal efficiency of the solar still is calculated as follows:

η_h = p_h · L / (3600 · I · A) × 100% (4)

Thus, the total thermal efficiency of the solar still is expressed as follows:

η_d = Σ(p_h · L) / Σ(3600 · I · A) × 100% (5)

The latent heat of vaporization L (in J/kg) is calculated based on ref. [36] or taken from online tables [37].
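A minimal Python sketch of the daily-productivity and total thermal efficiency relations described above; the latent heat value, basin area, and hourly figures below are assumed placeholders, not the paper's data.

```python
# Sketch of the daily productivity and thermal efficiency relations.
# The latent heat, basin area, and hourly values are assumed placeholders.
LATENT_HEAT = 2.33e6   # J/kg, approximate latent heat of vaporization
AREA = 1.0             # m^2, assumed basin area

def daily_productivity(hourly_kg):
    """Total (daily) productivity: sum of hourly yields over working hours."""
    return sum(hourly_kg)

def total_efficiency_pct(hourly_kg, hourly_irradiance_w_m2,
                         area=AREA, latent=LATENT_HEAT):
    """Total thermal efficiency: evaporation energy over incident solar energy."""
    energy_out = sum(m * latent for m in hourly_kg)                      # J
    energy_in = sum(i * area * 3600.0 for i in hourly_irradiance_w_m2)   # J
    return 100.0 * energy_out / energy_in

print(daily_productivity([0.2, 0.3, 0.375]))             # 0.875
print(round(total_efficiency_pct([0.375], [900.0]), 2))  # 26.97
```

The efficiency is simply the energy carried away by the evaporated water (mass times latent heat) divided by the solar energy incident on the basin over the same hours.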
Results and discussion
The ambient conditions for the experimental days are shown in Figure 7. It is clearly seen that there is no considerable difference among the hourly temperatures and solar radiation at each experimental hour across the experimental days. This is because the experiments were carried out on consecutive days in the same month (May 8-11, 2021). So, it can be concluded that the difference between the results of the CS and the modified solar stills is due to the effect of the modification methods on the performance of the solar still, i.e., conducting the experiments on different days had no effect on the results. The measured results for all current case studies are detailed in Table 3, which presents the basin water temperature (Tw), vapor temperature (Tv), and Tgi and Tgo as the temperatures of the inside and outside surfaces of the glass cover, respectively.
For the CS, the basin water temperature at the start of the experiment was approximately equal to the other still temperatures, as shown in Figure 8. This behavior was also recorded for the other stills. During the experimental period, which extended from 8:00 to 16:00, the maximum basin water temperature was recorded at 13:00 for all stills, as shown in Figure 9. This behavior was also noticed for all other still temperatures, as shown in Figures 10-12, which present the variation of the temperatures of the vapor, inside glass cover, and outside glass cover, respectively, with the local time of the experiments.
The distributions of the temperatures at different points of the current stills, shown in Figures 9-12, provide comparisons among the different still temperatures. It is clearly seen that at 8:00 there was no considerable difference among the temperatures of all current stills. The reason is that at 8:00 the experiments had just started, and the recorded temperatures were very close to the ambient temperatures, which were approximately similar for all days of the experiments. In the next hour, the temperature difference shows that the basin water and vapor temperatures of the MS were higher than those of the other stills, as shown in Figures 9-12. That is because the aluminum foils pasted on the inside surfaces of the MS reduced the heat losses through the still walls. In addition, the presence of the aluminum foils in the MS increased the solar energy absorbed by the basin water due to the reflection of solar rays. Regarding the BS1 and BS2, it is normal for their temperatures to be lower than the MS temperatures, because the stainless steel balls absorbed some of the solar energy that reached the basin water during this time.
In the interval between 10:00 and 16:00, the differences in the temperatures of all parameters for all stills become clear and significant, as shown in Figures 9-12. It is clearly shown that the temperatures of the basin water, vapor and condenser sides of the BS2 were the highest compared with those of the other stills. The maximum measured values of Tw, Tv, Tgi, and Tgo in the BS2 were 76.1, 70.4, 66.2, and 65.4°C, respectively. The temperature difference between the BS2 and BS1 was lower than that between the BS2 and MS, suggesting that using the stainless steel balls increased the still parameter temperatures more than using the aluminum foils. The lowest recorded temperatures between 10:00 and 16:00 were those of the CS. This means that all the currently proposed modification methods improve the performance of the conventional solar still, as proved by the increase in all their parameter temperatures compared with those of the CS.
Regarding the effect of the stainless steel ball sizes used in the BS1 and BS2, there is no doubt that the increase in the ball size improves the performance of the solar still, as reflected in the increase in the still parameter temperatures shown in Figures 9-12. The fact that the use of the stainless steel balls improves the performance of the solar still is attributed to the increase in the evaporation surface area of the BS2 and BS1 with respect to the MS and CS. In the early hours of the experiments, the stainless steel balls act as energy storage by absorbing some of the incident solar energy. Then, this energy is released to the basin water, increasing its temperature and, consequently, the temperatures of the other parameters. This is in addition to the effect of increasing the evaporation surface area of the still basin.
The daily and hourly productivity of the current solar stills are illustrated in Figures 13 and 14, respectively. The BS2 total productivity of fresh water was 2.154 L; the corresponding values for the CS, MS, and BS1 were 1.56, 1.792, and 2.05 L, respectively, as clearly shown in Figure 13. This result is related to the highest performance of the BS2, which stems from the highest temperature ranges of this still. Figure 14 shows the variation of the fresh water productivity of all current case studies with the experimental time. The maximum amount of pure water was collected in the BS2 at 13:00, amounting to 375 mL, while the corresponding values for the CS, MS, and BS1 were 295, 310, and 358 mL, respectively. It is also shown in Figure 14 that the difference in fresh water productivity between the BS2 and BS1 at all experimental hours is small compared with the other stills. This is because the modification method implemented in the BS2 and BS1 was similar except for one difference, the ball size, which affected the still performance. The productivity enhancement of fresh water is presented in Figure 15. As shown in this figure, the productivity enhancement was 38.07% for the BS2; the corresponding values were 31.41 and 14.87% for the BS1 and MS, respectively.
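The reported enhancement percentages can be cross-checked directly from the daily totals quoted above:

```python
# Cross-check of the reported productivity enhancements from the
# measured daily totals (litres) given in the text.
totals = {"CS": 1.56, "MS": 1.792, "BS1": 2.05, "BS2": 2.154}

def enhancement_pct(p_modified, p_conventional):
    # Relative productivity gain of a modified still over the conventional one
    return 100.0 * (p_modified - p_conventional) / p_conventional

for name in ("MS", "BS1", "BS2"):
    print(name, round(enhancement_pct(totals[name], totals["CS"]), 2))
```

This reproduces the MS and BS1 values exactly (14.87% and 31.41%); the BS2 value comes to 38.08%, versus the 38.07% reported, a rounding difference.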
The variation of the hourly thermal efficiency with the experimental time, illustrated in Figure 16, shows close agreement between the values of this parameter for the BS1 and BS2. This is because the same modification method (using the stainless steel balls) was used in the BS1 and BS2. However, the BS2 thermal efficiencies are higher than the BS1 thermal efficiencies at all hours, an indication of the effect of the ball size on the solar still performance. The comparison among the total thermal efficiencies of the stills is presented in Figure 17. It is clearly seen that the increase in the total thermal efficiency using the MS, BS1, and BS2 is 2.94, 6.18, and 7.43%, respectively. This indicates that using the stainless steel balls has a considerably greater effect on improving the performance of the solar still than using the aluminum foils.
Economic feasibility
To shed light on the economic feasibility of the conventional and modified solar stills, their manufacturing costs and yearly productivity of fresh water will be considered.
The working life span of the current solar stills is five years. During that period, the maintenance cost for each still is estimated as 20% of its manufacturing cost. The total costs (including materials, manufacturing, and maintenance costs) of the present solar still systems are detailed in Table 4. In Iraq, the summer months are from May to October. These months are characterized by high ambient temperatures and sunny days. It can be assumed that the daily fresh water productivity obtained from the current experiments is similar for all days of summer. Due to the reduced ambient temperatures and the generally partially cloudy days in the other months of the year, their amount of produced potable water is estimated as half of that yielded in summer. So, the yearly production of pure water from the current solar stills can be estimated as presented in Table 5.
From Tables 4 and 5, we can calculate the cost of one liter of produced fresh water during the working life span of the still as follows:

Cost of one liter = Still total cost / Produced water amount (8)

The cost of producing one liter of fresh water from the current solar stills during their working life span is shown in Table 6. The price of one liter of potable water in the markets is 0.2 USD. So, the money saving from using each current solar still is detailed in Table 6. In addition, Table 6 presents the payback period for each solar still. The payback period is calculated according to [38] from the following equation:
Payback period = Still total cost / Price of yearly purchased water (9)

In Table 6, it is clearly seen that investment in the field of fresh water production using the BS2 is the best choice. The payback period for the BS2 is less than that of the other current solar stills.
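Equations (8) and (9) can be sketched as follows; the still cost and yearly output used in the example are hypothetical placeholders, since the actual figures belong to Tables 4-6, which are not reproduced in this text.

```python
# Sketch of eqs (8) and (9). The example cost and yearly output are
# hypothetical values, not the figures from Tables 4-6.
WATER_PRICE_USD = 0.2   # market price of one liter of potable water (from the text)
LIFESPAN_YEARS = 5      # working life span of the stills (from the text)

def cost_per_liter(still_total_cost_usd, yearly_liters):
    # eq. (8): total cost over the water produced during the life span
    return still_total_cost_usd / (yearly_liters * LIFESPAN_YEARS)

def payback_years(still_total_cost_usd, yearly_liters):
    # eq. (9): total cost over the yearly market value of the water produced
    return still_total_cost_usd / (yearly_liters * WATER_PRICE_USD)

print(cost_per_liter(120.0, 300.0))   # 0.08 USD per liter
print(payback_years(120.0, 300.0))    # 2.0 years
```

With the hypothetical inputs, a still costing 120 USD that yields 300 L per year produces water at 0.08 USD/L and pays for itself in 2 years, well within the 5-year life span.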
Conclusion
The current study presents different modification methods to enhance the performance of the conventional solar still. It is concluded that the BS2 is the best, increasing the total productivity of fresh water and the thermal efficiency by 0.594 L and 7.43%, respectively. It is also shown that increasing the evaporation surface area using the BS1 and BS2 provides higher pure water production than reflecting the solar radiation back to the still basin, which is implemented in the MS. However, the evaporation surface area modification method (using stainless steel balls) is controlled by the ball sizes. The BS2, followed by the BS1, MS, and CS, has the highest temperatures of the still basin water, vapor, and glass cover sides. This is the reason for the highest productivity and thermal efficiency of the BS2.
An economic feasibility analysis of the current solar stills has also been performed. The cost analysis indicates that the payback period of the BS2 is 1.857 years, which is less than that of the other stills. The corresponding values are 2.350, 2.237, and 1.897 years for the CS, MS, and BS1, respectively.
For further work, it is suggested to use different geometries with the modification method that is associated with increasing the evaporation surface area, such as cylinders and cones. In addition, it is preferred to change the size of these geometries to investigate its effect on the productivity of fresh water in the solar still.
Analysis of stresses and deformations in the chassis of rough terrain forklifts
The outcome of studying the strength and the deformation characteristics of rough terrain forklifts, using a created 3D model and the finite element method (FEM), is presented herein. The tested chassis design features are described. The external loads of the chassis structure have been estimated for two operation modes: handling the load with a vertically elevated lifting mast, and carrying the load lifted off the ground with the lifting mast tilted backwards in transport mode. Two alternative payloads have been applied for each of the selected modes: the rated load at the standard position of the centre of gravity, and a reduced load at an increased centre-of-gravity distance. The resulting rates of stresses and deformations of the studied chassis of a rough terrain forklift have been calculated, presented and analysed with regard to the two main operating modes, both of which entail two loading alternatives determined by the weight and the location of the payload.
Object and purpose of the study
The manufacture of rough terrain forklifts and the relevant dedicated work equipment has shown a continuous trend of increasing volume in recent years. The reason thereof is the constantly increasing need for mechanising loading and unloading operations and some other types of work on particular varieties of terrain and also an improvement in productivity in the construction business, forestry and agriculture, etc. The fierce competition between the manufacturers of this equipment makes the point of increasing its functionality, strength, and operational reliability increasingly topical [6,11]. This is very much the case for the chassis design that constitutes a primary assembly of any rough terrain forklift [1,3,17,19].
In view of the fact that the chassis accounts for a large share of the metal, weighing from 17% to 20% of the forklift's total weight, optimising its design could reduce not only the production costs but also the operating costs [1,3,6,10,13].
Given the complex shape of the design components and assemblies, which is characteristic of rough terrain forklift chassis, there is not always an exact analytical solution for estimating stresses and deformations. Numerical methods, whereof the finite element method (FEM) has become the most widespread, could successfully solve the problem irrespective of the shape and the way of loading and fixing the body [2,9,12,15,18]. It makes FEM a very appropriate method of studying the phenomenon of stress concentration that is noticeable in abrupt and complex changes to the shape of the component or the assembly, the chassis of rough terrain forklifts being such. The resulting maximum stresses should not exceed the ones that are unsafe for the material.
Regarding ductile and tough materials, the yield limit R_eH (R_p0.2) shall be considered the dangerous stress, while for brittle materials the tensile strength R_m and the compression strength R_mc shall be considered the unsafe stresses. The strength characteristics of materials are to be specified experimentally [6,14]. Regarding the most often used machine-building materials, the reference books provide the values thereof [7].
The object of the study is a chassis of rough terrain forklifts featuring dual wheel drives; it is currently manufactured by the Balkancar Record JSC company [19].
The purpose of the study is to determine the stressed and deformed state of the structure of a rough terrain forklift chassis under two typical load conditions. When developing the model, the chassis has been subjected to the loads resulting from the forces of gravity of the main assemblies and units related thereto, such as engine, tanks, box, lifting mast, counterweight, etc.
The main purpose of the study is to identify the critical points in the chassis design based on the results of the strength and deformation analysis.
An FE model of the chassis and the stresses and the deformations thereto have been made using the SOLIDWORKS Simulation Xpress module within SOLIDWORKS 2019 [9].
Load conditions of the chassis of a rough terrain forklifts
The chassis selected for the study is used in the R2SR forklift series, manufactured by Balkancar Record JSC, with a 4×2 wheel drive formula and lifting capacities of 30, 40, and 50 kN. The chassis has a welded steel structure made of sheet material. The two side plates, left and right, constitute the main carrying elements of this chassis. They are bent into a П-shaped section. Since they are linked to the wings and the transverse shield at the front, and to the plates whereto the counterweight is fixed at the back, the result is a box-like enclosed form. The side plate thickness is 10 mm, and the material mostly used for these is steel ST355JR. Based on the experience acquired over the company's many years of business, the calculations have been made for the two most typical load conditions in the use of these machines [6]. It is known from the theory and the design of forklifts [1,3,6] that when moving the centre of gravity forward, with respect to the longitudinal axis of the forklift, the rated load should be reduced in order to preserve the machine's stability against overturning while handling any load [4,5,8].
Therefore, the second purpose of the study has been set: to identify and analyse the stressed and deformed state of the chassis structure at various positions of the centre of gravity and various rates of the payload, which ensure the longitudinal stability of the rough terrain forklift against overturning [4,5].
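As an illustration of this load-reduction rule, the sketch below fits a constant-overturning-moment load chart, Q · (C + a) = M, to the two rating points used in this study (50 kN at C = 600 mm and 35 kN at C = 900 mm). The offset a and the moment M are back-calculated here for illustration only and are not manufacturer data.

```python
# Illustrative load-chart sketch: a constant-overturning-moment rule
# Q * (C + a) = M fitted to the two rating points given in the text
# (50 kN at C = 600 mm, 35 kN at C = 900 mm). The offset a and moment M
# below are back-calculated for illustration, not manufacturer data.

A_OFFSET_MM = 100.0     # distance from load-centre datum to tipping axis (fitted)
MOMENT_KN_MM = 35000.0  # allowable moment about the tipping axis (fitted)

def allowable_load_kn(c_mm, a=A_OFFSET_MM, moment=MOMENT_KN_MM):
    """Allowable payload (kN) for a load-centre distance c (mm)."""
    return moment / (c_mm + a)

print(allowable_load_kn(600))  # 50.0
print(allowable_load_kn(900))  # 35.0
```

The fitted rule reproduces both rating points exactly, showing that the two load cases studied here are consistent with a single overturning-moment limit.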
Stresses and deformations in the chassis of a rough terrain forklift featuring dual wheel drive
3.1. Stresses and deformations of the chassis under the first load conditions featuring a standard centre of gravity at C = 600 mm and a rated load of Q = 50 kN.
The results of testing the strength under the first load conditions, featuring a standard centre of gravity of the load at C = 600 mm and a rated load of Q = 50 kN, are shown in Figure 3 and Figure 4.
Stresses and deformations of the chassis under the second load conditions featuring a standard centre of gravity at C = 600 mm and a rated load of Q = 50 kN.
The results of testing the strength under the second load conditions, featuring a standard centre of gravity of the load at C = 600 mm and a rated load of Q = 50 kN, are shown in Figure 5 and Figure 6.
Stresses and deformations in the chassis under the first load conditions featuring an increased centre of gravity of the load at С = 900 mm and reduced load of Q = 35 kN
The results of testing the strength under the first load conditions and featuring increased centre of gravity at C = 900 mm and reduced load of Q = 35 kN, are shown in Figure 7 and Figure 8.
Stresses and deformations of the chassis under the second load conditions featuring an increased centre of gravity at C = 900 mm and reduced load of Q = 35 kN
The results of testing the strength under the second load conditions and an increased centre of gravity at C = 900 mm and reduced load of Q = 35 kN, are shown in Figure 9 and Figure 10.
Results analysis
Based on the obtained results, the following conclusions could be drawn: 1) The critical points in the structure, where the highest stresses under both load conditions have been identified, could be reduced to three, shown in Figure 3: Point 1, the underwing plate used for fixing the drive axle; Point 2, the front upper section of the carrying side plate within the bracket of the tilting cylinder; Point 3, the rear section of the carrying side plate within the counterweight carrying plate. The estimated maximum static stress is σ_max = 156 MPa in the underwing plate (Point 1) in the first load condition, taking the load. This value has been confirmed by strain gauge tests of a chassis of this class of machines at the testing laboratory of Balkancar Record JSC. Pursuant to [3], the dynamism coefficient of the chassis and lifting mast of rough terrain forklifts featuring pneumatic tyres is C_d = 1.9. The maximum stress value multiplied by the dynamism coefficient results in the highest stress value of 296 MPa. Considering that the yield limit of the ST 355 JR steel, whereof the chassis components have been made, is 355 MPa, it may be assumed that this section of the construction is optimal.
2) There is a negligible difference in the values of the maximal static stresses between both load conditions. When the load is being lifted, the values are 3-6 MPa higher than the ones for the tilted lifting mast, which shows that operating the forklift in either load condition does not considerably affect the stresses; it is stability that matters more.
3) Regarding the calculations entailing an increased centre of gravity and reduced load at both load conditions, the registered stresses have been 25% lower compared to the ones entailing the rated load and a standard centre of gravity. In practice, this means that for a forklift optimally designed for rated loads and a standard centre of gravity, compliance with the load chart specified by the manufacturer shall ensure not only the required stability but also the strength and reliability of the forklift chassis structure.
4) The achieved results regarding the structure stresses in the other two critical points, 2 and 3, show that there is a sufficient reserve for additionally lightening the chassis structure by further reducing the thickness of the carrying side plate from 10 mm to 8 mm, in respect of forklifts of this series that feature smaller lifting capacity, i.e. 30 kN and 40 kN.
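The dynamic stress estimate in point 1 is plain arithmetic on the values stated there and can be checked numerically:

```python
# Numeric check of the dynamic stress estimate in point 1 above:
# maximum static stress times the dynamism coefficient, compared with
# the yield limit of the side-plate steel. All values are from the text.

SIGMA_STATIC_MPA = 156.0   # FEM maximum static stress (underwing plate)
C_DYNAMIC = 1.9            # dynamism coefficient for pneumatic-tyre forklifts [3]
YIELD_MPA = 355.0          # yield limit of ST 355 JR steel

sigma_dynamic = SIGMA_STATIC_MPA * C_DYNAMIC
print(round(sigma_dynamic, 1), sigma_dynamic < YIELD_MPA)  # 296.4 True
```

The dynamic stress of about 296 MPa stays below the 355 MPa yield limit, which is the basis for calling this section of the construction optimal.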
Conclusion
The resulting values of the stresses and the deformations in the chassis of rough terrain forklifts have been calculated, presented and analysed using the created 3D model applying the FEM under two main operation modes, each of which has undergone two alternative types of loading determined by the weight and the location of the payload.
A real assessment of the strength and the deformations of the chassis of rough terrain forklifts has been made by analysing the obtained numerical results, and changes to the chassis structure have been proposed, which determines the practicality and the applicability of this study.
Curcumins and its derivatives as potential inhibitors of New Coronavirus (COVID-19) main protease: an in silico strategy
The Coronavirus disease (COVID-19) outbreak caused a worldwide pandemic with a powerful lethal potential, and still there is no specific treatment for it. Natural bioactive molecules like curcumins were investigated in this work, aiming to block the active site of the COVID-19 main protease (Mpro), since they present several biological activities and are more suitable in terms of fewer side effects, once this disease overloads the immune system of patients. Hereby, curcumin and several derivatives were screened for their ability to react with Mpro receptors (PDB: 6LU7). N3, Azithromycin (AZT), and Baricitinib (BRT) were evaluated as positive controls and in combined therapeutic possibilities with curcumins. N3, AZT, and BRT bound to different protein receptor sites; it was observed that hexahydrocurcumin bound at the same site as N3, curcumin glucuronide bound at AZT's site, and bisdemethoxycurcumin, curcumin, curcumin sulfate, cyclocurcumin, demethoxycurcumin, dihydrocurcumin and hexahydrocurcuminol bound at BRT's site. All molecules analyzed have strong interaction force fields. Since the viral activity is mainly intracellular, these compounds were also evaluated for their hydropathic abilities. All molecules were classified and considered capable of invading the cell membrane. These results suggest that the therapeutic approach of the curcumin derivatives associated with AZT and the antiviral inhibitor N3 is promising for future evaluation of their synergism in in vitro and in vivo tests to define their additional viability in the treatment of COVID-19.
Introduction
The New Coronavirus (COVID-19) belongs to the Coronaviridae family, which comprises viruses with a single positive-sense RNA strand, known for their high degree of contagion and ability to infect a wide range of hosts, such as birds, swine and humans. COVID-19 in humans (HCOVID) has an infectious potential related to respiratory complications, ranging from the common cold to acute bronchitis and pneumonia (Fehr & Perlman, 2015; Mesel-Lemoine et al., 2012).
The disease first manifested itself through a case of pneumonia of unknown cause in Wuhan, China, reported to the World Health Organization (WHO) office in China on December 31, 2019. The coronavirus outbreak was declared a public health emergency, and the initial milestone of the worldwide pandemic, on January 30, 2020. According to the WHO (WHO, 2020), by December 3, 2021, there were 263,563,622 confirmed cases identified in all countries, areas and territories around the world, including 5,232,562 confirmed deaths, and a total of 7,859,585,168 vaccine doses had been administered. The symptoms of the pathology are nonspecific, such as cough, fever, and shortness of breath; however, they have a greater lethal potential (Ren et al., 2020; Rezaeetalab et al., 2020; WHO, 2021).
The main protease (Mpro), or 3C protease, responsible for viral replication, is formed from polyproteins 1A and 1AB (Hegyi & Ziebuhr, 2002; Pillaiyar et al., 2016; Wu et al., 2020; Zhou et al., 2020). Considering the importance of this protein for the vital cycle of the virus, it was used as the target in the molecular docking test, the focus of the present work, to promote new drug candidates for the treatment of COVID-19. The Mpro enzyme functions as a combination of two identical subunits; the 6LU7 structure was the structural model used for this molecular docking study. In previous studies, the protein was crystallized in complex with the N3 ligand, which has antiviral inhibitory activity. The ligand has specific interactions with the 6LU7 protein amino acids that characterize its anchoring site; these include the interactions with CYS, where a covalent bond is formed, generating a region of strong interaction between the ligand and the receptor.
One of the biggest challenges in medicine is the development of antiviral resistance drugs. Therefore, it is necessary to study and develop new candidates for antiviral activity drugs extracted from natural sources, such as curcumins and their derivatives (Zandi et al., 2010). In addition to having great biological antiviral potential, curcumins are more suitable in terms of fewer side effects (Aboelhadid et al., 2019) having anti-tumor, antioxidant and anti-inflammatory activities, as well as hepatoprotective effect (Aboelhadid et al., 2019;Antiviral Potential of Curcumin, 2018;Moghadamtousi et al., 2014;Mouncea et al., 2017;Zandi et al., 2010). In combating COVID-19, they can act as inhibitors, causing direct interference in viral replication (Antiviral Potential of Curcumin, 2018). Commercially sold drugs such as azithromycin (Ulrich & Pillat, 2020) and baricitinib (Cantini et al., 2020) are adjuvant drugs with antiviral potential for the treatment of COVID-19 (Rosa & Ferreira, 2020). Together with N3, they compose the comparative ligands of the biological antiviral action of curcumins and their derivatives in this molecular docking study, to promote them as a supplementary drug in the treatment of pathology.
Methodology
Initially, the structure of Mpro's 6LU7 protein with the N3 ligand was retrieved from the RCSB Protein Data Bank© (https://www.rcsb.org/); then the water molecules were removed and the file was converted to .pdb format in the UCSF Chimera® software (Pettersen et al., 2004). The two-dimensional structures of the various curcumins were obtained from the PubChem molecular repository (https://pubchem.ncbi.nlm.nih.gov/), then drawn and corrected in the MarvinSketch® academic software (https://chemaxon.com/products/marvin) (Csizmadia, 2019). Subsequently, the ligands underwent semiempirical quantum mechanical geometry optimization with parametric method 7 (PM7) using the MOPAC® software, and were converted to .mol2 format. After that, the files were uploaded and submitted to the web-based tool SwissDock (http://www.swissdock.ch/docking#) (Webb & Sali, 2019), a Swiss Institute of Bioinformatics (SIB) server, for molecular docking simulation. Subsequently, the results received were processed in the UCSF Chimera® software for analysis and comparison of the distances and interactions of the curcumins with the interaction amino acid residues of N3, Azithromycin (AZT), and Baricitinib (BRT) (Rocha et al., 2021). From the distances obtained, the data were computed and plotted in the web-based tool Morpheus (https://software.broadinstitute.org/morpheus/), and heatmaps were used to visualize changes in the ligand-residue (L-R) interaction profiles, which were evaluated by the Pearson statistical test to detect similarity. The types of chemical L-R interactions were analyzed and the figures were generated using the Discovery Studio® software (Biovia et al., 2000). Then, the degrees of lipophilicity (Log P) of the ligands were analyzed to define their hydrophobic interactions, using the MLOGP method (Moriguchi et al., 1992) of the SwissADME server (http://www.swissadme.ch/).
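As an illustration of the distance measurements behind the ligand-residue interaction profiles, the sketch below computes the minimum atom-atom Euclidean distance between a ligand and a residue. The coordinates are made-up placeholders, not taken from the 6LU7 structure.

```python
# Sketch of the distance measurement behind the L-R interaction profiles:
# minimum Euclidean distance between ligand atoms and residue atoms.
# Coordinates below are made-up placeholders, not the 6LU7 structure.
import math

def min_distance(ligand_atoms, residue_atoms):
    """Shortest atom-atom distance (angstroms) between two atom sets."""
    return min(
        math.dist(a, b)
        for a in ligand_atoms
        for b in residue_atoms
    )

ligand = [(1.0, 0.0, 0.0), (2.5, 1.0, 0.0)]
residue = [(4.0, 1.0, 0.0), (5.0, 2.0, 2.0)]
print(round(min_distance(ligand, residue), 2))  # 1.5
```

In practice, these minimum distances are what tools such as UCSF Chimera report per residue and what was tabulated here before plotting the heatmaps.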
Results
Among the most common interactions between the 6LU7 protein catalytic sites and enzyme inhibitors, as shown in Table 1, are covalent interactions, hydrogen bonds, and π-amide and π-alkyl stacking interactions (Fokoue et al., 2020). The N3 inhibitor (control) interacts with 6LU7 by electrostatic Van der Waals interactions through the residues T24, T25, T26, Y54, N142, S144, D187, R188 and Q192-A, hydrogen bonds with the residues F140, G143, H163, H164, E166, Q189 and T190, carbon-hydrogen bonds with the M165 and H172 residues, π-amide stacking with L141, alkyl interactions with H41, M49, M165 and L167, π-alkyl stacking with P168 and A191, and a covalent bond with C145, forming the region of strong interaction through which the N3 ligand binds to the protein, forming the complex.

Research, Society and Development, v. 11, n. 1, e6511124334, 2022 (CC BY 4.0)

Azithromycin (AZT) is a commercially sold antiviral that was also a control ligand in the present study, to evaluate its supplementary action in the treatment of COVID-19 (Rosa & Ferreira, 2020). Its interactions with the 6LU7 protein are predominantly alkyl interactions and hydrogen bonds, which characterize a site of inhibitory activity distinct from that of N3. The ligand interacts with the protein through hydrogen bonds with the K102, D153 and S158 amino acid residues, with a strong contribution from the hydroxyls closest to the glycoside amine group, carbon-hydrogen bonds with the Q110 and D153 residues through the NH and OH hydrogen donor sites, and alkyl and π-alkyl interactions with the residues V104, I249, P293, and F294.
The baricitinib ligand (BRT), here also considered a control drug, shows only hydrogen interactions. The ligand interacts with the 6LU7 protein by hydrogen bonds with the G71 and K97 residues, with a strong contribution from the tertiary amine receptor sites of pyrimidine and the sulfate group oxygen, and carbon-hydrogen interactions with residues E14, G15, M17, Q69, and S121.
The curcumins evaluated as drug candidates for the treatment of COVID-19 were: hexahydrocurcumin, curcumin glucuronide, bisdemethoxycurcumin, curcumin, curcumin sulfate, cyclocurcumin, demethoxycurcumin, dihydrocurcumin, and hexahydrocurcuminol. After the molecular docking test, it was possible to observe that the ligands occupied the catalytic sites of the three controls (N3, Azithromycin, and Baricitinib), which is due to the similarity of the three-dimensional molecular structures among the compounds, as detailed later.
The interactions of the ligands and controls were mapped, and their binding energies and categories were determined, thus establishing the possible sites of action for all the studied molecules. It was identified, both in the docking results and in the statistical evaluation with Pearson's similarity test (Figure 1A-C), that the compounds formed clusters according to the physical-chemical and interactive similarities among themselves (L-L's), between the ligands and amino acid residues (L-R's), and between amino acids (R-R's), as described below.
Figure 1 - Heatmaps of the different interactions. (A) Heatmap of the interaction forces between all ligands and their respective reactive amino acids (L-R), identifying three active sites in the 6LU7 protein; hierarchical clusters are shown. (B) Heatmap of reactivity in Pearson's similarity test between ligands (L-L). (C) Heatmap of reactivity in Pearson's similarity test between residues (R-R). In the color scheme, the closer to 1 (red), the more determinant and intense the interaction force; the closer to -1 (blue), the greater the distance and the more negligible the interaction force. Clusters are highlighted by dark green squares. Source: Authors.
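The Pearson-similarity clustering behind heatmaps such as these can be sketched roughly as follows. The interaction matrix here is a hypothetical placeholder, not the study's docking output; the point is only that ligands with near-identical interaction profiles end up as the most correlated pair, i.e., in the same cluster:

```python
import numpy as np

# Hypothetical ligand-by-residue interaction strengths (rows: ligands, cols: residues);
# placeholder data standing in for the study's docking-derived values.
rng = np.random.default_rng(0)
ligands = ["curcumin", "cyclocurcumin", "hexahydrocurcumin", "N3"]
X = rng.normal(size=(len(ligands), 10))
X[1] = X[0] + 0.1 * rng.normal(size=10)  # make two ligands nearly identical

# Pearson similarity between ligands (the L-L heatmap of Figure 1B).
R = np.corrcoef(X)

# The most similar distinct pair (correlation closest to +1) clusters together.
off_diag = R - np.eye(len(ligands))
i, j = np.unravel_index(np.argmax(off_diag), R.shape)
pair = {ligands[i], ligands[j]}
```

With real data, the same correlation matrix would be fed to a hierarchical clustering routine to produce the dendrogram ordering seen in the heatmaps.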
It is worth observing that curcumin and cyclocurcumin are the only compounds that interact with the protein without forming conventional hydrogen bonds, and both do so with the same types of interaction. Curcumin interacts with the catalytic site of baricitinib through carbon-hydrogen bonds with residues A70, G71, and P122, π-cation interactions with K97, and π-alkyl interactions with A70. Cyclocurcumin interacts through carbon-hydrogen bonds with P96 and G120, π-cation interactions with K97, and π-alkyl interactions with A70. Hexahydrocurcuminol has an unfavorable donor-donor interaction with G19, hydrogen bonds with Q69 and K97, a carbon-hydrogen bond with G15, and π-cation (K97) and π-alkyl (A70) interactions.
From the group of curcumin ligands, the interaction of hexahydrocurcumin stands out, with a minimum energy of -7.4 kcal/mol at the receptor site of the N3 inhibitor, a minimum residue distance of 1.89 Å (L4) and a maximum of 2.56 Å (G143); curcumin glucuronide shows the strongest interaction energy, -9.0 kcal/mol, at the AZT receptor site, with a minimum residue distance of 1.85 Å (E240) and a maximum of 3.01 Å (P108) (Table 1).
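As a small illustration, the convention that a more negative docking score means stronger predicted binding can be encoded directly. The -7.4 and -9.0 kcal/mol values are those quoted above; the value for curcumin itself is an assumed placeholder, not from the study:

```python
# Docking scores in kcal/mol; more negative = stronger predicted binding.
scores_kcal_mol = {
    "hexahydrocurcumin": -7.4,     # N3 site (from the text)
    "curcumin glucuronide": -9.0,  # AZT site (from the text)
    "curcumin": -6.5,              # illustrative placeholder only
}

# The "best" binder is the one with the minimum (most negative) score.
best = min(scores_kcal_mol, key=scores_kcal_mol.get)
```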
Discussion
Regarding the physicochemical similarities among the ligands (Figure 1B), a slight overlap of groups can be observed around the hexahydrocurcumin compound. In composition and chemical structure, this substance shares many similarities with curcumin, the base component of the group comprising seven compounds, together with the commercial drug BRT. Hexahydrocurcumin also has a chain arranged similarly to that of the N3 ligand, which explains why it shares interaction types with that control.
The hierarchical clusters shown in Figure 1A clearly delineate the difference between the active sites, corroborating Figure 4, where the interactions described below are depicted. The receptor site where the N3 ligand is found indicates a region of covalent interactions, hydrogen bonds, and π-amide and π-alkyl stacking interactions, with susceptibility to electrophilic attacks, while the azithromycin and baricitinib sites show a predominance of π-alkyl and σ-alkyl networks and conventional hydrogen bonds (Figure 4) (Carey, 2011; Fokoue et al., 2020). Both in the in silico evaluation and in the statistical evaluation by Pearson's similarity test (Figure 1C), the residues grouped according to the degree of importance of the interaction and the physicochemical and interactive capacities between them and the compounds (R-L). Following the aforementioned color scheme, the distinction of active residues between the controls is demonstrated, as is the important similarity in maintaining the reactivity of these same residues across the different ligands. The strong color marking, together with the shortest interaction distances, made the two new active sites stand out. Four residues were shown to be non-specific between the N3 and BRT sites: M17, G71, Q69, and S121. Five other residues proved to be non-specific between the AZT and BRT sites: A70, P96, G120, N142, and Q192. Interaction with residues H41, C145, and H164 was observed to be statistically essential for the stability of the ligands at the N3 site. Residues E14, G15, and K97 proved statistically indispensable for the stability of the ligands at the BRT site. Interaction residues Q110, F294, I249, and P293 were statistically considered intrinsic to the AZT site. This study highlights at least one possible catalytic triad for each site described here.
When assessing the pharmacological potential of substances, it is important to observe their molecular interactions with the active sites in the biological system. These are determined by the resultant of attractive and repulsive intermolecular forces, among them hydrophobic interactions. These interactions govern a molecule's attraction to or repulsion from water and consequently determine how easily it crosses the plasma membrane (PM), naturally composed of phospholipids. Once inside the cell, the proteins (R) and the ligands (L) must be able to perform several interactions in order to meet in the intracellular environment. The compound N3 makes close contacts with hydrophilic residues, demonstrating that it, as well as the molecules that occupy the same site, will likely be able to cross the PM.
Since viral replication occurs via the intracellular route, this ability of drugs is important, as is the protein's capacity to attract ligands (Vareed et al., 2008). The active sites described in this work for the control molecules AZT and BRT are characterized as follows. The AZT site lies between hydrophobic and hydrophilic intermolecular forces, with the former more prominent; these characteristics show that, as at the N3 site, the binding drugs are potentially capable of crossing the PM. The BRT site has a strong hydrophilic interaction, which can facilitate intracellular protein interaction and increase its virulence. Its hydrophobic core at M17 has a hydropathic index of 1.9, demonstrating that this region attracts substances capable of crossing the PM. The cell-invading ability of these compounds is further explained by the MLOGP evaluation (Table 1) (Moriguchi et al., 1992; Rocha et al., 2021).
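The hydropathic index of 1.9 quoted for the M17 core matches the published Kyte-Doolittle value for methionine, so the hydrophilic/hydrophobic site classification above can be sketched from that scale. The subset of indices below uses the published Kyte-Doolittle values; treating a site as a flat list of one-letter codes is a simplification for illustration:

```python
# Kyte-Doolittle hydropathy indices (published values) for residues named in the text;
# positive = hydrophobic, negative = hydrophilic.
KD = {"M": 1.9, "K": -3.9, "E": -3.5, "G": -0.4, "Q": -3.5, "S": -0.8}

def mean_hydropathy(residues):
    """Mean hydropathy of one-letter residue codes; positive values are hydrophobic."""
    return sum(KD[r] for r in residues) / len(residues)

# BRT-site residues named in the text (E14, G15, K97, Q69, S121) are net hydrophilic,
# consistent with the site's strong hydrophilic interactions, while M17 is hydrophobic.
brt_site = mean_hydropathy(["E", "G", "K", "Q", "S"])
```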
Hexahydrocurcumin was the only ligand in proximity to the active site of N3; the compound shows essentially hydrogen bond interactions. The ligand interacts with the protein through conventional hydrogen bonds with the L4 and T25 residues, with a strong contribution from the hydroxyl hydrogen donor sites, and carbon-hydrogen interactions with T26 and G143. The compound is the only drug candidate, among the curcumins in this study, to demonstrate a possible synergistic effect in the treatment of COVID-19 together with azithromycin and baricitinib.
It is interesting to observe that the only compound that occupied the same catalytic site as azithromycin was curcumin glucuronide, owing to the physicochemical and interactive similarities between the compounds. Both have a glycosidic side chain that performs hydrogen interactions with the residues, with strong contributions from the hydroxyl groups of the glucose moieties. Curcumin glucuronide interacts with the protein through conventional hydrogen bonds with the E240 and U246 residues, carbon-hydrogen interactions with P108, I249, and P293, and π-alkyl stacking interactions with I249, and synergizes with the N3 inhibitor and baricitinib (Figures 2 and 3).
Curcuminoid compounds have a base structure that holds aromatic rings, hydroxyls, ethers, and ketones. Anti-inflammatory, anti-cancer, and anti-mutagenic activities have been reported in relation to its ketones, as well as to the double bonds present in its carbon chain; furthermore, antioxidant activity has been related to the presence of its hydroxyls. In this work, we report that the hydroxyls associated with the ether and benzene groups are responsible for the potential antiviral action of the semi-synthetic curcuminoid compounds listed here, as shown in Figure 3. The π-cation interaction occurs between the benzene group of the curcuminoid compound and an amino acid nitrogen; it is a non-covalent molecular interaction between the face of this electron-rich π quadrupole system and the adjacent monopole cation. After this interaction, the same electron-rich system interacts with another amino acid through π-alkyl interactions, the referred amino acid having a free valence on a saturated carbon that performs the alkyl interactions. For example, the compounds bisdemethoxycurcumin, curcumin, demethoxycurcumin, and hexahydrocurcuminol (Figure 3D, G, I, and L, respectively) undergo a π-cation electrophilic attack from K97 and experience π-alkyl electrostatic attraction with A70. These interactions proved to be determinant for the alteration of the three-dimensional state of the Mpro protein, potentially inactivating it.
Most of the compounds in the curcumin group occupy the active site of baricitinib, owing to the similarity of their physicochemical properties and their types of interactions, highlighting the π-sulfur interactions with amino acids that are infrequent in 6LU7 and the π-cation stacking interactions with residue cations, in addition to hydrogen interactions with the most frequent amino acids in 6LU7 (Nelson & Cox, 2014).
Once the hydrophilic and hydrophobic regions of the protein receptor residues were known, it was possible to determine the degree of lipophilicity of the ligands by MLOGP. Bisdemethoxycurcumin stands out as the ligand with the highest degree of hydrophobicity, and curcumin glucuronide as the ligand with the greatest hydrophilic interaction (Table 1) (Moriguchi et al., 1992; Nelson et al., 2014).
Conclusion
Considering the approximation of most ligands to the baricitinib receptor site, a therapeutic approach combining the compounds bisdemethoxycurcumin, curcumin sulfate, curcumin, cyclocurcumin, demethoxycurcumin, dihydrocurcumin, and hexahydrocurcuminol with azithromycin and the antiviral inhibitor N3 is a promising strategy. It is therefore suggested that these substances be evaluated in synergism with the respective drugs in in vitro and in vivo tests to define their additional viability in the treatment of COVID-19.
Enhancing Targeted Therapy in Breast Cancer by Ultrasound-Responsive Nanocarriers
Currently, the response to cancer treatments is highly variable, and severe side effects and toxicity are experienced by patients receiving high doses of chemotherapy, such as those diagnosed with triple-negative breast cancer. The main goal of researchers and clinicians is to develop new effective treatments that will be able to specifically target and kill tumor cells by employing the minimum doses of drugs exerting a therapeutic effect. Despite the development of new formulations that overall can increase the drugs’ pharmacokinetics, and that are specifically designed to bind overexpressed molecules on cancer cells and achieve active targeting of the tumor, the desired clinical outcome has not been reached yet. In this review, we will discuss the current classification and standard of care for breast cancer, the application of nanomedicine, and ultrasound-responsive biocompatible carriers (micro/nanobubbles, liposomes, micelles, polymeric nanoparticles, and nanodroplets/nanoemulsions) employed in preclinical studies to target and enhance the delivery of drugs and genes to breast cancer.
Introduction
Breast cancer is the most common cancer in women in the United States and accounts for 30% of all newly diagnosed invasive cancers in the female population. The American Cancer Society's 2022 estimate put the number of newly diagnosed cases of invasive breast cancer at 287,850. About 43,250 women will die from breast cancer, making it the second leading cause of cancer death in women, after lung cancer. Despite the significant decline in the death rate between 1989 and 2019, death from breast cancer is 41% higher in Black women than in White women, in part due to the higher proportion of triple-negative breast cancer diagnosed and less access to high-quality cancer care [1]. The majority of breast cancer cases might be attributed to factors linked to pregnancy, hormonal therapy, and lifestyle [2]. Approximately 10% of all cases are related to genetic predisposition, family history, and ethnicity. Germline mutations in the BRCA1 and BRCA2 genes are most commonly associated with breast cancer, with an average cumulative lifetime risk of about 70% [3,4]. Historically, breast cancer has been classified into three subtypes based on the expression of the estrogen (ER) or progesterone (PR) hormone receptors and human epidermal growth factor receptor 2 (HER2/ERBB2). Specific therapeutics can be used to target these subtypes. Triple-negative breast cancer (TNBC) does not express any of these markers and is treated with conventional chemotherapy and radiation therapy [5][6][7][8]. Twenty years ago, Perou and Sorlie performed expression-profiling studies showing that the ER/PR/HER2 classification is not sufficient to depict the heterogeneity of breast cancer. Five main breast cancer subtypes have been identified: basal-like, HER2-enriched, luminal A, luminal B, and normal-like [9][10][11].
Luminal A tumors are ER/PR-positive. Strategies for drug administration, especially in the case of patients diagnosed with TNBC, demand the development of new therapeutic tools such as nanotechnology-based drug delivery [48][49][50], which can improve the biodistribution and the accumulation of chemotherapy [51].
Various delivery systems have been designed recently with the aim of reducing the general toxicity of conventional chemotherapeutics and increasing the therapeutic index of these drugs. Figure 1 shows the delivery platforms that will be discussed in this review. Conventional chemotherapeutic drugs, such as doxorubicin and cisplatin, have a short blood half-life, conspicuous off-target accumulation, and an unspecific mechanism of action [52]. A notable increase in the plasma half-life of doxorubicin was obtained by encapsulating the drug in liposomes (Caelyx ® /Doxil ® ) [53] and by modifying the surface with polyethylene glycol (PEG) to decrease aggregation and opsonization reducing mononuclear phagocyte system uptake (stealth liposomes) [54,55]. Extended systemic circulation allows for these nanomedicines to accumulate in the tumor site by passive targeting due to its pathophysiological characteristics. The enhanced permeability and retention (EPR) effect observed in tumors versus healthy tissues and first described by Matsumura and Maeda in 1986 [56] is a result of leaky vasculature and poor vessel perfusion [57][58][59][60]. The extent of the EPR effect depends not only on the growth of a chaotic vasculature but also on the tumor microenvironment [59]. The low or absent functional lymphatic vessels that result in high interstitial fluid pressure, the presence of deregulated stromal cells, and abnormal and overexpressed extracellular matrix made of a cross-linked network of hyaluronic acid, elastin fibers, collagen, and proteoglycans can reduce the diffusion of the nanodrugs in the interstitium. All these factors affect the even distribution and accumulation of nanoparticles in the tumor, leading to a clinical outcome that is highly heterogeneous [61][62][63]. Despite the possibility of functionalizing the nanoparticles to target over-expressed receptors (e.g., VEGF, EGFR, and HER2) [64,65], it has been reported in a meta-analysis by Wilhelm et al. 
that only 0.7% of the administered therapeutics can be targeted to the solid tumor solely by the EPR effect [66,67].
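To put the ~0.7% median delivery efficiency reported by Wilhelm et al. in concrete terms, the arithmetic is straightforward; the 50 mg injected dose below is an assumed example value, not a figure from the review:

```python
# Illustrative arithmetic for the ~0.7% median tumor-delivery efficiency.
injected_mg = 50.0          # assumed injected nanoparticle dose, for illustration
delivery_fraction = 0.007   # 0.7% reaches the solid tumor (median, meta-analysis)

delivered_mg = injected_mg * delivery_fraction   # mass reaching the tumor
off_target_mg = injected_mg - delivered_mg       # mass distributed elsewhere
```

So of a 50 mg dose, roughly 0.35 mg would reach the tumor by passive targeting alone, which is the motivation for the stimulus-responsive approaches discussed next.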
Ultrasound
One approach that has been developed to increase the EPR effect is the use of external stimuli such as ultrasound, which can enhance the permeability of blood vessels and tissues and increase the release of drugs from the carrier.
Mammography, ultrasound (US), magnetic resonance imaging (MRI), and positron emission tomography-computed tomography (PET/CT) are the main imaging tools available for breast cancer screening, diagnosis, staging, surgical planning, and surveillance [68]. Additionally, diagnostic ultrasound is used as procedure guidance in a wide variety of clinical settings [69].
The main advantages of using US include wide availability, low cost, no ionizing radiation, and real-time dynamic imaging capability. Lower spatial resolution and tissue penetration are instead disadvantages [72]. Besides its diagnostic utility, ultrasound in combination with contrast agents or ultrasound-responsive nanomedicines has been more recently explored as a tool that can enable the direct visualization of a tumor, and guide and enhance the delivery of therapeutics to the targeted region using thermal and mechanical effects.
Thermal and Mechanical Effects of US
Acoustic waves interact with body tissues, cell membranes, and drug carriers via a combination of thermal and mechanical effects. US generates alternating compression and rarefaction pressures at different frequencies. Typically, the higher frequencies (>20 MHz) utilized for diagnostic purposes have lower tissue penetration (i.e., higher attenuation), while the lower frequencies (0.5-5 MHz) used for therapeutic applications enable deeper tissue penetration [73]. Adjustments in the ultrasound settings (the mechanical index, pulse repetition frequency, etc.) allow for several biologic effects such as thermogenesis, cavitation, and acoustic radiation force [74]. Thermal effects follow from the conservation of energy: part of the acoustic energy is absorbed by the imaged tissues and converted into heat. Utilizing this phenomenon allows for noninvasive, fast, and localized heating of deeply located tissues [74]. The degree of tissue thermogenesis is set by US parameters such as the thermal index, transducer geometry, and sonication frequency. Greater degrees of thermogenesis may be reached with high-intensity focused ultrasound (HIFU), which can achieve a tissue temperature >60 °C and is used in clinical practice for several ablative treatments such as uterine fibroids, bone metastases, and prostate cancer [75]. The level of hyperthermia obtained with ultrasound (40-45 °C) has instead been shown to enhance cell membrane fluidity, increase permeability to drugs, and lead to the release of drugs from thermosensitive carriers without tissue damage [76][77][78]. Direct disruption of the target membrane occurs not only through membrane fluidification, but also by sonoporation, in which microbubble implosion temporarily disrupts neighboring cell membranes [79,80]. This allows direct communication between the cytoplasm and the external tissue environment, allowing macromolecules or nanoparticles to be delivered directly. This principle was utilized by Dewitte et al.
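Two textbook relations behind these statements can be sketched numerically: the mechanical index (peak negative pressure over the square root of center frequency, with a regulatory diagnostic ceiling of 1.9) and the common ~0.5 dB/(cm·MHz) soft-tissue attenuation rule of thumb, which explains why higher frequencies penetrate less deeply. Both are standard approximations, not parameters taken from this review:

```python
import math

def mechanical_index(pnp_mpa, f_mhz):
    """MI = peak negative pressure (MPa) / sqrt(center frequency in MHz)."""
    return pnp_mpa / math.sqrt(f_mhz)

def attenuation_db(f_mhz, depth_cm, alpha_db_per_cm_mhz=0.5):
    """One-way soft-tissue attenuation with the ~0.5 dB/(cm*MHz) approximation."""
    return alpha_db_per_cm_mhz * f_mhz * depth_cm

# Higher frequency -> higher attenuation, hence shallower therapeutic penetration.
low_f = attenuation_db(1.0, 5.0)    # 2.5 dB at 1 MHz over 5 cm
high_f = attenuation_db(10.0, 5.0)  # 25 dB at 10 MHz over the same depth
mi = mechanical_index(1.5, 2.0)     # below the 1.9 diagnostic ceiling
```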
in the delivery of immunomodulatory TriMix mRNA in addition to a desired antigen mRNA incorporated into microbubbles to educate and mobilize antigen-specific T cells in destroying the transformed target cells. Additionally, Domenici et al. leveraged low-intensity ultrasound-induced sonoporation to deliver gold nanocolloids conjugated to 4-aminothiphenol, an infrared marker, into murine fibroblasts, with significant cytotoxic and genotoxic effects after ultrasound/nanocolloid combination [81].
Acoustic cavitation describes the process of alternate compression and rarefaction of a solid or liquid in a medium that conveys acoustic irradiations, a phenomenon described and utilized with US contrast agents, the microbubbles (MBs) [82]. This oscillation of volumetric expansion and contraction has numerous effects: volumetric expansion of a spheroid greatly reduces the pressure of the phase contained therein, generating negative pressure. The subsequent collapse of the spheroid reduces volume, which increases the pressures and temperatures, the latter of which is imparted into the surrounding fluid phase [83]. Cavitation may be classified as non-inertial (also known as stable) or inertial (also known as transient). Non-inertial cavitation describes the stable compression and expansion oscillation of MBs, without collapse. This process generates a secondary effect known as microstreaming, the local turbulence of particles near the endothelial lining that promotes endothelial permeability and, thus, increases local drug delivery [84]. Inertial cavitation, on the other hand, involves exaggerated MBs oscillation, with a marked unbalanced expansion phase and ultimately MBs collapse. Additionally, inertial cavitation leads to microstreaming, microjet, free radical formation, shear wave, and local thermogenesis [82]. Acoustic radiation force (ARF) consists of primary and secondary effects. The primary effect moves particles away from the transducer and the secondary effect promotes attraction between particles [85]. ARF pushes therapy or its carriers away from the center of the vessel towards the endothelial wall. Summed to the additional above-mentioned US effects, this phenomenon is an important adjuvant in US-mediated targeted drug delivery [86,87].
The cellular effects of ultrasound exposure have been well documented. Most damage to irradiated tissues is caused by exposure above the cavitation threshold, where oscillating pressures induce the formation of micron or smaller gas bubbles, which with the resulting oscillation and collapse, induce severe damage in the irradiated cells [88]. However, the damage is not limited to energy levels exceeding this threshold, and this is important to consider when balancing potential damage from higher levels of US irradiation with enhancement in membrane permeability/integrity for maximizing the delivery of desired materials. Recent work highlights the biological effects of sub-cavitation threshold irradiation in keratinocytes. Increasing doses of ultrasound exposure produced decreases in cell viability and the activation of apoptosis [81]. Moreover, at higher doses, overexpression of IL-6 was observed, which was thought to occur through ultrasound imparting mechanical stress within the cell [89]. These findings illustrate that even below the commonly accepted threshold where most damage occurs, ultrasound can impart damage to tissue that can reduce viability, increase inflammatory changes, and potentially have untoward off-target effects on target tissues.
Additionally, ultrasound can directly interact with the carriers, leading to the release of therapeutic drugs in the region targeted by the ultrasound. It has been shown that HIFU can open ultrasound-sensitive polyethylene glycol (PEG)-polypropylene glycol (PPG) block copolymer micelles, leading to their destruction and the release of the payload [90]. The degradation behavior of hollow poly(lactic-co-glycolic acid) (PLGA) contrast-agent microcapsules was studied by El-Sherif et al. They showed that microcapsules that are more echogenic degrade faster than those that are less echogenic, and that this is further accelerated by using the ultrasound frequency that gives maximum backscatter [91].
The ability of ultrasound to induce the delivery of therapeutics to a target with the myriad of vehicles under development is a highly complex interplay between target tissue/tumor porosity, either induced or native, penetration of ultrasound to the target of interest, and the interaction of the physicochemical properties of the vehicle with the ultrasound exposure parameters. These parameters can be highly variable depending on the complex interplay of the biological and mechanical effects discussed above. For instance, micelles have been explored at a variety of exposure parameters with interesting results. Husseini et al. illustrated that pluronic micelles loaded with doxorubicin released drug most efficiently at 20 kHz, with release decreasing reliably at higher frequencies despite the associated increase in power density, which suggests that inertial cavitation and its subsequent mechanical effects on local tissues are a large driver of drug delivery in some micellar systems [92]. Furthermore, dual-frequency acoustic radiation of similar doxorubicin-loaded pluronic micelles at 27.7 kHz (0.02 and 0.04 W/cm²) and 3 MHz (1 and 2 W/cm²) revealed significantly higher doxorubicin offloading than either exposure parameter alone, which suggests that the local thermal effect of higher-intensity ultrasound likely also dictates drug delivery in part [93]. Liposomes have also been shown to efficiently offload their payloads under low-frequency ultrasound irradiation, at frequencies as low as 20 kHz, which is believed to be a function of transient porosity in the liposome's bilayer [94].
Concerning microbubbles, jet formation has been shown to occur at frequencies as low as 1 MHz, suggesting that inertial cavitation can occur at relatively low energy densities [95], and rigid-shelled microbubbles have been reported to undergo at least partial cracking at frequencies of 0.5 MHz, although an increased proportion of cracked rigid-shelled microbubbles occurred when the frequency was increased to 1.7 MHz [96]. Mannaris et al. highlighted further implications of varying acoustic radiation exposure when they characterized the extravasation trends and penetration depth of gas-trapping nanoparticles, microbubbles, and nanodroplets [97]. They illustrated that higher frequencies of ultrasound (1.6 and 3.3 MHz vs. 0.5 MHz) generated strong directional extravasation away from the ultrasound source, and that increasing exposure time as well as discrete ultrasound pulse length produced increased extravasation of gas-trapping nanoparticles, microbubbles, and droplets. Of these, gas-trapping nanoparticles showed the highest extravasation for the lowest energy density when compared to microbubbles and droplets. Ultimately, the exact mechanisms that interplay to create optimum vehicle extravasation and eventual therapeutic delivery to target tissues are still being elucidated, and optimum exposure parameters are likely highly specific to the physicochemical properties of a vehicle and its design philosophy. Continued work to characterize both delivery vehicles' performance at various exposure parameters and the underlying phenomena that coalesce to dictate effective therapeutic delivery is necessary to further develop effective therapies.
Ultrasound-Sensitive Micro-and Nanocarriers
In this section, we will discuss some of the ultrasound-sensitive micro- and nanocarriers that have been employed to date in breast cancer preclinical research. The studies are listed in Table 1 (excerpt, as carrier - contrast agent/payload - reported outcome):
- Adenoviruses-N-(2-hydroxypropyl)methacrylamide polymer - SonoVue, adenoviruses - twenty-fold decrease in viral infection and reduction of tumor growth [112]
- Reduced albumin - doxorubicin - increase in NP accumulation and therapeutic effect in vivo [113]
- Nanodroplets/nanoemulsions:
- Lecithin-based nanoemulsion microbubbles - curcumin - US increased the cytotoxicity of curcumin in breast cancer in vitro and in melanoma in vitro and in vivo [114]
- Alginate-stabilized perfluorohexane multifunctional droplets - doxorubicin - 5.2-fold higher doxorubicin concentration and decreased cardiotoxicity in tumor tissue that underwent US treatment [115]
- Perfluoro-15-crown-5-ether (PFCE) - paclitaxel - tumor regression in vivo [116]
- Perfluorohexane nanoemulsions coupled to silica-coated gold nanoparticles - doxorubicin, 5-fluorouracil, paclitaxel - multi-modality bio-imaging and local therapy [117]
6.1. Micro- and Nanobubbles
Microbubbles (MBs) and nanobubbles (NBs) are US contrast agents with an encapsulated gaseous core, widely used for diagnostic applications. Under diagnostic US settings, MBs contract and expand, an effect known as stable cavitation, producing prominent backscattering that is readily identified in imaging [118]. This effect has been successfully used to assess contrast-enhanced patterns of tissues of interest in several medical applications, similar to other clinical imaging modalities such as CT and MRI. Given the above-mentioned inertial and non-inertial effects of ultrasound, extensive research has explored the effect of MBs in therapeutic applications, turning them into a potential minimally invasive theragnostic tool. In this system, US provides diagnostic imaging with real-time visualization of the targeted tissue and controlled sonication-induced therapeutic drug release [82].
The MB-NB's shell consists of an inner layer, in contact with the core gas, and an outer layer, in contact with the outer space. The shell may be composed of a variety of biocompatible materials and combinations, the most common including protein, lipid, and polymer. The shell's physicochemical properties are a major dictator of the MB-NBs behavior under ultrasound, shell life, stability, immunologic reaction, and drug-carrying capabilities [119,120]. NBs' size, which is variable depending on composition but generally accepted to be sub-micron, makes them less echogenic than MBs, which are classically delineated as anything above one micron with a range of 1-10 microns in practice; however, the shell composition can be modified to improve their response to acoustic waves. More importantly, the smaller size of NBs allows for their passive accumulation in the tumor by EPR while MBs, which are a few microns, cannot cross the gaps between endothelial cells. Additionally, NBs are more stable and display a longer circulation time than MBs [121].
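The size-echogenicity relation described above can be rationalized with the classical Minnaert resonance of a free gas bubble, under which micron-scale bubbles resonate in the low-MHz diagnostic range while sub-micron nanobubbles resonate at much higher frequencies. This is a simplified model that neglects shell stiffness and damping, so it is only an order-of-magnitude sketch:

```python
import math

def minnaert_frequency_hz(radius_m, gamma=1.4, p0=101_325.0, rho=1000.0):
    """Minnaert resonance of a free gas bubble in water (shell effects neglected):
    f0 = (1 / 2*pi*R) * sqrt(3 * gamma * P0 / rho)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

f_mb = minnaert_frequency_hz(1e-6)    # ~3.3 MHz for a 1 um radius microbubble
f_nb = minnaert_frequency_hz(150e-9)  # a sub-micron nanobubble resonates far higher
```

The microbubble result lands squarely in the diagnostic imaging band, consistent with MBs producing strong backscatter there, while nanobubbles sit off-resonance unless the shell is tailored to improve their acoustic response.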
Protein-shelled bubbles are commonly made of albumin [120]. Albumin-encapsulated microbubbles are produced by heating the protein solution to its incipient denaturation temperature, followed by sonication. Sonication of the pre-heated protein solution further increases its temperature, leading to denaturation. Denatured protein molecules typically present cysteine residues whose disulfide bonds are broken during the denaturation process. Reaggregation of the denatured protein fragments around a gas core occurs during sonication and can be promoted by modifying the solution's pH; for instance, at the isoelectric point, repulsive forces are eliminated, facilitating free molecule aggregation. The thick protein shell layer can accommodate the loading of macromolecules without significantly disrupting the sonication response [122]. The easiest method to incorporate genes and drugs into a protein MB is simple incubation of the desired macromolecule with an MB solution [120,123].
Lipid-shelled MBs exploit the physicochemical properties of lipids to stabilize the MB. The hydrophilic component allows contact with the surrounding solution, while the inner hydrophobic layer keeps the gas core entrapped. Efficient gas entrapment also demands cohesive binding and a high-density shell layer. The lipid layer is held together by hydrophobic interactions. To achieve a compact lipid layer, the solution undergoes heating followed by rapid quenching [124]. Since lipid molecules are bound by weak hydrophobic and Van der Waals interactions, lipid MBs easily undergo expansion and reassembly under US cavitation. The thin lipid layer, however, limits the loading capacity. Different strategies have been reported for incorporating macromolecules into the outer layer as well as into the hydrophobic layer without altering the MBs' response to ultrasound [120,125]. In contrast to protein-shelled MBs, lipid shells rely on ultrasound inertial cavitation with induced fragmentation, microjet formation, and microstreaming [82].
Synthetic and natural polymer-based MB shells offer better control over the composition and elasticity of the shell, potentially providing more stable and predictable behavior under ultrasound. Several methods have been described to prepare polymer-encapsulated MBs, including internal loading with the gas core, physical association, and covalent linkage with the polymer shell [126][127][128]. Polymer-shelled MBs may also be developed to increase circulation time and provide a higher ligand density [129]. Polymer-shelled MBs present physicochemical properties that differ from those of the lipid monolayer. The compact polymer layer is less compressible, may sustain a non-spherical shape, and undergoes sonic cracking under US, a process in which the MB capsule cracks, resulting in anisotropic gas core release [96,130,131].
To illustrate the feasibility of polymeric MB shells, Oddo et al. [132] designed a system using poly(vinyl alcohol) (PVA) microbubbles and incorporated robust multifunctionality by conjugating superparamagnetic iron oxide nanoparticles, a near-IR reporter, a cyclic arginyl-glycyl-aspartic acid (RGD) peptide, and cyclodextrin to improve vector targeting and drug delivery. PVA-RGD surface conjugation allowed for the selective recognition of αVβ3 integrins preferentially expressed in the neovascularized endothelium. The RGD-conjugated microbubbles showed markedly increased adhesion compared to a control. Additionally, cyclodextrin incorporation into the bubble shell makes it possible to load hydrophobic drugs into the MB's core in the absence of covalent bonding, which allows for the delivery of normally poorly soluble drugs to the target tissue. With this system, they obtained 24 h of controlled release of dexamethasone acetate, a normally poorly soluble drug. More recent work by Da Ross et al. extends surface-functionalized PVA microbubbles, using a system such as the RGD/integrin-binding approach described above, to radioembolization with an yttrium payload to treat glioblastoma multiforme [133].
Recently, microbubbles have even been used to explore circumventing tumor hypoxia, a poor prognostic factor in determining radiotherapy sensitivity, by being loaded with an oxygen payload in an attempt to oxygenate the tumor microenvironment for a better response to radiation therapy. Fix et al. constructed 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) microbubbles loaded with oxygen to target rat fibrosarcoma in vivo [134]. The results showed higher oxygen levels at the site of the tumor and increased responsiveness to subsequent chemotherapy, an exciting prospect for the scope of microbubbles as an adjuvant in hypoxic cancers. Further optimizing this system, Reusser et al. loaded 1,2-dibehenoyl-sn-glycero-3-phosphocholine (DBPC) and DSPC microbubbles with oxygen and assessed contrast enhancement and kinetics [135]. The longer-acyl-chain microbubbles, DBPC, showed superior contrast enhancement and circulation times in vivo, representing an optimization over previous renditions of phospholipid oxygen microbubbles.
MBs/NBs for Breast Cancer Treatment
Morch et al. developed a new multifunctional delivery system consisting of microbubbles for ultrasound stabilized by PEGylated nanoparticles of poly(butyl cyanoacrylate) (PBCA) polymer. By applying an appropriate ultrasound pulse, the bubble can burst and the drug-containing NPs can be released into the tumor region [136]. Using this platform, Snipstad et al. tested the effects of US-enhanced cabazitaxel release in breast cancer and showed a 2.3-fold improvement in tumor uptake after bubble destruction at an increased focused mechanical index, which directly correlated with increased intra-tumor nanoparticle deposition [98].
Using a different approach, a dual-modal microbubble containing SF6 gas was developed, consisting of lipid microbubbles loaded with paclitaxel and functionalized with RGD (tripeptide Arg-Gly-Asp) to specifically target tumors. The application of ultrasound allows for an increase in drug accumulation in TNBC-targeted tumors in vitro. These microbubbles are constructed from biocompatible materials, including cholesterol [99].
The use of ultrasound-mediated nanobubble destruction (UMND) was explored by Jing et al. to enhance the targeted delivery of EGFR-targeted siRNA (siEGFR) in TNBC. They synthesized NBs (DPPC, DSPE-PEG2000-MAL, DPPA, and PEG-40 stearate) loaded with cell-penetrating peptide (CPP) and carrying siEGFR, and utilized US to deliver siEGFR into TNBC cells, observing a reduction in the expression of EGFR at the mRNA and protein levels, together with reduced cell proliferation in vitro and inhibition of tumor xenograft growth in vivo [137].
Gene expression profiling of TNBC has shown that LINC00511 expression is significantly increased and plays a major role in cancer biology, for example by conferring drug resistance. UMND was used by Yuan et al. to enhance the transfer efficiency to TNBC cells in vitro of CPP-loaded nanobubbles (perfluoropropane-filled nanobubbles synthesized using DSPC and DSPE-PEG2000) complexed with the small interfering RNA for long intergenic non-protein coding RNA 00511 (LINC00511-siRNA). By using CPP-NBs-LINC00511-siRNA together with UMND, the authors showed a reduction in LINC00511 expression and increased sensitivity to cisplatin treatment [100].
Acoustic cluster therapy was employed by Bush et al. to show how microbubble/microdroplet clusters (PS101) can further increase the therapeutic efficacy of Doxil in orthotopic human TNBC xenografts (MDA-MB-231-H.luc). Microbubbles/microdroplets, when exposed to low-frequency ultrasound (300 kHz) at a low mechanical index (MI = 0.15), undergo a phase shift and form microbubbles of 22 µm median diameter that can transiently lodge at the microvascular level. Additional US exposure leads to bubble oscillation and increases in endothelial permeability and drug accumulation in the tumor [101].
Over 15 years ago, our laboratory started to explore the use of clinically approved US contrast agents to encapsulate adenoviral vectors for cancer gene therapy. Lyophilized microbubbles were first reconstituted with a solution of adenoviruses and then treated with human complement to inactivate viruses on the bubble surface, obtaining an immune-stealth system. Initially, SonoVue, Sonazoid, Levovist, and Imagent MBs were tested, and we showed the superiority of adenovirus encapsulation by Imagent microbubbles. This system allows for the targeted delivery of the adenovirus to the tumor region after exposure to ultrasound, expression of the transgene, and a therapeutic response in vivo [138][139][140][141]. Using a similar methodology but for BC screening purposes, Warrem et al. explored the use of MBs functionalized to bind αVβ3 integrins, P-selectin, and vascular endothelial growth factor receptor-2 on the tumor vasculature, and then release a dual-reporter adenovirus in the targeted region [142]. A different approach, aimed at reducing clearance and preventing the liver toxicity of adenoviruses, was used by Carlisle et al. by complexing the virus with nanoparticles. This system is discussed in the polymeric nanoparticles section [112].
Strengths, Weaknesses, and Open Issues with MBs/NBs
Microbubbles represent one of the first stepping stones to ultrasound-enhanced therapy, and great progress has been made in their development and optimization. The versatility of their applications and their sheer modularity, as illustrated above, are significant boons for targeted therapeutics; however, the limitations of microbubbles for imaging and drug delivery lie in the inability of the drug to diffuse into target tissues, because MB size exceeds the gaps between vascular endothelial cells, and in their accelerated clearance rate [143]. While mechanisms to increase circulation times utilizing polymeric-shelled microbubbles have been developed, future efforts to enhance distribution or therapeutic effect will lie in minimizing size and maximizing circulation times by exploring new shell and payload configurations.
Nanobubbles excel where microbubbles do not. While they are not as echogenic owing to their size, their sub-micron dimensions allow them to exploit the junctions between endothelial cells to escape the vasculature and accumulate in target tissues much more readily than most MBs. Additionally, NBs are more stable and display a longer circulation time than MBs [121].
Liposomes
Liposomes are lipid spheres of variable size, generally ranging from 50-500 nm, composed of single or double amphipathic layers, capable of carrying internal molecules [144,145]. Like bubbles, liposomes may have their targeting capability enhanced by tagging ligands of interest to their outer surface. The lipid bilayer is composed mainly of the natural phospholipid phosphatidylcholine, a molecule with a polar hydrophilic head and two hydrophobic tails. In aqueous solutions, the phospholipid bilayer is typically oriented with the hydrophilic polar heads in contact with the outer and inner aqueous spaces [146,147]. This configuration allows for the addition of either hydrophobic or hydrophilic drugs by adding molecules to the membrane surface and vesicle core or embedding them within the capsule layers accordingly. In addition to small molecules, liposomes can be functionalized with cationic moieties or lipid conjugates to deliver many combinations of genetic material, allowing for alteration, silencing, or introduction of genetic codes in target tissues to address malignancy and metastasis [148]. Cholesterol is a secondary but essential component in the liposome capsule's stability and membrane permeability modulation [149,150]. Further capsule modifications to improve the biodistribution profile may include PEGylation, membrane proteins such as site-specific antibodies, and polymers [151,152]. Since these biocompatible drug carriers have a composition similar to cell membranes, they are capable of fusing and releasing their internal contents while avoiding an immunogenic response [153]. Thus, there is a growing interest in the use of liposomes as drug carriers.
As mentioned, liposomes may be developed to reach variable sizes and internal complexity. The sphere size alters the liposome's pharmacokinetics and loading capabilities. For instance, smaller liposomes may circulate for a longer time, while larger spheres may carry a higher therapeutic load [150]. Further variations include unilamellar or multilamellar vesicles, according to the number of internal layers [144].
Liposomes may carry an active pharmaceutical ingredient by passive or active targeting. In passive targeting, the liposome relies solely on the EPR effect to infiltrate the tumor. In this case, the EPR effect is crucial for the selectivity and retention of the loaded active pharmaceutical ingredient. With active targeting, the liposome is enhanced with a tumor-specific ligand, intended to increase specific receptor interaction and endocytosis. However, EPR and receptor density are highly unpredictable and heterogeneous within tumors. Liposomes may also be designed with stimulus-dependent properties, responding to magnetic fields, acoustic power, pH, or temperature [152].
Ultrasound-sensitive liposomes, by definition, enhance drug delivery through their susceptibility to mechanical effects under low-intensity US or thermal effects under high-intensity US. These liposomes, also known as echogenic liposomes, typically have a gas core. Echogenic liposomes may be prepared by lyophilization with mannitol or by freezing under high gas pressure [154]. An alternative to liposomes with a gas core is to add emulsion droplets that vaporize at body temperature, termed emulsion liposomes. Similar to the gas core, emulsion liposomes are triggered by ultrasound waves [155]. During the rarefaction phase, the internal pressure drops below the vaporization threshold, and boiling induces collapse. Researchers have demonstrated that triggered delivery of emulsion liposomes is more effective at lower frequencies [156].
Liposomes for Breast Cancer Treatment
The main challenge in the tumor response to liposomes is the heterogeneous tumor microenvironment, which may include extracellular matrix proteins, matrix metalloproteinases, mesenchymal stromal cells, cancer-associated fibroblasts, and immune cells [157]. A proof-of-concept study demonstrated significantly higher release and tumor cell uptake by targeted liposomes, with further increased delivery after ultrasound exposure. The authors also highlighted no significant rise in temperature, which again corroborates the hypothesis that enhanced release is mediated by pore formation driven by mechanical effects [103].
Immunoliposomes consist of liposomes coated with tumor-specific antibodies. Elamir et al. evaluated ultrasound-triggered anti-HER2-antibody-coated liposomes carrying doxorubicin or calcein in the treatment of HER2-positive breast tumors. These liposomes were PEGylated to prolong circulation time. The authors demonstrated a significant improvement in drug delivery by immunoliposomes compared to non-targeted and non-sonicated liposomes [104].
A multifunctional approach was chosen instead by Bhardwai et al. They developed a stealth drug delivery system consisting of a nanobubble for ultrasound imaging complexed to a biocompatible thermo- and pH-sensitive liposome (DPPC, DPPE-PEG 2000, and DOPE phospholipids) containing paclitaxel and curcumin. Sulphur hexafluoride gas-filled nanobubbles were prepared from DPPC and TPGS (D-α-tocopherol polyethylene glycol 1000 succinate) to augment cavitation and the EPR effect, and from stearic acid to conjugate the NBs to the therapeutic liposome. DPPC makes the liposomes thermosensitive at 41 °C, a temperature that is reached by application of ultrasound, while DOPE is sensitive to the acidic pH values found at the tumor site. Hyperthermia and low pH enhance drug release by destabilizing the liposome bilayer. In an orthotopic TNBC xenograft model of human MDA-MB-231 cells in NOD-SCID mice, the authors showed that enhancing the release of nanoparticles at the tumor site increased the synergistic anti-tumor effect and the radiosensitization observed when combining paclitaxel and curcumin [105].
Cressey et al. co-delivered SN-38 (irinotecan's super-active metabolite) and carboplatin using thermosensitive liposomes (iTSL). A gadolinium lipid conjugate was incorporated into the lipid bilayer to make the iTSL visible on MRI. The authors were able to target the delivery of SN-38 and carboplatin to the tumors in TNBC xenografts using focused ultrasound and achieved dramatic inhibition of tumor growth and longer survival of the mice [106]. Using a similar system, M. Amrahl et al. targeted the delivery of doxorubicin to the tumors of mice engrafted with human TNBC using iTSL-encapsulated doxorubicin and focused ultrasound [107].
Strengths, Weaknesses, and Open Issues with Liposomes
Liposomes represent a promising candidate in the realm of ultrasound-modulated drug delivery. The versatility of their composition lends them a wide breadth of functionality and physical characteristics [150], which can be adapted to several different applications. Their potential responsiveness to internal stimuli [158], surface features that expand function and biodistribution profiles [151,152], and their intrinsic ability to mimic endogenous vesicles for membrane fusion and reduced immunogenicity make liposomes a powerful therapeutic option [153]. Future work on the development and application of liposomes in theranostics must expand their ability to specifically seek and impact target cell/tissue types by exploring more options for surface functionalization. Additionally, continuing to optimize their circulation time and stability after injection will improve their utility.
Micelles
Micelles are colloidal dispersions consisting of amphiphilic molecules with hydrophilic heads pointing towards the surface, forming a hydrated shell, and hydrophobic tails oriented towards the center. Micelles may carry molecules either in their hydrophobic core or attached to their hydrophilic surface. Compared to liposomes, micelles are smaller in size, but big enough to escape renal excretion, which increases their circulation time [159]. Polymer-based micelles are the most commonly reported type of micelle for ultrasound-triggered drug delivery [160]. These structures are formed by mono-, di-, or tri-blocks, with hydrophilic portions composed of poly(ethylene oxide) (PEO) and a hydrophobic core composed of poly(propylene oxide) (PPO) [161]. PEO works similarly to PEG, also commonly used as a hydrophilic block, with its neutral charge minimizing non-specific interactions to allow for an increased circulation time [162]. Additionally, a discrete pattern of increasing tumor penetrance with decreasing polymeric micelle size has been illustrated, with maximal penetrance into even poorly permeable tumor sites achieved by the smallest polymeric micelles (30 nm) [163], representing the general trend of smaller carriers being more capable of utilizing endothelial gaps to extravasate, a distinct advantage over larger, supramicron-sized vehicles such as microbubbles. While the dominant morphology of micellar systems remains spherical, numerous exotic micellar shapes with varying polymeric shells, such as worm-like/filamentous and rod-like shapes, have been described, with some distinct advantages over spherical formulations. For example, filamentous micelles have been shown to have circulation times an order of magnitude longer than their spherical analogs in rodents, with the tradeoff of poorer uptake with longer filaments versus shorter filaments [164].
Recently, rod-shaped PHF-g-(PCL-PEG) polymeric micelles loaded with doxorubicin were shown to have enhanced drug delivery and cellular uptake compared to their spherical counterparts [165]. Several modifications have been employed to improve micellar stability in the bloodstream, including copolymerization of an interpenetrating network of thermally responsive acrylates in the hydrophobic micellar core. With this strategy, the micelle's interpenetrating core expands at room temperature, allowing the introduction of therapeutics into the hydrophobic core. Another modification is to use US for the controlled release of micellar content, which has been studied in preclinical applications with doxorubicin [166].
Studies have demonstrated that the efficiency of US-triggered micellar drug release is inversely related to US frequency and directly related to power density. Longer pulses with short intervals promote faster re-encapsulation, maintaining an optimal drug concentration between pulses [92]. The processes involved in the US-triggered release of drug-loaded micelles have been shown to involve micellar destruction, cavitating nuclei destruction, micelle reassembly, and drug re-encapsulation [92].
Micelles for Breast Cancer Treatment
One of the earlier studies using US-triggered chemotherapy-loaded micelles in breast cancer, conducted by Howard et al., evaluated a system with paclitaxel encapsulated in polymeric micelles. The authors demonstrated that cellular uptake of encapsulated paclitaxel is lower without US compared to the standard clinical formulation. This effect is desirable to avoid healthy tissue toxicity, a major concern with paclitaxel. When US was applied, encapsulated paclitaxel produced a 20-fold increase in tumor uptake and inhibited cellular proliferation by nearly 90% [108].
In a more recent study, Chen et al. synthesized nanomicelle drug carriers formulated with PLGA-PEG, loaded with doxorubicin, and tagged with anti-EGFR, a receptor overexpressed in triple-negative breast cancer. The authors tested solid tumor uptake combined with US-mediated cavitation. In this study, enhanced vascular permeability was induced by using SonoVue™ microbubbles and US in addition to the micelle administration. This combined approach aimed to maximize intra-tumoral uptake and demonstrated better tumor growth suppression at lower drug concentrations [109].
Han et al. designed a new sonosensitizer, PEG-IR780@Ce6, for sonodynamic therapy that is biocompatible and bio-safe. They showed in vitro and in vivo in TNBC cells an improved uptake of PEG-IR780@Ce6 under US irradiation and the generation of higher levels of reactive oxygen species when compared to IR780 and free Ce6 alone or combined with US. This led to an increase in anti-cancer effects. Additionally, PEG-IR780@Ce6 inhibited TNBC cell migration and invasion and suppressed the expression of MMP-2 and MMP-9, potentially suppressing metastasis [110].
Strengths, Weaknesses, and Open Issues with Micelles
Micelles have shown promise in the applications described above, and they possess some strengths over other vehicles. Namely, they are smaller than many carriers but remain large enough to avoid renal excretion, improving biodistribution and increasing circulation time [159]. Like other nanocarriers, however, micelles face several challenges. The critical micellar concentration (CMC) is the concentration threshold above which micelles form [167]. This constitutes one of the main challenges in the use of micelles, because of their instability when diluted in the bloodstream, which can release the therapeutic prematurely. For instance, the dose of micelles needed to keep the blood concentration above the CMC would not be tolerable in humans. Another challenge, shared with all nanocarriers, is recognition by the immune system. Polymeric micelles are the most commonly used and have the advantage of a lower CMC compared to surfactant micelles [168]. These are also usually coated with PEO, as mentioned earlier, which reduces recognition by the immune system. Micellar systems with improved stability and better avoidance of immunorecognition are needed.
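The dilution problem described above can be framed thermodynamically (a standard textbook relation, not taken from the cited references): for a nonionic amphiphile, the standard free energy of micellization is tied to the CMC expressed in mole-fraction units, so a lower CMC corresponds to a more negative free energy and hence micelles that better survive dilution in the bloodstream:

```latex
% Standard free energy of micellization for a nonionic amphiphile,
% with x_{CMC} the critical micellar concentration as a mole fraction:
\Delta G^{\circ}_{\mathrm{mic}} \approx RT \ln x_{\mathrm{CMC}}
```

Since polymeric micelles typically have CMCs orders of magnitude lower than surfactant micelles, this relation makes explicit why they are preferred for intravenous use.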
Polymeric Nanoparticles
Polymeric nanoparticles (NPs) represent an ideal drug delivery system because they are biomimetic, biocompatible, biodegradable, and water-soluble. Natural (e.g., alginate, chitosan, gelatin, and albumin) and synthetic (e.g., poly(lactic acid) (PLA), poly(ε-caprolactone) (PCL), and poly(lactic-co-glycolic acid) (PLGA)) polymers can be used to produce nanoparticles. The synthesis of the different types of polymeric NPs can be achieved by microfluidics [169], nanoprecipitation [170], emulsification [171], and ionic gelation [172]. Generally, polymeric nanoparticles fall within a size range between 100 and 300 nanometers, but smaller formulations have been generated. These carriers come in two general morphologies: the nanosphere and the nanocapsule. Nanospheres are solid structures in which the polymer forms both the shell and the matrix, whereas nanocapsules have a very thin polymeric envelope covering a liquid phase, frequently an oily core [173]. This lends robust versatility to this class of vehicle, as small molecules of varying hydrophilicity can be associated with either the polymer core or, in the case of capsules, with the shell itself. Capsules also have an advantage in that the hydrophobic oil core is readily available to incorporate hydrophobic small molecules. Polymer variation can also be leveraged to optimize the delivery of charged macromolecules, achieved either with cationic polymer moieties complexing with anionic molecules such as RNA, or by direct conjugation of polymer units with the macromolecules themselves or through a cationic intermediate moiety [174].
Drugs can be bound to the surface or conjugated to the polymer, encapsulated in the hydrophobic core in the case of nanocapsules, or embedded in the matrix in the case of nanospheres [169,[175][176][177][178][179]. Because of the sheer versatility of their therapeutic-binding ability, owing to their readily customizable composition, numerous classes of therapeutics can potentially be utilized, from hydrophobic small molecules, historically notoriously difficult to distribute, to charged macromolecules. Furthermore, they can be formulated to precisely control the loading and release kinetics of therapeutics [180,181]. NPs can be engineered to present PEG on the surface (PEGylation), with the goal of avoiding recognition by mononuclear phagocytic cells. However, this is not completely achieved, because exposure to PEG has been reported to lead to the production of anti-PEG antibodies and clearance of PEGylated NPs [182,183].
Polymeric Nanoparticles in Breast Cancer
Suicide gene therapy, or gene-directed enzyme prodrug therapy (GDEPT), is a common approach to treating solid tumors. Devulapally et al., exploiting this platform, synthesized biodegradable PLGA/PEI (polyethylene glycol (PEG)ylated poly(lactic-co-glycolic acid)/polyethyleneimine) nanoparticles complexed to plasmid vectors expressing the HSV1-sr39TK-NTR (TK-NTR) fusion gene under a tumor-specific survivin promoter. Clinically translatable US-MB (Bracco)-mediated drug delivery was used to enhance the delivery of NPs and plasmid to the tumor site in TNBC xenografts in vivo. The authors observed a further reduction in tumor growth when US-MB treatment was added to the combination of PLGA/PEI NPs with the TK-NTR fusion gene and prodrugs (GCV/CB1954) [111].
Carlisle et al. complexed adenoviruses with an N-(2-hydroxypropyl)methacrylamide polymer to obtain a stealth system that protects against liver sequestration and toxicity and increases blood half-life. They systemically injected the polymer-coated oncolytic virus together with SonoVue microbubbles into mice bearing xenografts of the human breast cancer cell line ZR-75-1. They treated tumors with 0.5 MHz focused ultrasound at a peak rarefactional pressure of 1.2 MPa and showed a 30-fold increase in viral infection and a reduction in tumor growth [112].
Kim et al. synthesized a pH-sensitive, reduced albumin nanoparticle loaded with doxorubicin for use in combination with focused ultrasound treatment. The ultrasound application allowed for targeted accumulation of the nanoparticles at the tumor site, and the acidic pH in the tumor microenvironment and inside the cells, for example in lysosomes, led to complete release of doxorubicin and an increased therapeutic effect in a TNBC xenograft in mice [113].
Strength, Weaknesses, and Open Issues with Polymeric Nanoparticles
A significant advantage illustrated over the development of polymeric NPs is seen with how highly customizable they are. For instance, polymeric NPs have been developed with surface functionalization utilizing a variety of moieties, including site-targeting as well as metallic moieties that have a direct cytotoxic, genotoxic, and photoacoustic capacity [180]. This represents an exciting potential for expanding the physical and biochemical profiles of these agents, which increases the scope of their applications with subsequent research. Major limitations of these systems, particularly those with metallic surface moieties, lie in the potential for off-target toxicities. Additionally, as is the case with any vehicle that uses polymeric materials, the number of well-described polymers available for use as drug delivery systems is somewhat limited, though ongoing work is seeing this repertoire expand rapidly [184]. Characterization of each specific system's toxicity and biodistribution will be required to ensure safety in the translational space and beyond.
Nanoemulsion/Droplets
Emulsions are kinetically stable but thermodynamically unstable, biphasic liquid-liquid dispersions of variable size (microemulsions and nanoemulsions) consisting of two immiscible liquids, one suspended in the other [185]. Nanoemulsions are typically under 200 nm in diameter, with some definitions placing the upper limit at 500 or 100 nm. They are most commonly formed by oil and water combinations further stabilized by emulsifiers. The addition of a low surfactant concentration is what makes the emulsion thermo-responsive [186]. Emulsifiers are ideally used at the minimum concentration needed to maintain the interfacial tension. Emulsions derive from the addition of stabilizing components to the historically described colloids [187].
Kinetic stability allows for stable drug delivery formulations. Under a thermodynamic stimulus, emulsions are prone to destabilization, a characteristic exploited for US-triggered drug delivery. Reported methods to produce nanoemulsions include high-energy methods, such as high-speed homogenization, ultrasonication, high-pressure homogenization, and microfluidic and membrane methods, and low-energy methods, such as phase inversion temperature and emulsion point inversion [188]. The literature has demonstrated that high-energy methods tend to be less efficient, with most of the applied energy dissipated into heat; thus, low-energy methods are currently preferred [189].
The oil-aqueous mixture and the surfactant concentration must be appropriately selected according to the intended cargo molecule, which is usually loaded into the oil core [190]. Cellular nanoemulsion uptake mechanisms include direct paracellular or transcellular transport. Non-US-enhanced nanoemulsion delivery has been described through different administration routes (intranasal, ophthalmic, oral, topical, and transdermal), and some formulations are available for clinical use [191]. US irradiation acts synergistically to enhance nanoemulsion-based drug delivery and imaging. This effect is enabled by the US-triggered liquid-to-gas transition of nanoemulsions once the vaporization threshold is reached [192]. The liquid-to-bubble transition increases the interior volume and ultimately generates vesicle rupture and local drug release. Intrinsic droplet properties influence the acoustic pressure required for transition, including the size and type of formulation and the pressure and temperature of the medium. After being vaporized, these nanoemulsions, now microbubbles, improve ultrasound tissue echogenicity through increased backscattering, with the same mechanism as commercially available MBs [193]. While emulsions excel at delivering hydrophobic drugs due to their highly lipophilic core, creative rational design of surfactant moieties to encompass hydrophilic/ionic polymers or other functional conjugates allows for a robust expansion of nanoemulsions' versatility. Cationic nanoemulsions have already been shown to deliver RNA in the context of early-in-development nonviral vaccines [194], as well as immunomodulatory small molecules serving as adjuvants for vaccination against melanoma, lung cancer, and cervical cancer in tumor models [195,196]. The scope of nanoemulsions' therapeutic repertoire will likely continue to expand with increasingly rational design of their surface profile and payload capacity.
Rapoport et al. compared the effects of a micellar formulation and a nanodroplet formulation plus ultrasound and noted a significant difference between the ultrasound-treated and untreated groups. Furthermore, increased accumulation of intravenously administered nanodroplets was demonstrated by increased echogenicity in the tumor ultrasound images. In a different study, the same group reported an inverse relationship between droplet size and vaporization threshold. This effect is caused by the increased Laplace pressure, and thus increased boiling point, of smaller nanoemulsions. Additionally, the ADV threshold is typically lower than that of inertial cavitation, which may be another advantage of nanoemulsions over MBs when capsule disruption and drug delivery are desired [197].
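The size-threshold relationship described here follows from the Young-Laplace relation, ΔP = 2σ/r: halving the droplet radius doubles the internal excess pressure, raising the effective boiling point and hence the vaporization threshold. A minimal numerical sketch (the interfacial tension σ below is an illustrative order-of-magnitude assumption, not a measured value for any specific formulation):

```python
def laplace_pressure(surface_tension: float, radius: float) -> float:
    """Young-Laplace excess pressure (Pa) inside a spherical droplet.

    surface_tension: interfacial tension in N/m
    radius: droplet radius in m
    """
    return 2.0 * surface_tension / radius

# Assumed, illustrative interfacial tension for a perfluorocarbon/water interface
sigma = 0.05  # N/m

for r_nm in (1000, 200, 100):
    dp_atm = laplace_pressure(sigma, r_nm * 1e-9) / 101_325  # convert Pa -> atm
    print(f"radius {r_nm:>4} nm -> excess pressure ~{dp_atm:.1f} atm")
```

The roughly tenfold pressure rise from a 1000 nm to a 100 nm droplet illustrates why smaller nanoemulsions need stronger acoustic driving to vaporize, while the ADV threshold can still sit below the inertial cavitation threshold of comparable microbubbles.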
In a similar fashion to the acoustic vaporization of phase-transition vectors described above, optical droplet vaporization (ODV) has received increasing attention as an alternative or complement to ultrasound-induced vaporization for imaging and therapeutics, with a growing number of vehicles under development. This technique leverages the ability of metallic, usually gold or silver, nanoparticles acting as chromophores to absorb optical energy and convert it into a local temperature increase. This precipitates the vaporization of perfluorocarbon or other liquid phases loaded into nanoparticles of a variety of compositions, which rapidly expand and generate acoustic forces on local tissues similar to those produced by conventional ADV, allowing for contrast enhancement or drug delivery [198].
Lajoinie et al. [201] designed two polymeric capsules encapsulating volatile oils that underwent optically induced vaporization: one made from polymethylmethacrylate (PMMA) loaded with hexadecane, and the other a poly(lactic-co-glycolic acid) (Resomer) microparticle containing perfluoropentane (PFP) oil. The transition from microparticle to microbubble after laser-induced vaporization induced poration as well as human endothelial cell death in vitro. The Resomer-PFP microparticle has a lower activation threshold, and thus requires a lower laser intensity to vaporize than PMMA with hexadecane. In both cases, vaporization induced poration and subsequent cell death, with larger resulting bubbles yielding a 100% poration probability. The technique is limited largely by its low penetration depth of a few centimeters, which presents a major barrier to using it for enhancing drug delivery or inducing localized cell death.
Nanoemulsions/Nanodroplets for Breast Cancer Treatment
Nanoemulsions have been used to deliver lipophilic drugs. Prasad et al. reported on the anti-tumor efficacy of lecithin-based, curcumin-encapsulated nanoemulsions exposed to ultrasound with or without microbubbles. The system was tested in triple-negative breast cancer cells (MDA-MB-231) and melanoma cells (B16F10) [114]. Ultrasound combined with microbubbles significantly increased tumor cytotoxicity in vitro, by 100- and 64-fold in breast and melanoma cells, respectively. The same trend was then demonstrated in melanoma subcutaneous tumor xenografts in mice, with enhanced tumor growth inhibition in tumors undergoing ultrasound.
Baghbani et al. synthesized multifunctional, ultrasound-responsive nanodroplets of alginate-stabilized perfluorohexane loaded with doxorubicin. When further enhanced by sonication, these nanodroplets produced significant tumor regression through on-demand drug delivery, demonstrated by a 5.2-fold higher doxorubicin concentration in sonicated tumor tissue compared to non-sonicated tissue. These theranostic particles also showed increased echogenicity under ultrasound imaging. Additionally, the authors demonstrated a marked decrease in cardiotoxicity in the US-enhanced, nanodroplet-encapsulated doxorubicin group compared to the non-encapsulated drug [115]. Rapoport et al. reported the properties of perfluoro-15-crown-5-ether (PFCE) nanoemulsions loaded with paclitaxel, which display both ultrasound and fluorine MR contrast properties. Ultrasound triggered a reversible droplet-to-bubble transition, with the microbubbles formed by acoustic vaporization undergoing stable cavitation. PFCE nanoemulsions loaded with paclitaxel and combined with US achieved notable therapeutic effects, including complete tumor regression and metastasis suppression in pancreatic and breast cancer [116].
Multifunctional perfluorohexane nanoemulsions coupled to silica-coated gold nanoparticles (PFH-NEs-scAuNPs) have demonstrated efficient chemotherapeutic loading, as shown with doxorubicin, 5-fluorouracil, and paclitaxel. This formulation has shown utility in photoacoustic, ultrasound, and fluorescence imaging in vitro and in vivo. Moreover, the nanoemulsions' local expansion and rupture can be used for tumor treatment, as shown by Fernandes et al. in 4T1 tumor-bearing mice [117].
Wang et al. [202] further expanded the utility of ODV in phase-transition vectors by loading silica-coated gold nanorods (GNRs) and perfluorohexane (C6F14) into PLGA-PEG nanoparticles, which were further functionalized by surface conjugation of Herceptin antibodies. These particles were evaluated for site-specific accumulation and therapeutic efficacy in MDA-MB-231 (HER2-negative) and BT474 (HER2-positive) xenograft mouse models. The nanoparticles bound HER2-positive cells with 19-fold greater efficacy than HER2-negative cells, and histological analysis revealed tissue damage when the GNR-PLGA-PEG nanoparticles were exposed to lasers, in contrast to controls or unfunctionalized nanoparticles. This indicates the potential of surface-functionalized, optically vaporizable nanoparticles for the treatment of HER2-positive breast cancers or other malignancies with surface features amenable to predictable antibody targeting.
Strength, Weaknesses, and Open Issues with Nanoemulsions/Nanodroplets
Compared to microbubbles, the small pre-transition size of nanoemulsions allows for better tissue penetrance via the EPR effect, increasing both the delivery of the intended payload and tissue echogenicity on ultrasound; the echogenicity gain stems from the US-triggered liquid-to-gas transition once the vaporization threshold is reached [192]. The potentially lower energy required for ADV than for MB destruction means easier capsule disruption and drug delivery, since energy transmission and the off-target effects of higher-energy irradiation can be reduced. On the other hand, the drug payload of nanoemulsions is constrained by the requirement for ultrasound responsiveness [203], which fundamentally limits the quantity of payload that can be delivered. Finally, the manufacture of many of these vehicles is costly and requires specialized equipment [204], a problem for production scale-up and widespread adoption that will necessitate the development of more readily usable manufacturing conditions. Future research characterizing the stability, biodistribution, and toxicity of emerging systems, especially those with metallic components with known local and systemic toxicities, is paramount to advancing and optimizing these technologies toward in vivo, translational, and eventually clinical settings.
Biocompatibility
The compositions of ultrasound-responsive vehicles are highly diverse, and so too are their biodistribution and toxicity profiles. With the increasing use of surface moieties to functionalize many types of vehicles, immunorecognition is a significant hurdle to maximizing bioavailability. One of the most widely used methods of escaping immune detection and improving circulation dynamics is PEGylation [205]. Surface PEGylation has also been shown to reduce deposition into the reticuloendothelial system of the liver, lungs, and spleen [206]. Another method used to reduce toxicity and improve target-site delivery is conjugation of high-affinity ligands. Ligands such as RGD, which preferentially binds the upregulated integrins frequently seen in neovascular processes, increase the concentration of actively delivered therapies from nanoparticles and reduce systemic toxicities, likely by reducing accumulation in non-targeted tissues [207]. This reduced toxicity was also confirmed for RGD-conjugated microbubbles in a murine model, where adequate post-treatment growth, no mortality, and only mild, reversible biological effects were observed [208]. This highlights the strength of active targeting with specific surface functionalization for reducing systemic toxicity.
Not only do surface moieties attached for functional purposes play a role in biocompatibility; so too does the composition of the carrier shell itself. Protein-coated carriers and many polymeric carriers have been used extensively because of their excellent biocompatibility and biodegradability. Alginate, chitosan, gelatin, and albumin are among the most common natural polymers used to encapsulate nanoparticles, and they have favorable biocompatibility [209]. Polymeric micelles constituted from methoxypoly(ethylene glycol)-poly(d,l-lactide) (MPEG-PLA) were found to be nontoxic to the human reticuloendothelial system [210]. In an analogous manner, a stealth system has been generated by coating oncolytic adenovirus with (2-hydroxypropyl)methacrylamide [112]. However, there is evidence that some cationic polymeric shells in liposomes and micelles have in vivo toxicity to lung and liver tissue [211,212], due to enhanced production of reactive oxygen species as well as more extensive cell surface-carrier interaction resulting in enhanced uptake. In contrast to cationic polymeric micelles, polymeric microbubbles constructed with PVA have been shown to undergo adequate elimination without discernible defects both in vivo and in vitro [213]. Similarly, several studies have shown that nanobubbles have limited in vivo toxicity, owing to the fact that the perfluorocarbon payloads and phospholipid shells frequently employed are generally non-toxic [214,215]. Biocompatibility is not mediated only by the chemical composition or surface features of a given carrier; the physical characteristics of the carrier itself also greatly impact the toxicity profile of the system. For instance, PLGA-PEG nanoparticles have been shown to have widely different cytotoxicity depending on the shell's physical shape. Zhang et al.
illustrated that needle-shaped PLGA-PEG particles induced apoptosis significantly more than their spherical counterparts of the same chemical composition, thought to occur because of differential disruption of lysosomal membranes [216]. Size has been shown to impact microbubble uptake and target accumulation, and it underlies much of the EPR effect, with smaller sizes allowing for target-tissue accumulation [120]. Size also plays a role in determining systemic toxicity. For instance, PDLA-PEG nanoemulsions and nanomicelles loaded with paclitaxel showed different hematological toxicities in mice, with nanoemulsions of 200 nm to 1000 nm exhibiting significantly less hematological toxicity than their nanomicelle counterparts of only 20 to 100 nm. Gold-based nanoparticles also demonstrate enhanced cellular uptake at smaller sizes [217,218], although conflicting toxicity determinations regarding size trends exist in the literature. Ultimately, the biocompatibility of the systems discussed above, and of those being developed, is exquisitely complex and relies on a myriad of interactions among shell composition, surface feature-cell interactions, size, shape, and stability. Future work towards establishing effective, safe theranostic systems will require careful balancing of the physicochemical properties of a given carrier to optimize drug delivery and stimuli responsiveness while minimizing off-target toxicities, aided by elucidating the toxicities and pharmacokinetics of emerging carrier-agent combinations.
Comparison of Lipidic and Polymeric Delivery Vehicles to Other Promising Nanoparticle Systems
Nanoparticles constructed of metal oxides form a class of drug delivery vehicles that is quite diverse in morphology, formulation, and application. Metal oxide nanoparticles can be formulated as sheets, nanotubes, multifaceted nanoparticles, and even nanoflowers, all of which have very different distribution and internalization characteristics [219]. Additionally, metal oxide nanoparticles open new opportunities for triggered drug delivery and imaging modalities apart from ultrasound. Magnetically directed heating of iron oxide nanoparticles to temperatures that can ablate tumor tissue is a prime example of an alternative therapeutic and induction strategy not available to purely polymeric or amphiphilic vehicles without significant surface modification [220]. However, many metal oxide nanoparticles are intrinsically toxic, owing to their generation of ROS and damage to the genome and cytoplasmic structures; size, morphology, and metal composition are highly variable and dictate the extent of toxicity upon exposure to non-target tissue [221]. In contrast, many of the constituent polymers and lipids characterized above are generally well tolerated and closely approximate native tissue, exceptions depending on size and surface features notwithstanding. There is undeniable potential for unintended toxicities, but the wealth of understanding of biocompatible polymers and their derivatives makes untoward effects generally easier to anticipate. In spite of this, many next-generation vehicles can utilize metal oxide nanoparticle conjugates and complexes to enhance their therapeutic utility by expanding therapeutic profiles or even opening new avenues for temporally and spatially selective drug delivery, as is the case for paramagnetic iron oxide and other metal oxide-functionalized vehicles generating phase change under magnetic stimulus [222,223].
There is much benefit in utilizing the classes of delivery vehicles synergistically.
Mesoporous silica nanoparticles (MSNs) represent another promising candidate for site-directed drug delivery. They possess a high degree of customizability in morphology and boast an impressive surface area that increases their interface with the tissue environment, in addition to pores that can be tailored for highly controlled release of therapeutics [224]. As with the polymeric and amphiphilic classes above, rational design of surface features, such as hydrophobicity and porosity, can be used to optimize their behavior in vivo by influencing distribution and biocompatibility [225]. MSNs can also be made inducible in their drug delivery, reversibly or irreversibly modulating the communication of their pores with the interstitium under external stimuli such as ultrasound [226]. Many of these features are reminiscent of the highly customizable surfaces and ultrasound inducibility of polymeric carriers, with both MSNs and the delivery vehicles discussed above amenable to customization that optimizes biological behavior and drug delivery in vivo. The same concerns about off-target toxicities and possible unforeseen bioaccumulation that accompany all widely distributing nanoparticles also remain. Ultimately, while MSNs differ in not being organically derived, as polymeric or lipidic carriers are, they serve as a reasonable and promising avenue for stimulus-responsive drug delivery.
Extracellular vesicles (EVs) are a highly heterogeneous class of carrier characterized by a bilayer membrane, quite similar in general morphology to liposomes [227]. In spite of this similarity, extracellular vesicles have a much higher degree of surface complexity, owing to their rich lipid bilayer components, which ultimately depends on the manufacturing technique. They have the advantage of closely resembling endogenous vesicles, appearing far more native than conventional liposomes, which can greatly increase confidence in their biocompatibility and limit the nonspecific interactions seen with synthetic or semisynthetic liposomes. However, compared to engineered liposomes, extracellular vesicles present problems of surface-composition heterogeneity, which challenges the prediction of consistent biocompatibility, bioavailability, and on-target delivery [228]. Exploring the overlap between liposomes and EVs, by combining the surface richness of EVs, which grants access to many endogenous pathways for enhanced distribution and uptake in vivo, with the compositional control and predictability of liposomes, represents an exciting prospect for improving bilayer-derived delivery vehicles.
Conclusions
Ultrasound-enhanced delivery of cancer therapeutics from micro-/nanobubbles and other acoustically sensitive carriers has emerged as a feasible method for increasing the accumulation of drugs and genes in the targeted tumor region, mainly by enhancing the EPR effect. The possibility of synthesizing these nanoparticles from biomaterials makes such systems potentially very safe for human applications.
One limitation of these systems is the reduced drug/gene encapsulation capacity of US-responsive materials, especially highly echogenic micro-/nanobubbles with a gas core. Other nanomaterials with higher loading efficiency are less responsive, and higher US frequencies are necessary to release the therapeutics, which can damage the healthy surrounding tissues [120]. One way to overcome this obstacle, in the case of gene therapy, would be to develop ultrasound-responsive systems that can encapsulate or shield and deliver viral vectors displaying high and sustained expression of the therapeutic gene, such as adeno-associated viruses [229], or to use oncolytic viruses that conditionally replicate in the tumor and express a transgene [139].
As previously discussed, different levels of thermogenesis can be reached with ultrasound depending on the use of low-frequency, high-frequency, or focused ultrasound. While the thermal effect can be exploited to design carriers that are, for instance, temperature-sensitive, US can also induce tissue damage proportional to the intensity and duration of irradiation. At the same time, targeted permanent tissue damage has been exploited to intervene therapeutically on the tumor mass and induce ablation with focused ultrasound. Focused ultrasound has also been employed to transiently open the blood-brain barrier (BBB) to improve drug delivery to the brain: a phase I clinical trial conducted by the Sunnybrook team showed that transient BBB opening with focused ultrasound enhanced the delivery of trastuzumab to breast cancer brain metastases in HER2-positive patients [230].
Despite the amount of preclinical data showing the possibility of effectively guiding and increasing drug and gene transfer in tumors, not enough clinical trials have been initiated, especially in the therapeutic area of breast cancer and, more importantly, triple-negative breast cancer, which still has limited therapeutic options. Considering that ultrasound is one of the main clinical imaging tools for the screening, diagnosis, staging, surgical planning, and surveillance of breast cancer, it would be extremely important and advantageous to develop its therapeutic capabilities and maximize its clinical application in combination with ultrasound-sensitive carriers, in order to deliver therapeutics to the tumor and avoid non-specific targeting and systemic toxic effects.
Medical students' perceptions towards implementing case-based learning in the clinical teaching and clerkship training
Background: Case-based learning (CBL) can be described in many ways, depending on the subject area and the 'case' used. Most health professional education is patient-centered: clinical presentations and diseases are combined with the social and clinical sciences, and student learning is linked to real-world applications. The purpose of this study was to evaluate how medical students at the Faculty of Medicine, National Ribat University, perceived the implementation of CBL.
Methods: This descriptive cross-sectional study was conducted on 171 final-year medical students (100 females and 71 males). Students were voluntarily invited to complete a self-administered questionnaire consisting of 15 closed-ended questions with 5-point Likert scale responses, covering perception, awareness, and barriers to CBL.
Results: The CBL satisfaction rate among medical students was 92.4%, and the mean perception score was 3.7 out of 5. Regarding perceptions of CBL, 65.5% of students agreed that CBL had a positive impact on their academic performance. 8.2% (14/171) of students strongly agreed that CBL improved teamwork, while 31.6% (54/171) strongly disagreed. 36.3% of students strongly believed that CBL improved their ability to use clinical reasoning. Regarding barriers, 53% of medical students considered a group of twenty participants per session to be a barrier, while 69% of students did not consider physical presence a barrier. 76.6% of the students agreed that the moderator's approach and style can strongly influence the outcome of a CBL session.
Conclusion: Overall, students had positive perceptions of CBL. Academic performance, clinical reasoning, teamwork, and information retention and retrieval were all improved by incorporating CBL into training modules. Despite moderate to excellent knowledge of CBL, students agreed that the group size of 20 students per session was a barrier.
Preparation for CBL is both time-consuming and tiring. Despite this, students agree that CBL has a positive impact on the learning process.
Background
The medical education process has changed over the last two decades from traditional teacher-centered methods to more modern student-centered methods in which students are actively involved in their learning. Clinical case-based learning (CBL) is one of the best methods for promoting student learning [1].
Thistlethwaite et al. provide an insightful definition of CBL: "Through the use of real-world clinical scenarios, CBL aims to prepare students for clinical practice. By applying knowledge to the cases and using inquiry-based learning techniques, it builds a conceptual bridge between theory and practice" [2].
Since Dr. James Lorrain Smith developed the 'case method of teaching pathology' while a professor at the University of Edinburgh, CBL has been widely adopted and used in the medical sciences.The method is a series of clinical-pathological correlation exercises based on the analysis of clinical cases [3,4].
CBL improves a wide range of skills, including critical thinking, problem-solving, memory retention, and exam readiness. CBL is a cutting-edge teaching approach that has been shown to stimulate and enhance student learning. It improves students' conceptualization, clinical reasoning, and analytical thinking. It has also helped students prepare for and perform well in clinical examinations [1].
In addition, CBL with a case-based approach gives students the freedom to discuss specific scenarios that resemble or are often examples of real-life situations [5].
CBL is a well-known pedagogical and academic approach that emphasizes case-study teaching and inquiry-based learning; as a result, it falls somewhere between organized and guided learning. Learning exercises in health professional education are often based on patient cases. As a result, student learning is linked to real-world circumstances, as the basic, social, and clinical sciences are studied in relation to the case and linked to clinical presentations and conditions (including health and illness). Although many arguments are made in favor of CBL as an efficient teaching and learning strategy, very little data is cited or produced to support them [6].
CBL is an active learning technique, similar to problem-based learning, that involves small groups and focuses on solving a given problem. It stimulates active learning and produces a more fruitful outcome [7,8]. While PBL encourages students to acquire foundational knowledge as part of the clinical case investigation, CBL is effective for students who have already acquired this knowledge [9].
Selecting and implementing a learning method is a difficult, time-consuming task that requires intensive research to demonstrate its reliability and effectiveness.
CBL is a recognized and accepted approach to teaching and learning in higher education institutions around the world.
In Sudan, the need for modern curriculum improvement, together with the scarcity of scientific studies, reports, and application trials of this learning method in higher education institutions, has hindered the development of a clear and valid assessment of the benefits, efficacy, and barriers related to the full implementation of CBL as a primary learning method.
Addressing the barriers to the use of modern teaching techniques and their effectiveness is crucial given the ever-evolving nature of the medical profession in general and medical education in particular.
This study aimed to explore medical students' perceptions of the effectiveness of, and the barriers to, the implementation of case-based learning in the Faculty of Medicine at the National Ribat University.
Study design
A descriptive cross-sectional institution-based study was conducted among undergraduate clerkship students at the Faculty of Medicine, The National Ribat University (NRU), Sudan, from January to February 28, 2023.
Context
The National Ribat University (NRU), established in 2000 in the Burri district of Khartoum, was the setting for this research. Since its inception, the institution has grown from the original 3 faculties to include 18 faculties, 3 centers, and 2 institutes. The Director General of the Sudanese Police Forces acts as the Vice President of the University Council, which is headed by the Secretary of the Ministry of Interior. 1,800 first-year medical students were enrolled at the NRU during the study period, and an additional 320 students were enrolled in the internship component of their studies. There are currently 42 medical schools in Sudan. The NRU Faculty of Medicine is one of the few medical schools in Sudan that combines different teaching methods (lecture-based, case-based, and problem-based learning).
Study population
The study population consisted of undergraduate clerkship medical students currently enrolled in their fifth year at the NRU who volunteered to participate. Written informed consent was obtained from each participant after the research procedure and the objectives of the study were explained in clear, simple terms. Participants were assured that the data collected would be confidential and used only for research purposes. It was clearly explained that participation in this study was voluntary and that participants had the right to withdraw at any time without penalty. Questionnaire responses were collected anonymously via an online platform (Google Forms).
Inclusion criteria
• Medical students in their 5th year of medical school who are undergoing their clerkship.
• Medical students at the National Ribat University who have completed different medical education modalities and methodologies (lecture-based, case-based, and problem-based learning).
Exclusion criteria
• Pre-clerkship medical students.
• Students who did not wish to take part in the completion of the online survey.
Sampling technique
Of the 300 fifth-year medical students invited under total coverage sampling, 171 agreed to participate (a response rate of 55.1%).
Data collection tools
Data were collected using a carefully pre-tested, standardized questionnaire; a pre-designed, online questionnaire was developed by the principal investigators. The content accuracy, reliability, and internal validity of the survey items were finalized with multidisciplinary input from the study investigators. In addition, an expert in health professions education endorsed the final structure of the questionnaire, and confirmatory factor analysis supported its validity. The questionnaire consists of three sections with a total of 16 questions. The first section tests participants' perceptions and knowledge of various aspects of CBL, e.g., its definition, means, and components. The second section measures the effectiveness of CBL by asking about the benefits and skills acquired through CBL, e.g., retention and retrieval of information, teamwork, and clinical reasoning. The third section identifies the barriers and obstacles hindering the proper application of CBL among participants, e.g., the number of students per session, time and effort consumption, and the approach of the session leader.
Serial numbers were used to identify each question. Demographics (age, gender, location, and semester), perceptions, awareness, and barriers to CBL were covered in the questionnaire.
The Likert scale, which consists of the values 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree), and 5 (strongly agree), was used to select responses to a portion of the study questionnaire. We used the mean to determine the central tendency [10], which was used for statistical analysis and additional Likert scale inference. The percentage was used as a qualitative indicator.
A brief informed consent statement was included in the introduction to the questionnaire sent to students' email addresses and in the opening of the online Google Form questionnaire.
SPSS (Statistical Package for the Social Sciences) version 20 was used to enter, collect, and analyze the data. Continuous data are presented as means (standard deviation) or medians (range) according to normality, while categorical variables are presented as frequencies and percentages. The Likert scale mean and Cronbach's alpha were used where applicable.
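The two summary statistics named above, the Likert-item mean and Cronbach's coefficient for internal consistency, follow the standard formulas (α = k/(k−1) · (1 − Σs_i²/s_t²)). A minimal sketch using invented example data, not the study's actual responses:

```python
from statistics import mean, pvariance

def likert_mean(responses):
    """Central tendency of 5-point Likert responses
    (1 = strongly disagree ... 5 = strongly agree)."""
    return mean(responses)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists
    (one list per questionnaire item, aligned by respondent).
    Uses population variances."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Invented example: two perfectly consistent items over three respondents
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # -> 1.0 (perfect internal consistency)
print(likert_mean([4, 5, 3, 4]))               # -> 4.0
```

With real data, each inner list would hold one item's scores across all 171 respondents; a conventional rule of thumb treats α ≥ 0.7 as acceptable internal consistency.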
Characteristics and socio-demographic details of the participants
The study involved a total of 171 medical students in their 5th year of medical school who were enrolled in undergraduate clerkships and who volunteered to take part in the study.
The study participants were divided into 71 males and 100 females, with female students making up 58.5% and male students making up 41.5% of the total study population.
At the time of the study, CBL was delivered every week for one academic semester in the 5th year.
The awareness of undergraduate medical students in the clerkship phase with regard to CBL
The majority of medical students-98.2%(168/171)reported that they were familiar with CBL, while '94.7%' (162/171) of them had experience with CBL." 77.1% (132/171) of students rated their knowledge of CBL as high to very high, as shown in Fig. 1.
When asked about the nature of CBL, 93% (159/171) of respondents indicated that it was carried out as a group activity rather than by an individual student.
In addition, 92.4% (158/171) chose the initial topic, which is familiar to the students and for which there has been prior preparation, while 1.8% (3/171) chose the alternative.
The majority of medical students, 56.7% (97/171), are aware that the facilitator only gives short instructions, and 93% (159/171) know that using external resources and searching for data is allowed during CBL sessions.
The perception of medical undergraduate students in the clerkship phase with regards to CBL
40.4% (69/171) of the study participants strongly agree that CBL helped them understand the case presented, and no students disagree, as shown in Table 1.
Barriers in the implementation of case-based learning
Four factors were examined as barriers to the use of CBL. The first was the number of students participating in the CBL session (20 students): 54.4% (93/171) of respondents agreed that this was a barrier, while 33.3% (57/171) disagreed.
The second barrier examined was physical sitting during the session: when asked whether it was a barrier, 67.8% (116/171) of students said no, while 18.1% (31/171) said yes.
The perceived time and effort required was the third barrier examined: 62.6% (107/171) of the students surveyed thought that CBL did not require much time or effort, while 31% (53/171) thought that it did.
The fourth barrier examined was the leadership style of the session moderator. When asked whether the moderator's approach affected the outcome of the CBL session, 76.6% (131/171) of the students agreed, indicating that the moderator's approach and style can have a large impact, as shown in Table 2.
Student satisfaction with CBL
Most medical students (92.4%, 158/171) agreed that CBL was an effective teaching strategy.See Fig. 3.
Discussion
In this study, a self-designed questionnaire was used to assess the benefits and challenges of implementing CBL among 5th-year medical students at the Faculty of Medicine, NRU.
Although 37.4% and 28.1% of students, respectively, believe and strongly believe that the inclusion of CBL enhanced academic achievement, only 5.8% disagree. Further analysis showed that many students valued the program's ability to foster critical thinking and problem-solving abilities. This is consistent with the results of previous literature on the deeper learning capacity of CBL [11]. Nonetheless, several issues may arise regarding workload and facilitator abilities; these might be resolved by customizing CBL techniques according to student input and offering comprehensive facilitator training. Despite its limitations, our work adds to the knowledge of CBL's efficacy specifically in Sudan and raises questions about how it might improve medical education. Future studies may examine the enduring effects of CBL and its flexibility across contexts and fields of study. CBL also proved an effective strategy in a study proposed in an Indian setting [12].
This study covered several aspects of CBL's efficacy, including improving students' ability to solve clinical problems, think analytically, and assimilate information. It also confirmed that CBL encourages more participation and learning than conventional lectures, and it reinforced the idea that CBL is favoured by instructors and students in specific situations. CBL also enhances students' capacity to apply fundamental science concepts in clinical settings.
Impact of CBL on academic performance
In addition, this finding supports the results of a previous study that found CBL to be the most effective teaching strategy for undergraduate medical students in terms of academic performance, interest, and motivation [13].
According to another study that supports our findings, CBL pedagogy can help improve students' academic performance while fostering a more engaging and collaborative learning environment [14].Gurleen Kaur et al. reported no significant difference in academic performance following the implementation of CBL sessions, which is in contrast to our findings [15].
In our study, the majority of students who disagreed with the statement that CBL improved academic performance had not attended the CBL sessions. According to our results, 4% of students reported not attending the CBL sessions, whereas a higher percentage of students had attended and chose to strongly agree that CBL had a positive impact on their academic performance.
Clinical reasoning and information retrieval
In addition, 36.3% of students strongly agreed that their participation in the discussion improved their clinical reasoning skills, and 44.4% strongly agreed that CBL helped them remember and retrieve material. In addition, 46.8% strongly agreed that the CBL training improved their teamwork skills. This is consistent with research finding that CBL improved students' performance on MCQs [5].
Potential effects of CBL on curriculum development and medical education
Medical education could benefit from CBL in several ways. Studies have indicated that students who finish more cases typically receive higher grades for each case [16].
Additionally, it has been discovered that case-based learning is highly beneficial in fostering more fruitful interactions between educators and learners as well as advancing students' capacity for independent study, theory application, and self-learning. Furthermore, it has been discovered that CBL in medical education promotes diagnostic competencies. As a result, CBL has the potential to enhance student performance, critical thinking abilities, and learning efficiency in medical education.
CBL involves giving students hypothetical or real-world problems to consider, evaluate, and resolve. By exposing students to real-world circumstances that they might face in their future employment as healthcare professionals, the use of CBL in medical education can help shape the curriculum. By examining and resolving these cases, students can acquire critical thinking, problem-solving, and decision-making abilities that are crucial for their professional development [17].
Barriers to CBL
Depending on the situation and the technology used, there may be a variety of barriers to the implementation of CBL. Some common barriers have been identified in the literature, such as lack of funding, technical difficulties, and lack of support [18]. In our study, however, 67.8% disagreed that physical presence was a barrier or obstacle to CBL, and only 31% saw the time required to prepare for CBL as consuming and requiring a great deal of effort. The majority of participants (54.4%) did agree that the number of students per session (20 participants) was a barrier to equal participation in the discussion. Therefore, we can suggest options to optimize group size by considering the course content, the learning objectives, the pedagogical approach, the assessment methods, and the instructor's workload. There is no one ideal size for discussion groups, but some research suggests that smaller groups (4 or 5 students) can increase social presence, commitment, and participation [19,20].
Also, we can enhance collaboration within larger groups by considering the following strategies: communicate your expectations and goals clearly, set an example of collaboration, use team collaboration tools, streamline complex processes, promote a community working environment, foster honest and open communication, encourage creativity, highlight individuals' strengths, implement a team-based reward system, and improve internal communication.
In addition, 47.4% of participants in our study felt that the moderator's style could positively influence the results of CBL. Beyond that, there are other barriers to CBL. The first is theoretical limitation: when students analyze case studies, they may be limited to the theoretical aspects presented in the case, which may not fully prepare them for real-world problem-solving and decision-making. The second is the challenge of contextual knowledge generation: case-based instruction places greater emphasis on contextually driven knowledge generation, which can lead to uncertainties and opportunities for misunderstanding, demanding a higher level of active participation and reflection from students. Another barrier is the difficulty of implementing solutions: students may struggle to implement solutions to real-world problems, as case studies often focus on theoretical analysis rather than practical application.
The majority of medical students, 92.4% (158/171), agreed, consistent with a study that concluded that CBL can be a useful technique for improving the performance of medical students and residents and strengthening their clinical skills [12,13]. According to a previous study, CBL improved student motivation, satisfaction, and engagement; the CBL satisfaction rate among medical students was 92.4% [21].
In summary, the results of this study confirm previous studies finding that CBL is a successful teaching strategy [12,13], that it helps students do better academically [5,14], and that it is one of the best ways to support student learning [1]. It can also build a conceptual bridge between theory and practice [2] and has helped students prepare for and do well on clinical exams [1]. It encourages active learning and yields more beneficial results [7,8].
A reduction in the number of students in each session is recommended, and this can be achieved by increasing the number of classrooms, subgroups, and teachers.
Limitations of this study
This cross-sectional study did not include a control group or a pre- or post-CBL assessment or examination. To compare and confirm the effects of CBL on academic performance, a randomized controlled trial comparing the attitudes and perceptions of two groups is recommended to validate these findings. Since this study only involved fifth-year medical students, further research on all clerkship students is necessary to minimize potential sources of bias and to generalize the findings. Limitations of total coverage sampling include the inability to make statistical generalizations and limited generalizability due to small sample sizes and uncommon population characteristics.
Conclusions
Incorporating CBL into modules improves clinical reasoning, teamwork, and the retention and retrieval of information. The findings of this study indicate that CBL improves academic performance; however, further study with a large sample size is needed to confirm this finding. In addition, the majority of students cited the 20 participants per session as a barrier. This study recommends that CBL be incorporated into the majority of clerkship modules with a decrease in the number of students in each session, which can be accomplished by adding more teachers, classrooms, and subgroups. Distance education was recommended as an alternative to physical presence in CBL. Also, a study with an experimental design is recommended to identify the actual impact of CBL on student achievement.
Table 1
Students' perception towards case-based learning (CBL)
Table 2
Barriers on implementing CBL
Fig. 3 Medical student satisfaction with CBL
Periprosthetic Joint Infection After Total Knee Arthroplasty With or Without Antibiotic Bone Cement
Key Points
Question: What is the estimated risk of revision for periprosthetic joint infection (PJI) after total knee arthroplasty (TKA) using antibiotic-loaded bone cement (ALBC) vs plain bone cement?
Findings: This cohort study of 2 168 924 cemented primary TKAs for osteoarthritis between 2010 and 2020 found no difference in the risk of revision for PJI at 1 year between TKAs with plain bone cement and TKAs with ALBC.
Meaning: These findings suggest that the routine use of ALBC in primary TKA should be considered in the context of the overall health care delivery system.
FAR and PABZ reported no revision due to infection following primary TKAs with plain bone cement; NAR reported 100% use of ALBC in primary TKAs. AOANJRR = The Australian Orthopaedic Association National Joint Replacement Registry; DKR = The Danish Knee Arthroplasty Registry; EPRD = The German Arthroplasty Registry; PABZ = the provincial register of knee prostheses (Autonomous Province of Bolzano, Italy); PATN = The Trento provincial register of knee prostheses (Autonomous Province of Trento, Italy); [abbreviation truncated] = Swiss National Implant Register.
eFigure 1: Cumulative percent revision (one minus the Kaplan-Meier estimator) due to PJI following primary TKA with (a) ALBC vs (b) plain bone cement. FAR, PABZ, and PATN reported no revision due to infection following primary TKA with plain bone cement; NAR reported 100% use of ALBC in primary TKA.
eFigure 2: Cumulative percent revision (one minus the Kaplan-Meier estimator) due to all causes following primary TKA with (a) ALBC vs (b) plain bone cement.
Meta-analysis of the risk of revision due to PJI following primary TKA with ALBC vs plain bone cement, based on results from unadjusted Cox regression.
Meta-analysis of the risk of revision due to PJI, based on results from Cox regression adjusted for age, sex, and year of surgery (time period).
eFigure 5: Meta-analysis of the risk of revision due to PJI, based on results from Cox regression adjusted for age, sex, year of surgery (time period), and all other variables available in each participating registry.
eFigure 6: Meta-analysis of the risk of revision due to all causes, based on results from unadjusted Cox regression.
eFigure 7: Meta-analysis of the risk of revision due to all causes, based on results from Cox regression adjusted for sex and year of surgery (time period).
eFigure 8: Meta-analysis of the risk of revision due to all causes, based on results from Cox regression adjusted for age, sex, year of surgery (time period), and all other variables available in each participating registry.
In each forest plot, the size of the square corresponds to the registry's weight, based on the number of TKAs with plain bone cement in that registry; "favours plain bone cement" and "favours ALBC" label the two sides of the plot.
eTable 1: Cox regression results from individual registries for revision for PJI following primary TKA with ALBC vs plain bone cement (2010-2020).
eTable 2: Sensitivity analyses of the meta-analyses (Cox models 2 and 3) of revision for PJI and all causes, excluding one registry at a time to determine whether the statistical significance of the results changes.
© 2024 Leta TH et al. JAMA Network Open.
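The meta-analysis figures described here pool registry-level Cox hazard ratios. As an illustration, the sketch below implements a minimal fixed-effect inverse-variance pooling on the log hazard-ratio scale, plus the leave-one-out sensitivity check described for the eTables. The registry hazard ratios and confidence intervals are hypothetical placeholders, not the study's results; analyses of this kind typically also report random-effects models and heterogeneity statistics.

```python
import math

def pool_fixed(hrs_ci):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs_ci: list of (hr, lo, hi) tuples, where (lo, hi) is the 95% CI.
    The standard error is back-calculated from the CI width on the log
    scale, and each estimate is weighted by 1/SE^2.
    """
    logs, weights = [], []
    for hr, lo, hi in hrs_ci:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(hr))
        weights.append(1 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = 1 / math.sqrt(sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

def leave_one_out(hrs_ci):
    """Re-pool with each registry removed in turn (sensitivity analysis)."""
    return [pool_fixed(hrs_ci[:i] + hrs_ci[i + 1:]) for i in range(len(hrs_ci))]

# Hypothetical registry-level HRs for revision due to PJI (ALBC vs plain)
registries = [(0.95, 0.80, 1.13), (1.10, 0.85, 1.42), (1.02, 0.90, 1.16)]
print(pool_fixed(registries))
```

If the pooled estimate stays similar and retains (or loses) statistical significance regardless of which registry is dropped, the result is considered robust to any single registry's contribution, which is the question the sensitivity eTables address.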
Approach to a patient with pulmonary hypertension
Pulmonary hypertension is a common clinical condition that can complicate various cardiac and respiratory abnormalities. Interest in pulmonary hypertension has grown remarkably among the scientific community in the last decade. Scientific advances have paved the way to understanding how abnormal pulmonary hemodynamics develop and how their consequences on the right heart reduce patients' quality of life and survival.
Introduction
Pulmonary hypertension (PH) is a common clinical disorder associated with a varied, heterogeneous group of diseases, classified into five groups as per the World Symposium on Pulmonary Hypertension (WSPH). [1] It is defined by a pulmonary artery mean pressure of ≥ 20 mmHg at rest as assessed by right heart catheterization (RHC). [2] A significant proportion of PH occurs in patients with left-sided heart disease and lung disease. In recent years, an effort to identify and treat PH has gained significant attention, as its development is linked to prognosis in various clinical situations. Among the five groups, group one, pulmonary arterial hypertension (PAH), has been characterized by significant advances, with the prolific development of impactful pharmacotherapeutic strategies that have been shown to significantly reduce the risk of clinical worsening but not mortality. [3] From a diagnostic standpoint, provocative maneuvers such as fluid challenge and exercise during RHC are used to elicit dynamic responses of the pulmonary artery wedge pressure (PAWP) and delineate the presence of left heart disease (LHD). [4,5] This has received attention in recent years due to the growing number of older people, often with cardiovascular risk factors, being referred to PH specialist centers for PAH management, and such therapy has been shown to potentially cause harm in PH due to left heart disease (PH-LHD). [6] The following review was undertaken to provide new insights into the pathophysiology and the emerging clinical perspectives in the field of PH.
Clinical classification
PH encompasses a group of clinical entities that have categorized into five different groups based on the patient sub-groups with similar pathological findings, hemodynamic profiles and therapeutic management profiles. Such classification has enabled the scientific community to identify the gaps in knowledge and limitations in support of therapeutic innovation. [1]
PAH
PAH, a rare form of PH, is characterized by pulmonary vascular remodeling mainly affecting the small pulmonary arteries, ultimately leading to a rise in pulmonary arterial pressure (PAP) and pulmonary vascular resistance (PVR), eventually culminating in progressive right heart failure and functional decline. [7] Since the initial WSPH proceedings, various scientific advances have paved the way in identifying key cellular and molecular mechanisms implicated in the pathobiology, which are now being considered as emerging therapeutic targets. In addition, genetic factors and immune dysfunction have also been identified to play a role in the pathology. It is important to recognize that the available treatments do not specifically target pulmonary vascular remodeling or the inflammatory pathways implicated in the pathogenesis of the disease. From a pathology standpoint, plexiform vasculopathy is characteristic, but the pathophysiological significance of these specific lesions is yet to be elucidated. [8]
PH-LHD
It is the most common form of PH worldwide. Isolated post-capillary PH (Ipc-PH) and combined pre- and post-capillary PH (Cpc-PH) are two distinct hemodynamic phenotypes that occur in response to a passive increase in left-sided filling pressures. The two phenotypes can be distinguished on right heart catheterization by an elevated diastolic pressure gradient of ≥ 7 mmHg and a PVR ≥ 3 WU. [9] From a pathology standpoint, elevated left heart filling pressures from the underlying cardiac disorder affect the structure and function of the pulmonary circulation, leading to pulmonary arterial and venular remodeling. In addition, the right ventricle is affected by the increase in afterload, leading to right ventricle-pulmonary artery uncoupling and adverse outcomes. From a management standpoint, treatment essentially involves treatment of the underlying cardiac disorder. At this time, the guidelines maintain a strong recommendation against the use of PAH-specific approved drugs. It is important to note that some of the existing clinical trial data signal harm from using PAH-specific drugs in certain subsets of patients with PH-LHD: fluid retention with the use of macitentan in Cpc-PH, and sildenafil use after valvular heart disease intervention, are associated with an increased risk of clinical deterioration and death. [9]
PH due to chronic lung disease (CLD)
PH-CLD frequently occurs in patients with severe lung disease. It is associated with reduced quality of life and confers increased mortality risk. The CLD comprises obstructive, restrictive and mixed forms. At this time, data do not exist to support the use of PAH-approved drugs for treatment in these patients. [10]
Chronic thromboembolic PH
It occurs as a complication of pulmonary embolism; a pooled incidence of 3.4% has been established from published prospective studies. The exact pathogenesis is still unclear. Establishing an early and accurate diagnosis by ventilation/perfusion scintigraphy (V/Q scan) is essential, as pulmonary endarterectomy, when performed, can offer a cure, with reported three-year survival rates of 90% in international registries. In patients deemed not to be candidates for surgery due to inaccessible vascular obstruction, PAH-specific medical therapy and balloon pulmonary angioplasty have evolved as important components of the treatment algorithm in recent years. [11]
PH with unclear or multifactorial mechanisms
Multiple pathophysiological factors have been implicated in the development of PH in this group. Given the heterogeneity of the clinical presentations, no definitive diagnostic or management strategies currently exist other than treatment of each specific subset. [12]
Diagnostic evaluation of PH
The diagnostic process for PH starts following a high index of suspicion, especially in patients with no apparent risk factors as symptoms are non-specific. Exertional dyspnea, fatigue, exercise intolerance, chest pain, weakness and syncope may characterize PH. Besides history and physical exam, chest X-ray, electrocardiography, blood tests and immunology, pulmonary function studies, transthoracic echocardiography, V/Q scan are essential tests that should be considered initially in the evaluation. Additional diagnostic tests include chest computed tomography and cardiopulmonary exercise testing aid in the comprehensive evaluation of a patient suspected with PH. It is important to note that the definitive diagnoses of the PH can only be established by invasive hemodynamic assessment and it forms an essential step in the diagnosis. [2]
Hemodynamic definition and classification
PH has been arbitrarily defined as a pulmonary artery mean pressure of ≥ 25 mmHg at rest measured by RHC since the 1st WSPH meeting organized by the World Health Organization in Geneva. This cut-off has enabled the scientific community to differentiate primary (pre-capillary) PH from secondary (post-capillary) PH on the basis of the PAWP, or the left ventricular end-diastolic pressure in situations where the PAWP cannot be obtained, thus precluding overdiagnosis and overtreatment.
Precapillary PH
Based on the report from Kovacs et al. [13] in 2009, the normal mean pulmonary arterial pressure in healthy subjects is approximately 14 ± 3.3 mmHg. Taking this into consideration, the upper limit of normal identifies a threshold of 20 mmHg for abnormal PAP. Based on the accumulating data, the task force of the 6th WSPH has proposed that primary, or pre-capillary, PH be defined as an abnormal elevation in the mean PAP ≥ 20 mmHg, with a PVR ≥ 3 WU required to define all forms of pre-capillary PH. [14] Once the diagnosis is established, it is essential to perform acute vasoreactivity testing for identification of patients suitable for treatment with high-dose calcium channel blockers. It is important to note that such testing is only indicated for patients with idiopathic, heritable, or drug-induced PAH. Vasoreactivity is defined as a reduction of mean PAP ≥ 10 mmHg to reach an absolute value of mean PAP ≤ 40 mmHg with an increased or unchanged cardiac output. Testing with inhaled nitric oxide (10-20 ppm) has been established as the standard of care by professional societies, but intravenous epoprostenol, intravenous adenosine, or inhaled iloprost can be used as alternatives. Once vasoreactivity is established, patients should be treated with high-dose calcium channel blockers, and repeat hemodynamic assessment should be performed at 3-6 months and again at one year. If vasoreactivity is not established, the patient should be treated with approved PAH therapies as outlined in the treatment section. Vasoreactivity testing is not indicated in PH-LHD, unless it is being performed in the context of heart transplantation.
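The responder definition above reduces to three conditions that can be expressed directly. A minimal sketch (pressures in mmHg, cardiac output in L/min; the example values are illustrative, not patient data):

```python
def vasoreactive_responder(mpap_base, mpap_test, co_base, co_test):
    """Acute vasoreactivity responder per the criteria described above:
    mean PAP falls by >= 10 mmHg to an absolute value <= 40 mmHg,
    with cardiac output increased or unchanged."""
    return (mpap_base - mpap_test >= 10
            and mpap_test <= 40
            and co_test >= co_base)

# Illustrative values only
print(vasoreactive_responder(55, 38, 4.2, 4.5))  # True: all three criteria met
print(vasoreactive_responder(55, 48, 4.2, 4.5))  # False: drop < 10 mmHg and mPAP > 40
```

All three conditions must hold simultaneously; a large pressure drop that still leaves the mean PAP above 40 mmHg, or one achieved at the cost of a falling cardiac output, does not qualify.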
For post-capillary PH, a PAWP value of > 15 mmHg measured at end-diastole and end-expiration is considered essential for diagnosis. [9] It is important to note that the presence of significant large V-waves strongly favors PH due to LHD regardless of the wedge pressure. For patients with a PAWP of 13-15 mmHg but with risk factors for left-sided heart disease, a three-step approach to accurately characterize the clinical phenotype of PH due to LHD has recently been proposed by the task force. In addition, provocative testing with a fluid challenge to uncover PH due to heart failure with preserved ejection fraction has recently been incorporated in the diagnostic algorithm. A PAWP > 18 mmHg immediately after administration of 500 mL of normal saline over five minutes is considered abnormal. [4]
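The hemodynamic thresholds above (mean PAP 20 mmHg, PAWP 15 mmHg, PVR 3 WU) can be summarized as a small decision rule. This sketch is a simplification of the full 6th WSPH algorithm: it ignores, for example, the diastolic pressure gradient, V-wave morphology, and provocative testing, which the text describes as part of the complete work-up.

```python
def classify_ph(mpap, pawp, pvr):
    """Hemodynamic PH phenotype from RHC values, per the thresholds above.

    mpap and pawp in mmHg; pvr in Wood units (WU). Simplified sketch only.
    """
    if mpap < 20:
        return "no PH"
    if pawp <= 15:
        # Pre-capillary definition requires both mPAP >= 20 and PVR >= 3 WU
        return "pre-capillary PH" if pvr >= 3 else "unclassified (PVR < 3 WU)"
    # PAWP > 15 mmHg: post-capillary PH (PH-LHD)
    return ("combined pre- and post-capillary PH" if pvr >= 3
            else "isolated post-capillary PH")

print(classify_ph(35, 10, 5))  # pre-capillary PH
print(classify_ph(30, 18, 2))  # isolated post-capillary PH
```

The distinction matters clinically because, as discussed above, PAH-approved drugs are indicated only for the pre-capillary phenotype and may cause harm in PH-LHD.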
Risk stratification of PAH
Risk assessment has emerged as an important part of patient care in estimating the prognosis of patients with PAH. Tools for risk assessment range from the initial National Institutes of Health idiopathic PAH registry to the currently used Registry to Evaluate Early and Long-term PAH Disease Management (REVEAL) 2.0 risk equation to predict all-cause hospitalization and mortality. [3] Multi-parametric risk stratification incorporating clinical, right ventricular function, exercise, and hemodynamic parameters is used to define the patient's risk and thus to determine treatment. Patients are classified into low, intermediate, or high-risk status according to the expected one-year mortality, with a treatment goal of achieving low-risk status with the available treatments. This methodical risk assessment and treatment strategy has been validated in large international registries, clearly showing event-free survival at baseline and at follow-up.
Treatment of PH
Once the diagnosis of PAH has been made, therapy includes general measures and supportive therapy. General measures include information on physical activity; avoidance of pregnancy, as it is associated with a substantial mortality rate in PAH; use of birth control, although there is less consensus on the most appropriate methods; travel advice; and genetic counseling. Supportive therapy includes the use of oxygen, diuretics, and digoxin. [1] With regard to oral anticoagulation, the data are conflicting, and anticoagulation has even proven harmful in associated forms of PAH. Currently, the decision to use anticoagulation should be individualized based on a risk-benefit analysis. [15] Significant advances have been made in management in the last 25 years: regulatory approval of multiple drugs targeting three major pathways (the nitric oxide, endothelin, and prostacyclin pathways) by different routes of administration based on 41 randomized clinical trials, development of combination strategies, and escalation of treatment based on the patient's risk status after a pre-specified period of treatment are currently accepted as the standard of care. [3] Initiation of drugs targeting one of the three pathways is usually based on multiple factors such as physician experience, patient preferences, and cost. The de novo use of combination therapy in PAH patients was tested in the AMBITION trial, which noted that combination therapy with ambrisentan and tadalafil resulted in a lower risk of clinical-failure events than monotherapy. [16] Since that initial trial, combination therapy with macitentan and sildenafil, riociguat and bosentan, and selexipag with an ERA or a PDE-5 inhibitor has received the highest recommendation, as outlined in the guidelines. It is important to note that the combined use of PDE-5 inhibitors and riociguat is contraindicated at this time.
[17][18][19][20] In high-risk cases, intravenous epoprostenol therapy receives the highest recommendation, as it has been shown to reduce 3-month mortality in a clinical trial. [21] In addition, in recent years there has been a shift in primary endpoints from measures such as the 6-minute walk distance to clinical worsening. It is important to appreciate that the drugs targeting the above three pathways have most commonly been tested in idiopathic PAH, heritable PAH, PAH due to drugs, PAH associated with corrected congenital heart disease, Eisenmenger syndrome, or PAH associated with connective tissue disease. The drugs should not be used to treat patients with PH due to heart or lung disease, as the trials included strict hemodynamic criteria of PAWP ≤ 15 mmHg, mean PAP ≥ 25 mmHg, and PVR ≥ 3 WU. In cases of advanced disease despite maximal medical therapy, lung transplantation may be required if the patient is deemed eligible.
Conclusions
PH complicates the course of various clinical conditions and is associated with significant morbidity and mortality. There have been significant diagnostic and therapeutic developments in recent years that have impacted the field. Accurate risk assessment upon diagnosis of PAH and timely initiation of optimal PAH-specific therapy have been shown to impact short-term survival and time to clinical worsening. Optimal treatment for other forms of PH should be contextualized within the extent of the underlying disease, gauged by a combination of physiological, imaging, and hemodynamic assessment.
114508621 | pes2o/s2orc | v3-fos-license | Microtab Design and Implementation on a 5 MW Wind Turbine
Microtabs (MT) consist of a small tab placed on the airfoil surface close to the trailing edge and perpendicular to the surface. A study to find the optimal position to improve airfoil aerodynamic performance is presented. Therefore, a parametric study of a MT mounted on the pressure surface of an airfoil has been carried out. The aim of the current study is to find the optimal MT size and location to increase airfoil aerodynamic performance and to investigate its influence on the power output of a 5 MW wind turbine. First, a computational study of a MT mounted on the pressure surface of the airfoil DU91W(2)250 has been carried out, and the best case has been found according to the largest lift-to-drag ratio. This airfoil has been selected because it is typically used on wind turbines, such as the 5 MW reference wind turbine of the National Renewable Energy Laboratory (NREL). Second, Blade Element Momentum (BEM) based computations have been performed to investigate the effect of the MT on the wind turbine power output with different wind speed realizations. The results show that, due to the implementation of MTs, a considerable increase in the turbine average power is achieved.
Introduction
In the field of energy production, wind energy is a key issue in order to reduce fossil fuel dependency. In addition, the solar-wind hybrid energy system has become very popular (Bouzelata et al. [1]). Wind energy is an essential resource among the other clean energy production methods. The search for an energy policy that is local, sustainable and environmentally friendly, and that optimizes resources, has become a requirement. Therefore, models that include factors such as emissions reduction, minimization of imported energy, and even social acceptance are proposed in many studies, such as Novosel et al. [2] and Kumu et al. [3]. Lately, research has been focused on wind turbine blade improvements to optimize rotor dynamic behavior (Jaume et al. [4] and Vaz et al. [5]).
The yearly 9% increase of installed wind energy in Europe in the last fifteen years shows the significance of research in the field of flow control for large wind turbines (Houghton et al. [6]). The considerable growth of wind turbine rotor size and weight in the last decade has made it impossible to control them as they were controlled 30 years ago. Rotors of 120 meters or even more are now a reality.

Johnson et al. [7] compiled some of the most important load control techniques that could be used in wind turbines to assure a safe and optimal operation under a variety of atmospheric conditions. Improvements to present wind turbines should aim to minimize fatigue of the rotor and other structural components due to changes in wind direction, speed and turbulence, as well as start-stop operations of the wind turbine and, of course, to maximize energy production.
In recent decades, many different flow control devices have been designed and developed (see Chow et al. [8]). Most of them were intended for aeronautical applications (Taylor [9]), but they are also frequently used in turbomachinery (see Liu et al. [10] and Xu et al. [11]). Currently, researchers are working to optimize and introduce this type of device in horizontal axis wind turbines (HAWT). Wood [12] developed a four layer scheme which allows classifying the different concepts that are part of all flow control devices. In the study of Shires et al. [13], a tangential air jet was used on a vertical axis wind turbine (VAWT) blade to control the separation of the flow and therefore to increase the aerodynamic performance. In addition, dynamic stall control was investigated by Xu et al. [14] on a S809 airfoil by the implementation of a co-flow jet.
Depending on their operating principle, flow control devices can be classified as active or passive (see Aramendia et al. [15]). Passive control techniques would represent an improvement in the turbine's efficiency and in load reduction without external energy consumption. Active control techniques need an additional energy source to get the desired effect on the flow and, unlike microtabs and other passive devices, active flow control needs intricate algorithms to get the maximum benefit (see Becker et al. [16] and Macquart et al. [17]). Johnson et al. [7] analyzed and discussed fifteen different devices for wind turbine control; some of them are still being tested on full-scale turbines.
Microtabs (MTs) consist of small tabs situated close to the trailing edge (TE) of an airfoil, which project perpendicular to the surface of the airfoil by a few percent of the chord length c (usually 1-2%), corresponding to the boundary layer thickness. The potential of MTs was first investigated by van Dam et al. [18]. Baker et al. [19] carried out a broad study dedicated to the S809 airfoil with MTs. These MTs jet the flow in the boundary layer away from the blade's surface, creating a recirculation zone behind the tab, as can be observed in Figure 1. The MTs affect the airfoil aerodynamics by shifting the point of flow separation and, therefore, providing changes in lift. Lift improvement is obtained by implementing the MT downwards (on the pressure side) and lift reduction is obtained by deploying the MT upwards (on the suction side).
The implementation of this device near the airfoil TE provokes changes in the flow, causing modifications in the circulation of the flow around the airfoil. The effective camber of the airfoil is modified, promoting changes in the lift and drag forces. Placing a MT on the pressure surface increases the lift, whereas placing it on the suction surface has the opposite effect.

The main advantages of MTs are: small size, low cost of manufacturing, low power requirements for activation, and simplicity of the device design. Multiple studies into this topic were made by van Dam et al. [20] and Yen et al. [21], including wind-tunnel experiments in order to determine their optimal height and location.
The aim of the current study is to find the optimal MT position to increase the DU91W(2)250 airfoil aerodynamic performance and to investigate its influence on the average power output of a 5 MW wind turbine. First, a parametric study of a MT mounted on the pressure surface of the airfoil DU91W(2)250 has been carried out and the best case has been selected. This airfoil has been selected because it is used on the 5 MW reference wind turbine of the NREL, as described in Jonkman et al. [22]. Second, Blade Element Momentum (BEM) based computations have been performed to investigate the effect of these MTs on the wind turbine power output. The results on the rotor thrust and blade root bending moment are also presented.
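The classical BEM iteration underlying such rotor computations can be sketched compactly. The following is a minimal, illustrative Python sketch, not the implementation used in this work: it iterates the axial and tangential induction factors for a single blade element, substitutes a hypothetical flat-plate-like polar (Cl = 2π sin α, constant Cd) for the tabulated DU91W(2)250 data, and omits tip-loss and high-induction corrections.

```python
import math

def bem_element(r, c, twist, V, omega, B=3, tol=1e-8, itmax=200):
    """Classic BEM iteration for one blade element (no tip-loss or
    high-induction corrections).

    Hypothetical polar: Cl = 2*pi*sin(alpha), Cd = 0.01 -- a stand-in
    for the tabulated airfoil data used in actual rotor computations.
    """
    a, ap = 0.0, 0.0                      # axial / tangential induction factors
    sigma = B * c / (2.0 * math.pi * r)   # local solidity
    phi = 0.0
    for _ in range(itmax):
        # Inflow angle from the local velocity triangle
        phi = math.atan2((1.0 - a) * V, (1.0 + ap) * omega * r)
        alpha = phi - twist
        cl = 2.0 * math.pi * math.sin(alpha)
        cd = 0.01
        # Project lift/drag onto rotor-normal and tangential directions
        cn = cl * math.cos(phi) + cd * math.sin(phi)
        ct = cl * math.sin(phi) - cd * math.cos(phi)
        # Momentum-balance updates of the induction factors
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
        converged = abs(a_new - a) < tol and abs(ap_new - ap) < tol
        a, ap = a_new, ap_new
        if converged:
            break
    return a, ap, phi

# Hypothetical mid-span element of a multi-megawatt rotor
a, ap, phi = bem_element(r=30.0, c=3.5, twist=math.radians(5.0), V=10.0, omega=1.0)
```

In a full BEM code, this iteration runs for every radial element, and the converged induction factors feed the thrust and torque integrals from which rotor power and blade root bending moments are obtained.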
Numerical Setup
In order to obtain some of the main features of the MT, Computational Fluid Dynamics (CFD) techniques have been employed. Currently, non-commercial and proprietary CFD codes are used to reproduce relatively well almost any physical problem. In this work, the open source code OpenFOAM has been used for simulating the effects of a MT on a DU91W(2)250 airfoil. This open source CFD code is an object-oriented library written in C++ to solve computational continuum mechanics problems. One of its advantages is that users can modify the code to create new solvers and applications, as well as freely share the code they develop.
The SIMPLE algorithm was employed for the pressure-velocity coupling. The convective terms were discretized with a second order linear-upwind scheme. The discretization of the viscous terms was achieved by means of a second order central-differences linear scheme. The simulations were run fully turbulent. Steady state simulations were carried out with a structured finite-volume flow solver using the Reynolds-averaged Navier-Stokes (RANS) equations. For these computations, the k-ω SST shear stress transport turbulence model developed by Menter [23] was used due to its superior separated flow performance, as reported by Kral [24] and Gatski [25]. The model is a combination of two models: Wilcox's k-ω model for near wall regions and the k-ε model for the outer region and free shear flows. The SST model departs from existing k-ω and k-ε models by way of a modified eddy viscosity definition that results in improved prediction of separated flows. Reynolds-averaged Navier-Stokes calculations with the SST turbulence model for various tab configurations applied to an airfoil are presented in Mayda et al. [26]. Figure 2 illustrates the computational setup with the current setting consisting of a DU airfoil. An O-mesh type was designed for the computations with a computational domain radius of 32 times the airfoil chord length, R = 32c, which is in the order of the computational size recommended by Sørensen et al. [27] for this type of simulation.
The Reynolds number based on the airfoil chord length of c = 1 m is Re = 7 × 10^6. The computational setup consists of a structured mesh with a first cell height of ∆z/c = 1.45 × 10^−6, normalized by the airfoil chord length. The stretching in the chord-wise and normal directions is accomplished by double-sided tanh stretching functions based on Vinokur [28] and Thompson et al. [29]. The mesh domain was designed to keep the dimensionless wall distance below 1 (y+ < 1) on the airfoil wall.
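The order of magnitude of such a first cell height can be checked with a standard flat-plate estimate. The sketch below is an assumption-laden illustration, not the procedure used in this work: it derives the kinematic viscosity implied by the stated Reynolds number and applies the Schlichting skin-friction correlation to size the wall-adjacent cell for y+ ≈ 1.

```python
import math

# Flow conditions stated in the text: c = 1 m, Re = 7e6, U_inf = 10.66 m/s
c = 1.0
Re = 7.0e6
U_inf = 10.66
nu = U_inf * c / Re          # kinematic viscosity implied by the Reynolds number

# Flat-plate skin-friction correlation (Schlichting) -- an assumption used
# here only for an order-of-magnitude estimate
cf = (2.0 * math.log10(Re) - 0.65) ** -2.3
u_tau = U_inf * math.sqrt(cf / 2.0)   # friction velocity
y_plus = 1.0                          # target dimensionless wall distance
dz = y_plus * nu / u_tau              # first cell height, m (chord = 1 m)
```

The resulting spacing is of the same order of magnitude (10^−6 of the chord) as the value reported in the text.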
An optimized mesh plays a major role in CFD simulations, as it is the tool that helps the user discretize the domain. It is important to identify the mesh regions where the results have to be quite accurate, as well as to establish a balance between the accuracy of the simulations and the computational cost. Figure 3 shows the cell distribution around the MT and in the near wake of the trailing edge. The wake and the regions where high gradients were expected were accordingly refined. There are certain regions close to the MT in which the velocity gradient changes drastically, which is the reason why those areas are so important. In OpenFOAM, both the velocity and the pressure conditions have to be defined at all boundaries, since the velocity-pressure coupling is based on a collocated grid approach. A no-slip boundary condition was set for the airfoil and MT walls. Computational simulations of the DU91W(2)250 airfoil without any MT have been carried out and validated against the data obtained by Xfoil from the DOWEC project of Kooijman et al. [30] and Lindenburg [31].
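The grading that concentrates cells toward the wall and the tab can be illustrated with a simple stretching function. The sketch below uses a one-sided tanh clustering as a simplified stand-in for the double-sided Vinokur/Thompson stretching functions cited above; the parameter beta and the point count are illustrative choices.

```python
import math

def tanh_cluster(n, beta=2.5):
    """One-sided tanh stretching of n+1 points on [0, 1], clustered near 0.

    A simplified stand-in for the double-sided Vinokur/Thompson stretching
    functions cited in the text; beta controls the clustering strength.
    """
    return [1.0 + math.tanh(beta * (i / n - 1.0)) / math.tanh(beta)
            for i in range(n + 1)]

pts = tanh_cluster(20)
first_spacing = pts[1] - pts[0]    # finest spacing, at the clustered end
last_spacing = pts[-1] - pts[-2]   # coarsest spacing, at the far end
```

Increasing beta tightens the clustering at the wall end while keeping the endpoints fixed at 0 and 1, which is how the tiny first cell height is reconciled with a moderate total cell count.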
The lift-to-drag ratio was calculated for ten angles of attack (AoA), from α = 0 to α = 9 degrees. Figure 4 shows the results of the CFD computations against the Xfoil results for all angles of attack of the airfoil. A mesh independency study was carried out to verify sufficient grid resolution with three mesh sizes using a refinement ratio of 2. The coarse mesh contains 72,500 cells; for the medium and fine meshes the number of cells is 145,000 and 290,000, respectively. Drag and lift results obtained for the finer mesh were compared with the results of the regular and coarse meshes, and less than 4% mesh dependency was found for both drag and lift. The simulations were converged until a satisfactory residual convergence was achieved on the velocities, pressure and turbulence quantities.
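The mesh-dependency check described above amounts to comparing force coefficients across the refinement levels against the finest mesh. A minimal sketch, using hypothetical lift values (the text reports only the <4% bound, not the raw numbers):

```python
# Hypothetical lift coefficients on the coarse (72,500 cells), medium
# (145,000 cells) and fine (290,000 cells) meshes -- illustrative values only
cl = {"coarse": 0.742, "medium": 0.756, "fine": 0.761}

def rel_diff_pct(value, reference):
    """Relative difference in percent against the finest-mesh result."""
    return abs(value - reference) / abs(reference) * 100.0

dep_coarse = rel_diff_pct(cl["coarse"], cl["fine"])
dep_medium = rel_diff_pct(cl["medium"], cl["fine"])
mesh_independent = max(dep_coarse, dep_medium) < 4.0
```

The same comparison would be repeated for the drag coefficient before accepting the medium or fine mesh for the production runs.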
The CFD results follow reasonably well the trend of the DOWEC results. Lift and drag coefficients were calculated according to Equations (1) and (2), respectively:

C_L = L / (0.5 ρ U∞² c)  (1)

C_D = D / (0.5 ρ U∞² c)  (2)

The air density was defined by ρ = 1.204 kg/m³ and the free stream velocity, far ahead of the airfoil, corresponds to U∞ = 10.66 m/s. L and D represent the lift and drag forces per unit span, since the simulations are in 2D.
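Equations (1) and (2) are the standard 2D force coefficients and can be evaluated directly. A short sketch with the flow conditions from the text and hypothetical section forces (the force values below are illustrative, not computed results):

```python
# Values taken from the text
rho = 1.204      # air density, kg/m^3
U_inf = 10.66    # free stream velocity, m/s
c = 1.0          # airfoil chord length, m

q = 0.5 * rho * U_inf ** 2   # dynamic pressure, Pa

# Hypothetical 2D section forces per unit span (N/m) -- illustrative only
L, D = 68.4, 1.2

C_L = L / (q * c)            # Equation (1)
C_D = D / (q * c)            # Equation (2)
lift_to_drag = C_L / C_D
```

Note that the chord and dynamic pressure cancel in the ratio, so C_L/C_D equals L/D directly.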
Microtab Lay-Out
The MT position in the airfoil is sketched in Figures 5 and 6. Dimension x represents the position from the LE and y represents the height of the MT, both in percentage of c. Twelve cases have been established depending on the distance measured with respect to the LE in %c (see Table 1): 93%c, 94%c, 95%c and 96%c. The MT height relative to the chord length is 1%c, 1.5%c and 2%c. This series of cases has been designed according to the previous studies of Standish et al. [32], Mayda et al. [26] and Yen et al. [21], where the maximum translation was estimated to be in the order of the boundary layer thickness at the device position, 1-2% of the chord length, and the optimal location for a lower surface tab in terms of lift and drag was found to be around 95% of c. The MTs are placed on the pressure surface and have been studied for ten different angles of attack, from 0° to 9°. The combination of all these positions for the MTs gives 120 different cases to study (Ayerdi-Zaton et al. [33]). The airfoil DU91W(2)250 without any flow control device was also simulated to study the influence of the previously described MTs.
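The case labels used in the text (e.g., MT9510, MT9520) appear to encode the tab position in %c followed by ten times the tab height in %c; under that assumption, the 12-case matrix of Table 1 can be generated as:

```python
# Assumed case naming: "MT" + position (%c) + 10x height (%c),
# e.g., x = 95 %c and y = 2 %c -> "MT9520"
positions = [93, 94, 95, 96]   # distance from the leading edge, in %c
heights = [1.0, 1.5, 2.0]      # tab height, in %c

cases = ["MT%d%02d" % (x, int(round(y * 10))) for x in positions for y in heights]
```

Combined with the ten angles of attack, this grid yields the 120 simulations mentioned above (plus the clean-airfoil baseline).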
Computational Results
A parametric study has been carried out in order to find the optimal position of the MT on the airfoil DU91W(2)250. Table 1 illustrates the 13 cases studied in the current work. The first case studied is the one with no MT implemented, and the other 12 cases have different sizes (y) and positions of the MT from the leading edge of the airfoil (x). Each case has been studied for ten different angles of attack α, in the range from 0° to 9°. Figure 7 illustrates the lift-to-drag ratio CL/CD evolution for every angle α and all MT cases. In the left column, the evolution of CL/CD vs. the location of the MT from the airfoil leading edge x is represented. The right column plots illustrate the CL/CD evolution against the three different heights of the MT, y = 1, 1.5 and 2%c. Both x and y parameters are represented in terms of percent of the airfoil chord length c. The lift-to-drag ratio of the airfoil without MT for each AoA is represented by the black continuous line in each plot.
For the range of angles of attack from 0° to 9°, the best values of the CL/CD ratio are reached by the cases MT9510, MT9515 and MT9520. However, as can be seen in the left column plots of Figure 7, the highest values are reached around x = 95% of the chord length. In the right column plots, it is clear again that the best values of the lift-to-drag ratio are reached at x = 95% of c, and the highest ones with the MT height of y = 2% of c. Therefore, in the range of angles of attack studied in the present work, the best case in terms of lift-to-drag ratio is the one defined by x = 95% and y = 2% of c. These results are in concordance with previous studies of Yen et al. [21], where it was found that the best place to situate the lower surface tab with respect to lift and drag was around 95%c. According to the classification of Table 1, that case corresponds to DU912250MT9520. The CL/CD ratio achieved in that case compares favorably with the ratio of the clean airfoil represented in Figure 7 with a continuous black line. Figure 8 illustrates the lift-to-drag ratio values of the case DU912250MT9520 in comparison with the ratios obtained for the clean airfoil.
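Selecting the best case from such a parameter sweep reduces to picking the configuration with the highest mean CL/CD over the AoA range. A minimal sketch with hypothetical averages (the numbers below are illustrative, not the computed results):

```python
# Hypothetical mean lift-to-drag ratios over the 0-9 degree AoA sweep --
# illustrative numbers only
mean_clcd = {
    "clean":  112.0,
    "MT9510": 118.5,
    "MT9515": 121.0,
    "MT9520": 124.3,
}

best_case = max(mean_clcd, key=mean_clcd.get)
gain_pct = (mean_clcd[best_case] / mean_clcd["clean"] - 1.0) * 100.0
```

A per-AoA comparison against the clean airfoil, as in Figure 8, would refine this by revealing angles (such as 9°) where the gain reverses.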
At low AoAs, the increase in the CL/CD ratio due to the microtab MT9520 implementation is clearly visible. However, at the AoA of 9° the CL/CD ratio stays below that of the clean airfoil DU91W(2)250. Note that this behavior is repeated by the other cases with different MT configurations, as shown in Figure 7e. The reason could be that at 9° of AoA the airfoil is working near stall conditions and the CD increases considerably. Since the computations were performed in 2D, three-dimensional effects were neglected. In the studies of Mayda et al. [26] and Zahle et al. [34], a detailed study on the three-dimensional shedding and spanwise flow can be found. Figure 9 represents the streamwise velocity distribution around the MT for the case with the best lift-to-drag ratio, DU91W(2)250MT9520. The presence of the tab changes the trailing edge flow development, the so-called Kutta condition, and consequently the effective camber of the airfoil is modified, providing in this case lift enhancement. The MT jets the flow in the BL away from the airfoil surface, producing a recirculation region behind the tab.
At low AoAs, the increase in the C L /C D ratio due to the microtab MT9520 implementation is clearly visible. However, at the AoA of 9 • the C L /C D ratio stays below the C L /C D ratio of the clean airfoil DU91W(2)250. Note that this behavior is repeated by the other cases with different MTs configurations, as shown in Figure 7e. The reason could be found in the fact that at 9 • of AoA the airfoil is working near the stall conditions and the C D increases considerably. Since the computations were performed in 2D, three-dimensional effects were neglected. In the studies of Mayda et al. [26] and Zahle et al. [34], a detailed study on the three-dimensional shedding and spanwise flow can be found. Figure 9 represents the streamwise velocity distribution around the MT of the case with the best Lift-to-drag ratio: DU91W(2)250 MT9520. The presence of the tab changes the trailing edge flow development, the so-called Kutta condition, and consequently the effective camber of the airfoils is A comparison of the pressure distribution on the surface of the clean airfoil DU91W(2)250 and the airfoil with the best aerodynamic performance in terms of lit-to-drag ratio DU91W(2)250MT9520 is presented in Figure 10. The presence of the MT considerably increases the aft loading of the airfoil and a positive gap between the clean and the microtabed airfoil is clearly visible at all angles of attack. The airfoil shape is sketched by a continuous black line. A comparison of the pressure distribution on the surface of the clean airfoil DU91W(2)250 and the airfoil with the best aerodynamic performance in terms of lit-to-drag ratio DU91W(2)250MT9520 is presented in Figure 10. The presence of the MT considerably increases the aft loading of the airfoil and a positive gap between the clean and the microtabed airfoil is clearly visible at all angles of attack. The airfoil shape is sketched by a continuous black line. 
A comparison of the pressure distribution on the surface of the clean airfoil DU91W(2)250 and the airfoil with the best aerodynamic performance in terms of lit-to-drag ratio DU91W(2)250MT9520 is Appl. Sci. 2017, 7, 536 9 of 18 presented in Figure 10. The presence of the MT considerably increases the aft loading of the airfoil and a positive gap between the clean and the microtabed airfoil is clearly visible at all angles of attack. The airfoil shape is sketched by a continuous black line.
Wind Speed Model
The wind speed realizations used in the current study, as shown in Figure 11, have been calculated with the TurbSim tool (Kelley et al. [35]). The turbulence model is the Normal Turbulence Model (NTM) following the IEC 61400 norm. TurbSim uses an adapted version of Veers [36] to generate time series based on spectral representation. The IECKAI (IEC Kaimal) model is defined in IEC 61400-1 2nd ed. [37] and 3rd ed. [38], and assumes neutral atmospheric stability. The spectra for the three wind components, K = u, v, w, are calculated by Equation (3):
S_K(f) = (4 σ_K^2 L_K / V_hub) / (1 + 6 f L_K / V_hub)^(5/3)    (3)
where f is the cyclic frequency, σ_K is the component standard deviation, V_hub is the hub-height mean wind speed and L_K is an integral scale parameter defined in the IEC 61400-1 standard. The velocity spectra of the IECKAI model are assumed to be invariant across the grid. In practice, a small amount of variation in the u-component standard deviation occurs due to the spatial coherence model. Figure 11 represents the wind speed series used in the present study according to the Normal Turbulence Model with 5 m/s, 7.5 m/s and 10 m/s average velocity, respectively. According to the TurbSim user specifications, the first input value is a random seed that must be an integer between −2,147,483,648 and 2,147,483,647 (inclusive). In the current study, three different seeds have been chosen for each wind realization. Figure 11a-c illustrates the wind patterns generated for each wind speed. The values of Seeds 1-3 have been chosen to obtain different wind patterns and are the same for each average wind speed. These different wind speed realizations were chosen to investigate the effects of the MTs, since this is a good way to evaluate the wind turbine power output at low and medium wind velocities. Four different cases were considered in the current study. The clean wind turbine was taken as the baseline case, without any device implemented, and named DU91W(2)250.
According to the matrix presented in Table 2, the cases are different depending on the blade station where the passive devices were implemented. The suffix st means the blade station where the MTs were introduced. According to the airfoil distribution described in [22], stations 8 and 9 were chosen for the present study.
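As an illustration, the Kaimal spectrum of Equation (3) can be evaluated in a few lines of Python. This is a minimal sketch, not part of TurbSim; the function name and the example parameter values (a u-component standard deviation of 1.4 m/s and an integral scale of 340.2 m) are illustrative assumptions.

```python
def kaimal_spectrum(f, sigma_k, l_k, v_hub):
    """IEC Kaimal velocity spectrum S_K(f) for one wind component
    K = u, v, w (Equation (3)): sigma_k is the component standard
    deviation (m/s), l_k the integral scale parameter (m) from the
    IEC 61400-1 standard, v_hub the hub-height mean wind speed (m/s)."""
    return (4.0 * sigma_k ** 2 * l_k / v_hub) / \
           (1.0 + 6.0 * f * l_k / v_hub) ** (5.0 / 3.0)

# Example: u-component spectral density at 0.05 Hz for a 10 m/s mean wind
s_u = kaimal_spectrum(f=0.05, sigma_k=1.4, l_k=340.2, v_hub=10.0)
```

The spectrum is flat at low frequency and rolls off with the Kolmogorov −5/3 slope at high frequency, which is the behavior the TurbSim realizations in Figure 11 inherit.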
Methodology
The primary tools used in the current work to investigate the effects of the passive MTs on the NREL 5 MW Baseline Wind Turbine are engineering models. The NREL 5 MW reference wind turbine is widely used in research studies in the wind energy field since it represents a baseline of modern and future offshore HAWTs. Many investigations have been carried out based on this wind turbine concept, including studies about rotor aerodynamics, controls, offshore dynamics and design code development. This concept of a 5 MW wind turbine is based on the data from the DOWEC study [30,31], with a concept from the UpWind project [39]. The airfoils and chord schedule used in the present work are presented in Table 3 and are the same from NREL [22], also adopted from the DOWEC project. The blade airfoil locations, labeled as r (m) in Table 3, are directed along the blade-pitch axis from the rotor center to the blade cross sections. The DU25 airfoil corresponds to the DU91W(2)250. More detailed information on the DU family of airfoils used in the current work can be found in the study made by Timmer [40]. The reported NREL 5 MW airfoil distribution is shown in Figure 12.
Figure 12. Sketch of the airfoil distribution along the blade (not to scale) of the NREL 5 MW wind turbine according to [22].
Once the lift and drag coefficients are identified for the airfoils along the blades, it is feasible to compute the force distribution. Global loads such as the power output and the root bending moment of the blade can be found by integrating this distribution along the blade span. It is the principle of the BEM method, which will be derived to compute the induction factors a and a' and thus the loads on a wind turbine. The present procedure is described in the following steps:
1. First of all, BEM based computations were carried out in order to characterize the dynamical behavior of the NREL 5 MW wind turbine, including Prandtl's tip loss factor and Glauert's correction. The BEM solver was developed and programmed by the authors of the current study based on the numerical iterative approach of Hansen [41]. All the necessary equations were derived and computed based on the steps proposed by the classical blade element momentum method. The usual basic steps for BEM calculations were followed; this is a short schedule:
a. Initialize by guessing values of a and a', the axial and tangential induction factors, respectively.
b. Calculate the flow angle Φ.
c. Calculate the local angle of attack α.
d. Read off CL(α) and CD(α).
e. Compute the normal Cn and tangential Ct load coefficients.
f. Re-calculate a and a'.
g. State a tolerance for a and a'; if either has changed more than that tolerance, go to (b), or else continue.
h. Compute the local loads.
2. Following the specifications of the utility scale multi-megawatt wind turbine NREL 5 MW baseline described in [22], all the wind turbine rotor properties were introduced as input characteristics. The polar curves of the airfoil with the MT were taken from the best case found in Section 2, which is the DU91W(2)250MT9520, with the MT positioned at 95% of c from the leading edge and with a height of 2% of c.
3. The surfaces of the power coefficient Cp were calculated for all cases of the present study according to the matrix distribution described in Table 2.
4. Once the Cp surfaces had been generated, BEM based computations were run for the four cases and the power curve vs. wind speed was calculated to compare the power curve of the clean turbine with the curve of the turbine with the MTs implemented.
5. Afterwards, the wind speed realizations explained in Section 3 were introduced to calculate the average wind turbine power output for all cases.
6. The results of the average wind turbine power output for all cases and at the different wind speed realizations were compared with the mean power output of the clean wind turbine, the one without any flow control device implemented.
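The iterative loop (a)-(h) above can be sketched for a single blade element as follows. This is a minimal illustration, not the authors' solver: the Glauert branch uses Hansen's empirical relation with a critical induction factor of 0.2, and the `polar` callback together with all numerical values in the example are assumptions.

```python
import math

def bem_element(r, c, beta, B, R, v0, omega, polar, tol=1e-6):
    """One-element BEM iteration following steps (a)-(h); returns the
    converged axial (a) and tangential (a') induction factors.
    polar(alpha) must return (CL, CD) for an angle of attack in radians."""
    a, ap = 0.0, 0.0                          # (a) initial guesses
    sigma = B * c / (2.0 * math.pi * r)       # local solidity
    for _ in range(500):
        # (b) flow angle from the local velocity triangle
        phi = math.atan2((1.0 - a) * v0, (1.0 + ap) * omega * r)
        alpha = phi - beta                    # (c) local angle of attack
        cl, cd = polar(alpha)                 # (d) read off CL, CD
        # (e) normal and tangential load coefficients
        cn = cl * math.cos(phi) + cd * math.sin(phi)
        ct = cl * math.sin(phi) - cd * math.cos(phi)
        # Prandtl tip-loss factor
        f = 0.5 * B * (R - r) / (r * math.sin(phi))
        F = 2.0 / math.pi * math.acos(math.exp(-f))
        # (f) re-calculate a (Glauert/Hansen correction above a_c = 0.2) and a'
        k = 4.0 * F * math.sin(phi) ** 2 / (sigma * cn)
        a_new = 1.0 / (k + 1.0)
        ac = 0.2
        if a_new > ac:
            a_new = 0.5 * (2.0 + k * (1.0 - 2.0 * ac)
                           - math.sqrt((k * (1.0 - 2.0 * ac) + 2.0) ** 2
                                       + 4.0 * (k * ac ** 2 - 1.0)))
        kp = 4.0 * F * math.sin(phi) * math.cos(phi) / (sigma * ct)
        ap_new = 1.0 / (kp - 1.0)
        # (g) convergence test on both induction factors
        da, dap = abs(a_new - a), abs(ap_new - ap)
        a, ap = a_new, ap_new
        if da < tol and dap < tol:
            break
    return a, ap
```

Once a and a' have converged, the local loads of step (h) follow directly from cn and ct, and integrating them along the span gives the global quantities used in the following sections.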
Results from BEM Computations
In order to investigate the influence of MTs on the power of the NREL 5 MW reference wind turbine, BEM computations have been carried out following the steps explained in the previous section. The BEM based computations have now been derived and the power has been computed versus the wind speed. Figure 13 illustrates the power curves along the wind speed for the clean case with no passive device implemented into the blade in comparison with the cases with MTs (see Table 2). The power curves of the wind turbine with MTs follow the trends of the curve of the clean wind turbine. However, at the wind speeds before the rated power is achieved, the power output increases slightly in the cases with the MT implemented, as shown in the enlarged view embedded in Figure 13.
Additionally, the average wind turbine power output has been calculated for the clean wind turbine and for the cases with MTs at the wind speed realizations described in Section 3. Equation (4) shows how this average power is calculated:
P_avg = Σ_{j=1}^{N_bins} P(v_j) N(v_j) / Σ_{j=1}^{N_bins} N(v_j)    (4)
where v_j is the wind speed according to the realizations shown in Figure 11, N_bins is the number of bins, P(v_j) is the power at the wind speed v_j and N(v_j) is the number of data at the wind speed v_j. P(v_j) has been determined from the data obtained in Figure 13.
Table 4 shows the results of the power output calculations according to Equation (4). Firstly, the average power has been calculated for the clean wind turbine for the three wind speed realizations illustrated in Figure 11, without any flow control device mounted on the blade. Afterwards, the average wind turbine power was calculated for all cases of MT distribution described in Table 2 and compared with the clean turbine's power values. The symbols denoted by ∆ represent the increment of average power in comparison with the clean turbine.
At the wind speed realization NTM5, the greatest increase in the average power was achieved by the case st8, with a value of 5.3218 × 10^5 W, which supposes an increase of 9.599% in comparison with the value obtained by the clean wind turbine. Moreover, the other cases, with MTs mounted on st9 and st8st9, present a similar increase. At the NTM7.5 wind speed realization, the largest increment in average power output was reached by the case with the MTs implemented in station 8 of the blade, with a power increase of 4.425% compared with the clean wind turbine. At the NTM10 wind speed realization, the largest average power value is reached by the case with the MTs mounted on st8, with an increase with respect to the clean case of 3.282%. The largest increases for the three wind speed realizations used in the present work have been achieved by the case with the MTs in blade station 8, corresponding to the case DU91W(2)250MT9520st8. Figure 14 illustrates in a bar plot the increments in the power output for every case in comparison with the clean case. The effect of mounting MTs on the blade stations studied in the present work is more significant at low wind speeds than at wind speeds close to the nominal power. At those low wind speeds, the MTs can help to notably increase the wind turbine power output performance. Note that even though the case DU91W(2)250MT9520st8st9 has more MTs mounted along the blade, its power performance in terms of average power output is quite similar in comparison with the other cases with MTs on stations 8 and 9.
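Reading Equation (4) as a bin-weighted average of the power curve over a wind speed realization, the calculation can be sketched as below. The function name, the bin width and the toy data are assumptions for illustration, not values from the paper.

```python
def average_power(wind_series, power_curve, bin_width=0.5):
    """Bin a wind speed realization, then average the power curve
    P(v_j) over the bin centres v_j weighted by the number of
    samples N(v_j) falling in each bin, as in Equation (4)."""
    counts = {}
    for v in wind_series:
        j = int(v // bin_width)               # bin index of this sample
        counts[j] = counts.get(j, 0) + 1      # N(v_j)
    total = sum(counts.values())
    return sum(power_curve((j + 0.5) * bin_width) * n
               for j, n in counts.items()) / total

# Toy example with a linear power curve P(v) = 100 v
p_avg = average_power([5.0, 5.2, 7.4], power_curve=lambda v: 100.0 * v)
```

In the study itself, the power curve would be interpolated from the BEM results of Figure 13 and the wind series taken from the TurbSim realizations of Figure 11.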
After applying the BEM algorithm to all control volumes, the tangential and normal load distribution is known, and global parameters such as thrust and bending moment at the root of the blade can be computed. In the current work, the mean values of both thrust and bending moment have been calculated for each wind speed realization. The thrust has been calculated by Equation (5), taking into account the thrust distribution along the blade, and the bending moment has been determined by Equation (6).
Prandtl's tip loss correction factor F has been determined by Equation (7) for both thrust and bending moment calculations:
F = (2/π) cos^(−1)(e^(−f)),  f = (B/2)(R − r)/(r sin Φ)    (7)
The variables used for the thrust and bending moment estimations and the corresponding dimensions are shown in Table 5. All calculations are based on the 5 MW reference wind turbine described in [22]. Table 6 represents the mean values of the thrust calculations for all MT cases according to the matrix described in Table 2. The thrust was calculated by integrating the thrust distribution along the blade by Equation (5) for each wind speed realization. Afterwards, the average thrust was determined according to the wind realization duration of Figure 11. The increments in the average thrust of the microtabbed blade cases are shown in Table 6 in comparison with the clean case. The average thrust experienced an increase in every case with MTs implemented and, once again, the case with the MTs mounted on blade station 8 presents larger increments than the other cases, which is in concordance with the results of power output presented in Figure 14. The bending moment at the root of the blade has been determined by Equation (6), taking into account the bending moment distribution along the blade. It was determined for each wind speed realization and computed along the blade according to Equation (6). Afterwards, the mean bending moment was determined according to the wind realization duration of Figure 11. The increments in the average bending moment of the microtabbed blade cases are shown in Table 7 in comparison with the clean case. No extraordinary increment in the mean bending moment at the root of the blade has been found. All the increments in the bending moment due to the MT implementation are acceptable taking into account the increase in the average power output production presented in Figure 14.
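A minimal sketch of the spanwise integrations behind Equations (5) and (6), using the trapezoidal rule over a sampled normal-load distribution; the function name and the uniform-load example are illustrative assumptions, not the authors' implementation.

```python
def blade_loads(r, p_n, r_root=0.0):
    """Integrate a spanwise normal-load distribution p_n(r) [N/m],
    sampled at radial stations r [m], to obtain the blade thrust
    (as in Equation (5)) and the root flapwise bending moment
    (as in Equation (6)) by the trapezoidal rule."""
    thrust, moment = 0.0, 0.0
    for i in range(len(r) - 1):
        dr = r[i + 1] - r[i]
        thrust += 0.5 * (p_n[i] + p_n[i + 1]) * dr
        # moment arm is the distance from the blade root
        moment += 0.5 * ((r[i] - r_root) * p_n[i]
                         + (r[i + 1] - r_root) * p_n[i + 1]) * dr
    return thrust, moment

# Uniform load of 1000 N/m over a 60 m blade: T = 60 kN, M = 1.8 MN*m
T, M = blade_loads([0.0, 30.0, 60.0], [1000.0, 1000.0, 1000.0])
```

With the BEM element loads in place of the uniform toy distribution, the same integration yields the mean thrust and root bending moment values reported in Tables 6 and 7.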
Conclusions
A parametric study for the design and analysis of a MT on an airfoil has been carried out. To that end, 2D computational fluid dynamics simulations have been performed at a Reynolds number of Re = 7 × 10^6. The MT design attributes resulting from the simulations have allowed the sizing and positioning of the passive device based on aerodynamic performance. Comparisons of the CFD simulations and the DOWEC results have been made and verified the effectiveness of the MTs as flow control devices to increase the aerodynamic performance. The case DU91W(2)250MT9520, with the MT positioned at 95% of c and with a height of 2% of c, is the one with the best aerodynamic performance in terms of lift-to-drag ratio. Afterwards, BEM based computations have been carried out to investigate the effects of that designed MT on the power performance of a 5 MW wind turbine. An increase in the average wind turbine power output has been found in the current study due to the implementation of MTs at different blade stations. That increase is more notable for the wind speed realization with the lowest average wind speed, NTM5. However, the increase is still significant for the wind speed realizations with average speeds of 7.5 m/s (NTM7.5) and 10 m/s (NTM10); in those cases, the increase in the power output is lower but still important. The best results in terms of average power are reached by the case denoted by DU91W(2)250MT9520st8, with the MTs implemented into blade station 8. The largest increase in thrust has also been achieved by the case with the MTs mounted on blade station 8. As expected, the increase in the wind turbine power output due to the MT implementation leads to an augmentation of the bending moment at the root of the blade. However, this increase in the bending moment is acceptable taking into account the rise in the average power output production achieved by all cases.
Moreover, no significant variation in the power increase has been found for the other MT locations, st9 and st8st9. Because of the cheaper assembly of MTs in only one blade station, the case with the MTs on station 8, DU91W(2)250MT9520st8, is recommended.
The results of the current study show that careful analysis of the MT height and location on the pressure surface from the airfoil leading edge, combined with the selection of an appropriate spanwise location on the blade, can yield an effective flow control device system to increase the wind turbine power output.
"year": 2017,
"sha1": "500dfe9ddef5c8a84e6ad575a094b1f820e98b98",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/7/6/536/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "500dfe9ddef5c8a84e6ad575a094b1f820e98b98",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Cellulose sulfuric acid (CSA) and starch sulfuric acid (SSA) as solid and heterogeneous catalysts in green organic synthesis: recent advances
The use of heterogeneous solid acid catalysts is important to the development of sustainable chemical processes. To this end, cellulose sulfuric acid (CSA) and starch sulfuric acid (SSA) have been developed and used for a wide range of organic transformations, including the synthesis of pharmaceutically important heterocyclic compounds. These heterogeneous catalysts are easily recovered and reused several times without loss of their activity. In this review, we mainly summarize applications of these catalysts in synthetic organic chemistry.
Introduction
Synthetic organic chemistry generally involves the use of various solvents and catalysts, which may be toxic, hazardous, corrosive and inflammable. These chemical processes strongly affect the environment, and this has recently become a major issue at a global level. In view of the seriousness of chemical pollution, the uses of a wide range of these chemicals are being reexamined, leading to a search for the design and development of environmentally friendly and sustainable organic transformations for the synthesis of various chemical compounds.
[2-4] Homogeneous catalysts such as H2SO4, HCl, HBr, HF, CH3COOH and CF3COOH are frequently used in organic synthesis. [5,6] However, these acid catalysts are toxic, corrosive, harmful and difficult to handle. Furthermore, their disposal is a headache for the chemical industry. [7] In addition, under homogeneous conditions the catalysts are difficult to recover and reuse. [8] Not only the ecological profile, but also the economic profile is improved if recyclable heterogeneous catalysts and solvent-free conditions can be used. [10-17] Heterogeneous solid acids have advantages over conventional homogeneous acid catalysts, such as simplicity in handling and decreased reactor and plant corrosion problems, and they can be easily recovered and reused several times without loss of their efficiency.
In recent years, the direction of science and technology has been shifting more towards eco-friendly, natural product resources and reusable catalysts. In this regard, biopolymers are attractive candidates to explore for supported catalysis. [18] Several interesting biopolymers have been utilized as supports for catalytic applications, such as alginate, [19] gelatin, [20] starch [21] and chitosan [22] derivatives.
Cellulose and starch are the most abundant natural polymers and have been widely studied during the past several decades because they are biodegradable materials and a renewable resource. [23] Their unique properties make them attractive alternatives to conventional organic or inorganic supports in catalytic applications. Therefore, many efforts have been constantly made by researchers to introduce novel heterogeneously catalyzed organic transformations using cellulose sulfuric acid (CSA) and starch sulfuric acid (SSA), which are more efficient, economical and compatible with the environment. Also, these catalysts can be recovered and reused several times without a decrease in activity. The present article is intended to briefly review recent research progress concerning the synthesis of different organic compounds catalyzed by CSA and SSA.
Cellulose Sulfuric Acid (CSA)
CSA was prepared according to the following procedure: take 5.00 g of cellulose in 20 ml of n-hexane. The mixture is magnetically stirred and 1.00 g of chlorosulfonic acid (9 mmol) is added dropwise at 0 °C over 2 h; HCl gas is immediately evolved. After the addition is complete, the mixture is stirred for 2 h at room temperature. Then the mixture is filtered and the collected solid washed with 30 ml of acetonitrile and dried at room temperature to afford 5.25 g of cellulose sulfuric acid as a white powder. [33] CSA is non-explosive, non-hygroscopic and stable at room temperature.
Organic transformations using CSA have many advantages, such as a simple work-up process, an inexpensive catalyst, environmental friendliness, excellent yields of the products with high purity, shorter reaction times and solvent-free reaction conditions. CSA is a solid, heterogeneous catalyst and, after completion of an organic transformation, it can be recovered and reused several times without loss of its efficiency.
Various organic transformations catalyzed by CSA
Shaabani et al. [24] have developed an efficient and environmentally friendly method for the synthesis of α-aminonitrile derivatives through the condensation reaction of amines, aldehydes and trimethylsilyl cyanide (TMSCN), employing a catalytic amount of cellulose sulfuric acid (CSA) as a bio-supported catalyst at ambient temperature, which afforded excellent yields of the products (Scheme 1). In order to optimize the reaction conditions, the authors carried out this reaction in various solvents, such as water, methanol, ethanol, acetonitrile (MeCN), dichloromethane (DCM) and toluene; MeCN showed the best results in terms of the yield of the products. Solvent-free conditions were also tried for this reaction, but did not give the best results. Both aromatic and aliphatic aldehydes afforded excellent yields of the products. In addition, acid-sensitive aldehydes such as furfuraldehyde gave the aminocyano compound in high yield. Short reaction times, the recyclability of the catalyst without loss of activity, a simple work-up process and the use of a non-hazardous, non-corrosive and inexpensive solid acid catalyst are superior features of the protocol.
Scheme 1. Synthesis of α-aminonitrile derivatives catalyzed by CSA.
An efficient methodology for the synthesis of 4-aryl-1,4-dihydropyridines in excellent yields under solvent-free conditions, through the three-component condensation of various aldehydes, ethyl acetoacetate and ammonium acetate (NH4OAc) at 100 °C, has been established by Murthy et al. [25] (Scheme 2). The authors tried this reaction in various solvents such as methanol, ethanol, acetonitrile, toluene, dioxane and tetrahydrofuran (THF). However, these solvents were not as efficient for this reaction as solvent-free conditions in terms of the yield of the products.
Alinezhad and co-workers [26] have reported the solvent-free synthesis of bis-indolylmethanes, bis-2-methylindolylmethanes, bis-1-methylindolylmethanes and 3,3′-diindolyloxindole derivatives through the reaction of indoles with various aldehydes and ketones in the presence of a catalytic amount of cellulose sulfuric acid, which afforded excellent yields of the products (Scheme 3). This method is also highly chemoselective for aldehydes in the presence of ketones. For a comparative study, various catalysts such as ZrOCl2·8H2O, AlPW12O40, amorphous zirconium titanium phosphate (15-ZTPA), NH4Cl, zirconium tetrakis(dodecyl sulfate) [Zr(DS)4], ZrCl4 and CSA were employed in this protocol; however, CSA proved the best catalyst for this transformation. Also, the reaction was preferably undertaken at ambient temperature rather than at high temperature. In addition, solvent-free conditions showed better results than the reaction in solvents like acetonitrile and water. A variety of aliphatic, aromatic and heterocyclic aldehydes and ketones were converted smoothly to the corresponding bis-indolylmethanes in excellent yields.
A mild, simple and efficient protocol for the diazotization and iodination of various aromatic amines has been developed by Nemati et al. [27], employing sodium nitrite and potassium iodide in the presence of cellulose sulfuric acid under solvent-free conditions at room temperature (Scheme 4). In addition, aryl amines containing both electron-withdrawing and electron-donating groups were smoothly converted into the corresponding aryl iodides in excellent yields. In comparison with conventional diazotization procedures, acidic effluent is not produced with this protocol, which makes it more "green" and environmentally friendly. In conventional synthesis, [28] the reaction is usually carried out with sodium nitrite at low temperature in two steps: diazotization of the aryl amine in hydrochloric or sulfuric acid and then reaction with iodine or KI, sometimes in the presence of copper salts. The CSA protocol has advantages over the classical process in the use of mild reaction conditions, avoidance of corrosive acids and toxic solvents, and short reaction times.
Scheme 4. Diazotization and iodination catalyzed by CSA.
A simple procedure for the synthesis of 6-chloro-8-substituted-9H-purine derivatives, involving the one-pot condensation of 6-chloropyrimidine-4,5-diamine and various aldehydes in the presence of catalytic cellulose sulfuric acid (CSA) under solvent-free conditions at room temperature, has been disclosed by Maddila and co-authors 29 (Scheme 5). The reaction did not proceed in the absence of CSA. Smaller amounts of catalyst produced lower yields of the products; 0.045 g of CSA gave the best yield. Scheme 5. Synthesis of 6-chloro-8-substituted-9H-purine derivatives catalyzed by CSA.
An efficient protocol for the synthesis of aryl-14H-dibenzo[a,j]xanthene derivatives from β-naphthol and the corresponding aromatic aldehydes was developed by Madhav and co-workers 30 in the presence of a catalytic amount of cellulose sulfuric acid (CSA) under solvent-free conditions at 110 °C, affording excellent yields of the products (Scheme 6). There was no significant change in the yield of the products when the reaction was carried out at 120 °C instead of 110 °C, but at 100 °C the yield of the products was lower. Various catalysts such as p-toluenesulfonic acid (p-TSA), sulfuric acid in acetic acid and silica sulfuric acid (SSA) were employed in this reaction for comparative study, but they were not as efficient as CSA. By this protocol, 4-oxo-4H-chromene-3-carbaldehyde and 6-nitro-4-oxo-4H-chromene-3-carbaldehyde could be efficiently converted into the corresponding aryl-14H-dibenzo[a,j]xanthene derivatives. A significant improvement in the rate of the reaction and the yields of the products was observed when the reactions were carried out using CSA as compared with classical acidic catalysts such as acetic acid-sulfuric acid 31 and p-toluenesulfonic acid (p-TSA). 32 In addition, after completion of the reaction, the CSA could be recovered and reused several times, while the conventional catalysts could not be recovered. Scheme 6. Synthesis of aryl-14H-dibenzo[a,j]xanthene derivatives catalyzed by CSA. Safari et al.
33 have established an eco-friendly method for the synthesis of 1,4-dihydropyridines in excellent yields via the one-pot condensation of 1,3-diphenyl-2-propen-1-one derivatives, ethyl acetoacetate and ammonium acetate utilizing a catalytic amount of cellulose sulfuric acid (CSA) in water under reflux conditions (Scheme 7). A small amount of CSA (0.02-0.05 g) was sufficient for this transformation. To optimize the reaction conditions, the authors carried out this reaction with different amounts of CSA and noted that increasing the amount of CSA from 0.02 to 0.05 g significantly increased the yield of the products and reduced the reaction time. For comparative study, the authors carried out this reaction in various solvents such as ethanol, methanol, isopropanol, tert-butanol, tetrahydrofuran (THF) and acetonitrile (ACN), but none was as efficient as water. HCl, silica sulfuric acid (SSA) and xanthane sulfuric acid (XSA) were employed in the same reaction, but they were not as efficient as CSA. In addition, 1,6-dihydropyrazine-2,3-dicarbonitriles were efficiently synthesized using 2,3-diaminomaleonitrile, isocyanides and carbonyl compounds in the presence of a catalytic amount of cellulose sulfuric acid (CSA) in ethanol at room temperature, which afforded excellent yields of the products (Scheme 9). Scheme 9. Synthesis of 1,6-dihydropyrazine-2,3-dicarbonitrile derivatives catalyzed by CSA.
Rajack et al. 35 have described an environmentally friendly procedure for the synthesis of 3,4-dihydropyrimidin-2(1H)-ones/-thiones via Biginelli condensation in the presence of cellulose sulfuric acid (CSA) in water at 100 °C to afford good yields of the products (Scheme 9). In this case, 0.04 g of CSA gave poor results as compared to 0.05 g of CSA. Various organic solvents such as ethanol, methanol, acetonitrile, dioxane and toluene showed poor results as compared to water in terms of the yield of the product. In addition, N-dihydropyrimidinone-decahydroacridine-1,8-dione derivatives have been synthesized in excellent yield through the Hantzsch-type condensation of 5-ethoxycarbonyl-4-(4-aminophenyl)-6-methyl-3,4-dihydropyrimidin-2(1H)-one, dimedone and aromatic aldehydes in acetonitrile under reflux conditions (Scheme 10). To optimize the reaction conditions, the authors examined various catalysts such as Dowex, silica sulfuric acid (SSA), Amberlyst-15 and p-TSA in both the Biginelli and the Hantzsch-type condensation reactions. However, none of these catalysts showed better results than CSA. Various 3,4-dihydropyrimidinones and acridine derivatives were efficiently synthesized, demonstrating the wide synthetic utility of this method.
An environmentally friendly protocol for the synthesis of N-substituted pyrroles through the one-pot condensation reaction of 2,5-hexanedione with amines and diamines in the presence of a catalytic amount of cellulose sulfuric acid (CSA) at room temperature under solvent-free conditions has been demonstrated by Rahmatpour 36 (Scheme 11). Short reaction times and high product yields make this a highly useful synthetic protocol. To optimize the reaction conditions, the author carried out this reaction in various solvents such as dichloromethane (DCM), chloroform, carbon tetrachloride and acetonitrile; however, the use of these solvents gave less efficient results than the solvent-free conditions. A significant improvement in the rate of the reaction and the yields of the products was observed when the reactions were carried out using CSA as compared with classical acidic catalysts such as hydrochloric acid, 37 sulfuric acid, 38 and p-toluenesulfonic acid (p-TSA). 39 In addition, after completion of the reaction, the CSA could be recovered and reused several times, while the conventional catalysts could not be recovered. A new and green method has been discovered by Liu et al.
43 for the synthesis of 5-hydroxymethylfurfural (HMF) and 5-ethoxymethylfurfural (EMF) from fructose using cellulose sulfuric acid (CSA) as a biodegradable catalyst (Scheme 14). Here, HMF was obtained in high yield (93.6%) in dimethylsulfoxide (DMSO) within 45 min, while EMF was obtained in excellent yield (84.4%) by the etherification of HMF under the same reaction conditions. In addition, EMF could also be synthesized in good yield (72.5%) directly from fructose through a one-pot reaction involving the dehydration of fructose into HMF, followed by etherification of HMF into EMF. Shaabani and co-workers 44 have also developed a high-yielding synthesis of imidazoazine derivatives through the one-pot three-component condensation reaction of aldehydes, 2-aminoazines and isocyanides in the presence of a catalytic amount of cellulose sulfuric acid (CSA), an effective bio-supported catalyst, in methanol at room temperature (Scheme 15). The results showed that the efficiency and the yield of the product in methanol were higher than those obtained in other solvents such as water, ethanol, dichloromethane, toluene and acetonitrile, or under solvent-free conditions. This reaction was also carried out in the presence of various acids such as Amberlyst-21, Montmorillonite K10, CSA, HCl, H2SO4, acetic acid and AlCl3; however, the best yield was obtained with CSA. Scheme 15. Synthesis of imidazoazine derivatives catalyzed by CSA.
Shaterian and co-workers 45 have disclosed that the primary, secondary and tertiary alcohols as well as phenols and naphthols were effectively converted into their corresponding trimethylsilyl ethers in excellent yield by using with hexamethyldisilazane (HMDS) in the presence of catalytic amount of cellulose sulfuric acid (CSA) at room temperature with short reaction times (Scheme 16).Scheme 16.Silylation of alcohols and phenols catalyzed by CSA.
Nemati et al. 46 have developed a green and mild protocol for the diazotization and azidation of various aromatic amines by using sodium nitrite and sodium azide, respectively, in the presence of a catalytic amount of CSA at room temperature, which afforded excellent yields of the products (Scheme 17). A wide range of arylamines containing electron-withdrawing and electron-releasing groups were efficiently converted into the corresponding aryl azides in excellent yields, making this protocol environmentally friendly. A significant improvement in the rate of the reaction and the yields of the products was observed with this protocol as compared to the classical synthetic methods for aryl azides.
Scheme 17. Diazotization and azidation by using CSA.
A green and environmentally friendly method for the synthesis of 1-oxohexahydroxanthene derivatives in higher yields was developed by Karma and co-workers 47 using ortho-hydroxy benzaldehydes and substituted 1,3-hexanediones in the presence of a catalytic amount of CSA at room temperature (Scheme 18).The best results were obtained using 0.08 g of catalyst, while with lower amounts, or in the absence of a catalyst, lower yields of the products resulted.For comparison, the authors carried out this reaction with various catalysts such as methanesulfonic acid, silica sulfuric acid and sulfuric acid in acetic acid.However, in such cases the yield of the products was very low compared to CSA.
Scheme 18. Synthesis of 1-oxo-hexahydroxanthene derivatives catalyzed by CSA. An efficient protocol has also been reported for the synthesis of quinoxaline derivatives through the condensation reaction of 2,1,3-benzothiadiazole-4,5-diamine and 3-(α-bromoacetyl)coumarins in the presence of cellulose sulfuric acid (CSA) by grinding in a mortar and pestle at room temperature, affording excellent yields with high purity (Scheme 19). Various catalysts such as silica sulfuric acid, methanesulfonic acid and sulfuric acid in acetic acid were also employed in this reaction for comparison; however, they were not as efficient as CSA.
Scheme 19.Synthesis of quinoxaline derivatives catalyzed by CSA.
Reddy et al. 49 have reported a one-pot synthesis of 3,4-dihydropyrimidin-2(1H)-ones/-thiones utilizing various aldehydes, β-ketoesters and urea/thiourea in the presence of cellulose sulfuric acid (CSA) in ethanol under reflux conditions to afford excellent yields (Scheme 20). Various solvents such as ethanol, methanol, dichloromethane, acetonitrile and toluene were employed in this reaction for comparative study; among these, ethanol showed the best efficiency in terms of the yield of the product. Scheme 20. Synthesis of 3,4-dihydropyrimidin-2(1H)-ones/-thiones catalyzed by CSA.
Oskooie and co-workers 50 have discovered an efficient protocol for the synthesis of β-acetamido ketone derivatives in high yield through the one-pot, four-component condensation of benzaldehyde, dimedone, acetyl chloride and acetonitrile in the presence of cellulose sulfuric acid (CSA) in acetonitrile under reflux conditions (Scheme 21). Here, the synthesis of N-[(4,4-dimethyl-2,6-dioxocyclohexyl)(phenyl)methyl]acetamide was selected as the model reaction. To investigate the effect of the catalyst amount on the yield of the reaction, the authors carried out the model reaction with different amounts of catalyst. The results show that the optimum amount of catalyst was 0.01 g. The model reaction was also performed in the presence of the same amounts of various catalysts such as silica sulfuric acid (SSA), p-toluenesulfonic acid (p-TSA), sulfamic acid, SbCl3 and various heteropoly acids, but they did not work as efficiently as CSA. Scheme 21. Synthesis of β-acetamido ketone derivatives catalyzed by CSA.
An efficient and green protocol for the synthesis of 2-substituted benzimidazoles has been discovered by Kuarm and co-workers 51 via condensation of 2,1,3-benzothiadiazole-4,5-diamine with different aldehydes in the presence of a catalytic amount of cellulose sulfuric acid (CSA) under solvent-free conditions by grinding in a mortar and pestle at room temperature (Scheme 22). The efficiency of cellulose sulfuric acid was also compared with that of various acidic catalysts such as silica sulfuric acid (SSA), p-toluenesulfonic acid (p-TSA) and sulfuric acid in acetic acid; cellulose sulfuric acid proved more efficient and superior to the other acidic catalysts with respect to reaction time and yield. Scheme 22. Synthesis of 2-substituted benzimidazole derivatives catalyzed by CSA. Sadaphal et al. 52 have reported the synthesis of bis(indolyl)methane derivatives by using indole and various aldehydes in the presence of cellulose sulfuric acid (CSA) as a biodegradable catalyst at room temperature under solvent-free conditions (Scheme 23). A simple and environmentally friendly procedure has been developed by Shelke et al. 53 for the synthesis of 2,4,5-triarylimidazole derivatives through the three-component condensation of benzil/benzoin, various aldehydes and ammonium acetate in the presence of a catalytic amount of bio-supported cellulose sulfuric acid (CSA) under microwave irradiation and solvent-free conditions to afford excellent yields of the products (Scheme 24). Different acid catalysts such as HgCl2, SnCl2·2H2O, H2SO4, HCl, clay EPZG, clay EPZ-10 and CSA were examined by the authors for comparative study. A shorter reaction time and a better yield were obtained with CSA compared to the others. It was also noted that solvent-free conditions gave better results than the reaction in solvents.
Starch Sulfuric Acid (SSA)
SSA can be prepared by magnetically stirring a suspension of 5.00 g of starch in 20 ml of n-hexane and adding 1.00 g of chlorosulfonic acid (9 mmol) dropwise at 0 °C over 2 h. HCl gas is evolved from the reaction vessel immediately. After the addition is complete, the mixture is stirred for 2 h at room temperature. The mixture is then filtered, washed with 30 ml of acetonitrile and dried at room temperature to afford 5.25 g of starch sulfuric acid as a white powder. SSA is non-explosive, non-hygroscopic and stable at room temperature. 59 Organic transformations using SSA have many advantages, such as a simple work-up process, an inexpensive catalyst, environmental friendliness, excellent yields of products with high purity, short reaction times and solvent-free reaction conditions. SSA is a solid, heterogeneous catalyst, which can be easily recovered after completion of the reaction and reused many times without loss of activity.
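As a quick arithmetic check on the quantities reported in this preparation, the stated "9 mmol" of chlorosulfonic acid and the plausibility of the 5.25 g product mass can be verified from standard molar masses; the short script below is only an illustration, and the 1:1 HCl release per sulfated hydroxyl group is the textbook assumption.

```python
# Sanity-check arithmetic for the reported SSA preparation (illustrative only).
# Chlorosulfonic acid, HSO3Cl: molar mass from standard atomic weights.
M_HSO3Cl = 1.008 + 32.06 + 3 * 15.999 + 35.45   # ~116.5 g/mol
mmol = 1.00 / M_HSO3Cl * 1000                   # mmol in 1.00 g
print(round(mmol, 1))                           # ~8.6 mmol, consistent with "9 mmol"

# Each sulfation of a starch -OH group releases one HCl molecule, so the
# recoverable solid mass must fall below the 6.00 g of combined inputs.
M_HCl = 1.008 + 35.45
max_mass = 5.00 + 1.00 - (mmol / 1000) * M_HCl
print(round(max_mass, 2))                       # ~5.69 g; the reported 5.25 g fits
```

The reported 5.25 g is somewhat below the theoretical maximum, as expected after filtration and washing losses.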
Various organic transformations catalyzed by SSA
Rezaei and co-workers 55 have developed an environmentally benign protocol for the synthesis of α,α′-benzylidene bis(4-hydroxycoumarin) derivatives by employing various aromatic aldehydes with 4-hydroxycoumarin under solvent-free conditions at 80 °C to afford excellent yields of the products (Scheme 25). The reaction was also examined by the authors in ethanol, water, chloroform and toluene as solvents; however, in the presence of solvents the reaction becomes sluggish. In addition, it was noted that below 80 °C the reaction could not proceed efficiently and gives a lower yield of the products. Both electron-donating and electron-withdrawing substituents on the aromatic ring gave efficient results. Compared with conventional catalysts such as p-TsOH, 56 HCl, 57 and acetic acid, 58 SSA showed better results in terms of the yields of the products and the reaction time. Scheme 25. Synthesis of α,α′-benzylidene bis(4-hydroxycoumarin) derivatives catalyzed by SSA. Rezaei et al. 59 have developed the one-pot multi-component synthesis of 3,4-dihydropyrimidinone derivatives by using aldehydes, β-keto esters and urea/thiourea in the presence of starch sulfuric acid (SSA) as an environmentally friendly polymer-based solid acid catalyst under solvent-free conditions (Scheme 26). Various solvents such as acetonitrile, tetrahydrofuran (THF), ethanol and water were also examined by the authors in this reaction; however, solvent-free conditions gave better results. Lower yields of the products were observed at lower temperatures. Scheme 26. Synthesis of 3,4-dihydropyrimidinones catalyzed by SSA.
Hatamjafari 60 has discovered an efficient method for the synthesis of 1,5-diaryl-1H-pyrazole derivatives by employing phenylhydrazine and Baylis-Hillman adducts in the presence of starch sulfuric acid (SSA) as a reusable catalyst in 1,2-dichloroethane (DCE), which afforded excellent yields of the products (Scheme 27). The Baylis-Hillman adducts were prepared by the reaction of methyl or ethyl vinyl ketone with various benzaldehydes. Here, 0.05 g of SSA was sufficient to catalyze the reaction effectively. Various solvents such as water, methanol, ethanol, acetonitrile, tetrahydrofuran (THF) and 1,2-dichloroethane (DCE) were used in this reaction by the authors to optimize the reaction conditions, but only DCE gave excellent yields of the products. Scheme 27. Synthesis of 1,5-diaryl-1H-pyrazole derivatives catalyzed by SSA.
Conclusions
In recent years, several new heterogeneously catalyzed organic processes have been reported in the field of organic chemistry, adding new "environmentally friendly and green" tools for the synthesis of valuable molecules. In particular, cellulose sulfuric acid (CSA) and starch sulfuric acid (SSA) have been developed as biodegradable catalysts for improving the selectivity, purity and yield of organic compounds, and CSA has been used efficiently in a wide range of organic syntheses. Despite the great advances obtained so far, further research into new CSA- and SSA-catalyzed organic syntheses of pharmaceutical interest is required. Given this need, we will certainly see an increasing number of novel protocols for the synthesis of various organic compounds catalyzed by these solid heterogeneous catalysts.
Scheme 14. Synthesis of HMF and EMF catalyzed by CSA.
"year": 2015,
"sha1": "7ea9bfb040473b1d2ce2d81bfad8334e5ac4e2f0",
"oa_license": "CCBY",
"oa_url": "https://www.arkat-usa.org/get-file/52695/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c29a4550befaecc8b1e14724376d6ab1d357003b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Three-Stream Convolutional Neural Network with Squeeze-and-Excitation Block for Near-Infrared Facial Expression Recognition
Abstract: Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolution neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. We fed each stream with different local regions, namely the eyes, nose, and mouth. By using an SE block, the network automatically allocated weights to different local features to further improve recognition accuracy. The experimental results on the Oulu-CASIA NIR facial expression database showed that the proposed method has a higher recognition rate than some state-of-the-art algorithms.
Introduction
Facial expressions carry rich non-verbal information.Machines with the ability to understand facial expressions can better serve humans and fundamentally change the relationship between humans and machines.Therefore, automatic facial expression recognition has attracted attention from many fields, such as virtual reality [1,2], public security [3,4], and data-driven animation [5,6].
The effectiveness of facial expression recognition can be easily affected by environmental changes, such as changes of light, angle, and distance.Among these, the change of illumination conditions under visible light (VIS) (380-750 nm) has the largest influence [7,8].To overcome this influence, an active near-infrared (NIR) illumination source (780-1100 nm) is used for the recognition.In this study, an NIR camera, together with the NIR illumination sources, were placed in front of the subjects.The intensity of the NIR illumination source was much higher than that of the ambient NIR light in indoor environments.Therefore, the ambient illumination problem could be solved as long as the active NIR illumination source is constant.The NIR recognition system is resistant to ambient illumination variations, and has been successfully applied to the field of face recognition [9]; it can perform well even in dark environments [10], in which normal imaging systems fail to perform recognition.
Facial expressions manifest themselves as movements of one or several discrete parts of the face, such as tightening the lips to express anger and raising the mouth to express happiness [11].Some researchers use the features extracted from the entire face, which are called global features [12,13], for recognition, while other researchers use features extracted from specific parts, which are called local features [14][15][16][17].Many researchers have demonstrated that local features improve the performance of facial expression recognition compared with global features [18,19].The main reason for this advancement is that the specific local regions contribute more accurate information of facial changes that help to distinguish the expressions, while the global region contains more identity information.Some researchers [20,21] have pointed out that the eyes, eyebrows, and mouth are the most expressive facial parts.However, it is unknown which part of the face should carry more weight in expression recognition or how the correct weight can be allocated to different parts of the face.
In earlier studies, many facial expression recognition systems used static images [22][23][24] that only contain spatial information as the input.However, facial expression can be a dynamic process, and the dynamic information of the face can better reflect the change of expression.Therefore, it is necessary to extract spatial and temporal information from the image sequences to facilitate recognition.
In the work reported in this paper, we designed a convolutional neural network (CNN) to complete NIR facial expression recognition.The CNN used is a three-stream three-dimensional (3D) CNN, which can learn spatio-temporal information from image sequences.In addition, the three inputs to the CNN are all local features, which not only reduce computational complexity, but also remove information not related to the expressions (such as identity information).A squeeze-and-excitation (SE) block is appended after the 3D CNN, which can automatically assign more weight to the local features that carry more expression information.To overcome the over-fitting problem caused by small data, features are extracted through three identical shallow networks.Finally, we add a global face stream to the local network, further increasing the recognition rate.
The main contributions of this paper are the following: (1) Three local regions of the face are used as the input of the network for the NIR expression recognition, which can not only accurately extract the facial expression information, but also reduce the computational complexity and dimensions; and (2) an SE block is added to model the dependencies between feature channels and adaptively learn the weight of the channel to gain efficient expression information and attenuate the useless information.
Related Work
Facial expressions can be decomposed into movement of one or more discrete facial action units (AUs).Inspired by this theory, Liu et al. [25] located common patches and unique patches of different expressions for recognition.However, this method could cause overlapping of located areas.Liu et al. [26] did further work and proposed a framework called FDM to select the active features of each expression without overlapping.Later, Liu et al. [27] proposed a 3D CNN with deformable action part constraints that can locate and code action units.
To extract temporal features while acquiring spatial features, Ji et al. [28] extended a CNN to a 3D CNN, which can extract the spatio-temporal information from image sequences.Szegedy et al. [29] utilized the 3D CNN to extract temporal information for video-based expression recognition.Chen et al. [30] proposed a new descriptor, the histogram of oriented gradients from three orthogonal planes (HOG-TOP), to extract the dynamic texture features from image sequences, which are fused with the geometric features to identify expressions.Fonnegra et al. [31] proposed a deep learning model and Yan et al. [32] presented collaborative-discriminative-multi-metric-learning (CDMML)-based image sequences for emotion recognition.To make the system more precise, Zia et al. [33] proposed a dynamic weight majority voting mechanism for the construction of ensemble systems.However, since these methods are all based on visible light, the impact of external illumination changes are not considered.
The NIR facial images/videos are hardly influenced by the ambient visible light change.Farokhi et al. [34] proposed a method of extracting global and local features by using Zernike moments (ZMs) and Hermite kernels (HKs), respectively, and then used the fused features to identify the NIR face.Taini et al. [35] assembled a near-infrared facial expression database and completed the first study based on NIR facial expression recognition.Zhao et al. [18] developed the database of NIR facial expressions, called the Oulu-CASIA NIR facial expression database, and used local binary patterns form three orthogonal planes (LBP-TOP) to extract dynamic local features.It was proved in this work that NIR can overcome the influence of visible-light illumination changes on expression recognition.However, these methods must extract facial expression features manually.Jeni et al. [36] proposed a 3D-shape-information-based recognition technique and further proved that an NIR camera configuration is suitable for facial expressions under light-changing conditions.Wu et al. [37] proposed a three-stream 3D convolutional network for NIR facial expression recognition, using a combination of global and local features, but did not consider assigning different weights to local features.
3D CNN
A 3D CNN is more suitable for spatial-temporal feature extraction.In [28], to process image sequences more efficiently, a 3D CNN approach is proposed to address action recognition problems.Through 3D convolution and pooling operations, a 3D CNN has the ability to learn temporal features.
A 3D CNN consists of an input layer, 3D convolution, 3D pooling (usually, each convolution layer is followed by the pooling layer), and a fully connected (FC) layer.The dimension of the input image sequences to the 3D CNN is represented as d × l × h × w, where d is the number of the channels, l the number of frames of video clips, and h and w the height and width, respectively, of each frame.In addition, 3D convolution and pooling have a kernel size in t × k × k, where t is the temporal depth and k the spatial size.
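The output size of each 3D convolution or pooling stage follows the usual formula out = ⌊(in − k + 2p)/s⌋ + 1, applied independently to the temporal axis and the two spatial axes. A minimal sketch of this arithmetic follows; the input clip size and the stride/padding values are illustrative assumptions, not taken from the paper.

```python
def conv3d_out_shape(in_shape, kernel, stride=(1, 1, 1), pad=(0, 0, 0)):
    """Output (frames, height, width) after one t x k x k conv/pool stage.

    All arguments are (t, h, w) triples; the standard formula
    out = (in - k + 2p) // s + 1 is applied per axis.
    """
    return tuple((i - k + 2 * p) // s + 1
                 for i, k, s, p in zip(in_shape, kernel, stride, pad))

# A 32-frame clip of 64x64 crops through a kernel with temporal depth 8 and
# spatial size 3x3 (all sizes here are assumptions for illustration):
print(conv3d_out_shape((32, 64, 64), kernel=(8, 3, 3)))          # (25, 62, 62)
# With padding 1 on every axis, a 3x3x3 kernel preserves the shape:
print(conv3d_out_shape((25, 62, 62), (3, 3, 3), pad=(1, 1, 1)))  # (25, 62, 62)
```

The same helper can be chained layer by layer to track how a clip shrinks through a stack of 3D convolution and pooling stages.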
Squeeze-and-Excitation Networks (SENets)
Hu et al. [38] proposed squeeze-and-excitation networks (SENets).The basic architectural unit of SENets is the SE building block, which is shown in Figure 1.
Before the SE block operation, input data X is transformed into features U through a series of convolution operations, i.e., F_tr: X → U, X ∈ R^(W′×H′×C′), U ∈ R^(W×H×C), where F_tr represents the transformation from X to U, H (H′) and W (W′) are the frame height and width, respectively, and C (C′) the numbers of channels.
The SE block mainly consists of two operations: squeeze and excitation. Because the filter learned by each channel in the CNN operates on a local receptive field, each feature map in U cannot utilize the context information of other feature maps. The purpose of the squeeze operation is to provide a global receptive field, so that the lower layers of the network can also use global information. A global average pooling operation is used to compress U (multiple feature maps) into Z, so that the C feature maps eventually become a real-valued vector of size 1 × 1 × C. The squeeze operation is performed by

z_m = F_sq(u_m) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} u_m(i, j),

where z_m represents the mth element of Z and u_m the mth element of U. The excitation operation is a simple gating with a sigmoid activation. The purpose of this operation is to model the interdependence between feature channels by learning parameters that generate the weight of each feature channel. To meet these requirements while limiting model complexity and aiding generalization, two FC layers (1 × 1 convolution layers) are introduced: a dimension reduction layer with parameter W_1 ∈ R^((C/r)×C), where r is the dimension reduction ratio, followed by a rectified linear unit (ReLU), and a dimension increase layer with parameter W_2 ∈ R^(C×(C/r)). The excitation is performed by

S = F_ex(Z, W) = σ(W_2 δ(W_1 Z)),

where S is the vector after the excitation operation, and δ and σ refer to the ReLU function and the sigmoid function, respectively. Finally, S is combined with U to obtain the final output by

x̃_m = F_scale(u_m, s_m) = s_m · u_m,

where s_m is the mth element of S and x̃_m the mth element of the final output X̃; F_scale refers to channel-wise multiplication.
The goal of the SE block is to greatly improve the expressiveness of the network; it adaptively recalibrates the feature weight by modeling the interdependencies between the channels.In more detail, it allows the network to use global information to selectively enhance the beneficial features of the channel and suppress the useless function channels.
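The squeeze, excitation, and channel-wise rescaling steps described above can be sketched directly in NumPy. The feature-map size, reduction ratio, and random weights below are illustrative assumptions, not the trained parameters of the network.

```python
import numpy as np

def se_block(U, W1, W2):
    """Apply a squeeze-and-excitation block to feature maps U of shape (H, W, C).

    Squeeze:    z_m = (1/(H*W)) * sum_{i,j} u_m(i, j)
    Excitation: S   = sigmoid(W2 @ relu(W1 @ Z))
    Rescale:    x~_m = s_m * u_m  (channel-wise multiplication)
    W1 has shape (C//r, C) and W2 has shape (C, C//r) for reduction ratio r.
    """
    z = U.mean(axis=(0, 1))                                    # squeeze -> (C,)
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # excitation
    return U * s                                               # broadcast over H, W

rng = np.random.default_rng(0)
U = rng.standard_normal((8, 8, 4))    # toy feature maps, C = 4
W1 = rng.standard_normal((2, 4))      # reduction ratio r = 2
W2 = rng.standard_normal((4, 2))
X_tilde = se_block(U, W1, W2)
print(X_tilde.shape)                  # (8, 8, 4)
```

Because the sigmoid keeps every channel weight in (0, 1), the block can only attenuate channels relative to one another, which is exactly the selective re-weighting behaviour the text describes.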
Proposed System
In this paper, we propose a three-stream 3D CNN with an SE block called an SE three-stream fusion network (SETFNet).We took three local regions, the eyes (including eyebrows), nose, and mouth, from the facial expression image sequence as inputs to the three-stream network.After fusions of the three streams, an SE block was added to the network to adaptively learn the weight of each feature channel.
To avoid over-fitting problems, a deep CNN requires large amounts of data for training.However, the available database for NIR expression is small in size.To train a CNN model on a small database, researchers use a medium-size CNN [39,40].Therefore, the SETFNet in this paper was also a medium-size CNN with four convolutional layers.
The structure of the proposed SETFNet is shown in Figure 2. It is a three-stream 3D CNN consisting of three identical sub-networks. Each sub-network consists of four convolutional layers and has the same parameters. The numbers of convolution kernels for the four convolution layers, first through fourth, are 16, 32, 64, and 128, respectively. The kernel size of the first convolution layer is 3 × 3 × 8, and a large temporal stride is used here to eliminate some useless information. The kernel size of the other three convolution layers is 3 × 3 × 3. The three streams were fused and followed by an SE block to recalibrate the weight of each stream. The details of each sub-network are shown in Table 1.
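For reference, the sizes produced by such (valid) 3D convolutions follow the standard rule out = floor((in + 2·pad − kernel)/stride) + 1 per dimension; the sketch below applies it to the first layer, assuming, purely for illustration, a temporal stride of 4 (the paper states only that a large temporal stride is used):

```python
def conv3d_out_shape(in_shape, kernel, stride=(1, 1, 1), pad=(0, 0, 0)):
    """Standard output-size rule for a 3D convolution:
    out = floor((in + 2*pad - kernel) / stride) + 1, per dimension."""
    return tuple((i + 2 * p - k) // s + 1
                 for i, k, s, p in zip(in_shape, kernel, stride, pad))

# e.g. a 36 x 64 x 32 local-region clip through the first layer's 3 x 3 x 8
# kernel, with an illustrative temporal stride of 4 and no padding:
print(conv3d_out_shape((36, 64, 32), (3, 3, 8), stride=(1, 1, 4)))  # (34, 62, 7)
```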
Layers Kernel Parameter Settings Number of Kernels Output Size
Electronics 2019, 8, x FOR PEER REVIEW 5 of 16
Fusion Network
After extracting the features from the three regions (eyes, nose, and mouth), three stream features defined as T 1 , T 2 , and T 3 were obtained. The three stream features were then concatenated together to achieve better recognition by

T = T 1 ⊕ T 2 ⊕ T 3 ,

where T is the fused feature and ⊕ represents the concatenation operation. The concatenated feature T was used as input to the next operation of the network.
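The fusion is a plain concatenation; a minimal NumPy sketch (with made-up 128-dimensional stream features) is:

```python
import numpy as np

# The fusion T = T1 ⊕ T2 ⊕ T3 is a channel-wise concatenation of the three
# stream features (toy dimensions; the real ones come from the network).
T1, T2, T3 = np.ones(128), 2 * np.ones(128), 3 * np.ones(128)
T = np.concatenate([T1, T2, T3])   # fused feature fed to the next layer
```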
Experiments
The proposed network was assessed on the Oulu-CASIA NIR facial expression database [18]. The network was implemented in the Caffe framework, which ran on a PC with an NVIDIA Geforce GTX 1080 graphical processing unit (GPU) (8 G). Training a model with the correct parameters is the key to achieving optimal performance, which has a direct impact on the experimental results. We trained the network from scratch using a batch size of 4, an initial learning rate of 10^−3, and a weight decay of 0.0005.
Database
Because NIR facial expression databases are not very common, the Oulu-CASIA NIR facial expression database is currently the only suitable one. It was collected under dark, weak, and normal light conditions, and consists of six kinds of facial expressions (anger, disgust, fear, happiness, sadness, and surprise) from 80 people between 23 and 58 years old, so each illumination condition has 480 image sequences. All expression sequences begin at the neutral emotion and end at the peak of the emotion. Each subject was asked to sit on a chair in the observation room so that they were in front of the camera. The distance between the face and the camera was approximately 60 cm. Subjects made expressions according to the image sequences, while videos were captured by a USB 2.0 PC camera (SN9C 201 & 202). Each clip was filmed at a frame rate of 25 fps. The image resolution was 320 × 240.
The aforementioned database has been used in many studies of facial expression recognition. It has been proved that the identification task under dark illumination conditions is the most difficult [18], because the facial image loses most of its texture features in dark light conditions. Therefore, we tested the proposed network on this most difficult sub-dataset (dark illumination condition).
We used the very popular method of tenfold cross-validation. All of the image sequences were divided into 10 groups. At each fold, nine groups were used to train the network and the remaining group was used for testing. During the entire experiment, there was no overlap between the training and testing sets.
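The tenfold protocol can be sketched as follows (the grouping into ten disjoint index sets is illustrative; the paper does not state how sequences were assigned to groups):

```python
# Tenfold cross-validation as described: split the sequences into 10 groups;
# at each fold, 9 groups train and 1 tests, with no train/test overlap.
def ten_folds(n_items, k=10):
    idx = list(range(n_items))
    folds = [idx[i::k] for i in range(k)]          # 10 disjoint groups
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

for train, test in ten_folds(480):                 # 480 sequences per condition
    assert not set(train) & set(test)              # no overlap
    assert len(train) + len(test) == 480
```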
Data Pre-Processing
In our experiment, a video sequence was pre-processed in the following three steps: (1) frame-by-frame face detection; (2) locating the eyes, nose, and mouth; and (3) cropping the eye, nose, and mouth areas. We found that step 2 had a significant effect on the performance of the network, so accurate spotting of the chosen areas is crucial. To ensure that this was done accurately, the local areas were cropped based on the locations of landmark points annotated by a robust landmark detector, discriminative response map fitting (DRMF) [41]. DRMF not only achieves good performance among landmark-detection methods [30], but also consumes very little computation time.
The cropping of these local areas was done by an automatic method. Since some of the cuts were inaccurate, manual cropping was also used. Using the facial landmark points annotated earlier, the three regions were identified using rectangular bounding boxes determined from the eye, nose, and mouth landmark points. We segmented the three local regions according to the following eleven points: E1 (x 1 , y 1 ), E2 (x 2 , y 2 ), E3 (x 3 , y 3 ), E4 (x 4 , y 4 ), E5 (x 5 , y 5 ), N1 (x 6 , y 6 ), N2 (x 7 , y 7 ), M1 (x 8 , y 8 ), M2 (x 9 , y 9 ), M3 (x 10 , y 10 ), and M4 (x 11 , y 11 ) (shown in Figure 3). The center point of the rectangular bounding box of the eye region is L1 = E5 (x 5 , y 5 ), and the length and width of the rectangle are (5/3)|x 2 − x 1 | and (4/3)|y 4 − y 1 |, respectively. The center point of the rectangular bounding box of the nose region is L2 = (x 5 ,
For the network input, each video sequence is normalized to 32 frames using the linear interpolation method [42]. Each frame of the global face (whole face) and of the local areas was resized to 88 × 108 and 36 × 64, respectively. To reduce the amount of calculation, all input images were converted to 8-bit grayscale.
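The temporal normalization to 32 frames can be sketched with index-wise linear interpolation (a simple stand-in for the method cited as [42]; the toy clip and function name are ours):

```python
import numpy as np

def normalize_length(frames, target=32):
    """Temporally normalize a clip (T, H, W) to `target` frames by linear
    interpolation over the fractional frame index."""
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    pos = np.linspace(0, T - 1, target)            # fractional source positions
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (pos - lo)[:, None, None]                  # blend weight per output frame
    return (1 - w) * frames[lo] + w * frames[hi]

clip = np.stack([np.full((4, 4), t) for t in range(50)])   # 50-frame toy clip
out = normalize_length(clip)                                # (32, 4, 4)
```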
Comparisons of Different Streams and Their Fusion
Table 2 shows the average results of tenfold cross-validation for each local region using a single sub-network (one stream) and the fused network. The feature information of the eye (including eyebrows), nose, and mouth regions is extracted by a single stream, and the recognition rates are 35.37%, 42.76%, and 68.35%, respectively. The mouth region has the highest recognition rate, which may indicate that this part is the most expressive in the database. The recognition rate of the eye region is the lowest among the three regions. This may be due to some of the participants wearing glasses. In the NIR face image, the NIR light reflected by the glasses removes the features of the eyes, so the frames with glasses have a great influence on recognition. At the same time, we can see that the recognition rate of the three-local-stream fused network (TFNet) reaches 78.68%, which is much higher than that of each single-stream network (eye, 35.37%; nose, 42.76%; mouth, 68.35%). This indicates that our fusion is very effective in improving the recognition rate. After the network was fused, we added the SE block, which automatically allocates weights to the different streams. Since the SE block lets the entire network adaptively learn the weights of the feature channels, the SETFNet further improves the recognition rate, reaching 80.34%. To investigate whether the SETFNet had extracted most of the expression features, we added one more stream to the SETFNet, which takes the frames of the global face as input. Because each frame of the global face has a larger spatial size than that of each local area, we added one more convolution pair to this added stream. The network structure is shown in Figure 4, with the fourth stream being the global face stream. When it is added to the SETFNet, the recognition rate becomes 81.67%. The SETFNet itself can achieve an 80.34% recognition rate. That is to say, after adding the entire face as input, the improvement of the recognition rate is still limited. This may indicate that the SETFNet has extracted most of the expression features.
Table 2 also shows the time consumption of the various single sub-networks and fused networks. The time for a single sub-network to process an image sequence is 0.515 s, and the times for TFNet and SETFNet to process a sequence are 1.158 and 1.237 s, respectively. Considering the large improvement in recognition rate made by the TFNet and SETFNet, the increase in computation time is acceptable. However, when a global face stream is added to the SETFNet, the time for the network to process a sequence is 2.142 s. The slight increase in recognition rate (80.34% versus 81.67%) made by the global stream comes at the expense of the processing time (1.237 s versus 2.142 s). However, all of the computation times may be within acceptable limits, since the input is 32 frames. Under the hardware settings used (NVIDIA Geforce GTX 1080 GPU (8 G) for deep-learning acceleration), the SETFNet can process 32/1.237 = 25.87 frames every second. The frame rate of a normal imaging system is 25-30 fps, and 25.87 fps is within this range, which means that the SETFNet can give the recognition result with just 1 s of lag in real-time imaging if the computation is performed in parallel with the imaging. With better hardware, the computation time can be further decreased to or below 1 s, which makes the processing a real-time process. Therefore, this network could be used in real applications.
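The throughput figure quoted here is simple arithmetic over the measured per-sequence times:

```python
# 32-frame sequences processed in the measured per-sequence times
seq_frames = 32
fps_setf = seq_frames / 1.237        # SETFNet: ~25.87 fps, within the 25-30 fps range
fps_global = seq_frames / 2.142      # SETFNet + global: ~14.94 fps, below real time
```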
The recognition rate of the eye region is the lowest among the three regions. One reason may be that the eyes have fewer features than the other parts; another reason could be that some of the subjects wear glasses. To verify the effect of glasses on the recognition rate, we input the eyes with and without glasses into the sub-network separately. The recognition results are shown in Table 3. It is seen that the recognition rate without glasses is better than that with glasses, which indicates that the glasses remove some features of the eyes. Since we divided the dataset into two parts, the recognition rates for wearing glasses and not wearing glasses are lower than that of the single sub-network with all data as the input.
Comparison of Embedded SE Block
The SE block was added to the network after the fusion so that the network could receive the information of the entire network and have a global receptive field. In the SE block, the reduction ratio r is an important parameter that can change the capacity and computational cost. We compared different reduction ratios r in our network model, and the results are shown in Table 4. When r = 16, the accuracy is the highest; therefore, r is set to 16.
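The reduction ratio r only changes the sizes of the SE block's two FC layers, whose combined parameter count is 2·C²/r; a quick sketch (assuming C = 384 channels after fusing the three 128-channel streams, which is our reading, not a number stated in the paper):

```python
# W1 has (C/r x C) weights and W2 has (C x C/r) weights, so the SE block adds
# 2*C^2/r parameters; larger r means lower capacity and cost.
def se_params(C, r):
    return (C // r) * C + C * (C // r)

costs = {r: se_params(384, r) for r in (4, 8, 16, 32)}   # assumed C = 3 * 128
```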
Comparisons with Other Methods
Table 5 shows the recognition rates of the different methods on the Oulu-CASIA NIR facial expression database under dark-lighting conditions. For all of the methods, we used tenfold cross-validation to obtain an average recognition rate. The results of the Deep Temporal Appearance-Geometry Network (DTAGN), 3D CNN Deformable Facial Action Parts (DAP), and NIRExpNet were obtained from [37], and the result of LBP-TOP was obtained by implementing the algorithm in MatLab software (MathWorks, Natick, MA, USA). SETFNet and SETFNet + global were implemented using Caffe. It is seen that LBP-TOP and 3D CNN DAP can achieve recognition rates of 69.32% and 72.12%, respectively, which are higher than that of DTAGN. NIRExpNet used the fusion of local and global features, and can therefore achieve an even higher recognition rate than LBP-TOP and 3D CNN DAP. SETFNet uses only the local information of three regions, but it can achieve a higher recognition rate (even higher than NIRExpNet, which uses local and global features). When a global face stream is added to SETFNet, it further improves the recognition rate to 81.67%. This indicates that the automatic allocation of feature-channel weights helps improve recognition performance, which could be a promising method for NIR facial expression recognition.
Confusion Matrixes
To analyze the experimental results further, the confusion matrixes of SETFNet and SETFNet + global are shown in Tables 6 and 7, respectively. The labels on the left-hand side represent the actual classes and those at the bottom represent the predicted classes; each percentage value in the matrix was calculated by dividing the number of samples of a predicted class by the number of the corresponding actual class. After adding the global stream, the recognition rate of each expression is increased by 1-2%. It can be seen from Tables 6 and 7 that, whether or not the global face stream is added, both happiness and surprise have high recognition rates, while fear and disgust have relatively low rates. The latter low recognition rates may be due to the slight movement of the AUs for fear and disgust, which makes it more difficult to distinguish them from other expressions. Moreover, disgust is confused with anger, fear, and sadness, and fear is confused with anger, disgust, happiness, and surprise, perhaps because their appearances and movements are similar to each other. SETFNet + global takes the entire face as input. More input features should, in general, increase the true prediction values (the values on the diagonal of the confusion matrix) and decrease the false prediction values (zero values will be unchanged). It is seen from Table 6 that SETFNet + global does increase all true prediction values. However, more input does not always decrease the false prediction values. We can see from Table 7 that increased false prediction values do exist, indicated by up-pointing arrows. As the database is small in size, the prediction values could vary due to noise. To ensure that the located false prediction values are increased only as a result of more input features, we located their paired false prediction values as well. Each false prediction value pair appears in the same color in Table 7; for example, 9.54% (fear predicted as anger) and 0% (anger predicted as fear) in green. Only when both paired values are increased can the two expressions be considered as confused with each other more in SETFNet + global.
Under this criterion, we can see that sadness tends to be recognized as disgust more (8.25% versus 3.52%), and disgust tends to be recognized as sadness more (4.08% versus 2.50%), if SETFNet + global is used. The reason for this might be that, in sadness and disgust expressions, the lower cheek areas have an up-and-down movement pattern due to the movement of AU15 or AU10 [44]. When SETFNet + global takes these similar movement patterns as input, sadness is recognized as disgust more often.
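The row normalization used for these matrixes can be sketched as follows (the counts are toy values, not the paper's data):

```python
import numpy as np

# Each entry is the count for a predicted class divided by the total count of
# the corresponding actual class (the row), expressed as a percentage.
def normalize_rows(counts):
    counts = np.asarray(counts, dtype=float)
    return 100.0 * counts / counts.sum(axis=1, keepdims=True)

toy = [[45, 3, 2],    # actual class 0
       [5, 40, 5],    # actual class 1
       [0, 10, 40]]   # actual class 2
pct = normalize_rows(toy)   # every row now sums to 100
```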
Tables 8-11 show the confusion matrixes of the comparison algorithms, with the labels on the left-hand side representing the actual classes and those at the bottom representing the predicted classes. The confusion matrix of NIRExpNet (Table 8) was adopted from [37] directly. The other matrixes were obtained by implementing the algorithms with MatLab code on the database (tenfold cross-validation). Happiness and surprise again have higher recognition rates than the others for all algorithms. Fear has the lowest average recognition rate, and disgust has an average recognition rate similar to those of anger and sadness. This trend is in accord with what SETFNet reveals. To further analyze the discrimination ability of the different methods, we counted the number of zero false prediction values in each matrix. This number indicates that the two corresponding expressions are perfectly recognized by the method. It is observed that NIRExpNet has 20 zero false prediction values, many more than the other methods. 3D CNN DAP, DTAGN, and LBP-TOP have similar numbers of zero false prediction values (approximately 12). These results indicate that NIRExpNet has the best performance in distinguishing one expression from the others. This could be because NIRExpNet is designed specifically for the dataset. The features extracted by NIRExpNet are balanced, so the possibility of confusing one expression with the others is small. Some zero false prediction values do not have zero paired values, e.g., the values in red in Table 9: 4.51% of the surprise expression was recognized as anger, but 0% of anger was recognized as surprise using 3D CNN DAP. This could be due to the noise of the small dataset.
The F1 score and Matthews correlation coefficient (MCC) are calculated from the confusion matrixes; these indexes consider both the precision and recall of the classification results and are fairer methods for assessing a classifier. The F1 score and MCC are summarized in Table 12. It is observed that SETFNet and SETFNet + global have the highest F1 and MCC, NIRExpNet has the second-highest values, and 3D CNN DAP the third-highest. LBP-TOP and DTAGN have the lowest F1 and MCC. This indicates that SETFNet outperforms the other methods under even more rigorous assessment. The order of the F1 and MCC performance of the methods is in accord with the accuracy performance. This also indicates that the number of samples of each sub-category is well balanced.
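For reference, both indexes can be computed directly from a confusion matrix; the sketch below uses macro-averaged F1 and the multiclass MCC formula (one common recipe, as the paper does not spell out its exact averaging):

```python
import numpy as np

def f1_mcc(cm):
    """Macro F1 and multiclass MCC from a confusion matrix cm[actual, predicted]."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    prec = tp / cm.sum(axis=0)                 # per-class precision
    rec = tp / cm.sum(axis=1)                  # per-class recall
    f1 = np.mean(2 * prec * rec / (prec + rec))
    c, s = tp.sum(), cm.sum()                  # correct samples, total samples
    p, t = cm.sum(axis=0), cm.sum(axis=1)      # predicted / actual class totals
    mcc = (c * s - p @ t) / np.sqrt((s**2 - p @ p) * (s**2 - t @ t))
    return f1, mcc

f1, mcc = f1_mcc([[45, 5], [5, 45]])           # toy 2-class matrix
```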
Potential Application and Improvement
SETFNet, which used three regions of the face as input, can achieve higher recognition rates than NIRExpNet, which used the entire face as input, because an SE block can automatically allocate the weights to different streams. These results suggest that the automatic allocation of weights to different features helps improve the recognition rate. This idea of automatic allocation may have potential use in other recognition tasks. An SE block can always be added after a feature fusion step to allocate weights to different features and further improve the recognition rate.
SETFNet + global has a slightly higher recognition rate than SETFNet, but consumes much more calculation time. This indicates that a small part of the face can carry most of the expression information. For any other type of facial expression recognition task, we may analyze only the parts of the face carrying expression information, which can save much calculation time and make recognition a real-time application.
The highest recognition rate on the Oulu-CASIA NIR facial expression database (dark condition) is 98.6%, achieved by Rivera et al. [45], who proposed a directional number transitional graph (DNG) method. The confusion matrixes achieved by the DNG method are summarized in Tables 13 and 14 (adopted from [45] directly), with the labels on the left-hand side representing the actual classes and those at the bottom representing the predicted classes. Table 13 is the confusion matrix of DNG using 3D Sobel (DNG S ), and Table 14 is the confusion matrix of DNG using a nine-plane mask (DNG P ). It is seen that the recognition rate of each expression class is more than 97%, and the rates are similar to each other. This may indicate that the DNG has obtained features good enough to discriminate one expression from the others. In terms of zero false prediction values, DNG S has 21 and DNG P has 23, which are more than all other methods. This indicates that the DNG method achieves the least confused matrix. The F1 and MCC of DNG are higher than those of the other methods as well (DNG S : F1 0.9859, MCC 0.9830; DNG P : F1 0.9879, MCC 0.9856). This indicates that DNG outperforms the other methods under more rigorous assessment. DNG consists of designed feature-extraction and feature-fusion methods, which make the extracted features robust under uneven illumination conditions. This could be the reason why DNG achieves the best performance. According to the design of DNG, two aspects could be considered in the future design of the SETFNet. Firstly, the uneven illumination conditions in the database could be taken into account when designing the network, such as using the features extracted by DNG as a stream of the network. Secondly, a more sophisticated fusion method could be used in a future design; e.g., the concatenation operation used in this paper could be replaced by the fusion method in DNG.
However, unlike DNG, which uses hand-crafted features, the SETFNet proposed in this paper extracts features automatically. This design does not require background knowledge of the data. Specifically, the feature extraction in this paper was performed using a 3D CNN. Since the dataset used for training the CNN is small in size, the proposed network is not deep enough and may not extract high-level features. To further improve the recognition rate, transfer learning could be used, i.e., training a deeper CNN on a larger dataset and then fine-tuning the network on the NIR database.
Conclusions
In this paper, we proposed a three-stream 3D CNN architecture with an SE block, called SETFNet, that can automatically learn spatio-temporal features simultaneously. We used only three local regions of the face as input to the network. The advantages of using local information as input to the network are the removal of some information unrelated to recognition and a reduction of the amount of computation. To enable the network to adaptively learn the weight of each feature channel, an SE block was added to the network after the fusion of the three single sub-networks. Experimental results show that SETFNet can achieve an average recognition rate of 80.34%; when a global face stream was added to SETFNet, the recognition rate further increased to 81.67%, which is higher than some state-of-the-art methods.
Figure 2 .
Figure 2. Overall structure of the proposed SE three-stream fusion network (SETFNet). The SE block is displayed in the dotted box.
(y 7 − y 6 )/2), and the length and width of the rectangle are |y 7 − y 6 | and |x 3 − x 4 |, respectively. The center point of the rectangular bounding box of the mouth region is L3 = (x 5 , (y 11 − y 9 )/2), and the length and width of the rectangle are (5/3)|x 10 − x 8 | and (4/3)|y 11 − y 9 |, respectively.
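As a concrete illustration of these rectangular-box constructions, the eye-region box defined earlier (center L1 = E5, length (5/3)|x 2 − x 1 |, width (4/3)|y 4 − y 1 |) can be computed as follows; the landmark coordinates are toy values, not from the database:

```python
# Eye-region bounding box from the landmark points, as specified in the text.
def eye_box(E1, E2, E4, E5):
    length = 5 / 3 * abs(E2[0] - E1[0])    # (5/3)|x2 - x1|
    width = 4 / 3 * abs(E4[1] - E1[1])     # (4/3)|y4 - y1|
    cx, cy = E5                            # box center L1 = E5
    return (cx - length / 2, cy - width / 2, length, width)   # (x, y, w, h)

# toy landmark coordinates (illustrative only)
box = eye_box(E1=(20, 30), E2=(68, 30), E4=(44, 45), E5=(44, 36))
```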
Figure 3 .
Figure 3. Positions of 11 points for segmenting three regions.
Figure 4 .
Figure 4. Structure of SETFNet plus global face stream.
Table 1 .
Configuration of each stream.
Table 2 .
Comparison of different local and fused networks.
Table 3 .
Comparison of recognition rate with and without glasses.
Table 4 .
Comparison of different network reduction ratios.
Table 5 .
Comparison of total recognition rates of different methods.
Table 6 .
Confusion matrix of SETFNet.Labels on left-hand side represent actual classes; those on bottom represent predicted classes.
Table 7 .
Confusion matrix of SETFNet + global.Labels on left-hand side represent actual classes; those on bottom represent predicted classes.
Table 12 .
Comparison of F1 score and MCC of different methods.
Table 13 .
Confusion matrixes of DNG S .
Table 14 .
Confusion matrixes of DNG P .
"year": 2019,
"sha1": "d271d1384cef9f1ee06b3319fe2e15f50b490d2a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/8/4/385/pdf?version=1555406176",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d271d1384cef9f1ee06b3319fe2e15f50b490d2a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Tropical Sprue Presenting with Dementia
Celiac sprue and tropical sprue are malabsorptive diseases with similar clinical manifestations and histological findings. While celiac sprue is a known cause of malabsorption in the Western world, tropical sprue is relatively uncommon and often overlooked by the medical community. Misdiagnosis can result in a delay of adequate treatment in a patient with tropical sprue. We present a patient with tropical sprue who was initially diagnosed with iron-deficiency anemia and dementia. This case aims to increase the awareness of gastroenterologists, who are likely to encounter patients from different parts of the world due to today's globalization.
Introduction
Tropical sprue is an uncommon cause of small bowel malabsorption in the Western world and is often overlooked by the medical community. Misdiagnosis can result in a delay of adequate treatment in a patient with this illness [1-3]. A 61-year-old, originally Nepalese, female patient was sent as a direct referral for an EGD and a colonoscopy for severe "iron deficiency anemia" and a positive FIT test. She presented with a complaint of a 10-lb weight loss, heme-positive stool, dyspepsia, and a decrease in appetite. The patient spent 10 years in a refugee camp in Bhutan before she immigrated to the U.S. In the last 10 years, she had had a significant decline in her energy levels, appetite, and cognitive functions, and was presumptively diagnosed with early-onset Alzheimer's disease. She complained of chronic constipation with intermittent diarrhea every few days, and epigastric burning aggravated by spicy foods. She was given ranitidine, which did not change her complaints. She denied dysphagia, nausea/vomiting, fever, sick contacts, melena, or hematochezia. Despite an interpreter's involvement and clear instructions given about the colonoscopy prep, the patient could not follow the instructions due to her significant memory problems. Her initial physical exam was unremarkable other than a faint murmur in all regions of her chest. Pertinent lab results were: H. pylori stool antigen: negative; stool O & P: negative; WBC: 4,100/uL with a mild lymphocytosis of 51%; hemoglobin: 9.5 g/dL; hematocrit: 27.1%; MCV: 107 fL; RDW: 15.6%; platelets: 105,000/uL. The comprehensive metabolic panel was normal except for BUN: 7 mg/dL, ALT: 34 IU/L, and TSH: 2.730 uIU/mL. An upper endoscopy revealed mild inflammation of the gastric antrum. Flattening and scalloping of the duodenal folds were noted in the entire duodenum, along with some thickening, typical of celiac sprue [4] (Figures 1 & 2).
The low prevalence of celiac sprue in the Nepalese population, and patient's years spent in Southeast Asia raised the possibility of tropical sprue as a diagnosis associated with malabsorption and reversible dementia. Due to her atypical presentation, vitamin B 12 , folic acid and thiamine levels were ordered. To exclude celiac disease, HLA-DQ testing was performed. The results showed that her DQ2 was positive, and DQ8 negative. Her colonoscopy was normal throughout including the terminal ileum. Random biopsies of the colonic mucosa were negative for microscopic colitis. Additional tests revealed: low vitamin B 12 (31pg/mL); Folic acid: 9.1ng/mL; t-Transglutaminase (tTG) IgA <2 U/mL; H. pylori was positive on the gastric biopsy results.
Based on the results, the patient was treated for H. pylori with clarithromycin, pantoprazole, metronidazole, and amoxicillin. For atypical sprue and possible tropical sprue, sulfamethoxazole and trimethoprim was prescribed for the next three months. After the H. pylori treatment, vitamin B12 and two multivitamins daily were prescribed. A gluten-free diet was not prescribed. The patient returned 3 months later for an upper endoscopy. She had improved energy and cognitive function, and she looked a lot younger to people who knew her. The repeat EGD showed only mild duodenopathy without changes consistent with celiac sprue (Figure 3). A follow-up duodenal biopsy showed signs of improvement, including partial villous regeneration and a decrease of intraepithelial lymphocytes.
Discussion
Tropical sprue is caused by inflammation of and damage to the small intestine from a suspected yet unidentified bacterial infection. The inflammation causes malabsorption of nutrients due to the increased swelling of the small intestine. This condition is common in patients who have lived in or visited tropical places for extended periods of time. Tropical sprue may not develop until after the patient has left the tropical area, lagging by up to 10 years. It is still seen in Southeast Asia and the Caribbean, excluding Jamaica. There is no associated specificity for race, gender, or age [1,3].
The most common symptoms of tropical sprue are abdominal cramps, diarrhea, indigestion, irritability, muscle cramps, numbness, and weight loss. Malabsorption of iron, folate, and vitamin B12, and associated deficiencies of vitamins A, D, and K, are also common. Laboratory test abnormalities may include, but are not limited to, the following: macrocytic anemia with low folate and vitamin B12 levels; low levels of serum potassium, iron, and albumin; increased levels of urea; abnormal calcium and phosphate levels; a quantitative 24-hour fecal fat collection of over 6 g; and duodenal/jejunal biopsies revealing incomplete villous atrophy. Differential diagnoses include secondary malabsorption due to helminthic, protozoal, bacterial, or viral infections, Crohn's disease, tuberculosis, pancreatic insufficiency, and HIV-related enteropathy. The most common misdiagnosis is celiac disease, as the malabsorption syndromes are similar, as are the endoscopic findings and small intestinal biopsies [1-3].
In the U.S., tropical sprue is commonly overlooked due to its much lower prevalence compared to celiac sprue. The treatment for tropical sprue begins with the replacement of folate, vitamin B12, iron, and other vitamins and nutrients. Tetracycline and trimethoprim/sulfamethoxazole are the recommended antibiotics for 3 to 6 months. In children, tetracycline is avoided and other antibiotic regimens can be used. Malabsorption due to untreated tropical sprue can lead to improper skeletal maturation and growth failure in children [1,3]. Our patient is doing well after 6 months of total therapy, and currently has no symptoms or any cognitive problems. This particular case demonstrates that tropical sprue still exists and can be observed in Western countries due to immigration and worldwide travel.
Targeting mitochondrial shape: at the heart of cardioprotection
There remains an unmet need to identify novel therapeutic strategies capable of protecting the myocardium against the detrimental effects of acute ischemia–reperfusion injury (IRI), to reduce myocardial infarct (MI) size and prevent the onset of heart failure (HF) following acute myocardial infarction (AMI). In this regard, perturbations in mitochondrial morphology with an imbalance in mitochondrial fusion and fission can disrupt mitochondrial metabolism, calcium homeostasis, and reactive oxygen species production, factors which are all known to be critical determinants of cardiomyocyte death following acute myocardial IRI. As such, therapeutic approaches directed at preserving the morphology and functionality of mitochondria may provide an important strategy for cardioprotection. In this article, we provide an overview of the alterations in mitochondrial morphology which occur in response to acute myocardial IRI, and highlight the emerging therapeutic strategies for targeting mitochondrial shape to preserve mitochondrial function which have the future therapeutic potential to improve health outcomes in patients presenting with AMI.
Introduction
Cardiovascular diseases (CVD) remain the leading causes of death and disability worldwide [216], with acute myocardial infarction (AMI) and the heart failure (HF) that often follows being the main contributors to this healthcare burden [250]. Therefore, novel therapies capable of protecting the myocardium from the detrimental effects of acute ischemia-reperfusion injury (IRI) are needed to reduce myocardial infarct (MI) size and preserve cardiac function to prevent the onset of HF following AMI [103].
Morphological and metabolic alterations in mitochondria are known to be associated with the onset and progression of cardiac diseases, including AMI and HF [145,204]. An imbalance in mitochondrial morphology is known to disturb energy production, mitochondrial reactive oxygen species (ROS) generation, and calcium homeostasis, factors which act in concert to contribute to cardiomyocyte death following acute IRI in the setting of AMI [102,145,204]. The complex signaling pathways underlying mitochondrial morphology may offer potential therapeutic targets for preventing mitochondrial dysfunction following AMI, so much so that finely tuning the balance between fission and fusion to preserve mitochondrial shape may lie at the heart of cardioprotection.
In this article, we review how changes in the balance between mitochondrial fission and fusion affect susceptibility to acute myocardial IRI, and highlight mitochondrial morphology as a therapeutic target for cardioprotection and for potentially improving health outcomes in patients with AMI. Although the focus of this article is on the role of cardiomyocyte mitochondria in IRI and cardioprotection, it must be appreciated that studies investigating the role of mitochondria in the heart at the tissue level may not necessarily be restricting their findings to cardiomyocyte mitochondria, given the presence of non-cardiomyocyte cells such as immune cells, endothelial cells, and fibroblasts.
Mitochondrial morphology in the healthy heart
Normal mitochondrial homeostasis and function are determined by a number of different factors, including mitochondrial structure, location, morphology, biogenesis, and mitophagy.
Mitochondrial structure: membrane and lipid composition
Mitochondria are organelles of endosymbiotic origin that harbor two membranes and two aqueous compartments; the outer mitochondrial membrane (OMM) and the inner mitochondrial membrane (IMM) divide the organelle into an inner boundary membrane and a cristae membrane [114]. Advances in electron microscopy and computer reconstruction algorithms have revealed the cristae to exhibit both tubular and lamellar forms, reflecting their functional specialization in different metabolic microcompartments [114,199]. The OMM and IMM show significant differences in lipid composition and permeability. The lipid-rich OMM is generally permeable to ions and small uncharged molecules through pore-forming membrane proteins. Among its constituents, the OMM has a voltage-dependent anion channel that provides a route for metabolic substrates (e.g., pyruvate, glutamate, and malate) and nucleotides (e.g., ADP and ATP) to gain access to the intermembranous space (IMS) [147,205]. There is no membrane potential across the OMM because of its porosity. Furthermore, the OMM provides a dynamic platform for cell signaling and tethers subcellular compartments to form membrane contact sites, including the endoplasmic reticulum (ER), plasma membrane, lysosomes, peroxisomes, endosomes, and lipid droplets [177,208]. In contrast, the IMM has restricted permeability, with an electrochemical membrane potential (120-180 mV, negative inside) needed to drive oxidative phosphorylation. The IMM contains specific transporters and translocases that facilitate the passage of substrates into the matrix, where they are metabolized by enzymes, including those of the tricarboxylic acid cycle and fatty acid oxidation, as well as antioxidant enzymes [117]. In addition, the IMM contains cardiolipin (CL), the signature phospholipid of energy-transducing membranes, which has been reported in rat heart mitochondria to become oxidized during ischemia and reduced upon reperfusion [112].
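The 120-180 mV potential across the IMM is one component of the total proton-motive force that drives oxidative phosphorylation. As a back-of-the-envelope check, the force can be computed as Δp = Δψ + Z·ΔpH, where Z = 2.303·RT/F. The sketch below assumes an illustrative Δψ of 150 mV (within the cited range) and a ΔpH of 0.8; the ΔpH value and temperature are assumptions for illustration, not figures from this article:

```python
import math

def proton_motive_force(delta_psi_mv, delta_ph, temp_k=310.0):
    """Total proton-motive force in mV: dp = delta_psi + Z * delta_pH,
    where Z = 2.303 * R * T / F (~61.5 mV per pH unit at 37 degrees C)."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    z = 2.303 * R * temp_k / F * 1000.0  # mV per pH unit
    return delta_psi_mv + z * delta_ph

# Illustrative: delta_psi = 150 mV, delta_pH = 0.8 (assumed typical value)
dp = proton_motive_force(150.0, 0.8)  # ~199 mV total driving force
```

At 37 °C, Z is about 61.5 mV per pH unit, so an assumed ΔpH of 0.8 adds roughly 50 mV to the electrical component.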
Mitochondrial distribution in cardiomyocytes
Advancements in live-cell imaging techniques, including 3D reconstruction and electron tomography, have revolutionized our appreciation of the spatial distribution of three distinct subpopulations of mitochondria within cardiomyocytes [186,194,207,221]: interfibrillar mitochondria (IFM), subsarcolemmal mitochondria (SSM), and perinuclear mitochondria (PNM) [221]. Despite their shared cellular environment, these mitochondrial compartments exhibit remarkable heterogeneity in their morphological attributes and biochemical functionalities. This divergence is particularly noticeable in their responses to metabolic and physiological processes.
For instance, these populations demonstrate differences in protein content [70] and redox potentials, signifying variations in their oxidative metabolic activity [186]. IFM in adult ventricular cardiomyocytes are typically oval in shape, are organized in longitudinal rows alongside the myofibrils, and possess a higher rate of substrate oxidation (approximately 1.5 times) than the other two mitochondrial subpopulations [104]. The close proximity of the IFM to the intense energy demands of the myofilaments results in higher levels of substrate oxidation and increased activity of key oxidative phosphorylation enzymes, including succinate dehydrogenase and citrate synthase. The SSM, located directly beneath the sarcolemma, may provide the energy supply for the active sarcolemmal transport of electrolytes and metabolites [23,25]. PNM typically appear more spherical in shape, are distributed around the nucleus in the cardiomyocyte, and provide ATP for nuclear transcription. PNM also regulate various nuclear functions, including modifications of promoters to alter transcriptional complex assembly and mRNA expression [166,207]. Proteomics studies have demonstrated that IFM and SSM possess variations in protein content and synthesis rates [122,127]. Isotopic tracer methods and peptide analysis by liquid chromatography-mass spectrometry (LC-MS/MS) allow the measurement of mitochondrial protein synthesis in vivo [33,70,203]. The heavy water (²H₂O) method with LC-MS/MS analysis has determined that the turnover rates of SSM proteins are faster in mice (average half-life 17 days) [127] than in rats, in which mitochondrial protein turnover was significantly slower (average half-life 30 days) [122]; the faster turnover of SSM protein in mice correlated with their higher metabolic rate. Interestingly, ischemic damage appears to progress more rapidly in the SSM subpopulation when compared to IFM. Additionally, it has been reported that mitochondrial protein synthesis
in SSM subpopulations is 15% faster than in IFM [122]. Interestingly, myocardial protective effects have been shown mainly in SSM subpopulations [139]. Forty-five minutes of ischemia decreased oxidative phosphorylation through cytochrome oxidase in SSM [140]. Most research on cardioprotective therapies that protect the heart against a greater subsequent ischemic insult suggests that the SSM subpopulation is predominantly impacted, owing to its heightened sensitivity to ischemic conditions and Ca²⁺ overload [57]. This is attributed to the environment associated with the subsarcolemmal and extracellular spaces. Furthermore, connexin 43, a protein associated with cardioprotection, is reported to exist solely in the SSM of cardiomyocytes, further hinting at the role of SSM in protection [25].
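The half-life figures quoted above translate directly into first-order turnover rate constants via k = ln 2 / t½. A minimal sketch using the cited values (17 days for mouse SSM proteins, 30 days for rat mitochondrial proteins); the first-order assumption is the standard model underlying such tracer studies, not a claim made in this article:

```python
import math

def turnover_rate_per_day(half_life_days):
    """First-order turnover: k = ln(2) / t_half, i.e. the fraction
    of the protein pool replaced per day."""
    return math.log(2) / half_life_days

k_mouse_ssm = turnover_rate_per_day(17.0)  # mouse SSM proteins, t1/2 = 17 days
k_rat = turnover_rate_per_day(30.0)        # rat mitochondrial proteins, t1/2 = 30 days
ratio = k_mouse_ssm / k_rat                # 30/17, i.e. ~1.76-fold faster in mice
```

A 17-day half-life corresponds to about 4% of the pool being replaced each day, versus about 2.3% per day for the 30-day half-life, consistent with the higher metabolic rate of mice.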
Tracking mitochondrial dynamics by photoactivation of mtPA-GFP has revealed marked differences in mitochondrial fusion and fission between the PNM and IFM populations. The PNM population displays significantly heightened fusion and fission activity compared to the IFM population. Intriguingly, sensitivity to mdivi-1, an inhibitor of mitochondrial fission, was more pronounced in PNM. This is an intriguing phenomenon, especially considering the similarity in the distribution of fusion-fission proteins between the IFM and PNM. These findings suggest that, despite the shared presence of fusion-fission proteins, PNM demonstrate a more dynamic state of fusion and fission [148]. This implies that other regulatory mechanisms or local conditions within the PNM may promote these dynamic activities, shedding light on nuanced differences in the behavior of distinct mitochondrial populations. An improved understanding of these distinct mitochondrial populations may provide a more informed view of cardiac mitochondrial function and regulation, and facilitate the development of targeted therapeutic strategies to address mitochondrial dysfunction in cardiac disorders.
The mitochondrial shaping proteins
Mitochondria are dynamic organelles that constantly change their shape, shifting between a fragmented, disconnected phenotype by undergoing fission and an elongated, interconnected morphology by undergoing fusion, processes that are coordinated by specific proteins. Mitochondrial fusion plays a vital role in the exchange of genetic material between mitochondria, enhancing their functionality and resilience, especially under metabolic and environmental stressors (Fig. 1A). Mitochondrial fission is a fundamental process required for dividing organelles and maintaining their quality through mitophagy, ensuring that they function optimally within the cell [98,178,180]. In the following section, we provide a detailed description of the cellular machinery involved in orchestrating mitochondrial fusion and fission.
Mitochondrial fusion
A series of highly conserved GTPase proteins play vital roles in the dynamics of mitochondrial morphology. Among these, mitofusin 1 (Mfn1) and 2 (Mfn2) are transmembrane GTPases that mediate the fusion process of the OMM [43,46]. Mfn2 plays an additional role as a tether between mitochondria and the endoplasmic reticulum through the interaction of two newly discovered Mfn2 variants, ERMIN2 and ERMIT2 [167]. Both Mfn1 and Mfn2 facilitate the docking of two juxtaposed mitochondria through the oligomerization of their GTPase domains, a process which requires guanosine triphosphate hydrolysis [227]. Recent insights into the topology of mitofusins suggest the existence of only one transmembrane domain in human Mfns, pointing to an alternative mechanism for the oligomerization of Mfn molecules, which is essential for OMM fusion (Fig. 1) [142,201]. Mattie et al. showed that two cysteine residues located within the HR2 domains (situated in the IMS) can undergo oxidation when exposed to elevated levels of oxidized glutathione. This oxidation leads to the formation of disulphide bonds between Mfn molecules, facilitating oligomerization. This represents a crucial step in the understanding of the mitochondrial fusion process [157]. Researchers have also demonstrated that introducing glutathione (GSH) to previously formed glutathione disulphide (GSSG)-induced Mfn2 oligomers reversed oligomerization. This novel mechanism underscores the crucial role of redox signaling in OMM fusion [225]. Further investigations are warranted to fully elucidate the mechanistic underpinnings of mitochondrial fusion and to understand the impact of aberrant redox signaling on this process.
Post-translational modifications, including phosphorylation, ubiquitination, and deacetylation, modulate Mfn activity. For instance, phosphorylation of Mfn1 in the HR1 domain by extracellular signal-regulated kinase (ERK) inhibits mitochondrial fusion, thereby favoring apoptosis. Conversely, the deacetylation of Mfn2 by histone deacetylase 6 activates it, promoting fusion under conditions of glucose deprivation. Moreover, cellular stress induces the phosphorylation of Mfn2 by JNK, which activates an E3 ubiquitin ligase. This ligase ubiquitinates Mfn2, prompting its proteasomal degradation [72]. This intricate interplay of regulatory modifications underscores the complexity of mitochondrial fusion control mechanisms. The degradation of Mfn2 following its ubiquitination triggers mitochondrial fragmentation and increases the risk of apoptotic cell death. Furthermore, Mfn2 can be phosphorylated by PINK1, a modification that paves the way for its ubiquitination by parkin [86,98]. This sequence of events culminates in mitophagy, demonstrating the vital role of these processes in regulating mitochondrial dynamics and cellular health.
Another important member of this conserved GTPase family, which belongs to the dynamin class, is optic atrophy 1 (OPA1) [6]. This protein resides in the IMM facing the intermembrane space, regulates fusion of the IMM [98], and encompasses eight isoforms in humans. These isoforms are generated by alternative splicing [174,243] of three small exons, namely 4, 4b, and 5b, which are located in the N-terminal region of the gene. Each isoform of OPA1 can feature between one and three proteolytic cleavage sites, conventionally labeled S1, S2, and S3 [6]. The S1 site is a common feature of all eight OPA1 isoforms. In contrast, the S2 and S3 sites each appear in only four isoforms, underscoring the variable proteolytic susceptibility of different OPA1 isoforms. Proteolytic cleavage at S1 is regulated by the metalloprotease OMA1. Proteolytic cleavage at sites S2 and S3 is constitutive and is mediated by YME1L [229,243]. Cleavage at the S1 site converts the long, membrane-anchored OPA1 isoforms (L-OPA1) into short forms (S-OPA1) that cannot by themselves support membrane fusion. Currently, the precise mechanism of IMM fusion is not fully understood. Ban et al. demonstrated that when recombinant L-OPA1 was incubated with liposomes containing reconstituted CL, a heterotypic interaction occurred between L-OPA1 and CL, culminating in fusion of the IMM, highlighting the essential role of CL in this process. Further investigation demonstrated that CL is essential for membrane fusion, even when L-OPA1 was present on both sides of the membrane [14,15]. Ban et al.
extended their findings to confirm that the GTPase activity of OPA1 is necessary to maintain fusion activity. This implies the pivotal roles of CL-OPA1 binding and OPA1 GTP hydrolysis in IMM fusion [13]. To expand this understanding, further research is necessary to elucidate the exact roles of CL in IMM fusion and to uncover the underlying molecular mechanisms involved. In addition to the proteolytic modifications of OPA1 orchestrated by YME1L and OMA1, there is an additional layer of regulation. Sirtuin-3 (Sirt3), a NAD-dependent deacetylase, targets the GTPase effector domain of OPA1 at lysine residues 926 and 931. This molecular modification increases the GTPase activity of OPA1, thereby promoting an environment conducive to mitochondrial fusion [262]. Beyond mitochondrial fusion, OPA1 has a central role in controlling cristae shape in the IMM, impinging on mitochondrial metabolism through respiratory chain supercomplex assembly [54,239] and on apoptosis, by blunting cytochrome c release [79]. We recently found that a redox-insensitive mutant of OPA1 dissociates the mitochondrial fusion and cristae-shaping activities of OPA1 [217].
Mitochondrial fission
Mitochondrial fission is a critical cellular process in which a single mitochondrion segregates into two distinct entities. This mechanism serves several essential functions, including appropriate apportioning and inheritance of organelles during cellular division, ensuring an even distribution of mitochondria within the cell, and facilitating mitophagy and the release of cytochrome c (cyt c), a step integral to apoptosis. If fission is inhibited, the balance between fusion and fission shifts, resulting in the accumulation of elongated, damaged mitochondria owing to unopposed fusion activity [98,202,262]. Conversely, the disruption of fusion mechanisms results in an overabundance of fragmented mitochondria. The precise mechanisms underlying this phenomenon remain uncertain; however, one plausible explanation suggests that this fragmentation may be a compensatory measure to maintain a consistent ATP supply within the cell.
In mammals, fission is coordinated by dynamin-related protein 1 (Drp1), fission protein 1 (Fis1), mitochondrial fission factor (MFF), and the mitochondrial dynamics proteins of 49 kDa and 51 kDa (Mid49 and Mid51) [32]. The preliminary phase of mitochondrial division is facilitated by the ER. In this process, ER tubules establish contact with mitochondria, mediating constriction at these sites prior to the recruitment of Drp1. Drp1 is then recruited to the outer mitochondrial membrane by the adaptor proteins MFF, Mid49, Mid51, and Fis1, where it forms a ring-like structure around the mitochondrion, amplifying the existing constriction [132,218]. Subsequently, Drp1 undergoes GTP hydrolysis, leading to the recruitment of dynamin 2 (DNM2) to the site of mitochondrial constriction, where it assembles to complete the division process. However, another perspective indicates that DNM2 may not be required for mitochondrial fission and that Drp1 alone, with its constricting and severing capabilities, might suffice to complete the fission process. Whether complete mitochondrial fission occurs in the absence of DNM2 remains to be elucidated [120]. Constriction of the IMM is a calcium-dependent process that takes place at the point of contact between the mitochondria and ER. This process is initiated by calcium release from the ER into the mitochondria, leading to IMM constriction and division before the recruitment of Drp1. Notably, CL, in addition to its role in mitochondrial fusion, interacts with Drp1. This interaction promotes the oligomerization of Drp1 and stimulates its GTPase activity, thereby increasing the constriction of liposome membranes. Further research is required to understand how CL modulates the balance between mitochondrial fusion and fission, and the triggers for its diverse roles (Fig. 2) [119].
Focal adhesion kinase (FAK) regulates the phosphorylation of Drp1 via extracellular signal-regulated kinases 1 and 2 (Erk1/2) in cardiomyocytes. The FAK-Erk1/2-Drp1 pathway mediates metabolic adaptation in response to changes in the extracellular environment, and inhibiting this pathway reduces ATP levels by 50% [44]. Chang et al. [44] and Ikeda et al. [111] have demonstrated that FAK-Erk1/2-Drp1 Ser-616 signaling is essential for maintaining the basal energy supply of cardiomyocytes. Fibronectin-activated FAK is associated with mitochondrial fission and respiration via Drp1 Ser-616 in cardiomyocytes. However, it has been reported that increased fibronectin expression is associated with cardiac hypertrophy via impaired adrenergic receptors (ARs) [150]. Erk1/2-Drp1 Ser-616 activation has been associated with cardiotoxicity in vitro and in in vivo rat models. Transient receptor potential cation channel subfamily C member 6 (TRPC6) has been correlated with cardiac pathologies, including MI [149], cardiac hypertrophy [172], and fibrosis [175]. TRPC6-Erk1/2-Drp1 activation induces mitochondrial fission and cell death in a rat cardiomyocyte model of anthracycline-induced cardiotoxicity (AIC) [253]. It has also been demonstrated that AR stimulation induces mPTP opening by activating CaMKII via phosphorylation of Drp1 at Ser-616. Inhibiting CaMKII activity or mutating the Ser-616 phosphorylation site rescues cardiomyocytes from death due to mPTP opening [255].
PINK1 is another kinase that phosphorylates Drp1 at the Ser-616 site, thereby regulating mitochondrial fission [91]. Studies have shown that PINK1 overexpression boosts mitochondrial fission via Drp1 Ser-616, which slows the progression of HFpEF (heart failure with preserved ejection fraction). The same research team found that, without PINK1, there is a decrease in genes related to mitochondrial function, membrane potential, and ATP production, pointing to mitochondrial dysfunction. Interestingly, in cells lacking PINK1, the restoration of mitochondrial function was observed with overexpression of Drp1 but not of a Drp1 Ser-616 mutant [224]. Phosphorylation of the Ser-616 site is vital for Drp1's role in regulating mitochondrial fission and overall function. Consequently, PINK1 acts to phosphorylate Drp1 at this specific site, enhancing mitochondrial performance.
The phosphorylation of Ser-637 prevents the translocation of Drp1 to mitochondria and keeps it inactive in the cytosol, thereby preventing mitochondrial fission [261]. The balance of Ser-616 and Ser-637 phosphorylation in Drp1 is not only integral to the function of the protein but is also linked to the onset of various diseases [200]. Interestingly, phosphorylation of Ser-616 alone did not induce mitochondrial fission [4,260]. Given the spatial proximity of Ser-616 and Ser-637 in the three-dimensional structure of Drp1, research has demonstrated that the level of phosphorylation at the Ser-637 site can affect the phosphorylation of Ser-616 [231]. However, the phosphorylation levels at the Ser-637 site were not influenced by the phosphorylation state of Ser-616 [31,200]. This leads to the intriguing possibility that the basal phosphorylation level of Ser-637 could be instrumental in maintaining the basal phosphorylation state of Ser-616, suggesting a priming role for Ser-637 phosphorylation in that of Ser-616 [115,264]. This hypothesis needs to be tested further.
AMPK is an upstream kinase that regulates Drp1 phosphorylation. Intravenous pre-administration of AICAR, an activator of AMPK, improves mitochondrial membrane potential, reduces reactive oxygen species production, and inhibits mitochondrial damage by enhancing the phosphorylation of Drp1 at Ser-637 and inhibiting its phosphorylation at Ser-616 [69]. Recently, studies have indicated that DDAH2 modulates Drp1 activity through nitric oxide synthase (NOS) and subsequent NO generation, leading to Drp1 phosphorylation and mitochondrial fission [108].
Studying mitochondrial morphology
Our understanding of mitochondrial dynamics has been derived mainly from in vitro studies and non-mammalian models, leaving certain aspects of mammalian mitochondrial function not entirely explored. Factors such as tissue type, specific cell population, and even mitochondrial subpopulation can influence mitochondrial structure and behavior. Yet, the ramifications of this diversity are, in many cases, largely undefined. Consequently, the demand for innovative genetic tools capable of meticulously monitoring dynamic fission-fusion events across diverse tissues, developmental stages, and disease conditions such as cardiovascular diseases is increasingly critical. In this section, we discuss various strategies for studying mitochondrial dynamics.
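Beyond genetic reporters, the fission-fusion balance is often quantified from segmented micrographs using simple shape descriptors such as aspect ratio and form factor. The sketch below is a generic illustration of those two standard morphometrics, not a method described in this article; the example values are arbitrary:

```python
import math

def form_factor(area, perimeter):
    """Form factor = perimeter^2 / (4 * pi * area); equals 1.0 for a circle
    and grows for elongated or branched (fused) mitochondrial profiles."""
    return perimeter ** 2 / (4.0 * math.pi * area)

def aspect_ratio(major_axis, minor_axis):
    """Ratio of fitted-ellipse axes: ~1 for fragmented, >>1 for tubular mitochondria."""
    return major_axis / minor_axis

# A unit circle (a fragmented-looking profile) gives the minimum form factor of 1.0
circle_ff = form_factor(area=math.pi, perimeter=2.0 * math.pi)
# An elongated profile (illustrative numbers) scores higher on both metrics
tubule_ff = form_factor(area=4.0, perimeter=12.0)
```

In practice, the area, perimeter, and axis lengths would come from a segmentation library such as scikit-image's `regionprops`; shifts in the population distribution of these metrics are then read as net fission (toward 1) or net fusion (away from 1).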
Photoactivation offers external control over the intensity or color of fluorescent emission. This process enables a distinct group of proteins to be marked and tracked, thereby revealing their subsequent dynamics and interactions within individual cells, tissues, and even whole organisms. This precise level of control and visibility presents new opportunities for the in-depth exploration and understanding of biological processes. Two distinct forms of photoactivation have been observed [53,189,235]. The first involves reversible photo-switching between the fluorescent and non-fluorescent states, which is brought about by isomerization of the chromophore. The second is irreversible photoconversion, which occurs due to light-induced covalent modification [7,184,195,248].
Dendra2 (D2) is a monomeric photoconvertible fluorescent protein originally cloned from the soft coral Dendronephthya sp., with a structure similar to that of the green fluorescent protein from the jellyfish Aequorea victoria (avGFP). Similar to avGFP, the unconverted form of D2 shows a peak excitation at 490 nm and a peak emission at 507 nm [84,185]. However, in D2, short-wavelength light induces a structural photoconversion that shifts the spectral properties to longer wavelengths, with a peak excitation at 553 nm and a peak emission at 573 nm. In contrast to numerous other photo-switchable proteins, the transition from the green (gD2) to the red (rD2) state is irreversible, with the red signal fading solely because of protein degradation. Therefore, D2 fluorescence serves as a robust and enduring marker that enables cells to be tagged and monitored noninvasively across both space and time. The mito-Dendra2 mouse model enables the study of mitochondrial dynamics across a broad range of primary cells and tissues, including disease conditions. Specifically, studies using cardiomyocytes isolated from mito-Dendra2 mouse hearts demonstrated that mitochondria undergo fission in response to simulated ischemia-reperfusion injury (SIRI). Moreover, it has been observed that hydralazine, a drug commonly prescribed to manage hypertension and heart failure, can prevent mitochondrial fission and reduce MI size. This study underscores the potential of the mito-Dendra2 mouse as a powerful tool for understanding and treating conditions associated with mitochondrial dynamics [195,197].
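In photoconversion-based fusion assays with probes like mito-Dendra2, a converted (red) mitochondrion that fuses with unconverted (green) neighbors shows dilution of its red signal as matrix contents mix. A simple per-object dilution index can be computed as below; the index definition and the intensity values are illustrative assumptions, not taken from the cited studies:

```python
def conversion_index(red, green):
    """Fraction of photoconverted signal per mitochondrion: red / (red + green).
    Fusion with unconverted neighbors pulls this value down toward the
    network-wide average; a stable value over time suggests little fusion."""
    total = red + green
    return red / total if total > 0 else 0.0

# Illustrative intensities before and after a fusion event
freshly_converted = conversion_index(red=900.0, green=100.0)  # ~0.9
after_fusion = conversion_index(red=450.0, green=550.0)       # ~0.45, diluted
```

Tracking this index for many photoconverted objects over time yields a population-level fusion rate, which is how photoconvertible reporters turn a qualitative color change into a quantitative dynamics readout.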
Moreover, a diverse range of mitochondrial biosensors have been developed to monitor various processes. These include energy production, generation of ROS, redox state, secondary messenger activities (such as those involving cAMP or Ca²⁺), and Zn²⁺ homeostasis. Most of these markers emit fluorescence in the blue and green spectral range. Therefore, the scarcity of ubiquitous mitochondrial reporters that function outside the blue/green/red color spectrum restricts the ability of researchers to track mitochondrial dynamics and other processes visualized using different biosensors [131].
Mito::mKate2, a far-red FP, has the unique capability of being observed simultaneously with traditional fluorescent markers (including GFP, YFP, CFP, and DsRed), as well as with mitochondria-specific biosensors. mito::mKate2 is an effective tool for tracking mitochondrial behavior and cell cycle changes during embryonic development and in adult tissues in mice. The superior brightness and photostability of far-red FPs, such as mito::mKate2, permit a deeper imaging scope than traditional green and red markers. Consequently, mito::mKate2 is better suited for in vivo and ex vivo imaging of mitochondrial activity in living tissues [16].
In parallel with other scientific fields, cutting-edge technologies such as genomics, proteomics, transcriptomics, metabolomics, and epigenomics have spearheaded revolutionary discoveries in mitochondrial biology. The deployment of compartment-specific sensors and techniques for assessing mitochondrial respiration in intact cells has greatly augmented our understanding of mitochondrial physiology. However, despite these knowledge gains, we still face the challenge of comprehensively characterizing and understanding the role of the mitochondria in cardiac diseases. Consequently, it is imperative to expedite the development and application of enabling tools and technologies to bridge the gap between basic discoveries and their translation into clinical practice.
Mitochondrial dynamics in cardiac diseases
An imbalance in mitochondrial morphology can impact energy and mitochondrial ROS production, Ca²⁺ homeostasis, and protein stability, potentially inducing cardiomyocyte death in a variety of cardiac diseases. In this section, we discuss alterations in mitochondrial morphology linked to cardiac diseases and explore prospective therapeutic strategies aimed at counteracting mitochondrial dysfunction. Such strategies show substantial promise for the prevention and treatment of a range of cardiac conditions.
Changes in mitochondrial morphology in acute myocardial ischemia-reperfusion injury (IRI)
Currently, the most effective therapeutic intervention for reducing acute myocardial IRI and limiting MI size in AMI patients is timely and effective myocardial reperfusion using either thrombolytic therapy or primary percutaneous coronary intervention (PPCI). However, myocardial reperfusion itself can induce further cardiomyocyte death, a phenomenon known as acute myocardial IRI [101]. The sequence of events that occurs during IRI has been extensively explored and detailed in several recent reviews [96,179,181,204].
Mitochondria can trigger cell death in cardiomyocytes via two main pathways. The first involves excessive permeabilization of the OMM, leading to leakage of cyt c into the cytoplasm. Cytochrome c activates caspase-9, which initiates the cleavage of caspase-3. This pathway is characterized by a reduction in mitochondrial membrane potential, increased levels of ROS, increased BAX expression, and decreased Bcl-2 expression: a classical route to mitochondria-induced apoptosis [59,87]. It has been reported that Drp1 acts with Bcl-2 family proteins to accelerate mitochondrial fragmentation and apoptosis. During IRI, Drp1 is recruited to the OMM, instigating the division of these organelles. Several post-translational modifications can affect Drp1 fission activity. In particular, Drp1 phosphorylation at Ser-616 increases its translocation toward the OMM, increasing mitochondrial fragmentation and mitochondrial ROS generation. Concurrently, cyt c is discharged into the cytoplasm, which triggers an inflammatory response and initiates cell apoptosis [126,220,221]. The second pathway is triggered by the sustained opening of the mitochondrial permeability transition pore (mPTP) due to the formation of a non-selective pore in the IMM, whose molecular composition is still debated. Prolonged opening of the mPTP induces mitochondrial swelling, collapse of the mitochondrial membrane potential, and impairment of oxidative phosphorylation, leading ultimately to cell death by necrosis (Fig. 1B) [204].
Mitochondria have been demonstrated to undergo fragmentation during acute myocardial IRI. A study conducted by Ong S. and colleagues revealed that overexpression of Mfn1, Mfn2, or the dominant-negative mutant of Drp1 (Drp1 K38A) to induce mitochondrial elongation delayed the opening of the mPTP and significantly reduced cell death after simulated IRI (SIRI) in HL-1 cells [182]. Drp1 activation and excessive mitochondrial fission have been observed in peri-infarcted regions of mouse hearts during the initial phase of ischemia [173] and are sustained throughout reperfusion [67]. Pharmacological Drp1 inhibition protected adult CMs against simulated IRI, inhibited mPTP opening, and reduced MI size in an in vivo murine model [173].
Mfn2 plays a pivotal role in IRI and heart failure (HF), given its ability to regulate mitochondrial fusion, ER-mitochondria interaction, cellular metabolism, and cell death. Some studies have suggested that Mfn2 overexpression in heart diseases, such as HF and myocardial ischemia, can mitigate cardiac hypertrophy and dysfunction under various stressors. Conversely, other studies have indicated that deletion of Mfn2 in cardiomyocytes could confer protection against IRI. Thus, there is a pressing need for further research to delve deeper into the detailed molecular mechanisms of Mfn2 in cardiovascular diseases, as it may reveal a potential therapeutic target for patients [50]. Interestingly, acute genetic ablation of both Mfn1 and Mfn2 in murine cardiomyocytes paradoxically reduced MI size following IRI, although one might have expected MI size to increase due to unopposed mitochondrial fission [89]. The apparent explanation is that the non-fusion pleiotropic role of Mfn2 as a tethering protein between the sarcoplasmic reticulum (SR) and mitochondria has a more dominant effect than its role in fusion. Therefore, genetic ablation of Mfn2 protected the mitochondria against mPTP opening and mitochondrial dysfunction by disrupting the association between mitochondria and SR and reducing mitochondrial calcium overload. However, it is worth noting that while acute ablation of Mfn1 and Mfn2 offers protection against acute IRI, long-term ablation of these proteins could be detrimental, leading to cardiomyopathy and sudden cardiac death [89].
OPA1 also plays a central role in IRI. A mouse model expressing an increased level of OPA1 displayed protection against cardiac ischemia-reperfusion injury by blunting cristae remodeling and preventing cell death [242]. During ischemia-reperfusion, OPA1 undergoes proteolytic cleavage, resulting in the loss of its activity. OPA1 deficiency has been associated with increased sensitivity to IRI and an imbalance in mitochondrial Ca2+ uptake [135]. Moreover, the increased ROS production occurring during ischemia-reperfusion injury leads to cysteine oxidation of OPA1, contributing to mitochondrial damage and cell death [217].
Mitochondrial dynamics in heart failure
Heart failure has emerged as a significant health crisis worldwide, particularly in the elderly population. It represents the final stage of a range of cardiovascular diseases and is distinguished by its high incidence rate, frequent hospitalization, and elevated mortality [212,233]. Heart failure can primarily be categorized into ischemic and non-ischemic types [143]. Ischemic HF is closely linked to coronary artery disease, particularly myocardial infarction, and constitutes approximately 50% of all cases of HF [164].
During the pathological development of HF, cardiomyocytes undergo alterations in their energy metabolism. This transition manifests as an increased dependence on glucose with simultaneously diminished utilization of fatty acids through beta-oxidation. This metabolic reconfiguration of substrate utilization prompts a shift in cardiac metabolism, reverting it to a state reminiscent of fetal energy metabolism. When glucose serves as the substrate for energy production, the associated oxygen consumption is reduced compared to when fatty acids are utilized [17]. Severe hypoxia profoundly impairs oxidative phosphorylation. The shift to anaerobic glycolysis fails to produce sufficient ATP to satisfy the energy demands of the heart and leads to lactate accumulation. The depletion of ATP and resultant acidosis contribute to reduced myocardial contractility and damage to membrane pumps and ion channels. These alterations trigger mitochondrial swelling, accumulation of Ca2+, and opening of the mPTP. These changes are particularly noticeable in cardiomyocytes affected by myocardial ischemic injury. The role of the mPTP has been highlighted as critical in various forms of cell death associated with myocardial IRI [22,55,258]. Indeed, endocardial biopsies from 48 patients, comprising a total of 66 samples, were examined. These patients were diagnosed with cardiomyopathy, a specific form of HF. The analyses revealed diverse cellular architectures, with a notable frequency of changes involving the mitochondria. Certain cells exhibited notable mitochondrial changes, including a significant increase in their number, thinning of their matrices, and the occasional emergence of unusually large mitochondria. Initial biochemical investigations were conducted using tissue homogenates from explanted hearts, bypassing the use of isolated mitochondria. These homogenate studies revealed a considerable decrease in both creatine and creatine phosphate levels, implying the depletion of the energy reserves of the cells [10].
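The oxygen-sparing advantage of glucose over fatty acid oxidation mentioned above can be illustrated with textbook stoichiometry. The ATP yields below are common approximations (not values from the source) and vary with the assumed P/O ratios, so this is only an order-of-magnitude sketch:

```python
# ATP yield per O2 for complete oxidation (approximate textbook values;
# exact numbers depend on assumed P/O ratios -- illustrative only).
glucose_atp, glucose_o2 = 31, 6        # ~31 ATP per glucose, 6 O2 consumed
palmitate_atp, palmitate_o2 = 106, 23  # ~106 ATP per palmitate, 23 O2 consumed

glucose_eff = glucose_atp / glucose_o2      # ATP produced per O2
palmitate_eff = palmitate_atp / palmitate_o2
advantage = (glucose_eff / palmitate_eff - 1) * 100

print(f"glucose:   {glucose_eff:.2f} ATP/O2")
print(f"palmitate: {palmitate_eff:.2f} ATP/O2")
print(f"glucose yields ~{advantage:.0f}% more ATP per O2 than palmitate")
```

Under these assumptions glucose delivers roughly 10-15% more ATP per molecule of oxygen, which is why the hypoxic or failing heart's shift toward glucose is often framed as oxygen-efficient, even though total ATP output still falls short of demand.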
Downregulation of Mfn2, a key regulator of mitochondrial dynamics, causes mitochondrial fragmentation, which contributes to the onset of HF. This has been noted both in rat models and in patients with pulmonary arterial hypertension (PAH) [215]. In a study conducted by Chen L and colleagues, significant mitochondrial fragmentation was observed in adult Sprague-Dawley (SD) rats with HF post-myocardial infarction. Despite steady levels of OPA1 mRNA, there was a noticeable decline in OPA1 protein content, whereas the protein contents of Mfn1 and Mfn2 remained unaltered [49]. Other studies have shown that in the HF dog model, mitochondrial fission and fusion proteins in the left ventricular myocardium are dysregulated; the expression levels of Drp1 and Fis1 were significantly upregulated. Research has indicated that mice with mutations in the Mff gene experienced mortality at 13 weeks, attributable to HF induced by severe dilated cardiomyopathy. The mutant tissues presented a reduction in mitochondrial density and respiratory chain activity while exhibiting an increase in mitochondrial size. These findings suggest that Mff-mediated mitochondrial fission could potentially contribute to the progression of HF [47]. Similarly, a study confirmed that homozygous Mff-deficient (Mffgt) mice exhibited a smaller MI size, restored cardiac function, improved blood flow, and reduced microcirculatory perfusion defects [273].
The inclination toward mitochondrial fission in HF presents several challenges. Primarily, it reduces the number of functional mitochondria within the cell, which may impair cardiac function owing to diminished ATP production. Second, escalated fission may result in the generation of small, dysfunctional mitochondria. These underperforming mitochondria are not only inefficient in ATP production but also more likely to release ROS, which can oxidatively damage DNA, proteins, and lipids. Along with increased fission, HF induces a decrease in mitochondrial fusion. This decrease in fusion can result in a reduced size and number of mitochondria within the cell and foster the aggregation of mitochondria into large clumps. These sizeable mitochondrial aggregations are less efficient in ATP production and more prone to ROS release.
Research into mitochondrial dynamics in HF is currently in its nascent stage, and findings have been mixed, primarily because of two main challenges. First, a complex dynamic system encompassing mitochondrial dynamics and metabolism is involved in the progression of HF, complicating comprehensive study of mitochondrial function in HF. Second, mitochondrial dynamics play varying roles in different stages of HF and are influenced by a plethora of pathological conditions. Therefore, more in-depth research is necessary to elucidate the mechanisms underlying mitochondrial dynamics in HF. Such insights could potentially identify critical timing and novel molecular targets, paving the way for the development of innovative therapies for HF. Finally, targeting mitochondrial morphology as a therapeutic strategy for HF will be challenging, as it will require chronic treatment, which can itself induce adverse effects such as cardiomyopathy with prolonged inhibition of mitochondrial fission.
Diabetic cardiomyopathy (DMCM)
Despite growing evidence highlighting the functional and structural alterations in the myocardium as a consequence of diabetes, the underlying pathological mechanisms, particularly in type 2 diabetic cardiomyopathy (DMCM), remain unclear [249,256]. DMCM is characterized by abnormal myocardial structure and function in individuals with diabetes, occurring independently of other cardiac risk factors, such as coronary artery disease, hypertension, or significant valvular disease [63,193,214]. Recent studies have shown that mitochondrial oxidative damage, mitochondrial dysfunction [76], and diminished cardiomyocyte function [116,188] are observed in diabetic hearts and contribute to DMCM development. Reduced Mfn2 expression and excessive mitochondrial fission have been demonstrated in diabetic hearts, resulting in mitochondrial dysfunction and DMCM. db/db mouse hearts showed reduced Mfn2 expression and impaired cardiac function at 12 weeks of age compared with db/+ mice [76]. Mitochondrial morphological abnormalities, mitochondrial dysfunction, and disrupted Ca2+ handling contribute to the development of DMCM [50,66]. In patients with DMCM, cardiomyocytes exhibit a range of detrimental changes, including fragmented mitochondria and decreased expression of Mfn1. Interestingly, Mfn1 expression was inversely correlated with HbA1c levels, a critical marker of long-term blood glucose control [163,246]. Bach D et al. showed that mitochondrial fission activity was higher in the hearts of db/db mice with type 2 diabetes mellitus, possibly due to diminished Mfn2 expression. The protective effects of Mfn2 in high-glucose and high-fat medium (HG/HF)-treated cardiomyocytes were blunted by the fission activator FCCP, while an Mfn2 activator restored mitochondrial fusion and exerted protective effects in Mfn2-knockdown CMs, suggesting that imbalanced mitochondrial dynamics induced by downregulated Mfn2 could be the main cause of cardiac dysfunction in diabetic hearts [106]. This was likely linked to a reduction in the expression of peroxisome proliferator-activated receptor α (PPARα) and a subsequent decrease in PPARα binding to the Mfn2 promoter. Given that mitochondrial dynamics serve as the foundation of mitochondrial function, more in-depth investigations are warranted to devise effective interventions targeting mitochondrial fusion and fission in diabetes to retard DMCM progression [12,106].
Hypertension
Hypertension is closely linked to endothelial dysfunction and structural remodeling. Oxidative stress, which is considered a key player in both disease progression and aging, emanates primarily from the mitochondria, which are also major targets of ROS [134,165]. Impairments in mitochondrial biogenesis and dynamics can significantly undermine the bioenergetic supply, thereby contributing to endothelial dysfunction and the development of cardiovascular diseases [142]. Activation of the sympathetic nervous system has been recognized as a pivotal factor in the development of hypertension among obese individuals. It also plays a critical role in driving the cardiac remodeling processes that occur in association with hypertension. Norepinephrine initiates cardiomyocyte hypertrophy by activating specific signaling cascades, particularly the calcium-activated protein phosphatase calcineurin. In hypertensive rats, there was a notable decrease in the mRNA levels of the fusion proteins Mfn1, Mfn2, and OPA1 [230]. This suggests a tendency toward increased mitochondrial fragmentation during hypertension. In relation to this, studies on cultured neonatal rat cardiomyocytes treated with norepinephrine have shown that it stimulates mitochondrial fission. This event is associated with a decline in mean mitochondrial volume and an increase in the relative number of mitochondria per cell [192]. This change is driven by the norepinephrine-mediated elevation of cytoplasmic Ca2+, which in turn activates calcineurin, promoting the relocation of the fission protein Drp1 to the mitochondria. A mutation in Drp1 has been linked to cardiomyopathy, highlighting the essential role of Drp1-mediated processes in preserving normal cardiac function [230,237]. These findings have led to the speculation that norepinephrine might stimulate mitochondrial fission as a compensatory mechanism to uphold heart contractility under hypertensive conditions, potentially leading to thickening of the ventricular wall. Therefore, it has been proposed that curbing Drp1-mediated mitochondrial fission may help prevent the progression of cardiac pathologies. However, experimental observations also suggest that total loss of Drp1 function may have adverse effects.
Another critical aspect of norepinephrine-induced mitochondrial fission is its connection to both ROS production and cellular apoptosis. It is widely accepted that cyt c is released through Bax-lined pores at sites of Drp1-mediated mitochondrial fission, triggering cellular apoptosis [220]. Notably, in the context of hypertension-related left ventricular hypertrophy, both ROS production and myocardial cellular apoptosis are commonly theorized as mechanisms implicated in the onset and progression of the disease. Furthermore, as previously mentioned, hypertension-induced mitochondrial alterations are also associated with changes in mitochondrial energy metabolism, including diminished respiration and ATP production. It has been proposed that, while fusion enhances respiratory efficiency, mitochondrial fission is linked to a decline in oxidative metabolism.
The process of cytosolic Drp1 recruitment to mitochondria during fission is complex and is regulated by post-translational modifications of Drp1. One such regulatory modification is phosphorylation by cyclic AMP-dependent protein kinase A (PKA) at Ser-637 in the GTPase effector domain of Drp1. This action mitigates Drp1 GTPase activity, inhibiting mitochondrial fission. In cardiomyocytes, after a 48-h incubation period with norepinephrine, a reduction in the phosphorylation of Drp1 at Ser-637 was observed. This finding supports the notion that norepinephrine induces mitochondrial fission in cardiomyocytes. Mitochondria predominantly produce ROS, notably superoxide and hydrogen peroxide, which are critical contributors to cellular damage, functional impairment, tissue enlargement, and inflammation in various organs. Hypertension is intricately linked to the reduction and inactivation of the crucial mitochondrial enzyme sirtuin-3, which plays a pivotal role in the management of essential metabolic processes. The absence of sirtuin-3 can precipitate the onset of hypertension and stimulate the progression of cardiac fibrosis, a condition characterized by excess fibrous connective tissue in the heart [64].
Ang II treatment has been observed to significantly enhance the protein expression of Drp1 while inhibiting OPA1 expression in HUVECs. This disruption in mitochondrial dynamics results in cell apoptosis, a process that acacetin can counteract by readjusting the protein expression of Drp1 and OPA1. Other studies have reported similar findings, where Ang II provokes the phosphorylation of Drp1 and induces mitochondrial fission in abdominal aortic VSMCs and adventitial fibroblasts, conditions that can be thwarted through Drp1 silencing [62,107,209].
Obesity
Metabolically unhealthy obesity is linked to an increased risk of obesity-related cardiovascular conditions and overall mortality [19]. The primary cause of obesity is typically identified as an energy imbalance that occurs when calorie intake exceeds the number of calories burned [20,61,144]. Evidence suggests that the overconsumption of nutrients can have a detrimental impact on mitochondrial function. Studies have shown that obesity is associated with mitochondrial dysfunction. Introducing chemical uncouplers, such as FCCP or CCCP, triggers complete fragmentation of the mitochondrial network, recruitment of Drp1 to the outer membrane, and degradation of OPA1 [241,243]. Furthermore, recent research has demonstrated that CCCP-induced depolarization triggers proteasome-dependent degradation of other mitochondrial fusion proteins, including Mfn1 and Mfn2, as well as other outer membrane proteins. Notably, proteasome-dependent degradation of mitofusins necessitates overexpression of the E3-ubiquitin-ligase Parkin [42,137,240]. Consistent with this, uncouplers can simulate conditions of excessive nutrient availability, thereby augmenting nutrient oxidation and electron transport chain activity, as observed in activated brown fat or beta cells. Based on this concept, studies involving beta cells subjected to excess nutrients, or conditions that decouple the mitochondria under physiological stimuli, have demonstrated an upsurge in respiration and pronounced fragmentation within the mitochondrial network [162]. Abundant evidence from both clinical and experimental environments has substantiated the role of obesity in the development of cardiovascular diseases, including HF.
Obesity also influences the structure and pumping efficiency of the myocardium, which are notable characteristics of obesity-induced cardiomyopathy [9,36,88,183]. In recent decades, substantial efforts have been made to decipher the intricacies of mitochondrial biogenesis, dynamics, and quality control, and their roles in advancing obesity-associated cardiomyocyte dysfunction. Mitochondrial proliferation was increased in db/db hearts [30]. A notable morphological shift from a mitochondrial network to fragmented mitochondria has been observed in cardiomyocytes affected by obesity. In neonatal rat cardiomyocytes, initial exposure to palmitate triggers an increase in mitochondrial respiration along with heightened mitochondrial polarization and ATP generation. However, prolonged exposure to palmitate (beyond 8 h) produced ROS and induced mitochondrial fission [241]. The occurrence of cardiomyocyte apoptosis and cardiac dysfunction caused by lipid overload may be attributed to changes in post-translational modifications of proteins involved in mitochondrial fission and fusion, including increased ubiquitination of A-Kinase Anchor Protein 121 (AKAP121), Drp1, and OPA1. The mitochondria and ER are interlinked organelles. Many proteins have been proposed to bind these two structures at specific locations, known as mitochondria-associated ER membranes (MAMs). Interestingly, although the disruption of MAMs leads to irregular calcium signaling and cardiac anomalies, a recent study suggested that excess glucose triggers FUNDC1-mediated mitochondria-associated membrane formation and mitochondrial calcium overload in cardiomyocytes, resulting in functional cardiac abnormalities [251,252].
Research has confirmed cardiac structure and function alterations in cases of both genetically predisposed and diet-induced obesity [129]. Current understanding of the mechanisms underlying obesity-induced cardiomyopathy includes metabolic disruptions (such as insulin resistance, abnormal glucose transport, increased fatty acids, lipotoxicity, and amino acid imbalance), changes in intracellular calcium homeostasis, oxidative stress, impaired autophagy regulation, myocardial fibrosis, and cardiac autonomic neuropathy (manifesting as either denervation or overflow of adrenergic and renin-angiotensin-aldosterone activity). Furthermore, factors such as inflammation, small coronary vessel disease (microangiopathy), impaired coronary flow reserve, coronary artery endothelial dysfunction, and epigenetic modifications contribute to the pathogenesis of obesity-induced cardiomyopathy. Although practical targeted medications and procedures are still lacking, a substantial body of research has been devoted to managing obesity-induced cardiomyopathy. Non-pharmacological interventions, such as lifestyle modifications including regular exercise and dietary regulation, could also prove beneficial for cardiac health in individuals with obesity.
Aging
Although aging itself is not classified as a disease, it notably affects the functionality of cardiac mitochondria. The research presents differing views, with some studies suggesting a reduction in the number of mitochondria present within the cytoplasm of aged cardiac muscle cells, while others propose that the fraction of cellular volume occupied by mitochondria remains stable throughout the aging process. With age, the form of the mitochondria changes, becoming less elongated and more spherical. In addition, the surface area of the IMM in aging heart muscle decreases, although the structure of the cristae remains unaffected [24,56,73,196]. Generally, older hearts demonstrate less responsiveness to cardioprotective treatments than younger hearts, with all other factors being constant. Aging, a primary risk factor for HF, is linked to the deterioration of nuclear and mitochondrial genetic integrity due to telomere shortening [226]. This process is counteracted by the enzyme telomerase reverse transcriptase. Subsarcolemmal mitochondria (SSM) isolated from the heart muscles of aged rodents largely preserve their respiratory ability. However, interfibrillar mitochondria (IFM) exhibit reduced oxygen consumption with age. This decrease in oxygen consumption aligns with the observed decline in the activity of electron transport chain complexes in IFM; respiratory complex III and IV activities in the IFM of aging heart muscles were diminished. Remarkably, mitochondrial function remains largely intact in aged cardiomyocytes even when their outer membranes are disrupted [74,138]. This age-related decrease in mitochondrial function could influence cellular energy generation, consequently affecting cardiac function. While ATP levels may remain steady in the resting state, evidence from various studies indicates a possible reduction in either ATP content or production [171].
A recently engineered Mito-Timer mouse model revealed a heterogeneous distribution of newly synthesized and aged mitochondria within the heart [234]. Upon examining the expression of proteins integral to mitochondrial fusion and fission, a decrease in the levels of Mfn1 and Mfn2 was observed with age; however, this study found that aging did not affect OPA1 and Drp1 protein levels [271]. In contrast, a study by Ljubicic et al. showed increased expression of OPA1 and Drp1 with age [146]. Hearts deficient in Mfn2 demonstrated a build-up of impaired mitochondria, eventually leading to HF. However, moderate catalase expression, explicitly targeted to the mitochondria, normalized ROS production and mitigated structural alterations in Mfn2-deficient hearts. Interestingly, high levels of mitochondrial catalase did not improve mitochondrial function or HF. These data imply that no simple dose-effect relationship exists between local ROS formation and cardiac degeneration [228].
Progress in mitigating aging-induced health complications will likely hinge on a deeper understanding of the mechanisms that drive aging. In particular, focusing on systems that maintain mitochondrial homeostasis could offer strategies to address mitochondrial damage with aging.
Targeting mitochondrial morphology in acute myocardial ischemia-reperfusion injury
Currently, therapeutic strategies against acute myocardial IRI that target the mitochondria are mainly focused on the prevention of mitochondrial ROS production and Ca2+ overload [98]. In this section, we review cardioprotective interventions aimed at preserving mitochondrial morphology and functionality, which may provide new treatment strategies for reducing MI size and preventing HF post-AMI.
Exercise
Exercise is a nonpharmacological strategy that promotes health and serves as a key strategy for preventing age-related diseases [78,198]. Notably, exercise induces temporary modifications in the functionality and metabolism of the mitochondria [124]. The influence of exercise training on energy production and its subsequent effects on mitochondrial and metabolic processes have been comprehensively studied. As these adaptations provide insights into the pivotal role of mitochondria in exercise-induced cardioprotection, we highlight the protective effects of exercise training on cardiac mitochondria in the following section [81,109].
The metabolic profile of the heart is altered by moderate exercise compared to the sedentary state. An exercise-trained heart is distinguished by its heightened capacity for fatty acid and glucose oxidation paired with a reduced rate of glycolysis, and it has a superior capacity to adjust its metabolism in response to acute stress. This adaptability stems from the elevated expression of AMPK, peroxisome proliferator-activated receptor-γ coactivator-1 alpha (PGC-1α), and phosphoinositide 3-kinase (PI3K), all of which enhance fatty acid and glucose oxidation, glucose uptake, and the formation of new mitochondria. Acute bouts of exercise have been linked to increased production of ROS, primarily as by-products of the electron transport chain [35]. Alleman et al. discovered that energetic mitochondrial recovery, characterized by the oxygen consumption rates following hypoxia-reoxygenation, was enhanced in animals subjected to exercise compared to sedentary counterparts. They also found that the ratio of mitochondrial hydrogen peroxide (H2O2) production to oxygen consumption was twice as high in mitochondria sourced from sedentary animals as in those from exercised animals. This implies that exercise training may reduce ROS production relative to oxygen consumption. This finding is consistent with other studies, which indicate that exercise training can curb the disruption of the respiratory control ratio in mitochondria exposed to hypoxia-reoxygenation in vitro [5,8]. Exercise training influences the redox state of cardiac cells and the regulation of Ca2+ homeostasis, which could indirectly reduce the susceptibility of mitochondria to IRI. Repeated bouts of endurance exercise protect against IRI arrhythmias, myocardial stunning, and myocardial infarction. Interestingly, only 3-5 consecutive days of endurance exercise are required to achieve a significant level of cardioprotection against IRI [90,110]. In this line, it has been demonstrated that moderate interval training improved the balance of mitochondrial fusion and fission in male rats with myocardial infarction, increasing Mfn2 and PGC-1α and reducing Drp1 (Fig. 2) [11]. However, prolonged exercise resulted in a significant reduction in the gene expression of Mfn1 and Mfn2 and an increase in the expression of Fis1 in the skeletal muscle of male rats. The magnitude of these alterations was dependent on exercise duration. These findings suggest that mitochondrial fusion and fission protein expression is rapidly altered in response to changing energy demand [65]. The direct effect of exercise on mitochondrial dynamics in the heart remains controversial. Future research should aim to discern how exercise training can influence (1) the regulation of mitochondrial dynamics, (2) the control of Ca2+ handling and mPTP opening, (3) the intricate interplay between inflammation and mitochondria, and (4) the interaction between mitochondria and redox signaling induced by exercise. This could enhance our understanding of cardioprotective mechanisms and pave the way for the discovery of novel cardioprotective pathways.
Caloric restriction
Another nonpharmacological strategy that promotes health is the reduction of caloric intake without compromising nutritional needs, also known as caloric restriction (CR) [219]. CR represents the most potent and thoroughly researched dietary intervention across a multitude of non-human species [247]. Furthermore, CR has also been demonstrated to confer broad health benefits in humans, whether dietary restriction is adopted by choice or through unavoidable circumstances. A recent meta-analysis of randomized human trials showed that caloric restriction was associated with a reduction in cardiovascular risk, along with a significant decrease in both blood pressure and heart rate [128]. Mild-to-moderate CR has been found to alleviate cardiac dysfunction in various experimental scenarios, including cardiomyocyte hypertrophy, cardiac fibrosis, inflammation, and mitochondrial damage in middle-aged and aged mice [21,169,170].
Short- and long-term caloric restriction also offered protective benefits against acute myocardial IRI [71,123,161,211,222], and ischemic conditioning mitigates post-ischemic dysfunction in isolated perfused hearts from food-restricted aging rats [2]. However, this effect was not observed in the hearts of aging rats fed ad libitum [2,3]. Recent studies have indicated that caloric restriction does not alter the susceptibility to mPTP opening in mitochondria isolated from cardiac muscle [219]. In this line, it is important to recognize that the process of isolating mitochondria from tissues can alter their morphology and distribution within cells [133]. This underscores the necessity for more sophisticated tools and standardized experimental models specifically tailored for studying mitochondria in the context of cardiovascular diseases. Conversely, research has indicated that caloric restriction enhances the expression of Mfn2 in various organs [41]. During nutrient deprivation, protein kinase A (PKA) is activated and phosphorylates Drp1, keeping the latter within the cytoplasm and thereby maintaining mitochondrial fusion [85].
The widespread adoption of caloric restriction appears improbable given the challenge of sustaining long-term CR in contemporary society. Therefore, initiatives are underway to devise pharmacological alternatives that replicate the effects of CR, including metformin [18], resveratrol [45], and rapamycin [34,125]. These substances, known as caloric restriction mimetics, can confer the advantageous metabolic, hormonal, and physiological effects of CR without necessitating a change in dietary intake.
Pharmacological modulators of fission
Altered cardiac mitochondrial dynamics with excessive fission are the predominant cause of cardiac dysfunction during IRI.Therefore, several studies have explored the pharmaceutical means for modulating mitochondrial fusion and fission, specifically by manipulating Mfn1, Mnf2, and Drp1 (Table 1, Fig. 3) [98,154,180,181].Among the available inhibitors, mdivi-1, a quinazoline derivative, is the most extensively studied reversible allosteric inhibitor of Drp1.Mdivi-1 has been demonstrated to effectively inhibit the GTPase activity of Dnm1, a yeast counterpart of Drp1.Its inhibitory activity was observed at a half-maximal inhibitory concentration (IC50) ranging between 1 and 10 μM, indicating its potent inhibitory effect [38].Ong et al. were one of the first to demonstrate cardioprotection with pharmacological Drp1 inhibition with mdivi-1.Forty minutes of pre-treatment with 50 μmol/L of mdivi-1 decreased mPTP sensitivity and decreased cell death after SIRI in murine cardiomyocytes.A single intravenous bolus of mdivi-1 (1.2 mg/Kg) administered 10 min before acute coronary occlusion significantly reduced myocardial infarct size [182].In another study by Maneechote et al. investigated the effects of inhibiting mitochondrial fission using mdivi-1.This was performed at three distinct time frames: prior to ischemia, throughout the ischemic phase, and at the initiation of reperfusion, all within the rat cardiac IRI model.The results indicated the most pronounced improvement in cardiac performance when mdivi-1 treatment was implemented before ischemia, which was accompanied by a marked decrease in mitochondrial fragmentation and a notable increase in mitochondrial functionality.Although the administration of mdivi-1 during ischemia and at the onset of reperfusion also resulted in cardiac function enhancement, the level of improvement was comparatively lower than that achieved with the pre-ischemia treatment strategy.Maneechote et al. 
proposed that the protective effect exerted by mdivi-1 on the left ventricle during IRI might be attributed to its ability to enhance mitochondrial function. They argued that this enhancement was achieved by attenuating excessive mitochondrial fission, which in turn mitigates cardiomyocyte death [152]. These preclinical studies indicate the considerable therapeutic potential of Drp1 inhibition. However, the specificity of mdivi-1 has been questioned, highlighting the need for further investigation to validate its selective inhibitory effects [26,27,267].
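As an illustrative aside (not part of the cited studies), the relation between inhibitor concentration and fractional inhibition implied by an IC50 can be sketched with a standard Hill model. The 1-10 μM IC50 range for mdivi-1 comes from the text above; a Hill coefficient of 1 is an assumption for this example:

```python
def fractional_inhibition(conc_uM: float, ic50_uM: float, hill: float = 1.0) -> float:
    """Fraction of enzyme activity inhibited at a given inhibitor
    concentration, using a simple Hill (logistic) dose-response model.
    The Hill coefficient of 1 is an illustrative assumption."""
    if conc_uM < 0 or ic50_uM <= 0:
        raise ValueError("concentration must be >= 0 and IC50 > 0")
    return conc_uM**hill / (conc_uM**hill + ic50_uM**hill)

# By definition, inhibition at the IC50 itself is 50%.
assert abs(fractional_inhibition(5.0, 5.0) - 0.5) < 1e-9
```

Under this model, a concentration tenfold above the IC50 (e.g. 50 μM against a 5 μM IC50, as in the pre-treatment experiments) would correspond to roughly 90% inhibition, consistent with the near-saturating doses used in cellular studies.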
DRP1i27 is a novel small molecule that interacts directly with human isoform 3 of Drp1. Rosdah et al. demonstrated the protective capabilities of this molecule: it shielded cells from IRI and toxic conditions in a manner consistent with the modulatory role of Drp1. Treatment with 50 μM DRP1i27 increased fused mitochondrial networks in mouse fibroblasts in a Drp1-dependent manner, and DRP1i27 induced cardioprotection against SIRI in murine atrial HL-1 cells. Additionally, DRP1i27 showed cytoprotective effects against doxorubicin-induced toxicity in human iPSC-derived cardiomyocytes. Insights from molecular docking suggest that DRP1i27 binds to the GTPase site of Drp1, establishing hydrogen bonds with Gln34 and Asp218. The successful identification of DRP1i27 as a binding participant underscores its potential as a novel small-molecule inhibitor of Drp1 [213].
Another strategy to inhibit fission that has attracted scientific interest is isosteviol sodium (STVNa) [159,266]. STVNa, a sodium derivative of isosteviol, protects H9c2 cardiomyocytes from IRI by inhibiting the mitochondrial fission pathway [269]. Several studies have examined its diverse therapeutic properties, including anti-hyperglycaemic, anti-hypertensive, anti-inflammatory, and anti-tumor effects. STVNa effectively maintained the mitochondrial membrane potential (Δψ) and notably reduced the overproduction of ROS during reperfusion in a dose-dependent manner. Moreover, STVNa presented compelling results when compared with diazoxide, a selective opener of the mitochondrial ATP-sensitive potassium channel reported to safeguard cardiac mitochondria [160].
We recently demonstrated the cardioprotective effect of hydralazine [118], a Food and Drug Administration (FDA)-approved therapy for treating essential hypertension, severe hypertension in pregnancy [232], and chronic HF when used in combination with isosorbide dinitrate [176]. Using photo-switched mitochondrial Dendra2 mice, we demonstrated that pre-treatment with hydralazine inhibited mitochondrial fission, preserved mitochondrial fusion events, and prevented cell death in adult cardiomyocytes following SIRI. These findings provide new insights into future innovative therapeutic strategies for patients with MI. Future treatments could focus on targeting the surplus mitochondrial fission observed during cardiac ischemia or at the initiation of reperfusion, thus providing a potentially effective approach to alleviate the damage caused by such cardiac events.
An imbalance in inositol levels has been reported to affect mitochondrial dynamics, providing valuable insights into the pathogenesis of human diseases related to mitochondrial fission and fusion. Hsu et al. demonstrated that inositol serves as a key metabolite that directly limits AMPK-dependent mitochondrial fission, independent of its conventional role as a precursor for phosphoinositide synthesis. A reduction in inositol due to inositol monophosphatase 1 and 2 (IMPA1/2) deficiency triggers AMPK activation and mitochondrial fission, irrespective of ATP levels, whereas inositol accumulation prevents AMPK-dependent mitochondrial fission [105]. Both metabolic stress and mitochondrial damage can lead to decreased inositol levels in cells and mice, thereby inducing AMPK-dependent mitochondrial fission. Inositol directly interacts with AMPK and competes with AMP for this binding, limiting AMPK activation and mitochondrial fission. This research suggests that the AMP/inositol ratio is a pivotal factor in AMPK activation and proposes a model in which a decline in inositol is necessary to free AMPK for AMP binding. Therefore, AMPK is an inositol sensor, and its deactivation by inositol acts as a mechanism to limit mitochondrial fission. Interventions such as inositol treatment, activation of IMPA1/2, or targeting CDIPT (CDP-diacylglycerol-inositol 3-phosphatidyltransferase) could potentially be effective strategies for a range of human diseases linked to aberrant AMPK-dependent mitochondrial dynamics [105].
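The competition between AMP and inositol for AMPK binding described above can be illustrated with a toy equilibrium-occupancy model for two ligands competing for one site. The dissociation constants here are arbitrary placeholders, not measured values; the sketch only shows how raising inositol lowers the fraction of AMPK bound by AMP:

```python
def amp_occupancy(amp: float, inositol: float,
                  kd_amp: float = 1.0, kd_ino: float = 1.0) -> float:
    """Fraction of AMPK with AMP bound when AMP and inositol compete
    for the same site (single-site competitive binding at equilibrium).
    All concentrations and Kd values are in the same arbitrary units
    and are illustrative only."""
    a = amp / kd_amp
    i = inositol / kd_ino
    return a / (1.0 + a + i)

# With no inositol, AMP at its Kd occupies half the sites.
assert abs(amp_occupancy(1.0, 0.0) - 0.5) < 1e-9
# Adding inositol displaces AMP, mirroring the proposed inhibitory role.
assert amp_occupancy(1.0, 9.0) < amp_occupancy(1.0, 0.0)
```

In this caricature, a falling inositol level (as under metabolic stress) raises AMP occupancy and hence AMPK activation, consistent with the AMP/inositol-ratio model proposed by Hsu et al.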
LS-102, a derivative of AS-IV, has shown significant efficacy in protecting against IRI damage. LS-102 demonstrated considerable efficacy in reducing apoptosis; lowering the levels of ROS, creatine kinase (CK), lactate dehydrogenase (LDH), and calcium; enhancing the mitochondrial membrane potential; and regulating the Bax/Bcl-2 ratio in cardiomyocytes during IRI. Notably, LS-102 inhibited IRI-induced mitochondrial fission by reducing the mitochondrial localization of Drp1 via downregulation of Drp1 phosphorylation at Ser-616 and upregulation of its phosphorylation at Ser-637 in H9c2 cells. LS-102 thus provides cardioprotection against IRI by inhibiting mitochondrial fission, primarily by blocking GSK-3β-mediated Drp1 phosphorylation at Ser-616 [48,206].
Pharmacological modulators-fusion
Franco et al. first identified mini-peptides derived from Mfn2 that could specifically activate or inhibit Mfn1 and Mfn2, thereby allowing manipulation of mitochondrial fusion with mitofusin agonists or antagonists [77]. They proposed that the fusion-promoting (agonist) peptide acts by competing with the intramolecular interactions (between the HR1 and HR2 domains) that normally maintain a closed, non-fusion-permitting conformation; intramolecular peptide binding results in a more open, fusion-friendly conformation. In contrast, the antagonist peptide was thought to encourage the opposite conformational shift, pushing toward a more closed, non-fusion-permitting state. The conformational changes observed in this study were monitored using Mfn2 FRET probes labeled with fluorophores at the amino and carboxyl termini. However, the precise structure of fully intact mitofusin proteins in either the "closed" inactive or "open" active conformation has not been definitively determined and remains a topic of ongoing debate in the field, as discussed earlier [37,83,109,157].
In a subsequent study, Rocha et al. designed a strategy to generate small-molecule peptidomimetics with enhanced in vitro effectiveness over the original mitofusin-activating peptide [210]. They began by identifying a mitofusin-activating peptide of just 11 amino acids. Subsequently, the team used alanine scanning to highlight the amino acids pivotal to the function of the peptide. Finally, they employed a pharmacophore model to facilitate an in silico screen aimed at identifying commercially available compounds that shared structural features with the amino acids instrumental to the agonist peptide's function. Biological screening of 55 potential matches led to the identification of two compounds with observable agonist activity. From these, the novel synthesis of "Franken-molecules" combining chemical segments from the fusogenic compounds culminated in the creation of the first-of-its-kind small-molecule mitofusin agonist, Chimera B-A/l. Chimera B-A/l bound to the Mfn2 HR2 domain and displaced the original agonist peptide from which it was designed. Similar to the mitofusin agonist peptide, Chimera B-A/l effectively reversed mitochondrial fragmentation and depolarization in cultured mouse neurons expressing the human Charcot-Marie-Tooth disease type 2A (CMT2A) mutant Mfn2 protein. Furthermore, this prototype mitofusin agonist quickly restored normal axonal mitochondrial trafficking both in vitro, using cultured CMT2A neurons, and ex vivo, using CMT2A mouse sciatic nerves. Collectively, these investigations open opportunities for future experimental work and potential clinical treatments, leveraging either cell-permeable mini-peptides or small-molecule peptidomimetics that allosterically activate Mfn1 and Mfn2 [210].
M1 is another agent that induces cardioprotection by activating fusion in preclinical models of IRI. M1 has demonstrated significant cardioprotective properties in normal [151] and prediabetic rats [154] and has been shown to restore the expression of mitochondrial fusion proteins, thereby ameliorating mitochondrial function [244]. Another class of small-molecule mitofusin activators with better pharmacokinetic properties has recently been described. This investigation led to the development of a series of 6-phenylhexanamide derivatives. Through pharmacokinetic optimization, a 4-hydroxycyclohexyl analog, compound 13, was synthesized; this compound demonstrated potency, selectivity, and oral bioavailability as a preclinical candidate. Intriguingly, further studies of the cis- and trans-4-hydroxycyclohexyl isostereomers of compound 13 revealed that the functional activity and protein interaction were exclusive to the trans-form, referred to as 13B [58].
Finally, using structural and biochemical insights into the direct modulation of Mfn1 and Mfn2 conformations, Zacharioudakis et al. developed rational pharmacophore methodologies to perform computational screening of small molecules [263]. These strategic screenings yielded small molecules that could either activate or inhibit the fusion activity of mitofusins by modulating their tethering-permissive structure. Their results demonstrated that the mitofusin activator MASM7 and the mitofusin inhibitor MFI8 directly interacted with the recombinant HR2 domain of Mfn2. Moreover, these compounds can also interact with the intact Mfn2 protein found in mitochondria within cells. MASM7 fosters the pro-tethering structure of Mfn1 and Mfn2, thereby facilitating mitochondrial fusion. In contrast, MFI8 disrupts mitochondrial fusion by actively obstructing the tethering-favorable conformation of mitofusins. MASM7 and MFI8 were found to elevate or reduce, respectively, the levels of GTP-dependent Mfn2 higher-order oligomers. This study establishes a novel strategic avenue for pharmacological intervention with mitofusins using small molecules, thereby enriching the domain of molecular therapeutics. Thus, a deeper exploration of MASM7 and MFI8 in relation to IRI is required [263].
Despite the prevailing challenges, the prospect of applying pharmacological methodologies, either supplementary to or in place of genetic manipulation, opens a new avenue to further understand fusion and fission processes by manipulating their components, which not only enriches the current research, but also potentially holds promise for clinical applications.
Ischemic conditioning
Ischemic conditioning strategies, including local preconditioning (IPC), postconditioning (IPost), and remote ischemic conditioning (RIC), are potentially promising avenues for therapy, although their mechanisms are not entirely understood and likely involve multiple pathways. IPC, for instance, delays the recovery of intracellular pH and prevents NOS uncoupling and the subsequent production of reactive oxygen and nitrogen species, while amplifying the signaling of protein kinase G (PKG), reperfusion injury salvage kinase (RISK), and survivor activating factor enhancement (SAFE) in reperfused cardiomyocytes. Interestingly, RIC appears to be similar to IPC in that it affects nitrosylation and conserves PKG activity [100]. However, RIC also influences mitochondrial function and activates the RISK and SAFE pathways, further expanding its cardioprotective potential [94,96,104,130,261]. Within the scope of IRI, conditioning strategies aimed at reducing mitochondrial fission or augmenting mitochondrial fusion have been shown to correlate with diminished IRI. In particular, remote ischemic conditioning has emerged as a prominent strategy. Heusch [99] accentuates its potential in reducing infarct size among patients with acute myocardial infarction undergoing percutaneous coronary intervention. Meanwhile, Kleinbongard et al. [130] further investigated RIC, examining the various levels of this approach and its associated signal transduction pathways, drawing attention to its successful clinical applications. Chong et al. 
[52] offered a comprehensive review of the signaling mechanisms associated with RIC, pointing out the divergent outcomes observed in various clinical trials. They further highlighted inconsistent findings in clinical trials, advocating for enhanced research efforts to optimize the application of RIC in cardiac surgery. The cardioprotective effects of RIC are linked to upregulation of the mitochondrial fusion protein OPA1 coupled with a reduction in the mitochondrial fission protein Drp1 in the heart. Cellier et al. observed a decrease in Drp1 levels in the mitochondrial fraction following RIC. This implies that RIC can disrupt Drp1 translocation to the mitochondria, thereby obstructing the fission process initiated by ischemia-reperfusion, and suggests that the safeguarding mechanisms of RIC significantly influence mitochondrial dynamics [39]. More recently, we have shown that IPC and IPost preserved the mitochondrial network by inhibiting fission and promoting fusion in H9c2 and adult murine cardiomyocytes subjected to IRI [113].
Mitochondria-targeted antioxidants
Excessive fragmentation of mitochondria following IRI is a key determinant of mitochondrial damage and cardiomyocyte death [182]. Additionally, mitochondrial dysfunction induced by IRI can result in increased ROS production, which in turn causes more mitochondrial damage and further ROS release, a phenomenon referred to as "ROS-induced ROS release" [97]. Given that mitochondria are the primary source of ROS, scavenging mitochondrial ROS in reperfused cardiomyocytes has long been suggested as a potential therapeutic target for myocardial IRI. Suppressing mitochondrial fission reduces mitochondrial ROS, mitigates mitochondrial dysfunction, and decreases cell apoptosis. Consequently, these effects collectively lead to an improvement in cardiac function.
Melatonin, chemically known as N-acetyl-5-methoxytryptamine, is primarily produced by the pineal gland. Beyond its basic hormonal functions, it plays a multifaceted role in several bodily systems, including immune regulation, the prevention of cancer metastasis, sleep regulation, and circadian rhythms [92]. Given its antioxidant, anti-inflammatory, and anti-apoptotic properties, melatonin is believed to play a crucial role in mitigating the myocardial damage caused by reperfusion [75,265]. Preclinical studies have demonstrated that melatonin can inhibit mitochondrial fission under certain pathological conditions [66,259]. From a mechanistic standpoint, melatonin impedes the mitochondrial translocation of fission proteins, such as Fis1 and Drp1, as well as the pro-apoptotic protein Bax. Concurrently, it upregulates the expression of the mitochondrial fusion proteins Mfn1, Mfn2, and OPA1 [187,191]. By inhibiting the translocation of Fis1 and Drp1 to the OMM, melatonin reduces fission. The mechanisms by which melatonin regulates mitochondrial fusion proteins are intricate and multifaceted: melatonin may elevate the expression of Mfn1 through Notch1 signaling or, alternatively, downregulate both Mfn1 and OPA1 [191,238]. It has been reported that melatonin stabilizes OPA1 through the AMPK signaling pathway, and inhibiting AMPK results in decreased OPA1 expression, compromising the cardioprotective benefits of melatonin. In essence, these results validate that OPA1-associated mitochondrial fusion is indeed modulated by melatonin in the context of IRI. Furthermore, orchestrating the AMPK-OPA1-mitochondrial fusion-mitophagy axis via melatonin may represent a novel therapeutic strategy to mitigate myocardial IRI [270].
The clinical intrigue surrounding melatonin as a cardioprotective agent was highlighted in the MARIA trial, which assessed its efficacy in STEMI patients undergoing PPCI [68]. However, it is imperative to approach these results with caution. Notably, the experimental data advocating melatonin's beneficial impact on MI size primarily originated from small-animal MI models devoid of comorbidities and comedications, and much of this research primarily examined melatonin's long-term effects on post-MI adverse remodeling. More critically, in a pertinent large-animal closed-chest reperfused porcine MI model, neither intravenous nor intracoronary melatonin administered pre-reperfusion reduced MI size. This indicates a possible inconsistency in melatonin's cardioprotective efficacy, even under controlled experimental conditions. Such findings underscore the need for rigorous vetting of emerging cardioprotective treatments in laboratory settings before transitioning into clinical trials [95].
Resveratrol, a polyphenolic phytoalexin primarily found in grapes, berries, peanuts, and wines, exhibits a wide array of beneficial properties. Chemically identified as 3,4',5-trihydroxystilbene (C14H12O3), it is known for its antioxidant, anti-inflammatory, anti-apoptotic, and anticancer potential. In animal research, resveratrol has been demonstrated to protect cardiomyocytes against oxidative stress, mitigating autophagy, cardiac fibrosis, and apoptosis [268]. Recently, it has been shown that resveratrol protects mitochondria from damage caused by hypoxia-reoxygenation events by activating the Sirt1-Sirt3-Mfn2-Parkin-PGC1α pathway [272]. It re-establishes mitochondrial dynamics by accelerating the partitioning of damaged mitochondria and facilitating the interchange of components within the mitochondrial network through stimulation of fission and fusion via the Sirt1-Sirt3 pathway, thereby effectively controlling mitochondrial quantity. In addition, resveratrol has been reported to regulate Mfn1 expression via the AMPK pathway, and inhibition of the AMPK pathway neutralized the anti-apoptotic effect of resveratrol on re-oxygenated cells [82].
Despite extensive research in animal models suggesting that antioxidants could be a significant strategy for treating cardiovascular diseases, the therapeutic efficacy of these compounds in humans still requires confirmation. Thus, clinical medicine faces the ongoing challenge of improving our understanding of antioxidants and their viability as a therapy.
Summary and conclusions
Changes in mitochondrial morphology in response to acute myocardial IRI are known to mediate mitochondrial dysfunction and cardiomyocyte death, providing a therapeutic target for cardioprotection in terms of reducing MI size and preventing HF following AMI. Therapeutic strategies may help restore the balance of mitochondrial fission and fusion perturbed by IRI, for example through acute transient inhibition of mitochondrial fission to preserve mitochondrial function, which could potentially be applied at the time of reperfusion following AMI. However, although this treatment approach may also apply to other cardiac diseases characterized by disturbances in the balance of mitochondrial fusion and fission, such as diabetic cardiomyopathy and post-AMI HF, chronic manipulation of mitochondrial morphology, such as sustained inhibition of fission, may inadvertently result in cardiomyopathy by allowing the accumulation of damaged mitochondria due to disruption of mitophagy. This may restrict pharmacological manipulation of mitochondrial morphology to acute settings, such as limiting MI size following AMI.
Risk factors, underlying comorbidities, and concurrent medications can significantly impact mitochondrial function, often through multifaceted mechanisms and, in some instances, irreversibly. Consequently, a singular approach targeting only one mechanism of mitoprotection may be insufficient, and a more holistic strategy may be required [29]. Although the molecular structure of the mPTP remains elusive and specific inhibitors to deter its activation are yet to be identified, it stands as a promising avenue to counteract reperfusion injury. For modulating mitochondrial fusion at the onset of reperfusion, a different approach may be needed, based on inhibiting Mfn2 to dissociate mitochondria from the SR, thereby preventing mitochondrial calcium overload and subsequent mPTP opening and cardiomyocyte death. This supports the notion that targeting singular intracellular components such as the mPTP or Mfn2 may fall short in cardioprotection; the emphasis should be on a comprehensive understanding of the underlying cardioprotective mechanisms.
Our understanding of how targeting mitochondrial dynamics offers cardioprotection is still evolving. Since the (EU)-CARDIOPROTECTION Cooperation in Science and Technology (COST) discussion in 2019, major advances have been made and significant molecular information has rapidly accumulated, bringing this classic discipline back to renewed attention. Mitochondrial morphology has been classified under the second category of targets, which includes mechanisms activated endogenously for cardioprotection [60]. Additionally, a detailed set of criteria titled 'IMproving Preclinical Assessment of Cardioprotective Therapies' (IMPACT) has been established to enhance the successful translation of cardioprotective therapies to benefit patients in clinical settings [136].
In conclusion, targeting mitochondrial morphology has the therapeutic potential to reduce MI size and prevent HF following AMI and may also be beneficial in other chronic cardiac conditions characterized by disturbed mitochondrial morphology. However, further studies are needed to investigate the optimal therapeutic approach to manipulating mitochondrial morphology that confers cardioprotection, so that health outcomes following AMI can be improved.
Fig. 1
Fig. 1 Mitochondrial morphology dynamically adapts to diverse environmental stimuli, resulting in morphological modifications that have implications for cell survival. A In healthy hearts, mitochondrial quality control is managed by the dynamic balance between fragmented and elongated phenotypes, enhancing both functionality and metabolism. Mitochondrial fusion commences with the interlinking of two proximate mitochondria facilitated by the OMM fusion GTPase proteins Mfn1 and Mfn2, which mediate the fusion of the OMM. Subsequently, OPA1 directs the fusion of the IMM and matrix material, culminating in a single elongated mitochondrion. B Mitochondrial fission is over-stimulated under IRI conditions, predominantly driven by Drp1 phosphorylation at Ser-616. Adaptor proteins such as Fis1, Mid49, Mid51, and MFF mediate initial mitochondrial constriction preceding Drp1 recruitment. The recruited Drp1 helps form helical ring oligomers, which in turn stimulate the constriction and scission of the outer mitochondrial
Fig. 2
Fig. 2 Exercise training is the most accessible and effective intervention for many cardiovascular diseases, primarily because it amplifies the intracellular production of ROS and energy-regulating molecules, such as ATP and AMP. These molecules serve as potent signaling transducers capable of activating a range of protein kinases, including AMPK, an important mediator of glucose and fatty acid oxidation. Moderate interval training has been correlated with cardioprotection against IRI, often attributed to the role of the HSP72, AMPK
Fig. 3
Fig. 3 Pharmacological manipulation of mitochondrial fission and fusion processes. Pharmaceutical strategies aimed at modulating mitochondrial dynamics are depicted; those focusing on regulation of the mitochondrial fission process via strategic manipulation of Drp1 are highlighted in green. In contrast, agents that activate Mfn1 and Mfn2, thereby influencing the fusion process, are shown in purple. This figure was created using Adobe Illustrator 2023. Drp1 dynamin-related protein 1, Mfn2 mitofusin 2, Mfn1 mitofusin 1
"year": 2023,
"sha1": "ab892f4d4f04c172592dd6559d95cb478d6abd75",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00395-023-01019-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "27105fe33a9e64a5feb5e915aaf013e7bdfd04b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anomalous left coronary artery from the pulmonary artery presenting with aborted sudden death in an octogenarian: a case report
Introduction: We report a rare coronary anomaly presenting with aborted sudden death in an octogenarian. An anomalous left coronary artery from the pulmonary artery is a rare coronary anomaly which usually presents in the first year of life. Survival into adulthood and the elderly years is extremely rare.

Case presentation: An 85-year-old Caucasian woman was brought to our hospital following cardiopulmonary arrest. After prolonged resuscitation and stabilization of our patient, further evaluation revealed an anomalous left coronary artery from pulmonary artery syndrome. She was discharged on medication.

Conclusion: An anomalous left coronary artery from the pulmonary artery can present in elderly and even octogenarian patients. Careful history, physical examination and an appropriate invasive study are needed to confirm the diagnosis.
Introduction
Anomalous origin of the left main coronary artery from the pulmonary artery (ALCAPA), also known as Bland-White-Garland syndrome, is a rare congenital anomaly occurring in one in 300,000 births [1]. The typical clinical course is severe left-sided heart failure presenting at the age of one to two months [2]. Without surgical intervention, most patients with ALCAPA die within the first year of life [3]. In adult life, symptoms may range from dyspnea, chest pain and exercise intolerance to sudden cardiac death. We describe a case of an ALCAPA in an 85-year-old woman who presented with aborted sudden death.
Case presentation
An 85-year-old Caucasian woman experienced sudden loss of consciousness while walking. She was brought to our hospital and found to be in ventricular fibrillation. After prolonged resuscitation, our patient converted to sinus rhythm with stable hemodynamics. She had no coronary risk factors or history of cardiovascular disease. A cerebral computed tomography scan was found to be normal. Her serial cardiac enzymes were negative and an electrocardiogram had non-specific ST-T changes. Echocardiography showed reduced left ventricle systolic performance with an ejection fraction of 40%, global hypokinesia and mild mitral regurgitation. No previous echocardiographic findings or left ventricular ejection fraction measurements were available for comparison. After three days and extubation of our patient, a coronary angiography and cardiac catheterization was performed. Her left main coronary artery could not be selectively engaged. A selective right coronary injection using a right Judkins catheter showed a large and tortuous right coronary artery arising from the right sinus of Valsalva. The left coronary artery was filled through collaterals from the right coronary artery (RCA). The anomalous origin of the left coronary artery was demonstrated in the late phase of RCA injection (Figures 1 and 2, Additional file 1). The calculated left to right shunt (pulmonic blood flow to systemic blood flow ratio) was 1.01. A diagnosis of ALCAPA syndrome was confirmed. She was discharged home with stable cardiovascular and neurologic status a few days later.
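The left-to-right shunt quantification mentioned above (Qp/Qs, pulmonic-to-systemic flow ratio) is conventionally derived during catheterization from the oximetric form of the Fick principle. A minimal sketch follows; the saturation values in the usage note are illustrative, not the patient's actual measurements:

```python
def qp_qs(sao2: float, svo2: float, spvo2: float, spao2: float) -> float:
    """Pulmonary-to-systemic flow ratio from oxygen saturations (%),
    using the oximetric shunt form of the Fick principle:
        Qp/Qs = (SaO2 - SvO2) / (SpvO2 - SpaO2)
    sao2:  systemic arterial saturation
    svo2:  mixed venous saturation
    spvo2: pulmonary venous saturation
    spao2: pulmonary arterial saturation"""
    denom = spvo2 - spao2
    if denom <= 0:
        raise ValueError("pulmonary venous saturation must exceed pulmonary arterial")
    return (sao2 - svo2) / denom
```

With illustrative saturations of 95% arterial, 70% mixed venous, 98% pulmonary venous and 73.2% pulmonary arterial, the ratio works out to about 1.01, i.e. a haemodynamically negligible shunt like the one reported here.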
Discussion
ALCAPA is a rare congenital anomaly, first described by Brooks in 1886. It is usually seen as an isolated lesion. This anomaly, also known as Bland-White-Garland syndrome, accounts for about 0.25% to 0.5% of congenital heart defects [4]. Symptoms usually occur in the first few months of life; late presentation in the adult or elderly stage of life is extremely rare. Insufficient collateral flow from the right coronary artery, together with coronary steal from the left coronary artery into the pulmonary trunk, results in malperfusion of the left ventricular myocardium, with the right coronary artery becoming large and tortuous. Previous existence or the rapid development of collateral vessels between the right and the left coronary arteries may prevent ischemia. To the best of our knowledge, our patient is one of the oldest reported cases in the literature. Our patient's lack of symptoms until presentation may be related to extensive collateral vessels between the left and right coronary arteries, which provided enough oxygenated blood to her myocardium. However, a cause-effect relationship between ALCAPA and ventricular fibrillation in this elderly patient cannot be definitively proven.
Surgery is considered the treatment of choice for this anomaly. Various surgical methods have been attempted, including simple ligation, bypass grafts and reimplantation of coronary arteries in the aorta [5]. Considering our patient's old age and her family's request, she was discharged home on medication including acetylsalicylic acid, amiodarone and losartan.
Conclusion
In this article we describe an elderly woman presenting with ventricular fibrillation, who was found to have a coronary anomaly. To the best of our knowledge the patient in this case is one of the oldest, and possibly the oldest, patients with this type of coronary anomaly, known as ALCAPA syndrome, having survived to 85 years of age. It behooves one to consider coronary anomaly as a possible cause of sudden death even in an octogenarian.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
"year": 2012,
"sha1": "3c25bc8427849129327d62e28140406a12ec1908",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-6-12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae29852650cde3e210083f0d762a28009683613c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Influence of blood group, von Willebrand factor levels, and age on factor VIII levels in non‐severe haemophilia A
Abstract

Background: Data on the effect of ABO blood group (ABO), von Willebrand factor (VWF) levels, and age on factor VIII (FVIII) in non‐severe haemophilia A (HA) is scarce.

Objective: To investigate if ABO, VWF levels, and age have an influence on the variability of FVIII levels and consequently on the assessment of severity in non‐severe HA.

Patients/Methods: Eighty‐nine patients with non‐severe HA and 82 healthy controls were included. Data on ABO was collected, and FVIII clotting activity (FVIII:C) with one‐stage clotting assay (FVIII:C OSA) and chromogenic substrate assay (FVIII:C CSA), FVIII antigen (FVIII:Ag), and VWF antigen (VWF:Ag) and activity (VWF:Act) were determined.

Results: In HA, FVIII:C OSA and CSA and FVIII:Ag were not different between non‐O (n = 42, median 15.5, interquartile range 10.4‐24.0; 10.0, 6.8‐26.0 and 15.2, 10.7‐24.9) and O (n = 47, 14.1, 9.0‐23.0; 10.0, 5.0‐23.0 and 15.2, 9.3‐35.5), whereas in healthy controls, non‐O individuals had significantly higher FVIII levels. FVIII:C showed no relevant correlation with VWF levels in HA, but we observed strong correlations in healthy controls. Age had only a minor influence in HA, but had a considerable impact on FVIII:C in healthy controls. In multivariable regression analysis ABO, VWF:Ag and age were not associated with FVIII:C in HA, whereas this model explained 61.3% of the FVIII:C variance in healthy controls.

Conclusions: We conclude that in non‐severe HA, ABO and VWF levels do not substantially influence the variability of FVIII levels and age has only minor effects on it, which is important information for diagnostic procedures.
Blood coagulation factor VIII (FVIII) circulates in blood bound to von
Willebrand factor (VWF). This binding stabilizes and protects FVIII from decay. [1][2][3] Persons with ABO non-O have 25%-35% higher levels of VWF and FVIII than persons with O, [4][5][6][7] which is accompanied by longer FVIII half-life in haemophilia A (HA). 3 VWF and FVIII levels also increase with age. 6,8 The effects of ABO and age on FVIII are at least partially mediated through VWF. Kamphuisen et al investigated female relatives of HA patients. 9 ABO non-O and higher age were associated with higher FVIII clotting activity (FVIII:C) and VWF antigen (VWF:Ag) levels in both carriers and non-carriers of F8 mutations, which remained after correction for VWF:Ag. 9 Orstavik et al found that the impact of ABO and age on FVIII:Ag levels was secondary to the impact on VWF:Ag in persons without bleeding symptoms. 8 Ay et al analyzed FVIII and VWF levels in carriers of HA and healthy controls. They found that ABO influenced VWF, but not FVIII:C in carriers, whereas the ABO affected VWF and FVIII:C in healthy controls. 10 The role of ABO, VWF levels, and age in FVIII variability in non-severe HA has not been explored in detail, but could be important for diagnosis. 11 Therefore, we aimed to investigate if ABO, VWF, and age influence the variability of FVIII:C in non-severe HA.
| PATIENTS AND METHODS
Patients with HA (≥18 years of age) with baseline FVIII:C of 1%-40% 12 were included in this observational, cross-sectional study of four Austrian haemophilia centers within the framework of the Austrian haemophilia registry. 13 The lowest FVIII level ever measured in patients' history served as the basis for the diagnosis and assessment of severity. Mild and moderate haemophilia was defined according to the recommendations from the Scientific and Standardization Committee (SSC) of the International Society on Thrombosis and Haemostasis (ISTH). 12 Exclusion criteria were platelet count <100*10 5 /L, restricted renal or hepatic function (prothrombin time < 75% of normal levels or serum creatinine > 2.0 mg/dL), active malignancy, surgery within the last 6 weeks, overt infection within the last 2 weeks, or inhibitor against FVIII. All patients were of Caucasian descent. Blood samples were collected during a routine visit after obtaining written informed consent.
| RESULTS AND DISCUSSION
We included 89 patients with non-severe HA and 82 healthy men. In four HA patients no mutation was found in the exons, the adjacent intronic regions, the 5'UTR, and the 3'UTR of the F8 gene; in these patients von Willebrand disease type 2N was excluded. In the other 85 HA patients 46 different F8 mutations were present; the majority (n = 43) were missense mutations including six mutations that have not been described previously.
Essentials
• ABO, von Willebrand factor (VWF), and age influence factor VIII (FVIII) levels in the general population.
• We investigated the impact of ABO, VWF levels, and age in non-severe haemophilia A (HA) patients.
• Neither ABO nor VWF had a remarkable influence on FVIII levels in non-severe HA; age had a minor influence.
• For the assessment of baseline FVIII variability, ABO and VWF need not be taken into account.

There was no significant difference in FVIII:C OSA and CSA, or FVIII:Ag, between non-O and O HA patients (Table 1, Figure 1A), whereas non-O HA patients had significantly higher levels of VWF (Figure 1B). In the control group, VWF:Ag, VWF:Act, and FVIII:C levels were significantly higher in non-O individuals (Table 1, Figure 1A and B).
Next, we investigated if FVIII correlated with VWF. In HA there was no correlation of FVIII:C OSA with VWF:Ag or with VWF:Act.
FVIII:C CSA and FVIII:Ag were not correlated with VWF:Ag and non-relevantly correlated with VWF:Act (Table 2). In univariable linear regression in HA there was no association between FVIII:C OSA with VWF:Ag (R 2 = .003, P = .588) or VWF:Act (R 2 = .001, P = .789) or of FVIII:C CSA with VWF:Ag (R 2 = .003, P = .617) and VWF:Act (R 2 = .025, P = .145). A scatter plot of FVIII:C OSA and VWF:Ag with regression line is shown in Figure 1C. The relationship between FVIII:C OSA and VWF:Ag levels was analyzed in the three biggest groups with identical F8 mutations, one comprising nine and two with seven patients. We found strong associations after excluding one outlier from one of the groups with seven patients. In univariable linear regression P-values were between .001 and <.0001, R 2 between .890 and .966, and 1% increase in the VWF:Ag was associated with 0.12%-0.13% increase in the FVIII:C OSA.
In healthy controls we observed strong correlations of FVIII:C OSA with VWF:Ag and VWF:Act (Table 2). In healthy controls a 1% elevation in the VWF:Ag was associated with a 0.73% elevation in the FVIII:C (R 2 = .558, P < .001) and a 1% elevation in the VWF:Act with a 0.77% elevation in the FVIII:C (R 2 = .556, P < .001).
In Figure 1D a scatter plot of FVIII:C OSA and VWF:Ag with regression line for healthy controls is shown.
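As a purely illustrative aside (synthetic numbers, not the study's data), the kind of univariable fit reported here — a slope and explained variance of FVIII:C on VWF:Ag — can be reproduced with an ordinary least-squares line:

```python
import numpy as np

# Hypothetical paired measurements in % of normal; the true slope is
# set near the ~0.7%-per-1% relationship reported for healthy controls.
vwf = np.array([60.0, 75.0, 90.0, 110.0, 130.0, 150.0, 170.0])
f8 = 0.7 * vwf + 10.0 + np.array([5.0, -3.0, 4.0, -6.0, 2.0, -1.0, 3.0])

slope, intercept = np.polyfit(vwf, f8, 1)        # univariable linear fit
pred = slope * vwf + intercept
r2 = 1.0 - ((f8 - pred) ** 2).sum() / ((f8 - f8.mean()) ** 2).sum()
# slope ≈ 0.7: each 1% rise in VWF:Ag maps to roughly a 0.7% rise in FVIII:C
```

The same slope-and-R² pair is what distinguishes the two cohorts in this paper: near-zero R² in HA, high R² in healthy controls.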
We investigated if age impacted FVIII and VWF levels. In HA patients we found a non-relevant correlation of FVIII:C OSA with age, no significant correlation of FVIII:C CSA, and a weak positive correlation of FVIII:Ag with age (Table 2). VWF:Ag and VWF:Act showed weak positive correlations with age in HA (Table 2). In healthy controls there were non-relevant to weak correlations of FVIII:C OSA, VWF:Act, and VWF:Ag with age (Table 2). In univariable regression in healthy controls, FVIII:C was associated with age in the ABO non-O group (Figure 2B), whereas in the ABO O group FVIII:C was not associated with age (R 2 = .031, P = .337, Figure 2B).

Note: FVIII:Ag, factor VIII antigen; FVIII:C CSA, factor VIII chromogenic substrate assay; FVIII:C OSA, factor VIII one-stage clotting assay; VWF:Ag, von Willebrand factor antigen; VWF:Act, von Willebrand factor activity.
A P-value of <.05 was considered to be significant.
Spearman's rho of −0.3 to 0.3 was considered non-relevant, −0.3 to −0.5 and 0.3 to 0.5 weak, −0.5 to −0.7 and 0.5 to 0.7 moderate, and −0.7 to −1.0 and 0.7 to 1.0 a strong correlation coefficient.

Whereas in healthy controls 61.3% of the variance in FVIII:C could be explained by ABO, VWF:Ag, and age in multivariable regression, no significant association with these parameters could be found in HA.
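The correlation-strength bands above translate directly into a small helper function. This sketch is illustrative only; the handling of values falling exactly on a cut-off is our assumption, as the paper does not specify it:

```python
def classify_rho(rho: float) -> str:
    """Map Spearman's rho to the strength categories used here.
    Bands are symmetric around zero; exact-boundary handling is an
    assumption (boundaries assigned to the stronger category)."""
    a = abs(rho)
    if a < 0.3:
        return "non-relevant"
    if a < 0.5:
        return "weak"
    if a < 0.7:
        return "moderate"
    return "strong"

classify_rho(0.12)   # → "non-relevant"
classify_rho(-0.62)  # → "moderate"
```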
To our knowledge, only one study, from the INSIGHT group, has hitherto investigated the impact of ABO on FVIII levels in non-severe HA.16 We conclude that for the assessment of FVIII levels in patients with mild or moderate haemophilia A, neither the ABO blood group nor the VWF level has to be taken into account. Age has to be considered a minor modifying factor, as there is a consistent, but weak, increase in FVIII levels with age. These aspects are important in daily practice when a diagnosis of non-severe haemophilia A has to be made.
ACK N OWLED G M ENTS
This study was carried out within the framework of the Austrian haemophilia registry. | 2020-02-20T09:02:26.181Z | 2020-02-19T00:00:00.000 | {
"year": 2020,
"sha1": "63c82f5f6380273f09aa671e8786725438ace961",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jth.14770",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ab536f107aa892e4dde4a2069e3c9484dd3d3e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253599202 | pes2o/s2orc | v3-fos-license | Establishment and characterization of the immortalized porcine lung-derived mononuclear phagocyte cell line
Mononuclear phagocytes (MNP), including monocytes, dendritic cells (DC), and macrophages, play critical roles in innate immunity. MNP are abundant in the lungs and contribute to host defense against airborne agents and pulmonary immune homeostasis. In this study, we isolated porcine lung-derived MNP (PLuM) from primary cultures of parenchymal lung cells and then immortalized them by transferring the SV40 large T antigen gene and porcine telomerase reverse transcriptase gene using lentiviral vectors. The established cell line, immortalized PLuM (IPLuM), expressed DC/macrophage markers; i.e., CD163, CD172a, and major histocompatibility complex class II, whereas they did not express a porcine monocyte-specific marker, CD52. The expression patterns of these cell surface markers indicate that IPLuM originate from the DC/macrophage lineage rather than the monocyte lineage. The bacterial cell wall components muramyl dipeptide and lipopolysaccharide induced the production of the interleukin-1 family of pro-inflammatory cytokines in IPLuM. Phagocytotic activity was also detected by time-lapse fluorescence imaging of live cells when IPLuM were cultured in the presence of pHrodo dye-conjugated E. coli BioParticles. It is worth noting that IPLuM are susceptible to African swine fever virus infection and support the virus' efficient replication in vitro. Taken together, the IPLuM cell line may be a useful model for investigating host-agent interactions in the respiratory microenvironments of the porcine lung.
Introduction
The mononuclear phagocyte system is an important part of the innate immune system (1). Mononuclear phagocytes (MNP) comprise monocytes, dendritic cells (DC), and macrophages and are characterized by their phagocytosis and antigen presentation abilities (1). DC are professional antigen-presenting cells, which initiate adaptive immune responses (1). Macrophages are professional phagocytes and are highly specialized for the removal of dead cells and cellular debris (1). Although monocytes were historically considered to be a precursor of DC/macrophages, recent evidence has demonstrated the distinct origins of DC, macrophages, and monocytes (2). MNP play especially critical roles in tissues that are in direct contact with the outside world, such as the intestine, skin, and lungs (3).
Regarding MNP of the porcine lung, the DC/macrophages can be segregated into at least six subpopulations; i.e., conventional DC1 (cDC1), cDC2, monocyte-derived DC, monocyte-derived intravascular macrophages, interstitial macrophages, and alveolar macrophages (4). Among them, in vitro cultures of porcine primary alveolar macrophages (PAM) have been frequently used for the identification and characterization of porcine viral pathogens, such as porcine reproductive and respiratory syndrome virus (PRRSV) and African swine fever virus (ASFV) (5,6). PAM are selectively collected during bronchoalveolar lavage procedures, while all subpopulations of DC/macrophages can be recovered from primary cultures of porcine parenchymal lung cells (5).
In this study, we collected porcine lung-derived MNP (PLuM) from mixed primary cultures of porcine parenchymal lung cells, as described in our previous study (7). We further established a novel immortalized PLuM (IPLuM) cell line and analyzed the cells' phenotypic characteristics and susceptibility to infection by ASFV.
Ethics statement
The protocols for the use of animals were approved by the animal care committee of the Institute of Agrobiological Sciences (#H28-P04) and the National Institute of Animal Health (NIAH) (#20-046), National Agriculture and Food Research Organization.
Isolation of PLuM
The lung parenchyma was dissected out from a 1-month-old crossbred pig and cut into small pieces with scissors, and the tissue pieces were digested by incubating them with collagenase-dispase (Roche Diagnostics, Basel, Switzerland)/Dulbecco's phosphate-buffered saline (DPBS) solution (1 mg/mL) containing DNase I (Roche Diagnostics; 40 µg/mL) for 1 h at 37 • C. Then, the digested tissue fragments were collected and resuspended in growth medium composed of Dulbecco's modified Eagle's medium (DMEM) (Sigma, St. Louis, MO) containing 10% heat-inactivated fetal bovine serum (FUJIFILM Wako Pure Chemical Corp., Osaka, Japan) and supplemented with 25 µM monothioglycerol (FUJIFILM Wako), 10 µg/mL insulin (Sigma), streptomycin-penicillin (100 µg/mL and 100 U/mL, respectively) (Nacalai Tesque, Inc., Kyoto, Japan), and 5 µg/mL Fungin (InvivoGen, San Diego, CA). The tissue suspension was added to T-75 tissue culture flasks (Sumitomo Bakelite Co., Ltd., Tokyo, Japan) and cultured at 37 • C in a humidified atmosphere of 95% air/5% CO 2 . The culture medium was replaced every 3-4 days. After 1-2 weeks, a sheet-like cell monolayer formed, and spherical cells containing PLuM appeared on the cell sheet. The cells loosely attached to the cell sheet and so were harvested from the culture supernatant by centrifugation (1500 rpm for 5 min). Since PLuM readily attach to non-tissue culture-grade petri dishes (NTC-dishes), they were selectively isolated from the other types of cells on the basis of this feature (7,8).
Establishment of IPLuM and subculturing of immortalized cells
Lentiviral particles carrying the SV40 large T antigen (SV40LT) gene and the porcine telomerase reverse transcriptase (pTERT) gene were prepared as described previously (9). PLuM were infected with these lentiviral particles in the presence of 6 µg/mL of Polybrene (Nacalai Tesque), and IPLuM were eventually generated.
For the IPLuM subculturing, cells (1 × 10 6 ) were seeded in 90-mm NTC-dishes (Sumitomo Bakelite Co., Ltd.) and continuously passaged every 4-5 days. At each passage, the cells were detached using TrypLE express solution (Thermo Fisher Scientific, Waltham, MA), and the number of harvested cells was measured using a Bio-Rad TC20 automated cell counter. Immortalized porcine intestinal macrophages (IPIM) and immortalized porcine kidney-derived macrophages (IPKM), which are highly sensitive to the field ASFV isolates and celladapted ASFV isolate reported in our recent studies (6, 10), were also passaged in the same way.
PCR analysis
The successful transduction of the SV40LT and pTERT genes was confirmed by genomic DNA PCR. A forward primer sequence was designed within a lentiviral vector backbone, and reverse primer sequences were designed within the SV40LT or pTERT gene. The PCR products derived from the SV40LT and pTERT genes were 128 and 143 base pairs (bp) long, respectively. Genomic DNA was extracted from IPIM or IPLuM using NucleoSpin tissue kits (Takara Bio Inc., Shiga, Japan) and added as templates for PCR amplification using KOD FX DNA polymerase (Toyobo Co., Ltd., Osaka, Japan), according to the manufacturer's instructions. The PCR products were analyzed by polyacrylamide gel electrophoresis and visualized via GelGreen TM staining (Biotium, Inc., Fremont, CA).
Immunocytochemistry
IPLuM were seeded in 8-well chamber slides (Asahi Glass Co., Ltd., Tokyo, Japan) at a density of 1.5 × 10 5 cells/well. After being washed once with DPBS, the cells were fixed using 4% paraformaldehyde phosphate buffer solution (Nacalai Tesque), permeabilized with 1% Triton X-100/PBS solution and blocked with Blocking One Histo (Nacalai Tesque). Then, the cells were incubated with the primary antibodies for 1 h at room temperature, and the EnVision system (DAKO, Hamburg, Germany) was used to visualize antibody-antigen reactions, according to the manufacturer's procedure. Cell nuclei were counterstained with Mayer's hematoxylin solution (FUJIFILM Wako). The stained slides were examined under a microscope (Leica, Bensheim, Germany).
Flow cytometry
IPLuM or IPIM (1 × 10 6 ) were cultured in 90-mm NTCdishes for 3 days, before being treated with or without 1 µg/mL lipopolysaccharide (LPS) for 1 day. Then, the cells were detached using 0.02% ethylenediaminetetraacetic acid solution (Sigma) and re-suspended in DPBS (1 × 10 5 cells/100 µL) containing mouse monoclonal anti-CD163, anti-CD203a, or anti-MHC-II antibodies. The cells were further labeled with Alexa Fluor 488-conjugated anti-mouse IgG antibodies (Thermo Fisher Scientific), and the number of Alexa Fluor 488-labeled cells and their mean fluorescence intensity (MFI) were analyzed using the BD Accuri TM C6 Plus flow cytometer (BD Biosciences). The fluorescence of 40,000 cells was assessed in each experiment. The MFI data are expressed as mean ± standard error of the mean (SEM) values (n = 3), and the mean values were analyzed with one-way analysis of variance followed by Dunnett's post-hoc test using the software GraphPad InStat 3 for Windows. Statistical significance was set at p < 0.05.
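The group comparison described above (mean MFI values from triplicates, one-way ANOVA, p < 0.05) can be sketched as follows. The numbers are invented for illustration, and SciPy's `f_oneway` stands in for the full ANOVA-plus-Dunnett workflow used by the authors:

```python
from scipy.stats import f_oneway

# Hypothetical MFI triplicates (arbitrary units) for unstained cells
# and two stained conditions; these are NOT measured values.
unstained = [1200.0, 1150.0, 1230.0]
mhc_ii = [5400.0, 5100.0, 5600.0]
cd203a = [4800.0, 4500.0, 4950.0]

f_stat, p_value = f_oneway(unstained, mhc_ii, cd203a)
significant = p_value < 0.05   # significance threshold used in the paper
```

A Dunnett post-hoc comparison of each stained condition against the unstained control would follow; recent SciPy versions provide `scipy.stats.dunnett` for that step.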
Phagocytotic assay using pHrodo-labeled E. coli BioParticles

IPLuM (1×10 6 ) were cultured in 35-mm glass-bottomed dishes (Asahi Glass Co., Ltd.) containing growth medium. The next day, 20 µg/mL of pHrodo dye-conjugated E. coli BioParticles (Thermo Fisher Scientific) were added, and the cells were subjected to time-lapse recording at 37 • C for 5 h using an inverted fluorescence microscope (Olympus IX-81, Tokyo, Japan). The mean intensity of the fluorescence emitted by the pHrodo was quantified by analyzing the captured photographs using the software MetaMorph, version 7.6 (Molecular Devices, Downingtown, PA). The data are expressed as mean ± SEM values (n = 3).
ASFV growth assay
The ASFV field isolates Armenia07, Kenya05/Tk-1, and Espana75 were courteously provided by Dr. Sanchez-Vizcaino (Universidad Complutense de Madrid, Spain). These isolates were routinely maintained in PAM cell cultures and stored in aliquots at −80 • C until use. The Lisbon60 isolate was kindly provided by Dr. Genovesi (Plum Island Animal Disease Center, USA) and serially passaged in Vero cell cultures to establish the Vero cell-adapted Lisbon60V viruses.
To evaluate ASFV production, IPLuM and IPKM were seeded in T-25 tissue culture flasks (Sumitomo Bakelite Co., Ltd.) and inoculated with ASFV isolates at a multiplicity of infection (MOI) of 0.001. After the cells had been incubated for 1 h at 37 • C, the inoculum was removed, the cells were washed three times with DPBS, and then the growth medium was added. The culture supernatants were collected at 1, 2, 3, 4, and 5 days post-inoculation, and the viral titers of the IPKM cell cultures were examined based on cytopathic effects, as described in a previous study (6). Viral titers are expressed as TCID 50 /mL (the 50% tissue culture infectious dose per mL). All experiments with ASFV were performed at the Biosafety Level 3 facility of NIAH and were approved by the Japanese national authority (Permit No. 32).
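Endpoint titers expressed as TCID 50 /mL are commonly derived with the Spearman–Kärber method. The sketch below is a generic illustration with invented well counts; it is not the titration protocol of the cited study:

```python
def log10_tcid50(positives, wells_per_dilution, first_dilution_exp=1, d=1.0):
    """Spearman-Karber 50% endpoint estimate.
    positives: positive-well counts per dilution, starting at
    10^-first_dilution_exp and descending in steps of 10^-d.
    Assumes the series brackets the endpoint (first dilution fully
    positive, last fully negative). Returns log10 TCID50 per
    inoculated volume."""
    s = sum(p / wells_per_dilution for p in positives)
    return first_dilution_exp + d * (s - 0.5)

# Invented example: 8 wells per 10-fold dilution, 10^-1 .. 10^-8
titer = log10_tcid50([8, 8, 8, 8, 6, 2, 0, 0], wells_per_dilution=8)
# → 5.5, i.e. 10^5.5 TCID50 per inoculated volume
```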
Statistical analyses were performed using the KaleidaGraph software (Synergy Software, Reading, PA, USA). The Student's t-test was used for paired data, and differences associated with p-values < 0.05 were considered significant.
Characterization of IPLuM
In the mixed culture of porcine primary parenchymal lung cells, PLuM became loosely attached to the cell sheet that formed at the bottom of the T-75 tissue culture flasks. They were collected from the culture supernatant by centrifugation and isolated from the other types of cells based on their ability to adhere to NTC-dishes.
Then, the PLuM were immortalized by transfecting them with both SV40LT and pTERT genes using lentiviral vectors, and proliferating IPLuM were successfully established. They exhibited a typical macrophage-like morphology with ruffled membranes and cell processes ( Figure 1A). They were stably passaged for at least 54 population doublings up to 121 days ( Figure 1B). The transduction of the immortalizing genes was confirmed by genomic DNA PCR analysis ( Figure 1C).
Immunostaining data showed that the IPLuM were positive for DC/macrophage markers (Iba-1, CD172a, and CD203a; Supplementary Figures 1A,B). Some populations of IPLuM were also clearly positive for markers that are specific to distinct subsets of DC/macrophages (CD163, CD169, and MHC-II) after 3 days of culture (Supplementary Figures 1A,B). Of note, the IPLuM were negative for a monocyte marker, CD52 (Supplementary Figures 1A,B).
Flow cytometric analysis of CD163, MHC-II, and CD203a expression in IPLuM
The expression of CD163, MHC-II, and CD203a by IPLuM was quantitatively analyzed by flow cytometry and compared with that seen on IPIM. The IPLuM exhibited higher frequencies of CD163-positive and MHC-II-positive cells than the IPIM (Figures 2A,D). The MFI value of the MHC-II-positive cells was significantly higher than that of the unstained cells among the IPLuM ( Figure 2C), but not among the IPIM, in the absence of LPS ( Figure 2F). CD203a was constitutively expressed by both the IPLuM and IPIM (Figures 2A,C,D,F).
In the presence of LPS, both cell lines showed a marked increase in the frequency of MHC-II-positive cells, whereas the frequency of CD203a-positive cells was slightly reduced among both the IPLuM and IPIM (Figures 2B,C,E,F). The frequency of CD163-positive cells among IPLuM or IPIM was not affected by LPS treatment (Figures 2B,C,E,F).
Inflammatory responses and phagocytotic activity of IPLuM
To evaluate the inflammatory responses of the IPLuM, the effects of the bacterial cell wall components MDP and LPS were investigated. These stimuli elicited the production of the precursor forms of IL-1α (pro-IL-1α) and IL-1β (pro-IL-1β), which are known to be potent pro-inflammatory cytokines, in a dose-dependent manner (Figure 3A, first and second panels). In addition, LPS-induced secretion of the mature active form of IL-1β (mIL-1β) into the culture supernatant was detected (Figure 3A, third panel). Dose-dependent production of the precursor form of IL-18 (pro-IL-18), another pro-inflammatory cytokine belonging to the IL-1 family, was detected in MDP-treated IPLuM, while LPS-induced pro-IL-18 production peaked in the presence of 0.01 µg/mL LPS and decreased at higher LPS concentrations (Figure 3A, fourth panel). Higher concentrations of LPS also elicited LDH release into the culture supernatant, whereas MDP treatment did not affect its release (Figure 3A, sixth panel).
To evaluate the phagocytotic activity of the IPLuM, IPLuM that had been treated with pHrodo-labeled E. coli BioParticles were monitored by time-lapse fluorescence imaging of live cells. The mean intensity of pHrodo-derived fluorescence increased in a time-dependent manner, and almost all cells exhibited such fluorescence, which represented phagosomal maturation, after 4 h of incubation (Figure 3B).
Propagation of ASFV isolates in IPLuM
Finally, we examined whether IPLuM are susceptible to ASFV infection and support the intracellular replication of the virus. As shown in Figure 4, various ASFV strains, Armenia07, Kenya05/Tk-1, Espana75, and Lisbon60V, propagated in IPLuM cell cultures as efficiently as in IPKM cell cultures.
Discussion
Several immortalized PAM (IPAM) cell lines have been established, and their utility for in vitro cultures of viral pathogens has been examined (11, 12). In the present study, we demonstrated that PLuM preparations contain not only PAM, but also other types of MNP of porcine lung origin, including pulmonary intravascular macrophages and interstitial macrophages (5,13), and that IPLuM are phenotypically different from IPAM. In particular, pulmonary intravascular macrophages are abundant in the lungs of pigs and have been reported to support the growth of PRRSV at high titers (5). They have also been reported to be preferential target cells for ASFV infection (14,15). Further characterization of IPLuM will make it possible to develop unique in vitro models for studying host-pathogen interactions in porcine respiratory tissues.

CD163 is mainly expressed in macrophages and is used as a phenotypic marker of anti-inflammatory M2 subtypes (16). This notion is supported by the previous finding that CD163-positive porcine parenchymal lung cells predominantly exhibited macrophage phenotypes (4). Furthermore, PAM were reported to be CD163, MHC-II, and CD172a triple-positive cells (4). Although it is considered that IPLuM contain PAM subpopulations, multicolor immunofluorescence analysis will be required to confirm this.
As for other cell surface markers, CD52 is expected to be expressed at much higher levels on monocytes than on mature macrophages (17). IPLuM were shown to be negative for CD52 (Supplementary Figure 1), indicating that they are not of monocytic cell origin. In contrast, almost all of the IPLuM were positive for CD172a, which is expressed on cells of myeloid origin and is indicative of a DC and macrophage-like phenotype (18). In addition, higher expression of MHC-II, a well-known marker of mature DC (19), was detected in some populations of IPLuM. In this context, a previous extensive study demonstrated that MHC-II-positive, CD172a-positive, and CD163-low/intermediate-expressing porcine parenchymal lung cells exhibit monocyte-derived DC/macrophage phenotypes (4). Considering that CD163-low/intermediate-expressing cells were mainly found among IPLuM, it is likely that IPLuM include monocyte-derived DC/macrophage subpopulations.
Treatment with MDP and LPS increased the expression of the precursor forms of IL-1α, IL-1β, and IL-18, suggesting that it induced pro-inflammatory reactions by IPLuM. The LPS-induced secretion of the mature active form of IL-1β represents the expression of the functional porcine inflammasome system in IPLuM (20). Conversely, we noticed that IL-18 expression was reduced in IPLuM that had been stimulated with higher concentrations of LPS. LPS-induced cell damage accompanied by LDH release was also observed in the higher concentration range. It is speculated that cell damage may be linked to the reduced expression of IL-18 seen in LPS-stimulated IPLuM.
ASFV is a highly pathogenic virus with a marked tropism for cells of the monocyte-macrophage lineage (21). Similar to the IPKM and IPIM cell lines reported in our recent studies (6,10), IPLuM were confirmed to be susceptible to ASFV infection and to facilitate the propagation of the virus very efficiently. It has been reported that the titers of replicating ASFV isolates were much lower in IPAM than in primary PAM (22). Thus, this may suggest that the induction of immortalization alters the character of macrophages and reduces their susceptibility to ASFV. However, the immortalization protocols we established in a previous study (9) have minimal effects on macrophage functions, and the resultant cells remain susceptible to ASFV infection.
In conclusion, we produced a novel porcine lung-derived MNP cell line, IPLuM, which exhibited DC/macrophagelike phenotypes rather than monocyte phenotypes. This cell line is a useful model that reflects the porcine lung microenvironment in vitro. Furthermore, it may be useful for investigating the host-pathogen interactions that occur in porcine respiratory diseases.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by the animal care committee of the Institute of Agrobiological Sciences and the National Institute of Animal Health, National Agriculture and Food Research Organization. Written informed consent was obtained from the owners for the participation of their animals in this study.
Author contributions
TT and KM conceived, designed the experiments, and analyzed the data. TT, KM, and KH performed the experiments. TT, KM, SS, SH, TK, and HU contributed reagents/materials/analytical tools. TT, KM, and TK wrote the manuscript. All authors contributed to the article and approved the submitted version.
Funding
This study was conducted as part of the research project on Regulatory research projects for food safety, animal health and plant protection (JPJ008617. 20319736) funded by the Ministry of Agriculture, Forestry and Fisheries of Japan. | 2022-11-18T16:13:43.725Z | 2022-11-18T00:00:00.000 | {
"year": 2022,
"sha1": "feb938c95d1d921f32324d45cf32837521823e1a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "feb938c95d1d921f32324d45cf32837521823e1a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
267997634 | pes2o/s2orc | v3-fos-license | Classification in Early Fire Detection Using Multi-Sensor Nodes—A Transfer Learning Approach
Effective early fire detection is crucial for preventing damage to people and buildings, especially in fire-prone historic structures. However, due to the infrequent occurrence of fire events throughout a building’s lifespan, real-world data for training models are often sparse. In this study, we applied feature representation transfer and instance transfer in the context of early fire detection using multi-sensor nodes. The goal was to investigate whether training data from a small-scale setup (source domain) can be used to identify various incipient fire scenarios in their early stages within a full-scale test room (target domain). In a first step, we employed Linear Discriminant Analysis (LDA) to create a new feature space solely based on the source domain data and predicted four different fire types (smoldering wood, smoldering cotton, smoldering cable and candle fire) in the target domain with a classification rate up to 69% and a Cohen’s Kappa of 0.58. Notably, lower classification performance was observed for sensor node positions close to the wall in the full-scale test room. In a second experiment, we applied the TrAdaBoost algorithm as a common instance transfer technique to adapt the model to the target domain, assuming that sparse information from the target domain is available. Boosting the data from 1% to 30% was utilized for individual sensor node positions in the target domain to adapt the model to the target domain. We found that additional boosting improved the classification performance (average classification rate of 73% and an average Cohen’s Kappa of 0.63). However, it was noted that excessively boosting the data could lead to overfitting to a specific sensor node position in the target domain, resulting in a reduction in the overall classification performance.
In addition to the temporal and robustness aspects of early fire detection, the ability to differentiate between different types of fire scenarios can provide additional information to laypersons or first responders during alarms [14]. This can support effective identification and intervention, especially in the early stages of ongoing incipient fires where combustion products are barely visible [15].
Previous research has demonstrated the effectiveness of employing multi-sensor approaches to distinguish various fire materials based on their unique "odor prints" [16][17][18].
However, these studies faced limitations in their training and validation datasets. Some were confined to a single room setting [19], while others were constrained to a binary output (fire/no fire) when utilizing data from different environments [20,21].
Generally, fire events are infrequent occurrences throughout a building's lifespan. The scarcity of real event data poses challenges and necessitates reliance on data obtained from experimental setups or simulations [22]. However, conducting such (large-scale) experiments is expensive, and the availability of large-scale test rooms is very limited [21]. Given these constraints, there is an urgent need to investigate the effective transfer of data from small-scale laboratory setups to real room applications.
In this work, we address the research question (RQ) of whether multi-sensor data generated in a small-scale laboratory setup can be used to identify various incipient fire scenarios in a large-scale room setup.
To our knowledge, existing transfer learning methodologies have not been employed in the field of early fire detection using multi-sensor nodes. Furthermore, it remains uncertain whether, in general, the differentiation of various incipient fire scenarios during their initial stages is achievable based on multi-sensor data.
In this study, we employed two primary methodologies from the transfer learning research domain. We leveraged both feature representation transfer and instance transfer in order to identify different incipient fire scenarios in a real EN54 standard test room, relying solely on training data generated in a small-scale laboratory setup. Subsequently, we assessed the classifier's performance at various sensor node positions within a large-scale test room.
The novelty of this work lies in its approach to distinguish between various incipient fire scenarios in their initial phases using solely training data from a small-scale setup. Prior research has typically been confined to a single experimental setup for both model construction and testing, or it has been restricted to binary model prediction (fire/no fire), simplifying the classification problem and incurring high data generation costs. This study addresses two primary limitations in the existing literature. Firstly, we present a comprehensive workflow for cost-effective data acquisition and model development in the field of early fire detection employing multi-sensor nodes. Secondly, we apply this workflow to a multi-classification problem, for which we differentiate between four distinct fire scenarios in their earliest stages. Previous work has predominantly focused on simpler binary classification problems and more advanced fire scenarios where detection is generally more straightforward. The proposed approach provides valuable additional information about the nature of an ongoing incipient fire event, enabling first responders or firefighters to make more informed decisions, such as formulating intervention recommendations or enhancing situational awareness.
Related Work
Prior research has explored various methodologies for fire detection and identification using multi-sensor data.
Solórzano et al. [21] achieved a classification rate of approximately 68% using training and test data from normative test fires conducted in a standard EN-54 test room. The authors stated that the classification rate could be increased to 96% by incorporating additional training and test data from laboratory experiments. In their recent publication [20], Solórzano et al. corroborated these findings, reporting a classification rate ranging from 52% to 70% (or 88% with additional training and test data generated in a small-scale setup).
However, in both studies, the model output was confined to a binary prediction (fire/no fire), leading to a significantly simpler classification problem compared to our study. Additionally, the test data consistently encompassed data from the same room environment that had already been utilized for training the model.
Other studies, as summarized in [3], were also primarily constrained to a binary decision problem (fire/no fire) and/or confined to a single experimental environment.
Milke et al. [23] defined hard rules utilizing a sensor array comprising temperature, light obscuration, CO 2 , MOX and O 2 sensors in order to distinguish between "flaming fire", "smoldering fire" and "nuisance". The authors attained a classification rate of 90% and could enhance the classification rate up to 97% by employing a three-layer neural network as the model instead of hard rules. However, the training and test data were derived from experiments conducted in the same test room. Ni et al. [24] constructed a classification model to categorize various wire insulation materials (PVC, Teflon, Kapton and silicone rubber) based on the volatiles released during electrical overload. The authors employed dimension reduction (PCA) and a K-NN classifier as the classification model and achieved a classification rate of up to 82% for four different classes. However, the training and test data were derived from the same experimental setup using the leave-one-out method.
Experiments in prior studies primarily utilized standard test fires, resulting in considerably higher emissions and, consequently, clearer sensor signals. In contrast, our study encompasses the initial phases of ongoing incipient fires within the experimental setup. Moreover, previous studies often focused on binary or ternary classification problems, with Ni et al. [24] being a notable exception. Another limitation in previous research is the generation of training and test data within the same experimental environment, which poses a constraint for real-world applications. The novelty of our work lies in utilizing data from two distinct experimental environments.
Early Fire Indicators
Previous studies have employed various combinations of multi-sensor measurements for early fire detection. Solórzano et al. [20] utilized hydrogen (H 2 ), methane (CH 4 ), nitrogen oxides (NO x ) and volatile organic compounds (VOCs) in a multi-sensor array. The authors emphasized the significance of CO and VOCs as early fire indicators due to their substantial emissions during incipient fire scenarios such as smoldering fires. Nazir et al. [25] corroborated these findings by including air temperature, humidity, CO 2 and ammonia (NH 3 ) in their study.
Krüger et al. [26] and Hayashi et al. [27] identified substantial releases of H 2 during the smoldering process of various polymeric materials commonly present in households, such as wood, PUR foam and PE. The authors concluded that H 2 can serve as an early fire indicator that precedes the substantial emissions of CO and smoke.
Gutmacher et al. [28] corroborated these findings, emphasizing that CO and H 2 are the most crucial gases for detecting the early stages of smoldering fires.
In our previous study [29], we validated these observations. We examined particulate matter (PM), VOCs, CO, CO 2 , H 2 , ultraviolet radiation (UV), air temperature and humidity as early fire indicators during different incipient fires conducted in a standard EN 54 test room. By varying the distance between the sensor node and the fire source, we identified five significant early fire indicators: H 2 , CO, PM0.5 (PM < 0.5 µm), PM1.0 (0.5 µm < PM < 1.0 µm) and VOC.
Transfer Learning
Weiss et al. [30] emphasized the challenges in obtaining training and test data from the same domain for real-world machine learning applications, particularly in cases where data collection is impractical due to high costs or difficulty. This challenge is particularly relevant in the context of (early) fire detection using multi-sensor nodes, where generating data in real room setups is prohibitively expensive and the availability of fire test rooms is extremely limited. The authors emphasize the importance of employing less expensive training data from a different domain for model building. This concept is known as transfer learning.
Zhuang et al. [31] defined transfer learning as the enhancement of a target learner using knowledge from a "[...] different but related" [31] source domain. The primary objective is to decrease reliance on (expensive) data from the target domain.
According to Kim et al. [32], transfer learning aims to learn a target predictive function f T (•) from pairs {x i , y i } generated in a source domain D S , where x i ∈ X and y i ∈ Y. In the subsequent work, the notation provided by Kim et al. [32] given in Table 1 is adopted, with the index S representing the source domain D S and the index T representing the target domain D T .
According to Cook et al. [33], a certain relationship must exist between D S and D T in order to be able to transfer knowledge from D S to D T . In our case, the feature space in both D S and D T is essentially the same (the sensors and selected sensor measurements are identical), thus satisfying Equation (1): X S = X T .
However, the scaling and rotation of the feature spaces X S and X T differ slightly due to the distinct room settings.
In these feature spaces X S and X T , the marginal probability distribution P(X ) is not equal because the "activity" in D S and D T , respectively, is not exactly the same (the experiments in D S are downscaled; see Section 3.2). This assumption is given in Equation (2): P(X S ) ≠ P(X T ).
In this work, the label space Y in D S and D T is identical, as we conducted the same types of fire experiments in both domains (see Section 3.2), as given in Equation (3): Y S = Y T .
As the objective prediction function f (•) is defined as f (•) = P(y|x) and P(X ) varies between D S and D T (see Equation (2)), f (•) differs for D S and D T , as shown in Equation (4): f S (•) ≠ f T (•).
This finally results in a different task T to learn in each domain. Cook et al. [33] defined two primary types of transfer learning approaches to address such disparities between D S and D T .
The first approach is feature representation transfer, which aims to mitigate the differences between the feature spaces X S and X T . According to Cook et al. [33], feature representation transfer is typically achieved by mapping both X S and X T to a new feature space X through functions g : X S → X and f : X T → X . Dimension reduction is a commonly employed technique in this context [33].
The second transfer learning approach is instance transfer, where a small amount of data from the target domain is utilized to weight instances from the source domain. Since this approach works particularly well under the condition of equivalent feature spaces X S and X T , instance transfer is typically applied after feature representation transfer [33]. A common method for instance transfer is the TrAdaBoost algorithm proposed by Dai [34], which has already been employed in combination with an SVM classifier to categorize atmospheric dust aerosol particles in a transfer learning application [30].
Sensor Nodes
We employed multi-sensor nodes for data collection, as illustrated in Figure 1. Each sensor node was equipped with sensors, including an SPS30, SGP40, SHT4x, CO/MF-1000, UST6xxx and SCD40, that measured parameters such as PM, VOC, relative air temperature, air humidity, CO, H 2 and CO 2 . The sensors on each sensor node were controlled by a microcontroller (ESP32). Communication between the microcontroller and the broker/server (Raspberry Pi) was via WiFi using the MQTT protocol. The microcontroller sent sensor data in JSON format to the Raspberry Pi, where a Python script decoded the information and recorded it in an Influx time series database. The database automatically assigned a unique UTC timestamp to each measurement vector.
For real-time monitoring during the experiments, a Grafana dashboard was utilized. Data were exported from the Influx time series database as a CSV file using a Python script. Each sensor node in the network was equipped with the sensors listed in Table 2.
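The decoding step of the described pipeline (JSON message in, timestamped record out) can be sketched as follows. This is a minimal illustration, not the authors' script: the JSON field names and the node identifier are hypothetical, and the MQTT transport and InfluxDB write are omitted so the sketch stays self-contained.

```python
import json
from datetime import datetime, timezone

# Hypothetical JSON keys -- the paper does not publish the exact message schema.
SENSOR_FIELDS = ["pm05", "pm10", "voc", "co", "h2", "co2", "temp", "rh"]

def decode_payload(payload: str, node_id: str) -> dict:
    """Decode one MQTT JSON message into a measurement record.

    Mirrors the described flow: the ESP32 publishes JSON, a Python script
    decodes it, and a UTC timestamp is attached before database insertion.
    """
    raw = json.loads(payload)
    record = {f: float(raw[f]) for f in SENSOR_FIELDS if f in raw}
    record["node"] = node_id
    # The time series database assigns a unique UTC timestamp per vector;
    # here we attach one explicitly for illustration.
    record["ts"] = datetime.now(timezone.utc).isoformat()
    return record

msg = '{"co": 2.1, "h2": 0.4, "voc": 120, "pm05": 35.0}'
rec = decode_payload(msg, "sensornode0008")
```

In the real setup, `rec` would be written to the Influx time series database and visualized on the Grafana dashboard.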
A consistent sampling rate of one sample per 10 s was maintained throughout all experiments. This decision was influenced by the characteristics of the sensors in use. Specifically, the CO/MF-1000 sensor had a T90 response time of approximately 25 s: capturing 90% of the gas concentration within this time frame [35]. Likewise, the UST6xxx sensor relied on internal temperature cycles with a 10 s interval for H 2 detection [36]. Hence, opting for a sampling rate exceeding one sample per 10 s would not yield any additional information.
To minimize cross-sensitivity between CH 4 , CO and alcohol, we selected the UST6xxx sensor containing the GGS 6530 T gas sensing element. The UST6xxx exhibits nearly no response to CH 4 exposure up to 1000 vppm, and it sustains this characteristic at a heating temperature of 475 °C [36].
Experiments and Datasets
Following the idea of transfer learning discussed in Section 2.2, we used two experimental setups in order to represent the source domain D S and the target domain D T . The two setups are shown in Figure 2. A (2 × 0.6 × 0.8) m 3 test chamber served as the small-scale setup (source domain D S ), and we exposed six sensor nodes to various fire loads using cotton, cable insulation, candle wax and wood (see Figure 2, left). This experimental setup was used to generate the source domain dataset (ds_dataset).
An unventilated standard EN54 test room with dimensions (7 × 10 × 4) m 3 was used as the large-scale setup (target domain D T ) to generate the target domain dataset (dt_dataset). The fire source was positioned in the center of the room. Nine distributed sensor nodes were placed around the source, as shown in Figure 3.
In both domains, four distinct fire types (wood, cable, lunt and candle fires) were executed. Table 3 provides a summary of the burning material mass, repetitions, stages and ignition source type. A more comprehensive description of the experiments conducted in the target domain D T is given in [29].
The experiments conducted in D S represent scaled-down setups of the experiments performed in D T . For equivalence, we employed identical materials in both domains but adjusted the mass of the burning material and the combustion process as follows.
To represent the smoldering wood fire, we used small pieces of toothpick. The toothpicks were standardized, and the mass of one piece of toothpick was 0.04 g. A DC heating coil (12 A) was used as the ignition source in order to ensure non-flaming combustion. The heating coil was a 1 mm-thick constantan wire twisted into a spiral consisting of 15 windings and an inner diameter of 100 mm.
The cable fire was simulated using small pieces (0.04 g) of the same cable insulation material used in D T . As with the wood scenario, the 12 A DC heating coil was used as the ignition source.
The lunt fire was scaled down equivalently by using small pieces (0.04 g) of the lunts used in D T . The ignition source was again the 12 A DC heating coil.
Downscaling of the candle fire was not trivial, as the wax fire produces high flames even with smaller amounts of wax material. To control the size of the flame, we used small pieces of cotton that were soaked in wax. The cotton acted as a wick. Its surface size served as the controlling parameter for the size of the flame. As depicted in Figure 2, variations were observed in the temporal increase of sensor measurements in D S and D T . This aligns with the findings reported by Solórzano et al. [21].
This contrast can be attributed to two primary factors. Firstly, there is a significant difference in the propagation behavior in D T with respect to D S due to the size of the room and the ventilation conditions. In D S , the combustion products exhibit nearly uniform distribution due to static ventilation and the small room size. In contrast, the propagation behavior in the non-ventilated D T is predominantly influenced by agglomeration and gravitational settling [29].
Secondly, the combustion undergoes variations over time as a consequence of the downscaling of the sample size in D S . The sub-processes of the combustion process, including heating, release of pyrolysis gases, smoldering and glowing, take place at considerably shorter time intervals in D S due to the small sample sizes.
We simulated various intensity levels that may occur in D T by accumulating the combustion products from multiple experimental stages in the test chamber in D S . Consequently, we excluded the temporal component from our data and focused on the absolute values of the sensor measurements in the transfer learning approach, as described in more detail in Section 3.3.
The resulting datasets, ds_dataset and dt_dataset, underwent a data pre-processing step to achieve balance by randomly down-sampling to the minority class in order to avoid implicit class weights. After data balancing, the ds_dataset (training dataset) contained 770 datapoints per class and the dt_dataset (validation and boost dataset) contained 432 datapoints per class and sensor node position.
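The balancing step described above (random down-sampling of every class to the minority class size) can be sketched as follows. The toy class counts are illustrative only; the real datasets end up with 770 (ds_dataset) and 432 (dt_dataset) datapoints per class.

```python
import numpy as np

def balance_by_downsampling(X, y, seed=42):
    """Randomly down-sample every class to the minority class size
    so that no implicit class weights are introduced during training."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()  # preserve the original ordering of the kept datapoints
    return X[keep], y[keep]

# Toy example: 3 classes with 10, 4 and 7 datapoints
y = np.array([0] * 10 + [1] * 4 + [2] * 7)
X = np.arange(len(y)).reshape(-1, 1)
Xb, yb = balance_by_downsampling(X, y)  # every class reduced to 4 datapoints
```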
Methodology
As proposed by Cook et al. [33], we applied both feature representation transfer and instance transfer in our study. The aim was to investigate the suitability of these two methods for classification in early fire detection, considering the challenge of limited or no access to extensive data from large-scale experiments during model development. The overall workflow of data generation and processing is illustrated in Figure 4.
Feature Representation Transfer
Linear Discriminant Analysis (LDA) was employed as a supervised dimension reduction method in the feature representation transfer step. The LDA aimed to extract the reduced features that are most relevant for distinguishing between fire scenarios based on data from D S . As outlined in Section 2.1, the original input features for the LDA comprised CO, H 2 , VOC and PM (PM0.5 and PM1.0).
Both the LDA and the scaler (min-max scaler with bounds [0, 1]) were fitted on the data from D S . The resulting transformation parameters were then utilized to transform the data in both D S and D T into the new feature space. Subsequently, a support vector machine (SVM) classifier was trained on the transformed data from D S , and its performance was validated at various sensor node positions in D T .
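The feature representation transfer step can be sketched with scikit-learn: scaler and LDA are fitted only on source-domain data, and the resulting transformation is applied to both domains before the SVM is trained. The data below are synthetic stand-ins (with an artificial scaling/shift of the target feature space), not the actual fire datasets, and the SVM hyperparameters are left at their defaults for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
k, d = 4, 5  # 4 fire classes; 5 indicators (H2, CO, PM0.5, PM1.0, VOC)
means = rng.normal(scale=4.0, size=(k, d))

# Synthetic source-domain data (stand-in for ds_dataset)
y_s = rng.integers(0, k, 400)
X_s = means[y_s] + rng.normal(size=(400, d))

# Synthetic target-domain data: same label space, slightly scaled/shifted features
y_t = rng.integers(0, k, 200)
X_t = 1.1 * (means[y_t] + rng.normal(size=(200, d))) + 0.2

# Scaler and LDA are fitted ONLY on the source domain ...
scaler = MinMaxScaler().fit(X_s)
lda = LinearDiscriminantAnalysis(n_components=k - 1).fit(scaler.transform(X_s), y_s)

# ... and the fitted transformation is applied to both domains
Z_s = lda.transform(scaler.transform(X_s))
Z_t = lda.transform(scaler.transform(X_t))

clf = SVC().fit(Z_s, y_s)   # trained on transformed source data only
acc = clf.score(Z_t, y_t)   # validated on transformed target data
```

The key design point is that no target-domain statistics leak into the scaler or the LDA, mirroring the assumption that no large-scale data are available at model-building time.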
Instance Transfer
In addition to feature representation transfer, we implemented instance transfer using the TrAdaBoost algorithm presented in [34]. TrAdaBoost is a supervised domain adaptation method that utilizes limited data from D T to adjust a pre-trained model to new data: specifically, the target domain D T in our case [34]. The fundamental concept of TrAdaBoost is to adapt the knowledge learned from D S and apply it to a slightly different D T , assuming that labeled data from D T are generally rare.
By definition, this approach requires the availability of limited instances from the target domain, which are employed to re-weight the training instances from D S .
In practical terms, the target domain data for TrAdaBoost could be sourced from an actual fire event occurring in D T during the operation of the fire detection system or from a small number of large-scale experiments. Consequently, this method serves as a means to adapt the fundamental model trained on laboratory data to real-world application environments.
The objective of this study was to investigate how the performance of a classifier trained solely on laboratory data (ds_dataset) can be enhanced by incorporating small amounts of available data from D T . To achieve this, we utilized from 1% up to 30% of the dt_dataset to re-weight the source domain instances using TrAdaBoost. Higher proportions of D T instances were employed to identify overfitting boundaries during the instance transfer step.
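The core TrAdaBoost idea (misclassified source instances are down-weighted, misclassified target instances are up-weighted, and prediction uses the later half of the learners) can be sketched as follows. This is a simplified illustration, not the authors' implementation: a decision tree stands in for the base learner, the data are synthetic, and all constants follow the original TrAdaBoost formulation only loosely.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, X_test, n_iter=10):
    """Minimal TrAdaBoost sketch: re-weight source instances using a small
    labeled target set, then vote over the later half of the learners."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    n = len(X_src)
    w = np.ones(len(X))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_iter))
    learners = []
    for _ in range(n_iter):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
            X, y, sample_weight=p)
        miss = (h.predict(X) != y).astype(float)
        # Weighted error on the target-domain portion only
        eps = float((p[n:] * miss[n:]).sum() / p[n:].sum())
        eps = min(max(eps, 1e-6), 0.49)
        beta_t = eps / (1.0 - eps)
        w[:n] *= beta_src ** miss[:n]   # down-weight misclassified source instances
        w[n:] *= beta_t ** -miss[n:]    # up-weight misclassified target instances
        learners.append((h, np.log(1.0 / beta_t)))
    classes = np.unique(y)
    scores = np.zeros((len(X_test), len(classes)))
    for h, alpha in learners[n_iter // 2:]:  # vote over the later learners
        pred = h.predict(X_test)
        for ci, c in enumerate(classes):
            scores[:, ci] += alpha * (pred == c)
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(1)
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
# Source domain: a shifted version of the target distribution
y_s = rng.integers(0, 2, 300)
X_s = mu[y_s] + 0.5 + rng.normal(size=(300, 2))
# Small labeled boost set and a test set from the target domain
y_b = rng.integers(0, 2, 20)
X_b = mu[y_b] + rng.normal(size=(20, 2))
y_te = rng.integers(0, 2, 100)
X_te = mu[y_te] + rng.normal(size=(100, 2))

pred = tradaboost(X_s, y_s, X_b, y_b, X_te)
acc = (pred == y_te).mean()
```

Varying the size of the boost set (here 20 points) corresponds to the 1% to 30% boosting amounts examined in the study.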
Results
This section is structured as follows. First, Section 4.1 presents the performance of the boosted model independent of the sensor node position in D T . This means that instance transfer (boosting) was executed using data from the same sensor node position in D T as utilized for validation.
In Section 4.2, the boosting data were selected from a fixed sensor node position, and the model's performance was subsequently validated across all sensor node positions in D T to identify potential overfitting effects based on the amount of boosting data taken from a specific sensor node position.
Beyond the performance assessments using various boosting strategies, the model was validated without any boosting; this served as the baseline for performance. This implies that only the feature representation transfer described in Section 3.3.1 was performed before applying the model to the D T data. This baseline performance facilitates the assessment of performance improvement when employing additional boosting strategies.
The Manhattan distance between the sensor node and the fire source in D T was employed to arrange different sensor node positions along the x-axis.
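The Manhattan (L1) distance used to order the sensor node positions can be computed as follows. The coordinates are illustrative only; the actual node positions are given in Figure 3.

```python
def manhattan(p, q):
    """L1 distance between a sensor node and the fire source (room coordinates in m)."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Illustrative coordinates only, not the positions from Figure 3
d = manhattan((1.0, 2.0), (3.0, 3.0))  # 3.0 m
```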
To enable performance comparisons across different models, the classification rate (average accuracy) was used as our primary performance metric. According to [37], the average accuracy for a multi-class classification problem is defined as in Equation (6), i.e., the mean over all l classes of (tp i + tn i )/(tp i + fn i + fp i + tn i ).
Since we considered a balanced dataset for model validation, the classification rate is a suitable performance measure [37].
To compare the performance of the baseline model (non-boosted, only trained on the ds_dataset) with a model that randomly assigns labels based on the given class distribution, we utilized Cohen's κ as an additional performance metric, as suggested by Artstein et al. [38]. Cohen's κ is a scaled value in the range of [−1, 1] that evaluates the model's classification accuracy against the accuracy achieved by random label assignment according to a specified class distribution [38].
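Both performance metrics can be sketched on a toy label vector: the average accuracy of Equation (6) is computed per class from the confusion matrix, and Cohen's κ is taken from scikit-learn. The labels below are illustrative, not results from the fire experiments.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def average_accuracy(y_true, y_pred):
    """Average per-class accuracy over all l classes (Equation (6)):
    mean of (tp_i + tn_i) / (tp_i + fn_i + fp_i + tn_i)."""
    cm = confusion_matrix(y_true, y_pred)
    N, l = cm.sum(), cm.shape[0]
    accs = []
    for i in range(l):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp
        fn = cm[i, :].sum() - tp
        tn = N - tp - fp - fn
        accs.append((tp + tn) / N)  # per-class denominator equals N
    return float(np.mean(accs))

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
acc = average_accuracy(y_true, y_pred)      # 14/18 ≈ 0.778
kappa = cohen_kappa_score(y_true, y_pred)   # 0.5
```

On a balanced validation set, as used in this study, the classification rate is a meaningful aggregate, while κ additionally discounts chance agreement.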
Classification Performance Independent of the Node Position
Table 4 shows the results of the baseline model only trained on the ds_dataset in terms of precision, recall, F1 score, classification rate and Cohen's κ. The baseline model exhibits its lowest performance at sensor node positions close to the wall, specifically at sensor node 13 (global minimum, classification rate of 53%) and sensor node 14 (local minimum, classification rate of 58%), in the dt_dataset, as shown in Table 4. This implies that the most significant difference between our laboratory setup (D S ) and D T occurs at positions close to the wall in D T .
To adapt the model derived from D S , additional model boosting was performed. Initially, boosting was performed assuming knowledge about the distance between the sensor node and the fire source in D T .
Figure 5 shows the classification rate as a function of the sensor node position used for boosting and testing. The different lines represent the amount of data used for boosting (1% to 30%) from the test position in D T . The "no_boost" line represents the classification performance of the baseline model.
It can be seen from Figure 5 that the classification rate of the non-boosted baseline model ranges from a global minimum (classification rate of 53% at sensor node position 13) up to a global maximum at sensor node position 08 (classification rate of 69%; see also Table 4). There is a continuous decrease in the classification rate from the lowest Manhattan distance of 3.0 m (sensor node position 08) to a Manhattan distance of 7.5 m (sensor node position 14). The classification rate then reaches a local minimum of 58% at sensor node position 14. A local maximum with a 66% classification rate can be observed at sensor node positions 10 and 15. Moving to the next-higher Manhattan distance (sensor node position 13), the classification rate reaches a global minimum of 53%. Sensor node position 12 (highest Manhattan distance to the source) shows a classification rate of 61%, which is 3% more than the local minimum at sensor node position 14.
Looking at the different boosting curves (0.01 to 0.3) in Figure 5, it is evident that additional information from D T used for model boosting cannot completely offset the local minima in the classification rate at sensor node positions 13 and 14 close to the wall. The previously described trend in the classification rate remains essentially the same. However, differences in the classification rate between different sensor node positions (except for sensor nodes 13 and 14 close to the wall) can be mitigated by using additional boosting, particularly for boost amounts up to 5%. Although higher boost amounts (from 10% to 30%) lead to a global maximum of the classification rate (sensor node position 09, 20% boosting data), the differences in the classification rates between different sensor node positions increase compared to boost amounts of around 5%. Nevertheless, the differences between the sensor node positions (except for the positions close to the wall) are increasingly compensated for by additional boosting. This implies that the model trained only based on D S can be adapted to a new environment with small amounts of available data from D T .
Figure 6 illustrates the model's performance for sensor node position 8 in D T for the non-boosted model (left) and the boosted model (boosted with 5% of the data from sensor node 8). It can be seen from Figure 6 that the baseline model only trained on data from D S primarily misclassifies between the candle scenario and the wood scenario. This misclassification can be attributed to the experimental procedure. In D S , we employed small pieces of cellular cloth soaked with candle wax to represent the candle wax fire in a small-scale test. However, when the wax was fully burned, the cellular cloth (wick) started to glow and smolder at the end of each experiment. Since this combustion process closely resembles the glowing process of wood, it likely led to misclassification between the wood and candle fires.
Comparing the wick size to the mass of wax in D S and D T , the ratio is considerably higher in D S than in D T . As discussed in Section 3.2, scaling down a wax fire is challenging. To regulate the flame size of the wax fire, we had to use a much higher ratio of wick volume to wax volume. The volume of the wick compared to the volume of the burning wax was negligible in D T . Consequently, fewer smoldering or glowing artifacts were observed in the dt_dataset than in the ds_dataset, resulting in the aforementioned misclassification.
This misclassification was evident at other test sensor node positions in D T . Figure 6 (right) illustrates that the misclassification can be minimized by employing additional boosting.
Table 5 provides an overview of the average model performance across all sensor node positions in D T , represented by the mean classification rate and the mean Cohen's κ for different boosting scenarios (ranging from no boosting to 30% boosting data). It can be seen from Table 5 that the mean model performance (mean classification rate and mean Cohen's κ) generally improves with model boosting. The model performance increases with an increasing amount of boosting data and reaches its maximum (87% mean classification rate and a mean Cohen's κ of 0.83) at 5% (up to 10%) of boosting data. The Cohen's κ ranges from 0.49 ("moderate") up to 0.83 ("perfect") according to Landis et al. [39].
As the amount of boosting data increases further, the average model performance decreases, although it remains higher than the model performance without boosting. This phenomenon has already been discussed based on Figure 5. Even though higher amounts of boosting data lead to global maxima for the classification rate, the spread of the classification rate across positions increases, causing the mean classification rate to decrease. In summary, we found higher classification rates with boosting compared to the no-boost baseline model. The sensor node positions close to the wall show a local minimum of the classification rate regardless of the model used (no boost vs. different amounts of boosting data).
Classification Performance Dependent of the Sensor Node Position
The results presented in Section 4.1 represent an optimal boosting scenario with respect to the distance between the sensor node and the fire source. The data used for boosting were derived from the same sensor node position used for testing, without utilizing the validation data already employed for boosting from the dt_dataset. However, in a real-world application, the distance between the sensor node and the fire source will be unknown. To investigate this scenario, we utilized boosting data from one fixed sensor node position and evaluated the model performance across all sensor node positions. The results are shown in Figure 7.
The red line in Figure 7 represents the baseline model performance without model boosting. The sub-figures are labeled based on the sensor node position used for boosting. We utilized the same amount of boosting data as in Section 4.1 (1% to 30%).
We observed that the global maximum of the classification rate was reached when the test sensor node position and the sensor node position used for boosting were the same (e.g., see sub-figure "sensornode0009" at the Manhattan distance of sensor node position 09 in Figure 7). However, it can be seen from Figure 7 that the model performance at test sensor node positions different from the boosting sensor node achieves higher classification rates compared to the baseline model (no boosting). This holds true for boosting data amounts up to 5%, while higher amounts of boosting data from a particular sensor node position lead to overfitting to the boosting sensor node position. This effect is visible in Figure 7 when the classification rate of the boosted model falls below the baseline classification rate.
Another observation from Figure 7 is that there is still a local minimum in the classification rate at sensor node positions 13 and 14 (positions close to the wall). However, the difference between D S and D T can be compensated for (see sub-figures "sensornode0013" and "sensornode0014" in Figure 7) if data from these sensor node positions are used for model boosting.
Table 6 shows the mean classification rates and the mean Cohen's κ values over all sensor node positions used for testing and boosting as a function of the amount of boosting data (0-30%). It is essential to emphasize that the mean performance measure represents the static boost scenario (boosting data were taken from only one sensor node position in D T , and the model was then tested on all sensor node positions in D T ).
Table 6 indicates that the mean classification rate, as well as the mean Cohen's κ, is significantly higher when using additional boosting compared to the cases without any boosting (no_boost). Furthermore, it can be observed that the performance increases with the amount of boosting data used, up to the maximum performance at 5%.
At higher amounts of boosting data, the average performance decreases again due to increased overfitting to individual sensor node positions. At 30% boosting, the average performance in terms of mean classification rate and mean Cohen's κ is comparable to the average performance without boosting.
Discussion
As highlighted by Burgués et al. [40], a common challenge in machine-learning-based prediction lies in the limitations of examples available in the training data.
In this study, we considered four different incipient fire scenarios that have been identified as the main initial fire sources in historic and cultural buildings in Germany [41]. However, the model's predictive accuracy may be compromised in the presence of different or additional burning materials (or superpositions of different materials) that have not been accounted for in this study.
Nevertheless, our research demonstrates the feasibility of classifying various incipient fire scenarios using multi-sensor training data from a small-scale setup. This opens up the possibility of generating cost-effective and extensive data for other burning materials that encompass different combustion conditions and/or superpositions with different nuisance scenarios (such as deodorant, dust, etc.).
Burgués et al. [40] also highlighted the model's limitation to a specific range of tested (in their case, odor) concentrations. In our study, we focused on early phases of incipient fires, which are primarily characterized by the combustion process and the masses of burning material relative to the room volume. From our current results, we cannot extrapolate the model performance to more advanced stages of the conducted fire scenarios. Different combustion conditions result in the distinct release of combustion products over time. However, the experimental setup presented for D S enables the generation of data for these diverse combustion conditions, including those of more advanced courses of various fire scenarios.
Another consideration is that we did not include test positions where the sensor node is positioned very close to the fire source in D T . This might lead to significantly different sensor signals due to sensor override. In such cases, deterioration in model performance would be expected.
We found that the baseline model trained only on D S data tends to misclassify between the candle and the wood fire scenarios. As discussed in Section 4.1, this misclassification can be attributed to the experimental setup used for the candle fire in D S . In further experiments, it would be advisable to alter the wick material to a non-combustible substance to minimize glow and smolder effects. Alternatively, stopping the experiment before complete wax combustion could prevent glow and smolder artifacts in the data. In the small-scale setup (D S ), we used a fan to transport combustion products from the combustion chamber into the test chamber where the sensor nodes were located. The uncertainty regarding when combustion products from the smoldering wick entered the test chamber makes it challenging to remove these artifacts from the D S dataset afterward.
When comparing the classification performance of our study with previous research, it is noteworthy that our non-boosted model already achieves comparable results (classification rate up to 69%) compared to studies such as Solórzano et al. [20] (52% to 70%). With additional boosting, the model performance can be further increased up to 87%, yielding results comparable to [20] with additional laboratory data (88%), or only slightly lower performance than in [23]. Another limitation to comparing our model's performance with previous studies is our consideration of performance across various sensor node positions. It is evident that positions with lower classification rates will adversely affect the average model performance. The previous literature did not account for position-dependent performance measures, which hold great relevance in practical applications. Hence, it can be assumed that the performance comparison of our model with the previous work leans towards the conservative side.
Drawing from the outcomes presented in this study, we posit that the introduced approach, which combines transfer learning methods with multi-sensor data, is promising and highly relevant for the practical application of data-driven models relying on multi-sensor data. For instance, cost-effective generation of data for various fire materials or combinations can be accomplished on a small laboratory scale, including overlays with nuisance variables, to facilitate the early detection of fires in real room environments.
The demonstrated approach can be expanded to diverse application domains. For instance, investigating outdoor applications such as forest fire detection or air monitoring in industrial plants is a plausible direction for future investigations. However, outdoor environments exhibit distinct ventilation conditions, characterized by the formation of plumes and a reduced tendency for the accumulation of combustion products. In scenarios like forest fire detection, combustion products tend to accumulate beneath the canopy or due to atmospheric inversion, leading to a substantial influence of environmental conditions on the propagation behavior of combustion products.
The misclassification between candles and wood highlights that similar combustion processes lead to lower classification performance, particularly in the early detection phase. The classification rate of the non-boosted model (53% to 69%) indicates potential uncertainty in the classification, which should not be underestimated, especially during the initial stages of incipient fires. We presume that the classification rate might improve with more advanced fires that give clearer sensor signals. However, in an application scenario, an anomaly detector would be connected before the classifier to act as a trigger. One approach could involve using the time interval between the current classification and the triggering of the anomaly detector as a measure of the expected information quality of the classifier.
It is essential to acknowledge that, despite the approach presented in this work, the individual propagation behavior in the application room significantly influences the model's performance. Notably, the model performance experienced a significant reduction at sensor node positions close to the wall in our study. A classification rate of 53% (sensor node position 13, non-boosted model) is relatively low even in a four-class classification problem and may result in misjudgment of the situation in a real application scenario. Since distance effects have not been considered in the previous literature when calculating performance measures, there is a pressing need for further research in this area. In general, a model can only recognize scenarios reliably if the sensor generates reliable input data. Future work should pay more attention to limitations associated with sensor positioning and incorporate these limitations into the evaluation process.
Conclusions and Outlook
This paper presents the results of employing two transfer learning methodologies, namely feature representation transfer and instance transfer, within the context of early fire detection through multi-sensor nodes. The primary objective (RQ) of this study was to investigate whether multi-sensor data from a small-scale setup (D S ) can be used to classify various incipient fires in their early stages within an authentic room setting, without the need to generate time- and cost-intensive data in large-scale setups.
In conclusion, we successfully generated multi-sensor data for four distinct types of incipient fires in a time- and cost-efficient manner within the small-scale experimental setup (D S ) outlined in this study. The D S data facilitated the extraction of crucial information to differentiate between various types of incipient fires. Based on this new feature space, a state-of-the-art classifier (SVM) was trained to classify unseen data from a large-scale setup.
We observed that the baseline model, trained exclusively on the D S data, consistently demonstrated the ability to classify four different incipient fire scenarios within D T , achieving a classification rate of up to 69% and a Cohen's κ of 0.58. However, the model's performance is notably influenced by the distance between the sensor node and the fire source. In particular, we found that sensor node positions close to the wall exhibited lower classification performance (minimum classification rate of 53% and minimum Cohen's κ of 0.36).
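For reference, the two performance measures used throughout, classification rate (accuracy) and Cohen's κ, can both be computed from a confusion matrix as in the sketch below. The matrix entries are illustrative placeholders, not results from this study.

```python
import numpy as np

def classification_rate_and_kappa(conf):
    """Accuracy and Cohen's kappa from a confusion matrix (rows: true class)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    p_o = np.trace(conf) / n                         # observed agreement
    p_e = (conf.sum(0) * conf.sum(1)).sum() / n**2   # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    return p_o, kappa

# Hypothetical 4-class confusion matrix (e.g. wood / candle / cable / lunt);
# the numbers are made up for illustration.
conf = [[40,  8,  1,  1],
        [10, 35,  3,  2],
        [ 1,  2, 45,  2],
        [ 0,  1,  2, 47]]
rate, kappa = classification_rate_and_kappa(conf)
print(f"classification rate = {rate:.3f}, Cohen's kappa = {kappa:.3f}")
```

Cohen's κ corrects the raw classification rate for chance agreement, which is why the paper reports both measures side by side.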
We identified that the decrease in performance primarily resulted from misclassification between the candle and wood scenarios. This misclassification was attributed to the experimental setup of the candle (wax) fire in D S . In further investigations, we recommend optimizing the experimental setup to prevent the D S dataset from acquiring glowing or smoldering artifacts. Based on our findings, we anticipate that such optimization will indeed enhance the performance of the baseline model.
Another finding of this study is that the model's performance can be enhanced through additional model boosting (instance transfer), which is applicable when there is access to (small) amounts of real room data. However, it is crucial to keep the amount of boosting data low to avoid overfitting the model to a particular room situation or sensor node position. In our study, we determined the optimal amount of boosting data to be approximately 5% of the training instances in D T .
In further research, we aim to extend the D S dataset to include a broader range of combustible materials. Additionally, we plan to investigate superpositions of different combustible materials in D S and D T . This is crucial to investigate, as real-world combustible objects often consist of mixtures of various materials. Another noteworthy aspect is the examination of the superposition of nuisance scenarios with different fire scenarios, which will enhance the model's robustness against side effects such as dust, humidity changes, etc.
To ensure a wider range of applications, future research should involve generating test data in diverse full-scale environments. This could encompass test rooms with varying geometries beyond the standard fire test room. Additionally, conducting full-scale outdoor tests would be valuable for extending the application of this concept to areas such as wildland fire detection or industrial facilities.
Another aspect to consider is that data processing, including the classification model presented in this study, is currently executed using the resources of the server (Raspberry Pi). In future research, we aim to explore the feasibility of conducting data processing directly on the ESP32. This would enhance the autonomy of the multi-sensor node, potentially reducing the notification time.
Figure 1 .
Figure 1. Sensor node with multiple sensors and data transfer via MQTT to Raspberry Pi.
Figure 2 .
Figure 2. Experimental setup in D S (left) and D T (right); 4 different incipient fire experiments: smoldering wood, smoldering cable, glowing lunts and candle fire.
Figure 5 .
Figure 5. Classification rate using instance weighting (boosting) based on test position.
Figure 6 .
Figure 6. Comparison of confusion matrices for non-boosted case (left) and boosted case (right) for sensor node position 8 in D T .
Figure 7 .
Figure 7. Classification rate using instance weighting (boosting) and random sensor node positions.
Table 1 .
Transfer learning definitions and notations for source domain D S and target domain D T according to Kim et al. [32].
Table 2 .
Overview of sensors in each sensor node.
Table 4 .
Precision, recall, F1 score, classification rate and Cohen's κ for non-boosted model based on test sensor node position.
Table 5 .
Mean classification rate and mean Cohen's κ for different boosting strategies (dynamic boost).
bold: maximum mean classification rate and Cohen's κ for dynamic boosting.
Table 6 .
Mean classification rate and Cohen's κ for different boosting strategies (static boost).
bold: maximum mean classification rate and Cohen's κ for static boosting.
(90%). It is essential to recognize that, unlike previous research, we addressed a four-class classification problem and employed two distinct experimental settings to generate the test and training data, thereby limiting direct comparisons. In comparison with a similar classification problem ([24]), our boosted model achieves an average classification rate that surpasses Ni et al.'s [24] result (82% classification rate) by 5%. It is important to note that Ni et al. utilized a single experimental setup to generate the training and test data. | 2024-02-27T17:03:54.889Z | 2024-02-22T00:00:00.000 | {
"year": 2024,
"sha1": "4126d83a0d7b32ac7977a335ea1ab6520786f333",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8718abd4a127aa61409f1405449e8b3309d0b514",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
114290867 | pes2o/s2orc | v3-fos-license | Design and Research on a New Vibrator-based Coin Sorting and Packaging Machine
In this study, a novel no-swing and dual-drive vibrator was proposed to be used in the design of a coin sorting and packaging machine with large capacity. At present, one of the key factors restricting coin circulation in China is the lack of a complete large-capacity device capable of sorting and packaging coins. Some innovative design and development of functional parts to realize coin sorting, counting, packaging and transporting were carried out based on optical-electromechanical integration technology. Based on the experimental studies, the diameters of sieve holes and their layout were determined. The machine is proposed to realize highly efficient automation of the process of sorting, counting, packaging and transporting a large number of coins, which can effectively reduce labor intensity and improve labor efficiency.
INTRODUCTION
Coins are widely used around the world because they are well characterized by a fine look, durable wear and low issue cost. In recent years, with China's rapid economic growth, national income continues to increase and the commodity price level is also rising; thus the currency of less than RMB 10 yuan in circulation mainly plays the role of small change. So putting small-denomination currency into the form of coins has become one of the key tasks of currency issue. In recent years, due to the increasing demand for coins, a total amount of over 150 billion coins has been issued in China. At present, however, there are still various problems in the issue and circulation of coins, which have seriously hindered the healthy development of coin circulation in our country. Among them, the lack of research and development on tools to sort coins and the absence of a suitable and efficient device to sort and package coins are the core issues restricting this development. According to an investigation of 6 subsidiaries of Wuhan City Bus Group, for example, the third subsidiary's income averages 22 million yuan monthly, with about 8 million yuan in coins (The People's Bank of China Wuhan Branch, 2009). Manually sorting such a huge amount of coins undoubtedly requires high cost at low efficiency. An investigation of 12 small and mid-sized cities governed by Changzhi City in Shanxi Province showed that by the end of March 2014, among 5,874 bank branches in these cities, only 42 banks were equipped with coin sorting and packaging machines, a proportion of 0.7%, while the other banks still sorted and packaged coins manually at very low work efficiency (Lin, 2007). Therefore, developing machines for automatically sorting and packaging coins to replace manual operation has become a focus of public concern in countries all over the world.
Research on coin sorting devices was conducted early abroad, and the products can be roughly divided into three grades: low, medium and high. The sorting speed of the low grade is 1,000 coins per minute or less, that of the medium grade is 1,000-1,500 coins per minute and that of the high grade is 1,500 coins per minute or more. There are two major categories of sorting methods: one is based on physical techniques and the other on performance indexes. Domestic coin sorting is mostly based on physical techniques, including coin shapes and dimensions, material properties and so on. Most of the current coin sorting devices can only sort coins of different denominations, while some have the capability of sorting and packaging coins but can only deal with a small number of coins at a time. Domestic research and development of an integrated device for automatically sorting, counting and packaging a large number of coins is far from sufficient.
In this study, a novel sorting and packaging machine integrating automatic sorting, counting and packaging of coins was proposed. This machine can sort coins based on their different diameters by a new type of vibrator driving the vibrating sieve, and count and package coins based on the thickness of each coin. Besides the integrated features mentioned above, this machine has a simple structure and stable performance and makes coin counting easier to operate, which not only saves labor but also facilitates coin storage and transportation. When the machine is working, a large number of coins can be directly put into its large sieve box; the coins are then sorted by the vibrator vibrating the sieve box with its many sieve holes. The shutter door opens to let the coins leave the sieve box only when coin sorting is complete. The box is supported by cushioning springs, which prevent the whole box from shaking under the action of the sorting device; the cushioning springs not only reduce noise but also improve the stability of the machine. The shutter door is connected with link rods, and reset springs are installed on the link rods, which not only keep the blades moving in unison when the shutters work, but also increase the closing speed of the shutter door. In the machine, the container conveying device, equipped with a transporting guide mechanism, can automatically export the coin container, which further reduces manual work and improves work efficiency. Sorting, counting and packaging a large number of coins can be realized with the machine in public transit systems, shopping malls, supermarkets, farm product markets, banks and so on.
MATERIALS AND METHODS
Kinematics analysis of the vibrator: The self-synchronization vibrator was invented based on the discovery of the self-synchronization phenomenon. Boccaletti et al. (1999) were the first to describe the vibration synchronization, or self-synchronization, phenomenon of mechanical systems, finding that two pendulums hanging side by side would swing synchronously after swinging independently for a period of time. In the 1960s, Blekhman et al. (2002) put forward the synchronization theory for the vibrator with double eccentric rotors; that is to say, two induction motors installed on one vibrator can achieve run-in synchronism under certain conditions. Zhang et al. (2009) resolved the self-synchronization conditions and self-synchronization stability conditions for the vibrator by the integral average method. Zhao et al. (2010) developed the self-synchronization theory for the dual-motor-driven and four-motor-driven vibrators using the improved method of average small parameters and deeply explained the coupling dynamic characteristics and dynamic symmetry of the vibrator. So far, scholars at home and abroad have made many profound studies of the self-synchronization theory for a variety of vibration devices, but swing exists in all the proposed models, which makes the vibration device unable to move exactly in the desired direction. In this study, a novel swing-free and dual-drive vibrator was used to drive the coin sorting machine. This vibrator consists of two plastids, inner and outer, and the rotation centers of the two eccentric rotors lie on the same vertical axis as the centroid of the inner plastid. The torque exerted by the inertial force of the eccentric rotors about this axis is zero; thus the swing of this vibrator is eliminated.
Fig. 1: Dynamic model of the vibrator
Figure 1 shows the dynamic model of the new vibrator. It consists of the inner plastid m 1 , the outer plastid m 2 and two eccentric rotors m 01 and m 02 . The inner plastid m 1 is connected with the outer plastid m 2 through springs k x and k y in the x and y directions, respectively, and the outer plastid m 2 is supported by the elastic base k 2 . The differential equations of motion for the vibrator were obtained from the Lagrange equations (Han et al., 2007), where M 1 = m 1 + m 01 + m 02 is the vibrating mass of the system in the x and y directions; M 2 = m 1 + m 2 + m 01 + m 02 is the vibrating mass of the system in the z direction; k x , k y and k z are the spring stiffnesses in the x, y and z directions; f x , f y and f z are the damping coefficients in the x, y and z directions; and f 1 and f 2 are the damping coefficients of the two motors.
Self-synchronization conditions of two eccentric rotors:
When the vibrator worked steadily, the phases of the two eccentric rotors and their average phase were denoted as φ 1 , φ 2 and φ, respectively, and the phase of eccentric rotor 1 was ahead of that of eccentric rotor 2 by 2α, that is to say, φ 1 − φ 2 = 2α. The phases of eccentric rotors 1 and 2 were then φ 1 = φ + α and φ 2 = φ − α. When the vibrator worked steadily, the average rotating speed of the two eccentric rotors was assumed as φ̇ = ω m (t). Since the movement of the vibrator changed periodically, which meant the external load of the two motors changed periodically, the average angular speed of the two eccentric rotors within a period, ω m0 , was constant. The instantaneous fluctuation coefficients of φ̇ 1 and φ̇ 2 were assumed as ε 1 and ε 2 (both functions of time t), with φ̇ 1 = (1 + ε 1 )ω m0 and φ̇ 2 = (1 + ε 2 )ω m0 . Substituting Eq. (4) into Eq. (3) gave the angular velocities and angular accelerations of the two eccentric rotors. When t→∞, the average fluctuation coefficients of φ̇ 1 and φ̇ 2 within a period T = 2π/ω m0 were 0 (that is, ε̄ 1 = 0, ε̄ 2 = 0), which meant the system's frequency had been captured; that is to say, the two eccentric rotors ran synchronously. When the synchronous torque of the vibrator was greater than or equal to the absolute values of the residual electromagnetic torques of the two motors, the two eccentric rotors would take a self-synchronization movement (Guo, 2007).
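The frequency-capture behavior described above can be illustrated with a deliberately simplified toy model: two rotors with slightly different natural speeds coupled through a single sinusoidal term (a Kuramoto-style caricature, not the full Lagrangian model of the text). The speeds and coupling constant below are assumed values chosen only to show phase locking.

```python
import math

# Toy model of frequency capture between two eccentric rotors.
omega1, omega2 = 100.0, 101.0   # natural angular speeds (rad/s), hypothetical
K = 5.0                          # effective coupling via the shared plastid
dt, steps = 1e-4, 200_000        # simple explicit Euler integration

phi1, phi2 = 0.0, 1.0
for _ in range(steps):
    diff = phi2 - phi1
    phi1 += (omega1 + K * math.sin(diff)) * dt
    phi2 += (omega2 - K * math.sin(diff)) * dt

# In the locked state the phase difference settles where the coupling
# torque balances the speed mismatch: sin(diff*) = (omega2 - omega1) / (2K).
locked = math.asin((omega2 - omega1) / (2 * K))
print(f"final phase difference: {diff:.4f} rad (predicted {locked:.4f} rad)")
```

Despite different natural speeds, the two phases lock to a common average speed with a constant phase offset, which is the qualitative content of the frequency-capture condition in the text.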
Overall structure of the coin sorting and packing machine with large capacity:
The working principle of the large-capacity coin sorting and packing machine proposed in this study is to sort coins according to the different diameters and weights of the different denominations. The machine mainly consists of a drive part (a vibrator), framework, vibration sorting device, sealing device, buffer device, container conveying device, counter and other components (Fig. 2). The vibration sorting device is located at the rear of the machine, while the sealing device is at the front. The sorted coins are transported to the container through the buffer device, and the container conveying device, which is controlled by the counter, can not only transport the empty container to the place below the buffer device, but also transport the container with coins to the sealing device for sealing.
RESULTS AND DISCUSSION
Experimental researches: The diameter of the holes in the sieve plate and the distribution of the sieve holes directly affect the efficiency and quality of coin sorting.
Determination of the diameters of the sieve hole:
The sieve holes in the first layer are taken as an example: 6 different diameters of sieve holes were machined on six sieve plates for testing, with hole sizes of 21, 21.5, 21.8, 22, 23 and 24 mm, respectively. When each plate was tested by sieving 1,000 coins, the times needed to sort them completely were 150, 150, 140, 120, 80 and 50 sec, respectively. This means that with a sieve hole diameter of 24 mm, the efficiency of sorting the 1-yuan coins out from the other coins was the highest among the 6 sieve plates.
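The sorting times above translate directly into throughput. The following back-of-the-envelope calculation uses only the diameters and times quoted in the text (1,000 coins per trial):

```python
# Throughput implied by the hole-diameter test (1000 coins per trial).
diameters_mm = [21.0, 21.5, 21.8, 22.0, 23.0, 24.0]
times_s      = [150,  150,  140,  120,  80,   50]

for d, t in zip(diameters_mm, times_s):
    rate = 1000 / t * 60   # coins per minute
    print(f"hole {d:4.1f} mm: {rate:6.0f} coins/min")

best = max(zip(diameters_mm, times_s), key=lambda p: 1000 / p[1])
print(f"best diameter: {best[0]} mm ({1000 / best[1] * 60:.0f} coins/min)")
```

The 24 mm plate thus reaches 1,200 coins/min on the first layer, three times the throughput of the 21 mm plate, consistent with the conclusion drawn in the text.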
Tested in the same way, the diameter of the sieve plate holes in the second layer was determined as 21 mm and that in the third layer as 20 mm.
Layout of sieve holes: 1,000 coins (250 of each denomination) were taken to test the hole layout schemes, and the test results are shown in Table 1. They indicate that an irregular arrangement of holes on the sieve plate can improve the efficiency and quality of coin sorting.
CONCLUSION
Design and research on the large-capacity coin sorting and packing machine were conducted based on a new vibrator. The method of vibration sorting was adopted to achieve quick and effective sorting of a variety of coins. Based on the experimental studies, the diameters of the sieve plate holes for each layer and their layout were decided to improve sorting efficiency and quality. In the packaging stage, the packaging method was improved: plastic cups were used to hold the coins and automatic sealing devices were used to seal the cups, which not only solves the problem of sorting and packaging a large number of coins, but also realizes the integration of sorting, counting and packaging.
Table 1 :
Test for the layout of sieve plate holes | 2019-01-02T15:44:22.781Z | 2016-08-15T00:00:00.000 | {
"year": 2016,
"sha1": "7c564a15b757e9e4c6ec4e4323cac8b177b9f83d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19026/ajfst.11.2775",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7c564a15b757e9e4c6ec4e4323cac8b177b9f83d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
16324386 | pes2o/s2orc | v3-fos-license | BCAT1 promotes tumor cell migration and invasion in hepatocellular carcinoma
Branched-chain amino acid transaminase 1 (BCAT1) has been associated with numerous types of tumors; however, few previous studies have evaluated the expression and role of BCAT1 in hepatocellular carcinoma (HCC). In the present study, the expression of BCAT1 was detected by reverse transcription-quantitative polymerase chain reaction and immunoblotting in six HCC cell lines and 74 pairs of HCC and adjacent non-cancerous liver tissues. In addition, the correlation between the expression levels of c-Myc and BCAT1 was analyzed using immunohistochemistry. Furthermore, RNA silencing was performed using c-Myc-specific or BCAT1-specific small interfering RNA, after which wound healing and Transwell cell invasion assays were conducted. Finally, the clinicopathological characteristics of BCAT1 in patients with HCC were analyzed. It was shown that the expression of BCAT1 was significantly higher in HCC tissues compared with adjacent non-tumor tissues (P<0.001), and in HCC cell lines compared with the L-02 hepatic cell line (P<0.001). In addition, immunohistochemical analyses indicated that the expression of BCAT1 was positively correlated with that of c-Myc (r=0.706, P<0.001). BCAT1 expression was shown to be downregulated in c-Myc-knockdown cells, and silencing of BCAT1 expression reduced the invasion and migration of HCC cells. Furthermore, a clinical analysis indicated that BCAT1 expression in HCC tissues was significantly associated with the tumor-node-metastasis stage, tumor number and tumor differentiation (all P<0.05), and that BCAT1 was able to predict the 5-year survival and disease-free survival rates of patients with HCC (both P<0.001). The results of the present study suggested that BCAT1 expression is upregulated in patients with HCC, and that BCAT1 may serve as a potential molecular target for the diagnosis and treatment of HCC.
Introduction
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors and its incidence is increasing. Furthermore, HCC is the third leading cause of cancer-associated mortality worldwide, partly due to its high recurrence rate and early metastasis (1,2). In 2000, there were 564,000 new cases and 549,000 mortalities from HCC worldwide, indicating the devastating prognosis of this tumor (3). In 2008, 746,300 new cases of HCC were diagnosed worldwide, and 695,900 HCC-related mortalities were reported. In total, >700,000 new cases are diagnosed each year throughout the world and >600,000 mortalities are attributed to HCC each year (4). At present, the majority of patients with HCC are diagnosed at the advanced stage due to lack of specific clinical manifestations, meaning that patients often miss out on the chance of receiving curative treatments, such as liver resection (5). In addition, patients with HCC often have a poor prognosis due to the aggressive nature of the malignancy, including a high recurrence rate and metastasis. Therefore, an improved understanding of the mechanisms underlying the recurrence and metastasis of HCC is required in order to identify effective prognostic and therapeutic biomarkers of HCC.
Branched-chain amino acid transaminase 1 (BCAT1), which is also known as cytosolic branched-chain aminotransferase and ECA39, is located at chromosome 12p12.1. It encodes the cytosolic form of the branched-chain amino acid transaminase enzyme, which catalyzes the reversible transamination of branched-chain α-keto acids to branched-chain L-amino acids essential for cell growth (6-10). It has previously been suggested that the aberrant expression of BCAT1, and the concomitant defect in branched-chain amino acid transamination, leads to hypervalinemia and hyperleucine-isoleucinemia, and may have an important role in the cell growth, proliferation and apoptosis of numerous tumor types (8,11-14). Furthermore, BCAT1 overexpression has been reported in non-neoplastic diseases of the liver, including chronic hepatitis C and non-alcoholic fatty liver disease (15-17). However, the expression and role of BCAT1 in HCC remains unclear.
Previous studies have reported that BCAT1 serves as an oncogenic protein that is upregulated by several signaling molecules, including c-Myc (18-20). c-Myc is an oncogene and transcription factor involved in the tumorigenesis of multiple cancers, including Burkitt's lymphoma and breast cancer, by targeting genes harboring the c-Myc-binding element (CACGTG) downstream of their transcription start site (11). Therefore, c-Myc may have an important role in the development and progression of HCC (20). BCAT1 has previously been associated with numerous malignancies due to its role in cell proliferation, cell cycle progression, differentiation and apoptosis (8,10-14). However, little is known regarding the role of BCAT1 in HCC. To the best of our knowledge, the present study is the first to assess the association between BCAT1 and HCC. The study aimed to determine whether BCAT1 may serve as a potential prognostic and therapeutic biomarker for HCC.
Patients and specimens. A total of 74 HCC and matched normal adjacent samples (>2 cm distance from the margin of the resection) were obtained from pathologically confirmed HCC patients who had undergone surgical resection at the First Affiliated Hospital of Xi'an Jiaotong University (Xi'an, China) between October 2005 and September 2008. None of the patients had received any pre-operative chemotherapy or radiotherapy, and patients with evidence of concomitant extrahepatic disease were excluded from the analysis. HCC stage was classified according to the seventh edition of the tumor-node-metastasis (TNM) classification criteria of the International Union Against Cancer (21). The present study included 56 males and 18 females with a median age of 52 years (range, 33-75 years). All HCC tissues and matched pericarcinous liver tissues were immediately snap-frozen in liquid nitrogen following surgery and stored at -80˚C until use. Hepatitis B surface antigen (HBsAg) and α-fetoprotein (AFP) levels were obtained from the results of laboratory tests, capsule formation was observed during surgery and Edmondson-Steiner grade (22) was evaluated by an experienced pathologist. All information was recorded for each case. All patients provided informed consent prior to surgery, and all protocols were performed in accordance with the 1975 Declaration of Helsinki. The present study was approved by the Ethics Committee of The First Affiliated Hospital of Xi'an Jiaotong University.
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR). Total RNA was extracted from HCC cell lines and tissues using TRIzol ® reagent (Invitrogen; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. In order to avoid DNA contamination, the extracted RNA was treated with RNase-free DNase I (Invitrogen; Thermo Fisher Scientific, Inc.) and quantified by spectrophotometry. Subsequently, cDNA was synthesized using the RevertAid Premium First Strand cDNA Synthesis kit (Fermentas; Thermo Fisher Scientific, Inc.). qPCR was performed using the Applied Biosystems 7500 Real-Time PCR system (Thermo Fisher Scientific, Inc.) and SYBR ® Premix Ex Taq™ II (Tli RNaseH Plus; Takara Bio, Inc., Otsu, Japan). The primer sequences were as follows: BCAT1 forward, 5'-CCA AAG CCC TGC TCT TTG TA-3' and reverse, 5'-TGG AGG AGT TGC CAG TTC TT-3'; and β-actin (internal control) forward, 5'-GGG AAA TCG TGC GTG ACA T-3' and reverse, 5'-CTG GAA GGT GGA CAG CGA G-3'. The reaction conditions for the PCR program were as follows: Initial denaturation at 95˚C for 30 sec, followed by 40 cycles at 95˚C for 5 sec and 60˚C for 64 sec. Melting curve analyses were performed to confirm the specificity of the PCR product. Relative mRNA expression levels were determined using the 2-ΔΔCq method (23). Reactions were performed in triplicate.
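The 2-ΔΔCq calculation cited above (reference 23) can be sketched as follows. The Cq values below are hypothetical, chosen only to illustrate the arithmetic of normalizing a target gene to a reference gene and comparing sample to control.

```python
# Relative expression via the 2^-ΔΔCq method; Cq values are illustrative.
def ddcq_fold_change(cq_target_s, cq_ref_s, cq_target_c, cq_ref_c):
    """Fold change of the target gene in a sample vs a control,
    each normalized to a reference gene (here beta-actin)."""
    dcq_sample  = cq_target_s - cq_ref_s    # ΔCq for the sample (e.g. tumor)
    dcq_control = cq_target_c - cq_ref_c    # ΔCq for the control (adjacent tissue)
    ddcq = dcq_sample - dcq_control         # ΔΔCq
    return 2.0 ** (-ddcq)

# Hypothetical example: BCAT1 amplifies 2 cycles earlier in tumor tissue
# relative to beta-actin than it does in adjacent tissue.
fold = ddcq_fold_change(cq_target_s=24.0, cq_ref_s=18.0,
                        cq_target_c=26.0, cq_ref_c=18.0)
print(f"relative BCAT1 expression (tumor vs adjacent): {fold:.1f}x")
```

Each cycle of earlier amplification corresponds to roughly a doubling of template, which is why the fold change is expressed as a power of 2.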
Immunohistochemical staining. Immunohistochemical staining was performed using paraformaldehyde-fixed, paraffin-embedded tissue sections, which were prepared according to a method described previously (14). The tissue sections were incubated with rabbit anti-BCAT1 polyclonal antibody (cat. no. ab197941; 1:50 dilution; Abcam) and mouse anti-c-Myc monoclonal antibody (cat. no. ab32; 1:200 dilution; Abcam) overnight at 4˚C, followed by incubation with biotinylated goat anti-rabbit (cat. no. SV0002; Wuhan Boster Biological Technology, Ltd., Wuhan, China) and rabbit anti-mouse (cat. no. SV0001; Wuhan Boster Biological Technology, Ltd.) secondary antibodies at 37˚C for 1 h. Each slide was stained with DAB (Sigma-Aldrich, St. Louis, MO, USA) in a dark room, then all the sections were rinsed with running water and counterstained with hematoxylin (cat. no. ST047; HEART Biological Technology Co. Ltd.). Subsequently, the tissue sections were assessed by light microscopy and evaluated blindly and independently by two experienced pathologists. To evaluate the association between the expression of BCAT1 and c-Myc, a semi-quantitative scoring system based on the staining intensity and the percentage of positive liver cells was applied. Immunostaining intensity was evaluated as one of the following four grades: 0, negative; 1, weak; 2, moderate; and 3, strong. The percentage of positive liver cells was categorized into one of the following groups: 0, 0%; 1, 1-10%; 2, 11-50%; 3, 51-80%; and 4, >80%. The immunostaining intensity and average percentage of positive cells were evaluated for 10 independent high-magnification fields. The final weighted expression score (0-12) was obtained by multiplying the staining intensity grade by the positive-cell percentage category. The total expression scores for BCAT1 and c-Myc were listed as continuous variables for the correlation analyses.
In order to evaluate the effect of BCAT1 protein expression on overall survival, the weighted expression scores of BCAT1 protein were divided into high and low scores using the median expression score as the cutoff point.
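The weighted-score scheme above can be sketched as a small function. This is an illustrative reimplementation, not the authors' code; in particular, labelling scores strictly above the median as "high" is an assumption about how ties at the median were handled.

```python
def ihc_weighted_score(intensity, pct_positive):
    """Weighted IHC expression score (0-12): staining-intensity grade (0-3)
    multiplied by the percent-positive category (0-4)."""
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity grade must be 0, 1, 2 or 3")
    if pct_positive == 0:
        category = 0
    elif pct_positive <= 10:
        category = 1
    elif pct_positive <= 50:
        category = 2
    elif pct_positive <= 80:
        category = 3
    else:
        category = 4
    return intensity * category

def dichotomize_by_median(scores):
    """Label each score 'high' or 'low' using the cohort median as the
    cutoff point (scores above the median are labelled 'high')."""
    ordered = sorted(scores)
    n = len(ordered)
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2.0
    return ["high" if s > median else "low" for s in scores]
```

For example, strong staining (grade 3) in >80% of cells (category 4) gives the maximal score of 12.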
Small interfering RNA (siRNA) transfection. siRNAs targeting c-Myc (cat. no. sc-29226) and BCAT1 (cat. no. sc-77222), as well as control siRNA (cat. no. sc-37007), were purchased from Santa Cruz Biotechnology, Inc. MHCC-97H tumor cells were seeded at a density of 2x10^5 cells per well into six-well plates and cultured overnight in a humidified 5% CO2 incubator at 37˚C. Subsequently, the cells were transfected with 100 nM of the BCAT1, c-Myc or control siRNA using Lipofectamine RNAi MAX Reagent (Invitrogen; Thermo Fisher Scientific, Inc.). Further experiments were performed after 48 h of transfection.
Transwell invasion assay. Matrigel was diluted in serum-free DMEM (1:3) and added to the upper chamber of a 24-well Transwell plate. HCC cells were trypsinized and counted manually under a light microscope. A cell suspension of 5x10^4 cells/ml in serum-free medium was prepared and 100 µl of the suspension was loaded into the upper chamber. The lower chambers were filled with 10% FBS in DMEM. Invasion was halted in a 37˚C incubator (5% CO2) after ~24 h by removing the non-migrated cells from the upper chamber using a cotton swab. The HCC cells that had migrated through the membrane were stained with 0.05% crystal violet after fixing with 4% paraformaldehyde, and were counted under a microscope. At least five fields were randomly selected for counting the mean number of invaded cells in each membrane using ImageJ v1.48 software (NIH, Bethesda, MD, USA). At least three experimental replicates were performed.
Wound healing assay. MHCC-97H cells transfected with BCAT1 or control siRNA were seeded at a concentration of 5x10^5 per well onto 6-well plates and cultured to full confluency. Scratch wounds were made across the surface of the plates using a 10-µl pipette tip and the suspension cells were removed using phosphate-buffered saline. Cells were cultured in serum-free DMEM in a humidified 5% CO2 incubator at 37˚C for 48 h, after which images of the plates were captured using a phase-contrast microscope. At least five replicate experiments were performed.
Follow-up. Follow-up of the patients in the present study was performed on December 31, 2013. The duration was defined as the interval between the date of surgery and the date of mortality or last follow-up. The follow-up time ranged from 6-78 months and the median time was 58.5 months. All patients received follow-up visits once every 1-3 months in the first year and every 3-6 months thereafter. The follow-up protocol included a physical examination, measurement of serum AFP levels, a chest X-ray and abdominal ultrasonography. Computed tomography, magnetic resonance imaging or positron emission tomography was performed to assess the occurrence of tumor recurrence. During the follow-up period, 59 patients (79.7%) were shown to have intrahepatic tumor recurrence and 11 patients (14.9%) had developed distant tumor metastases.
Statistical analysis. Statistical analyses were performed using SPSS 16.0 software (SPSS Inc., Chicago, IL, USA). The Spearman's rank correlation coefficient was applied to evaluate the association between ordinal data, and the χ² test or Fisher's exact test was performed for comparisons of categorical data. The expression levels between groups were compared using the Mann-Whitney U test. Overall survival and disease-free survival rates, and mortalities associated with tumor recurrence or metastasis, were analyzed using the Kaplan-Meier method, and differences between curves were assessed using the log-rank test. Independent prognostic factors were assessed by the Cox proportional hazards stepwise regression model. Data are presented as the mean ± standard error of the mean. P-values were two-sided. P<0.05 was considered to indicate a statistically significant difference.
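Of the tests listed, the Spearman coefficient (used below for the BCAT1/c-Myc correlation) is simply the Pearson correlation computed on ranks. A dependency-free sketch, with average-rank handling of ties, assuming non-constant inputs:

```python
def _ranks(values):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Assumes len(x) == len(y) and neither input is constant."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter, any monotone relationship (not just a linear one) yields rho = 1.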
Results
Expression of BCAT1 in HCC tissues and cells. The expression levels of BCAT1 in cell lines and tissues were determined using RT-qPCR and western blotting. The expression levels of BCAT1 were significantly lower in the L-02 cells compared with the HCC cell lines (all P<0.001; Fig. 1A and B). Similarly, BCAT1 expression levels were significantly higher in HCC tissues compared with adjacent non-cancerous liver tissues (P<0.001; Fig. 1C and D).
Association between BCAT1 expression and clinicopathological parameters. To investigate the clinical significance of BCAT1 in patients with HCC, the associations between the BCAT1 expression levels (high or low) and clinicopathological parameters, including patient gender, age, detection of HBsAg, AFP level, tumor size, tumor number, vascular invasion, cirrhosis, capsule formation, Edmondson-Steiner grade and TNM stage, were investigated. The median expression score of BCAT1 protein was used as a cutoff point to divide patients into high and low expression groups. Notably, the expression levels of BCAT1 were significantly associated with the Edmondson-Steiner grade, tumor number, vascular invasion and TNM stage (all P<0.05). However, no significant association was observed between the expression levels of BCAT1 and the patient gender, age, HBsAg, AFP level, cirrhosis and capsule formation (all P>0.05). The results are shown in Table I.
High expression levels of BCAT1 are associated with a poor HCC prognosis. The median expression score of BCAT1 protein was used as a cutoff point to divide patients into high and low expression groups for a clinical association analysis. Univariate prognostic analyses and multivariate Cox regression models were applied to assess the association between the expression levels of BCAT1 and the overall and disease-free survival rates (Fig. 2). The patients with high BCAT1 expression levels showed significantly reduced overall and disease-free survival rates (P=0.002). The 5-year overall survival rate of the low BCAT1 expression group was 66.8%, which was significantly higher than that of the high BCAT1 expression group (33.2%; P=0.002). In addition, the 5-year disease-free survival rate of the low BCAT1 expression group was 58.5%, which was also significantly higher compared with that of the high BCAT1 expression group (30.5%; Fig. 2; Table III). Multivariate analysis indicated that the expression level of BCAT1, Edmondson-Steiner classification and tumor number were all independent prognostic factors of HCC (Table III).
Association between BCAT1 and c-Myc protein expression levels. To determine whether the expression level of BCAT1 was correlated with c-Myc expression in patients with HCC, immunohistochemical staining was performed. The protein expression levels of c-Myc were significantly higher in the HCC tissues compared with the corresponding adjacent non-tumorous tissues (P<0.001; Fig. 3A). In addition, the correlation between BCAT1 and c-Myc protein expression levels was analyzed. Notably, there was a significant positive correlation between the protein expression levels of c-Myc and BCAT1 (r=0.706; P<0.001; Fig. 3B).
c-Myc-knockdown reduces BCAT1 expression. A previous study reported that c-Myc was able to upregulate BCAT1 expression in nasopharyngeal carcinoma (11). Therefore, to further elucidate the underlying mechanism of BCAT1 in HCC cells, MHCC-97H cells were transfected with c-Myc-specific siRNA. Silencing of c-Myc expression was shown to significantly downregulate the expression of BCAT1 in MHCC-97H cells (P=0.005; Fig. 4A). Furthermore, BCAT1-specific siRNA was used to knockdown the expression of BCAT1 in MHCC-97H cells. Compared with the control group, the silencing of BCAT1 did not significantly alter the protein expression levels of c-Myc (Fig. 4B).
BCAT1-knockdown suppresses cell invasion and migration.
Table II. Univariate prognostic analysis of overall and disease-free survival rates in patients with hepatocellular carcinoma.
To further investigate the underlying mechanism of BCAT1 in HCC, the effect of BCAT1 on MHCC-97H cell migration and invasion was investigated using BCAT1-specific and control siRNA. Compared with the control group, BCAT1-knockdown significantly repressed cell migration and invasion (Fig. 4C-F).
In the present study, the expression levels of BCAT1 in several cell lines were initially detected, and it was demonstrated that the expression levels of BCAT1 in HCC cell lines were significantly higher compared with those in the L-02 immortalized normal human liver cell line. In addition, the expression levels of BCAT1 in tumor tissues derived from a relatively large population of HCC patients were determined, and it was shown that the expression levels of BCAT1 were upregulated in HCC tissue compared with tumor-adjacent tissues. Further studies demonstrated that the expression levels of BCAT1 protein were positively correlated with those of c-Myc, which indicated that c-Myc may be partially responsible for the high expression levels of BCAT1 in HCC tissues and cell lines. For better elucidation of the role and underlying mechanisms of BCAT1 in HCC cells, the effects of c-Myc- and BCAT1-knockdown on MHCC-97H cells were investigated. As expected, c-Myc-knockdown was found to downregulate BCAT1 expression in MHCC-97H cells. Furthermore, the expression of BCAT1 was associated with the biological characteristics of HCC cells, since it was demonstrated that knockdown of BCAT1 expression repressed the migration and invasion of MHCC-97H cells. Taken together, these results support the hypothesis that BCAT1 has a critical role in the migration and invasion of HCC, and that its expression may be regulated by c-Myc. Therefore, BCAT1 may serve as a potential biomarker for the diagnosis and treatment of HCC.
In the present study, the associations between the expression levels of BCAT1 and the clinicopathological parameters and prognosis of patients with HCC were analyzed. It was demonstrated that the upregulation of BCAT1 was significantly correlated with lower overall and disease-free survival rates, and other clinicopathological parameters, including the Edmondson-Steiner grade, tumor number, vascular invasion and TNM stage.
In conclusion, the present study demonstrated that BCAT1 was upregulated in HCC tissue samples and cell lines compared with normal adjacent tissue samples and the L-02 immortalized normal human liver cell line, respectively. Furthermore, the BCAT1 expression level was positively correlated with c-Myc expression, and knockdown of c-Myc in HCC cells resulted in the downregulation of BCAT1. In addition, knockdown of BCAT1 expression was shown to repress the migration and invasion of an HCC cell line. The results of the present study suggested that BCAT1 is important for the migration and invasion of HCC and may represent a novel prognostic biomarker for the disease.
Tau-neutrino Appearance Searches using Neutrino Beams from Muon Storage Rings
We study the possibilities offered by muon storage rings for tau-neutrino appearance experiments due to nu_mu to nu_tau and nu_e to nu_tau oscillations. Tau event rates for such experiments are first discussed with a view to examining their variation prior to the inclusion of experimental cuts, in order to better understand how baselines, beam energies, forward peaking of decay neutrinos with increasing energies and average fluxes intercepted by detectors of various sizes can affect their optimization. Subsequently, event rates implementing cuts are computed for hadronic and wrong-sign lepton decay modes and used to plot 90% C.L. contours for the parameters that can be explored in such experiments. The expected scaling of the contours with energy and baseline is discussed. The results show that even for modest muon beam energies, convincing coverage of the Super Kamiokande parameters is possible. In addition, very significant enlargement of present-day bounds on the mass and mixing parameters of all types of neutrino oscillations is guaranteed by such searches.
Introduction
The recent results of the Super Kamiokande (Super K) water Cerenkov-detector experiment [1] provide firm indications of an anomaly in the flavor ratios and zenith-angle dependence of the atmospheric neutrino flux. Although the existence of such an anomaly had already been signalled by earlier data from the Kamiokande [2] and IMB [3] experiments and supported by subsequent Soudan II results [4], the impressive statistical significance of the Super K data has appreciably buttressed its interpretation in terms of neutrino mass and oscillations. This is especially true of the observed zenith-angle dependence of the observations, which does not naturally seem to lend itself to any alternative explanation. When combined with results from the CHOOZ reactor experiment [5], analyses [6,7] of the data tilt the balance towards an interpretation in terms of ν µ −→ ν τ oscillations versus other explanations. Evidence that this channel is favoured also comes from neutral current event count ratios involving the production of neutral pions measured at Super K [8].
In addition to being the first firm signal for physics beyond the Standard Model, a determination, even if approximate, of neutrino masses and mixing angles would be a crucial pointer towards the nature of such physics, providing an unprecedented glimpse into what lies beyond present knowledge of particle interactions. Thus, the importance of independently verifying the presence of ν µ −→ ν τ oscillations can scarcely be overestimated. The firmest confirmation of this hypothesis would be via the detection of τ leptons produced by charged current interactions of ν τ 's resulting from oscillations of ν µ 's. In this paper, we study this possibility in the context of neutrinos obtained from muon storage rings at future muon colliders.
At present, high energy (≥ GeV) neutrino beams for oscillation studies are obtained by allowing charged pions and kaons produced in fixed target accelerator experiments to decay in flight. Recently, however, a new type of neutrino beam, much more intense than those presently available, has been proposed and discussed for neutrino oscillation studies and other neutrino related experiments [9,10,11,12,13,14,15,16,17]. These beams originate from a high intensity muon source, currently under active design and study as part of an effort to develop a high luminosity muon collider [18]. In addition to the extremely intense and collimated primary neutrino fluxes which will be available from such a source, the beam compositions will be much more precisely known than in those available from pion and kaon decay. A muon storage ring with the straight section pointing towards a neutrino detector situated at a specific baseline length, as described in detail in [12], would lead to a neutrino beam with precisely equal numbers of ν µ and ν̄ e , or alternatively ν̄ µ and ν e , depending on the sign of the parent muons. This is in contrast to the presently available high energy neutrino beams from accelerators, which contain mostly muon neutrinos, but with small contaminations of electron and tau neutrino species. For ν τ appearance searches, the vastly superior luminosities, absence of contamination and the possibility of higher energies of muon collider neutrino beams make them an attractive proposal which merits further study.
Accordingly, we focus here on the physics of tau appearance experiments using neutrinos from muon storage rings, depicting the two flavour oscillation parameter ranges (∆m² and sin²2θ) which can be consequently probed in a search for ν µ −→ ν τ and ν e −→ ν τ oscillations. Adopting the sample design configuration for muon production, capture, cooling, acceleration and storage prior to decay described in [12], the number of available muons of either sign is ≈ 8×10^20 per year. Of these, one fourth decay in a straight section directed towards the neutrino detector, yielding 2×10^20 neutrinos and an identical number of antineutrinos (ν µ and ν̄ e if, for example, the beam is composed of µ −). We use these numbers in all of the following calculations, and refer the reader to [12] for design details leading to the production of the neutrino beams.
In section 2 we discuss the broader physics characteristics and dependences of τ production rates at such oscillation experiments. In section 3 we discuss the realistic detection of τ events above backgrounds. The channels we study are the detection of (i) ν µ −→ ν τ oscillations via charged current production and subsequent decay into hadrons, and (ii) ν e −→ ν τ oscillations via the appearance of wrong sign muons from τ decay to leptonic modes. The specific conventional kT type detector, discussed recently in [12], is considered, and we describe our choices for the kinematic cuts and/or overall detection efficiencies for it.
In section 4 we use the results of the consequent event rate calculations to present 90% CL contours for ∆m 2 and sin 2 2θ for a variety of muon beam energies and baseline lengths in order to illustrate the extraordinary possibilities offered by muon colliders for studying neutrino oscillations.
General characteristics
For the general discussion that this section focuses on, we compute and use the actual ν τ −→ τ charged current (CC) production rates without including experimental cuts to eliminate backgrounds. These are detailed and incorporated later, in sections 3 and 4, prior to obtaining contour plots for ∆m 2 and sin 2 2θ.
The event rate N τ (events/kT/year) for τ lepton production from ν µ 's subsequent to oscillation is given by
N τ = 6.023 × 10^32 × Φ νµ × P νµ−→ντ × σ CC ντ , (1)
where σ CC ντ is the total charged current cross section obtained by integrating equation (3) below. The oscillation probability between flavours is
P νµ−→ντ = sin²2θ sin²(1.27 ∆m² L/E ν ), (2)
with ∆m² = m² ντ − m² νµ,e in eV², L = baseline length in km, E ν being the neutrino energy in GeV, and θ the mixing angle between flavours. Φ νµ is the number of neutrinos in the cone intercepted by the detector averaged over its area. The numerical factor is the number of scatterers (iso-scalar nucleons) per kT of the detector material.
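The two-flavour probability of equation (2) is straightforward to evaluate; the sketch below (standard formula, with ∆m² in eV², L in km and E ν in GeV) also checks it against the small-argument form that the text later uses as equation (5).

```python
import math

def p_osc(sin2_2theta, dm2, baseline_km, e_nu):
    """Two-flavour appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2 * baseline_km / e_nu) ** 2

def p_osc_small(sin2_2theta, dm2, baseline_km, e_nu):
    """Small-argument limit, valid for 1.27 * dm2 * L / E << 1."""
    return sin2_2theta * (1.27 * dm2 * baseline_km / e_nu) ** 2

# Super K-favoured parameters, 732 km baseline, 20 GeV neutrinos:
exact = p_osc(1.0, 1e-3, 732.0, 20.0)
approx = p_osc_small(1.0, 1e-3, 732.0, 20.0)
```

For these parameters the oscillation argument is about 0.046, so the exact and approximate forms agree to well below a percent, illustrating why the quadratic approximation is adequate over much of the parameter range discussed here.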
We first discuss the cross-section, performing our calculation within the renormalization group improved parton model, and focus on the inclusive process ν τ N −→ τ − + anything, where N is an isoscalar nucleon. On retaining effects of the τ-mass (or the µ-mass, in the case of σ CC νµ , which is also calculated below), the differential cross section can be written in terms of the Bjorken scaling variables x = Q²/2Mν and y = ν/E ν and the structure functions F 1 -F 5 (equation (3)). Here −Q² is the invariant momentum transfer between the incident neutrino and outgoing tau, ν = E ν − E τ is the energy loss in the lab (target) frame, M and M W are the nucleon and intermediate boson masses respectively, and G F = 1.16632 × 10^−5 GeV^−2 is the Fermi constant. The limits on x and y follow from the kinematic requirement that the final-state τ be physical.
The F i 's are expressed in terms of the distributions for the various quark flavours in a proton, denoted u, d, c, s and b; a straightforward application of the Callan-Gross relations shows that F 4 vanishes in this case. For our calculations we use CTEQ4 parton distributions [19]. Figure 1 shows the total CC cross sections for ν µ and ν τ , obtained using the above expressions. For convenience, we give analytic fits for both cross-sections in Table 1, over the entire range of energy which is of interest here.
For the calculations of event rates, we assume that area dimensions of the detector and the baseline length L define a 'detection cone' of half angle θ d with the direction of the muon beam, with detector radius R d ≡ Lθ d . Thus, for long baselines the choice of θ d would expectedly be smaller than those for shorter baselines in order to accommodate a realistic detector size. The angular distribution of ν µ within a chosen detection cone, of course, follows from the decay kinematics of the muon, and is obtained by boosting the familiar distribution of a muon decaying at rest to the requisite beam energy. Figure 2 shows, for various beam energies, the normalized angular distribution in the polar angle θ p , where N νµ (θ p ) is the number of muon neutrinos (prior to any oscillation) contained within a cone of half-angle θ p , demonstrating the expected forward peaking of muon neutrinos with increasing parent particle energy. This distribution peaks around 1.2 × 10^−4 radians for E µ = 500 GeV, around 3 × 10^−4 radians for E µ = 250 GeV, and even higher for E µ = 50 and 20 GeV. We remark here that for the wrong-sign muon detection mode discussed and used below, the parent ν e angular distribution differs from the ν µ distribution shown here, as dictated by decay kinematics.
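The boost kinematics behind the forward peaking of figure 2 can be illustrated with a few lines of code. This is a sketch, not the calculation used for the figure: the cone-fraction estimate assumes a flat cos θ* spectrum in the muon rest frame, whereas the true ν µ distribution from polarized muon decay is not isotropic.

```python
import math

M_MU = 0.1056583755  # muon mass in GeV

def lab_angle(theta_star, e_mu):
    """Lab-frame polar angle of a massless decay neutrino emitted at
    rest-frame angle theta_star from a muon of lab energy e_mu (GeV).
    For large boosts this reduces to tan(theta_star/2) / gamma."""
    gamma = e_mu / M_MU
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return math.atan2(math.sin(theta_star),
                      gamma * (math.cos(theta_star) + beta))

def fraction_in_cone(e_mu, theta_d, n=100000):
    """Fraction of neutrinos emitted isotropically in the muon rest frame
    that fall inside a lab cone of half-angle theta_d (flat-spectrum proxy)."""
    inside = 0
    for i in range(n):
        cos_star = -1.0 + 2.0 * (i + 0.5) / n  # midpoint grid in cos(theta*)
        if lab_angle(math.acos(cos_star), e_mu) <= theta_d:
            inside += 1
    return inside / n
```

For E µ = 250 GeV, γ ≈ 2.4 × 10³, so the characteristic opening angle 1/γ ≈ 4 × 10^−4 rad, the same scale as the peaking quoted above; a 10^−3 rad cone then captures most of the flux while a 10^−4 rad cone captures only a small fraction.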
Next, we note that the oscillation probability in equation (2) reduces to
P νµ−→ντ ≈ sin²2θ (1.27 ∆m² L/E ν )² (5)
for ∆m² L/E ν ≤ 0.1, and that this condition is satisfied for a significant range of ∆m² values, even for large baseline lengths, given the high energies contemplated for muon colliders. It is, for instance, valid for the favoured value of Super K results, ∆m² ≈ 10^−3 eV², sin²2θ ≈ 1, even for a relatively low 20 GeV muon beam and a 732 km baseline.
Since the average flux intercepted by a detector for some fixed θ d falls as 1/L², from equation (1) we see that N τ will be independent of baseline length as long as equation (5) is satisfied. This independence is illustrated in figure 3, where the tau events/kT/yr are plotted versus baseline lengths for three different values of θ d , for a 250 GeV muon beam energy and ∆m² = 10^−3 eV². Subsequent to oscillation, and when the detector area is taken into account, the enhanced collimation of the neutrinos with increasing beam energy manifests itself in an interesting manner. In general, for a significant part of the range of the energies of interest at muon colliders, σ CC ντ rises linearly with neutrino energy. For our choice of oscillation parameters, the probability P νµ−→ντ varies with energy as 1/E ν ² in equation (5), while the forward peaking of the neutrino beam with energy enhances the flux term as E ν ², leading to an overall linear increase in the event rate with energy for a detector of fixed mass whose area matches that of the kinematic decay cone at each energy. In practice, of course, L and R d (which fix θ d ) and E µ depend on and are constrained by various factors like geographical location of existing facilities, physics goals, cost and design considerations. It is thus useful to examine the behaviour of the τ production event rate when, for instance, L and R d are fixed and E µ is varied, with a view to optimization.
In figure 4 we plot the tau event-rate for θ d = 10^−3 radians, ∆m² = 2.2 × 10^−3 eV², sin²2θ = 1 and baseline length L = 732 km. (The choice of baseline is appropriate to a proposed beam from either CERN to Gran Sasso [20] or from Fermilab to the Soudan laboratory [21].) After the initial rise with energy, it peaks around E µ = 200 GeV, and then falls and flattens asymptotically with increasing energy. As the position of this peak with respect to E µ depends on θ d alone, it will remain invariant under changes of ∆m² and sin²2θ, as long as equation (5) remains a good approximation. Figures 5 and 6 compare the yields per kT-yr for various detection cones, respectively, as beam energy and ∆m² are varied. For low beam energies where forward peaking of the decay products is not pronounced, narrower detection cones contain fewer events/kT/yr, but for higher energies the behaviour is reversed (figure 5). For instance, a detector which subtends a θ d = 10^−4 radians cone will see roughly 11 times more events/kT/year for E µ = 500 GeV than one subtending θ d = 10^−3 radians, since it intercepts a higher average flux Φ νµ . The assumption of a uniform flux over the area of the detector can thus be misleading, since the event rate scales very non-linearly with the detector area. In figure 6, the rise in the event-rate as (∆m²)², signalled by equations (5) and (1) in conjunction, is clearly apparent up to ∆m² values of O(0.5) eV², after which the sinusoidal behaviour sets in.
In conclusion, as illustrated by figures 2-6, τ event rates from ν µ beams at muon colliders have several interesting characteristics which are relevant to experimental design and choice of baseline length and beam energy:
1. For a substantial and physically interesting range of ∆m², the event rate (events/kT/year) for a fixed θ d is, to a very good approximation, independent of baseline length for a wide range of beam energies (figure 3).
2. For a given choice of, say, baseline and detector area (i.e. fixed θ d ), the event rate is maximised at a particular beam energy, independent of the particular values of ∆m² and sin²2θ over a considerable portion of their range (figure 4).
3. The intense forward peaking expectedly renders detectors with smaller area superior to those with large area at high beam energies, for a fixed available mass of the detector, and we show the extent to which this affects actual event rates in figure 5.
Selection of Tau and Wrong-Sign Muon events
An important component of any study for τ appearance due to ν µ,e → ν τ oscillations is the event selection strategy for the τ 's produced from charged current interactions of the ν τ . This has been discussed in the literature in the context of several terrestrial experiments that are already in progress [22], [23]. Strategies for τ -detection have also received consideration in proposals for future experiments [24], [25].
For neutrino experiments using a muon storage ring, the detailed prescription for event selection can be formulated only after the detector design is specified. There are, however, some basic issues concerning the signals and the backgrounds which all experiments are likely to be concerned with. We base our predictions here on these considerations, implemented within a parton-level Monte Carlo calculation.
The results presented by us are in connection with a detector of mass 10 kT similar to that described in [12], placed perpendicularly to the muon beam axis, with a short, medium or large baseline, incorporating detailed tracking and particle identification facilities.
For the hadronic signal we consider the one-prong decay modes τ → πν τ , τ → ρν τ and τ → a 1 ν τ . The ρ ± subsequently decays into π ± π 0 . For the decay of a 1 , we have confined ourselves to the mode π ± π 0 π 0 which leads to a single charged-track. The branching ratios into these channels [27] are approximately 11%, 25% and 9% respectively, giving a substantial total branching ratio of about 45%. Thus, essentially one should look for one charged pionic track with 0, 1 or 2 neutral pions in a collinear configuration. The total energy, measured from deposits in the electro-magnetic and hadronic calorimeters gives the energy of the π ± , ρ ± or a 1 ± , which can be combined with the directional information to construct its three-momentum.
The backgrounds for signals of this kind (kinks in the charged tracks with missing p T ) can come, for example, from re-interaction of the hadronic jets coming out of the deep inelastic scattering (DIS) vertex, particularly in the case of neutral current events. Charmed particle production and decays in the DIS processes can also give rise to potential backgrounds. There is also the possibility of muons (from charged current events with no oscillation) being misidentified as pions. And finally, one can have the so-called 'white kinks' arising from one-prong nuclear interactions with no heavy ionising tracks. (These types of kinks usually have small p T , within about 500 MeV for our energy ranges.) With the above considerations in mind, our first set of results (for a 10 kT detector) implements the following event selection criteria for a 250 GeV muon beam [28]: • A minimum p T of 0.5 GeV.
• a minimum energy of 2 GeV for the one-prong decay products from the tau's.
• A minimum isolation of ∆R = 0.7 between the charged prong from tau-decay and the DIS products, where ∆R 2 = ∆η 2 + ∆φ 2 , ∆η and ∆φ being the differences in pseudo-rapidity and azimuthal angle respectively.
The last criterion ensures that the one-prong charged tracks characteristic of τ -decays are at such angles with the beam axis as to set them clearly apart from misidentified muons produced from unoscillated ν µ 's as well as from white kinks. At lower (higher) energies, the p T cut has to be slightly reduced (enhanced) in order to suppress backgrounds with the same effectiveness.
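The isolation variable used in these cuts is the usual ∆R = sqrt(∆η² + ∆φ²); the only subtlety in computing it is wrapping the azimuthal difference into (−π, π]. A minimal sketch, with hypothetical (η, φ) pairs rather than any event record format from this analysis:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation dR = sqrt(d_eta^2 + d_phi^2), with the azimuthal
    difference wrapped into [-pi, pi)."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def is_isolated_prong(prong, dis_products, dr_min=0.7):
    """Event-selection sketch: the charged prong (eta, phi) must be at
    least dr_min away from every DIS hadron (eta, phi)."""
    return all(delta_r(prong[0], prong[1], h[0], h[1]) >= dr_min
               for h in dis_products)
```

The same function with dr_min = 0.4 serves for the wrong-sign lepton selection described next.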
We find that the missing-p T and isolation cuts taken together can remove the entire set of backgrounds due to unoscillated charged-current events, whereas the neutral current backgrounds are adequately taken care of by the isolation cuts. Taking everything together, the approximate efficiency of tau detection in our parton-level calculation turns out to be 31% (including the branching ratio for one-prong decays). This is commensurate with the efficiencies expected in, for example, the OPERA experiment [25].
Our second set of results is based on tau appearance due to ν e oscillating into ν τ . These will lead to wrong-sign muons via charged current interactions. Such signals are relatively background-free; the only significant backgrounds come from charm production at the DIS vertex. For this set of our results we have:
• A minimum energy of 2 GeV for the wrong-sign charged lepton (muon or electron) from tau decay.
• A minimum transverse momentum P T of 0.2 GeV for the wrong-sign muon/electron.
• A minimum isolation of ∆R = 0.4 between the wrong-sign charged lepton from tau-decay and the DIS products.
Finally, although our focus here is on τ appearance, we also give results in Section 4 for the detection of ν e → ν µ oscillations, detectable again by the presence of wrong-sign muons, after implementing suitable cuts.
Contours for ν µ,e → ν τ oscillation searches at muon storage rings
In order to demonstrate the possibilities offered by muon storage rings for oscillation studies, we give the corresponding 90% C.L. contours for ∆m 2 and sin 2 2θ for two-flavour mixing. As mentioned earlier, we feel this is adequate at present to obtain a firm feel for the eventual potential of these experiments to comprehensively map the parameter space of neutrino masses and mixing.
Starting with equation (1) for N τ , the 'bare' events, and folding in the kinematic cuts described above for event selection and background elimination, one obtains N d τ , representing the actual candidate events. Requiring N d τ ≤ 2.44 then delineates the 90% C.L. parameter space. Thus, for each contour the average value of the probability is P̄ νµ,e→ντ = 2.44/N all τ (6) where N all τ is computed from equation (1) by setting P νµ,e→ντ = 1, representing total conversion of ν µ 's to ν τ 's, but imposing the cuts as before. We also note [17] that for each contour the reach in ∆m 2 , i.e. its minimum value, occurring when sin 2 2θ = 1, is given to a good approximation by equation (7). Since N all τ scales as the product of the flux and the cross-section (with the probability term set to 1), the scaling relation of equation (8) follows. Similarly, the 'knee' of each contour, i.e. the minimum value of sin 2 2θ probed, occurs when the other oscillating term in equation (2) is approximately 1; hence sin 2 2θ min ≈ P̄ νµ,e→ντ .
which implies the scaling relation of equation (10). Finally, the vertical asymptotic part of the contour, occurring when the values of ∆m 2 are high enough that the sine-squared term containing it in equation (2) averages to 1/2, has sin 2 2θ = 2P̄ νµ,e→ντ . In the regions corresponding to small mass-squared differences, all the contours for a given energy tend to merge. This again demonstrates the insensitivity to baseline in these regions. The scaling relation for sin 2 2θ min with length (for a fixed beam energy), equation (10), is also reproduced well in the contours. Once one compares different energies, however, the scaling is distorted by the presence of energy-dependent cuts and also marginally by the fact that the plots here are for muon beam energies, while the scaling relations are for neutrino energies. It is apparent from equations (8) and (10) and from the contours that very long baselines offer no advantages for oscillation studies employing neutrinos from muon storage rings, and carry the added burden of impractical detector sizes, at least if conventional detectors are employed 3 .
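The event-counting logic behind the contours can be made concrete with a small numerical sketch. The two-flavour probability P = sin 2 2θ · sin 2 (1.27 ∆m 2 L/E) is standard; the value of N all (events for total conversion, after cuts) used below is an assumed placeholder rather than a number from the paper, and a single representative neutrino energy stands in for the spectrum average:

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavour appearance probability P = sin^2(2θ) · sin^2(1.27 Δm² L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

def sin2_2theta_limit(dm2_ev2, L_km, E_gev, n_all):
    """Smallest sin^2(2θ) giving ≥ 2.44 expected events (90% C.L., zero background).

    n_all is the event count for full conversion (P = 1) after cuts --
    an assumed input here, not a number quoted in the text.
    """
    osc_factor = math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2
    if osc_factor == 0:
        return float('inf')
    return min(1.0, 2.44 / (n_all * osc_factor))

# Vertical asymptote: when the sin^2 term averages to 1/2,
# the limit tends to 2 · (2.44 / n_all), i.e. sin^2(2θ) = 2·P̄, as in the text.
```

Sweeping `dm2_ev2` over a logarithmic grid and plotting the returned limit traces one contour; the 'knee' at sin 2 2θ min ≈ P̄ and the high-∆m 2 asymptote both emerge from this construction.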
A 100 GeV muon beam and a 250 km baseline would extend that to 2 × 10 −4 eV 2 and 5 × 10 −7 respectively. For convenience, in figures 14-17 we reproduce our contours showing how different baselines (250 km and 732 km) compare for various energies for both modes of tau appearance discussed above. The distortion of the scaling relations, equations (8) and (10), due to the presence of experimental cuts is apparent here, being more pronounced at low beam energies. For 20 GeV and 50 GeV (figure 14), for instance, ∆m 2 min scales as ∼ 1/E ν rather than 1/ √ E ν . Finally, although our focus here has been the detection of ν τ appearance, the experiments discussed here are in a natural position to also study ν e → ν µ oscillations. The parameter regions which can be explored are shown in figure 18, after incorporating appropriate cuts to remove backgrounds for the wrong-sign muons. Clearly, the region identified by the LSND experiment [31] can be scrutinised with ease at muon storage rings.
Conclusions
We have studied the possibilities offered by muon storage rings (at various muon beam energies and baselines) for ν τ appearance experiments in order to determine masses and mixing angles for ν µ → ν τ and ν e → ν τ oscillations. Tau event rates for such experiments have first been discussed, prior to the inclusion of experimental cuts, in order to better understand how baselines, forward peaking of decay neutrinos with increasing energies, and average fluxes intercepted by detectors of various sizes affect their optimization. Subsequently, event rates implementing cuts for hadronic and wrong-sign leptonic modes are computed and used to plot contours for the parameter regions that can be explored in such experiments, and the expected scaling of the contours with energy and baseline is discussed. The results show that even for modest muon beam energies, convincing coverage and verification of the Super K parameters is possible. In addition, very significant enlargement of present day bounds on the mixing parameters for oscillations to ν τ is guaranteed by these types of searches.
In summary, neutrinos from muon storage rings appear to be ideal sources, providing unprecedented potential for oscillation studies in the next millennium and deserve very serious consideration.
Acknowledgement
We acknowledge useful conversations with D.P. Roy, D. Choudhury and Probir Roy and also thank S. Geer and B. King for helpful exchanges over e-mail. RG would like to thank the CERN Theory Division for hospitality while this work was in progress. | 2019-04-14T02:46:58.124Z | 1999-05-25T00:00:00.000 | {
"year": 1999,
"sha1": "5bd5932f93d1168791ba52dd38158a251c730db3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9905475",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7c7e8b24c7ddd2929874c2ff2e1d9a8fde2ae7da",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251163081 | pes2o/s2orc | v3-fos-license | Peer evaluation and feedback for invasive medical procedures: a systematic review
Background
There is significant variability in the performance and outcomes of invasive medical procedures such as percutaneous coronary intervention, endoscopy, and bronchoscopy. Peer evaluation is a common mechanism for assessment of clinician performance and care quality, and may be ideally suited for the evaluation of medical procedures. We therefore sought to perform a systematic review to identify and characterize peer evaluation tools for practicing clinicians, assess evidence supporting the validity of peer evaluation, and describe best practices of peer evaluation programs across multiple invasive medical procedures.
Methods
A systematic search of Medline and Embase (through September 7, 2021) was conducted to identify studies of peer evaluation and feedback relating to procedures in the field of internal medicine and related subspecialties. The methodological quality of the studies was assessed. Data were extracted on peer evaluation methods, feedback structures, and the validity and reproducibility of peer evaluations, including inter-observer agreement and associations with other quality measures when available.
Results
Of 2,135 retrieved references, 32 studies met inclusion criteria. Of these, 21 were from the field of gastroenterology, 5 from cardiology, 3 from pulmonology, and 3 from interventional radiology. Overall, 22 studies described the development or testing of peer scoring systems and 18 reported inter-observer agreement, which was good or excellent in all but 2 studies. Only 4 studies, all from gastroenterology, tested the association of scoring systems with other quality measures, and no studies tested the impact of peer evaluation on patient outcomes. Best practices included standardized scoring systems, prospective criteria for case selection, and collaborative and non-judgmental review.
Conclusions
Peer evaluation of invasive medical procedures is feasible and generally demonstrates good or excellent inter-observer agreement when performed with structured tools. Our review identifies common elements of successful interventions across specialties. However, there is limited evidence that peer-evaluated performance is linked to other quality measures or that feedback to clinicians improves patient care or outcomes. Additional research is needed to develop and test peer evaluation and feedback interventions.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12909-022-03652-9.
Introduction
Invasive medical procedures such as endoscopy, percutaneous coronary intervention (PCI), and bronchoscopy are highly effective for the diagnosis and treatment of disease when used appropriately [1][2][3]. However, variability in operator performance of these procedures has been widely reported, sometimes resulting in suboptimal procedural outcomes or patient harm [4][5][6][7]. Clinical societies therefore recommend standardized processes to assess clinician competency and to monitor care quality and outcomes [2,8,9].
Peer evaluation is one common mechanism for assessing procedural quality and providing meaningful feedback to physicians. Multiple formats have been described, including Morbidity and Mortality (M&M) conference, root cause analysis, and random case reviews. Peer review is mandated for some cardiac procedures [10], and clinicians perceive peer feedback to be highly useful [11,12]. Among procedural training programs, structured evaluation and feedback is now ubiquitous and there are numerous tools to guide the evaluation of trainees [13][14][15][16]. However, there is little guidance on how to optimally implement a peer evaluation program among practicing clinicians after the completion of mandatory training.
Peer evaluation may be particularly useful for the assessment of procedures within the field of internal medicine. These procedures can generate a durable record (photo, video, or angiography) and involve both clinical decision-making and technical performance. Since there is limited literature on this topic for any single procedure or subspecialty, we sought to review studies among all internal medicine procedural subspecialties and related specialties that use percutaneous or minimally invasive techniques, including interventional radiology and vascular surgery. We hypothesized that some characteristics of successful peer evaluation programs may be common among all invasive medical procedures. We therefore performed a systematic review to: 1) identify and characterize peer evaluation tools for practicing procedural clinicians; 2) assess evidence for the validity of peer evaluations; and 3) describe best practices of peer evaluation programs.
Methods
We conducted a systematic review according to the Preferred Reporting Items for Systematic Reviews and Metaanalyses (PRISMA) recommendations [17]. Our protocol is registered on the International Prospective Register of Systematic Reviews (PROSPERO) (CRD42020209345).
Data sources and searches
We conducted a search of Medline and Embase from database inception through September 7, 2021 using a search strategy developed in consultation with a research librarian (Louden D). Search strategies (Appendix) incorporated controlled vocabulary terms and keywords appropriate to each database to represent the concepts of peer evaluation and peer feedback for procedures in the field of internal medicine and related subspecialties.
Interventional radiology (IR) and endovascular surgical procedures were included since these commonly use percutaneous techniques similar to internal medicine subspecialty procedures. Reference lists of studies meeting the inclusion criteria were manually reviewed for additional articles.
Study selection
We imported citations into Covidence (Melbourne, Australia). We included a study if it was a clinical trial or an observational study (prospective or retrospective) published in English that reported on peer assessment and/ or peer feedback of internal medicine subspecialty, IR, or endovascular surgical procedures. We excluded a study if it reported only on trainee performance (medical students, residents, fellows) or only on the use of procedural simulators. Two reviewers (Doll JA, Thai TN) independently performed a title and abstract screen to identify potential citations for subsequent full-text review. Interreviewer discrepancies were resolved by consensus after full-text review by both reviewers. Included studies were reviewed with clinical content experts for appropriateness and completeness.
Data extraction and study quality
A standardized data abstraction form was created to extract prespecified data points from each included study (Appendix). Two reviewers (Doll JA, Thai TN) independently extracted qualitative data from each reference, including study type, procedure evaluated, scoring system, presence of agreement testing, feedback structure and content, outcomes assessment, and assessment of overall study quality. Study quality was assessed using a scale modified from the Oxford Centre for Evidence-based Medicine [18,19]. This scale rates studies from 1 to 5, with 1a as highest quality (systematic review of randomized controlled trials) and 5 as lowest quality (expert opinion). Differences in classification were resolved by consensus. The two reviewers jointly extracted quantitative data including number of procedures, number of evaluated clinicians, number of evaluators, and agreement testing results. We used the framework described by Messick to characterize evidence of validity for peer evaluation processes [20].
Study selection
The review process is depicted in the PRISMA flow chart (Appendix Fig. 1). A total of 2,703 citations were identified initially by our electronic search strategy; 568 duplicates were removed for a total of 2,135 for review. Of these, 90 full-text articles were reviewed, and 23 studies met our inclusion/exclusion criteria. After review of references of these articles, we included an additional 9 studies. The final sample of 32 studies included 21 from the subspecialty of gastroenterology, 5 from cardiology [42][43][44][45][46], 3 from pulmonology [47][48][49], and 3 from IR [50][51][52] (Table 1).
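As simple bookkeeping, the screening funnel above can be tallied directly; all numbers below come from the text:

```python
# Screening funnel tallies from the review (numbers from the text).
retrieved = 2703
duplicates = 568
screened = retrieved - duplicates          # 2135 title/abstract screens
full_text_reviewed = 90
met_criteria = 23
from_reference_lists = 9
included = met_criteria + from_reference_lists  # 32 studies in the final sample

by_specialty = {"gastroenterology": 21, "cardiology": 5,
                "pulmonology": 3, "interventional radiology": 3}
assert screened == 2135
assert included == 32 == sum(by_specialty.values())
```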
Peer evaluation and feedback processes
The studies reported peer evaluation using various methods or a combination of multiple methods: review of video or fluoroscopy recordings, in-person observation, and review of medical records. For gastroenterology procedures, most studies used retrospective review of videos. Shah et al. provided simultaneous recording of the endoscopists' hands in addition to the endoscopic intraluminal view and colonoscope configuration [34]. Most other gastroenterology studies provided the endoscopic view only, and some selectively edited the videos to concentrate on a specific task, typically a polypectomy. For cardiology studies, Rader et al. created a video of coronary angiography procedures that included a case description and views of the operators' hands and the fluoroscopy images [46]. Other cardiology studies included review of case records with the fluoroscopy images. The 3 pulmonology studies utilized endobronchial videos with associated ultrasound videos where appropriate [47][48][49]. IR reviews were performed collectively in a group setting by review of medical history and procedural details [50][51][52]. A scoring system for peer evaluation was developed or tested in 22 of the studies (Table 2) [21, 22, 24-28, 30, 33-41, 44, 46-49]. These scoring systems commonly included assessment of technical skills and clinical decision-making.
Feedback to clinicians was described in 10 studies [22,23,28,32,[42][43][44][50][51][52]. Feedback methods included personalized score cards, letters from review committees, and group discussion during case conferences. In Blows et al., each clinician was given a feedback report, benchmarked against peers, that included assessment of anatomical suitability for PCI, lesion severity, appropriateness of intervention strategy, and satisfactory outcome [44]. Caruso et al. describe a two-tiered process for IR reviews [50]. An initial review of random cases by peer radiologists would trigger a group discussion at M&M conference if any concerns about clinical management are identified.
Validity evidence
Inter-observer agreement of peer evaluations was tested in 18 of the studies [21, 22, 24-26, 29, 33-39, 41, 46-49], using various statistical methodologies including Cohen's kappa, Cronbach's alpha, intraclass correlation coefficient (ICC), Spearman correlation, and the generalizability theory (G-theory) (Table 2). All but two studies [25,46] demonstrated at least a moderate degree of agreement between observers, with most studies revealing good or excellent agreement (Table 2). Most studies described training on the use of the assessment instrument, and Gupta et al. demonstrated that assessors without training were unable to differentiate between expert and non-expert endoscopists [25]. Of the inter-observer agreement studies, six [24,36,40,[46][47][48] calculated the minimum number of observations required to reliably evaluate an operator. These estimates ranged from 1 assessor evaluating 3 procedures [47] to 3 assessors rating 7 procedures [46] to reach at least a moderate degree of agreement.
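As an illustration of the agreement statistics named above, unweighted Cohen's kappa for two raters can be computed as below. The eight paired scores are invented for illustration, and the Spearman-Brown "prophecy" step only mirrors, in simplified form, how a study might project the number of observations needed for a target reliability (the included studies used G-theory and related methods, not necessarily this exact formula):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def spearman_brown_n(r_single, r_target):
    """Observations needed per operator to reach a target reliability,
    given the reliability of a single observation (prophecy formula)."""
    return r_target * (1 - r_single) / (r_single * (1 - r_target))

# Invented example: two assessors scoring eight procedures on a 1-4 scale.
a = [3, 4, 2, 4, 3, 1, 4, 2]
b = [3, 4, 2, 3, 3, 1, 4, 2]
kappa = cohens_kappa(a, b)  # ≈ 0.83 for these invented scores
```

With a single-observation reliability of 0.5, `spearman_brown_n(0.5, 0.8)` gives 4 observations to reach 0.8, the same order as the 3-7 procedure estimates quoted above.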
Fifteen studies [25-27, 30, 33-38, 40, 41, 46-49] tested the relationship of peer evaluation to other variables by assessing clinicians with varying expertise. More experienced clinicians performed better than less experienced clinicians. Gupta et al. demonstrated that assessors using the Direct Observation of Polypectomy Skills (DOPyS) instrument could reliably distinguish between the expert and intermediate endoscopists [21]. Similarly, Konge et al. demonstrated the Endoscopic Ultrasonography Assessment Tool (EU-SAT) discriminates between trainees and experienced physicians with regard to ultrasonographic fine needle aspiration; the experienced physicians not only performed better than the trainees, but performance assessments were also more consistent [39]. The only exception, Shah et al., did not find a significant difference among colonoscopists who performed 100, 250, 500, or 1000 prior colonoscopies [34].
Only 4 studies described the association of peer evaluation with other quality measures [21,26,27,30]. Two studies of the Colonoscopy Inspection Quality (CIQ) tool [27,30] demonstrated that peer-evaluated technique was associated with adenoma detection rate (ADR), a key measure of quality since lower ADR is associated with increased risk of post-colonoscopy colorectal cancer [53]. Keswani et al. showed that novice CIQ scores significantly correlated with ADR and withdrawal time (WT), and novice proximal colon CIQ scores significantly correlated with serrated polyp detection rate [26]. However, Duloy et al. showed that polypectomy competency assessed by DOPyS did not correlate with the unrelated colonoscopy quality measures WT and ADR [21].
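Correlation analyses of this kind are typically rank-based (Spearman), since peer scores and detection rates need not be linearly related. A dependency-free sketch, with invented paired values, looks like this:

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Invented example: peer CIQ scores vs. ADR for five hypothetical endoscopists.
ciq = [2.1, 3.4, 2.8, 3.9, 3.1]
adr = [0.18, 0.31, 0.24, 0.35, 0.27]
rho = spearman_rho(ciq, adr)  # → 1.0 for this perfectly monotone example
```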
There were 6 studies [22,28,31,32,44,45] that assessed the impact of peer evaluation on clinician performance. None of these had a randomized design. Prospective observational designs were used in 5 studies [22,28,31,32,44] to measure clinician performance before and after implementation of a peer evaluation intervention. In Duloy et al., feedback was given in the form of a personalized polypectomy skills report card [22]. The mean performance score of polyps removed significantly increased in the post-report card phase. Four studies [28,32,44,45] provided feedback regarding case selection and procedural appropriateness; each demonstrated a decline in inappropriate procedures after the feedback period. In one study [31], clinician knowledge that they were being observed via videotaping (without receiving feedback) was associated with increased colonoscopy inspection time and improved measures of mucosal inspection technique. There were no studies linking peer evaluation and feedback to patient outcomes.
Best practices for implementation of peer evaluation
Finally, 6 studies [23,42,43,[50][51][52] described best practices for peer evaluation interventions without providing specific evidence of validity. Common elements included pre-specified criteria for case selection, a protected and non-punitive environment, and a focus on education and quality improvement. Doll et al. described a national peer review committee for PCI complications that provided operators with an overall rating and recommendations for improvement [43]. Luo et al. proposed that peer review in a group setting allows the operator an opportunity to provide context and rationale for clinical management [52]. All studies recommended routine, transparent processes that are applied to all clinicians in the group.
Discussion
This systematic review shows that peer evaluation for invasive medical procedures is feasible and has considerable evidence of validity, primarily based on studies reporting excellent inter-observer agreement. No randomized studies are available and there are limited studies demonstrating an association of peer-evaluated performance with other quality measures or patient outcomes. Additional research is needed to develop and test peer evaluation and feedback interventions, particularly using randomized designs and with meaningful clinical outcomes. However, this review identifies common elements of successful interventions across specialties and provides a template for hospitals or health systems seeking to establish or refine peer evaluation programs.

The importance of peer evaluation for proceduralists has been established since at least the 1990s [54,55]. Innovations in peer evaluation have been traditionally led by the surgical and anesthesiology communities, including the creation of the M&M conference that is now ubiquitous among both surgery and internal medicine training programs [56]. Surgeons have also outpaced the internal medicine sub-specialties in the validation of peer evaluation methods (17 unique tools are available for assessment of laparoscopic cholecystectomy, for example [57]) and in providing feedback and training interventions to improve performance [58]. Since the literature examining any specific procedure within the internal medicine subspecialties is limited, and since these procedures share many common characteristics, our review examines the validity and best practices of peer evaluation across multiple related procedures, including percutaneous procedures in IR.
Using the validity framework established by Messick and others [20], our review highlights substantial evidence of content, internal structure, and relationship to other variables sources of validity. Evaluation methods were typically developed by clinicians and utilized observation of performance either directly or via durable medical media such as videos. Inter-observer agreement was high for most tools. Evaluated performance mostly correlated to objective measures of experience such as level of training or number of procedures performed. However, the consequences source of validity was notably lacking since studies were not designed or powered to establish impacts on clinician performance or patient outcomes. In addition, studies variably reported response process information, and characteristics of scoring systems varied widely. Therefore, it is unclear if existing evaluative tools are optimized for clinical practice. Validity evidence is strongest for assessment of endoscopic and bronchoscopic procedures, and lacking or of low quality for some cardiac, pulmonary, and IR procedures.
For now, groups seeking to establish peer evaluation programs should use a tool with validity evidence when available ( Table 2). Existing scores share common elements. Performance is typically summarized across multiple domains with numerical values, often including a pre-specified threshold for competency. For example, for the Coronary Angiography Rating Scale (CARS), Rader et al. used an assessment form with 29 items to be scored on a scale of 1 to 5, and a summary score presented on a scale of 1 to 9 [46]. Similarly, for DOPyS (polypectomy), Gupta et al. describe a 33-point structured checklist and global assessment using a 1 to 4 scale [24]. These scores can provide feedback on specific components of the procedure under the direct control of the operator such as case selection/appropriateness, strategy and decisionmaking, technical skills, outcomes, and documentation, as well as an overall summary of performance. Since scoring systems are lacking for many procedures, clinical groups may consider adapting and testing scores from other procedures to meet their individual needs.
The optimal evaluative method will depend on institutional goals and resources. Direct observation of performance, for example, has the advantage of real-time assessment and visualization of all aspects of the procedure. Its disadvantages include lack of blinding/anonymity, substantial time burden for the assessor, and the potential for bias. Conversely, post hoc review of reports and images may be more objective and efficient, but may miss important procedural details or environment factors outside the control of the observed proceduralist.
Our review identified two general types of peer feedback programs. Group-based, collaborative peer review in the setting of M&M or case review conferences is recommended for non-judgmental, educational discussions. Cases are triggered for review by complications, poor patient outcomes, or high educational content. Alternatively, anonymous or blinded review may be more appropriate for quality surveillance, sometimes with random case selection. Individualized feedback to clinicians may identify opportunities for practice improvement.
Most included studies reported peer evaluation and feedback activities in the context of education and quality improvement programs. However, there may also be a role for peer evaluation for quality assessment or recertification for practice. In the United States, the Joint Commission on Accreditation of Healthcare Organization (JCAHO) requires assessment of clinician performance to obtain or retain hospital credentials (via Ongoing Professional Practice Evaluations (OPPE) and Focused Professional Practice Evaluations (FPPE)) [59]. Other countries and health systems use similar structures to ensure clinical competence and promote lifetime learning [60]. Standardized methods and scoring systems could enhance these efforts. For endoscopic gastroenterology procedures, there is potential for current peer assessment tools to be utilized as part of a standardized competency assessment [61]. However, this strategy has yet to be tested, and additional research is required to establish appropriate thresholds for clinician competency and excellence. Achieving widespread dissemination of these tools may require support from clinical societies and health systems, since clinicians will require support and resources to learn and apply these methods.
Our systematic review has several limitations that merit discussion. Only English language studies were reviewed. We excluded studies that solely examined trainee evaluation. While our aim was to examine peer evaluation of practicing clinicians, it is possible that some tools developed for trainees could also be useful in this setting. We found marked heterogeneity in the design of the included studies, and many were of low quality. This precluded meta-analysis of results. Many studies did not include a formal scoring system, and those that did used differing testing methods to assess validity. Some elements of successful peer evaluation may be highly specific to individual procedures. Our attempt to generalize across multiple invasive procedures may therefore miss important nuances that are highlighted by the procedure-specific studies. Finally, though our search strategy included procedure-specific terminology (e.g. "colonoscopy") and more general terms (e.g. "endovascular procedure") it is possible that our search was biased towards certain procedures and omitted important studies. However, review of reference lists from included studies did not reveal a significant body of literature missed by our search strategy.
Conclusion
Our systematic review describes common elements of peer evaluation and feedback interventions for a subset of invasive medical procedures. Peer evaluation is a feasible and reproducible method of assessing practicing procedural physicians. However, there are limited data on the relationship of peer evaluation to other quality measures and patient outcomes. Additional research is needed, ideally in the form of prospective and randomized outcomes studies evaluating the impact of peer evaluation on clinician performance and patient outcomes. | 2022-07-30T13:32:53.732Z | 2022-07-29T00:00:00.000 | {
"year": 2022,
"sha1": "d520f0b75f1265a40c5aa98dd9ecf439b0c6de9f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b91726cba48bc163336feed56155c51a8fd35f5f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236210714 | pes2o/s2orc | v3-fos-license | The Immune System through the Lens of Alcohol Intake and Gut Microbiota
The human gut is the largest organ with immune function in our body, responsible for regulating the homeostasis of the intestinal barrier. A diverse, complex and dynamic population of microorganisms, called the microbiota, supports this role and exerts a significant impact on the host during homeostasis and disease. In fact, intestinal bacteria maintain immune and metabolic homeostasis, protecting our organism against pathogens. The development of numerous inflammatory disorders and infections has been linked to altered gut bacterial composition, or dysbiosis. Multiple factors contribute to the establishment of the human gut microbiota. For instance, diet is considered one of the many drivers shaping the gut microbiota across the lifetime. By contrast, alcohol is one of the many factors that disrupt the proper functioning of the gut, compromising the integrity of the intestinal barrier, increasing the permeability of the mucosa and ultimately impairing mucosal immunity. This damage to the permeability of the intestinal membrane allows bacteria and their components to enter the bloodstream, reaching other organs such as the liver or the brain. Although chronic heavy drinking has harmful effects on immune system cells at the systemic level, this review focuses on the effects produced on the gut, brain and liver, because of their significance in the link between alcohol consumption, gut microbiota and the immune system.
Gut Microbiota and Immune System
The human body contains many different types of cells. These cells include both human cells (mainly erythrocytes) and non-human cells such as bacteria, fungi, yeasts and viruses. In fact, for a standard 70 kg human male, there are slightly more bacterial cells than human cells, with an estimated bacteria-to-human-cell ratio of about 1.3:1 [1]. This collection of microbes inhabiting the human body represents the human microbiota.
In the human body, the gut represents the organ with the largest surface area (approximately 32 m 2 ) [2] as well as the one with the highest number of microbes, especially in the colon, where the density of bacterial cells has been estimated at 10 11 to 10 12 per milliliter [3]. (A) The innate immune response is a very fast, pathogen-non-specific, first line of defense mechanism. It is mainly composed of macrophages, dendritic and natural killer cells, as well as different forms of granulocytes. The adaptive immune system is highly specific to a particular pathogen and is formed by B and T lymphocytes. (B) The gut microbiota is in close interaction with both the innate and the adaptive immune system. This interaction is frequently driven by SCFAs, which modulate local as well as systemic immune response. SCFAs can bind to G-protein-coupled receptors such as FFAR2 and FFAR3 present on the surface of gut epithelial cells and immune cells including dendritic cells, macrophages and neutrophils, and are therefore important regulators of inflammatory response. SCFAs also promote the activation of B cells and the development of Treg CD4+ T cells, for example increasing secretion of IL-10 with important anti-inflammatory effects. Suppression of inflammatory factors like cytokines is further achieved by the inhibition of histone deacetylases (HDACs) activity. Finally, SCFAs have been shown to modulate immune inflammation responses in extraintestinal organs such as the brain, by acting on microglia and astrocytes.
In the gut, the uptake of SCFAs by intestinal epithelial cells (IECs), mainly butyrate, promotes the integrity of the intestinal barrier, reducing intestinal permeability [20] and, therefore, preventing bacterial translocation through the gut wall and the resulting endotoxemia and associated immune response [17]. SCFAs also exhibit important anti-inflammatory effects on gut immune cells. For example, butyrate stimulated the differentiation of T-regulatory cells and increased levels of IL-10 while reducing production of IL-6 and inhibiting the expansion of pro-inflammatory Th17 cells [21]. Moreover, SCFAs show epigenetic regulatory effects by inhibiting HDACs, in this way promoting the suppression of inflammatory responses in immune cells [22,23], as well as promoting production of IgA and IgG antibodies by B cells. SCFAs have also been shown to be natural ligands for free fatty acid receptors 2 and 3 (FFAR2 and FFAR3, also known as GPR43 and GPR41, respectively) [24]. In particular, FFAR2 is highly related to immune cell function and mast cell activity, since it is expressed in neutrophils, macrophages and dendritic cells, among others (Figure 1B). Activation of FFAR2 has been associated with the maintenance of gut homeostasis and the regulation of inflammation related to diseases such as asthma, allergies, cardiovascular and fatty-liver disease [25].
SCFAs have been associated with the normal development of brain-resident immune cells, specifically microglia and astrocytes. In the brain, microglia are the most abundant immune cells and perform a variety of functions including phagocytosis, cytokine production and activation of the inflammatory response, among others [26]. As observed in germ-free mice as well as in animals presenting FFAR2 abnormalities, alterations in the gut microbiota lead to abnormal microglial abundance, morphology and gene expression patterns [27,28]. Astrocytes, on the other hand, are the most frequent glial cells in the brain and perform several immune-related functions, including the expression of pattern recognition receptors for the detection of microbial-associated molecular patterns (MAMPs) and modulation of the neuroinflammatory response [29]. Metabolites produced in the gut by metabolism of dietary tryptophan are able to bind to astrocyte aryl hydrocarbon receptors (AHR), thereby reducing proinflammatory factors (Figure 1B). Therefore, intestinal bacteria seem to be an important regulator of neuroinflammation. This idea has been supported by different studies using a mouse model of multiple sclerosis (experimental autoimmune encephalomyelitis, EAE) showing a protective effect of SCFAs through increased differentiation of IL-10-producing regulatory T cells. Altogether, this interaction between the gut microbiota and the immune system along the gut-brain axis plays an important role in the etiopathogenesis of psychiatric and neurological diseases such as autism spectrum disorder, depression and addiction, among others [13,30].
Under normal health conditions, the interaction between the liver immune system and the microbiome is limited. Only select substances can cross the intestinal barrier and move into the liver, with the bile ducts and the portal vein being the major connection points between the liver and the microbiome [31]. However, in certain contexts, when intestinal commensals and their products translocate from the intestinal lumen to the liver, hepatic immune responses may be affected [32]. For example, the number, functional activity, and maturational status of hepatic Kupffer cells (KCs), a critical component of the hepatic innate immune system, are directly related to the concentration of gut-derived MAMPs [33]. Intestinal pathogenic bacteria facilitate immune-mediated liver injury by activating dendritic cells (DCs) and natural killer T (NKT) cells in the liver [34]. Additionally, it has been reported that probiotics may contain bacterial glycolipid antigens that stimulate hepatic NKT cells in a strain-specific and dose-dependent manner [35].
Effects of Alcohol on Gut Microbiota
Alcohol addiction is a leading risk factor for death and disability. In 2016, the harmful use of alcohol resulted in some 3 million deaths (5.3% of all deaths) worldwide and 132.6 million disability-adjusted life years (DALYs), i.e., 5.1% of all DALYs in that year. Among men in 2016, an estimated 2.3 million deaths and 106.5 million DALYs were attributable to the consumption of alcohol; women experienced 0.7 million deaths and 26.1 million DALYs attributable to alcohol consumption [36].
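The WHO figures quoted above are internally consistent and also imply global totals, which can be checked with simple arithmetic:

```python
# Consistency check of the 2016 figures quoted above [36].
alcohol_deaths = 3.0e6        # deaths attributable to alcohol
share_of_all_deaths = 0.053   # 5.3% of all deaths
alcohol_dalys = 132.6e6
share_of_all_dalys = 0.051    # 5.1% of all DALYs

total_deaths = alcohol_deaths / share_of_all_deaths  # implied global deaths
total_dalys = alcohol_dalys / share_of_all_dalys     # implied global DALYs

# The sex-specific figures should add up to the global ones.
assert abs((2.3e6 + 0.7e6) - alcohol_deaths) < 1.0
assert abs((106.5e6 + 26.1e6) - alcohol_dalys) < 1.0
print(f"implied global deaths (2016): {total_deaths / 1e6:.1f} million")
```

The implied global totals (roughly 57 million deaths and 2.6 billion DALYs in 2016) are plausible orders of magnitude, supporting the quoted percentages.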
Alcohol abuse is a risk factor for liver diseases such as alcoholic steatohepatitis and cirrhosis [37]: approximately 25% of heavy drinkers develop clinically evident alcoholic liver disease (ALD).
Although alcohol is absorbed through the mucosa of the entire gastrointestinal tract by simple diffusion, it is mainly absorbed in the upper part of the tract [38], the majority of it (70%) in the small intestine [39]. Most alcohol metabolism in humans occurs in the hepatocytes, the main cells of the liver. Ethanol is metabolized by alcohol dehydrogenases (ADH), catalase or cytochrome P450 2E1 to acetaldehyde, which is then further oxidized to acetate by aldehyde dehydrogenase (ALDH) [40]. Ninety percent of moderately consumed alcohol is metabolized through oxidative conversion by alcohol dehydrogenase enzymes, while the microsomal ethanol-oxidizing system (MEOS) handles the remaining 10%; this last route acquires greater importance when alcohol consumption increases significantly. MEOS leads to the production of oxygen free radicals, which can cause cellular damage [41]. Besides the liver, the enzymes involved in the oxidative metabolism of alcohol are also present in the intestinal mucosa, and intestinal bacteria also produce acetaldehyde in the gastrointestinal tract [41].
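The ADH/MEOS partition described above can be sketched as a toy calculation. The 90%/10% split for moderate intake comes from the text; the dose and the larger MEOS share assumed for heavy intake are illustrative assumptions, not measured values.

```python
# Toy partition of oxidative ethanol metabolism between ADH and MEOS.
# The 90/10 split for moderate intake follows the text; the 25% MEOS
# share under heavy intake is an assumed illustration of the statement
# that MEOS gains importance as consumption rises.
def oxidative_partition(ethanol_g, heavy_intake=False):
    """Return (grams handled by ADH, grams handled by MEOS)."""
    meos_share = 0.25 if heavy_intake else 0.10
    return ethanol_g * (1 - meos_share), ethanol_g * meos_share

adh_g, meos_g = oxidative_partition(20.0)  # ~two standard drinks (assumed dose)
print(f"ADH: {adh_g:.1f} g, MEOS: {meos_g:.1f} g")  # ADH: 18.0 g, MEOS: 2.0 g
```

The point of the sketch is only that the MEOS route, and with it free-radical production, grows disproportionately with heavy consumption.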
The intestinal microbiota (IMB) is the set of microorganisms that inhabit our intestines. These microorganisms include, among others, bacteria, fungi, yeasts and viruses [42]. However, in most cases, when referring to the IMB, one usually refers to the populations of bacteria that have colonized our large intestine. Gut dysbiosis, which may result in an overgrowth of Gram-negative bacteria [38], can result from the direct toxicity of alcohol or from indirect mechanisms triggered by alcohol, such as alteration of gut motility [43], gastric acid output [44] and bile-acid metabolism [45], and an increase in fecal pH [46].
To date, most studies have reported that heavy alcohol consumption directly alters the biodiversity of gut microbes and produces dramatic changes in the relative abundance of some particular microbes, causing dysbiosis and inflammation in the gut [47][48][49]. Similar effects have been shown for moderate and chronic consumption in animal models [46,[50][51][52]. Intestinal dysbiosis correlates with the amount of alcohol consumed [47]. Although the changes are specific to the species studied (rodents or humans) and the alcohol ingestion protocol, there is a trend toward depletion of bacteria with anti-inflammatory activity, such as the Bacteroidetes and Firmicutes phyla, and an increase in bacteria with pro-inflammatory activity, such as Proteobacteria, following alcohol consumption [47][48][49]. Unlike chronic alcohol consumption, the binge drinking pattern (a frequent form of alcohol consumption, defined as 5 or more drinks for men and 4 or more drinks for women within 2 h) has not shown homogeneous results even under similar experimental designs. Some studies have found an effect of binge drinking on the IMB (increased 16S rDNA levels) [53], but others have obtained negative results [54]; therefore, more studies are needed to elucidate this relationship.
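The binge-drinking definition quoted above (5 or more drinks for men, 4 or more for women, within a 2 h window) is a simple threshold rule and can be expressed as a small classifier:

```python
# Binge-drinking rule as quoted in the text: >=5 drinks (men) or
# >=4 drinks (women) within 2 hours.
def is_binge(drinks, hours, sex):
    """sex: 'M' or 'F'; `drinks` consumed within `hours`."""
    if hours > 2:
        return False
    threshold = 5 if sex == "M" else 4
    return drinks >= threshold

print(is_binge(5, 2, "M"))  # True
print(is_binge(4, 2, "M"))  # False
print(is_binge(4, 2, "F"))  # True
```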
By incompletely understood mechanisms, alcohol abuse leads to a disruption of intestinal barrier integrity which, in combination with the mucosal injury induced by alcohol, increases the permeability of the mucosa [55]. The intestinal barrier is a semipermeable structure that allows the uptake of essential nutrients and immune sensing while being restrictive against pathogenic molecules and bacteria [56]. It is composed of multiple layers of defense, which include mucus with antimicrobial peptides and immunoglobulin A molecules, a monolayer of epithelial cells firmly joined by tight junction proteins, and the inner lamina propria, where the immune cells reside and play an essential role in protecting the intestinal mucosa against invading bacteria [57]. Numerous studies have demonstrated that ethanol, its metabolites, and alterations of the gut microbiome suppress intestinal tight junction protein expression [58][59][60][61], causing the epithelial layer to become leaky or "permeable". Alcohol-increased gut permeability affects mucosal immunity and allows the translocation of bacteria or some critical components of their membranes into the bloodstream [47], reaching other organs that can be damaged. LPS (lipopolysaccharide), the main component of the Gram-negative bacterial membrane, and other bacterial metabolites reach the liver via the portal vein, where they can induce the activation of inflammatory processes. A study in rats has shown that only two weeks of alcohol administration disrupt the intestinal barrier and that, after two more weeks, liver injury occurs [62]. In the liver, gut-derived molecules interact with the hepatocytes, parenchymal cells, and immune cells, causing injuries including hepatic steatosis, hepatitis, fibrosis, cirrhosis, and hepatocellular carcinoma [63].
The liver is not the only organ distant from the gut that has been associated with the deleterious effects of alcohol-induced intestinal dysbiosis. The brain is also a target of the gut microbiota. In recent years, there has been a growing awareness of the crosstalk between our intestinal bacteria, the central nervous system (CNS) and behavior [64] (Figure 2).
Figure 2. Principal signaling pathways and molecules involved in the communication from the microbiota/gut to the brain and liver. The gut microbiota can signal to the brain and liver through multiple direct and indirect mechanisms. The microbiota produces neurotransmitters, tryptophan metabolites and fermentation metabolic by-products such as short-chain fatty acids (SCFAs), and acts through the release of cytokines by immune cells and gut hormone signaling. Some of these molecules can activate the vagus nerve or reach the brain and liver via the systemic circulation. Alcohol consumption causes dysregulation of the intestinal microbiota, which alters this communication and subsequently causes alterations in brain and liver functions.
Numerous sources of evidence gathered from experiments carried out in rodents show that modifications in the composition of the gut microbiota impact brain functions and behavioral aspects [65], including the predisposition to high alcohol consumption [66]. Leclercq et al. [67] found a correlation between leaky gut and inflammation and modifications in scores of depression, anxiety and social interaction in alcohol craving. Along the same line, it has been shown that rats replicate several behavioral and biochemical alterations after stool transplantation from patients with depression and anxiety behaviors [68]. In the study of Xiao et al. [52], healthy mice transplanted with microbiota from alcohol-exposed mice developed emotional symptoms, such as anxiety, which occur during abstinence.
The IMB maintains a bidirectional interaction with critical parts of the CNS [68]. The microbiota-gut-brain axis connects both organs not only through neuronal signals (neurotransmitters); it also depends on endocrine signals (hormones and gut peptides), immune signals (cytokines), and microbiota-derived metabolites (short-chain fatty acids (SCFAs), branched-chain amino acids, and peptidoglycans) acting together to regulate host physiology and microbiota composition [64]. The gut microbiota is able to produce several of the aforementioned metabolites, which act on enteroendocrine cells or the vagus nerve, or, by translocation through the gut epithelium into the systemic circulation, may have an impact on host physiology.
The vagus nerve is the fastest and most direct route connecting the gut and the brain; it is composed of afferent and efferent fibers [69]. This nerve transmits information from the gastrointestinal, respiratory and cardiovascular systems and provides feedback to the viscera. Gut-brain signaling occurs primarily via the vagus nerve: vagal afferents sense intestinal molecules, e.g., intestinal hormones, neurotransmitters or bacterial by-products [64]. Alterations of vagal activity at the intestinal level are associated with bacterial overgrowth and bacterial translocation [70]. As observed by Freeman et al. [71], during alcohol withdrawal and chronic alcohol feeding there is a dysregulation of vagal signaling that could result in neuroinflammatory processes.
The main products of the fermentation of dietary fiber, SCFAs (principally acetate, propionate and butyrate), are considered among the main direct or indirect mediators of microbiota-gut-brain interactions [72]. The highest production of SCFAs occurs in the proximal colon, where they are quickly and efficiently absorbed, since only 10% of the acids are excreted with the feces [73]. The rest of the SCFAs reach the circulatory system via the superior or inferior mesenteric vein, reaching the brain and crossing the blood-brain barrier thanks to monocarboxylate transporters, thus being able to act as signaling molecules between the gut and the brain [74]. IMB metabolic activity can be modified by chronic alcohol consumption. Specifically, chronic alcohol consumption could reduce SCFA levels through the reduction of some Firmicutes genera, such as Faecalibacterium and Ruminococcaceae, on which the production of SCFAs depends [75,76]. Furthermore, it has been described that alcohol consumption also affects other microbiota-derived metabolites, leading to increases in branched-chain amino acids [77] and peptidoglycans [78]. However, studies showing the effect of alcohol on these microbiota-derived metabolites are scarce.
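The ~10% fecal loss quoted above implies a simple mass balance for colonic SCFAs. The daily production figure used below is an illustrative assumption (the text gives only the excreted fraction, not an absolute production rate):

```python
# Rough mass balance for colonic SCFAs. Only the 10% fecal loss comes
# from the text [73]; the assumed daily production is illustrative.
produced_mmol = 500.0   # assumed daily colonic SCFA production
fecal_loss = 0.10       # ~10% excreted with the feces

absorbed_mmol = produced_mmol * (1 - fecal_loss)
print(f"absorbed: {absorbed_mmol:.0f} mmol/day")  # 450 mmol/day
```

Whatever the absolute production, the balance shows that the overwhelming majority of SCFAs is absorbed and available for systemic (mesenteric/portal) signaling.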
Alcohol alters the composition of the IMB, resulting in an alteration of the amount and type of neuroactive substances produced by the microbiota, which may lead to behavioral alterations [79]. Gut-brain communication is disrupted by alcohol-related immune and gut dysfunction [80]. Alcohol modifies the intestinal microbiota, pH and permeability of the intestine, causing an increased entry of endotoxins into the CNS and leading to neuroinflammatory processes.
Effects of Alcohol on Immune System: Putting All the Pieces Together
Traditionally, it has been described that alcohol acts on the immune system depending on several variables, including the consumption pattern. Thus, several studies indicate that light to moderate consumption leads to reduced levels of systemic inflammation or improved responses to vaccines. In contrast, chronic heavy drinking (CHD) is often associated with a deficient immune response [15,81]. In this way, this consumption pattern is associated with an increased risk of infection by several viruses [82], and it has been suggested that it may lead to greater severity and mortality in the recent COVID-19 pandemic [83][84][85]. In addition, subjects with Alcohol Use Disorders (AUD) show worse postoperative recovery, a poor response to vaccination and a slower recovery from infections [81]. CHD alters innate and adaptive immune responses [82,86] and, through them, can affect a large number of systems, since this type of consumption has been associated with damage to different tissues such as the pancreas, liver, gut, circulatory system and nervous system [87], and several studies attribute, at least in part, a role to persistent systemic and local inflammation in these conditions [88].
Some of the effects of CHD on cells of the immune system include reductions in T-cell numbers, loss of naïve T-cells, increased CD8+ T-cell activation and proliferation, and alterations in monocytes [81,89]. Together with the effect of alcohol consumption on Toll-like receptors [90][91][92], one of the most frequently reported findings is the upregulation of several cytokines after alcohol administration [93]. In fact, a recent meta-analysis [94] studied the differences in the cytokine patterns presented by subjects with AUD and concluded that they show higher concentrations of cytokines than control patients. Furthermore, these authors found clear differences depending on the stage of AUD illness: active drinking, withdrawal and various periods of abstinence. Such results are very interesting for developing potential biomarkers of alcohol consumption [95], as well as pharmacological alternatives to treat alcoholism [96]. Although the effect of alcohol on the immune system occurs at the systemic level and affects various organs, we will focus on the effect of this substance on the gut, brain and liver (Figure 3), due to the importance of these organs in the relationship between alcohol consumption, the intestinal microbiota and the immune system [97].
The gut is the largest organ with immune function in our body [98] and, in order to regulate the immune response, it must maintain the homeostasis of the intestinal barrier [99,100]. As mentioned above, alcohol consumption increases intestinal permeability through the suppression of intestinal tight junction protein expression. This alteration allows the translocation of bacterial products to the systemic circulation. The gut-derived bacterial components, together with LPS, activate the immune cells localized in the systemic circulation (peripheral blood mononuclear cells) or in target organs [101]. The release of LPS into the bloodstream results in the activation of two important targets of the immune response: TLR4 and nucleotide-binding domain leucine-rich repeat containing 3 (NLRP3), or cryopyrin. In that sense, research on the role of TLRs in the pathogenesis of alcoholism has revealed that these receptors mediate the development of a neuroinflammatory effect in the CNS derived from alcohol consumption [102,103].
The activity of these receptors triggers the activation of a number of molecular pathways that result in the expression of genes of the innate immune system, mainly proinflammatory factors, that contribute to a permanent neuroinflammatory state of the CNS. A study conducted in 2015 showed that, by blocking TLR4 function, most of the neuroinflammatory effects produced by ethanol were diminished [104]. In another study, adolescent mice that consumed ethanol intermittently (3 g/kg) for two weeks showed that this consumption pattern leads to activation of TLR4 signaling pathways and up-regulation of cytokines and proinflammatory mediators, in addition to synaptic and myelin alterations. TLR4-deficient mice were protected from such neuroinflammation, synaptic and myelin alterations, as well as from long-term cognitive alterations [105].
Interestingly, in addition to supporting neuroinflammation, TLR signaling is likely engaged in the mechanisms regulating the functional activity of neurotransmitter systems, which may contribute to the formation of a pathological demand for alcohol [106]. Together with TLR activation, the production of cytokines, which can cross the blood-brain barrier (BBB), has harmful effects at the CNS level [102]. In this respect, the BBB is known to be a major target of alcohol. Long-term consumption produces serious impairments in BBB permeability and integrity, since alcohol inhibits the expression of BBB structural and functional proteins, promoting inflammation and oxidative stress [107].
The immune response, therefore, would be one of the main channels through which the gut-brain axis establishes communication [108]. Since alcohol is responsible for inducing changes in this communication, leading to peripheral and central inflammation [109], dysfunction of the gut microbiota and the subsequent effects on the immune system are linked to the development of mental illnesses, brain dysfunction and neurodegenerative disorders like Alzheimer's and Parkinson's diseases [110][111][112][113]. Interestingly, central neuroinflammation is maintained after cessation of alcohol consumption, in contrast to peripheral activation [114], and during periods of abstinence [108]. Finally, in relation to the effect of alcohol on neuroinflammation, a study by Lowe et al. showed an attenuation of alcohol-induced neuroinflammation after reducing the gut bacterial load as a result of antibiotic treatment [115]. We could hypothesize that, by reducing the gut bacterial load, lower amounts of bacterial components would reach the systemic circulation, leading to reduced activation of pro-inflammatory components.
In addition to the central inflammatory effect, CHD induces a peripheral inflammatory response that plays an important role in the development of alcoholic liver disease (ALD) [108]. ALD is a broad term that refers to a variety of liver ailments. In particular, numerous clinical and experimental studies [116][117][118][119][120] have revealed the role of immunology in fueling inflammation and the progression of ALD. As mentioned before, alcohol consumption modifies the barrier function of the intestinal mucosa, leading to an increased bacterial load together with high levels of LPS that enter the portal circulation through the alcohol-disrupted gut barrier. LPS activates innate immunity via TLRs expressed by immune cells, producing immunological challenges that disrupt the liver's finely tuned immune pathways [121][122][123]. Other cellular sensors of pathogen- or damage-associated molecular patterns (PAMPs/DAMPs) are further activated, leading to the generation of pro-inflammatory cytokines like TNF-α and ILs, which contributes to ALD [123]. Both innate and adaptive immunity are known to have a role in the pathogenesis of ALD [124]. As a result of continued alcohol misuse, alcoholic hepatitis and fibrosis develop. At this point, the oxidative breakdown of alcohol limits the function of immune cells like natural killer (NK) cells, which normally cause activated hepatic stellate cells (HSCs) to undergo apoptosis, resulting in mild fibrosis [125][126][127]. Finally, fibrotic distortion of tissues and blood vessels, as well as cell necrosis, characterize the ultimate stage of ALD. The failure of the liver to eliminate microbial and other circulating pro-inflammatory molecules, as well as the release of immunogenic cellular debris from necrotic hepatocytes, results in prolonged immune system activation and worsens the condition [128,129].
Conclusions
Chronic excessive alcohol consumption causes inflammation in a variety of organs, including the gut, brain and liver. While alcohol has direct effects on the gastrointestinal tract when it comes into contact with the mucosa, the majority of alcohol's biological effects are due to its systemic dispersion and delivery through the blood. Alcohol has been proven to affect the microbiome of the gastrointestinal tract, with alcoholics having a different composition and a higher bacterial load in their gut. Once the integrity of the gut mucosa is impaired, LPS enters the portal circulation, contributing to enhanced inflammatory changes in other organs such as the liver and brain.
"year": 2021,
"sha1": "5da0103589b160fe4b2518c1d70698d0c94c2191",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/14/7485/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c6f0bc5e379da16e65fbe42a344918b345011fa",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Electrocatalytic Reduction of Dinitrogen to Ammonia with Water as Proton and Electron Donor Catalyzed by a Combination of a Tri-ironoxotungstate and an Alkali Metal Cation
The electrification of ammonia synthesis is a key target for its decentralization and for lowering its impact on atmospheric CO2 concentrations. The lithium-metal electrochemical reduction of nitrogen to ammonia using alcohols as proton/electron donors is an important advance, but requires rather negative potentials and anhydrous conditions. Organometallic electrocatalysts using redox mediators have also been reported. Water as a proton and electron donor has not been demonstrated in these reactions. Here, a N2-to-NH3 electrocatalytic reduction using an inorganic molecular catalyst, a tri-iron-substituted polyoxotungstate, {SiFe3W9}, is presented. The catalyst requires the presence of Li+ or Na+ cations as promoters through their binding to {SiFe3W9}. Experimental NMR, CV and UV–vis measurements, together with MD simulations and DFT calculations, show that the alkali metal cation enables the decrease of the redox potential of {SiFe3W9}, allowing the activation of N2. Controlled-potential electrolysis with highly purified 14N2 and 15N2 ruled out formation of NH3 from contaminants. Importantly, using Na+ cations and polyethylene glycol as solvent, the anodic oxidation of water can be used as the proton and electron donor for the formation of NH3. In an undivided cell electrolyzer under 1 bar N2, rates of NH3 formation of 1.15 nmol s–1 cm–2, faradaic efficiencies of ∼25%, 5.1 equiv of NH3 per equivalent of {SiFe3W9} in 10 h, and a TOF of 64 s–1 were obtained. The future development of suitable high-surface-area cathodes and well-solubilized N2, and the use of H2O as the reducing agent, are important keys to the future deployment of electrocatalytic ammonia synthesis.
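The reported faradaic efficiency can be sanity-checked with Faraday's law, since N2 reduction consumes 3 electrons per NH3 (N2 + 6 H+ + 6 e− → 2 NH3). The NH3 amount below follows from the abstract's 5.1 turnovers on 1 µmol of catalyst (0.5 mM in 2 mL, as in the electrolyzer descriptions); the total passed charge is a back-calculated assumption, not a value reported in the text.

```python
# Faraday's-law check of the ~25% faradaic efficiency (FE).
# 3 electrons are required per NH3 produced.
F = 96485.0            # Faraday constant, C/mol
n_nh3 = 5.1e-6         # mol NH3: 5.1 equiv on 1 umol {SiFe3W9} (0.5 mM x 2 mL)
charge_passed = 5.9    # C over 10 h (assumed, back-calculated value)

fe = 3 * F * n_nh3 / charge_passed
print(f"faradaic efficiency ~ {100 * fe:.0f}%")  # ~25%
```

With roughly 6 C passed over 10 h (an average current well under a milliampere), the 5.1 turnovers are consistent with the stated ∼25% faradaic efficiency.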
Figure S2. Cyclic voltammetry of the aldehyde. The measurement conditions were 6 mL dry THF containing 0.1 M TBAPF6 and 110 mM aldehyde. The solution was purged for 30 min with N2. A glassy carbon disc working electrode, a platinum wire counter electrode, and a Fc/Fc+ reference electrode were used at a scan rate of 100 mV/s.

Figure S3. UV–vis spectra of TBA{SiFe3W9}, Li+ and N2 before and after electrolysis. There are some changes in the intensities of the peaks that, in comparison with Figure 4d, indicate the presence of a residual amount of the 1-electron-reduced species.
+ 10Li+ + 3ClO4− + continuum dielectric. Orbital energies are in eV. Fe, W and O labels represent the atoms with the highest contribution to the molecular orbitals.

The reduction potential of a polyoxometalate depends on the absolute energies of the LUMOs. In the gas phase, the molecular orbitals of a polyoxometalate are, in general, very high in energy because of the negative charge of the anion. In solution, the solute polyoxometalate orbitals are much lower in energy due to the electric field created by solvent molecules and counter cations. For highly charged compounds such as the {SiFeIII3W9O37}7− anion under consideration here, continuum solvent methods were unable to correctly simulate the environment (solvent + counterion) effects. The consequence is that the frontier molecular orbital energies are excessively high. In addition, and importantly, under the present experimental conditions, MD simulations show that several Li+ ions are in direct contact with the polyoxometalate, introducing an extra stabilization of the polyoxometalate which cannot be reproduced by an implicit solvation method. Therefore, addition of a Li+ salt to the THF solution drastically changes the properties of the polyoxometalate anion, the redox activity being one of the most affected properties.

In the absence of THF ligands, all attempts to bind an N2 to one of the Fe(II) centers failed, as it was impossible to find an energy minimum in the region close to 1.9 Å. We find a decrease in energy only when N2 moves away from the metallic center.

Reaction conditions: 10 mL PEG-400 containing 0.1 M TBAPF6, with or without 0.5 mM TBA{SiFe3W9}, 25 mM NaClO4 with 1 vol% water under 1 bar N2 for 3 h, using a copper wire working electrode, a platinum wire counter electrode, and a Ag/AgCl reference electrode. For the 15N2 experiment, it should be noted that, due to the high viscosity of PEG-400 and its low volatility, excellent results in degassing the solvent to remove 14N2 were obtained by purging with He for 30 min at 60 °C, followed by the introduction of 15N2. The residual 14NH3 peak is attributed to the isotopic purity (98%) of the 15N2 used and possibly other small contaminations. The coupling constant for 14NH3 is 53 Hz; the coupling constant for 15NH3 is 72 Hz.

The reaction was carried out in an undivided cell electrolyzer, consisting of a 0.13 cm² Ni mesh cathode and a stainless-steel anode, loaded with 2 mL PEG-400 containing 0.5 mM Na{SiFe3W9}, 1 vol% H2O, and 0.1 M NaCF3SO3 under 1 bar N2, operated at −1.3 V versus SHE. The reaction was also carried out in an undivided cell electrolyzer, consisting of a 0.13 cm² Cu foam cathode and a stainless-steel anode, loaded with 2 mL PEG-400 containing 0.5 mM Na{SiFe3W9}, 1 vol% H2O, and 0.1 M NaCF3SO3 under 1 bar N2, operated at −1.3 V versus SHE for 10 h. In an undivided cell electrolyzer consisting of a 0.25 cm² Cu foil cathode and a stainless-steel anode, loaded with 2 mL PEG-400 containing 0.5 mM Na{SiFe3W9}, 1 vol% H2O, and 0.1 M NaCF3SO3 under 1 bar N2, operated at −1.3 V versus SHE, ~900 nmol NH3 was obtained. The current obtained is shown in black. After the cathode was removed after 2 h and given a gentle wash, the reaction was continued with the same cathode for another 2 h (red line).
Figure S4 .
Figure S4. Calibration of various electrodes in dry THF in the presence of 0.1 M TBAPF6 as electrolyte, using Pt wires as counter and reference electrodes. Fc/Fc+ was measured using a 4 mM solution of ferrocene with a Pt disk working electrode. The Ag wire and Ag/AgCl were measured using them as working electrodes.
Figure S5 .
Figure S5. (a) Electrostatic isopotential surface showing the most negative potential (nucleophilic) wells of the {SiFe3W9O37} anion. (b) Computed molecular electrostatic potential mapped onto an isodensity surface of 0.0004 for {SiFe3W9O37}. Lithium-center coordination modes are also shown at bridging (green) and terminal (blue) oxygens. Three lithium cations preferentially approach the three nucleophilic wells near [Fe3O3], but only one is localized in a well generated by [W4O4]. We notice that this Li+ cation is also coordinated to a ClO4−, which is in turn attached to another Li+ at terminal oxygens. (c) Radial distribution functions (RDFs, g(r)) (black line) between {SiFe3W9O37} (Si or Fe as reference) and, from left to right, N of TBA, Li+, Cl of ClO4−, and O of THF. The integral of g(r) (red line) is also included.
Figure S10 .
Figure S10. Representation of the three MOs occupied upon the 3-electron reduction in the THF-containing model (see main text for details). Two electrons are delocalized over the three Fe centers, with a lower contribution from the Fe bound more strongly to one of the THF ligands, and the third electron is delocalized among the W centers.
Figure S11 .
Figure S11. Schematic molecular orbital diagram for the 3-electron-reduced systems: I) {SiFeII3W9O37}7− + 10Li+ + 3ClO4− + continuum dielectric and II) {(THF)3SiFeII2FeIIIWVI8WVO37}7− + 10Li+ + 3ClO4− + continuum dielectric. The binding of three solvent molecules to the polyoxometalate induces a significant change in its electronic structure. In particular, the three lowest occupied (beta) d(Fe) molecular orbitals are destabilized by the presence of the THF ligands, causing the transfer of one electron to the polyoxotungstate framework. Orbital energies are in eV. Fe, W and O labels represent the atoms with the highest contribution to the molecular orbital.
Figure S12 .
Figure S12. The curves show how the energy (blue) and the Fe···N2 distance (black) change during the optimization process when trying to coordinate an N2 molecule to the 3-electron-reduced catalyst. In the absence of THF ligands, all attempts to bind N2 to one of the Fe(II) centers failed, as it was impossible to find an energy minimum in the region close to 1.9 Å. We find a decrease in energy only when N2 moves away from the metallic center.
Figure S13 .
Figure S13. 1H NMR (selgpse, 500.08 MHz) after 5 h CPE in an electrolyzer: 0.1 M TBAPF6, 0.5 mM {SiFe3W9}, 25 mM LiClO4 in THF with 1 vol% ethanol as proton donor, under 1 bar 14N2 (blue) or 15N2 (red), using a copper foil working electrode and a stainless-steel counter electrode. The residual 14N peaks in the 15N2 experiment are associated with the isotopic purity of the 15N2 used, experimental difficulties encountered in purging 14N2 from volatile THF (see Figure S16, where PEG-400 was the solvent and no purging difficulties were encountered), and possibly atmospheric contamination by 14NH3. The coupling constant for 14NH3 is 53 Hz; the coupling constant for 15NH3 is 72 Hz.
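The 14N vs 15N identification above relies only on nuclear spin: coupling to 14N (I = 1) gives a 1:1:1 triplet with spacing J, while coupling to 15N (I = 1/2) gives a doublet split by J. A minimal sketch of the expected satellite offsets (the function name is illustrative, not from the SI):

```python
def satellite_positions_hz(center_hz, J_hz, spin_I):
    """1H line positions from coupling to one nucleus of spin I: 2I+1 lines spaced by J."""
    n = int(round(2 * spin_I + 1))
    return [center_hz + J_hz * (k - (n - 1) / 2) for k in range(n)]

# 14N (I = 1): 1:1:1 triplet, J = 53 Hz
print(satellite_positions_hz(0.0, 53.0, 1.0))  # -> [-53.0, 0.0, 53.0]
# 15N (I = 1/2): doublet, J = 72 Hz
print(satellite_positions_hz(0.0, 72.0, 0.5))  # -> [-36.0, 36.0]
```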
Figure S14 .
Figure S14. Calibration of various electrodes in dry PEG-400 in the presence of 0.1 M TBAPF6 as electrolyte, using Pt wires as counter and reference electrodes. Fc/Fc+ was measured using a 4 mM solution of ferrocene with a Pt disk working electrode. The Ag wire and Ag/AgCl were measured using them as working electrodes.
Figure S17 .
Figure S17. Current versus time profile for N2 reduction on Cu. The reaction was carried out in an undivided cell electrolyzer consisting of a 0.13 cm2 Cu foam cathode and a stainless-steel anode, loaded with 2 mL PEG-400 containing 0.5 mM Na{SiFe3W9}, 1 vol% H2O, and 0.1 M NaCF3SO3, under 1 bar N2, operated at −1.3 V versus SHE for 3 h.
Figure S18 .
Figure S18.Current versus time profile for N2 reduction on Ni.
Figure S19 .
Figure S19.Current versus time profile for N2 reduction on Cu for a 10 h reaction.
Figure S21 .
Figure S21. Recovered cathode experiment. In an undivided cell electrolyzer consisting of a 0.25 cm2 Cu foil cathode and a stainless-steel anode, loaded with 2 mL PEG-400 containing 0.5 mM Na{SiFe3W9}, 1 vol% H2O, and 0.1 M NaCF3SO3, under 1 bar N2 and operated at −1.3 V versus SHE, the electrolyzer yielded ~900 nmol NH3. The current obtained is shown in black; after removal of the cathode at 2 h and a gentle wash, the reaction was continued with the same cathode for another 2 h (red line).
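For context, the Faradaic efficiency implied by an NH3 yield follows from the 3-electron-per-NH3 stoichiometry of N2 reduction (N2 + 6H+ + 6e− → 2NH3). A minimal sketch; the total charge of 2 C below is a hypothetical number for illustration only, not a value reported in the experiment:

```python
F = 96485.0  # C/mol, Faraday constant

def faradaic_efficiency(n_nh3_mol, charge_C):
    """Fraction of the passed charge stored in NH3 (3 electrons per NH3)."""
    return 3.0 * F * n_nh3_mol / charge_C

# e.g. ~900 nmol NH3 (the yield quoted above) with an assumed 2 C total charge
fe = faradaic_efficiency(900e-9, 2.0)
print(f"Faradaic efficiency: {fe:.1%}")  # -> Faradaic efficiency: 13.0%
```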
Figure S25 .
Figure S25. Thermogravimetric analysis plot of TBA{SiFe3W9}. The weight loss between 250 and 450 °C is attributed to the pyrolysis of TBA, leading to a 10:1 ratio of TBA:α-[SiW9O37{Fe(H2O)}3] and the formulation of TBA{SiFe3W9} as TBA7[α-SiFeIII3(H2O)3W9O37]·3TBA. Note that the excess of TBA has no bearing on the electrochemical results, since this cation is present in excess as the electrolyte.
Figure S26 .
Figure S26. Electrochemical setup and gas feed circulation.
Table S4 .
Faradaic efficiencies in the three-electrode setup as a function of time, Figure 8a. | 2023-08-31T06:18:30.363Z | 2023-08-29T00:00:00.000 | {
"year": 2023,
"sha1": "34be17df5158155a74d56199db8872de9eb391fe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/jacs.3c06167",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4468758630ce84f13f0a7763e2d27164467c4c3a",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248589723 | pes2o/s2orc | v3-fos-license | “TransIent perivascular inflammation of the carotid artery (TIPIC) syndrome” as a rare case of laterocervical pain: Multimodal diagnosis
‘‘TransIent Perivascular Inflammation of the Carotid artery (TIPIC) syndrome” is an unusual cause of unilateral neck pain, due to a nonspecific inflammation of the carotid artery. This entity has been for long known as “carotidynia” and described as a syndrome rather than a distinct pathologic entity. Recently, the presence of structural abnormalities of the carotid artery wall has been demonstrated, leading to the introduction of radiological criteria which, in the appropriate clinical context, allow to diagnose TIPIC syndrome. TIPIC syndrome is a rather rare disease and, since its first description by Fay in 1927, only a small series of patients have been published. The interest of our case lies in the fact that diagnosis and follow-up were assessed on ultrasound and magnetic resonance imaging, demonstrating that a correlation between clinical evolution and radiological findings does exist. In addition, DWI sequence was performed at the time of diagnosis and at resolution. To our knowledge, such an assessment has never been reported in the previous literature.
Case report
We are reporting the case of a 49-year-old man who presented with a 1-week history of pain in the right laterocervical region, over the carotid bifurcation. The pain was described as a severe dull sense of discomfort, irradiating to the ipsilateral ear and triggered by head and neck movements ("Fay sign" [1]). It was preceded by mild constitutional symptoms, with fever, fatigue, and myalgia. The patient denied any history of migraine or other neurological symptoms.
Clinical examination found an apyretic patient in a good general condition, with localized swelling and pain at the level of the right carotid bifurcation, worsened by palpation. There was no palpable cervical node, no palpable induration along the jugular vein and no carotid bruits were audible.
Basic laboratory hematological examination showed a normal white blood cell count (9910/mm³) and C-reactive protein (0.10 mg%). The erythrocyte sedimentation rate (20 mm/h) was slightly increased. The rest of the laboratory investigations was normal.
US and MRI of the neck were performed approximately 1 week after symptoms onset. US showed the presence of an asymmetric hypoechoic thickening of the right carotid wall, localized at the level of the distal right common carotid artery, extending into the proximal internal carotid artery ( Fig. 1 A).
On Doppler studies, the affected carotid artery on the right side demonstrated normal flow parameters ( Fig. 1 A). No left carotid artery or other vascular abnormalities (stenosis or dissection) were found.
The thyroid gland, parathyroid glands and salivary glands were normal, and no neck mass or cervical lymphadenopathy was noted.
Further evaluation with MRI was performed to characterize the lesion found on US. MRI demonstrated the presence of a thin amount of hyperintense tissue on STIR sequence, which restricted diffusion on DWI sequence, and showed contrast enhancement after contrast medium administration ( Fig. 2 A, B, C, D). Significant luminal narrowing was absent.
The findings described confirmed the hypothesis of inflammatory tissue in the periadventitial area of the right carotid artery and the diagnosis of TIPIC syndrome was formulated.
The patient was treated with anti-inflammatory medications and had a full clinical recovery within 14 days. On followup, 3 weeks after presentation, US ( Fig. 1 B) and MRI ( Fig. 3 A, B, C, D) showed the regression of the eccentric perivascular tissue.
Discussion
Carotidynia is an idiopathic unilateral neck pain syndrome, caused by a nonspecific inflammation of the carotid artery. It usually lasts less than 2 weeks, being self-limited or resolving with nonsteroidal anti-inflammatory drugs or steroids.
It was described for the first time by Fay in 1927 as a clinical entity characterized by tenderness and pain at the level of the carotid bifurcation [1]. In 1988, it was included in the first International Classification of Headache Disorders [2] and, in 2004, the International Headache Society published modified criteria for carotidynia, classifying it as a syndrome rather than a distinct pathologic entity: the criteria specified that patients with carotidynia should not have structural abnormalities of the carotid artery [3].
Recently, consistent imaging findings were reported, particularly on ultrasound (US) and magnetic resonance imaging (MRI), demonstrating that radiological abnormalities of the carotid bifurcation zone, evidencing an inflammatory process, are present [4] . As a result, the condition of carotidynia is currently defined as the combination of specific clinical and imaging findings and the acronym "TransIent Perivascular Inflammation of the Carotid artery (TIPIC)" syndrome has been introduced to describe the entity as thoroughly as possible [5 ,6] .
TIPIC syndrome is a rather rare disease. Precise epidemiological data is not available. A large study of 47 patients with acute neck pain, published by Lecler et al., reported a prevalence of 2.8% [5] .
The etiology and pathogenesis of the inflammatory process are not clear, and only one study has reported histologically proven findings of non-specific vascular inflammation of the carotid adventitia [7].
Clinical presentation includes unilateral cervical pain of acute onset, occasionally with temporal irradiation, triggered by palpation or head and neck movements. Transient neurological symptoms or constitutional symptoms are rarely reported.
In the vast majority of patients, biologic examinations show a mild increase of the inflammatory markers.
The diagnosis is, therefore, mainly based on clinical and imaging findings.
Four diagnostic criteria have been proposed: (1) Presence of acute pain overlying the carotid artery, which may or may not radiate to the head; (2) Eccentric PeriVascular Infiltration (PVI) on imaging; (3) Exclusion of another vascular or nonvascular diagnosis with imaging; (4) Improvement within 14 days either spontaneously or with anti-inflammatory treatment.
Additionally, a minor criterion could be the presence of a self-limited intimal soft plaque [5] .
Our case met all of the criteria. Imaging findings of TIPIC syndrome include perivascular findings described by the general term "PeriVascular Infiltration" (PVI), referring to the presence of soft amorphous tissue replacing the fat surrounding the carotid artery, with a hazy aspect of the fat. PVI is primarily located at the level of the carotid bifurcation, most often in a posterior and lateral location, and may extend towards the proximal internal or external carotid artery [2,5]. This lesion does not affect the entire circumference of the carotid system but is usually limited to less than half of the perimeter, thus being characterized as eccentric.
On US, PVI appears as a hypoechoic lesion situated in the medial-adventitial layer of the carotid artery, without hemodynamic changes on Color Doppler technique. The lack of hemodynamic disturbance justifies the absence of audible bruit during auscultation.
MRI evidences a thickened wall of the affected carotid artery, due to the presence of periadventitial soft tissue, which shows enhancement after administration of contrast medium [8] ( Fig. 2 B and 3 B). Additional T2 spectral presaturation with inversion recovery sequences, when performed, reveal a narrow, perivascularly raised, signal corresponding to an inflammatory concomitant oedema, as we observed in our patient ( Fig. 2 A and 3 A). The MR angiograms of the neck vessels do not show any sign of significant lumen constriction [9] . In our case, we performed a previously unreported sequence, DWI, which demonstrated that the perivascular tissue restricted diffusion, strengthening our diagnostic hypothesis. We also used DWI sequence to prove the resolution of pathological findings ( Fig. 2 C and 3 C).
The diagnosis of TIPIC syndrome requires imaging evidence of the exclusion of other vascular and non-vascular causes of neck pain ( Table 1 ) [10 ,11] . As compared with US, MRI is particularly useful in differentiating TIPIC syndrome from intramural hematoma or carotid dissection, that are the principal vascular differential diagnoses. In contrast to these two latter causes of acute neck pain, TIPIC syndrome has a benign clinical course, as it may be self-limiting or be treated with anti-inflammatory drugs or steroids, showing full clinical recovery within a mean period of 2 weeks. A relapse rate of about 20% was reported [5] .
The case we reported represents a characteristic case of TIPIC syndrome, in terms of clinical-radiological findings and benign course. Its main interest lies in the fact that diagnosis and follow-up were assessed on US and MRI, demonstrating that a correlation between clinical evolution and radiological findings does exist. In addition, DWI sequence was performed at the time of diagnosis and at resolution ( Fig. 2 C and 3 C). To our knowledge, such an assessment has never been reported in the previous literature.
In conclusion, in the appropriate clinical context, a diagnosis of TIPIC syndrome as a cause of vascular neck pain may be supported by characteristic radiologic findings. This case report supports the imaging features of TIPIC syndrome previously described in the literature and underscores the role of Doppler sonography and neck MRI, including the DWI sequence, as the modalities of choice to confirm a presumed TIPIC syndrome diagnosis and to evaluate the natural history of the pathology.
Patient consent
Informed written consent was obtained from the patient for publication of the Case Report and all imaging studies. Consent form on record. | 2022-05-10T15:20:08.308Z | 2022-05-06T00:00:00.000 | {
"year": 2022,
"sha1": "6cd71e212c1af6c71c409b4a3f504601c9e09374",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.radcr.2022.04.021",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "05390e7042af787f68e74bb297327e9e500002cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199472634 | pes2o/s2orc | v3-fos-license | First Law of Entanglement Entropy in Flat-Space Holography
According to the flat/Bondi-Metzner-Sachs invariant field theory (BMSFT) correspondence, asymptotically flat spacetimes in $(d+1)$ dimensions are dual to $d$-dimensional BMSFTs. In this duality, similar to the Ryu-Takayanagi proposal in the AdS/CFT correspondence, the entanglement entropy of subsystems in the field theory side is given by the area of particular surfaces in the gravity side. In this paper we find the holographic counterpart of the first law of entanglement entropy (FLEE) in a two-dimensional BMSFT. We show that FLEE for the BMSFT perturbed states which are described by three-dimensional flat-space cosmology corresponds to the integral of a particular one-form on a closed curve. This curve consists of the BMSFT interval and also null and spacelike geodesics in the bulk gravitational theory. The exterior derivative of this form is zero when it is calculated for flat-space cosmology. However, for a generic perturbation of three-dimensional global Minkowski spacetime, the exterior derivative of the one-form yields the Einstein equation. This is a first step toward constructing the bulk geometry by using FLEE in the flat/BMSFT correspondence.
Introduction
Flat/BMSFT is an extension of the AdS/CFT correspondence to non-AdS geometries. According to this duality, quantum gravity in asymptotically flat spacetimes in $(d+1)$ dimensions can be described by a $d$-dimensional field theory which is BMS-invariant [1,2]. In the gravity side, BMS symmetry is the asymptotic symmetry of asymptotically flat spacetimes at null infinity [3,4]. In the field theory side, the global part of the BMS algebra is given by the ultra-relativistic contraction of the conformal algebra. Thus one can interpret the flat-space limit (zero cosmological constant limit) in the gravity side as the ultra-relativistic limit of the CFT in the boundary theory [2]. In this view, one can study flat/BMSFT by starting from AdS/CFT and taking a limit: the flat-space limit in the bulk and the ultra-relativistic limit in the boundary.
BMS symmetry, as the asymptotic symmetry, is infinite-dimensional in three and four dimensions [5]-[7]. Hence one may expect to find universal aspects of two- and three-dimensional BMSFTs. This situation is very similar to that of two-dimensional conformal field theories (CFTs), whose infinite-dimensional symmetry is used to predict the structure of correlation functions as well as the entanglement entropy of subsystems. Similarly, the entanglement entropy formula for some particular intervals in BMSFT2 was introduced in [8] using only the infinite symmetry of two-dimensional BMSFTs, and was then studied more carefully in [9]-[15].
In the context of the AdS/CFT correspondence, the entanglement entropy of CFT subsystems has a holographic description. According to the Ryu-Takayanagi proposal, this entropy is proportional to the area of a bulk surface which has the minimum area among the surfaces connected to the boundary subsystem [16,17]. A similar proposal for the BMSFT entanglement entropy has been introduced in [12]. Accordingly, the BMSFT entanglement entropy can be given by the area of particular surfaces. These surfaces are not connected directly to the boundary of the subsystem, but there are null rays which connect them to null infinity, where the subsystem is supposed to live. The corresponding surface, the null rays, and the subsystem together construct a closed surface.
Another interesting problem studied in the context of AdS/CFT is the holographic description of the first law of entanglement entropy (FLEE). It was shown in [18,19] that writing both sides of FLEE in terms of the corresponding bulk parameters yields the linearized Einstein equations. In other words, FLEE, as a constraint in the boundary theory, reduces to a constraint on the bulk geometry which is exactly the Einstein equation. If this connection is an intrinsic property of gauge/gravity dualities, one can use entanglement entropy and its first law in an arbitrary field theory to find a dual gravitational geometry.
In this paper we study the proposal of [18,19] in the context of the flat3/BMSFT2 correspondence.
We start from FLEE and use the flat/BMSFT correspondence to write it in terms of the components of the asymptotically flat bulk metric. We focus on the BMSFT states whose gravitational dual is flat-space cosmology [20]-[23]. It is shown that both sides of the FLEE formula can be written in terms of the integral of a one-form over curves consisting of the BMSFT interval and the null and spacelike geodesics introduced in [12]. These curves construct a closed curve, so one can use Stokes's theorem to write the integrals as the integral of the exterior derivative of the one-form over the surface bounded by the curves. For the metric of flat-space cosmology, the exterior derivative of this form is zero. For a generic metric which satisfies the BMS boundary conditions (see for example [24]), the exterior derivative of the one-form results in the Einstein equation. Our work is not only a first step toward generalizing the proposal of [18,19] to flat-space holography, but it also shows that the flat/BMSFT correspondence studied in several previous works (see references in [25]) is a worthwhile duality.
In section two we review the proposal of [19] in the context of AdS/CFT. In section three, after briefly reviewing the flat/BMSFT correspondence and the holographic description of BMSFT entanglement entropy, we write FLEE in terms of the bulk metric and deduce the Einstein equation.
Entanglement entropy and its first law
For a quantum field theory state $|\psi\rangle$, the density matrix is
$$\rho = |\psi\rangle\langle\psi|. \qquad (2.1)$$
If we decompose a spatial (constant-time) slice $\Sigma$ into two subsystems $B$ and $\bar B$ ($\Sigma = B \cup \bar B$), then the density matrix associated to $B$ is obtained from $\rho$ by tracing out the degrees of freedom of the complement subsystem $\bar B$,
$$\rho_B = \mathrm{Tr}_{\bar B}\,\rho. \qquad (2.2)$$
The entanglement entropy of subsystem $B$ is the von Neumann entropy associated to the density matrix $\rho_B$,
$$S_B = -\mathrm{Tr}\left(\rho_B \log \rho_B\right). \qquad (2.3)$$
For a small perturbation $|\psi(\varepsilon)\rangle$ of the initial state $|\psi(0)\rangle$ of the whole system, the first law of entanglement entropy (FLEE) is
$$\delta S_B = \delta \langle H_B \rangle \equiv \delta E_B, \qquad (2.4)$$
where $H_B$ is the modular Hamiltonian, which is independent of the perturbation and defined through
$$\rho_B = \frac{e^{-H_B}}{\mathrm{Tr}\, e^{-H_B}}. \qquad (2.5)$$
Formula (2.4) is a quantum generalization of the first law of thermodynamics. It holds for any small perturbation of the quantum state and for any subsystem $B$.

In general, it is difficult to compute the modular Hamiltonian $H_B$ and its associated density matrix $\rho_B$. However, in the cases where $H_B$ is a local operator, one may find a unitary transformation (hence reversible, and acting also on the coordinates) which maps $\rho_B$ to a thermal density matrix; the resultant entropy is then a thermal one (see [26]). If we denote the unitary transformation by $U$ and the final thermal density matrix by $\rho_H$, then
$$\rho_H = U \rho_B U^{-1}. \qquad (2.6)$$
It is not difficult to check that the thermal entropy
$$S = -\mathrm{Tr}\left(\rho_H \log \rho_H\right) \qquad (2.7)$$
is the same as the entanglement entropy (2.3). Since $\rho_H$ is thermal, it can be written as a Gibbs state, $\rho_H \propto e^{-\beta H}$.
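The definitions above (reduced density matrix, von Neumann entropy, modular Hamiltonian, and FLEE) can be illustrated with a toy two-qubit example — this is not from the paper; the state and the numbers are chosen purely for illustration:

```python
import numpy as np

def reduced_rho(psi):
    # psi: 4-component two-qubit state; returns rho_B by tracing out the second qubit
    psi = psi / np.linalg.norm(psi)
    m = psi.reshape(2, 2)            # indices: (kept qubit, traced qubit)
    return m @ m.conj().T

def entropy(rho):
    # von Neumann entropy S = -Tr(rho log rho)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def state(theta):
    # |psi(theta)> = cos(theta)|00> + sin(theta)|11>
    return np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])

theta0, eps = np.pi / 6, 1e-5
rho0 = reduced_rho(state(theta0))
# rho0 is diagonal here, so the modular Hamiltonian H_B = -log(rho0) is diagonal too
H_B = np.diag(-np.log(np.diag(rho0).real))

dS = entropy(reduced_rho(state(theta0 + eps))) - entropy(rho0)
dE = np.trace((reduced_rho(state(theta0 + eps)) - rho0) @ H_B).real

# first law: dS = d<H_B> to first order in the perturbation
assert abs(dS - dE) < 1e-8
```

The residual difference between `dS` and `dE` is second order in the perturbation, as the first law requires.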
Mostly, it is difficult to compute the modular Hamiltonian H B and its associated density matrix ρ B . However, for the cases that H B is a local operator, one may find a unitary transformation (and hence reversible which acts also on the corrdinates) which maps ρ B to a thermal density matrix. Hence the resulatant entropy is a thermal one (see [26]). If we denote the unitary transformation by U and the final thermal density matrix by ρ H , then It is not difficult to check that the thermal entropy given by is the same as the entanglement entropy (2.3). Since ρ H is thermal, it can be written as 1 .
We consider a spatial (constant-time) slice $\Sigma$ of $d$-dimensional Minkowski space and divide it into two regions $B$ and $\bar B$ ($\Sigma = B \cup \bar B$). Let $B$ be a $(d-1)$-dimensional ball of radius $R$.
In order to find $\delta E_B$ in (2.4), we need to calculate the vacuum expectation value of the modular Hamiltonian. The modular Hamiltonian for this ball-shaped region was calculated in [26] as
$$H_B = 2\pi \int_B d^{d-1}x\, \frac{R^2 - |\vec x - \vec x_0|^2}{2R}\, T_{tt}(t_0, \vec x),$$
where $x^i_0$ are the coordinates of the center of the ball $B$ and $T_{\mu\nu}$ is the stress tensor of the CFT. We use the convention $x^\mu = (t, x^i)$. Hence FLEE (2.4) can be written as
$$\delta S_B = 2\pi \int_B d^{d-1}x\, \frac{R^2 - |\vec x - \vec x_0|^2}{2R}\, \delta\langle T_{tt}\rangle.$$
Now we use holography to calculate $\delta S_B$. When the CFT vacuum state $|\Psi(0)\rangle$ is perturbed to the state $|\Psi(\varepsilon)\rangle$, in the dual gravitational theory the metric of the dual AdS spacetime is perturbed as
$$g_{\mu\nu} = g^{\mathrm{AdS}}_{\mu\nu} + h_{\mu\nu},$$
where $h_{\mu\nu}$ is infinitesimal. By means of the Ryu-Takayanagi formula [16,17] we can write
$$\delta S_B = \frac{\delta A_{\tilde B}}{4G},$$
where $A_{\tilde B}$ is the minimal area of the codimension-two surface $\tilde B$ in the bulk AdS space which is homologous to $B$ and given by
$$A_{\tilde B} = \int d^{d-1}\sigma \sqrt{\gamma_{\tilde B}},$$
where $\gamma_{\tilde B}$ is the induced metric on $\tilde B$.
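As a quick sanity check of the ball-shaped modular energy cited from [26] — assuming the standard Casini-Huerta-Myers weight $(R^2 - |\vec x - \vec x_0|^2)/(2R)$, since the displayed equation was lost in extraction — one can integrate a constant perturbation $\delta\langle T_{tt}\rangle$ numerically in $d = 2$ and compare with the closed form. The numbers below are toy values, not from the paper:

```python
import numpy as np

# Toy values (not from the paper): ball radius R and a constant
# stress-tensor perturbation dT = delta<T_tt>
R, dT = 1.5, 1e-3

x = np.linspace(-R, R, 200001)
w = (R**2 - x**2) / (2 * R)                            # CHM weight factor
integral = np.sum((w[:-1] + w[1:]) * np.diff(x)) / 2   # trapezoid rule
dE = 2 * np.pi * dT * integral

# closed form: the integral of (R^2 - x^2)/(2R) over [-R, R] equals 2 R^2 / 3
analytic = 2 * np.pi * dT * (2 * R**2 / 3)
assert abs(dE - analytic) < 1e-9
```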
Let us denote the holographic counterparts of $\delta S_B$ and $\delta E_B$ by $\delta S_B^{\mathrm{grav}}$ and $\delta E_B^{\mathrm{grav}}$, respectively. It was shown in [18,19] that they are given in terms of the perturbed bulk metric $h_{ij}$, and the FLEE formula (2.4) is written as
$$\delta S_B^{\mathrm{grav}} = \delta E_B^{\mathrm{grav}}. \qquad (2.17)$$
This is a non-local equation which is valid for any ball-shaped region with arbitrary radius $R$ and center coordinates $\{x^i_0\}$. Thus one may think about a local equation which is equivalent to (2.17). In order to find this local constraint, we look for a form $\chi$ such that
$$\delta E_B^{\mathrm{grav}} = \int_B \chi, \qquad \delta S_B^{\mathrm{grav}} = \int_{\tilde B} \chi. \qquad (2.18)$$
If such a form $\chi$ exists, using (2.4) we can write
$$\int_\Pi d\chi = \int_{\tilde B} \chi - \int_B \chi = \delta S_B^{\mathrm{grav}} - \delta E_B^{\mathrm{grav}} = 0, \qquad (2.19)$$
where $\Pi$ is the hypersurface bounded by $B$ and $\tilde B$ ($B \cup \tilde B = \partial\Pi$) and located at $t = t_0$. For asymptotically AdS spacetimes, $\chi$ is given in [19] in terms of the metric perturbation $h$ and the bulk modular flow $\xi^a$ (2.21). For this form, the exterior derivative is
$$d\chi = -2\,\xi^a\, \delta G_{ab}\, \epsilon^b, \qquad (2.22)$$
where $\delta G_{ab}$ are the linearized Einstein equations around the AdS spacetime, and $\epsilon_b$ is related to the volume form as
$$\epsilon_b = \frac{1}{d!}\, \epsilon_{b a_1 \cdots a_d}\, dx^{a_1} \wedge \cdots \wedge dx^{a_d}. \qquad (2.23)$$
Moreover, the exterior derivative is zero on the boundary.
From (2.19) and (2.22) it follows that the holographic interpretation of the first law of entanglement entropy leads to
$$\int_\Pi \xi^a\, \delta G_{ab}\, \epsilon^b = 0. \qquad (2.25)$$
Using the fact that only the $t$ component of $\xi^a$ is non-vanishing on $\Pi$, and also that FLEE is valid for all ball-shaped regions with arbitrary $R$, from (2.25) one can deduce that [27]
$$\delta G_{tt} = 0.$$
By similar arguments one can also show that $\delta G_{z\mu}$ and $\delta G_{zz}$ are zero everywhere [28]. We see that the gravitational interpretation of FLEE in CFTs leads to the linearized equations of motion of the dual AdS gravity. In the next section we will apply the above procedure to asymptotically flat spacetimes in the context of the flat/BMSFT correspondence.
We see that the gravitational interpretation of FLEE in CFTs leads to the linearized equations of motion of the dual AdS gravity. In the next section we will apply the above procedure for asymptotically flat spacetimes in the context of flat/BMSFT correspondence. with well-defined flat space limit [29,30]. A relevant question is finding a counterpart for the flat space limit of the gravity theory in the field theory side. To answer this question one needs to study the asymptotic symmetry of the asymptotically flat spacetimes. This study has been done in [3] for the four dimensional and in [4] for the three dimensional spacetimes. More recent studies show that for the four dimensional cases the asymptotic symmetry algebra at null infinity is the semi-direct sum of infinite dimensional local conformal symmetry algebra on a two-sphere and the abelian ideal algebra of supertranslations [6]. This algebra is known as bms 4 . Such an infinite dimensional locally well-defined symmetry algebra also exists at null infinity of three dimensional asymptotically flat spacetimes [5] . This algebra is called bms 3 .
The observation of [2] is that bms3 is isomorphic to an infinite-dimensional algebra in two dimensions which is given by the ultra-relativistic contraction of the conformal algebra. Thus it was proposed in [2] that the holographic duals of asymptotically flat spacetimes in $(d+1)$ dimensions are field theories in $d$ dimensions which have BMS symmetry. We call these BMS-invariant field theories BMSFTs, and the correspondence between them and asymptotically flat spacetimes flat/BMSFT.
To be more precise, let us consider the Einstein-Hilbert action with negative cosmological constant in three dimensions. An appropriate coordinate system with a well-defined flat-space limit is the BMS gauge [29],
$$ds^2 = \left(M - \frac{r^2}{\ell^2}\right) du^2 - 2\, du\, dr + 2N\, du\, d\phi + r^2 d\phi^2, \qquad (3.2)$$
where $M$ and $N$ are functions of $u$ and $\phi$, constrained by the equations of motion. The algebra of conserved charges is centrally extended, with central charges $c = \bar c = 3\ell/2G$.
Taking the flat-space limit of the metric (3.2) yields asymptotically flat spacetimes with metric
$$ds^2 = M\, du^2 - 2\, du\, dr + 2N\, du\, d\phi + r^2 d\phi^2, \qquad (3.5)$$
where $M$ and $N$ are functions of $u$ and $\phi$ satisfying
$$\partial_u M = 0, \qquad 2\, \partial_u N = \partial_\phi M.$$
The algebra of conserved charges is also centrally extended.
The generators of bms3 can be obtained by taking the flat-space limit of the generators of the conformal algebra [29],
$$L_n = \lim_{\ell \to \infty} \left(\mathcal{L}_n - \bar{\mathcal{L}}_{-n}\right), \qquad M_n = \lim_{\ell \to \infty} \frac{1}{\ell}\left(\mathcal{L}_n + \bar{\mathcal{L}}_{-n}\right). \qquad (3.8)$$
It was argued in [2] that the limit (3.8), which is taken in the gravity side, corresponds to the ultra-relativistic limit in the field theory side. In the rest of this paper, by BMSFT2 we mean a field theory whose symmetry algebra is bms3,
$$[L_m, L_n] = (m-n) L_{m+n} + \frac{c_L}{12}\, m(m^2-1)\, \delta_{m+n,0},$$
$$[L_m, M_n] = (m-n) M_{m+n} + \frac{c_M}{12}\, m(m^2-1)\, \delta_{m+n,0},$$
$$[M_m, M_n] = 0.$$
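At the level of vector fields (i.e. dropping the central terms), the bms3 brackets can be checked with a standard realization $\xi = Y(\phi)\partial_\phi + (T(\phi) + u\,Y'(\phi))\partial_u$; this realization and its $i$-conventions are a common choice assumed here, not quoted from the paper:

```python
import sympy as sp

u, phi = sp.symbols('u phi', real=True)
I = sp.I

def V(Y, T):
    # bms3 vector field xi = Y(phi) d_phi + (T(phi) + u Y'(phi)) d_u,
    # stored as its components (xi^phi, xi^u)
    return (Y, T + u * sp.diff(Y, phi))

def bracket(v1, v2):
    # Lie bracket of vector fields on (phi, u)
    def act(v, f):
        return v[0] * sp.diff(f, phi) + v[1] * sp.diff(f, u)
    return (sp.expand(act(v1, v2[0]) - act(v2, v1[0])),
            sp.expand(act(v1, v2[1]) - act(v2, v1[1])))

def L(n):  # superrotation generators
    return V(sp.exp(I * n * phi), sp.S(0))

def M(n):  # supertranslation generators
    return V(sp.S(0), sp.exp(I * n * phi))

def same(v1, v2):
    return all(sp.simplify(a - b) == 0 for a, b in zip(v1, v2))

scale = lambda c, v: tuple(sp.expand(c * comp) for comp in v)

m_, n_ = 2, -1
assert same(bracket(L(m_), L(n_)), scale(-I * (m_ - n_), L(m_ + n_)))
assert same(bracket(L(m_), M(n_)), scale(-I * (m_ - n_), M(m_ + n_)))
assert same(bracket(M(m_), M(n_)), (sp.S(0), sp.S(0)))
```

Up to the conventional factors of $-i$ coming from the $e^{in\phi}$ basis, these are exactly the bms3 brackets (without central extension).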
By BMSFT3 we mean a field theory whose symmetry algebra is bms4.
Holographic entanglement entropy in flat/BMSFT
Similar to other field theories, it is possible to define entanglement entropy for the subsystems of a BMSFT. The infinite-dimensional symmetry of BMSFTs makes it possible to find universal formulas for the entanglement entropy of subregions [8]. Moreover, using the flat/BMSFT correspondence one can find a holographic description for the BMSFT entanglement entropy. Recently, a prescription (similar to the Ryu-Takayanagi proposal for the CFT entanglement entropy [16,17]) has been proposed for the BMSFT entanglement entropy [12] that relates it to the area of particular curves in the bulk flat spacetimes. According to [12], the entanglement entropy of a subregion $B$ of BMSFT2 is given by
$$S_B = \frac{\mathrm{Length}(\gamma)}{4G},$$
where $\gamma$ is a spacelike geodesic and $\gamma_+$ and $\gamma_-$ are null rays from $\partial\gamma$ to $\partial B$.
The most generic solution of Einstein gravity with zero cosmological constant in three dimensions is given by (3.5). In the rest of this paper we will consider an interval $B$ in the BMSFT which is determined by
$$-\frac{l_u}{2} < u < \frac{l_u}{2}, \qquad -\frac{l_\phi}{2} < \phi < \frac{l_\phi}{2}. \qquad (3.12)$$

1. Null orbifold, with $M = N = 0$ in (3.5). In this case the bulk modular flow can be written explicitly, and the spacelike geodesic $\gamma$ is given by (3.14). Using an appropriate coordinate transformation, we can change the metric of the null orbifold to Cartesian coordinates. In these coordinates the bulk modular flow is given by (3.17), and the geodesics $\gamma$, $\gamma_\pm$ take a simple form.

2. Global Minkowski, with $M = -1$ and $N = 0$ in (3.5),
$$ds^2 = -du^2 - 2\, du\, dr + r^2 d\phi^2. \qquad (3.21)$$
The bulk modular flow is given by (3.22). Using the coordinate transformation [31] $t = (r+u)\csc\frac{l_\phi}{2} - r\cos\phi \cot\frac{l_\phi}{2}$ (together with similar expressions for the remaining Cartesian coordinates), the metric takes the Cartesian form. In these Cartesian coordinates the bulk modular flow is the same as (3.17), and the geodesics are given by (3.28).

3. Flat-space cosmology (FSC), with $M = m$ and $N = j$ in (3.5),
$$ds^2 = m\, du^2 - 2\, du\, dr + 2j\, du\, d\phi + r^2 d\phi^2, \qquad (3.29)$$
where $m$ and $j$ are constants. It has a cosmological horizon at radius $r_c = j/\sqrt{m}$. FSC is a shift-boost orbifold of Minkowski spacetime [21] and can be brought into Cartesian coordinates locally by a suitable transformation. The holographic entanglement entropy of the interval $B$ is given by (3.32).
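Since the FSC metric (3.29) is given explicitly in the text, one can verify symbolically that it solves the vacuum Einstein equations — being a quotient of Minkowski space, its Ricci tensor vanishes identically. A sketch using sympy:

```python
import sympy as sp

# Flat-space cosmology metric ds^2 = m du^2 - 2 du dr + 2 j du dphi + r^2 dphi^2
u, r, phi = sp.symbols('u r phi', real=True)
m, j = sp.symbols('m j', positive=True)
x = [u, r, phi]
g = sp.Matrix([[m, -1, j],
               [-1, 0, 0],
               [j, 0, r**2]])
ginv = g.inv()
n = 3

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                         + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
               for a in range(n))
    expr += sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
                for a in range(n) for d in range(n))
    return sp.simplify(expr)

R = sp.Matrix(3, 3, lambda b, c: ricci(b, c))
assert R == sp.zeros(3, 3)   # FSC is Ricci-flat, hence a vacuum solution
```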
Holographic FLEE
In this section we will consider the BMSFT dual to global Minkowski. The starting point is the FLEE formula (2.4), which is written in the field theory side. We want to use flat3/BMSFT2 to write both sides of this formula in the gravity side. The BMSFT lives on a cylinder with coordinates $(u, \phi)$, and the interval $B$ is given by
$$-\frac{l_u}{2} < u - u_0 < \frac{l_u}{2}, \qquad -\frac{l_\phi}{2} < \phi - \phi_0 < \frac{l_\phi}{2},$$
where $l_u$, $l_\phi$, $u_0$ and $\phi_0$ are constants.
Let us start from the right-hand side of (2.4). In order to calculate the expectation value of the modular Hamiltonian, we use the fact that, up to an additive constant, the modular Hamiltonian $H_B$ is the same as the conserved charge of the modular flow $\xi$. If we denote the stress tensor of the BMSFT by $T_{ab}$, the corresponding charge of $\xi$ can be calculated on a spacelike surface $\Sigma$ with metric $\sigma_{ab}$ as [32]
$$H_B = \int_\Sigma d\sigma \sqrt{\sigma}\, n^a \xi^b T_{ab}, \qquad (3.33)$$
where $\sigma$ is the coordinate on the surface $\Sigma$ and $n^a$ is the unit timelike vector normal to $\Sigma$.
The most challenging problem in flat-space holography is the definition of Σ. In the AdS/CFT correspondence, Σ is a spacelike surface on the conformal boundary of the asymptotically AdS spacetime. However, such a definition is not appropriate for the conformal infinity of asymptotically flat spacetimes. In previous works on flat-space holography [30], [33]–[38], Σ has been defined by using the corresponding surface of the asymptotically AdS spacetimes whose flat-space limit yields the asymptotically flat metric. To be precise, let us consider the AdS₃ metric written in BMS coordinates, where ℓ is the radius of the AdS space. At fixed but large r, we can expand the metric and write the metric of the conformal boundary as (3.36). In the AdS/CFT correspondence, the metric of Σ in (3.33) is given by using (3.36). The key point in all of the papers [30], [33]–[38] is that (3.36) remains appropriate for writing the metric of Σ in the ℓ → ∞ limit. The proposal of [30] for the definition of Σ is to use a metric similar to (3.36) but with ℓ replaced by the three-dimensional Newton constant G. In this paper we employ this definition of Σ. Since we want to study FLEE in a BMSFT which is the holographic dual of global Minkowski, the metric of the bulk spacetime is given by (3.21), which is the ℓ → ∞ limit of (3.34). Thus we choose Σ as a spacelike subspace of the space determined by the metric (3.37). It will prove convenient to first make a coordinate transformation; in these coordinates, our interval lies on the φ axis between −l_φ/2 and l_φ/2. Moreover, by taking the r → ∞ limit of (3.22), we can find the BMSFT modular flow on the interval (w = 0) as (3.39). If we take Σ to be w = 0, −l_φ/2 < φ − φ_0 < l_φ/2, then using (3.37) and (3.39) we find (3.40). Since h_uu and h_uφ are infinitesimal constants, we can use (3.31) to calculate δS, finding (3.42). Using (3.40) and (3.42), we can write the FLEE as (3.43). This formula is valid for all intervals determined by l_φ, l_u and (u_0, φ_0).
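The large-r limit used above to define Σ can be made explicit. A sketch, assuming the standard BMS-gauge form of the AdS₃ metric (with constant M and N = 0 for brevity; this explicit form is an assumption, not quoted from the text):

```latex
ds^2 \;=\; \Big(M - \frac{r^2}{\ell^2}\Big)\,du^2 \;-\; 2\,du\,dr \;+\; r^2\,d\phi^2
\;\;\xrightarrow{\;r \to \infty\;}\;\;
ds^2 \;\approx\; r^2\Big(-\frac{du^2}{\ell^2} + d\phi^2\Big) ,
```

so, up to the conformal factor r², the boundary carries the metric −du²/ℓ² + dφ². The proposal of [30] is to keep this form but replace ℓ by the three-dimensional Newton constant G when taking ℓ → ∞.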
For a very small interval, given by l_φ → 0, l_u → 0 with l_u/l_φ fixed, the expectation value of the stress tensor can be considered as a function of the center of the interval. Since the center of the interval is an arbitrary point, using (3.43) we find (3.44). Putting (3.44) into (3.40), we find δE_B as (3.45). The interesting point is that both δS_B and δE_B, given by (3.42) and (3.45), are written as the integral of a specific one-form χ. Here ξ is the bulk modular flow (3.22), h = h^µ_µ, and ǫ_µνα is the completely antisymmetric tensor with component ǫ_012 = √|g_0|, where g_0 is the determinant of the global Minkowski metric (3.21).
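The fact that both variations are integrals of the same one-form χ makes the first law a Stokes-theorem statement. Schematically (a sketch of the logic, assuming Σ and the curve γ bound a bulk region R):

```latex
\delta E_B = \int_\Sigma \chi, \qquad \delta S_B = \int_\gamma \chi
\;\;\Longrightarrow\;\;
\delta E_B - \delta S_B \;=\; \int_{\partial R} \chi \;=\; \int_R d\chi ,
```

so FLEE, δE_B = δS_B, holds precisely when dχ vanishes on R, which is the statement invoked in the conclusion for the flat-space cosmology perturbation.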
Summary and Conclusion
In this paper we studied another aspect of the flat/BMSFT correspondence which was previously introduced in the context of AdS/CFT. We wrote the FLEE of BMSFT₂ in terms of three-dimensional asymptotically flat metrics. The steps are analogous to those used in the context of the AdS/CFT correspondence. We rewrote both sides of the FLEE (2.4) using the corresponding bulk parameters. δS_B in (2.4) is the variation of the entanglement entropy with respect to the state by which the system is described. Using the proposal of [12], one can write this variation as the variation of the length of certain spatial curves in the bulk geometry. δE_B on the right-hand side of the FLEE (2.4) is the variation of the expectation value of the modular Hamiltonian. To calculate this quantity, we used the fact that the modular Hamiltonian is the conserved charge of the modular flow up to an additive constant, which can be ignored in the variation. BMSFT conserved charges are given by using the stress tensor.
Using the flat/BMSFT dictionary, we related the calculation of the conserved charges to a bulk calculation similar to the Brown–York proposal [32]. The key point in this calculation is the definition of the spatial surface over which the integration is performed. In the AdS/CFT correspondence this surface is given by using the conformal boundary of asymptotically AdS spacetimes. In the flat case we do not use the standard definition of the conformal boundary; our proposal is that this surface for flat spacetimes is the same as the one for the asymptotically AdS case whose flat-space limit yields the asymptotically flat spacetimes [30]. This proposal works in the present problem, as it did in all previous works [33]–[38]; however, a more thorough investigation is necessary, which we hope to carry out in future studies.
In this paper we assumed that the perturbed state on the field theory side corresponds to a metric similar to the flat-space cosmology [20]–[23] in the bulk theory. Hence, the gravitational counterpart of the FLEE was the exterior derivative of a one-form, which vanishes for the flat-space cosmology. The exterior derivative of this form for a generic metric satisfying the BMS boundary conditions results in the Einstein equations for the undetermined components of the metric. This is a good hint that the holographic FLEE is the Einstein equation in the flat/BMSFT correspondence.
Note added: While we were ready to submit this work, ref. [39] was posted on the arXiv, whose results overlap with ours.
Appetite or Distaste for Cell-Based Seafood? An Examination of Japanese Consumer Attitudes
Conventional seafood production contributes to some of the most alarming global problems we face at present, such as the destabilization of aquatic ecosystems, human health risks, and serious concerns for the welfare of trillions of aquatic animals each year. The increasing global appetite for seafood necessitates the development of alternative production methods that meet consumer demand, while circumventing the aforementioned problems. Among such alternatives, cell-based seafood is a promising approach. For its production, cells are taken from live aquatic animals and are cultivated in growth media, thus making the rearing, catching, and slaughtering of a great number of animals redundant. In recent years, this alternative production method has transitioned from aspiration to reality, and several cell-based seafood start-ups are preparing to launch their products. Market success, however, has been reckoned to largely depend on consumer attitudes. So far, there has been little research exploring this within Asia, and none in Japan, which has one of the highest seafood consumption footprints per capita globally. The present study explores cell-based seafood-related knowledge, attitudes and behavioral intentions of Japanese consumers (n = 110) via a questionnaire-based, quantitative analysis. Although findings suggest low awareness of the concept of cell-based seafood, attitudes and intentions were positive overall, with about 70% of participants expressing an interest in tasting, and 60% expressing a general willingness to buy cell-based seafood. Younger age was significantly associated with more positive attitudes, while prior knowledge of cell-based seafood was strongly linked to willingness to pay a premium for cell-based products. While highlighting the need for information campaigns to educate Japanese consumers about cell-based seafood, this study's findings suggest the Japanese market to be moderately ready for the launch of such products.
Introduction
Conventional seafood production contributes substantially to some of the most alarming global problems we face at present: deteriorating oceanic health [1,2], increasing loss of underwater biodiversity [3,4], human health risks in terms of product contamination with mercury [5] and microplastics [6], the emergence of antimicrobial resistance [7], and serious concerns for the welfare of trillions of aquatic animals each year [8], including numerous species that are potentially capable of pain perception [9,10] and possibly sentient [11,12]. Although farm-raised and wild-caught production methods vary in their impacts, both are known to contribute to at least several of the aforementioned problems [13]. In view of a growing world population [14] and an increasing global appetite for seafood [15], the development of alternative production methods that meet consumer demand, while mitigating the problems associated with conventional production, seems vitally important. Among these alternatives, cell-based seafood is considered a promising approach [16]. Cell-based seafood is also referred to as 'synthetic', 'in-vitro', 'artificial', 'clean', 'cultured', 'cell-cultured', and 'lab-grown'. Here, 'cell-based' is used, as it was found to outperform other terms by appropriately describing the technology, distinguishing products clearly from conventional products without valuation, signaling potential allergenicity, and performing well with respect to measures of consumer acceptance [17].
Background
The idea behind cell-based seafood is to grow seafood tissue for human consumption (and even for pet food [18,19]) outside the aquatic animal's body [20]. In accordance with a forward-looking quote from Winston Churchill as early as 1931 [21], cell-based tissue engineering makes possible the targeted production of animal parts meant for consumption without investing energetic input and time into growing other body parts that will not be consumed [22]. This can be achieved by harvesting cells from a living animal and letting the cells grow in an appropriate medium in a bioreactor [20], as shown in Figure 1. Using this biotech method, cell-based seafood companies plan to create healthy and tasty alternatives to conventional products (Figures 2 and 3) without harming the environment or the individual aquatic animal [20,23].
Figure 1.
Simplified depiction of the production process of cell-based fish. Cells are taken from a living fish (I) and put into a nutrient solution (II), where they grow into fish tissue (III), which can be consumed by humans (IV). © Łukasz Zielinski. After BlueNalu [20].
Although the development of cell-based animal products rapidly gained momentum after the first cell-based burger patty was publicly eaten in 2013 following the pioneering research of Dutch scientist Mark Post [26], cell-based seafood production is still in its infancy [27], and numerous obstacles bar the way to successful product launches [28,29].
For products to appear attractive to the widest market, the composition of growth media needs to forgo the common use of fetal bovine serum, fish embryo extract, or other substances viewed critically from an animal welfare perspective [30,31]. Moreover, the considerably high production cost of media needs to be reduced in order to make end products accessible to consumers beyond a small, wealthy elite [32]. Yet another obstacle for cell-based seafood is presented by the challenge of achieving satisfactory taste and texture of products [33]. Progress in the aforementioned areas is substantially impeded by the severe lack of transparency within the industry, due to competition between independently acting start-ups supported by private investors in a venture capital model [34], and by a lack of research in tissue engineering techniques for cold-blooded animals. Respective research for warm-blooded animals is much more advanced due to its use in regenerative human medicine [32]. Despite the undeniable relevance of these and other obstacles relating to product optimization, it has been asserted that the biggest obstacle to the success of cell-based animal products might be consumer acceptance [35,36].
Although studies on consumer acceptance of cell-based meat have been conducted in numerous countries, few have focused on the acceptance of cell-based seafood. To date, no study has specifically examined the attitudes of Japanese consumers toward cell-based seafood. Research on this specific topic promises to be interesting for several reasons. No point in Japan lies more than 150 km from the sea [37], and large proportions of the country's inhabitants have always relied on the sea as a vital resource [38]. Nowadays, Japan has one of the highest seafood consumption footprints per capita in the world, at about 45 kg per year [39,40]. Moreover, scandals around dolphin- and whale-hunting practices [41,42], as well as the custom of eating certain aquatic animals alive in a mode of seafood consumption called Odorigui (literally 'dancing eating') [43], have brought Japan negative publicity on the world stage. The perception created has been that welfare concerns for animals in general, and for aquatic animals in particular, are not a priority within Japanese food production systems [42,44]. The reaction of Japanese consumers when presented with an option to maintain current dietary habits while being able to avoid such animal welfare problems and resultant negative publicity promises to be interesting.
Prospects on the Global Seafood Market
With an estimated value of USD 151 billion [45], the global seafood market is a highly profitable industry, and is likely to be shaped considerably by the direction in which incipient consumer attitudinal trends will develop over coming years and decades. Whether cell-based seafood will gain a foothold on the seafood market is not easily predicted, as surveys on consumer attitudes toward cell-based animal products offer a broad range of results. The percentage of participants displaying a positive attitude toward cell-based animal products ranged from 11% in Canada [46] to 80% among U.S. and U.K. participants [47]. Consumers' willingness to try or purchase cell-based animal products is linked to many different factors.
Key determinants include factors related to products [48,49], to consumers [50–53], or to the messaging strategies used to describe cell-based products [17,54,55]. While aspects relating to products and messaging strategies are mostly in the power of start-up companies to affect, the demographic makeup within different countries is not. However, knowledge about possible associations between different demographic aspects and levels of consumer acceptance might help inform successful product launches. Although general patterns and a set of demographic predictors for acceptance of cell-based animal products have been identified, academic research so far has focused largely on consumer attitudes toward cell-based meat in western countries and countries with high meat consumption. Almost no research to date has explored attitudes toward cell-based seafood. Very little has focused on Asian countries, and no such published research has yet investigated the attitudes of Japanese consumers.
Results
The survey collected 110 responses over three months, not reaching the target sample size of 400; thus, the results, while indicative, need to be interpreted with caution. Descriptive analysis is provided for demographic data and seafood consumption data, prior knowledge, attitudes, and behavioral intentions regarding cell-based seafood, followed by inferential analysis.
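The caveat about sample size can be quantified. A minimal sketch (not part of the study) computing the margin of error for n = 110, assuming the conventional 95% confidence level and worst-case proportion p = 0.5; the reading of the n ≈ 400 target as corresponding to a ±5% margin is likewise an assumption, and the function names are illustrative:

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence level (assumed)

def margin_of_error(n: int, p: float = 0.5, z: float = Z_95) -> float:
    """Margin of error for a proportion estimated from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample_size(e: float, p: float = 0.5, z: float = Z_95) -> int:
    """Smallest n whose margin of error does not exceed e."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# n = 110 gives roughly a +/-9.3% margin on reported percentages,
# versus about +/-5% for the "target sample size of 400" noted above.
print(round(margin_of_error(110) * 100, 1))  # 9.3
print(required_sample_size(0.05))            # 385
```

This is why percentages such as the 60% willingness-to-buy figure should be read with an uncertainty band of roughly ±9 percentage points.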
Demographic Data
The vast majority of respondents (77.3%) lived in Japan, while 22.7% lived abroad, mainly in Germany. The survey results showed a significant gender imbalance, with two-thirds (66.4%) of participants being women. Additional demographic data are summarized in Tables S1 and S2 in the Supplementary Materials. Respondents were from all age groups, with people over 65 making up 8.2% of respondents. A total of 75.5% of participants had obtained a college degree. Nearly all (99.1%) of respondents lived in urban areas, of which 27.3% lived in large cities (over 1 million inhabitants) or megacities (over 10 million inhabitants). The average household size was just over two persons, and over a fifth of respondents (21.8%) indicated an annual household income of eight million Japanese yen (≈USD 61,000) or more.
Seafood Consumption
The majority of respondents (81.8%) were frequent seafood consumers, indicating seafood consumption between once a week and several times a day, as shown in Figure 4. The largest share of respondents (40.0%) consumed seafood two or three times a week. One respondent (0.9%) stated they never ate seafood and cited 'veganism' as their reason.
As shown in Figure 5, the most popular place to consume seafood was at home, where respondents indicated regular seafood consumption prepared by themselves or a family member (77.1%), bought ready-to-eat from supermarkets (48.6%), or from restaurants (18.3%). Almost two-thirds (63.3%) of respondents indicated restaurants to be a usual consumption site. Only two respondents (1.8%) added a usual consumption site outside the offered response choices, namely a university canteen and a company cafeteria (one respondent each).

Participants indicated prioritizing different aspects when purchasing seafood, as shown in Figure 6. Overall, product quality and price appear to be the two most important purchasing determinants, both being rated as very or moderately important by about 95% of respondents. There was less agreement on the importance of the source of products, with a little over half of participants (52.3%) assessing the source as being very or moderately important and the rest (47.7%) as slightly or not at all important. Over a fifth of respondents (21.1%) stated that the seafood species was of little or no importance.
Prior Knowledge and Spontaneous Feelings
Almost three-quarters of respondents (74.5%) had not heard of cell-based seafood or were unsure about this; only 25.5% stated they had awareness of cell-based seafood prior to the survey.
Figure 7 depicts spontaneous emotional states that cell-based seafood aroused in participants. Overall, the most salient emotion was interest, with almost half of participants (46%) indicating they were extremely or very interested, followed by positive (30%) and excited (27%). Fewer participants indicated clearly negative emotional states; about a quarter (24%) stated they were extremely or very worried, and only 7% experienced a pronounced feeling of disgust.
Interest in Tasting and Likeliness to Purchase
The prevalence of interest as the salient emotional state is further reiterated by 71.8% of participants indicating an interest in tasting cell-based seafood (Figure 8). Only 10.9% were not interested, and close to a fifth of all respondents (17.3%) were unsure.
Interest, however, does not necessarily equate to willingness to purchase; as shown on the left in Figure 9, some 60% of participants indicated they would be extremely likely or likely to buy cell-based seafood if it were available. Combining results from Figures 8 and 9, we can deduce that 11.8% (71.8% minus 60%) of participants were merely interested in tasting the novel food product but not in becoming a purchaser. The large proportion of participants choosing the option 'neither likely nor unlikely' to purchase (23.6%) seems to indicate a high level of indecisiveness and uncertainty, as can be expected regarding new, unfamiliar foods. According to results depicted on the right side of Figure 9, almost a fifth of respondents (19%) could be expected to replace all of their conventional seafood diet with cell-based products. This rather optimistic result should be interpreted with caution, as it might reflect socio-psychological effects leading to respondents assessing their own behavior incorrectly in foresight scenarios [56,57] or tending to choose options perceived as more accepted or desired by society [58].

As shown in Figure 10, the vast majority of respondents (88.2%) expressed unwillingness to pay a higher price for cell-based seafood than for conventional products. Of the small minority (11.8%) who indicated willingness to pay a higher price, most (92.3%) would pay a slightly or moderately higher price, and only a few (7.7%) would pay a much higher price.

As depicted in Figure 11, respondents' interest in purchasing cell-based products varied for different seafood species. Most participants showed an interest in purchasing cell-based versions derived from species readily consumed by the general public [59], such as salmon (81.4%) and bluefin tuna (67.6%). Cell-based versions of horse mackerel and amberjack were the least popular of the readily consumed species, with just under a third of participants expressing a purchase interest. This was undercut only by a species not normally consumed at all: the zebrafish. This species was added as an option in response to Potter et al. [34], who argue that, as probably the most intensively researched and best understood fish species, the zebrafish might be particularly suitable for the swift development of cell-based seafood. Some respondents expressed interest in purchasing additional species, namely oysters, spiny lobster, sea urchin, cod roe, and swordfish.
Opinions within the Context of Traditional and Modern Food Production

Participants were asked to express their opinions on three statements: one supported progress in food production, accepting possible dietary changes ('progressive'); one promoted protection and respect for the sea as a valuable resource ('neutral'); and one favored the preservation of tradition and culinary cultural heritage at the expense of progress ('conservative'). As apparent in Figure 12, agreement was strongest for the neutral statement, with 87.3% confirming agreement or strong agreement. The conservative statement, by contrast, was the least popular, with only about a third of participants (32.7%) expressing agreement. While 22.7% confirmed strong disagreement, a remarkably large share of participants (44.5%) neither agreed nor disagreed. While this could indicate indifference, it could also indicate reluctance to express opinions honestly, e.g., if concerned views might be perceived as outdated or socially undesirable. The progressive statement received agreement from almost 80% of participants. Again, this optimistic result should be interpreted with caution, as (although the survey was focused on cell-based seafood) some participants might have perceived 'progress in food development' as, for example, more sustainable fishing or aquaculture practices, and not necessarily as the development of cell-based alternatives.

Positive and Negative Terms Selected to Describe Cell-Based Seafood

As evident in Figure 13, of the six positive and six negative terms offered to describe cell-based seafood, the three most frequently selected terms were positive; around two-thirds of participants selected 'future-oriented' (69.1%) or 'fascinating' (66.4%), and 43.6% rated the development of cell-based seafood as 'necessary'. Around one-third of participants (33.6%) perceived cell-based seafood as 'unnatural', and about a fifth of participants as 'weird' (20.9%) or 'scary' (17.3%). In general, positive terms were selected much more frequently than negative terms. Men chose positive terms almost six times more often than negative terms, and women chose positive terms around two and a half times as often. Compared to men, women were twice as likely to choose negative terms and slightly less likely to choose positive terms.
Aspects Characterized by Uncertainty
Toward the end of the survey, participants were asked to express their attitudes more freely by responding to some optional open-ended questions. To the question "Is there something about cell-based seafood that remains unclear to you?", 52 respondents (47.3%) replied, expressing uncertainty about 10 key aspects, namely (in priority order): product safety, production process, taste and texture, effects on the body, price, quality assurance, genetic modification and cloning, nutritional value, product popularity, and the impact on the ecosystem (Figure 14). A specific aspect about which the sample's one vegan participant desired clarity was whether cell-based seafood could be produced from cells taken from previously produced cell-based seafood, thus making the repeated collection of cells from live animals redundant. This confirmed expectations that this aspect and the other aforementioned aspects should be addressed by cell-based seafood start-up companies to maximize consumer acceptance.
Prerequisites for Consumption
In total, 79 participants (71.8%) answered an optional question about personal prerequisites for consumption. As shown in Figure S1 in the Supplementary Materials, about one-third (32.9% in each case) stated they wanted cell-based seafood to be tasty, proven safe, and cheap before they would consider consumption. Less than a fifth of respondents (17.7%) cited easy product availability as an important prerequisite, and 7.6% considered quality assurance and 3.8% health promotion to be prerequisites for consumption. Five respondents (6.3%) stated they would eat cell-based seafood only if conventional seafood became unavailable or too expensive as a consequence of dwindling fish stocks.
Concerns about Consumption
In total, 25 participants (22.7%) provided information about personal concerns about consumption. In line with findings regarding the main prerequisites for consumption, a large share of respondents (20%) expressed concern about an expected unsatisfactory taste, and 16% stated they feared products might be unsafe (Figure 15). Some 12% stated they believed cell-based seafood to be unnatural or unnecessary, and 8% expressed concern about product quality, unknown ingredients, and price. Moreover, 4% of respondents believed products to be unhealthy or were worried about unknown bodily effects. A fifth of respondents expressed general concern not covered by the other categories, stating they felt uneasy or repelled by this new, unfamiliar concept or reluctant to consume cell-based seafood without having received more detailed information.
Note: Combined percentages exceed 100%, as some respondents indicated concern about more than one aspect.
Inferential Analysis
To explore statistically significant associations, a series of chi-square and Fisher's exact tests were calculated between 9 possible predictor variables (Table 1) and 13 variables relating to participants' attitudes and behavioral intentions toward cell-based seafood (Table 2). Holm-Bonferroni post hoc tests were performed to further examine any significant differences between groups with adjusted p-values, as displayed in Tables 3 and 4.

Note: Associations were examined using chi-square tests, followed by a Holm-Bonferroni post hoc analysis. Demographic variables and prior knowledge variables are presented in rows, with emotional states and information on selected positive terms in columns, followed by percentages within each group and respective odds ratios. Bold indicates p < 0.05 from chi-square analysis, (*) indicates significance after post hoc tests.
Significance after Post Hoc Analysis
Two of the possible predictor variables were found to have a significant association with attitude variables after the Holm-Bonferroni adjustment of p-values: participants' age and their prior knowledge about cell-based seafood (Table 5). When participants were asked how much they experienced certain spontaneous emotional states with regard to cell-based seafood, a weak association (phi coefficient = 0.280) was detected, with younger participants being significantly more likely to feel not at all or only slightly disgusted (p adjusted = 0.039). Moreover, an association of medium strength (phi coefficient = 0.433) between respondents' prior knowledge and their willingness to pay a higher price for cell-based seafood was found to be highly significant (p adjusted < 0.001). In our sample, respondents who had previously heard of cell-based seafood were over 14 times more likely to indicate a willingness to pay a higher price for such products.
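The two effect-size measures reported here can both be computed directly from a 2×2 contingency table. A minimal sketch (the counts below are illustrative only, not the study's raw data, and the function name is ours):

```python
import math

def odds_ratio_and_phi(table):
    """Effect sizes for a 2x2 contingency table [[a, b], [c, d]]:
    rows = predictor (e.g. prior knowledge: yes/no),
    cols = outcome (e.g. willing to pay more: yes/no)."""
    (a, b), (c, d) = table
    # Odds ratio: odds of the outcome in row 1 relative to row 2.
    odds_ratio = (a * d) / (b * c)
    # Phi coefficient: chi-square-based effect size, ranging from -1 to 1.
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return odds_ratio, phi

# Illustrative (made-up) counts:
or_, phi = odds_ratio_and_phi([[14, 12], [9, 75]])
```

With these made-up counts, the odds ratio comes out at roughly 9.7 and phi at about 0.45, i.e. an association of medium strength comparable to the one reported above.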
Age
Although chi-square analysis found younger people to be significantly more interested in tasting cell-based seafood (p = 0.005), more excited (p = 0.017), more positive (p = 0.049), and more likely to choose at least three positive terms to describe the concept (p = 0.031), the post hoc adjustment found these weak associations (phi coefficient between −0.3 and 0.3 for all) to be non-significant. Nevertheless, in our sample, participants under the age of 45 scored higher in all positive attitude parameters analyzed.
Gender
Men were more interested in tasting cell-based seafood (p > 0.05), more likely to replace all of their conventional seafood diet with cell-based products (p > 0.05), and less disgusted (p > 0.05) and less worried (p > 0.05) when compared to women. Moreover, men selected greater numbers of positive terms to describe cell-based seafood and were over four times more likely than women to be willing to pay a higher price (p = 0.020). However, none of these observed weak associations (phi coefficient between −0.3 and 0.3 for all) were statistically significant after post hoc analysis.
Size of Town or City
People living in smaller cities were almost five times as likely to agree to pay a higher price for cell-based products as people living in large or mega cities (p = 0.018) and were about twice as likely to replace all of their conventional seafood diet (p > 0.05). Although they scored higher in eight of the nine parameters for a positive attitude, the size of the participants' town or city did not show any significant associations after p-values were adjusted.
Household Size
Results of chi-square analysis suggested that people living alone displayed considerably more positive attitudes than people living with at least one other person. Single-household participants were significantly more likely to buy cell-based seafood (p = 0.004) and more likely to replace all of their conventional seafood diet with cell-based products (p = 0.017). Although the significance of these findings was not supported by post hoc tests, it is noteworthy that, in our sample, people living alone were over three times as likely as people living in larger households to indicate willingness to purchase cell-based seafood and to replace all of their conventional diet.
Annual Household Income
People whose indicated annual household income was less than six million Japanese Yen (≈USD 46,000) were over three times as likely to state they would be willing to replace all of their conventional seafood with cell-based products (p = 0.044) and twice as likely to be interested in tasting cell-based seafood (p > 0.05). However, they appeared to be considerably less excited (p > 0.05) and less positive (p > 0.05). Participants' annual household income showed no significant association with any of the attitudinal variables after post hoc tests.
Seafood Consumption
Participants who indicated they consumed seafood at most once a week were more positive (p = 0.033) and less worried (p = 0.035) about cell-based seafood than more frequent consumers. These weak associations (phi coefficient ≈ −0.2), which chi-square analysis initially indicated to be significant, were not supported by post hoc analysis. In general, participants with high levels of seafood consumption displayed less positive attitudes toward cell-based seafood, scoring lower in seven of the nine parameters analyzed.
Prior Knowledge
In addition to the highly significant association (p adjusted < 0.001) between having been aware of cell-based seafood prior to the survey and willingness to pay a higher price for cell-based products, people with prior knowledge appeared to be more likely to buy cell-based seafood and to replace all conventional products. Furthermore, they were slightly more excited, more positive, and selected more positive terms than people without prior awareness. Interestingly, participants with prior knowledge indicated slightly more disgust and worry about cell-based seafood. Their declared interest in tasting cell-based seafood was almost identical to that of people previously unaware. People with prior knowledge scored higher in six of the nine analyzed parameters, but none of the associations, other than the association between prior knowledge and willingness to pay a higher price, were statistically significant.
To explore the aspect of participants' prior knowledge further, possible associations with demographic variables were investigated. Although no significant associations were detected, in our sample, older people, those residing in cities with fewer than one million inhabitants, people living with at least one other person, and people with high levels of seafood consumption were all slightly more likely to be familiar with cell-based seafood. Men, and people with an annual household income of six million Japanese Yen (≈USD 46,000) or more, were twice as likely to have heard of cell-based seafood before participating in this survey when compared to women or people with a lower income. However, none of these weak associations (phi coefficient between −0.3 and 0.3 for all) proved to be significant.
Methods
Through the collection of empirical data and statistical analysis, we aimed to detect statistically significant associations between consumers' attitudes and behaviors, demographic variables, and cell-based seafood-related knowledge. Specifically, we aimed to answer three key questions:
1. How widespread is knowledge of the concept of cell-based seafood in Japan?
2. Would Japanese consumers be willing to buy cell-based seafood once it becomes available?
3. What demographic variables influence attitudes and behavioral intentions toward cell-based seafood?
The chosen survey instrument was an online questionnaire, enabling access to participants in remote geographical locations while minimizing and standardizing possible interviewer effects. The population of Japan is about 126 million people [60]; therefore, a sample size of 400 participants was required for a 95% confidence level and a 5% margin of error. Only two inclusion criteria were applied: participants had to be Japanese and aged over 18.
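The sample-size target quoted here follows from the standard formula for estimating a proportion. A minimal sketch, assuming the conventional worst-case proportion p = 0.5 (the function name is ours):

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction.
    z = 1.96 corresponds to a 95% confidence level;
    p = 0.5 yields the most conservative (largest) sample."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

n = required_sample_size(126_000_000)  # population of Japan [60]
```

For a population of 126 million this gives 385 respondents, which is commonly rounded up to a target of 400; at this population size, the finite-population correction is negligible.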
The questionnaire contained 23 items split into five main sections, as shown in Table S3 in the Supplementary Materials. Section 1 provided questionnaire-related information and offered participants the possibility of winning a 3000 JPY (≈USD 25) Amazon Japan voucher. This gift lottery was added because reaching the intended sample size of 400 participants was expected to be difficult. Section 1 further included a short explanation of cell-based seafood and its production process. Section 2 collected demographic and socio-economic data of participants, while Section 3 focused on consumption habits concerning conventional seafood. Section 4 explored participants' prior knowledge, feelings, attitudes, and behavioral intentions toward cell-based seafood. In the final section, participants were asked to express cell-based seafood-related uncertainties, as well as personal prerequisites for, and concerns about, cell-based seafood consumption, in a set of optional open-ended questions.
In addition to being based on existing research [48,61-63], the questionnaire's compilation was shaped by interviews conducted between December 2021 and January 2022 with two cell-based seafood start-up companies: San Diego-based BlueNalu and Singapore-based Shiok Meats. The latter interview provided insights into this specific company's current developmental stage for cell-based seafood and the obstacles faced prior to market release. The questionnaire was initially produced in English, then translated into Japanese by a professional academic translator, and finally proofread by four independent native Japanese speakers.
The questionnaire was constructed using the 'Online surveys' platform "www.onlinesurveys.ac.uk" (accessed on 10 May 2022), which is compliant with the U.K. General Data Protection Regulation (GDPR). The questionnaire was piloted with eight Japanese participants to determine whether the information sheet was sufficiently comprehensible, whether all items were logically ordered, and whether survey completion took no more than 10 min. Suggestions were made regarding wording and response options for some items, which were revised and amended before distribution. The survey's final map and logic are shown in Figure 16.
The questionnaire was available online between 16th May 2022 and 15th August 2022. An invitation to participate, including the survey's link and a QR code to facilitate smart device access, was sent to Japanese professors of several universities in Japan, Germany, and the U.K. in the hope they might be able to recruit eligible student participants. These were also sent to ≈3300 recipients of a mailing list related to Japanese studies. As publication on YouTube and within Facebook groups showed only moderate success, the option of paid advertisement on X (formerly Twitter) was chosen; with a budget of USD 150, the advertisement campaign ran between the 9th and 27th July 2022, resulting in more than 175,000 appearances on user timelines. X has over 50 million active users in Japan and is the nation's second most popular social media platform after the messaging service LINE [64]. Its advertisement settings include the option to choose language and region; these were set to Japanese and Japan.
After resultant data were imported into IBM SPSS Statistics version 27, descriptive statistics were used to summarize and display key results within charts and frequency tables. These included demographic data, information about participants' seafood consumption habits, and information about their prior knowledge, general attitudes, and behavioral intentions toward cell-based seafood. Results were cross-tabulated between possible predictor variables and variables serving as parameters for attitudes and behavioral intentions.
To explore statistically significant associations, a series of chi-square tests were calculated between possible predictor variables and variables relating to participants' attitudes and behavioral intentions toward cell-based seafood. As observation numbers in nearly all groups were too small to meet chi-square test assumptions, variables were recoded to combine categories. In cases where observation numbers were still insufficient for chi-square tests, Fisher's exact tests were used. Holm-Bonferroni post hoc tests were performed to further examine any significant differences between groups with adjusted p-values. Odds ratios and phi coefficients were investigated to allow for reasonable comparisons between groups and to understand the strength of relationships between variables.
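The Holm-Bonferroni step-down adjustment mentioned above can be sketched in a few lines of pure Python (the raw p-values below are illustrative only, echoing how several initially significant chi-square results can lose significance after adjustment):

```python
def holm_bonferroni(p_values):
    """Return Holm-adjusted p-values, in the original input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ranks, smallest first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[i])  # step-down multiplier
        running_max = max(running_max, adj)       # enforce monotone adjusted p-values
        adjusted[i] = running_max
    return adjusted

raw = [0.004, 0.017, 0.031, 0.049, 0.20]  # illustrative raw chi-square p-values
adj = holm_bonferroni(raw)
# Only the smallest raw p-value stays below 0.05 after adjustment.
```

In practice, the raw p-values would come from scipy.stats.chi2_contingency or scipy.stats.fisher_exact, and statsmodels' multipletests(method='holm') implements the same adjustment.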
Participants received in advance all essential information about the study's purpose and background, the inclusion criteria, and the estimated time to complete the survey. Informed consent was obtained from all subjects involved in the study. Data were anonymized and securely stored. Ethical approval was granted by the University of Winchester, Winchester, UK, on the 21st of May 2021.
Discussion
The survey results showed a significant gender imbalance, with two-thirds (66.4%) of participants being women, which does not reflect the demographic composition of Japan, where women comprise 51.4% [65]. Such female overrepresentation is common within academic surveys [66,67]. Respondents were from all age groups, with those over 65 making up 8.2% of all respondents. This is considerably less than in the general Japanese population, where over-65s account for 28.4% [68]. This underrepresentation is likely caused partially by the survey being web-based, with Internet penetration rates lower among elderly people [69].
The share of respondents living in urban areas was 99.1%, considerably higher than the overall Japanese proportion (91.7%) [70]. However, rates were more aligned for people living in large or mega cities, who comprised 27.3% of our sample, compared with 28% of the Japanese population [71]. Additionally, both sample data and overall population data [72] indicated the average household size to be slightly over two people. As 75.5% of survey participants had a college degree, the sample appeared to be more highly educated than the general Japanese population, in which 52.7% attained tertiary education [73]. For women in the general population, however, the rate of tertiary education is higher, at 64.4% (ibid.). This suggests that the observed difference in education level might, in part, be due to the aforementioned gender imbalance within the sample. Yearly household income was not compared due to the lack of recent official data suitable for comparison.
The majority of respondents (81.8%) were frequent seafood consumers, confirming seafood consumption between once a week and several times daily. The largest share of respondents (40.0%) consumed seafood two or three times a week. Although these high levels of seafood consumption might reflect self-selection bias, with people fond of seafood more likely to participate in this survey, effects on the sample are relativized by Japan being a country with a considerably high level of seafood consumption [39,40]. Also, a recent survey with almost 10,000 participants suggested consumption levels very similar to the findings presented here [74]. In our sample, one respondent (0.9%) stated they would never eat seafood and cited 'veganism' as the reason. This percentage is slightly lower than overall estimates for the Japanese population [75].
Prior Knowledge and Spontaneous Feelings
Only about a quarter of all respondents indicated they had heard of cell-based seafood before the survey, similar to findings in the USA [76] and Germany [77], but less than in other countries, for example, Belgium, where about 36% of surveyed consumers reported having heard about it previously [78]. However, in previous research, a large share of participants who indicated prior knowledge were found to be only vaguely familiar with cell-based seafood, resulting in the general public's perception of cell-based animal products still being characterized by uncertainty [78]. As Rolland et al. [52] found a strong positive relationship between prior awareness and initial acceptance of cell-based animal products, the need for information campaigns and advertisement is evident when aiming to increase cell-based seafood's market success.
The most salient spontaneous emotion that participants reported was interest, followed by feeling positive and excited. While this highlights the positive side of the emotional spectrum that cell-based animal products can evoke in consumers, the negative aspects should not go unreported; almost 90% of participants felt at least slightly worried, and about 60% indicated at least slight disgust. This concurs with previous research, which found that initial reactions to cell-based animal products were commonly underpinned by disgust [55,78,79]. Consumers' feelings, independent of whether or not they appear reasonable, are known to greatly influence and, in many cases, determine purchasing decisions [80,81]. Van Praet [82] went as far as to argue that we humans 'feel our way to reason'. While probably a somewhat exaggerated formulation, this emphasizes the pivotal role of emotions in a product's market success; cell-based seafood companies might be well advised to place special focus on better understanding the emotional alignment of their products with consumers.
Interest in Tasting and Intentions to Purchase
The vast majority of respondents (71.8%) confirmed interest in tasting cell-based seafood, indicating a considerably higher level of openness among Japanese consumers than is evident among consumers of several other nations. Examples include Belgium with about 40% [78,83], Italy with 54% [51], and Germany with 57% [84]. Along with positive findings for Singapore (78%) [85] and Hong Kong (95%) [86], results from our survey suggest less pronounced forms of food neophobia (i.e., fear of novel foods) among Asian consumers and greater dietary flexibility. This notion is supported by almost 80% of our survey's participants agreeing that "progress in food development is good, even if we have to adapt our current diet" and by findings from Bryant et al. [62], indicating acceptance to be significantly higher in the Asian countries researched. Sixty percent of our respondents stated they were likely to buy cell-based seafood once it becomes available, a percentage more than five times that reported for Canadian consumers [46]. In contrast to the high purchasing interest for cell-based versions of readily consumed seafood species such as salmon (81.4%) and tuna (67.6%), interest in zebrafish was low (5.9%), which raises doubts about the feasibility of the approach proposed by Potter et al. [34], who urged culinary use of this species in light of the high degree of scientific and husbandry knowledge about zebrafish. Additionally, almost half of the surveyed Japanese consumers rated the source of conventional seafood as slightly or not at all important for their purchasing decisions. Interpreted optimistically, this mindset might predict lower levels of reluctance to eat products that, in the future, might come from a bioreactor instead of the sea.
When asked about their willingness to replace all conventional seafood with cell-based products, about 20% of participants indicated they would be likely or extremely likely to do so. Although a moderately high percentage, this is somewhat lower than findings for the USA from Wilks and Phillips [76], where about a third of surveyed consumers agreed to replace their conventional (meat) diet with cell-based products. It is also much lower than what De Oliveira et al. [87] found for Brazilian consumers (about 57%). Regarding willingness to pay a higher price for cell-based products, our study's results are fully in line with findings from several other studies indicating little or no willingness to pay a premium for cell-based products [49,88]. In our survey, only 11.8% agreed to pay more, and the majority of those would only pay a slightly or moderately higher price. This underlines the pressing need for cell-based seafood companies to concentrate efforts on achieving price parity with conventional seafood in order for their products to attract a bigger market.
Overall Assessment of Japanese Consumer Attitudes
In general, the surveyed Japanese consumers displayed positive attitudes toward cell-based animal products when compared to consumers in other countries. Although the vast majority of our survey's participants had never heard of cell-based seafood before, they were open to the idea, with 80% indicating they felt 'excited', 'positive', and 'interested'. About two-thirds described cell-based seafood as 'future-oriented' and 'fascinating', and over 40% as 'necessary'. Although about 20-30% of participants used negative terms such as 'unnatural', 'weird', and 'scary', overall, positive terms were selected over three times more frequently than negative terms, suggesting high levels of acceptance within our sample. While consumer attitudes in numerous other countries appeared to be dominated by perceived 'unnaturalness' [48,50,89], only about a third of Japanese consumers seem to feel that way. With more than 70% of participants indicating interest in tasting them and 60% likely to purchase cell-based seafood products once they become available, the Japanese market appears to be moderately prepared for launches of such products in the near future. Nevertheless, further findings of this study suggest that cell-based seafood producers will need to fulfill a set of conditions before regular consumption by a larger Japanese audience can be expected. In line with observations from Verbeke et al. [48] and Liu et al. [49], the absolute prerequisites for regular consumption by a large number of consumers appear to be proven safety and quality assurance of products, good taste, and an affordable price.
Consumer-Related Variables with Potential Effect on Attitudes
Similar to findings from Mancini and Antonioli [51], as well as Szejda et al. [47], participants' age showed a strong relationship with attitudes and behavioral intentions. In our sample, when compared to older people, those aged below 45 were almost three and a half times as likely to be interested in tasting cell-based seafood, moderately more likely to purchase cell-based seafood at a higher price, and moderately more likely to replace all of their conventional seafood diet with cell-based products. Younger participants indicated they felt more excited and positive and less disgusted and worried. However, the anticipated positive effects of younger age on consumer attitudes are relativized by the Japanese population being categorized as a 'super-aged society', with an aging rate that is the highest in the world and unprecedented in absolute terms [90].
No connection between participants' education level and their cell-based seafood-related attitudes could be detected, contrasting with findings from Valente et al. [91] and Van Loo et al. [92]. An interesting (albeit not significant) association was found between participants' frequency of seafood consumption and their spontaneous feelings about cell-based seafood, with negative feelings being more pronounced for people with high levels of seafood consumption. This aspect is certainly worthy of further exploration, as a negative correlation between seafood consumption and acceptance of cell-based products would be extremely unfavorable in terms of expected demand for cell-based seafood. Participants living by themselves showed more positive attitudes in general and were more likely to state they would buy cell-based seafood once it became available. This might be considered a somewhat positive finding, as the share of one-person households in Japan accounted for about a third of all households in 2015 and is steadily growing [93].
In contrast to findings from Valente et al. [91], Van Loo et al. [92], and Bryant and Sanctorum [83], no significant associations between attitudes and participants' gender could be detected. Nevertheless, it is noteworthy that male respondents in our sample reached higher scores than female respondents in eight of the nine closely analyzed parameters, indicating more positive attitudes. This is in line with findings from Van Loo et al. [92] and Bryant and Sanctorum [83], who argued that the majority of future consumers of cell-based animal products will likely be male. In our sample, the disparity between men and women was especially pronounced for participants' willingness to pay more for cell-based seafood; men were over four times more likely to be willing to pay a higher price. Interestingly, men's high level of readiness was surpassed multiple times by the readiness levels of people with previous knowledge about cell-based seafood; participants who had previously heard of it were over 14 times more likely to agree to pay a higher price.
This highly significant association between prior awareness and willingness to pay a premium for cell-based seafood is in line with findings from previous studies [51,52,62,94] that found prior knowledge to be strongly associated with more positive consumer attitudes. In our sample, people with prior knowledge appeared more likely to buy cell-based seafood and to replace all conventional products. Moreover, they indicated they felt more excited and positive and selected more positive terms to describe cell-based seafood. Although their interest in tasting cell-based seafood was almost identical to that of people formerly unaware, people with prior knowledge scored higher in six of the nine analyzed parameters indicating a positive attitude, highlighting the importance of awareness and information campaigns. However, one should consider that the direction of the effect detected here might very well be the other way around; people who are generally more open toward this kind of progress in food production can be expected to be better informed about development trends when compared to people with a more conservative mindset. Taking consumers' different mindsets into account should form an essential part of cell-based seafood marketing strategies, as innovations have been found to spread through cultures in a specific sequence, as described in the 'diffusion of innovations theory' by Rogers [95] and Moore [96]. This theory explains how new ideas or products gain momentum and diffuse through a social system or specific population.
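Effect sizes such as "over 14 times more likely" are odds-ratio-style comparisons derived from 2 × 2 cross-tabulations. As a minimal sketch, with hypothetical counts that are not the study's raw data, this is how such a ratio is computed from an awareness-by-willingness table:

```python
# Odds ratio from a 2x2 contingency table.
# NOTE: the counts below are hypothetical, chosen only to illustrate how an
# effect of roughly "14 times more likely" can arise; they are NOT the
# study's raw data.

def odds_ratio(a, b, c, d):
    """Rows: exposed (a = outcome yes, b = no); unexposed (c = yes, d = no)."""
    return (a / b) / (c / d)

aware_yes, aware_no = 10, 18      # prior knowledge: willing / not willing to pay more
unaware_yes, unaware_no = 3, 79   # no prior knowledge

print(round(odds_ratio(aware_yes, aware_no, unaware_yes, unaware_no), 1))  # 14.6
```

In practice, such an odds ratio would be accompanied by a significance test (e.g., a chi-square or Fisher's exact test) and an adjusted p-value, as reported in the study's tables.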
Diffusion of Innovations Theory
The diffusion of innovations theory might be helpful for cell-based seafood marketers as it illustrates and explains how novel products transition from an early market to the mainstream market (Figure 17). Applying this theory to cell-based seafood, we would expect products to be quickly adopted by early market groups, namely tech enthusiasts and visionaries. However, cell-based seafood products will have to move well beyond this point to be successful in the long term. Crossing 'the chasm' [96] between early adopters and the early majority, and thus to the mainstream market, will likely present a major challenge for cell-based products, as the motivation of the mainstream market group is fundamentally different; in simplified terms, the early majority's pragmatists want to purchase products that have already been successfully tested by others and that offer some kind of improvement when compared to conventional products [95,96]. This circumstance highlights the need to tailor marketing messages to suit the desires and motivating factors of particular market groups as cell-based products diffuse through society.
Figure 17. Innovation adoption curve according to the diffusion of innovations theory by Rogers [95] and Moore [96]. © Łukasz Zielinski. After De Bruin [97]. Note: Shades of blue denote time for adoption by different market groups. Darker shades denote longer time for adoption.
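The market segments behind such an adoption curve are, in Rogers' textbook formulation, fixed shares of the eventual market: innovators 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%. These values are general theory, not data from the present survey; a short sketch shows where 'the chasm' sits in cumulative terms:

```python
# Rogers' classic adopter-category shares (percent of total market).
# These are standard textbook values, not results from the present survey.
SEGMENTS = [
    ("innovators", 2.5),
    ("early adopters", 13.5),
    ("early majority", 34.0),
    ("late majority", 34.0),
    ("laggards", 16.0),
]

def cumulative_share(up_to):
    """Cumulative market share (%) once adoption has reached segment `up_to`."""
    total = 0.0
    for name, share in SEGMENTS:
        total += share
        if name == up_to:
            return total
    raise KeyError(up_to)

# 'The chasm' separates the early market (innovators + early adopters, 16%)
# from the early majority; crossing it means reaching the mainstream market.
print(cumulative_share("early adopters"))  # 16.0
print(cumulative_share("early majority"))  # 50.0
```

In these terms, a product that only ever convinces the early market plateaus at roughly 16% of its potential audience, which is why crossing the chasm is the decisive commercial step.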
Study Limitations
The primary limitations of this research were posed by language barriers. Although the primary researcher is confident in the Japanese language, the translation of academic material lies beyond her proficiency. Therefore, the questionnaire's translation heavily relied on external help. Although the engaged professional translator was asked to give particular attention to avoiding ambiguous wording and leading questions, the inadvertent creation of bias could not be entirely ruled out. To address this, the translated questionnaire was proofread by four independent native speakers who understood the need to comply with standards designed to minimize inadvertent bias.
The research design employed an online survey instrument, which by itself incurred inherent limitations associated with physical or financial access constraints. For Japan, this particular limitation was not very significant, as almost 93% of the Japanese population has access to the Internet [98]. However, an online survey instrument is likely to reach significantly higher numbers of younger people; in Japan, the Internet penetration rate of people aged 80 years or older is less than 30% [69], which can lead to a pronounced underrepresentation of elderly people. The transferability of this study's findings to the general Japanese population is markedly limited by the study's small sample size. The 110 responses obtained were less than a third of the desired sample size of 400. Moreover, respondent numbers from some demographic groups, for example, people aged 75+ or those following a vegan diet, were extremely low, thus preventing reasonable conclusions concerning these groups. Future research might increase participant numbers via longer study durations or financial means; paying respondents directly for participation might enhance participant numbers and ensure a necessary minimum of collected responses for each demographic group [99,100].
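The "desired sample size of 400" is consistent with the common normal-approximation rule of thumb for estimating a population proportion, n = z²·p(1−p)/e². Assuming (our interpretation, the text does not state this) a 95% confidence level, a 5% margin of error, and the worst-case proportion p = 0.5:

```python
import math

# Required sample size for estimating a proportion (normal approximation).
# Assumed parameters (not stated in the text): z = 1.96 (95% confidence),
# e = 0.05 (5% margin of error), p = 0.5 (worst case).
def required_n(z=1.96, p=0.5, e=0.05):
    return math.ceil(z * z * p * (1 - p) / (e * e))

print(required_n())  # 385, commonly rounded up to ~400
```

Tightening the margin of error raises the target sharply (e.g., e = 3% already requires over 1000 respondents), which is why small-budget surveys usually settle near the 400 mark.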
Conclusions
The level of Japanese consumers' previous knowledge was found to be low, with only about a quarter indicating prior knowledge of cell-based seafood. This highlights the need for more information campaigns to prepare the Japanese market for product launches expected in the near future. Despite the observed lack of prior awareness, overall attitudes and behavioral intentions were positive; about 70% expressed an interest in tasting cell-based seafood, 60% stated they planned to become a purchaser, and about 20% indicated they would replace all of their conventional seafood diet with cell-based products. Younger age was significantly associated with more positive attitudes; people below the age of 45 were found to be over three times as likely to express an interest in tasting, over four times as likely to feel not at all or only slightly disgusted by cell-based seafood, and they also scored higher in all of the other attitude parameters analyzed, when compared to older people. Furthermore, this study found a significant association between prior knowledge and willingness to pay a premium for cell-based products; participants aware of cell-based seafood before the survey were over 14 times as likely to agree to pay a higher price.
Although not reaching statistical significance in the studied sample, attitudes and behavioral intentions of men were considerably more positive than those of women, with male participants scoring higher in eight of the nine analyzed parameters indicating a positive attitude. Single-household participants were considerably more likely to be interested in tasting cell-based seafood, to buy the products once they became available, and to replace all of their conventional seafood diet with cell-based products. Additionally, they were moderately more likely to express higher degrees of positive and lower degrees of negative spontaneous emotional states concerning cell-based seafood. Participants living in smaller cities showed more positive attitudes than participants living in large cities or megacities. This study's findings indicate that the Japanese market is moderately ready for cell-based seafood product launches. High levels of interest and low levels of food neophobia might indicate the existence of a considerable number of innovators and early adopters within the studied sample, possibly portending a promising future market. However, more research is needed to understand the nature of mainstream market groups to allow for conclusions to be drawn about the sustainability of Japanese market success. Further research might explore how various messaging strategies or information on different cell-based seafood benefits might affect consumer attitudes. It would be especially interesting to investigate whether information about individual or societal benefits would show greater potential to influence attitudes. With cell-based seafood start-ups preparing to launch products within Asian markets in the near future, the right time for intelligent advertisement appears to be sooner rather than later. This should not merely eliminate product-related uncertainties but should also consider the desires and motivating factors of particular market groups, as well as Japan's specific cultural idiosyncrasies.
Figure 4. Frequency of seafood consumption among 110 Japanese consumers.
Figure 5. Usual seafood consumption sites of 110 Japanese consumers.
2.3. Cell-Based Seafood

2.3.1. Prior Knowledge and Spontaneous Feelings

Almost three-quarters of respondents (74.5%) had not heard of cell-based seafood or were unsure about this; only 25.5% stated they had awareness of cell-based seafood prior to the survey.
Figure 7. Spontaneous emotional states concerning cell-based seafood among 110 Japanese consumers.
Figure 8. Interest in tasting cell-based seafood among 110 Japanese consumers.
Figure 9. Likeliness to purchase cell-based seafood (left) and to replace all conventional seafood (right) among 110 Japanese consumers.
Figure 10. Willingness to pay a higher price for cell-based seafood among 110 Japanese consumers.
Figure 11. Interest in purchasing cell-based products of different seafood species among 110 Japanese consumers.
[…] agreed nor disagreed. While this could indicate indifference, it could also indicate reluctance to express opinions honestly, e.g., if concerned views might be perceived as outdated or socially undesirable. The progressive statement received agreement from almost 80% of participants. Again, this optimistic result should be interpreted with caution, as (although the survey was focused on cell-based seafood) some participants might have perceived 'progress in food development' as, for example, more sustainable fishing or aquaculture practices, and not necessarily as the development of cell-based alternatives.
Figure 12. Opinions on three different statements related to traditional and modern food production among 110 Japanese consumers.

2.3.4. Positive and Negative Terms Selected to Describe Cell-Based Seafood
Figure 13. Terms chosen to describe the development of cell-based seafood by 110 Japanese consumers. Note: Light bars denote positive terms, and dark bars denote negative terms.
Figure 14. Aspects of cell-based seafood remaining unclear to 52 Japanese consumers. Note: Increased size and frequency of words indicate higher levels of uncertainty to consumers.
Figure 15. Concerns about consumption of cell-based seafood expressed by 25 Japanese consumers. Note: Combined percentages exceed 100%, as some respondents indicated concern about more than one aspect.
Figure 16. Survey map and logic.
Table 1. Possible predictor variables. Note: Possible predictor variables are followed by respective categoric values and lowest significance values for associations with attitude parameters. Variables highlighted in gray showed no significant association with any of the attitude parameters and were not examined further. Boldface denotes statistically significant results (p < 0.05).
Table 2. Variables relating to cell-based seafood attitudes. Note: Variables relating to cell-based seafood attitudes are followed by each variable's respective value(s) indicating a positive attitude. Variables highlighted in gray showed no significant association with any of the possible predictor variables and were not examined further. Attitude values were combined for statistical analysis, e.g., the value 'Extremely' for spontaneous emotional states was included in 'Very or moderately'.
Table 3. Associations between key demographic variables, prior knowledge of cell-based seafood, and attitudes toward cell-based seafood. Note: (*) indicates significance after post hoc tests.
Table 4. Associations between key demographic variables, prior knowledge of cell-based seafood, and spontaneous emotional states, as well as positive terms selected to describe cell-based seafood.
Table 5. Possible predictor variables with their lowest adjusted significance values for respective associations with attitude variables. Lowest value: adjusted p < 0.001 * (willingness to pay a higher price). Note: (*) indicates significance after post hoc tests.
Menisci protect chondrocytes from load-induced injury

Menisci in the knee joint are thought to provide stability, increased contact area, decreased contact pressures, and offer protection to the underlying articular cartilage and bone during joint loading. Meniscal loss or injury is typically accompanied by degenerative changes in the knee, leading to an increased risk for osteoarthritis in animals, including humans. However, the detailed mechanisms underlying joint degeneration and the development of osteoarthritis remain largely unknown, and the acute effects of meniscal loss have not been studied systematically. We developed a microscopy-based system to study microscale joint mechanics in living mice loaded by controlled muscular contractions. Here, we show how meniscal loss is associated with rapid chondrocyte death (necrosis) in articular cartilage within hours of injury, and how intact menisci protect chondrocytes in vivo in the presence of intense muscle-based joint loading and/or injury to the articular cartilage. Our findings suggest that loading the knee after meniscal loss is associated with extensive cell death in intact and injured knees, and that early treatment interventions should be aimed at preventing chondrocyte death.
Menisci in the knee joint are thought to provide stability, increased contact area, decreased contact pressures, and offer protection to the underlying articular cartilage and bone during joint loading. Meniscal loss or injury is typically accompanied by degenerative changes in the knee, leading to an increased risk for osteoarthritis in animals including humans. However, the detailed mechanisms underlying joint degeneration and the development of osteoarthritis remain largely unknown, and the acute effects of meniscal loss have not been studied systematically. We developed a microscopy-based system to study microscale joint mechanics in living mice loaded by controlled muscular contractions. Here, we show how meniscal loss is associated with rapid chondrocyte death (necrosis) in articular cartilage within hours of injury, and how intact menisci protect chondrocytes in vivo in the presence of intense muscle-based joint loading and/or injury to the articular cartilage. Our findings suggest that loading the knee after meniscal loss is associated with extensive cell death in intact and injured knees, and that early treatment interventions should be aimed at preventing chondrocyte death.
Materials and Methods
Animal preparation. This study was carried out in accordance with the guidelines of the Canadian Council on Animal Care and was approved by the committee for Animal Use and Ethics at the University of Calgary.
Thirty-nine adult, male mice (10-12 weeks of age) were used in this study. Mice were anesthetized with an isoflurane/oxygen mixture (1-3%). The right knee joint was shaved and secured in a stereotactic frame that was rigidly attached to the stage of a dissecting microscope. The medial aspect of the joint was exposed with a 6 mm incision just posterior to the medial collateral ligament (Fig. 1a).
Mice were divided into 6 experimental groups based on whether they were loaded by muscular contraction or not (i.e., 4 loaded groups and 2 non-loaded control groups). The loaded groups were divided by whether their meniscus was left intact or was removed, and whether they received a cartilage injury or not (Fig. 2). The cartilage injury (scalpel cut, ~20 µm width, ~350 µm length) was applied underneath the medial meniscus and across the cartilage in the cartilage contact region, as shown in Fig. 1. The experimental groups were identified by the following notation: loaded (L), unloaded (L̄), meniscus intact (M), meniscus removed (M̄), and cartilage injury (I) or no cartilage injury (Ī). Three animals were chosen at random from the injured groups (LMI, LM̄I and L̄M̄I) to quantify apoptotic vs. necrotic cell deaths.
Cell viability staining. The exposed medial aspect of the knee was washed and filled with 30 µl of prepared Calcein AM and Ethidium homodimer-1 for live and dead cell identification respectively (Molecular Probes/ Invitrogen, USA). After a 30 minute incubation period in complete darkness, excess stain was removed and the area was washed and filled with a fresh phosphate buffered saline solution (PBS) allowing for the use of a water-immersion objective.
Multi-photon microscopy and second harmonic generation (SHG). Following staining, anesthetized mice were moved onto the stage of a multi-photon microscope (FVMPE-RS, Olympus, Japan). Medial femoral condyle cartilage was imaged using a 25×/1.05 NA water-immersion objective (Olympus Inc., Japan) coupled with two independent multi-photon infrared pulsed lasers (InSight DS and Mai Tai DeepSee, Spectra Physics Inc., USA), enabling simultaneous excitation at different wavelengths. The first laser was tuned to 800 nm to produce SHG, while the second laser was tuned to 940 nm to excite both live and dead cell stains. The emission signals were directed to a single-edge dichroic beam splitter (FF458-Di02, Semrock Inc., USA) to separate the SHG signal from the live/dead cell signal. Live and dead cell signals were further separated using a dichroic beam splitter (FF570-Di01, Semrock Inc., USA) and were then focused onto two non-descanned detectors through two single bandpass filters, FF01-520/35 and FF01-612/69 (Semrock Inc., USA), to capture the live and dead cell signals respectively. The SHG signal was directed to a single-band bandpass filter centered at 400 nm (FF01-400/40, Semrock Inc., USA) prior to focusing it onto a sensitive GaAsP non-descanned detector.
Muscular loading of the mouse knee. Controlled muscular loading of the knee was achieved by stimulation of the knee extensor muscles using two fine wire electrodes inserted into the quadriceps muscle group. Muscles were stimulated using electrical stimulation with a Grass (S8800) digital stimulator, as we have previously described 10,11,25. We have also demonstrated that a minimum load of 50% of the maximal muscular contraction is required to establish contact between the two cartilage surfaces 11, so we aimed for an 80% muscular load, which is within the physiological range and leads to clear cartilage-to-cartilage contact and deformation when the medial meniscus is removed. The free tips of the exposed fine wires were separated by 2 mm, and application of approximately 7 volts at a frequency of 50 Hz resulted in 80% of the maximal isometric force 11,26. Knee extensor torques were measured with a strain bar (Entran Sensors & Electronics, USA) attached to the distal part of the tibia while the femur was rigidly fixed to prevent articular surface movement (<0.5 µm) 9,11. Dynamic cyclic loading. In loaded animals, stimulation trains of 0.5 s at 50 Hz every 4 s, for 15 repeat contractions, were applied to the quadriceps muscles every 30 minutes up to 240 minutes of observation. These contractions produced a compressive load at the knee articular surfaces corresponding to ~80% of the maximal possible muscle-induced joint compression. Multi-photon scans were taken before loading and every 30 minutes, following each bout of muscular loading, up to 240 minutes (Fig. 2).
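One reading of this schedule (a sketch; it assumes a bout at t = 0 and none at the final scan time itself) reproduces the contraction counts reported in the Results, i.e., 45 contractions by the 90-min scan and 120 by 240 min:

```python
# Loading-schedule sketch: 15 contractions per bout, one bout every 30 min.
# Assumption (not stated explicitly in the text): the first bout occurs at
# t = 0 and no bout is applied at the scan time itself.
BOUT_INTERVAL_MIN = 30
CONTRACTIONS_PER_BOUT = 15

def contractions_by(minute):
    """Total contractions delivered before the scan taken at `minute`."""
    bouts = len(range(0, minute, BOUT_INTERVAL_MIN))
    return bouts * CONTRACTIONS_PER_BOUT

print(contractions_by(90))   # 45, as reported at the 90-min time point
print(contractions_by(240))  # 120, as reported at 240 min
```

Each bout itself lasts about a minute (15 trains of 0.5 s spaced 4 s apart), so the joint is unloaded for most of each 30-min interval between scans.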
Detection of apoptosis/necrosis in injured cartilage. Three animals representing the groups LMI, LM̄I and L̄M̄I were used for distinguishing between apoptosis and necrosis of chondrocytes. Mice were prepared as described under the animal preparation section. Apoptosis/necrosis was detected using an apoptosis/necrosis detection kit (ab176750, Abcam, Cambridge, UK) at a concentration of 2 µM of Apopxin Deep Red (apoptotic indicator) and 1 µM of Nuclear Green (necrotic indicator). Mouse knees were incubated in the Apopxin Deep Red and Nuclear Green solution for 45 minutes in complete darkness prior to testing. After staining, knees were rinsed in phosphate-buffered saline (PBS) for 15 minutes, then mounted on the stage of a multi-photon laser scanning microscope (FVMPE-RS, Olympus, Japan). Two multi-photon lasers tuned to 1230 nm and 800 nm were used to excite the apoptotic and necrotic cell indicators respectively. Cells were visualized using the filter cubes Cy5 (Em = 660 nm) and FITC (Em = 520 nm) for apoptotic and necrotic cells respectively.
Multi-photon and SHG image analysis.
Prior to starting the muscular loading protocol, and at intervals of 30 min thereafter, a simultaneous stack of images of collagen tissue along with live/dead cells, or a stack of apoptotic/necrotic cells, was acquired (Fig. 2). A stack consisted of serial images of 1 µm thickness, ranging in depth from 250-350 µm from the medial side towards the middle of the joint. The field of view was 509 × 509 μm² (pixel size: 0.994 µm × 0.994 µm; pixel dwell time: 2 µs; frame scan time: 1.084 s). Three-dimensional shapes of these stacks were reconstructed using open source software (ImageJ, NIH, USA). Live and dead cells were counted manually over the entire 3D volume. For knees containing a cartilage injury, live and dead cell counts were discretized into intervals of 0-100 µm, 100-200 µm, etc., up to a distance of 500 µm perpendicular to both sides of the injury (Fig. 1b). The percentage of cell death was calculated as C/C₀ × 100%, where C is the number of dead cells and C₀ is the total number of cells.

Statistical analysis. Statistical analyses were made using SPSS software (Version 23.0, SPSS Inc., Chicago, IL). The assumptions of normality (Shapiro-Wilk test) and sphericity (Mauchly test) were tested for all dependent variables. If the assumption of sphericity was violated, the corrected value for non-sphericity with Greenhouse-Geisser epsilon was reported.
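The discretized cell-death calculation described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the cell positions are invented, given as signed perpendicular distances (µm) from the injury with a flag marking dead cells:

```python
from collections import defaultdict

def percent_dead_by_bin(cells, bin_width=100, max_dist=500):
    """cells: iterable of (signed_distance_um, is_dead).
    Returns {bin_start_um: percent dead}, i.e. C / C0 * 100 per bin."""
    dead, total = defaultdict(int), defaultdict(int)
    for dist, is_dead in cells:
        if abs(dist) > max_dist:
            continue  # outside the 500-um analysis window
        b = int(dist // bin_width) * bin_width  # e.g. 150 -> 100, -150 -> -200
        total[b] += 1
        dead[b] += int(is_dead)
    return {b: 100.0 * dead[b] / total[b] for b in sorted(total)}

# Invented example: three cells 100-200 um on one side of the injury
# (negative = anterior), two on the other.
cells = [(-150, True), (-120, True), (-180, False), (40, True), (60, False)]
print(percent_dead_by_bin(cells))  # bin -200: 2 of 3 dead; bin 0: 1 of 2 dead
```

The sign convention (anterior negative, posterior positive) is an assumption for the sketch; the study simply reported both sides of the injury separately.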
To determine the effects of meniscectomy, joint load, and cartilage injury on chondrocyte death, a one-way analysis of variance (ANOVA) with repeated measures for 9 time points (0, 30, …, 240 minutes) and independent samples for the 6 groups (LMI, LMĪ, LM̄I, LM̄Ī, L̄M̄I, L̄M̄Ī) was performed. Since the results showed a significant group × time interaction effect, a one-way ANOVA with Tukey corrections was used to compare the effects of the different interventions over the time course of measurement. Significance was defined as p < 0.05.
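For intuition only, the F statistic underlying such an ANOVA can be computed from between-group and within-group sums of squares. This sketch uses a simple one-way design on invented data; it does not model the study's repeated-measures structure or its Greenhouse-Geisser correction:

```python
# One-way ANOVA F statistic from first principles (illustrative only; the
# study used a repeated-measures design, which this sketch does not model).

def one_way_anova_f(groups):
    """groups: list of lists of observations. Returns (F, df_between, df_within)."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = len(groups) - 1, len(all_obs) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Invented % cell-death readings for three groups at one time point:
f_stat, df_b, df_w = one_way_anova_f([[12, 14, 13], [44, 46, 45], [14, 15, 16]])
print(round(f_stat), df_b, df_w)  # a very large F: clear group separation
```

A large F against the F(df_between, df_within) distribution yields the small p-values reported in the Results; post hoc (Tukey-corrected) comparisons then localize which group pairs differ.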
Results
Experiments were conducted to determine the role of the menisci under physiological magnitudes of joint loading on injured and uninjured cartilage surfaces. There were significant differences amongst groups as a result of applying joint loading, removing the meniscus, and introducing a focal cartilage injury. ANOVA demonstrated a significant group × time interaction for the percentage of dead cells (F40,240 = 25.13, p < 0.001), with increasing cell death associated with increased time and increased numbers of joint loading cycles. In the group with an intact meniscus and a focal cartilage injury, chondrocyte death was significantly reduced compared to the meniscectomized joints following muscular loading (LMI vs LM̄I; 14 ± 4% vs 45 ± 9% at 240 min; Figs 3 and 4). Cell death started to increase significantly (p < 0.001) in the LM̄I group (meniscectomy with cartilage injury) at 90 min (corresponding to 45 isometric muscular loading cycles), resulting in rapid increases in cell death that reached 45% in 4 hours (Figs 3 and 4). For the same conditions, but with the meniscus intact (LMI group), cell death only became significant (p = 0.05) relative to control animals at the 210 min time point, and at that time was similar to that in non-loaded meniscectomized knees (13.5 ± 4% vs 14.5 ± 4%; Fig. 3). Cell death was similar for the non-loaded, meniscectomized, non-injured (L̄M̄Ī) group and the loaded, meniscus-intact, non-injured (LMĪ) group at all time points. Cell death in the LMĪ group was approximately half of that in the meniscectomized, uninjured (LM̄Ī) group (Figs 3 and 5).
The percentage of cell death increased rapidly as a function of time near the cartilage injury (<200 µm on both sides of the injury), reaching >70% in the LM̄I group compared to ~30% for the corresponding condition with the meniscus intact (LMI animal group; Fig. 6). Cell death tended to be greater on the anterior compared to the posterior side of the injury in the LM̄I group, but this effect did not reach statistical significance (p = 0.056) (Fig. 6a). In the presence of injury without muscular loading (L̄M̄I group), cell death was concentrated near the cartilage injury (Fig. 6b). However, with the menisci intact and the joint loaded, cell death was virtually symmetrical relative to the cartilage injury (LMI animals; Fig. 6c). The distribution of cell death was similar between the non-loaded knees with injury and the loaded group when the meniscus was intact (Fig. 6d).
In the presence of cartilage injury, the meniscus helped to reduce cell death, prevent damage to collagen fibrils, and preserve the integrity of the collagen fibrillar network (Fig. 4). Release of cartilage fragments was observed near the anterior side of the injury at 240 min or 120 muscular contractions (Fig. 7), while such fragments were never observed when the medial meniscus was left intact.
Necrosis was the dominant mechanism of cell death in this model. Cell necrosis began to develop in the four loaded animal groups LMI, LMI, LMI and LMI starting at 30 min. Apoptotic cells were observed only in the LMI group; one apoptotic cell was detected near the cartilage injury at 120 min. Apoptotic cells were identified close to and at a distance from the cartilage injury at 240 min (Fig. 8).
Discussion
The primary result of this study is the quantitative demonstration of a profound, acute protective role of the menisci. Specifically, the menisci were found to reduce/prevent chondrocyte death and acute damage to the collagen fibrillar network in the loaded and/or injured mouse knee. It has been shown that cell death is correlated with the progression of OA, and that cell death plays an important role in the development of OA [27][28][29][30] . By reducing cell death and collagen damage, the menisci help maintain the overall health of articular cartilage, thereby possibly delaying or preventing the development of OA in the knee joint.
Role of the Menisci in the Uninjured Joint. A large body of research has used meniscectomy or a DMM
in otherwise uninjured joints to induce osteoarthritis in animal models 16,17,19 . Joints rapidly progress toward OA after the menisci are removed or de-functioned in these experimental models. Moreover, in the case of isolated meniscal tears in the human knee, partial or total meniscectomy drastically increases the risk of early onset OA 20,23 . In our study, the presence of the menisci in the murine knee helped to reduce the percentage of cell death in joints with intact cartilage. Cell death was approximately double in the meniscectomized, uninjured joint (LMI) compared to joints with an intact meniscus exposed to physiological loading conditions (LMI) (Figs 3 and 5), while the loaded intact meniscus group animals (LMI) had approximately the same level of cell death as the unloaded control group joints (LMI). These findings indicate that in a meniscectomized, but otherwise healthy joint, physiological muscular loading can result in increased chondrocyte death in an acute setting (4 hours).
Mechanically, the menisci are thought to play important roles in load distribution and joint stability 31,32 . Following meniscectomy, the tibiofemoral contact area decreases by (30-70%) 33 , resulting in increased contact stresses. Furthermore, it has been theorized that the initial loading of the menisci pre-stresses the articular cartilage, via fluid pressurization; preparing it for loading 24 . These two mechanisms combined could lead to significantly increased strains in the cartilage matrix in the meniscectomized knee. Since the predominant mechanism of cell death was necrosis in the area of cartilage-cartilage contact 11 , it is probable that overloading (excessive strain) caused the cell death.
Role of the Menisci in the Injured Joint.
In the presence of a focal cartilage defect, almost 50% of the chondrocytes were dead after 4 hours in the loaded, meniscectomized knee (LMI), while only 14% of the cells were dead for the corresponding conditions with the menisci intact (LMI). The rapid cell death in meniscectomized knees is in agreement with previous findings. Bartell et al. 34 found that chondrocyte death was highly correlated with a threshold of 8% cartilage strain, and chondrocyte death developed within 2 h of load application in normal, neonatal bovine cartilage explant samples. We reported recently that our muscular loading protocol produces articular cartilage strains averaging 10% in murine knee joints 11 .
Cartilage injury has been shown to be associated with altered geometry and decreased joint stability 32,35 . A cartilage defect typically results in stress concentrations near the defect site, and a corresponding increase in local matrix strain 36 . Cartilage defects typically also result in an increase in local permeability, reducing cartilage stiffness under rapidly applied load conditions, thereby exposing cells to potentially larger strains than they would experience in the intact cartilage. As in the uninjured joints, necrosis was also the main mechanism of cell death in the injured knees, indicating that mechanical insult is the likely cause of the observed cell death. However, the amount of cell death was significantly elevated in the meniscectomized knees in the injured group compared to the uninjured group (LMI vs LMI). Increased local strains caused by the cartilage injury, and the increased contact stresses in the absence of the menisci, may explain this finding. This finding is consistent with the work of Peña and coauthors, who used finite element analysis to evaluate joint stresses and strains in the medial femoral condyle containing lesions 37 . These authors predicted increased cartilage strains adjacent to a defect, and the effects of focal defects in a load-bearing region were more pronounced than in a non-load-bearing region. The menisci decreased the local stresses, thereby protecting the cells adjacent to the defect from injurious strains. This mechanism seems plausible based on our results, as the total cell death in our LMI and LMI animals was not significantly different, demonstrating that muscular loading is well accommodated in an injured joint that contains menisci. Cell death was highest near the cartilage injury. Cell death in the (LMI) group reached >70% within an area of ±100 µm from the cartilage injury, decreasing to ~ 40% between 200-300 µm from the injury site (Fig. 6a,d).
With muscular loading and the meniscus intact (LMI), cell death within ±100 µm is (~30%) similar to that found in the unloaded surgical control group samples (LMI), decreasing to <15% between 200-300 µm from the injury site ( Fig. 6b-d). The menisci are known to distribute stress across the articulating surfaces of the tibia and femur, and are thought to mediate fluid flow in the articular cartilage 24 . A more even stress distribution appears to protect chondrocytes from necrosis potentially caused by excessive cell membrane strains.
The meniscectomized knees (LMI) were the only ones to show detectable apoptotic cell death near the areas of collagen damage at 240 minutes. Data presented in another study 38 demonstrated that mechanical injury induces chondrocytes death in the form of apoptosis in bovine and human explants, and the rate of apoptosis increased 3 hours post injury. We do not know what triggered apoptosis, but speculate that this may have been a function of an inflammatory cascade initiated by chondrocyte necrosis and associated upregulation of caspases 39 . Chondrocytes may also initiate apoptosis when losing their natural ECM attachment resulting from collagen disruption 39 . Chondrocyte apoptosis has been observed in osteoarthritic cartilage and in articular cartilage explants injured by surgical excision and cyclic compression [40][41][42][43][44] . The combined loss of chondrocytes due to necrosis and apoptosis in the LMI condition resulted in a profound hypo-cellularity that is also observed in osteoarthritis 45,46 .
Extreme care was taken during the meniscectomy not to injure the joint surfaces and to avoid any bleeding into the joint space. All animals with joint bleeding were discarded from analysis in order to eliminate any artefactual cell death scenarios. Despite performing all surgical interventions with great care, there are several limitations that should be kept in mind when interpreting our results. First, the use of a mouse model may have limited translational fidelity to human meniscus injury. However, this model allows us to control the loading regime and image the time course of changes in cell viability in real time with high spatial resolution that cannot be achieved in human studies at this time. Furthermore, the focal defect induced with a scalpel is not representative of a focal injury found in early joint degeneration. This experimental condition represents a best-case scenario, where the defect is not chronic and not the result of degenerative changes. Yet, even in this ideal scenario with an otherwise healthy cartilage, meniscectomy resulted in significant cell death and cartilage degeneration in a matter of hours when samples were subjected to muscular joint loading. We observed the mice for 4 hours after the induction of the injury; therefore, any long-term health outcomes remain unknown. However, the DMM model has been used to demonstrate progression of OA to a Kellgren-Lawrence Grade V by 8 weeks post-meniscectomy in rodents 17 . Thus, one might expect a similar long-term fate for the animals of our study, had they been observed for a sufficient period of time.
Another limitation of the current study was that we could not measure the articular cartilage strains during muscular loading for the meniscus intact conditions. The meniscus covered and sealed the entire surface of the femoral condyle and prevented the multi-photon laser from penetrating the cartilage deformation site; thus imaging cartilage deformation was not possible.
Significance. We provide the first direct evidence that the menisci play an integral role in chondrocyte protection from necrosis in the intact and lesioned knee for acute, and physiologically relevant (muscular), loading conditions.
(Figure caption: Both images were taken from the LMI animal group. A single apoptotic cell was seen at 120 min (white arrow) near the cartilage injury (yellow arrow). Apoptotic cells progress from near the cartilage injury towards regions away from the injury at 240 min.)
Larson-Sweedler Theorem and the Role of Grouplike Elements in Weak Hopf Algebras
We extend the Larson-Sweedler theorem to weak Hopf algebras by proving that a finite dimensional weak bialgebra is a weak Hopf algebra iff it possesses a non-degenerate left integral. We show that the category of modules over a weak Hopf algebra is autonomous monoidal with semisimple unit and invertible modules. We also reveal the connection of invertible modules to left and right grouplike elements in the dual weak Hopf algebra. Defining distinguished left and right grouplike elements we derive the Radford formula for the fourth power of the antipode in a weak Hopf algebra and prove that the order of the antipode is finite up to an inner automorphism by a grouplike element in the trivial subalgebra A^T of the underlying weak Hopf algebra A.
Introduction
Weak Hopf algebras have been proposed recently [1,2,18] as a generalization of Hopf algebras, obtained by weakening the compatibility conditions between the algebra and coalgebra structures of Hopf algebras. Comultiplication is allowed to be non-unital, ∆(1) ≡ 1_{(1)} ⊗ 1_{(2)} ≠ 1 ⊗ 1, just like in weak quasi Hopf algebras [11] and in rational Hopf algebras [19,8], but the comultiplication is coassociative. In exchange for coassociativity, the multiplicativity of the counit is replaced by a weaker condition, ε(ab) = ε(a1_{(1)})ε(1_{(2)}b), implying that the unit representation is not necessarily one-dimensional and irreducible. Like weak quasi and rational Hopf algebras, they can possess non-integral (quantum) dimensions even in the finite dimensional and semisimple cases, which is necessary if we want to recover them as global symmetries of low-dimensional quantum field theories. In situations where only the representation category matters, these two concepts are equivalent. Nevertheless, just like finite dimensional Hopf algebras, finite dimensional weak Hopf algebras (WHA) obey the mathematical beauty of giving rise to a self-dual notion: the dual space of a WHA can be canonically endowed with a WHA structure. For a recent review, see [12].
Here we continue the study [2] of the structural properties of finite dimensional weak Hopf algebras over a field k. The main results of this paper are: 1. The generalization of the Larson-Sweedler theorem [10] to WHAs, claiming that a finite dimensional weak bialgebra is a weak Hopf algebra if and only if it possesses a non-degenerate left integral. 2. The characterization of inequivalent invertible modules of WHAs through left/right grouplike elements in the dual WHA and the proof of the semisimplicity of invertible modules, which include the unit module serving as a monoidal unit in the monoidal category of left (right) modules. 3. A finiteness claim about the order of the antipode (up to an inner automorphism by a grouplike element in the trivial subalgebra) and the derivation of the Radford formula [15] in a weak Hopf algebra A: S^4(a) = σ ⇀ (s^{-1}as) ↼ Ŝ^{-1}(σ), a ∈ A, where S (Ŝ) is the antipode in A (Â), and s and σ are distinguished left grouplike elements in A and in the dual WHA Â, respectively.
The existence of a non-degenerate left integral l ∈ B in a finite dimensional bialgebra B implies the existence of a non-degenerate left integral λ ∈ B̂ in the dual bialgebra B̂ with the property λ ⇀ l = 1. Then the formula S(a) := (λ ↼ a) ⇀ l, a ∈ B, gives rise to the antipode for B, proving one direction of the Larson-Sweedler theorem [10]. The proof of the opposite direction [10] involves the structure theorem for Hopf modules, which are one-sided H-modules and H-comodules of the Hopf algebra H together with a compatibility condition. Applying the structure theorem to Ĥ and observing that, by a dimensionality argument, the space of coinvariants C(Ĥ) is one dimensional, a non-degenerate left integral in Ĥ emerges in C(Ĥ).
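In display form, and using the Sweedler-arrow conventions ⟨φ ↼ a, x⟩ = ⟨φ, ax⟩ and φ ⇀ b = b_{(1)}⟨φ, b_{(2)}⟩ (these are the standard conventions in this literature, so a mild assumption about the paper's notation), the classical antipode formula quoted above expands as:

```latex
S(a) \;:=\; (\lambda \leftharpoonup a) \rightharpoonup l
     \;=\; l_{(1)}\,\langle \lambda \leftharpoonup a,\; l_{(2)}\rangle
     \;=\; l_{(1)}\,\langle \lambda,\; a\,l_{(2)}\rangle , \qquad a \in B .
```

Non-degeneracy of the integrals is what makes this linear map a candidate antipode on all of B rather than on a proper subspace.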
The proof of the corresponding statement (Theorem 4.1) in the case of finite dimensional weak bialgebras is in the same spirit. The existence of a non-degenerate left integral in a finite dimensional WBA implies the existence of a non-degenerate left integral in the dual WBA, and the previous classical formula leads to the antipode. The proof of the opposite direction is more involved: besides weak Hopf modules one has to introduce multiple weak Hopf modules, in which bimodule or bicomodule structures are also present together with compatibility conditions between the module and comodule structures. Then the structure theorem (Theorem 3.2) for a multiple weak Hopf module _A M_A^A of a WHA A claims that _A M_A^A ≃ _A(C(M) × _A A_A^A), i.e., the right weak Hopf module structure of M is given by the canonical weak Hopf module A_A^A, while as a left A-module, M is isomorphic to the product module of the coinvariants _A C(M) and the left regular module _A A. The left A-module structure of the coinvariants arises from the bimodule structure of M (Lemma 3.1 iii). In particular, the dual WHA Â is a multiple weak Hopf module _A Â_A^A and its coinvariants C(Â) are the left integrals Î^L ⊂ Â (Theorem 3.2). Moreover, Î^L becomes a free left A^R- and A^L-module with a single generator by restricting the left A-module structure of _A Î^L to the canonical coideal subalgebras A^R and A^L of A, respectively (Corollary 3.5). It is the latter result that replaces the dimensionality argument of the classical Hopf case and, together with the isomorphism _A Â_A^A ≃ _A(Î^L × _A A_A^A) of multiple weak Hopf modules, leads to the existence of a non-degenerate left integral in Î^L ⊂ Â.
The modules of a WHA that are invertible with respect to their monoidal product are important in low dimensional quantum field theories. Hence, it is worth characterising them in purely (weak) Hopf algebraic terms. Although a WHA A is not a semisimple algebra in general, its unit and invertible modules are semisimple (Theorem 2.4, resp. Prop. 5.4 ii). The origin of this property is that the trivial subWHA A^T, which is generated by the canonical coideal subalgebras A^L and A^R of a WHA A, is in the coradical of A (Lemma 2.3). We derive two other equivalent characterizations of invertible modules: they are precisely the modules that become free rank one A^L- and A^R-modules by restricting the A-module structure to these coideal subalgebras (Prop. 5.4 i). For example, the invertible left A-module structure of the right integrals I^R ⊂ A and the left integrals Î^L ⊂ Â follows in this way. The second equivalent characterization of invertible A-modules involves left or right grouplike elements (Def. 5.1) in the dual WHA: an A-module is invertible iff it is isomorphic to a cyclic submodule in the second regular A-module Â generated by a left (right) grouplike element in Â (Prop. 5.7). Moreover, the isomorphism classes of invertible A-modules are given by the (finite) factor group G_L(Â)/G_L^T(Â) (or by G_R(Â)/G_R^T(Â)) (Prop. 5.7), where G_L^T(Â) is the intersection of the (in general infinite) set of left grouplike elements G_L(Â) and the trivial subWHA Â^T in Â.
If l ∈ A and λ ∈ Â are dual left integrals, i.e., if they are non-degenerate and satisfy λ ⇀ l = 1, then s := l ↼ λ and σ := λ ↼ l will define (distinguished) left grouplike elements (Def. 6.1 and the discussion before it), like in the Hopf case [15]. σ falls into a central element of the factor group G_L(Â)/G_L^T(Â) and determines the unimodularity of A, that is, the possible existence of a two-sided non-degenerate integral in A (Corollary 6.3). The Nakayama automorphism θ_λ: A → A corresponding to a non-degenerate left integral λ ∈ Â can be given in terms of distinguished left grouplike elements in two different ways, which contain the square or the inverse square of the antipode. Hence, these expressions lead to the generalization of the Radford formula [15] to WHAs (Theorem 6.4). Since the factor groups G_L(A)/G_L^T(A) and G_L(Â)/G_L^T(Â) are finite and since even powers of the antipode are WHA automorphisms, the iteration of the Radford formula leads to the claim that the order of the antipode is finite up to a conjugation by an element in G_L^T(A) ∩ G_R^T(A) (Theorem 6.4). The explicit form of the Nakayama automorphism θ_λ, like in the Hopf case [16], can be used to prove the unimodularity of the double of a WHA (Corollary 6.5).
We note that it was established in [2] that WHAs are quasi-Frobenius algebras. Result 1 implies that they are Frobenius algebras. Grouplike elements in a WHA, which are just the intersection of left and right grouplike elements in our formulation, were introduced in [2]. The modules associated with them were studied in [13]. However, this notion of grouplike elements is too restrictive: for the characterization of isomorphism classes of invertible modules (Result 2) one has to introduce the less restrictive notion of left (right) grouplike elements, because the factor group G(Â)/G^T(Â) of grouplike and trivial grouplike elements is, in general, smaller than the corresponding factor group G_L(A)/G_L^T(A) of left grouplike elements (Prop. 5.8). Result 3 was proved in [13] in the case when the square of the antipode is the identity mapping on the coideal subalgebra A^L of the WHA A.
The organization of the paper is as follows. In Section 1 we review the axioms and the main properties of weak bialgebras (WBA) and weak Hopf algebras. Here and throughout the paper they are considered to be finite dimensional. Section 2 is devoted to the autonomous monoidal category of modules of a WHA and to properties of the unit module, including semisimplicity. We also derive a lower bound for the k-dimension of an A-module in terms of the k-dimensions of the simple submodules of the unit A-module. This estimation leads to a sufficient condition for an A-module to become a free rank one A^L- and A^R-module. In Section 3 we prove a structure theorem for multiple weak Hopf modules and show that the left A-modules spanned by right integrals in A and left integrals in Â become free rank one A^L- and A^R-modules. Section 4 contains the generalization of the Larson-Sweedler theorem to the weak Hopf case. In Section 5 we reveal the connection between invertible modules of a WHA A and left (right) grouplike elements in the dual WHA Â and prove that invertible modules are semisimple. Section 6 contains the definition and some basic properties of distinguished left and right grouplike elements, the derivation of the form of the Nakayama automorphism θ_λ: A → A corresponding to a non-degenerate left integral λ ∈ Â, and the Radford formula. In addition, we prove the claim about the order of the antipode and the unimodularity of the double of a WHA. In Appendix A we give a simple example of a WHA in which the order of the antipode is not finite. Finally, Appendix B contains the generalization of the cyclic category module [4] to weak Hopf algebras containing a modular pair of grouplike elements in involution.
Preliminaries
Here we give a quick survey of weak bialgebras and weak Hopf algebras [2]. We restrict ourselves to their main properties, however, some useful identities we use later on are also given.
The axioms
A weak bialgebra (A; u, µ; ε, ∆) is defined by the properties i-iii): i) A is a finite dimensional associative algebra over a field k with multiplication µ: A ⊗ A → A and unit u: k → A, which are k-linear maps.
ii) A is a coalgebra over k with comultiplication ∆: A → A ⊗ A and counit ε: A → k, which are k-linear maps. iii) The algebra and coalgebra structures obey the compatibility conditions

∆(ab) = ∆(a)∆(b),   (1.1a)
ε(abc) = ε(ab_{(1)})ε(b_{(2)}c) = ε(ab_{(2)})ε(b_{(1)}c),   (1.1b)
(∆ ⊗ id)∆(1) = (∆(1) ⊗ 1)(1 ⊗ ∆(1)) = (1 ⊗ ∆(1))(∆(1) ⊗ 1),   (1.1c)

where (and later on) ab ≡ µ(a, b), 1 := u(1), and we use Sweedler notation [17] for iterated coproducts, omitting summation indices and a summation symbol. A weak Hopf algebra (A; u, µ; ε, ∆; S) is a WBA together with property iv): iv) There exists a k-linear map S: A → A, called the antipode, satisfying

a_{(1)}S(a_{(2)}) = ε(1_{(1)}a)1_{(2)},   (1.2a)
S(a_{(1)})a_{(2)} = 1_{(1)}ε(a1_{(2)}),   (1.2b)
S(a_{(1)})a_{(2)}S(a_{(3)}) = S(a).   (1.2c)

WBAs and WHAs are self-dual notions: the dual space Â := Hom_k(A, k) of a WBA (WHA), equipped with structure maps û, µ̂, ε̂, ∆̂ (and Ŝ) defined by transposing the structure maps of A by means of the canonical pairing ⟨·, ·⟩: Â × A → k, gives rise to a WBA (WHA).
Properties of WBAs
Let A be a WBA. The images A^{L/R} = Π^{L/R}(A) = Π̄^{L/R}(A) of the projections Π^{L/R}: A → A and Π̄^{L/R}: A → A, defined by

Π^L(a) := ε(1_{(1)}a)1_{(2)},   Π̄^L(a) := ε(a1_{(1)})1_{(2)},   (1.3)

together with the corresponding right-handed projections, are unital subalgebras (i.e. containing 1) of A that commute with each other. A^L and A^R are called left and right subalgebras, respectively. The image ∆(1) of the unit is in A^R ⊗ A^L and the coproduct on A^{L/R} reads as

∆(x^L) = 1_{(1)}x^L ⊗ 1_{(2)},   ∆(x^R) = 1_{(1)} ⊗ x^R 1_{(2)},   x^{L/R} ∈ A^{L/R}.   (1.4)

Hence, A^L and A^R are left and right coideals, respectively, and together they generate the trivial subalgebra A^T. The maps κ_L: A^L → Â^R and κ_R: A^R → Â^L given by Sweedler arrows are algebra isomorphisms with inverses κ̂_R and κ̂_L, respectively. Defining Z^{L/R} := A^{L/R} ∩ Center A and Z := A^L ∩ A^R, the restrictions of κ_{L/R} to Z^{L/R} and Z lead to the algebra isomorphisms Z^{L/R} → Ẑ and Z → Ẑ^{R/L}, respectively (note the switch of L and R in the second case); these obey the identities (1.8) due to (1.1c). The space of left/right integrals I^{L/R} in A is defined by

I^L := {l ∈ A | al = Π^L(a)l, a ∈ A},   I^R := {r ∈ A | ra = rΠ^R(a), a ∈ A}.   (1.9)
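Restating the four canonical projections of this subsection in one display (the bar notation for the "twisted" projections follows the standard weak bialgebra literature and is an assumption about this paper's exact conventions):

```latex
\Pi^{L}(a) := \varepsilon(1_{(1)}a)\,1_{(2)}, \qquad
\overline{\Pi}^{L}(a) := \varepsilon(a 1_{(1)})\,1_{(2)}, \qquad
\Pi^{R}(a) := 1_{(1)}\,\varepsilon(a 1_{(2)}), \qquad
\overline{\Pi}^{R}(a) := 1_{(1)}\,\varepsilon(1_{(2)} a).
```

All four are idempotent k-linear maps; in an ordinary Hopf algebra, where ∆(1) = 1 ⊗ 1, each of them collapses to a ↦ ε(a)1, recovering the one-dimensional trivial subalgebra.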
Properties of WHAs
Let A be a WHA. The antipode S, as in the case of Hopf algebras, turns out to be invertible, antimultiplicative, anticomultiplicative, and leaves the counit invariant: ε = ε∘S. The restriction of the antipode to A^L leads to the algebra antiisomorphism S: A^L → A^R, therefore A^T is a subWHA of A (1.10). The projections (1.3) to the left and right subalgebras can be expressed with the help of the antipode as Π^L(a) = a_{(1)}S(a_{(2)}) and Π^R(a) = S(a_{(1)})a_{(2)}, together with analogous expressions for Π̄^{L/R} (1.11). The first two equations follow from the antipode axioms (1.2a and b). The other two can be seen using the aforementioned properties of the antipode and the WBA identity ε(abc) = ε(Π^R(a)bΠ^L(c)) following from (1.1b) and (1.3). The left and right subalgebras become separable k-algebras with separating idempotents q^{L/R} [14, p.182] (1.12). The product q^L q^R ∈ A^T ⊗ A^T is a separating idempotent for A^T, thus the trivial subalgebra is a separable k-algebra, too. The separating idempotent q^{L/R} serves as a quasibasis [20, p.6] for the counit (1.13), thus the counit is a non-degenerate functional on A^{L/R}. The properties S(1_{(1)})1_{(2)} = 1 and 1_{(1)}S(1_{(2)}) = 1 of the separating idempotents q^L and q^R ensure that the counit ε is an index 1 functional [20, p.7] on A^L and on A^R, respectively. Due to the identities (1.5), (1.7), (1.10) and (1.12), the corresponding Nakayama automorphisms θ^{L/R}: A^{L/R} → A^{L/R}, which are defined by the Frobenius property ε(xy) = ε(yθ^{L/R}(x)), can be given explicitly. Hence, θ^L (θ^R) is the restriction of the square of the (inverse of the) antipode to A^L (A^R). Since any separable algebra admits a non-degenerate (reduced) trace [6, p.165], the counit, being a non-degenerate functional on A^{L/R}, can be given with the help of the corresponding trace as ε(·) = tr^{L/R}(t^{L/R}·) with t^{L/R} ∈ A^{L/R} invertible. Therefore, the Nakayama automorphisms θ^{L/R} are given by ad t^{L/R} and S² is inner on A^{L/R}, hence on A^T, too. In a WHA, left integrals l ∈ I^L and right integrals r ∈ I^R obey additional identities relating them through the antipode; in particular, S maps left integrals to right integrals.
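The claim that S² is inner on the whole trivial subalgebra can be made explicit by a one-line computation (a sketch, assuming θ^L = S²|_{A^L} = ad t^L and θ^R = S^{-2}|_{A^R}, so S²|_{A^R} = ad t^R for a suitable invertible t^R, as stated in this subsection):

```latex
S^{2}(x^{L}x^{R})
  = S^{2}(x^{L})\,S^{2}(x^{R})
  = t^{L}x^{L}(t^{L})^{-1}\, t^{R}x^{R}(t^{R})^{-1}
  = (t^{L}t^{R})\, x^{L}x^{R}\, (t^{L}t^{R})^{-1},
```

where the last step uses that A^L and A^R commute elementwise. Hence S² = ad(t^L t^R) on A^T, the subalgebra generated by A^L and A^R.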
Moreover, there exist projections L/R: A → I L/R andL/R: A → I L/R : where {b i } ⊂ A and {β i } ⊂Â are dual k-bases with respect to the canonical pairing. They obey the properties L /R(ϕ), a = ϕ, R/L(a) , L /R(ϕ), a = ϕ,L/R(a) , a ∈ A, ϕ ∈Â, (1.18) therefore the restrictions of the canonical pairing toÎ L/R × I L/R (four possibilities) are non-degenerate.
Properties of the unit module
In this chapter A denotes a WHA over a field k.
A left (right) A-module is a pair (M, µ_L) ((M, µ_R)) consisting of a k-linear space M and an action map µ_L: A ⊗ M → M (µ_R: M ⊗ A → M), where (and later on) µ_L(a ⊗ m) ≡ a · m and µ_R(m ⊗ a) ≡ m · a. The role of the unit module will be played by the trivial representation [2, p.400] of A. We note that these modules need not be one-dimensional as in the case of Hopf algebras; they are not even simple in general. Nevertheless, they play the role of the unit object in the monoidal category of finite dimensional left (right) A-modules. We deal only with the category of left A-modules, since the one-to-one correspondence between left and right A-modules induced by the antipode, m · a := S(a) · m, a ∈ A, m ∈ _A M, extends to a categorical isomorphism.
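The trivial representation referred to above is, in the standard weak Hopf algebra literature (an assumption about the exact conventions of [2]), carried by the left subalgebra A^L with the action a · x := Π^L(ax). The module property then follows from a projection identity:

```latex
a \cdot x := \Pi^{L}(ax), \quad x \in A^{L};
\qquad
(ab)\cdot x = \Pi^{L}(abx) = \Pi^{L}\!\big(a\,\Pi^{L}(bx)\big) = a\cdot(b\cdot x),
```

using the weak bialgebra identity Π^L(aΠ^L(b)) = Π^L(ab). The k-dimension of this module equals dim A^L, which explains why the unit object need not be one-dimensional.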
and the left A-module structure on M × N is given by where (and later on) we have suppressed possible or necessary summation for tensor product elements in product modules. The product on the arrows T α : M α → N α , α = 1, 2 is defined by T 1 × T 2 := (T 1 ⊗ T 2 ) • ∆(1), i.e. by the restriction of the tensor product of the linear maps T 1 and T 2 to M 1 ×M 2 . One can easily check that T 1 ×T 2 : M 1 ×M 2 → N 1 ×N 2 is a left A-module map. The given monoidal product is associative due to the associativity of the coproduct and property (1.1c) of the unit, hence the components × M 3 of the natural equivalence responsible for associativity in a monoidal category are the identity mappings 1 M 1 ×M 2 ×M 3 in our case. The monoidal unit property of the left A-module A L can be seen by verifying that for any object M the k-linear invertible maps X L M : are left A-module maps and the identities where (and later on) we omit summation symbol for the sum of tensor product of dual basis elements. The arrow family of left evaluation and coevaluation maps E l M : (2.8) due to the identities (1.8) and (2.7a) and they satisfy the left rigidity identities [21] ( for any M ∈ Obj L. Thus defining the left conjugated arrow one arrives at the antimonoidal contravariant left conjugation functor ↼ − : L → L [21].
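The monoidal product used in this passage is, following the standard construction for weak Hopf algebra module categories (a mild assumption, since the displayed formulas are lost in this extraction), the truncated tensor product with diagonal action:

```latex
M \times N := \Delta(1)\cdot(M \otimes N)
            = \{\, 1_{(1)}\cdot m \,\otimes\, 1_{(2)}\cdot n \;\mid\; m \in M,\ n \in N \,\},
\qquad
a \cdot (m \times n) := a_{(1)}\cdot m \,\otimes\, a_{(2)}\cdot n .
```

The truncation by ∆(1) is forced precisely because ∆(1) ≠ 1 ⊗ 1 in a WHA; on the image of ∆(1) the diagonal action is well defined and associative by coassociativity and (1.1c).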
Similarly, the right conjugate − ⇀ M of an object M in L is the k-linear spaceM equipped (2.14) As in the previous case, one proves that they are left A-module maps satisfying the right rigidity identities [21] ( Hence, defining the right conjugated arrow one arrives at the antimonoidal contravariant right conjugation functor − ⇀ : L → L.
In order to prove semisimplicity of the unit module, we show that the trivial subWHA is not only semisimple but also cosemisimple: the trivial weak Hopf subalgebra A^T ⊂ A is a sum of simple subcoalgebras, i.e. A^T is contained in the coradical C_0 of A.
Proof. First we decompose the WHA A T into a direct sum of subWHAs.
The intersection Z := A L ∩ A R is in the center of the separable algebra A T , because the unital coideal subalgebras A L and A R that generate A T commute with each other. The WHA identity (1.10) implies z = S(z) for all z ∈ Z. Hence, Z is a unital, pointwise Sinvariant subalgebra of the k-algebra Center A T and one can write A T as a tensor product algebra A T ≃ A L ⊗ Z A R . Let {z α } α be the set of primitive orthogonal idempotents in Z. They are central idempotents in A T ; thus, due to z α ∈ A L ∩ A R and due to coproduct property (1.4) of elements in A L and in A R . This WHA decomposition implies that (A T α ) X = A X α with X = L, R, T and that the WHA A T α has the tensor product algebra structure is an Abelian division algebra over the ground field k, that is Z α is a subfield in the center of the separable algebra A T α , hence Z α is a finite separable field extension of k [14, p.191].
Now we prove that A T α , the dual of the WHA A T α , is isomorphic to the simple k-algebra M n α (Z α ), where n α = dim Z α A R α , i.e. A T α is simple as a k-coalgebra. We stress that the inclusion ( A T α ) T ⊂ A T α is proper in general. Therefore, simplicity of A T α as an algebra is a 'non-trivial' property in the sense that it goes for a WHA which is not trivial, i.e., not generated by the canonical coideal subalgebras ( A T α ) L and ( A T α ) R .
Consider the cyclic left
It is just the trivial representation [2, p.401] of the WHA A T α ; hence, its endomorphism ring End However, the maps inẐ R α ⇀ are just multiplications by elements of Z α due to the statements after (1.6), i.e., [17, p.183], whereĈ 0 is the coradical of the dual weak Hopf algebraÂ, the previous Lemma leads to the containment i.e. Π L (N ) = 0. Therefore, the radical of A is in the annihilator ideal of the left module The endomorphism ring for the unit module is given by End A A L = Z L · [2, p.402], that is by the restriction of the A-action to the subalgebra Z L . Since the unit module is a free, hence faithful A L -module, it is also faithful as a Z L -module. Now, the direct sum decomposition (2.
20) is clear and End
Together with semisimplicity this leads to simplicity of the direct summands A A L p . The analogous result holds for the unit right A-module: . We have seen that the simple submodules of the unit left (right) A-module are labelled by primitive idempotents in Z L (Z R ). Although a generic A-module does not need to be semisimple, it is always a direct sum of submodules labelled by pairs of primitive orthogonal idempotents in the cartesian product Z L ×Z R . Indeed, the product of primitive orthogonal idempotents in Z L and Z R gives rise to a decomposition of the unit 1 = , because S 2 is inner on A L/R and the idempotents are central.
certain products z L p z R q can be identically zero due to the presence of the hypercenter we refer to (p, q) as an admissible pair. Hence, the non-zero summands are labelled by admissible pairs in the decomposition of the unit, which induces a direct sum decomposition of every A-module The next Lemma shows that the simple submodules of the unit module A A L obey a kind of minimality property in the corresponding class of left A-modules.
ii) The restriction of A to the subalgebras A L p and A R q makes A M (p,q) a faithful left A L p -and A R q -modules, respectively. Proof. In the following first we prove that the left A-modules M (p 1 ,q 1 ) and N (p 2 ,q 2 ) should obey the matching condition q 1 = p 2 in order to get a nonzero product module M (p 1 ,q 1 ) × N (p 2 ,q 2 ) . Then writing a left A-module M (p,q) as a product with the unit module and using this matching condition, the emerging tensor product space can be given as a sum of subspaces with respect to a basis of the corresponding simple submodule of the unit module. We will use Theorem 2.4 and Remark 2.5 to prove that M (p,q) is a faithful A L pand A R q -module and then the estimation of the k-dimension of M (p,q) will follow. Using property (1.12) of the separating idempotent of A L and the decomposition of the unit into primitive orthogonal idempotens in Z L , one obtains Therefore, for any two left A-modules M, N within a certain class we have The separating idempotent of A L is a quasibasis for the counit due to (1.13), hence, it has the expression S(1 (1) ) ⊗ 1 (2) respectively, then we are done, because a nonzero linear subspace is at least one dimensional and ) | a ∈ A} should also be contained in the annihilator ideal of A M (p,q) . But this contradicts the assumption that A M (p,q) is a nonzero module in the (p, q) class. Since the module A R qA is simple (see Remark 2.5), one has A R qA = {x R q · a := S(a (1) )x R q a (2) | a ∈ A} for any non-zero x R q ∈ A R q . Hence, the assumption that a non-zero element of A R q is in the annihilator ideal of A M (p,q) leads to the contradiction as before.
Corollary 2.7 Given A M let A L M and A R M denote the A L -and A R -module, respectively, defined by restriction of the A-module structure to these subalgebras. If End A R M = A L · ≃ A L then A L M and A R M are free rank one A L -and A R -modules, respectively. Proof. Repeating the argument in [2, p.417], one obtains an upper bound for the kdimension |M | of the module M : being separable, A R is semisimple; hence, by the Wedderburn structure theorem which is isomorphic to A L by assumption. Hence, as a right action on M , it is antiisomorphic to A L , i.e., isomorphic to A R . This is possible only if there is a permutation σ of simple i for |M | follows from the Cauchy-Schwarz inequality.
However, the A R -bimodule structure of M implies that M is a faithful left A R -module, hence, a faithful left Z R -module. Therefore, the previous Lemma leads to the opposite estimation: i , which is possible only if n σ(i) = n i . But in this case A R M and A L M are isomorphic to the left regular A Rand A L -module, respectively, that is A M becomes a free rank one A R -and A L -module by restricting the A-action to these subalgebras. (2.32) using (1.4) in the sixth equality. Since the unit A-module A A L becomes a free, hence faithful left A L -module by restriction and since The proof of the statement involving the right dual − ⇀ M is similar.
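Many of the statements in this section can be sanity-checked on the smallest genuinely weak example: the groupoid algebra of the pair groupoid on two objects, i.e. A = M_2(k) with Δ(e_ij) = e_ij ⊗ e_ij, ε(e_ij) = 1 and S(e_ij) = e_ji. This example is standard in the weak Hopf algebra literature; the code below is my own illustration, not taken from the paper. It verifies that Δ is multiplicative, that Δ(1) ≠ 1 ⊗ 1 (the defining "weak" feature), and that the antipode axioms a (1) S(a (2) ) = Π L (a) and S(a (1) )a (2) = Π R (a) hold on the basis:

```python
# Pair-groupoid weak Hopf algebra on M_2: Delta(e_ij) = e_ij (x) e_ij,
# eps(e_ij) = 1, S = transpose. Pure-Python 2x2 / 4x4 matrix arithmetic.

def mul(A, B):
    """Matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker (tensor) product."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def E(i, j):
    """Matrix unit e_ij of M_2."""
    M = [[0, 0], [0, 0]]
    M[i][j] = 1
    return M

def Delta(A):
    """Coproduct: linear extension of Delta(e_ij) = e_ij (x) e_ij (a 4x4 matrix)."""
    D = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            K = kron(E(i, j), E(i, j))
            for r in range(4):
                for c in range(4):
                    D[r][c] += A[i][j] * K[r][c]
    return D

def eps(A):
    """Counit: eps(e_ij) = 1, extended linearly."""
    return sum(sum(row) for row in A)

def S(A):
    """Antipode: S(e_ij) = e_ji, i.e. the transpose."""
    return [list(row) for row in zip(*A)]

def PiL(A):
    """Pi^L(a) = eps(1_(1) a) 1_(2): here the diagonal matrix of row sums."""
    return [[sum(A[0]), 0], [0, sum(A[1])]]

def PiR(A):
    """Pi^R(a) = eps(a 1_(1)) 1_(2): here the diagonal matrix of column sums."""
    return [[A[0][0] + A[1][0], 0], [0, A[0][1] + A[1][1]]]

one = [[1, 0], [0, 1]]

# Delta is multiplicative, yet Delta(1) is NOT 1 (x) 1 -- the weak axiom.
a, b = [[1, 2], [3, 4]], [[5, -1], [0, 2]]
assert Delta(mul(a, b)) == mul(Delta(a), Delta(b))
assert Delta(one) != kron(one, one)

# Antipode axioms on the basis: a_(1)S(a_(2)) = Pi^L(a), S(a_(1))a_(2) = Pi^R(a),
# using Delta(e_ij) = e_ij (x) e_ij.
for i in range(2):
    for j in range(2):
        e = E(i, j)
        assert mul(e, S(e)) == PiL(e)
        assert mul(S(e), e) == PiR(e)
print("pair-groupoid WHA axioms verified")
```
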
3. Hopf modules in weak Hopf algebras
Besides A-modules we need the notion of weak Hopf modules of a WBA A [2, p.407].
First, a left (right) A-comodule is a pair
They incorporate only the coalgebra properties of A. In the following, lower and upper A-indices will indicate A-modules and A-comodules, respectively. The weak Hopf modules (WHMs) of A are A-modules and A-comodules simultaneously, together with a compatibility condition restricting the comodule map to be an A-module map, e.g.
(3.2) As a consequence of these identities WHMs obey a kind of non-degeneracy property (3.3). A-modules and A-comodules are called multiple weak Hopf modules if they are pairwise WHMs of the WBA A in the possible A-indices and if the different module or comodule maps commute, i.e., they are bimodules or bicomodules. The invariants and coinvariants of left/right A-modules and left/right A-comodules, respectively, are defined in (3.4), and the invariants (coinvariants) with respect to A become coinvariants (invariants) with respect to Â.
If A is not only a WBA, but also a WHA one can say more about the invariants and coinvariants of (multiple) WHMs: i) The coinvariants and the invariants of a WHM of A can be equivalently characterized as ii) The following maps define projections from WHMs onto their coinvariants and invariants, respectively where S is the antipode and R,R, L,L are the projection maps (1.17) to integrals in the WHA A. iii) In case of the multiple WHMs A M A A and A A M A the coinvariants are left and right A-modules with respect to the induced left and right adjoint actions, respectively. Proof. i) The characterization (3.6a) of coinvariants and the form (3.7a) of the projections onto them have been already proved in [2, p.409]. Concerning the invariants of M A A , first we note that the set given in (3.6b) is contained in the set of invariants defined in (3.4) since for all a ∈ A. Using the third identity in (1.8) the opposite containment is as follows The cases of the other three WHMs can be proved similarly.
ii) The image of the map P A is in I(M A A ) due to the defining property (1.9) of the right integrals in A. Applying P A to an invariant m ∈ I(M A A ) and using their characterization (3.6b) and the non-degeneracy property (3.3), one obtains P A (m) = m; that is, P A is a projection onto the invariants of M A A . The cases of projections onto the invariants of the other three WHMs can be proved similarly.
iii) We have to show that the maps provide a left and a right A-module structure ( µ R (ϕ ⊗ a) ≡ ϕ · a := S(a) ⇀ ϕ, a ∈ A, ϕ ∈ Â,
(3.14c)
where {b i } ⊂ A and {β i } ⊂ Â are dual bases with respect to the canonical pairing, therefore (3.15) holds, where Î L is the space of left integrals in the WHA Â.
due to the fact that which follows from the identities (3.13) and (1.12). One can easily check that the maps where we used (3.17) in the fifth equality of (3.20b).
ii) The WHM structure ÂA ≡ (Â, µ R , δ R ) given by (3.14b and c) of the multiple WHM AÂ A A has been shown in [2, p.409]. The map µ L defined in (3.14a) is clearly a left A-module map on Â that commutes with the given right A-module map µ R . The right comodule map δ R is also a left A-module map by (3.21), where we used the identities (1.6) and (1.10-11). Hence, the maps (3.14) provide Â with a multiple WHM structure, and the statement (3.15) follows from the previously proved structure of a general multiple weak Hopf module. By dualizing the right A-coaction to a left Â-action as in (3.5b), the right coinvariants C(Â A ) become the left invariants of the left regular module ÂÂ, which is the space of left integrals Î L in Â.
Corollary 3.3 The left regular module A A of a WHA A is injective; that is, A is a quasi-Frobenius algebra.
Proof. The inverse of the antipode provides the isomorphism of the right A-moduleŝ with right action µ R given in (3.14b) and the structure theorem of multiple weak Hopf modules implies that (Â A , µ R ) is isomorphic to a direct summand of the free right A-moduleÎ L ⊗ A A . Therefore, (Â A , ↼) is a projective right A-module, which implies the injectivity of its k-dual, that is of A A. Hence, A is a quasi-Frobenius algebra [ where {z R p } p is the set of primitive orthogonal idempotents in Z R . Proof. Since the right integrals I R form a left ideal in A and A A is injective by Corollary 3.3, it follows [5, p.392] that every φ ∈ Hom ( A I R , A A) can be extended toφ ∈ Hom ( A A, A A).
Proof. Due to Corollary 3.4 AÎ L is the right conjugate of A I R , that is, A I R is the left conjugate of AÎ L , where the left A-module structure of the right invariants is inherited from that of the corresponding multiple WHM. In our case I(Â A ) ≡ I(Â A , µ R ) = Â L and I(Î L × A A ) = I L × I R . The latter equality can be seen by using the form (3.7b) of the projection P A to right invariants of the WHM Î L × A A A . To prove the former equality we note that the invariants of the right A-module (Â A , µ R ) are the coinvariants of the dual left Â-comodule (ÂÂ, δ L ) given by (3.5a). Since in this case δ L (ϕ) = Ŝ(ϕ (2) ) ⊗ ϕ (1) , applying Ŝ −1 ⊗ ε to the defining identity (3.4) of left coinvariants and using (1.10) one arrives at Â L = C(ÂÂ, δ L ) = I(Â A , µ R ). Therefore, Â L ≃ Î L × I R as left A-modules. However, A L ≃ Â L also holds since the invertible map Ŝ ◦ κ L : A L → Â L with κ L in (1.5) is an A-module map, where a ∈ A and x L ∈ A L . Thus, A L ≃ Î L × I R as left A-modules.
4. Existence of non-degenerate left integrals in weak Hopf algebras
Here we prove the generalization of the Larson-Sweedler theorem [10]. Theorem 4.1 A finite dimensional weak bialgebra A over a field k is a weak Hopf algebra iff there exists a non-degenerate left integral in A. Proof. Sufficiency. A left integral l ∈ A obeys the defining property al = Π L (a)l, a ∈ A. Non-degeneracy means that the maps L l and R l are bijections. This implies that there exist λ, ρ ∈ Â such that l ↼ ρ ≡ L l (ρ) = 1 = R l (λ) ≡ λ ⇀ l. Let us define the k-linear maps S: A → A and Ŝ: Â → Â. They are transposed to each other with respect to the canonical pairing, and Ŝ(ρ) = λ. Now we prove that λ (ρ) is a non-degenerate left (right) integral in Â obeying l ⇀ λ = 1̂ = l ⇀ ρ.
Applying the structure theorem of multiple WHMs to AÂ A A given by (3.14) we get the isomorphism Î L × A ≃ Â. Moreover, the restriction of the left A-module structure of AÎ L ≡ ( AÎ L , ⋆) to the coideal subalgebra A R ⊂ A leads to a free A R -module A RÎ L ≡ ( A RÎ L , ⋆) with a single generator λ 0 ∈ Î L due to Corollary 3.5. Hence, using the multiple WHM isomorphism V : Î L × A → Â given in (3.19) and the presence of the separating idempotent in Î L × A := 1 (1) ⋆ Î L ⊗ 1 (2) · A = Î L · S(1 (1) ) ⊗ 1 (2) A, which follows from (3.11), (1.4) and from the property ∆(1) ∈ A R ⊗ A L , one obtains a relation which implies the non-degeneracy of the left integral λ 0 in the weak Hopf algebra Â.
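The defining property al = Π L (a)l and the non-degeneracy of an integral can be made concrete in the smallest genuinely weak example, the pair-groupoid WHA A = M_2(k) with Δ(e_ij) = e_ij ⊗ e_ij, ε(e_ij) = 1 and S = transpose (standard in the WHA literature; the code is my own illustration, not part of the proof). There l = Σ_ij e_ij is a two-sided non-degenerate integral:

```python
# Two-sided integral in the pair-groupoid WHA A = M_2(k):
# l = sum_ij e_ij satisfies a l = Pi^L(a) l and l a = l Pi^R(a).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def PiL(A):  # diagonal matrix of row sums (pair-groupoid structure maps)
    return [[sum(A[0]), 0], [0, sum(A[1])]]

def PiR(A):  # diagonal matrix of column sums
    return [[A[0][0] + A[1][0], 0], [0, A[0][1] + A[1][1]]]

l = [[1, 1], [1, 1]]  # sum of all matrix units e_ij

for a in ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[2, -3], [5, 7]]):
    assert mul(a, l) == mul(PiL(a), l)   # left integral property
    assert mul(l, a) == mul(l, PiR(a))   # right integral property

# Non-degeneracy: since Delta(l) = sum_ij e_ij (x) e_ij, the map
# L_l(phi) = phi(l_(1)) l_(2) = sum_ij phi(e_ij) e_ij sends a functional
# phi in the dual to its matrix of values (phi(e_ij))_ij -- a bijection.
phi = {(0, 0): 4, (0, 1): -1, (1, 0): 0, (1, 1): 2}   # an arbitrary functional
L_l_phi = [[phi[(i, j)] for j in range(2)] for i in range(2)]
assert L_l_phi == [[4, -1], [0, 2]]
print("l = sum e_ij is a non-degenerate two-sided integral")
```
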
Since a non-degenerate left integral in a WHA provides a non-degenerate associative bilinear form on the dual WHA, we obtain: Corollary 4.2 A finite dimensional weak Hopf algebra is a Frobenius algebra.
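For intuition (my own illustration, not the paper's construction): a Frobenius algebra carries a non-degenerate associative bilinear form (a, b) ↦ λ(ab), and for the matrix algebra M_2(k) the trace form does the job. In a WHA the form is induced by a non-degenerate integral λ ∈ Â rather than by the trace in general, but the check has the same shape:

```python
# Frobenius property of M_2(k) via the trace form beta(a, b) = tr(ab):
# associativity beta(ab, c) = beta(a, bc) and a non-degenerate Gram matrix.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def E(i, j):
    M = [[0, 0], [0, 0]]
    M[i][j] = 1
    return M

basis = [E(i, j) for i in range(2) for j in range(2)]

# Associativity of the form: tr((ab)c) = tr(a(bc)).
a, b, c = [[1, 2], [3, 4]], [[0, -1], [2, 5]], [[7, 1], [1, 0]]
assert tr(mul(mul(a, b), c)) == tr(mul(a, mul(b, c)))

# Gram matrix beta(e_ij, e_kl) = tr(e_ij e_kl) = delta_jk delta_il:
gram = [[tr(mul(x, y)) for y in basis] for x in basis]
# The rows are the four distinct standard unit vectors, so the Gram matrix
# is a permutation matrix and the form is non-degenerate.
assert sorted(gram) == sorted([[1 if r == c else 0 for c in range(4)]
                               for r in range(4)])
print("trace form on M_2 is associative and non-degenerate")
```
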
5. Grouplike elements and invertible modules
In this chapter first we define (left/right) grouplike elements in a WHA A. Then we give two equivalent descriptions of invertible A-modules in terms of the canonical coideal subalgebras in A and in terms of left (right) grouplike elements in the dual WHA Â.
Definition 5.1 The set of right/left grouplike elements G R/L (A) in a weak Hopf algebra A is defined by (5.1), where A R/L * denotes the set of invertible elements in A R/L . The set of grouplike elements in A is defined to be the intersection G(A) := G R (A) ∩ G L (A).
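Definition 5.1 can be tested in the pair-groupoid WHA A = M_2(k) with Δ(e_ij) = e_ij ⊗ e_ij and S = transpose (my own illustration, using the standard WHA convention that a grouplike g is invertible with Δ(g) = (g ⊗ g)Δ(1) = Δ(1)(g ⊗ g) and Π L (g) = 1 = Π R (g)). The element g = e_12 + e_21 qualifies, while the naive Hopf condition Δ(g) = g ⊗ g fails, which is exactly why the condition must be formulated relative to Δ(1):

```python
# g = e_12 + e_21 is grouplike in the pair-groupoid WHA A = M_2(k):
# Delta(g) = (g (x) g) Delta(1) = Delta(1) (g (x) g) and Pi^L(g) = 1 = Pi^R(g).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def E(i, j):
    M = [[0, 0], [0, 0]]
    M[i][j] = 1
    return M

def Delta(A):
    """Delta(e_ij) = e_ij (x) e_ij extended linearly (a 4x4 matrix)."""
    D = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            K = kron(E(i, j), E(i, j))
            for r in range(4):
                for c in range(4):
                    D[r][c] += A[i][j] * K[r][c]
    return D

def PiL(A):
    return [[sum(A[0]), 0], [0, sum(A[1])]]

def PiR(A):
    return [[A[0][0] + A[1][0], 0], [0, A[0][1] + A[1][1]]]

one = [[1, 0], [0, 1]]
g = [[0, 1], [1, 0]]        # e_12 + e_21, its own inverse
D1, Dg = Delta(one), Delta(g)

assert mul(g, g) == one                      # invertibility
assert Dg == mul(kron(g, g), D1) == mul(D1, kron(g, g))
assert PiL(g) == one and PiR(g) == one       # normalization Pi^L(g) = 1 = Pi^R(g)
assert Dg != kron(g, g)                      # the naive Hopf condition fails
print("g = e_12 + e_21 is grouplike in the weak sense")
```
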
Proof. If g ∈ A is right (left) grouplike it is invertible due to the discussion above, while the required coproduct property follows by definition. Conversely, assume that the relations hold. Using (1.8) one derives an identity; multiplying it by 1 ⊗ S(g −1 ) from the right and using the form of Π L (g), one arrives at the other coproduct property of a right grouplike element in (5.1a). The proof for left grouplike elements is similar. We note that the set G(A) in G R (A) can also be given by the subset of elements satisfying Π L (g) = 1 or by the subset of pointwise invariant elements with respect to S 2 . For verification of the latter claim, we note that if g = S 2 (g) holds for g ∈ G R (A) then Π L (g) = gS(g) = S 2 (g)S(g) = S(Π L (g)), that is, Π L (g), hence Π L (g) −1 too, are in A L ∩ A R ⊂ Center A L . Using (5.2a), (5.1a) and these consequences one obtains the claim. Now we turn to the characterization of invertible modules of WHAs. ii) An invertible module A M ∈ Obj L is semisimple. Namely, it is the direct sum of simple submodules M (p,τ M (p)) , where {z L p } p ⊂ Z L and {z R p := S(z L p )} p ⊂ Z R are the sets of primitive orthogonal idempotents and τ M is a permutation on them.
as left A-modules, where A L is the unit left A-module given in (2.1).
If (5.3) holds then, using the natural equivalences X L and X R given in (2.3), is invertible; therefore, it is given by the action of an invertible element z L ∈ Z L := also holds because of faithfulness of 1 M × − and because of the identity Thus, using the right and left evaluation maps defined in (2.8) and (2.14), respectively, where we used the inverse of (5.5a) in the third equality and (2.10b) in the fourth one. Now we prove that (5.3) is fulfilled iff M becomes a free rank one A L -and A R -module by restricting the left A-action to these subalgebras. If (5.10) The third equality follows from the invariance of the counit with respect to the antipode: ε = ε • S. The second is the consequence of the identities (1.14-15) claiming that S 2 is the Nakayama automorphism θ L : A L → A L corresponding to the counit as a non-degenerate functional on A L . Therefore summation suppressed), rank one A L -and A R -freeness of M in the fourth equalities, respectively, and (2.13a) in the sixth equality of (5.12b), one obtains i.e. C l M and C r M are surjective. Injectivity of C l M and C r M follow from the faithfulness of M as a left A L -and A R -module, respectively.
ii) From (5.3) and Lemma 2.8 we can deduce that Let m ∈ M be a free A L -generator. The action by an element i.e. only if Π L (a)x L = Π L (ax L ) for all a ∈ A. However, this relation implies that x L ∈ Center A: S(a)x L = S(a (1) )Π L (a (2) )x L = S(a (1) )Π L (a (2) x L ) = S(a (1) )a (2) x L S(a (3) ) = Π R (a (1) )x L S(a (2) ) = x L S(a), a ∈ A. (5.15) Therefore, x L ∈ A L ∩Center A =: Z L , that is End A M ⊂ Z L ·. The opposite containment is trivial. The proof of the relation End A M = Z R · is similar. Hence, the direct summands of A M in the statement ii) are indecomposable submodules. Since A M is a free rank one A Land A R -module due to i), τ M is a permutation and the k-dimensions of the indecomposable submodules M (p,τ M (p)) saturate the lower bound (2.25) given in Lemma 2.6. Therefore, M (p,τ M (p)) is simple since it cannot contain a non-trivial submodule.
Now we turn to the characterization of invertible Â-modules in terms of right (left) grouplike elements in the WHA A. First, we give the connection between (right/left) grouplike elements in A and invertible submodules of (ÂA, ⇀): Lemma 5.5 Let A be a WHA and let F a := (Â ⇀ a, ⇀) denote the cyclic left Â-submodule of ÂA := (ÂA, ⇀) generated by a ∈ A.
i) g ∈ A is (right/left) grouplike iff g is an element of an invertible submodule ÂF of ÂA and g obeys the normalization conditions (Π R/L (g) = 1) Π R (g) = 1 = Π L (g). ii) The cyclic submodules F g , F h ⊂ ÂA generated by (right/left) grouplike elements are in the same module isomorphism class iff gh −1 ∈ A T .
iii) In any module isomorphism class of invertible submodules ofÂA, there is a submodule which contains a right (left) grouplike element. Proof. i) Let g ∈ G R/L (A) or g ∈ G(A). Clearly, F g is a submodule ofÂA that contains g satisfying the required normalization conditions. According to Prop. 5.4 i) invertibility of F g follows if F g becomes a free L -and R -module with the single generator g by restricting theÂ-action to these subalgebras. If g ∈ G R (A) then the identities (1.6-7) and (5.1-2a) lead to the relations for certain ϕ L/R ∈ L/R then ϕ L/R = 0, because g is invertible and the mapsκ L in (1.5) and the antipode S are bijections. Therefore, F g is a free rank one R -and L -module for any g ∈ G R (A), hence for any g ∈ G(A) ⊂ G R (A), too. The case of g ∈ G L (A) can be proved similarly. Conversely, letÂF be an invertible submodule ofÂA. Then F is a right coideal in A and a free left L -and R -module with a single generator f ∈ F . Thus, one can define for ϕ ∈Â. They are left L -and R -module maps, respectively. Since F is a right coideal in A, definingf l andf r in the k-dualF of F like in (5.9) by 20b) for all ϕ ∈Â, which imply Applying the counit ε to the first tensor factor we obtain This implies that g is also an L/R -generator of F , hence, (5.21-22) hold for f = g ∈ F , too. Since 1 = Π R (g) = S(g)(g ↼ĝ r ) by assumption and due to the first equality of (5.21), S(g), hence g, too, is invertible. SinceΠ L (g) = S −1 (Π R (g)) = 1 due to (1.11), the second equality of (5.22) implies that g ↼ĝ l = g. Hence, the second equality of (5.21) together with invertibility of g implies that g ∈ G R (A) due to Corollary 5.2. The cases g ∈ G L (A), G(A) can be proved similarly.
ii) First we note that for g, h ∈ G R (A) (G L (A), G(A)) the invertible leftÂ-modules F gh and F g × F h are isomorphic, because the maps are leftÂ-module maps, which are inverses of each other. Hence, it is enough to prove that F g ≃ F 1 as leftÂ-modules for g ∈ G R (A) (G L (A), G(A)) iff g ∈ A T . Let g ∈ G T R (A) := G R (A) ∩ A T . Then (A T ) ⊥ := {ϕ ∈Â| ϕ, A T = 0} ⊂ is an ideal contained in the annihilator ideal of both of the leftÂ-modules F 1 and F g , because F 1 , F g ⊂ A T and A T is a subcoalgebra of A. Therefore F 1 and F g are also left modules with respect to the factor algebraÂ/(A T ) ⊥ and the isomorphism of the modules F 1 and F g with respect to this factor algebra ensures their isomorphism asÂ-modules. The factor algebrâ A/(A T ) ⊥ is isomorphic to the dual WHA A T of A T as an algebra, which is isomorphic to a direct sum of simple matrix algebras, A T ≃ ⊕ α M n α (Z α ), due to Lemma 2.3. The Z α s are separable field extensions of the ground field k determined by the ideal decomposition Z = ⊕ α Z α of Z ≡ A L ∩ A R and the dimensions obey n α = dim Z α A L α . Hence, F 1 and F g are isomorphic A T -modules if the multiplicities of simple submodules corresponding to the Wedderburn components of A T in their direct sum decompositions are equal. In order to prove this, first we note that the primitive idempotents {z α } α ⊂ Z are central in A T , hence they are in the hypercenter H of A T and they are related to the primitive central idempotents {ê α } α of A T asê α ⇀ 1 = z α = 1 ↼ê α (5.24) due to (1.6) and the remarks after it. Hence,ê α ⇀ g = (ê α ⇀ 1)g = z α g and F 1 and F g are faithful left A T -modules, because 1 and g are invertible. Therefore, the multiplicity corresponding to a Wedderburn component of A T is at least one in both of the modules F 1 and F g . 
Then the identity for k-dimensions coming from the R -freeness of invertibleÂ-modules and from the algebra structure of A T ensures that these multiplicities are equal to one, that is F 1 and F g are isomorphic A T ≃Â/(A T ) ⊥ , hence isomorphicÂ-modules. Conversely, let g ∈ G R (A) be such that there exists an isomorphism U : F 1 → F g between the invertible leftÂ-modules F 1 and F g . Using that U is anÂ-module map, we have (2) , which ensures that U (1) ∈ A L . Moreover, U (1) is an A L/R generator of F g , because it is the image of the L/R generator 1 ∈ F 1 . Hence, there exists an invertible element ϕ L ∈ L such that The case of (left) grouplike elements can be proved similarly.
iii) Let f be an L/R -generator of the invertible submodule F f ⊂ÂA. If there is no right grouplike element in F f = L ⇀ f = A R f , that is, due to i), there is no such element g in F f that obeys Π R (g) = 1, let us define g := f ↼f l ∈ A withf l given in (5.18). Then (1.4) and the maps commute with the left Sweedler action, i.e. they are leftÂ-module maps. They are also inverses of each other due to (5.22), which property has been already indicated in (5.28b). Therefore, F g and F f are equivalent submodules ofÂA, that is F g is also invertible. Since Π R (g) := Π R (f ↼f l ) = 1 due to (5.18) and due to the nondegeneracy of the A R − R pairing, g is a right grouplike element due to i). The proof is similar for left grouplike elements: one has to define g := f ↼f r withf r given in (5.18) to get g ∈ G L (A) in the submodule F g isomorphic to F f . Proof. An element g ∈ G T R (A) has the product form g = g L g R due to (5.27) with g L := U (1) ∈ A L and g R := ϕ L ⇀ 1 ∈ A R . Since g is invertible, g L and g R are invertible. Using property (5.2a) one obtains 1 = Π R (g) ≡ Π R (g L g R ) = g R S(g L ). The other cases follow since G T L (A) = S(G T R (A)) and since G T (A) = G T R (A) ∩ G T L (A). Since ⇀ g ↼ = gA T = A T g for g ∈ G R/L (A) (G(A)) due to (5.1), gA T g −1 = A T follows. Therefore, G T R/L (A) and G T (A) are normal subgroups. [5, p.401]. Since they are inequivalent for different p, the invertible moduleÂM itself is isomorphic to a left ideal inÂ. Due to Corollary 4.2 is a Frobenius algebra, hence, the isomorphism ≃ (ÂA, ⇀) of left regular modules holds [5, p.413]. Thus,ÂM is isomorphic to an invertible submodule of (ÂA, ⇀), that is to a cyclic submodule F g with g ∈ G R (A) (g ∈ G L (A)) by Lemma 5.5 iii). Due to Lemma 5.5 ii) the isomorphism classes of cyclic submodules F g , g ∈ G R/L (A) are given by the elements of the factor group G R/L (A)/G T R/L (A). 
Since a finite dimensional k-algebra has a finite number of inequivalent simple modules, there is only a finite number of inequivalent semisimple modules with a given k-dimension. Therefore, the factor groups G R/L (A)/G T R/L (A) are finite groups.
In consideration of Prop. 5.7 we can formulate why the notion of grouplike elements in a WHA is too restrictive: one cannot always associate a grouplike element in A to an invertible module of the dual WHAÂ. We formulate this claim as follows: * denote the element that relates the counit and the reduced trace as non-degenerate functionals on the separable algebra A L : ε(·) = tr (· t L ). The coset In general, G(A)/G T (A) is a proper subgroup of G R (A)/G T R (A). Proof. The adjoint action by g ∈ G R (A) on A gives rise to algebra automorphisms of A L and A R , because (5.1-2a) imply that Π R/L (gy R/L g −1 ) = gy R/L g −1 for y R/L ∈ A R/L . Using the invariance of the reduced trace with respect to algebra automorphisms and the WBA identity ε(abc) = ε(Π R (a)bΠ L (c)); a, b, c ∈ A, which follows from (1.1b) and (1.3), one obtains ε(y L gS(g)) = ε(Π R (g −1 )y L Π L (g)) = ε(g −1 y L g) = tr (g −1 y L gt L ) = tr (y L gt L g −1 ) = ε(y L gt L g −1 t −1 L ), y L ∈ A L , (5.30) i.e. gS(g) = gt L g −1 t −1 L due to non-degeneracy of the counit on A L . Therefore, for all g ∈ G R (A) we have The element t L implements the Nakayama automorphism θ ε = S 2 of ε on A L : θ ε = Ad t L . Hence, t := t L S(t −1 L ) ∈ A T implements S 2 on A T and due to (5.31) on the subcoalgebras gA T of A, g ∈ G R (A) as well. In addition, t ∈ G T (A) due to Corollary 5.6.
Hence, if for a given g ∈ G R (A) there exists L )g for some x L ∈ A L * due to Corollary 5.6. Therefore, using (5.31) For the second statement of the proposition first we note that the inclusion gG of the factor groups. To show that this inclusion is proper in general an example will suffice.
Let the WHA A over the rational field Q be given as follows. Let A L be a full matrix algebra M m (Q( √ 2)), m > 1, where Q( √ 2) denotes the (separable) field extension of Q by √ 2. Let the counit ε as a non-degenerate index 1 functional on the separable algebra A L be given with the help of the reduced trace: ε(·) := tr (·t L ), where t L ∈ A L * satisfying tr (t −1 L ) = 1. Let A T be the WHA of the form A L ⊗A Lop =: A L ⊗A R given in the Appendix of [2]. Let A as an algebra over Q be given by the crossed product A := A T > ⊳ Z 2 , where Z 2 = {e, g} is the cyclic group of order two and the action of the non-trivial element g ∈ Z 2 on A L (A R ) is the outer automorphism that changes the sign of the central element Proof. If A L is central simple (5.29) is fulfilled by definition. In the other case t L is central in A L , G T R (A) = G T (A) and (5.29) reads as gt L g −1 = t L , g ∈ G R (A). Due to (5.31) S(g)g = t L g −1 t −1 L g, and it is a central element in A L . Therefore, 1 = Π R (g) = S(1 (1) )S(g)g1 (2) = S(g)g due to (5.1-2a), which proves the claim.
6. Distinguished (left/right) grouplike elements, Radford formula and the order of the antipode After defining distinguished (left/right) grouplike elements and deriving some basic properties of them we prove the generalization of the Radford formula: the fourth power of the antipode in a WHA can be expressed in terms of distinguished left (right) grouplike elements like in the finite dimensional Hopf case [15]. Using this result we derive a finiteness type claim about the order of the antipode in a WHA and prove that the double of a WHA is unimodular.
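In the simplest weak example, the pair-groupoid WHA A = M_2(k) with S the transpose (my own illustration, not from the paper), the antipode is an anti-automorphism of order two, so S^4 = id holds on the nose, consistent with the Radford formula in the case where the distinguished grouplike elements are trivial:

```python
# Antipode of the pair-groupoid WHA: S = transpose on M_2(k).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def S(A):
    return [list(row) for row in zip(*A)]

a, b = [[1, 2], [3, 4]], [[0, -1], [5, 7]]

# S is an algebra anti-homomorphism ...
assert S(mul(a, b)) == mul(S(b), S(a))
# ... of order two, hence S^4 = id: consistent with the Radford formula
# when the distinguished grouplike elements are trivial.
assert S(S(a)) == a
print("S = transpose: anti-automorphism with S^2 = id (so S^4 = id)")
```
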
We note that the Radford formula was proved in [13] for WHAs in the case when the square of the antipode is the identity mapping on A L . For such WHAs the sets of various grouplike elements coincide, see Corollary 5.9.
Before turning to the definition of (left/right) distinguished grouplike elements in a WHA let us examine the connection between integrals in dual pairs A, Â of WHAs.
The pair (l, λ) ∈ I L × Î L ⊂ A × Â ((r, ρ) ∈ I R × Î R ) is called a dual pair of left (right) integrals if they are non-degenerate and if they obey one of the equivalent relations l ⇀ λ = 1̂, λ ⇀ l = 1 (r ↼ ρ = 1, ρ ↼ r = 1̂). Due to Theorem 4.1 such pairs exist in any dual pair of WHAs. ( AÎ L , ⋆) is an invertible A-module due to Corollary 3.5 and Prop. 5.4 i). Since this module is the right conjugate of the module A I R due to Corollary 3.4, A I R is also an invertible left A-module due to (5.7-8b). Hence, it is a free rank one left A L/R -module due to Prop. 5.4 i). An element r is a free A L (A R ) generator in A I R iff r is a non-degenerate right integral, thus non-degenerate right integrals r, r ′ ∈ I R are related by an element x L ∈ A L * (x R ∈ A R * ): r ′ = x L r (r ′ = x R r). The corresponding statement holds for non-degenerate right integrals in Î R by duality. Hence dual pairs of right integrals, (r 1 , ρ 1 ) and (r 2 , ρ 2 ), are related by a 'common' invertible element. For WHAs based on certain separable, but not strongly separable [9], algebras A L the property S 2 |A L ≠ id |A L , i.e., the non-triviality of the Nakayama automorphism corresponding to the counit as a non-degenerate functional ε: A L → k, is not only a possibility, but the only possibility, because ε should be an index 1 functional on A L . For example, if A L = M 2 (Z 2 ), that is, a two by two matrix algebra over the finite field Z 2 , the reduced trace tr on A L is non-degenerate but it has index 0. The two non-degenerate index 1 functionals on A L have the form tr (· t L ) with t L ±1 = [ 1 1 ; 1 0 ] and lead to S 2 |A L = Ad t L ≠ id |A L .
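The M 2 (Z 2 ) arithmetic in the remark above can be checked directly mod 2 (my own sketch): tr(1) = 2 = 0, so the plain trace has index 0, while t L = [ 1 1 ; 1 0 ] has tr(t L −1 ) = 1, and conjugation by t L is a non-trivial automorphism of M 2 (Z 2 ):

```python
# Index computation over the field Z_2 for A_L = M_2(Z_2).

def mul2(A, B, p=2):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def tr2(A, p=2):
    return (A[0][0] + A[1][1]) % p

one = [[1, 0], [0, 1]]
tL = [[1, 1], [1, 0]]
tL_inv = [[0, 1], [1, 1]]          # inverse of tL over Z_2
assert mul2(tL, tL_inv) == one

# tr(1) = 2 = 0 in Z_2: the plain trace functional has index 0 ...
assert tr2(one) == 0
# ... while tr(tL^{-1}) = 1, so eps = tr(. tL) is an index 1 functional.
assert tr2(tL_inv) == 1

# Ad tL is a non-trivial (inner) automorphism: S^2|A_L = Ad tL != id.
x = [[1, 0], [0, 0]]               # the matrix unit e_11
assert mul2(mul2(tL, x), tL_inv) != x
print("M_2(Z_2): trace has index 0; tr(. tL) has index 1; Ad tL != id")
```
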
Let us consider the element s R := ρ ⇀ r ∈ A constructed from the elements of a dual pair (r, ρ) of right integrals. Since r is a non-degenerate functional on and since ρ is a free L/R -generator of the leftÂ-moduleÂÎ R , s R becomes a free left L/R -generator of the cyclic leftÂ-module ( ⇀ s R , ⇀), i.e. it is an invertibleÂ-submodule in (A, ⇀). Moreover, using (1.8) that is s R is a right grouplike element in A due to Lemma 5.5 i). If (r i , ρ i ); i = 1, 2 are dual pairs of right integrals the corresponding right grouplike elements differ by a right grouplike element in A T due to (6.1), (1.5-6) and Corollary 5.6: However, it is not known to us whether the coset G T R (A)s R in G R (A) is special enough in order to contain always a grouplike element. But we note that if s R := ρ ⇀ r is grouplike, i.e. Π L (s R ) = 1 also holds, then σ R := r ⇀ ρ ∈ G(Â) already follows: by duality σ R is a free A L/R -generator in the cyclic left A-module (A ⇀ σ R , ⇀) with the propertyΠ R (σ R ) =1 and (6.4) that is σ R is grouplike by Lemma 5.5 i). Similarly, a dual pair (l, λ) of left integrals leads to left grouplike elements: s L := l ↼ λ ∈ G L (A) and σ L := λ ↼ l ∈ G L (Â). s L is grouplike iff σ L is grouplike, because Π R (s L ) andΠ R (σ L ) obey a relation analogous to (6.4): These considerations lead to the following The invertible right/left A-module structures of left/right integrals in A can be made explicit by using these projections and distinguished left/right grouplike elements σ L/R connected to the dual pair (l, λ)/(r, ρ) of left/right integrals: (6.7) For example, the first relation can be proved by using (5.1-2b), (1.6) and the nondegeneracy of λ: . Lemma 6.2 Let C b := AbA ⊂ A be the cyclic ideal with the generator b = b(γ, δ) ∈ A characterized by a left and a right grouplike element γ ∈ G L (Â) and δ ∈ G R (Â), respectively, through the property where the projections Π L γ and Π R δ are defined in (6.6). 
The left/right Sweedler actions by left/right grouplike elements in provide isomorphisms between such types of cyclic ideals as (possibly non-unital) rings. The imageb of the generator b = b(γ, δ) obeys the characterization property Proof. First, we note that the set of such cyclic ideals is non-empty: l ∈ I L from a dual pair (l, λ) of left integrals is a generator with characterization property l = l(1,Ŝ(σ −1 L )) due to (1.9) and (6.7), where σ L := l ↼ λ is the corresponding distinguished left grouplike element.
Since left (right) Sweedler actions by left (right) grouplike elements in provide algebra automorphisms of A, the isomorphism of the corresponding cyclic ideals as rings follows. The only open question is the characterization property (6.9) of the imageb of the generator b = b(γ, δ). Using properties (6.1-2b) of left grouplike elements, characterization property (6.8) of the generator b, coproduct properties (1.4) of elements in A L/R and properties (1.7) of the projections Π L/R andΠ L/R , one derives The change of the characterization property of the generator b due to right Sweedler actions b ↼ β R , β R ∈ G R (Â) can be proved similarly.
Corollary 6.3
Distinguished left grouplike elements in Â fall into a central element of the factor group G L (Â)/G T L (Â). There exists a two-sided non-degenerate integral in A iff distinguished left grouplike elements in Â fall into the unit element of this factor group.
Proof. For any β ∈ G L (Â) the map B β (a) := β ⇀ a ↼Ŝ −1 (β), a ∈ A defines an algebra automorphism of A, which maps the space I L of left integrals into itself due to the previous Lemma. The imagel := B β (l) of a non-degenerate left integral l = l(1,Ŝ(σ −1 L )) is a nondegenerate left integral having the characterization propertyl =l(1,Ŝ −1 (β −1 )Ŝ(σ −1 L )Ŝ(β)) due to (6.9). Hence, the distinguished left grouplike elementσ L corresponding tol is given byσ (6.11) with ϕ L =Ŝ −1 (Π R (β −1 )) ∈ L * due to the form (5.2b) ofΠ R (β −1 ). However, distinguished left grouplike elements differ by elements in G T L (Â), in analogy with the case (6.3) of distinguished right grouplike elements. Hence, for the G T L (Â)-cosets (6.11) implies the relation . If the non-degenerate left integral l ∈ I L is also a right integral then we have the relation Π R S(σ −1 L ) = Π R due to (6.7) and (1.9). Hence, σ L =1 since using (6.6) and (1.7). Conversely, if [σ L ] is the unit element of the factor group then there exists a dual pair (l, λ) of left integrals with distinguished left grouplike element σ L =1 due to a relation analogous with (6.3). Therefore, Π R S(σ −1 L ) = Π R and (6.7) implies that l is a (non-degenerate) two-sided integral. Theorem 6.4 Let A, be a dual pair of WHAs and let (s L , σ L ) be the pair of distinguished left grouplike elements corresponding to a dual pair (l, λ) of left integrals in A ×Â. The Nakayama automorphism θ λ :=R −1 λ •L λ : A → A corresponding to the non-degenerate functional λ: A → k can be written as The fourth power of the antipode S of A can be written as: The order of the antipode is finite up to an inner automorphism by a grouplike element in the trivial subalgebra A T .
Finally, using property (1.16) of left integrals, (6.14d) can be rewritten as Therefore using (6.14a), (6.16b-d), the algebra isomorphism property of the mapκ R given in (1.5), the relation (6.5) and the form (5.2b) of Π R (s) we get Due to injectivity of R l and L l (6.17a and b) lead to connections betweenR λ andL λ that imply (6.12). The equality of these two different forms of the Nakayama automorphism θ λ gives rise to the Radford formula (6.13).
Appendix B
Here we give the generalization of the cyclic module [4] A ♮ (σ,s) for weak Hopf algebras having a modular pair (σ, s) in involution. The details will be published elsewhere.
Let A be a weak Hopf algebra. The pair (σ, s) ∈ G(Â) × G(A) of grouplike elements is called a modular pair for A if σ ⇀ s = s = s ↼ σ, s ⇀ σ = σ = σ ↼ s. Clearly, a modular pair (in involution) is a self-dual notion for WHAs. The identity (B.2) is a kind of square root of the Radford formula, hence, modular pairs in involution do not exist for arbitrary WHAs. However, there is a wide class of WHAs having such a pair. For example, in a weak Hopf C * -algebra A there is a canonical grouplike element g ∈ A implementing S 2 on A [2], hence (1, g) is a modular pair in involution for A. Another example is as follows: let A be a WHA over k and let the WHA A G := ⟨A T , G R (A)⟩ be the subWHA of A generated by the trivial subWHA A T and by (a subgroup of) the right grouplike elements G R (A) in A. Then (1, t) with t ∈ G T (A) defined in (5.31) is a modular pair in involution for A G , because t implements S 2 for A T and G R (A) due to (5.31).
New Bis-Pyrazole-Bis-Acetate Based Coordination Complexes: Influence of Counter-Anions and Metal Ions on the Supramolecular Structures
Abstract: A new flexible bis-pyrazole-bis-acetate ligand, diethyl 2,2'-(pyridine-2,6-diylbis(5-methyl-1H-pyrazole-3,1-diyl))diacetate (L), has been synthesised, and three coordination complexes, namely [Zn(L) 2 ](BF 4 ) 2 (1), [MnLCl 2 ] (2) and [CdLCl 2 ] (3), have been obtained. All ligands and complexes were characterised by IR, mass spectroscopy, thermogravimetric analysis and single-crystal X-ray diffraction. The single-crystal X-ray diffraction experiments revealed that the primary supramolecular building block of 1 is a hexagonal chair-shaped 0D hydrogen-bonded synthon (stabilised by C–H···O hydrogen bonding and C=O···π interactions), which is further built into a 2D corrugated sheet-like architecture having a 3-c net honeycomb topology, and finally extended into a 3D hydrogen-bonded network structure having a five-nodal 1,3,3,3,7-c net, through C–H···F interactions. On the other hand, the two crystallographically independent molecules of 2 exhibited two distinct supramolecular structures, namely a 2D hydrogen-bonded sheet structure and a 1D zigzag hydrogen-bonded chain, sustained by C–H···O and C–H···Cl interactions, which are further self-assembled into a 3,4-c network structure, while 3 showed a 2D hydrogen-bonded sheet structure. The supramolecular structural diversity in these complexes is due to the different conformations adopted by the ligands, which are mainly induced by the different metal ions, with coordination environments controlled by the different anions. Hirshfeld surface analysis was used for the qualitative and quantitative analysis of the supramolecular interactions.
Introduction
Designing coordination complexes by using supramolecular self-assembly is an important research area in materials chemistry [1]. The use of relatively simple organic ligands and metal ions through their kinetically labile and thermodynamically stable coordination bonds attracted many research groups due to their various potential applications [2][3][4][5][6][7]. Such self-assembly resulted in channels or void spaces, wherein host-guest chemistry played a role for the incorporation of small molecules or anions within such empty spaces [8]. In most of the coordination complexes so far reported, the ligands having only one hetero nitrogen as the donor atom such as pyridine [9], picoline [10], isoquinoline [11] etc. were used. On the other hand, ligands having two hetero nitrogen atoms such as imidazole [12], pyrazole [13,14] and pyrazine [15] are not much explored in the coordination chemistry of transition metals.
Our research group has recently started a research programme on coordination complexes built from pyrazole ligands. For example, we have reported the crystal structures of Co(II)/Cu(II) coordinated complexes of pyrazole-dicarboxylate acid ligand and established their supramolecular structures [16]. In another work, we have demonstrated the effect of the aliphatic backbone of the bis-pyrazole-bis-carboxylate ligand on the supramolecular structures of their Co(II)/Cu(II)/Cd(II) coordination complexes [17]. In a further account, we have studied the effect of anions and hydrogen bonding on the supramolecular structural diversities of Cu(II) and Mn(II) coordination complexes obtained from a novel bis-pyrazole ligand [18]. More recently, two new pyrazole-acetamide ligands and their solid-state structures of coordination complexes caracterised by their remarkable antioxidant activity have been reported too, in the context of the effect of hydrogen bonding on the self-assembly process [19]. Last but not least, we have reported the crystal structure-bioactivity correlation of three mononuclear coordination complexes of a pyrazolyl-benzimidazole ligand [20].
In the present study, we aim to explore the effect of ligating topologies, counter anions and the metal ion nodes on the supramolecular structures of coordination complexes obtained from a conformationally flexible bis-pyrazol-bis-acetate ligand having a pyridine backbone (Scheme 1), namely diethyl 2,2'-(pyridine-2,6-diylbis(5-methyl-1H-pyrazole-3,1-diyl)) diacetate (L) because of the following reasons: (1) The ligand L is an N-heterocyclic tridentate pyrazolyl pyridine compound capable of forming various coordination modes, and ligating topology with transition metal ions [21]. (2) This type of pyridine ligand having pyrazolyl groups at the second and sixth position, possesses a wide range of interesting chemical and/or physical properties, such as catalytic [22,23], electrochemical [24], magnetic and photophysical properties [25].
(3) Many coordination complexes containing both Lewis base donors and Lewis acid acceptors in the same ligand [26], such as 2,6-bis(pyrazolyl)pyridine, have been reported [23][24][25][26][27], in line with the hard-soft acid-base theory [28]. (4) L is a new ligand, not yet reported. In this work, we investigate the coordination properties of L with Zn(II), Mn(II) and Cd(II), which are recognised as non-biodegradable metal ions, toxic for both health and the environment. For this purpose, we reacted L with Zn(BF4)2·6H2O, MnCl2·4H2O and CdCl2·2.5H2O in a 1:2 molar ratio, which led to single crystals that were systematically investigated by single-crystal X-ray diffraction (Scheme 2).
The crystal structures of the three coordination complexes are discussed in the context of the effect of conformation-dependent ligating topology, counter anions and metal ion nodes on the supramolecular structural diversity. For completeness, we also present the crystal structures of the ligand L and of the intermediate compound 2-(5-methyl-1H-pyrazol-3-yl)-6-(3-methyl-1H-pyrazol-5-yl)pyridine (B) (see Scheme 1).
Scheme 2.
Schematic representation of the synthesis of coordination complexes 1, 2 and 3.
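As an illustrative sanity check on the 1:2 reaction stoichiometry described above, the molar masses of the three hydrated metal salts can be computed from standard atomic weights. This is a hypothetical sketch, not part of the authors' synthesis protocol; the 0.10 mmol ligand quantity is an arbitrary example, and which component is taken as the 2-equivalent partner is an assumption.

```python
# Illustrative molar-mass check for the metal salts used in the syntheses.
# Atomic weights are standard IUPAC values (rounded); quantities are hypothetical.
ATOMIC_MASS = {
    "H": 1.008, "B": 10.811, "O": 15.999, "F": 18.998,
    "Cl": 35.453, "Mn": 54.938, "Zn": 65.38, "Cd": 112.414,
}

def molar_mass(composition):
    """Sum atomic masses over an element -> count mapping (counts may be fractional)."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

# Hydrate compositions written out atom by atom.
salts = {
    "Zn(BF4)2.6H2O": {"Zn": 1, "B": 2, "F": 8, "H": 12, "O": 6},
    "MnCl2.4H2O":    {"Mn": 1, "Cl": 2, "H": 8, "O": 4},
    "CdCl2.2.5H2O":  {"Cd": 1, "Cl": 2, "H": 5, "O": 2.5},
}

if __name__ == "__main__":
    n_ligand = 0.10e-3  # mol of L (hypothetical scale)
    for name, comp in salts.items():
        mw = molar_mass(comp)
        grams = 2 * n_ligand * mw  # assuming the salt is the 2-equivalent partner
        print(f"{name}: M = {mw:.2f} g/mol, {grams * 1000:.1f} mg for a 1:2 ratio")
```

Computing molar masses from an element-count mapping, rather than hard-coding them, makes fractional hydrates such as CdCl2·2.5H2O straightforward to handle.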
Materials and Methods
All solvents and chemicals, obtained from usual commercial sources, were of analytical grade and used without further purification. 1 H and 13 C NMR spectra were obtained on a Bruker AC 300 MHz spectrometer with the solvent proton peak as internal standard. High resolution mass spectrometry HRMS data were obtained with a Q Exactive Thermofisher Scientific ion trap spectrometer by using ESI ionisation. FT-IR spectra were recorded with KBr discs on a Perkin Elmer 1310 spectrometer. Thermogravimetric Analyses (TGA) were carried out on a Mettler Toledo TGA/SDTA 851e analyser by loading 3-4 mg of sample, and the mass loss was monitored under nitrogen on warming from room temperature to 900 °C at 10 °C/min. A suitable single crystal was selected and mounted onto a rubber loop using Fomblin oil. Single-crystal X-ray diffraction (SXRD) data of B, L, 1, 2, 3 were recorded on a Bruker Apex CCD diffractometer (λ (MoKα) = 0.71073 Å) at 150 K equipped with a graphite monochromator. Structure solution and refinement were carried out with SHELXS-97 [29] and SHELXL-97 [30] using the WinGX software package [31]. Data collection and reduction were performed using the Apex2 software package. Corrections for the incident and diffracted beam absorption effects were applied using empirical absorption corrections [32]. All the non-H atoms were refined anisotropically. The positions of hydrogen atoms were calculated based on stereochemical considerations using the riding model. Final unit cell data and refinement statistics for B, L, 1, 2, 3 are collected in Table 1.
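For readers less familiar with the diffraction setup, the Mo-Kα wavelength quoted above (λ = 0.71073 Å) fixes, via Bragg's law, the d-spacing probed at a given scattering angle. The sketch below is a generic illustration, not code from the study; the angles are arbitrary examples.

```python
import math

MO_KALPHA = 0.71073  # Å, Mo-Kalpha wavelength used in the SXRD experiments

def d_spacing(two_theta_deg, wavelength=MO_KALPHA):
    """Bragg's law, n*lambda = 2*d*sin(theta), solved for d with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

if __name__ == "__main__":
    for tt in (10, 20, 40):
        print(f"2theta = {tt:3d} deg  ->  d = {d_spacing(tt):.3f} Å")
```

Note the inverse relation: smaller scattering angles probe larger interplanar spacings, which is why low-angle data dominate for large unit cells.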
Synthesis, FT-IR and UV-Visible Spectroscopy
The ligand L was synthesised by a three-step reaction in which dimethyl pyridine-2,6-dicarboxylate was converted, in the first step, to an intermediate bis-hydroxy-bis-one compound (A) by a nucleophilic reaction with acetone and NaOMe; in the second step, the intermediate compound was treated with hydrazine hydrate, resulting in 2-(5-methyl-1H-pyrazol-3-yl)-6-(3-methyl-1H-pyrazol-5-yl)pyridine (B); and the third step afforded L (Scheme 1). The resultant needle-shaped crystals were characterised using FT-IR, UV-visible spectroscopy, electrospray ionisation mass spectrometry (ESI-MS) and SXRD. A more detailed explanation of the FT-IR and UV-visible spectroscopy is given in the SI.
Crystal Structures
The crystal data of B, L, 1, 2 and 3 are given in Table 1. Needle-type crystals were obtained by slow evaporation of dichloromethane and methanol in the case of L, and of a mixture of dichloromethane and ethanol in the case of the ligand B. Not surprisingly, the crystal structure of B contains an ethanol molecule, whereas no solvent was detected for L. Single-crystal X-ray diffraction analysis of ligand B revealed that the ligand crystallises in the orthorhombic space group P212121, which is an achiral member of the Sohncke family defining chiral crystals (Figure 1). The asymmetric unit is composed of one molecule each of B and lattice-included ethanol. The solvent ethanol was found to be disordered over two positions. In the unit cell, four molecules each of the ligand and ethanol were present. The N-N bond distances are in the range of 1.343(4)-1.350(4) Å, which is characteristic for pyrazole [20]. From the crystal structure, it is found that the ligand is slightly non-planar, the planes of the pyrazole rings differing in angle by about 8°.
The chirality obtained from 2-fold rotational axes as a result of the molecular assemblies in the crystal lattice of an achiral component is an important topic in crystal engineering [33]. On the other hand, the ligand L crystallised in the centrosymmetric triclinic space group P-1. The asymmetric unit contains only one molecule of L, and there were two such molecules found in the unit cell, both related to each other by a centre of inversion symmetry. As expected, the ligand showed a non-planar structure, which is revealed from the angle between the pyrazole rings (14.13°). Among various plausible conformations, the ligand L showed an anti-anti-syn-anti-anti conformation in the crystal structure (Scheme S1). The N-N bond distance was in the range of 1.358(4)-1.350(4) Å, which is the characteristic N-N bond length for pyrazole [20]. Moreover, the C=O, C-O and O-C bond lengths were in the range of 1.184(4)-1.206(8) Å, 1.318(5)-1.319(9) Å and 1.446(5)-1.458(8) Å, respectively [34], which confirms the presence of the ethyl acetate functionality in L.
We are interested in the final supramolecular structure of this new ligand L, in which three distinct functionalities, pyridine, pyrazole and ethyl acetate, are present. C-H···N hydrogen bonding involving a -CH2- spacer and the pyrazole N atom leads to the formation of a network structure having Schläfli symbol {4^8.6^2} and exhibiting a 5-c uninodal net topology [35]. In fact, such hydrogen bonding resulted in the formation of an eight-membered hydrogen-bonded macrocycle of graph set R(8). Such pairs of 2D sheets are further self-assembled through weak van der Waals forces (Figure 2). In contrast to the anti-anti-syn-anti-anti conformation of the ligand (non-coordinated to the metal centre), the metal-bound ligand L molecules showed two distinct conformations, syn-syn-syn-syn-syn and syn-syn-syn-syn-anti, in the coordination complex 1, with substantial molecular non-planarity, which is evident from the corresponding dihedral angles of 9.00-15.77° involving the terminal pyrazole rings. The crystallographically independent molecules of 1 showed weak C-H···O hydrogen bonding, and the BF4- anions facilitated the self-assembly of the 2D corrugated sheets into a three-dimensional hydrogen-bonded network structure having a five-nodal 1,3,3,3,7-c net with Schläfli symbol {0}{3.5.6}{3^2.5^2.6^3.7^3.8^3.9^2}{4.5.7}2 (Figure 4).
The asymmetric units of the coordination complexes are shown in Figure 3. A colourless needle-shaped single crystal of 1 crystallised in the centrosymmetric monoclinic space group P21/c. The asymmetric unit contains two molecules of the Zn(II) coordination complex and four tetrafluoroborate (BF4-) counter anions. The metal centre Zn(II) exhibited distorted octahedral geometry [N-Zn-N = 74.27(9)-99.10(10)°], wherein all six coordination sites were occupied by the N atoms (of both pyridine and pyrazole) of two molecules of the ligand L. SXRD analysis revealed that single crystals of 2 belong to the centrosymmetric monoclinic space group P21/c. The asymmetric unit comprised two crystallographically independent molecules of coordination complex 2. The coordination complex 2 consists of an Mn(II) ion, two chloride anions and one ligand L. In the unit cell, there were four such units of each crystallographically independent molecule of 2, symmetrically related by a two-fold screw axis (21), a glide plane and a centre of inversion.
Coordination complex 3 crystallises in the centrosymmetric monoclinic space group P2/n. The asymmetric unit comprises one half of the molecule of 3, i.e., one half of the Cd(II) metal ion, one half of the molecule of L and one chloride anion (both L and the chloride anions were coordinated to Cd(II)). The two-fold axis passes through the Cd(II) metal centre and the N(1) and C(1) atoms of the ligand L. Due to the presence of this two-fold axis, the remaining half of Cd(II), ligand L and chloride anion are generated by symmetry. In the crystal structure, the metal atom Cd(II) displays distorted trigonal bipyramidal geometry with angles ranging from 69.47(5)-104.26(5)°. The axial coordination sites of Cd(II) were occupied by the two pyrazole nitrogen atoms of L, whereas the equatorial sites are occupied by the nitrogen atom of the pyridine moiety of L and two chloride anions. As in 2, the ligand L showed an anti-syn-syn-syn-syn conformation with slight non-planarity in 3, which is revealed by the angle (6.33°) between the pyrazole rings.
Hirshfeld Surface Analyses
To investigate more about the supramolecular interactions in the crystal structures of B, L, 1, 2 and 3, Hirshfeld surfaces have been calculated for all the structures. From Hirshfeld surface [36] analysis, we can quantify various supramolecular interactions present in the crystal structure. We used CRYSTAL EXPLORER [37] to plot the Hirshfeld surfaces [38] and calculate their respective 2D fingerprint plots [39].
The 3D maps of the Hirshfeld surface (HS) help to identify the main interactions between molecules, and the 2D fingerprint plots (FP) help to understand the distances among the atoms involved in those interactions. More precisely, the 3D HS and 2D FP give insight into the qualitative and quantitative analysis, respectively, of the supramolecular interactions present in the molecule. The 3D HS plots of B, L, 1, 2 and 3 are presented in Figure 7, exhibiting the surface map over the normalised contact distance (d_norm), which is determined from d_e (the distance between the Hirshfeld surface and the nearest nucleus outside the surface), d_i (the distance between the Hirshfeld surface and the nearest nucleus inside the surface) and the van der Waals radii of the atoms (r_i^vdW and r_e^vdW) from Equation (1): d_norm = (d_i - r_i^vdW)/r_i^vdW + (d_e - r_e^vdW)/r_e^vdW (1). The corresponding shape index and curvedness of B, L, 1, 2 and 3 are shown in Figures S1-S4 (ESI). In the d_norm map, red spots indicate the closeness of atoms to the HS from outside, meaning strong hydrogen bonding exists between the HS and the nearest atoms outside. While white areas on the 3D HS designate contacts with distances equal to the sum of the van der Waals radii, blue regions indicate distances longer than the van der Waals radii, as shown in Figure 7a,b. The HS of B was generated using a standard (high) surface resolution, with 3D d_norm surfaces mapped over the range -0.6557 to 1.3709 a.u.
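Equation (1) for the normalised contact distance can be evaluated directly for a single contact. The sketch below uses commonly tabulated Bondi van der Waals radii purely as an illustration; the d_i/d_e values are hypothetical, and in practice CrystalExplorer performs this mapping over every point of the surface.

```python
# Normalised contact distance d_norm, Equation (1):
#   d_norm = (d_i - r_i_vdW)/r_i_vdW + (d_e - r_e_vdW)/r_e_vdW
# Negative values (red spots on the surface) flag contacts shorter than the vdW sum.
VDW_RADIUS = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "F": 1.47, "Cl": 1.75}  # Å, Bondi

def d_norm(d_i, d_e, elem_inside, elem_outside):
    """Evaluate Equation (1) for one surface point, given the two nearest nuclei."""
    r_i = VDW_RADIUS[elem_inside]
    r_e = VDW_RADIUS[elem_outside]
    return (d_i - r_i) / r_i + (d_e - r_e) / r_e

if __name__ == "__main__":
    # Hypothetical C-H...O contact: H nucleus inside the surface, O outside.
    val = d_norm(1.00, 1.10, "H", "O")
    print(f"d_norm = {val:.3f}  ({'red (close contact)' if val < 0 else 'blue/white'})")
```

Scaling each distance by the corresponding van der Waals radius is what makes contacts between atoms of different sizes directly comparable on one colour scale.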
From the d_norm mapping, it is revealed that strong hydrogen bonding interactions such as N-H···N (between the pyrazole moieties) and N-H···O (between pyrazole and solvated ethanol) are present in the crystal lattice of B, as observed from the bright red spots on the HS. On the other hand, the 3D d_norm surface mapping of L (ranging between -0.6823 and 1.4926 a.u.) showed bright red spots near the C=O of the ester, the pyrazole and the -CH2- spacer of neighbouring molecules of L, confirming C-H···O and C-H···N hydrogen bonding interactions.
The contributions of the interatomic contacts (C···H, N···H and O···H) present in B and L are revealed from the 2D FP (Table 2). The C···H interatomic contacts present in B and L are due to C-H···π interactions involving the C-H of pyrazole and the pyrazole ring, and the C-H of the spacer and the pyridine ring, respectively. Weak π···π stacking (C···C = 0.9%), lone pair···π (C···O = 0.9%) and stacking of the aromatic rings (C···N = 1.8%) were also present in the crystal structure of L (Table 2). The C···H interatomic contacts were also present in the crystal structures of 1, 2 and 3, due to C-H···π interactions (C-H of the ester and the pyrazole/pyridine ring in 1, C-H of the ester and the pyrazole ring in 2, and C-H of the methyl group and the pyrazole/pyridine ring in 3). Supramolecular interactions such as π···π stacking (C···C = 2.7% in 2 and 3.1% in 3) and lone pair···π (C···O = 2.0% in 1) were also present in the crystal structures of the coordination complexes (Table 2). Weak van der Waals interactions were also found in 2 and 3 (Cl···O = 0.5% in 2 and 0.6% in 3). Moreover, the H···H contacts in B, L, 1, 2 and 3 comprise the major contributors to the contact list of the 2D FP, at 49.3%, 57.3%, 51.9%, 46.6% and 43.9%, respectively, within the HS. This is due to the high share of hydrogen atoms present in their crystal structures. Interestingly, sharp spikes were found in the 2D FPs of 1, 2 and 3.
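The percentage contributions in Table 2 are, in essence, the fraction of fingerprint-plot points assigned to each contact type. A toy version of that bookkeeping, with made-up point counts (the real counts come from the surface sampling in CrystalExplorer), looks like this:

```python
def contact_percentages(point_counts):
    """Convert raw per-contact-type surface-point counts into Table-2-style percentages."""
    total = sum(point_counts.values())
    return {pair: round(100.0 * n / total, 1) for pair, n in point_counts.items()}

if __name__ == "__main__":
    # Hypothetical counts, loosely shaped like the H...H-dominated profiles reported above.
    counts = {"H...H": 519, "C...H": 180, "O...H": 138, "F...H": 111, "other": 52}
    for pair, pct in contact_percentages(counts).items():
        print(f"{pair:6s} {pct:5.1f}%")
```

Since every surface point is assigned to exactly one contact type, the percentages always sum to 100, which is a useful consistency check when reading fingerprint tables.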
Influence of Counter-Anion and Metal Ion on the Conformation of the Ligand L and the Supramolecular Structures of the Coordination Complexes
The coordination complexes discussed herein showed supramolecular structural diversity in their crystal structures. The fundamental reason behind such diversity is the influence of the various metal ions and counter anions during the crystallisation process, which induced the conformational changes of the ligand L in the coordination complexes [40][41][42]. As shown in Scheme S1, there are several possible conformations of L which can contribute to the coordination with metal ions. Indeed, due to the small energy barrier between the various conformations of the flexible ligand L, it can display the particular conformation required for the coordination-driven self-assembly of a metal ion. However, predicting such a specific conformation is generally challenging, because of various hurdles such as the diversity of possible orientations of the ligands in the crystals, the limited precision in estimating the energies of the ligand for its coordination with a metal ion, and the difficulty of predicting the thermodynamic and kinetic contributions to the crystal growth. Hence, it is very important to recognise the supramolecular synthon present in the crystal structure, which is the sub-structural motif of the crystal.
The ligand L showed an anti-anti-syn-anti-anti conformation in the crystal structure. Once it coordinates with Zn(II) to form coordination complex 1, the ligand L displays two distinct conformations (two molecules of L are present in 1), syn-syn-syn-syn-syn and syn-syn-syn-syn-anti. The coordination geometry of Zn(II) and the BF4- anions present in the crystal lattice induce these conformations of L in 1. From the overlay structure of L with 1 (Figure 9a,b), we can easily see that the pyrazole rings of L rotate by around 180°. Additionally, the self-assembly of L having these conformations with the distorted octahedral Zn(II), via C-H···O hydrogen bonding, resulted in a hexagonal chair-shaped 0D hydrogen bonding synthon, the main sub-structural motif of 1, which further extended into a 2D corrugated sheet structure through weak C=O···π interactions. In fact, the BF4- anions present in the crystal lattice of 1 further assisted the self-assembly process via C-H···F hydrogen bonding, leading to the formation of a 3D hydrogen bonded network. On the other hand, L showed an anti-syn-syn-syn-syn conformation in both 2 and 3, where the counter anion is common, viz. chloride. In both coordination complexes 2 and 3, chloride anions are coordinated to the metal ions. The difference between them is the metal ion present in the coordination complex, Mn(II) in 2 and Cd(II) in 3. Another clear difference is the presence of two crystallographically independent molecules of the coordination complex in 2, whereas in the case of 3, only one molecule is present in the asymmetric unit. The difference in the ionic radii of Mn(II) (0.75 Å) and Cd(II) (0.87 Å) is one of the crucial factors for such variances.
As a result, the primary supramolecular synthons of 2 and 3 were also different; while the crystallographically independent molecules of 2 showed a hexagonal-shaped supramolecular synthon through C-H···O, which further extended to a 2D hydrogen bonded sheet structure, the C-H···Cl interaction assisted the formation of a 1D zigzag hydrogen bonded chain (such chains are further packed on the top and bottom of the sheets). The C-H···O hydrogen bonding in 3 gives a 1D hydrogen bonded chain as the primary supramolecular structure, which further extended to a 2D hydrogen bonded sheet structure with the support of C-H···Cl interactions. From the overlay of the structures of 2 and 3 over the ligand L, a difference in the conformations is observed (Figure 9c,d). Although the conformation of L is identical in 2 and 3, the supramolecular packing is different due to the packing of the molecules induced by the anion and metal ion as a result of the symmetry difference.
We have investigated the thermal stability of L and its coordination complexes by thermogravimetric analysis (TGA) over the 25-900 °C range under a nitrogen atmosphere at a heating rate of 10 °C/min. As expected, the coordination complexes showed much higher thermal stability than the ligand L, with decomposition temperatures of 230 °C, 310 °C and 270 °C for 1, 2 and 3, respectively. Thus, their thermal stability can be ordered as follows: L < 1 < 3 < 2. While 1 showed a continuous one-step thermal decomposition, 2 and 3 exhibited a three-step thermal degradation, with sharp profiles at steps one and two. The higher thermal stability of 2 is due to the presence of a higher quantity of C-H···Cl (18.2% in 2 compared to 17.8% in 3) and other weak interactions (46.6% in 2 compared to 43.9% in 3), as revealed from the 2D FPs and 3D HS. The lower stability of 1, compared to 2 and 3, is also revealed from the 2D FP and 3D HS data; although 21.1% of strong C-H···F interactions are present in 1, the quantity of C-H···O (13.8% in 1, 14.5% in 2, 15.8% in 3) and N-H···O (1.8% in 1, 4.9% in 2 and 5.8% in 3) in 1 is lower, in contrast to 2 and 3. Moreover, the contributions from π···π stacking and anion···π interactions, which were present in 2 and 3 (Table 2), were absent in 1 (Figure 10).
Figure 10. The thermo-gravimetric analysis (TGA) comparison plot of L, 1, 2 and 3.
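As a trivial check of the TGA programme quoted above (25-900 °C at 10 °C/min), the ramp duration follows directly from the heating rate; the mass-loss helper is a generic illustration with hypothetical sample masses, not the instrument's software.

```python
def ramp_minutes(t_start_c=25.0, t_end_c=900.0, rate_c_per_min=10.0):
    """Time needed for a linear TGA temperature ramp."""
    return (t_end_c - t_start_c) / rate_c_per_min

def mass_loss_percent(m_initial_mg, m_final_mg):
    """Percentage mass lost between two points of a TGA trace."""
    return 100.0 * (m_initial_mg - m_final_mg) / m_initial_mg

if __name__ == "__main__":
    print(f"Ramp time: {ramp_minutes():.1f} min")
    # Hypothetical 3.5 mg sample (within the 3-4 mg loading quoted) losing 1.4 mg.
    print(f"Step mass loss: {mass_loss_percent(3.5, 2.1):.1f} %")
```

So a single run of the quoted programme takes 87.5 min of ramping, which is useful when comparing decomposition-onset temperatures read off the time axis versus the temperature axis.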
Conclusions
A new flexible bis-pyrazol-bis-acetate ligand L, and its Zn(II), Mn(II) and Cd(II) coordination complexes, have been synthesised and structurally characterised by single-crystal X-ray diffraction. The ligand L showed diverse conformations once reacted with transition metals to produce the coordination complexes 1, 2 and 3. In addition, the intermediate compound B, which was found to be an achiral molecule, showed supramolecular chirality obtained from 2-fold rotational axes. While L showed a pair of 2D hydrogen bonded sheet structures having a 5-c uninodal net topology, 1 exhibited a 2D corrugated sheet-like architecture having a honeycomb topology, which further extended into a 3D hydrogen bonded network structure. Interestingly, two distinct topologies were observed in 2, due to the presence of crystallographically independent molecules of 2 in the unit cell, namely a 2D hydrogen bonded sheet structure along the 'bc' plane and 1D zigzag hydrogen bonded chains, which are packed on the top and bottom of the 2D sheet. Finally, 3 showed a 2D hydrogen bonded sheet structure. Thus, the influence of the counter anions in shaping the coordination modes of the metal ions and the conformation of the ligand, resulting in various supramolecular synthons which control the self-assembly of the coordination complexes, was demonstrated. Remarkably, 1, 2 and 3 showed unusual thermal stability, as revealed by thermogravimetric analyses, which can be justified by the presence of strong supramolecular interactions, as revealed by the crystal structure and Hirshfeld surface analyses. This unique thermal stability could provide stable hybrid materials upon grafting L onto silica for metallic decontamination purposes, particularly towards Zn(II), Mn(II) and Cd(II), which are recognised as toxic metal ions. This technology is currently under investigation in our laboratory and has already been applied to real water samples (e.g., from natural rivers) [43][44][45][46][47][48][49][50][51][52].
| 2021-01-07T09:08:14.900Z | 2020-12-30T00:00:00.000 | {
"year": 2020,
"sha1": "3bf2fdef4d8da2237d27d2f626096fa3d1182fbe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/1/288/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "364bd598673400b22e099c2f1ef6c5ebd07e7e37",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
16910162 | pes2o/s2orc | v3-fos-license | Identification of Human NK17/NK1 Cells
Background Natural killer (NK) cells have both cytolytic and immunoregulatory functions. We recently described that these cells release the inflammatory cytokines IL-17 and IFN-γ. However, the precise identity of the NK cell subset(s) that secrete these cytokines is not known. Methodology/Principal Findings To isolate the cells secreting IL-17 and IFN-γ, we took advantage of the findings that Th17/Th1 cells express chemokine receptors. Therefore, CD56+NK cells were stained with antibodies against various chemokine receptors and intracellularly with antibodies toward IL-17 and IFN-γ. Consequently, we identified previously unrecognized subset of NK cells generated from normal human peripheral blood after activation with IL-2 but not PMA plus ionomycin. The cells are characterized by the expression of CD56+ and CCR4+, produce IL-17 and IFN-γ and are consequently named NK17/NK1 cells. They also express CD161, NKp30, NKp44, NKp46, NKG2D, CD158, CCL22, IL-2Rβ and the common γ chain but not CD127 or IL-23R. Further, they possess T-bet and RORγt transcription factors. Antibodies to IL-1β, IL-6, IL-21, or TGF-β1 do not inhibit IL-2-induced generation of NK17/NK1 cells, suggesting that IL-2 has the capacity to polarize these cells. Notably, NK17/NK1 cells are abundant in the cerebrospinal fluid (CSF) of patients with multiple sclerosis (MS) without activation, and are generated from the peripheral blood of these patients after activation with IL-2. Conclusions/Significance NK17/NK1 cells identified here have not been previously described in healthy or MS patients.
Introduction
Natural killer (NK) cells represent the first line of defence against infections and tumor metastases [1]. These cells possess immunoregulatory activities by secreting multiple cytokines and chemokines, and interact with dendritic cells to shape the innate and adaptive immune responses. Traditionally, human NK cells are classified into two major subsets: regulatory cells expressing CD56 but not CD16, known as CD56+/high CD16−, and cytolytic cells expressing CD16 and low or no CD56, known as CD56−/low CD16+ [reviewed in 2]. In addition, NK cells have been classified into NK1 and NK2 subsets based on cytokine release [3], and divided into different subsets based on their expression of chemokine receptors [4].
A unique subset of NK cells lining human Peyer's patches or tonsils that express NKp44 and CCR6 has also been described. These cells have no cytotoxic granules, do not secrete IFN-γ or IL-17, but secrete IL-22, and were consequently designated ''NK22'' cells [5]. Similar cells were reported by Cupedo et al., who demonstrated that cells with a lymphoid tissue inducer (LTi) phenotype, i.e. CD127+, lymphotoxin+ and positive for the nuclear factor retinoic acid-related orphan receptor (RORC+), can differentiate into cells secreting IL-22 and expressing the CD56+CD127+RORC+ phenotype [6]. Also, NKp46+NKG2D+NK1.1int RORγt-high NK cells in the intestinal lamina propria were found to secrete the Th17 cytokine IL-22 [7]. In tonsil tissues, NK cells in stage III development expressing a CD34+CD117+2B4+ phenotype, and secreting IL-22 and IL-26 but not IL-17, have also been described [8]. Collectively, these observations identified NK cells found at mucosal tissues that secrete IL-22 and express, among many markers, RORγt and CCR6. These findings also suggest that NK cells may be involved in autoimmune diseases by releasing inflammatory cytokines such as IL-17 and IL-22.
The role of NK cells in autoimmune diseases has not been delineated with precision. It has been suggested that these cells play important roles in these diseases and that they could be targets for therapy [9]. However, the role of NK cells in multiple sclerosis (MS) is controversial: one school of thought holds that NK cells ameliorate the disease, whereas the other suggests that they exacerbate it [reviewed in 10]. It was reported that IL-2-activated NK cells release IL-17 and IFN-γ [11,12], but the identity of the cells that secrete these cytokines and their relation to the recently described NK cells in the gut mucosa or tonsils are not known. In fact, very little is known about the different subsets of NK cells and their functions. The purpose of this report is to isolate and characterize NK cells that secrete IL-17 and IFN-γ from normal individuals and from patients with MS.
NK cell isolation
Buffy coats from healthy volunteers were obtained from the blood bank (Ullevål Hospital, Oslo, Norway). NK cell isolation was performed using the RosetteSep human NK cell enrichment cocktail (Stem Cell Technologies SARL, Grenoble, France). Approximately 50 mL of buffy coat was diluted 1:1 with RPMI medium and incubated with 25 mL of the cocktail provided with the kit for 20 min at room temperature. Afterwards, the mixture was centrifuged at 1800 rpm for 25 min over Histopaque (Sigma-Aldrich, Oslo, Norway), and the NK cell layer was collected. The cells were further sorted into CD56+ and CD56− cells by magnetic separation, using the EasySep human CD56 positive selection kit (Stem Cell Technologies SARL). After separation, both CD56+ and CD56− cells were collected. To activate the cells, NK cells isolated by the RosetteSep human NK cell enrichment cocktail (not yet separated with the CD56 cocktail) were incubated at 1 × 10⁶/mL with 200 U/mL IL-2. IL-2 (200 U/mL) was added to the cultures after 2, 4 and 6 days. The cells were collected after 7 days and then separated into CD56+ and CD56−. Activation with phorbol 12-myristate 13-acetate (PMA) and ionomycin (both from Sigma-Aldrich, Oslo, Norway) was done by incubating isolated CD56+ cells with 100 ng/mL each of PMA and ionomycin for 24 h. Cells were washed and stained on the surface with anti-CCR4, and intracellularly with anti-IL-17 and anti-IFN-γ. CCR4-gated cells were then examined for the expression of IL-17 and IFN-γ.
To isolate CCR4+ or CCR4− cells, sorted CD56+ NK cells (1 × 10⁶/mL) were mixed with 1 μg/mL of anti-CCR4 in 12 mL tubes, and the mixtures were incubated for 45 min at 4°C. The cells were washed twice with PBS plus 1% BSA, and incubated with goat anti-mouse Dynal magnetic beads (Invitrogen, Oslo, Norway) for 60 min at 4°C. Cells that attached to the beads and those that did not attach were isolated and tested for purity by flow cytometry.
To stain for a chemokine receptor and one intracellular cytokine, cells were incubated with 10 μg/mL Brefeldin A for 4 hours. They were labeled at 3 × 10⁵ cells/200 μL/well with 0.06 μg/well FITC-conjugated anti-CCR4, 0.06 μg/well FITC-conjugated anti-CCR6, 0.06 μg/well FITC-conjugated anti-CCR7, 0.25 μg/well FITC-conjugated anti-CCR9, 0.12 μg/well FITC-conjugated anti-CXCR1, 0.12 μg/well FITC-conjugated anti-CXCR3, 0.12 μg/well FITC-conjugated anti-CXCR4, or isotype control antibodies for 45 min at 4°C in the dark. After incubation, the cells were fixed with 4% paraformaldehyde for 15 min at 4°C and then washed twice with SAP buffer before staining with intracellular markers as follows: 3 × 10⁵ cells/well were incubated with 0.06 μg/well PE-conjugated anti-IL-17, 0.06 μg/well PE-conjugated anti-IFN-γ, 0.06 μg/well PE-conjugated anti-CCL3, 0.06 μg/well PE-conjugated anti-CCL4 or isotype control antibodies, in the dark at 4°C for 45 min. Cells were washed with flow cytometric medium, resuspended in the same medium and transferred from plates into 5 mL tubes for flow cytometric analysis. Compensation was done according to the isotype controls. Analysis was done with FlowJo (flow cytometry analysis software, Ashland, OR, USA).
For three-color analysis, 1 × 10⁶ cells/well were labeled with 0.125 μg/well FITC-conjugated anti-CCR4, 0.125 μg/well FITC-conjugated anti-CCR6, 0.125 μg/well FITC-conjugated anti-CCR7, 0.3 μg/well FITC-conjugated anti-CCR9, 0.2 μg/well FITC-conjugated anti-CXCR4 or control FITC-conjugated IgG antibodies at 4°C for 45 min in the dark. These cells were fixed with 4% paraformaldehyde for 15 min at 4°C and then washed twice with SAP buffer before staining them with intracellular markers as follows: 0.04 μg/well APC-conjugated anti-IL-17, 0.1 μg/well PE-conjugated anti-IFN-γ or isotype control antibodies were added in the dark at room temperature for 45 min. The cells were washed with flow cytometric buffer and resuspended in the same buffer. FITC-conjugated cells (more than 99% pure) were gated and examined for the production of IL-17 and IFN-γ.
Treatment with the antibodies
Enriched NK cells were incubated with IL-2 as described above, in the absence or presence of the following neutralizing antibodies: 1 μg/mL anti-IL-1β, anti-IL-6, anti-IL-21, anti-TGF-β1 or isotype control antibodies for 6-7 days. The cells were collected, washed and CD56+ cells isolated. They were labeled extracellularly with anti-CCR4 and intracellularly with anti-IL-17 and anti-IFN-γ, and then examined by flow cytometry.
Multiple Sclerosis patients
The local ethical committee at Ullevål Hospital and Oslo University Hospital approved the study, and patients were informed and signed consent forms according to the approved protocol. All patients fulfilled the McDonald diagnostic criteria. Five patients with a relapsing-remitting (RR) MS diagnosis donated blood samples in a clinically stable phase of the disease and before receiving any treatment. Three other patients donated CSF (Table 1). Peripheral blood cells from these patients were incubated with IL-2 in the same way as normal blood. Cells from the CSF were sorted into non-activated CD56+ and CD56− subsets, and were labeled with surface anti-CCR4 and intracellularly with anti-IL-17 and anti-IFN-γ, as described above. Cells isolated from the CSF of a third MS patient, with secondary progressive MS (Table 1), were labeled with FITC-conjugated anti-CCR4, fixed, permeabilized and stained with PE-conjugated anti-IFN-γ and APC-conjugated anti-IL-17. They were examined by flow cytometry as described above.
Detection of IL-17 and IFN-γ levels by ELISA assay
Concentrations of IL-17 and IFN-γ were determined with human Quantikine ELISA kits (R&D Systems Europe Ltd) as described in the manufacturer's user manual. Supernatants from IL-2-activated CD56+CCR4+ NK cells (5 × 10⁵ or 1 × 10⁶ cells/mL) were collected, and the levels of IFN-γ and IL-17 were determined at 450 nm with a PowerWave XS plate reader (BioTek Instruments, VT, USA).
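Reading a cytokine concentration off a plate-reader OD450 value requires a standard curve from the kit. A minimal sketch of that step, assuming linear interpolation between bracketing standards (all numbers below are hypothetical; kit manuals typically recommend fitting a four-parameter logistic curve instead):

```python
def interpolate_concentration(od, standards):
    """Estimate analyte concentration from an OD450 reading by linear
    interpolation between the two bracketing standard-curve points.
    `standards` is a list of (od, concentration) pairs sorted by OD."""
    for (od_lo, c_lo), (od_hi, c_hi) in zip(standards, standards[1:]):
        if od_lo <= od <= od_hi:
            frac = (od - od_lo) / (od_hi - od_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("OD outside the standard curve; dilute sample and re-run")

# Hypothetical IL-17 standard curve: (OD450, pg/mL)
curve = [(0.05, 0.0), (0.20, 31.25), (0.45, 125.0), (0.90, 500.0), (1.60, 2000.0)]
conc = interpolate_concentration(0.675, curve)  # midway between 0.45 and 0.90 → 312.5 pg/mL
```

Samples whose OD falls above the top standard cannot be interpolated and would be diluted and re-assayed, which is why the sketch raises an error in that case.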
Statistical analysis
Significant values were generated by Student's t-test using GraphPad Prism 3 software (GraphPad Software, Inc., La Jolla, CA, USA). A P value < 0.05 was considered statistically significant.
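The comparison described above can be illustrated with a pooled-variance two-sample t statistic. The sketch below uses only the standard library and entirely hypothetical group values; the critical value is hard-coded for the chosen sample sizes (real analyses, as in the paper, use a statistics package that computes exact P values):

```python
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic for
    comparing the means of two independent groups."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical % NK17/NK1 cells in two groups (illustrative numbers only).
il2_treated = [24.8, 26.1, 23.9, 25.4, 25.0]
control = [1.6, 2.1, 1.8, 1.4, 1.9]

t = two_sample_t(il2_treated, control)
# With n1 = n2 = 5, df = 8; the two-sided 5% critical value is t(0.975, 8) ≈ 2.306,
# so |t| > 2.306 corresponds to P < 0.05.
significant = abs(t) > 2.306
```

With the illustrative numbers above the group means are far apart relative to their spread, so the test comes out significant.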
IL-2-activated CD56+CCR4+ NK cells produce and secrete IL-17 and IFN-γ
To investigate the presence of NK cells secreting IL-17 and IFN-γ in human peripheral blood, we used an approach based on the finding that the chemokine receptor CCR6 is expressed on cells secreting IL-17 or IL-17 plus IFN-γ [13,14]. First, we isolated non-activated NK cells from normal human blood and sorted them into CD56+ and CD56− cells using antibody-coated beads. The surface of highly purified CD56+ and CD56− NK cells was labeled with FITC-conjugated anti-CCR6. Because NK cells also express CCR4 [15], and because this molecule is present on Th17 cells in addition to CCR6 [13,14], we stained non-activated CD56− and CD56+ NK cells with anti-CCR4. NK cells express most other chemokine receptors involved in their chemotaxis, migration and cytotoxicity. For example, they express CXCR4, important for their chemotaxis and retention in the bone marrow [16,17]; CCR6 is reported to increase their migration [18,19]; CCR7 is important for their migration and lodging into the lymph nodes [20,21]; and CCR9, involved in the migration of cells into the small intestine, is expressed on a subset of NK cells [4]. Consequently, we labeled these cells with antibodies to the CCR6, CCR7, CCR9 and CXCR4 receptors. All these subsets of NK cells were stained intracellularly with PE-conjugated antibodies to IL-17 and IFN-γ. The results demonstrate that less than 2% of non-activated CD56+ NK cells, or CD56− NK cells, labeled with antibodies toward CCR4, CCR6, CCR7, CCR9 or CXCR4 produced IL-17 or IFN-γ (data not shown).
Since NK cells have been shown to secrete these cytokines upon IL-2 activation [11,12], we activated purified NK cells in vitro with IL-2 for 7 days and then sorted them into CD56+ and CD56− subsets. First, we ascertained that the CD56+ NK cells were pure, since they were stained with anti-CD56 and not with anti-CD3 (T cells), anti-CD14 (monocytes), or anti-CD19 (B cells) (Figure 1). Cells of both CD56+ and CD56− subsets were labeled on the surface with anti-CCR4, anti-CCR6, anti-CCR7, anti-CCR9, or anti-CXCR4 and intracellularly with antibodies toward IL-17 and IFN-γ. There was an upregulation of IL-17 and IFN-γ in CCR4+CD56+ but not CD56− cells (data not shown). Based on these preliminary findings, we examined IL-2-activated CD56+ cells labeled with antibody towards CCR4, as well as with anti-CCR6, anti-CCR7, anti-CCR9 or anti-CXCR4. As shown in Figure 2, about 25%, 14%, 5%, 4% and 11% of IL-2-activated CD56+ NK cells were stained with FITC-conjugated anti-CCR4, anti-CCR6, anti-CCR7, anti-CCR9 and anti-CXCR4, respectively, whereas isotype control antibodies did not label these cells. FITC-labeled cells were gated (more than 99% pure) and stained intracellularly with PE-conjugated anti-IFN-γ and APC-conjugated anti-IL-17 or with isotype control PE-conjugated and APC-conjugated antibodies. The results demonstrate that only CCR4+, and not CCR6+, CCR7+, CCR9+ or CXCR4+, NK cells were labeled intracellularly with antibodies to both cytokines (Figure 2). These results indicate that cells contained within the CD56+CCR4+ NK cell subset are primary targets for polarization into cells producing IL-17 and IFN-γ, designated here NK17/NK1 by analogy with T cell terminology [14]. In addition to flow cytometric analysis, we measured the levels of these cytokines in the supernatants of NK17/NK1 cells. Results in Figure 3A show that high levels of both cytokines are released from 5 × 10⁵/mL or 1 × 10⁶/mL CD56+CCR4+ cells.
Incubating purified NK cells with IL-2 in the absence or presence of neutralizing antibodies to IL-1β, IL-6, IL-21, or TGF-β1 did not affect the percentages of CD56+CCR4+ cells secreting both IL-17 and IFN-γ (Figure 3B), implying that among these targeted cytokines only IL-2 has the capacity to polarize these cells.
To demonstrate that NK17/NK1 cells express IL-2R, sorted CD56+ cells were labeled with FITC-conjugated anti-CCR4 and PE-conjugated anti-IL-2Rβ or intracellularly with PE-conjugated anti-IL-2Rγ (common γ chain). Data shown in Figure 3C indicate that CD56+CCR4+ cells expressed both IL-2Rβ and IL-2Rγ. Further analysis showed that incubating CD56+ NK cells with stimuli other than IL-2, such as PMA plus ionomycin overnight, did not generate cells that produce IL-17 plus IFN-γ, despite the presence of 18% CCR4+ cells within the CD56+ cell population (Figure 3D).
NK17/NK1 cells do not express IL-23R but express RORγt, T-bet and NK cell maturation markers
To gain insight into the expression of NK cell markers among different NK cell subsets defined by their chemokine receptors, we double-sorted IL-2-activated NK cells, first into CD56+ cells and then into CCR4+ or CCR6+ cells, using antibody-conjugated beads. These subsets were labeled with antibodies to various surface receptors. The results show that both NK cell subsets expressed the mature NK cell molecules NKp30, NKp44, NKp46, NKG2D, CD158 and CD161, but lacked expression of the immature cell marker CD127. The most obvious difference between the two subsets is the expression of IL-23R on the surface of CCR6+ NK cells but not on NK17/NK1 (CD56+CCR4+) cells (P < 0.03, Figure 4A). Notably, NK17/NK1 cells expressed the ligand for CCR4, i.e. CCL22/MDC, suggesting that this chemokine may play a role in the maintenance and/or survival of these cells.
To gain further insight into the molecular pathways involved in the production of IL-17 and IFN-γ, we examined the expression of the transcription factors RORγt and T-bet, important for the secretion of IL-17 and IFN-γ, respectively [22,23]. Hence, sorted CD56+CCR4+ NK cells were labeled with anti-T-bet or anti-RORγt. Interestingly, more than 94% of CD56+CCR4+ NK cells stained with antibodies to RORγt or T-bet (Figure 4B), suggesting that these transcription factors are important for the production of IL-17 and IFN-γ by these cells. On the other hand, CD56+CCR4− cells did not express RORγt, but about 40% of them expressed T-bet (Figure 4B).
NK17/NK1 cells are increased in the cerebrospinal fluid (CSF) of multiple sclerosis (MS) patients
Both Th1 cells secreting IFN-γ and Th17 cells secreting IL-17 contribute to the pathogenesis of MS and EAE. Consequently, we examined the presence of NK17/NK1 cells in the blood of patients with MS. The results from five different patients show that NK17/NK1 cells were not found spontaneously in the blood, but were generated from the peripheral blood of four of the patients examined upon activation with IL-2, although at lower frequencies than in normal blood (Figure 5A). Of note, blood samples were collected from MS patients aged 25-53 years, and did not differ from samples collected from healthy donors. We also included CSF from MS patients, since this might give us an idea about the role these cells play in a diseased organ. After isolation of non-activated CD56+ cells from the CSF of two MS patients, CCR4+ cells secreting both IL-17 and IFN-γ were abundant (Figure 5B). The frequency of CD56+CCR4+ cells (NK17/NK1 cells) was about ten-fold higher than that of CD56−CCR4+ cells found in the CSF (Figure 5B vs. 5C), and more than twenty-fold the number of non-activated CD56+CCR4+ cells found in the peripheral blood of MS patients (Figure 5B vs. 5A). Also, we managed to obtain enough cells from the CSF of a third MS patient, which were labeled with FITC-conjugated anti-CCR4 and intracellularly with PE-conjugated anti-IFN-γ and APC-conjugated anti-IL-17. The results in Figure 5D demonstrate that about 25% of CCR4+ NK cells produced both IL-17 and IFN-γ. It is highly plausible that NK17/NK1 cells migrate from the periphery into the CSF aided by the CCL22/CCR4 axis, and are polarized in the brain by an inflamed local microenvironment that may contribute to their generation. CSF of normal individuals was not collected due to ethical considerations. However, NK cells are either not found or found in very low numbers in the brain of normal mice [24].
Discussion
We describe here a novel subset of human NK cells phenotypically characterized as CD56+CCR4+RORγt+T-bet+IL-23R−, expressing mature NK cell markers and the ligand for CCR4, and producing IL-17 and IFN-γ. Earlier findings show that NK cells can be classified into different subsets based on their expression of chemokine receptors [4], but the functions of these subsets are not known. Our results are the first to show that a subset of human NK cells expressing a specific chemokine receptor performs a distinct function related to the production of inflammatory cytokines.
Because NK17/NK1 cells differ from both Th17 and NK22 cells, in that they do not express CCR6 or IL-23R, they may represent a distinct subset of NK cells. NK cells should have multiple specialized lineages or subsets, since they perform multiple tasks [25]. Further, NK cells have memory similar to adaptive T cells, indicating that various subsets or lineages of these cells may be recalled in response to various pathogens or cytokines. Under pathological conditions where IL-2 is released, we anticipate that NK17/NK1 cells predominate, which may affect the microenvironment through the release of IL-17 and IFN-γ. Intriguing, however, is the lack of any subset of NK cells examined in this study that secretes only IFN-γ. This includes CD56+ or CD56− cells that also express CCR6, CCR7, CCR9, or CXCR4. The only cells that secrete IFN-γ also secrete IL-17 (i.e. NK17/NK1 cells). Either the cells of this subset are the only producers of this cytokine, or IFN-γ is released by other NK cell subsets not examined in this study. The observation that CD56+ cells devoid of CCR4 express the T-bet transcription factor suggests that cells lacking CCR4 as well as CCR6, CCR7, CCR9, or CXCR4 might also secrete IFN-γ.
The finding that NK17/NK1 cells do not express IL-23R, whereas CCR6+ cells express it but do not produce IL-17 or IFN-γ, suggests that in the periphery the role of IL-23 may be replaced by available cytokines such as IL-2. Hence, IL-23-induced release of IL-17 may be a property of NK cells found at mucosal sites as a response to microbial infections [5-7], whereas the rules of regulation are different in the periphery and at inflamed sites where more mature NK cells predominate. Plausibly, NK cells exposed in the periphery to IL-2 generate NK17/NK1 cells that express CCR4, whereas those exposed to IL-23 in the mucosa generate NK22 cells that express CCR6. Further, NK17/NK1 cells present in the periphery express the RORγt and T-bet transcription factors, which facilitate their secretion of IL-17 and IFN-γ.
The role of NK cells in MS/EAE is controversial, and it is not yet clear whether NK cells ameliorate or exacerbate the disease [10,26]. We recently reported that administration of glatiramer acetate (GA), a drug used to treat MS patients, reduced the EAE clinical score in SJL mice, corroborated by the isolation of NK cells with high killing potential against immature or mature dendritic cells [24]. In addition, administration of anti-Tac (daclizumab) antibody to MS patients ameliorated the disease, associated with induced expansion and activation of CD56+ NK cells [27]. These findings demonstrate that NK cells may contribute to the therapeutic efficacy of these drugs.
In summary, we have identified a new subset of human NK cells that has not been previously recognized. The cells of this subset are designated NK17/NK1 cells because they produce and secrete IL-17 and IFN-γ. They are generated from the peripheral blood of healthy individuals as well as MS patients upon activation with IL-2, and are abundant in the CSF of MS patients. The precise role that NK17/NK1 cells play in MS and other autoimmune diseases is currently under investigation. | 2014-10-01T00:00:00.000Z | 2011-10-21T00:00:00.000 | {
"year": 2011,
"sha1": "b70a99c0cf9a891cd3db919ce570e2fb185e3d4d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0026780&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b70a99c0cf9a891cd3db919ce570e2fb185e3d4d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
226294890 | pes2o/s2orc | v3-fos-license | First patient management of COVID-19 in Changsha, China: a case report
Background In December 2019, the novel coronavirus disease 2019 (COVID-19) emerged in Wuhan, Hubei Province, China. It rapidly spread and many cases were identified in multiple countries, posing a global health problem. Here, we report the first patient cured of COVID-19 infection in Changsha, China; the symptoms, diagnosis, treatment, and management of this patient are all described in this report. Case presentation A 57-year-old woman developed cough and fever after returning to Changsha from Wuhan on January 9, 2020. She tested positive for COVID-19 infection, a diagnosis which was supported by chest CT. The patient was treated with lopinavir and ritonavir tablets and interferon alfa-2b injection. A low dose of glucocorticoids was used for a short period to control the bilateral lung immune response, and this patient avoided the cytokine storm that might otherwise have occurred. The clinical condition of this patient improved, and a COVID-19 assay conducted on January 25, 2020 generated negative results. This patient recovered and was discharged on January 30, 2020. Conclusions Currently, there are numerous reports on COVID-19 infections focusing on the disease's epidemiological and clinical characteristics. This case describes the symptoms, diagnosis, treatment, and management of a patient cured of COVID-19 infection, which may serve as a reference for future cases, while further studies are needed.
The chief complaints of this patient were cough and fever, with general weakness and muscle aches that developed after returning to Changsha from Wuhan 7 days prior. Given her symptoms and recent travel history, she decided to see a health care provider. The patient had a history of hypertension, carotid plaque, hypothyroidism, and chronic gastritis, without a smoking or drinking habit. The physical examination revealed the following: body temperature: 38.0°C, blood pressure: 143/76 mmHg, pulse: 78 beats per minute, respiratory rate: 13 breaths per minute, and oxygen saturation: 96%. She had throat congestion and thick bilateral breath sounds in the lungs. To evaluate the abnormal breath sounds, we performed a chest CT examination, which revealed bilateral pneumonia (Fig. 1a). The results of nucleic acid amplification testing (NAAT) for influenza A and B were negative. Given the patient's travel history and CT findings, Hunan Province and Chinese Center for Disease Control and Prevention (CCDC) were immediately notified. CCDC staff required us to test the patient for COVID-19, even though the patient disclosed that she had never been to the Huanan seafood market and reported no known contact with ill persons in the past month. Specimens were collected following CCDC guidance. After specimen collection, she was admitted to the isolation ward of the First Hospital of Changsha City. On admission, the patient reported persistent dry cough, fatigue, headache, sore throat, and chest pain for a week. Upon physical examination, the patient was found to have throat congestion without other remarkable findings.
On hospital days 2-4 (illness days 8-10), the patient's vital signs remained largely stable. She reported that her cough and sore throat were worse than before, accompanied by chest pain and a small amount of sputum. Intermittent fever and sore throat were still reported (Fig. 2). Supportive treatment was given at this stage, and methylprednisolone sodium succinate (40 mg QD intravenously) was administered to inhibit lung inflammation. During this period, the patient developed melena in the morning, alerting us to the possibility of upper gastrointestinal bleeding. The patient was treated with pantoprazole for acid suppression. Ambroxol (30 mg BID intravenously) and limonene and pinene enteric soft capsules (0.3 g TID per os) were used to expel sputum. Laboratory results on hospital days 1-3 (illness days 7-9) reflected leukopenia, neutropenia, lymphopenia, and reduced hematocrit. Additionally, elevated levels of lactate dehydrogenase and C-reactive protein were observed (Table 1). Following the recommendations of The Diagnosis and Treatment of Pneumonitis with COVID-19 Infection (DTPI) published by the National Health Commission of the PRC, we closely monitored the patient's blood oxygen saturation and oxygenation index; respiratory support is given when blood oxygen saturation falls below 93%. This patient did not reach the point where tracheal intubation was required, so respiratory support was not given.
On hospital day 4 (illness day 10), re-examination of the lung CT showed that lung inflammation had progressed (Fig. 1b). The dose of methylprednisolone sodium succinate was changed to 40 mg Q12H intravenously, and intravenous human immunoglobulin (pH 4) 5 g BID was added to inhibit lung inflammation. Given the clinical presentation, treatment with piperacillin sodium and tazobactam sodium (4.5 g Q8H intravenously) and moxifloxacin hydrochloride and sodium chloride injection (0.4 g QD intravenously) was initiated.
On hospital day 5 (illness day 11), the CCDC confirmed that the oropharyngeal swabs of this patient tested positive for COVID-19 by real-time reverse transcriptase-polymerase chain reaction (rRT-PCR) assay. According to the suggestion of The Diagnosis and Treatment of Pneumonitis with COVID-19 Infection (DTPI) published by the National Health Commission of the PRC, lopinavir and ritonavir tablets (2 pills BID by mouth), which had been used to treat HIV infection in the past, as well as interferon alfa-2b injection (5 million IU added into 2 mL of sterile water, inhalation BID) were added to the patient's treatment regimen. On hospital day 8 (illness day 14), the temperature of this patient dropped to 36.4°C. Moreover, her appetite improved, and she was asymptomatic apart from fatigue and chest pain. A comparison of a new CT scan and the previous CT images showed that the bilateral patchy lesions in her lungs had been absorbed (Fig. 1c). Methylprednisolone sodium succinate was then discontinued. On hospital day 9 (illness day 15), the blood pressure of this patient dropped to 85/55 mmHg. Therefore, Shenmai injection 50 mg QD intravenously was used, and the patient's blood pressure rose to 113/70 mmHg. On hospital day 10 (illness day 16), a negative result was obtained for the COVID-19 assay. This patient reported that her cough and fever had abated and her clinical condition improved. Lopinavir and ritonavir were then discontinued on hospital day 10 (illness day 16). Interferon alfa-2b injection and antibiotics were discontinued on hospital day 11 (illness day 17). On hospital day 14 (illness day 20), this patient again tested negative for COVID-19 by rRT-PCR assay, and was discharged on January 30, 2020 (hospital day 15, illness day 21).
Diagnostic process
In accordance with the DTPI guidelines, this case was diagnosed based on epidemiological history and clinical manifestations:
1. Epidemiological history (must comply with any one of the following): (1) travel or residential history in Wuhan, China within 14 days before the onset of illness; (2) exposure to patients with fever or respiratory symptoms from Wuhan City within 14 days before the onset of illness; (3) aggregative onset or epidemiological association with novel coronavirus infection.
2. Clinical manifestations (must comply with any two of the following): (1) fever (> 37.3°C); (2) imaging characteristics of pneumonia; (3) results of sequencing using respiratory specimens or blood specimens that are highly homologous with COVID-19.
Fig. 1 Chest CT of this patient. a Chest CT obtained on January 16, 2020 (hospital day 1, illness day 7): multiple patchy shadows and cord-like ground-glass opacity (GGO) under the pleura and in the bilateral lungs. b Chest CT obtained on January 19, 2020 (hospital day 4, illness day 10): the texture of the trachea and blood vessels in both lungs showed thickening; GGO increased, and the original GGO was consolidated. c Chest CT obtained on January 23, 2020 (hospital day 8, illness day 14): the patchy lesions in both lungs were absorbed, and the fiber shadow increased in size. d Chest CT obtained on January 30, 2020 (hospital day 15, illness day 21): the consolidation in the bilateral lungs was further absorbed; the fiber strands were reduced, and GGO increased slightly.
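The "any one of" / "any two of" structure of the criteria is a simple counting rule. A minimal sketch of that logic, with a hypothetical boolean encoding of the criteria (function name and input layout are illustrative, not part of the guideline):

```python
def meets_dtpi_suspected_case(epi_criteria, clinical_criteria):
    """Counting rule described in the text: at least ONE epidemiological
    criterion AND at least TWO clinical criteria must be satisfied.
    Inputs are iterables of booleans, one per criterion."""
    return sum(epi_criteria) >= 1 and sum(clinical_criteria) >= 2

# Hypothetical patient resembling this case: Wuhan travel history,
# fever > 37.3°C and CT imaging characteristics of pneumonia.
epi = [True, False, False]       # travel history, contact, cluster onset
clinical = [True, True, False]   # fever, imaging, sequencing homology
meets_dtpi_suspected_case(epi, clinical)  # → True
```

Encoding each criterion as a boolean keeps the rule auditable: changing the guideline thresholds only changes the two comparison constants.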
Laboratory testing
The COVID-19 laboratory test assays were conducted according to WHO recommendations [9]. Laboratory identification of COVID-19 was performed by three different institutions: the First Hospital of Changsha City, Hunan CDC, and CCDC. Upper and lower respiratory tract specimens were obtained from this patient thrice (hospital days 3, 8, and 13). RNA was obtained and further tested by rRT-PCR through the same method described in a previous study [2]. This study also tested for other respiratory viruses (influenza A virus, influenza B virus, and respiratory syncytial virus) and parainfluenza virus. In addition, this patient underwent chest X-rays and chest CT.
Results
CT imaging showed multiple patchy shadows and cord-like ground-glass opacity (GGO) under the pleura and in the bilateral lungs (hospital day 1, illness day 7, Fig. 1a). Furthermore, a second chest CT indicated that the texture of the trachea and blood vessels in both lungs had thickened. GGO increased, and the original GGO was consolidated (hospital day 4, illness day 10, Fig. 1b). Oropharyngeal swabs were obtained from this patient on hospital days 3, 8, and 13. A positive result for COVID-19 was obtained on hospital day 5. On hospital day 8 (illness day 14), 3 days after beginning treatment with lopinavir and ritonavir tablets combined with interferon alfa-2b injection, a comparison of a third chest CT with the previous CT images obtained on hospital day 4 indicated that the patchy lesions in the bilateral lungs had been absorbed, and the fiber shadow had increased (Fig. 1c). On hospital day 10 (illness day 16), an rRT-PCR assay was performed to test for active COVID-19 infection, and a negative result was obtained. On hospital day 15 (illness day 21), a fourth CT showed that the consolidation in the bilateral lungs had been absorbed to a greater extent. The fiber strands were reduced, and GGO increased slightly (Fig. 1d). The COVID-19 infection of this case was checked again, and a negative result was obtained.
Discussion and conclusions
Currently, the full spectrum and transmission dynamics of these infections are unclear. Here, we describe a patient cured of COVID-19 in Changsha, China. Our patient reported that she returned to Changsha from Wuhan without visiting the Huanan seafood market. A previous study revealed that COVID-19 had been spreading via person-to-person transmission [10]. As of February 4, 2020, no secondary cases of COVID-19 related to this patient were confirmed. Currently, knowledge of the clinical spectrum of COVID-19 infections is limited. Several reports have described various complications related to COVID-19 [2,6,11]. In this study, the patient initially manifested fever and cough with leukopenia, neutropenia, and lymphopenia. The report on the first case of COVID-19 in the United States indicated no sign of pneumonia on the patient's chest X-ray on illness day 4, which suggests that this disease may have a latency period. The nonspecific signs of COVID-19 infection may make it clinically difficult to distinguish from other infectious diseases. Many hospitals in China currently use lopinavir and ritonavir tablets combined with interferon alfa-2b injection to treat COVID-19. In this report, the patient was also treated with the above-mentioned medication regimen. However, additional studies are needed to confirm the effect of this therapeutic schedule. A multicenter randomized controlled trial on the treatment of COVID-19 using lopinavir and ritonavir tablets is currently in progress in China. The present study emphasizes the need to clarify the full spectrum and pathogenesis of COVID-19 infections. Additional information about this disease is necessary for its clinical management. We should make every possible effort to control this infectious disease. | 2020-11-11T15:04:05.214Z | 2020-11-11T00:00:00.000 | {
BGP-15 Protects against Doxorubicin-Induced Cell Toxicity via Enhanced Mitochondrial Function
Doxorubicin (DOX) is an efficacious and commonly used chemotherapeutic agent. However, its clinical use is limited due to dose-dependent cardiotoxicity. Several mechanisms have been proposed to play a role in DOX-induced cardiotoxicity, such as free radical generation, oxidative stress, mitochondrial dysfunction, altered apoptosis, and autophagy dysregulation. BGP-15 has a wide range of cytoprotective effects, including mitochondrial protection, but to date there has been no information about any of its possible beneficial effects on DOX-induced cardiotoxicity. In this study, we investigated whether the protective effects of BGP-15 pretreatment act predominantly via preserving mitochondrial function and reducing mitochondrial ROS production, and whether it influences autophagy processes. H9c2 cardiomyocytes were pretreated with 50 μM of BGP-15 prior to exposure to different concentrations (0.1; 1; 3 μM) of DOX. We found that BGP-15 pretreatment significantly improved cell viability after 12 and 24 h of DOX exposure. BGP-15 ameliorated the lactate dehydrogenase (LDH) release and cell apoptosis induced by DOX. Additionally, BGP-15 pretreatment attenuated the level of mitochondrial oxidative stress and the loss of mitochondrial membrane potential. Moreover, BGP-15 slightly modulated the autophagic flux, which was measurably decreased by DOX treatment. Hence, our findings clearly revealed that BGP-15 might be a promising agent for alleviating the cardiotoxicity of DOX. This critical mechanism appears to be given by the protective effect of BGP-15 on mitochondria.
Introduction
Doxorubicin (DOX) is a potent chemotherapeutic agent widely used to treat a variety of cancers [1]. The clinical use of DOX has been associated with cumulative, dose-dependent cardiotoxicity, and this off-target drug toxicity is associated with oxidative stress that contributes to the development of heart failure [2]. The mechanisms of DOX-induced toxicity have not been clearly elucidated, but are known to involve, at least in part, mitochondrial dysfunction, leading to an increased generation of intracellular ROS, oxidative stress, and apoptosis [3,4]. Thus, DOX cardiotoxicity is closely associated with mitochondrial injury, which is characterized by iron overload and an early loss of mitochondrial membrane potential (MMP) followed by dysregulation of the mitochondrial quality control mechanism [5]. Additionally, DOX activates apoptosis due to an imbalance between oxidants and anti-oxidants. Many pathways may therefore be responsible for apoptosis induction, and there may be cross-talk between these various pathways, including the mitochondrial pathway through caspase-3 activation [6].
More recently, it has been suggested that dysregulation of autophagy may also play a contributing role in DOX-induced cardiotoxicity [7,8]. Autophagy has been shown to have dual functions. Autophagy can enhance cellular function and survival by degrading damaged or unwanted organelles and by inhibiting apoptosis. Alternatively, autophagy can also induce cell death [9]. Several studies have shown that DOX treatment affects autophagy in vitro and in vivo: some have shown that DOX treatment increases autophagy, and some have shown that DOX decreases autophagy [10]. Since autophagy has dual functions in the life and death of cardiomyocytes, several investigators have employed chemical means of manipulating autophagy to elucidate its role in DOX-induced cardiotoxicity. However, it is important to note that many of the most commonly used chemical modulators of autophagy have off-target effects to be considered when interpreting results [11].
With all of the above molecular mechanisms leading to DOX-induced cardiotoxicity, its clinical use is limited. In the present study, our aim was to investigate the effect of BGP-15 on DOX-induced injury. BGP-15 (O-[3-piperidino-2-hydroxy-1-propyl]-nicotinic amidoxime) possesses a wide range of cytoprotective effects but lacks a clear intracellular molecular target [12]. BGP-15 protects the mitochondrial membrane system, decreases oxidative stress [13], inhibits the nuclear translocation of apoptosis-inducing factor (AIF) from mitochondria, and inhibits mitogen-activated protein kinase (MAPK) activation [12]. BGP-15 shows several beneficial cardiovascular effects and has increasingly raised scientific interest in a wide range of pathological conditions in several disease models [14,15]. Although several protective mechanisms of BGP-15 have been identified, its effects on DOX-induced cardiotoxicity have not yet been investigated. In the current study, we have tested the effect of BGP-15 treatment on DOX-induced injury in H9c2 cells.
Effects of BGP-15 Pretreatment on Cell Viability and LDH Release of DOX-Induced Cardiotoxicity
In order to evaluate the potential cardioprotective effect of BGP-15 against DOX-induced toxicity, a cell viability assay was carried out. As shown in Figure 1, panels A and B, 12 or 24 h of DOX exposure at doses between 0.1 and 3 µM induced a significant dose-dependent decrease in cell viability in comparison with the control group (p < 0.0001). Of note, no cytotoxicity was observed in response to 50 µM of BGP-15 alone. Thus, we assessed the effect of DOX on cell viability in the presence of BGP-15. Our findings showed that BGP-15 pretreatment significantly improved cell viability in the 0.1, 1, and 3 µM DOX groups in comparison with cardiomyocytes treated with DOX alone at the same concentrations, after both 12 h and 24 h of DOX exposure.
To further confirm the protective effect of BGP-15 treatment on DOX-induced toxicity, the LDH content of the cell culture media was determined by a colorimetric assay. Our results showed (Figure 1C) that DOX increased the LDH release of the cells in a dose-dependent manner. In line with the MTT assay, BGP-15 pretreatment not only improved cell viability, but also significantly decreased the DOX-induced LDH release of the cardiomyocytes. We measured a significant decrement in LDH release in the presence of BGP-15 compared to DOX treatment alone (1 µM DOX: 22.88 ± 0.78% vs. 17.04 ± 0.59% and 3 µM DOX: 25.91 ± 1.14% vs. 20.33 ± 0.85%). Data are presented as mean ± SEM; *, **, ***, and **** represent p < 0.05, p < 0.01, p < 0.001, and p < 0.0001, respectively.
BGP-15 Attenuates the DOX-Induced Generation of Mitochondrial ROS and Slightly Diminishes the Activation of Caspase-3 Apoptosis Marker in H9c2 Cells
ROS are involved in DOX-induced cell death [3]. Several studies have suggested that cardiomyocyte mitochondria are important intracellular targets of excess ROS during DOX-induced cardiotoxicity. Superoxide is one of the major ROS generated after DOX treatment [16]. Thus, to study the role of ROS in the protection induced by BGP-15 treatment (Figure 1), cells were analyzed for mitochondrial superoxide anion generation by flow cytometry in the presence or absence of BGP-15 in cardiomyocytes challenged by DOX treatment (Figure 2A). Our results indicated that DOX increased mitochondrial superoxide generation compared to the control cells in a dose-dependent manner. Quantitative measurements of the mean fluorescence intensities of the samples demonstrated that 1 and 3 µM DOX alone significantly increased the ROS level (696.58 ± 42.34 and 992.03 ± 143.17, respectively) in contrast to the control group (408.18 ± 9.75). Conversely, the enhanced MitoSOX fluorescence intensity induced by the DOX treatment was lessened by pretreatment with BGP-15, and was significantly lower in the BGP-15 + DOX3 group in comparison with DOX3-treated cells, indicating that the level of mitochondrial superoxide generation decreased in H9c2 cells in the presence of BGP-15. Fluorescent microscopy was employed to visualize MitoSOX staining (Figure 2B). However, we observed a notable accumulation of DOX in the nucleus, which makes it difficult to quantify the fluorescence intensity of microscopic images. These results suggest that decreased ROS generation may play a role in the cytoprotective effect of BGP-15 in H9c2 cells against DOX-induced cell toxicity.
In order to investigate the activation of apoptosis, we analyzed the ratio of cleaved-caspase-3 (17 kDa)/pro-caspase-3 (35 kDa) after the cardiomyocyte cells were exposed to 1 µM DOX for 24 h in the absence or presence of 50 µM BGP-15 pretreatment (Figure 2C).
Our results showed that 1 µM DOX for 24 h significantly enhanced the ratio of cleaved-caspase-3/pro-caspase-3 (0.57 ± 0.08) in comparison with the control group (0.05 ± 0.01), indicating the activation of apoptosis. BGP-15 alone did not alter the ratio of cleaved-caspase-3/pro-caspase-3. Although the pretreatment with BGP-15 slightly withheld the activation of apoptosis, the change in the ratio of the abovementioned proteins was not statistically significant (0.43 ± 0.07) (p value = 0.26).
Figure 2. Effect of BGP-15 pretreatment on mitochondrial ROS generation and activation of the caspase-3 apoptosis marker in DOX-induced cardiotoxicity. (A) ROS production was measured after MitoSOX Red staining by flow cytometry and expressed as mean ± SEM of MitoSOX Red fluorescence intensity (with or without BGP-15 (50 µM, 24 h pretreatment) on DOX-exposed H9c2 cells in the concentration range of 0.1-3 µM). n = 4. Data are presented as mean ± SEM; *, **, and **** represent p < 0.05, p < 0.01, and p < 0.0001, respectively. The significance of differences among groups was evaluated with a one-way analysis of variance (ANOVA) followed by Tukey's posttest. (B) Representative images of MitoSOX Red staining; the nuclear fluorescence in DOX-treated H9c2 derives from DOX. The images were captured using a 63× oil immersion objective lens. (C) Analysis of the protein level of the cleaved-caspase-3/pro-caspase-3 ratio by Western blot after the cardiomyocyte cells were exposed to 1 µM DOX for 24 h in the absence or presence of pretreatment with 50 µM BGP-15 for 24 h. Red arrows indicate the bands for cleaved-caspase-3 (17; 19 kDa). Values were normalized to the total protein level and expressed as mean ± SEM, n = 9. *** p < 0.001; **** p < 0.0001, respectively. The significance of differences among groups was evaluated with one-way ANOVA followed by Tukey's posttest.
Effects of BGP-15 Pretreatment on Mitochondrial Depolarization of DOX-Exposed H9c2 Cells
Mitochondria are the primary target organelles of DOX-induced cardiotoxicity [17]. Mitochondrial membrane potential (MMP) is necessary for the production of ATP, which is crucial in living cells. JC-1 was used to assess the ∆ψm in H9c2 cardiomyocytes. This dye can selectively enter the mitochondria, where it reversibly changes color as membrane potentials increase (over values of about 80-100 mV). The monomeric form of JC-1 in the cytosol emits a green fluorescence, and aggregates of the dye in the mitochondria of normal cells emit a red fluorescence. To confirm our fluorescent intensity (ratio red/green) results, JC-1 staining was carried out. Samples were visualized by fluorescent microscopy, with healthy mitochondria in red and unhealthy mitochondria in green. As shown in Figure 3A,B, the DOX-induced depolarization was mitigated by BGP-15 pretreatment. The ratio of fluorescent intensity was 120 ± 5.58 in the BGP-15-alone group. Our results revealed that DOX induced significant MMP loss in the 1 and 3 µM DOX groups (62.76 ± 4.36 and 50.43 ± 5.01, respectively) versus the control group (100%); however, MMP was recovered by the BGP-15 pretreatment (79.6 ± 5.75 and 63.94 ± 7.37, respectively), with a significant improvement for BGP-15 + DOX 1 vs. DOX 1.
Figure 3. (A) Effect of BGP-15 on DOX-induced mitochondrial membrane depolarization in H9c2 cells. Cells were exposed to 0.1, 1, and 3 µM DOX for 24 h in the absence or presence of 50 µM BGP-15 pretreatment for 24 h, then stained with JC-1, a membrane potential-sensitive fluorescent dye. Data are presented as mean ± SEM of the red/green fluorescence intensity ratio, as % of control. n = 5. *, ***, and **** represent p < 0.05, p < 0.001, and p < 0.0001, respectively. (B) Representative images of JC-1 staining. Green channel: JC-1 monomeric form; Red channel: JC-1 aggregated form; Blue channel: DAPI as nucleus staining (the nuclear fluorescence in DOX-treated H9c2 derives from DOX); Merged images. The images were captured using a 63× oil immersion objective lens.
Effects of BGP-15 on Autophagy Flux in DOX-Induced Cytotoxicity
To monitor autophagic flux, cells were treated with chloroquine, a known autophagic flux inhibitor. Protein expression levels of LC3B (Figure 4A) and p62 (Figure 4C) were measured with Western blot, and colocalization of lysosomes with LC3B or p62 (Figure 4B,D) was determined by fluorescent microscopy. Our results showed that chloroquine significantly increased the relative LC3B protein expression in the control vs. control + chloroquine (1 ± 0 vs. 1.98 ± 0.23) and BGP-15 vs. BGP-15 + chloroquine (1.22 ± 0.08 vs. 2.37 ± 0.22) groups. However, chloroquine enhanced LC3B expression only moderately in the DOX 1 (1.05 ± 0.11 vs. 1.59 ± 0.23) and BGP-15 + DOX 1 (1.02 ± 0.15 vs. 1.40 ± 0.18) groups. In contrast, the expression of p62 was significantly reduced in the DOX 1 (0.16 ± 0.02) and BGP-15 + DOX 1 (0.11 ± 0.03) groups compared to the control (1 ± 0) and BGP-15 (1.12 ± 0.09) groups. The Western blot results were supported by microscopic images. However, it appears that modulation of autophagic flux is not likely to play a direct role in the cytoprotective effects of BGP-15 in DOX-induced toxicity.

Figure 4. Analysis of the protein levels of (A) LC3B-II and (B) p62 by Western blot after the cardiomyocyte cells were exposed to 1 µM DOX for 24 h in the absence or presence of pretreatment with 50 µM BGP-15 for 24 h. Values were normalized to the total protein level and expressed as mean ± SEM, n = 14 and 14. The significance of differences among groups was evaluated with a one-way analysis of variance (ANOVA) followed by Tukey's posttest. ** p < 0.01; *** p < 0.001; **** p < 0.0001, respectively. Red arrows indicate the bands for LC3B-I and LC3B-II. (C) Autophagy flux determined by fluorescent microscopy of LC3B or (D) p62 immunostaining. Cells were exposed to 1 µM DOX for 24 h in the absence or presence of 50 µM BGP-15 pretreatment for 24 h, and 10 µM chloroquine for 18 h, then stained. Representative images show the following: Blue channel: DAPI as nucleus staining (the nuclear fluorescence in DOX-treated H9c2 derives from DOX); Red channel: Lysotracker Red; Green channel: LC3B or p62 immunostaining; Merged images. The images were captured using a 63× oil immersion objective lens.
Discussion
Pharmacological interventions that are able to enhance the resistance of the myocardium against DOX-induced cardiac complications may offer a new perspective on the application of DOX in different tumors. In the current study, we found that BGP-15 mitigates DOX-induced cell death in H9c2 cells, evidenced by enhanced cell survival and decreased LDH release upon DOX treatment. Furthermore, BGP-15 decreased mitochondrial ROS production and mitochondrial depolarization in DOX-challenged cells. Earlier, BGP-15, a nicotinic acid derivative, was shown to protect the myocardium against different injuries, including ischemia/reperfusion and heart failure with different triggers [14,15,18,19].
Mitochondrial dysfunction plays an important role in different cardiovascular diseases, including DOX-induced cardiotoxicity. Although the mechanisms contributing to DOX-induced cardiotoxicity are not fully understood, the role of increased ROS production and enhanced oxidative stress appears to be one of the major factors. An enhanced amount of ROS impairs redox balance, causing DNA damage, lipid peroxidation, mitochondrial dysfunction, and dysregulation of autophagy and apoptosis [20][21][22]. Ultimately, these alterations lead to contractile dysfunction, cardiomyopathy, and heart failure. DOX redox cycles on mitochondrial complex I, leading to ROS generation [23]. Moreover, an enhanced mitochondrial iron level upon DOX treatment also contributes to ROS generation [24]. Increased mitochondrial ROS leads to compromised mitochondrial integrity and opening of the mitochondrial permeability transition pore, which modulates Keap1/Nrf2 signaling and alters the regulation of mitochondrial biogenesis [25,26]. BGP-15 has been shown to protect against oxidative stress and LPS-induced mitochondrial depolarization [27]. The authors suggested that BGP-15 inhibits mitochondrial Complexes I and III, thereby suppressing ROS production and ultimately preventing the activation of ROS-dependent signaling pathways, including MAPK and PARP, and influencing cell death [27,28]. In line with the literature, we found decreased ROS production in cells treated with DOX in the presence of BGP-15, evidenced by the results of flow cytometry.
Moreover, BGP-15 prevented DOX-induced mitochondrial depolarization. Our results show a slight decrement in the activation of caspase-3 in the presence of BGP-15 in DOX-treated H9c2 cells; however, it was not statistically significant.
Impaired autophagy also plays a role in DOX-induced cell toxicity. Earlier, we have shown that DOX treatment impairs autophagic flux, which can be restored by metformin treatment. Metformin also targets mitochondrial complex I, leading to a decreased ATP/AMP ratio, which activates AMPK and suppresses mTOR signaling, leading to the activation of autophagy [29]. In the current study, we also found a weakened autophagy flux in the presence of DOX. Suppression of autophagic flux by DOX is mostly reported to be dose-dependent. It has been suggested that a 1 µM DOX concentration reflects the clinically relevant context [30], so we employed that concentration. However, BGP-15 did not restore the flux. Similar results were seen by Li et al. [31]: they observed that the treatment of NRVM with DOX (1 µM) resulted in a decrease in the autophagic flux within 6 h based on the measured LC3B-II and p62 levels. Moreover, by tracking lysosomes with Lysotracker Red, a fluorescent dye that labels acidic organelles, we also found that DOX decreased Lysotracker Red puncta. Although we did not quantify Lysotracker Red staining puncta, based on the microscopic results depicted in Figure 4, panels C and D, it is visible that upon DOX treatment the number of lysosomes decreased. It has been reported in some cell types that an increase in lysosome pH can impair the fusion of lysosomes with autophagosomes [31,32]. Of note, based on our microscopic pictures, the fluorescent signals were slightly increased in BGP-15 + DOX1 + Q treated cells. Interestingly, the extent of autophagic flux perturbation correlated with the level of DOX-induced ROS production, lending further support to the notion that restoration of autophagic flux protects against DOX-induced cardiotoxicity. Taken together, based on our data, we cannot completely rule out that BGP-15 may influence autophagic flux; however, further studies need to be carried out to clarify the question.
In conclusion, our results indicated that BGP-15 could prevent DOX-induced cell toxicity by decreasing mitochondrial ROS production and attenuating mitochondrial depolarization.
Cell Culture
The H9c2 cells were obtained from ATCC (CRL-1446, LGC Standards GmbH, Wesel, Germany). Cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin at 37 °C in a humidified incubator consisting of 5% CO2 and 95% air. Cells were fed every 3 days and passaged by trypsinization when reaching 70-80% confluence. The passage number of the cells was between 8 and 26. The cells were pretreated with 50 µM BGP-15 for 24 h and then treated with DOX at the indicated concentrations for 12 or 24 h. The stock solution of 2 mg/mL DOX was diluted weekly, and the BGP-15 solution was prepared before treatment in the medium with the composition mentioned above.
Cell Viability Assay by MTT
In this assay, cells were seeded into 96-well culture plates at 3000 cells/well, pretreated with 50 µM BGP-15 for 24 h, and treated with different doses (0.1; 1; 3 µM) of DOX for 12 or 24 h. After treatment, MTT solution (final concentration of 0.5 mg/mL) was added to each well and incubated for 3 h at 37 °C. After that, the medium was replaced by isopropyl alcohol to dissolve the formazan product. Absorbance was measured with a Multiskan GO Microplate Spectrophotometer (Thermo Fisher Scientific Oy, Ratastie, Finland) at 570 and 690 nm. The values were calculated as follows: the resulting colored solution was quantified by measuring absorbance at 570 nm and subtracting the background absorbance at 690 nm. These values were expressed relative to the control, which was defined as 100% viability. One percent H2O2 was used as the positive control. Absorbance values were averaged across 6 replicate wells, and the experiment was repeated 6-9 times.
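The viability calculation described above (background-corrected absorbance expressed relative to the control mean) can be sketched in Python; this is a minimal illustration, and the function name and example absorbance values are hypothetical, not taken from the paper:

```python
from statistics import mean

def mtt_viability_percent(a570, a690, control_wells):
    """Percent viability of one well relative to untreated controls (= 100 %).

    a570 / a690    : absorbance of the well at 570 nm (formazan signal)
                     and 690 nm (background), as in the assay above.
    control_wells  : list of (a570, a690) pairs for the control group.
    """
    # Background-corrected mean signal of the control group
    control_signal = mean(s - b for s, b in control_wells)
    # Background-corrected well signal, as a percentage of control
    return (a570 - a690) / control_signal * 100.0

# Example: a treated well at roughly half the control signal
controls = [(0.82, 0.02), (0.78, 0.02)]
print(round(mtt_viability_percent(0.42, 0.02, controls), 1))
```

Averaging the six replicate wells before this calculation, as done in the assay, only changes which values are passed in; the arithmetic is the same.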
Determination of Intracellular Reactive Oxygen Species Generation and Mitochondrial Function
In this experiment, 2000 cells/well were seeded on round glass coverslips placed into 24-well plates. Cells were treated with different doses (0.1; 1; 3 µM) of DOX for 24 h in the presence or absence of BGP-15 pretreatment (50 µM). At the end of the treatment, the medium was removed and the cells were washed 3 times with Hank's Balanced Salt Solution (HBSS). MitoSOX™ Red was added for 10 min at 37 °C in the dark. The nucleus was stained with DAPI. Finally, cells were fixed with 4% methanol-free formaldehyde and washed with HBSS, and the coverslips were placed on a slide. Specimens were visualized using a fluorescence microscope. Images were captured by a Zeiss Axio Scope.A1 fluorescent microscope and analyzed with ZEN 2011 v.1.0.1.0 software (Carl Zeiss Microscopy GmbH, München, Germany). The images were captured using the 63× oil immersion objective lens. For flow cytometry experiments, 20,000 cells/well were seeded into 24-well plates, and the same protocol was carried out. Cells were trypsinized and fixed with 4% methanol-free formaldehyde. Cellular fluorescence was analyzed with a Guava easyCyte 6HT-2L flow cytometer (Merck Ltd., Darmstadt, Germany). MitoSOX Red was analyzed using 510 nm excitation and 580 nm emission wavelengths. Using flow cytometry of H9c2 cells stained with and without MitoSOX Red, we were able to separate the red fluorescence signal elicited by DOX.
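Fluorescence intensities throughout this study are reported as mean ± SEM. As a minimal stdlib sketch of that summary statistic (the function name and example values are illustrative, not from the paper):

```python
from math import sqrt
from statistics import mean, stdev

def mean_sem(values):
    """Mean and standard error of the mean (SEM = sample SD / sqrt(n)),
    the form in which fluorescence intensities are reported here."""
    return mean(values), stdev(values) / sqrt(len(values))

# Illustrative per-replicate mean fluorescence intensities (n = 4)
m, sem = mean_sem([696.6, 702.1, 688.9, 699.7])
```

Note that `statistics.stdev` uses the sample (n − 1) denominator, which is the usual convention for SEM on a small number of replicates.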
Assessment of Mitochondrial Membrane Potential
Mitochondrial membrane potential (MMP) was assessed using the fluorescent indicator 5,5',6,6'-tetrachloro-1,1',3,3'-tetraethylbenzimidazolocarbocyanine iodide (JC-1; Life Technologies, Paisley, Scotland). Cells were seeded into black 96-well culture plates at 3000 cells/well and into 24-well culture plates with coverslips (2000 cells/well), then pretreated or not with 50 µM BGP-15 and treated with different doses (0.1; 1; 3 µM) of DOX for 24 h. After treatment, cells were incubated with 1 mg/mL JC-1 in Krebs-Henseleit buffer for 30 min at 37 °C. After the incubation time, cells were washed once with Krebs-Henseleit buffer. Red and green fluorescence intensities of the samples were measured with a Multiskan GO Microplate Spectrophotometer (Thermo Fisher Scientific Oy, Ratastie, Finland) at 492 nm excitation and 520 and 590 nm emission wavelengths. DAPI as a nuclear stain was measured at 365 nm excitation and 445 nm emission wavelengths. The ratio of the red and green fluorescence values was normalized to the blue fluorescence values. Using a spectrophotometer on H9c2 cells stained with and without JC-1, we were able to separate the red fluorescence signal elicited by DOX. Absorbance values were averaged across 4 replicate wells, and the experiment was repeated 5 times.
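The JC-1 readout described above (red/green ratio normalized to the blue DAPI signal, then expressed as % of the untreated control, as in Figure 3) can be sketched as follows. This is an assumption-laden illustration: the function name, the exact normalization order, and the example values are not stated in the paper.

```python
from statistics import mean

def jc1_percent_of_control(red, green, blue, control_ratios):
    """JC-1 mitochondrial membrane potential readout.

    red / green / blue : fluorescence intensities of one well
                         (590 nm / 520 nm emission, and DAPI).
    control_ratios     : cell-number-normalized red/green ratios of
                         the untreated control wells (defined as 100 %).
    The (red/green)/blue order is an assumption based on the text.
    """
    ratio = (red / green) / blue  # red/green ratio, corrected for cell number
    return ratio / mean(control_ratios) * 100.0
```

A depolarized well shifts red to green fluorescence, so its ratio (and the returned percentage) drops below 100, as seen for the DOX-treated groups.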
The coverslips were placed on a slide and visualized using a fluorescence microscope. Images were captured by a Zeiss Axio Scope.A1 fluorescent microscope using the 63× oil immersion objective lens and analyzed with ZEN 2011 v.1.0.1.0 software (Carl Zeiss Microscopy GmbH, München, Germany). A shift from red to green fluorescence indicates a loss of MMP, which was assessed by obtaining multiple merged images.
LDH (Lactate Dehydrogenase) Release Assay
LDH release was measured with the LDH-cytotoxicity assay kit (Sigma, St. Louis, MO, USA) according to the manufacturer's instructions. Cells were seeded into 96-well culture plates at 5000 cells/well, pretreated with 50 µM BGP-15, and treated with different doses (0.1; 1; 3 µM) of DOX for 24 h. Absorbance was measured with a Multiskan GO Microplate Spectrophotometer (Thermo Fisher Scientific Oy, Ratastie, Finland) at 492 and 620 nm. The values were expressed relative to the positive control (2% Triton X-100 in assay medium), which represented maximal LDH release. Absorbance values were averaged across 8 replicate wells, and the experiment was repeated 5 times.
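The calculation implied above (background-corrected absorbance expressed relative to the Triton X-100 positive control, i.e., maximal release) can be sketched in Python; the function name and example values are hypothetical, not from the paper:

```python
def ldh_release_percent(a492, a620, triton_max_signal):
    """LDH release of one well relative to the Triton X-100 positive
    control (maximal release = 100 %).

    a492 / a620       : absorbance at 492 nm (signal) and 620 nm
                        (background) of the well.
    triton_max_signal : background-corrected (492 nm - 620 nm)
                        absorbance of the positive control.
    """
    return (a492 - a620) / triton_max_signal * 100.0

# Example: a well releasing a quarter of the maximal LDH
print(round(ldh_release_percent(0.25, 0.05, 0.80), 1))
```

This matches how the results section reports LDH release as a percentage (e.g., 22.88 ± 0.78% for 1 µM DOX).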
Autophagy Flux Determined by Fluorescent Microscopy
For the analyses of autophagy flux, we used Lysotracker Red and LC3B and p62 antibodies. For these experiments, 2000 cells/well were seeded on round glass coverslips placed into 24-well culture plates. The treatment protocol was the following: with or without pretreatment with 50 µM BGP-15, cells were treated with 1 µM DOX for 24 h; rapamycin (5 mM) was used as the positive control, and the autophagic process was inhibited by chloroquine (10 µM, for 18 h). After the treatments, the medium was removed, and cells were washed 3 times with HBSS. Lysotracker Red was added for 30 min at 37 °C in the dark. Cells were fixed with 4% methanol-free formaldehyde. Cells on coverslips were permeabilized and blocked with HBSS containing 5% normal goat serum and 0.3% Triton X-100 for 30 min. Thereafter, the cells were incubated with primary antibodies (LC3B or p62: 1:1000 with 1% BSA and 0.3% Triton X-100 in HBSS) for 2 h at 37 °C and then with a secondary antibody (Alexa Fluor 488 goat anti-rabbit IgG (H + L), 1:500 with 0.2% BSA in HBSS) for 1 h at 37 °C in the dark. The nucleus was stained with DAPI. The cells were washed with HBSS after each step. The coverslips were placed on a slide and visualized using a fluorescence microscope. Images were captured by a Zeiss Axio Scope.A1 fluorescent microscope and analyzed with ZEN 2011 v.1.0.1.0 software (Carl Zeiss Microscopy GmbH, München, Germany). The images were captured using the 63× oil immersion objective lens.
Protein Isolation
After treatment, total protein fractions were extracted from the cultured H9C2 cells as previously described [33]. The protein concentration of the isolates was then determined using a BCA kit (Thermo Scientific, Rockford, IL, USA).
Western Blot Analysis
A 25 µg protein sample was loaded and separated on a 4-20% Mini-PROTEAN® TGX Stain-Free™ Protein gel. Gels were then exposed to UV light, whereby the trihalo compounds contained in stain-free gels covalently bind to tryptophan residues in proteins, allowing total protein quantification, and proteins were transferred onto PVDF membranes for 1 h at 100 V. Membranes were exposed to another brief irradiation, the resulting fluorescence signals were recorded, and the signal intensity was considered proportional to the total protein amount. After blocking with 5% non-fat dry milk in Tris-buffered saline with Tween 20 (TBST), membranes were incubated with primary antibody solution (LC3B, p62 and caspase-3: 1:1000 in TBST) at 4 °C overnight. The membranes were washed with TBST and incubated with HRP-conjugated secondary antibody solution (1:3000 in TBST). After washing, the membranes were incubated with Clarity Western ECL substrate (Bio-Rad Laboratories), and bands were visualized by enhanced chemiluminescence according to the recommended procedure (ChemiDoc Touch, Bio-Rad Laboratories). The chemiluminescent band and total protein lane intensities were measured with Image Lab software (version 5.2.1) (Bio-Rad Laboratories). During quantification, protein density is measured directly on the membranes and related to the total loaded protein; this type of normalization eliminates the need to select housekeeping proteins. The software calculates a normalization factor, which is the total stain-free intensity (volume) of the reference lane divided by the total stain-free intensity of each lane. Protein expression was quantified as normalized volume, i.e., normalization factor × volume (intensity) [34].
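The stain-free normalization described above is simple arithmetic; a minimal sketch (function and parameter names are ours, not Image Lab's):

```python
def normalized_volume(band_intensity, lane_total_protein, reference_lane_total):
    """Stain-free total-protein normalization, per the description above:
    normalization factor = reference lane total intensity / this lane's total,
    normalized volume    = normalization factor * band (chemiluminescent) intensity."""
    factor = reference_lane_total / lane_total_protein
    return factor * band_intensity
```

A lane that received twice the reference lane's total protein thus has its band intensities halved, correcting for unequal loading without a housekeeping protein.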
Statistical Analysis
The data were expressed as mean ± SEM. Statistical analyses were performed with GraphPad Prism version 5 (La Jolla, CA, USA). One-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test identified significant differences between the control and treated groups, and the Šidák method was used to compare the treated groups (MitoSOX assay). A probability value of p < 0.05 was used as the criterion for statistical significance; *, **, ***, and **** represent p < 0.05, p < 0.01, p < 0.001, and p < 0.0001 in Tukey's post test, respectively.
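The analyses were run in GraphPad Prism; purely as an illustrative sketch (not the authors' workflow), the F statistic underlying one-way ANOVA can be computed directly from the group data:

```python
def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for k groups of observations:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (residuals around each group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

The resulting F is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p value; Prism (or `scipy.stats.f_oneway`) performs that final step.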
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
All data used to support the findings of this study are available from the first author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "becea481218b47775e0d5929a5c120a5c25935a7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/6/5269/pdf?version=1678358570",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "171f0508088fb840722f830b0dbf2cecb3cd8c2d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": []
} |
Delayed Pulmonary Apoptosis of Diet-Induced Obesity Mice following Escherichia coli Infection through the Mitochondrial Apoptotic Pathway
Escherichia coli (E. coli) is one of the pathogens causing nosocomial pneumonia and can induce excessive pulmonary apoptosis. Although much has been learned about metabolic diseases induced by obesity, information linking bacterial pneumonia to obesity is limited. Accordingly, we investigated apoptosis in normal (lean) and diet-induced obesity (DIO, fed a high-fat diet) mice after nasal instillation with E. coli. Lung tissues were obtained at 0 (preinfection), 12, 24, and 72 h after infection, and acute pulmonary inflammation was observed at 12 h. Elevated cell apoptosis and an increased percentage of pulmonary cells depolarized with collapse of the mitochondrial transmembrane potential (Δψm) occurred in response to bacterial infection. The relative mRNA and protein expressions of Bax, caspase-3, and caspase-9 increased, while Bcl-2 decreased in the lung. Interestingly, the apoptotic percentage and most of the apoptosis-associated factors mentioned above peaked at 12 or 24 h in the lean-E. coli group, but at 24 or 72 h in the DIO-E. coli group. Taken together, these findings indicate that E. coli pneumonia caused excessive pulmonary apoptosis through the mitochondria-mediated pathway, and that this apoptosis was delayed in the DIO mice.
Introduction
Obesity has developed into a considerable health problem worldwide. Obese people are at risk of respiratory symptoms even without obvious respiratory illness [1] and may have an increased risk of pneumonia [2]. The adverse effects of obesity on the respiratory system, such as increased airway resistance and work of breathing and impaired respiratory muscle function and gas exchange [3], are mediated by a number of mechanisms, including production of proinflammatory cytokines by adipose tissue, mechanical restriction of thoracic volumes, and obesity-induced hypoventilation [4]. Thus, obese individuals are more susceptible to pneumonia; paradoxically, however, improved outcomes, such as reduced mortality, have been noticed in studies of acute bacterial pneumonia among obese patients [5][6][7].
Apoptosis has been recognized and accepted as a distinctive and important mode of "programmed" cell death [8]. As described in the literature, oxidative stress can cause cell apoptosis via both mitochondria-dependent and mitochondria-independent pathways [9]. Reactive oxygen species (ROS), among the most important products of oxidative stress, are a group of oxygen-derived free radicals produced by uncoupling, disturbance, or inhibition of the mitochondrial respiratory chain. High ROS exposure gives rise to oxidative damage to mitochondrial DNA, which consequently induces cell apoptosis [10]. Meanwhile, ROS are thought to be involved in obesity [11]. A link between nutritional status and apoptosis indicates that high caloric intake may impair mitochondria with respect to apoptosis [12]. Moreover, inflammation is a cellular response to stress, injury, or infection [13]. During infection, cells undergo apoptosis to inhibit the spread of microbes by directly killing them or depriving them of the cellular resources needed for survival and replication [14,15]. Although these findings provide new insight into the link between obesity and infection, it remains unclear whether cell apoptosis is involved.
We recently carried out experiments in which diet-induced obesity (DIO) mice presented delayed inflammatory responses and oxidative stress in nonfatal acute pneumonia induced by E. coli infection [16]. It is well known that inflammation and oxidative stress can, in theory, induce apoptosis. Since little is known about pulmonary cell apoptosis in the lungs of DIO mice following acute bacterial pneumonia, we fed ICR mice high-fat diets and then instilled them intranasally with E. coli, to shed light on differences in pulmonary cell apoptosis between normal and DIO mice following acute bacterial pneumonia.
Animal Model of Obesity.
Three-week-old male ICR mice were purchased from Dossy Animal Center (Chengdu, China) and housed under specific pathogen-free conditions. All animal experimental procedures were approved according to the National and International Guidelines and by Sichuan Agricultural University Animal Care and Use Committee (Approval No. 2012-024).
Mice received either a normal diet or a high-fat diet, obtained from Dossy Animal Center according to our previous study [17]. The fat content, mainly from lard and soybean oil, was about 7% in the normal diet and 35.2% in the high-fat diet [18]. Food and water were supplied ad libitum. After 8 weeks on the high-fat diet, mice were weighed, and those whose obese index exceeded 20% were defined as successfully obese [19]. The E. coli strain (with the highest homology to reference strain U00096) was cultured in Luria-Bertani broth at 37 °C for 18 hours. The bacterial culture was then centrifuged, and bacterial pellets were resuspended in PBS to produce the inoculum. After being anesthetized with ether, mice in the lean-E. coli or DIO-E. coli group were instilled intranasally with a 40 μL inoculum of E. coli (containing approximately 4 × 10⁹ colony-forming units) suspended in phosphate-buffered saline (PBS), as reported previously [20]. The same volume of PBS was given to mice in the lean-uninfected or DIO-uninfected group by the same route.
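Two quantities in this paragraph reduce to simple arithmetic. The obese index is commonly expressed as percent excess body weight over the control mean (the exact definition is given in [19]; the formula below is our assumption), and the stated dose implies an inoculum concentration:

```python
def obese_index(body_weight_g, control_mean_g):
    """Percent excess over the control group's mean body weight
    (assumed definition; cf. [19]). Values above 20 count as obese here."""
    return 100.0 * (body_weight_g - control_mean_g) / control_mean_g

def inoculum_cfu_per_ml(total_cfu, volume_ul):
    """CFU concentration implied by a dose of `total_cfu` in `volume_ul` microliters."""
    return total_cfu / (volume_ul / 1000.0)  # convert uL to mL
```

For instance, a mouse weighing 49 g against a 40 g control mean has an obese index of 22.5%, above the 20% threshold, and 4 × 10⁹ CFU in a 40 µL instillation corresponds to about 10¹¹ CFU/mL.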
Lung Injury Assayed by Histopathology.
After infection with E. coli for 12 h, the lungs of eight mice from each group were immediately fixed in 4% paraformaldehyde and then dehydrated in alcohol, embedded by paraffin, sectioned at 5 μm, and processed for hematoxylin and eosin staining. Histopathological changes were observed and photographed with a digital camera under 200x and 400x magnifications (Nikon DS-Ri1, Japan).
Measurement of Cell Apoptosis by Flow Cytometry.
At the indicated time points, the lungs from eight mice in each group were sampled and prepared into single-cell suspensions at a concentration of about 1 × 10⁶ cells/mL. After fluorescence staining with annexin V-fluorescein isothiocyanate (annexin V-FITC) and propidium iodide (PI) at room temperature for 15 min in the dark, the cells were resuspended in annexin binding buffer, and the percentage of apoptotic cells was assayed by flow cytometry (BD FACSCalibur) within 1 h. The annexin V-FITC kit was obtained from BD Pharmingen (559763, USA).
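The quadrant gating described in Section 3.2 (Figure 2(a)) amounts to thresholding the two fluorescence channels; a minimal sketch (the gate positions and function names are illustrative assumptions, not instrument settings from the study):

```python
def classify_quadrants(cells, fitc_cutoff, pi_cutoff):
    """Count annexin V-FITC / PI events per quadrant:
    lower-left = viable, lower-right = early apoptotic,
    upper-right = late apoptotic, upper-left = necrotic/damaged.
    Each cell is an (annexin_fitc, pi) intensity pair; cutoffs are assumed gates."""
    counts = {"viable": 0, "early_apoptotic": 0, "late_apoptotic": 0, "necrotic": 0}
    for fitc, pi in cells:
        if fitc < fitc_cutoff and pi < pi_cutoff:
            counts["viable"] += 1
        elif fitc >= fitc_cutoff and pi < pi_cutoff:
            counts["early_apoptotic"] += 1
        elif fitc >= fitc_cutoff and pi >= pi_cutoff:
            counts["late_apoptotic"] += 1
        else:
            counts["necrotic"] += 1
    return counts

def apoptotic_percent(counts):
    """Percentage of apoptotic cells = early + late apoptotic events over all events."""
    total = sum(counts.values())
    return 100.0 * (counts["early_apoptotic"] + counts["late_apoptotic"]) / total
```

The reported apoptotic percentage is then the sum of the early and late apoptotic quadrants over all gated events.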
Measurement of Mitochondrial Transmembrane Potential (Δψm)
A 0.5 mL aliquot of the single-cell suspension prepared above (containing about 5 × 10⁵ cells) was incubated with JC-1 working solution at 37 °C for 20 min in a 5% CO₂ incubator. After washing and resuspension in JC-1 assay buffer, the mitochondrial membrane potential was assayed by flow cytometry. The JC-1 kit was obtained from BD Pharmingen (USA, 551302).
2.6. Quantitative Real-Time PCR. At the indicated time points, the lungs from eight mice in each group were crushed into powder with liquid nitrogen. Total RNA was prepared with TRIzol (9108/9109, Takara, Otsu, Japan) according to the manufacturer's recommendations, reverse transcribed with random hexamers (PrimeScript™ RT reagent Kit, RR047A, Takara, Japan), and amplified with specific primers. The primers were designed using Primer 5 software or the NCBI primer design tool (Table 1) and synthesized at Sangon Biotech (Shanghai, China). The expression of caspase-3, caspase-9, Bax, and Bcl-2 transcripts is shown relative to that of β-actin using the 2^−ΔΔCT method.
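The 2^−ΔΔCT calculation reduces to a few lines; a sketch with hypothetical CT values (the calibrator here would be the uninfected control sample):

```python
def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """Livak 2^-ΔΔCT method: normalize the target gene's CT to β-actin,
    then to the calibrator (e.g., uninfected control) sample."""
    delta_ct = ct_target - ct_actin            # ΔCT of the sample
    delta_ct_cal = ct_target_cal - ct_actin_cal  # ΔCT of the calibrator
    return 2.0 ** -(delta_ct - delta_ct_cal)   # fold change relative to calibrator
```

A target crossing the threshold one cycle earlier than in the calibrator, at equal β-actin, corresponds to a two-fold higher transcript level.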
Western Blotting
The tissue proteins were extracted with RIPA lysis buffer. After equalization of total protein concentration, the proteins were separated by SDS-PAGE and transferred by semidry blotting onto nitrocellulose membranes. The membrane was blocked and incubated overnight at 4 °C with rabbit anti-mouse caspase-3, caspase-9, Bax, Bcl-2, and GAPDH antibodies (ab32503, ab182858, ab184787, and ab202068, Abcam; 5174, Cell Signaling Technology). After incubation with peroxidase-conjugated goat anti-rabbit IgG (7074, Cell Signaling Technology), the blot was visualized by ECL™ (P0018A, Beyotime Technology) and X-ray film. The expression of apoptosis-associated proteins is shown relative to that of GAPDH using Quantity One software.
2.8. Immunohistochemistry. The paraffin sections were treated with 3.0% hydrogen peroxide followed by boiling sodium citrate solution and incubated overnight at 4 °C with rabbit anti-mouse primary antibodies against caspase-3, caspase-9, Bax, and Bcl-2. The sections were then processed with the SABC method (SA1020, Wuhan Boster Bio-Engineering Limited Company, China) and visualized by DAB. Finally, the stained sections were photographed with a digital camera under 1000x magnification (Nikon DS-Ri1, Japan).
Statistical Analysis
The SPSS 17.0 statistical software package for Windows was used for statistical tests. All results are expressed as mean ± standard deviation. Significant differences among the four groups were analyzed by analysis of variance (LSD or Dunnett's T3). A value of p < 0.05 was accepted as statistically significant. The change rate was calculated by the following formula; DIO and lean in the figures indicate the change rates of the DIO and lean mice, respectively.
Change rate (%) = (value of infected mice − value of uninfected mice) / (value of uninfected mice) × 100%
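The change-rate formula above translates directly into code (a trivial sketch; the function name is ours):

```python
def change_rate(infected_value, uninfected_value):
    """Change rate (%) of infected mice relative to uninfected controls,
    per the formula in the Statistical Analysis section."""
    return 100.0 * (infected_value - uninfected_value) / uninfected_value
```

So a measurement rising from 20 in uninfected to 30 in infected mice gives a change rate of +50%.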
3.1. Pathological Injuries of the Lung following E. coli Infection. As shown in Figure 1, the lung exhibited typical acute inflammation in both the lean- and DIO-E. coli groups at 12 h after infection. Many neutrophils infiltrated the bronchioles and alveolar lumen. Moreover, hyperaemia and hemorrhage of the alveolar wall were observed, as well as adjacent alveolar fusion and compensatory enlargement.
3.2. Changes in the Percentages of Apoptotic Cells in the Lung following E. coli Infection. As shown in Figure 2(a), cells in the left lower quadrant represent apoptosis-negative cells, and cells in the right lower or upper quadrant represent apoptotic cells at an early or late phase, respectively. The changes in the percentage of apoptotic cells in the lung displayed a different tendency between the lean and DIO groups (Figure 2(b)). The percentage of apoptotic cells in the lean-E. coli group was significantly higher (p < 0.05) than that in the lean group only at 12 h and 24 h, while the value in the DIO-E. coli group was significantly higher (p < 0.05) than that in the DIO-uninfected group from 12 h to 72 h. Moreover, the line chart (Figure 2(c)) showed that the change rate of the apoptotic cell percentage in the lean mice peaked at 12 h, while that in the DIO mice continued to rise to 72 h.
3.3. Effect of Mitochondrial Transmembrane Potential (Δψm) in the Lung following E. coli Infection. The induction of apoptosis was associated with perturbation of mitochondrial functions. Here, the changes in Δψm were examined using the fluorescent dye JC-1. Cells in the right upper quadrant represent high electronegativity, and cells in the right lower quadrant represent low electronegativity (Figure 3(a)). As shown in Figures 3(a) and 3(b), the percentages of pulmonary cells depolarized with collapse of Δψm were significantly increased (p < 0.05) in the lean-E. coli group from 12 h to 72 h compared with the lean-uninfected group. However, the values in the DIO-E. coli group were higher only at 24 h and 72 h (p < 0.05) than those in the DIO-uninfected group. The change rate of decreased Δψm of pulmonary cells was similar to the change rate of the apoptotic percentage (Figure 3(c)).
3.4. Changes in Bax, Bcl-2, Caspase-3, and Caspase-9 Relative mRNA Expressions in the Lung following E. coli Infection. In the lean-E. coli group, the mRNA expression levels of Bax and caspase-9 were significantly increased (p < 0.05) at 12 h and 24 h, and caspase-3 from 12 h to 72 h, in comparison with the lean-uninfected group, while Bcl-2 was significantly decreased (p < 0.05) at 12 h. Compared with the DIO-uninfected group, the Bax mRNA levels of the DIO-E. coli group were significantly increased at 72 h, as were caspase-3 and caspase-9 at 24 h and 72 h (p < 0.05), while Bcl-2 was significantly decreased (p < 0.05) only at 72 h (Figures 4(a)-4(d)).
Among the four groups, the lean-E. coli group exhibited the highest ratio of Bax/Bcl-2 at 12 h but the DIO-E. coli group at 72 h (Figure 4(e)). As exhibited by the line chart (Figures 4(f) and 4(g)), the change rates of these apoptotic regulators were the highest in the lean mice at 12 h, while the peak change rates in the DIO mice were delayed to 24 h or 72 h.
3.5. Changes in Bax, Bcl-2, Caspase-3, and Caspase-9 Relative Protein Expression in the Lung following E. coli Infection. As shown in Figure 5, the relative protein expressions of Bax and caspase-9 were significantly increased in the lean-E. coli group in comparison to the lean-uninfected group at 12 h (p < 0.05) and caspase-3 at 12 h and 24 h. Compared with the DIO-uninfected group, the caspase-3 and caspase-9 protein levels were significantly increased in the DIO-E. coli group at 24 h and 72 h (p < 0.05) and Bax at 12 h and 72 h (p < 0.05). Furthermore, the Bcl-2 protein level was lower in the DIO mice than in the lean mice at 0 h (p < 0.05). After infection, the Bcl-2 protein value declined at 12 h and 24 h in the lean-E. coli group and at 72 h in the DIO-E. coli group when compared with each uninfected control, respectively (p < 0.05).
The increasing tendency of the Bax/Bcl-2 protein expression ratio was similar to that of its mRNA expression ratio (Figure 5(f)). Overall, the line chart of change rates showed that the apoptotic protein levels changed the most at 12 h in the lean mice but at 72 h in the DIO mice (Figures 5(g) and 5(h)).
3.6. Subcellular Localization of Bax, Bcl-2, Caspase-3, and Caspase-9 Proteins in the Lung. As shown in Figure 6, a few positive caspase-9 proteins were observed on the alveolar wall in the lean-and DIO-uninfected groups. After infection, large numbers of positive caspase-9 were visualized in the neutrophil-infiltrated areas or the alveolar wall in the leanand DIO-E. coli groups. The location of caspase-3 protein was similar to that of caspase-9, but its content was lower than that of caspase-9. Bax-positive protein presented a scattered distribution, and a few Bax were seen in the alveolar wall of the uninfected groups, while there were more Bax in the neutrophil-infiltrated areas of the E. coli-infected groups. On the contrary, numerous Bcl-2-positive proteins appeared mainly in the epithelial cells of respiratory bronchioles in the uninfected groups but a few Bcl-2 in the E. coli-infected groups.
Discussion
Escherichia coli is one of the possible etiologies of nosocomial pneumonia, as well as a strong inducer of proinflammatory cytokine production by alveolar macrophages [21]. In the present study, 10⁹ CFU/mL E. coli was intranasally instilled into mice (either lean or DIO) to establish acute pneumonia. According to histopathological observation, typical acute inflammation appeared in the lung, with a large number of neutrophils infiltrating the alveolar and bronchiolar lumen. When inflammation occurred, these inflammatory cells produced various cytokines. Thus, after infection, the cytokine and adipocytokine levels were significantly increased in the mice [16].
In bacterial infection, the host depends mainly on the selective phagocytosis of neutrophils to eliminate invaders [22]. Neutrophils are able to synthesize and secrete proinflammatory cytokines in response to a variety of inflammatory stimuli [23], and some typical cytokines, like tumor necrosis factor- (TNF-) α, interferon- (IFN-) γ, and interleukins (IL), can trigger cell apoptosis [24]. Moreover, neutrophil recruitment can activate the oxidative response, which is a primary host defense mechanism in acute pneumonia and a mediator of apoptosis [25,26]. In addition, the generation of reactive oxygen species (ROS) during oxidative stress is capable of inducing mitochondrial DNA damage and triggering apoptosis [27]. Our previous experiments indicated that pulmonary oxidative stress was notable in mice after nasal instillation with E. coli [16]. Above all, acute bacterial infections are closely associated with apoptosis, and this study puts emphasis on the mitochondrial apoptosis pathway in the lean and DIO mice with acute E. coli pneumonia.
As mentioned above, bacteria play an important role in triggering apoptosis. The mechanism of apoptosis in pulmonary diseases has two main hypotheses, namely, the "neutrophilic hypothesis" and the "epithelial hypothesis" [28], meaning that cell apoptosis in pneumonia occurs in two cell types, neutrophils and epithelia. Extensive evidence of neutrophil and alveolar epithelial cell apoptosis has been described in bacterial pneumonia and lipopolysaccharide- (LPS, one of the most important virulence factors of gram-negative bacteria) induced lung injury [29][30][31]. Besides, neutrophils regulate and alleviate inflammation through spontaneous apoptosis [32,33]. In accordance with these studies, increased percentages of apoptotic cells were detected by flow cytometry in the present work. Proteins of the Bcl-2 family determine cell fate. The Bcl-2 family executes two opposing functions, comprising prosurvival proteins, such as Bcl-2, Bcl-w, and MCL-1, and proapoptotic proteins, such as Bax, Bid, and Bad [34]. These proteins regulate the release of cytochrome c from the mitochondrial intermembrane space, which forms an apoptosome with apoptotic protease-activating factor 1 (Apaf-1) and activates caspase-9, thus initiating a caspase cascade that ultimately leads to cell apoptosis [35,36]. Studies on human monocytic U937 cells and epithelial HEp-2 cells showed that E. coli could induce apoptosis with increased expression of Bax and reduced expression of Bcl-2, resulting in increased levels of released cytochrome c, caspase-3, and caspase-9 [37,38]. In accordance with these previous studies, after infection with E. coli, the expressions of Bax, caspase-3, and caspase-9 were significantly increased while Bcl-2 was decreased in the infected groups. Interestingly, the dramatic fold changes in the percentage of cell apoptosis and in the expression of apoptotic parameters were noted at 12 h or 24 h in the lean-E. coli group, whereas at 24 h or 72 h in the DIO-E. coli group.
Meanwhile, the increased cytokine and adipocytokine levels peaked at 12 h or 24 h in the lean mice, while these parameters continually increased over the infection time and peaked at 72 h post infection in the DIO mice [16]. These results indicated that the DIO mice may need a longer time to respond to the inflammation than the lean mice. As is well known, obesity is a medical condition in which excess body fat increases body weight, resulting in greater production of adipokines secreted by adipose tissue [39,40]. Leptin is the first discovered adipokine derived from adipocytes and can modulate neutrophil chemotaxis and ROS release [41]. Previous studies have reported that leptin showed antiapoptotic properties on neutrophils via the NF-κB and MEK1/2 MAPK pathways, leading to delayed neutrophil apoptosis in vitro [42]; inhibited thymic cell apoptosis through JAK-2 activation and the IRS-1/PI3-K pathway in Wistar rats [43]; and reduced degenerative nucleus pulposus cell apoptosis by promoting autophagy in vitro [44]. Various other adipokines, like vaspin, visfatin, and adiponectin, can inhibit apoptosis as well. Vaspin acts as a ligand for the cell-surface GRP78/VDAC complex, inhibiting endothelial cell apoptosis. Visfatin shows antiapoptotic properties in TNF-α-induced apoptosis in breast cancer cells and palmitate-induced apoptosis in pancreatic β-cells [45,46]. Adiponectin inhibits neutrophil apoptosis via activation of AMPK, PKB, ERK 1/2, and MAPK [47]. Therefore, obesity, with increased levels of adipokines like leptin and vaspin, might inhibit or delay cell apoptosis, in accordance with our present results. Furthermore, cytokines and oxidative stress induced by inflammation are capable of triggering apoptosis [24,27].
Following the infection, the proapoptotic effect of cytokines and oxidative stress was enhanced gradually, counteracting the apoptosis-inhibiting or apoptosis-delaying effect exerted by adipokines and resulting in a greater cell apoptotic rate in the DIO-E. coli group after 24 h. These results also highlight a significant role for neutrophil apoptosis in inflammation. Indeed, we found through immunohistochemistry staining (especially of caspase-3-positive proteins) that two cell types, neutrophils and epithelial cells, underwent apoptosis during infection; more importantly, adipokines could inhibit constitutive or spontaneous neutrophil apoptosis. Thus, the neutrophil apoptosis delayed or inhibited by DIO during infection was partly able to determine the infection process in the lung.
The mitochondrion is a double-membrane-bound organelle found in most eukaryotic organisms and acts as the source of chemical energy (adenosine triphosphate (ATP)) in cells [48]. Upon stimulation, mitochondrion-mediated apoptosis can be initiated in a receptor-independent manner that increases mitochondrial inner membrane permeability, accompanied by Δψm depolarization [49]. Stimulated by inflammation and infection, the proapoptotic protein Bax and the antiapoptotic protein Bcl-2 bind competitively to ANT (adenine nucleotide translocator) or VDAC (voltage-dependent anion channel) and regulate the opening of the MPTP (mitochondrial permeability transition pore). Once the PT pores open, the mitochondrial transmembrane potential decreases dramatically, leading to the release of cytochrome c and the gradual activation of caspase-9. Obese individuals have reduced oxidative phosphorylation (OXPHOS) gene expression and oxygen consumption and increased oxidative stress and ROS production, causing mitochondrial dysfunction [50]. In the present study, the percentage of pulmonary cells depolarized with collapse of the Δψm was higher in the DIO mice than in the lean mice. In contrast, obese Zucker rats displayed no difference in oxygen consumption, ATP synthesis, membrane potential, citrate synthase, or cytochrome c oxidase activities compared with lean Zucker rats [51]. After infection, the percentage significantly increased in both the lean- and DIO-E. coli groups, and the increase was more dramatic at 12 and 24 h in the lean-E. coli group but at 72 h in the DIO-E. coli group. These results suggested that E. coli pneumonia caused the Δψm change, by which the mitochondrion-mediated apoptotic pathway was activated in the lungs of both the lean and DIO mice, but with a delay in the latter.
For subcellular localization of these apoptotic factors, immunohistochemistry was performed. Bax should be located in the mitochondrial membrane [52]. In the present study, Bax-positive protein was detected in a dispersed distribution in the lean- and DIO-E. coli groups. Identical to a previous report, Bcl-2 immunostaining is cytoplasmic and granular and, before infection, restricted in normal bronchial epithelium to the basal epithelial layer or to some epithelial cells oriented perpendicularly to the basal lamina [53]; after infection, however, Bcl-2 protein expression in the bronchial epithelial cells had almost vanished, and only a little Bcl-2 was noted in the neutrophil-infiltrated areas. Caspase-3 and caspase-9 are located in the mitochondria, cytosol, and nucleus of cells [54,55]. After infection with E. coli, caspase-3 and caspase-9 proteins were mainly displayed in the cytoplasm of inflammatory cells and sloughed pulmonary epithelial cells in the neutrophil-infiltrated areas. Taken together, the immunohistochemistry results suggested that these mitochondrion-mediated apoptotic proteins were mainly located in the neutrophil-infiltrated areas after infection.
Conclusions
In conclusion, nasal infection with E. coli was able to establish bacterial pneumonia in mice. And after being infected with E. coli, both the lean and DIO mice exhibited increased percentages of apoptosis; decreased pulmonary Δψm; upregulated expressions of Bax, caspase-3, and caspase-9 mRNA and protein; and downregulated expression of Bcl-2. However, most impressively, almost all the above-mentioned parameters peaked at 12 h or 24 h in the lean-E. coli group but at 24 h or 72 h in the DIO-E. coli group. These results indicated that the DIO mice presented a delayed cell apoptosis in the acute pneumonia induced by E. coli infection through the mitochondrial apoptotic pathway. Meanwhile, the major cell exhibiting delayed apoptosis by obesity might be neutrophils in the mice with E. coli pneumonia. The observations reported here provide the foundation for further investigations on the relationship between obesity and bacterial infection.
Data Availability
The cytokine contents and oxidative stress data used to support the findings of this study have been deposited in the PubMed repository (10.1038/s41598-018-32420-3). The flow cytometry, qRT-PCR, and western blot data used to support the findings of this study are included within the article.
"year": 2019,
"sha1": "a6a742c59c1271bcb35f8410022e36fb697a68bc",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2019/1968539.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e989152f73f00ad3dca8006742b64faea6a6d043",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Estrogen-Receptor, Progesterone-Receptor and HER2 Status Determination in Invasive Breast Cancer. Concordance between Immuno-Histochemistry and MapQuant™ Microarray Based Assay
Background Hormone receptor status and HER2 status are of critical interest in determining the prognosis of breast cancer patients. Their status is routinely assessed by immunohistochemistry (IHC). However, IHC is subject to intra-laboratory and inter-laboratory variability. The aim of our study was to compare the estrogen receptor, progesterone receptor and HER2 status as determined by the MapQuant™ test to the routine immuno-histochemical tests in early stage invasive breast cancer in a large comprehensive cancer center. Patients and Methods We retrospectively studied 163 invasive early-stage breast carcinomas with standard IHC status. The genomic status was determined using the MapQuant™ test, which provides the genomic grade index. Results We found only 4 tumours out of 161 (2.5%) with discrepant IHC and genomic results concerning ER status. The concordance rate between the two methods was 97.5% and the Cohen's Kappa coefficient was 0.89. Comparison between the MapQuant™ PR status and the PR IHC status gave more discrepancies. The concordance rate between the two methods was 91.4% and the Cohen's Kappa coefficient was 0.74. The HER2 MapQuant™ test was classified as "undetermined" in 2 out of 163 cases (1.2%). One HER2 IHC-negative tumour was found positive with a high HER2 MapQuant™ genomic score. The concordance rate between the two methods was 99.3% and the Cohen's Kappa coefficient was 0.86. Conclusion Our results show that the MapQuant™ assay, based on mRNA expression, provides an objective and quantitative assessment of estrogen receptor, progesterone receptor and HER2 status in invasive breast cancer.
Introduction
The Estrogen Receptor (ER) and Progesterone Receptor (PR) status are of critical interest in determining the prognosis of breast cancer patients and the potential benefit of adjuvant hormonal therapy. Their status is routinely assessed as well as the HER2 status that is also a prognosis marker and determines patient's eligibility to monoclonal antibody trastuzumab therapy.
The current standard methodology for measuring ER, PR and HER2 status is immunohistochemistry (IHC), with an additional fluorescent in situ hybridization assay to clarify HER2 immuno-histochemical status. It is subject to intra-laboratory and inter-laboratory variability. For instance, inter-observer agreement in scoring hormone receptor status by IHC can vary from moderate to almost perfect (k = 0.78 to 0.85 for ER status, k = 0.71 to 0.72 for PR status [1] [2]). The discordance rate is mainly due to differences in interpretation of the specificity of staining and of the histological structures after immunostaining. For example, Rhodes et al [3] found considerable inter-laboratory variation, especially for low estrogen receptor positivity, with a false negative rate between 30% and 60%. Arihiro et al [4] studied inter-method variability due to the effects of fixation, processing and different evaluation criteria (k = 0.34 for ER status, k = 0.45 for PR status). The larger study by Viale et al [5], comparing central versus local assessment of IHC hormone status (with a 10% cut-off for positivity), revealed a reclassification (after central review) of 69.5% and 1.1% of the ER-negative and ER-positive tumours, and of 44.5% and 4.6% of the PR-negative and PR-positive tumours. They concluded that central IHC should be performed whenever possible to correct for the influence of the laboratory where the assay was performed. The quality of HER2 assays has also been studied, and a similarly high degree of discordance between local and central laboratories has been demonstrated (Table in S1 Table) [6][7][8][9].
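The kappa statistics quoted throughout these comparisons measure agreement beyond chance between two raters or methods. As an illustrative sketch (not the authors' code), Cohen's kappa for a square agreement table is (observed agreement − chance agreement) / (1 − chance agreement):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table
    (rows = method A categories, columns = method B categories)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    # observed agreement: proportion of cases on the diagonal
    p_o = sum(table[i][i] for i in range(k)) / n
    # chance agreement: product of marginal proportions, summed per category
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives kappa = 1, agreement no better than chance gives 0, which is why kappa is preferred over the raw concordance rate when category prevalences are unbalanced.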
Several studies investigated alternative methods to determine the hormonal receptor status (ER, PR) and HER2 status with multi-gene signatures to address these limitations [10][11][12][13][14]. The genomic grade index (GGI) is a 97-gene measure of tumour grade. It is assessed by the MapQuant test, based on an Affymetrix microarray-based assay. Previous studies have shown that the genomic grade is an important tool to assess breast cancer tumour grade [15][16][17] and prognosis [18][19][20][21]. It has been demonstrated that the GGI could also predict response to chemotherapy [22,23]. By using the MapQuant test not only to determine the genomic grade but also to assess the prognostic and predictive markers ER, PR and HER2, we could potentially obtain a more reliable and informative determination of tumour characteristics than with immunohistochemistry, leading to a more reliable treatment decision.
The aim of our study was to compare the ER, PR and HER2 status as determined by the MapQuant test to the routine immuno-histochemical tests in early stage invasive breast cancer in a large comprehensive cancer center.
Patients and Methods

Patients
The main inclusion criteria for the study were the absence of pathologic axillary lymph node involvement, a follow-up above 10 years, and the absence of neoadjuvant therapy before surgery. Using these criteria, 456 early-stage (T1-T2 pN0) breast cancer patients treated between 1995 and 1996 could be retrieved from the Institut Curie database. From these cases, 169 flash-frozen samples stored at −80°C immediately after lumpectomy or mastectomy, and with more than 50% of tumor cells, were available. The histological features (histological type, histological grade assessed according to Elston and Ellis criteria, mitotic index, Ki67 proliferation index, ER status, PR status, HER2 overexpression status) were re-assessed for each sample by a large panel of pathologists experienced in breast pathology, using tissue sections (4 μm) prepared from a representative part of each tumour block fixed in AFA (alcohol/formalin/acetic acid).
From the 169 cases available for analysis, 163 passed quality controls and constituted the reference cohort. The clinical and pathological features of these 163 cases are summarized in Table 1. Tumours corresponded mainly to ductal (78%) or lobular (13.5%) infiltrating carcinoma. All of them were free of axillary lymph node metastases. Tumours were classified as histological grade I in 32.5%, grade II in 43% and grade III in 24.5% of cases. Immuno-phenotyping showed that ER was expressed in 86% (140/163) of the tumors, PR in 68% (111/163), HER2 in 6% (10/163) whereas 10% (17/163) remained negative for the three markers. The median follow-up duration was 154 months (6-182).
HER2 status
After rehydration and antigenic retrieval in citrate buffer (10 mM, pH 6.1), the tissue sections were stained for HER-2 (clone CB11, Novocastra, 1/1000). Revelation of staining was performed using the Vectastain Elite ABC peroxidase mouse IgG kit (Vector Burlingame, CA) and diaminobenzidine (Dako A/S, Glostrup, Denmark) as chromogen. Positive and negative controls were included in each slide run. HER2 overexpression was determined according to GEFPICS (Groupe d'étude des facteurs pronostiques immunohistochimiques dans le cancer du sein, Unicancer) guidelines [24], with FISH performed in all cases of a HER2 2+ result.
MapQuant Dx protocol and Affymetrix data pre-processing
All 169 tumour samples available for genomic grade analysis contained more than 50% of cancer cells as assessed by H&E staining on frozen histological sections of the samples used for the transcriptome analysis (manufacturer's recommendation: above 30%). RNA was extracted using the Trizol method (Invitrogen) and purified using the miRNeasy kit (Qiagen). The concentration, integrity and purity of each RNA sample were measured using the RNA 6000 LabChip kit with the Agilent 2100 Bioanalyser. The DNA microarrays used in this study were the Affymetrix HGU133 Plus 2.0 arrays (Affymetrix, Santa Clara, CA). Details of the RNA amplification, labeling and hybridization are available from the Affymetrix website (http://www.affymetrix.com). Chips were scanned using the GCS 3000 7G scanner (Affymetrix). Affymetrix quality control variables were used to check data homogeneity. Profiles were normalized using the RMAdx procedure (Robust Multi-array Average). RMA was applied to a reference set of microarrays (191 high-quality profiles), storing the parameters of the RMA fit. To process additional microarrays, these parameters are applied directly, without any re-estimation.
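The key property of this normalization scheme is that stored parameters are applied to each new array instead of re-normalizing the whole batch. The quantile-normalization step of that idea can be sketched as follows; this is only a minimal illustration of the principle, not the RMAdx implementation, and the function name and inputs are illustrative.

```python
def apply_reference_quantiles(sample, reference_quantiles):
    """Quantile-normalize one new sample against stored reference quantiles.

    `reference_quantiles` are assumed sorted ascending and of the same
    length as `sample`; each value in `sample` is replaced by the stored
    reference quantile at its rank, so no re-estimation is needed.
    """
    ranks = sorted(range(len(sample)), key=lambda i: sample[i])
    normalized = [0.0] * len(sample)
    for rank, idx in enumerate(ranks):
        normalized[idx] = reference_quantiles[rank]
    return normalized

# The highest-intensity probe receives the highest reference quantile:
print(apply_reference_quantiles([5.0, 1.0, 3.0], [0.1, 0.2, 0.3]))  # → [0.3, 0.1, 0.2]
```

Because only the ranks of the new sample matter, a single array can be processed in isolation with identical results to batch processing, which is what makes the stored-parameter approach suitable for a diagnostic test.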
ER, PR, HER2 genomic status determination
MapQuant Dx Genomic Hormone Receptors (HR) quantifies the mRNA of 20 genes involved in breast-specific estrogen signaling and transcriptional cascades. The expression levels of these genes have been combined in an "ER score" and a "PR score" that best discriminate tumors expressing estrogen and/or progesterone receptors. Each score is based on a model fitted on 137 (76 ER- 0% vs 61 ER+ >60%) and 142 (93 PR- 0% vs 49 PR+ >30%) tumours respectively. The cut-off was set at 0, with scores varying between -1.5 and +1.5. Based on this genomic score, ER and PR status are attributed to each tumour sample. A confidence interval (3:1 odds ratio of being ER- or ER+, PR- or PR+ respectively) was defined around the cut-off to ensure robustness and accuracy of status. For ER or PR scores within this confidence interval, the status is defined as "equivocal". MapQuant Dx genomic HER2 quantifies the mRNA of 6 genes of the HER2 amplicon whose activity leads to HER2 protein expression at the cell membrane level. The genomic HER2 model was trained on 152 tumours (126 IHC 0 vs 26 IHC 3+). The cut-off was set at 0, with scores varying between -3 and +3. Based on this genomic score, a HER2 status is attributed to each tumour sample. A confidence interval (3:1 odds ratio of being HER2- or HER2+) was defined around the cut-off to ensure robustness and accuracy of status determination. For HER2 scores within this confidence interval, the HER2 status is defined as "equivocal".
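The three-way call described above (negative / equivocal / positive around a cut-off of 0) can be sketched as a small function. The width of the equivocal band here is a placeholder, since the test defines it via a 3:1 odds ratio rather than fixed score bounds.

```python
def call_status(score, equivocal_band=(-0.2, 0.2)):
    """Return 'negative', 'equivocal' or 'positive' for a genomic score.

    The cut-off is 0, as in the text; the equivocal band bounds are
    hypothetical values standing in for the 3:1 odds-ratio interval.
    """
    low, high = equivocal_band
    if low <= score <= high:
        return "equivocal"
    return "positive" if score > 0 else "negative"

print(call_status(1.2), call_status(-0.5), call_status(0.1))  # → positive negative equivocal
```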
Statistical Analysis
Baseline characteristics were compared between groups using Chi-square or Fisher's exact tests for categorical variables and Student's t-tests for continuous variables. The analyses were performed using the R software (http://cran.r-project.org).
Ethical approval
All experiments were performed retrospectively and in accordance with the French Bioethics Law 2004-800, the French National Institute of Cancer (INCa) Ethics Charter and after approval by the Institut Curie review board and ethics committee (Comité de Pilotage of the Groupe Sein). In the French legal context, our institutional review board waived the need for written informed consent from the participants. Moreover, women were informed of the research use of their tissues and did not declare any opposition to such research. Data were analyzed anonymously.
Results
We removed the equivocal MapQuant results from the cohort before determining the concordance rates.
Comparison between MapQuant™ and IHC ER status
The ER Immunohistochemistry analysis showed that 86% of the tumours were classified as ER-positive (140/163). 142 out of 161 tumours were classified as genomic ER-positive (88%). The concordance rate between the two methods was 97.5% and the Cohen's Kappa coefficient was 0.89.
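The concordance rate and Cohen's kappa reported here can be reproduced from a 2×2 agreement table. The counts below are reconstructed from the reported totals (138 IHC+/genomic+, 0 IHC+/genomic-, 4 IHC-/genomic+, 19 IHC-/genomic-), which is an assumption consistent with the figures in the text.

```python
# 2x2 agreement table for ER status; rows = IHC call, columns = genomic call
a, b, c, d = 138, 0, 4, 19  # ++, +-, -+, -- (reconstructed counts)

n = a + b + c + d
observed = (a + d) / n                           # raw concordance rate
p_both_pos = ((a + b) / n) * ((a + c) / n)       # chance agreement, both positive
p_both_neg = ((c + d) / n) * ((b + d) / n)       # chance agreement, both negative
expected = p_both_pos + p_both_neg
kappa = (observed - expected) / (1 - expected)   # Cohen's kappa

print(round(observed, 3), round(kappa, 2))  # → 0.975 0.89
```

With these counts the calculation recovers the 97.5% concordance and kappa of 0.89 given in the text, which supports the reconstruction.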
The ER MapQuant test was classified as "equivocal" in 2 out of 163 cases (1%). Both tumours were IHC-positive, with 20% and 40% stained tumour nuclei, respectively.
We found only 4 tumours out of 161 (2.5%) with discrepant IHC and genomic results (Fig 1). The distribution of ER MapQuant scores related to the ER-IHC status is shown in Fig 2A. The four IHC-negative tumours with a positive ER MapQuant expression value showed an absence of stained tumour nuclei. Fig 3 shows the ER-IHC slides of these discordant cases compared with an ER-IHC-negative case also found negative with the ER MapQuant test. 3 out of these 4 ER-IHC-negative discordant cases had a high ER MapQuant expression value above 1 (Fig 1).
Comparison between the MapQuant™PR status and the PR IHC status
The PR Immunohistochemistry analysis showed that 68% of the tumours were PR-positive (111/163). 107 out of 128 tumours were classified as genomic PR-positive (83%). The concordance rate between the two methods was 91.4% and the Cohen's Kappa coefficient was 0.74.
The PR status discrepancies were observed exclusively in the PR IHC-negative tumour subgroup. 11 out of 21 PR IHC-negative tumours (34%) were classified PR MapQuant positive. The PR MapQuant test values ranged between 0.5 and 1.0 (Figs 1 and 2B), while the percent positivity for IHC ranged from 10 to 100%. The distribution of PR MapQuant expression values related to the PR-IHC status is shown in Fig 2B.

Comparison between the MapQuant™ HER2 status and the HER2 IHC status

The HER2 Immunohistochemistry analysis showed that only 6% of the tumours were HER2-positive (10/163). 11 out of 161 tumours were classified as genomic HER2-positive (7%). The concordance rate between the two methods was 99.3% and the Cohen's Kappa coefficient was 0.86.
One HER2 IHC-negative tumour was found positive with a high HER2 MapQuant genomic score (Fig 1). Fig 4 shows
Discussion
Our study was the first to determine the accuracy of the MapQuant assay to assess the ER, PR and HER2 status. Several studies investigated the accuracy of alternative methods for ER, PR and HER2 evaluations that may be more reliable and accurate than IHC in invasive breast cancers [10][11][12][13][14].
In our study, the genomic status correlated well with the IHC ER status. Our results are in agreement with Gong and colleagues [13], who investigated the use of Affymetrix microarrays for quantification of ESR1 and ERBB2 mRNA levels. In this paper, an ESR1 mRNA cutoff value was identified which discriminates ER-positive tumours with an overall accuracy of 90% in the training set, and 88% and 96% in two validation sets.
Roepman and colleagues [10] compared IHC with a second microarray-based mRNA expression methodology (Mammaprint) and found a high level of concordance for ER status (93%). In their study, 4% of IHC-positive samples were classified negative using the microarray. Viale et al [11] also found good concordance for ER status (98%) with the TargetPrint test in the first 800 patients enrolled in the MINDACT trial.
Badve and colleagues [11] compared a central 21-gene RT-PCR assay (OncotypeDX) to a local and a central IHC assay. They obtained good results for the ER status determination. Concordance between local IHC and central RT-PCR was 91%, and 93% between central IHC and central RT-PCR. Although concordance was high, IHC ER-negative cases that were RT-PCR positive (13% and 14% by local and central IHC) were more common than IHC-positive cases that were RT-PCR negative (1% and 5% by local and central IHC). Varga et al [11] detected a high concordance in hormone receptor and HER2 status between conventional IHC and OncotypeDX.
In our study, the PR status analysis showed the most discordant results between the two methodologies: 34% of the tumours classified PR-negative by IHC were positive with the MapQuant test. Furthermore, the "equivocal" group represented 21.4% of the tumours.
Our findings are in agreement with other studies on alternative gene expression technologies that report a lower concordance between PR mRNA levels and IHC. Badve and colleagues [11] found concordances of 88% and 90% between central RT-PCR (OncotypeDX) and local and central IHC, respectively. Roepman et al [10] found a concordance of only 83% between microarray (Mammaprint) and central IHC, in line with the lower concordance of 85% reported with the TargetPrint test.
Concerning the HER2 status, there is a strong correlation between the two measures. We could see that using the genomic measure, we reclassified an IHC negative as genomic positive, which means that one extra patient should receive targeted therapy. The treatment decision for the equivocal group remains to be determined. Knowing the HER2 oncogenic mechanism (gene amplification leading to increased mRNA expression and subsequently protein overexpression), one can understand the high concordance between the assessment of protein expression by IHC analysis and gene status by MapQuant test. Gong [13] also compared the determination of HER2 status between IHC/FISH and Affymetrix gene expression profiling. They identified an overall accuracy of 93% in the training set, 89% and 90% in the two validation sets. The Mammaprint test also showed a 96% concordance for the HER2 status determination [10]. Baehner et al [14] found an overall concordance of 97% and a positive agreement of 98% between HER2 FISH assay and qRT-PCR using the Oncotype DX test. Dabbs et al [12] studied the same test in a large independent multicenter study. They showed even with an overall concordance of more than 95%, that the percent positive agreement between the OncotypeDX test and IHC/FISH was less than 50% because of the small number of positive cases heavily diluted by the large number of negative patients in this biased population.
The MapQuant test is based on gene expression and provides information on mRNA expression, whereas IHC gives information on protein expression. As underlined by Allred in an editorial on problems and solutions in the evaluation of hormone receptors in breast cancers [32], there is no reason to expect similar results or performance from two different tests measuring either protein or mRNA expression, despite the fact that studies have found good concordance results especially for ER status between the two methods [10,11].
The whole tumoural tissue (infiltrative carcinoma and DCIS (ductal carcinoma in situ)) is extracted to obtain mRNA for the MapQuant test, so the ER, PR and HER2 status determined with MapQuant reflects the infiltrative carcinoma, the DCIS and the normal glands, whereas the pathologist scores only the infiltrative carcinoma with IHC, excluding DCIS and normal breast tissue.
Moreover, the MapQuant test is performed on frozen tissue, whereas IHC is assessed on fixed tissue (formalin-fixed, paraffin-embedded). The two tests are thus based on two different tissue areas, and the discordant results can be explained by intratumoral heterogeneity.
The threshold for hormone receptor positivity in IHC can be set at 1% or 10% of positive cells [13]. It is usually set at 10% in France. We re-analyzed the cases around or below the 10% cut-off to make our results more reliable for comparison with other studies (Table in S2 Table).
If we use a 1% cut-off to define a positive hormone receptor status: -One case out of the 4 discordant cases would become ER positive (5% positivity) with IHC.
-3 cases out of the 11 discordant cases would become PR positive (5% positivity) with IHC.
These new results do not significantly change the concordance rates (3 instead of 4 discordant ER cases; 8 instead of 11 discordant PR cases). The cut-off divergence therefore does not explain the high discrepancy in PR status between the two assays.
Concerning the lower PR concordance, Roepman et al [10] observed a higher proportion of cases that were IHC-positive/microarray-positive than IHC-positive/microarray-negative. They raised the possibility of a tumor subgroup that wouldn't 'express protein despite the presence of mRNA transcripts'.
In our study, one patient would have been treated with trastuzumab therapy using the MapQuant test. The major risk of this treatment is cardiotoxicity. However, the NSABP B-31 trial recently revealed that only 4.0% of patients who received trastuzumab in addition to adjuvant chemotherapy experienced a cardiac event after 7 years of follow-up [33].
Conclusion
In conclusion, our results show that the MapQuant assay, based on mRNA expression, provides an objective and quantitative assessment of Estrogen receptor, Progesterone receptor and HER2 status in invasive breast cancer. The MapQuant test has performance similar to other gene expression profiling tests. It would need to be prospectively validated to prove its benefit and its medico-economic impact beyond the use of standard clinico-pathological prognosis variables to guide the choice of adjuvant treatment.
Supporting Information S1
Mycobacterium tuberculosis lineage 4 associated with cavitations and treatment failure
Background Mycobacterium tuberculosis genotyping has been crucial to determining the distribution and impact of different families on disease clinical presentation. The aim of the study was to evaluate the associations among sociodemographic and clinical characteristics and M. tuberculosis lineages from patients with pulmonary tuberculosis in Orizaba, Veracruz, Mexico. Methods We analyzed data from 755 patients whose isolates were typified by 24-loci mycobacterial interspersed repetitive unit–variable number of tandem repeats (MIRU–VNTR). The associations among patient characteristics and sublineages found were evaluated using logistic regression analysis. Results Among M. tuberculosis isolates, 730/755 (96.6%) were assigned to eight sublineages of lineage 4 (Euro-American). Alcohol consumption (adjusted odds ratio [aOR] 1.528, 95% confidence interval (CI) 1.041–2.243; p = 0.030), diabetes mellitus type 2 (aOR 1.625, 95% CI 1.130–2.337; p = 0.009), sputum smear positivity grade (3+) (aOR 2.198, 95% CI 1.524–3.168; p < 0.001) and LAM sublineage isolates (aOR 1.023, 95% CI 1.023–2.333; p = 0.039) were associated with the presence of cavitations. Resistance to at least one drug (aOR 25.763, 95% CI 7.096–93.543; p < 0.001) and having isolates other than Haarlem and LAM sublineages (aOR 6.740, 95% CI 1.704–26.661; p = 0.007) were associated with treatment failure. In a second model, multidrug resistance was associated with treatment failure (aOR 31.497, 95% CI 5.119–193.815; p < 0.001). Having more than 6 years of formal education was not associated with treatment failure. Conclusions Knowing M. tuberculosis genetic diversity plays an essential role in disease development and outcomes, and could have important implications for guiding treatment and improving tuberculosis control. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-023-08055-9.
Background
The agent responsible for tuberculosis belongs to the Mycobacterium tuberculosis complex (MTBC). Pulmonary tuberculosis is the most common disease presentation, reported in 4.8 million cases worldwide [1]. The state of Veracruz in southern Mexico reports the highest number of cases (2198) nationwide [2]. The incidence in the municipality of Orizaba, Veracruz surpassed the national incidence during the period 1995-2010 [3].
To date, nine M. tuberculosis lineages have been identified, each strongly associated with particular geographic regions [4,5]. In the Americas, tuberculosis is mainly caused by lineage 4, also known as the Euro-American lineage [6].
Mycobacterium tuberculosis genotyping is important because it contributes to knowledge of its genetic diversity [7,8]. The current gold standard for genotyping is the mycobacterial interspersed repetitive unit-variable number of tandem repeats (MIRU-VNTR) method. Currently, MIRUs are used as markers for strain classification and sub-classification. For example, within the Latin American & Mediterranean (LAM) family, a single repeat of MIRU40 has been proposed as a marker of the RD Rio sublineage [9].
Risk factors related to M. tuberculosis genetics help in the early identification of patients infected with lineages associated with increased risk of treatment failure, relapse, drug resistance and death [10]. External risk factors associated with active tuberculosis development are poverty, overpopulation, overcrowding and malnutrition, in addition to comorbidities such as human immunodeficiency virus (HIV) coinfection, diabetes mellitus type 2 (DM2), chronic kidney failure, silicosis, immunosuppressive therapies and addictions such as smoking and drinking [11,12].
In addition to host and environmental risk factors, tuberculosis epidemiology can also be influenced by M. tuberculosis genetic diversity [13]. Some lineages have shown differences in their virulence phenotypes, affecting transmissibility and pathogenesis, with implications for treatment outcomes and for the effectiveness of the BCG vaccine [6,14].
The aim of this study was to evaluate the association among sociodemographic and clinical characteristics and M. tuberculosis lineages from isolates of patients with pulmonary tuberculosis obtained in a population-based study conducted in Orizaba, Veracruz, Mexico from 1995 to 2010.
Study population and data collection
Between March 1995 and April 2010, passive case finding for pulmonary tuberculosis was carried out among people over 15 years of age who had had respiratory symptoms for more than two weeks in 12 municipalities of the health jurisdiction of Orizaba, Veracruz, Mexico. During this period, 1132 patients were diagnosed; for this study, 612 M. tuberculosis isolates were recovered from a strain collection and 143 more from a DNA collection using samples from these patients. We used the population-based cohort data from patients diagnosed with pulmonary tuberculosis from August 1, 1997, to April 30, 2010. The study was approved by the Ethics Committee (Ref. No. 1515). All participating patients signed informed consent forms.
As part of the cohort investigation, isolates were genotyped by 24-loci MIRU-VNTR, and susceptibility tests were performed as previously described [15]. The LAM RD Rio and RD115 sublineages were classified according to the presence of a single repeat in MIRU40 and MIRU02, respectively.
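The LAM sub-classification rule described above (a single repeat at MIRU40 for RD Rio, a single repeat at MIRU02 for RD115) can be expressed as a small helper. The function name, the fallback label, and the precedence of RD Rio when both loci match are illustrative assumptions, not details given in the text.

```python
def classify_lam(miru40_repeats, miru02_repeats):
    """Classify a LAM-family isolate from two MIRU-VNTR loci.

    Per the study's definitions: one repeat at MIRU40 marks RD Rio and
    one repeat at MIRU02 marks RD115. Checking MIRU40 first (so RD Rio
    wins if both loci show a single repeat) is an assumption.
    """
    if miru40_repeats == 1:
        return "LAM RD Rio"
    if miru02_repeats == 1:
        return "LAM RD115"
    return "LAM"

print(classify_lam(1, 2), classify_lam(2, 1), classify_lam(3, 2))
# → LAM RD Rio LAM RD115 LAM
```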
Definitions
The following sociodemographic variables were considered: sex, age, education level, dirt-floor home, rural residence, distance to the nearest health center, social security access, and consumption of alcohol, tobacco and illicit drugs. DM2 and HIV diagnoses were also considered. Information on the presence of acid-fast bacilli (AFB) in sputum samples was considered and graded as follows: 1+ (1-9 bacilli per 100 observed fields), 2+ (1-9 bacilli per 10 observed fields) or 3+ (1-9 bacilli per observed field). We included the variables fever, hemoptysis and presence of cavitations, each used dichotomously. Body mass index (BMI) and the number of days between symptom onset and start of treatment were calculated.
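The smear grading scheme above maps the average bacillary density per observed field onto the 1+/2+/3+ scale, and can be sketched as follows. The thresholds follow the ranges quoted in the text; reporting densities above 9 bacilli per field as 3+ is a simplification here, since some grading scales handle such counts separately.

```python
def afb_smear_grade(avg_bacilli_per_field):
    """Grade a sputum smear from the mean AFB count per observed field.

    1-9 per 100 fields -> 1+, 1-9 per 10 fields -> 2+, 1-9 per field -> 3+,
    following the ranges in the text; densities above 9 per field are
    reported as 3+ here, which is an assumption.
    """
    if avg_bacilli_per_field >= 1:
        return "3+"
    if avg_bacilli_per_field >= 0.1:   # 1-9 bacilli per 10 fields
        return "2+"
    if avg_bacilli_per_field >= 0.01:  # 1-9 bacilli per 100 fields
        return "1+"
    return "negative"

print(afb_smear_grade(2.0), afb_smear_grade(0.5), afb_smear_grade(0.05))  # → 3+ 2+ 1+
```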
We used the operational definitions of the tuberculosis prevention and control program (NOM-SSA-006) for treatment outcomes, except for failure and death, which were defined according to international definitions [16,17]: cure, a patient who completed treatment with disappearance of signs and symptoms, or whose smear or culture was negative at the end of treatment; failure, a patient whose smear or culture was positive after five months or later during treatment; and treatment completion, a patient who completed the treatment regimen with disappearance of signs and symptoms but for whom smear or culture was not performed. Patients who did not complete treatment were classified into the following two categories: abandonment, a patient who interrupted treatment for 30 days or more; and death during treatment, a patient who died of any other cause during treatment.
The lineage variable was operationalized in disaggregated and aggregated forms according to MIRU-VNTR genotyping. The disaggregated variable considers each identified sublineage: Haarlem, LAM, Cameroon, UgandaI, Ghana, S, X, TUR, EAI, Beijing and unknown. The aggregated variable considers lineage frequency: Haarlem, LAM and lineages other than Haarlem and LAM, because of the small frequency of each of the other lineages.
Statistical analysis
We calculated percentage distributions for qualitative variables as well as medians and interquartile ranges (IQR) for quantitative variables. We used the Pearson chi-square test for dichotomous variables, the binomial test for categorical variables and the Kruskal-Wallis test for quantitative variables. Unconditional logistic regression models were fitted to explain treatment failure and the presence of cavitation on radiography. Two models were fitted to explain treatment failure: one included the variable resistance to at least one drug, and the other included MDR. Variables with p ≤ 0.20 in the bivariate analysis and biological plausibility were considered for inclusion in the multivariate model. We estimated adjusted odds ratios (aOR) and 95% confidence intervals (CIs).
Analyses were performed using STATA ® v15 statistical software package (StataCorp LP, College Station, TX, USA).
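The odds ratios with Wald-type 95% confidence intervals used throughout the Results follow the standard formulas; a minimal sketch for a crude (unadjusted) odds ratio from a 2×2 table is shown below, with hypothetical counts. The adjusted ORs in the paper come from the multivariate logistic models, not from this calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table.

    a, b = outcome present/absent among the exposed;
    c, d = outcome present/absent among the unexposed.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 90, 5, 195)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 4.33 1.44 13.05
```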
Results
The characteristics of the studied patients are shown in Table 1. The proportion of individuals among the population examined was similar to the proportion represented by this same group. A total of 755 patients were included in the study, 442 (59%) of whom were men, with a median age of 45 years (IQR 32-59). There were 507 (67%) patients with more than six years of formal education, and 174 (23%) lived in dirt-floor homes. Comorbidity with DM2 was reported in 250 (33%) patients. HIV status was known for 739 patients, of whom 13 (2%) were positive. Resistance to any drug was present in 116/612 (19%) isolates, and 20 (3%) were MDR. The most common clinical findings were fever and cavitation in 531/752 (71%) and 282/626 (45%) patients respectively. Cure was recorded in 532/755 (70%) patients.
Treatment outcome according to sublineage is summarized in Additional file 1: Table S3. When comparing cure or completion with treatment failure, patients with sublineages other than Haarlem and LAM showed a higher proportion of treatment failure (9/152, 5.9%) than patients with Haarlem (1.4%) and LAM (4.4%) lineages (p = 0.016).
The comparison of clinical characteristics between cure or treatment completion and failure revealed a higher proportion of treatment failure in patients who had ever smoked. Using logistic regression, we fitted two adjusted models to explain treatment failure compared with cure and treatment completion: one included the variable resistance to at least one drug, and the other included the MDR variable.
Discussion
This study describes the associations among clinical and sociodemographic characteristics of patients with pulmonary tuberculosis and little-described M. tuberculosis lineage 4 sublineages in the health jurisdiction of Orizaba, Veracruz, Mexico between 1995 and 2010. Our study population presented a high frequency of lineage 4 (Euro-American) isolates. In addition, characteristics associated with treatment failure and the presence of cavitation were identified. In this study, lineage 4 (Euro-American) was the most common (~ 96%) lineage identified, consistent with previous reports showing that isolates with this lineage predominate in Mexico [18]. We also observed that among isolates with the LAM lineage (163), the proportion of RD Rio was 69.9%, higher than the 63.1% recently described in isolates from Northern Mexico and than in isolates from Venezuela (55%), Argentina (11%) and Paraguay (10%) [19,20]. Therefore, our results support that these lineages are endemic and that strains spread regionally with different rates of distribution.
We found that, compared with other sublineages, cases with Haarlem sublineage isolates had a higher proportion of clustered patients. A previous study showed similar results; the authors found that Haarlem sublineage isolates were more likely to belong to clusters [21]. This result confirms the wide distribution and genetic diversity of lineage 4 due to its virulence, which is reflected in cluster formation and its transmission success among the population [22]. On the other hand, we found that patients with Cameroon sublineage isolates showed more days between symptom onset and treatment start. A similar result has been described in patients with lineage 7 isolates in Ethiopia, where the longer time between symptom onset and treatment start was attributed to the slow growth of lineage 7 strains [21]. Because treatment initiation is important to interrupt transmission chains, it is necessary to phenotypically confirm the growth rate of Cameroon sublineage isolates. To explore this hypothesis, we cultured 45 Cameroon lineage isolates in MGIT medium and determined the time and units of growth. We observed that the Cameroon isolates grew more slowly (14.6 CFU/h) than H37Rv (24.7 CFU/h).
With respect to the Ghana sublineage, we found that the majority of patients presented haemoptysis; this finding has not been reported thus far in the literature. However, more data are needed.
Another interesting result was that cases with EAI lineage isolates were more frequent in men, in patients with DM2 and in patients who had ever smoked. DM2 alone has been described as associated with M. tuberculosis infection and progression to active disease with severe presentation [23]. Furthermore, decreased lung function has been observed in smokers with DM2 compared to nonsmokers [24]. Therefore, it is likely that social factors contribute to EAI dissemination; these patients also showed a higher proportion of cavitations (69.2%), although without statistical significance. Previously, a study that evaluated the host-pathogen relationship and its association with clinical outcomes in patients with tuberculosis described that patients infected with strains originating in geographic regions other than the patient's own (allopatric), such as the EAI lineage in America, presented an increased risk of lung damage [25]. As observed in our results, it has been suggested that although these lineages are less adapted to transmit and cause disease in fully competent members of allopatric human populations, they can do so in the context of impaired host immune resistance [26]. However, whole genome sequencing of EAI lineage isolates would be necessary to determine the pathogen genetic characteristics that facilitate its possible adaptation to the host and transmission. Furthermore, the East Asian (Beijing) lineage was found in two elderly patients, one of whom had HIV and the other DM2 and MDR. The Beijing isolates were genetically distinct, with 9/15 different alleles by 24-loci MIRU-VNTR; these cases were probably due to reactivation. MDR has been associated with the Beijing family; however, in this study the data are not conclusive because there were only two isolates [11,27]. It is nevertheless very likely that the MDR isolate resulted from the selective pressure of antibiotics, because the patient had previously received treatment.
We also observed a higher proportion of treatment failure in patients with isolates of sublineages other than Haarlem and LAM, in patients who had ever smoked and in patients with isolates resistant to at least one drug or MDR. A greater proportion of resistance was found in Cameroon (13/49, 30.2%), UgandaI (5/22, 22.7%) and Ghana (2/16, 12.5%) sublineage isolates. A recent study conducted in Niger reported that 75% of Cameroon and Ghana sublineage isolates were resistant to RIF and MDR [28]. However, treatment failure could also be the result of "antibiotic resilience", as recently described by Quingyun et al., who found that mutations in the resR (Rv1830) gene do not confer canonical drug resistance or drug tolerance but instead shorten the post-antibiotic effect, meaning that they enable M. tuberculosis to resume growth after drug exposure substantially faster than wild-type strains; these mutations are associated with treatment failure, acting in a regulatory cascade with other transcription factors controlling cell growth and division. Furthermore, they described that up to 10% of strains from high-tuberculosis-burden countries showed fixed mutations in these regions [29]. According to our results, the Cameroon and Ghana sublineages, geographically restricted within the Euro-American lineage, seem to have adapted to the study population and contribute significantly to resistance generation and treatment failure. Therefore, it is necessary to genotype a greater number of isolates and perform susceptibility tests to determine the real impact on resistance of lineages little described in Mexico, and to perform whole genome sequencing to explore the possible association between resR mutations and treatment failure and whether any lineage is prone to acquiring them. Interestingly, having > 6 years of formal education was not associated with treatment failure.
We believe that a higher education level probably implies that patients better understand the importance of treatment adherence and completion.
The LAM RD Rio lineage has been described in other Latin American countries, where it has been associated with the presence of cavitations, increased transmissibility and MDR [19]. However, in the present study we observed a higher proportion of LAM RD Rio isolates in cured patients. Previously, in this study population, previous treatment (aOR 9.05, 95% CI 3.6-22.5, p < 0.001) and the LAM lineage (aOR 4.25, 95% CI 1.4-12.7, p = 0.010) were found to be associated with MDR tuberculosis [15]. These results have important implications for the tuberculosis control program: although isolates of the LAM RD Rio sublineage are more prone to develop MDR following a previous treatment, patients seem to respond favorably to the second treatment.
The presence of cavitations was associated with the LAM sublineage, alcohol consumption, DM2 and AFB positivity grade 3+. Similar results have been previously described regarding more severe manifestations in patients with DM2 and tuberculosis [30,31]. Moreover, it has been reported that the presence of cavitations in pulmonary tuberculosis is associated with higher contagiousness/transmissibility due to high AFB load [32]. In addition, these results support those described by Pasipanodya et al., who reported that strains of modern lineages, such as the Euro-American lineages, developed nonlethal properties but nonetheless cause lung damage, which increases their capacity for dissemination among the population [25]. Therefore, the increase in the number of people with DM2 in Mexico could result in greater transmission of tuberculosis due to lung damage associated with the presence of the LAM sublineage. We thus suggest implementing genotyping of M. tuberculosis isolates with the use of 24-loci MIRU-VNTR in Mexico and determining the impact of the LAM sublineage.
In conclusion, this study provides relevant results regarding the association between the presence of cavitations, comorbidities and LAM sublineage isolates. Additionally, treatment failure was associated with sublineages other than Haarlem and LAM. Furthermore, we found a possible association of EAI sublineage isolates with DM2 and cavitations. We describe that the genetic diversity of M. tuberculosis lineage 4 (Euro-American) probably plays an essential role in disease presentation, which could have important implications for treatment management and for improving tuberculosis control in Mexico.
Asymptotic of Number of Similarity Classes of Commuting Tuples
For positive integers $n$, $k$ and a finite field $\mathbb{F}_q$, let $c(n,k,q)$ denote the number of simultaneous similarity classes of $k$-tuples of commuting $n\times n$ matrices over $\mathbb{F}_q$. In this paper, it is shown that $c(n,k,q)$, as a function of $k$ for fixed $n$ and $q$, is asymptotically $q^{m(n)k}$, where $m(n) = \left[\frac{n^2}{4}\right] + 1$ is the dimension of a maximal commutative subalgebra of $M_n(\mathbb{F}_q)$ (the algebra of $n\times n$ matrices over $\mathbb{F}_q$).
Introduction
Let $\mathbb{F}_q$ be a finite field of order $q$, $n$ a positive integer, $M_n(\mathbb{F}_q)$ the algebra of $n \times n$ matrices over $\mathbb{F}_q$, and $GL_n(\mathbb{F}_q)$ the group of invertible $n \times n$ matrices. Then, by the theory of the rational canonical form, the number of similarity classes in $M_n(\mathbb{F}_q)$ is given by $c(n, 1, q) = \sum_{\lambda \vdash n} q^{\lambda_1}$, where $\lambda$ varies over partitions of $n$, each of the form $\lambda = (\lambda_1 \geq \lambda_2 \geq \cdots)$.
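The partition formula can be sanity-checked by brute force in a small case. The sketch below (illustrative only, not part of the paper's argument) enumerates the conjugation orbits of $M_2(\mathbb{F}_2)$ directly and compares the count with $q^2 + q$, the value of the formula for $n = 2$:

```python
from itertools import product

q, n = 2, 2  # small case chosen so brute force is feasible

def mat_mul(A, B):
    """Multiply two n x n matrices over F_q (entries reduced mod q)."""
    return tuple(
        tuple(sum(A[i][t] * B[t][j] for t in range(n)) % q for j in range(n))
        for i in range(n)
    )

# All 2x2 matrices over F_2, and the invertible ones (det != 0 mod 2)
matrices = list(product(product(range(q), repeat=n), repeat=n))

def det2(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q

GL = [g for g in matrices if det2(g) != 0]

def inv2(g):
    # Inverse of g in GL_2(F_2): the adjugate works, since det(g) = 1 here
    return ((g[1][1], (-g[0][1]) % q), ((-g[1][0]) % q, g[0][0]))

# Count orbits of GL_2(F_2) acting on M_2(F_2) by conjugation
seen, classes = set(), 0
for A in matrices:
    if A in seen:
        continue
    classes += 1
    for g in GL:
        seen.add(mat_mul(mat_mul(g, A), inv2(g)))

# Partition formula for n = 2: partitions (2) and (1,1) give q^2 + q^1
formula = q**2 + q
print(classes, formula)  # both 6
```

Both counts come out to 6, matching the six rational canonical forms of $2 \times 2$ matrices over $\mathbb{F}_2$.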
It can clearly be seen that, keeping $n$ fixed, $c(n, 1, q)$ as a function of $q$ is asymptotically $q^n$ up to multiplication by some constant factor. If we keep $q$ fixed and look at $c(n, 1, q)$ as a function of $n$, then it is also asymptotically $q^n$ up to multiplication by a constant. This is a non-trivial asymptotic result, which Stong [Sto88] proved in 1988. In 1995, Neumann and Praeger [NP95] looked at the probability of an $n \times n$ matrix over $\mathbb{F}_q$ being non-cyclic and found that, for a fixed $q$, this probability is asymptotically $q^{-3}$ as a function of $n$. They also looked at non-separable matrices, and proved that the probability of a matrix in $M_n(\mathbb{F}_q)$ being non-separable is asymptotically $q^{-1}$, up to multiplication by a constant.
In 1998, Girth [Gir98] worked on certain probabilities for n × n upper triangular matrices and compared their asymptotic behaviour with that of corresponding probabilities for arbitrary n × n matrices over F q . He also did these comparisons of asymptotic behaviours as q goes to ∞, keeping n fixed. The works mentioned above focus mainly on counting in M n (F q ) and finding the asymptotic behaviours as n goes to ∞.
In this paper, we shall consider, for any positive integer $k$, the space $M_n(\mathbb{F}_q)^k$ of $k$-tuples of $n \times n$ matrices over $\mathbb{F}_q$. $GL_n(\mathbb{F}_q)$ acts on $M_n(\mathbb{F}_q)^k$ by simultaneous conjugation, defined by $g \cdot (A_1, \ldots, A_k) = (gA_1g^{-1}, \ldots, gA_kg^{-1})$. The orbits for this action are called simultaneous similarity classes.
Let $a(n, k, q)$ denote the number of simultaneous similarity classes in $M_n(\mathbb{F}_q)^k$. Then, by Burnside's lemma, we have $$a(n, k, q) = \frac{1}{|GL_n(\mathbb{F}_q)|} \sum_{g \in GL_n(\mathbb{F}_q)} |Z_{M_n(\mathbb{F}_q)}(g)|^k,$$ where for each $g \in GL_n(\mathbb{F}_q)$, $Z_{M_n(\mathbb{F}_q)}(g)$ denotes the centralizer algebra of $g$, i.e., $Z_{M_n(\mathbb{F}_q)}(g) = \{x \in M_n(\mathbb{F}_q) \mid xg = gx\}$.
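The Burnside expansion is easy to evaluate by machine for tiny parameters; a $k$-tuple is fixed by $g$ exactly when every coordinate lies in $Z_{M_n(\mathbb{F}_q)}(g)$, so $g$ fixes $|Z_{M_n(\mathbb{F}_q)}(g)|^k$ tuples. The following sketch (an illustration, not from the paper) computes $a(2, k, 2)$ this way:

```python
from itertools import product

q, n = 2, 2

def mul(A, B):
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(n)) % q
                       for j in range(n)) for i in range(n))

matrices = list(product(product(range(q), repeat=n), repeat=n))
GL = [g for g in matrices if (g[0][0]*g[1][1] - g[0][1]*g[1][0]) % q != 0]

def centralizer_size(g):
    # |Z_{M_n(F_q)}(g)| = number of matrices commuting with g
    return sum(1 for x in matrices if mul(x, g) == mul(g, x))

def a(k):
    # Burnside: a(n,k,q) = (1/|GL_n|) * sum over g of |Z(g)|^k
    total = sum(centralizer_size(g) ** k for g in GL)
    assert total % len(GL) == 0  # orbit counts are integers
    return total // len(GL)

print(a(1), a(2))  # 6 classes for single matrices, 56 for pairs
```

For $k = 1$ this recovers the 6 similarity classes of $M_2(\mathbb{F}_2)$ found above; here $|GL_2(\mathbb{F}_2)| = 6$, the identity contributes $16^k$, and the five non-identity elements each have centralizer algebras of size 4.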
Claim 1. We claim that, keeping $n$ and $q$ fixed, $a(n, k, q)$ is asymptotically $q^{n^2 k}$ up to some constant factor, as $k$ goes to $\infty$.
Proof. We need to show that there exist positive constants $m_1$ and $m_2$ (constant with respect to $k$) such that $m_1 q^{n^2 k} \leq a(n, k, q) \leq m_2 q^{n^2 k}$. In the Burnside lemma expansion of $a(n, k, q)$, consider first only those $g$ that are scalar matrices. For such $g$ we have $Z_{M_n(\mathbb{F}_q)}(g) = M_n(\mathbb{F}_q)$, and there are $q - 1$ of them. So, taking $m_1 = \frac{q-1}{|GL_n(\mathbb{F}_q)|}$, we have $m_1 q^{n^2 k} \leq a(n, k, q)$.
Next, if $g$ is a non-scalar matrix, then $Z_{M_n(\mathbb{F}_q)}(g) \subsetneq M_n(\mathbb{F}_q)$. We know (see Agore [Ago14]) that the maximal dimension of a proper subalgebra of $M_n(\mathbb{F}_q)$ is $n^2 - n + 1$.
So we have $$a(n, k, q) \leq m_1 q^{n^2 k} + \frac{1}{|GL_n(\mathbb{F}_q)|} \sum_{g \text{ non-scalar}} q^{(n^2 - n + 1)k} \leq m_1 q^{n^2 k} + q^{(n^2 - n + 1)k}.$$ From this, we get $m_2$ such that $a(n, k, q) \leq m_2 q^{n^2 k}$. Thus the claim is proved.
Now, denote by $M_n(\mathbb{F}_q)^{(k)}$ the set of $k$-tuples of commuting matrices from $M_n(\mathbb{F}_q)$, i.e., the set $$\{(A_1, \ldots, A_k) \in M_n(\mathbb{F}_q)^k \mid A_iA_j = A_jA_i \text{ for all } i, j\}.$$ Let $c(n, k, q)$ denote the number of simultaneous similarity classes in $M_n(\mathbb{F}_q)^{(k)}$ under simultaneous conjugation by $GL_n(\mathbb{F}_q)$. The aim of the paper is to find, for fixed $n$ and $q$, an asymptotic for $c(n, k, q)$ as a function of $k$. The difficulty is that the technique used in the proof of Claim 1 fails in this case, because the matrices $A_1, \ldots, A_k$ are no longer chosen independently.
In [Sha16], $c(n, k, q)$ was calculated for $n = 2, 3, 4$. The leading terms of some of those values are shown in Table 1.
From Table 1, we see that $c(2, k, q)$ is asymptotically $q^{2k}$, $c(3, k, q)$ is asymptotically $q^{3k}$, and $c(4, k, q)$ is asymptotically $q^{5k-7} = q^{-7} q^{5k}$. In the case of $n = 4$, we see that $c(4, k, q)$ is asymptotically $q^{5k}$ (and not $q^{4k}$, as we might expect), up to a constant factor, which is $q^{-7}$.
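These exponents line up with $m(n) = \left[\frac{n^2}{4}\right] + 1$, the Schur bound on the dimension of a commutative subalgebra of $M_n(\mathbb{F}_q)$. A one-liner (illustrative only) confirms the match for the three cases in Table 1:

```python
def m(n):
    # Schur's bound: max dimension of a commutative subalgebra of M_n(F_q)
    return n * n // 4 + 1

# Observed leading exponents of c(n,k,q): q^{2k}, q^{3k}, q^{5k-7}
print([m(n) for n in (2, 3, 4)])  # [2, 3, 5]
```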
1.1. Outline of the Paper. In Section 2, we will prove the main theorem (Theorem 1.1). In Section 3, we will find the asymptotics of the total number of $k$-tuples of commuting matrices over $\mathbb{F}_q$, i.e., of the cardinality of $M_n(\mathbb{F}_q)^{(k)}$.
Proof of Theorem 1.1
To prove Theorem 1.1, it suffices to prove the existence of positive numbers $C_1$ and $C_2$ such that $$C_1 q^{m(n)k} \leq c(n, k, q) \leq C_2 q^{m(n)k}$$ for large $k$. Before we go ahead, we will need to unravel $c(n, k, q)$.
We first define the following: for $k = 0$ and any subalgebra $Z \subseteq M_n(\mathbb{F}_q)$, set $c(Z, 0, q) = 1$.
We claim: $$c(n, k, q) = \sum_{Z} c_Z \cdot c(Z, k-1, q), \qquad (2.1)$$ where $Z$ runs over subalgebras of $M_n(\mathbb{F}_q)$, and $c_Z$ is the number of similarity classes in $M_n(\mathbb{F}_q)$ whose centralizer algebra is conjugate to $Z$.
Indeed, fixing a representative $A_1$ of a similarity class with centralizer algebra $Z$, the map $(A_1, A_2, \ldots, A_k) \mapsto (A_2, \ldots, A_k)$ induces a bijection between the set of simultaneous similarity classes in $M_n(\mathbb{F}_q)^{(k)}$ which have an element whose first coordinate is $A_1$, and the orbits for the simultaneous conjugation action of $Z^*$ on $Z^{(k-1)}$. Hence we get the identity (2.1).
Similarly, $$c(Z, k-1, q) = \sum_{Z'} c_{ZZ'} \cdot c(Z', k-2, q),$$ where $c_{ZZ'}$ is the number of orbits of matrices in $Z$ for the action of $Z^*$ on it by conjugation, whose centralizer algebra under this conjugation action is conjugate to $Z'$.
Proceeding this way, we get the following expansion for $c(n, k, q)$: $$c(n, k, q) = \sum_{Z_1 \supseteq Z_2 \supseteq \cdots \supseteq Z_k} c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{k-1}Z_k}, \qquad (2.2)$$ where $c_{Z_iZ_{i+1}}$ denotes the number of orbits of matrices in $Z_i$ for the conjugation action of $Z_i^*$, whose centralizer algebra in $Z_i$ is conjugate to $Z_{i+1}$. Here are some observations about these non-increasing sequences of subalgebras which come up in the expansion of $c(n, k, q)$. We shall state them as a lemma:
Lemma 2.2. Given a non-increasing sequence of centralizer subalgebras which occurs in equation (2.2), say $Z_1 \supseteq Z_2 \supseteq \cdots \supseteq Z_k$, we have the following: (1) if for some $i$, $Z_i$ is a commutative subalgebra, then $Z_j = Z_i$ for all $j \geq i$; (2) in that case, $c_{Z_jZ_{j+1}} = |Z_i| = q^{\dim(Z_i)}$ for all $j \geq i$.

Proof. (1) Suppose, for some $i$, $Z_i$ is commutative. Then, for any element $A_{i+1} \in Z_i$, every element of $Z_i$ commutes with $A_{i+1}$, so $Z_{i+1} = Z_{Z_i}(A_{i+1}) = Z_i$; inductively, $Z_j = Z_i$ for all $j \geq i$. (2) Since the conjugation action of $Z_i^*$ on $Z_i$ is then trivial, $c_{Z_iZ_{i+1}}$ is the number of matrices $A_{i+1}$ in $Z_i$ for which $Z_{Z_i}(A_{i+1}) = Z_i$, namely all of $Z_i$, so $c_{Z_iZ_{i+1}} = |Z_i| = q^{\dim(Z_i)}$.

2.1. Finding Crude Lower and Upper Bounds for $c(n, k, q)$. The first and main thing we need to show is that there exists a tuple of commuting matrices whose common centralizer is a commutative algebra of dimension $m(n)$.
Here are examples of tuples of commuting matrices whose common centralizer is a commutative subalgebra of $M_n(\mathbb{F}_q)$ of dimension $m(n)$.
Its common centralizer algebra is
It is commutative and is of dimension $l^2 + 1$.
Example 2.4. When $n$ is odd, say $n = 2l + 1$ for some $l \geq 1$, then $m(n) = l(l+1) + 1$. Consider the commuting tuple $(A_1, A_2, \ldots, A_{l+1})$, where for $i = 2, \ldots, l+1$ each $A_i$ is built from an $(l+1) \times l$ matrix $N_i$ determined by $e_{i-1}$, as defined in Example 2.3. Then the common centralizer of this tuple of commuting matrices is commutative and of dimension $l(l+1) + 1$, which is equal to $m(n)$.
So we can find at least an $([n/2]+1)$-tuple of commuting $n \times n$ matrices whose common centralizer algebra is of dimension $m(n)$.
Lemma 2.5. There exists $C_1 > 0$ such that $C_1 q^{m(n)k} \leq c(n, k, q)$ for large $k$.
Proof. Let $l_0 = [n/2] + 1$. Consider a $k$-tuple of commuting matrices whose first $l_0$ matrices are as in Example 2.3 or 2.4 (depending on whether $n$ is even or odd). Here, $Z_{l_0}$ is a commutative subalgebra of dimension $m(n)$ (as described in the examples). Hence, by Lemma 2.2, $Z_i = Z_{l_0}$ for $i = l_0 + 1, \ldots, k$. Then $$c(n, k, q) \geq c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{l_0-1}Z_{l_0}} \, q^{m(n)(k - l_0)} = C_1 q^{m(n)k},$$ with $C_1 = c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{l_0-1}Z_{l_0}} \, q^{-m(n)l_0} > 0$. To complete the proof of Theorem 1.1, we need the following observation (Lemma 2.6) about the non-increasing sequences of subalgebras $Z_1 \supseteq \cdots \supseteq Z_k$ which occur in the expansion of $c(n, k, q)$ (given in equation (2.2)).
Lemma 2.6. If $Z_i \supsetneq Z_{i+1}$ occurs in such a sequence, then $\dim(Z(Z_{i+1})) > \dim(Z(Z_i))$, where $Z(\cdot)$ denotes the center.

Proof. Suppose $Z_i \supsetneq Z_{i+1}$, and consider any $y \in Z_i$ for which $Z_{Z_i}(y) = Z_{i+1}$. Clearly, $y \in Z(Z_{i+1})$. But for $x \in Z_i \setminus Z_{i+1}$ we have $yx \neq xy$, hence $y \notin Z(Z_i)$. Moreover, every element of $Z(Z_i)$ commutes with $y$ and so lies in $Z_{i+1}$, and commutes with all of $Z_{i+1} \subseteq Z_i$; therefore $Z(Z_i) \subsetneq Z(Z_{i+1})$. Thus $\dim(Z(Z_{i+1})) > \dim(Z(Z_i))$.

Now we are in a position to get a crude upper bound for $c(n, k, q)$. Let $k > n^2$, and look at any summand of $c(n, k, q)$. A summand is of the form $c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{k-1}Z_k}$, where $Z_1 \supseteq Z_2 \supseteq \cdots \supseteq Z_k$. Let $j + 1$ be the number of distinct $Z_i$'s in the non-increasing sequence. As $M_n(\mathbb{F}_q)$ is of dimension $n^2$, there cannot be more than $n^2$ distinct $Z_i$'s in this sequence, so $0 \leq j \leq n^2 - 1$, and we may rewrite $c(n, k, q)$ as a sum over $j$ of the contributions of sequences with exactly $j + 1$ distinct subalgebras.

Such a sequence has a strictly decreasing subsequence $Z_{i_1} \supsetneq Z_{i_2} \supsetneq \cdots \supsetneq Z_{i_j} \supsetneq Z_k$, the descents occurring at a set of positions $S = \{i_1 < \cdots < i_j\} \subseteq \{1, \ldots, k-1\}$, so that $$Z_1 = \cdots = Z_{i_1} \supsetneq Z_{i_1+1} = \cdots = Z_{i_2} \supsetneq \cdots \supsetneq Z_{i_j+1} = \cdots = Z_k.$$ From Lemma 2.2, each constant step contributes a factor $c_{Z_iZ_{i+1}} = q^{\dim(Z(Z_i))}$, while $c_{Z_1}$ and each of the $j$ descent factors cannot be more than $q^{n^2}$. For $1 \leq u \leq j$ we have $Z_{i_u} \supsetneq Z_k$, and hence, by Lemma 2.6, $\dim(Z(Z_{i_u})) < \dim(Z(Z_k)) \leq m(n)$, the last inequality because $Z(Z_k)$ is commutative. Accounting for the constant steps before and after the last descent $\max(S)$, the summand is bounded above by $$c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{k-1}Z_k} \leq q^{n^2(j+1)} \cdot q^{m(n)k - \max(S)}.$$

Here are some observations. We know that there are only a finite number of distinct subalgebras in $M_n(\mathbb{F}_q)$; let that number be $f(n)$. For each $j$ with $0 \leq j \leq n^2 - 1$, there cannot be more than $f(n)^{j+1}$ choices of the $j + 1$ distinct subalgebras. Moreover, the descent set $S$ could be any size-$j$ subset of $\{1, \ldots, k-1\}$. So $c(n, k, q)$ is bounded above by $$\sum_{j=0}^{n^2-1} f(n)^{j+1} q^{n^2(j+1)} \sum_{\substack{S \subseteq \{1, \ldots, k-1\} \\ |S| = j}} q^{m(n)k - \max(S)} = q^{m(n)k} \sum_{j=0}^{n^2-1} f(n)^{j+1} q^{n^2(j+1)} \sum_{r} \binom{r-1}{j-1} q^{-r},$$ since once $r = \max(S)$ is chosen, the remaining $j - 1$ numbers are chosen from $1, \ldots, r-1$ in $\binom{r-1}{j-1}$ ways. Now, as $\binom{r-1}{j-1} \leq r^j$, we get $$c(n, k, q) \leq q^{m(n)k} \sum_{j=0}^{n^2-1} f(n)^{j+1} q^{n^2(j+1)} \sum_{r=0}^{\infty} r^j q^{-r}.$$ For any fixed $j$, we can see by any of the routine tests (either the root or the ratio test) that the series $\sum_{r=0}^{\infty} r^j q^{-r}$ converges.

So, letting $$C_2 = \sum_{j=0}^{n^2-1} f(n)^{j+1} q^{n^2(j+1)} \sum_{r=0}^{\infty} r^j q^{-r},$$ we have $c(n, k, q) \leq C_2 q^{m(n)k}$. We have thus found positive constants $C_1$ and $C_2$ such that $C_1 q^{m(n)k} \leq c(n, k, q) \leq C_2 q^{m(n)k}$. Hence $c(n, k, q)$, as a function of $k$, is asymptotically $q^{m(n)k}$ up to some constant factor.
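The convergence claim invoked here can be spelled out with the ratio test. Writing $a_r = r^j q^{-r}$ for fixed $j \geq 0$ and $q \geq 2$:

```latex
\frac{a_{r+1}}{a_r}
  = \frac{(r+1)^{j}\, q^{-(r+1)}}{r^{j}\, q^{-r}}
  = \left(1 + \frac{1}{r}\right)^{\!j} q^{-1}
  \;\longrightarrow\; q^{-1} < 1
  \quad (r \to \infty),
```

so the series $\sum_{r=0}^{\infty} r^j q^{-r}$ converges for every fixed $j$.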
Asymptotic of Counting Tuples of Commuting Matrices
In this section, instead of looking at simultaneous similarity classes of commuting tuples, we will look at the asymptotics of the total number of tuples of commuting matrices. Let $C(n, k, q)$ denote the total number of $k$-tuples of commuting $n \times n$ matrices over $\mathbb{F}_q$, i.e., the size of $M_n(\mathbb{F}_q)^{(k)}$. Then, by the orbit-stabilizer theorem, we have $$C(n, k, q) = \sum_{Z} \frac{|GL_n(\mathbb{F}_q)|}{|Z^*|} \, C_Z, \qquad (3.1)$$ where $Z$ varies over conjugacy classes of subalgebras of $M_n(\mathbb{F}_q)$, $Z^*$ is the group of units of $Z$, and $C_Z$ is the total number of simultaneous similarity classes of $k$-tuples of commuting matrices whose common centralizer algebra is isomorphic to $Z$.
From the previous section, we see that each simultaneous similarity class corresponds to a non-increasing sequence $Z_1 \supseteq \cdots \supseteq Z_k$ with $Z_k = Z$. So we can rewrite equation (3.1) as $$C(n, k, q) = \sum_{Z_1 \supseteq \cdots \supseteq Z_k} \frac{|GL_n(\mathbb{F}_q)|}{|Z_k^*|} \, c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{k-1}Z_k}. \qquad (3.2)$$ Keeping only the sequence from the proof of Lemma 2.5, whose tail is the commutative subalgebra of dimension $m(n)$ with $|Z_k^*| = (q-1)q^{m(n)-1} = (q-1)q^{\left[\frac{n^2}{4}\right]}$, we get $$\frac{|GL_n(\mathbb{F}_q)|}{(q-1)q^{\left[\frac{n^2}{4}\right]}} \, c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{l_0-1}Z_{l_0}} \, q^{m(n)(k - l_0)} \leq C(n, k, q).$$ Thus, choose $$D_1 = \frac{|GL_n(\mathbb{F}_q)|}{(q-1)q^{\left[\frac{n^2}{4}\right]}} \, c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{l_0-1}Z_{l_0}} \, q^{-m(n)l_0}.$$ Then we get $D_1 q^{m(n)k} \leq C(n, k, q)$.
Now we can find an upper bound for $C(n, k, q)$. As $GL_n(\mathbb{F}_q)$ has only a finite number of subgroups, $\frac{|GL_n(\mathbb{F}_q)|}{|Z_k^*|}$ is bounded above; let that bound be $G(q)$. Then from equation (3.2), $$C(n, k, q) \leq G(q) \sum_{Z_1 \supseteq \cdots \supseteq Z_k} c_{Z_1} c_{Z_1Z_2} \cdots c_{Z_{k-1}Z_k} = G(q)\, c(n, k, q) \leq G(q)\, C_2\, q^{m(n)k}$$ (from Section 2). So let $D_2 = G(q) C_2$; then we have $D_2 > 0$ such that $C(n, k, q) \leq D_2 q^{m(n)k}$. This proves the theorem:

Theorem 3.1. The total number $C(n, k, q)$ of $k$-tuples of commuting $n \times n$ matrices over $\mathbb{F}_q$ is asymptotically $q^{m(n)k}$ as a function of $k$, up to a constant factor.
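For a very small case, $C(n, k, q)$ can also be counted directly. The sketch below (an illustration, not part of the paper) counts commuting ordered pairs in $M_2(\mathbb{F}_2)$; since non-scalar $2 \times 2$ matrices have centralizer $\mathbb{F}_q[A]$ of size $q^2$, the count is $2 \cdot 16 + 14 \cdot 4 = 88$:

```python
from itertools import product

q, n = 2, 2

def mul(A, B):
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(n)) % q
                       for j in range(n)) for i in range(n))

matrices = list(product(product(range(q), repeat=n), repeat=n))

# C(n,k,q) for k = 2: count ordered pairs (A, B) with AB = BA.
# Equivalently, sum |Z(A)| over A: the 2 scalars contribute 16 each,
# the 14 non-scalar matrices have centralizer F_2[A] of size q^2 = 4.
C = sum(1 for A in matrices for B in matrices if mul(A, B) == mul(B, A))
print(C)  # 88
```

Since $m(2) = 2$, Theorem 3.1 says $C(2, k, 2)$ grows like $q^{2k} = 4^k$ up to a constant, consistent with this count at $k = 2$.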
Keeping $n$ and $q$ fixed, we have found the asymptotics of $c(n, k, q)$ and $C(n, k, q)$ as $k$ goes to $\infty$. We could instead keep $k$ and $q$ fixed and ask for the asymptotics of $c(n, k, q)$ and $C(n, k, q)$ as $n$ goes to $\infty$, or keep $k$ and $n$ fixed and ask for their asymptotics as functions of $q$.
Patients' Willingness to Accept Social Needs Navigation After In-Person versus Remote Screening
Background
A push to value-based care, [7-9] along with societal inequities that the COVID-19 pandemic has both highlighted and exacerbated, [10-13] has prompted the US health care sector to refocus attention on patients' social contexts. [16,17] A key consideration regarding social needs screening and referral interventions is how to successfully embed these new practices within already cramped clinical workflows. [18,19] Remote screening (eg, via phone calls or text messages) outside of clinical visits might offer a promising alternative to in-person screening, both for time and accessibility reasons. [22,23] For instance, telehealth could facilitate the identification of social needs among those facing chronic barriers to in-person health care, including a lack of reliable transportation, mobility issues, or competing priorities such as work or childcare. [24,25] In addition, by potentially mitigating some of the power dynamics that accompany clinical spaces, [25,26] some patients may find remote interventions to be more comfortable. However, remote screening and referral for social needs could be both alienating and restricting for patients who prefer in-person health care interactions, [27,28] those with less technological literacy or access, [29,30] or those with limited English proficiency. [31] Regardless, many health care organizations necessarily shifted from in-person to remote interventions for social needs during the COVID-19 pandemic [32-34] and now must consider the merits of continuing with that approach versus returning to in-person strategies when it comes to working collaboratively with patients to address the social needs that they disclose.
Therefore, a better understanding of the impacts of in-person versus remote social needs screening and referral on addressing patients' social needs is critical. An important first step toward potentially resolving patients' social needs is whether those who screen positive for social needs are willing to accept health care-based assistance to connect with corresponding resources. [35] Multiple studies have reported discrepancies between the proportions of patients who screen positive for social needs and those who are interested in help. [35] Of course, there is nothing wrong with patients declining assistance with social needs, in and of itself. A patient may not view a social need as an immediate concern, may already be receiving help elsewhere, or may simply not want help with social needs from a health care provider. [36] However, inequities could be exacerbated if there are systematic differences, by screening mode, between those who are willing to accept versus decline support.
This study made use of data from a social needs screening and referral intervention across diverse outpatient health care settings that spanned the start of the COVID-19 pandemic.We assessed whether in-person versus remote screening modified associations between patients' total number of self-reported social needs and their willingness to accept help with social needs.
Methods
This cross-sectional study followed the Strengthening the Reporting of Observational Studies in Epidemiology guidelines [37] and used data from the Accountable Health Communities (AHC) model. The institutional review board of Oregon Health & Science University (OHSU) approved the study, and all participants provided verbal informed consent (STUDY00018168).
The AHC Model
The AHC model was developed by the Centers for Medicare and Medicaid Services Innovation Center to test whether systematically identifying and addressing Medicare and Medicaid beneficiaries' social needs impacts health care costs and use. [17] Community-dwelling beneficiaries who consent to participate are screened for 5 social needs (housing stability and quality, utility needs, food insecurity, transportation needs beyond medical transportation, and interpersonal safety) using the AHC Health-Related Social Needs Screening Tool. [38,39] Those who screen positive for ≥1 social need(s) and ≥2 self-reported emergency department visits within the previous 12 months are offered navigation services to facilitate community resource connections. Nationally, 32 "bridge organizations" across 25 states were originally selected to implement the AHC model. [40]

The AHC Model in Oregon

Oregon's bridge organization for the AHC model was the Oregon Rural Practice-Based Research Network (ORPRN) [41] at OHSU. Responsibilities of ORPRN included identifying and collaborating with clinical delivery sites to adopt the AHC model and aligning partners to optimize the capacity of local communities to address beneficiaries' social needs. Clinical delivery sites spanned 24 of Oregon's 36 counties and represented a wide range of organizations and settings, including federally qualified health centers, private practices, emergency departments, and health departments.
The onset of the COVID-19 pandemic in the spring of 2020 had an immediate impact on health care delivery in Oregon. [42] It also affected AHC model implementation in 3 primary ways. First, several clinical delivery sites that had been screening participants in person were no longer able to participate due to reduced staff and competing priorities. Second, some sites switched from in-person to remote screening. Finally, health systems that were not participating prepandemic asked to join the study via remote screening only. In response to these COVID-related contextual changes, ORPRN centralized efforts for remote screening by hiring and training health sciences students to contact beneficiaries by phone or text message, describe the AHC model, and screen consenting beneficiaries for social needs. For eligible beneficiaries, students offered referrals to a resource navigator (eg, community health worker, social worker) for additional follow-up, as part of the navigation requirement of the AHC model. Across all of the participating health care settings, the frequency and consistency of screening varied based on capacity and internal workflows.
Study Participants
Study participants were community-dwelling Medicare and Medicaid beneficiaries who participated in the AHC model in Oregon between October 17, 2018 and December 31, 2020. The study focused on those who consented to participate and who were eligible for resource navigation assistance due to both disclosing ≥1 social need(s) and self-reporting ≥2 emergency department visits within the previous year. We excluded those without complete data for either the outcome measure or covariates from the final study sample and analyses. Participants were also excluded from analyses if they came from clinical delivery sites in which there were <10 participants or in which 100% of participants were either willing or unwilling to accept navigation assistance (see Online Appendix 1 for demographics of included vs excluded beneficiaries). By December 31, 2020, 14,691 Medicare and Medicaid beneficiaries had participated in the AHC model in Oregon, and 2,929 (20%) had qualified for resource navigation assistance. Analyses included 1,504 participants with complete data for all variables of interest, of whom 653 (43%) were screened for social needs in person and 851 (57%) were screened remotely (Figure 1). Participants originated from 28 clinical delivery sites.
Study Measures
The primary, binary outcome measure was whether participants were willing to accept resource navigation assistance with their social needs. Participants responded "Yes" or "No" to the following question: "You are eligible to receive extra help by a staff person called a navigator who can assist you with accessing resources. Would you like to receive help from a navigator?" The ordinal predictor variable, participants' total number of social needs (on a scale of 1 to 5), originated from participants' responses to the AHC model screening questions. We acquired the screening mode (in-person; remote) of the clinical delivery sites from ORPRN AHC model team members, who entered screening mode into a spreadsheet. Most covariates also came from participants' responses to the screening questions. These included categorical variables for participants' race, [43] ethnicity, sex, household income, and for whom participants answered the screening questions. [38,44] Birth year and zip code came from participants' electronic health records and were used to construct categorical variables for beneficiaries' age and rurality, respectively. We constructed age as a 3-category variable (≤17; 18 to 64; ≥65) for reasons corresponding to both Medicare qualification and mandatory reporting requirements in Oregon. [45,46] Rurality designations came from the Oregon Office of Rural Health (urban; rural or frontier). [47]
Statistical Analysis
We used χ² tests of independence to compare demographic characteristics of those screened for social needs in person versus remotely. We conducted a multivariable logistic regression analysis to assess whether the screening mode (in-person; remote) modified associations between patients' total number of social needs (predictor variable) and their willingness to accept help with social needs (outcome variable). Specifically, we created an interaction term (screening mode × total number of social needs) to test for the presence of effect modification. [48] The model included clinical delivery site fixed effects and clustered standard errors at the site level. We selected confounders based on a priori assumptions and review of the literature regarding factors that are likely to affect both patients' total number of social needs and interest in receiving health care-based assistance with social needs. [49,50] In particular, both a participant's acuity of need and whether the person has reason to trust or mistrust health systems are likely to impact interest in accepting assistance. For instance, we viewed the "race" variable as a proxy for racism. [52,53] Racism can also affect mistrust of health care systems due to historic and ongoing health care-based discrimination faced by those who are Black, Indigenous, and People of Color. [54,55] While we conducted complete-case analyses, we also conducted sensitivity analyses with missing indicators (Online Appendix 2). We completed analyses using Stata/IC 15.1 from January 1 to December 10, 2021.
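To make concrete what the interaction term tests, the sketch below uses hypothetical log-odds coefficients (not the study's estimates) to show how a nonzero interaction coefficient would make the per-need adjusted odds ratio differ by screening mode:

```python
import math

# Hypothetical logistic-regression coefficients (log-odds scale), NOT the
# study's estimates: effect of one additional social need, and the
# interaction term (screening mode x total number of social needs).
b_needs = 0.45        # per additional social need, in-person reference group
b_interaction = 0.05  # how the needs effect differs under remote screening

# Adjusted odds ratio per additional need, by screening mode:
aor_in_person = math.exp(b_needs)
aor_remote = math.exp(b_needs + b_interaction)

# Effect modification is present exactly when b_interaction != 0, i.e. when
# these two odds ratios differ; the study's nonsignificant interaction term
# corresponds to b_interaction being indistinguishable from 0.
print(round(aor_in_person, 2), round(aor_remote, 2))
```

With the interaction coefficient at 0, both odds ratios coincide, which is the null hypothesis the model evaluates.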
Participant Demographics
Participants' social needs and demographic characteristics, including for the subgroups of those screened in person versus remotely, are available in Table 1. As anticipated, the majority of those screened in person participated before Oregon's COVID-19 social distancing mandate, [56] which went into effect on March 23, 2020 (n = 599; 92%); the majority of those screened remotely participated after the executive order (n = 825; 97%). Likewise, there were significant differences between the in-person and remote subgroups regarding nearly all social need and demographic variables. For example, 61% of in-person versus 74% of remote participants endorsed ≥2 social needs (P ≤ .001). Among all participants, the most frequently reported social need was food insecurity (77%), followed by housing instability and quality (60%), transportation needs (45%), utility needs (33%), and interpersonal safety (12%). Fifteen percent of remote versus 12% of in-person participants responded "Yes" to the question, "Are you Hispanic, Latino/a, or Spanish Origin?" (P = .05). Participants' responses about race were also significantly different across the 2 subgroups (P ≤ .001). Higher proportions of remote compared with in-person participants selected the categories of "Asian," "Black or African American," and "Native Hawaiian or Pacific Islander." The in-person subgroup, however, included higher proportions of those who selected the categories "American Indian or Alaska Native" and "White." Fifty-four percent of in-person versus 16% of remote participants had a rural or frontier address (P ≤ .001). In addition, the in-person subgroup had a lower proportion of males (32% vs 38%; P = .03), a higher proportion of those who took the screening on behalf of themselves (88% vs 84%; P ≤ .01), and a higher mean age (43 vs 40 years; P ≤ .01).
Willingness to Accept Navigation
Seventy-one percent (n = 1,069) of participants were willing to accept help with social needs overall.
Multivariable Logistic Regression Analysis
In the multivariable logistic regression analysis presented in Table 2, there were significant associations between a higher number of social needs and a willingness to accept navigation assistance. Participants reporting 3 social needs (aOR [57] 2.9, 95% CI 1.6-5.0, P ≤ .001), 4 social needs (aOR 3.2, 95% CI 1.4-7.0, P ≤ .01), and 5 social needs (aOR 5.2, 95% CI 2.8-10, P ≤ .001) were significantly more likely to be willing to accept help compared with those reporting 1 social need. In the full model, neither screening mode (in-person; remote) nor the interaction term (screening mode × total number of social needs) was significantly associated with a willingness to accept help with social needs. This remained true in a sensitivity analysis in which missing indicators were included for all variables with missing data (Online Appendix 2).
Regarding the remainder of the covariates in the model, those selecting the race category "American Indian or Alaska Native" were significantly less likely to be willing to accept navigation assistance compared with those selecting the race category "White" only (aOR 0.6, 95% CI 0.5-0.8, P ≤ .01). In addition, participants who selected an income of $35,000 to $50,000 were significantly less likely to be willing to accept assistance compared with those who selected an income of <$10,000 (aOR 0.6, 95% CI 0.4-0.9, P = .02). No other covariates were significant.
Discussion
In this cross-sectional multisite study of the AHC model in Oregon, our multivariable logistic regression analysis did not find that screening mode was an effect modifier for participants' total number of social needs and their willingness to accept help with social needs. In other words, our results suggest that for individuals presenting with the same number of social needs, their likelihood of being willing to accept navigation may not be significantly impacted by whether they are screened for social needs in person or remotely. As in previous studies, we also found strong associations between a higher number of social needs and a willingness to accept resource navigation assistance. [49,50] Overall, roughly 71% of eligible Medicare and Medicaid beneficiaries were willing to accept resource navigation assistance. While the proportion of those who were willing to accept navigation was significantly higher in the remote (77%) versus in-person (63%) subgroup, this difference was likely due to a higher number and acuity of social needs among remote participants (see Table 1) in light of the COVID-19 pandemic. [58] Nonetheless, whether remote or in person, the proportion of patients who were willing to accept assistance fell within the higher end of what previous studies have reported [35] and is an important finding given the potential impact of the AHC model on health care-based social needs screening and referral interventions nationally. Although it was not an objective of our analysis, future evaluation of the AHC model should consider whether and why patients' willingness to accept navigation may vary across both states and bridge organizations.
We included race as a proxy for racism in our analysis because we anticipated that the impact of racism could differentially affect distinct groups' willingness to accept navigation. It is important to note that our American Indian or Alaska Native sample was significantly less willing to accept navigation compared with our White sample. However, since this was not the primary focus of our research study, we feel it is inappropriate to draw conclusions about this result without further investigation. In particular, mirroring the sentiments of other researchers, [59] we recommend that future studies use community-engaged methods to meaningfully examine potential differences across racial and ethnic groups regarding interest in social needs navigation, along with many other aspects of social needs screening and referral interventions.
As health care organizations consider how to integrate social needs screening and referral interventions into their clinical workflows, our study provides evidence that screening for social needs remotely may be justifiable in terms of patients' willingness to accept help with the social needs that they disclose. [61,62] Of course, findings from the present study could be more reflective of how ORPRN implemented remote screening for social needs than of the remote aspect by itself. For example, something about how ORPRN trained the health sciences students to conduct the screening may have been important (eg, placing emphasis on trauma-informed engagement). In a recent qualitative study on the AHC model in Oregon, our team identified screener techniques that appeared to garner positive patient experiences, including demonstrating respect for patient autonomy, a kind demeanor, a genuine intention to help, and attentiveness and responsiveness to patients' situations. [63] More research is needed to better understand the ways in which those conducting screening for social needs, both in person and remotely, can effectively foster patient engagement when discussing patients' social contexts. For instance, future research could examine differences in AHC model implementation across bridge organizations to assess how varying approaches to performing screening affected patients' willingness to accept help.
Limitations
The study had a few notable limitations, especially regarding data availability. First, there were likely unmeasured drop-off points in patient engagement that resulted in nonresponse bias. For example, it was not possible to report on the total number or the demographics of beneficiaries who declined participation in the AHC model in Oregon during the study period. While results indicated that a high percentage of eligible beneficiaries were willing to accept navigation assistance, it is likely that otherwise eligible beneficiaries were never offered assistance because they declined to participate at the outset.64,65 Further, other studies have found that patients may request help with social needs, even after screening negatively for the same social needs on a questionnaire.66,67 Participants in the AHC model were only offered assistance if they screened positively for ≥1 social need. But patients may have been reluctant to share such information with the clinical delivery sites, especially if they had concerns regarding how their data would be used.20 The study also lacked certain variables that may be important for patient engagement, such as participants' primary language or country of origin.68 Another principal limitation was that detailed information about how clinical delivery sites implemented the AHC model in Oregon was not available. For instance, for the in-person screening sites, there were no reliable data about how the screening was administered (eg, paper form, tablet) or by whom (eg, staff vs participant administered). These implementation differences during in-person screening may have also influenced patients' interest in accepting help with social needs, and future research should collect and analyze such information in greater detail.
Conclusions
Our study of the AHC model in Oregon provides evidence that, among patients presenting with a similar number of social needs, the type of screening mode (in person vs remote) may not adversely affect the proportion of patients who are willing to accept help with resource navigation. For both health care organizations considering a return to in-person social needs screening following the COVID-19 pandemic and those weighing the merits of in-person versus remote approaches, our results indicate a consideration for the benefits of remote screening outside of a clinical visit, especially for populations with inequitable access to in-person health care. However, it is important that remote screening approaches be contextually tailored to promote health equity in terms of technological access, literacy, and appropriate language options for the populations being served. Whether screening for social needs is conducted in person or remotely, more research is needed to better understand what approaches best garner patient trust and authentic collaboration, especially among those who may benefit from resource navigation assistance.
Table 1.
Participant Demographics, Including Those Screened in Person and Those Screened Remotely (n = 1504)*
Table 1.
Continued. *The data for this analysis were collected from October 17, 2018 through December 31, 2020. †P values based on χ2 tests of independence for those screened in person versus remotely. ‡Participants who selected White and an additional race category were grouped with the non-White category they selected. We made this decision due to the variable "race" serving as a proxy for racism. | 2023-03-05T06:17:21.371Z | 2023-03-03T00:00:00.000 | {
"year": 2023,
"sha1": "a3573a425cc65048a3ad1ef4d92b1ba8c18dd550",
"oa_license": null,
"oa_url": "https://www.jabfm.org/content/jabfp/36/2/229.full.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "64c5e4fa81848351837fa2c1b7338bf5e275e1ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55379234 | pes2o/s2orc | v3-fos-license | The nursing diagnosis of aspiration risk in critical patients
Objective: To analyze the nursing diagnosis of risk of aspiration and the relationship with its risk factors in patients hospitalized in the intensive care unit. Methods: A cross-sectional study undertaken in the adult intensive care unit of a teaching hospital in the Northeast of Brazil, with a sample of 86 patients. The data were collected through an interview questionnaire and physical examination from October 2013 to May 2014. Results: The diagnosis was present in 43 patients (50%). A total of 17 risk factors was identified, related mainly to swallowing, enteral nutrition, gastrointestinal motility, gastric emptying, neurological status, ventilation interfaces, events secondary to the treatment, and surgical procedures. Conclusion: The diagnosis of risk of aspiration and its risk factors are present in critical patients, making the planning of care in this context fundamental.
INTRODUCTION
Patients who are hospitalized in intensive care units are more exposed to situations of risk than those attended in other hospital environments, as they require complex therapies and technological apparatus, as well as the frequent need for invasive procedures undertaken with the aim of keeping them alive.In the light of this, due to the seriousness of their conditions and clinical instability, these patients require complex care on the part of the team, which requires the nurse to make critical evaluations and carry out immediate interventions 1 .
In this context, nursing has a strong influence on the recovery of critically-ill patients, as the focus of the nurse's work process is the holistic care for the individual, with emphasis on maintaining their homeostatic balance and the prevention of iatrogenic conditions 2 .To this end, the nurse needs to carry out her actions in a standardized way, and based in the profession's own body of knowledge.This process occurs initially through assessment of the patient, a stage which is fundamental for the construction of an individualized care plan 3 .
The elaboration of the individualized care plan aims to assess the patient's state of health and diagnose their needs, and must be based on the stages of the Nursing Process (NP) 4 . The NP is a methodological instrument which is peculiar to nursing, and is indispensable for ensuring an appropriate and humanized care practice. It is made up of five interlinked and dynamic stages, namely: data collection, nursing diagnosis, planning of actions, nursing intervention, and assessment of the results 4,5 .
The nursing diagnosis makes it possible to elaborate interventions which can be determinant in the results.Among the systems of classifications for nursing diagnoses, emphasis is placed on the NANDA-International (NANDA-I) 6 .
The NANDA-I classification system organizes the diagnoses in domains, among which emphasis is placed on domain 11, titled Safety and protection.The diagnoses of this domain are identified with greater frequency in critical patients 7 .The diagnosis of Risk of Aspiration, belonging to the class of physical injuries, is defined as the risk of entry of gastrointestinal secretions, oropharyngeal secretions, solids or fluids into the tracheobronchial passages.This diagnosis is represented by 22 risk factors 6 .
It is known that critical patients have a greater risk for the entrance of secretions into the respiratory airways, due to various factors such as: gastroparesis, the presence of the endotracheal tube, reduced level of consciousness, and complex pharmacological therapy.The aspiration of secretions is closely linked to the occurrence of aspiration pneumonia, which increases mortality, length of hospitalization, the duration of mechanical ventilation, and treatment costs 8,9 .
Thus, in spite of the existence of studies investigating the diagnosis of Risk of Aspiration in critical patients, there is a need for detailed knowledge of the risk factors which define the presence of the diagnosis and which, once known, make it possible to direct the nurse's actions towards prevention 7,10,11 .
To this end, this study aimed to analyze the diagnosis of Risk of Aspiration and the relationship with its risk factors in the patient hospitalized in the intensive care unit.
METHODS
This is an observational, cross-sectional study, carried out in the adult intensive care unit (ICU) of a teaching hospital in a state capital in the Northeast of Brazil, from October 2013 to May 2014.
The study population was made up of 791 patients, defined as the number of patients hospitalized in the above-mentioned ICU over a period of one year. For calculating the sample, a formula developed for studies with finite populations was used, considering a confidence level of 95% (Zα/2 = 1.96) and a sampling error of 10%; for the prevalence of the event, the conservative value of 50% was considered. As a result, the sample was made up of 86 patients.
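The sample-size calculation above can be reproduced with a short script. This is a minimal sketch: the function name is ours, but the inputs (N = 791, Z = 1.96, p = 0.50, e = 0.10) come from the text, and the formula is the usual finite-population (Cochran-style) version.

```python
import math

def finite_population_sample_size(N, z=1.96, p=0.50, e=0.10):
    """Sample size for a finite population of N, given the normal
    quantile z for the confidence level, anticipated prevalence p,
    and absolute sampling error e."""
    numerator = N * z**2 * p * (1 - p)
    denominator = e**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(numerator / denominator)

# The study's parameters reproduce the reported sample of 86 patients:
print(finite_population_sample_size(791))  # → 86
```

Rounding up with `math.ceil` is the usual convention, so that the achieved error does not exceed the target; tightening the error to 5% would raise the required sample to about 259 patients.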
The selection of patients took place consecutively and by convenience. The following inclusion criteria were used: age equal to or over 18 years old, and patients who had received clinical or surgical treatment. The exclusion criterion was: patients hospitalized in the unit for a period of less than 24 hours, bearing in mind that some clinical data can only be observed over a period equal to or greater than 24 hours.
Data collection was undertaken through a questionnaire made up of questions relating to the patient's medical history and the physical examination, which are directed towards assessing the risk factors for the diagnosis of risk of aspiration.To this end, the instrument was constructed based on the risk factors for the diagnosis of risk of aspiration found in NANDA-I 6 .
In order to reduce the bias related to information collection, a collection protocol was constructed, which detailed the standardization of the procedures for measuring the variables.This collection instrument was subjected to face validation by three nurses who are intensive care specialists.Following the incorporation of their suggestions, the researcher responsible proceeded to apply the pretest with nine patients.There being no need for alterations, the participants in the pretest were included in the sample.
Furthermore, prior to undertaking the collection, a training session was held lasting three hours, run by the researcher, aimed at those collecting the data, with a view to ensuring the internal reliability of the data.To this end, the training addressed topics referent to intensive care, the critical patient and the nursing diagnosis of risk of aspiration, along with its respective risk factors, all the items of the collection instrument being explained.
Thus, after the training of the collectors, data collection was undertaken between October 2013 and May 2014, by the researcher, a resident, and a student of nursing on the scientific initiation program.
Aspiration risk in critical patients
Bispo MM, Dantas ALM, Silva PKA, Fernandes MICD, Tinôco JDS, Lira ALBC

In order to organize and analyze the data, a database was constructed using the Microsoft Office Excel software, in which the clinical variables and the risk factors for the diagnosis studied were recorded. Once the database contained all the factors mentioned above, the researcher filled it out based on the information present in the patients' questionnaires, defining whether each risk factor was present or absent.
Once the presence or absence of each risk factor had been defined by the researcher responsible, the database was referred for consideration by the three nurse specialists in the areas of intensive care and/or nursing diagnoses, so that they could undertake the process of diagnostic inference regarding the presence or absence of the diagnosis in the patient.In the event of disagreement among those making the diagnoses, majority rule was applied, in which the diagnosis is considered present when two or more of those making the diagnoses consider it to be present.
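The majority-rule inference described above, in which a diagnosis counts as present when at least two of the three experts judge it present, can be sketched in a few lines; the function name and vote encoding are illustrative only:

```python
def majority_diagnosis(votes):
    """True when a strict majority of diagnostician votes is 'present'.

    votes: iterable of booleans, one per expert
    (True = diagnosis judged present).
    """
    votes = list(votes)
    return sum(votes) > len(votes) / 2

# With three experts, two or more 'present' votes confirm the diagnosis:
print(majority_diagnosis([True, True, False]))   # → True
print(majority_diagnosis([True, False, False]))  # → False
```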
Those making the diagnoses were selected intentionally based on assessment of their curricula. The selection criteria were: to have published articles on the Systematization of Nursing Care and/or specialization or experience in the area of intensive care.
For analysis of the data, the Statistical Package for the Social Sciences (SPSS) Version 20.0 for Windows was used. Relative and absolute frequencies, means, medians and standard deviations were calculated. The Kolmogorov-Smirnov test was used for checking the normality of the numerical data. In the analysis of the association of the nominal data, Fisher's exact test was used. The analysis was based on the reading of the descriptive statistics, as well as on the p value found. For statistical significance, a level of 5% was adopted.
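Fisher's exact test, used above for the associations between nominal variables, can also be computed without SPSS, directly from the hypergeometric distribution. The sketch below uses only the standard library; the contingency counts in the usage example are hypothetical and are not taken from the study's Table 1.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns the p-value: the total probability of all tables with the
    same margins that are no more probable than the observed one.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):
        # Hypergeometric probability of x in the top-left cell.
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    probs = [prob(x) for x in range(lo, hi + 1)]
    # Small tolerance guards against floating-point ties.
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# Hypothetical example: 30/40 patients with a risk factor had the
# diagnosis versus 13/46 without it.
p_value = fisher_exact_2x2(30, 10, 13, 33)
print(f"p = {p_value:.2e}, significant at the 5% level: {p_value < 0.05}")
```

This "sum of small p-values" definition of the two-sided test is the one commonly implemented by statistical packages such as SPSS and `scipy.stats.fisher_exact`.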
The study was submitted to the Research Ethics Committee of the institution responsible for the research.It received a favorable opinion under protocol Nº. 440/414 and obtained Certificate of Presentation for Ethical Appreciation (CAAE) Nº 22955113.2.0000.5292.
RESULTS
A total of 86 patients receiving inpatient treatment in the ICU was evaluated, of whom 52.3% were female. The patients were predominantly of mixed ethnicity (55.8%), practiced a religion (95.3%), had a partner (70.9%) and had an income of one to three minimum salaries (79.1%). Regarding where they were from, 61.6% came from the rural areas of the state. The patients' mean age was 53.4 years old (±16.5), with a minimum age of 18 and a maximum of 81 years old.
In relation to the clinical data and data regarding their hospitalization in the unit, it was observed that the majority of the patients (73.3%) had been admitted to ICU following surgery, or for treatment of complications associated with surgery.The study also evidenced that the majority of the patients (70.9%) had chronic illnesses.
Of the 86 patients who participated in the study, 43 (50%) presented the nursing diagnosis of risk of aspiration. Among the 22 risk factors covered under the diagnosis, 17 were present in this clientele, namely: Tube feeding; Neck surgery; Impaired swallowing; Secondary events related to the treatment; Incompetent lower esophageal sphincter; Delayed gastric emptying; Reduced gastrointestinal motility; Reduced level of consciousness; Presence of tracheostomy; Presence of endotracheal tube; Increased intragastric pressure; Reduced cough reflex; Reduced gag reflex; Increased gastric residual; Situations hindering elevation of upper body; Gastrointestinal tube; and Neck trauma.
Of the 17 risk factors present in the patients with the diagnosis of risk of aspiration, only eight presented statistical significance (p < 0.05), namely: gastrointestinal tube, impaired swallowing, reduced level of consciousness, tube feeding, presence of endotracheal tube, secondary events related to the treatment, delayed gastric emptying, and increased gastric residual, as shown in Table 1.
DISCUSSION
Regarding the presence of the diagnosis of risk of aspiration in critical patients, one study undertaken in an intensive care unit in the Southeast of Brazil evidenced the prevalence of this diagnosis in 60.8% of the patients, which corroborates the results of the present study 10 .Another study ratifies the striking presence of this issue in ICU patients, highlighting the percentage of 98.7% of individuals with risk for aspiration 11 .
It follows that, given the high risk of aspiration among these patients, preventive measures must be adopted with a view to minimizing possible complications. In this regard, one study points to an efficacious intervention for reducing the risk of aspiration: keeping the bedhead raised at a level greater than 30° for mechanically ventilated patients 12 .
In addition to this, the study which aimed to implement a protocol with directives aimed at reducing aspiration in patients undergoing thoracic surgery identified that prior to the application of the protocol by the nurses, the rate of developing pneumonia among the patients was 11%; following the implementation, no patient developed pneumonia, showing it, therefore, to be efficacious in reducing this condition resulting from aspiration 13 .
In this regard, one can see the importance of applying preventive measures to this clientele.For this, the need is evidenced to identify the risk factors which have the greatest association with this issue.As a result, tube feeding was listed among the risk factors which are relevant for the diagnosis of risk of aspiration.It is known that nutritional support provides critical patients with the energy intake which is necessary for meeting their metabolic needs.Feeding at an early point is associated with reducing the severity of the illness and complications, as well as reducing length of hospitalization 14 .
Among critical patients, oral ingestion is often impaired as a result of clinical conditions which contraindicate its use, it becoming necessary to feed the patient by other routes, among
which emphasis is placed on feeding via gastrointestinal tube. This means of feeding, however, is not free of risk, as aspiration of secretions in airways can occur, along with diarrhea, vomiting, hyponatremia and hyperglycemia 14,15 .
Continuing in relation to the use of the gastrointestinal tube, this stimulates gastroesophageal reflux and the consequent aspiration of gastric content into the lungs, leading to the emergence of respiratory infections.The aspiration of secretions in airways is associated not only with the presence of tube feeding, but also with the caliber of this device, the infusion of food (continuous or intermittent) and the positioning of the patient in the bed 15 .
The risk factors of increased gastric residual and delayed gastric emptying were also associated with the risk of aspiration in the sample studied. The checking of the gastric residual once every six hours is highlighted as an essential nursing care step for the identification of delay in gastric emptying and of increased gastric volumes; moreover, measurements of gastric residual above 200 ml in a period of six hours are considered high, predisposing to the occurrence of gastric distention and consequent episodes of vomiting and aspiration of gastric content into the airways 1 .
Contradicting this, some authors evidence in their studies that the presence of gastric residual does not influence the occurrence of aspiration, given that various factors influence the aspiration of this content, including the caliber, size and location of the tube, as well as the viscosity of the residual liquid [15][16][17] .In this perspective, corroborating this context, a study which aimed to identify association between the gastric residual and the frequency of aspiration of gastric content in 206 critical patients ascertained that although 92.7% of the patients presented at least one tracheal secretion positive for pepsin, there was no consistent correlation between aspiration and gastric residual 18 .
Some important characteristics regarding this aspect, however, must be emphasized, such as, for example: aspiration occurred with significant frequency when there was low residual gastric content, although it occurred with greater significance when the content was high.Furthermore, the study emphasizes the importance of this analysis taking into consideration the characteristics of the patient, such as level of consciousness, sedation, position of the bedhead, the presence of vomit, and the seriousness of the illness.It also reveals that it is important to measure the gastric residual, which must be undertaken at an interval of four hours in order to evaluate those critical patients who are at greatest risk for aspiration 18 .
The risk factors of impaired swallowing and presence of endotracheal tube appeared in the study as relevant, corroborating a study undertaken with a similar population in the Southeast of Brazil when they state that dynamic changes in the oral and pharyngeal phase of swallowing are common in critical patients, principally those being mechanically ventilated through an endotracheal tube 19 .
Factors which predispose to the risk of aspiration among these patients are many in number, and include uncoordinated timing of breathing and swallowing and atrophy in the musculature of the tongue, pharynx and larynx associated with lack of use resulting from endotracheal intubation, as well as the effect of sedative drugs, opioids and neuromuscular blockers 20 .
Regarding the risk factor of reduced level of consciousness, the bibliographic findings indicate that patients with altered level of consciousness present greater predisposition for the aspiration of secretions in the airways, taking into account the reduction of the airways protective reflexes, such as the cough and gag reflexes.These authors also emphasized the need for rigorous assessment on the part of the nurse regarding the patient's level of consciousness, so as to identify, at an early stage, changes in the neurological situation, as well as in the standard of swallowing, thus avoiding the risk of bronchoaspiration 3,15 .
The risk factor of secondary events related to the treatment was also considered to be present in the clientele which participated in the present study.This factor may be related principally to the drug therapy.One study undertaken with critical patients evidenced that the tolerance of the nutritional therapy can be limited due to gastrointestinal events attributed to drug therapies administered simultaneously.This study also showed that the main gastrointestinal events related to the use of drugs were: constipation, diarrhea, abdominal distention, vomiting and pulmonary aspiration 21 .
It is very common to administer analgesics, sedatives and neuromuscular blockers in intensive care, in order to provide comfort, pain relief, and reduction in the patient's stress.This therapy, however, causes an increase in the risk of aspiration, as it can cause lowering in the level of consciousness, reduction in the reflexes protecting the airways, and reduction of intestinal motility, with a consequent increase in gastric residual, predisposing to episodes of vomiting 3,21 .
Based on the above, it is recognized that the early identification of the main risk factors related to the risk of aspiration in patients in critical-care units allows the nurse to carry out interventions capable of preventing this problem and, consequently, the resulting complications.
CONCLUSIONS
The present study analyzes the association of the nursing diagnosis of risk of aspiration and its risk factors in patients hospitalized in an intensive care unit.This diagnosis was present in half of the critical patients who participated in this study.Among the factors which were associated with this diagnosis, one can highlight impaired swallowing, gastrointestinal tube, tube feeding, reduced level of consciousness, presence of an endotracheal tube, secondary events related to the treatment, delayed gastric emptying and increased gastric residual.
The study of the main nursing diagnoses in critical patients allows the nurse to identify the risk factors which directly influence the care, contributing to the precise definition of the priority nursing care for maintaining quality care.
As a result, it is worth emphasizing the importance of studies which investigate the diagnostic inference undertaken by the nurse, qualifying the work of the professional and broadening the area's body of knowledge. As a limitation of this study, emphasis is placed on the difficulty of undertaking research with critical patients, given that they are often unconscious, disoriented and restricted to their beds, hindering the measurement of important data for the physical examination.
As a result, it is recommended that similar studies should be undertaken with other priority diagnoses for the care of the seriously-ill patient, such that the nurse may be able to make use of the knowledge which is necessary for optimizing the care, and such that instruments may be created which guide the nurse's diagnostic reasoning in her practice.
Table 1.
Distribution of the risk factors for the nursing diagnosis of risk of aspiration, which presented significant association in patients receiving inpatient treatment in the intensive care unit. Natal, State of Rio Grande do Norte (RN), 2014. * Fisher's Exact Test. | 2017-09-28T05:16:45.143Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "a0c476702e579e8d290d004f2ccd01d92b8620d9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/1414-8145.20160049",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a0c476702e579e8d290d004f2ccd01d92b8620d9",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
234960540 | pes2o/s2orc | v3-fos-license | Texto & Contexto Enfermagem
Objective: to analyze the safety culture of women in childbirth and related institutional factors based on the perceptions of nursing and medical professionals. Methods: a mixed, sequential explanatory study, conducted with nursing technicians, nurses and physicians of the obstetric center of a public maternity hospital in the city of Rio de Janeiro. Data collection took place from May to July 2018. The Hospital Survey on Patient Safety Culture questionnaire and descriptive statistical treatment were applied. Then, 12 semistructured interviews and thematic content analysis were applied and, finally, this data set was integrated. Results: most of the dimensions of the safety culture are weakened, especially in the areas of institutional organization, and the team lacks knowledge about the actions of the Patient Safety Center in the institution, the uniformity of care is deficient and the number of personnel for care is limited. The safety management process and work organization need adaptations. Conclusion: the safety culture of women requires improvements in team training, skilled care, work organization, and commitment of local management to qualified and safe care in hospital births. DESCRIPTORS: Patient safety. Delivery. Safety culture. Organizational culture. Obstetric nursing.
INTRODUCTION
Health care is still concentrated on the curative focus of diseases and is characterized by the increasing use of biomedical techniques and technologies and invasive diagnostic and clinical procedures, which has increased the complexity of care and, consequently, the risk of care-related events and damage. 1 In the area of maternal health, this curative and interventionist perspective with regard to healthy pregnant women has been questioned due to its potential to cause more harm than benefit, such as the routine of cesarean sections without clear indications and inappropriate practices, such as zero diet, oxytocin, episiotomy and Kristeller's maneuver, which lack scientific evidence to justify their indication for this clientele without associated morbidity. [1][2] These practices may also involve disrespectful attitudes that cause pain, fear, and traumatic experiences for pregnant women, especially during childbirth, and also cause harm to the physical and mental health of mother and baby. Pregnant women tend to express the desire to have their autonomy respected and to feel safe in childbirth, which corroborates the World Health Organization's recommendations regarding the promotion of quality of care for safe motherhood. [3][4] Safety is one of the crucial attributes for the quality of health care and is a global priority. The safety culture aims to prevent errors in the care process and the damage or adverse events caused to patients as a result of these errors, in order to provide safe care to health service clients. 5 The patient safety culture is a dimension of the organizational culture, as it is the result of individual and group values, beliefs, attitudes, perceptions, norms, procedures, competencies and behavioral patterns that determine the institutional commitment to safety management.
This culture can be impaired due to poor communication, failures in leadership and teamwork, lack of reporting systems, inappropriate analysis of adverse events, and insufficient knowledge of the team about patient safety. 6 The health organization consists of departments, units, or wards where the groups of professionals work, and these differentiated units develop specific types of subcultures and correspond to these groups' working environment. Subcultures can favor the reduction of errors, failures and adverse events, besides improving the results and satisfaction with the care provided. Their values may vary, though, and they may act as driving forces for organizational change or as covert countercultures that silently undermine new initiatives. Therefore, the organization can be seen as a dynamic cultural system. 7 A North American study considers that there is a lack of research on patient safety initiatives in specialized obstetric care hospitals. When analyzing the safety initiatives at these hospitals, gaps were identified in some of them, such as the absence of or limitations in the use of evidence-based practices; simulated obstetric emergency practices; regular reviews of morbidity or mortality cases; protocols or audits of cases of failure to progress and abnormal fetal heart rate; and delays in safety and quality management activities, such as monitoring of indicators and regular team training on effective communication. 8 Brazilian research has been developed in general and teaching hospitals, sometimes with a focus on the nursing team. These studies found unsatisfactory results regarding the safety climate; weaknesses in the organizational culture related to the workload; hierarchical communication; problems in supervision and management leadership; and difficulty of professionals in admitting the possibility of errors due to fear of punishment. 9,10 These results clarify some of the challenges for the creation of a safety culture in health institutions and indicate the need to advance knowledge on this topic in the area of hospital-based obstetric care, especially concerning the Brazilian reality. In view of these challenges and imperatives
for improving the quality and safety of obstetric care, the following research question was proposed: How do nursing and medical professionals evaluate the safety culture of women in childbirth, and how do they perceive the institutional factors related to this culture?
The study was aimed at analyzing the safety culture of women in childbirth and related institutional factors based on the perceptions of nursing and medical professionals.
METHOD
This is a mixed study with a sequential explanatory design. Mixed studies are characterized by the combination of quantitative and qualitative methods in the same research. The sequential design refers to the implementation of two distinct stages, one initial and the other subsequent, and the explanation indicates that one stage is used to explain the findings generated by the other. The combination of the two methods in the same research permits deepening and broadening the understanding of the problem. 11 In the sequential explanatory design, the initial stage of the research occurs through the quantitative method and provides objective findings on the research problem. The second stage is guided by the qualitative method, as this makes it possible to explain the initial quantitative results. At the end, the results of the quantitative and qualitative stages are integrated and interpreted to understand the problem in a more comprehensive and detailed way. 12

The study was developed at the Obstetric Center (OC) of a public maternity hospital in the city of Rio de Janeiro between May and July 2018. This institution was selected because it is a reference hospital for the care of habitual-risk pregnant women, with nurse-midwives for normal birth care. It also figures on the list of health institutions that had a Patient Safety Center in 2017, as recommended by the National Health Surveillance Agency. 13 It is noteworthy that this public maternity hospital is administered by a Social Health Organization and that its entire staff works under the Consolidation of Labor Laws. In 2017, 5,329 births were attended, 3,830 by normal birth and the remainder by cesarean section. The nurse-midwives were responsible for almost half of the normal births according to institutional data.
The participants were the nursing and medical professionals working in the OC of the maternity hospital, considered, for the purposes of this research, as the professional team directly engaged in this unit of the institution. The OC consists of a ward with operating rooms and another designated as a Normal Birth Center, where the habitual-risk parturients remain in individual boxes during labor and normal birth, and the nurse-midwives provide their care. This OC has 24 nurses (18 nurse-midwives, the remainder general care nurses), in addition to 36 nursing technicians and 60 physicians (40 obstetricians and 20 pediatricians), totaling 120 professionals in this sector. Most of these professionals work on a shift regimen.
The first stage of the mixed research was a survey, conducted through the application of the Hospital Survey on Patient Safety Culture (HSOPSC), which was validated and cross-culturally adapted for the Portuguese language and the Brazilian reality. 14 The HSOPSC questionnaire makes it possible to evaluate the safety culture of the hospital as a whole, of a hospital unit or sector, or of a professional category on the staff, such as nursing. This tool consists of 42 items intended to measure each respondent's opinion or perception regarding each dimension (D) of the patient safety culture. These dimensions are distributed across four sections of the HSOPSC questionnaire. The first section contains questions about the professionals' sociodemographic data; the second includes questions about the hospital unit where they work, covering the first seven dimensions, D1 to D7; the third investigates the organization of the hospital and corresponds to three dimensions, D8 to D10; and the fourth focuses on the last two dimensions, D11 and D12, which assess the results of the safety culture, adding questions about the number of events reported in the past 12 months and an overall assessment of the safety culture, with answers ranging from "excellent" to "poor".
The HSOPSC questionnaire presents items with five-point Likert responses, ranging from "I totally disagree" to "I totally agree" and from "never" to "always". Some items are worded positively and the concordant answers are considered positive for the safety culture, while other items are worded negatively and the discordant answers are also considered positive for the safety culture.
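The scoring convention just described — agreement counts as positive for positively worded items, disagreement counts as positive for reverse-worded items — can be sketched in a few lines. This is a minimal illustration only, not the official AHRQ scoring code; the function name and the caller-supplied reverse-coding flag are assumptions.

```python
def percent_positive(responses, reverse_worded=False):
    """Share of answers counted as positive for the safety culture.

    Responses use a 5-point Likert scale: 1 = "totally disagree"
    ... 5 = "totally agree" (or "never" ... "always").
    For positively worded items, agreement (4 or 5) is positive;
    for reverse-worded items, disagreement (1 or 2) is positive.
    Neutral answers (3) are never counted as positive.
    """
    if reverse_worded:
        positive = [r for r in responses if r in (1, 2)]
    else:
        positive = [r for r in responses if r in (4, 5)]
    return 100.0 * len(positive) / len(responses)
```

For example, the answer set [5, 4, 3, 2] on a positively worded item yields 50% positive, and [1, 2, 3, 4] on a reverse-worded item also yields 50%, since the two disagreements count in its favor.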
As the study was restricted to the nursing and medical professionals from the maternity's OC, the seventh question of the questionnaire regarding the respondent professional's position or function had to be adapted, as its original version includes answers concerning the other professional categories, such as nutritionist, social worker, among others.
In the quantitative stage of the research, nursing and medical professionals working in the maternity hospital's OC who provided direct care to women in labor and childbirth were included, while those with less than one year of experience in this care were excluded. This was based on the premise that professionals with less than one year of work in the maternity hospital are adapting to the organizational culture of the institution.
The professionals eligible for the study were captured in the work environment. A previous meeting was held, in which one of the researchers of this study, the immediate managers and the OC professionals participated, to promote the study and its objectives. These professionals received clarifications about the goals of the study, at the beginning or end of the day and night shifts. Although different days and times were available to complete and hand in the questionnaire, few professionals answered it, with the main justification of having little time in view of the work demand at the sector.
In view of these difficulties, the research team chose to study an intentional and therefore nonprobabilistic sample. The primary researcher applied the HSOPSC questionnaire to the professionals who agreed to participate in the research. The participants answered the questionnaire and returned it at the beginning or end of the shift, before or after the day and night handoff, or during the breaks of their work, so as to avoid data production losses. Thirty-three questionnaires were distributed, but the respondents did not return five, and two were discarded due to incomplete filling.
The responses to the sections of the HSOPSC were analyzed in accordance with the recommendations of the Agency for Healthcare Research and Quality, the US agency that created this tool and recommends calculating the positive answers to the items in the twelve dimensions of the safety culture in accordance with the following percentages: 75% or more represent a strengthened safety culture; less than 75% and more than 50% indicate a neutral range with potential for improving the safety culture, and 50% or less correspond to a weakened culture of safety. 15

In the qualitative stage, the participants who answered the HSOPSC were included, and the same inclusion and exclusion criteria adopted in the quantitative phase of this study were followed. The selection of the eligible participants was based on a name list of the nursing professionals and physicians according to the work shifts at the OC, attempting to consider the representativeness of each professional category and work shift.
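The AHRQ cut-offs described above amount to a simple three-way classification of each dimension's percentage of positive answers. A hypothetical sketch (the function name is an assumption, not part of the AHRQ tool):

```python
def classify_dimension(pct_positive):
    """Classify a safety-culture dimension by its % of positive answers,
    following the cut-offs described in the text:
    >= 75% strengthened; more than 50% and less than 75% neutral
    (room for improvement); <= 50% weakened."""
    if pct_positive >= 75:
        return "strengthened"
    if pct_positive > 50:
        return "neutral"
    return "weakened"
```

Applied to the study's own figures, Organizational learning at 70.4% falls in the neutral range, while Overall perceptions of patient safety at 31.8% is weakened.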
The qualitative data were obtained by applying individual interviews, recorded in digital media and conducted in a rest room of the team near the OC. The interviews were conducted with the support of a semi-structured script, consisting of two parts: the first with questions aimed at characterizing the participants and the second with open questions that asked the professionals about safe care for women in childbirth and the factors involved in the safety culture of this care in the OC.
The interviews were transcribed and analyzed according to thematic content analysis to discover the cores of meaning that made up the communication and how frequently they appeared, enabling the inference of knowledge related to the conditions of production or reception. 16 The interviews stopped when no new codes or themes emerged during the analysis.
This analysis was completed in three stages: in the first, called pre-analysis, the interviews were transcribed, organized, and skimmed to identify the sections of text that are consistent with the purpose of the study; in the second stage, the material was explored based on semantic equivalence to group the Registry Units (RU) in accordance with the corresponding themes, which made it possible to construct the thematic categories -a researcher on the team who did not participate in the data collection reviewed this second stage of the analysis; finally, in the processing of the results, inferences and interpretations were made 16 in accordance with the terms and assumptions of the Patient Safety Culture. [3][4][5][6][7][8]13 The study complied with the regulatory standards for research involving human beings. The participants were designated here by the professional category and the order in which the interviews were held.
RESULTS
In the quantitative stage, 26 nursing and medical professionals answered the HSOPSC, corresponding to a response rate of 21.6% of all OC professionals. Almost all respondents were female (96.2%) and only one nurse was male. Therefore, the professionals are referred to here in the feminine.
This group of study participants consisted of nine (34.6%) nurses, nine (34.6%) nursing technicians and eight (30.8%) physicians working in the OC of the maternity hospital under investigation; and most of them (57.7%) work between 20 and 39 hours a week and from one to five years at the institution (88.5%).
All 12 safety culture dimensions the respondents assessed had less than 75% of positive responses. The mean response percentage was 48.14%, which suggests a weakened safety culture in this sector of the maternity hospital. The highest frequencies of positive responses were found in the dimensions Organizational learning and continuous improvement (70.4%); Teamwork within units (69.7%); and Non-punitive responses to errors (62.6%).
The lowest percentages were observed in the cultural dimensions: Overall perceptions of patient safety (31.8%); Staffing (31.5%); and Handoffs and transitions (30.6%). These data are displayed in Table 1. Regarding incident reporting in the previous 12 months, 88.5% of the professionals answered that they had not reported any event. The participants most frequently evaluated patient safety in the OC as "regular" (42.3%), followed by "very good" (26.9%), "excellent" (15.5%) and "very bad" (15.3%).
Twelve female participants took part in the qualitative stage of the study: five obstetric nurses, three nursing technicians, and four doctors, the latter being two obstetricians and two pediatricians. As for the length of experience in the institution, 10 participants had worked in the maternity hospital from one to five years, and two professionals for six years.
The content analysis of the interviews permitted the construction of the thematic categories described in Chart 1.
First category: Safe care for women during hospital birth
The nursing and medical professionals at the OC consider that safe care for parturient women occurs through the prevention of events, such as reducing errors and damage during the care process; performing technical care in accordance with care protocols; correctly identifying women and their infants; and preventing the occurrence of falls, as can be observed in the following statements: [...] It means providing care as safely as possible to avoid errors and minimize any problems that may occur due to human error in care (Nurse E4).
Second category: Restrictive factors of the safety culture in the Obstetric Center
The restrictive factors of the safety culture that emerged from the professionals' statements were the lack of knowledge of institutional actions regarding patient safety, deficiency in the uniformity of the behaviors the team adopted and limitations in the number of personnel in view of the care demand, as manifested in the following statements: [
DISCUSSION
The patient safety culture among health professionals has attracted the attention of researchers, managers, and workers in Brazil. Self-applied questionnaires are widely used to analyze the dimensions of this culture and identify its main weaknesses and strengths, but they can be time-consuming and tedious for the respondents, which negatively affects the response rate. 14 In studies in Brazilian hospitals, the response rate of eligible professionals to the HSOPSC questionnaire ranged from 13.6% to 44.8%. In the quantitative stage of this mixed research, this rate corresponded to 21.6% of all nurses and physicians at the OC, suggesting that, in our setting, the application of this tool is a challenge and requires strategies to raise health professionals' awareness of the importance of their participation in the advancement of knowledge on the theme, as the response rate to the questionnaire can itself figure among the safety culture indicators. 14,17

Despite the limits these factors impose on the accuracy with which this study portrays the safety culture as perceived by the nursing and medical professionals at the OC of the maternity hospital under investigation, the findings showed that none of the 12 dimensions measured reached the parameter of a strengthened safety culture. Similarly, in a Brazilian survey conducted with the health teams of one Intensive Care Unit and three general hospitals, weaknesses were identified in most dimensions of the safety culture evaluated, indicating that patient safety needs to advance in hospital units. [17][18] The study participants evaluated the dimensions of the safety culture related to their work unit as areas that can improve the safety culture, except for the staffing dimension. Among these potential areas, organizational learning and continuous improvement, teamwork within units, and non-punitive responses to errors stood out.
These areas are the most favorable for the advancement of the safety culture in the OC of the maternity hospital, because a work environment favorable to learning and team integration enhances the professionals' commitment to cultural change in health services. 19 Despite these potentials, all dimensions relevant to the organization of the institution were weakened, with response percentages below 50%, such as the support of supervisors/managers for patient safety; teamwork across units; and handoffs and transitions. The same was verified in the two result-related dimensions, Overall perceptions of patient safety and the frequency of reported events. The fragility of these two dimensions was corroborated by the "regular" rating in the overall assessment of the safety culture and by the rarity of event reports in the previous 12 months.
Organizational commitment is fundamental for professionals to perceive patient safety as a priority in the institution, enabling them to adopt committed attitudes and to learn from safety events. Organizations with a positive safety culture have communication based on mutual trust; shared perceptions about patient safety; belief in the support of leaders and managers; and valuation of measures to prevent and predict events based on risk management, process monitoring, and a plan to intervene in the identified problems. [5][6] Culture within health organizations and cultural change are seen as strategic for managing and improving the quality of health care. In Brazilian obstetric care, cultural change has been a recurring theme because the predominant care culture is characterized by inadequacies and unnecessary interventions. This dominant obstetric culture is based more on the tradition of crystallized routines, habits, and practices than on actual values, behaviors, and attitudes guided by scientific evidence that place the woman at the center of the care relationship. [1][2]

The nursing and medical professionals who participated in this research showed that they know the basic attributes of safe care for women in childbirth, such as care provided according to the care protocols and able to prevent the occurrence of errors and harm. They also mentioned two international safety goals: the correct identification of the patient and the prevention of falls. The statements suggest, however, that these goals were inserted in the care routine as a formality, because the professionals did not refer to the other actions recommended for patient safety, such as event reporting, construction and monitoring of indicators, and risk prevention and control measures. 8 These professionals expressed ignorance about the existence and work of the Patient Safety Center at the institution.
This organizational body was created in 2013 through the National Patient Safety Policy to promote and support the implementation of patient safety actions such as risk management, event reporting, institutional patient safety planning, elaboration of patient safety protocols, and monitoring of indicators, among other tasks. 13 Therefore, this lack of knowledge may explain the incipient safety culture verified by the HSOPSC and may indicate possible challenges for the Center to operate and fulfill its tasks.
An integrative review showed factors associated with the implementation and success of quality improvement and risk management programs in hospitals. The following facilitators were identified: 1) governance through strong leadership, committed to the development of quality improvement actions; 2) quality management through a competent and multidisciplinary team, available to establish best practices in quality, culture, and patient empowerment projects; 3) work organization through the dissemination of recommendations; production of evidence-based protocols; professional training; integrated and collaborative teamwork; and the necessary material and financial resources. 20 In this research, areas with potential to improve the safety culture may be related to some of the factors described above. The professionals' statements also value the prevention of incidents and adequacy of care, indicating that the team obtained some achievements that need to be acknowledged in order to stimulate it to advance in the safety actions for women in childbirth.
The limiting factors in the implementation and success of quality improvement and risk management programs identified in the integrative review were: 1) failures in the local system in producing, disseminating, and appropriating best practice guides; 2) lack of material resources, time, and human capital; 3) difficulty in access and inadequacy of the information system; 4) lack of skill or knowledge; limitations in risk perception; denial of reality and of the patient's feelings. 20 Some of these limiting factors were also identified here, such as weaknesses in the safety culture dimensions related to the support of supervisors/managers; staffing; handoffs and transitions; teamwork across the units of the institution; and the low frequency of reported incidents. In addition to these weaknesses, the professionals' statements highlighted the team's lack of knowledge about the patient safety actions; deficiency in the uniformity of care; and reduced staff in the OC of the maternity hospital investigated.
This set of weaknesses requires improvement of the management process and of the work organization for the proper implementation of safety actions for women in childbirth. It also requires advances in the elaboration of, dissemination of, and adherence to evidence-based protocols; continuous training of professionals; integrated and collaborative teamwork; and empowerment of women for the qualification and safety of obstetric care. [5][6]20 Another aspect to be emphasized is that the hospital culture can positively or negatively affect the teams' work in the units of the institution and, therefore, the quality and results of the care provided in the health services. For cultural change to take place at the institution, a systemic approach to the components both internal and external to the organization is therefore needed, encompassing the logistics, tools, and resources provided by the management of the health care network, as well as the involvement of the clients, team, and leaders of those services, so that they clearly know how to achieve that change. This includes knowledge about the processes and tools to be adopted to improve the safety of women and their children during childbirth care. 7 It is also highlighted that the health program to improve patient safety in the services of the Unified Health System is relatively recent in the country, and the results described here suggest that safety actions are developing but still incipient, calling on professionals, managers, and users of the health unit to make efforts towards full implementation through the effective performance of the Patient Safety Center in the service.
Thus, the implementation of patient safety measures can boost public actions aimed at changing the dominant obstetric culture in the country, as both initiatives involve change processes of the organizational culture, such as improving local governance, quality management, work organization, and care process to achieve more successful results.
Finally, it should be emphasized that this study presents limitations due to the fact that its results are not representative of the nursing and medical team of the OC studied and, therefore, should be assessed with caution and cannot be generalized to the other hospital units.
CONCLUSION
The mixed research found weaknesses in most dimensions of the safety culture evaluated, especially in the institutional organization areas, corroborated by the team's lack of knowledge about the actions of the Patient Safety Center; poor uniformity of care; and reduced staff to take care of the parturients in the OC of the maternity studied.
Despite these weaknesses in the safety culture, the team has notions of safe care for women in childbirth, characterizing it as care that prevents errors and harm and that is in accordance with the care protocols. This team acts in the correct identification of women and newborns and in the prevention of falls in the OC. The areas with potential for improvement are the dimensions of the safety culture concerning the work unit, with better evaluation in the areas of Organizational learning, Teamwork within units and Non-punitive responses to errors.
The findings described here can add new perspectives on the organizational and patient safety culture regarding the specificities of the obstetric care culture during childbirth, as well as sensitize health professionals and motivate researchers to advance knowledge on the theme and expand it to the other maternal healthcare segments, such as prenatal and postpartum care.
"year": 2020,
"sha1": "f75eba9a187dcd98525718716f94387bac367626",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/tce/v29/1980-265X-tce-29-e20190264.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f75eba9a187dcd98525718716f94387bac367626",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
249677686 | pes2o/s2orc | v3-fos-license | Maternal hemodynamics and computerized cardiotocography during labor with epidural analgesia
Purpose To analyze the mechanisms involved in the fetal heart rate (FHR) abnormalities after epidural analgesia in labor. Methods A prospective unblinded single-center observational study of 55 term singleton pregnant women in spontaneous labor. All women recruited underwent serial bedside measurements of the main hemodynamic parameters using a non-invasive ultrasound system (USCOM-1A). Total vascular resistance (TVR), heart rate (HR), stroke volume (SV), cardiac output (CO) and arterial blood pressure were measured before epidural administration (T0), 5 min after the epidural bolus (T1) and at the end of the first stage of labor (T2). FHR was continuously recorded through computerized cardiotocography before and after the procedure. Results The starting CO was significantly higher in the subgroup of women with low TVR than in the high-TVR group. In the low-TVR group, CO fell significantly after the epidural bolus and then increased again by the end of the first stage; in the high-TVR group, CO increased insignificantly after the anesthetic bolus and then increased significantly over the remainder of the first stage of labor. CO was inversely correlated with the number of decelerations detected on cCTG in the hour after the epidural bolus, while short-term variation was significantly lower in the high-TVR group. Conclusion Maternal hemodynamic status at the onset of labor can make a difference in the fetal response to the administration of epidural analgesia.
Introduction
Since the beginning of its history, analgesia in labor has raised controversy and doubts about its maternal and fetal/neonatal consequences; safety thus remains a challenge to pursue. Transient abnormalities in fetal heart rate (FHR) have been described in up to 15% of cases after the use of analgesia during labor [1], complicating the interpretation of the fetal CTG and the prediction of fetal acidemia at birth. FHR decelerations and bradycardia have been reported for all types of labor analgesia (epidural, spinal, combined spinal-epidural and intravenous opioids) [2]. The clinical significance of these changes is not entirely clear; however, there is a common consensus on the maternal and fetal oxygenative and vascular pathophysiology [3]. It has been reported that fetal oxygenation is altered in a dose-dependent manner by the administration of epidural analgesia [4]; the hypotheses of uterine hyperactivity due to the reduction of catecholamines [5] and of maternal hypotension due to an imbalance of the adrenaline/noradrenaline ratio [6] have also been proposed. In this context, several studies have described uteroplacental and fetal hemodynamics after labor analgesia, with differences in clinical characteristics (antenatal, induction of labor, high-risk or low-risk pregnancies), in the vascular district evaluated, in the type of anesthesia (continuous infusion, single dose, self-controlled) and in the drugs used [7]. In the majority of studies, the FHR changes were not associated with an increased incidence of cesarean section and did not appear to have an immediate effect on neonatal status as determined by Apgar scores [8]. Based on recent evidence of a maladaptive cardiovascular response in pregnancies complicated by placental syndromes [9][10][11][12], maternal hemodynamic assessment has become an interesting way to evaluate maternal-fetal interactions from a different point of view.
Labor and delivery are events that have a great impact on general maternal hemodynamics: the change in maternal position from supine to lateral alone may produce an increase in cardiac output (+ 21.7%), a decrease in heart rate (− 5.6%), and an increase in maternal stroke volume (+ 26.5%) [13]. Anxiety, pain and exertion increase both heart rate and stroke volume, just as the utero-placental consequences of the reduced venous return to the heart due to caval compression in the supine position are well known. An increment of 12% in basal cardiac output has been reported in a group of women during labor [14].
The objective of the present study is to analyze the hemodynamic pattern of women during labor before and after epidural analgesia and its relationship with FHR.
Patients and methods
This was a prospective unblinded single-center observational study carried out at Salesi Maternal-Neonatal University Hospital in Ancona (Italy), between March 2018 and June 2019. The center treats 1800 parturients per year, with an epidural analgesia rate in labor of 40% and a cesarean delivery rate of approximately 24%.
Fifty-five low-risk pregnant women in active labor with a normal FHR trace submitted to epidural analgesia were recruited. Inclusion criteria were: healthy singleton pregnancy after the 37th week of gestation, spontaneous active labor (cervical dilation of at least 3 cm), age 18-40 years, height 155-180 cm, body mass index < 35 kg/m2, normal FHR pattern at admission. Exclusion criteria were: history of hypotensive episodes, pre-existing or current hypertensive or metabolic disorders, psychiatric or somatic disease, fetal/neonatal malformations, or other contraindications for epidural analgesia. Informed consent was obtained from all individual participants included in the study.
Epidural analgesia (EA)
After venous cannulation and survey of maternal parameters an epidural catheter was inserted at the L2-3 or L3-4 space. A bolus of 20 mL levobupivacaine and 10 μg of sufentanyl was subsequently administered, followed by a continuous infusion of a 10 mL/hour solution of either 0.0625% levobupivacaine with sufentanyl 0.5 μg/mL.
Hemodynamic evaluation
Hemodynamic pattern was assessed using a non-invasive ultrasonic monitor (USCOM ® , USCOM Ltd, NSW, Australia), used for cardiovascular evaluation in pregnancy and validated against echocardiography [15]. A transducer was placed on the suprasternal notch to measure transaortic or transpulmonary blood flow. At least three consecutive cycles were registered for each scan, by two trained researchers, to obtain the main cardiac parameters including total vascular resistance (TVR), heart rate (HR), stroke volume (SV), cardiac output (CO) and arterial blood pressure. These measurements were obtained before the epidural bolus (T0), 5 min after it (T1), and at the end of the first stage of labor (T2).
Computerized cardiotocography (cCTG)
The cCTG was performed for 1 h after the epidural bolus with the Sonicaid Oxford 8002 System (Manor Way, Old Woking, Surrey, England). Short-term variation (STV) was calculated as the average of sequential 1/16-minute (3.75 s) pulse interval differences by the Dawes-Redman software-based algorithm.
The protocol of this prospective study was approved by the ethics committee of our center and written informed consent was obtained from each patient.
Statistical analysis
Comparisons were performed using Pearson chi-squared test for proportions, and using independent samples t-test or the Kruskal-Wallis test for continuous data. Descriptive data were analyzed using IBM SPSS Statistics for Windows, Version 22.0 (IBM Corp Armonk, NY, USA). A P value < 0.05 was considered statistically significant. This study was performed in line with the principles of the Declaration of Helsinki. This is an observational study. The internal academic Research Ethics Committee has confirmed that no ethical approval is required.
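The Pearson chi-squared test for proportions mentioned above has a closed form for a 2×2 table that is easy to write out. The sketch below is illustrative only; the counts in the usage note are invented, not data from this study.

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]] (e.g., comparing the proportion of cesarean
    sections between two groups of parturients).

    Uses the shortcut formula n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

With identical proportions in both rows (e.g., 10/20 vs 10/20) the statistic is 0; for the hypothetical table [[20, 10], [10, 20]] it is 20/3 ≈ 6.67, which at 1 degree of freedom would fall below the conventional P < 0.05 threshold used in the study.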
Results
The 55 patients recruited were divided into two subgroups, low-TVR and high-TVR, using the reported cut-off of 1200 dyne·s·cm−5 [16,17]. The characteristics of the study population are summarized in Table 1. No significant differences were found between the characteristics of the two groups, nor in the rate of cesarean sections or in neonatal outcomes.
Hemodynamics and cCTG records are summarized in Tables 2 and 3. In the whole population, cardiac output (CO) underwent a slight increase after epidural analgesia (EA) and a significant increase for the remainder of the first stage of labor (Fig. 1). When the CO trend was analyzed by subgroup, the starting CO in the Low-TVR group was significantly higher than in the High-TVR group (5.52 ± 0.52 vs 3.60 ± 0.88 L/min) (Fig. 2). After the epidural bolus, CO fell significantly in the Low-TVR group and then rose again by the end of the first stage; in the High-TVR group, CO increased insignificantly after the bolus and then significantly during the remainder of the first stage of labor (Fig. 3). Moreover, CO was inversely related to the number of decelerations detected on cCTG in the hour after the epidural bolus (R = −0.1685; p < 0.0001) (Fig. 4), while short-term variation was significantly lower in the High-TVR group (Fig. 4).
Discussion
The correlated effects of epidural analgesia during labor have been extensively studied; nevertheless, few studies have evaluated the phenomenon from the point of view of maternal hemodynamics. The main finding of this study is that when patients are stratified by total vascular resistance, both the hemodynamic profile during labor and the response to epidural analgesia change significantly. We have shown that low vascular resistances are associated with higher cardiac output, and this seems to guarantee better utero-placental and fetal performance during labor. Cardiac output is calculated as stroke volume multiplied by heart rate; it increases throughout pregnancy from as early as the 5th week, reaching, in the third trimester, about 30–50% above the nonpregnant level [18]. Echocardiography is the technique most commonly used to assess hemodynamics in pregnancy; invasive techniques are seldom used. An insufficient increase in cardiac output during pregnancy has been associated with neonatal complications [19]. The influence of labor on hemodynamic values has been controversial: according to some authors there is an increase in resting CO of up to 50% [20][21][22], while according to others there are no changes [23]. Consistent with previous evidence that CO may be linked to fetal distress [16,17], the hypothesis is that a lower CO, and thus a lower cardiac index, can affect fetal well-being as an expression of reduced cardiac performance and therefore of reduced utero-placental perfusion.
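As a worked example of the arithmetic just mentioned: CO = SV × HR comes from the text, while the resistance formula below is the standard systemic-resistance relation (TVR ≈ 80 × MAP / CO in dyne·s·cm⁻⁵, neglecting central venous pressure) and is an assumption added here for illustration, not a quantity the paper derives; all input values are hypothetical:

```python
# Worked hemodynamic arithmetic. CO = SV x HR comes from the text; the TVR
# formula is the standard relation assumed for illustration (CVP taken as ~0),
# and SV, HR, MAP values are hypothetical.
def cardiac_output(stroke_volume_ml, heart_rate_bpm):
    """Cardiac output in L/min."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def total_vascular_resistance(map_mmhg, co_l_min):
    """TVR in dyne*s*cm^-5, assuming negligible central venous pressure."""
    return 80.0 * map_mmhg / co_l_min

co = cardiac_output(65, 85)                 # hypothetical SV = 65 mL, HR = 85 bpm
tvr = total_vascular_resistance(80, co)     # hypothetical MAP = 80 mmHg
print(f"CO = {co:.2f} L/min, TVR = {tvr:.0f} dyne*s*cm^-5")
print("subgroup:", "Low-TVR" if tvr < 1200 else "High-TVR")  # 1200 cut-off from the text
```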
Despite a physiological progressive reduction of vascular resistance, driven by pregnancy mediators (nitric oxide, progesterone, prostaglandins) and by the development of a low-resistance circuit to the placenta, Doppler studies have associated high resistance in the uterine arteries with high peripheral vascular resistance and low maternal cardiac output [24].
The uterine fraction of maternal CO has been reported to be about 12% at term [25]; on the other hand, it has been calculated that utero-placental perfusion can be reduced by at least 60% during a uterine contraction in labor [26]. During a contraction, most fetuses tolerate a short period of hypoxia, while fetuses with lower hypoxic tolerance limits show signs detectable on the CTG in the form of decelerations and reduced variability. Short-term variation (STV) is established as a good predictor of fetal acid-base status during pregnancy and, although a significant increase in short- and long-term variation in the peripartum period has been demonstrated [27], in our series STV after the epidural bolus was significantly lower in women with high TVR than in women with low TVR. Conversely, women with low TVR and higher CO showed an improved fetal response to the maternal hypotension induced by epidural analgesic drugs, as demonstrated by the reduced number of decelerations and higher short-term variation.
Although limited by a relatively small sample size, the present study confirms a close link between maternal hemodynamics and utero-placental pathophysiology, and examines the practice of epidural analgesia from a different point of view, one that reveals substantial differences among women in labor. Childbirth outcomes may be closely related to low maternal cardiac reserves, identifying a cohort of women in whom epidural analgesia can further worsen the hemodynamic stress of labor.
Author contribution All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Stefano Raffaele Giannubilo, Mirco Amici and Simone Pizzi. The first draft of the manuscript was written by Stefano Raffaele Giannubilo and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding Open access funding provided by Università Politecnica delle Marche within the CRUI-CARE Agreement.
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-06-16T06:16:18.973Z | 2022-06-15T00:00:00.000 | {
"year": 2022,
"sha1": "47a6e1a35fefc91679f576993a9716cd5784d9a4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00404-022-06658-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "f7f888c8b5e63c1ac87fb1d685b5b548ff14fe91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
129808971 | pes2o/s2orc | v3-fos-license | Topography as a macroscopic index for the dissolved iron productivity of different land cover types in the Amur River Basin
Iron is the limiting nutrient of phytoplankton in the Sea of Okhotsk, and the majority of iron in this system is fed by the Amur River. The recent conversion of wetlands, the main source of iron in the Amur River basin, to agricultural lands will likely impact dissolved iron productivity, which may also influence primary production in the Sea of Okhotsk. Therefore this study was conducted to construct a macroscopic index for use in assessing dissolved iron productivity in the basin. Correlation analysis between climate and topographic parameters and the observed dissolved iron concentration in forests and wetlands revealed that the topographic wetness index (TWI) had a significant correlation with dissolved iron concentration. An exponential curve was found to be the best curve to express this correlation. We assumed that dissolved iron concentration for grasslands and agricultural lands, the other two dominant land cover types, could also be expressed by TWI. Based on this assumption, dissolved iron concentration curves for grasslands and agricultural lands were inversely identified by systematic modification of the curve for forests and wetlands. The results suggest that TWI can describe the average dissolved iron concentration of major land cover types in the basin.
INTRODUCTION
Iron has been shown to be the limiting nutrient of phytoplankton growth in the Northeast Pacific Ocean. The Sea of Okhotsk and the adjacent Oyashio region are also known as regions in which iron is the limiting nutrient of phytoplankton growth (Tsuda et al., 2003). In general, the primary source of oceanic iron was assumed to be aerosols until the 2000s. However, Nakatsuka et al. (2007) proposed an Intermediate-Water Iron Hypothesis based on intensive observations in the Sea of Okhotsk. While there is still some uncertainty associated with this hypothesis, it is highly probable that it explains the major part of the iron influx into the Sea of Okhotsk via fresh water from the Amur River (Nishioka, personal communication). Accordingly, the recent conversion of wetlands to agricultural lands in the basin will likely impact dissolved iron productivity, which may also influence primary production in the Sea of Okhotsk. Thus, it is essential to estimate the amount of dissolved iron produced in the Amur River Basin to assess its impact on primary production in the Sea of Okhotsk.
In general, iron is present at low concentrations within the range of pH and redox conditions of surface water, owing to the low solubility and thermodynamically stable state of ferric iron. Conversely, the solubility of iron increases under reducing conditions, in which microbial respiration transfers electrons to ferric iron and reduces it to the more soluble ferrous form. In addition, humic substances strongly interact with both dissolved and particulate iron species (Tipping et al., 1981; Davis, 1982; Warren and Haack, 2001), subsequently forming iron-humic substance complexes by which dissolved iron is stabilized. Thus, the redox process and the interaction with humic substances are the two major processes governing iron solubility.
There are two possible approaches to modeling dissolved iron production. One is the coupling of a physically based reactive transport model with a model of iron binding to humic substances. Elaborate numerical models that can deal with these processes have been developed during the last several decades (Tipping, 1998; Kinniburgh et al., 1999; Šimunek et al., 2006). However, such models can only deal with spatial scales of several hundred meters to several kilometers due to spatial variability, computational limitations, and the lack of observed data for validation. Although these models formulate biogeochemical processes explicitly, it is not feasible to apply them to continental watersheds.
The other possible approach is identifying empirical macroscopic indices that can characterize biogeochemical processes in a simple manner. Topography is one such index because it is an important factor governing hydrological conditions, which in turn affect various biogeochemical processes. Indeed, studies employing topography as an index have been conducted by several researchers, including Vitousek (1977), Ogawa et al. (2006), and Anderson and Nyberg (2009), who all found correlations between topographic parameters and the chemical composition of stream water. Thus, taking the latter approach, this study attempted to identify a macroscopic index based on topographic parameters that would represent the dissolved iron concentration of different land cover types in the Amur River Basin.
Study site
The study site was the Amur River Basin (Figure S1), which has a catchment area of 2,050,057 km². The total length of the Amur River is about 4,300 km. The amount of annual fresh water supplied to the Sea of Okhotsk by the river is about 300 km³. Average annual precipitation ranges from 300 mm in the west to more than 700 mm in the east. The mean annual temperature also varies, from −7 °C in the north to 6 °C in the south.
One distinguishing spatial pattern of the land use/land cover (LULC) of the basin is the contrast between the Russian side and the Chinese side (Figure S1). Specifically, the majority of the Russian side is forest, while the land on the Chinese side is primarily used for agriculture. If all forests in the study area are classified as a single forest type, the four most dominant LULC types in the basin are forest (59.5%), agricultural land (dry land and paddy field, 18.3%), grassland (12.2%), and wetland (6.9%).
Data
The observation points for river discharge and dissolved iron concentration are shown in Figure S1. We obtained discharge data from the main course (Stations 6 to 8) and from several large tributaries (Stations 1 to 5). The discharge data were provided by the Federal Service for Hydrometeorology and Environmental Monitoring (ROSHYDROMET) and the Global Runoff Data Center (GRDC) in Koblenz, Germany (http://grdc.bafg.de). The time resolution was daily at Stations 6 to 8 and monthly at Stations 1 to 5.
Dissolved iron concentrations from 1980 to 1995 were also obtained from ROSHYDROMET, from a total of 38 sampling points that were sampled about once a month from April to October of each year. The discharge rate was observed at the same time. Dissolved iron was measured by the colorimetric method with 1,10-phenanthroline, applied to water filtered through Whatman GF/F filters and acidified to pH < 2 with HCl (Hydrochemical Institute, 2006).
The H08 data set (Hirabayashi et al., 2008) was utilized for climate data such as average, maximum, and minimum air temperature, downward shortwave radiation, specific humidity, and precipitation. The spatial and time resolutions of H08 are 0.5° and daily, respectively. SRTM3 data derived from NASA's Shuttle Radar Topography Mission were used for the DEM. A coarser DEM data set with a grid size of 1000 m was produced by averaging SRTM3 for the analysis.
Method
To identify a primary parameter of dissolved iron concentration, correlation analysis was conducted. The average dissolved iron concentration of each watershed was used as the objective variable, while the explanatory variables included the spatio-temporal averages of climate parameters and the spatial averages of topographic parameters that might govern the dissolved iron production of each watershed. Specifically, the climate parameters included annual precipitation, summer/winter precipitation, annual average air temperature, and average air temperature during summer/winter, while the topographic parameters were a/tanβ, slope, and Laplacian.
For these climate parameters, summer was defined as the period from May to August, and winter as September to April of the next year, following the definition of Tachibana et al. (2008). For a/tanβ, a is the watershed area per unit length of the calculation grid, and tanβ is the slope of each grid (Beven and Kirkby, 1979). Since a/tanβ is recognized as a good index of wetness, we hereafter refer to it as the topographic wetness index (TWI). Slope was defined as the steepest gradient of each grid, estimated by choosing the steepest gradient among the eight surrounding grids. The Laplacian is an index of land surface roughness, a definition of which is given in Document S1.
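The topographic parameters can be sketched as follows. This is a toy one-dimensional illustration of a/tanβ on a synthetic hillslope with the study's 1000 m grid size; a full D8 flow-accumulation over the SRTM-derived DEM, as implied by the study, is omitted for brevity:

```python
# Toy illustration of TWI = a/tan(beta) on a synthetic 1-D flow line; the real
# computation would use D8 routing over the whole DEM.
import numpy as np

CELL = 1000.0                                   # grid size (m)
elev = np.array([300.0, 260.0, 230.0, 215.0])   # elevations down a flow line (m)

# tan(beta): steepest descent gradient to the downslope neighbour
tan_beta = -np.diff(elev) / CELL                # [0.040, 0.030, 0.015]

# a: upslope contributing area per unit contour length; in 1-D, cell i
# collects the i+1 cells above it
a = np.arange(1, tan_beta.size + 1) * CELL      # [1000, 2000, 3000] m

twi = a / tan_beta
print(twi)  # wetness index grows downslope as area accumulates and slope flattens
```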
The calculation period was from 1980 to 1995, during which land cover conditions could be considered the same as in the year 2000 (Chinese Bureau of Statistics, 1980–2000). The watershed area and LULC composition of the measuring points are summarized in Table S1. When calculating the average dissolved iron concentration, both the arithmetic average and the weighted average were calculated, with measured discharge used as the weighting function.
Where measured discharge data were lacking, monthly precipitation for each watershed was used as an alternative weighting function. The spatio-temporal averages of the climate parameters and the spatial averages of the topographic parameters were the arithmetic averages of each parameter within each watershed area. The topographic parameters were calculated by utilizing the DEM.
Correlation analysis
The results shown in Table I clearly indicate that TWI and slope were correlated with dissolved iron concentration, while no other parameters showed a clear correlation. The watershed areas of the data used for the correlation analysis ranged widely, from 100 km² to 233,000 km² (Table S1). In spite of this non-uniformity in spatial scale, topography was found to be a good index of dissolved iron concentration. Because the correlation coefficient of TWI was slightly higher than that of slope, TWI was adopted to express the dissolved iron concentration. Since no distinct differences between the arithmetic and weighted averages were observed, the weighted average was used in the following analysis.
Construction of a concentration curve for forests and wetlands
We attempted to construct a concentration curve for forests and wetlands with respect to TWI. In the following analysis, the dissolved iron concentration at a given point was calculated with a spatial resolution of 0.5°. Thus, catchments whose watershed area was less than 10,000 km² were extracted from the original data. In addition, watersheds in which agricultural lands occupied more than 1% of the catchment were excluded. As a result, 17 points, of which the dominant LULC types were forest and wetland, were retained. Using these data, mean annual dissolved iron concentrations were plotted against the average TWI. The calculation period was again 1980 to 1995. Three different types of curve, i.e., linear, power, and exponential, were tested as fitting curves. Figure 1 shows the exponential curve, which was found to have the highest correlation coefficient of the three. The correlation coefficients of the linear and power curves were, respectively, 0.49 and 0.59 for the average, 0.49 and 0.36 for the maximum, and 0.53 and 0.57 for the minimum values.
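The exponential fit can be sketched as below; the (TWI, Fe) points and the resulting coefficients are synthetic stand-ins, since the study's 17 observed points and fitted coefficients are not reproduced here:

```python
# Sketch of fitting an exponential concentration curve C = p * exp(q * TWI);
# the data points and coefficients are synthetic, not the study's values.
import numpy as np
from scipy.optimize import curve_fit

def expo(twi, p, q):
    return p * np.exp(q * twi)

twi = np.array([5.0, 8.0, 11.0, 14.0, 17.0])
noise = np.array([1.05, 0.96, 1.02, 0.99, 1.01])
fe = 0.05 * np.exp(0.12 * twi) * noise          # synthetic mean annual Fe values

(p_hat, q_hat), _ = curve_fit(expo, twi, fe, p0=(0.1, 0.1))
r = np.corrcoef(fe, expo(twi, p_hat, q_hat))[0, 1]
print(f"fitted: p = {p_hat:.3f}, q = {q_hat:.3f}, r = {r:.3f}")
```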
Uncertainty analysis of concentration curve
The inductively generated concentration curve was applicable only to watersheds primarily covered by forest and wetland. Thus, if it were used to predict the dissolved iron concentration at other points, some discrepancies would likely occur. In addition, as shown in Figure 1, the inter-annual fluctuation range of dissolved iron at each point was large, especially at higher TWI values. Thus, Monte Carlo simulation was implemented within the range between the fitted curves of the maximum and minimum values shown in Figure 1 to evaluate the uncertainty inherent in the curves. The number of Monte Carlo trials was 1,000 for each curve, and the linear congruential method was used to generate random numbers. Results were compared against the observed dissolved iron concentrations at Stations a-g. The dissolved iron concentration at a given point can be estimated by summing the annual-discharge-weighted average of the dissolved iron concentrations of the grids included in the watershed area of the target point. Document S2 provides the calculation procedure used to determine the annual discharge from each grid.
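The sampling step can be sketched as follows. The linear congruential constants are the common Numerical-Recipes values and the curve envelope coefficients are placeholders, since the paper specifies neither:

```python
# Sketch of Monte Carlo sampling of concentration curves between the fitted
# minimum and maximum curves. LCG constants are the common Numerical-Recipes
# values; the envelope coefficients are placeholders, not the published fits.
import math

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding uniform values in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def conc(twi, p, q=0.12):            # exponential concentration curve C = p*exp(q*TWI)
    return p * math.exp(q * twi)

P_MIN, P_MAX = 0.03, 0.08            # hypothetical envelope from min/max fits
rng = lcg(seed=42)
samples = [conc(10.0, P_MIN + next(rng) * (P_MAX - P_MIN)) for _ in range(1000)]

lo, hi = conc(10.0, P_MIN), conc(10.0, P_MAX)
print(len(samples), min(samples) >= lo, max(samples) < hi)  # draws stay in the envelope
```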
Figure 2 shows a comparison between observed and estimated dissolved iron concentrations during the period from 1980 to 1995. The calculated values were overestimated except at Station c, regardless of the type of fitting curve. Even when overestimation of the discharge of the Songhua River was taken into consideration, the discrepancies between the observed and calculated values were significantly large. Because agricultural land and grassland were the two most dominant LULC types after forest and wetland, it is highly probable that the dissolved iron concentrations of grasslands and agricultural lands were lower than those of forests and wetlands.
Identification of concentration curves for grasslands and agricultural lands
We assumed that the dissolved iron concentration of grasslands and agricultural lands could also be expressed as a function of TWI. In addition, we presumed that each function could be obtained by multiplying the obtained concentration curve by a constant. We selected the exponential curve as the concentration curve because its correlation coefficient was highest. Under these simple assumptions, we introduced two independent constants, b and c, as multipliers for agricultural lands and grasslands, respectively. Moreover, a constant a, which multiplies the original concentration curve when the LULC is forest, wetland, or another LULC type, was introduced. Varying each parameter from 0.0 to 2.0 at an interval of 0.1, a total of 21 × 21 × 21 = 9,261 trials were made. The calculation period was 1980 to 1995. The fitness of each calculated result was evaluated by the relative root mean square error (RRMSE).
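A condensed sketch of that inverse identification is given below. The per-station land-cover fractions and the "observed" concentrations are synthetic, constructed so that the grid search recovers a known multiplier set; the real search instead compares against measured concentrations at the stations:

```python
# Miniature version of the 21x21x21 multiplier search minimizing RRMSE.
# Station LULC fractions and "observed" values are synthetic.
import numpy as np

base = np.array([0.50, 0.42, 0.36, 0.30])   # curve-based Fe at 4 stations
frac = np.array([                            # [forest/wetland, agric., grassland]
    [0.90, 0.05, 0.05],
    [0.60, 0.30, 0.10],
    [0.50, 0.30, 0.20],
    [0.40, 0.40, 0.20],
])
true_abc = np.array([0.8, 0.1, 0.0])         # multipliers used to build the "truth"
obs = base * (frac @ true_abc)

def rrmse(calc, obs):
    return np.sqrt(np.mean(((calc - obs) / obs) ** 2))

grid = np.arange(0.0, 2.05, 0.1)             # 21 values: 0.0, 0.1, ..., 2.0
best = min((rrmse(base * (frac @ np.array([a, b, c])), obs), a, b, c)
           for a in grid for b in grid for c in grid)
print(best)  # smallest RRMSE and the (a, b, c) that achieves it
```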
Figure 3 shows the distribution of the RRMSE in the b-c plane, in cross-sections at several different values of a. These figures clearly demonstrate that lower RRMSE values were concentrated around the best-fit parameter set, which was as follows: a = 0.8, b = 0.1, c = 0.0, with RRMSE = 0.22. These results indicate that the RRMSE surface had only one optimal point in the parameter space and that there was no equifinality problem. The averages and variances of the 100 best-performing parameter sets were as follows: a = 0.72 ± 0.09, b = 0.17 ± 0.13, c = 0.39 ± 0.28.
By multiplying the original formula by these average values, we obtained new concentration curves for forests/wetlands, agricultural lands, and grasslands (Figure S3). Figure 4 shows the dissolved iron concentrations calculated using the newly developed concentration curves. Most of the discrepancies between the observed and calculated values decreased. In addition, the ranges of variation of the observed and calculated values at each site were nearly identical.
Because dissolved iron data for watersheds dominated by agricultural land or grassland were not obtained, we could not confirm the validity of the curves for agricultural lands and grasslands at this time. However, Yoh et al. (2007) reported that the dissolved iron concentration of dry land and paddy fields was lower than that of wetlands, which indirectly supports the validity of the curve for agricultural lands. Annual precipitation on the grasslands of most of the Mongolian high plain is less than 400 mm; thus, it is reasonable to assume that the average dissolved iron concentration of the grasslands in this study area is also low, which supports the validity of the curve for grasslands.
DISCUSSION AND CONCLUSION
This study clarified that TWI can be a good macroscopic index representing the average dissolved iron concentration of each grid. This means that we can easily assess the dissolved iron concentration of any grid simply by calculating its TWI. Since the calculation of TWI requires only DEM data, the calculation procedure is also very simple. Thus, the developed curves will be especially useful for evaluating the dissolved iron productivity of continental-scale large basins. Moreover, the identified curves can easily be incorporated into a hydrological model as an explicit function. This opens up the possibility of predicting the dissolved iron productivity of any watershed. We therefore consider our results a first step toward building a comprehensive terrestrial iron transport model. For the further development of such a model, inter-annual and seasonal changes in dissolved iron should also be formulated.
The key factors governing these temporal changes in dissolved iron are biogeochemical factors such as reducing conditions and the presence of humic substances.
Overall, we obtained a dissolved iron concentration curve for forests and wetlands with respect to TWI. By modifying this function, dissolved iron concentration curves for agricultural lands and grasslands were also identified. The results suggest that TWI can describe the average dissolved iron concentration in areas with different land cover types in the Amur River Basin. Future studies should incorporate these concentration curves into a hydrological model to simulate temporal changes in the dissolved iron concentration of the basin.
Figure 1. Relationships between mean annual dissolved iron concentrations and the average TWI of each basin. Exponentially fitted curves against the average, maximum, and minimum values are shown.
Figure 2. Comparison of the average observed and calculated dissolved iron concentrations during the period between 1980 and 1995 along the main course of the Amur River. Calculations were conducted using the identified concentration curves shown in Figure 1.
Figure 3. Relative root mean square error (RRMSE) distributions in the fitting parameter space. Six different planes perpendicular to the a-axis, cut at six different a values (a = 0.2, 0.4, 0.6, 0.8, 1.0, 1.5), are shown.
Figure 4. Comparison between observed and calculated dissolved iron using the identified dissolved iron concentration curves.
Table I. Pearson's correlation coefficient of the topographic index and the climate index against the average dissolved iron concentration | 2019-01-24T04:53:44.770Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "9f2b68ca98012ad9350647f70dfc30f77c70a753",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/hrl/4/0/4_0_85/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f2b68ca98012ad9350647f70dfc30f77c70a753",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
246748278 | pes2o/s2orc | v3-fos-license | Antiplatelet therapy for Staphylococcus aureus bacteremia: Will it stick?
Staphylococcus aureus bacteremia (SAB) remains a clinically challenging infection despite extensive investigation. Repurposing medications approved for other indications is appealing as clinical safety profiles have already been established. Ticagrelor, a reversible adenosine diphosphate receptor antagonist that prevents platelet aggregation, is indicated for patients suffering from acute coronary syndrome (ACS). However, some clinical data suggest that patients treated with ticagrelor are less likely to have poor outcomes due to S. aureus infection. There are several potential mechanisms by which ticagrelor may affect S. aureus virulence. These include direct antibacterial activity, up-regulation of the innate immune system through boosting platelet-mediated S. aureus killing, and prevention of S. aureus adhesion to host tissues. In this Pearl, we review the clinical data surrounding ticagrelor and infection as well as explore the evidence surrounding these proposed mechanisms of action. While more evidence is needed before antiplatelet medications formally become part of the arsenal against S. aureus infection, these potential mechanisms represent exciting pathways to target in the host/pathogen interface.
Author summary
Staphylococcus aureus remains a challenge to treat given its virulence and its ability to invade the bloodstream and spread to multiple sites in the body. Recently, it has been observed that patients taking the antiplatelet medication ticagrelor may have better infection outcomes. From this clinical observation, investigators have launched in vitro and animal studies to better understand by which mechanisms ticagrelor may affect S. aureus infection and clearance. In this Pearl, we review clinical data surrounding ticagrelor and infection as well as explore 3 different potential mechanisms of action that have been suggested by current studies. These mechanisms may involve boosting the host's platelet-mediated innate immunity, representing an exciting direction for the treatment of S. aureus bacteremia.
Introduction
Staphylococcus aureus bacteremia (SAB) remains a major clinical challenge with significant patient morbidity and mortality. To better address SAB, investigators seek antibacterial strategies that act in nontraditional ways, including those that augment the host immune response [1]. While platelets are well known for their role in thrombosis, they also participate in innate immunity. In vitro, platelets successfully kill S. aureus [2]. Platelets can phagocytose S. aureus as well as secrete antibacterial peptides from alpha granules that kill S. aureus independent of antibodies [3,4]. In addition to direct activity against S. aureus, platelets can also be activated by intravascular pathogens due to pattern recognition receptors, causing secretion of chemokines to recruit and enhance lymphocytes as well as communicate with endothelial cells [2,5], thereby augmenting the immune response. Clinically, thrombocytopenia in the setting of SAB has been associated with both a greater magnitude of bacteremia and patient mortality [6], although it is not clear if this relationship is correlative or causative.
With advances in vascular medicine, platelet-modifying drugs such as ticagrelor, clopidogrel, and prasugrel are often prescribed for up to 1 year to patients suffering from acute coronary syndrome (ACS) [7]. In the Study of Platelet Inhibition and Patient Outcomes (PLATO) randomized controlled trial [7], ticagrelor was found to be superior to clopidogrel in preventing death from myocardial infarct, stroke, and vascular causes in patients with ACS. Ticagrelor is a reversible inhibitor of the platelet adenosine diphosphate P2Y 12 receptor, whereas clopidogrel and prasugrel are irreversible inhibitors of the same receptor. It remains unclear whether platelet-modifying therapeutics influence the role of platelets in innate immunity, although there is preliminary in vitro, in vivo, and clinical evidence that ticagrelor may mitigate SAB.
Here, we review clinical evidence surrounding ticagrelor and infection as well as explore 3 potential pathways in which ticagrelor may inhibit S. aureus.
Clinical data suggest that ticagrelor alters infection outcomes compared to patients taking other antiplatelet medications
In the PLATO trial, over 18,000 patients with ACS were treated with 1 year of either ticagrelor or clopidogrel [7]. In a post hoc analysis, adverse events were studied, including rates of bacteremia/sepsis [8]. Although the rates of these infections were similar in both groups, there were fewer deaths due to sepsis/bacteremia in the ticagrelor group (7 versus 23; p = 0.003).
The PLATO study renewed interest regarding infectious outcomes in patients following ACS. Three retrospective studies have since been published comparing patients on clopidogrel and ticagrelor. Among 9,518 patients treated with ticagrelor or clopidogrel (matched using propensity scoring), there were significantly fewer hospital readmissions due to infection with ticagrelor (6.11%) than with clopidogrel (10.53%) (HR 0.736, 95% CI 0.64 to 0.85; p < 0.001) [9]. In another propensity-matched retrospective study, 1.4% of 1,356 patients treated with ticagrelor compared to 3.6% of 1,356 patients treated with clopidogrel had gram-positive infections (HR 0.37; 95% CI 0.22 to 0.63; p < 0.001) [10]. Last, a third retrospective study including over 26,000 patients measured the occurrence of SAB during the first year after initiation of either ticagrelor or clopidogrel [11]. Patients treated with ticagrelor had significantly fewer episodes of SAB, with an absolute risk reduction of −0.19% (95% CI −0.32% to −0.05%; p = 0.006).
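For reference, an absolute risk reduction of this kind is derived from the two arms' event proportions; the sketch below uses invented counts chosen only to land near the reported −0.19% (the source's raw counts are not reproduced), with a standard Wald interval:

```python
# Illustrative ARR computation; event counts are hypothetical, chosen only to
# approximate the reported -0.19% absolute risk reduction.
import math

def arr_with_ci(events_1, n_1, events_2, n_2, z=1.96):
    """Absolute risk reduction p1 - p2 with a 95% Wald confidence interval."""
    p1, p2 = events_1 / n_1, events_2 / n_2
    arr = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_1 + p2 * (1 - p2) / n_2)
    return arr, arr - z * se, arr + z * se

arr, lo, hi = arr_with_ci(40, 13000, 65, 13000)   # ticagrelor vs clopidogrel arms
print(f"ARR = {arr:+.2%} (95% CI {lo:+.2%} to {hi:+.2%})")
```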
Notably, the PLATO findings came from a post hoc analysis, and these retrospective studies were correlative, not designed to determine cause and effect. However, these data in sum suggest that there may be mechanisms by which ticagrelor mitigates infection risk and, potentially, SAB.
Ticagrelor has direct activity against S. aureus, albeit at supraphysiologic concentration
When evaluated with in vitro time-kill assays, ticagrelor was effective against methicillin-resistant S. aureus (MRSA), methicillin-susceptible S. aureus (MSSA), Staphylococcus epidermidis, Streptococcus agalactiae, and Enterococcus faecalis [12]. It was not effective against 2 gram-negative pathogens, Escherichia coli and Pseudomonas aeruginosa. However, its antibacterial activity occurred at supraphysiologic concentrations. The minimum inhibitory concentration (MIC) of ticagrelor against MRSA was 20 μg/mL, whereas the physiologic concentration of ticagrelor at dosing for ACS in humans is between 0.8 and 1.2 μg/mL [12]. Another study also found that ticagrelor inhibited a clinical isolate of MSSA but only at supraphysiologic concentrations (MIC 64 μg/mL) [13]. Further, the combination of ticagrelor with the antimicrobials cefazolin or ertapenem was only additive rather than synergistic.
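Additivity versus synergy in such combination testing is conventionally quantified with the fractional inhibitory concentration (FIC) index from a checkerboard assay; the sketch below uses hypothetical combination MICs, since the source reports only the qualitative "additive" finding:

```python
# FIC index sketch; the combination MICs are hypothetical. Conventional cut-offs:
# FICI <= 0.5 synergy, 0.5 < FICI <= 4 additive/indifferent, > 4 antagonism.
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    if fici <= 0.5:
        return "synergy"
    if fici <= 4.0:
        return "additive/indifferent"
    return "antagonism"

# hypothetical checkerboard result: ticagrelor MIC 64 -> 32 ug/mL with cefazolin,
# cefazolin MIC 0.5 -> 0.25 ug/mL with ticagrelor
fici = fic_index(32, 64, 0.25, 0.5)
print(fici, "->", interpret(fici))
```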
If concentrations of ticagrelor required for direct antistaphylococcal activity are not achievable clinically, how does ticagrelor exert an antibacterial effect at physiologic doses? In a murine model in which MRSA-inoculated polyurethane disks were implanted in the flanks of immunocompetent mice, those treated with ticagrelor at physiologic dosing had significantly decreased bacterial burden of their infected implant, suggesting that another mechanism in vivo may be driving the antistaphylococcal activity of ticagrelor [12]. In sum, these results indicate that direct antibacterial activity of ticagrelor is unlikely to account for its apparent activity in vivo.
Ticagrelor improves host platelet-mediated killing of S. aureus and decreases host thrombocytopenia
Platelets engage in the clearance of S. aureus by secretion of antimicrobial peptides and phagocytosis of bacteria, as well as by recruitment of leukocytes [2-5]. In vitro, ticagrelor at physiologic concentration significantly enhanced the ability of human platelets to kill MRSA, whereas aspirin (another antiplatelet drug) did not [14]. The same effect was reproducible against MSSA [13]. Under microscopy, platelets incubated with S. aureus developed significant structural damage, whereas platelets treated with ticagrelor were relatively preserved, suggesting that ticagrelor may have a protective/stabilizing effect on platelets in the setting of S. aureus exposure [14].
In an observational prospective study of 49 consecutive patients with SAB, thrombocytopenia correlated with increased mortality [14]. Notably, isolates from SAB patients with more severe thrombocytopenia produced more α-toxin [14], an exotoxin that increases hepatic clearance of platelets through platelet desialylation. Mice infected with α-toxin-deficient S. aureus mutants had decreased thrombocytopenia and bacterial burden compared with mice infected with wild-type S. aureus. However, mice pretreated with physiologic concentrations of ticagrelor had decreased thrombocytopenia and improved survival during wild-type SAB [14].
In a clinical case report, a 60-year-old man with refractory SAB and thrombocytopenia despite 5 days of antibiotic therapy was started on ticagrelor [13]. Within 24 hours, his bacteremia resolved and platelet count improved. Discontinuation of ticagrelor led to recurrent thrombocytopenia, which then reversed with the resumption of ticagrelor. The patient was treated with 3 months of ticagrelor in addition to standard antibiotic therapy without further infection recurrence.
In sum, these in vitro and in vivo studies suggest that ticagrelor can enhance platelet-mediated killing of S. aureus as well as mitigate S. aureus-induced thrombocytopenia likely by preventing α-toxin-related desialylation. Given the role of platelets in innate immunity, maintaining platelet counts may contribute to improved outcomes in SAB as an additional benefit to ticagrelor therapy.
Antiplatelet therapy inhibits S. aureus binding to host endothelial tissues
Among other adherence mechanisms, S. aureus binds platelets via interactions between its clumping factor A (clfA) and host platelet von Willebrand factor and fibrinogen [15]. Activated platelets bind to the exposed extracellular matrix of damaged host endovascular tissue (such as heart valves). Therefore, preventing platelet aggregation on host endothelium by inhibiting platelet activation may mitigate SAB and its infectious complications. For example, S. aureus mutants lacking clfA were 50% less likely to cause endocarditis than wild-type strains in a SAB rat model [16].
In ex vivo perfusion reactors, precoating bovine jugular veins with fibrinogen stimulated both human platelet and S. aureus surface binding [17]. However, the platelet αIIbβ3 antagonist eptifibatide decreased S. aureus adhesion, likely due to inhibition of platelets. Likewise, the effect of antiplatelet therapy with aspirin and ticagrelor on S. aureus adhesion in the presence of human blood (including platelets) was tested under shear conditions [17]. Treatment with both aspirin and ticagrelor independently decreased S. aureus attachment to the lumen of bovine jugular veins; the combination of the 2 resulted in significantly less adhesion than aspirin alone. Dual antiplatelet therapy was also found in vivo to decrease endocarditis in a rat model of SAB due to inhibition of platelet binding [18]. Preventing S. aureus binding to platelets and therefore minimizing contact with host tissues may be another mechanism by which ticagrelor and other antiplatelet drugs mitigate infection.
Future directions
Clinical data from large patient cohorts suggest a protective effect of ticagrelor against infection. In vitro and rodent models have demonstrated that ticagrelor has direct antistaphylococcal activity at high concentrations and facilitates platelet-mediated killing of S. aureus, decreases SAB-induced thrombocytopenia, and mitigates binding of S. aureus to platelets and host tissue at physiologic concentrations (Fig 1). Therapeutic strategies that improve host immune function are appealing, as these are not prone to traditional bacterial resistance mechanisms [1]. In addition, repurposing existing licensed drugs is attractive, as the safety and adverse event profiles are well documented [19].
Stronger evidence is needed to conclusively evaluate the clinical efficacy of ticagrelor in SAB. A prospective randomized controlled trial of patients receiving standard of care versus standard of care plus ticagrelor may bring further clarity. Anticipated risks and unanticipated consequences, including increased bleeding, would need to be carefully considered. In a prospective trial of over 20,000 patients, those randomized to take low- or high-dose ticagrelor did have significantly greater bleeding compared to placebo (6.2%, 7.8%, and 1.5%, respectively; p < 0.01), although 86% of bleeding events were nonmajor [19]. Furthermore, there are conflicting reports that patients with endocarditis on anticoagulation may be more prone to cerebral hemorrhage due to emboli [20], and this will further need to be weighed as a potential risk with use of ticagrelor in SAB. While promising, the potential of antiplatelet medication to treat staphylococcal infection remains uncertain.
Phylogenetic classification supports a Northeastern Amazonian Proto-Tupí-Guaraní Homeland
The question of where Proto-Tupí-Guaraní (PTG) was spoken has been a point of considerable debate, with both northeastern and southwestern Amazonian homelands having been proposed, and with evidence from both archaeology and linguistic classification playing key roles. In this paper we demonstrate that the application of linguistic migration theory to a recent phylogenetic classification of the Tupí-Guaraní family lends strong support to a northeastern Amazonian homeland.
Introduction
The Tupí-Guaraní (TG) family is striking for its great geographical extent, and the study of movements of TG peoples over historical time scales has correspondingly been an important theme in TG anthropology, ethnohistory, and archaeology (see Noelli 1998, 2008 for an overview). It is also clear that the TG expansion has significantly shaped the linguistic, cultural and social history of lowland South America (see, e.g., Haynie et al., Michael 2014), making the question of where Proto-Tupí-Guaraní (PTG), the ancestor of all modern TG languages, was spoken, and how the languages diversified and radiated across South America, an important question for diverse fields engaged with the indigenous peoples of the continent.
In this paper we apply linguistic migration theory (LMT) to: 1) the geographical distribution of modern TG languages; and 2) the most fine-grained and empirically well supported classification of the family, Michael et al.'s (2015) phylogenetic TG classification, to locate the PTG homeland and to clarify key aspects of the dispersal of TG languages. We show that this method indicates a northeastern Amazonian homeland for PTG, supporting the claims of archaeologists such as Lathrap (1970) and Brochado (1984), but contradicting those of archaeologists such as Iriarte et al. (2017), and linguists such as Rodrigues (2000). We also identify a particular subgroup of the TG family as having been especially spatially dynamic, spreading TG languages both west along the upper Amazon, and south along the Atlantic coast and then westwards into the Paraná River basin and beyond.
Specifically, we propose that the PTG homeland was located on the lower Xingu River, and that several of the major high-level subgroups resulted from splits that took place either within the Xingu basin, or relatively nearby, in adjacent river basins, and near the mouth of the Amazon River. We find that one major subgroup, which we label Diasporic, expanded across much of the continent, spreading up the Amazon (Omagua and Kukama), along the Atlantic Coast (Tupinambá), and southwards (the Southern group, which includes the Guaranian subgroup).
In the remainder of this paper we describe the data and methodology employed (§2); present the results, including both the inferred PTG homeland and observations about the geographical radiation of PTG's daughter languages (§3); contextualize these results with respect to previous scholarship on these questions (§4); explicitly compare the plausibility of northeastern and southwestern PTG homeland hypotheses in light of the application of LMT to the Michael et al. (2015) classification (§5); and then conclude (§6).
We close this introduction with a caveat: note that in this paper we talk about the homeland for a proto-language, not a historical people or population, and the dispersal 1 of languages, not peoples. This choice is deliberate: while it is of course true that languages only exist and move through space as a consequence of being learned and used by speakers, it is potentially problematic to assume that the movement of languages corresponds directly to the movement of peoples. In particular, processes of language shift can result in changes in the spatial distributions of languages without significant population movements (Nichols 1997a, inter alia). The ultimate question of how the diversification and dispersal of Tupí-Guaraní languages corresponds to the movement of cultural practices and populations through space and time is a larger project that will require synthesis of research in archaeology, ethnography, ethnohistory, human […] Tupinambá subgroup we discuss below, dispersal trajectories along waterways would be, all other things being equal, more plausible than overland trajectories.
We conclude this section by observing that the reliability of the results we obtain from the application of the LMT to Michael et al.'s (2015) classification of the TG family is contingent on two factors: 1) the accuracy of classification of the family; and 2) the degree to which the dispersal of the TG languages is parsimonious, in the sense assumed by the LMT. On the first point, Michael et al.'s (2015) classification is the most detailed and empirically best-substantiated classification currently available, but we should expect empirically and analytically improved classifications to emerge in coming years, as more scholars turn to the important question of the internal classification of the TG family.
Whether new classifications will result in any significant changes to the model of the dispersion of the TG languages presented in this paper remains to be seen. On the second point, LMT, as a methodology, yields what one could call the 'simplest' model for the dispersal of languages within any language family. However, the agency exhibited by human societies should make us cautious about assuming that their spatial dynamics in contexts of language diversification always follow the parsimony assumptions of the LMT. While it strikes us as highly unlikely that the spatial dispersal of TG societies diverged so markedly from the assumptions of the LMT as to affect the major results presented in this paper, e.g., the northeastern Amazonian PTG homeland and the proposed dispersal trajectories of its major subgroups, it would not surprise us if data from allied disciplines, especially archaeology, lead to reevaluation of details of the model.
Classification and distribution of languages
As described in §2, the empirical bases of the analyses carried out in this paper are: 1) the classification of TG languages; and 2) their spatial distribution. The classification we employ is Michael et al.'s (2015) conservative 2 classification of TG resulting from the Bayesian phylogenetic analysis of lexical data corresponding to a 543-item concept list, which is reproduced in Fig. 1. Several of the major subgroups that we discuss below are labeled in this classification.
LIAMES, Campinas, SP, v. 19, 1-29, e019018, 2019

The spatial distribution of TG languages that we employ in this analysis is given in Fig. 2. Significantly, these are not the modern distributions of the languages in question (some of which, like Tupinambá, are in fact extinct), but rather their 'time of contact' (ToC) locations, which constitute their earliest known locations. We employ ToC locations instead of modern locations for two main reasons: 1) the locations of some languages have changed significantly since Europeans' arrival in South America; and 2) the geographical extent occupied by some languages has shrunk considerably. In both cases, ToC locations and distributions are better bases for inferring homelands and trajectories than modern ones, which reflect histories of genocide, displacement, and resistance that obscure the relationships of some languages to the homelands in which mid-level proto-languages were spoken. The most significant differences between ToC and modern distributions include: 1) Emerillon and Wayampí, now spoken in French Guiana and northern Amapá, respectively, which were spoken on the lower Xingu in the early colonial period (Grenand 1982); 2) Guajá and Ka'ápor, now both spoken in the state of Maranhão, which were probably spoken on the lower Tocantins not long before the arrival of Europeans (Balée 1994); 3) Tupinambá, now extinct, which was spoken along much of the Atlantic coast of Brazil south of the mouth of the Amazon River; and 4) Omagua, which is now on the verge of extinction, but which was spoken along a significant extent of the upper Amazon (Michael 2014; Michael and O'Hagan 2016).

Figure 2: Time-of-contact distribution of TG languages analyzed by Michael et al. (2015)

All languages are of course spoken in territories with spatial extent, but for purposes of representational convenience we have for the most part opted to represent the location of languages as points.
The only cases where we have not done so involve languages whose spatial extent is so great that using points to indicate their location would be a gross misrepresentation. One such case is that of Tupinambá, whose spatial extension along a significant fraction of the Brazilian coast cannot be adequately represented with a single point. Correspondingly, the spatial distribution is represented by a number of blue polygons (see Fig. 2).
The one case where we depart somewhat from the use of points and polygons as outlined above is that of the Guaranian languages, where we combine point representations with a larger polygon. The points represent the approximate location of the modern languages, while the polygon represents the approximate ToC distribution of this set of closely related languages. Our motivation for combining these two representational schemes for this group of languages is the lack of clarity regarding the distinctness of all the Guaranian varieties, and their location, at ToC. We believe that this hybrid representation allows us both to capture the modern distinctness of the varieties represented by points and to remain suitably agnostic about their distinctness at ToC.
Analysis
We now turn to the Linguistic Migration Theory (LMT) analysis, based on the classification given in Fig. 1 and the spatial distributions given in Fig. 2. In §3.1 we use the distribution of the major high-level subgroups to determine the location of the Proto-Tupí-Guaraní (PTG) homeland. In §3.2 we turn to clarifying the dispersal trajectories of the successive daughter languages of PTG, which requires us to identify the homelands of a number of mid-level proto-languages which were not necessary to identify the PTG homeland.

Michael et al.'s (2015) classification of TG exhibits two major coordinate branches at the root: a single-member branch consisting solely of Kamayurá, and Nuclear-TG (NTG), that is, the remainder of the family. We begin by inferring the homeland of Proto-NTG (PNTG), since it has a number of branches that facilitate LMT inferences, and then employ the location of Kamayurá, as well as the location of the most closely-related non-TG Tupian languages, Awetí and Mawé, to infer the PTG homeland.
The Proto-Tupí-Guaraní homeland
NTG exhibits three coordinate branches: 1) the small subgroup consisting of Avá-Canoeiro, Ka'ápor, and Guajá, which we call the Tocantins subgroup; 2) the larger Central subgroup; and 3) the remainder of NTG, which we call the Peripheral subgroup, due to its members marking the periphery of the vast TG expansion. We proceed with our inference of the PNTG homeland by inferring the Proto-Central and Proto-Tocantins homelands, and then bring in the distribution of the Peripheral subgroup to infer the Proto-NTG homeland.
Beginning with the inference of the Proto-Central homeland, we observe from Fig. 3a that all the languages of the Central subgroup cluster in the lower Xingu River (or immediately adjacent to it), with the exception of Tapirapé, which is found further south, on the middle Araguaia River. From this we infer the lower Xingu River basin as the Proto-Central homeland. Two of the Tocantins languages, Guajá and Ka'ápor, are found immediately to the east, in the lower Tocantins River basin, with Avá-Canoeiro found further upriver, on the upper Tocantins. Since these three languages are coordinate, we infer the lower Tocantins River basin to be the Proto-Tocantins homeland. The plausibility of our inferences for the Proto-Central and Proto-Tocantins homelands is enhanced by the fact that the homelands for these two coordinate branches are in immediately adjacent river basins, yielding a straightforward geographical basis for the split. We now turn to the Peripheral subgroup, observing that Peripheral consists of three major subgroups: 1) the Kayabí-Parintintin subgroup; 2) the Emerillon-Wayampí subgroup; and 3) Diasporic, which encompasses the remaining languages. We will briefly defer the question of the Proto-Peripheral homeland to return to our current main concern: the Proto-NTG homeland. The critical observation here is that one of the Peripheral subgroups, the Emerillon-Wayampí subgroup, was, at time of contact, located on the lower Xingu, adjacent to the Proto-Central homeland. This makes the lower Xingu-Tocantins area the region of greatest genealogical diversity for the NTG subgroup, 3 since it is the homeland of both the Proto-Central and Proto-Tocantins subgroups, and was the ToC location for one of the three first-order NTG subgroups. From this, we conclude that the lower Xingu-Tocantins area is the Proto-NTG homeland, as depicted in Fig. 4.
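The "greatest genealogical diversity" criterion driving this inference can be caricatured in a few lines of code. The sketch below is a toy analogue, not the authors' procedure, and the region assignments are deliberately simplified stand-ins for illustration:

```python
from collections import Counter

def most_diverse_region(subgroup_regions):
    """Toy LMT heuristic: the candidate homeland is the region hosting the
    largest number of distinct first-order subgroups at time of contact."""
    counts = Counter()
    for regions in subgroup_regions.values():
        for region in set(regions):
            counts[region] += 1
    return counts.most_common(1)[0][0]

# Simplified stand-in for the NTG inference in the text:
ntg = {
    "Central":    {"lower Xingu"},
    "Tocantins":  {"lower Tocantins"},
    "Peripheral": {"lower Xingu", "upper Tapajós", "widespread"},
}
homeland = most_diverse_region(ntg)  # "lower Xingu" hosts two of three subgroups
```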
Figure 4: The Proto-Nuclear Tupí-Guaraní Homeland

Having inferred the Proto-NTG homeland, we now address the question of the PTG homeland, and will then return to the question of the Proto-Peripheral homeland and the diversification and dispersal of the Peripheral languages in §3.2.
Recall that TG consists of two coordinate subgroups, NTG and the single-member Kamayurá subgroup. Significantly, Kamayurá is located on the middle Xingu River, while the inferred NTG homeland encompasses the lower Xingu and Tocantins River basins. This distribution suggests that the Proto-Tupí-Guaraní (PTG) homeland was located in the Xingu River basin. The inference is reinforced by the fact that the first sister to TG within the Tupian stock, Awetí (Galúcio et al. 2015, Rodrigues and Cabral 2012), is likewise found in the Xingu River basin. This strongly suggests a scenario where the Proto-Awetí-Tupí-Guaraní (PATG) homeland was located in the Xingu River basin, and where the split between Pre-Awetí and PTG involved the languages separating within the Xingu River basin, as was the case for the split between Pre-Kamayurá and Proto-NTG.
Having inferred that the PTG homeland lies in the Xingu River basin, we now consider whether we can further narrow its location. Here we argue that the geographical location of the Mawé and Mundurukuic branches of Tupian makes the lower Xingu, rather than, say, the middle or upper Xingu, the most plausible homeland for PTG.

We first step back and observe that there is little doubt that, as Rodrigues (2012, inter alia) has argued, Rondônia is the Proto-Tupian homeland, given that it is the locus of the family's genealogical diversity. Furthermore, it is clear that Mawé is a sister to the Awetí-TG subgroup, forming the Maweti-TG subgroup (Corrêa da Silva 2010; Meira and Drude 2015; Rodrigues and Dietrich 1997). Likewise, the Mundurukuic branch of Tupian is classified as a sister to the MATG subgroup both in expert 4 classifications (e.g., Rodrigues and Cabral 2012: 496) and in distance-based phylogenetic classifications of the family (Galúcio et al. 2015).
Turning now to the spatial distributional facts, we observe that these languages are either spoken on the lower Tapajós, or in adjacent areas that are easily reached from the lower Tapajós. The two Mundurukuic languages, Mundurukú and Kuruáya, are located on the lower Tapajós and lower Xingu, respectively, where the western tributaries of the lower Xingu meet the eastern tributaries of the lower Tapajós, 5 forming an easily transited corridor between the two river basins. Mawé, on the other hand, is located in the region between the lower Tapajós and lower Madeira Rivers (Nimuendajú 1948b), which is drained by rivers that flow into the Amazon proper. Significantly, the headwaters of these rivers abut the lower Tapajós, and the mouths of these rivers are a relatively small distance upriver, on the Amazon proper, from the mouth of the Tapajós. Mawé territory is thus likewise connected to the lower Tapajós -in fact by two easily transited corridors.
These spatial distributional facts suggest that the following is the simplest dispersal scenario, depicted in Fig. 5: Proto-Mundurukuic-Maweti-TG (PMMATG) moved from the vicinity of the Proto-Tupian homeland in Rondônia towards northeastern Amazonia, sooner or later moving into the Tapajós basin, where PMMATG split into Proto-Mundurukuic and PMATG. Proto-Mundurukuic subsequently split, with Pre-Mundurukú remaining mainly in the Tapajós basin, and Pre-Kuruáya moving a small distance east into the lower Xingu basin via the tributaries of these two rivers, which virtually meet. PMATG then split into Pre-Mawé and Proto-Awetí-TG (PATG). This split most likely occurred on the lower Tapajós, or on the Amazon River proper, near the mouth of the Tapajós. The reason for inferring this is that: 1) the inferred homeland for the node above PMATG, i.e., PMMATG, is the lower Tapajós; and 2) the ToC (and modern) location of one of the branches resulting from the PMATG split, i.e., Pre-Mawé, is a small distance west of the lower Tapajós. The proposed lower Tapajós location for the Pre-Mawé-PATG split has the virtue of requiring the least movement from the inferred location of the split of the immediately higher node (i.e., the Proto-Mundurukuic-PMATG split) and of minimizing movement to the ToC location of one of the coordinate branches resulting from the split of PMATG itself, i.e., the ToC location of Mawé, the descendant of Pre-Mawé.
Regardless of whether the Pre-Mawé-PATG split occurred on the lower Tapajós or nearby on the Amazon proper, the modern locations of Mawé and the ATG subgroup suggest that Pre-Mawé moved a small distance to the west to the tributaries of the Amazon between the Tapajós and Madeira Rivers, while PATG moved a small distance to the east, to the Xingu River basin. Crucially, the shortest movement, in this scenario, would have been to the lower Xingu, not the middle or upper Xingu, supporting the conclusion that the PATG homeland, and thus the PTG homeland, was located on the lower Xingu.

Figure 5: Proto-Tupí-Guaraní Homeland
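The minimal-movement reasoning used here can likewise be given a toy quantitative form: among candidate split locations, prefer the one minimizing total movement from the parent homeland plus movement to the daughters' ToC locations. The coordinates below are invented planar stand-ins, not geographic data, and the function is a sketch rather than the paper's (qualitative) procedure:

```python
from math import dist

def best_split_location(candidates, parent_home, daughter_locs):
    """Return the candidate name whose location minimizes total straight-line
    movement: parent homeland -> split point -> each daughter's ToC location."""
    def cost(xy):
        return dist(xy, parent_home) + sum(dist(xy, d) for d in daughter_locs)
    return min(candidates, key=lambda name: cost(candidates[name]))

# Invented coordinates loosely mimicking the Pre-Mawé / PATG discussion:
candidates = {"lower Tapajos": (0.0, 0.0), "middle Xingu": (5.0, -4.0)}
parent_home = (-1.0, 0.0)              # inferred PMMATG homeland
daughters = [(-1.5, 0.5), (2.0, 0.0)]  # ToC-style locations of the two branches
```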
Dispersal of the TG languages
In the previous section we argued that the application of LMT to Michael et al.'s (2015) TG classification, combined with information about the immediate sisters to TG, leads to the inference that the PTG homeland was located in the lower Xingu River basin. Now we reverse the LMT inference process to understand the dispersal of the TG languages across the continent.
The first steps of the dispersal process follow directly from reversing the LMT reasoning process that led us to locate the PTG homeland on the lower Xingu. First, PTG is located on the lower Xingu (and additionally spreads to the lower Tocantins) and splits into Proto-Nuclear TG (PNTG), which remains on the lower Xingu, and Pre-Kamayurá, which eventually migrates upriver to its ToC location, as depicted in Fig. 6. We now need to address the issue of the Proto-Peripheral homeland, since the distribution of the Peripheral languages is quite extensive. As evident from Fig. 8, the Peripheral subgroup encompasses most of the TG languages, and is so named not because it is unimportant (far from it), but because its members are distributed around much of the periphery of TG territory in South America. Tupinambá, for example, occupied much of the Atlantic coast of Brazil at ToC, the easternmost extent of the TG family, while Omagua and Kukama occupied significant stretches of the upper Amazon River basin, marking the northwestern limit of the TG expansion. Similarly, the languages of the Southern subgroup mark the southern limit of the family.
In order to address the question of the Proto-Peripheral homeland, we will need to carry out LMT inferences on Peripheral and its various subgroups, with a special focus on the Diasporic subgroup, which is responsible for much of the geographic extent of Peripheral, and within Diasporic, the Southern subgroup, which represents a major geographic extension of the languages of the Diasporic subgroup.
To begin, we observe that Peripheral experienced a three-way split into: 1) the Parintintin-Kayabí subgroup, whose ToC location is centered on the confluence of the Arinos and Juruena Rivers (Nimuendajú 1924, 1948c); 2) the Emerillon-Wayampí subgroup, which was located on the lower Xingu at ToC; and 3) the large Diasporic subgroup; the ToC distributions of these subgroups are given in Fig. 8. Given the vast distribution of the Diasporic subgroup languages, inferring the Proto-Peripheral homeland requires that we first determine the Proto-Diasporic homeland, the question to which we now turn. We begin by observing that Diasporic itself splits into three groups: 1) Tembé; 2) the Omagua-Kukama-Tupinambá (OKT) subgroup; and 3) the large Southern subgroup, whose ToC distributions are given in Fig. 9. We will address the question of the Proto-Diasporic homeland by first determining the Proto-Southern homeland and then integrating the ToC locations of Tembé and the OKT group.

Figure 9: First-order subgroups of the Diasporic subgroup; note OKT (outlined in green) is discontinuous, and Tembé (tmb) is a single-member subgroup

Southern consists of three coordinate subgroups, the Yuki-Sirionó subgroup, the Warázu-Guarayú subgroup, and the large Guaranian subgroup, as depicted in Fig. 10. The Guaranian subgroup is centered on the Paraná and Paraguay River basins, with a number of varieties located at the periphery of these basins, or outside them to the west, such as Chiriguano and Tapiete. In contrast, the Warázu-Guarayú subgroup is located in the Guaporé River basin, and the Yuki-Sirionó subgroup in the Mamoré River basin, both relatively far to the west of the Paraná-Paraguay River basin.
A mechanical application of LMT would suggest a Proto-Southern homeland somewhere in an area spanning the upper Mamoré and Guaporé River basins, as this area encompasses the Yuki-Sirionó and Warázu-Guarayú subgroups and one member of the Guaranian subgroup, making it the region of highest genealogical diversity for the Southern subgroup. There are some reasons to be cautious about this conclusion, however. First, we know from ethnohistorical sources that the arrival of Chiriguano, the westernmost Guaranian language, in the Andean foothills region dates to only the 14th or 15th century as a result of an east-to-west expansion (Santos-Granero 2009). Similarly, Tapiete has been argued to have emerged as a consequence of 'Guaranization' of non-TG peoples from the 16th century on (Combès 2008). Both these observations suggest that the expansion of the Guaranian subgroup was not from the Andean foothills region towards the Paraná-Paraguay River basin, but the reverse, and that this in fact took place relatively recently, with the Paraná-Paraguay River basin being the Proto-Guaranian homeland. Note that this is consistent with the fact that the greatest number of coordinate branches of the Guaranian subgroup are found in the Paraná basin, suggesting that Proto-Guaranian was spoken there, and that as the Guaranian subgroup diversified, some of its members spread westwards towards the Andean foothills. This insight undercuts the observation above regarding the genealogical diversity found in the upper Mamoré-Guaporé region.
The second reason to be cautious about positing the upper Mamoré-Guaporé region as the Proto-Southern homeland comes from Nichols' (1997a) observation that the effect of multiple successive spreads from a common center within a given spread zone is increased diversity at the edge of the spread zone, which are precisely the circumstances we find with the languages of the Southern group. Coupled with our conclusion regarding the nature of the Guaranian expansion from the Paraná-Paraguay River basin, this suggests that it was in fact the Paraná-Paraguay River basin that was the Proto-Southern homeland, with two earlier spreads bringing the ancestors of the Yuki-Sirionó and Warázu-Guarayú subgroups into the Mamoré and Guaporé basins, with any other daughter languages pertaining to these subgroups, were there any, having been absorbed by the subsequent Guaranian spread.

For these reasons, we infer that the Proto-Southern homeland was located in the Paraná-Paraguay River basin, and that the presence to the west of this basin of the non-Guaranian Southern subgroups and Guaranian languages such as Chiriguano and Tapiete was due to successive westward spreads.
Having identified the Proto-Southern homeland, we can now return to the question of the Proto-Diasporic homeland. Diasporic, it will be recalled, consists of three first-order subgroups: 1) Tembé; 2) the Omagua-Kukama-Tupinambá (OKT) subgroup; and 3) the Southern subgroup. Like the Southern subgroup, the OKT subgroup, although consisting of only three languages, extended over a large region: Omagua and Kukama in the upper Amazon, and Tupinambá along the Atlantic coast. Given the central role of aquatic resources and transportation for the groups speaking these languages, the languages clearly spread via major waterways: the Amazon proper for Proto-Omagua-Kukama (POK; Michael 2014; O'Hagan 2011, 2019a) and the Atlantic littoral in the case of Pre-Tupinambá. It is known from archaeological evidence that the culture associated with POK arrived in the upper Amazon in approximately 1100 CE, after a steady progression from points downriver (Lathrap 1970), indicating a migration from the lower Amazon region. Tupinambá, on the other hand, was distributed from territory along the southern banks of the Amazon, near the mouth of the river, to large portions of the Atlantic coast south of the Amazon. This suggests a POKT homeland in the vicinity of the lower Amazon region, with POK having migrated upriver and Pre-Tupinambá having expanded southwards along the coast. This conclusion is reinforced by the fact that one of the Diasporic sisters to the OKT subgroup, Tembé, is found in this area, specifically (at ToC) in the region that is now the state of Maranhão.
From these observations, we infer that the Proto-Diasporic homeland was located near the mouth of the Amazon, and south of the river, since two of the first-order daughters of Proto-Diasporic, POKT and Pre-Tembé, were spoken in this region, with only the third first-order daughter, Proto-Southern, spoken outside it. Given the ToC distributions of the relevant groups, LMT does not allow us to clarify whether the Proto-Diasporic homeland was located towards the west, near the mouths of the Xingu or Tocantins rivers, or further to the east, near the Atlantic coast. Regardless, it follows from the location of the Proto-Diasporic homeland in this region that the first order split involved proto-Southern moving far to the south, while POKT and Pre-Tembé remained near the mouth of the Amazon. POKT subsequently split into POK and Pre-Tupinambá, with the former moving far up the Amazon River, and the latter extending south along the Atlantic coast, but crucially also remaining in the area near the mouth of the Amazon River.
An important question that remains unanswered by the Diasporic dispersal scenario just sketched out concerns the route by which Proto-Southern reached the Paraná-Paraguay basin. As observed by Urban (1996), an Amazonian PTG homeland is compatible with multiple plausible routes by which the Southern languages could have reached their ToC locations, including a southward coastal route, followed by an inland western route, and southward routes along a number of Amazonian tributaries, such as the Tocantins or Xingu rivers, followed by an overland route from the headwaters of these rivers to the headwaters of the Paraná-Paraguay basin. Resolution of this question remains an important priority for interdisciplinary research on the history of TG peoples, cultures, and languages.
Having identified the Proto-Diasporic homeland, we can now address the location of the Proto-Peripheral homeland. Recall that Peripheral consists of three first-order subgroups: the Kayabí-Parintintin subgroup, the Emerillon-Wayampí subgroup, and Diasporic. Given the proximity of the Proto-Diasporic homeland and the ToC locations of Emerillon and Wayampí, LMT leads us to conclude that the Proto-Peripheral homeland was somewhere in the region spanned by these two first-order groups, i.e., somewhere between the lower Xingu, in the west, and the Atlantic coast near the mouth of the Amazon, in the east. We argue that within this relatively large region, the eastern portion, i.e., the territory east of the mouth of the Tocantins, extending to the Atlantic coast, is the most likely location of the Proto-Peripheral homeland.
Our inferring this location for the Proto-Peripheral homeland is based on the geographic distribution of the first-order subgroups of the next higher node, i.e., Proto-Nuclear-TG (PNTG), and a model for the diversification of NTG. Recall that PNTG has three first-order daughters: Proto-Central, with a homeland on the lower Xingu, Proto-Tocantins, with a homeland on the lower reaches of the Tocantins, and Peripheral itself. Given that the Xingu and Tocantins River basins are each occupied by a daughter of PNTG, we argue it is less likely that Proto-Peripheral shared one of these river basins than that it had its own distinct territory. That territory, by this reasoning, and given the above delimitation of the possible area in which Proto-Peripheral was spoken, would have to be the territory to the east of the Tocantins River.
Note that if we posit that the Proto-Peripheral homeland was located in the territory east of the Tocantins basin, an attractive model for the diversification of NTG follows: in brief, once PTG split into Pre-Kamayurá and PNTG, PNTG expanded rapidly eastward from the lower Xingu River basin towards the Atlantic coast, occupying a large territory from the Xingu River basin in the west to the territory near the Atlantic coast in the east. The three daughters of PNTG (Proto-Central, Proto-Tocantins, and Proto-Peripheral) then simply correspond to the descendants of the segments of the PNTG-speaking population that were isolated by the major river system boundaries, i.e., the populations in the Xingu basin, the Tocantins basin, and the area east of the Tocantins basin, respectively.
With the inference of the Proto-Peripheral homeland in hand, we now summarize the diversification and dispersal processes described above:
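As a purely illustrative sketch of this summary (the node names and region labels below are our shorthand restatement of the inferences argued for above, not data from the paper's figures), the chain of homeland inferences can be encoded as a small data structure and walked to list each proto-language with its inferred homeland:

```python
# Illustrative restatement of the homeland inferences argued for above.
# Each entry maps a proto-language to (inferred homeland, first-order daughters).
homelands = {
    "PTG":              ("lower Xingu basin", ["Pre-Kamayura", "PNTG"]),
    "PNTG":             ("lower Xingu basin",
                         ["Proto-Central", "Proto-Tocantins", "Proto-Peripheral"]),
    "Proto-Peripheral": ("east of the Tocantins basin",
                         ["Proto-Kayabi-Parintintin", "Proto-Emerillon-Wayampi",
                          "Proto-Diasporic"]),
    "Proto-Diasporic":  ("near the mouth of the Amazon",
                         ["Pre-Tembe", "POKT", "Proto-Southern"]),
    "POKT":             ("lower Amazon", ["POK", "Pre-Tupinamba"]),
    "Proto-Southern":   ("Parana-Paraguay basin",
                         ["Guaranian", "Yuki-Siriono", "Warazu-Guarayu"]),
}

def lineage(node):
    """Walk the tree from a node downward, collecting each proto-language
    for which a homeland inference is recorded, with its region."""
    out = [(node, homelands[node][0])]
    for daughter in homelands[node][1]:
        if daughter in homelands:  # terminal daughters carry no homeland entry
            out.extend(lineage(daughter))
    return out

for name, region in lineage("PTG"):
    print(f"{name}: {region}")
```

Walking the structure from "PTG" reproduces the nested sequence of inferences in the order they were argued for in this section.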
Previous language classification-based theories regarding the PTG homeland
The question of the geographical origin of the Tupí-Guaraní peoples has been an important one in South American archaeology and anthropology, marked by considerable debate, up to the present (see e.g., Almeida and Neves 2015). Numerous scholars have addressed this question, but as Noelli (1998) usefully summarizes in his cogent overview of the relevant scholarship, the varied proposals that have been made mainly fall into one of two groups: those that posit a southwestern origin, centered on the Paraná River basin, and those that posit a northeastern Amazonia origin. The evidence for these proposals comes from a variety of sources, including modern material culture, archaeological remains, and in some cases, language, where the spatial dynamics of languages is taken to be a reliable indicator of the movements of Tupí-Guaraní peoples (a position about which, we remind the reader, we are much more cautious).
In this section we briefly review the two works that propose PTG homelands and dispersal trajectories on the basis of internal classifications of the TG family, as we do in this paper: Mello and Kneip (2017) and Rodrigues (2000). 6 While other works on the PTG homeland allude to linguistic facts, e.g., Lathrap's (1970:78) appeal to Arawakan loanwords in TG as evidence for a northern origin, or Urban's (1992) discussion of the geographic dispersal of Rodrigues' eight classic subgroups, they do not base their PTG homeland proposals on internal classifications of the family.

Mello and Kneip (2017)

Mello and Kneip (2017:307) propose a PTG homeland subsuming the one we identify in this paper: a large ellipse spanning the lower Tapajós, Xingu, and Tocantins Rivers. They argue for their proposal using LMT, and the observation that this ellipse encompasses four of Rodrigues' (1984/5) eight classic TG subconjuntos (or five of nine subgroups, in Mello's (2000) modification of Rodrigues' classification), 7 making this area the locus of genealogical diversity of the family. Their conclusions are essentially as precise as the rake-like structure of Rodrigues' (1984/5) and Mello's (2000) classifications permits, and are broadly compatible with the proposal we advance in this paper, which specifies the lower Xingu as the PTG homeland. Mello and Kneip (ibid.) propose three major dispersal trajectories: 1) a back-migration to the Rondônia area, which corresponds to our Kayabí-Parintintin group; 2) a southwards coastal expansion by Tupinambá; and 3) a migration by the Guaranian and Bolivian TG languages (corresponding to our Southern group) southwards via Rondônia. These three proposals are broadly compatible with our own dispersal account, with the caveat that they are bolder than we are in proposing a specific migration route for the Southern languages. No clear evidence is presented in favor of this route over any other, however.

It is also worth mentioning that since the Bolivian and Guaranian languages constitute three distinct subgroups in the classification that Mello and Kneip employ, they must in effect posit three independent migrations along this trajectory, an issue which they do not address. Note that in the Michael et al. (2015) classification, these three subgroups form a single subgroup, avoiding this difficulty.
In summary, Mello and Kneip's (2017) PTG homeland and dispersal trajectory proposals are broadly compatible with those proposed in this paper, with many of the differences being traceable to the fact that they base their application of LMT on a less finely-articulated internal classification of the family. Like us, they identify a northeastern homeland for PTG, based on the greater genealogical diversity of the family in that region, although the less fine-grained nature of the classification they employ does not facilitate their developing a more precise homeland proposal.

6 Schleicher (1998:320) proposes that the Planalto do Mato Grosso was the PTG homeland, not on the basis of an internal classification of the family per se (which he does not present), but on the spatial distribution of a number of phonological and morphosyntactic isoglosses (ibid.: 322). Although Schleicher (ibid.: 320) makes a nod to LMT, his conclusions do not result from LMT inferences based on a classification of the family.
7 Mello and Kneip (2017:307) suggest that this area could even be considered to encompass six subgroups, depending on how far to the east the Tupinambá expansion may have begun from.
Rodrigues (2000)
In contrast to Mello and Kneip (2017), Rodrigues (2000) proposes a southwestern Amazonian homeland for PTG that lies in the vicinity of the Arinos and upper Juruena River basins. Rodrigues alludes to the following evidence in support of his proposal: 1) that Rondônia, which lies relatively close to the west of this proposed homeland, is the region of greatest genealogical diversity of the Tupian family as a whole (including members of his TG subconjunto VI, that is, Michael et al.'s (2015) Kayabí-Parintintin group); and 2) certain phonological affinities between particular subconjuntos that Rodrigues suggests lend support to particular migratory scenarios. The latter phonological affinities are most cogently summarized in Rodrigues and Cabral (2002), which we discuss below. 8 As we argue now, however, the evidence cited above does not in fact support the conclusion that the PTG homeland was located in Juruena-Arinos over alternative homeland hypotheses, including a northeastern homeland, as proposed in this paper.
First, Rodrigues' (2000) argument for a southwestern PTG homeland, to the degree that it incorporates LMT-based reasoning, is not framed in terms of the locus of genealogical diversity of the TG family, but rather, indirectly, on that of Tupian family as a whole. However, homeland inferences for the proto-language of a given group of languages should, according to LMT, be principally based on the locus of greatest genealogical diversity for that group of languages, and not that of languages higher up in the tree. Concretely, this means that the PTG homeland should be inferred principally on the basis of the locus of genealogical diversity of the TG subgroup, not on the basis of the locus of genealogical diversity of the larger Tupian family of which it is a part. As such, the fact that the proposed Arinos-Juruena homeland lies relatively near to the Proto-Tupian homeland is not, by itself, compelling support for this PTG homeland hypothesis.
Turning now to the issue of phonological affinities that Rodrigues (2000) mentions in support of the Arinos-Juruena PTG homeland proposal, it is useful to discuss Rodrigues and Cabral (2002), which updates Rodrigues' (1984/1985) classification, making modest changes to the membership of certain subconjuntos and, critically, adding higher-level structure to the classification of TG languages on the basis of morphological affinities and sound changes that they identify. Importantly, Rodrigues and Cabral (2002) address the same basic phonological affinities that Rodrigues (2000) presents to support the homeland and migration account he proposes, but they do so more explicitly, and in greater detail.
The higher-level structure that Rodrigues and Cabral (2002) propose is shown in Fig. 11, which compares this classification with that of Michael et al. (2015). Although Rodrigues and Cabral (2002) make only a passing allusion to the matter in their paper, they correctly observe that this classification is compatible with the southwestern origin and migration scenario proposed in Rodrigues (2000). This is due to the fact that: 1) members of two of the three major branches, encompassing three subconjuntos, are present in southwestern Amazonia, which, following LMT, can consequently be inferred to be the PTG homeland; and 2) all the other subconjuntos, most of which are found in northeastern Amazonia, form a single subgroup in this classification, allowing one to explain the modern distribution of these languages by positing a single migration from southwestern to northeastern Amazonia. What this demonstrates is that the higher-level structure of TG classifications is critical in distinguishing southwestern vs. northeastern Amazonian PTG homeland hypotheses.

Fig. 11. Rodrigues and Cabral's (2002) classification of TG compared to Michael et al.'s (2015) classification

In this light, it is crucial to observe that Rodrigues and Cabral (2002) do not provide compelling evidence for the higher-level structure they propose, nor is it supported by the phylogenetic analysis, which does, however, support the traditional subconjuntos. In particular, the shared innovations that Rodrigues and Cabral present as evidence for the higher-level structure in their proposed tree are that *ts>h(>ø) in all subconjuntos but II and III, and that *tʃ>h(>ø) in all subconjuntos but I, II, and III. Regarding this evidence, we first observe that none of these innovations group together subconjuntos II and III, meaning that no evidence is presented for this putative subgroup.
In particular, note that the observation that languages in subconjuntos II and III did not experience lenition of *ts does not constitute the identification of a shared innovation, but rather a shared retention, which is not evidence for subgrouping. And critically, once we split the putative II+III subgroup into two distinct subgroups, there are then as many first-order subgroups located outside of southwestern Amazonia as within it, already significantly weakening the basis for positing a southwestern Amazonian PTG homeland.
Second, we observe that the claim that *ts>h(>ø) did not affect subconjuntos II and III is rather misleading, aside from the fact that this would constitute a shared retention, rather than a shared innovation. This is because, crosslinguistically, *ts generally does not immediately debuccalize to h (i.e., *ts>h(>ø)), but instead first lenites to s: *ts>s>h(>ø). Once we acknowledge that the sound change process in question is one in which lenition precedes debuccalization, we find that lenition also operated in languages of subconjuntos II and III. In particular, Sirionó of subconjunto II and Tupinambá of subconjunto III underwent lenition, but not Guarayú of subconjunto II, nor Kukama of subconjunto III. Even in terms of shared retentions, then, the languages of subconjuntos II and III do not pattern together, as Guarayú and Kukama are the only languages to retain *ts. Moreover, as we can see, even languages in the same subconjunto can differ in terms of whether and when they underwent lenition. 10

We are thus left with the question of the evidence supporting the large IV-VIII subgroup. As evident from the preceding paragraph, *ts>s is not uniquely associated with the IV-VIII subgroup, and even if it were, this lenition process is, again, so crosslinguistically common that it would not serve as compelling evidence for the subgroup. What the languages of IV-VIII do share is s>h(>ø), but again, this is so common a sound change that it has little probative value for subgrouping. Essentially similar arguments apply to the claim that *tʃ>h(>ø) in all subconjuntos but I, II, and III.
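The innovation-versus-retention logic in this argument can be made concrete with a small sketch. Assuming the simplified reflexes of *ts named in the text (Sirionó and Tupinambá lenited; Guarayú and Kukama retained *ts), a helper can decide whether a pair of languages shares an innovation (both moved along the chain) or merely a retention (no subgrouping evidence):

```python
# Sketch of the lenition chain *ts > s > h > 0 and the innovation/retention
# distinction. Reflex values are simplified restatements of the text, not
# a full dataset.
CHAIN = ["ts", "s", "h", "0"]  # "0" stands for deletion (ø)

# Reflexes of *ts in the four languages discussed above.
reflex = {"Siriono": "s", "Guarayu": "ts", "Tupinamba": "s", "Kukama": "ts"}

def innovated(lang):
    """A language shows an innovation iff its reflex moved past *ts."""
    return CHAIN.index(reflex[lang]) > 0

def shared_innovation(a, b):
    """Only a shared *innovation* counts as subgrouping evidence; two
    languages both retaining *ts is a shared retention, which does not."""
    return innovated(a) and innovated(b)

# Guarayu (subconjunto II) and Kukama (III) merely share a retention:
print(shared_innovation("Guarayu", "Kukama"))    # False
# Siriono (II) and Tupinamba (III) each lenited, cross-cutting II vs III:
print(shared_innovation("Siriono", "Tupinamba"))  # True
```

The second check illustrates the point made above: the languages that do pattern together cross-cut the putative II+III subgroup rather than supporting it.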
In summary, then, Rodrigues and Cabral (2002) provide no compelling evidence for the higher-level structure they posit for the TG family, meaning that it cannot be adduced as evidence for a southwestern Amazonian homeland for PTG. Whatever the ultimate merits of Rodrigues' (2000) proposal for a southwestern Amazonian homeland, the linguistic evidence and argumentation provided for it is not compelling.
Comparing the Northeastern and Southwestern PTG homeland theories
Abductive reasoning, like that embodied by linguistic migration theory, is incapable of proving a proposition, instead yielding likely hypotheses that are intrinsically probabilistic in nature. For example, above we argued that the lower Xingu River basin was the most likely location of the PTG homeland, but it is certainly within the realm of possibility that it was located in the middle Xingu River basin. Alternative but very similar theories like these ultimately need to be evaluated by additional sources of evidence, such as the study of archaeological remains.
Despite the inherently probabilistic nature of abductive reasoning, it is typically feasible to evaluate the relative plausibility of starkly different alternative hypotheses, and it is this to which this section is dedicated. In particular, we compare the relative plausibility of the northeastern PTG homeland hypothesis (the 'NE hypothesis') that we defend in this paper to that of the southwestern homeland hypothesis (the 'SW hypothesis'). As usefully summarized by Noelli (1998, 2008) in his overview of TG homeland proposals, one major tradition identifies the homeland as falling within the Paraná River basin, which is the version of the SW hypothesis we compare against the NE hypothesis here. While others have developed alternative proposals as well (e.g., Almeida and Neves 2015), the SW hypothesis remains influential.
Before we begin, we stipulate an important constraint on the SW hypothesis we evaluate, with the goal of making it clearly distinct from the NE hypothesis we defend in this paper. Specifically, we require that the SW PTG homeland remain continuously occupied from the time at which PTG began to diversify to the modern era. The purpose of this restriction is twofold: first, this is consistent with the position taken by many defenders of the SW hypothesis, who cite early dates for 'Guaraní' remains in the Paraná River basin (see, e.g., Iriarte et al. (2017)), and second, it prevents the SW hypothesis from being trivially reduced to a hypothesis very similar to the NE hypothesis by positing an early migration of a high-level proto-language from the SW to NE regions. For example, imagine a version of the SW hypothesis that posits that while PTG was spoken in the SW region, PNTG migrated to the Xingu River basin after the first order split between NTG and Pre-Kamayurá. This latter hypothesis ends up being so similar to the NE hypothesis that it does not provide an insightful basis for comparison.
With this constraint, the following migration scenario is the most parsimonious one consistent with the SW hypothesis, Michael et al.'s (2015) classification, and the ToC distribution of TG languages: 1) PTG splits into PNTG and Pre-Kamayurá; Pre-Kamayurá migrates northeast to the upper Xingu River Basin.
2) NTG splits into Peripheral, Proto-Central and Proto-Tocantins. Proto-Central and Proto-Tocantins then independently migrate to the northeast, with Proto-Central settling and diversifying in the Xingu River Basin, and Proto-Tocantins doing so in the Tocantins River Basin.
3) Peripheral splits into Proto-Diasporic, Proto-Kayabí-Parintintin, and Proto-Emerillon-Wayampí. Proto-Kayabí-Parintintin migrates north, while Proto-Emerillon-Wayampí migrates northeast to the Xingu River basin, like Proto-Central did before it.
4) Diasporic then splits into Proto-Southern, Pre-Tembé, and POKT. Pre-Tembé and POKT then independently migrate to the northeast, each settling in the vicinity of the mouth of the Amazon, with POKT subsequently splitting into Pre-Tupinambá, which migrates back southwards along the coast, and Pre-POK, which migrates up the Amazon proper.
5) Finally, Proto-Southern diversifies, yielding the ToC southern distributions of the languages of this large subgroup.
In light of Michael et al.'s (2015) classification, the migration scenario entailed by the SW hypothesis is considerably less plausible than that entailed by the NE hypothesis, as the former requires six independent migrations from the southwest to the northeast (Pre-Kamayurá, Proto-Central, Proto-Tocantins, Proto-Emerillon-Wayampí, Pre-Tembé, and POKT), including three independent migrations to the same river basin, i.e., the Xingu River basin (Pre-Kamayurá, Proto-Central, and Proto-Emerillon-Wayampí). While a hypothesis that requires independent but geographically correlated migrations of this sort is not intrinsically unbelievable, 11 it is, without strong additional evidence in its favor, considerably less plausible than one that does not require a set of independent but correlated migrations of this sort. In short, the NE hypothesis is considerably more plausible in light of Michael et al.'s (2015) classification than the SW hypothesis.
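As a minimal arithmetic sketch of this parsimony comparison (the lists below simply restate the migration steps of the SW scenario given above; nothing here is new data or a formal analysis), one can tally the correlated SW-to-NE migrations the SW hypothesis requires:

```python
# Toy tally of the independent but geographically correlated migrations
# required by the SW scenario, following steps 1-4 above. Names follow
# Michael et al.'s (2015) classification as used in the text.
sw_to_ne = [
    "Pre-Kamayura",             # step 1
    "Proto-Central",            # step 2
    "Proto-Tocantins",          # step 2
    "Proto-Emerillon-Wayampi",  # step 3
    "Pre-Tembe",                # step 4
    "POKT",                     # step 4
]

# Three of these independently end up in the same river basin (the Xingu):
into_xingu = [m for m in sw_to_ne
              if m in ("Pre-Kamayura", "Proto-Central", "Proto-Emerillon-Wayampi")]

print(f"{len(sw_to_ne)} correlated SW-to-NE migrations, "
      f"{len(into_xingu)} of them into the Xingu basin alone")
```

Under the NE hypothesis no comparable set of same-direction migrations is needed, since the northeastern groups diversify in place; this is the asymmetry the text's plausibility argument rests on.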
Conclusion
In this paper we have demonstrated that the application of Linguistic Migration Theory (LMT) to Michael et al.'s (2015) TG classification and the time of contact (ToC) distribution of TG languages yields the conclusion that the PTG homeland was located in the vicinity of the lower Xingu basin. We have also shown that the classification and distributions in question are not compatible with a southwestern homeland, which for purposes of explicitness we took to be in the Paraná River basin, and which is the homeland for the Proto-Tupí-Guaraní people favored by many archaeologists on the basis of physical remains.
We do not, in this paper, seek to resolve the stark discrepancy between the homeland hypotheses favored by scholars working with different sources of evidence (i.e., linguistic and archaeological), but instead call attention to it, and identify it as a critical inter-disciplinary question to be addressed by scholars with overlapping interests regarding the deep social, cultural, and linguistic histories of the TG peoples. At the very least, these results call into question the assumption operative in much work on the topic (Noelli 1998: 649) that the distribution, diversification, and dispersal of TG languages mirrors that of the ceramics traditions associated with TG peoples. It is a truism that dates to early modern anthropology that culture, language, and populations have potentially distinct historical trajectories (Boas 1940; see also Donohue and Denham 2011), and it may be the case that we see evidence for significant differences among these trajectories in the case of TG peoples.
It is also worth noting, in this regard, that archaeological evidence has begun to accumulate that Tupí-Guaraní peoples have inhabited the lower Xingu and Tocantins basins for a considerable time (Almeida 2008; Garcia 2012), leading to proposals for an eastern, if not a northeastern, PTG homeland (Almeida and Neves 2015). Whether further archaeological work in the region will ultimately support an eastern or northeastern PTG homeland remains to be seen, but in light of the linguistic arguments presented in this paper, we suggest that such work should be considered a priority for TG archaeology.
By reversing the abductive processes leading to the hypothesis that the PTG homeland was located in the lower Xingu River basin, we have generated a set of hypotheses regarding the dispersal of the TG languages from the lower Xingu homeland. As described in §3, many of the higher-level splits in the diversification of the family are associated with relatively short-distance language dispersals, e.g., the split of PTG
into pre-Kamayurá and PNTG, with the former simply moving up the Xingu River, or the split of PNTG into Proto-Central, Proto-Tocantins, and Proto-Peripheral, which we argue was the result of the spread of PNTG to encompass the lower Xingu and lower Tocantins river basins, and the region east of the Tocantins River basin, with each of these second-order daughters resulting from the separate development of PNTG in each of these major geographical areas.
Significantly, our analysis indicates that most of the considerable geographical dispersal of the TG languages is associated with the languages of the Peripheral subgroup, and especially the Diasporic subgroup of Peripheral. We argued that Proto-Peripheral diversified in the region east of the Tocantins River basin, with two of its daughters, Proto-Kayabí-Parintintin and Proto-Wayampí-Emerillon, moving westward, and the third, Proto-Diasporic, continuing to diversify in the region east of the Tocantins River basin. Proto-Diasporic, in turn, split into three daughters, of which one, Pre-Tembé, did not move significantly, but the other two, POKT and Proto-Southern, were associated with significant dispersals. POKT split into Pre-Tupinambá, which began a steady expansion southwards along the Atlantic coast, and Pre-POK, which moved far up the Amazon, subsequently experiencing significant language contact, which resulted in the emergence of POK proper (Michael 2014; O'Hagan 2011, 2019a). Proto-Southern moved an even greater distance to the Paraguay-Paraná River basin, diversifying and spreading there, resulting in the large Guaranian subgroup, centered on the Paraguay-Paraná River basin, and the smaller Sirionó-Yuki and Warázu-Guarayú subgroups to the west, in the Guaporé and Mamoré River basins. A major open question concerns the trajectory of the movement of Proto-Southern: the linguistic evidence at this point cannot distinguish between a southwards migration along the coast, followed by an inland migration, and a migration along any of several major southern tributaries of the Amazon. This account of the dispersal of TG languages constitutes a set of hypotheses, each of which both stimulates questions in affine disciplines such as archaeology and ethnohistory, and would benefit from evaluation by research in those disciplines.
For example, as anticipated by Urban (1996), the fact that the greatest geographical expansion of TG languages is localized in a particular subgroup raises the question of whether some social or cultural innovation at a particular node in the tree, say, Proto-Diasporic, drove or facilitated this expansion. Comparative ethnohistorical work and ethnographic work with the modern speakers of Diasporic languages may give us insight into this question. At the same time, our account provides a set of concrete hypotheses regarding the movements of languages, which, to the degree that they are associated with the movements of peoples and their material culture, can be evaluated by archaeological research.
"year": 2019,
"sha1": "873b893ed35f12225b271d3d5d6d359eb68014a1",
"oa_license": "CCBYNC",
"oa_url": "https://periodicos.sbu.unicamp.br/ojs/index.php/liames/article/download/8655791/21558",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "eab3a48f569f2cc01814654db85ea6e77b8aa38c",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Geography"
]
} |
Byzantine church as a dwelling place. Monastic seclusion practices in Byzantium and Old Rus' in the ninth–thirteenth centuries*
The juxtaposition of historical and architectural evidence supports the possibility of seclusion practice in the church proper. This hypothesis is valid for both the Byzantine Empire and Old Rus’. Seclusion in a church led to a higher authority and religious status of an ascetic. The structural pair of a cell and a chapel above it was introduced into a number of Middle Byzantine, mediaeval Serbian and Old Russian monuments. Idiosyncratic features of this module suggest its development for the specific needs of recluses imitating the life of a stylite.
The Byzantine church is a highly flexible structure, architectural components of which may vary significantly from one monument to another. The overall typology invented by scholars to facilitate analysis and juxtaposition of the buildings does not preclude crucial internal modifications, differentiating the churches within a common type. However, the role of these variables, i.e. peripheral bays, enveloping the liturgical core of a church, is still underestimated and needs a thorough complex study. Several important reviews and fundamental works concerning chapels, upper-story chambers and ambulatories have already been published. 1 Some studies, such as that of V. Marinis of the Middle and Late Byzantine churches of Constantinople, endeavored to define the liturgical and non-liturgical use of the enveloping structures. 2 However, this major and promising theme of the history of Byzantine architecture still has its lacunae. One of them concerns the chambers integrated into the body of a church, which display idiosyncratic features, distinguishing them from other functional parts of the church complex, such as chapel, baptistery, library, metatorion, skeuophylakion, prothesis and diaconicon.
This lacuna has begun to be filled in by S. Ćurčić for Byzantine and Serbian monuments and V. Sarabianov for the churches in Old Rus'. 3 They put forward the hypothesis that certain outstanding monks of a special status in the ecclesiastical hierarchy may have entered into seclusion in the church proper, so a special chamber-cell needed to be assigned for their habitation and prayer. Unfortunately, neither of the scholars was able to finish his research. In this article and in my previous papers I would like to continue their work on exploring the possibility for a monk to be secluded in a 'solitary monastic cell' or 'hesichasterion' within the church proper. 4 To make the next steps in investigating this issue I propose to examine several monuments and written sources, which I believe could clarify the basic features of this monastic practice and reconstruct its local characteristics in Byzantium and Old Rus' from the ninth to the thirteenth century. Moreover, such a study of the architectural setting and historical context of this phenomenon is necessary for interpreting and verifying several written testimonies on eminent ascetics, inferring their dwelling within a church. This evidence gains even more importance since it informs us on the lives of prominent historical figures and monuments playing a key role in the history of Eastern Christian architecture, such as St. Symeon Nemanja and his cell near the church of the Annunciation of the Vatopedi monastery on Mt. Athos, and St. Euphrosyne of Polotsk, who initially lived in the main cathedral of her land, St. Sophia of Polotsk, and then moved to the cell in the church of the Saviour at her own monastery.
The extant examples of monastic solitary cells or hesichasterions in monuments with preserved fresco paintings reveal their direct juxtaposition with images of the stylites. This is true for the church of the Theotokos Perivleptos in Mystras from the third quarter of the fourteenth century and the church of the Saviour-Transfiguration in the monastery of Euphrosyne of Polotsk from the second quarter to the middle of the twelfth century, to name just a few. This cross-reference of a stylite image and a church chamber made it possible to associate the integration of a special bay into the structure of the church building with the intention of its ktetor, or a monk favored by the former, to emulate the life of the ancient stylites. 5 That is why, to fully appreciate the middle Byzantine phenomenon of seclusion in a church, one should study its origins in the Late Antique / Early Christian stylite tradition.
In the fifth century the first stylites appeared in the Syrian lands. They were the ones who chose the space between the heavens and the earth as the place of their feat of faith. To accomplish it they mounted a pillar. The symbolic component of living on a pillar became the defining moment for the tradition under consideration. The pillar, as attested by the Vitae of stylite saints, was a symbol of renunciation of all mundane affiliations, of the ascending of a person to God, an altar on which a saint makes his sacrifice, and also a hearth where he/she transfigures. The pillar corresponded to the image of mounts Sinai, Tabor and Golgotha. 6 Elevation and separation from earth were the key physical characteristics of the pillar. Due to this the specific architectural design may vary from case to case.
L. Schachner has emphasized the ambiguity of the term 'stylite' in contemporaneous and later texts in Coptic, Syriac, Greek and Slavonic. In Syriac the term 'pillar' (esṭunā) may designate both column and tower. The stylites were called esṭunāyē or esṭunārē, and the resemblance of pillar and tower, as argued by the scholar, may explain their semantic interchangeability in Syriac texts. 7 Therefore, from the very beginnings of this tradition the ascetics who chose to live in a tower were also perceived as stylites due to their elevated position. Though, of course, mounting a column was a far more striking and inevitably more venerable act. 8 Hence the stylite tradition may be considered in close connection with the seclusion practice. Indeed, the majority of written sources mentioning the design of a pillar testify that there was a small cabin at the top of the column, where the ascetic lived. The presence of a roof, however, was not necessary. In some cases a window is mentioned, which a stylite could open or not according to his own wishes. 9 Many stylite towers are extant, in varying states of preservation. Within the frontiers of late antique Syria scholars have counted approximately one hundred freestanding monastic towers built for seclusion purposes. 10 As a rule, such towers were multi-storied. The cell of the recluse and a chapel were situated on the upper floors, while the lower floor functioned either as a future burial place or as a cell for the stylite's disciples. 11 The towers were accompanied by stylite columns. L. Schachner offers a captivating picture of the Syrian landscape: the three-dimensional model of this landscape, made on the basis of the archaeological data on the pillars, shows that "a traveler on the Roman road from Antioch to Dānā and Chalcis / Beroea would have found himself, whether willing or not, within the visual range of a holy man for over 10 km or 2 to 3 hours travel and never more than 2.4 km distant from the next pillar ascetic". 
12 This evidence leaves no doubts about the popularity of the stylite vocation. The institutionalization of the monastic movement began in the very first century of its existence. Besides the communities of admirers settled near the pillars of the most prominent stylites, such as St. Daniel and St. Symeon the Younger, which were later transformed into fixed monastic establishments, some monasteries introduced this practice themselves, seeing the necessity of having their own 'holy man'. 13 John of Ephesus in his 'Lives of Eastern Saints' tells the legend of the brothers Abraham and Maro. In the vicinity of their monastery there was a high column, which was mounted by monks in a defined succession in order to become stylites. Abraham earned the right to ascend the column after ten years of hard labour. This tale shows that monasteries which managed a pillar existed already in the sixth century, and that stylitism was not only a vocation, but also a special status to be earned by preceding deeds. 14 Already in the early Byzantine period the pillar / tower could be not only an isolated construction, but also a part of a monastic complex, even incorporated into the structure of a church. H. Butler suggested that upper-story chambers in some Syrian church towers, either flanking the altar of a church or forming its western façade, may have been used by hermits. 15 However, this suggestion was criticized by B. Schellewald, due to the absence in such chambers of the equipment necessary for the dwelling of a secluded monk. She emphasized that only the Theotokos church of Schēch Sleimān of the fifth century was equipped with a 'latrine', located in the northeastern upper-story chamber, and that, therefore, the practice of seclusion in the towers of a church was not widespread in the Syrian lands. 
16 Nevertheless, one should not underestimate the level of self-mortification of the early ascetics, one of the extreme forms of which is demonstrated by the contemporaneous Coptic tradition of walling up hermits to lead a life of complete isolation. Such elevated cells without a door had only a small hole for communication. 17 R. Morris points to the rare references to the stylites in the hagiographic literature of the eleventh and twelfth centuries. Only two full Vitae of stylites from the Middle Byzantine period are extant, those of St. Luke of Chalcedon and of Lazaros of Mt. Galesion. The scholar explains this reduction in number by the fact that the lives of recluses more often adapted to the so-called 'hybrid' form of monasticism. The traditional eremia of a stylite was being replaced by active involvement in the everyday affairs of a nearby monastery, and the stylos became a symbol of spiritual leadership, earned by the most outstanding among the monastic brethren. 18 However, these arguments need to be modified. Today ten Vitae of stylites survive. Three of them, those of Timothy of Kākhushtā (d. ca. 830), Luke of Chalcedon (d. 979) and Lazaros of Mt. Galesion (d. 1054), belong to the period from the ninth to the eleventh century, and the rest to earlier times. 19 Taking into account the Byzantine Empire's loss of extensive territories in Syria and Egypt and the evidence on stylites from other written sources, such as descriptions of pilgrims' travels to the Holy Land, we should not claim a dramatic decrease in the popularity of this monastic vocation. It may be that, as Byzantine society became so closely acquainted with the stylites, the phenomenon lost its striking effect on the pious Byzantines, albeit retaining its highly venerable status for both laymen and monks. Niketas Choniates' account of the rebellion against Isaac II in 1187 testifies to this. 
According to the Byzantine historian, the emperor took all possible steps in order to remain on the throne. He even invited to the palace 'the monks, who walk barefoot, who sleep on earth and who elevate themselves closer to the sun on the columns'. 20 This testimony supports the continuation of the stylite tradition in Constantinople at the end of the twelfth century.
In the Middle Byzantine period the tradition of a hegoumenos-stylite, begun by St. Daniel and St. Symeon the Younger, was further developed. One such hegoumenos, St. Luke of Chalcedon, was born in 879 and, according to his Vita, lived for one hundred years. He spent the last forty years of his life living on a column on the sea shore near Constantinople. St. Luke began his vocation in one of the major monastic centers, Mount Olympos. Already in Chalcedon, Patriarch Theophylact, who had been healed by the stylite, visited him frequently. The Vita speaks of how St. Luke, at the patriarch's request, undertook the revival of the monastery of St. Bassianos, which had been founded in the fifth century in the Deuteron quarter of the capital and almost immediately attracted 300 monks. 21 However, by the tenth century the monastery had declined. The work of the stylite was successful, and he was named its second ktetor, considered worthy of burial in the katholikon. 22 The Vita does not give the details of the monastery's renewal, although it is obvious that the intervention of the saint involved more than mere financing.
Evidence from the tenth century testifies that to be called a 'stylite' one was no longer required to mount a column, or even to arrange a cell in a tower: climbing a steep rock would have been enough. This was the case with St. Paul the Young, the founder of the monastery of the Theotokos of Stylos on Mount Latros, one of the so-called 'holy mountains'. Scholars have dated its foundation to the 920s-930s. The area of the monastic establishment was divided into two parts separated by a brick wall. In the western, larger part the majority of the brethren lived according to communal rules, while the eastern part contained the cave of St. Paul the Young, surrounded below by the cells of his closest disciples, the hermits. 23 The last Vita of a stylite describes the life and feats of Lazaros of Mount Galesion, who was born in 966 near Ephesus and died on Mt. Galesion in 1054. He was the founder and hegoumenos of three monasteries on the mountain, which was located in the vicinity of Ephesus. His Vita offers several crucial, but still ambiguous, details for the reconstruction of a hesichasterion in the body of a church. Therefore, his life is worthy of closer examination. His Vita states that Lazaros wished to be a monk since he was twelve years old. His innermost dream was to visit the Holy Land. To accomplish this he repeatedly tried to run away from home. Finally, at the age of eighteen he reached Jerusalem (991 at the earliest), became a monk in the monastery of St. Sabas and then moved on to the monastery of St. Euthymios. The events of 1009, when the church of the Holy Sepulchre was destroyed on the orders of the Fatimid caliph al-Hakim, may have been the main reason for Lazaros' return to his homeland. On his route back he visited the 'Wondrous Mountain' near Antioch to see the place where St. Symeon the Younger had lived as a stylite in the sixth century. On his return from the Holy Land circa 1010, Lazaros found a small hermitage near Ephesus dedicated to St. 
Marina, in which two hermits lived. With their help he built a pillar, on which he lived for seven years, and during this time his fame spread all over the land. However, the location of the hermitage near the main road to the city of Ephesus and his growing popularity forced Lazaros to leave the place and to search for a calmer dwelling on the wild Mount Galesion. On this mountain the three monasteries of Lazaros were founded one after the other. The monastery of the Saviour was the first. It grew around the cave where Lazaros lived. However, soon his disciples constructed a pillar for him; there he stayed for twelve years. The monastery of the Theotokos was the second one, situated higher on the mountain. The third, still higher, monastery was dedicated to the Resurrection, and there Lazaros spent the rest of his life, also as a stylite. According to the Vita of Lazaros all three monasteries were communal ones, though certain of their members could become recluses, including stylites, by occupying one of the vacant pillars left after Lazaros had gone on to the next monastery. The author of the Vita specifically mentioned that it was possible to become a recluse only after receiving permission from Lazaros, and many could not obtain it. 24 The Vita of Lazaros contains much direct and indirect evidence on the construction of his pillars. Unfortunately, no traces of Lazaros's monasteries have been found on the mountain. That is why only hypothetical reconstructions are possible, which is not to say that they are not necessary, since there are few data on the architectural aspect of the stylites' abodes in the middle Byzantine period. Lazaros lived in a cell at the top of his pillars, none of which had a roof, so the stylite was always exposed to all kinds of weather. The walls of his cell were not high, and when Lazaros stood, he was visible to those standing in front of the pillar. 
Apparently, the pillar was not tall, since Lazaros could communicate with monks or laymen standing below. A wooden ladder led to a platform constructed before his cell. There the majority of the conversations between the stylite and his visitors took place. From the platform they could not see the stylite, unless he opened a little window overlooking the platform. This window was big enough for one to lean in or out of the cell. There are no clear data on the size of the cell, but chapter 235 of the Vita tells that it was approximately two feet ('three spans') wide. Lazaros slept on a specially constructed chair, and there was also a certain place to accommodate bodily functions, since the stylite did not descend from his pillar, except when he proceeded to the next one. A small hermitage partially preserved among a great number of caves on Mount Latros may be useful for a reconstruction of Lazaros's pillars, if any of them was a freestanding building. Mount Latros is located not far from Galesion, to the south of Ephesus. This hermitage, known as Sobran Kalesi, has not been accurately dated, but A. Kirby and Z. Mercangöz have proposed that it could be placed in the Middle Byzantine period. It consisted of a tower, a church, a building of unknown function and a cave. The single-aisled church has preserved only its eastern part. The tower is preserved better. Since the entrance to the tower is at the first storey, the only way its barrel-vaulted chamber could be reached was by a ladder. The tower may have offered refuge for monks in times of danger, but according to the researchers it is more likely to have served as the cell for one of the monks of the hermitage. 26 The Vita describes many everyday activities of Lazaros, but most importantly it allows the suggestion that the pillar may have been a part of the church. 
In several chapters (157, 207, 225) the action takes place in the church, which is hard to explain without presuming that the pillar was incorporated into the building. Chapter 249 mentions another small window which looked towards or into the church. This fact, along with several other pieces of evidence, was used by H. Delehaye and R. Greenfield to suppose that this window looked into the naos of the church, so Lazaros could see everything that was happening inside and even participate in the liturgy. 27 Such a construction would not have precluded the necessity of a ladder and platform for daily communication with the hegoumenos.
One of the first anchorite-hegoumenoi secluded within the confines of a monastic establishment may have been a certain Anthimos, the abbot of the Constantinopolitan monastery of Dalmatou in the middle of the sixth century. 28 The following anchorite-hegoumenoi are also mentioned in the sources: Platon the Stoudite, Stephen the Younger, Peter of Atroa, Athanasios and Paul of Latros, Lazaros of Galesion and others. 29 It is worth noting here that in the middle of the eleventh century the same problem of combining the seemingly incompatible monastic callings of a hegoumenos and a recluse was in the process of being solved in Constantinople. One of the solutions implied the presence of two hegoumenoi in charge of a monastery: the senior would become a recluse, while the junior was responsible for organizing the daily life of the community. This model was introduced in the monastery of John and Philotheos at Anaplous, founded in the second quarter of the eleventh century on the European shore of Constantinople, the very place of the feat of St. Daniel the Stylite. John and Philotheos both became the hegoumenoi of the monastery; however, the latter chose to live in seclusion. This arrangement of governance by two hegoumenoi was still alive in the middle of the twelfth century. 30 Timothy, the second ktetor of the famous Evergetis monastery and the author of its rule, was a recluse himself. His first intention was to provide for two hegoumenoi for the monastery: one a recluse, the other responsible for everyday life. After the death of the senior hegoumenos, the junior one was to become a recluse himself, and a monk from the brethren was to be elected as the second hegoumenos. However, Timothy later decided to remove this stipulation and made provision for a single hegoumenos, who might or might not become a recluse, depending only on his inclination. 
31 Speaking about Constantinople and possible architectural structures to accommodate recluses, two monuments should be mentioned which, to my knowledge, have not yet been examined from this standpoint. The first one is Kalender(i)hane Camii, the extant building of which is dated to the turn of the twelfth century (fig. 1). Already in 1932 N. Brunov argued that at least the northern corner sections of the building had three levels. It was impossible to investigate the southern corner sections, as both had been significantly altered. The upper space of the north-western section is clearly divided into areas: the walls of the lower area form a square; the upper area was made as a quatrefoil and may have functioned as a chapel. 32 L. Theis agreed with these suggestions. She was able to find a projection of masonry between the two mentioned areas, which allowed her to presume the presence of a wooden floor separating them. The scholar also notes the presence of a flat dome covering the upper quatrefoil area, an additional indication of its special liturgical status. 33 This hypothesis should also be verified in relation to its possible connection with the stylite tradition. This architectural arrangement may have been intended for a prominent recluse, whose cell was located on the lower floor and his private chapel on the upper one. The quatrefoil plan of the upper corner bay and its dome undoubtedly emphasize its functional specifics; such an articulation would not have been implemented for a mere structural element. The two-storeyed ambulatories flanking the church and the stair towers may have provided access to this elevated area. 34 In addition, S. Ćurčić identified another architectural arrangement within the highly complicated structure of the complex of Kalenderhane Camii, which may have been intended for seclusion purposes. 
The scholar suggested that the so-called 'icons chapel', abutting upon the south side of the 'Melismos chapel', was built as the hesichasterion of a high-status monk, or even of a pair of relatives, which after his/their death was to be transformed into a burial chapel. 35 This insertion of the 'icons chapel' into the south-eastern area of the church and its decoration have been tentatively dated by S. Striker and Y. Kuban to the second half of the thirteenth century. 36 The functional purpose of the spaces located in the eastern piers of Gül Camii, a monument with several Byzantine construction stages from the eleventh to the fourteenth century (fig. 3), still remains unclear. Both chambers were placed between the upper and lower storeys of the church, and small winding stairs built into the piers led to these spaces. B. Schellewald believed they were used as a concealed treasury to store relics. 37 However, I propose to examine them as a possible place for seclusion. L. Theis argued that these small spaces appeared as the result of the rebuilding undertaken in the Palaiologan period. The layout of the initial building stage, which is dated to the eleventh century, implied a cross-in-square church with flanking ambulatories. In the second stage the church was significantly reshaped. Gül Camii became two-storeyed around its whole perimeter through the introduction of galleries with chapels at their eastern ends. To support the galleries, several openings of the corner sections of the eastern and western walls were walled up to form four massive piers. The chambers in question were integrated into the eastern pair of the piers. It is important to note that each of these bays had a small window looking into the altar space and a round opening towards the chapels at the eastern ends of the galleries. 38 Thus, this arrangement may be associated with the model of a cell with a chapel above.
A church located at Küçükyali near Istanbul, tentatively associated by A. Ricci with the monastery of Satyros founded by Patriarch Ignatios in the second half of the ninth century, had a similar layout of the eastern part (fig. 4). 39 As in Gül Camii, the central apse was flanked by two bays on both sides, but the inner ones were designed as rectangular chambers, and the outer ones as chapels with apses. S. Ćurčić argued that at least the southern rectangular bay could have served as the abode of a recluse. The bay was separated by a thin wall (ca. 0.25 m) from the southern chapel, which probably contained an arcosolium niche in its southern wall. Access to the bay was provided only from the altar space through a wooden door. It was likely that the southern wall of the rectangular chamber had a small window towards the southern chapel. 40 The reviewed examples of special architectural arrangements, introduced into the mentioned Byzantine monuments alongside the usual features of the Byzantine architectural model of a church, reveal two distinct spatial characteristics of such chambers. They should either occupy an elevated position within the gallery level, or abut upon the altar space. The eastern cells of Gül Camii, however, managed to combine both features. To illustrate this phenomenon I will turn to the monuments of mediaeval Serbia, which provide the scholar studying the architectural implications of stylitism with firmer ground.
The Serbian tradition has been thoroughly explored by S. Ćurčić. First of all, the scholar examined a group of church towers of mediaeval Serbian monuments, which formed a part of their western façade, including the church of the Ascension in the Žiča monastery (exonarthex and tower: between 1219 and 1233-1234), the church of the Holy Trinity in the Sopoćani monastery (exonarthex and tower: after the second half of the thirteenth century), the church of Bogorodica Ljeviška in Prizren (completed in 1309-1310), the church of the Theotokos at Peć (narthex and tower: ca. 1324-1330) and the church of St. Stephen (Lazarica) in Kruševac (1377/1378-1380). All the towers had a similar multi-storey arrangement. The first level was executed as a monumental entrance to the church; the chamber on the second level may have been used as a hesichasterion; the third level accommodated a chapel, while the bells were set in the top section of the tower. 41 The Žiča tower was the monument that inspired the construction of the towers which followed (fig. 5). The hesichasterion of the first Serbian archbishop, Sava I (St. Sava), was at the second level of the tower, and there he withdrew when he decided to step down from the archbishop's throne. This chamber was never decorated; it had a wooden ceiling, which functioned also as the floor of the chapel above. The two bays were connected to each other by a wooden ladder. The partially preserved fresco painting shows an elaborate program. The second phenomenon, that of the south-eastern corner bays, which were accessed only through the sanctuary area, is far more complex and still needs some additional arguments. 
The scholar proposed that a number of mediaeval Serbian and Byzantine monuments with limited access to their south-eastern corner bays should be examined as probable monastic solitary cells of high-status monks of the corresponding monasteries, as an alternative to common diaconicon functions. The following monuments were considered: the church of the Ascension in the Žiča monastery (1206-1217), the church of the Apostles at Peć (early thirteenth century), the church of the Assumption of the Morača monastery (1251-1252) and the church of Christ Pantokrator at Dečani (1327-1335). Others, such as the churches of the monasteries of Pridvorica, Sopoćani, Arilje, Staro Nagoričino and Banja and the churches of the Theotokos at Peć and in Kučevište, were mentioned as revealing similar characteristics. 43 The Vita of St. Sabas the Sanctified reveals a significant detail in this regard, mentioning that the saint found a secret path from the cave church to his tower located on a high cliff; thus a direct connection between a holy place and his abode was established, creating a similar church-and-cell pair.
Once again S. Ćurčić traced this tradition to the very origins of Serbian history as a Christian state. He referred to the Vitae of St. Symeon (Stefan Nemanja) and St. Sava by Domentijan, which give information on the cell of St. Symeon in the monastery of Vatopedi, where he retired after leaving the throne of the Serbian župan in 1196. The corresponding paragraph informs us that on the orders of Stefan Nemanja a cell was constructed with a window so he could watch 'those who are in prayer in the holy church'. This description, indeed, may be considered as a possible indication of the seclusion of St. Symeon in this hesichasterion, which may have abutted upon the south wall of the church of the Annunciation. 44 The most substantial arguments in favor of the 'hesichasterion' hypothesis are found in the church of the Assumption of the Morača monastery (fig. 6). The church was built on the orders of Stefan, the grandson of Stefan Nemanja. The south-eastern chamber of this building, accessed through a door from the altar area, has preserved its fresco decoration, which includes the cycle illustrating the life of Prophet Elijah and the image of the Theotokos with Christ-Emmanuel in a medallion (Theotokos 'Chora tou Achoretou' or Bogorodica Znamenie). The choice of scenes and their placement were interpreted as pointing to a specific monastic status of the compartment in question, and its architectural arrangement as alluding to the cell of St. Symeon at the Vatopedi monastery. 45 The architectural design of the eastern zone of the church of Christ Pantokrator at Dečani also yields some important material for consideration. Its south-eastern two-storeyed bay, accessible from the sanctuary only, lacks any fresco decoration, which makes its function even more difficult to define. 
However, it was suggested that the rocky landscape depicted in the background of the fresco of the Theotokos Eleousa, placed in the lunette above the door leading from the sanctuary to this chamber, may point to the ascetic character of the cell. 46 Thus, the first group of towers, forming the western façades of the corresponding monuments, may be associated with the needs of the ecclesiastic hierarchy of the Serbian state: St. Sava, the first Serbian archbishop, ordered the first tower, implementing the cell-and-chapel arrangement in the Žiča monastery; he was then followed by Archbishop Sava III in the church of Bogorodica Ljeviška in Prizren and Archbishop Danilo II in the church of the Theotokos at Peć. It would be tempting to connect the second group of monuments with the ruling Nemanjić dynasty, as the most conspicuous examples of this arrangement were introduced in the churches built under the auspices of the Serbian rulers, who wished to emulate the saintly founder of their family. However, this hypothesis is premature, since the question of the exact function of these south-eastern blocked-out chambers is still pending and must be resolved before any conclusions can be drawn on whether there was a separate royal tradition.
As in mediaeval Serbia, Russian monasticism received its original impetus and inspiration from Byzantium, but its subsequent development evolved almost independently due to local historical, social and even geographical idiosyncrasies. Nevertheless, the stylite and seclusion traditions were able to find their Russian followers. V. Sarabianov has suggested that the cells on the galleries and in the towers of Russian pre-Mongolian churches were constructed for persons of high spiritual authority and could even be considered as a prerogative of founders and hegoumenoi. 47 The scholar examined the cells in the towers of the cathedrals of St. Anthony's and St. George's monasteries in Novgorod, as well as the galleries of the church founded by Ol'govich and his wife, arguing that the southern chamber on the latter's galleries functioned as a chapel and at the same time as a place for the solitary prayer of an eminent person, probably of princely descent. It is highly likely that this chapel was dedicated to Prophet Elijah, due to its fresco decoration depicting scenes from his life. 48 The tower of the cathedral dedicated to the Nativity of the Theotokos in the monastery of St. Anthony, founded in 1117, leads to the galleries and then to a domed chapel dedicated to the venerable hermits Onuphrius and Peter of Athos. A small niche was placed at the joint of the western wall of the church and the circumference of the tower (fig. 7). The fresco image of a stylite, depicted near the niche, leaves no doubt about its function. Thus, it was a seclusion cell, where, as legend has it, St. Anthony of Rome, the founder of the monastery, lived his last years. 49 The floor of the cell is significantly lower than the steps of the staircase, and its upper part had been partly walled up earlier, so only a small opening was left. The niche has a small window looking outside. 
The tower of St. George's cathedral (founded in 1119) of the monastery of St. George has four niches, located along the ascent of the staircase. Each was designed with a different configuration. The first one, built into the tower's base, was planned for genuflection. The second and third niches admit a standing person, though enclosing him to the waist. The last niche is situated near the entrance to the chapel, located in the tower's dome. This niche is fully open. According to V. Sarabianov, it is highly likely that the niches of St. George's cathedral accompanied the monks who ascended to the chapel and were a sort of marking, indicating the stages of repentance and prayerful ascent on the spiritual ladder (fig. 8). 50 The experience of St. Euphrosyne of Polotsk is crucial for a reconstruction of the monastic practice under consideration and its architectural implementation. Its overall importance is explained by the fact that it allows a scholar to work with all the sources necessary for its interpretation. At the end of her life St. Euphrosyne undertook a pilgrimage to the Holy Land, visiting Constantinople on her way. She died in Jerusalem sometime after 1167 and was initially buried in the monastery of St. Theodosius in Jerusalem. 51 Obviously, St. Euphrosyne was well-acquainted with Constantinopolitan culture and monastic traditions. This awareness was caused by the special historical circumstances in which her family had found itself. The princes of Polotsk were struggling against the Kievan rulers. In 1129 they and their families were captured and sent into exile to Constantinople due to their disobedience. Presumably, the father of St. Euphrosyne, George, was one of the exiled princes, who returned to his homeland in 1139/1140. 52 According to her Vita, St. Euphrosyne herself did not hesitate to maintain relations with the Byzantine emperor and the patriarch. She sent her servant to Constantinople, asking them to provide her with the famous icon of the Theotokos, one of the three created by St. Luke. 
Her request was granted and the emperor sent her a copy of the image of the Theotokos of Ephesus. En route to the Holy Land she met Emperor Manuel I in person, who greeted her with honors and directed her to Constantinople. 53 The sources mention that St. Euphrosyne spent her time within the confines of the church of the Transfiguration. 54 Her previous experience in St. Sophia's cathedral and the architectural design of the interior church spaces provide grounds to presume that her abode was set on the church galleries. The main cathedral of the land of Polotsk was rebuilt repeatedly during its long lifetime, but the foundations and some sections of walls from the middle of the eleventh century have survived. The most extensive archaeological investigations were carried out in the 1970s-1980s, though the question of where the 'golbets' of St. Euphrosyne may have been located is still unresolved. 55 This issue makes the study of the church commissioned by St. Euphrosyne herself even more significant.
Recent work by the archaeological expedition of the State Hermitage Museum in and around the church of the Saviour has shed new light on its overall composition, construction stages and inner arrangements. The new finds and observations allowed P. Zykov and E. Torshin to reconstruct the spatial system of St. Euphrosyne's presumed dwelling on the galleries. 56 The gallery area of the Saviour church consists of a rectangular bay, the central part of which overlooks the naos. This oblong space connects the two cruciform chambers constructed above the south-western and north-western corner bays of the naos (fig. 9). The southern chamber was designed as a chapel covered by a dome. It has a prothesis niche and three windows in the eastern, southern and northern walls. The chapel has preserved its fresco decoration, which has been dated to the beginning of the thirteenth century. V. Sarabianov believed that both cruciform bays of the galleries functioned as chapels and cells at the same time. However, despite having the same layout, the northern chamber differs extensively from the southern one in its spatial articulation. Archaeological investigations of the interior of this bay have revealed traces of the wooden constructions and platforms used throughout this small area. The reconstruction shows a two-storeyed arrangement of the northern chamber, the two levels divided by a wooden floor. Several wooden constructions were introduced into the first level; some were interpreted as shutters for the window and the niche in the northern wall, others as bookshelves and a desk for copying books (fig. 11). Scholars have proposed that this part of the northern chamber was used as a scriptorium and a library. 58 This suggestion is strengthened further by the image of an angel writing on a desk with a quill.
59 It is much harder to assign a function to the second level of this bay. It is covered by a barrel vault measuring 2.20 meters in height from the wooden floor to its summit. The lower parts of the chamber are enlarged by three rectangular niches, one in each wall except the northern; the eastern niche, moreover, was covered by a barrel vault. This chamber was lit by three cruciform windows: two of them were cut in the eastern and southern niches, while the last was placed in the northern wall (fig. 12). The key to interpreting how this space may have been used lies in the traces of fresco painting, which are contemporaneous with the decoration of the rest of the church, i.e. dated to the middle of the twelfth century. According to the researchers, this fact precludes a mere storage function for this space, suggesting its more exalted status as a room for 'solitary prayer'. Scholars have shown that all the described arrangements were part of the initial requirements of St. Euphrosyne, introduced at the time of construction of the church of the Saviour. 60 The hypothesis of P. Zykov and E. Torshin is viable, taking into account all the monuments examined above and the awareness of St. Euphrosyne of Byzantine monastic traditions. Comparative analysis even allows the suggestion that St. Euphrosyne and her master builders implemented the chapel-and-cell model, similar to the arrangements of St. Sava for the Žiča tower and the north-western gallery bay in Kalender(i)hane Camii. The lower level of this area was never decorated with frescoes, except for the image of the writing angel, just like the unadorned cell of St. Sava. The eastern and southern niches of the upper-level chamber of the Saviour church had barrel vaults, while the western one had a flat ceiling. It is not easy to answer whether the low position of the niches, probably necessitated by leaving the masonry of the main vaults undisturbed, impeded their liturgical use.
Summing up, the church of the Saviour in Euphrosyne's monastery in Polotsk, as demonstrated by the recent discoveries, represents a whole complex of spaces constructed on the church galleries to meet the special requirements of St. Euphrosyne, with separate rooms for her work 61 and prayers, and even a place for rest. Though there are no arguments in favor of her seclusion within this complex, it would certainly have been a suitable dwelling place for her.
To complete the review of pre-Mongolian Russian monuments which may be connected to the stylite tradition, I should mention a freestanding tower in Stolp'e, dating from the second half of the twelfth to the beginning of the thirteenth century, with a unique inner design that has no analogues in Russian architecture. The tower has five levels; the top level was executed as an octagonal chapel. The tower therefore has all the necessary prerequisites to be interpreted as a stylite pillar. 62 To conclude, I should raise the question of the socio-political status of the persons emulating the life of the stylites, which may explain the significant number of its adepts. S. Ćurčić introduced the concept of a 'living icon', whereby a living man, though a 'future saint', presents himself within architectural frames that emphasize his special exalted status; the most instructive example is the Enkleistra of St. Neophitos of Cyprus at the end of the twelfth century. 63 However, this was more than a question of self-representation. Since St meliou in Pisidia to seek advice for the emperor. It should be stressed that the monk is represented in the window of a monastic tower (fig. 13). Both St. Luke of Chalcedon and Lazaros of Mt. Galesion had connections in the higher echelons of Byzantine society. Hagiographic sources and historical chronicles yield significant data on the relations of emperors and the aristocracy with anchorites. Based on written testimonies, historians have formulated the concept of the 'politisation' of a Byzantine saint: prominent members of the monastic movement could exert their influence on politics both on the local level and empire-wide. 65 It is very likely that the status of these monks was emphasized symbolically, above all, by architectural means. The hegoumenos of an eminent monastery, educated and famed for his ascetic deeds, was a suitable figure to assume the role of the emperor's advisor.
Meanwhile, seclusion on the galleries or tower of a monastic church marked the authority of such an anchorite. Thus, their seclusion should not be understood as permanent, since by assuming an active social position the ascetics agreed that their peace and solitude would be disturbed. However, the transformation of this phenomenon in mediaeval Serbia and Old Rus' was unavoidable within the different historical context. Russian and Serbian monks, following the lives of the stylites in churches built under their auspices, already had a high status, obtained either by birth in a princely family, or by holding a high position in the ecclesiastical hierarchy, or both. Thus, the juxtaposition of historical and architectural evidence supports the possibility of seclusion practice in the church proper. This hypothesis is valid for both the Byzantine Empire and Old Rus'. Of particular interest is the case when a secluded hegoumenos was at the same time the ktetor of the building, so he or she was directly involved in the creation of the architectural program of a future church. Maintaining the practice of seclusion led to a higher authority and religious status of a hegoumenos, which received its architectural embodiment in the introduction of a cell for a recluse into the church proper. The accumulation of written sources and archaeological data will provide the necessary evidence for further studying the given phenomenon and defining the specific traits that characterized the cells of the recluses. In turn, these definitions could help scholars to obtain a fuller picture of the functionality of the galleries, towers and ambulatories in Byzantine and Old Russian church complexes. Special attention should be given to monuments with the structural pair of 'cell-and-chapel'.

Growth on ATP Elicits a P-Stress Response in the Picoeukaryote Micromonas pusilla
The surface waters of oligotrophic oceans have chronically low phosphate (Pi) concentrations, which renders dissolved organic phosphorus (DOP) an important nutrient source. In the subtropical North Atlantic, cyanobacteria are often numerically dominant, but picoeukaryotes can dominate autotrophic biomass and productivity, making them important contributors to the ocean carbon cycle. Despite their importance, little is known regarding the metabolic response of picoeukaryotes to changes in phosphorus (P) source and availability. To understand the molecular mechanisms that regulate P utilization in oligotrophic environments, we evaluated transcriptomes of the picoeukaryote Micromonas pusilla grown under Pi-replete and -deficient conditions, with an additional investigation of growth on DOP in replete conditions. Genes that function in sulfolipid substitution and Pi uptake increased in expression with Pi-deficiency, suggesting cells were reallocating cellular P and increasing P acquisition capabilities. Pi-deficient M. pusilla cells also increased alkaline phosphatase activity and reduced their cellular P content. Cells grown with DOP were able to maintain relatively high growth rates; however, the transcriptomic response was more similar to the Pi-deficient response than to that seen in cells grown under Pi-replete conditions. The results demonstrate that not all P sources are the same for growth; while M. pusilla, a model picoeukaryote, may grow well on DOP, the metabolic demand is greater than for growth on Pi. These findings provide insight into the cellular strategies which may be used to support growth in a stratified future ocean predicted to favor picoeukaryotes.
Introduction
Picophytoplankton (<3 μm), composed of both prokaryotic and eukaryotic organisms, dominate autotrophic biomass in oligotrophic oceans. While single-celled cyanobacteria are the most abundant autotrophs, picoeukaryotes can dominate biomass and productivity in the subtropics [1][2][3], making them important contributors to ocean carbon production [4] and export [5]. In the central North Atlantic Ocean, picoeukaryotes accounted for approximately 87% of the carbon biomass and 68% of the picophytoplankton primary production [1]. Furthermore, eukaryotes in the subtropical North Atlantic Ocean were found to be biochemically distinct from co-occurring prokaryotic lineages, with a higher δ15N signature, and were estimated to be responsible for nearly all of the new production [6].
Phosphorus (P) is an essential macronutrient utilized by phytoplankton for growth and, as such, has the potential to significantly influence oceanic primary production [7][8][9][10][11]. Oligotrophic oceans, like the North Atlantic subtropical gyre, have consistently low (<10 nmol L-1) phosphate (Pi) concentrations during stratified periods [12,13], although concentrations can exceed 20 nmol L-1 during periods of deep convective mixing [11]. In this region, dissolved organic phosphorus (DOP) is an important nutrient source, accounting for >80% of the total dissolved P [8,12,13], and is readily utilized by the resident phytoplankton [11,[14][15][16]. Modeling studies have demonstrated that DOP may be supplied to the subtropical North Atlantic through horizontal transport from the northwest African shelf region [10,17,18], where there is net DOP production [19]. As surface oceans warm and stratification increases, these cross-basin sources of organic nutrients may become progressively more important in supporting production in oligotrophic gyres.
Blooms of the pico-prasinophyte Micromonas pusilla have been observed in the subtropical North Atlantic, with maxima in abundance associated with mixing events and high ambient DOP concentrations [11,20]. M. pusilla has also been shown to be an important member of the picoeukaryote community in the Arctic [21]. Given its vast geographic range, M. pusilla has been proposed as a sentinel organism for understanding the effects of climate change on biogeochemical cycling [22]. Despite its ecological importance, the metabolic strategies employed by M. pusilla to cope with Pi-deficiency and growth on different P sources are poorly understood. In general, marine phytoplankton elicit a three-pronged response to combat P stress, which includes increasing Pi uptake, reducing cellular P demand, and utilizing DOP. Indeed, P-limited M. pusilla cultures have been shown to increase alkaline phosphatase activity (APA; [23]), reduce their cellular P quota [23], and adjust their lipid composition [24]. However, the molecular underpinnings driving these physiological responses remain unknown. Furthermore, the molecular and cellular response to growth on DOP has not been explored.
Here, transcriptomics were used to investigate the whole-genome expression response of M. pusilla to P scarcity and P source. RNA-sequencing, along with cellular macronutrient composition and alkaline phosphatase activity (APA), was used to characterize the cellular response to Pi-replete and Pi-deficient conditions. A significant increase in APA and in the expression of genes that function in P acquisition, concurrent with a decrease in growth rate and cellular P content, was expected in the Pi-deficient cultures. Given the importance of DOP to picophytoplankton in oligotrophic oceans, we also investigated the response of M. pusilla cultures grown under replete conditions with ATP as the only P source. With the exception of increased APA and the corresponding gene expression, we hypothesized growth rates and elemental composition to be similar in Pi- and ATP-replete M. pusilla cultures due to growth in an equimolar P environment.
Culture conditions and physiological measurements
Duplicate (denoted 'a' and 'b'), axenic batch M. pusilla (CCMP 2709) cultures (3 L) were grown under Pi-replete, ATP-replete, and Pi-deficient conditions at 16°C on a 14:10 h light:dark cycle at 120 μE m-2 s-1. Prior to the start of the experiment, the M. pusilla culture was evaluated for the presence of bacteria by SYTO staining and flow cytometry [25], while throughout the experiment bacterial contamination was assessed using L1pm media [26]; all samples were negative. The experimental cultures were inoculated with exponentially growing, Pi-replete M. pusilla cells that had been spun, washed, and resuspended in the treatment media at a starting concentration of ~1.6 x 10^5 cells mL-1. Cultures were bubbled with a 0.2 μm-filtered, 380 ppm compressed air:CO2 mix. All media, as well as the culture used for the inoculum, were equilibrated to the pCO2 condition prior to the start of the experiment. The pCO2 levels were controlled and monitored because the work presented here is part of continuing research aimed at understanding the impact of changing pCO2 on picoeukaryote growth.
Cells were grown in artificial sea water [22] amended with either autoclaved (macronutrients and trace metals) or 0.2 μm syringe-filter-sterilized (vitamins) L1 nutrients [26], with the exception of P and silicate, which were omitted. Phosphorus was added separately to achieve the desired condition: the Pi-replete media contained 36 μM PO4^3-, the Pi-deficient media received 0.5 μM PO4^3-, and the ATP-replete treatment contained 12 μM ATP (~36 μM P). Cells grown in the Pi-deficient treatment are expected to be limited at this concentration based on the scaling relationship between Pi utilization and cell volume (e.g., [27]), which suggests a 2 μm cell would have a half-saturation concentration (Km) for Pi uptake of ~0.1 μM; a similar Km value for growth on Pi was also reported in the prasinophyte Prasinomonas capsulatus [28]. Over half of the DOP pool is unidentified; ATP was selected as the proxy for DOP because it represents a compound that not only is quantifiable but also has been detected in every marine environment where measurements have been made [29].
Growth was monitored daily by fluorescence measurements using a Turner TD-700 Fluorometer (Sunnyvale, CA) and cell counts which were analyzed by flow cytometry. Samples for cell abundances were fixed with paraformaldehyde (0.5% final concentration), incubated for one hour at 4°C, and stored at -80°C until analysis. Samples were analyzed on a BD FACSJazz cell sorter (San Jose, CA); cells were enumerated and converted to cell abundances using the volume analyzed method [30]. Temperature and pH measurements were also made daily using an Orion Star A211 pH meter (Thermo Scientific, Waltham, MA).
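The volume-analyzed conversion amounts to dividing the gated event count by the sample volume the instrument actually processed. A short sketch, where the event count and analyzed volume are hypothetical values, not data from this study:

```python
def cells_per_ml(events: int, volume_analyzed_ul: float) -> float:
    """Convert flow-cytometry events to an abundance in cells mL^-1,
    given the sample volume (in uL) analyzed by the instrument."""
    return events / (volume_analyzed_ul / 1000.0)  # uL -> mL

# Hypothetical acquisition: 8,000 gated events in 50 uL of sample.
abundance = cells_per_ml(8000, 50.0)  # 1.6e5 cells mL^-1
```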
Samples were collected for dissolved and cellular nutrient analysis, APA, salinity, and total alkalinity (AT) at the beginning of the experiment (day 1) and on day 5 for the Pi-replete treatment and day 7 for the Pi-deficient and ATP-replete treatments. Cells from the Pi-replete and ATP-replete cultures were harvested in early exponential phase, and Pi-deficient cells were harvested when growth was reduced compared to the other treatments (Fig 1). The harvest times were selected so as to capture strong changes in gene expression associated with growth on different P sources, when cell abundances were high enough to support the desired analyses, and when pH (S1 Fig) and carbon chemistry changes due to cell growth and biomass accumulation were minimal (S1 Table). Nutrient samples were filtered through 0.2 μm polycarbonate filters and stored in HDPE bottles at -20°C until analyzed. Nitrate and phosphate concentrations were measured using a Seal AA3HR Segmented Flow Autoanalyzer (Mequon, WI). Samples for AT were 0.2 μm filtered and stored in sealed glass vials until analyzed. Duplicate AT measurements were made via titration using 0.1 N HCl and a Metrohm 888 Titrando (Herisau, Switzerland). Certified reference materials [31] were included in the measurements. Culture pCO2 concentrations and DIC were calculated using CO2SYS software [32] with constants from Mehrbach et al. [33], refit by Dickson and Millero [34], accounting for Pi concentrations (S1 Table).
Cell samples for particulate carbon (C), nitrogen (N), and phosphorus (P) were collected onto precombusted 25 mm Whatman glass fiber filters (GE Healthcare Bio-Sciences, Pittsburgh, PA) and stored at -20°C. Particulate C and N samples were dried and analyzed on a Costech 4040 elemental analyzer (Valencia, CA) using acetanilide as a standard. Particulate P determinations were made as described by Lomas et al. [11]. Briefly, filters were rinsed with 0.017 M MgSO4, dried at 90°C, and combusted at 500°C for 2 h. Upon cooling, 0.2 M HCl was added and the samples were hydrolyzed at 80°C for 30 min. After cooling, mixed reagent [35] was added, the samples were centrifuged, and absorbance was read at 885 nm using a Genesys 10 spectrophotometer (Thermo Scientific).
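Converting the 885 nm absorbance readings to a particulate P concentration reduces to a blank correction and a standard-curve calculation. A sketch assuming a linear standard curve; the slope, blank, and sample values below are illustrative, not measurements from this study:

```python
def particulate_p_nmol_per_l(absorbance: float, blank: float,
                             slope_abs_per_nmol: float,
                             volume_filtered_l: float) -> float:
    """Particulate P (nmol L^-1) from a molybdenum-blue absorbance reading.

    absorbance: sample A885; blank: reagent blank A885;
    slope_abs_per_nmol: standard-curve slope (absorbance per nmol P);
    volume_filtered_l: seawater volume filtered onto the filter (L).
    """
    nmol_p_on_filter = (absorbance - blank) / slope_abs_per_nmol
    return nmol_p_on_filter / volume_filtered_l

# Hypothetical reading: A885 = 0.210, blank = 0.010,
# slope = 0.004 abs/nmol, 0.5 L of culture filtered.
conc = particulate_p_nmol_per_l(0.210, 0.010, 0.004, 0.5)  # 100 nmol P L^-1
```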
Triplicate APA measurements were made by quantifying the hydrolysis of 6,8-difluoro-4-methylumbelliferyl phosphate (Life Technologies, Grand Island, NY) using a Molecular Devices FilterMax F5 microplate reader (Sunnyvale, CA). Abiotic substrate hydrolysis was accounted for in killed controls that were boiled and cooled prior to substrate addition, as well as in media-only controls. The fluorescent reference standard, 6,8-difluoro-4-methylcoumarin (Life Technologies) was used to calculate the rate of hydrolysis, which was then normalized to cell abundance to determine APA per cell.
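The per-cell APA calculation described above (killed-control subtraction, conversion of fluorescence to product via the DiFMU standard, normalization to abundance) can be sketched as follows; all numeric values here are hypothetical, chosen only to illustrate the arithmetic:

```python
def apa_per_cell(sample_rfu_per_hr: float,
                 killed_control_rfu_per_hr: float,
                 rfu_per_nmol: float,
                 cells_per_ml: float) -> float:
    """Alkaline phosphatase activity per cell (nmol hr^-1 cell^-1).

    Abiotic hydrolysis (killed control) is subtracted before converting
    fluorescence to hydrolyzed substrate with the fluorescent standard
    curve; the volumetric rate is then normalized to cell abundance.
    """
    net_rfu_per_hr = sample_rfu_per_hr - killed_control_rfu_per_hr
    nmol_per_hr_per_ml = net_rfu_per_hr / rfu_per_nmol
    return nmol_per_hr_per_ml / cells_per_ml

# Hypothetical values: 1200 RFU/hr sample, 200 RFU/hr killed control,
# 500 RFU per nmol standard, 1e6 cells mL^-1.
rate = apa_per_cell(1200.0, 200.0, 500.0, 1.0e6)  # 2e-6 nmol hr^-1 cell^-1
```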
Additionally, daily APA measurements were made in a separate M. pusilla batch experiment. Growth conditions were similar to those previously described, with the following exceptions: cultures (1.5 L) were grown in triplicate for each treatment, and the starting cell density was greater (~5.5 x 10^5 cells mL-1). Finally, two Pi-deficient cultures received a Pi addition (36 μM) on day 5 to demonstrate that cells were indeed limited by P availability.
Statistical Analysis
Analysis of variance (ANOVA) tests were conducted using SigmaStat (version 3.5; Systat Software, San Jose, CA) to determine statistically significant differences among APA measurements collected in the independent culture experiment where cultures were grown in triplicate for each treatment.
RNA preparation and transcriptome sequencing
Approximately 1.5 L of culture volume was gently filtered over 0.8 μm, 47 mm polycarbonate filters on day 5 for the Pi-replete and day 7 for the Pi-deficient and ATP-replete treatments. Filters were stored in lysis buffer, flash frozen, and stored at -80°C until analyzed. Total RNA was extracted using the Qiagen RNeasy Mini Kit (Venlo, Netherlands) according to the manufacturer's protocol, with the following exceptions: cells were lysed using 0.5 mm zirconia/silica beads (BioSpec, Bartlesville, OK, USA) mixed with the lysis buffer and vortexed until the solution appeared homogenous. The lysis solution was then passed through Qiashredder columns (Qiagen) to remove any large cell material that could clog the spin columns. To aid in the removal of DNA, two DNase digestions were performed. First, Qiagen's RNase-free DNase Set (an on-column treatment) was used according to the manufacturer's instructions. The second DNA removal step was conducted using the Turbo DNA-free kit (Life Technologies) according to the manufacturer's protocol. The RNA was then quantified in duplicate using a Qubit Fluorometer (Life Technologies); RNA quality was assessed by gel electrophoresis.
RNA preparation and sequencing were performed by the U.S. Department of Energy Joint Genome Institute (JGI; sequencing project ID 1042280). RNA sequencing libraries were generated from 1 μg of RNA, with 100 base pair paired-end reads sequenced using an Illumina HiSeq 2000. Reads were analyzed following the JGI pipeline. First, read quality was assessed using BBDuk [36], where artifact sequences were detected by kmer matching (kmer = 25) and trimmed. Reads were then quality trimmed using the Phred trimming method at Q6, and finally reads under 25 bases were removed. The remaining reads from each library were aligned to the M. pusilla genome [22] using TopHat [37], with only unique mapping allowed. Gene counts for each culture replicate were generated by featureCounts [38]; Pearson's correlation (r) was used to demonstrate the high reproducibility among biological replicates within each treatment (r = 0.97, 0.94, 0.81 for Pi-replete, Pi-deficient, and ATP-replete, respectively). DESeq2 [39] was used to determine differential expression between the Pi-replete and Pi-deficient treatments as well as between the Pi-replete and ATP-replete growth conditions. Differentially expressed genes are those with a p-value <0.05 and a fold change >2 (S2 Table).
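The replicate-reproducibility check and the downstream filters applied to the DESeq2 output both reduce to simple calculations: a Pearson correlation between replicate count vectors, and a combined p-value and fold-change threshold. A stdlib-only sketch (the count vectors and thresholds below merely illustrate the two tests; they are not data from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length gene-count vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def is_differential(p_value, fold_change, alpha=0.05, min_fc=2.0):
    """Apply the p < 0.05 and fold change > 2 criteria; fold changes
    below 1 (down-regulation) are treated symmetrically."""
    fc = max(fold_change, 1.0 / fold_change)
    return p_value < alpha and fc > min_fc
```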
Genes that were differentially expressed in at least one treatment were compared in a hierarchical cluster analysis using Cluster 3.0 [40]. Average counts were log transformed, centered about the mean, and normalized by multiplying each gene by a scale factor so that the sum of the squares of the values for each gene is 1. A centered correlation was used as a similarity metric for both the genes and treatments; a complete linkage was used as the clustering method. Java Tree View [41] was used to read and display the data.
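The per-gene preprocessing described for the cluster analysis (log transform, centering about the mean, scaling so the sum of squares equals 1) can be sketched as below; the log base and the +1 pseudocount are assumptions, as the text does not specify them:

```python
import math

def normalize_gene(counts):
    """Log-transform, mean-center, and scale one gene's expression vector
    so that the sum of squares of its values equals 1, as done before
    hierarchical clustering. The +1 pseudocount avoids log(0)."""
    logged = [math.log2(c + 1) for c in counts]
    mean = sum(logged) / len(logged)
    centered = [v - mean for v in logged]
    norm = math.sqrt(sum(v * v for v in centered))
    if norm == 0:  # gene with constant expression across treatments
        return centered
    return [v / norm for v in centered]
```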
Results
Cellular response to the transition from Pi-replete growth to ATP-replete and Pi-deficient growth conditions

The Pi-replete and ATP-replete cultures received an equal amount of P, yet the growth rate of cells grown with ATP was ~30% less than that of Pi-replete cells (Fig 1; Table 1). Cells from both treatments were harvested during the exponential phase of growth (Fig 1); at this time Pi concentrations had stayed the same or increased in the ATP-replete cultures (Table 1). The growth rate of Pi-deficient cells decreased by ~75% when compared to the Pi-replete treatment (Table 1).
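Specific growth rates from exponential-phase cell counts follow mu = ln(N2/N1)/(t2 - t1). A sketch with hypothetical abundances, chosen only to reproduce a ~75% reduction like that reported for the Pi-deficient treatment:

```python
import math

def specific_growth_rate(n1: float, n2: float, dt_days: float) -> float:
    """Specific growth rate mu (day^-1) from two exponential-phase
    cell abundances measured dt_days apart."""
    return math.log(n2 / n1) / dt_days

# Hypothetical counts over a 3-day window (cells mL^-1).
mu_replete = specific_growth_rate(1.6e5, 1.3e6, 3.0)    # ~0.70 day^-1
mu_deficient = specific_growth_rate(1.6e5, 2.7e5, 3.0)  # ~0.17 day^-1
reduction = 1 - mu_deficient / mu_replete               # ~0.75 vs Pi-replete
```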
A pronounced effect of changing the P availability was a decrease in the cellular P content of Pi-deficient and ATP-replete cells (Table 2). Cellular P content decreased by approximately 50% and 25% in Pi-deficient and ATP-replete cells, respectively, when compared to Pi-replete cells (Table 2). Pi-deficiency also resulted in increased cellular C content, while cellular N content was largely unaffected by the different treatments (Table 2). The increase in cellular C in Pi-deficient cells could have been caused by an increase in cell size. Using forward scatter, determined by flow cytometry, as a proxy for size [2,42], the Pi-deficient cells were found to be slightly larger when compared to the Pi-replete cells, although the increase in C content was 20% (Table 2). These changes to the cellular elemental composition resulted in elevated C:P and N:P ratios in the Pi-deficient and ATP-replete conditions (Table 2).
The highest APA levels were measured in the Pi-deficient cultures with rates significantly greater than both Pi-replete and ATP-replete cultures (Fig 2; p < 0.05). APA measurements were made when cell abundances of the Pi-deficient cultures deviated from the ATP-replete cultures (Fig 1). In a separate growth experiment (Fig 3A), APA was measured daily in cultures where a deviation in growth was detected earlier (Fig 3B). In that experiment, APA levels in the ATP-replete and Pi-deficient cultures remained relatively stable on days 6 and 7 (Fig 3B), demonstrating a difference in response among the treatments.
Differential expression due to Pi-deficiency and transition to DOP-replete growth

RNA-sequencing was used to characterize whole-genome expression patterns in Pi-replete, Pi-deficient, and ATP-replete M. pusilla cultures. Over 30 million sequence reads were generated for each culture, with approximately 40% mapping to the M. pusilla genome [22], detecting nearly every protein-coding gene (Table 3). DESeq2 [39] was used to determine which genes were differentially expressed (p < 0.05) in the Pi-deficient and ATP-replete treatments relative to the Pi-replete condition. The differential transcriptomic response was greatest in the Pi-deficient treatment, with 960 differentially expressed genes compared to 537 in the ATP-replete condition (Fig 4). Many differentially expressed genes were shared between the two treatments (Fig 4), indicating that ATP-replete cells responded similarly to Pi-deficient cells. A hierarchical cluster analysis [40] was performed to group the differentially expressed genes by similar expression patterns (Fig 5). The Pi-replete treatment clustered separately, indicating the ATP-replete and Pi-deficient transcriptomes were more similar to each other than to the Pi-replete transcriptome (Fig 5). Four clusters, or expression patterns, were generated. Cluster 1 contains transcripts that were repressed in Pi-replete M. pusilla cells and so were over-represented in the Pi-deficient and, to a lesser extent, ATP-replete treatments. Included in this cluster were P-stress response genes like AP (protein ID 64401), which was the most differentially expressed gene in both the Pi-deficient and ATP-replete transcriptomes (Fig 6). Pi-deficient cells also significantly upregulated transcripts encoding a sulfolipid synthase (protein ID 58169), a 5'-nucleotidase (protein ID 106294), and a phosphodiesterase (protein ID 93904); these genes were not differentially expressed in the ATP-replete treatment (Fig 6).
Also highly expressed in both Pi-deficient and ATP-replete conditions were transcripts encoding genes involved in polyphosphate accumulation (protein ID 61436, 60787) and Pi transporters (protein ID 108777, 84293; Fig 6). The transcriptomes indicated that Pi-deficient and ATP-replete M. pusilla cells were combating arsenic toxicity, as several glutathione S-transferases (protein ID 57469, 63522) and an arsenate permease (protein ID 56091) were upregulated (Fig 6). Also found in Cluster 1 were several genes that may be involved in glycolytic bypass reactions: transcripts encoding malate dehydrogenase (protein ID 75917) and malic enzyme (protein ID 97726) were upregulated in both Pi-deficient and ATP-replete cells (Fig 6). Additionally, an acid phosphatase (protein ID 85046) was differentially expressed in the Pi-deficient and ATP-replete treatments (Fig 6).
Clusters 2 and 3 contain transcripts reduced and over-represented, respectively, in cells grown with ATP. Several genes involved in chlorophyll biosynthesis, including a protoporphyrinogen IX oxidase (protein ID 60613), magnesium-protoporphyrin O-methyltransferase (protein ID 96236), coproporphyrinogen oxidase (protein ID 104790), and an uroporphyrinogen decarboxylase (protein ID 104963), were significantly repressed when compared to the Pi-replete treatment (Fig 6). Cluster 3 contains genes similar to those in Cluster 1, including transcripts encoding glutathione S-transferases (protein ID 73823, 107846; Fig 6) and a Pi transporter (protein ID 53790; Fig 6).
The transcripts in Cluster 4 were found to accumulate in the Pi-replete treatment (Fig 5). Genes involved in posttranslational modification, energy production, and carbon fixation were more abundant in Pi-replete cells (Fig 6). For example, a RuBisCO subunit (protein ID 104787) as well as a carbonic anhydrase (protein ID 96952) had accumulated in the Pi-replete M. pusilla cells. Additionally, transcripts for a nitrate/nitrite antiporter (protein ID 63387) were more abundant under Pi-replete conditions (Fig 6).
Discussion
Picoeukaryotes, though not numerically dominant, are equal to or may even exceed cyanobacteria in biomass, productivity, and export in the oligotrophic subtropical North Atlantic, where P stress is an important ecological determinant [1][2][3][11]. Despite their role in ecosystem functioning, the cellular responses and molecular underpinnings of changes in P availability and supply are not well understood in picoeukaryotes. M. pusilla is considered a model organism [43], yet it has been the target of only two other studies which have interrogated changes in gene expression [44,45]. Coupling molecular biology with model organisms provides insight into the interactions of phytoplankton with their environment [43]. Furthermore, characterizing phytoplankton physiological traits and capabilities is important for understanding how community structure may change in a changing marine environment [46]. We have chosen to characterize the response of M. pusilla to P scarcity and P source using batch culturing and transcriptomics. The P concentrations used in this study, though they do not represent what is found naturally in the oligotrophic North Atlantic, recreate the impact of low P availability by reducing growth rate while generating the biomass needed to support the desired analyses. This study is timely as it provides insight into the cellular metabolism of an ecologically important phytoplankton found in oligotrophic oceans, which are predicted to expand [47] and become increasingly stratified [48].

Micromonas elicits an extensive cellular response to Pi-deficiency

In response to Pi-deficiency, phytoplankton have been shown to reduce and reallocate their cellular P (e.g., [49]), utilize DOP (e.g., [16]), and increase P uptake [50]. M. pusilla employs all of these strategies under Pi-deficiency. As has been previously shown [23], the cellular P quota of Pi-deficient M. pusilla cells was dramatically reduced when compared to Pi-replete cells.
Phytoplankton can reduce their P content by phospholipid substitution [49]. The upregulation of a sulfolipid biosynthesis gene suggests M. pusilla decreased its cellular P quota by swapping sulfolipids for phospholipids. Sulfolipid substitution has been detected in Pi-deficient M. pusilla [24], diatom [51] and pelagophyte [52] cultures as well as naturally P-limited phytoplankton communities [49], indicating it is an important strategy to combat P stress. Differential expression of sulfolipid biosynthesis genes has also been detected in light-limited Aureococcus anophagefferens, where it was likely responding to an increase in plastid membrane surface area and thus an increase in P demand [53]. Together, these results highlight the important role of sulfolipid swapping in maintaining a flexible P pool to support phytoplankton growth during suboptimal growth conditions. Cellular P may also be conserved by inducing glycolytic bypass pathways. Under P deprivation, plants have been shown to generate pyruvate from phosphoenolpyruvate through the activity of phosphoenolpyruvate carboxylase, resulting in oxaloacetate and a Pi molecule [54]. Oxaloacetate is then converted to malate and finally pyruvate by malate dehydrogenase and malic enzyme, respectively [54]. Differential expression of phosphoenolpyruvate carboxylase was not detected; however, the accumulation of malate dehydrogenase and malic enzyme transcripts suggests phosphoenolpyruvate may be diverted through this bypass. The induction of glycolytic bypass pathways under Pi-deficiency has been seen in the diatom Thalassiosira pseudonana [51] as well as in several Aureococcus anophagefferens strains [52,53], suggesting it may be a common strategy used by eukaryotic phytoplankton to combat P stress.
Pi-deficient M. pusilla cells had high rates of APA. The induction of APA is a common strategy used widely among phytoplankton in response to P stress [55]. Concurrent with the high level of APA was the accumulation of AP transcripts, indicating Pi-deficient M. pusilla cells are primed to acquire P from extracellular DOP sources. An acid phosphatase was also upregulated in Pi-deficient cells; acid phosphatases catalyze the hydrolysis of Pi molecules under acidic conditions. Acid phosphatase activity has been shown to increase in P-limited green algae, where it may function in intracellular P recycling [56]. The acid phosphatase contains a signal peptide (SignalP 4.1; [57]), suggesting it may be secreted. Perhaps the acid phosphatase is secreted to a polyphosphate vacuole where it could function in polyphosphate degradation. Polyphosphate is a linear polymer of Pi molecules of variable length; cells can have multiple polyphosphate pools with different functions and regulation patterns [58]. Recent studies in phytoplankton reflect this complex modulation, as Pi-deficient cells have been shown to increase cellular polyphosphate [59,60] or accumulate putative polyphosphate synthesis transcripts [52,53,59,61]. Here, polyphosphate polymerase transcripts accumulated in Pi-deficient M. pusilla, suggesting cells were synthesizing polyphosphate in addition to utilizing it as a P source. Pi-deficient M. pusilla cells may be using acid phosphatase to mobilize P from luxury uptake polyphosphate pools to generate P scavenging proteins or support key metabolic pathways, like photosynthesis.
Further evidence for the utilization of organic P is the upregulation of genes encoding 5'-nucleotidase and glycerophosphoryl diester phosphodiesterase. The lack of signal peptides suggests these enzymes function in intracellular P recycling. The 5'-nucleotidase hydrolyzes Pi from nucleotides and has been shown to be induced in other eukaryotic phytoplankton under Pi-deficient conditions [51,62]. The induction of a gene encoding a glycerophosphoryl diester phosphodiesterase indicates phospholipids may be recycled and used to sustain growth under Pi-deficient conditions, as has been shown in the diatom T. pseudonana [60]. The ability to utilize phosphodiesters as a P source is not ubiquitous among eukaryotic phytoplankton [63] or cyanobacteria [64]. Interestingly, this M. pusilla strain was isolated from the South Pacific, a region well documented to have low Pi [65]; this suggests an ecological advantage may be conferred to those that produce the enzyme when Pi concentrations are at growth-limiting levels.
Several Pi transporters were strongly upregulated, suggesting Pi-deficient cells were increasing P uptake. This is a strategy routinely used among phytoplankton to combat P stress (e.g., [50]). A PHO4-containing Pi transporter (protein ID 61702) was identified in the M. pusilla genome, suggesting it could be a high-affinity transporter as it is homologous to the high-affinity transporter gene identified in the prasinophyte Tetraselmis chui [66]. However, significant differential expression was not detected, as transcript counts were either very low (Pi-deficient and ATP-replete treatments) or zero (Pi-replete). This could be indicative of a very stable protein that does not require high transcript copy numbers. If so, M. pusilla could enhance P uptake not only by producing more transporters, but also by synthesizing high-affinity ones.
Phosphate transporters are unable to discriminate between Pi and its analog, arsenic, making P-stressed cells susceptible to arsenic toxicity [67]. Detoxification strategies commonly include the reduction of arsenate to arsenite by arsenate reductase, followed by its excretion out of the cell via an arsenite pump [68]. Glutathione s-transferases have also been shown to play an important role in alleviating arsenic stress in yeast [69], with recent evidence implicating their use by Pi-deficient phytoplankton [53]. In the current study, several glutathione s-transferases were induced along with an arsenite permease, suggesting cells have efficient arsenic detoxification strategies. The induction of arsenic detoxification genes under P-limiting conditions is not widespread among phytoplankton [51,61] and could be indicative of the environment in which the phytoplankton commonly reside, such as oligotrophic oceans where Pi concentrations are chronically low.
ATP elicits a muted P-stress response in Micromonas
The ATP-replete cultures maintained relatively high growth rates of 0.6 d⁻¹ and reached similar cell abundances as the Pi-replete cultures, indicating M. pusilla was able to grow using ATP as a P source. To do this, M. pusilla elicited a cellular response that was similar to that seen in the Pi-deficient treatment. Cells grown with ATP had reduced cellular P levels and elevated APA, but not to the same extent as Pi-deficient cells. A similar trend was observed in the transcriptome responses. Fewer genes were differentially expressed in the ATP-replete treatment when compared to Pi-deficient cells, and those that were responsive changed in both the Pi-deficient and ATP-replete treatments.
The cluster analysis revealed that transcriptome expression patterns were similar between the ATP-replete and Pi-deficient treatments. ATP-replete cells induced AP, polyphosphate polymerase, P transporter and arsenic detoxification genes. The putative glycolytic bypass genes, malate dehydrogenase and malic enzyme, were also induced. Differential expression was not detected in genes that function in sulfolipid synthesis or in recycling intracellular P via 5'-nucleotidase and phosphodiesterase. Taken together, these transcriptional changes suggest M. pusilla may be sensitive to the severity of P stress. Cells can regulate gene expression to balance the P needed to support growth versus survival. If Pi concentrations are low, cells reduce and recycle cellular P in addition to inducing extracellular acquisition strategies. If DOP is present, a scaled-back response is invoked which enables cells to acquire the P necessary to support cell growth and functioning.
Unique to the ATP-replete treatment was the decrease in chlorophyll biosynthesis gene expression. Nitrogen deprivation has been shown to elicit a similar response in the diatom Phaeodactylum tricornutum [70]. This, coupled with the accumulation of a nitrate/nitrite antiporter in Pi-replete cultures, could indicate cells were nitrogen deficient; however, the cellular N levels were similar among the treatments. Perhaps the ATP-replete cells reduced chlorophyll biosynthesis as a means to divert resources to produce proteins that function in DOP utilization. Differential expression of the chlorophyll biosynthesis genes was not detected with Pi-deficiency. Here, M. pusilla may be conserving its already reduced photosynthetic and energy production capabilities [71] to generate the resources needed for producing P stress-response proteins. This hypothesis aligns with the notion that cells have a varied response that is sensitive to the severity of P stress.
Ecological implications
The ecological importance of picoeukaryotes in P-deplete oligotrophic oceans has only recently been recognized. We have provided insight into the strategies utilized by the picoeukaryote M. pusilla to persist in these suboptimal growth conditions. M. pusilla exhibited an extensive response to Pi-deficiency that included efforts to acquire extracellular P, recycle intracellular P, and reduce cellular P demand. This whole-cell metabolic reconfiguration may be necessary to maintain a foothold in oligotrophic oceans dominated by cyanobacteria, which are extremely efficient at acquiring P and growing at low Pi concentrations [16]. M. pusilla is able to utilize alternative P sources like ATP to support growth, which is also essential for persisting in oligotrophic oceans [11]. Future oligotrophic oceans, which are predicted to become increasingly stratified [48] and are slowly acidifying [72], could portend an enhanced role for picoeukaryotes. The abundance of Micromonas-like cells has been shown to increase in response to ocean acidification [73], while the cyanobacteria Prochlorococcus and Synechococcus are largely unaffected by elevated CO2 [74]. Additionally, elevated CO2 coupled with P-limitation promoted M. pusilla growth [23]. These results, combined with the strong, inducible response to P deficiency and the ability to grow efficiently using DOP shown in this study, support the hypothesis that future oceans could favor picoeukaryote growth [23]. Given their relatively larger cell size compared to single-cell cyanobacteria and their contribution to primary production [4] and export [5], this could have pronounced effects on the biogeochemistry of the oligotrophic oceans.
HLA II class alleles in juvenile idiopathic arthritis patients with and without temporomandibular joint arthritis
Background Temporomandibular joint (TMJ) arthritis is seen very often (38–87 %) in children with juvenile idiopathic arthritis (JIA). With contrast-enhanced magnetic resonance imaging (MRI) we can detect more cases of TMJ arthritis than ever before. Previous studies show that HLA II class alleles may have protective or risk significance in JIA subtypes. Our objective is to identify HLA II class alleles of risk and protection in JIA patients with TMJ arthritis. Methods During the period from 2010 to 2015, MRI of the TMJ was performed in 85 JIA patients, who were genotyped for HLA-DRB1, DQB1 and DQA1 using RT-PCR with sequence-specific primers. As a control group, data of 100 individuals were taken from the genetic bank of the RSU Joint Laboratory of Clinical Immunology and Immunogenetics. Associations of DRB1, DQB1 and DQA1 alleles in patients were examined individually using the χ2 test. P-values (<0.05) and odds ratios were calculated using EPI INFO 6.0 software. Results Of the 85 JIA patients, with a mean age of 13.7 ± 3.0 years (range 6.9–17.9 years), 59 (69 %) were girls and 26 (31 %) were boys. The mean duration of the disease was 3.07 ± 2.35 years (range 0.2–11.0 years). JIA subtypes were as follows: seronegative polyarthritis 51 (60 %), seropositive polyarthritis 6 (7 %), extended oligoarthritis 7 (8 %), persistent oligoarthritis 2 (2 %), arthritis with enthesitis 14 (17 %), undifferentiated 3 (4 %) and systemic arthritis 2 (2 %). Two groups were separated after the TMJ MRI exam: the first with at least two signs of active inflammation and/or any structural damage (n = 62); the second with no pathologic signs or with slight contrast enhancement (n = 23). We discovered risk alleles present in all JIA patient groups (MRI-positive and MRI-negative) versus controls, such as DRB1*07:01, DQB1*03:03 and DQB1*05:01. Some protective alleles, such as DRB1*18:01 and DQB1*06:02–8, were also found in the overall JIA group.
Alleles DRB1*12:01, DQB1*03:01 and DQA1*05:01 were found to be protective for TMJ arthritis. Conclusion In our study there were no convincing risk alleles, but there are alleles that are probably protective for TMJ arthritis, such as DRB1*12:01, DQB1*03:01 and DQA1*05:01.
Background
Juvenile idiopathic arthritis (JIA) is the most common autoimmune childhood disease, with chronic arthritis as its main clinical sign. Arthritis in JIA can affect any joint, but the temporomandibular joint (TMJ) is particularly susceptible to damage. TMJ involvement is seen very often (38–87 %) and can lead to compromised craniomandibular function and dentofacial aesthetics and morphology, such as micrognathia, retrognathia, pathologic occlusion and reduced mouth opening [1-4]. Contrast-enhanced magnetic resonance imaging (MRI), as the gold standard for TMJ arthritis diagnostics, has changed the perception of the prevalence of TMJ arthritis in JIA patients. There are data that it can be asymptomatic in up to 71 % of cases if evaluated with MRI [5,6]; therefore it is very important to find different risk factors to identify JIA patients who need early and regular evaluation of the TMJ with MRI.
It is known that TMJ arthritis can present with clinical symptoms such as pain during jaw movement, difficulties with chewing solid food, asymmetry at maximal mouth opening, crepitation, clicking and others; however, these symptoms have high specificity but low sensitivity [1,7]. Patients very often do not complain about TMJ problems or do not connect these complaints to JIA. Even with careful rheumatologic and orthodontic evaluation, many cases of TMJ arthritis can be missed [6]. On the other hand, using only clinical symptoms it is possible to overdiagnose TMJ arthritis [8].
To find those JIA patients who need early and regular TMJ evaluation by MRI, we have to take into account not only subjective complaints and symptoms but also the type and course of the disease, its activity and laboratory measurements. Several risk factors for TMJ arthritis have been described, such as a polyarticular course of the disease, arthritis in the upper extremities, younger age and higher ESR at the beginning of the disease, while HLA-B27 in previous studies was thought to be protective [1]. Other studies show that TMJ arthritis prevalence in different JIA groups is not significantly different. It is known that there are cases when TMJ arthritis can develop with no other joint involvement, and no obvious laboratory or clinical risk factors can be found in these cases [3].
It is known that the course of disease, including the involvement of certain joints in JIA, can have a genetic predisposition. HLA Class I and Class II genes are described as genetic risk factors for JIA, along with the PTPN22 gene and the IL2RA/CD25 gene. The combination of both general autoimmune genes and JIA-specific genes contributes to the different disease phenotypes [9]. There are reports that different HLA II class alleles have protective or risk significance in JIA subtypes. JIA is a very heterogeneous group of diseases, and besides the 7 subtypes of JIA there are still significant variations, especially within the polyarticular disease. HLA typing data may provide further information for JIA subtypes and in the future could be used as a diagnostic criterion [10]. We speculate that TMJ arthritis could be more characteristic of patients with definite HLA II class alleles.
As stated previously, it is very important to use different risk factors, from gender, age, disease type, laboratory parameters and clinical subjective and objective signs to genetic factors, to detect JIA patients who need early and regular TMJ evaluation with MRI. It can change therapeutic tactics, local (intraarticular steroids) or systemic (addition or change of biological medication), which can prevent further damage of the TMJ. The aim of our study was to identify possible HLA II class alleles of risk and protection in JIA patients with TMJ involvement.
Methods
We performed the retrospective study with 85 JIA patients treated at Children's University hospital who had a MRI exam for TMJ during the period from 2010 to 2015. All of the patients were diagnosed with different JIA types using the International League of Associations for Rheumatology (ILAR) criteria. Almost all of them had a polyarticular disease course according to the American College of Rheumatology (ACR) JIA treatment groups. Approval from Central Medical Ethics Committee of Latvia was obtained and patients' parents and patients had signed the written consent form for participating in the study.
Most of the patients were evaluated with MRI because of subjective complaints and/or objective findings from the TMJ. 11 patients were without subjective complaints and/or objective signs of TMJ arthritis. TMJ MRI with contrast enhancement was done with standard T1 and T2 FS (fat saturation) sequences in the coronal plane; T1 and T2 sagittally; and, after contrast enhancement (0.2 mmol/kg), T1 sagittally and T1 axially (8–10 min after injection).
Depending on MRI findings, patients were divided into two groups: the first group had active signs of synovitis [contrast enhancement (excluding light and symmetric contrast enhancement), effusion or pannus, bone oedema] and/or joint structure damage such as flattening of the mandibular head, flattening of the fossa, osteophytes and erosions. The second group did not have any active or chronic damage findings of TMJ arthritis, or had light and symmetric contrast enhancement, which can be considered a normal finding [11]. Characteristics of these two groups were analysed using the STATA program. Pearson's correlation coefficient and Fisher's exact test were used (P < 0.05). In order to obtain 80 % power of the study and to detect a protective OR of 0.2 with p < 0.05, assuming a prevalence among controls of 70 % and among cases of 30 %, it was calculated that at least 36 cases and 18 controls (at a case-control ratio of 0.5) had to be included.
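A power calculation of this kind can be approximated with the standard normal-approximation formula for comparing two proportions with unequal allocation. The sketch below is a minimal illustration, not the software actually used in the study; the hard-coded z-values correspond to two-sided α = 0.05 and 80 % power, and small differences from the quoted 36 cases / 18 controls can arise from rounding or continuity corrections in the original calculation.

```python
import math

def two_proportion_n(p_case, p_ctrl, ratio):
    """Approximate sample sizes for detecting a difference between two
    proportions (two-sided alpha = 0.05, power = 0.80), with unequal
    allocation: ratio = n_controls / n_cases."""
    z_a = 1.959964  # normal quantile for two-sided alpha = 0.05
    z_b = 0.841621  # normal quantile for power = 0.80
    p_bar = (p_case + ratio * p_ctrl) / (1 + ratio)
    q_bar = 1 - p_bar
    num = (z_a * math.sqrt((1 + 1 / ratio) * p_bar * q_bar)
           + z_b * math.sqrt(p_case * (1 - p_case)
                             + p_ctrl * (1 - p_ctrl) / ratio)) ** 2
    n_cases = num / (p_case - p_ctrl) ** 2
    return math.ceil(n_cases), math.ceil(ratio * n_cases)

# 70 % allele prevalence in controls vs 30 % in cases, control:case ratio 0.5
n_cases, n_controls = two_proportion_n(p_case=0.30, p_ctrl=0.70, ratio=0.5)
```

With these inputs the formula yields approximately 35 cases and 18 controls, close to the values quoted above.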
Associations of DRB1, DQB1 and DQA1 alleles among the patient group were examined individually using the χ2 test (P-value <0.05). Odds ratios (OR) were calculated using EPI INFO software version 6 with 95 % confidence intervals and Fisher correction for small numbers [14].
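As a minimal sketch of the 2×2 allele-association arithmetic described here (χ2 statistic, and an odds ratio with a 95 % confidence interval), the following uses a Woolf logit interval and a Haldane-Anscombe zero-cell correction as a stand-in for EPI INFO's Fisher correction; the counts are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio with Woolf 95% CI for a 2x2 table:
    a = allele+ cases, b = allele- cases,
    c = allele+ controls, d = allele- controls.
    Adds 0.5 to each cell (Haldane-Anscombe) if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)."""
    n = a + b + c + d
    return n * (a*d - b*c)**2 / ((a+b) * (c+d) * (a+c) * (b+d))

# Hypothetical allele carrier counts (illustrative only)
or_, lo, hi = odds_ratio_ci(30, 55, 10, 90)
stat = chi2_2x2(30, 55, 10, 90)
```

An OR above 1 with a CI excluding 1 would flag a risk allele, and an OR below 1 with a CI excluding 1 a protective one, matching the interpretation used in the Results.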
After the MRI exam, we divided JIA patients into two groups: one group with TMJ arthritis findings (MRI-positive group) and the other without (MRI-negative group). The demographics of these two groups can be seen in Table 1 (active joint count: joints with non-bony swelling or limitation of motion with either pain on motion or tenderness to palpation, and also those where signs of arthritis were detected using ultrasound or MRI). There were statistically significantly more girls in the TMJ arthritis group (p = 0.04). These patients were also older, 14.2 (±2.6) years (p = 0.01). There were no statistically significant differences between the two groups in parameters such as disease duration, years from the diagnosis, active joint count, or laboratory parameters such as ESR, ANA, RF and HLA-B27 antigen. While not statistically significant, all seropositive patients were in the TMJ arthritis-positive group. The difference in CRP was statistically significant: higher CRP was found in the TMJ arthritis group (p = 0.03).
Our results revealed general JIA risk alleles in the JIA patient groups versus controls, such as DRB1*07:01, which was found in the total JIA group and the MRI-positive group. Risk alleles for JIA such as DQB1*03:03 and DQB1*05:01 were detected in the total JIA group and the MRI-negative group. Some protective alleles were also found: DRB1*18:01 (in the total JIA group and MRI-positive group) and DQB1*06:02–8 (in the total JIA group and MRI-negative group).
Discussion
In the study of United Kingdom children [15], DRB1*07:01 was associated with a decreased risk for persistent oligoarthritis, RF-positive and RF-negative polyarthritis and enthesitis-related arthritis. Similar findings are described in Mexicans [16]. Surprisingly, our results show DRB1*07:01 as a risk allele in the total JIA patient group compared to the control group and also in those with TMJ arthritis (patients with a polyarticular disease course and different JIA subtypes) (OR = 7.28, p < 0.000). In the study of Hollenbach et al. [10], 802 JIA patients with the two most common JIA subtypes, oligoarthritis (both persistent and extended) and RF-negative polyarthritis, were analysed. They found several risk haplotypes regardless of clinical subtype and age of onset, including the DRB1-DQA1-DQB1 haplotypes DRB1*08:01-DQA1*04:01-DQB1*04:02 and DRB1*11:03/4-DQA1*05:01-DQB1*03:01. They also determined that the haplotype DRB1*15:01-DQA1*01:02-DQB1*06:02 was protective. In our study we also found DQB1*06:02-8 to be a protective allele regardless of TMJ involvement. Similar results are shown in a study of JIA patients in Colombian Mestizos, where allele DQB1*06:02 was found to be protective [17]. In our study allele DQB1*03:01 appears to have a protective role for TMJ involvement, but in Hollenbach's study it was protective for the early-onset persistent oligoarticular subtype.

Table 2 Comparison of HLA alleles between TMJ arthritis positive and TMJ negative patient groups and control groups

There are data in Hollenbach's study suggesting that the presence of two predisposing DRB1 alleles is associated with a significantly greater predisposition to disease than a single one. We are also planning to analyse similar data in our study. As in our results, DRB1*03 was found less frequently in JIA patients than in the control group in Hungarian patients [18].
A protective role of the alleles DRB1*12:01, DQB1*03:01 and DQA1*05:01 in the development of TMJ arthritis may be associated with a less aggressive disease course (e.g., these patients had a lower CRP) and consequently less radiological damage. This trend has to be analysed in the future by evaluating patients who have these alleles more carefully and over a long-term course, regardless of TMJ involvement.
When analysing HLA II class alleles, we must take into account differences of nationality and genetic background, since most of our patients are of Latvian or Russian descent. There should also be more evaluation and discussion of the associations between ILAR groups and TMJ involvement; in this study we divided the groups into MRI-confirmed and MRI-non-confirmed TMJ arthritis and could not take into account how the disease could change over time, with TMJ arthritis possibly developing in the future. Most of our patients were already adolescents with a mean disease duration of 3 years, and it might be that the disease course was already established.
We suspect that TMJ arthritis can be used as a prognostic feature for disease course. Identifying risk and protective HLA II class haplotypes could help to predict TMJ arthritis development in the future, which in turn could help to treat and prevent TMJ joint damage in high-risk patients. We will further evaluate other characteristics of the patients with TMJ involvement that could help to detect JIA patients who need early and regular evaluation with MRI. It would be important to analyse HLA II class alleles in more homogeneous groups of JIA to see if the same results regarding protective alleles are found.
Conclusions
In our study there were no convincing risk alleles, but there are alleles with a probable protective role for TMJ arthritis, such as DRB1*12:01, DQB1*03:01 and DQA1*05:01. Further analysis of different haplotypes may help us detect HLA II class alleles that can be used in the early detection of high risk for TMJ arthritis.
Role of pyroptosis-related cytokines in the prediction of lung cancer
Objectives Lung cancer is the leading cause of cancer-related mortality. Effective biomarkers for predicting the occurrence of lung cancer are urgently needed. Our previous studies indicated that the pyroptosis-related cytokines TNF-α, IFN-γ, MIP-1α, MIP-1β, MIP-2 and IP-10 are important in influencing the efficacy of chemotherapy drugs in lung cancer tissues. But the role of pyroptosis-related cytokines in predicting the occurrence of lung cancer is still unknown. Methods Blood samples were collected from 258 lung cancer patients at different stages and 80 healthy volunteers. Serum levels of pyroptosis-related cytokines including TNF-α, IFN-γ, MIP-1α, MIP-1β, MIP-2 and IP-10 were measured by Cytometric Bead Array (CBA). ROC curve analysis was performed to evaluate the cut-off value and diagnostic value for the prediction and diagnosis of lung cancer. Results Compared with the control group, the levels of IP-10, MIP-1α, MIP-1β, MIP-2 and TNF-α were significantly higher in lung cancer patients (45.5 (37.1–56.7): 57.2 (43.0–76.5), 34.4 (21.8–75.2): 115.4 (96.6–191.2), 49.3 (25.6–78.7): 160.5 (124.9–218.6), 22.6 (17.8–31.2): 77.9 (50.1–186.5), 3.80 (2.3–6.2): 10.3 (5.7–16.6)), but the level of IFN-γ was decreased in the patients (12.38 (9.1–27.8): 5.9 (3.5–9.7)). All the above cytokines were significantly associated with the diagnosis of lung cancer, and the AUC values of IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2, and TNF-α were 0.800, 0.656, 0.905, 0.921, 0.914, and 0.824, respectively. The AUC rose to 0.986 after combining the above factors, with sensitivity and specificity of up to 96.7 % and 93.7 %, respectively. Additionally, TNF-α (r = 0.400, P < 0.01), MIP-2 (r = 0.343, P < 0.01), MIP-1α (r = 0.551, P < 0.01) and MIP-1β (r = 0.403, p < 0.01) were positively associated with the occurrence of lung cancer, while IFN-γ (r = −0.483, p < 0.01) was negatively associated with the occurrence of lung cancer.
Regarding the potential for early diagnosis of lung cancer, TNF-α (AUC = 0.577), MIP-1α (AUC = 0.804) and MIP-1β (AUC = 0.791) can predict the early stage of lung cancer, and the combination of the above three cytokines has a better predictive efficiency (AUC = 0.854). Conclusion Our study establishes a link between the levels of IP-10, MIP-1α, MIP-1β, MIP-2, TNF-α and IFN-γ and the diagnosis of lung cancer. Besides, we observed a synergistic effect of these pyroptosis-related cytokines in diagnosing lung cancer patients, suggesting their potential as biomarkers for lung cancer diagnosis. Moreover, the combination of TNF-α, MIP-1α and MIP-1β is also a potential predictor for the early diagnosis of lung cancer.
Introduction
Lung cancer, including non-small cell lung cancer (NSCLC) and small-cell lung cancer (SCLC), is a global health concern and remains the leading cause of cancer-related mortality [1]. In recent years, advances in diagnostic methods, including computed tomography (CT) and various plasma tumor biomarkers, such as carcinoembryonic antigen (CEA), squamous cell carcinoma antigen (SCCA), cytokeratin 19 fragment (CYFRA21-1) and neuron-specific enolase (NSE), have greatly contributed to the diagnosis and progression assessment of lung carcinoma [2]. Due to the improvement of diagnostic methods and standard treatment, the mortality of lung cancer has gradually decreased in recent years [3,4]. However, many lung cancer patients are still being diagnosed at an advanced stage, leading to a poor prognosis. The overall 5-year survival rate for lung cancer patients is still only 19.7 % in China and 24 % in the United States [4,5]. One factor contributing to this result is that many patients fear radiation damage from X-ray or CT examinations, leading them to delay an X-ray or CT scan until they experience discomfort and decide to seek medical attention. In addition, the examination of traditional lung cancer biomarkers is typically conducted after the lung lesions are already visible. Hence, it is imperative to develop a more convenient and acceptable diagnostic method for lung cancer patients.
In our previous studies, we demonstrated that the pyroptosis-related cytokines TNF-α, IFN-γ, IP-10, MIP-1α, MIP-1β and MIP-2 can induce T cell infiltration into lung cancer tissues and ultimately promote lung cancer tissue regression [6]. When normal lung cells transform into lung cancer cells, the cancerous cells are immediately identified by immune cells [7]. TNF-α, IFN-γ, IP-10, MIP-1α, MIP-1β and MIP-2, as indispensable immune factors, are necessary for regulating anti-cancer immunity. IP-10, MIP-1α, MIP-1β and MIP-2, as chemokines, induce T cell infiltration into tumor tissue [8,9]. Infiltrated T cells release TNF-α and IFN-γ to kill cancer cells [10,11]. However, evidence suggests that these anti-tumor factors are unable to rapidly eliminate tumor cells within the body, which means the immune system needs to continuously release these factors. Hence, an abnormal concentration of these factors may indicate the occurrence of lung carcinoma. In addition, elevated levels of TNF-α or IFN-γ are linked with the development of lung cancer. Increased MIP-1α, MIP-1β or MIP-2 expression has also been associated with poorer prognosis and decreased survival rate in lung adenocarcinoma patients. Nonetheless, whether the combined examination of these factors is beneficial for detecting lung cancer still needs to be clarified.
Furthermore, the detection of these factors can be easily completed using patient blood samples, which provides a convenient, inexpensive, and non-invasive approach. Therefore, in this study, we measured the concentrations of TNF-α, IFN-γ, IP-10, MIP-1α, MIP-1β and MIP-2 in blood samples collected from both lung cancer patients and healthy volunteers, to elucidate the potential association between these pyroptosis-related cytokines and lung carcinoma.
Patient samples
Plasma samples were collected from 258 primary lung cancer patients and 80 healthy volunteers at the Third Xiangya Hospital of Central South University (Changsha, China) from May 2021 to July 2023. The study was approved by the Ethics Committees of the Third Xiangya Hospital, Central South University (No. 23135). The blood samples used in this study are the remaining parts from previous studies and were frozen in the Biobank of the Third Xiangya Hospital. Inclusion criteria: all patients in the study were newly diagnosed with lung cancer, confirmed by pathology. In addition, none of the lung cancer patients received chemotherapy, radiotherapy, or surgery before treatment and blood sampling. Exclusion criteria: patients who had undergone anticancer treatment, such as radiotherapy and chemotherapy; patients with severe infection; patients with severe hepatic and renal dysfunction; patients with other malignant tumors; patients with acute myocardial infarction, unstable angina, uncontrollable hypertension, or symptomatic sustained arrhythmia within 6 months; patients with communication and cognitive impairment. Detailed clinical characteristics of patients and healthy volunteers are shown in Table 1.
Cytometric Bead Array
The levels of IFN-γ, TNF-α, MIP-1α, MIP-1β, MIP-2, and IP-10 in patient plasma were measured using a Multiplex Luminex assay (BD Biosciences). Reagents for the quantitative ProcartaPlex Luminex immunoassay were sourced from Affymetrix eBioscience. The Cytometric Bead Array (R&D) was performed according to the manufacturer's instructions, and results were read on the Bio-Plex 200 instrument.
Quantification and statistical analysis
Statistical analysis was performed using SPSS version 18.0 (IBM Inc., Chicago, IL, USA) and GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA). Normally distributed data are expressed as mean ± standard deviation (SD), and skewed data as median (interquartile range). The independent-samples t-test was used for comparisons of variables between two groups.
One-way analysis of variance was used for comparisons among multiple groups. Count data were compared using the χ² test. Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic efficacy of the biomarkers for lung cancer, and the area under the ROC curve (AUC) was used to evaluate their predictive accuracy in lung cancer diagnosis. Pearson correlation coefficient analysis was used for correlation analysis. All P-values were two-sided, and P < 0.05 was considered statistically significant.
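The ROC metrics reported throughout this paper (AUC, sensitivity, specificity, Youden index) can be sketched in pure Python. The block below is only an illustrative re-implementation with made-up marker values, not the study's code or data: AUC is computed via its Mann-Whitney interpretation, and the cut-off is chosen by maximising Youden's J.

```python
def auc_mann_whitney(pos, neg):
    """AUC equals the probability that a randomly chosen case scores higher
    than a randomly chosen control (ties count as half a win)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_youden(pos, neg):
    """Sweep every observed value as a cut-off ("marker >= cut-off" = positive)
    and return the one maximising Youden's J = sensitivity + specificity - 1."""
    best = (None, -1.0, 0.0, 0.0)  # (cutoff, J, sensitivity, specificity)
    for cut in sorted(set(pos) | set(neg)):
        sens = sum(p >= cut for p in pos) / len(pos)
        spec = sum(n < cut for n in neg) / len(neg)
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j, sens, spec)
    return best

# Hypothetical plasma levels (not the study's data)
patients = [12.4, 9.1, 27.8, 15.0, 8.0, 22.3]
controls = [5.9, 3.5, 9.7, 4.2, 6.1]

auc = auc_mann_whitney(patients, controls)           # about 0.933
cutoff, j, sens, spec = best_youden(patients, controls)
```

A tool such as SPSS or Prism performs the same sweep over thresholds when it reports a ROC curve; the Youden index simply marks the point on that curve furthest above the diagonal.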
Baseline characteristics and clinical data of the two groups
Characteristics and clinical data of the lung cancer patients and healthy volunteers are shown in Table 1. The study included 258 primary lung cancer patients and 80 control subjects, with mean ages of 61.5 ± 10.81 and 60.9 ± 7.68 years, respectively. Baseline characteristics, including gender, age, body mass index (BMI), smoking status and alcohol drinking, showed no statistically significant differences between the two groups. Compared with healthy volunteers, the levels of IP-10, MIP-1α, MIP-1β, MIP-2, and TNF-α were significantly higher in lung cancer patients (e.g., 12.38 (9.1-27.8) vs. 5.9 (3.5-9.7); detailed values in Table 1). Since smoking and alcohol drinking may affect the level of pyroptosis-related cytokines in lung cancer patients, we further divided the lung cancer patients into smoking and non-smoking groups, or drinking and non-drinking groups, and compared the levels of TNF-α, IFN-γ, IP-10, MIP-1α, MIP-1β and MIP-2 between groups.
The results showed only slight differences in the expression levels of these factors between the smoking and non-smoking groups, and between the drinking and non-drinking groups (Table 2), suggesting that smoking and drinking are minor influences on the expression of pyroptosis-related cytokines.
The diagnostic value of biomarkers in the lung cancer
Table 3 shows the cut-off levels, area under the ROC curve (AUC) values, sensitivity, specificity, and Youden index of the inflammatory cytokines in lung cancer diagnosis. ROC curve analysis showed that IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2, and TNF-α each had significant predictive efficiency for the diagnosis of lung cancer, with AUC values of 0.800, 0.656, 0.905, 0.921, 0.914, and 0.824, respectively (Table 3 and Fig. 1). To investigate whether a combination of these cytokines is more strongly associated with cancer than any individual cytokine alone, we also evaluated a combined inflammatory cytokine score; the AUC increased to 0.986 when the factors were combined. The diagnostic value of these biomarkers is further reflected in their high sensitivity and specificity: IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2 and TNF-α had sensitivities of 64.0 %, 58.1 %, 89.9 %, 97.7 %, 88.0 % and 70.5 %, and specificities of 85.0 %, 72.5 %, 83.8 %, 82.5 %, 82.5 % and 77.5 %, respectively, while their combination exhibited a higher diagnostic efficiency (AUC = 0.986), with sensitivity and specificity of 96.9 % and 93.7 % (Table 3). Because CEA is an important indicator in the current clinical diagnosis of lung cancer, we also examined the correlation between CEA and the occurrence of lung cancer in the same samples. Although the level of CEA was increased in some lung cancer patients, its AUC was only 0.778 (Table 1, Table 3 and Fig. 2), markedly lower than that of the combined pyroptosis-related cytokines. These results suggest that combined detection of IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2 and TNF-α may provide better accuracy, sensitivity and specificity in the diagnosis of lung cancer.
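Why combining markers can outperform each one alone is easy to see in a toy example: two markers that individually misclassify different subjects can separate the groups perfectly when summed into one score. The numbers below are invented, and the unweighted sum is only a stand-in for whatever combination rule (e.g., a fitted logistic score) underlies the reported combined AUC of 0.986.

```python
def auc(pos, neg):
    # Probability a random case outscores a random control (ties = 0.5)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical (marker_a, marker_b) pairs for patients and controls
patients = [(3, 1), (1, 3), (3, 2), (2, 3)]
controls = [(2, 1), (1, 2)]

auc_a = auc([a for a, _ in patients], [a for a, _ in controls])
auc_b = auc([b for _, b in patients], [b for _, b in controls])
auc_combined = auc([a + b for a, b in patients],
                   [a + b for a, b in controls])
# auc_a and auc_b are each 0.75; the combined score reaches 1.0
```

Each marker alone is mediocre because some patients are low on marker A and others low on marker B; the sum captures the subjects missed by either one, which is the same intuition behind the combined cytokine score.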
Correlation between the biomarkers and clinical stage of lung cancer
To further explore the correlation between biomarker levels and the clinical stage of lung cancer (TNM stages 0, I, II, III and IV), we performed Pearson correlation analysis. The results indicated that TNF-α (r = 0.400, P < 0.01), MIP-2 (r = 0.343, P < 0.01), MIP-1α (r = 0.551, P < 0.01) and MIP-1β (r = 0.403, P < 0.01) were positively associated with the occurrence of lung cancer, whereas IFN-γ (r = −0.483, P < 0.01) was negatively associated with it (Table 4). These results indicate that these cytokines are associated with the progression of lung cancer.
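The Pearson coefficient used here (TNM stage coded ordinally against cytokine level) reduces to covariance over the product of standard deviations; a minimal pure-Python sketch with hypothetical data points, not the study's values:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the product
    of their standard deviations (the 1/n factors cancel, so sums suffice)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pairs: TNM stage coded 0-4 vs. plasma cytokine level
stage = [0, 1, 1, 2, 3, 4, 4]
level = [4.0, 5.1, 4.8, 6.0, 7.2, 8.9, 8.1]

r = pearson_r(stage, level)  # strongly positive: level rises with stage
```

A negative r, as reported for IFN-γ, would correspond to levels falling as stage increases.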
The early diagnostic value of biomarkers in the lung cancer
The early diagnosis of lung cancer is an important clinical problem. To further evaluate the early-diagnostic value of the six biomarkers, we divided the lung cancer patients into early-stage (0, I and II) and late-stage (III and IV) groups and performed ROC curve analysis (Table 5 and Fig. 3). The AUC value of IFN-γ was 0.533 (P = 0.374), while those of TNF-α, MIP-1α and MIP-1β were 0.577, 0.804 and 0.791 (P < 0.05), respectively; when these three factors were combined, the AUC rose to 0.854 (P < 0.0001), suggesting high diagnostic accuracy. The sensitivities of TNF-α, MIP-1α, MIP-1β and the combination were 75.2 %, 65.4 %, 84.3 % and 82.4 %, with specificities of 46.7 %, 86.7 %, 61.9 % and 57.6 %, respectively. These results suggest that the combination of MIP-1α, MIP-1β and TNF-α can also effectively predict the occurrence of early-stage lung cancer.
Table 4
The correlation between the biomarkers and the occurrence of lung cancer.

In summary, IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2 and TNF-α showed predictive efficiency for lung cancer diagnosis, and their combination showed better accuracy, sensitivity and specificity. Pearson correlation analysis also revealed significant associations between these cytokines and the occurrence of lung cancer. Additionally, ROC curve analysis indicated that the combination of MIP-1α, MIP-1β, and TNF-α is a good predictor of early-stage lung cancer.
In our previous studies, we found that adaptive immunity against lung carcinoma can be activated by pyroptotic lung cancer cells, leading to significant increases in IFN-γ, IP-10, MIP-1α, MIP-1β, MIP-2, and TNF-α [6]. However, this phenotype cannot fully explain the abnormal elevation of these chemokines and cytokines across all stages of lung cancer. Both lung cancer cells and immune cells are known to produce MIP-1α, MIP-1β and MIP-2 [12,13]. When lung tissue cells transform into lung cancer cells, immune cells release these chemokines to facilitate infiltration into the tumor tissue, and release TNF-α and IFN-γ to eliminate the cancer cells [14]. Since immune cells continuously recognize and eliminate tumor cells, this process may play a pivotal role in maintaining the sustained high plasma levels of TNF-α, IFN-γ, MIP-1α, MIP-1β, and MIP-2.
Furthermore, these chemokines and cytokines have long been implicated in the development and progression of cancer. For instance, TNF-α, despite its anti-tumor properties, has been implicated in the development of various cancers, including lung, gastric, pancreatic, and liver cancer [15-18]. Consistent with our results, elevated levels of TNF-α have been linked to an increased risk of lung cancer. IFN-γ, secreted by T cells and natural killer cells, exhibits an anti-tumor role; however, when secreted by tumor-associated macrophages, it can promote the growth of cancer cells [19,20]. Previous studies indicate that high levels of IFN-γ are associated with poorer prognosis and worse clinical outcomes in lung cancer patients [21]. In our study, TNF-α alone lacked sufficient sensitivity (70.5 %) and specificity (77.5 %) as a diagnostic biomarker for lung carcinoma, while IFN-γ showed better specificity (85.0 %) but poor sensitivity (64.0 %). Nevertheless, both cytokines were correlated with the occurrence of lung cancer: TNF-α positively and IFN-γ negatively. These two factors alone may not be enough to accurately determine the occurrence of early cancer, but combining them with other indicators might provide a more reliable diagnosis.
MIP-1α, MIP-1β and MIP-2, as important chemokines, play pivotal roles in attracting immune cells to tumor sites [22,23]. However, other studies have found that higher expression of MIP-1α, MIP-1β or MIP-2 is associated with poorer prognosis and decreased survival in lung adenocarcinoma patients [24-26]. High levels of MIP-1α have also been linked to the promotion of cancer cell growth and survival by stimulating tumor neovascularization [27,28]. Increased MIP-1β not only contributes to tumor angiogenesis and the therapeutic response during radiotherapy, but also takes part in the metastasis of lung cancer [29-31]. Although MIP-2 has not been extensively studied in relation to lung carcinoma, its higher expression has been associated with higher lung cancer stages [32]. In our study, MIP-1α, MIP-1β and MIP-2 exhibited good sensitivity (89.9 %, 97.7 % and 88.0 %) and specificity (83.8 %, 82.5 % and 82.5 %) for lung cancer diagnosis; when combined with TNF-α and IFN-γ, these five pyroptosis-related cytokines together exhibited higher sensitivity (96.9 %) and specificity (93.7 %) for lung cancer than the individual markers. However, this study has some limitations. First, the sample size should be expanded to further validate the value of this test. Second, the increase in pyroptosis-related cytokines may not be exclusive to lung cancer, so their diagnostic role in other tumor types still needs further study. Moreover, developing a formula to predict the association between the levels of pyroptosis-related factors and the occurrence of lung cancer would be desirable in the future. Nevertheless, our findings hold potential clinical and translational implications. First, this study demonstrated that a combination of five factors provides excellent accuracy for detecting lung cancer, higher than the traditional indicator CEA. Second, the
levels of pyroptosis-related cytokines can be measured conveniently, inexpensively, and non-invasively by CBA or ELISA assays of patient blood samples, making the approach suitable for rapid lung cancer diagnosis. Integrating the examination of these pyroptosis-related factors into routine blood tests is likely to be well received by most patients. Additionally, these markers could help to identify patients who may benefit from pro-pyroptosis treatment or T cell immunotherapy. Furthermore, this detection may enable an earlier diagnosis of lung cancer. Overall, this examination holds significant promise for enhancing the detection of lung carcinoma.
Fig. 2. ROC curve analysis of CEA in the diagnosis of lung cancer.
Table 1
Characteristics and clinical data of lung cancer patients and healthy volunteers in the study.
Table 2
The level of cytokines in lung cancer patients with different habit.
Table 3
The diagnostic efficacy of biomarkers in lung cancer patients and healthy volunteers.
Notes: Normally distributed data were expressed as mean ± standard deviation (SD), and skewed data were presented as median (interquartile range). *P < 0.05. - = Not applicable.
Z. Peng et al.
Table 5
The early diagnostic efficacy of biomarkers in lung cancer patients.
"year": 2024,
"sha1": "9a2432afecbc851bbe0230884861cd31fc21b068",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2405844024074309/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e87900dfaa52b154ab4822de3d902e9b0cc3b376",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Kaitiakitanga: A transformation of supervision
INTRODUCTION: This article explores Māori social work supervision in Aotearoa New Zealand from cultural, iwi, hapū and whānau perspectives. It describes an emerging model of kaitiakitanga (supervision) entitled "He Maunga, He Tangata, He Tapu, He Kahu." APPROACH: It is based on the author's experience and tribal relationships, and proposes a model reinterpreting the supervisory relationship by first re-examining the meanings of these relationships from a Māori perspective. It explains the rationale of the model in order to clarify its origins, principles, purpose, obligations and responsibilities in the field of kaitiakitanga (supervision). The nine principles discussed, along with four overarching themes identified within Te Ao Māori, reflect the importance of integrating customary practices to achieve the best outcomes for the people we serve and work with. IMPLICATIONS: These principles are crucial to the practice of kaimahi-a-iwi and kaitiakitanga, where it is important not only to care, protect, guide, teach, influence and encourage, but also to consider self-care, and to develop safe and accountable practices for all people.
ORIGINAL ARTICLE
THEORETICAL RESEARCH

… service organisations. Even with, and since, the advent of Puao-Te-Ata-Tū in 1988 (Department of Social Welfare, 1988) and the declared importance of Te Tiriti o Waitangi, there continues to be, among many, a fundamental ignorance of Māori processes. With this article, I explore social work supervision in Aotearoa New Zealand, addressing it from my Māori perspective. It contains several discussion points about social work supervision, its transformation from a western perspective to a Māori perspective, and an emerging model of kaitiakitanga (supervision) titled "He Maunga, He Tangata, He Tapu, He Kahu." It begins with a personal reflection on the transformation of the word supervision, and subsequently of the words supervisor, supervisee, client and social work. There is a brief discussion of the history of supervision, and a question: "What is cultural supervision?" Following this is a section explaining the theoretical sphere of my developing model, and the rationale that clarifies its origins, principles, purpose, obligations and responsibilities in the field of kaitiakitanga.
Reinterpretation
The word supervision has never resonated with me. It seemed such a severe word, with a variation of meanings including to direct, command, order, control, instruct, and manage to name a few. Instead, I have chosen to use the word kaitiakitanga as description of a supervision relationship between the Kaitiaki and the Tiaki. It thereby became important for me to rangahau (research) "kaitiakitanga." In doing so, I located Ahukaramu Charles Royal's (2007) interpretation of kai-tiaki-tanga: • Te Kai -We are the instrument of action • Tiaki -To watch over, to care for, to conserve, to nurture, to protect • Kaitiaki -Caretaker, protector, guardian • Tanga -preservation, conservation and protection I thereby elected to reinterpret not only supervision, supervisor, supervisee, but also social worker, client and social work (see below). Graham Smith describes such a reinterpretation (from a western perspective to a Māori perspective) as "transformative praxis, a Māori form of resistance against all acts, ideologies and forces, which attempt to subordinate Māori knowledge, Māori worldviews and Māori aspirations" (2003, p. 3). The transformation of these words within the construct of supervision must surely systematically advance them to a customary kaupapa Māori concept, because they are valuable components of effective kaupapa Māori supervision and to reposition them within kaitiakitanga purposely fits them within a Te Ao Māori framework.
Inclusively, the following are their respective changes and responsibilities.
• Kaitiakitanga (supervision) is a very specific instrument of action. Its role is valuable and crucial within social work, because it is an action to support, uphold and maintain responsible, trustworthy engagement between the supervisor and the social worker (supervisee), and to assist, guide, encourage and maintain best social work practice when working with clients and their whānau, hapū, iwi and/or family.
Although using the word kaitiakitanga instead of supervision is recent for me, it has a philosophical, scholarly and ethical position that emphasises and expresses the absolute worth of people, individually and collectively, and for me it represents a much more humanistic, sensitive, social and thoughtful approach than the word supervision.
• Kaitiaki (supervisor), whose role is to care for, protect, guide, teach, influence and encourage the supervisee in their work. Additionally, it includes a concentration on the "how": how the Kaitiaki communicates, how the Kaitiaki carries out their role and how the Kaitiaki delivers the kaupapa, "i hiringa a ia ki te mahi" ("she/he put their heart and soul into the work").
• Tiaki (supervisee/social worker), whose role is to support, protect, guide, encourage and care for the people they serve, the tangata whaiora.
• Kaimahi-a-iwi (social work): an action that concerns itself with individuals, whānau, families, groups and communities to improve, enhance and enrich mauri-ora (wellbeing), and to restore social functioning and overall health, not only for Māori tangata whaiora but for people of all cultures.
Responsibilities
• The Kaitiaki's role and responsibility is firstly to the Tiaki, the Tangata Whaiora and their whānau, hapū, iwi and/or family, and inclusively to their own whānau, hapū and iwi.
• The Tiaki's role is firstly to support, protect, guide and care for the people they serve, the tangata whaiora, their whānau, hapū, iwi and/or family, and their own whānau, hapū, iwi and/or families.
• The final responsibilities of the Kaitiaki and Tiaki are to the profession of social work and to their places of employment.

Eruera (2005) contends that kaitiakitanga, whilst not named or known as supervisory, is supervisory in nature. It is also my contention that kaitiakitanga is positioned as a socially grounded, heartfelt and humanistic approach, concerned with people, their physical, emotional and spiritual needs, their welfare, their values and their dignity, which does not fit neatly within many western supervision approaches and processes, because of kaitiakitanga's adaptability, its application and its cultural differences. It is traditionally an intimate relationship between Māori, their environment and nature, based on the care of all things (Pohatu, 1995, 2008). It is deeply rooted and embedded within the multidimensional and complex systems of tikanga, which contribute to the effective and efficient performance of the Tiaki when working with tangata whaiora and their whānau or family. It is also a process that allows the Kaitiaki to understand and gain more in-depth insight into the Tiaki and his/her practice. In 1990, the Anglican Archbishop Whakahuihui Vercoe told the people present at the commemoration of the signing of Te Tiriti o Waitangi, including royalty, that "One hundred and fifty years ago, a compact was signed, a covenant was made between two people, but since the signing of Te Tiriti, our partners have marginalised us … and they have not honoured Te Tiriti" (Phillips, 1990). To progress past such injustices, our Te Tiriti partners, Pākehā, need to recognise and accept without ridicule the value of kaupapa and tikanga Māori advancement, especially in the world of kaimahi-a-iwi and kaitiakitanga. When considering the development of these, they must be aligned with Māori worldviews that shift the focus from the past to the present and the future, and progress them to capture and recognise the value of kaupapa Māori.
With the resurgence of Te Reo Māori and the heightening realisation of the importance of tribal identity and whakapapa, it is important for Kaitiaki and Tiaki to recognise this. But, while this vocation may well be undertaken for philosophical reasons, "there is also a serious obligation to move from 'theory to applied practice', if we as Māori want to positively shape our destiny, and that of the people whom we serve" (Webber-Dreadon, 2018, n.p.). Te Ao Māori is the core source of Māoridom, revealing many traditional values and concepts that can be translated into theories of practice and provide practical tools for Tiaki and Kaitiaki in their work. It is these that will ensure positive development if Tiaki and Kaitiaki potential is to be realised. If kaitiakitanga is to be effective for the Kaitiaki and Tiaki here in Aotearoa New Zealand, it needs to be positive, practical, constructive, educative, reflective and empowering, with a tikanga Māori base, taking into account that tikanga is derived from the word tika (Mead, 2003), regarded as the proper, correct and right procedures, with protocols specifying the right way of doing things, underpinned by core values and principles governed by Māori politically, socially and spiritually.
As Māori, we need to consider Māori frameworks within kaimahi-a-iwi and kaitiakitanga that have common themes influenced by Māori values, Māori philosophies and Māori aspirations. These starting points come from Māori cultural paradigms and theories, supported by Māori cultural traditions and gifts that our tīpuna have passed down to us through time. In addition, there are published Māori writers such as Leland Ruwhiu (1995, 2005, 2013). The ANZASW, for its part, defines supervision as:

A process in which the supervisor enables, guides and facilitates the social worker(s) in meeting certain organisational, professional and personal objectives. These objectives are professional competence, accountable and safe practice, continuing professional development, education and support. Supervision should be an open, honest and transparent process. (ANZASW, 2015)

In addition, Beddoe and Davys (1994) defined supervision as being much more client-centred, rather than administrative (i.e., recording, reviews, reports etc.), line supervision or managerial (accountability to the employing organisation), with the focus being more on developing the supervisee's skills than on dealing with the emotional and personal content of a supervisee's work.
But do these words have the same innate, or a "deeper heart" meaning in comparison to kaitiakitanga?
Unfortunately, in my experience, there are very few non-Māori supervisors and managers who have or can provide the type of supervision needed when working with our Māori people here in Aotearoa New Zealand through bicultural or kaupapa Māori kaitiakitanga. This, I believe, is due to their colonial bias (Webber-Dreadon, 1999). It seems many are not interested in gaining more in-depth knowledge of kaupapa or tikanga Māori.
What is cultural supervision: a Pākehā concept?
In Aotearoa New Zealand, the term cultural social work or cultural supervision has, over many years, been (and continues to be) used by many to describe meeting the cultural needs of Māori but, in my opinion, it is part of a mainstream colonial afterthought.
Kaupapa Whakaroa (theory): an emerging model
Consider theory in a Pākehā world, and consider theory in a Māori world. Ngata's English-Māori Dictionary (1996) tells us that kaupapa whakaroa is the Māori term for theory. While these words are simple, they are scholarly and sophisticated, because they offer a practice framework positioned within Te Ao Māori, the receptacle and proprietor of all Māori words, terms and expressions. It is, therefore, my contention that there is not just one theory in a Māori world; there are many, which make up kimikimihia kaupapa whakaroa or, in Pākehā terms, eclectic theories. They do not follow one entity or system, but rather an assortment of different entities, because Māori words are adaptable and variable, with a whakapapa that is responsive and dependent on the context and how they are used. Hollis-English (2017) asserted that Māori-centred theory is developed out of a metaphysical and theoretical view and, as such, kimikimihia kaupapa whakaroa, in its varying forms, is the foundation theory of my emerging model "He Maunga, He Tangata, He Tapu, He Kahu," because it has many different entities and mediums within it, which suggests that Māori articulation is the source of theory in my Māori world.
It is important to note that Māori, coming out of the shadows, are continuously developing new and different theories and models of practice, as we claim back our own kaimahi-a-iwi and kaitiakitanga methodologies (Eruera, 2005). Academically, the development of Māori theories and models of practice in kaimahi-a-iwi and kaitiakitanga has grown, but there are still many racial and tribal barriers to overcome. There is so much depth and detailed meaning in a word far beyond tino rangatiratanga, and kimikimihia kaupapa whakaroa is only a small part of the transformation of the Pākehā context of theory, because there is a clear intent that is grounded in Māori cultural frameworks and history. It is a collective of customary approaches that draws out the innate gifts of Māori and sets out the obligations and responsibilities within kaimahi-a-iwi and kaitiakitanga, because its main concern is the wellbeing of others (Pohatu, 2004). While there might be few set, practical frameworks, there are many informed guiding principles, grounded in Māori philosophies and values based on traditional Māori worldviews and Māori knowledge, that are powerful tools for the transformation of kaitiakitanga. Māori have an ancestral relationship with kaitiakitanga, which is not only about the wellbeing of people, but also about the wellbeing of the environment and the whenua, and protecting it for the future of all people.

The route to Māoritanga through abstract interpretation is a dead end. The way can only lie through a passionate, subjective approach. That is more likely to lead to a goal. (Marsden, 1992)

The beginning and reality of an emerging model

… Rau river, to stop the marauding tribes from going up the river to plunder the many Pā set on the river. She represents "he tāngata, the people," and her final resting place represents "he tapu, the sacredness not only of the whenua but also of the occasion of kaitiakitanga."
The kāhu not only represents the Kaitiaki, but it also represents the Tiaki, the Tangata Whaiora and their whānau.
Origins of an emerging model
The kāhu (hawk) is very significant to me because, everywhere I go, it follows me. Noticeably, there are nine triangular sectors, and within each of them is a takepū (principle) intended to guide the Kaitiaki and Tiaki through a kaitiakitanga session. Inclusively, there is a beginning, "He Karakia Timatanga," and an ending, "He Karakia Whakamutunga." These are the spiritual and safe (ahurutanga) pathways for the start of, the duration of, and the completion of kaitiakitanga.
In considering and using ngā takepū (principles) in Table 1, and their valued actions, it is essential that they be aligned with Te Ao Māori, because they shift the focus in practice from the past, to the present, and on to the future, i.e., Kaitiaki (past), to Tiaki (present), to Tangata Whaiora (future). Doing this involves the need to capture and recognise the value of kaupapa Māori advancement.
Additionally, the triangular sectors hold a wrap-around action of self-care. Whilst this is often regarded as a personal responsibility, it is also the role of the Kaitiaki to encourage self-care, because kaimahi-a-iwi can be a pathway to mental, emotional and physical exhaustion, causing burnout. Being with nature is but one natural and practical activity that assists self-care and mauri-ora for both the Tiaki and Kaitiaki, because it helps them to maintain hope in the midst of suffering.
Reflective learning
An important part of this framework is reflective learning, which promotes deeper learning and questioning. It is an extension of critical thinking. It assists us to question practice; this includes stepping back from what we have done, or are doing, to analyse a situation, and looking at how it might or will improve social work practice, with a human element. It makes learning a more conscious process: finding out things one might not have thought of before, or how one would do things differently next time, to frame and reframe one's social work practice for the future. Reflective learning is something that we consciously focus on in order to improve aspects of the lives of tangata whaiora. In doing so, we explore and examine situations to assist us to understand and make sense of our own practice experiences and how we work, or want to work, as Tiaki and Kaitiaki.

Table 1 sets out the takepū (principles) and their valued actions:

Tika (Best Practice): The Kaitiaki must remember that the Tiaki are the experts on themselves; thus it is important that the Kaitiaki encourages the Tiaki to bring their "whole selves" to kaitiakitanga, and to build on their knowledge.

Manaakitanga (Respect and Compassion): It is important for the Kaitiaki to always act with respect, compassion and aroha, perhaps even to consider a tuākana-teina relationship, and to provide a safe and supportive environment. Most of all, be honest.

Kaupapa (Having a Collective Vision): It is important for the Kaitiaki to always encourage the Tiaki to have a collective vision for themselves and the tangata whaiora that they serve.

Pūmanawa (Natural Talents): As a Kaitiaki, always try to locate, explore and encourage the natural talents of the Tiaki, so that they in turn will encourage the pūmanawa of the tangata whaiora.

Whakamana (Empowerment): The Kaitiaki must always try to empower the Tiaki, so that they will do the same for the tangata whaiora.

Whānau (A Sense of Belonging): The Kaitiaki encourage and assist the Tiaki to always know who they are, who they belong to and who belongs to them, so that they can awhi, encourage and assist the Tangata Whaiora to locate themselves. This is an important part of kaimahi-a-iwi.

Mātauranga (Knowledge and Wisdom): The Kaitiaki must always consider and encourage the knowledge and wisdom of the Tiaki, so that it may come forth more, and so that the Tiaki can do likewise with the Tangata Whaiora.

Mauri Ora (Well-being): At the completion of each and any session, the Tiaki must leave with a sense of mauri ora, and this can be passed on by the Tiaki to the Tangata Whaiora.
Table 2 (extract): Kaitiakitanga Pātai

TAKE (KEY ISSUE/S): Identify key issues and priorities. What is the take (key issue)? (Take away the word "problem".) What have you done about it so far?

WHANAUNGATANGA (BUILDING A RELATIONSHIP): Consider the ethnicity or tribal connection of the Tangata Whaiora. How did you make the connection with the Tangata Whaiora? Where do they come from, i.e., whānau, hapū, iwi links? Where is the family from?
Questions
We cannot disagree about the importance of questions within kaitiakitanga, but the questions must reflect Māori values and beliefs, because they are a principled craft of kaitiakitanga. The constant framing and reframing of the questions should not only be an attempt to find answers, but also a way for the Tiaki to seek new knowledge, thoughts, positioning and direction, to determine more positive pathways, growth, motivation and advancement for the tangata whaiora. Along with listening, questions are essential tools of the Kaitiaki. The art is in how you ask them.

Allyson Davys provided a set of supervision questions in her teaching ("101 Questions"), later published in Davys and Beddoe (2010), which are pertinent to western supervision. With some adjustments they could fit other cultures and approaches, including kaitiakitanga. However, such an adjustment needs to focus on the Tiaki: their qualities, their culture, their nature and their creativeness. The Kaitiakitanga Pātai in Table 2 is a set of questions that can be asked in a session.
The philosophy of kaitiakitanga Carroll (2000) enjoined us to believe that the spirituality of kaitiakitanga draws a distinction between functional kaitiakitanga and the philosophy of kaitiakitanga. He maintained that functional supervision is something that is done, like applied balanced techniques, strategies and methods that are used for a purpose, but the philosophy of kaitiakitanga focuses on the being of people and the meaning that kaitiakitanga has for us, almost before anything is done. It is an ongoing extension of our lives that contributes to a philosophy of kaitiakitanga for Māori, as the basis from which to build a kaitiakitanga framework and explore functional supervision techniques.
Within the contexts of kaitiakitanga and tikanga Māori, there are many consortiums that indicate that Māori have an extraordinary social infrastructure that supports kaimahi-a-iwi and kaitiakitanga from a mātauranga Māori perspective, because Māori have a way of knowing that deepens understanding. Māori Marsden contends that Māori knowledge is the understanding of everything visible or invisible that exists across the universe. This includes all Māori knowledge systems, and ways of knowing and doing, which he defined as wisdom (Marsden, 1988) and it is these that guide the social relationship between the Kaitiaki and Tiaki, but also guide the use of the principles that are set on the maunga.
Having the aptitude and skill to apply all the principles within a kaitiakitanga session is a challenge within itself and to do this, it is important that the Kaitiaki and Tiaki identify their own knowledge and understanding of tikanga and its customs at the beginning of the kaitiakitanga relationship, with the simplest question perhaps being, "What do you know about tikanga Māori?" There is no strict pattern in the use of the principles except for mauri ora which is the most practical and should not be used until last, as it is the outcome that the Kaitiaki and Tiaki ought to be aiming for when using the model. The focus of kaimahi-a-iwi and kaitiakitanga is always for the best outcomes, and while there are challenges in applying tikanga within kaitiakitanga, we first need to understand how to action the traditional concepts and principles. Mātauranga Māori provides that value and belief which forms the ethics and principles of kaitiakitanga, because they govern the responsibilities to include customary practice and values since these help explain and enlighten us about different spaces and aspects of the world around us-they provide an insight into different perspectives about knowledge and knowing (Royal, 2007). Māori have a fondness for trying to understand the connections and relationships between all things human and non-human, the visible and invisible (Marsden, 1988), which is in direct contrast to western thinking because they are always trying to seek knowledge and understanding by a close and deep examination of something or someone in isolation first. For example, "What does it,
that he/she do? What is it for?" Te Ao Māori, tikanga and Mātauranga Māori, meanwhile, hold on to their value, because this enables new creativity: one that honours and treasures the past, responds appropriately to the present and its challenges, and enables the creation of new possibilities and new knowledge for the future.
"He Maunga He Tangata, He Tapu, He Kahu" provides me with the medium of taku manawa (from my heart) as the Kaitiaki, so as to deliberate the nine triangular principles with the four overarching themes identified within Te Ao Māori (the maunga, the kuia, he tapu, he kāhu), because they reflect the importance of integrating customary practices as a professional to achieve the best outcomes for the people we serve and work with and for. The principles are imperative in the practice of kaimahi-a-iwi and kaitiakitanga, where it is important not only to care, protect, guide, teach, influence and encourage, but to also consider self-care, and develop safe and accountable practices for all people. We all require inner depth Māori cultural perspectives to ensure the development of best practice for the Tiaki which, in turn, will eventually interrelate with the Tangata Whaiora, their whānau, hapū and iwi to bring about mauri ora for all.
In Conclusion
Whilst this paper has been a challenge for me, my tribal constructs, my whenua and whakapapa have played a significant role, because my personal, ethical and professional identities stem from my whakapapa. Using my own maunga, my kuia, and the kāhu as starting points in the development of this model of practice has given me the courage to explore new philosophies and concepts which I had never thought of doing before. It has opened a whole new kaitiakitanga pathway for me, and to use such a humanist and valued approach with the Tiaki must, in turn, allow reflective connections of belonging. When communicated with the Tangata Whaiora, they also learn who they are, who they belong to and who belongs to them. A pathway to move forward more positively.
I feel that, during this journey, I have been embraced by the kāhu, my kaitiaki, which has led me each step of the way. My moemoea for this aromatawai is that it will contribute to kaupapa Māori supervision so that those who follow will discover their own pathways to open the doors of Te Ao Māori and grow their own Māori world of kaitiakitanga.
Whether it be written, sung, carved, danced, drawn or chanted, it is hoped that globally, indigenous people are encouraged to celebrate their traditional beliefs, knowledge and approaches as the unique gift they have to offer the world. (Thomas & Davis, 2005, p. 196)
Utilisation of Phosphogypsum and Fly Ash in Soil Stabilisation
Background/Objectives: Buildable land with good natural bearing capacity is becoming scarce, which leads to the construction of buildings on poor soils and, ultimately, to structural foundation failures. This necessitates the use of available admixtures for the economical improvement of soil characteristics. Among the available resources, industrial by-products can be used effectively as admixtures, since this also solves the hazardous problems of their disposal. Methods: Grain size analysis and Atterberg's limits tests were conducted to classify the soils used in this study. To evaluate the effect of admixtures, the strength characteristics of the soils were observed through Unconfined Compressive Strength and California Bearing Ratio tests. Two weak soils with highly expansive characteristics were used to study the activation of 5% fly ash blended with phosphogypsum (PG) at varying percentages of 2%, 4% and 6% under different curing conditions. Findings: Results show higher strength development up to 4% PG with 5% fly ash for both soils. The effect of curing periods of 7, 28 and 60 days on the strength characteristics of the treated soils was also considered. Microstructural studies, examined through SEM micrographs and XRD results, also show an improvement in microstructure. The influence of fly ash with different percentages of phosphogypsum on swelling characteristics shows a decrease in the swell potential of treated soil with increasing curing periods. Improvement: This study demonstrates an effective application of phosphogypsum and fly ash in the geotechnical field as a soil stabilizer. Increased CBR values were obtained with fly ash and phosphogypsum combinations, which reduces the required pavement thickness and makes more productive use of industrial wastes with considerable environmental benefits.
K. Divya Krishnan*, P. T. Ravichandran, C. Sudha, V. Janani and Manisha Gunturi, Department of Civil Engineering, SRM University, Kattankulathur 603203, Tamil Nadu, India; er.divyakrishnan@gmail.com, ptrsrm@gmail.com
Introduction
Presently, the need for soil modification is a pressing issue in the construction industry, since construction over good natural soil has become difficult due to increasing demand. It has been found that structures resting on problematic soils suffer immense damage to the foundation as well as the superstructure. Among the numerous ground improvement techniques, soil stabilization has proven to be an effective treatment for overcoming the difficulties of problematic soils. In some situations, such as pavement construction, stabilization of the subgrade soil is a cost-effective technique, since it reduces the total depth of the pavement layers. Considering the effectiveness of stabilization, the utilization of waste materials as additives both improves soil strength and solves the problem of their disposal. Chemical methods using fly ash, lime, cement, etc., have been increasingly utilized to improve strength significantly 1-3 . Various researchers 4,5 have attempted to stabilize black cotton soil. The use of agricultural waste 6 is also becoming effective as an admixture for increasing the bearing capacity of weak soils, because of its pozzolanic properties when oxidized. With this view, an investigation was undertaken with industrial wastes to produce cementitious binders by blending fly ash with phosphogypsum for treating expansive soils. The fly ash-phosphogypsum combination, which increases later strength development through accelerated pozzolanic reactions, was studied at increasing proportions and different curing periods.
Test Materials and Properties
Two soils with different physical and geotechnical properties, selected from sites in Tamil Nadu, were used in this study. One soil sample was collected from a site located on the Tholudur-Vadagaram Pondi road and the other from Perungudi. The sites were selected on the basis of the structural damage observed in many buildings and pavements in these areas. The two expansive soils used in the test programme were collected from a depth of 0.6 m below ground level 7 and differ in composition and plasticity. The soils have Free Swell Index (FSI) 8 values of 120% and 109% respectively. Both soils are classified as OH (organic clay with high plasticity) as per the Unified Soil Classification System; from the grain size distribution, sample D1 contained 70% clay, 28% silt and 2% sand, and sample D2 contained 66% clay, 32% silt and 2% sand. The geotechnical properties 9-12 of the soils are summarised in Table 1. CBR tests 15 were also conducted for samples D1 and D2, and the values obtained were 1.45% and 2.19%.
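The Free Swell Index values quoted above follow the standard differential free-swell calculation; a minimal sketch, assuming the IS 2720 (Part 40) definition (the paper cites a reference for FSI but does not spell the formula out here):

```python
def free_swell_index(vol_distilled_water_ml, vol_kerosene_ml):
    """Differential free swell (%): the volume increase of a soil sample
    in distilled water relative to its volume in kerosene (a non-polar
    reference fluid in which the soil does not swell)."""
    return (vol_distilled_water_ml - vol_kerosene_ml) / vol_kerosene_ml * 100.0

# A soil swelling from 10 ml (kerosene) to 22 ml (water) has FSI = 120%,
# matching the value reported for sample D1.
```

A soil with FSI above about 100% is generally treated as highly expansive, which is consistent with the damage observed at both sites.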
Fly ash and phosphogypsum were used as the additives. The fly ash, generated by the combustion of sub-bituminous coals and exhibiting bonding characteristics, was collected from Neyveli in Tamil Nadu. Phosphogypsum, the other admixture used, is the by-product obtained during the production of ammonium phosphate fertilizer.
Preparation of Specimens
Specimens were prepared by blending fly ash with phosphogypsum in different proportions and were kept for curing periods of 7, 28 and 60 days. Phosphogypsum was added at 2, 4 and 6% together with a fixed 5% of fly ash. Tests were performed on compacted soil specimens with the admixtures added in different percentages to determine the strength characteristics, followed by free swell tests to evaluate the changes in the swelling potential of the soils. The tests involved compacting the natural and stabilized soils in the UCC and CBR moulds at their optimum moisture contents and maximum dry densities. Mineral identification and microstructural changes in untreated and treated soils were also studied with the X-ray diffraction technique and Scanning Electron Microscopy.
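A trivial helper for the blending proportions described above; the assumption that the percentages are taken by dry weight of soil is ours, since the paper does not state the reference basis:

```python
def admixture_masses(dry_soil_kg, flyash_pct=5.0, pg_pct=4.0):
    """Masses of fly ash and phosphogypsum to blend with a given dry soil
    mass. Percentages are assumed to be by dry weight of soil (hypothetical;
    not stated in the paper)."""
    return {
        "flyash_kg": dry_soil_kg * flyash_pct / 100.0,
        "phosphogypsum_kg": dry_soil_kg * pg_pct / 100.0,
    }

# 10 kg of dry soil with 5% FA + 4% PG -> 0.5 kg fly ash, 0.4 kg phosphogypsum
```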
Unconfined Compressive Strength (UCS) test
The compressive strength values obtained for the untreated soils and for the soil samples stabilized with phosphogypsum and fly ash, cured at room temperature for periods of 7, 28 and 60 days, are listed in Table 2.
The compressive strengths of the treated soils are greatly developed compared with those of the untreated soil samples. Figures 1 and 2 show the change in unconfined compressive strength with curing time for soil samples D1 and D2. The influence of the stabilizers on the strength gain of the treated soils is shown in Figure 3. The effect of fly ash and PG on strength is attributed to their pozzolanic reactions with the soil. Strength gain increases with admixture content and curing period. Compared with the virgin soils, the UCS values obtained for treated samples D1 and D2 with 6% PG and 5% fly ash at a 60-day curing period show increases of 5.99 and 6.33 times respectively.
California Bearing Ratio (CBR) Test
The CBR values of the natural clay soil samples D1 and D2 were found to be 1.45% and 2.19% respectively. Since these values are below the standard values specified for sub-grade material, treatment is required. The samples used for determining the CBR values were prepared at the optimum moisture content and maximum density obtained from the compaction characteristics and tested as per IS: 2720 (Part 16). Table 3 shows the variation in CBR values between untreated and treated soil samples for different curing periods. Figures 3 and 4 show the load-penetration graphs of the stabilized soil samples at the specified curing periods. An increase in CBR values for the treated soils D1 and D2 of 13.5% and 14.6% was observed with 6% PG and 5% FA at a curing period of 60 days.
Free Swell Index
The bonding between particles in the presence of cementitious elements limits the volume increase in clay soils. This cementation occurs as a result of the pozzolanic reactions taking place in the FA- and PG-treated soils and reduces the swell potential. A corresponding reduction in Free Swell Index values was observed in both soils with increasing admixture content. Table 4 presents the free swell index of treated soil samples D1 and D2. Increased curing time also contributes to the reduction in swell value of the treated soil samples.
Scanning Electron Microscopy (SEM)
The SEM results shown in Figure 5 indicate the microstructure of typical samples used in this study. The micrographs of phosphogypsum, fly ash and untreated soil show a pore structure whose large reactive surface promotes pozzolanic reactions and the formation of cementitious products during stabilization. Figure 5 (d) indicates that large quantities of hydrated products are propagated with the aging of soil samples treated with the fly ash-phosphogypsum binder, indicating strength development in the treated soil.
The cementitious products formed in the soils stabilized with phosphogypsum and fly ash bind the particles together and result in better performance compared with the unstabilized soil.
X-Ray Diffraction (XRD)
X-ray diffractograms of the additives and of representative untreated and treated 28-day specimens stabilised with 6% PG along with 5% FA are shown in Figure 6. The various hydraulic compounds that appear during the hydration of soil with the admixture combination give higher peaks in the XRD images. The addition of PG along with fly ash enhanced the formation of hydraulic compounds at a faster rate. The XRD patterns show a remarkable difference in the hydration products between untreated and treated specimens.
Conclusion
An increase in strength was observed for the treated soil samples: the UCS values obtained for treated samples D1 and D2 with 6% PG and 5% fly ash at a 60-day curing period show increases of 5.99 and 6.33 times respectively. Similarly, the CBR values for the treated soils D1 and D2 increased by 13.5% and 14.6% with 6% PG and 5% FA at a curing period of 60 days. The FSI reduced from 120% and 109% for the untreated soils to 50% and 45% for the treated soils when the phosphogypsum content was increased from 0% to 6% with 5% fly ash at a curing period of 60 days.
The SEM and XRD images confirm that treating the soil with fly ash and phosphogypsum changes its mineralogy through the production of hydraulic compounds.
The fly ash with phosphogypsum treatment is effective in developing the strength characteristics of problematic soils while making more productive use of industrial wastes, with considerable environmental benefits.
Association between physical activity and serum liver aminotransferases in Southwestern Iran: A Cross-sectional study
Background: The main aim of the present study is to investigate the independent association between objectively measured level of physical activity (PA) and serum concentration of liver aminotransferases (alanine aminotransferase [ALT] and aspartate aminotransferase [AST]) among seemingly healthy individuals. Materials and Methods: The current secondary study was conducted in the framework of the Khuzestan Comprehensive Health Study, a large population-based multicentric cross-sectional study conducted between 2016 and 2019 on 18,966 individuals living in Khuzestan province, southwestern Iran. The International PA Questionnaire was used for evaluating PA levels, and participants were divided into three groups (low, moderate, and high PA); ALT and AST were compared between these groups. Results: The mean ± standard deviation age of participants was 38.65 ± 11.40 years. The majority of participants were female (71%). The mean concentration of ALT in the total sample was 18.22 ± 13.06 (male: 23.65 ± 16.26 and female: 15.57 ± 10.06), while the mean concentration of AST in the total sample was 19.61 ± 8.40 (male: 22.44 ± 10.03 and female: 18.23 ± 7.08). A statistically significant inverse correlation was found between AST (r = −0.08, P = 0.02) and ALT (r = −0.038, P < 0.001) with total PA score. The mean concentration of ALT was 19.96 ± 13.63 in people with low PA, 17.62 ± 12.31 with moderate PA, and 18.12 ± 13.47 with high PA (P < 0.001). The mean concentration of AST was 20.37 ± 8.85 in people with low PA, 19.21 ± 8.83 with moderate PA, and 19.75 ± 8.85 with high PA (P < 0.001). The difference between PA levels in the mean concentration of AST remained significant (P = 0.003); however, the difference for ALT did not remain significant after adjusting for potential confounders.
Conclusion: The current study based on large sample showed that PA had a statistically negative association with the concentration of liver aminotransferases in the seemingly healthy individuals; however, the observed associations were weak. People in the lowest levels of PA had the highest levels of ALT and AST.
most specific to this feature, AST may also be elevated in other conditions such as thyroid disorders and celiac disease. [2] Abnormal liver enzymes may also be present in the absence of symptoms and signs of liver disease. [1,2] A mild-to-moderate increase in the serum concentration of liver aminotransferases might also be found in asymptomatic patients with liver disorders, particularly fatty liver or chronic hepatitis C. [3] Lifestyle factors, especially physical activity (PA), can irrefutably modulate the risk of developing several chronic diseases, including liver diseases. [4] The global prevalence of physical inactivity was about 22% in 2011. [5] Among the Iranian population, 40.0%, 24.7%, and 35.3% of individuals were categorized into low, moderate, and high PA, respectively. [6] Calorie restriction and PA are important factors for the management of nonalcoholic fatty liver disease (NAFLD). Patients with compensated cirrhosis, moreover, have tolerated PA well. [4] Adults need to perform at least 150 min/week of moderate-intensity exercise or 75 min/week of vigorous-intensity exercise to achieve the beneficial and protective effects of PA. [7] Some previous studies showed that although PA in adults may be associated with a decreased likelihood of abnormal liver function in apparently healthy asymptomatic individuals, severe PA can lead to an increase in liver enzymes. [8] As a result, the main aim of the current study was to evaluate the independent relationship between objectively measured level of PA and serum concentration of ALT and AST among seemingly healthy individuals in Khuzestan Province, Iran. Our data can serve as a new baseline of liver aminotransferases for the Iranian population.
Participants and study design
The current secondary study was conducted in the framework of the Khuzestan Comprehensive Health Study (KCHS), a large population-based multicentric cross-sectional study. KCHS was conducted between October 2016 and November 2019 in Khuzestan Province (Southwest), Iran, on 30,506 participants. The individuals in KCHS were recruited from primary care centers, called "Health Houses," in 27 counties of Khuzestan province by applying a stratified, multistage, clustered probability sampling method. A total of 30,506 people aged 20-65 years who met the inclusion criteria were enrolled in KCHS. Written informed consent was obtained from all participants at the beginning of the study. The protocol of the KCHS study was approved by the Ethics Committee and the Review Board of the university (Project No. RDC-9908, Ethics Committee and Review Board Certificate No. IR.AJUMS.REC.1399.224). This study was funded by the National Institute for Medical Research Development (NIMAD, grant number: 940,406). [9] Exclusion criteria consisted of any history of alcohol ingestion, harmful use of hepatotoxic agents (an amount of a drug or nondrug agent that induces hepatocyte damage), chronic liver diseases, alcoholic liver disease, hepatitis C virus (HCV) or hepatitis B virus (HBV) infection, glomerular filtration rate (GFR) <30, metabolic syndrome, and unwillingness to participate. [10] We also excluded NAFLD from the study. We calculated GFR using the MDRD (Modification of Diet in Renal Disease) formula.
[11] In this study, metabolic syndrome was defined based on the National Cholesterol Education Program Adult Treatment Panel-III (ATP-III) report of 2001 (updated in 2004) criteria (three or more items): fasting plasma glucose ≥100 mg/dL and/or specific medication or previously diagnosed Type 2 diabetes; hypertension (blood pressure ≥130 mmHg systolic and/or ≥85 mmHg diastolic and/or specific medication); hypertriglyceridemia (triglyceride [TG] level ≥150 mg/dL and/or specific medication); low high-density lipoprotein (HDL) cholesterol (<40 mg/dL for men and <50 mg/dL for women, and/or specific medication); and central obesity (waist circumference >102 cm for males, >88 cm for females). [12]
Study instruments and assessment of variables
In the KCHS, a multipart questionnaire was completed for each participant, which included basic sociodemographic variables, physical examination and anthropometric features (body weight, height, and waist and hip circumferences), sleep quality, PA, history of fertility, history of chronic diseases, habitual history, drug history, smoking, family history of chronic diseases, risk factors related to disease transmission, and history of psychological disorders. All questionnaires were completed by trained interviewers.
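The GFR exclusion criterion above relies on the MDRD formula; a minimal sketch, assuming the classic four-variable equation with the 186 coefficient (the paper does not state which MDRD variant or creatinine calibration was used, so this is illustrative only):

```python
def mdrd_gfr(serum_creatinine_mg_dl, age_years, female, black=False):
    """Four-variable MDRD estimate of GFR (mL/min/1.73 m^2).
    Uses the 186 coefficient of the original equation; the IDMS-traceable
    variant uses 175 instead (which version KCHS used is an assumption)."""
    gfr = 186.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr
```

With this estimate, a participant would be excluded when `mdrd_gfr(...) < 30`, per the exclusion criteria above.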
Physical activity assessment
PA was assessed with the International Physical Activity Questionnaire (IPAQ) (short version, covering PA in the last 7 days, issued in 2002). [13] All items related to the IPAQ were completed by self-report.
The IPAQ is a scoring system of 27 questions divided into five parts: job-related PA, transportation PA, domestic and gardening (yard) activities, leisure-time PA, and time spent sitting. Job-related PA includes any paid and unpaid work that people did outside their home. Transportation PA concerns how people travel from place to place. Domestic and gardening (yard) activities are physical activities done in the last 7 days in and around the home. Leisure-time PA covers all physical activities of the last 7 days done solely for recreation, sport, exercise, or leisure. Time spent sitting is the time spent sitting while at work, at home, while doing course work, and during leisure time. The score of each main part of the questionnaire was calculated as metabolic equivalent of task (MET)-minutes per week (MET-min/week), and the total PA score for each participant was obtained by adding all parts. According to this score, each eligible participant was classified into one of three categorical levels of PA. Low PA described participants who did not meet the criteria for the moderate and high categories (<600 MET-min/week). Moderate PA represented individuals who achieved a minimum of at least 600 MET-min/week. High PA denoted individuals who accumulated at least 1500 MET-min/week, usually through vigorous-intensity activity on at least 3 days, or any combination of walking, moderate-intensity or vigorous-intensity activities achieving a minimum of at least 3000 MET-min/week over 7 or more days. Vigorous-intensity activities were defined as activities that take hard physical effort and make individuals breathe much harder than normal. Moderate-intensity activities referred to activities that take moderate physical effort and make individuals breathe somewhat harder than normal. The validity and reliability of the IPAQ-SF have been confirmed in the Iranian population.
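The MET-min/week cut-offs described above can be sketched as a simple classifier; note this is a simplification of the full IPAQ scoring protocol, which also checks day counts per activity type:

```python
def ipaq_category(total_met_min_per_week, vigorous_days=0, vigorous_met_min=0):
    """Simplified IPAQ level from weekly MET-minutes, following the
    cut-offs described in the text: high if >=3000 MET-min/week (or
    >=1500 with vigorous activity on >=3 days), moderate if >=600,
    otherwise low."""
    if vigorous_days >= 3 and vigorous_met_min >= 1500:
        return "high"
    if total_met_min_per_week >= 3000:
        return "high"
    if total_met_min_per_week >= 600:
        return "moderate"
    return "low"
```

For instance, 400 MET-min/week classifies as "low" and 1200 as "moderate" under these cut-offs.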
Cronbach's alpha coefficient (0.7) indicated good internal consistency for this instrument. The Spearman-Brown correlation coefficient (0.9) showed good test-retest reliability. Furthermore, exploratory factor analysis yielded five factors. [13,14]
Clinical and biochemical measurements
Venous blood samples were taken from all participants after a fast of at least 8 h. Serum samples were immediately processed and transported to reliable certified laboratories in each city. All samples were analyzed within 24 h to determine the levels of TG, total cholesterol, low-density lipoprotein, HDL, alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase, and gamma-glutamyl transpeptidase. The same analyzer and the α-ketoglutarate reaction were utilized for the measurement of ALT and AST. The upper limit of normal for ALT and AST was taken as 40 U/L in both men and women. [15] Vital signs including systolic and diastolic blood pressures (mmHg), respiratory rate (breaths/minute), heart rate (pulses/minute), and temperature (degrees centigrade) were also recorded.
Other variables
Body mass index (BMI) (kg/m^2) was calculated as the individual's weight (kg) divided by the square of the height (in meters). The following categories were considered for BMI: underweight, BMI <18.5; normal weight, BMI 18.5-24.9; overweight, BMI 25-29.9; and obesity, BMI of 30 or greater. [16,17] Age, gender, level of education (elementary school degree, guidance school degree, associate's degree, bachelor's degree, master's degree, and doctorate/PhD), marital status (single, married, divorced, and widowed), cigarette smoking, and opium use were also recorded.
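The BMI computation and categories above, as a small helper:

```python
def bmi_category(weight_kg, height_m):
    """BMI (kg/m^2) and its category, using the cut-offs in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"
    elif bmi < 25:
        label = "normal weight"
    elif bmi < 30:
        label = "overweight"
    else:
        label = "obesity"
    return bmi, label

# e.g. 70 kg at 1.75 m -> BMI ~22.9, "normal weight"
```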
Statistical analysis
Continuous and categorical variables were reported as mean ± standard deviation (SD) and frequency (percentage), respectively. Normality of continuous variables was evaluated using Kolmogorov-Smirnov test and Q-Q plot, and nonnormally positively skewed data were subjected to logarithmic transformation. Basic demographic and clinical continuous variables were compared between three levels of PA using analysis of variance (ANOVA) and categorical data using Chi-squared test.
The bivariate association of total PA score with serum concentration of liver aminotransferases (ALT and AST) was evaluated using Spearman's rank correlation coefficient. Multiple linear regression was used for evaluating association between total PA score with ALT and AST when adjustment was made for potential confounders.
We also compared the mean serum concentrations of liver aminotransferases (ALT and AST) between people in the three levels of PA using multivariate analysis of variance (MANOVA) and multivariate analysis of covariance (MANCOVA) adjusting for confounders. The Bonferroni post hoc test was used for pairwise comparisons after MANOVA/MANCOVA.
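The bivariate Spearman correlation described above is typically computed with a statistics package (e.g. `scipy.stats.spearmanr`); a dependency-free sketch of the underlying computation, with ties handled by average ranks:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    average ranks of x and y (ties receive the mean of their ranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j over a run of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone increasing relationship gives rho = 1 and a decreasing one gives rho = -1; the weak values reported below (e.g. r = -0.08 for AST) indicate a slight inverse monotone trend.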
RESULTS
The data of 30,506 participants were assessed for eligibility. We removed the data of 11,402 participants due to alcohol ingestion, fatty liver disease, HCV infection, HBV infection, chronic liver diseases or liver cirrhosis, GFR <30, and metabolic syndrome sequentially. Finally, 18,966 participants with complete data on our study's main variables were included in the data analysis. The flow diagram of participants' recruitment is shown in Figure 1.
The classification of participants in terms of PA led to 3524 (18.58%) participants with low PA, 8784 (46.31%) with moderate PA, and 6658 (35.10%) with high PA. Table 1 shows the basic demographic and clinical characteristics of the study participants for the total sample and across categories of PA. The mean ± SD age of the total participants was 38.65 ± 11.40. Of the total sample, 12,749 (71%) were women and the remaining were men. The mean age of participants differed significantly between categories of PA (P < 0.001), with people at low levels of PA being older. People in different categories of PA had significantly different BMI (P < 0.001); surprisingly, participants with low PA had a lower mean BMI due to lower age. Other basic variables also differed significantly across categories of PA (P < 0.001, for all). More details are presented in Table 1.
The association between PA scores and AST and ALT was evaluated using Spearman's rank correlation coefficient in a bivariate setting and with multiple linear regression when adjustment was made for potential confounders, i.e., age, gender, BMI, systemic blood pressure, and cigarette and opium use. A statistically significant inverse correlation was found between AST (r = −0.08, P = 0.02) and ALT (r = −0.038, P < 0.001) with PA scores. The regression analysis of PA scores with AST showed that each SD increase in PA led to a significant decrease in AST: crude regression coefficient B = −0.00067 (P = 0.005) and adjusted B = 0.00076 (P = 0.001). We did not observe a significant association between total PA score and ALT in the adjusted model (adjusted B = −0.00026, P = 0.62). The mean concentration of ALT was 19.96 ± 13.63 in people with low PA, 17.62 ± 12.31 with moderate PA, and 18.12 ± 13.47 with high PA (P < 0.001). The Bonferroni post hoc test showed a significant difference between all pairwise groups (P ≤ 0.001). The mean concentration of AST was 20.37 ± 8.85 in people with low PA, 19.21 ± 8.83 with moderate PA, and 19.75 ± 8.85 with high PA (P < 0.001). The Bonferroni post hoc test showed significant differences between people with low PA and those with moderate and high PA levels (P < 0.001, for all), but not between the moderate and high PA levels. The difference between PA levels in the mean concentration of AST remained significant (P = 0.003); however, the difference for ALT did not remain significant (P = 0.47) after adjusting for potential confounders.
Subgroup analysis by gender showed results similar to the total sample in terms of AST: the mean concentration of AST was significantly different between people in different categories both before adjusting for confounders (P < 0.001 for males, P = 0.008 for females) and after adjusting for confounders (P = 0.012 for males, P = 0.015 for females). However, for ALT, neither the crude nor the adjusted differences across categories of PA were significant for men, whereas they were significant for women [Table 2].
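The crude and adjusted analyses described above (a bivariate Spearman correlation followed by a linear regression that adjusts for a confounder) can be sketched as follows. This is a minimal illustration on synthetic data: all variable names, effect sizes and the single confounder (age) are assumptions for the sketch, not the study's data, and the rank computation ignores tie handling for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic cohort: older people are slightly less active (confounding),
# and ALT falls with physical activity (PA) -- illustrative effects only.
age = rng.normal(38.65, 11.40, n)
pa = rng.normal(0.0, 1.0, n) - 0.02 * (age - 38.65)
alt = 18.0 - 0.8 * pa + 0.05 * (age - 38.65) + rng.normal(0.0, 3.0, n)

def spearman_r(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks
    (tie-averaging omitted for brevity)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

r = spearman_r(pa, alt)  # bivariate (crude) association

# Adjusted model: regress ALT on PA plus the confounder via ordinary
# least squares; beta[1] is the PA coefficient after adjusting for age.
X = np.column_stack([np.ones(n), pa, age])
beta, *_ = np.linalg.lstsq(X, alt, rcond=None)
adjusted_b_pa = float(beta[1])
```

In this construction both the crude correlation and the adjusted coefficient come out negative, mirroring the direction of the AST results reported above.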
DISCUSSION
The main finding of our study, which used a province-wide database, was that PA can affect serum liver aminotransferases. Our data showed a significant difference in the serum level of liver aminotransferases in participants with moderate PA in comparison to those with low PA, probably owing to the large number of participants [Table 2]. [18] The novelty of our study is the evaluation of this association in seemingly healthy individuals. Our data can serve as a new baseline for the Iranian population, given the racial diversity in Khuzestan province.
High BMI (as an increasing factor) and aging (as a decreasing factor) are considered independent confounding factors when evaluating the association between serum liver aminotransferases and PA. [10,19] Among individuals with a normal total bilirubin level, serum AST and ALT concentrations are significantly lower in females than in males. [20] In our comparison, we statistically adjusted for the effects of age, gender and BMI on the serum level of liver aminotransferases to remove these confounding effects.
Some previous studies with adjusted models indicated that, among participants with NAFLD, high PA has an independent association only with a lower ALT level but not with the AST level. These independent negative associations were not observed in participants with moderate PA compared with inactive ones, except for the risk of lean NAFLD. [10,19] In this study, the serum levels of liver aminotransferases (both ALT and AST) were significantly higher among ostensibly healthy participants with low PA compared with those who had moderate PA [Table 2]. The present study demonstrated that PA can be an effective factor in preventing liver disorders, especially in apparently healthy asymptomatic individuals with no previous history of liver disease.
Gender can be a contributing factor to the concentration of liver aminotransferases in a healthy population. Some studies have revealed that, independently of the PA level, the mean serum concentration of ALT is apparently lower among healthy females than males. [8,20] Our data showed that the mean levels of AST and ALT were significantly higher in males than females in the study population, and also across the three categories of PA [Table 2]. Regular PA has irrefutable benefits in the primary and secondary prevention of several chronic diseases, particularly NAFLD. [4] NAFLD is the most common chronic liver disease, with an estimated global prevalence of 25.2% and a prevalence of 2.9% to 37.8% in the Iranian adult general population. [21,22] Although among the global population NAFLD is strongly associated with overweight/obesity, insulin resistance and dyslipidemia, [23] in the Iranian population it is particularly common in males and those of older age. [22] In addition, PA has been tolerated acceptably in patients with compensated cirrhosis and might modulate the risk of hepatocellular carcinoma (HCC), especially in NAFLD patients. [4] Regular adequate PA can also improve the underlying metabolic disorders in NAFLD and particularly nonalcoholic steatohepatitis (NASH). [24] Following an increase in liver aminotransferases in asymptomatic individuals, NAFLD is usually detected by imaging techniques. [25] Thus, we suggest that PA could presumably be used to predict the likelihood of metabolic liver disease, including NAFLD and NASH.
The strength of the present study was the existence of a large sample size with various ethnicities. Therefore, the total mean serum concentration of liver aminotransferases can be used as a reference for the Iranian population. Conversely, our study had some limitations. First, similar to other cross-sectional studies, this study is inherently limited in clarifying causal relationships. Therefore, more longitudinal cohort studies and randomized controlled trials are needed to explain the underlying causal relationships between PA and the level of liver aminotransferases. Second, we had limitations of cost and of skilled sonographers for the screening of all participants. Third, we measured PA using a self-reporting questionnaire rather than an objective measurement such as accelerometer readings, and we have no information on the validity of this questionnaire in older individuals. [12] Furthermore, the results of this study could not indicate for how long PA should be continued to produce beneficial effects on ALT and AST. Finally, we did not follow individuals over time; we assessed PA and liver function tests only at the first visit.
CONCLUSION
Our results suggest that the level of PA has a statistically significant negative association with the concentration of liver aminotransferases in seemingly healthy individuals.
Although the concentrations of ALT and AST among participants with vigorous PA were statistically lower than in individuals with low PA, this difference was not clinically significant.
Remote Sensing Applications in Monitoring of Protected Areas
Protected areas (PAs) have been established worldwide for achieving long-term goals in the conservation of nature with the associated ecosystem services and cultural values. Globally, 15% of the world's terrestrial lands and inland waters, excluding Antarctica, are designated as PAs. About 4.12% of the global ocean and 10.2% of coastal and marine areas under national jurisdiction are set as marine protected areas (MPAs). Protected lands and waters serve as the fundamental building blocks of virtually all national and international conservation strategies, supported by governments and international institutions. Some of the PAs are the only places that contain undisturbed landscapes, seascapes and ecosystems on planet Earth. With intensified impacts from climate and environmental change, PAs have become more important in serving as indicators of ecosystem status and functions. Earth's remaining wilderness areas are becoming increasingly important buffers against changing conditions. The development of remote sensing platforms and sensors and the improvement in science and technology provide crucial support for the monitoring and management of PAs across the world. In this editorial paper, we review research developments using state-of-the-art remote sensing technologies, discuss the challenges of remote sensing applications in the inventory, monitoring, management and governance of PAs and summarize the highlights of the articles published in this Special Issue.
Introduction
The World Commission on Protected Areas adopted a definition that describes a protected area (PA) as a clearly defined geographical space, recognized, dedicated and managed, through legal or other effective means, to achieve the long-term conservation of nature with the associated ecosystem services and cultural values [1]. In general, protected areas (PAs) include national parks (NPs), national forests, national seashores, all levels of natural reserves, wildlife refuges and sanctuaries, and areas designated for the conservation of native biological diversity and of natural and cultural heritage and significance. PAs also include some of the last frontiers that have unique landscape characteristics and ecosystem functions in wilderness conditions [2]. Along shorelines and over oceans and seas, the International Union for the Conservation of Nature (IUCN) has defined marine protected areas (MPAs) as any area of intertidal or subtidal terrain, together with its overlying water and associated flora, fauna, historical and cultural features, which has been reserved by law or other effective means to protect part or all of the enclosed environment [3]. As reported by the World Database on Protected Areas (WDPA, https://www.protectedplanet.net/), 15% of the world's terrestrial lands and inland waters, excluding Antarctica, is under protection. About 4.12% of the global ocean and 10.2% of the coastal and marine areas under national jurisdiction are set as MPAs. About 19.2% of key biodiversity areas are completely covered as PAs [4]. Protected lands and waters serve as the fundamental building blocks of virtually all national and international conservation strategies, supported by governments and international institutions. These policies and their implementations provide the protection of threatened species around the world. The IUCN has categorized PAs into seven types, namely the strict nature reserve (Ia), wilderness area (Ib), national park (II), natural monument or feature (III), habitat/species management area (IV), protected landscape/seascape (V) and the protected area with a sustainable use of natural resources (VI) [1]. PAs are increasingly recognized as essential providers of ecosystem services and biological resources, key components in climate change mitigation strategies, as well as vehicles for protecting threatened human communities or sites of great cultural and spiritual value.
PAs have been created over past millennia for a multitude of reasons [5]. The establishment of the Yellowstone National Park in 1872 by the United States (U.S.) Congress ushered in the modern era of the governmental protection of natural areas, which catalyzed a global movement [6,7]. The 1916 National Park Service Organic Act of the United States established the purpose of national parks, including to conserve the scenery and the natural and historic objects and the wild life therein, and to provide for the enjoyment of the same in such a manner and by such means as will leave them unimpaired for the enjoyment of future generations [8]. Under the National Parks Omnibus Management Act of 1998, the agency undertook a program of inventory and monitoring of National Park System resources to establish baseline information and to provide information on the long-term trends in the condition of National Park System resources [9]. Remote sensing applications have contributed greatly to such inventory and monitoring efforts [10][11][12].
Even with the implementation of a tremendous variety of monitoring programs and conservation efforts with achievements, wild species' population decline, biodiversity loss, extinction, system degradation, pathogen spread and state change events are occurring at unprecedented rates [13,14]. The effects are augmented by continued changes in land use and invasive spread, alongside the direct, indirect and interactive effects of climate change and disruption. PAs become more important in serving as indicators of ecosystem conditions and functions, either by their own status and/or by contrast with their adjacent unprotected areas. PAs are highly prized by society for their diverse representative characteristics. Earth's remaining wilderness areas are becoming increasingly important buffers against the changing environmental conditions. However, they are not an explicit target in many international policy frameworks [15]. The most recent United Nations report concluded that up to one million animal and plant species were facing extinction, for which humans were to blame [16]. The most impactful drivers in global biodiversity scenarios toward the year 2100 include human-induced changes in land use, climate, nitrogen deposition, biotic exchange and atmospheric CO2 [17].
The WDPA data showed that the Latin American and the Caribbean regions have 4.85 million km2 (24%) of protected land. Brazil has half (2.47 million km2) of the entire region protected, making it the largest national terrestrial PA network in the world [WDPA, https://protectedplanet.net/]. Worldwide, 77% of land, excluding Antarctica, and 87% of the ocean has been modified by the direct effects of human activities [18]. PAs in China, for example, have typically incorporated core and buffer zones with human habitation. A study mapped and analyzed the human footprint index at 1 km scale for 1834 terrestrial nature reserves of mainland China and concluded that the reserves designated at higher levels of governance were more pristine than those at lower levels. This was significant as China started to consider the reclassification of some reserves as NPs [19]. Another nationwide assessment quantified the provision of threatened species habitats and key regulating services in natural reserves in China. The study illuminated a strategy for strengthening PAs through creating the first comprehensive national park system of China [20]. As a strategic movement, in June 2019, the Chinese government announced a guideline for the establishment of a new NP-centered system for the protection of natural areas, with the implementation plan in 2020. The crown jewels on the list of the first 10 designated NPs included the Three-River-Source NP, Giant Panda NP and Northeastern China Tiger and Leopard NP, among others. The Three-River-Source NP covers an area of about 363,000 km2 and encompasses the headwaters of three major rivers, i.e., the Yellow, Yangtze and Lancang rivers, in the eastern Tibetan Plateau. The comprehensive system of NPs aims to protect the lands and waters with key natural resources and biodiversity.
Remote sensing provides a comprehensive geospatial capacity to map and study PAs in different spatial details and contexts, e.g., pixel size, area coverage, the immediate adjacent areas of PAs and the broader background of the land and water that support the PAs; temporal frequencies, e.g., daily, weekly and monthly observations; and spectral properties. Remote sensing observations, in combination with field-based measurements, create new and exciting opportunities to meet the needs of monitoring PAs [21].
Remote Sensing Applications in Monitoring of Protected Areas
It has long been recognized that the on-the-ground monitoring of PA ecosystems is expensive, primarily due to the size and logistical constraints of national parks, designated wilderness, wildlife refuges and other large PAs. Remote sensing monitoring can provide essential information for efficient, transparent, repeatable and defensible decision making in ecological systems [22]. The integration of ground-based data (e.g., focal species populations) and remote sensing has been practiced in monitoring and modeling environmental change in many PAs [5,23-25].
Remote sensing has unique advantages in monitoring the landscape dynamics of PAs around the world. The temporal depth of remote sensing can be used to provide monitoring continuity through the deployment of new satellites and sensor systems and image acquisition capability. Multispectral optical sensors, e.g., the Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and Operational Land Imager (OLI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), SPOT High Resolution Visible (HRV) and High Resolution Visible and Infrared (HRVIR), Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), Advanced Very-High-Resolution Radiometer (AVHRR) and Sentinel-2 MultiSpectral Instrument (MSI), and their derivative data products, have been routinely applied in PA inventory and monitoring research and applications. These approaches translate an ecologically based view of change into the spectral domain when archives of multispectral images are considered. Spectral indices have been used as proxies for ecological attributes and have been tracked as time-series trajectories. The developed algorithms use statistical fitting rules to identify periods of consistent progression in the spectral trajectory (segments) and the turning points (vertices) that separate these periods. The change detection methods capture a wide range of processes affecting vegetation, such as decline and mortality, growth and recovery and the combination of other driving factors [18,26,27]. Active sensors, such as the synthetic aperture radar (SAR), and satellites including the European Remote Sensing (ERS-1/-2) and Envisat, the Japanese Earth Resources Satellite 1 (JERS-1), the Phased Array type L-band Synthetic Aperture Radar (PALSAR-1/-2), RADARSAT-1/-2, the Constellation of Small Satellites for Mediterranean Basin Observation (COSMO-SkyMed), TerraSAR-X and TanDEM-X, and Sentinel-1A/B, have been proven effective in monitoring the changing environments at the local, regional and global scales [28][29][30][31][32][33]. The interferometric synthetic aperture radar (InSAR) has been used to construct a global digital elevation model (DEM), to map characteristics of the Earth's surface and to measure land surface deformation at an unprecedented precision and spatial resolution under all-weather conditions [34].
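The trajectory-fitting idea behind these change detection algorithms (identifying the vertices that separate periods of consistent spectral progression) can be illustrated with a minimal two-segment fit. The synthetic NDVI series, disturbance year and exhaustive vertex search below are simplifying assumptions; operational algorithms such as LandTrendr use far more elaborate fitting rules.

```python
import numpy as np

years = np.arange(1990, 2021, dtype=float)
# Synthetic pixel trajectory: stable forest, abrupt disturbance in 2005,
# then gradual recovery -- values are illustrative.
ndvi = np.where(years < 2005,
                0.80 - 0.002 * (years - 1990),
                0.30 + 0.02 * (years - 2005))

def fit_sse(t, y):
    """Sum of squared errors of a least-squares line through (t, y)."""
    A = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((y - A @ coef) ** 2).sum())

def best_vertex(t, y):
    """Exhaustively search for the single breakpoint (vertex) that
    minimizes the combined SSE of two linear segments sharing it."""
    best_k, best_sse = None, np.inf
    for k in range(2, len(t) - 2):
        sse = fit_sse(t[:k + 1], y[:k + 1]) + fit_sse(t[k:], y[k:])
        if sse < best_sse:
            best_k, best_sse = k, sse
    return t[best_k]

vertex_year = best_vertex(years, ndvi)  # expected near the 2005 disturbance
```

A real implementation would iterate this search to place multiple vertices and apply statistical penalties against overfitting, but the segment/vertex decomposition is the same.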
Time-series remote sensing data have allowed for the reconstruction of the histories of disturbances induced by anthropogenic and natural impacts. Typical examples have included: inventory and monitoring studies in NPs and PAs in a landscape context, such as in the Acadia NP and other northeastern U.S. NPs [12,35], the Yellowstone NP [36] and the Olympic NP of the Pacific Northwest of the U.S. [37]; monitoring the interannual variability in snowpack and lake ice in southwest Alaska [38]; the assessment of national forests of the eastern U.S. [39]; monitoring the land cover change and ecological integrity of Canada's national parks [40], such as the wildlife habitat changes in Kejimkujik NP and the national historic site in southern Nova Scotia of the Canadian Atlantic Coastal Uplands Natural Region [41]; operational active fire mapping and burnt area identification in Mexican nature PAs [42]; and studies in the Tibetan Plateau [43] and the Changbai Mountain National Nature Reserve [44,45] of China.
Remote sensing has unique advantages in monitoring frontier lands, which are often in remote and difficult-to-reach locations. Examples have included: satellite-observed dynamics of lake-rich regions across the Tibetan Plateau and the Arctic; forest disturbance and dynamics in Siberia; the assessment of the complex Amur tiger and Far Eastern leopard habitats in the Russian Far East; landscape and ecosystem characterizations in China and Southeast Asia; conservation efforts for tree kangaroos in Papua New Guinea; and PAs in the Albertine Rift of Africa [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62]. Remote sensing has advantages in monitoring vast habitats both inside and surrounding the PAs. This is particularly true when ecological functioning and habitats within NPs and PAs are influenced by natural resources outside of their borders [63][64][65]. Remote sensing applications have been among the critical approaches in the assessment of landscape contexts and the conversion risks of PAs surrounded by accelerated human population growth [66][67][68][69][70].
MPAs are among the critical components of protected waters. Important factors that affect the way plants and animals respond to MPAs include the distribution of habitat types, the level of connectivity to nearby fish habitats, wave exposure, depth distribution, the prior level of resource extraction and regulations. Conservation benefits are evident through increased habitat heterogeneity at the seascape level, the increased abundance of threatened species and habitats and the maintenance of a full range of genotypes [71]. Remote sensing data that quantify spatial patterns in habitat type, oceanographic conditions and benthic complexity can be integrated with in situ ecological data for the design, evaluation and monitoring of MPA networks [72,73]. Combining remote sensing products with in situ ecological and physical data can support the development of a statistically robust monitoring program of the living marine resources within and adjacent to marine protected areas [74]. Individual MPAs need to be networked in order to provide large-scale ecosystem benefits, and they have the greatest chance of protecting all species, life stages and ecological linkages if they encompass representative portions of all ecologically relevant habitat types in a replicated manner. High-resolution remote sensing data are capable of mapping the physical and biological features of a benthic habitat, such as in the monitoring of the coral reefs in the Hawaiian Archipelago and near-shore PAs in California and New England [75].
Coastal habitats, such as sand dunes, barrier islands, tidal wetlands, marshes, mangrove forests and submerged aquatic vegetation, provide food, shelter and breeding grounds for terrestrial and marine species. Coastal habitats also provide irreplaceable services such as filtering pollutants and retaining nutrients, maintaining water quality, protecting shorelines and absorbing flood waters. As coastal habitats face intensified natural and anthropogenic disturbances, through direct impacts such as hurricanes, tsunamis and harmful algal blooms and cumulative and secondary impacts such as climate change, sea level rise, oil spills and urban development, the inventory and monitoring of coastal environments has become one of the most challenging tasks of society in resource management and administration. Remote sensing technologies with space-borne and airborne sensor systems for data acquisition and observation have profoundly changed the practice of monitoring and understanding the dynamics of coastal environments. Remote sensing applications have greatly enhanced the monitoring capacity of coastal PAs and practical implementations across spatial scales [76,77]. Very high resolution (VHR) imagery from airborne and satellite sensors, unmanned aerial vehicles (UAVs), light detection and ranging (LiDAR), hyperspectral sensors, ground-based sensor networks and wireless geospatial service web systems have been increasingly applied to locally focused interests in coastal PAs [78][79][80][81][82][83][84][85][86].
The improved capacity of data science and infrastructure, e.g., cloud computing, Google Earth Engine (GEE) and big Earth data approaches, facilitates data sharing, integration and modeling processes [87][88][89]. For example, the capacity and services of GEE open opportunities for explorations that benefit from decades of remote sensing data acquisition [90][91][92][93][94][95][96].
Challenges of Remote Sensing Monitoring of Protected Areas
The impacts of climate and human-induced environmental changes will continue to disrupt ecosystem functions and services, as well as habitats and biodiversity. Future projections indicate a potentially catastrophic loss of global biodiversity [97][98][99][100][101][102]. Earth's remaining wilderness areas are becoming increasingly important buffers against changing conditions. Protected lands and waters are becoming more important, serving as indicators of ecosystem status and functions and as a barometer for guiding national and international strategies in collaborative mitigation and conservation efforts.
PAs are largely free from many forms of direct human intervention. The landscapes and seascapes of PAs are dynamic rather than static. Vegetation is changing continuously in response to both endogenous and exogenous pressures. PAs and their networks provide critical habitats for biodiversity conservation, and yet their performance is challenged under the changing climate and shifting resource patterns [103]. Monitoring the dynamics of PAs requires tools that capture a wide range of processes over large areas. The evaluation of management effectiveness is a vital component of responsive, pro-active PA management [104,105]. Ecosystem indicators, whether process-based (e.g., productivity), pattern-based (e.g., land-use activities) or component-based (species populations), vary in space and time. A major limiting factor in comprehensive ecological models is the lack of explanatory geospatial data. These issues conspire against the ready, standardized integration of remote sensing into ecological research for the management and governance of PAs.
Remote sensing is a universal tool for scientists and land managers. New developments in remote sensing platforms and sensors and improvements in science and technology provide crucial support for monitoring PAs across the world. Remote sensing data products, coupled with user-friendly data exploration, analysis and accessible modeling tools, allow scientists and practitioners to gain a better understanding of how environmental changes affect species populations, ecosystem functions and the services that sustain them. The lessons learned and the recommendations put forward for the remote sensing of PAs include: the allocation of sufficient time to develop a genuine science-management partnership; the communication of results in a management-relevant context; the confirmation or embellishment of existing frameworks and processes; planning for persistence and change; and building on existing, widely used data analysis tools and software frameworks [10,21].
Field surveys and in situ observations are essential to identify protected habitats through remote sensing. Almost every remote sensing exercise requires a field survey to define the habitats, to calibrate remote sensing imagery and to evaluate the accuracy of remote sensing outputs [106]. With precise and accurate positioning and field surveys becoming routine operations, challenges remain for incorporating data from ground-based sensor networks and wireless geospatial service web systems with remote sensing observations for the comprehensive analysis and assessment of PAs.
The monitoring of landscape dynamics of PAs is among the primary advantages of remote sensing. The link between pattern and process, however, has been identified as a seminal challenge in landscape ecology. Disturbance is an important process that both creates and responds to pattern. The integration of remote sensing-based and in situ monitoring, including the consideration of scaling site observations to the ecosystem level and the explicit link through ecosystem-based modeling to management options and recommendations, presents practical challenges and opportunities in the variety of PAs [23,26,39].
Remote sensing science is effective for managers and researchers across many domains. The lack of standardized protocols, workflow architectures, guidelines, training and software tools has led to complexity. When evaluating trends in resource and ecological conditions, the resource managers of PAs pursue analyses that use all the available information. Thus, they seek remote sensing change detection analyses that may include historical aerial photography combined with more recent satellite images acquired in different spectral bands at various spatial and temporal resolutions. In addition, many resource problems must be evaluated at multiple spatial scales [12,69]. These practical issues result in unusually complex requirements and procedures that can be worked out only through sustained collaboration between remote sensing scientists and PA managers. A key lesson is the importance, difficulty and time demands of the mutual learning process [11]. From a management perspective, there is considerable potential to expand the operational use of remote sensing to monitor PAs among routine implementations. The use of such information in operational monitoring presents difficulties in designing and implementing a program that provides useful information at management levels and at an affordable cost [18,107]. The integration of remote sensing data into a framework for data assimilation, processing, modeling and reporting is becoming essential [108][109][110].
It is worth pointing out that one of the most important limitations to the use of remote sensing data for the monitoring of PAs is the variable mapping accuracy and the cost of acquiring ground-based data for verification and validation. This is a common challenge of obtaining and integrating traditional in situ measurements and approaches with remote sensing mapping and modeling. It also shows that remote sensing cannot always meet all information collection needs. Whereas remote sensing-based techniques address spatial and temporal domains inaccessible to traditional approaches, remote sensing cannot match the accuracy, precision and thematic richness of in situ measurements and monitoring at the plot scale. Therefore, the design of remote sensing-based monitoring methods needs to be carefully integrated with a very efficient protocol for the inclusion of field observations and survey data [10,111].
As the amenity values of PAs attract rapid development and the impacts of human-induced land use change, remote sensing has to meet an increasingly essential requirement to address a range of monitoring needs across spatial scales and from terrestrial to coastal and open waters [112,113]. Challenges and uncertainties remain for data continuity and systematic technology improvements toward consistent long-term monitoring applications in the future [114].
Highlights of the Special Issue Articles
With the rapid development of remote sensing science and technologies, this Special Issue aims to publish original manuscripts on the latest innovative research and advancements in the remote sensing of PAs. The articles in this Special Issue include applications using data from multiple sensor systems in the monitoring of PAs, from global to local interests.
The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light (NTL) has been proven to be an effective indicator of the intensity and change of human-induced urban development over a long time span and at a larger spatial scale [115]. The study by Fan et al. [116] used the NTL data from 1992 to 2013 to characterize human-induced urban development and studied the spatial and temporal variation of the NTL of global terrestrial PAs. The study selected the seven types of PAs defined by the IUCN, including the strict nature reserve (Ia), the wilderness area (Ib), the national park (II), the natural monument or feature (III), the habitat/species management area (IV), the protected landscape/seascape (V) and the protected area with a sustainable use of natural resources (VI). The study evaluated the NTL magnitudes in PAs and their surrounding buffer zones. The results revealed the level, growth rate, trend and distribution pattern of the NTL in global PAs.
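The core of such a comparison (NTL inside a PA versus a surrounding buffer zone) reduces to zonal statistics over raster masks. The sketch below uses a toy grid, a hand-drawn PA footprint and a one-cell buffer ring as stand-ins for real DMSP/OLS rasters and PA polygons, and implements binary dilation with plain array shifts to avoid external dependencies.

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 (8-neighbour) structuring element,
    implemented with array shifts over a zero-padded copy."""
    out = mask.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                grown |= padded[1 + di:1 + di + out.shape[0],
                                1 + dj:1 + dj + out.shape[1]]
        out = grown
    return out

# Toy nighttime-light raster (digital numbers) and a PA footprint mask.
ntl = np.array([[2, 3, 5, 20, 25],
                [1, 2, 4, 18, 30],
                [1, 1, 3, 15, 22],
                [0, 1, 2, 10, 12],
                [0, 0, 1,  5,  8]], dtype=float)
pa = np.zeros_like(ntl, dtype=bool)
pa[0:3, 0:2] = True  # protected area sits in the dark corner

# Buffer zone: cells added by one dilation step, excluding the PA itself.
buffer_ring = dilate(pa, iterations=1) & ~pa
mean_inside = float(ntl[pa].mean())
mean_buffer = float(ntl[buffer_ring].mean())
```

On real data, the same zonal means computed per year would yield the level, growth rate and trend comparisons between PAs and their buffers.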
Terrestrial biophysical variables play an essential role in quantifying the energy budget, water cycle and carbon sink over the Three-River Headwaters Region of China (TRHR). Bei et al. [117] evaluated the spatiotemporal dynamics of biophysical variables including meteorological variables, vegetation and evapotranspiration (ET) over the TRHR and analyzed the response of vegetation and ET to climate change in the period from 1982 to 2015 using the China Meteorological Forcing Dataset (CMFD) and the Global Inventory Modeling and Mapping Studies (GIMMS) NDVI3g product, among others. The main input gridded datasets included meteorological reanalysis data, a satellite-based vegetation index dataset and the ET product developed by a process-based Priestley-Taylor algorithm. The study suggested a 'dryer warming' tendency in some areas of the TRHR and a 'wetter warming' tendency in others. The study revealed that more than 56.8% of the areas in the TRHR presented a significant increase in vegetation. The analysis noted that the ET was governed by the terrestrial water supply in the arid region of the western TRHR.
Salt marshes are changing due to natural and anthropogenic stressors such as sea level rise, nutrient enrichment, herbivory, storm surge and coastal development. A study by Campbell and Wang [105] analyzed the salt marsh change at the Fire Island National Seashore, a nationally protected area in New York, using object-based image analysis (OBIA) to classify a combination of data from Worldview-2 and Worldview-3 satellites, topobathymetric LiDAR, and National Agricultural Imagery Program (NAIP) aerial imagery. The salt marsh classification was trained and tested with the vegetation plot data. In October 2012, Hurricane Sandy caused extensive overwash and breached a section of the island. This study quantified the continuing effects of the breach on the surrounding salt marsh. The tidal inundation at the time of image acquisition was analyzed using the LiDAR-derived DEM to create a bathtub model at the target tidal stage. The study revealed the geospatial distribution and rates of change within the salt marsh interior and the salt marsh edge. The Worldview imagery was able to classify the salt marsh environments accurately at an overall accuracy of 92.75%. The study suggested that the NAIP data were adequate for determining the rates of salt marsh change with a high accuracy. The cost and revisit time of the NAIP imagery made it an ideal open data source for high spatial resolution monitoring and the change analysis of salt marsh environments.
Anticipating how boreal forest landscapes will change in response to fire regimes requires disentangling the effects of various spatial controls on the recovery process of tree saplings. The spatially explicit monitoring of post-fire vegetation recovery through moderate resolution Landsat imagery is a popular technique but is filled with ambiguous information due to mixed pixel effects. On the other hand, very-high resolution satellite imagery accurately measures the crown size of tree saplings but has gained little attention. Its utility for estimating leaf area index (LAI) and tree sapling abundance (TSA) in post-fire landscapes remains untested. A study by Fang et al. [118] compared the explanatory power of the Landsat imagery with 0.5-m WorldView-2 VHR imagery for the LAI and TSA based on the field-sampling data and subsequently mapped the distribution of the LAI and TSA based on the most predictive relationships. The results showed that the pixel percentage of the canopy trees (PPCT) derived from VHR imagery outperformed all the Landsat-derived spectral indices for explaining the variance of the LAI and TSA. The analyses concluded that mitigating wildfire severity and size may increase forest resilience to wildfire damage. Given the easily damaged seed banks and relatively short seed dispersal distance of coniferous trees, reasonable human help for the natural recovery of coniferous forests was necessary for severe burns with a large patch size, particularly in certain areas. The research showed that WorldView-2 VHR imagery better resolved the key characteristics of forest landscapes, providing a valuable tool to land managers and researchers alike.
Climate change and human activities alter the spatial distribution and structure of vegetation, especially in drylands. In this context, object-based image analysis (OBIA) has been used to monitor changes in vegetation, but only a few studies have related them to anthropogenic pressure. Guirado et al. [119] assessed changes in the cover, number and shape of Ziziphus lotus shrub individuals in a coastal groundwater-dependent ecosystem in Spain over a period of 60 years and related them to human activities in the area. In particular, the study evaluated how sand mining, groundwater extraction and the protection of the area affected the shrubs. To do this, the study developed an object-based methodology to create accurate maps of the vegetation patches and compared the cover changes in the individuals. The changes in shrub size and shape were related to soil loss, seawater intrusion and the legal protection of the area measured by the average minimum distance and average random distance analysis. It was found that both the sand mining and seawater intrusion had a negative effect on individuals; on the contrary, the protection of the area had a positive effect on the size of the individuals' coverage. The findings supported the use of the OBIA for monitoring scattered vegetation patches in drylands, key to any monitoring program aimed at vegetation preservation.
Forest condition is the baseline information for ecological evaluation and management. A study by Chen et al. [120] mapped the structure and function parameters for forest condition assessment in the Changbai Mountain National Nature Reserve (CMNNR). Various mapping algorithms, including statistical regression, random forests and random forest kriging, were employed with predictors from Advanced Land Observing Satellite (ALOS)-2, Sentinel-1 and Sentinel-2 satellite sensors, the digital surface model of ALOS and 1803 field-sampled forest plots. Forest conditions were assessed by combining the predicted parameters with weights from a principal component analysis. The models explained the spatial dynamics and characteristics of forest parameters based on the independent validation. The mean assessment score suggested that forest conditions in the CMNNR were mainly the result of spatial variations of function parameters such as stand volume and soil fertility. This study provided a methodology for forest condition assessment at regional scales, as well as up-to-date information on the forest ecosystem in the CMNNR.
Han et al. [121] reported on the monitoring of droughts in the Greater Changbai Mountains (GCM) region by six drought indices, i.e., the precipitation condition index (PCI), temperature condition index (TCI), vegetation condition index (VCI), vegetation health index (VHI), scaled drought condition index (SDCI) and the temperature-vegetation dryness index (TVDI), between 2001 and 2018. This study provided a reference for the selection of drought indices for monitoring droughts to gain a better understanding of the ecosystem conditions and the environment.
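Several of these indices are simple per-pixel rescalings of a satellite time series against its historical range. As an illustration, a minimal NumPy sketch of the standard Kogan-style VCI, TCI and VHI formulations (the function names and the equal 0.5 VHI weighting are our assumptions, not necessarily the exact choices of Han et al. [121]):

```python
import numpy as np

def vci(ndvi_series):
    """Vegetation Condition Index: per-pixel NDVI rescaled to 0-100 against
    its multi-year minimum and maximum (rows = time steps, columns = pixels)."""
    ndvi_min = ndvi_series.min(axis=0)
    ndvi_max = ndvi_series.max(axis=0)
    return 100.0 * (ndvi_series - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst_series):
    """Temperature Condition Index: hotter land surface -> lower TCI."""
    lst_min = lst_series.min(axis=0)
    lst_max = lst_series.max(axis=0)
    return 100.0 * (lst_max - lst_series) / (lst_max - lst_min)

def vhi(vci_vals, tci_vals, alpha=0.5):
    """Vegetation Health Index: weighted combination of VCI and TCI."""
    return alpha * vci_vals + (1.0 - alpha) * tci_vals
```

In this formulation, low VHI values flag pixels where the vegetation signal is depressed and the thermal signal is elevated relative to their historical ranges, i.e. candidate drought conditions.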
The Songnen Plain (SNP) is an important grain production base and a designated red-line protection area in China. The understanding of the carbon use efficiency (CUE) of natural ecosystems in protected farmland areas is vital to predicting the impacts of natural and anthropogenic disturbances on carbon budgets and evaluating ecosystem functions. An article by Li et al. [122] studied variations in the ecosystem CUE in the SNP using MODIS data products and the Carnegie-Ames-Stanford approach (CASA) model. The relationships revealed between the CUE and the phenological and climate factors helped explain the CUE of the natural ecosystems in the protected farmland areas and improved the understanding of the dynamics of ecosystem carbon allocation in temperate semi-humid to semi-arid transitional regions under climate and phenological fluctuations.
The comparative evaluation of cross-boundary wetland PAs is essential to underpin knowledge-based bilateral conservation policies and funding decisions by governments and managers. The article by Lu et al. [123] reported on a study monitoring wetland change in the Wusuli River Basin, in the cross-boundary zone of China and Russia, from 1990 to 2015 using Landsat images. The spatiotemporal distribution of wetlands was identified using a rule-based object-oriented classification method. The wetland dynamics were determined by combining the annual land change area (ALCA), the annual land change rate (ALCR), landscape metrics and spatial analysis. The study revealed the changes of the natural wetlands in the Wusuli River Basin and the patterns of change. The study provided critical information for the conservation and management of ecological conditions in cross-boundary wetlands.
Despite recent progress in landslide susceptibility mapping, a holistic method is still needed to integrate and customize influential factors with a focus on forest regions. A study by Shirvani [124] tested the performance of geographic object-based random forest modeling of the susceptibility of protected and non-protected forests to landslides in northeast Iran using Landsat 8 multispectral images and DEM data. The study derived features of conditioning factors. The study confirmed that some anthropogenic activities such as forest fragmentation and logging significantly intensified the susceptibility of the non-protected forests to landslides.
As the largest freshwater lake in China, Poyang Lake provides tremendous services and functions to its surrounding ecosystem, such as water conservation and the sustaining of biodiversity, and has significant impacts on the security and sustainability of the regional ecology. The lake and associated wetlands are among the protected aquatic ecosystems with global significance. The Poyang Lake region has recently experienced increased urbanization and anthropogenic disturbances, which have greatly impacted the lake environment. The concentrations of chlorophyll-a (Chl-a) and total suspended matter (TSM) are important indicators for assessing the water quality of lakes. The study by Xu et al. [125] used data from the Gaofen-1 (GF-1) satellite, in situ measurements of the reflectance of the lake water and the analysis of the Chl-a and TSM concentrations of the lake water samples to investigate the spatial and temporal variation and distribution patterns. The study analyzed the measured reflectance spectra and conducted a correlation analysis to identify the spectral bands that were sensitive to the concentrations of Chl-a and TSM, respectively. The modeling results revealed the spatial and temporal variations of the water quality in Poyang Lake and demonstrated the capacities of the GF-1 satellite data in the monitoring of lake water quality.
The article by Duan et al. [126] presented an analysis of research publications, from a bibliometric perspective, on the remote sensing of PAs. The analysis focused on the time period from 1991 to 2018. The study extracted 4546 academic publications from the Web of Science database. Using VOSviewer software, the study evaluated the co-authorships among countries and institutions, as well as the co-occurrences of the keywords. The results indicated an increasing trend of annual publications in the remote sensing of PAs. This analysis revealed the major topical subjects, leading countries and most influential institutions around the world that have conducted relevant research in scientific publications. The study also revealed the journals that published the most articles in the subject of interest and the collaborative patterns related to the remote sensing of PAs. The analysis provided insights for understanding the intellectual structure of the field and identifying future research directions.
Concluding Remarks
Remote sensing is among the most fascinating frontiers of science and technology that are constantly improving our understanding of PAs. PAs are by no means uniform entities; they have a wide range of management aims and are governed by many stakeholders. Advances in remote sensing have helped gather and share information about PAs at unprecedented rates and scales. There are many new and exciting applications for remotely sensed data that contribute to the better-informed management of PAs. The achievements through the applications of science and technologies, the challenges, the lessons learned and the recommendations for the remote sensing of PAs deserve further attention [127].
The subjects and contents of the articles collected in this Special Issue reflect the state-of-the-art of remote sensing technologies for: capturing the dynamics of ecosystem variations; the evaluation of available sensors and data and the development of new integrated approaches; methods for processing advanced remote sensing and time series data; and the integration of multisource and open source data. These studies contributed to the monitoring of PAs from the perspectives of in situ measurements, habitat assessments, socio-economic development, policy and management factors, and inventory and practical implementations. The applications of monitoring from biospheric, atmospheric, hydrospheric and societal dimensions reflect the advantages of remote sensing in habitat mapping and biodiversity conservation, in the detection of effects from natural and anthropogenic disturbances, as well as in revealing uncertainties for the assessment of the resilience and sustainability of PAs and the mitigation approaches under changing environments.
Does bone preparation impact its shape: consequences for comparative analyses of bone shape
Vertebrate osteological collections provide comparative material for morphological analysis. Before being stored in the collection and studied by researchers, specimens are treated by preparators or curators and are cleaned. The preparation protocol employed ideally should not damage the material. Here, we explore the potential deformation of bones due to preparation using geometric morphometric methods. We focus both on intraspecific and interspecific variability. Our data on the scapular girdle of birds show that, at an intraspecific level, the effect of preparation on bone shape cannot be neglected. Paired and unpaired bones did not respond to the preparation process in the same way, possibly due to differences in function and their anatomical characteristics. Moreover, deformations due to preparation can be estimated by looking at the texture of the bone. At the interspecific level, we found no significant differences as the deformations induced by preparation are relatively small compared to differences among species. This study highlights the importance of carefully selecting preparation methods in order to avoid physical damage that could impact the shape of bones, especially for studies at the intraspecific level.
INTRODUCTION
Museum collections provide a rich source of anatomical material, often collected over the span of several centuries. These collections provide access to specimens, allowing for the study of a broad diversity and large number of animals from around the world. Before being added to collections, specimens are usually treated by preparators or curators. In order to prepare osteological material, which was common before the advent of computed microtomography facilities, specimens have to be cleaned using either natural (ranging from natural maceration, cleaning by boiling, to cleaning by bugs such as terrestrial isopods, marine isopods or dermestid beetles) or chemical (enzyme detergent soup, hydrogen peroxide or potassium hydroxide) treatments. Next, bones are dried using different techniques (natural drying lying on a flat surface or drying with artificial heat), allowing access to the bones (Fernández-Jalvo & Marin-Monfort, 2008). In theory, the preparation methods employed should not damage the integrity of the material. Thus, the protocol used should be adapted with products that are compatible with the material treated and must not interfere with possible future scientific studies. Possible consequences for the integrity of different skeletal elements depending on the preparation protocol used have already been studied and reported in several papers (Fernández-Jalvo & Marin-Monfort, 2008; Hahn, Vogel & Delling, 1991; Lemoine, 2011). Such consequences can be compared to the morphological deformations induced by the processes of fossilization (i.e., taphonomy). Only a few studies have attempted to characterize taphonomic processes and to develop approaches taking into account the deformation induced by these taphonomic effects (Denys, 2002; Fernández-Jalvo & Andrews, 2016; Lyman, 2010). Indeed, the consequences of preparation on bones often remain underestimated and poorly documented (López-Polín, 2012).
However, a study by Fernández-Jalvo & Marin-Monfort (2008) evaluated the effect of preparation methods on bones using electron microscopy. They found that, for the same bone, only 2 of the 12 methods used could be recommended: burying, and the use of enzymes with close control of the duration to minimize damage. Furthermore, another method was acceptable but not excellent: the use of potassium hydroxide (KOH) with careful control of the duration to avoid the risk of damage. This study highlights the importance of carefully selecting the preparation method in order to avoid physical damage that could impact the structure and shape of the treated bones.
Here, we decided to investigate variation in bone shape due to preparation, given the large amount of variability observed in collection specimens. We predict that these deformations could be due to the preparation process using chemicals that dissolve fat and proteins. However, some parts of the bone may also be more easily deformed than others (Fernández-Jalvo & Marin-Monfort, 2008; Hahn, Vogel & Delling, 1991). We further predict that these deformations can have an impact on morphometric studies. Preparation deformations can introduce and compound intra-individual and intra-species variability, modifying the bone shape depending on its composition, function, thickness or robustness (Lemoine, 2011).
We use geometric morphometric methods as these methods are commonly used to detect shape differences and are sensitive to small variations in shape. Shape variability can either be natural (natural variability, including variability due to the functional role of a bone) or non-natural (due to preparation). We focused on the bones of the scapular girdle in birds. The scapular girdle is composed of two unpaired bones, the sternum and the furcula, and three paired bones: the scapula, the coracoid and the humerus (Fig. 1). All these bones have an important role during locomotion, as muscles involved in wing movements are attached to them.
Two types of functions were previously identified in the scapular girdle of birds: (1) bones that need to resist the action of the attached muscles and that thus need to be robust, and (2) bones that play a role in the protection of internal organs such as the heart and viscera. Both of these functions may also be related to bone flexibility, like the spring function of the furcula, which can absorb and return energy during the wingbeat (Goslow, Dial & Jenkins, 1989; Kardong, 2012; Mitchell et al., 2017).
To assess the impact of preparation on bone, we analyzed the texture of the bone, its shape variation (disparity), and its asymmetry. The asymmetry was defined as significant differences in shape within a single specimen. We expect that the asymmetry should be higher if there is a preparation effect. To assess whether the observed deformations may impact subsequent analyses, we compared effects at the intraspecific and interspecific level.
Material
We sampled 20 complete quail skeletons (Coturnix coturnix, Galliformes). These quail bones are housed in the research collection of A. Abourachid. All specimens were bred in captivity and prepared using the same protocol (see protocol below). In order to assess whether the intraspecific variability is lower than the interspecific one, we added several other species. We sampled one individual of six species from the collections of the Museum National d'Histoire Naturelle (Paris, France). Four are closely related to quails: Meleagris gallopavo (Galliformes), Anseranas semipalmata, Chauna chavaria and Cygnus olor (Anseriformes, sister group) and two share the same flight type: Coua cristata (Cuculiformes) and Cariama cristata (Gruiformes) ( Table 1). We selected one individual per species for the interspecific dataset.
Preparation protocol
The preparation protocol used for the quail data set is composed of ten steps. First, the birds are eviscerated, and feathers, skin and viscera are removed. Then, large muscles are removed (defleshing). This is facilitated by carcass reduction (dismemberment and decapitation). Carcasses are then boiled for three hours and put into a lukewarm salt water bath with an addition of an enzyme (papain: cysteine protease enzyme; 1 g/L) for 48 h at 60 °C. At the end of this step, the bones are put into a lukewarm sodium perborate bath until chilled (for more than 24 h). At that point, the bones are well separated and free of flesh. Bones are rinsed and dried, lying on an absorbent surface (for 24 h). Finally, if traces of fat persist on the surface of the bones after the drying step, they are put in a bath of absolute alcohol for several days; the bath can be renewed several times depending on its degree of saturation with bone fat (yellowish coloration). When the bones no longer appear saturated, a final drying step is necessary to evaporate the alcohol.
3D data collection
We generated 3D surface scans with a white light fringe Breuckmann scanner (SmartSCAN) and its scanning software Optocat (http://gmv.cast.uark.edu/scanning-2/software/opticat/) at the "plateforme de morphométrie" of the UMS 2700 of the MNHN. The scanner consists of a projector of white light fringes and two cameras that are positioned asymmetrically around the projector. Data on the surface of a bone are accurately captured and reconstructed by triangulation angles implemented in the Optocat software. It finally produces a high-resolution meshed 3D object which provides a representation of the surface of the bone only. For each specimen, we scanned eight bones: a sternum, a furcula, both coracoids (right and left), both scapulae (right and left) and both humeri (right and left). Further processing is performed with the Geomagic Studio 2013 (http://www.geomagic.com) software package in order to obtain a surface on which data can be accurately acquired.
Shape quantification using geometric morphometrics
In order to assess the effect of the deformations of the bone and their potential effect on shape analysis, we use 3D geometric morphometric analysis on our sample of seven species of birds. Geometric morphometrics allows a quantification of shape variation using Cartesian landmark coordinates. This approach permits the quantitative study of the shape variation of bones in relation to quantitative and qualitative traits. We created a set of landmarks in order to quantify morphological disparity. Morphometric data were acquired on each surface scan of each bone using the IDAV Landmark software. For each bone, landmarks were chosen to accurately describe the complex geometry of each element. We used anatomical landmarks as well as sliding semilandmarks on curves and on surfaces to describe bone shape more accurately. Anatomical landmarks and sliding semilandmarks on curves were manually acquired on each scan by the same person (F.P.), whereas sliding semilandmarks on surfaces were semi-automatically projected onto the surface of each bone using the approach described below (see 3D sliding-landmark procedures). To be able to compare the paired bones, we mirrored right bones into left bones, allowing us to include all paired bones in the same comparative analysis. We kept the side information for each paired bone.

Figure caption: see Table 2 for landmark definitions. Sternum: (C) lateral view, (D) ventral view, see Table 3 for landmark definitions. Left coracoid: (E) dorsal view, (F) ventral view, see Table 4 for landmark definitions. Left scapula: (G) dorsal view, (H) ventral view, see Table 5 for landmark definitions. Left humerus: (I) medial view, (J) lateral view, see Table 6 for landmark definitions. Blue points represent landmarks and gold points represent semi-landmark curves.
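The mirroring of right bones into left configurations amounts to reflecting each landmark configuration across one coordinate plane; a minimal NumPy sketch for illustration (the function name and axis choice are our assumptions, and the actual study used R-based tools):

```python
import numpy as np

def mirror_landmarks(landmarks, axis=0):
    """Reflect an (n_landmarks x 3) configuration across one coordinate plane,
    e.g. to turn a right-side bone into a left-side configuration so that
    paired bones can enter the same comparative analysis."""
    mirrored = np.asarray(landmarks, dtype=float).copy()
    mirrored[:, axis] *= -1.0
    return mirrored
```

The reflection preserves all inter-landmark distances while flipping the handedness of the configuration, so a mirrored right bone can be superimposed on a left bone by a proper rotation.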
3D sliding-landmark procedures
The 3D sliding-landmark procedure (Bardua et al., 2019; Bookstein, 1997; Gunz, Mitteroecker & Bookstein, 2005) was used in this study. In this procedure, sliding landmarks are transformed into spatially homologous landmarks that can be used to compare shapes. They slide along curves that are predefined on each surface. This operation is performed using the Morpho package in R (v3.5.0) (Schlager, 2017; Schlager, Jefferis & Ian, 2019). Curve and surface sliding landmarks are projected from the template onto each specimen for each data set. In this step, each new specimen is only defined by its landmarks and semilandmarks on curves. Next, the surface sliding landmarks are projected onto the predefined curves and the surface of the new specimen using a template. Finally, spline relaxation was performed, minimizing the bending energy criterion.
Generalized procrustes superimposition
Generalized Procrustes Superimposition, or GPA (Rohlf & Slice, 1990), allows the comparison of object shapes by removing size, orientation, and position relative to the origin of the coordinate system. The first step is a translation of all the objects, superimposing them on their center of gravity. The second step is a normalization: all the objects are scaled so that they end up at the same scale. During this operation, all the coordinates of each object are divided by the centroid size, which is the square root of the summed squared distances of each landmark to the centroid (Bookstein, 1997). Finally, each conformation is rotated by minimizing the summed squared distances between all the landmarks. We performed the GPA using the function gpagen in the geomorph R package (Adams & Otárola-Castillo, 2013). After superimposition, each object is defined by Procrustes coordinates and rescaled. Thus, differences in conformation or object shape can be studied and are simply represented by changes in the proportion of structures. After this operation has been performed for each data set, the landmarks of all specimens are comparable.
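The translation, scaling and rotation steps described above can be sketched compactly. A minimal NumPy illustration of an iterative GPA follows (gpagen in geomorph implements the full procedure, including semilandmark sliding, so this is only a sketch of the core alignment):

```python
import numpy as np

def centroid_size(x):
    """Square root of the summed squared distances of each landmark to the centroid."""
    centered = x - x.mean(axis=0)
    return np.sqrt((centered ** 2).sum())

def align(x, ref):
    """Optimal rotation of a centered configuration x onto ref (Kabsch/SVD)."""
    u, _, vt = np.linalg.svd(x.T @ ref)
    r = u @ vt
    if np.linalg.det(r) < 0:      # forbid improper rotations (reflections)
        u[:, -1] *= -1.0
        r = u @ vt
    return x @ r

def gpa(configs, n_iter=10):
    """Generalized Procrustes superimposition: center each (k x m) configuration,
    scale it to unit centroid size, then iteratively rotate all onto the mean."""
    shapes = [(x - x.mean(axis=0)) / centroid_size(x) for x in configs]
    mean = shapes[0]
    for _ in range(n_iter):
        shapes = [align(s, mean) for s in shapes]
        mean = np.mean(shapes, axis=0)
        mean /= centroid_size(mean)
    return shapes, mean
```

After this superimposition, the remaining coordinate differences between specimens reflect only shape, which is what the subsequent analyses operate on.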
Statistical analysis
All the statistical analyses below were done in R (v.3.5.0; R Core Team, 2018).
Principal component analysis
In order to explore the distribution of the specimens in the morphological space (morphospace) and to reduce the number of dimensions of our dataset, we performed a principal component analysis (PCA) using the function plotTangentSpace of the geomorph package in R (Adams & Otárola-Castillo, 2013).
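On superimposed coordinates, this tangent-space PCA amounts to an SVD of the centered matrix of flattened configurations; a minimal sketch (plotTangentSpace in geomorph couples this decomposition with plotting, so the function below is an illustrative stand-in):

```python
import numpy as np

def tangent_space_pca(aligned_shapes):
    """PCA on superimposed configurations: flatten each (k x m) shape to a row,
    center on the mean shape, and decompose with an SVD."""
    data = np.array([np.asarray(s).ravel() for s in aligned_shapes])
    centered = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u * s                          # specimen scores on each PC
    explained = s ** 2 / (s ** 2).sum()     # proportion of variance per PC
    return scores, vt, explained
```

The per-axis variance proportions returned here correspond to the percentages reported for PC1-PC4 in the Results.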
Differences in bone shape depending on bone texture
We wanted to compare, for each bone, the external appearance as a proxy for deformation due to preparation. Each bone was categorized depending on its external appearance, from oily to powdery. We created three categories: oily for yellow and shiny bones, meaning a lot of fat remained; powdery for bones that are very white and dusty, representing little fat; and neutral for the other bones. We tested for shape differences depending on these qualitative categories using a multivariate analysis of variance (MANOVA) on the principal component (PC) scores accounting for 95% of the overall variance of each bone (furcula: 10 PCs representing 95.5%, sternum: 11 PCs representing 95.7%, coracoid: 22 PCs representing 95.5%, scapula: 18 PCs representing 95.4% and humerus: 23 PCs representing 95.1% of the overall variance).
Visualizing shape similarities using a neighbor joining tree
We computed neighbor joining trees on the Euclidean distances using at least 95% of the overall variance in order to obtain unrooted trees.
Quantification of asymmetry to assess the impact of bone preparation using t-tests
In order to quantify the preparation effect, we tested for the presence of asymmetry using a paired Student's t-test comparing the right and left parts of the bones (Kharlamova et al., 2010). We used the t.test function in R. In the same way, we compared symmetrized and non-symmetrized shapes.
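Per bone, this reduces to a paired t statistic on per-specimen left/right values of a shape variable (for instance a PC score). A minimal sketch (the study used R's t.test, which additionally returns a p-value from the t distribution; here only the statistic and degrees of freedom are computed):

```python
import numpy as np

def paired_t(left_scores, right_scores):
    """Two-sided paired t statistic on per-specimen left/right differences.
    A large |t| indicates directional asymmetry between sides."""
    d = np.asarray(left_scores, dtype=float) - np.asarray(right_scores, dtype=float)
    n = d.size
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t_stat, n - 1
```

The pairing matters: each difference is taken within one individual, so inter-individual shape variation does not inflate the denominator.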
Quantification of disparity for each bone shape
We also calculated the morphological disparity of each bone in both datasets using the D index, which gives a numerical value quantifying how different the bones are from each other, with the morphol.disparity function of the geomorph package in R (Zelditch et al., 2004).
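Such a disparity measure corresponds to the Procrustes variance of a group of aligned shapes: the mean squared distance of each specimen to the group mean shape. A minimal sketch, assuming the configurations have already been superimposed (morphol.disparity in geomorph also offers permutation tests between groups):

```python
import numpy as np

def procrustes_variance(aligned_shapes):
    """Morphological disparity of a group: mean squared distance of each
    flattened, aligned configuration to the group mean shape."""
    data = np.array([np.asarray(s, dtype=float).ravel() for s in aligned_shapes])
    mean_shape = data.mean(axis=0)
    return float(((data - mean_shape) ** 2).sum(axis=1).mean())
```

A group of identical shapes has zero disparity, and more widely scattered configurations yield larger values, which is how the furcula and sternum can be compared against the paired bones.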
Assessing a possible effect of bone preparation on interspecific morphological studies
Finally, we performed a PCA and disparity analyses on the interspecific data set in order to compare it to the intraspecific variability. This allows us to assess a possible effect of bone preparation on interspecific morphological studies. If the impact of bone preparation is low, we expect to see a clustering of all the C. coturnix specimens in the same part of the morphospace, whereas the other species should occupy a larger part of the morphospace. We also expect that the disparity of C. coturnix will be lower than that of all the other species combined.
Intraspecific level
Shape differences depending on texture or color
The results of the MANOVAs showed that powdery bones are significantly morphologically different from neutral and oily bones (p-value below 0.01; Fig. 3; Table 7). Powdery bones, in comparison to neutral and oily ones, are characterized by furculae with narrower clavicles, sterna with dorsolateral and caudolateral processes that are more distant from the central part, coracoids with a thinner shaft, scapulae with a thinner blade and humeri with a more gracile shaft. No shape differences were found between oily and neutral bones.
Furcula
We computed the consensus shape of the furcula. The points on each side were very dispersed, which means that there is considerable shape variation in the furculae (Fig. 4A). The four first axes of the intraspecific PCA explained 83.5% of the total variance (PC1 = 44.1%, PC2 = 26.9%, PC3 = 7.4% and PC4 = 5.1%; Fig. 4B). Two types of shapes were distinguished along the first axis. The negative side of the axis was represented by a furcula with the clavicles more distant from one another and a rounded, caudally oriented symphysis. On the contrary, narrow clavicles and an elongated, dorsally oriented symphysis were situated towards the positive side of the axis.
Sternum
We found the same pattern for the sternum as observed for the furcula (Fig. 5A). Thin parts on each side were very variable in orientation and shape. However, both the center part and the keel, showed little deformation. The four first axes of the PCA explained 65.3% of the total variance (PC1 = 25.7%, PC2 = 17.2%, PC3 = 12.0% and PC4 = 8.7%; Fig. 5B). Two types of shapes were distinguished along the first axis.
The negative part was represented by a sternum with dorsolateral and caudolateral processes more distant from the central part of the sternum. The second axis showed differences in the anterior part of the sternum with the coracoid joint and the craniolateral process which were more prominent on the negative part of the axis compared to the positive part.
Coracoids
For the coracoid bone, which is a paired bone, the consensus shape showed that all the landmarks overlapped (Fig. 6A). This was confirmed by the fact that all right and left coracoids were each other's closest neighbors in the neighbor joining trees (Fig. 6B).
The four first axes of the PCA explained 54.3% of the total variance (PC1 = 17.9%, PC2 = 15.1%, PC3 = 12.0% and PC4 = 9.3%; Fig. 6C). Two types of shapes could be distinguished along the first axis. The positive part was represented by a coracoid with an angular sternocoracoidal process. The second axis showed differences in the anterior part of the coracoid, with the acromion and the clavicle facet being more prominent on the positive part of the axis than on the negative part.
Scapula
Scapula consensus shape showed that all the landmarks overlapped (Fig. 7A). Yet, not all right and left scapulae were each other's closest neighbors in neighbor joining trees (Fig. 7B). The four first axes of the PCA explained 67.8% of the total variance (PC1 = 27.3%, PC2 = 16.2%, PC3 = 14.2% and PC4 = 10.1%; Fig. 7C). Along the first axis, the positive part was represented by a gracile scapula with the anterior part of the blade being enlarged. The second axis showed differences on the global robustness of the blade on the positive part of the axis and a more gracile and curved blade on the negative part.
Humerus
For the humerus, the consensus shape showed that all the landmarks overlapped (Fig. 8A). This seemed congruent with the neighbor joining tree, where almost all right and left humeri were each other's closest neighbors (Fig. 8B). The first four axes of the PCA explained 51.7% of the total variance (PC1 = 17.9%, PC2 = 15.1%, PC3 = 10.2% and PC4 = 8.5%; Fig. 8C). The positive part was represented by a robust humerus with a large shaft and articulations. In contrast, gracile humeri with long and thin shafts were associated with the negative part of the axis. The second axis highlighted a difference in head length on the anterior part of the humerus, with a longer head at the negative part of the axis.
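The per-axis variance percentages reported throughout these results come from a principal component analysis of the Procrustes-aligned landmark coordinates. The sketch below illustrates that computation with randomly generated stand-in coordinates (not the study's actual landmark data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for Procrustes-aligned landmark data: 30 specimens,
# 20 landmarks x 3 coordinates flattened into 60 shape variables,
# with decreasing variance so the first axes dominate.
shapes = rng.normal(size=(30, 60)) * np.linspace(2.0, 0.1, 60)

pca = PCA()
pca.fit(shapes)

# Per-axis percentage of total shape variance, as reported in the
# text (e.g. "PC1 = 25.7%, PC2 = 17.2%, ...").
pct = pca.explained_variance_ratio_ * 100
print([round(p, 1) for p in pct[:4]], round(float(pct[:4].sum()), 1))
```

The extreme shapes described along each axis correspond to specimens with the largest positive or negative scores on that component.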
Disparity and symmetry
Unpaired bones, the furcula and the sternum, had a higher disparity than the paired bones (Table 8). Symmetry tests showed that the bones have different patterns of symmetry (Table 9). Unpaired bones, such as the furcula and the sternum, seemed to be less symmetrical than paired bones such as the coracoid, the scapula and the humerus. Among the unpaired bones, the results showed that the sternum seemed to be more asymmetrical than the furcula. These symmetry test results were congruent with the disparity tests.
Interspecific level analyses to assess the impact of bone deformation in a broader context
The disparity calculation showed a larger disparity between species than among quails (Table 8).
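Morphological disparity of the kind compared in Table 8 is commonly computed as the mean squared distance of each aligned shape to the consensus shape. A minimal numpy sketch on synthetic landmark configurations (all values illustrative, not the study's data):

```python
import numpy as np

def disparity(shapes):
    """Mean squared distance of each (already aligned) landmark
    configuration to the consensus (mean) shape."""
    consensus = shapes.mean(axis=0)
    return float(np.mean(np.sum((shapes - consensus) ** 2, axis=(1, 2))))

rng = np.random.default_rng(1)
consensus = rng.normal(size=(20, 3))                         # 20 landmarks in 3D
quails = consensus + rng.normal(scale=0.02, size=(15, 20, 3))
species = consensus + rng.normal(scale=0.10, size=(15, 20, 3))

# The more scattered interspecific sample yields the larger disparity,
# mirroring the pattern reported in Table 8.
print(disparity(quails) < disparity(species))  # True
```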
Scapula
The first four axes of the PCA computed on the coracoid shapes explained 79.8% of the total variance (PC1 = 39.2%, PC2 = 20.4%, PC3 = 12.9% and PC4 = 7.3%; Fig. 9D). The quails clustered together, although Coua cristata overlapped with them on the first two axes. The disparity calculation confirmed that there was less disparity among quails than at the interspecific level (Table 8).
DISCUSSION
The preparation process is an obligatory step in readying bones for collections. It is, however, important to be able to quantify potential effects of preparation on the bones (Hahn, Vogel & Delling, 1991; Lemoine, 2011). In practice, there appears to be no specific preparation protocol for bird bones. Yet, bird bones are pneumatic and this characteristic should be taken into account during preparation (Baumel et al., 1993; Fernández-Jalvo & Marin-Monfort, 2008; Novitskaya et al., 2017; Pennycuick, 1967; Ritzen, 1978). Moreover, the enzyme-based preparation protocol used for our bones is one of the two best protocols studied by Fernández-Jalvo & Marin-Monfort (2008) for avoiding physical damage.
What is the impact of deformation due to preparation on the bone shape at the intraspecific level?
Differences in shape depending on the color and texture
The results of the MANOVAs performed on each bone show significant shape differences depending on texture. The main differences are between powdery bones and the other bone types (Fig. 3; Table 7). Powdery bones have a wider distribution in the morphospace for each bone. Considering the extreme bone shapes shown in the PCA for each bone, the gracile shapes most often match the powdery bones. This suggests that the preparation process directly affects the thickness and composition of the bone.
Looking more specifically at the individuals concerned, some individuals have powdery bones for all the paired bones. An oily texture is not found on all the bones of the same specimen, which suggests that this characteristic may not be individual-specific. It could instead be linked to the type of preparation, more specifically to the removal of fat. Preparators usually evaluate fat saturation by examining the bone texture directly after an obligatory first bath. There are three possibilities during preparation: (1) the fat saturation of the bone looks low and the treatments are stopped; (2) the fat saturation of the bone is still too high, so this step is repeated; or (3) the first bath treatment itself may be too aggressive for the bone, and the texture is already powdery after the initial fat removal step. It is known that the bird furcula is composed of Haversian bone over a large part of the fused portion of the clavicles (Cubo et al., 2005; Mitchell et al., 2017; Ponton et al., 2007). This particular bone formation may result in a different reaction when treated with the chemicals used in the preparation protocol (Lemoine & Guilminot, 2009). For this reason, preparation protocols have to be adapted to the specific bones (Hahn, Vogel & Delling, 1991). Because individuals and bones may differ in internal composition, length, width, weight and thickness, using the same quantity of chemicals or the same processing time for all bones could affect the bone. The external appearance of the bone thus appears to be a good indicator of the impact of preparation and, as such, a good proxy for preparation deformation. It would be interesting in future studies to perform histological analyses to detect the effect of chemicals during the preparation of the bones.
Furcula
The analysis of the furcular shape shows that the main shape modifications occur on the clavicles and their symphysis. Considering the results of the PCA and shape differences depending on the texture, the deformation appears to result in a flatter furcula with narrower and straighter clavicles and with an elongated and more dorsally oriented symphysis (Fig. 4). These shape modifications could be explained by a modification of the Haversian bone, which is specifically located in this area of the furcula. Indeed, furcula bone composition is known to be different from the other bird bones (Mitchell et al., 2017). Furthermore, wing beats during locomotion have been shown to induce cyclic deformations, with bone remodeling replacing damaged bone with Haversian bone (Ponton et al., 2007). This bone type seems more likely to be affected by the chemical preparation process compared to the non-Haversian bone.
Sternum
The main parts of the sternum shape affected by preparation are the lateral processes, the thinner parts of the sternum, which appear more distal from the central part (Fig. 5). The central part of the sternum has a protective function and provides support for the carina. This part of the sternum is thick and robust, holding the pectoral muscles and withstanding their force (Baumel et al., 1993; Harvey, Kaiser & Rosenberg, 1969). The cranial and central part is involved in the coracoid joint area; this functional constraint could explain its slight deformation. The lateral thin parts of the sternum are interconnected with the fasciae and aponeuroses of the flat oblique abdominal muscle (Goslow, Dial & Jenkins, 1990). Moreover, these abdominal muscle forces may deform the bone during wingbeats to maintain the unity of the trunk (Jenkins, Dial & Goslow, 1988). Jenkins, Dial & Goslow (1988) showed that the sternum also exhibits cyclical movements with each wingbeat: during the downstroke, the sternum ascends and retracts caudodorsally, and during the subsequent upstroke it descends and protracts cranioventrally. As in the furcula, the flexible parts of the sternum involved in wingbeats seem to be more easily affected by the preparation process.
Coracoid
Coracoid bones display less shape variation than the unpaired bones. The main shape modification seems to be a more gracile conformation of the bone: the shaft is sharper, the distal part is sharp-edged and the proximal part is more curved. These deformations resemble a slight contraction of the whole bone on itself. Coracoids have an important function during flight, as they act as pulleys for the pectoral muscles, which are the largest muscles involved in the wing upstroke. Coracoids have to be robust enough to support and transmit muscle forces without deforming (Beaufrère, 2009; Nesbitt et al., 2009). Their crucial role in force transmission could be a strong constraint on both shape and robustness (George & Berger, 1966; Shufeldt, 1901, 1909). This result seems to be confirmed by the neighbor joining tree, which shows that the right and left coracoids are well paired for each individual (Fig. 6). This result supports the hypothesis of the strong solidity of this bone (George & Berger, 1966; Gordon et al., 2008).
Scapula
The neighbor joining tree performed on the shape data of the scapula shows some morphological variation between the right and left bones for each individual. Natural asymmetry is not expected to be higher within individuals than between individuals, thus, these differences could be due to the preparation process. This result is supported by the wide distribution in the morphospace, especially on the positive part of the first axis which is characterized by a gracile and low scapula (Fig. 7). This suggests that these morphologies may not be due to natural asymmetries but more likely due to the preparation process (Hahn, Vogel & Delling, 1991;Lemoine, 2011).
Humerus
In contrast to the results obtained for the scapula, the neighbor joining tree of the humeri shows that left and right bones belonging to the same individual cluster together. This suggests that the preparation process may have less impact on the humerus. Looking at the PCA, one group of bones seems more isolated from the others: their shape is gracile, the deltoid crest is less prominent and the distal extremity is less robust (Fig. 8). As for the scapula, extreme humerus shapes have a more gracile morphology than the mean bone shape. Moreover, the humerus is known not to be significantly loaded in direct tension or compression, which implies no particular ossification or solidification of this bone (Pennycuick, 1967). Again, this suggests a non-natural deformation that could be due to the preparation process affecting the thickness of the whole bone (Hahn, Vogel & Delling, 1991; Lemoine, 2011).
In general, powdery paired bones are more gracile than neutral and oily bones. It seems that the last step of the preparation protocol, the fat removal which can be repeated several times, is the main factor causing bone shape deformation.
Disparity and asymmetry
We observed that unpaired bones display a greater disparity than paired ones, and the same pattern is found in the interspecific analyses (Table 8). This could mean that unpaired bones are more easily deformed by preparation than paired bones, which could be explained by two factors. (1) Paired bones can easily be dried in a specific position, whereas the most convenient method for an unpaired bone is to lay it on its side. This position can induce a morphological deformation on one side only, because the bone has to support its own weight; such drying can give the bone a directional drying asymmetry. (2) All vertebrates display bilateral symmetry, yet none are perfectly symmetric. Many factors can affect symmetry, including lateralisation (Galatius & Jespersen, 2006; Klingenberg, 2003; Mays, Steele & Ford, 1999; Palmer, 2004). This phenomenon should, however, affect paired and unpaired bones similarly. Yet the symmetry tests show significant differences between the right and left sides for unpaired bones, such as the furculae and especially the sterna, whereas the differences are not significant for paired bones. Given that one side is always significantly different from the other, this suggests an impact of the drying process on bone asymmetry.
What is the impact of deformation due to preparation on shape analysis at the interspecific level?
The interspecific dataset demonstrates that, despite the large morphological disparity observed within the quail dataset, analyses conducted at an interspecific level are not impacted by the effect of bone preparation (Table 8). This suggests that, even if the preparation protocol causes some deformation, at the interspecific level of analysis these deformations are too small to be significant.
CONCLUSIONS
In summary, it appears that flexible bones and bones with thin parts, such as the blades of the sternum and scapula, are more likely to be deformed by the preparation process. In contrast, the central part of the sternum and the keel, which provide protection and bear large muscle insertions, and the coracoid, with its robust pulley function, are not deformed. Symmetry tests show that the observed shape variations cannot be natural, because they are located mainly on unpaired bones and are not equally distributed between the two sides of the bone. Thus, the drying process could induce some deformation of unpaired bones. Moreover, for paired bones, the more gracile bone shapes with a powdery texture appear to be a direct consequence of the preparation process. We showed that these preparation deformations can influence intraspecific analyses and lead to erroneous functional conclusions, especially when studying the effect of symmetry on bones. Finally, these preparation-induced deformations have little effect at the interspecific level. This study highlights the importance of carefully selecting preparation methods in order to avoid physical damage that could affect the shape of the treated bones. To understand the effect of preparation on bone deformation more accurately, future studies should compare X-ray computed tomography scans of specimens before and after preparation.
A data driven approach to identify trajectories of prenatal alcohol consumption in an Australian population-based cohort of pregnant women
Accurate information on dose, frequency and timing of maternal alcohol consumption is critically important when investigating fetal risks from prenatal alcohol exposure. Identification of distinct alcohol use behaviours can also assist in developing directed public health messages about possible adverse child outcomes, including Fetal Alcohol Spectrum Disorder. We aimed to determine group-based trajectories of time-specific, unit-level, alcohol consumption using data from 1458 pregnant women in the Asking Questions about Alcohol in Pregnancy (AQUA) longitudinal study in Melbourne, Australia. Six alcohol consumption trajectories were identified incorporating four timepoints across gestation. Labels were assigned based on consumption in trimester one and whether alcohol use was continued throughout pregnancy: abstained (33.8%); low discontinued (trimester one) (14.4%); moderate discontinued (11.7%); low sustained (13.0%); moderate sustained (23.5%); and high sustained (3.6%). Median weekly consumption in trimester one ranged from 3 g (low discontinued) to 184 g of absolute alcohol (high sustained). Alcohol use after pregnancy recognition decreased dramatically for all sustained drinking trajectories, indicating some awareness of risk to the unborn child. Further, specific maternal characteristics were associated with different trajectories, which may inform targeted health promotion aimed at reducing alcohol use in pregnancy.
www.nature.com/scientificreports/
delivering harm reduction messages, especially to women for whom abstinence is difficult to achieve. Alternatively, if the risk to the fetus is known to increase rapidly with only small increases in the level of PAE, it would be important to emphasise the need for abstinence. Further, the increasing use of meta-analysis to consolidate research evidence has highlighted the lack of detailed, comparable measures of PAE which can be aggregated across studies 5,6 . Ideally, these measures should provide detailed information in metric units (e.g. grams of absolute alcohol) that characterise consumption patterns at specific pregnancy timepoints.
The 'Asking Questions about Alcohol in Pregnancy' longitudinal cohort study (AQUA) was specifically designed to capture common prenatal drinking patterns in order to investigate potential effects on long-term developmental outcomes of children in the general population 7 . The PAE classifications originally used in the study were categorised to reflect real-life maternal drinking, taking into account the dose, pattern and timing of alcohol consumption during pregnancy, based on the 'composite' method described by O'Leary et al. in 2010 8 . Briefly, classification was based on a composite of continuous measures, i.e. the total number of grams of absolute alcohol consumed per week and the maximum amount of absolute alcohol consumed per occasion of drinking. Units of alcohol (grams) were calculated from maternal self-report of the type and amount of alcohol consumed, using a detailed pictorial drinks guide at four timepoints during pregnancy. This enabled different patterns of PAE to be discerned, particularly low-level consumption, episodic binge drinking, special occasion drinking, and cessation of alcohol consumption upon pregnancy recognition, usually between five and seven weeks of gestation 9 .
Developments in classifying temporal continuous data allow for maternal alcohol consumption patterns to be identified more objectively. Methods such as group-based trajectory modelling (GBTM) are increasingly employed in clinical research to describe the course of an outcome or behaviour over time 10 . In alcohol research, GBTM can be used as a tool to measure consumption trajectories arising directly from the source data, without the need for pre-determined classification. This allows analysis of temporal drinking patterns and provides more nuanced results than aggregate or presence/absence drinking information 5,11,12 . This is especially important when investigating the potential relationship between low-level alcohol use and adverse child outcomes 6 . Moreover, trajectory modelling using unit-level and temporal alcohol consumption data has the potential to provide more detailed information on actual drinking patterns than predetermined alcohol consumption categories (i.e. low/ moderate/high levels).
The main aim of this paper was to determine the different alcohol consumption trajectories in a general pregnant population using a data driven approach. Another aim was to describe salient maternal characteristics predictive of these patterns, which may both assist in targeting prevention approaches in different populations and identify potential confounding factors to be considered when investigating the casual relationship between PAE and child outcomes.
Methods
The Asking Questions about Alcohol (AQUA) longitudinal cohort study commenced in July 2011 and comprises a cohort of 1570 mother/child dyads recruited from the general population in early pregnancy. All women with a singleton pregnancy, attending their first antenatal appointment before 19 weeks gestation at one of seven metropolitan public hospital antenatal clinics in Melbourne, Australia, were eligible to participate. Being 16 years or older and being able to read and write English were prerequisites for participation. The methods and participation rates are described in detail in the original study protocol 7 . During pregnancy, women completed three questionnaires, and after birth, questionnaires were sent at 12 and 24 months to women for whom complete PAE information was available (n = 1570). Data from 1458 (92.9%) women were used in this analysis 9 . We excluded 112 women (7.1%) who were lifetime abstainers because our target population was women who would normally drink some alcohol but who may abstain during pregnancy.
Prenatal alcohol consumption data. Detailed information on the quantity and frequency of alcohol consumption was collected via questionnaires delivered at: (1) recruitment (< 18 weeks' gestation); (2) 25 weeks' gestation; and (3) 35 weeks' gestation. Together these provided data on alcohol consumption in the three months pre-pregnancy, post-conception but pre-pregnancy recognition, and for each trimester of pregnancy. The mean (SD) gestational age at pregnancy recognition was 4.9 (1.5) weeks 9 .
Women were provided with a pictorial drinks guide showing common types and volumes of alcoholic drinks including red and white wine, champagne, beer, cider, spirits, alcoholic sodas, pre-mixed spirits, port, sherry, and cocktails. This drinks guide was developed with input from a focus group study 13 . Women were asked to use the drinks guide to identify their 'usual' pattern of drinking, with provision for up to five types of drinks. For each beverage identified, they were asked how often they usually drank this type of alcohol (less than once per month, 1-2 days per month, 1-2 days per week, 3-4 days per week, 5 or more days per week) and how many drinks they usually consumed on each occasion (less than one drink, 1-2 drinks, 3-4 drinks, 5-6 drinks, 7 or more drinks). Women were then asked if there were any 'special occasions' (or difficult times) when they consumed more alcohol than usual, the frequency of these occasions, the drink types, and the number of drinks per occasion. If a woman reported consuming seven or more drinks on any occasion, she was asked to provide the maximum number. Estimates from 'special occasions' were combined with information from 'usual' alcohol consumption to calculate a maximum weekly intake 9 .
The amount of alcohol consumed per week was derived from the number and types of drinks reported by women, which were converted to standard drinks to calculate the amount of absolute alcohol in grams (gAA) consumed. One standard drink in Australia is equal to 10 gAA. A binge episode was defined as consumption of at least five standard drinks (50 gAA) per drinking occasion. Binge drinking prior to pregnancy was dichotomised as "yes, at least one binge episode" or "no binge drinking". Women were also asked about their drinking history, including how old they were when they first started drinking regularly and their age when they first became intoxicated from drinking alcohol (defined as slurred speech, being unsteady on their feet, or blurred vision). Responses were dichotomised according to whether or not women were at least 18 years old at the time, reflecting the legal drinking age in Australia. To gauge possible individual variation in alcohol metabolism, women were asked if, prior to their pregnancy, they felt the effects of alcohol very quickly, quickly, normally, slowly, or very slowly.
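As a rough sketch of this conversion, the categorical quantity-frequency responses can be turned into weekly grams of absolute alcohol. The per-week frequency midpoints below are hypothetical illustrative values (the study's exact conversion is not given here); only the 10 g standard drink and the 50 gAA binge threshold come from the text:

```python
# Hypothetical midpoints for the questionnaire's frequency categories,
# expressed as drinking occasions per week (illustrative assumption).
OCCASIONS_PER_WEEK = {
    "<1/month": 0.12, "1-2/month": 0.35, "1-2/week": 1.5,
    "3-4/week": 3.5, "5+/week": 5.5,
}
STANDARD_DRINK_GAA = 10  # one Australian standard drink = 10 g absolute alcohol

def weekly_gaa(beverages):
    """beverages: list of (frequency category, standard drinks per occasion)."""
    return sum(OCCASIONS_PER_WEEK[freq] * drinks * STANDARD_DRINK_GAA
               for freq, drinks in beverages)

def is_binge(drinks_per_occasion):
    """Binge episode: at least five standard drinks (50 gAA) on one occasion."""
    return drinks_per_occasion * STANDARD_DRINK_GAA >= 50

# Hypothetical 'usual' pattern: wine 1-2 days/week plus beer 1-2 days/month.
usual = [("1-2/week", 1.5), ("1-2/month", 3.5)]
print(round(weekly_gaa(usual), 1), is_binge(5))
```

Estimates from 'special occasions' would then be added to this usual weekly total, as described above.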
Statistical analysis.
Group-based trajectory modelling (GBTM), a specialised form of finite mixture modelling that does not require complete data across all time points 10 , was used to investigate prenatal alcohol consumption trajectories. For a hypothesised number of underlying latent groups, it uses maximum-likelihood estimation to: identify distinctive clusters of individuals who follow similar trajectories for an outcome; outline the shape of each trajectory and the size of each group; and profile the characteristics of individuals within trajectory groups. GBTM allows analysis of factors influencing group membership through the inclusion of time-invariant predictors. Analyses were conducted in Stata/IC v15.1 (StataCorp LLC) using the traj plugin. Model selection involved two stages: (1) identification of the optimal number of trajectory groups and (2) determination of the preferred polynomial orders specifying the shape of the identified trajectories. Best-fitting models were determined for two to six groups (models of seven groups and above failed to converge) and then compared using the Bayesian Information Criterion (an increase of BIC [ΔBIC] > 2), model parsimony, entropy closer to 1, and fit with prior theory 14,15 . Participants were then classified into trajectory groups according to their maximum posterior probability of group membership. Model fit was further assessed by calculating group mean posterior probability and the odds of correct classification (OCC) 10 .
Total alcohol intake per week in grams was modelled over four time points, roughly classified in terms of gestational week: prior to pregnancy recognition = 5 weeks, trimester one = 13 weeks, trimester two = 25 weeks, and trimester three = 38 weeks. To accommodate the commonly skewed distribution of grams of alcohol outcomes 12,16 , a square-root transformation was applied to adjust for non-normality.
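The first stage of model selection can be illustrated with a generic finite mixture model. The sketch below uses scikit-learn's GaussianMixture as a simple stand-in for Stata's traj plugin (which fits true group-based polynomial trajectories), applied to synthetic square-root-scale consumption vectors at the four timepoints; all values are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Invented square-root-scale weekly gAA at the four timepoints
# (weeks 5, 13, 25, 38) for three illustrative drinking patterns.
near_abstinent = rng.normal([1.0, 0.4, 0.2, 0.2], 0.3, size=(150, 4))
discontinued = rng.normal([6.3, 2.2, 0.8, 0.5], 0.5, size=(100, 4))
sustained = rng.normal([9.5, 5.9, 5.5, 5.5], 0.5, size=(80, 4))
x = np.vstack([near_abstinent, discontinued, sustained])

# Stage 1 of model selection: fit candidate group counts and keep
# the model with the lowest BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k)  # BIC recovers the three well-separated simulated groups
```

In the study itself, BIC was weighed alongside parsimony, entropy and prior theory rather than used alone.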
Binge episodes were also recorded at each time point. Almost 1 in 5 women had a binge episode during pregnancy, but 99.4% of these occurred at the 'prior to pregnancy recognition' time point 9 . Therefore, presence of one or more binge episodes was included as a dichotomised, time-static predictor of group membership in GBTM.
Following identification of a best model fit, we investigated the association of several non-pregnancy related maternal alcohol use characteristics with group membership using chi-squared tests, making planned comparisons between different alcohol consumption groups and abstainers.
Multivariate logistic regression was used to examine associations between maternal characteristics and group membership as compared to abstinent women (control). Unadjusted and adjusted odds ratios (controlling for all characteristics significantly related to any of the drinking patterns) were calculated. For predictor variables with more than two categories (maternal age, educational attainment, household income and pre-pregnancy body mass index), p-values from likelihood ratio tests were used to evaluate the predictors. Alpha was set to 0.05 for all analyses.
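The adjusted odds ratios reported below are exponentiated coefficients from a logistic regression of group membership on maternal characteristics. A hedged sketch with simulated data (the predictors, effect sizes and sample size are invented; only the general method of odds ratios from a joint logistic model comes from the text):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000

# Invented binary predictors: smoking in pregnancy and primiparity.
smoke = rng.binomial(1, 0.2, n)
primi = rng.binomial(1, 0.5, n)

# Simulate membership in a drinking trajectory (vs. abstainers) with a
# true log-odds of +1.4 for smoking (odds ratio of about 4) and no
# effect of primiparity.
log_odds = -1.0 + 1.4 * smoke
member = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Essentially unpenalised fit; exponentiated coefficients give the
# adjusted odds ratio (AOR) for each predictor.
X = np.column_stack([smoke, primi])
fit = LogisticRegression(C=1e6).fit(X, member)
or_smoke, or_primi = np.exp(fit.coef_[0])
print(round(or_smoke, 2), round(or_primi, 2))
```

The estimated odds ratio for smoking lands near the simulated value of 4, while the null predictor stays near 1.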
Results
Group-based trajectory modelling (GBTM). GBTM was conducted to determine the best-fitting models for two to six groups. In the best-fitting models for each number of groups, we found no higher-order polynomial effects of time. All groups followed a linear or intercept-only trajectory. Each best-fitting model per group number is detailed in Table 1. Comparatively, the AIC and BIC showed that the five- and six-group models were a better fit than models with four groups or less. The optimal BIC was found in the five-group model. This is illustrated by the difference in BIC: ΔBIC was 114.72 for the five-group model, whereas the six-group model showed poorer fit with a ΔBIC of -3.03. Entropy was best in the two- and four-group models, but acceptable (> 0.8) in models with five or fewer groups. With all fit information considered, the five-group model was determined the best fit (Supp Fig. 1).
However, the GBTM found no mathematical difference between pregnancy abstainers and women who had consumed some alcohol at the 'prior to pregnancy recognition' time point. To accommodate this theoretical distinction, whole-of-pregnancy abstainers were 'forced' into their own group, resulting in a final six-group solution. Trajectories of the five-group model with the abstainer/control separation (group six) are illustrated in Fig. 1. Group means with 95% confidence intervals (CI, shaded area) are presented on a logarithmic scale to illustrate group differences at the lower end of alcohol consumption. The same six alcohol trajectories are also presented on a normal y-scale in Supp Fig. 2.
Examination of the trajectories, which are summarised in Table 2, resulted in six groups, named as follows: abstained/control, no alcohol consumption during pregnancy (33.8% of the total sample); low discontinued (14.4%); moderate discontinued (11.7%); low sustained (13.0%); moderate sustained (23.5%); and high sustained (3.6%).
Association between group membership and non-pregnancy related alcohol use behaviour
Pregnant women with a moderate to high alcohol consumption trajectory were less likely to report that they felt the effects of alcohol quickly than controls or women with a low consumption trajectory. Women with moderate to high consumption were also more likely to have experienced their first alcohol intoxication before the Australian legal drinking age of 18 years and to have had at least one binge drinking episode in the three months before pregnancy (Table 3).
Association between group membership and demographic and pregnancy-related characteristics
Multivariate analysis revealed no discernible difference between controls and the low discontinued trajectory in any of the characteristics investigated (Table 4). Compared to controls, women in all other alcohol consumption trajectory groups were two to seven times more likely to be Caucasian (e.g. low sustained (AOR 2.32 (95%CI 1.40-3.85)) and moderate sustained (AOR 7.13 (95%CI 3.99-12.73))). Cigarette smoking in pregnancy was associated with all moderate to high drinking trajectories, e.g. moderate sustained (AOR 4.05 (95%CI 2.60-6.31)) and high sustained (AOR 4.28 (95%CI 1.92-9.54)). Women in the moderate discontinued trajectory were more likely to have an unplanned pregnancy (AOR 2.99 (95%CI 1.92-4.67)) and to be pregnant with their firstborn (AOR 2.09 (95%CI 1.38-3.16)). Women with low sustained group membership were less likely to be primiparous (AOR 0.61 (95%CI 0.42-0.91)). A sustained alcohol consumption pattern was more likely in women in their early to mid-thirties, and a high sustained level was more likely in women aged 35 years or more. Increasing household income was associated with moderate to high sustained group membership.
Discussion
Table 3. Pre-pregnancy binge drinking, drinking age and alcohol sensitivity by trajectory. p = p-value, ES = effect size (Cramer's V), boldface = statistically significant difference between group and control.
Group-based trajectory modelling of continuous data on grams of absolute alcohol consumed per week during pregnancy identified five distinct trajectories of prenatal alcohol consumption. In this population-based cohort of pregnant women, the group that consumed one to two standard drinks, once or twice per month, until they became aware that they were pregnant (14.4%) was mathematically indistinguishable from the group that abstained from alcohol (33.8%). A second group of women, who discontinued their alcohol consumption at some point during the first trimester, averaged around three standard drinks per week until then. Women in this moderate-discontinued group were more likely to have an unplanned pregnancy, be primiparous and smoke cigarettes. Cigarette smoking was also associated with moderate and high sustained alcohol use, as were higher maternal age and household income. Pre-pregnancy and early pregnancy binge episodes were common in the moderate and high-level groups, regardless of whether alcohol use was discontinued or sustained. Other characteristics of all moderate and high-level groups were a history of underage intoxication and a higher self-reported tolerance for the effects of alcohol. Importantly, all sustained drinking trajectories showed a dramatic decrease in levels of alcohol use after pregnancy recognition, potentially indicating a degree of awareness of the potential harms to the unborn child. Improved understanding of the factors that contribute to alcohol consumption in pregnancy in specific sub-populations is critical when developing health promotion programs. Here, the most important trajectory we identified is the moderate-sustained group, which comprised almost a quarter of all women in our study.
Women following this consumption pattern are clearly not responding to existing public health messaging advising abstinence.
Qualitative research exploring the reasons for alcohol use in pregnancy has shown that while most women are aware that abstinence is recommended, there is a general perception that the risk of harm from occasional alcohol use is low. This usually results from conflicting advice from maternity clinicians, the women's own observations of the behaviour of family and friends, and the lack of convincing research evidence on harm from low-level consumption patterns 17 . Consequently, some women make individual decisions about the quantity of alcohol they perceive to be without risk of harm, even if they received best-practice health messages advising abstinence. The women in our study who followed a moderate-sustained alcohol use pattern had often been drinking alcohol regularly from an early age, and although they reduced their intake following pregnancy recognition, alcohol consumption may be a well-established and normal aspect of their social environment. Further, a perception of not being easily affected by alcohol may contribute to a feeling that some level of alcohol consumption is unlikely to affect the unborn baby. This sizeable group of pregnant women will require sophisticated health messages that acknowledge the uncertainties around the risk of harm from low-level or occasional alcohol use, but also emphasise the importance of maximising health outcomes for their baby through abstinence. Brief psychosocial interventions have established benefits in women with heavier alcohol consumption, and although the evidence of effectiveness is not as strong for pregnant women identified as consuming low levels of alcohol, behaviour change techniques such as tailored information about consequences, fostering positive social support, or goal setting appear to increase abstinence rates 18 .
It may be that abstinence rates among pregnant women will only improve if maternity service systems consider the different social and cultural contexts which influence women's drinking choices. Consideration could be given to encouraging positive involvement from partners, family and friends, and to providing clear and consistent messages about the benefits of abstaining from alcohol use in pregnancy. We previously classified drinking patterns during pregnancy according to pre-determined cut-off levels based on the 2001 Australian National Health & Medical Research Council Alcohol Guidelines: Health Risks and Benefits 19 . These guidelines, which were revoked in 2009, stated that pregnant women should consider not drinking alcohol, but that if they chose to drink, they should have less than seven standard drinks (< 70 g absolute alcohol) over the course of a week, and no more than two standard drinks (≤ 20 g absolute alcohol) per day. To date, we have classified pregnant women in the AQUA study who followed this drinking pattern as "low level" drinkers.
However, GBTM showed that most women in the lowest drinking trajectory (low discontinued) consumed a minimal amount of alcohol, less than one standard drink (< 10 g absolute alcohol) per week, compared with our original classification which included women who consumed up to almost seven standard drinks (< 70 g absolute alcohol) per week. This distinction may prove invaluable when investigating the potential harmful effects on the unborn child of various prenatal alcohol exposure patterns.
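The cut-offs quoted above can be expressed as a small decision rule. The sketch below (Python; the function and category names are illustrative, not taken from the study) encodes the < 10 g/week "minimal" threshold that GBTM revealed alongside the original < 70 g/week and ≤ 20 g/day guideline limits described in the text:

```python
# Sketch of the cut-off classification described above, based on the
# 2001 NHMRC guideline figures quoted in the text (< 70 g absolute
# alcohol per week and <= 20 g per day) plus the GBTM-derived
# "low discontinued" level (< 10 g per week). Names are illustrative.

def classify_weekly_intake(grams_aa_per_week: float,
                           max_grams_per_day: float) -> str:
    """Classify weekly absolute-alcohol intake against the quoted cut-offs."""
    if grams_aa_per_week == 0:
        return "abstinent"
    if grams_aa_per_week < 10:
        return "minimal"          # matches the GBTM low-discontinued trajectory
    if grams_aa_per_week < 70 and max_grams_per_day <= 20:
        return "low"              # the study's original "low level" class
    return "above guideline"

# One Australian standard drink contains 10 g of absolute alcohol,
# so 7 drinks/week sits exactly on the 70 g boundary.
print(classify_weekly_intake(9, 9))    # minimal
print(classify_weekly_intake(30, 20))  # low
```

The point of the sketch is the distinction the text draws: the fixed-cut-off scheme lumps everyone under 70 g/week together, while the trajectory analysis separates out the sub-10 g/week drinkers.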
This study is not the first to use GBTM to describe maternal alcohol consumption patterns. In an earlier Australian study, Tran et al. identified three trajectories from six pre-specified frequency and quantity questions asked at four different time points, ranging from the period before pregnancy to six months postpartum, in a longitudinal pre-birth cohort of 6,597 Australian women that commenced in 1981 20 . The three trajectories comprised women who abstained or drank minimally (53%), those who fluctuated at an average of about 0.37 glasses per day (39%), and those who drank at a higher level of about 2.5 glasses per day before pregnancy but dropped their alcohol intake to about 0.6 glasses per day during pregnancy. A major difference between these data and the present study is that, even with a smaller sample size, we identified an additional two trajectories of women who discontinued alcohol use at, or soon after, pregnancy recognition. This difference most likely reflects changes in community awareness and in the advice on alcohol abstinence given by maternity providers in the 30 years between studies.
A more recent analysis by Dukes et al. of 11,692 women taking part in the Safe Passage Study in 2007 identified five trajectories more akin to those we found in the AQUA study 21 . Although the levels of consumption and the timing of cessation differed in their population, the five groups included one abstinent/minimal use group, two that discontinued and two that continued some alcohol consumption throughout pregnancy, as in the current study. In 2019, Bandoli et al. published an analysis of five GBTM-derived alcohol consumption trajectories from 471 pregnant women and their potential association with infant growth and early development 11 . The authors reported an association between the highest consumption trajectory and deficits in infant birth weight and length and in psychomotor development at six to 12 months of age. However, the evidence generated from the study is limited given its small sample size and the inclusion of only 24 participants in the highest consumption trajectory.
Most importantly, both Dukes et al. and Bandoli et al. reported specific metrics that characterised each trajectory. These were presented as a daily average, either as the number of standard drinks defined as 14 g absolute alcohol (Dukes 21 ) or in ounces of absolute alcohol (Bandoli 11 ). Use of similar reporting methods across studies will improve our ability to integrate our results with those from other studies going forward. This is critical to accumulate robust research evidence to better predict which prenatal alcohol exposure patterns are most strongly associated with particular adverse child outcomes.
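The two daily-average metrics mentioned above can be interconverted with ordinary unit arithmetic. The sketch below assumes standard physical constants (ethanol density ≈ 0.789 g/mL, 1 US fl oz ≈ 29.57 mL) and the 14 g US standard drink; none of the code is taken from either study:

```python
# Rough converter between the two daily-average metrics: standard drinks
# of 14 g absolute alcohol (Dukes) and US fluid ounces of absolute
# alcohol (Bandoli). Constants are standard physical values.

ETHANOL_DENSITY_G_PER_ML = 0.789   # g/mL at room temperature
ML_PER_US_FL_OZ = 29.5735
GRAMS_PER_STD_DRINK = 14.0         # US standard drink

def oz_aa_to_std_drinks(oz_aa_per_day: float) -> float:
    grams = oz_aa_per_day * ML_PER_US_FL_OZ * ETHANOL_DENSITY_G_PER_ML
    return grams / GRAMS_PER_STD_DRINK

def std_drinks_to_oz_aa(drinks_per_day: float) -> float:
    grams = drinks_per_day * GRAMS_PER_STD_DRINK
    return grams / (ML_PER_US_FL_OZ * ETHANOL_DENSITY_G_PER_ML)

# 1 oz of absolute alcohol per day is roughly 1.67 standard drinks
print(round(oz_aa_to_std_drinks(1.0), 2))  # → 1.67
```

Having such a converter is one small, practical step towards the cross-study harmonisation of exposure metrics that the paragraph above argues for.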
Strengths and limitations.
A strength of this study is the detail of the alcohol measures and the focus on the most frequent prenatal patterns of alcohol consumption (low, moderate and discontinued) rather than heavy and sustained alcohol use. This focus has played a key role in the study's high participation and low attrition rates over the course of the women's pregnancy, but most importantly, in providing alcohol consumption data of the highest quality possible 7,13,22 . Although we measured these data prospectively, and thus optimised our ability to measure frequency, dose and timing of exposure, the use of self-reported questionnaires runs the risk of reporting bias. However, our focus group research showed that if questions on alcohol in pregnancy are appropriately contextualised and include an option to report unusual drinking episodes, this encourages more accurate reporting 13 , a finding which appears confirmed by the high number of binge episodes reported in response to the special occasion question 9 .
In our final GBTM model we found one small group (high sustained, n = 52) that fell below the suggested minimum group size of 5% of the sample 10 . We acknowledge this limitation but believe that this finding is a true reflection of the small number of consistently high drinkers in this population-based cohort of pregnant women and that it is imperative to describe such an important clinical group.
Another strength of this study lies in the ability of GBTM to directly classify the continuous source data without the need for arbitrary cut-offs. However, GBTM is an application of finite mixture modelling, which assumes that the study population is composed of distinct groups defined by their trajectory membership. This theoretical assumption may be compromised when using non-research data and there may be women who could be assigned to more than one trajectory. We considered entropy as a measure of classification accuracy in our final model selection and found this to indicate a high degree of precision in the assignment of individuals to their most likely group.
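The entropy measure referred to above is commonly computed as the relative entropy of the posterior membership probabilities in the fitted mixture model. The following minimal sketch (not the study's code; the exact statistic the authors used may differ) shows the usual normalisation, where values near 1 indicate crisp assignment of individuals to groups:

```python
import math

# Relative-entropy statistic for classification accuracy in finite
# mixture models: posteriors close to 0 or 1 give values near 1
# (precise assignment); diffuse posteriors push the value towards 0.

def relative_entropy(posteriors):
    """posteriors: list of per-individual probability vectors over K groups."""
    n = len(posteriors)
    k = len(posteriors[0])
    total = 0.0
    for row in posteriors:
        total += -sum(p * math.log(p) for p in row if p > 0)
    return 1.0 - total / (n * math.log(k))

crisp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01]]
diffuse = [[1 / 3, 1 / 3, 1 / 3], [1 / 3, 1 / 3, 1 / 3]]
print(round(relative_entropy(crisp), 3))    # close to 1
print(round(relative_entropy(diffuse), 3))  # ≈ 0
```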
Conclusion
GBTM-derived trajectories of prenatal alcohol consumption can reflect real-life maternal drinking patterns because they preserve the timing, quantity, and frequency of consumption derived directly from unit-level source data. Understanding these distinct consumption trajectories and their associated maternal characteristics can assist in identifying antenatal populations for targeted alcohol cessation approaches. The trajectories also provide a discerning classification method for investigating causal relationships between prenatal alcohol exposure and child outcomes. Further, an inherent ability to mathematically define the underlying unit-level consumption patterns of each trajectory may reduce heterogeneity in exposure classification across studies, thereby improving the ability to aggregate data in future meta-analyses. | 2022-03-16T06:18:13.044Z | 2022-03-14T00:00:00.000 | {
"year": 2022,
"sha1": "2c11327ca4de538a07f8bb01615ac81a7efe5e19",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-08190-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c37ded15bbd3018f1c6a5309b03281b715bb3ffc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211017787 | pes2o/s2orc | v3-fos-license | Metachronous rupture of a residual pancreaticoduodenal aneurysm after release of the median arcuate ligament: a case report
Background Multiple pancreaticoduodenal artery aneurysms in association with median arcuate ligament syndrome (MALS) are relatively rare. A treatment option, such as a median arcuate ligament (MAL) release or embolization of the aneurysms, should be considered in such cases, but the treatment criteria remain unclear. Case report A 75-year-old man was transferred to our hospital because of a ruptured pancreaticoduodenal aneurysm. Emergency angiography showed stenosis of the root of the celiac axis (CA), a ruptured aneurysm of the posterior inferior pancreaticoduodenal artery (PIPDA), and an unruptured aneurysm of the anterior inferior pancreaticoduodenal artery (AIPDA). Coil embolization of the PIPDA was performed. Five days after embolization, the gallbladder became necrotic due to decreased blood flow in the CA region, and an emergency operation was performed. We performed a cholecystectomy and released the MAL to normalize the blood flow of the CA region. However, the patient died on postoperative day 8 because of rupture of the untreated aneurysm of the AIPDA. Conclusions This is the first report of metachronous ruptures of multiple pancreaticoduodenal aneurysms due to MALS, even after a MAL release. Although rare, a residual aneurysm in the pancreatic head region may need to be embolized quickly.
Background
Median arcuate ligament syndrome (MALS) is a relatively rare disease in which the median arcuate ligament (MAL) compresses the root of the celiac axis (CA). Chronic compression leads to luminal narrowing of the celiac trunk and reduced blood supply to the abdominal splanchnic organs. To compensate for decreased blood flow in the CA area, blood flow from the superior mesenteric artery to the gastroduodenal artery is usually increased through the inferior pancreaticoduodenal artery, possibly resulting in pseudoaneurysms and spontaneous bleeding along the way, i.e., in the pancreatic head arcade [1]. Coil embolization is performed for aneurysm ruptures. On the other hand, endovascular treatment for CA stenosis or MAL release on laparotomy is performed if an aneurysm is not ruptured [2]. However, precise treatment indications and criteria remain unclear. Herein, we report an unusual case of metachronous ruptures of multiple pancreaticoduodenal aneurysms, even after a MAL release.
Case presentation
A 75-year-old man visited a physician for abdominal pain and vomiting. Abdominal computed tomography (CT) indicated a ruptured pancreaticoduodenal aneurysm (Fig. 1a) and stenosis of the root of the CA (Fig. 1b). He was transferred to our hospital. He had a past medical history of mental retardation and gastric cancer that had been treated with a distal gastrectomy with Billroth-I reconstruction.
Blood examination findings revealed an elevation of inflammation markers (white blood cell 19,010/μL, C-reactive protein 7.08 mg/dL) and anemia (hemoglobin 8.9 g/dL) (Table 1). Emergency angiography revealed stenosis of the root of the CA, and spindle-shaped dilatation and pseudoaneurysm formation were observed both in the posterior inferior pancreaticoduodenal artery (PIPDA) and in the anterior inferior pancreaticoduodenal artery (AIPDA). Contrast medium extravasation from the PIPDA was observed (Fig. 2a). Coil embolization was performed on the PIPDA to the posterior superior pancreaticoduodenal artery, which was the bleeding source (Fig. 2b). At that time, coil embolization of the AIPDA was not performed because no extravasation of the contrast agent was observed. After coil embolization of the PIPDA, the celiac arterial region was visualized from the anterior inferior pancreaticoduodenal artery via the gastroduodenal artery. Then, the patient was hospitalized for follow-up, but right-sided flank pain appeared on the sixth day after the embolization. CT showed a swollen gallbladder and encapsulated fluid retention around it, suggesting that the wall was broken at the fundus of the gallbladder (Fig. 3). An emergency laparotomy was then performed because gallbladder necrosis was suspected due to the decreased blood flow in the CA region. The wall of the gallbladder was found to be partially necrotic (Fig. 4).

Fig. 1 a Computed tomography taken at the initial visit to a physician showed a hematoma, and inferior pancreaticoduodenal artery aneurysm rupture was suspected (shown by arrow). b Computed tomography taken at the initial visit to a physician showed stenosis of the root of the celiac axis (shown by arrow).

Fig. 2 a Emergency angiography revealed stenosis of the origin of the celiac artery (shown by arrow), and spindle-shaped dilatation and pseudoaneurysm formation were observed in the posterior inferior pancreaticoduodenal artery to the posterior superior pancreaticoduodenal artery and in the anterior inferior pancreaticoduodenal artery (shown by red arrowheads). Contrast medium extravasation from the posterior inferior pancreaticoduodenal artery was observed (shown by yellow arrowheads). Irregular vasodilation and stenosis were observed in multiple arteries, which were considered to be the effect of segmental arterial mediolysis. b Embolization was performed on the posterior inferior pancreaticoduodenal artery to the posterior superior pancreaticoduodenal artery (shown by arrow). After coil embolization of the posterior inferior pancreaticoduodenal arterial aneurysm, the celiac arterial region was visualized from the anterior inferior pancreaticoduodenal artery via the gastroduodenal artery (shown by arrowheads).

Fig. 3 Post-embolization computed tomography shows a swollen gallbladder and encapsulated fluid retention around it (shown by arrow), suggesting that the wall was broken at the fundus of the gallbladder.
After the operation, the patient entered the emergency care unit under intubation. Extubation was performed on postoperative day (POD) 2. Perioperative blood pressure was controlled with antihypertensive agent so that systolic blood pressure was around 100 mmHg. Pleural effusion and ascites were observed, and bilateral thoracic drainage and ascites drainage were performed on POD 4. The abdominal CT scan on POD 4 determined that the stenosis of the CA was released (Fig. 6a) and that the size of the untreated aneurysm of the AIPDA was unchanged ( Fig. 6b). However, the patient exhaled blood on the evening of POD 8, and an emergency CT scan and upper gastrointestinal endoscopy were performed. At that time, no obvious source of bleeding was found, and fresh blood had accumulated in the esophagus. The emergency CT findings showed little change in the peritoneal hematoma and extravasations from the untreated AIPDA aneurysm. In addition, the AIPDA aneurysm, which had been unchanged in the previous CT scan, increased slightly from the previous CT scan (Fig. 7). The patient soon exhaled blood again, and his systolic blood pressure dropped rapidly to less than 60 mmHg. Because of heavy bleeding, endotracheal intubation was performed to control the airway. After transfusion, another upper gastrointestinal endoscopy was performed. Persistent bleeding from the posterior wall of the descending duodenum was observed, and the duodenal wall broke down. We consulted radiology about the indication of endovascular treatment, but it was judged difficult because of vital instability. The patient died of hemorrhagic shock 12 h after the first hematemesis. An autopsy was not performed because the consent of the family was not obtained. Eventually, the rupture of an untreated AIPDA aneurysm was diagnosed as the cause of death.
Discussion
An abdominal visceral aneurysm is a relatively rare condition. It has been reported that 60% of abdominal visceral aneurysms occur in the splenic artery, 20% in the hepatic artery, 10% in the superior mesenteric artery, and 2% in the pancreaticoduodenal artery [3]. A ruptured abdominal visceral aneurysm has a poor prognosis; the mortality rates are 25% for the splenic artery, 35% for the hepatic artery, and 50% for the pancreaticoduodenal artery [3]. Therefore, rapid and accurate treatment is desirable. The causes of visceral aneurysms vary from arteriosclerosis and inflammation to stenosis. In particular, there are many reports of pancreaticoduodenal aneurysms caused by MALS [4,5].
MALS is a disorder caused by compression of the root of the CA by the MAL, resulting in decreased blood flow in the CA region [1]. Instead, blood flow in the common hepatic and splenic arteries is compensated for by the superior mesenteric artery via the pancreatic head arcade. Increased blood flow induces hemodynamic stress on the arterial wall of the pancreatic head arcade, causing an aneurysm in the pancreatic head region. In the present case, CA stenosis due to MALS was considered to be the cause of the pancreaticoduodenal aneurysm.
Coil embolization is usually performed for ruptured aneurysms due to MALS [6,7]. We performed embolization of the ruptured aneurysm in the PIPDA.
On the sixth day after the embolization, we performed an emergent laparotomy because the gallbladder had become necrotic. In addition to the cholecystectomy, the MAL was incised to release the stenosis of the root of the CA.
There are two types of treatment for MALS that normalize the blood flow in the CA region: endovascular treatment and surgical incision of the MAL [2]. Recently, endovascular treatment has often been selected because of the high risk of surgery [8]. Sugae et al. evaluated CA stenosis due to MAL compression with 3D-CT images and classified it into three types according to the stenosis rate and stenosis length: type A, < 50% and ≤ 3 mm; type B, 50-80% and 3-8 mm; type C, 80-100% and ≥ 8 mm, respectively [9]. Although 3D-CT was not used in the present case, the stenosis was considered type B, and the MAL division was recommended. In the present case, an emergent laparotomy was necessary, so we performed a MAL release at the same time as the emergent cholecystectomy to normalize the blood flow of the CA region and to prevent further aneurysm rupture in the pancreatic head arcade arteries. Pancreaticoduodenal aneurysms are different from other abdominal visceral aneurysms because of the low correlation between the diameter of the aneurysm and the possibility of rupture. Fujisawa et al. reported that 41 (71%) of 58 ruptured pancreaticoduodenal aneurysms were less than 20 mm in diameter [10]. Thus, it has been reported that treatment is necessary even if the diameter of the aneurysm is small [11]. There are some reports that a MAL release can be postponed for aneurysms in the pancreatic head region arteries, even for multiple ones [12].

Fig. 6 a Abdominal computed tomography on postoperative day 4 determined that the arterial diameter of the celiac axis had expanded slightly (shown by arrow). b Abdominal computed tomography on postoperative day 4 determined that there was no significant change in the size (13 × 7 mm) of the untreated aneurysm of the anterior inferior pancreaticoduodenal artery (shown by arrow).
Similarly, in the present case, we thought that a MAL incision would decrease blood flow from the superior mesenteric artery to the pancreatic head arcade and reduce the risk of rupture of the untreated aneurysms. There are many reports that the long-term prognosis after a MAL release is good [13,14]. There is also a report of multiple aneurysms in which the large aneurysms in the inferior pancreaticoduodenal artery were embolized while the small ones were left untreated; the MAL was not incised, yet no rupture occurred [15]. Unfortunately, in the present case an untreated aneurysm ruptured lethally even after a MAL release. This case is characterized by an aneurysm at the root of the AIPDA in addition to the PIPDA, as shown by angiography at the time of admission. Multiple pancreaticoduodenal aneurysms have rarely been reported in the international English language literature. Possible causes of aneurysms in multiple arterial systems include arteriosclerosis, pancreatitis, trauma, congenital malformation, fibromuscular hyperplasia, infection, collagen disease, and segmental arterial mediolysis (SAM) [3,10,16]. SAM is a concept proposed by Slavin and Gonzalez-Vitale in 1976 and is a noninflammatory and nonarteriosclerotic degenerative disease of uncertain cause that occurs in arteries [16]. It is an acute disease that requires emergency treatment, mainly because medial lysis of the muscular arteries in the abdominal organs leads to the formation of an aneurysm and rupture into the abdominal cavity. A definitive diagnosis of SAM requires a biopsy of the affected artery and pathologic evaluation [17]. Although there is a possibility that SAM coexisted with MALS in this case, there are few similar reports [18]. Moreover, no pathological findings could be obtained, and in general, it has been reported that SAM can be treated conservatively if it does not bleed or rupture [19]. There are also reports of complete disappearance of SAM with conservative treatment [20].
We did not choose the option to embolize the untreated aneurysm immediately after the MAL release. However, in this case, another aneurysm fatally ruptured despite the MAL release and normalized blood flow. It may have been necessary to embolize the untreated AIPDA aneurysm immediately after the release of the MAL. There are no reports of early aneurysm rupture after MAL release, as occurred in this case. When one of multiple aneurysms caused by MALS ruptures, attention may need to be paid to the risk of rupture of another aneurysm, even after a MAL release. The rare clinical course of this case may make a valuable contribution to the development of treatment strategies in the future.

Fig. 7 Computed tomography after the first vomiting showed extravasations from the untreated anterior inferior pancreaticoduodenal artery aneurysm (shown by arrowheads). The untreated anterior inferior pancreaticoduodenal artery aneurysm appeared rounded (13 × 11 mm).
Conclusions
This is the first report of metachronous ruptures of multiple pancreaticoduodenal aneurysms due to MALS, even after a MAL release. Although rare, a residual aneurysm in the pancreatic head region may need to be embolized quickly. | 2020-02-04T12:30:44.047Z | 2020-02-03T00:00:00.000 | {
"year": 2020,
"sha1": "b89ab91eb6ef28676498d130fbee7967b8bcfbe4",
"oa_license": "CCBY",
"oa_url": "https://surgicalcasereports.springeropen.com/track/pdf/10.1186/s40792-020-0784-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b89ab91eb6ef28676498d130fbee7967b8bcfbe4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15046576 | pes2o/s2orc | v3-fos-license | Traffic Congestion Evaluation and Signal Control Optimization Based on Wireless Sensor Networks: Model and Algorithms
This paper presents a model and algorithms for traffic flow data monitoring and optimal traffic light control based on wireless sensor networks. Given the scenario that sensor nodes are sparsely deployed along the segments between signalized intersections, an analytical model is built using the continuum traffic equation, and a method is developed to estimate traffic parameters from the scattered sensor data. Based on the traffic data and the principle of traffic congestion formation, we introduce the congestion factor, which can be used to evaluate the real-time traffic congestion status along the segment and to predict the subcritical state of traffic jams. The result is expected to support the timing phase optimization of traffic light control for the purpose of avoiding traffic congestion before its formation. We simulate traffic monitoring based on the Mobile Century dataset and analyze the performance of traffic light control on the VISSIM platform when the congestion factor is introduced into the signal timing optimization model. The simulation results show that this method can improve the spatial-temporal resolution of traffic data monitoring and evaluate traffic congestion status with high precision. It helps to remarkably alleviate urban traffic congestion and to decrease the average traffic delay and maximum queue length.
Introduction
Traffic congestion at the intersections of urban road networks is highly influential in both developed and developing nations worldwide [1]. Urban residents are suffering from poor transport facilities, and meanwhile the considerable financial loss caused by traffic congestion becomes a large and growing burden on the nation's economy, including costs of productivity losses from traffic delays, traffic accidents, vehicular collisions associated with traffic jams, higher emissions, environmental pollution, and more. The idea that improvements to transport infrastructure are the efficient way forward has been central to transport economic analysis, but in fact this problem cannot be resolved with better roads alone [2-4]. Intelligent transportation systems (ITS) have been proven to be a scientific and efficient solution [5]. Comprehensively utilizing information technology, transportation engineering, and behavioral sciences to reveal the principles of urban traffic, measuring the traffic flow in real time, and trying to route vehicles around congestion to avoid traffic jams before their formation promotes a prospective solution that resolves the urban traffic problem at its root [5-7].
Nowadays, in an information-rich era, the traditional traffic surveillance and control methods are confronted with great challenges [8, 9]. How to get meaningful information from large amounts of sensor data to support transportation applications becomes more and more significant [6, 10]. Modern traffic control and guidance systems are always networked on a large scale and need real-time traffic data with higher spatial-temporal resolution, which challenges traditional traffic monitoring technologies such as inductive loops, video cameras, microwave radar, infrared detectors, UAVs, satellite imagery, and GPS [11]. State-of-the-art, intelligent, and networked sensors are emerging as a novel network paradigm of primary relevance, which provides an appealing alternative to traditional traffic surveillance approaches in the near future [12], especially for proactively gathering monitoring information in urban environments under the grand prospective of cyber-physical systems [13, 14]. Wireless sensors have many distinctive advantages such as low cost, small size, wireless communication, and distributed computation. Over the last decade, many researchers have endeavored to study traffic monitoring with novel technologies, and recent research shows that the tracking and identification of vehicles with wireless sensor networks for the purpose of traffic surveillance and control are widespread applications [15-19]. Traffic research currently still cannot fully express the intrinsic principle of traffic congestion formation or predict under which conditions a traffic jam may suddenly occur. In essence, urban traffic is a typical self-driven many-particle system which is far from the equilibrium state, where the traffic flow is a complicated nonlinear dynamic process, and traffic congestion is the spatial-temporal conglomeration of traffic volume in finite time and space. In 2009, Flynn et al. 
conducted theoretical work to model traffic congestion with macroscopic traffic flow theory and obtained basic results in congestion prediction [20], which are regarded as a creative solution of the traffic equations proposed in the 1950s and reported as a great step towards answering the key question of how the occurrence of traffic congestion can be avoided. Building on current research, the congestion status of traffic flow is expected to be evaluated in real time and with higher precision to support traffic control.
Traffic light control at an urban intersection can be modeled as a multiobjective optimization problem (MOP). UTCSs (Urban Traffic Control Systems) such as SCOOT/SCATS/RHODES always employ single or double inductive loops as vehicle detectors deployed upstream of the signalized intersections. Generally, in current traffic control strategies, optimization objectives include vehicle stops, average delay, travel time, queuing length, traffic volume, vehicle speed, and even exhaust emissions [21]. Traditional traffic detection is Eulerian sensing, which collects data at predefined locations [22], and the sensors cannot be deployed in large numbers compared to the huge scale of urban road networks, for the sake of budget restrictions and maintenance costs; as a result, data such as the stops and delays of individual vehicles are difficult to obtain accurately. In essence, compared to the existing criteria mentioned above, traffic congestion is a directly relevant factor and the root cause. Introducing a method to evaluate the degree of traffic congestion and incorporating it into the optimization model of traffic light control promotes a feasible approach to improve traffic control performance.
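As a concrete baseline for the signal timing optimization discussed above, the sketch below implements Webster's classical single-objective cycle-length formula and proportional green splits. This is a textbook method, shown only to make the setting concrete; it is not the congestion-factor scheme proposed in this paper, and the flow values are illustrative:

```python
# Webster's method for a single intersection: inputs are critical-lane
# flows q_i (veh/h), saturation flows s_i (veh/h), and total lost time
# L (s) per cycle. A classical delay-minimising baseline, not the
# multiobjective scheme discussed in this paper.

def webster_timings(flows, sat_flows, lost_time):
    y = [q / s for q, s in zip(flows, sat_flows)]   # flow ratio per phase
    Y = sum(y)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    cycle = (1.5 * lost_time + 5) / (1 - Y)         # Webster optimum cycle (s)
    effective_green = cycle - lost_time
    greens = [effective_green * yi / Y for yi in y]  # split greens by y_i
    return cycle, greens

cycle, greens = webster_timings(flows=[600, 400], sat_flows=[1800, 1800],
                                lost_time=10)
print(round(cycle, 1), [round(g, 1) for g in greens])  # → 45.0 [21.0, 14.0]
```

Such a baseline optimizes a single fixed criterion per cycle, which is exactly the limitation the congestion-factor approach aims to overcome.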
In this paper, we studied the intrinsic space-time properties of actual traffic flow at the intersection and nearby segments and built an observation system to estimate and collect traffic parameters based on sparsely deployed wireless sensor networks. We are interested in understanding how to evaluate and express the degree of traffic congestion quantitatively, and what the performance of traffic signal control would be if we took the traffic congestion factor into account as one of the objectives in timing optimization.
The rest of the paper is organized as follows. The current studies on traffic surveillance with wireless sensor networks are briefly reviewed in Section 2. The observation model based on traffic flow theory and the traffic flow parameter estimation algorithm based on wireless sensor networks are described in detail in Section 3. The traffic congestion evaluation model and the congestion factor based signal phase optimization algorithms are discussed in Section 4. The performance is analyzed based on simulation and experimental results in Section 5. Finally, a conclusion and future works are given in Section 6.
Related Works and Problem Statement
Several research works on traffic monitoring with wireless sensor networks have been carried out in recent years. Most of them have focused on individual vehicle and point data detection, where the traffic spatial-temporal property is not an issue. Pravin et al. creatively applied magnetic sensor networks to vehicle detection and classification in the Berkeley PATH program from 2006 and obtained high precision beyond 95% [12, 23]. In 2008, UC Berkeley launched a pilot traffic-monitoring system named Mobile Century (the successor project is known as Mobile Millennium) to collect traffic data based on the GPS sensors equipped in cellular phones [22]. They found that 2-5% point data provided by mobile sensors is sufficient to provide information for traffic light control, and their conclusion motivates the research in this paper to collect traffic data and control traffic flow via sparsely deployed sensor networks. Hull et al. studied travel time estimation with Wi-Fi-equipped mobile sensor networks [24, 25]. Bacon et al. developed an effective data compression and collection method based on sensor networks using the weekly spatial-temporal pattern of traffic flow data in the TIME project [26]. But in current research, some important aspects remain out of consideration. (1) Few considerations are given to the intrinsic space-time properties and operational regularity of actual traffic flow and traffic congestion formation. (2) How can traffic congestion be evaluated quantitatively with sufficient precision and real-time performance, and introduced as an objective to support optimization in traffic light control? (3) How can traffic surveillance sensor networks be combined with the traffic control system to analyze future traffic conditions under current timing strategies and to avoid traffic congestion before its formation?
The discipline of transportation science has expanded significantly in recent decades, and traffic flow theory in particular plays a great role in intelligent transportation systems [27-29]. Typical models include the LWR continuum model [30] and the Payne-Whitham higher-order model [31]. From the physical view of traffic flow, the spatiotemporal behavior is the fundamental property. In previous work, the vast majority of inductive techniques focused on state-space methodology that forecasts short-term traffic flow based on historical data with a relatively small number of measurement locations [32-34]. A limited amount of work has been performed using space-time models [35], and the resolution or precision is insufficient for the purpose of traffic light control. In 2008, Sugiyama et al. explained the formation process of traffic congestion by experimental observations [36]. The goal of this paper is to estimate traffic parameters based on sparsely deployed sensor networks, evaluate the degree of traffic congestion, and obtain a quantitative factor that expresses the spatiotemporal properties of traffic flow in real time; based on this, the congestion factor is introduced into the optimization model of traffic light control. In this paper we use Lagrangian detection [37]. We not only detect point data via an imperfect binary proximity sensor network [38], but also try to estimate the time-space properties along the road segment based on scattered sensor measurements. The deployment of the sensor network is shown in Figure 1, where p(x, t) denotes traffic data such as velocity and density. Based on this, the congestion status and evaluation criteria can be studied from a comprehensive scope. The sensor network is expected to monitor real-time traffic data, to predict the subcritical state, and to control traffic signals so as to avoid traffic jams before their formation.
The urban road network can be modeled as a directed graph consisting of vertices v ∈ V and edges e ∈ E. Let L_e be the length of edge e. The spatial and temporal variables are road segment position x ∈ [0, L_e] and time t ∈ [0, ∞), respectively. On a given road segment e at time t, the traffic flow speed u(x, t) and density ρ(x, t) form a distributed parameter system in time and space. While a vehicle labeled by i ∈ N travels along the road segment with trajectory x_i(t), the sensor measurements u(x_i(t), t) and ρ(x_i(t), t) are discrete, instantaneous values, as shown in (2.1), where k is the sensor node number. The problem of traffic flow information monitoring can thus be transformed into estimating the traffic parameters at given spatial and temporal coordinates from these discrete values (nomenclature and symbols are available in Table 1).
Traffic Monitoring and Data Estimation
In this section, we first describe the intrinsic characteristics of traffic flow and then propose a method to estimate traffic parameters from scattered data collected by sparsely deployed sensor networks.
Continuum Traffic Flow Theory and Theoretical Models
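For reference, the Payne-Whitham model that this section builds on can be written in its standard form as below; the exact notation of the original equations (3.1)-(3.2) is an assumption here, with s(x, t) the flow production rate, τ the delay, p(ρ) the traffic pressure, and u_eq(ρ) the equilibrium speed, matching the symbols defined in the text:

```latex
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} = s(x, t), \tag{3.1}

\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \frac{u_{eq}(\rho) - u}{\tau} - \frac{p'(\rho)}{\rho}\,\frac{\partial \rho}{\partial x}. \tag{3.2}
```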
Here x and t denote space and time, respectively; u(x, t) and ρ(x, t) are the traffic flow speed and density at point x and time t; ρ is the traffic density in units of vehicles/length; τ is the delay; and p is the traffic pressure, inspired from gas dynamics and typically assumed to be a smooth increasing function of the density only, that is, p = p(ρ). The parameter u_eq denotes the equilibrium speed that drivers try to adjust to under a given traffic density ρ, a decreasing function of density u_eq = u_eq(ρ) with 0 < u_eq(0) = u_f < ∞ and u_eq(ρ_M) = 0. Here u_f is the desired speed on an empty road, and ρ_M is the maximum traffic density at which vehicles are bumper-to-bumper in a traffic jam. In the MIT model of self-sustained nonlinear traffic waves, the relationship between u_eq and ρ is denoted as follows.
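A common closure for the equilibrium speed is a Greenshields-type relation; whether the MIT model uses exactly this form is an assumption, but it is consistent with the boundary conditions u_eq(0) = u_f and u_eq(ρ_M) = 0 stated above:

```latex
u_{eq}(\rho) = u_f \left( 1 - \frac{\rho}{\rho_M} \right). \tag{3.3}
```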
Here u_f denotes the free-flow speed, and ρ_M is the traffic flow density in the congested state. In (3.1), s(x, t) is the flow production rate; for a road segment with no ramp s(x, t) = 0, for an entrance ramp s(x, t) < 0, and for an exit ramp s(x, t) > 0. Assume the velocity of a vehicle traveling from the given intersection during the green-light interval is v_x(t), and the green-light phase interval is T; then the flow production rate can be denoted as in (3.4). Based on the exact LWR solver developed by Berkeley [39], we can obtain the solutions of the traffic equations for given initial parameters. That means the operation status and future parameters of the traffic flow can be predicted and analyzed on a system scale.
Signal Processing for Traffic Data Estimation Based on Sensor Networks
In this paper, we employ highly sensitive magnetic sensors, as shown in Figure 2(a), to detect vehicles. Given that the detection radius is R, a sensor node detects a travelling vehicle with the ATDA algorithm developed by UC Berkeley [12], which detects vehicle presence based on an adaptive threshold and estimates vehicle velocity from the time difference of up/down thresholds and the lateral offset [12, 23], as shown in Figure 2(b).
Here D is the sensor separation and s(t) is the raw signal, which is sampled into discrete sensor readings s(k) and transformed to a(k) after noise filtering; h(k) is the threshold at detection interval k, and d(k) is the corresponding detection flag. The instantaneous velocity can be estimated by (3.5), where t_up and t_down are the moments when the magnetic disturbance signals exceed the threshold continuously with counts N and M, respectively. In actual applications, for the sake of cost, the number of sensor nodes should be as small as possible [40], so a trade-off between sensor number and measurement precision is needed. In this paper we try to improve the traffic detection accuracy based on the spatial and temporal relations of the sampled data. The main idea is to estimate the lost traffic information from the limited sensor readings using the traffic flow model and numerical interpolation. Assuming the temporal and spatial scales are Δt and Δx, the vehicle trajectory r and observation time t are discretised into L and T sections, respectively. Consequently the two-dimensional x-t domain is transformed into a grid mesh, as shown in Figure 3, which can be denoted by (3.6) for an arbitrary location and detection time, where (x_i, t_j) is a grid point and h and k are the spatiotemporal scales, h ≡ Δx and k ≡ Δt, with x_i = ih, t_j = jk, i ∈ [0, L], j ∈ [0, T].
x_i = ih,  t_j = jk,  h ≡ Δx,  k ≡ Δt,  i ∈ [0, L],  j ∈ [0, T].  (3.6)
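The two-threshold velocity estimate of (3.5) can be sketched as follows; the function names, sampling rate, and fixed threshold are illustrative assumptions (in practice h(k) would come from the adaptive thresholding of ATDA), not the paper's implementation:

```python
import numpy as np

def detect_crossing(signal, threshold, count, fs):
    """Return the time (s) at which `signal` first exceeds `threshold`
    for `count` consecutive samples, or None if it never does."""
    run = 0
    for k, above in enumerate(signal > threshold):
        run = run + 1 if above else 0
        if run >= count:
            return (k - count + 1) / fs
    return None

def estimate_speed(sig_up, sig_down, threshold, D, fs, N=3, M=3):
    """Speed from the crossing-time difference between two sensors
    separated by distance D (m), in the spirit of Eq. (3.5)."""
    t_up = detect_crossing(sig_up, threshold, N, fs)
    t_down = detect_crossing(sig_down, threshold, M, fs)
    if t_up is None or t_down is None or t_down <= t_up:
        return None
    return D / (t_down - t_up)
```

For two synthetic magnetic-disturbance pulses 50 ms apart and D = 1 m, this yields an estimate of 20 m/s.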
For sensor reading u(x_i, t_j), the grid cell g(i, j) may be considered a detection unit on location [i, (i+1)]·Δx, in which a single sensor node takes effect in time interval [j, (j+1)]·Δt. To take into account the disconnected vehicle queue under the unsaturated state, the sensed traffic flow speed is defined here as the average velocity of all vehicles that pass the detection point in a predefined interval. In actual applications, traffic data is typically collected every 20 s, 30 s, 1 min, or 2 min. The sensor network is sparsely deployed, and the total number of sensor nodes is K. We denote by v_mk the actual speed of the mth vehicle travelling past the kth sensor in the detection grid g(i, j), by v̂_mk the estimated speed calculated from sensor measurements, by u_k the average speed in the detection grid, by m and m̄ the first and last vehicles in the detection interval, respectively, and by u(x, t) the theoretical speed based on the continuous traffic flow model.

Figure 3: The detection grid in x-t space.

The actual and estimated traffic flow speeds are denoted by (3.7). Assume that we have the trajectories of a certain number of vehicles M in an observation interval. If the scale is small enough, it can be inferred that the traffic flow speed is unchanged within a unit grid cell, and consequently the partial differential equations (3.1)-(3.4) can be rewritten in an approximated way, as in (3.8), where the subscripts i and j indicate space and time, respectively. With the scattered measurements as boundary and initial values, the traffic data can be estimated by numerical interpolation based on the approximated traffic equations, as shown in Figure 4. For instance, for traffic flow speed detection, denote by û_m^k and u_m^k the estimated and actual velocities of the mth (m ∈ [1, M]) vehicle at sensor k (k ∈ [1, K]), respectively; the estimation error e_m^k can be formulated as in (3.9). There are many evaluation criteria for error optimization; we use the same objective function as in [41], with the expression (3.10). Here E is the objective function, and E_k is the mean square error (MSE) of the traffic parameter estimation over all M vehicles at sensor k. The purpose of the optimal estimation algorithm is to minimize the total MSE of all sensors:
E = Σ_{k=1}^{K} E_k,  E_k = (1/M) Σ_{m=1}^{M} (û_m^k - u_m^k)^2.  (3.10)
Assume K point measurements u(x_i, t_i) are obtained in detection area g(i, j), and û(x_i, t_i) are the corresponding values given by the traffic equations. The noise root-mean-square error σ_rms between the model outputs and the measured data can be denoted as in (3.11); it is a two-dimensional random field, which we assume to be unbiased:
σ_rms = ( (1/K) Σ_{i=1}^{K} [u(x_i, t_i) - û(x_i, t_i)]^2 )^{1/2}.  (3.11)
The velocity change in real traffic flow u(x, t) is continuous. To eliminate noise, we introduce a smoothing factor with the minimum sum of squares of the second derivatives, as shown in (3.12), where Ω denotes the two-dimensional square detection area:
min_u ∬_Ω [ (∂²u/∂x²)² + 2(∂²u/∂x∂t)² + (∂²u/∂t²)² ] dx dt.  (3.12)
The traffic data estimation can thus be transformed into a two-dimensional data-fitting problem with time-space constraints based on scattered measurements. To solve the conditional extremum problem based on (3.11) and (3.12), we can use a method similar to that in [42], based on the Lagrange multiplier and finite element methods.
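The smoothed scattered-data fit can be prototyped with an off-the-shelf smoothing spline; this is a simplification of the Lagrange-multiplier/finite-element scheme of [42], and the synthetic speed field and smoothing weight below are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# K scattered sensor readings u(x_k, t_k); here a synthetic smooth field
x = rng.uniform(0.0, 500.0, 40)   # position on the segment (m)
t = rng.uniform(0.0, 120.0, 40)   # detection time (s)
u_true = 15.0 + 0.01 * x - 0.02 * t
u_meas = u_true + rng.normal(0.0, 0.5, x.size)  # sensor noise

# s plays the role of the regulariser in (3.12): larger s trades
# fidelity to the noisy readings for a smoother u(x, t) surface
spline = SmoothBivariateSpline(x, t, u_meas, kx=2, ky=2, s=float(x.size))
u_hat = float(spline.ev(250.0, 60.0))  # estimate at an unsensed point
```

Raising `s` suppresses measurement noise at the cost of detail, mirroring the trade-off controlled by the smoothing factor in the paper.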
Congestion Factor Based Signal Optimization
In this section, we focus on traffic congestion evaluation and signal optimization. Based on traffic flow theory, the traffic flow near signalized intersections and connecting links can be modeled as entrance and exit ramps. The traffic light control algorithm generates a shock wave at the stop line of the lanes from the beginning of the red signal phase, which affects the future traffic state. We introduce a congestion factor to evaluate the degree of traffic congestion, and a cost function to represent the influence of the current timing phase on the next phase. The result helps optimize signal control.
Traffic Congestion Evaluation and Congestion Factor
The formation of traffic congestion without external disturbance is an unsolved mystery. Knowing that traffic on a certain road is congested is actually not very helpful to a traffic control system; information about how congested it is and the process by which it formed is more useful. There has been much novel research on traffic congestion prediction and evaluation in the last decades [43, 44]. Flynn et al. studied these phenomena and introduced the traffic congestion model named Jamitons [20], in which traffic congestion is modeled as a traveling wave. Based on the traffic model described in (3.1)-(3.2), the traffic congestion can be expressed in a theoretical way. Assuming the speed of the traveling wave is s and introducing the self-similar variable η = (x - st)/τ, the traffic equations in Section 3.1 can be rewritten, and (4.1) holds, where s is the speed of the traveling shock wave and the traffic flow density and speed can be denoted as functions of η, ρ = ρ(η), u = u(η). The subcritical state can be predicted by (4.1), where c = √(p'(ρ)) > 0 denotes the subcritical condition. To solve these equations, we select the shallow water equations [45], denoted as (4.2), to simplify the problem. Applying this assumption to (4.1) and the LWR model denoted by (3.1) and (3.2), (4.1) can be rewritten as (4.3), where m is a constant denoting the mass flux of vehicles in the wave frame of reference. The subcritical condition is therefore denoted as (4.4); if this condition is satisfied, traffic congestion is inevitable, and the density reaches ρ_M immediately once traffic conditions exceed the subcritical state. The road can be regarded as a shared resource for vehicles and traffic flow links, and, following Jain's fairness index for shared computer systems, a quantitative congestion factor can be defined based on the traffic congestion model, as in (4.5). Here i indicates the lane number, x is the location coordinate with origin at the stop line, and the traffic
density is sampled in n discrete values with fixed frequency. The congestion factor indicates the general congestion state on the whole road segment; it is a number between 0 and 1, with larger values meaning more crowded conditions. Considering an intersection with four phases numbered A, B, C, and D, as shown in Figure 5, the phase timing can be denoted as in (4.6), where g_i^l and g_i^u represent the minimum and maximum green times, respectively, and G_i is the effective green time of phase i. Under the scenario of traffic flow stopped by a red signal, for instance on lane m during signal phase i, the traffic flow from west to east is blocked from the beginning of phase A for interval G_A. The corresponding cost function on lane m is denoted as in (4.7), where ΔT is the timing adjustment step length, and C_cf^m(k) and Ĉ_cf^m(k) represent the congestion factor on lane m under blocking by the signal and under the normal condition with a green light, respectively. The normal condition can be simulated based on (3.1) and (3.2) with initial values detected by the sensor networks at time t, where s(t) ≡ 0, and traffic parameters can be predicted by solving the traffic equations. With the Matlab implementation of an exact LWR solver [39], we can build a virtual simulator of traffic flow scheduling to analyze the traffic equations, congestion factor, and cost function in a theoretical way, based on given initial conditions. For the traffic flow of a straight lane, consider two scenarios: traffic flow running continuously, and traffic blocked by a red signal at time t; the congestion factor and cost function can then be simulated. The result is shown in Figure 6.
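A minimal sketch of a Jain-style congestion factor follows; the exact form of (4.5) is not reproduced here, so normalising the mean sampled density by the jam density ρ_M is an assumption that preserves the stated properties (a value in [0, 1], larger meaning more crowded):

```python
import numpy as np

def jain_index(x):
    """Jain's fairness index: ranges from 1/n (all mass at one sample)
    to 1 (perfectly uniform samples)."""
    x = np.asarray(x, dtype=float)
    return float(x.sum() ** 2 / (x.size * (x ** 2).sum()))

def congestion_factor(rho, rho_max):
    """Assumed congestion factor: mean of the n sampled densities on
    the segment, normalised by the jam density and clipped to [0, 1]."""
    rho = np.asarray(rho, dtype=float)
    return float(np.clip(rho.mean() / rho_max, 0.0, 1.0))
```

For densities sampled near the jam density the factor approaches 1; in free flow it stays near 0.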
The Multiobjective Optimization Model for Signal Control
The problem of traffic timing optimization for an urban intersection in a crowded city has been approached in much previous research [46, 47], and existing traffic signal optimization formulations usually do not take traffic flow models into consideration. The variables on a signalized intersection and the connecting links of phase j are shown in Figure 7.
We define q_in^j(k) and q_out^j(k) to be the inflow and outflow, respectively, and d^j(k) and s^j(k) to be the demand flow and exit flow during phase j in the interval [kΔT, (k+1)ΔT], where ΔT is the timing adjustment step and k is a discrete index. Define S_g^nj and S_y^nj as the saturation flows for the green and yellow times of phase j at intersection n. u_ni(k) indicates the signal: u_ni(k) = 0 means green light and u_ni(k) = 1 means red light. To simplify the problem, we optimize only the phase timing, assuming the phase order is kept unchanged; the four phases shown in Figure 5 transfer in the presupposed order A, B, C, D.
Based on the dynamics of traffic flow, the control objective of the dynamic model is to minimize the total delay and the traffic congestion factor, subject to the constraints given in (4.10). For a given time window T, based on the constraints of (4.10), the timing problem can be separated into h (1 ≤ h ≤ T/g^l - 1) subproblems. We solve these h problems, obtain h noninferior sets of optimal solutions, and then merge them to get a new noninferior set of optimal solutions, which is the solution of the original problem. In this paper we use MOPSO-CD (Multiobjective Particle Swarm Optimization using crowding distance) to find the optimal timing.
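The merging of the h noninferior sets can be illustrated with a plain non-dominated filter over (delay, congestion factor) pairs; this shows only the set-merge step, not the MOPSO-CD search itself:

```python
import numpy as np

def non_dominated(points):
    """Return the non-dominated subset for joint minimisation of
    both objectives, e.g. (total delay, congestion factor)."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            front.append(tuple(p))
    return front

def merge_fronts(fronts):
    """Merge the noninferior sets of the h timing subproblems into
    the noninferior set of the original problem."""
    return non_dominated([p for front in fronts for p in front])
```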
Traffic Flow Detection and Control Algorithms
Based on the above model and computational method, the overall block diagram of the traffic data detection and control algorithm is shown in Figure 8. It employs magnetic sensors and detects magnetic signatures based on ATDA. Individual vehicle data is collected in time window W, and traffic flow speed is monitored at regular intervals. The scattered point data U(t), P(t) contains all sensor readings, which are used to approximate the traffic equations and obtain the numerical approximation u(ih, jk). Finally we obtain the traffic data u(x, t) and ρ(x, t), which provides input for traffic control and traffic congestion evaluation.
The traffic congestion state can be evaluated based on (3.9), and we can obtain the congestion factor for every segment near the intersection. At the same time, a cost function for the next control phase can be calculated with a traffic scheduling simulator based on the traffic equations and the LWR solver. When we give priority to certain directions and block traffic flow in other directions, the overall delay cost of the alternative timing strategies is taken into consideration before making the final signal decision, and the optimal timing can be obtained by solving a multiobjective optimization problem (MOP). Finally, the traffic controller chooses the optimal timing scheme. This process operates cyclically and adaptively.
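One control cycle then reduces to scoring candidate phase timings and picking the best; the candidate grid and the stand-in cost below are illustrative assumptions (a real cycle would score each candidate by simulating (3.1)-(3.2) and evaluating delay plus congestion factor via (4.7)-(4.10)):

```python
import itertools

def choose_timing(candidates, cost):
    """Pick the candidate green-time vector with the lowest cost."""
    return min(candidates, key=cost)

# hypothetical green times (s) for the four phases A, B, C, D,
# constrained by a 120 s cycle budget
candidates = [g for g in itertools.product([20, 30, 40], repeat=4)
              if sum(g) <= 120]

# stand-in cost preferring balanced phases; a real controller would
# use the simulated delay and congestion factor instead
cost = lambda g: max(g) - min(g)
best = choose_timing(candidates, cost)
```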
Simulation Result and Performance Analysis
The model and algorithms are simulated on the VISSIM platform. The traffic flow data is generated with the Mobile Century field test dataset [22, 48] and the LWR solver [39]. VISSIM is a microscopic, time-interval and driving-behavior based traffic simulation toolkit. It supports external signal control strategies by providing an API with DLLs. The simulation tool invokes the Calculate interface at a presupposed frequency, and the user can obtain the signal-control-related data through this interface.
With the DLL and COM interfaces, we designed a software/hardware in the loop simulation platform based on VISSIM, as shown in Figure 9.The traffic data for simulation is based on Mobile Century dataset.Traffic data near three intersections is used to simulate traffic data collection and timing phase optimization.The traffic network is shown in Figure 10.
Mathematical Problems in Engineering
We select a fixed coordinate without a sensor and estimate the traffic parameters there with the method proposed in this paper, based on proximity sensor readings. The estimation precision under different smoothing factors ω is shown in Figure 11. The performance is better when compared to traffic prediction based on a BP neural network.
In the control simulation, we analyzed the performance in two scenarios: control with the delay constraint only, and combining delay with the traffic congestion factor as the optimization objective; we compare the performance with fixed-time control. On the same traffic flow dataset, the performance is illustrated in Figure 12. The criteria include average delay and maximum queue length. The results show that congestion-factor-based control optimization improves performance, with lower average waiting time and shorter queue length.
Conclusion and Future Research
In this paper we studied traffic flow congestion evaluation and a congestion-factor-based control method using a sparsely deployed wireless sensor network. Taking into consideration the intrinsic properties of traffic flow and the traffic congestion model, we try to obtain optimal phase timing with as few sensor nodes as possible. The main idea is to study congestion and its influence on future traffic flow, combine the traffic equations with the optimization function, obtain numerical solutions of the traffic equations via approximation, and finally refine the traffic sensor data based on data fitting. The model and algorithms were simulated on the VISSIM platform with the Mobile Century dataset. The results show better performance, helping to decrease the average delay and the maximum queue length at the intersection.
Current research is limited to a single intersection and simple segments with continuous traffic flow. Future research should focus on complex segments and even road networks, such as ramps and long roads with multiple intersections. The traffic control strategy, road capacity, and dynamics caused by incidents need to be taken into consideration in actual applications. Furthermore, complex traffic flow pattern simulation and traffic control strategies at the network scale, among multiple intersections and arbitrary connecting segments, are also important aspects for the next step.
Figure 1: Deployment of wireless sensor networks for urban traffic surveillance.

The continuum model is excellent for describing macroscopic traffic properties such as the traffic congestion state. In 1955, Lighthill and Whitham introduced the continuum (LWR) model [30] based on fluid dynamics, which builds a continuous function between traffic density and speed to capture characteristics such as traffic congestion formation. In 1971, Payne introduced dynamics equations based on the continuum model and proposed the second-order Payne-Whitham model [31]. Consider the Payne-Whitham model defined by (3.1) (conservation of mass) and the acceleration equation, written in nonconservative form as (3.2).

Figure 2: (a) Magnetic sensor node and gateway. (b) Presence and velocity detection based on ATDA.

Figure 4: Scattered data fitting with proximity points.

Figure 5: Four phases of traffic control.

Figure 6: Traffic congestion factor at observation point x.

Figure 7: Urban intersection and road link model for traffic signal control.

Figure 8: Flow diagram of traffic flow detection and adaptive control model based on sensor network.

Figure 9: Software/hardware in the loop simulation based on VISSIM.

Figure 10: Traffic networks for timing optimization simulation.

Figure 11: Performance of traffic data estimation based on traffic equations.

Figure 12: Performance analysis of traffic control based on congestion factor.
Convergence of pathway analysis and pattern recognition predicts sensitization to latest generation TRAIL therapeutics by IAP antagonism
Second generation TRAIL-based therapeutics, combined with sensitising co-treatments, have recently entered clinical trials. However, reliable response predictors for optimal patient selection are not yet available. Here, we demonstrate that a novel and translationally relevant hexavalent TRAIL receptor agonist, IZI1551, in combination with Birinapant, a clinically tested IAP antagonist, efficiently induces cell death in various melanoma models, and that responsiveness can be predicted by combining pathway analysis, data-driven modelling and pattern recognition. Across a panel of 16 melanoma cell lines, responsiveness to IZI1551/Birinapant was heterogeneous, with complete resistance and pronounced synergies observed. Expression patterns of TRAIL pathway regulators allowed us to develop a combinatorial marker that predicts potent cell killing with high accuracy. IZI1551/Birinapant responsiveness could be predicted not only for cell lines, but also for 3D tumour cell spheroids and for cells directly isolated from patient melanoma metastases (80–100% prediction accuracies). Mathematical parameter reduction identified 11 proteins crucial to ensure prediction accuracy, with x-linked inhibitor of apoptosis protein (XIAP) and procaspase-3 scoring highest, and Bcl-2 family members strongly represented. Applied to expression data of a cohort of n = 365 metastatic melanoma patients in a proof of concept in silico trial, the predictor suggested that IZI1551/Birinapant responsiveness could be expected for up to 30% of patient tumours. Overall, response frequencies in melanoma models were very encouraging, and the capability to predict melanoma sensitivity to combinations of latest generation TRAIL-based therapeutics and IAP antagonists can address the need for patient selection strategies in clinical trials based on these novel drugs.
Introduction
The immune system can eliminate cancer cells by activating cell surface apoptosis-inducing death receptors, such as tumour necrosis factor-related apoptosis-inducing ligand (TRAIL) receptors 1 and 2 (also known as death receptors 4 and 5 (DR4/5)). Many cancer cells, including melanoma, overexpress these TRAIL-Rs, possibly due to an additional role these receptors can play in supporting cellular proliferation and invasion by autonomous TRAIL/TRAIL-R signalling [1]. Developing TRAIL-based therapeutics has been a highly active but only moderately successful translational research field for many years, but recent progress in designing superior TRAIL-based biologics and an improved mechanistic understanding of drug-induced TRAIL-sensitisation now provide novel avenues for new anti-cancer therapies [2]. Latest generation TRAIL-derived therapeutics overcome limitations of previous formulations by significantly improving TRAIL receptor oligomerisation and activation through higher valency, and by exerting significantly prolonged serum half-lives. Highly promising variants are hexavalent fusion proteins that couple two single-chain TRAIL trimers and that outperform soluble human TRAIL and TRAIL-R-targeting antibodies [3-5]. Cellular inhibitor of apoptosis proteins (cIAPs) 1 and 2 can prevent TRAIL-induced cell death by recruiting components of the linear ubiquitin chain assembly complex (LUBAC) to aggregated TRAIL-Rs. The activity of LUBAC promotes pro-survival signalling and suppresses both apoptosis and necroptosis signalling cascades [6]. Synthetic IAP antagonists, such as Birinapant (TL32711), BV6 or LCL-161, therefore potently sensitise cells to TRAIL-induced caspase-8 activation and apoptosis [7, 8]. IAP antagonists bind to cIAPs and cause conformational changes that allow dimerisation of cIAP RING domains, auto-ubiquitylation and subsequent proteasomal degradation [9].
In cells capable of activating caspase-8, the cleavage of the Bcl-2 family protein Bid initiates the formation of Bax/Bak pores in the outer mitochondrial membrane, followed by activation of downstream caspases-9, -3, -7 and subsequent cell death [10]. Birinapant also binds to and inhibits x-linked inhibitor of apoptosis protein (XIAP), a major antagonist of caspases-9, -3, -7 that is also involved in upstream regulation of cell death signalling, with nM affinity [11][12][13]. Inducing apoptosis through the TRAIL pathway can proceed without the need for transcriptional responses or protein neo-synthesis, processes required for cell death induction by the majority of cytotoxic therapeutics. This suggests that pre-treatment amounts of proteins regulating apoptotic TRAIL signalling might suffice to derive predictors for treatment responsiveness.
Especially in highly heterogeneous cancers, such as malignant melanoma, predictive markers and validated companion diagnostic tests developed from such markers will be necessary to identify those patients likely to respond to treatment [14, 15]. The incidence of cutaneous melanoma continues to rise rapidly [16]. While chemotherapy-based treatments provide little benefit for patients with metastatic melanoma, more recent treatment options such as targeted immuno-therapeutics, BRAFV600 and MEK inhibitors, and combinations thereof can in many cases prolong survival or, less frequently, induce lasting disease remission [17, 18]. However, substantial numbers of patients do not qualify for these treatments or experience disease relapse, so that additional treatment options, for example those building on TRAIL-based therapeutics and IAP antagonists, can be attractive alternatives should it become possible to reliably predict treatment responsiveness.
Here we can report that expression profiles of TRAIL pathway regulators can serve to predict responsiveness to the combination of IZI1551, a prototypical example of a translationally relevant latest generation TRAIL-based biologic [3], and Birinapant (TL32711), a well-characterised example for a translationally relevant IAP antagonist [8]. Across a diverse and heterogeneous melanoma cell line panel, 3D multi-cellular tumour spheroids (MCTS) and melanoma cells isolated from patient metastases, we achieved >80% prediction accuracy. A proof of concept in silico trial based on a cohort of 365 metastatic melanoma patients indicates that IZI1551/Birinapant responsiveness could be expected for up to 30% of tumours.
Materials and methods
Materials

TL32711 (Birinapant) was obtained from Active Biochem, Germany. IZI1551 was produced and purified as described before (Hutt et al. 2017). Q-VD-OPh was bought from Selleckchem, Germany. cIAP1 and cIAP2 recombinant proteins, required to determine absolute expression amounts in melanoma cells, were bought from R&D, Germany.
Culturing of 3D spheroids
Cells were harvested and diluted to a concentration of 10^4 cells/mL in RPMI-1640/10% FBS with the addition of 0.24% Methyl Cellulose (Sigma Aldrich, Germany). 250 cells per drop were placed into the lid of a Petri dish filled with PBS. Spheroids were incubated for 10 days at 37°C and 5% CO2. The medium was exchanged every other day. Slower growing Malme 3M cells and freshly obtained metastatic melanoma cells (M34) were seeded at 500 cells per drop and incubated for 2 weeks.
Flow cytometry
Semi high-throughput cell death measurements

Cells were washed, trypsinised and stained with propidium iodide (PI, Sigma Aldrich, Germany) at 1.33 µg/mL for 10 min. The measurements were performed on a high throughput flow cytometer (BD LSRII SORP) using the 488 nm laser for excitation, while emission was recorded at 617 nm. Flow cytometry data were analysed using Cyflogic v. 1.2.1 (CyFlo Ltd, Finland). All experiments were performed in triplicates and in n = 3 independent repeats.
Annexin V-GFP or APC/PI staining

Cells were harvested and washed in PBS and Annexin V Binding buffer (Biolegend, Germany). Cells were stained with Annexin V-APC (Biolegend, Germany) (0.1%) or Annexin V-GFP (made in-house, 0.1%) and PI (Biolegend, Germany) (1 µg/mL). Measurements were conducted on a BD FACS Canto II flow cytometer using 561 nm excitation (emission from 600 to 620 nm) (PI) or 640 nm excitation (emission from 655 to 685 nm) (APC). Alternatively, measurements were conducted with a MacsQuant flow cytometer using 488 nm excitation (emission from 655 to 730 nm (PI), and emission from 500 to 550 nm (GFP)). Flow cytometry data were analysed either with the BD FACS Diva software (BD Biosciences, USA) or with Flowing software (Turku Centre for Biotechnology, Finland).
Data processing and analysis for predictor identification
All data processing and analysis were performed using a customised version of a previously developed pipeline [19]. The script was developed for MATLAB 2017b (The Mathworks, UK), equipped with the statistical toolbox. Prior to statistical analysis, protein data were mean-centred and scaled, dividing by the respective standard deviation. A principal component analysis (PCA) was performed on the standardised dataset and the PCs with an eigenvalue >1 were used for subsequent analyses. Linear discriminant analysis (LDA) was applied to objectively assess the accuracy of response class separation in the space defined by the first six PCs. Then, leave-one-out cross-validation (LOOCV) was applied iteratively to the 16-cell line panel to assess predictive capacity. For each iteration, data from 15 cell lines were used as a training set to define the PC space, and one test cell line was subsequently positioned according to its protein expression profile. LDA was then applied to determine if the test cell line was placed in the correct responsiveness sub-space. The response of 3D grown and patient-derived primary cell lines was predicted with the same workflow, using the predictor obtained from the dataset of the 16-cell line panel. The optimal predictive protein subset (reduced predictor) was determined using the Select attributes panel of the WEKA workbench (Version 3.8.2 [20]). A ranking of the proteins was obtained using the CorrelationAttributeEval attribute evaluator with the Ranker search method and 10-fold cross-validation mode. This attribute selection method evaluates the merit of each protein individually by calculating the Pearson's correlation between the individual protein and the responsiveness class. The attribute selection step was performed using the proteins quantified in the 2D cell lines panel. The complete prediction pipeline was iteratively applied taking into account the first six PCs, and removing the protein with the lowest rank at each iteration.
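The described workflow (standardise, PCA keeping components with eigenvalue > 1, LDA, LOOCV) can be sketched in Python as below; this mirrors the MATLAB pipeline only in outline, and the small ridge term added to the within-class scatter for stability with few samples is an assumption:

```python
import numpy as np

def lda_predict(Ztr, ytr, Zte):
    """Two-class LDA: project on w = Sw^-1 (mu1 - mu0) and split at the
    midpoint of the projected class means."""
    mu0, mu1 = Ztr[ytr == 0].mean(0), Ztr[ytr == 1].mean(0)
    Sw = (np.atleast_2d(np.cov(Ztr[ytr == 0], rowvar=False)) +
          np.atleast_2d(np.cov(Ztr[ytr == 1], rowvar=False)) +
          0.1 * np.eye(Ztr.shape[1]))  # ridge: few samples per class
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2.0
    return (Zte @ w > c).astype(int)

def predict_loo(X, y):
    """Leave-one-out: fit scaler + PCA (eigenvalue > 1) + LDA on the
    remaining profiles, then classify the held-out profile in PC space."""
    preds = []
    for i in range(len(X)):
        m = np.arange(len(X)) != i
        mu, sd = X[m].mean(0), X[m].std(0) + 1e-12
        Xs = (X[m] - mu) / sd
        evals, evecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
        order = np.argsort(evals)[::-1]
        k = max(1, int(np.sum(evals > 1.0)))
        P = evecs[:, order[:k]]
        zi = ((X[i] - mu) / sd) @ P
        preds.append(int(lda_predict(Xs @ P, y[m], zi[None, :])[0]))
    return np.array(preds)
```

Here `X` would hold one protein expression profile per cell line and `y` the binarised responsiveness class.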
Statistical analyses not described above were performed with GraphPad Prism 7 (GraphPad Software).
In silico trial
The protein expression patterns of the melanoma cell line panel were used to estimate the protein expression profiles in melanoma tumours of 472 patients for which transcriptome data are deposited in The Cancer Genome Atlas melanoma cohort (TCGA-SKCM). Normalised mRNA expression data (Upper Quartile normalised Fragments per Kilobase of transcript per Million mapped reads, log2(FPKM-UQ+1)) generated by the Genomic Data Commons (GDC-NIH) were downloaded from the UCSC-XENA browser (available at: https://xena.ucsc.edu/. Accessed: 4 February 2019). Data interpolation was performed using Point-to-point curve creation in GraphPad Prism 7 (GraphPad Software). Standard curves were generated using minimum and maximum values of the protein expression range (cell line panel) and back-transformed TCGA-SKCM mRNA expression data. For response predictions, PCA was applied to the data for the n = 11 predictor proteins in the cell lines dataset, followed by LDA-based definition of responsive and resistant subspaces, and subsequent positioning of n = 365 TCGA-derived melanoma metastases in the PC space according to their estimated protein values.
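As a rough illustration of the estimation step, the mapping below linearly rescales back-transformed TCGA mRNA values onto the protein expression range measured in the cell line panel. This min-max mapping is a simplified stand-in for the point-to-point standard curves generated in GraphPad Prism; function names are hypothetical.

```python
import numpy as np

def back_transform(log2_fpkm_uq):
    # TCGA values are log2(FPKM-UQ + 1); invert the log transform.
    return 2.0 ** np.asarray(log2_fpkm_uq, float) - 1.0

def estimate_protein(mrna, protein_min, protein_max):
    # Rescale mRNA values onto the [protein_min, protein_max] range measured
    # experimentally for that protein in the cell line panel (simplified
    # linear stand-in for a point-to-point standard curve).
    mrna = np.asarray(mrna, float)
    lo, hi = mrna.min(), mrna.max()
    scaled = (mrna - lo) / (hi - lo)
    return protein_min + scaled * (protein_max - protein_min)
```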
IAP antagonist Birinapant sensitises a subset of melanoma cell lines to apoptosis induced by the 2nd generation TRAIL-based biologic IZI1551
To study the responsiveness and the response heterogeneities of melanoma cells to IZI1551, a novel and translationally relevant hexavalent TRAIL receptor agonist [3], to the IAP antagonist TL32711/Birinapant, a compound currently evaluated in clinical trials [21], or combinations thereof, we employed a diverse set of sixteen cell lines (see materials and methods). For each cell line, cell death was determined at 15 treatment conditions, using semi-high throughput flow cytometry. Cell lines varied in their response to the treatments, ranging from high resistance to high sensitivity (Fig. 1a). Many cell lines responded synergistically to the combination treatment (synergistic responders; WM1366, SkMel5, SkMel2, Malme3M, Mel Juso, WM3060, WM115, WM35, SkMel147, WM793, WM1346, WM3248), as determined using Webb's fractional product method, whereas others (WM3211, MeWo, WM1791c, WM852 cells) failed to do so (low responders) (Fig. 1b).
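Webb's fractional product method scores a combination as synergistic when the observed effect exceeds the effect expected under independence of the two single agents, i.e. f_AB > f_A + f_B − f_A·f_B for fractions of dead cells. A minimal sketch (the `margin` parameter is an illustrative addition, not part of the original analysis):

```python
def webb_expected(fa, fb):
    # Expected fractional cell death of the combination if the two agents
    # act independently (Webb's fractional product: surviving fractions
    # multiply, so expected kill = 1 - (1 - fa) * (1 - fb)).
    return fa + fb - fa * fb

def is_synergistic(fa, fb, f_combo, margin=0.0):
    # Score the combination as synergistic when the observed fractional
    # effect exceeds the fractional-product expectation (plus an optional
    # tolerance margin).
    return f_combo > webb_expected(fa, fb) + margin
```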
Birinapant had on-target activity in both synergistic responders and low responders, since cIAP1 protein amounts were efficiently and rapidly lost upon single agent and combination treatments (Fig. 1c). Neither single nor combination treatment induced detectable amounts of TNFα secretion (not shown), a response to IAP antagonists that in rare cases can contribute to autocrine cell death induction [22]. The amounts of XIAP remained largely unchanged, except for the combination treatment in synergistically responding Mel Juso cells (Fig. 1c). XIAP is a known caspase-3 substrate [23], and correspondingly caspase inhibitor Q-VD-OPh restored XIAP amounts, indicating that IZI1551/Birinapant induces apoptosis in responder cell lines such as Mel Juso (Fig. 1c). This was further supported by the processing of procaspases 8 and 3, and by the caspase-dependent cleavage of Bid and PARP in Mel Juso cells (Fig. 1d). In poorly responding MeWo cells, instead, PARP cleavage was modest and detectable only as a transient pulse (Fig. 1d, e). In line with these observations, caspase inhibitor Q-VD-OPh prevented IZI1551- and IZI1551/Birinapant-induced cell death in Mel Juso cells and other synergistic responders, such as SkMel2 and Malme 3M (Fig. 1e).
Taken together, these results show that Birinapant sensitises a subset of human melanoma cell lines to cell death induced by IZI1551, a 2nd generation TRAIL-based therapeutic, and that apoptosis appears to be the primary cell death modality in synergistic responders.
Expression patterns of apoptosis proteins allow predicting IZI1551/Birinapant responsiveness
The combination of IZI1551/Birinapant can induce apoptotic cell death without the need for protein neosynthesis. We therefore next explored if baseline expression amounts of apoptosis proteins carry information on the responsiveness of melanoma cell lines to the combination of IZI1551/Birinapant. Pre-treatment amounts of 19 key pro- and anti-apoptotic players that regulate the apoptotic TRAIL signalling pathway were determined by quantitative immunoblotting at high dynamic range or, for death receptors, by cell surface staining (Fig. 2a; Supplemental Fig. 2). Expression patterns varied considerably between the proteins and across the cell lines. To explore possible correlations between protein expression patterns, we conducted a PCA. A total of six principal components (PCs), all with an eigenvalue >1 and thus fulfilling the Kaiser criterion [24], were required to capture approximately 80% of the data variance (Fig. 2b), highlighting that pre-treatment expression patterns were highly heterogeneous. Similarly, the associated weight coefficients indicated that individual proteins contributed heterogeneously to the first six PCs, without obvious positive or negative correlations between pro- and anti-apoptotic proteins (Fig. 2c). A visualisation of the cell line positions within the space defined by the first three PCs correspondingly failed to identify visually distinct clusters of cell lines (Fig. 2d). In conclusion, these data demonstrate high expression heterogeneity between proteins and between the cell lines.
Interestingly, colour coding the cell lines according to synergistic or low responsiveness indicated that synergistically responding and poorly responding cell lines occupy distinct regions within the plotted space (Fig. 2e). LDA confirmed this visual impression, with 14/16 cell lines (88%) correctly separated into their respective response categories. These results, therefore, indicate that even though apoptosis protein expression is highly heterogeneous across the cell lines, the expression patterns nevertheless carry information on the capability to respond synergistically to the combination of IZI1551/Birinapant. We next tested if the protein expression patterns would be sufficient to predict responsiveness or resistance to IZI1551/Birinapant in melanoma cell lines. To this end, we performed LOOCV based on the approach described above. PCAs were conducted for sets of 15 cell lines, followed by LDAs to define the hyperspace regions of responsiveness and resistance. Missing cell lines were subsequently positioned into the LDA-segmented PC spaces according to their individual expression patterns of apoptosis regulators. If the tested cell line positioned into the correct response region, the prediction was considered successful (Fig. 3a). Overall, LOOCV was sufficient to correctly predict the responsiveness of 13 out of 16 cell lines (81%) (Fig. 3b), indicating that the measured protein panel allows predicting responsiveness to IZI1551/Birinapant on a case-by-case basis with high accuracy.

Fig. 2 (legend, continued) Protein amounts are listed in Supplemental Table 1. b Percentage of the variance of the original dataset explained by PCs. PCs with an eigenvalue >1 were retained for further analysis. Accumulated "variance explained" is plotted in black. c Weight coefficient table. Bars represent the contributions of the respective proteins to the different PCs. d Cell lines positioned in a multidimensional space according to their individual protein expression profiles. The PC space shown was defined by the first three PCs. Circle sizes decrease with distance from the observer to aid 3D visualisation. e Colour coding indicates responsiveness of cell lines to IZI1551/Birinapant (orange = low response; blue = synergistic response). Table insert indicates accuracy of spatial segmentation between low and synergistic responders.
Responses to IZI1551/Birinapant can be predicted for 3D growth conditions
We next studied if responsiveness to IZI1551/Birinapant can be predicted for cells grown as MCTS. While more demanding as a cell culturing method, spheroids provide the advantage of higher microenvironmental complexity at nevertheless well-controlled experimental conditions [25]. Protein quantification from spheroids of five cell lines able to form MCTS demonstrated that the transition from 2D cell culture to 3D spheroid culture substantially affected protein expression patterns (Fig. 4a, b, Supplemental Fig. 3). A number of pro- as well as anti-apoptotic proteins were considerably downregulated, such as Bid, Bcl-2, Procaspase 3, FADD and Mcl-1. cFLIP and TRAIL-R1, instead, appeared to accumulate, and a number of other proteins changed heterogeneously in their expression amounts across spheroids of different cell lines (Fig. 4b). While a reductionist reasoning based on individual protein changes would intuitively suggest that IZI1551/Birinapant responsiveness of 3D MCTS should differ from 2D cultures, the combined complexity of altered protein expression prevents drawing conclusions prior to experimental validation. We therefore used the PCA/LDA-based approach to generate testable predictions on MCTS responsiveness. Positioning the MCTS forming cell lines into the PC space according to their respective pathway proteome revealed that their coordinates differed substantially from their 2D cultivated counterparts (Fig. 4c). Interestingly, despite the substantial changes in relative protein amounts, all cell lines were predicted to remain within their respective response class (Fig. 4c, colour-coded open circles). To test these in silico predictions, we measured cell death in spheroids treated with IZI1551, Birinapant or the combination thereof.
Indeed, the predictions could be confirmed for all five cell lines, with SkMel2, WM1366, Mel Juso and Malme 3M responding to the combination treatment of IZI1551/Birinapant, and MeWo cells remaining resistant in the 3D growth scenario (Fig. 4d). TNFα was not secreted upon growth in 3D or in response to the treatments, as tested for Mel Juso and MeWo cells (not shown). Overall, we therefore conclude that a PCA/LDA-based prediction framework, parameterised with protein expression and treatment responsiveness data from 2D cell cultures, is sufficient to predict responses to IZI1551/Birinapant for 3D spheroid growth conditions.
Responses to IZI1551/Birinapant can be predicted for melanoma cells freshly isolated from metastases
For a translationally more relevant setting, we next tested if IZI1551/Birinapant responses can be predicted for melanoma cells freshly isolated from metastases. Following quantification of apoptosis regulatory proteins (Fig. 5a, Supplemental Fig. 4), cells were positioned into the PC space. Predictions were generated as described above and cells were colour coded according to their expected IZI1551/Birinapant responsiveness. M10, M20, M32 and M45 cells were predicted to respond to IZI1551/Birinapant combination treatment, whereas M34 cells were expected to respond poorly (Fig. 5b). Validation experiments confirmed the predictions on high responsiveness of M10, M32 and M20 cells and poor responsiveness of M34 cells (Fig. 5c). We therefore conclude that high prediction accuracies can also be achieved for cells freshly isolated from clinical materials.
A reduced predictor maintains performance and estimates response prevalence to IZI1551/ Birinapant in metastatic melanoma
The framework to predict responsiveness to IZI1551/Birinapant builds on an otherwise unbiased selection of nineteen regulators known to be involved in canonical apoptosis signal transduction for this treatment combination. We next determined the contribution of the individual protein variables towards accurate predictions. To do so, we used the attribute selection feature of the WEKA workbench [20] to compute the "merit" of each protein, based on the protein expression profiles and the responsiveness data of the melanoma cell line panel. From this, we obtained a ranking of protein variables according to the degree of association with treatment responsiveness (in sequence of decreasing merit: XIAP, Procaspase 3, Cytochrome C, Mcl-1, cIAP1, Bax, Bid, Bcl-xL, Smac, FADD, Bak, cIAP2, TRAIL-R1, Procaspase 9, Apaf-1, TRAIL-R2, Procaspase 8, cFLIP and Bcl-2). We then iteratively performed predictions for the cell line panel, with the protein with the lowest merit removed upon each iteration. Performance was largely maintained (14/16 correct predictions for the cell line panel) when limiting the predictor to the eleven proteins with the highest merit (Fig. 6a). The reduced predictor correctly determined treatment responsiveness in 4/5 MCTS growth scenarios and in 4/5 biopsy-derived fresh melanoma cells (Fig. 6b, c). Further validation of the reduced predictor was conducted using nine additional and independently analysed samples, including three 2D and six 3D growth scenarios. Also in these samples, prediction accuracies of approximately 80% were achieved (Fig. 6d-f, Supplemental Fig. 5). Overall, we noted strong influences of XIAP and procaspase-3, direct interactors and regulators of type I signalling competency during extrinsic apoptosis [26,27], and various members of the Bcl-2 family in the predictor (Fig. 6a). The ability to predict responsiveness to IZI1551/Birinapant in cell lines and ex vivo cultures raises the question of whether responses can be expected in patients, and if so, how frequent such responses might be. We therefore estimated the clinical response prevalence under the assumption that favourable drug pharmacokinetics and pharmacodynamics allow both drugs to reach their targets. Expression profiles of predictor variables were deduced from transcriptome data of metastatic melanoma patients (n = 365, TCGA-SKCM cohort, Supplemental Table 2) by mapping to protein expression ranges measured experimentally. Following positioning into the LDA-segmented PC space defined by the predictor, 111 out of 365 patients were expected to respond to treatment (Fig. 6g). The expectation of approximately 30% responders needs to be interpreted in the context of predictor accuracy. The 80% prediction accuracy achieved in the cell line panel is composed of a predictor sensitivity of 92% and a specificity of 75%, so that the predictor's strength lies in recalling true positives.

Fig. 3 Expression patterns of apoptosis proteins allow predicting IZI1551/Birinapant responsiveness. a Simplified 2D schematic showing the workflow for determining prediction accuracy by combined PCA/LDA/LOOCV. Following PCA, an LDA separates the PC space into areas for synergistic responsiveness and low responsiveness. A cell line of unknown responsiveness (empty circle) is then placed into the segmented PC space according to its protein expression profile, with the positioning serving as the response prediction. Experimental responsiveness data served to validate predictions. b 2D projection of LOOCV results for the 16 cell lines. The responsiveness of the test cell line was predicted (blue for synergistic, orange for low responsive). The empty circle represents the test cell line being placed into the PC space. Circle sizes decrease with distance from the observer to aid 3D visualisation. Table insert summarises prediction accuracy.
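The merit computed by WEKA's CorrelationAttributeEval corresponds to the Pearson correlation between each protein's expression and the response class, with proteins then ordered by the Ranker search method. A simplified single-pass numpy sketch (the published analysis additionally used a 10-fold cross-validation mode during ranking; function and variable names are illustrative):

```python
import numpy as np

def rank_by_correlation(X, y, names):
    # Score each protein (column of X) by the absolute Pearson correlation
    # between its expression values and the binary responsiveness class y,
    # then return the protein names sorted by decreasing merit.
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    merits = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(merits)[::-1]
    return [names[j] for j in order]
```

Iterative predictor reduction then simply drops the last-ranked name and re-runs the prediction pipeline until accuracy degrades.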
Taken together, these results demonstrate that highly accurate predictions can be made for IZI1551/Birinapant responsiveness with a reduced set of input variables, and that in up to 30% of clinical cases an on target responsiveness could be expected, as estimated from a representative cohort of metastatic melanoma patients.
Discussion
Here, we report that protein expression signatures of TRAIL pathway regulators can serve to predict responsiveness to the combination of IZI1551 and Birinapant, targeted therapeutics with high translational relevance [7,28]. High accuracies for response predictions were achieved for melanoma cell lines, for 3D multi-cellular melanoma spheroids and for cells newly isolated from melanoma metastases (approximately 80% prediction accuracy). Protein prioritisation resulted in a reduced marker that, when applied in a proof of concept in silico trial, suggests that IZI1551/Birinapant responsiveness could be expected in up to 30% of tumours in patients with metastatic melanoma.
Previous TRAIL-based therapeutics were tested in translational settings and performed unsatisfactorily [28]. Among the reasons for the limited efficacy of TRAIL-R agonistic antibodies in the clinic were short serum half-lives and the requirement for immune cell-mediated, Fcγ-dependent clustering of therapeutic antibodies to induce efficient TRAIL-R1/R2 oligomerisation and caspase-8 activation [29]. 2nd generation TRAIL-based therapeutics address these problems, for example by increased valency and by using Fc regions as dimerisation and half-life extension modules [3,4,28]. IZI1551, consisting of two tri-valent single-chain TRAIL fragments cross-linked via the Fc part of an IgG antibody, is a prototypical example of this principle and potently induces apoptosis in vivo in cells moderately responsive to traditional TRAIL-based therapeutics [3]. However, in many cases sensitising co-treatments are required to ensure efficient apoptosis induction following TRAIL-R1/R2 activation. IAP antagonists are potent sensitisers to extrinsic apoptosis [21], suppressing the formation of LUBAC and the associated initiation of pro-survival signalling. IAP antagonists also sensitise to apoptosis induced by intrinsic cytotoxic stimuli, such as genotoxic therapeutics in pancreatic, colon and brain cancer [30][31][32], where cIAPs likely impair caspase-8 binding and activation on cytosolic ripoptosomes [33,34].
While both 2nd generation TRAIL-R1/R2 agonists and IAP antagonists are currently tested in clinical trials (NCT03082209 [5,21]), no studies currently test their combination. In addition, validated biomarkers predictive of treatment responsiveness do not exist for TRAIL-based therapeutics, IAP antagonists or the combination of both. The lack of reliable molecular markers to predict responses to TRAIL might indeed have contributed to the poor performance of TRAIL-based therapeutics in the clinical setting, since no patient selection could be performed [35]. The absence of response predictors for IAP antagonists likewise affects current clinical trials based on this class of therapeutics [21]. Notably, for both TRAIL-R1/R2 agonists as well as for IAP antagonists, the expression amounts of their direct molecular targets, i.e. TRAIL-R1/R2 amounts and cIAP proteins, appear insufficient to derive response biomarkers [21,36,37]. This indicates that treatment efficacy is determined further downstream within the signal transduction network and/or too complex to be captured by traditional or reductionist biomarker discovery approaches.

Fig. 4 Responses to IZI1551/Birinapant can be predicted for 3D growth conditions. a Quantification of pro- and anti-apoptotic proteins in cell lines grown as MCTS (red and green, respectively). Circles summarise 285 quantifications and circle sizes represent mean protein quantities determined from at least n = 3 independent experiments. Protein amounts are provided in Supplemental Table 1. b Heatmap showing the fold change in protein expression between 3D and 2D culture. Black colour indicates absence in either 2D or 3D conditions. c Positioning of cell lines grown in 3D in the PC space defined by 2D cultured cell lines. Empty circles indicate positions of cell lines grown in 3D. Arrows indicate the change of position in the PC space caused by altered protein expression between 2D and 3D growth conditions. Circle colours reflect expected responsiveness (blue) or resistance (orange), based on the LDA segmented PC space. The circle size decreases with distance from the observer to aid 3D visualisation. d Experimental validation of MCTS responsiveness to IZI1551/Birinapant treatment. MCTS of cell lines were treated with IZI1551 (1 nM) and Birinapant (1 µM) or their combination for 24 h. Cell death was measured by flow cytometry (PI uptake). Data show means of n = 3 measurements.
With IAP antagonists removing the apical suppression of extrinsic apoptosis induction, we hypothesised that the expression amounts of key regulatory proteins of the TRAIL signal transduction network can serve to predict responsiveness. Indeed, predictions on IZI1551/Birinapant responses, based on the expression patterns of key TRAIL pathway regulators, were highly accurate. Being able to predict responsiveness also in a micro-environmentally more complex 3D setting and in cells newly isolated from patients indicates that concerns about using continuously cultured cell lines to develop a predictor for IZI1551/Birinapant responsiveness can be alleviated, possibly because protein expression alone is sufficient to derive treatment responsiveness. Complex genetic characterisations and careful selection of cell line and in vivo models might, however, be warranted for studies on treatment scenarios that are highly dependent on disease-relevant mutations, and accordingly the genetic representation of the disease [38][39][40].
We initiated our study using 19 proteins considered key regulators of IZI1551/Birinapant induced signal transduction. We could reduce this panel to an 11 protein signature which, compared to traditional biomarkers, still seems rather large. However, this likely reflects the complexity of apoptosis signal transduction and regulation, as well as the disease heterogeneity observed in melanoma. The development of complex protein quantity-based biomarkers for routine clinical application still faces major technological challenges [41,42]. Traditional immunohistochemical analyses of tumour biopsies typically provide insufficient dynamic range and limited calibration possibilities to derive reliable quantitative data. Alternative approaches, such as reverse phase protein arrays and mass spectrometric analyses of clinical specimens, can overcome these hurdles, but are difficult to embed into routine pathology and laboratory workflows in the clinical environment. To take intra-tumour cell-to-cell heterogeneity into account, an aspect likely crucial to refine our predictor in a translational setting, technology such as mass cytometry could provide the possibility to capture multiplexed protein markers at the single cell level [43]. However, this technology is difficult to apply to tissue specimens. Developments in the field of high dynamic range fluorescence-based analysis of FFPE materials, coupled to multiplexing technologies that allow re-staining of tissue slices [44,45], might more conveniently and routinely allow obtaining quantitative protein expression data, especially where entire cellular proteomes are not required.
It is noteworthy that none of the melanoma models studied lacked TRAIL-R1/R2 or caspase-8 expression, and TRAIL-Rs or caspase-8 amounts did not appear crucial to predict responsiveness. The amounts of these proteins therefore possibly do not limit IZI1551/Birinapant responsiveness in melanoma. A recent study in models of non-small-cell lung cancer and pancreatic ductal adenocarcinoma interestingly indicates that cancer cells might become addicted to TRAIL receptor expression, with autonomous TRAIL-R signalling contributing to disease progression [1]. Additionally, proliferating cells might rely on a cell death-independent role of caspase-8 in contributing to chromosome alignment during mitosis [46]. In the predictor, the expression of XIAP and caspase-3 strongly contributed to accurate response predictions. Both proteins play crucial roles in controlling cellular life/death decisions during apoptosis execution [10,47]. XIAP additionally holds in check the "type I" link by which caspase-8 can activate caspase-3 [26,27,48]. However, kinetically the mitochondrial route still seems preferred in cells capable to die by type I signalling [26], most likely due to the strong amplification of apoptosis signalling by Bcl-2 family dependent mitochondrial outer membrane permeabilisation and apoptosome formation. Indeed, various Bcl-2 family members, such as Mcl-1, Bax, Bid, Bcl-xL and Bak, feature prominently in the predictor. Mcl-1 and Bcl-xL negatively regulate Bax/Bak pore formation, while Bid is a primary substrate of both caspase-8 and caspase-3, with truncated Bid inhibiting Mcl-1 and Bcl-xL, and activating Bax and Bak [49]. Taken together, the interplay of caspase-3, XIAP and Bcl-2 family members, initiated by non-limiting amounts of TRAIL receptors and caspase-8, appears to play a central role in melanoma cell death upon exposure to IZI1551/Birinapant.
Taken together, this study represents a successful proof of concept for developing a stratification marker for malignant melanoma in response to a novel, clinically relevant combination treatment based on a 2nd generation hexavalent TRAIL variant (IZI1551) and a representative IAP antagonist, Birinapant. This can form the basis for future translational and clinical studies in which combination treatments of 2nd generation TRAIL-based therapeutics and IAP antagonists will be tested and for which optimal patient selection strategies are required.

Fig. 5 Responses to IZI1551/Birinapant can be predicted for cells isolated from melanoma metastases. a Quantification of apoptosis regulatory proteins in cells derived from melanoma metastases. Red coloured circles represent pro-apoptotic and green circles anti-apoptotic proteins. Circles summarise 285 quantifications, and circle sizes represent mean protein quantities determined from at least n = 3 independent experiments. Protein amounts are shown in Supplemental Table 1. b Positioning of melanoma cells from patient metastases in the PC space defined by 2D cultured cell lines. Empty circles indicate positions of patient cells. Circle colours reflect expected responsiveness (blue) or resistance (orange), based on the LDA segmented PC space. The circle size decreases with distance from the observer to aid 3D visualisation. c Experimental validation of primary melanoma cell responsiveness to IZI1551/Birinapant treatment. Cells were treated as indicated for 24 h. Cell death was measured by flow cytometry (PI uptake). Heat maps show the mean of n = 3 independent experiments.

Fig. 6 A reduced predictor maintains performance and estimates response prevalence to IZI1551/Birinapant in metastatic melanoma. a Ranking of variables in the reduced predictor, as obtained by computed merit. b, c Responsiveness predictions and prediction accuracies for MCTS growth scenarios and for metastatic melanoma cells isolated from patients. The PC space is shown as a two-dimensional projection. Filled circles represent training data from the melanoma cell line panel. Open circles highlight positions of MCTS (b) or cells isolated from melanoma metastases (c). d Quantities of apoptosis regulators in additional validation samples. Circle sizes represent relative protein amounts. Protein amounts are listed in Supplemental Table 1. Western blots are shown in Supplemental Fig. 5. e Validation samples positioned in the PC space obtained by the reduced predictor. Colour-coding indicates responsiveness. Table inserts display accuracy of spatial segmentation and prediction accuracy. f Experimental responsiveness of validation samples. Cells were treated as indicated for 24 h. Cell death was measured by flow cytometry (PI uptake). Heat maps show the mean of n = 3 independent experiments. g Estimation of response prevalence in a hypothetical trial. Estimated protein expression profiles of metastatic melanoma patients (n = 365) were used to predict responsiveness (blue, n = 111) or resistance (orange, n = 254) to IZI1551/Birinapant combination treatment. 3D graphs show arrangement of predicted responders and non-responders in the predictor space.
Family wellbeing in general practice: a study protocol for a cluster-randomised trial of the web-based resilience programme on early child development
Background
Social, emotional and behavioural problems in early childhood are associated with increased risk for a wide range of poor outcomes associated with substantial cost and impact on society as a whole. Some of these problems are rooted in the early mother-infant relationship and might be prevented. In Denmark, primary health care has a central role in preventive care during pregnancy and the first years of the child’s life and general practice provides opportunities to promote a healthy mother-infant relationship in early parenthood. Objective In the context of standardised antenatal and child development assessments focused on psychosocial wellbeing, we examine the impact of a complex intervention designed to improve maternal mentalisation skills, involving training of general practice clinicians and signposting towards a web-based resource. Joint main outcomes are child socio-emotional and language development at age 30 months measured by parentally reported questionnaires (Communicative Development Inventory and Strengths and Difficulties Questionnaire). Methods The study is a cluster-randomised controlled trial based in general practices in the Capital Region and the Zealand Region of Denmark. Seventy practices were included. Practices were randomised by a computer algorithm in a ratio of 1:1 to intervention or control groups. Each practice was asked to recruit up to 30 women consecutively at their first scheduled antenatal assessment. Clinicians in both groups received one day of training in preventive antenatal and child development consultations with added focus on parental psychosocial well-being, social support, and parent–child interaction. These preventive consultations delivered in both trial arms require enhanced data recording about psychosocial factors. In intervention clinics, clinicians were asked to signpost a web page at three scheduled antenatal consultations and at four scheduled consultations when the child is 5 weeks, 5 months, 1 and 2 years. 
Discussion We hypothesise that the intervention will increase mothers’ ability to be sensitive to their child’s mental state to an extent that improves the child’s language and mental state at 30 months of age measured by parent-reported questionnaires. Trial registration ClinicalTrials.gov NCT04129359. Registered on Oct 16 2019. Supplementary Information The online version contains supplementary material available at 10.1186/s13063-022-07045-7.
Introduction
Studies in birth cohorts with long-lasting follow-up have identified factors associated with poor mental health later in life. These may be genetic, such as vulnerability to ADHD or autism; they may be antenatal (e.g. maternal stress hormones, smoking, and alcohol consumption); they may be located in the family or upbringing (e.g. postnatal depression, harsh or inconsistent parenting, parental discord); or they may be located in the wider environment (e.g. relative poverty, neighbourhood problems) [1,2].
These factors may interact in different ways. Some might increase resilience to adversity: in particular, there is a likely protective effect of positive parent-infant interaction against childhood psychological problems [3][4][5][6][7]. Secure infant-parent attachment, itself associated with resilience [8,9], may be a mediating factor. Early childhood social, emotional and behavioural problems are associated with increased risk of a wide range of poor outcomes associated with substantial cost and impact on society as a whole [10][11][12][13][14][15][16]. The association of adverse childhood experiences with long-term ill health is incontrovertible [17,18].
Childhood language, social and behavioural development predict long-term health [15, 16], and there is a marked overlap between disorders of language development and psychopathology [19-22]. Recent work suggests a stable association between behavioural problems and pragmatic language impairments throughout childhood [23]. It is thus essential to consider language and social, emotional and behavioural difficulties together. Other early markers of general neurodevelopmental vulnerability include abnormalities of motor development [24], sleep disorders, seizures and attention difficulties [25]; conditions which should trigger assessment across the neurodevelopmental domains and lead to careful follow-up.
Parental emotional well-being is another major determinant of a child's social and emotional development [26, 27]. Cohort research [3-7, 28, 29] demonstrates strong associations between parental mental health, parenting behaviours and children's psychiatric outcomes. The antenatal maternal mental state may be an even stronger predictor of sensitive parenting behaviours than the postnatal maternal mental state [30]. The mediators of the association between antenatal maternal stress and adverse child outcomes are complex but may involve endocrine effects [31, 32] as well as reduced 'maternal preoccupation' with the foetus during a critical period for the development of maternal sensitivity in late pregnancy [30].
The association between postnatal depression and child psychopathology has been long established [33], but the relationship between poor parent-child interaction and poor neurodevelopmental outcomes is probably stronger [34], and treatment of depression alone may be inadequate to achieve improvement in child outcomes [35]. Interventions designed to improve both parental mental health and the parent-child relationship are thus likely to optimise benefits in terms of child development and are potentially valuable public health interventions [36, 37].
Scheduled antenatal and child development assessments offer an opportunity for clinicians to identify potential risks to child neurodevelopment and take appropriate action. These assessments are carried out in diverse settings and by different health professionals internationally [38], but in Denmark they are largely based in general practice, where 10 preventive contacts are offered to families before a child reaches 5 years of age, with high uptake. We therefore decided to test the effectiveness of a general-practice-based intervention designed to improve the child's psychosocial environment. The intervention is a web-based programme (robustbarn.dk), introduced during practice-based developmental assessments with a psychosocial focus. The programme, signposted to parents when considered appropriate by clinicians, aims to improve parental mentalization skills. Better mentalization skills should help parents to increase their understanding of their own mental state and that of their children, thus improving parent-child interaction and subsequently child developmental outcomes [39].
This protocol paper describes the background, purpose, and design of an effectiveness trial of a complex intervention involving signposting by primary care clinicians towards resources at the robustbarn.dk website during seven scheduled preventive consultations during pregnancy and a child's first 30 months of life.
Trial design
This is a cluster-randomised, non-blinded, parallel-group superiority trial with a 1:1 allocation ratio. Enhanced care-as-usual (i.e. including preventive consultations with a structured collection of data on family psychosocial factors) is used as a comparator, as this constitutes the most naturalistic approach. A process evaluation and a health economic evaluation will be undertaken during the study period.
Methods/design
The study is a cluster randomised controlled trial with the general practice as the unit of randomisation. The SPIRIT reporting guidelines were used for this study [40].
Trial setting
The study is conducted in two of the five Danish administrative regions: the Capital Region and Region Zealand. In Denmark, the healthcare system is free of charge for everyone with a social security number. General practitioners (GPs) are self-employed and work under a collective agreement with the administrative regions. General practices can be single-handed or consist of several physicians. The GP employs staff, such as practice nurses, midwives, GP trainees, and medical students, to deliver services to patients. The GP holds responsibility for all scheduled assessments but can delegate the work to, e.g., a nurse or midwife. The GP functions as a gatekeeper to the secondary healthcare system and offers continuity in preventive childcare, with three scheduled antenatal assessments and seven scheduled assessments from birth to school entry. Most communication between GPs and other health services is done through pregnancy charts, referrals, and discharge summaries.
Intervention and enhanced care-as-usual
The MRC guideline for developing and evaluating complex interventions was used to inform the trial design [41]. (1) We identified existing literature about psychological resilience and the early mother-infant relationship. (2) A programme theory describing the overall rationale for how positive mother-infant relationships could be promoted in the context of scheduled appointments was developed and visualised through a logic model (Fig. 1). (3) We performed a pilot study between 2017 and 2018. Ten general practitioners participated in a 2-day training course where key concepts from Robusthedsprogrammet (Eng: the resilience programme) [42] were introduced. Participating clinics took part in a discussion that served to refine the resilience programme to match the context of the antenatal assessments in general practice [41]. Lessons learned from the feasibility study led to adjustments of the intervention, including reduction of the duration of the training programme for GPs to 1 day; invitation of all clinical staff involved in the assessments to the trial (not only the GP); and fitting the introduction of the intervention to the context of the first antenatal assessment (at 6-10 weeks of gestation). This assessment is already burdened with administrative tasks, such as journal recording and choice of birth place, so the initial introduction of robustbarn.dk was limited to 15 min at the first antenatal appointment.
Inclusion and exclusion criteria for GPs
GPs were eligible for participation if they had a clinic registration number in the Capital Region or Region Zealand. GPs that participated in similar trials at the time of inclusion were not eligible.
Identification and recruitment of GPs
A list of addresses of every GP clinic in the two Danish administrative regions was retrieved from medcom.dk in March 2019, and letters were sent to all GP clinics in Region Zealand and the Capital Region inviting them to participate in the study in April 2019. An invitation was also sent as part of an online newsletter to all GPs in the two regions. Clinics received a reminder by email after four weeks. Between May and September 2019, 70 general practices accepted the invitation to participate. They and/or their staff involved in preventive consultations agreed to attend a 1-day or 2-day training programme, for control and intervention clinics respectively. All GPs participating in the study received reimbursement for administrative tasks and time spent on courses in connection with the project (standard tariff as negotiated between the GP trade union and the administrative regions).
Randomisation of GPs
After completion of GP recruitment but before the training course, GP clinics were randomised to the intervention or the control group. Randomisation was performed by an external statistician using a computer-generated randomisation sequence.
Inclusion and exclusion criteria for pregnant women
Women were eligible for participation if they were pregnant, ≥ 18 years, and attended their first antenatal assessment in participating general practices. Women were excluded if they were unable to complete questionnaires or participate in the intervention because of very limited Danish language comprehension, or if they planned to move to another general practice during the pregnancy or shortly after the birth of the child. Families with other significant difficulties, including those engaged in other therapeutic interventions, were eligible for inclusion.
Identification and recruitment of pregnant women
GPs participating in the trial consecutively invited all pregnant women attending their first antenatal assessment, usually in gestation weeks 6-10. Each practice was asked to recruit a minimum of 10 and a maximum of 30 consecutive participants at their first pregnancy assessment, starting October 2019. Data were recorded for women who declined participation, and participation rates were monitored carefully.
Intervention group
1-day robustbarn.dk training course for GPs and staff
On the basis of the feasibility study results, a one-day training programme in robustbarn.dk was developed.
The training programme was offered to all participating clinics randomised to the intervention group. GPs were encouraged to invite clinic staff usually involved in antenatal assessments and child development assessments to the training course. The course was mandatory for the GP, but voluntary for staff and trainees. It involved introducing the pregnant women to the core concepts of robustbarn.dk as well as encouraging women to log in to the website regularly during pregnancy and after giving birth. The training was provided by specialists in the resilience programme, employed at a government-funded health-promoting organisation, the "Committee for Health Education", and by a GP with specialist training (AHG) who bridged the use of the intervention elements to fit a general practice setting.
Robustbarn.dk
GPs and staff participating in the intervention arm were introduced to the background, the structure and the aim of the website robustbarn.dk. Furthermore, they were trained in presenting the intervention to the parents at each preventive examination and in other consultations where the GP considered it likely that the programme could be useful to the family, e.g. when they reported mental difficulties during pregnancy or postnatally. All pregnant women in the intervention arm were introduced to the webpage by their GP at the first antenatal appointment. Women also received a leaflet with a brief description of the website content. Once the GP included a pregnant woman in the project, the woman received a unique login to robustbarn.dk in her secure electronic mailbox (E-boks). This procedure ensured that only women in the intervention group could access the website and thereby should prevent contamination across study arms.
Robustbarn.dk is a website specifically designed for pregnant women and new parents. It is a collection of brief psycho-educational texts, sound files, and exercises (please see supplementary material for more information). The intervention includes e-learning modules for parents related to the timing of antenatal and postnatal consultations, e.g. information about normal emotional reactions in pregnancy, preparing for delivery, and support in relating to the newborn child.
1-day assessment-training course for GPs and staff
GPs and staff in the intervention arm additionally received a 1-day training course in the appropriate use of the assessment tools: screening for symptoms of depression and anxiety [43], the parent-infant interaction assessment tool [3], infant neuro-developmental assessment [44], child examination, and the systematic child record [45].
Control group
Pregnant women attended by a GP allocated to the control arm received enhanced care as usual. Control group GPs attended the same 1-day assessment-training course for GPs and staff as described above, and were paid to add 15 min to their preventive consultation times to accommodate the extra work. The control group had no insight into, or training related to, the webpage robustbarn.dk, and their patients were not able to access the website. See Table 1 for an overview of what constitutes the intervention and control groups.
Ensuring adherence
To improve adherence to the protocol, quarterly emails are sent to all participating GPs by the research team enquiring about any problems encountered concerning trial-related tasks. The emails include information on numbers of recruited participants and solutions to potential problems connected to data registration. GPs were paid 1000 Danish Kroner per recruited patient for the clinical time used in the 3-year study period. To secure adequate participant enrolment, we assess inclusion every month. All participating GPs receive reminders about the project at regular intervals, and those with low inclusion numbers were asked if they needed help with practical issues. To ensure representative enrolment, all GPs are asked to make notes about pregnant women who were eligible but not included. The pregnancy consultation used for inclusion has a special billing code, and at the end of the study it will therefore be possible to perform a register-based analysis of non-participation.
To ensure adherence among the participating pregnant women, three-monthly newsletter emails were distributed during the first year of the project. The newsletter provides updates from the research team and access to an official project website (familietrivsel.dk).
Primary outcome
The joint primary outcomes are parentally-reported child social and emotional functioning measured by the Strengths and Difficulties Questionnaire (SDQ) and expressive language performance measured by the MacArthur-Bates Communicative Development Inventory (CDI):
Table 1 Overview of what constitutes the intervention and control groups

Intervention clinics:
• Training in assessing parental mental health, mother-infant interaction, infant neuro-developmental assessment, and the systematic child record for all appointments until the child is 2 years
• Training in introducing the concepts of the mentalisation-based robustbarn.dk website
• Instructed to include min. 10 and max. 30 pregnant women consecutively at the 1st antenatal appointment
• Quarterly newsletters to enhance adherence to the trial

Control clinics:
• Training in assessing parental mental health, mother-infant interaction, infant neuro-developmental assessment, and the systematic child record for all appointments until the child is 2 years
• Instructed to include min. 10 and max. 30 pregnant women consecutively at the 1st antenatal appointment
• Quarterly newsletters to enhance adherence to the trial

Social and emotional functioning will be measured by the Total Difficulties Scale of the maternally-reported SDQ [46] at age 30 months [19]. The predictive validity for psychiatric disorders 1-2 years later is good, with the area under the Receiver-Operating Characteristic curve (ROC AUC) 0.821 [47]. The SDQ has proved susceptible to change and has been used as a principal outcome in several recent randomised trials reporting successful psychoeducational interventions [48-51]. It is also of note that there are marked differences in SDQ scores at 30 months by socio-economic status: in Glasgow, maternally-reported SDQ Total Difficulties Scale scores are approximately two points higher in the most deprived quintile compared with the least deprived quintile [52].
Long-term outcomes
Data on use of health services, diagnoses and educational attainment will be obtained from national registers.
Sample size
A total sample of 488 children was estimated to be needed to find a difference of two points in the SDQ Total Difficulties Scale score (an effect size of 0.3) with 80% power at a 2.5% significance level. The estimate of 488 was based on the assumption of an intra-class correlation coefficient (ICC) of 0.02 in 60 clinics (an average of eight children per clinic). Allowing for 22% attrition, we therefore aimed to recruit 488/0.78 = 624 children in 60 clusters (on average 11 children per GP). The ICC estimate was based on the distribution of HADS scores at baseline, suggesting that the impact of the clustering effect by practice was modest.
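The arithmetic behind these figures can be reproduced with a short, standard-library-only sketch: the usual two-arm formula for a continuous outcome, inflated by the cluster design effect 1 + (m − 1) × ICC. The protocol does not state its exact rounding conventions or the sidedness of the 2.5% level (a two-sided test is assumed here), so the sketch lands near, not exactly on, 488 and 624:

```python
from math import ceil
from statistics import NormalDist

def cluster_trial_total_n(effect_size, alpha, power, cluster_size, icc):
    """Total N across both arms for a continuous outcome in a
    cluster-randomised trial, using the design effect 1 + (m - 1) * ICC."""
    z = NormalDist().inv_cdf
    n_per_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(2 * n_per_arm * design_effect)

# Protocol figures: effect size 0.3 (two SDQ points), alpha 2.5%
# (assumed two-sided, reflecting the joint primary outcomes),
# power 80%, ~8 children per practice, ICC 0.02, 22% attrition.
total = cluster_trial_total_n(0.3, 0.025, 0.80, 8, 0.02)
recruit = ceil(total / 0.78)
```

Under these assumptions the sketch yields a total in the low 480s before attrition and roughly 620 to recruit, consistent with the protocol's 488 and 624.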
Allocation, sequence generation, and concealment
The study is a cluster randomised controlled trial with the general practice as the unit of randomisation. General practices were randomised on a 1:1 basis to intervention or control groups using a computer algorithm. The computer-generated allocation sequence was concealed until all general practices were assigned.
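As an illustration only (the trial's actual algorithm and software are not specified in the protocol, and the practice identifiers below are hypothetical), a balanced 1:1 cluster allocation can be sketched in a few lines:

```python
import random

def randomise_clusters(practice_ids, seed):
    """Toy 1:1 allocation of whole practices to trial arms.
    Shuffling a pre-balanced list guarantees exactly half of the
    clusters end up in each arm."""
    if len(practice_ids) % 2:
        raise ValueError("expects an even number of practices")
    arms = ["intervention", "control"] * (len(practice_ids) // 2)
    random.Random(seed).shuffle(arms)  # the seed stands in for the concealed sequence
    return dict(zip(practice_ids, arms))

# 70 practices, as in the trial.
allocation = randomise_clusters([f"practice_{i:02d}" for i in range(70)], seed=2019)
```

In the trial itself this step was performed by an external statistician, which is what provides allocation concealment.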
Blinding (masking)
The design is open-label, with only the study statistician blinded; the dataset will be locked after collection of primary outcome data, so premature un-blinding will not occur.
Data collection, management, and analysis
Table 2 shows the SPIRIT timeline of the study.
Data collection during the study period
The intervention period for each woman is approximately 37 months (~ 7 months of pregnancy and the first 30 months of the child's life). GPs recruited women consecutively, and therefore the 37-month intervention period did not start simultaneously for all included patients. Maternally-reported data (Table 2) are regularly collected from inclusion to the end of the study. Thus, most participant data will be collected at three time points: 1) inclusion (baseline), 2) when the child is 15 months, and 3) when the child is 30 months. The baseline demographic questionnaire included educational qualification level, employment status, and household composition. All data collected from the women are collected through E-boks, a private and secure online digital mailbox that all citizens in Denmark have. Baseline data for the study will consist of GP-reported data, patient-reported data, and Danish administrative register data. GP-reported data (including the developmental assessment data) and the families' self-assessments are completed electronically by use of REDCap [60]. Information about services from the social and health care system will be collected through Danish registers.
End-of-study data collection
After completion of the approximately 37-month intervention period for each woman, a questionnaire with the primary outcomes (CDI and SDQ) will be sent to the women, followed by the remaining questionnaires. Participants will receive 2 automatic reminders and subsequently a text message to ensure completeness of data. Mothers' and their children's service use will be collected from registers. All data are stored in REDCap [60].
Statistical methods
Binary-valued outcomes will be analysed in logistic regression and continuously valued outcomes will be analysed in linear regression. To account for clustering within practices and for possible repeated measurements, generalised estimating equations (GEE) will be employed to adjust the covariance matrix. Possible differential dropout will be adjusted for using inverse probability weighting [61].
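The inverse-probability-weighting step can be illustrated with a self-contained toy example (simulated numbers, not trial data; the actual analysis will combine IPW with GEE): completers are weighted by the inverse of their estimated probability of remaining in the study, which removes the bias a plain mean suffers under differential dropout.

```python
import random

rng = random.Random(1)

# Simulated cohort: outcome y = 2x, so the full-sample mean of y is 1.0.
# Participants with x = 1 drop out more often (observed 50% vs 90%).
population = [(x, 2.0 * x) for x in (0, 1) for _ in range(5000)]
p_observed = {0: 0.9, 1: 0.5}

observed = [(x, y) for x, y in population if rng.random() < p_observed[x]]

# Plain mean over completers is biased towards the low-dropout group.
naive_mean = sum(y for _, y in observed) / len(observed)

# IPW: weight each completer by 1 / P(observed) to recover the mean.
weights = [1 / p_observed[x] for x, _ in observed]
ipw_mean = sum(w * y for w, (_, y) in zip(weights, observed)) / sum(weights)
```

Here the naive completer mean drifts well below the true value of 1.0, while the weighted mean recovers it; in practice the observation probabilities are themselves estimated, e.g. from baseline covariates.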
Process evaluation
The main objectives of the process evaluation are:
○ To identify the key enablers and barriers for signposting the robustbarn.dk programme in clinical practice
○ To provide an empirically grounded explanation of the results from the family resilience cluster randomised clinical trial
○ To contribute to the family resilience project's overall assessment of the value, implications, and potential scalability and transferability of the intervention
These objectives involve monitoring and exploring how professionals and patients respond to the intervention (with what type of consequences), how the intervention is actually implemented in practice (if it is implemented as intended in the protocol, i.e. implementation fidelity), and how contextual factors influence implementation processes, mechanisms, and outcomes [62].
The implementation process will be evaluated using Normalization Process Theory (NPT), which provides an explanatory framework for investigating how complex interventions are implemented in organisational settings. According to the theory, implementation emerges through four generative mechanisms: coherence, cognitive participation, collective action, and reflexive monitoring [63, 64].
Process evaluation data collection and analysis
The study will adopt a mixed-methods approach combining in-depth qualitative data with quantitative process data on intervention activities [62].
Qualitative interviews and observations
During and after the randomised clinical trial, semi-structured, face-to-face interviews will be conducted with purposively sampled health professionals and patients from 15 practices. The aim is to conduct approximately 20 interviews with health professionals and 20 interviews with their patients. The interviews will be audio-taped and transcribed verbatim. First, the data material will be analysed using an inductive thematic approach; subsequently, a more deductive thematic analysis will be performed using NPT as a coding framework.
Quantitative data
This qualitative study will be supplemented by descriptive quantitative data on key process indicators, preferably from all sites or participants, for example the number and duration of intervention website visits. Women's use of the homepage during the first year after entering the study will be analysed in order to describe how their characteristics are associated with use of the intervention. Further, data about the general practices will be assessed to examine how different practice characteristics affect the women's use of the intervention. The following general practice characteristics will be handled: practice area deprivation; practice organisation, categorised as single-handed practice, companionship practice or group practice; practices with or without a nurse or midwife; and how many women each practice had recruited.
Interpretation of qualitative and observational data
Qualitative data will be coded according to the framework of Normalization Process Theory and will address the main objectives described above [65].
Economic evaluation
The economic cost analysis will consider and present possible costs and benefits associated with the intervention. In particular, the analysis will estimate the costs of the treatment as compared with the costs of the standard care offered to the control group. Furthermore, the economic analysis will assess the benefits of the treatment vis-à-vis standard treatment, e.g. in terms of potential saved short- and long-term costs. To the extent possible, the economic analysis will thus examine the long-term benefits and costs of the intervention.
Costs and benefits will be assessed by linking information from survey respondents to socio-economic and health information in the Danish registers. Linking to register data will facilitate following individuals (parents and children) over time, and the survey will thus be an invaluable source for future follow-up research. The analysis will take a societal perspective to include costs that fall on GPs, other relevant service providers (for example, health visitors or social services) and the affected mothers and their families. Costs of the intervention will be obtained from trial documentation and in consultation with intervention providers. The intervention costs will include GP costs for delivering any relevant components of the intervention, RP staff costs for delivering training, and costs of any consumables required to deliver interventions. Costs of usual care include costs of current care to affected mothers provided by health visitors (sundhedsplejerske) and general practitioners, and also the costs falling on other services (hospital, local authority or other services) through referrals. These data will be combined with study-specific unit costs or unit costs from publicly available standard sources to produce a total cost for both the intervention and control groups.
Apart from changes in the child and maternal outcomes, there are likely to be wider benefits. These might include work-related or educational benefits for the affected mothers and their families, increased family cohesion, potential reduction of inequalities between socioeconomic groups, as well as better educational outcomes (eventually) for the children involved. Those benefits may come about as a result of increasing interactions between mothers and their children and other family members, and increased knowledge and skills for continued improved functioning in the future. We shall also collect EQ-5D-5L data from mothers at baseline, every 6 months, and at the end of the trial to capture potential improvement in maternal quality of life.
A sensitivity analysis will be undertaken to explore possible variations of outcome measures and estimate mean effects as well as confidence bands.
As suggested, it is possible that the intervention may lead to long-term benefits to society beyond the trial's follow-up period. A longer time horizon will provide more time for the effects to accrue and potentially offset the initial costs of the intervention. The long-term benefits of the intervention may include costs saved as a result of conduct and emotional disorders avoided, avoided criminal justice costs, reduced needs for special educational services, reduced mental health service use, and reduced productivity loss for the family, as well as improved quality of life for parents. Using longitudinal register data allows for such long-term follow-up.
Data management
The study adheres to all Danish laws governing medical research. The General Data Protection Regulation is upheld, and data are stored and handled accordingly. The study owner (University of Copenhagen) is responsible for upholding laws and ensuring the confidentiality of data. All data that can identify participants are encrypted and stored securely on password-protected servers with continuous transaction logging. Trial data are stored in accordance with the data policy of the University of Copenhagen. Data are saved for 5 years after data collection and will thereafter be anonymized or deleted.
Monitoring
The study has been approved by the University of Copenhagen Data Protection Agency (Case no. 514-0362/19-3000). According to Danish legislation, there is no need to apply for the approval of the National Danish Data Protection Agency when regional approval has been given. A data processing agreement with each GP was signed before the collection of data. The project management group have met once a month throughout the trial period to monitor recruitment, trial progress, completeness of data and ethical issues. Our Trial Steering Committee (TSC) met at the outset of the trial and made a decision to meet on an ad-hoc basis when requested by the investigating team. This has not been required to date, but the TSC will be convened when our final dataset is locked. Any changes to the conduct of the trial will be agreed upon by the project management group.
Dissemination policy
The results from the study will be published in peer-reviewed journals. The final list and order of authors will follow the contribution from each researcher, the Vancouver rules, and the guidelines from The Danish Committees on Scientific Dishonesty. PW is the Chief Investigator; he conceived the study and led the proposal and protocol development together with JK. GO, MG, AG and VS contributed to the study design and to the development of the proposal. PW, VS, GO, AG and JK were the lead trial methodologists. All authors read and approved the final manuscript.
Access to data
After our primary publication of the trial results, a version of the data will be shared on a public platform and made available for research in accordance with Danish law about the protection of personal data.
Discussion
This cluster-randomised controlled study aims to test the effectiveness and feasibility of an intervention to increase mental well-being and resilience in new mothers and their offspring. GPs and staff received brief training in the core concepts of the web-based resilience training programme. From October 2019 to March 2020, 70 GPs/staff were trained and subsequently included pregnant women in the study. An obvious shortcoming of the study might be related to COVID-19, since a large proportion of antenatal and postnatal contacts were reported to be affected by the pandemic. GPs and staff reported that pregnant women's major concerns were infection and infection-related risks. This narrowed the clinicians' window of opportunity to put mental well-being and mentalisation on the agenda during consultations, and the robustbarn.dk intervention might have been introduced with less enthusiasm than would otherwise have occurred, due to other competing tasks in the clinic.
Strengths
The location of this pragmatic cluster-randomised trial in Danish general practice, with its typically high levels of engagement with preventive obstetric and child developmental assessments, is likely to create a robust sample with relatively low levels of attrition, assuming patients remain registered with their original GP. Recruitment of consecutive patients attending their first antenatal appointment should provide a sample representing a typical clinical caseload for participating practices. The flexibility given to clinicians in terms of signposting their patients towards the web-based resources should reflect potential future real-world practice underpinned by progressive universalist principles [66], and the quantitative process evaluation of website usage will allow assessment of equity of access to the resources. The strength of Danish population registers will ensure reliable long-term follow-up data for almost all participants.
Limitations
It was necessary to exclude non-Danish-speaking women from participation for pragmatic reasons, and this will reduce the generalizability of findings to migrant populations.
It is possible that there may be variation in recruitment rates across practices, with some practices generating selective samples while others may recruit almost all eligible women [67]. Similarly, there may be variation across practices in the extent to which the intervention is presented to participants. Given the design of the trial, these factors may increase clustering effects and potentially reduce statistical power.
There may be some dilution of the intensity of intervention if participants change practice or consult with untrained clinicians within their existing practice.
Recruitment of patients was initiated by October 2019 and is expected to be completed before the end of 2022.
The SPIRIT checklist and timeline have been included as additional information and in the text.
Sponsor and funder
The funders have no role nor authority in the design of the study; the collection, analysis, and interpretation of data; or the dissemination of the project.
Committees
The trial has a Trial Steering Committee consisting of an independent chair, together with at least two other independent members, the Chief Investigator and a patient representative/service user. Other members will include the grant holders. Observers may also attend, as may other members of the Trial office or members of other professional bodies at the invitation of the Chair.
Biological specimens
Not applicable. No samples are collected.
Funding
The TRYG Foundation. Grant number: 125227. The Quality and Educational Committee (KEU; Kvalitet og Efterudannelsesudvalget), Capital Region of Denmark. Grant number: 19035774. The funders have no role nor authority in the design of the study; the collection, analysis, and interpretation of data; or the dissemination of the project.
Endophytic colonization of tomato plants by Beauveria bassiana Vuillemin (Ascomycota: Hypocreales) and leaf damage in Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae) larvae
Background: The endophytic capacity of Beauveria bassiana Vuillemin isolates in 2 tomato varieties and their effects on damage and survival of the tomato fruit worm Helicoverpa armigera Hübner larvae were studied. The bioassays consisted of sowing seeds of 2 tomato cultivars soaked for 24 h in B. bassiana conidial suspension at concentrations of 1 × 10⁷ and 1 × 10⁹ conidia/ml for the isolates Bb 115 and Bb 11. Ten leaf, stem, and root segments were cut and incubated to assess the endophytic growth of the fungus. The percentage of leaf consumption and the pathogenicity of B. bassiana on H. armigera larvae were estimated. Main body: The fungus B. bassiana developed endophytically in the 2 tomato varieties and was detected in tomato leaves, stems, and roots. However, higher colonization rates were observed in roots than in leaves and stems. The B. bassiana isolate Bb 115 had a greater negative effect on the mean survival times (MSTs) of H. armigera larvae and on leaf consumption for the local and improved tomato varieties. In fact, the lowest MST, 1.5 ± 0.2 days, was recorded at the concentration of 1 × 10⁹ conidia/ml for Bb 115, i.e., 7 days less than that of the surviving larvae of the control group, whose MST was 8.4 ± 0.9 days. Leaf area consumed by larvae averaged 89.17 ± 10.33 mm² at a fungal concentration of 1 × 10⁹ conidia/ml for Bb 115, the lowest compared to that of the untreated control (820.3 ± 92.77 mm²). The colonization rate of the different plant parts increased with conidia concentration in both tomato varieties. Conclusion: This study reported the effect of endophytic colonization of tomato by B. bassiana on the survival of H. armigera larvae and showed that the isolates Bb 115 and Bb 11 could be considered useful microorganisms for the integrated control of H. armigera.
Background
The tomato (Solanum lycopersicum L.) is one of the most important and most consumed vegetables in the world. In Benin, yields are low and variable due to insect pest attacks, with an average of 9.533 kg per hectare (Assogba Komlan et al. 2016). Tomato producers face significant pressure from insect pests, particularly the damage caused by the tomato fruit worm, Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae), the major threat to tomato crops throughout the growing season due to its direct damage to the fruits, resulting in yield losses ranging from 20 to 60% (Herrero et al. 2018). Management of the pest depends mainly on the application of synthetic pyrethroids. However, the use of such chemical compounds has led to many side effects, such as human hazards, toxic residues in food, insect pest resistance, environmental pollution, and loss of biodiversity. Therefore, there is an urgent need to develop an alternative safe control method. Among the most sustainable alternatives, biological control with entomopathogenic fungi stands out. The fungus Beauveria bassiana (Ascomycota: Hypocreales) is one of the most widely used entomopathogens in the biological control of pests through the direct application of conidial suspensions (Douro Kpindou et al. 2012). B. bassiana is a fungal species with an extremely broad host spectrum. It is a well-known, naturally occurring and environmentally safe biological control agent (Prasad and Syed 2010). Similar to other species of fungi, B. bassiana can endophytically colonize plant tissues and negatively affect herbivore species that feed on them. Endophytic microorganisms reside asymptomatically within higher plants, inhabiting leaves, stems, and roots without any apparent harm to the plant. Endophytic fungi are important because they produce secondary metabolites with a range of potential uses in the agricultural and pharmaceutical industries (Selim et al. 2012).
An endophytic fungus forms a mutually beneficial symbiotic relationship with the plant species. It lives within the tissues of the plant without causing disease; on the contrary, it stimulates the plant's defenses, and the plant in return acts as a host. Metabolites produced by some endophytic fungi, such as B. bassiana, have been reported to reduce insect infestations on their host plants (Jaber Lara and Ownley 2017). The insecticidal action of endophytic B. bassiana has also contributed to the management of lepidopteran pests. This is most likely due to plant systemic resistance elicited by these fungi against insect herbivores. Induced systemic resistance (ISR) is an important mechanism by which the whole plant is primed for enhanced defense against a broad range of insect pests (Pieterse et al. 2014). The efficiency with which B. bassiana can colonize tomatoes and induce defense responses to repel pests is still unknown in Benin. However, investigations should continue within the framework of the biological control of insect pests using endophytic fungi.
Thus, it was important to study the endophytic character of the B. bassiana isolates Bb 115 and Bb 11 on 2 different varieties of tomato, local and improved, the most consumed in Benin. The aim of the study was to evaluate the endophytic colonization of B. bassiana in tomato plants and its effect on H. armigera survival.
Plant material
The local tomato variety "Tounvi" and the improved variety "Padma" were used during the bioassays. Both varieties are semi-erect, with development cycles of 65 to 90 and 60 to 70 days for the local and improved varieties, respectively. The average weight of a tomato fruit is 24 g for the local variety and 120-130 g for the improved variety. Tounvi and Padma are the 2 most produced and marketed varieties in Benin owing to their agronomic quality, their color, and their resistance to pests such as H. armigera (Assogba Komlan et al. 2016).
Rearing of Helicoverpa armigera
A rearing colony of H. armigera was established at the laboratory by caterpillars collected from tomato fields at different localities in major tomato production areas in Benin. The larvae were placed in plastic containers (6.5 × 19.5 cm) and reared on an artificial diet under controlled conditions (70 ± 5% RH, 26 ± 2°C, with a photoperiod of L: D 12:12) until pupation (Douro Kpindou et al. 2012). In order to prevent cannibalism, third-instar caterpillars were transferred individually in Petri dishes provided with artificial diet. The artificial diet consisted of bean flour, beer yeast, methylparaben, ascorbic and sorbic acids, streptomycin, formaldehyde, vitamin complex, agar, and distilled water. Diet was replaced every 2 days in order to avoid desiccation; moistened filter paper was placed in each Petri dish (Barrionuevo et al. 2012). Pupae were collected and placed in polypropylene containers (6 × 12 cm) until adult emergence. Folded paper was placed inside the cages for egg deposition. Collected eggs were kept until hatching, and then the larvae were reared as described above. Third-instar larvae (L3; 7.4 ± 0.1 days) were used in all bioassays.
Source and production of the entomopathogenic fungi
The isolates Bb 11 and Bb 115 were obtained from the entomopathogenic fungi (EPF) collection of the Applied Entomology Laboratory and were selected in previous studies for their pathogenicity towards H. armigera (Douro Kpindou et al. 2012). The endophytic character of the isolates remained to be demonstrated on the varieties of tomato most consumed in Benin. Conidia from the 2 fungal isolates were picked from the stock culture, placed onto standard Potato Dextrose Agar (PDA) in Petri dishes (Ø = 9 cm) (Becton, Dickinson and Company; Sparks, MD 21152, USA) for subculture, and incubated for 14 days at 26 ± 2°C and a photoperiod of 14:10 h (L:D). Then, conidia of each isolate were harvested by scraping them from the PDA using a sterilized scalpel and suspended in 0.01% (w/v) Tween 80®. Conidial concentrations were estimated using a Neubauer hemocytometer and adjusted to 1 × 10⁷ and 1 × 10⁹ conidia/ml for each isolate (Posada and Vega 2005).
The concentration of conidia to be used was calculated by standard dilution, where C′ = concentration to be tested, Co = concentration of the initial conidial suspension, Vo = volume needed, and V′ = volume to be added.
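The dilution equation itself did not survive extraction, and whether the paper's V′ denotes stock or diluent volume is ambiguous in the extracted text. As a minimal sketch, assuming the standard dilution relation C_stock × V_stock = C_target × V_final (function name and example volumes are illustrative only, not from the paper):

```python
def stock_volume_needed(c_target, v_final, c_stock):
    """Volume of stock suspension (same units as v_final) required to reach
    c_target conidia/ml in a final volume v_final, assuming the standard
    dilution relation c_stock * v_stock = c_target * v_final."""
    if c_target > c_stock:
        raise ValueError("target concentration exceeds stock concentration")
    return c_target * v_final / c_stock

# Example: dilute a 1e9 conidia/ml stock down to 1e7 conidia/ml in 10 ml
v_stock = stock_volume_needed(1e7, 10.0, 1e9)   # 0.1 ml of stock
v_diluent = 10.0 - v_stock                      # 9.9 ml of 0.01% Tween 80
```

The same relation covers both working concentrations used in the bioassays (1 × 10⁷ and 1 × 10⁹ conidia/ml).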
The viability of conidia after 24 h incubation on PDA was 89 ± 3.7% and 92 ± 1.5% for Bb 11 and Bb 115, respectively.
Treatment with B. bassiana through seed coating
The seeds were sown and monitored under greenhouse conditions (26 ± 5°C, 14:10 h photoperiod) until use of the tomato plants. Before sowing, the seeds were mixed with a suspension of fungal conidia at an inoculum of 1 × 10⁷ or 1 × 10⁹ conidia/ml of each isolate. Then, they were placed on filter paper and kept for 24 h before sowing (Russo et al. 2015). The experiment consisted of 5 treatments: (i) tomato seeds soaked in Bb 11 at 1 × 10⁷ conidia/ml; (ii) tomato seeds soaked in Bb 11 at 1 × 10⁹ conidia/ml; (iii) tomato seeds soaked in Bb 115 at 1 × 10⁷ conidia/ml; (iv) tomato seeds soaked in Bb 115 at 1 × 10⁹ conidia/ml; and (v) untreated control. Seeds were sown in 10 pots for each treatment. All treatments were replicated 3 times for each of the 2 tomato varieties.
Assessment of the endophytic colonization B. bassiana
The methods of Arnold et al. (2000) and Kambrekar and Aruna (2018) were used to re-isolate B. bassiana from inoculated plant organs. For this purpose, all the glassware was sterilized using an autoclave at 121°C for 15 min and then kept in a hot-air oven at 55°C for 1 h. Then, 10 leaves and roots randomly sampled from inoculated tomato plants were cut into 5 pieces with a sterilized knife under a laminar air flow chamber. The 5 pieces (3 cm²) of each organ (leaf, root), with 3 replicates per treatment and per variety, were sterilized in 0.5% sodium hypochlorite for 3 min, then in 70% ethanol for 2 min, washed with sterile water, and dried before being placed onto PDA in Petri dishes (9 cm diameter). Petri dishes were incubated at room temperature (28 ± 2°C) and periodically checked for fungal growth. The purity and sporulation of the culture were checked using a microscope. Colonization by B. bassiana was confirmed by microscopic observations (Ma et al. 2008). Petri dishes with B. bassiana were counted for each organ per treatment and per variety.
Percentage of colonization = (no. of segments colonized / total no. of plant segments sampled) × 100

Percent colonization was determined for the different plant organs, and the most virulent B. bassiana isolate with the highest endophytic colonization was identified.
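The colonization formula above translates directly into code; the segment counts in the example are hypothetical, not the study's data:

```python
def percent_colonization(colonized, total):
    """Percentage of sampled plant segments showing B. bassiana outgrowth:
    (no. of segments colonized / total no. of segments sampled) * 100."""
    if total == 0:
        raise ValueError("no segments sampled")
    return 100.0 * colonized / total

# Hypothetical counts: 45 of 50 leaf segments colonized
rate = percent_colonization(45, 50)  # → 90.0
```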
Assessment of leaf consumption and larval survival in inoculated and untreated tomato plants
Leaf consumption was assessed by measuring the leaf area consumed by each larva in each treatment. The test consisted of feeding 10 third-instar larvae of H. armigera for 24 h on tomato leaves sampled from inoculated and untreated tomato plants of both the local and improved varieties. Ten discs of tomato leaves (3 cm in diameter) were obtained by cutting the sampled leaves, and each was offered to one of the 10 H. armigera larvae placed in Petri dishes (90 mm) (Magrini et al. 2015). The Petri dishes were then incubated for 24 h at 25°C and 60% RH. Using graduated paper, the consumed area was estimated per treatment and per variety (Russo et al. 2015). Experiments were replicated 3 times. In parallel, survival and mortality of H. armigera larvae were checked daily per treatment and per variety. Leaves were replaced every 2 days until the 10th day after treatment (Ma et al. 2008).
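From the daily survival checks, a crude mean-survival-time summary can be sketched. The study's reported MSTs come from the time-dose-mortality modeling described under Data analysis, so this is only an illustrative approximation, and the day-of-death data below are hypothetical:

```python
def mean_survival_time(days_of_death, n_survivors, censor_day=10):
    """Crude mean survival time over the 10-day observation window:
    dead larvae contribute their day of death, survivors are censored at
    censor_day. (The study itself models time-dose-mortality with Cox
    regression; this is only a back-of-the-envelope summary.)"""
    times = list(days_of_death) + [censor_day] * n_survivors
    return sum(times) / len(times)

# Hypothetical daily-check data: five larvae dead on days 1-2, none surviving
mst = mean_survival_time([1, 1, 2, 2, 2], n_survivors=0)  # → 1.6 days
```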
Data analysis
The percentage of plants colonized by B. bassiana was compared using the chi-square test. Data on larval mortality and sporulation rates were processed by analysis of variance (ANOVA), using the general linear model (GLM) procedure of SAS (SAS Institute Inc 2003). Percentages were based on the initial number of larvae exposed. In case of significant F values, means were compared using the SNK (Student-Newman-Keuls) test. The time-dose-mortality data were modeled using the Cox regression model (SPSS Inc., 1989-2003). Data on leaf area consumed by larvae were compared by applying ANOVA, followed by SNK.
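A chi-square comparison of colonization proportions like the one reported can be sketched in pure Python for a 2×2 table; the counts below are hypothetical and not the study's data (in practice, SAS or `scipy.stats.chi2_contingency` would be used):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2
    table [[a, b], [c, d]], e.g. colonized/not-colonized segment counts
    in two tomato varieties:
        chi2 = n * (a*d - b*c)**2 / ((a+b)*(c+d)*(a+c)*(b+d))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical: 46/50 segments colonized (local) vs. 37/50 (improved)
stat = chi_square_2x2(46, 4, 37, 13)
```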
Endophytic colonization of tomato plants by B. bassiana
The endophytic colonization of the tomato organs placed onto PDA showed a whitish color, characteristic of B. bassiana mycelium. This colonization was also confirmed by microscopic observations. Thus, the presence of B. bassiana was confirmed in leaves, stems, and roots of tomatoes with the seed coating inoculation method. Regardless of the concentration, the 2 B. bassiana isolates were able to endophytically colonize the different organs in both tomato varieties (Fig. 1).
Assessment of leaf colonization by B. bassiana
The fungus was detected in tomato leaves at the 2 concentrations of the Bb 11 and Bb 115 isolates. However, colonization by the isolate Bb 115 was more efficient, depending on the inoculation concentration and tomato variety. The highest endophytic colonization rates, 91.2 and 74.55%, were observed in the local and improved varieties, respectively, at the concentration of 1 × 10⁹ conidia/ml (Fig. 2). After incubation, the lowest colonization for both varieties was registered with the concentration of 1 × 10⁷ conidia/ml of the Bb 115 isolate (local variety). For all treatments, colonization of leaves in the local variety Tounvi at 1 × 10⁹ conidia/ml of Bb 115 was significantly greater than colonization of leaves in the improved variety Padma (df = 7.81, P = 0.0050) (Fig. 2). Samples from the untreated control did not show any fungal colonization. Stem colonization by the fungus was more uniform than leaf colonization, and absence of colonization was recorded only at the 1 × 10⁷ concentration of the Bb 11 strain (Fig. 3). However, non-significant differences were observed between the treatments (DF = 1, P = 0.8266). Colonization rates on the roots were higher for both fungal isolates than those observed in the leaves and stems.
With the improved variety Padma, the highest colonization rates were registered, where the strain Bb 115 stood out with 93.51 and 89.3% for the concentrations 1 × 10⁹ and 1 × 10⁷ conidia/ml, respectively. On the other hand, low colonization of roots was observed in the local variety inoculated with the 2 isolates Bb 115 and Bb 11 at 1 × 10⁹ conidia/ml (Fig. 4). However, a non-significant difference was observed between the 2 varieties (DF = 1, P = 0.2764). No endophytic colonization was observed in the control.
Leaf consumption by H. armigera and larval survival
Statistical analysis showed significant differences (F = 13.66, P = 0.0043) in the consumption of leaves inoculated with the strains of the fungus, which suggests that the endophytic presence of B. bassiana reduced consumption by H. armigera. The use of B. bassiana as an endophytic fungus in tomato induced an overall significant reduction in leaf consumption by H. armigera larvae (F = 76.55, P < 0.0001). However, there was a non-significant difference between the local and improved varieties. In the improved variety, the average leaf area consumed was 32.08 ± 7.51 mm² for Bb 115 at 1 × 10⁹ conidia/ml against 730.7 ± 62.41 mm² for the untreated control. In the local variety, H. armigera larvae consumed an average of 89.17 ± 10.33 mm² of leaf when plants were inoculated with Bb 115 at 1 × 10⁹ conidia/ml against 820.3 ± 92.77 mm² for the untreated control (Fig. 5).
Discussion
The development of alternative methods for plant protection has become an attractive option. Among these alternative methods to control crop pests, pathogenic microorganisms including B. bassiana are promising. The entomopathogen B. bassiana was isolated from the leaves, stems, and roots of local and improved varieties of tomato, indicating the potential of both isolates as effective endophytic agents on tomato plants. Posada and Vega (2005) reported that the endophytism of the fungus B. bassiana is not harmful to plant growth. Also, this fungus can colonize various plant tissues without affecting their physiological activities. Similar results have been observed by many authors, where B. bassiana has been reported as an endophyte in tomato (Ownley et al. 2004).
Regardless of the inoculation concentration and tomato variety, the colonization rate of the B. bassiana isolate Bb 115 was higher in all sampled organs (leaves, stems, and roots) than that of the isolate Bb 11. This difference could be attributed to the Bb 115 genome being more compatible with the internal environment of the plant. Several factors, such as the origin and nature of the entomopathogen, the penetration site, the inoculation method, and the compatibility with the host plant, can also make one fungal isolate more endophytically capable than another (Renuka et al. 2017). However, the lower rate of endophytic colonization observed for the isolate Bb 11 at the concentration of 1 × 10⁷ conidia/ml could be explained by limited penetration of germinating conidia into tomato plants.
The colonization rate of tomato by B. bassiana increased with the fungal concentration, suggesting that a higher conidia number resulted in greater fungal growth within tomato plants. This may be related to higher production of secondary fungal metabolites in the different plant organs. The endophytic colonization of tomato plants by B. bassiana was effective with seed coating. However, a non-significant difference was observed between the tomato varieties, demonstrating the capacity of B. bassiana to actively penetrate plant tissues regardless of tomato variety. The higher colonization rate observed in roots than in leaves and stems may be related to their physiological differences and the fungal characteristics. Indeed, with seed coating, colonization started from the roots and progressed vertically within stems and leaves.
According to several studies, endophytic fungi present specificity for some plant tissues because they are adapted to particular conditions within plant organs. Akello et al. (2007) showed that the seed coating method allows the roots to provide a route for fungal development within plant tissue. On the other hand, Agrios (2005) reported that many pathogenic bacteria and fungi such as B. bassiana may enter plant tissues through natural openings, including stomata. Progression of B. bassiana within the tomato plant could be confirmed by detecting mycotoxins produced by B. bassiana in tomato fruit or seeds.
Moreover, the isolate Bb 115 induced a significantly greater decrease in leaf consumption by H. armigera larvae than the control. The differences between the improved and local tomato varieties could be explained by the improved variety being more susceptible to penetration by conidia than the local one. Induced systemic resistance (ISR) is an important mechanism by which the whole plant is prepared for an enhanced defense against a wide range of insect pests (Pieterse et al. 2014). However, the effective mechanism by which the endophyte B. bassiana induces defense responses in tomatoes to repel pests is still unknown. It was also shown that the B. bassiana isolate Bb 115 at a concentration of 1 × 10⁹ conidia/ml reduced the mean survival times (MSTs) of third-instar H. armigera larvae more than the isolate Bb 11. The MST of the larvae decreased with increasing fungal concentration. These results could be considered indirect effects occurring during the consumption of B. bassiana-colonized leaves. These indirect effects, such as the production of secondary metabolites or the induction of a systemic response in tomatoes, could inhibit larval feeding behavior. Similar results were obtained by Jaber Lara and Ownley (2017) for H. armigera fed leaves of Vicia faba plants treated with the endophyte B. bassiana. The obtained findings are in agreement with Castillo Lopez and Sword (2015), who observed lower survival rates of Helicoverpa zea larvae fed tomato plants colonized by B. bassiana.
Thus, B. bassiana can reduce insect pest damage through its endophytic colonization by inhibiting insect development. The endophytic relationship between an EPF and a plant suggests possibilities for biological control, in particular the use of fungal inocula as insecticides. In the present study, the endophytic characteristic of B. bassiana was proved and its effect on H. armigera larvae well established, confirming the usefulness of this fungal species in insect pest control.
Conclusions
The present study demonstrated the endophytic colonization of tomato plants by B. bassiana and its effect on the survival of H. armigera larvae feeding on leaves of inoculated local and improved tomato varieties. The colonization rate increased with fungal concentration under the seed coating method, regardless of the fungal isolate. However, the B. bassiana isolate Bb 115 at a concentration of 1 × 10⁹ conidia/ml was found to be more effective with the seed coating method. Although field studies are necessary to support the obtained results, the endophytic characteristic of B. bassiana could be included in an integrated strategy for the management of H. armigera in tomato.
PSPManalysis: Steady-state and bifurcation analysis of physiologically structured population models
How environmental conditions affect the life history of individuals, and how these effects shape population and community dynamics on ecological and evolutionary time-scales, is a central question in many eco-evolutionary studies. Physiologically structured population models (PSPMs) allow this question to be addressed theoretically, as PSPMs are built on a function-based life-history model, which explicitly describes how life history depends on individual traits and environmental factors. PSPMs furthermore explicitly account for population feedback on these environmental factors, which translates into density-dependent effects on life history. PSPManalysis is an R package that makes it possible to simulate the ecological dynamics of PSPMs, compute their ecological steady states as a function of model parameters, and detect bifurcation points in the computed curves where dynamics change drastically. It furthermore allows for analysing the evolutionary dynamics and evolutionarily singular states of PSPMs based on Adaptive Dynamics theory. The package only requires a relatively straightforward specification of the life-history functions as input. Compared to dynamic simulations alone, PSPManalysis uses methods from bifurcation analysis to gain a more complete and comprehensive understanding of model behaviour, which is much less dependent on particular parameter values or initial model conditions. Given the central role of the individual life history in many studies, there is substantial scope for using PSPManalysis in fields as diverse as ecology, ecotoxicology, conservation biology and evolutionary biology.
| INTRODUCTION
The individual life history plays a central role in ecology and evolution, determining demography and persistence of populations and, together with interactions with other species, shaping the dynamics of interacting populations and communities. An individual's life history furthermore influences its fitness and thereby governs the evolutionary change in species traits. Methodologies to assess how life-history characteristics translate into consequences at the population level are hence a core part of many eco-evolutionary studies.
Physiologically structured population models (PSPMs, de Roos, 1997; Metz & Diekmann, 1986) provide a theoretical approach to analyse links between individual life history and population dynamics, as PSPMs describe population dynamics on the basis of a function-based model of the life history, which also accounts for effects of environmental variables, such as food availability and predator density (de Roos, 2020). PSPMs furthermore account for how these environmental variables are impacted by the population as a whole. While matrix (Caswell, 2001) and integral projection models (Ellner et al., 2016) are better suited to analyse life-history observations and infer their population dynamical consequences (de Roos, 2020), PSPMs capture with considerable mechanistic detail how individual-level processes, like energetics, together with interactions of the individual with its environment shape its life history, and how feedback of the entire population on this environment has density-dependent impacts on that life history. PSPMs are therefore particularly useful to analyse how particular mechanisms or aspects of the life history or ecology of an individual would affect the population and community dynamics. The downside of PSPMs, however, is their limited mathematical tractability. The simplest PSPMs that only account for population size-structure can be formulated using partial differential equations (de Roos, 1997; Metz & Diekmann, 1986), but in general PSPMs are more appropriately couched in terms of coupled systems of nonlinear renewal equations and differential delay equations (Diekmann et al., 2007). Recently, numerical methodology (Kirkilionis et al., 2001; Sánchez-Sanz & Getto, 2016) was developed that allows for analysing ecological and evolutionary dynamics of even fairly complicated PSPMs (ten Brink et al., 2019; Chaparro-Pedraza & de Roos, 2020; Hin & de Roos, 2019) without bothering about these complicated population-level model formulations.
This paper introduces the package PSPManalysis implementing this methodology. Because of their formulation in terms of partial differential equations PSPMs are often analysed using numerical simulations of the dynamics at particular parameter values for specific initial conditions. PSPManalysis implements functions for such simulations of ecological dynamics using the Escalator Boxcar Train (EBT) method (de Roos, 1997;de Roos et al., 1992). More importantly, however, PSPManalysis uses the theory on bifurcations in nonlinear dynamical systems (Kuznetsov, 1998) for model analysis. Bifurcation analysis provides more complete and comprehensive understanding of model dynamics because the results are much less dependent on particular parameter values or initial conditions (Kuznetsov, 1998).
| OVERVIEW OF SOFTWARE
For that purpose, PSPManalysis includes functions to compute demographic quantities, such as population growth and stable population states (de Roos, 2008), as well as equilibrium population states (Diekmann et al., 2003) of PSPMs. These models can be of arbitrary complexity with individuals characterized by multiple variables (traits) and multiple environmental variables, such as resource and predator densities, influencing their life history. PSPManalysis also uses curve continuation techniques (Sánchez-Sanz & Getto, 2016) to compute these equilibrium states over ranges of model parameters and detects the so-called bifurcation points, at which a qualitative change in model dynamics occurs (Kuznetsov, 1998). In addition, PSPManalysis allows for an evolutionary analysis of the computed ecological steady states based on the framework of 'Adaptive Dynamics' (Dieckmann & Law, 1996;Metz et al., 1996).
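The parameter-continuation idea that PSPManalysis builds on can be illustrated with a toy one-dimensional model. This is a deliberately simplified Python sketch, not the package's actual algorithm (which computes PSPM steady states and uses more robust continuation techniques): the equilibrium found at one parameter value seeds the Newton solve at the next, which is how a solution branch is traced over a parameter range.

```python
def newton(f, df, x0, tol=1e-12, itmax=50):
    """Newton iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(itmax):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Natural-parameter continuation of the nontrivial equilibrium branch of
# dx/dt = p*x - x**3 (branch x* = sqrt(p) for p > 0): the solution at each
# parameter value is used as the starting point for the next value.
branch = []
x = 1.0
for i in range(1, 101):
    p = i * 0.01
    x = newton(lambda y: p * y - y ** 3, lambda y: p - 3 * y ** 2, x)
    branch.append((p, x))
```

Detecting bifurcation points then amounts to monitoring test functions along the branch (e.g., where an equilibrium or an eigenvalue crosses zero), which PSPManalysis does internally.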
PSPManalysis thus allows for more extensive analysis of a substantially larger class of PSPMs than other packages geared at structured population models such as Mizer (Scott et al., 2014) and plant (Falster et al., 2015) for dynamic simulations of size-spectrum models and size-structured plant populations, respectively, or IPMpack (Metcalf et al., 2013) for demographic analysis of integral projection models.
The main computational routines of the PSPManalysis package are implemented in C for performance reasons with R functions (R Core Team, 2020) providing the interface to these routines. Finally, the package includes a very detailed manual, which discusses the full functionality of the package and illustrates its use with step-by-step instructions.
| ANALYSIS OF AN EXAMPLE MODEL
The life-history model described in Chaparro-Pedraza and de Roos (2020) is used here to illustrate the functionality of the PSPManalysis package. This model is based on the life history of salmon, with individuals starting life in a nursery habitat, where they are protected from predation but compete for a shared resource X. Individuals subsequently shift to a growing habitat with ad libitum food and negligible competition, but where they experience predation mortality. All model equations are presented in Table 1, while Table S1 lists default parameter values.
Individuals are characterized by their age a as well as their length l, and the focal population (from here on called the 'consumer') is therefore age- and length-structured. Migration to the growing habitat and maturation occur on reaching lengths l = l_s and l = l_m, respectively. Resource feeding, growth in body size and reproduction are linked through a dynamic energy budget (DEB) model (Chaparro-Pedraza & de Roos, 2020). The DEB model predicts individuals to grow following a von Bertalanffy growth curve with ultimate body length equal to l_∞X/(K + X) and l_∞ in the nursery and growth habitat, respectively (Table 1). Reproduction occurs after migration to the growth habitat, since l_s < l_m, while fecundity is proportional to squared individual length.
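The food-dependent von Bertalanffy growth trajectory described above can be sketched as follows; the parameter values in the example are placeholders, not the defaults from Table S1:

```python
import math

def length_at_age(a, xi, l_inf, X=None, K=None):
    """Von Bertalanffy length at age a (growth from length 0 at birth) with
    growth rate xi. In the nursery habitat the asymptote l_inf is scaled by
    the functional response X/(K + X), as in the text; without X and K the
    ad-libitum asymptote l_inf applies (growth habitat)."""
    l_asym = l_inf if X is None else l_inf * X / (K + X)
    return l_asym * (1.0 - math.exp(-xi * a))

# Placeholder values: xi = 0.01/day, l_inf = 60 cm, resource at half
# saturation (X = K), so the nursery asymptote is 30 cm
l30 = length_at_age(30.0, 0.01, 60.0, X=1.0, K=1.0)
```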
Mortality in the nursery habitat is constant, but in the growth habitat it decreases with body size (Table 1). In contrast to Chaparro-Pedraza and de Roos (2020), size-dependent mortality is assumed proportional to the density of predators, which prey on consumers following a linear functional response and experience a mortality rate μ_p. The scaled predator density P incorporates the conversion efficiency between ingested consumer biomass and the predator's numerical response B (Table 1). The contribution of consumer individuals to predator intake equals the product of their vulnerability to predation (l^−d) and their biomass (l^3). Resource turnover in the nursery habitat follows semi-chemostat growth dynamics.
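The semi-chemostat resource dynamics, dX/dt = ρ(X_max − X) minus total consumer foraging, can be sketched with a forward-Euler step. This is a toy illustration only: PSPManalysis itself solves the coupled structured system, and the foraging argument here is a placeholder aggregate rather than the model's integral over the consumer population.

```python
def resource_step(X, rho, X_max, foraging, dt):
    """One forward-Euler step of semi-chemostat resource dynamics:
    dX/dt = rho * (X_max - X) - foraging."""
    return X + dt * (rho * (X_max - X) - foraging)

# Without consumers (foraging = 0) the resource relaxes toward X_max:
X = 0.0
for _ in range(100000):  # integrate to t = 1000 days with dt = 0.01
    X = resource_step(X, rho=0.1, X_max=2.0, foraging=0.0, dt=0.01)
# X is now very close to X_max = 2.0
```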
The interaction between resource, structured consumer and predator is fully determined by seven functions (Table 1): three life-history functions, describing development, reproduction and mortality, two functions describing consumers' impact on their environment through resource foraging and contribution to predator intake and two functions determining resource and predator dynamics in the nursery and growth habitat respectively. The Supporting Information (Section 2) shows how to implement these functions in an R script for analysis with PSPManalysis (implementation in C is possible and in fact preferable as it speeds up computations by a factor 40-50).
The function PSPMequi, which is the main R function in the PSPManalysis package, uses the methodology of Kirkilionis et al. (2001) and Sánchez-Sanz and Getto (2016) (Supporting Information, Section 5) to compute steady states of PSPMs as a function of a model parameter, called the bifurcation parameter. Figure 1, which was produced from output of PSPMequi using basic plotting commands in R, shows the results of such computations as a function of maximum resource density X max for the example model in Table 1. Three computational steps generated the data for Figure 1.
The R commands for these computational steps are discussed below (see Supporting Information, Section 3, for a more detailed presentation). After installing and loading the PSPManalysis package (see Data Availability Statement below), running the command demo("Salmon", package = "PSPManalysis", echo = FALSE) illustrates the generation of all figures in a step-by-step manner and thus supports the following presentation.
For low maximum resource densities, the example model only allows a steady state with zero consumer and predator density and the resource equal to its maximum density (X = X max), because consumers do not encounter sufficient food to mature (maturation length equals l ∞ X/(K + X)). Starting in this resource-only equilibrium for X max = 0.1, the following call to PSPMequi was used to compute resource-only equilibria for increasing values of X max:

EqR <- PSPMequi(modelname = "Salmon.R", biftype = "EQ", startpoint = c(0.1, 0.1), stepsize = 0.5, parbnds = c(1, 0.01, 10), options = c("popZE", "0", "envZE", "1"))

This call resulted in the thin curve section with increasing densities X at low values of X max in Figure 1. The two elements "popZE" and "0" in the options argument of the first command instruct the function to assume a zero equilibrium density for the structured population with index 0 (the consumer; because PSPManalysis is written in C, the first vector element has index 0 rather than 1), and the two elements "envZE" and "1" do the same for the environmental variable with index 1 (the second vector element, the unstructured predator).

Table 1. Life-history functions of the model of Chaparro-Pedraza and de Roos (2020). [Table layout lost in extraction; the table specifies three life-history functions (development, reproduction, mortality), two impact functions (resource foraging in the nursery habitat, contribution to the predator numerical response) and two functions for environmental dynamics, with parameters: P, (scaled) predator density; B (day −1), predator numerical response; ξ (day −1), von Bertalanffy growth rate; l s and l m (cm), body size at habitat shift and maturation; l ∞ (cm), maximum body size at maximum feeding; K (g/m 3), half-saturation resource density; B max (cm −2/day) and I max (g cm −2 day −1), fecundity and ingestion proportionality constants; μ 1 and μ 2 (day −1), mortality rates in habitats 1 and 2; d (−), exponent in size-dependent predation; ϕ (cm d m 3/day) and μ p (day −1), predator attack and mortality rate; ρ (day −1) and X max (g/m 3), turnover rate and maximum density of resource. See Table S1 for full details.]
Indicating that these two populations have zero equilibrium density simplifies and speeds up computations. Because these two populations have zero density, the argument startpoint contains only two values: the value of the bifurcation parameter X max at which to start the computations and an estimate for the equilibrium resource density in this state. This computational step is only useful because PSPMequi detects a bifurcation point along the curve, labelling it with the string 'BP #0' (Figure 1) to indicate that it represents a branching (or transcritical bifurcation) point (Kuznetsov, 1998). A second computational step (see Supporting Information, Section 3) continued the curve of consumer-resource equilibria from this branching point, storing its output in EqCR; along this curve PSPMequi detects a further branching point, labelled 'BPE #1' (Figure 1). The third step used this point, which is returned in the output list element EqCR$bifpoints of the previous step, as starting point to compute steady states with positive predator density as a function of X max:

EqPCR <- PSPMequi(modelname = "Salmon.R", biftype = "EQ", startpoint = EqCR$bifpoints[1,1:4], stepsize = -0.5, parbnds = c(1, 0.0, 10))

The computation of this curve starts off to lower values of X max (notice the negative stepsize argument) as otherwise negative predator densities would result. The result of this computation is a folded curve that extends to a minimum at X max ≈ 4. The function PSPMequi labels this minimum as 'LP' (Figure 1), thus classifying it as a limit (or saddle-node bifurcation) point (Kuznetsov, 1998).
Ecologically, this minimum X max value represents the persistence boundary of the predator, whereas the branching point labelled 'BPE #1' represents its invasion boundary. General bifurcation theory (Kuznetsov, 1998) stipulates that the curve section connecting these two bifurcation points represents unstable steady states or saddle points. PSPMequi also allows for computing the location of the predator's invasion and persistence boundary (labelled 'BPE #1' and 'LP', respectively, in Figure 1) dependent on a second model parameter. Figure 2 illustrates this for the maximum resource density X max and predator mortality rate μ p . For parameter combinations between the invasion ('BPE #1') and persistence boundary ('LP') in Figure 2, two potentially stable steady states occur, a tritrophic steady state with predators and a consumer-resource steady state that predators cannot invade.
The PSPManalysis package also allows for calculating evolutionarily singular strategies using the framework of Adaptive Dynamics (Dieckmann & Law, 1996; Metz et al., 1996) and thus permits studying the evolution of life-history traits in a population and community context. While computing equilibrium curves as a function of a life-history parameter, the function PSPMequi can produce as output the selection gradient on this parameter, which equals the derivative of the lifetime reproductive output R 0 with respect to the life-history parameter (Diekmann et al., 2003). Figure 3 shows the equilibrium curves of the example model dependent on the length at habitat shift l s, including its selection gradient (see Supporting Information, Section 3, for details about the R commands for these computations). In the absence of predators, smaller sizes at habitat shift are selected, but with predators present PSPMequi detects an evolutionarily singular state (Brännström et al., 2013), which it labels as 'CSS #0' on the basis of second-order derivatives of R 0 with respect to the length at habitat shift l s (Geritz et al., 1998). The label indicates that this singular state is convergence stable, such that the value of l s will evolve towards this CSS value, while after fixation mutants with slightly different values of l s cannot invade.

Figure 2. Location of the bifurcation points labelled 'BPE #1' and 'LP' in Figure 1 dependent on maximum resource density X max and predator mortality rate μ p. Otherwise default parameter values (Table S1). See Supporting Information (Section 3) for a detailed discussion of the R commands to compute the data for this figure.
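The selection-gradient logic described here can be mimicked numerically. The sketch below is a generic Adaptive Dynamics toy with an invented invasion-fitness function (not the R 0 of the example model): it locates a singular strategy where the selection gradient vanishes and classifies it via second derivatives in the spirit of Geritz et al. (1998).

```python
def selection_gradient(s, r, h=1e-5):
    """d s(m, r)/dm evaluated at m = r, by central difference."""
    return (s(r + h, r) - s(r - h, r)) / (2.0 * h)

def classify_singularity(s, r_star, h=1e-4):
    """Second-order classification: uninvadable if d2s/dm2 < 0 at the
    singular point, convergence stable if the gradient decreases in r;
    both together give a CSS."""
    s_mm = (s(r_star + h, r_star) - 2.0 * s(r_star, r_star)
            + s(r_star - h, r_star)) / h ** 2
    dgrad = (selection_gradient(s, r_star + h)
             - selection_gradient(s, r_star - h)) / (2.0 * h)
    return {"uninvadable": s_mm < 0, "convergence_stable": dgrad < 0,
            "CSS": s_mm < 0 and dgrad < 0}

# Toy invasion fitness with a singular strategy at resident trait r = 0.6
s = lambda m, r: (m - r) * (1.2 - m - r)

# Bisection on the selection gradient locates the singular strategy
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if selection_gradient(s, mid) > 0:
        lo = mid
    else:
        hi = mid
r_star = 0.5 * (lo + hi)
verdict = classify_singularity(s, r_star)
```

For this toy fitness the singular point is both uninvadable and convergence stable, so the classifier reports a CSS, mirroring the 'CSS #0' label assigned by PSPMequi.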
Starting from this CSS, the function PSPMequi can also compute the curve with zero mutant fitness dependent on both resident and mutant life-history trait value, which corresponds to the boundary separating regions with positive and negative mutant fitness in the pairwise invasibility plot (or PIP, van Tienderen & de Jong, 1986) shown in Figure 4 (left).
Finally, the evolutionary dynamics of life-history parameters, as described by the canonical equation of Adaptive Dynamics (Dieckmann & Law, 1996), can be simulated with the function PSPMevodyn in the PSPManalysis package (Figure 4, right). The trajectory of the length at habitat shift l s over evolutionary time shown in Figure 4 (right) confirms that length at habitat shift evolves to its CSS value, but since the function PSPMevodyn cannot simulate combined mutant and resident dynamics, it cannot verify whether evolutionary branching occurs at this evolutionarily singular state.

Figure 3. Equilibrium densities of predator (top) and consumers (middle) in the nursery (blue) and growth habitat (red) dependent on the length at habitat shift, plus the selection gradient (bottom) on this parameter. Solid and dashed lines represent possibly stable equilibria and saddle points, respectively. Curve sections representing consumer-resource steady states that can be invaded by predators are omitted for clarity. See text for details about the bifurcation points labelled 'BPE #1', 'LP' and 'CSS #0'. Otherwise default parameter values (Table S1). See Supporting Information (Section 3) for a detailed discussion of the R commands to compute the data for this figure.
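Simulating the canonical equation amounts to following the selection gradient through evolutionary time. A minimal Python sketch with a toy linear gradient, a hypothetical rate constant and plain Euler stepping (not the actual PSPMevodyn machinery):

```python
def evolve_trait(gradient, trait0, k=0.1, dt=0.1, steps=2000):
    """Euler integration of the canonical equation d trait/dt = k * gradient(trait).
    The trait climbs the selection gradient until the gradient vanishes."""
    trait = trait0
    history = [trait]
    for _ in range(steps):
        trait += dt * k * gradient(trait)
        history.append(trait)
    return history

# Toy selection gradient vanishing at a CSS value of 0.6
gradient = lambda t: 1.2 - 2.0 * t
trajectory = evolve_trait(gradient, trait0=0.1)
```

Because the toy singular point is convergence stable, the trajectory settles on the CSS value, just as the length at habitat shift does in Figure 4 (right).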
Discussion
The methodology for analysing PSPMs provided by PSPManalysis has previously been used to investigate how ontogeny affects ecological dynamics of size-structured communities (de Roos & Persson, 2013) and, more recently, to study evolution of metamorphosis (ten Brink et al., 2019), cannibalism (Hin & de Roos, 2019) and timing of habitat shifts (Chaparro-Pedraza & de Roos, 2020).
Given the importance of individual life history for eco-evolutionary dynamics and of environmental feedback on this life history, the methodology is, however, applicable to a wide range of eco-evolutionary questions. This includes questions in ecotoxicology and conservation biology, as PSPManalysis is especially suited to investigate links between the dynamic energy budget (DEB) of individuals and its population consequences (de Roos & Persson, 2013) and DEB models are widely used to assess the consequences of toxicants and changing temperature (Nisbet et al., 2000).
The function PSPMequi is the main component of PSPManalysis and its functionality is more extensive than discussed here.

The author thanks two anonymous reviewers for their helpful comments that greatly improved the paper.
Peer Review
The peer review history for this article is available at https://publons.com/publon/10.1111/2041-210X.13527.
Data Availability Statement
The PSPManalysis package is available on CRAN (https://CRAN.R-project.org/package=PSPManalysis) and can be installed and subsequently loaded using the commands:

install.packages("PSPManalysis")
library(PSPManalysis)

After loading the package, all computations necessary to reproduce the figures presented in this paper can be executed step-by-step via the following demo() command:

demo("Salmon", package = "PSPManalysis", echo = FALSE)

No data have been used in this paper, other than the contents of the PSPManalysis package available on CRAN.
"year": 2021,
"sha1": "35f1c9ef450985f57a3fae5d0609e8f21290f34e",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/2041-210X.13527",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "813e2f4f2d3adb190c591037a94dbe788043d43f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Genome assembly of the chemosynthetic endosymbiont of the hydrothermal vent snail Alviniconcha adamantis from the Mariana Arc
Abstract

Chemosynthetic animal-microbe symbioses sustain hydrothermal vent communities in the global deep sea. In the Indo-Pacific Ocean, hydrothermal ecosystems are often dominated by gastropod species of the genus Alviniconcha, which live in association with chemosynthetic Gammaproteobacteria or Campylobacteria. While the symbiont genomes of most extant Alviniconcha species have been sequenced, no genome information is currently available for the gammaproteobacterial endosymbiont of Alviniconcha adamantis, a comparatively shallow-living species that is thought to be the ancestor to all other present Alviniconcha lineages. Here, we report the first genome sequence for the symbiont of A. adamantis from the Chamorro Seamount at the Mariana Arc. Our phylogenomic analyses show that the A. adamantis symbiont is most closely related to Chromatiaceae endosymbionts of the hydrothermal vent snails Alviniconcha strummeri and Chrysomallon squamiferum, but represents a distinct bacterial species or possibly genus. Overall, the functional capacity of the A. adamantis symbiont appeared to be similar to other chemosynthetic Gammaproteobacteria, though several flagella and chemotaxis genes were detected, which are absent in other gammaproteobacterial Alviniconcha symbionts. These differences might suggest potential contrasts in symbiont transmission dynamics, host recognition, or nutrient transfer. Furthermore, an abundance of genes for ammonia transport and urea usage could indicate adaptations to the oligotrophic waters of the Mariana region, possibly via recycling of host- and environment-derived nitrogenous waste products. This genome assembly adds to the growing genomic resources for chemosynthetic bacteria from hydrothermal vents and will be valuable for future comparative genomic analyses assessing gene content evolution in relation to environment and symbiotic lifestyles.
Introduction
While most areas of the deep sea depend on sinking organic particles originating from photosynthetic primary production at the ocean's surface, ecosystems around deep-sea hydrothermal vents are fueled by the biochemical processes carried out by chemosynthetic microbes. These organisms are typically chemolitho- or chemoorganotrophic Gammaproteobacteria or Campylobacteria that oxidize reduced hydrothermal fluid compounds, such as sulfide, hydrogen, or methane, to generate energy for carbon fixation (Sogin et al. 2020, 2021). Many chemosynthetic microbes are known to form symbiotic relationships with vent-associated invertebrate animals, thereby supplying these hosts with the bulk of their nutritional requirements and leading to the high animal biomass that is characteristic of hydrothermal vent communities (Dubilier et al. 2008; Sogin et al. 2020, 2021).
A diversity of chemosynthetic symbioses has been discovered and described, including that of the hydrothermal vent snail Alviniconcha (Suzuki et al. 2006; Johnson et al. 2015; Breusing, Johnson et al. 2020; Breusing, Castel et al. 2022), a genus of endangered foundation fauna found at hydrothermal vents across the Western Pacific and Indian oceans (https://www.iucnredlist.org; last accessed: August 27, 2022). Most Alviniconcha species foster symbiotic associations with chemosynthetic Gammaproteobacteria that are assumed to be environmentally acquired and reside intracellularly within the snail's gill tissue (Suzuki et al. 2006; Breusing, Castel et al. 2022). Previous genome reports and physiological experiments have shown that Alviniconcha symbionts primarily use reduced sulfur compounds and, in some cases, hydrogen as energy sources for their chemosynthetic metabolism (Beinart et al. 2015; Miyazaki et al. 2020; Breusing, Mitchell et al. 2020), while likely additionally synthesizing essential amino acids for their hosts (Beinart et al. 2019).
With the exception of Alviniconcha adamantis, the dominant endosymbiont genomes of all known Alviniconcha species have been sequenced (Beinart et al. 2019; Trembath-Reichert et al. 2019; Yang et al. 2020; Breusing, Genetti et al. 2022; Hauer et al. 2022). Alviniconcha adamantis is endemic to the Mariana Arc, where it inhabits relatively shallow seamounts in contrast to its deeper living congeners. Due to its basal (though uncertain) phylogenetic position, recent studies have hypothesized that A. adamantis might be the ancestor to all other extant Alviniconcha species, supporting an evolutionary transition from shallow to deep water vent sites (Breusing, Johnson et al. 2020). How the distinct ecological niche of A. adamantis might have shaped gene content and functional potential of its gammaproteobacterial symbiont is currently unknown. Understanding symbiont metabolic capacity can help us infer fundamental characteristics of hydrothermal vent ecology and evolution, giving us insights into how chemosynthetic microbes interact with and adapt to their biogeochemical environment.
In this study, we sequenced a draft genome of the endosymbiont of A. adamantis from the Mariana Arc. Using comparative genomic and phylogenomic analyses, we determined its phylogenetic placement with respect to other chemosynthetic Gammaproteobacteria and compared its metabolic potential with that of related vent-associated symbionts.
Comparative genomics and phylogenomics
A phylogeny of the A. adamantis symbiont and representatives of other chemosynthetic Gammaproteobacteria (Supplementary Table 2) was constructed with IQ-TREE v2.0.6 (Minh et al. 2020) based on an amino acid alignment of concatenated single-copy core genes in the Anvi'o "Bacteria_71" collection (Eren et al. 2015). Phylogenomic trees were inferred from 5 independent runs based on a gene-wise best-fit partition model identified with ModelFinder using the relaxed hierarchical clustering method (Lanfear et al. 2014). Branch support was calculated via ultrafast bootstrapping and Shimodaira-Hasegawa-like approximate likelihood ratio tests, resampling partitions, and sites within resampled partitions 1,000 times. Bootstrap trees were optimized through a hill-climbing nearest neighbor interchange search to minimize the effect of model violations. The free-living SUP05 bacterium Ca. Pseudothioglobus singularis was used as outgroup for tree rooting. The best maximum likelihood tree was displayed and polished with FigTree v1.4.4 (http://tree.bio.ed.ac.uk/software/figtree/; last accessed: August 27, 2022). Gene content differences among the A. adamantis symbiont and related Gammaproteobacteria were assessed in Anvi'o by determining the presence and completeness of metabolic pathways via the "anvi-run-kegg-kofams" and "anvi-estimate-metabolism" programs. Modules were considered as complete when at least 75% of participating genes were found. Core and unique protein-coding genes between the A. adamantis symbiont and closest bacterial relatives were evaluated through the Anvi'o pangenomics workflow. Principal coordinate plots and heatmaps were produced in R v4.1.2 with the ggplot2, ComplexHeatmap, and circlize packages (Gu et al. 2014, 2016; Wickham 2016; R Core Team 2021) and polished in Inkscape v1.0.0b1 (https://inkscape.org; last accessed: August 27, 2022).
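Resampling-based branch support rests on one simple operation: drawing alignment columns with replacement. A hedged Python sketch of a single bootstrap replicate (conceptual only; IQ-TREE's ultrafast bootstrap additionally resamples partitions and sites within partitions, and approximates the tree search; the toy alignment is made up):

```python
import random

def bootstrap_replicate(alignment, rng):
    """Resample the columns of a multiple sequence alignment with
    replacement, keeping the original number of sites."""
    n_sites = len(alignment[0])
    cols = [rng.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[i] for seq in [s])[0] for s in []] or \
           ["".join(seq[i] for i in cols) for seq in alignment]

aln = ["ACGTACGT",   # taxon 1
       "ACGAACGA",   # taxon 2
       "TCGTACGG"]   # taxon 3
rep = bootstrap_replicate(aln, random.Random(42))
```

A tree is inferred for each such replicate, and the fraction of replicate trees containing a given clade becomes that clade's support value.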
Overview of the genome assembly
The A. adamantis symbiont draft genome consists of 427 scaffolds comprising an approximate total size of 3.3 Mb, an N50 value of 16,689 bp, and a GC content of 62.04%, with an average coverage of 931× (Table 1). Functional annotation analyses predicted 3,821 protein-coding genes, 2 rRNAs and 45 tRNAs, with 833 (21.54%) genes having no designated function (Table 1, Supplementary Table 3). About 11.63% of the genome consisted of intergenic regions. Based on Gammaproteobacteria-specific marker genes, the genome assembly is 98.88% complete with 2.06% contamination and 16.67% strain heterogeneity (Table 1). Read mapping against the A. adamantis symbiont genome recovered 198 variant sites based on FreeBayes but 24,332 variant sites based on LoFreq, which translates into a variant density of 7.44 variants/kbp. Given that LoFreq is optimized for detecting low-frequency variants, the discrepancy between the 2 programs suggests that the symbiont population within A. adamantis individuals likely consists of one dominant strain (in agreement with Breusing, Castel et al. 2022) as well as several low abundance strains that are only detectable with more sensitive methods.
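The reported variant density follows directly from the variant count and assembly length. A small sketch of the arithmetic (the 24,332 LoFreq sites over an effective length of roughly 3.27 Mb reproduce the stated 7.44 variants/kbp; the exact length used by the authors is an assumption here, as the text gives only "approximately 3.3 Mb"):

```python
def variants_per_kbp(n_variants, genome_bp):
    """Variant density in variants per kilobase pair."""
    return n_variants / (genome_bp / 1000.0)

# LoFreq call set over an assumed effective genome length of ~3.27 Mb
density_lofreq = variants_per_kbp(24_332, 3_270_000)   # ~7.44 variants/kbp
density_freebayes = variants_per_kbp(198, 3_270_000)   # far lower
```

The two orders of magnitude between the FreeBayes and LoFreq densities is what motivates the one-dominant-strain interpretation above.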
Comparative genomics and phylogenomics
Phylogenomic analyses and taxonomic assignment indicated that the A. adamantis symbiont represents a sister taxon to the Chromatiaceae endosymbionts of the hydrothermal vent snails Chrysomallon squamiferum (from the Indian Ocean) and Alviniconcha strummeri ("GammaLau," from the Lau Basin; Fig. 2, Supplementary Fig. 1), despite the fact that these symbionts and their hosts inhabit distant biogeographic provinces (Fig. 1). The A. adamantis symbiont shared on average 76.75% and 77.88% nucleotide identity with the A. strummeri and C. squamiferum symbionts, respectively, whereas the latter 2 taxa were less divergent, comprising an average nucleotide identity of 89.02%. The present genome similarities indicate that all 3 symbionts are representatives of distinct bacterial species (Konstantinidis and Tiedje 2005), with the A. adamantis symbiont possibly representing a different genus. All symbionts shared 1,325 core protein-coding gene clusters, while the A. adamantis symbiont contained approximately the same number of accessory gene clusters (1,332; Fig. 2, Supplementary Table 3), in accordance with the observed genomic divergence. Core genes were mostly associated with translation, energy production, and amino acid, cofactor, and cell wall metabolism, whereas accessory genes were predominantly involved in signal transduction, replication, mobilome, and defense mechanisms or had unknown functions (Supplementary Table 3). Interestingly, the phylogenetic affiliations among these taxa were not exactly mirrored in representations of functional potential, given that the A. adamantis and C. squamiferum symbionts were more similar in metabolic pathways than either of these species to the A. strummeri symbiont (Fig. 3, Supplementary Fig. 2). Overall, the A. adamantis and C. squamiferum symbionts exhibited functional proximity (i.e. overlap in gene content and metabolic pathways) to other provannid snail, tubeworm, and Solemya clam symbionts, while the A. strummeri symbiont showed higher affinity to bacteria of the SUP05 group (Fig. 3, Supplementary Fig. 2).
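Average nucleotide identity (ANI) underpins these species assignments. A toy sketch of the calculation (real ANI pipelines fragment the genomes and align them with BLAST or MUMmer; this simplified version assumes pre-aligned orthologous fragments, and the ~95% species cutoff follows Konstantinidis and Tiedje 2005):

```python
def fragment_identity(a, b):
    """Percent identity of two equal-length aligned fragments."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

def average_nucleotide_identity(fragment_pairs):
    """Mean percent identity over aligned orthologous fragments (toy ANI)."""
    values = [fragment_identity(a, b) for a, b in fragment_pairs]
    return sum(values) / len(values)

def same_species(ani_value, threshold=95.0):
    """Genome pairs above ~95% ANI are typically considered conspecific."""
    return ani_value >= threshold

# Hypothetical aligned fragments from two genomes
pairs = [("ACGTACGT", "ACGTACGA"),   # 7/8 identical
         ("TTGCAATC", "TTGCAATC")]   # identical
ani = average_nucleotide_identity(pairs)   # (87.5 + 100) / 2 = 93.75
```

By this criterion the observed pairwise identities of 76.75-89.02% fall well below the species threshold, consistent with the conclusion that the three symbionts are distinct species.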
Chemoautotrophic and heterotrophic metabolism
Both hydrogen sulfide and thiosulfate oxidation pathways were detected within the A. adamantis symbiont genome (Supplementary Tables 3 and 4). Oxidation of hydrogen sulfide is likely facilitated through type I and type VI sulfide:quinone oxidoreductases (sqr) and a flavocytochrome c-sulfide dehydrogenase (fccAB), which are hypothesized to be used for growth in habitats with variable sulfide concentrations (Han and Perner 2016; Beinart et al. 2019; Breusing, Mitchell et al. 2020). Typical for chemosynthetic Gammaproteobacteria (Nakagawa and Takai 2008; Gregersen et al. 2011), the thiosulfate-oxidizing Sox multienzyme complex (soxXYZABC) without a complete soxCD subunit was encoded, which likely promotes oxidation of sulfur compounds to elemental sulfur as energy storage in the periplasm (Grimm et al. 2008; Ghosh and Dam 2009). Likewise, we observed genes for the reverse dissimilatory sulfite reductase associated pathway, which catalyzes the oxidation of sulfide to sulfate via sulfite and adenylylphosphosulfate (Nakagawa and Takai 2008) and is characteristic for gammaproteobacterial sulfur-oxidizers (Gregersen et al. 2011). An alternative pathway for sulfite metabolization might be performed by sulfite dehydrogenase (soeABC). Apart from potential for sulfur oxidation, the A. adamantis symbiont genome showed capacity for the usage of hydrogen as electron donor for chemosynthesis (Supplementary Table 3). We found evidence for the presence of 2 uptake Ni/Fe hydrogenases, an O2-tolerant hydrogenase of type 1d (gene caller ID: 3368) and an O2-sensitive hydrogenase of type 1e (gene caller ID: 165, 166), which are likely employed for growth under aerobic and anaerobic conditions, respectively. The expression and formation of these primary hydrogenases might be regulated by a sensory Group 2b Ni/Fe hydrogenase (gene caller ID: 3354).
As in other chemosynthetic Gammaproteobacteria (Hügler and Sievert 2011), the energy generated through hydrogen or sulfur oxidation is likely transferred to Form II RuBisCO (cbbM) for carbon assimilation via the Calvin-Benson-Bassham cycle, which was the only complete carbon fixation pathway found in the A. adamantis symbiont genome (Supplementary Tables 3 and 4).
Similar to what has been reported from other Alviniconcha symbionts, there is evidence that the A. adamantis symbiont has the potential for heterotrophic metabolism. We found several transporters for the uptake of 4-carbon compounds (TRAP transport system), sugars (phosphotransferase system), lipids, amino acids, and urea in the genome of the A. adamantis symbiont. In addition, genes for the utilization of glycolate (glycolate oxidase), urea (urease), glycogen (glycogen phosphorylase), and formate (formate hydrogenlyase) were observed.
Respiration
The A. adamantis symbiont genome encodes pathways for both aerobic and anaerobic respiration. A full set of genes of the aerobic respiratory chain was detected, including NADH-quinone oxidoreductase, succinate dehydrogenase, cytochrome bc1 complex, cytochrome cbb3-type oxidase, and an F-type ATPase (Supplementary Tables 3 and 4). In addition, subunits I, II, and X of a terminal cytochrome bd-I ubiquinol oxidase were found, which is thought to be used for aerobic respiration under microaerophilic conditions (Borisov et al. 2011; Beinart et al. 2019). The symbiont's capacity to express different respiratory enzymes might be an adaptation to deal with fluctuating oxygen concentrations at hydrothermal vents and to remedy interference with host respiration (Beinart et al. 2019). Under complete anoxia, the A. adamantis symbiont appears to be able to switch to multiple electron acceptors other than oxygen. For example, nitrate respiration is likely supported by the presence of complete pathways for denitrification as well as dissimilatory nitrate reduction (Supplementary Tables 3 and 4). Furthermore, respiration of hydrogen and dimethyl sulfoxide seems possible through genes coding for formate hydrogenlyase and anaerobic dimethyl sulfoxide reductase.
Nitrogen assimilation
The A. adamantis symbiont appears to be able to use multiple nitrogen sources for the incorporation of nitrogen into biomass. For example, we detected several genes for ammonia transporters and urease in the A. adamantis symbiont genome (Supplementary Table 3), which should allow direct uptake of ammonia from the environment or host and disintegration of urea into 2 ammonia molecules. Ammonia would subsequently be available for conversion into glutamine by glutamine synthetase and further incorporation into glutamate by NADPH-dependent glutamate synthase (GOGAT). Interestingly, the KEGG/COG annotation pipeline failed to recover genes for assimilatory nitrate reductase (nasA), which is present in other provannid symbionts (Beinart et al. 2019). This finding is likely an artifact of the annotation database or gene prediction program, as further searches via RAST-Tk (Brettin et al. 2015) indicated the presence of nasA in the genome of the A. adamantis symbiont. Nevertheless, given the oligotrophic nature of the Mariana region (Morel et al. 2010), the abundance of genes for ammonia transport and urea catabolism in the genome of the A. adamantis symbiont could suggest scavenging of host and environmental waste products in adaptation to limited nutrient availability at the Chamorro Seamount.
Amino acid and cofactor biosynthesis
In addition to the synthesis of glutamine and glutamate, the A. adamantis symbiont has the potential for the generation of 13 other amino acids, including the essential amino acids histidine, isoleucine, leucine, lysine, methionine, threonine, tryptophan, and valine, which are critical for host nutrition (Supplementary Table 4). Pathways for the biosynthesis of cysteine, glycine, phenylalanine, serine, and tyrosine appeared incomplete, which might suggest reliance of the symbiont on environmental provisioning of these amino acids or could be indicative of artifacts in the assembly or functional annotations. For example, the terminal enzyme for serine biosynthesis, phosphoserine phosphatase (serB), was missing from the KEGG pathway predictions, but was present in the COG annotations. This could imply that the A. adamantis specific gene is too divergent from reference sequences in the KEGG database to be correctly annotated and that this symbiont is actually able to synthesize serine.
Apart from essential amino acid biosynthesis, pathways for the generation of diverse enzyme cofactors were observed in the A. adamantis symbiont genome. Based on KEGG metabolic reconstructions, the A. adamantis symbiont has the potential to de novo synthesize NAD, heme, siroheme, ubiquinone, molybdenum, lipoic acid and the vitamins biotin, thiamine, folate, and riboflavin (Supplementary Table 4). By contrast, conventional pathways for the biosynthesis of cobalamin, pantothenate, pyridoxal-5′-phosphate, ascorbate, and phylloquinone appeared incomplete, but might in some cases be substituted by alternative routes. For example, the lack of 2-dehydropantoate-2-reductase for the conversion of 2-dehydropantoate to (R)-pantoate might be compensated by ketol-acid reductoisomerase (ilvC) (Merkamm et al. 2003), thereby allowing autonomous generation of pantothenate and coenzyme A. In the absence of complete biosynthetic pathways, the respective cofactors will have to be acquired from an environmental source, given that several vitamin-dependent enzymes, such as cobalamin-dependent methionine synthase (metH) and pyridoxal-5′-phosphate-dependent cysteine-S-conjugate beta-lyase, were encoded in the A. adamantis symbiont genome.
Host-symbiont interactions
Aside from chemosynthesis genes, the genome of the A. adamantis symbiont encodes multiple loci that are likely relevant for interactions with its host, including genes for flagella (motAB, flgABC, flgJKLMN, flgZ, fliA, fliCDEFGHIJKLNMOPQRST), pili (pilABC, pilEFGHIJ, pilMNOPQ, pilSTUVW, pilZ, fimT, fimV, cpaBC, cpaF, tadBCD, tadG), chemotaxis (MCP, cheAB, cheD, cheR, cheVW, cheYZ), toxin-antitoxin and 2-component systems (e.g. fitAB, higAB, vapBC, algRZ) as well as outer membrane porins (ompA-F; Supplementary Table 3). The discovery of flagella genes in the A. adamantis symbiont genome is surprising as these genes are typically abundant in campylobacterial, but not gammaproteobacterial Alviniconcha symbiont genomes (Beinart et al. 2019), though are observed in some other symbiotic Gammaproteobacteria, including those of tubeworms and mussels (Robidart et al. 2008; Egas et al. 2012; Gardebrecht et al. 2012; De Oliveira et al. 2022). The presence of flagella-encoding loci could suggest that the biology of the A. adamantis symbiosis is markedly different from other gammaproteobacterial associations in Alviniconcha and has closer resemblance to Campylobacteria-dominated systems, where flagella have been implicated in host specificity, nutrient transfer and/or continuous symbiont transmission (Sanders et al. 2013). Host specificity might further be promoted by outer membrane porins, which have been shown to play a role in host recognition in both terrestrial and aquatic symbioses (Weiss et al. 2008; Nyholm et al. 2009; Zvi-Kedem et al. 2021). Host colonization and subsequent maintenance of the intrahost symbiont population involves a delicate interplay between host and symbiont molecular factors. Many of the detected toxin-antitoxin and 2-component systems are known to be important for virulence regulation, host invasion, and intracellular growth control in a variety of pathogenic bacteria (Lobato-Márquez et al. 2016), which could indicate that the A. adamantis symbiont employs comparable strategies for beneficial interactions with its hosts, similar to what has been proposed for mutualistic symbionts of deep-sea mussels (Sayavedra et al. 2015).
Conclusions
Using a combination of Illumina and Nanopore sequencing at an average coverage of 931×, in this study, we generated the first draft endosymbiont genome of the endemic hydrothermal vent snail A. adamantis from the Mariana Arc. The presented genome assembly closes a gap in the genomic resources currently available for symbionts of deep-sea provannid snails and will be useful for further analyses of host-symbiont dynamics and symbiont genome evolution according to host and environmental factors. While gene content of the A. adamantis symbiont appeared overall characteristic of chemosynthetic Gammaproteobacteria and related Alviniconcha symbionts, notable exceptions were observed, in particular, the presence of flagella-encoding loci and an abundance of genes for ammonia transport and urea usage. These differences might suggest specific adaptations to local habitat conditions at the Chamorro Seamount and possible contrasts in host-symbiont interactions relative to other gammaproteobacterial Alviniconcha symbioses. Future physiological and transcriptomic data paired with geochemical measurements will be helpful to address these hypotheses and determine the molecular basis underlying establishment, homeostasis, and niche adaptation of Alviniconcha symbioses at deep-sea hydrothermal vents.
Fig. 1. Sampling location of Alviniconcha adamantis in the Mariana Arc, from which the symbiont genome reported here was isolated. Habitats of other host species with closely related symbionts are shown: A. strummeri in the Lau Basin and Chrysomallon squamiferum on the Central Indian Ridge. The map was produced with the marmap package (Pante and Simon-Bouhet, 2013) in R.
Fig. 2. a) Representative phylogeny of chemosynthetic Gammaproteobacteria for which whole-genome sequences were available (Supplementary Table 2). The A. adamantis symbiont forms a sister clade to the Chromatiaceae symbionts of A. strummeri and C. squamiferum despite the vast geographic distances among the habitats of these species. Numbers on nodes indicate support values from ultrafast bootstrapping and Shimodaira-Hasegawa-like approximate likelihood ratio tests. b) Pangenome of the A. adamantis, A. strummeri, and C. squamiferum symbionts. Symbiont contigs are shown as purple layers, while the number of genes and combined homogeneity indices of gene clusters are shown as blue layers. The homogeneity index is a measure of amino acid sequence similarity within computed gene clusters, with higher values indicating more homogeneous clusters. The 3 symbionts share 1,325 core protein-coding gene clusters (containing 4,167 genes), while approximately the same number of gene clusters is exclusive to the A. adamantis symbiont, in agreement with the genomic and phylogenetic divergence among symbiont species. The matrix on the right shows average nucleotide identities among symbiont genomes from 70% to 100%, with darker grey tones indicating higher identities.
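The identity metrics summarized in Fig. 2b both reduce to per-site comparisons of aligned sequences. The toy sketch below shows that underlying calculation on a single aligned pair; genome-scale ANI tools average such identities over many aligned genomic fragments, and the sequences here are hypothetical examples, not data from this study.

```python
def pairwise_identity(seq1: str, seq2: str) -> float:
    """Percent identity over two aligned, equal-length sequences.

    Gap characters ('-') are counted as mismatches.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b and a != "-" for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Toy example: 7 of 8 aligned sites match.
print(pairwise_identity("ATGCATGC", "ATGAATGC"))  # prints 87.5
```

Whole-genome ANI estimates (the matrix in Fig. 2b) aggregate many such fragment-level identities, which is why values below roughly 70-75% become unreliable and are typically shown as a floor.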
Fig. 3. Completeness of KEGG metabolic pathways in the A. adamantis symbiont compared to its closest bacterial relatives (left) and functional similarity to other chemosynthetic Gammaproteobacteria (right). In contrast to phylogenetic proximity, the A. adamantis and C. squamiferum symbionts are more similar to each other in terms of functional potential than either of these species is to the A. strummeri symbiont.